<a href="https://colab.research.google.com/github/Shailyshaik2021/python/blob/main/PythonCodes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<h1>Welcome to Colab!</h1>
If you're already familiar with Colab, check out this video to learn about interactive tables, the executed code history view, and the command palette.
<center>
<a href="https://www.youtube.com/watch?v=rNgswRZ2C1Y" target="_blank">
<img alt='Thumbnail for a video showing 3 cool Google Colab features'/>
</a>
</center>
374li62O/W/t6+gXBcRoRCa7+PmovzHuhJOLQ3lePzw3JmRlwtJ+L7oECvuqClPmzfuWcGdKYyVOh9GJKxhmdtEUoNjZpC44j0412ae0m+oRo4pThJBgpqADgn3iNcmrrlDsXHGTG+vDFKnJ8Gwq995hMGonBBk4EofYRz/lNf/EsNeo3yGBm1zpTILUqk8fobezUVyhbThzLwXVylzxM9qO5Jr4cwWNYKMlXJ66RC96pXZUQfAa4FsIjBFAMTYr77SAOgEPpT15O2NlYqQGGF+kuW9dTnFC7PCcB0itddBcA+zpO4XyU1WXfRnHPGiT+oUak+d0Blm9ylrjs1Pa2ejb877AdIiXIaV16AdhkNT3XJmJT6cGOx594hQkP1+bB1/nMfuEdB9A0SgAOlFZb4Rx1+WPQB1L3KtwuAtrY0KunJQumJyTolY2HanzFgepMAp6sB2O91KDtPQMqN8/D0kvE1FIeA8QbAWblssRmQnnz0XmUzzxuQ/OIv5iUQyZ25tf8WWfuXLJhrYv9xzRfEB6JSogpf9uSF5Yjojb6Ibg4R/33Nwnm/RHw4AV76Yi8oZb4PznRuS0N4a+deM8Kdye3AhdI3+ES2istbs3qVRSuZW5P24TFQHEDpSTghcaHShSJCt7bMkcg7ZNZ8ngwg6lRKMqgXF/W5z/50ePThhw1AJwNRuDOA4Ut//+fhtOL5eaKGhn4iLWsU37/5+RfD1m3bLes7Az8GUT/E8RcgFbKYjlTSPRsJmCr+OwumLuY7kFIbAtwMTEd7wqCMiCtXLddHEWkuoRhIXbSHG41/1IRTRWfpNEJsvvrXwFPnGGTG0BTkvE68BFCNO02bcDClTgyogDJAOjDYK+ajJ3TCdeoDlOVG6RtEe947Rg734TYS40i1v9h7yvmy5Bwu5bTbJuPxu/sO5QC1qXmeLkFgq48Y4OhA6e04yLLt+5wrPS9AyguPuDskruGE3IkGlOVlwxWrwicev1dc0B1mQLI3x6/4Elkm38X4YvNi/3VXrrXB4WI/g8Z8UvVCaPWiJPxE++XAfJfykZbTlzqY8mICptcpGGLPuwcUcls6hJIb5j2AjRjT4BSDJi5AE0OICx3U+7BKIZ74WGJQOi4deb3yiKInLwmi6kQMYIjEC6QLdcd6vDuqhUa86BwLgJYC0dUynv3+73xO81TdZiqNciDK9QOi+Ix++StfDYcO7qUoeZb24ZFIK+DGIHdS7/iPN79m+xFxHTApSB0ECjhScItfS3OjGY+4Dn4GsDomBlMirfAt5WPmYGoAqoEPOM2p1UepZzAsFEe8ZPES1RPzEulHuQY+XABoTFXyHEA1YaK+n1gVspyog2GpJSALmFYyt1P6ortO1M9XpfmkquTDOjrSYz7bxzTJU79UF5D5j2ppH3M1MKyHZwCqMr0O1m/Ug0wY0HiaKvHexj+O41YB1H0CVAC0GIfqoBkDKeu+zf5zCqQOoB6BhN7lxuuvCZ/86H3hQ3dsssiipBPo8al3SHLMxf0fMW3j+rU5J3+s/ceOnVA467D8BmVUMJCAS7+47nuvXqi5sswTzTNoonbxfuaF5GV3a/69d90aDh44WNJpn3cB0AQA8N0cVj/gmYHBZuMVK8V1jIf39iNuaU4hlaVjsMjJdV6hDvq3uTjWN88xyQb/YgCLtksfm2/uVkWz/dvf/20DPsR07oV7KkfoRZ974YWw7dXnTA9qOlEdh1GJ5CMYL/a+ezS8tm2bAKLfRHoGatwsZ7BfJOqzn3q11fLZlZ6UaTvQhxI1RDnkYKp5kqWqUEhiBKYu3gOkg7LWVwBUOn758sVmOEpaKPwPYDp4ssfuJa2CO5SL9YkuMrkI2vdzOXj70lt3bjXLlVbVCr4dWeXMDzMVFM7adbIjHJM+eDhNpOKce4XmeLLoprRhA1P1EyoeQJznBWPMCIIme3ZJrcL/MajyIQNQd797WJFzPfJOaQhzWhYYhxqL/IUtJFvnAEilR+ELYtxA4kDf1dWj6AsZkG65Lnz88QfCTddfqa+6ixw8IH+1il3ipVyWvHwMbLf2L9d0EOieTigmnH5BJWDuU3opxp1VucC3zFUjqq+T391kUU9+qQAuhqDbb90UliuS5P33j+ZCP90QBIjCza1dszqsXCqdlLgWBvMyOUuT5+DQkWM5tyYQMctFAsRwmCSZIU57fupYf0o5FgBlImAqxZ1iMilHS2Uo/OVPfyL87Cc/ZqI/FvKpDEKufZsA8sc//KYAATlUwCfXI7L4Mwhf2/a2HO1flGFpTw5EdRv2dlM3B6bp684+J/bRBkEDtVJlQAApBJiiI0WMLQemABoE0BGCOTLWHxbJ7au5KZOY2WolXCkOuDGYprts4UDKRhZMAdSkPFk6uFIWA+lY1bCyOVWZ4WlUHKh6TJGrCaCSuAS3pnZ5v3T1DRqQuljPfdIfdJ73E91WrQgwKAFRvQ1WJymbyjO0g0v8c1Cl7VMdPWHPe3qHT3eJ6amxkF3URHCsxQjOdAY5UvSf+mLpTEQg9UrntVj6wg/dfqMs8A8Yd5ZEIMWXknRCXHL5rHNv0f1ptGDtx8l/4xUrTLQ9daozdfJPoofQHwMWF5oIOsD4dNWGNZp1dEnJF8ivk5eQlx/Rd8P69ZI2bgldysKFIRF1DoYgfIOvvVIzlOqjClc6V/HkNYrb3iO3pl4ZhurFhYrdsPsvBqK0AdfUKg4Uf178eDv1nuG07SqAcj3X0tQY7rrl+vCvfvtfhGuuusquoZRRze/Ll3DI7+zeE7799D8Y1wkjMEcBCj2KRHpz++7wwgublbH9kHGSONoLz41b4vHXSD/qYEB7pb6XlMOBjcsCT6ABulD/AXuAKfpGjFCAKro8LPlwpoj4MbihFqhXZvoGuUAtWrBY5+caYHD8jpIlIn6paIYYSKmd3Y5bijlVgJSYfWZN0dXlqrGuRyW3I1Q7xOUrhZ6kj24FSJzQtChDunkc7yHADE6c+4QYRYj1AHqOG2Wf6nBP1D1bIOU8UAyonZq+5b2Dx813vEk+s+RAKAaoM2K1d/GdrymcRa+iRjAgfeTDd4bHH/qQGZCSya2SC539r8nFNBA3rl8bbrx2o2UeIuHGEc3Hnoj9ysBjYv+F7SmMT29s3xWuXL/GxF9Ebl7wUi8s5ewHnDD8fOiu28Mmic84QB89cUrier0G9TzNVgkHyCiosHBT5i7HKm+jYsItM4TUpsRAOLWFijbjHJ3i5hMuFC4sqTPh0LQAAL3vjhvDr//yz2uKlYdzXGi5e+FQ3w8nCoh+59tfUfIW3JPqZPVt03Qiu8KLP9msLPZHcwBqxmRdji7RfnCaDPaYKOKX/Iv2qBBASKKcEid73+sGJsGQgSmuQXDmDqZZZ32AdFCuT/Ua+HMkKSxQwuZxkE2UNTT5OTAyxXpTB06MTb7udYstHUwTjlRgmpwuV7VKiUarqqX3FohWoT/VDyA9JPe2Tn2QxgB73RcfZMj7kHXwH24UEPX9w3KRopx+nEkgpUnIAZV11DTlABWOtOKam+/vk1g57WwfJrLpc8CA60pT2BGBd
Pdt18vyupLzz1LZHmCE2ZDSWzMW3t69L2x5fZfm9T5qlui5chqu1qC1l4V29GaVCoUse5qz3AkQ/dbnPm0p99AlMkhLgamfChCCO+XrzYtPzs3vPvtjc0citLFP4jQ5Yg8cOmypDB1IeaeMDIFwAUo4FMJ14bpwrB+UXtXSlk0ixiPC36QkNI8//IBltmcQAvLTuX5AlNR4L7/4fXX/uAxqR2zqj4Oae51PAc+GudizHDT3YECgZcI9JkaSnP5P9+nH4Abl5QQMwWEtkQoIB33AEhoWmI0MD+ra86Ll4GACOKTbwx2LnKWeco8ldom5cxeGq69cFW5eslAeARJNNVZjFydrPP3nfqXsR/3kBJBOhSrGBsRRk5gkSck3Opg8TACULPxOcKvs6+vtDAcVIXioo10/DI55lyfqMgmenxkOP56kD5UAl8iUJMMCZi5xsnfSz3+mS56/07y588It16+Sc38yKwA5cacNpHCgXDQHg9SIaldtXK8EwFcrOcBiP9fs8gx74JhCIF9TCOZu+bl1an4e+rdJoiUK7fI5Ns/whFM4DBehz/zM4+FjTz5hBii40+m8uCaq6vo7la7u9TffVKz8obD3vffDOwpPJb7dnOXFuhmQwtlKtQCIYLDj3nGsPy1XMsjrZi+7VkCJ4Wm9IqE23XCl9LW3CEjmCsgIuSycdyh7bLzNgEHfizj/vWefDXt2bbW8oTt2vGHJg6mLiIrIjojqgOjjjG+Agyh1AVJImg19DJIPpwNnvO2AOiSAsJlBlb4Qoh8cSK1A/wBU9ImIwkh79QrvJKMVelKPRuK4SmXKX33lTeGq+Q1hlXK1AKajCpdEL8oyS4Ap2aEgDMNZQEVfCwGuvu7bAOkQnKZmLAUsY/Bk5lIvp36/3p+O7oEwoqmVX9rymlRAp+1enNukfxzL+bDQ1zEBnuI99JwqxNXyDAr3x3Vner0YoLbMXdA/ZSB1B/rObriCIQvZvOnaDWY8akXxbwRqn7+bSk96mSz8i5f0H36VbygM8/Ude8JJpQzEkto6VxmEpBe7EIDK8ydhDIme4e6maqSJHw7HweXB2XL8QfmtbnvzLenUpYt6/5g533dqUAEqzQJF3KNwrDcxHiTSgHFjUpP0iLg9XbFqqSX8uPPWm8w4Bnfm5/CBGV9DuXUGCcdz3NPf+W544bkfGge6XxwoZPrPlIM0XafKgKPCJ0fNtEw7fR9lun27NwdSyqAYUAFSiGlGmGYZYMyCKUBKKClLMkah9kBdRDSXc6WDii0fVG7U+nkbpGK5MixUghDAtLlVKQLTk2TB1IEUdykH1ORqkv/OnU4A0UpZ4AWuDqTUBjyLEYC6/9DBsPbux8MRRax9/e+/GKrn1Np9eH2AstgHiP18vAbVr2C6utPWKZ/Oh536Z0sxoG5ct6w0R+rcARfIwEV854Eul8X15huuta9+nLWbbps+hCadLc2blPC8kjSiHorXk9IPxP+kDwt7kq298tF8VVzquzJo8Aya5zQkLkIa+A4s56uDcBv6tV/6tIxQSw0MpyIqZ68NQIUQ/fGtZYoOwBWCezx+8qTlCW0/dUrcCOGJiLdKMq1EzPPEqS1RCKRnfgI0sZzjOeBW1TMBUFQRgOixY8fCM9/5UvjiU18tyYECos6J2sXpX8yRUobezojXOn2lKcpypg6i1AVgaduBYuWKReojMhblRXzq4Z4DAaZMiIdHRGNjrT4+LSbis495nhYtXmmSY8vidcapjw2eCFdqTqZ5zYmoPZByp9R3UHUwpSxLMScar4+KG3Ug5Rgy7JNcGlGfKaChet3CgB7lEemUF950a1i87Mrw//6X/yhJo9tAFICsTDnPIaYY0TG8JXCdMbEPamio1vsnDlrr5xtE7QLSfymgOkcaCnSkzGWDAhVfr95+vaBixXHhuf+uTdK5rEvALm7tDNfpEiJTht76VqjZcGeo7jgcxpSgtmLdfdZiYRee4Ukuo8MQ+7fIzeadPe8qHdygif0NMuxUpR+7hP85972G3vTXf+lT4e4775y26FzscQCsgCoEkLJOGW50MSGmA5IApgMp69MFzrhN1l318OLLL4evfPkfwvPiRJmdEv2nhyM6yAGggCbfer75LlWWBNL0ZM4b+LmdO/Xt7JLzE5+/RPrNGEypBzfvXCnbrKNjRF/aNGdhqFOmfICU4+66+6GwZft7Yd2Gq6VTVJLysZ5wReNAWCXOFBY7y526rpR2nYpxp77PlwAr4j00rrR9vu77AdX9Rw6H5qXLwtW3PaR+/lJ465XN4kabQucp+fGqX/2jAliCDfFHh3Z4BjDTcwSiqDSYjvlCgqjfm8C0v6p53uI/QZwiKW2/Qjb5MX8NFuQmfaFXr1waHnvgDrPAL1y4wN4cbvJshyttjJ/Q9BVv/KdQvfudMH7N/fKbOhrCT76sh3sqhNalobLW/U39kj+gSxuFybxAuCQxNQrzDLUrIQcZqHAFatDsl0kCbOvZGXhCpft6UOC15dXtYb8c6K+/ZqM4ocRXEaA7E+KrDmfLD6Id1gHJ+EeZn8PrxyLWdM8NYAOicMT/5c8+H57+/vNh/943FPetCfUEYgaY6s4cSOoEvPf2E4gazKcDwQHVlzwFk/t9vwoAXgwntGeWZlXxcEeqx1QjFQKAQv5V3AYBRQxudv/4ZkpvPaSM82zDmdIekzdaFFNFvelND2pOsjtvvyGsW7M8vCk10QJcoZSntEvO7r19+hhpgr8G4vZ14nGdzyObsN4j67iFH7WSO+mjO/V1rpdtvDDGFASAT5/mUrXbqJC+tbZKHKPcuE6P1YaOwZrQLS711ns+EmAIvvfdr2l81ypRe699lKpSbhSw9L6pSf1GadBBFJG+Tq5ITFdtaOsdbme9MP/0iEeqP/Xxh9PEIX25qyBUDRcWEik3ivtIiFeDH1a46VFyVP7/aO+JMLbtG6Gm/c2kLcYhrhF6WYLWaw9vCRWntoShFR8JNdc8rAekkD9O6Z9167zpXsX0rvmiqs0IjIhncrciw267+Xqz9r8pB/r9GjSIgK2arpfIKQDmXIr9qHtefn17OPAnh8MTD98TPnT3XTa/U8ItTW7Zj27nvK7SLy7Gn5Lq4CcvvhSefnazTcCHUa2vX8CUuSLXh1LsnFNsaIqrx6Cbd7lQjegRwmjzKmNQIXF5TUZ89fYQX7FQY4yZp+fqRPo8DFCuJzXuVGho27LuQ3U10pnK8+OV13aGX/ilX5G/ckc4eHR/uO6qdaGte044rjptnT1hYfdwWNY4Jr9YuXaJBR/XOzSoi3XrvnOobAOaTvE6+/BnrUSMlxM+E+yNj1QpO31VODlYF4aaWsOWN34U7pZLHLRz+xthRLpvuFEA0gGFPuVbzMhOtT9Wn39pfIJCOBX5JXWGbv+i4Eb9AquvWLu6f83YWIFo7zsLl2cOXByJHnR0sDuMbv+GASWPRIlujMaZ/2wkb1mtSLwKQs3e74SxjudD5ZqPh/FVd+sA+QzwBvK2XgRfouTqL9x/uJQb5OLDz639u/bsD8c18Phqm8FG/XQujVPHlO3qL//+
G+G7P3op/NSTD4UPSdyHQ0VnOV3r/rnsSQDUpuWQThawf/ZHPwr/9K0fhMPHE0MSb6gTfHElA1wUg6gV6F88+L3Ml95KqdHiQAuwVupV1p9ZnrNiLO0Bog3yv8U63ybJo1U6YneLwh0KAjwhwNTJXKVGa81Y9cbr2+QHvCI89MijoeMbX7MZc/Fs6BmU5b4yBVTppxeKQ22uVyJqAWKD5rh3y/64EosAjE4KpzCgZZt1p+qKkdCfcqOdip0/PVAX2saS2U9/8vwrobL/VLh+UwKkRyTi9+nQKrjKIsSHxsV8duMKxTNBpId6pBu92KjqySef/CPpQ/U0ErjLX2CpVyFfo+gaQGcgl3+lWBvZ+Wyo3vJnoVKdWCmgVBRbntQv4+vuV2d3hHD0rSDJxIhl5YBekPffCmNtz0uZtUy9uTj9EtEqokRaN11+UBfu5E9KP7JNdZ4+HdoUx8y8RHXitM51bP9peRm8sm2H3Jt2yRl7MCxauNCs53B/xh2L1Tjf+iznPjFoIcZj0HpOUUh//oWvhGd+9JPANcfE/EXdpxQYga9qykFOGAUqKPcdp74fk7yh8Rny61aHf6rEAnHWuS4X91nCfTUpazziPaK83NmN+3M1iLfokU3Vmr/es+o3NLYaZ/faq2+EG268Ptx62+1h947XQ58s+nX1ebUZ4v6poUrjHjtGKkKHooz6dCYCIRD88ZFHB8pvQFyjr0vuUFTSaOiW0epob3U4Ji70/b7qcKhbQRMatq1yydqz89Ww661t4TOf+UxYuGiJXe5LL/4kHDnepvsgYiuvQqEfEL6AD8pNzBec6JSmammWpNzTI7cpXc/5fpe8n0ssR1IgLTYdM7d1JsTrw9uBnkWdJD3o0A/+W6jZvy3ImGcgOqHVEkBq9fQRckCtOPZqGOpw/Wky/zh1Eos/5zzTa55wRZdsAfH7a1YvD3dsutYy+TPd8ZGjJ6VTkvFB+4jtJ5E2cxVlLc8zcdMd0je+KjB9VSnk+nu75GGgcEpZ1lEVAWwYkVzPORPnK9aGG6vwBQVwjh49Gn78/Avh//viV5SVaWs41d2tw+J3hXdWCYx1bVkgpRo16Sv4L2rGR2qzJJWqRxtOvLLxDz4E4AJU4coAlN7eEeU3bZYevF6O7KflH0qOVcT+REdKfzqQ5sNKpSKT3pGJ8U4oGUfnqXYB6R1hw8arwn4ll6mqUKz7qLhSnYwPiP8GANGR6tDZr58MWEcHKkLbiNoYqp7wOzZQJcCsDkf6G0LXaI0d19ur2UzrmsPCuQqrfWtLeOXVbVJLzAuf+MRPmYEIt77nnvu++aCjB4b4MHGvfEwg1gFRluxjltX5C+bb/XarL6YCohjMaT3+lfI3LVY3Ps4uqjy2OJCOp/zhhMOtjen8s4vVSdGDjr7wF6Hy9e+FKk1uVSlf/QIuNG60HJB6PdQAkjCqu8TRnng+DMk/e7R1tT0c3kTjTcvfrLf0wViqL4jtv0mGqSuvWKKXUgOhrUsZ47v05iqbugZY2mtpf/DsZ45OC7jfemdv+OFzLylF2Xuhu1NhojJeYonn3GRSAgD4uYUeoJ0OxccBnqgUAE6CRdrb28MPf/xc+PrT3w9PfeXb8lfdZR4oHi1FFqFCUqb3CEjN9qEqLtrbyGA78ytsoxBoi72O3KG1lTmQy3EPAdAa0HYwBVAwgFUpfn6+3L9wlEd8Bzz9h2jPOtxoElLqQCwXJFnT6Q+CKgnV3bn9tXBg395wk9wYh8RF9vd26HwJqHJZWPcBWOXpsl+l5roHYIfHZOUv8otvBU/fJjFMb23bLOnkLdtF1BXjc/MLz2u21S/bDAhwo+hE0XXaU9c/7hN1FYl+GuplxGLaEfULnCj31a7pSKz/inVsfBFap4+zVApIi9XNHmsPfkJhrmCk4k//9E/79DJPQUeaO6jkCjfpetCat7dYvQrpyF0XSgH60Hjby0Yf+aNQdepdpdF5KriO1BqI/8XqALnSjTY2hYrlHw0Va+9Up+f1OPEhH9R1e+EyNw83sPX1neHtvfvDMYXnYX0lGoiEIqMWHM0rxY+jz4yIfAMEsLYTAz5Pvo0QjvUQs7KSi3b5sqWKiNtgOUhth/4BpPhyIopzPCAbEyAMYKJ/Rc8ZcyZtin9/Z89ey2dK1NTBwydCr7wZaKdJYjET4AG8fZrFoF3JYogUq9DATYj71UyeavuQrPZdnR250E8HUurFHLxzTVnsj/ud8V5uf9zLWVznOG6fK3RjFFE8zUpxuFhzhmFYxAcT31EMTzEBsoj4OOhDHu20b99R87vE8s0h111zVbj5jg+bvnRQ2eGdxoYTlUdljRuafc/EpdcFyEk319g4R324LezY9U4ugAHAxDcW4pk1CSSHpbJwv1GeC+5MqKdMv6u20P3iI0vUFtQuFymuGe52bFTzO03i+gSXmSWm3C5GxerG9WipFAhbPbk/ZQ2U8fFntD72ypdCzX5Z4/NGRgNPGgNAsyA65ZPEIMpBkuyrhjU49345jGPtxxg1S7keNogEJwAALulJREFUKPbKYO2//57bwj133RL27M3H9nMQ0xXnrf25Zqa84gEco3LBYR4e5p7HRQvXrDi8c5dCX/lB+KPOlTN9k0IXV69cHtatXSVuSi4+aoOQx9aW1oLzt0s8JXGz19m3X4Cp6Uh6lc+SKUm6MzpP8pbWCRwXiTOH4LIwwDHjp0VQwcEV6yjVddBksJtxyVqY+C87NouMXzsoBtWJQzwB3Lgt1oX19klzMZdtAGRo8Jj5lxp3KkAFfPgAwc0BsDXVyTonrtR6rX5jmvfeQyoBbaK0ALvOzrZw131PhAXoH2WAMhJHWgxYk52l/+Plc/TgHmsXYzIfG4i+5Hx8lMg7C9XL57VaEVRuLLPC9B8+sV3dgwaiDfJWOHLkpIHo7bfeHB55/Inw9a9+OXTp2rNUDhA5a7n92bamuz2jQGrdhsxdhABQs85r37TBlI9lFkg5R1qGy0byyCicpcl6AH/Eq6/aYD+s/S9ufSvs3XcgHFEoaosAFX0mnMPUrf3qff0hNjN45uNvLDolNQKWe0R5RKOsOxbAx484+ZPtXUqpd0TAJyu2DBXzNIXJQmWLiqlN10fCE4g63Zrqg5SNQygVixDpAJlQDrUG3BI0qrTqhJ5avk+57SRQZbty/5y/s4GfWu/ZCUg62GXBlboAhr+HXo8B7GW5E0Qrvs/F+rhd2mA/58VBP+bGCFv1MFIAtFuh23SNAyqnsHBRLQFTyFQhZv+WxEAuUsH04aPt4QfPfDPccd/jYdWyBXKNSu4e8T5H8XqusHBlTt1YTidKr8JFx/eiR4HZxKQVzw/gLcB5Jin09JGQpwKEmxMgC4jihP+Ln/2sdLyb/JBpL72fp33gFA+YUSDlpQklnOjHTqgjxaUON68PtSMS4adIo0q8MFq9NNT2pMfkbUy5FvA/neyFzVWeXSnoARLN/LTcllzsf+vtd8NRZfLHUb11botFTZHerBTnhigPFzokLhSjUpYLJQ6coWncQIm3GSBs6+yyX8HFneVGrT4YzDEEiAIicKS
AKh+SOt1fYSapwpPFIr3vKfeOOQdLXQdRW/eDM0vvCtrMivWZqrn2brzhuvAzn/pnYeuWLeG117Ya54dj+ipx88Tln5IhFnHfCSBKRHwJcMr2XlV9QjAqoBIwPfbIg+Hw4cPWRodUGc999+/DDXc8ptSOG0ykhjt1rrQAVL3xdEmdBc01ORD1fATsTjhQ3Z+4UX5Wpn/9Uq+MyqDF8yCunr6I57Vqlk8rKgv8ZwHRdWuVcvKmm/ShUCKbnF877RviWLvn8p89qzLn4ipmFEizNwPnCYBCI8vXh8p7fkXGokNh/MV3p8aVwkSgprnnN8NY17thfPffhCp0bRkwneVI6eGzIxf7cfQnpd+2t97RNMjHTVQs5eSPOA9IAZbz5yZcKI7fQwJWdJmQc6GlgPjsrrr80RUavXDJJ9s7wzxZvdEFc32oG0gQPSrDy2k4WulcjWtOm0PANe4yHadw2SPpgC93Rh9rXt/rxsDqZdmli+8cG3NybBuJM967d5d9tG67447A751d28Of/tmfywr/vmYeWCX3ohXKB3zK9MfoLAHZujplipLonyQzyQPPhg3rrFnEe6K40L9ueeG7oe3QO9Kb3iefzUaVZa+ycNtBlHSIWOezU06nioJCQFWh3xORWx5Lz4eusVERWwJ/rv243KM8rn6BJJNazcNVUyMxJCJ7TtH2uVxNn0LRU7DvnAIpIDrUqgQTNz8ZalbebF+eYSV+nvJJU+lC30ZZ/a8Nowv+JAy9/d1QJSf9qr48oFpEVNFbvMQLZV3HONIvv0bciACsc03lnPxJ6QfXiVEIgILTJPMX4jNhxfhpcjx6SQfQc329ZdvXdTIFCdmjAFOMS2NKQA5hcEIvTAYp0vR1KVSRxChQdtDEwAZYlgJGLzfQVSPuzoMdD7E22zawVuxc8TkcYOH0EMP37t4RrrrmBmuLJXpDQGyP9M7kMSUHaZMYjSEFvyDuA0r8bF0p8gA7AOqv/+rzUuHUm1GNc5BXgOe5b/9+WfiPhTvv/FBYvGJjzhCV5UoB0dYGpjY+GbY+/y2Z6gqJvnDY5qMkISAp0Drnn9NMkphg1+CivYNot/SjZPkzY6I6Y948VDzyMUkNyuh7IW/fNi7wv+z9z9jljJ8WF3qzQjyf+LdhPAVRGq/OfFXKnjBRa1kVOg2RrOa6j4bx2/8wDM6/Q3oCFaoOHGlCF1PXppd0FotjJ9pkDGi3Fo4qAgfx+3wSYv8Tjz0QfutX/ll46N7bzC3G0igKeLB6L1403+ZPgstLZu+s0cs/+bxJ5/MeUDCSwzSx0CfrNr+TkIOpSrh2aJGS9DaJQ4X8LYrFddsxjX+AoRMgyqb9QCsRQJMV6R2w2ZdVLTgX98orryYNpFd59dXX2jYWbYD20KEDZlhqVmJoxP36hjkmJmO8g0NF16rmpWsWp65pkB38aQQwAFAxaD3z/R+YuI7uExB1MZ96zol2SB2z+QdftzZJssz15+4hvX/uHQd6+tKt9HzKEO/r6hJ5H8s8elPi7js78yDq7nBLlizjtDkiReDFRlNmDqd64WMdR40LrX7sd0NNXUv6xaVXeXzTJDFgVaO9YVBhpeGGj4cqtQexrLrts1Ib3Gzi/qjSil1uzk9dmoIBWr5ilS3hSNvaTphujw/K+SQX+7H4v/7mzvDtH75kM3iS1q9WLk4Yp3Cuxjh1IUT4Un2Rv5Z0VNs7SA4CSM7q4pxJIs3cWXD7ixfOk6+rMvF3lmoxKXeO0cHNwaPUUX529gMsMXhlQTNWKRRr78gRzbSKKiKVTq5Yv8G4TACSwQwI9vYm3OmS+XNCg37sQ7RfvXqtNXno/f05DtLvhR12bi3hfgE/ON12GfgQ9VsVnjk83COn+xrTicKJPve9r1rbLtLTBh8A+pd7ZsRzf3gbSB1qkgzbwCfGLjhQyABewA7A8mbHbm3svxRoRoGUCKPqh34jVDUoBpREI9o2it+c6faK+rpOCUzCj/NJTIzFp22J+2Otfxgqc3H6ZwDW072e81Qfx3LPxckpaxWVVFktR2mJaVVV517Ez94mBqXX3nhbOVEPhk997GHpRJvDDze/Gt47cCickE9qYY7U9Lmfyccze+JzuS0UgYMGlLg/vAxIWsJAByQR0QtAMLoWM6SIA3Mw9V0FnKgKefUpi4HF69KGc70OoM6JernXZeni/YFDxzTn1xrbtUARP8uWrzGRHEs5OQIAQbjTU5oPi4gg4vNrxXwQ6dXT3a53qVo5TfV+qb4THwO/F/w+4TC5f0R9OM877nlQhkSpQmRY6u/vCy8993QBiPrHxMGYdmmPcjhvQNL7AGBF7wEHGk8VwtVQJ0vNmhq5GJWqX6zuuS6bUSDlc1vdpBAm6w51CZ/fsyXEd4xLWtYe/k4YlX50/Obfs/MQLVENl8oPYJ2J853t9c7Q8Q2yKiPWDw+3mN4RXSlTu1Du3MgMnWpiM7kPYPL83laE0gtbtsvnc054VCkVGbzQpz7xqFLQdYc3NOPojt3vheMn2u1aPZN/OWv/xJOe/5KYY0X09zBLvxL7/us19sHNksFr4JhWcgDxY7JL15Nmy+NtB01fxvvidQM6AdPRQ/tyQMr+VatWGuCxDhC7vhNOFEDFAb++7qQkxOYw0NeZgJu+Fg6eAGf8zXMRnKdPW1j1Ac677n+CU5i7FJyvc6LeTtwXfDzYZkl/yZaUrGvbP7OmZtBO62drufi/+qa8G9z42JA8MaQ3FKmp80Kcxx3y8RTgHcjSzAJprvVip8rtnP4KYIoUoB9W+7HeNmmpBdj+lGjxMgJRbgdxGuPHyRNHjROlDIfnbhl0enu6FBG0+NwBatqX+Jg++9xWM8Y8+sDtRSc1xNjkTv47396bs/aPymWq0Mmf4TPD7wWdMmM09WEZv3bx6SkvSbr9cYFS3AMAp3NtJY8rsuPggQMFpatXr9H2CwZcfgLOAwfLNWG4IWPSNSubw+Kl12kK6R0GjuyH/DpYdw6R9ZgAzi2bf2hFACuZmHBfgpyTtY3MP0Ay9zHRdQypAN0n4jvXOF0aJheApIaLjc4RkJ6j2wRQY6n2MgPPbK8BUohjzFCA3yO6UQAWoxPGpzq5hJBlaaZ1prT//ee3yq3mULj79k3h9lsSg4aGXHqJE4cA1xCn9MPJn1DUU6mTP9Z+KAlFZW1iG5ReCmS9oH/u+1gKDL237J4iEHXx3e+V412NQK9kOTTn+Lw+etJYAlu+IjHG4HMJOHr7ACRtwTnSxq49B8O7+w4asIGh4CD5UCH2A4uEjS5cuCicOHZQH/C8SN128nA43tZh3G0MohwHAdi2xgfDSvL/2Ab6XPfpy3yN8mtDqadF+VrJXvrPzx+vT+XYs6lz8QMpnGgyRU3ClQKmIjrLEpVcwgPSbmSSf7gT8YsJMF0jCzMuPIcOHwkrVyyfMTB9TtmRENOJif+tz/1cdO6kx+PrKLXuTv4u9r+xc+8EsX/U2Dd/5Uu1dD7LGXYTyUAtvUwW8RU7iPjSATAHKjSXMk84pFMOd8aZHIQLjo0b51hRltsDALuks+TZu4oF6YSoJzhFJ0
6Lny9gyjlop04XgP6U86On1F9OR2tcow7iw/3TP/NJZcw6LlexzrBy7fVS35wO//d//N+t6VE1iH6V++EY8gFA6SLZKPJ/OuDpoZwAe5bwDS5nECzShdkmprxdKja/WAOFI7RYjQtcNjqmRAbzr0sMTrhDpRxp8df+Al/s+Ty9uHEfSCeVsONsp8JGD/rcS69bco+f+8TDubbztzT9Hnex/67bbtDUywdM7N///lFr0sX+qYeh5q/k3Kzlh6DfKZwdQOQq45ypu8wF5MRYryO04ngPkYxdnmLO0b4rOgYw9vN6EywdDNF3Iqa3nTice0bozJcvW2xA6twx91Ap8ET/yT2YHlRlHO9EuZNf12uy1D/08MNh2bIlmhCPXAcV4e++8D8E1M1KhfezcpQ/aq5RsK/eFPrDqClvcsaWGFjPhPJPdOpH+31M91jvi6mf6XzWBDhF1Tf/fBi554/CUPX6MB67pvjbl1T7QP7HLcooN9on6wbQIc8/oAf92y89HV58dUd4/MG7zICUAPR0X6XS52WgI/b/0j//ePjln3si3HDNBg3sEeNSB5RwhKmek+m+Z+6cpa+m1B4fQuoeVQG4DICmeElZzjE+Cxoob8brOSfqYOb7Oc6NPXEbvu77Dh8+5kW2XLFihS0BTAdoB0/f9gM4p5/Xy1iiFoAL/MGzz8rkkDgU/vv/9d8rU9jp8MSTn7Sqt95xt4VsskGP5XvNdp/zf0yzUoxm6jp4DvGzKHauYmUXPUfKRY+PKFmsjEvjD/1uGN/3fLH7+MCWEe5oxGidEumVEweBHpTpQY4eOxluv1kRMjk9qDcyU6+mt5cs16xeaUYrzk9Kv9dl8T8iMZLYflL61Wk0M4nb+Y2MYujk+w8wQYTOcZf5XXYT9EyWc3SOcRwLeIaoaw1q4fUcqOEF7LsWncNBNm7GAdjLmK4jpmbld4UASAfOeD2uO9n6W9tfN1/VF55/2bjcB+67NzS3zFWI6j75NS8TJzzPvAToJ0R2RHFu0e/c12PRuEp1fH+p83MclK03zMwZRtqj97xO3geKTUvLLo7FJQGkdJV3LlM1W4dHXNXF0ZXn/yoAo065Hy1bsmjKJ8dfcvNLr4XXlOj4yvWrw7/4hZ86d9b/MlflTv7E9u/YtVti/55wIBX75ytZCtn8se4mgFpqiJU5wVnsijCtoBVeObuSIhVcpM69p9GRgCXEguNjMKXcMFxtZ6367HJQzQIpjvIxzZNrGsS5OMd0wNSv3RrQv/rGueGU9LBbt75oRVte3hyuu+7qcO/9D5rHyO7du6zc9Z5+z35/vu26Tip7mR1Y4l9cB76gSn0CWJMHwMhujkCKvETlTcXHetmZLum/qbbndS8ZIOWC7fbSzjSWwMuS19u2Pgj/iGwhyglapjDOxK/UH33SU8X6IfYH/dXPfNLi5JNXhmNLH1esrbMt8zNiSNt047X2O3DwUHht+25L6XdSzuRzbXoN5dLMmTJAML/Ps72C+PjCe2cLcDHROB2zubOmIGrgGB3mgOet+v35dnZJffSmVk9tAhzGuWYrRtsOwBQRD8+HlA8StGCRpi/XUt9JhcPql16wL61SiX9+7XZPqjNPk+x1nTou17uE68Pf8++e+sdA9qkD+9+x6Ck4dgyGnNO7wZd+mng713++s8jS63tdX+aqph0UexPk9s3gip83zi7lfqSlTnPJAGnuBqYswuaOuLRXjA3idYU7GZOTfqfSv/UqMUVrCoTsyT16NlLKl6EHfeZHL9vxE/1B/fX1487PsthZXex3a//2XXtCm6zTgG1+RlS/r3N3nQVnoOszDJDvB3jcSONX4/fldSh3gPR9qAx4rNShedrIcoUcBzkn6mCXlOpdkI6ZZDYOpExmRzSSObnHJ/cDSixp312v4ntBB4s7FdeHU363PAKY94ptU1Pq+gHTcjSNy0iaUeP0kQmbfBDSxvv6PcdxUoJnAUT7XscKZvgfbfPLPP6Cs/g9XmJAei67raB/Lp6N9MMBuOCIz4BZsnSFBhivtMiBVsvtO/doXqZ2m8oDUHI96GEB6Z2bboz0oOirzq2lNbm4M/vv1n4y+buTP9Z+MueXSul3ZmcqflQ8cGxQR9Wsu9lOuz/aVbBa6k11EPUmnC8AzCAHXdvQvwkAmvptJrk8PVkP7lTVJpn0a0K8LLlhKQbrnP5X7dk5HRF0cEND3lmb++B4IpMcOO0eVI4Llbs/WT2V+f1oNU/q0Kj5fHlmjb6lnrdhfaiytpMKwDGyvTa9tGZXTMsujsUlBqQXR6edz6swPWiaAapoNFP61j39vedDW0d32Kj5kH784rYwd+c+m35jxdKF4RMf+bk88KYXfyn44MZO/i724+TfpYS/OPgTQ46e7lyFotpATvtrSkAQVYpWNT11IZCwz/w40xM4qMXnK/eOAWhwnoQML12S1CSVYWUVDvSaslkNuUjvIEotQNnF9+y58Gt1Kz8cH/H0EAAakx0vcLN4ee2gHe4nd7/aZxR/aLRu50uBMq1RdkF9/6j0kTLzPJPdk1/AFM593oHUUt4pVHaq042Qd3SEpCTE03+AiLl32k+d0ss9qByTC3IiXLEu2L5jt0JH+829iP03ydXov/3NV8PHHrm3ICY7f2x2GOX3XKxrWbEfa38+k3+zuU/NpJN/jhMEEBwcSnSOc3o+7hysvHoOZNICQKjUE+BYb8ePZ+nnSNYBRV3WUD4bPomP3VHdQTQ+nna5jmLnpT4gaq5VAmhmYW3UxJIQIO8iP/WsjRgkrVb+n+3Pbxaucd+TgKldn+rQ5ROvNSlpSefyok4G5wvPdx63zjuQji25KlTJe6HYbKIF9y2vHuoML7gx1JBN6oNCkm+IWsnrQUkCU546OrvDmpVLc5XQm61XxnRS3F1KhA6Y5NBkviI5i+v/4ntwsR8n/53vvGvW/vePHLcqeSd/hnPxoRi3VW7dwTAr2rvYGR+bA13OyqmddAnxJsUcDzAZsGV2sungkQXU+BzefOFS1uwybkEGzpnzFQNs2uzvT13q2EjRykCUi4va4Hi75hRYs33F4RMIMFVhqbpR87lDT544lByQdv6c1iRpTq7CRbCSdsH5u5LqxoVh9CP/IQytuMOA0ifEC9VJUl2uBKd7ploOt3w61Nz7a3r5Lrdso8X7Gz3ogfePWLIS9KCAxlTouqvXhW3b3wkYlaAXt7wuMO4Ia1YtTw8v9nomg8DeaL3VVsPe7uJ104bO3ULnPn7ssIEo+j6s9qg1ShGeClj78T5wJ/8R5SQgA9VgOkEeTv6ESfKbLpU8BGDxX6ZRB1EDwzIgSj3a91/cjJXxT+Rgnl23nfo3pPmoipE77bOvGGDHx8DpIv7zKwbWfj3csoP7BADWTu7ZxX07Ju2jUoBpfRRdSDkuvev0iLLq56ObWubkdbhRExd09TxzpMkbQuo7EjOPXvNoGHv5qVB95F1lkpVuR9NVBIn9gzc8EqqufixUarBkO/yC9tY5OnmsB827M039ZEQi3XfXzeHbz76ovKCN0p31hU88fl/C/iQQWbQx05NqVLjhaTz94
l+IPocLx5DmYa/o/HDxaiRrfXpdRW9ChS72048vbn0jvL1nfzh5MrH2z21pku5Qr7mhUopQCU9UqrmkHCAowmbELXgDsdgNyHKoU64vnROjgRJt+zEss2CV3abOybY4zI+S4sQpDQSL7y5aSkKcmGjDRfy4PLvO/VLXl+wHTK0f6E/vHPrD16lDxUyZ9b/qnO7uML9WD4M2j5XmenPD8nNx+IWk8wyk6a2qZzESTIhWmi/x9LHfDPVK2Jx7AS9k75zjc7sedFCzjC1aUF4POtmlEIK5fu1K4+TySUzKvWbJPv4PvPGNUNX+XhhnpknNRGA5Xic74TnYDyfq5Ims4UQSP1nfU2yZ3AuqgIcfuDvc/6Hbzcl/6xvvhEMS+yvl4eBif+Lk7yO49FvmPRcDQnxm9oMLgChkOK1mKXeyPYCDCgBC41hVh+8C23ZMus+PMcCzA71k4tLOKcORG4S8xphmkyhHfk9xnWJcKPt5J6H4Urj+eNv2614Aypz1Xtv6S/rB79W3o6W1Q8UsRWV+LqZRyRvWxhUBVy9jY5MBafbwC7Wdf3PPyxWoa3KfVh5rwgsRrVTJ07Ck0H4h/kp6d3r5ZbDUvcKB9YiDmqPBv3TxQvVL9Aad4S0CJGv0S8j7r3hj9Dw0tPVvQ5Wypldf/0gYPrgtjG3+fAgP/c/aw37aOPf9T4ABfVFTpUnRZI3HWs98VdDkIEqt+Bo1kVsJJ/8jigbCyR+rdAKoHFu6nxwcqJUlzsirHANR3JJdkR4pXJyJ8tppx2TU1rnhkJ4gu02xg6vrKeNzpodZgu02pbmDbL9dQHKs16GIa+SaaIuN2Krv9VhOsJRreEJEXzlxXQzbLFFG81Buf3qcv+ZebpcZtZkclfynDXZxit7e5CMxPsZ8YbU2yR9Jq9PbTA44z/9zIbDqiPMMpPGdZrrAezhXJbM/V35pr7gYz7Qhy5ctjdLUzfR9Td5/4yd2hgqBaO2Dv6ORpRhmSQJDL/xFGN75g1Bz7cMzfUET2sO4ROYqPBPgyCG2nci1On0qvG8X+9ulc31dU0zv3H0gSunn1n6ZwM9ySDowcL1cQZw2j7KsWGzcKTsccVhPKQZTwKoUxZPCHTtywLJC+YAudRzXZiBaqlGVw9n29aV6yBTkuB/IwDNZnfJ/biEW4+MDc/tKgCm6Uz5op5TtKoRrc4e6U36u4AKveL9f4Mu4/E8P10X88lTcmWa+N/KjMZEB/H9ypqxudHx8yMa3hcjFo3oGLwzDGnkC4MgXRBw5CazHNHKyOVjP+NQgnD4S6F5jsZ/Y/iNK2JLP5F/IKgI4+V4rf/YsaMETxIDjIGqcKexVEfJzcax/ChxsfZvDAEF9f4ziuYx27HjbyhzwkhqF//1Rci7nbl01QU04WXoBzrZDvrqObdaFhd1D9QKarL/8/goOSjfseortUJnfe1aN4e5ZJQ4778WzQHqOu9y5LnROc2WFb22d3J1ppi8pB5uMbo0m2z60LYzsfl7TtnSbjrTm6gfC2N7NgVlgK2//heQSfOTN8AXhXUB/FOPIEetzUVszcV6QKaKs2P/K67ssVypO/gvmT93NjsHvQOfGIwa9D3z2O4AyYyaEOiGLRw4iDqB0uYMcR7HfqZQYzv42zeoJEb8POUfMOu3QbmwUy7YFiPJ6wAGS77QyjIRqXazdo8psFlDapcHCLqUkRzFnniucwkrJ49JzHT6cZLuqqKyx1nx6FevntP24r6ZwyhmtMgukM9qdhY0hTs60HrTwDFPbAjoNPDWaRntPmKcEAFp758+E8QVXh5FtfxdGnvnPobKpJZkFVt4SUALAUzvHVGuh2oArX7NK+TP///auNTau4gof73ptL7YTHOKAQxOSlMRJQ4AmEBQDeRS1UhtSWt5CLeJHf1CkVuofWrWoqH8qtVJVqUUI9U9/lCJVtFWrFkF5BBJCCiSA84I4UmOSyE6ah+MYEyf7sPt9595zPXv32t6179op2kmu773zOHNmdua7Z87MnAmBXKk04opnw35Kx7s7DwBQj0JK9mbCufkyDHrhfAmUKnn6nZ2AZc5AVKVQRoJTa0n+s8Xj3cDAwI5+pgd1adKfIMvdwZyA4RI3c+5Q16qV9JQ4b0hnNA1QFSQBoHRaDj6ALsttO5pYDsbz8Vm/GQYaxhspWJ5hQHRYIPVJu9G1rV6utpaUwjnrI5zvpDMqIWFUmaxOSkhejVJqDbh6UJq4K23CpFTq5cbTZq771DP7X5Cao3sksWSN1K/8qhIiBiSwFM2OzuYw32uqXgdDFyk3w3Hj80A/6oeDnjdu7OkJ5HpdG/bverdNdrz5hp6+aeDBOgrXgr0TrAwbFbgQl+8EH5usYvqw9SB3zSVpWVoDO3ipM/qWh/6afub79+2XjlvXa7yWFu+kTQ3309rNaJMGny0PklEwtYi8g1nTZxK0LxKp4O7acqfMX7BYtr/+MlZEdKkRaPr7wR5e+y+kSz7UsfAWyfcq6+anDVu7ap07R8+i8k4iBUXkMxGYBjz5DJDPAl5LYIy/o7ULN3oVSN3aiOHZdH9TXc4UAys+CV0tKvltT6mkkfjK953lTV7TUvMlvkjhxbYmFh8XRimFJU6fQiLVVm9ijAXO8J3DfgITL5oc3P3ODvngA6hA/M5MTGDNqMOLSXZaW34AgYkd2jobpVA1fAwUs45sIEo6Jrl6RJEWkQh2Bpzm794N/Lq6uqS35xiMLS8Qd+KJ+dOUnuuMV/Mbkz7SGp82/L/n7m/I+vUbwFNOlrc/Js889Ss9SE8P2vMBzAUxS695+XVn+ZZ7N1o8xZTHk3u73UZUDTMHem/O3FudTUTb/ZiZiTzS509n+UxEY6xwto2qi6kGOKFEC01cxxm1vTGmbMomkz3woqZJ4YSBJDZDeI3GmpCPAAHV8HsQEMsD64UL77mDi/V1qbrl7dfKtx5+RH78xJNCC/E8OZOYQIBkzREAA8kOz9RNqh7RBw4CqA7l/fi4qf6RQ2YCpV0EOF50BATSNjul+uz7KbhqrFHw1fWVAxMvyidd49UnofmQfpHzEYFpaEaPh+otW75SDn70nvQc7cJHIiM3rb29KBm/iVHfxcg8ilJHe7BWSNO+Cb09xzUil0AxYP78hf67V2fRVCb2DfOo+U6crCBGVSItqI6pvXwy+KmCRKyTJVNjyUt9tldqFnXoM4fwHL5XQv9ZKqucQefiex4prbP2nOSJ6oWlEqxgPPJ69733yZ1bvi579+yRrVtfUimIIENnRwhRp2rASX+TePhMCTQsedLflRINROmvw2vc2ME9iAVQ+L2dN8Y1N5Lwjkw+evSIeZV0J12HTFEa4ikBjCeTUqfddnW7Cglcw8kD8AJnHw54kJ7xG6bNn9eVWoP0EQ/Gm9IDfaM5MHBOY+ezFzCqqhNXnREmYzTC/hO9B2tDJ4oYCmd9VV1MNUBDG9QBXnKuZb6M9O7x2EKL1nn7Ult1hQpDvSQldzqeP0SVyKXsqOe+ae1aefxHP5XHHv2Onv9OfikV2tA/zD87MydCXBCl3pEXHaXEQBr1vBQ0DDh8r6Kb
G55OezYqeA69OZJ3wdb8g7z8/M3fvbvNwj4Sz/7h92pEmiqEfZ275NVXt2qSqDyiZvTJr8uzm1/Uc5g9W9Jl51QlU16ZTZ3hf9MKSIVpFARW4KUqkcZYqRy2DsB6EZf3xLVbKQ72UivukOwbz+guprrV90H4wz5qbdlsbuU08Ti4GaVByZ0SX+Zio66xpVpkIpOBo6mn+mRdrfzyt69YJbz4O7/37r9l51vbdckQlzeZo4RCSVSH5L43n8MuMOgcDhjn3aVje+J50qc5ZqOASCbgGN+Vfuk3yinfPMc0mhavBk7Ug1IP+dtf/wJGtWdj7W2fYiX9neIaicg783IBOjLSOJ6Wj51TZUugWq8yozzRia0sDOWz6/juhrthk3n2q3oySatpompAARQBl46UhUE8gDO18VFl9+ILv5Ts4d3oSCqX+i08qltFla4yfpT2aJCCIMoJBQIU7RBUznnLwdzuNZkaIM+bt9wlT/7s5/LA/ffI1W3e7izyTXou4NkzJ3BsEsfi8T6WUxAKBRqw0JujIErzHIK7LqwpofTLdKYiKAAWAKjNrLv1wDiMT2nrPNaWElB5GqhJqlYmN99SngvyLiGBgfBZbGjhihj73bgum7rrKMcheh4X7/ZMdYtd5mfhdrdwVzUTRT/sF81FOFb1vfQaQAtmB+NECq0XUcqKtGxfOsWpx2RLBJjWY5nTMLaFZt75q4x8vFMStzwoSdg3KLdhT52haAqU6GntiXYIeo+fqJD+1ANRrqfNf/SKMpKEFTLWw2QdPwTubL8tEXL1qFF1TGAzfwMwex+LF4YbgM2G8Q6qSDjcpnOlIotDf6oSFMD5TI+Qs7wLvAmuJOjfqS/lRboKxqSJ9wJnaQo843lh3vxY0Lyi2ZOYDbsJlJIHh7w998aP3aNytjoyiTscR+sXnqQxHp1wuuTmzZt/giGJt10gHFp9n3QNJDGZ0gxL3slkSvr6TsunMJZbjw43/RNRkDzR+il/qkjSNE9S7etleGhQ8u//HdYLz0ntvMXoIGyq5qwJRXU7i1OhO3i97LI0zAE2weLPoAycw2QHUKABZvXicd6mhNzW38nInGtgqrER9fBPqfnccnxrPKvwU8lnLtY3rrnpZvni6jUAnhE5c6pXLmSHceImqOLicJ8Aao7AxIuunNommF2/apVcf8ON8srLr+iHx0CCdAokUni4oK2Z2R+CX4RTXpRQRCC8WASN45clKJNTtuiU5flqHkjCemOZFy5okwULr/GJ1Ej34W4tu8ULqFulBh6F9esuhXKiBL+BFiOChhvXec5VgdSpjUo88nz25uZm3RI5cK5PMtk8joTggvSin74S2StNBVE+aZ66alSSrUtEFt0oNd27JLf3XzKSukySmJQKXNDzpo/PIG88JDBDQ1Np9fUYuvb3ySefDOgzP1BTcewgeRhmqVlwndSv2oyPyFKsIEhJ7sDLklx8y1RIF6Ql7yu+sFJuv30jzL414hTOU3IOx8EEgIpq5c9B3aVWNZ4VCH1/8onHyAu4rO7Bh76t4X987jl9D4DUHrxokX85SAl+YieG5sn0YID86bCazPgX02imloYJGBdptHkxPMJZWcJB9A9/XIrikB/wy2LPnXOF1qvFOYbVCv853K11R1p0lpe9e4wxwBMqxgJRpmWYhmth6FOSy5VQ5SURqkYarwbQyjip0jqvTWf1qT/1dD3jJYozbLRpBY0LrZ92R+twAgG3iuYPvCSZrb+RHIa82hTZkLTXxMlH+bQ4bOZsMfWnVJVMVX9q5U/N8vbVs98PN89WmwNcIzkGDpTPuJ+C/K/fsEke/+ET8t3Hvqez/eyoBEMO/am7tFl8V3dqGVp/trv5L1rQqgatd2x/Xb3K7chWTrsbXb6zjlI42tl0kxYWdbf6VJQj0pXpmF+Yh0gSfgGPfNxVEHzttUv0vSQaBSnjfZna5z1eXj7z1LhzhvpT20LKiZWZ0p9yssm2hSZgPq9hC0zoYeE+h7zZhTdI+vrN6FFsvZ5OMegwM/Qrqf4UOlTaLwj0p/g4leYMHgCa0BHzRAYeXJG6eg3EoTrJ7X9VRlBmPludFI6NS8tl3FioSy7y58UybHv9NXlv99s62890VKxwEodgqjpIgJJy7WtcbJG+5bGsfZU+BsY8LGCcO8HYaDPaaK2MJqIfL34AMhFHO4/G9J4MbJlmPDdeuNEYL70CNSJ8fOyU8ERZ2kmgW7xkmW4aoP50JttodWivP8f0/tHhPvSn1P2dhrHhCxcu4CzxtA5np4sTr9Gheauo470lMcy14X5274s63E+0wLgInAcw7A7+FRaRNFbl/1B/2gjDzIMw9EujHXUY6rM+ix34hERN6Y//KGnnMKTPdXdKzdd+AFFwRLLv/Enyh7ZjvDhf6lff63fEwjoppjt1H5bBhv1zWmZhm+cJOY82wGE/L647tZ8Fr+rcwQH9OtZ1SFtbK3YcHdQPC/342eO3z9LbT8Qw/sK8G2iZH7yKHNe+trTMhgV+HP3j80JadrkJXL5c/1KfyddETtfegiduAOOHoLmxQZa1L0dZMlKbqpe+MwTXo1rGMD2+82Ld6JB9oswmF17VkU6u3uJJxQmUy2c1qf70HKTTeCdVSuDReqsTlQZFEgtXQ186F1ah/iEjPZ0imIxK1DX5oISWrJKqk2iaHzlhRx0kQZRSfbT+1NN15S8OSLbzzzK87zUtV91tj0gSk0oJ6Ih1gmnxGkktWqedTYsRUSeVKh71vZw42bhpoyzBMTHn8XE4dZrtwJcccSeo0bmABW9ZsWKZLFy4CPrdnBw+fEw/xhRew2CnxWF6XCbV8dEFHAUb5KOz+whLpWrk8pZ5MoDtpzS8YjQK0nEYr0TpO3nn8jEWFY2DvMgH6+b0qeOyes0t0pD2ToPgyoW339qpxfOrq4gUaVSBtKhaPkMeaB2UUDipwi2mnJDiTH+0lDUd5fbW2glm92sxuz/iz+5n/dl9mG7yOiFFG23h+mc6GCvKw5XsOSHFiTyujOBEFUWv7IcvSX7336SmuUVqOx6W2rYVAQ1K2DX4ONhM/cyVwmNpbuuVBbP9/WdPyoVMTiVUApYBBPkkhumky8qV0oIlQP04jpsSGSdtWHRzBF/89xwSBc/m59+D7yLTMg0uftSzHNrDzyEZPDMO/d2wENn4X5EZT4QdQr2kIJ6aVDoLxzMfOdKtHyGrJzdzJPHK7n4R3AhTf65KpFOvw3goUDqhlEUQpZQ1c8ulUB40OOskKrk5s/uC2f0EZ/cZR3vTtHalyMqmZM+VEVzEn8GRzMnefZLv/ItI9qIkOx6SOkiclLStTMqxU8aZL8FosWy2/1ZYoOKwv7/vJHbLDSl4EhAooVIqa2pMA3jXwsAJz7sakH37D6q/SbCkyG8df6IxERRBXl0wtv+M+KRPEGVSrTP8IQbxIk175p3gPR590o3LWb7krw9rjdeuXQtdrrdd9Fx/v3QdOhT8xpYny6CSKBNXzlWBtHJ1OznKJmXlYUqIw7w89u5P73IpNjj/Qi9UM3AAoSSG+4mI4f7kShl/KnaW2jz
WxO56VoZ7DsjJK9dJ8+o7JdXQjMwMEuLPt1IUbdh/620bMKGyVIawseMM2wOLAke1cEfHOuFhn4MYgu/duzcAUk5MqZSJn1GB1EtS9NfiBAEASfxXy/hNs1qwv/5ioLc1/S3DCWR20Z9+xNOJljEhyqSdwqD+AQnkOQSdcvvSxUJJnm7OFa3S+UGn+ptUGkSvLIgy+1x11p7VcAk66n14oiYPg6PJOe+YklkepyoWWHOZDua9mXub3c9++Nro7P4NWwKBxBrudHDEPIgpzJOTDpn3n5eR491St7RDEsu/JFdBOj3T18dowkP0zGCHevyf/bHZfi792vnmNtn25g4ZhBpoCNs2aSQnA2tI5bpgOM+EREI41icdQTIh56WxIQkJPydp7DajpDwLV6I2jXw984f1qWEA1xC2Q5+SHMG7gg1AeQOf5Nuy6cEkHW0e8PfH/BMEjgR2P/llqCQzXhYFfz0gRcdMJGsxRVd1l1INJPCJ57lGON8oTVDgVxjv+J2mC0Sj86m/7suS+fzN6eFdz+vaU241rWsmX9Pvcif2p7nltaZtsdRvflyStQ3KB415sO7Onu1P//dEj67hhd+M8BhXrbA8997/gGzYdEf60MHOIbQHtaGXyWSH8j4YqlV+IA3vY0mjijGGmmQO8S0uQYrWrK5bdXN6y13fxITTgO7nb2puGrPusBwt/czTT3ML55C/WiuuIhfTAd/GPw7ES6OPsOzpk6exD39IK0H5dA3IGPAWE4vBp6aGv8HQ/wBHmhsC2gZn9AAAAABJRU5ErkJggg==" height="188" width="336">
</a>
</center>
<h1>What is Colab?</h1>
Colab, or "Colaboratory", allows you to write and execute Python in your browser, with
- Zero configuration required
- Free access to GPUs
- Easy sharing
Whether you're a **student**, a **data scientist** or an **AI researcher**, Colab can make your work easier. Watch [Introduction to Colab](https://www.youtube.com/watch?v=inN8seMm7UI) to learn more, or just get started below!
## **Getting started**
The document you are reading is not a static web page, but an interactive environment called a **Colab notebook** that lets you write and execute code.
For example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:
```
seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
```
To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter". To edit the code, just click the cell and start editing.
Variables that you define in one cell can later be used in other cells:
```
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
```
Colab notebooks allow you to combine **executable code** and **rich text** in a single document, along with **images**, **HTML**, **LaTeX** and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see [Overview of Colab](/notebooks/basic_features_overview.ipynb). To create a new Colab notebook you can use the File menu above, or use the following link: [create a new Colab notebook](http://colab.research.google.com#create=true).
Colab notebooks are Jupyter notebooks that are hosted by Colab. To learn more about the Jupyter project, see [jupyter.org](https://www.jupyter.org).
## Data science
With Colab you can harness the full power of popular Python libraries to analyze and visualize data. The code cell below uses **numpy** to generate some random data, and uses **matplotlib** to visualize it. To edit the code, just click the cell and start editing.
```
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '-')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("Sample Visualization")
plt.show()
```
You can import your own data into Colab notebooks from your Google Drive account, including from spreadsheets, as well as from Github and many other sources. To learn more about importing data, and how Colab can be used for data science, see the links below under [Working with Data](#working-with-data).
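For example, the following minimal sketch mounts your Google Drive and loads a CSV with **pandas** (the file path is hypothetical; substitute a file from your own Drive):
```
# Mount Google Drive into the Colab filesystem (prompts you to authorize)
from google.colab import drive
drive.mount('/content/drive')

import pandas as pd
# Hypothetical path -- replace with a file in your own Drive
df = pd.read_csv('/content/drive/MyDrive/my_data.csv')
df.head()
```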
## Machine learning
With Colab you can import an image dataset, train an image classifier on it, and evaluate the model, all in just [a few lines of code](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb). Colab notebooks execute code on Google's cloud servers, meaning you can leverage the power of Google hardware, including [GPUs and TPUs](#using-accelerated-hardware), regardless of the power of your machine. All you need is a browser.
Colab is used extensively in the machine learning community with applications including:
- Getting started with TensorFlow
- Developing and training neural networks
- Experimenting with TPUs
- Disseminating AI research
- Creating tutorials
To see sample Colab notebooks that demonstrate machine learning applications, see the [machine learning examples](#machine-learning-examples) below.
## More Resources
### Working with Notebooks in Colab
- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)
- [Guide to Markdown](/notebooks/markdown_guide.ipynb)
- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)
- [Saving and loading notebooks in GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/main/notebooks/colab-github-demo.ipynb)
- [Interactive forms](/notebooks/forms.ipynb)
- [Interactive widgets](/notebooks/widgets.ipynb)
- <img src="/img/new.png" height="20px" align="left" hspace="4px" alt="New">
[TensorFlow 2 in Colab](/notebooks/tensorflow_version.ipynb)
<a name="working-with-data"></a>
### Working with Data
- [Loading data: Drive, Sheets, and Google Cloud Storage](/notebooks/io.ipynb)
- [Charts: visualizing data](/notebooks/charts.ipynb)
- [Getting started with BigQuery](/notebooks/bigquery.ipynb)
### Machine Learning Crash Course
These are a few of the notebooks from Google's online Machine Learning course. See the [full course website](https://developers.google.com/machine-learning/crash-course/) for more.
- [Intro to Pandas DataFrame](https://colab.research.google.com/github/google/eng-edu/blob/main/ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb)
- [Linear regression with tf.keras using synthetic data](https://colab.research.google.com/github/google/eng-edu/blob/main/ml/cc/exercises/linear_regression_with_synthetic_data.ipynb)
<a name="using-accelerated-hardware"></a>
### Using Accelerated Hardware
- [TensorFlow with GPUs](/notebooks/gpu.ipynb)
- [TensorFlow with TPUs](/notebooks/tpu.ipynb)
<a name="machine-learning-examples"></a>
### Featured examples
- [NeMo Voice Swap](https://colab.research.google.com/github/NVIDIA/NeMo/blob/stable/tutorials/VoiceSwapSample.ipynb): Use Nvidia's NeMo conversational AI Toolkit to swap a voice in an audio fragment with a computer generated one.
- [Retraining an Image Classifier](https://tensorflow.org/hub/tutorials/tf2_image_retraining): Build a Keras model on top of a pre-trained image classifier to distinguish flowers.
- [Text Classification](https://tensorflow.org/hub/tutorials/tf2_text_classification): Classify IMDB movie reviews as either *positive* or *negative*.
- [Style Transfer](https://tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization): Use deep learning to transfer style between images.
- [Multilingual Universal Sentence Encoder Q&A](https://tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa): Use a machine learning model to answer questions from the SQuAD dataset.
- [Video Interpolation](https://tensorflow.org/hub/tutorials/tweening_conv3d): Predict what happened in a video between the first and the last frame.
My first program
```
x="hello world"
print(x)
```
Addition
```
x = 1
y = 2
z = x + y
print(z)
```
<a href="https://colab.research.google.com/github/VRB01/capstone/blob/main/Tranformer_librispeech.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import os
from datetime import datetime

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import librosa
from tqdm import tqdm

import tensorflow as tf
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping

from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

tqdm.pandas()
from google.colab import drive
drive.mount('/content/gdrive')
Directory = 'gdrive/MyDrive/Capstone Data/LibriSpeech/train-clean-100'
Dataset = os.listdir(Directory)
audio_list = []
speakers = []
for speaker in Dataset:
chapters = os.listdir(Directory+'/'+speaker)
for chapter in chapters:
audios = os.listdir(Directory+'/'+speaker+'/'+chapter)
for audio in audios:
if(audio.endswith('.flac')):
audio_list.append(Directory+'/'+speaker+'/'+chapter+'/'+audio)
speakers.append(audio.split('-')[0])
audio_list = pd.DataFrame(audio_list)
audio_list = audio_list.rename(columns={0:'file'})
#len(audio_list)
len(speakers)
audio_list['speaker'] = speakers
df = audio_list.sample(frac=1, random_state=42).reset_index(drop=True)
df = df[:12000]
df_train = df[:8000] #19984:
df_validation = df[8000:11000] #19984:25694
df_test = df[11000:12000] #25694:
labels = df['speaker']
file_counter = 1  # simple progress counter used during feature extraction
df
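# Scaled dot-product attention (Vaswani et al., 2017):
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# Dividing by sqrt(d_k) keeps the logits' variance stable so the softmax
# does not saturate when the key dimension is large.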
def scaled_dot_product_attention(query, key, value, mask):
matmul_qk = tf.matmul(query, key, transpose_b=True)
depth = tf.cast(tf.shape(key)[-1], tf.float32)
logits = matmul_qk / tf.math.sqrt(depth)
    # add the mask to zero out padding tokens
if mask is not None:
logits += (mask * -1e9)
attention_weights = tf.nn.softmax(logits, axis=-1)
return tf.matmul(attention_weights, value)
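# Illustrative shape note (hypothetical sizes): for query/key/value of shape
# (batch, seq_len, depth) = (2, 5, 8) and mask=None, the attention weights have
# shape (2, 5, 5) and the output keeps the query's shape (2, 5, 8).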
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, name="multi_head_attention"):
super(MultiHeadAttention, self).__init__(name=name)
self.num_heads = num_heads
self.d_model = d_model
assert d_model % self.num_heads == 0
self.depth = d_model // self.num_heads
self.query_dense = tf.keras.layers.Dense(units=d_model)
self.key_dense = tf.keras.layers.Dense(units=d_model)
self.value_dense = tf.keras.layers.Dense(units=d_model)
self.dense = tf.keras.layers.Dense(units=d_model)
def split_heads(self, inputs, batch_size):
inputs = tf.reshape(
inputs, shape=(batch_size, -1, self.num_heads, self.depth))
return tf.transpose(inputs, perm=[0, 2, 1, 3])
def call(self, inputs):
query, key, value, mask = inputs['query'], inputs['key'], inputs[
'value'], inputs['mask']
batch_size = tf.shape(query)[0]
# linear layers
query = self.query_dense(query)
key = self.key_dense(key)
value = self.value_dense(value)
# split heads
query = self.split_heads(query, batch_size)
key = self.split_heads(key, batch_size)
value = self.split_heads(value, batch_size)
scaled_attention = scaled_dot_product_attention(query, key, value, mask)
scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])
concat_attention = tf.reshape(scaled_attention,
(batch_size, -1, self.d_model))
outputs = self.dense(concat_attention)
return outputs
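# Sinusoidal positional encoding (Vaswani et al., 2017):
#   PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
#   PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
# Note: this implementation concatenates all sines then all cosines along the
# feature axis instead of interleaving them -- a common, equally valid variant.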
class PositionalEncoding(tf.keras.layers.Layer):
def __init__(self, position, d_model):
super(PositionalEncoding, self).__init__()
self.pos_encoding = self.positional_encoding(position, d_model)
def get_angles(self, position, i, d_model):
angles = 1 / tf.pow(10000, (2 * (i // 2)) / tf.cast(d_model, tf.float32))
return position * angles
def positional_encoding(self, position, d_model):
angle_rads = self.get_angles(
position=tf.range(position, dtype=tf.float32)[:, tf.newaxis],
i=tf.range(d_model, dtype=tf.float32)[tf.newaxis, :],
d_model=d_model)
# apply sin to even index in the array
sines = tf.math.sin(angle_rads[:, 0::2])
# apply cos to odd index in the array
cosines = tf.math.cos(angle_rads[:, 1::2])
pos_encoding = tf.concat([sines, cosines], axis=-1)
pos_encoding = pos_encoding[tf.newaxis, ...]
return tf.cast(pos_encoding, tf.float32)
def call(self, inputs):
return inputs + self.pos_encoding[:, :tf.shape(inputs)[1], :]
# This lets the transformer know where there is real data and where the input is padded
def create_padding_mask(seq):
seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
# add extra dimensions to add the padding
# to the attention logits.
return seq[:, tf.newaxis, tf.newaxis, :] # (batch_size, 1, 1, seq_len)
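# Example: create_padding_mask(tf.constant([[7., 6., 0., 0.]])) returns a
# (1, 1, 1, 4) tensor [0., 0., 1., 1.]; the 1.0 entries mark padded positions,
# which the attention logits push towards -inf via `mask * -1e9`.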
def encoder_layer(units, d_model, num_heads, dropout,name="encoder_layer"):
inputs = tf.keras.Input(shape=(None,d_model ), name="inputs")
padding_mask = tf.keras.Input(shape=(1, 1, None), name="padding_mask")
attention = MultiHeadAttention(
d_model, num_heads, name="attention")({
'query': inputs,
'key': inputs,
'value': inputs,
'mask': padding_mask
})
attention = tf.keras.layers.Dropout(rate=dropout)(attention)
attention = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(inputs + attention)
outputs = tf.keras.layers.Dense(units=units, activation='relu')(attention)
outputs = tf.keras.layers.Dense(units=d_model)(outputs)
outputs = tf.keras.layers.Dropout(rate=dropout)(outputs)
outputs = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(attention + outputs)
return tf.keras.Model(
inputs=[inputs, padding_mask], outputs=outputs, name=name)
def encoder(time_steps,
num_layers,
units,
d_model,
num_heads,
dropout,
projection,
name="encoder"):
inputs = tf.keras.Input(shape=(None,d_model), name="inputs")
padding_mask = tf.keras.Input(shape=(1, 1, None), name="padding_mask")
if projection=='linear':
## We implement a linear projection based on Very Deep Self-Attention Networks for End-to-End Speech Recognition. Retrieved from https://arxiv.org/abs/1904.13377
projection=tf.keras.layers.Dense( d_model,use_bias=True, activation='linear')(inputs)
print('linear')
else:
projection=tf.identity(inputs)
print('none')
projection *= tf.math.sqrt(tf.cast(d_model, tf.float32))
projection = PositionalEncoding(time_steps, d_model)(projection)
outputs = tf.keras.layers.Dropout(rate=dropout)(projection)
for i in range(num_layers):
outputs = encoder_layer(
units=units,
d_model=d_model,
num_heads=num_heads,
dropout=dropout,
name="encoder_layer_{}".format(i),
)([outputs, padding_mask])
return tf.keras.Model(
inputs=[inputs, padding_mask], outputs=outputs, name=name)
def transformer(time_steps,
num_layers,
units,
d_model,
num_heads,
dropout,
output_size,
projection,
name="transformer"):
inputs = tf.keras.Input(shape=(None,d_model), name="inputs")
enc_padding_mask = tf.keras.layers.Lambda(
create_padding_mask, output_shape=(1, 1, None),
name='enc_padding_mask')(tf.dtypes.cast(
        # The input has shape (batch, length, d_model), but the mask is computed per
        # time step: summing each row gives one value per position, and a sum of 0
        # means that position was padding.
tf.math.reduce_sum(
inputs,
axis=2,
keepdims=False,
name=None
), tf.int32))
enc_outputs = encoder(
time_steps=time_steps,
num_layers=num_layers,
units=units,
d_model=d_model,
num_heads=num_heads,
dropout=dropout,
projection=projection,
name='encoder'
)(inputs=[inputs, enc_padding_mask])
    # Reshape so the encoder output can be fed to the fully connected classifier
outputs=tf.reshape(enc_outputs,(-1,time_steps*d_model))
#We predict our class
outputs = tf.keras.layers.Dense(units=output_size,use_bias=True,activation='softmax', name="outputs")(outputs)
return tf.keras.Model(inputs=[inputs], outputs=outputs, name='audio_class')
def extract_features(files):
    # Build the full path to the audio file
file_name = os.path.join(str(files.file))
    global file_counter
    if file_counter % 10 == 0:
        print(file_counter)
    file_counter += 1
# Loads the audio file as a floating point time series and assigns the default sample rate
# Sample rate is set to 22050 by default
X, sample_rate = librosa.load(file_name, res_type='kaiser_fast')
# Generate Mel-frequency cepstral coefficients (MFCCs) from a time series
#mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T,axis=0)
# Generates a Short-time Fourier transform (STFT) to use in the chroma_stft
#stft = np.abs(librosa.stft(X))
# Computes a chromagram from a waveform or power spectrogram.
#chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sample_rate).T,axis=0)
# Computes a mel-scaled spectrogram.
    mel = np.mean(librosa.feature.melspectrogram(y=X, sr=sample_rate).T, axis=0)
# Computes spectral contrast
#contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sample_rate).T,axis=0)
# Computes the tonal centroid features (tonnetz)
#tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(X),
#sr=sample_rate).T,axis=0)
    # We could also append each file's class as a label at the end (left commented out)
#label = files.label
return mel
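# With librosa's defaults (n_mels=128), `mel` is a 128-dimensional vector:
# the mel spectrogram averaged over time, giving one fixed-size feature
# vector per audio file regardless of its duration.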
startTime = datetime.now()
# Applying the function to the train data by accessing each row of the dataframe
features_label = df.apply(extract_features, axis=1)
print(datetime.now() - startTime)
# Saving the numpy array because it takes a long time to extract the features
np.save('features_label_libri', features_label)
# loading the features
features_label = np.load('features_label_libri.npy', allow_pickle=True)
features_label.shape
trial_features=[]
for i in range(0,len(features_label)):
a=[]
a.append(features_label[i])
#a.append(features_label[i][1])
trial_features.append(a)
xxx = np.array(trial_features)
xxx.shape
X = xxx
y = np.array(labels)
lb = LabelEncoder()
y = to_categorical(lb.fit_transform(y))
X.shape
y.shape
limit_1 = int(X.shape[0]*0.7)
limit_2 = int(X.shape[0]*0.85)
X_train = X[:limit_1]
Y_train = y[:limit_1]
X_val = X[limit_1:limit_2]
Y_val = y[limit_1:limit_2]
X_test = X[limit_2:]
Y_test = y[limit_2:]
# #We get our train and test set
# X_train,X_test, Y_train, Y_test =train_test_split(X,y, test_size=0.2, random_state=27)
projection=['linear','none']
accuracy=[]
proj_implemented=[]
for i in projection:
NUM_LAYERS = 2
D_MODEL = X.shape[2]
NUM_HEADS = 4
UNITS = 1024
DROPOUT = 0.1
TIME_STEPS= X.shape[1]
OUTPUT_SIZE=251
EPOCHS = 100
EXPERIMENTS=1
for j in range(EXPERIMENTS):
model = transformer(time_steps=TIME_STEPS,
num_layers=NUM_LAYERS,
units=UNITS,
d_model=D_MODEL,
num_heads=NUM_HEADS,
dropout=DROPOUT,
output_size=OUTPUT_SIZE,
projection=i)
#model.compile(optimizer=tf.keras.optimizers.Adam(0.000001), loss='categorical_crossentropy', metrics=['accuracy'])
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=100, verbose=1, mode='auto')
#history=model.fit(X_train,Y_train, epochs=EPOCHS, validation_data=(X_test, Y_test))
history = model.fit(X_train, Y_train, batch_size=64, epochs=100, validation_data=(X_val, Y_val),callbacks=[early_stop])
accuracy.append(sum(history.history['val_accuracy'])/len(history.history['val_accuracy']))
proj_implemented.append(i)
# Check out our train accuracy and validation accuracy over epochs.
train_accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
# Set figure size.
plt.figure(figsize=(12, 8))
# Generate line plot of training and validation accuracy over epochs.
plt.plot(train_accuracy, label='Training Accuracy', color='#185fad')
plt.plot(val_accuracy, label='Validation Accuracy', color='orange')
# Set title
plt.title('Training and Validation Accuracy by Epoch', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Accuracy', fontsize = 18)
plt.xticks(range(0,100,5), range(0,100,5))
plt.legend(fontsize = 18);
accuracy=pd.DataFrame(accuracy, columns=['accuracy'])
proj_implemented=pd.DataFrame(proj_implemented, columns=['projection'])
results=pd.concat([accuracy,proj_implemented],axis=1)
results.groupby('projection').mean()
y_prob = model.predict(X_test)
y_classes = y_prob.argmax(axis=-1)
res_list = y_classes.tolist()
# NOTE: leftover mapping from an earlier 4-speaker experiment; it does not
# match the 251 LibriSpeech speaker classes used above.
label_mapping = {0: 'Aayush', 1: 'Kanishk', 2: 'Kayan', 3: 'Rohit'}
# for i in range(len(res_list)):
# print("prediction ",i," ",label_mapping[res_list[i]])
model.evaluate(X_test,Y_test)
```
# Project: Part of Speech Tagging with Hidden Markov Models
---
### Introduction
Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.
In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more.

The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
<div class="alert alert-block alert-info">
**Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files.
</div>
<div class="alert alert-block alert-info">
**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
</div>
### The Road Ahead
You must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.
- [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus
- [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline
- [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline
- [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger
<div class="alert alert-block alert-warning">
**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
</div>
```
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers, tests
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
```
## Step 1: Read and preprocess the dataset
---
We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.
The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.
Example from the Brown corpus.
```
b100-38532
Perhaps ADV
it PRON
was VERB
right ADJ
; .
; .
b100-35577
...
```
```
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"
```
### The Dataset Interface
You can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.
```
Dataset-only Attributes:
training_set - reference to a Subset object containing the samples for training
testing_set - reference to a Subset object containing the samples for testing
Dataset & Subset Attributes:
sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus
keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus
vocab - an immutable collection of the unique words in the corpus
tagset - an immutable collection of the unique tags in the corpus
X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...)
Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...)
N - returns the number of distinct samples (individual words or tags) in the dataset
Methods:
stream() - returns a flat iterable over all (word, tag) pairs across all sentences in the corpus
__iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs
__len__() - returns the number of sentences in the dataset
```
For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:
```
subset.keys == {"s1", "s0"} # unordered
subset.vocab == {"See", "run", "ran", "Spot"} # unordered
subset.tagset == {"VERB", "NOUN"} # unordered
subset.X == (("Spot", "ran"), ("See", "Spot", "run")) # order matches .keys
subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) # order matches .keys
subset.N == 5 # there are a total of five observations over all sentences
len(subset) == 2 # because there are two sentences
```
<div class="alert alert-block alert-info">
**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data.
</div>
#### Sentences
`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`.
```
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))
```
<div class="alert alert-block alert-info">
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data.
</div>
#### Counting Unique Elements
You can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`.
```
print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples"
```
#### Accessing word and tag Sequences
The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset.
```
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(2):
print("Sentence {}:".format(i + 1), data.X[i])
print()
print("Labels {}:".format(i + 1), data.Y[i])
print()
```
#### Accessing (word, tag) Samples
The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus.
```
# use Dataset.stream() to iterate over (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
print("\t", pair)
if i > 5: break
```
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the observations in the training corpus. In the next several cells you will complete functions that compute several sets of frequency counts.
## Step 2: Build a Most Frequent Class tagger
---
Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus.
### IMPLEMENTATION: Pair Counts
Complete the function below that computes the joint frequency counts for two input sequences.
```
from collections import defaultdict
def pair_counts(sequences_A, sequences_B):
"""Return a dictionary keyed to each unique value in the first sequence list
that counts the number of occurrences of the corresponding value from the
second sequences list.
For example, if sequences_A is tags and sequences_B is the corresponding
words, then if 1244 sequences contain the word "time" tagged as a NOUN, then
you should return a dictionary such that pair_counts[NOUN][time] == 1244
"""
# TODO: Finish this function!
# Init dictionary
tags_words_count = defaultdict(lambda : defaultdict(int))
for i in range(len(sequences_B)):
for itemA, itemB in zip(sequences_A[i], sequences_B[i]):
tags_words_count[itemA][itemB] += 1
return tags_words_count
# Calculate C(t_i, w_i)
emission_counts = pair_counts(data.Y, data.X)
assert len(emission_counts) == 12, \
"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')
```
### IMPLEMENTATION: Most Frequent Class Tagger
Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.
The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably.
```
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple
FakeState = namedtuple("FakeState", "name")
class MFCTagger:
# NOTE: You should not need to modify this class or any of its methods
missing = FakeState(name="<MISSING>")
def __init__(self, table):
self.table = defaultdict(lambda: MFCTagger.missing)
self.table.update({word: FakeState(name=tag) for word, tag in table.items()})
def viterbi(self, seq):
"""This method simplifies predictions by matching the Pomegranate viterbi() interface"""
return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
word_counts = pair_counts(data.training_set.X, data.training_set.Y)
mfc_table = {word: max(subdict, key=subdict.get) for word, subdict in word_counts.items()} # TODO: YOUR CODE HERE
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')
```
### Making Predictions with a Model
The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger.
```
def replace_unknown(sequence):
"""Return a copy of the input sequence where each unknown word is replaced
by the literal string value 'nan'. Pomegranate will ignore these values
during computation.
"""
return [w if w in data.training_set.vocab else 'nan' for w in sequence]
def simplify_decoding(X, model):
"""X should be a 1-D sequence of observations for the model to predict"""
_, state_path = model.viterbi(replace_unknown(X))
return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions
```
### Example Decoding Sequences with MFC Tagger
```
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, mfc_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
```
### Evaluating Model Accuracy
The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus.
```
def accuracy(X, Y, model):
"""Calculate the prediction accuracy by using the model to decode each sequence
in the input X and comparing the prediction with the true labels in Y.
The X should be an array whose first dimension is the number of sentences to test,
and each element of the array should be an iterable of the words in the sequence.
The arrays X and Y should have the exact same shape.
X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
Y = [("VERB", "NOUN", "VERB"), ("VERB", "NOUN", "VERB", "ADV"), ...]
"""
correct = total_predictions = 0
for observations, actual_tags in zip(X, Y):
# simplify_decoding() may raise an exception if the model cannot decode a
# sequence (for example, a test sentence made up entirely of words that are
# out of vocabulary for the training set). Counting the whole sentence as
# wrong in that case makes this a conservative accuracy estimate.
try:
most_likely_tags = simplify_decoding(observations, model)
correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
except:
pass
total_predictions += len(observations)
return correct / total_predictions
```
#### Evaluate the accuracy of the MFC tagger
Run the next cell to evaluate the accuracy of the tagger on the training and test corpus.
```
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')
```
## Step 3: Build an HMM tagger
---
The HMM tagger has one hidden state for each possible tag, and is parameterized by two distributions: the emission probabilities, which give the conditional probability of observing a given **word** from each hidden state, and the transition probabilities, which give the conditional probability of moving between **tags** during the sequence.
We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).
The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:
$$t_1^n = \underset{t_1^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$
Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information.
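To make the formula concrete, the sketch below computes the unnormalized score the model assigns to one candidate tag sequence, including the start/end terms described below. It assumes the count tables you will build in the following sections (`tag_starts`, `tag_bigrams`, `tag_unigrams`, `tag_ends`) plus the `emission_counts` from Step 2, so it is illustrative only.
```
def sequence_score(words, tags):
    """Unnormalized score of one candidate tag sequence under the HMM."""
    n_sentences = sum(tag_starts.values())
    p = tag_starts.get(tags[0], 0) / n_sentences                     # P(t_1 | start)
    for prev, cur in zip(tags[:-1], tags[1:]):
        p *= tag_bigrams.get((prev, cur), 0) / tag_unigrams[prev]    # P(t_i | t_{i-1})
    for word, tag in zip(words, tags):
        p *= emission_counts[tag].get(word, 0) / tag_unigrams[tag]   # P(w_i | t_i)
    p *= tag_ends.get(tags[-1], 0) / tag_unigrams[tags[-1]]          # P(end | t_n)
    return p
```
The Viterbi algorithm finds the tag sequence that maximizes exactly this product, without enumerating every candidate.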
### IMPLEMENTATION: Unigram Counts
Complete the function below to estimate the occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)
$$P(tag_1) = \frac{C(tag_1)}{N}$$
```
def unigram_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequence list that
counts the number of occurrences of the value in the sequences list. The sequences
collection should be a 2-dimensional array.
For example, if the tag NOUN appears 275558 times over all the input sequences,
then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
"""
counter = defaultdict(int)
for i in range(len(sequences)):
for element in sequences[i]:
counter[element] += 1
return counter
# TODO: call unigram_counts with a list of tag sequences from the training set
tag_unigrams = unigram_counts(data.training_set.Y)
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>')
```
### IMPLEMENTATION: Bigram Counts
Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
```
def bigram_counts(sequences):
"""Return a dictionary keyed to each unique PAIR of values in the input sequences
list that counts the number of occurrences of each pair in the sequences list. The input
should be a 2-dimensional array.
For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
"""
counter = defaultdict(int)
for i in range(len(sequences)):
seq = sequences[i]
for element, next_element in zip(seq[:-1], seq[1:]):
counter[(element, next_element)] += 1
return counter
# TODO: call bigram_counts with a list of tag sequences from the training set
tag_bigrams = bigram_counts(data.training_set.Y)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')
```
### IMPLEMENTATION: Sequence Starting Counts
Complete the code below to count how often each tag occurs at the beginning of a sequence; these counts are used to estimate the starting probabilities $P(t|start)$.
```
def starting_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the beginning of
a sequence.
For example, if 8093 sequences start with NOUN, then you should return a
dictionary such that your_starting_counts[NOUN] == 8093
"""
counter = defaultdict(int)
for i in range(len(sequences)):
seq = sequences[i]
counter[seq[0]] += 1
return counter
# TODO: Calculate the count of each tag starting a sequence
tag_starts = starting_counts(data.training_set.Y)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting tag."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting tag."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')
```
### IMPLEMENTATION: Sequence Ending Counts
Complete the function below to count how often each tag occurs at the end of a sequence; these counts are used to estimate the ending probabilities $P(end|t)$.
```
def ending_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the end of
a sequence.
For example, if 18 sequences end with DET, then you should return a
dictionary such that your_ending_counts[DET] == 18
"""
counter = defaultdict(int)
for i in range(len(sequences)):
seq = sequences[i]
counter[seq[-1]] += 1
return counter
# TODO: Calculate the count of each tag ending a sequence
tag_ends = ending_counts(data.training_set.Y)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending tag."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending tag."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')
```
### IMPLEMENTATION: Basic HMM Tagger
Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.
- Add one state per tag
- The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$
- Add an edge from the starting state `basic_model.start` to each tag
- The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$
- Add an edge from each tag to the end state `basic_model.end`
- The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$
- Add an edge between _every_ pair of tags
- The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$
```
basic_model = HiddenMarkovModel(name="base-hmm-tagger")
# TODO: create states with emission probability distributions P(word | tag) and add to the model
# (Hint: you may need to loop & create/add new states)
tag_probabilities = defaultdict(dict)
for tag, subdict in emission_counts.items():
for word, value in subdict.items():
tag_probabilities[tag][word] = value / tag_unigrams[tag]
states = {}
for tag, prob in tag_probabilities.items():
states[tag] = State(DiscreteDistribution(prob), name=tag)
basic_model.add_states(list(states.values()))
# TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1)
# (Hint: you may need to loop & add transitions
total = sum(tag_starts.values())
for tag, value in tag_starts.items():
basic_model.add_transition(basic_model.start, states[tag], value / total)
# P(end|t) = C(t, end) / C(t), matching the formula in the instructions above
for tag, value in tag_ends.items():
basic_model.add_transition(states[tag], basic_model.end, value / tag_unigrams[tag])
for keys, value in tag_bigrams.items():
tag1, tag2 = keys
basic_model.add_transition(states[tag1], states[tag2], value / tag_unigrams[tag1])
# NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE
# finalize the model
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')
hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')
```
### Example Decoding Sequences with the HMM Tagger
```
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
```
## Finishing the project
---
<div class="alert alert-block alert-info">
**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.
</div>
```
!!jupyter nbconvert *.ipynb
```
## Step 4: [Optional] Improving model performance
---
There are additional enhancements that can be incorporated into your tagger to improve performance on larger tagsets, where the data sparsity problem is more significant. The sparsity problem arises because the same amount of data split over more tags means fewer samples for each tag, and more tags with zero observed occurrences. The techniques in this section are optional.
- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts)
Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values; see the sketch after this list.
- Backoff Smoothing
Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combating the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.
- Extending to Trigrams
HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two.
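The cell below is a minimal, illustrative sketch of all three ideas, reusing the count tables built earlier in the notebook. The pseudocount `k` and the interpolation weights `lambda_1`/`lambda_2` are arbitrary, untuned choices, and `trigram_counts` simply mirrors `bigram_counts`; none of this is a definitive implementation.
```
# Illustrative sketches only: k and the lambdas below are untuned choices
k = 0.1  # Laplace pseudocount

def laplace_emission_prob(tag, word):
    """P(word | tag) with additive smoothing over the training vocabulary."""
    V = len(data.training_set.vocab)
    return (emission_counts[tag].get(word, 0) + k) / (tag_unigrams[tag] + k * V)

lambda_1, lambda_2 = 0.3, 0.7  # interpolation weights (should sum to 1)

def interpolated_bigram_prob(tag1, tag2):
    """P(tag2 | tag1) linearly interpolated with the unigram P(tag2)."""
    N = sum(tag_unigrams.values())
    bigram = tag_bigrams.get((tag1, tag2), 0) / tag_unigrams[tag1]
    unigram = tag_unigrams[tag2] / N
    return lambda_2 * bigram + lambda_1 * unigram

def trigram_counts(sequences):
    """Count occurrences of each (t1, t2, t3) triple, mirroring bigram_counts."""
    counter = defaultdict(int)
    for seq in sequences:
        for triple in zip(seq[:-2], seq[1:-1], seq[2:]):
            counter[triple] += 1
    return counter
```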
### Obtain the Brown Corpus with a Larger Tagset
Run the code below to download a copy of the Brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the data to the format specified in Step 1, then you can reload it using all of the code above for comparison.
Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets.
```
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]
```
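For example, here is one possible sketch that writes the full-tagset sentences out in the Step 1 format; the output file names and sentence-key pattern are illustrative choices, not a prescribed convention.
```
# One possible way to export the full-tagset Brown corpus in the Step 1 format.
# File names and sentence keys below are illustrative choices.
tagged = training_corpus.tagged_sents()
with open("tags-full.txt", "w") as f:
    f.write("\n".join(sorted({tag for sent in tagged for _, tag in sent})))
with open("brown-full.txt", "w") as f:
    for i, sent in enumerate(tagged):
        f.write("b-full-{}\n".format(i))          # unique sentence identifier
        for word, tag in sent:
            f.write("{}\t{}\n".format(word, tag)) # tab-separated word/tag pair
        f.write("\n")                             # blank line between sentences
```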
### Set Data Path
```
from pathlib import Path
base_dir = Path("data")
train_dir = base_dir/Path("train")
validation_dir = base_dir/Path("validation")
test_dir = base_dir/Path("test")
```
### Image Transform Function
```
from torchvision import transforms
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=(.5, .5, .5), std=(.5, .5, .5))
])
```
### Load Training Data (x: features, y: labels)
```
import torch
from PIL import Image
x, y = [], []
for file_name in train_dir.glob("*.jpg"):
bounding_box_file = file_name.with_suffix('.txt')
with open(bounding_box_file) as file:
lines = file.readlines()
if len(lines) > 1:  # skip images annotated with more than one bounding box
continue
else:
line = lines[0].strip('\n')
(classes, cen_x, cen_y, box_w, box_h) = list(map(float, line.split(' ')))  # YOLO-style label: class, center-x, center-y, width, height, normalized to [0, 1]
torch_data = torch.FloatTensor([cen_x, cen_y, box_w, box_h])
y.append(torch_data)
img = Image.open(str(file_name)).convert('RGB')
img = transform(img)
x.append(img)
```
### Put Training Data into Torch Loader
```
import torch.utils.data as Data
tensor_x = torch.stack(x)
tensor_y = torch.stack(y)
torch_dataset = Data.TensorDataset(tensor_x, tensor_y)
loader = Data.DataLoader(dataset=torch_dataset, batch_size=32, shuffle=True, num_workers=2)
```
### Load Pretrained ResNet18 Model
```
import torchvision
from torch import nn
model = torchvision.models.resnet18(pretrained=True)
fc_in_size = model.fc.in_features
model.fc = nn.Linear(fc_in_size, 4)
```
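Optionally, a common transfer-learning choice (not required by this notebook) is to freeze the pretrained backbone so that only the new 4-output regression head is trained:
```
# Optional: freeze the pretrained backbone so only the new head is trained
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False
```
With the backbone frozen, training is faster and less prone to overfitting on a small dataset, at some possible cost in accuracy.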
### Parameters
```
EPOCH = 10
LR = 1e-3
```
### Loss Function & Optimizer
```
loss_func = nn.SmoothL1Loss()
opt = torch.optim.Adam(model.parameters(), lr=LR)
```
### Training
```
for epoch in range(EPOCH):
for step, (batch_x, batch_y) in enumerate(loader):
# (optionally move batch_x and batch_y to a GPU here with .to(device))
output = model(batch_x)
loss = loss_func(output, batch_y)
opt.zero_grad()
loss.backward()
opt.step()
if(step % 5 == 0):
print("Epoch {} | Step {} | Loss {}".format(epoch, step, loss))
```
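To quantify performance beyond the loss value, you could compute the IoU (intersection over union) between predicted and ground-truth boxes. The helper below is a minimal sketch (the function name and center-format conversion are our own choices, matching the label format used above):
```
def iou(box_a, box_b):
    """IoU of two boxes given as (center-x, center-y, width, height)."""
    # convert center format to corner coordinates
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```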
### Show Some of the Predictions
```
%matplotlib inline
import cv2
from matplotlib import pyplot as plt
import numpy as np
model = model.cpu().eval()  # eval mode so BatchNorm layers use running statistics
for batch_x, batch_y in loader:
predict = model(batch_x)
for x, pred, y in zip(batch_x, predict, batch_y):
(pos_x, pos_y, box_w, box_h) = pred.detach()  # detach from the graph before drawing
pos_x *= 224
pos_y *= 224
box_w *= 224
box_h *= 224
image = transforms.ToPILImage()(x)
img = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)
img = cv2.rectangle(img, (int(pos_x - box_w/2), int(pos_y - box_h/2)), (int(pos_x + box_w/2), int(pos_y + box_h/2)), (255, 0, 0), 3)  # cv2 needs integer pixel coordinates
plt.imshow(img)
plt.show()
break
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 2: Learning Hyperparameters
**Week 1, Day 2: Linear Deep Learning**
**By Neuromatch Academy**
__Content creators:__ Saeed Salehi, Andrew Saxe
__Content reviewers:__ Polina Turishcheva, Antoine De Comite, Kelson Shilling-Scrivo
__Content editors:__ Anoop Kulkarni
__Production editors:__ Khalid Almubarak, Spiros Chavlis
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Tutorial Objectives
* Training landscape
* The effect of depth
* Choosing a learning rate
* Initialization matters
```
# @title Tutorial slides
# @markdown These are the slides for the videos in the tutorial
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/sne2m/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
---
# Setup
This is a GPU-free tutorial!
```
# @title Install dependencies
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# Imports
import time
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# @title Figure settings
from ipywidgets import interact, IntSlider, FloatSlider, fixed
from ipywidgets import HBox, interactive_output, ToggleButton, Layout
from mpl_toolkits.axes_grid1 import make_axes_locatable
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Plotting functions
def plot_x_y_(x_t_, y_t_, x_ev_, y_ev_, loss_log_, weight_log_):
"""
"""
plt.figure(figsize=(12, 4))
plt.subplot(1, 3, 1)
plt.scatter(x_t_, y_t_, c='r', label='training data')
plt.plot(x_ev_, y_ev_, c='b', label='test results', linewidth=2)
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.subplot(1, 3, 2)
plt.plot(loss_log_, c='r')
plt.xlabel('epochs')
plt.ylabel('mean squared error')
plt.subplot(1, 3, 3)
plt.plot(weight_log_)
plt.xlabel('epochs')
plt.ylabel('weights')
plt.show()
def plot_vector_field(what, init_weights=None):
"""
"""
n_epochs=40
lr=0.15
x_pos = np.linspace(2.0, 0.5, 100, endpoint=True)
y_pos = 1. / x_pos
xx, yy = np.mgrid[-1.9:2.0:0.2, -1.9:2.0:0.2]
zz = np.empty_like(xx)
x, y = xx[:, 0], yy[0]
x_temp, y_temp = gen_samples(10, 1.0, 0.0)
cmap = matplotlib.cm.plasma
plt.figure(figsize=(8, 7))
ax = plt.gca()
if what == 'all' or what == 'vectors':
for i, a in enumerate(x):
for j, b in enumerate(y):
temp_model = ShallowNarrowLNN([a, b])
da, db = temp_model.dloss_dw(x_temp, y_temp)
zz[i, j] = temp_model.loss(temp_model.forward(x_temp), y_temp)
scale = min(40 * np.sqrt(da**2 + db**2), 50)
ax.quiver(a, b, - da, - db, scale=scale, color=cmap(np.sqrt(da**2 + db**2)))
if what == 'all' or what == 'trajectory':
if init_weights is None:
for init_weights in [[0.5, -0.5], [0.55, -0.45], [-1.8, 1.7]]:
temp_model = ShallowNarrowLNN(init_weights)
_, temp_records = temp_model.train(x_temp, y_temp, lr, n_epochs)
ax.scatter(temp_records[:, 0], temp_records[:, 1],
c=np.arange(len(temp_records)), cmap='Greys')
ax.scatter(temp_records[0, 0], temp_records[0, 1], c='blue', zorder=9)
ax.scatter(temp_records[-1, 0], temp_records[-1, 1], c='red', marker='X', s=100, zorder=9)
else:
temp_model = ShallowNarrowLNN(init_weights)
_, temp_records = temp_model.train(x_temp, y_temp, lr, n_epochs)
ax.scatter(temp_records[:, 0], temp_records[:, 1],
c=np.arange(len(temp_records)), cmap='Greys')
ax.scatter(temp_records[0, 0], temp_records[0, 1], c='blue', zorder=9)
ax.scatter(temp_records[-1, 0], temp_records[-1, 1], c='red', marker='X', s=100, zorder=9)
if what == 'all' or what == 'loss':
contplt = ax.contourf(x, y, np.log(zz+0.001), zorder=-1, cmap='coolwarm', levels=100)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(contplt, cax=cax)
cbar.set_label('log (Loss)')
ax.set_xlabel("$w_1$")
ax.set_ylabel("$w_2$")
ax.set_xlim(-1.9, 1.9)
ax.set_ylim(-1.9, 1.9)
plt.show()
def plot_loss_landscape():
"""
"""
x_temp, y_temp = gen_samples(10, 1.0, 0.0)
xx, yy = np.mgrid[-1.9:2.0:0.2, -1.9:2.0:0.2]
zz = np.empty_like(xx)
x, y = xx[:, 0], yy[0]
for i, a in enumerate(x):
for j, b in enumerate(y):
temp_model = ShallowNarrowLNN([a, b])
zz[i, j] = temp_model.loss(temp_model.forward(x_temp), y_temp)
temp_model = ShallowNarrowLNN([-1.8, 1.7])
loss_rec_1, w_rec_1 = temp_model.train(x_temp, y_temp, 0.02, 240)
temp_model = ShallowNarrowLNN([1.5, -1.5])
loss_rec_2, w_rec_2 = temp_model.train(x_temp, y_temp, 0.02, 240)
plt.figure(figsize=(12, 8))
ax = plt.subplot(1, 1, 1, projection='3d')
ax.plot_surface(xx, yy, np.log(zz+0.5), cmap='coolwarm', alpha=0.5)
ax.scatter3D(w_rec_1[:, 0], w_rec_1[:, 1], np.log(loss_rec_1+0.5),
c='k', s=50, zorder=9)
ax.scatter3D(w_rec_2[:, 0], w_rec_2[:, 1], np.log(loss_rec_2+0.5),
c='k', s=50, zorder=9)
plt.axis("off")
ax.view_init(45, 260)
plt.show()
def depth_widget(depth):
if depth == 0:
depth_lr_init_interplay(depth, 0.02, 0.9)
else:
depth_lr_init_interplay(depth, 0.01, 0.9)
def lr_widget(lr):
depth_lr_init_interplay(50, lr, 0.9)
def depth_lr_interplay(depth, lr):
depth_lr_init_interplay(depth, lr, 0.9)
def depth_lr_init_interplay(depth, lr, init_weights):
n_epochs = 600
x_train, y_train = gen_samples(100, 2.0, 0.1)
model = DeepNarrowLNN(np.full((1, depth+1), init_weights))
plt.figure(figsize=(10, 5))
plt.plot(model.train(x_train, y_train, lr, n_epochs),
linewidth=3.0, c='m')
plt.title("Training a {}-layer LNN with"
" $\eta=${} initialized with $w_i=${}".format(depth, lr, init_weights), pad=15)
plt.yscale('log')
plt.xlabel('epochs')
plt.ylabel('Log mean squared error')
plt.ylim(0.001, 1.0)
plt.show()
def plot_init_effect():
depth = 15
n_epochs = 250
lr = 0.02
x_train, y_train = gen_samples(100, 2.0, 0.1)
plt.figure(figsize=(12, 6))
for init_w in np.arange(0.7, 1.09, 0.05):
model = DeepNarrowLNN(np.full((1, depth), init_w))
plt.plot(model.train(x_train, y_train, lr, n_epochs),
linewidth=3.0, label="initial weights {:.2f}".format(init_w))
plt.title("Training a {}-layer narrow LNN with $\eta=${}".format(depth, lr), pad=15)
plt.yscale('log')
plt.xlabel('epochs')
plt.ylabel('Log mean squared error')
plt.legend(loc='lower left', ncol=4)
plt.ylim(0.001, 1.0)
plt.show()
class InterPlay:
def __init__(self):
self.lr = [None]
self.depth = [None]
self.success = [None]
self.min_depth, self.max_depth = 5, 65
self.depth_list = np.arange(10, 61, 10)
self.i_depth = 0
self.min_lr, self.max_lr = 0.001, 0.105
self.n_epochs = 600
self.x_train, self.y_train = gen_samples(100, 2.0, 0.1)
self.converged = False
self.button = None
self.slider = None
def train(self, lr, update=False, init_weights=0.9):
if update and self.converged and self.i_depth < len(self.depth_list):
depth = self.depth_list[self.i_depth]
self.plot(depth, lr)
self.i_depth += 1
self.lr.append(None)
self.depth.append(None)
self.success.append(None)
self.converged = False
self.slider.value = 0.005
if self.i_depth < len(self.depth_list):
self.button.value = False
self.button.description = 'Explore!'
self.button.disabled = True
self.button.button_style = 'danger'
else:
self.button.value = False
self.button.button_style = ''
self.button.disabled = True
self.button.description = 'Done!'
time.sleep(1.0)
elif self.i_depth < len(self.depth_list):
depth = self.depth_list[self.i_depth]
# assert self.min_depth <= depth <= self.max_depth
assert self.min_lr <= lr <= self.max_lr
self.converged = False
model = DeepNarrowLNN(np.full((1, depth), init_weights))
self.losses = np.array(model.train(self.x_train, self.y_train, lr, self.n_epochs))
if np.any(self.losses < 1e-2):
success = np.argwhere(self.losses < 1e-2)[0][0]
if np.all((self.losses[success:] < 1e-2)):
self.converged = True
self.success[-1] = success
self.lr[-1] = lr
self.depth[-1] = depth
self.button.disabled = False
self.button.button_style = 'success'
self.button.description = 'Register!'
else:
self.button.disabled = True
self.button.button_style = 'danger'
self.button.description = 'Explore!'
else:
self.button.disabled = True
self.button.button_style = 'danger'
self.button.description = 'Explore!'
self.plot(depth, lr)
def plot(self, depth, lr):
fig = plt.figure(constrained_layout=False, figsize=(10, 8))
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, :])
ax2 = fig.add_subplot(gs[1, 0])
ax3 = fig.add_subplot(gs[1, 1])
ax1.plot(self.losses, linewidth=3.0, c='m')
ax1.set_title("Training a {}-layer LNN with"
" $\eta=${}".format(depth, lr), pad=15, fontsize=16)
ax1.set_yscale('log')
ax1.set_xlabel('epochs')
ax1.set_ylabel('Log mean squared error')
ax1.set_ylim(0.001, 1.0)
ax2.set_xlim(self.min_depth, self.max_depth)
ax2.set_ylim(-10, self.n_epochs)
ax2.set_xlabel('Depth')
ax2.set_ylabel('Learning time (Epochs)')
ax2.set_title("Learning time vs depth", fontsize=14)
ax2.scatter(np.array(self.depth), np.array(self.success), c='r')
# ax3.set_yscale('log')
ax3.set_xlim(self.min_depth, self.max_depth)
ax3.set_ylim(self.min_lr, self.max_lr)
ax3.set_xlabel('Depth')
ax3.set_ylabel('Optimal learning rate')
ax3.set_title("Empirically optimal $\eta$ vs depth", fontsize=14)
ax3.scatter(np.array(self.depth), np.array(self.lr), c='r')
plt.show()
# @title Helper functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D2_T2','https://portal.neuromatchacademy.org/api/redirect/to/9c55f6cb-cdf9-4429-ac1c-ec44fe64c303')
def gen_samples(n, a, sigma):
"""
Generates `n` samples with the linear relation `y = a * x + noise(sigma)`.
Args:
n : int
a : float
sigma : float
Returns:
x : np.array
y : np.array
"""
assert n > 0
assert sigma >= 0
if sigma > 0:
x = np.random.rand(n)
noise = np.random.normal(scale=sigma, size=(n))
y = a * x + noise
else:
x = np.linspace(0.0, 1.0, n, endpoint=True)
y = a * x
return x, y
class ShallowNarrowLNN:
"""
Shallow and narrow (one neuron per layer) linear neural network
"""
def __init__(self, init_ws):
"""
init_ws: initial weights as a list
"""
assert isinstance(init_ws, list)
assert len(init_ws) == 2
self.w1 = init_ws[0]
self.w2 = init_ws[1]
def forward(self, x):
"""
The forward pass through the network: y = x * w1 * w2
"""
y = x * self.w1 * self.w2
return y
def loss(self, y_p, y_t):
"""
Mean squared error (L2 loss)
"""
assert y_p.shape == y_t.shape
mse = ((y_t - y_p)**2).mean()
return mse
def dloss_dw(self, x, y_t):
"""
partial derivative of loss with respect to weights
Args:
x : np.array
y_t : np.array
"""
assert x.shape == y_t.shape
Error = y_t - self.w1 * self.w2 * x
dloss_dw1 = - (2 * self.w2 * x * Error).mean()
dloss_dw2 = - (2 * self.w1 * x * Error).mean()
return dloss_dw1, dloss_dw2
def train(self, x, y_t, eta, n_ep):
"""
Gradient descent algorithm
Args:
x : np.array
y_t : np.array
eta: float
n_ep : int
"""
assert x.shape == y_t.shape
loss_records = np.empty(n_ep) # pre allocation of loss records
weight_records = np.empty((n_ep, 2)) # pre allocation of weight records
for i in range(n_ep):
y_p = self.forward(x)
loss_records[i] = self.loss(y_p, y_t)
dloss_dw1, dloss_dw2 = self.dloss_dw(x, y_t)
self.w1 -= eta * dloss_dw1
self.w2 -= eta * dloss_dw2
weight_records[i] = [self.w1, self.w2]
return loss_records, weight_records
class DeepNarrowLNN:
"""
Deep but thin (one neuron per layer) linear neural network
"""
def __init__(self, init_ws):
"""
init_ws: initial weights as a numpy array
"""
self.n = init_ws.size
self.W = init_ws.reshape(1, -1)
def forward(self, x):
"""
x : np.array
input features
"""
y = np.prod(self.W) * x
return y
def loss(self, y_t, y_p):
"""
mean squared error (L2 loss)
Args:
y_t : np.array
y_p : np.array
"""
assert y_p.shape == y_t.shape
mse = ((y_t - y_p)**2 / 2).mean()
return mse
def dloss_dw(self, x, y_t, y_p):
"""
analytical gradient of weights
Args:
x : np.array
y_t : np.array
y_p : np.array
"""
E = y_t - y_p # = y_t - x * np.prod(self.W)
Ex = np.multiply(x, E).mean()
Wp = np.prod(self.W) / (self.W + 1e-9)
dW = - Ex * Wp
return dW
def train(self, x, y_t, eta, n_epochs):
"""
training using gradient descent
Args:
x : np.array
y_t : np.array
eta: float
n_epochs : int
"""
loss_records = np.empty(n_epochs)
loss_records[:] = np.nan
for i in range(n_epochs):
y_p = self.forward(x)
loss_records[i] = self.loss(y_t, y_p).mean()
dloss_dw = self.dloss_dw(x, y_t, y_p)
if np.isnan(dloss_dw).any() or np.isinf(dloss_dw).any():
return loss_records
self.W -= eta * dloss_dw
return loss_records
#@title Set random seed
#@markdown Executing `set_seed(seed=seed)` you are setting the seed
# for DL it's critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
#@title Set device (GPU or CPU). Execute `set_device()`
# especially if torch modules used.
# inform the user if the notebook uses GPU or CPU.
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
SEED = 2021
set_seed(seed=SEED)
DEVICE = set_device()
```
---
# Section 1: A Shallow Narrow Linear Neural Network
*Time estimate: ~30 mins*
```
# @title Video 1: Shallow Narrow Linear Net
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1F44y117ot", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"6e5JIYsqVvU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('video 1: Shallow Narrow Linear Net')
display(out)
```
## Section 1.1: A Shallow Narrow Linear Net
To better understand the behavior of neural network training with gradient descent, we start with the incredibly simple case of a shallow narrow linear neural net, since state-of-the-art models are impossible to dissect and comprehend with our current mathematical tools.
The model we use has one hidden layer, with only one neuron, and two weights. We consider the squared error (or L2 loss) as the cost function. As you may have already guessed, we can visualize the model as a neural network:
<center><img src="https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D2_LinearDeepLearning/static/shallow_narrow_nn.png" width="400"/></center>
<br/>
or by its computation graph:
<center><img src="https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D2_LinearDeepLearning/static/shallow_narrow.png" alt="Shallow Narrow Graph" width="400"/></center>
or on a rare occasion, even as a reasonably compact mapping:
$$ loss = (y - w_1 \cdot w_2 \cdot x)^2 $$
<br/>
Implementing a neural network from scratch without using any Automatic Differentiation tool is rarely necessary. The following two exercises are therefore **Bonus** (optional) exercises. Please skip them if you are short on time and continue to Section 1.2.
### Analytical Exercise 1.1: Loss Gradients (Optional)
Once again, we ask you to calculate the network gradients analytically, since you will need them for the next exercise. We understand how annoying this is.
$\dfrac{\partial{loss}}{\partial{w_1}} = ?$
$\dfrac{\partial{loss}}{\partial{w_2}} = ?$
<br/>
---
#### Solution
$\dfrac{\partial{loss}}{\partial{w_1}} = -2 \cdot w_2 \cdot x \cdot (y - w_1 \cdot w_2 \cdot x)$
$\dfrac{\partial{loss}}{\partial{w_2}} = -2 \cdot w_1 \cdot x \cdot (y - w_1 \cdot w_2 \cdot x)$
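If you want to sanity-check these derivatives, a quick finite-difference comparison works well; in the sketch below the point `(w1, w2)` and the sample `(x, y)` are arbitrary test values:
```python
# Finite-difference check of the analytical gradient at an arbitrary point
w1, w2, x, y = 1.4, -1.6, 0.5, 1.0   # arbitrary test values
loss = lambda a, b: (y - a * b * x) ** 2
eps = 1e-6

analytic_dw1 = -2 * w2 * x * (y - w1 * w2 * x)
numeric_dw1 = (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps)
print(analytic_dw1, numeric_dw1)     # the two values should agree closely
```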
---
### Coding Exercise 1.1: Implement simple narrow LNN (Optional)
Next, we ask you to implement the `forward` pass for our model from scratch without using PyTorch.
Also, although our model gets a single input feature and outputs a single prediction, we could calculate the loss and perform training for multiple samples at once. This is the common practice for neural networks, since computers are incredibly fast doing matrix (or tensor) operations on batches of data, rather than processing samples one at a time through `for` loops. Therefore, for the `loss` function, please implement the **mean** squared error (MSE), and adjust your analytical gradients accordingly when implementing the `dloss_dw` function.
Finally, complete the `train` function for the gradient descent algorithm:
\begin{equation}
\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla loss (\mathbf{w}^{(t)})
\end{equation}
```
class ShallowNarrowExercise:
"""Shallow and narrow (one neuron per layer) linear neural network
"""
def __init__(self, init_weights):
"""
Args:
init_weights (list): initial weights
"""
assert isinstance(init_weights, (list, np.ndarray, tuple))
assert len(init_weights) == 2
self.w1 = init_weights[0]
self.w2 = init_weights[1]
def forward(self, x):
"""The forward pass through netwrok y = x * w1 * w2
Args:
x (np.ndarray): features (inputs) to neural net
returns:
(np.ndarray): neural network output (prediction)
"""
#################################################
## Implement the forward pass to calculate prediction
## Note that prediction is not the loss
# Complete the function and remove or comment the line below
raise NotImplementedError("Forward Pass `forward`")
#################################################
y = ...
return y
def dloss_dw(self, x, y_true):
"""Gradient of loss with respect to weights
Args:
x (np.ndarray): features (inputs) to neural net
y_true (np.ndarray): true labels
returns:
(float): mean gradient of loss with respect to w1
(float): mean gradient of loss with respect to w2
"""
assert x.shape == y_true.shape
#################################################
## Implement the gradient computation function
# Complete the function and remove or comment the line below
raise NotImplementedError("Gradient of Loss `dloss_dw`")
#################################################
dloss_dw1 = ...
dloss_dw2 = ...
return dloss_dw1, dloss_dw2
def train(self, x, y_true, lr, n_ep):
"""Training with Gradient descent algorithm
Args:
x (np.ndarray): features (inputs) to neural net
y_true (np.ndarray): true labels
lr (float): learning rate
n_ep (int): number of epochs (training iterations)
returns:
(list): training loss records
(list): training weight records (evolution of weights)
"""
assert x.shape == y_true.shape
loss_records = np.empty(n_ep) # pre allocation of loss records
weight_records = np.empty((n_ep, 2)) # pre allocation of weight records
for i in range(n_ep):
y_prediction = self.forward(x)
loss_records[i] = loss(y_prediction, y_true)
dloss_dw1, dloss_dw2 = self.dloss_dw(x, y_true)
#################################################
## Implement the gradient descent step
# Complete the function and remove or comment the line below
raise NotImplementedError("Training loop `train`")
#################################################
self.w1 -= ...
self.w2 -= ...
weight_records[i] = [self.w1, self.w2]
return loss_records, weight_records
def loss(y_prediction, y_true):
"""Mean squared error
Args:
y_prediction (np.ndarray): model output (prediction)
y_true (np.ndarray): true label
returns:
(np.ndarray): mean squared error loss
"""
assert y_prediction.shape == y_true.shape
#################################################
## Implement the MEAN squared error
# Complete the function and remove or comment the line below
raise NotImplementedError("Loss function `loss`")
#################################################
mse = ...
return mse
#add event to airtable
atform.add_event('Coding Exercise 1.1: Implement simple narrow LNN')
set_seed(seed=SEED)
n_epochs = 211
learning_rate = 0.02
initial_weights = [1.4, -1.6]
x_train, y_train = gen_samples(n=73, a=2.0, sigma=0.2)
x_eval = np.linspace(0.0, 1.0, 37, endpoint=True)
## Uncomment to run
# sn_model = ShallowNarrowExercise(initial_weights)
# loss_log, weight_log = sn_model.train(x_train, y_train, learning_rate, n_epochs)
# y_eval = sn_model.forward(x_eval)
# plot_x_y_(x_train, y_train, x_eval, y_eval, loss_log, weight_log)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial2_Solution_46492cd6.py)
*Example output:*
<img alt='Solution hint' align='left' width=1696.0 height=544.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D2_LinearDeepLearning/static/W1D2_Tutorial2_Solution_46492cd6_1.png>
## Section 1.2: Learning landscapes
```
# @title Video 2: Training Landscape
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Nv411J71X", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"k28bnNAcOEg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 2: Training Landscape')
display(out)
```
As you may have already asked yourself, we can analytically find $w_1$ and $w_2$ without using gradient descent:
\begin{equation}
w_1 \cdot w_2 = \dfrac{y}{x}
\end{equation}
In fact, we can plot the gradients, the loss function and all the possible solutions in one figure. In this example, we use the $y = 1x$ mapping:
**Blue ribbon**: shows all possible solutions: $~ w_1 w_2 = \dfrac{y}{x} = \dfrac{x}{x} = 1 \Rightarrow w_1 = \dfrac{1}{w_2}$
**Contour background**: Shows the loss values, red being higher loss
**Vector field (arrows)**: shows the gradient vector field. The larger yellow arrows show larger gradients, which correspond to bigger steps by gradient descent.
**Scatter circles**: the trajectory (evolution) of weights during training for three different initializations, with blue dots marking the start of training and red crosses ( **x** ) marking the end of training. You can also try your own initializations (keep the initial values between `-2.0` and `2.0`) as shown here:
```python
plot_vector_field('all', [1.0, -1.0])
```
Finally, if the plot is too crowded, feel free to pass one of the following strings as argument:
```python
plot_vector_field('vectors') # for vector field
plot_vector_field('trajectory') # for training trajectory
plot_vector_field('loss') # for loss contour
```
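You can also verify the analytical solution numerically. The quick check below reuses the helpers defined in Setup; the initialization and hyperparameters mirror the 3-D landscape plot.
```python
# After training, the learned weights should satisfy w1 * w2 ≈ y / x = 1
x_chk, y_chk = gen_samples(10, 1.0, 0.0)
model_chk = ShallowNarrowLNN([-1.8, 1.7])
model_chk.train(x_chk, y_chk, 0.02, 240)
print(model_chk.w1 * model_chk.w2)  # expected to be close to 1.0
```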
**Think!**
Explore the next two plots. Try different initial values. Can you find the saddle point? Why does training slow down near the minima?
```
plot_vector_field('all')
```
Here, we also visualize the loss landscape in a 3-D plot, with two training trajectories for different initial conditions.
Note: the trajectories from the 3D plot and the previous plot are independent and different.
```
plot_loss_landscape()
# @title Student Response
from ipywidgets import widgets
text=widgets.Textarea(
value='Type your answer here and push Submit!',
placeholder='Type something',
description='',
disabled=False
)
button = widgets.Button(description="Submit!")
display(text,button)
def on_button_clicked(b):
atform.add_answer('q1', text.value)
print("Submission successful!")
button.on_click(on_button_clicked)
# @title Video 3: Training Landscape - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1py4y1j7cv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0EcUGgxOdkI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 3: Training Landscape - Discussion')
display(out)
```
---
# Section 2: Depth, Learning rate, and initialization
*Time estimate: ~45 mins*
Successful deep learning models are often developed by a team of very clever people, spending many many hours "tuning" learning hyperparameters, and finding effective initializations. In this section, we look at three basic (but often not simple) hyperparameters: depth, learning rate, and initialization.
## Section 2.1: The effect of depth
```
# @title Video 4: Effect of Depth
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1z341167di", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Ii_As9cRR5Q", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 4: Effect of Depth')
display(out)
```
Why might depth be useful? What makes a network or learning system "deep"? The reality is that shallow neural nets are often incapable of learning complex functions due to data limitations. On the other hand, depth seems like magic. Depth can change the functions a network can represent, the way a network learns, and how a network generalizes to unseen data.
So let's look at the challenges that depth poses in training a neural network. Imagine a single input, single output linear network with 50 hidden layers and only one neuron per layer (i.e. a narrow deep neural network). The output of the network is easy to calculate:
$$ prediction = x \cdot w_1 \cdot w_2 \cdots w_{50} $$
If the initial value for all the weights is $w_i = 2$, the prediction for $x=1$ would be **exploding**: $y_p = 2^{50} \approx 1.1259 \times 10^{15}$. On the other hand, for weights initialized to $w_i = 0.5$, the output is **vanishing**: $y_p = 0.5^{50} \approx 8.88 \times 10^{-16}$. Similarly, if we recall the chain rule, as the graph gets deeper, the number of elements in the chain multiplication increases, which could lead to exploding or vanishing gradients. To avoid such numerical vulnerabilities that could impair our training algorithm, we need to understand the effect of depth.
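You can see these numbers directly with a few lines of Python:
```python
# Depth-50 products computed directly: the same input gives wildly
# different outputs depending only on the initial weight value
import numpy as np

x = 1.0
for w_init in (2.0, 1.0, 0.5):
    prediction = x * np.prod(np.full(50, w_init))
    print(f"w_i = {w_init}: prediction = {prediction:.3e}")
```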
### Interactive Demo 2.1: Depth widget
Use the widget to explore the impact of depth on the training curve (loss evolution) of a deep but narrow neural network.
**Think!**
Which networks trained the fastest? Did all networks eventually "work" (converge)? What is the shape of their learning trajectory?
```
# @markdown Make sure you execute this cell to enable the widget!
_ = interact(depth_widget,
depth = IntSlider(min=0, max=51,
step=5, value=0,
continuous_update=False))
# @title Video 5: Effect of Depth - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Qq4y1H7uk", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"EqSDkwmSruk", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 5: Effect of Depth - Discussion')
display(out)
```
## Section 2.2: Choosing a learning rate
The learning rate is a common hyperparameter for most optimization algorithms. How should we set it? Sometimes the only option is to try all the possibilities, but sometimes knowing some key trade-offs will help guide our search for good hyperparameters.
```
# @title Video 6: Learning Rate
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV11f4y157MT", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"w_GrCVM-_Qo", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 6: Learning Rate')
display(out)
```
### Interactive Demo 2.2: Learning rate widget
Here, we fix the network depth to 50 layers. Use the widget to explore the impact of learning rate $\eta$ on the training curve (loss evolution) of a deep but narrow neural network.
**Think!**
Can we say that larger learning rates always lead to faster learning? Why not?
```
# @markdown Make sure you execute this cell to enable the widget!
_ = interact(lr_widget,
lr = FloatSlider(min=0.005, max=0.045, step=0.005, value=0.005,
continuous_update=False, readout_format='.3f',
description='eta'))
# @title Video 7: Learning Rate - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Aq4y1p7bh", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cmS0yqImz2E", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 7: Learning Rate - Discussion')
display(out)
```
## Section 2.3: Depth vs Learning Rate
```
# @title Video 8: Depth and Learning Rate
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1V44y1177e", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"J30phrux_3k", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 8: Depth and Learning Rate')
display(out)
```
### Interactive Demo 2.3: Depth and Learning-Rate
**Important instruction**
The exercise starts with 10 hidden layers. Your task is to find the learning rate that delivers fast but robust convergence (learning). When you are confident about the learning rate, you can **Register** the optimal learning rate for the given depth. Once you press Register, a deeper model is instantiated so that you can find the next optimal learning rate. The Register button turns green only when the training converges; green does not imply the fastest convergence. Finally, be patient :) the widgets are slow.
**Think!**
Can you explain the relationship between the depth and optimal learning rate?
```
# @markdown Make sure you execute this cell to enable the widget!
intpl_obj = InterPlay()
intpl_obj.slider = FloatSlider(min=0.005, max=0.105, step=0.005, value=0.005,
layout=Layout(width='500px'),
continuous_update=False,
readout_format='.3f',
description='eta')
intpl_obj.button = ToggleButton(value=intpl_obj.converged, description='Register')
widgets_ui = HBox([intpl_obj.slider, intpl_obj.button])
widgets_out = interactive_output(intpl_obj.train,
{'lr': intpl_obj.slider,
'update': intpl_obj.button,
'init_weights': fixed(0.9)})
display(widgets_ui, widgets_out)
# @title Video 9: Depth and Learning Rate - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV15q4y1p7Uq", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"7Fl8vH7cgco", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 9: Depth and Learning Rate - Discussion')
display(out)
```
## Section 2.4: Why initialization is important
```
# @title Video 10: Initialization Matters
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1UL411J7vu", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"KmqCz95AMzY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 10: Initialization Matters')
display(out)
```
We’ve seen, even in the simplest of cases, that depth can slow learning. Why? By the chain rule, the gradient at each layer is multiplied by that layer's current weight, so the product across many layers can vanish or explode. Weight initialization is therefore a fundamentally important hyperparameter.
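To see this numerically, here is a tiny illustrative sketch (not part of the tutorial's code): in a deep linear chain initialized with a single value $w$, the end-to-end gradient scales like $w^{\text{depth}}$, which vanishes for $|w| < 1$ and explodes for $|w| > 1$.
```
# Tiny sketch of the chain-rule argument: the end-to-end factor w**depth
# vanishes for |w| < 1 and explodes for |w| > 1.
for w in [0.9, 1.0, 1.1]:
    for depth in [10, 50, 100]:
        print(f"w={w}, depth={depth:3d}: w**depth = {w**depth:.3e}")
```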
Although in practice initial values for learnable parameters are often sampled from Uniform or Normal probability distributions, here we use a single value for all the parameters.
The figure below shows the effect of initialization on the speed of learning for the deep but narrow LNN. We have excluded initializations that lead to numerical errors such as `nan` or `inf`, which result from initializations that are too small or too large.
```
# @markdown Make sure you execute this cell to see the figure!
plot_init_effect()
# @title Video 11: Initialization Matters Explained
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1hM4y1T7gJ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"vKktGdiQDsE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 11: Initialization Matters Explained')
display(out)
```
---
# Summary
In this second tutorial, we have learned what the training landscape is, studied in depth the effects of network depth and learning rate and their interplay, and finally seen that initialization matters and why we need smart initialization schemes.
```
# @title Video 12: Tutorial 2 Wrap-up
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1P44y117Pd", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"r3K8gtak3wA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 12: Tutorial 2 Wrap-up')
display(out)
# @title Airtable Submission Link
from IPython import display as IPydisplay
IPydisplay.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/AirtableSubmissionButton.png?raw=1"
alt="button link to Airtable" style="width:410px"></a>
</div>""" )
```
---
# Bonus
## Hyperparameter interaction
Finally, let's put everything we have learned together and find the best initial weights and learning rate for a given depth. By now you should understand the interactions and know how to find the optimal values quickly. If you get `numerical overflow` warnings, don't be discouraged! They are often caused by "exploding" or "vanishing" gradients.
**Think!**
Did you experience any surprising behaviour or difficulty finding the optimal parameters?
```
# @markdown Make sure you execute this cell to enable the widget!
_ = interact(depth_lr_init_interplay,
depth = IntSlider(min=10, max=51, step=5, value=25,
continuous_update=False),
lr = FloatSlider(min=0.001, max=0.1,
step=0.005, value=0.005,
continuous_update=False,
readout_format='.3f',
description='eta'),
init_weights = FloatSlider(min=0.1, max=3.0,
step=0.1, value=0.9,
continuous_update=False,
readout_format='.3f',
description='initial weights'))
```
| github_jupyter |
# Partitioning feature space
**Make sure to get the latest dtreeviz**
```
! pip install -q -U dtreeviz
! pip install -q graphviz==0.17 # 0.18 deletes the `run` func I need
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
from sklearn import tree
from dtreeviz.trees import *
from dtreeviz.models.shadow_decision_tree import ShadowDecTree
def show_mse_leaves(X,y,max_depth):
t = DecisionTreeRegressor(max_depth=max_depth)
t.fit(X,y)
shadow = ShadowDecTree.get_shadow_tree(t, X, y, feature_names=['sqfeet'], target_name='rent')
root, leaves, internal = shadow._get_tree_nodes()
n_node_samples = t.tree_.n_node_samples
mse = mean_squared_error(y, [np.mean(y)]*len(y))
print(f"Root {0:3d} has {n_node_samples[0]:3d} samples with MSE ={mse:6.2f}")
print("-----------------------------------------")
avg_mse_per_record = 0.0
node2samples = shadow.get_node_samples()
for node in leaves:
leafy = y[node2samples[node.id]]
n = len(leafy)
mse = mean_squared_error(leafy, [np.mean(leafy)]*n)
avg_mse_per_record += mse * n
print(f"Node {node.id:3d} has {n_node_samples[node.id]:3d} samples with MSE ={mse:6.2f}")
avg_mse_per_record /= len(y)
print(f"Average MSE per record is {avg_mse_per_record:.1f}")
```
## Regression
```
df_cars = pd.read_csv("data/cars.csv")
X, y = df_cars[['ENG']], df_cars['MPG']
df_cars.head(3)
dt = DecisionTreeRegressor(max_depth=1)
dt.fit(X, y)
rtreeviz_univar(dt, X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG',
fontsize=9,
show={})
```
**Q.** What is the MSE between y and predicted $\hat{y} = \overline{y}$?
Hints: you can use the function `mean_squared_error(y, y_hat)`; create a vector $\hat{y}$ of length $|y|$ with $\overline{y}$ as its elements.
<details>
<summary>Solution</summary>
<pre>
mean_squared_error(y, [np.mean(y)]*len(y)) # about 60.76
</pre>
</details>
**Q.** Where would you split this if you could only split once? Set the `split` variable to a reasonable value.
```
split = ...
```
<details>
<summary>Solution</summary>
The split location that gives the purest subregion might be about split = 200 HP, because the region to the right has a relatively flat MPG average.
</details>
**Alter the rtreeviz_univar() call to show the split with arg show={'splits'}**
<details>
<summary>Solution</summary>
<pre>
rtreeviz_univar(dt, X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG',
fontsize=9,
show={'splits'})
</pre>
</details>
**Q.** What are the MSE values for the left, right partitions?
Hints: Get the y values whose `X['ENG']` are less than `split` into `lefty` and those greater than or equal to `split` into `righty`. The split introduces two new children that are leaves until we (possibly) split them; the leaves predict the mean of their samples.
```
lefty = ...; mleft = ...
righty = ...; mright = ...
mse_left = ...
mse_right = ...
mse_left, mse_right
```
<details>
<summary>Solution</summary>
Should be (35.68916307096633, 12.770261374699789)<p>
<pre>
lefty = y[X['ENG']<split]
righty = y[X['ENG']>=split]
mleft = np.mean(lefty)
mright = np.mean(righty)
mse_left = mean_squared_error(lefty, [mleft]*len(lefty))
mse_right = mean_squared_error(righty, [mright]*len(righty))
</pre>
</details>
**Q.** Compare the MSE value for the overall y with the average of the left and right partition MSEs (which is about 24.2).
<details>
<summary>Solution</summary>
After the split, the MSE of the children is much lower than before the split; therefore, it is a worthwhile split.
</details>
**Q.** Set the split value to 100 and recompare MSE values for y, left, and right.
<details>
<summary>Solution</summary>
With split=100, mse_left and mse_right become 33.6 and 41.0. These are still less than the overall y MSE of 60.7, so the split is worthwhile, but not nearly as good as splitting at 200.
</details>
### Effect of deeper trees
Consider the sequence of tree depths 1..6 for horsepower vs MPG.
```
X = df_cars[['ENG']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,6, figsize=(14,3), sharey=True)
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=i+1)
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
```
**Q.** Focusing on the orange horizontal lines, what do you notice as more splits appear?
<details>
<summary>Solution</summary>
With depth 1, the model is biased due to the coarseness of the approximation (just 2 leaf means). Depth 2 gives a much better approximation, so bias is lower. As we add more depth to the tree, the number of splits increases and these appear to be chasing details of the data, decreasing bias on the training set but also hurting generality.
</details>
**Q.** Consider the MSE for the 4 leaves of a depth 2 tree and 15 leaves of a depth 4 tree. What happens to the average MSE per leaf? What happens to the leaf sizes and how is it related to average MSE?
```
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=2)
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=4)
```
<details>
<summary>Solution</summary>
The average MSE is much lower as we increase depth because that allows the tree to isolate pure/more-similar regions. This also shrinks leaf size since we are splitting more as the tree deepens.
</details>
Consider the plot of the CYL feature (num cylinders) vs MPG:
```
X = df_cars[['CYL']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,3, figsize=(7,2.5), sharey=True)
depths = [1,2,10]
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=depths[i])
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='CYL',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits','title'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
```
**Q.** Explain why the graph looks like a bunch of vertical bars.
<details>
<summary>Solution</summary>
The x values are integers and will clump together. Since there are many MPG values at each int, you get vertical clumps of data.
</details>
**Q.** Why don't we get many more splits for depth 10 vs depth 2?
<details>
<summary>Solution</summary>
Once each unique x value has a "bin", there are no more splits to do.
</details>
**Q.** Why are the orange predictions bars at the levels they are in the plot?
<details>
<summary>Solution</summary>
Decision tree leaves predict the average y for all samples in a leaf.
</details>
## Classification
```
wine = load_wine()
df_wine = pd.DataFrame(data=wine.data, columns=wine.feature_names)
df_wine.head(3)
feature_names = list(wine.feature_names)
class_names = list(wine.target_names)
```
### 1 variable
```
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={},
colors={'scatter_marker_alpha': 1},
ax=ax)
plt.show()
```
**Q.** Where would you split this (vertically) if you could only split once?
<details>
<summary>Solution</summary>
The split location that gives the purest subregion might be about 1.5, because it nicely carves off the left green samples.
</details>
**Alter the code to show the split with arg show={'splits'}**
<details>
<summary>Solution</summary>
<pre>
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={'splits'},
colors={'scatter_marker_alpha': 1},
ax=ax)
plt.show()
</pre>
</details>
**Q.** For max_depth=2, how many splits will we get?
<details>
<summary>Solution</summary>
3. We get one split for root and then with depth=2, we have 2 children that each get a split.
</details>
**Q.** Where would you split this graph in that many places?
<details>
<summary>Solution</summary>
Once we carve off the leftmost green, we would want to isolate the blue in between 1.3 and 2.3. The other place to split is not obvious as there is no great choice. (sklearn will add a split point at 1.0)
</details>
**Alter the code to show max_depth=2**
<details>
<summary>Solution</summary>
<pre>
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=2)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={'splits'},
colors={'scatter_marker_alpha': 1},
ax=ax)
plt.show()
</pre>
</details>
### Gini impurity
Let's compute the gini impurity for left and right sides for a depth=1 tree that splits flavanoids at 1.3. Here's a function that computes the value:
$$
Gini({\bf p}) = \sum_{i=1}^{k} p_i \left[ \sum_{j \ne i}^k p_j \right] = \sum_{i=1}^{k} p_i (1 - p_i) = 1 - \sum_{i=1}^{k} p_i^2
$$
where $p_i = \frac{|y[y==i]|}{|y|}$. Since $\sum_{j \ne i}^k p_j$ is the probability of "not $p_i$", we can summarize that as just $1-p_i$. The gini value is then computing $p_i$ times "not $p_i$" for $k$ classes. Value $p_i$ is the probability of seeing class $i$ in a list of target values, $y$.
```
def gini(y):
"""
Compute gini impurity from y vector of class values (from k unique values).
Result is in range 0..(k-1)/k inclusive; binary range is 0..1/2.
See https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity
"""
_, counts = np.unique(y, return_counts=True)
p = counts / len(y)
return 1 - np.sum( p**2 )
```
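As a quick sanity check of the function (the values follow directly from the formula above): a pure vector has impurity 0, a balanced binary vector has impurity 0.5, and three balanced classes give about 0.667.
```
# Quick sanity check of gini() against the formula above
print(gini(np.array([0, 0, 0, 0])))  # pure: 1 - 1**2 = 0.0
print(gini(np.array([0, 0, 1, 1])))  # balanced binary: 1 - 2*(0.5**2) = 0.5
print(gini(np.array([0, 1, 2])))     # 3 balanced classes: 1 - 3*(1/3)**2 = 0.667
```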
**Q.** Using that function, what is the gini impurity for the overall y target?
<details>
<summary>Solution</summary>
gini(y) # about 0.66
</details>
**Get all y values for rows where `df_wine['flavanoids']`<1.3 into variable `lefty` and `>=` into `righty`**
```
lefty = ...
righty = ...
```
<details>
<summary>Solution</summary>
<pre>
lefty = y[df_wine['flavanoids']<1.3]
righty = y[df_wine['flavanoids']>=1.3]
</pre>
</details>
**Q.** What are the gini values for left and right partitions?
<details>
<summary>Solution</summary>
gini(lefty), gini(righty) # about 0.27, 0.53
</details>
**Q.** What can we conclude about the purity of left and right? Also, compare to gini for all y values.
<details>
<summary>Solution</summary>
The left partition is much more pure than the right, but the right is still more pure than the original gini(y). We can conclude that the split is worthwhile, as the partitions would let us give more accurate predictions.
</details>
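One way to quantify "worthwhile" is the weighted impurity decrease, the quantity a decision tree maximizes when choosing a split. Here is a minimal sketch, assuming the `lefty` and `righty` variables from the exercise above:
```
# Weighted impurity decrease of the flavanoids<1.3 split (a sketch; assumes
# lefty/righty from the exercise above). Larger is better.
n_l, n_r = len(lefty), len(righty)
gain = gini(y) - (n_l * gini(lefty) + n_r * gini(righty)) / (n_l + n_r)
print(f"impurity decrease = {gain:.3f}")
```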
### 2 variables
```
X = df_wine[['alcohol','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ct = ctreeviz_bivar(dt, X, y,
feature_names = ['alcohol','flavanoid'], class_names=class_names,
target_name='wine',
show={},
colors={'scatter_marker_alpha': 1},
ax=ax
)
```
**Q.** Which variable and split point would you choose if you could only split once?
<details>
<summary>Solution</summary>
Because the blue dots are spread vertically, a horizontal split won't be very good. Hence, we should choose the alcohol variable. The best split will carve off the blue dots, leaving the yellow and green mixed up. A split at alcohol=12.7 seems pretty good.
</details>
**Modify the code to view the splits and compare your answer**
**Q.** Which variable and split points would you choose next for depth=2?
<details>
<summary>Solution</summary>
Once we carve off most of the blue vertically, we should separate the yellow by choosing flavanoid=1.7 to split horizontally. NOTICE, however, that the 2nd split will not run across the entire graph since we are splitting only the region on the right. Splitting on the left can be at flavanoid=1 so we isolate the green from the blue on the left.
</details>
**Modify the code to view the splits for depth=2 and compare your answer**
### Gini
Let's examine gini impurity for a different pair of variables.
```
X = df_wine[['proline','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ctreeviz_bivar(dt, X, y,
feature_names = ['proline','flavanoid'],
class_names=class_names,
target_name='wine',
show={'splits'},
colors={'scatter_marker_alpha': 1},
ax=ax)
plt.show()
```
**Get all y values for rows where the split var is less than the split value into variable `lefty` and those `>=` into `righty`**
```
lefty = ...
righty = ...
```
<details>
<summary>Solution</summary>
<pre>
lefty = y[df_wine['proline']<750]
righty = y[df_wine['proline']>=750]
</pre>
</details>
**Print out the gini for y, lefty, righty**
<details>
<summary>Solution</summary>
<pre>
gini(y), gini(lefty), gini(righty)
</pre>
</details>
## Train a single tree and print the training accuracy (num correct / total)
```
t = DecisionTreeClassifier()
t.fit(df_wine, y)
accuracy_score(y, t.predict(df_wine))
```
Take a look at the feature importance:
```
from rfpimp import *
I = importances(t, df_wine, y)
plot_importances(I)
```
| github_jupyter |
# String equation example
## Analytic problem formulation
We consider a vibrating string on the segment $[0, 1]$, fixed on both sides, with input $u$ and output $\tilde{y}$ in the middle:
$$
\begin{align*}
\partial_{tt} \xi(z, t)
+ d \partial_t \xi(z, t)
- k \partial_{zz} \xi(z, t)
& = \delta(z - \tfrac{1}{2}) u(t), & 0 < z < 1,\ t > 0, \\
\xi(0, t) & = 0, & t > 0, \\
\xi(1, t) & = 0, & t > 0, \\
\tilde{y}(t) & = \xi(1/2, t), & t > 0.
\end{align*}
$$
## Semidiscretized formulation
Using the finite volume method on the equidistant mesh $0 = z_1 < z_2 < \ldots < z_{n + 1} = 1$, where $n = 2 n_2 - 1$, we obtain the semidiscretized formulation:
$$
\begin{align*}
\ddot{x}_i(t)
+ d \dot{x}_i(t)
- k \frac{x_{i - 1}(t) - 2 x_i(t) + x_{i + 1}(t)}{h^2}
& = \frac{1}{h} \delta_{i, n_2} u(t), & i = 1, 2, 3, \ldots, n - 1, n, \\
x_0(t) & = 0, \\
x_{n + 1}(t) & = 0, \\
y(t) & = x_{n_2}(t),
\end{align*}
$$
where $h = \frac{1}{n}$, $x_i(t) \approx \int_{z_i}^{z_{i + 1}} \xi(z, t) \, \mathrm{d}z$, and $y(t) \approx \tilde{y}(t)$.
Separating cases $i = 1$ and $i = n$ in the first equation, we find:
$$
\begin{alignat*}{6}
\ddot{x}_1(t)
+ d \dot{x}_1(t)
&
&& + 2 k n^2 x_1(t)
&& - k n^2 x_2(t)
&& = 0, \\
\ddot{x}_i(t)
+ d \dot{x}_i(t)
& - k n^2 x_{i - 1}(t)
&& + 2 k n^2 x_i(t)
&& - k n^2 x_{i + 1}(t)
&& = n \delta_{i, n_2} u(t),
& i = 2, 3, \ldots, n - 1, \\
\ddot{x}_n(t)
+ d \dot{x}_n(t)
& - k n^2 x_{n - 1}(t)
&& + 2 k n^2 x_n(t)
&&
&& = 0, \\
&
&&
&&
& y(t)
& = x_{n_2}(t).
\end{alignat*}
$$
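Before assembling the full system below, here is a small illustrative sketch (not part of the original example) that builds the stiffness matrix for a tiny $n$, so the diagonal entries $2 k n^2$ and off-diagonal entries $-k n^2$ from the equations above are easy to inspect:
```
# Illustrative sketch: assemble the tridiagonal stiffness matrix from the
# separated equations above for a tiny n and print it.
import scipy.sparse as sps

n, k = 5, 0.01
K_small = sps.diags([n * [2 * k * n**2],
                     (n - 1) * [-k * n**2],
                     (n - 1) * [-k * n**2]],
                    [0, -1, 1])
print(K_small.toarray())
```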
## Import modules
```
import numpy as np
import scipy.sparse as sps
import matplotlib.pyplot as plt
from pymor.core.config import config
from pymor.models.iosys import SecondOrderModel
from pymor.reductors.bt import BTReductor
from pymor.reductors.h2 import IRKAReductor
from pymor.reductors.sobt import (SOBTpReductor, SOBTvReductor, SOBTpvReductor, SOBTvpReductor,
SOBTfvReductor, SOBTReductor)
from pymor.reductors.sor_irka import SOR_IRKAReductor
from pymor.core.logger import set_log_levels
set_log_levels({'pymor.algorithms.gram_schmidt.gram_schmidt': 'WARNING'})
```
## Assemble $M$, $D$, $K$, $B$, $C_p$
```
n2 = 50
n = 2 * n2 - 1 # dimension of the system
d = 10 # damping
k = 0.01 # stiffness
M = sps.eye(n, format='csc')
E = d * sps.eye(n, format='csc')
K = sps.diags([n * [2 * k * n ** 2],
(n - 1) * [-k * n ** 2],
(n - 1) * [-k * n ** 2]],
[0, -1, 1],
format='csc')
B = np.zeros((n, 1))
B[n2 - 1, 0] = n
Cp = np.zeros((1, n))
Cp[0, n2 - 1] = 1
```
## Second-order system
```
so_sys = SecondOrderModel.from_matrices(M, E, K, B, Cp)
print(f'order of the model = {so_sys.order}')
print(f'number of inputs = {so_sys.input_dim}')
print(f'number of outputs = {so_sys.output_dim}')
poles = so_sys.poles()
fig, ax = plt.subplots()
ax.plot(poles.real, poles.imag, '.')
ax.set_title('System poles')
plt.show()
w = np.logspace(-4, 2, 200)
fig, ax = plt.subplots()
so_sys.mag_plot(w, ax=ax)
ax.set_title('Bode plot of the full model')
plt.show()
psv = so_sys.psv()
vsv = so_sys.vsv()
pvsv = so_sys.pvsv()
vpsv = so_sys.vpsv()
fig, ax = plt.subplots(2, 2, figsize=(12, 8), sharey=True)
ax[0, 0].semilogy(range(1, len(psv) + 1), psv, '.-')
ax[0, 0].set_title('Position singular values')
ax[0, 1].semilogy(range(1, len(vsv) + 1), vsv, '.-')
ax[0, 1].set_title('Velocity singular values')
ax[1, 0].semilogy(range(1, len(pvsv) + 1), pvsv, '.-')
ax[1, 0].set_title('Position-velocity singular values')
ax[1, 1].semilogy(range(1, len(vpsv) + 1), vpsv, '.-')
ax[1, 1].set_title('Velocity-position singular values')
plt.show()
print(f'H_2-norm of the full model: {so_sys.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'H_inf-norm of the full model: {so_sys.hinf_norm():e}')
print(f'Hankel-norm of the full model: {so_sys.hankel_norm():e}')
```
## Position Second-Order Balanced Truncation (SOBTp)
```
r = 5
sobtp_reductor = SOBTpReductor(so_sys)
rom_sobtp = sobtp_reductor.reduce(r)
poles_rom_sobtp = rom_sobtp.poles()
fig, ax = plt.subplots()
ax.plot(poles_rom_sobtp.real, poles_rom_sobtp.imag, '.')
ax.set_title("SOBTp reduced model's poles")
plt.show()
err_sobtp = so_sys - rom_sobtp
print(f'SOBTp relative H_2-error: {err_sobtp.h2_norm() / so_sys.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'SOBTp relative H_inf-error: {err_sobtp.hinf_norm() / so_sys.hinf_norm():e}')
print(f'SOBTp relative Hankel-error: {err_sobtp.hankel_norm() / so_sys.hankel_norm():e}')
fig, ax = plt.subplots()
so_sys.mag_plot(w, ax=ax)
rom_sobtp.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Bode plot of the full and SOBTp reduced model')
plt.show()
fig, ax = plt.subplots()
err_sobtp.mag_plot(w, ax=ax)
ax.set_title('Bode plot of the SOBTp error system')
plt.show()
```
## Velocity Second-Order Balanced Truncation (SOBTv)
```
r = 5
sobtv_reductor = SOBTvReductor(so_sys)
rom_sobtv = sobtv_reductor.reduce(r)
poles_rom_sobtv = rom_sobtv.poles()
fig, ax = plt.subplots()
ax.plot(poles_rom_sobtv.real, poles_rom_sobtv.imag, '.')
ax.set_title("SOBTv reduced model's poles")
plt.show()
err_sobtv = so_sys - rom_sobtv
print(f'SOBTv relative H_2-error: {err_sobtv.h2_norm() / so_sys.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'SOBTv relative H_inf-error: {err_sobtv.hinf_norm() / so_sys.hinf_norm():e}')
print(f'SOBTv relative Hankel-error: {err_sobtv.hankel_norm() / so_sys.hankel_norm():e}')
fig, ax = plt.subplots()
so_sys.mag_plot(w, ax=ax)
rom_sobtv.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Bode plot of the full and SOBTv reduced model')
plt.show()
fig, ax = plt.subplots()
err_sobtv.mag_plot(w, ax=ax)
ax.set_title('Bode plot of the SOBTv error system')
plt.show()
```
## Position-Velocity Second-Order Balanced Truncation (SOBTpv)
```
r = 5
sobtpv_reductor = SOBTpvReductor(so_sys)
rom_sobtpv = sobtpv_reductor.reduce(r)
poles_rom_sobtpv = rom_sobtpv.poles()
fig, ax = plt.subplots()
ax.plot(poles_rom_sobtpv.real, poles_rom_sobtpv.imag, '.')
ax.set_title("SOBTpv reduced model's poles")
plt.show()
err_sobtpv = so_sys - rom_sobtpv
print(f'SOBTpv relative H_2-error: {err_sobtpv.h2_norm() / so_sys.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'SOBTpv relative H_inf-error: {err_sobtpv.hinf_norm() / so_sys.hinf_norm():e}')
print(f'SOBTpv relative Hankel-error: {err_sobtpv.hankel_norm() / so_sys.hankel_norm():e}')
fig, ax = plt.subplots()
so_sys.mag_plot(w, ax=ax)
rom_sobtpv.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Bode plot of the full and SOBTpv reduced model')
plt.show()
fig, ax = plt.subplots()
err_sobtpv.mag_plot(w, ax=ax)
ax.set_title('Bode plot of the SOBTpv error system')
plt.show()
```
## Velocity-Position Second-Order Balanced Truncation (SOBTvp)
```
r = 5
sobtvp_reductor = SOBTvpReductor(so_sys)
rom_sobtvp = sobtvp_reductor.reduce(r)
poles_rom_sobtvp = rom_sobtvp.poles()
fig, ax = plt.subplots()
ax.plot(poles_rom_sobtvp.real, poles_rom_sobtvp.imag, '.')
ax.set_title("SOBTvp reduced model's poles")
plt.show()
err_sobtvp = so_sys - rom_sobtvp
print(f'SOBTvp relative H_2-error: {err_sobtvp.h2_norm() / so_sys.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'SOBTvp relative H_inf-error: {err_sobtvp.hinf_norm() / so_sys.hinf_norm():e}')
print(f'SOBTvp relative Hankel-error: {err_sobtvp.hankel_norm() / so_sys.hankel_norm():e}')
fig, ax = plt.subplots()
so_sys.mag_plot(w, ax=ax)
rom_sobtvp.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Bode plot of the full and SOBTvp reduced model')
plt.show()
fig, ax = plt.subplots()
err_sobtvp.mag_plot(w, ax=ax)
ax.set_title('Bode plot of the SOBTvp error system')
plt.show()
```
## Free-Velocity Second-Order Balanced Truncation (SOBTfv)
```
r = 5
sobtfv_reductor = SOBTfvReductor(so_sys)
rom_sobtfv = sobtfv_reductor.reduce(r)
poles_rom_sobtfv = rom_sobtfv.poles()
fig, ax = plt.subplots()
ax.plot(poles_rom_sobtfv.real, poles_rom_sobtfv.imag, '.')
ax.set_title("SOBTfv reduced model's poles")
plt.show()
err_sobtfv = so_sys - rom_sobtfv
print(f'SOBTfv relative H_2-error: {err_sobtfv.h2_norm() / so_sys.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'SOBTfv relative H_inf-error: {err_sobtfv.hinf_norm() / so_sys.hinf_norm():e}')
print(f'SOBTfv relative Hankel-error: {err_sobtfv.hankel_norm() / so_sys.hankel_norm():e}')
fig, ax = plt.subplots()
so_sys.mag_plot(w, ax=ax)
rom_sobtfv.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Bode plot of the full and SOBTfv reduced model')
plt.show()
fig, ax = plt.subplots()
err_sobtfv.mag_plot(w, ax=ax)
ax.set_title('Bode plot of the SOBTfv error system')
plt.show()
```
## Second-Order Balanced Truncation (SOBT)
```
r = 5
sobt_reductor = SOBTReductor(so_sys)
rom_sobt = sobt_reductor.reduce(r)
poles_rom_sobt = rom_sobt.poles()
fig, ax = plt.subplots()
ax.plot(poles_rom_sobt.real, poles_rom_sobt.imag, '.')
ax.set_title("SOBT reduced model's poles")
plt.show()
err_sobt = so_sys - rom_sobt
print(f'SOBT relative H_2-error: {err_sobt.h2_norm() / so_sys.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'SOBT relative H_inf-error: {err_sobt.hinf_norm() / so_sys.hinf_norm():e}')
print(f'SOBT relative Hankel-error: {err_sobt.hankel_norm() / so_sys.hankel_norm():e}')
fig, ax = plt.subplots()
so_sys.mag_plot(w, ax=ax)
rom_sobt.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Bode plot of the full and SOBT reduced model')
plt.show()
fig, ax = plt.subplots()
err_sobt.mag_plot(w, ax=ax)
ax.set_title('Bode plot of the SOBT error system')
plt.show()
```
## Balanced Truncation (BT)
```
r = 5
bt_reductor = BTReductor(so_sys.to_lti())
rom_bt = bt_reductor.reduce(r)
poles_rom_bt = rom_bt.poles()
fig, ax = plt.subplots()
ax.plot(poles_rom_bt.real, poles_rom_bt.imag, '.')
ax.set_title("BT reduced model's poles")
plt.show()
err_bt = so_sys - rom_bt
print(f'BT relative H_2-error: {err_bt.h2_norm() / so_sys.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'BT relative H_inf-error: {err_bt.hinf_norm() / so_sys.hinf_norm():e}')
print(f'BT relative Hankel-error: {err_bt.hankel_norm() / so_sys.hankel_norm():e}')
fig, ax = plt.subplots()
so_sys.mag_plot(w, ax=ax)
rom_bt.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Bode plot of the full and BT reduced model')
plt.show()
fig, ax = plt.subplots()
err_bt.mag_plot(w, ax=ax)
ax.set_title('Bode plot of the BT error system')
plt.show()
```
## Iterative Rational Krylov Algorithm (IRKA)
```
r = 5
irka_reductor = IRKAReductor(so_sys.to_lti())
rom_irka = irka_reductor.reduce(r)
fig, ax = plt.subplots()
ax.semilogy(irka_reductor.dist, '.-')
ax.set_title('IRKA convergence criterion')
plt.show()
poles_rom_irka = rom_irka.poles()
fig, ax = plt.subplots()
ax.plot(poles_rom_irka.real, poles_rom_irka.imag, '.')
ax.set_title("IRKA reduced model's poles")
plt.show()
err_irka = so_sys - rom_irka
print(f'IRKA relative H_2-error: {err_irka.h2_norm() / so_sys.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'IRKA relative H_inf-error: {err_irka.hinf_norm() / so_sys.hinf_norm():e}')
print(f'IRKA relative Hankel-error: {err_irka.hankel_norm() / so_sys.hankel_norm():e}')
fig, ax = plt.subplots()
so_sys.mag_plot(w, ax=ax)
rom_irka.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Bode plot of the full and IRKA reduced model')
plt.show()
fig, ax = plt.subplots()
err_irka.mag_plot(w, ax=ax)
ax.set_title('Bode plot of the IRKA error system')
plt.show()
```
## Second-Order Iterative Rational Krylov Algorithm (SOR-IRKA)
```
r = 5
sor_irka_reductor = SOR_IRKAReductor(so_sys)
rom_sor_irka = sor_irka_reductor.reduce(r)
fig, ax = plt.subplots()
ax.semilogy(sor_irka_reductor.dist, '.-')
ax.set_title('SOR-IRKA convergence criterion')
plt.show()
poles_rom_sor_irka = rom_sor_irka.poles()
fig, ax = plt.subplots()
ax.plot(poles_rom_sor_irka.real, poles_rom_sor_irka.imag, '.')
ax.set_title("SOR-IRKA reduced model's poles")
plt.show()
err_sor_irka = so_sys - rom_sor_irka
print(f'SOR-IRKA relative H_2-error: {err_sor_irka.h2_norm() / so_sys.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'SOR-IRKA relative H_inf-error: {err_sor_irka.hinf_norm() / so_sys.hinf_norm():e}')
print(f'SOR-IRKA relative Hankel-error: {err_sor_irka.hankel_norm() / so_sys.hankel_norm():e}')
fig, ax = plt.subplots()
so_sys.mag_plot(w, ax=ax)
rom_sor_irka.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Bode plot of the full and SOR-IRKA reduced model')
plt.show()
fig, ax = plt.subplots()
err_sor_irka.mag_plot(w, ax=ax)
ax.set_title('Bode plot of the SOR-IRKA error system')
plt.show()
```
| github_jupyter |
## Dataset
The CIFAR-10 dataset (Canadian Institute For Advanced Research) is a collection of images that are commonly used to train machine learning and computer vision algorithms. It is one of the most widely used datasets for machine learning research. The CIFAR-10 dataset contains 60,000 32x32 color images in 10 different classes. The 10 different classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. There are 6,000 images of each class.
Computer algorithms for recognizing objects in photos often learn by example. CIFAR-10 is a set of images that can be used to teach a computer how to recognize objects. Since the images in CIFAR-10 are low-resolution (32x32), this dataset allows researchers to quickly try different algorithms to see what works. Various kinds of convolutional neural networks tend to be the best at recognizing the images in CIFAR-10.
<table>
<tr>
<td class="cifar-class-name">airplane</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">automobile</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">bird</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">cat</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">deer</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">dog</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">frog</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">horse</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">ship</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">truck</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck10.png" class="cifar-sample" /></td>
</tr>
</table>
[Dataset Download](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz)
### 1. Load CIFAR-10 Database
```
import keras
from keras.datasets import cifar10
# load the pre-shuffled train and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
```
### 2. Visualize the First 36 Training Images
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(20,5))
for i in range(36):
ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_train[i]))
```
### 3. Rescale the Images by Dividing Every Pixel in Every Image by 255
```
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
```
### 4. Break Dataset into Training, Testing, and Validation Sets
```
# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# print shape of training set
print('x_train shape:', x_train.shape)
# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')
```
### 5. Define the Model Architecture
```
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu',
input_shape=(32, 32, 3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(10, activation='softmax'))
model.summary()
```
### 6. Compile the Model
```
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
```
### 7. Train the Model
```
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose=1,
save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=100,
validation_data=(x_valid, y_valid), callbacks=[checkpointer],
verbose=2, shuffle=True)
```
### 8. Load the Model with the Best Validation Accuracy
```
# load the weights that yielded the best validation accuracy
model.load_weights('model.weights.best.hdf5')
```
### 9. Calculate Classification Accuracy on Test Set
```
# evaluate and print test accuracy
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
```
### 10. Visualize Some Predictions
This may give you some insight into why the network is misclassifying certain objects.
```
# get predictions on the test set
y_hat = model.predict(x_test)
# define text labels (source: https://www.cs.toronto.edu/~kriz/cifar.html)
cifar10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
# plot a random sample of test images, their predicted labels, and ground truth
fig = plt.figure(figsize=(20, 8))
for i, idx in enumerate(np.random.choice(x_test.shape[0], size=32, replace=False)):
ax = fig.add_subplot(4, 8, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_test[idx]))
pred_idx = np.argmax(y_hat[idx])
true_idx = np.argmax(y_test[idx])
ax.set_title("{} ({})".format(cifar10_labels[pred_idx], cifar10_labels[true_idx]),
color=("green" if pred_idx == true_idx else "red"))
```
| github_jupyter |
# Self Supervised Learning Fastai Extension
> Implementation of popular SOTA self-supervised learning algorithms as Fastai Callbacks.
You may find documentation [here](https://keremturgutlu.github.io/self_supervised) and github repo [here](https://github.com/keremturgutlu/self_supervised/tree/master/)
## Install
`pip install self-supervised`
## Algorithms
Here are the list of implemented algorithms:
- [SimCLR](https://arxiv.org/pdf/2002.05709.pdf)
- [BYOL](https://arxiv.org/pdf/2006.07733.pdf)
- [SwAV](https://arxiv.org/pdf/2006.09882.pdf)
## Simple Usage
```python
from self_supervised.simclr import *
dls = get_dls(resize, bs)
model = create_simclr_model(arch=xresnet34, pretrained=False)
learn = Learner(dls, model, SimCLRLoss(temp=0.1), opt_func=opt_func, cbs=[SimCLR(size=size)])
learn.fit_flat_cos(100, 1e-2)
```
```python
from self_supervised.byol import *
dls = get_dls(resize, bs)
model = create_byol_model(arch=xresnet34, pretrained=False)
learn = Learner(dls, model, byol_loss, opt_func=opt_func, cbs=[BYOL(size=size, T=0.99)])
learn.fit_flat_cos(100, 1e-2)
```
```python
from self_supervised.swav import *
dls = get_dls(resize, bs)
model = create_swav_model(arch=xresnet34, pretrained=False)
learn = Learner(dls, model, SWAVLoss(), opt_func=opt_func, cbs=[SWAV(crop_sizes=[size,96],
num_crops=[2,6],
min_scales=[0.25,0.2],
max_scales=[1.0,0.35])])
learn.fit_flat_cos(100, 1e-2)
```
## ImageWang Benchmarks
All of the algorithms implemented in this library have been evaluated in [ImageWang Leaderboard](https://github.com/fastai/imagenette#image%E7%BD%91-leaderboard).
Overall, the algorithms rank as `SwAV > BYOL > SimCLR` in most of the benchmarks. For details you may inspect the history of the [ImageWang Leaderboard](https://github.com/fastai/imagenette#image%E7%BD%91-leaderboard) on GitHub.
It should be noted that during these experiments no hyperparameter selection/tuning was done beyond using `learn.lr_find()` or making sanity checks over data augmentations by visualizing batches. So there is still room for improvement, and the overall rankings of the algorithms may change based on your setup. Yet, the overall rankings are on par with the papers.
## Contributing
Contributions and or requests for new self-supervised algorithms are welcome. This repo will try to keep itself up-to-date with recent SOTA self-supervised algorithms.
Before raising a PR please create a new branch with name `<self-supervised-algorithm>`. You may refer to previous notebooks before implementing your Callback.
Please refer to sections `Developers Guide, Abbreviations Guide, and Style Guide` from https://docs.fast.ai/dev-setup and note that same rules apply for this library.
| github_jupyter |
<a href="https://colab.research.google.com/github/itsCiandrei/LinearAlgebra_2ndSem/blob/main/Assignment10_BENITEZ_FERNANDEZ.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Linear Algebra for ECE
## Laboratory 10 : Linear Combination and Vector Spaces
Now that you have a fundamental knowledge about linear combination, we'll try to visualize it using scientific programming.
### Objectives
At the end of this activity you will be able to:
1. Be familiar with representing linear combinations in the 2-dimensional plane.
2. Visualize spans using vector fields in Python.
3. Perform vector fields operations using scientific programming.
## Discussion
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
##Linear Combination
A linear combination is the addition of two or more vectors, each scaled by a constant. In order to properly understand the combination of vectors, it is necessary to plot the vectors involved in the computation (see the arrow sketch after the next cell).
$$R = \begin{bmatrix} 1\\2 \\\end{bmatrix} , P = \begin{bmatrix} -2\\3 \\\end{bmatrix} $$
```
vectR = np.array([1,2])
vectP = np.array([-2,3])
vectR
vectP
```
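As a small illustration (not part of the original lab), the cell below draws $R$, $P$, and their sum $R + P$ as arrows from the origin:
```
# Illustrative sketch: draw R, P, and the linear combination R + P as arrows.
vectSum = vectR + vectP
plt.quiver(0, 0, vectR[0], vectR[1], angles='xy', scale_units='xy', scale=1,
           color='b', label='R')
plt.quiver(0, 0, vectP[0], vectP[1], angles='xy', scale_units='xy', scale=1,
           color='g', label='P')
plt.quiver(0, 0, vectSum[0], vectSum[1], angles='xy', scale_units='xy', scale=1,
           color='r', label='R + P')
plt.xlim(-5, 5)
plt.ylim(-1, 6)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.legend()
plt.grid()
plt.show()
```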
## Span of single vectors
A vector of x and y components is multiplied by a range of constants built with `np.arange(c1, c2, c3)`: the first constant is where the range of scalars starts, the second is where it stops, and the third is the increment between them. Multiplying this range element-wise by the vector produces the point plots in the graph. `plt.scatter` then plots the scaled points, with index `[0]` supplying the X values and `[1]` the Y values.
$$R = c\cdot \begin{bmatrix} 1\\2 \\\end{bmatrix} $$
```
vectR = np.array([1,2])
c = np.arange(-5,5,.5)
plt.scatter(c*vectR[0],c*vectR[1])
plt.xlim(-10,10)
plt.ylim(-10,10)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.grid()
plt.show()
```
$$P = c\cdot \begin{bmatrix} -2\\3 \\\end{bmatrix} $$
```
vectP = np.array([-2,3])
c = np.arange(-10,10,.75)
plt.scatter(c*vectP[0],c*vectP[1])
plt.xlim(-31,31)
plt.ylim(-31,31)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.grid()
plt.show()
```
##Span of a linear combination of vectors
Plotting the span of a linear combination of two linearly independent vectors results in a two-dimensional plane. This span is the set of all possible vectors that can be formed by multiplying the given vectors by a set of scalars and adding the results.
$$S = \begin{Bmatrix} c_1 \cdot\begin{bmatrix} -3\\3 \\\end{bmatrix},
c_2 \cdot \begin{bmatrix} 4\\2 \\\end{bmatrix}\end{Bmatrix} $$
```
vectRJ = np.array([-3,3])
vectJP = np.array([4,2])
R = np.arange(-5,5,1)
c1, c2 = np.meshgrid(R,R)
vectR = vectRJ + vectJP
spanRx = c1*vectRJ[0] + c2*vectJP[0]
spanRy = c1*vectRJ[1] + c2*vectJP[1]
plt.scatter(R*vectRJ[0],R*vectRJ[1])
plt.scatter(R*vectJP[0],R*vectJP[1])
plt.scatter(spanRx,spanRy, s=10, alpha=1)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.grid()
plt.show()
```
$$S = \begin{Bmatrix} c_1 \cdot\begin{bmatrix} -2\\3 \\\end{bmatrix},
c_2 \cdot \begin{bmatrix} 1\\-4 \\\end{bmatrix}\end{Bmatrix} $$
```
vectA = np.array([-2,3])
vectB = np.array([1,-4])
R = np.arange(-5,5,1)
c1, c2 = np.meshgrid(R,R)
vectR = vectA + vectB
spanRx = c1*vectA[0] + c2*vectB[0]
spanRy = c1*vectA[1] + c2*vectB[1]
plt.scatter(R*vectA[0],R*vectA[1])
plt.scatter(R*vectB[0],R*vectB[1])
plt.scatter(spanRx,spanRy, s=10, alpha=1)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.grid()
plt.show()
```
## Task 1
```
vectQ = np.array([4,2])
vectW= np.array([8,6])
R = np.arange(-5,5,0.75)
c1, c2 = np.meshgrid(R,R)
vectR = vectQ + vectW
spanRx = c1*vectQ[0] + c2*vectW[0]
spanRy = c1*vectQ[1] + c2*vectW[1]
plt.scatter(R*vectQ[0],R*vectQ[1])
plt.scatter(R*vectW[0],R*vectW[1])
plt.scatter(spanRx,spanRy, s=10, alpha=.5)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.grid()
plt.show()
```
$$S = \begin{Bmatrix} c_1 \cdot\begin{bmatrix} 4\\2 \\\end{bmatrix},
c_2 \cdot \begin{bmatrix} 8\\6 \\\end{bmatrix}\end{Bmatrix} $$
$$ Q = 4\hat{x} + 2\hat{y} \\
W = 8\hat{x} + 6\hat{y}$$
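To confirm that Q and W actually span the whole plane rather than a single line, one can check linear independence via the determinant; a quick sketch (not part of the original task):
```
# Columns are Q = [4,2] and W = [8,6]; a nonzero determinant means independence
A = np.array([[4, 8],
              [2, 6]])
print(np.linalg.det(A))  # 4*6 - 8*2 = 8, nonzero, so the span is the full plane
```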
| github_jupyter |
# Compute norm from function space
```
from dolfin import *
import dolfin as df
import numpy as np
import logging
df.set_log_level(logging.INFO)
df.set_log_level(WARNING)  # overrides the previous line, keeping dolfin quiet
mesh = RectangleMesh(0, 0, 1, 1, 10, 10)
#mesh = Mesh(Rectangle(-10, -10, 10, 10) - Circle(0, 0, 0.1), 10)
V = FunctionSpace(mesh, "CG", 1)
W = VectorFunctionSpace(mesh, "CG", 1)
w = interpolate(Expression(["2", "1"]), W)
%%timeit
norm_squared = 0
for i in range(2):
norm_squared += w[i] ** 2
norm = norm_squared ** 0.5
norm = df.project(norm, V)
#norm = df.interpolate(norm, V)
```
This next bit is fast, but doesn't compute the norm ;-|
```
%%timeit
n = interpolate(Expression("sqrt(pow(x[0], 2) + pow(x[1], 2))"), V)
```
# Compute norm via dolfin vector norm function
```
vector = w.vector()
%%timeit
norm2 = vector.norm('l2')
print(norm2)
```
Okay, the method above is not suitable: it computes the norm of the whole vector, not the norm for the 2d vector at each node.
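What we want instead is the pointwise norm: one scalar per mesh node. For reference, here is a minimal NumPy-only sketch of that operation (independent of dolfin; the array layout is purely illustrative):
```
import numpy as np
w_nodes = np.tile([2.0, 1.0], (5, 1))        # one (2, 1) vector per node
norms = np.sqrt((w_nodes ** 2).sum(axis=1))  # sqrt(wx^2 + wy^2) at each node
print(norms)                                 # sqrt(5) ~ 2.236 for every node
```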
# Compute the norm using dolfin generic vector functions
```
mesh = RectangleMesh(0, 0, 1, 1, 10, 10)
V = FunctionSpace(mesh, "CG", 1)
W = VectorFunctionSpace(mesh, "CG", 1)
w = interpolate(Expression(["2", "1"]), W)
norm = Function(V)
norm_vec = norm.vector()
print("Shape of w = {}".format(w.vector().get_local().shape))
print("Shape of norm = {}".format(norm.vector().get_local().shape))
```
Compute the norm-squared in dolfin vector:
```
%%timeit
wx, wy = w.split(deepcopy=True)
wnorm2 = (wx.vector() * wx.vector() + wy.vector() * wy.vector())
#At this point, I don't know how to compute the square root of wnorm2 (without numpy or other non-dolfin-generic-vector code).
wnorm = np.sqrt(wnorm2.array())
norm_vec.set_local(wnorm)
```
## plot some results
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.tri as tri
coords = mesh.coordinates()
x = coords[:,0]
y = coords[:,1]
triang = tri.Triangulation(x, y)
z = norm2.vector().array()
plt.tripcolor(triang, z, shading='flat', cmap=plt.cm.rainbow)
plt.colorbar()
coords[:,0]
```
# Wacky stuff from webpage (works on the coordinate, not the field value)
http://fenicsproject.org/qa/3693/parallel-vector-operations-something-akin-to-celliterator
```
from dolfin import *
import numpy as np
import math
mesh = RectangleMesh(-1, -1, 1, 1, 10, 10)
V = FunctionSpace(mesh, 'CG', 1)
u = Function(V)
uvec = u.vector()
dofmap = V.dofmap()
dof_x = dofmap.tabulate_all_coordinates(mesh).reshape((-1, 2))
first_dof, last_dof = dofmap.ownership_range() # U.local_size()
#rank = MPI.process_number()
new_values = np.zeros(last_dof - first_dof)
for i in range(len(new_values)):
x, y = dof_x[i]
new_values[i] = math.sqrt(x **2 + y **2)
uvec.set_local(new_values)
uvec.apply('insert')
#plot(u, title=str(rank))
#interactive()
dof_x[0]
mesh.coordinates()[0]
```
## Wacky stuff from http://fenicsproject.org/qa/3532/avoiding-assembly-vector-operations-scalar-vector-spaces
```
from dolfin import *
mesh = RectangleMesh(0.0, 0.0, 1.0, 1.0, 10, 10)
V = FunctionSpace(mesh, "Lagrange", 1)
V_vec = VectorFunctionSpace(mesh, "Lagrange", 1)
W = V_vec
c = project(Expression('1.1'), V)
v = as_vector((1,2))
d = project(c*v,V_vec)
d.vector().array()
W = VectorFunctionSpace(mesh, "CG", 1)
w = interpolate(Expression(["2", "1"]), W)
%%timeit
#dd = w #Function(V_vec)
dofs0 = V_vec.sub(0).dofmap().dofs() # indices of x-components
dofs1 = V_vec.sub(1).dofmap().dofs() # indices of y-components
norm = Function(V)
norm.vector()[:] = np.sqrt(w.vector()[dofs0] * w.vector()[dofs0] + w.vector()[dofs1] * w.vector()[dofs1])
norm = Function(V)
%%timeit
norm.vector()[:] = np.sqrt(w.vector()[dofs0] * w.vector()[dofs0] + w.vector()[dofs1] * w.vector()[dofs1])
norm.vector().array()
```
# Done with a number of tests. Implement one or two versions as functions
```
import numpy as np
def value_dim(w):
if isinstance(w.function_space(), df.FunctionSpace):
# Scalar field.
return 1
else:
# value_shape() returns a tuple (N,) and int is required.
return w.function_space().ufl_element().value_shape()[0]
def compute_pointwise_norm(w, target=None, method=1):
"""Given a function vectior function w, compute the norm at each vertex, and store in scalar function target.
If target is given (a scalar dolfin Function), then store the result in there, and return reference to it.
If target is not given, create the object and return a reference to it.
The method argument selects which implementation is used.
"""
if not target:
raise NotImplementedError("This is missing - could create a df.Function(V) here")
dim = value_dim(w)
assert dim in [3], "Only implemented for 3d vector fields"
if method == 1:
wx, wy, wz = w.split(deepcopy=True)
wnorm = np.sqrt((wx.vector() * wx.vector() + wy.vector() * wy.vector() + wz.vector() * wz.vector()).array())
target.vector().set_local(wnorm)
elif method == 2:
V_vec = w.function_space()
dofs0 = V_vec.sub(0).dofmap().dofs() # indices of x-components
dofs1 = V_vec.sub(1).dofmap().dofs() # indices of y-components
dofs2 = V_vec.sub(2).dofmap().dofs() # indices of z-components
target.vector()[:] = np.sqrt(w.vector()[dofs0] * w.vector()[dofs0] +\
w.vector()[dofs1] * w.vector()[dofs1] +\
w.vector()[dofs2] * w.vector()[dofs2])
else:
raise NotImplementedError("method {} unknown".format(method))
import dolfin as df
def create_test_system(nx, ny=None):
if not ny:
ny = nx
nz = ny
mesh = df.BoxMesh(0, 0, 0, 1, 1, 1, nx, ny, nz)
V = df.FunctionSpace(mesh, "CG", 1)
W = df.VectorFunctionSpace(mesh, "CG", 1)
w = df.interpolate(Expression(["2", "1", "2"]), W)
target = df.Function(V)
return w, mesh, V, W, target
w, mesh, V, W, norm = create_test_system(5)
%timeit compute_pointwise_norm(w, norm, method=1)
assert norm.vector().array()[0] == np.sqrt(2*2 + 1 + 2*2)
%timeit compute_pointwise_norm(w, norm, method=2)
assert norm.vector().array()[0] == np.sqrt(2*2 + 1 + 2*2)
compute_pointwise_norm(w, norm, method=1)
norm.vector().array()[0]
```
| github_jupyter |
## Create Data
```
import numpy as np
import matplotlib.pyplot as plt
from patsy import dmatrix
from statsmodels.api import GLM, families
def simulate_poisson_process(rate, sampling_frequency):
return np.random.poisson(rate / sampling_frequency)
def plot_model_vs_true(time, spike_train, firing_rate, conditional_intensity, sampling_frequency):
fig, axes = plt.subplots(2, 1, figsize=(12, 6), sharex=True, constrained_layout=True)
s, t = np.nonzero(spike_train)
axes[0].scatter(np.unique(time)[s], t, s=1, color='black')
axes[0].set_ylabel('Trials')
axes[0].set_title('Simulated Spikes')
axes[0].set_xlim((0, 1))
axes[1].plot(np.unique(time), firing_rate[:, 0],
linestyle='--', color='black',
linewidth=4, label='True Rate')
axes[1].plot(time.ravel(), conditional_intensity * sampling_frequency,
linewidth=4, label='model conditional intensity')
axes[1].set_xlabel('Time')
axes[1].set_ylabel('Firing Rate (Hz)')
axes[1].set_title('True Rate vs. Model')
axes[1].set_ylim((0, 15))
plt.legend()
n_time, n_trials = 1500, 1000
sampling_frequency = 1500
# Firing rate starts at 5 Hz and switches to 10 Hz
firing_rate = np.ones((n_time, n_trials)) * 10
firing_rate[:n_time // 2, :] = 5
spike_train = simulate_poisson_process(
firing_rate, sampling_frequency)
time = (np.arange(0, n_time)[:, np.newaxis] / sampling_frequency *
np.ones((1, n_trials)))
trial_id = (np.arange(n_trials)[np.newaxis, :]
* np.ones((n_time, 1)))
```
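At each time bin, the simulator draws a Poisson count with mean `rate / sampling_frequency`, i.e. the expected number of spikes per bin. A quick sanity check of that scaling (a sketch, not part of the original analysis):
```
rate, fs = 10, 1500                                    # 10 Hz rate sampled at 1500 Hz
counts = np.random.poisson(rate / fs, size=(fs, 1000))
print(counts.mean() * fs)                              # empirical rate, close to 10 Hz
```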
## Good Fit
```
# Fit a spline model to the firing rate
design_matrix = dmatrix('bs(time, df=5)', dict(time=time.ravel()))
fit = GLM(spike_train.ravel(), design_matrix,
family=families.Poisson()).fit()
conditional_intensity = fit.mu
plot_model_vs_true(time, spike_train, firing_rate, conditional_intensity, sampling_frequency)
plt.savefig('simulated_spikes_model.png')
from time_rescale import TimeRescaling
conditional_intensity = fit.mu
rescaled = TimeRescaling(conditional_intensity,
spike_train.ravel(),
trial_id.ravel())
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
rescaled.plot_ks(ax=axes[0])
rescaled.plot_rescaled_ISI_autocorrelation(ax=axes[1])
plt.savefig('time_rescaling_ks_autocorrelation.png')
```
### Adjust for short trials
```
rescaled_adjusted = TimeRescaling(conditional_intensity,
spike_train.ravel(),
trial_id.ravel(),
adjust_for_short_trials=True)
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
rescaled_adjusted.plot_ks(ax=axes[0])
rescaled_adjusted.plot_rescaled_ISI_autocorrelation(ax=axes[1])
plt.savefig('time_rescaling_ks_autocorrelation_adjusted.png')
```
## Bad Fit
```
constant_fit = GLM(spike_train.ravel(),
np.ones_like(spike_train.ravel()),
family=families.Poisson()).fit()
conditional_intensity = constant_fit.mu
plot_model_vs_true(time, spike_train, firing_rate, conditional_intensity, sampling_frequency)
plt.savefig('constant_model_fit.png')
bad_rescaled = TimeRescaling(constant_fit.mu,
spike_train.ravel(),
trial_id.ravel(),
adjust_for_short_trials=True)
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
bad_rescaled.plot_ks(ax=axes[0], scatter_kwargs=dict(s=10))
axes[0].set_title('KS Plot')
bad_rescaled.plot_rescaled_ISI_autocorrelation(ax=axes[1], scatter_kwargs=dict(s=10))
axes[1].set_title('Autocorrelation');
plt.savefig('time_rescaling_ks_autocorrelation_bad_fit.png')
```
| github_jupyter |
```
%matplotlib inline
```
**Read Later:**
Documentation for ``autograd.Function`` is at
https://pytorch.org/docs/stable/autograd.html#function
**Notes:**
You need to set `requires_grad=True` when defining a tensor if you want autograd to track it.
Assume z = f(x,y).
z.backward() initiates the backward pass and computes all the gradients automatically.
The gradients are accumulated into the .grad attribute, so x.grad will hold dz/dx and y.grad will hold dz/dy.
If z is a scalar, you can call z.backward() with no arguments.
If z is a vector, you have to pass a gradient argument to z.backward(); to get dz/dx and dz/dy it should be a tensor of the same shape as z, e.g. z.backward(torch.ones_like(z)).
Autograd: Automatic Differentiation
===================================
Central to all neural networks in PyTorch is the ``autograd`` package.
Let’s first briefly visit this, and we will then go to training our
first neural network.
The ``autograd`` package provides automatic differentiation for all operations
on Tensors. It is a define-by-run framework, which means that your backprop is
defined by how your code is run, and that every single iteration can be
different.
Let us see this in simpler terms with some examples.
Tensor
--------
``torch.Tensor`` is the central class of the package. If you set its attribute
``.requires_grad`` as ``True``, it starts to track all operations on it. When
you finish your computation you can call ``.backward()`` and have all the
gradients computed automatically. The gradient for this tensor will be
accumulated into ``.grad`` attribute.
To stop a tensor from tracking history, you can call ``.detach()`` to detach
it from the computation history, and to prevent future computation from being
tracked.
To prevent tracking history (and using memory), you can also wrap the code block
in ``with torch.no_grad():``. This can be particularly helpful when evaluating a
model because the model may have trainable parameters with
``requires_grad=True``, but for which we don't need the gradients.
There’s one more class which is very important for autograd
implementation - a ``Function``.
``Tensor`` and ``Function`` are interconnected and build up an acyclic
graph, that encodes a complete history of computation. Each tensor has
a ``.grad_fn`` attribute that references a ``Function`` that has created
the ``Tensor`` (except for Tensors created by the user - their
``grad_fn is None``).
If you want to compute the derivatives, you can call ``.backward()`` on
a ``Tensor``. If ``Tensor`` is a scalar (i.e. it holds a single element
of data), you don’t need to specify any arguments to ``backward()``,
however if it has more elements, you need to specify a ``gradient``
argument that is a tensor of matching shape.
```
import torch
```
Create a tensor and set ``requires_grad=True`` to track computation with it
```
x = torch.tensor([[1,3],[2,2]], requires_grad=True, dtype = torch.float32)
print(x)
print(x.size())
```
Do a tensor operation:
```
y = ( x + 2)
z = 3 * (y ** 2)
print(y,z)
```
``y`` was created as a result of an operation, so it has a ``grad_fn``.
```
print(y.grad_fn)
print(z.grad_fn)
```
Do more operations on ``y``
```
z = y * y # element-wise product
out = z.mean()
print(z, out)
```
``.requires_grad_( ... )`` changes an existing Tensor's ``requires_grad``
flag in-place. The input flag defaults to ``False`` if not given.
```
a = torch.tensor([2, 2],dtype = torch.float32)
a = ((a * 3))
print(a)
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b)
print(b.grad_fn)
```
Get Gradients
---------
Let's backprop now.
Because ``out`` contains a single scalar, ``out.backward()`` is
equivalent to ``out.backward(torch.tensor(1.))``.
Note:
out.backward() only works when out is a scalar.
out.backward(v), where v is a tensor of the same shape as out, works for non-scalar outputs and gives the vector-Jacobian product; here v is the gradient we feed into the backward pass.
```
# example: y = 2 * x + 3 * (x^2)
x = torch.tensor([3],dtype = torch.float32,requires_grad = False)
x.requires_grad_(True)
y = 2 * x + x * x * 3
print(x,y)
```
Print the gradient d(y)/dx, which should be 2 + 6 * x
```
y.backward() # first run y.backward(), then x.grad holds the value
print(x.grad) # backward() needs no argument here because y is a scalar
```
Mathematically, if you have a vector valued function $\vec{y}=f(\vec{x})$,
then the gradient of $\vec{y}$ with respect to $\vec{x}$
is a Jacobian matrix:
\begin{align}J=\left(\begin{array}{ccc}
\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\
\vdots & \ddots & \vdots\\
\frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
\end{array}\right)\end{align}
Generally speaking, ``torch.autograd`` is an engine for computing
vector-Jacobian product. That is, given any vector
$v=\left(\begin{array}{cccc} v_{1} & v_{2} & \cdots & v_{m}\end{array}\right)^{T}$,
compute the product $v^{T}\cdot J$. If $v$ happens to be
the gradient of a scalar function $l=g\left(\vec{y}\right)$,
that is,
$v=\left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} & \cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right)^{T}$,
then by the chain rule, the vector-Jacobian product would be the
gradient of $l$ with respect to $\vec{x}$:
\begin{align}J^{T}\cdot v=\left(\begin{array}{ccc}
\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}}\\
\vdots & \ddots & \vdots\\
\frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
\end{array}\right)\left(\begin{array}{c}
\frac{\partial l}{\partial y_{1}}\\
\vdots\\
\frac{\partial l}{\partial y_{m}}
\end{array}\right)=\left(\begin{array}{c}
\frac{\partial l}{\partial x_{1}}\\
\vdots\\
\frac{\partial l}{\partial x_{n}}
\end{array}\right)\end{align}
(Note that $v^{T}\cdot J$ gives a row vector which can be
treated as a column vector by taking $J^{T}\cdot v$.)
This characteristic of vector-Jacobian product makes it very
convenient to feed external gradients into a model that has
non-scalar output.
Note: the vector-Jacobian product is just a matrix-vector dot product.
Now let's take a look at an example of vector-Jacobian product:
```
# example 1:
x = torch.tensor([3,4,6], dtype = torch.float32,requires_grad=True)
y = x * x * 3
v = torch.tensor([1,1,2], dtype=torch.float)
y.backward(v) # computes the derivative dyi/dxi = 6 * xi, weighted by v
print(x.grad)
# example 2:
g = torch.tensor([[1,1,1],[2,3,4]],dtype = torch.float32)
g.add_(torch.ones_like(g))
g.requires_grad_(True) # the trailing _ means the operation is performed in-place
print(g, g.requires_grad)
gg = g*g*g
ggg = gg +3*g
v = torch.tensor([[1,1,2],[2,2,2]], dtype=torch.float)
ggg.backward(v) # here we do the derivative calculation dggg/dg = 3g^2 + 3
print(g.grad)
```
In the above case, if we instead call gg.backward(v), the derivative is 3g^2 only.
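A quick check of that claim (a sketch that recreates the tensors, since a graph can only be backpropagated through once):
```
g2 = torch.tensor([[2., 2., 2.], [3., 4., 5.]], requires_grad=True)
gg2 = g2 * g2 * g2
gg2.backward(v)   # v as defined in the cell above
print(g2.grad)    # equals 3 * g2**2 * v
```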
```
x = torch.randn(3, requires_grad=True)
print(x)
y = x * 2
count = 0
while y.data.norm() < 100: # L2 norm / Euclidean norm
y = y * 2
count += 1
print(y,count)
# Now in this case ``y`` is no longer a scalar. ``torch.autograd``
# could not compute the full Jacobian directly, but if we just
# want the vector-Jacobian product, simply pass the vector to
# ``backward`` as argument:
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
```
# Stop autograd
You can also stop autograd from tracking history on Tensors
with ``.requires_grad=True`` either by wrapping the code block in
``with torch.no_grad():``
```
x = torch.randn(3, requires_grad=True)
print(x.requires_grad)
print((x ** 2).requires_grad)
with torch.no_grad():
print((x ** 2).requires_grad)
x = torch.randn(3, requires_grad=True)
x.requires_grad_(False)
print((x**2).requires_grad)
```
Or by using ``.detach()`` to get a new Tensor with the same
content but that does not require gradients:
```
x.requires_grad_(True)
print(x.requires_grad)
y = x.detach()
print(y.requires_grad)
print(x.eq(y))
```
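One caveat worth knowing (a small sketch, not from the original tutorial): the detached tensor shares storage with the original, so in-place edits to one are visible through the other.
```
x = torch.ones(3, requires_grad=True)
y = x.detach()
y[0] = 5.0   # in-place change through the detached view
print(x)     # tensor([5., 1., 1.], requires_grad=True) -- same storage
```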
| github_jupyter |
```
# Dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import timedelta
import time
from datetime import date
# Import SQL Alchemy
from sqlalchemy import create_engine, ForeignKey, func
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
# Import PyMySQL (Not needed if mysqlclient is installed)
import pymysql
pymysql.install_as_MySQLdb()
firstDate = "2017-07-17"
lastDate = "2017-07-30"
engine = create_engine("sqlite:///hawaii.sqlite")
conn = engine.connect()
Base = automap_base()
Base.prepare(engine, reflect=True)
# mapped classes are now created with names by default
# matching that of the table name.
Base.classes.keys()
Measurement = Base.classes.Measurements
Station = Base.classes.Stations
# To push the objects made and query the server we use a Session object
session = Session(bind=engine)
# Calculate the date 1 year ago from today
prev_year = date.today() - timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
results = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= prev_year).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(results, columns=['date', 'precipitation'])
df.head()
yAxis = df.precipitation
xAxis = df.date
plt.figure(figsize=(15,3))
plt.bar(xAxis, yAxis, color='blue', alpha = 0.5, align='edge')
plt.xticks(np.arange(12), df.date[1:13], rotation=90)
plt.xlabel('Date')
plt.ylabel('Precipitation (Inches)')
plt.title('Precipitation')
plt.show()
df.describe()
totalStations = session.query(Station.station).count()
totalStations
activeStations = session.query(Measurement.station, Measurement.tobs, func.count(Measurement.station)).group_by(Measurement.station).all()
dfAS = pd.DataFrame(activeStations, columns = ['station', 'tobs', 'stationCount'])
print(dfAS)
maxObs = dfAS.loc[(dfAS['tobs'] == dfAS['tobs'].max())]
maxObs
tobsData = session.query(Measurement.date, Measurement.tobs).filter(Measurement.date >= prev_year).all()
dfTD = pd.DataFrame(tobsData, columns = ['Date', 'tobs']).sort_values('tobs', ascending = False)
dfTD.head()
plt.hist(dfTD['tobs'], bins=12, color= "blue")
plt.xlabel('Tobs (bins=12)')
plt.ylabel('Frequency')
plt.title('Tobs Frequency')
plt.legend('Tobs')
plt.show()
def calcTemps(x):
    # Filter the DataFrame to the trip date range, then report min/max temperature
    inRange = x[(x['Date'] >= firstDate) & (x['Date'] <= lastDate)]
    return inRange['tobs'].min(), inRange['tobs'].max()
calcTemps(dfTD)
```
| github_jupyter |
## Global Air Pollution Measurements
* [Air Quality Index - Wiki](https://en.wikipedia.org/wiki/Air_quality_index)
* [BigQuery - Wiki](https://en.wikipedia.org/wiki/BigQuery)
In this notebook, data is extracted from *BigQuery Public Data*, accessible exclusively in *Kaggle*. The BigQuery helper object converts data in cloud storage into a *Pandas DataFrame* object, and the query syntax is the same as *SQL*. Since the dataset is very large, converting all of it to a DataFrame is cumbersome, so each query is written so that its result is readily available for visualization.
***
>**Basic attributes of the Air Quality Index**
* Measurement units
* $ug/m^3$: micro gram/cubic meter
* $ppm$: Parts Per Million
* Pollutant
* $O3$: Ozone gas
* $SO2$: Sulphur Dioxide
* $NO2$: Nitrogen Dioxide
* $PM 2.5$: Particles with an aerodynamic diameter less than $2.5 μm$
* $PM 10$: Particles with an aerodynamic diameter less than $10 μm$
* $CO$: Carbon monoxide
**Steps**
1. Load Packages
2. Bigquery Object
3. AQI range and Statistics
4. Distribution of country listed in AQI
5. Location
6. Air Quality Index value distribution Map view
7. Pollutant Statistics
8. Distribution of pollutant and unit
9. Distribution of Source name
10. Sample AQI Averaged over in hours
11. AQI variation with time
12. Country Heatmap
13. Animation
### Load packages
```
# Load packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.basemap import Basemap
import folium
import folium.plugins as plugins
import warnings
warnings.filterwarnings('ignore')
pd.options.display.max_rows =10
%matplotlib inline
```
### Bigquery
BigQuery is a RESTful web service that enables interactive analysis of massive datasets, working in conjunction with Google Storage. It is an Infrastructure as a Service offering that may be used complementarily with MapReduce.
```
# Customized query helper, available exclusively in Kaggle
import bq_helper
# Helper object
openAQ = bq_helper.BigQueryHelper(active_project='bigquery-public-data',
dataset_name='openaq')
# List of table
openAQ.list_tables()
#Schema
openAQ.table_schema('global_air_quality')
```
### Table display
```
openAQ.head('global_air_quality')
# Summary statistics
query = """SELECT value,averaged_over_in_hours
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³'
"""
p1 = openAQ.query_to_pandas(query)
p1.describe()
```
# Air Quality Index Range
* [AQI Range](http://aqicn.org/faq/2013-09-09/revised-pm25-aqi-breakpoints/)
<center><img src = 'https://campuspress.yale.edu/datadriven/files/2012/03/AQI-1024x634-1ybtu6l.png '><center>
The range of the AQI is 0-500, so let's limit the data to that range; in previous kernels these outlier data points were not removed.
```
query = """SELECT value,country
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value < 0
"""
p1 = openAQ.query_to_pandas(query)
p1.describe().T
```
There are more than 100 rows with a value less than 0. The lowest value is -999000, which is an outlier data point. An **Air Quality Meter** is a digital instrument; if a meter shows an error value, its sensor is disconnected or faulty.
```
query2 = """SELECT value,country,pollutant
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value > 0
"""
p2 = openAQ.query_to_pandas(query2)
print('0.99 Quantile',p2['value'].quantile(0.99))
p2.describe().T
p2[p2['value']>10000]
```
Country
* MK is *Macedonia* [wiki](https://en.wikipedia.org/wiki/Republic_of_Macedonia)
* CL is *Chile* [Wiki](https://en.wikipedia.org/wiki/Chile)
>In both countries some natural disaster may have happened, so the AQI is very high.
We will discard values greater than 10000, which are outlier data points.
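Equivalently, the same cut can be applied on the DataFrame side; a minimal sketch using `p2` as queried above:
```
clean = p2[(p2['value'] >= 0) & (p2['value'] <= 10000)]
clean.describe().T
```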
### Distribution of country listed in AQI
```
query = """SELECT country,COUNT(country) as `count`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY country
HAVING COUNT(country) >10
ORDER BY `count`
"""
cnt = openAQ.query_to_pandas_safe(query)
cnt.tail()
plt.style.use('bmh')
plt.figure(figsize=(14,4))
sns.barplot(cnt['country'], cnt['count'], palette='magma')
plt.xticks(rotation=45)
plt.title('Distribution of country listed in data');
```
## Location
We now look at the different locations where air quality measurements are taken. The location data consists of latitude, longitude, and city.
```
#Average pollution of air by countries
query = """SELECT AVG(value) as `Average`,country
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY country
ORDER BY Average DESC
"""
cnt = openAQ.query_to_pandas(query)
plt.figure(figsize=(14,4))
sns.barplot(cnt['country'],cnt['Average'], palette= sns.color_palette('gist_heat',len(cnt)))
plt.xticks(rotation=90)
plt.title('Average pollution of air by countries in unit $ug/m^3$')
plt.ylabel('Average AQI in $ug/m^3$');
```
* Countries PL (Poland) and IN (India) are the top air polluters
***
### AQI measurement center
```
query = """SELECT city,latitude,longitude,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY latitude,city,longitude
"""
location = openAQ.query_to_pandas_safe(query)
#Location AQI measurement center
m = folium.Map(location = [20,10],tiles='Mapbox Bright',zoom_start=2)
# add markers one by one to the map
for i in range(0,500):
folium.Marker(location = [location.iloc[i]['latitude'],location.iloc[i]['longitude']],\
popup=location.iloc[i]['city']).add_to(m)
m # DRAW MAP
```
We find that there are many air quality index measurement units across the *US* and *Europe*. There are few measurement centers on the *African* continent, and we can hardly find any measuring centers in the Middle East or Russia.
### Air Quality Index value distribution Map view
```
query = """SELECT city,latitude,longitude,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY latitude,city,longitude
"""
location = openAQ.query_to_pandas_safe(query)
location.dropna(axis=0, inplace=True)
plt.style.use('ggplot')
f,ax = plt.subplots(figsize=(14,10))
m1 = Basemap(projection='cyl', llcrnrlon=-180, urcrnrlon=180, llcrnrlat=-90, urcrnrlat=90,
resolution='c',lat_ts=True)
m1.drawmapboundary(fill_color='#A6CAE0', linewidth=0)
m1.fillcontinents(color='grey', alpha=0.3)
m1.drawcoastlines(linewidth=0.1, color="white")
m1.shadedrelief()
m1.bluemarble(alpha=0.4)
avg = location['Average']
m1loc = m1(location['latitude'].tolist(),location['longitude'])
m1.scatter(m1loc[1],m1loc[0],lw=3,alpha=0.5,zorder=3,cmap='coolwarm', c=avg)
plt.title('Average air quality index value in unit $ug/m^3$')
m1.colorbar(label=' Average AQI value in unit $ug/m^3$');
```
### US
```
#USA location
query = """SELECT
MAX(latitude) as `max_lat`,
MIN(latitude) as `min_lat`,
MAX(longitude) as `max_lon`,
MIN(longitude) as `min_lon`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'US' """
us_loc = openAQ.query_to_pandas_safe(query)
us_loc
query = """ SELECT city,latitude,longitude,averaged_over_in_hours,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'US' AND unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY latitude,city,longitude,averaged_over_in_hours,country """
us_aqi = openAQ.query_to_pandas_safe(query)
# USA
min_lat = us_loc['min_lat']
max_lat = us_loc['max_lat']
min_lon = us_loc['min_lon']
max_lon = us_loc['max_lon']
plt.figure(figsize=(14,8))
m2 = Basemap(projection='cyl', llcrnrlon=min_lon, urcrnrlon=max_lon, llcrnrlat=min_lat, urcrnrlat=max_lat,
resolution='c',lat_ts=True)
m2.drawcounties()
m2.drawmapboundary(fill_color='#A6CAE0', linewidth=0)
m2.fillcontinents(color='grey', alpha=0.3)
m2.drawcoastlines(linewidth=0.1, color="white")
m2.drawstates()
m2.bluemarble(alpha=0.4)
avg = (us_aqi['Average'])
m2loc = m2(us_aqi['latitude'].tolist(),us_aqi['longitude'])
m2.scatter(m2loc[1],m2loc[0],c = avg,lw=3,alpha=0.5,zorder=3,cmap='rainbow')
m2.colorbar(label = 'Average AQI value in unit $ug/m^3$')
plt.title('Average air quality index in unit $ug/m^3$ of the US');
```
The AQI of the US ranges from 0 to 400; most city data points are below 100.
### India
```
#INDIA location
query = """SELECT
MAX(latitude) as `max_lat`,
MIN(latitude) as `min_lat`,
MAX(longitude) as `max_lon`,
MIN(longitude) as `min_lon`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'IN' """
in_loc = openAQ.query_to_pandas_safe(query)
in_loc
query = """ SELECT city,latitude,longitude,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'IN' AND unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY latitude,city,longitude,country """
in_aqi = openAQ.query_to_pandas_safe(query)
# INDIA
min_lat = in_loc['min_lat']-5
max_lat = in_loc['max_lat']+5
min_lon = in_loc['min_lon']-5
max_lon = in_loc['max_lon']+5
plt.figure(figsize=(14,8))
m3 = Basemap(projection='cyl', llcrnrlon=min_lon, urcrnrlon=max_lon, llcrnrlat=min_lat, urcrnrlat=max_lat,
resolution='c',lat_ts=True)
m3.drawcounties()
m3.drawmapboundary(fill_color='#A6CAE0', linewidth=0)
m3.fillcontinents(color='grey', alpha=0.3)
m3.drawcoastlines(linewidth=0.1, color="white")
m3.drawstates()
avg = in_aqi['Average']
m3loc = m3(in_aqi['latitude'].tolist(),in_aqi['longitude'])
m3.scatter(m3loc[1],m3loc[0],c = avg,alpha=0.5,zorder=5,cmap='rainbow')
m3.colorbar(label = 'Average AQI value in unit $ug/m^3$')
plt.title('Average air quality index in unit $ug/m^3$ of India');
```
### Distribution of pollutant and unit
```
# Unit query
query = """SELECT unit,COUNT(unit) as `count`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY unit
"""
unit = openAQ.query_to_pandas(query)
# Pollutant query
query = """SELECT pollutant,COUNT(pollutant) as `count`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY pollutant
"""
poll_count = openAQ.query_to_pandas_safe(query)
plt.style.use('fivethirtyeight')
plt.style.use('bmh')
f, ax = plt.subplots(1,2,figsize = (14,5))
ax1,ax2= ax.flatten()
ax1.pie(x=unit['count'],labels=unit['unit'],shadow=True,autopct='%1.1f%%',explode=[0,0.1],\
colors=sns.color_palette('hot',2),startangle=90,)
ax1.set_title('Distribution of measurement unit')
explode = np.arange(0,0.1)
ax2.pie(x=poll_count['count'],labels=poll_count['pollutant'], shadow=True, autopct='%1.1f%%',\
colors=sns.color_palette('Set2',5),startangle=60,)
ax2.set_title('Distribution of pollutants in air');
```
* The most popular unit of measurement of air quality is $ug/m^3$.
* $O_3$ accounts for a 23% share of the pollutants in the air.
***
### Pollutant Statistics
```
query = """ SELECT pollutant,
AVG(value) as `Average`,
COUNT(value) as `Count`,
MIN(value) as `Min`,
MAX(value) as `Max`,
SUM(value) as `Sum`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY pollutant
"""
cnt = openAQ.query_to_pandas_safe(query)
cnt
```
We find:
* CO (carbon monoxide) has a very wide range of values.
* The sum for CO is the highest in the list.
* Except for the average AQI of CO, all averages are below 54 $ug/m^3$.
### Pollutants by Country
```
query = """SELECT AVG(value) as`Average`,country, pollutant
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³'AND value BETWEEN 0 AND 10000
GROUP BY country,pollutant"""
p1 = openAQ.query_to_pandas_safe(query)
# By country
p1_pivot = p1.pivot(index = 'country',values='Average', columns= 'pollutant')
plt.figure(figsize=(14,15))
ax = sns.heatmap(p1_pivot, lw=0.01, cmap=sns.color_palette('Reds',500))
plt.yticks(rotation=30)
plt.title('Heatmap average AQI by Pollutant');
f,ax = plt.subplots(figsize=(14,6))
sns.barplot(p1[p1['pollutant']=='co']['country'],p1[p1['pollutant']=='co']['Average'],)
plt.title('CO AQI in different countries')
plt.xticks(rotation=90);
f,ax = plt.subplots(figsize=(14,6))
sns.barplot(p1[p1['pollutant']=='pm25']['country'],p1[p1['pollutant']=='pm25']['Average'])
plt.title('pm25 AQI in different countries')
plt.xticks(rotation=90);
```
### Distribution of Source name
The institutions where the AQI is measured.
```
#source_name
query = """ SELECT source_name, COUNT(source_name) as `count`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY source_name
ORDER BY count DESC
"""
source_name = openAQ.query_to_pandas_safe(query)
plt.figure(figsize=(14,10))
sns.barplot(source_name['count'][:20], source_name['source_name'][:20],palette = sns.color_palette('YlOrBr'))
plt.title('Distribution of Top 20 source_name')
#plt.axvline(source_name['count'].median())
plt.xticks(rotation=90);
```
We find:
* AirNow is the top source in the list.
* European countries rank high in the list; their institution names start with 'EEA country'.
***
### Sample AQI Averaged over in hours
The distribution of the averaging window (in hours) over which AQI samples are taken.
```
query = """SELECT averaged_over_in_hours, COUNT(*) as `count`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY averaged_over_in_hours
ORDER BY count DESC """
cnt = openAQ.query_to_pandas(query)
#cnt['averaged_over_in_hours'] = cnt['averaged_over_in_hours'].astype('category')
plt.figure(figsize=(14,5))
sns.barplot( cnt['averaged_over_in_hours'],cnt['count'], palette= sns.color_palette('brg'))
plt.title('Distribution of quality measurements per hour');
```
We find that air quality is most commonly measured every hour.
***
### AQI in ppm
```
query = """SELECT AVG(value) as`Average`,country,
EXTRACT(YEAR FROM timestamp) as `Year`,
pollutant
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'ppm'
GROUP BY country,Year,pollutant"""
pol_aqi = openAQ.query_to_pandas_safe(query)
# Average AQI by country
plt.figure(figsize=(14,8))
sns.barplot(pol_aqi['country'], pol_aqi['Average'])
plt.title('Distribution of average AQI by country $ppm$');
```
### AQI variation with time
```
query = """SELECT EXTRACT(YEAR FROM timestamp) as `Year`,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY EXTRACT(YEAR FROM timestamp)
"""
quality = openAQ.query_to_pandas(query)
query = """SELECT EXTRACT(MONTH FROM timestamp) as `Month`,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY EXTRACT(MONTH FROM timestamp)
"""
quality1 = openAQ.query_to_pandas(query)
# plot
f,ax = plt.subplots(1,2, figsize= (14,6),sharey=True)
ax1,ax2 = ax.flatten()
sns.barplot(quality['Year'],quality['Average'],ax=ax1)
ax1.set_title('Distribution of average AQI by year')
sns.barplot(quality1['Month'],quality1['Average'], ax=ax2 )
ax2.set_title('Distribution of average AQI by month')
ax2.set_ylabel('');
# by year & month
query = """SELECT EXTRACT(YEAR from timestamp) as `Year`,
EXTRACT(MONTH FROM timestamp) as `Month`,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY year,Month"""
aqi_year = openAQ.query_to_pandas_safe(query)
# By month in year
plt.figure(figsize=(14,8))
sns.pointplot(aqi_year['Month'],aqi_year['Average'],hue = aqi_year['Year'])
plt.title('Distribution of average AQI by month');
```
We find:
* the data available for particular years is incomplete
* data for the years 2016 and 2017 is available completely
### Country Heatmap
```
# Heatmap by country
query = """SELECT AVG(value) as `Average`,
EXTRACT(YEAR FROM timestamp) as `Year`,
country
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY country,Year
"""
coun_aqi = openAQ.query_to_pandas_safe(query)
coun_pivot = coun_aqi.pivot(index='country', columns='Year', values='Average').fillna(0)
# Heatmap by country and year
plt.figure(figsize=(14,15))
sns.heatmap(coun_pivot, lw=0.01, cmap=sns.color_palette('Reds',len(coun_pivot)))
plt.yticks(rotation=30)
plt.title('Heatmap average AQI by YEAR');
```
### Animation
```
query = """SELECT EXTRACT(YEAR FROM timestamp) as `Year`,AVG(value) as `Average`,
latitude,longitude
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY Year, latitude,longitude
"""
p1 = openAQ.query_to_pandas_safe(query)
from matplotlib import animation,rc
import io
import base64
from IPython.display import HTML, display
import warnings
warnings.filterwarnings('ignore')
fig = plt.figure(figsize=(14,10))
plt.style.use('ggplot')
def animate(Year):
ax = plt.axes()
ax.clear()
ax.set_title('Average AQI in Year: '+str(Year))
m4 = Basemap(llcrnrlat=-90, urcrnrlat=90, llcrnrlon=-180,urcrnrlon=180,projection='cyl')
m4.drawmapboundary(fill_color='#A6CAE0', linewidth=0)
m4.fillcontinents(color='grey', alpha=0.3)
m4.drawcoastlines(linewidth=0.1, color="white")
m4.shadedrelief()
lat_y = list(p1[p1['Year'] == Year]['latitude'])
lon_y = list(p1[p1['Year'] == Year]['longitude'])
lat,lon = m4(lat_y,lon_y)
avg = p1[p1['Year'] == Year]['Average']
m4.scatter(lon,lat,c = avg,lw=2, alpha=0.3,cmap='hot_r')
ani = animation.FuncAnimation(fig,animate,list(p1['Year'].unique()), interval = 1500)
ani.save('animation.gif', writer='imagemagick', fps=1)
plt.close(1)
filename = 'animation.gif'
video = io.open(filename, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<img src="data:image/gif;base64,{0}" type="gif" />'''.format(encoded.decode('ascii')))
# Continued
```
### Thank you for visiting, please upvote if you like it.
| github_jupyter |
# Breast Cancer Wisconsin (Diagnostic) Data Set
* **[T81-558: Applications of Deep Learning](https://sites.wustl.edu/jeffheaton/t81-558/)**
* Dataset provided by [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29)
* [Download Here](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/data/wcbreast.csv)
This is a popular dataset that contains columns that might be useful to determine if a tumor is breast cancer or not. There are a total of 32 columns and 569 rows. This dataset is used in class to introduce binary (two class) classification. The following fields are present:
* **id** - Identity column, not really useful to a neural network.
* **diagnosis** - Diagnosis, B=Benign, M=Malignant.
* **mean_radius** - Potentially predictive field.
* **mean_texture** - Potentially predictive field.
* **mean_perimeter** - Potentially predictive field.
* **mean_area** - Potentially predictive field.
* **mean_smoothness** - Potentially predictive field.
* **mean_compactness** - Potentially predictive field.
* **mean_concavity** - Potentially predictive field.
* **mean_concave_points** - Potentially predictive field.
* **mean_symmetry** - Potentially predictive field.
* **mean_fractal_dimension** - Potentially predictive field.
* **se_radius** - Potentially predictive field.
* **se_texture** - Potentially predictive field.
* **se_perimeter** - Potentially predictive field.
* **se_area** - Potentially predictive field.
* **se_smoothness** - Potentially predictive field.
* **se_compactness** - Potentially predictive field.
* **se_concavity** - Potentially predictive field.
* **se_concave_points** - Potentially predictive field.
* **se_symmetry** - Potentially predictive field.
* **se_fractal_dimension** - Potentially predictive field.
* **worst_radius** - Potentially predictive field.
* **worst_texture** - Potentially predictive field.
* **worst_perimeter** - Potentially predictive field.
* **worst_area** - Potentially predictive field.
* **worst_smoothness** - Potentially predictive field.
* **worst_compactness** - Potentially predictive field.
* **worst_concavity** - Potentially predictive field.
* **worst_concave_points** - Potentially predictive field.
* **worst_symmetry** - Potentially predictive field.
* **worst_fractal_dimension** - Potentially predictive field.
The following code shows 10 sample rows.
```
import pandas as pd
import numpy as np
import os

path = "./data/"
filename = os.path.join(path,"wcbreast_wdbc.csv")
df = pd.read_csv(filename,na_values=['NA','?'])
# Shuffle
np.random.seed(42)
df = df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
df[0:10]
```
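Since **diagnosis** is the binary target, a common next step is to encode it numerically before handing the remaining columns to a network. A minimal sketch (column names taken from the field list above):
```
# Encode the diagnosis column as 0/1 and split features from the target
df['diagnosis'] = df['diagnosis'].map({'B': 0, 'M': 1})
x = df.drop(columns=['id', 'diagnosis']).values
y = df['diagnosis'].values
print(x.shape, y.shape)
```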
| github_jupyter |
# Digital Signal Processing
This collection of [jupyter](https://jupyter.org/) notebooks introduces various topics of [Digital Signal Processing](https://en.wikipedia.org/wiki/Digital_signal_processing). The theory is accompanied by computational examples written in [IPython 3](http://ipython.org/). The sources of the notebooks, as well as installation and usage instructions can be found on [GitHub](https://github.com/lev1khachatryan/Digital_Signal_Processing).
## Table of Contents
#### 1. Introduction
* [Introduction](introduction/introduction.ipynb)
#### 2. Spectral Analysis of Deterministic Signals
* [The Leakage-Effect](spectral_analysis_deterministic_signals/leakage_effect.ipynb)
* [Window Functions](spectral_analysis_deterministic_signals/window_functions.ipynb)
* [Zero-Padding](spectral_analysis_deterministic_signals/zero_padding.ipynb)
* [Short-Time Fourier Transform](spectral_analysis_deterministic_signals/stft.ipynb)
* [Summary](spectral_analysis_deterministic_signals/summary.ipynb)
#### 3. Random Signals
* [Introduction](random_signals/introduction.ipynb)
* [Amplitude Distributions](random_signals/distributions.ipynb)
* [Ensemble Averages](random_signals/ensemble_averages.ipynb)
* [Stationary and Ergodic Processes](random_signals/stationary_ergodic.ipynb)
* [Correlation Functions](random_signals/correlation_functions.ipynb)
* [Power Spectral Densities](random_signals/power_spectral_densities.ipynb)
* [Independent Processes](random_signals/independent.ipynb)
* [Important Amplitude Distributions](random_signals/important_distributions.ipynb)
* [White Noise](random_signals/white_noise.ipynb)
* [Superposition of Random Signals](random_signals/superposition.ipynb)
#### 4. Random Signals and LTI Systems
* [Introduction](random_signals_LTI_systems/introduction.ipynb)
* [Linear Mean](random_signals_LTI_systems/linear_mean.ipynb)
* [Correlation Functions](random_signals_LTI_systems/correlation_functions.ipynb)
* [Example: Measurement of Acoustic Impulse Responses](random_signals_LTI_systems/acoustic_impulse_response_measurement.ipynb)
* [Power Spectral Densities](random_signals_LTI_systems/power_spectral_densities.ipynb)
* [Wiener Filter](random_signals_LTI_systems/wiener_filter.ipynb)
#### 5. Spectral Estimation of Random Signals
* [Introduction](spectral_estimation_random_signals/introduction.ipynb)
* [Periodogram](spectral_estimation_random_signals/periodogram.ipynb)
* [Welch-Method](spectral_estimation_random_signals/welch_method.ipynb)
* [Parametric Methods](spectral_estimation_random_signals/parametric_methods.ipynb)
#### 6. Quantization
* [Introduction](quantization/introduction.ipynb)
* [Characteristic of Linear Uniform Quantization](quantization/linear_uniform_characteristic.ipynb)
* [Quantization Error of Linear Uniform Quantization](quantization/linear_uniform_quantization_error.ipynb)
* [Example: Requantization of a Speech Signal](quantization/requantization_speech_signal.ipynb)
* [Noise Shaping](quantization/noise_shaping.ipynb)
* [Oversampling](quantization/oversampling.ipynb)
* [Example: Non-Linear Quantization of a Speech Signal](quantization/nonlinear_quantization_speech_signal.ipynb)
#### 7. Realization of Non-Recursive Filters
* [Introduction](nonrecursive_filters/introduction.ipynb)
* [Fast Convolution](nonrecursive_filters/fast_convolution.ipynb)
* [Segmented Convolution](nonrecursive_filters/segmented_convolution.ipynb)
* [Quantization Effects](nonrecursive_filters/quantization_effects.ipynb)
#### 8. Realization of Recursive Filters
* [Introduction](recursive_filters/introduction.ipynb)
* [Direct Form Structures](recursive_filters/direct_forms.ipynb)
* [Cascaded Structures](recursive_filters/cascaded_structures.ipynb)
* [Quantization of Filter Coefficients](recursive_filters/quantization_of_coefficients.ipynb)
* [Quantization of Variables and Operations](recursive_filters/quantization_of_variables.ipynb)
#### 9. Design of Digital Filters
* [Design of Non-Recursive Filters by the Window Method](filter_design/window_method.ipynb)
* [Design of Non-Recursive Filters by the Frequency Sampling Method](filter_design/frequency_sampling_method.ipynb)
* [Design of Recursive Filters by the Bilinear Transform](filter_design/bilinear_transform.ipynb)
* [Example: Non-Recursive versus Recursive Filter](filter_design/comparison_non_recursive.ipynb)
* [Examples: Typical IIR-Filters in Audio](filter_design/audiofilter.ipynb)
#### Reference Cards
* [Reference Card Discrete Signals and Systems](reference_cards/RC_discrete_signals_and_systems.pdf)
* [Reference Card Random Signals and LTI Systems](reference_cards/RC_random_signals_and_LTI_systems.pdf)
| github_jupyter |
# Downloading GNSS station locations and tropospheric zenith delays
**Author**: Simran Sangha, David Bekaert - Jet Propulsion Laboratory
This notebook provides an overview of the functionality included in the **`raiderDownloadGNSS.py`** program. Specifically, we outline examples on how to access and store GNSS station location and tropospheric zenith delay information over a user defined area of interest and span of time. In this notebook, we query GNSS stations spanning northern California between 2016 and 2019.
We will outline the following downloading options to access station location and zenith delay information:
- For a specified range of years
- For a specified time of day
- Confined to a specified geographic bounding box
- Confined to an apriori defined list of GNSS stations
<div class="alert alert-info">
<b>Terminology:</b>
- *GNSS*: Stands for Global Navigation Satellite System. Describes any satellite constellation providing global or regional positioning, navigation, and timing services.
- *tropospheric zenith delay*: The precise atmospheric delay satellite signals experience when propagating through the troposphere.
</div>
## Table of Contents:
<a id='example_TOC'></a>
[**Overview of the raiderDownloadGNSS.py program**](#overview)
- [1. Define spatial extent and/or apriori list of stations](#overview_1)
- [2. Run parameters](#overview_2)
[**Examples of the raiderDownloadGNSS.py program**](#examples)
- [Example 1. Access data for specified year, time-step, and time of day, and across specified spatial subset](#example_1)
- [Example 2. Access data for specified range of years and time of day, and across specified spatial subset, with the maximum allowed CPUs](#example_2)
## Prep: Initial setup of the notebook
Below we set up the directory structure for this notebook exercise. In addition, we load the required modules into our python environment using the **`import`** command.
```
import os
import numpy as np
import matplotlib.pyplot as plt
## Defining the home and data directories
tutorial_home_dir = os.path.abspath(os.getcwd())
work_dir = os.path.abspath(os.getcwd())
print("Tutorial directory: ", tutorial_home_dir)
print("Work directory: ", work_dir)
# Verifying if RAiDER is installed correctly
try:
from RAiDER import downloadGNSSDelays
except:
raise Exception('RAiDER is missing from your PYTHONPATH')
os.chdir(work_dir)
```
# Supported GNSS provider
Currently **`raiderDownloadGNSS.py`** is able to access the UNR Geodetic Laboratory GNSS archive. This archive requires neither a license agreement nor a user account, and no special privileges are necessary.
Data naming conventions are outlined here: http://geodesy.unr.edu/gps_timeseries/README_trop2.txt
## Overview of the raiderDownloadGNSS.py program
<a id='overview'></a>
The **`raiderDownloadGNSS.py`** program allows for easy access of GNSS station locations and tropospheric zenith delays. Running **`raiderDownloadGNSS.py`** with the **`-h`** option will show the parameter options and outline several basic, practical examples.
Let us explore these options:
```
!raiderDownloadGNSS.py -h
```
### 1. Define spatial extent and/or apriori list of stations
<a id='overview_1'></a>
#### Geographic bounding box (**`--bounding_box BOUNDING_BOX`**)
An area of interest may be specified as `SNWE` coordinates using the **`--bounding_box`** option. Coordinates should be specified as a space delimited string surrounded by quotes. This example below would restrict the query to stations over northern California:
**`--bounding_box '36 40 -124 -119'`**
If no area of interest is specified, the entire global archive will be queried.
#### Textfile with apriori list of station names (**`--station_file STATION_FILE`**)
The query may be restricted to an apriori defined list of stations. To pass this list to the program, a text file containing a list of 4-char station IDs separated by newlines must be passed as an argument to the **`--station_file`** option.
If used in conjunction with the **`--bounding_box`** option outlined above, then listed stations which fall outside of the specified geographic bounding box will be discarded.
As an example refer to the text-file below, which would be passed as so: **`--station_file support_docs/CA_subset.txt`**
```
!head support_docs/CA_subset.txt
```
### 2. Run parameters
<a id='overview_2'></a>
#### Output directory (**`--out OUT`**)
Specify directory to deposit all outputs into with **`--out`**. Absolute and relative paths are both supported.
By default, outputs will be deposited into the current working directory where the program is launched.
#### GPS repository (**`--gpsrepo GPS_REPO`**)
Specify GPS repository you wish to query with **`--gpsrepo`**.
NOTE that currently only the following archive is supported: UNR
#### Date(s) and step (**`--date DATELIST [DATELIST ...]`**)
**REQUIRED** argument. Specify valid date(s) and a step in days with **`--date DATE DATE STEP`** to access delays (format YYYYMMDD YYYYMMDD DD). This can be a single date (e.g. '20200101'), two dates, in which case data for every day between and including them is queried (e.g. '2017 2019'), or two dates and a step, in which case data is queried every that many days (e.g. '2019 2019 12').
Note that this option mirrors a similar option found in the script `raiderDelay.py`, which downloads weather model data for specified spatiotemporal constraints (i.e. the counterpart to `raiderDownloadGNSS.py`, which downloads GNSS data).
#### Time of day (**`--returntime RETURNTIME`**)
Return tropospheric zenith delays closest to 'HH:MM:SS' time specified with **`--returntime`**.
Note that data is generally archived in 3 second increments. Thus if a time outside of this increment is specified (e.g. '00:00:02'), then the input is rounded to the closest 3 second increment (e.g. '00:00:03')
If not specified, the delays for all times of the day will be returned.
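The rounding behavior can be illustrated with a small sketch (a hypothetical helper, not actual RAiDER code):
```
def round_to_increment(seconds, step=3):
    # Round a time-of-day in seconds to the nearest multiple of `step`
    return int(round(seconds / step) * step)

print(round_to_increment(2))  # 3, matching the '00:00:02' -> '00:00:03' example
```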
#### Physically download data (**`--download`**)
By default all data is virtually accessed from external zip and tarfiles. If **`--download`** is specified, these external files will be locally downloaded and stored.
Note that this option is **not recommended** for most purposes, as it is not necessary for the statistical analyses; the code is designed to handle the data virtually.
#### Number of CPUs to be used (**`--cpus NUMCPUS`**)
Specify number of cpus to be used for multiprocessing with **`--cpus`**. For most cases, multiprocessing is essential in order to access data and perform statistical analyses within a reasonable amount of time.
May specify **`--cpus all`** at your own discretion in order to leverage all available CPUs on your system.
By default 8 CPUs will be used.
#### Verbose mode (**`--verbose`**)
Specify **`--verbose`** to print all statements through entire routine. For example, print each station and year within a loop as it is being accessed by the program.
## Examples of the **`raiderDownloadGNSS.py`** program
<a id='examples'></a>
### Example 1. Access data for specified year, time-step, and time of day, and across specified spatial subset <a id='example_1'></a>
Virtually access GNSS station location and zenith delay information for the year '2016', for every 12 days, and at a UTC time of day 'HH:MM:SS' of '00:00:00', and across a geographic bounding box '36 40 -124 -119' spanning over Northern California.
The footprint of the specified geographic bounding box is depicted in **Fig. 1**.
<img src="support_docs/bbox_footprint.png" alt="footprint" width="700">
<center><b>Fig. 1</b> Footprint of the geographic bounding box used in examples 1 and 2. </center>
```
!raiderDownloadGNSS.py --out products --date 20160101 20161231 12 --returntime '00:00:00' --bounding_box '36 40 -124 -119'
```
Now we can take a look at the generated products:
```
!ls products
```
A list of coordinates for all stations found within the specified geographic bounding box is recorded within **`gnssStationList_overbbox.csv`**:
```
!head products/gnssStationList_overbbox.csv
```
A list of all URL paths for zipfiles containing the tropospheric zenith delay information for a given station and year is recorded within **`gnssStationList_overbbox_withpaths.csv`**:
```
!head products/gnssStationList_overbbox_withpaths.csv
```
The zipfiles listed within **`gnssStationList_overbbox_withpaths.csv`** are virtually accessed and queried for internal tarfiles that archive all tropospheric zenith delay information acquired for a given day of the year.
Since an explicit time of day '00:00:00' and a time-step of 12 days were specified above, only data every 12 days, at the time of day '00:00:00', is passed along from each tarfile. If no data is available at that time for a given day, empty strings are passed.
This information is then appended to a primary file allocated and named for each GNSS station, stored under **`GPS_delays`**:
```
!ls products/GPS_delays
```
Finally, all of the extracted tropospheric zenith delay information stored under **`GPS_delays`** is concatenated with the GNSS station location information stored under **`gnssStationList_overbbox.csv`** into a primary comprehensive file **`UNRcombinedGPS_ztd.csv`**. In this file, the prefix `UNR` denotes the GNSS repository that has been queried, which again may be toggled with the **`--gpsrepo`** option.
**`UNRcombinedGPS_ztd.csv`** may in turn be directly used to perform basic statistical analyses using **`raiderStats.py`**. Please refer to the companion notebook **`raiderStats/raiderStats_tutorial.ipynb`** for a comprehensive outline of the program and examples.
```
!head products/UNRcombinedGPS_ztd.csv
```
### Example 2. Access data for specified range of years and time of day, and across specified spatial subset, with the maximum allowed CPUs <a id='example_2'></a>
Virtually access GNSS station location and zenith delay information for the years '2016-2019', for every day, at a UTC time of day 'HH:MM:SS' of '00:00:00', and across a geographic bounding box '36 40 -124 -119' spanning over Northern California.
The footprint of the specified geographic bounding box is again depicted in **Fig. 1**.
In addition to querying for multiple years, we will also experiment with using the maximum number of allowed CPUs to save some time! Recall again that the default number of CPUs used for parallelization is 8.
```
!rm -rf products
!raiderDownloadGNSS.py --out products --date 20160101 20191231 --returntime '00:00:00' --bounding_box '36 40 -124 -119' --cpus all
```
Outputs are organized again in a fashion consistent with that outlined under **Ex. 1**.
However, we have now queried data spanning from 2016 through 2019. Thus, **`UNRcombinedGPS_ztd.csv`** now contains GNSS station data recorded as late as 2019:
```
!grep -m 10 '2019-' products/UNRcombinedGPS_ztd.csv
```
| github_jupyter |
### Let's load a Handwritten Digit classifier we'll be building very soon!
```
import cv2
import numpy as np
from keras.datasets import mnist
from keras.models import load_model
classifier = load_model('/home/deeplearningcv/DeepLearningCV/Trained Models/mnist_simple_cnn.h5')
# loads the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
def draw_test(name, pred, input_im):
BLACK = [0,0,0]
expanded_image = cv2.copyMakeBorder(input_im, 0, 0, 0, imageL.shape[0] ,cv2.BORDER_CONSTANT,value=BLACK)
expanded_image = cv2.cvtColor(expanded_image, cv2.COLOR_GRAY2BGR)
cv2.putText(expanded_image, str(pred), (152, 70) , cv2.FONT_HERSHEY_COMPLEX_SMALL,4, (0,255,0), 2)
cv2.imshow(name, expanded_image)
for i in range(0,10):
rand = np.random.randint(0,len(x_test))
input_im = x_test[rand]
imageL = cv2.resize(input_im, None, fx=4, fy=4, interpolation = cv2.INTER_CUBIC)
input_im = input_im.reshape(1,28,28,1)
## Get Prediction
res = str(classifier.predict_classes(input_im, 1, verbose = 0)[0])
draw_test("Prediction", res, imageL)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
### Testing our classifier on a real image
```
import numpy as np
import cv2
from preprocessors import x_cord_contour, makeSquare, resize_to_pixel
image = cv2.imread('images/numbers.jpg')
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
cv2.imshow("image", image)
cv2.waitKey(0)
# Blur image then find edges using Canny
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
#cv2.imshow("blurred", blurred)
#cv2.waitKey(0)
edged = cv2.Canny(blurred, 30, 150)
#cv2.imshow("edged", edged)
#cv2.waitKey(0)
# Find Contours
_, contours, _ = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Sort contours left to right using their x coordinates
contours = sorted(contours, key = x_cord_contour, reverse = False)
# Create empty array to store entire number
full_number = []
# loop over the contours
for c in contours:
# compute the bounding box for the rectangle
(x, y, w, h) = cv2.boundingRect(c)
if w >= 5 and h >= 25:
roi = blurred[y:y + h, x:x + w]
ret, roi = cv2.threshold(roi, 127, 255,cv2.THRESH_BINARY_INV)
roi = makeSquare(roi)
roi = resize_to_pixel(28, roi)
cv2.imshow("ROI", roi)
roi = roi / 255.0
roi = roi.reshape(1,28,28,1)
## Get Prediction
res = str(classifier.predict_classes(roi, 1, verbose = 0)[0])
full_number.append(res)
cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.putText(image, res, (x , y + 155), cv2.FONT_HERSHEY_COMPLEX, 2, (255, 0, 0), 2)
cv2.imshow("image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
print ("The number is: " + ''.join(full_number))
```
### Training this Model
```
from keras.datasets import mnist
from keras.utils import np_utils
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
# Training Parameters
batch_size = 128
epochs = 5
# loads the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Lets store the number of rows and columns
img_rows = x_train[0].shape[0]
img_cols = x_train[0].shape[1]
# Getting our data in the right 'shape' needed for Keras
# We need to add a 4th dimension to our data, thereby changing our
# original image shape of (60000,28,28) to (60000,28,28,1)
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
# store the shape of a single image
input_shape = (img_rows, img_cols, 1)
# change our image type to float32 data type
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# Normalize our data by changing the range from (0 to 255) to (0 to 1)
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Now we one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
# Let's count the number of columns in our one-hot encoded matrix
print ("Number of Classes: " + str(y_test.shape[1]))
num_classes = y_test.shape[1]
num_pixels = x_train.shape[1] * x_train.shape[2]
# create model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss = 'categorical_crossentropy',
optimizer = keras.optimizers.Adadelta(),
metrics = ['accuracy'])
print(model.summary())
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
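To reuse the network without retraining (for example, in the classifier cell at the top of this section), the trained model can be persisted to disk. A minimal sketch; the file name mirrors the one loaded earlier, while the directory is an assumption about your layout:
```
# Persist the trained model so load_model() can restore it later
model.save('mnist_simple_cnn.h5')
```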
| github_jupyter |
# tutorial for reading a Gizmo snapshot
@author: Andrew Wetzel <arwetzel@gmail.com>
```
# First, move within a simulation directory, or point 'directory' below to a simulation directory.
# This directory should contain either a snapshot file
# snapshot_???.hdf5
# or a snapshot directory
# snapdir_???
# In general, the simulation directory also should contain a text file:
# m12*_center.txt
# that contains pre-computed galaxy center coordinates
# and rotation vectors to align with the principal axes of the galaxy,
# although that file is not required to read a snapshot.
# The simulation directory also may contain text files:
# m12*_LSR{0,1,2}.txt
# that contains the local standard of rest (LSR) coordinates
# used by Ananke in creating Gaia synthetic surveys.
# Ensure that your python path points to this python package, then:
import gizmo_read
directory = '.' # if running this notebook from within a simulation directory
#directory = 'm12i/' # if running higher-level directory
#directory = 'm12f/' # if running higher-level directory
#directory = 'm12m/' # if running higher-level directory
```
# read particle data from a snapshot
```
# read star particles (all properties)
part = gizmo_read.read.Read.read_snapshot(species='star', directory=directory)
# alternately, read all particle species (stars, gas, dark matter)
part = gizmo_read.read.Read.read_snapshot(species='all', directory=directory)
# alternately, read just stars and dark matter (or any combination of species)
part = gizmo_read.read.Read.read_snapshot(species=['star', 'dark'], directory=directory)
# alternately, read only a subset of particle properties (to save memory)
part = gizmo_read.read.Read.read_snapshot(species='star', properties=['position', 'velocity', 'mass'], directory=directory)
# also can use particle_subsample_factor to periodically sub-sample particles, to save memory
part = gizmo_read.read.Read.read_snapshot(species='all', directory=directory, particle_subsample_factor=10)
```
# species dictionary
```
# each particle species is stored as its own dictionary
# 'star' = stars, 'gas' = gas, 'dark' = dark matter, 'dark.2' = low-resolution dark matter
part.keys()
# properties of particles are stored as dictionary
# properties of star particles
for k in part['star'].keys():
print(k)
# properties of dark matter particles
for k in part['dark'].keys():
print(k)
# properties of gas particles
for k in part['gas'].keys():
print(k)
```
# particle coordinates
```
# 3-D position of star particle (particle number x dimension number) in cartesian coordinates [kpc physical]
# if directory contains file m12*_center.txt, this reader automatically reads this file and
# converts all positions to galactocentric coordinates, aligned with the principal axes of the galaxy
part['star']['position']
# you can convert these to cylindrical coordinates...
star_positions_cylindrical = gizmo_read.coordinate.get_positions_in_coordinate_system(
part['star']['position'], system_to='cylindrical')
print(star_positions_cylindrical)
# or spherical coordinates
star_positions_spherical = gizmo_read.coordinate.get_positions_in_coordinate_system(
part['star']['position'], system_to='spherical')
print(star_positions_spherical)
# 3-D velocity of star particle (particle number x dimension number) in cartesian coordinates [km/s]
part['star']['velocity']
# you can convert these to cylindrical coordinates...
star_velocities_cylindrical = gizmo_read.coordinate.get_velocities_in_coordinate_system(
part['star']['velocity'], part['star']['position'], system_to='cylindrical')
print(star_velocities_cylindrical)
# or spherical coordinates
star_velocities_spherical = gizmo_read.coordinate.get_velocities_in_coordinate_system(
part['star']['velocity'], part['star']['position'], system_to='spherical')
print(star_velocities_spherical)
# the galaxy center position [kpc comoving] and velocity [km/s] are stored via
print(part.center_position)
print(part.center_velocity)
# the rotation vectors to align with the principal axes are stored via
print(part.principal_axes_vectors)
```
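Since positions are already galactocentric when the center file is present, derived quantities are one line away. A minimal sketch computing each star's galactocentric radius from the arrays above:
```
import numpy as np

# 3-D galactocentric radius of each star [kpc physical]
star_radii = np.sqrt(np.sum(part['star']['position'] ** 2, axis=1))
print(star_radii.min(), star_radii.max())
```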
# LSR coordinates for mock
```
# you can read the assumed local standard of rest (LSR) coordinates used in the Ananke mock catalogs
# you need to input which LSR to use (currently 0, 1, or 2, because we use 3 per galaxy)
gizmo_read.read.Read.read_lsr_coordinates(part, directory=directory, lsr_index=0)
gizmo_read.read.Read.read_lsr_coordinates(part, directory=directory, lsr_index=1)
gizmo_read.read.Read.read_lsr_coordinates(part, directory=directory, lsr_index=2)
# the particle catalog can store one LSR at a time via
print(part.lsr_position)
print(part.lsr_velocity)
# you can convert coordinates to be relative to LSR via
star_positions_wrt_lsr = part['star']['position'] - part.lsr_position
star_velocities_wrt_lsr = part['star']['velocity'] - part.lsr_velocity
print(star_positions_wrt_lsr)
print(star_velocities_wrt_lsr)
```
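The same approach gives distances from the chosen LSR, which is often what a mock survey needs. A minimal sketch using the arrays computed above:
```
import numpy as np

# distance of each star from the selected LSR [kpc physical]
star_distances_wrt_lsr = np.sqrt(np.sum(star_positions_wrt_lsr ** 2, axis=1))
print(star_distances_wrt_lsr)
```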
# other particle properties
```
# mass of star particle [M_sun]
# note that star particles are created with an initial mass of ~7070 Msun,
# but because of stellar mass loss they can be less massive by z = 0
# a few star particles form from slightly higher-mass gas particles
# (because gas particles gain mass via stellar mass loss)
# so some star particles are a little more massive than 7070 Msun
part['star']['mass']
# formation scale-factor of star particle
part['star']['form.scalefactor']
# or more usefully, the current age of star particle (the lookback time to when it formed) [Gyr]
part['star']['age']
# gravitational potential at position of star particle [km^2 / s^2 physical]
# note: normalization is arbitrary
part['star']['potential']
# ID of star particle
# NOTE: Ananke uses/references the *index* (within this array) of star particles, *not* their ID!
# (because for technical reasons some star particles can end up with the same ID)
# So you generally should never have to use this ID!
part['star']['id']
```
# metallicities
```
# elemental abundance (metallicity) is stored natively as *linear mass fraction*
# one value for each element, in a particle_number x element_number array
# the first value is the mass fraction of all metals (everything not H, He)
# 0 = all metals (everything not H, He), 1 = He, 2 = C, 3 = N, 4 = O, 5 = Ne, 6 = Mg, 7 = Si, 8 = S, 9 = Ca, 10 = Fe
part['star']['massfraction']
# get individual elements by their index
# total metal mass fraction (everything not H, He) is index 0
print(part['star']['massfraction'][:, 0])
# iron is index 10
print(part['star']['massfraction'][:, 10])
# for convenience, this reader also stores 'metallicity' := log10(mass_fraction / mass_fraction_solar)
# where mass_fraction_solar is from Asplund et al 2009
print(part['star']['metallicity.total'])
print(part['star']['metallicity.fe'])
print(part['star']['metallicity.o'])
# see gizmo_read.constant for assumed solar values (Asplund et al 2009) and other constants
gizmo_read.constant.sun_composition
```
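As a sanity check, the stored 'metallicity.fe' values can be recomputed from the raw mass fractions. A minimal sketch; the solar iron mass fraction below is an assumed Asplund et al 2009 value, and the package's own constants (gizmo_read.constant) should be preferred where available:
```
import numpy as np

fe_solar = 1.31e-3  # assumed solar iron mass fraction (Asplund et al 2009)
fe_manual = np.log10(part['star']['massfraction'][:, 10] / fe_solar)
# should agree with the precomputed values up to the assumed solar constant
print(fe_manual)
print(part['star']['metallicity.fe'])
```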
# additional information stored in sub-dictionaries
```
# dictionary of 'header' information about the simulation
part.info
# dictionary of information about this snapshot's scale-factor, redshift, time, lookback-time
part.snapshot
# dictionary class of cosmological parameters, with function for cosmological conversions
part.Cosmology
```
See gizmo_read.constant for assumed (astro)physical constants used throughout.
See gizmo_read.coordinate for more coordinate transformation and zoom-in center utilities.
| github_jupyter |
<a href="https://colab.research.google.com/github/mostaphafakihi/Simulation/blob/main/PRsimulation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **Supermarket simulation project**
```
import numpy as np
from pandas import DataFrame
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import statistics
import math
#plot
plt.rcParams['figure.figsize'] = (12, 9)
# the random-number generator function (Wichmann-Hill)
def aleas(IX,IY,IZ):
IX[0] = 171 * ( IX[0] % 177 ) - 2 * (IX[0] // 177 )
IY[0] = 172 * ( IY[0] % 176 ) - 35 * ( IY[0] // 176 )
IZ[0] = 170 * ( IZ[0] % 178 ) - 63 * ( IZ[0] //178 )
if ( IX[0]<0 ):
IX[0] = IX[0] + 30269
if ( IY[0]< 0 ) :
IY[0] = IY[0] + 30307
if (IZ[0]< 0 ) :
IZ[0] = IZ[0] + 30323
inter = ( ( IX[0] / 30269 ) + ( IY[0] / 30307 ) + ( IZ[0] / 30323 ) )
alea = inter - int ( inter )
return alea
def F1(alea):
if alea >= 0 and alea < 0.3:
return 1
elif alea >= 0.3 and alea <= 0.8:
return 2
elif alea > 0.8 and alea <= 0.9:
return 3
elif alea > 0.9 and alea <= 0.95:
return 4
elif alea > 0.95 and alea <= 0.98:
return 5
else:
return 6
def F2(alea):
if alea >= 0 and alea < 0.1:
return 2
elif alea >= 0.1 and alea < 0.3:
return 4
elif alea >= 0.3 and alea <= 0.7:
return 6
elif alea > 0.7 and alea <= 0.9:
return 8
else:
return 10
def F3(alea):
if alea >= 0 and alea < 0.2:
return 1
elif alea >= 0.2 and alea <= 0.6:
return 2
elif alea > 0.6 and alea <= 0.85:
return 3
else:
return 4
# Sort the event calendar (bubble sort by event time)
def Trier_cal(calendrier):
l = len(calendrier)
for i in range(0, l):
for j in range(0, l-i-1):
if (calendrier[j][2] > calendrier[j + 1][2]):
tempo = calendrier[j]
calendrier[j]= calendrier[j + 1]
calendrier[j + 1]= tempo
return calendrier
# schedule an event
def planif_eve(evt=[]):
cal_tri.append(evt)
return cal_tri
# select the next (earliest) event
def select_eve(cal_tri):
evt_p=cal_tri[0]
cal_tri.pop(0)
return evt_p
def intervalle_confiance(NCP):
moy_NCP=np.array(NCP)
n=len(moy_NCP)
m=np.mean(moy_NCP)
s=statistics.stdev(moy_NCP,m)
    IC=[m-1.96*(s/math.sqrt(n)),m+1.96*(s/math.sqrt(n))]
return IC
```
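For reference, `aleas` implements the classic Wichmann-Hill combined generator. A quick sanity check of the helper above (the seed values here are arbitrary):
```
# draw a few pseudo-random values in [0, 1) from arbitrary seeds
IXt, IYt, IZt = [123], [456], [789]
print([aleas(IXt, IYt, IZt) for _ in range(5)])
```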
# **Scenario 1 (simulation with two checkout counters)**
```
# Read the three generator seeds from the user
IX=[0]
IY=[0]
IZ=[0]
IX[0]= int(input("Enter the value of the first seed IX: "))
while (IX[0] <1 or IX[0] >30000):
    IX[0] = int(input("The value you entered is not valid, enter it again: "))
IY[0]= int(input("Enter the value of the second seed IY: "))
while (IY[0] <1 or IY[0] >30000):
    IY[0] = int(input("The value you entered is not valid, enter it again: "))
IZ[0]= int(input("Enter the value of the third seed IZ: "))
while (IZ[0] <1 or IZ[0] >30000):
    IZ[0] = int(input("The value you entered is not valid, enter it again: "))
resultat=[]
resultat_sanspi=[]
for k in range(40):
    H = 0   # simulation clock
    i = 1   # index of the arriving customer
    LQ = 0  # queue length
    NCP = 0 # number of lost customers
    NCE = 0 # number of customers who entered
    C1 = 0  # state of checkout 1 (0 = free)
    C2 = 0  # state of checkout 2 (0 = free)
t1 = 0
t2 = 0
s1 = 0
s2 = 0
DEQ = 0
DSQ = 0
tj = 0
Q=[]
tmp1=0
tmp2=0
TSmoy = 0
TATmoy = 0
TauC1 =0
TauC2 = 0
Qj=[]
    # Initialize the calendar with the first customer's arrival
evt=[]
a=aleas(IX,IY,IZ)
evt1=[1,'A',F1(a)]
cal_tri=[evt1]
file=[]
while (len(cal_tri)!=0):
cal_tri=Trier_cal(cal_tri)
evt_sel=select_eve(cal_tri)
H=evt_sel[2]
if (evt_sel[1] == 'A'):
if (LQ <= 1):
NCE = NCE+1
planif_eve([evt_sel[0],'FM',H+F2(aleas(IX,IY,IZ))])
else:
NCP = NCP+1
i=i+1
DA=H+F1(aleas(IX,IY,IZ))
if (DA<=720):
planif_eve([i,'A',DA])
if (evt_sel[1] == 'FM'):
if (C1==0 or C2==0):
if (C1==0):
C1=evt_sel[0]
t1=t1+(H-s1)
else:
C2=evt_sel[0]
t2=t2+(H-s2)
tmp1=H+F3(aleas(IX,IY,IZ))
planif_eve([evt_sel[0],'FP',tmp1])
DEQ=DEQ+tmp1
else :
LQ=LQ+1
s1=H
s2=H
file.append(evt_sel[0])
if (evt_sel[1] == 'FP'):
if (LQ==0):
if (C1==evt_sel[0]):
C1=0
else:
C2=0
else:
j=file[0]
file.pop(0)
LQ=LQ-1
tj=tj+(H-s1)
Q.append(tj)
if (C1==evt_sel[0]):
C1=j
else:
C2=j
tmp2=H+F3(aleas(IX,IY,IZ))
planif_eve([j,'FP',tmp2])
DSQ=DSQ+tmp2
DFS=H
Qj=[element * (1/DFS) for element in Q]
TauC1=t1/DFS
TauC2=t2/DFS
TATmoy=(DSQ-DEQ)/NCE
TSmoy=(H-DA)/NCE
resultat.extend([[DFS,NCE, NCP,TSmoy ,TATmoy ,TauC1 ,TauC2]])
IX[0]=IX[0]+10+k*10
IY[0]=IY[0]+30+k*30
IZ[0]=IZ[0]+20+k*20
df1 = pd.DataFrame(resultat, columns=['DFS','NCE', 'NCP','TSmoy' ,'TATmoy' ,'TauC1' ,'TauC2'], index=['sim{}'.format(i+1) for i in range(40)])
df1
```
# **Scenario 2 (simulation with 3 checkout counters)**
```
# Read the three generator seeds from the user
IX=[0]
IY=[0]
IZ=[0]
IX[0]= int(input("Enter the value of the first seed IX: "))
while (IX[0] <1 or IX[0] >30000):
    IX[0] = int(input("The value you entered is not valid, enter it again: "))
IY[0]= int(input("Enter the value of the second seed IY: "))
while (IY[0] <1 or IY[0] >30000):
    IY[0] = int(input("The value you entered is not valid, enter it again: "))
IZ[0]= int(input("Enter the value of the third seed IZ: "))
while (IZ[0] <1 or IZ[0] >30000):
    IZ[0] = int(input("The value you entered is not valid, enter it again: "))
data=[]
data_sanspi=[]
for k in range(40):
    H = 0   # simulation clock
    i = 1   # index of the arriving customer
    LQ = 0  # queue length
    NCP = 0 # number of lost customers
    NCE = 0 # number of customers who entered
    C1 = 0  # state of checkout 1 (0 = free)
    C2 = 0  # state of checkout 2 (0 = free)
C3 = 0
t1 = 0
t2 = 0
t3 = 0
s1 = 0
s2 = 0
s3 = 0
tmp1=0
tmp2=0
DEQ = 0
DSQ = 0
MTS=0
TATmoy=0
TauC1=0
TauC2=0
TauC3=0
    Qj=[]
    tj = 0
    Q = []
    # Initialize the calendar with the first customer's arrival
evt=[]
a=aleas(IX,IY,IZ)
evt1=[1,'A',F1(a)]
cal_tri=[evt1]
file=[]
while (len(cal_tri)!=0):
cal_tri=Trier_cal(cal_tri)
evt_sel=select_eve(cal_tri)
H=evt_sel[2]
if (evt_sel[1] == 'A'):
if (LQ <= 1):
NCE = NCE+1
planif_eve([evt_sel[0],'FM',H+F2(aleas(IX,IY,IZ))])
else:
NCP = NCP+1
i=i+1
DA=H+F1(aleas(IX,IY,IZ))
if (DA<=720):
planif_eve([i,'A',DA])
if (evt_sel[1] == 'FM'):
if (C1==0 or C2==0 or C3==0):
if (C1==0):
C1=evt_sel[0]
t1=t1+(H-s1)
                elif (C2==0):
C2=evt_sel[0]
t2=t2+(H-s2)
else:
C3=evt_sel[0]
t3=t3+(H-s3)
tmp1=H+F3(aleas(IX,IY,IZ))
planif_eve([evt_sel[0],'FP',tmp1])
DEQ=DEQ+tmp1
else :
LQ=LQ+1
s1=H
s2=H
s3=H
file.append(evt_sel[0])
if (evt_sel[1] == 'FP'):
if (LQ==0):
if (C1==evt_sel[0]):
C1=0
                elif (C2==evt_sel[0]):
C2=0
else:
C3=0
else:
j=file[0]
file.pop(0)
LQ=LQ-1
tj=tj+(H-s1)
Q.append(tj)
if (C1==evt_sel[0]):
C1=j
                elif (C2==evt_sel[0]):
C2=j
else:
C3=j
tmp2=H+F3(aleas(IX,IY,IZ))
planif_eve([j,'FP',tmp2])
DSQ=DSQ+tmp2
DFS=H
Qj=[element * (1/DFS) for element in Q]
pi=sum(Qj)
p1=Qj[0]
TauC1=t1/DFS
TauC2=t2/DFS
TauC3=t3/DFS
TATmoy=(DSQ-DEQ)/NCE
TSmoy=(H-DA)/NCE
data.extend([[DFS,NCE, NCP,TATmoy,TSmoy,TauC1,TauC2,TauC3]])
IX[0]=IX[0]+10+k*10
IY[0]=IY[0]+30+k*30
IZ[0]=IZ[0]+20+k*20
df2 = pd.DataFrame(data, columns=['DFS','NCE', 'NCP','TATmoy','TSmoy','TauC1','TauC2','TauC3'], index=['sim{}'.format(i+1) for i in range(40)])
df2
```
Description of scenario 1
```
df1.describe()
```
Description of the data for scenario 2
```
df2.describe()
IC1 = intervalle_confiance(df1['NCP'])
IC1
IC2 = intervalle_confiance(df2['NCP'])
IC2
```
# **Graphical representation**
# *Scenario 1*
```
df1_sanspi.plot.bar(rot=0,figsize=(60,14))
from google.colab import files
plt.savefig("abc.png")
files.download("abc.png")
df1_sanspi.plot.bar(stacked=True,figsize=(24, 11))
df1['NCP'].plot.bar(rot=0)
plt.figure()
plt.plot(df1['NCP'])
plt.plot(df1['NCE'])
sns.displot(df1['NCP'])
plt.title("Distribution de NCP", fontsize=20)
```
# *Scenario 2*
```
df2_sanspi.plot.bar(rot=0,figsize=(44,24))
df2_sanspi.plot.bar(stacked=True,figsize=(24, 11))
df2['NCP'].plot.bar(rot=0)
plt.figure()
plt.plot(df2['NCP'])
plt.plot(df2['NCE'])
sns.displot(df2['NCP'])
plt.title("Distribution de NCP", fontsize=20)
```
| github_jupyter |
# MACHINE LEARNING LAB - 4 ( Backpropagation Algorithm )
**4. Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.**
```
import numpy as np
X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float) # X = (hours sleeping, hours studying)
y = np.array(([92], [86], [89]), dtype=float) # y = score on test
# scale units
X = X/np.amax(X, axis=0) # maximum of X array
y = y/100 # max test score is 100
class Neural_Network(object):
def __init__(self):
# Parameters
self.inputSize = 2
self.outputSize = 1
self.hiddenSize = 3
# Weights
self.W1 = np.random.randn(self.inputSize, self.hiddenSize) # (3x2) weight matrix from input to hidden layer
self.W2 = np.random.randn(self.hiddenSize, self.outputSize) # (3x1) weight matrix from hidden to output layer
def forward(self, X):
#forward propagation through our network
self.z = np.dot(X, self.W1) # dot product of X (input) and first set of 3x2 weights
self.z2 = self.sigmoid(self.z) # activation function
self.z3 = np.dot(self.z2, self.W2) # dot product of hidden layer (z2) and second set of 3x1 weights
o = self.sigmoid(self.z3) # final activation function
return o
def sigmoid(self, s):
return 1/(1+np.exp(-s)) # activation function
    def sigmoidPrime(self, s):
        return s * (1 - s) # derivative of sigmoid (assumes s is already a sigmoid output)
def backward(self, X, y, o):
        # backward propagate the error through the network
self.o_error = y - o # error in output
        self.o_delta = self.o_error*self.sigmoidPrime(o) # applying derivative of sigmoid to the output error
self.z2_error = self.o_delta.dot(self.W2.T) # z2 error: how much our hidden layer weights contributed to output error
self.z2_delta = self.z2_error*self.sigmoidPrime(self.z2) # applying derivative of sigmoid to z2 error
self.W1 += X.T.dot(self.z2_delta) # adjusting first set (input --> hidden) weights
self.W2 += self.z2.T.dot(self.o_delta) # adjusting second set (hidden --> output) weights
def train (self, X, y):
o = self.forward(X)
self.backward(X, y, o)
NN = Neural_Network()
for i in range(1000): # trains the NN 1,000 times
print ("\nInput: \n" + str(X))
print ("\nActual Output: \n" + str(y))
print ("\nPredicted Output: \n" + str(NN.forward(X)))
print ("\nLoss: \n" + str(np.mean(np.square(y - NN.forward(X))))) # mean sum squared loss)
NN.train(X, y)
```
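Once trained, the network can score a new student. A minimal sketch; the input values are arbitrary and must be scaled with the same per-column maxima (3 hours sleeping, 9 hours studying) used on X above:
```
x_new = np.array([[3, 7]], dtype=float)
x_new = x_new / np.array([3., 9.])      # same column maxima used to scale X
print("Predicted score:", NN.forward(x_new) * 100)  # undo the y/100 scaling
```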
| github_jupyter |
# BYOA Tutorial - Prophet Forecasting in SageMaker
The following notebook shows how to integrate your own algorithms into Amazon SageMaker.
We are going to build an inference pipeline around the Prophet algorithm for time series.
The algorithm is installed in a Docker container, which is then used to train the model and serve inferences from an endpoint.
We are going to work with a public dataset that we must download from Kaggle.
This dataset is called:
_Avocado Prices: Historical data on avocado prices and sales volume in multiple US markets_
and can be downloaded from: https://www.kaggle.com/neuromusic/avocado-prices/download
Once downloaded, we must upload it to the same directory where we are running this notebook.
The following code prepares the dataset so that Prophet can understand it:
```
import pandas as pd
# Keep only the date and the average price
df = pd.read_csv('avocado.csv')
df = df[['Date', 'AveragePrice']].dropna()
df['Date'] = pd.to_datetime(df['Date'])
df = df.set_index('Date')
# Keep a single record per day with the daily average
daily_df = df.resample('D').mean()
d_df = daily_df.reset_index().dropna()
# Rename the columns to the names Prophet expects
d_df = d_df[['Date', 'AveragePrice']]
d_df.columns = ['ds', 'y']
d_df.head()
# Save the resulting dataset as avocado_daily.csv
d_df.to_csv("avocado_daily.csv",index = False , columns = ['ds', 'y'] )
```
# Step 2: Package and upload the algorithm for use with Amazon SageMaker
### An overview of Docker
Docker provides a simple way to package code into an _image_ that is completely self-contained. Once you have an image, you can use Docker to run a _container_ based on that image. Running a container is the same as running a program on the machine, except that the container creates a completely self-contained environment for the program to run. Containers are isolated from each other and from the host environment, so the way you configure the program is the way it runs, no matter where you run it.
Docker is more powerful than environment managers like conda or virtualenv because (a) it is completely language independent and (b) it understands your entire operating environment, including startup commands, environment variables, etc.
In some ways, a Docker container is like a virtual machine, but it is much lighter. For example, a program that runs in a container can start in less than a second, and many containers can run on the same physical machine or virtual machine instance.
Docker uses a simple file called `Dockerfile` to specify how the image is assembled.
Amazon SageMaker uses Docker to allow users to train and deploy algorithms.
In Amazon SageMaker, Docker containers are invoked in a certain way for training and in a slightly different way for hosting. The following sections describe how to create containers for the SageMaker environment.
### How Amazon SageMaker runs the Docker container
Because it can run the same image in training or hosting, Amazon SageMaker runs the container with the `train` or `serve` argument. How your container processes this argument depends on the container:
* In the example here, we do not define an `ENTRYPOINT` in the Dockerfile, so Docker executes the `train` command at training time and `serve` at serving time. In this example, we define them as executable Python scripts, but they could be any program that we want to start in that environment.
* If you specify a program as `ENTRYPOINT` in the Dockerfile, that program will run at startup and its first argument will be either `train` or `serve`. The program can then examine that argument and decide what to do.
* If you are building separate containers for training and hosting (or building just for one or the other), you can define a program as `ENTRYPOINT` in the Dockerfile and ignore (or check) the first argument passed.
#### Run container during training
When Amazon SageMaker runs the training, the `train` script runs like a regular Python program. A series of files is arranged for your use under the `/opt/ml` directory:
/opt/ml
├── input
│ ├── config
│ │ ├── hyperparameters.json
│ │ └── resourceConfig.json
│ └── data
│ └── <channel_name>
│ └── <input data>
├── model
│ └── <model files>
└── output
└── failure
##### The input
* `/opt/ml/input/config` contains information to control how the program runs. `hyperparameters.json` is a JSON-formatted dictionary of hyperparameter names to values. These values will always be strings, so you may need to convert them. `resourceConfig.json` is a JSON-formatted file that describes the network layout used for distributed training. Since Prophet does not support distributed training, we will ignore it here.
* `/opt/ml/input/data/<channel_name>/` (for File mode) contains the input data for that channel. Channels are created based on the call to CreateTrainingJob, but it is generally important that the channels match what the algorithm expects. The files for each channel will be copied from S3 to this directory, preserving the tree structure indicated by the S3 key structure.
* `/opt/ml/input/data/<channel_name>_<epoch_number>` (for Pipe mode) is the pipe for a given epoch. The epochs start at zero and go up by one each time you read them. There is no limit to the number of epochs you can run, but you must close each pipe before reading the next epoch.
##### The output
* `/opt/ml/model/` is the directory where the model generated by your algorithm is written. Your model can be in any format you want. It can be a single file or an entire directory tree. SageMaker will package any files in this directory into a compressed tar file. This file will be available in the S3 location returned in the `DescribeTrainingJob` output.
* `/opt/ml/output` is a directory where the algorithm can write a `failure` file that describes why the job failed. The content of this file will be returned in the `FailureReason` field of the `DescribeTrainingJob` result. For successful jobs, there is no reason to write this file as it will be ignored.
#### Running the container during hosting
Hosting has a very different model than training because it must respond to inference requests that arrive through HTTP. In this example, we use recommended Python code to provide a robust and scalable inference request service:
Amazon SageMaker uses two URLs in the container:
* `/ping` will receive `GET` requests from the infrastructure. Returns 200 if the container is up and accepting requests.
* `/invocations` is the endpoint that receives inference `POST` requests from the client. The request and response format depends on the algorithm. If the client supplied the `ContentType` and `Accept` headers, these will also be passed.
The container will have the model files in the same place where they were written during training:
    /opt/ml
    └── model
        └── <model files>
### Container Parts
In the `container` directory are all the components you need to package the sample algorithm for Amazon SageMaker:
.
├── Dockerfile
├── build_and_push.sh
    └── prophet
├── nginx.conf
├── predictor.py
├── serve
├── train
└── wsgi.py
Let's see each one:
* __`Dockerfile`__ describes how to build the Docker container image. More details below.
* __`build_and_push.sh`__ is a script that uses Dockerfile to build its container images and then publishes (push) it to ECR. We will invoke the commands directly later in this notebook, but you can copy and run the script for other algorithms.
* __`prophet`__ is the directory that contains the files to be installed in the container.
* __`local_test`__ is a directory that shows how to test the new container on any machine that can run Docker, including an Amazon SageMaker Notebook Instance. With this method, you can quickly iterate using small data sets to eliminate any structural errors before using the container with Amazon SageMaker.
The files that we are going to put in the container are:
* __`nginx.conf`__ is the configuration file for the nginx front-end. Generally, you should be able to take this file as is.
* __`predictor.py`__ is the program that actually implements the Flask web server and Prophet predictions for this application.
* __`serve`__ is the program started when the hosting container starts. It just launches the gunicorn server running multiple instances of the Flask application defined in `predictor.py`. You should be able to take this file as is.
* __`train`__ is the program that is invoked when the container for training is executed.
* __`wsgi.py`__ is a small wrapper used to invoke the Flask application. You should be able to take this file as is.
In summary, the two Prophet-specific code files are `train` and `predictor.py`.
### The Dockerfile file
The Dockerfile file describes the image we want to create. It is a description of the complete installation of the operating system of the system that you want to run. A running Docker container is significantly lighter than a full operating system, however, because it leverages Linux on the host machine for basic operations.
For this example, we'll start from a standard Ubuntu install and run the normal tools to install the things Prophet needs. Finally, we add the code that implements Prophet to the container and configure the correct environment to run correctly.
The following is the Dockerfile:
```
!cat container/Dockerfile
```
### The train file
The train file describes the way we are going to do the training.
The `Prophet-Docker/container/prophet/train` file contains the specific training code for Prophet.
We must modify the train() function in the following way:
def train():
print('Starting the training.')
try:
# Read in any hyperparameters that the user passed with the training job
with open(param_path, 'r') as tc:
trainingParams = json.load(tc)
# Take the set of files and read them all into a single pandas dataframe
input_files = [ os.path.join(training_path, file) for file in os.listdir(training_path) ]
if len(input_files) == 0:
raise ValueError(('There are no files in {}.\n' +
'This usually indicates that the channel ({}) was incorrectly specified,\n' +
'the data specification in S3 was incorrectly specified or the role specified\n' +
'does not have permission to access the data.').format(training_path, channel_name))
raw_data = [ pd.read_csv(file, error_bad_lines=False ) for file in input_files ]
train_data = pd.concat(raw_data)
train_data.columns = ['ds', 'y']
        # Use Prophet to train the model.
clf = Prophet()
clf = clf.fit(train_data)
# save the model
with open(os.path.join(model_path, 'prophet-model.pkl'), 'w') as out:
pickle.dump(clf, out)
print('Training complete.')
### The predictor.py file
The predictor.py file describes the way we are going to make predictions.
The file `Prophet-Docker/container/prophet/predictor.py` contains the specific prediction code for Prophet.
We must modify the predict() function in the following way:
def predict(cls, input):
"""For the input, do the predictions and return them.
Args:
input (a pandas dataframe): The data on which to do the predictions. There will be
one prediction per row in the dataframe"""
clf = cls.get_model()
future = clf.make_future_dataframe(periods=int(input.iloc[0]))
print(int(input.iloc[0]))
print(input)
forecast = clf.predict(future)
return forecast.tail(int(input.iloc[0]))
And then the transformation() function as follows:
def transformation():
"""Do an inference on a single batch of data. In this sample server, we take data as CSV, convert
it to a pandas data frame for internal use and then convert the predictions back to CSV (which really
just means one prediction per line, since there's a single column.
"""
data = None
# Convert from CSV to pandas
if flask.request.content_type == 'text/csv':
data = flask.request.data.decode('utf-8')
s = StringIO.StringIO(data)
data = pd.read_csv(s, header=None)
else:
return flask.Response(response='This predictor only supports CSV data', status=415, mimetype='text/plain')
print('Invoked with {} records'.format(data.shape[0]))
# Do the prediction
predictions = ScoringService.predict(data)
# Convert from numpy back to CSV
out = StringIO.StringIO()
pd.DataFrame({'results':[predictions]}, index=[0]).to_csv(out, header=False, index=False)
result = out.getvalue()
return flask.Response(response=result, status=200, mimetype='text/csv')
Basically we replace the line:

    pd.DataFrame({'results':predictions}).to_csv(out, header=False, index=False)

with the line:

    pd.DataFrame({'results':[predictions]}, index=[0]).to_csv(out, header=False, index=False)
# Part 3: Using Prophet in Amazon SageMaker
Now that we have all the files created, we are going to use Prophet in Sagemaker
## Container assembly
We start by building and registering the container
```
%%time
%%sh
# The name of our algorithm
algorithm_name=sagemaker-prophet
cd container
chmod +x prophet/train
chmod +x prophet/serve
account=$(aws sts get-caller-identity --query Account --output text)
# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)
region=${region:-us-west-2}
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"
# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi
# Get the login command from ECR and execute it directly
$(aws ecr get-login --region ${region} --no-include-email)
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} .
docker tag ${algorithm_name} ${fullname}
docker push ${fullname}
```
## Building the Training Environment
We initialize the session and the execution role.
```
%%time
import boto3
import re
import os
import numpy as np
import pandas as pd
from sagemaker import get_execution_role
import sagemaker as sage
from time import gmtime, strftime
prefix = 'DEMO-prophet-byo'
role = get_execution_role()
sess = sage.Session()
```
# Upload the data to S3
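The prepared `avocado_daily.csv` was written to the notebook's working directory, so it needs to sit inside the `data/` directory before the upload below will include it. A minimal sketch of one way to arrange that (the directory name matches `WORK_DIRECTORY`):
```
import os
import shutil

# place the prepared dataset where the S3 upload expects it
os.makedirs('data', exist_ok=True)
shutil.move('avocado_daily.csv', 'data/avocado_daily.csv')
```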
```
WORK_DIRECTORY = 'data'
data_location = sess.upload_data(WORK_DIRECTORY, key_prefix=prefix)
```
## Train the model
Using the data uploaded to S3, we train the model by launching an ml.c4.2xlarge instance.
SageMaker will leave the trained model in the `/output` path of the session's default bucket.
```
%%time
account = sess.boto_session.client('sts').get_caller_identity()['Account']
region = sess.boto_session.region_name
image = '{}.dkr.ecr.{}.amazonaws.com/sagemaker-prophet:latest'.format(account, region)
tseries = sage.estimator.Estimator(image,
role,
1,
'ml.c4.2xlarge',
output_path="s3://{}/output".format(sess.default_bucket()),
sagemaker_session=sess)
tseries.fit(data_location)
```
## Endpoint assembly for inference
Using the newly trained model, we create an endpoint for inference hosted on an ml.m4.xlarge instance.
```
%%time
from sagemaker.predictor import csv_serializer
predictor = tseries.deploy(1, 'ml.m4.xlarge', serializer=csv_serializer)
```
## Inference test
Finally, we ask the model to predict the average avocado price for the next 30 days.
```
%%time
p = predictor.predict("30")
print(p)
```
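To avoid ongoing charges once testing is done, the endpoint can be removed. A minimal sketch, assuming the v1 SageMaker Python SDK used throughout this notebook:
```
# tear down the hosted endpoint when finished
sess.delete_endpoint(predictor.endpoint)
```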
| github_jupyter |
```
import urllib2
from bs4 import BeautifulSoup
import csv
import time
import re
import sys
import xml.etree.ElementTree as ET
import os
import random
import traceback
from IPython.display import clear_output
def createUserDict(user_element):
#userDict = []
id = getval(user_element,'id')
name = getval(user_element,'name')
user_name = getval(user_element,'user_name')
profile_url = getval(user_element,'link')
image_url = getval(user_element,'image_url')
about = getval(user_element,'about')
age = getval(user_element,'age')
gender = getval(user_element,'gender')
location = getval(user_element,'location')
joined = getval(user_element,'joined')
last_active = getval(user_element,'last_active')
userDict = dict ([('user_id', id), ('name', name) , ('user_name' , user_name),
('profile_url', profile_url), ('image_url', image_url),
('about', about), ('age', age), ('gender', gender),
('location', location) , ('joined', joined), ('last_active', last_active)])
return userDict
def writeToCSV(writer, mydict):
#writer = csv.DictWriter(csvfile, delimiter=',', lineterminator='\n', fieldnames=insert_fieldnames)
#for key, value in mydict.items():
writer.writerow(mydict)
def getAmazonDetails(isbn):
with open('csv_files/amazon_book_ratings.csv', 'a') as csvfile_ratings, open('csv_files/amazon_book_reviews.csv', 'a') as csvfile_reviews:
##Create file headers and writer
ratings_fieldnames = ['book_isbn', 'avg_rating', 'five_rating', 'four_rating', 'three_rating', 'two_rating', 'one_rating' ]
        writer = csv.DictWriter(csvfile_ratings, delimiter=',', lineterminator='\n', fieldnames=ratings_fieldnames)
##writer.writeheader()
reviews_fieldnames = ['book_isbn', 'review']
writer_book = csv.DictWriter(csvfile_reviews, delimiter=',', lineterminator='\n', fieldnames=reviews_fieldnames)
##writer_book.writeheader()
##Get Overall details of the book
req = urllib2.Request('http://www.amazon.com/product-reviews/' + isbn + '?ie=UTF8&showViewpoints=1&sortBy=helpful&pageNumber=1', headers={ 'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11' })
html = urllib2.urlopen(req).read()
soup = BeautifulSoup(html, 'html.parser')
avgRatingTemp = soup.find_all('div',{'class':"a-row averageStarRatingNumerical"})[0].get_text()
avgRating = re.findall('\d+\.\d+', avgRatingTemp)[0]
try:
fiveStarRatingTemp = soup.find_all('a',{'class':"a-size-small a-link-normal 5star histogram-review-count"})[0].get_text()
fiveStarRating = fiveStarRatingTemp.strip('%')
except:
fiveStarRating = 0
try:
fourStarRatingTemp = soup.find_all('a',{'class':"a-size-small a-link-normal 4star histogram-review-count"})[0].get_text()
fourStarRating = fourStarRatingTemp.strip('%')
except:
fourStarRating = 0
try:
threeStarRatingTemp = soup.find_all('a',{'class':"a-size-small a-link-normal 3star histogram-review-count"})[0].get_text()
threeStarRating = threeStarRatingTemp.strip('%')
except:
threeStarRating = 0
try:
twoStarRatingTemp = soup.find_all('a',{'class':"a-size-small a-link-normal 2star histogram-review-count"})[0].get_text()
twoStarRating = twoStarRatingTemp.strip('%')
except:
twoStarRating = 0
try:
oneStarRatingTemp = soup.find_all('a',{'class':"a-size-small a-link-normal 1star histogram-review-count"})[0].get_text()
oneStarRating = oneStarRatingTemp.strip('%')
except:
oneStarRating = 0
writer.writerow({'book_isbn': isbn, 'avg_rating': avgRating, 'five_rating': fiveStarRating,
'four_rating': fourStarRating, 'three_rating': threeStarRating, 'two_rating': twoStarRating,
'one_rating': oneStarRating})
##Get top 20 helpful review of book
for pagenumber in range(1,3):
req = urllib2.Request('http://www.amazon.com/product-reviews/' + isbn + '?ie=UTF8&showViewpoints=1&sortBy=helpful&pageNumber='+ str(pagenumber), headers={ 'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11' })
html = urllib2.urlopen(req).read()
soup = BeautifulSoup(html, 'html.parser')
for i in range(0,10):
try:
review = soup.find_all('div',{'class':"a-section review"})[i].contents[3].get_text().encode('UTF-8')
writer_book.writerow({'book_isbn': isbn, 'review': review})
except:
print "No Reviews ISBN - " + isbn
def getval(root, element):
try:
ret = root.find(element).text
if ret is None:
return ""
else:
return ret.encode("utf8")
except:
return ""
with open('csv_files/amazon_book_ratings.csv', 'w') as csvfile_ratings, open('csv_files/amazon_book_reviews.csv', 'w') as csvfile_reviews:
##Create file headers and writer
ratings_fieldnames = ['book_isbn', 'avg_rating', 'five_rating', 'four_rating', 'three_rating', 'two_rating', 'one_rating' ]
writer = csv.DictWriter(csvfile_ratings, delimiter=',', lineterminator='\n', fieldnames=ratings_fieldnames)
writer.writeheader()
reviews_fieldnames = ['book_isbn', 'review']
writer_book = csv.DictWriter(csvfile_reviews, delimiter=',', lineterminator='\n', fieldnames=reviews_fieldnames)
writer_book.writeheader()
with open('csv_files/user_data.csv', 'w') as csvfile, open('csv_files/book_data.csv', 'w') as csvfile_book, open('csv_files/book_author.csv', 'w') as csvfile_author, open('csv_files/goodreads_user_reviews_ratings.csv', 'w') as gdrds_rr:
fieldnames = ['user_id', 'name','user_name', 'profile_url','image_url', 'about', 'age', 'gender',
'location','joined','last_active' ]
writer = csv.DictWriter(csvfile, delimiter = ',', lineterminator = '\n', fieldnames=fieldnames)
writer.writeheader()
book_fieldnames = [
'user_id',
'b_id',
'shelf',
'isbn',
'isbn13',
'text_reviews_count',
'title',
'image_url',
'link',
'num_pages',
'b_format',
'publisher',
'publication_day',
'publication_year',
'publication_month',
'average_rating',
'ratings_count',
'description',
'published',
'fiction' ,
'fantasy' ,
'classics' ,
'young_adult' ,
'romance' ,
'non_fiction' ,
'historical_fiction' ,
'science_fiction' ,
'dystopian' ,
'horror' ,
'paranormal' ,
'contemporary' ,
'childrens' ,
'adult' ,
'adventure' ,
'novels' ,
'urban_fantasy' ,
'history' ,
'chick_lit' ,
'thriller' ,
'audiobook' ,
'drama' ,
'biography' ,
'vampires' ]
writer_book = csv.DictWriter(csvfile_book, delimiter = ',', lineterminator = '\n', fieldnames=book_fieldnames)
writer_book.writeheader()
goodreads_ratings_fieldnames = ['user_id', 'b_id', 'rating', 'review' ]
rr_writer = csv.DictWriter(gdrds_rr, delimiter=',', lineterminator='\n', fieldnames=goodreads_ratings_fieldnames)
rr_writer.writeheader()
author_fieldnames = [
'u_id',
'b_id',
'a_id',
'name',
'average_rating',
'ratings_count',
'text_reviews_count']
writer_author = csv.DictWriter(csvfile_author, delimiter = ',', lineterminator = '\n', fieldnames = author_fieldnames)
writer_author.writeheader()
lst = []
i = 0
while i < 50:
try:
#time.sleep(1)
clear_output()
c = random.randint(1, 2500000)
#c = 23061285
print "random number: " + str(c)
if (c not in lst):
print "getting information for user id:"+ str(c)
lst.append(c)
url = 'https://www.goodreads.com/user/show/'+ str(c) +'.xml?key=i3Zsl7r13oHEQCjv1vXw'
response = urllib2.urlopen(url)
user_data_xml = response.read()
#write xml to file
f = open("xml_docs/user"+ str(c) +".xml", "w")
try:
f.write(user_data_xml)
finally:
f.close()
#root = ET.fromstring()
root = ET.parse("xml_docs/user"+ str(c) +".xml").getroot()
os.remove("xml_docs/user"+ str(c) +".xml")
user_element = root.find('user')
user_shelf_to_count = user_element.find('user_shelves')
b_count = 0
for user_shelf in user_shelf_to_count.findall('user_shelf'):
b_count = b_count + int(getval(user_shelf,'book_count'))
print 'Book count is ' + str(b_count)
if(b_count > 10):
print 'Collecting data for user ' + str(c)
'''id = getval(user_element,'id')
name = getval(user_element,'name')
user_name = getval(user_element,'user_name')
profile_url = getval(user_element,'link')
image_url = getval(user_element,'image_url')
about = getval(user_element,'about')
age = getval(user_element,'age')
gender = getval(user_element,'gender')
location = getval(user_element,'location')
joined = getval(user_element,'joined')
last_active = getval(user_element,'last_active')
'''
userDict = createUserDict(user_element)
id = userDict['user_id']
#writer.writerow({'id': id, 'name' : name,'user_name' : user_name,
# 'profile_url' : profile_url,'image_url' : image_url,
# 'about' : about, 'age': age, 'gender' : gender,
# 'location' : location, 'joined' : joined, 'last_active': last_active})
writeToCSV(writer,userDict)
print "Saved user data for user id:" + str(c)
# get list of user shelves
user_shelves_root = user_element.find('user_shelves')
user_shelf_list = []
for user_shelf in user_shelves_root.findall("user_shelf"):
shelf = getval(user_shelf,"name")
#Books on Shelf
print "Checking for books in shelf: " + shelf + " for user id:" + str(c)
shelf_url = "https://www.goodreads.com/review/list/"+ str(c) +".xml?key=i3Zsl7r13oHEQCjv1vXw&v=2&per_page=200&shelf=" + shelf
#time.sleep(1)
print shelf_url
response = urllib2.urlopen(shelf_url)
shelf_data_xml = response.read()
# write xml to file
f = open("xml_docs/user_shelf_" + shelf + "_"+ str(c) + ".xml", "w")
try:
f.write(shelf_data_xml)
finally:
f.close()
shelf_root = ET.parse("xml_docs/user_shelf_" + shelf + "_"+ str(c) + ".xml").getroot()
os.remove("xml_docs/user_shelf_" + shelf + "_"+ str(c) + ".xml")
try:
reviews = shelf_root.find("reviews")
for review in reviews.findall("review"):
for book in review.findall("book"):
b_id = getval(book,"id")
isbn = getval(book,"isbn")
print "Fetching data for book with isbn:" + str(isbn) + " and id:" + str(id)
isbn13 = getval(book,"isbn13")
text_reviews_count = getval(book,"text_reviews_count")
title = getval(book,"title")
image_url = getval(book,"image_url")
link = getval(book,"link")
num_pages = getval(book,"num_pages")
b_format = getval(book,"format")
publisher = getval(book,"publisher")
publication_day = getval(book,"publication_day")
publication_year = getval(book, "publication_year")
publication_month = getval(book,"publication_month")
average_rating = getval(book,"average_rating")
ratings_count = getval(book,"rating_count")
description = getval(book,"description")
published = getval(book,"published")
#getAmazonDetails(isbn)
print "Fetched review data from Amazon for book :" + title
#get number of books on each type of shelf
book_url = 'https://www.goodreads.com/book/show/'+str(b_id)+'.xml?key=i3Zsl7r13oHEQCjv1vXw'
response = urllib2.urlopen(book_url)
book_data_xml = response.read()
# write xml to file
f = open("xml_docs/book_data_" + str(b_id) + ".xml", "w")
try:
f.write(book_data_xml)
finally:
f.close()
book_root = ET.parse("xml_docs/book_data_" + str(b_id) + ".xml").getroot()
os.remove("xml_docs/book_data_" + str(b_id) + ".xml")
print "checking count in shelf for book_id:" + str(b_id)
book_root = book_root.find("book")
book_shelves = book_root.find("popular_shelves")
fiction = 0
fantasy = 0
classics = 0
young_adult = 0
romance = 0
non_fiction = 0
historical_fiction = 0
science_fiction = 0
dystopian = 0
horror = 0
paranormal = 0
contemporary = 0
childrens = 0
adult = 0
adventure = 0
novels = 0
urban_fantasy = 0
history = 0
chick_lit = 0
thriller = 0
audiobook = 0
drama = 0
biography = 0
vampires = 0
cnt = 0
for shelf_type in book_shelves.findall("shelf"):
attributes = shelf_type.attrib
name = attributes['name']
                                        count = float(attributes['count'])  # the XML attribute is a string; convert so the ratios below work
#print name + ":" + count
if ( name == 'fiction'):
fiction = count
cnt = cnt+count
if ( name == 'fantasy'):
fantasy = count
cnt = cnt+count
if ( name == 'classics' or name == 'classic'):
classics = count
cnt = cnt+count
if ( name == 'young-adult'):
young_adult = count
cnt = cnt+count
if ( name == 'romance'):
romance = count
cnt = cnt+count
if ( name == 'non-fiction' or name == 'nonfiction'):
non_fiction = count
cnt = cnt+count
if ( name == 'historical-fiction'):
historical_fiction = count
cnt = cnt+count
if ( name == 'science-fiction' or name == 'sci-fi fantasy' or name == 'scifi' or name == 'fantasy-sci-fi' or name == 'sci-fi'):
science_fiction = count
cnt = cnt+count
if ( name == 'dystopian' or name == 'dystopia'):
dystopian = count
cnt = cnt+count
if ( name == 'horror'):
horror = count
cnt = cnt+count
if ( name == 'paranormal'):
paranormal = count
cnt = cnt+count
if ( name == 'contemporary' or name == 'contemporary-fiction'):
contemporary = count
cnt = cnt+count
if ( name == 'childrens' or name == 'children' or name == 'kids' or name =='children-s-books'):
childrens = count
cnt = cnt+count
if ( name == 'adult'):
adult = count
cnt = cnt+count
if ( name == 'adventure'):
adventure = count
cnt = cnt+count
if ( name == 'novels' or name == 'novel'):
novels = count
cnt = cnt+count
if ( name == 'urban-fantasy'):
urban_fantasy = count
cnt = cnt+count
if ( name == 'history' or name == 'historical'):
history = count
cnt = cnt+count
if ( name == 'chick-lit'):
chick_lit = count
cnt = cnt+count
if ( name == 'thriller'):
thriller = count
cnt = cnt+count
if ( name == 'audiobook' or name == "audio"):
audiobook = count
cnt = cnt+count
if ( name == 'drama'):
drama = count
cnt = cnt+count
if ( name == 'biography' or name == 'memoirs'):
biography = count
cnt = cnt+count
if ( name == 'vampires' or name == 'vampire'):
vampires = count
cnt = cnt+count
fiction = fiction/cnt
fantasy = fantasy/cnt
classics = classics/cnt
young_adult = young_adult/cnt
romance = romance/cnt
non_fiction = non_fiction/cnt
historical_fiction = historical_fiction/cnt
science_fiction = science_fiction/cnt
dystopian = dystopian/cnt
horror = horror/cnt
paranormal = paranormal/cnt
contemporary = contemporary/cnt
childrens = childrens/cnt
adult = adult/cnt
                                    adventure = adventure/cnt
novels = novels/cnt
urban_fantasy = urban_fantasy/cnt
history = history/cnt
chick_lit = chick_lit/cnt
thriller = thriller/cnt
audiobook = audiobook/cnt
drama = drama/cnt
biography = biography/cnt
vampires = vampires/cnt
writer_book.writerow({
'user_id': id,
'b_id' : b_id ,
'shelf' : shelf,
'isbn' : isbn,
'isbn13': isbn13,
'text_reviews_count' : text_reviews_count,
'title' : title,
'image_url' : image_url,
'link' : link,
'num_pages' : num_pages,
'b_format' : b_format,
'publisher' : publisher,
'publication_day' : publication_day,
'publication_year' : publication_year,
'publication_month' : publication_month,
'average_rating' : average_rating,
'ratings_count' : ratings_count,
'description' : description,
'fiction' : fiction ,
'fantasy' : fantasy ,
'classics' : classics ,
'young_adult' : young_adult ,
'romance' : romance ,
'non_fiction' : non_fiction ,
'historical_fiction' : historical_fiction ,
'science_fiction' : science_fiction ,
'dystopian' : dystopian ,
'horror' : horror ,
'paranormal' : paranormal ,
'contemporary' : contemporary ,
'childrens' : childrens ,
'adult' : adult ,
'adventure' : adventure ,
'novels' : novels ,
'urban_fantasy' : urban_fantasy ,
'history' : history ,
'chick_lit' : chick_lit ,
'thriller' : thriller ,
'audiobook' : audiobook ,
'drama' : drama ,
'biography' : biography ,
'vampires' : vampires })
#bookDict = createBookDict(book)
print "Data written on csv for book:" + title
print "Getting reviews details from user: " + str(id) + " and book_id: " + str(b_id)
review_url = "https://www.goodreads.com/review/show_by_user_and_book.xml?book_id=" +str(b_id)+ "&key=i3Zsl7r13oHEQCjv1vXw&user_id=" + str(id)
review_response = urllib2.urlopen(review_url)
review_response_xml = review_response.read()
review_root = ET.fromstring(review_response_xml)
user_rr = review_root.find("review")
user_r_rating = getval(user_rr, "rating")
print "Got user review rating: " + user_r_rating
user_r_review = getval(user_rr, "body")
print "User review is: " + user_r_review
rr_writer.writerow({
'user_id': id,
'b_id' : b_id ,
'rating' : user_r_rating,
'review' : user_r_review })
authors = book.find("authors")
for author in authors.findall("author"):
a_id = getval(author,"id")
name = getval(author,"name")
average_rating = getval(author,"average_rating")
ratings_count = getval(author,"ratings_count")
text_reviews_count = getval(author,"text_reviews_count")
writer_author.writerow({'u_id': id,
'b_id' : b_id,
'a_id' : a_id,
'name' : name,
'average_rating' : average_rating,
'ratings_count' : ratings_count,
'text_reviews_count' : text_reviews_count})
except Exception, e:
traceback.print_exc()
i = i + 1
except:
#time.sleep(1)
print "Exception!!"
traceback.print_exc()
print "End of Program"
```
| github_jupyter |
This code creates a function that lets me estimate the variance of a distribution and checks it against the Poisson and Gaussian distributions. It also uses the bootstrap resampling method, which consists of building a population from a sample and then drawing new samples from it.
This makes it possible to measure a statistic, e.g. the variance, as many times as desired and then obtain a confidence interval for it.
-----
The code defines a function that lets me compute the variance of a distribution and then creates another function that, using the bootstrap resampling method, returns the confidence interval for a given significance level and also gives the distribution of measured variances.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Plot style
plt.style.use('dark_background')
def Var(X):
    """ Returns the variance of an array
    Parameters
    ----------
    X : np.darray()
        Array of values of a random variable
    Returns
    -------
    Var : .float
        Variance of the array X
    """
    import numpy as np
    # The variance is the second central moment of the discrete distribution; compute it:
    Var = np.sum((X-np.mean(np.array(X)))**2)/len(X)
    return Var
```
To test it I use two known distributions: Gaussian and Poisson. For the Poisson one I bring in my own generator function
```
# Create the two distributions:
Puntos = 10000
# Poisson
from Misfunciones import Poisson # Use help(Poisson) for details
Lambd = 3 # theoretically both the mean and the variance
XP = Poisson(lambd=Lambd, N=Puntos)
# Gauss
dev = 1 # standard deviation
XG = np.random.normal(0, dev, Puntos)
# Plot
fig, ax = plt.subplots(1, 2, figsize = (14,6))
ax[0].hist(XP, color='cyan', bins=50);
ax[1].hist(XG, color='green', bins=50);
ax[0].set_title('Poisson', fontsize=20)
ax[0].set_ylabel('Frequency', fontsize=20)
ax[0].set_xlabel('k', fontsize=20)
ax[1].set_title('Gaussian', fontsize=20)
ax[1].set_ylabel('Frequency', fontsize=20)
ax[1].set_xlabel('x', fontsize=20);
# Compute the variances:
VarXP = Var(np.array(XP))
print('Theoretical Poisson variance =', Lambd)
print('Computed Poisson variance =', VarXP)
VarXG = Var(XG)
print('Theoretical Gaussian variance =', dev**2)
print('Computed Gaussian variance =', VarXG)
```
It is not extremely precise but it works. I checked it against np.var(XP) and it gives the same result.
Now I define the function Boostrap_var(), which requires the Var() function to be loaded. I could use np.var(), but I believe that is not what is being asked.
```
def Boostrap_var(Sample, N, Mult, alpha):
    """ Uses the bootstrap resampling method to estimate the uncertainty of the variance of a sample
    Parameters
    ----------
    Sample : np.ndarray()
        Sample of the random variable
    N : int
        Number of resamplings. Positive value greater than zero
    Mult : int
        Multiplier used to create the population of size Mult*len(Sample)
    alpha : .float
        Desired significance level, in (0,1). For 95% use alpha=0.95.
    Returns
    -------
    .float
        Lower bound of the confidence interval
    .float
        Upper bound of the confidence interval
    np.darray()
        Array of the variance estimates
    """
    # Errors
    if N<1 or isinstance(N, int)==False:
        raise ValueError('Error: N must be a positive integer')
    if alpha<0 or alpha>=1:
        raise ValueError('Error: alpha must belong to the interval (0,1)')
    # -------
    import numpy as np
    # Create a population of size Mult*len(Sample)
    # Basically copy and paste the sample Mult times
    Pop = []
    ij = 0
    while ij<Mult:
        ik = 0
        while ik<len(Sample):
            Pop.append(Sample[ik])
            ik = ik + 1
        ij = ij + 1
    # Draw N samples FROM THAT POPULATION, each of size len(Sample), and compute the variance of each
    ij = 0
    Vars = []
    while ij<N:
        Resampling = np.random.choice(Pop, size=len(Sample))
        Vars.append( Var(Resampling) )
        ij = ij + 1
    # Convert to a Numpy array
    Vars = np.array(Vars)
    # Compute the confidence intervals ------------------------
    Dsort = np.sort(Vars) # Sort the variance values
    # Find the ij corresponding to the median, call it "ijm"
    ij = 0
    while ij<len(Dsort):
        EA = sum(Dsort<=Dsort[ij])/len(Dsort) # Estimate of the area
        if EA>=0.5: # median --> 0.5 of the estimated area
            ijm = ij
            break
        ij = ij + 1
    # Assuming symmetric confidence intervals, look for the confidence interval:
    ij = ijm
    while ij<len(Dsort):
        # Count the "True" values. This estimates an area for a discrete distribution
        EA = sum(Dsort<=Dsort[ij])/len(Dsort) # Starts at "0.5" for ij=ijm
        if EA>0.5+alpha/2:
            sup = Dsort[ij] # upper bound of the interval
            inf = Dsort[ijm] - (sup - Dsort[ijm]) # lower bound of the interval
            break
        ij = ij + 1
    return inf, sup, Dsort
# Ejemplo
N1 = 1000
M1 = 10
alpha1 = 0.95
D = Boostrap_var(XG, N=N1, Mult=M1, alpha=alpha1) # Distribución calculada
fig, ax = plt.subplots(1, 1, figsize = (12,6))
ax.hist(D[2], color='green');
ax.axvline(D[0], ls='--', color='cyan', label='Límite inferior')
ax.axvline(D[1], ls='--', color='yellow', label='Límite superior')
ax.axvline(np.mean(D[2]), ls='--', color='white', label='Media')
ax.set_title('Distribución de medias de la varianza', fontsize=20)
ax.set_xlabel('Valor', fontsize=20)
ax.set_ylabel('Frecuencia', fontsize=20);
ax.legend();
```
Now I will check whether the computed confidence interval contains the true value of the variance.
```
# Recall that the sample variance of the Gaussian was stored as "VarXG"
print('The', 100*alpha1, '% confidence interval is: (', round(D[0], 3),
      ',', round(D[1], 3), ')')
print('The measured variance is:', round(VarXG, 3))
# Check
if VarXG > D[0] and VarXG < D[1]:
    print('Result: yes, the results are compatible')
else:
    print('Result: no, the results are not compatible')
```
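As a cross-check, the same interval can be read off directly with np.percentile, which is the standard percentile-bootstrap recipe. A minimal sketch, assuming the cell above has been run so that D[2] holds the resampled variances:
```
# percentile bootstrap: take the alpha/2 and 1-alpha/2 quantiles directly
low, high = np.percentile(D[2], [100 * (1 - alpha1) / 2, 100 * (1 + alpha1) / 2])
print('Percentile CI: (', round(low, 3), ',', round(high, 3), ')')
```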
| github_jupyter |
# VarEmbed Tutorial
Varembed is a word embedding model incorporating morphological information, capturing shared sub-word features. Unlike previous work that constructs word embeddings directly from morphemes, varembed combines morphological and distributional information in a unified probabilistic framework. Varembed thus yields improvements on intrinsic word similarity evaluations. Check out the original paper, [arXiv:1608.01056](https://arxiv.org/abs/1608.01056) accepted in [EMNLP 2016](http://www.emnlp2016.net/accepted-papers.html).
Varembed is now integrated into [Gensim](http://radimrehurek.com/gensim/), providing the ability to load already-trained varembed models into gensim, with additional functionality on top of the word vectors already present in gensim.
# This Tutorial
In this tutorial you will learn how to train, load and evaluate a varembed model on your data.
# Train Model
The authors provide the code to train a varembed model in the repository [MorphologicalPriorsForWordEmbeddings](https://github.com/rguthrie3/MorphologicalPriorsForWordEmbeddings). You'll need to use that code if you want to train a model.
# Load Varembed Model
Now that you have an already trained varembed model, you can easily load the varembed word vectors directly into Gensim. <br>
For that, you need to provide the path to the word vectors pickle file generated after you train the model and run the script to [package varembed embeddings](https://github.com/rguthrie3/MorphologicalPriorsForWordEmbeddings/blob/master/package_embeddings.py) provided in the [varembed source code repository](https://github.com/rguthrie3/MorphologicalPriorsForWordEmbeddings).
We'll use a varembed model trained on [Lee Corpus](https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/test/test_data/lee.cor) as the vocabulary, which is already available in gensim.
```
from gensim.models.wrappers import varembed
vector_file = '../../gensim/test/test_data/varembed_leecorpus_vectors.pkl'
model = varembed.VarEmbed.load_varembed_format(vectors=vector_file)
```
This loads a varembed model into Gensim. Also if you want to load with morphemes added into the varembed vectors, you just need to also provide the path to the trained morfessor model binary as an argument. This works as an optional parameter, if not provided, it would just load the varembed vectors without morphemes.
```
morfessor_file = '../../gensim/test/test_data/varembed_leecorpus_morfessor.bin'
model_with_morphemes = varembed.VarEmbed.load_varembed_format(vectors=vector_file, morfessor_model=morfessor_file)
```
This helps load trained varembed models into Gensim. Now you can use this for any of the Keyed Vector functionalities, like 'most_similar', 'similarity' and so on, already provided in gensim.
```
model.most_similar('government')
model.similarity('peace', 'grim')
```
# Conclusion
In this tutorial, we learned how to load pre-trained varembed model vectors into gensim and easily use and evaluate them. That's it!
# Resources
* [Varembed Source Code](https://github.com/rguthrie3/MorphologicalPriorsForWordEmbeddings)
* [Gensim](http://radimrehurek.com/gensim/)
* [Lee Corpus](https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/test/test_data/lee.cor)
| github_jupyter |
# FMskill assignment
You are working on a project modelling waves in the Southern North Sea. You have done 6 different calibration runs and want to choose the "best". You would also like to see how your best model is performing compared to a third-party model in NetCDF.
The data:
* SW model results: 6 dfs0 files ts_runX.dfs0 each with 4 items corresponding to 4 stations
* observations: 4 dfs0 files with station data for (name, longitude, latitude):
- F16: 4.0122, 54.1167
- HKZA: 4.0090, 52.3066
- K14: 3.6333, 53.2667
- L9: 4.9667, 53.6167
* A map observations_map.png showing the model domain and observation positions
* Third party model: 1 NetCDF file
The tasks:
1. Calibration - find the best run
2. Validation - compare model to third-party model
```
fldr = "../data/FMskill_assignment/" # where have you put your data?
import fmskill
from fmskill import PointObservation, ModelResult, Connector
```
## 1. Calibration
* 1.1 Start simple: compare F16 with SW1 (the first calibration run)
* 1.2 Define all observations and all model results
* 1.3 Create connector, plot temporal coverage
* 1.4 Evaluate results
* 1.5 Which model is best?
### 1.1 Simple compare
Use fmskill.compare to do a quick comparison of F16 and SW1.
What is the mean absolute error in cm?
Do a time series plot.
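A possible starting point is sketched below. The file names, item indices, and the exact fmskill calls are assumptions to adapt to your data, not a worked answer:
```
# a sketch, not a solution: quick comparison of observation F16 with run SW1
o_f16 = PointObservation(fldr + "F16.dfs0", item=0, x=4.0122, y=54.1167, name="F16")
mr_sw1 = ModelResult(fldr + "ts_run1.dfs0", item=0, name="SW1")
c = fmskill.compare(o_f16, mr_sw1)
print(c.skill(metrics=["mae"]))  # Hm0 is in meters, so multiply the MAE by 100 for cm
c.plot_timeseries()
```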
### 1.2 Define all observations and all model results
* Define 4 PointObservations o1, o2, o3, o4
* Define 6 ModelResults mr1, mr2, ... (name them "SW1", "SW2", ...)
* How many items do the ModelResults have?
### 1.3 Create connector, plot temporal coverage
* Create empty Connector con
* Then add the connections one observation at a time (start by matching o1 with the 6 models, then o2...)
* Print con to screen - which observation has the most observation points?
* Plot the temporal coverage of observations and models
* Save the Connector to an excel configuration file
### 1.4 Evaluate results
Do relevant qualitative and quantitative analysis (e.g. time series plots, scatter plots, skill tables etc) to compare the models.
### 1.5 Find the best
Which calibration run is best?
* Which model performs best in terms of bias?
* Which model has the smallest scatter index?
* Which model has linear slope closest to 1.0 for the station HKZA?
* Consider the last day only (Nov 19) - which model has the smallest bias for that day?
* Weighted: Give observation F16 10-times more weight than the other observations - which has the smallest MAE?
* Extremes: Which model has lowest rmse for Hs>4.0 (df = cc.all_df[cc.all_df.obs_val>4])?
## 2. Validation
We will now compare our best model against the UK MetOffice's North West Shelf model stored in NWS_HM0.nc.
* 2.1 Create a ModelResult mr_NWS, evaluate mr_NWS.ds (see the sketch after this list)
* 2.2 Plot the first time step (hint .isel(time=0)) of ds (hint: the item is called "VHM0")
* 2.3 Create a Connector con_NWS with the 4 observations and mr_NWS
* 2.4 Evaluate NWS - what is the mean rmse?
* 2.5 Compare NWS to SW5 - which model is better? And is it so for all stations and all metrics? (hint: you can merge ComparisonCollections using the + operator)
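A sketch for tasks 2.1-2.2 (the item name comes from the hint above; the ModelResult call signature is an assumption):
```
# a sketch for tasks 2.1-2.2
mr_NWS = ModelResult(fldr + "NWS_HM0.nc", item="VHM0", name="NWS")
mr_NWS.ds  # inspect the underlying xarray dataset
mr_NWS.ds["VHM0"].isel(time=0).plot()  # first time step, per the hint
```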
| github_jupyter |
## Installing numpy
```
! pip install numpy
import numpy as np
```
### Array creation
```
my_int_list = [1, 2, 3, 4]
#create numpy array from original python list
my_numpy_arr = np.array(my_int_list)
print(my_numpy_arr)
# Array of zeros
print(np.zeros(10))
# Array of ones with type int
print(np.ones(10, dtype=int))
# Range of numbers
rangeArray = np.array(range(10), int)
print(rangeArray)
# Random array
print(f"Random array: {np.random.rand(5)}\n")
# Random matrix
print(f"Random matrix:\n {np.random.rand(5,4)}\n")
# Random array of integers in a range (say 0-9)
randomArray = np.floor(np.random.rand(10) * 10)
print(f"Random integer array: {randomArray}\n")
# Further simplification
print(f"Random matrix:\n{np.random.randint(0, 10, (2,5))}\n")
integerArray = np.array([1,2,3,4], int)
integerArray2 = np.array([5,6], int)
# Concatenate two arrays
print(np.concatenate((integerArray, integerArray2)))
# Multidimensional array
floatArray = np.array([[1,2,3], [4,5,6]], float)
print(floatArray)
# Convert one dimensional to multidimensional arrays
rangeArray = rangeArray.reshape(5, 2)
print(rangeArray)
# Convert multidimensional to one dimensional array
rangeArray = rangeArray.flatten()
print(rangeArray)
# Concatenation of multi-dimensional arrays
arr1 = np.array([[1,2], [3,4]], int)
arr2 = np.array([[5,6], [7,8]], int)
print(f'array1: \n{arr1}\n')
print(f'array2: \n{arr2}')
# Based on dimension 1
print(np.concatenate((arr1, arr2), axis=0))
# Based on dimension 2
print(np.concatenate((arr1, arr2), axis=1))
```
### Universal Functions
These functions are defined as functions that operate element-wise on the array elements whether it is a single or multidimensional array.
```
# we want to alter each element of the collection by multiplying each integer by 2
my_int_list = [1, 2, 3, 4]
# python code
for i, val in enumerate(my_int_list):
my_int_list[i] *= 2
my_int_list
#create numpy array from original python list
my_numpy_arr = np.array(my_int_list)
#multiply each element by 2
my_numpy_arr * 2
# Addition
print(f"Array 1 + Array 2\n {arr1 + arr2}\n")
# Multiplication
print(f"Array 1 * Array 2\n {arr1 * arr2}\n")
# Square root
print(f"Square root of Array 1\n {np.sqrt(arr1)}\n")
# Log
print(f"Log of Array 1\n {np.log(arr1)}\n")
```
https://towardsdatascience.com/numpy-python-made-efficient-f82a2d84b6f7
### Aggregation Functions
These functions are useful when we wish to summarise the information contained in an array.
```
arr1 = np.arange(1,10).reshape(3,3)
print(f'Array 1: \n{arr1}\n')
print(f"Sum of elements of Array 1: {arr1.sum()}\n")
print(f"Sum by row elements of Array 1: {np.sum(arr1, axis=1)}\n")
print(f"Sum by column elements of Array 1: {np.sum(arr1, axis=0)}\n")
print(f'Array 1: \n{arr1}\n')
# Mean of array elements
print(f"Mean of elements of Array 1: {arr1.mean()}\n")
# Minimum of array elements
print(f"Minimum of elements of Array 1: {arr1.min()}\n")
# Minimum of elements of Array 1: 1
# Index of maximum of array elements can be found using arg before the function name
print(f"Index of maximum of elements of Array 1: {arr1.argmax()}")
```
### Broadcasting
Broadcasting is the set of rules that govern how universal functions operate on numpy arrays of different shapes.
```
sampleArray = np.array([[5,2,3], [3,4,5], [1,1,1]], int)
print(f"Sample Array\n {sampleArray}\n")
# Get unique values
print(f"Unique values: {np.unique(sampleArray)}\n")
# Unique values: [1 2 3 4 5]
# Get diagonal values
print(f"Diagonal\n {sampleArray.diagonal()}\n")
# Diagonal
# [5 4 1]
# Sort values in the multidimensional array
print(f"Sorted\n {np.sort(sampleArray)}\n")
sampleArray = np.array([[5,2,3], [3,4,5], [1,1,1]], int)
print(f"Sample Array\n {sampleArray}\n")
# Get diagonal values
print(f"Diagonal\n {sampleArray.T.diagonal()}\n")
vector = np.array([1,2,3,4], int)
matrix1 = np.array([[1,2,3], [4,5,6], [7,8,9]], int)
matrix2 = np.array([[1,1,1], [0,0,0], [1,1,1]], int)
# Dot operator
print(f"Dot of Matrix 1 and Matrix 2\n {np.dot(matrix1, matrix2)}\n")
# Cross operator
print(f"Cross of Matrix 1 and Matrix 2\n {np.cross(matrix1, matrix2)}\n")
# Outer operator
print(f"Outer of Matrix 1 and Matrix 2\n {np.outer(matrix1, matrix2)}\n")
# Inner operator
print(f"Inner of Matrix 1 and Matrix 2\n {np.inner(matrix1, matrix2)}")
```
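The cells above mostly showcase other array utilities, so here is a minimal sketch of the broadcasting shape rules themselves in action:
```
import numpy as np

a = np.arange(6).reshape(3, 2)   # shape (3, 2)
b = np.array([10, 100])          # shape (2,)

# b is stretched ("broadcast") along the missing axis to shape (3, 2)
print(a + b)

# a column of shape (3, 1) broadcasts against a row of shape (1, 2)
col = np.array([[1], [2], [3]])
row = np.array([[10, 20]])
print(col * row)                 # result has shape (3, 2)
```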
### Slicing, masking and fancy indexing
This last section pulls together a few indexing tricks.
```
arr1 = np.array([[1,5], [7,8]], int)
arr2 = np.array([[6, 2], [7,8]], int)
print(f'Array 1: \n{arr1}\n')
print(f'Array 2: \n{arr2}\n\n')
# We can compare complete arrays of equal size element wise
print(f"Array 1 > Array 2\n{arr1 > arr2}\n")
# We can compare elements of an array with a given value
print(f"Array 1 == 2\n {arr1 == arr2}\n")
bigArray = np.array(range(10))
print("Array: {}".format(bigArray))
# Slice array from index 0 to 4
print("Array values from index 0 to 4: {}".format(bigArray[0:5]))
# Masking using boolean values and operators
mask = (bigArray > 6) | (bigArray < 3)
print(mask)
print("Array values with mask as true: {}".format(bigArray[mask]))
# Fancy indexing
ind = [2,4,6]
print("Array values with index in list: {}".format(bigArray[ind]))
# Combine all three
print("Array values with index in list: {}".format(bigArray[bigArray > 6][:1]))
```
<img src="https://cdn-images-1.medium.com/max/800/1*cxbe7Omfj6Be0fbvD7gmGQ.png">
<img src="https://cdn-images-1.medium.com/max/800/1*9FImAfjF6Z6Hyv9lm1WgjA.png">
https://medium.com/@zachary.bedell/writing-beautiful-code-with-numpy-505f3b353174
```
# multiplying two matrices containing 60,000 and 80,000 integers
import time
import random as r
tick = time.time()
#create a 300x200 matrix of 60,000 random integers
my_list_1 = []
for row_index in range(300):
new_row = []
for col_index in range(200):
new_row.append(r.randint(0, 20))
my_list_1.append(new_row)
#create a 200x400 matrix of 80,000 random integers
my_list_2 = []
for row_index in range(200):
new_row = []
for col_index in range(400):
new_row.append(r.randint(0, 20))
my_list_2.append(new_row)
#create 300x400 matrix to hold results
my_result_arr = []
for row_index in range(300):
new_row = []
for col_index in range(400):
new_row.append(0)
my_result_arr.append(new_row)
# iterate through rows of my_list_1
for i in range(len(my_list_1)):
# iterate through columns of my_list_2
for j in range(len(my_list_2[0])):
# iterate through rows of my_list_2
for k in range(len(my_list_2)):
my_result_arr[i][j] += my_list_1[i][k] * my_list_2[k][j]
time_to_completion = time.time() - tick
print("execution time without NumPy: ", time_to_completion)
```
The code is difficult to read, and the solution requires double and triple nested loops, whose time complexities are O(n²) and O(n³) respectively.
```
import time
tick = time.time()
np_arr_1 = np.arange(0, 60000).reshape(300, 200)
np_arr_2 = np.arange(0, 80000).reshape(200, 400)
my_result_arr = np.matmul(np_arr_1, np_arr_2)
time_to_completion = time.time() - tick
print("execution time with NumPy: ", time_to_completion)
```
| github_jupyter |
# Lets-Plot in 2020
### Preparation
```
import numpy as np
import pandas as pd
import colorcet as cc
from PIL import Image
from lets_plot import *
from lets_plot.bistro.corr import *
LetsPlot.setup_html()
df = pd.read_csv("https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/lets_plot_git_history.csv", sep=';')
df = df[['author_date', 'author_name', 'files_changed', 'insertions', 'deletions']]
df.author_date = pd.to_datetime(df.author_date, utc=True)
df.files_changed = df.files_changed.str.split(' ').str[0].astype(int)
df.insertions = df.insertions.str.split(' ').str[0].astype(int)
df.deletions = df.deletions.fillna('0').str.split(' ').str[0].astype(int)
df['diff'] = df.insertions - df.deletions
df['month'] = df.author_date.dt.month
df['day'] = df.author_date.dt.day
df['weekday'] = df.author_date.dt.weekday
df['hour'] = df.author_date.dt.hour
df = df[df.author_date.dt.year == 2020].sort_values(by='author_date').reset_index(drop=True)
df.head()
```
### General Analytics
```
agg_features = {'files_changed': ['sum', 'mean'], \
'insertions': ['sum', 'mean'], \
'deletions': ['sum', 'mean'], \
'diff': ['sum']}
agg_df = df.groupby('author_name').agg(agg_features).reset_index()
agg_features['commits_number'] = ['sum']
agg_df = pd.merge(agg_df, df.author_name.value_counts().to_frame(('commits_number', 'sum')).reset_index(), \
left_on='author_name', right_on='index')
agg_df['color'] = cc.palette['glasbey_bw'][:agg_df.shape[0]]
plots = []
for feature, agg in [(key, val) for key, vals in agg_features.items() for val in vals]:
agg_df = agg_df.sort_values(by=(feature, agg), ascending=False)
aes_name = ('total {0}' if agg == 'sum' else 'mean {0} per commit').format(feature.replace('_', ' '))
plotted_df = agg_df[[('author_name', ''), (feature, agg), ('color', '')]]
plotted_df.columns = plotted_df.columns.get_level_values(0)
plots.append(ggplot(plotted_df) + \
geom_bar(aes(x='author_name', y=feature, color='color', fill='color'), \
stat='identity', alpha=.25, size=1, \
tooltips=layer_tooltips().line('^x')
.line('{0}|^y'.format(aes_name))) + \
scale_color_identity() + scale_fill_identity() + \
xlab('') + ylab('') + \
ggtitle(aes_name.title()))
w, h = 400, 300
bunch = GGBunch()
bunch.add_plot(plots[7], 0, 0, w, h)
bunch.add_plot(plots[6], w, 0, w, h)
bunch.add_plot(plots[0], 0, h, w, h)
bunch.add_plot(plots[1], w, h, w, h)
bunch.add_plot(plots[2], 0, 2 * h, w, h)
bunch.add_plot(plots[3], w, 2 * h, w, h)
bunch.add_plot(plots[4], 0, 3 * h, w, h)
bunch.add_plot(plots[5], w, 3 * h, w, h)
bunch.show()
```
Looking at the total values, we clearly see that Igor Alshannikov and Ivan Kupriyanov outcompete the rest. But there is a real intrigue as to who takes the third place.
Meanwhile, we see more diversity in mean values of different contribution types.
```
ggplot(df.hour.value_counts().to_frame('count').reset_index().sort_values(by='index')) + \
geom_histogram(aes(x='index', y='count', color='index', fill='index'), \
stat='identity', show_legend=False, \
tooltips=layer_tooltips().line('^y')) + \
scale_x_discrete(breaks=list(range(24))) + \
scale_color_gradient(low='#e0ecf4', high='#8856a7') + \
scale_fill_gradient(low='#e0ecf4', high='#8856a7') + \
xlab('hour') + ylab('commits number') + \
ggtitle('Total Hourly Committing') + ggsize(600, 450)
```
The peak of commit activity is around 18:00. The evening seems to be a good time to commit the day's results.
### Higher Resolution
```
plotted_df = df[df.insertions > 0].reset_index(drop=True)
plotted_df['insertions_unit'] = np.ones(plotted_df.shape[0])
ggplot(plotted_df) + \
geom_segment(aes(x='author_date', y='insertions_unit', xend='author_date', yend='insertions'), color='#8856a7') + \
geom_point(aes(x='author_date', y='insertions', fill='month'), shape=21, color='#8856a7', \
tooltips=layer_tooltips().line('@author_name').line('@|@insertions').line('@|@month')) + \
scale_x_datetime(name='date') + \
scale_y_log10(name='insertions (log)') + \
scale_fill_brewer(name='', type='qual', palette='Paired') + \
facet_grid(y='author_name') + \
ggtitle('Lollipop Plot of Commits by Authors') + ggsize(800, 1000)
```
Some of the team members started their work only a few months ago, so they still have time to catch up next year.
```
ggplot(df) + \
geom_point(aes(x='weekday', y='insertions', color='author_name', size='files_changed'), \
shape=8, alpha=.5, position='jitter', show_legend=False, \
tooltips=layer_tooltips().line('author|@author_name')
.line('@|@insertions')
.line('@|@deletions')
.line('files changed|@files_changed')) + \
scale_x_discrete(labels=['Monday', 'Tuesday', 'Wednesday', 'Thursday', \
'Friday', 'Saturday', 'Sunday']) + \
scale_y_log10(breaks=[2 ** n for n in range(16)]) + \
scale_size(range=[3, 7], trans='sqrt') + \
ggtitle('All Commits') + ggsize(800, 600) + \
theme(axis_tooltip='blank')
```
Usually no one works at the weekend. But if something needs to be done - it should be.
### And Finally...
```
r = df.groupby('day').insertions.median().values
x = r * np.cos(np.linspace(0, 2 * np.pi, r.size))
y = r * np.sin(np.linspace(0, 2 * np.pi, r.size))
daily_insertions_df = pd.DataFrame({'x': x, 'y': y})
MONTHS = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
mask_width, mask_height = 60, 80
mask = np.array(Image.open("images/snowman_mask.bmp").resize((mask_width, mask_height), Image.BILINEAR))
grid = [[(0 if color.mean() > 255 / 2 else 1) for color in row] for row in mask]
grid_df = pd.DataFrame(grid).stack().to_frame('month')
grid_df.index.set_names(['y', 'x'], inplace=True)
grid_df = grid_df.reset_index()
grid_df.y = grid_df.y.max() - grid_df.y
grid_df = grid_df[grid_df.month > 0].reset_index(drop=True)
agg_df = np.round(df.month.value_counts() * grid_df.shape[0] / df.shape[0]).to_frame('commits_number')
agg_df.iloc[0].commits_number += grid_df.shape[0] - agg_df.commits_number.sum()
agg_df.commits_number = agg_df.commits_number.astype(int)
agg_df.index.name = 'month'
agg_df = agg_df.reset_index()
grid_df['commits_number'] = 0
start_idx = 0
for idx, (month, commits_number) in agg_df.iterrows():
grid_df.loc[start_idx:(start_idx + commits_number), 'month'] = MONTHS[month - 1]
grid_df.loc[start_idx:(start_idx + commits_number), 'commits_number'] = commits_number
start_idx += commits_number
blank_theme = theme_classic() + theme(axis='blank', axis_ticks_x='blank', axis_ticks_y='blank', legend_position='none')
ps = ggplot(daily_insertions_df, aes(x='x', y='y')) + \
geom_polygon(color='#f03b20', fill='#fd8d3c', size=1) + coord_fixed() + blank_theme
p1l = corr_plot(data=df[['insertions', 'deletions']], flip=False).tiles(type='lower', diag=True)\
.palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
p1r = corr_plot(data=df[['deletions', 'insertions']], flip=True).tiles(type='lower', diag=True)\
.palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
p2l = corr_plot(data=df[['insertions', 'deletions', 'diff']], flip=False).tiles(type='lower', diag=True)\
.palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
p2r = corr_plot(data=df[['diff', 'deletions', 'insertions']], flip=True).tiles(type='lower', diag=True)\
.palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
p3l = corr_plot(data=df[['insertions', 'deletions', 'diff', 'files_changed']], flip=False)\
.tiles(type='lower', diag=True).palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
p3r = corr_plot(data=df[['files_changed', 'diff', 'deletions', 'insertions']], flip=True)\
.tiles(type='lower', diag=True).palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
pt = ggplot({'x': [0], 'y': [0], 'greetings': ['Happy New Year!']}, aes(x='x', y='y')) + \
geom_text(aes(label='greetings'), color='blue', size=20, family='Times New Roman', fontface='bold') + blank_theme
pm = ggplot(grid_df, aes(x='x', y='y')) + \
geom_tile(aes(fill='month'), width=.8, height=.8, \
tooltips=layer_tooltips().line('@|@month')
.line('@|@commits_number')) + \
scale_fill_brewer(type='qual', palette='Set2') + \
blank_theme
w, h = 50, 50
bunch = GGBunch()
bunch.add_plot(ps, 3 * w, 0, 2 * w, 2 * h)
bunch.add_plot(p1l, 2 * w, 2 * h, 2 * w, 2 * h)
bunch.add_plot(p1r, 4 * w, 2 * h, 2 * w, 2 * h)
bunch.add_plot(p2l, w, 4 * h, 3 * w, 3 * h)
bunch.add_plot(p2r, 4 * w, 4 * h, 3 * w, 3 * h)
bunch.add_plot(p3l, 0, 7 * h, 4 * w, 4 * h)
bunch.add_plot(p3r, 4 * w, 7 * h, 4 * w, 4 * h)
bunch.add_plot(pt, 0, 11 * h, 16 * w, 2 * h)
bunch.add_plot(pm, 8 * w, 3 * h, 8 * w, 8 * h)
bunch.show()
```
| github_jupyter |
# Cross-validation
This notebook contains the function that performs cross-validation tests. It is a dummy function that can be tested with the model(s).
```
from sklearn.base import clone

# NOTE: split_train_test, split_train_test_chronological, fit_ml_cb, reco_ml_cb,
# evaluate, user_df and item_df are assumed to be defined elsewhere in the project.

def cross_val(df, k, model, split_method='random'):
    """
    Performs cross-validation for different train and test sets.

    Parameters
    -----------
    df : the data to be split in the form of vanilla/transaction++ table (uid, iid, rating, timestamp)
    k : the number of times splitting and learning with the model is desired
    model : an unfitted sklearn model
    split_method : 'random' splitting or 'chronological' splitting of the data

    Returns
    --------
    mse and mae : error metrics using sklearn
    """
    mse = []
    mae = []
    if split_method == 'random':
        for i in range(k):
            print(i)
            # 1. split
            print('Starting splitting')
            df_train, df_test, df_test_um, indx_train, indx_test = split_train_test(
                df, 0.7)
            print('Finished splitting')
            # 2. train with model
            model_clone = clone(model)
            print('Starting training')
            model_clone_fit = fit_ml_cb(df_train, model_clone)
            print('Finished training')
            print('Starting completing matrix')
            result = reco_ml_cb(user_df, list(df_test.index), item_df, model_clone_fit)
            print('Finished completing matrix')
            print('Starting computing MAE and MSE')
            # 3. evaluate results (result is in the form of utility matrix)
            mse_i, mae_i = evaluate(result, df_test_um)
            print('Finished computing MAE and MSE')
            mse.append(mse_i)
            mae.append(mae_i)
    elif split_method == 'chronological':
        # 1. split
        print('Starting splitting')
        df_train, df_test, df_test_um, indx_train, indx_test = split_train_test_chronological(
            df, 0.7)
        print('Finished splitting')
        # 2. train with model
        model_clone = clone(model)
        print('Starting training')
        model_clone_fit = fit_ml_cb(df_train, model_clone)
        print('Finished training')
        print('Starting completing matrix')
        result = reco_ml_cb(user_df, list(df_test.index), item_df, model_clone_fit)
        print('Finished completing matrix')
        print('Starting computing MAE and MSE')
        # 3. evaluate results (result is in the form of utility matrix)
        mse_i, mae_i = evaluate(result, df_test_um)
        print('Finished computing MAE and MSE')
        mse.append(mse_i)
        mae.append(mae_i)
    return mse, mae
```
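A minimal usage sketch, assuming the transaction table `df` and the helper functions referenced above are defined earlier in the project:
```
# a sketch: 5 rounds of random-split evaluation with a simple sklearn model
import numpy as np
from sklearn.linear_model import Ridge

mse_scores, mae_scores = cross_val(df, k=5, model=Ridge(), split_method='random')
print('mean MSE: %.4f, mean MAE: %.4f' % (np.mean(mse_scores), np.mean(mae_scores)))
```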
| github_jupyter |
```
from tpot import TPOTClassifier
import os
from tqdm import tqdm_notebook as tqdm
# Ignore the warnings
import warnings
warnings.filterwarnings('always')
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import warnings
import matplotlib.pyplot as plt
from matplotlib.pyplot import subplots
import matplotlib.patches as patches
import seaborn as sns
from pylab import rcParams
%matplotlib inline
plt.style.use('seaborn')
sns.set(style='whitegrid',color_codes=True)
# classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
import xgboost as xgb
import catboost as ctb
# for classification
from sklearn.metrics import accuracy_score
# model selection
from sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold
from sklearn.model_selection import GridSearchCV
# Hp optimization imports
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
import mlflow
import re
import eli5
import gc
import random
import math
import psutil
import pickle
import datetime
from time import time
# save/load models
from joblib import dump
from joblib import load
import timeit
from sklearn.preprocessing import StandardScaler
root = "../../data/raw/Gamma_Log_Facies_Type_Prediction/"
models_root = "../../models/Gamma_Log_Facies_Type_Prediction/"
RANDOM_STATE = 42
np.random.seed(RANDOM_STATE)
pd.set_option('max_columns', 150)
# rcParams['figure.figsize'] = 16,8
%%time
full_train_df = pd.read_csv(root + "Train_File.csv")
full_test_df = pd.read_csv(root + "Test_File.csv")
submit_df = pd.read_csv(root + "Submission_File.csv")
def create_lags(df):
for i in range(0, 25):
df["lag_forward_{}".format(i)] = df.GR.shift(i)
df["lag_backward_{}".format(i)] = df.GR.shift(-i)
return df
train_df_ts = full_train_df[full_train_df["well_id"] < 100]
valid_df_ts = full_train_df[full_train_df["well_id"].isin(list(range(100,120)))]
train_df_ts.head()
width = 3
shifted = train_df_ts.GR.shift(width - 1)
window = shifted.rolling(window=width)
dataframe = pd.concat([window.min(), window.mean(), window.max(), shifted], axis=1)
dataframe.columns = ['min', 'mean', 'max', 't+1']
dataframe = pd.concat([dataframe, train_df_ts])
print(dataframe.head(10))
train_df_ts.head()
window
window = train_df_ts.expanding()
dataframe = pd.concat([window.min(), window.mean(), window.max(), train_df_ts.shift(-1)], axis=1)
# dataframe.columns = ['min', 'mean', 'max', 't+1']
print(dataframe.head(5))
train_df_ts = train_df_ts.groupby("well_id").apply(create_lags)
train_df_ts = train_df_ts.fillna(0)
valid_df_ts = valid_df_ts.groupby("well_id").apply(create_lags)
valid_df_ts = valid_df_ts.fillna(0)
X_train, y_train, X_test, y_test = train_df_ts.drop(["label"], axis=1), train_df_ts["label"], \
valid_df_ts.drop(["label"], axis=1), valid_df_ts["label"]
# Leftover scratch from a time-series tutorial: `temps` is not defined in this
# notebook, so the lines are kept commented out; with a pd.Series named `temps`
# they would build a 3-lag supervised-learning frame.
# dataframe = pd.concat([temps.shift(3), temps.shift(2), temps.shift(1), temps], axis=1)
# dataframe.columns = ['t-3', 't-2', 't-1', 't+1']
mlflow.set_experiment("xgboost_cls_feature_selecting")
class HyperoptHPOptimizer:
def __init__(self, hyperparameters_space, max_evals):
self.trials = Trials()
self.max_evals = max_evals
self.hyperparameters_space = hyperparameters_space
        self.skf = StratifiedKFold(n_splits=3, shuffle=False)  # random_state only applies when shuffle=True
def get_loss(self, hyperparameters):
# MLflow will track and save hyperparameters, loss, and scores.
with mlflow.start_run(run_name='hyperopt_param'):
params = {
'min_child_weight': 8,
'gamma': 3,
'subsample': 1,
'colsample_bytree': 0.6,
'eta': 0.3,
'max_depth': 4,
'random_state': RANDOM_STATE,
'verbosity': 1,
'n_jobs': -1,
'n_estimators': 10,
'learning_rate': 0.1,
}
cols = [col for col, is_use in hyperparameters.items() if is_use == 1]
for k, v in hyperparameters.items():
mlflow.log_param(k, v)
model = xgb.XGBClassifier(**params)
model.fit(X_train[cols], y_train)
y_pred = model.predict(X_test[cols])
loss = accuracy_score(y_test, y_pred)
# Log the various losses and metrics (on train and validation)
mlflow.log_metric("accuracy", loss)
# Use the last validation loss from the history object to optimize
return {
'loss': -loss,
'status': STATUS_OK,
'eval_time': time()
}
def optimize(self):
"""
This is the optimization function that given a space of
hyperparameters and a scoring function, finds the best hyperparameters.
"""
# Use the fmin function from Hyperopt to find the best hyperparameters
# Here we use the tree-parzen estimator method.
best = fmin(self.get_loss, self.hyperparameters_space, algo=tpe.suggest,
trials=self.trials, max_evals=self.max_evals)
return best
MAX_EVALS = 200
HYPERPARAMETERS_SPACE = {col: hp.choice(col, [0, 1]) for col in X_train.columns.values}
hp_optimizer = HyperoptHPOptimizer(hyperparameters_space=HYPERPARAMETERS_SPACE, max_evals=MAX_EVALS)
optimal_hyperparameters = hp_optimizer.optimize()
print(optimal_hyperparameters)
```
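One natural follow-up, sketched below: because each `hp.choice` space here is `[0, 1]`, the index hyperopt returns equals the flag itself, so the chosen feature subset can be read straight off the result.
```
# a sketch: recover the selected feature subset from the hyperopt result
selected_cols = [col for col, flag in optimal_hyperparameters.items() if flag == 1]
print('Selected %d of %d features' % (len(selected_cols), len(X_train.columns)))
```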
| github_jupyter |
# LSV Data Analysis and Parameter Estimation
##### First, all relevent Python packages are imported
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import curve_fit
from scipy.signal import savgol_filter, find_peaks, find_peaks_cwt
import pandas as pd
import math
import glob
import altair as alt
from voltammetry import preprocessing, plotting, fitting
```
##### The user will be able to import experimental data for an LSV scan
##### (Currently, we assume that the LSV sweep starts at equilibrium)
```
##Import Experimental Reversible Data:
rev_exp_data = pd.read_csv("data/10mVs_Reversible.csv")
current_exp=rev_exp_data['current(A)'].values
voltage_exp=rev_exp_data['voltage(mV)'].values
time_exp=rev_exp_data['time(s)'].values
## all appropriate packages and the singular experimental data file is imported now
```
##### Next, the program will grab some simple quantitative information from the graph that may be hard to do by hand or over extensive datasets
```
t,i,v = preprocessing.readFile('data/10mM_F2CA_1M_KOH_pH_14_100mV.DTA',type='gamry',scan='first')
length = len(t)
v1, v2 = v[0:int(length/2)], v[int(length/2):]
i1, i2 = i[0:int(length/2)], i[int(length/2):]
t1, t2 = t[0:int(length/2)], t[int(length/2):]
peak_list = []
_, v_peaks, i_peaks = fitting.peak_find(v1,i1,v2,i2)
b1, b2 = fitting.baseline(v1,i1,v2,i2)
for n in range(len(v_peaks)):
peak_list.append([i_peaks[n],v_peaks[n]])
plotting.plot_voltammogram(t,i,v, peaks = peak_list).display()
plt.plot(v1,b1)
plt.plot(v1,i1)
plt.plot(v2,b2)
plt.plot(v2,i2)
```
##### This program can also return relevant parameters using a physics-based model.
```
# Import the dimensionless voltammogram (V I) for reversible reactions
rev_dim_values = pd.read_csv("data/dimensionless_values_rev.csv")
rev_dim_current=rev_dim_values['dimensionless_current'].values
rev_dim_voltage=rev_dim_values['dimensionless_Voltage'].values
##We will now prompt the user to submit known parameters (THESE CAN BE CHANGED OR MADE MORE CONVENIENT)
sweep_rate= float(input("What is the Voltage sweep rate in mV/s?(10)"))
electrode_surface_area= float(input("What is the electrode surface area in cm^2?(.2)"))
concentration_initial= float(input("What is the initial concentration in mol/cm^3?(.00001)"))
Temp= float(input("What is the temperature in K?(298)"))
eq_pot= float(input("What is the equilibrium potential in V?(.10)"))
##we are inserting a diffusion coefficient to check math here, we will estimate this later:
Diff_coeff=0.00001
## Here we define constant variables, these can be made to user inputs if needed.
n=1
Faradays_const=96285
R_const=8.314
sigma=(n*Faradays_const*sweep_rate)/(R_const*Temp)
Pre=electrode_surface_area*concentration_initial*n*Faradays_const*math.sqrt(Diff_coeff*sigma)
output_voltage=(eq_pot+rev_dim_voltage/n)
output_current=Pre*rev_dim_current
plt.plot(output_voltage,output_current)
```
##### Then, we can back out a relevant parameter from the data:
```
# Fitting Diff_Coeff
def test_func(rev_dim_current, D):
return electrode_surface_area*concentration_initial*n*Faradays_const*math.sqrt(D*sigma)*rev_dim_current
params, params_covariance = curve_fit(test_func, rev_dim_current, output_current,p0=None,bounds = (0,[1]))
print("Diffusion Coefficient (cm^2/s): {}".format(params[0]))
```
##### We can repeat this exercise on an LSV with an irreversible reaction to determine exchange current density.
```
##Import Experimental Irreversible Data:
irrev_exp_data = pd.read_csv("data/10mVs_Irreversible.csv")
current_exp=irrev_exp_data['current(A)'].values
voltage_exp=irrev_exp_data['voltage(mV)'].values
time_exp=irrev_exp_data['time(s)'].values
## all appropriate packages and the singular experimental data file is imported now
# Import the dimensionless voltammogram (V I) for irreversible reactions
irrev_dim_values = pd.read_csv("data/dimensionless_values_irrev.csv")
irrev_dim_current=irrev_dim_values['dimensionless_current'].values
irrev_dim_voltage=irrev_dim_values['dimensionless_Voltage'].values
##We will now prompt the user to submit known parameters (THESE CAN BE CHANGED OR MADE MORE CONVENIENT)
sweep_rate= float(input("What is the Voltage sweep rate in mV/s?(10)"))
electrode_surface_area= float(input("What is the electrode surface area in cm^2?(.2)"))
concentration_initial= float(input("What is the initial concentration in mol/cm^3?(.00001)"))
Temp= float(input("What is the temperature in K?(298)"))
eq_pot= float(input("What is the equilibrium potential in mV?(100)"))
##we are inserting a diffusion coefficient to check math here, we will estimate this later:
Diff_coeff=0.00001
## Here we define constant variables, these can be made to user inputs if needed.
n=1
Faradays_const=96285
R_const=8.314
exchange_current_density=0.0002
kinetic_coefficient=exchange_current_density/n/Faradays_const/electrode_surface_area/concentration_initial
transfer_coefficient=.6
eV_const=59.1
beta=transfer_coefficient*n*Faradays_const*sweep_rate/R_const/Temp/1000
Pre=(concentration_initial*n*Faradays_const*
math.sqrt(Diff_coeff*sweep_rate*transfer_coefficient
*Faradays_const/(R_const*Temp*1000)))
output_voltage=eq_pot+irrev_dim_voltage/transfer_coefficient-eV_const/transfer_coefficient*math.log(math.sqrt(math.pi*Diff_coeff*beta)/kinetic_coefficient)
output_current=Pre*irrev_dim_current
plt.plot(output_voltage,output_current)
# Fitting Diff_Coeff
from scipy import optimize
def test_func(irrev_dim_voltage, exchange_current_density):
return eq_pot+irrev_dim_voltage/transfer_coefficient-eV_const/transfer_coefficient*math.log(math.sqrt(math.pi*Diff_coeff*beta)/(exchange_current_density/n/Faradays_const/electrode_surface_area/concentration_initial))
params, params_covariance = optimize.curve_fit(test_func, irrev_dim_voltage, output_voltage,p0=None,bounds = (0,[1]))
print("Exchange current density (A/cm^2): {}".format(params[0]))
```
| github_jupyter |
# Now You Code 1: Address
Write a Python program to input elements of your postal address and then output them as if they were an address label. The program should use a dictionary to store the address and complete two function definitions: one for inputting the address and one for printing the address.
**NOTE:** While you most certainly can write this program without using dictionaries or functions, the point of the exercise is to get used to using them!!!
Sample Run:
```
Enter Street: 314 Hinds Hall
Enter City: Syracuse
Enter State: NY
Enter Postal Zip Code: 13244
Mailing Address:
314 Hinds Hall
Syracuse , NY 13244
```
## Step 1: Problem Analysis `input_address` function
This function should get input from the user at run time and return the input address.
Inputs: None (gets input from user)
Outputs: a Python dictionary of address info (street, city, state, postal_code)
Algorithm (Steps in Program):
```
## Step 2: Write input_address function
#input: None (inputs from console)
#output: dictionary of the address
def input_address():
address= {}
# todo: write code here to input the street, city, state and zip code and add to dictionary at runtime and store in a dictionary
return address
```
## Step 3: Problem Analysis `print_address` function
This function should display a mailing address using the dictionary variable
Inputs: dictionary variable of address into (street, city, state, postal_code)
Outputs: None (prints to screen)
Algorithm (Steps in Program):
```
## Step 4: write code
# input: address dictionary
# output: none (outputs to console)
def print_address(address):
# todo: write code to print the address (leave empty return at the end
return
```
## Step 5: Problem Analysis main program
Should be trivial at this point.
Inputs:
Outputs:
Algorithm (Steps in Program):
```
## Step 6: write main program, use other 2 functions you made to solve this problem.
# main program
# todo: call input_address, then print_address
```
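For reference, one possible solution sketch for Steps 2-6 is below (your field names may differ; note how `dict.get()` answers part of Question 1 by supplying a default when a key like 'state' is missing):
```
# a possible solution sketch for Steps 2-6
def input_address():
    address = {}
    address['street'] = input("Enter Street: ")
    address['city'] = input("Enter City: ")
    address['state'] = input("Enter State: ")
    address['postal_code'] = input("Enter Postal Zip Code: ")
    return address

def print_address(address):
    print("Mailing Address:")
    # dict.get() returns a default instead of raising KeyError when a key is missing
    print(address.get('street', ''))
    print(address.get('city', ''), ",", address.get('state', ''), address.get('postal_code', ''))
    return

# main program
address = input_address()
print_address(address)
```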
## Step 7: Questions
1. Explain a strategy for a situation when an expected dictionary key, like 'state' for example does not exist?
2. The program as it is written is not very useful. How can we make it more useful?
## Reminder of Evaluation Criteria
1. What the problem attempted (analysis, code, and answered questions) ?
2. What the problem analysis thought out? (does the program match the plan?)
3. Does the code execute without syntax error?
4. Does the code solve the intended problem?
5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)
| github_jupyter |
# Weak-Strong Cluster Problem
In 2015, Google and NASA jointly announced that the D-Wave machine was 100 million times faster than existing machines. The benchmark used there was the weak-strong cluster problem, in which clusters of qubits are formed and flipped together. Here we build simple weak and strong clusters and run the computation.
The reference paper is:
What is the Computational Value of Finite Range Tunneling?
https://arxiv.org/abs/1512.02206
## Background
Quantum annealing has been proposed as an optimization machine that exploits the quantum tunneling effect; the question examined here is what computational benefit that tunneling actually brings. The D-Wave 2X quantum annealer is advantageous on problems where the energy barriers separating local minima are tall and thin, and it is said to outperform simulated annealing (SA) there. With 945 qubits it is roughly 10^8 times faster than SA (at a 99% success rate), and similarly faster than the quantum Monte Carlo method (QMC), which simulates tunneling on a classical computer.
## Hamiltonians, SA, and QA
The verification compares simulated annealing (SA) with quantum annealing (simulated here via the quantum Monte Carlo method, so QMC below).
The problem to be solved is the same in both cases: each algorithm works to minimize a cost function called the Hamiltonian, but SA and QMC reach the minimum (ground) state by different principles.
SA simulates heat and searches for the ground state with thermal fluctuations. QMC uses a magnetic field instead of heat and searches with the help of the quantum tunneling effect.
Given a cost function, SA must trace its landscape faithfully, paying the thermal cost needed to climb over each energy barrier during the search. With QMC the tunneling effect removes that need: the search can probabilistically pass through a barrier and reach the other side without raising the thermal cost.
Tunneling through a barrier has its own conditions: the taller and thinner the barrier, the higher the tunneling probability. Such landscapes are very harsh for SA, which makes them favorable for QMC and quantum annealing. The idea of this verification is to construct such problems deliberately so that the quantum approaches gain a speed advantage over SA.
In other words, the more tall, thin energy barriers the target cost function contains, the more advantageous the D-Wave machine and the QMC algorithm are expected to be.
## What Is the Weak-Strong Cluster Problem?
It is a problem that connects two clusters of qubits, a weak one and a strong one. D-Wave uses the chimera-graph connectivity, whose unit cell consists of 8 qubits. Two such unit cells, 16 qubits in total, make up the two-cluster problem used here.
<img src='https://github.com/Blueqat/Wildqat/blob/master/examples_ja/img/023_1.png?raw=1'>
All qubits are connected on the chimera graph with ferromagnetic couplings, which favor equal values. The trick lies in the local fields, the per-qubit biases toward -1 or +1: every qubit of the right cluster is set to h2 = -1, while every qubit of the left cluster is set to h1 = 0.44. As a result, during the computation all 8 qubits of the left cluster flip together to align with the right cluster. Because of these local-field values the left cluster is the weak cluster and the right one is the strong cluster, hence "weak-strong cluster problem".
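As a sanity check on why the whole weak cluster flips, here is a minimal sketch that computes the Ising energy H = Σᵢ hᵢsᵢ + Σᵢ<ⱼ Jᵢⱼsᵢsⱼ of the two competing configurations in the paper's sign convention. The K4,4 unit-cell wiring and the four inter-cluster links are assumptions read off the figure.
```
import numpy as np

# paper convention: h = 0.44 on the weak cluster, h = -1 on the strong cluster
h = np.array([0.44] * 8 + [-1.0] * 8)

edges = []  # every coupling is ferromagnetic, J = -1
edges += [(i, j) for i in range(4) for j in range(4, 8)]        # K4,4 of cluster 1
edges += [(i, j) for i in range(8, 12) for j in range(12, 16)]  # K4,4 of cluster 2
edges += [(4, 12), (5, 13), (6, 14), (7, 15)]                   # inter-cluster links

def energy(s):
    return h @ s + sum(-1.0 * s[i] * s[j] for i, j in edges)

s_global = np.ones(16)                    # both clusters +1: the true ground state
s_local = np.r_[-np.ones(8), np.ones(8)]  # weak cluster follows its own field
print(energy(s_global))  # -40.48
print(energy(s_local))   # -39.52: the local minimum behind a tall, thin barrier
```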
Connecting many of these pairs yields much larger clusters: multiple 16-qubit pairs are prepared and their strong clusters are linked to each other with 4-qubit couplings whose signs are chosen randomly as +1 or -1 (ferro/anti-ferro), building up a huge weak-strong cluster network.
<img src='https://github.com/Blueqat/Wildqat/blob/master/examples_ja/img/023_2.png?raw=1'>
Source: https://arxiv.org/abs/1512.02206
Since the D-Wave chip contains some defective qubits, the clusters are placed so as to avoid them. In the large-cluster construction above, black circles are strong clusters with bias -1, gray circles are weak clusters with bias 0.44, blue links are ferromagnetic connections, and red links are antiferromagnetic ones.
## Experimental Results
The result was a speed difference of roughly 10^8. See the figure below.
<img src='https://github.com/Blueqat/Wildqat/blob/master/examples_ja/img/023_4.png?raw=1'>
Source: https://arxiv.org/abs/1512.02206
## Implementing Part of the Circuit
Let's try a small piece of this with an actual algorithm and solve the two-cluster case in practice, verifying a single 16-qubit cluster pair for now.
An interesting point is that every qubit-qubit coupling is ferromagnetic. The signs are flipped relative to the paper: we set all couplings to -1.
The heart of this experiment is setting the local fields of the qubits: every qubit of the right (orange) cluster in the figure below gets local field +1, and every qubit of the left (light-blue) cluster gets -0.44. For convenience the 16 qubits are numbered: the local fields run from h0 to h15, and the coupling strength Jij between qubits is written with qubit indices, e.g. J0,4.
<img src='https://github.com/Blueqat/Wildqat/blob/master/examples_ja/img/023_1.png?raw=1'>
First, let's simply run SA on it.
## Implementing the Chimera Graph
We now fix the coupling coefficients of the chimera graph. Since this is only 16 qubits, we simply build the full 16x16 matrix. Feeding the QUBO matrix below into Wildqat performs the computation.
<img src='https://github.com/Blueqat/Wildqat/blob/master/examples_ja/img/023_5.png?raw=1'>
The orange -1 entries are the 16 couplings inside each unit cell; with two clusters there are 32 intra-cell couplings in total. The red -0.44 entries are the local fields of cluster 1, the blue +1 entries are the local fields of cluster 2, and the purple entries are the -1 couplings between the clusters. Note that Wildqat uses the opposite sign convention to the paper.
## Running It
Let's feed this into Wildqat and run it.
```
!pip install blueqat
import blueqat.opt as wq
import numpy as np
a = wq.opt()
a.J = np.zeros((16,16))
for i in range(8):
a.J[i][i] = -0.44
for i in range(8,16):
a.J[i][i] = 1
for i in range(4,8):
for j in range(0,4):
a.J[j][i] = -1
for i in range(12,16):
for j in range(8,12):
a.J[j][i] = -1
a.J[4][12] = -1
a.J[5][13] = -1
a.J[6][14] = -1
a.J[7][15] = -1
a.sa()
```
Everything came out 0. On repeated runs it occasionally fell into the local minimum in which the left cluster is all 1 and the right cluster all 0.
```
a.sa()
```
## For Reference: Results on the Real D-Wave Machine
We also ran it on the D-Wave machine itself, with the same parameters as in the paper.
<img src='https://github.com/Blueqat/Wildqat/blob/master/examples_ja/img/023_6.png?raw=1'>
The ground state was reached with a 98.6% success rate, almost the same as in the paper. We used 1000 shots.
This problem is not hard to implement, so I encourage interested readers to start with small instances and work up to larger ones. It was more a research exercise than a practical application; honestly, with SA the transition from the local minimum to the optimal solution is not easy, and that is exactly where quantum algorithms such as D-Wave and QMC have their advantage. Everything hinges on the local field h0 = -0.44 on the left cluster's qubits, so adjusting that value is also a good way to learn.
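To explore that, here is a small sketch (reusing the same Wildqat API as the cells above) that sweeps the weak-cluster bias and reruns SA; values near 0.44 are where the collective flip becomes hard:
```
# sweep the weak-cluster bias and watch when SA stops finding the all-0 ground state
for h_weak in [0.2, 0.3, 0.44, 0.6]:
    a = wq.opt()
    a.J = np.zeros((16, 16))
    for i in range(8):
        a.J[i][i] = -h_weak       # weak cluster bias (sign flipped vs. the paper)
    for i in range(8, 16):
        a.J[i][i] = 1             # strong cluster bias
    for i in range(4, 8):
        for j in range(0, 4):
            a.J[j][i] = -1        # intra-cluster ferromagnetic couplings, cluster 1
    for i in range(12, 16):
        for j in range(8, 12):
            a.J[j][i] = -1        # intra-cluster ferromagnetic couplings, cluster 2
    for i in range(4):
        a.J[4 + i][12 + i] = -1   # inter-cluster couplings
    print(h_weak, a.sa())
```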
| github_jupyter |
# Disparate Impact by Providers' Gender
## the best model: XGBoost
```
import pandas as pd
import time
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import glob
import copy
from collections import Counter
from numpy import where
import statsmodels.api as sm
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split
import random
import itertools
from interpret.glassbox import ExplainableBoostingClassifier
import xgboost as xgb
from interpret.perf import ROC
from imblearn import over_sampling
from imblearn import under_sampling
from imblearn.pipeline import Pipeline
import os # for directory and file manipulation
import numpy as np # for basic array manipulation
import pandas as pd # for dataframe manipulation
import datetime # for timestamp
# for model eval
from sklearn.metrics import accuracy_score, f1_score, log_loss, mean_squared_error, roc_auc_score
# global constants
ROUND = 3
# set global random seed for better reproducibility
SEED = 1234
seed = 1234
NTHREAD = 4
#import sagemaker, boto3, os
import warnings
warnings.filterwarnings('ignore')
# import the cleaned dataset containing Gender feature
#%cd /Users/alex/Desktop/Master/BA_Practicum_6217_10/Project/dataset
partB = pd.read_csv("partB_new5.csv")
partB.info()
# One-Hot Encoding
# Convert the Fraud variable to object datatype
partB["Fraud"] = partB["Fraud"].astype(object)
# Encoding
encoded_partB = pd.get_dummies(partB, drop_first = True)
# Rename some of the changed variable names
encoded_partB.rename(columns = {"Gender_M":"Gender", "Fraud_1":"Fraud", "Place_Of_Srvc_O":"Place_Of_Srvc"}, inplace = True)
```
## Data Partitioning
```
# Assign X and y features
X_var = list(encoded_partB.columns)
for var in ["NPI","Fraud"]:
X_var.remove(var)
y_var = "Fraud"
# Split the whole dataset into train and test dataset
# Using a stratified random sampling so that the Fraud-class (1) data are evenly split into train & test sets
x_train, x_test, y_train, y_test = train_test_split(encoded_partB[X_var],
encoded_partB[y_var],
test_size=0.2,
stratify=encoded_partB["Fraud"])
# Also concatenate the split x & y dataframes
tr_df = pd.concat([x_train, y_train], axis = 1)
te_df = pd.concat([x_test, y_test], axis = 1)
```
## Over-Sampling
```
# SMOTE the dataset
oversample = over_sampling.SMOTE()
tr_X, tr_y = oversample.fit_resample(tr_df[X_var], tr_df[y_var])
```
## Modeling
### Data Partitioning (Train & Valid)
```
trans_tr_df = pd.concat([tr_X, tr_y], axis = 1)
# Split train and validation sets
np.random.seed(SEED)
ratio = 0.7 # split train & validation sets with 7:3 ratio
split = np.random.rand(len(trans_tr_df)) < ratio # define indices of 70% corresponding to the training set
train = trans_tr_df[split]
valid = trans_tr_df[~split]
# summarize split
print('Train data rows = %d, columns = %d' % (train.shape[0], train.shape[1]))
print('Validation data rows = %d, columns = %d' % (valid.shape[0], valid.shape[1]))
# reassign X_var
X_var.remove("Gender")
```
### XGBM
```
def xgb_grid(dtrain, dvalid, mono_constraints=None, gs_params=None, n_models=None,
ntree=None, early_stopping_rounds=None, verbose=False, seed=None):
""" Performs a random grid search over n_models and gs_params.
    :param dtrain: Training data as an xgb.DMatrix.
    :param dvalid: Validation data as an xgb.DMatrix.
:param mono_constraints: User-supplied monotonicity constraints.
:param gs_params: Dictionary of lists of potential XGBoost parameters over which to search.
:param n_models: Number of random models to evaluate.
:param ntree: Number of trees in XGBoost model.
:param early_stopping_rounds: XGBoost early stopping rounds.
:param verbose: Whether to display training iterations, default False.
:param seed: Random seed for better interpretability.
:return: Best candidate model from random grid search.
"""
# cartesian product of gs_params
keys, values = zip(*gs_params.items())
experiments = [dict(zip(keys, v)) for v in itertools.product(*values)]
# preserve exact reproducibility for this function
np.random.seed(SEED)
# select randomly from cartesian product space
selected_experiments = np.random.choice(len(experiments), n_models)
# set global params for objective, etc.
params = {'booster': 'gbtree',
'eval_metric': 'auc',
'nthread': NTHREAD,
'objective': 'binary:logistic',
'seed': SEED}
# init grid search loop
best_candidate = None
best_score = 0
# grid search loop
for i, exp in enumerate(selected_experiments):
params.update(experiments[exp]) # override global params with current grid run params
print('Grid search run %d/%d:' % (int(i + 1), int(n_models)))
print('Training with parameters:', params)
# train on current params
watchlist = [(dtrain, 'train'), (dvalid, 'eval')]
if mono_constraints is not None:
params['monotone_constraints'] = mono_constraints
candidate = xgb.train(params,
dtrain,
ntree,
early_stopping_rounds=early_stopping_rounds,
evals=watchlist,
verbose_eval=verbose)
# determine if current model is better than previous best
if candidate.best_score > best_score:
best_candidate = candidate
best_score = candidate.best_score
print('Grid search new best score discovered at iteration %d/%d: %.4f.' %
(int(i + 1), int(n_models), candidate.best_score))
print('---------- ----------')
return best_candidate
gs_params = {'colsample_bytree': [0.7],
'colsample_bylevel': [0.9],
'eta': [0.5],
'max_depth': [7],
'reg_alpha': [0.005],
'reg_lambda': [0.005],
'subsample': [0.9],
'min_child_weight': [1],
'gamma': [0.2]}
# Convert data to xgboost DMatrix format
dtrain = xgb.DMatrix(train[X_var], train[y_var])
dvalid = xgb.DMatrix(valid[X_var], valid[y_var])
best_mxgb = xgb_grid(dtrain, dvalid, gs_params=gs_params, n_models=1, ntree=1000, early_stopping_rounds=100, seed=SEED)
```
### Combine valid set with the best prediction
```
dtest = xgb.DMatrix(te_df[X_var])
best_mxgb_phat = pd.DataFrame(best_mxgb.predict(dtest, iteration_range=(0, best_mxgb.best_ntree_limit)), columns=['phat'])
best_mxgb_phat = pd.concat([te_df.reset_index(drop=True), best_mxgb_phat], axis=1)
best_mxgb_phat.head()
```
## Mitigating Discrimination
### Utility functions
### Calculate confusion matrices by demographic group
```
def get_confusion_matrix(frame, y, yhat, by=None, level=None, cutoff=0.2, verbose=True):
""" Creates confusion matrix from pandas dataframe of y and yhat values, can be sliced
by a variable and level.
:param frame: Pandas dataframe of actual (y) and predicted (yhat) values.
:param y: Name of actual value column.
:param yhat: Name of predicted value column.
:param by: By variable to slice frame before creating confusion matrix, default None.
:param level: Value of by variable to slice frame before creating confusion matrix, default None.
    :param cutoff: Cutoff threshold for confusion matrix, default 0.2.
:param verbose: Whether to print confusion matrix titles, default True.
:return: Confusion matrix as pandas dataframe.
"""
# determine levels of target (y) variable
# sort for consistency
level_list = list(frame[y].unique())
level_list.sort(reverse=True)
# init confusion matrix
cm_frame = pd.DataFrame(columns=['actual: ' + str(i) for i in level_list],
index=['predicted: ' + str(i) for i in level_list])
# don't destroy original data
frame_ = frame.copy(deep=True)
# convert numeric predictions to binary decisions using cutoff
dname = 'd_' + str(y)
frame_[dname] = np.where(frame_[yhat] > cutoff , 1, 0)
# slice frame
if (by is not None) & (level is not None):
frame_ = frame_[frame[by] == level]
# calculate size of each confusion matrix value
for i, lev_i in enumerate(level_list):
for j, lev_j in enumerate(level_list):
cm_frame.iat[j, i] = frame_[(frame_[y] == lev_i) & (frame_[dname] == lev_j)].shape[0]
# i, j vs. j, i nasty little bug ... updated 8/30/19
# output results
if verbose:
if by is None:
print('Confusion matrix:')
else:
print('Confusion matrix by ' + by + '=' + str(level))
return cm_frame
```
### Calculate Adverse Impact Ratio (AIR)
```
def air(cm_dict, reference_key, protected_key, verbose=True):
""" Calculates the adverse impact ratio as a quotient between protected and
reference group acceptance rates: protected_prop/reference_prop.
Optionally prints intermediate values. ASSUMES 0 IS "POSITIVE" OUTCOME!
:param cm_dict: Dictionary of demographic group confusion matrices.
:param reference_key: Name of reference group in cm_dict as a string.
:param protected_key: Name of protected group in cm_dict as a string.
:param verbose: Whether to print intermediate acceptance rates, default True.
:return: AIR.
"""
eps = 1e-20 # numeric stability and divide by 0 protection
# reference group summary
reference_accepted = float(cm_dict[reference_key].iat[1,0] + cm_dict[reference_key].iat[1,1]) # predicted 0's
reference_total = float(cm_dict[reference_key].sum().sum())
reference_prop = reference_accepted/reference_total
if verbose:
print(reference_key.title() + ' proportion accepted: %.3f' % reference_prop)
# protected group summary
protected_accepted = float(cm_dict[protected_key].iat[1,0] + cm_dict[protected_key].iat[1,1]) # predicted 0's
protected_total = float(cm_dict[protected_key].sum().sum())
protected_prop = protected_accepted/protected_total
if verbose:
print(protected_key.title() + ' proportion accepted: %.3f' % protected_prop)
# return adverse impact ratio
return ((protected_prop + eps)/(reference_prop + eps))
```
### Select Probability Cutoff by F1-score
```
def get_max_f1_frame(frame, y, yhat, res=0.01, air_reference=None, air_protected=None):
""" Utility function for finding max. F1.
Coupled to get_confusion_matrix() and air().
Assumes 1 is the marker for class membership.
:param frame: Pandas dataframe of actual (y) and predicted (yhat) values.
:param y: Known y values.
:param yhat: Model scores.
:param res: Resolution over which to search for max. F1, default 0.01.
:param air_reference: Reference group for AIR calculation, optional.
:param air_protected: Protected group for AIR calculation, optional.
:return: Pandas DataFrame of cutoffs to select from.
"""
do_air = all(v is not None for v in [air_reference, air_protected])
# init frame to store f1 at different cutoffs
if do_air:
columns = ['cut', 'f1', 'acc', 'air']
else:
columns = ['cut', 'f1', 'acc']
    f1_frame = pd.DataFrame(columns=columns)
# copy known y and score values into a temporary frame
temp_df = frame[[y, yhat]].copy(deep=True)
# find f1 at different cutoffs and store in acc_frame
for cut in np.arange(0, 1 + res, res):
temp_df['decision'] = np.where(temp_df.iloc[:, 1] > cut, 1, 0)
f1 = f1_score(temp_df.iloc[:, 0], temp_df['decision'])
acc = accuracy_score(temp_df.iloc[:, 0], temp_df['decision'])
row_dict = {'cut': cut, 'f1': f1, 'acc': acc}
if do_air:
# conditionally calculate AIR
cm_ref = get_confusion_matrix(frame, y, yhat, by=air_reference, level=1, cutoff=cut, verbose=False)
cm_pro = get_confusion_matrix(frame, y, yhat, by=air_protected, level=1, cutoff=cut, verbose=False)
air_ = air({air_reference: cm_ref, air_protected: cm_pro}, air_reference, air_protected, verbose=False)
row_dict['air'] = air_
f1_frame = f1_frame.append(row_dict, ignore_index=True)
del temp_df
return f1_frame
```
### Find optimal cutoff based on F1
```
f1_frame = get_max_f1_frame(best_mxgb_phat, y_var, 'phat')
print(f1_frame)
print()
max_f1 = f1_frame['f1'].max()
best_cut = f1_frame.loc[int(f1_frame['f1'].idxmax()), 'cut'] #idxmax() returns the index of the maximum value
acc = f1_frame.loc[int(f1_frame['f1'].idxmax()), 'acc']
print('Best XGB F1: %.4f achieved at cutoff: %.2f with accuracy: %.4f.' % (max_f1, best_cut, acc))
```
### Specify Interesting Demographic Groups
```
best_mxgb_phat_copy = best_mxgb_phat.copy()
best_mxgb_phat_copy.rename(columns = {"Gender":"male"}, inplace = True)
best_mxgb_phat_copy["female"] = np.where(best_mxgb_phat_copy["male"] == 0, 1,0)
```
### Confusion Matrix by Groups
```
demographic_group_names = ['male', 'female']
cm_dict = {}
for name in demographic_group_names:
cm_dict[name] = get_confusion_matrix(best_mxgb_phat_copy, y_var, 'phat', by=name, level=1, cutoff=best_cut)
print(cm_dict[name])
print()
```
### Find AIR for female providers
* protected group: female providers
* reference group: male providers
```
print('Adverse impact ratio(AIR) for Females vs. Males: %.3f' % air(cm_dict, 'male', 'female'))
```
* Threshold: AIR >= 0.8
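As a quick illustration of the arithmetic behind that threshold, here is a tiny sketch with hypothetical acceptance counts (not the model's actual numbers):
```
# hypothetical counts, only to illustrate the AIR formula
male_accepted, male_total = 900, 1000      # reference acceptance rate = 0.90
female_accepted, female_total = 780, 1000  # protected acceptance rate = 0.78

air_value = (female_accepted / female_total) / (male_accepted / male_total)
print('AIR = %.3f' % air_value)  # 0.867 >= 0.8, so no adverse impact is flagged
```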
| github_jupyter |
```
# Python3 program to solve N Queen
# Problem using backtracking
global N
N = 4
def printSolution(board):
for i in range(N):
for j in range(N):
print (board[i][j], end = " ")
print()
# A utility function to check if a queen can
# be placed on board[row][col]. Note that this
# function is called when "col" queens are
# already placed in columns from 0 to col -1.
# So we need to check only left side for
# attacking queens
def isSafe(board, row, col):
# Check this row on left side
for i in range(col):
if board[row][i] == 1:
return False
# Check upper diagonal on left side
for i, j in zip(range(row, -1, -1),
range(col, -1, -1)):
if board[i][j] == 1:
return False
# Check lower diagonal on left side
for i, j in zip(range(row, N, 1),
range(col, -1, -1)):
if board[i][j] == 1:
return False
return True
def solveNQUtil(board, col):
# base case: If all queens are placed
# then return true
if col >= N:
return True
# Consider this column and try placing
# this queen in all rows one by one
for i in range(N):
if isSafe(board, i, col):
# Place this queen in board[i][col]
board[i][col] = 1
# recur to place rest of the queens
if solveNQUtil(board, col + 1) == True:
return True
            # If placing the queen in board[i][col]
            # doesn't lead to a solution, then remove the
            # queen from board[i][col]
board[i][col] = 0
# if the queen can not be placed in any row in
    # this column col then return false
return False
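# Note: backtracking prunes the search via isSafe(), so only a small
# fraction of the N**N candidate placements is ever explored.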
# This function solves the N Queen problem using
# Backtracking. It mainly uses solveNQUtil() to
# solve the problem. It returns false if queens
# cannot be placed, otherwise return true and
# placement of queens in the form of 1s.
# note that there may be more than one
# solution; this function prints one of the
# feasible solutions.
def solveNQ():
board = [ [0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0] ]
if solveNQUtil(board, 0) == False:
print ("Solution does not exist")
return False
printSolution(board)
return True
# Driver Code
solveNQ()
def sudokutest(s,i,j,z):
# z is the number
isiValid = numpy.logical_or((i+1<1),(i+1>9));
isjValid = numpy.logical_or((j+1<1),(j+1>9));
iszValid = numpy.logical_or((z<1),(z>9));
if s.shape!=(9,9):
raise(Exception("Sudokumatrix not valid"));
if isiValid:
raise(Exception("i not valid"));
if isjValid:
raise(Exception("j not valid"));
if iszValid:
raise(Exception("z not valid"));
if(s[i,j]!=0):
return False;
for ii in range(0,9):
if(s[ii,j]==z):
return False;
for jj in range(0,9):
if(s[i,jj]==z):
return False;
row = int(i/3) * 3;
col = int(j/3) * 3;
for ii in range(0,3):
for jj in range(0,3):
if(s[ii+row,jj+col]==z):
return False;
return True;
def possibleNums(s , i ,j):
l = [];
ind = 0;
for k in range(1,10):
if sudokutest(s,i,j,k):
l.insert(ind,k);
ind+=1;
return l;
def sudokusolver(S):
zeroFound = 0;
for i in range(0,9):
for j in range(0,9):
if(S[i,j]==0):
zeroFound=1;
break;
if(zeroFound==1):
break;
if(zeroFound==0):
print("REALLY The end")
z = numpy.zeros(shape=(9,9))
for x in range(0,9):
for y in range(0,9):
z[x,y] = S[x,y]
print(z)
return z
x = possibleNums(S,i,j);
for k in range(len(x)):
S[i,j]=x[k];
sudokusolver(S);
S[i,j] = 0;
if __name__ == "__main__":
import numpy
#s = numpy.zeros(shape=(9,9))
k = numpy.matrix([0,0,0,0,0,9,0,7,8,5,1,0,0,0,0,0,6,9,9,0,8,0,2,5,0,0,0,0,3,2,0,0,0,0,0,0,0,0,9,3,0,0,0,1,0,0,0,0,4,0,0,0,8,0,8,0,0,0,9,0,7,0,0,6,0,1,0,0,0,0,0,0,0,0,0,0,7,0,8,0,1]).reshape(9,9)
print(k)
print('*'*80)
%timeit sudokusolver(k)
import numpy as np
from functools import reduce
def solver_python(grid):
    numbers = np.arange(1, 10)
    # locate the first empty cell (marked 0); if none remain, the grid is solved
    i, j = np.where(grid == 0)
    if (i.size == 0):
        return (True, grid)
    else:
        i, j = i[0], j[0]
    row = grid[i, :]
    col = grid[:, j]
    # the 3x3 sub-square containing cell (i, j), flattened to 9 entries
    sqr = grid[(i//3)*3:(3+(i//3)*3), (j//3)*3:(3+(j//3)*3)].reshape(9)
    # candidates = 1..9 minus everything already used in the row, column and square
    values = np.setdiff1d(numbers, reduce(np.union1d, (row, col, sqr)))
grid_temp = np.copy(grid)
for value in values:
grid_temp[i,j] = value
test = solver_python(grid_temp)
if (test[0]):
return(test)
return(False,None)
example = np.array([[5,3,0,0,7,0,0,0,0],
[6,0,0,1,9,5,0,0,0],
[0,9,8,0,0,0,0,6,0],
[8,0,0,0,6,0,0,0,3],
[4,0,0,8,0,3,0,0,1],
[7,0,0,0,2,0,0,0,6],
[0,6,0,0,0,0,2,8,0],
[0,0,0,4,1,9,0,0,5],
[0,0,0,0,8,0,0,7,9]])
%timeit solver_python(example)
import sys
import numpy as np
from functools import reduce
# Instructions:
# Linux>> python3 driver_3.py <soduku_str>
# Windows py3\> python driver_3.py <soduku_str>
# Inputs
# print("input was:", sys.argv)
def BT(soduku, slices):
"Backtracking search to solve soduku"
# If soduku is complete return it.
if isComplete(soduku):
return soduku
# Select the MRV variable to fill
vars = [tuple(e) for e in np.transpose(np.where(soduku==0))]
var, avail_d = selectMRVvar(vars, soduku, slices)
# Fill in a value and solve further (recursively),
# backtracking an assignment when stuck
for value in avail_d:
soduku[var] = value
result = BT(soduku, slices)
if np.any(result):
return result
else:
soduku[var] = 0
return False
def str2arr(soduku_str):
"Converts soduku_str to 2d array"
return np.array([int(s) for s in list(soduku_str)]).reshape((9,9))
def var2grid(var, slices):
"Returns the grid slice (3x3) to which the variable's coordinates belong "
row,col = var
grid = ( slices[int(row/3)], slices[int(col/3)] )
return grid
# Constraints
def unique_rows(soduku):
for row in soduku:
if not np.array_equal(np.unique(row),np.array(range(1,10))) :
return False
return True
def unique_columns(soduku):
for row in soduku.T: #transpose soduku to get columns
if not np.array_equal(np.unique(row),np.array(range(1,10))) :
return False
return True
def unique_grids(soduku, slices):
s1,s2,s3 = slices
allgrids=[(si,sj) for si in [s1,s2,s3] for sj in [s1,s2,s3]] # Makes 2d slices for grids
for grid in allgrids:
if not np.array_equal(np.unique(soduku[grid]),np.array(range(1,10))) :
return False
return True
def isComplete(soduku):
if 0 in soduku:
return False
else:
return True
def checkCorrect(soduku, slices):
if unique_columns(soduku):
if unique_rows(soduku):
if unique_grids(soduku, slices):
return True
return False
# Search
def getDomain(var, soduku, slices):
"Gets the remaining legal values (available domain) for an unfilled box `var` in `soduku`"
row,col = var
#ravail = np.setdiff1d(FULLDOMAIN, soduku[row,:])
#cavail = np.setdiff1d(FULLDOMAIN, soduku[:,col])
#gavail = np.setdiff1d(FULLDOMAIN, soduku[var2grid(var)])
#avail_d = reduce(np.intersect1d, (ravail,cavail,gavail))
used_d = reduce(np.union1d, (soduku[row,:], soduku[:,col], soduku[var2grid(var, slices)]))
FULLDOMAIN = np.array(range(1,10)) #All possible values (1-9)
avail_d = np.setdiff1d(FULLDOMAIN, used_d)
#print(var, avail_d)
return avail_d
def selectMRVvar(vars, soduku, slices):
"""
Returns the unfilled box `var` with minimum remaining [legal] values (MRV)
and the corresponding values (available domain)
"""
#Could this be improved?
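    # Note: this recomputes the domain of every unfilled cell on each call;
    # caching domains and updating them incrementally after each assignment
    # (i.e. constraint propagation) could speed this up substantially.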
avail_domains = [getDomain(var,soduku, slices) for var in vars]
avail_sizes = [len(avail_d) for avail_d in avail_domains]
index = np.argmin(avail_sizes)
return vars[index], avail_domains[index]
# Solve
def full_solution():
soduku_str='000000000302540000050301070000000004409006005023054790000000050700810000080060009'
soduku = str2arr(soduku_str)
slices = [slice(0,3), slice(3,6), slice(6,9)]
s1,s2,s3 = slices
return BT(soduku, slices), soduku, slices
sol, soduku, slices = full_solution()  # %timeit does not persist assignments, so solve once first
%timeit full_solution()
print("solved:\n", sol)
print("correct:", checkCorrect(soduku, slices))
```
| github_jupyter |
# Test: Minimum error discrimination
In this notebook we are testing the evolution of the error probability with the number of evaluations.
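For two pure states $|\psi\rangle$ and $|\phi\rangle$ prepared with equal prior probabilities, the minimum achievable error probability is given by the Helstrom bound (presumably what `nnd.helstrom_bound` computes below):
$$P_{\mathrm{err}}^{\min} = \frac{1}{2}\left(1 - \sqrt{1 - |\langle\psi|\phi\rangle|^2}\right)$$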
```
import sys
sys.path.append('../../')
import itertools
import numpy as np
import matplotlib.pyplot as plt
from numpy import pi
from qiskit.algorithms.optimizers import SPSA
from qnn.quantum_neural_networks import StateDiscriminativeQuantumNeuralNetworks as nnd
from qnn.quantum_state import QuantumState
plt.style.use('ggplot')
def callback(params, results, prob_error, prob_inc, prob):
data.append(prob_error)
# Create random states
ψ = QuantumState.random(1)
ϕ = QuantumState.random(1)
# Parameters
th_u, fi_u, lam_u = [0], [0], [0]
th1, th2 = [0], [pi]
th_v1, th_v2 = [0], [0]
fi_v1, fi_v2 = [0], [0]
lam_v1, lam_v2 = [0], [0]
params = list(itertools.chain(th_u, fi_u, lam_u, th1, th2, th_v1, th_v2, fi_v1, fi_v2, lam_v1, lam_v2))
# Initialize Discriminator
discriminator = nnd([ψ, ϕ])
data = []
results = discriminator.discriminate(SPSA(100), params, callback=callback)
optimal = nnd.helstrom_bound(ψ, ϕ)
print(f'Optimal results: {optimal}\nActual results: {results}')
fig = plt.figure(figsize=(14, 6))
plt.plot(data, '-')
plt.xlabel('Number of evaluations')
plt.ylabel('Probability')
plt.legend(['Experimental'])
plt.title('Evolution of error probability for 2 states')
fig.savefig('twostates.png')
plt.show()
th_u, fi_u, lam_u = results[0][:3]
th1 = results[0][3]
th2 = results[0][4]
th_v1 = results[0][5]
th_v2 = results[0][6]
fi_v1 = results[0][7]
fi_v2 = results[0][8]
lam_v1 = results[0][9]
lam_v2 = results[0][10]
M = nnd.povm( 2,
[th_u], [fi_u], [lam_u],
[th1], [th2],
[th_v1], [th_v2],
[fi_v1], [fi_v2],
[lam_v1], [lam_v2], output='povm' )
plt.style.use('default')
sphere = nnd.plot_bloch_sphere( M , [ψ, ϕ] )
sphere.render()
plt.savefig('sphere_2_states')
plt.style.use('ggplot')
# Create random states
ψ = QuantumState.random(1)
ϕ = QuantumState.random(1)
χ = QuantumState.random(1)
# Parameters
th_u, fi_u, lam_u = [0], [0], [0]
th1, th2 = 2 * [0], 2 * [pi]
th_v1, th_v2 = 2 * [0], 2 * [0]
fi_v1, fi_v2 = 2 * [0], 2 * [0]
lam_v1, lam_v2 = 2 * [0], 2 * [0]
params = list(itertools.chain(th_u, fi_u, lam_u, th1, th2, th_v1, th_v2, fi_v1, fi_v2, lam_v1, lam_v2))
# Initialize Discriminator
discriminator = nnd([ψ, ϕ, χ])
data = []
results = discriminator.discriminate(SPSA(100), params, callback=callback)
print(f'Results: {results}')
fig = plt.figure(figsize=(14, 6))
plt.plot(data, '-')
plt.xlabel('Number of evaluations')
plt.ylabel('Probability')
plt.legend(['Experimental'])
plt.title('Evolution of error probability for 3 states')
fig.savefig('3states.png')
plt.show()
th_u, fi_u, lam_u = results[0][:3]
th1 = results[0][3:5]
th2 = results[0][5:7]
th_v1 = results[0][7:9]
th_v2 = results[0][9:11]
fi_v1 = results[0][11:13]
fi_v2 = results[0][13:15]
lam_v1 = results[0][15:17]
lam_v2 = results[0][17:19]
M = nnd.povm( 3,
[th_u], [fi_u], [lam_u],
th1, th2,
th_v1, th_v2,
fi_v1, fi_v2,
lam_v1, lam_v2, output='povm' )
plt.style.use('default')
sphere = nnd.plot_bloch_sphere( M , [ψ, ϕ, χ] )
sphere.render()
plt.savefig('sphere_3_states.png')
plt.style.use('ggplot')
# Create random states
ψ = QuantumState([ np.array([1,0]) ])
ϕ = QuantumState([ np.array([np.cos(np.pi/4), np.sin(np.pi/4)]),
np.array([np.cos(0.1+np.pi/4),np.sin(0.1+np.pi/4)] ) ])
χ = QuantumState([ np.array([np.cos(np.pi/4), 1j*np.sin(np.pi/4)]),
np.array([np.cos(0.1+np.pi/4), 1j*np.sin(0.1+np.pi/4)] ),
np.array([np.cos(-0.1+np.pi/4), 1j*np.sin(-0.1+np.pi/4)] )])
# Parameters
th_u, fi_u, lam_u = list(np.pi*np.random.randn(1)), list(np.pi*np.random.randn(1)), list(np.pi*np.random.randn(1))
th1, th2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
th_v1, th_v2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
fi_v1, fi_v2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
lam_v1, lam_v2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
params = list(itertools.chain(th_u, fi_u, lam_u, th1, th2, th_v1, th_v2, fi_v1, fi_v2, lam_v1, lam_v2))
# Initialize Discriminator
discriminator = nnd([ψ, ϕ, χ])
data = []
results = discriminator.discriminate(SPSA(100), params, callback=callback)
print(f'Results: {results}')
fig = plt.figure(figsize=(14, 6))
plt.plot(data, '-')
plt.xlabel('Number of evaluations')
plt.ylabel('Probability')
plt.legend(['Experimental'])
plt.title('Evolution of error probability for 3 states with noise')
fig.savefig('noisy.png')
plt.show()
th_u, fi_u, lam_u = results[0][:3]
th1 = results[0][3:5]
th2 = results[0][5:7]
th_v1 = results[0][7:9]
th_v2 = results[0][9:11]
fi_v1 = results[0][11:13]
fi_v2 = results[0][13:15]
lam_v1 = results[0][15:17]
lam_v2 = results[0][17:19]
M = nnd.povm( 3,
[th_u], [fi_u], [lam_u],
th1, th2,
th_v1, th_v2,
fi_v1, fi_v2,
lam_v1, lam_v2, output='povm' )
plt.style.use('default')
sphere = nnd.plot_bloch_sphere( M , [ψ, ϕ, χ] )
sphere.render()
plt.savefig('sphere_3_states_noisy.png')
plt.style.use('ggplot')
```
| github_jupyter |
# Terminologies
<img src="https://github.com/dorisjlee/remote/blob/master/astroSim-tutorial-img/terminology.jpg?raw=true",width=20%>
- __Domain__ (aka Grids): the whole simulation box.
- __Block__ (aka Zones): a group of cells that makes up a larger unit so that it is more easily handled. If the code is run in parallel, one processor can be assigned to work on several blocks (specified by iProcs, jProcs, kProcs in flash.par). In FLASH, the default block size is $2^3 = 8$ cells. This means that level 0 in the AMR is 8 cells, and so forth.
<img src="https://github.com/dorisjlee/remote/blob/master/astroSim-tutorial-img/level_cells.jpg?raw=true",width=20%>
- __Cells__ : basic units that contain information about the fluid variables (often called primitives: $\rho$, $P$, $v_{x,y,z}$,$B_{x,y,z}$)
- __Ghost cells__ (abbreviated as ``gc`` in FLASH): can be thought of as an extra layer of padding outside the simulation domain. The values of these gcs are mostly determined by the boundary conditions you choose. Generally, you won't have to mess with these when specifying the initial conditions.
# Simulation_initBlock.F90
Simulation_initBlock is called for each block. First we compute the center based on the dimensions of the box (in cgs) from flash.par:
~~~fortran
center = abs(xmin-xmax)/2.
~~~
We loop through all the coordinates of the cell within each block.
~~~fortran
do k = blkLimits(LOW,KAXIS),blkLimits(HIGH,KAXIS)
! get the coordinates of the cell center in the z-direction
zz = zCoord(k)-center
do j = blkLimits(LOW,JAXIS),blkLimits(HIGH,JAXIS)
! get the coordinates of the cell center in the y-direction
yy = yCoord(j)-center
do i = blkLimits(LOW,IAXIS),blkLimits(HIGH,IAXIS)
! get the cell center, left, and right positions in x
xx = xCenter(i)-center
~~~
``xCenter,yCoord,zCoord`` are functions that return the cell position (in cgs) given its cell index. These calculations treat the bottom-left corner of the box as the origin, so we subtract the box center to move the origin to the center, as shown in Fig 3.
<img src="https://github.com/dorisjlee/remote/blob/master/astroSim-tutorial-img/user_coord.png?raw=true",width=200,height=200>
__Fig 3: The corrected ``xx,yy,zz`` are physical positions measured from the origin.__
Given the cell positions, you can specify values for initializing the fluid variables.
The fluid variables are stored in local variables (called rhoZone, presZone, velxZone, velyZone, velzZone in the example), which are then transferred to the cell one at a time using the method Grid_putPointData:
~~~fortran
call Grid_putPointData(blockId, CENTER, DENS_VAR, EXTERIOR, axis, rhoZone)
~~~
For example, you may have an analytical radial density distribution ($\rho= Ar^2$) that you would like to initialize the sphere with:
~~~fortran
rr = sqrt(xx**2 + yy**2 + zz**2)
rhoZone = A*rr**2
~~~
Or maybe your initial conditions cannot be expressed in closed form; in that case you can read in precomputed values for each cell. This optional tutorial will explain how to do linear interpolation to set up the numerical solution of the Lane-Emden sphere.
### Adding new RuntimeParameters to be read into Simulation_initBlock.F90
As we have already seen, to compute the center of the box, we need to read in the dimensions of the box (``xmin,xmax``) from flash.par. Some runtime parameters are used by other simulation modules, and some are specific to the problem and defined by the user.
To add in a new runtime parameter:
1) In ``Simulation_data.F90``, declare the variables to store these runtime parameters:
~~~fortran
real, save :: fattening_factor,beta_param,xmin,xmax
~~~
2) In ``Simulation_init.F90``, read in the values of the runtime parameter:
~~~fortran
call RuntimeParameters_get('xmin',xmin)
call RuntimeParameters_get('xmax',xmax)
~~~
3) In ``Simulation_initBlock.F90``, use the data:
~~~fortran
use Simulation_data, ONLY: xmin,xmax
~~~
Note you should __NOT__ declare ``real::xmin,xmax`` again inside ``Simulation_initBlock.F90``, otherwise, the values that you read in will be overridden.
| github_jupyter |
# Single layer Neural Network
In this notebook, we will code a single neuron and use it as a linear classifier with two inputs. The tuning of the neuron parameters is done by backpropagation using gradient descent.
```
from sklearn.datasets import make_blobs
import numpy as np
# matplotlib to display the data
import matplotlib
matplotlib.rc('font', size=16)
matplotlib.rc('xtick', labelsize=16)
matplotlib.rc('ytick', labelsize=16)
from matplotlib import pyplot as plt, cm
from matplotlib.colors import ListedColormap
%matplotlib inline
```
## Dataset
Let's create some labeled data in the form of (X, y) with an associated class which can be 0 or 1. For this we can use the function `make_blobs` in the `sklearn.datasets` module. Here we use 2 centers with coordinates (-0.5, -1.0) and (1.0, 1.0).
```
X, y = make_blobs(n_features=2, random_state=42, centers=[(-0.5, -1.0), (1.0, 1.0)])
y = y.reshape((y.shape[0], 1))
print(X.shape)
print(y.shape)
```
Plot our training data using `plt.scatter` to have a first visualization. Here we color the points with their labels stored in `y`.
```
plt.scatter(X[:, 0], X[:, 1], c=y.squeeze(), edgecolors='gray')
plt.title('training data with labels')
plt.axis('equal')
plt.show()
```
## Activation functions
Here we play with popular activation functions like tanh, ReLu or sigmoid.
```
def heaviside(x):
return np.heaviside(x, np.zeros_like(x))
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def ReLU(x):
return np.maximum(0, x)
def leaky_ReLU(x, alpha=0.1):
return np.maximum(alpha * x, x)
def tanh(x):
return np.tanh(x)
from math import pi
plt.figure()
x = np.arange(-pi, pi, 0.01)
plt.axhline(y=0., color='gray', linestyle='dashed')
plt.axhline(y=-1, color='gray', linestyle='dashed')
plt.axhline(y=1., color='gray', linestyle='dashed')
plt.axvline(x=0., color='gray', linestyle='dashed')
plt.xlim(-pi, pi)
plt.ylim(-1.2, 1.2)
plt.title('activation functions', fontsize=16)
plt.plot(x, heaviside(x), label='heavyside', linewidth=3)
legend = plt.legend(loc='lower right')
plt.savefig('activation_functions_1.pdf')
plt.plot(x, sigmoid(x), label='sigmoid', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_2.pdf')
plt.plot(x, tanh(x), label='tanh', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_3.pdf')
plt.plot(x, ReLU(x), label='ReLU', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_4.pdf')
plt.plot(x, leaky_ReLU(x), label='leaky ReLU', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_5.pdf')
plt.show()
# gradients of the activation functions
def sigmoid_grad(x):
s = sigmoid(x)
return s * (1 - s)
def relu_grad(x):
return 1. * (x > 0)
def tanh_grad(x):
return 1 - np.tanh(x) ** 2
plt.figure()
x = np.arange(-pi, pi, 0.01)
plt.plot(x, sigmoid_grad(x), label='sigmoid gradient', linewidth=3)
plt.plot(x, relu_grad(x), label='ReLU gradient', linewidth=3)
plt.plot(x, tanh_grad(x), label='tanh gradient', linewidth=3)
plt.xlim(-pi, pi)
plt.title('activation function derivatives', fontsize=16)
legend = plt.legend()
legend.get_frame().set_linewidth(2)
plt.savefig('activation_functions_derivatives.pdf')
plt.show()
```
## ANN implementation
A simple neuron with two inputs $(x_1, x_2)$ which applies an affine transform with weights $(w_1, w_2)$ and bias $w_0$.
The neuron computes the quantity called the activation: $a=\sum_i w_i x_i + w_0 = w_0 + w_1 x_1 + w_2 x_2$
This quantity is sent to the activation function, chosen to be a sigmoid function here: $f(a)=\dfrac{1}{1+e^{-a}}$
$f(a)$ is the output of the neuron, bounded between 0 and 1.
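Backpropagation for this single neuron follows directly from the chain rule. With the squared-error loss $L=\sum_j \big(f(a_j)-y_j\big)^2$ and the sigmoid derivative $f'(a)=f(a)\,(1-f(a))$, the weight gradient used in the code below is
$$\frac{\partial L}{\partial w_i} = \sum_j 2\,\big(f(a_j)-y_j\big)\,f(a_j)\,\big(1-f(a_j)\big)\,x_{ji}$$
which, in matrix form, is exactly the line `grad_W = np.dot(X.T, grad_y_pred * y_pred * (1 - y_pred))`.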
### Quick implementation
First let's implement our network in a concise fashion.
```
import numpy as np
from numpy.random import randn
X, y = make_blobs(n_samples= 100, n_features=2, random_state=42, centers=[[-0.5, -1], [1, 1]])
# adjust the sizes of our arrays
X = np.c_[np.ones(X.shape[0]), X]
print(X.shape)
y = y.reshape((y.shape[0], 1))
np.random.seed(2)
W = randn(3, 1)
print('* model params: {}'.format(W.tolist()))
eta = 1e-2 # learning rate
n_epochs = 50
for t in range(n_epochs):
# forward pass
y_pred = sigmoid(X.dot(W))
loss = np.sum((y_pred - y) ** 2)
print(t, loss)
# backprop
grad_y_pred = 2 * (y_pred - y)
grad_W = np.dot(X.T, grad_y_pred * y_pred * (1 - y_pred))
# update rule
W -= eta * grad_W
print('* new model params: {}'.format(W.tolist()))
```
### Modular implementation
Now let's create a class to represent our neural network to have more flexibility and modularity. This will prove to be useful later when we add more layers.
```
class SingleLayerNeuralNetwork:
"""A simple artificial neuron with a single layer and two inputs.
This type of network is called a Single Layer Neural Network and belongs to
the Feed-Forward Neural Networks. Here, the activation function is a sigmoid,
the loss is computed using the squared error between the target and
the prediction. Learning the parameters is achieved using back-propagation
and gradient descent
"""
def __init__(self, eta=0.01, rand_seed=42):
"""Initialisation routine."""
np.random.seed(rand_seed)
        self.W = np.random.randn(3, 1) # weights
self.eta = eta # learning rate
self.loss_history = []
def sigmoid(self, x):
"""Our activation function."""
return 1 / (1 + np.exp(-x))
def sigmoid_grad(self, x):
"""Gradient of the sigmoid function."""
return self.sigmoid(x) * (1 - self.sigmoid(x))
def predict(self, X, bias_trick=True):
X = np.atleast_2d(X)
if bias_trick:
# bias trick: add a column of 1 to X
X = np.c_[np.ones((X.shape[0])), X]
return self.sigmoid(np.dot(X, self.W))
def loss(self, X, y, bias_trick=False):
"""Compute the squared error loss for a given set of inputs."""
y_pred = self.predict(X, bias_trick=bias_trick)
y_pred = y_pred.reshape((y_pred.shape[0], 1))
loss = np.sum((y_pred - y) ** 2)
return loss
def back_propagation(self, X, y):
"""Conduct backpropagation to update the weights."""
X = np.atleast_2d(X)
y_pred = self.sigmoid(np.dot(X, self.W)).reshape((X.shape[0], 1))
grad_y_pred = 2 * (y_pred - y)
grad_W = np.dot(X.T, grad_y_pred * y_pred * (1 - y_pred))
# update weights
        self.W -= self.eta * grad_W
def fit(self, X, y, n_epochs=10, method='batch', save_fig=False):
"""Perform gradient descent on a given number of epochs to update the weights."""
# bias trick: add a column of 1 to X
X = np.c_[np.ones((X.shape[0])), X]
self.loss_history.append(self.loss(X, y)) # initial loss
for i_epoch in range(n_epochs):
if method == 'batch':
# perform backprop on the whole training set (batch)
self.back_propagation(X, y)
# weights were updated, compute the loss
loss = self.loss(X, y)
self.loss_history.append(loss)
print(i_epoch, self.loss_history[-1])
else:
# here we update the weight for every data point (SGD)
for (xi, yi) in zip(X, y):
self.back_propagation(xi, yi)
# weights were updated, compute the loss
loss = self.loss(X, y)
self.loss_history.append(loss)
if save_fig:
self.plot_model(i_epoch, save=True, display=False)
def decision_boundary(self, x):
"""Return the decision boundary in 2D."""
return -self.W[0] / self.W[2] - self.W[1] / self.W[2] * x
def plot_model(self, i_epoch=-1, save=False, display=True):
"""Build a figure to vizualise how the model perform."""
xx0, xx1 = np.arange(-3, 3.1, 0.1), np.arange(-3, 4.1, 0.1)
XX0, XX1 = np.meshgrid(xx0, xx1)
# apply the model to the grid
y_an = np.empty(len(XX0.ravel()))
i = 0
for (x0, x1) in zip(XX0.ravel(), XX1.ravel()):
y_an[i] = self.predict(np.array([x0, x1]))
i += 1
y_an = y_an.reshape((len(xx1), len(xx0)))
figure = plt.figure(figsize=(12, 4))
ax1 = plt.subplot(1, 3, 1)
#ax1.set_title(r'$w_0=%.3f$, $w_1=%.3f$, $w_2=%.3f$' % (self.W[0], self.W[1], self.W[2]))
ax1.set_title("current prediction")
ax1.contourf(XX0, XX1, y_an, alpha=.5)
ax1.scatter(X[:, 0], X[:, 1], c=y.squeeze(), edgecolors='gray')
ax1.set_xlim(-3, 3)
ax1.set_ylim(-3, 4)
print(ax1.get_xlim())
x = np.array(ax1.get_xlim())
ax1.plot(x, self.decision_boundary(x), 'k-', linewidth=2)
ax2 = plt.subplot(1, 3, 2)
x = np.arange(3) # the label locations
rects1 = ax2.bar(x, [self.W[0, 0], self.W[1, 0], self.W[2, 0]])
ax2.set_title('model parameters')
ax2.set_xticks(x)
ax2.set_xticklabels([r'$w_0$', r'$w_1$', r'$w_2$'])
ax2.set_ylim(-1, 2)
ax2.set_yticks([0, 2])
ax2.axhline(xmin=0, xmax=2)
ax3 = plt.subplot(1, 3, 3)
ax3.plot(self.loss_history, c='lightgray', lw=2)
if i_epoch < 0:
i_epoch = len(self.loss_history) - 1
ax3.plot(i_epoch, self.loss_history[i_epoch], 'o')
ax3.set_title('loss evolution')
ax3.set_yticks([])
plt.subplots_adjust(left=0.05, right=0.98)
if save:
plt.savefig('an_%02d.png' % i_epoch)
if display:
plt.show()
plt.close()
```
### Train our model on the data set
Create two blobs with $n=10000$ data points.
Instantiate the model with $\eta$=0.1 and a random seed of 2.
Train the model using batch gradient descent for 100 epochs.
```
X, y = make_blobs(n_samples=10000, n_features=2, random_state=42, centers=[[-0.5, -1], [1, 1]])
y = y.reshape((y.shape[0], 1))
an1 = SingleLayerNeuralNetwork(eta=0.1, rand_seed=2)
print('* init model params: {}'.format(an1.W.tolist()))
print(an1.loss(X, y, bias_trick=True))
an1.fit(X, y, n_epochs=100, method='batch', save_fig=False)
print('* new model params: {}'.format(an1.W.tolist()))
```
Now that we have trained our model, plot the results
```
an1.plot_model()
```
Now try to train another network using SGD. Use only 1 epoch since with SGD, we are updating the weights with every training point (so $n$ times per epoch).
```
an2 = SingleLayerNeuralNetwork(eta=0.1, rand_seed=2)
print('* init model params: {}'.format(an2.W.tolist()))
an2.fit(X, y, n_epochs=1, method='SGD', save_fig=False)
print('* new model params: {}'.format(an2.W.tolist()))
```
Plot the difference in loss evolution between batch and stochastic gradient descent
```
plt.plot(an1.loss_history[:], label='batch GD')
plt.plot(an2.loss_history[::100], label='stochastic GD')
#plt.ylim(0, 2000)
plt.legend()
plt.show()
an2.plot_model()
```
## Logistic regression
Our single layer network using the logistic function for activation is very similar to the logistic regression we saw in a previous tutorial. We can easily compare our result with the logistic regression using `sklearn` toolbox.
```
from sklearn.linear_model import LogisticRegression
X, y = make_blobs(n_samples=1000, n_features=2, random_state=42, centers=[[-0.5, -1], [1, 1]])
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X, y)
print(log_reg.coef_)
print(log_reg.intercept_)
x0, x1 = np.meshgrid(
np.linspace(-3, 3.1, 62).reshape(-1, 1),
np.linspace(-3, 4.1, 72).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = log_reg.predict_proba(X_new)
zz = y_proba[:, 1].reshape(x0.shape)
plt.figure(figsize=(4, 4))
contour = plt.contourf(x0, x1, zz, alpha=0.5)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='gray')
# decision boundary
x_bounds = np.array([-3, 3])
boundary = -(log_reg.coef_[0][0] * x_bounds + log_reg.intercept_[0]) / log_reg.coef_[0][1]
plt.plot(x_bounds, boundary, "k-", linewidth=3)
plt.xlim(-3, 3)
plt.ylim(-3, 4)
plt.show()
```
| github_jupyter |
# From raw *.ome.tif file to kinetic properties for immobile particles
This notebook will run ...
* picasso_addon.localize.main()
* picasso_addon.autopick.main()
* spt.immobile_props.main()
... in a single run to get from the raw data to the fully evaluated data in a single stroke. We therefore:
1. Define the full paths to the *ome.tif files
2. Set the execution parameters
3. Connect or start a local dask parallel computing cluster
4. Run all sub-module main() functions for all defined datasets
As a result files with extension *_locs.hdf5, *_render.hdf5, *_autopick.yaml, *_tprops.hdf5 will be created in the same folder as the *.ome.tif file.
```
import os
import traceback
import importlib
from dask.distributed import Client
import multiprocessing as mp
import picasso.io as io
import picasso_addon.localize as localize
import picasso_addon.autopick as autopick
import spt.immobile_props as improps
importlib.reload(localize)
importlib.reload(autopick)
importlib.reload(improps)
```
### 1. Define the full paths to the *ome.tif files
```
dir_names=[]
dir_names.extend([r'C:\Data\p06.SP-tracking\20-03-11_pseries_fix_B21_rep\id140_B_exp200_p114uW_T21_1\test'])
file_names=[]
file_names.extend(['id140_B_exp200_p114uW_T21_1_MMStack_Pos0.ome.tif'])
```
### 2. Set the execution parameters
```
### Valid for all evaluations
params_all={'undrift':False,
'min_n_locs':5,
'filter':'fix',
}
### Exceptions
params_special={}
```
All possible parameters for ...
* picasso_addon.localize.main()
* picasso_addon.autopick.main()
* spt.immobile_props.main()
... can be given. Please run `help(localize.main)` or `help(autopick.main)` or `help(improps.main)` or readthedocs. If not stated otherwise standard values are used (indicated in brackets).
```
help(localize.main)
```
### 3. Connect or start a local dask parallel computing cluster
This is only necessary if you want to use parallel computing for the spt.immobile_props.main() execution (the standard setting). If not, set `params_all={'parallel':False}`.
```
try:
client = Client('localhost:8787')
print('Connecting to existing cluster...')
except OSError:
improps.cluster_setup_howto()
```
If we execute the prompt (see below) a local cluster is started, and we only have to execute the cell above to reconnect to it the next time. If you try to create a new cluster under the same address this will throw an error!
```
Client(n_workers=max(1,int(0.8 * mp.cpu_count())),
processes=True,
threads_per_worker=1,
scheduler_port=8787,
dashboard_address=":1234")
```
### 4. Run all sub-module main() functions for all defined datasets
```
failed_path=[]
for i in range(0,len(file_names)):
### Create path
path=os.path.join(dir_names[i],file_names[i])
### Set paramters for each run
params=params_all.copy()
for key, value in params_special.items():
params[key]=value[i]
### Run main function
try:
### Load movie
movie,info=io.load_movie(path)
### Localize and undrift
out=localize.main(movie,info,path,**params)
info=info+[out[0][0]]+[out[0][1]] # Update info to used params
path=out[-1] # Update path
### Autopick
print()
locs=out[1]
out=autopick.main(locs,info,path,**params)
info=info+[out[0]] # Update info to used params
path=out[-1] # Update path
### Immobile kinetics analysis
print()
locs=out[1]
out=improps.main(locs,info,path,**params)
except Exception:
traceback.print_exc()
failed_path.extend([path])
print()
print('Failed attempts: %i'%(len(failed_path)))
```
| github_jupyter |
# What are Tensors?
As a warm-up, here is a two-layer network implemented in plain numpy, with the forward and backward passes coded by hand; the PyTorch versions below follow exactly the same structure.
```
# -*- coding: utf-8 -*-
import numpy as np
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)
# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)
learning_rate = 1e-6
for t in range(500):
# Forward pass: compute predicted y
h = x.dot(w1)
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
# Compute and print loss
loss = np.square(y_pred - y).sum()
print(t, loss)
# Backprop to compute gradients of w1 and w2 with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_w2 = h_relu.T.dot(grad_y_pred)
grad_h_relu = grad_y_pred.dot(w2.T)
grad_h = grad_h_relu.copy()
grad_h[h < 0] = 0
grad_w1 = x.T.dot(grad_h)
# Update weights
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
```
# PyTorch Tensors
Clearly modern deep neural networks are in need of more than what our beloved numpy can offer.
Here we introduce the most fundamental PyTorch concept: the *Tensor*. A PyTorch Tensor is conceptually identical to a numpy array: a Tensor is an n-dimensional array, and PyTorch provides many functions for operating on these Tensors. Like numpy arrays, PyTorch Tensors do not know anything about deep learning or computational graphs or gradients; they are a generic tool for scientific computing.
However, unlike numpy, PyTorch Tensors can utilize GPUs to accelerate their numeric computations. To run a PyTorch Tensor on a GPU, you simply need to cast it to a new datatype.
Here we use PyTorch Tensors to fit a two-layer network to random data. Like the numpy example above we need to manually implement the forward and backward passes through the network:
```
import torch
dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random input and output data
x = torch.randn(N, D_in).type(dtype)
y = torch.randn(N, D_out).type(dtype)
# Randomly initialize weights
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)
learning_rate = 1e-6
for t in range(500):
# Forward pass: compute predicted y
h = x.mm(w1)
h_relu = h.clamp(min=0)
y_pred = h_relu.mm(w2)
# Compute and print loss
loss = (y_pred - y).pow(2).sum()
print(t, loss)
# Backprop to compute gradients of w1 and w2 with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_w2 = h_relu.t().mm(grad_y_pred)
grad_h_relu = grad_y_pred.mm(w2.t())
grad_h = grad_h_relu.clone()
grad_h[h < 0] = 0
grad_w1 = x.t().mm(grad_h)
# Update weights using gradient descent
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
```
# Autograd
PyTorch Variables and autograd. The autograd package provides cool functionality, as the forward pass of your network defines the computational graph; nodes in the graph will be Tensors, and edges will be functions that produce output Tensors from input Tensors. Backprop through this graph then allows us to easily compute gradients.
Here we wrap the PyTorch Tensor in a Variable object, where a Variable represents a node in the computational graph. If x is a Variable then x.data is a Tensor and x.grad is another Variable holding the gradient of x w.r.t. some scalar value.
PyTorch Variables have the same API as PyTorch Tensors: any operation that you can do with a Tensor also works fine with Variables, the only difference being that a Variable defines a computational graph, allowing us to automatically compute gradients.
```
# Use of Variables and Autograd in a 2-layer network with no need to manually implement backprop!
import torch
from torch.autograd import Variable
dtype = torch.FloatTensor
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold input and outputs and wrap them in Variables.
x = Variable(torch.randn(N, D_in).type(dtype), requires_grad=False) # requires_grad=False means no need to compute gradients
y = Variable(torch.randn(N, D_out).type(dtype), requires_grad=False)
# Create random Tensors to hold weights and wrap them in Variables.
# requires_grad=True here to compute gradients w.r.t Variables during a backprop pass.
w1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad=True)
w2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad=True)
learning_rate = 1e-6
for t in range(500):
# Forward pass: compute predicted y using operations on Variables; these
# are exactly the same operations we used to compute the forward pass using
# Tensors, but we do not need to keep references to intermediate values since
# we are not implementing the backward pass by hand.
y_pred = x.mm(w1).clamp(min=0).mm(w2)
# Compute and print loss using operations on Variables.
# Now loss is a Variable of shape (1,) and loss.data is a Tensor of shape
# (1,); loss.data[0] is a scalar value holding the loss.
loss = (y_pred - y).pow(2).sum()
print(t, loss.data[0])
# Use autograd to compute the backward pass. This call will compute the
# gradient of loss with respect to all Variables with requires_grad=True.
# After this call w1.grad and w2.grad will be Variables holding the gradient
# of the loss with respect to w1 and w2 respectively.
loss.backward()
# Update weights using gradient descent; w1.data and w2.data are Tensors,
# w1.grad and w2.grad are Variables and w1.grad.data and w2.grad.data are
# Tensors.
w1.data -= learning_rate * w1.grad.data
w2.data -= learning_rate * w2.grad.data
# Manually zero the gradients after updating weights
w1.grad.data.zero_()
w2.grad.data.zero_()
```
# PyTorch: Defining new autograd functions
Under the hood, each primitive autograd operator is really two functions that operate on Tensors. The forward function computes output Tensors from input Tensors. The backward function receives the gradient of the output Tensors with respect to some scalar value, and computes the gradient of the input Tensors with respect to that same scalar value.
In PyTorch we can easily define our own autograd operator by defining a subclass of torch.autograd.Function and implementing the forward and backward functions. We can then use our new autograd operator by constructing an instance and calling it like a function, passing Variables containing input data.
In this example we define our own custom autograd function for performing the ReLU nonlinearity, and use it to implement our two-layer network:
```
# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable
class MyReLU(torch.autograd.Function):
"""
We can implement our own custom autograd Functions by subclassing
torch.autograd.Function and implementing the forward and backward passes
which operate on Tensors.
"""
def forward(self, input):
"""
In the forward pass we receive a Tensor containing the input and return a
Tensor containing the output. You can cache arbitrary Tensors for use in the
backward pass using the save_for_backward method.
"""
self.save_for_backward(input)
return input.clamp(min=0)
def backward(self, grad_output):
"""
In the backward pass we receive a Tensor containing the gradient of the loss
with respect to the output, and we need to compute the gradient of the loss
with respect to the input.
"""
input, = self.saved_tensors
grad_input = grad_output.clone()
grad_input[input < 0] = 0
return grad_input
dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold input and outputs, and wrap them in Variables.
x = Variable(torch.randn(N, D_in).type(dtype), requires_grad=False)
y = Variable(torch.randn(N, D_out).type(dtype), requires_grad=False)
# Create random Tensors for weights, and wrap them in Variables.
w1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad=True)
w2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad=True)
learning_rate = 1e-6
for t in range(500):
# Construct an instance of our MyReLU class to use in our network
relu = MyReLU()
# Forward pass: compute predicted y using operations on Variables; we compute
# ReLU using our custom autograd operation.
y_pred = relu(x.mm(w1)).mm(w2)
# Compute and print loss
loss = (y_pred - y).pow(2).sum()
print(t, loss.data[0])
# Use autograd to compute the backward pass.
loss.backward()
# Update weights using gradient descent
w1.data -= learning_rate * w1.grad.data
w2.data -= learning_rate * w2.grad.data
# Manually zero the gradients after updating weights
w1.grad.data.zero_()
w2.grad.data.zero_()
```
## What is an nn module
When building neural networks we frequently think of arranging the computation into layers, some of which have learnable parameters which will be optimized during learning.
In TensorFlow, packages like Keras, TensorFlow-Slim, and TFLearn provide higher-level abstractions over raw computational graphs that are useful for building neural networks.
In PyTorch, the nn package serves this same purpose. The nn package defines a set of Modules, which are roughly equivalent to neural network layers. A Module receives input Variables and computes output Variables, but may also hold internal state such as Variables containing learnable parameters. The nn package also defines a set of useful loss functions that are commonly used when training neural networks.
In this example we use the nn package to implement our two-layer network:
```
# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs, and wrap them in Variables.
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Variables for its weight and bias.
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(size_average=False)
learning_rate = 1e-4
for t in range(500):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Variable of input data to the Module and it produces
# a Variable of output data.
y_pred = model(x)
# Compute and print loss. We pass Variables containing the predicted and true
# values of y, and the loss function returns a Variable containing the
# loss.
loss = loss_fn(y_pred, y)
print(t, loss.data[0])
# Zero the gradients before running the backward pass.
model.zero_grad()
# Backward pass: compute gradient of the loss with respect to all the learnable
# parameters of the model. Internally, the parameters of each Module are stored
# in Variables with requires_grad=True, so this call will compute gradients for
# all learnable parameters in the model.
loss.backward()
# Update the weights using gradient descent. Each parameter is a Variable, so
# we can access its data and gradients like we did before.
for param in model.parameters():
param.data -= learning_rate * param.grad.data
```
## PyTorch - optim
With a learning rate of $1e-4$, this time letting the optim package's Adam optimizer update the weights for us.
```
import torch
from torch.autograd import Variable
N, D_in, H, D_out = 64, 1000, 100, 10
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)
model = torch.nn.Sequential( torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out)
)
loss_fxn = torch.nn.MSELoss(size_average=False)
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# We loop
for i in range(500):
y_pred = model(x)
loss = loss_fxn(y_pred, y)
    print(i, loss.data[0])
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
## Custom nn module
For more complex computation, you can define your own module by subclassing nn.Module
```
import torch
from torch.autograd import Variable
class DoubleLayerNet(torch.nn.Module):
def __init__(self, D_in, H, D_out):
# initialize 2 instances of nn.Linear mods
super(DoubleLayerNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
# in this fxn we accept a Var of input data and
# return a Var of output data.
h_relu = self.linear1(x).clamp(min=0)
y_pred = self.linear2(h_relu)
return y_pred
# Next, again as usual, define batch size, input dimensions, hidden dimension and output dimension
N, D_in, H, D_out = 64, 1000, 100, 10
# Create some random tensors to hold both input and output
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)
# Build model by instantiating class defined above
my_model = DoubleLayerNet(D_in, H, D_out)
# Build loss fxn and optimizer
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(my_model.parameters(), lr=1e-4)
# and then we loop
for i in range(500):
# fwd pass, calculate predicted y by passing x to the model
y_pred = my_model(x)
    # calculate and print loss
    loss = criterion(y_pred, y)
    print(i, loss.data[0])
    # Zero gradients, perform a backprop pass and update the weights as it goes along
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
| github_jupyter |
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Basic Symmetric Error Bars
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
x=[0, 1, 2],
y=[6, 10, 2],
error_y=dict(
type='data',
array=[1, 2, 3],
visible=True
)
)
]
py.iplot(data, filename='basic-error-bar')
```
#### Asymmetric Error Bars
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
x=[1, 2, 3, 4],
y=[2, 1, 3, 4],
error_y=dict(
type='data',
symmetric=False,
array=[0.1, 0.2, 0.1, 0.1],
arrayminus=[0.2, 0.4, 1, 0.2]
)
)
]
py.iplot(data, filename='error-bar-asymmetric-array')
```
#### Error Bars as a Percentage of the y Value
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
x=[0, 1, 2],
y=[6, 10, 2],
error_y=dict(
type='percent',
value=50,
visible=True
)
)
]
py.iplot(data, filename='percent-error-bar')
```
#### Asymmetric Error Bars with a Constant Offset
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
x=[1, 2, 3, 4],
y=[2, 1, 3, 4],
error_y=dict(
type='percent',
symmetric=False,
value=15,
valueminus=25
)
)
]
py.iplot(data, filename='error-bar-asymmetric-constant')
```
#### Horizontal Error Bars
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
x=[1, 2, 3, 4],
y=[2, 1, 3, 4],
error_x=dict(
type='percent',
value=10
)
)
]
py.iplot(data, filename='error-bar-horizontal')
```
#### Bar Chart with Error Bars
```
import plotly.plotly as py
import plotly.graph_objs as go
trace1 = go.Bar(
x=['Trial 1', 'Trial 2', 'Trial 3'],
y=[3, 6, 4],
name='Control',
error_y=dict(
type='data',
array=[1, 0.5, 1.5],
visible=True
)
)
trace2 = go.Bar(
x=['Trial 1', 'Trial 2', 'Trial 3'],
y=[4, 7, 3],
name='Experimental',
error_y=dict(
type='data',
array=[0.5, 1, 2],
visible=True
)
)
data = [trace1, trace2]
layout = go.Layout(
barmode='group'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='error-bar-bar')
```
#### Colored and Styled Error Bars
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x_theo = np.linspace(-4, 4, 100)
sincx = np.sinc(x_theo)
x = [-3.8, -3.03, -1.91, -1.46, -0.89, -0.24, -0.0, 0.41, 0.89, 1.01, 1.91, 2.28, 2.79, 3.56]
y = [-0.02, 0.04, -0.01, -0.27, 0.36, 0.75, 1.03, 0.65, 0.28, 0.02, -0.11, 0.16, 0.04, -0.15]
trace1 = go.Scatter(
x=x_theo,
y=sincx,
name='sinc(x)'
)
trace2 = go.Scatter(
x=x,
y=y,
mode='markers',
name='measured',
error_y=dict(
type='constant',
value=0.1,
color='#85144B',
thickness=1.5,
width=3,
),
error_x=dict(
type='constant',
value=0.2,
color='#85144B',
thickness=1.5,
width=3,
),
marker=dict(
color='#85144B',
size=8
)
)
data = [trace1, trace2]
py.iplot(data, filename='error-bar-style')
```
#### Reference
See https://plot.ly/python/reference/#scatter for more information and chart attribute options!
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'error-bars.ipynb', 'python/error-bars/', 'Error Bars | plotly',
'How to add error-bars to charts in Python with Plotly.',
title = 'Error Bars | plotly',
name = 'Error Bars',
thumbnail='thumbnail/error-bar.jpg', language='python',
page_type='example_index', has_thumbnail='true', display_as='statistical', order=1,
ipynb='~notebook_demo/18')
```
| github_jupyter |
Lambda School Data Science
*Unit 2, Sprint 3, Module 3*
---
# Permutation & Boosting
You will use your portfolio project dataset for all assignments this sprint.
## Assignment
Complete these tasks for your project, and document your work.
- [ ] If you haven't completed assignment #1, please do so first.
- [ ] Continue to clean and explore your data. Make exploratory visualizations.
- [ ] Fit a model. Does it beat your baseline?
- [ ] Try xgboost.
- [ ] Get your model's permutation importances.
You should try to complete an initial model today, because the rest of the week, we're making model interpretation visualizations.
But, if you aren't ready to try xgboost and permutation importances with your dataset today, that's okay. You can practice with another dataset instead. You may choose any dataset you've worked with previously.
The data subdirectory includes the Titanic dataset for classification and the NYC apartments dataset for regression. You may want to choose one of these datasets, because example solutions will be available for each.
## Reading
Top recommendations in _**bold italic:**_
#### Permutation Importances
- _**[Kaggle / Dan Becker: Machine Learning Explainability](https://www.kaggle.com/dansbecker/permutation-importance)**_
- [Christoph Molnar: Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/feature-importance.html)
#### (Default) Feature Importances
- [Ando Saabas: Selecting good features, Part 3, Random Forests](https://blog.datadive.net/selecting-good-features-part-iii-random-forests/)
- [Terence Parr, et al: Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html)
#### Gradient Boosting
- [A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning](https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/)
- _**[A Kaggle Master Explains Gradient Boosting](http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/)**_
- [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf) Chapter 8
- [Gradient Boosting Explained](http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html)
- _**[Boosting](https://www.youtube.com/watch?v=GM3CDQfQ4sw) (2.5 minute video)**_
```
%%capture
!pip install category_encoders
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
titanic = sns.load_dataset('titanic')
train, test = train_test_split(titanic, test_size=.2)
features = ['age', 'class', 'deck', 'embarked', 'fare', 'sex']
target = 'survived'
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
X_train.shape, X_test.shape, y_train.shape, y_test.shape
X_train.isnull().sum()
# we're dealing with some null values
# what is our baseline
max(1-y_train.mean(), y_train.mean())
from sklearn.pipeline import Pipeline
import category_encoders as ce
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.impute import SimpleImputer
# create base pipeline
pipeline = Pipeline([
('encoder', ce.OrdinalEncoder()),
('model', XGBClassifier())
])
# fit base pipeline
train_size = .8
cutoff = int(train_size*X_train.shape[0])
small_X_train = X_train[:cutoff]
X_val = X_train[cutoff:]
small_y_train = y_train[:cutoff]
y_val = y_train[cutoff:]
pipeline.fit(small_X_train, small_y_train)
pipeline.score(X_val, y_val)
```
## Baseline model beat the baseline by about 16%!
```
# now lets tune some hyperparameters!
params = {
'model__n_estimators': [50, 70, 90],
'model__max_depth': [3, 5]
}
search = GridSearchCV(pipeline, params, n_jobs=-1)
search.fit(X_train, y_train)
print(f"Best params: \n{search.best_params_}")
print(f"Best score: \n{search.best_score_}")
pipeline = Pipeline([
('encoder', ce.OrdinalEncoder()),
('model', XGBClassifier(n_estimators=90,
max_depth=3))
])
pipeline.fit(X_train, y_train)
pipeline.score(X_test, y_test)
# get model and encoded data seperate for permutation importance eval
model = XGBClassifier(n_estimators=90, max_depth=3)
transformer = Pipeline([
('encoder', ce.OrdinalEncoder()),
('imputer', SimpleImputer())
])
X_train_transformed = transformer.fit_transform(X_train)
X_test_transformed = transformer.transform(X_test)
model.fit(X_train_transformed, y_train)
!pip install eli5
import eli5
from eli5.sklearn import PermutationImportance
permuter = PermutationImportance(
model,
scoring="accuracy",
n_iter=10,
random_state=42
)
permuter.fit(X_test_transformed, y_test)
feature_names = X_train.columns.tolist()
pd.Series(permuter.feature_importances_, feature_names).sort_values()
eli5.show_weights(
permuter,
top=None, # includes all features
feature_names=feature_names
)
# I may be cautious about embarked given the standard error is larger than
# the permutation importance value
```
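For intuition, permutation importance can also be computed by hand: shuffle one column at a time, re-score the model, and record the drop relative to the baseline accuracy. A minimal sketch (the helper name is ours, and it assumes the fitted `model` and the transformed arrays from above):
```
import numpy as np
from sklearn.metrics import accuracy_score

def manual_permutation_importance(model, X, y, n_iter=10, seed=42):
    """Mean drop in accuracy when each column is shuffled in turn."""
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, model.predict(X))
    importances = []
    for col in range(X.shape[1]):
        drops = []
        for _ in range(n_iter):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, col])  # break this column's link with y
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        importances.append(np.mean(drops))
    return importances

# Should roughly agree with the eli5 permuter above:
# pd.Series(manual_permutation_importance(model, X_test_transformed, y_test),
#           feature_names).sort_values()
```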
| github_jupyter |
```
import glob
import time
# Divide up into cars and notcars
images = glob.glob('dataset/**/*.png', recursive=True)
cars = []
notcars = []
for image in images:
if 'non-vehicles' in image:
notcars.append(image)
else:
cars.append(image)
from hog import *
color_space='YCrCb'
spatial_size=(32, 32)
hist_bins=32
orient=9
pix_per_cell=8
cell_per_block=2
hog_channel='ALL'
spatial_feat=True
hist_feat=True
hog_feat=True
t=time.time()
car_features = extract_features(cars, color_space=color_space, spatial_size=spatial_size,
hist_bins=hist_bins, orient=orient, pix_per_cell=pix_per_cell,
cell_per_block=cell_per_block, hog_channel=hog_channel,
spatial_feat=spatial_feat, hist_feat=hist_feat, hog_feat=hog_feat)
notcar_features = extract_features(notcars, color_space=color_space, spatial_size=spatial_size,
hist_bins=hist_bins, orient=orient, pix_per_cell=pix_per_cell,
cell_per_block=cell_per_block, hog_channel=hog_channel,
spatial_feat=spatial_feat, hist_feat=hist_feat, hog_feat=hog_feat)
t2 = time.time()
print(round(t2-t, 2), 'Seconds to extract features...')
# Create an array stack of feature vectors
X = np.vstack((car_features, notcar_features)).astype(np.float64)
# Define the labels vector
y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))
from sklearn.preprocessing import StandardScaler
# Fit a per-column scaler
X_scaler = StandardScaler().fit(X)
# Apply the scaler to X
scaled_X = X_scaler.transform(X)
# Split up data into randomized training and test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(scaled_X, y, test_size=0.2, stratify =y)
print('Feature vector length:', len(X_train[0]))
from sklearn.svm import LinearSVC
# Use a linear SVC
svc = LinearSVC()
# Check the training time for the SVC
t=time.time()
svc.fit(X_train, y_train)
t2 = time.time()
print(round(t2-t, 2), 'Seconds to train SVC...')
# Check the score of the SVC
print('Test Accuracy of SVC = ', round(svc.score(X_test, y_test), 4))
# Check the prediction time for a single sample
t=time.time()
n_predict = 10
print('My SVC predicts: ', svc.predict(X_test[0:n_predict]))
print('For these',n_predict, 'labels: ', y_test[0:n_predict])
t2 = time.time()
print(round(t2-t, 5), 'Seconds to predict', n_predict,'labels with SVC')
from hog import *
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import pickle
import cv2
%matplotlib inline
img = mpimg.imread('test_images/test4.jpg')
ystart = 400
ystop = 656
scale = 1.5
out_img = find_cars(img, ystart, ystop, scale, svc, X_scaler, color_space, orient, pix_per_cell, cell_per_block, hog_channel, spatial_size, hist_bins)
plt.imshow(out_img)
import pickle
data={
'svc': svc,
'X_scaler': X_scaler,
'color_space': color_space,
'orient': orient,
'pix_per_cell': pix_per_cell,
'cell_per_block': cell_per_block,
'spatial_size' : spatial_size,
'hist_bins': hist_bins,
'hog_channel': hog_channel
}
with open('svc_model.p', 'wb') as pFile:
pickle.dump(data, pFile)
```
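For later use, for example in a video pipeline, the pickled model can be reloaded and applied frame by frame. A minimal sketch, reusing the find_cars signature from above (the test image name is just a placeholder):
```
import pickle
import matplotlib.image as mpimg
from hog import find_cars

# Reload the trained classifier and the feature-extraction parameters saved above
with open('svc_model.p', 'rb') as pFile:
    data = pickle.load(pFile)

img = mpimg.imread('test_images/test1.jpg')  # placeholder frame
out_img = find_cars(img, 400, 656, 1.5, data['svc'], data['X_scaler'],
                    data['color_space'], data['orient'], data['pix_per_cell'],
                    data['cell_per_block'], data['hog_channel'],
                    data['spatial_size'], data['hist_bins'])
```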
| github_jupyter |
<small><small><i>
All the IPython Notebooks in **[Python Seaborn Module](https://github.com/milaan9/12_Python_Seaborn_Module)** lecture series by **[Dr. Milaan Parmar](https://www.linkedin.com/in/milaanparmar/)** are available @ **[GitHub](https://github.com/milaan9)**
</i></small></small>
<a href="https://colab.research.google.com/github/milaan9/12_Python_Seaborn_Module/blob/main/017_Seaborn_FacetGrid_Plot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# FacetGrid
Welcome to another lecture on *Seaborn*! Our journey began with assigning *style* and *color* to our plots as per our requirement. Then we moved on to *visualize distribution of a dataset*, and *Linear relationships*, and further we dived into topics covering *plots for Categorical data*. Every now and then, we've also roughly touched customization aspects using underlying Matplotlib code. That covers all the types of plots offered by Seaborn, and leaves us only with widening the scope of usage of all the plots that we have learnt till now.
Our discussion in upcoming lectures is majorly going to focus on using the core of Seaborn, based on which *Seaborn* allows us to plot these amazing figures that we had been detailing previously. This of course isn't going to be a brand new topic, because every now & then I have used these in previous lectures, but hereon we're going to specifically deal with each one of those.
To introduce our new topic, i.e. **<span style="color:red">Grids</span>**, we shall at first list the options available. Majorly, there are just two aspects to our discussion on *Grids* that includes:
- **<span style="color:red">FacetGrid</span>**
- **<span style="color:red">PairGrid</span>**
Additionally, we also have a companion function for *PairGrid* to enhance execution speed of *PairGrid*, i.e.
- **<span style="color:red">Pairplot</span>**
Our discourse shall detail each one of these topics at length for better understanding. As we have already covered the statistical inference of each type of plot, our emphasis shall mostly be on scaling and the parameter variety of known plots on these grids. So let us commence our journey with **[FacetGrid](http://seaborn.pydata.org/generated/seaborn.FacetGrid.html?highlight=facetgrid#seaborn.FacetGrid)** in this lecture.
## FacetGrid
The term **Facet** here refers to *a dimension* or say, an *aspect* or a feature of a *multi-dimensional dataset*. This analysis is extremely useful when working with a multi-variate dataset which has a varied blend of datatypes, especially in the *Data Science* & *Machine Learning* domain, where generally you would be dealing with huge datasets. If you're a *working professional*, you know what I am talking about. And if you're a *fresher* or a *student*, just to give you an idea: in this era of *Big Data*, an average *CSV file* (which is generally the most common form), or even an RDBMS, would vary in size from gigabytes to terabytes of data. If you are dealing with *Image/Video/Audio datasets*, then you may easily expect those to be in the *hundreds of gigabytes*.
On the other hand, the term **Grid** refers to any *framework with spaced bars that are parallel to or cross each other, to form a series of squares or rectangles*. Statistically, these *Grids* are also used to represent and understand an entire *population* or just a *sample space* out of it. In general, these are pretty powerful tool for presentation, to describe our dataset and to study the *interrelationship*, or *correlation* between *each facet* of any *environment*.
Subplot grid for plotting conditional relationships.
The FacetGrid is an object that links a Pandas DataFrame to a matplotlib figure with a particular structure.
In particular, FacetGrid is used to draw plots with multiple Axes where each Axes shows the same relationship conditioned on different levels of some variable. It’s possible to condition on up to three variables by assigning variables to the rows and columns of the grid and using different colors for the plot elements.
The general approach to plotting here is called “small multiples”, where the same kind of plot is repeated multiple times, and the specific use of small multiples to display the same relationship conditioned on one or more other variables is often called a “trellis plot”.
The basic workflow is to initialize the FacetGrid object with the dataset and the variables that are used to structure the grid. Then one or more plotting functions can be applied to each subset by calling **`FacetGrid.map()`** or **`FacetGrid.map_dataframe()`**. Finally, the plot can be tweaked with other methods to do things like change the axis labels, use different ticks, or add a legend. See the detailed code examples below for more information.
To kill our curiosity, let us plot a simple **<span style="color:red">FacetGrid</span>** before continuing on with our discussion. And to do that, we shall once again quickly import our package dependencies and set the aesthetics for future use with built-in datasets.
```
# Importing intrinsic libraries:
import numpy as np
import pandas as pd
np.random.seed(101)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(style="whitegrid", palette="rocket")
import warnings
warnings.filterwarnings("ignore")
# Let us also get tableau colors we defined earlier:
tableau_20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120),
(44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150),
(148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148),
(227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199),
(188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]
# Scaling above RGB values to [0, 1] range, which is Matplotlib acceptable format:
for i in range(len(tableau_20)):
    r, g, b = tableau_20[i]
    tableau_20[i] = (r / 255., g / 255., b / 255.)
# Loading built-in Tips dataset:
tips = sns.load_dataset("tips")
tips.head()
# Initialize a 2x2 grid of facets using the tips dataset:
sns.set(style="ticks", color_codes=True)
sns.FacetGrid(tips, row='time', col='smoker')
# Draw a univariate plot on each facet:
x = sns.FacetGrid(tips, col='time',row='smoker')
x = x.map(plt.hist,"total_bill")
bins = np.arange(0,65,5)
x = sns.FacetGrid(tips, col="time", row="smoker")
x =x.map(plt.hist, "total_bill", bins=bins, color="g")
# Plot a bivariate function on each facet:
x = sns.FacetGrid(tips, col="time", row="smoker")
x = x.map(plt.scatter, "total_bill", "tip", edgecolor="w")
# Assign one of the variables to the color of the plot elements:
x = sns.FacetGrid(tips, col="time", hue="smoker")
x = x.map(plt.scatter,"total_bill","tip",edgecolor = "w")
x =x.add_legend()
# Plotting a basic FacetGrid with Scatterplot representation:
ax = sns.FacetGrid(tips, col="sex", hue="smoker", size=5)
ax.map(plt.scatter, "total_bill", "tip", alpha=.6)
ax.add_legend()
```
This is a combined scatter representation of the Tips dataset that we have seen earlier as well, where the Total tip generated against the Total Bill amount is drawn in accordance with Gender and Smoking practice. With this we can conclude how **FacetGrid** helps us visualize the distribution of a variable or the relationship between multiple variables separately within subsets of our dataset. Important to note here is that a Seaborn FacetGrid can only support up to **3-dimensional figures**, using the `row`, `column` and `hue` dimensions of the grid for *Categorical* and *Discrete* variables within our dataset.
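To see all three grid dimensions in action at once, here is a quick sketch on the same Tips dataset (the exact styling may vary with your Seaborn version):
```
# Condition on three variables at once: row, col and hue
x = sns.FacetGrid(tips, row="smoker", col="time", hue="sex")
x = x.map(plt.scatter, "total_bill", "tip", edgecolor="w")
x = x.add_legend()
```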
Let us now have a look at the *parameters* offered or supported by Seaborn for a **FacetGrid**:
**`seaborn.FacetGrid(data, row=None, col=None, hue=None, col_wrap=None, sharex=True, sharey=True, size=3, aspect=1, palette=None, row_order=None, col_order=None, hue_order=None, hue_kws=None, dropna=True, legend_out=True, despine=True, margin_titles=False, xlim=None, ylim=None, subplot_kws=None, gridspec_kws=None)`**
There seems to be few new parameters out here for us, so let us one-by-one understand their scope before we start experimenting with those on our plots:
- We are well acquainted with mandatory **`data`**, **`row`**, **`col`** and **`hue`** parameters.
- Next is **`col_wrap`** that **wraps the column variable at the given width**, so that the *column facets* can span multiple rows (see the sketch after this list).
- **`sharex`**, if declared **`False`**, lets us **draft a dedicated X-axis** for each sub-plot. The same concept holds good for **`sharey`** and the Y-axis.
- **`size`** helps us determine the **height of each facet**, and hence the size of our grid-frame.
- We may also declare the **`hue_kws`** parameter that lets us **control other aesthetics** of our plot.
- **`dropna`** drops rows with **NULL values** from the selected features; and **`legend_out`** places the Legend either inside or outside our plot, as we've already seen.
- **`margin_titles`** draws the **row variable titles** at the margin of the grid; and **`xlim`** & **`ylim`** additionally offer Matplotlib-style limits for each of our axes on the grid.
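As a quick sketch of `col_wrap` (which we won't revisit later in this lecture), here the four `day` facets are wrapped onto two rows instead of one long row:
```
# Wrap the 'day' column facets after every 2 columns:
x = sns.FacetGrid(tips, col="day", col_wrap=2, size=3)
x = x.map(plt.hist, "total_bill", bins=bins)
```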
That pretty much seems to cover *intrinsic parameters* so let us now try to use them one-by-one with slight modifications:
Let us begin by pulling the *Legend inside* our FacetGrid and *creating a Header* for our grid:
```
ax = sns.FacetGrid(tips, col="sex", hue="smoker", size=5, legend_out=False)
ax.map(plt.scatter, "total_bill", "tip", alpha=.6)
ax.add_legend()
plt.suptitle('Tip Collection based on Gender and Smoking', fontsize=11)
```
So declaring **`legend_out`** as **`False`** and creating a **Superhead title** using *Matplotlib* seems to be working great on our Grid. Customization of the *Header size* gives us an add-on capability as well. Right now, we are going by the default **`palette`** for **marker colors**, which can be customized by setting it to a different one. Let us try other parameters as well:
Actually, before we jump further into utilization of other parameters, let me quickly take you behind the curtain of this plot. As visible, we assigned **`ax`** as a variable to our **FacetGrid** for creating a visualization figure, and then plotted a **Scatterplot** on top of it, before decorating further with a *Legend* and a *Super Title*. So when we initialized the assignment of **`ax`**, the grid actually gets created using backend *Matplotlib figure and axes*, though it doesn't plot anything on top of it. It is when we call the Scatterplot on our sample data that, at the backend, the **`FacetGrid.map()`** function maps this grid to our Scatterplot. We intended to draw a linear relation plot, and thus entered multiple variable names, i.e. **`Total Bill`** and the associated **`Tip`**, to form *facets*, or dimensions, of our grid.
```
# Change the size and aspect ratio of each facet:
x = sns.FacetGrid(tips, col="day", size=5, aspect=.5)
x =x.map(plt.hist, "total_bill", bins=bins)
# Specify the order for plot elements:
g = sns.FacetGrid(tips, col="smoker", col_order=["Yes", "No"])
g = g.map(plt.hist, "total_bill", bins=bins, color="m")
# Use a different color palette:
kws = dict(s=50, linewidth=.5, edgecolor="w")
g =sns.FacetGrid(tips, col="sex", hue="time", palette="Set1",\
hue_order=["Dinner", "Lunch"])
g = g.map(plt.scatter, "total_bill", "tip", **kws)
g.add_legend()
# Use a dictionary mapping hue levels to colors:
pal = dict(Lunch="seagreen", Dinner="gray")
g = sns.FacetGrid(tips, col="sex", hue="time", palette=pal,\
hue_order=["Dinner", "Lunch"])
g = g.map(plt.scatter, "total_bill", "tip", **kws)
g.add_legend()
# FacetGrid with boxplot
x = sns.FacetGrid(tips,col= 'day')
x = x.map(sns.boxplot,"total_bill","time")
```
Also important to note is the use of the **[matplotlib.pyplot.gca()](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.gca.html)** function, if required to *set the current axes* on our Grid. This shall fetch the current Axes instance on our current figure matching the given keyword arguments or params, and if unavailable, it shall even create one.
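A minimal sketch of that idea: a custom function mapped over the grid can call `plt.gca()` to grab the Axes of the facet currently being drawn, and annotate it:
```
# Each facet becomes the "current" Axes while it is being mapped:
def annotate_mean(x, color=None, label=None):
    ax = plt.gca()                        # fetch the Axes of the facet being drawn
    ax.axvline(x.mean(), ls='--', c='k')  # mark the mean of the mapped variable
x = sns.FacetGrid(tips, col="time")
x = x.map(plt.hist, "total_bill")
x = x.map(annotate_mean, "total_bill")
```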
```
# Let us create a dummy DataFrame:
football = pd.DataFrame({
"Wins": [76, 64, 38, 78, 63, 45, 32, 46, 13, 40, 59, 80],
"Loss": [55, 67, 70, 56, 59, 69, 72, 24, 45, 21, 58, 22],
"Team": ["Arsenal"] * 4 + ["Liverpool"] * 4 + ["Chelsea"] * 4,
"Year": [2015, 2016, 2017, 2018] * 3})
```
Before I begin illustration using this DataFrame, on a lighter note, I would add a disclosure that this is a dummy dataset and holds no resemblance whatsoever to actual records of respective Soccer clubs. So if you're one among those die-hard fans of any of these clubs, kindly excuse me if the numbers don't tally, as they are all fabricated.
Here, **football** is kind of a *Time-series Pandas DataFrame* that in its entirety reflects 4 features, where the **`Wins`** and **`Loss`** variables represent the quarterly scorecard of three soccer **`Teams`** for the last four **`Years`**, from 2015 to 2018. Let us check what this DataFrame looks like:
```
football
```
This looks pretty good for our purpose, so now let us initialize our FacetGrid on top of it and try to obtain a time-indexed view with further plotting. In a production environment, to keep our solution scalable, this is generally done by defining a function for data manipulation, so we shall try that in this example:
```
# Defining a customizable function to be precise with our requirements & shall discuss it a little later:
# We shall be using a new type of plot here that I shall discuss in detail later on.
def football_plot(data, color):
    sns.heatmap(data[["Wins", "Loss"]])
# 'margin_titles' won't necessarily guarantee desired results so better to be cautious:
ax = sns.FacetGrid(football, col="Team", size=5, margin_titles=True)
ax.map_dataframe(football_plot)
ax = sns.FacetGrid(football, col="Team", size=5)
ax.map(sns.kdeplot, "Wins", "Year", lw=2)
```
As visible, **Heatmap** plots rectangular boxes for data points as a color-encoded matrix, and this is a topic we shall be discussing in detail in another lecture; but for now, I just wanted you to have a preview of it, and hence used it on top of our **FacetGrid**. Another good thing to know with *FacetGrid* is the **gridspec** module, which allows Matplotlib params to be passed for drawing attention to a particular facet by increasing its size. To better understand, let us try to use this module now:
```
# Loading built-in Titanic Dataset:
titanic = sns.load_dataset("titanic")
# Assigning reformed `deck` column:
titanic = titanic.assign(deck=titanic.deck.astype(object)).sort_values("deck")
# Creating Grid and Plot:
ax = sns.FacetGrid(titanic, col="class", sharex=False, size=7,
gridspec_kws={"width_ratios": [3.5, 2, 2]})
ax.map(sns.boxplot, "deck", "age")
ax.set_titles(fontweight='bold', size=17)
```
Breaking it down, at first we import our built-in Titanic dataset, and then assign a new column, i.e. **`deck`**, using the Pandas **`.assign()`** function. Here we declare this new column as a component of the pre-existing **`deck`** column from the Titanic dataset, but as a sorted object. Then we create our *FacetGrid*, mentioning the DataFrame and the column on which the grids get segregated, with an unshared X-axis (**`sharex=False`**) but a shared Y-axis; for the **`chosen deck`** against the **`Age`** of passengers. Next in action are our **grid keyword specifications**, where we decide the *width ratio* of the plot that shall be passed on to these grids. Finally, we have our **Box Plot** representing values of the **`Age`** feature across respective decks.
Now let us try to use different axes with same size for multivariate plotting on Tips dataset:
```
# Loading built-in Tips dataset:
tips = sns.load_dataset("tips")
# Mapping a Scatterplot to our FacetGrid:
ax = sns.FacetGrid(tips, col="smoker", row="sex", size=3.5)
ax = (ax.map(plt.scatter, "total_bill", "tip", color=tableau_20[6]).set_axis_labels("Total Bill Generated (USD)", "Tip Amount"))
# Increasing size for subplot Titles & making it appear Bolder:
ax.set_titles(fontweight='bold', size=11)
```
**Scatterplot** dealing with data that has multiple variables is no new science for us, so instead let me highlight what **`.map()`** does for us. This function actually allows us to project our figure axes, in accordance with which our Scatterplot spreads the feature datapoints across the grids, depending upon the segregators. Here we have **`sex`** and **`smoker`** as our segregators (when I use the general term "segregator", it just refers to the columns on which we decide to determine the layout). This comes in really handy as we can pass *Matplotlib parameters* for further customization of our plot. At the end, when we add **`.set_axis_labels()`** it gets easy for us to label our axes, but please note that this method shall work for you only when you're dealing with grids, hence you didn't observe me adopting this function while detailing various other plots.
- Let us now talk about the **`football_plot`** function we defined earlier with **football** DataFrame. The only reason I didn't speak of it then was because I wanted you to go through a few more parameter implementation before getting into this. There are **3 important rules for defining such functions** that are supported by **[FacetGrid.map](http://xarray.pydata.org/en/stable/generated/xarray.plot.FacetGrid.map.html)**:
- They must take array-like inputs as positional arguments, with the first argument corresponding to the **`X-Axis`**, and the second argument corresponding to the **`Y-Axis`**.
- They must also accept two keyword arguments: **`color`** and **`label`**. If you want to use a **`hue`** variable, then these should get passed to the underlying plotting function (as a side note: you may just catch **`**kwargs`** and not do anything with them, if they're not relevant to the specific plot you're making).
- Lastly, when called, they must draw a plot on the "currently active" matplotlib Axes.
- Important to note is that there may be cases where your function draws a plot that looks correct without taking `x`, `y` positional inputs, and then it is better to just call the plot, like: **`ax.set_axis_labels("Column_1", "Column_2")`** after you use **`.map()`**, which should rename your axes properly. Alternatively, you may also want to do something like `ax.set(xticklabels=)` to get more meaningful ticks.
- Well, I am also quite stoked to mention another important function (though not that commonly used), that is **[`FacetGrid.map_dataframe()`](http://nullege.com/codes/search/axisgrid.FacetGrid.map_dataframe)**. The rules here are similar to **`FacetGrid.map`**, but the function you pass must accept a DataFrame input in a parameter called `data`, and instead of taking *array-like positional* inputs it takes *strings* that correspond to variables in that dataframe. Then on each iteration through the *facets*, the function will be called with the *Input dataframe*, masked to just the values for that combination of **`row`**, **`col`**, and **`hue`** levels.
Another important thing to note with both the above-mentioned functions is that the **`return`** value is ignored, so you don't really have to worry about it. Just for illustration purposes, let us consider drafting a function that just *draws a horizontal line* in each **`facet`** at **`y=2`** and ignores all the *Input data*:
```
# That is all you require in your function:
def plot_func(x, y, color=None, label=None):
    plt.axhline(y=2)  # draw on the currently active Axes, ignoring the inputs
```
I know this function concept might look a little hazy at the moment, but once you have covered more on dates and matplotlib syntax in particular, the picture shall get much clearer for you.
Let us look at one more example of **`FacetGrid()`** and this time let us again create a synthetic DataFrame for this demonstration:
```
# Creating synthetic Data (Don't focus on how it's getting created):
units = np.linspace(0, 50)
A = [1., 18., 40., 100.]
df = []
for i in A:
    V1 = np.sin(i * units)
    V2 = np.cos(i * units)
    df.append(pd.DataFrame({"units": units, "V_1": V1, "V_2": V2, "A": i}))
sample = pd.concat(df, axis=0)
# Previewing DataFrame:
sample.head(10)
sample.describe()
# Melting our sample DataFrame:
sample_melt = sample.melt(id_vars=['A', 'units'], value_vars=['V_1', 'V_2'])
# Creating plot:
ax = sns.FacetGrid(sample_melt, col='A', hue='A', palette="icefire", row='variable', sharey='row', margin_titles=True)
ax.map(plt.plot, 'units', 'value')
ax.add_legend()
```
This process shall come in handy if you ever wish to vertically stack rows of subplots on top of one another. You do not really have to focus on the process of creating the dataset, as generally you will have your dataset provided with a problem statement. For our plot, you may just consider these visual variations as **[Sinusoidal waves](https://en.wikipedia.org/wiki/Sine_wave)**. I shall attach a link in our notebook, if you wish to dig deeper into what these are and how they are actually computed.
Our next lecture would be pretty much a small follow-up to this one, where we will try to bring more *Categorical data* to our **`FacetGrid()`**. Meanwhile, I would again suggest you play around with analyzing and plotting datasets as much as you can, because visualization is a very important facet of *Data Science & Research*. And I shall see you in our next lecture with **[Heat Map](https://github.com/milaan9/12_Python_Seaborn_Module/blob/main/018_Seaborn_Heat_Map.ipynb)**.
| github_jupyter |
```
## imports ##
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pickle as pkl
####
## global ##
dataPath='/Users/ziegler/repos/mayfly/output/templatePeaks1252021.pkl'
templatePitchAngles=np.linspace(85,90,51)
templatePos=np.linspace(0,5e-2,21)
radius=0.0
nPeaks=5
keysAmp=[]
keysInd=[]
keysR=[]
keysI=[]
for i in range(nPeaks):
    keysAmp.append('pAmp'+str(i))
    keysInd.append('pInd'+str(i))
    keysR.append('pR'+str(i))
    keysI.append('pI'+str(i))
colors=['r','b','g','c','m','k']
frequencyConversion=200e6/8192
####
## definitions ##
####
simulationPeaks=pd.read_pickle(dataPath)
peaksAtRadius=simulationPeaks[simulationPeaks["r"]==radius].sort_values('pa')
nEntries=peaksAtRadius['pa'].size
fig,axs=plt.subplots()
for i,key in enumerate(keysInd):
    # axs.plot does not accept c/cmap/s arrays — use a scatter plot, as elsewhere in this notebook:
    axi=axs.scatter(peaksAtRadius['pa'],peaksAtRadius[key],c=peaksAtRadius[keysAmp[i]],cmap='inferno',s=25)
plt.colorbar(axi)
#fig,axs=plt.subplots()
#for i,key in enumerate(keysInd):
# axs.scatter(peaksAtRadius['pa'],np.arctan2(peaksAtRadius[keysI[i]],peaksAtRadius[keysR[i]]))
#fig,axs=plt.subplots()
#for i,key in enumerate(keysInd):
# axs.scatter(peaksAtRadius['pa'],np.sqrt(peaksAtRadius[keysR[i]]**2+peaksAtRadius[keysI[i]]**2))
#plt.colorbar(axi)
#lines={}
#for entry in range(nEntries):
# peaks=peaksAtRadius[['pa','pInd0','pInd1','pInd2','pInd3','pInd4']].iloc[entry]
# nextPeaks=peaksAtRadius[['pa','pInd0','pInd1','pInd2','pInd3','pInd4']].iloc[entry+1]
# if entry==0:
# for i,key in enumerate(keysInd):
# lines.update({i:[peaks[key]]})
# else:
# for i,keyLines in enumerate(lines): #loop over the lines that exist
# iLine=lines[keyLine][-1] # get the last point in that line
# distToNextPeaks=[]
# for i,key2 in enumerate(keysInd):
# distToNextPeaks.append((nextPeaks[key2]-iPeak)**2) # calculate the distance from the next
# get entries with r == radius sort by pitch angle
peaksAtRadius=simulationPeaks[simulationPeaks["r"]==radius].sort_values(by=['pa'])
peaksAtRadius.reset_index(inplace=True)
#print(len(np.where(peaksAtRadius['pAmp0']>10.)[0]))
lines={} # holds the (pitch angle, frequency) lines, permanent
linePeakIndexes=[] # holds the amplitude index of the peak (0-4) in the line, temporary
lineFreqIndexes=[] # holds the frequency index of the peak (0-8191) in the line, y coordinate, temporary, saved in lines
pitchAngles=[] # holds the pitch angle of the point (85-90) in the line, x coordinate, temporary, saved in lines
rowInds=[] # holds the row index that contains the pitch angle, used for setting the used elements to zero
# iterate through all the rows
for irow in range(len(peaksAtRadius['pa'])): # iterate through all the rows/pitch angles
    # check if the row contains any peaks with amplitude above zero
    # if not skip the row
    hasPeak=False
    for i in range(5):
        if peaksAtRadius.iloc[irow]['pAmp'+str(i)]>0: # check if any peak amplitude is non-zero in the row. This means that this row contains a point in a line
            hasPeak=True # we found a non-zero element in the row
    if not hasPeak:
        continue # otherwise skip the row
    rowInds.append(irow) # add row index to the list of rows we are using.
    # find highest frequency index in the row
    maxPeakInd=0
    maxFreqInd=peaksAtRadius.iloc[irow]['pInd'+str(0)]
    for i in range(5):
        if peaksAtRadius.iloc[irow]['pInd'+str(i)] > maxFreqInd:
            maxPeakInd=i
            maxFreqInd=peaksAtRadius.iloc[irow]['pInd'+str(i)]
    linePeakIndexes.append(maxPeakInd) # the number 0-4 of the peak that is in the line
    lineFreqIndexes.append(maxFreqInd) # the y coordinate of the line
    pitchAngles.append(peaksAtRadius.iloc[irow]['pa']) # the x coordinate of the line point
# find the start of the rightmost line
lineStart=np.max(np.where(np.diff(lineFreqIndexes,prepend=0)>100)) # find the rightmost discontinuity that marks the start of the line
# create a numpy array of the line
line=np.array([pitchAngles[lineStart:],lineFreqIndexes[lineStart:]]) # create 2D array with the line x and y coordinates
# add the line to the dict of lines
lines.update({1:line}) # put the line in some sort of dictionary
print(rowInds[lineStart:])
# set the amplitude of the peaks in our line to zero, using each row's own peak index:
for i,irow in enumerate(rowInds):
    if i>=lineStart:
        print(peaksAtRadius.iloc[irow]['pa'])
        peaksAtRadius.at[irow,'pAmp'+str(linePeakIndexes[i])]=0.
# check if all amplitudes are zeros
#print(peaksAtRadius[keysAmp])
fig,axs=plt.subplots()
# (assuming the undefined 'indexes' referred to the lineFreqIndexes computed above)
axs.plot(pitchAngles,lineFreqIndexes,'.')
axs.plot(pitchAngles,np.diff(lineFreqIndexes,prepend=0),'.')
axs.plot(pitchAngles[lineStart:],lineFreqIndexes[lineStart:],'.')
axs.set_ylim(0,8192)
# plot the highest peak index vs pitch angle
for i in range(1):
    fig,axs=plt.subplots()
    axs.plot(peaksAtRadius['pa'],peaksAtRadius['pInd'+str(i)],'r.')
    axs.set_ylim(0,8192)
# scatter plot
fig,axs=plt.subplots()
for i in range(5):
    axi=axs.scatter(peaksAtRadius['pa'],peaksAtRadius['pInd'+str(i)],c=peaksAtRadius['pAmp'+str(i)],
                    cmap='inferno',s=50)
plt.colorbar(axi)
axs.set_ylim(0,8192)
#print(peaks90deg)
# NOTE: assumed definition for peaks1cm (missing in the original) — entries at r == 1 cm, mirroring peaks0cm below:
peaks1cm=simulationPeaks[simulationPeaks["r"]==0.01].sort_values(by=['pa'])
pa1cm=peaks1cm['pa']
peak0Ind1cm=peaks1cm['pInd0']
peak1Ind1cm=peaks1cm['pInd1']
peak0Amp1cm=peaks1cm['pAmp0']
peak1Amp1cm=peaks1cm['pAmp1']
peak01IndDiff=abs(peak0Ind1cm-peak1Ind1cm)
peak01AmpDiv=peak0Amp1cm/peak1Amp1cm
#peak0Ind1cm=peaks1cm[['pInd0','pInd1','pInd2','pInd3','pInd4']]
#peak0Amp1cm=peaks1cm[['pAmp0','pAmp1','pAmp2','pAmp3','pAmp4']]
peak0Amp1cm=peaks1cm[['pAmp0','pAmp1']]
peaks0cm=simulationPeaks[simulationPeaks["r"]==0.0].sort_values(by=['pa'])
#print(peaks90deg)
pa0cm=peaks0cm['pa']
peak0Ind0cm=peaks0cm['pInd0']
fig,axs=plt.subplots()
#axs.plot(pa0cm,peak0Ind0cm,'.')
axs.plot(pa1cm,peak0Ind1cm,'.')
#axs.set_yscale('log')
fig,axs=plt.subplots()
#axs.plot(pa0cm,peak0Ind0cm,'.')
axs.plot(pa1cm,peak0Amp1cm,'.')
fig,axs=plt.subplots()
#axs.plot(pa0cm,peak0Ind0cm,'.')
axs.plot(pa1cm,peak01AmpDiv,'.')
testDF=pd.DataFrame({'a':[1,2,3],'b':[4,5,6],'c':[7,8,9]})
print(testDF.take([0,2],axis=0)) # take rows 0 and 2 (assumed example indices; the original call was incomplete)
print(testDF)
def getFrequencyPitchAngleBehavior(peaksAtRadius,lineDict,lineNumber=0):
    allAmplitudesAreZero=False
    while not allAmplitudesAreZero:
        linePeakIndexes=[] # holds the amplitude index of the peak (0-4) in the line, temporary
        lineFreqIndexes=[] # holds the frequency index of the peak (0-8191) in the line, y coordinate, temporary, saved in lines
        pitchAngles=[] # holds the pitch angle of the point (85-90) in the line, x coordinate, temporary, saved in lines
        rowInds=[] # holds the row index that contains the pitch angle, used for setting the used elements to zero
        # iterate through all the rows
        for irow in range(len(peaksAtRadius['pa'])): # iterate through all the rows/pitch angles
            # check if the row contains any peaks with amplitude above zero
            # if not skip the row
            hasPeak=False
            for i in range(5):
                if peaksAtRadius.iloc[irow]['pAmp'+str(i)]>0: # check if any peak amplitude is non-zero in the row. This means that this row contains a point in a line
                    print(peaksAtRadius.iloc[irow]['pAmp'+str(i)])
                    hasPeak=True # we found a non-zero element in the row
            if not hasPeak:
                continue # otherwise skip the row
            rowInds.append(irow) # add row index to the list of rows we are using.
            # find highest frequency index in the row
            maxPeakInd=0
            maxFreqInd=peaksAtRadius.iloc[irow]['pInd'+str(0)]
            for i in range(5):
                if peaksAtRadius.iloc[irow]['pInd'+str(i)] > maxFreqInd:
                    maxPeakInd=i
                    maxFreqInd=peaksAtRadius.iloc[irow]['pInd'+str(i)]
            linePeakIndexes.append(maxPeakInd) # the number 0-4 of the peak that is in the line
            lineFreqIndexes.append(maxFreqInd) # the y coordinate of the line
            pitchAngles.append(peaksAtRadius.iloc[irow]['pa']) # the x coordinate of the line point
        # find the start of the rightmost line
        lineStart=np.max(np.where(np.diff(lineFreqIndexes,prepend=0)>100)) # find the rightmost discontinuity that marks the start of the line
        # create a numpy array of the line
        line=np.array([pitchAngles[lineStart:],lineFreqIndexes[lineStart:]]) # create 2D array with the line x and y coordinates
        # add the line to the dict of lines
        lineDict.update({lineNumber:line}) # put the line in some sort of dictionary
        #print(peaksAtRadius[keysAmp])
        #print(rowInds[lineStart:])
        # set the amplitude of the peaks in our line to zero
        # (the original assignment was left incomplete; zero each row's own line peak, as in findLines below)
        for i,irow in enumerate(rowInds):
            if i>=lineStart:
                #print(peaksAtRadius.iloc[irow]['pa'])
                peaksAtRadius.at[irow,'pAmp'+str(linePeakIndexes[i])]=0.
        #print(peaksAtRadius[keysAmp])
        # check if all amplitudes are zeros
        allAmplitudesAreZero=True
        for key in keysAmp:
            if len(np.where(peaksAtRadius[key]>0)[0])>0:
                allAmplitudesAreZero=False
        if not allAmplitudesAreZero:
            lineNumber+=1
    return True
# get entries with r == radius sort by pitch angle
peaksAtRadius=simulationPeaks[simulationPeaks["r"]==radius].sort_values(by=['pa'])
peaksAtRadius.reset_index(inplace=True)
lines={}
getFrequencyPitchAngleBehavior(peaksAtRadius,lines)
print(lines)
peaksAtRadius=simulationPeaks[simulationPeaks["r"]==radius].sort_values(by=['pa'])
peaksAtRadius.reset_index(inplace=True)
dataDict=peaksAtRadius.to_dict()
rowInds=np.array(list(dataDict['pa'].keys()))
def findLines(lines,dataDict,nLine=0):
    potentialRowsInLine=[]
    linePeakIndexes=[] # holds the amplitude index of the peak (0-4) in the line, temporary
    lineFreqIndexes=[] # holds the frequency index of the peak (0-8191) in the line, y coordinate, temporary, saved in lines
    pitchAngles=[]
    for irow in rowInds:
        hasPeak=False
        for key in keysAmp:
            if dataDict[key][irow]>0: # check if any peak amplitude is non-zero in the row.
                # This means that this row contains a point in a line
                hasPeak=True # we found a non-zero element in the row
        if hasPeak:
            potentialRowsInLine.append(irow)
        else:
            continue # otherwise skip the row
        # find highest frequency index in the row
        maxPeakInd=0
        maxFreqInd=dataDict['pInd'+str(0)][irow]
        for i,key in enumerate(keysInd):
            if dataDict[key][irow] > maxFreqInd:
                maxPeakInd=i
                maxFreqInd=dataDict[key][irow]
        #print(maxPeakInd,maxFreqInd)
        linePeakIndexes.append(maxPeakInd) # the number 0-4 of the peak that is in the line
        lineFreqIndexes.append(maxFreqInd) # the y coordinate of the line
        pitchAngles.append(dataDict['pa'][irow]) # the x coordinate of the line point
    # find the start of the rightmost line
    #print(lineFreqIndexes)
    if len(lineFreqIndexes)==0:
        return True
    #print(lineFreqIndexes)
    lineStart=np.max(np.where(np.diff(lineFreqIndexes,prepend=0)>150)) # find the rightmost discontinuity that marks the start of the line
    #print(lineStart)
    # create a numpy array of the line
    line=np.array([pitchAngles[lineStart:],lineFreqIndexes[lineStart:]]) # create 2D array with the line x and y coordinates
    # add the line to the dict of lines
    lines.update({nLine:line}) # put the line in a dictionary
    # remove the points in the line from the data dictionary
    for i,irow in enumerate(potentialRowsInLine):
        if i>=lineStart:
            dataDict['pAmp'+str(linePeakIndexes[i])][irow]=0
            dataDict['pInd'+str(linePeakIndexes[i])][irow]=-1
    allPeaksDone=True
    for key in keysAmp:
        if len(list(dataDict[key].keys()))>0:
            allPeaksDone=False
    if not allPeaksDone:
        nLine+=1
        #print(line)
        findLines(lines,dataDict,nLine=nLine)
    else:
        return True
lines={}
findLines(lines,dataDict)
fig,axs=plt.subplots()
for i,key in enumerate(lines):
    axs.plot(lines[key][0,:],lines[key][1,:],'.')
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import mxnet as mx
from mxnet import nd, autograd, gluon, init
from mxnet.gluon import nn, rnn
import gluonnlp as nlp
import pkuseg
import multiprocessing as mp
import time
from d2l import try_gpu
import itertools
import jieba
from sklearn.metrics import accuracy_score, f1_score
import d2l
import re
import warnings
warnings.filterwarnings("ignore")
# fixed random number seed
np.random.seed(2333)
mx.random.seed(2333)
DATA_FOLDER = 'data/'
TRAIN_DATA = 'train.csv'
WORD_EMBED = 'sgns.weibo.bigram-char'
LABEL_FILE = 'train.label'
N_ROWS=1000
ctx = mx.gpu(0)
seg = pkuseg.pkuseg(model_name='web')
train_df = pd.read_csv(DATA_FOLDER+TRAIN_DATA, sep='|')
train_df = train_df.sample(frac=1)
train_df.head()
dataset =[ [row[0], row[1]] for _, row in train_df.iterrows()]
train_dataset, valid_dataset = nlp.data.train_valid_split(dataset)
len(train_dataset), len(valid_dataset)
def tokenizer(x):
    tweet, label = x
    if type(tweet) != str:
        print(tweet)
        tweet = str(tweet)
    word_list = jieba.lcut(tweet)
    if len(word_list)==0:
        word_list=['<unk>']
    return word_list, label
def get_length(x):
    return float(len(x[0]))

def to_word_list(dataset):
    start = time.time()
    with mp.Pool() as pool:
        # Each sample is processed in an asynchronous manner.
        dataset = gluon.data.ArrayDataset(pool.map(tokenizer, dataset))
        lengths = gluon.data.ArrayDataset(pool.map(get_length, dataset))
    end = time.time()
    print('Done! Tokenizing Time={:.2f}s, #Sentences={}'.format(end - start, len(dataset)))
    return dataset, lengths
train_word_list, train_word_lengths = to_word_list(train_dataset)
valid_word_list, valid_word_lengths = to_word_list(valid_dataset)
train_seqs = [sample[0] for sample in train_word_list]
counter = nlp.data.count_tokens(list(itertools.chain.from_iterable(train_seqs)))
vocab = nlp.Vocab(counter, max_size=200000)
# load customed pre-trained embedding
embedding_weights = nlp.embedding.TokenEmbedding.from_file(file_path=DATA_FOLDER+WORD_EMBED)
vocab.set_embedding(embedding_weights)
print(vocab)
def token_to_idx(x):
    return vocab[x[0]], x[1]

# A token index or a list of token indices is returned according to the vocabulary.
with mp.Pool() as pool:
    train_dataset = pool.map(token_to_idx, train_word_list)
    valid_dataset = pool.map(token_to_idx, valid_word_list)
batch_size = 1024
bucket_num = 20
bucket_ratio = 0.1
def get_dataloader():
    # Construct the DataLoader: pad data, stack labels
    batchify_fn = nlp.data.batchify.Tuple(nlp.data.batchify.Pad(axis=0), \
                                          nlp.data.batchify.Stack())
    # in this example, we use a FixedBucketSampler,
    # which assigns each data sample to a fixed bucket based on its length.
    batch_sampler = nlp.data.sampler.FixedBucketSampler(
        train_word_lengths,
        batch_size=batch_size,
        num_buckets=bucket_num,
        ratio=bucket_ratio,
        shuffle=True)
    print(batch_sampler.stats())
    # train_dataloader
    train_dataloader = gluon.data.DataLoader(
        dataset=train_dataset,
        batch_sampler=batch_sampler,
        batchify_fn=batchify_fn)
    # valid_dataloader
    valid_dataloader = gluon.data.DataLoader(
        dataset=valid_dataset,
        batch_size=batch_size,
        shuffle=False,
        batchify_fn=batchify_fn)
    return train_dataloader, valid_dataloader
train_dataloader, valid_dataloader = get_dataloader()
for tweet, label in train_dataloader:
    print(tweet, label)
    break
```
## Model construction
The TextCNN model, a weighted cross-entropy loss, and the whole training loop
```
class TextCNN(nn.Block):
    def __init__(self, vocab_len, embed_size, kernel_sizes, num_channels, \
                 dropout, nclass, **kwargs):
        super(TextCNN, self).__init__(**kwargs)
        self.embedding = nn.Embedding(vocab_len, embed_size)
        self.constant_embedding = nn.Embedding(vocab_len, embed_size)
        self.dropout = nn.Dropout(dropout)
        self.decoder = nn.Dense(nclass)
        self.pool = nn.GlobalMaxPool1D()
        self.convs = nn.Sequential()
        for c, k in zip(num_channels, kernel_sizes):
            self.convs.add(nn.Conv1D(c, k, activation='relu'))

    def forward(self, inputs):
        embeddings = nd.concat(
            self.embedding(inputs), self.constant_embedding(inputs), dim=2)
        embeddings = embeddings.transpose((0, 2, 1))
        encoding = nd.concat(*[nd.flatten(
            self.pool(conv(embeddings))) for conv in self.convs], dim=1)
        outputs = self.decoder(self.dropout(encoding))
        return outputs
vocab_len = len(vocab)
emsize = 300 # word embedding size
nhidden = 400 # lstm hidden_dim
nlayers = 4 # lstm layers
natt_unit = 400 # the hidden_units of attention layer
natt_hops = 20 # the channels of attention
nfc = 256 # last dense layer size
nclass = 72 # we have 72 emoji in total
drop_prob = 0.2
pool_way = 'flatten' # The way to handle M
prune_p = None
prune_q = None
ctx = try_gpu()
kernel_sizes, nums_channels = [2, 3, 4, 5], [100, 100, 100, 100]
model = TextCNN(vocab_len, emsize, kernel_sizes, nums_channels, drop_prob, nclass)
model.initialize(init.Xavier(), ctx=ctx)
print(model)
model.embedding.weight.set_data(vocab.embedding.idx_to_vec)
model.constant_embedding.weight.set_data(vocab.embedding.idx_to_vec)
model.constant_embedding.collect_params().setattr('grad_req', 'null')
tmp = nd.array([10, 20, 30, 40, 50, 60], ctx=ctx).reshape(1, -1)
model(tmp)
class WeightedSoftmaxCE(nn.HybridBlock):
    def __init__(self, sparse_label=True, from_logits=False, **kwargs):
        super(WeightedSoftmaxCE, self).__init__(**kwargs)
        with self.name_scope():
            self.sparse_label = sparse_label
            self.from_logits = from_logits

    def hybrid_forward(self, F, pred, label, class_weight, depth=None):
        if self.sparse_label:
            label = F.reshape(label, shape=(-1, ))
            label = F.one_hot(label, depth)
        if not self.from_logits:
            pred = F.log_softmax(pred, -1)
        weight_label = F.broadcast_mul(label, class_weight)
        loss = -F.sum(pred * weight_label, axis=-1)
        # return F.mean(loss, axis=0, exclude=True)
        return loss
def calculate_loss(x, y, model, loss, class_weight):
    pred = model(x)
    y = nd.array(y.asnumpy().astype('int32')).as_in_context(ctx)
    if loss_name == 'sce':
        l = loss(pred, y)
    elif loss_name == 'wsce':
        l = loss(pred, y, class_weight, class_weight.shape[0])
    else:
        raise NotImplementedError
    return pred, l
def one_epoch(data_iter, model, loss, trainer, ctx, is_train, epoch,
              clip=None, class_weight=None, loss_name='sce'):
    loss_val = 0.
    total_pred = []
    total_true = []
    n_batch = 0
    for batch_x, batch_y in data_iter:
        batch_x = batch_x.as_in_context(ctx)
        batch_y = batch_y.as_in_context(ctx)
        if is_train:
            with autograd.record():
                batch_pred, l = calculate_loss(batch_x, batch_y, model, \
                                               loss, class_weight)
            # backward calculate
            l.backward()
            # clip gradient
            clip_params = [p.data() for p in model.collect_params().values()]
            if clip is not None:
                norm = nd.array([0.0], ctx)
                for param in clip_params:
                    if param.grad is not None:
                        norm += (param.grad ** 2).sum()
                norm = norm.sqrt().asscalar()
                if norm > clip:
                    for param in clip_params:
                        if param.grad is not None:
                            param.grad[:] *= clip / norm
            # update params
            trainer.step(batch_x.shape[0])
        else:
            batch_pred, l = calculate_loss(batch_x, batch_y, model, \
                                           loss, class_weight)
        # keep result for metric
        batch_pred = nd.argmax(nd.softmax(batch_pred, axis=1), axis=1).asnumpy()
        batch_true = np.reshape(batch_y.asnumpy(), (-1, ))
        total_pred.extend(batch_pred.tolist())
        total_true.extend(batch_true.tolist())
        batch_loss = l.mean().asscalar()
        n_batch += 1
        loss_val += batch_loss
        # check the result of the training phase
        if is_train and n_batch % 400 == 0:
            print('epoch %d, batch %d, batch_train_loss %.4f, batch_train_acc %.3f' %
                  (epoch, n_batch, batch_loss, accuracy_score(batch_true, batch_pred)))
    # metric
    F1 = f1_score(np.array(total_true), np.array(total_pred), average='weighted')
    acc = accuracy_score(np.array(total_true), np.array(total_pred))
    loss_val /= n_batch
    if is_train:
        print('epoch %d, learning_rate %.5f \n\t train_loss %.4f, acc_train %.3f, F1_train %.3f, ' %
              (epoch, trainer.learning_rate, loss_val, acc, F1))
        # decay lr
        if epoch % 3 == 0:
            trainer.set_learning_rate(trainer.learning_rate * 0.9)
    else:
        print('\t valid_loss %.4f, acc_valid %.3f, F1_valid %.3f, ' % (loss_val, acc, F1))
def train_valid(data_iter_train, data_iter_valid, model, loss, trainer, ctx, nepochs,
                clip=None, class_weight=None, loss_name='sce'):
    for epoch in range(1, nepochs+1):
        start = time.time()
        # train
        is_train = True
        one_epoch(data_iter_train, model, loss, trainer, ctx, is_train,
                  epoch, clip, class_weight, loss_name)
        # valid
        is_train = False
        one_epoch(data_iter_valid, model, loss, trainer, ctx, is_train,
                  epoch, clip, class_weight, loss_name)
        end = time.time()
        print('time %.2f sec' % (end-start))
        print("*"*100)
from util import get_weight
weight_list = get_weight(DATA_FOLDER, LABEL_FILE)
class_weight = None
loss_name = 'sce'
optim = 'adam'
lr, wd = .001, .999
clip = None
nepochs = 5
trainer = gluon.Trainer(model.collect_params(), optim, {'learning_rate': lr})
if loss_name == 'sce':
    loss = gluon.loss.SoftmaxCrossEntropyLoss()
elif loss_name == 'wsce':
    loss = WeightedSoftmaxCE()
    # the value of class_weight is obtained by counting data in advance. It can be seen as a hyperparameter.
    class_weight = nd.array(weight_list, ctx=ctx)
# train and valid
print(ctx)
train_valid(train_dataloader, valid_dataloader, model, loss, \
trainer, ctx, nepochs, clip=clip, class_weight=class_weight, \
loss_name=loss_name)
model.save_parameters("model/textcnn.params")
kernel_sizes, nums_channels = [2, 3, 4, 5], [100, 100, 100, 100]
model = TextCNN(vocab_len, emsize, kernel_sizes, nums_channels, 0, nclass)
model.load_parameters('model/textcnn.params', ctx=ctx)
TEST_DATA = 'test.csv'
predictions = []
test_df = pd.read_csv(DATA_FOLDER+TEST_DATA, header=None, sep='\t')
len(test_df)
start = time.time()
for _, tweet in test_df.iterrows():
    token = vocab[jieba.lcut(tweet[1])]
    if len(token)<5:
        # pad short sequences up to the largest kernel size
        token += [0]*(5-len(token))
    inp = nd.array(token, ctx=ctx).reshape(1,-1)
    pred = model(inp)
    pred = nd.argmax(pred, axis=1).asscalar()
    predictions.append(int(pred))
    if len(predictions)%2000==0:
        ckpt = time.time()
        print('current pred len %d, time %.2fs' % (len(predictions), ckpt-start))
        start = ckpt
submit = pd.DataFrame({'Expected': predictions})
submit.to_csv('submission.csv', sep=',', index_label='ID')
```
| github_jupyter |
TSG088 - Hadoop datanode logs
=============================
Steps
-----
### Parameters
```
import re
tail_lines = 500
pod = None # All
container = "hadoop"
log_files = [ "/var/log/supervisor/log/datanode*.log" ]
expressions_to_analyze = [
re.compile(".{23} WARN "),
re.compile(".{23} ERROR ")
]
log_analyzer_rules = []
```
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
    from kubernetes import client, config
    from kubernetes.stream import stream
except ImportError:
    # Install the Kubernetes module
    import sys
    !{sys.executable} -m pip install kubernetes
    try:
        from kubernetes import client, config
        from kubernetes.stream import stream
    except ImportError:
        display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
        raise
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get the Hadoop datanode logs from the hadoop container
### Get tail for log
```
# Display the last 'tail_lines' of files in 'log_files' list
pods = api.list_namespaced_pod(namespace)
entries_for_analysis = []
for p in pods.items:
    if pod is None or p.metadata.name == pod:
        for c in p.spec.containers:
            if container is None or c.name == container:
                for log_file in log_files:
                    print (f"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'")
                    try:
                        output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], container=c.name, stderr=True, stdout=True)
                    except Exception:
                        print (f"FAILED to get LOGS for CONTAINER: {c.name} in POD: {p.metadata.name}")
                    else:
                        for line in output.split('\n'):
                            for expression in expressions_to_analyze:
                                if expression.match(line):
                                    entries_for_analysis.append(line)
                                    print(line)
print("")
print(f"{len(entries_for_analysis)} log entries found for further analysis.")
```
### Analyze log entries and suggest relevant Troubleshooting Guides
```
# Analyze log entries and suggest further relevant troubleshooting guides
from IPython.display import Markdown
print(f"Applying the following {len(log_analyzer_rules)} rules to {len(entries_for_analysis)} log entries for analysis, looking for HINTs to further troubleshooting.")
print(log_analyzer_rules)
hints = 0
if len(log_analyzer_rules) > 0:
    for entry in entries_for_analysis:
        for rule in log_analyzer_rules:
            if entry.find(rule[0]) != -1:
                print (entry)
                display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.'))
                hints = hints + 1
print("")
print(f"{len(entries_for_analysis)} log entries analyzed (using {len(log_analyzer_rules)} rules). {hints} further troubleshooting hints made inline.")
print("Notebook execution is complete.")
```
| github_jupyter |
# Working with data files
Reading and writing data files is a common task, and Python offers native support for working with many kinds of data files. Today, we're going to be working mainly with CSVs.
### Import the csv module
We're going to be working with delimited text files, so the first thing we need to do is import this functionality from the standard library.
```
import csv
```
### Opening a file to read the contents
We're going to use something called a [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement to open a file and read the contents. The `open()` function takes at least two arguments: The path to the file you're opening and what ["mode"](https://docs.python.org/3/library/functions.html#open) you're opening it in.
To start with, we're going to use the `'r'` mode to read the data. We'll use the default arguments for delimiter -- comma -- and we don't need to specify a quote character.
**Important:** If you open a data file in `w` (write) mode, anything that's already in the file will be erased.
The file we're using -- MLB roster data from 2017 -- lives at `data/mlb.csv`.
Once we have the file open, we're going to use some functionality from the `csv` module to iterate over the lines of data and print each one.
Specifically, we're going to use the `csv.reader` method, which returns a list of lines in the data file. Each line, in turn, is a list of the "cells" of data in that line.
Then we're going to loop over the lines of data and print each line. We can also use bracket notation to retrieve elements from inside each line of data.
```
# open the MLB data file `as` mlb
with open('data/mlb.csv', 'r') as mlb:
    # create a reader object
    reader = csv.reader(mlb)
    # loop over the rows in the file
    for row in reader:
        # assign variables to each element in the row (shortcut!)
        name, team, position, salary, start_year, end_year, years = row
        # print the row, which is a list
        print(row)
```
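To see the bracket notation mentioned above in isolation, here is a minimal sketch that prints just the first and fourth cells of each row (the name and salary columns in this data):
```
# open the MLB data file `as` mlb
with open('data/mlb.csv', 'r') as mlb:
    reader = csv.reader(mlb)
    # skip the header row
    next(reader)
    # grab individual cells by position with bracket notation
    for row in reader:
        print(row[0], row[3])
```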
### Simple filtering
If you wanted to filter your data, you could use an `if` statement inside your `with` block.
```
# open the MLB data file `as` mlb
with open('data/mlb.csv', 'r') as mlb:
    # create a reader object
    reader = csv.reader(mlb)
    # move past the header row
    next(reader)
    # loop over the rows in the file
    for row in reader:
        # assign variables to each element in the row (shortcut!)
        name, team, position, salary, start_year, end_year, years = row
        # print the line of data ~only~ if the player is on the Twins
        if team == 'MIN':
            # print the row, which is a list
            print(row)
```
### _Exercise_
Read in the MLB data, print only the names and salaries of players who make at least $1 million. (Hint: Use type coercion!)
```
# open the MLB data file `as` mlb
with open('data/mlb.csv', 'r') as mlb:
    # create a reader object
    reader = csv.reader(mlb)
    # move past the header row
    next(reader)
    # loop over the rows in the file
    for row in reader:
        # assign variables to each element in the row (shortcut!)
        name, team, position, salary, start_year, end_year, years = row
        # print the name and salary ~only~ if the player makes at least $1 million
        if int(salary) >= 1000000:
            print(name, salary)
```
### DictReader: Another way to read CSV files
Sometimes it's more convenient to work with data files as a list of dictionaries instead of a list of lists. That way, you don't have to remember the position of each "column" of data -- you can just reference the column name. To do it, we'll use a `csv.DictReader` object instead of a `csv.reader` object. Otherwise the code is much the same.
```
# open the MLB data file `as` mlb
with open('data/mlb.csv', 'r') as mlb:
    # create a reader object
    reader = csv.DictReader(mlb)
    # loop over the rows in the file
    for row in reader:
        # print just the player's name (the column header is "NAME")
        print(row['NAME'])
```
### Writing to CSV files
You can also use the `csv` module to _create_ csv files -- same idea, you just need to change the mode to `'w'`. As with reading, there's a list-based writing method and a dictionary-based method.
```
# define the column names
COLNAMES = ['name', 'org', 'position']
# let's make a few rows of data to write
DATA_TO_WRITE = [
['Cody', 'IRE', 'Training Director'],
['Maggie', 'The New York Times', 'Reporter'],
['Donald', 'The White House', 'President']
]
# open an output file in write mode
with open('people-list.csv', 'w') as outfile:
    # create a writer object
    writer = csv.writer(outfile)
    # write the header row
    writer.writerow(COLNAMES)
    # loop over the data and write to file
    for human in DATA_TO_WRITE:
        writer.writerow(human)
```
### Using DictWriter to write data
Similar to using the list-based method, except that you need to ensure that the keys in your dictionaries of data match exactly a list of fieldnames.
```
# define the column names
COLNAMES = ['name', 'org', 'position']
# let's make a few rows of data to write
DATA_TO_WRITE = [
{'name': 'Cody', 'org': 'IRE', 'position': 'Training Director'},
{'name': 'Maggie', 'org': 'The New York Times', 'position': 'Reporter'},
{'name': 'Donald', 'org': 'The White House', 'position': 'President'}
]
# open an output file in write mode
with open('people-dict.csv', 'w') as outfile:
    # create a writer object -- pass the list of column names to the `fieldnames` keyword argument
    writer = csv.DictWriter(outfile, fieldnames=COLNAMES)
    # use the writeheader method to write the header row
    writer.writeheader()
    # loop over the data and write to file
    for human in DATA_TO_WRITE:
        writer.writerow(human)
```
### You can open multiple files for reading/writing
Sometimes you want to open multiple files at the same time. One thing you might want to do: Opening a file of raw data in read mode, clean each row in a loop and write out the clean data to a new file.
You can open multiple files in the same `with` block -- just separate your `open()` functions with a comma.
For this example, we're not going to do any cleaning -- we're just going to copy the contents of one file to another.
```
# open the MLB data file `as` mlb
# also, open `mlb-copy.csv` to write to
with open('data/mlb.csv', 'r') as mlb, open('mlb-copy.csv', 'w') as mlb_copy:
    # create a reader object
    reader = csv.DictReader(mlb)
    # create a writer object
    # we're going to use the `fieldnames` attribute of the DictReader object
    # as our output headers, as well
    # b/c we're basically just making a copy
    writer = csv.DictWriter(mlb_copy, fieldnames=reader.fieldnames)
    # write header row
    writer.writeheader()
    # loop over the rows in the file
    for row in reader:
        # what type of object is `row`?
        # how would we find out?
        # write row to output file
        writer.writerow(row)
```
| github_jupyter |
## Project 2: Exploring Uganda's milk imports and exports
A country's economy depends, sometimes heavily, on its exports and imports. The United Nations Comtrade database provides data on global trade. It will be used to analyse Uganda's imports and exports of milk in 2015:
* How much does Uganda export and import, and is the balance positive (more exports than imports)?
* Which are the main trading partners, i.e. from/to which countries does Uganda import/export the most?
* Which are the regular customers, i.e. which countries buy milk from Uganda every month?
* Which countries does Uganda both import from and export to?
```
import warnings
warnings.simplefilter('ignore', FutureWarning)
from pandas import *
%matplotlib inline
```
## Getting and preparing the data
The data is obtained from the [United Nations Comtrade](http://comtrade.un.org/data/) website, by selecting the following configuration:
- Type of Product: goods
- Frequency: monthly
- Periods: Jan - Dec 2015
- Reporter: Uganda
- Partners: all
- Flows: imports and exports
- HS (as reported) commodity codes: 401 (Milk and cream, neither concentrated nor sweetened) and 402 (Milk and cream, concentrated or sweetened)
```
LOCATION = 'comrade_milk_ug_jan_dec_2015.csv'
```
On reading in the data, the commodity code has to be read as a string, to not lose the leading zero.
```
import pandas as pd
milk = pd.read_csv(LOCATION, dtype={'Commodity Code':str})
milk.tail(2)
```
The data covers the twelve months of 2015. Most columns are irrelevant for this analysis, or always contain the same value, like the year and reporter columns. The commodity code is transformed into a short but descriptive text and only the relevant columns are selected.
```
def milkType(code):
    if code == '401': # neither concentrated nor sweetened
        return 'unprocessed'
    if code == '402': # concentrated or sweetened
        return 'processed'
    return 'unknown'
COMMODITY = 'Milk and cream'
milk[COMMODITY] = milk['Commodity Code'].apply(milkType)
MONTH = 'Period'
PARTNER = 'Partner'
FLOW = 'Trade Flow'
VALUE = 'Trade Value (US$)'
headings = [MONTH, PARTNER, FLOW, COMMODITY, VALUE]
milk = milk[headings]
milk.head()
```
The data contains the total imports and exports per month, under the 'World' partner. Those rows are removed to keep only the per-country data.
```
milk = milk[milk[PARTNER] != 'World']
milk.head()
milk.tail()
```
## Total trade flow
To answer the first question, 'how much does Uganda export and import and is the balance positive (more exports than imports)?', the dataframe is split into two groups: exports from Uganda and imports into Uganda. The trade values within each group are summed up to get the total trading.
```
grouped = milk.groupby([FLOW])
grouped[VALUE].aggregate(sum)
```
This shows a trade surplus of over 30 million dollars.
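The balance itself can be computed explicitly; a minimal sketch reusing the grouped totals from the cell above:
```
totals = grouped[VALUE].aggregate(sum)
# positive balance = more exports than imports
print('Trade balance:', totals['Exports'] - totals['Imports'], 'US$')
```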
## Main trade partners
To address the second question, 'Which are the main trading partners, i.e. from/to which countries does Uganda import/export the most?', the dataframe is split by country instead, and then each group is aggregated for the total trade value. This is done separately for imports and exports. The result is sorted in descending order so that the main partners are at the top.
```
imports = milk[milk[FLOW] == 'Imports']
grouped = imports.groupby([PARTNER])
print('Uganda imports from', len(grouped), 'countries.')
print('The 5 biggest exporters to Uganda are:')
totalImports = grouped[VALUE].aggregate(sum).sort_values(inplace=False,ascending=False)
totalImports.head()
```
These import values can be plotted as a bar chart, making differences between countries easier to see.
```
totalImports.head(10).plot(kind='barh')
```
From this chart we can deduce that, among these top-ten partners, Switzerland exports the least milk to Uganda.
```
exports = milk[milk[FLOW] == 'Exports']
grouped = exports.groupby([PARTNER])
print('Uganda exports to', len(grouped), 'countries.')
print('The 5 biggest importers from Uganda are:')
grouped[VALUE].aggregate(sum).sort_values(ascending=False,inplace=False).head()
```
## Regular importers
Given that there are two commodities, the third question, 'Which are the regular customers, i.e. which countries buy milk from Uganda every month?', is meant in the sense that a regular customer imports both commodities every month. This means that if the exports dataframe is grouped by country, each group has exactly ten rows (two commodities bought in each of the five months). To see the countries, only the first month of one commodity has to be listed, as by definition it's the same countries every month and for the other commodity.
```
def buysEveryMonth(group):
    # a regular customer has 10 rows: 2 commodities bought in each of the 5 months
    return len(group) == 10
grouped = exports.groupby([PARTNER])
regular = grouped.filter(buysEveryMonth)
print(regular)
regular[(regular[MONTH] == 201501) & (regular[COMMODITY] == 'processed')]
```
Just over 5% of Uganda's total exports are due to these regular customers.
```
regular[VALUE].sum() / exports[VALUE].sum()
```
## Bi-directional trade
To address the fourth question, 'Which countries does Uganda both import from and export to?', a pivot table is used to list the total export and import value for each country.
```
countries = pivot_table(milk, index=[PARTNER], columns=[FLOW],
values=VALUE, aggfunc=sum)
countries.head()
```
Removing the rows with a missing value will leave only those countries with a bi-directional trade flow with Uganda.
```
countries.dropna()
```
## Conclusions
The milk and cream trade of Uganda from January to May 2015 was analysed in terms of which countries Uganda mostly depends on for income (exports) and goods (imports). Over the period, Uganda had a trade surplus of over 30 million US dollars.
Kenya is the main partner, buying from Uganda almost triple the value that it sells to Uganda.
Uganda exported to over 100 countries during the period, but imported from only 24; the main import partners (top five by trade value) are not all geographically close: Kenya, the Netherlands, the United Arab Emirates, Oman, and South Africa. Of these, only Kenya is also among the main export partners.
Uganda also has regular customers, the 10 countries that buy both types of milk and cream every month; they contribute just over 5% of the total export value.
For some partners, the trade value (in US dollars) is suspiciously low, which raises questions about the data's accuracy.
# Finding the best market to advertise in for e-learning
The aim of this project is to give examples of how to use basic concepts in Statistics, such as mean values, medians, ranges, and standard deviations, to answer questions using real-world data.
To be concrete, we will focus on the market for programming courses.
Using a real-world dataset, we will determine which markets are the best to advertise in and estimate the extent to which these results should be trusted.
## The dataset
The dataset we will use, `2017-fCC-New-Coders-Survey-Data.csv`, was downloaded from [this Github repository](https://github.com/freeCodeCamp/2017-new-coder-survey).
It was used by Quincy Larson, founder of the e-learning platform [freeCodeCamp](https://www.freecodecamp.org/), to write [this article on Medium](https://www.freecodecamp.org/news/we-asked-20-000-people-who-they-are-and-how-theyre-learning-to-code-fff5d668969/) about new coders (defined as people who had been coding for less than 5 years) in 2017.
The questions asked in the survey can be found in `2017-fCC-New-Coders-Survey-Data_questions.csv`.
Let us first import the modules we will need:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
We then import the dataset, find its shape, and print the five first rows:
```
path_to_csv = '../Data/2017-fCC-New-Coders-Survey-Data.csv'
df = pd.read_csv(path_to_csv)
print(df.shape)
df.head()
```
This dataframe has 18175 rows and 136 columns.
Let us list them and their main properties:
```
pd.set_option('display.max_columns', 200) # to print all columns
df.describe(include='all')
```
## Job role interests
We first want to determine which jobs new coders are interested in, which will in turn determine the topics they should be interested in learning.
To this end, we generate a relative frequency table for the `JobInterest` columns and represent it graphically as a bar plot.
```
# number of bars in the plot
nbars = 11
# job interests with their own column
list_jobs = ['BackEnd', 'DataEngr', 'DataSci', 'DevOps', 'FrontEnd',
'FullStack', 'GameDev', 'InfoSec', 'Mobile', 'ProjMngr',
'QAEngr', 'UX']
freqs = {}
nrows = df.shape[0]
for job in list_jobs:
    name_col = 'JobInterest' + job
    freqs[name_col] = df[name_col].sum() / nrows
# job interests in the 'other' column
for job in df['JobInterestOther']:
    if pd.isna(job):
        pass
    elif job in freqs.keys():
        freqs[job] += 1 / nrows
    else:
        freqs[job] = 1 / nrows
# turn the dictionary to two lists, ordered by the y value in descending order
x, y = zip(*sorted(freqs.items(), key=lambda item: -item[1]))
plt.bar(x, y)
plt.xlim(-0.6, nbars-0.4)
plt.xticks(rotation=90)
plt.grid()
plt.title('Frequency of job interests')
plt.show()
```
We find that:
* a plurality (more than 20%) are interested in Full Stack roles,
* the next two most popular careers are Front End Developer and Back End Developer.
Let us now sum the values:
```
sum(y)
```
Values add up to more than 1.4, meaning that, on average, a new coder contemplates approximately 1.4 different careers.
*It may thus be valuable to put forward learning opportunities which may lead to different careers.*
To help selecting them, let us plot the correlation map between expressions of interests in different career opportunities.
For simplicity, we focus on those having their own columns, which from the graph above covers at least the 11 most popular career opportunities.
```
df[['JobInterest' + job for job in list_jobs]].fillna(0).corr().style.background_gradient(cmap='coolwarm')
```
There is a significant (larger than 0.4) positive correlation between interests in Full Stack, Front End, Back End, and Mobile development.
Since these are also the four most popular career opportunities, *it would make sense to advertise e-learning content which can be useful to these four roles*.
There is also a significant correlation between interest for Data Scientist and Data Engineer careers, which are both among the ten most popular career opportunities.
*It would thus also make sense to advertise content relevant to these two roles.*
## Best countries to advertise in
We now focus on the geographic location of new coders to determine which countries an advertising campaign should focus on.
We focus on coders who have expressed interest in at least one of the 10 most popular careers and plot a frequency table of the `CountryLive` column, indicating in which country each new coder lives.
```
# list of the 10 most popular careers
most_popular_careers = list(x[:10])
# dataframe containing only the rows for coders having expressed interest in at
# least one of the 10 most popular careers
df_interested_10 = df[df[most_popular_careers].sum(axis=1) > 0.]
# frequency table
freq_table_country = df_interested_10['CountryLive'].value_counts()
freq_table_country_per = df_interested_10['CountryLive'].value_counts(normalize=True) * 100
# bar plot of the frequency table, showing only the 5 most represented
# countries
nbars = 5
freq_table_country.plot.bar()
plt.xlim(-0.5,nbars-0.5)
plt.grid()
plt.title('Number of new coders living in each country')
plt.show()
freq_table_country_per.plot.bar()
plt.xlim(-0.5,nbars-0.5)
plt.ylabel('%')
plt.grid()
plt.title('Percentage of new coders living in each country')
plt.show()
```
A plurality (more than 40%, corresponding to more than 3000 coders in the survey) of new coders live in the United States of America (USA).
The next two most represented countries are India and the United Kingdom (UK), with less than 10% each, followed by Canada and Poland.
Moreover, the first four are all English-speaking countries, which could make it easy to extend a campaign from one to another.
*An advertising campaign could thus start in the USA, before being potentially extended to India, the UK, and Canada.*
Let us check that the previous results about job interests remain true when restricting to coders living in the USA, which could be the primary target of a campaign.
```
# number of bars in the plot
nbars = 11
# country to focus on
country = 'United States of America'
# number of rows
nrows_USA = df[df['CountryLive'] == country].shape[0]
# job interests with their own column
freqs_USA = {}
for job in list_jobs:
    name_col = 'JobInterest' + job
    freqs_USA[name_col] = df[df['CountryLive'] == country][name_col].sum() / nrows_USA
# job interests in the 'other' column
for job in df[df['CountryLive'] == country]['JobInterestOther']:
    if pd.isna(job):
        pass
    elif job in freqs_USA.keys():
        freqs_USA[job] += 1 / nrows_USA
    else:
        freqs_USA[job] = 1 / nrows_USA
# turn the dictionary to two lists, ordered by the y value in descending order
x, y = zip(*sorted(freqs_USA.items(), key=lambda item: -item[1]))
plt.bar(x, y)
plt.xlim(-0.6, nbars-0.4)
plt.xticks(rotation=90)
plt.grid()
plt.title('Frequency of job interests in the USA')
plt.show()
```
This frequency table looks very similar to the one obtained above from the full dataset.
All frequencies tend to be a bit larger, which does not affect our conclusions.
Let us check that the correlations we found above are also still present:
```
df[df['CountryLive'] == country][['JobInterest' + job for job in list_jobs]].fillna(0).corr().style.background_gradient(cmap='coolwarm')
```
There are still strong positive correlations between interest for the Full Stack, Front End, and Back End careers, as well as the Data Scientist and Data Engineer ones.
Another important piece of information for deciding which countries to invest in is how much coders are willing to pay for learning.
To estimate it, we first add a new column `MoneyPerMonth` showing, in US Dollars, how much each coder has spent per month since they started programming.
It is obtained by dividing the `MoneyForLearning` column by the `MonthsProgramming` one, after replacing 0s in the latter by 1s.
(Since most subscriptions are monthly, one can expect the total amount spent by someone who has just started coding to be a fair estimation of what they pay for the first month.)
```
df['MoneyPerMonth'] = df['MoneyForLearning'] / df['MonthsProgramming'].replace(0, 1)
```
We then group the results by country and show summary statistics for the four countries where the number of new coders is highest.
```
# column to consider
column = 'MoneyPerMonth'
# countries to be considered
list_countries = ['United States of America', 'India', 'United Kingdom', 'Canada']
# dataframe grouped by country
df_country = df.groupby(['CountryLive'])[column]
# mean values
mean_money_per_month = df_country.mean()
# standard deviations
std_money_per_month = df_country.std()
# standard errors
ste_money_per_month = std_money_per_month / np.sqrt(df_country.size())
phrase = '{}: mean = ${:.0f}±{:.0f}, std = ${:.0f}'
for country in list_countries:
print(phrase.format(country, mean_money_per_month[country], ste_money_per_month[country], std_money_per_month[country]))
```
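For reference, the standard error reported above is the sample standard deviation divided by the square root of the sample size, $\mathrm{SE} = s/\sqrt{n}$; it estimates how far the sample mean is likely to be from the population mean.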
It seems that coders in the USA spend more for learning than coders in the other three countries on average.
However, the large standard deviation may indicate that the result is biased by a few high-payers.
Let us show box plots of the distributions of monthly spending in each of these four countries:
```
# dataframe grouped by country
df[df['CountryLive'].isin(list_countries)].boxplot(column, 'CountryLive')
plt.xticks(rotation=90)
plt.ylabel('$')
plt.title('Money spent per month')
plt.suptitle('')
plt.show()
```
There seems to be a significant number of outliers, which may well bias the analysis.
To get a better sense of the data, let us focus on coders spending less than a given threshold, which we choose as $1000, as it is unlikely that many coders will regularly spend more than that on learning each month.
Higher values may point to misreporting, other errors, or a lot of money spent on attending bootcamps, which are not covered by our advertising campaign.
```
threshold = 1000
# dataframe grouped by country
df[(df['CountryLive'].isin(list_countries)) & (df[column] < threshold)].boxplot(column, 'CountryLive')
plt.xticks(rotation=90)
plt.ylabel('$')
plt.title('Money spent per month')
plt.suptitle('')
plt.show()
```
These box plots seem more sensible.
We can already see that more than half of new coders in each of these countries do not spend money on learning (the median is zero in all cases).
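(The zero medians can be confirmed directly with `df[df['CountryLive'].isin(list_countries)].groupby('CountryLive')[column].median()`.)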
Let us re-compute the means and standard deviations with this threshold:
```
# dataframe grouped by country, keeping only the rows where MoneyPerMonth is
# below the threshold
df_country = df[df[column] < threshold].groupby(['CountryLive'])[column]
# mean values
mean_money_per_month = df_country.mean()
# standard deviations
std_money_per_month = df_country.std()
# standard errors
ste_money_per_month = std_money_per_month / np.sqrt(df_country.size())
phrase = '{}: mean = ${:.1f}±{:.1f}, std = ${:.1f}'
for country in list_countries:
print(phrase.format(country, mean_money_per_month[country], ste_money_per_month[country], std_money_per_month[country]))
```
The average amount of money spent per month by new coders is largest in the USA and smallest in India.
## Conclusions
From this brief study, it seems clear that the best market to advertise in is the USA, and that the campaign should focus on skills relevant to Full Stack, Front End, and Back End development.
Possible future extensions of the campaign, if successful, could include the UK, Canada, and India, as well as skills relevant to Data Science and Data Engineering.
<a href="https://colab.research.google.com/github/AliKarasneh/create-react-app/blob/master/TwitterSentimentAnalysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install langdetect
!pip install tweepy
from PIL import Image
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from langdetect import detect
from nltk.stem import SnowballStemmer
from sklearn.feature_extraction.text import CountVectorizer
import tweepy
from textblob import TextBlob
import nltk
import pandas as pd
from matplotlib import pyplot as plt
# Authentication
# NOTE: API credentials redacted -- substitute your own Twitter developer keys here.
consumerKey = 'YOUR_CONSUMER_KEY'
consumerSecret = 'YOUR_CONSUMER_SECRET'
accessToken = 'YOUR_ACCESS_TOKEN'
accessTokenSecret = 'YOUR_ACCESS_TOKEN_SECRET'
auth = tweepy.OAuthHandler(consumerKey, consumerSecret)
auth.set_access_token(accessToken, accessTokenSecret)
api = tweepy.API(auth)
#Sentiment Analysis
def percentage(part,whole):
return 100 * float(part)/float(whole)
keyword = input('Please enter keyword or hashtag to search: ')
noOfTweet = int(input ('Please enter how many tweets to analyze: '))
tweets = tweepy.Cursor(api.search, q=keyword).items(noOfTweet)
positive = 0
negative = 0
neutral = 0
polarity = 0
tweet_list = []
neutral_list = []
negative_list = []
positive_list = []
nltk.download('vader_lexicon')  # download the VADER lexicon once, not on every iteration
for tweet in tweets:
    #print(tweet.text)
    tweet_list.append(tweet.text)
    analysis = TextBlob(tweet.text)
    score = SentimentIntensityAnalyzer().polarity_scores(tweet.text)
    neg = score['neg']
    neu = score['neu']
    pos = score['pos']
    comp = score['compound']
    polarity += analysis.sentiment.polarity
    if neg > pos:
        negative_list.append(tweet.text)
        negative += 1
    elif pos > neg:
        positive_list.append(tweet.text)
        positive += 1
    elif pos == neg:
        neutral_list.append(tweet.text)
        neutral += 1
positive = percentage(positive, noOfTweet)
negative = percentage(negative, noOfTweet)
neutral = percentage(neutral, noOfTweet)
polarity = percentage(polarity, noOfTweet)
positive = format(positive, '.1f')
negative = format(negative, '.1f')
neutral = format(neutral, '.1f')
#Number of Tweets (Total, Positive, Negative, Neutral)
tweet_list = pd.DataFrame(tweet_list)
neutral_list = pd.DataFrame(neutral_list)
negative_list = pd.DataFrame(negative_list)
positive_list = pd.DataFrame(positive_list)
print('total number: ',len(tweet_list))
print('positive number :',len(positive_list))
print('negative number: ', len(negative_list))
print('neutral number: ',len(neutral_list))
#Creating pie chart
labels = ['Positive ['+str(positive)+'%]' , 'Neutral ['+str(neutral)+'%]','Negative ['+str(negative)+'%]']
sizes = [float(positive), float(neutral), float(negative)]  # the values were formatted as strings above
colors = ['yellowgreen', 'blue','red']
patches, texts = plt.pie(sizes,colors=colors, startangle=90)
plt.style.use('default')
plt.legend(labels)
plt.title('Sentiment Analysis Result for keyword = ' + keyword)
plt.axis('equal')
plt.show()
tweet_list
```
## Visualizing and Comparing hours enrolled by UT students during the Fall 2020 semester
This project looks at how many hours UT students were enrolled in during the Fall 2020 semester, and whether there are any trends or differences based on gender.
### Libraries used for the visualization of the data
```
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import numpy as np
```
## Data from UT's statistical handbook for the Fall 2020 semester
###### Data about women's enrollment (Fall 2020):
Part-time (< 12 hours)|Full-time (≥ 12 hours)
:-----:|:-----:|
1,318|21,014|
5.9%|94.1%|
###### Data about men's enrollment (Fall 2020):
Part-time (< 12 hours)|Full-time (≥ 12 hours)
:-----:|:-----:|
1,326|16,390|
7.5%|92.5%|
## Visualization for this data
```
fig, (ax1,ax2) = plt.subplots(1,2,figsize=(10,10)) #ax1,ax2 refer to your two pies
# 1,2 denotes 1 row, 2 columns
fig.subplots_adjust(wspace=1)
labels = 'Full-time', 'Part-time'
sizes = [94.1, 5.9]
explode = (0.3, 0)
ax1.pie(sizes, labels=labels, autopct='%.1f%%', shadow=True, explode=explode) # plot first pie
ax1.set_title('Women enrolled in Full-time hours vs. Part-time hours')
ax1.axis('equal')
labels = 'Full-time', 'Part-time'
sizes = [92.5, 7.5]
explode = (0.3, 0)
ax2.pie(sizes, labels=labels, autopct='%.1f%%', shadow=True, explode=explode) # plot second pie
ax2.set_title('Men enrolled in Full-time hours vs. Part-time hours')
ax2.axis('equal')
```
As we can see, more women than men were enrolled Full-time (21,014 vs. 16,390), and both the number and the share of Part-time students were higher for men.
---
## Data collected from a form asking Full-time students (≥ 12 hours) how many hours they were enrolled in
I ran a form asking current seniors how many hours they were enrolled in during the Fall 2020 semester. I collected 20 responses and these were the results.
```
labels = '12 hours', '15 hours', '18 hours'
sizes = [30, 65, 5]
explode = (0, 0, 0.3)
plt.pie(sizes, labels=labels, autopct='%.1f%%', shadow=True, explode=explode)
plt.title('Enrolled hours by Full-time UT students during the Fall 2020 semester')
plt.axis('equal')
```
---
## Representing and visualizing this data
I decided to represent these values with binary numbers (0 and 1). 0 means "Yes" and 1 means "No". I represented these numbers on a Google Sheets spreadsheet and these were the results that I found.
```
plt.figure(figsize=(8,5))
x=['Part-time', '12 hours', '15 hours', '18 hours']
y= [7,28,60,5]
plt.bar(x,y, color="#bdeb13")
plt.xlabel("Hours enrolled by students (Fall 2020)")
plt.ylabel("Number of students out of 100")
plt.title('Number of hours enrolled by UT students (Fall 2020)')
plt.show()
```
## Analyzing number of enrollment hours through each Fall semester from 2011-2020
I collected this data from UT's statistical handbook ([UT Statistical Handbook](https://utexas.app.box.com/s/gflzag1a3f88jelotrdv9uendvmdf1kt)) and the data can be found on page 12 of this handbook. This data has been verified by the Admissions Office at the University of Texas, and the aim of this section is to see if there have been any trends in the number of enrolled students over the years. Furthermore, I will also compare enrollment over the years between genders.
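As a sketch of how this comparison could be visualized once the handbook numbers are transcribed (the values below are placeholders, not the actual handbook data):
```
years = list(range(2011, 2021))
# Placeholder values -- replace these with the actual totals from page 12 of the handbook.
women = [20000] * 10
men = [16000] * 10
plt.plot(years, women, marker='o', label='Women')
plt.plot(years, men, marker='o', label='Men')
plt.xlabel('Fall semester')
plt.ylabel('Enrolled students')
plt.title('UT enrollment by gender, Fall 2011-2020')
plt.legend()
plt.show()
```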
## Reproducibility of this project
In order to reproduce this project, you would have to use UT's statistical handbook. A good idea to further this research would be to observe if there are any trends in enrollment over the years and to compare the difference in enrollment between men and women.
# Final project: StackOverflow assistant bot
Congratulations on coming this far and solving the programming assignments! In this final project, we will combine everything we have learned about Natural Language Processing to construct a *dialogue chat bot*, which will be able to:
* answer programming-related questions (using StackOverflow dataset);
* chit-chat and simulate dialogue on all non programming-related questions.
For a chit-chat mode we will use a pre-trained neural network engine available from [ChatterBot](https://github.com/gunthercox/ChatterBot).
Those who aim at honor certificates for our course or are just curious, will train their own models for chit-chat.

©[xkcd](https://xkcd.com)
### Data description
To detect the *intent* of users' questions we will need two text collections:
- `tagged_posts.tsv` — StackOverflow posts, tagged with one programming language (*positive samples*).
- `dialogues.tsv` — dialogue phrases from movie subtitles (*negative samples*).
```
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("..")
from common.download_utils import download_project_resources
download_project_resources()
```
For those questions that have programming-related intent, we will proceed as follows: predict the programming language (only one tag per question is allowed here) and rank candidates within that tag using embeddings.
For the ranking part, you will need:
- `word_embeddings.tsv` — word embeddings, that you trained with StarSpace in the 3rd assignment. It's not a problem if you didn't do it, because we can offer an alternative solution for you.
As a result of this notebook, you should obtain the following new objects that you will then use in the running bot:
- `intent_recognizer.pkl` — intent recognition model;
- `tag_classifier.pkl` — programming language classification model;
- `tfidf_vectorizer.pkl` — vectorizer used during training;
- `thread_embeddings_by_tags` — folder with thread embeddings, arranged by tags.
Some functions will be reused by this notebook and the scripts, so we put them into *utils.py* file. Don't forget to open it and fill in the gaps!
```
from utils import *
```
## Part I. Intent and language recognition
We want to write a bot, which will not only **answer programming-related questions**, but also will be able to **maintain a dialogue**. We would also like to detect the *intent* of the user from the question (we could have had a 'Question answering mode' check-box in the bot, but it wouldn't be fun at all, would it?). So the first thing we need to do is to **distinguish programming-related questions from general ones**.
It would also be good to predict which programming language a particular question refers to. By doing so, we will speed up question search by a factor of the number of languages (10 here), and exercise our *text classification* skill a bit. :)
```
import numpy as np
import pandas as pd
import pickle
import re
from sklearn.feature_extraction.text import TfidfVectorizer
```
### Data preparation
In the first assignment (Predict tags on StackOverflow with linear models), you have already learnt how to preprocess texts and do TF-IDF transformations. Reuse your code here. In addition, you will also need to [dump](https://docs.python.org/3/library/pickle.html#pickle.dump) the TF-IDF vectorizer with pickle to use it later in the running bot.
```
def tfidf_features(X_train, X_test, vectorizer_path):
    """Performs TF-IDF transformation and dumps the model."""
    # Train a vectorizer on X_train data.
    # Transform X_train and X_test data.
    # Pickle the trained vectorizer to 'vectorizer_path'
    # Don't forget to open the file in writing bytes mode.
    ######################################
    tfidf_vectorizer = TfidfVectorizer(
        analyzer='word',
        token_pattern=r'(\S+)',
        min_df=5,
        max_df=0.9,
        ngram_range=(1, 2)
    )
    X_train = tfidf_vectorizer.fit_transform(X_train)
    X_test = tfidf_vectorizer.transform(X_test)
    pickle.dump(tfidf_vectorizer, open(vectorizer_path, "wb"))
    ######################################
    return X_train, X_test
```
Now, load examples of two classes. Use a subsample of stackoverflow data to balance the classes. You will need the full data later.
```
sample_size = 200000
dialogue_df = pd.read_csv('data/dialogues.tsv', sep='\t').sample(sample_size, random_state=0)
stackoverflow_df = pd.read_csv('data/tagged_posts.tsv', sep='\t').sample(sample_size, random_state=0)
```
Check how the data look like:
```
dialogue_df.head()
stackoverflow_df.head()
```
Apply *text_prepare* function to preprocess the data:
```
from utils import text_prepare
dialogue_df['text'] = list(map(text_prepare, dialogue_df.text.values))
stackoverflow_df['title'] = list(map(text_prepare, stackoverflow_df.title.values))
```
### Intent recognition
We will do a binary classification on TF-IDF representations of texts. Labels will be either `dialogue` for general questions or `stackoverflow` for programming-related questions. First, prepare the data for this task:
- concatenate `dialogue` and `stackoverflow` examples into one sample
- split it into train and test in proportion 9:1, use *random_state=0* for reproducibility
- transform it into TF-IDF features
```
from sklearn.model_selection import train_test_split
X = np.concatenate([dialogue_df['text'].values, stackoverflow_df['title'].values])
y = ['dialogue'] * dialogue_df.shape[0] + ['stackoverflow'] * stackoverflow_df.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.1, random_state = 0)
print('Train size = {}, test size = {}'.format(len(X_train), len(X_test)))
X_train_tfidf, X_test_tfidf = tfidf_features(X_train, X_test, RESOURCE_PATH['TFIDF_VECTORIZER'])
```
Train the **intent recognizer** using LogisticRegression on the train set with the following parameters: *penalty='l2'*, *C=10*, *random_state=0*. Print out the accuracy on the test set to check whether everything looks good.
```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
######################################
intent_recognizer = LogisticRegression(penalty='l2', C=10, random_state=0)
intent_recognizer.fit(X_train_tfidf, y_train)
######################################
# Check test accuracy.
y_test_pred = intent_recognizer.predict(X_test_tfidf)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('Test accuracy = {}'.format(test_accuracy))
```
Dump the classifier to use it in the running bot.
```
pickle.dump(intent_recognizer, open(RESOURCE_PATH['INTENT_RECOGNIZER'], 'wb'))
```
### Programming language classification
We will train one more classifier for the programming-related questions. It will predict exactly one tag (= programming language) and will also be based on Logistic Regression with TF-IDF features.
First, let us prepare the data for this task.
```
X = stackoverflow_df['title'].values
y = stackoverflow_df['tag'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
print('Train size = {}, test size = {}'.format(len(X_train), len(X_test)))
```
Let us reuse the TF-IDF vectorizer that we have already created above. It should not make a huge difference which data was used to train it.
```
vectorizer = pickle.load(open(RESOURCE_PATH['TFIDF_VECTORIZER'], 'rb'))
X_train_tfidf, X_test_tfidf = vectorizer.transform(X_train), vectorizer.transform(X_test)
```
Train the **tag classifier** using OneVsRestClassifier wrapper over LogisticRegression. Use the following parameters: *penalty='l2'*, *C=5*, *random_state=0*.
```
from sklearn.multiclass import OneVsRestClassifier
######################################
tag_classifier = OneVsRestClassifier(LogisticRegression(penalty='l2', C=5, random_state=0))
tag_classifier.fit(X_train_tfidf, y_train)
######################################
# Check test accuracy.
y_test_pred = tag_classifier.predict(X_test_tfidf)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('Test accuracy = {}'.format(test_accuracy))
```
Dump the classifier to use it in the running bot.
```
pickle.dump(tag_classifier, open(RESOURCE_PATH['TAG_CLASSIFIER'], 'wb'))
```
## Part II. Ranking questions with embeddings
To find a relevant answer (a thread from StackOverflow) on a question you will use vector representations to calculate similarity between the question and existing threads. We already had `question_to_vec` function from the assignment 3, which can create such a representation based on word vectors.
However, it would be costly to compute such a representation for all possible answers in the *online mode* of the bot (e.g. when the bot is running and answering questions from many users). This is the reason why you will create a *database* with pre-computed representations. These representations will be arranged by non-overlapping tags (programming languages), so that the search for an answer can be performed within one tag each time. This will make our bot even more efficient and avoid storing the whole database in RAM.
Load the StarSpace embeddings which were trained on Stack Overflow posts. These embeddings were trained in *supervised mode* for duplicates detection on the same corpus that is used in search. We can expect these representations to help us find closely related answers for a question.
If for some reason you didn't train StarSpace embeddings in assignment 3, you can use [pre-trained word vectors](https://code.google.com/archive/p/word2vec/) from Google. All instructions about how to work with these vectors were provided in the same assignment. However, we highly recommend the StarSpace embeddings, because they are more appropriate for this task. If you choose to use Google's embeddings, delete the words that do not appear in the StackOverflow data.
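For reference, here is a minimal sketch of what `question_to_vec` computes, assuming (as in assignment 3) that it averages the embeddings of the known words in a question; the actual implementation lives in *utils.py*:
```
import numpy as np

def question_to_vec_sketch(question, embeddings, dim):
    """Mean of the word vectors of the question's known words; zeros if none are known."""
    vecs = [embeddings[word] for word in question.split() if word in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
```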
```
starspace_embeddings, embeddings_dim = load_embeddings(RESOURCE_PATH['WORD_EMBEDDINGS'])
```
Since we want to precompute representations for all possible answers, we need to load the whole posts dataset, unlike what we did for the intent classifier:
```
posts_df = pd.read_csv('data/tagged_posts.tsv', sep='\t')
```
Look at the distribution of posts for programming languages (tags) and find the most common ones.
You might want to use pandas [groupby](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) and [count](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.count.html) methods:
```
posts_df.head()
# Because of diskspace issues on the 1GB EC2 instance
# we need to reduce the size of the pickle files
#import math
posts_df = posts_df.sample(200000)
counts_by_tag = posts_df.tag.value_counts()
```
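(`value_counts` is used above; with the hinted methods, an equivalent count per tag would be `posts_df.groupby('tag')['post_id'].count()`, while `value_counts` additionally sorts the counts in descending order.)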
Now for each `tag` you need to create two data structures, which will serve as online search index:
* `tag_post_ids` — a list of post_ids with shape `(counts_by_tag[tag],)`. It will be needed to show the title and link to the thread;
* `tag_vectors` — a matrix with shape `(counts_by_tag[tag], embeddings_dim)` where embeddings for each answer are stored.
Implement the code which will calculate the mentioned structures and dump them to files. It should take several minutes to compute.
```
import os
os.makedirs(RESOURCE_PATH['THREAD_EMBEDDINGS_FOLDER'], exist_ok=True)
for tag, count in counts_by_tag.items():
    print(tag, count)
    tag_posts = posts_df[posts_df['tag'] == tag]
    tag_post_ids = tag_posts.post_id.tolist()
    tag_vectors = np.zeros((count, embeddings_dim), dtype=np.float32)
    for i, title in enumerate(tag_posts['title']):
        tag_vectors[i, :] = question_to_vec(title, starspace_embeddings, embeddings_dim)
    # Dump post ids and vectors to a file.
    filename = os.path.join(RESOURCE_PATH['THREAD_EMBEDDINGS_FOLDER'], os.path.normpath('%s.pkl' % tag))
    pickle.dump((tag_post_ids, tag_vectors), open(filename, 'wb'))
```
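This notebook only builds the index; as a rough sketch of how the running bot could query one tag's file at answer time (the helper name `rank_candidates_sketch` is hypothetical, but the flow mirrors the description above):
```
from sklearn.metrics.pairwise import cosine_similarity

def rank_candidates_sketch(question, tag):
    """Return the post_id of the most similar thread within a single tag."""
    filename = os.path.join(RESOURCE_PATH['THREAD_EMBEDDINGS_FOLDER'], '%s.pkl' % tag)
    post_ids, vectors = pickle.load(open(filename, 'rb'))
    question_vec = question_to_vec(question, starspace_embeddings, embeddings_dim).reshape(1, -1)
    best = cosine_similarity(question_vec, vectors).argmax()
    return post_ids[best]
```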
Lambda School Data Science
*Unit 4, Sprint 3, Module 3*
---
# Autoencoders
> An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner.[1][2] The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name.
## Learning Objectives
*At the end of the lecture you should be able to*:
* <a href="#p1">Part 1</a>: Describe the components of an autoencoder
* <a href="#p2">Part 2</a>: Train an autoencoder
* <a href="#p3">Part 3</a>: Apply an autoencoder to a basic information retrieval problem
__Problem:__ Is it possible to automatically represent an image as a fixed-sized vector even if it isn’t labeled?
__Solution:__ Use an autoencoder
Why do we need to represent an image as a fixed-sized vector, you ask?
* __Information Retrieval__
- [Reverse Image Search](https://en.wikipedia.org/wiki/Reverse_image_search)
- [Recommendation Systems - Content Based Filtering](https://en.wikipedia.org/wiki/Recommender_system#Content-based_filtering)
* __Dimensionality Reduction__
- [Feature Extraction](https://www.kaggle.com/c/vsb-power-line-fault-detection/discussion/78285)
- [Manifold Learning](https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction)
We've already seen *representation learning* when we talked about word embedding models during our NLP week. Today we're going to achieve a similar goal on images using *autoencoders*. An autoencoder is a neural network that is trained to attempt to copy its input to its output. Usually they are restricted in ways that allow them to copy only approximately. The model often learns useful properties of the data, because it is forced to prioritize which aspects of the input should be copied. The properties of autoencoders have made them an important part of modern generative modeling approaches. Consider autoencoders a special case of feed-forward networks (the kind we've been studying); backpropagation and gradient descent still work.
# Autoencoder Architecture (Learn)
<a id="p1"></a>
## Overview
The *encoder* compresses the input data, and the *decoder* reverses the process to reconstruct the input as accurately as possible:
<img src='https://miro.medium.com/max/1400/1*44eDEuZBEsmG_TCAKRI3Kw@2x.png' width=800/>
The learning process is described simply as minimizing a loss function:
$ L(x, g(f(x))) $
- $L$ is a loss function penalizing $g(f(x))$ for being dissimilar from $x$ (such as mean squared error)
- $f$ is the encoder function
- $g$ is the decoder function
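For example, with mean squared error the objective becomes $L(x, g(f(x))) = \lVert x - g(f(x)) \rVert^2$, so training pushes the reconstruction $g(f(x))$ towards the input $x$.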
## Follow Along
### Extremely Simple Autoencoder
```
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
# import wandb
# from wandb.keras import WandbCallback
# this is the size of our encoded representations
encoding_dim = 32 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='sigmoid')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation = 'sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (32-dimensional) input
# retrieve the last layer of the autoencoder model
# create the decoder model
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from tensorflow.keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)
#wandb.init(project="mnist_autoencoder", entity="ds5")
autoencoder.fit(x_train, x_train,
epochs=10,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test),
verbose = True)
# can stop running/training of model at any point and training will be preserved
# encode and decode some digits
# note that we take them from the *test* set
# visualize the results
#encoded_images = encoder.predict(x_test)
decoded_imgs = autoencoder.predict(x_test)
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
# poor results; highly dependent on training time
```
## Challenge
You will be expected to talk about the components of an autoencoder and their purpose.
# Train an Autoencoder (Learn)
<a id="p2"></a>
## Overview
As long as our architecture maintains an hourglass shape, we can continue to add layers and create a deeper network.
## Follow Along
### Deep Autoencoder
```
input_img = Input(shape=(784,)) # first layer of the neural network
encoded = Dense(128, activation= 'relu')(input_img) # input_img - the data getting pushed to the next layer
encoded = Dense(64, activation= 'relu')(encoded)
encoded = Dense(32, activation= 'relu')(encoded) # bottleneck: the most compressed representation
decoded = Dense(64, activation= 'relu')(encoded)
decoded = Dense(128, activation= 'relu')(decoded)
decoded = Dense(784, activation= 'sigmoid')(decoded)
# compile & fit model
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train,
epochs=20,
batch_size=784,
shuffle=True,
validation_data=(x_test,x_test),
verbose= True)
decoded_imgs = autoencoder.predict(x_test)
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
```
### Convolutional autoencoder
> Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as encoders and decoders. In practical settings, autoencoders applied to images are always convolutional autoencoders --they simply perform much better.
> Let's implement one. The encoder will consist in a stack of Conv2D and MaxPooling2D layers (max pooling being used for spatial down-sampling), while the decoder will consist in a stack of Conv2D and UpSampling2D layers.
```
# Working with upsampling example
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
# Create Model
input_img = Input(shape=(28,28,1))
x = Conv2D(16,(3,3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2,2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional representation
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.summary()
from keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1)) # adapt this if using `channels_first` image data format
#wandb.init(project="mnist_autoencoder", entity="ds5")
autoencoder.fit(x_train, x_train,
epochs=10,
batch_size=784,
shuffle=True,
validation_data=(x_test, x_test),
verbose=True)
decoded_imgs = autoencoder.predict(x_test)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)  # subplot indices start at 1
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
```
#### Visualization of the Representations
```
encoder = Model(input_img, encoded)
encoded_imgs = encoder.predict(x_train)  # assign the codes so they can be plotted below
n = 10
plt.figure(figsize=(20, 8))
for i in range(n):
    ax = plt.subplot(1, n, i + 1)  # subplot indices start at 1
    plt.imshow(encoded_imgs[i].reshape(4, 4 * 8).T)
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
```
## Challenge
You will train an autoencoder at some point in the near future.
# Information Retrieval with Autoencoders (Learn)
<a id="p3"></a>
## Overview
A common use case for autoencoders is reverse image search. Let's try to draw an image and see what's most similar in our dataset.
To accomplish this we will need to slice our autoencoder in half to extract the reduced features. :)
## Follow Along
```
encoder = Model(input_img, encoded)
encoded_imgs = encoder.predict(x_train)
# flatten the (4, 4, 8) feature maps into 128-dimensional vectors for the index
encoded_flat = encoded_imgs.reshape((len(encoded_imgs), -1))
from sklearn.neighbors import NearestNeighbors
nn = NearestNeighbors(n_neighbors=10, algorithm='ball_tree')
nn.fit(encoded_flat)
nn.kneighbors(encoded_flat[:1])  # e.g. the 10 nearest neighbours of the first training image
```
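To search with a new image (for example one you drew), encode it first and then query its neighbours; a minimal sketch, where `query_img` is a hypothetical (28, 28) array with values scaled to [0, 1]:
```
# 'query_img' is assumed to be a (28, 28) array scaled to [0, 1]
query_code = encoder.predict(query_img.reshape(1, 28, 28, 1))
distances, indices = nn.kneighbors(query_code.reshape(1, -1))
print(indices)  # row indices of the most similar training images
```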
## Challenge
You should already be familiar with KNN and similarity queries, so the key component of this section is knowing what to 'slice' from your autoencoder (the encoder) to extract features from your data.
# Review
* <a href="#p1">Part 1</a>: Describe the components of an autoencoder
    - Encoder
    - Decoder
* <a href="#p2">Part 2</a>: Train an autoencoder
    - Can do in Keras easily
    - Can use a variety of architectures
    - Architectures must follow hourglass shape
* <a href="#p3">Part 3</a>: Apply an autoencoder to a basic information retrieval problem
    - Extract just the encoder to use for various tasks
    - AEs are good for dimensionality reduction, reverse image search, and many more things.
# Sources
__References__
- [Building Autoencoders in Keras](https://blog.keras.io/building-autoencoders-in-keras.html)
- [Deep Learning Cookbook](http://shop.oreilly.com/product/0636920097471.do)
__Additional Material__
# Analyzing colon tumor gene expression data
Data source:
- https://dx.doi.org/10.1038%2Fsdata.2018.61
- https://www.ncbi.nlm.nih.gov/gds?term=GSE8671
- https://www.ncbi.nlm.nih.gov/gds?term=GSE20916
### 1. Initialize the environment and variables
Upon launching this page, run the below code to initialize the analysis environment by selecting the cell and pressing `Shift + Enter`
```
#Set path to this directory for accessing and saving files
import os
import warnings
warnings.filterwarnings('ignore')
__path__ = os.getcwd() + os.path.sep
print('Current path: ' + __path__)
from local_utils import init_tcga, init_GSE8671, init_GSE20916, sort_data
from local_utils import eval_gene, make_heatmap
%matplotlib inline
# Read data
print("Loading data. Please wait...")
tcga_scaled, tcga_data, tcga_info, tcga_palette = init_tcga()
GSE8671_scaled, GSE8671_data, GSE8671_info, GSE8671_palette = init_GSE8671()
GSE20916_scaled, GSE20916_data, GSE20916_info, GSE20916_palette = init_GSE20916()
print("Data import complete. Continue below...")
```
### 2a. Explore a gene of interest in the Unified TCGA data or GSE8671 and GSE20916
- In the first line, edit the gene name (human) within the quotes
- Press `Shift + Enter`
```
gene = "FABP1" # <-- edit between the quotation marks here
# Do not edit below this line
# ------------------------------------------------------------------------
print("Running analysis. Please wait...\n\n")
eval_gene(gene, tcga_data, tcga_info, tcga_palette, 'TCGA (unified)')
eval_gene(gene, GSE8671_data, GSE8671_info, GSE8671_palette, 'GSE8671')
eval_gene(gene, GSE20916_data, GSE20916_info, GSE20916_palette, 'GSE20916')
```
### 2b. Explore a set of genes in the Unified TCGA data or GSE8671 and GSE20916
- Between the brackets, edit the gene names (human) within the quotes
- If you want to have less than the provided number of genes, remove the necessary number of lines
- If you want to have more than the provided number of genes, add lines with the gene name in quotes, followed by a comma outside of the quotes
- Press `Shift + Enter`
```
gene_list = [
"FABP1", # <-- edit between the quote marks here
"ME1",
"ME2",
"PC", # <-- add more genes by adding a line, the gene name between quotes, and a comma after that quote
]
# Do not edit below this line
# ------------------------------------------------------------------------
print("Running analysis. Please wait...\n\n")
make_heatmap(gene_list, tcga_scaled, tcga_info, tcga_palette, 'TCGA (unified)')
make_heatmap(gene_list, GSE8671_scaled, GSE8671_info, GSE8671_palette, 'GSE8671')
make_heatmap(gene_list, sort_data(GSE20916_scaled, GSE20916_info, ['adenoma', 'adenocarcinoma','normal_colon']), GSE20916_info, GSE20916_palette, 'GSE20916')
```
```
%load_ext autoreload
%autoreload 2
import numpy as np
import nibabel as nb
import scipy as sp
import matplotlib.pyplot as pl
import os
opj = os.path.join
%matplotlib notebook
pl.ion()
import sys
sys.path.append("..")
from prfpy.stimulus import PRFStimulus2D
from prfpy.grid import Iso2DGaussianGridder, Norm_Iso2DGaussianGridder, DoG_Iso2DGaussianGridder
from prfpy.fit import Iso2DGaussianFitter, Norm_Iso2DGaussianFitter, DoG_Iso2DGaussianFitter
from prfpy.timecourse import sgfilter_predictions
from utils.utils import *
from prfpy.fit import iterative_search
#some params needed (stimulus size, initial volumes to discard, savgol window length)
n_pix=40
discard_volumes = 5
window_length=121
#create stimulus
dm_1R = create_dm_from_screenshots(screenshot_path='/Users/marcoaqil/PRFMapping/PRFMapping-Raw/sub-001/ses-1/rawdata/sub-001_ses-1_Logs/sub-001_ses-1_task-1R_run-1_Logs/Screenshots', n_pix=n_pix)
dm_1S = create_dm_from_screenshots(screenshot_path='/Users/marcoaqil/PRFMapping/PRFMapping-Raw/sub-001/ses-1/rawdata/sub-001_ses-1_Logs/sub-001_ses-1_task-1S_run-1_Logs/Screenshots', n_pix=n_pix)
dm_2R = create_dm_from_screenshots(screenshot_path='/Users/marcoaqil/PRFMapping/PRFMapping-Raw/sub-001/ses-1/rawdata/sub-001_ses-1_Logs/sub-001_ses-1_task-2R_run-1_Logs/Screenshots', n_pix=n_pix)
dm_4F = create_dm_from_screenshots(screenshot_path='/Users/marcoaqil/PRFMapping/PRFMapping-Raw/sub-001/ses-1/rawdata/sub-001_ses-1_Logs/sub-001_ses-1_task-4F_run-1_Logs/Screenshots', n_pix=n_pix)
dm_4R = create_dm_from_screenshots(screenshot_path='/Users/marcoaqil/PRFMapping/PRFMapping-Raw/sub-001/ses-1/rawdata/sub-001_ses-1_Logs/sub-001_ses-1_task-4R_run-1_Logs/Screenshots', n_pix=n_pix)
task_lengths=[dm_1R.shape[2]-discard_volumes,
dm_1S.shape[2]-discard_volumes,
dm_2R.shape[2]-discard_volumes,
dm_4F.shape[2]-discard_volumes,
dm_4R.shape[2]-discard_volumes]
dm_full = np.concatenate((dm_1R[:,:,discard_volumes:],
dm_1S[:,:,discard_volumes:],
dm_2R[:,:,discard_volumes:],
dm_4F[:,:,discard_volumes:],
dm_4R[:,:,discard_volumes:]), axis=-1)
prf_stim = PRFStimulus2D(screen_size_cm=70,
screen_distance_cm=210,
design_matrix=dm_full,
TR=1.5)
#calculating the average BOLD baseline so that it is the same throughout the timecourse (BOLD has arbitrary units)
iso_periods = np.where(np.sum(dm_full, axis=(0,1))==0)[0]
shifted_dm = np.zeros_like(dm_full)
#number of TRs in which activity may linger (hrf)
shifted_dm[:,:,7:] = dm_full[:,:,:-7]
late_iso_periods = np.where((np.sum(dm_full, axis=(0,1))==0) & (np.sum(shifted_dm, axis=(0,1))==0))[0]
late_iso_dict={}
late_iso_dict['1R'] = np.split(late_iso_periods,5)[0]
late_iso_dict['1S'] = np.split(late_iso_periods,5)[1]
late_iso_dict['2R'] = np.split(late_iso_periods,5)[2]
late_iso_dict['4F'] = np.split(late_iso_periods,5)[3]
late_iso_dict['4R'] = np.split(late_iso_periods,5)[4]
################preparing the data (SURFACE FITTING)
data_path = '/Users/marcoaqil/PRFMapping/PRFMapping-Deriv-noflairtse-manual-hires'
subj = 'sub-001'
tc_dict = {}
tc_full_iso_nonzerovar_dict = {}
for hemi in ['L', 'R']:
tc_dict[hemi] = {}
for task_name in ['1R', '1S', '2R', '4F', '4R']:
data_ses1 = nb.load(opj(data_path, 'fmriprep/'+subj+'/ses-1/func/'+subj+'_ses-1_task-'+task_name+'_run-1_space-fsaverage_hemi-'+hemi+'.func.gii'))
data_ses2 = nb.load(opj(data_path, 'fmriprep/'+subj+'/ses-2/func/'+subj+'_ses-2_task-'+task_name+'_run-1_space-fsaverage_hemi-'+hemi+'.func.gii'))
tc_ses_1 = sgfilter_predictions(np.array([arr.data for arr in data_ses1.darrays]).T[...,discard_volumes:],
window_length=window_length)
tc_ses_2 = sgfilter_predictions(np.array([arr.data for arr in data_ses2.darrays]).T[...,discard_volumes:],
window_length=window_length)
tc_dict[hemi][task_name] = (tc_ses_1+tc_ses_2)/2.0
print('Finished filtering hemi '+hemi)
#when scanning sub-001 I mistakenly set the length of the 4F scan to 147, while it should have been 145
#therefore, there are two extra images at the end to discard in that time series.
#from sub-002 onwards, this was corrected.
if subj == 'sub-001':
tc_dict[hemi]['4F'] = tc_dict[hemi]['4F'][...,:-2]
tc_full=np.concatenate((tc_dict[hemi]['1R'],
tc_dict[hemi]['1S'],
tc_dict[hemi]['2R'],
tc_dict[hemi]['4F'],
tc_dict[hemi]['4R']), axis=-1)
#shift timeseries so they have the same average value in proper baseline periods across conditions
iso_full = np.mean(tc_full[...,late_iso_periods], axis=-1)
iso_1R_diff = iso_full - np.mean(tc_full[...,late_iso_dict['1R']], axis=-1)
iso_1S_diff = iso_full - np.mean(tc_full[...,late_iso_dict['1S']], axis=-1)
iso_2R_diff = iso_full - np.mean(tc_full[...,late_iso_dict['2R']], axis=-1)
iso_4F_diff = iso_full - np.mean(tc_full[...,late_iso_dict['4F']], axis=-1)
iso_4R_diff = iso_full - np.mean(tc_full[...,late_iso_dict['4R']], axis=-1)
tc_full_iso=np.concatenate((tc_dict[hemi]['1R'] + iso_1R_diff[...,np.newaxis],
tc_dict[hemi]['1S'] + iso_1S_diff[...,np.newaxis],
tc_dict[hemi]['2R'] + iso_2R_diff[...,np.newaxis],
tc_dict[hemi]['4F'] + iso_4F_diff[...,np.newaxis],
tc_dict[hemi]['4R'] + iso_4R_diff[...,np.newaxis]), axis=-1)
tc_full_iso_nonzerovar_dict['indices_'+hemi] = np.where(np.var(tc_full_iso, axis=-1)>0)
tc_full_iso_nonzerovar_dict['tc_'+hemi] = tc_full_iso[np.where(np.var(tc_full_iso, axis=-1)>0)]
#############preparing the data (VOLUME FITTING)
############VOLUME MASK
data_path = '/Users/marcoaqil/PRFMapping/PRFMapping-Deriv-noflairtse-manual-hires'
subj = 'sub-001'
#create a single brain mask in epi space
mask_dict = {}
for task_name in ['1R', '1S', '2R', '4F', '4R']:
mask_ses_1 = nb.load(opj(data_path,'fmriprep/'+subj+'/ses-1/func/'+subj+'_ses-1_task-'+task_name+'_run-1_space-T1w_desc-brain_mask.nii.gz')).get_data().astype(bool)
mask_ses_2 = nb.load(opj(data_path, 'fmriprep/'+subj+'/ses-2/func/'+subj+'_ses-2_task-'+task_name+'_run-1_space-T1w_desc-brain_mask.nii.gz')).get_data().astype(bool)
mask_dict[task_name] = mask_ses_1 & mask_ses_2
final_mask = mask_dict['1R'] & mask_dict['1S'] & mask_dict['2R'] & mask_dict['4F'] & mask_dict['4R']
#############preparing the data (VOLUME FITTING)
tc_dict = {}
for task_name in ['1R', '1S', '2R', '4F', '4R']:
timecoursefile_ses_1 = nb.load(opj(data_path, 'fmriprep/'+subj+'/ses-1/func/'+subj+'_ses-1_task-'+task_name+'_run-1_space-T1w_desc-preproc_bold.nii.gz'))
timecoursefile_ses_2 = nb.load(opj(data_path, 'fmriprep/'+subj+'/ses-2/func/'+subj+'_ses-2_task-'+task_name+'_run-1_space-T1w_desc-preproc_bold.nii.gz'))
tc_ses_1 = sgfilter_predictions(timecoursefile_ses_1.get_data()[...,discard_volumes:],
window_length=window_length)
tc_ses_2 = sgfilter_predictions(timecoursefile_ses_2.get_data()[...,discard_volumes:],
window_length=window_length)
tc_dict[task_name] = (tc_ses_1+tc_ses_2)/2.0
#when scanning sub-001 I mistakenly set the length of the 4F-task scan to 147, while it should have been 145
#therefore, there are two extra images at the end to discard in that time series.
#from sub-002 onwards, this was corrected.
if subj == 'sub-001':
tc_dict['4F'] = tc_dict['4F'][...,:-2]
timecourse_full=np.concatenate((tc_dict['1R'],
tc_dict['1S'],
tc_dict['2R'],
tc_dict['4F'],
tc_dict['4R']), axis=-1)
#shift timeseries so they have the same average value in baseline periods across conditions
iso_full = np.mean(timecourse_full[...,late_iso_periods], axis=-1)
iso_1R_diff = iso_full - np.mean(timecourse_full[...,late_iso_dict['1R']], axis=-1)
iso_1S_diff = iso_full - np.mean(timecourse_full[...,late_iso_dict['1S']], axis=-1)
iso_2R_diff = iso_full - np.mean(timecourse_full[...,late_iso_dict['2R']], axis=-1)
iso_4F_diff = iso_full - np.mean(timecourse_full[...,late_iso_dict['4F']], axis=-1)
iso_4R_diff = iso_full - np.mean(timecourse_full[...,late_iso_dict['4R']], axis=-1)
timecourse_full_iso=np.concatenate((tc_dict['1R'] + iso_1R_diff[...,np.newaxis],
tc_dict['1S'] + iso_1S_diff[...,np.newaxis],
tc_dict['2R'] + iso_2R_diff[...,np.newaxis],
tc_dict['4F'] + iso_4F_diff[...,np.newaxis],
tc_dict['4R'] + iso_4R_diff[...,np.newaxis]), axis=-1)
#brain mask
timecourse_brain = timecourse_full_iso[final_mask]
#exclude timecourses with zero variance
timecourse_brain_nonzerovar = timecourse_brain[np.where(np.var(timecourse_brain, axis=-1)>0)]
#np.save('/Users/marcoaqil/PRFMapping/timecourse_brain_nonzerovar_sub-001.npy', timecourse_brain_nonzerovar)
timecourse_brain_nonzerovar = np.load('/Users/marcoaqil/PRFMapping/timecourse_brain_nonzerovar_sub-001.npy')
#create gaussian grid
grid_nr = 20
max_ecc_size = 16
sizes, eccs, polars = max_ecc_size * np.linspace(0.25,1,grid_nr)**2, \
max_ecc_size * np.linspace(0.1,1,grid_nr)**2, \
np.linspace(0, 2*np.pi, grid_nr)
gg = Iso2DGaussianGridder(stimulus=prf_stim,
filter_predictions=True,
window_length=window_length,
task_lengths=task_lengths)
%%time
gf = Iso2DGaussianFitter(data=timecourse_brain_nonzerovar, gridder=gg, n_jobs=10)
gf.grid_fit(ecc_grid=eccs,
polar_grid=polars,
size_grid=sizes)
np.save('/Users/marcoaqil/PRFMapping/gauss_grid_sub-001.npy', gf.gridsearch_params)
%%time
#refine Gaussian fits
gf.iterative_fit(rsq_threshold=0.1, verbose=True)
np.save('/Users/marcoaqil/PRFMapping/gauss_iter_sub-001.npy', gf.iterative_search_params)
gridsearch_params =np.load('/Users/marcoaqil/PRFMapping/gauss_grid_sub-001.npy')
%%time
#now refit normalization model, starting from results of iterated Gaussian fitting
gg_norm = Norm_Iso2DGaussianGridder(stimulus=prf_stim,
hrf=[1,1,0],
filter_predictions=True,
window_length=window_length,
task_lengths=task_lengths)
inf=np.inf
eps=1e-4 #to avoid dividing by zero
gf_norm = Norm_Iso2DGaussianFitter(data=timecourse_brain_nonzerovar,
gridder=gg_norm,
n_jobs=10,
bounds=[(-10*n_pix,10*n_pix), #x
(-10*n_pix,10*n_pix), #y
(eps,10*n_pix), #prf size
(-inf,+inf), #prf amplitude
(0,+inf), #bold baseline
(0,+inf), #neural baseline
(0,+inf), #surround amplitude
(eps,10*n_pix), #surround size
(eps,+inf)], #surround baseline
gradient_method='numerical')
#have to add a column since in current code syntax
#gridsearch_params always contains the CSS exponent parameter, even if it is not fit.
#whereas iterative_search_params does not contain it if it is not fit)
#starting_params = np.insert(gf.iterative_search_params, -1, 1.0, axis=-1)
#starting_params = np.insert(current_result_numerical, -1, 1.0, axis=-1)#gridsearch_params
starting_params = gridsearch_params
gf_norm.iterative_fit(rsq_threshold=0.0, gridsearch_params=starting_params, verbose=True)
current_result_numerical=np.copy(gf_norm.iterative_search_params)
#analytic LBFGSB in 1.1minutes, with tol 1e-80, maxls=300 (best rsq=0.758464 with spm hrf derivative)
print(gridsearch_params[np.where(gridsearch_params[:,-1]>0.1),-1].mean())
print(current_result[np.where(gridsearch_params[:,-1]>0.1),-1].mean())
#numerical L BFGS B in 3.7min, tol 1e-30 maxls=200 (best rsq 0.7801825 with spm hrf derivative)
print(gridsearch_params[np.where(gridsearch_params[:,-1]>0.66),-1].mean())
print(current_result_numerical[np.where(gridsearch_params[:,-1]>0.66),-1].mean())
#trust constr standard settings, in 7.4minutes (rsq 0.72019)
print(gridsearch_params[np.where(gridsearch_params[:,-1]>0.66),-1].mean())
print(current_result_numerical[np.where(gridsearch_params[:,-1]>0.66),-1].mean())
print(current_result[np.where(gridsearch_params[:,-1]>0.66),:] - current_result_numerical[np.where(gridsearch_params[:,-1]>0.66),:])
#current_result[np.where(gridsearch_params[:,-1]>0.66),:][0,0,:-1]
vox=6
fig=pl.figure()
pl.plot(timecourse_brain_nonzerovar[np.where(gridsearch_params[:,-1]>0.66),:][0,vox])
#pl.plot(gg_norm.return_single_prediction(*list(current_result[np.where(gridsearch_params[:,-1]>0.66),:][0,vox,:-1])))
pl.plot(gg_norm.return_single_prediction(*list(current_result_numerical[np.where(gridsearch_params[:,-1]>0.66),:][0,vox,:-1])))
from nistats.hemodynamic_models import spm_hrf, spm_time_derivative, spm_dispersion_derivative
fig=pl.figure()
pl.plot(spm_hrf(tr=1.5, oversampling=1, time_length=40)+
5*spm_time_derivative(tr=1.5, oversampling=1, time_length=40))
%%time
#now refit dog model, starting from results of iterated Gaussian fitting
gg_dog = DoG_Iso2DGaussianGridder(stimulus=prf_stim,
filter_predictions=True,
window_length=window_length,
task_lengths=task_lengths)
inf=np.inf
eps=1e-6 #to avoid dividing by zero
gf_dog = DoG_Iso2DGaussianFitter(data=timecourse_brain_nonzerovar,
gridder=gg_dog,
n_jobs=10,
bounds=[(-10*n_pix,10*n_pix), #x
(-10*n_pix,10*n_pix), #y
(eps,10*n_pix), #prf size
(0,+inf), #prf amplitude
(0,+inf), #bold baseline
(0,+inf), #surround amplitude
(eps,10*n_pix)]) #surround size
starting_params = gf.gridsearch_params
gf_dog.iterative_fit(rsq_threshold=0.0, gridsearch_params=starting_params, verbose=True)
#compare rsq between models (ideally should be crossvalidated AIC or BIC)
rsq_mask=np.where(gf_norm.iterative_search_params[:,-1]>0.1)
print(np.mean(gf.gridsearch_params[gf.rsq_mask,-1]))
print(np.mean(gf.iterative_search_params[gf.rsq_mask,-1]))
print(np.mean(gf_norm.iterative_search_params[gf.rsq_mask,-1]))
np.save('/Users/marcoaqil/PRFMapping/norm_bounded_iterparams_sub-001.npy', current_result)
#convert to ecc/polar and save results for plotting
ecc = np.sqrt(gf_norm.iterative_search_params[:,1]**2 + gf_norm.iterative_search_params[:,0]**2)
polar = np.arctan2(gf_norm.iterative_search_params[:,1], gf_norm.iterative_search_params[:,0])
polar[gf.rsq_mask]+=np.pi
attempt = np.zeros((final_mask.shape[0],final_mask.shape[1],final_mask.shape[2],10))
ha = attempt.reshape((-1,10))
combined_mask = np.ravel(np.var(timecourse_full_iso, axis=-1)>0) & np.ravel(final_mask)
ha[combined_mask,2:]=gf_norm.iterative_search_params[:,2:]
ha[combined_mask,0] = ecc
ha[combined_mask,1] = polar
haha = ha.reshape((final_mask.shape[0],final_mask.shape[1],final_mask.shape[2],10))
for i in range(0,10):
nb.Nifti1Image(haha[:,:,:,i], timecoursefile_ses_1.affine).to_filename('norm_bounded{}.nii.gz'.format(i))
###old plotting cells
#print(timecourse_brain_nonzerovar.shape)
#a nice voxel for testing should be 185537
fig2=pl.figure()
pl.plot(timecourse_brain_nonzerovar[185537,:])
#pl.plot(sgfilter_predictions(timecourse_brain_nonzerovar[185537,:],
# window_length=121))
#pl.plot(np.load('/Users/marcoaqil/PRFMapping/timecourse_brain_nonzerovar_sub-001.npy')[185537,:])
#print(np.min(gf.gridsearch_params[gf.rsq_mask,-1]))
#print(np.min(gf.iterative_search_params[gf.rsq_mask,-1]))
print(np.argmax(gf.gridsearch_params[np.where((gf.gridsearch_params[gf.rsq_mask,-1]<gf.iterative_search_params[gf.rsq_mask,-1])),-1]))
#combined_params = np.copy(gf.iterative_search_params)
#combined_params[np.where(gf.gridsearch_params[gf.rsq_mask,-1]>gf.iterative_search_params[gf.rsq_mask,-1])] =
fig=pl.figure()
voxel_nr = 1716
print(gf.gridsearch_params[voxel_nr,-1])
print(gf.iterative_search_params[voxel_nr,-1])
#print(gf_norm.gridsearch_params[voxel_nr,-1])
pl.plot(np.load('/Users/marcoaqil/PRFMapping/timecourse_brain_nonzerovar_sub-001.npy')[voxel_nr,:])
pl.plot(gg.return_single_prediction(*list(gf.gridsearch_params[voxel_nr,:-1])))
pl.plot(gg.return_single_prediction(*list(gf.iterative_search_params[voxel_nr,:-1])))
#pl.plot(gg_norm.return_single_prediction(*list(gf_norm.iterative_search_params[voxel_nr,:-1])))
fig = pl.figure()
gg_norm.add_mean=True
pl.plot(gg_norm.return_single_prediction(*list(gf_norm.iterative_search_params[185537,:-1])))
pl.plot(sgfilter_predictions(timecourse_brain_nonzerovar[185537,:],
window_length=121,add_mean=False)
)
#print(np.argmax(gf.iterative_search_params[:,-1]))
```
# Graphical Solutions
## Introduction to Linear Programming
```
#Import some required packages.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
The graphical solution method is limited to linear programming models with only two decision variables (it can be used with three variables, but only with great difficulty).
Graphical methods provide a picture of how a solution for a linear programming problem is obtained.
## Product mix problem - Beaver Creek Pottery Company
How many bowls and mugs should be produced to maximize profits given labor and materials constraints?
Product resource requirements and unit profit (reconstructed from the constraints below):

| Product | Labor (hr/unit) | Clay (lb/unit) | Profit ($/unit) |
| --- | --- | --- | --- |
| Bowl ($x_{1}$) | 1 | 4 | 40 |
| Mug ($x_{2}$) | 2 | 3 | 50 |

Available resources: 40 hours of labor and 120 lbs of clay per day.
Decision Variables:
$x_{1}$ = number of bowls to produce per day
$x_{2}$ = number of mugs to produce per day
Profit (Z) Maximization
Z = 40$x_{1}$ + 50$x_{2}$
Labor Constraint Check
1$x_{1}$ + 2$x_{2}$ <= 40
Clay (Physical Resource) Constraint Check
4$x_{1}$ + 3$x_{2}$ <= 120
Non-Negativity Constraint Check
$x_{1}$ >= 0
$x_{2}$ >= 0
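As a cross-check on the graphical method below, the same model can also be handed to a solver. The following is a hedged sketch assuming SciPy is available (`linprog` minimizes, so the profit coefficients are negated):

```
# Solve the Beaver Creek model with SciPy (assumes scipy is installed).
from scipy.optimize import linprog

c = [-40, -50]            # maximize Z = 40*x1 + 50*x2 by minimizing -Z
A_ub = [[1, 2],           # labor: 1*x1 + 2*x2 <= 40
        [4, 3]]           # clay:  4*x1 + 3*x2 <= 120
b_ub = [40, 120]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # expected: x1 = 24, x2 = 8, Z = 1360
```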
```
#Create an Array X2 from 0 to 60, and it should have a length of 61.
x2 = np.linspace(0, 60, 61)
#This is the same as starting your Excel Spreadsheet with incrementing X2
x2
#Labor Constraint Check
# 1x1 + 2x2 <= 40
#x1 = 40 - 2*x2
c1 = 40 - 2*x2
c1
#Clay (Physical Resource) Constraint Check
#4x1 + 3x2 <= 120
#x1 = (120 - 3*x2)/4
c2 = (120 - 3*x2)/4
c2
#For each x2, the feasible x1 is bounded by the tighter (minimum) of the two constraints.
ct = np.minimum(c1,c2)
ct
#remove those values that don't satisfy the non-negativity constraint.
ct= ct[0:21]
x2= x2[0:21] #Shape of array must be the same.
ct
#Calculate the profit along the constrained boundary
profit = 40*ct+50*x2
profit
# Make plot for the labor constraint
plt.plot(c1, x2, label=r'1$x_{1}$ + 2$x_{2}$ <= 40')
plt.xlim((0, 60))
plt.ylim((0, 60))
plt.xlabel(r'$x_{1}$') #Latex way of writing X subscript 1 (See Markdown)
plt.ylabel(r'$x_{2}$') #Latex way of writing X subscript 2 (See Markdown)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.fill_between(c1, x2, color='grey', alpha=0.5)
#Graph Resource Constraint
plt.plot(c2, x2, label=r'4$x_{1}$ + 3$x_{2}$ <= 120')
plt.xlim((0, 60))
plt.ylim((0, 60))
plt.xlabel(r'$x_{1}$') #Latex way of writing X subscript 1 (See Markdown)
plt.ylabel(r'$x_{2}$') #Latex way of writing X subscript 2 (See Markdown)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.fill_between(c2, x2, color='grey', alpha=0.5)
# Make plot for the combined constraints.
plt.plot(c1, x2, label=r'1$x_{1}$ + 2$x_{2}$ <= 40')
plt.plot(c2, x2, label=r'4$x_{1}$ + 3$x_{2}$ <= 120')
#plt.plot(ct, x2, label=r'min(x$x_{1}$)')
plt.xlim((0, 60))
plt.ylim((0, 60))
plt.xlabel(r'$x_{1}$') #Latex way of writing X subscript 1 (See Markdown)
plt.ylabel(r'$x_{2}$') #Latex way of writing X subscript 2 (See Markdown)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.fill_between(ct, x2, color='grey', alpha=0.5)
```
Our solution lies somewhere in the grey feasible region in the graph above. However, according to the fundamental theorem of linear programming, we know it is at a vertex.
"In mathematical optimization, the fundamental theorem of linear programming states, in a weak formulation, that the maxima and minima of a linear function over a convex polygonal region occur at the region's corners. Further, if an extreme value occurs at two corners, then it must also occur everywhere on the line segment between them."
- [Wikipedia](https://en.wikipedia.org/wiki/Fundamental_theorem_of_linear_programming)
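Exploiting the theorem directly, we can evaluate Z at just the corner points of the feasible region (a small sketch; the interior corner is the intersection of the two constraint boundaries):

```
# Evaluate Z = 40*x1 + 50*x2 at the corners of the feasible region.
corners = [(0, 0), (30, 0), (0, 20)]          # origin and axis intercepts
A = np.array([[1, 2], [4, 3]])                # constraint boundaries as equalities
b = np.array([40, 120])
corners.append(tuple(np.linalg.solve(A, b)))  # intersection of the two lines: (24, 8)
for x1, x2 in corners:
    print((x1, x2), 40*x1 + 50*x2)
```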
```
#This returns the index position of the maximum value
max_value = np.argmax(profit)
max_value
#Calculate The max Profit that is made.
profit_answer=profit[max_value]
profit_answer
# Verify the optimal x2 is an integer (whole mugs)
x2_answer = x2[max_value]
x2_answer
# Verify the optimal x1 (ct) is an integer (whole bowls)
ct_answer = ct[max_value]
ct_answer
```
## Q1 Challenge
What if the profit function is:
Z = 70$x_{1}$ + 20$x_{2}$
Find the optimal solution using Python. Assign the answers to:
q1_profit_answer
q1_x1_answer
q1_x2_answer
```
#################
# Preprocessing #
#################
# Scores by other composers from the Bach family have been removed beforehand.
# Miscellaneous scores like mass pieces have also been removed; the assumption here is that
# since different interpretations of the same piece (e.g. Ave Maria, etc) exist, including
# these pieces might hurt the prediction accuracy, which here is mostly based on chord progression
# (more exactly, on a reduced version of the chord progression).
# In shell, find and copy midi files to target data directory and convert to mxl:
'''
cd {TARGETDIR}
find {MIDIFILEDIR} \( -name "bach*.mid" -o -name "beethoven*.mid" -o -name "scarlatti*.mid" \) -type f -exec cp {} . \;
find . -type f -name "*.mid" -exec /Applications/MuseScore\ 2.app/Contents/MacOS/mscore {} --export-to {}.mxl \;
for f in *.mxl; do mv "$f" "${f%.mid.mxl}.mxl"; done
ls *.mxl > mxl_list.txt
'''
from music21 import *
from os import listdir
from os.path import isfile, getsize
# timeout function that lets us move on past files that are too big.
# by Thomas Ahle: http://stackoverflow.com/a/22348885
import signal
class timeout:
def __init__(self, seconds=1, error_message='Timeout'):
self.seconds = seconds
self.error_message = error_message
def handle_timeout(self, signum, frame):
raise TimeoutError(self.error_message)
def __enter__(self):
signal.signal(signal.SIGALRM, self.handle_timeout)
signal.alarm(self.seconds)
def __exit__(self, type, value, traceback):
signal.alarm(0)
def parse(mxllist, composer):
composer_list = [f for f in mxllist if f.replace('-', '_').split('_')[0] == composer]
for file in composer_list:
if (getsize(file)>10000): # remove too short scores that may contain no notes
with timeout(seconds=6000):
try:
s = converter.parse(mxldir+file)
try:
k = s.flat.keySignature.sharps
except AttributeError:
k = s.analyze('key').sharps
except:
with open('{}-parsed.txt'.format(composer), 'a') as output_file:
output_file.write('key could not by analyzed\n')
with open('{}-transposed.txt'.format(composer), 'a') as output_file:
output_file.write('key could not by analyzed\n')
continue
t = s.transpose((k*5)%12)
except:
with open('{}-parsed.txt'.format(composer), 'a') as output_file:
output_file.write('timeout\n')
with open('{}-transposed.txt'.format(composer), 'a') as output_file:
output_file.write('timeout\n')
continue
fp_s = converter.freeze(s, fmt='pickle')
fp_t = converter.freeze(t, fmt='pickle')
with open('{}-parsed.txt'.format(composer), 'a') as output_file:
output_file.write(fp_s+'\n')
with open('{}-transposed.txt'.format(composer), 'a') as output_file:
output_file.write(fp_t+'\n')
with open('mxl_list.txt', 'r') as f:
mxllist = [line.strip() for line in f.readlines()]
parse(mxllist, 'bach')
parse(mxllist, 'beethoven')
parse(mxllist, 'debussy')
parse(mxllist, 'scarlatti')
parse(mxllist, 'victoria')
######################
# Feature Extraction #
######################
import itertools
from collections import Counter
flatten = lambda l: [item for sublist in l for item in sublist] # by Alex Martinelli & Guillaume Jacquenot: http://stackoverflow.com/a/952952
uniqify = lambda seq: list(set(seq))
# Define known chords (pitch-class intervals from the root)
major, minor, suspended, augmented, diminished = [0,4,7], [0,3,7], [0,5,7], [0,4,8], [0,3,6]
major_sixth, minor_sixth = [0,4,7,9], [0,3,7,9]
dominant_seventh, major_seventh, minor_seventh = [0,4,7,10], [0,4,7,11], [0,3,7,10]
half_diminished_seventh, diminished_seventh = [0,3,6,10], [0,3,6,9]
major_ninth, dominant_ninth, dominant_minor_ninth, minor_ninth = [0,2,4,7,11], [0,2,4,7,10], [0,1,4,7,10], [0,2,3,7,10]
chord_types_list = [major, minor, suspended, augmented, diminished, major_sixth, minor_sixth, dominant_seventh, major_seventh, minor_seventh, half_diminished_seventh, diminished_seventh, major_ninth, dominant_ninth, dominant_minor_ninth, minor_ninth]
chord_types_string = ['major', 'minor', 'suspended', 'augmented', 'diminished', 'major_sixth', 'minor_sixth', 'dominant_seventh', 'major_seventh', 'minor_seventh', 'half_diminished_seventh', 'diminished_seventh', 'major_ninth', 'dominant_ninth', 'dominant_minor_ninth', 'minor_ninth']
roots = list(range(12))
chord_orders = flatten([[{(n+r)%12 for n in v} for v in chord_types_list] for r in roots])
unique_orders = []
for i in range(192):
if chord_orders[i] not in unique_orders:
unique_orders.append(chord_orders[i])
def merge_chords(s):
sf = s.flat
chords_by_offset = []
for i in range(int(sf.highestTime)):
chords_by_offset.append(chord.Chord(sf.getElementsByOffset(i,i+1, includeEndBoundary=False, mustFinishInSpan=False, mustBeginInSpan=False).notes))
return chords_by_offset
def find_neighbor_note(n, k):
# find notes k steps away from n
return (roots[n-6:]+roots[:(n+6)%12])[6+k], (roots[n-6:]+roots[:(n+6)%12])[6-k]
def find_note_distance(n1, n2):
return abs(6 - (roots[n1-6:]+roots[:(n1+6)%12]).index(n2))
def find_chord_distance(set1, set2):
d1, d2 = set1.difference(set2), set2.difference(set1)
if len(d1) < len(d2):
longer, shorter = d2, list(d1)
else:
longer, shorter = d1, list(d2)
distances = []
for combination in itertools.combinations(longer, len(shorter)):
for permutation in itertools.permutations(combination):
dist_p = abs(len(d1)-len(d2))*3 # length difference means notes need to be added/deleted. weighted by 3
for i in range(len(shorter)):
dist_p += find_note_distance(shorter[i], permutation[i])
distances.append(dist_p)
return min(distances)
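# Added sanity checks for the distance helpers (illustrative, not part of the original pipeline):
# pitch classes 0 (C) and 7 (G) are 5 steps apart on the circular layout, and
# C major {0,4,7} vs. C minor {0,3,7} differ by moving one note a single semitone.
assert find_note_distance(0, 7) == 5
assert find_chord_distance({0, 4, 7}, {0, 3, 7}) == 1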
CACHE = dict()
def find_closest_chord(c, cache=CACHE):
if len(c) == 0:
return -1 # use -1 for rest (chords are 0 to 191)
# retrieve from existing knowledge
o_str, o, p = str(c.normalOrder), set(c.normalOrder), c.pitchClasses
if o in chord_orders:
return chord_orders.index(o)
# the above root sometimes differs from c.findRoot(), which might be more reliable.
# however, the errors are rare and it should be good enough for now.
if o_str in cache.keys():
return cache[o_str]
# find closest chord from scratch
chord_distances = dict()
most_common_note = Counter(c.pitchClasses).most_common(1)[0][0]
for i in range(192):
d = find_chord_distance(o, chord_orders[i])
# prioritize found chord's root note if most common note of the chord.
if int(i/16) == most_common_note:
d += -1
if chord_distances.get(d) == None:
chord_distances[d] = []
chord_distances[d].append(i)
# if multiple chords are tied, use first one (could be better)
closest_chord = chord_distances[min(chord_distances.keys())][0]
cache[o_str] = closest_chord
return closest_chord
def extract_features(parsed_list, idx):
s = converter.thaw(parsed_list[idx])
chords_by_offset = merge_chords(s)
chord_sequence = []
for i in range(len(chords_by_offset)):
chord_sequence.append(find_closest_chord(chords_by_offset[i], CACHE))
return chord_sequence
with open('bach-parsed.txt', 'r') as f:
FILES_BACH = [line.strip() for line in f.readlines()]
with open('beethoven-parsed.txt', 'r') as f:
FILES_BEETHOVEN = [line.strip() for line in f.readlines()]
with open('debussy-parsed.txt', 'r') as f:
FILES_DEBUSSY = [line.strip() for line in f.readlines()]
with open('scarlatti-parsed.txt', 'r') as f:
FILES_SCARLATTI = [line.strip() for line in f.readlines()]
with open('victoria-parsed.txt', 'r') as f:
FILES_VICTORIA = [line.strip() for line in f.readlines()]
for i in range(len(FILES_BACH)):
with open('bach-chordsequence.txt', 'a') as f:
f.write(str(extract_features(FILES_BACH, i))+'\n')
for i in range(len(FILES_BEETHOVEN)):
with open('beethoven-chordsequence.txt', 'a') as f:
f.write(str(extract_features(FILES_BEETHOVEN, i))+'\n')
for i in range(len(FILES_DEBUSSY)):
with open('debussy-chordsequence.txt', 'a') as f:
f.write(str(extract_features(FILES_DEBUSSY, i))+'\n')
for i in range(len(FILES_SCARLATTI)):
with open('scarlatti-chordsequence.txt', 'a') as f:
f.write(str(extract_features(FILES_SCARLATTI, i))+'\n')
for i in range(len(FILES_VICTORIA)):
with open('victoria-chordsequence.txt', 'a') as f:
f.write(str(extract_features(FILES_VICTORIA, i))+'\n')
# Additional feature set: extract durations of notes, chords, and rests
def find_length_add_to_list(cnr, out_list):
try:
out_list.append(cnr.duration.fullName)
except:
out_list.append(str(cnr.duration.quarterLength))
def extract_cnr_duration(piece):
s = converter.thaw(piece).flat
chords, notes, rests = [], [], []
for c in s.getElementsByClass(chord.Chord):
find_length_add_to_list(c, chords)
for n in s.getElementsByClass(note.Note):
find_length_add_to_list(n, notes)
for r in s.getElementsByClass(note.Rest):
find_length_add_to_list(r, rests)
elements = ['chord|'+d for d in chords] + ['note|'+d for d in notes] + ['rest|'+d for d in rests]
return ';'.join(elements)
for piece in FILES_BACH:
with open('bach-durations.txt', 'a') as f:
f.write(extract_cnr_duration(piece)+'\n')
for piece in FILES_BEETHOVEN:
with open('beethoven-durations.txt', 'a') as f:
f.write(extract_cnr_duration(piece)+'\n')
for piece in FILES_DEBUSSY:
with open('debussy-durations.txt', 'a') as f:
f.write(extract_cnr_duration(piece)+'\n')
for piece in FILES_SCARLATTI:
with open('scarlatti-durations.txt', 'a') as f:
f.write(extract_cnr_duration(piece)+'\n')
for piece in FILES_VICTORIA:
with open('victoria-durations.txt', 'a') as f:
f.write(extract_cnr_duration(piece)+'\n')
```
# How to work with Ruby
* **Difficulty level**: easy
* **Time needed to learn**: 10 minutes or less
## Ruby
Basic data types recognised in Ruby are similar to Python's data types, and there is a one-to-one correspondence between them.
The conversion of data types from SoS to Ruby (e.g. `%get` from Ruby) is as follows:
| Python | condition | Ruby |
| --- | --- |---|
| `None` | | `nil` |
| `boolean` | | `TrueClass or FalseClass` |
| `integer` | | `Integer` |
| `float` | | `Float` |
| `complex` | | `Complex` |
| `str` | | `String` |
| Sequence (`list`, `tuple`, ...) | | `Array` |
| `set` | | `Set` |
| `dict` | | `Hash` |
| `range` | | `Range` |
| `numpy.ndarray` | | `Array` |
| `numpy.matrix` | | `NMatrix` |
| `pandas.Series` | | `Hash` |
| `pandas.DataFrame` | | `Daru::DataFrame` |
Python objects in other datatypes are transferred as string `"Unsupported datatype"`. Please [let us know](https://github.com/vatlab/sos-ruby/issues) if there is a natural corresponding data type in Ruby to convert this data type.
Conversion of data types from Ruby to SoS (`%get var --from Ruby` from SoS) follows these rules:
| Ruby | condition | Python |
| --- | ---| ---|
| `nil` | | `None` |
| `Float::NAN` | | `numpy.nan` |
| `TrueClass or FalseClass` | | `boolean` |
| `Integer` | | `integer` |
| `String` | | `str` |
| `Complex` | | `complex` |
| `Float` | | `float` |
| `Array` | | `numpy.ndarray` |
| `Range` | | `range` |
| `Set` | | `set` |
| `Hash` | | `dict` |
| `NMatrix` | | `numpy.matrix` |
| `Daru::DataFrame` | | `pandas.DataFrame` |
Ruby objects in other datatypes are transferred as string `"Unsupported datatype"`.
For example, the scalar data is converted from SoS to Ruby as follows:
```
null_var = None
num_var = 123
logic_var = True
char_var = '1"23'
comp_var = 1+2j
%get null_var num_var logic_var char_var comp_var
puts [null_var, num_var, logic_var, char_var, comp_var]
```
Ruby supports DataFrames through its daru (Data Analysis in RUby) library, so you will need to install this library before using the Ruby kernel. For example, an R dataframe is transferred to Ruby as a Daru::DataFrame.
```
%get mtcars --from R
mtcars
```
Also, we choose the NMatrix library in Ruby because of its fast performance. As with daru (Data Analysis in RUby), you will need to install the nmatrix library before using the Ruby kernel.
```
mat_var = N[ [2, 3, 4], [7, 8, 9] ]
%put mat_var
mat_var
```
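Following the same `%put` pattern and the conversion table above, a Ruby `Hash` should arrive in SoS as a Python `dict` (a minimal sketch; the variable name is illustrative):

```
h_var = {"a" => 1, "b" => 2}
%put h_var
```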
## Further reading
# Multilayer Perceptron
Some say that 9 out of 10 people who use neural networks apply a Multilayer Perceptron (MLP). An MLP is basically a feed-forward network with at least 3 layers: an input layer, an output layer, and a hidden layer in between. Thus, the MLP has no structural loops: information always flows from left (input) to right (output). The lack of inherent feedback saves a lot of headaches: the analysis is totally straightforward given that the output of the network is always a function of the input; it does not depend on any former state of the model or previous input.

Regarding the topology of an MLP, it is normally assumed to be a densely-meshed one-to-many link model between the layers. This is mathematically represented by two matrices of parameters named “the thetas”. In any case, if a certain connection is of little relevance with respect to the observable training data, the network will automatically pay little attention to its contribution and assign it a low weight close to zero.
## Prediction
The evaluation of the output of a MLP, i.e., its prediction, given an input vector of data is a matter of matrix multiplication. To that end, the following variables are described for convenience:
* $N$ is the dimension of the input layer.
* $H$ is the dimension of the hidden layer.
* $K$ is the dimension of the output layer.
* $M$ is the dimension of the corpus (number of examples).
Given the variables above, the parameters of the network, i.e., the thetas matrices, are defined as follows:
* $\theta^{(IN)} \rightarrow H \times (N+1)$
* $\theta^{(OUT)} \rightarrow K \times (H+1)$
```
import NeuralNetwork
# 2 input neurons, 3 hidden neurons, 1 output neuron
nn = NeuralNetwork.MLP([2,3,1])
# nn[0] -> ThetaIN, nn[1] -> ThetaOUT
print(nn)
```
What follows are the ordered steps that need to be followed in order to evaluate the network prediction.
### Input Feature Expansion
The first step to attain a successful operation of the neural network is to add a bias term to the input feature space (mapped to the input layer):
$$a^{(IN)} = [1;\ x]$$
The feature expansion of the input space with the bias term increases the learning effectiveness of the model because it adds a degree of freedom to the adaptation process. Note that $a^{(IN)}$ directly represents the activation values of the input layer. Thus, the input layer is linear with the input vector $x$ (it is defined by a linear activation function).
### Transit to the Hidden Layer
Once the activations (outputs) of the input layer are determined, their values flow into the hidden layer through the weights defined in $\theta^{(IN)}$:
$$z^{(HID)} = \theta^{(IN)}\;a^{(IN)}$$
Similarly, the dimensionality of the hidden layer is expanded with a bias term to increase its learning effectiveness:
$$a^{(HID)} = [1;\ g(z^{(HID)})]$$
Here, a new function $g()$ is introduced. This is the generic activation function of a neuron, and generally it is non-linear. Its application yields the output values of the hidden layer $a^{(HID)}$ and provides the true learning power to the neural model.
### Output
Then, the activation values of the output layer, i.e., the network prediction, are calculated as follows:
$$z^{(OUT)} = \theta^{(OUT)}\;a^{(HID)}$$
and finally
$$a^{(OUT)} = g(z^{(OUT)}) = y$$
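Putting the three steps together, the whole forward pass fits in a few lines of NumPy. This is a hedged sketch written directly from the formulas above (the function and argument names are assumptions, not the internals of the `NeuralNetwork` module used in this notebook):

```
import numpy as np

def g(z):
    # logistic sigmoid activation (introduced formally in the next subsection)
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, theta_in, theta_out):
    # x: (N,) input vector; theta_in: (H, N+1); theta_out: (K, H+1)
    a_in = np.concatenate(([1.0], x))          # a(IN) = [1; x]
    z_hid = theta_in @ a_in                    # z(HID) = theta(IN) a(IN)
    a_hid = np.concatenate(([1.0], g(z_hid)))  # a(HID) = [1; g(z(HID))]
    z_out = theta_out @ a_hid                  # z(OUT) = theta(OUT) a(HID)
    return g(z_out)                            # a(OUT) = g(z(OUT)) = y
```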
### Activation Function
The activation function of the neuron is (usually) a non-linear function that provides the expressive power to the neural network. It is recommended this function to be smooth, differentiable and monotonically non-decreasing (for learning purposes). Typically, the logistic sigmoid function is used.
$$g(z) = \frac{1}{1 + e^{-z}}$$
Note that the range of this function varies from 0 to 1. Therefore, the output values of the neurons will always be bounded by the upper and the lower limits of this range. This entails considering a scaling process if a broader range of predicted values is needed. Other activation functions can be used with the "af" parameter. For example, the range of the hyperbolic tangent ("HyperTan" function) goes from -1 to 1.
```
import numpy as np
# Random instance with 2 values
x = np.array([1.0, 2.0])
y = NeuralNetwork.MLP_Predict(nn, x)
# intermediate results are available
# y[0] -> input result, y[1] -> hidden result, y[2] -> output result
print(y)
z = np.arange(-8, 8, 0.1)
g = NeuralNetwork.Logistic(z)
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure()
plt.plot(z, g, 'b-', label='g(z)')
plt.legend(loc='upper left')
plt.xlabel('Input [z]')
plt.ylabel('Output [g]')
plt.title('Logistic sigmoid activation function')
plt.show()
```
## Training
Training a neural network essentially means fitting its parameters to a set of example data considering an objective function, aka cost function. This process is also known as supervised learning. It is usually implemented as an iterative procedure.
### Cost Function
The cost function somehow encodes the objective or goal that should be attained with the network. It is usually defined as a classification or a regression evaluation function. However, the actual form of the cost function is effectively the same, which is an error or fitting function. A cost function measures the discrepancy between the desired output for a pattern and the output produced by the network.
The cost function $J$ quantifies the amount of squared error (or misfitting) that the network displays with respect to a set of data. Thus, in order to achieve a successfully working model, this cost function must be minimised with an adequate set of parameter values. To do so, several approaches are valid as long as the cost function is convex (i.e., a bowl-like shape). A well-known example of such is the quadratic function, which trains the neural network considering a minimum squared error criterion over the whole dataset of training examples:
$$J(\theta, x) = \frac{1}{M} \sum_{m=1}^M \sum_{k=1}^K \left(Error_k^{(m)}\right)^2 = \frac{1}{M} \sum_{m=1}^M \sum_{k=1}^K \left(t_k^{(m)}-y_k^{(m)}\right)^2$$
Note that the term $t$ in the cost function represents the target value of the network (i.e., the ideal/desired network output) for a given input data value $x$. Now that the cost function can be expressed, a convex optimisation procedure (e.g., a gradient-based method) must be conducted in order to minimise its value. Note that this is essentially a least-squares regression.
### Regularisation
The mean squared-error cost function described above does not incorporate any knowledge or constraint about the characteristics of the parameters being adjusted through the optimisation training strategy. This may develop into a generalisation problem because the space of solutions is large and some of these solutions may turn the model unstable with new unseen data. Therefore, there is the need to smooth the performance of the model over a wide range of input data.
Neural networks usually generalise well as long as the weights are kept small. Thus, the Tikhonov regularisation function, aka ridge regression, is introduced as a means to control complexity of the model in favour of its increased general performance. This regularisation approach, which is used in conjunction with the aforementioned cost function, favours small weight values (it is a cost over large weight values):
$$R(\theta) = \frac{\lambda}{2 M} \sum_{\forall \theta \notin bias} \theta^2$$
There is a typical trade-off in Machine Learning, known as the bias-variance trade-off, which has a direct relationship with the complexity of the model, the nature of the data and the amount of available training data to adjust it. This ability of the model to learn more or less complex scenarios raises an issue with respect to its fitting (memorisation v. generalisation): if the data is simple to explain, a complex model is said to overfit the data, causing its overall performance to drop (high variance model). Similarly, if complex data is tackled with a simple model, such model is said to underfit the data, also causing its overall performance to drop (high bias model). As it is usual in engineering, a compromise must be reached with an adequate $\lambda$ value.
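For reference, both formulas can be combined into a single NumPy routine. Again a sketch with assumed names (`predict` stands for a forward-pass function like the one above; the bias weights are excluded from the penalty, as stated):

```
def mlp_cost(thetas, X, T, lam, predict):
    # X: (M, N) inputs; T: (M, K) targets; thetas: list of weight matrices
    M = X.shape[0]
    J = sum(np.sum((t - predict(x))**2) for x, t in zip(X, T)) / M
    R = (lam / (2 * M)) * sum(np.sum(th[:, 1:]**2) for th in thetas)  # bias column excluded
    return J + R
```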
### Parameter Initialisation
The initial weights of the thetas assigned by the training process are critical with respect to the success of the learning strategy. They determine the starting point of the optimisation procedure, and depending on their value, the adjusted parameter values may end up in different places if the cost function has multiple (local) minima.
The parameter initialisation process is based on a uniform distribution between two small numbers that take into account the amount of input and output units of the adjacent layers:
$$\theta_{init} = U[-\sigma, +\sigma]\ \ where\ \ \sigma = \frac{\sqrt{6}}{\sqrt{in + out}}$$
In order to ensure a proper learning procedure, the weights of the parameters need to be randomly assigned in order to prevent any symmetry in the topology of the network model (that would be likely to end in convergence problems).
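A direct NumPy rendering of this initialisation rule might look as follows (a sketch; the module's own initialiser may differ in detail):

```
def init_theta(n_in, n_out):
    # uniform initialisation in [-sigma, +sigma] with sigma = sqrt(6)/sqrt(in + out)
    sigma = np.sqrt(6.0) / np.sqrt(n_in + n_out)
    return np.random.uniform(-sigma, sigma, size=(n_out, n_in + 1))  # +1 bias column
```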
### Gradient Descent
Given the convex shape of the cost function (which usually also includes the regularisation), the minimisation objective boils down to finding the extremum of this function using its derivative in the continuous space of the weights. To this end you may use the analytic form of the derivative of the cost function (a nightmare), a numerical finite difference, or automatic differentiation.
Gradient descent is a first-order optimisation algorithm, complete but non-optimal. It first starts with some arbitrarily chosen parameters and computes the derivative of the cost function with respect to each of them $\frac{\partial J(\theta,x)}{\partial \theta}$. The model parameters are then updated by moving them some distance (determined by the so called learning rate $\eta$) from the former initial point in the direction of the steepest descent, i.e., along the negative of the gradient. If $\eta$ is set too small, though, convergence is needlessly slow, whereas if it is too large, the update correction process may overshoot and even diverge.
$$\theta^{t+1} \leftarrow \theta^{t} - \eta \left.\frac{\partial J(\theta,x)}{\partial \theta}\right|_{\theta^{t}}$$
These steps are iterated in a loop until some stopping criterion is met, e.g., a determined number of epochs (i.e., the processing of all patterns in the training example set) is reached, or when no significant improvement is observed.
#### Stochastic versus Batch Learning
One last remark should be made about the amount of examples $M$ used in the cost function for learning. If the training procedure considers several instances at once per cost gradient computation and parameter update, i.e., $M \gg 1$, the approach is called batch learning. Batch learning is usually slow because each cost computation accounts for all the available training instances, and especially if the data redundancy is high (similar patterns). However, the conditions of convergence are well understood.
Alternatively, it is usual to consider only one single training instance at a time, i.e., $M=1$, to estimate the gradient in order to speed up the iterative learning process. This procedure is called stochastic (online) learning. Online learning steps are faster to compute, but this noisy single-instance approximation of the cost gradient function makes it a little inaccurate around the optimum. However, stochastic learning often results in better solutions because of the noise in the updates, and thus it is very convenient in most cases.
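The two regimes differ only in how many instances feed each parameter update. A schematic sketch (assuming a gradient routine `grad_J(theta, X, T)` exists):

```
def gradient_descent(theta, X, T, eta, epochs, grad_J, stochastic=True):
    # stochastic: one update per training instance (M = 1 per step)
    # batch: one update per epoch over the whole training set (M >> 1)
    for _ in range(epochs):
        if stochastic:
            for x, t in zip(X, T):
                theta = theta - eta * grad_J(theta, x[None, :], t[None, :])
        else:
            theta = theta - eta * grad_J(theta, X, T)
    return theta
```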
```
# Load Iris dataset
from sklearn import datasets as dset
import copy
iris = dset.load_iris()
# build network with 4 input, 1 output
nn = NeuralNetwork.MLP([4,4,1])
# keep original for further experiments
orig = copy.deepcopy(nn)
# Target needs to be divided by 2 because of the sigmoid, values 0, 0.5, 1
idat, itar = iris.data, iris.target/2.0
# regularisation parameter of 0.2
tcost = NeuralNetwork.MLP_Cost(nn, idat, itar, 0.2)
# Cost value for an untrained network
print("J(ini) = " + str(tcost))
# Train with numerical gradient, 20 rounds, batch
# learning rate is 0.1
NeuralNetwork.MLP_NumGradDesc(nn, idat, itar, 0.2, 20, 0.1)
```
### Backpropagation
The backpropagation algorithm estimates the error for each neuron unit so as to effectively deploy the gradient descent optimisation procedure. It is a popular algorithm, conceptually simple, computationally efficient, and it often works. In order to conduct the estimation of the neuron-wise errors, it first propagates the training data through the network, then it computes the error with the predictions and the target values, and afterwards it backpropagates the error from the output to the input, generally speaking, from a given layer $(n)$ to the immediately former one $(n-1)$:
$$Error^{(n-1)} = Error^{(n)} \; \theta^{(n)}$$
Note that the bias neurons don't backpropagate, they are not connected to the former layer.
Finally, the gradient is computed so that the weights may be updated. Each weight links an input unit $I$ to an output unit $O$, which also provides the error feedback. The general formula that is derived for a logistic sigmoid activation function is shown as follows:
$$\theta^{(t+1)} \leftarrow \theta^{(t)} + \eta \; I \; Error \; O \; (1 - O)$$
From a computational complexity perspective, Backpropagation is much more effective than the numerical gradient applied above because it computes the errors for all the weights in 2 network traversals, whereas numerical gradient needs to compute 2 traversals per parameter. In addition, stochastic learning is generally the preferred method for Backprop.
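For illustration, one stochastic Backprop update for the 3-layer network of this notebook can be sketched as follows (written from the formulas in this section, reusing `g` and the conventions from the forward-pass sketch above; it is not the module's implementation):

```
def backprop_step(theta_in, theta_out, x, t, eta):
    # forward pass (same conventions as the sketch in the Prediction section)
    a_in = np.concatenate(([1.0], x))
    a_hid = np.concatenate(([1.0], g(theta_in @ a_in)))
    y = g(theta_out @ a_hid)
    # backpropagate the error; the bias unit does not backpropagate
    err_out = t - y                            # Error(OUT)
    err_hid = (err_out @ theta_out)[1:]        # Error(HID) = Error(OUT) theta(OUT)
    # weight updates: theta <- theta + eta * I * Error * O * (1 - O)
    theta_out += eta * np.outer(err_out * y * (1 - y), a_hid)
    theta_in += eta * np.outer(err_hid * a_hid[1:] * (1 - a_hid[1:]), a_in)
    return theta_in, theta_out
```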
```
# Iris example with Backprop
# load original network
nn = copy.deepcopy(orig)
# Cost value for an untrained network
tcost = NeuralNetwork.MLP_Cost(nn, idat, itar, 0.2)
print("J(ini) = " + str(tcost))
# Train with Backprop, 20 rounds
# learning rate is 0.1
NeuralNetwork.MLP_Backprop(nn, idat, itar, 0.2, 20, 0.1)
```
### Practical Techniques
Backpropagation learning can be tricky, particularly for multilayered networks where the cost surface is non-quadratic, non-convex, and high dimensional with many local minima and/or flat regions. Its successful convergence is not guaranteed. Designing and training an MLP using Backprop requires making choices such as the number and type of nodes, layers, learning rates, training and test sets, etc., and many undesirable behaviours can be avoided with practical techniques.
#### Instance Shuffling
In stochastic learning, neural networks learn the most from unexpected instances. Therefore, it is advisable to iterate over instances that are the most unfamiliar to the system (i.e., have the maximum information content). To improve the chances of learning, it is recommended to shuffle the training set so that successive training instances rarely belong to the same class.
```
from sklearn.utils import shuffle
# load original network
nn = copy.deepcopy(orig)
# shuffle instances
idat, itar = shuffle(idat, itar)
NeuralNetwork.MLP_Backprop(nn, idat, itar, 0.2, 20, 0.1)
```
#### Feature Standardisation
Convergence is usually faster if the average of each input feature over the training set is close to zero, otherwise the updates will be biased in a particular direction and thus will slow learning.
Additionally, scaling the features so that all have about the same covariance speeds learning because it helps to balance out the rate at which the weights connected to the input nodes learn.
```
# feature stats
mu_idat = np.mean(idat, axis=0)
std_idat = np.std(idat, axis=0)
# standardise
s_idat = (idat - mu_idat) / std_idat
# eval
test = copy.deepcopy(orig)
NeuralNetwork.MLP_Backprop(test, s_idat, itar, 0.2, 20, 0.1)
```
#### Feature Decorrelation
If inputs are uncorrelated then it is possible to solve for the weight values independently. With correlated inputs, the solution must be searched simultaneously, which is a much harder problem. Principal Component Analysis (aka the Karhunen-Loeve expansion) can be used to remove linear correlations in inputs.
```
# construct orthogonal basis with principal vectors
covmat = np.cov(s_idat.T)
l,v = np.linalg.eig(covmat)
# reproject
d_s_idat = s_idat.dot(v)
# eval
test = copy.deepcopy(orig)
NeuralNetwork.MLP_Backprop(test, d_s_idat, itar, 0.2, 20, 0.1)
```
#### Target Values
Target values at the sigmoid asymptotes need to be driven by large weights, which can result in instabilities. Instead, target values at the points of the extrema of the second derivative of the sigmoid activation function avoid saturating the output units. The second derivative of the logistic sigmoid is $g''(z) = g(z)(1 - g(z))(1 - 2g(z))$, shown below.
```
g = NeuralNetwork.Logistic
ddg = g(z)*(1 - g(z))*(1 - 2*g(z))
plt.figure()
plt.plot(z, ddg, 'b-', label='g\'\'(z)')
plt.legend(loc='upper left')
plt.xlabel('Input [z]')
plt.ylabel('Output [g\'\']')
plt.title('Second derivative of the logistic sigmoid activation function')
plt.show()
# max min target values
mx = max(ddg)
mi = min(ddg)
c = 0
for i in ddg:
if i == mx:
print("Max target " + str(z[c]) + " -> " + str(g(z[c])))
if i == mi:
print("Min target " + str(z[c]) + " -> " + str(g(z[c])))
c += 1
```
Therefore, optimum target values must be at 0.21 and 0.79.
```
for i in range(len(itar)):
if itar[i] == 0:
itar[i] = 0.21
if itar[i] == 1:
itar[i] = 0.79
test = copy.deepcopy(orig)
NeuralNetwork.MLP_Backprop(test, d_s_idat, itar, 0.2, 20, 0.1)
```
#### Target Vectors
When designing a learning system, it is suitable to take into account the nature of the problem at hand (e.g., whether it is a classification problem or a regression problem) to determine the number of output units $K$.
In the case of classification, $K$ should be the amount of different classes, and the target output should be a binary vector. Given an instance, only the output unit that corresponds to the instance class should be set. This approach is usually referred to as "one-hot" encoding. The decision rule for classification is then driven by the maximum output unit.
In the case of a regression problem, $K$ should be equal to the number of dependent variables.
```
# Iris is a classification problem, K=3
# build network with 4 input, 3 outputs
test3 = NeuralNetwork.MLP([4,4,3])
# modify targets
t = []
for i in itar:
if i == 0.21:
t.append([0.79,0.21,0.21])
elif i == 0.5:
t.append([0.21,0.79,0.21])
else:
t.append([0.21,0.21,0.79])
t = np.array(t)
NeuralNetwork.MLP_Backprop(test3, d_s_idat, t, 0.2, 20, 0.1)
```
Finally, the effectiveness/performance of each approach should be scored with an appropriate metric: squared-error residuals like the cost function for regression problems, and competitive selection for classification.
```
# compare accuracies between single K and multiple K
single = 0
multiple = 0
for x,y in zip(d_s_idat, itar):
ps = NeuralNetwork.MLP_Predict(test, x)
ps = ps[-1][0]
pm = NeuralNetwork.MLP_Predict(test3, x)
pm = [pm[-1][0], pm[-1][1], pm[-1][2]]
if y == 0.21: # class 0
if np.abs(ps - 0.21) < np.abs(ps - 0.5):
if np.abs(ps - 0.21) < np.abs(ps - 0.79):
single += 1
if pm[0] > pm[1]:
if pm[0] > pm[2]:
multiple += 1
elif y == 0.5: # class 1
if np.abs(ps - 0.5) < np.abs(ps - 0.21):
if np.abs(ps - 0.5) < np.abs(ps - 0.79):
single += 1
if pm[1] > pm[0]:
if pm[1] > pm[2]:
multiple += 1
else: # class 2
if np.abs(ps - 0.79) < np.abs(ps - 0.21):
if np.abs(ps - 0.79) < np.abs(ps - 0.5):
single += 1
if pm[2] > pm[0]:
if pm[2] > pm[1]:
multiple += 1
print("Accuracy single: " + str(single))
print("Accuracy multiple: " + str(multiple))
```
#### Hidden Units
The number of hidden units determines the expressive power of the network, and thus, the complexity of its transfer function. The more complex a model is, the more complicated data structures it can learn. Nevertheless, this argument cannot be extended ad infinitum because a shortage of training data with respect to the amount of parameters to be learnt may lead the model to overfit the data. That’s why the aforementioned regularisation function is also used to avoid this situation.
Thus, it is common to have a skew toward suggesting a slightly more complex model than strictly necessary (regularisation will compensate for the extra complexity if necessary). Some heuristic guidelines to guess this optimum number of hidden units indicate an amount somewhat related to the number of input and output units. This is an experimental issue, though. There is no rule of thumb for this. Apply a configuration that works for your problem and you’re done.
#### Final Remarks
* Tweak the network: different activation function, adaptive learning rate, momentum, annealing, noise, etc.
* Focus on model generalisation: keep a separate self-validation set of data (not used to train the model) to test and estimate the actual performance of the model. See [test_iris.py](test_iris.py)
* Incorporate as much knowledge as possible. Expertise is a key indicator of success. Data driven models don’t do magic, the more information that is available, the greater the performance of the model.
* Feature Engineering is of utmost importance. This relates to the former point: the more useful information that can be extracted from the input data, the better performance can be expected. Salient indicators are keys to success. This may lead to selecting only the most informative features (mutual information, chi-square...), or to change the feature space that is used to represent the instance data (Principal Component Analysis for feature extraction and dimensionality reduction). And always standardise your data and exclude outliers.
* Get more data if the model is not good enough. Related to “the curse of dimensionality” principle: if good data is lacking, no successful model can be obtained. There must be a coherent relation between the parameters of the model (i.e., its complexity) and the amount of available data to train them.
* Ensemble models, integrate criteria. Bearing in mind that the optimum model structure is not known in advance, one of the most reasonable approaches to obtain a fairly good guess is to apply different models (with different learning features) to the same problem and combine/weight their outputs. Related techniques to this are also known as “boosting”.
# A Whale off the Port(folio)
---
In this assignment, you'll get to use what you've learned this week to evaluate the performance among various algorithmic, hedge, and mutual fund portfolios and compare them against the S&P TSX 60 Index.
## Assumptions and limitations
1. Limitation: Only dates that overlap between portfolios will be compared
2. Assumption: There are no significant anomalous price impacting events during the time window such as share split, trading halt
3. Assumption: S&P TSX 60 is representative of the market as a whole, acting as an index
4. Assumption: Each portfolio (new shares, Whale, and Algos) will have an even spread of weights across all sub-portfolios
## 0. Import Required Libraries
```
# Initial imports
import pandas as pd # dataframe manipulation
import numpy as np # calculations and numeric manipulation
import datetime as dt # date and time
from pathlib import Path # setting the path for file manipulation
import datetime
import seaborn as sns # advanced plotting/charting library
import matplotlib as plt
pd.options.display.float_format = '{:.6f}'.format # float format to 6 decimal places
```
# I. Data Cleaning
In this section, you will need to read the CSV files into DataFrames and perform any necessary data cleaning steps. After cleaning, combine all DataFrames into a single DataFrame.
Files:
* `whale_returns.csv`: Contains returns of some famous "whale" investors' portfolios.
* `algo_returns.csv`: Contains returns from the in-house trading algorithms from Harold's company.
* `sp_tsx_history.csv`: Contains historical closing prices of the S&P TSX 60 Index.
## A. Whale Returns
Read the Whale Portfolio daily returns and clean the data.
### 1. import whale csv and set index to date
```
df_wr = pd.read_csv('Resources/whale_returns.csv', index_col="Date")
```
### 2. Inspect imported data
```
# look at colums and value head
df_wr.head(3)
# look at last few values
df_wr.tail(3)
# check dimensions of df
df_wr.shape
# get index datatype - for later merging
df_wr.index.dtype
# get datatypes of all values
df_wr.dtypes
```
### 3. Count and drop any null values
```
# Count nulls
df_wr.isna().sum()
# Drop nulls
df_wr.dropna(inplace=True)
# Count nulls -again to ensure they're removed
df_wr.isna().sum()
df_wr.count() #double check all values are equal in length
```
### 4. Sort the index to ensure the correct date order for calculations
```
df_wr.sort_index(inplace=True)
```
### 5. Rename columns - shorten and make consistent with other tables
```
# change columns to be consistent and informative
df_wr.columns
df_wr.columns = ['Whale_Soros_Fund_Daily_Returns', 'Whale_Paulson_Daily_Returns',
'Whale_Tiger_Daily_Returns', 'Whale_Berekshire_Daily_Returns']
```
### 6. Create copy dataframe with new column for cumulative returns
```
# copy the dataframe to store cumprod in a new view
df_wr_cumulative = df_wr.copy()
# create a new column in new df for each cumulative daily return using the cumprod function
df_wr_cumulative['Whale_Soros_Fund_Daily_CumReturns'] = (1 + df_wr_cumulative['Whale_Soros_Fund_Daily_Returns']).cumprod()
df_wr_cumulative['Whale_Paulson_Daily_CumReturns'] = (1 + df_wr_cumulative['Whale_Paulson_Daily_Returns']).cumprod()
df_wr_cumulative['Whale_Tiger_Daily_CumReturns'] = (1 + df_wr_cumulative['Whale_Tiger_Daily_Returns']).cumprod()
df_wr_cumulative['Whale_Berekshire_Daily_CumReturns'] = (1 + df_wr_cumulative['Whale_Berekshire_Daily_Returns']).cumprod()
df_wr_cumulative.head() # check result is consistent against original column ie adds up
# drop returns columns from cumulative df
df_wr_cumulative.columns
df_wr_cumulative = df_wr_cumulative[['Whale_Soros_Fund_Daily_CumReturns', 'Whale_Paulson_Daily_CumReturns','Whale_Tiger_Daily_CumReturns', 'Whale_Berekshire_Daily_CumReturns']]
df_wr_cumulative.head()
```
### 7. Look at high level stats & plot for whale portfolios
```
df_wr.describe(include='all') # basic stats for daily whale returns
df_wr_cumulative.describe(include='all') # basic stats for daily cumulative whale returns
# plot daily returns - whales
df_wr.plot(figsize=(10,5))
# Plot cumulative returns - individual subportfolios
df_wr_cumulative.plot(figsize=(10,5), title='Cumulative Returns - Whale Sub-Portfolios')
```
### 8. Calculate the overall portfolio returns, given equal weight to sub-portfolios
```
# Set weights
weights_wr = [0.25, 0.25, 0.25, 0.25] # equal weights across all 4 portfolios
# use the dot function to cross multiple the daily rturns of individual stocks against the weights
portfolio_df_wr = df_wr.dot(weights_wr)
portfolio_df_wr.plot(figsize=(10,5), title='Daily Returns for Overall Whale Portfolio (Equal Weighting)')
```
### 9. Calculate the overall portfolio cumulative returns, given equal weight to sub-portfolios
```
# Use the `cumprod` function to cumulatively multiply each element in the Series by its preceding element until the end
wr_cumulative_returns = (1 + portfolio_df_wr).cumprod() - 1
wr_cumulative_returns.head()
wr_cumulative_returns.plot(figsize=(10,5), title='Cumulative Daily Returns for Overall Whale Portfolio (Equal Weighting)')
```
### 10. Initial data overview for Whales
<li> Lack of gaps in the chart indicates there is no missing data, and the absence of extreme fluctuations suggests the data is consistent; no obvious data errors were identified.
<li> Initial high-level observations of standalone daily returns for the whale portfolios: at first glance, the mean daily return indicates that the Berkshire portfolio performed best (mean daily return 0.000501, mean cumulative daily return 1.159732), while Paulson performed worst (-0.000203). The standard deviation indicates the highest risk for Berkshire (std 0.012831), while Paulson has the lowest risk/volatility (std 0.006977).
<li> The cumulative chart shows that all portfolios suffered a loss at the same time, around 2019-02-16, but Berkshire increased the most over time and climbed the steepest after the downturn.
<li> A more thorough portfolio comparison will be done in the following analysis section, so no conclusions are drawn yet.
## B. Algorithmic Daily Returns
Read the algorithmic daily returns and clean the data.
### 1. import algo csv and set index to date
```
# Reading algorithmic returns
df_ar = pd.read_csv('Resources/algo_returns.csv', index_col='Date')
```
### 2. Inspect resulting dataframe and contained data
```
# look at colums and value first 3 rows
df_ar.head(3)
# look at colums and value last 3 rows
df_ar.tail(3)
# get dimensions of df
df_ar.shape
# get index datatype - for later merging
df_ar.index.dtype
# get datatypes
df_ar.dtypes
```
### 3. Count and remove null values
```
# Count nulls
df_ar.isna().sum()
# Drop nulls
df_ar.dropna(inplace=True)
# Count nulls -again to ensure that nulls actually are removed
df_ar.isna().sum()
df_ar.count()
```
### 4. Sort index to ensure correct date order for calculations
```
df_ar.sort_index(inplace=True)
```
### 5. Rename columns to be consistent with future merge
```
df_ar.columns
df_ar.columns = ['Algo1_Daily_Returns', 'Algo2_Daily_Returns']
```
### 6. Create new column in a copy df for cumulative returns per Algo daily return
```
# create a df copy to store cumulative data
df_ar_cumulative = df_ar.copy()
# use cumprod to get the daily cumulative returns for each of the algos 1 and 2
df_ar_cumulative['Algo1_Daily_CumReturns'] = (1 + df_ar_cumulative['Algo1_Daily_Returns']).cumprod()
df_ar_cumulative['Algo2_Daily_CumReturns'] = (1 + df_ar_cumulative['Algo2_Daily_Returns']).cumprod()
# check the result is consistent with the daily returns for first few columns
df_ar_cumulative.head(10)
# drop columns that are not required
df_ar_cumulative.columns # get the columns
df_ar_cumulative = df_ar_cumulative[['Algo1_Daily_CumReturns','Algo2_Daily_CumReturns']]
# check result - first few lines
df_ar_cumulative.head(10)
```
### 7. Look at high level stats & plot for algo portfolios
```
df_ar.describe(include='all') # stats for daily returns
df_ar_cumulative.describe(include='all') # stats for daily cumulative returns
# plot daily returns - algos
df_ar.plot(figsize=(10,5))
# plot daily cumulative returns - algos
df_ar_cumulative.plot(figsize=(10,5))
```
### 8. Calculate the overall portfolio returns, given equal weight to sub-portfolios
```
# Set weights
weights_ar = [0.5, 0.5] # equal weights across 2 algo sub-portfolios
# use the dot function to cross-multiply the daily returns of individual stocks against the weights
portfolio_df_ar = df_ar.dot(weights_ar)
portfolio_df_ar.plot(figsize=(10,5), title='Daily Returns for Overall Algos Portfolio (Equal Weighting)')
```
### 9. Calculate the overall portfolio cumulative returns, given equal weight to sub-portfolios
```
# Use the `cumprod` function to cumulatively multiply each element in the Series by its preceding element until the end
ar_cumulative_returns = (1 + portfolio_df_ar).cumprod() - 1
ar_cumulative_returns.head()
ar_cumulative_returns.plot(figsize=(10,5), title='Cumulative Daily Returns for Overall Algos Portfolio (Equal Weighting)')
```
### 10. Quick data overview - Algos
Initial observations of standalone daily returns data for Algo 1 vs Algo 2:
<li> The mean daily return indicates that Algo 1 (mean daily return 0.000654) performs slightly better than Algo 2 (mean daily return 0.000341), which is also evident in the cumulative daily returns plot.
<li> When looking at just daily returns, Algo 2 is more risky, but when looking at cumulative returns, Algo 1 is more risky (i.e., higher standard deviation).
<li> Lack of gaps in the chart indicates there is no missing data, and the absence of extreme fluctuations suggests the data is consistent.
<li> At first glance, cumulative portfolio-level returns appear steeper compared with the Whales.
## C. S&P TSX 60 Returns
Read the S&P TSX 60 historic closing prices and create a new daily returns DataFrame from the data.
Note: this file contains daily closing prices, not returns - these need to be converted.
### 1. Import S&P csv daily closing price (not returns)
```
# Reading S&P TSX 60 Closing Prices
df_sr = pd.read_csv('Resources/sp_tsx_history.csv')
```
### 2. Inspect columns of dataframe
```
# look at colums and value head
df_sr.head(3)
# look at tail values
df_sr.tail(3)
```
#### Notes from dataframe inspection:
1. The Date column was not immediately converted because it is in a different format to the other csv files and needs to be converted to a consistent format first.
2. Close cannot be directly converted to float as it contains dollar signs and commas.
3. A new column for returns will need to be created from return calculations.
```
# check dimension of df
df_sr.shape
# Check Data Types
df_sr.dtypes
```
### 3. Convert the date into a consistent format with other tables
```
df_sr['Date']= pd.to_datetime(df_sr['Date']).dt.strftime('%Y-%m-%d')
```
### 4. Convert the date data to index and check format and data type
```
# set date as index
df_sr.set_index('Date', inplace=True)
df_sr.head(2)
df_sr.index.dtype
```
### 5. Check for null values
```
# Count nulls - none observed
df_sr.isna().sum()
```
### 6. Convert daily closing price to float (from string)
```
# Change the Close column to float type
# (regex=False treats '$' literally instead of as a regex anchor)
df_sr['Close']= df_sr['Close'].str.replace('$','', regex=False)
df_sr['Close']= df_sr['Close'].str.replace(',','', regex=False)
df_sr['Close']= df_sr['Close'].astype(float)
# Check Data Types
df_sr.dtypes
# test
df_sr.iloc[0]
# check null values
df_sr.isna().sum()
df_sr.count()
```
### 7. Sort the index for calculations of returns
```
# sort_index
df_sr.sort_index(inplace=True)
df_sr.head(2)
```
### 8. Calculate daily returns and store in new column
Equation: $r=\frac{{p_{t}} - {p_{t-1}}}{p_{t-1}}$
The daily return is the (current closing price minus the previous day closing price) all divided by the previous day closing price. The initial value has no daily return as there is no prior period to compare it with.
Here the calculation uses the pandas `shift` function:
```
df_sr['SnP_TSX_60_Returns'] = (df_sr['Close'] - df_sr['Close'].shift(1))/ df_sr['Close'].shift(1)
df_sr.head(10)
```
### 9. Cross check conversion to daily returns against alternative method - pct_change function
```
df_sr['SnP_TSX_60_Returns'] = df_sr['Close'].pct_change()
df_sr.head(10)
```
#### Methods cross check - looks good - continue
```
# check for null - first row would have null
df_sr.isna().sum()
# Drop nulls - first row
df_sr.dropna(inplace=True)
# Rename `Close` Column to be specific to this portfolio.
df_sr.columns
df_sr.head()
```
### 10. Drop original Closing column - not needed for comparison
```
df_sr = df_sr[['SnP_TSX_60_Returns']]
df_sr.columns
```
### 11. Create new column in a copy df for cumulative returns per daily return S&P TSX 60
```
df_sr_cumulative = df_sr.copy()
# use cumprod to get the daily cumulative returns for each of the algos 1 and 2
df_sr_cumulative['SnP_TSX_60_CumReturns'] = (1+df_sr_cumulative['SnP_TSX_60_Returns']).cumprod()
# visually check first 10 rows to ensure that results make sense
df_sr_cumulative.head(10)
# drop daily returns column from cumulative df
df_sr_cumulative = df_sr_cumulative[['SnP_TSX_60_CumReturns']]
df_sr_cumulative.head()
```
### 12. Look at high level stats & plot for algo portfolios
```
df_sr.describe()
df_sr_cumulative.describe()
# plot daily returns - S&P TSX 60
df_sr.plot(figsize=(10,5))
# plot daily returns - S&P TSX 60
df_sr_cumulative.plot(figsize=(10,5))
```
### 13. Initial Data Overview - S&P (Market Representation)
<li> The standard deviation is, as expected, the lowest of all portfolios, since this represents the market index and so should not fluctuate as much as the other portfolios. It is the least risky. The next sub-portfolio with the lowest risk is Whale_Paulson_Daily_Returns.
<li> Lack of gaps in the chart indicates there is no missing data, and the absence of extreme fluctuations suggests the data is consistent
<li> The returns of individual portfolios would be expected to be higher than the SnP (given higher risk), but this is not always the case, as shall be explored in the analysis section
## D. Combine Whale, Algorithmic, and S&P TSX 60 Returns
### 1. Merge daily returns dataframes from all portfolios
```
# Use the `concat` function to combine the two DataFrames by matching indexes (or in this case `Date`)
merged_analysis_df_tmp = pd.concat([df_wr, df_ar ], axis="columns", join="inner")
merged_analysis_df_tmp.head(3)
# Use the `concat` function to combine the two DataFrames by matching indexes
merged_daily_returns_df = pd.concat([merged_analysis_df_tmp, df_sr ], axis="columns", join="inner")
merged_daily_returns_df.head(3)
merged_daily_returns_df.tail(3)
merged_daily_returns_df.shape
```
# II Conduct Quantitative Analysis
In this section, you will calculate and visualize performance and risk metrics for the portfolios.
<li> First a daily returns comparison is reviewed for individual sub-portfolios
<li> Second a daily returns comparison is reviewed for portfolio level - comparing Whales and Algos
## A. Performance Analysis
#### Calculate and Plot the daily returns
### 1. Compare daily returns of individual sub-portfolios
```
# Plot daily returns of all portfolios
drp = merged_daily_returns_df.plot(figsize=(20,10), rot=45, title='Comparison of Daily Returns on Stock Portfolios')
drp.set_xlabel("Date")
drp.set_ylabel("Daily Returns")
```
#### Calculate and Plot cumulative returns.
### 2. Compare Cumulative Daily Returns
Calculations were already done in the first section
```
# Use the `concat` function to combine the two DataFrames by matching indexes
merged_cumulative__df_tmp = pd.concat([df_wr_cumulative, df_ar_cumulative ], axis="columns", join="inner")
merged_daily_cumreturns_df = pd.concat([merged_cumulative__df_tmp, df_sr_cumulative ], axis="columns", join="inner")
merged_daily_cumreturns_df.head()
# Plot cumulative returns
dcrp = merged_daily_cumreturns_df.plot(figsize=(20,10), rot=45, title='Comparison of Daily Cumulative Returns on Stock Portfolios')
dcrp.set_xlabel("Date")
dcrp.set_ylabel("Daily Cumulative Returns")
```
### 3. Compare portfolio level daily returns
```
# create a copy of the daily returns and add the portfolio level columns
portfolio_daily_return = merged_daily_returns_df.copy()
portfolio_daily_return['Whale_Portfolio_Daily_Returns'] = portfolio_df_wr
portfolio_daily_return['Algo_Portfolio_Daily_Returns'] = portfolio_df_ar
portfolio_daily_return.head(3)
portfolio_daily_return.tail(3)
portfolio_daily_return.describe()
# Plot portfolio vs individual daily returns
dcrp = portfolio_daily_return.plot(figsize=(20,10), rot=45, title='Comparison of Daily Returns on Stock Portfolios - Individual Sub Portfolios vs Portfolio')
dcrp.set_xlabel("Date")
dcrp.set_ylabel("Daily Returns")
# Plot portfolio only (remove individual sub-portfolios)
portfolio_daily_return.columns
portfolio_daily_return_only = portfolio_daily_return[['SnP_TSX_60_Returns','Whale_Portfolio_Daily_Returns', 'Algo_Portfolio_Daily_Returns']]
dcrp = portfolio_daily_return_only.plot(figsize=(20,10), rot=45, title='Comparison of Daily Returns on Stock Portfolios - Whale vs Algos')
dcrp.set_xlabel("Date")
dcrp.set_ylabel("Daily Returns")
```
### 4. Compare portfolio level cumulative daily returns
```
# Copy cumulative daily returns df to include portfolio
portfolio_daily_cumreturns = merged_daily_cumreturns_df.copy()
# add portfolio level cumulative daily returns for whales and algos
portfolio_daily_cumreturns['Whale_Portfolio_CumRet'] = wr_cumulative_returns
portfolio_daily_cumreturns['Algos_Portfolio_CumRet'] = ar_cumulative_returns
dcrp = portfolio_daily_cumreturns.plot(figsize=(20,10), rot=45, title='Comparison of Cumulative Daily Returns on Stock Portfolios - Individual Sub Portfolios vs Portfolio')
dcrp.set_xlabel("Date")
dcrp.set_ylabel("Daily Cumulative Returns")
portfolio_daily_cumreturns.tail(1)
```
---
## B. Risk Analysis
Determine the _risk_ of each portfolio:
1. Create a box plot for each portfolio.
2. Calculate the standard deviation for all portfolios.
3. Determine which portfolios are riskier than the S&P TSX 60.
4. Calculate the Annualized Standard Deviation.
### 1. Create a box plot for each portfolio
```
# Box plot to visually show risk
mcrb = merged_daily_returns_df.plot.box(figsize=(20,10), rot=45, title='Boxplot Comparison of Daily Returns on Stock Portfolios')
mcrb.set_xlabel("Portfolio")
mcrb.set_ylabel("Daily Returns")
```
### 2. Calculate Standard Deviations
```
# Daily standard deviation of daily returns sorted in ascending order
daily_std = merged_daily_returns_df.std()
daily_std.sort_values()
mcrb = daily_std.plot.hist(figsize=(20,10), rot=45, title='Comparison of Standard Deviation of Daily Returns on Stock Portfolios')
mcrb.set_xlabel("Returns Standard Deviation")
mcrb.set_ylabel("Frequency")
```
### 3. Standard deviation for S&P TSX 60
```
# Calculate the daily standard deviation of S&P TSX 60
daily_SnP60_std = merged_daily_returns_df['SnP_TSX_60_Returns'].std()
daily_SnP60_std
```
### 4. Determine which portfolios are riskier than the S&P TSX 60
The S&P TSX 60 is a market indicator, acting as a benchmark to represent the market.
By sorting in order of standard deviation of daily returns above, the portfolios riskier than the S&P TSX 60 are all except the Whale Paulson portfolio, as all others have a higher standard deviation than the S&P TSX 60.
### 5. Calculate the Annualized Standard Deviation
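Assuming daily returns are roughly independent, variance scales linearly with the number of periods, so the standard deviation scales with its square root. With 252 trading days in a year: $\sigma_{annual}=\sigma_{daily}\times\sqrt{252}$.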
```
# Calculate the annualized standard deviation (252 trading days)
annualized_std = daily_std * np.sqrt(252)
annualized_std
```
---
## C. Rolling Statistics
Risk changes over time. Analyze the rolling statistics for Risk and Beta.
1. Calculate and plot the rolling standard deviation for the S&P TSX 60 using a 21-day window.
2. Calculate the correlation between each stock to determine which portfolios may mimic the S&P TSX 60.
3. Choose one portfolio, then calculate and plot the 60-day rolling beta for it and the S&P TSX 60.
### 1. Calculate and plot rolling `std` for all portfolios with 21-day window
```
# Calculate the rolling standard deviation for all portfolios using a 21-day window
roll21_std = merged_daily_returns_df.rolling(window=21).std()
roll21_std
# Plot the rolling standard deviation of all daily returns (not closing prices)
rollsp = roll21_std.plot(figsize=(20,10), rot=45, title='21 Day Rolling Standard Deviation on Daily Returns on Stock Portfolios')
rollsp.set_xlabel("21 Day Rolling Dates")
rollsp.set_ylabel("Standard Deviation")
```
### 2. Calculate and plot the correlation
```
# Calculate the correlation between each column
correlation = merged_daily_returns_df.corr()
# Sort each portfolio's correlation with the S&P TSX 60
correlation['SnP_TSX_60_Returns'].sort_values(ascending=False)
# Display correlation matrix
import matplotlib.pyplot as plt
import seaborn as sns
fig = plt.gcf()
# Set the title
plt.title('Inter-Portfolio Correlations')
# Change seaborn plot size
fig.set_size_inches(12, 8)
sns.heatmap(correlation, vmin=-1, vmax=1)
```
### 3. Calculate and Plot Beta for a chosen portfolio and the S&P TSX 60
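Beta measures how sensitive a portfolio's returns are to market returns. The code below applies the standard definition $\beta_{i}=\frac{\mathrm{Cov}(r_{i},r_{m})}{\mathrm{Var}(r_{m})}$, where $r_{i}$ is a sub-portfolio's daily return and $r_{m}$ is the S&P TSX 60 daily return.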
```
# Covariance of Whales against SnP TSX 60 Returns
Whale_Soros_Covariance = df_wr["Whale_Soros_Fund_Daily_Returns"].cov(df_sr["SnP_TSX_60_Returns"])
Whale_Paulson_Covariance = df_wr["Whale_Paulson_Daily_Returns"].cov(df_sr["SnP_TSX_60_Returns"])
Whale_Tiger_Covariance = df_wr["Whale_Tiger_Daily_Returns"].cov(df_sr["SnP_TSX_60_Returns"])
Whale_Berekshire_Covariance = df_wr["Whale_Berekshire_Daily_Returns"].cov(df_sr["SnP_TSX_60_Returns"])
# Display the covariance of each whale sub-portfolio
print("Soros Covariance: ", "%.16f" % Whale_Soros_Covariance)
print("Paulson Covariance: ", "%.16f" % Whale_Paulson_Covariance)
print("Tiger Covariance: ", "%.16f" % Whale_Tiger_Covariance)
print("Berekshire Covariance: ", "%.16f" % Whale_Berekshire_Covariance)
# Covariance of Whales against SnP TSX 60 Returns
Algo1_Covariance = df_ar["Algo1_Daily_Returns"].cov(df_sr["SnP_TSX_60_Returns"])
Algo2_Covariance = df_ar["Algo2_Daily_Returns"].cov(df_sr["SnP_TSX_60_Returns"])
# Display the covariance of each whale sub-portfolio
print("Algo1 Covariance: ", "%.16f" % Algo1_Covariance)
print("Algo2 Covariance: ", "%.16f" % Algo2_Covariance)
# covariance of algos portfolio (within the portfolio)
covariance_algo = df_ar.cov()
covariance_algo
# covariance of s&p 60 TSR portfolio
covariance_snp = df_sr.cov()
covariance_snp
# Calculate covariance of a single sub-portfolio streams in portfolios
# how each individual sub-portfolios covary with other sub-portfolios
# similar evaluation to correlation heat map
covariance_a = merged_daily_returns_df.cov()
covariance_a
# Calculate variance of S&P TSX
variance_snp = df_sr.var()
variance_snp
# Beta Values for Whales Sub-Portfolios
# Calculate beta of all daily returns of whale portfolio
Soros_beta = Whale_Soros_Covariance / variance_snp
Paulson_beta = Whale_Paulson_Covariance / variance_snp
Tiger_beta = Whale_Tiger_Covariance / variance_snp
Berekshire_beta = Whale_Berekshire_Covariance / variance_snp
# Display the covariance of each Whale sub-portfolio
print("Soros Beta: ", "%.16f" % Soros_beta)
print("Paulson Beta: ", "%.16f" % Paulson_beta)
print("Tiger Beta: ", "%.16f" % Tiger_beta)
print("Berekshire Beta: ", "%.16f" % Berekshire_beta)
print("--------------------")
Average_Whale_beta = (Soros_beta + Paulson_beta + Tiger_beta + Berekshire_beta)/4
print("Average Whale Beta: ", "%.16f" % Average_Whale_beta)
# Beta Values for Algos Sub-Portfolios
# Calculate beta of all daily returns of Algos portfolio
Algo1_beta = Algo1_Covariance / variance_snp
Algo2_beta = Algo2_Covariance / variance_snp
# Display the covariance of each Algos sub-portfolio
print("Algo1 Beta: ", "%.16f" % Algo1_beta)
print("Algo2 Beta: ", "%.16f" % Algo2_beta)
print("--------------------")
Average_Algo_beta = (Algo1_beta + Algo2_beta)/2
print("Average Algo Beta: ", "%.16f" % Average_Algo_beta)
# 21 day rolling covariance of algo portfolio stocks vs. S&P TSX 60
rolling_algo1_covariance = merged_daily_returns_df["Algo1_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
rolling_algo2_covariance = merged_daily_returns_df["Algo2_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
# 21 day rolling covariance of whale portfolio stocks vs. S&P TSX 60
rolling_Soros_covariance = merged_daily_returns_df["Whale_Soros_Fund_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
rolling_Paulson_covariance = merged_daily_returns_df["Whale_Paulson_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
rolling_Tiger_covariance = merged_daily_returns_df["Whale_Tiger_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
rolling_Berkshire_covariance = merged_daily_returns_df["Whale_Berekshire_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
# 21 day rolling S&P TSX 60 covariance
rolling_SnP_covariance = merged_daily_returns_df["SnP_TSX_60_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
# 21 day rolling variance of S&P TSX 60
rolling_variance = merged_daily_returns_df["SnP_TSX_60_Returns"].rolling(window=21).var()
# 21 day rolling beta of algo portfolio stocks vs. S&P TSX 60
rolling_algo1_beta = rolling_algo1_covariance / rolling_variance
rolling_algo2_beta = rolling_algo2_covariance / rolling_variance
# 21 day average beta for algo portfolio
rolling_average_algo_beta = (rolling_algo1_beta + rolling_algo2_beta)/2
# 21 day rolling beta of whale portfolio stocks vs. S&P TSX 60
rolling_Soros_beta = rolling_Soros_covariance / rolling_variance
rolling_Paulson_beta = rolling_Paulson_covariance / rolling_variance
rolling_Tiger_beta = rolling_Tiger_covariance / rolling_variance
rolling_Berkshire_beta = rolling_Berkshire_covariance / rolling_variance
rolling_SnP_Beta = rolling_SnP_covariance/ rolling_variance
# 21 day average beta for whale portfolio
rolling_average_whale_beta = (rolling_Soros_beta + rolling_Paulson_beta + rolling_Tiger_beta + rolling_Berkshire_beta)/4
# Set the figure and plot the different sub-portfolio covariance values as multiple trends on the same figure
ax = rolling_algo1_covariance.plot(figsize=(20, 10), title="Rolling 21 Day Covariance of Sub-Portfolio Returns vs. S&P TSX 60 Returns")
rolling_algo2_covariance.plot(ax=ax)
rolling_Soros_covariance.plot(ax=ax)
rolling_Paulson_covariance.plot(ax=ax)
rolling_Tiger_covariance.plot(ax=ax)
rolling_Berkshire_covariance.plot(ax=ax)
# Set the legend of the figure
ax.legend(["Algo1 Covariance", "Algo2 Covariance", "Whale Soros Covariance", "Whale Paulson Covariance", "Whale Tiger Covariance","Whale Berkshire Covariance"])
rolling_algo1_covariance.plot(figsize=(20, 10), title='Rolling 21 Day Covariance of Sub-Portfolio Returns vs. S&P TSX 60 Returns')
rolling_algo1_beta.describe()
# Plot beta trend
# Set the figure and plot the different sub-portfolio beta values as multiple trends on the same figure
ax = rolling_algo1_beta.plot(figsize=(20, 10), title="Rolling 21 Day Beta of Sub-Portfolio Returns vs. S&P TSX 60 Returns")
rolling_algo2_beta.plot(ax=ax)
rolling_Soros_beta.plot(ax=ax)
rolling_Paulson_beta.plot(ax=ax)
rolling_Tiger_beta.plot(ax=ax)
rolling_Berkshire_beta.plot(ax=ax)
rolling_SnP_Beta.plot(ax=ax)
# Set the legend of the figure
ax.legend(["Algo1 Beta", "Algo2 Beta", "Whale Soros Beta", "Whale Paulson Beta", "Whale Tiger Beta","Whale Berkshire Beta"])
```
## D. Rolling Statistics Challenge: Exponentially Weighted Average
An alternative way to calculate a rolling window is to take the exponentially weighted moving average. This is like a moving window average, but it assigns greater importance to more recent observations. Try calculating the [`ewm`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html) with a 21-day half-life.
### 1. Show mean and standard deviation exponentially weighted average
```
# Calculate the exponentially weighted mean with a 21-day half-life
ewm_21_mean = merged_daily_returns_df.ewm(halflife=21).mean()
ewm_21_mean.head()
ewm_21_mean.plot(figsize=(20, 10), title="Mean EWM")
ewm_21_std = merged_daily_returns_df.ewm(halflife=21).std()
ewm_21_std.head()
ewm_21_std.plot(figsize=(20, 10), title="Standard Deviation EWM")
```
### 2. EWM end of section discussion
Exponentially Weighted Mean analysis emphasises more recent events by applying an exponential "decay" to the weights going back in time, which grow smaller further back in time. In this instance a half-life of 21 days is used, meaning that at that point the weights have "decayed" exponentially to 50% of the most recent value. The theory behind this is that more recent events are more relevant to current market conditions and should therefore carry more weight.
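To make the decay concrete (this follows the standard pandas `ewm` semantics rather than anything specific to this dataset): with half-life $h$, the weight on an observation $k$ days old is proportional to $(1/2)^{k/h}$, which pandas implements with smoothing factor $\alpha = 1 - \exp(\ln(0.5)/h)$; for $h=21$ this gives $\alpha \approx 0.0325$.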
---
# III Sharpe Ratios
In reality, investment managers and their institutional investors look at the ratio of return-to-risk, and not just returns alone. After all, if you could invest in one of two portfolios, and each offered the same 10% return, yet one offered lower risk, you'd take that one, right?
### Using the daily returns, calculate and visualise the Sharpe ratios using a bar plot
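The annualised Sharpe ratio computed below (with the risk-free rate assumed to be zero) is $S=\frac{252\,\bar{r}}{\sigma_{daily}\sqrt{252}}=\sqrt{252}\,\frac{\bar{r}}{\sigma_{daily}}$, where $\bar{r}$ is the mean daily return and $\sigma_{daily}$ is the daily standard deviation of returns.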
```
# Annualised Sharpe Ratios
sharpe_ratios = (merged_daily_returns_df.mean() * 252) / (merged_daily_returns_df.std() * np.sqrt(252))
sharpe_ratios.sort_values(ascending = False)
# Visualize the sharpe ratios as a bar plot
# Plot sharpe ratios
sharpe_ratios.plot(kind="bar", title="Sharpe Ratios")
```
### [[TODO]] Get individual portfolio average Sharpe ratios to compare overall portfolio types
```
# Compare portfolio types by their average annualised Sharpe ratio
# (column groupings assume the merged dataframe column names used earlier)
whale_cols = ['Whale_Soros_Fund_Daily_Returns', 'Whale_Paulson_Daily_Returns',
              'Whale_Tiger_Daily_Returns', 'Whale_Berekshire_Daily_Returns']
algo_cols = ['Algo1_Daily_Returns', 'Algo2_Daily_Returns']
# Annualised Sharpe ratio per sub-portfolio
sharpe_all = (merged_daily_returns_df.mean() * 252) / (merged_daily_returns_df.std() * np.sqrt(252))
# Average Sharpe ratio per portfolio type
print("Average Whale Sharpe: ", sharpe_all[whale_cols].mean())
print("Average Algo Sharpe: ", sharpe_all[algo_cols].mean())
print("S&P TSX 60 Sharpe: ", sharpe_all['SnP_TSX_60_Returns'])
```
### Determine whether the algorithmic strategies outperform both the market (S&P TSX 60) and the whales portfolios.
Write your answer here!
---
# Create Custom Portfolio
In this section, you will build your own portfolio of stocks, calculate the returns, and compare the results to the Whale Portfolios and the S&P TSX 60.
1. Choose 3-5 custom stocks with at least 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.
2. Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock.
3. Join your portfolio returns to the DataFrame that contains all of the portfolio returns.
4. Re-run the performance and risk analysis with your portfolio to see how it compares to the others.
5. Include correlation analysis to determine which stocks (if any) are correlated.
## Choose 3-5 custom stocks with at least 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.
For this demo solution, we fetch data for three companies listed in the S&P TSX 60 index.
* `SHOP` - [Shopify Inc](https://en.wikipedia.org/wiki/Shopify)
* `OTEX` - [Open Text Corporation](https://en.wikipedia.org/wiki/OpenText)
* `L` - [Loblaw Companies Limited](https://en.wikipedia.org/wiki/Loblaw_Companies)
```
merged_daily_returns_df.head(1)
```
## A. Get Daily Returns for Shopify Stocks
### 1. Read in csv shopify data
```
# Reading data from 1st stock - shopify
df_shop = pd.read_csv('Resources/Shopify.csv')
```
### 2. Inspect data
```
df_shop.shape
df_shop.head(3)
df_shop.dtypes
df_shop.count()
```
### 3. Convert date to index
```
df_shop['Date']= pd.to_datetime(df_shop['Date']).dt.strftime('%Y-%m-%d')
df_shop.head(3)
# set date as index
df_shop.set_index('Date', inplace=True)
df_shop.head(3)
```
### 4. Remove unwanted columns
```
df_shop.columns
df_shop = df_shop[['Close']]
```
### 5. Sort date index ascending just in case
```
df_shop.sort_index(inplace=True) # probably not necessary, but just in case
```
### 6. Get daily returns & drop the closing price column
```
df_shop['Shop_Daily_Returns'] = df_shop['Close'].pct_change()
df_shop = df_shop[['Shop_Daily_Returns']]
```
### 7. Review and drop nulls
```
df_shop.isna().sum() # first row would be null
df_shop.dropna(inplace=True)
df_shop.isna().sum() #null should be gone
```
## B. Get Daily Returns For Open Text Stocks
### 1. Read in csv for Open Text
```
# Reading data from 2nd stock - Otex
df_otex = pd.read_csv('Resources/Otex.csv')
```
### 2. Inspect dataframe
```
df_otex.head(3)
df_otex.tail(3)
df_otex.shape
df_otex.count()
```
### 3. Convert date to index
```
df_otex['Date']= pd.to_datetime(df_otex['Date']).dt.strftime('%Y-%m-%d')
# set date as index
df_otex.set_index('Date', inplace=True)
```
### 4. Remove unwanted columns - declutter
```
df_otex = df_otex[['Close']]
```
### 5. Sort date index ascending just in case
```
df_otex.sort_index(inplace=True) # probably not necessary, but just in case
```
### 6. Get daily returns & drop the closing price column
```
df_otex['Otex_Daily_Returns'] = df_otex['Close'].pct_change()
df_otex = df_otex[['Otex_Daily_Returns']]
```
### 7. Review and drop nulls
```
df_otex.isna().sum() # first row would be null
df_otex.dropna(inplace=True)
```
## C. Get Returns for Loblaw Stocks
### 1. Read in csv for Loblaw
```
# Reading data from 3rd stock - Loblaw
df_lob = pd.read_csv('Resources/TSE_L.csv')
```
### 2. Inspect dataframe
```
df_lob.head(3)
df_lob.tail(3)
df_lob.shape
df_lob.dtypes
```
### 3. Convert date to index
```
df_lob['Date']= pd.to_datetime(df_lob['Date']).dt.strftime('%Y-%m-%d')
# set date as index
df_lob.set_index('Date', inplace=True)
```
### 4. Remove unwanted columns - declutter
```
df_lob = df_lob[['Close']]
```
### 5. Sort date index ascending just in case
```
df_lob.sort_index(inplace=True) # probably not necessary, but just in case
```
### 6. Get daily returns & drop the closing price column
```
df_lob['Loblaw_Daily_Returns'] = df_lob['Close'].pct_change()
df_lob = df_lob[['Loblaw_Daily_Returns']]
```
### 7. Review and drop nulls
```
df_lob.isna().sum() # first row would be null
df_lob.dropna(inplace = True)
```
### 8. Have a final look at the data - plot and describe
```
df_lob.describe()
df_lob.plot() # have a quick look that it is centred around zero
```
## D. Concat New Stock Daily Returns into Single Dataframe
### 1. Perform inner concat to ensure dates line up
```
# Use the `concat` function to combine the two DataFrames by matching indexes (or in this case `Date`)
merged_analysis_newstock_df_tmp = pd.concat([df_lob, df_otex], axis="columns", join="inner")
merged_newstock_daily_returns_df = pd.concat([merged_analysis_newstock_df_tmp, df_shop], axis="columns", join="inner")
```
### 2. Inspect data in newly merged
```
merged_newstock_daily_returns_df.head(5)
merged_newstock_daily_returns_df.tail(5)
merged_newstock_daily_returns_df.shape
merged_newstock_daily_returns_df.dtypes
merged_newstock_daily_returns_df.index.dtype # check the data type of the index
merged_newstock_daily_returns_df.isna().sum() # no nulls found, already removed in last step
```
### 3. Plot Merged Daily Returns Data
```
drp = merged_newstock_daily_returns_df.plot(figsize=(20,10), rot=45, title='Comparison of Daily Returns on Stocks in New Portfolio')
drp.set_xlabel("Date")
drp.set_ylabel("Daily Returns")
```
## E. Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock
### 1. Portfolio returns - set equal weights across the 3 new stocks
```
# Set weights
weights = [1/3, 1/3, 1/3]
```
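With weights $w_{i}$ that sum to 1, the portfolio daily return is the weighted sum $r_{p}=\sum_{i} w_{i} r_{i}$; the `dot` product in the next step computes exactly this, row by row.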
### 2. Calculate portfolio return
```
# use the dot function to cross-multiply the daily returns of individual stocks against the weights
portfolio_newstock_daily_returns_df = merged_newstock_daily_returns_df.dot(weights)
portfolio_newstock_daily_returns_df.shape
portfolio_newstock_daily_returns_df.head()
```
### 3. Plot Portfolio Returns
```
drp = portfolio_newstock_daily_returns_df.plot(figsize=(20,10), rot=45, title='Comparison of Daily Portfolio Returns on Stocks in New Portfolio')
drp.set_xlabel("Date")
drp.set_ylabel("Portfolio Daily Returns")
```
## F. Join your portfolio returns to the DataFrame that contains all of the portfolio returns
```
# Join your returns DataFrame to the original returns DataFrame
merged_orig_vs_new_returns = pd.concat([merged_daily_returns_df, portfolio_newstock_daily_returns_df], axis="columns", join="inner")
len(merged_orig_vs_new_returns) # there are 966 overlapping dates after the inner join
merged_orig_vs_new_returns.head()
# Only compare dates where return data exists for all the stocks (drop NaNs)
drp = merged_orig_vs_new_returns.plot(figsize=(20,10), rot=45, title='Comparison of Daily Portfolio Returns on Stocks in New Portfolio')
drp.set_xlabel("Date")
drp.set_ylabel("Portfolio Daily Returns")
```
## G. Re-run the risk analysis with your portfolio to see how it compares to the others
### 1. Calculate the Annualized Standard Deviation
```
# Daily standard deviation of new and old daily returns sorted in ascending order
daily_std_new_old = merged_orig_vs_new_returns.std()
daily_std_new_old.sort_values()
nomcrb = daily_std_new_old.plot.hist(figsize=(20,10), rot=45, title='Comparison of Standard Deviation of Daily Returns on New vs Original Stock Portfolios')
nomcrb.set_xlabel("Daily Returns Standard Deviation")
nomcrb.set_ylabel("Frequency")
# Calculate the annualised `std`
# Calculate the annualised standard deviation (252 trading days)
annualized_new_old_std = daily_std_new_old * np.sqrt(252)
annualized_new_old_std.sort_values()
# plot annualised standard deviation old vs new
nomcrb = annualized_new_old_std.plot.hist(figsize=(20,10), rot=45, title='Comparison of Annualised Standard Deviation of Daily Returns on New vs Original Stock Portfolios')
```
### 2. Calculate and plot rolling `std` with 21-day window
```
# Calculate rolling standard deviation
# Calculate the rolling standard deviation for all portfolios using a 21-day window
roll21_old_new_std = merged_orig_vs_new_returns.rolling(window=21).std()
roll21_old_new_std
# Plot the rolling standard deviation of all daily returns (not closing prices)
rollsp = roll21_old_new_std.plot(figsize=(20,10), rot=45, title='21 Day Rolling Standard Deviation on Old vs New Daily Returns on Stock Portfolios')
rollsp.set_xlabel("21 Day Rolling Dates")
rollsp.set_ylabel("Standard Deviation")
```
### 3. Calculate and plot the correlation
```
# Calculate and plot the correlation
# Calculate the correlation between each column
correlation_oldnew = merged_orig_vs_new_returns.corr()
# Sort each portfolio's correlation with the S&P TSX 60
correlation_oldnew['SnP_TSX_60_Returns'].sort_values(ascending=False)
# Display correlation matrix
import matplotlib.pyplot as plt
import seaborn as sns
fig = plt.gcf()
# Set the title
plt.title('Inter-Portfolio Correlations')
# Change seaborn plot size
fig.set_size_inches(12, 8)
sns.heatmap(correlation_oldnew, vmin=-1, vmax=1)
```
### 4. Calculate and Plot the 60-day Rolling Beta for Your Portfolio compared to the S&P TSX 60
```
# Calculate and plot Beta
# Covariance of Whales against SnP TSX 60 Return
New_Stocks_Covariance = df_wr["Whale_Berekshire_Daily_Returns"].cov(df_sr["SnP_TSX_60_Returns"])
# Display the covariance of each whale sub-portfolio
print("Soros Covariance: ", "%.16f" % Whale_Soros_Covariance)
print("Paulson Covariance: ", "%.16f" % Whale_Paulson_Covariance)
print("Tiger Covariance: ", "%.16f" % Whale_Tiger_Covariance)
print("Berekshire Covariance: ", "%.16f" % Whale_Berekshire_Covariance)
# Covariance of Whales against SnP TSX 60 Returns
Algo1_Covariance = df_ar["Algo1_Daily_Returns"].cov(df_sr["SnP_TSX_60_Returns"])
Algo2_Covariance = df_ar["Algo2_Daily_Returns"].cov(df_sr["SnP_TSX_60_Returns"])
# Display the covariance of each whale sub-portfolio
print("Algo1 Covariance: ", "%.16f" % Algo1_Covariance)
print("Algo2 Covariance: ", "%.16f" % Algo2_Covariance
# covariance of algos portfolio (within the portfolio)
covariance_algo = df_ar.cov()
covariance_algo
# covariance of s&p 60 TSR portfolio
covariance_snp = df_sr.cov()
covariance_snp
# Calculate covariance of a single sub-portfolio streams in portfolios
# how each individual sub-portfolios covary with other sub-portfolios
# similar evaluation to correlation heat map
covariance_a = merged_daily_returns_df.cov()
covariance_a
# Calculate variance of S&P TSX
variance_snp = df_sr.var()
variance_snp
# Beta Values for Whales Sub-Portfolios
# Calculate beta of all daily returns of whale portfolio
Soros_beta = Whale_Soros_Covariance / variance_snp
Paulson_beta = Whale_Paulson_Covariance / variance_snp
Tiger_beta = Whale_Tiger_Covariance / variance_snp
Berekshire_beta = Whale_Berekshire_Covariance / variance_snp
# Display the covariance of each Whale sub-portfolio
print("Soros Beta: ", "%.16f" % Soros_beta)
print("Paulson Beta: ", "%.16f" % Paulson_beta)
print("Tiger Beta: ", "%.16f" % Tiger_beta)
print("Berekshire Beta: ", "%.16f" % Berekshire_beta)
print("--------------------")
Average_Whale_beta = (Soros_beta + Paulson_beta + Tiger_beta + Berekshire_beta)/4
print("Average Whale Beta: ", "%.16f" % Average_Whale_beta)
# Beta Values for Algos Sub-Portfolios
# Calculate beta of all daily returns of Algos portfolio
Algo1_beta = Algo1_Covariance / variance_snp
Algo2_beta = Algo2_Covariance / variance_snp
# Display the covariance of each Algos sub-portfolio
print("Algo1 Beta: ", "%.16f" % Algo1_beta)
print("Algo2 Beta: ", "%.16f" % Algo2_beta)
print("--------------------")
Average_Algo_beta = (Algo1_beta + Algo2_beta)/2
print("Average Algo Beta: ", "%.16f" % Average_Algo_beta)
# 21 day rolling covariance of algo portfolio stocks vs. S&P TSX 60
rolling_algo1_covariance = merged_daily_returns_df["Algo1_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
rolling_algo2_covariance = merged_daily_returns_df["Algo2_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
# 21 day rolling covariance of whale portfolio stocks vs. S&P TSX 60
rolling_Soros_covariance = merged_daily_returns_df["Whale_Soros_Fund_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
rolling_Paulson_covariance = merged_daily_returns_df["Whale_Paulson_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
rolling_Tiger_covariance = merged_daily_returns_df["Whale_Tiger_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
rolling_Berkshire_covariance = merged_daily_returns_df["Whale_Berekshire_Daily_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
# 21 day rolling S&P TSX 60 covariance
rolling_SnP_covariance = merged_daily_returns_df["SnP_TSX_60_Returns"].rolling(window=21, min_periods=1).cov(df_sr["SnP_TSX_60_Returns"])
# 21 day rolling variance of S&P TSX 60
rolling_variance = merged_daily_returns_df["SnP_TSX_60_Returns"].rolling(window=21).var()
# 21 day rolling beta of algo portfolio stocks vs. S&P TSX 60
rolling_algo1_beta = rolling_algo1_covariance / rolling_variance
rolling_algo2_beta = rolling_algo2_covariance / rolling_variance
# 21 day average beta for algo portfolio
rolling_average_algo_beta = (rolling_algo1_beta + rolling_algo2_beta)/2
# 21 day rolling beta of whale portfolio stocks vs. S&P TSX 60
rolling_Soros_beta = rolling_Soros_covariance / rolling_variance
rolling_Paulson_beta = rolling_Paulson_covariance / rolling_variance
rolling_Tiger_beta = rolling_Tiger_covariance / rolling_variance
rolling_Berkshire_beta = rolling_Berkshire_covariance / rolling_variance
rolling_SnP_Beta = rolling_SnP_covariance/ rolling_variance
# 21 day average beta for whale portfolio
rolling_average_whale_beta = (rolling_Soros_beta + rolling_Paulson_beta + rolling_Tiger_beta + rolling_Berkshire_beta)/4
# Set the figure and plot the different sub-portfolio covariance values as multiple trends on the same figure
ax = rolling_algo1_covariance.plot(figsize=(20, 10), title="Rolling 21 Day Covariance of Sub-Portfolio Returns vs. S&P TSX 60 Returns")
rolling_algo2_covariance.plot(ax=ax)
rolling_Soros_covariance.plot(ax=ax)
rolling_Paulson_covariance.plot(ax=ax)
rolling_Tiger_covariance.plot(ax=ax)
rolling_Berkshire_covariance.plot(ax=ax)
# Set the legend of the figure
ax.legend(["Algo1 Covariance", "Algo2 Covariance", "Whale Soros Covariance", "Whale Paulson Covariance", "Whale Tiger Covariance","Whale Berkshire Covariance"])
```
### 5. Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot
```
# Calculate Annualised Sharpe Ratios
# Annualised Sharpe Ratios
sharpe_ratios_old_new = (merged_orig_vs_new_returns.mean() * 252) / (merged_orig_vs_new_returns.std() * np.sqrt(252))
sharpe_ratios_old_new.sort_values(ascending=False)
# Visualise the sharpe ratios as a bar plot
# Plot sharpe ratios
sharpe_ratios_old_new.plot(kind="bar", title="Sharpe Ratios")
```
### 6. How does your portfolio do?
The individual risk appetite of the investor needs to be taken into consideration. For relatively low risk and reasonable returns, a market index such as the S&P is a good option.
Others who have more liquid cash to gamble might simply ignore risk and go for the highest short-term rewards.
However, overall it is possible to regard the best stock as the one that offers the best return for the lowest risk. This is what the Sharpe ratio uncovers. For that reason, of all the portfolios, the best investment would be the one with the highest Sharpe ratio.
More data would be needed to see to what extent the internal portfolio stocks, e.g. within Algo 1, correlate with one another. Highly correlated stocks move together, and the net effect is an amplification of gains and losses. From a risk mitigation perspective, highly correlated stock portfolios can be diversified with stocks that have no or negative correlation.
## References
* Shift function in pandas - https://stackoverflow.com/questions/20000726/calculate-daily-returns-with-pandas-dataframe
* Conditional line color:
  * https://stackoverflow.com/questions/31590184/plot-multicolored-line-based-on-conditional-in-python
  * https://stackoverflow.com/questions/40803570/python-matplotlib-scatter-plot-specify-color-points-depending-on-conditions/40804861
  * https://stackoverflow.com/questions/42453649/conditional-color-with-matplotlib-scatter
  * https://stackoverflow.com/questions/3832809/how-to-change-the-color-of-a-single-bar-if-condition-is-true-matplotlib
  * https://stackoverflow.com/questions/56779975/conditional-coloring-in-matplotlib-using-numpys-where
* Google Finance - https://support.google.com/docs/answer/3093281?hl=en
* Boxplots - https://towardsdatascience.com/understanding-boxplots-5e2df7bcbd51
* EWM - https://www.youtube.com/watch?v=lAq96T8FkTw
* PEP 8 Standards - https://www.python.org/dev/peps/pep-0008/
# Instructions: Unit 4 Homework Assignment: A Whale Off the Port(folio)

## Background
Harold's company has been investing in algorithmic trading strategies. Some of the investment managers love them, some hate them, but they all think their way is best.
You just learned these quantitative analysis techniques with Python and Pandas, so Harold has come to you with a challenge—to help him determine which portfolio is performing the best across multiple areas: volatility, returns, risk, and Sharpe ratios.
You need to create a tool (an analysis notebook) that analyzes and visualizes the major metrics of the portfolios across all of these areas, and determine which portfolio outperformed the others. You will be given the historical daily returns of several portfolios: some from the firm's algorithmic portfolios, some that represent the portfolios of famous "whale" investors like Warren Buffett, and some from the big hedge and mutual funds. You will then use this analysis to create a custom portfolio of stocks and compare its performance to that of the other portfolios, as well as the larger market ([S&P TSX 60 Index](https://en.wikipedia.org/wiki/S%26P/TSX_60)).
For this homework assignment, you have three main tasks:
1. [Read in and Wrangle Returns Data](#Prepare-the-Data)
2. [Determine Success of Each Portfolio](#Conduct-Quantitative-Analysis)
3. [Choose and Evaluate a Custom Portfolio](#Create-a-Custom-Portfolio)
---
## Instructions
**Files:**
* [Whale Analysis Starter Code](Starter_Code/whale_analysis.ipynb)
* [algo_returns.csv](Starter_Code/Resources/algo_returns.csv)
* [otex_historical.csv](Starter_Code/Resources/otex_historical.csv)
* [sp_tsx_history.csv](Starter_Code/Resources/sp_tsx_history.csv)
* [l_historical.csv](Starter_Code/Resources/l_historical.csv)
* [shop_historical.csv](Starter_Code/Resources/shop_historical.csv)
* [whale_returns.csv](Starter_Code/Resources/whale_returns.csv)
### Prepare the Data
First, read and clean several CSV files for analysis. The CSV files include whale portfolio returns, algorithmic trading portfolio returns, and S&P TSX 60 Index historical prices. Use the starter code to complete the following steps:
1. Use Pandas to read the following CSV files as a DataFrame. Be sure to convert the dates to a `DateTimeIndex`.
* `whale_returns.csv`: Contains returns of some famous "whale" investors' portfolios.
* `algo_returns.csv`: Contains returns from the in-house trading algorithms from Harold's company.
* `sp_tsx_history.csv`: Contains historical closing prices of the S&P TSX 60 Index.
2. Detect and remove null values.
3. If any columns have dollar signs or characters other than numeric values, remove those characters and convert the data types as needed.
4. The whale portfolios and algorithmic portfolio CSV files contain daily returns, but the S&P TSX 60 CSV file contains closing prices. Convert the S&P TSX 60 closing prices to daily returns.
5. Join `Whale Returns`, `Algorithmic Returns`, and the `S&P TSX 60 Returns` into a single DataFrame with columns for each portfolio's returns.

### Conduct Quantitative Analysis
Analyze the data to see if any of the portfolios outperform the stock market (i.e., the S&P TSX 60).
#### Performance Analysis
1. Calculate and plot daily returns of all portfolios.
2. Calculate and plot cumulative returns for all portfolios. Does any portfolio outperform the S&P TSX 60?
#### Risk Analysis
1. Create a box plot for each of the returns.
2. Calculate the standard deviation for each portfolio.
3. Determine which portfolios are riskier than the S&P TSX 60
4. Calculate the Annualized Standard Deviation.
#### Rolling Statistics
1. Calculate and plot the rolling standard deviation for all portfolios using a 21-day window.
2. Calculate and plot the correlation between each stock to determine which portfolios may mimic the S&P TSX 60.
3. Choose one portfolio, then calculate and plot the 60-day rolling beta for it and the S&P TSX 60.
#### Rolling Statistics Challenge: Exponentially Weighted Average
An alternative method to calculate a rolling window is to take the exponentially weighted moving average. This is like a moving window average, but it assigns greater importance to more recent observations. Try calculating the [`ewm`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html) with a 21-day half-life.
### Sharpe Ratios
Investment managers and their institutional investors look at the return-to-risk ratio, not just the returns. After all, if you have two portfolios that each offer a 10% return, yet one is lower risk, you would invest in the lower-risk portfolio, right?
1. Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot.
2. Determine whether the algorithmic strategies outperform both the market (S&P TSX 60) and the whales portfolios.
### Create a Custom Portfolio
Harold is ecstatic that you were able to help him prove that the algorithmic trading portfolios are doing so well compared to the market and whales portfolios. However, now you are wondering whether you can choose your own portfolio that performs just as well as the algorithmic portfolios. Investigate by doing the following:
1. Visit [Google Sheets](https://docs.google.com/spreadsheets/) and use the built-in Google Finance function to choose 3-5 stocks for your portfolio.
2. Download the data as CSV files and calculate the portfolio returns.
3. Calculate the weighted returns for your portfolio, assuming an equal number of shares per stock.
4. Add your portfolio returns to the DataFrame with the other portfolios.
5. Run the following analyses:
* Calculate the Annualized Standard Deviation.
* Calculate and plot rolling `std` with a 21-day window.
* Calculate and plot the correlation.
* Calculate and plot the 60-day rolling beta for your portfolio compared to the S&P 60 TSX.
* Calculate the Sharpe ratios and generate a bar plot.
6. How does your portfolio do?
---
## Resources
* [Pandas API Docs](https://pandas.pydata.org/pandas-docs/stable/reference/index.html)
* [Exponential weighted function in Pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html)
* [`GOOGLEFINANCE` function help](https://support.google.com/docs/answer/3093281)
* [Supplemental Guide: Fetching Stock Data Using Google Sheets](../../../01-Lesson-Plans/04-Pandas/Supplemental/googlefinance_guide.md)
---
## Hints
* After reading each CSV file, don't forget to sort each DataFrame in ascending order by the Date using `sort_index`. This is especially important when working with time series data, as we want to make sure Date indexes go from earliest to latest.
* The Pandas functions used in class this week will be useful for this assignment.
* Be sure to use `head()` or `tail()` when you want to look at your data, but don't want to print to a large DataFrame.
---
## Submission
1. Use the provided starter Jupyter Notebook to house the code for your data preparation, analysis, and visualizations. Put any analysis or answers to assignment questions in raw text (markdown) cells in the report.
2. Submit your notebook to a new GitHub repository.
3. Add the URL of your GitHub repository to your assignment when submitting via Bootcamp Spot.
---
© 2020 Trilogy Education Services, a 2U, Inc. brand. All Rights Reserved.
---
```
from sklearn.datasets import load_iris # iris dataset
from sklearn import tree # for fitting model
# for the particular visualization used
from six import StringIO
import pydot
import os.path
# to display graphs
%matplotlib inline
import matplotlib.pyplot
# get dataset
iris = load_iris()
iris.keys()
import pandas
iris_df = pandas.DataFrame(iris.data)
iris_df.columns = iris.feature_names
iris_df['target'] = [iris.target_names[target] for target in iris.target]
iris_df.head()
iris_df.describe()
print(iris_df)
# choose two features to plot
x_feature = 0
y_feature = 3
#x = list(list(zip(*iris.data))[x_feature])
#y = list(list(zip(*iris.data))[y_feature])
x = iris.data[:, x_feature]
y = iris.data[:, y_feature]
# The data are in order by type (types of irises). Find out the border indexes of the types.
end_type_one = list(iris.target).index(1)
end_type_two = list(iris.target).index(2)
fig = matplotlib.pyplot.figure() # create graph
fig.suptitle('Two Features of the Iris Data Set') # set title
# set axis labels
matplotlib.pyplot.xlabel(iris.feature_names[x_feature])
matplotlib.pyplot.ylabel(iris.feature_names[y_feature])
# put the input data on the graph, with different colors and shapes for each type
scatter_0 = matplotlib.pyplot.scatter(x[:end_type_one], y[:end_type_one],
c="red", marker="o", label=iris.target_names[0])
scatter_1 = matplotlib.pyplot.scatter(x[end_type_one:end_type_two], y[end_type_one:end_type_two],
c="blue", marker="^", label=iris.target_names[1])
scatter_2 = matplotlib.pyplot.scatter(x[end_type_two:], y[end_type_two:],
c="green", marker="*", label=iris.target_names[2])
matplotlib.pyplot.legend(handles=[scatter_0, scatter_1, scatter_2]) # make legend
matplotlib.pyplot.show() # show the graph
print(iris.data)
print(x)
decision_tree = tree.DecisionTreeClassifier() # make model
decision_tree.fit(iris.data, iris.target) # fit model to data
# make pdf diagram of decision tree
dot_data = StringIO()
tree.export_graphviz(decision_tree, out_file=dot_data, feature_names=iris.feature_names, class_names=iris.target_names,
filled=True, rounded=True, special_characters=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())[0]
graph.write_pdf(os.path.expanduser("~/Desktop/introToML/ML/New Jupyter Notebooks/iris_decision_tree_regular.pdf"))
inputs = [iris.data[0], iris.data[end_type_one], iris.data[end_type_two]] # use the first input of each class
print('Class predictions: {0}'.format(list(iris.target_names[prediction] for prediction in decision_tree.predict(inputs)))) # print predictions
print('Probabilities:\n{0}'.format(decision_tree.predict_proba(inputs))) # print prediction probabilities
```
# Exercise Option #1 - Standard Difficulty
0. Submit the PDF you generated as a separate file in Canvas.
1. According to the PDF, a petal width <= 0.8 cm would tell you with high (100%) probability that you are looking at a setosa iris.
2. According to the PDF, you're supposed to look at the petal length, petal width, and sepal length to tell a virginica from a versicolor.
3. The array value at each node in the pdf shows how many data values of each class passed through the node.
4. The predictions always have a 100% probability because any data value you give will end up at one leaf node, and each leaf node here holds a single class prediction.
5. Below I use a subset of the features (3/4). The new decision tree was completely different than the original: it had more nodes and a different overall shape. When looking at the original decision tree, most of the nodes separated data based on petal length or petal width. The one feature that the new tree does not use is petal width, which is the most likely cause for why the second tree had to use more nodes (it lacked a feature that would make it easy to distinguish the classes).
```
# Use 3/4 columns (the first, second, & third)
first_feature = 0
second_feature = 1
third_feature = 2
iris_inputs = iris.data[:,[first_feature, second_feature, third_feature]] # use only three columns of the data
decision_tree_with_portion = tree.DecisionTreeClassifier() # make model
decision_tree_with_portion.fit(iris_inputs, iris.target) # fit model to data
# make pdf diagram of decision tree
dot_data = StringIO()
tree.export_graphviz(decision_tree_with_portion, out_file=dot_data, feature_names=iris.feature_names[:3], class_names=iris.target_names,
filled=True, rounded=True, special_characters=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())[0]
graph.write_pdf(os.path.expanduser("~/Desktop/introToML/ML/New Jupyter Notebooks/iris_decision_tree_with_portion.pdf"))
new_inputs = [iris_inputs[0], iris_inputs[end_type_one], iris_inputs[end_type_two]] # make new inputs with iris_inputs, which only has two features per input
print('Class predictions: {0}'.format(list(iris.target_names[prediction] for prediction in decision_tree_with_portion.predict(new_inputs)))) # print predictions
print('Probabilities:\n{0}'.format(decision_tree_with_portion.predict_proba(new_inputs))) # print prediction probabilities
```
# Exercise Option #2 - Advanced Difficulty
Try fitting a Random Forest model to the iris data. See [this example](http://scikit-learn.org/stable/modules/ensemble.html#forest).
As seen below, the random forest & decision tree had the same F1 score (a perfect 1.0), meaning that they performed the same.
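Note that both models are scored on the same data they were trained on, so a perfect F1 score mainly shows that the models can memorize the training set. A fairer comparison would use held-out data; below is a sketch using scikit-learn's 5-fold cross-validation (this is an addition, not part of the original exercise):
```
from sklearn import tree
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

iris = load_iris()
# 5-fold cross-validated weighted F1 for both models
tree_cv = cross_val_score(tree.DecisionTreeClassifier(), iris.data, iris.target, cv=5, scoring='f1_weighted')
forest_cv = cross_val_score(RandomForestClassifier(), iris.data, iris.target, cv=5, scoring='f1_weighted')
print('Decision tree CV F1: {:.3f}'.format(tree_cv.mean()))
print('Random forest CV F1: {:.3f}'.format(forest_cv.mean()))
```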
```
# https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html?highlight=random%20forest#sklearn.ensemble.RandomForestClassifier
from sklearn.ensemble import RandomForestClassifier
rand_forst = RandomForestClassifier() # make model
rand_forst = rand_forst.fit(iris.data, iris.target) # fit model
print('Class predictions: {0}'.format(list(iris.target_names[prediction] for prediction in rand_forst.predict(inputs)))) # print class predictions
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html?highlight=f1#sklearn.metrics.f1_score
from sklearn.metrics import f1_score
# get predictions for whole dataset
decision_tree_predictions = decision_tree.predict(iris.data)
rand_forst_predictions = rand_forst.predict(iris.data)
# print F1 scores
print ('Decision tree F1 score: {}'.format(f1_score(iris.target, decision_tree_predictions, average='weighted')))
print ('Random forest F1 score: {}'.format(f1_score(iris.target, rand_forst_predictions, average='weighted')))
```
---
# <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 7</font>
## Download: http://github.com/dsacademybr
```
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
```
## Mission: Analyze Consumer Purchasing Behavior.
## Difficulty Level: High
You have been tasked with analyzing purchase data from a website! The data is in JSON format and is available along with this notebook.
On the site, each user logs in with a personal account and can purchase products while browsing the product list. Each product has a sale price. Age and gender data for each user were collected and are provided in the JSON file.
Your job is to deliver an analysis of consumer purchasing behavior. This is a common type of task performed by Data Scientists, and the result of this work can be used, for example, to feed a Machine Learning model and make predictions about future behavior.
But in this mission you will analyze consumer purchasing behavior using the Pandas package of the Python language, and your final report must include each of the following items:
**Consumer Count**
* Total number of consumers
**Overall Purchase Analysis**
* Number of unique items
* Average purchase price
* Total number of purchases
* Total revenue
**Demographic Information by Gender**
* Percentage and count of male buyers
* Percentage and count of female buyers
* Percentage and count of other / undisclosed
**Purchase Analysis by Gender**
* Number of purchases
* Average purchase price
* Total purchase value
* Purchases by age group
**Identify the top 5 buyers by total purchase value, then list (in a table):**
* Login
* Number of purchases
* Average purchase price
* Total purchase value
* Most popular items
**Identify the 5 most popular items by purchase count, then list (in a table):**
* Item ID
* Item name
* Number of purchases
* Item price
* Total purchase value
* Most profitable items
**Identify the 5 most profitable items by total purchase value, then list (in a table):**
* Item ID
* Item name
* Number of purchases
* Item price
* Total purchase value
**Final considerations:**
* Your script must work for the provided dataset.
* You must use the Pandas library and Jupyter Notebook.
```
# Imports
import pandas as pd
import numpy as np
# Load the file
load_file = "dados_compras.json"
df = pd.read_json(load_file, orient = "records")
df.head()
```
## Consumer Information
```
df
len(df['Login'].unique())
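# equivalent one-liner (same count): df['Login'].nunique()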
```
### The website had a total of 573 different logins making purchases
## Overall Purchase Analysis
- Number of unique items
- Average purchase price
- Total number of purchases
- Total revenue
### The following items were sold
```
for prod in df['Nome do Item'].unique():
    print(prod)
```
### The total number of purchases was 780, each with an average value of 2.93
```
df.describe()
df['Valor'].sum()
```
### Totaling revenue of 2,286.33
## Demographic Analysis
## Demographic Information by Gender
- Percentage and count of male buyers
- Percentage and count of female buyers
- Percentage and count of other / undisclosed
```
dfLoginSex = df[['Login', 'Sexo']].drop_duplicates()
dfLoginSex['Sexo'].value_counts()
total = 465 + 100 + 8
Masc = 465
percMasc = 100*Masc / total
Fem = 100
percFem = 100 * Fem / total
outro = 8
percOutro = 100 * outro / total
print(f'There were a total of {total} consumers: \n \
- {Masc} male ({percMasc:.2f}%) \n \
- {Fem} female ({percFem:.2f}%) \n \
- {outro} other / undisclosed ({percOutro:.2f}%)')
```
## Purchase Analysis by Gender
- Number of purchases
- Average purchase price
- Total purchase value
- Purchases by age group
```
# use .loc (rather than chained indexing) to write the age-group labels reliably
df['FaixaEtaria'] = ''
for i, v in enumerate(df['Idade']):
    if v < 18:
        df.loc[i, 'FaixaEtaria'] = 'Até 18 anos'
    elif 18 <= v <= 25:
        df.loc[i, 'FaixaEtaria'] = '18 - 25 anos'
    elif 25 < v <= 40:
        df.loc[i, 'FaixaEtaria'] = '26 - 40 anos'
    elif 40 < v <= 60:
        df.loc[i, 'FaixaEtaria'] = '41 - 60 anos'
    elif v > 60:
        df.loc[i, 'FaixaEtaria'] = 'Mais de 60 anos'
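# A vectorized alternative (a sketch, assuming integer ages and the same bins):
# bins = [0, 17, 25, 40, 60, np.inf]
# labels = ['Até 18 anos', '18 - 25 anos', '26 - 40 anos', '41 - 60 anos', 'Mais de 60 anos']
# df['FaixaEtaria'] = pd.cut(df['Idade'], bins=bins, labels=labels)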
df_sexGroup = df[['Idade', 'Valor', 'FaixaEtaria']].groupby(by = df['Sexo'])
df_sexGroup.describe()
df_sexGroup['Valor'].sum()
df_sexGroup['FaixaEtaria'].value_counts()
df_FaixaEtariaGroup = df[['Idade', 'Valor', 'Sexo']].groupby(by=df['FaixaEtaria'])
sexo = df['Sexo'].unique()
df_FaixaMasc = df[['Idade', 'Valor', 'Sexo']].query(f"Sexo == '{sexo[0]}'").groupby(by=df['FaixaEtaria'])
df_FaixaFem = df[['Idade', 'Valor', 'Sexo']].query(f"Sexo == '{sexo[1]}'").groupby(by=df['FaixaEtaria'])
df_FaixaOutro = df[['Idade', 'Valor', 'Sexo']].query(f"Sexo == '{sexo[2]}'").groupby(by=df['FaixaEtaria'])
df_FaixaMasc.describe().round(2)
df_FaixaFem.describe().round(2)
df_FaixaOutro.describe().round(2)
```
### Total purchases:
- Female: 136
- Male: 633
- Other / Undisclosed: 11
### Average purchase price
- Female: 2.81
- Male: 2.95
- Other / Undisclosed: 3.25
### Total value
- Female: 382.91
- Male: 1867.68
- Other / Undisclosed: 35.74
### Purchase profile by age group (average value)
Female
- Under 18: 2.83
- 18 - 25 years: 2.80
- 26 - 40 years: 2.83
- 41 - 60 years: None
- Over 60: None
Male
- Under 18: 2.89
- 18 - 25 years: 2.96
- 26 - 40 years: 2.97
- 41 - 60 years: 2.88
- Over 60: None
Other / Undisclosed
- Under 18: 4.00
- 18 - 25 years: 3.30
- 26 - 40 years: 3.15
- 41 - 60 years: None
- Over 60: None
## Most Popular Consumers (Top 5)
- Login
- Number of purchases
- Average purchase price
- Total purchase value
- Most popular items
### Top 5 - Logins by number of purchases
```
df['Login'].value_counts().sort_values(ascending=False).head(5)
top5Consumers = []
for k, v in df['Login'].value_counts().sort_values(ascending=False).head().items():
    top5Consumers.append(k)
print('Average purchase value of the top 5 consumers (by number of purchases)')
df[['Login','Valor']].query(f'Login in {top5Consumers}').sort_values(by='Login').groupby(by = 'Login').mean()
print('Total purchase value of the top 5 consumers (by number of purchases)')
df[['Login', 'Valor']].query(f'Login in {top5Consumers}').sort_values(by='Login').groupby(by='Login').sum()
print('The 5 most purchased products')
df['Nome do Item'].value_counts().sort_values(ascending=False).head()
```
## Most Popular Items
- Item ID
- Item name
- Number of purchases
- Item price
- Total purchase value
- Most profitable items
```
df = df[['Login', 'Idade', 'Sexo', 'Item ID', 'Item', 'Valor', 'FaixaEtaria']]
itensMaisPopulares = []
for k, v in df['Item'].value_counts().sort_values(ascending=False).head().items():
    itensMaisPopulares.append(k)
dfTopItens = df.query(f'Item in {itensMaisPopulares}').sort_values(by = 'Item')
dfTopItens[['Item ID', 'Item']].groupby(by = 'Item ID').describe()
```
- The DataFrame above shows the most purchased products, with the Item ID and the number of times each was purchased.
- One can even spot an error: one product (Final Critic) has two different IDs.
```
dfTopItens[['Item', 'Valor']].groupby(by='Valor').describe()
```
- The DataFrame above shows the top products with their prices.
- Note that there are also price discrepancies: the products Stormcaller and Final Critic each have two different prices.
```
dfTopItens[['Item', 'Valor']].groupby(by='Item').sum()
```
- The DataFrame above shows the total value sold for the 5 best-selling products.
```
df[['Item', 'Valor']].groupby(by='Item').sum().sort_values(by = 'Valor', ascending=False).head()
```
## Most Profitable Items
- Item ID
- Item name
- Number of purchases
- Item price
- Total purchase value
```
prodMaisLucrativos = []
for k, v in df[['Item', 'Valor']].groupby(by='Item').sum().sort_values(by = 'Valor', ascending=False).head().items():
    for prod in v.items():
        prodMaisLucrativos.append(prod[0])
df_prodMaisLucrativos = df.query(f'Item in {prodMaisLucrativos}')
df_prodMaisLucrativos[['Item ID', 'Item']].groupby(by='Item ID').describe()
```
- The DataFrame above shows the most profitable products, with the number of sales and the Item ID.
- Once again, note the two different IDs for the Final Critic product.
```
df_prodMaisLucrativos[['Item', 'Valor']].groupby(by='Valor').describe()
```
- The DataFrame above shows the prices of the most profitable products.
- Once again, note the two different prices for the Final Critic and Stormcaller products.
```
df_prodMaisLucrativos[['Item', 'Valor']].groupby(by='Item').sum().sort_values(by='Valor', ascending=False)
```
- The table above shows the total value sold for the most profitable products.
## The End
### Thank you - Data Science Academy - <a href="http://facebook.com/dsacademybr">facebook.com/dsacademybr</a>
---
## 1. Volatility changes over time
<p>What is financial risk? </p>
<p>Financial risk has many faces, and we measure it in many ways, but for now, let's agree that it is a measure of the possible loss on an investment. In financial markets, where we measure prices frequently, volatility (which is analogous to <em>standard deviation</em>) is an obvious choice to measure risk. But in real markets, volatility changes with the market itself. </p>
<p><img src="https://assets.datacamp.com/production/project_738/img/VolaClusteringAssetClasses.png" alt=""></p>
<p>In the picture above, we see the returns of four very different assets. All of them exhibit alternating regimes of low and high volatilities. The highest volatility is observed around the end of 2008 - the most severe period of the recent financial crisis.</p>
<p>In this notebook, we will build a model to study the nature of volatility in the case of US government bond yields.</p>
```
# Load the packages
library(xts)
library(readr)
# Load the data
yc_raw <- read_csv("datasets/FED-SVENY.csv")
# Convert the data into xts format
yc_all <- as.xts(x = yc_raw[, -1], order.by = yc_raw$Date)
# Show only the tail of the 1st, 5th, 10th, 20th and 30th columns
yc_all_tail <- tail(yc_all[,c(1,5,10, 20, 30)])
yc_all_tail
```
## 2. Plotting the evolution of bond yields
<p>In the output table of the previous task, we see the yields for some maturities.</p>
<p>These data include the whole yield curve. The yield of a bond is the price of the money lent. The higher the yield, the more money you receive on your investment. The yield curve has many maturities; in this case, it ranges from 1 year to 30 years. Different maturities have different yields, but yields of neighboring maturities are relatively close to each other and also move together.</p>
<p>Let's visualize the yields over time. We will see that the long yields (e.g. SVENY30) tend to be more stable in the long term, while the short yields (e.g. SVENY01) vary a lot. These movements are related to the monetary policy of the FED and economic cycles.</p>
```
library(viridis)
# Define plot arguments
yields <- yc_all
plot.type <- "single"
plot.palette <- viridis(n = 30)
asset.names <- colnames(yc_all)
# Plot the time series
plot.zoo(x = yc_all, plot.type = plot.type, col = plot.palette)
# Add the legend
legend(x = "topleft", legend = asset.names,
col = plot.palette, cex = 0.45, lwd = 3)
```
## 3. Make the difference
<p>In the output of the previous task, we see the level of bond yields for some maturities, but to understand how volatility evolves we have to examine the changes in the time series. Currently, we have yield levels; we need to calculate the changes in the yield levels. This is called "differencing" in time series analysis. Differencing has the added benefit of making a time series independent of time.</p>
```
# Differentiate the time series
ycc_all <- diff.xts(yc_all)
# Show the tail of the 1st, 5th, 10th, 20th and 30th columns
ycc_all_tail <- tail(ycc_all[, c(1, 5, 10, 20, 30)])
ycc_all_tail
```
## 4. The US yields are no exceptions, but maturity matters
<p>Now that we have a time series of the changes in US government yields let's examine it visually.</p>
<p>By taking a look at the time series from the previous plots, we see hints that the returns following each other have some unique properties:</p>
<ul>
<li>The direction (positive or negative) of a return is mostly independent of the previous day's return. In other words, you don't know if the next day's return will be positive or negative just by looking at the time series.</li>
<li>The magnitude of the return is similar to the previous day's return. That means, if markets are calm today, we expect the same tomorrow. However, in a volatile market (crisis), you should expect a similarly turbulent tomorrow.</li>
</ul>
```
# Define the plot parameters
yield.changes <- ycc_all
plot.type <- "multiple"
# Plot the differentiated time series
plot.zoo(x = yield.changes, plot.type = plot.type,
ylim = c(-0.5, 0.5), cex.axis = 0.7,
ylab = 1:30, col = plot.palette)
```
## 5. Let's dive into some statistics
<p>The statistical properties visualized earlier can be measured by analytical tools. The simplest method is to test for autocorrelation. Autocorrelation measures how a datapoint's past determines the future of a time series. </p>
<ul>
<li>If the autocorrelation is close to 1, the next day's value will be very close to today's value. </li>
<li>If the autocorrelation is close to 0, the next day's value will be unaffected by today's value.</li>
</ul>
<p>Because we are interested in the recent evolution of bond yields, we will filter the time series for data from 2000 onward.</p>
```
# Filter for changes in and after 2000
ycc <- ycc_all["2000/",]
# Save the 1-year and 20-year maturity yield changes into separate variables
x_1 <- ycc[,"SVENY01"]
x_20 <- ycc[, "SVENY20"]
# Plot the autocorrelations of the yield changes
par(mfrow=c(2,2))
acf_1 <- acf(x_1)
acf_20 <- acf(x_20)
# Plot the autocorrelations of the absolute changes of yields
acf_abs_1 <- acf(abs(x_1))
acf_abs_20 <- acf(abs(x_20))
```
## 6. GARCH in action
<p>A Generalized AutoRegressive Conditional Heteroskedasticity (<a href="https://en.wikipedia.org/wiki/Autoregressive_conditional_heteroskedasticity">GARCH</a>) model is the best-known econometric tool for handling changing volatility in financial time series data. It assumes a hidden volatility variable that has a long-run average it tries to return to, while its short-run behavior is affected by past returns.</p>
<p>The most popular form of the GARCH model assumes that the volatility follows this process:
</p><p></p>
<math>
σ<sup>2</sup><sub>t</sub> = ω + α ⋅ ε<sup>2</sup><sub>t-1</sub> + β ⋅ σ<sup>2</sup><sub>t-1</sub>
</math>
<p></p>
<p>where σ<sub>t</sub> is the current volatility, σ<sub>t-1</sub> is the last day's volatility and ε<sub>t-1</sub> is the last day's return. The estimated parameters are ω, α, and β.</p>
<p>For GARCH modeling we will use the <a href="https://cran.r-project.org/web/packages/rugarch/index.html"><code>rugarch</code></a> package developed by Alexios Ghalanos.</p>
```
library(rugarch)
# Specify the GARCH model with the skewed t-distribution
spec <- ugarchspec(distribution.model = "sstd")
# Fit the model
fit_1 <- ugarchfit(x_1, spec = spec)
# Save the volatilities and the rescaled residuals
vol_1 <- sigma(fit_1)
res_1 <- scale(residuals(fit_1, standardize = TRUE)) * sd(x_1) + mean(x_1)
# Plot the yield changes with the estimated volatilities and residuals
merge_1 <- merge.xts(x_1, vol_1, res_1)
plot.zoo(merge_1)
```
## 7. Fitting the 20-year maturity
<p>Let's do the same for the 20-year maturity. As we can see in the plot from Task 6, the bond yields of various maturities show similar but slightly different characteristics. These different characteristics can be the result of multiple factors such as the monetary policy of the FED or the fact that the investors might be different.</p>
<p>Are there differences between the 1-year maturity and 20-year maturity plots?</p>
```
# Fit the model
fit_20 <- ugarchfit(x_20, spec = spec)
# Save the volatilities and the rescaled residuals
vol_20 <- sigma(fit_20)
res_20 <- scale(residuals(fit_20, standardize = TRUE)) * sd(x_20) + mean(x_20)
# Plot the yield changes with the estimated volatilities and residuals
merge_20 <- merge.xts(x_20, vol_20, res_20)
plot.zoo(merge_20)
```
## 8. What about the distributions? (Part 1)
<p>From the plots in Task 6 and Task 7, we can see that the 1-year GARCH model shows similar but more erratic behavior compared to the 20-year GARCH model. Not only does the 1-year model have greater volatility, but the volatility of its volatility is larger than that of the 20-year model. That brings us to two statistical facts of financial markets not mentioned yet. </p>
<ul>
<li>The unconditional (before GARCH) distribution of the yield differences has heavier tails than the normal distribution.</li>
<li>The distribution of the yield differences adjusted by the GARCH model has lighter tails than the unconditional distribution, but they are still heavier than the normal distribution.</li>
</ul>
<p>Let's find out what the fitted GARCH model did with the distribution we examined.</p>
```
# Calculate the kernel density for the 1-year maturity and residuals
density_x_1 <- density(x_1)
density_res_1 <- density(res_1)
# Plot the density diagram for the 1-year maturity and residuals
plot(density_x_1)
lines(density_res_1, col = "red")
# Add the normal distribution to the plot
norm_dist <- dnorm(seq(-0.4, 0.4, by = .01), mean = mean(x_1), sd = sd(x_1))
lines(seq(-0.4, 0.4, by = .01),
norm_dist,
col = "darkgreen"
)
# Add legend
legend <- c("Before GARCH", "After GARCH", "Normal distribution")
legend("topleft", legend = legend,
col = c("black", "red", "darkgreen"), lty=c(1,1))
```
## 9. What about the distributions? (Part 2)
<p>In the previous plot, we see that the two distributions from the GARCH models are different from the normal distribution of the data, but the tails, where the differences are the most profound, are hard to see. Using a Q-Q plot will help us focus in on the tails.</p>
<p>You can read an excellent summary of Q-Q plots <a href="https://stats.stackexchange.com/questions/101274/how-to-interpret-a-qq-plot">here</a>.</p>
```
# Define the data to plot: the 1-year maturity yield changes and residuals
data_orig <- x_1
data_res <- res_1
# Define the benchmark distribution
distribution <- qnorm
# Make the Q-Q plot of original data with the line of normal distribution
qqnorm(data_orig, ylim = c(-0.5, 0.5))
qqline(data_orig, distribution = distribution, col = "darkgreen")
# Make the Q-Q plot of GARCH residuals with the line of normal distribution
par(new=TRUE)
qqnorm(data_res * 0.614256270265139, col = "red", ylim = c(-0.5, 0.5))
qqline(data_res * 0.614256270265139, distribution = distribution, col = "darkgreen")
legend("topleft", c("Before GARCH", "After GARCH"), col = c("black", "red"), pch=c(1,1))
```
## 10. A final quiz
<p>In this project, we fitted a GARCH model to develop a better understanding of how bond volatility evolves and how it affects the probability distribution. In the final task, we will evaluate our model. Did the model succeed, or did it fail?</p>
```
# Q1: Did GARCH reveal how volatility changed over time? Yes or No?
(Q1 <- "Yes")
# Q2: Did GARCH bring the residuals closer to normal distribution? Yes or No?
(Q2 <- "Yes")
# Q3: Which time series of yield changes deviates more
# from a normally distributed white noise process? Choose 1 or 20.
(Q3 <- 1)
```
```
# Copyright 2019 The Kubeflow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Install Pipeline SDK - This only needs to be run once in the environment.
!python3 -m pip install 'kfp>=0.1.31' --quiet
!pip3 install tensorflow==1.14 --upgrade
```
## KubeFlow Pipelines Serving Component
In this notebook, we will demo:
* Saving a Keras model in a format compatible with TF Serving
* Creating a pipeline to serve a trained model within a KubeFlow cluster
Reference documentation:
* https://www.tensorflow.org/tfx/serving/architecture
* https://www.tensorflow.org/beta/guide/keras/saving_and_serializing
* https://www.kubeflow.org/docs/components/serving/tfserving_new/
### Setup
```
# Set your output and project. !!!Must Do before you can proceed!!!
project = 'Your-Gcp-Project-ID' #'Your-GCP-Project-ID'
model_name = 'model-name' # Model name matching TF_serve naming requirements
import time
ts = int(time.time())
model_version = str(ts) # Here we use timestamp as version to avoid conflict
output = 'Your-Gcs-Path' # A GCS bucket for asset outputs
KUBEFLOW_DEPLOYER_IMAGE = 'gcr.io/ml-pipeline/ml-pipeline-kubeflow-deployer:1.7.0-rc.3'
model_path = '%s/%s' % (output,model_name)
model_version_path = '%s/%s/%s' % (output,model_name,model_version)
```
### Load a Keras Model
Loading a pretrained Keras model to use as an example.
```
import tensorflow as tf
model = tf.keras.applications.NASNetMobile(input_shape=None,
include_top=True,
weights='imagenet',
input_tensor=None,
pooling=None,
classes=1000)
```
### Save the Model for TF-Serve
Save the model using the Keras `export_saved_model` function. Note that specifically for TF-Serve, the output directory should be structured as `model_name/model_version/saved_model`.
```
tf.keras.experimental.export_saved_model(model, model_version_path)
```
### Create a pipeline using KFP TF-Serve component
```
def kubeflow_deploy_op():
    return dsl.ContainerOp(
        name='deploy',
        image=KUBEFLOW_DEPLOYER_IMAGE,
        arguments=[
            '--model-export-path', model_path,
            '--server-name', model_name,
        ]
    )
import kfp
import kfp.dsl as dsl
# The pipeline definition
@dsl.pipeline(
    name='sample-model-deployer',
    description='Sample for deploying models using KFP model serving component'
)
def model_server():
    deploy = kubeflow_deploy_op()
```
Submit the pipeline for execution on the Kubeflow Pipelines cluster
```
kfp.Client().create_run_from_pipeline_func(model_server, arguments={})
#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)
```
# Talktorial 4
# Ligand-based screening: compound similarity
#### Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin
Andrea Morger and Franziska Fritz
## Aim of this talktorial
In this talktorial, we cover various approaches for encoding molecules (descriptors, fingerprints) and comparing them (similarity measures). We then perform a virtual screening in the form of a similarity search for the EGFR inhibitor Gefitinib against the dataset of compounds tested on EGFR, which was retrieved from the ChEMBL database and filtered by Lipinski's rule of five (see **Talktorial 2**).
## Learning goals
### Theory
* Molecular similarity
* Molecular descriptors
* Molecular fingerprints
  * Substructure-based fingerprints
  * MACCS fingerprints
  * Morgan fingerprints, circular fingerprints
* Molecular similarity measures
  * Tanimoto coefficient
  * Dice coefficient
* Virtual screening
  * Virtual screening with similarity search
### Practical
* Import and draw molecules
* Calculate molecular descriptors
  * 1D molecular descriptors: molecular weight
  * 2D molecular descriptors: MACCS fingerprints
  * 2D molecular descriptors: Morgan fingerprints
* Calculate molecular similarity
  * MACCS fingerprints: Tanimoto and Dice similarity
  * Morgan fingerprints: Tanimoto and Dice similarity
* Virtual screening with similarity search
  * Compare a query compound with all compounds in the dataset
  * Distribution of the similarity values
  * Draw the most similar molecules
  * Generate enrichment plots
## References
* Review "Molecular similarity in medicinal chemistry" ([<i>J. Med. Chem.</i> (2014), <b>57</b>, 3186-3204](http://pubs.acs.org/doi/abs/10.1021/jm401411z))
* Morgan fingerprints in RDKit ([RDKit tutorial on Morgan fingerprints](http://www.rdkit.org/docs/GettingStartedInPython.html#morgan-fingerprints-circular-fingerprints))
* ECFP - extended-connectivity fingerprints ([<i>J. Chem. Inf. Model.</i> (2010), <b>50</b>,742-754](https://pubs.acs.org/doi/abs/10.1021/ci100050t))
* Chemical space
([<i>ACS Chem. Neurosci.</i> (2012), <b>19</b>, 649-57](https://www.ncbi.nlm.nih.gov/pubmed/23019491))
* List of molecular descriptors in RDKit ([RDKit documentation: Descriptors](https://www.rdkit.org/docs/GettingStartedInPython.html#list-of-available-descriptors))
* List of fingerprints in RDKit ([RDKit documentation: Fingerprints](https://www.rdkit.org/docs/GettingStartedInPython.html#list-of-available-fingerprints))
* Enrichment plots ([Applied Chemoinformatics, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, (2018), **1**, 313-31](https://onlinelibrary.wiley.com/doi/10.1002/9783527806539.ch6h))
_____________________________________________________________________________________________________________________
## Theory
### Molecular similarity
Molecular similarity is a well-known and frequently used concept in chemical informatics (chemoinformatics). Comparing molecules and their properties has many applications and can help find new compounds with the desired properties and bioactivity.
The idea that structurally similar molecules show similar properties and similar bioactivity is expressed in the similar property principle (SPP) and the structure-activity relationship (SAR). In this context, virtual screening is based on the idea that, given a set of compounds with known binding affinity, we can search for more such compounds.
### Molecular descriptors
Similarity can be assessed in different ways depending on the application (see <a href="http://pubs.acs.org/doi/abs/10.1021/jm401411z"><i>J. Med. Chem.</i> (2014), <b>57</b>, 3186-3204</a>):
* **1D molecular descriptors**: solubility, logP, molecular weight, melting point, etc.
  * Global descriptors: represent the whole molecule by a single value <br>
  * Usually not specific enough on their own to identify a molecule for machine learning (ML)
  * Can be added to 2D fingerprints to improve molecular encodings for machine learning
* **2D molecular descriptors**: molecular graphs, paths, fragments, atom environments
  * Detailed representation of the individual parts of a molecule
  * Many features/bits per molecule, called a fingerprint
  * Very commonly used for similarity search and machine learning
* **3D molecular descriptors**: shape, stereochemistry
  * Chemists are usually trained on 2D representations <br>
  * Less robust than 2D representations because of molecular flexibility (which conformation of a molecule is the "right" one?)
* **Biological similarity**
  * Biological fingerprints (e.g., each bit represents the bioactivity measured against a different target)
  * Independent of the compound structure
  * Requires experimental data (or predicted values)
We already learned in **Talktorial 2** how to calculate 1D physicochemical parameters such as molecular weight and logP. The descriptors of this kind implemented in RDKit can be found in the [RDKit documentation: Descriptors](https://www.rdkit.org/docs/GettingStartedInPython.html#list-of-available-descriptors).
In the following, we focus on the definition of 2D (or 3D) molecular descriptors. Since they are in many cases unique per molecule, these descriptors are also called fingerprints.
### Molecular fingerprints
#### Substructure-based fingerprints
Molecular fingerprints encode chemical and molecular features in the form of bitstrings, bitvectors, or arrays. Each bit corresponds to a predefined molecular feature or environment, where a "1" indicates that the feature is present and a "0" that it is not. Note that some implementations are count-based, i.e., they count how often a particular feature is present.
There are several ways to design fingerprints. Here, we introduce two commonly used 2D fingerprints: MACCS keys and Morgan fingerprints.
RDKit offers many more fingerprints besides these two, as listed in the [RDKit documentation: Fingerprints](https://www.rdkit.org/docs/GettingStartedInPython.html#list-of-available-fingerprints).
#### MACCS fingerprints
The Molecular ACCess System (MACCS) fingerprint, also called MACCS structural keys, consists of 166 predefined structural fragments. Each position queries the presence or absence of one particular structural fragment or key. The keys were empirically defined by medicinal chemists and are easy to use and interpret ([RDKit documentation: MACCS keys](http://rdkit.org/Python_Docs/rdkit.Chem.MACCSkeys-module.html)).
<img src="images/maccs_fp.png" align="above" alt="Image cannot be shown" width="250">
<div align="center"> Figure 2: Illustration of a MACCS fingerprint (figure by Andrea Morger)</div>
#### Morgan fingerprints and circular fingerprints
This family of fingerprints is based on the Morgan algorithm. The bits correspond to the circular environments of each atom in a molecule. The radius sets how many neighboring bonds and atoms are considered as the environment. The length of the bitstring can also be defined, i.e., longer bitstrings are folded down to the desired length, so Morgan fingerprints are not restricted to a particular number of bits. See the [RDKit documentation: Morgan fingerprints](http://www.rdkit.org/docs/GettingStartedInPython.html#morgan-fingerprints-circular-fingerprints) to learn more about Morgan fingerprints. Extended-connectivity fingerprints (ECFP) are also commonly used fingerprints derived from a variation of the Morgan algorithm; see ([<i>J. Chem. Inf. Model.</i> (2010), <b>50</b>,742-754](https://pubs.acs.org/doi/abs/10.1021/ci100050t)) for further information.
<img src="images/morgan_fp.png" align="above" alt="Image cannot be shown" width="270">
<div align="center">Figure 3: Illustration of a Morgan circular fingerprint (figure by Andrea Morger)</div>
### Molecular similarity measures
Once the descriptors/fingerprints are calculated, we can compare them to assess the similarity between two molecules. Molecular similarity can be quantified by various similarity coefficients; two commonly used measures are the Tanimoto and Dice indices ([<i>J. Med. Chem.</i> (2014), <b>57</b>, 3186-3204](http://pubs.acs.org/doi/abs/10.1021/jm401411z)).
#### Tanimoto coefficient
$$T _{c}(A,B) = \frac{c}{a+b-c}$$
a: number of features present in molecule A <br>
b: number of features present in molecule B <br>
c: number of features shared by molecules A and B
#### Dice coefficient
$$D_{c}(A,B) = \frac{c}{\frac{1}{2}(a+b)}$$
a: number of features present in molecule A <br>
b: number of features present in molecule B <br>
c: number of features shared by molecules A and B
Similarity measures usually consider the number of positive (on) bits in each fingerprint and the number of positive bits the two fingerprints have in common. The Dice similarity usually returns higher values than the Tanimoto similarity, which follows from the different denominators:
$$\frac{c}{a+b-c} \leq \frac{c}{\frac{1}{2}(a+b)}$$
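As a quick numerical illustration of the two formulas (a minimal sketch with made-up bit sets, not part of the original talktorial):
```
# Toy example: Tanimoto and Dice similarity from two sets of "on" bits.
# The bit positions below are made up for illustration.
bits_a = {1, 3, 5, 7, 9}      # features present in molecule A (a = 5)
bits_b = {3, 5, 7, 11}        # features present in molecule B (b = 4)

a, b = len(bits_a), len(bits_b)
c = len(bits_a & bits_b)      # shared features (c = 3)

tanimoto = c / (a + b - c)    # 3 / 6 = 0.5
dice = c / (0.5 * (a + b))    # 3 / 4.5 ~ 0.67  (>= Tanimoto, as expected)
print(tanimoto, dice)
```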
### Virtual screening
A challenge in the early phase of drug discovery is to narrow down a set of small molecules (compounds), out of the huge space of possible ones, to those with the potential to bind a target of interest. This chemical space is enormous: small molecules can reach 10<sup>20</sup> combinations of chemical moieties ([<i>ACS Chem. Neurosci.</i> (2012), <b>19</b>, 649-57](https://www.ncbi.nlm.nih.gov/pubmed/23019491)).
Since high-throughput screening (HTS) experiments that test the activity of these molecules against the target of interest are very costly and time-consuming, computer-aided methods are expected to narrow the list of molecules to be assayed down to a more focused one. In this process, called virtual (high-throughput) screening, huge molecule libraries are filtered by rules and/or patterns in order to find the molecules most likely to bind the target under investigation.
#### Virtual screening with similarity search
A simple way of doing virtual screening is to compare a set of new compounds against one or more known active compounds and look for the most similar ones. Based on the similar property principle (SPP), the compounds most similar (e.g., to a known inhibitor) are predicted to have a similar effect. What is needed for a similarity search is the following (also discussed in more detail above):
* An encoded representation of the chemical/molecular features
* Optionally, a weighting of the potential of the features
* A similarity measure
A similarity search can be carried out by calculating the similarity between one compound and all compounds of a given database. Ranking the database compounds by their similarity coefficient yields the most similar molecules.
#### Enrichment plots
Enrichment plots are used to validate the results of a virtual screening. They show the ratio of active compounds found within the top x% of the ranked list, i.e.:
* the fraction of top-ranked compounds out of the whole dataset (x-axis) vs.
* the fraction of identified active compounds out of all actives in the dataset (y-axis)
<img src="images/enrichment_plot.png" align="above" alt="Image cannot be shown" width="270">
<div align="center">Figure 4: Example enrichment plot for virtual screening results</div>
## Practical
In the first part of the practical section, we encode molecules with RDKit (molecular fingerprints) and then compare them to calculate their similarity (molecular similarity measures), as discussed in the theory section above.
In the second part, we use these encoding and comparison methods to perform a similarity search (virtual screening). The known EGFR inhibitor Gefitinib is used as a query to search for similar compounds in the dataset of compounds tested on EGFR, which was extracted from ChEMBL in **Talktorial 1** and filtered by Lipinski's rule of five in **Talktorial 2**.
### Import and draw molecules
First, we define and draw eight example molecules, which we will encode and compare later. We convert the molecules from SMILES format to RDKit mol objects and visualize them with RDKit's `Draw` function.
```
# Import relevant Python packages
# Basic molecule handling functionality lives in the module rdkit.Chem
from rdkit import Chem
# Drawing-related functionality
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import Draw
from rdkit.Chem import Descriptors
from rdkit.Chem import AllChem
from rdkit.Chem import MACCSkeys
from rdkit.Chem import rdFingerprintGenerator
from rdkit import DataStructs
import math
import numpy as np
import pandas as pd
from rdkit.Chem import PandasTools
import matplotlib.pyplot as plt
# Molecules in SMILES format
smiles1 = 'CC1C2C(C3C(C(=O)C(=C(C3(C(=O)C2=C(C4=C1C=CC=C4O)O)O)O)C(=O)N)N(C)C)O' # Doxycycline
smiles2 = 'CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=C(C=C3)O)N)C(=O)O)C' # Amoxicilline
smiles3 = 'C1=COC(=C1)CNC2=CC(=C(C=C2C(=O)O)S(=O)(=O)N)Cl' # Furosemide
smiles4 = 'CCCCCCCCCCCC(=O)OCCOC(=O)CCCCCCCCCCC' # Glycol dilaurate
smiles5 = 'C1NC2=CC(=C(C=C2S(=O)(=O)N1)S(=O)(=O)N)Cl' # Hydrochlorothiazide
smiles6 = 'CC1=C(C(CCC1)(C)C)C=CC(=CC=CC(=CC(=O)O)C)C' # Isotretinoine
smiles7 = 'CC1(C2CC3C(C(=O)C(=C(C3(C(=O)C2=C(C4=C1C=CC=C4O)O)O)O)C(=O)N)N(C)C)O' # Tetracycline
smiles8 = 'CC1C(CC(=O)C2=C1C=CC=C2O)C(=O)O' # Hemi-cycline D
# Create a list of the molecule SMILES
smiles = [smiles1, smiles2, smiles3, smiles4, smiles5, smiles6, smiles7, smiles8]
# Create a list of ROMol objects
mols = [Chem.MolFromSmiles(i) for i in smiles]
# Create a list of molecule names
mol_names = ['Doxycycline', 'Amoxicilline', 'Furosemide', 'Glycol dilaurate',
             'Hydrochlorothiazide', 'Isotretinoine', 'Tetracycline', 'Hemi-cycline D']
# Draw the molecules
Draw.MolsToGridImage(mols, molsPerRow=2, subImgSize=(450,150), legends=mol_names)
```
### Calculate molecular descriptors
We extract and generate 1D and 2D molecular descriptors in order to compare the molecules. For the 2D descriptors, we generate different types of fingerprints, which we will later use to calculate the molecular similarities.
#### 1D molecular descriptors: molecular weight
We calculate the molecular weight of the example structures.
```
# Calculate the molecular weight of the molecules
mol_weights = [Descriptors.MolWt(mol) for mol in mols]
```
For a visual comparison, we draw the structures of compounds with similar molecular weight. Is the molecular weight a useful descriptor for molecular similarity?
```
# Create a DataFrame to store the results
sim_mw_df = pd.DataFrame({'smiles': smiles, 'name': mol_names, 'mw': mol_weights, "Mol": mols})
# Sort by molecular weight
sim_mw_df.sort_values(['mw'], ascending=False, inplace=True)
sim_mw_df[["smiles", "name", "mw"]]
# Draw the molecules together with their molecular weight
Draw.MolsToGridImage(sim_mw_df["Mol"],
                     legends=[i+': '+str(round(j, 2))+" Da" for i,j in zip(sim_mw_df["name"], sim_mw_df["mw"])],
                     molsPerRow=2, subImgSize=(450, 150))
```
As we can see, compounds with similar molecular weight can have similar structures (e.g., Doxycycline/Tetracycline), but they can also contain a similar number of atoms in a completely different arrangement (e.g., Doxycycline/Glycol dilaurate or Hydrochlorothiazide/Isotretinoine).
Next, we look at 2D molecular descriptors, which describe molecular features in more detail.
#### 2D molecular descriptors: MACCS fingerprints
MACCS fingerprints are easily generated with RDKit. Since the explicit bitvector is not human-readable, we additionally convert it to a bitstring.
```
# Generate MACCS fingerprints
maccs_fp1 = MACCSkeys.GenMACCSKeys(mols[0]) # Doxycycline
maccs_fp2 = MACCSkeys.GenMACCSKeys(mols[1]) # Amoxicilline
maccs_fp1
# Print the fingerprint as a bitstring
maccs_fp1.ToBitString()
# Generate MACCS fingerprints for all molecules
maccs_fp_list = []
for i in range(len(mols)):
    maccs_fp_list.append(MACCSkeys.GenMACCSKeys(mols[i]))
```
#### 2D molecular descriptors: Morgan fingerprints
We also calculate Morgan circular fingerprints with RDKit. Two different functions allow us to calculate Morgan fingerprints as integer (int) or bit vectors.
```
# Generate Morgan fingerprints (int vector); by default the radius is 2 and the vector length 2048
circ_fp1 = rdFingerprintGenerator.GetCountFPs(mols[:1])[0]
circ_fp1
# Look at the values that are set:
circ_fp1.GetNonzeroElements()
# Generate Morgan fingerprints (as bit vectors); by default the radius is 2 and the fingerprint length 2048
circ_b_fp1 = rdFingerprintGenerator.GetFPs(mols[:1])[0]
circ_b_fp1
# Print the fingerprint as a bitstring
circ_b_fp1.ToBitString()
# Generate Morgan fingerprints for all molecules
circ_fp_list = rdFingerprintGenerator.GetFPs(mols)
```
### Calculate molecular similarity
In the following, we apply two similarity measures, **Tanimoto** and **Dice**, to the two types of fingerprints, **MACCS** and **Morgan** fingerprints.
Example: compare two MACCS fingerprints with the Tanimoto similarity
```
# Calculate the Tanimoto coefficient of two molecules
DataStructs.TanimotoSimilarity(maccs_fp1, maccs_fp2)
# Calculate the Tanimoto coefficient of a molecule with itself
DataStructs.TanimotoSimilarity(maccs_fp1, maccs_fp1)
```
Next, we want to compare a query compound with our list of molecules.
We therefore use RDKit's ```BulkTanimotoSimilarity``` and ```BulkDiceSimilarity``` functions to calculate the similarity between the query fingerprint and a list of fingerprints, based on the Tanimoto or Dice similarity measure.
After calculating the similarities, we want to draw the ranked molecules with the following function:
```
def draw_ranked_molecules(sim_df_sorted, sorted_column):
    """
    Draw the molecules of a (sorted) DataFrame.
    """
    # Define the labels: the first molecule is the query, the following ones start at rank 1
    rank = ["#"+str(i)+": " for i in range(0, len(sim_df_sorted))]
    rank[0] = "Query: "
    # Molecules most similar to Doxycycline (Tanimoto and MACCS fingerprints)
    top_smiles = sim_df_sorted["smiles"].tolist()
    top_mols = [Chem.MolFromSmiles(i) for i in top_smiles]
    top_names = [i+j+" ("+str(round(k, 2))+")" for i, j, k in zip(rank, sim_df_sorted["name"].tolist(),
                                                                  sim_df_sorted[sorted_column])]
    return Draw.MolsToGridImage(top_mols, legends=top_names, molsPerRow=2, subImgSize=(450, 150))
```
Next, we look at all combinations of MACCS/Morgan fingerprint comparisons based on the Tanimoto/Dice similarity measures. We therefore create a DataFrame to summarize the results.
```
# Create a DataFrame to store the results
sim_df = pd.DataFrame({'smiles': smiles, 'name': mol_names})
```
#### MACCS fingerprints: Tanimoto similarity
```
# Add the similarity scores to the DataFrame
sim_df['tanimoto_MACCS'] = DataStructs.BulkTanimotoSimilarity(maccs_fp1,maccs_fp_list)
# DataFrame sorted by the Tanimoto similarity of the MACCS fingerprints
sim_df_sorted_t_ma = sim_df.copy()
sim_df_sorted_t_ma.sort_values(['tanimoto_MACCS'], ascending=False, inplace=True)
sim_df_sorted_t_ma
# Draw the molecules ranked by the Tanimoto similarity of the MACCS fingerprints
draw_ranked_molecules(sim_df_sorted_t_ma, "tanimoto_MACCS")
```
Using MACCS fingerprints, Tetracycline is the most similar molecule (highest score), followed by Amoxicilline. In contrast to the 1D molecular weight descriptor, the linear molecule Glycol dilaurate is recognized as dissimilar (lowest rank).
#### MACCS fingerprints: Dice similarity
```
# Add the similarity scores to the DataFrame
sim_df['dice_MACCS'] = DataStructs.BulkDiceSimilarity(maccs_fp1, maccs_fp_list)
# DataFrame sorted by the Dice similarity of the MACCS fingerprints
sim_df_sorted_d_ma = sim_df.copy()
sim_df_sorted_d_ma.sort_values(['dice_MACCS'], ascending=False, inplace=True)
sim_df_sorted_d_ma
```
By definition, the Tanimoto and Dice similarity measures yield the same ranking, but the Dice values are higher (see the theory section of this talktorial for the Tanimoto and Dice formulas).
#### Morgan fingerprints: Tanimoto similarity
```
# Add the similarity scores to the DataFrame
sim_df['tanimoto_morgan'] = DataStructs.BulkTanimotoSimilarity(circ_b_fp1, circ_fp_list)
sim_df['dice_morgan'] = DataStructs.BulkDiceSimilarity(circ_b_fp1, circ_fp_list)
# DataFrame sorted by the Tanimoto similarity of the Morgan fingerprints
sim_df_sorted_t_mo = sim_df.copy()
sim_df_sorted_t_mo.sort_values(['tanimoto_morgan'], ascending=False, inplace=True)
sim_df_sorted_t_mo
# Draw the molecules ranked by the Tanimoto similarity of the Morgan fingerprints
draw_ranked_molecules(sim_df_sorted_t_mo, "tanimoto_morgan")
```
We compare the MACCS and Morgan similarities by plotting Tanimoto (Morgan) vs. Tanimoto (MACCS).
```
fig, axes = plt.subplots(figsize=(6,6), nrows=1, ncols=1)
sim_df_sorted_t_mo.plot('tanimoto_MACCS','tanimoto_morgan',kind='scatter',ax=axes)
plt.plot([0,1],[0,1],'k--')
axes.set_xlabel("MACCS")
axes.set_ylabel("Morgan")
plt.show()
```
Different fingerprints (here MACCS and Morgan fingerprints) lead to different similarity values (here the Tanimoto coefficient) and, as shown here, potentially to a different similarity ranking of the molecules.
The Morgan fingerprint also recognized Tetracycline as the molecule most similar to Doxycycline (albeit with a lower score) and Glycol dilaurate as the least similar one. However, Hemi-cycline D was ranked second; this compound is a substructure of the cycline compounds, which may be explained by the Morgan fingerprint algorithm being based on atom environments (whereas MACCS fingerprints query the occurrence of specific features).
### Virtual screening using similarity search
Now that we have learned how to calculate fingerprints and similarities, we can apply this knowledge to a similarity search of a query compound against an entire compound set.
We use the known EGFR inhibitor Gefitinib as a query and search for similar compounds in the dataset of compounds tested on EGFR, which was extracted from ChEMBL in **Talktorial 1** and filtered by Lipinski's rule of five in **Talktorial 2**.
#### Compare the query compound with all compounds in the dataset
We load the molecules from the csv file generated in **Talktorial 2**, which contains the filtered compounds from ChEMBL tested on EGFR. With a single query compound (here Gefitinib), we search for similar compounds in the dataset.
```
# Load the data from the csv file containing the molecules in SMILES format
filtered_df = pd.read_csv('../data/T2/EGFR_compounds_lipinski.csv', delimiter=';', usecols=['molecule_chembl_id', 'smiles', 'pIC50'])
filtered_df.head()
# Generate a Mol object from the SMILES of the query compound
query = Chem.MolFromSmiles('COC1=C(OCCCN2CCOCC2)C=C2C(NC3=CC(Cl)=C(F)C=C3)=NC=NC2=C1') # Gefitinib, Iressa
query
# Generate the MACCS and Morgan fingerprints of the query compound
maccs_fp_query = MACCSkeys.GenMACCSKeys(query)
circ_fp_query = rdFingerprintGenerator.GetCountFPs([query])[0]
# Generate the MACCS and Morgan fingerprints of all compounds in the file
ms = [Chem.MolFromSmiles(i) for i in filtered_df.smiles]
circ_fp_list = rdFingerprintGenerator.GetCountFPs(ms)
maccs_fp_list = [MACCSkeys.GenMACCSKeys(m) for m in ms]
# Calculate the Tanimoto similarity between the query compound (Gefitinib) and all compounds in the file (MACCS, Morgan)
tanimoto_maccs = DataStructs.BulkTanimotoSimilarity(maccs_fp_query,maccs_fp_list)
tanimoto_circ = DataStructs.BulkTanimotoSimilarity(circ_fp_query,circ_fp_list)
# Calculate the Dice similarity between the query compound (Gefitinib) and all compounds in the file (MACCS, Morgan)
dice_maccs = DataStructs.BulkDiceSimilarity(maccs_fp_query,maccs_fp_list)
dice_circ = DataStructs.BulkDiceSimilarity(circ_fp_query,circ_fp_list)
# Create a table with the ChEMBL IDs, SMILES, and the Tanimoto similarities of the compounds to Gefitinib
similarity_df = pd.DataFrame({'ChEMBL_ID':filtered_df.molecule_chembl_id,
                              'bioactivity':filtered_df.pIC50,
                              'tanimoto_MACCS': tanimoto_maccs,
                              'tanimoto_morgan': tanimoto_circ,
                              'dice_MACCS': dice_maccs,
                              'dice_morgan': dice_circ,
                              'smiles': filtered_df.smiles,})
# Show the DataFrame
similarity_df.head()
```
#### Distribution of the similarity values
As mentioned in the theory section, when the same fingerprint (e.g., MACCS fingerprints) is compared, the Tanimoto similarity values are lower than the Dice similarity values. Also, when two different fingerprints (e.g., MACCS and Morgan fingerprints) are compared, the similarity values (e.g., the Tanimoto similarity) change.
We can inspect the distributions by plotting histograms.
```
# Plot the distributions of the Tanimoto/Dice similarity values for the MACCS and Morgan fingerprints
%matplotlib inline
fig, axes = plt.subplots(figsize=(10,6), nrows=2, ncols=2)
similarity_df.hist(["tanimoto_MACCS"], ax=axes[0,0])
similarity_df.hist(["tanimoto_morgan"], ax=axes[0,1])
similarity_df.hist(["dice_MACCS"], ax=axes[1,0])
similarity_df.hist(["dice_morgan"], ax=axes[1,1])
axes[1,0].set_xlabel("similarity value")
axes[1,0].set_ylabel("# molecules")
plt.show()
```
We compare the similarities again; this time we directly compare the Tanimoto and Dice similarities for the two fingerprints.
```
fig, axes = plt.subplots(figsize=(12,6), nrows=1, ncols=2)
similarity_df.plot('tanimoto_MACCS','dice_MACCS',kind='scatter',ax=axes[0])
axes[0].plot([0,1],[0,1],'k--')
axes[0].set_xlabel("Tanimoto(MACCS)")
axes[0].set_ylabel("Dice(MACCS)")
similarity_df.plot('tanimoto_morgan','dice_morgan',kind='scatter',ax=axes[1])
axes[1].plot([0,1],[0,1],'k--')
axes[1].set_xlabel("Tanimoto(Morgan)")
axes[1].set_ylabel("Dice(Morgan)")
plt.show()
```
The similarity distributions are important for interpreting similarity values (e.g., a value of 0.6 must be judged differently for MACCS vs. Morgan fingerprints and for Tanimoto vs. Dice similarity).
In the following, we draw the compounds most similar by Tanimoto similarity, based on Morgan fingerprints.
#### Draw the most similar molecules
We visually inspect the structure of Gefitinib in comparison with the most similar molecules in our ranking. We also include the bioactivity information (pIC50 extracted from ChEMBL in **Talktorial 1**).
```
# DataFrame sorted by tanimoto_morgan
similarity_df.sort_values(['tanimoto_morgan'], ascending=False, inplace=True)
similarity_df.head()
# Add a structural representation of the SMILES strings (ROMol - RDKit object Mol) to the DataFrame
PandasTools.AddMoleculeColumnToFrame(similarity_df, 'smiles')
# Draw the query structure and the top-ranked molecules (+ bioactivity)
sim_mols = [Chem.MolFromSmiles(i) for i in similarity_df.smiles][:11]
legend = ['#' + str(a) + ' ' + b + ' ('+str(round(c,2))+')' for a, b, c in zip(range(1,len(sim_mols)+1),
                                                                               similarity_df.ChEMBL_ID,
                                                                               similarity_df.bioactivity)]
Chem.Draw.MolsToGridImage(mols = [query] + sim_mols[:11],
                          legends = (['Gefitinib'] + legend),
                          molsPerRow = 4)
```
The molecules in our dataset ranked highest against Gefitinib are, first, the Gefitinib entries contained in the dataset itself (ranks 1 and 2), followed by Gefitinib analogs (e.g., with different benzene substitution patterns).
Note: ChEMBL contains a full structure-activity relationship analysis for Gefitinib (since it is a well-studied compound), so it is not surprising that our dataset contains many Gefitinib-like compounds.
We now want to check how well the similarity search can distinguish the active from the inactive compounds in our dataset. For that, we use the bioactivity values (against EGFR) of the compounds retrieved from ChEMBL in **Talktorial 1**.
#### Generate enrichment plots
We create an enrichment plot to validate the virtual screening and to see the ratio of active compounds found.
An enrichment plot shows:
* the fraction of top-ranked compounds out of the whole dataset (x-axis) vs.
* the fraction of identified active compounds out of all actives in the dataset (y-axis)
We compare the Tanimoto similarities for the MACCS and Morgan fingerprints.
To decide whether a compound is treated as active or inactive, we apply the commonly used pIC50 cutoff of 6.3. Several pIC50 cutoffs ranging from 5 to 7 have been suggested in the literature, some of which also define an exclusion range without data points, but we consider this cutoff (6.3) reasonable.
The same cutoff is used for machine learning in **Talktorial 10**.
```
# pIC50 cutoff to discriminate active and inactive compounds
threshold = 6.3
similarity_df.head()
def get_enrichment_data(similarity_df, similarity_measure, threshold):
    """
    Calculate the x and y values for an enrichment plot:
    x - % of the ranked dataset
    y - % of true actives identified
    """
    # Get the number of molecules in the dataset
    mols_all = len(similarity_df)
    # Get the number of active compounds in the dataset
    actives_all = sum(similarity_df.bioactivity >= threshold)
    # Initialize a list that keeps track of the actives counter while going through the whole dataset
    actives_counter_list = []
    # Initialize the actives counter
    actives_counter = 0
    # Note: for an enrichment plot the data must be ranked.
    # Sort the molecules by the selected similarity measure.
    similarity_df.sort_values([similarity_measure], ascending=False, inplace=True)
    # Go through the ranked dataset one by one and check (via the bioactivity) whether each compound is active
    for value in similarity_df.bioactivity:
        if value >= threshold:
            actives_counter += 1
        actives_counter_list.append(actives_counter)
    # Convert the number of molecules into the % of the ranked dataset
    mols_perc_list = [i/mols_all for i in list(range(1, mols_all+1))]
    # Convert the number of actives into the % of true actives identified
    actives_perc_list = [i/actives_all for i in actives_counter_list]
    # Create a DataFrame with the x and y values plus a label
    enrich_df = pd.DataFrame({'% ranked dataset':mols_perc_list,
                              '% true actives identified':actives_perc_list,
                              'similarity_measure': similarity_measure})
    return enrich_df
# Define the similarity measures to plot
sim_measures = ['tanimoto_MACCS', 'tanimoto_morgan']
# Create a list of DataFrames with the enrichment plot data for all similarity measures
enrich_data = [get_enrichment_data(similarity_df, i, threshold) for i in sim_measures]
# Prepare the dataset for plotting:
# concatenate the DataFrames of the individual similarity measures into one DataFrame
# ...the different similarity measures remain distinguishable via the "similarity_measure" column
enrich_df = pd.concat(enrich_data)
fig, ax = plt.subplots(figsize=(6, 6))
fontsize = 20
for key, grp in enrich_df.groupby(['similarity_measure']):
    ax = grp.plot(ax = ax,
                  x = '% ranked dataset',
                  y = '% true actives identified',
                  label=key,
                  alpha=0.5, linewidth=4)
ax.set_ylabel('% True actives identified', size=fontsize)
ax.set_xlabel('% Ranked dataset', size=fontsize)
# Ratio of active compounds in the dataset
ratio = sum(similarity_df.bioactivity >= threshold) / len(similarity_df)
# Plot the optimal curve
ax.plot([0,ratio,1], [0,1,1], label="Optimal curve", color="black", linestyle="--")
# Plot the random curve
ax.plot([0,1], [0,1], label="Random curve", color="grey", linestyle="--")
plt.tick_params(labelsize=16)
plt.legend(labels=['MACCS', 'Morgan', "Optimal", "Random"], loc=(.5, 0.08),
           fontsize=fontsize, labelspacing=0.3)
# Save the plot - use bbox_inches to include the text box:
# https://stackoverflow.com/questions/44642082/text-or-legend-cut-from-matplotlib-figure-on-savefig?rq=1
plt.savefig("../data/T4/enrichment_plot.png", dpi=300, bbox_inches="tight", transparent=True)
plt.show()
```
According to the enrichment plot, the comparison based on Morgan fingerprints performs slightly better than the one based on MACCS fingerprints.
```
# Get the EF for x% of the ranked dataset
def print_data_ef(perc_ranked_dataset, enrich_df):
    data_ef = enrich_df[enrich_df['% ranked dataset'] <= perc_ranked_dataset].tail(1)
    data_ef = round(float(data_ef['% true actives identified']), 1)
    print("Experimental EF for ", perc_ranked_dataset, "% of ranked dataset: ", data_ef, "%", sep="")
# Get the random EF for x% of the ranked dataset
def print_random_ef(perc_ranked_dataset):
    random_ef = round(float(perc_ranked_dataset), 1)
    print("Random EF for ", perc_ranked_dataset, "% of ranked dataset: ", random_ef, "%", sep="")
# Get the optimal EF for x% of the ranked dataset
def print_optimal_ef(perc_ranked_dataset, similarity_df, threshold):
    ratio = sum(similarity_df.bioactivity >= threshold) / len(similarity_df) * 100
    if perc_ranked_dataset <= ratio:
        optimal_ef = round(100/ratio * perc_ranked_dataset, 1)
    else:
        optimal_ef = round(float(100), 2)
    print("Optimal EF for ", perc_ranked_dataset, "% of ranked dataset: ", optimal_ef, "%", sep="")
# Choose a percentage
perc_ranked_list = 5
# Get the EF data
print_data_ef(perc_ranked_list, enrich_df)
print_random_ef(perc_ranked_list)
print_optimal_ef(perc_ranked_list, similarity_df, threshold)
```
**Translator's note (04/2020)**
The original practical section ends here, but this EF result looks a bit off. From the enrichment plot, the enrichment factors should satisfy "**Optimal** > **Experimental** > **Random**". An **Experimental** value below **Random** would mean that the selection is actually biased toward inactive compounds.
Let's go through it step by step to see whether something is wrong.
First, the DataFrame **enrich_df** used for the EF calculation concatenates the data of the two similarity measures, so let's separate them.
```
# tanimoto_MACCS
enrich_df_taMA = enrich_df[enrich_df['similarity_measure'] == 'tanimoto_MACCS']
# tanimoto_morgan
enrich_df_tamo = enrich_df[enrich_df['similarity_measure'] == 'tanimoto_morgan']
print("Size of enrich_df: ", len(enrich_df))
print("Size of tanimoto_MACCS DataFrame: ", len(enrich_df_taMA))
print("Size of tanimoto_morgan DataFrame: ", len(enrich_df_tamo))
```
Let's look at the part of the DataFrame corresponding to 5% of the ranked dataset.
```
# Number of rows corresponding to 5%
index_5perc = round(len(enrich_df_taMA)*0.05)
# DataFrame indices start at 0, so show the row at that position minus 1
enrich_df_taMA[index_5perc-1:index_5perc]
```
For readability, the row is extracted as a DataFrame via slicing.
Within the number of compounds corresponding to the top 5% of the ranking (0.049), 7.3% of the truly active compounds were identified (% true actives identified, 0.07319). Compared with the **Random** and **Optimal** values above, this looks plausible.
Since the data in the DataFrame itself seems fine, the problem must lie in how the value is extracted (the function `print_data_ef`).
Let's execute the body of the function step by step.
```
# Set to 5%
perc_ranked_dataset = 5
# Take the part of the DataFrame within 5% and extract its last row (tail)
enrich_df[enrich_df['% ranked dataset'] <= perc_ranked_dataset].tail(1)
```
The extracted row is "index: 4522", i.e., the 4523rd row. Its `% true actives identified` value (1.0) is what was reported as the EF.
With the threshold of 5, the filter kept all compounds of `similarity_measure`: **tanimoto_morgan**. The DataFrame simply stores fractions rather than percentages, while a percentage value was used for the extraction; that is the cause of the bug.
Let's now check the correct values for each similarity measure.
```
# Redefine the function
def print_data_ef2(perc_ranked_dataset, enrich_df):
    perc_ranked_dataset_100 = perc_ranked_dataset / 100
    data_ef = enrich_df[enrich_df['% ranked dataset'] <= perc_ranked_dataset_100].tail(1)
    data_ef = round(float(data_ef['% true actives identified'] * 100), 1)
    print("Experimental EF for ", perc_ranked_dataset, "% of ranked dataset: ", data_ef, "%", sep="")
# For MACCS keys
# Choose a percentage
perc_ranked_list = 5
# Get the EF data
print_data_ef2(perc_ranked_list, enrich_df_taMA)
print_random_ef(perc_ranked_list)
print_optimal_ef(perc_ranked_list, similarity_df, threshold)
# For Morgan fingerprints
# Choose a percentage
perc_ranked_list = 5
# Get the EF data
print_data_ef2(perc_ranked_list, enrich_df_tamo)
print_random_ef(perc_ranked_list)
print_optimal_ef(perc_ranked_list, similarity_df, threshold)
```
Both now satisfy "**Optimal** > **Experimental** > **Random**", with **Morgan** performing slightly better than **MACCS**. The values are now consistent with the enrichment plot.
**End of translator's note**
## Discussion
Here we performed a virtual screening using the Tanimoto similarity. Of course, it could also be done with the Dice similarity or other similarity measures.
A disadvantage of similarity search with molecular fingerprints is that, being based on molecular similarity, it does not yield novel structures. Another challenge when dealing with molecular similarity are the so-called activity cliffs: a small change in a functional group of a molecule can cause a large change in bioactivity.
## Quiz
* Where could you start in order to circumvent activity cliffs?
* What are the advantages and disadvantages of MACCS and Morgan fingerprints compared to each other?
* How can you explain the different orderings of the similarity DataFrame depending on the fingerprint used?
<a href="https://colab.research.google.com/github/adasegroup/ML2021_seminars/blob/master/seminar13/gp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Gaussian Processes (GP) with GPy
In this notebook we are going to use the GPy library for GP modeling [SheffieldML github page](https://github.com/SheffieldML/GPy).
Why **GPy**?
* Specialized library of GP models (regression, classification, GPLVM)
* Variety of covariance functions is implemented
* There are GP models for large-scale problems
* Easy to use
Run the following line to install GPy library
```
!pip install GPy
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import GPy
%matplotlib inline
```
Current documentation of GPy library can be found [here](http://gpy.readthedocs.org/en/latest/).
## Gaussian Process Regression
A data set $\left (X, \mathbf{y} \right ) = \left \{ (x_i, y_i), x_i \in \mathbb{R}^d, y_i \in \mathbb{R} \right \}_{i = 1}^N$ is given.
Assumption:
$$
y = f(x) + \varepsilon,
$$
where $f(x)$ is a Gaussian Process and $\varepsilon \sim \mathcal{N}(0, \sigma_n^2)$ is Gaussian noise.
Posterior distribution of function value $y^*$ at point $x^*$
$$
y_* | X, \mathbf{y}, x_* \sim \mathcal{N}(m(x_*), \sigma^2(x_*)),
$$
with predictive mean and variance given by
$$
m(x_*) = \mathbf{k}^T \mathbf{K}_y^{-1} \mathbf{y} = \sum_{i = 1}^N \alpha_i k(x_*, x_i),
$$
$$
\sigma^2(x_*) = k(x_*, x_*) - \mathbf{k}^T\mathbf{K}_y^{-1}\mathbf{k},
$$
where
$$
\mathbf{k} = \left ( k(x_*, x_1), \ldots, k(x_*, x_N) \right )^T
$$
$$
\mathbf{K}_y = \|k(x_i, x_j)\|_{i, j = 1}^N + \sigma_n^2 \mathbf{I}
$$
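The posterior formulas above translate directly into a few lines of NumPy. The following is a minimal sketch (plain NumPy rather than GPy, with an RBF kernel chosen for illustration):
```
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    # k(x, x') = sigma^2 * exp(-||x - x'||^2 / (2 l^2))
    sq_dist = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return variance * np.exp(-0.5 * sq_dist / lengthscale**2)

def gp_posterior(X, y, x_star, noise_var=0.1):
    K_y = rbf(X, X) + noise_var * np.eye(len(X))   # K_y = K + sigma_n^2 I
    k_star = rbf(X, x_star)                        # shape (N, M)
    alpha = np.linalg.solve(K_y, y)                # K_y^{-1} y (solve, not invert)
    mean = k_star.T @ alpha                        # m(x_*) = k^T K_y^{-1} y
    cov = rbf(x_star, x_star) - k_star.T @ np.linalg.solve(K_y, k_star)
    return mean, np.diag(cov)                      # sigma^2(x_*) on the diagonal

# Toy usage
X = np.random.rand(20, 1)
y = np.sin(6 * X).ravel() + 0.1 * np.random.randn(20)
mean, var = gp_posterior(X, y, np.linspace(0, 1, 5).reshape(-1, 1))
```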
### Exercises
1. What is the posterior variance equal to at the points from the training set? What if the noise variance is equal to 0?
2. Suppose that we want to minimize some unknown function $f(\mathbf{x})$.
We are given a set of observations $y_i = f(\mathbf{x}_i) + \varepsilon_i$, where $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$.
Using the observations we built a GP model $\hat{f}(\mathbf{x})$.
Now, let us consider the value called *improvement*:
$$
I(\mathbf{x}) = \max(0, y^* - f(\mathbf{x})),
$$
where $y^*$ is currently found minimum value of $f(\mathbf{x})$.
To choose the next candidate for the minimum we would like to maximize the *Expected Improvement*
$$
EI(x) = \mathbb{E}_f I(\mathbf{x})
$$
1. Express the $EI(\mathbf{x})$ in terms $\Phi(\cdot)$ and $\phi(\cdot)$ - the pdf and cdf of the standard normal distribution $\mathcal{N}(0, 1)$.
2. Assuming $\sigma = 0$ what is the value of $EI(\mathbf{x})$ for any value $y_i$ from the dataset?
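For reference, a common closed-form expression for EI (for minimization, written in terms of $\Phi$ and $\phi$) can be sketched in code as follows; this is one standard formulation and is meant only as a hint, not as the unique exercise solution:
```
import numpy as np
from scipy.stats import norm

def expected_improvement(mean, std, y_best):
    # EI(x) = (y* - m(x)) * Phi(z) + sigma(x) * phi(z),  z = (y* - m(x)) / sigma(x)
    mean = np.asarray(mean, dtype=float)
    std = np.asarray(std, dtype=float)
    improvement = y_best - mean
    z = np.divide(improvement, std,
                  out=np.zeros_like(improvement), where=std > 0)
    ei = improvement * norm.cdf(z) + std * norm.pdf(z)
    # At noiseless training points sigma = 0 and EI = 0 (cf. exercise 2.2)
    return np.where(std > 0, ei, 0.0)
```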
## Building GPR model
Let's fit a GPR model to the function $f(x) = -\cos(\pi x) + \sin(4\pi x)$ on $[0, 1]$,
with noise $y(x) = f(x) + \epsilon$, $\epsilon \sim \mathcal{N}(0, 0.1)$.
```
N = 10
X = np.linspace(0.05, 0.95, N).reshape(-1, 1)
Y = -np.cos(np.pi * X) + np.sin(4 * np.pi * X) + \
np.random.normal(loc=0.0, scale=0.1, size=(N, 1))
plt.figure(figsize=(5, 3))
plt.plot(X, Y, '.')
```
#### 1. Define covariance function
The most popular kernel - RBF kernel - has 2 parameters: `variance` and `lengthscale`, $k(x, y) = \sigma^2 \exp\left ( -\dfrac{\|x - y\|^2}{2l^2}\right )$,
where `variance` is $\sigma^2$, and `lengthscale` - $l$.
```
input_dim = 1
variance = 1
lengthscale = 0.2
kernel = GPy.kern.RBF(input_dim, variance=variance,
lengthscale=lengthscale)
```
#### 2. Create GPR model
```
model = GPy.models.GPRegression(X, Y, kernel)
print(model)
model.plot(figsize=(5, 3))
```
### Parameters of the covariance function
Values of parameters of covariance function can be set like: `k.lengthscale = 0.1`.
Let's change the value of `lengthscale` parameter and see how it changes the covariance function.
```
k = GPy.kern.RBF(1)
theta = np.asarray([0.2, 0.5, 1, 2, 4, 10])
figure, axes = plt.subplots(2, 3, figsize=(8, 4))
for t, ax in zip(theta, axes.ravel()):
    k.lengthscale = t
    k.plot(ax=ax)
    ax.set_ylim([0, 1])
    ax.set_xlim([-4, 4])
    ax.legend([t])
```
### Task
Try to change parameters to obtain more accurate model.
```
######## Your code here ########
kernel =
model =
model.Gaussian_noise.variance.fix(0.01)
print(model)
model.plot()
```
### Tuning parameters of the covariance function
The parameters are tuned by maximizing likelihood. To do it just use `optimize()` method of the model.
```
model = GPy.models.GPRegression(X, Y, kernel)
model.optimize()
print(model)
model.plot(figsize=(5, 3))
```
### Noise variance
Noise variance acts like a regularization in GP models. Larger values of noise variance lead to more smooth model.
Let's check it: try to change noise variance to some large value, to some small value and see the results.
Noise variance accessed like this: `model.Gaussian_noise.variance = 1`
```
######## Your code here ########
model.Gaussian_noise.variance =
model.plot(figsize=(5, 3))
```
Now, let's generate more noisy data and try to fit model.
```
N = 40
X = np.linspace(0.05, 0.95, N).reshape(-1, 1)
Y = -np.cos(np.pi * X) + np.sin(4 * np.pi * X) + \
np.random.normal(loc=0.0, scale=0.5, size=(N, 1))
kernel = GPy.kern.RBF(1)
model = GPy.models.GPRegression(X, Y, kernel)
model.optimize()
print(model)
model.plot(figsize=(5, 3))
```
Now, let's fix noise variance to some small value and fit the model
```
kernel = GPy.kern.RBF(1)
model = GPy.models.GPRegression(X, Y, kernel)
model.Gaussian_noise.variance.fix(0.01)
model.optimize()
model.plot(figsize=(5, 3))
```
## Approximate multi-dimensional function
```
def rosenbrock(x):
    x = 0.5 * (4 * x - 2)
    y = np.sum((1 - x[:, :-1])**2 +
               100 * (x[:, 1:] - x[:, :-1]**2)**2, axis=1)
    return y
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from sklearn.metrics import mean_squared_error
def plot_2d_func(func, n_rows=1, n_cols=1, title=None):
    grid_size = 100
    x_grid = np.meshgrid(np.linspace(0, 1, grid_size), np.linspace(0, 1, grid_size))
    x_grid = np.hstack((x_grid[0].reshape(-1, 1), x_grid[1].reshape(-1, 1)))
    y = func(x_grid)
    fig = plt.figure(figsize=(n_cols * 6, n_rows * 6))
    ax = fig.add_subplot(n_rows, n_cols, 1, projection='3d')
    ax.plot_surface(x_grid[:, 0].reshape(grid_size, grid_size), x_grid[:, 1].reshape(grid_size, grid_size),
                    y.reshape(grid_size, grid_size),
                    cmap=cm.jet, rstride=1, cstride=1)
    if title is not None:
        ax.set_title(title)
    return fig
```
#### Here is how the function looks in 2D
```
fig = plot_2d_func(rosenbrock)
```
### Training set
Note that it is 3-dimensional now
```
dim = 3
x_train = np.random.rand(300, dim)
y_train = rosenbrock(x_train).reshape(-1, 1)
```
### Task
Try to approximate the Rosenbrock function using the RBF kernel. The MSE (mean squared error) should be $<10^{-2}$.
**Hint**: if results are not good maybe it is due to bad local minimum. You can do one of the following things:
1. Try to use multi-start by calling `model.optimize_restarts(n_restarts)` method of the model.
2. Constrain model parameters to some reasonable bounds. You can do it for example as follows:
`model.Gaussian_noise.variance.constrain_bounded(0, 1)`
```
######## Your code here ########
model =
x_test = np.random.rand(3000, dim)
y_test = rosenbrock(x_test)
y_pr = model.predict(x_test)[0]
mse = mean_squared_error(y_test.ravel(), y_pr.ravel())
print('\nMSE: {}'.format(mse))
```
### Covariance functions
Short info about covariance function can be printed using `print(k)`.
```
k = GPy.kern.RBF(1)
print(k)
```
You can plot the covariance function using `plot()` method.
```
k.plot(figsize=(5, 3))
```
## More "complex" functions
The most popular covariance function is RBF. However, not all functions can be modelled well using the RBF covariance function. For example, approximations of discontinuous functions will suffer from oscillations, and approximations of highly oscillatory functions may suffer from oversmoothing.
```
def heaviside(x):
    return np.asfarray(x > 0)
def rastrigin(x):
    """
    Parameters
    ==========
    x : ndarray - 2D array in [0, 1]
    Returns
    =======
    1D array of values of Rastrigin function
    """
    scale = 8  # 10.24
    x = scale * x - scale / 2
    y = 10 * x.shape[1] + (x**2).sum(axis=1) - 10 * np.cos(2 * np.pi * x).sum(axis=1)
    return y
fig = plot_2d_func(rastrigin, 1, 2, title='Rastrigin function')
x = np.linspace(-1, 1, 100)
y = heaviside(x)
ax = fig.add_subplot(1, 2, 2)
ax.plot(x, y)
ax.set_title('Heaviside function')
plt.show()
```
#### Example of oscillations
As you can see, there are oscillations in the vicinity of the discontinuity, because we are trying to approximate
a discontinuous function using an infinitely smooth one.
```
np.random.seed(42)
X = np.random.rand(30, 1) * 2 - 1
y = heaviside(X)
k = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
m = GPy.models.GPRegression(X, y, k)
m.optimize()
m.plot(figsize=(5, 3))
plt.ylim([-0.2, 1.2])
```
#### Example of oversmoothing
Actually, the GP model only approximates the trend of the function;
all the wiggles are treated as noise.
Knowledge about this (in fact, there is some repeated structure) should be incorporated into the model via the kernel function.
```
np.random.seed(42)
X = np.random.rand(300, 2)
y = rastrigin(X)
k = GPy.kern.RBF(input_dim=2)
m = GPy.models.GPRegression(X, y.reshape(-1, 1), k)
m.optimize()
fig = plot_2d_func(lambda x: m.predict(x)[0])
```
### Covariance functions in GPy
Popular covariance functions: `Exponential`, `Matern32`, `Matern52`, `RatQuad`, `Linear`, `StdPeriodic`.
* Exponential:
$$
k(x, x') = \sigma^2 \exp \left (-\frac{r}{l} \right), \quad r = \|x - x'\|
$$
* Matern32
$$
k(x, x') = \sigma^2 \left (1 + \sqrt{3}\frac{r}{l} \right )\exp \left (-\sqrt{3}\frac{r}{l} \right )
$$
* Matern52
$$
k(x, x') = \sigma^2 \left (1 + \sqrt{5}\frac{r}{l} + \frac{5}{3}\frac{r^2}{l^2} \right ) \exp \left (-\sqrt{5}\frac{r}{l} \right )
$$
* RatQuad
$$
k(x, x') = \left ( 1 + \frac{r^2}{2\alpha l^2}\right )^{-\alpha}
$$
* Linear
$$
k(x, x') = \sum_i \sigma_i^2 x_i x_i'
$$
* Poly
$$
k(x, x') = \sigma^2 (x^T x' + c)^d
$$
* StdPeriodic
$$
k(x, x') = \sigma^2 \exp\left ( -2 \frac{\sin^2(\pi r)}{l^2}\right )
$$
```
covariance_functions = [GPy.kern.Exponential(1), GPy.kern.Matern32(1),
GPy.kern.RatQuad(1), GPy.kern.Linear(1),
GPy.kern.Poly(1), GPy.kern.StdPeriodic(1)]
figure, axes = plt.subplots(2, 3, figsize=(9, 6))
axes = axes.ravel()
for i, k in enumerate(covariance_functions):
    k.plot(ax=axes[i])
    axes[i].set_title(k.name)
figure.tight_layout()
```
## Combination of covariance functions
* Sum of covariance function is a valid covariance function:
$$
k(x, x') = k_1(x, x') + k_2(x, x')
$$
* Product of covariance functions is a valid covariance function:
$$
k(x, x') = k_1(x, x') k_2(x, x')
$$
### Combinations of covariance functions in GPy
In GPy to combine covariance functions you can just use operators `+` and `*`.
Let's plot some of the combinations
```
covariance_functions = [GPy.kern.Linear(input_dim=1), GPy.kern.StdPeriodic(input_dim=1), GPy.kern.RBF(input_dim=1, lengthscale=1)]
operations = {'+': lambda x, y: x + y, '*': lambda x, y: x * y}
figure, axes = plt.subplots(len(operations), len(covariance_functions), figsize=(9, 6))
import itertools
axes = axes.ravel()
count = 0
for j, base_kernels in enumerate(itertools.combinations(covariance_functions, 2)):
    for k, (op_name, op) in enumerate(operations.items()):
        kernel = op(base_kernels[0], base_kernels[1])
        kernel.plot(ax=axes[count])
        axes[count].set_title('{} {} {}'.format(base_kernels[0].name, op_name, base_kernels[1].name),
                              fontsize=14)
        count += 1
figure.tight_layout()
```
### Additive kernels
One of the popular approach to model the function of interest is
$$
f(x) = \sum_{i=1}^d f_i(x_i) + \sum_{i < j} f_{ij}(x_i, x_j) + \ldots
$$
**Example**: $\quad f(x_1, x_2) = f_1(x_1) + f_2(x_2)$
To model it using GP use additive kernel $\quad k(x, y) = k_1(x_1, y_1) + k_2(x_2, y_2)$.
More general - add kernels each depending on subset of inputs
$$
k(x, y) = k_1(x, y) + \ldots + k_D(x, y),
$$
where, for example, $k_1(x, x') = k_1(x_1, x_1'), \; k_2(x, x') = k_2((x_1, x_3), (x_1', x_3'))$, etc.
Here is an example of ${\rm RBF}(x_1) + {\rm RBF}(x_2)$
```
k1 = GPy.kern.RBF(1, active_dims=[0])
k2 = GPy.kern.RBF(1, active_dims=[1])
kernel = k1 + k2
x = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
x = np.hstack((x[0].reshape(-1, 1), x[1].reshape(-1, 1)))
z = kernel.K(x, np.array([[0, 0]]))
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
figure = plt.figure()
ax = figure.add_subplot(111, projection='3d')
ax.plot_surface(x[:, 0].reshape(50, 50), x[:, 1].reshape(50, 50), z.reshape(50, 50), cmap=cm.jet)
plt.show()
```
### Kernels on arbitrary types of objects
Kernels can be defined over all types of data structures: text, images, matrices, graphs, etc. You just need to define similarity between objects.
#### Kernels on categorical data
* Represent your categorical variable by a one-of-k encoding: $\quad x = (x_1, \ldots, x_k)$.
* Use RBF kernel with `ARD=True`: $\quad k(x , x') = \sigma^2 \prod_{i = 1}^k\exp{\left ( -\dfrac{(x_i - x_i')^2}{\sigma_i^2} \right )}$. The lengthscale will now encode whether the rest of the function changes.
* Short lengthscales for categorical variables means your model is not sharing any information between data of different categories.
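A minimal sketch of this recipe (the category labels and toy response below are made up for illustration):
```
import numpy as np
import GPy

# Toy data: a categorical feature with 3 levels, one-hot encoded
categories = np.random.randint(0, 3, size=50)
X = np.eye(3)[categories]                      # one-of-k encoding, shape (50, 3)
y = (categories == 1).astype(float).reshape(-1, 1) + 0.1 * np.random.randn(50, 1)

# ARD RBF: one lengthscale per one-hot dimension
kernel = GPy.kern.RBF(input_dim=3, ARD=True)
model = GPy.models.GPRegression(X, y, kernel)
model.optimize()
print(kernel.lengthscale)  # short lengthscales -> little sharing across categories
```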
## 2 Sampling from GP
So, you have defined some complex kernel.
You can plot it to see how it looks and guess what kind of functions it can approximate.
Another way to do it is to actually generate random functions using this kernel.
A GP defines a distribution over functions, specified by its *mean function* $m(x)$ and *covariance function* $k(x, y)$: for any set of points $\mathbf{x}_1, \ldots, \mathbf{x}_N \in \mathbb{R}^d$ we have $\left (f(\mathbf{x}_1), \ldots, f(\mathbf{x}_N) \right ) \sim \mathcal{N}(\mathbf{m}, \mathbf{K})$,
where $\mathbf{m} = \left(m(\mathbf{x}_1), \ldots, m(\mathbf{x}_N)\right)$ and $\mathbf{K} = \|k(\mathbf{x}_i, \mathbf{x}_j)\|_{i,j=1}^N$.
Sampling procedure:
1. Generate set of points $\mathbf{x}_1, \ldots, \mathbf{x}_N$.
2. Calculate the mean and covariance matrix: $\mathbf{m} = \left(m(\mathbf{x}_1), \ldots, m(\mathbf{x}_N)\right)$, $\mathbf{K} = \|k(\mathbf{x}_i, \mathbf{x}_j)\|_{i,j=1}^N$.
3. Generate vector from multivariate normal distribution $\mathcal{N}(\mathbf{m}, \mathbf{K})$.
Below try to change RBF kernel to some other kernel and see the results.
```
k = GPy.kern.RBF(input_dim=1, lengthscale=0.3)
X = np.linspace(0, 5, 500).reshape(-1, 1)
mu = np.zeros(500)
C = k.K(X, X)
Z = np.random.multivariate_normal(mu, C, 3)
plt.figure()
for i in range(3):
    plt.plot(X, Z[i, :])
```
### Task
Build a GP model that predicts airline passenger counts on international flights.
```
!wget https://github.com/adasegroup/ML2020_seminars/raw/master/seminar11/data/airline.npz
data = np.load('airline.npz')
X = data['X']
y = data['y']
train_indices = list(range(70)) + list(range(90, 129))
test_indices = range(70, 90)
X_train = X[train_indices]
y_train = y[train_indices]
X_test = X[test_indices]
y_test = y[test_indices]
plt.figure(figsize=(5, 3))
plt.plot(X_train, y_train, '.')
```
You need to obtain something like this
<img src=https://github.com/adasegroup/ML2020_seminars/raw/master/seminar11/imgs/airline_result.png>
```
def plot_model(X, y, model):
    x = np.linspace(1948, 1964, 400).reshape(-1, 1)
    prediction_mean, prediction_var = model.predict(x)
    prediction_std = np.sqrt(prediction_var).ravel()
    prediction_mean = prediction_mean.ravel()
    plt.figure(figsize=(5, 3))
    plt.plot(X, y, '.', label='Train data')
    plt.plot(x, prediction_mean, label='Prediction')
    plt.fill_between(x.ravel(), prediction_mean - prediction_std, prediction_mean + prediction_std, alpha=0.3)
```
#### Let's try RBF kernel
```
######## Your code here ########
k_rbf =
```
As you can see below it doesn't work ;(
```
model = GPy.models.GPRegression(X, y, k_rbf)
model.optimize()
print(model)
plot_model(X_train, y_train, model)
```
We will try to model this data set using 3 additive components: trend, seasonality and noise.
So, the kernel should be a sum of 3 kernels:
`kernel = kernel_trend + kernel_seasonality + kernel_noise`
#### Let's first try to model trend
Trend is almost linear with some small nonlinearity, so you can use sum of linear kernel with some other which gives this small nonlinearity.
```
######## Your code here ########
k_trend =
model = GPy.models.GPRegression(X, y, k_trend)
model.optimize()
print(model)
plot_model(X_train, y_train, model)
```
#### Let's model periodicity
A periodic kernel alone will not work (why?).
Try to use product of periodic kernel with some other kernel (or maybe 2 other kernels).
Note that the amplitude increases with x.
```
######## Your code here ########
k_trend =
k_seasonal =
kernel = k_trend + k_seasonal
model = GPy.models.GPRegression(X, y, kernel)
model.optimize()
print(model)
plot_model(X_train, y_train, model)
```
#### Let's add noise model
The dataset is heteroscedastic, i.e. noise variance depends on x: it increases linearly with x.
Noise can be modeled using `GPy.kern.White(1)`, but it assumes that noise variance is the same at every x.
By what kernel it should be multiplied?
```
######## Your code here ########
k_trend =
k_periodicity =
k_noise =
kernel = k_trend + k_periodicity + k_noise
model = GPy.models.GPRegression(X, y, kernel)
model.optimize()
print(model)
plot_model(X_train, y_train, model)
```
# Automatic covariance structure search
We can construct the kernel in an automatic way.
Here is our data set (almost the same as before):
```
idx_test = np.where((X[:,0] > 1957))[0]
idx_train = np.where((X[:,0] <= 1957))[0]
X_train = X[idx_train]
y_train = y[idx_train]
X_test = X[idx_test]
y_test = y[idx_test]
plt.figure(figsize=(7, 5))
plt.plot(X_train, y_train, '.', color='red');
plt.plot(X_test, y_test, '.', color='green');
def plot_model_learned(X, y, train_idx, test_idx, model):
    prediction_mean, prediction_var = model.predict(X)
    prediction_std = np.sqrt(prediction_var).ravel()
    prediction_mean = prediction_mean.ravel()
    plt.figure(figsize=(7, 5))
    plt.plot(X, y, '.')
    plt.plot(X[train_idx], y[train_idx], '.', color='green')
    plt.plot(X, prediction_mean, color='red')
    plt.fill_between(X.ravel(), prediction_mean - prediction_std, prediction_mean + prediction_std, alpha=0.3)
```
## Expressing Structure Through Kernels
For example:
$$
\underbrace{\text{RBF}\times\text{Lin}}_\text{increasing trend} + \underbrace{\text{RBF}\times\text{Per}}_\text{varying-amplitude periodic} + \underbrace{\text{RBF}}_\text{residual}
$$
## Greedy Searching for the Optimum Kernel Combination
One can wonder: how do we automatically search for the kernel structure? We can optimize some criterion that balances the loss function value against the complexity of the model.
A reasonable candidate for this is the BIC criterion:
$$
\text{BIC} = -2 \cdot \text{Log-Likelihood} + m \cdot \log{n}
$$
where $n$ is the sample size and $m$ is the number of parameters.
However, fitting a Gaussian Process is quite expensive, $O(n^3)$. Hence, instead of a combinatorial search through all possible combinations, we grow the kernel structure greedily.
You can find more details at https://github.com/jamesrobertlloyd/gp-structure-search. For now, we present a toy example of the algorithm.
Consider the set of operations:
$$
\text{Algebra: } +,\times
$$
and the set of basic kernels:
$$
\text{Kernels: } \text{Poly}, \text{RBF}, \text{Periodic}
$$
At each level we select the extension of our current kernel with the lowest BIC. This is an example of a possible kernel growing process (the mark denotes the lowest BIC at each level):
<img src='https://github.com/adasegroup/ML2020_seminars/raw/master/seminar11/imgs/gp.png'>
### Task*
Implement a function that trains a model with a given kernel and dataset, then calculates and returns the BIC.
The log-likelihood of the model can be calculated using the `model.log_likelihood()` method;
the number of parameters of the model can be obtained via `len(model.param_array)`.
```
def train_model_get_bic(X_train, y_train, kernel, num_restarts=1):
    '''
    Input:
        X_train: numpy array of train features, n*d (d>=1)
        y_train: numpy array n*1
        kernel: GPy kern object
        num_restarts: number of restarts of the optimization routine
    Output:
        bic value
    '''
    kernel = kernel.copy()
    ######## Your code here ########
    return bic
```
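For reference, the core of one possible implementation of the `Your code here` part (a sketch that uses only calls already shown in this notebook, inside the function above):
```
model = GPy.models.GPRegression(X_train, y_train, kernel)
model.optimize_restarts(num_restarts, verbose=False)
n = X_train.shape[0]           # sample size
m = len(model.param_array)     # number of parameters
bic = -2 * model.log_likelihood() + m * np.log(n)
```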
Here is a utility function which takes a list of kernels and the operations between them, calculates all the product kernels,
and returns a list of them.
After that we only need to take the sum of the kernels from this list.
```
def _get_all_product_kernels(op_list, kernel_list):
    '''
    Find product pairs and calculate them.
    For example, if we are given the expression:
        K = k1 * k2 + k3 * k4 * k5
    the function will calculate all the product kernels
        k_mul_1 = k1 * k2
        k_mul_2 = k3 * k4 * k5
    and return the list [k_mul_1, k_mul_2].
    '''
    product_index = np.where(np.array(op_list) == '*')[0]
    if len(product_index) == 0:
        return kernel_list
    product_index = product_index[0]
    product_kernel = kernel_list[product_index] * kernel_list[product_index + 1]
    if len(op_list) == product_index + 1:
        kernel_list_copy = kernel_list[:product_index] + [product_kernel]
        op_list_copy = op_list[:product_index]
    else:
        kernel_list_copy = kernel_list[:product_index] + [product_kernel] + kernel_list[product_index + 2:]
        op_list_copy = op_list[:product_index] + op_list[product_index + 1:]
    return _get_all_product_kernels(op_list_copy, kernel_list_copy)
```
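A quick check of the helper on a small expression (the kernel choices here are just illustrative):
```
# Expression: k1 * k2 + k3  ->  op_list=['*', '+'], kernel_list=[k1, k2, k3]
k1, k2, k3 = GPy.kern.RBF(1), GPy.kern.StdPeriodic(1), GPy.kern.Bias(1)
summands = _get_all_product_kernels(['*', '+'], [k1, k2, k3])
print(len(summands))  # 2 summands: (k1 * k2) and k3
```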
### Task*
This is the main class; you need to implement several methods inside:
1. `init_kernel()` — constructs the initial model, i.e. the model with one kernel. You just need to iterate through the list of base kernels and choose the best one according to BIC.
2. `grow_level()` — adds a new level. You need to iterate through all base kernels and all operations,
apply each operation to the previously constructed kernel and each base kernel (use the method `_make_kernel()` for this), and then choose the best one according to BIC.
```
class GreedyKernel:
    '''
    Class for greedily growing a kernel structure
    '''
    def __init__(self, algebra, base_kernels):
        self.algebra = algebra
        self.base_kernels = base_kernels
        self.kernel = None
        self.kernel_list = []
        self.op_list = []
        self.str_kernel = None

    def _make_kernel(self, op_list, kernel_list):
        '''
        Summation in the kernel expression
        '''
        kernels_to_sum = _get_all_product_kernels(op_list, kernel_list)
        new_kernel = kernels_to_sum[0]
        for k in kernels_to_sum[1:]:
            new_kernel = new_kernel + k
        return new_kernel

    def init_kernel(self, X_train, y_train):
        '''
        Initialization of the first kernel
        '''
        best_kernel = None
        ###### Your code here ######
        # You just need to iterate through the list of base kernels and choose the best one according to BIC
        # save the kernel in the `best_kernel` variable
        # base kernels are given by self.base_kernels --- a list of kernel objects
        ############################
        assert best_kernel is not None
        self.kernel_list.append(best_kernel)
        self.str_kernel = str(best_kernel.name)

    def grow_level(self, X_train, y_train):
        '''
        Select the optimal extension of the current kernel
        '''
        best_kernel = None  # should be a kernel object
        best_op = None  # should be an operation name, i.e. "+" or "*"
        ###### Your code here ######
        # You need to iterate through all base kernels and all operations,
        # apply each operation to the previously constructed kernel and each base kernel
        # (use the method `_make_kernel()` for this) and then choose the best one according to BIC.
        # base kernels are given by self.base_kernels --- a list of kernel objects
        # operations are given by self.algebra --- a dictionary:
        #     {"+": lambda x, y: x + y,
        #      "*": lambda x, y: x * y}
        # best_kernel - kernel object, store in this variable the best found kernel
        # best_op - '+' or '*', store in this variable the best found operation
        ############################
        assert best_kernel is not None
        assert best_op is not None
        self.kernel_list.append(best_kernel)
        self.op_list.append(best_op)
        new_kernel = self._make_kernel(self.op_list, self.kernel_list)
        str_new_kernel = '{} {} {}'.format(self.str_kernel, best_op, best_kernel.name)
        return new_kernel, str_new_kernel

    def grow_tree(self, X_train, y_train, max_depth):
        '''
        Greedy kernel growing
        '''
        if self.kernel is None:
            self.init_kernel(X_train, y_train)
        for i in range(max_depth):
            self.kernel, self.str_kernel = self.grow_level(X_train, y_train)
            print(self.str_kernel)

    def fit_model(self, X_train, y_train, kernel, num_restarts=1):
        model = GPy.models.GPRegression(X_train, y_train, kernel)
        model.optimize_restarts(num_restarts, verbose=False)
        return model
```
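For reference, a sketch of the bodies that could fill the two `Your code here` blocks (it assumes the `train_model_get_bic` function from the previous task; one possible solution, not the canonical one):
```
# Body sketch for init_kernel(): pick the single base kernel with the lowest BIC
best_bic, best_kernel = np.inf, None
for k in self.base_kernels:
    bic = train_model_get_bic(X_train, y_train, k)
    if bic < best_bic:
        best_bic, best_kernel = bic, k

# Body sketch for grow_level(): try every (operation, base kernel) extension
best_bic, best_kernel, best_op = np.inf, None, None
for op in self.algebra:            # '+' or '*'
    for k in self.base_kernels:
        candidate = self._make_kernel(self.op_list + [op], self.kernel_list + [k])
        bic = train_model_get_bic(X_train, y_train, candidate)
        if bic < best_bic:
            best_bic, best_kernel, best_op = bic, k, op
```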
Now let us define the algebra and the list of base kernels.
To make the learning process more robust, we constrain some parameters of the kernels to lie within
reasonable intervals.
```
# operations on kernels:
algebra = {'+': lambda x, y: x + y,
'*': lambda x, y: x * y
}
# basic kernels list:
poly_kern = GPy.kern.Poly(input_dim=1, order=1)
periodic_kern = GPy.kern.StdPeriodic(input_dim=1)
periodic_kern.period.constrain_bounded(1e-2, 1e1)
periodic_kern.lengthscale.constrain_bounded(1e-2, 1e1)
rbf_kern = GPy.kern.RBF(input_dim=1)
rbf_kern.lengthscale.constrain_bounded(1e-2, 1e1)
bias_kern = GPy.kern.Bias(1)
kernels_list = [poly_kern, periodic_kern, rbf_kern]
```
Let's train the model.
You should obtain something which is more accurate than the trend model ;)
```
GK = GreedyKernel(algebra, kernels_list)
GK.grow_tree(X_train, y_train, 4)
model = GK.fit_model(X_train, y_train, GK.kernel)
plot_model_learned(X, y, idx_train, idx_test, model)
```
## Bonus Task
Try to approximate the Rastrigin function.
```
fig = plot_2d_func(rastrigin)
```
### Training set
```
np.random.seed(42)
x_train = np.random.rand(200, 2)
y_train = rastrigin(x_train)
```
#### Hint: you can constrain parameters of the covariance functions, for example
`model.std_periodic.period.constrain_bounded(0, 0.2)`.
```
######## Your code here ########
model =
print(model)
x_test = np.random.rand(1000, 2)
y_test = rastrigin(x_test)
y_pr = model.predict(x_test)[0]
mse = mean_squared_error(y_test.ravel(), y_pr.ravel())
print('MSE: {}'.format(mse))
fig = plot_2d_func(lambda x: model.predict(x)[0])
```
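One possible starting point (a sketch, not the reference solution — the kernel choice and bounds are assumptions):
```
# A periodic kernel for Rastrigin's cosine ripples plus an RBF for the smooth bowl
k_per = GPy.kern.StdPeriodic(input_dim=2)
k_per.period.constrain_bounded(1e-2, 0.2)  # ripples repeat on a short scale
k_smooth = GPy.kern.RBF(input_dim=2)
model = GPy.models.GPRegression(x_train, np.asarray(y_train).reshape(-1, 1), k_per + k_smooth)
model.optimize()
```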
# Appendix: Gaussian Process Classification
### Classification
A data set $\left (X, \mathbf{y} \right ) = \left \{ (x_i, y_i), x_i \in \mathbb{R}^d, y_i \in \{+1, -1\} \right \}_{i = 1}^N$ is given.
Assumption:
$$
p(y = +1 \; | \; x) = \sigma(f(x)) = \pi(x),
$$
where the latent function $f(x)$ is a Gaussian Process.
We need to produce a probabilistic prediction
$$
\pi_* = p(y_* \; | \; X, \mathbf{y}, x_*) = \int \sigma(f_*) p(f_* \; | \; X, \mathbf{y}, x_*) df_*,
$$
$$
p(f_* \; | \; X, \mathbf{y}, x_*) = \int p(f_* \; | \; X, x_*, \mathbf{f}) p(\mathbf{f} \; | \; X, \mathbf{y}) d\mathbf{f},
$$
where $p(\mathbf{f} \; |\; X, \mathbf{y}) = \dfrac{p(\mathbf{y} | X, \mathbf{f}) p(\mathbf{f} | X)}{p(\mathbf{y} | X)}$ is the posterior over the latent variables.
Both integrals are intractable.
We therefore use an approximation technique such as the Laplace approximation or Expectation Propagation.
```
from matplotlib import cm
def cylinder(x):
    y = (1 / 7.0 - (x[:, 0] - 0.5)**2 - (x[:, 1] - 0.5)**2) > 0
    return y
np.random.seed(42)
X = np.random.rand(40, 2)
y = cylinder(X)
x_grid = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
y_grid = cylinder(np.hstack((x_grid[0].reshape(-1, 1), x_grid[1].reshape(-1, 1)))).reshape(x_grid[0].shape)
positive_idx = y == 1
plt.figure(figsize=(5, 3))
plt.plot(X[positive_idx, 0], X[positive_idx, 1], '.', markersize=10, label='Positive')
plt.plot(X[~positive_idx, 0], X[~positive_idx, 1], '.', markersize=10, label='Negative')
im = plt.contour(x_grid[0], x_grid[1], y_grid, 10, cmap=cm.hot)
plt.colorbar(im)
plt.legend()
plt.show()
kernel = GPy.kern.RBF(2, variance=1., lengthscale=0.2, ARD=True)
model = GPy.models.GPClassification(X, y.reshape(-1, 1), kernel=kernel)
model.optimize()
print(model)
def plot_model_2d(model):
    model.plot(levels=40, resolution=80, plot_data=False, figsize=(5, 3))
    plt.plot(X[positive_idx, 0], X[positive_idx, 1], '.', markersize=10, label='Positive')
    plt.plot(X[~positive_idx, 0], X[~positive_idx, 1], '.', markersize=10, label='Negative')
    plt.legend()
    plt.show()
plot_model_2d(model)
```
Let's change the lengthscale to a small value.
```
model.rbf.lengthscale = [0.05, 0.05]
plot_model_2d(model)
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import scipy
from scipy.signal import convolve
from scipy import ndimage
import getBayer
%matplotlib inline
import io
import time
import copy
from numpy.lib.stride_tricks import as_strided
Im = getBayer.getBayer('pic2.jpeg')
bayer = getBayer.bayerGrid
testIm = copy.deepcopy(Im)
plt.imshow(Im/255)
# trying out a method where you interpolate along a direction based on which seems most constant
(m,n) = testIm[:,:,0].shape
R = testIm[:,:,0].astype(np.int32)
G = testIm[:,:,1].astype(np.int32)
B = testIm[:,:,2].astype(np.int32)
G[0:3,0:3]
G_n = copy.deepcopy(G)
R_n = copy.deepcopy(R)
B_n = copy.deepcopy(B)
```
I'm basing the pixel patterning algorithm below on the one detailed on Chuan-kai Lin's website: https://sites.google.com/site/chklin/demosaic
```
for i in range(2, m-3):
    for j in range(2, n-3):
        if G[i,j] == 0:
            if B[i,j] == 0:  # if this isn't a blue or green pixel, it's a red one
                M = R
            elif R[i,j] == 0:
                M = B
            north = 2*abs(M[i,j] - M[i-2,j]) + abs(G[i-1,j] - G[i+1,j])
            east = 2*abs(M[i,j] - M[i,j+2]) + abs(G[i,j+1] - G[i,j-1])
            south = 2*abs(M[i,j] - M[i+2,j]) + abs(G[i-1,j] - G[i+1,j])
            west = 2*abs(M[i,j] - M[i,j-2]) + abs(G[i,j+1] - G[i,j-1])
            # print(north)
            grads = [north, east, south, west]
            if min(grads) == north:
                G_n[i,j] = (3*G[i-1,j] + G[i+1,j] + M[i,j] - M[i-2,j])/4
            elif min(grads) == east:
                G_n[i,j] = (3*G[i,j+1] + G[i,j-1] + M[i,j] - M[i,j+2])/4
            elif min(grads) == south:
                G_n[i,j] = (3*G[i+1,j] + G[i-1,j] + M[i,j] - M[i+2,j])/4
            elif min(grads) == west:
                G_n[i,j] = (3*G[i,j-1] + G[i,j+1] + M[i,j] - M[i,j-2])/4
temp = testIm.copy()
temp[:,:,1] = G_n[:,:]
plt.imshow(temp/255)
plt.imshow(G_n[100:200,100:200])
# make a hue gradient function
def hueGrad(c1, c2, c3, p1, p3):
    if (c1 < c2 and c2 < c3) or (c3 < c2 and c2 < c1):
        return p1 + (p3-p1)*(c2 - c1)/(c3-c1)
    else:
        return (p1+p3)/2 + (2*c2 + c1 + c3)/4
```
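The helper is only exercised through the commented-out calls below; a standalone call with illustrative numbers behaves like this:
```
# Greens rise monotonically (100 < 110 < 120), so the estimate is
# gradient-corrected between its two neighbours: 80 + (96-80)*(110-100)/(120-100) = 88.0
print(hueGrad(100, 110, 120, 80, 96))
```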
Here I'm not following the algorithm I was using before; I'm mostly just doing bilinear interpolation.
```
for i in range(1, m-2):
    for j in range(1, n-2):
        if R[i,j] == 0 and B[i,j] == 0:  # green sensel location
            if R[i+1,j] == 0:  # the pixel directly below in the Bayer grid is a blue one
                R_n[i,j] = abs(R[i,j-1] + R[i,j+1])//2
                B_n[i,j] = abs(B[i-1,j] + B[i+1,j])//2
                # R_n[i,j] = hueGrad(G_n[i,j-1], G_n[i,j], G_n[i,j+1], R[i,j-1], R[i,j+1])
                # B_n[i,j] = hueGrad(G_n[i-1,j], G_n[i,j], G_n[i+1,j], B[i-1,j], B[i+1,j])
            elif B[i+1,j] == 0:
                R_n[i,j] = abs(R[i-1,j] + R[i+1,j])//2
                B_n[i,j] = abs(B[i,j-1] + B[i,j+1])//2
                # B_n[i,j] = hueGrad(G_n[i,j-1], G_n[i,j], G_n[i,j+1], B[i,j-1], B[i,j+1])
                # R_n[i,j] = hueGrad(G_n[i-1,j], G_n[i,j], G_n[i+1,j], R[i-1,j], R[i+1,j])
        elif B[i,j] == 0 and G[i,j] == 0:  # red sensel location: interpolate the missing blue value
            NE = abs(B[i-1,j+1] - B[i+1,j-1])
            NW = abs(B[i-1,j-1] - B[i+1,j+1])
            if NW > 2*NE:
                B_n[i,j] = abs(B[i-1,j+1] + B[i+1,j-1])//2
            elif NE > 2*NW:
                B_n[i,j] = abs(B[i-1,j-1] + B[i+1,j+1])//2
            else:
                B_n[i,j] = abs(B[i-1,j-1] + B[i+1,j+1] + B[i-1,j+1] + B[i+1,j-1])//4
        elif R[i,j] == 0 and G[i,j] == 0:  # blue sensel location: interpolate the missing red value
            NE = abs(R[i-1,j+1] - R[i+1,j-1])
            NW = abs(R[i-1,j-1] - R[i+1,j+1])
            if NW > 2*NE:
                R_n[i,j] = abs(R[i-1,j+1] + R[i+1,j-1])//2
            elif NE > 2*NW:
                R_n[i,j] = abs(R[i-1,j-1] + R[i+1,j+1])//2
            else:
                R_n[i,j] = abs(R[i-1,j-1] + R[i+1,j+1] + R[i-1,j+1] + R[i+1,j-1])//4
temp[:,:,0] = R_n[:,:]
temp[:,:,2] = B_n[:,:]
plt.imshow(temp/255)
# plt.imshow(testIm[100:120,100:120,0]/255, cmap = 'gray')
B_n[50:60,50:60]
# B[50:60,50:60]
B_n.shape
plt.imshow(temp[850:880,870:900]/255)
rgbIm = getBayer.get_rgb_array('pic2.jpeg')
plt.imshow(rgbIm/255)
```
| github_jupyter |
```
# General imports
import numpy as np
import pandas as pd
import os, sys, gc, time, warnings, pickle, psutil, random
# custom imports
from multiprocessing import Pool # Multiprocess Runs
warnings.filterwarnings('ignore')
########################### Helpers
#################################################################################
## Seeder
# :seed to make all processes deterministic # type: int
def seed_everything(seed=0):
    random.seed(seed)
    np.random.seed(seed)
## Multiprocess Runs
def df_parallelize_run(func, t_split):
    num_cores = np.min([N_CORES, len(t_split)])
    pool = Pool(num_cores)
    df = pd.concat(pool.map(func, t_split), axis=1)
    pool.close()
    pool.join()
    return df
########################### Helper to load data by store ID
#################################################################################
# Read data
def get_data_by_store(store):
    # Read and concat basic features
    df = pd.concat([pd.read_pickle(BASE),
                    pd.read_pickle(PRICE).iloc[:,2:],
                    pd.read_pickle(CALENDAR).iloc[:,2:]],
                   axis=1)
    # Leave only the relevant store
    df = df[df['store_id']==store]
    # With memory limits we have to read
    # lag and mean-encoding features
    # separately and drop items that we don't need.
    # As our feature grids are aligned,
    # we can use the index to keep only the necessary rows.
    # Alignment is good for us, as concat uses less memory than merge.
    df2 = pd.read_pickle(MEAN_ENC)[mean_features]
    df2 = df2[df2.index.isin(df.index)]
    df3 = pd.read_pickle(LAGS).iloc[:,3:]
    df3 = df3[df3.index.isin(df.index)]
    df = pd.concat([df, df2], axis=1)
    del df2  # to not reach the memory limit
    df = pd.concat([df, df3], axis=1)
    del df3  # to not reach the memory limit
    # Create the features list
    features = [col for col in list(df) if col not in remove_features]
    df = df[['id','d',TARGET]+features]
    # Skip the first n rows
    df = df[df['d']>=START_TRAIN].reset_index(drop=True)
    return df, features
# Recombine Test set after training
def get_base_test():
    base_test = pd.DataFrame()
    for store_id in STORES_IDS:
        temp_df = pd.read_pickle('test_'+store_id+'.pkl')
        temp_df['store_id'] = store_id
        base_test = pd.concat([base_test, temp_df]).reset_index(drop=True)
    return base_test
# -------------------------------------
# def get_base_valid():
# base_test = pd.DataFrame()
# for store_id in STORES_IDS:
# temp_df = pd.read_pickle('valid_'+store_id+'.pkl')
# temp_df['store_id'] = store_id
# base_test = pd.concat([base_test, temp_df]).reset_index(drop=True)
# return base_test
# -------------------------------------
########################### Helper to make dynamic rolling lags
#################################################################################
def make_lag(LAG_DAY):
    lag_df = base_test[['id','d',TARGET]]
    col_name = 'sales_lag_'+str(LAG_DAY)
    lag_df[col_name] = lag_df.groupby(['id'])[TARGET].transform(lambda x: x.shift(LAG_DAY)).astype(np.float16)
    return lag_df[[col_name]]

def make_lag_roll(LAG_DAY):
    shift_day = LAG_DAY[0]
    roll_wind = LAG_DAY[1]
    lag_df = base_test[['id','d',TARGET]]
    col_name = 'rolling_mean_tmp_'+str(shift_day)+'_'+str(roll_wind)
    lag_df[col_name] = lag_df.groupby(['id'])[TARGET].transform(lambda x: x.shift(shift_day).rolling(roll_wind).mean())
    return lag_df[[col_name]]
########################### Model params
#################################################################################
import lightgbm as lgb
lgb_params = {
'boosting_type': 'gbdt',
'objective': 'tweedie',
'tweedie_variance_power': 1.1,
'metric': 'rmse',
'subsample': 0.5,
'subsample_freq': 1,
'learning_rate': 0.05,
'num_leaves': 2**11-1,
'min_data_in_leaf': 2**12-1,
'feature_fraction': 0.5,
'max_bin': 100,
'n_estimators': 1400,
'boost_from_average': False,
'verbose': -1,
'n_jobs':40
}
# Let's look closer on params
## 'boosting_type': 'gbdt'
# we have 'goss' option for faster training
# but it normally leads to underfit.
# Also there is good 'dart' mode
# but it takes forever to train
# and model performance depends
# a lot on random factor
# https://www.kaggle.com/c/home-credit-default-risk/discussion/60921
## 'objective': 'tweedie'
# Tweedie Gradient Boosting for Extremely
# Unbalanced Zero-inflated Data
# https://arxiv.org/pdf/1811.10192.pdf
# and many more articles about tweedie
#
# Strange (for me) but Tweedie is close in results
# to my own ugly loss.
# My advice here - make OWN LOSS function
# https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/140564
# https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/143070
# I think many of you already using it (after poisson kernel appeared)
# (kagglers are very good with "params" testing and tuning).
# Try to figure out why Tweedie works.
# probably it will show you new features options
# or data transformation (Target transformation?).
## 'tweedie_variance_power': 1.1
# default = 1.5
# set this closer to 2 to shift towards a Gamma distribution
# set this closer to 1 to shift towards a Poisson distribution
# my CV shows 1.1 is optimal
# but you can make your own choice
## 'metric': 'rmse'
# Doesn't mean anything to us
# as competition metric is different
# and we don't use early stoppings here.
# So rmse serves just for general
# model performance overview.
# Also we use "fake" validation set
# (as it makes part of the training set)
# so even general rmse score doesn't mean anything))
# https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/133834
## 'subsample': 0.5
# Serves to fight with overfit
# this will randomly select part of data without resampling
# Chosen by CV (my CV can be wrong!)
# Next kernel will be about CV
##'subsample_freq': 1
# frequency for bagging
# default value - seems ok
## 'learning_rate': 0.05
# Chosen by CV
# Smaller - longer training
# but there is an option to stop
# in "local minimum"
# Bigger - faster training
# but there is a chance to
# not find "global minimum" minimum
## 'num_leaves': 2**11-1
## 'min_data_in_leaf': 2**12-1
# Force model to use more features
# We need it to reduce "recursive"
# error impact.
# Also it leads to overfit
# that's why we use small
# 'max_bin': 100
## l1, l2 regularizations
# https://towardsdatascience.com/l1-and-l2-regularization-methods-ce25e7fc831c
# Good tiny explanation
# l2 can work with bigger num_leaves
# but my CV doesn't show boost
## 'n_estimators': 1400
# CV shows that there should be
# different values for each state/store.
# Current value was chosen
# for general purpose.
# As we don't use any early stopping,
# be careful not to overfit the Public LB.
##'feature_fraction': 0.5
# LightGBM will randomly select
# part of features on each iteration (tree).
# We have maaaany features
# and many of them are "duplicates"
# and many just "noise"
# good values here - 0.5-0.7 (by CV)
## 'boost_from_average': False
# There is some "problem"
# to code boost_from_average for
# custom loss
# 'True' makes training faster
# BUT use it carefully
# https://github.com/microsoft/LightGBM/issues/1514
# not our case but good to know cons
########################### Vars
#################################################################################
VER = 1 # Our model version
SEED = 42 # We want all things
seed_everything(SEED) # to be as deterministic
lgb_params['seed'] = SEED # as possible
N_CORES = psutil.cpu_count() # Available CPU cores
#LIMITS and const
TARGET = 'sales' # Our target
START_TRAIN = 0 # We can skip some rows (Nans/faster training)
END_TRAIN = 1941 # End day of our train set
START_VALID = 1913
P_HORIZON = 28 # Prediction horizon
USE_AUX = False # Use or not pretrained models
#FEATURES to remove
## These features lead to overfit
## or values not present in test set
remove_features = ['id','state_id','store_id',
'date','wm_yr_wk','d',TARGET]
mean_features = ['enc_cat_id_mean','enc_cat_id_std',
'enc_dept_id_mean','enc_dept_id_std',
'enc_item_id_mean','enc_item_id_std']
#PATHS for Features
ORIGINAL = 'data/m5-detrend/'
BASE = 'data/m5-detrend/grid_part_1.pkl'
PRICE = 'data/m5-detrend/grid_part_2.pkl'
CALENDAR = 'data/m5-detrend/grid_part_3.pkl'
LAGS = 'data/m5-detrend/lags_df_28.pkl'
MEAN_ENC = 'data/m5-custom-features/mean_encoding_df.pkl'
# AUX(pretrained) Models paths
AUX_MODELS = 'data/m5-aux-models/'
#STORES ids
STORES_IDS = pd.read_pickle(ORIGINAL+'final_sales_train_evaluation.pkl')['store_id']
STORES_IDS = list(STORES_IDS.unique())
#SPLITS for lags creation
SHIFT_DAY = 28
N_LAGS = 15
LAGS_SPLIT = [col for col in range(SHIFT_DAY,SHIFT_DAY+N_LAGS)]
ROLS_SPLIT = []
for i in [1,7,14]:
    for j in [7,14,30,60]:
        ROLS_SPLIT.append([i,j])
```
### Train and Valid
```
MODEL_PATH = 'models/1914_1941_valid_detrend_d2d/'
for day in range(1,28):
for store_id in STORES_IDS:
print('Train', store_id)
# Get grid for current store
grid_df, features_columns = get_data_by_store(store_id)
grid_df['sales'] = grid_df.groupby('item_id')['sales'].shift(-day)
grid_df = grid_df[grid_df.groupby('item_id').cumcount(ascending=False) > day-1]
grid_df['sales'] = grid_df['sales'].values * grid_df['sell_price'].values
# break
# Masks for
# Train (All data less than 1913)
# "Validation" (Last 28 days - not real validatio set)
# Test (All data greater than 1913 day,
# with some gap for recursive features)
train_mask = grid_df['d']<=END_TRAIN-P_HORIZON
# valid_mask = grid_df['d']>(END_TRAIN-100)
preds_mask = (grid_df['d']<=END_TRAIN)&(grid_df['d']>(END_TRAIN-P_HORIZON-100))
# Apply masks and save lgb dataset as bin
# to reduce memory spikes during dtype conversions
# https://github.com/Microsoft/LightGBM/issues/1032
# "To avoid any conversions, you should always use np.float32"
# or save to bin before starting training
# https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/53773
train_data = lgb.Dataset(grid_df[train_mask][features_columns],
label=grid_df[train_mask][TARGET])
# train_data.save_binary('train_data.bin')
# train_data = lgb.Dataset('train_data.bin')
## valid_data = lgb.Dataset(grid_df[valid_mask][features_columns],
## label=grid_df[valid_mask][TARGET])
# break
# Saving part of the dataset for later predictions
# Removing features that we need to calculate recursively
grid_df = grid_df[preds_mask].reset_index(drop=True)
keep_cols = [col for col in list(grid_df) if '_tmp_' not in col]
grid_df = grid_df[keep_cols]
if day==1:
grid_df.to_pickle(MODEL_PATH+'valid_'+store_id+'.pkl')
del grid_df
# Launch seeder again to make lgb training 100% deterministic
# with each "code line" np.random "evolves"
# so we need (may want) to "reset" it
seed_everything(SEED)
estimator = lgb.train(lgb_params,
train_data,
valid_sets = [train_data],
verbose_eval = 100,
)
# Save model - it's not real '.bin' but a pickle file
# estimator = lgb.Booster(model_file='model.txt')
# can only predict with the best iteration (or the saving iteration)
# pickle.dump gives us more flexibility
# like estimator.predict(TEST, num_iteration=100)
# num_iteration - number of iteration want to predict with,
# NULL or <= 0 means use best iteration
model_name = MODEL_PATH+'lgb_model_'+store_id+'_v'+str(VER)+ '_valid' +'_d_'+ str(day+1) +'.bin'
pickle.dump(estimator, open(model_name, 'wb'))
# Remove temporary files and objects
# to free some hdd space and ram memory
# !rm train_data.bin
del train_data, estimator
gc.collect()
# "Keep" models features for predictions
MODEL_FEATURES = features_columns
features_columns = ['item_id', 'dept_id', 'cat_id', 'release', 'sell_price', 'price_max', 'price_min', 'price_std', 'price_mean', 'price_norm', 'price_nunique', 'item_nunique', 'price_momentum', 'price_momentum_m', 'price_momentum_y', 'event_name_1', 'event_type_1', 'event_name_2', 'event_type_2', 'snap_CA', 'snap_TX', 'snap_WI', 'tm_d', 'tm_w', 'tm_m', 'tm_y', 'tm_wm', 'tm_dw', 'tm_w_end', 'enc_cat_id_mean', 'enc_cat_id_std', 'enc_dept_id_mean', 'enc_dept_id_std', 'enc_item_id_mean', 'enc_item_id_std', 'sales_lag_28', 'sales_lag_29', 'sales_lag_30', 'sales_lag_31', 'sales_lag_32', 'sales_lag_33', 'sales_lag_34', 'sales_lag_35', 'sales_lag_36', 'sales_lag_37', 'sales_lag_38', 'sales_lag_39', 'sales_lag_40', 'sales_lag_41', 'sales_lag_42', 'sales_lag_43', 'sales_lag_44', 'sales_lag_45', 'sales_lag_46', 'sales_lag_47', 'sales_lag_48', 'sales_lag_49', 'sales_lag_50', 'sales_lag_51', 'sales_lag_52', 'sales_lag_53', 'sales_lag_54', 'sales_lag_55', 'rolling_mean_tmp_1_7', 'rolling_mean_tmp_1_14', 'rolling_mean_tmp_1_30', 'rolling_mean_tmp_1_60', 'rolling_mean_tmp_7_7', 'rolling_mean_tmp_7_14', 'rolling_mean_tmp_7_30', 'rolling_mean_tmp_7_60', 'rolling_mean_tmp_14_7', 'rolling_mean_tmp_14_14', 'rolling_mean_tmp_14_30', 'rolling_mean_tmp_14_60']
MODEL_FEATURES = features_columns
MODEL_PATH = 'models/1914_1941_valid_detrend_d2d/'
def get_base_valid():
base_test = pd.DataFrame()
for store_id in STORES_IDS:
temp_df = pd.read_pickle(MODEL_PATH+'valid_'+store_id+'.pkl')
temp_df['store_id'] = store_id
base_test = pd.concat([base_test, temp_df]).reset_index(drop=True)
return base_test
all_preds = pd.DataFrame()
# Join back the Test dataset with
# a small part of the training data
# to make recursive features
base_test = get_base_valid()
base_test = base_test[base_test['d']<=END_TRAIN]
index = base_test[base_test['d']>END_TRAIN-P_HORIZON].index
base_test.loc[index,'sales']=np.NaN
# Timer to measure predictions time
main_time = time.time()
PREDICT_DAY = 2
base_test[base_test['d']==1840]['sales'].sum()
END_TRAIN-P_HORIZON+PREDICT_DAY
start_time = time.time()
grid_df = base_test.copy()
grid_df = pd.concat([grid_df, df_parallelize_run(make_lag_roll, ROLS_SPLIT)], axis=1)
for store_id in STORES_IDS:
# Read all our models and make predictions
# for each day/store pairs
model_path = MODEL_PATH + 'lgb_model_'+store_id+'_v'+str(VER)+'_valid'+'_d_'+str(PREDICT_DAY)+'.bin'
if USE_AUX:
model_path = AUX_MODELS + model_path
estimator = pickle.load(open(model_path, 'rb'))
day_mask = base_test['d']==(END_TRAIN-P_HORIZON+PREDICT_DAY)
store_mask = base_test['store_id']==store_id
mask = (day_mask)&(store_mask)
base_test[TARGET][mask] = estimator.predict(grid_df[mask][MODEL_FEATURES])
# Make good column naming and add
# to all_preds DataFrame
temp_df = base_test[day_mask][['id',TARGET]]
temp_df.columns = ['id','F'+str(PREDICT_DAY)]
if 'id' in list(all_preds):
all_preds = all_preds.merge(temp_df, on=['id'], how='left')
else:
all_preds = temp_df.copy()
print('#'*10, ' %0.2f min round |' % ((time.time() - start_time) / 60),
' %0.2f min total |' % ((time.time() - main_time) / 60),
' %0.2f day sales |' % (temp_df['F'+str(PREDICT_DAY)].sum()))
del temp_df
all_preds
all_preds.to_pickle('revenue_1914_1941_valid_detrend_d2.pkl')
END_TRAIN-P_HORIZON+PREDICT_DAY
########################### Export
#################################################################################
# Reading competition sample submission and
# merging our predictions
# As we have predictions only for "_validation" data
# we need to do fillna() for "_evaluation" items
submission = pd.read_csv(ORIGINAL+'sample_submission.csv')[['id']]
submission = submission.merge(all_preds, on=['id'], how='left').fillna(0)
submission.to_csv('submission_v'+str(VER)+'.csv', index=False)
# Summary
# Of course, there is no magic here at all.
# No "Novel" features and no brilliant ideas.
# We just carefully joined all
# our previous fe work and created a model.
# Also!
# In my opinion this strategy is a "dead end".
# It overfits the LB a lot, and with 1 final submission
# you have no option to take risks.
# Improvement should come from:
# Loss function
# Data representation
# Stable CV
# Good features reduction strategy
# Predictions stabilization with NN
# Trend prediction
# Real zero sales detection/classification
# Good kernels references
## (the order is random and the list is not complete):
# https://www.kaggle.com/ragnar123/simple-lgbm-groupkfold-cv
# https://www.kaggle.com/jpmiller/grouping-items-by-stockout-pattern
# https://www.kaggle.com/headsortails/back-to-predict-the-future-interactive-m5-eda
# https://www.kaggle.com/sibmike/m5-out-of-stock-feature
# https://www.kaggle.com/mayer79/m5-forecast-attack-of-the-data-table
# https://www.kaggle.com/yassinealouini/seq2seq
# https://www.kaggle.com/kailex/m5-forecaster-v2
# https://www.kaggle.com/aerdem4/m5-lofo-importance-on-gpu-via-rapids-xgboost
# Features were created in these kernels:
##
# Mean encodings and PCA options
# https://www.kaggle.com/kyakovlev/m5-custom-features
##
# Lags and rolling lags
# https://www.kaggle.com/kyakovlev/m5-lags-features
##
# Base Grid and base features (calendar/price/etc)
# https://www.kaggle.com/kyakovlev/m5-simple-fe
# Personal request
# Please don't upvote any ensemble and copy-paste kernels
## The worst case is an ensemble without any analysis.
## The best choice - just ignore it.
## I would like to see more kernels with interesting and original approaches.
## Don't feed copypasters with upvotes.
## It doesn't mean that you should not fork and improve others' kernels,
## but I would like to see params and code tuning based on some CV and analysis
## and not only on LB probing.
## Small changes could be shared in comments and authors can improve their kernel.
## Feel free to criticize this kernel as my knowledge is very limited
## and I can be wrong in code and descriptions.
## Thank you.
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from importlib import reload
from deeprank.dataset import DataLoader, PairGenerator, ListGenerator
from deeprank import utils
seed = 1234
torch.manual_seed(seed)
loader = DataLoader('./config/letor07_mp_fold1.model')
import json
letor_config = json.loads(open('./config/letor07_mp_fold1.model').read())
#device = torch.device("cuda")
#device = torch.device("cpu")
select_device = torch.device("cpu")
rank_device = torch.device("cuda")
Letor07Path = letor_config['data_dir']
letor_config['fill_word'] = loader._PAD_
letor_config['embedding'] = loader.embedding
letor_config['feat_size'] = loader.feat_size
letor_config['vocab_size'] = loader.embedding.shape[0]
letor_config['embed_dim'] = loader.embedding.shape[1]
letor_config['pad_value'] = loader._PAD_
pair_gen = PairGenerator(rel_file=Letor07Path + '/relation.train.fold%d.txt'%(letor_config['fold']),
config=letor_config)
from deeprank import select_module
from deeprank import rank_module
letor_config['max_match'] = 20
letor_config['win_size'] = 5
select_net = select_module.QueryCentricNet(config=letor_config, out_device=rank_device)
select_net = select_net.to(select_device)
select_net.train()
'''
letor_config['q_limit'] = 20
letor_config['d_limit'] = 2000
letor_config['max_match'] = 20
letor_config['win_size'] = 5
letor_config['finetune_embed'] = True
letor_config['lr'] = 0.0001
select_net = select_module.PointerNet(config=letor_config)
select_net = select_net.to(device)
select_net.embedding.weight.data.copy_(torch.from_numpy(loader.embedding))
select_net.train()
select_optimizer = optim.RMSprop(select_net.parameters(), lr=letor_config['lr'])
'''
letor_config["dim_q"] = 1
letor_config["dim_d"] = 1
letor_config["dim_weight"] = 1
letor_config["c_reduce"] = [1, 1]
letor_config["k_reduce"] = [1, 50]
letor_config["s_reduce"] = 1
letor_config["p_reduce"] = [0, 0]
letor_config["c_en_conv_out"] = 4
letor_config["k_en_conv"] = 3
letor_config["s_en_conv"] = 1
letor_config["p_en_conv"] = 1
letor_config["en_pool_out"] = [1, 1]
letor_config["en_leaky"] = 0.2
letor_config["dim_gru_hidden"] = 3
letor_config['lr'] = 0.005
letor_config['finetune_embed'] = False
rank_net = rank_module.DeepRankNet(config=letor_config)
rank_net = rank_net.to(rank_device)
rank_net.embedding.weight.data.copy_(torch.from_numpy(loader.embedding))
rank_net.qw_embedding.weight.data.copy_(torch.from_numpy(loader.idf_embedding))
rank_net.train()
rank_optimizer = optim.Adam(rank_net.parameters(), lr=letor_config['lr'])
def to_device(*variables, device):
return (torch.from_numpy(variable).to(device) for variable in variables)
def show_text(x):
print(' '.join([loader.word_dict[w.item()] for w in x]))
X1, X1_len, X1_id, X2, X2_len, X2_id, Y, F = \
pair_gen.get_batch(data1=loader.query_data, data2=loader.doc_data)
X1, X1_len, X2, X2_len, Y, F = \
to_device(X1, X1_len, X2, X2_len, Y, F, device=rank_device)
show_text(X2[0])
X1, X2_new, X1_len, X2_len_new, X2_pos = select_net(X1, X2, X1_len, X2_len, X1_id, X2_id)
show_text(X1[0])
for i in range(5):
print(i, end=' ')
show_text(X2_new[0][i])
print(X2_pos[20].shape)
print(len(X2_pos))
print(len(X2))
print(X2_pos[0])
print(X2_pos[1])
# X1 = X1[:1]
# X1_len = X1_len[:1]
# X2 = X2[:1]
# X2_len = X2_len[:1]
# X1_id = X1_id[:1]
# X2_id = X2_id[:1]
# show_text(X2[0])
# X1, X2_new, X1_len, X2_len_new = select_net(X1, X2, X1_len, X2_len, X1_id, X2_id)
# show_text(X1[0])
# for i in range(5):
# print(i, end=' ')
# show_text(X2_new[0][i])
import time
rank_loss_list = []
start_t = time.time()
for i in range(1000):
# One Step Forward
X1, X1_len, X1_id, X2, X2_len, X2_id, Y, F = \
pair_gen.get_batch(data1=loader.query_data, data2=loader.doc_data)
X1, X1_len, X2, X2_len, Y, F = \
to_device(X1, X1_len, X2, X2_len, Y, F, device=select_device)
X1, X2, X1_len, X2_len, X2_pos = select_net(X1, X2, X1_len, X2_len, X1_id, X2_id)
X2, X2_len = utils.data_adaptor(X2, X2_len, select_net, rank_net, letor_config)
output = rank_net(X1, X2, X1_len, X2_len, X2_pos)
# Update Rank Net
rank_loss = rank_net.pair_loss(output, Y)
print('rank loss:', rank_loss.item())
rank_loss_list.append(rank_loss.item())
rank_optimizer.zero_grad()
rank_loss.backward()
rank_optimizer.step()
end_t = time.time()
print('Time Cost: %s s' % (end_t-start_t))
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure()
plt.plot(rank_loss_list)
plt.show()
torch.save(select_net, "qcentric.model")
torch.save(rank_net, "deeprank.model")
select_net_e = torch.load(f='qcentric.model')
rank_net_e = torch.load(f='deeprank.model')
list_gen = ListGenerator(rel_file=Letor07Path+'/relation.test.fold%d.txt'%(letor_config['fold']),
config=letor_config)
map_v = 0.0
map_c = 0.0
with torch.no_grad():
for X1, X1_len, X1_id, X2, X2_len, X2_id, Y, F in \
list_gen.get_batch(data1=loader.query_data, data2=loader.doc_data):
#print(X1.shape, X2.shape, Y.shape)
X1, X1_len, X2, X2_len, Y, F = to_device(X1, X1_len, X2, X2_len, Y, F, device=select_device)
X1, X2, X1_len, X2_len, X2_pos = select_net_e(X1, X2, X1_len, X2_len, X1_id, X2_id)
X2, X2_len = utils.data_adaptor(X2, X2_len, select_net, rank_net, letor_config)
#print(X1.shape, X2.shape, Y.shape)
pred = rank_net_e(X1, X2, X1_len, X2_len, X2_pos)
map_o = utils.eval_MAP(pred.tolist(), Y.tolist())
#print(pred.shape, Y.shape)
map_v += map_o
map_c += 1.0
map_v /= map_c
print('[Test]', map_v)
```
| github_jupyter |
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
inspector = inspect(engine)
inspector.get_table_names()
```
# Exploratory Precipitation Analysis
```
# Find the most recent date in the data set.
date1 = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
date1
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# Calculate the date one year from the last date in data set.
yearago = dt.date(2017, 8, 23) - dt.timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
qryresults = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= yearago).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(qryresults, columns=['date', 'precipitation'])
# Sort the dataframe by date
df = df.sort_values("date")
# Use Pandas Plotting with Matplotlib to plot the data
df.plot(x='date', y='precipitation', rot=90)
plt.xlabel("Date")
plt.ylabel("Inches")
yearago
# Use Pandas to calculate the summary statistics for the precipitation data
df.describe()
```
# Exploratory Station Analysis
```
# Design a query to calculate the total number of stations in the dataset
session.query(func.count(Station.station)).all()
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).all()
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).\
filter(Measurement.station == 'USC00519281').all()
# Using the most active station id "USC00519281"
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
histogramdata = session.query(Measurement.tobs).\
filter(Measurement.station == 'USC00519281').\
filter(Measurement.date >= yearago).all()
df = pd.DataFrame(histogramdata, columns=['tobs'])
df.plot.hist(bins=12)
plt.tight_layout()
plt.xlabel("Temperature")
```
# Close session
```
# Close Session
session.close()
```
| github_jupyter |
# The *Jinja* template language.
## *Jinja* tags.
*Jinja* executes the expressions and statements enclosed between curly braces "```{```" "```}```".
### Statements.
Statements must be enclosed between percent signs "```%```".
**Syntax:**
```
{% <statement> %}
```
### Expressions.
Expressions must be enclosed between double curly braces "```{{```" "```}}```".
**Syntax:**
```
{{ <expression> }}
```
### Comments.
Comments must be enclosed between hash signs "```#```".
**Syntax:**
```
{# <comment> #}
```
## Expressions.
### Names, indexes, and attributes.
Since *Jinja* is based on *Python*, its syntax can be used to access the elements and/or attributes of an object passed as a parameter.
**Examples:**
```
texto = "Hola, {{persona['nombre'].upper()}}."
template = jinja2.Template(texto)
template.render(persona={'nombre':'Jose', 'apellido': 'Pérez'})
```
### Filters.
A filter in *Jinja* is a kind of function that modifies the object resulting from an expression.
The various filters Jinja offers can be found at this link:
https://jinja.palletsprojects.com/en/3.0.x/templates/#filters
Several filters can be "chained" onto the input text using *pipes* with the following syntax:
```
{{<expression> | <filter 1> | <filter 2> |... | <filter n>}}
```
This way, the output of one filter is the input of the next.
**Examples:**
The following cells use the ```center``` and ```reverse``` filters separately and then combined.
```
texto = "Hola, {{persona['nombre'].upper() | center(40)}}."
plantilla = jinja2.Template(texto)
plantilla.render(persona={'nombre':'Jose', 'apellido': 'Pérez'})
texto = "Hola, {{persona['nombre'].upper() | reverse}}."
plantilla = jinja2.Template(texto)
plantilla.render(persona={'nombre':'Jose', 'apellido': 'Pérez'})
texto = "Hola, {{persona['nombre'].upper()| center(40)| reverse}}."
plantilla = jinja2.Template(texto)
plantilla.render(persona={'nombre':'Jose', 'apellido': 'Pérez'})
```
## Statements.
A statement corresponds to a block of code that is executed and includes several expressions, with the following syntax.
```
{% <statement> %}
...
<text and expressions>
...
{% end<corresponding statement> %}
```
*Jinja* can use *Python* statements such as:
* ```if```, ```elif``` and ```else```.
* ```for```.
* ```with```.
### Scope of statements.
Names and objects defined inside a statement belong exclusively to the scope of that statement. Only the ```<identifier>=<object>``` pairs passed in the context of the ```render()``` method belong to the global scope.
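For example (a small sketch that is not one of the original cells), a variable bound inside a ```for``` block is not visible after the block ends:
```
texto = "{% for x in datos %}{{ x }} {% endfor %}{{ x | default('x is not defined outside the loop') }}"
plantilla = jinja2.Template(texto)
plantilla.render(datos=[1, 2, 3])
```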
### Conditionals with ```if```.
Jinja 2 allows the use of ```if``` conditionals with the following syntax:
```
{% if <logical expression> %}
<Text and code>
{% endif %}
```
Note that Python's logical operators are the same ones used in the logical expressions of this conditional.
**Example:**
```
texto = "Hola {{persona['nombre']}}.\
{% if persona['socio'] %}\
\nUsted es socio distinguido.\
{% endif %}"
print(texto)
plantilla = jinja2.Template(texto)
resultado = plantilla.render(persona={'nombre':'Jose', 'socio': True})
print(resultado)
plantilla = jinja2.Template(texto)
resultado = plantilla.render(persona={'nombre':'Juan', 'socio': False})
print(resultado)
```
### Using ```if```, ```else``` and ```elif```.
It is also possible to evaluate more than one expression with the following syntax:
```
{% if <logical expression 1> %}
<Text and code>
{% elif <logical expression 2> %}
<Text and code>
...
...
{% elif <logical expression n> %}
<Text and code>
{% else %}
<Text and code>
{% endif %}
```
**Example:**
```
texto = "Hola {{persona['nombre']}}.\n\
{% if persona['status'] == 'socio' %}\
Usted es socio distinguido.\
{% elif persona['status'] == 'cliente' %}\
Usted tiene una cuenta de cliente.\
{% else %}\
Por favor indique si es socio o cliente.\
{% endif %}"
plantilla = jinja2.Template(texto)
resultado = plantilla.render(persona={'nombre':'Jose', 'status': 'socio'})
print(resultado)
plantilla = jinja2.Template(texto)
resultado = plantilla.render(persona={'nombre':'Juan', 'status': 'cliente'})
print(resultado)
plantilla = jinja2.Template(texto)
resultado = plantilla.render(persona={'nombre':'Juan'})
print(resultado)
```
### Additional tests.
*Jinja* has some built-in tests, which can be found at this link:
http://jinja.pocoo.org/docs/latest/templates/#builtin-tests
**Example:**
This case uses the ```even``` and ```odd``` tests.
```
texto = "El número es {{numero}}.\n\
{% if numero is even %}\
Este número es par.\
{% elif numero is odd %}\
Este número es non.\
{% endif %}"
plantilla = jinja2.Template(texto)
resultado = plantilla.render(numero=6)
print(resultado)
```
### Loops with ```for```.
```for``` loops behave exactly as in *Python*, but with the following syntax:
```
{% for <element> in <iterable> %}
{{ <element> }}
{% endfor %}
```
**Example:**
The ```for``` loop is used with a list that in turn contains two-element lists.
```
texto = "Enlaces recomendados:\n\
{%for nombre, liga in dato %}\
\n{{ nombre }}: {{ liga }} \
{% endfor %}"
ligas = [['slashdot', 'https://slashdot.org'],
['pythonista', 'https://pythonista.mx'],
['cloudevel', 'https://cloudevel.com']]
plantilla = jinja2.Template(texto)
resultado = plantilla.render(dato=ligas)
print(resultado)
```
## Macros.
Macros behave similarly to a Python function and are defined with the following syntax:
```
{% macro <name> (<arguments>) %}
<text and code>
{% endmacro %}
```
A macro is invoked as follows:
```
{{ <name>(<parameters>) }}
```
**Example:**
```
texto = '{% macro suma (a, b=2) %}\
La suma es {{a + b}}.\n\
{% endmacro %}\
{{ suma(2)}}\
{{ suma(2, 3) }}'
plantilla = jinja2.Template(texto)
resultado = plantilla.render()
print(resultado)
```
## Importing macros.
A macro can be imported from another template with the following syntax:
```
{% from <file path as str> import <macro name> %}
```
**Example:**
The file [*plantillas/sumadora.txt*](plantillas/sumadora.txt) contains the following template:
```
{% macro suma (a, b=2) %}
La suma es {{a + b}}.
{% endmacro %}
```
The file [*plantillas/importadora.txt*](plantillas/importadora.txt) contains the following template:
```
{% from "sumadora.txt" import suma %}\
{{ suma(3, 4) }}
```
```
plantilla = entorno.get_template("importadora.txt")
print(plantilla.render())
```
## Template inheritance.
Jinja 2 can build on existing templates, which can be modified using the concept of blocks.
### Blocks.
Blocks are named tags that are defined with the following syntax:
```
{% block <name> %}
...
...
{% endblock %}
```
Blocks can be nested.
### Inheritance with _extends_.
A new template can be created from an existing one with the following syntax:
```
{% extends '<source template path>' %}
```
This brings in the full content of the source template; blocks can then be overridden simply by redefining them.
**Example:**
The file [*plantillas/plantilla_base.html*](plantillas/plantilla_base.html) contains the following code.
``` html
<!DOCTYPE html>
<html>
<head>
{% block head %}
<link rel="stylesheet" href="style.css" />
<title>Bienvenidos a {% block title%}Pythonista{% endblock %}</title>
{% endblock %}
</head>
<body>
<div id="content">{% block content %}Hola, Bienvenidos.{% endblock %}</div>
<div id="footer">
{% block footer %}
© Copyright 2018 <a href="https://pythonista.io/">Pythonista®.</a>.
{% endblock %}
</div>
</body>
```
```
plantilla = entorno.get_template("plantilla_base.html")
print(plantilla.render())
```
The file [*plantillas/plantilla_hija.html*](plantillas/plantilla_hija.html) contains the following code, which inherits the code from the *plantilla_base.html* file.
``` html
{% extends "plantilla_base.html" %}
{% block title %} Cloudevel {%endblock %}
{% block footer %}
© Copyright 2018 <a href="https://cloudevel.com/">Cloudevel.</a>.
{% endblock %}
```
```
plantilla = entorno.get_template("plantilla_hija.html")
print(plantilla.render())
```
### The *super()* function.
This Jinja 2 function is similar to Python's super(); it brings in the content of the original block so it can be reused in the new block.
**Example:**
The file [*plantillas/plantilla_superpuesta.html*](plantillas/plantilla_superpuesta.html) contains the following code, which inherits the code from *plantillas/plantilla_base.html* but uses the *super()* function to bring back the content of the block it overrides.
```
{% extends "plantilla_base.html" %}
{% block title %}
Cloudevel, empresa hermana de
{{ super() }}
{%endblock %}
{% block footer %}
© Copyright 2018 <a href="https://cloudevel.com/">Cloudevel.</a>.
{{ super() }}
{% endblock %}
```
**Note:** Make sure the path in the cell below matches the one in the cell above.
```
plantilla = entorno.get_template("plantilla_superpuesta.html")
print(plantilla.render())
```
<p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Licencia Creative Commons" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />Esta obra está bajo una <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Licencia Creative Commons Atribución 4.0 Internacional</a>.</p>
<p style="text-align: center">© José Luis Chiquete Valdivieso. 2021.</p>
| github_jupyter |
```
from IPython.display import Image
```
This is a follow-on from Tutorial 1, where we browsed the Ocean marketplace and downloaded the imagenette dataset. In this tutorial, we will create a model that trains (and overfits) on the small amount of sample data. Once we know that the data interface of the input is compatible with our created model (and that the model can successfully overfit on the sample data), then we can be confident enough to send the model to train on the complete dataset.
Now lets inspect the sample data. The data provider should provide this in the same format as the whole dataset. This helps us as data scientists to write scripts that run on both the sample data and the whole dataset. We call this the **interface** of the data.
```
from pathlib import Path
imagenette_dir = Path('imagenette2-sample')
print(f"Sub-directories: {sorted(list(imagenette_dir.glob('*')))}")
sorted(list(imagenette_dir.glob('*')))
train_dir, val_dir = sorted(list(imagenette_dir.glob('*')))
print(f"Sub-directories in train: {sorted(list(train_dir.glob('*/*')))}")
print(f"Sub-directories in val: {sorted(list(val_dir.glob('*/*')))}")
```
It seems like both the training and validation directories have folders for each category of image that contain the image files. Of course, we could read the dataset docs if this wasn't immediately clear.
```
train_images = sorted(list(train_dir.glob('*/*')))
val_images = sorted(list(val_dir.glob('*/*')))
print(f"Number of train images:", len(train_images))
print(f"Number of val images:", len(val_images))
```
We will use the fast.ai library to train a simple image classifier.
```
from fastai.vision.all import *
```
First we will attempt to train as normal (using both training and validation sets) to ensure that all of the images load without any errors. First we create the dataloaders:
```
path = Path('imagenette2-sample.tgz')
import xtarfile as tarfile
tar = tarfile.open(path, "r:gz")
from PIL import Image
import io
images = []
for member in tar.getmembers():
f = tar.extractfile(member)
if f is not None:
image_data = f.read()
image = Image.open(io.BytesIO(image_data))
images.append(image)
path = Path("imagenette2-sample")
dls = ImageDataLoaders.from_folder(path, train='train', valid='val',
item_tfms=RandomResizedCrop(128, min_scale=0.35), batch_tfms=Normalize.from_stats(*imagenet_stats), bs=2)
```
We can visualise the images in the training set as follows:
```
dls.show_batch()
```
We choose a simple ResNet-34 architecture.
```
learn = cnn_learner(dls, resnet34, metrics=accuracy, pretrained=False)
```
And run training for 8 epochs with a learning rate of 1e-4.
```
learn.fit_one_cycle(8, 1e-4)
```
As you can see, the accuracy is 50%, which is the same as random guessing. We can visualise the results using the following. Note that the results are on the validation images.
```
learn.show_results()
```
The reason for the low accuracy is that the training set is not large enough to generalise to the validation set. Thus, while we have confirmed that both the training images and validation images load correctly, we have not confirmed that our selected model trains properly. To ensure this, we will instead use the training set for validation. This is a very simple case for the model, since it does not have to learn to generalise and can simply memorise the input data. If the model cannot achieve this, there must be some bug in the code. Let's create new dataloaders for this scenario:
```
dls_overfit = ImageDataLoaders.from_folder(imagenette_dir, train='train', valid='train',
item_tfms=RandomResizedCrop(128, min_scale=0.35), batch_tfms=Normalize.from_stats(*imagenet_stats), bs=2)
dls_overfit.show_batch()
learn_overfit = cnn_learner(dls_overfit, resnet34, metrics=accuracy, pretrained=False)
learn_overfit.fit_one_cycle(8, 1e-4)
```
Note that the results are now on the training images.
```
learn_overfit.show_results()
preds, targs = learn_overfit.get_preds()
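# Illustrative sanity check (an added sketch): if the model has memorised the
# sample data, accuracy on these predictions should be close to 1.0
print((preds.argmax(dim=1) == targs).float().mean().item())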
```
| github_jupyter |
```
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
tf.__version__
model = tf.keras.models.load_model("runs/machine_translation/2")
```
https://www.tensorflow.org/beta/tutorials/text/transformer#evaluate
```
tokenizer_pt = tfds.features.text.SubwordTextEncoder.load_from_file(
"subwords/ted_hrlr_translate/pt_to_en/subwords_pt")
tokenizer_en = tfds.features.text.SubwordTextEncoder.load_from_file(
"subwords/ted_hrlr_translate/pt_to_en/subwords_en")
inp_sentence = "este é um problema que temos que resolver."
```
real translation: "this is a problem we have to solve ."
```
inp = tf.expand_dims([tokenizer_pt.vocab_size] + tokenizer_pt.encode(inp_sentence) + [tokenizer_pt.vocab_size + 1], 0)
tar = tf.expand_dims([tokenizer_en.vocab_size], 0)
# Greedy decoding: feed the growing target sequence back into the model,
# take the argmax of the last position, and append it to the target until
# the end token (tokenizer_en.vocab_size + 1) is produced.
for _ in range(13):
    preds, enc_enc_attention, dec_dec_attention, enc_dec_attention = model([inp, tar])
    preds = tf.argmax(preds[:, -1:, :], axis=-1, output_type=tf.int32)
    print(preds.numpy())
    if int(preds[0, 0]) == tokenizer_en.vocab_size + 1:  # end token
        break
    tar = tf.concat([tar, preds], axis=-1)
    print(tokenizer_en.decode(tar[0].numpy()[1:]))
```
`8088` is the end token
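This matches the encoding convention used for `inp` above, where the end token is appended as `vocab_size + 1`:
```
# End token id under the vocab_size / vocab_size + 1 convention used above
print(tokenizer_en.vocab_size + 1)  # expected: 8088
```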
Visualizing only the encoder-decoder attention heads for the final prediction
```
enc_dec_attention_0, enc_dec_attention_1, enc_dec_attention_2, enc_dec_attention_3 = \
enc_dec_attention["layer_0"][0], enc_dec_attention["layer_1"][0], enc_dec_attention["layer_2"][0], enc_dec_attention["layer_3"][0]
xticklabels = ["##START##"] + [tokenizer_pt.decode([v]) for v in inp.numpy()[0][1:-1]] + ["##END##"]
yticklabels = ["##START##"] + [tokenizer_en.decode([v]) for v in tar.numpy()[0][1:]]
# https://matplotlib.org/users/colormaps.html
attention_layers = [enc_dec_attention_0, enc_dec_attention_1, enc_dec_attention_2, enc_dec_attention_3]
cmaps = ["Reds", "spring", "summer", "autumn", "winter", "cool", "Wistia", "Oranges"]
# One heatmap per attention head, for each of the four decoder layers
for layer, attention in enumerate(attention_layers):
    for i, cmap in enumerate(cmaps):
        fig, ax = plt.subplots(1, 1, figsize=(12, 12))
        heatplot = ax.imshow(attention[i].numpy(), cmap=cmap)
        ax.set_xticks(np.arange(11))
        ax.set_yticks(np.arange(13))
        ax.set_xticklabels(xticklabels)
        ax.set_yticklabels(yticklabels)
        plt.colorbar(heatplot)
        plt.title("Layer %d, Attention Head %d" % (layer, i + 1))
```
```
import os
import pandas as pd
from bs4 import BeautifulSoup
import sys
import re
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
ps = PorterStemmer()
print(os.getcwd())
# if necessary change the directory
#os.chdir('c:\\Users\..')
data = pd.read_csv("nightlife_sanfrancisco_en.csv", header=0, delimiter=",")
# explore data set
data.shape
data.columns.values
print(data["text"][0])
# Remove stop words from "words"
import nltk # import stop words
nltk.download('popular') # Download text data sets, including stop words
from nltk.corpus import stopwords # Import the stop word list
print(stopwords.words("english"))
#words = [w for w in words if not w in stopwords.words("english")]
#print(words)
# Clean all records
def text_to_words(raw_text):
    # 1. Remove end-of-line characters
    without_end_line = re.sub('\n', ' ', raw_text)
    # 2. Remove carriage returns
    without_start_line = re.sub('\r', ' ', without_end_line)
    # 3. Remove punctuation
    without_punctuation = re.sub(r'[\W_]+', ' ', without_start_line)
    # 4. Replace numbers by XnumberX
    without_number = re.sub(r'(\d+\s*)+', ' XnumberX ', without_punctuation)
    # 5. Remove non-letters
    letters_only = re.sub("[^a-zA-Z]", " ", without_number)
    # 6. Convert to lower case
    lower_case = letters_only.lower()
    # 7. Split into individual words
    words = lower_case.split()
    # 8. Stemming with the Porter stemmer
    meaningful_words = [ps.stem(word) for word in words]
    # 9. Remove stop words
    # Redundant step, stop words are removed later in the "Creating the bag of words" step
    #stops = set(stopwords.words("english"))
    #meaningful_words = [w for w in words if not w in stops]
    # 10. Join the words back into one string separated by spaces and return the result
    return " ".join(meaningful_words)
    #return meaningful_words
clean_text = text_to_words(data["text"][0])
print(clean_text)
# Get the number of text based on the dataframe column size
num_text = data["text"].size
# Initialize an empty list to hold the clean text
clean_data = []
# Loop over each text; create an index i that goes from 0 to the length
print "Cleaning and parsing the data set text...\n"
clean_data = []
for i in xrange( 0, num_text ):
# If the index is evenly divisible by 1000, print a message
if( (i+1)%1000 == 0 ):
print "Text %d of %d\n" % ( i+1, num_text )
clean_data.append( text_to_words( data["text"][i] )) # in case of error run "pip install -U nltk"
# Compare original and edited text
data['text'][0]
clean_data[0]
print "Creating the bag of words...\n"
from sklearn.feature_extraction.text import CountVectorizer
# Initialize the "CountVectorizer" object, which is scikit-learn's
# bag of words tool.
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = 'english', \
max_features = 5000)
# fit_transform() does two functions: First, it fits the model
# and learns the vocabulary; second, it transforms our training data
# into feature vectors. The input to fit_transform should be a list of
# strings.
train_data_features = vectorizer.fit_transform(clean_data)
# Numpy arrays are easy to work with, so convert the result to an
# array
train_data_features = train_data_features.toarray()
print(train_data_features.shape)
# Take a look at the words in the vocabulary
vocab = vectorizer.get_feature_names()
print(vocab)
import numpy as np
# Sum up the counts of each vocabulary word
dist = np.sum(train_data_features, axis=0)
# For each, print the vocabulary word and the number of times it
# appears in the training set
for tag, count in zip(vocab, dist):
    print(count, tag)
# Using in model, random forest example
print "Training the random forest..."
from sklearn.ensemble import RandomForestClassifier
# Initialize a Random Forest classifier with 100 trees
forest = RandomForestClassifier(n_estimators = 100)
# Fit the forest to the training set, using the bag of words as
# features and the sentiment labels as the response variable
#
# This may take a few minutes to run
forest = forest.fit( train_data_features, data["stars"] )
```
```
import datetime
import time
import functools
import pandas as pd
import numpy as np
import pytz
import nba_py
import nba_py.game
import nba_py.player
import nba_py.team
import pymysql
from sqlalchemy import create_engine
from password import hoop_pwd
pwd = hoop_pwd.password
conn = create_engine('mysql+pymysql://root:%s@118.190.202.87:3306/nba_stats' % pwd)
ustz = pytz.timezone('America/New_York')
us_time = datetime.datetime.now(ustz)
print('New York time: ' + str(us_time.date()) + ' ' + str(us_time.time())[:8])
try:
# read sql table of game header
game_header = pd.read_sql_table('game_header', conn)
length_1 = len(game_header)
print(str(length_1) + ' games loaded.')
# set begin date to the newest date in sql table
begin = datetime.datetime.strptime(game_header.iloc[-1]['GAME_DATE_EST'][:10],
"%Y-%m-%d").date() + datetime.timedelta(days=-2)
except ValueError:
print('no table yet!')
length_1 = 0
# if no table yet, set begin date to 2012-10-29
begin = datetime.date(2012, 10, 29)
# grab game headers of beginning date
game_header = nba_py.Scoreboard(month = begin.month,
day = begin.day,
year = begin.year, league_id = '00', offset = 0).game_header()
# set end date to US yesterday (New York time)
end = us_time.date() + datetime.timedelta(days=-1)
for i in range((end - begin).days + 1):
# grab game headers from begin date to end date
day = begin + datetime.timedelta(days = i)
game_header = game_header.append(nba_py.Scoreboard(month = day.month,
day = day.day,
year = day.year,
league_id = '00',
offset = 0).game_header())
print(str(day) + ' finished! ' + str(datetime.datetime.now().time())[:8])
game_header = game_header[game_header['GAME_STATUS_ID'] == 3]
length_2 = len(game_header)
# drop the duplicate by game id
game_header = game_header.drop_duplicates('GAME_ID')
length_3 = len(game_header)
print(str(length_2 - length_3) + ' duplicates dropped.')
print(str(length_3 - length_1) + ' games added.')
# sort game headers by game id ascending
# game_header = game_header.sort_values('GAME_ID')
# commit new game headers to sql table
game_header.to_sql('game_header', conn, index = False, if_exists = 'replace')
print(str(length_3) + ' game headers commit complete!')
conn = create_engine('mysql+pymysql://root:%s@118.190.202.87:3306/nba_stats' % pwd)
game_stats_logs = pd.DataFrame()
try:
# read sql table of game stats logs id
game_stats_logs_id = pd.read_sql_table('game_stats_logs', conn, columns = ['GAME_ID'])
length_1 = len(game_stats_logs_id)
print(str(length_1) + ' player stats loaded.')
except ValueError:
print('no table yet!')
length_1 = 0
# create table and commit it to sql
game_stats_logs.to_sql('game_stats_logs', conn, index = False, if_exists = 'append')
print('game stats logs initialized!')
# define game types by the head of game id
game_type = {'001': 'pre_season', '002': 'regular_season', '003': 'all_star', '004': 'play_offs'}
# ------method 1------for game id in game headers from the max one in sql table
# for i in game_header[game_header['GAME_ID'] >= game_stats_logs['GAME_ID'].max()]['GAME_ID']:
# ------method 2------for game id in game header but not in game stats logs
for i in game_header['GAME_ID'][game_header['GAME_ID'].isin(game_stats_logs_id['GAME_ID'].drop_duplicates()) == False]:
# get game player stats of i
game_stats = nba_py.game.Boxscore(i).player_stats()
# create home team player stats
home_team_id = int(game_header[game_header['GAME_ID'] == i]['HOME_TEAM_ID'])
home_stats_logs = game_stats[game_stats['TEAM_ID'] == int(home_team_id)].copy()
home_stats_logs['LOCATION'] = 'HOME'
home_stats_logs['AGAINST_TEAM_ID'] = int(game_header[game_header['GAME_ID'] == i]['VISITOR_TEAM_ID'])
# create away team player stats
away_team_id = int(game_header[game_header['GAME_ID'] == i]['VISITOR_TEAM_ID'])
away_stats_logs = game_stats[game_stats['TEAM_ID'] == int(away_team_id)].copy()
away_stats_logs['LOCATION'] = 'AWAY'
away_stats_logs['AGAINST_TEAM_ID'] = int(game_header[game_header['GAME_ID'] == i]['HOME_TEAM_ID'])
# combine home and away team player stats and append to game stats logs
game_stats_logs = game_stats_logs.append(home_stats_logs)
game_stats_logs = game_stats_logs.append(away_stats_logs)
print('game ' + i + ' added! ' + str(datetime.datetime.now().time())[:8])
def min_convert(m):
'''
convert mm:ss to float
'''
try:
if ':' in m:
return float(m[:-3]) + round(float(m[-2:])/60, 2)
else:
return float(m)
except TypeError:
return None
# create float time
game_stats_logs['MINS'] = game_stats_logs['MIN'].apply(min_convert)
# set 0 time player to None
game_stats_logs['MINS'] = game_stats_logs['MINS'].apply(lambda x: None if x == 0 else x)
# add game type
game_stats_logs['GAME_TYPE'] = game_stats_logs['GAME_ID'].apply(lambda x: x[:3]).map(game_type)
# add game date and game sequence
game_stats_logs = game_stats_logs.merge(game_header[['GAME_DATE_EST', 'GAME_SEQUENCE', 'GAME_ID']],
how = 'left', on = 'GAME_ID')
# add new ordered game_id
game_stats_logs['GAME_ID_O'] = game_stats_logs['GAME_ID'].apply(lambda x: x[3:5] + x[:3] + x[-5:])
length_2 = len(game_stats_logs)
# drop duplicate game stats by game id and player id
game_stats_logs = game_stats_logs.drop_duplicates(['GAME_ID', 'PLAYER_ID'])
length_3 = len(game_stats_logs)
print(str(length_2 - length_3) + ' duplicates dropped.')
print(str(length_3) + ' player stats added.')
# commit new game stats logs to sql table
game_stats_logs.to_sql('game_stats_logs', conn, index = False, if_exists = 'append')
print(str(length_3) + ' player stats commit complete!')
```
```
import pandas as pd
data = pd.read_csv('Astronomy_institutes_list - Institute_with_location.csv')
# file is/will be included in github.
data.info()
# Auto-fill longitude and latitude (Not accurate due to language and map source)
from geopy.geocoders import Nominatim
import time
latitude = []
longitude = []
geolocator = Nominatim(user_agent="") #use your agent here. e.g. your e-mail address
for item in data.iterrows():
time.sleep(1) # Avoiding too frequent request
try:
location = geolocator.geocode(item[1][2])
print(item[1][0],(location.latitude, location.longitude))
latitude.append(location.latitude)
longitude.append(location.longitude)
except AttributeError:
print(item[1][0] + ' not found' + '\n' + 'Using campus location...')
try:
location = geolocator.geocode(item[1][3])
print(item[1][0],(location.latitude, location.longitude))
latitude.append(location.latitude)
longitude.append(location.longitude)
except AttributeError:
print(item[1][0] + ' not found' + '\n' + 'Using name location...')
location = geolocator.geocode(item[1][0])
print(item[1][0],(location.latitude, location.longitude))
latitude.append(location.latitude)
longitude.append(location.longitude)
# Append auto-fill coordinates
data['latitude'] = latitude
data['longitude'] = longitude
# Check frequent key words in research area
import re
import math
# Return True if the research area entry is missing (NaN is the only value not equal to itself)
def isNaN(string):
return string != string
keywords = []
for index,item in data.iterrows():
if isNaN(item['Region']):
continue
for region in re.findall(r"[\w']+", item['Region']):
if region.title() in ['The','And','Of']: # Don't count these words
continue
else:
keywords.append(region.title())
wordcount = {}
import collections
for word in keywords:
if word not in wordcount:
wordcount[word] = 1
else:
wordcount[word] += 1
# Print most common word
n_print = int(input("How many most common words to print: "))
print("\nThe {} most common words are as follows\n".format(n_print))
word_counter = collections.Counter(wordcount)
for word, count in word_counter.most_common(n_print):
print(word, ": ", count)
# Create a data frame of the most common words
# Draw a bar chart
#set up plot configuration
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.rc('text', usetex=True) # Latex support
plt.rc('font', family='serif')
plt.rc('lines', linewidth=0.5) # Linewidth of data
plt.rc('savefig', dpi=300)
fig = plt.figure()
fig.set_size_inches(23.2,11)
#plot
lst = word_counter.most_common(n_print)
df = pd.DataFrame(lst, columns = ['Word', 'Count'])
plt.bar(df.Word,df.Count)
plt.xticks(rotation=30)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.show()
# Creating institute map
import folium
from folium import plugins
from folium.plugins import MarkerCluster
import re
center = [23, 121] # Initial center coordinates
map_institute = folium.Map(location=center, zoom_start=7 , tiles=None) # zoom_start is the initial zoom factor
# Base map list from http://leaflet-extras.github.io/leaflet-providers/preview/
folium.TileLayer(tiles='https://server.arcgisonline.com/ArcGIS/rest/services/World_Topo_Map/MapServer/tile/{z}/{y}/{x}',
attr = 'Tiles © Esri — Esri, DeLorme, NAVTEQ, TomTom, Intermap, iPC, USGS, FAO, NPS, NRCAN, GeoBase, Kadaster NL, Ordnance Survey, Esri Japan, METI, Esri China (Hong Kong), and the GIS User Community'
, name='Import Tiles').add_to(map_institute)
# Create map with group marker
mcg = folium.plugins.MarkerCluster(control=False)
map_institute.add_child(mcg)
# Show all institutes at first
touts = folium.plugins.FeatureGroupSubGroup(mcg, "All",show=True)
map_institute.add_child(touts)
# Creating individual marker(institutes)
for index, institute in data.iterrows():
location = [institute['latitude'], institute['longitude']]
html = ('<a href='+str(institute['URL'])+ ' target="_blank">'+str(institute['Name'])+'</a>'+'<br>'+
'<a href='+str(institute['Job_opportunities'])+ ' target="_blank">'+'Job Opportunity '+'</a>'+'<br>'
+'<p>'+'Research area: '+'<br>'+str(institute['Region']).title().replace(',','<br>')+'</p>')
# Set up the window size
iframe = folium.IFrame(html,
width=2200,
height=120)
popup = folium.Popup(iframe,
max_width=300)
if str(institute['Independency']) == 'Yes':
folium.Marker(location,tooltip=str(institute['Name'])+'<br>'+'Independency: '+'✅'
,popup = popup, icon=folium.Icon(icon='university', prefix='fa')
).add_to(touts) #icon list from https://fontawesome.com/icons?d=gallery
elif str(institute['Independency']) == 'No':
folium.Marker(location,tooltip=str(institute['Name'])+'<br>'+'Independency: '+'❎'
,popup = popup, icon=folium.Icon(icon='university', prefix='fa')
).add_to(touts)
else:
folium.Marker(location,tooltip=str(institute['Name'])+'<br>'+'Independency: '+'▢'
,popup = popup, icon=folium.Icon(icon='university', prefix='fa')
).add_to(touts)
# Check if research area is empty in the list
def isNaN(string):
return string != string
# Creating different research area groups in the map
def catalogs(keyword, catalogue): # keyword: list of words to match in 'Region'; catalogue: the map subgroup for that research area
for index, item in data.iterrows():
if isNaN(item['Region']):
continue
else:
keys = re.findall(r"[\w']+", item['Region'])
keys = [word.title() for word in keys]
location = [item['latitude'], item['longitude']]
html = ('<a href='+str(item['URL'])+ ' target="_blank">' +str(item['Name'])+'</a>'+'<br>'+
'<a href='+str(item['Job_opportunities'])+ ' target="_blank">'+'Job Opportunity '+'</a>'+'<br>'
+'<p>'+'Research area: '+'<br>'+str(item['Region'].title()).replace(',','<br>')+'</p>')
iframe = folium.IFrame(html,
width=2200,
height=170)
popup = folium.Popup(iframe,
max_width=300)
if any(set(keyword)&set(keys)):
if str(item['Independency']) == 'Yes':
folium.Marker(location,tooltip=str(item['Name'])+'<br>'+'Independency: '+'✅'
,popup = popup, icon=folium.Icon(icon='university', prefix='fa')
).add_to(catalogue) #icon list from https://fontawesome.com/icons?d=gallery
elif str(item['Independency']) == 'No':
folium.Marker(location,tooltip=str(item['Name'])+'<br>'+'Independency: '+'❎'
,popup = popup, icon=folium.Icon(icon='university', prefix='fa')
).add_to(catalogue)
else:
folium.Marker(location,tooltip=str(item['Name'])+'<br>'+'Independency: '+'▢'
,popup = popup, icon=folium.Icon(icon='university', prefix='fa')
).add_to(catalogue)
# Adding/deleting groups here
instr = folium.plugins.FeatureGroupSubGroup(mcg, "Instrumentation",show=False)
map_institute.add_child(instr)
catalogs(['Instrumentation'],instr)
transient = folium.plugins.FeatureGroupSubGroup(mcg, "Transients",show=False)
map_institute.add_child(transient)
catalogs(['Transient','Supernova','Supernovae','Flare'],transient)
GW = folium.plugins.FeatureGroupSubGroup(mcg, "Gravitational waves",show=False)
map_institute.add_child(GW)
catalogs(['Gravitational','Gw'],GW)
planetary = folium.plugins.FeatureGroupSubGroup(mcg, "Planetary science",show=False)
map_institute.add_child(planetary)
catalogs(['Planetary','Planet','Exoplanet'],planetary)
solar = folium.plugins.FeatureGroupSubGroup(mcg, "Solar system",show=False)
map_institute.add_child(solar)
catalogs(['Solar','Planet','Jupiter','Sun','Earth'],solar)
folium.LayerControl(collapsed=False).add_to(map_institute) # collapsed controls whether the layer menu starts folded or unfolded
#display the map
map_institute
# Save the html file
map_institute.save("map.html")
map_institute.save("index.html")
```
# Tutorial Part 2: Learning MNIST Digit Classifiers
In the previous tutorial, we learned some basics of how to load data into DeepChem and how to use the basic DeepChem objects to load and manipulate this data. In this tutorial, you'll put the parts together and learn how to train a basic image classification model in DeepChem. You might ask, why are we bothering to learn this material in DeepChem? Part of the reason is that image processing is an increasingly important part of AI for the life sciences. So learning how to train image processing models will be very useful for using some of the more advanced DeepChem features.
The MNIST dataset contains handwritten digits along with their human annotated labels. The learning challenge for this dataset is to train a model that maps the digit image to its true label. MNIST has been a standard benchmark for machine learning for decades at this point.

## Colab
This tutorial and the rest in this sequence are designed to be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.
[](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb)
## Setup
We recommend running this tutorial on Google Colab. You'll need to run the following cell of installation commands on Colab to get your environment set up. If you'd rather run the tutorial locally, make sure you don't run these commands (since they'll download and install a new Anaconda Python setup).
```
%%capture
%tensorflow_version 1.x
!wget -c https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!chmod +x Miniconda3-latest-Linux-x86_64.sh
!bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
!conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
from tensorflow.examples.tutorials.mnist import input_data
# TODO: This is deprecated. Let's replace with a DeepChem native loader for maintainability.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import deepchem as dc
import tensorflow as tf
from tensorflow.keras.layers import Reshape, Conv2D, Flatten, Dense, Softmax
train = dc.data.NumpyDataset(mnist.train.images, mnist.train.labels)
valid = dc.data.NumpyDataset(mnist.validation.images, mnist.validation.labels)
keras_model = tf.keras.Sequential([
Reshape((28, 28, 1)),
Conv2D(filters=32, kernel_size=5, activation=tf.nn.relu),
Conv2D(filters=64, kernel_size=5, activation=tf.nn.relu),
Flatten(),
Dense(1024, activation=tf.nn.relu),
Dense(10),
Softmax()
])
model = dc.models.KerasModel(keras_model, dc.models.losses.CategoricalCrossEntropy())
model.fit(train, nb_epoch=2)
from sklearn.metrics import roc_curve, auc
import numpy as np
print("Validation")
prediction = np.squeeze(model.predict_on_batch(valid.X))
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(10):
fpr[i], tpr[i], thresh = roc_curve(valid.y[:, i], prediction[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
print("class %s:auc=%s" % (i, roc_auc[i]))
```
# Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
# QST CGAN with thermal noise in the channel (convolution)
```
import numpy as np
from qutip import Qobj, fidelity
from qutip.wigner import qfunc
from qutip.states import thermal_dm
from qutip import coherent_dm
from qutip.visualization import plot_wigner_fock_distribution
import tensorflow_addons as tfa
import tensorflow as tf
from qst_nn.ops import (cat, binomial, num, gkp, GaussianConv, husimi_ops, convert_to_real_ops, dm_to_tf, batched_expect)
from qst_cgan.gan import DensityMatrix, Expectation, Discriminator, generator_loss, discriminator_loss
from qst_cgan.ops import convert_to_complex_ops, tf_fidelity
from tqdm.auto import tqdm
from dataclasses import dataclass
import matplotlib.pyplot as plt
tf.keras.backend.set_floatx('float64') # Set float64 as the default
# https://scipy-cookbook.readthedocs.io/items/Matplotlib_LaTeX_Examples.html
fig_width_pt = 246.0 # Get this from LaTeX using \showthe\columnwidth
inches_per_pt = 1.0/72.27 # Convert pt to inch
golden_mean = (np.sqrt(5)-1.0)/2.0 # Aesthetic ratio
fig_width = fig_width_pt*inches_per_pt # width in inches
fig_height = fig_width*golden_mean # height in inches
fig_size = [fig_width,fig_height]
params = {
'axes.labelsize': 9,
'font.size': 9,
'legend.fontsize': 9,
'xtick.labelsize': 8,
'ytick.labelsize': 8,
'figure.figsize': fig_size,
'axes.labelpad':1,
'legend.handlelength':0.8,
'axes.titlesize': 9,
"text.usetex" : False
}
plt.rcParams.update(params)
# mpl.use('pdf')
```
# We create the state and the data using QuTiP
```
hilbert_size = 32
# Betas can be selected in a grid or randomly in a circle
num_grid = 64
num_points = num_grid*num_grid
beta_max_x = 5
beta_max_y = 5
xvec = np.linspace(-beta_max_x, beta_max_x, num_grid)
yvec = np.linspace(-beta_max_y, beta_max_y, num_grid)
X, Y = np.meshgrid(xvec, yvec)
betas = (X + 1j*Y).ravel()
```
# Measurement ops are simple projectors $\frac{1}{\pi}|\beta \rangle \langle \beta|$
```
m_ops = [(1/np.pi)*coherent_dm(hilbert_size, beta) for beta in betas]
ops_numpy = [op.data.toarray() for op in m_ops] # convert the QuTiP Qobj to numpy arrays
ops_tf = tf.convert_to_tensor([ops_numpy]) # convert the numpy arrays to complex TensorFlow tensors
A = convert_to_real_ops(ops_tf) # convert the complex-valued numpy matrices to real-valued TensorFlow tensors
print(A.shape, A.dtype)
```
# Convolution noise
The presence of thermal photons in the amplification channel leads to the data being
corrupted by a convolution over the Q function data (see [https://arxiv.org/abs/1206.3405](https://arxiv.org/abs/1206.3405)).
The kernel for this convolution is a Gaussian determined by the average photon number of the thermal state. We corrupt our data assuming a thermal state with mean photon number 5.
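Concretely, this matches the `gaus2d` helper defined below: the kernel is a normalized 2D Gaussian whose width is set by the mean thermal photon number $n_0$,

$$K(x, y) = \frac{1}{\pi n_0} \, e^{-(x^2 + y^2)/n_0}.$$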
```
# define normalized 2D gaussian
def gaus2d(x=0, y=0, n0=1):
return 1. / (np.pi * n0) * np.exp(-((x**2 + y**2.0)/n0))
nth = 5
X, Y = np.meshgrid(xvec, yvec) # get 2D variables instead of 1D
gauss_kernel = gaus2d(X, Y, n0=nth)
```
# State to reconstruct
Let us now create a state on which we will run QST
```
rho, _ = cat(hilbert_size, 2, 0, 0)
plot_wigner_fock_distribution(rho)
plt.show()
rho_tf = dm_to_tf([rho])
data = batched_expect(ops_tf, rho_tf)
```
# Q function plots using QuTiP and a custom TensorFlow expectation function
```
fig, ax = plt.subplots(1, 2, figsize=(7, 3))
ax[0].imshow(qfunc(rho, xvec, yvec, g=2))
ax[1].imshow(data.numpy().reshape(num_grid, num_grid))
ax[0].set_title("QuTiP Q func")
ax[1].set_title("TensorFlow computed Q func")
plt.show()
# The thermal state distribution
plot_wigner_fock_distribution(thermal_dm(hilbert_size, nth))
```
# Apply the convolution and show the simulated data that we can obtain experimentally
```
x = tf.reshape(tf.cast(data, tf.float64), (1, num_grid, num_grid, 1))
conved = GaussianConv(gauss_kernel)(x)
kernel = gauss_kernel/tf.reduce_max(gauss_kernel)
diff = conved.numpy().reshape(num_grid, num_grid)/tf.reduce_max(conved) - kernel.numpy().reshape(num_grid, num_grid)
diff = tf.convert_to_tensor(diff)
# Collect all the data in an array for plotting
matrices = [gauss_kernel.reshape((num_grid, num_grid)), x.numpy().reshape((num_grid, num_grid)),
conved.numpy().reshape((num_grid, num_grid)), diff.numpy().reshape((num_grid, num_grid))]
fig, ax = plt.subplots(1, 4, figsize=(fig_width, 0.35*2.5*fig_height), dpi=80, facecolor="white",
sharey=False, sharex=True)
axes = [ax[0], ax[1], ax[2], ax[3]]
aspect = 'equal'
for i in range(4):
im = axes[i].pcolor(xvec, yvec,
matrices[i]/np.max(matrices[i]), cmap="hot", vmin=0, vmax=1)
axes[i].set_aspect("equal")
axes[i].set_xticklabels(["", "", ""])
axes[i].set_yticklabels(["", "", ""], fontsize=6)
# axes[i].set_xlabel(r"$Re(\beta)$", fontsize=6)
axes[0].set_yticklabels(["-5", "", "5"], fontsize=6)
labels = ["Background\n(Gaussian)", "State", "Data\n(Convolution)", "Subtracted"]
for i in range(len(labels)):
axes[i].set_title(labels[i], fontsize=6)
# plt.subplots_adjust(wspace=-.4)
# cbar = fig.colorbar(im, ax=axes, pad=0.026, fraction = 0.046)
# cbar.ax.set_yticklabels(["0", "0.5", "1"])
axes[0].set_ylabel(r"Im$(\beta)$", labelpad=-8, fontsize=6)
######################################################################################################
```
# QST CGAN with a Gaussian convolution layer
```
def GeneratorConvQST(hilbert_size, num_points, noise=0.02, kernel=None):
"""
A tensorflow generative model which can be called as
>> generator([A, x])
where A is the set of all measurement operators
transformed into the shape (batch_size, hilbert_size, hilbert_size, num_points*2)
This can be done using the function `convert_to_real_ops` which
takes a set of complex operators shaped as (batch_size, num_points, hilbert_size, hilbert_size)
and converts it to this format which is easier to run convolution operations on.
x is the measurement statistics (frequencies) represented by a vector of shape
[batch_size, num_points] where we consider num_points different operators and their
expectation values.
Args:
hilbert_size (int): Hilbert size of the output matrix
This needs to be 32 now. We can adjust
the network architecture to allow it to
automatically change its outputs according
to the hilbert size in future
num_points (int): Number of different measurement operators
Returns:
generator: A TensorFlow model callable as
>> generator([A, x])
"""
initializer = tf.random_normal_initializer(0., 0.02)
n = int(hilbert_size/2)
ops = tf.keras.layers.Input(shape=[hilbert_size, hilbert_size, num_points*2],
name='operators')
inputs = tf.keras.Input(shape=(num_points), name = "inputs")
x = tf.keras.layers.Dense(16*16*2, use_bias=False,
kernel_initializer = tf.random_normal_initializer(0., 0.02),
)(inputs)
x = tf.keras.layers.LeakyReLU()(x)
x = tf.keras.layers.Reshape((16, 16, 2))(x)
x = tf.keras.layers.Conv2DTranspose(64, 4, use_bias=False,
strides=2,
padding='same',
kernel_initializer=initializer)(x)
x = tfa.layers.InstanceNormalization(axis=3)(x)
x = tf.keras.layers.LeakyReLU()(x)
x = tf.keras.layers.Conv2DTranspose(64, 4, use_bias=False,
strides=1,
padding='same',
kernel_initializer=initializer)(x)
x = tfa.layers.InstanceNormalization(axis=3)(x)
x = tf.keras.layers.LeakyReLU()(x)
x = tf.keras.layers.Conv2DTranspose(32, 4, use_bias=False,
strides=1,
padding='same',
kernel_initializer=initializer)(x)
# x = tfa.layers.InstanceNormalization(axis=3)(x)
# x = tf.keras.layers.LeakyReLU()(x)
# y = tf.keras.layers.Conv2D(8, 5, padding='same')(ops)
# out = x
# x = tf.keras.layers.concatenate([x, y])
x = tf.keras.layers.Conv2DTranspose(2, 4, use_bias=False,
strides=1,
padding='same',
kernel_initializer=initializer)(x)
x = DensityMatrix()(x)
complex_ops = convert_to_complex_ops(ops)
# prefactor = (0.25*g**2/np.pi)
prefactor = 1.
x = Expectation()(complex_ops, x, prefactor)
x = tf.keras.layers.Reshape((num_grid, num_grid, 1))(x)
x = GaussianConv(kernel, trainable=False)(x)
# x = x/tf.reduce_max(x)
x = tf.keras.layers.Reshape((num_points,))(x)
# y = kernel/tf.reduce_max(kernel)
# y = tf.reshape(y, (1, num_points))
# x = x - y
return tf.keras.Model(inputs=[ops, inputs], outputs=x)
tf.keras.backend.clear_session()
generator = GeneratorConvQST(hilbert_size, num_points, kernel=gauss_kernel)
discriminator = Discriminator(hilbert_size, num_points)
density_layer_idx = None
for i, layer in enumerate(generator.layers):
if "density_matrix" in layer._name:
density_layer_idx = i
break
print(density_layer_idx)
model_dm = tf.keras.Model(inputs=generator.input, outputs=generator.layers[density_layer_idx].output)
@dataclass
class LossHistory:
"""Class for keeping track of loss"""
generator: list
discriminator: list
l1: list
loss = LossHistory([], [], [])
fidelities = []
initial_learning_rate = 0.0002
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(initial_learning_rate,
decay_steps=10000,
decay_rate=.96,
staircase=False)
lam = 10.
generator_optimizer = tf.keras.optimizers.Adam(lr_schedule, 0.5, 0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(lr_schedule, 0.5, 0.5)
def train_step(A, x):
"""Takes one step of training for the full A matrix representing the
measurement operators and data x.
Note that the `generator`, `discriminator`, `generator_optimizer` and the
`discriminator_optimizer` has to be defined before calling this function.
Args:
A (tf.Tensor): A tensor of shape (m, hilbert_size, hilbert_size, n x 2)
where m=1 for a single reconstruction, and n represents
the number of measured operators. We split the complex
operators as real and imaginary in the last axis. The
helper function `convert_to_real_ops` can be used to
generate the matrix A with a set of complex operators
given by `ops` with shape (1, n, hilbert_size, hilbert_size)
by calling `A = convert_to_real_ops(ops)`.
x (tf.Tensor): A tensor of shape (m, n) with m=1 for a single
reconstruction and `n` representing the number of
measurements.
"""
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
gen_output = generator([A, x], training=True)
disc_real_output = discriminator([A, x, x], training=True)
disc_generated_output = discriminator([A, x, gen_output], training=True)
gen_total_loss, gen_gan_loss, gen_l1_loss = generator_loss(
disc_generated_output, gen_output, x, lam=lam
)
disc_loss = discriminator_loss(disc_real_output, disc_generated_output)
generator_gradients = gen_tape.gradient(
gen_total_loss, generator.trainable_variables
)
discriminator_gradients = disc_tape.gradient(
disc_loss, discriminator.trainable_variables
)
generator_optimizer.apply_gradients(
zip(generator_gradients, generator.trainable_variables)
)
discriminator_optimizer.apply_gradients(
zip(discriminator_gradients, discriminator.trainable_variables)
)
loss.generator.append(gen_gan_loss)
loss.l1.append(gen_l1_loss)
loss.discriminator.append(disc_loss)
max_iterations = 300
pbar = tqdm(range(max_iterations))
for i in pbar:
train_step(A, conved.numpy().reshape(-1, num_points))
density_matrix = model_dm([A, conved.numpy().reshape(-1, num_points)])
rho_reconstructed = Qobj(density_matrix.numpy().reshape(rho.shape))
f = fidelity(rho_reconstructed, rho)
fidelities.append(f)
pbar.set_description("Fidelity {} | Gen loss {} | L1 loss {} | Disc loss {}".format(f, loss.generator[-1], loss.l1[-1], loss.discriminator[-1]))
rho_reconstructed = Qobj(density_matrix.numpy().reshape(rho.shape))
fig, ax = plot_wigner_fock_distribution(rho_reconstructed, alpha_max=beta_max_x, colorbar=True, figsize=(9, 3.5))
plt.title("Fidelity {:.4}".format(fidelity(rho_reconstructed, rho)))
plt.suptitle("QST CGAN reconstruction")
plt.show()
rho_tf_reconstructed = dm_to_tf([rho_reconstructed])
data_reconstructed = batched_expect(ops_tf, rho_tf_reconstructed)
reconstructed_x = tf.reshape(tf.cast(data_reconstructed, tf.float64), (1, num_grid, num_grid, 1))
reconstructed_conved = GaussianConv(gauss_kernel)(reconstructed_x)
diff2 = reconstructed_conved.numpy().reshape(num_grid, num_grid)/tf.reduce_max(reconstructed_conved) - kernel.numpy().reshape(num_grid, num_grid)
matrices2 = [gauss_kernel.reshape((num_grid, num_grid)), reconstructed_x.numpy().reshape((num_grid, num_grid)),
reconstructed_conved.numpy().reshape((num_grid, num_grid)), diff2.numpy().reshape((num_grid, num_grid))]
figpath = "figures/"
fig, ax = plt.subplots(2, 4, figsize=(fig_width, 0.35*2.5*fig_height), dpi=80, facecolor="white",
sharey=False, sharex=True)
axes = [ax[0, 0], ax[0, 1], ax[0, 2], ax[0, 3]]
aspect = 'equal'
for i in range(4):
im = axes[i].pcolor(xvec, yvec,
matrices[i]/np.max(matrices[i]), cmap="hot", vmin=0, vmax=1)
axes[i].set_aspect("equal")
axes[i].set_xticklabels(["", "", ""])
axes[i].set_yticklabels(["", "", ""], fontsize=6)
# axes[i].set_xlabel(r"$Re(\beta)$", fontsize=6)
axes[0].set_yticklabels(["-5", "", "5"], fontsize=6)
labels = ["Background\n(Gaussian)", "State", "Data\n(Convolution)", "Subtracted"]
for i in range(len(labels)):
axes[i].set_title(labels[i], fontsize=6)
# plt.subplots_adjust(wspace=-.4)
# cbar = fig.colorbar(im, ax=axes, pad=0.026, fraction = 0.046)
# cbar.ax.set_yticklabels(["0", "0.5", "1"])
axes[0].set_ylabel(r"Im$(\beta)$", labelpad=-8, fontsize=6)
plt.text(x = -24.5, y=30, s="cat state", fontsize=8)
######################################################################################################
axes = [ax[1, 0], ax[1, 1], ax[1, 2], ax[1, 3]]
for i in range(1, 4):
axes[i].pcolor(xvec, yvec,
matrices2[i]/np.max(matrices2[i]), cmap="hot", vmin=0, vmax=1)
axes[i].set_aspect("equal")
axes[i].set_xticklabels(["-5", "", "5"], fontsize=6)
axes[i].set_yticklabels(["", "", ""])
axes[i].set_xlabel(r"Re$(\beta)$", fontsize=6, labelpad=-4)
labels = ["Background\n(Gaussian)", "Reconstructed\nState", r"$Convoluted\noutput$"+"\noutput", "Subtracted"]
# for i in range(1, len(labels)):
# axes[i].set_title(labels[i], fontsize=6)
plt.subplots_adjust(hspace=0.7)
# cbar = fig.colorbar(im, ax=axes, pad=0.026, fraction = 0.046)
# cbar.ax.set_yticklabels(["0", "0.5", "1"])
plt.suptitle("QST-CGAN reconstruction", x=.45, y=.52, fontsize=8)
axes[1].set_ylabel(r"$Im(\beta)$", labelpad=-8, fontsize=6)
axes[1].set_yticklabels(["-5", "", "5"], fontsize=6)
axes[1].set_yticklabels(["-5", "", "5"])
axes[0].set_visible(False)
cbar = plt.colorbar(im, ax=ax.ravel().tolist(), aspect=40, ticks=[0, 0.5, 1], pad=0.02)
cbar.set_ticklabels(["0", "0.5", "1"])
cbar.ax.tick_params(labelsize=6)
# plt.text(x = -44.5, y=30, s="(a)", fontsize=8)
# plt.savefig(figpath+"fig-15a-fock-reconstruction.pdf", bbox_inches="tight", pad_inches=0)
```
# T008 · Protein data acquisition: Protein Data Bank (PDB)
Authors:
- Anja Georgi, CADD seminar, 2017, Charité/FU Berlin
- Majid Vafadar, CADD seminar, 2018, Charité/FU Berlin
- Jaime Rodríguez-Guerra, Volkamer lab, Charité
- Dominique Sydow, Volkamer lab, Charité
__Talktorial T008__: This talktorial is part of the TeachOpenCADD pipeline described in the first TeachOpenCADD publication ([_J. Cheminform._ (2019), **11**, 1-7](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0351-x)), comprising talktorials T001-T010.
## Aim of this talktorial
In this talktorial, we conduct the groundwork for the next talktorial where we will generate a ligand-based ensemble pharmacophore for EGFR. Therefore, we
(i) fetch all PDB IDs for EGFR from the PDB database,
(ii) retrieve five protein-ligand structures, which have the best structural quality and are derived from X-ray crystallography, and
(iii) align all structures to each other in 3D, and extract and save the ligands to be used in the next talktorial.
### Contents in Theory
* Protein Data Bank (PDB)
* Python package `pypdb`
### Contents in Practical
* Select query protein
* Get all PDB IDs for query protein
* Get statistic on PDB entries for query protein
* Get meta information on PDB entries
* Filter and sort meta information on PDB entries
* Get meta information of ligands from top structures
* Draw top ligand molecules
* Create protein-ligand ID pairs
* Get the PDB structure files
* Align PDB structures
### References
* Protein Data Bank
([PDB website](http://www.rcsb.org/))
* `pypdb` python package
([_Bioinformatics_ (2016), **1**, 159-60](https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btv543), [documentation](http://www.wgilpin.com/pypdb_docs/html/))
* Molecular superposition with the python package `opencadd` ([repository](https://github.com/volkamerlab/opencadd))
## Theory
### Protein Data Bank (PDB)
The Protein Data Bank (PDB) is one of the most comprehensive structural biology information databases and a key resource in areas of structural biology, such as structural genomics and drug design ([PDB website](http://www.rcsb.org/)).
Structural data is generated from structural determination methods such as X-ray crystallography (most common method), nuclear magnetic resonance (NMR), and cryo electron microscopy (cryo-EM).
For each entry, the database contains (i) the 3D coordinates of the atoms and the bonds connecting these atoms for proteins, ligand, cofactors, water molecules, and ions, as well as (ii) meta information on the structural data such as the PDB ID, the authors, the deposition date, the structural determination method used and the structural resolution.
The structural resolution is a measure of the quality of the data that has been collected and has the unit Å (Angstrom). The lower the value, the higher the quality of the structure.
The PDB website offers a 3D visualization of the protein structures (with ligand interactions if available) and structure quality metrics, as can be seen for the PDB entry of an example epidermal growth factor receptor (EGFR) with the PDB ID [3UG5](https://www.rcsb.org/structure/3UG5).

Figure 1: The protein structure (in gray) with an interacting ligand (in green) is shown for an example epidermal growth factor receptor (EGFR) with the PDB ID 3UG5 (figure by Dominique Sydow).
### Python package `pypdb`
`pypdb` is a python programming interface for the PDB and works exclusively in Python 3 ([_Bioinformatics_ (2016), **1**, 159-60](https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btv543), [documentation](http://www.wgilpin.com/pypdb_docs/html/)).
This package facilitates the integration of automatic PDB searches within bioinformatics workflows and simplifies the process of performing multiple searches based on the results of existing searches.
It also allows an advanced querying of information on PDB entries.
The PDB currently uses a RESTful API that allows for the retrieval of information via standard HTTP requests. `pypdb` converts the returned objects into XML strings.
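As a minimal illustration (using the same `pypdb` calls applied in the Practical section below), a search for all structures matching a given UniProt ID looks like this:
```
import pypdb

# Query the PDB for all structures matching the EGFR UniProt ID
search_dict = pypdb.make_query("P00533")
found_pdb_ids = pypdb.do_search(search_dict)
print(len(found_pdb_ids), "structures found")
```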
## Practical
```
import collections
import logging
import pathlib
import time
import warnings
import pandas as pd
from tqdm.auto import tqdm
import redo
import requests_cache
import nglview
import pypdb
from rdkit.Chem import Draw
from rdkit.Chem import PandasTools
from opencadd.structure.superposition.api import align, METHODS
from opencadd.structure.core import Structure
# Disable some unneeded warnings
logger = logging.getLogger("opencadd")
logger.setLevel(logging.ERROR)
warnings.filterwarnings("ignore")
# cache requests -- this will speed up repeated queries to PDB
requests_cache.install_cache("rcsb_pdb", backend="memory")
# define paths
HERE = pathlib.Path(_dh[-1])
DATA = HERE / "data"
```
### Select query protein
We use EGFR as query protein for this talktorial. The UniProt ID of EGFR is `P00533`, which will be used in the following to query the PDB database.
### Get all PDB IDs for query protein
First, we get all PDB structures for our query protein EGFR, using the `pypdb` functions `make_query` and `do_search`.
```
search_dict = pypdb.make_query("P00533")
found_pdb_ids = pypdb.do_search(search_dict)
print("Sample PDB IDs found for query:", *found_pdb_ids[:3], "...")
print("Number of EGFR structures found:", len(found_pdb_ids))
```
### Get statistics on PDB entries for query protein
Next, we ask the question: How many PDB entries are deposited in the PDB for EGFR per year and how many in total?
Using `pypdb`, we can find all deposition dates of EGFR structures from the PDB database. The number of deposited structures was already determined and is needed to set the parameter `max_results` of the function `find_dates`.
```
# Query database
dates = pypdb.find_dates("P00533", max_results=len(found_pdb_ids))
# Example of the first three deposition dates
dates[:3]
```
We extract the year from the deposition dates and calculate a depositions-per-year histogram.
```
# Extract year
years = pd.Series([int(date[:4]) for date in dates])
bins = years.max() - years.min() + 1
axes = years.hist(bins=bins)
axes.set_ylabel("New entries per year")
axes.set_xlabel("Year")
axes.set_title("PDB entries for EGFR");
```
### Get meta information for PDB entries
We use `describe_pdb` to get meta information about the structures, which is stored per structure as a dictionary.
Note: we only fetch meta information on PDB structures here, we do not fetch the structures (3D coordinates), yet.
> The `redo.retriable` line is a _decorator_. This wraps the function and provides extra functionality. In this case, it will retry failed queries automatically (10 times maximum).
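For intuition, a hand-rolled retry decorator might look roughly like this (a minimal sketch, not the actual `redo` implementation):
```
import functools
import time

def retriable(attempts=10, sleeptime=2):
    """Minimal sketch of a retry decorator (not the actual redo implementation)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Call the wrapped function, retrying after a pause if it raises
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except ValueError:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(sleeptime)
        return wrapper
    return decorator
```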
```
@redo.retriable(attempts=10, sleeptime=2)
def describe_one_pdb_id(pdb_id):
"""Fetch meta information from PDB."""
described = pypdb.describe_pdb(pdb_id)
if described is None:
print(f"! Error while fetching {pdb_id}, retrying ...")
raise ValueError(f"Could not fetch PDB id {pdb_id}")
return described
pdbs = [describe_one_pdb_id(pdb_id) for pdb_id in found_pdb_ids]
pdbs[0]
```
### Filter and sort meta information on PDB entries
Since we want to use the information to filter for relevant PDB structures, we convert the data set from dictionary to DataFrame for easier handling.
```
pdbs = pd.DataFrame(pdbs)
pdbs.head()
print(f"Number of PDB structures for EGFR: {len(pdbs)}")
```
We start filtering our dataset based on the following criteria:
#### 1. Experimental method: X-ray diffraction
We only keep structures resolved by `X-RAY DIFFRACTION`, the most commonly used structure determination method.
```
pdbs = pdbs[pdbs.expMethod == "X-RAY DIFFRACTION"]
print(f"Number of PDB structures for EGFR from X-ray: {len(pdbs)}")
```
#### 2. Structural resolution
We only keep structures with a resolution equal to or lower than 3 Å. The lower the resolution value, the higher the quality of the structure (i.e., the higher the certainty that the assigned 3D coordinates of the atoms are correct). Below 3 Å, atomic orientations can be determined, which is why this value is often used as a threshold for structures relevant to structure-based drug design.
```
pdbs.resolution = pdbs.resolution.astype(float) # convert to floats
pdbs = pdbs[pdbs.resolution <= 3.0]
print(f"Number of PDB entries for EGFR from X-ray with resolution <= 3.0 Angstrom: {len(pdbs)}")
```
We sort the data set by the structural resolution.
```
pdbs = pdbs.sort_values(["resolution"], ascending=True, na_position="last")
```
We check the top PDB structures (sorted by resolution):
```
pdbs.head()[["structureId", "resolution"]]
```
#### 3. Ligand-bound structures
Since we will create ensemble ligand-based pharmacophores in the next talktorial, we remove all PDB structures from our DataFrame which do not contain a bound ligand: we use the `pypdb` function `get_ligands` to retrieve the ligand(s) of a PDB structure. PDB-annotated ligands can be ligands and cofactors, but also solvents and ions. In order to keep only ligand-bound structures, we (i) remove all structures without any annotated ligand and (ii) remove all structures that do not contain any ligand with a molecular weight (MW) greater than 100 Da (Dalton), since many solvents and ions weigh less. Note: this is a simple, but not comprehensive, exclusion of solvents and ions.
```
# Get all PDB IDs from DataFrame
pdb_ids = pdbs["structureId"].tolist()
# Remove structures
# (i) without ligand and
# (ii) without any ligands with molecular weight (MW) greater than 100 Da (Dalton)
@redo.retriable(attempts=10, sleeptime=2)
def get_ligands(pdb_id):
"""Decorate pypdb.get_ligands so it retries after a failure."""
return pypdb.get_ligands(pdb_id)
mw_cutoff = 100.0 # Molecular weight cutoff in Da
# This database query may take a moment
passed_pdb_ids = []
removed_pdb_ids = []
progressbar = tqdm(pdb_ids)
for pdb_id in progressbar:
progressbar.set_description(f"Processing {pdb_id}...")
ligand_dict = get_ligands(pdb_id)
# (i) Remove structure if no ligand present
if ligand_dict["ligandInfo"] is None:
removed_pdb_ids.append(pdb_id) # Store ligand-free PDB IDs
# (ii) Remove structure if not a single annotated ligand has a MW above mw_cutoff
else:
# Get ligand information
ligands = ligand_dict["ligandInfo"]["ligand"]
# Technicality: if only one ligand, cast dict to list (for the subsequent list comprehension)
if type(ligands) == dict:
ligands = [ligands]
# Get MW per annotated ligand
mw_list = [float(ligand["@molecularWeight"]) for ligand in ligands]
# Remove structure if not a single annotated ligand has a MW above mw_cutoff
if sum([mw > mw_cutoff for mw in mw_list]) == 0:
removed_pdb_ids.append(pdb_id) # Store ligand-free PDB IDs
else:
passed_pdb_ids.append(pdb_id) # Remove ligand-free PDB IDs from list
print(
"PDB structures without a ligand (removed from our data set):",
*removed_pdb_ids,
)
print("Number of structures with ligand:", len(passed_pdb_ids))
```
### Get meta information of ligands from top structures
In the next talktorial, we will build ligand-based ensemble pharmacophores from the top `top_num` structures with the highest resolution.
```
top_num = 8 # Number of top structures
selected_pdb_ids = passed_pdb_ids[:top_num]
selected_pdb_ids
```
The selected highest resolution PDB entries can contain ligands targeting different binding sites, e.g. allosteric and orthosteric ligands, which would hamper ligand-based pharmacophore generation. Thus, we will focus on the following 4 structures, which contain ligands in the orthosteric binding pocket. The code provided later in the notebook can be used to verify this.
```
selected_pdb_ids = ["5UG9", "5HG8", "5UG8", "3POZ"]
```
We fetch the PDB information about the top `top_num` ligands using `get_ligands`, to be stored as *csv* file (as dictionary per ligand).
If a structure contains several ligands, we select the largest one. Note: this is a simple, but not comprehensive, method to select the ligand bound in the binding site of a protein. This approach may also select a cofactor bound to the protein. Therefore, please check the automatically selected top ligands visually before further usage.
```
ligands_list = []
for pdb_id in selected_pdb_ids:
ligands = get_ligands(pdb_id)["ligandInfo"]["ligand"]
# Technicality: if only one ligand, cast dict to list (for the subsequent list comprehension)
if isinstance(ligands, dict):
ligands = [ligands]
weight = 0
this_ligand = {}
# If several ligands contained, take largest
for ligand in ligands:
if float(ligand["@molecularWeight"]) > weight:
this_ligand = ligand
weight = float(ligand["@molecularWeight"])
ligands_list.append(this_ligand)
# NBVAL_CHECK_OUTPUT
# Change the format to DataFrame
ligands = pd.DataFrame(ligands_list)
ligands
ligands.to_csv(DATA / "PDB_top_ligands.csv", header=True, index=False)
```
### Draw top ligand molecules
```
PandasTools.AddMoleculeColumnToFrame(ligands, "smiles")
Draw.MolsToGridImage(
mols=list(ligands.ROMol),
legends=list(ligands["@chemicalID"] + ", " + ligands["@structureId"]),
molsPerRow=top_num,
)
```
### Create protein-ligand ID pairs
```
# NBVAL_CHECK_OUTPUT
pairs = collections.OrderedDict(zip(ligands["@structureId"], ligands["@chemicalID"]))
pairs
```
### Align PDB structures
Since we want to build ligand-based ensemble pharmacophores in the next talktorial, it is necessary to align all structures to each other in 3D.
We will use the Python package `opencadd` ([repository](https://github.com/volkamerlab/opencadd)), which includes a 3D superposition subpackage to guide the structural alignment of the proteins. The approach is based on superposition guided by sequence alignment, which provides the matched residues. There are other methods in the package, but this simple one is enough for the task at hand.
#### Get the PDB structure files
We now fetch the PDB structure files, i.e. 3D coordinates of the protein, ligand (and if available other atomic or molecular entities such as cofactors, water molecules, and ions) from the PDB using `opencadd.structure.superposition`.
Available file formats are *pdb* and *cif*, which store the 3D coordinations of atoms of the protein (and ligand, cofactors, water molecules, and ions) as well as information on bonds between atoms. Here, we work with *pdb* files.
```
# Download PDB structures
structures = [Structure.from_pdbid(pdb_id) for pdb_id in pairs]
structures
```
#### Extract protein and ligand
Extract protein and ligand from the structure in order to remove solvent and other artifacts of crystallography.
```
complexes = [
Structure.from_atomgroup(structure.select_atoms(f"protein or resname {ligand}"))
for structure, ligand in zip(structures, pairs.values())
]
complexes
# Write complex to file
for complex_, pdb_id in zip(complexes, pairs.keys()):
complex_.write(DATA / f"{pdb_id}.pdb")
```
#### Align proteins
Align complexes (based on protein atoms).
```
results = align(complexes, method=METHODS["mda"])
```
`nglview` can be used to visualize molecular data within Jupyter notebooks. With the next cell we will visualize our aligned protein-ligand complexes.
```
view = nglview.NGLWidget()
for complex_ in complexes:
view.add_component(complex_.atoms)
view
view.render_image(trim=True, factor=2, transparent=True);
view._display_image()
```
#### Extract ligands
```
ligands = [
Structure.from_atomgroup(complex_.select_atoms(f"resname {ligand}"))
for complex_, ligand in zip(complexes, pairs.values())
]
ligands
for ligand, pdb_id in zip(ligands, pairs.keys()):
ligand.write(DATA / f"{pdb_id}_lig.pdb")
```
We check the existence of all ligand *pdb* files.
```
ligand_files = []
for file in DATA.glob("*_lig.pdb"):
ligand_files.append(file.name)
ligand_files
```
We can also use `nglview` to depict the co-crystallized ligands alone. As we can see, the selected complexes contain ligands populating the same binding pocket and can thus be used in the next talktorial for ligand-based pharmacophore generation.
```
view = nglview.NGLWidget()
for component_id, ligand in enumerate(ligands):
view.add_component(ligand.atoms)
view.remove_ball_and_stick(component=component_id)
view.add_licorice(component=component_id)
view
view.render_image(trim=True, factor=2, transparent=True);
view._display_image()
```
## Discussion
In this talktorial, we learned how to retrieve protein and ligand meta information and structural information from the PDB. We retained only X-ray structures and filtered our data by resolution and ligand availability. Ultimately, we aimed for an aligned set of ligands to be used in the next talktorial for the generation of ligand-based ensemble pharmacophores.
In order to enrich information about ligands for pharmacophore modeling, it is advisable to not only filter by PDB structure resolution, but also to check for ligand diversity (see **Talktorial 005** on molecule clustering by similarity) and to check for ligand activity (i.e. to include only potent ligands).
## Quiz
1. Summarize the kind of data that the Protein Data Bank contains.
2. Explain what the resolution of a structure stands for and how and why we filter for it in this talktorial.
3. Explain what an alignment of structures means and discuss the alignment performed in this talktorial.
# Interfaces
An interface is a contract under which a class that implements it provides certain members.
Writing code against interfaces rather than concrete types lets you:
- **Reuse code while abstracting away from the implementation.** A sorting algorithm written once against the IComparable interface works the same with built-in types as with your own.
- **Swap out the implementation, including at run time.**
- **Make code safer.** An object passed via an interface reference exposes only limited information about its capabilities.
- **Avoid the risks of inheritance.** Since no implementation is dragged along, the problems of multiple inheritance do not arise.
## 1. Rules for defining interfaces
An interface defines the signatures of a class's *instance functional* members, except constructors.
That is, the following are not allowed:
- Fields
- Constructors
Everything else is allowed:
- Methods
- Properties
- Events
- Indexers
Starting with C# 8.0 (I believe), an interface may also define *static and instance methods with an implementation*.
No access modifier is specified - interface members are public by definition.
```
public interface ISomethingMessy
{
    // Method
    void Execute();

    // Property
    string Message { get; }

    // Indexer
    object this[int index] { get; set; }

    // Event
    event Action MyEvent;

    // Better not to cross this line...
    // --------------------------------

    // Static method: an implementation is mandatory
    static void StaticMethod()
    {
        Console.WriteLine("interface static method");
    }

    // Default interface implementation: ACCESSIBLE ONLY VIA AN INTERFACE REFERENCE
    void SecretMethod()
    {
        Console.WriteLine("Your password is 123456");
    }
}
```
An example from the standard library is System.IDisposable:
```csharp
public interface IDisposable
{
    void Dispose();
}
```
## 2. Implementing interfaces. Inheritance
```
// First attempt: implementing IDisposable without providing Dispose() fails to compile
using System.IO;
class Base : IDisposable
{
    private FileStream fileStream;
    // ...
    // public void Dispose() { fileStream.Dispose(); }
}

// Working version
using System.IO;
class Base : IDisposable
{
    private FileStream fileStream;
    // ...
    public void Dispose() { fileStream.Dispose(); }
}

class Derived : Base
{
    // ...
}

// All descendants of a class automatically implement the interfaces of their parent.
Derived derived = new Derived();
derived is IDisposable
```
## 3. Methods of the object class are also available
```
IComparable<int> val = 3;
val.ToString()
val.GetType()
```
## 4. ~~Implementation~~ Inheritance of interfaces by interfaces
An interface can be extended by deriving another interface from it. Types implementing the child interface must then provide the functionality of both interfaces.
**However, this is justified if and only if such tight coupling is acceptable.**
Otherwise it is better to use several small interfaces, following the **Interface Segregation Principle**.
```
public interface IVehicle
{
    void MoveTo(float x, float y, float z);
}

public interface IWheeledVehicle : IVehicle
{
    int NumOfWheels { get; }
}

public class Car : IWheeledVehicle
{
    // Must implement the members of both interfaces to compile
    public void MoveTo(float x, float y, float z) { }
    public int NumOfWheels => 4;
}
```
An example of interface inheritance from the standard library is IEnumerable:
```csharp
public interface IEnumerable<out T> : IEnumerable
{
    IEnumerator<T> GetEnumerator();
}

public interface IEnumerable
{
    IEnumerator GetEnumerator();
}
```
## 5. Explicit and implicit interface implementation
It is also possible to implement an interface without providing a public implementation of its methods.
This is achieved by implementing the interface **explicitly**. Such an implementation is accessible **only via the corresponding interface reference**.
```
public class MyClass : IDisposable
{
    // Implicit interface implementation
    // public void Dispose() { Console.WriteLine("Implicit"); }

    // Explicit interface implementation
    void IDisposable.Dispose() { Console.WriteLine("Explicit"); }
}

MyClass myClass = new MyClass();
myClass.Dispose(); // compile error: Dispose() is only accessible via the interface reference
IDisposable disposable = new MyClass();
disposable.Dispose();
```
**What is the point?**
A class can implement several interfaces that contain methods with identical signatures. If they mean the same thing, there is no problem; but what if they are different in essence?
With explicit interface implementations, an instance can be given **different behaviour** depending on which reference the interface method is called through.
P.S. The example is super contrived, as usual.
```
// "Executor"
public interface IExecutor
{
    void Execute();
}

// "Executioner"
public interface IExecutioner
{
    void Execute();
}

public class Officer : IExecutor, IExecutioner
{
    public void Execute() { /* some boring actions */ Console.WriteLine("Job executed."); }
    void IExecutioner.Execute() { /* some murderous actions */ Console.WriteLine("Intruder executed."); }
}

Officer officer = new Officer();
officer.Execute();
IExecutor executor = officer;
executor.Execute();
IExecutioner executioner = officer;
executioner.Execute();
```
## 6. Generic interfaces
Interfaces can be generic, gaining all the advantages of generics.
A nice consequence: the same interface can be implemented with different type parameters because, *as you know*, generic types with different type arguments are constructed as distinct types.
```
public class Number : IComparable<int>, IComparable<double>, IComparable<string>
{
    private int Value { get; }

    public Number(int number)
    {
        Value = number;
    }

    public int CompareTo(int other)
    {
        Console.WriteLine("Hello from int");
        return Value.CompareTo(other);
    }

    public int CompareTo(double other)
    {
        Console.WriteLine("Hello from double");
        return ((double)Value).CompareTo(other);
    }

    public int CompareTo(string other)
    {
        Console.WriteLine("Hello from string");
        return ((double)Value).CompareTo(double.Parse(other));
    }
}

Number number = new Number(42);
number.CompareTo(13)
number.CompareTo(42.5)
number.CompareTo("42")
```
Interfaces can be used as constraints on a type argument. If several are specified, the type argument must implement all of them.
```
public void SayHello<T>(T value) where T : IComparable<int>, IDisposable
{
    Console.WriteLine("Hello!");
}

public class MyClass : IComparable<int> //, IDisposable
{
    public int CompareTo(int other) => throw new NotImplementedException();
    public void Dispose() => throw new NotImplementedException();
}

MyClass obj = new MyClass();
SayHello(obj) // compile error: MyClass does not declare IDisposable, so the constraint fails
```
## 7. Default interface method implementations
Starting with C# 8.0, interface methods can be given a default implementation.
Such an implementation is accessible only via an interface reference.
```
public interface ISummator
{
    int Sum(IEnumerable<int> values)
    {
        int result = 0;
        foreach (var value in values)
        {
            result += value;
        }
        return result;
    }
}

public class MySummator : ISummator
{
    // Can be overridden; a concrete implementation completely replaces the default one
    // public int Sum(IEnumerable<int> values) => values.Count();
}

MySummator mySummator = new MySummator();
mySummator.Sum(new int[] { 1, 2, 3, 4, 5 }) // compile error: the default implementation is only visible via the interface
ISummator summator = new MySummator();
summator.Sum(new int[] { 1, 2, 3, 4, 5 })
```
## 8. Abstract class or interface?
**Abstract class:**
- Is a class, so a type inheriting from it cannot inherit from any other class;
- Can define part of the state and behaviour;
- Inheritance is a very strong coupling.
An abstract class defines a skeleton for several different implementations of an entity.
**Interface:**
- A class can implement any number of interfaces;
- Defines (in the general case) only *what* a class must do, not *how*;
- Implementing an interface is a weak coupling.
An interface defines a set of properties an entity must have, some isolated piece of its functionality.
| github_jupyter |
```
from utils.t5 import *
input_data_name = "claim_LOF_base_0.11_data_explanation_prep_4.pickle" #"LOF_base_0.45_0.53_removed_inlier_outlier_23.782_full.pickle" # "LOF_base_0.46_0.54_removed_inlier_outlier_0.51_full.pickle"
data_input_dir = "./Data/Selection/" #"./Data/Selection/" "./Data/Preprocessed/"
output_dir = "./Data/Models/"
source_column = "source_text" #source_text_shorter " statement_explanation_prep" #"explanation_prep" "statement_explanation_prep" #"source_text_shorter" # "source_text_shorter" source_text
target_column = "target_text" #"shortExplanation_prep" #"target_text"
no_workers = 1
input_data_path = data_input_dir + input_data_name
new_model_name = "d-t5-{}_{}".format(source_column, input_data_name)
torch.cuda.get_device_name(0)
new_model_name
data = pd.read_pickle(input_data_path)
train, dev_test = train_test_split(data, test_size = 0.2, random_state = 42)
dev, test = train_test_split(dev_test, test_size = 0.5, random_state = 42)
train['target_text'] = train[target_column]
train['source_text'] = "summarize: " + train[source_column]
train['target_len'] = train["target_text"].str.split().str.len()
train['source_len'] = train["source_text"].str.split().str.len()
train[['target_len','source_len']].describe()
sum(train.source_len.to_list())
len(train[train.target_len>150])
len(train[train.source_len>1200])
len(train)
dev['target_text'] = dev[target_column]
dev['source_text'] = "summarize: " + dev[source_column]
dev['target_len'] = dev["target_text"].str.split().str.len()
dev['source_len'] = dev["source_text"].str.split().str.len()
dev[['target_len','source_len']].describe()
sum(dev.source_len.to_list())
test['target_text'] = test[target_column]
test['source_text'] = "summarize: " + test[source_column]
test['target_len'] = test["target_text"].str.split().str.len()
test['source_len'] = test["source_text"].str.split().str.len()
test[['target_len','source_len']].describe()
sum(test.source_len.to_list())
train = train[["target_text", "source_text"]]
dev = dev[["target_text","source_text"]]
model = SimpleT5()
model.from_pretrained(model_type="t5",model_name = "t5-base") # large "google/mt5-base"
import gc
#del data, model
gc.collect()
import torch
torch.cuda.empty_cache()
print(torch.cuda.memory_summary(device=None, abbreviated=False))
model.train(train_df = train, #LOF + Bert 11
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #claim + LOF + Bert 11
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #claim + LOF + Bert tunned 11
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #LOF + Bert tunned 13
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #LOF + Bert 13
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #claim + LOF + Bert 13
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #claim + LOF + Bert tunned 13
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #claim + LOF + Bert tunned 15
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #claim + LOF + Bert
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, # LOF + Bert tunned
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, # LOF + Bert
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, # claim base
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #base
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #45_56
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #45_57
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #45_55
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, #45_54
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train, # 57
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train,
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 150, #100 max_shortexplanation_tokens
batch_size = 8, max_epochs = 3, # 9 for base
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train,
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 100, #100 max_shortexplanation_tokens
batch_size = 9, max_epochs = 3, # 18
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
model.train(train_df = train,
eval_df = dev,
source_max_token_len = 1200, #2000 max_explanation_tokens
target_max_token_len = 100, #100 max_shortexplanation_tokens
batch_size = 9, max_epochs = 6, # 18
use_gpu = True,
outputdir = output_dir,
early_stopping_patience_epochs = 0) # 3
```
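The notebook ends after the training runs; for completeness, a minimal inference sketch follows, assuming `SimpleT5` (imported above via `utils.t5`) is the standard `simplet5` package with its `load_model`/`predict` API. The checkpoint folder name is hypothetical; use whichever epoch directory `model.train()` wrote into `output_dir`.
```
# Hedged sketch: load a trained checkpoint and summarize one held-out sample.
trained = SimpleT5()
# hypothetical folder name; replace with the actual checkpoint under output_dir
trained.load_model("t5", output_dir + "simplet5-epoch-2-train-loss-1.2345", use_gpu=True)

sample = test.iloc[0]
prediction = trained.predict(sample["source_text"])  # returns a list of generated strings
print("Prediction:", prediction[0])
print("Target:", sample["target_text"])
```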
| github_jupyter |
# Image classification training on a DEBIAI project with a dataset generator
This tutorial shows how to classify images of flowers after inserting the project's contextual data into DEBIAI.
Based on the TensorFlow tutorial: https://www.tensorflow.org/tutorials/images/classification
```
# Import TensorFlow and other libraries
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
# The pythonModule folder needs to be in the same folder
from debiai import debiai
```
## Download and explore the dataset
This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains 5 sub-directories, one per class:
daisy, dandelion, roses, sunflowers and tulips
```
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
```
## Create a dataset
```
# Define some parameters for the loader:
batch_size = 32
img_height = 180
img_width = 180
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
class_names = train_ds.class_names
print(class_names)
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
```
## Insert the project contextual data in DEBIAI
```
# Creation of the DEBIAI project block structure
DEBIAI_block_structure = [
    {
        "name": "image_id",
        "groundTruth": [
            {"name": "class", "type": "text"},
        ],
        "contexts": [
            {"name": "img_path", "type": "text"},
        ]
    }
]
```
#### Converting some of the project data into a dataframe
In this example, we do this by creating a dataframe.
More details here:
https://git.irt-systemx.fr/ML/DEBIAI/pythonModule#adding-samples
```
# Creation of a dataframe with the same columns as the block structure
data = {"image_id": [], "class": [], "img_path": []}
i = 0
for class_name in class_names:
    images = list(data_dir.glob(class_name + '/*'))
    for image in images:
        data["image_id"].append(i)
        data["class"].append(class_name)
        data["img_path"].append(str(image))
        i += 1
df = pd.DataFrame(data=data)
df
# Creation of a DEBIAI instance
DEBIAI_BACKEND_URL = 'http://localhost:3000/'
DEBIAI_PROJECT_NAME = 'Image classification demo'
my_debiai = debiai.Debiai(DEBIAI_BACKEND_URL)
# Creation of a DEBIAI project if it doesn't exist
debiai_project = my_debiai.get_project(DEBIAI_PROJECT_NAME)
if not debiai_project:
    debiai_project = my_debiai.create_project(DEBIAI_PROJECT_NAME)
debiai_project
# Set the project block_structure if not already done
if not debiai_project.block_structure_defined():
    debiai_project.set_blockstructure(DEBIAI_block_structure)
debiai_project.get_block_structure()
# Adding the dataframe
debiai_project.add_samples_pd(df, get_hash=False)
```
## Create the model
```
num_classes = len(class_names)
model = Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes)
])
# Compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.summary()
```
## Train the model with the DEBIAI Dataset generator
```
# Because DEBIAI doesn't store the images used to train the model, we provide a
# function that takes a sample's information (based on the given block_structure)
# and returns the model input
def model_input_from_debiai_sample(debiai_sample: dict):
    # "image_id", "class", "img_path"
    img = keras.preprocessing.image.load_img(
        debiai_sample['img_path'], target_size=(img_height, img_width))
    img_array = keras.preprocessing.image.img_to_array(img)
    return tf.expand_dims(img_array, 0)  # Create a batch

# TF generated dataset
train_dataset_imported = debiai_project.get_tf_dataset_with_provided_inputs(
    model_input_from_debiai_sample,
    output_types=(tf.float32, tf.int32),
    output_shapes=([None, img_height, img_width, 3], [1, ]),
    classes=class_names
)
AUTOTUNE = tf.data.AUTOTUNE
train_dataset_imported = train_dataset_imported.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
# get_tf_dataset_with_provided_inputs Also work with a selection
# Train the model
epochs = 3
model.fit(train_dataset_imported, epochs=epochs)
```
| github_jupyter |
# Anomaly Detection on Enron Dataset
In this notebook, we aim to build and train models based on machine learning algorithms commonly used for unsupervised anomaly detection; namely one-class Support Vector Machine (SVM), Isolation Forest and Local Outlier Factor (LOF). The dataset used is a modified version of the Enron financial + email dataset that contains information about Enron Corporation, an energy, commodities, and services company that infamously went bankrupt in December 2001 as a result of fraudulent business practices.
The Enron dataset is widely used to try and develop models that can identify the persons of interest (POIs), i.e. individuals who were eventually tried for fraud or criminal activity in the Enron investigation, from the features within the data. The email + financial data contains the emails themselves, metadata about the emails such as the number received by and sent from each individual, and financial information including salary and stock options.
The dataset we have obtained is from the [Udacity Data Analyst Nanodegree](https://www.udacity.com/course/data-analyst-nanodegree--nd002) and their [GitHub](https://github.com/udacity/ud120-projects) page. Inspiration for loading and preprocessing the dataset was taken from Will Koehrsen's [Medium article](https://williamkoehrsen.medium.com/machine-learning-with-python-on-the-enron-dataset-8d71015be26d). The data is stored in a pickled form and can be downloaded as a `.pkl` file that can be easily converted to a Python dictionary.
#### NOTE:
All references are presented in the form of appropriate hyperlinks within the paragraphs rather than in a separate section.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
## Loading the Dataset
As mentioned above, the data is presented in the form of a `.pkl` file. We need to open the file as a Python object and then load it into a dictionary using Python's built-in `pickle` module.
```
import pickle

with open("enron_data.pkl", "rb") as data_file:
    data_dict = pickle.load(data_file)

len(data_dict)
```
We have 146 personnel in the dataset. Let us view what data each person holds.
```
data_dict["ALLEN PHILLIP K"]
```
Including the names of the people, we have 146 samples with 22 features. We will now convert this to a Pandas dataframe.
```
temp_dict = {'name': [], 'salary': [], 'to_messages': [], 'deferral_payments': [], 'total_payments': [],
             'loan_advances': [], 'bonus': [], 'email_address': [], 'restricted_stock_deferred': [],
             'deferred_income': [], 'total_stock_value': [], 'expenses': [], 'from_poi_to_this_person': [],
             'exercised_stock_options': [], 'from_messages': [], 'other': [], 'from_this_person_to_poi': [],
             'long_term_incentive': [], 'shared_receipt_with_poi': [], 'restricted_stock': [],
             'director_fees': [], 'poi': []}
for name in data_dict.keys():
    temp_dict['name'].append(name)
    for feature in data_dict[name].keys():
        temp_dict[feature].append(data_dict[name][feature])
df = pd.DataFrame(temp_dict)
df
```
## Exploratory Data Analysis
Now, we perform some initial data exploration and analysis to get an idea of the characteristics and behaviour of our dataset as well as all the feature columns. We start by using the `.info()` and `.describe()` functions from the *Pandas* library to get important metrics about our dataset.
```
df.info()
```
As seen above, all our columns except `poi` are of type `object`, despite most of the data being numeric. So, we convert all the necessary columns to the `float64` type and also convert the `poi` column to categorical after replacing `True` with `1` and `False` with `0`.
```
# Convert all numeric columns to float64 in one pass
float_cols = ['salary', 'to_messages', 'deferral_payments', 'total_payments', 'loan_advances',
              'bonus', 'restricted_stock_deferred', 'deferred_income', 'total_stock_value',
              'expenses', 'from_poi_to_this_person', 'exercised_stock_options', 'from_messages',
              'other', 'from_this_person_to_poi', 'long_term_incentive', 'shared_receipt_with_poi',
              'restricted_stock', 'director_fees']
df[float_cols] = df[float_cols].astype('float64')
df.info()
df['poi'].replace({True: 1, False: 0}, inplace=True)
df['poi'] = df['poi'].astype('category')
df.head()
df.describe()
```
From the above, we can make the following observations:
* A few feature columns have very high maximum values compared to the rest of the data, like `salary`, `to_messages`, `total_payments`, etc. However, we cannot assume these are outliers due to the very nature of the data.
* There are a lot of missing values in most of the columns. For the finance-related feature columns, `NaN` actually means 0, whereas it is an unknown value for the email-related feature columns, according to the [documentation](https://github.com/udacity/ud120-projects/blob/master/final_project/enron61702insiderpay.pdf).
* The columns `name` and `email_address` can be removed or stored separately as they provide no tangible information for our model.
* Since many of the columns are arithmetically related to one another, we can check for errors using those relations.
Now, let us create a pair plot with the feature columns `salary, total_payments, total_stock_value, from_poi_to_this_person, from_this_person_to_poi` to find any observable relations among them. We choose these columns specifically because, conceptually, they seem the most important as well as related to most of the other feature columns.
```
tempdf = df[['salary','total_payments','total_stock_value','from_poi_to_this_person','from_this_person_to_poi','poi']]
sns.pairplot(tempdf, hue='poi', palette='bright')
```
From the diagonal plots, we can conclude that no single feature is enough to predict whether a given observation can be classified as a person of interest, because there is considerable overlap in the distributions for both categories.
## Handling missing data
As mentioned above, we have a lot of missing data in our dataset. We cannot simply drop the affected rows because we already have very few observations to work with, and removing more could prove detrimental during model training. For the financial feature columns, we replace `NaN` with 0. For the email feature columns, we replace missing values with the mean of the respective column.
```
financial_features = ['bonus', 'deferral_payments', 'deferred_income', 'director_fees', 'exercised_stock_options',
'expenses', 'loan_advances', 'long_term_incentive', 'other', 'restricted_stock', 'restricted_stock_deferred',
'salary', 'total_payments', 'total_stock_value']
df[financial_features] = df[financial_features].fillna(0)
df[financial_features].isna().sum()
email_features = ['from_messages', 'from_poi_to_this_person', 'from_this_person_to_poi',
'shared_receipt_with_poi', 'to_messages']
email_mean = df[email_features].mean()
df[email_features] = df[email_features].fillna(email_mean)
df.isna().sum()
df.describe()
```
## Error Checking through Official Docs
One simple way to check for incorrect data is to add up all of the payment-related columns for each person and check whether the sum equals the total payment recorded for the individual. The same can be done for stock payments. Depending on the result, we can make simple changes to rectify the errors.
```
payment_data = ['bonus', 'deferral_payments', 'deferred_income', 'director_fees', 'expenses', 'loan_advances',
'long_term_incentive', 'other', 'salary']
pay_err = df[ (df[payment_data].sum(axis='columns') != df['total_payments']) ]
pay_err
```
For the payment-related financial feature columns, we get errors for two observations. The errors appear to be caused by a misalignment of the columns when compared to the [official documentation](https://github.com/udacity/ud120-projects/blob/master/final_project/enron61702insiderpay.pdf): for `BELFER ROBERT`, the financial data has been shifted one column to the right, and for `BHATNAGAR SANJAY`, the data has been shifted one column to the left. We shift the columns to their correct positions and then check again.
```
df.loc[24,['deferred_income','deferral_payments','expenses',
'director_fees','total_payments','exercised_stock_options', 'restricted_stock',
'restricted_stock_deferred', 'total_stock_value']] = [-102500,0,3285,102500,3285,0,44093,-44093,0]
df.loc[117,['other','expenses','director_fees','total_payments','exercised_stock_options', 'restricted_stock',
'restricted_stock_deferred', 'total_stock_value']] = [0,137864,0,137864,15456290,2604490,-2604490,15456290]
pay_err = df[ (df[payment_data].sum(axis='columns') != df['total_payments']) ]
pay_err
stock_data = ['exercised_stock_options', 'restricted_stock', 'restricted_stock_deferred', 'total_stock_value']
stock_err = df[ (df[stock_data[:-1]].sum(axis='columns') != df['total_stock_value']) ]
stock_err
```
As expected, the corrections in the code block above rectified the errors in our stock data along with the payment data.
Now, we also remove two rows from the dataset, namely `TOTAL` and `THE TRAVEL AGENCY IN THE PARK`, as the first is unnecessary for prediction and the second is an organization rather than a person.
```
df[ (df['name'] == 'TOTAL') | (df['name'] == 'THE TRAVEL AGENCY IN THE PARK') ]
df.drop(labels=[100,103], axis=0, inplace=True)
df[ (df['name'] == 'TOTAL') | (df['name'] == 'THE TRAVEL AGENCY IN THE PARK') ]
```
## Data Preprocessing
All the above steps do come under preprocessing, but this section deals with the final touches before we start building and training our models. We need to remove the `name` and `email_address` columns, followed by scaling using z-score normalization.
```
names = df.pop('name')
emails = df.pop('email_address')
y = df.pop('poi')
y
```
Here, the index of `y` should be noted.
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler(copy=True)
scaled_arr = scaler.fit_transform(df)
scaled_df = pd.DataFrame(scaled_arr, columns=financial_features+email_features)
scaled_df.head()
scaled_df.index
```
Clearly, the indexes of `scaled_df` and `y` are different. This is because the scaling reset our index, which now includes positions 100 and 103 even though we removed those samples before. So, we need to realign `y` so that they match.
```
y.index = scaled_df.index
y
```
## Visualisation
Now, we will try to visualise our feature space by using t-distributed Stochastic Neighbour Embedding (t-SNE) as a dimensionality reduction method. Since this method is primarily used for visualisation, our embedding space is set to be 2-dimensional. Our aim is to observe whether we can achieve an embedded feature space where the anomalous observations are distinctly separate from the non-anomalous ones.
```
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
tsne.fit_transform(scaled_df)
x1 = []
x2 = []
for value in tsne.embedding_:
    x1.append(value[0])
    x2.append(value[1])
plt.figure(figsize=(16, 10))
plt.scatter(x1, x2, c=y)
```
As seen above, the compression of our data into 2 dimensions does not show an immediate distinction between non-POIs and POIs.
## Is splitting needed?
When dealing with supervised algorithms, it is essential to split our dataset into training, validation and test sets. The training set is used to train the model. The validation set is then used to tune its hyperparameters; usually, we skip this split and use cross-validation for tuning instead. Finally, the test set gives the true accuracy that can be expected after deployment.
However, doing the same for unsupervised learning does not make sense. Unsupervised learning in general benefits more from a cross-validation score as a replacement for test accuracy. Refer to this [question from Stats.StackExchange](https://stats.stackexchange.com/questions/387326/unsupervised-learning-train-test-division) and this [one from StackOverflow](https://stackoverflow.com/questions/31673388/is-train-test-split-in-unsupervised-learning-necessary-useful) for more detail.
#### For our case, we will be looking at the accuracy and the confusion matrix since we have labels for our data.
```
X = scaled_df.copy()
y.value_counts()
(18/144)*100
```
We can now begin building our models.
## One-class SVM
The best and most comprehensive explanation of this method is the [original paper](https://papers.nips.cc/paper/1999/file/8725fb777f25776ffa9076e44fcfd776-Paper.pdf) authored by researchers from Microsoft, the Australian National University and the University of London. The gist of it is that regular kernel SVM for classification cannot be used for novelty detection, so minor modifications were made to find a function that is positive for regions with a high density of points and negative for small densities. We will now build and train our model. It is important to remember that this is an unsupervised method, meaning our labels have no use here. The main difference of one-class SVM from the other methods is that it looks for outliers according to the distribution in the feature space, rather than using an index or metric to quantify the anomalous behaviour of one observation with respect to the rest.
```
from sklearn.svm import OneClassSVM
svm = OneClassSVM()
svmpred = svm.fit_predict(X)
```
The `fit_predict()` function returns -1 for outliers and 1 for inliers, which is different from how our labels are assigned. So, we will modify the results and then calculate accuracy.
```
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
act_svmpred = np.where(svmpred == -1, 1, 0)
print(accuracy_score(y, act_svmpred))
print(f1_score(y, act_svmpred))
```
We get an accuracy of 56.25% with our default model. According to the [Scikit-Learn documentation](https://scikit-learn.org/stable/modules/outlier_detection.html#overview-of-outlier-detection-methods), one-class SVM is very sensitive to outliers in the data that are not anomalies. This might explain the low training accuracy, as our dataset does contain a few observations that look like outliers but aren't anomalies per se. However, our F-score is very low. We can interpret this further by looking at the confusion matrix.
```
from sklearn.metrics import confusion_matrix
confusion_matrix(y,act_svmpred, labels=[0,1])
```
From the confusion matrix, we get a clearer picture of why our F-score is so low. Our model is very good at classifying the anomalies correctly but misclassifies many normal observations as anomalies. This means we have good recall but very bad precision.
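To make this concrete, precision and recall can be computed directly with scikit-learn's built-in functions:
```
from sklearn.metrics import precision_score, recall_score

print("Precision:", precision_score(y, act_svmpred))  # TP / (TP + FP), low here
print("Recall:", recall_score(y, act_svmpred))        # TP / (TP + FN), high here
```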
## Isolation Forest
The original paper can be found [here](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf?q=isolation-forest). This is also an unsupervised algorithm that returns an anomaly score for each observation. As the name suggests, it is based on the random forests algorithm in how it works.
```
from sklearn.ensemble import IsolationForest
isof = IsolationForest()
isofpred = isof.fit_predict(X)
act_isofpred = np.where(isofpred == -1, 1, 0)
print(accuracy_score(y, act_isofpred))
print(f1_score(y, act_isofpred))
```
This is excellent accuracy for what is a very small dataset, but a very low F-score. Since our model is unsupervised, there is a small possibility that it overfit the data.
```
confusion_matrix(y,act_isofpred, labels=[0,1])
```
Here, we can clearly see that our model has very bad recall, about 22% (4/18), causing the F-score to fall. However, it is very good at classifying non-anomalous observations correctly. This indicates that when our model predicts a new observation to be normal, we can trust that result, as it correctly identified 120 of 126 normal training observations. However, if it predicts a new observation as an anomaly, we need more information or need to look at other methods.
## Local Outlier Factor
The original paper can be found [here](https://www.dbs.ifi.lmu.de/Publikationen/Papers/LOF.pdf). LOF is based on the concept of local density, where locality is given by the K nearest neighbours, whose distances are used to estimate the density. By comparing the local density of an object to the local densities of its neighbours, one can identify regions of similar density, as well as points that have a substantially lower density than their neighbours. It shares many similarities with the DBSCAN clustering algorithm.
```
from sklearn.neighbors import LocalOutlierFactor
lof = LocalOutlierFactor()
lofpred = lof.fit_predict(X)
act_lofpred = np.where(lofpred == -1, 1, 0)
print(accuracy_score(y, act_lofpred))
print(f1_score(y, act_lofpred))
```
We get an accuracy of 65%, which is higher than our SVM but lower than our forest, with the same F-score as the forest.
```
confusion_matrix(y,act_lofpred, labels=[0,1])
```
Just like the results above, our confusion matrix is an average of the above two methods. While SVM was good at classifying anomalies and Isolation Forest was good at classifying normal observations, LOF lies right between the two.
## Hyperparameter Tuning
We will now take our Isolation Forest model and try tuning its parameters to get as good a score as possible using the `GridSearchCV` cross-validation method. We will concentrate on `n_estimators` (number of trees in the forest), `max_samples` (number of observations used to train each tree), `max_features` (number of features considered for splitting per tree) and `bootstrap` (bootstrapping of the data). Two metrics will be calculated, F-score and accuracy, and the best estimator will be decided by the former, as we already get excellent accuracy from a default Isolation Forest.
```
from sklearn.model_selection import GridSearchCV
clf = IsolationForest(random_state=0)
param_grid = {'n_estimators':[100,200,300], 'max_samples':[50,100,'auto'], 'max_features':[1,5,10,15],
'bootstrap':[True,False]}
grid_isof = GridSearchCV(clf, param_grid, scoring=['f1_micro','accuracy'], refit='f1_micro', cv=5)
grid_isof.fit(X, np.where(y==1, -1, 1))
grid_isof.best_estimator_
```
Here, we obtain our best model. Let us look at the accuracy and F-score to see our improvements.
```
grid_isof.best_index_
print(grid_isof.cv_results_['mean_test_accuracy'][grid_isof.best_index_])
print(grid_isof.cv_results_['mean_test_f1_micro'][grid_isof.best_index_])
```
While we see only a small improvement in our accuracy, our F-score has greatly improved.
## Final Thoughts:
* With the default models, Isolation Forest worked best at identifying normal observations, whereas one-class SVM worked best at identifying anomalies.
* LOF's performance was the average of the other two and did not provide any significant advantage.
* Hyperparameter tuning goes a long way in improving our model. We have done so for our Isolation Forest, but the same can be replicated for SVM (see the sketch below).
* The main problem with this dataset is the very small number of observations. ML models generally tend to improve in performance with an increase in training data, with overfitting being prevented by common methods.
* Supervised learning algorithms, especially Random Forests, might perform better since our data is labelled, but they require finer tuning since their algorithms are not designed specifically for anomaly detection.
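As a starting point for that replication, here is a sketch of the analogous grid search for the one-class SVM, using the same label convention as above (-1 for anomalies, 1 for normal observations). The parameter values are illustrative assumptions, not tuned choices.
```
from sklearn.model_selection import GridSearchCV
from sklearn.svm import OneClassSVM

svm_param_grid = {'nu': [0.05, 0.1, 0.2],         # upper bound on the fraction of outliers
                  'gamma': ['scale', 0.01, 0.1],  # kernel coefficient
                  'kernel': ['rbf', 'sigmoid']}
grid_svm = GridSearchCV(OneClassSVM(), svm_param_grid,
                        scoring=['f1_micro', 'accuracy'], refit='f1_micro', cv=5)
grid_svm.fit(X, np.where(y == 1, -1, 1))
grid_svm.best_estimator_
```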
| github_jupyter |
```
import os
import sys
import json
import tempfile
import pandas as pd
import numpy as np
import datetime
from CoolProp.CoolProp import PropsSI
from math import exp, factorial, ceil
import matplotlib.pyplot as plt
%matplotlib inline
cwd = os.getcwd()
sys.path.append(os.path.normpath(os.path.join(cwd, '..', '..', '..', 'glhe')))
sys.path.append(os.path.normpath(os.path.join(cwd, '..', '..', '..', 'standalone')))
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = [15, 9]
plt.rcParams['font.size'] = 14
pd.set_option('display.max_columns', None)
# pd.set_option('display.max_rows', None)
df = pd.read_csv("out.csv", index_col=0)
df.head(2)
start_time = datetime.datetime(month=1, day=1, year=2018, hour=0, minute=0, second=0)
l = df['Simulation Time'].tolist()
dt = [datetime.timedelta(seconds=x) for x in l]
df.set_index(pd.to_datetime([start_time + x for x in dt]), inplace=True)
df.plot(y=['GLHE Inlet Temperature [C]', 'GLHE Outlet Temperature [C]'])
dT = df['GLHE Inlet Temperature [C]'].diff()
dt = df['GLHE Inlet Temperature [C]'].index.to_series().diff().dt.total_seconds()
df['dT_in/dt'] = dT/dt
df.plot(y='dT_in/dt')
df = df.loc['01-01-2018 02:50:00':'01-01-2018 03:30:00']
def hanby(time, vol_flow_rate, volume):
    r"""
    Computes the non-dimensional response of a fluid conduit
    assuming well-mixed nodes. The model accounts for the thermal
    capacity of the fluid and diffusive mixing.

    Hanby, V.I., J.A. Wright, D.W. Fletcher, D.N.T. Jones. 2002.
    'Modeling the dynamic response of conduits.' HVAC&R Research 8(1): 1-12.

    The model is non-dimensional, so input parameters should have consistent
    units so that the non-dimensional time parameter, tau, can be computed:

    :math:`\tau = \frac{\dot{V} \cdot t}{Vol}`

    :param time: time of fluid response
    :param vol_flow_rate: volume flow rate
    :param volume: volume of fluid circuit
    :return: non-dimensional response, in [0, 1]
    """
    tau = vol_flow_rate * time / volume
    num_nodes = 20
    ret_sum = 1
    for i in range(1, num_nodes):
        ret_sum += (num_nodes * tau) ** i / factorial(i)
    return 1 - exp(-num_nodes * tau) * ret_sum

def hanby_c(time, vol_flow_rate, volume):
    return 1 - hanby(time, vol_flow_rate, volume)
delta_t = df['Simulation Time'][1] - df['Simulation Time'][0]
flow = 0.0002
vol = 0.05688
def calc_exft_correction_factors(timestep, flow_rate, volume):
    t_tr = volume / flow_rate
    time = np.arange(0, t_tr * 2, timestep)
    f = np.array([hanby(x, flow_rate, volume) for x in time])
    d = np.diff(f)
    r = np.diff(f) / sum(d)
    # r = np.append(np.zeros(ceil(t_tr/timestep)), r)
    if len(r) == 0:
        return np.ones(1)
    else:
        return r
calc_exft_correction_factors(120, flow, vol)
def update_exft_correction_factors(r):
    if len(r) == 1:
        return r
    elif r[0] == 1:
        return r
    else:
        pop_val = r[0]
        l = np.count_nonzero(r) - 1
        delta = pop_val / l
        for i, val in enumerate(r):
            if r[i] == 0:
                break
            else:
                r[i] += delta
        r = np.roll(r, -1)
        r[-1] = 0
        return r
cf_0 = calc_exft_correction_factors(delta_t, flow, vol)
cf_0
cf_1 = update_exft_correction_factors(cf_0)
cf_1
cf_2 = update_exft_correction_factors(cf_1)
cf_2
cf_3 = update_exft_correction_factors(cf_2)
cf_3
cf_4 = update_exft_correction_factors(cf_3)
cf_4
def calc_exft(signal, to_correct):
    r = calc_exft_correction_factors(delta_t, flow, vol)
    # r = np.array(l)
    prev_temps = np.ones(len(r)) * to_correct[0]
    prev_signal = signal[0]
    dT_dt_prev = 0
    new_temps = np.empty([0])
    for i, t_sig in enumerate(signal):
        dT_dt = (t_sig - prev_signal) / delta_t
        # print(dT_dt, t_sig, prev_signal)
        if abs(dT_dt - dT_dt_prev) > 0.01:
            r = calc_exft_correction_factors(delta_t, flow, vol)
            # r = np.array(l)
            print(r)
        prev_temps[0] = to_correct[i]
        new_temp = sum(r * prev_temps)
        # print(to_correct[i], new_temp)
        new_temps = np.append(new_temps, new_temp)
        # print(new_temps)
        prev_temps = np.roll(prev_temps, 1)
        prev_temps[0] = new_temp
        r = update_exft_correction_factors(r)
        prev_signal = t_sig  # fixed: was `prev_sig`, so prev_signal never updated
        dT_dt_prev = dT_dt
        # if i == 10:
        #     break
        # else:
        #     print('\n')
    return new_temps
t_c = calc_exft(df['GLHE Inlet Temperature [C]'], df['GLHE Outlet Temperature [C]'])
df['Corrected Temps'] = t_c
df.plot(y=['GLHE Inlet Temperature [C]', 'GLHE Outlet Temperature [C]', 'Corrected Temps', 'Average Fluid Temp [C]'], marker='X')
df.head(20)
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import re
import glob
import lzma
import pickle
import pandas as pd
import numpy as np
import requests as r
import seaborn as sns
import warnings
import matplotlib as mpl
import matplotlib.pyplot as plt
from joblib import hash
from collections import Counter
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import NearestCentroid
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import RidgeClassifier, RidgeClassifierCV, PassiveAggressiveClassifier
warnings.simplefilter('ignore')
mpl.style.use('ggplot')
```
## Source Data
If the source data is missing, run an Elasticsearch query to extract the data and then save it in JSON format to the `data` directory.
```
# news_json = r.get('http://localhost:9200/indice/doc/_search?sort=date:desc&size=4000').json()
# with open('./data/news.json', 'w', encoding='utf8') as fh:
# dump(news_json['hits']['hits'], fh)
# df = pd.io.json.json_normalize(news_json['hits']['hits'])
# df.to_json('./data/news.json')
df = pd.read_json('./data/news.json')
```
## Common issues that we generally face during the data preparation phase:
- Format and structure normalization
- Detect and fix missing values
- Duplicates removal
- Units normalization
- Constraints validations
- Anomaly detection and removal
- Study of features importance/relevance
- Dimensionality reduction, feature selection & extraction
```
df = df[['_source.body', '_source.date', '_source.subject', '_source.language', '_source.categories']]
df.columns = ['body', 'pubdate', 'subject', 'language', 'categories']
df.drop_duplicates(inplace=True)
df.head(1).T.style
df = df.loc[(df['categories'] != 'News') &
(df['categories'] != 'articles 2015') &
(df['categories'] != 'frontpage') &
(df['categories'] != 'English') &
(df['categories'] != 'Comment') &
(df['categories'] != 'Uncategorized') &
(df['language'] == 'English')]
df['categories'] = df['categories'].str.replace(r'[^a-zA-Z_, ]+', '').replace(', ', '')
df['categories'] = df['categories'].str.replace(r'^, ', '')
df.groupby(['categories']).agg({'count'}).drop_duplicates()
df['cat_id'] = df['categories'].factorize()[0]
df['lang_id'] = df['language'].factorize()[0]
df['char_count'] = df['body'].apply(len)
df['word_count'] = df['body'].apply(lambda x: len(x.split()))
df['word_density'] = df['char_count'] / (df['word_count']+1)
df.shape
sns.set()
sns.pairplot(df, height=3.5, kind="reg", palette="husl", diag_kind="auto")
xtrain, xtest, ytrain, ytest = train_test_split(df['body'], df['categories'], test_size=0.2, random_state=42)
tfidf = TfidfVectorizer(use_idf=False, sublinear_tf=True, min_df=5, norm='l2', encoding='latin-1', ngram_range=(1, 2), stop_words='english')
features = tfidf.fit_transform(df.body).toarray()
labels = df.cat_id
engines = [('PassiveAggressiveClassifier', PassiveAggressiveClassifier(fit_intercept=True, n_jobs=-1, random_state=0)),
('NearestCentroid', NearestCentroid()),
('RandomForestClassifier', RandomForestClassifier(min_samples_leaf=0.01))]
for name, engine in engines:
    clf = make_pipeline(tfidf, engine).fit(xtrain, ytrain)
    prediction = clf.predict(xtest)
    score = clf.score(xtest, ytest)  # score against the true labels, not the predictions
    with lzma.open('./data/{}.pickle.xz'.format(name.lower()), 'wb') as f:
        pickle.dump(clf, f, protocol=5)
s = '''
‘Guys, you’ve got to hear this,” I said. I was sitting in front of my computer one day in July 2012, with one eye on a screen of share prices and the other on a live stream of the House of Commons Treasury select committee hearings. As the Barclays share price took a graceful swan dive, I pulled my headphones out of the socket and turned up the volume so everyone could hear. My colleagues left their terminals and came around to watch BBC Parliament with me.
It didn’t take long to realise what was happening. “Bob’s getting murdered,” someone said.
Bob Diamond, the swashbuckling chief executive of Barclays, had been called before the committee to explain exactly what his bank had been playing at in regards to the Libor rate-fixing scandal. The day before his appearance, he had made things very much worse by seeming to accuse the deputy governor of the Bank of England of ordering him to fiddle an important benchmark, then walking back the accusation as soon as it was challenged. He was trying to turn on his legendary charm in front of a committee of angry MPs, and it wasn’t working. On our trading floor, in Mayfair, calls were coming in from all over the City. Investors needed to know what was happening and whether the damage was reparable.
A couple of weeks later, the damage was done. The money was gone, Diamond was out of a job and the market, as it always does, had moved on. We were left asking ourselves: How did we get it so wrong?
'''
result = []
for file in glob.glob('./data/*.pickle.xz'):
    clf = pickle.load(lzma.open('{}'.format(file), 'rb'))
    ypred = clf.predict([s])
    score = clf.score([s], ypred)  # no ground truth for s, so this scores against its own prediction
    print(file, ypred[0], score)
    result.append(ypred[0])
print(pd.io.json.dumps(Counter(result), indent=4))
```
| github_jupyter |
```
import time
import numpy as np
np.random.seed(1234)
from functools import reduce
import scipy.io
from scipy.interpolate import griddata
from sklearn.preprocessing import scale
# from utils import augment_EEG, cart2sph, pol2cart
######### import DNN for training using GPUs #########
from keras.utils.training_utils import multi_gpu_model
######### import DNN frameworks #########
import tensorflow as tf
import keras
# import high level optimizers, models and layers
from keras.optimizers import SGD
from keras.models import Sequential
from keras.layers import InputLayer
# for CNN
from keras.layers import Conv2D, MaxPooling2D
# for RNN
from keras.layers import LSTM
# for different layer functionality
from keras.layers import Dense, Dropout, Flatten
# utility functionality for keras
from keras.preprocessing import sequence
from keras.layers.embeddings import Embedding
# from keras import backend as K
```
# 1. Import the Necessary Data
Here, we can import the MNIST or IMDB dataset as a proof of concept. We also provide code for importing iEEG recording data and for transforming it into input that can be fed to the DNN models built in section 3.
```
from keras.datasets import imdb
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print(len(mnist))
print(type(mnist))
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
# import raw data
# perform signal processing - FFT
# save data
# load back in data and augment dataset
```
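As a concrete version of the FFT step sketched in the comments above, here is a minimal band-power extraction for one window of raw signal. The sampling rate and band edges are illustrative assumptions, not values from this project.
```
# Average spectral power per frequency band for one 1-D window of samples.
def band_powers(window, fs=1000.0, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])

powers = band_powers(np.random.randn(2000))  # stand-in for one electrode's window
print(powers.shape)  # one value per band, mapping to the n_colors channels used later
```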
# 2. Preprocess Data
Here, we preprocess the data by producing the final set of images to feed into the DNN model.
We first augment the dataset by applying transformations that the model should be invariant to (e.g. rotation, translation); a sketch of such a step is shown below.
Then we mesh the data to fill in any missing values.
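A minimal sketch of such an augmentation step, assuming image-like inputs of shape (N, H, W, C); random flips and small circular shifts stand in for the full set of transformations.
```
def augment(images, max_shift=3, seed=0):
    # images: (N, H, W, C) array; returns a randomly flipped and shifted copy
    rng = np.random.RandomState(seed)
    out = images.copy()
    for i in range(len(out)):
        if rng.rand() < 0.5:
            out[i] = out[i][:, ::-1, :]                  # horizontal flip
        dy, dx = rng.randint(-max_shift, max_shift + 1, size=2)
        out[i] = np.roll(out[i], (dy, dx), axis=(0, 1))  # small translation
    return out

batch = augment(np.random.randn(8, 32, 32, 4))  # imsize=32, n_colors=4, as in section 3
print(batch.shape)
```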
# 3. Build DNN Model
Here, we build the DNN model that will need to be trained. It consists of a CNN-RNN model: a VGG-style CNN, with an LSTM used for the RNN.
Together these can efficiently learn spatial patterns in the heatmaps fed in, while the recurrent part learns complex timing behavior.
```
from ieeg_cnn_rnn import IEEGdnn
imsize=32 # the imsize dimension
n_colors=4 # the number of frequency bands we use; these correspond to the image channels
###### CNN Parameters #######
n_layers = (4,2,1) # the number of layers of convolution
poolsize=(2,2) # the size of the pooling done in 2D
n_outunits = 2 # the size of the output of the model (# classes)
n_fcunits = 1024 # the size of the fully connected layer at output
##### Optimizer Parameters #######
loss='categorical_crossentropy'
ADAM = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
metrics = ['accuracy']
# initialize the ieeg dnn model
ieegdnn = IEEGdnn(imsize, n_colors)
ieegdnn.build_cnn(w_init=None, n_layers=n_layers,poolsize=(2,2))
ieegdnn.build_output(n_outunits=n_outunits, n_fcunits=n_fcunits)
print(ieegdnn.model.output)
# ieegdnn.compile_model(loss=loss, optimizer=ADAM, metrics=metrics)
display(ieegdnn.model_config)
####### RNN Parameters ######
num_units = 128
grad_clipping = 110
nonlinearity = keras.activations.tanh
ieegdnn.build_rnn(num_units=num_units, grad_clipping=grad_clipping, nonlinearity=nonlinearity)
```
# 4. Train Model and Test
Here, we run the training on GPU(s), record the total training time, and visualize the output produced; a sketch of this step follows.
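This section has no code in the notebook; below is a minimal sketch of the timed training step, assuming the compiled model from section 3 accepts heatmap tensors of shape (N, imsize, imsize, n_colors). The random placeholder data only exercises the loop; substitute the real augmented dataset.
```
# Placeholder batch shaped like the heatmap inputs described in section 2.
X_heat = np.random.randn(64, imsize, imsize, n_colors).astype('float32')
y_heat = keras.utils.to_categorical(np.random.randint(0, n_outunits, 64), n_outunits)

ieegdnn.compile_model(loss=loss, optimizer=ADAM, metrics=metrics)

start = time.time()
history = ieegdnn.model.fit(X_heat, y_heat, batch_size=16, epochs=5, verbose=1)
print("Training took %.1f seconds" % (time.time() - start))
```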
| github_jupyter |
# 2018.10.27: Multiple states: Time series
## incremental update
```
import sys,os
import numpy as np
from scipy import linalg
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
%matplotlib inline
# setting parameter:
np.random.seed(1)
n = 10 # number of positions
m = 3 # number of values at each position
l = 2*((n*m)**2) # number of samples
g = 1.
def itab(n,m):
    i1 = np.zeros(n)
    i2 = np.zeros(n)
    for i in range(n):
        i1[i] = i*m
        i2[i] = (i+1)*m
    return i1.astype(int), i2.astype(int)

i1tab,i2tab = itab(n,m)
# generate coupling matrix w0:
def generate_coupling(n,m,g):
    nm = n*m
    w = np.random.normal(0.0,g/np.sqrt(nm),size=(nm,nm))
    for i in range(n):
        i1,i2 = i1tab[i],i2tab[i]
        w[i1:i2,:] -= w[i1:i2,:].mean(axis=0)
    for i in range(n):
        i1,i2 = i1tab[i],i2tab[i]
        w[:,i1:i2] -= w[:,i1:i2].mean(axis=1)[:,np.newaxis]
    return w

w0 = generate_coupling(n,m,g)
"""
plt.figure(figsize=(3,3))
plt.title('actual coupling matrix')
plt.imshow(w0,cmap='rainbow',origin='lower')
plt.xlabel('j')
plt.ylabel('i')
plt.clim(-0.3,0.3)
plt.colorbar(fraction=0.045, pad=0.05,ticks=[-0.3,0,0.3])
plt.show()
"""
# 2018.10.27: generate time series by MCMC
def generate_sequences_MCMC(w,n,m,l):
#print(i1tab,i2tab)
# initial s (categorical variables)
s_ini = np.random.randint(0,m,size=(l,n)) # integer values
#print(s_ini)
# onehot encoder
enc = OneHotEncoder(n_values=m)
s = enc.fit_transform(s_ini).toarray()
#print(s)
ntrial = 100
for t in range(l-1):
h = np.sum(s[t,:]*w[:,:],axis=1)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
k = np.random.randint(0,m)
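# Metropolis step: repeatedly propose a different value k2 and accept it with probability min(1, exp(h[i1+k2] - h[i1+k]))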
for itrial in range(ntrial):
k2 = np.random.randint(0,m)
while k2 == k:
k2 = np.random.randint(0,m)
if np.exp(h[i1+k2]- h[i1+k]) > np.random.rand():
k = k2
s[t+1,i1:i2] = 0.
s[t+1,i1+k] = 1.
return s
s = generate_sequences_MCMC(w0,n,m,l)
#print(s[:5])
def fit_increment1(s,n,m):
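# Infer w one position block at a time. Each loop iteration:
#   1) regress the auxiliary field h on the centered inputs: w = <dh ds> C^{-1}
#   2) recompute h = s[:-1] w^T and its per-block softmax p over the m states
#   3) nudge h toward the observed one-hot states: h += s_obs - p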
l = s.shape[0]
s_av = np.mean(s[:-1],axis=0)
ds = s[:-1] - s_av
c = np.cov(ds,rowvar=False,bias=True)
#print(c)
c_inv = linalg.pinv(c,rcond=1e-15)
#print(c_inv)
nm = n*m
wini = np.random.normal(0.0,g/np.sqrt(nm),size=(nm,nm))
#print(w)
nloop = 100
w_infer = np.zeros((nm,nm))
for i in range(n):
#print(i)
i1,i2 = i1tab[i],i2tab[i]
#s1 = np.copy(s[1:,i1:i2])
w = wini[i1:i2,:]
h = s[1:,i1:i2]
for iloop in range(nloop):
h_av = h.mean(axis=0)
dh = h - h_av
dhds = dh[:,:,np.newaxis]*ds[:,np.newaxis,:]
dhds_av = dhds.mean(axis=0)
w = np.dot(dhds_av,c_inv)
#w = w - w.mean(axis=0)
h = np.dot(s[:-1],w.T)
p = np.exp(h)
p_sum = p.sum(axis=1)
for k in range(m):
p[:,k] = p[:,k]/p_sum[:]
h += s[1:,i1:i2] - p
w_infer[i1:i2,:] = w
return w_infer
def fit_increment2(s,n,m):
l = s.shape[0]
s_av = np.mean(s[:-1],axis=0)
ds = s[:-1] - s_av
c = np.cov(ds,rowvar=False,bias=True)
#print(c)
c_inv = linalg.pinv(c,rcond=1e-15)
#print(c_inv)
nm = n*m
wini = np.random.normal(0.0,g/np.sqrt(nm),size=(nm,nm))
#print(w)
nloop = 10
w_infer = np.zeros((nm,nm))
p_obs = np.zeros(l-1)
for i in range(n):
#print(i)
i1,i2 = i1tab[i],i2tab[i]
#s1 = np.copy(s[1:,i1:i2])
iobs = np.argmax(s[1:,i1:i2],axis=1)
w = wini[i1:i2,:].copy()
#h = np.dot(s[:-1,:],w.T)
h = (s[1:,i1:i2]).copy()
for iloop in range(nloop):
#h = np.dot(s[:-1],w.T)
p = np.exp(h)
p_sum = p.sum(axis=1)
for k in range(m):
p[:,k] = p[:,k]/p_sum[:]
for t in range(l-1):
p_obs[t] = p[t,iobs[t]]
#mse = ((w0[i1:i2,:]-w)**2).mean()
#cost = ((1.-p_obs)**2).mean()
#print(iloop,mse,cost)
# update h: multiplicative
#for k in range(m):
# h[:,k] *= 1./p_obs[:]
# update h: incremental
h += s[1:,i1:i2] - p
h_av = h.mean(axis=0)
dh = h - h_av
dhds = dh[:,:,np.newaxis]*ds[:,np.newaxis,:]
dhds_av = dhds.mean(axis=0)
w = np.dot(dhds_av,c_inv)
h = np.dot(s[:-1],w.T)
#w = w - w.mean(axis=0)
w_infer[i1:i2,:] = w
return w_infer
#plt.scatter(w0,w_infer)
#plt.plot([-0.3,0.3],[-0.3,0.3],'r--')
w = fit_increment1(s,n,m)
plt.scatter(w0,w)
plt.plot([-0.3,0.3],[-0.3,0.3],'r--')
mse = ((w0-w)**2).mean()
slope = (w0*w).sum()/(w0**2).sum()
print(mse,slope)
```
| github_jupyter |
```
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_excel(r'C:\Users\kundi\Moji_radovi\MVanalysis\datasetup\MV_DataFrame.xlsx')
df['Sat'] = df['Uplaćeno'].astype(str).str.slice(-8,-6)
df['Datum'] = df['Uplaćeno'].astype(str).str.slice(-19,-13)
df.info()
df
df.drop(columns = ['Uplaćeno'], inplace = True)
akontacije = df.loc[df['Opis'] == 'Akontacija platomat']
uplate = df.loc[df['Opis'] != 'Akontacija platomat']
akontacije
uplate
akontacije.describe()
uplate.describe()
veće_od_50lp = akontacije.loc[akontacije['Uplata'] > 0.51]
veće_od_50lp
print('Number of non-conformities:', int(veće_od_50lp['Partner'].count()))
print('Number of advance payments:', int(akontacije['Partner'].count()))
nesukladnosti = int(veće_od_50lp['Partner'].count()) / int(akontacije['Partner'].count())
print('Percentage of detected non-conformities:', (nesukladnosti * 100), '%')
plt.boxplot(akontacije['Uplata'])
plt.grid()
plt.boxplot(uplate['Uplata'])
plt.grid()
df.columns
akontacije
dani = df['Datum'].unique()
dani = sorted(dani, key = lambda x: x.split('.')[1])
dani
uplate_po_danu = uplate.groupby('Datum').sum()
uplate_po_danu
akontacije_po_danu = akontacije.groupby('Datum').sum()
akontacije_po_danu
fig = plt.figure(figsize=(25,5))
fig.suptitle('Amounts paid over time', fontsize = 20, weight = 'bold')
plt.plot(df['Uplata'])
plt.axhline(y = 122.11, color = 'k', linestyle = 'solid')
plt.xlabel('Number of payments', fontsize = 12, weight = 'semibold')
plt.ylabel('Payment amount', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Histogram of payment values', fontsize = 20, weight = 'bold')
plt.hist(df['Uplata'], bins=10, ec = 'm')
plt.xlabel('Payment amount', fontsize = 12, weight = 'semibold')
plt.ylabel('Number of payments', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Occurrence of non-conformities (values above the dashed line)', fontsize = 20, weight = 'bold')
plt.plot(akontacije['Uplata'])
plt.axhline(y = 0.51, color = 'k', linestyle = 'dashed')
plt.xlabel('Number of payments', fontsize = 12, weight = 'semibold')
plt.ylabel('Advance payment amount', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
uplate.info()
sati = df['Sat'].unique()
sati.sort()
uplate_po_satu = uplate.groupby('Sat').sum()
uplate_po_satu
fig = plt.figure(figsize=(25,5))
fig.suptitle('Payments by hour', fontsize = 20, weight = 'bold')
plt.bar(sati, uplate_po_satu['Uplata'])
plt.xlabel('Hour', fontsize = 12, weight = 'semibold')
plt.ylabel('Sum of payments', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
akontacije_po_satu = akontacije.groupby('Sat').sum()
akontacije_po_satu_zbroj = akontacije.groupby('Sat').count()
akontacije_po_satu
akontacije_po_satu_zbroj
fig = plt.figure(figsize=(25,5))
fig.suptitle('Advance payments by hour (HRK)', fontsize = 20, weight = 'bold')
plt.bar(sati, akontacije_po_satu['Uplata'])
plt.xlabel('Hour', fontsize = 12, weight = 'semibold')
plt.ylabel('Sum of advance payments', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Number of advance payments made, by hour', fontsize = 20, weight = 'bold')
plt.bar(sati, akontacije_po_satu_zbroj['Uplata'])
plt.xlabel('Hour', fontsize = 12, weight = 'semibold')
plt.ylabel('Number of advance payments', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Value of advance payments by day (HRK)', fontsize = 20, weight = 'bold')
plt.bar(dani, akontacije_po_danu['Uplata'])
plt.xlabel('Day', fontsize = 12, weight = 'semibold')
plt.ylabel('Sum of advance payments', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Value of payments by day (HRK)', fontsize = 20, weight = 'bold')
plt.bar(dani, uplate_po_danu['Uplata'])
plt.xlabel('Day', fontsize = 12, weight = 'semibold')
plt.ylabel('Sum of payments', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
veće_od_50lp_po_danu_zbroj = veće_od_50lp.groupby('Datum').sum()
veće_od_50lp_po_danu = veće_od_50lp.groupby('Datum').count()
veće_od_50lp_po_danu
veće_od_50lp_po_danu_zbroj
dani_nesukladnosti = veće_od_50lp['Datum'].unique()
dani_nesukladnosti.sort()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Direct cost of non-conformities per day (HRK)', fontsize = 20, weight = 'bold')
plt.bar(dani_nesukladnosti , veće_od_50lp_po_danu_zbroj['Uplata'])
plt.xlabel('Day', fontsize = 12, weight = 'semibold')
plt.ylabel('Cost of non-conformities', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Number of non-conformities per day of the month', fontsize = 20, weight = 'bold')
plt.bar(dani_nesukladnosti, veće_od_50lp_po_danu['Uplata'])
plt.xlabel('Day', fontsize = 12, weight = 'semibold')
plt.ylabel('Number of non-conformities', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
```
| github_jupyter |
# Introduction
- Edited from nb45
- Run ExtraTreesRegressor regression, informed by the results of nb50
# Import everything I need :)
```
import warnings
warnings.filterwarnings('ignore')
import time
import multiprocessing
import glob
import gc
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as go
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error
from sklearn.ensemble import ExtraTreesRegressor, AdaBoostRegressor, RandomForestRegressor
from fastprogress import progress_bar
```
# Preparation
```
nb = 53
isSmallSet = False
length = 200000
model_name = 'extra_trees_regressor'
pd.set_option('display.max_columns', 200)
# use atomic numbers to recode atomic names
ATOMIC_NUMBERS = {
'H': 1,
'C': 6,
'N': 7,
'O': 8,
'F': 9
}
file_path = '../input/champs-scalar-coupling/'
glob.glob(file_path + '*')
# train
path = file_path + 'train.csv'
if isSmallSet:
train = pd.read_csv(path) [:length]
else:
train = pd.read_csv(path)
# test
path = file_path + 'test.csv'
if isSmallSet:
test = pd.read_csv(path)[:length]
else:
test = pd.read_csv(path)
# structure
path = file_path + 'structures.csv'
structures = pd.read_csv(path)
# fc_train
path = file_path + 'nb47_fc_train.csv'
if isSmallSet:
fc_train = pd.read_csv(path)[:length]
else:
fc_train = pd.read_csv(path)
# fc_test
path = file_path + 'nb47_fc_test.csv'
if isSmallSet:
fc_test = pd.read_csv(path)[:length]
else:
fc_test = pd.read_csv(path)
# train dist-interact
path = file_path + 'nb33_train_dist-interaction.csv'
if isSmallSet:
dist_interact_train = pd.read_csv(path)[:length]
else:
dist_interact_train = pd.read_csv(path)
# test dist-interact
path = file_path + 'nb33_test_dist-interaction.csv'
if isSmallSet:
dist_interact_test = pd.read_csv(path)[:length]
else:
dist_interact_test = pd.read_csv(path)
# ob charge train
path = file_path + 'train_ob_charges_V7EstimatioofMullikenChargeswithOpenBabel.csv'
if isSmallSet:
ob_charge_train = pd.read_csv(path)[:length].drop(['Unnamed: 0', 'error'], axis=1)
else:
ob_charge_train = pd.read_csv(path).drop(['Unnamed: 0', 'error'], axis=1)
# ob charge test
path = file_path + 'test_ob_charges_V7EstimatioofMullikenChargeswithOpenBabel.csv'
if isSmallSet:
ob_charge_test = pd.read_csv(path)[:length].drop(['Unnamed: 0', 'error'], axis=1)
else:
ob_charge_test = pd.read_csv(path).drop(['Unnamed: 0', 'error'], axis=1)
len(test), len(fc_test)
len(train), len(fc_train)
if isSmallSet:
print('using SmallSet !!')
print('-------------------')
print(f'There are {train.shape[0]} rows in train data.')
print(f'There are {test.shape[0]} rows in test data.')
print(f"There are {train['molecule_name'].nunique()} distinct molecules in train data.")
print(f"There are {test['molecule_name'].nunique()} distinct molecules in test data.")
print(f"There are {train['atom_index_0'].nunique()} unique atoms.")
print(f"There are {train['type'].nunique()} unique types.")
```
---
## myFunc
**metrics**
```
def kaggle_metric(df, preds):
df["prediction"] = preds
maes = []
for t in df.type.unique():
y_true = df[df.type==t].scalar_coupling_constant.values
y_pred = df[df.type==t].prediction.values
mae = np.log(mean_absolute_error(y_true, y_pred))
maes.append(mae)
return np.mean(maes)
```
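As a quick sanity check, the metric can be exercised on a made-up frame (the values below are purely illustrative):
```
toy = pd.DataFrame({'type': ['1JHC', '1JHC', '2JHH'],
                    'scalar_coupling_constant': [90.0, 85.0, -10.0]})
toy_preds = np.array([92.0, 84.0, -9.5])
# mean over coupling types of log(MAE within each type)
print(kaggle_metric(toy, toy_preds))
```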
---
**memory**
```
def reduce_mem_usage(df, verbose=True):
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
c_prec = df[col].apply(lambda x: np.finfo(x).precision).max()
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max and c_prec == np.finfo(np.float16).precision:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max and c_prec == np.finfo(np.float32).precision:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
return df
```
# Feature Engineering
Build Distance Dataset
```
def build_type_dataframes(base, structures, coupling_type):
base = base[base['type'] == coupling_type].drop('type', axis=1).copy()
base = base.reset_index()
base['id'] = base['id'].astype('int32')
structures = structures[structures['molecule_name'].isin(base['molecule_name'])]
return base, structures
# a,b = build_type_dataframes(train, structures, '1JHN')
def add_coordinates(base, structures, index):
df = pd.merge(base, structures, how='inner',
left_on=['molecule_name', f'atom_index_{index}'],
right_on=['molecule_name', 'atom_index']).drop(['atom_index'], axis=1)
df = df.rename(columns={
'atom': f'atom_{index}',
'x': f'x_{index}',
'y': f'y_{index}',
'z': f'z_{index}'
})
return df
def add_atoms(base, atoms):
df = pd.merge(base, atoms, how='inner',
on=['molecule_name', 'atom_index_0', 'atom_index_1'])
return df
def merge_all_atoms(base, structures):
df = pd.merge(base, structures, how='left',
left_on=['molecule_name'],
right_on=['molecule_name'])
df = df[(df.atom_index_0 != df.atom_index) & (df.atom_index_1 != df.atom_index)]
return df
def add_center(df):
df['x_c'] = ((df['x_1'] + df['x_0']) * np.float32(0.5))
df['y_c'] = ((df['y_1'] + df['y_0']) * np.float32(0.5))
df['z_c'] = ((df['z_1'] + df['z_0']) * np.float32(0.5))
def add_distance_to_center(df):
df['d_c'] = ((
(df['x_c'] - df['x'])**np.float32(2) +
(df['y_c'] - df['y'])**np.float32(2) +
(df['z_c'] - df['z'])**np.float32(2)
)**np.float32(0.5))
def add_distance_between(df, suffix1, suffix2):
df[f'd_{suffix1}_{suffix2}'] = ((
(df[f'x_{suffix1}'] - df[f'x_{suffix2}'])**np.float32(2) +
(df[f'y_{suffix1}'] - df[f'y_{suffix2}'])**np.float32(2) +
(df[f'z_{suffix1}'] - df[f'z_{suffix2}'])**np.float32(2)
)**np.float32(0.5))
def add_distances(df):
n_atoms = 1 + max([int(c.split('_')[1]) for c in df.columns if c.startswith('x_')])
for i in range(1, n_atoms):
for vi in range(min(4, i)):
add_distance_between(df, i, vi)
def add_n_atoms(base, structures):
dfs = structures['molecule_name'].value_counts().rename('n_atoms').to_frame()
return pd.merge(base, dfs, left_on='molecule_name', right_index=True)
def build_couple_dataframe(some_csv, structures_csv, coupling_type, n_atoms=10):
base, structures = build_type_dataframes(some_csv, structures_csv, coupling_type)
base = add_coordinates(base, structures, 0)
base = add_coordinates(base, structures, 1)
base = base.drop(['atom_0', 'atom_1'], axis=1)
atoms = base.drop('id', axis=1).copy()
if 'scalar_coupling_constant' in some_csv:
atoms = atoms.drop(['scalar_coupling_constant'], axis=1)
add_center(atoms)
atoms = atoms.drop(['x_0', 'y_0', 'z_0', 'x_1', 'y_1', 'z_1'], axis=1)
atoms = merge_all_atoms(atoms, structures)
add_distance_to_center(atoms)
atoms = atoms.drop(['x_c', 'y_c', 'z_c', 'atom_index'], axis=1)
atoms.sort_values(['molecule_name', 'atom_index_0', 'atom_index_1', 'd_c'], inplace=True)
atom_groups = atoms.groupby(['molecule_name', 'atom_index_0', 'atom_index_1'])
atoms['num'] = atom_groups.cumcount() + 2
atoms = atoms.drop(['d_c'], axis=1)
atoms = atoms[atoms['num'] < n_atoms]
atoms = atoms.set_index(['molecule_name', 'atom_index_0', 'atom_index_1', 'num']).unstack()
atoms.columns = [f'{col[0]}_{col[1]}' for col in atoms.columns]
atoms = atoms.reset_index()
# # downcast back to int8
for col in atoms.columns:
if col.startswith('atom_'):
atoms[col] = atoms[col].fillna(0).astype('int8')
# atoms['molecule_name'] = atoms['molecule_name'].astype('int32')
full = add_atoms(base, atoms)
add_distances(full)
full.sort_values('id', inplace=True)
return full
def take_n_atoms(df, n_atoms, four_start=4):
labels = ['id', 'molecule_name', 'atom_index_1', 'atom_index_0']
for i in range(2, n_atoms):
label = f'atom_{i}'
labels.append(label)
for i in range(n_atoms):
num = min(i, 4) if i < four_start else 4
for j in range(num):
labels.append(f'd_{i}_{j}')
if 'scalar_coupling_constant' in df:
labels.append('scalar_coupling_constant')
return df[labels]
atoms = structures['atom'].values
types_train = train['type'].values
types_test = test['type'].values
structures['atom'] = structures['atom'].replace(ATOMIC_NUMBERS).astype('int8')
fulls_train = []
fulls_test = []
for type_ in progress_bar(train['type'].unique()):
full_train = build_couple_dataframe(train, structures, type_, n_atoms=10)
full_test = build_couple_dataframe(test, structures, type_, n_atoms=10)
full_train = take_n_atoms(full_train, 10)
full_test = take_n_atoms(full_test, 10)
fulls_train.append(full_train)
fulls_test.append(full_test)
structures['atom'] = atoms
train = pd.concat(fulls_train).sort_values(by=['id']) #, axis=0)
test = pd.concat(fulls_test).sort_values(by=['id']) #, axis=0)
train['type'] = types_train
test['type'] = types_test
train = train.fillna(0)
test = test.fillna(0)
```
<br>
<br>
dist-interact
```
train['dist_interact'] = dist_interact_train.values
test['dist_interact'] = dist_interact_test.values
```
<br>
<br>
basic
```
def map_atom_info(df_1,df_2, atom_idx):
df = pd.merge(df_1, df_2, how = 'left',
left_on = ['molecule_name', f'atom_index_{atom_idx}'],
right_on = ['molecule_name', 'atom_index'])
df = df.drop('atom_index', axis=1)
return df
# structure and ob_charges
ob_charge = pd.concat([ob_charge_train, ob_charge_test])
merge = pd.merge(ob_charge, structures, how='left',
left_on = ['molecule_name', 'atom_index'],
right_on = ['molecule_name', 'atom_index'])
for atom_idx in [0,1]:
train = map_atom_info(train, merge, atom_idx)
test = map_atom_info(test, merge, atom_idx)
train = train.rename(columns={
'atom': f'atom_{atom_idx}',
'x': f'x_{atom_idx}',
'y': f'y_{atom_idx}',
'z': f'z_{atom_idx}',
'eem': f'eem_{atom_idx}',
'mmff94': f'mmff94_{atom_idx}',
'gasteiger': f'gasteiger_{atom_idx}',
'qeq': f'qeq_{atom_idx}',
'qtpie': f'qtpie_{atom_idx}',
'eem2015ha': f'eem2015ha_{atom_idx}',
'eem2015hm': f'eem2015hm_{atom_idx}',
'eem2015hn': f'eem2015hn_{atom_idx}',
'eem2015ba': f'eem2015ba_{atom_idx}',
'eem2015bm': f'eem2015bm_{atom_idx}',
'eem2015bn': f'eem2015bn_{atom_idx}',})
test = test.rename(columns={
'atom': f'atom_{atom_idx}',
'x': f'x_{atom_idx}',
'y': f'y_{atom_idx}',
'z': f'z_{atom_idx}',
'eem': f'eem_{atom_idx}',
'mmff94': f'mmff94_{atom_idx}',
'gasteiger': f'gasteiger_{atom_idx}',
'qeq': f'qeq_{atom_idx}',
'qtpie': f'qtpie_{atom_idx}',
'eem2015ha': f'eem2015ha_{atom_idx}',
'eem2015hm': f'eem2015hm_{atom_idx}',
'eem2015hn': f'eem2015hn_{atom_idx}',
'eem2015ba': f'eem2015ba_{atom_idx}',
'eem2015bm': f'eem2015bm_{atom_idx}',
'eem2015bn': f'eem2015bn_{atom_idx}'})
# test = test.rename(columns={'atom': f'atom_{atom_idx}',
# 'x': f'x_{atom_idx}',
# 'y': f'y_{atom_idx}',
# 'z': f'z_{atom_idx}'})
# ob_charges
# train = map_atom_info(train, ob_charge_train, 0)
# test = map_atom_info(test, ob_charge_test, 0)
# train = map_atom_info(train, ob_charge_train, 1)
# test = map_atom_info(test, ob_charge_test, 1)
```
<br>
<br>
type0
```
def create_type0(df):
df['type_0'] = df['type'].apply(lambda x : x[0])
return df
# train['type_0'] = train['type'].apply(lambda x: x[0])
# test['type_0'] = test['type'].apply(lambda x: x[0])
```
<br>
<br>
distances
```
def distances(df):
df_p_0 = df[['x_0', 'y_0', 'z_0']].values
df_p_1 = df[['x_1', 'y_1', 'z_1']].values
df['dist'] = np.linalg.norm(df_p_0 - df_p_1, axis=1)
df['dist_x'] = (df['x_0'] - df['x_1']) ** 2
df['dist_y'] = (df['y_0'] - df['y_1']) ** 2
df['dist_z'] = (df['z_0'] - df['z_1']) ** 2
return df
# train = distances(train)
# test = distances(test)
```
<br>
<br>
Aggregate statistics
```
def create_features(df):
df['molecule_couples'] = df.groupby('molecule_name')['id'].transform('count')
df['molecule_dist_mean'] = df.groupby('molecule_name')['dist'].transform('mean')
df['molecule_dist_min'] = df.groupby('molecule_name')['dist'].transform('min')
df['molecule_dist_max'] = df.groupby('molecule_name')['dist'].transform('max')
df['atom_0_couples_count'] = df.groupby(['molecule_name', 'atom_index_0'])['id'].transform('count')
df['atom_1_couples_count'] = df.groupby(['molecule_name', 'atom_index_1'])['id'].transform('count')
df[f'molecule_atom_index_0_x_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['x_1'].transform('std')
df[f'molecule_atom_index_0_y_1_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('mean')
df[f'molecule_atom_index_0_y_1_mean_diff'] = df[f'molecule_atom_index_0_y_1_mean'] - df['y_1']
df[f'molecule_atom_index_0_y_1_mean_div'] = df[f'molecule_atom_index_0_y_1_mean'] / df['y_1']
df[f'molecule_atom_index_0_y_1_max'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('max')
df[f'molecule_atom_index_0_y_1_max_diff'] = df[f'molecule_atom_index_0_y_1_max'] - df['y_1']
df[f'molecule_atom_index_0_y_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('std')
df[f'molecule_atom_index_0_z_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['z_1'].transform('std')
df[f'molecule_atom_index_0_dist_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('mean')
df[f'molecule_atom_index_0_dist_mean_diff'] = df[f'molecule_atom_index_0_dist_mean'] - df['dist']
df[f'molecule_atom_index_0_dist_mean_div'] = df[f'molecule_atom_index_0_dist_mean'] / df['dist']
df[f'molecule_atom_index_0_dist_max'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('max')
df[f'molecule_atom_index_0_dist_max_diff'] = df[f'molecule_atom_index_0_dist_max'] - df['dist']
df[f'molecule_atom_index_0_dist_max_div'] = df[f'molecule_atom_index_0_dist_max'] / df['dist']
df[f'molecule_atom_index_0_dist_min'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('min')
df[f'molecule_atom_index_0_dist_min_diff'] = df[f'molecule_atom_index_0_dist_min'] - df['dist']
df[f'molecule_atom_index_0_dist_min_div'] = df[f'molecule_atom_index_0_dist_min'] / df['dist']
df[f'molecule_atom_index_0_dist_std'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('std')
df[f'molecule_atom_index_0_dist_std_diff'] = df[f'molecule_atom_index_0_dist_std'] - df['dist']
df[f'molecule_atom_index_0_dist_std_div'] = df[f'molecule_atom_index_0_dist_std'] / df['dist']
df[f'molecule_atom_index_1_dist_mean'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('mean')
df[f'molecule_atom_index_1_dist_mean_diff'] = df[f'molecule_atom_index_1_dist_mean'] - df['dist']
df[f'molecule_atom_index_1_dist_mean_div'] = df[f'molecule_atom_index_1_dist_mean'] / df['dist']
df[f'molecule_atom_index_1_dist_max'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('max')
df[f'molecule_atom_index_1_dist_max_diff'] = df[f'molecule_atom_index_1_dist_max'] - df['dist']
df[f'molecule_atom_index_1_dist_max_div'] = df[f'molecule_atom_index_1_dist_max'] / df['dist']
df[f'molecule_atom_index_1_dist_min'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('min')
df[f'molecule_atom_index_1_dist_min_diff'] = df[f'molecule_atom_index_1_dist_min'] - df['dist']
df[f'molecule_atom_index_1_dist_min_div'] = df[f'molecule_atom_index_1_dist_min'] / df['dist']
df[f'molecule_atom_index_1_dist_std'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('std')
df[f'molecule_atom_index_1_dist_std_diff'] = df[f'molecule_atom_index_1_dist_std'] - df['dist']
df[f'molecule_atom_index_1_dist_std_div'] = df[f'molecule_atom_index_1_dist_std'] / df['dist']
df[f'molecule_atom_1_dist_mean'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('mean')
df[f'molecule_atom_1_dist_min'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('min')
df[f'molecule_atom_1_dist_min_diff'] = df[f'molecule_atom_1_dist_min'] - df['dist']
df[f'molecule_atom_1_dist_min_div'] = df[f'molecule_atom_1_dist_min'] / df['dist']
df[f'molecule_atom_1_dist_std'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('std')
df[f'molecule_atom_1_dist_std_diff'] = df[f'molecule_atom_1_dist_std'] - df['dist']
df[f'molecule_type_0_dist_std'] = df.groupby(['molecule_name', 'type_0'])['dist'].transform('std')
df[f'molecule_type_0_dist_std_diff'] = df[f'molecule_type_0_dist_std'] - df['dist']
df[f'molecule_type_dist_mean'] = df.groupby(['molecule_name', 'type'])['dist'].transform('mean')
df[f'molecule_type_dist_mean_diff'] = df[f'molecule_type_dist_mean'] - df['dist']
df[f'molecule_type_dist_mean_div'] = df[f'molecule_type_dist_mean'] / df['dist']
df[f'molecule_type_dist_max'] = df.groupby(['molecule_name', 'type'])['dist'].transform('max')
df[f'molecule_type_dist_min'] = df.groupby(['molecule_name', 'type'])['dist'].transform('min')
df[f'molecule_type_dist_std'] = df.groupby(['molecule_name', 'type'])['dist'].transform('std')
df[f'molecule_type_dist_std_diff'] = df[f'molecule_type_dist_std'] - df['dist']
# fc
df[f'molecule_type_fc_max'] = df.groupby(['molecule_name', 'type'])['fc'].transform('max')
df[f'molecule_type_fc_min'] = df.groupby(['molecule_name', 'type'])['fc'].transform('min')
df[f'molecule_type_fc_std'] = df.groupby(['molecule_name', 'type'])['fc'].transform('std')
df[f'molecule_type_fc_std_diff'] = df[f'molecule_type_fc_std'] - df['fc']
return df
```
angle features
```
def map_atom_info(df_1,df_2, atom_idx):
df = pd.merge(df_1, df_2, how = 'left',
left_on = ['molecule_name', f'atom_index_{atom_idx}'],
right_on = ['molecule_name', 'atom_index'])
df = df.drop('atom_index', axis=1)
return df
def create_closest(df):
df_temp=df.loc[:,["molecule_name","atom_index_0","atom_index_1","dist","x_0","y_0","z_0","x_1","y_1","z_1"]].copy()
df_temp_=df_temp.copy()
df_temp_= df_temp_.rename(columns={'atom_index_0': 'atom_index_1',
'atom_index_1': 'atom_index_0',
'x_0': 'x_1',
'y_0': 'y_1',
'z_0': 'z_1',
'x_1': 'x_0',
'y_1': 'y_0',
'z_1': 'z_0'})
df_temp=pd.concat(objs=[df_temp,df_temp_],axis=0)
df_temp["min_distance"]=df_temp.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('min')
df_temp= df_temp[df_temp["min_distance"]==df_temp["dist"]]
df_temp=df_temp.drop(['x_0','y_0','z_0','min_distance', 'dist'], axis=1)
df_temp= df_temp.rename(columns={'atom_index_0': 'atom_index',
'atom_index_1': 'atom_index_closest',
'distance': 'distance_closest',
'x_1': 'x_closest',
'y_1': 'y_closest',
'z_1': 'z_closest'})
for atom_idx in [0,1]:
df = map_atom_info(df,df_temp, atom_idx)
df = df.rename(columns={'atom_index_closest': f'atom_index_closest_{atom_idx}',
'distance_closest': f'distance_closest_{atom_idx}',
'x_closest': f'x_closest_{atom_idx}',
'y_closest': f'y_closest_{atom_idx}',
'z_closest': f'z_closest_{atom_idx}'})
return df
def add_cos_features(df):
df["distance_0"]=((df['x_0']-df['x_closest_0'])**2+(df['y_0']-df['y_closest_0'])**2+(df['z_0']-df['z_closest_0'])**2)**(1/2)
df["distance_1"]=((df['x_1']-df['x_closest_1'])**2+(df['y_1']-df['y_closest_1'])**2+(df['z_1']-df['z_closest_1'])**2)**(1/2)
df["vec_0_x"]=(df['x_0']-df['x_closest_0'])/df["distance_0"]
df["vec_0_y"]=(df['y_0']-df['y_closest_0'])/df["distance_0"]
df["vec_0_z"]=(df['z_0']-df['z_closest_0'])/df["distance_0"]
df["vec_1_x"]=(df['x_1']-df['x_closest_1'])/df["distance_1"]
df["vec_1_y"]=(df['y_1']-df['y_closest_1'])/df["distance_1"]
df["vec_1_z"]=(df['z_1']-df['z_closest_1'])/df["distance_1"]
df["vec_x"]=(df['x_1']-df['x_0'])/df["dist"]
df["vec_y"]=(df['y_1']-df['y_0'])/df["dist"]
df["vec_z"]=(df['z_1']-df['z_0'])/df["dist"]
df["cos_0_1"]=df["vec_0_x"]*df["vec_1_x"]+df["vec_0_y"]*df["vec_1_y"]+df["vec_0_z"]*df["vec_1_z"]
df["cos_0"]=df["vec_0_x"]*df["vec_x"]+df["vec_0_y"]*df["vec_y"]+df["vec_0_z"]*df["vec_z"]
df["cos_1"]=df["vec_1_x"]*df["vec_x"]+df["vec_1_y"]*df["vec_y"]+df["vec_1_z"]*df["vec_z"]
df=df.drop(['vec_0_x','vec_0_y','vec_0_z','vec_1_x','vec_1_y','vec_1_z','vec_x','vec_y','vec_z'], axis=1)
return df
%%time
print('add fc')
print(len(train), len(test))
train['fc'] = fc_train.values
test['fc'] = fc_test.values
print('type0')
print(len(train), len(test))
train = create_type0(train)
test = create_type0(test)
print('distances')
print(len(train), len(test))
train = distances(train)
test = distances(test)
print('create_features')
print(len(train), len(test))
train = create_features(train)
test = create_features(test)
print('create_closest')
print(len(train), len(test))
train = create_closest(train)
test = create_closest(test)
train.drop_duplicates(inplace=True, subset=['id']) # for some reason the number of rows in train increases here, so drop duplicates
train = train.reset_index(drop=True)
print('add_cos_features')
print(len(train), len(test))
train = add_cos_features(train)
test = add_cos_features(test)
```
---
<br>
<br>
<br>
Drop features that contain NaN
```
drop_feats = train.columns[train.isnull().sum(axis=0) != 0].values
drop_feats
train = train.drop(drop_feats, axis=1)
test = test.drop(drop_feats, axis=1)
assert sum(train.isnull().sum(axis=0))==0, 'there are NaNs in train'
assert sum(test.isnull().sum(axis=0))==0, 'there are NaNs in test'
```
<br>
<br>
<br>
Encoding
```
cat_cols = ['atom_1']
num_cols = list(set(train.columns) - set(cat_cols) - set(['type', "scalar_coupling_constant", 'molecule_name', 'id',
'atom_0', 'atom_1','atom_2', 'atom_3', 'atom_4', 'atom_5', 'atom_6', 'atom_7', 'atom_8', 'atom_9']))
print(f'categorical: {cat_cols}')
print(f'numeric: {num_cols}')
```
<br>
<br>
LabelEncode
- `atom_1` = {H, C, N}
- `type_0` = {1, 2, 3}
- `type` = {2JHC, ...}
```
for f in ['type_0', 'type']:
if f in train.columns:
lbl = LabelEncoder()
lbl.fit(list(train[f].values) + list(test[f].values))
train[f] = lbl.transform(list(train[f].values))
test[f] = lbl.transform(list(test[f].values))
```
<br>
<br>
<br>
one hot encoding
```
train = pd.get_dummies(train, columns=cat_cols)
test = pd.get_dummies(test, columns=cat_cols)
```
<br>
<br>
<br>
Standardization
```
scaler = StandardScaler()
train[num_cols] = scaler.fit_transform(train[num_cols])
test[num_cols] = scaler.transform(test[num_cols])
```
<br>
<br>
---
**show features**
```
train.head(2)
print(train.columns)
```
# create train, test data
```
y = train['scalar_coupling_constant']
train = train.drop(['id', 'molecule_name', 'atom_0', 'scalar_coupling_constant'], axis=1)
test = test.drop(['id', 'molecule_name', 'atom_0'], axis=1)
train = reduce_mem_usage(train)
test = reduce_mem_usage(test)
X = train.copy()
X_test = test.copy()
assert len(X.columns) == len(X_test.columns), f'X and X_test have different widths. X: {len(X.columns)}, X_test: {len(X_test.columns)}'
del train, test, full_train, full_test
gc.collect()
```
# Training model
**params**
```
# Configuration
model_params = {'n_estimators': 300,
'max_depth': 50,
'n_jobs': 30}
n_folds = 6
folds = KFold(n_splits=n_folds, shuffle=True)
def train_model(X, X_test, y, folds, model_params):
model = ExtraTreesRegressor(**model_params) # swap the regressor here to try other models
scores = []
oof = np.zeros(len(X)) # out-of-fold predictions on the train data
prediction = np.zeros(len(X_test)) # test predictions, averaged over folds
result_dict = {}
for fold_n, (train_idx, valid_idx) in enumerate(folds.split(X)):
print(f'Fold {fold_n + 1} started at {time.ctime()}')
model.fit(X.iloc[train_idx, :], y[train_idx])
y_valid_pred = model.predict(X.iloc[valid_idx, :])
prediction += model.predict(X_test) / folds.n_splits # average the fold models' test predictions
oof[valid_idx] = y_valid_pred
score = mean_absolute_error(y[valid_idx], y_valid_pred)
scores.append(score)
print(f'fold {fold_n+1} mae: {score :.5f}')
print('')
print('CV mean score: {0:.4f}, std: {1:.4f}.'.format(np.mean(scores), np.std(scores)))
print('')
result_dict['oof'] = oof
result_dict['prediction'] = prediction
result_dict['scores'] = scores
return result_dict
%%time
# train a separate model for each coupling type
X_short = pd.DataFrame({'ind': list(X.index), 'type': X['type'].values, 'oof': [0] * len(X), 'target': y.values})
X_short_test = pd.DataFrame({'ind': list(X_test.index), 'type': X_test['type'].values, 'prediction': [0] * len(X_test)})
for t in X['type'].unique():
print('*'*80)
print(f'Training of type {t}')
print('*'*80)
X_t = X.loc[X['type'] == t]
X_test_t = X_test.loc[X_test['type'] == t]
y_t = X_short.loc[X_short['type'] == t, 'target'].values
result_dict = train_model(X_t, X_test_t, y_t, folds, model_params)
X_short.loc[X_short['type'] == t, 'oof'] = result_dict['oof']
X_short_test.loc[X_short_test['type'] == t, 'prediction'] = result_dict['prediction']
print('')
print('===== finish =====')
X['scalar_coupling_constant'] = y
metric = kaggle_metric(X, X_short['oof'])
X = X.drop(['scalar_coupling_constant', 'prediction'], axis=1)
print('CV mean score(group log mae): {0:.4f}'.format(metric))
prediction = X_short_test['prediction']
```
# Save
**submission**
```
# path_submission = './output/' + 'nb{}_submission_lgb_{}.csv'.format(nb, metric)
path_submission = f'../output/nb{nb}_submission_{model_name}_{metric :.5f}.csv'
print(f'save path: {path_submission}')
submission = pd.read_csv('../input/champs-scalar-coupling/sample_submission.csv')
# submission = pd.read_csv('./input/champs-scalar-coupling/sample_submission.csv')[::100]
submission['scalar_coupling_constant'] = prediction
submission.to_csv(path_submission, index=False)
```
---
**result**
```
path_oof = f'../output/nb{nb}_oof_{model_name}_{metric :.5f}.csv'
print(f'save path: {path_oof}')
oof = pd.DataFrame(result_dict['oof'])
oof.to_csv(path_oof, index=False)
```
# analysis
```
plot_data = pd.DataFrame(y)
plot_data.index.name = 'id'
plot_data['yhat'] = X_short['oof']
plot_data['type'] = lbl.inverse_transform(X['type'])
def plot_oof_preds(ctype, llim, ulim):
plt.figure(figsize=(6,6))
sns.scatterplot(x='scalar_coupling_constant',y='yhat',
data=plot_data.loc[plot_data['type']==ctype,
['scalar_coupling_constant', 'yhat']]);
plt.xlim((llim, ulim))
plt.ylim((llim, ulim))
plt.plot([llim, ulim], [llim, ulim])
plt.xlabel('scalar_coupling_constant')
plt.ylabel('predicted')
plt.title(f'{ctype}', fontsize=18)
plt.show()
plot_oof_preds('1JHC', 0, 250)
plot_oof_preds('1JHN', 0, 100)
plot_oof_preds('2JHC', -50, 50)
plot_oof_preds('2JHH', -50, 50)
plot_oof_preds('2JHN', -25, 25)
plot_oof_preds('3JHC', -25, 60)
plot_oof_preds('3JHH', -20, 20)
plot_oof_preds('3JHN', -10, 15)
```
| github_jupyter |
## P5.2 Topic Modeling
---
### Content
- [Topic Modelling using LDA](#Topic-Modelling-using-LDA)
- [Topic Modeling (Train data)](#Topic-Modeling-(Train-data))
- [Optimal Topic Size](#Optimal-Topic-Size)
- [Binary Classification (LDA topic features)](#Binary-Classification-(LDA-topic-features))
- [Binary Classification (LDA topic and Countvectorizer features)](#Binary-Classification-(LDA-topic-and-Countvectorizer-features))
- [Recommendations (Part2)](#Recommendations-(Part2))
- [Future Work](#Future-Work)
### Topic Modelling using LDA
Inspired by Marc Kelechava's work (https://towardsdatascience.com/unsupervised-nlp-topic-models-as-a-supervised-learning-input-cf8ee9e5cf28) and Blei, Ng, and Jordan (2003).
In this section, I explore whether underlying semantic structures, discovered through the Latent Dirichlet Allocation (LDA) technique (an unsupervised machine learning technique), can be used in a supervised text classification problem. Applying LDA posed a significant challenge given my inexperience in the domain, and I allocated approximately a week to reading up on basic LDA applications.
The steps are as follows:
- Explore LDA topic modelling, and derive optimum number of topics (train data)
- Investigate the use of LDA topic distributions as feature vectors for supervised, binary classification (i.e. bomb or non-bomb). If the supervised sensitivity and roc_auc scores generalize to the unseen data, it is an indication that the topic model trained on trainsub has identified latent semantic structure that persists across varying motive texts in identifying bombing incidents.
- Investigate generalizability of supervised, binary classification model using feature vectors from both LDA and countvectorizer.
```
import pandas as pd
import numpy as np
import sys
import re
from pprint import pprint
# Gensim
import gensim, spacy, logging, warnings
import gensim.corpora as corpora
from gensim.utils import lemmatize, simple_preprocess
from gensim.models import CoherenceModel
# NLTK Stop words and stemmer
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
# Import library for cross-validation
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# Setting - display all columns
pd.set_option('display.max_columns', None)
# Read in cleaned featured engineered data
dframe = pd.read_csv('../assets/wordok.csv',encoding="ISO-8859-1",index_col=0)
# Instantiate the custom list of stopwords for modelling from P5_01
stop_words = stopwords.words('english')
own_stop = ['motive','specific','unknown','attack','sources','noted', 'claimed','stated','incident','targeted',\
'responsibility','violence','carried','government','suspected','trend','speculated','al','sectarian',\
'retaliation','group','related','security','forces','people','bomb','bombing','bombings']
# Extend the stop words
stop_words.extend(own_stop)
own_stopfn = ['death', 'want', 'off', 'momentum', 'star', 'colleg', 'aqi', 'treat', 'reveng', 'them', 'all', 'radio',\
'bodo', 'upcom', 'between', 'prior', 'enter', 'made', 'nimr', 'sectarian', 'muslim', 'past', 'previou',\
'intimid', 'held', 'fsa', 'women', 'are', 'mnlf', 'with', 'pattani', 'shutdown', 'border', 'departur',\
'advoc', 'have', 'eelam', 'across', 'villag', 'foreign', 'kill', 'shepherd', 'yemeni', 'develop', 'pro',\
'road', 'not', 'appear', 'jharkhand', 'spokesperson']
# Extend the Stop words
stop_words.extend(own_stopfn)
# Check the newly added stop words
stop_words[-5:]
# Create Train-Test split (80-20 split)
# X is motive text. y is bomb.
X_train,X_test,y_train,y_test = train_test_split(dframe[['motive']],dframe['bomb'],test_size=0.20,\
stratify=dframe['bomb'],\
random_state=42)
dframe.head(1)
```
### Topic Modeling (Train data)
```
def sent_to_words(sentences):
for sent in sentences:
sent = re.sub('\s+', ' ', sent) # remove newline chars
sent = re.sub("\'", "", sent) # remove single quotes
sent = gensim.utils.simple_preprocess(str(sent), deacc=True)
yield(sent)
# Convert to list
data = X_train.motive.values.tolist()
data_words = list(sent_to_words(data))
print(data_words[:1])
```
Utilize Gensim's `Phrases` to build and apply bigrams and trigrams. The higher the parameters `min_count` and `threshold`, the harder it is for words to be combined into bigrams.
```
# Build the bigram and trigram models
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
def process_words(texts, stop_words=stop_words, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
"""Remove Stopwords, Form Bigrams, Trigrams and Lemmatization"""
texts = [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts]
texts = [bigram_mod[doc] for doc in texts]
texts = [trigram_mod[bigram_mod[doc]] for doc in texts]
texts_out = []
# use 'en_core_web_sm' in place of 'en'
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
# remove stopwords once more after lemmatization
texts_out = [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts_out]
return texts_out
data_ready = process_words(data_words) # processed Text Data
len(data_ready)
# Create Dictionary
id2word = corpora.Dictionary(data_ready)
## Create corpus texts
texts = data_ready
# Create Corpus: Term Document Frequency
corpus = [id2word.doc2bow(text) for text in data_ready]
# View
display(corpus[:4])
# Human readable format of corpus (term-frequency)
[[(id2word[id], freq) for id, freq in cp] for cp in corpus[:4]]
```
Gensim creates a unique id for each word in the document. The corpus produced above is a mapping of (word_id, word_frequency); a human-readable form of the corpus is displayed thereafter.
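As a toy illustration of that structure (the tokens below are made up):
```
toy_dict = corpora.Dictionary([['attack', 'market', 'attack']])
print(toy_dict.token2id)                                  # e.g. {'attack': 0, 'market': 1}
print(toy_dict.doc2bow(['attack', 'market', 'attack']))   # e.g. [(0, 2), (1, 1)]
```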
Build the LDA model with 4 topics. Each topic is a combination of keywords, with each keyword contributing a certain weight to the topic.
```
# Build LDA model
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
id2word=id2word,
num_topics=4,
random_state=42,
update_every=1,
chunksize=100,
passes=10,
alpha='symmetric',
iterations=100,
per_word_topics=True)
pprint(lda_model.print_topics())
```
Interpretation: For topic 0, the top 10 keywords contributing to the topic are 'however', 'state', and so on, with the weight of 'however' being 0.088.
```
# Compute Perplexity
print(f"Perplexity: {lda_model.log_perplexity(corpus)}") # a measure of how good the model is. lower the better.
# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda_model, texts=data_ready, dictionary=id2word, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print(f"Coherence Score: {coherence_lda}")
def format_topics_sentences(ldamodel=None, corpus=corpus, texts=data):
# Init output
sent_topics_df = pd.DataFrame()
# Get main topic in each document
for i, row_list in enumerate(ldamodel[corpus]):
row = row_list[0] if ldamodel.per_word_topics else row_list
# print(row)
row = sorted(row, key=lambda x: (x[1]), reverse=True)
# Get the Dominant topic, Perc Contribution and Keywords for each document
for j, (topic_num, prop_topic) in enumerate(row):
if j == 0: # => dominant topic
wp = ldamodel.show_topic(topic_num)
topic_keywords = ", ".join([word for word, prop in wp])
sent_topics_df = sent_topics_df.append(pd.Series([int(topic_num), round(prop_topic,4), topic_keywords]), ignore_index=True)
else:
break
sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
# Add original text to the end of the output
contents = pd.Series(texts)
sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
return(sent_topics_df)
df_topic_sents_keywords = format_topics_sentences(ldamodel=lda_model, corpus=corpus, texts=data_ready)
# Format
df_dominant_topic = df_topic_sents_keywords.reset_index()
df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text']
df_dominant_topic.head(10)
```
The dominant topic and its percentage contribution for each document are shown above.
```
# Display setting to show more characters in column
pd.options.display.max_colwidth = 80
sent_topics_sorteddf_mallet = pd.DataFrame()
sent_topics_outdf_grpd = df_topic_sents_keywords.groupby('Dominant_Topic')
for i, grp in sent_topics_outdf_grpd:
sent_topics_sorteddf_mallet = pd.concat([sent_topics_sorteddf_mallet,
grp.sort_values(['Perc_Contribution'], ascending=False).head(1)],
axis=0)
# Reset Index
sent_topics_sorteddf_mallet.reset_index(drop=True, inplace=True)
# Format
sent_topics_sorteddf_mallet.columns = ['Topic_Num', "Topic_Perc_Contrib", "Keywords", "Representative Text"]
# Show
sent_topics_sorteddf_mallet.head(10)
```
The documents to which a given topic has contributed the most are displayed above, to facilitate topic inference.
```
# 1. Wordcloud of Top N words in each topic
from matplotlib import pyplot as plt
from wordcloud import WordCloud, STOPWORDS
import matplotlib.colors as mcolors
cols = [color for name, color in mcolors.TABLEAU_COLORS.items()] # more colors: 'mcolors.XKCD_COLORS'
cloud = WordCloud(stopwords=stop_words,
background_color='white',
width=2500,
height=1800,
max_words=10,
colormap='tab10',
color_func=lambda *args, **kwargs: cols[i],
prefer_horizontal=1.0)
topics = lda_model.show_topics(formatted=False)
fig, axes = plt.subplots(2, 2, figsize=(10,10), sharex=True, sharey=True)
for i, ax in enumerate(axes.flatten()):
fig.add_subplot(ax)
topic_words = dict(topics[i][1])
cloud.generate_from_frequencies(topic_words, max_font_size=300)
plt.gca().imshow(cloud)
plt.gca().set_title('Topic ' + str(i), fontdict=dict(size=16))
plt.gca().axis('off')
plt.subplots_adjust(wspace=0, hspace=0)
plt.axis('off')
plt.margins(x=0, y=0)
plt.tight_layout()
plt.show()
```
Interpretation of the four topics using the representative text identified above: (topic 0: public unrest and law enforcement), (topic 1: tension amidst elections), (topic 2: military campaigns and terror groups), (topic 3: sectarian violence).
Note: Changing the random_state will also change the topics surfaced; it is currently fixed at 42.
```
# Sentence Coloring of N Sentences
from matplotlib.patches import Rectangle
# Pick documents amongst corpus
def sentences_chart(lda_model=lda_model, corpus=corpus, start = 7, end = 14):
corp = corpus[start:end]
mycolors = [color for name, color in mcolors.TABLEAU_COLORS.items()]
fig, axes = plt.subplots(end-start, 1, figsize=(20, (end-start)*0.95), dpi=160)
axes[0].axis('off')
for i, ax in enumerate(axes):
if i > 0:
corp_cur = corp[i-1]
topic_percs, wordid_topics, wordid_phivalues = lda_model[corp_cur]
word_dominanttopic = [(lda_model.id2word[wd], topic[0]) for wd, topic in wordid_topics]
ax.text(0.01, 0.5, "Doc " + str(i-1) + ": ", verticalalignment='center',
fontsize=16, color='black', transform=ax.transAxes, fontweight=700)
# Draw Rectange
topic_percs_sorted = sorted(topic_percs, key=lambda x: (x[1]), reverse=True)
ax.add_patch(Rectangle((0.0, 0.05), 0.99, 0.90, fill=None, alpha=1,
color=mycolors[topic_percs_sorted[0][0]], linewidth=2))
word_pos = 0.06
for j, (word, topics) in enumerate(word_dominanttopic):
if j < 14:
ax.text(word_pos, 0.5, word,
horizontalalignment='left',
verticalalignment='center',
fontsize=16, color=mycolors[topics],
transform=ax.transAxes, fontweight=700)
word_pos += .009 * len(word) # to move the word for the next iter
ax.axis('off')
ax.text(word_pos, 0.5, '. . .',
horizontalalignment='left',
verticalalignment='center',
fontsize=16, color='black',
transform=ax.transAxes)
plt.subplots_adjust(wspace=0, hspace=0)
plt.suptitle('Sentence Topic Coloring for Documents: ' + str(start) + ' to ' + str(end-2), fontsize=22, y=0.95, fontweight=700)
plt.tight_layout()
plt.show()
sentences_chart()
```
We can review the topic percent contribution for each document. Here document 7 is selected as an example.
```
df_dominant_topic[df_dominant_topic['Document_No']==7]
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
vis = pyLDAvis.gensim.prepare(lda_model, corpus, dictionary=lda_model.id2word)
vis
```
Interpretation: On the left-hand plot, each topic is represented by a bubble; a larger bubble indicates higher prevalence. A good topic model will have fairly big, non-overlapping bubbles scattered throughout the chart. The salient keywords and frequency bars on the right-hand chart update as each bubble is reviewed (hover the cursor over a bubble).
### Optimal Topic Size
```
def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=3):
"""
Compute c_v coherence for various number of topics
Parameters:
----------
dictionary : Gensim dictionary
corpus : Gensim corpus
texts : List of input texts
limit : Max num of topics
Returns:
-------
model_list : List of LDA topic models
coherence_values : Coherence values corresponding to the LDA model with respective number of topics
"""
coherence_values = []
model_list = []
for num_topics in range(start, limit, step):
model = gensim.models.ldamodel.LdaModel(corpus=corpus, num_topics=num_topics, id2word=id2word, random_state=42, update_every=1,\
chunksize=100, passes=10, alpha='symmetric', iterations=100, per_word_topics=True)
model_list.append(model)
coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v')
coherence_values.append(coherencemodel.get_coherence())
return model_list, coherence_values
# Can take a long time to run (10mins approx)
model_list, coherence_values = compute_coherence_values(dictionary=id2word, corpus=corpus, texts=data_ready, start=5, limit=60, step=12)
# Show graph
limit=60; start=5; step=12;
x = range(start, limit, step)
plt.plot(x, coherence_values)
plt.xlabel("Num Topics")
plt.ylabel("Coherence score")
plt.legend(("coherence_values"), loc='best')
plt.show()
# Print the coherence scores
for m, cv in zip(x, coherence_values):
print("Num Topics =", m, " has Coherence Value of", round(cv, 4))
```
The coherence score saturates at 41 topics, so we pick the model with 41 topics.
```
# Select the model and print the topics
optimal_model = model_list[3]
model_topics = optimal_model.show_topics(formatted=False)
pprint(optimal_model.print_topics(num_words=5))
```
The dominant topics in each sentence are identified using the defined function below.
```
def format_topics_sentences(ldamodel=None, corpus=corpus, texts=data):
# Init output
sent_topics_df = pd.DataFrame()
# Get main topic in each document
for i, row_list in enumerate(ldamodel[corpus]):
row = row_list[0] if ldamodel.per_word_topics else row_list
# print(row)
row = sorted(row, key=lambda x: (x[1]), reverse=True)
# Get the Dominant topic, Perc Contribution and Keywords for each document
for j, (topic_num, prop_topic) in enumerate(row):
if j == 0: # => dominant topic
wp = ldamodel.show_topic(topic_num)
topic_keywords = ", ".join([word for word, prop in wp])
sent_topics_df = sent_topics_df.append(pd.Series([int(topic_num), round(prop_topic,4), topic_keywords]), ignore_index=True)
else:
break
sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
# Add original text to the end of the output
contents = pd.Series(texts)
sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
return(sent_topics_df)
df_topic_sents_keywords = format_topics_sentences(ldamodel=optimal_model, corpus=corpus, texts=data_ready)
# Format
df_dominant_topic = df_topic_sents_keywords.reset_index()
df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text']
df_dominant_topic.head(10)
```
Find the most representative document for each topic and display it.
```
# Display setting to show more characters in column
pd.options.display.max_colwidth = 80
sent_topics_sorteddf_mallet = pd.DataFrame()
sent_topics_outdf_grpd = df_topic_sents_keywords.groupby('Dominant_Topic')
for i, grp in sent_topics_outdf_grpd:
sent_topics_sorteddf_mallet = pd.concat([sent_topics_sorteddf_mallet,
grp.sort_values(['Perc_Contribution'], ascending=False).head(1)],
axis=0)
# Reset Index
sent_topics_sorteddf_mallet.reset_index(drop=True, inplace=True)
# Format
sent_topics_sorteddf_mallet.columns = ['Topic_Num', "Topic_Perc_Contrib", "Keywords", "Representative Text"]
# Show
sent_topics_sorteddf_mallet.head(10)
# Specify mds as 'tsne', otherwise TypeError: Object of type 'complex' is not JSON serializable
# complex number had come from coordinate calculation and specifying the "mds"
# Ref1: https://stackoverflow.com/questions/46379763/typeerror-object-of-type-complex-is-not-json-serializable-while-using-pyldavi
# Ref2: https://pyldavis.readthedocs.io/en/latest/modules/API.html#pyLDAvis.prepare
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
vis = pyLDAvis.gensim.prepare(topic_model=optimal_model, corpus=corpus, dictionary=optimal_model.id2word,mds='tsne')
vis
```
### Binary Classification (LDA topic features)
```
# Set the dictionary and corpus based on trainsub data
trainid2word = id2word
traincorpus = corpus
# Train model
# Build LDA model on trainsub data, using optimum topics
lda_train = gensim.models.ldamodel.LdaModel(corpus=traincorpus,
id2word=trainid2word,
num_topics=41,
random_state=42,
update_every=1,
chunksize=100,
passes=10,
alpha='symmetric',
iterations=100,
per_word_topics=True)
```
With the LDA model trained on the train data, run the motive text through it using `get_document_topics`. A list comprehension on that output (2nd line in the loop) gives the probability distribution of the topics for a specific review (the feature vector).
```
# Make train Vectors
train_vecs = []
for i in range(len(X_train)):
top_topics = lda_train.get_document_topics(traincorpus[i], minimum_probability=0.0)
topic_vec = [top_topics[i][1] for i in range(41)]
train_vecs.append(topic_vec)
# Sanity check; should correspond with the number of optimal topics
print(f"Number of vectors per train text: {len(train_vecs[2])}")
print(f"Length of train vectors: {len(train_vecs)}")
print(f"Length of X_train: {len(X_train)}")
# Pass the vectors into numpy array form
X_tr_vec = np.array(train_vecs)
y_tr = np.array(y_train)
# Split the train_vecs for training
X_trainsub,X_validate,y_trainsub,y_validate = train_test_split(X_tr_vec,y_tr,test_size=0.20,stratify=y_tr,random_state=42)
# Instantiate model
lr = LogisticRegression(random_state=42,solver='lbfgs',max_iter=500)
# Fit model
model_lr = lr.fit(X_trainsub,y_trainsub)
# Generate predictions from validate set
# Cross-validate 10 folds
predictions = cross_val_predict(model_lr, X_validate, y_validate, cv = 10)
print(f"Accuracy on validate set: {round(cross_val_score(model_lr, X_validate, y_validate, cv = 10).mean(),4)}")
# Confusion matrix for the validate set using the LR model
# Pass in true values, predicted values to confusion matrix
# Convert Confusion matrix into dataframe
# Positive class (class 1) is bomb
cm = confusion_matrix(y_validate, predictions)
cm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])
cm_df
# return nparray as a 1-D array.
confusion_matrix(y_validate, predictions).ravel()
# Save TN/FP/FN/TP values.
tn, fp, fn, tp = confusion_matrix(y_validate, predictions).ravel()
# Summary of metrics for LR model
spec = tn/(tn+fp)
sens = tp/(tp+fn)
print(f"Specificity: {round(spec,4)}")
print(f"Sensitivity: {round(sens,4)}")
# To compute the ROC AUC curve, first
# Create a dataframe called pred_df that contains:
# 1. The list of true values of our test set.
# 2. The list of predicted probabilities based on our model.
pred_proba = [i[1] for i in lr.predict_proba(X_validate)]
pred_df = pd.DataFrame({'test_values': y_validate,
'pred_probs':pred_proba})
# Calculate ROC AUC.
print(f"roc_auc: {round(roc_auc_score(pred_df['test_values'],pred_df['pred_probs']),4)}")
def sent_to_words(sentences):
for sent in sentences:
sent = re.sub('\s+', ' ', sent) # remove newline chars
sent = re.sub("\'", "", sent) # remove single quotes
sent = gensim.utils.simple_preprocess(str(sent), deacc=True)
yield(sent)
# Convert to list
data = X_test.motive.values.tolist()
data_words = list(sent_to_words(data))
print(data_words[:1])
# Build the bigram and trigram models
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
def process_words(texts, stop_words=stop_words, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
"""Remove Stopwords, Form Bigrams, Trigrams and Lemmatization"""
texts = [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts]
texts = [bigram_mod[doc] for doc in texts]
texts = [trigram_mod[bigram_mod[doc]] for doc in texts]
texts_out = []
# use 'en_core_web_sm' in place of 'en'
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
# remove stopwords once more after lemmatization
texts_out = [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts_out]
return texts_out
data_ready = process_words(data_words) # processed Text Data
# Using train dict on new unseen test words
testcorpus = [trainid2word.doc2bow(text) for text in data_ready]
# Use the LDA model from trained data on the unseen test corpus
# Code block similar to that for training code, except
# use the LDA model from the training data, and run them through the unseen test reviews
test_vecs = []
for i in range(len(X_test)):
top_topics = lda_train.get_document_topics(testcorpus[i], minimum_probability=0.0)
topic_vec = [top_topics[i][1] for i in range(41)]
test_vecs.append(topic_vec)
print(f"Length of test vectors: {len(test_vecs)}")
print(f"Length of X_test: {len(X_test)}")
# Pass the vectors into numpy array form
X_ts_vec = np.array(test_vecs)
y_ts = np.array(y_test)
# Instantiate model
lr = LogisticRegression(random_state=42,solver='lbfgs',max_iter=500)
# Fit model
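# note: this refits the classifier on the test-set topic vectors; reusing model_lr trained on the train vectors above would give a stricter test of generalization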
model_lr = lr.fit(X_ts_vec,y_ts)
# Generate predictions from test set
predictions = lr.predict(X_ts_vec)
print(f"Accuracy on test set: {round(model_lr.score(X_ts_vec, y_ts),4)}")
# Confusion matrix for the test set using the LR model
# Pass in true values, predicted values to confusion matrix
# Convert Confusion matrix into dataframe
# Positive class (class 1) is bomb
cm = confusion_matrix(y_ts, predictions)
cm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])
cm_df
# return nparray as a 1-D array.
confusion_matrix(y_ts, predictions).ravel()
# Save TN/FP/FN/TP values.
tn, fp, fn, tp = confusion_matrix(y_ts, predictions).ravel()
# Summary of metrics for LR model
spec = tn/(tn+fp)
sens = tp/(tp+fn)
print(f"Specificity: {round(spec,4)}")
print(f"Sensitivity: {round(sens,4)}")
# To compute the ROC AUC curve, first
# Create a dataframe called pred_df that contains:
# 1. The list of true values of our test set.
# 2. The list of predicted probabilities based on our model.
pred_proba = [i[1] for i in lr.predict_proba(X_ts_vec)]
pred_df = pd.DataFrame({'test_values': y_ts,
'pred_probs':pred_proba})
# Calculate ROC AUC.
print(f"roc_auc: {round(roc_auc_score(pred_df['test_values'],pred_df['pred_probs']),4)}")
# Summary of the topic modeling + LR model scores in Dataframe
summary_df = pd.DataFrame({'accuracy' : [0.5820, 0.6034],
'specificity' : [0.3814, 0.4147],
'sensitivity' : [0.8024, 0.8106],
'roc_auc' : [0.6121, 0.6374]})
# Transpose dataframe
summary_dft = summary_df.T
# Rename columns
summary_dft.columns = ['Validate set','Test set']
print("Topic modeling + LR classifier scores: ")
display(summary_dft)
```
From the sensitivity and roc_auc scores, the model is not overfitted: test-set sensitivity and roc_auc are higher than on the validate set. Before proceeding, a recap of the steps taken so far:
- Topic modeling using the train dataset
- Find the optimum number of topics based on the coherence score
- Train the LDA model on the train data. The topic probability distributions are then used as feature vectors in the Logistic Regression model for binary classification (bomb vs. non-bomb) on the validate dataset.
- Thereafter, the trained LDA model is used to derive topic probability distributions from the test data.
- Run the Logistic Regression model on these topic probability distributions to see if the model generalizes.
In the next section, the topic probability distributions are added to the count-vectorized word features for both the train and test datasets. The combined dataset is then run through the Logistic Regression model to determine overall model generalizability.
### Binary Classification (LDA topic and Countvectorizer features)
```
# Instantiate porterstemmer
p_stemmer = PorterStemmer()
# Define a function to convert a raw motive text to a string of words
def selftext_to_words(motive_text):
    # 1. Remove non-letters.
    letters_only = re.sub("[^a-zA-Z]", " ", motive_text)
    # 2. Split into individual words.
    words = letters_only.split()
    # 3. In Python, searching a set is much faster than searching
    #    a list, so convert the stopwords to a set.
    stops = set(stop_words)
    # 4. Remove stopwords.
    meaningful_words = [w for w in words if w not in stops]
    # 5. Stem the remaining words.
    meaningful_words = [p_stemmer.stem(w) for w in meaningful_words]
    # 6. Join the words back into one string separated by spaces,
    #    and return the result.
    return " ".join(meaningful_words)
# Initialize empty lists to hold the cleaned train and test text.
X_train_clean = []
X_test_clean = []
for text in X_train['motive']:
    # Convert text to words, then append to X_train_clean.
    X_train_clean.append(selftext_to_words(text))
for text in X_test['motive']:
    # Convert text to words, then append to X_test_clean.
    X_test_clean.append(selftext_to_words(text))
# Instantiate our CountVectorizer
cv = CountVectorizer(ngram_range=(1,2),max_df=0.9,min_df=3,max_features=10000)
# Fit and transform on whole training data
X_train_cleancv = cv.fit_transform(X_train_clean)
# Transform test data
X_test_cleancv = cv.transform(X_test_clean)
# Add word vectors (topic modeling) to the sparse matrices
# Ref: https://stackoverflow.com/questions/55637498/numpy-ndarray-sparse-matrix-to-dense
# Ref: https://kite.com/python/docs/scipy.sparse
# Convert sparse matrix to dense
X_tr_dense = X_train_cleancv.toarray()
X_ts_dense = X_test_cleancv.toarray()
# add numpy array (train and test topic model vectors to dense matrix)
X_tr_dense_tm = np.concatenate((X_tr_dense,X_tr_vec),axis=1)
X_ts_dense_tm = np.concatenate((X_ts_dense,X_ts_vec),axis=1)
from scipy.sparse import csr_matrix
# Convert back to sparse matrix for modeling
X_tr_sparse = csr_matrix(X_tr_dense_tm)
X_ts_sparse = csr_matrix(X_ts_dense_tm)
# Sanity Check
display(X_tr_sparse)
display(X_train_cleancv)
```
---
```
# Instantiate model
lr_comb = LogisticRegression(random_state=42,solver='lbfgs',max_iter=500)
# Fit model on the whole training data (without the additional set of stopwords that was removed in the NB model)
model_lr = lr_comb.fit(X_tr_sparse,y_train)
# Generate predictions from test set
predictions = lr_comb.predict(X_ts_sparse)
print(f"Accuracy on whole test set: {round(model_lr.score(X_ts_sparse, y_test),4)}")
# Confusion matrix for test set using LR model
# Pass in true values, predicted values to confusion matrix
# Convert Confusion matrix into dataframe
# Positive class (class 1) is bomb
cm = confusion_matrix(y_test, predictions)
cm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])
cm_df
# return nparray as a 1-D array.
confusion_matrix(y_test, predictions).ravel()
# Save TN/FP/FN/TP values.
tn, fp, fn, tp = confusion_matrix(y_test, predictions).ravel()
# Summary of metrics for LR model
spec = tn/(tn+fp)
sens = tp/(tp+fn)
print(f"Specificity: {round(spec,4)}")
print(f"Sensitivity: {round(sens,4)}")
# To compute the ROC AUC curve, first
# Create a dataframe called pred_df that contains:
# 1. The list of true values of our test set.
# 2. The list of predicted probabilities based on our model.
pred_proba = [i[1] for i in lr_comb.predict_proba(X_ts_sparse)]
pred_df = pd.DataFrame({'test_values': y_test,
'pred_probs':pred_proba})
# Calculate ROC AUC.
print(f"roc_auc: {round(roc_auc_score(pred_df['test_values'],pred_df['pred_probs']),4)}")
# Summary of the topic modeling + LR model scores in Dataframe
summary_df = pd.DataFrame({'accuracy' : [0.6859, 0.6034, 0.6893],
'specificity' : [0.5257, 0.4147, 0.5351],
'sensitivity' : [0.8619, 0.8106, 0.8587],
'roc_auc' : [0.7568, 0.6374, 0.7621]})
# Transpose dataframe
summary_dft = summary_df.T
# Rename columns
summary_dft.columns = ['LR model (50 false neg wrd rmvd)','LR model (tm)', 'LR model (tm + wrd vec)']
display(summary_dft)
```
### Recommendations (Part 2)
From the model metric summaries, the model using topic distributions alone as feature vectors has the lowest performance scores (sensitivity and roc_auc). Adding feature vectors from the count vectorizer improved both sensitivity and roc_auc. Model generalizability using LDA topic distributions has been demonstrated, though the best-performing model remains the production Logistic Regression model using count-vectorized word features. Nevertheless, the results are encouraging and could be explored further (some preliminary thoughts are listed under Future Work).
The approach applied in this project could work, in general, for similar NLP-based classifiers.
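In outline, the generic recipe is: fit LDA on training text only, transform any new text into topic-probability vectors with that trained model, and feed the vectors to a downstream classifier. A minimal sketch with illustrative names (`train_tokens`, `test_tokens`, `n_topics`, and the classifier settings are assumptions, not values from this project):
```
from gensim import corpora, models
from sklearn.linear_model import LogisticRegression

def topic_vectors(lda, dictionary, docs):
    """Map tokenized documents to dense topic-probability vectors."""
    vecs = []
    for tokens in docs:
        bow = dictionary.doc2bow(tokens)
        dist = lda.get_document_topics(bow, minimum_probability=0.0)
        vecs.append([prob for _, prob in dist])
    return vecs

# Fit the topic model on the training tokens only ...
# dictionary = corpora.Dictionary(train_tokens)
# corpus = [dictionary.doc2bow(t) for t in train_tokens]
# lda = models.LdaModel(corpus, id2word=dictionary, num_topics=n_topics, random_state=42)
# ... then reuse it to vectorize both splits for the classifier:
# clf = LogisticRegression(max_iter=500)
# clf.fit(topic_vectors(lda, dictionary, train_tokens), y_train)
# clf.score(topic_vectors(lda, dictionary, test_tokens), y_test)
```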
### Future Work
Terrorism is a complex topic, as it covers politics, psychology, philosophy, military strategy, etc. The current model is very simplistic in that it classifies a terrorist attack mode as 'bomb' or 'non-bomb' based solely on one form of intel (motive text). Additional sources or forms of intel are not included, nor are political and social trends that could serve as supporting sources of intelligence.
Here are a few areas that I would like to revisit for future project extensions:
- source additional data to widen the perspective
- feature-engineer spatial and temporal aspects (e.g. attacks by region, attacks by decade)
- explore model performance using a Tfidf vectorizer and spaCy (a minimal sketch follows this list)
- explore other classification models (only two models were explored here; the available time was split between studying the dataset variables and motive texts, longer-than-usual modeling times due to the inherent size of the dataset, and research on topic modeling (LDA) and spaCy)
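On the Tfidf point above, swapping the CountVectorizer for a TfidfVectorizer is a small change. A hedged sketch, reusing the `X_train_clean`/`X_test_clean` lists from the earlier section and carrying over (not tuning) the CountVectorizer parameters:
```
from sklearn.feature_extraction.text import TfidfVectorizer

# Same n-gram range and document-frequency cutoffs as the CountVectorizer above.
tfidf = TfidfVectorizer(ngram_range=(1, 2), max_df=0.9, min_df=3, max_features=10000)
X_train_tfidf = tfidf.fit_transform(X_train_clean)  # fit on the training text only
X_test_tfidf = tfidf.transform(X_test_clean)        # reuse the fitted vocabulary
```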
---
| github_jupyter |
# Artificial Intelligence Nanodegree
## Machine Translation Project
In this notebook, sections that end with **'(IMPLEMENTATION)'** in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully!
## Introduction
In this notebook, you will build a deep neural network that functions as part of an end-to-end machine translation pipeline. Your completed pipeline will accept English text as input and return the French translation.
- **Preprocess** - You'll convert text to sequences of integers.
- **Models** - Create models that accept a sequence of integers as input and return a probability distribution over possible translations. After learning about the basic types of neural networks that are often used for machine translation, you will engage in your own investigations to design your own model!
- **Prediction** - Run the model on English text.
```
import collections
import helper
import numpy as np
import project_tests as tests
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model, Sequential
from keras.layers import GRU, BatchNormalization, Input, Dense, TimeDistributed, Activation, RepeatVector, Bidirectional, Dropout, LSTM
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.losses import sparse_categorical_crossentropy
```
### Verify access to the GPU
The following test applies only if you expect to be using a GPU, e.g., while running in a Udacity Workspace or using an AWS instance with GPU support. Run the next cell, and verify that the device_type is "GPU".
- If the device is not GPU & you are running from a Udacity Workspace, then save your workspace with the icon at the top, then click "enable" at the bottom of the workspace.
- If the device is not GPU & you are running from an AWS instance, then refer to the cloud computing instructions in the classroom to verify your setup steps.
```
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```
## Dataset
We begin by investigating the dataset that will be used to train and evaluate your pipeline. The most common datasets used for machine translation are from [WMT](http://www.statmt.org/). However, that will take a long time to train a neural network on. We'll be using a dataset we created for this project that contains a small vocabulary. You'll be able to train your model in a reasonable time with this dataset.
### Load Data
The data is located in `data/small_vocab_en` and `data/small_vocab_fr`. The `small_vocab_en` file contains English sentences, and `small_vocab_fr` contains their French translations. Load the English and French data from these files by running the cell below.
```
# Load English data
english_sentences = helper.load_data('data/small_vocab_en')
# Load French data
french_sentences = helper.load_data('data/small_vocab_fr')
print('Dataset Loaded')
```
### Files
Each line in `small_vocab_en` contains an English sentence with the respective translation in each line of `small_vocab_fr`. View the first two lines from each file.
```
for sample_i in range(2):
print('small_vocab_en Line {}: {}'.format(sample_i + 1, english_sentences[sample_i]))
print('small_vocab_fr Line {}: {}'.format(sample_i + 1, french_sentences[sample_i]))
```
From looking at the sentences, you can see they have already been preprocessed. The punctuation has been delimited using spaces, and all the text has been converted to lowercase. This should save you some time, but the text requires further preprocessing.
### Vocabulary
The complexity of the problem is determined by the complexity of the vocabulary. A more complex vocabulary is a more complex problem. Let's look at the complexity of the dataset we'll be working with.
```
english_words_counter = collections.Counter([word for sentence in english_sentences for word in sentence.split()])
french_words_counter = collections.Counter([word for sentence in french_sentences for word in sentence.split()])
print('{} English words.'.format(len([word for sentence in english_sentences for word in sentence.split()])))
print('{} unique English words.'.format(len(english_words_counter)))
print('10 Most common words in the English dataset:')
print('"' + '" "'.join(list(zip(*english_words_counter.most_common(10)))[0]) + '"')
print()
print('{} French words.'.format(len([word for sentence in french_sentences for word in sentence.split()])))
print('{} unique French words.'.format(len(french_words_counter)))
print('10 Most common words in the French dataset:')
print('"' + '" "'.join(list(zip(*french_words_counter.most_common(10)))[0]) + '"')
```
For comparison, _Alice's Adventures in Wonderland_ contains 2,766 unique words of a total of 15,500 words.
## Preprocess
For this project, you won't use text data as input to your model. Instead, you'll convert the text into sequences of integers using the following preprocess methods:
1. Tokenize the words into ids
2. Add padding to make all the sequences the same length.
Time to start preprocessing the data...
### Tokenize (IMPLEMENTATION)
For a neural network to predict on text data, the text first has to be turned into data it can understand. Text data like "dog" is a sequence of ASCII character encodings. Since a neural network is a series of multiplication and addition operations, the input data needs to be numbers.
We can turn each character into a number or each word into a number. These are called character ids and word ids, respectively. Character ids are used for character-level models that generate text predictions for each character. A word-level model uses word ids to generate text predictions for each word. Word-level models tend to learn better, since they are lower in complexity, so we'll use those.
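For a tiny concrete contrast (the mappings below are made up for illustration; they are not the ids Keras will assign):
```
sentence = 'the quick brown fox'
# Character-level: one id per character (here, simply the ASCII code).
char_ids = [ord(c) for c in sentence]
# Word-level: one id per vocabulary word.
vocab = {'the': 1, 'quick': 2, 'brown': 3, 'fox': 4}
word_ids = [vocab[w] for w in sentence.split()]
print(char_ids)  # 19 ids, one per character (spaces included)
print(word_ids)  # [1, 2, 3, 4] -- far fewer ids per sentence
```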
Turn each sentence into a sequence of word ids using Keras's [`Tokenizer`](https://keras.io/preprocessing/text/#tokenizer) function. Use this function to tokenize `english_sentences` and `french_sentences` in the cell below.
Running the cell will run `tokenize` on sample data and show output for debugging.
```
def tokenize(x):
    """
    Tokenize x
    :param x: List of sentences/strings to be tokenized
    :return: Tuple of (tokenized x data, tokenizer used to tokenize x)
    """
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(x)
    return tokenizer.texts_to_sequences(x), tokenizer
# Tokenize Example output
text_sentences = [
'The quick brown fox jumps over the lazy dog .',
'By Jove , my quick study of lexicography won a prize .',
'This is a short sentence .']
text_tokenized, text_tokenizer = tokenize(text_sentences)
print(text_tokenizer.word_index)
print()
for sample_i, (sent, token_sent) in enumerate(zip(text_sentences, text_tokenized)):
    print('Sequence {} in x'.format(sample_i + 1))
    print(' Input: {}'.format(sent))
    print(' Output: {}'.format(token_sent))
```
### Padding (IMPLEMENTATION)
When batching the sequence of word ids together, each sequence needs to be the same length. Since sentences are dynamic in length, we can add padding to the end of the sequences to make them the same length.
Make sure all the English sequences have the same length and all the French sequences have the same length by adding padding to the **end** of each sequence using Keras's [`pad_sequences`](https://keras.io/preprocessing/sequence/#pad_sequences) function.
```
def pad(x, length=None):
    """
    Pad x
    :param x: List of sequences.
    :param length: Length to pad the sequence to. If None, use length of longest sequence in x.
    :return: Padded numpy array of sequences
    """
    return pad_sequences(x, maxlen=length, padding='post')
tests.test_pad(pad)
# Pad Tokenized output
test_pad = pad(text_tokenized)
for sample_i, (token_sent, pad_sent) in enumerate(zip(text_tokenized, test_pad)):
    print('Sequence {} in x'.format(sample_i + 1))
    print(' Input: {}'.format(np.array(token_sent)))
    print(' Output: {}'.format(pad_sent))
```
### Preprocess Pipeline
Your focus for this project is to build neural network architecture, so we won't ask you to create a preprocess pipeline. Instead, we've provided you with the implementation of the `preprocess` function.
```
def preprocess(x, y):
    """
    Preprocess x and y
    :param x: Feature List of sentences
    :param y: Label List of sentences
    :return: Tuple of (Preprocessed x, Preprocessed y, x tokenizer, y tokenizer)
    """
    preprocess_x, x_tk = tokenize(x)
    preprocess_y, y_tk = tokenize(y)
    preprocess_x = pad(preprocess_x)
    preprocess_y = pad(preprocess_y)
    # Keras's sparse_categorical_crossentropy function requires the labels to be in 3 dimensions
    preprocess_y = preprocess_y.reshape(*preprocess_y.shape, 1)
    return preprocess_x, preprocess_y, x_tk, y_tk
preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer =\
preprocess(english_sentences, french_sentences)
max_english_sequence_length = preproc_english_sentences.shape[1]
max_french_sequence_length = preproc_french_sentences.shape[1]
english_vocab_size = len(english_tokenizer.word_index)
french_vocab_size = len(french_tokenizer.word_index)
print('Data Preprocessed')
print("Max English sentence length:", max_english_sequence_length)
print("Max French sentence length:", max_french_sequence_length)
print("English vocabulary size:", english_vocab_size)
print("French vocabulary size:", french_vocab_size)
```
## Models
In this section, you will experiment with various neural network architectures.
You will begin by training four relatively simple architectures.
- Model 1 is a simple RNN
- Model 2 is an RNN with Embedding
- Model 3 is a Bidirectional RNN
- Model 4 is an optional Encoder-Decoder RNN
After experimenting with the four simple architectures, you will construct a deeper architecture that is designed to outperform all four models.
### Ids Back to Text
The neural network will be translating the input to word ids, which isn't the final form we want; we want the French translation. The function `logits_to_text` will bridge the gap between the logits from the neural network and the French translation. You'll be using this function to better understand the output of the neural network.
```
import matplotlib.pyplot as plt
def chart_model(model_histogram):
    # Plot ACC vs Epoch
    plt.plot(model_histogram.history['acc'])
    plt.plot(model_histogram.history['val_acc'])
    plt.title('Model Accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.legend(['Training', 'Validation'], loc='upper left')
    plt.show()
    # Plot LOSS vs Epoch
    plt.plot(model_histogram.history['loss'])
    plt.title('Model Loss')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['Training'], loc='upper left')
    plt.show()

def logits_to_text(logits, tokenizer):
    """
    Turn logits from a neural network into text using the tokenizer
    :param logits: Logits from a neural network
    :param tokenizer: Keras Tokenizer fit on the labels
    :return: String that represents the text of the logits
    """
    index_to_words = {id: word for word, id in tokenizer.word_index.items()}
    index_to_words[0] = '<PAD>'
    return ' '.join([index_to_words[prediction] for prediction in np.argmax(logits, 1)])
print('`logits_to_text` function loaded.')
```
### Model 1: RNN (IMPLEMENTATION)

A basic RNN model is a good baseline for sequence data. In this model, you'll build an RNN that translates English to French.
```
def simple_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train a basic RNN on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # Hyperparameters
    learning_rate = 0.005  ## 0.01 was acceptable
    rnn_dim = 256          ## 128 gave a high loss value
    batch_size = 1024      ## 512 gave a high loss value
    # Build the layers
    model = Sequential()
    model.add(GRU(rnn_dim, input_shape=input_shape[1:], return_sequences=True))
    model.add(TimeDistributed(Dense(batch_size, activation='relu')))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax')))
    # Compile model
    model.compile(loss=sparse_categorical_crossentropy,
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model
tests.test_simple_model(simple_model)
# Reshaping the input to work with a basic RNN
tmp_x = pad(preproc_english_sentences, max_french_sequence_length)
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# Train the neural network
simple_rnn_model = simple_model(
tmp_x.shape,
max_french_sequence_length,
english_vocab_size,
french_vocab_size)
simple_rnn_model.summary()
simple_model_chart = simple_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(simple_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
# Visualize
chart_model(simple_model_chart)
```
### Model 2: Embedding (IMPLEMENTATION)

You've turned the words into ids, but there's a better representation of a word, called a word embedding. An embedding is a vector representation of a word that is close to similar words in n-dimensional space, where n is the size of the embedding vectors.
In this model, you'll create an RNN model using embedding.
```
def embed_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train a RNN model using word embedding on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # Hyperparameters
    learning_rate = 0.01
    emb_dim = 256
    batch_size = 1024
    # Build the layers
    model = Sequential()
    model.add(Embedding(english_vocab_size, emb_dim, input_length=input_shape[1], input_shape=input_shape[1:]))
    model.add(GRU(emb_dim, return_sequences=True))
    model.add(TimeDistributed(Dense(batch_size, activation='relu')))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax')))
    # Compile model
    model.compile(loss=sparse_categorical_crossentropy,
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model
tests.test_embed_model(embed_model)
# Reshape the input
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2]))
# Train the neural network
embed_rnn_model = embed_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index)+1,
len(french_tokenizer.word_index)+1)
embed_rnn_model.summary()
embed_model_chart = embed_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(embed_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
# Visualize
chart_model(embed_model_chart)
```
### Model 3: Bidirectional RNNs (IMPLEMENTATION)

One restriction of an RNN is that it can't see future input, only past input. This is where bidirectional recurrent neural networks come in: they are able to see future data as well.
```
def bd_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train a bidirectional RNN model on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # Hyperparameters
    learning_rate = 0.003
    # Build the layers
    model = Sequential()
    model.add(Bidirectional(GRU(128, return_sequences=True), input_shape=input_shape[1:]))
    model.add(TimeDistributed(Dense(1024, activation='relu')))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax')))
    # Compile model
    model.compile(loss=sparse_categorical_crossentropy,
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model
tests.test_bd_model(bd_model)
# Reshape the input
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# Train the neural network
bd_rnn_model = bd_model(tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index),
len(french_tokenizer.word_index))
bd_rnn_model.summary()
bd_model_chart = bd_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
print(logits_to_text(bd_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
# Visualize
chart_model(bd_model_chart)
```
### Model 4: Encoder-Decoder (OPTIONAL)
Time to look at encoder-decoder models. This model is made up of an encoder and decoder. The encoder creates a matrix representation of the sentence. The decoder takes this matrix as input and predicts the translation as output.
Create an encoder-decoder model in the cell below.
```
def encdec_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train an encoder-decoder model on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # Hyperparameters
    learning_rate = 0.001
    encdec_dim = 256
    batch_size = 1024
    # Build the layers
    model = Sequential()
    # Encoder
    model.add(GRU(encdec_dim, input_shape=input_shape[1:], go_backwards=True))
    model.add(RepeatVector(output_sequence_length))
    # Decoder
    model.add(GRU(encdec_dim, return_sequences=True))
    model.add(TimeDistributed(Dense(1024, activation='relu')))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax')))
    # Compile model
    model.compile(loss=sparse_categorical_crossentropy,
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model
tests.test_encdec_model(encdec_model)
# Reshape the input
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# Train and Print prediction(s)
encdec_rnn_model = encdec_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index)+1,
len(french_tokenizer.word_index)+1)
encdec_rnn_model.summary()
encdec_model_chart = encdec_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
print(logits_to_text(encdec_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
# Visualize
chart_model(encdec_model_chart)
```
### Model 5: Custom (IMPLEMENTATION)
Use everything you learned from the previous models to create a model that incorporates embedding and a bidirectional RNN into one model.
```
def model_final(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train a model that incorporates embedding, encoder-decoder, and bidirectional RNN on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # Hyperparameters
    learning_rate = 0.01
    # Build the layers
    model = Sequential()
    # Embedding
    model.add(Embedding(english_vocab_size, 128, input_length=input_shape[1],
                        input_shape=input_shape[1:]))
    # Encoder
    model.add(Bidirectional(GRU(128)))
    model.add(RepeatVector(output_sequence_length))
    model.add(BatchNormalization())
    # Decoder
    model.add(Bidirectional(GRU(128, return_sequences=True)))
    model.add(BatchNormalization())
    model.add(TimeDistributed(Dense(256, activation='relu')))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax')))
    model.compile(loss=sparse_categorical_crossentropy,
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model
tests.test_model_final(model_final)
print('Final Model Loaded')
```
## Prediction (IMPLEMENTATION)
```
def final_predictions(x, y, x_tk, y_tk):
    """
    Gets predictions using the final model
    :param x: Preprocessed English data
    :param y: Preprocessed French data
    :param x_tk: English tokenizer
    :param y_tk: French tokenizer
    """
    # Train neural network using model_final
    model = model_final(x.shape, y.shape[1],
                        len(x_tk.word_index) + 1,
                        len(y_tk.word_index) + 1)
    model.summary()
    custom_model_chart = model.fit(x, y, batch_size=1024, epochs=15, validation_split=0.2)
    # Visualize
    chart_model(custom_model_chart)
    ## DON'T EDIT ANYTHING BELOW THIS LINE
    y_id_to_word = {value: key for key, value in y_tk.word_index.items()}
    y_id_to_word[0] = '<PAD>'
    sentence = 'he saw a old yellow truck'
    sentence = [x_tk.word_index[word] for word in sentence.split()]
    sentence = pad_sequences([sentence], maxlen=x.shape[-1], padding='post')
    sentences = np.array([sentence[0], x[0]])
    predictions = model.predict(sentences, len(sentences))
    print('Sample 1:')
    print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[0]]))
    print('Il a vu un vieux camion jaune')
    print('Sample 2:')
    print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[1]]))
    print(' '.join([y_id_to_word[np.max(x)] for x in y[0]]))

final_predictions(preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer)
```
## Submission
When you're ready to submit, complete the following steps:
1. Review the [rubric](https://review.udacity.com/#!/rubrics/1004/view) to ensure your submission meets all requirements to pass
2. Generate an HTML version of this notebook
- Run the next cell to attempt automatic generation (this is the recommended method in Workspaces)
- Navigate to **FILE -> Download as -> HTML (.html)**
- Manually generate a copy using `nbconvert` from your shell terminal
```
$ pip install nbconvert
$ python -m nbconvert machine_translation.ipynb
```
3. Submit the project
- If you are in a Workspace, simply click the "Submit Project" button (bottom towards the right)
- Otherwise, add the following files into a zip archive and submit them
- `helper.py`
- `machine_translation.ipynb`
- `machine_translation.html`
- You can export the notebook by navigating to **File -> Download as -> HTML (.html)**.
### Generate the html
**Save your notebook before running the next cell to generate the HTML output.** Then submit your project.
```
# Save before you run this cell!
!!jupyter nbconvert *.ipynb
```
## Optional Enhancements
This project focuses on learning various network architectures for machine translation, but we don't evaluate the models according to best practices by splitting the data into separate test & training sets -- so the model accuracy is overstated. Use the [`sklearn.model_selection.train_test_split()`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function to create separate training & test datasets, then retrain each of the models using only the training set and evaluate the prediction accuracy using the hold out test set. Does the "best" model change?
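A hedged sketch of that enhancement, reusing the preprocessed arrays defined earlier in this notebook (the 80/20 split ratio and the `random_state` are assumptions):
```
from sklearn.model_selection import train_test_split

# Hold out 20% of the sentence pairs before any training.
x_tr, x_te, y_tr, y_te = train_test_split(
    preproc_english_sentences, preproc_french_sentences,
    test_size=0.2, random_state=42)

# Retrain, e.g., the final model on the training split only ...
# model = model_final(x_tr.shape, y_tr.shape[1],
#                     len(english_tokenizer.word_index) + 1,
#                     len(french_tokenizer.word_index) + 1)
# model.fit(x_tr, y_tr, batch_size=1024, epochs=15, validation_split=0.2)
# ... and report accuracy on the untouched test split:
# model.evaluate(x_te, y_te)
```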
| github_jupyter |
# Convolutional Neural Network
## Import Dependencies
```
%matplotlib inline
from imp import reload
import itertools
import numpy as np
import utils; reload(utils)
from utils import *
from __future__ import print_function
from sklearn.metrics import confusion_matrix, classification_report, f1_score
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding, SpatialDropout1D
from keras.layers import LSTM
from keras.layers import Conv1D, GlobalMaxPooling1D
from keras.layers import Flatten
from keras.datasets import imdb
from keras.utils import plot_model
from keras.utils.vis_utils import model_to_dot
from IPython.display import SVG
from IPython.display import Image
```
## Configure Parameters
```
# Embedding
embedding_size = 50
max_features = 5000
maxlen = 400
# Convolution
kernel_size = 3
pool_size = 4
filters = 250
# Dense
hidden_dims = 250
# Training
batch_size = 64
epochs = 4
```
## Data Preparation
```
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# Pad sequences
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('Train data size:', x_train.shape)
print('Test data size:', x_test.shape)
```
## Modelling
```
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_size dimensions
model.add(Embedding(max_features,
embedding_size,
input_length=maxlen))
model.add(Dropout(0.2))
model.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
model.add(GlobalMaxPooling1D())
# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
# plot_model(model, to_file='model.png', show_shapes=True)
# Image(filename = 'model.png')
# SVG(model_to_dot(model).create(prog='dot', format='svg'))
```
## Evaluation
```
# Train the model
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
verbose=1)
# Evaluate model
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
preds = model.predict_classes(x_test, batch_size=batch_size)
# Save the model weights
model_path = 'data/imdb/models/'
model.save(model_path + 'cnn_model.h5')
model.save_weights(model_path + 'cnn_weights.h5')
# Confusion Matrix
cm = confusion_matrix(y_test, preds)
plot_confusion_matrix(cm, {'negative': 0, 'positive': 1})
# F1 score
f1_macro = f1_score(y_test, preds, average='macro')
f1_micro = f1_score(y_test, preds, average='micro')
print('Test accuracy:', acc)
print('Test score (loss):', score)
print('')
print('F1 Score (Macro):', f1_macro)
print('F1 Score (Micro):', f1_micro)
```
| github_jupyter |
# Comparison of the data taken with a long adaptation time
(c) 2019 Manuel Razo. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT)
---
```
import os
import glob
import re
# Our numerical workhorses
import numpy as np
import scipy as sp
import pandas as pd
# Import matplotlib stuff for plotting
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib as mpl
# Seaborn, useful for graphics
import seaborn as sns
# Import the project utils
import sys
sys.path.insert(0, '../../../')
import ccutils
# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline
%config InlineBackend.figure_format = 'retina'
tmpdir = '../../tmp/'
datadir = '../../../data/csv_microscopy/'
# Set PBoC plotting format
ccutils.viz.set_plotting_style()
# Increase dpi
mpl.rcParams['figure.dpi'] = 110
```
## Comparing the data
For this dataset, taken on `20190814`, I grew cells overnight in M9 media, the reason being that I wanted to make sure that cells had no memory of ever having been in LB.
```
df_long = pd.read_csv('outdir/20190816_O2__M9_growth_test_microscopy.csv',
comment='#')
df_long[['date', 'operator', 'rbs', 'mean_intensity', 'intensity']].head()
```
Now the rest of the datasets taken with the laser system
```
# Read the tidy-data frame
files = glob.glob(datadir + '/*IPTG*csv')# + mwc_files
df_micro = pd.concat(pd.read_csv(f, comment='#') for f in files if 'Oid' not in f)
## Remove data sets that are ignored because of problems with the data quality
## NOTE: These data sets are kept in the repository for transparency, but they
## failed at one of our quality criteria
## (see README.txt file in microscopy folder)
ignore_files = [x for x in os.listdir('../../image_analysis/ignore_datasets/')
if 'microscopy' in x]
# Extract data from these files
ignore_dates = [int(x.split('_')[0]) for x in ignore_files]
# Remove these dates
df_micro = df_micro[~df_micro['date'].isin(ignore_dates)]
# Keep only the O2 operator
df_micro = df_micro[df_micro.operator == 'O2']
df_micro[['date', 'operator', 'rbs', 'mean_intensity', 'intensity']].head()
```
Let's now look at the O2 $\Delta lacI$ strain data. For this we first have to extract the mean autofluorescence value. Let's start by processing the new data.
```
# Define names for columns in dataframe
names = ['date', 'IPTG_uM','operator', 'binding_energy',
'rbs', 'repressors', 'mean', 'std', 'noise']
# Initialize data frame to save the noise
df_noise_long = pd.DataFrame(columns=names)
# Extract the mean autofluorescence
I_auto = df_long[df_long.rbs == 'auto'].intensity.mean()
# Extract the strain fluorescence measurements
strain_df_long = df_long[df_long.rbs == 'delta']
# Group df_long by IPTG measurement
df_long_group = strain_df_long.groupby('IPTG_uM')
for inducer, df_long_inducer in df_long_group:
    # Append the required info
    strain_info = [20190624, 0, df_long_inducer.operator.unique()[0],
                   df_long_inducer.binding_energy.unique()[0],
                   df_long_inducer.rbs.unique()[0],
                   df_long_inducer.repressors.unique()[0],
                   (df_long_inducer.intensity - I_auto).mean(),
                   (df_long_inducer.intensity - I_auto).std(ddof=1)]
    # Skip entries whose background-subtracted mean is negative
    if strain_info[int(np.where(np.array(names) == 'mean')[0])] > 0:
        # Compute the noise
        strain_info.append(strain_info[-1] / strain_info[-2])
        # Convert to a pandas series to attach to the data frame
        strain_info = pd.Series(strain_info, index=names)
        # Append the info to the data frame
        df_noise_long = df_noise_long.append(strain_info,
                                             ignore_index=True)
df_noise_long.head()
# group by date and by IPTG concentration
df_group = df_micro.groupby(['date'])
# Define names for columns in data frame
names = ['date', 'IPTG_uM','operator', 'binding_energy',
'rbs', 'repressors', 'mean', 'std', 'noise']
# Initialize data frame to save the noise
df_noise_delta = pd.DataFrame(columns=names)
for date, data in df_group:
    # Extract the mean autofluorescence
    I_auto = data[data.rbs == 'auto'].intensity.mean()
    # Extract the strain fluorescence measurements
    strain_data = data[data.rbs == 'delta']
    # Group data by IPTG measurement
    data_group = strain_data.groupby('IPTG_uM')
    for inducer, data_inducer in data_group:
        # Append the required info
        strain_info = [date, inducer, data_inducer.operator.unique()[0],
                       data_inducer.binding_energy.unique()[0],
                       data_inducer.rbs.unique()[0],
                       data_inducer.repressors.unique()[0],
                       (data_inducer.intensity - I_auto).mean(),
                       (data_inducer.intensity - I_auto).std(ddof=1)]
        # Skip entries whose background-subtracted mean is negative
        if strain_info[int(np.where(np.array(names) == 'mean')[0])] > 0:
            # Compute the noise
            strain_info.append(strain_info[-1] / strain_info[-2])
            # Convert to a pandas series to attach to the data frame
            strain_info = pd.Series(strain_info, index=names)
            # Append the info to the data frame
            df_noise_delta = df_noise_delta.append(strain_info,
                                                   ignore_index=True)
df_noise_delta.head()
```
It seems that the noise is essentially the same for both illumination systems, ≈ 0.4-0.5.
Let's look at the ECDF of single-cell fluorescence values. For all measurements to be comparable we will plot the fold-change distribution. What this means is that we will extract the mean autofluorescence value and we will normalize by the mean intensity of the $\Delta lacI$ strain.
```
# group laser data by date
df_group = df_micro.groupby('date')
colors = sns.color_palette('Blues', n_colors=len(df_group))
# Loop through dates
for j, (g, d) in enumerate(df_group):
    # Extract mean autofluorescence
    auto = d.loc[d.rbs == 'auto', 'intensity'].mean()
    # Extract mean delta
    delta = d.loc[d.rbs == 'delta', 'intensity'].mean()
    # Keep only delta data
    data = d[d.rbs == 'delta']
    fold_change = (data.intensity - auto) / (delta - auto)
    # Generate ECDF
    x, y = ccutils.stats.ecdf(fold_change)
    # Plot ECDF
    plt.plot(x, y, lw=0, marker='.', color=colors[j],
             alpha=0.3, label='')
## LED
# Extract mean autofluorescence
auto_long = df_long.loc[df_long.rbs == 'auto', 'intensity'].mean()
delta_long = df_long.loc[df_long.rbs == 'delta', 'intensity'].mean()
# Compute fold-change for delta strain
fold_change = (df_long[df_long.rbs == 'delta'].intensity - auto_long) /\
(delta_long - auto_long)
# Generate ECDF
x, y = ccutils.stats.ecdf(fold_change)
# Plot ECDF
plt.plot(x, y, lw=0, marker='v', color='red',
alpha=0.3, label='24 hour', ms=3)
# Add fake plot for legend
plt.plot([], [], marker='.', color=colors[-1],
alpha=0.3, label='8 hour', lw=0)
# Label x axis
plt.xlabel('fold-change')
# Add legend
plt.legend()
# Label y axis of left plot
plt.ylabel('ECDF')
# Change limit
plt.xlim(right=3)
plt.savefig('outdir/ecdf_comparison.png', bbox_inches='tight')
```
There is no difference whatsoever. Maybe it is not the memory of LB, but the memory of having been in a lag phase for quite a while.
## Comparison with theoretical prediction.
Let's compare these datasets with the theoretical prediction we obtained from the MaxEnt approach.
First we need to read the Lagrange multipliers to reconstruct the distribution.
```
# Define directory for MaxEnt data
maxentdir = '../../../data/csv_maxEnt_dist/'
# Read resulting values for the multipliers.
df_maxEnt = pd.read_csv(maxentdir + 'MaxEnt_Lagrange_mult_protein.csv')
df_maxEnt.head()
```
Now let's define the necessary objects to build the distribution from these constraints obtained with the MaxEnt method.
```
# Extract protein moments in constraints
prot_mom = [x for x in df_maxEnt.columns if 'm0' in x]
# Define index of moments to be used in the computation
moments = [tuple(map(int, re.findall(r'\d+', s))) for s in prot_mom]
# Define sample space
mRNA_space = np.array([0])
protein_space = np.arange(0, 1.9E4)
# Extract values to be used
df_sample = df_maxEnt[(df_maxEnt.operator == 'O1') &
(df_maxEnt.repressor == 0) &
(df_maxEnt.inducer_uM == 0)]
# Select the Lagrange multipliers
lagrange_sample = df_sample.loc[:, [col for col in df_sample.columns
if 'lambda' in col]].values[0]
# Compute distribution from Lagrange multipliers values
Pp_maxEnt = ccutils.maxent.maxEnt_from_lagrange(mRNA_space,
protein_space,
lagrange_sample,
exponents=moments).T[0]
mean_p = np.sum(protein_space * Pp_maxEnt)
```
Now we can compare both distributions.
```
# Define binstep for plot, meaning how often to plot
# an entry
binstep = 10
## LED
# Extract mean autofluorescence
auto_long = df_long.loc[df_long.rbs == 'auto', 'intensity'].mean()
delta_long = df_long.loc[df_long.rbs == 'delta', 'intensity'].mean()
# Compute fold-change for delta strain
fold_change = (df_long[df_long.rbs == 'delta'].intensity - auto_long) /\
(delta_long - auto_long)
# Generate ECDF
x, y = ccutils.stats.ecdf(fold_change)
# Plot ECDF
plt.plot(x, y, lw=0, marker='v', color='red',
alpha=0.3, label='20 hour', ms=3)
# Plot MaxEnt results
plt.plot(protein_space[0::binstep] / mean_p, np.cumsum(Pp_maxEnt)[0::binstep],
drawstyle='steps', label='MaxEnt', lw=2)
# Add legend
plt.legend()
# Label axis
plt.ylabel('CDF')
plt.xlabel('fold-change')
plt.savefig('outdir/maxent_comparison.png', bbox_inches='tight')
```
| github_jupyter |
<table width="100%"> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="..\images\qworld.jpg" width="35%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by Abuzer Yakaryilmaz (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
<h2> One Bit </h2>
[Watch Lecture](https://youtu.be/kn53Qvl-h28)
In daily life, we use the decimal number system. It is also called the base-10 system, because we have 10 digits:
$ 0,~1,~2,~3,~4,~5,~6,~7,~8, \mbox{ and } 9 $.
In computer science, on the other hand, the widely used system is binary, which has only two digits:
$ 0 $ and $ 1 $.
Bit (or binary digit) is the basic unit of information used in computer science and information theory.
It can also be seen as the smallest "useful" memory unit, which has two states named 0 and 1.
At any moment, a bit can be in either state 0 or state 1.
<h3> Four operators </h3>
How many different operators can be defined on a single bit?
<i>An operator, depending on the current state of the bit, updates the state of bit (the result may be the same state).</i>
We can apply four different operators to a single bit:
<ol>
<li> Identity: $ I(0) = 0 $ and $ I(1) = 1 $ </li>
<li> Negation: $ NOT(0) = 1 $ and $ NOT(1) = 0 $ </li>
<li> Constant (Zero): $ ZERO(0) = 0 $ and $ ZERO(1) = 0 $ </li>
<li> Constant (One): $ ONE(0) = 1 $ and $ ONE(1) = 1 $ </li>
</ol>
The first operator is called IDENTITY, because it does not change the content/value of the bit.
The second operator is named NOT, because it negates (flips) the value of bit.
<i>Remark that 0 and 1 also refer to Boolean values False and True, respectively, and, False is the negation of True, and True is the negation of False.</i>
The third (resp., fourth) operator returns a constant value 0 (resp., 1), whatever the input is.
<h3> Table representation </h3>
We can represent the transitions of each operator by a table:
$
I = \begin{array}{lc|cc}
& & initial & states \\
& & \mathbf{0} & \mathbf{1} \\ \hline
final & \mathbf{0} & \mbox{goes-to} & \emptyset \\
states & \mathbf{1} & \emptyset & \mbox{goes-to} \end{array} ,
$
where
- the header (first row) represents the initial values, and
- the first column represents the final values.
We can also define the transitions numerically:
- we use 1 if there is a transition between two values, and,
- we use 0 if there is no transition between two values.
$
I = \begin{array}{lc|cc}
& & initial & states \\
& & \mathbf{0} & \mathbf{1} \\ \hline
final & \mathbf{0} & 1 & 0 \\
states & \mathbf{1} & 0 & 1 \end{array}
$
The values in <b>bold</b> are the initial and final values of the bits. The non-bold values represent the transitions.
<ul>
<li> The top-left non-bold 1 represents the transition $ 0 \rightarrow 0 $. </li>
<li> The bottom-right non-bold 1 represents the transition $ 1 \rightarrow 1 $. </li>
<li> The top-right non-bold 0 means that there is no transition from 1 to 0. </li>
<li> The bottom-left non-bold 0 means that there is no transition from 0 to 1. </li>
</ul>
The reader may think that the values 0 and 1 are representing the transitions as True (On) and False (Off), respectively.
Similarly, we can represent the other operators as below:
$
NOT = \begin{array}{lc|cc} & & initial & states \\ & & \mathbf{0} & \mathbf{1} \\ \hline final & \mathbf{0} & 0 & 1 \\
states & \mathbf{1} & 1 & 0 \end{array}
~~~
ZERO = \begin{array}{lc|cc} & & initial & states \\ & & \mathbf{0} & \mathbf{1} \\ \hline final & \mathbf{0} & 1 & 1 \\
states & \mathbf{1} & 0 & 0 \end{array}
~~~
ONE = \begin{array}{lc|cc} & & initial & states \\ & & \mathbf{0} & \mathbf{1} \\ \hline final & \mathbf{0} & 0 & 0 \\
states & \mathbf{1} & 1 & 1 \end{array}
.
$
<h3> Task 1 </h3>
Convince yourself of the correctness of each table.
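As a small sketch of one way to check them: encode states 0 and 1 as the column vectors $(1,0)$ and $(0,1)$, encode each table's non-bold entries as a 2x2 matrix, and confirm that matrix-vector multiplication reproduces every transition.
```
import numpy as np

# Each operator's table of non-bold 0/1 entries, read as a matrix.
I    = np.array([[1, 0], [0, 1]])
NOT  = np.array([[0, 1], [1, 0]])
ZERO = np.array([[1, 1], [0, 0]])
ONE  = np.array([[0, 0], [1, 1]])

v0 = np.array([1, 0])  # state 0
v1 = np.array([0, 1])  # state 1

for name, op in [('I', I), ('NOT', NOT), ('ZERO', ZERO), ('ONE', ONE)]:
    print(name, ': 0 ->', op @ v0, ', 1 ->', op @ v1)
```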
<h3> Reversibility and Irreversibility </h3>
After applying Identity or NOT operator, we can easily determine the initial value by checking the final value.
<ul>
<li> In the case of Identity operator, we simply say the same value. </li>
<li> In the case of NOT operator, we simply say the other value, i.e., if the final value is 0 (resp., 1), then we say 1 (resp., 0). </li>
</ul>
However, we cannot know the initial value by checking the final value after applying ZERO or ONE operator.
Based on this observation, we can classify the operators into two types: <i>Reversible</i> and <i>Irreversible</i>.
<ul>
<li> If we can recover the initial value(s) from the final value(s), then the operator is called reversible like Identity and NOT operators. </li>
<li> If we cannot know the initial value(s) from the final value(s), then the operator is called irreversible like ZERO and ONE operators. </li>
</ul>
<b> This classification is important, as the quantum evolution operators are reversible </b> (as long as the system is closed).
The Identity and NOT operators are two fundamental quantum operators.
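Continuing the small matrix sketch from Task 1, reversibility can be read off the determinant: the matrices of Identity and NOT are invertible, while those of ZERO and ONE are singular, so no inverse operator can recover the initial state.
```
# Reuses the matrices I, NOT, ZERO and ONE from the Task 1 sketch.
for name, op in [('I', I), ('NOT', NOT), ('ZERO', ZERO), ('ONE', ONE)]:
    det = np.linalg.det(op)
    print(name, 'is', 'reversible' if det != 0 else 'irreversible', f'(det = {det:+.0f})')
```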
| github_jupyter |
```
import sinesum as ss
import matplotlib.pyplot as plt
import numpy as np
```
#### Fourier Series of the Step function
In this homework assignment, we built a partial-series calculator that gives an approximation of the sign($t$) function, using the function's Fourier series. Since sign($t$) is odd with period $T$, only sine terms appear, and their coefficients are given by the integral

$$ b_n = \frac{2}{T} \int_{0}^{T} \text{sign}(t)\,\sin\left(\frac{2 \pi n t}{T}\right)dt = \begin{cases} \dfrac{4}{\pi n} & n \text{ odd} \\ 0 & n \text{ even,} \end{cases} $$

where $T$ is the period of the approximation. Using these coefficients, we can rewrite sign($t$) as:

$$ \text{sign}(t) = \sum_{n=1}^{\infty} \frac{4}{\pi} \frac{1}{2n-1} \sin\left(\frac{2 \pi (2n-1)}{T} t\right) $$
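The helper module `sinesum` imported as `ss` is not listed here. As a minimal sketch of the partial sum it presumably computes (the name `Sn` and its arguments are assumed from the calls below; the module's version also appears to accept an array of term counts, which this scalar version does not):
```
import numpy as np

def Sn(T, t, N):
    """Partial Fourier sum of sign(t) with period T, truncated after N terms."""
    total = 0.0
    for n in range(1, int(N) + 1):
        k = 2 * n - 1  # only odd harmonics contribute
        total += (4 / np.pi) * (1 / k) * np.sin(2 * np.pi * k * t / T)
    return total
```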
The next bit of code initializes the arrays we will use to plot these partial sums and defines the plotting functions.
```
T = 2*np.pi
F1Array = ss.Snarray(T,1)
F3Array = ss.Snarray(T,3)
F5Array = ss.Snarray(T,5)
F10Array = ss.Snarray(T,10)
F30Array = ss.Snarray(T,30)
F100Array = ss.Snarray(T,100)
FuncArray = ss.farray(T)
Time = ss.timespace(T)
def lowNplot():
    """args: none
    returns: null
    This function plots the Fourier partial sums up to S_5 against the sign function.
    It should be used after all of the arrays have been created."""
    fig = plt.figure(figsize=(8, 12))
    a = plt.axes()
    a.plot(Time, F1Array, 'b.-', label="S_1")
    a.plot(Time, F3Array, 'k.-', label="S_3")
    a.plot(Time, F5Array, 'g.-', label="S_5")
    a.plot(Time, FuncArray, 'r', label="Function being approximated")
    a.set(xlabel='t', ylabel='f(t)')
    a.legend()
    plt.show()

def highNplot():
    """args: none
    returns: null
    This function plots the Fourier partial sums from S_10 to S_100 against the sign function.
    It should be used after all of the arrays have been created."""
    fig = plt.figure(figsize=(8, 12))
    a = plt.axes()
    a.plot(Time, F10Array, 'g.-', label="S_10")
    a.plot(Time, F30Array, 'k.-', label="S_30")
    a.plot(Time, F100Array, 'b.-', label="S_100")
    a.plot(Time, FuncArray, 'r', label="Function being approximated")
    a.set(xlabel='t', ylabel='f(t)')
    a.legend()
    plt.show()
```
We will first consider the case when our arbitrary parameter $\alpha$ is 1.
The plot in this case shows us the values of the sum on $[-\pi,\pi]$
```
lowNplot()
```
As can be seen, the sinusoids are being summed into something that looks closer and closer to our step function.
```
highNplot()
```
While this plot is definitely messier looking, the area between the step function and the partial sums is getting closer to 0, which is how we define this series converging to the function we wish to approximate.
Now let's see how well the approximation does for a specified point $t$
```
t_1 = 0.01*T
t_2 = 0.25*T
t_3 = 0.49*T
Kterms = np.array([1,3,5,10,30,100])
plotdom = np.arange(6)
approx_1 = ss.Sn(T,t_1,Kterms)
approx_2 = ss.Sn(T,t_2,Kterms)
approx_3 = ss.Sn(T,t_3,Kterms)
fig = plt.figure(figsize = (8,12))
a = plt.axes()
a.plot(plotdom, approx_1, 'b.-', label = "t=0.01T")
a.plot(plotdom, approx_2, 'g.-', label = "t=0.25T")
a.plot(plotdom, approx_3, 'r.-', label = "t=0.49T")
a.set(xlabel = 'Terms needed', ylabel = 'f(t)')
a.legend()
plt.show()
```
As we can see, this graph shows that at the specified $t$ values the sum of the sine functions does in fact approach 1. It is interesting to note that, because $\sin(x)$ is symmetric about $x = \frac{\pi}{2}$, t_1 and t_3 have exactly the same approximation, which makes their lines overlap. Below I have included a graph that shows the rate at which the sum converges to 1 (it takes forever).
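The overlap can be verified directly: for odd $k$, $\sin(k\pi - \theta) = \sin(\theta)$, and the sine arguments at $t_1 = 0.01T$ and $t_3 = 0.49T$ are exactly $\theta = 0.02\pi k$ and $k\pi - 0.02\pi k$. A quick numerical check (a sketch; numpy is already imported above):
```
k = 2 * np.arange(1, 101) - 1                    # odd harmonics 1, 3, ..., 199
print(np.allclose(np.sin(2 * np.pi * k * 0.01),  # argument at t = 0.01*T (T cancels)
                  np.sin(2 * np.pi * k * 0.49))) # argument at t = 0.49*T -> True
```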
```
t_0 = 0.1*T
plotdom = np.arange(300)
approx_0 = ss.Sn(T,t_0,plotdom)
fig = plt.figure(figsize = (8,12))
a = plt.axes()
a.plot(plotdom, approx_0, 'b.-', label = "Approximation")
a.set(xlabel = 'Terms needed', ylabel = 'f(t)')
a.legend()
plt.show()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/BautistaDavid/Proyectos_ClaseML/blob/corte_1/Proyecto2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install wooldridge
```
## **Project 2**
We will build an object that computes some descriptive statistics of a numeric data vector. Afterwards, the class will be tested using some variables from the "wage1" dataset of the Wooldridge book.
```
class estadisticos:
    def __init__(self, lst):
        self.lst = lst

    def media(self):
        # Arithmetic mean
        return sum(self.lst) / len(self.lst)

    def test(self):
        return self.media()

    def desv_est(self):
        # Sample standard deviation (denominator n - 1)
        return (sum([(i - self.media())**2 for i in self.lst]) / (len(self.lst) - 1))**0.5

    def varianza(self):
        return self.desv_est()**2

    def mediana(self):
        srt = sorted(self.lst)
        n = len(srt)
        return srt[n // 2] if n % 2 != 0 else (srt[n // 2 - 1] + srt[n // 2]) / 2

    def curtosis(self):
        return sum([(i - self.media())**4 for i in self.lst]) / (len(self.lst) * self.desv_est()**4)

    def simetria(self):
        # Skewness
        return sum([(i - self.media())**3 for i in self.lst]) / (len(self.lst) * self.desv_est()**3)

    def coeficiente_variacion(self):
        return self.desv_est() / abs(self.media())
```
We will test the class on the ```wage``` and ```educ``` variables of the ```wage1``` dataset. Information about these data is displayed below.
```
import wooldridge as wd
wd.data("wage1", description=True)  # Display information about the variables.
import numpy as np
import pandas as pd
datos = wd.data("wage1")
for i in ["wage", "educ"]:
    stats = estadisticos(datos[i])
    mensaje = "Error"
    # Note: np.std/np.var default to ddof=0 (population), while the class uses
    # ddof=1 (sample), so those entries are expected not to match exactly.
    funciones = {"Mean": [stats.media(), np.mean(datos[i])],
                 "Median": [stats.mediana(), np.median(datos[i])],
                 "Standard deviation": [stats.desv_est(), np.std(datos[i])],
                 "Variance": [stats.varianza(), np.var(datos[i])],
                 "Coefficient of variation": [stats.coeficiente_variacion(), np.std(datos[i]) / np.mean(datos[i])]}
    print(f"Statistics for {i}")
    for key, value in funciones.items():
        try:
            assert value[0] == value[1], mensaje
            print(f"{key} = {value[1]} --> Matches numpy")
        except:
            print(f"{key} = {value[1]} --> Does not match numpy; the difference is {abs(value[1] - value[0])}")
    print("___________________________________________\n ")
```
### **Analizis resultados:**
* **Variable wage**:
| Statistic      | Value |
|----------------|-------|
| Mean           | 5.89  |
| Median         | 4.65  |
| Std. deviation | 3.69  |
| Variance       | 13.64 |
| Coef. of var.  | 0.63  |
<Br>
The average hourly wage in this sample of the US population for 1976 was 5.89 USD. Note that this amount or more was earned by fewer than half of the individuals, since the median of the data is only 4.65 USD.
Likewise, individuals' hourly earnings deviate from the mean by 3.69 USD on average, which shows a high level of variation in wages; this is confirmed by the coefficient of variation of the data, which reaches 0.63.
This may occur because a minimum-wage floor affects part of the individuals, while the rest, in better positions, manage to obtain higher wages.
<Br>
* **Variable educ**:
| Statistic      | Value |
|----------------|-------|
| Mean           | 12.56 |
| Median         | 12    |
| Std. deviation | 2.77  |
| Variance       | 7.67  |
| Coef. of var.  | 0.22  |
<Br>
The average years of education in this sample of the US population for 1976 is 12.56 years; note that half of the individuals had at least 12 years of education.
On the other hand, individuals' years of education deviate from the mean by 2.77 years on average, which indicates a low level of dispersion. This may be because some individuals completed only primary and secondary school, while others added some extra years of professional education.
Even so, the coefficient of variation of years of education is only 0.22, which also suggests that the education system and government of the time kept the differences in individuals' years of education from being very large.
| github_jupyter |
```
import os
import csv
import cv2
import numpy as np
from PIL import Image
import sklearn
from random import shuffle
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True # dynamically grow the memory used on the GPU
config.log_device_placement = True # to log device placement (on which device the operation ran)
# (nothing gets printed in Jupyter, only if you run it standalone)
sess = tf.Session(config=config)
set_session(sess)
csv_file = './data/driving_log.csv'
path = './data/' # fill in the path to your training IMG directory
samples = []
with open(csv_file) as csvfile:
reader = csv.reader(csvfile)
headers = next(reader)
for line in reader:
samples.append(line)
print('Done')
shuffle(samples)
images = []
angles = []
for batch_sample in samples:
steering_center = float(batch_sample[3].strip())
# create adjusted steering measurements for the side camera images
correction = 0.2 # this is a parameter to tune
steering_left = steering_center + correction
steering_right = steering_center - correction
# read in images from center, left and right cameras
img_center = np.array(Image.open(path + batch_sample[0].strip()))
img_left = np.array(Image.open(path + batch_sample[1].strip()))
img_right = np.array(Image.open(path + batch_sample[2].strip()))
# add images and angles to data set
images.extend((img_center, img_left, img_right, np.fliplr(img_center), np.fliplr(img_left), np.fliplr(img_right)))
angles.extend((steering_center, steering_left, steering_right, -steering_center, -steering_left, -steering_right))
break
print(len(images))
images = np.array(images)
print(images.shape)
import matplotlib.pyplot as plt
plt.rcdefaults()
%matplotlib inline
i=1
fig = plt.figure(figsize=(15, 5))
for image in images:
plt.subplot(2, 3, i)
plt.imshow(image)
plt.axis('off')
i+=1
fig.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
print(angles)
from sklearn.model_selection import train_test_split
train_samples, validation_samples = train_test_split(samples, test_size=0.2)
def generator(samples, batch_size=32):
num_samples = len(samples)
    while True:  # Loop forever so the generator never terminates
shuffle(samples)
for offset in range(0, num_samples, batch_size):
batch_samples = samples[offset:offset+batch_size]
images = []
angles = []
for batch_sample in batch_samples:
steering_center = float(batch_sample[3].strip())
# create adjusted steering measurements for the side camera images
correction = 0.2 # this is a parameter to tune
steering_left = steering_center + correction
steering_right = steering_center - correction
# read in images from center, left and right cameras
img_center = np.array(Image.open(path + batch_sample[0].strip()))
img_left = np.array(Image.open(path + batch_sample[1].strip()))
img_right = np.array(Image.open(path + batch_sample[2].strip()))
# add images and angles to data set
images.extend((img_center, img_left, img_right, np.fliplr(img_center), np.fliplr(img_left), np.fliplr(img_right)))
angles.extend((steering_center, steering_left, steering_right, -steering_center, -steering_left, -steering_right))
# trim image to only see section with road
X_train = np.array(images)
y_train = np.array(angles)
yield sklearn.utils.shuffle(X_train, y_train)
# compile and train the model using the generator function
train_generator = generator(train_samples, batch_size=32)
validation_generator = generator(validation_samples, batch_size=32)
from keras.models import Sequential
from keras.layers import Flatten, Dense, Lambda, Cropping2D, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
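# The stack of strided convolutions below (24-36-48-64-64 filters, 5x5 then
# 3x3 kernels) loosely follows NVIDIA's end-to-end self-driving architecture
# (Bojarski et al., 2016), with dropout added between the fully connected layers.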
model = Sequential()
model.add(Lambda(lambda x: (x / 255.0) - 0.5, input_shape=(160,320,3)))
model.add(Cropping2D(cropping=((70,25), (0,0))))
model.add(Convolution2D(24,5,5,subsample=(2,2),activation='relu'))
model.add(Convolution2D(36,5,5,subsample=(2,2),activation='relu'))
model.add(Convolution2D(48,5,5,subsample=(2,2),activation='relu'))
model.add(Convolution2D(64,3,3,subsample=(1,1),activation='relu'))
model.add(Convolution2D(64,3,3,subsample=(1,1),activation='relu'))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(100))
model.add(Dropout(0.5))
model.add(Dense(50))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Dropout(0.5))
model.add(Dense(1))
print(model.summary())
model.compile(optimizer='adam', loss='mse')
history_object = model.fit_generator(train_generator, samples_per_epoch=len(train_samples)*6, validation_data=validation_generator, nb_val_samples=len(validation_samples)*6, nb_epoch=5)
model.save('model.h5')
from keras.utils.visualize_util import plot
plot(model, to_file='model.png')
plot(model,show_shapes=True, to_file='modelwithshapes.png')
from IPython.display import SVG
from keras.utils.visualize_util import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
### print the keys contained in the history object
print(history_object.history.keys())
### plot the training and validation loss for each epoch
plt.plot(history_object.history['loss'])
plt.plot(history_object.history['val_loss'])
#plt.ylim(ymin=0)
plt.title('model mean squared error loss')
plt.ylabel('mean squared error loss')
plt.xlabel('epoch')
plt.legend(['training set', 'validation set'], loc='upper right')
plt.show()
```
| github_jupyter |
# DCGAN - Create Images from Random Numbers!
### Generative Adversarial Networks
Ever since Ian Goodfellow and colleagues [introduced the concept of Generative Adversarial Networks (GANs)](https://arxiv.org/abs/1406.2661), GANs have been a popular topic in the field of AI. GANs are an application of unsupervised learning - you don't need labels for your dataset in order to train a GAN.
The GAN framework is composed of two neural networks: a generator network and a discriminator network.
The generator's job is to take a set of random numbers and produce data (such as images or text).
The discriminator then takes in that data, as well as samples of real data from a dataset, and tries to determine whether each sample is "fake" (created by the generator network) or "real" (from the original dataset).
During training, the two networks play a game against each other.
The generator tries to create realistic data so that it can fool the discriminator into thinking that the data it generated is from the original dataset. At the same time, the discriminator tries not to be fooled - it learns to become better at determining whether data is real or fake.
Since the two networks are fighting in this game, they can be seen as adversaries, which is where the term "Generative Adversarial Network" comes from.
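Formally, the original GAN paper casts this game as a two-player minimax problem over the discriminator $D$ and the generator $G$:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$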
### Deep Convolutional Generative Adversarial Networks
This notebook takes a look at Deep Convolutional Generative Adversarial Networks (DCGAN), which combine Convolutional Neural Networks (CNNs) and GANs.
We will create a DCGAN that is able to create images of handwritten digits from random numbers.
The tutorial uses the neural net architecture and guidelines outlined in [this paper](https://arxiv.org/abs/1511.06434), and the MNIST dataset.
## How to Use This Tutorial
You can use this tutorial by executing each snippet of python code in order as it appears in the notebook.
In this tutorial, we will train a DCGAN on MNIST, which will ultimately produce two neural networks:
- The first net is the "generator" and creates images of handwritten digits from random numbers.
- The second net is the "discriminator" and determines if the image created by the generator is real (a realistic looking image of handwritten digits) or fake (an image that doesn't look like it came from the original dataset).
Apart from creating a DCGAN, you'll also learn:
- How to manipulate and iterate through batches of images that you can feed into your neural network.
- How to create a custom MXNet data iterator that generates random numbers from a normal distribution.
- How to create a custom training process in MXNet, using lower-level functions from the [MXNet Module API](http://mxnet.io/api/python/module.html) such as `.bind()`, `.forward()`, and `.backward()`. The training process for a DCGAN is more complex than that of many other neural networks, so we need to use these functions instead of the higher-level `.fit()` function.
- How to visualize images as they are going through the training process
## Prerequisites
This notebook assumes you're familiar with the concept of CNN's and have implemented one in MXNet. If you haven't, check out [this tutorial](https://github.com/dmlc/mxnet-notebooks/blob/master/python/tutorials/mnist.ipynb), which walks you through implementing a CNN in MXNet. You should also be familiar with the concept of logistic regression.
Having a basic understanding of MXNet data iterators helps, since we'll create a custom Data Iterator to iterate through random numbers as inputs to our generator network. Take a look at [this tutorial](https://github.com/dmlc/mxnet-notebooks/blob/master/python/basic/data.ipynb) for a better understanding of how MXNet `DataIter` works.
This example is designed to be trained on a single GPU. Training this network on CPU can be slow, so it's recommended that you use a GPU for training.
To complete this tutorial, you need:
- [MXNet](http://mxnet.io/get_started/setup.html#overview)
- [Python 2.7](https://www.python.org/download/releases/2.7/), and the following libraries for Python:
- [Numpy](http://www.numpy.org/) - for matrix math
- [OpenCV](http://opencv.org/) - for image manipulation
- [Scikit-learn](http://scikit-learn.org/) - to easily get our dataset
- [Matplotlib](https://matplotlib.org/) - to visualize our output
## The Data
We need two pieces of data to train our DCGAN:
1. Images of handwritten digits from the MNIST dataset
2. Random numbers from a normal distribution
Our generator network will use the random numbers as the input to produce images of handwritten digits, and our discriminator network will use images of handwritten digits from the MNIST dataset to determine if images produced by our generator are realistic.
We are going to use the python library, scikit-learn, to get the MNIST dataset. Scikit-learn comes with a function that gets the dataset for us, which we will then manipulate to create our training and testing inputs.
The MNIST dataset contains 70,000 images of handwritten digits. Each image is 28x28 pixels in size.
To create random numbers, we're going to create a custom MXNet data iterator, which will return random numbers from a normal distribution as we need them.
## Prepare the Data
### 1. Preparing the MNIST dataset
Let's start by preparing our handwritten digits from the MNIST dataset. We import the `fetch_mldata` function from scikit-learn, and use it to get the MNIST dataset. Notice that its shape is 70000x784: each of the 70000 rows is one image, and the 784 columns are its pixels. Each image is 28x28 pixels, but has been flattened so that all 784 pixels are represented in a single row.
```
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
mnist.data.shape
```
Next, we'll randomize the handwritten digits by using numpy to create a random permutation of the dataset's rows (images). We'll then reshape our dataset from 70000x784 to 70000x28x28, so that every image in our dataset is arranged into a 28x28 grid, where each cell in the grid represents 1 pixel of the image.
```
import numpy as np
#Use a seed so that we get the same random permutation each time
np.random.seed(1)
p = np.random.permutation(mnist.data.shape[0])
X = mnist.data[p]
X = X.reshape((70000, 28, 28))
```
Since the DCGAN that we're creating takes in a 64x64 image as the input, we'll use OpenCV to resize each 28x28 image to 64x64:
```
import cv2
X = np.asarray([cv2.resize(x, (64,64)) for x in X])
```
Each pixel in our 64x64 image is represented by a number between 0 and 255 that represents the intensity of the pixel. However, we want to input numbers between -1 and 1 into our DCGAN, as suggested by the research paper. To rescale our pixels to be in the range of -1 to 1, we'll divide each pixel by (255/2). This puts our images on a scale of 0 to 2. We can then subtract 1 to get them in the range of -1 to 1.
```
X = X.astype(np.float32)/(255.0/2) - 1.0
```
Ultimately, images are input into our neural net as a 70000x3x64x64 array, and they are currently in a 70000x64x64 array. We need to add 3 channels to our images. Typically, when we are working with images, the 3 channels represent the red, green, and blue components of each image. Since the MNIST dataset is grayscale, we only need 1 channel to represent the data, so we simply replicate that channel across all 3:
```
X = X.reshape((70000, 1, 64, 64))
X = np.tile(X, (1, 3, 1, 1))
```
Finally, we'll put our images into MXNet's `NDArrayIter`, which will allow MXNet to easily iterate through our images during training. We'll also split the images into batches, with 64 images in each batch. Every time we iterate, we'll get a 4 dimensional array with size `(64, 3, 64, 64)`, representing a batch of 64 images.
```
import mxnet as mx
batch_size = 64
image_iter = mx.io.NDArrayIter(X, batch_size=batch_size)
```
## 2. Preparing Random Numbers
We need to input random numbers from a normal distribution to our generator network, so we'll create an MXNet DataIter that produces random numbers for each training batch. The `DataIter` is the base class of [MXNet's Data Loading API](http://mxnet.io/api/python/io.html). Below, we create a class called `RandIter` which is a subclass of `DataIter`. If you want to know more about how MXNet data loading works in python, please look at [this notebook](https://github.com/dmlc/mxnet-notebooks/blob/master/python/basic/data.ipynb). We use MXNet's built in `mx.random.normal` function in order to return the normally distributed random numbers every time we iterate.
```
class RandIter(mx.io.DataIter):
def __init__(self, batch_size, ndim):
self.batch_size = batch_size
self.ndim = ndim
self.provide_data = [('rand', (batch_size, ndim, 1, 1))]
self.provide_label = []
def iter_next(self):
return True
def getdata(self):
#Returns random numbers from a gaussian (normal) distribution
#with mean=0 and standard deviation = 1
return [mx.random.normal(0, 1.0, shape=(self.batch_size, self.ndim, 1, 1))]
```
When we initialize our `RandIter`, we need to provide two numbers: the batch size and how many random numbers we want to produce a single image from. This number is referred to as `Z`, and we'll set it to 100; this value comes from the research paper on the topic. Every time we iterate and get a batch of random numbers, we will get a 4 dimensional array with shape `(batch_size, Z, 1, 1)`, which in our example is `(64, 100, 1, 1)`.
```
Z = 100
rand_iter = RandIter(batch_size, Z)
```
## Create the Model
Our model has two networks that we will train together - the generator network and the discriminator network.
Below is an illustration of our generator network:
<img src="dcgan-model.png">
Source: https://arxiv.org/abs/1511.06434
The discriminator works exactly the same way but in reverse - using convolutional layers instead of deconvolutional layers to take an image and determine if it is real or fake.
The DCGAN paper recommends the following best practices for architecting DCGANs:
- Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).
- Use batchnorm in both the generator and the discriminator.
- Remove fully connected hidden layers for deeper architectures.
- Use ReLU activation in generator for all layers except for the output, which uses Tanh.
- Use LeakyReLU activation in the discriminator for all layers.
Our model will implement these best practices.
### The Generator
Let's start off by defining the generator network:
```
no_bias = True
fix_gamma = True
epsilon = 1e-5 + 1e-12
rand = mx.sym.Variable('rand')
g1 = mx.sym.Deconvolution(rand, name='g1', kernel=(4,4), num_filter=1024, no_bias=no_bias)
gbn1 = mx.sym.BatchNorm(g1, name='gbn1', fix_gamma=fix_gamma, eps=epsilon)
gact1 = mx.sym.Activation(gbn1, name='gact1', act_type='relu')
g2 = mx.sym.Deconvolution(gact1, name='g2', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=512, no_bias=no_bias)
gbn2 = mx.sym.BatchNorm(g2, name='gbn2', fix_gamma=fix_gamma, eps=epsilon)
gact2 = mx.sym.Activation(gbn2, name='gact2', act_type='relu')
g3 = mx.sym.Deconvolution(gact2, name='g3', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=256, no_bias=no_bias)
gbn3 = mx.sym.BatchNorm(g3, name='gbn3', fix_gamma=fix_gamma, eps=epsilon)
gact3 = mx.sym.Activation(gbn3, name='gact3', act_type='relu')
g4 = mx.sym.Deconvolution(gact3, name='g4', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=128, no_bias=no_bias)
gbn4 = mx.sym.BatchNorm(g4, name='gbn4', fix_gamma=fix_gamma, eps=epsilon)
gact4 = mx.sym.Activation(gbn4, name='gact4', act_type='relu')
g5 = mx.sym.Deconvolution(gact4, name='g5', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=3, no_bias=no_bias)
generatorSymbol = mx.sym.Activation(g5, name='gact5', act_type='tanh')
```
Our generator starts with random numbers obtained from the `RandIter` we created earlier, so we created the `rand` variable for this input.
We then build the model starting with a Deconvolution layer (sometimes called a 'fractionally strided' layer). We apply batch normalization and ReLU activation after the Deconvolution layer.
We repeat this process 4 times, applying a `(2,2)` stride and `(1,1)` pad at each Deconvolutional layer, which doubles the size of our image at each layer. By creating these layers, our generator network learns to upsample our input vector of random numbers, `Z`, at each layer, so that the network outputs a final image. We also halve the number of filters at each layer, reducing depth as the spatial size grows. Ultimately, our output layer is a 64x64x3 layer, representing the size and channels of our image. We use tanh activation instead of relu on the last layer, as recommended by the research on DCGANs. The outputs of the neurons in the final `gact5` layer represent the pixels of the generated image.
Notice we used 3 parameters to help us create our model: `no_bias`, `fix_gamma`, and `epsilon`.
Neurons in our network won't have a bias added to them; this seems to work better in practice for the DCGAN.
In our batch norm layers, we set `fix_gamma=True`, which means `gamma=1` for all of our batch norm layers.
`epsilon` is a small number that gets added to our batch norm so that we don't end up dividing by zero. By default, CuDNN requires that this number is greater than `1e-5`, so we add a small number to this value, ensuring it stays small.
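To sanity-check this doubling, one can ask MXNet to infer the generator's output shape for a batch of `Z`-vectors (a quick illustrative check, not part of the original tutorial):
```
# Each deconvolution doubles the spatial size: 1 -> 4 -> 8 -> 16 -> 32 -> 64.
_, out_shapes, _ = generatorSymbol.infer_shape(rand=(batch_size, Z, 1, 1))
print(out_shapes)  # expected: [(64, 3, 64, 64)]
```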
### The Discriminator
Let's now create our discriminator network, which will take in images of handwritten digits from the MNIST dataset and images created by the generator network:
```
data = mx.sym.Variable('data')
d1 = mx.sym.Convolution(data, name='d1', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=128, no_bias=no_bias)
dact1 = mx.sym.LeakyReLU(d1, name='dact1', act_type='leaky', slope=0.2)
d2 = mx.sym.Convolution(dact1, name='d2', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=256, no_bias=no_bias)
dbn2 = mx.sym.BatchNorm(d2, name='dbn2', fix_gamma=fix_gamma, eps=epsilon)
dact2 = mx.sym.LeakyReLU(dbn2, name='dact2', act_type='leaky', slope=0.2)
d3 = mx.sym.Convolution(dact2, name='d3', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=512, no_bias=no_bias)
dbn3 = mx.sym.BatchNorm(d3, name='dbn3', fix_gamma=fix_gamma, eps=epsilon)
dact3 = mx.sym.LeakyReLU(dbn3, name='dact3', act_type='leaky', slope=0.2)
d4 = mx.sym.Convolution(dact3, name='d4', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=1024, no_bias=no_bias)
dbn4 = mx.sym.BatchNorm(d4, name='dbn4', fix_gamma=fix_gamma, eps=epsilon)
dact4 = mx.sym.LeakyReLU(dbn4, name='dact4', act_type='leaky', slope=0.2)
d5 = mx.sym.Convolution(dact4, name='d5', kernel=(4,4), num_filter=1, no_bias=no_bias)
d5 = mx.sym.Flatten(d5)
label = mx.sym.Variable('label')
discriminatorSymbol = mx.sym.LogisticRegressionOutput(data=d5, label=label, name='dloss')
```
We start off by creating the `data` variable, which is used to hold our input images to the discriminator.
The discriminator then goes through a series of convolutional layers, the first four of which use a 4x4 kernel, 2x2 stride, and 1x1 pad. These layers halve the size of the image (which starts at 64x64) at each convolutional layer. The model also increases dimensionality at each layer by doubling the number of filters per convolutional layer, starting at 128 filters and ending at 1024 filters before we flatten the output.
At the final convolution, we flatten the neural net to get one number as the final output of discriminator network. This number is the probability the image is real, as determined by our discriminator. We use logistic regression to determine this probability. When we pass in "real" images from the MNIST dataset, we can label these as `1` and we can label the "fake" images from the generator net as `0` to perform logistic regression on the discriminator network.
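Concretely, the `dloss` output applies the logistic (sigmoid) function to the single flattened score $f(x)$, so the discriminator's probability estimate is

$$D(x) = \sigma\big(f(x)\big) = \frac{1}{1 + e^{-f(x)}},$$

with target label `1` for real images and `0` for generated ones.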
### Prepare the models using the `Module` API
So far we have defined a MXNet `Symbol` for both the generator and the discriminator network.
Before we can train our model, we need to bind these symbols using the `Module` API, which creates the computation graph for our models. It also allows us to decide how we want to initialize our model and what type of optimizer we want to use. Let's set up `Module` for both of our networks:
```
#Hyperperameters
sigma = 0.02
lr = 0.0002
beta1 = 0.5
ctx = mx.gpu(0)
#=============Generator Module=============
generator = mx.mod.Module(symbol=generatorSymbol, data_names=('rand',), label_names=None, context=ctx)
generator.bind(data_shapes=rand_iter.provide_data)
generator.init_params(initializer=mx.init.Normal(sigma))
generator.init_optimizer(
optimizer='adam',
optimizer_params={
'learning_rate': lr,
'beta1': beta1,
})
mods = [generator]
# =============Discriminator Module=============
discriminator = mx.mod.Module(symbol=discriminatorSymbol, data_names=('data',), label_names=('label',), context=ctx)
discriminator.bind(data_shapes=image_iter.provide_data,
label_shapes=[('label', (batch_size,))],
inputs_need_grad=True)
discriminator.init_params(initializer=mx.init.Normal(sigma))
discriminator.init_optimizer(
optimizer='adam',
optimizer_params={
'learning_rate': lr,
'beta1': beta1,
})
mods.append(discriminator)
```
First, we create `Modules` for our networks and then bind the symbols that we've created in the previous steps to our modules.
We use `rand_iter.provide_data` as the `data_shape` to bind our generator network. This means that as we iterate through batches of data on the generator `Module`, our `RandIter` will provide us with random numbers to feed our `Module` using its `provide_data` function.
Similarly, we bind the discriminator `Module` to `image_iter.provide_data`, which gives us images from MNIST from the `NDArrayIter` we had set up earlier, called `image_iter`.
Notice that we're using the `Normal` initialization, with the hyperparameter `sigma=0.02`. This means the weight initializations for the neurons in our networks will be random numbers drawn from a Gaussian (normal) distribution with a mean of 0 and a standard deviation of 0.02.
We also use the adam optimizer for gradient descent. We've set up two hyperparameters, `lr` and `beta1`, based on the values used in the DCGAN paper. We're using a single GPU, `gpu(0)`, for training.
### Visualizing Our Training
Before we train the model, let's set up some helper functions that will help visualize what our generator is producing, compared to what the real image is:
```
from matplotlib import pyplot as plt
#Takes the images in our batch and arranges them in an array so that they can be
#Plotted using matplotlib
def fill_buf(buf, num_images, img, shape):
    # Integer arithmetic so the grid indexing works under Python 3 as well as Python 2
    width = buf.shape[0] // shape[1]
    height = buf.shape[1] // shape[0]
    img_width = (num_images % width) * shape[0]
    img_height = (num_images // height) * shape[1]
    buf[img_height:img_height+shape[1], img_width:img_width+shape[0], :] = img
#Plots two images side by side using matplotlib
def visualize(fake, real):
#64x3x64x64 to 64x64x64x3
fake = fake.transpose((0, 2, 3, 1))
#Pixel values from 0-255
fake = np.clip((fake+1.0)*(255.0/2.0), 0, 255).astype(np.uint8)
#Repeat for real image
real = real.transpose((0, 2, 3, 1))
real = np.clip((real+1.0)*(255.0/2.0), 0, 255).astype(np.uint8)
#Create buffer array that will hold all the images in our batch
#Fill the buffer so to arrange all images in the batch onto the buffer array
n = np.ceil(np.sqrt(fake.shape[0]))
fbuff = np.zeros((int(n*fake.shape[1]), int(n*fake.shape[2]), int(fake.shape[3])), dtype=np.uint8)
for i, img in enumerate(fake):
fill_buf(fbuff, i, img, fake.shape[1:3])
rbuff = np.zeros((int(n*real.shape[1]), int(n*real.shape[2]), int(real.shape[3])), dtype=np.uint8)
for i, img in enumerate(real):
fill_buf(rbuff, i, img, real.shape[1:3])
#Create a matplotlib figure with two subplots: one for the real and the other for the fake
#fill each plot with our buffer array, which creates the image
fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax1.imshow(fbuff)
ax2 = fig.add_subplot(2,2,2)
ax2.imshow(rbuff)
plt.show()
```
## Fit the Model
Training the DCGAN is a complex process that requires multiple steps.
To fit the model, for every batch of data in our dataset:
1. Use the `Z` vector, which contains our random numbers to do a forward pass through our generator. This outputs the "fake" image, since it's created from our generator.
2. Use the fake image as the input to do a forward and backwards pass through the discriminator network. We set our labels for our logistic regression to `0` to represent that this is a fake image. This trains the discriminator to learn what a fake image looks like. We save the gradient produced by backpropagation for the next step.
3. Do a forwards and backwards pass through the discriminator using a real image from our dataset. Our label for logistic regression will now be `1` to represent real images, so our discriminator can learn to recognize a real image.
4. Update the discriminator by adding the gradient generated during backpropagation on the fake image to the gradient from backpropagation on the real image.
5. Now that the discriminator has been updated for this batch, we still need to update the generator. First, do a forward and backwards pass with the same batch on the updated discriminator, to produce a new input gradient. Then use that gradient to do a backwards pass through the generator and update its weights.
Here's the main training loop for our DCGAN:
```
# =============train===============
print('Training...')
for epoch in range(1):
image_iter.reset()
for i, batch in enumerate(image_iter):
#Get a batch of random numbers to generate an image from the generator
rbatch = rand_iter.next()
#Forward pass on training batch
generator.forward(rbatch, is_train=True)
#Output of training batch is the 64x64x3 image
outG = generator.get_outputs()
#Pass the generated (fake) image through the discriminator, and save the gradient
#Label (for logistic regression) is an array of 0's since this image is fake
label = mx.nd.zeros((batch_size,), ctx=ctx)
#Forward pass on the output of the discriminator network
discriminator.forward(mx.io.DataBatch(outG, [label]), is_train=True)
#Do the backwards pass and save the gradient
discriminator.backward()
gradD = [[grad.copyto(grad.context) for grad in grads] for grads in discriminator._exec_group.grad_arrays]
#Pass a batch of real images from MNIST through the discriminator
#Set the label to be an array of 1's because these are the real images
label[:] = 1
batch.label = [label]
#Forward pass on a batch of MNIST images
discriminator.forward(batch, is_train=True)
#Do the backwards pass and add the saved gradient from the fake images to the gradient
#generated by this backwards pass on the real images
discriminator.backward()
for gradsr, gradsf in zip(discriminator._exec_group.grad_arrays, gradD):
for gradr, gradf in zip(gradsr, gradsf):
gradr += gradf
#Update gradient on the discriminator
discriminator.update()
#Now that we've updated the discriminator, let's update the generator
#First do a forward pass and backwards pass on the newly updated discriminator
#With the current batch
discriminator.forward(mx.io.DataBatch(outG, [label]), is_train=True)
discriminator.backward()
#Get the input gradient from the backwards pass on the discriminator,
#and use it to do the backwards pass on the generator
diffD = discriminator.get_input_grads()
generator.backward(diffD)
#Update the gradients on the generator
generator.update()
#Increment to the next batch, printing every 50 batches
i += 1
if i % 50 == 0:
print('epoch:', epoch, 'iter:', i)
            print()
print(" From generator: From MNIST:")
visualize(outG[0].asnumpy(), batch.data[0].asnumpy())
```
Here we have our GAN being trained, and we can visualize the progress that we're making as our networks train. Every 50 iterations, we call the `visualize` function that we created earlier, which creates the visual plots during training.
The plot on our left is what our generator created (the fake image) in the most recent iteration. The plot on the right is the original (real) image from the MNIST dataset that was inputted to the discriminator on the same iteration.
As training goes on the generator becomes better at generating realistic images. You can see this happening since images on the left become closer to the original dataset with each iteration.
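Once training finishes, fresh digits can be sampled by pushing one more batch of random numbers through the trained generator; a minimal sketch reusing the iterators and the `visualize` helper defined above:
```
# Draw a new batch of Z-vectors and run a forward pass only (no gradients needed).
rbatch = rand_iter.next()
generator.forward(rbatch, is_train=False)
fake = generator.get_outputs()[0].asnumpy()
# Compare against the first batch of real, preprocessed MNIST images.
visualize(fake, X[:batch_size])
```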
## Summary
We've now successfully used Apache MXNet to train a Deep Convolutional GAN using the MNIST dataset.
As a result, we've created two neural nets: a generator, which is able to create images of handwritten digits from random numbers, and a discriminator, which is able to take an image and determine if it is an image of handwritten digits.
Along the way, we've learned how to do the image manipulation and visualization that is associated with training deep neural nets. We've also learned how to use some of MXNet's advanced training functionality to fit our model.
## Acknowledgements
This tutorial is based on [MXNet DCGAN codebase](https://github.com/dmlc/mxnet/blob/master/example/gan/dcgan.py), the [original paper on GANs](https://arxiv.org/abs/1406.2661), as well as [this paper](https://arxiv.org/abs/1511.06434) on deep convolutional GANs.
| github_jupyter |