\",\n 'coefficient': 0.5,\n },\n {\n 'title':\n \"Give three tags that belong to the head section\"\n \" of an HTML page, one per line\"\n \" (i.e. between <head> and </head>).\",\n\n 'coefficient': 0.5,\n },\n {\n 'title':\n \"Describe in a few words the role of HTML and CSS.\",\n 'coefficient': 1,\n },\n {\n 'title':\n \"HTML is an acronym. What is its expanded form?\",\n 'coefficient': 0.25,\n },\n {\n 'title':\n \"What is the role of \"\n \" in an HTML file?\",\n 'coefficient': 0.5,\n },\n {\n 'title':\n \"What do we have to write in HTML to render the\"\n \" < symbol?\",\n 'coefficient': 0.5,\n },\n {\n 'title':\n \"Which one of the following is a valid\"\n \" piece of HTML (enter 1, 2, 3 or 4):\"\n \"
<p>Lorem ipsum <em>dolor <strong>sit</strong></em> amet</p>
\"\n \"
<p>Lorem ipsum <html>dolor <strong>sit</strong></html> amet</p>
\"\n \"
<p>Lorem ipsum <em>dolor <strong>sit</em></strong> amet</p>
\"\n \"
<p>Lorem ipsum <em>dolor <strong>sit</strong><em> amet</p>
\"\n \"\",\n\n 'coefficient': 0.5,\n },\n {\n 'title':\n \"Write a piece of HTML that displays \\\"Go to hell\\\"\"\n \" such as the text is a link to the website\"\n \" \\\"http://666.com\\\".\",\n\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'I am browsing a website at \"http://example.org/blog/\". What'\n ' will be the address of my browser if I click on:'\n '
<a href=\"/userlist/\">
',\n 'coefficient': 0.25,\n },\n {\n 'title':\n 'Warning, this is not the same question! I am browsing a website at \"http://example.org/blog/\". What'\n ' will be the address of my browser if I click on:'\n '
<a href=\"userlist/\">
',\n 'coefficient': 0.25,\n },\n {\n 'title':\n 'Be careful! I am browsing a website at \"http://example.org/blog\". What'\n ' will be the address of my browser if I click on:'\n '
<a href=\"userlist/\">
',\n 'coefficient': 0.25,\n },\n {\n 'title':\n 'This is the last question about urls! '\n ' I am browsing a website at \"http://example.org/blog\". What'\n ' will be the address of my browser if I click on:'\n '
<a href=\"userlist/\">
',\n 'coefficient': 0.25,\n },\n {\n 'title':\n 'Give a CSS selector for the p tags'\n ' inside a node with class major.',\n 'coefficient':1,\n },\n\n ],\n },\n {\n 'id': '02-Elm',\n 'name' : 'Elm',\n 'end_date': date(2019, 9, 19),\n 'questions': [\n { 'title':\n \"Write a mult function\"\n \" w(ith type annotation) taking two arguments and\"\n \" multiplying them.\",\n 'coefficient': 0.5,\n },\n {\n 'title':\n \"Let f : String -> Int\"\n \" and g : List Float -> Int\"\n \" be two functions. What is the type of\"\n \" h where:\"\n \"
\"\n \" h a b = f a + g b\"\n \"
\",\n 'coefficient': 1,\n },\n {\n 'title':\n \"How to produce the following piece of HTML in Elm?\"\n \"
\",\n 'coefficient': 1,\n },\n {\n 'title':\n 'Write a function f (with'\n ' type annotation) that converts a'\n ' list of integers into a list of string. For instance'\n ' f [2, 3, 5, 7] == [\"2\", \"3\", \"5\", \"7\"].'\n ' All the functions you need are in the slides.',\n 'coefficient': 1,\n },\n {\n 'title':\n 'ageStr is a string entered by a user.'\n ' Define a variable ageInt which is'\n ' the conversion of ageStr in IntInt'\n ' or -1 otherwise.',\n 'coefficient': 1,\n },\n\n ],\n },\n {\n 'id': '03-HTTP',\n 'name': 'HTTP',\n 'end_date': date(2019, 9, 26),\n 'questions': [],\n },\n {\n 'id': '04-AJAX',\n 'name': 'Json and Elm decoders',\n 'end_date': date(2019, 10, 2),\n 'questions': [\n {\n 'title':\n 'In Elm, what type do we use if we want represent'\n ' failure with an error message?',\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'True or false? In a Result,'\n ' the error and the value types'\n ' can be the same.',\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'Write a piece of JSON respresenting multiple'\n ' animals: Bud, a dog; Kit, a cat and Bob, a fish',\n 'coefficient': 1,\n },\n {\n 'title':\n 'JSON is an acronym. What is its expanded form?',\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'True or false? Json can only be use with Javascript.',\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'True or false? A JSON object can'\n ' only be decoded'\n ' to an Elm record containing the same number'\n ' of fields.',\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'True or false? A field in a JSON object can'\n ' only be decoded'\n ' to an Elm field record with the same name.',\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'Write out a type and a decoder for the following'\n ' piece of JSON:'\n '
',\n 'coefficient': 1.5,\n }\n\n ],\n },\n {\n 'id': '05-security',\n 'name': 'User account and security',\n 'end_date': date(2019, 10, 9),\n 'questions': [\n {\n 'title':\n 'How should you store the passwords of the users in the'\n ' database?',\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'What mechnanism or technology does permit to'\n ' safely send the password user to the server?',\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'True or false? When dealing with password hashes, it is safe'\n ' to take the same salt for all the passwords.',\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'Give a simple solution to mitigate the effects of a session'\n ' hijacking.',\n 'coefficient': 0.5,\n },\n {\n 'title':\n 'How are the cookies shared between the client and the'\n ' server?',\n 'coefficient': 1,\n },\n {\n 'title':\n 'True or False? A malicious user can be logged in to an account'\n ' on a website without providing the password of this account.'\n ' If true, explain how, otherwise explain why (give only the'\n '\"big idea\").',\n 'coefficient': 1,\n },\n {\n 'title':\n 'In the following piece of code, form is a'\n ' dictionary containing the input from the user.'\n ' You can see here a request performed in an instant messaging'\n ' software. This reuqest searches the messeages sent to a given'\n ' user among all the messages from the current user.'\n '
'\n ' What value could a malicious user use for the \"to_user\"'\n ' field in order to get all the messages in the database?',\n 'coefficient': 1.5,\n },\n {\n 'title':\n 'Rewrite this following piece of code to prevent SQL'\n ' injections:'\n '
'),\n ]\n result = HTMLFeatureInfoDoc.combine(docs)\n\n eq_(b\"Hello
baz
\"\n b\"
baz2\\n
foo
\\n
bar
\",\n result.as_string())\n eq_(result.info_type, 'text')\n","sub_path":"mapproxy/test/unit/test_featureinfo.py","file_name":"test_featureinfo.py","file_ext":"py","file_size_in_byte":6337,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"72595543","text":"# Always prefer setuptools over distutils\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nwith open('requirements.txt') as f:\n required = f.read().splitlines()\n\nsetup(\n name='shapley-effects',\n version='0.1',\n description='Estimation of Shapley effects for Sensitivity Analysis of Model Output.',\n long_description=open('README.md').read(),\n url='https://gitlab.com/CEMRACS17/shapley-effects',\n author='Nazih BENOUMECHIARA & Kevin ELIE-DIT-COSAQUE',\n author_email = 'nazih.benoumechiara@gmail.com',\n license='MIT',\n keywords=['sensitivity analysis', 'shapley', 'effects', 'depedencies'],\n packages=['shapley'],\n install_requires=required\n)","sub_path":"pypi_install_script/shapley-effects-0.1.tar/setup.py","file_name":"setup.py","file_ext":"py","file_size_in_byte":723,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"247142518","text":"from django.shortcuts import render_to_response, get_object_or_404\nfrom django.http import HttpResponse, HttpResponseRedirect\nfrom django.template import RequestContext\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.conf import settings\n\nimport os\nimport random\n\nfrom ppt.models import *\nfrom ppt.forms import *\n\n\n#########\n# General\n#########\n\n\ndef homepage(request):\n user = request.user\n\n return render_to_response('ppt/index.html',\n { 'user': user,\n },\n context_instance=RequestContext(request))\n\n\n\n#########\n# Users\n#########\n\n\n@login_required\ndef user_list(request):\n users = User.objects.all()\n \n return render_to_response('ppt/user_list.html',\n {'users': users}, context_instance=RequestContext(request)\n )\n\n\n@login_required\ndef user_view(request, username=False):\n if not username:\n username = request.user\n\n user = get_object_or_404(User, username=username)\n \n ppts = Ppt.objects.filter(user_id=user.id).order_by('pk')\n \n return render_to_response('ppt/user_view.html',\n { 'user': user,\n 'ppts': ppts\n },\n context_instance=RequestContext(request)\n )\n\n\n##############\n# PowerPoints\n###############\n\n@login_required\ndef user_ppt_view(request, username, ppt_id):\n user = get_object_or_404(User, username=username)\n ppt = Ppt.objects.get(user_id=user.id, id=ppt_id)\n \n return render_to_response('ppt/user_ppt_view.html',\n { 'user': user,\n 'ppt': ppt,\n },\n context_instance=RequestContext(request)\n )\n\n\n\n# Note that this both uploads new files and allows edits.\n@login_required\ndef user_ppt_edit(request, username, ppt_id=False):\n user = User.objects.get(username=request.user)\n\n print(ppt_id)\n\n if not ppt_id == False:\n ppt = get_object_or_404(Ppt, id=ppt_id)\n else:\n ppt = Ppt(user=user, title='', description='')\n\n if request.method == 'POST':\n pptForm = PptForm(request.POST, request.FILES, instance=ppt)\n if pptForm.is_valid():\n ppt = pptForm.save()\n return HttpResponseRedirect(ppt.get_absolute_url())\n else:\n pptForm = PptForm(instance=ppt)\n \n return render_to_response('ppt/user_ppt_edit.html',\n {'user': user, 'form': pptForm},\n context_instance=RequestContext(request)\n )\n\n\n#########\n# Units\n#########\n\n@login_required\ndef unit_list(request):\n units = PptUnit.objects.all()\n \n return render_to_response('ppt/unit_list.html',\n {'units': units}, context_instance=RequestContext(request)\n )\n\n\n@login_required\ndef unit_view(request, unit_id):\n unit = get_object_or_404(PptUnit, id=unit_id)\n \n return render_to_response('ppt/unit_view.html',\n {'unit': unit}, context_instance=RequestContext(request)\n )\n\n\n\n","sub_path":"ppt/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":2942,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"653087069","text":"'''\n获取排放废气的公司的详细信息\n'''\nimport pyamf\nfrom pyamf.flex import messaging\nimport uuid\nimport requests\nimport datetime\nfrom pyamf import remoting\nfrom pyamf.flex import messaging\nfrom pyquery import PyQuery as pq\nfrom contextlib import closing\n\nfrom t3c.manager.db import DB\nfrom common_class import GetGasCommpanyData, ParseFlash\n\nclass GetGasCompanyDetial(object):\n\n def __init__(self):\n super(GetGasCompanyDetial, self).__init__()\n self.url = 'http://111.75.227.207:9180/eipp/messagebroker/amf'\n self.session = requests.Session()\n self.db = DB()\n\n def get_company_detial(self, company_id):\n message = ['getEnterpriseReportDetail', '', 'EnterpriseBasService']\n body = ['2017']\n\n body.insert(0, company_id)\n resp = ParseFlash().amf(message, body, self.url)\n\n if resp.ok:\n resp_msg = remoting.decode(resp.content).bodies[0][1]\n content = list(resp_msg.body.body)\n\n return content[0]\n\n def update_insert(self, insert, update, contents):\n with closing(self.db.engine.connect()) as cn:\n res = cn.execute(update, contents)\n\n if not res.rowcount:\n cn.execute(insert, contents)\n\n def save_detial(self, datas):\n insert_sql = 'INSERT INTO \"plant_app_tcx\".\"tb_enterprise\" (qy_corporation,qy_id,qy_address,'\\\n 'province_id,qy_link_phone,qy_wrylx,qy_name,qy_industry,qy_auto_monitor_operation_style)'\\\n 'VALUES (%(qy_corporation)s,%(qy_id)s,%(qy_address)s,%(province_id)s,%(qy_link_phone)s,'\\\n '%(qy_wrylx)s,%(qy_name)s,%(qy_industry)s,%(qy_auto_monitor_operation_style)s);'\n\n update_sql = 'UPDATE \"plant_app_tcx\".\"tb_enterprise\" SET qy_corporation=%(qy_corporation)s,'\\\n 'qy_address=%(qy_address)s,qy_link_phone=%(qy_link_phone)s,qy_wrylx=%(qy_wrylx)s,qy_name=%(qy_name)s,'\\\n 'qy_industry=%(qy_industry)s,qy_auto_monitor_operation_style=%(qy_auto_monitor_operation_style)s '\\\n 'WHERE ( qy_id=%(qy_id)s AND province_id=%(province_id)s);'\n\n for data in datas:\n try:\n con = {}\n con['qy_name'] = data['enterPriseName']\n con['qy_corporation'] = data['legalPerson']\n con['qy_link_phone'] = data['officePhone']\n con['qy_address'] = data['address']\n con['qy_industry'] = data['industryTypeName']\n con['qy_wrylx'] = data['monitorTypeName']\n con['province_id'] = int(36000000)\n con['qy_id'] = str(int(data['enterPriseId']))\n con['qy_auto_monitor_operation_style'] = '自动监测和手工监测'\n\n self.update_insert(insert_sql, update_sql, con)\n\n except Exception as e:\n print(e)\n\n def save_all_company_detial(self):\n company_id_list = GetGasCommpanyData().all_gas_company_name_company_id()\n company_detial_lists = []\n\n for item in company_id_list:\n company_detial_lists.append(self.get_company_detial(item[1]))\n\n self.save_detial(company_detial_lists)\n\nif __name__ == '__main__':\n t = GetGasCompanyDetial()\n t.save_all_company_detial()\n","sub_path":"江西环保局各排废气的厂污染数据抓取/江西环保l/get_gas_company_info.py","file_name":"get_gas_company_info.py","file_ext":"py","file_size_in_byte":3085,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"338169672","text":"import leviChess as lc\n\n##initialization \n\nBOARD_OFFSET_X = 79\nBOARD_OFFSET_Y = 79\nSPRITE_SIZE = 79\n\n#dictionary of every square coordinate, as strings. e.g. \"e4\"\n#from black POV\nboard = {chr(72 - j) + str(i + 1):[(BOARD_OFFSET_X + (j * SPRITE_SIZE), BOARD_OFFSET_Y + (i * SPRITE_SIZE))] for i in range(8) for j in range(8)}\nprint(board)\n\n#lc.init_graphics()\nlc.pygame.init()\nsize = width, height = 768, 768 #screen size\nscreen = lc.pygame.display.set_mode(size) #create screen\nlc.pygame.display.set_caption('Levi\\'s Chess')\nlc.pygame.mouse.set_visible(1)\n\n#load board asset\nbackground = lc.pygame.image.load(\"assets/chess_board.png\")\nbackground_rect = background.get_rect()\n\n#store all pieces in list\npieces = []\n#instantiate pieces, add to piece list\npieces.extend([lc.Piece(\"white\", \"pawn\", chr(65 + num) + str(2), board) for num in range(8)])\npieces.extend([lc.Piece(\"black\", \"pawn\", chr(65 + num) + str(7), board) for num in range(8)])\npieces.extend([lc.Piece(\"white\", \"rook\", chr(65 + (num * 7)) + str(1), board) for num in range(2)])\npieces.extend([lc.Piece(\"black\", \"rook\", chr(65 + (num * 7)) + str(8), board) for num in range(2)])\npieces.extend([lc.Piece(\"white\", \"knight\", chr(65 + ((num * 5) + 1)) + str(1), board) for num in range(2)])\npieces.extend([lc.Piece(\"black\", \"knight\", chr(65 + ((num * 5) + 1)) + str(8), board) for num in range(2)])\npieces.extend([lc.Piece(\"white\", \"bishop\", chr(65 + ((num * 3) + 2)) + str(1), board) for num in range(2)])\npieces.extend([lc.Piece(\"black\", \"bishop\", chr(65 + ((num * 3) + 2)) + str(8), board) for num in range(2)])\npieces.extend([lc.Piece(\"white\", \"queen\", \"D1\", board)])\npieces.extend([lc.Piece(\"black\", \"queen\", \"D8\", board)])\npieces.extend([lc.Piece(\"white\", \"king\", \"E1\", board)])\npieces.extend([lc.Piece(\"black\", \"king\", \"E8\", board)])\n\n#white to move first\nturn = 0\n#true when a piece has been clicked already, removes need for nested loops\npiece_selected = 0\n\n#create container for sprites\nallsprites = lc.pygame.sprite.RenderPlain(pieces)\n\n#init gameclock to spare cpu usage\nclock = lc.pygame.time.Clock()\n\n#main loop\nwhile 1:\n\tclock.tick(60)\n\n\t#this will restrict moveable pieces based on turn\n\tif turn == 0:\n\t\tto_move = \"white\"\n\telif turn == 1:\n\t\tto_move = \"black\"\n\n\t#render board\n\tscreen.blit(background, background_rect)\n\n\t#draw all sprites (TODO: ...that aren't captured)\n\tallsprites.draw(screen)\n\n\t#update display\n\tlc.pygame.display.flip()\n\n\t#get updated event queue\n\tev = lc.pygame.event.get()\n\n\t#select piece with a click, move the piece with the next click\n\tfor event in ev:\n\t\t#if a piece is clicked and none are currently selected\n\t\tif event.type == lc.pygame.MOUSEBUTTONDOWN and piece_selected == 0:\n\t\t\t#get click position\n\t\t\tpos = lc.pygame.mouse.get_pos()\n\n\t\t\t# get a list of all sprites that are under the mouse cursor\n\t\t\tclicked_sprites = [s for s in pieces if s.rect.collidepoint(pos)]\n\t\t\tif clicked_sprites:\n\t\t\t\t#if the piece has been clicked and it is the right team's turn\n\t\t\t\tif clicked_sprites[0].color == to_move:\n\n\t\t\t\t\t#enlarge selected piece for emphasis\n\t\t\t\t\tclicked_sprites[0].image = lc.pygame.transform.smoothscale(\\\n\t\t\t\t\t\tclicked_sprites[0].image, (80, 80))\n\n\t\t\t\t\t#recenter enlarged sprite\n\t\t\t\t\tclicked_sprites[0].rect = clicked_sprites[0].rect.move(\\\n\t\t\t\t\t\t(-12, -12))\n\n\t\t\t\t\t#next click will move the piece instead of selecting another sprite\n\t\t\t\t\tpiece_selected = 1\n\t\t\t\t\tprint(clicked_sprites[0].image)\n\n\t\t\t\t#if piece selected out of turn\n\t\t\t\telif clicked_sprites[0].color != to_move:\t\n\t\t\t\t\tprint(\"wrong turn\")\n\t\t\t\t\t\n\n\n\t\t\tprint(\"1st click selected \", clicked_sprites, piece_selected)\n\n\t\t#if a piece has been selected and needs to move\n\t\telif event.type == lc.pygame.MOUSEBUTTONDOWN and piece_selected == 1:\n\t\t\t#reset piece selection for next click\n\t\t\tpiece_selected = 0\n\n\t\t\t#get new click pos\n\t\t\tpos2 = lc.pygame.mouse.get_pos()\n\t\t\tprint(\"pos2\", pos2)\n\t\t\t#check if another piece was clicked on, if it's the same color, \n\t\t\t#select it instead of moving (unless king is castling)\n\t\t\t# get a list of all sprites that are under the mouse cursor\n\t\t\tnew_clicked_sprites = [p for p in pieces if p.rect.collidepoint(pos2)]\n\t\t\tprint(new_clicked_sprites)\n\t\t\t\n\t\t\t#logic for castling must be handled separately\n\t\t\t# if ((clicked_sprites[0].rank == \"king\" and new_clicked_sprites[0].rank == \"rook\")\\\n\t\t\t# and (clicked_sprites[0].color == to_move and new_clicked_sprites[0].color == to_move):\n\t\t\t# \tprint(\"castle attempted, no logic yet\")\n\t\t\t\n\t\t\t#if another piece is clicked on (instead of open square)\n\t\t\tif new_clicked_sprites:\n\n\t\t\t\t#if the player tries to move to a square they already occupy\n\t\t\t\tif new_clicked_sprites[0].color == to_move:\n\t\t\t\t\t#just select that new piece\n\t\t\t\t\t#enlarge selected piece for emphasis\n\t\t\t\t\tnew_clicked_sprites[0].image = lc.pygame.transform.smoothscale(\\\n\t\t\t\t\tnew_clicked_sprites[0].image, (80, 80))\n\n\t\t\t\t\t#unenlarge the old piece\n\t\t\t\t\tclicked_sprites[0].image = lc.pygame.transform.smoothscale(\\\n\t\t\t\t\tclicked_sprites[0].image, (60, 60))\n\n\t\t\t\t\t#recenter enlarged sprite\n\t\t\t\t\tnew_clicked_sprites[0].rect = new_clicked_sprites[0].rect.move(\\\n\t\t\t\t\t\t(-12, -12))\n\n\t\t\t\t\t#next click will move the piece instead of selecting another sprite\n\t\t\t\t\tpiece_selected = 1\n\t\t\t\t\tprint(new_clicked_sprites[0].image)\n\t\t\t\n\t\t\t#if an open square is clicked\n\t\t\telse:\n\t\t\t\t#un-enlarge piece for placement\n\t\t\t\tclicked_sprites[0].image, clicked_sprites[0].rect = lc.load_image(\\\n\t\t\t\t\tclicked_sprites[0].color + \"_\" + clicked_sprites[0].rank + \".png\", -1)\n\n\t\t\t\t#get the square that was clicked on \n\t\t\t\tnew_square = resolveSquare(pos2)\n\t\t\t\tprint(new_square)\n\t\t\t\t#pull the corresponding coordinate from board dict, render piece there\n\t\t\t\tclicked_sprites[0].rect.topleft = board[new_square][0]\n\t\t\t\t\n\t\t\t\t#let another piece get selected \n\t\t\t\tprint(\"second click\", piece_selected)\n\n\t\t\t\t#erase list of clicked sprites\n\t\t\t\tclicked_sprites = []\n\t\t\t\tprint(\"sprites selected after 2nd click \", clicked_sprites)\n\n\t\t\t\t#change turns\n\t\t\t\tturn = turn ^ 1\n\n\t#erase queue to make way for fresh clicks\n\tlc.pygame.event.clear()\n\t\n\tdef resolveSquare(mouse_click):\n\t\tselectedRow = lc.math.floor(mouse_click[1] / 79)\n\t\tselectedCol = lc.math.floor(mouse_click[0] / 79) - 1\n\t\t#convert column to character\n\t\tselectedCol = chr(72 -selectedCol)\n\t\tnew_square = selectedCol + str(selectedRow)\n\t\t#return square\n\t\tprint(mouse_click)\n\t\tprint(new_square)\n\t\treturn new_square","sub_path":"chess-game.py","file_name":"chess-game.py","file_ext":"py","file_size_in_byte":6328,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"500153981","text":"# coding=utf-8\n\n\ndef make_type_consistent(s1, s2):\n \"\"\"If both objects aren't either both string or unicode instances force them to unicode\"\"\"\n if isinstance(s1, str) and isinstance(s2, str): return s1, s2\n elif isinstance(s1, unicode) and isinstance(s2, unicode): return s1, s2\n else: return unicode(s1), unicode(s2)\n\n\ndef similarity(a,b):\n import difflib\n from constants import no_name_value\n if len(a)==0 or len(b)==0:\n return no_name_value\n a,b = make_type_consistent(a,b)\n a,b = a.lower(), b.lower()\n match_size = sum([m.size for m in difflib.SequenceMatcher(a=a, b=b, autojunk=False).get_matching_blocks()])*1.0\n longer_size, shorter_size = (len(b),len(a)) if len(a)= threshold.get('mid_value', 0.70)):\n evaluation[\"performance\"] = \"good\"\nelif(evaluation[\"metrics\"][threshold.get('metric','INVALID_METRIC')] <= threshold.get('min_value', 0.25)):\n evaluation[\"performance\"] = \"poor\"\nelse:\n evaluation[\"performance\"] = \"fair\"\n\nevaluation[\"modelName\"] = \"XGBDefault\"\nevaluation[\"startTime\"] = int(time.time())\n\nif(args.get('published').lower() == 'true'):\n evaluations_file_path = published_path +'/evaluations.json'\n evaluation[\"deployment\"] = \"default\"\nelse:\n evaluations_file_path = os.getenv(\"DSX_PROJECT_DIR\") + '/models/' + \"XGBDefault\" + '/' + \"latest\" + '/evaluations.json'\n evaluation[\"modelVersion\"] = \"latest\"\n\nif(os.path.isfile(evaluations_file_path)):\n current_evaluations = json.load(open(evaluations_file_path))\nelse:\n current_evaluations = []\ncurrent_evaluations.append(evaluation)\n\nwith open(evaluations_file_path, 'w') as outfile:\n json.dump(current_evaluations, outfile, indent=4, sort_keys=True)\n\n#copy to dir with helper function\nif (len(published_path) > 0):\n published_model_util.update_evaluation_metrics(project_name, \"XGBDefault\")\n published_model_util.delete_temp_model()","sub_path":"FSS_DEMO/scripts/XGBDefaultEval.py","file_name":"XGBDefaultEval.py","file_ext":"py","file_size_in_byte":4302,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"83042820","text":"import torch\nfrom torch import nn\nfrom .functional import reversible_block_forward\n\n\nclass LSHSelfAttention(nn.Module):\n def __init__(self):\n super(LSHAttention, self).__init__()\n\n def forward(self, x):\n return\n\n\nclass ChunkedFeedForward(nn.Module):\n def __init__(self):\n super(ChunkedFeedForward, self).__init__()\n\n def forward(self, x):\n return\n\n\nclass ReversibleBlock(nn.Module):\n # TODO check that it is contained in a reversible container or has access to an output stack in some other way?\n def __init__(self, f, g):\n super(ReversibleBlock, self).__init__()\n self.f = f\n self.g = g\n self.output_stack = None\n\n def forward(self, x):\n # NOTE input channel dim has to be divisible by two\n if self.output_stack is None:\n raise ValueError(\"output_stack of {} has to be set to a stack shared between reversible layers.\".format(self))\n return reversible_block_forward(self.f, self.g, self.output_stack, x, preserve_rng_state=True, dim=1)\n\n\nclass ReversibleSequential(nn.Sequential):\n def __init__(self, *args, output_stack=[]):\n super(ReversibleSequential, self).__init__(*args)\n self.output_stack = output_stack\n # TODO first block should not put its input on the stack\n for module in self:\n assert isinstance(module, ReversibleBlock)\n module.output_stack = self.output_stack\n return\n\n def forward(self, x):\n y = super(ReversibleSequential, self).forward(x)\n self.output_stack.append(y)\n return y\n\n\n# TODO norm\nclass ReversibleConvBlock(ReversibleBlock):\n def __init__(self, channels, kernel_size, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', alpha=1., dropout=0.5):\n f = None\n if dropout ==0.0:\n f = nn.Conv2d(\n channels, channels, kernel_size,\n stride=1, padding=padding, dilation=dilation,\n groups=groups, bias=bias, padding_mode=padding_mode\n )\n else:\n f = nn.Sequential(\n nn.Conv2d(\n channels, channels, kernel_size,\n stride=1, padding=padding, dilation=dilation,\n groups=groups, bias=bias, padding_mode=padding_mode\n ),\n nn.Dropout(p=dropout)\n )\n nonl = nn.ELU(alpha=alpha)\n super(ReversibleConvBlock, self).__init__(f, nonl)\n\n\n# TODO norm\n# TODO LSH\nclass ReversibleLSHSelfAttentionBlock(ReversibleBlock):\n def __init__(self):\n attn = LSHSelfAttention()\n # TODO chunked ff\n ff = nn.Sequential()\n super(ReversibleLSHSelfAttentionBlock, self).__init__(attn, ff)\n def forward(self, x):\n return\n\n\nclass PositionalEncoding(nn.Module):\n def __init__(self, dtype=torch.float):\n super(PositionalEncoding, self).__init__()\n self.dtype = dtype\n\n def forward(self, x):\n b, c, l = x.size()\n d = x.device\n y = torch.arange(l, dtype=self.dtype, device=x.device)\n y = y.expand(b, 1, l)\n y = torch.cat((x, y), 1)\n return y\n\n\nclass OuterConcatenation(nn.Module):\n def __init__(self):\n super(OuterConcatenation, self).__init__()\n\n def forward(self, x):\n b, c, l = x.size()\n x1 = x.unsqueeze(-1)\n x1 = x1.expand(b, c, l, l)\n\n x2 = torch.transpose(x1, -1, -2)\n\n return torch.cat((x1, x2), 1)\n","sub_path":"src/nnicotine/modules.py","file_name":"modules.py","file_ext":"py","file_size_in_byte":3521,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"539746371","text":"# !/usr/bin/python3\n# -*- coding: utf-8 -*-\n\nimport logging\n\nimport pyforms as app\n\nfrom pyforms.basewidget import BaseWidget\nfrom pyforms.controls import ControlText\nfrom pyforms.controls import ControlList\nfrom pyforms.controls import ControlButton\nfrom pyforms.controls import ControlCombo\nfrom pyforms.controls import ControlCheckBox\nfrom pyforms.controls import ControlCheckBoxList\nfrom pyforms.controls import ControlEmptyWidget\n\nfrom pybpodgui_plugin.models.subject import Subject\nfrom pybpodgui_api.models.setup import Setup\nfrom pybpodgui_api.exceptions.run_setup import RunSetupError\n\nfrom pybpodgui_plugin.models.setup.board_task import BoardTask\nfrom pybpodgui_plugin.models.session import Session\n\nlogger = logging.getLogger(__name__)\n\n\nclass SubjectSelectPopup(BaseWidget):\n\n def __init__(self, subjectlist, selectedsubjects):\n\n super(SubjectSelectPopup, self).__init__('Add Subjects')\n\n self._ok_btn = ControlButton('OK')\n self._cancel_btn = ControlButton('Cancel', default=self.cancel_evt)\n self._subjectslist = ControlCheckBoxList('Subject List')\n\n self._formset = [\n '',\n ('', '_subjectslist', ''),\n ('', '_ok_btn', '_cancel_btn', ''),\n ''\n ]\n\n for subject in sorted([s for s in subjectlist], key=lambda x: x.name.lower()):\n b = False\n for a in selectedsubjects.value:\n if subject.name == a[0]:\n self._subjectslist += (subject, True)\n b = True\n if b is False:\n self._subjectslist += (subject, False)\n\n def ok_evt(self):\n self.close()\n # self.ok_event(self._subjectslist.value)\n\n def closewidnow(self):\n self.close()\n\n def subjectlist(self):\n return self._subjectslist.items\n\n # THIS IS THE WAY WE CREATE A SIGNAL INSIDE PYBPOD (SCROLL DOWN...)\n def ok_event(self, subjects):\n pass\n\n def cancel_evt(self):\n print('cancel')\n self.close()\n\n\nclass SetupWindow(Setup, BaseWidget):\n \"\"\"\n Define here which fields from the setup model should appear on the details section.\n\n The model fields shall be defined as UI components like text fields, buttons, combo boxes, etc.\n\n You may also assign actions to these components.\n\n **Properties**\n\n name\n :class:`string`\n\n Name associated with this setup. Returns the current value stored in the :py:attr:`_name` text field.\n\n board\n :class:`pybpodgui_plugin.models.board.board_dockwindow.BoardDockWindow`\n\n Board associated with this setup. Returns the current value stored in the :py:attr:`_board` combo box.\n\n **Private attributes**\n\n _name\n :class:`pyforms.controls.ControlText`\n\n Text field to edit board name. Editing this field fires the event :meth:`SetupWindow._SetupWindow__name_changed_evt`.\n\n _board\n :class:`pyforms.controls.ControlCombo`\n\n Combo box to select board associated with this setup. Editing this field fires the event :meth:`SetupWindow._SetupWindow__board_changed_evt`.\n\n _run_task_btn\n :class:`pyforms.controls.ControlButton`\n\n Button to run task on board. Pressing the button fires the event :meth:`SetupWindow._SetupWindow__run_task`.\n\n _formset\n Describe window fields organization to PyForms.\n\n **Methods**\n\n \"\"\"\n\n def __init__(self, experiment=None):\n \"\"\"\n\n :param experiment: Experiment this setup belongs to.\n \"\"\"\n BaseWidget.__init__(self, 'Experiment')\n self.layout().setContentsMargins(5, 10, 5, 5)\n\n self._name = ControlText('Setup name')\n self._board = ControlCombo('Board')\n\n self._stoptrial_btn = ControlButton('Skip trial', default=self._stop_trial_evt)\n self._pause_btn = ControlButton('Pause', checkable=True, default=self._pause_evt)\n self._run_task_btn = ControlButton('Run',\n checkable=True,\n default=self._run_task,\n helptext=\"When a task is running, you can stop all remaining trials by pressing this button. NOTE: This means that you will need to break the cycle yourself in your task code when the run_state_machine method returns False.\")\n self._kill_task_btn = ControlButton('Kill',\n default=self._kill_task,\n style=\"background-color:rgb(255,0,0);font-weight:bold;\",\n helptext=\"NOTE:This will exit the task process abruptly. The code you might have after the trial loop won't execute.\")\n\n self._subjects_list = ControlList('Subjects', remove_function=self.__remove_subject)\n self._add_subject = ControlButton('Add subject')\n self._allsubjects = ControlCombo('Add subject')\n self._task = ControlCombo('Protocol', changed_event=self._task_changed_evt)\n\n self._detached = ControlCheckBox('Detach from GUI')\n\n self._varspanel = ControlEmptyWidget()\n self._btn = ControlButton('Open')\n\n Setup.__init__(self, experiment)\n\n self.reload_setups()\n self.reload_boards()\n self.reload_tasks()\n\n self._formset = [\n '_name',\n '_board',\n '_task',\n '_detached',\n ('_run_task_btn', '_kill_task_btn'),\n ('_stoptrial_btn', '_pause_btn'),\n #' ',\n {\n 'Subjects': [\n # '_allsubjects',\n '',\n '_add_subject',\n '_subjects_list',\n ],\n 'Variables':[\n '_varspanel',\n ],\n }\n ]\n\n self._kill_task_btn.enabled = False\n self._subjects_list.readonly = True\n self._varspanel.value = self.board_task\n self._add_subject.value = self.__add_subject\n self._name.changed_event = self.__name_changed_evt\n self._board.changed_event = self._board_changed_evt\n\n def slot(self):\n self.clear_subjects()\n listedsubjects = self.sswindow.subjectlist()\n for subj in listedsubjects:\n if subj[1] is True:\n self += subj[0]\n self.sswindow.closewidnow()\n\n def reload_tasks(self, current_selected_task=None):\n # type: (current_selected_task) -> None\n \"\"\"\n Reload tasks now\n\n :param current_selected_task: current selected task\n :type current_selected_task: pybpodgui_plugin.models.task.Task\n \"\"\"\n self._task.clear()\n self._task.add_item('', 0)\n for task in self.project.tasks:\n self._task.add_item(task.name, task)\n self._task.current_index = 0\n if current_selected_task:\n self.task = current_selected_task\n\n def _task_changed_evt(self):\n if hasattr(self, '_update_task'):\n return\n self.task = self._task.value\n\n def __add__(self, obj):\n res = super(SetupWindow, self).__add__(obj)\n if isinstance(obj, Subject):\n self._subjects_list.value = [[s.name] for s in self.subjects]\n return res\n\n def __sub__(self, obj):\n res = super(SetupWindow, self).__sub__(obj)\n if isinstance(obj, Subject):\n self._subjects_list.value = [[s.name] for s in self.subjects]\n return res\n\n def __open_subject_select(self):\n self.sswindow = SubjectSelectPopup(self.project.subjects, self._subjects_list)\n self.sswindow._ok_btn.value = self.slot\n self.sswindow.show()\n\n def __add_subject(self):\n self.__open_subject_select()\n\n def __remove_subject(self):\n if self._subjects_list.selected_row_index is not None:\n name = self._subjects_list.value[self._subjects_list.selected_row_index][0]\n subject = self.project.find_subject(name)\n self -= subject\n self._subjects_list -= -1\n\n def _stop_trial_evt(self):\n self.stop_trial()\n\n def _pause_evt(self):\n if self._pause_btn.checked:\n self.pause_trial()\n else:\n self.resume_trial()\n\n def can_run_task(self):\n try:\n return super().can_run_task()\n except Exception as err:\n self.alert(str(err), \"Unexpected Error\")\n self._run_task_btn.checked = False\n return False\n\n def _run_task(self):\n \"\"\"\n Defines behavior of the button :attr:`SetupWindow._run_task_btn`.\n\n This methods is called every time the user presses the button.\n \"\"\"\n try:\n if self.status == SetupWindow.STATUS_RUNNING_TASK:\n self.stop_task()\n elif self.status == SetupWindow.STATUS_READY:\n self.run_task()\n except RunSetupError as err:\n self.warning(str(err), \"Warning\")\n except Exception as err:\n self.alert(str(err), \"Unexpected Error\")\n\n def _kill_task(self):\n \"\"\"\n Kills a running task. This will stop the current trial and exit the task abruptly within the trial loop (if any).\n \"\"\"\n if self.status == SetupWindow.STATUS_RUNNING_TASK:\n self.kill_task()\n\n def _board_changed_evt(self):\n \"\"\"\n React to changes on text field :py:attr:`_board`.\n\n This method is called every time the user changes the field and forces a UI refresh.\n \"\"\"\n if hasattr(self, '_update_board'):\n return\n self.board = self._board.value\n self.update_ui()\n\n def __name_changed_evt(self):\n \"\"\"\n React to changes on text field :py:attr:`_name`.\n\n This methods is called every time the user changes the field.\n \"\"\"\n if not hasattr(self, '_update_name') or not self._update_name:\n self.name = self._name.value\n self.reload_setups()\n\n def reload_setups(self):\n for subject in self.project.subjects:\n subject.reload_setups()\n\n def reload_boards(self, current_selected_board=None):\n \"\"\"\n Reload boards list on combo box\n\n This method is fired by:\n * setup creation: :py:meth:`pybpodgui_plugin.models.setup.setup_window.SetupWindow._SetupWindow__init__`.\n * setup details section focus (dockwindow): :py:meth:`pybpodgui_plugin.models.setup.setup_dockwindow.SetupDockWindow.show`.\n\n :param current_selected_board: optional specify current selected board to restore after list update\n \"\"\"\n self._board.clear()\n self._board.add_item('', 0)\n for board in self.project.boards:\n self._board.add_item(board.name, board)\n self._board.current_index = 0\n\n if current_selected_board:\n self.board = current_selected_board\n\n self._allsubjects.clear()\n self._allsubjects.add_item('', 0)\n for subject in sorted([s for s in self.project.subjects], key=lambda x: x.name.lower()):\n self._allsubjects.add_item(subject.name, subject)\n self._allsubjects.current_index = 0\n\n self._subjects_list.value = [[s.name] for s in self.subjects]\n\n def create_board_task(self):\n \"\"\"\n Creates a new board task by calling the API.\n\n .. seealso::\n :py:class:`pybpodgui_api.models.setup.board_task.BoardTask`.\n \"\"\"\n return BoardTask(self)\n\n def create_session(self):\n \"\"\"\n Creates a new session by calling the API.\n\n .. seealso::\n :py:class:`pybpodgui_api.models.session.session_base.SessionBase`.\n \"\"\"\n return Session(self)\n\n @property\n def name(self):\n return self._name.value\n\n @name.setter\n def name(self, value):\n self._update_name = True # Flag to avoid recurse calls when editing the name text field\n self._name.value = value\n self._update_name = False\n # Update the session windows names\n if hasattr(self, 'sessions'):\n for session in self.sessions:\n session.name = session.name\n\n @property\n def board(self):\n if isinstance(self._board.value, str) or isinstance(self._board.value, int): return None\n return self._board.value\n\n @board.setter\n def board(self, value):\n if isinstance(value, str):\n value = self.project.find_board(value)\n self._update_board = True # Flag to avoid recurse calls when editing the name text field\n\n if value not in self._board.values:\n self.reload_boards()\n self._board.value = value\n del self._update_board\n Setup.board.fset(self, value)\n\n @property\n def task(self):\n if isinstance(self._task.value, str) or isinstance(self._task.value, int):\n return None\n return self._task.value\n\n @task.setter\n def task(self, value):\n if isinstance(value, str):\n value = self.project.find_task(value)\n\n self._update_task = True # Flag to avoid recurse calls when editing the name text field\n\n if value not in self._task.values:\n self.reload_tasks()\n\n self._task.value = value\n del self._update_task\n Setup.task.fset(self, value)\n\n @property\n def detached(self): return self._detached.value\n @detached.setter\n def detached(self, value): self._detached.value = value\n\n\n# Execute the application\nif __name__ == \"__main__\":\n app.start_app(SetupWindow)\n","sub_path":"pybpodgui_plugin/models/setup/setup_window.py","file_name":"setup_window.py","file_ext":"py","file_size_in_byte":13638,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"564050742","text":"from collections import defaultdict\r\n\r\n\r\nclass Blog:\r\n\r\n def __init__(self, title, posts=None):\r\n self.title = title\r\n self.entries = posts if posts is not None else []\r\n\r\n def append(self, post):\r\n self.entries.append(post)\r\n\r\n def by_tag(self):\r\n tag_index = defaultdict(list)\r\n for post in self.entries:\r\n for tag in post.tags:\r\n tag_index[tag].append(post.as_dict())\r\n return tag_index\r\n\r\n def as_dict(self):\r\n return dict(\r\n title=self.title,\r\n underline=\"=\" * len(self.title),\r\n entries=[p.as_dict() for p in self.entries])\r\nif __name__ == \"__main__\":\r\n from chapter9.post import Post\r\n import datetime\r\n import json\r\n travel_x = Blog(\"Travel\")\r\n travel_x.append(\r\n Post(date=datetime.datetime(2013, 11, 14, 17, 25),\r\n title=\"Hard Aground\",\r\n rst_text=\"\"\"Some embarrassing revelation. Including ☹ and ⚓\"\"\",\r\n tags=(\"#RedRanger\", \"#Whitby42\", \"#ICW\"),\r\n )\r\n )\r\n travel_x.append(\r\n Post(date=datetime.datetime(2013, 11, 18, 15, 30),\r\n title=\"Anchor Follies\",\r\n rst_text=\"\"\"Some witty epigram. Including < & > characters.\"\"\",\r\n tags=(\"#RedRanger\", \"#Whitby42\", \"#Mistakes\"),\r\n )\r\n )\r\n print(travel_x.by_tag())\r\n print(\"Less elegant\")\r\n print(json.dumps(travel_x.as_dict(), indent=4))\r\n","sub_path":"chapter9/blog.py","file_name":"blog.py","file_ext":"py","file_size_in_byte":1455,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"625002541","text":"from django.views.generic import ListView, DetailView, CreateView\nfrom .models import Gamecode, Platform, Slider\n\nclass GamecodeListing(ListView):\n model = Gamecode\n \nclass GamecodeDetail(DetailView):\n model = Gamecode\n\nclass PlatformListing(ListView):\n model = Platform\n\n \nclass PlatformDetail(DetailView):\n model = Platform\n \n\nclass GamecodeView(CreateView):\n\tmodel = Gamecode\n\tsuccess_url = '/sent'\n\nclass Slider(ListView):\n\tmodel = Slider\n\n\n","sub_path":"gamecheap/apps/sell/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":466,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"255704731","text":"import os\nfrom sendgrid import SendGridAPIClient\nfrom sendgrid.helpers.mail import Mail, From, To, Subject, PlainTextContent, HtmlContent\nimport config\n\nclass Mailer:\n def __init__(self):\n self.__sg = SendGridAPIClient(config.send_grid_api_key)\n\n def send_email_base(self, from_email, to_emails, subject, html_content):\n message = Mail(from_email=From(from_email),\n to_emails=[To(to_email) for to_email in to_emails],\n subject=Subject(subject),\n html_content=HtmlContent(html_content))\n return self.__sg.send(message)\n\n def send_email_to_me(self, subject, html_content):\n return self.send_email_base(\n from_email='nanazhoushop@gmail.com',\n to_emails=['nanazhouh@gmail.com'],\n subject=subject,\n html_content=html_content) \n\n def send_email(self, product, action, timestamp):\n time = '{} PT'.format(timestamp.strftime(\"%Y-%m-%d, %H:%M:%S\"))\n pattern = product['pattern']\n color = product['color']\n url = product['url']\n has_image = False\n if len(product['images']) > 0:\n has_image = True\n image = product['images'][0]\n\n subject = 'Hermes US {} {} - {} at {}'.format(action, pattern, color, time)\n if has_image:\n html_content = '''\n
\n '''.format(url, pattern, color)\n\n self.send_email_to_me(subject=subject, html_content=html_content)\n","sub_path":"mailer.py","file_name":"mailer.py","file_ext":"py","file_size_in_byte":2129,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"452715472","text":"import caffe\nimport matplotlib.pyplot as plt\nimport time as timelib\nimport pdb\n\ncaffe.set_mode_gpu()\ncaffe.set_device(1)\n\nsolver = caffe.get_solver('/media/ys/1a32a0d7-4d1f-494a-8527-68bb8427297f/End_to_End/caffe/solver.prototxt')\n#solver.net.copy_from('/media/ys/1a32a0d7-4d1f-494a-8527-68bb8427297f/End_to_End/caffe/weight/nvidia/nvidia_00001_iter_5000.caffemodel')\n#solver.net.copy_from('/media/ys/1a32a0d7-4d1f-494a-8527-68bb8427297f/Data/train/checkpoint-sdc-ch2.data-00000-of-00001.caffemodel')\nmax_iter = 800000\nfig, axes = plt.subplots()\nfig.show()\nloss = 0\nloss_list = []\niter0 = solver.iter\nepoch = 0\n\nwhile solver.iter < max_iter:\n\n# net_full_conv.save('./copy_vgg.caffemodel')\n solver.step(1)\n #if solver.iter == 3000:\n #\tpdb.set_trace()\n# if solver.iter % 100 == 0:\n# \tpdb.set_trace()\n #pdb.set_trace()\n #if solver.iter % 500 ==0:\n label = solver.net.blobs['label'].data \n out = solver.net.blobs['fc10'].data\n #pdb.set_trace()\n loss = solver.net.blobs['loss'].data.flatten()\n if loss > 30:\n \tloss = 30\n loss_list.append(loss) \n if solver.iter % 100 == 0:\n axes.clear()\n axes.plot(range(iter0, iter0+len(loss_list)), loss_list)\n# axes.grid(True)\n fig.canvas.draw()\n plt.pause(0.01)\n\nfig.savefig('fig_iter_%d.png' % solver.iter)","sub_path":"caffe_code/nvidia/train.py","file_name":"train.py","file_ext":"py","file_size_in_byte":1271,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"588896990","text":"import os.path\nimport torch\nfrom copy import deepcopy\n\nif os.path.exists(\"Sequence_Formers\"): # check if we are in the folder Continual_Learning_Data_Former\n from data_utils import load_data, check_and_Download_data\nelse:\n from ..data_utils import load_data, check_and_Download_data\n\n'''\nParent Class for Sequence Formers \n'''\n\nclass Sequence_Former(object):\n def __init__(self, args):\n super(Sequence_Former, self).__init__()\n\n self.n_tasks = args.n_tasks\n self.num_classes = args.num_classes\n self.i = args.i\n self.o = args.o\n self.imageSize = args.imageSize\n self.img_channels = args.img_channels\n self.dataset = args.dataset\n self.path_only = args.path_only # only valid for core50 at the moment\n self.task = args.task\n\n # if self.path_only we don't load data but just path\n # data will be loaded online while learning\n # it is considered as light mode this continual dataset are easy to generate and load\n if self.path_only:\n light_id='_light'\n else:\n light_id=''\n\n if not os.path.exists(args.o):\n os.makedirs(args.o)\n\n self.o_train = os.path.join(self.o, '{}_{}_train{}.pt'.format(self.task, self.n_tasks, light_id))\n self.o_valid = os.path.join(self.o, '{}_{}_valid{}.pt'.format(self.task, self.n_tasks, light_id))\n self.o_test = os.path.join(self.o, '{}_{}_test{}.pt'.format(self.task, self.n_tasks, light_id))\n\n\n self.o_train_full = os.path.join(self.o, '{}_1-{}_train{}.pt'.format(self.task, self.n_tasks,light_id))\n self.o_valid_full = os.path.join(self.o, '{}_1-{}_valid{}.pt'.format(self.task, self.n_tasks,light_id))\n self.o_test_full = os.path.join(self.o, '{}_1-{}_test{}.pt'.format(self.task, self.n_tasks,light_id))\n\n check_and_Download_data(self.i, self.dataset, task=self.task)\n\n def select_index(self, ind_task, y):\n \"\"\"\n This function help to select data in particular if needed\n :param ind_task: task index in the sequence\n :param y: data label\n :return: class min, class max, and index of data to keep\n \"\"\"\n return 0, self.num_classes - 1, torch.arange(len(y))\n\n def transformation(self, ind_task, data):\n \"\"\"\n Apply transformation to data if needed\n :param ind_task: task index in the sequence\n :param data: data to process\n :return: data post processing\n \"\"\"\n if not ind_task < self.num_classes:\n raise AssertionError(\"Error in task indice\")\n return deepcopy(data)\n\n def label_transformation(self, ind_task, label):\n \"\"\"\n Apply transformation to label if needed\n :param ind_task: task index in the sequence\n :param label: label to process\n :return: data post processing\n \"\"\"\n if not ind_task < self.num_classes:\n raise AssertionError(\"Error in task indice\")\n return label\n\n @staticmethod\n def get_valid_ind(i_tr):\n # it is time to taxe train for validation\n len_valid = int(len(i_tr) * 0.2)\n indices = torch.randperm(len(i_tr))\n\n valid_ind = indices[:len_valid]\n train_ind = indices[len_valid:]\n\n i_va = i_tr[valid_ind]\n i_tr = i_tr[train_ind]\n\n return i_tr, i_va\n\n\n def create_task(self, ind_task, x_tr, y_tr, x_te, y_te):\n\n # select only the good classes\n class_min, class_max, i_tr = self.select_index(ind_task, y_tr)\n _, _, i_te = self.select_index(ind_task, y_te)\n\n i_tr, i_va = self.get_valid_ind(i_tr)\n\n x_tr_t = self.transformation(ind_task, x_tr[i_tr])\n x_va_t = self.transformation(ind_task, x_tr[i_va])\n x_te_t = self.transformation(ind_task, x_te[i_te])\n\n y_tr_t = self.label_transformation(ind_task, y_tr[i_tr])\n y_va_t = self.label_transformation(ind_task, y_tr[i_va])\n y_te_t = self.label_transformation(ind_task, y_te[i_te])\n\n return class_min, class_max, x_tr_t, y_tr_t, x_va_t, y_va_t, x_te_t, y_te_t\n\n\n def formating_data(self):\n\n # variable to save the sequence\n tasks_tr = []\n tasks_va = []\n tasks_te = []\n\n\n # variable to save the cumul of the sequence for upperbound\n tasks_tr_full = []\n tasks_va_full = []\n tasks_te_full = []\n full_x_tr, full_y_tr = None, None\n full_x_va, full_y_va = None, None\n full_x_te, full_y_te = None, None\n\n x_tr, y_tr, x_te, y_te = load_data(self.dataset, self.i, self.imageSize, self.path_only)\n\n for ind_task in range(self.n_tasks):\n\n c1, c2, x_tr_t, y_tr_t, x_va_t, y_va_t, x_te_t, y_te_t = self.create_task(ind_task, x_tr, y_tr, x_te, y_te)\n\n tasks_tr.append([(c1, c2), x_tr_t, y_tr_t])\n tasks_va.append([(c1, c2), x_va_t, y_va_t])\n tasks_te.append([(c1, c2), x_te_t, y_te_t])\n\n if ind_task == 0:\n full_x_tr = x_tr_t\n full_x_va = x_va_t\n full_x_te = x_te_t\n full_y_tr = y_tr_t\n full_y_va = y_va_t\n full_y_te = y_te_t\n else:\n full_x_tr = torch.cat([full_x_tr, x_tr_t], dim=0)\n full_x_va = torch.cat([full_x_va, x_va_t], dim=0)\n full_x_te = torch.cat([full_x_te, x_te_t], dim=0)\n full_y_tr = torch.cat([full_y_tr, y_tr_t], dim=0)\n full_y_va = torch.cat([full_y_va, y_va_t], dim=0)\n full_y_te = torch.cat([full_y_te, y_te_t], dim=0)\n\n\n if not self.path_only:\n print(tasks_tr[0][1].shape)\n print(tasks_tr[0][1].mean())\n print(tasks_tr[0][1].std())\n\n torch.save(tasks_tr, self.o_train)\n torch.save(tasks_va, self.o_valid)\n torch.save(tasks_te, self.o_test)\n\n\n tasks_tr_full.append([(0, self.num_classes), full_x_tr, full_y_tr])\n tasks_va_full.append([(0, self.num_classes), full_x_va, full_y_va])\n tasks_te_full.append([(0, self.num_classes), full_x_te, full_y_te])\n\n torch.save(tasks_tr_full, self.o_train_full)\n torch.save(tasks_va_full, self.o_valid_full)\n torch.save(tasks_te_full, self.o_test_full)\n","sub_path":"Sequence_Formers/sequence_former.py","file_name":"sequence_former.py","file_ext":"py","file_size_in_byte":6263,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"377662416","text":"#!/usr/bin/python\n#-*- coding: utf-8 -*-\n\n# __author__ = '330mlcc'\n#\n# import socket\n# import time\n# import threading\n#\n# def tcplink(sock,addr):\n# print('accetp new connection for %s:5S...' % addr)\n# sock.send('welcome!'.encode())\n# while True:\n# data = socket.recv(1024)\n# time.sleep(5)\n#\n# if data == 'exit' or not data:\n# break\n#\n# sock.send('hello : '.encode()+data)\n#\n# socket.close()\n# print('Connection from %s:%s' % addr)\n#\n# s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)\n#\n# s.bind(('127.0.0.1',9001))\n#\n# s.listen(5)\n#\n# print('waiting for connection....')\n#\n# while True:\n# sock,addr = s.accept()\n# t = threading.Thread(target=tcplink(),args=(socket,addr))\n# t.start()\n\nimport socket\nimport time\n\nhost = '' #host设定为空,表示可以与任何ip的socket在端口9001通信\nport = 9001\nbufsize = 1024\n\nquit = False\nshutdown = False\n\naddr = (host,port)\n\n##设置socket,AF_INET表示是IPV4标准,SOCK_STREAM是TCP传输协议\ntcpConnServers = socket.socket(socket.AF_INET,socket.SOCK_STREAM)\ntcpConnServers.bind(addr)\ntcpConnServers.listen(1)\n\nwhile True: #与客户端建立连接之后,获取客户端传来的数据\n print('watting for connection...')\n tcpConnClients,ddr = tcpConnServers.accept() # 不断监听获取新的客户端连接\n print('connected from : ', addr)\n\n while True:\n data = tcpConnClients.recv(bufsize)\n data = data.decode('utf-8')\n\n if not data:\n break\n\n ss = '[%s] %s' % (time.time(),data)\n print(ss)\n\n if data == 'bye':\n quit = True\n break\n elif data == 'shutdown':\n shutdown = True\n break\n print('server has been closed')\n\nif __name__ == '__main__':\n pass\n\n","sub_path":"src/reading/network/charpt1/fromExaSocketServer.py","file_name":"fromExaSocketServer.py","file_ext":"py","file_size_in_byte":1795,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"383195201","text":"import os, subprocess, requests, json, time, yaml, hashlib\n\nENVIRONMENT_URL = os.environ.get('ENVIRONMENT_URL')\nUPLOAD_URL = ENVIRONMENT_URL+\"/drivers/package\"\nCHANNEL_ID = os.environ.get('CHANNEL_ID')\nUPDATE_URL = ENVIRONMENT_URL+\"/channels/\"+CHANNEL_ID+\"/drivers/bulk\"\nTOKEN = os.environ.get('TOKEN')\nDRIVERID = \"driverId\"\nVERSION = \"version\"\nARCHIVEHASH = \"archiveHash\"\n\n# Make sure we're running in the root of the git directory\na = subprocess.run([\"git\", \"rev-parse\", \"--show-toplevel\"], capture_output=True)\nos.chdir(a.stdout.decode().strip()+\"/drivers/SmartThings/\")\n\n# Get list of all SmartThings driver folders\ndrivers = [driver.name for driver in os.scandir('.') if driver.is_dir()]\ndriver_updates = []\ndrivers_updated = []\nuploaded_drivers = {}\n\n# Get drivers currently on the channel\nresponse = requests.get(\n ENVIRONMENT_URL+\"/drivers\",\n headers={\n \"Accept\": \"application/vnd.smartthings+json;v=20200810\",\n \"Authorization\": \"Bearer \"+TOKEN\n }\n)\nif response.status_code != 200:\n print(\"Failed to retrieve channel's current drivers\")\n print(\"Error code: \"+str(response.status_code))\n print(\"Error response: \"+response.text)\nelse:\n response_json = json.loads(response.text)[\"items\"]\n for driver in response_json:\n if ARCHIVEHASH in driver.keys() and VERSION in driver.keys() and DRIVERID in driver.keys():\n uploaded_drivers[driver[\"packageKey\"]] = {DRIVERID: driver[DRIVERID], VERSION: driver[VERSION], ARCHIVEHASH: driver[ARCHIVEHASH]}\n\n# For each driver, first package the driver locally, then upload it\n# after it's been uploaded, hold on to the driver id and version\nfor driver in drivers:\n subprocess.run([\"rm\", \"edge.zip\"], capture_output=True)\n package_key = \"\"\n with open(driver+\"/config.yml\", 'r') as config_file:\n package_key = yaml.safe_load(config_file)[\"packageKey\"]\n print(package_key)\n subprocess.run([\"zip -r ../edge.zip $(find . -name \\\"*.yml\\\" -o -name \\\"*.lua\\\" -o -name \\\"*.yaml\\\") -X -x \\\"*test*\\\"\"], cwd=driver, shell=True, capture_output=True)\n with open(\"edge.zip\", 'rb') as driver_package:\n data = driver_package.read()\n # TODO: This does not yet work, hash returned by server does not match\n hash = hashlib.sha256(data).hexdigest()\n response = None\n retries = 0\n if package_key not in uploaded_drivers.keys() or hash != uploaded_drivers[package_key][\"archiveHash\"]: \n while response == None or (response.status_code == 500 or response.status_code == 429):\n response = requests.post(\n UPLOAD_URL, \n headers={\n \"Content-Type\": \"application/zip\", \n \"Accept\": \"application/vnd.smartthings+json;v=20200810\",\n \"Authorization\": \"Bearer \"+TOKEN,\n \"X-ST-LOG-LEVEL\": \"TRACE\"},\n data=data)\n if response.status_code != 200:\n print(\"Failed to upload driver \"+driver)\n print(\"Error code: \"+str(response.status_code))\n print(\"Error response: \"+response.text)\n if response.status_code == 500 or response.status_code == 429:\n retries = retries + 1\n if retries > 3:\n break # give up\n if response.status_code == 429:\n time.sleep(10)\n else:\n print(\"Uploaded package successfully: \"+driver)\n drivers_updated.append(driver)\n response_json = json.loads(response.text)\n driver_updates.append({DRIVERID: response_json[DRIVERID], VERSION: response_json[VERSION]})\n time.sleep(5)\n else:\n print(\"Hash matched existing driver for \"+package_key)\n # hash matched, use the currently uploaded version of the driver to \"update\" the channel\n driver_updates.append({DRIVERID: uploaded_drivers[package_key][DRIVERID], VERSION: uploaded_drivers[package_key][VERSION]}) \n\nresponse = requests.put(\n UPDATE_URL,\n headers={\n \"Accept\": \"application/vnd.smartthings+json;v=20200810\",\n \"Authorization\": \"Bearer \"+TOKEN,\n \"Content-Type\": \"application/json\",\n \"X-ST-LOG-LEVEL\": \"TRACE\"\n },\n data=json.dumps(driver_updates)\n)\nif response.status_code != 204:\n print(\"Failed to bulk update drivers\")\n print(\"Error code: \"+str(response.status_code))\n print(\"Error response: \"+response.text)\n exit(1)\n\nprint(\"Successfully bulk-updated channel: \")\nprint(drivers_updated)","sub_path":"tools/deploy.py","file_name":"deploy.py","file_ext":"py","file_size_in_byte":4290,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"308290330","text":"import os\nimport sys\nimport sqlite3\n\n# Existenz feststellen\nif os.path.exists(\"firma_SQLite_10.db\"):\n print(\"Datei bereits vorhanden\")\n sys.exit(0)\n\n# Verbindung zur Datenbank erzeugen\nconnection = sqlite3.connect(\"firma_SQLite_10.db\")\n\n# Datensatz-Cursor erzeugen\ncursor = connection.cursor()\n\n# Tabelle erzeugen\nsql = \"CREATE TABLE personen(\" \\\n \"name TEXT, \" \\\n \"vorname TEXT, \" \\\n \"personalnummer INTEGER PRIMARY KEY, \" \\\n \"gehalt REAL, \" \\\n \"waehrung TEXT, \" \\\n \"geburtstag TEXT)\"\ncursor.execute(sql)\n\n# Datensatz erzeugen\nsql = \"INSERT INTO personen VALUES('Maier', \" \\\n \"'Hans', 6714, 3500.00,'€', '15.03.1962')\"\ncursor.execute(sql)\nconnection.commit()\n\n# Datensatz erzeuegen\nsql = \"INSERT INTO personen VALUES('Schmitz', \" \\\n \"'Peter', 81343, 3750.00,'€', '12.04.1958')\"\ncursor.execute(sql)\nconnection.commit()\n\n# Datensatz erzeuegen\nsql = \"INSERT INTO personen VALUES('Mertens', \" \\\n \"'Julia', 2297, 3621.50,'€', '30.12.1959')\"\ncursor.execute(sql)\nconnection.commit()\n\n# Verbindung beenden\nconnection.close()\n","sub_path":"exercise/sqlite_erzeugen_10_1.py","file_name":"sqlite_erzeugen_10_1.py","file_ext":"py","file_size_in_byte":1057,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"574196997","text":"#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\n@author: Jiyuan Zhou\n\nEnable an agent to follow a hard coded trajectory in the form of\na square with rounded corners using trained straight and circle models.\n\"\"\"\nimport argparse\nimport cProfile\nimport pstats\nimport sys\nimport time\nimport math\nimport yaml\n\nimport joblib\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom rllab.misc import tensor_utils\n\nfrom aa_simulation.envs.renderer import _Renderer\n\n\ndef render(renderer, state, action):\n \"\"\"\n Render simulation environment.\n \"\"\"\n renderer.update(state, action)\n\n\ndef modify_state_curve(state, move_param):\n \"\"\"\n Convert state [x, y, yaw, x_dot, y_dot, yaw_dot] to\n [dx, theta, ddx, dtheta]\n \"\"\"\n x_0, y_0, r = move_param\n x, y, yaw, x_dot, y_dot, yaw_dot = state\n\n x -= x_0\n y -= y_0\n\n dx = np.sqrt(np.square(x) + np.square(y)) - r\n theta = _normalize_angle(np.arctan2(-x, y) + np.pi - yaw)\n ddx = x/(x**2 + y**2)**0.5*x_dot + y/(x**2 + y**2)**0.5*y_dot\n dtheta = x/(x**2 + y**2)*x_dot - y/(x**2 + y**2)*y_dot - yaw_dot\n\n return np.array([dx, theta, ddx, dtheta])\n\n\ndef _normalize_angle(angle):\n \"\"\"\n Normalize angle to [-pi, pi).\n \"\"\"\n angle = angle % (2*np.pi)\n if (angle >= np.pi):\n angle -= 2*np.pi\n return angle\n\n\ndef _normalize_angle2(angle):\n \"\"\"\n Normalize angle to [0, 2 * pi).\n \"\"\"\n angle = angle % (2*np.pi)\n return angle\n\n\ndef modify_state_straight(state, move_param):\n \"\"\"\n Add target direction and target velocity to state, to feed\n in the NN.\n \"\"\"\n x_0, y_0, target_dir = move_param\n x, y, yaw, x_dot, y_dot, dyaw = state\n target_dir = _normalize_angle2(target_dir)\n\n new_x, new_y = _cal_distance(x, y, move_param)\n yaw = _normalize_angle2(yaw) - target_dir\n yaw = _normalize_angle(yaw)\n\n new_x_dot = x_dot * np.cos(target_dir) + y_dot * np.sin(target_dir)\n new_y_dot = y_dot * np.cos(target_dir) - x_dot * np.sin(target_dir)\n\n return np.array([new_y, yaw, new_x_dot, new_y_dot, dyaw])\n\n\ndef _cal_distance(x, y, move_param):\n # For arbitrary trajectory.\n init_x, init_y, target_dir = move_param\n\n # if _normalize_angle(target_dir) == math.pi / 2:\n # next_x, next_y = init_x, init_y + 1\n # print(1)\n # return(0, - x + init_x)\n # elif _normalize_angle(target_dir) == -math.pi / 2:\n # next_x, next_y = init_x, init_y - 1\n # return(0, x - init_x)\n # else:\n # next_x, next_y = init_x + 1, init_y + np.tan(target_dir)\n\n #print(\"x,y\", x, y, init_x, init_y)\n position_dir = np.arctan2((y - init_y), (x - init_x))\n projection_dir = _normalize_angle(position_dir - target_dir)\n #print(\"yaws\", position_dir, target_dir, projection_dir)\n dist = np.sqrt(np.square(x - init_x) + np.square(y - init_y))\n # new_y = np.absolute((next_y - init_y) * init_x + (init_x - next_x) * init_y\\\n # - init_x * next_y + next_x * init_y) / \\\n # np.sqrt(np.square(next_y - init_y) + np.square(next_x - init_x))\n\n new_y = dist * np.sin(projection_dir)\n # new_x = dist * np.cos(projection_dir)\n new_x = 0\n # new_y = (y - init_y) * np.cos(target_dir) - (x - init_x) * np.sin(target_dir)\n #print(\"new y: \", new_y)\n # if (np.sin(projection_dir < 0)):\n # new_y = new_y\n\n return (new_x, new_y)\n\ndef _check_point(state, way_point):\n # Potential bug!!! Can only follow a curve that is less than\n # 180 degrees! must be deal with in the hard coded trajectory\n # or higher level planner for the time being.\n x, y, _, _, _, _ = state\n check_point_x, check_point_y, direction = way_point\n direction = np.deg2rad(direction)\n\n state_direction = np.arctan2((y - check_point_y), (x - check_point_x))\n\n intersect_angle = _normalize_angle(state_direction - direction)\n\n return np.absolute(intersect_angle) <= math.pi / 2\n # return True\n\n\ndef rollout(env, agent, way_point=[], animated=False, speedup=1,\n always_return_paths=False, renderer=None, state=np.zeros(6),\n isCurve=False, move_param=[]):\n observations = []\n actions = []\n rewards = []\n agent_infos = []\n env_infos = []\n\n path_length = 0\n\n # update initial state!!!\n # Bad implementation. Just temporary.\n env._wrapped_env._state = state\n\n # if not isCurve:\n # print(\"Start straight state: \", state)\n # tmp = modify_state_straight(state, move_param)\n # tmp_a, _ = agent.get_action(tmp)\n # print(\"Start projection: \", tmp)\n # print(\"Corresponding action: \", tmp_a)\n ttt = 20\n\n while _check_point(state, way_point):\n ttt -= 1\n # print(\"State: \", state)\n # State observation convertion\n if isCurve:\n o = modify_state_curve(state, move_param)\n else:\n o = modify_state_straight(state, move_param)\n #\n a, agent_info = agent.get_action(o)\n next_o, r, d, env_info = env.step(a)\n #\n\n print(\"Start straight state: \", state)\n print(\"Start projection: \", o)\n print(\"Corresponding action: \", a)\n print()\n\n observations.append(env.observation_space.flatten(o))\n rewards.append(r)\n actions.append(env.action_space.flatten(a))\n agent_infos.append(agent_info)\n env_infos.append(env_info)\n\n path_length += 1\n if d:\n break\n\n o = next_o\n # Bad implementation. Just temporary.\n state = env._wrapped_env._state\n\n if animated:\n render(renderer, state, a)\n #env.render()\n timestep = 0.0001\n time.sleep(timestep / speedup)\n return state\n\n\ndef parse_arguments():\n parser = argparse.ArgumentParser()\n parser.add_argument('--speedup', type=float, default=100000,\n help='Speedup')\n parser.add_argument('--render', dest='render',\n action='store_true', help='Rendering')\n parser.add_argument('--no-render', dest='render',\n action='store_false', help='Rendering')\n parser.set_defaults(render=True)\n args = parser.parse_args()\n return args\n\n\ndef move(env, policy, args, way_point, renderer,\\\n state, isCurve, move_param):\n final_state = rollout(env, policy, way_point=way_point,\n animated=args.render, speedup=args.speedup,\n always_return_paths=True, renderer=renderer,\n state=state, isCurve=isCurve,\\\n move_param=move_param)\n return final_state\n\n\ndef init_render():\n stream = open('aa_simulation/envs/model_params.yaml', 'r')\n params = yaml.load(stream)\n obstacles = []\n goal = None\n return _Renderer(params, obstacles, goal, None)\n\n\ndef _check_curve_way_point(curve_param, way_point):\n center_x, center_y, curve_angle = curve_param\n curve_angle = _normalize_angle(curve_angle)\n\n if curve_angle <= math.pi / 2:\n return curve_param, way_point\n\n check_point_x, check_point_y, direction = way_point\n\n # Construct new way point\n\n\ndef main():\n args = parse_arguments()\n profiler = cProfile.Profile()\n\n data_curve = joblib.load(\"data/roundedsquare_demo/circle.pkl\")\n policy_curve = data_curve['policy']\n env_curve = data_curve['env']\n\n data_straight = joblib.load(\"data/roundedsquare_demo/straight.pkl\")\n policy_straight = data_straight['policy']\n env_straight = data_straight['env']\n\n plt.ion()\n\n # Set fixed random seed\n np.random.seed(100)\n\n # Sample one rollout\n profiler.enable()\n\n renderer = init_render()\n\n state = [-1, 0, np.deg2rad(-90), 0, 0.5, 0]\n render(renderer, state, None)\n\n # center positin x, center position y, radius\n curve_params = [[0, 0, 1], [2, 0, 1], [2, 2, 1], [0, 2, 1]]\n # start position x, start position y, target start yaw(direction)\n straight_params = [[0, -1, np.deg2rad(0)], [3, 0, np.deg2rad(90)],\\\n [2, 3, np.deg2rad(-180)], [-1, 2, np.deg2rad(-90)]]\n # curve_step_size = [43, 44, 44, 44]\n # straight_step_size = [54, 55, 55, 56]\n\n way_points = [[0, -1, 180], [2, -1, 180], [3, 0, -90], [3, 2, -90],\\\n [2, 3, 0], [0, 3, 0], [-1, 2, 90], [-1, 0, 90]]\n\n point = 0\n for i in range(400):\n\n i %= 4\n # Turn left for 90 degrees\n point %= 8\n state = move(env_curve, policy_curve, args,\\\n way_points[point], renderer, state,\\\n True, curve_params[i])\n # print(state)\n point += 1\n\n # Move straightly for length 2\n point %= 8\n state = move(env_straight, policy_straight, args,\\\n way_points[point], renderer, state,\\\n False, straight_params[i])\n # print(state)\n point += 1\n\n profiler.disable()\n\n # Block until key is pressed\n sys.stdout.write(\"Press to continue: \")\n input()\n\n\nif __name__ == '__main__':\n main()\n","sub_path":"demos/test_rounded_square.py","file_name":"test_rounded_square.py","file_ext":"py","file_size_in_byte":8950,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"360306574","text":"import sys\nimport os\nimport urllib\nimport tarfile\nimport hashlib\n\ndef download_spark():\n bytes_written = 0\n spark_url = \"http://eecs.berkeley.edu/~jegonzal/cs186_spark.tar.bz2\"\n if not os.path.exists(\"cs186_spark.tar.bz2\"):\n print(\"Downloading Spark\")\n resp = urllib.urlopen(spark_url)\n output = open('cs186_spark.tar.bz2','wb')\n block_len = 524288\n buf = resp.read(block_len)\n while buf:\n output.write(buf)\n bytes_written += len(buf)\n buf = resp.read(block_len)\n output.close()\n return bytes_written\n \ndef unzip_spark():\n if not (os.path.isdir(\"cs186_spark\") and os.path.exists(\"cs186_spark\")):\n print(\"Extracting Spark\")\n tfile = tarfile.open('cs186_spark.tar.bz2', 'r:bz2')\n tfile.extractall()\n tfile.close()\n \ndef setup_environment():\n download_spark()\n unzip_spark()\n sys.path.append(os.path.join(os.getcwd(), 'cs186_spark', 'python', 'lib', 'pyspark.zip'))\n sys.path.append(os.path.join(os.getcwd(), 'cs186_spark', 'python', 'lib', 'py4j-0.9-src.zip'))\n os.environ[\"SPARK_HOME\"] = os.path.join(os.getcwd(), 'cs186_spark')\n\n\n \n# setup_environment()\n","sub_path":"hw5/local_install.py","file_name":"local_install.py","file_ext":"py","file_size_in_byte":1211,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"648175617","text":"import sys\nimport os\nimport collections\n\n\ndef load_data(filepath):\n try:\n with open(filepath) as text_file:\n readed_text = text_file.read()\n return readed_text\n except IOError:\n return None\n\n\ndef get_most_frequent_words(text):\n word_list = text.split(\" \")\n just_word_list = []\n for word in word_list:\n if word.isalpha():\n just_word_list.append(word)\n counter = collections.Counter(just_word_list)\n number_words = 10\n return counter.most_common(number_words)\n\n\ndef output_words(most_frequent_words):\n for word, number in most_frequent_words:\n print(\" \",\n word,\n \" - повторяется\",\n number,\n \"раз(а)\")\n\n\nif __name__ == \"__main__\":\n if len(sys.argv) < 2:\n exit(\"Вы не ввели путь к файла с данными\")\n file_path = sys.argv[1]\n if not os.path.isfile(file_path):\n exit(\"Такого файла не существует\")\n if load_data(file_path) is None:\n exit(\"Проблема с открытием файла\")\n\n text = load_data(file_path)\n most_frequent_words = get_most_frequent_words(text)\n output_words(most_frequent_words)\n","sub_path":"lang_frequency.py","file_name":"lang_frequency.py","file_ext":"py","file_size_in_byte":1252,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"595542420","text":"import unittest\nimport sys\n\nfrom PyQt4 import QtGui\n\n\nclass Test_test(unittest.TestCase):\n def setUp(self):\n self.app = QtGui.QApplication(sys.argv)\n self.window = QtGui.QMainWindow()\n self.window.show()\n\n def tearDown(self):\n sys.exit(self.app.exec_())\n\n def test_login(self):\n from model.session import Session\n\n Session.authenticate(\"TestUser\", \"hello\")\n player = Session.getPlayer()\n player.setLevel(50)\n self.assertEqual(Session.getPlayer().getLevel(), 50)\n\n\nif __name__ == '__main__':\n unittest.main()\n","sub_path":"python3-qt4/src/test.py","file_name":"test.py","file_ext":"py","file_size_in_byte":583,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"174559546","text":"import numpy as np\nfrom scipy.spatial import distance\n\n\ndef pq(data, P, init_centroids, max_iter):\n M = data.shape[1]\n N = data.shape[0]\n K = init_centroids.shape[1]\n newdata = np.array(np.split(data, P, axis=1))\n codebooks = np.zeros(shape=(P, K, M // P), dtype=np.float32)\n # for each split data part\n for part in range(P):\n # update max_iter times for each part\n for times in range(max_iter):\n # for each data to calculate the distance to every center,768 data\n # 768 * 256 (every center and its distance to 768 data)\n # for each data, and its distance to 256 k\n newdis = manhattanDistance(newdata[part], init_centroids[part])\n # nearest is 1 * 768,means the nearest center to each data\n nearest = np.argmin(newdis, axis=1)\n\n for k in range(init_centroids[part].shape[0]):\n dimlist = []\n # find all the points belong to certain k and update the median\n for item in np.argwhere(nearest == k):\n # data index is item and its dim\n dimlist.append(newdata[part][item][0])\n if not dimlist:\n continue\n init_centroids[part, k] = np.median(dimlist, axis=0)\n codebooks[part] = init_centroids[part]\n\n # using the newest codebooks and do the k-means, then get the one part of codes\n codes = np.zeros(shape=(N, P), dtype=np.uint8)\n for part in range(P):\n newdis = manhattanDistance(newdata[part], init_centroids[part])\n nearest = np.array(np.argmin(newdis, axis=1), dtype=np.uint8)\n if part == 0:\n # 1 * 768 and transfer it to 768 * 1 then become N*P\n first = nearest.reshape(N, -1)\n else:\n codes = np.concatenate((first, nearest.reshape(N, -1)), axis=1)\n first = codes[:]\n\n return codebooks, codes\n\n\ndef query(queries, codebooks, codes, T):\n qsize = queries.shape[0]\n P = codebooks.shape[0]\n M = queries.shape[1]\n N = codes.shape[0]\n newquery = queries.reshape(qsize, P, M // P) # Q, P,M//P\n candidates = []\n if T >= N:\n for times in range(qsize):\n temp = set()\n for i in range(N):\n temp.add(i)\n candidates.append(temp)\n return candidates\n\n for times in range(qsize):\n # find the distance for each center to the query (P,k) matrix\n # codebooks p,k,m/p\n # newquery[times] , p,m/p\n disarray = np.array([])\n for p in range(codebooks.shape[0]):\n distance = manhattanDistance(newquery[times][p].reshape(1, -1), codebooks[p])\n if p == 0:\n # 1 * 256\n first = distance\n else:\n disarray = np.concatenate((first, distance), axis=0)\n first = disarray[:]\n # 2 * 256, the distance from one data block to all center\n nearindex = np.argsort(disarray, axis=1)\n curset = set()\n heap = []\n direction = generate(P)\n indexstart = [0] * P # for p = 2 ,it is (0,0)\n cursum = 0\n dataindex = []\n for p, v in list(enumerate(indexstart)):\n cursum += disarray[p][nearindex[p][v]]\n dataindex.append(nearindex[p][v])\n heap.append([cursum, dataindex, indexstart])\n indexset = set()\n indexset.add(tuple(indexstart))\n while len(curset) < T:\n firstdata = heap.pop(0)\n first = np.array(firstdata[1])\n indexstart = np.array(firstdata[2])\n indexlist = data_index(codes, first)\n if indexlist.size != 0:\n for item in indexlist:\n curset.add(item)\n # according to the newest element, and add one dimension then push them to heap\n for item in (indexstart + direction):\n cursum = 0\n dataindex = []\n for p, v in enumerate(item):\n cursum += disarray[p][nearindex[p][v]]\n dataindex.append(nearindex[p][v])\n tolist = item.tolist()\n totuple = tuple(tolist)\n if totuple not in indexset:\n # binary insert and do not need to sort again\n insertindex = binary_insert(heap, cursum)\n heap.insert(insertindex, [cursum, dataindex, tolist])\n indexset.add(totuple)\n candidates.append(curset)\n return candidates\n\n\ndef manhattanDistance(data1, data2):\n return distance.cdist(data1, data2, 'cityblock')\n\n\n# indexdata is the data part center,eg,[2,2]means the index of the 0 part is 2,and 1 part is 2\ndef data_index(codesdata, indexdata):\n # res = []\n # for i in range(codesdata.shape[0]):\n # if (codesdata[i, :] == indexdata).all():\n # res.append(i)\n # return res\n return np.where(np.logical_and.reduce(codesdata == indexdata, axis=1))[0]\n\n\ndef generate(dim):\n start = 1\n # add one for each dimension\n res = []\n for _ in range(dim):\n temp = list(map(int, bin(start)[2:].zfill(dim)))\n start = start << 1\n res.append(temp)\n return np.array(res)\n\n\n# return the index where to insert\ndef binary_insert(arr, num):\n l = 0\n r = len(arr) - 1\n while l <= r:\n mid = l + (r - l) // 2\n if arr[mid][0] < num:\n l = mid + 1\n else:\n r = mid - 1\n return l\n\n\n\n","sub_path":"9318/proj/submission.py","file_name":"submission.py","file_ext":"py","file_size_in_byte":5496,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"44868537","text":"\"\"\"\r\n create by Fred on 2018/8/22\r\n\"\"\"\r\nfrom flask_restful import fields\r\nfrom sqlalchemy import Column, Integer, String\r\nfrom app.entity.base import db, Base\r\n\r\n__author__ = 'Fred'\r\n\r\n\r\nclass StatStrategyRecordEntity(Base):\r\n \r\n __tablename__ = 'TB_STAT_STRATEGY_RECORD'\r\n \r\n id = Column(Integer, primary_key=True)\r\n model_id = Column(Integer, nullable=False)\r\n strategy_id = Column(Integer, nullable=False)\r\n batno = Column(String(14), nullable=False)\r\n result = Column(String(1), nullable=False)\r\n score = Column(Integer, nullable=False)\r\n\r\n marshal_fields = {}\r\n marshal_fields['id'] = fields.Integer(attribute='id')\r\n marshal_fields['model_id'] = fields.Integer(attribute='model_id')\r\n marshal_fields['strategy_id'] = fields.Integer(attribute='strategy_id')\r\n marshal_fields['batno'] = fields.String(attribute='batno')\r\n marshal_fields['result'] = fields.String(attribute='result')\r\n marshal_fields['score'] = fields.Integer(attribute='score')\r\n \r\n def __init__(self, v_model_id, v_strategy_id, v_batno, v_result, v_score):\r\n Base.__init__(self)\r\n self.model_id = v_model_id\r\n self.strategy_id = v_strategy_id\r\n self.batno = v_batno\r\n self.result = v_result\r\n self.score = v_score\r\n\r\n def __repr__(self):\r\n return '{\"id\":%r,\"model_id\":%r, \"strategy_id\": %r, \"batno\": %r, \"result\": %r, \"score\": %r}' \\\r\n % (self.id, self.model_id, self.strategy_id, self.batno, self.result, self.score)\r\n\r\n def save(self):\r\n try:\r\n db.session.add(self)\r\n db.session.commit()\r\n except Exception as e:\r\n print(\"Error: \" + e)\r\n db.session.rollback()\r\n return False\r\n return True\r\n\r\n def delete(self):\r\n try:\r\n db.session.delete(self)\r\n db.session.commit()\r\n except Exception as e:\r\n print(\"Error: \" + e)\r\n db.session.rollback()\r\n return False\r\n return True\r\n\r\n def get(self):\r\n try:\r\n return db.session.get(self.id)\r\n except Exception as e:\r\n print(\"Error: \" + e)\r\n return None\r\n","sub_path":"pyaiservice/app/entity/statstrategyqueryrecordentity.py","file_name":"statstrategyqueryrecordentity.py","file_ext":"py","file_size_in_byte":2197,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"433808006","text":"# 1432. Max Difference You Can Get From Changing an Integer\n# vbc 25\n# 2021/12/10\n\n# Runtime: 16 ms, faster than 100.00% of Python3 online submissions for Max Difference You Can Get From Changing an Integer.\n# Memory Usage: 14.3 MB, less than 14.29% of Python3 online submissions for Max Difference You Can Get From Changing an Integer.\n\n# greedy\n# diff = amx - diff\n# in order to make the diff as big as possible, we need make the max as big as possible and min as small as possible\n# max: iterate the list from the beginning, find the first digit that is not 9, and replace it as well as all\n# occurrence by 9\n# min: we have two situations. if the first digit is not 1, replace it as well as all occurrence by 1; otherwise,\n# search the first digit that is neither 0 or 1, and replace it as well as all occurrences by 0\n\n# remarks: brute force method is interesting; string replace function is good\n\nclass Solution:\n def maxDiff(self, num: int) -> int:\n max_digits = list(str(num))\n i = 0\n while i < len(max_digits) and max_digits[i] == '9':\n i += 1\n if i < len(max_digits):\n key = max_digits[i]\n while i < len(max_digits):\n if max_digits[i] == key:\n max_digits[i] = '9'\n i += 1\n min_digits = list(str(num))\n if min_digits[0] != '1':\n i, key = 0, min_digits[0]\n else:\n i = 1\n while i < len(min_digits) and (min_digits[i] == '0' or min_digits[i] == '1'):\n i += 1\n if i < len(min_digits):\n key = min_digits[i]\n val = '0' if i > 0 else '1'\n while i < len(min_digits):\n if min_digits[i] == key:\n min_digits[i] = val\n i += 1\n return int(\"\".join(max_digits)) - int(\"\".join(min_digits))\n","sub_path":"1432. Max Difference You Can Get From Changing an Integer.py","file_name":"1432. Max Difference You Can Get From Changing an Integer.py","file_ext":"py","file_size_in_byte":1832,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"347445131","text":"command = input()\n\nnumbers = [int(el) for el in input().split()]\n\nif command == \"Odd\":\n odd = [el for el in numbers if el % 2 == 1]\n res = sum(odd) * len(numbers)\n print(res)\nelif command == \"Even\":\n even = [el for el in numbers if el % 2 == 0]\n res = sum(even) * len(numbers)\n print(res)","sub_path":"advanced/advanced functions/even vs odd.py","file_name":"even vs odd.py","file_ext":"py","file_size_in_byte":306,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"528667939","text":"''' test_raw_data.py - Use this program to connect to any imu on file and output raw data.\n\n'''\n\nfrom imu_framework.tests.context import imu_tools\nfrom imu_framework.tests.context import imu_base\nfrom matplotlib import pyplot as plt\nfrom imu_framework.tests.context import RealtimePlot\n\n\n# from imu_framework.tests.context import imu_no_thrd_9250\n# from imu_framework.tests.context import imu_thrd_9250\n\n# from imu_framework.tests.context import imu_no_thrd_9250\n# from imu_framework.tests.context import imu_thrd_9250\n\n# from imu_framework.tests.context import imu_no_thrd_sparton\n# from imu_framework.tests.context import imu_thrd_sparton\n\nif __name__ == '__main__':\n\n ######## instantiate IMUs ####################################\n # myIMU_no_thrd_9250 = imu_no_thrd_9250()\n # myIMU_thrd_9250 = imu_thrd_9250()\n\n # myIMU_no_thrd_sparton = imu_no_thrd_sparton()\n # myIMU_thrd_sparton = imu_thrd_sparton()\n\n myIMU_base = imu_base()\n\n\n ######## connect all IMUs #############################################\n # myIMU_no_thrd_9250.connect()\n # myIMU_thrd_9250.connect()\n\n # myIMU_no_thrd_sparton.connect()\n # myIMU_thrd_sparton.connect()\n\n myIMU_base.connect()\n\n # fix me take all and put into tools so multipal instantiations are can be achived\n ##########################################################################\n myTools = imu_tools(imu=myIMU_base)\n\n fig, axes = plt.subplots()\n display = RealtimePlot(axes)\n\n i = 0\n print('start')\n while i <= 4999:\n\n rawAccel = myTools.get_raw_scale_data()\n\n # print(i)\n print(rawAccel)\n # myTools.dataForMatlabProcesing(rawAccel, i, 'LoggedData_CalInertialAndMag')\n\n # tcAcceleration = myTools.get_arhs_tcAccel()\n # print(tcAcceleration)\n\n # zVector = myTools.get_arhs_z_vector()\n # print(zVector)\n i = i + 1\n\n ######## disconnect all IMUs #############################################\n\n # myIMU_no_thrd_sparton.disconnect()\n print(i)\n\n\n display.animate(fig, lambda frame_index: (time.time() - start, random.random() * 100))\n plt.show()\n\n fig, axes = plt.subplots()\n display = RealtimePlot(axes)\n while True:\n display.add(time.time() - start, 100)\n plt.pause(0.001)","sub_path":"imu_framework/tests/test_raw_data.py","file_name":"test_raw_data.py","file_ext":"py","file_size_in_byte":2277,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"593865898","text":"import cv2\nimport time\nfrom datetime import timedelta\nimport position\nimport settings\nimport a_star\nimport best_first\n\n\n# Draws the given path on the image and shows it\ndef write_path(path):\n\n while len(path) > 0:\n node = path.pop()\n x = node.position.x\n y = node.position.y\n\n settings.image[x, y] = [255, 255, 255]\n\n cv2.imshow(\"Path\", settings.image)\n cv2.waitKey(0)\n\n\n# Manage all program\ndef main():\n\n image_name = input(\"\\n> Write image name : \") # Take image path from user\n settings.image = cv2.imread(image_name) # Read the image\n if settings.image is None: # Control if image was found\n print(\" Image not found \")\n return -1\n\n print(\"\\n> Select the algorithm you want to use\")\n selection = input(\" 1) A* with heap\\n 2) A* with stack\\n 3) BestFirst with heap\\n 4) BestFirst with stack\\n >\")\n\n # Define coordinate of start and end node\n print(\"\\n Image size is (\", settings.image.shape[0]-1, \", \", settings.image.shape[1]-1, \")\")\n x = int(input(\"> Enter the x coordinates of starting position : \"))\n y = int(input(\"> Enter the y coordinates of starting position : \"))\n start = position.Position(x, y)\n x = int(input(\"> Enter the x coordinates of ending position : \"))\n y = int(input(\"> Enter the y coordinates of ending position : \"))\n end = position.Position(x, y)\n\n star = time.time()\n if selection == \"1\":\n path, count_max, count_pop = a_star.a_star_with_heap(start, end) # Find the optimum path with A* using heap\n elif selection == \"2\":\n path, count_max, count_pop = a_star.a_star_with_stack(start, end) # Find the optimum path with A* using stack\n elif selection == \"3\":\n path, count_max, count_pop = best_first.bfs_with_heap(start, end) # Find the path with Best First using heap\n elif selection == \"4\":\n path, count_max, count_pop = best_first.bfs_with_stack(start, end) # Find the path with Best First using stack\n else:\n print(\" Invalid Selection \")\n path, count_max, count_pop = None, None, None\n elapsed = (time.time() - star)\n\n if path is None:\n print(\" ERROR: Path not found or start/end coordinates not given correctly\")\n else:\n print(\"\\nExecution took: %s secs\" % timedelta(seconds=round(elapsed)))\n print(\"Maximum number of heap/stack elements : \", count_max)\n print(\"Number of pop() called : \", count_pop)\n print(\" > Image printed on your screen! \")\n write_path(path) # draw the path on the image and show it\n\n\nif __name__ == '__main__':\n main()\n","sub_path":"artificial-intelligence-basic/path-finding-with-Astar/init.py","file_name":"init.py","file_ext":"py","file_size_in_byte":2759,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"98132044","text":"from blog.users.pic_handler import add_profile_pic\nimport secrets\nimport os\nfrom flask import Blueprint, render_template, request, url_for, redirect, flash, current_app\nfrom flask_login import login_user, logout_user, login_required, current_user\nfrom blog import db\nfrom blog.models import User, Post\nfrom blog.users.forms import Reg_form, Login_form, Update_User_form\nusers = Blueprint('users',__name__)\n\n#register\n@users.route('/reg', methods=['post','get'])\ndef register():\n form = Reg_form()\n if form.validate_on_submit():\n new_user = User(email=form.email.data,\n username=form.username.data,\n password=form.password.data)\n db.session.add(new_user)\n db.session.commit()\n flash('Regisration success')\n return redirect(url_for(request.args.get('next','users.login')))\n return render_template('reg.html',form=form)\n#login_view\n@users.route('/login', methods=['post','get'])\ndef login():\n form = Login_form()\n if form.validate_on_submit():\n user = User.query.filter_by(email=form.email.data).first()\n if user is not None and user.check_password(form.password.data):\n login_user(user)\n flash('Login success')\n return redirect(url_for(request.args.get('next','core.index')))\n else:\n flash('Invalid username or password', 'error')\n return render_template('login.html', form=form)\n\n#login_out\n@login_required\n@users.route(('/logout'))\ndef logout():\n logout_user()\n flash('Logout success')\n return redirect(url_for('core.index'))\n\n\n@login_required\n@users.route('/acc', methods=['GET', 'POST'])\ndef acc():\n form = Update_User_form()\n if form.validate_on_submit():\n if form.picture.data:\n # picture_file = save_picture(form.picture.data)\n picture_file = add_profile_pic(form.picture.data, form.username.data)\n current_user.profile_image = picture_file\n current_user.username=form.username.data\n current_user.email=form.email.data\n db.session.commit()\n flash('Account updated', 'success')\n return redirect(url_for('users.acc'))\n elif request.method == 'GET': #must be cap\n form.username.data = current_user.username\n form.email.data = current_user.email\n profile_image = url_for('static', filename='profile_img/'+current_user.profile_image)\n return render_template('acc.html', profile_image=profile_image, check=current_user.profile_image, form = form)\n\n@login_required\n@users.route('/')\ndef user_posts(user_id):\n user = User.query.get_or_404(user_id)\n posts_by_users = Post.query.filter_by(author=user).order_by(Post.date.desc())\n # return render_template('user_page.html', posts_by_users=posts_by_users)\n return render_template('user_page.html', user=user, posts_by_users=posts_by_users)\n # try two\n","sub_path":"build_along/blog-project/blog/users/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":2899,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"216448413","text":"from concurrent import futures\nimport time\nimport grpc\nfrom rpc.protoc import regular_adjustment_pb2_grpc\nfrom rpc.protoc import conditional_trigger_pb2_grpc\nfrom rpc.protoc import adjustment_and_triggering_of_portfolio_pb2_grpc\nfrom rpc.protoc import stocks_pb2_grpc\nfrom rpc.protoc import option_futures_pb2_grpc\nfrom rpc.protoc import citibank_api_pb2_grpc\nfrom rpc import RegularAdjustmentService, ConditionalTriggerService, AdjustmentAndTriggeringOfPortfolioService, \\\n StocksService, OptionFuturesService, CitibankApiService\n\n_ONE_DAY_IN_SECONDS = 60 * 60 * 24\n\n\ndef serve():\n server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))\n regular_adjustment_pb2_grpc.add_RegularAdjustmentServicer_to_server(\n RegularAdjustmentService.RegularAdjustmentService(), server)\n conditional_trigger_pb2_grpc.add_ConditionalTriggerServicer_to_server(\n ConditionalTriggerService.ConditionalTriggerService(), server)\n adjustment_and_triggering_of_portfolio_pb2_grpc.add_AdjustmentAndTriggeringOfPortfolioServicer_to_server(\n AdjustmentAndTriggeringOfPortfolioService.AdjustmentAndTriggeringOfPortfolioService(), server)\n stocks_pb2_grpc.add_StocksServicer_to_server(\n StocksService.StocksService(), server)\n option_futures_pb2_grpc.add_OptionFuturesServicer_to_server(\n OptionFuturesService.OptionFuturesService(), server)\n citibank_api_pb2_grpc.add_CitibankApiServicer_to_server(\n CitibankApiService.CitibankApiService(), server\n )\n\n server.add_insecure_port('[::]:50051')\n server.start()\n try:\n while True:\n time.sleep(_ONE_DAY_IN_SECONDS)\n except KeyboardInterrupt:\n server.stop(0)\n\n\nif __name__ == '__main__':\n serve()\n","sub_path":"RpcServer.py","file_name":"RpcServer.py","file_ext":"py","file_size_in_byte":1731,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"401776054","text":"#!/usr/bin/env python3\n# Copyright (c) 2019 Toyota Research Institute\n\n\"\"\"\nHelper functions for generating features in beep.featurize module\nAll methods are currently lumped into this script.\n\"\"\"\n\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib as plt\nfrom scipy import signal\nfrom lmfit import models\nfrom scipy.interpolate import interp1d\n\n\ndef isolate_dQdV_peaks(processed_cycler_run, diag_nr, charge_y_n, max_nr_peaks, rpt_type, half_peak_width=0.075):\n \"\"\"\n Determine the number of cycles to reach a certain level of degradation\n\n Args:\n processed_cycler_run: processed_cycler_run (beep.structure.ProcessedCyclerRun): information about cycler run\n rpt_type: string indicating which rpt to pick\n charge_y_n: if 1 (default), takes charge dQdV, if 0, takes discharge dQdV\n diag_nr: if 1 (default), takes dQdV of 1st RPT past the initial diagnostic\n\n Returns:\n dataframe with Voltage and dQdV columns for charge or discharge curve in the rpt_type diagnostic cycle.\n The peaks will be isolated\n \"\"\"\n\n rpt_type_data = processed_cycler_run.diagnostic_interpolated[(processed_cycler_run.diagnostic_interpolated.cycle_type == rpt_type)]\n cycles = rpt_type_data.cycle_index.unique()\n\n ## Take charge or discharge from cycle 'diag_nr'\n data = pd.DataFrame({'dQdV': [], 'voltage': []})\n\n if charge_y_n == 1:\n data.dQdV = rpt_type_data[\n (rpt_type_data.cycle_index == cycles[diag_nr]) & (rpt_type_data.step_type == 0)].charge_dQdV.values\n data.voltage = rpt_type_data[\n (rpt_type_data.cycle_index == cycles[diag_nr]) & (rpt_type_data.step_type == 0)].voltage.values\n elif charge_y_n == 0:\n data.dQdV = rpt_type_data[\n (rpt_type_data.cycle_index == cycles[diag_nr]) & (rpt_type_data.step_type == 1)].discharge_dQdV.values\n data.voltage = rpt_type_data[\n (rpt_type_data.cycle_index == cycles[diag_nr]) & (rpt_type_data.step_type == 1)].voltage.values\n # Turn values to positive temporarily\n data.dQdV = -data.dQdV\n else:\n raise NotImplementedError('Charge_y_n must be either 0 or 1')\n\n # Remove NaN from x and y\n data = data.dropna()\n\n # Reset x and y to values without NaNs\n x = data.voltage\n y = data.dQdV\n\n # Remove strong outliers\n upper_limit = y.sort_values().tail(round(0.01 * len(y))).mean() + y.sort_values().mean()\n data = data[(y < upper_limit)]\n\n # Reset x and y to values without outliers\n x = data.voltage\n y = data.dQdV\n\n # Filter out the x values of the peaks only\n no_filter_data = data\n\n # Find peaks\n peak_indices = signal.find_peaks_cwt(y, (10,))[-max_nr_peaks:]\n\n peak_voltages = {}\n peak_dQdVs = {}\n\n for count, i in enumerate(peak_indices):\n temp_filter_data = no_filter_data[((x > x.iloc[i] - half_peak_width) & (x < x.iloc[i] + half_peak_width))]\n peak_voltages[count] = x.iloc[i]\n peak_dQdVs[count] = y.iloc[i]\n\n if count == 0:\n filter_data = temp_filter_data\n else:\n filter_data = filter_data.append(temp_filter_data)\n\n return filter_data, no_filter_data, peak_voltages, peak_dQdVs\n\n\ndef generate_model(spec):\n \"\"\"\n Method that generates a model to fit the hppc data to for peak extraction, using spec dictionary\n :param spec (dict): dictionary containing X, y model types.\n :return: composite model objects of lmfit Model class and a parameter object as defined in lmfit.\n \"\"\"\n composite_model = None\n params = None\n x = spec['x']\n y = spec['y']\n x_min = np.min(x)\n x_max = np.max(x)\n x_range = x_max - x_min\n y_max = np.max(y)\n for i, basis_func in enumerate(spec['model']):\n prefix = f'm{i}_'\n\n #models is an lmfit object\n model = getattr(models, basis_func['type'])(prefix=prefix)\n if basis_func['type'] in ['GaussianModel', 'LorentzianModel',\n 'VoigtModel']: # for now VoigtModel has gamma constrained to sigma\n model.set_param_hint('sigma', min=1e-6, max=x_range)\n model.set_param_hint('center', min=x_min, max=x_max)\n model.set_param_hint('height', min=1e-6, max=1.1 * y_max)\n model.set_param_hint('amplitude', min=1e-6)\n\n default_params = {\n prefix + 'center': x_min + x_range * np.random.randn(),\n prefix + 'height': y_max * np.random.randn(),\n prefix + 'sigma': x_range * np.random.randn()\n }\n else:\n raise NotImplemented(f'model {basis_func[\"type\"]} not implemented yet')\n if 'help' in basis_func: # allow override of settings in parameter\n for param, options in basis_func['help'].items():\n model.set_param_hint(param, **options)\n model_params = model.make_params(**default_params, **basis_func.get('params', {}))\n if params is None:\n params = model_params\n else:\n params.update(model_params)\n if composite_model is None:\n composite_model = model\n else:\n composite_model = composite_model + model\n return composite_model, params\n\n\ndef update_spec_from_peaks(spec, model_indices, peak_voltages, peak_dQdVs, peak_widths=(10,), **kwargs):\n x = spec['x']\n y = spec['y']\n x_range = np.max(x) - np.min(x)\n\n for i, j, model_index in zip(peak_voltages, peak_dQdVs, model_indices):\n model = spec['model'][model_index]\n\n if model['type'] in ['GaussianModel', 'LorentzianModel', 'VoigtModel']:\n params = {\n 'height': peak_dQdVs[j],\n 'sigma': x_range / len(x) * np.min(peak_widths),\n 'center': peak_voltages[i]\n }\n if 'params' in model:\n model.update(params)\n else:\n model['params'] = params\n else:\n raise NotImplemented(f'model {basis_func[\"type\"]} not implemented yet')\n return\n\n\ndef generate_dQdV_peak_fits(processed_cycler_run, rpt_type, diag_nr, charge_y_n, plotting_y_n=0, max_nr_peaks=4):\n \"\"\"\n Generate fits characteristics from dQdV peaks\n\n Args:\n processed_cycler_run: processed_cycler_run (beep.structure.ProcessedCyclerRun)\n diag_nr: if 1, takes dQdV of 1st RPT past the initial diagnostic, 0 (default) is initial dianostic\n charge_y_n: if 1 (default), takes charge dQdV, if 0, takes discharge dQdV\n\n\n Returns:\n dataframe with Amplitude, mu and sigma of fitted peaks\n \"\"\"\n # Uses isolate_dQdV_peaks function to filter out peaks and returns x(Volt) and y(dQdV) values from peaks\n\n data, no_filter_data, peak_voltages, peak_dQdVs = isolate_dQdV_peaks(processed_cycler_run, rpt_type=rpt_type, \\\n charge_y_n=charge_y_n, diag_nr=diag_nr,\n max_nr_peaks=max_nr_peaks,\n half_peak_width=0.07)\n\n no_filter_x = no_filter_data.voltage\n no_filter_y = no_filter_data.dQdV\n\n ####### Setting spec for gaussian model generation\n\n x = data.voltage\n y = data.dQdV\n\n # Set construct spec using number of peaks\n model_types = []\n for i in np.arange(max_nr_peaks):\n model_types.append({'type': 'GaussianModel', 'help': {'sigma': {'max': 0.1}}})\n\n spec = {\n 'x': x,\n 'y': y,\n 'model': model_types\n }\n\n # Update spec using the found peaks\n update_spec_from_peaks(spec, np.arange(max_nr_peaks), peak_voltages, peak_dQdVs)\n if plotting_y_n:\n fig, ax = plt.subplots()\n ax.scatter(spec['x'], spec['y'], s=4)\n for i in peak_voltages:\n ax.axvline(x=peak_voltages[i], c='black', linestyle='dotted')\n ax.scatter(peak_voltages[i], peak_dQdVs[i], s=30, c='red')\n\n #### Generate fitting model\n\n model, params = generate_model(spec)\n output = model.fit(spec['y'], params, x=spec['x'])\n if plotting_y_n:\n # #Plot residuals\n # fig, gridspec = output.plot(data_kws={'markersize': 1})\n\n ### Plot components\n\n ax.scatter(no_filter_x, no_filter_y, s=4)\n ax.set_xlabel('Voltage')\n\n if charge_y_n:\n ax.set_title(f'dQdV for charge diag cycle {diag_nr}')\n ax.set_ylabel('dQdV')\n else:\n ax.set_title(f'dQdV for discharge diag cycle {diag_nr}')\n ax.set_ylabel('- dQdV')\n\n components = output.eval_components()\n for i, model in enumerate(spec['model']):\n ax.plot(spec['x'], components[f'm{i}_'])\n\n # Construct dictionary of peak fits\n peak_fit_dict = {}\n for i, model in enumerate(spec['model']):\n best_values = output.best_values\n prefix = f'm{i}_'\n peak_fit_dict[prefix + \"Amp\"] = [peak_dQdVs[i]]\n peak_fit_dict[prefix + \"Mu\"] = [best_values[prefix + \"center\"]]\n peak_fit_dict[prefix + \"Sig\"] = [best_values[prefix + \"sigma\"]]\n\n # Make dataframe out of dict\n peak_fit_df = pd.DataFrame(peak_fit_dict)\n\n return peak_fit_df\n\n\n\ndef interp(df):\n '''\n this function takes in a data frame that we are interested in, and\n returns an interpolation function based on the discharge volatge and capacity\n '''\n V = df.voltage.values\n Q = df.discharge_capacity.values\n f = interp1d(Q, V, kind='cubic', fill_value=\"extrapolate\")\n return f\n\n\ndef list_minus(list1, list2):\n \"\"\"\n this function takes in two lists and will return a list containing\n the values of list1 minus list2\n \"\"\"\n result = []\n zip_object = zip(list1, list2)\n for list1_i, list2_i in zip_object:\n result.append(list1_i - list2_i)\n return result\n\n\ndef get_hppc_ocv_helper(cycle_hppc_0, step_num):\n \"\"\"\n this helper function takes in a cycle and a step number\n and returns a list that stores the mean of the last five points of voltage in different\n step counter indexes (which is basically the soc window)\n \"\"\"\n chosen1 = cycle_hppc_0[cycle_hppc_0.step_index == step_num]\n voltage1 = []\n step_index_counters = chosen1.step_index_counter.unique()[0:9]\n for i in range(len(step_index_counters)):\n df_i = chosen1.loc[chosen1.step_index_counter == step_index_counters[i]]\n voltage1.append(df_i['voltage'].iloc[-10].mean()) # take the mean of the last 10 points of the voltage value\n return voltage1\n\n\ndef get_hppc_ocv(processed_cycler_run, diag_num):\n '''\n This function takes in cycling data for one cell and returns the variance of OCVs at different SOCs\n diag_num cyce minus first hppc cycle(cycle 2)\n Argument:\n processed_cycler_run(process_cycler_run object)\n diag_num(int): diagnostic cycle number at which you want to get the feature, such as 37 or 142\n Returns:\n a float\n the variance of the diag_num minus cycle 2 for OCV\n '''\n data = processed_cycler_run.diagnostic_interpolated\n cycle_hppc = data.loc[data.cycle_type == 'hppc']\n cycle_hppc = cycle_hppc.loc[cycle_hppc.current.notna()]\n step = 11\n step_later = 43\n cycle_hppc_0 = cycle_hppc.loc[cycle_hppc.cycle_index == 2]\n # in case that cycle 2 correspond to two cycles one is real cycle 2, one is at the end\n cycle_hppc_0 = cycle_hppc_0.loc[cycle_hppc_0.test_time < 250000]\n voltage_1 = get_hppc_ocv_helper(cycle_hppc_0, step)\n chosen = cycle_hppc.loc[cycle_hppc.cycle_index == diag_num]\n voltage_2 = get_hppc_ocv_helper(chosen, step_later)\n dv = list_minus(voltage_1, voltage_2)\n return np.var(dv)\n\n\ndef get_hppc_r(processed_cycler_run, diag_num):\n '''\n This function takes in cycling data for one cell and returns the resistance at different SOCs with resistance at the\n first hppc cycle(cycle 2) deducted\n Argument:\n processed_cycler_run(process_cycler_run object)\n diag_num(int): diagnostic cycle number at which you want to get the feature, such as 37 or 142\n Returns:\n two floats\n the variance of the diag_num - cycle 2 for HPPC resistance for both charge and discharge\n '''\n data = processed_cycler_run.diagnostic_interpolated\n cycle_hppc = data.loc[data.cycle_type == 'hppc']\n cycle_hppc = cycle_hppc.loc[cycle_hppc.current.notna()]\n cycles = cycle_hppc.cycle_index.unique()\n if diag_num not in cycles:\n return None\n steps = [11, 12, 14]\n states = ['R', 'D', 'C']\n results_0 = {}\n results = {}\n resistance = {}\n dr_d = {}\n cycle_hppc_0 = cycle_hppc.loc[cycle_hppc.cycle_index == 2]\n # in case that cycle 2 correspond to two cycles one is real cycle 2, one is at the end\n cycle_hppc_0 = cycle_hppc_0.loc[cycle_hppc_0.test_time < 250000]\n for i in range(len(steps)):\n chosen = cycle_hppc_0[cycle_hppc_0.step_index == steps[i]]\n state = states[i]\n result = get_V_I(chosen)\n results_0[state] = result\n results[2] = results_0\n steps_later = [43, 44, 46]\n # step 43 is rest, 44 is discharge and 46 is charge, use the get ocv function to get the voltage values\n # and calculate the over potential and thus the resistance change\n for i in range(1, len(cycles)):\n chosen = cycle_hppc[cycle_hppc.cycle_index == cycles[i]]\n results_s = {}\n for j in range(len(steps_later)):\n chosen_s = chosen[chosen.step_index == steps_later[j]]\n state = states[j]\n results_s[state] = get_V_I(chosen_s)\n results[cycles[i]] = results_s\n # calculate the resistance and compare the cycle evolution\n keys = list(results.keys())\n resistance['D'] = {}\n resistance['C'] = {}\n for i in range(len(keys)):\n d_v = results[keys[i]]['D']['voltage'] # discharge voltage for a cycle\n c_v = results[keys[i]]['C']['voltage'] # charge voltage for a cycle\n r_v = results[keys[i]]['R']['voltage'] # rest voltage for a cycle\n r_v_d = r_v[0:min(len(r_v), len(d_v))] # in case the size is different\n d_v = d_v[0:min(len(r_v), len(d_v))]\n c_v = c_v[0:min(len(r_v), len(c_v))]\n r_v_c = r_v[0:min(len(r_v), len(c_v))]\n d_n = list(np.array(d_v) - np.array(r_v_d)) # discharge overpotential\n c_n = list(np.array(c_v) - np.array(r_v_c)) # charge overpotential\n resistance['D'][keys[i]] = np.true_divide(d_n, results[keys[i]]['D']['current'])\n resistance['C'][keys[i]] = np.true_divide(c_n, results[keys[i]]['C']['current'])\n resistance_d = resistance['D']\n resistance_c = resistance['C']\n dr_c = {}\n SOC = list(range(10, 100, 10))\n for i in range(1, len(keys)):\n resistance_d_i = resistance_d[keys[i]]\n resistance_d_0 = resistance_d[keys[0]]\n resistance_d_0 = resistance_d_0[0:min(len(resistance_d_i), len(resistance_d_0))]\n dr_d[keys[i]] = list(resistance_d_i - resistance_d_0)\n for i in range(1, len(keys)):\n resistance_c_i = resistance_c[keys[i]]\n resistance_c_0 = resistance_c[keys[0]]\n resistance_c_0 = resistance_c_0[0:min(len(resistance_c_i), len(resistance_c_0))]\n dr_c[keys[i]] = list(resistance_c_i - resistance_c_0)\n f2_d = np.var(dr_d[diag_num])\n f2_c = np.var(dr_c[diag_num])\n return f2_d, f2_c\n\n\ndef get_V_I(df):\n \"\"\"\n this helper functiion takes in a specific step in the first hppc cycle and gives you the voltage values as\n well as the current values after each step in the first cycle.\n \"\"\"\n result = {}\n voltage = []\n current = []\n step_index_counters = df.step_index_counter.unique()[0:9]\n for i in range(len(step_index_counters)):\n df_i = df.loc[df.step_index_counter == step_index_counters[i]]\n voltage.append(df_i['voltage'].iloc[-1]) # the last point of the voltage value\n current.append(df_i['current'].mean())\n result['voltage'] = voltage\n result['current'] = current\n return result\n\n\ndef get_v_diff(diag_num, processed_cycler_run, soc_window):\n \"\"\"\n This function helps us get the feature of the variance of the voltage difference\n across a specific capacity window\n Argument:\n diag_num(int): diagnostic cycle number at which you want to get the feature, such as 37 or 142\n processed_cycler_run(process_cycler_run object)\n soc_window(int): let the function know which step_counter_index you want to look at\n Returns:\n a float\n \"\"\"\n data = processed_cycler_run.diagnostic_interpolated\n hppc_data = data.loc[data.cycle_type == 'hppc']\n # the discharge steps in the hppc cycles step number 47\n hppc_data_2 = hppc_data.loc[hppc_data.cycle_index == diag_num]\n hppc_data_1 = hppc_data.loc[hppc_data.cycle_index == 2]\n # in case a final HPPC is appended in the end also with cycle number 2\n hppc_data_1 = hppc_data_1.loc[hppc_data_1.discharge_capacity < 8]\n hppc_data_2_d = hppc_data_2.loc[hppc_data_2.step_index == 47]\n hppc_data_1_d = hppc_data_1.loc[hppc_data_1.step_index == 15]\n step_counters_1 = hppc_data_1_d.step_index_counter.unique()\n step_counters_2 = hppc_data_2_d.step_index_counter.unique()\n if (len(step_counters_1) < 8) or (len(step_counters_2) < 8):\n print('error')\n return None\n else:\n chosen_1 = hppc_data_1_d.loc[hppc_data_1_d.step_index_counter == step_counters_1[soc_window]]\n chosen_2 = hppc_data_2_d.loc[hppc_data_2_d.step_index_counter == step_counters_2[soc_window]]\n chosen_1 = chosen_1.loc[chosen_1.discharge_capacity.notna()]\n chosen_2 = chosen_2.loc[chosen_2.discharge_capacity.notna()]\n if len(chosen_1) == 0 or len(chosen_2) == 0:\n print('error')\n return None\n f = interp(chosen_2)\n v_1 = chosen_1.voltage.tolist()\n v_2 = f(chosen_1.discharge_capacity).tolist()\n v_diff = list_minus(v_1, v_2)\n if abs(np.var(v_diff)) > 1:\n print('weird voltage')\n return None\n else:\n return v_diff\n\n\ndef get_relaxation_times(voltage_data, time_data, decay_percentage = [0.5, 0.8, 0.99]):\n \"\"\"\n This function takes in the voltage data and time data of a voltage relaxation curve\n and calculates out the time it takes to reach 50%, 80% and 99% of the OCV relaxation.\n\n Args:\n voltage_data(np.array): list of the voltage data in a voltage relaxation curve\n time_data(np.array) : list of the time data corresponding to voltage data\n decay_percentage (list): list of thresholds to compute time constants for\n\n Returns:\n @time_array(np.array): list of time taken to reach percentage of total relaxation\n where percentages are 50%, 80%, and 99% returned in that order.\n\n \"\"\"\n\n # Scaling the voltage data to between 0-1\n final_voltage = voltage_data[-1]\n initial_voltage = voltage_data[0]\n scaled_voltage_data = (voltage_data - initial_voltage) / (final_voltage - initial_voltage)\n\n # shifting the time data to start at 0\n shifted_time_data = time_data - time_data[0]\n\n v_decay_inv = interp1d(scaled_voltage_data, shifted_time_data)\n\n # these are the decay percentages that will correspond to the time values extracted\n time_array = []\n\n for percent in decay_percentage:\n time_array.append(v_decay_inv(percent))\n\n return np.array(time_array)\n\n\ndef get_relaxation_features(processed_cycler_run):\n \"\"\"\n\n This function takes in the processed structure data and retrieves the fractional change in\n the time taken to reach 50%, 80% and 99% of the voltage decay between the first and\n the second HPPC cycles\n\n Args:\n @processed_cycler_run(beep.structure.ProcessedCyclerRun): ProcessedCyclerRun object for the cell\n you want the diagnostic feature for.\n\n Returns:\n @fracTimeArray(np.array): list of fractional difference in time taken to reach percentage of\n total relaxation between the first and second diagnostic cycle. It is organized such that\n the percentages 50%, 80%, and 99% correspond to a given column, and the rows are different\n SOCs of the HPPC starting at 0 with the highest SOC and going downwards.\n \"\"\"\n\n total_time_array = []\n\n # chooses the first and the second diagnostic cycle\n for hppc_chosen in [0, 1]:\n\n # Getting just the HPPC cycles\n hppc_diag_cycles = processed_cycler_run.diagnostic_interpolated[processed_cycler_run.diagnostic_interpolated.cycle_type == \"hppc\"]\n\n # Getting unique and ordered cycle index list for HPPC cycles, and choosing the hppc cycle\n hppc_cycle_list = list(set(hppc_diag_cycles.cycle_index))\n hppc_cycle_list.sort()\n\n # Getting unique and ordered Regular Step List (Non-unique identifier)\n reg_step_list = hppc_diag_cycles[hppc_diag_cycles.cycle_index == hppc_cycle_list[hppc_chosen]].step_index\n reg_step_list = list(set(reg_step_list))\n reg_step_list.sort()\n\n # The value of 1 for regular step corresponds to all of the relaxation curves in the hppc\n reg_step_relax = 1\n\n # Getting unique and ordered Step Counter List (unique identifier)\n step_count_list = hppc_diag_cycles[(hppc_diag_cycles.cycle_index == hppc_cycle_list[hppc_chosen]) &\n (hppc_diag_cycles.step_index == reg_step_list[reg_step_relax])].step_index_counter\n step_count_list = list(set(step_count_list))\n step_count_list.sort()\n # The first one isn't a proper relaxation curve(comes out of CV) so we ignore it\n step_count_list = step_count_list[1:]\n\n # 9x2 array where the rows are the different SOC starting high to low and columns are percent degrad\n # initialized to all nans so when they can't be calculated it has a nan in its place\n all_time_array = np.nan * np.ones((9, 3))\n\n # gets all the times for a single SOC per loop\n for soc_num in range(0, len(step_count_list)):\n relax_curve_df = hppc_diag_cycles[(hppc_diag_cycles.cycle_index == hppc_cycle_list[hppc_chosen]) &\\\n (hppc_diag_cycles.step_index_counter == step_count_list[soc_num])]\n\n time_array = get_relaxation_times(np.array(relax_curve_df.voltage), np.array(relax_curve_df.test_time))\n all_time_array[soc_num][:] = time_array\n\n total_time_array.append(all_time_array)\n\n return total_time_array[1] / total_time_array[0]\n\n","sub_path":"beep/helpers/featurizer_helpers.py","file_name":"featurizer_helpers.py","file_ext":"py","file_size_in_byte":22553,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"121043279","text":"import os.path\nimport sys\n\nfrom PyQt5.QtWidgets import QWidget, QFileDialog, QMessageBox\nfrom PyQt5 import QtCore, QtWidgets, uic\nfrom PyQt5.QtCore import QThread, pyqtSignal\nfrom modules.program_manager import *\nfrom pathlib import Path\n\napp = None # global variable bound to the QtWidgets.QApplication instance.\nmain_window = None # global variable bound to EyePredict application object.\n\nclass PredictionThread(QThread):\n \"\"\"\n Thread object class, used for \"Predict\" tab's processes.\n \"\"\"\n\n # A signal preparation\n alert = pyqtSignal(str)\n\n def __init__(self, program_manager, action, features=[], exclusion_category=\"\", num_to_exclude=\"\", iterations=\"\"):\n \"\"\"\n Make a new thread instance with the specified arguments.\n All arguments used for prediction processes, some arguments are optional for some prediction actions.\n :param program_manager: The program manger to activate by predication actions request/s.\n :param action: The requested action to preform (\"Cross Validation\", \"Fit Model\", \"Predict\").\n :param features: Selected features to extract.\n :param exclusion_category: column label from the behavioral data, will be used for trial grouping during learning.\n :param num_to_exclude: The number of groups that will be excluded during training (int).\n :param iterations: Maximum iterations number for the training session.\n \"\"\"\n # QThread.__init__(self)\n super(PredictionThread, self).__init__()\n self.program_manager = program_manager\n self.features = features\n self.exclusion_category = exclusion_category\n self.num_to_exclude = num_to_exclude\n self.iterations = iterations\n self.action = action\n\n def __del__(self):\n self.wait()\n\n def run(self):\n \"\"\"\n Begin thread's run, calling the requested prediction actions.\n :return: void\n \"\"\"\n if self.action != \"Cross Validation\" and self.action != \"Fit Model\" and self.action != \"Predict\":\n return\n\n self._block_gui_buttons()\n self._start_prediction_action()\n self.program_manager.reset_stop_flags() # reset flags that have been used to cancel the process.\n self._free_gui_buttons()\n\n def _start_prediction_action(self):\n \"\"\"\n call program_manger with the supplied arguments, and update the \"Result\" label as necessary.\n :return: void \n \"\"\"\n # predict\n main_window.ResultLable.setStyleSheet(\"color: rgb(0, 0, 0); font: 16pt 'Tahoma';\")\n main_window.ResultLable.setText(self.action + \" Process \\nIs Running...\")\n return_string, result = self.program_manager.predict(self.features, self.exclusion_category,\n self.num_to_exclude, self.iterations, self.action)\n # Check the process exit status, and update gui_module accordingly\n if return_string == \"Success\":\n main_window.ResultLable.setText(result)\n else:\n main_window.ResultLable.setStyleSheet(\"color: rgb(189, 189, 189); font: 24pt 'Tahoma';\")\n main_window.ResultLable.setText(\"Result...\")\n self.alert.emit(return_string)\n\n def _free_gui_buttons(self):\n \"\"\"\n Set all gui_module buttons enable.\n :return: void\n \"\"\"\n # free gui_module functions\n main_window.ReloadButton.setEnabled(True)\n main_window.ApplyButton.setEnabled(True)\n main_window.LoadButton.setEnabled(True)\n main_window.SaveButton.setEnabled(True)\n main_window.VisualizeButton.setEnabled(True)\n main_window.PredictButton.setEnabled(True)\n main_window.CrossValidationButton.setEnabled(True)\n main_window.FitModelButton.setEnabled(True)\n main_window.SaveModelButton.setEnabled(True)\n main_window.LoadModelButton.setEnabled(True)\n main_window.CancelButton.hide()\n main_window.SaveModelButton.show()\n\n def _block_gui_buttons(self):\n \"\"\"\n Set all gui_module buttons disable, to avoid data processing conflicts.\n :return: void\n \"\"\"\n # Block gui_module functions\n main_window.ReloadButton.setEnabled(False)\n main_window.ApplyButton.setEnabled(False)\n main_window.LoadButton.setEnabled(False)\n main_window.SaveButton.setEnabled(False)\n main_window.VisualizeButton.setEnabled(False)\n main_window.PredictButton.setEnabled(False)\n main_window.CrossValidationButton.setEnabled(False)\n main_window.FitModelButton.setEnabled(False)\n main_window.SaveModelButton.setEnabled(False)\n main_window.LoadModelButton.setEnabled(False)\n main_window.CancelButton.show()\n main_window.SaveModelButton.hide()\n\n\nclass Alert(QWidget):\n \"\"\"\n Widget class to pop user messages\n \"\"\"\n def __init__(self, text):\n super().__init__()\n self.title = 'Alert'\n self.left = 1300\n self.top = 800\n self.width = 320\n self.height = 200\n self.text = text\n self.init_ui()\n\n def init_ui(self):\n self.setWindowTitle(self.title)\n self.setGeometry(self.left, self.top, self.width, self.height)\n QMessageBox.warning(self, self.title, self.text, QMessageBox.Ok, QMessageBox.Ok)\n self.show()\n\n\nclass Logic(QWidget):\n \"\"\"\n All the logic methods behind the gui_module activation, and a connection to the program's business logic.\n \"\"\"\n\n def __init__(self):\n super().__init__()\n self.predict_tab_visited = False\n self.program_manager = ProgramManager()\n self.prediction_thread = None\n\n ##########################################################\n # Start Tab Related Functions #\n ##########################################################\n\n def browse_experiment_info_file(self):\n \"\"\"\n Start tab: \"Browse\" button || Data tab: \"Reload...\" button.\n Choose experiment info file.\n Send it to ProgramManager for processing.\n Display loaded behavioral data DataFrame.\n :return: Void\n \"\"\"\n # Pop up file dialog\n options = QFileDialog.Options()\n options |= QFileDialog.ShowDirsOnly\n path, _ = QFileDialog.getOpenFileName(self, \"Choose Experiment Info File\", \"\",\n \"Text Files (*.txt);;Ini Files (*.ini)\", options=options)\n\n # Check if a file was chosen\n if path:\n # Send it to ProgramManager for processing\n return_string, df = self.program_manager.load(path)\n if return_string == \"Success\":\n # Display loaded behavioral data DataFrame\n self._display_dataframe(df)\n main_window.tabWidget.setCurrentIndex(1)\n # Load prediction labels\n main_window.ExclusionCriteriaComboBox.addItems(self.program_manager.get_exclusion_criteria())\n # update the number of items in the predict tab\n self.set_train_test_num()\n else:\n self._display_dataframe(None)\n main_window.ExclusionCriteriaComboBox.clear()\n Alert(return_string)\n\n ##########################################################\n # Data Tab Related Functions #\n ##########################################################\n\n def load_data_filter_query(self):\n \"\"\"\n Data tab: \"Load...\" button.\n Loads data filtering configurations from a chosen .txt file\n :return: Void\n \"\"\"\n # Pop up file dialog\n options = QFileDialog.Options()\n file_path, _ = QFileDialog.getOpenFileName(self, \"Load configuration\", \"\", \"Text Files (*.txt)\",\n options=options)\n # Check if a file was chosen\n if file_path:\n # Read query from file\n query = Path(file_path).read_text()\n # Write query in text box\n main_window.ConfigurationTextEdit.setText(query)\n\n def save_data_filter_query(self):\n \"\"\"\n Data tab: \"Save\" button.\n Saves data filtering configurations to a chosen .txt file\n :return: Void\n \"\"\"\n # Pop up file dialog\n options = QFileDialog.Options()\n file_path, _ = QFileDialog.getSaveFileName(self, \"Save data filtering configurations\", \"\",\n \"All Files (*);;Text Files (*.txt)\", options=options)\n # Check if a file was chosen\n if file_path:\n # Get query from the configuration text box\n query = main_window.ConfigurationTextEdit.toPlainText()\n\n # Write query to file, create if doesn't exists\n with open(file_path, \"w\") as fd:\n fd.write(query)\n\n def _display_dataframe(self, df):\n \"\"\"\n Receives a DataFrame object, containing behavioral data.\n Loads it to the GUI data table, in the Data tab.\n :param df: a DataFrame \n :return: Void\n \"\"\"\n # clean table\n main_window.DataTable.setColumnCount(0)\n main_window.DataTable.setRowCount(0)\n\n if df is None:\n return\n\n # set table dimensions\n main_window.DataTable.setColumnCount(len(df.columns))\n main_window.DataTable.setRowCount(len(df.index))\n\n _translate = QtCore.QCoreApplication.translate\n\n # set headers\n i = 0\n for header in df:\n item = QtWidgets.QTableWidgetItem()\n item.setText(_translate(\"MainWindow\", header))\n main_window.DataTable.setHorizontalHeaderItem(i, item)\n i += 1\n\n # set values\n for i in range(len(df.index)):\n for j in range(len(df.columns)):\n item = QtWidgets.QTableWidgetItem()\n item.setText(_translate(\"MainWindow\", str(df.iat[i, j])))\n main_window.DataTable.setItem(i, j, item)\n\n # Resize table\n main_window.DataTable.resizeColumnsToContents()\n main_window.DataTable.resizeRowsToContents()\n\n def apply_filter_query(self):\n \"\"\"\n Data tab: \"Apply\" button.\n Execute data filter query, and updates data table in Data tab.\n :return: Void\n \"\"\"\n query = main_window.ConfigurationTextEdit.toPlainText()\n\n # Call ProgramManager for query processing\n return_string, df = self.program_manager.filter_data(query)\n if return_string is \"Success\":\n # Display updated data frame\n self._display_dataframe(df)\n # update predict tab (exclusion items number)\n self.set_train_test_num()\n else:\n Alert(return_string)\n\n ##########################################################\n # Visualization Tab Related Functions #\n ##########################################################\n\n def visualize(self):\n \"\"\"\n Visu tab: \"Visualize\" button.\n Collects visualization options and calls visu module to create visualization object and display it.\n :return: Void\n \"\"\"\n # get visualization type to create\n try:\n visu_type = main_window.VisuTypeList.currentItem().text()\n if not main_window.VisuTypeList.currentItem().isSelected():\n raise Exception\n except:\n Alert(\"Please choose a visualization type\")\n return\n\n # get filtering query\n filter_query = main_window.VisuFilterQueryTextEdit.toPlainText()\n\n # get processing method\n process_method = main_window.ProcessingMethodSlider.value() # 0 Simultaneous, 2 MeanCalc\n if process_method is 1: # can be tested later in the chain, and return alert as return_string if needed\n Alert(\"Please choose a processing method\")\n return\n\n # call manager.visualize\n return_string = self.program_manager.visualize(visu_type, filter_query, process_method)\n if not return_string == \"Success\":\n Alert(return_string)\n\n def set_processing_method(self):\n \"\"\"\n Visu tab: \"Processing Method\" slider.\n Styling slider position, using bold font for the chosen method text label. \n :return: void\n \"\"\"\n if main_window.ProcessingMethodSlider.value() is 0:\n main_window.SimultaneousLabel.setStyleSheet(\"font: -10 9pt 'Tahoma';\")\n main_window.MeanCalcLabel.setStyleSheet(\"font: 9pt 'Tahoma';\")\n elif main_window.ProcessingMethodSlider.value() is 1:\n main_window.SimultaneousLabel.setStyleSheet(\"font: 9pt 'Tahoma';\")\n main_window.MeanCalcLabel.setStyleSheet(\"font: 9pt 'Tahoma';\")\n elif main_window.ProcessingMethodSlider.value() is 2:\n main_window.SimultaneousLabel.setStyleSheet(\"font: 9pt 'Tahoma';\")\n main_window.MeanCalcLabel.setStyleSheet(\"font: -10 9pt 'Tahoma';\")\n\n ##########################################################\n # Predict Tab Related Functions #\n ##########################################################\n\n def _get_features_list_from_gui(self):\n \"\"\"\n Get the selected features from the list on \"Predict\" tab, and group it in a list.\n :return: List of features names to extract.\n \"\"\"\n features = []\n for index in range(main_window.FeaturesList.count()):\n if main_window.FeaturesList.item(index).checkState() == QtCore.Qt.Checked:\n features.append(main_window.FeaturesList.item(index).text())\n return features\n\n def cross_validation(self):\n \"\"\"\n Predict tab: \"Cross Validate\" button.\n Extract user's chosen prediction options from the gui_module, verify it, and start a new thread for the cross \n validation process.\n :return: void \n \"\"\"\n # extract prediction options from gui_module\n exclusion_category = main_window.ExclusionCriteriaComboBox.currentText()\n num_to_exclude = main_window.ExcludeLineEdit.text()\n iterations = main_window.IterationsLineEdit.text()\n features = self._get_features_list_from_gui()\n\n # verify the input\n if not features:\n Alert(\"Please choose at least 1 feature for extraction\")\n return\n if (not num_to_exclude.isdigit()) or int(num_to_exclude) < 1 or \\\n int(num_to_exclude) >= int(main_window.ItemsAmountLabel.text()):\n Alert(\"Exclusion amount must be larger than 0 and smaller than the total items count. Digits only.\")\n return\n if iterations != \"-1\":\n if (not iterations.isdigit()) or int(iterations) < 1:\n Alert(\"Cross validation iterations number must be digits only, -1 or larger than 1.\")\n return\n else:\n if int(num_to_exclude) != 1:\n Alert(\"Max iteration (-1) is only allowed when the exclusion amount is 1.\")\n return\n\n # begin\n self._start_prediction_thread(\"Cross Validation\", features=features, exclusion_category=exclusion_category,\n num_to_exclude=num_to_exclude, iterations=iterations)\n\n def fit_model(self):\n \"\"\"\n Predict tab: \"Fit Model\" button.\n Extract user's chosen features from the gui_module, verify, and start a new thread for the model fitting process.\n :return: void \n \"\"\"\n features = self._get_features_list_from_gui()\n if not features:\n Alert(\"Please choose at least 1 feature for extraction\")\n return\n self._start_prediction_thread(\"Fit Model\", features=features)\n\n def predict(self):\n \"\"\"\n Predict tab: \"Predict\" button.\n Extract user's chosen features from the gui_module, verify, and start a new thread for the prediction process.\n :return: void \n \"\"\"\n features = self._get_features_list_from_gui()\n if not features:\n Alert(\"Please choose at least 1 feature for extraction\")\n return\n self._start_prediction_thread(\"Predict\", features=features)\n\n def _start_prediction_thread(self, action, features=[], exclusion_category=\"\", num_to_exclude=\"\", iterations=\"\"):\n \"\"\"\n Create prediction thread instance, connect it to a signal, and start the threads run.\n :param action: The prediction action to preform.\n :param features: Selected features to extract.\n :param exclusion_category: column label from the behavioral data, will be used for trial grouping during learning.\n :param num_to_exclude: The number of groups that will be excluded during training (int).\n :param iterations: Maximum iterations number for the training session.\n :return: void\n \"\"\"\n self.prediction_thread = PredictionThread(self.program_manager, action, features=features,\n exclusion_category=exclusion_category, num_to_exclude=num_to_exclude,\n iterations=iterations)\n self.prediction_thread.alert.connect(self.raise_alert)\n self.prediction_thread.start()\n\n def save_model(self):\n \"\"\"\n Predict tab: \"Save Model\" button.\n Saves prediction model to a chosen .pkl file (pickle).\n :return: Void\n \"\"\"\n # Pop up file dialog to choose a path for saving\n options = QFileDialog.Options()\n path, _ = QFileDialog.getSaveFileName(self, \"Save Model\", \"\",\n \"Pickle files (*.pkl)\", options=options)\n # If a path was chosen, begin saving\n if path:\n main_window.ResultLable.setStyleSheet(\"color: rgb(0, 0, 0); font: 18pt 'Tahoma';\")\n main_window.ResultLable.setText(\"Saving Model...\")\n return_string = self.program_manager.save_ml_model(path)\n\n # Update gui_module with the saving results, fail/success.\n if return_string == \"Success\":\n main_window.ResultLable.setText(\"Model Saved.\")\n else:\n main_window.ResultLable.setStyleSheet(\"color: rgb(189, 189, 189); font: 24pt 'Tahoma';\")\n main_window.ResultLable.setText(\"Result...\")\n Alert(return_string)\n\n def load_model(self):\n \"\"\"\n Predict tab: \"Load Model\" button.\n Loads a trained prediction model from a chosen .pkl file (pickle).\n :return: Void\n \"\"\"\n # Pop up file dialog to choose the model to load\n options = QFileDialog.Options()\n options |= QFileDialog.ShowDirsOnly\n path, _ = QFileDialog.getOpenFileName(self, \"Choose a pickled model to load\", \"\",\n \"Pickle Files (*.pkl)\", options=options)\n # If a path was chosen, load the model\n if path:\n main_window.ResultLable.setStyleSheet(\"color: rgb(0, 0, 0); font: 18pt 'Tahoma';\")\n main_window.ResultLable.setText(\"Loading Model...\")\n return_string = self.program_manager.load_ml_model(path)\n\n # Update gui_module with the loading results, fail/success.\n if return_string == \"Success\":\n main_window.MLModelComboBox.clear()\n main_window.MLModelComboBox.addItems(self.program_manager.get_available_models())\n main_window.ResultLable.setText(\"Model List Updated\")\n else:\n main_window.ResultLable.setStyleSheet(\"color: rgb(189, 189, 189); font: 24pt 'Tahoma';\")\n main_window.ResultLable.setText(\"Result...\")\n Alert(return_string)\n\n def raise_alert(self, text):\n \"\"\"\n Linking method between prediction threads and gui_module Alert object (using signals).\n :param text: Text to present. \n :return: void\n \"\"\"\n Alert(text)\n\n def stop_thread(self):\n \"\"\"\n Predict tab: \"Cancel\" button.\n Stop prediction processing.\n :return: void\n \"\"\"\n main_window.ResultLable.setText(\"Canceling...\")\n self.program_manager.stop_prediction()\n\n def set_train_test_num(self):\n \"\"\"\n Predict tab: \"Exclusion Criteria\" option.\n When an exclusion criteria is chosen, this method is called to update the existing items amount label.\n Items amount is based on the chosen criteria.\n Sets a default items number in the exclude line edit, as 25% of the existing items.\n :return: void\n \"\"\"\n # Get existing items amount for the chosen criteria & set the gui_module label\n label = main_window.ExclusionCriteriaComboBox.currentText()\n total = self.program_manager.get_different_items_amount_by_label(label)\n main_window.ItemsAmountLabel.setText(str(total))\n # Set default exclude value\n exclude = int((total*25)/100)\n main_window.ExcludeLineEdit.setText(str(exclude))\n\n def set_ml_model(self):\n \"\"\"\n Predict tab: \"ML Model\" list.\n Load available models.\n :return: void\n \"\"\"\n try:\n # Load available models list. Will work only after the app is up and running\n self.program_manager.set_ml_model(main_window.MLModelComboBox.currentText())\n except:\n # app initialization isn't finished, and main_window doesn't exists yet\n pass\n\n def set_extractor_and_load_features(self):\n \"\"\"\n Predict tab: \"Feature Extractor\" list.\n Load available extractors.\n :return: void\n \"\"\"\n try:\n # load available extractors list. will work only after the app is up and running\n self.program_manager.set_extractor(main_window.ExtractorComboBox.currentText())\n\n # load available features list. will work only after extractors loading\n main_window.FeaturesList.clear()\n main_window.FeaturesList.addItems(self.program_manager.get_ml_features())\n for index in range(main_window.FeaturesList.count()):\n item = main_window.FeaturesList.item(index)\n item.setCheckState(QtCore.Qt.Unchecked)\n except:\n # app initialization isn't finished, and main_window doesn't exists yet\n pass\n\n def tab_changed(self):\n \"\"\"\n Tab change event activates this method.\n Disables the start tab.\n Updates \"Predict\" tab's lists (only on the first visit of that tab).\n :return: \n \"\"\"\n main_window.tabWidget.setTabEnabled(0, False)\n\n if (not self.predict_tab_visited) and main_window.tabWidget.tabText(main_window.tabWidget.currentIndex()) == \"Predict\":\n self.set_ml_model()\n self.set_extractor_and_load_features()\n self.predict_tab_visited = True\n\n\n# Global GUI information\nqtCreatorFile = \"EyePredict_gui.ui\"\nUi_MainWindow, QtBaseClass = uic.loadUiType(os.path.dirname(__file__) + \"/\" + qtCreatorFile)\n\n\nclass EyePredictApp(QtWidgets.QMainWindow, Ui_MainWindow):\n \"\"\"\n The application object class. Creates the Eye-predict GUI.\n \"\"\"\n\n def __init__(self):\n \"\"\"\n Application initiating method.\n \"\"\"\n # Open EyePredict application\n QtWidgets.QMainWindow.__init__(self)\n Ui_MainWindow.__init__(self)\n self.setupUi(self)\n\n # Activate buttons, load list options\n self.logic = Logic()\n self._connect()\n self._load_lists_items()\n\n def _connect(self):\n \"\"\"\n Connect gui_module functions (such as buttons, lists, etc) to logic methods.\n :return: Void\n \"\"\"\n # Main + Data\n self.BrowseButton.clicked.connect(self.logic.browse_experiment_info_file)\n self.ReloadButton.clicked.connect(self.logic.browse_experiment_info_file)\n self.ApplyButton.clicked.connect(self.logic.apply_filter_query)\n self.LoadButton.clicked.connect(self.logic.load_data_filter_query)\n self.SaveButton.clicked.connect(self.logic.save_data_filter_query)\n # Visualization\n self.ProcessingMethodSlider.valueChanged.connect(self.logic.set_processing_method)\n self.VisualizeButton.clicked.connect(self.logic.visualize)\n # ML\n self.MLModelComboBox.currentIndexChanged.connect(self.logic.set_ml_model)\n self.ExtractorComboBox.currentIndexChanged.connect(self.logic.set_extractor_and_load_features)\n self.CrossValidationButton.clicked.connect(self.logic.cross_validation)\n self.FitModelButton.clicked.connect(self.logic.fit_model)\n self.PredictButton.clicked.connect(self.logic.predict)\n self.SaveModelButton.clicked.connect(self.logic.save_model)\n self.LoadModelButton.clicked.connect(self.logic.load_model)\n self.ExclusionCriteriaComboBox.currentIndexChanged.connect(self.logic.set_train_test_num)\n self.tabWidget.currentChanged.connect(self.logic.tab_changed)\n self.CancelButton.clicked.connect(self.logic.stop_thread)\n self.CancelButton.hide()\n\n def _load_lists_items(self):\n \"\"\"\n Loads lists information.\n Lists content depends on available implementations, and so is due to changes.\n :return: void\n \"\"\"\n self.VisuTypeList.addItems(self.logic.program_manager.get_visu_types())\n self.MLModelComboBox.addItems(self.logic.program_manager.get_available_models())\n self.ExtractorComboBox.addItems(self.logic.program_manager.get_available_extractors())\n\n\ndef main():\n # Main. Program activation.\n global app, main_window\n app = QtWidgets.QApplication(sys.argv)\n main_window = EyePredictApp()\n main_window.show()\n sys.exit(app.exec_())\n","sub_path":"EyePredict/modules/gui_module/gui_logic.py","file_name":"gui_logic.py","file_ext":"py","file_size_in_byte":25898,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"218030375","text":"########\n# Copyright (c) 2015 GigaSpaces Technologies Ltd. All rights reserved\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# * See the License for the specific language governing permissions and\n# * limitations under the License.\n\nimport shutil\n\nfrom cloudify_rest_client.client import CloudifyClient\nfrom cloudify_rest_client.exceptions import CloudifyClientError\n\nfrom cosmo_tester.test_suites.test_security.security_test_base import \\\n SecurityTestBase\nfrom cosmo_tester.framework import util\n\n\nCUSTOM_AUTH_PROVIDER_PLUGIN = 'mock-auth-provider-with-no-userstore'\nPLUGINS_PROP_PATH = 'node_templates.manager.properties.cloudify.plugins'\n\n\nclass NoUserstoreTests(SecurityTestBase):\n\n def test_authentication_without_userstore(self):\n self.setup_secured_manager()\n self._assert_unauthorized_user_fails()\n\n def get_manager_blueprint_additional_props_override(self):\n src_plugin_dir = util.get_plugin_path(CUSTOM_AUTH_PROVIDER_PLUGIN)\n shutil.copytree(src_plugin_dir,\n self.test_manager_blueprint_path.dirname() /\n CUSTOM_AUTH_PROVIDER_PLUGIN)\n return {PLUGINS_PROP_PATH: self.get_plugins_settings()}\n\n def get_plugins_settings(self):\n return {\n 'user_custom_auth_provider': {\n 'source': CUSTOM_AUTH_PROVIDER_PLUGIN\n }\n }\n\n def get_authentication_providers_list(self):\n return [\n {\n 'implementation': 'mock_auth_provider_with_no_userstore'\n '.auth_without_userstore:AuthorizeUser1',\n 'name': 'password',\n 'properties': {\n 'dummy_param': 'dumdum'\n }\n }\n ]\n\n def get_userstore_drive(self):\n return ''\n\n def _assert_unauthorized_user_fails(self):\n client = CloudifyClient(host=self.env.management_ip,\n headers=util.get_auth_header(username='user2',\n password='pass2'))\n self.assertRaisesRegexp(CloudifyClientError, '401: user unauthorized',\n client.manager.get_status)\n","sub_path":"cosmo_tester/test_suites/test_security/no_userstore_tests.py","file_name":"no_userstore_tests.py","file_ext":"py","file_size_in_byte":2622,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"218634368","text":"# Author: Aaditya Maheshwari\n# Version: 0.0.1\n# File_name: longest_word.py\n\n# Copyright 2016 Aaditya Maheshwari\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport doctest\n\n\ndef longest_word(sentence):\n \"\"\"\n Prints the longest word in a sentence.\n >>> longest_word(\"Python is amazing\")\n amazing\n >>> longest_word(\"Python is cool\")\n Python\n >>> longest_word(\"I am the best not you\")\n best\n \"\"\"\n\n words = sentence.split(\" \")\n long_word = None\n word_idx = 0\n while word_idx < len(words):\n current_word = words[word_idx]\n\n if long_word is None:\n long_word = current_word\n elif len(long_word) < len(current_word):\n long_word = current_word\n word_idx += 1\n print(long_word)\n\nif __name__ == '__main__':\n user_input = input(\"Please enter a sentence/text: \")\n longest_word(user_input)\n doctest.testmod()\n","sub_path":"longest_word.py","file_name":"longest_word.py","file_ext":"py","file_size_in_byte":1397,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"181654341","text":"from django.conf.urls import url\nfrom django.urls import path , include\nfrom django.views.decorators.csrf import csrf_exempt\nfrom . import views\n\nurlpatterns = [\n path('', views.LeadListCreate.as_view()),\n path('auth/', include('djoser.urls')),\n path('auth/', include('djoser.urls.jwt')),\n path('/', views.DetailLead.as_view()),\n path('share', views.ShareView.as_view()),\n path('getAllDomains', views.SearchMultipledomain.as_view()),\n path('testSharing', views.TestSharingView.as_view()),\n path('updateJsonFile', views.UpdateJsonFile.as_view()),\n path('downloadEmails', views.DownloadEmailInCsv.as_view()),\n path('findervalidEmail', views.CreateEmailView.as_view()),\n]\n\n","sub_path":"Generation_2_lead/example/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":708,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"543696438","text":"import json\nimport re\nimport PublicLib.public as pfun\nimport time\n\ng_cnt = 0\n\n\ndef subStrToJson(data):\n if data == None:\n return False, None\n # 原因在于:字符串里用单引号来标识字符\n data = re.sub('\\'', '\\\"', data)\n data = re.sub('\\n', '', data)\n\n # 字符串 转 json\n try:\n data_json = json.loads(data)\n if not IsJsonFrame(data_json):\n return False, 'Error_Json'\n except:\n print(data)\n return False, data\n # addr = data_json['DataValue']['04A20208']\n # print(addr)\n return True, data_json\n\n\n# 自定义 json 格式判断\ndef IsJsonFrame(dictdata):\n if dictdata is None:\n return False\n\n if isinstance(dictdata, dict) is False:\n return False\n\n dictlist = ['Len', 'Cmd', 'SN', 'DataTime', 'CRC', 'DataValue']\n for k in dictlist:\n if k not in dictdata:\n return False\n return True\n\n\n# dict recv, dict send\n# dict expect answer\n# answer\n# answer result\n# int threshold\ndef JsonDealFrame(recvframe, senddata, answer):\n answer['answerresult'] = {}\n if not IsJsonFrame(recvframe) or not IsJsonFrame(senddata):\n answer['result'] = 'frame error'\n return\n\n # 接收帧 去除头部结构\n if \"recvData\" in recvframe:\n recvframe = recvframe[\"recvData\"]\n\n # 帧序号相同\n if recvframe['SN'] != senddata['SN']:\n answer['result'] = 'sn error'\n return\n\n # 控制字相同\n if recvframe['Cmd'] != senddata['Cmd']:\n answer['result'] = 'cmd error'\n return\n\n # 预估返回值相同\n threshold = answer['threshold']\n answer['answer'] = recvframe['DataValue'].copy()\n answer['answerresult'] = recvframe['DataValue'].copy()\n\n for i, j in recvframe['DataValue'].items():\n if i in answer['expectanswer']:\n # threshold 判断:\n # 0: 绝对相等\n # 1: 门限内相等\n # -1: 不判断\n if threshold == 0:\n if answer['expectanswer'][i] == j:\n answer['answerresult'][i] = 'ok'\n else:\n answer['answerresult'][i] = 'error'\n elif threshold == -1:\n answer['answerresult'][i] = 'ok'\n else:\n recvvalue = float(j)\n answervalue = float(answer['expectanswer'][i])\n if answervalue * (1 - threshold) <= recvvalue <= answervalue * (1 + threshold):\n answer['answerresult'][i] = 'ok'\n else:\n answer['answerresult'][i] = 'error'\n\n return\n\n\n# parm\n# key:cmd value:Read/Set/..\n# DataValue dict key1:value1 ... key n:value n\ndef JsonMakeFrame(parm):\n global g_cnt\n g_cnt = g_cnt + 1\n if g_cnt > 9999:\n g_cnt = 0\n\n if \"Cmd\" in parm and \"DataValue\" in parm:\n datatime = time.strftime(\"%y%m%d%H%M%S\", time.localtime())\n data = dict(Len=\"312\", Cmd=parm[\"Cmd\"], SN=str(g_cnt), DataTime=datatime, CRC=\"FFFF\", DataValue=parm[\"DataValue\"])\n\n # 计算CRC\n dv = str(parm[\"DataValue\"]).replace(' ', '')\n dv = \"0000\" + pfun.crc16str(0, dv[1:-1], False)\n data[\"CRC\"] = dv[-4:]\n\n # 计算��度\n data[\"Len\"] = str(len(str(data)) - 12)\n else:\n data = dict(frame = 'error')\n\n # 将python对象data转换json对象\n data_json = json.dumps(data, ensure_ascii=False)\n\n # 将json对象转换成python对象\n # data_python = json.loads(data_json)\n\n return data_json\n\n\n'''\ndef JsonMakeValue(DIlist):\n for i in DIlist:\n Value =\n'''\n\nif __name__ == '__main__':\n # 数据项和内容\n # DIList = ['05060101', '05060102', '05060103']\n # ValueList = ['000000.00', '123.14', '778899']\n DIList = ['04A00501']\n ValueList = ['594C#03#03BC#0001#0001CA910001CA5D0001C9F50001EAE30001BCC50001BC810001BCA10001C9CD0001C9A50001C97D0001AF130001C9550001C92D0001E8D30001C92B0001D4AB0000973D00006DC300000000000000000001E8C7000000000000000000000000000000000000000000000000000000000001C9290001CC670001F1F1200009A8']\n List = dict(zip(DIList, ValueList))\n\n MakeFramePara = {}\n MakeFramePara['Cmd'] = 'Set'\n MakeFramePara['DataValue'] = List\n\n # CRC16 IBM: E8FE\n\n # 元组转json\n # DIValue = json.loads(data_python)\n\n # json 转 字符串\n data_python = JsonMakeFrame(MakeFramePara)\n print(data_python)\n\n # 字符串 转 json\n data = json.loads(data_python)\n ret = IsJsonFrame(data)\n print(ret)\n if ret:\n # dict expect answer\n # answer\n # answer result\n # int threshold\n dictanswer = {'threshold': 0.1, 'expectanswer': List}\n JsonDealFrame(data, data, dictanswer)\n for k in dictanswer:\n print(dictanswer[k])\n\n List['05060102'] = '199.29'\n dictanswer = {'threshold': 0.1, 'expectanswer': List}\n JsonDealFrame(data, data, dictanswer)\n for k in dictanswer:\n print(dictanswer[k])\n\n # a = JsonParse(data)\n '''\n for key in a:\n print(key)\n # print(a.key())\n #for key in a.iterkeys():\n print(a.values())\n for value in a.values():\n print(value)\n '''\n","sub_path":"Protocol/ly_Json.py","file_name":"ly_Json.py","file_ext":"py","file_size_in_byte":5191,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"618242069","text":"\"\"\"\nAuthor : Kyungmin Lee (rekyungmin@gmail.com)\nDate : 05/19/2019\nDescrip: Perpective transform\n\"\"\"\n\nimport io\nfrom typing import Iterable\n\nimport cv2\nimport numpy as np\nfrom PIL import Image\n\n\ndef _transform(img: np.ndarray, coordinates: np.ndarray) -> np.ndarray:\n square_width = max(np.linalg.norm(coordinates[0] - coordinates[1]),\n np.linalg.norm(coordinates[2] - coordinates[3]))\n square_height = max(np.linalg.norm(coordinates[0] - coordinates[3]),\n np.linalg.norm(coordinates[1] - coordinates[2]))\n\n dst_coordinates = np.array([\n [0, 0],\n [square_width, 0],\n [square_width, square_height],\n [0, square_height]\n ], dtype=np.float32)\n\n transformation_matrix = cv2.getPerspectiveTransform(coordinates, dst_coordinates)\n return cv2.warpPerspective(img, transformation_matrix, dsize=(square_width, square_height))\n\n\ndef transform(bin_img: bytes, img_fmt: str, coordinates: Iterable[int]) -> bytes:\n if len(coordinates) != 4:\n raise TypeError('4 coordinates are required')\n\n pil_img = Image.open(io.BytesIO(bin_img))\n cv2_img = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)\n np_coordinates = np.array(coordinates, dtype=np.float32)\n\n transformed = _transform(cv2_img, np_coordinates)\n return cv2.imencode('.' + img_fmt, transformed)[1].tobytes()\n\n\nif __name__ == '__main__':\n print(cv2.__version__) # 4.1.0\n print(np.__version__) # 1.16.3\n print(Image.__version__) # 6.0.0\n","sub_path":"perspective.py","file_name":"perspective.py","file_ext":"py","file_size_in_byte":1508,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"548345910","text":"# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\nfrom django.conf import settings\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n ]\n\n operations = [\n migrations.CreateModel(\n name='Ad',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('create_time', models.DateTimeField(auto_now_add=True, verbose_name='Create time')),\n ('hits', models.PositiveIntegerField(default=0, verbose_name='Hits', blank=True)),\n ('title', models.CharField(max_length=500, verbose_name='Title')),\n ('description', models.TextField(verbose_name='Description')),\n ('author', models.ForeignKey(default=None, to=settings.AUTH_USER_MODEL)),\n ],\n options={\n 'ordering': ['-pk'],\n 'verbose_name': 'ad',\n 'verbose_name_plural': 'ads',\n },\n ),\n migrations.CreateModel(\n name='AdUserHit',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('time', models.DateTimeField(auto_now_add=True)),\n ('ad', models.ForeignKey(to='ads.Ad')),\n ('user', models.ForeignKey(to=settings.AUTH_USER_MODEL)),\n ],\n options={\n 'ordering': ['-pk'],\n },\n ),\n ]\n","sub_path":"ads/migrations/0001_initial.py","file_name":"0001_initial.py","file_ext":"py","file_size_in_byte":1624,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"173579603","text":"# Copyright 2020 Ryan Barry\n# See LICENSE file for licensing details.\n\nimport hashlib\nimport unittest\nimport yaml\nimport json\n\nfrom unittest.mock import patch\nfrom ops.testing import Harness\nfrom charm import GrafanaCharm\n\nMINIMAL_CONFIG = {\"grafana-image-path\": \"grafana/grafana\", \"port\": 3000}\n\nMINIMAL_DATASOURCES_CONFIG = {\n \"apiVersion\": 1,\n \"datasources\": [],\n \"deleteDatasources\": [],\n}\n\nBASIC_DATASOURCES = [\n {\n \"access\": \"proxy\",\n \"isDefault\": \"false\",\n \"name\": \"juju_test-model_abcdef_prometheus_0\",\n \"orgId\": \"1\",\n \"type\": \"prometheus\",\n \"url\": \"http://1.2.3.4:1234\",\n }\n]\n\nSOURCE_DATA = {\n \"model\": \"test-model\",\n \"model_uuid\": \"abcdef\",\n \"application\": \"prometheus\",\n \"type\": \"prometheus\",\n}\n\nDASHBOARD_CONFIG = {\n \"apiVersion\": 1,\n \"providers\": [\n {\n \"name\": \"Default\",\n \"type\": \"file\",\n \"options\": {\"path\": \"dashboards\"},\n }\n ],\n}\n\n\nDB_CONFIG = {\n \"type\": \"mysql\",\n \"host\": \"1.1.1.1:3306\",\n \"name\": \"mysqldb\",\n \"user\": \"grafana\",\n \"password\": \"grafana\",\n}\n\n\nDATABASE_CONFIG_INI = \"\"\"[database]\ntype = mysql\nhost = 1.1.1.1:3306\nname = mysqldb\nuser = grafana\npassword = grafana\nurl = mysql://grafana:grafana@1.1.1.1:3306/mysqldb\n\n\"\"\"\n\n\ndef datasource_config(config):\n config_yaml = config[1]\n config_dict = yaml.safe_load(config_yaml)\n return config_dict\n\n\ndef dashboard_config(config):\n config_yaml = config[1]\n config_dict = yaml.safe_load(config_yaml)\n return config_dict\n\n\ndef global_config(config):\n config_yaml = config[1]\n config_dict = yaml.safe_load(config_yaml)\n return config_dict[\"global\"]\n\n\ndef cli_arg(plan, cli_opt):\n plan_dict = plan.to_dict()\n args = plan_dict[\"services\"][\"grafana\"][\"command\"].split()\n for arg in args:\n opt_list = arg.split(\"=\")\n if len(opt_list) == 2 and opt_list[0] == cli_opt:\n return opt_list[1]\n if len(opt_list) == 1 and opt_list[0] == cli_opt:\n return opt_list[0]\n return None\n\n\nclass TestCharm(unittest.TestCase):\n def setUp(self):\n self.harness = Harness(GrafanaCharm)\n self.addCleanup(self.harness.cleanup)\n self.harness.begin()\n\n self.minimal_datasource_hash = hashlib.sha256(\n str(yaml.dump(MINIMAL_DATASOURCES_CONFIG)).encode(\"utf-8\")\n ).hexdigest()\n\n @patch(\"ops.testing._TestingPebbleClient.push\")\n def test_datasource_config_is_updated_by_raw_grafana_source_relation(self, push):\n self.harness.set_leader(True)\n\n # check datasource config is updated when a grafana-source joins\n rel_id = self.harness.add_relation(\"grafana-source\", \"prometheus\")\n self.harness.update_relation_data(\n rel_id, \"prometheus\", {\"grafana_source_data\": json.dumps(SOURCE_DATA)}\n )\n self.harness.add_relation_unit(rel_id, \"prometheus/0\")\n self.harness.update_relation_data(\n rel_id, \"prometheus/0\", {\"grafana_source_host\": \"1.2.3.4:1234\"}\n )\n\n config = push.call_args[0]\n self.assertEqual(\n datasource_config(config).get(\"datasources\"), BASIC_DATASOURCES\n )\n\n @patch(\"ops.testing._TestingPebbleClient.push\")\n def test_datasource_config_is_updated_by_grafana_source_removal(self, push):\n self.harness.set_leader(True)\n\n rel_id = self.harness.add_relation(\"grafana-source\", \"prometheus\")\n self.harness.update_relation_data(\n rel_id, \"prometheus\", {\"grafana_source_data\": json.dumps(SOURCE_DATA)}\n )\n self.harness.add_relation_unit(rel_id, \"prometheus/0\")\n self.harness.update_relation_data(\n rel_id, \"prometheus/0\", {\"grafana_source_host\": \"1.2.3.4:1234\"}\n )\n\n config = push.call_args[0]\n self.assertEqual(\n datasource_config(config).get(\"datasources\"), BASIC_DATASOURCES\n )\n\n rel = self.harness.charm.framework.model.get_relation(\"grafana-source\", rel_id)\n self.harness.charm.on[\"grafana-source\"].relation_departed.emit(rel)\n\n config = push.call_args[0]\n self.assertEqual(datasource_config(config).get(\"datasources\"), [])\n self.assertEqual(\n datasource_config(config).get(\"deleteDatasources\"),\n [{\"name\": \"juju_test-model_abcdef_prometheus_0\", \"orgId\": 1}],\n )\n\n @patch(\"ops.testing._TestingPebbleClient.push\")\n def test_config_is_updated_with_database_relation(self, push):\n self.harness.set_leader(True)\n\n rel_id = self.harness.add_relation(\"database\", \"mysql\")\n self.harness.add_relation_unit(rel_id, \"mysql/0\")\n self.harness.update_relation_data(\n rel_id,\n \"mysql\",\n DB_CONFIG,\n )\n\n config = push.call_args_list[0][0][1]\n self.assertEqual(config, DATABASE_CONFIG_INI)\n\n @patch(\"ops.testing._TestingPebbleClient.push\")\n def test_dashboard_path_is_initialized(self, push):\n self.harness.set_leader(True)\n\n self.harness.charm.init_dashboard_provisioning(\"dashboards\")\n\n config = push.call_args[0]\n self.assertEqual(dashboard_config(config), DASHBOARD_CONFIG)\n","sub_path":"tests/test_charm.py","file_name":"test_charm.py","file_ext":"py","file_size_in_byte":5182,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"333844279","text":"import collections\nimport collections.abc\nimport copy\nimport csv\nimport itertools\nimport json\nimport os\n\nimport attr\nimport entmax\nimport enum\nimport torch\nimport torch.nn.functional as F\n\nfrom ratsql.models import abstract_preproc\nfrom ratsql.models import attention\nfrom ratsql.models import variational_lstm\nfrom ratsql.models.nl2code.decoder import NL2CodeDecoderPreproc\nfrom ratsql.models.head_corner.infer_tree_traversal import InferenceTreeTraversal\nfrom ratsql.models.head_corner.train_tree_traversal import TrainTreeTraversal\nfrom ratsql.models.head_corner.tree_traversal import TreeTraversal\nfrom ratsql.utils import registry\nfrom ratsql.utils import serialization\nfrom ratsql.utils import vocab\n\n\ndef lstm_init(device, num_layers, hidden_size, *batch_sizes):\n init_size = batch_sizes + (hidden_size,)\n if num_layers is not None:\n init_size = (num_layers,) + init_size\n init = torch.zeros(*init_size, device=device)\n return (init, init)\n\n\ndef maybe_stack(items, dim=None):\n to_stack = [item for item in items if item is not None]\n if not to_stack:\n return None\n elif len(to_stack) == 1:\n return to_stack[0].unsqueeze(dim)\n else:\n return torch.stack(to_stack, dim)\n\n\ndef accumulate_logprobs(d, keys_and_logprobs):\n for key, logprob in keys_and_logprobs:\n existing = d.get(key)\n if existing is None:\n d[key] = logprob\n else:\n d[key] = torch.logsumexp(\n torch.stack((logprob, existing), dim=0),\n dim=0)\n\n\ndef get_field_presence_info(ast_wrapper, node, field_infos):\n present = []\n for field_info in field_infos:\n field_value = node.get(field_info.name)\n is_present = field_value is not None and field_value != []\n\n maybe_missing = field_info.opt or field_info.seq\n is_builtin_type = field_info.type in ast_wrapper.primitive_types\n\n if maybe_missing and is_builtin_type:\n # TODO: make it possible to deal with \"singleton?\"\n present.append(is_present and type(field_value).__name__)\n elif maybe_missing and not is_builtin_type:\n present.append(is_present)\n #elif not maybe_missing and field_info.type in [\"table\", \"column\"]: ## added this condition...\n # assert is_present\n # present.append(True)\n elif not maybe_missing and is_builtin_type:\n present.append(type(field_value).__name__)\n elif not maybe_missing and not is_builtin_type:\n assert is_present\n present.append(True)\n return tuple(present)\n\n\ndef rule_match(string_lhs, rule_string, rule):\n \"\"\" Check if string '->' rule form matches with a rule \"\"\"\n if isinstance(rule[1], str):\n return rule_string == rule[0]+\" -> \"+rule[1]+\"/NULL\"\n elif isinstance(rule[1], tuple):\n return string_lhs == rule[0]\n else:\n return string_lhs == rule[0]\n\n\ndef get_rule_string_from_node(node_type, child_node, ast_wrapper):\n if isinstance(child_node, list):\n return node_type, len(child_node)\n elif node_type in ast_wrapper.product_types:\n return None\n elif not node_type:\n return None\n else:\n return node_type, child_node[\"_type\"]\n\n\n@attr.s\nclass PredictPreterminal:\n ttype = attr.ib()\n goal_type = attr.ib()\n\n def __dict__(self):\n return {\"ttype\": self.ttype,\n \"goal_type\": self.goal_type,\n \"class\": \"PredictPreterminal\"}\n\n@attr.s\nclass ExpandUp:\n rule = attr.ib()\n goal_type = attr.ib()\n\n def __dict__(self):\n return {\"rule\": self.rule,\n \"goal_type\": self.goal_type,\n \"class\": \"ExpandUp\"}\n\n@attr.s\nclass Point:\n ttype = attr.ib()\n value = attr.ib()\n\n def __dict__(self):\n return {\"value\": self.value,\n \"ttype\": self.ttype,\n \"class\": \"Point\"}\n\n\n\ndef get_rule_match_indices(string_lhs, rule_string, rule_index):\n return [rule_index[rule] for rule in rule_index if rule_match(string_lhs, rule_string, rule)]\n\n@registry.register('decoder', 'HeadCorner')\nclass HeadCornerDecoder(torch.nn.Module):\n Preproc = NL2CodeDecoderPreproc\n\n class Handler:\n handlers = {}\n\n @classmethod\n def register_handler(cls, func_type):\n if func_type in cls.handlers:\n raise RuntimeError(f\"{func_type} handler is already registered\")\n\n def inner_func(func):\n cls.handlers[func_type] = func.__name__\n return func\n\n return inner_func\n\n class State(enum.Enum):\n EXPAND_UP = 0\n PREDICT_HEAD_CORNER = 1\n POINT = 2\n\n def __init__(\n self,\n device,\n preproc,\n grammar_path,\n rule_emb_size=128,\n node_embed_size=64,\n # TODO: This should be automatically inferred from encoder\n enc_recurrent_size=256,\n recurrent_size=256,\n dropout=0.,\n desc_attn='bahdanau',\n copy_pointer=None,\n multi_loss_type='logsumexp',\n sup_att=None,\n use_align_mat=False,\n use_align_loss=False,\n enumerate_order=False,\n loss_type=\"softmax\"):\n super().__init__()\n self._device = device\n self.preproc = preproc\n self.ast_wrapper = preproc.ast_wrapper\n self.terminal_vocab = preproc.vocab\n self.preproc.primitive_types.append(\"singleton\")\n\n if self.preproc.use_seq_elem_rules:\n self.node_type_vocab = vocab.Vocab(\n sorted(self.preproc.primitive_types) +\n sorted(self.ast_wrapper.custom_primitive_types) +\n sorted(self.preproc.sum_type_constructors.keys()) +\n sorted(self.preproc.field_presence_infos.keys()) +\n sorted(self.preproc.seq_lengths.keys()),\n special_elems=())\n else:\n self.node_type_vocab = vocab.Vocab(\n sorted(self.preproc.primitive_types) +\n sorted(self.ast_wrapper.custom_primitive_types) +\n sorted(self.ast_wrapper.sum_types.keys()) +\n sorted(self.ast_wrapper.singular_types.keys()) +\n sorted(self.preproc.seq_lengths.keys()),\n special_elems=())\n\n self.all_rules, self.rules_index, self.parent_to_preterminal, self.preterminal_mask, self.preterminal_debug, \\\n self.preterminal_types, self.parent_to_hc, self.hc_table, self.hc_debug, self.parent_to_head, \\\n self.parent_to_rule = self.compute_rule_masks(grammar_path)\n\n\n # json.dump(dict(self.parent_to_preterminal), open('data/spider/head-corner-glove,cv_link=true/p.json'))\n #json.dump({\"parent_to_preterminal\": dict(self.parent_to_preterminal),\n # \"preterminal_mask\": dict(self.preterminal_mask),\n # \"parent_to_hc\": {key: sorted(list(self.parent_to_hc[key])) for key in self.parent_to_hc},\n # \"hc_table\": {key: dict(self.hc_table[key]) for key in self.hc_table},\n # \"parent_to_head\": dict(self.parent_to_head),\n # \"node_type_vocab_e2i\": dict(self.node_type_vocab.elem_to_id),\n # \"node_type_vocab_i2e\": dict(self.node_type_vocab.id_to_elem),\n # # \"terminal_vocab\": self.terminal_vocab,\n # # \"rules_index\": self.rules_index,\n # \"parent_to_rule\": dict(self.parent_to_rule),\n # },\n # open('data/spider/head-corner-glove,cv_link=true/head_corner_elems.json', 'w'))\n\n self.rule_emb_size = rule_emb_size\n self.node_emb_size = node_embed_size\n self.enc_recurrent_size = enc_recurrent_size\n self.recurrent_size = recurrent_size\n\n self.use_align_mat = use_align_mat\n self.use_align_loss = use_align_loss\n self.enumerate_order = enumerate_order\n\n if use_align_mat:\n from ratsql.models.spider import spider_dec_func\n self.compute_align_loss = lambda *args: \\\n spider_dec_func.compute_align_loss(self, *args)\n self.compute_pointer_with_align = lambda *args: \\\n spider_dec_func.compute_pointer_with_align_head_corner(self, *args)\n\n self.state_update = variational_lstm.RecurrentDropoutLSTMCell(\n input_size=self.rule_emb_size * 2 + self.enc_recurrent_size + self.recurrent_size * 2 + self.node_emb_size,\n hidden_size=self.recurrent_size,\n dropout=dropout)\n\n self.attn_type = desc_attn\n if desc_attn == 'bahdanau':\n self.desc_attn = attention.BahdanauAttention(\n query_size=self.recurrent_size,\n value_size=self.enc_recurrent_size,\n proj_size=50)\n elif desc_attn == 'mha':\n self.desc_attn = attention.MultiHeadedAttention(\n h=8,\n query_size=self.recurrent_size,\n value_size=self.enc_recurrent_size)\n elif desc_attn == 'mha-1h':\n self.desc_attn = attention.MultiHeadedAttention(\n h=1,\n query_size=self.recurrent_size,\n value_size=self.enc_recurrent_size)\n elif desc_attn == 'sep':\n self.question_attn = attention.MultiHeadedAttention(\n h=1,\n query_size=self.recurrent_size,\n value_size=self.enc_recurrent_size)\n self.schema_attn = attention.MultiHeadedAttention(\n h=1,\n query_size=self.recurrent_size,\n value_size=self.enc_recurrent_size)\n else:\n # TODO: Figure out how to get right sizes (query, value) to module\n self.desc_attn = desc_attn\n self.sup_att = sup_att\n self.rule_logits = torch.nn.Sequential(\n torch.nn.Linear(self.recurrent_size, self.rule_emb_size),\n torch.nn.Tanh(),\n torch.nn.Linear(self.rule_emb_size, len(self.rules_index)))\n self.rule_embedding = torch.nn.Embedding(\n num_embeddings=len(self.rules_index),\n embedding_dim=self.rule_emb_size)\n\n self.gen_logodds = torch.nn.Linear(self.recurrent_size, 1)\n self.terminal_logits = torch.nn.Sequential(\n torch.nn.Linear(self.recurrent_size, self.rule_emb_size),\n torch.nn.Tanh(),\n torch.nn.Linear(self.rule_emb_size, len(self.terminal_vocab)))\n self.terminal_embedding = torch.nn.Embedding(\n num_embeddings=len(self.terminal_vocab),\n embedding_dim=self.rule_emb_size)\n if copy_pointer is None:\n self.copy_pointer = attention.BahdanauPointer(\n query_size=self.recurrent_size,\n key_size=self.enc_recurrent_size,\n proj_size=50)\n else:\n # TODO: Figure out how to get right sizes (query, key) to module\n self.copy_pointer = copy_pointer\n if multi_loss_type == 'logsumexp':\n self.multi_loss_reduction = lambda logprobs: -torch.logsumexp(logprobs, dim=1)\n elif multi_loss_type == 'mean':\n self.multi_loss_reduction = lambda logprobs: -torch.mean(logprobs, dim=1)\n\n self.pointers = torch.nn.ModuleDict()\n self.pointer_action_emb_proj = torch.nn.ModuleDict()\n for pointer_type in self.preproc.grammar.pointers:\n self.pointers[pointer_type] = attention.ScaledDotProductPointer(\n query_size=self.recurrent_size,\n key_size=self.enc_recurrent_size)\n self.pointer_action_emb_proj[pointer_type] = torch.nn.Linear(\n self.enc_recurrent_size, self.rule_emb_size)\n\n self.node_type_embedding = torch.nn.Embedding(\n num_embeddings=len(self.node_type_vocab),\n embedding_dim=self.node_emb_size)\n\n # TODO batching\n self.zero_rule_emb = torch.zeros(1, self.rule_emb_size, device=self._device)\n self.zero_recurrent_emb = torch.zeros(1, self.recurrent_size, device=self._device)\n if loss_type == \"softmax\":\n self.xent_loss = torch.nn.CrossEntropyLoss(reduction='none')\n elif loss_type == \"entmax\":\n self.xent_loss = entmax.entmax15_loss\n elif loss_type == \"sparsemax\":\n self.xent_loss = entmax.sparsemax_loss\n elif loss_type == \"label_smooth\":\n self.xent_loss = self.label_smooth_loss\n\n self.goals = None\n self.head_corners = None\n self.operation = None\n\n def label_smooth_loss(self, X, target, smooth_value=0.1):\n if self.training:\n logits = torch.log_softmax(X, dim=1)\n size = X.size()[1]\n one_hot = torch.full(X.size(), smooth_value / (size - 1)).to(X.device)\n one_hot.scatter_(1, target.unsqueeze(0), 1 - smooth_value)\n loss = F.kl_div(logits, one_hot, reduction=\"batchmean\")\n return loss.unsqueeze(0)\n else:\n return torch.nn.functional.cross_entropy(X, target, reduction=\"none\")\n\n @classmethod\n def _calculate_rules(cls, preproc):\n offset = 0\n\n all_rules = []\n rules_mask = {}\n\n # Rules of the form:\n # expr -> Attribute | Await | BinOp | BoolOp | ...\n # expr_seq_elem -> Attribute | Await | ... | Template1 | Template2 | ...\n for parent, children in sorted(preproc.sum_type_constructors.items()):\n assert parent not in rules_mask\n rules_mask[parent] = (offset, offset + len(children))\n offset += len(children)\n all_rules += [(parent, child) for child in children]\n\n # Rules of the form:\n # FunctionDef\n # -> identifier name, arguments args\n # | identifier name, arguments args, stmt* body\n # | identifier name, arguments args, expr* decorator_list\n # | identifier name, arguments args, expr? returns\n # ...\n # | identifier name, arguments args, stmt* body, expr* decorator_list, expr returns\n for name, field_presence_infos in sorted(preproc.field_presence_infos.items()):\n assert name not in rules_mask\n rules_mask[name] = (offset, offset + len(field_presence_infos))\n offset += len(field_presence_infos)\n all_rules += [(name, presence) for presence in field_presence_infos]\n\n # Rules of the form:\n # stmt* -> stmt\n # | stmt stmt\n # | stmt stmt stmt\n for seq_type_name, lengths in sorted(preproc.seq_lengths.items()):\n assert seq_type_name not in rules_mask\n rules_mask[seq_type_name] = (offset, offset + len(lengths))\n offset += len(lengths)\n all_rules += [(seq_type_name, i) for i in lengths]\n\n return all_rules, rules_mask\n\n def compute_rule_masks(self, grammar_path):\n\n # paths for head-corner settings\n path_to_preterminals = os.path.join(grammar_path, \"preterminals.csv\")\n path_to_head_map = os.path.join(grammar_path, \"rule_to_head.csv\")\n\n # goal to preterminals\n preterminal_map = {}\n preterminal_masks = {}\n preterminal_debug = {}\n preterminal_types = set()\n with open(path_to_preterminals, 'r') as csv_file:\n elems = csv.reader(csv_file)\n for goal, prets in elems:\n prets = set([s.strip() for s in prets.strip().split(\",\") if s != \"\"])\n preterminal_types = preterminal_types.union(prets)\n\n all_rules = list(self.preproc.all_rules + tuple([(\"\", \"**MATCH**\")]) + tuple([(\"\", p) for p in preterminal_types]))\n rules_index = {v: idx for idx, v in enumerate(all_rules)}\n\n with open(path_to_preterminals, 'r') as csv_file:\n elems = csv.reader(csv_file)\n for goal, prets in elems:\n prets = [s.strip() for s in prets.strip().split(\",\") if s != \"\"]\n mask_ids = sorted([rules_index[(\"\", p)] for p in prets])\n preterminal_map[goal] = prets\n preterminal_masks[goal] = mask_ids\n preterminal_debug[goal] = set(mask_ids)\n\n # rule to head\n rule_to_head = {}\n parent_to_rule = collections.defaultdict(list)\n head_to_rule = collections.defaultdict(list)\n parent_to_head = collections.defaultdict(list)\n with open(path_to_head_map, 'r') as csv_file:\n elems = csv.reader(csv_file)\n for rule_type, rule, head in elems:\n rule_to_head[rule] = int(head.strip())-1\n if \" -> \" in rule:\n lhs, rhs = rule.split(\" -> \")\n else:\n lhs, rhs = rule.strip().rstrip(\" ->\"), \"\"\n\n if len(rhs):\n split_rhs = [tuple(elem.strip().split(\"/\")) for elem in rhs.split(\",\")]\n parent_to_head[lhs] += [split_rhs[rule_to_head[rule]]]\n head_to_rule[parent_to_head[lhs][-1][0]] += [(lhs, rule)]\n parent_to_rule[lhs] += [[elem.strip() for elem in rhs.split(\",\")]]\n else:\n parent_to_head[lhs] = []\n parent_to_rule[lhs] = []\n\n seq_types = set([lhs for lhs, rhs in rules_index if lhs.endswith(\"*\")])\n for stype in seq_types:\n head = stype.rstrip(\"*\")\n parent_to_head[stype] = [(head, \"NULL\")]\n head_to_rule[head] += [(stype, stype + \" -> \" + head + \"/NULL\")]\n\n # derive head corners for goal nonterminals\n head_corners = {key: set([v[0] for v in value]) for key, value in parent_to_head.items()}\n def dfs(parent, table, visited=set()):\n for head in table[parent]:\n visited.add(parent)\n if head in visited:\n table[parent] = table[parent].union(table[head])\n else:\n dfs(head, table, visited)\n table[parent] = table[parent].union(table[head])\n for key in preterminal_map:\n dfs(key, head_corners)\n\n # now get head corner table -- set of grammar rules available\n # given a head and a goal state\n head_corner_table = {}\n hc_debug = {}\n for goal_state in preterminal_masks:\n head_corner_table[goal_state] = collections.defaultdict(set)\n hc_debug[goal_state] = collections.defaultdict(set)\n for head in head_corners[goal_state]:\n for parent, rule in head_to_rule.get(head):\n if parent in head_corners[goal_state]:\n head_corner_table[goal_state][head].add(rule)\n hc_debug[goal_state][head] = set()\n elif parent == goal_state:\n head_corner_table[goal_state][head].add(rule)\n hc_debug[goal_state][head] = set()\n # given a rule, get the set of indices in rule vocab corresponding to it\n for key_1 in head_corner_table:\n for key_2 in head_corner_table[key_1]:\n mask = []\n for rule in head_corner_table[key_1][key_2]:\n lhs, _ = rule.split(\" ->\")\n mask += get_rule_match_indices(lhs, rule, rules_index)\n head_corner_table[key_1][key_2] = sorted(mask)\n hc_debug[key_1][key_2] = set(mask)\n\n return all_rules, rules_index, preterminal_map, preterminal_masks, preterminal_debug, preterminal_types,\\\n head_corners, head_corner_table, hc_debug, parent_to_head, parent_to_rule\n\n def fetch_head_from_node(self, node):\n parent_type = node[\"_type\"]\n\n name_to_type = {elem[1]: elem[0] for elem in self.parent_to_head[parent_type]}\n for key in node.keys():\n if key != \"_type\":\n if key in name_to_type:\n return key, name_to_type[key]\n return None, []\n\n def construct_oracle_sequence(self, tree_state, oracle_sequence, goal_type=None):\n\n goal_was_false = not goal_type\n\n root_type, node = tree_state\n\n if isinstance(node, list): ## you've hit the child of an aggregator, visit each of the children\n if goal_was_false:\n root_type += \"*\"\n goal_type = root_type\n\n is_sum_type = root_type[:-1] in self.ast_wrapper.sum_types\n\n for i, elem in enumerate(node):\n if i == 0:\n if not is_sum_type:\n self.construct_oracle_sequence((None, elem), oracle_sequence, goal_type=goal_type)\n # new here\n else:\n self.construct_oracle_sequence((root_type[:-1], elem), oracle_sequence, goal_type=goal_type)\n # here, check if the parent is a sum type constructor, if so, add an extra expand up\n #if is_sum_type: ### commenting out for now\n # oracle_sequence.append(ExpandUp(rule=(root_type[:-1], elem[\"_type\"]), goal_type=goal_type))\n rule = get_rule_string_from_node(root_type, node, self.ast_wrapper)\n oracle_sequence.append(ExpandUp(rule=rule, goal_type=goal_type))\n else:\n if not is_sum_type:\n self.construct_oracle_sequence((None, elem), oracle_sequence, goal_type=None)\n else:\n self.construct_oracle_sequence((root_type[:-1], elem), oracle_sequence, goal_type=None)\n # here, check if the parent is a sum type constructor, if so, replace match with ExpandUP, then **MATCH**\n #if is_sum_type:\n # ## oracle_sequence[-1] = ExpandUp(rule=(root_type[:-1], elem[\"_type\"]), goal_type=goal_type)\n # oracle_sequence.append(ExpandUp(rule=\"**MATCH**\", goal_type=root_type[:-1]))\n\n if goal_was_false:\n oracle_sequence.append(ExpandUp(rule=\"**MATCH**\", goal_type=goal_type))\n\n elif isinstance(node, dict): ## you're dealing with a Constructor or a ProductType\n\n node_type = node[\"_type\"]\n\n if len(node) == 1:\n try:\n assert goal_was_false\n except:\n print(node)\n raise AssertionError\n\n oracle_sequence.append(PredictPreterminal(ttype=node_type,\n goal_type=root_type))\n oracle_sequence.append(ExpandUp(rule=(root_type, node_type),\n goal_type=root_type))\n oracle_sequence.append(ExpandUp(rule=\"**MATCH**\",\n goal_type=root_type))\n else:\n parent_rule = get_rule_string_from_node(root_type, node, self.ast_wrapper)\n if goal_was_false:\n if parent_rule:\n goal_type = root_type\n else:\n goal_type = node[\"_type\"]\n\n head_field_name, head_field_type = self.fetch_head_from_node(node)\n self.construct_oracle_sequence((head_field_type, node[head_field_name]), oracle_sequence,\n goal_type=goal_type)\n\n # fetch the right rule form for this dict\n type_info = self.ast_wrapper.singular_types[node_type]\n present = get_field_presence_info(self.ast_wrapper, node, type_info.fields)\n rule = (node['_type'], tuple(present))\n oracle_sequence.append(ExpandUp(rule=rule, goal_type=goal_type))\n\n leftover_children = [field for (field, p) in zip(self.ast_wrapper.singular_types[node_type].fields,\n present) if p and field.name != head_field_name]\n\n for child in leftover_children:\n self.construct_oracle_sequence((child.type, node[child.name]), oracle_sequence, goal_type=None)\n\n if parent_rule:\n oracle_sequence.append(ExpandUp(rule=parent_rule, goal_type=goal_type))\n\n # see what happens when you change the position of this\n if goal_was_false:\n if root_type:\n oracle_sequence.append(ExpandUp(rule=\"**MATCH**\", goal_type=root_type))\n else:\n oracle_sequence.append(ExpandUp(rule=\"**MATCH**\", goal_type=node_type))\n\n else:\n # something going on with singletons that we want to fix\n #if root_type not in [\"table\", \"column\"]:\n # root_type = str(type(node))\n if goal_was_false:\n goal_type = root_type\n # predicting a preterminal, pointing to its value. match if appropriate.\n oracle_sequence.append(PredictPreterminal(ttype=root_type,\n goal_type=goal_type))\n oracle_sequence.append(Point(ttype=root_type,\n value=node))\n if goal_was_false:\n oracle_sequence.append(ExpandUp(rule=\"**MATCH**\", goal_type=goal_type))\n\n def augment_data_with_oracle(self, zipped_data):\n encoder_data, decoder_data = zip(*zipped_data)\n\n oracle_sequences = []\n for elem in decoder_data:\n oracle_sequence = self.compute_oracle_sequence(elem)\n oracle_sequences.append(oracle_sequence)\n\n return zip(encoder_data, decoder_data, oracle_sequences)\n\n def begin_inference(self, desc_enc, example):\n traversal = InferenceTreeTraversal(self, desc_enc, example)\n choices = traversal.step(None)\n return traversal, choices\n\n def compute_loss(self, enc_input, example, desc_enc, debug):\n mle_loss = self.compute_mle_loss(enc_input, example, desc_enc, debug)\n\n if self.use_align_loss:\n align_loss = self.compute_align_loss(desc_enc, example[0])\n return mle_loss + align_loss\n return mle_loss\n\n def init_state(self, enc_input, example, desc_enc):\n self.goals.append((\"sql\", None))\n self.head_corners = []\n self.operation = self.State.PREDICT_HEAD_CORNER\n\n def compute_mle_loss(self, enc_input, example, desc_enc, debug):\n\n _, oracle = example\n\n # copy this over because pop is destructive\n oracle = oracle[:]\n\n #print(\"##########\")\n #print(json.dumps(example[0].tree, indent=2))\n #print(oracle)\n\n traversal = TrainTreeTraversal(self, desc_enc)\n traversal.step(None)\n while oracle:\n action = oracle.pop(0)\n if isinstance(action, PredictPreterminal):\n index = self.rules_index[(\"\", action.ttype)]\n goal_type = action.goal_type\n assert traversal.current_state == TreeTraversal.State.PRETERMINAL_APPLY\n assert traversal.goals[-1].node_type == goal_type\n try:\n assert index in self.preterminal_debug[goal_type]\n except:\n # print(action)\n raise AssertionError\n traversal.step(index)\n elif isinstance(action, ExpandUp):\n hc = traversal.head_corners[-1]\n if action.rule == \"**MATCH**\":\n index = self.rules_index[(\"\", action.rule)]\n else:\n index = self.rules_index[action.rule]\n # made change here this could be made stricter by giving head corner items goal types\n assert index in self.hc_debug[traversal.goals[-1].node_type][hc.root_type]\n traversal.step(index)\n else: # point\n assert traversal.current_state in [TreeTraversal.State.POINTER_APPLY,\n TreeTraversal.State.GEN_TOKEN_APPLY]\n if action.ttype not in [\"table\", \"column\"]:\n # we're doing conventional pointing (which we handle as strings)\n value = action.value\n field_value_split = self.preproc.grammar.tokenize_field_value(value) + [\n vocab.EOS]\n for value in field_value_split:\n traversal.step(value)\n else:\n pointer_map = desc_enc.pointer_maps.get(action.ttype)\n value = action.value\n\n if pointer_map:\n values = pointer_map[value]\n traversal.step(values[0], values[1:])\n else:\n traversal.step(value)\n\n loss = torch.sum(torch.stack(tuple(traversal.loss), dim=0), dim=0)\n\n hc = traversal.head_corners[-1]\n converted = traversal.convert_head_corner_to_node_rep(hc)\n\n t1 = json.dumps(converted, indent=2, sort_keys=True)\n t2 = json.dumps(example[0].tree, indent=2, sort_keys=True)\n\n # print(t1)\n # print(t2)\n assert t1 == t2\n\n return loss\n\n def compute_loss_from_all_ordering(self, enc_input, example, desc_enc, debug):\n def get_permutations(node):\n def traverse_tree(node):\n nonlocal permutations\n if isinstance(node, (list, tuple)):\n p = itertools.permutations(range(len(node)))\n permutations.append(list(p))\n for child in node:\n traverse_tree(child)\n elif isinstance(node, dict):\n for node_name in node:\n traverse_tree(node[node_name])\n\n permutations = []\n traverse_tree(node)\n return permutations\n\n def get_perturbed_tree(node, permutation):\n def traverse_tree(node, parent_type, parent_node):\n if isinstance(node, (list, tuple)):\n nonlocal permutation\n p_node = [node[i] for i in permutation[0]]\n parent_node[parent_type] = p_node\n permutation = permutation[1:]\n for child in node:\n traverse_tree(child, None, None)\n elif isinstance(node, dict):\n for node_name in node:\n traverse_tree(node[node_name], node_name, node)\n\n node = copy.deepcopy(node)\n traverse_tree(node, None, None)\n return node\n\n orig_tree = example.tree\n permutations = get_permutations(orig_tree)\n products = itertools.product(*permutations)\n loss_list = []\n for product in products:\n tree = get_perturbed_tree(orig_tree, product)\n example.tree = tree\n loss = self.compute_mle_loss(enc_input, example, desc_enc)\n loss_list.append(loss)\n example.tree = orig_tree\n loss_v = torch.stack(loss_list, 0)\n return torch.logsumexp(loss_v, 0)\n\n def _desc_attention(self, prev_state, desc_enc):\n # prev_state shape:\n # - h_n: batch (=1) x emb_size\n # - c_n: batch (=1) x emb_size\n query = prev_state[0]\n if self.attn_type != 'sep':\n return self.desc_attn(query, desc_enc.memory, attn_mask=None)\n else:\n question_context, question_attention_logits = self.question_attn(query, desc_enc.question_memory)\n schema_context, schema_attention_logits = self.schema_attn(query, desc_enc.schema_memory)\n return question_context + schema_context, schema_attention_logits\n\n def _tensor(self, data, dtype=None):\n return torch.tensor(data, dtype=dtype, device=self._device)\n\n def _index(self, vocab, word):\n return self._tensor([vocab.index(word)])\n\n def _update_state(\n self,\n node_type,\n prev_state,\n prev_action_emb,\n prec_h,\n prec_action_emb,\n prec_goal,\n desc_enc):\n # desc_context shape: batch (=1) x emb_size\n desc_context, attention_logits = self._desc_attention(prev_state, desc_enc)\n # node_type_emb shape: batch (=1) x emb_size\n node_type_emb = self.node_type_embedding(\n self._index(self.node_type_vocab, node_type))\n\n state_input = torch.cat(\n (\n prev_action_emb, # a_{t-1}: rule_emb_size\n desc_context, # c_t: enc_recurrent_size\n prec_h, # s_{p_t}: recurrent_size\n prec_action_emb, # a_{p_t}: rule_emb_size\n prec_goal, # recurrent_size (goal node) CHANGE MADE HERE\n node_type_emb, # n_{f-t}: node_emb_size\n ),\n dim=-1)\n new_state = self.state_update(\n # state_input shape: batch (=1) x (emb_size * 5)\n state_input, prev_state)\n return new_state, attention_logits\n\n def apply_rule(\n self,\n node_type,\n prev_state,\n prev_action_emb,\n prec_h,\n prec_action_emb,\n prec_goal,\n desc_enc):\n\n new_state, attention_logits = self._update_state(\n node_type, prev_state, prev_action_emb, prec_h, prec_action_emb, prec_goal, desc_enc)\n # output shape: batch (=1) x emb_size\n output = new_state[0]\n # rule_logits shape: batch (=1) x num choices\n rule_logits = self.rule_logits(output)\n\n return output, new_state, rule_logits\n\n def rule_infer(self, node_type, goal_type, rule_logits, state):\n rule_logprobs = torch.nn.functional.log_softmax(rule_logits, dim=-1)\n\n ## changed both from inquire to apply -- shouldn't make a difference?\n if state == TreeTraversal.State.EXPAND_UP_APPLY:\n assert goal_type\n rule_ids = self.hc_table[goal_type][node_type]\n if goal_type == node_type:\n rule_ids = rule_ids.union(set([self.rules_index[(\"\", \"**MATCH**\")]]))\n rule_ids = sorted(list(rule_ids))\n elif state == TreeTraversal.State.PRETERMINAL_APPLY:\n rule_ids = self.preterminal_mask[node_type]\n else:\n print(\"Rule infer should only be evoked for expand up and predict preterminal.\")\n raise NotImplementedError\n\n return list(zip(rule_ids, [rule_logprobs[0, idx] for idx in rule_ids]))\n\n\n def gen_token(\n self,\n node_type,\n prev_state,\n prev_action_emb,\n prec_h,\n prec_action_emb,\n goal_h,\n desc_enc):\n\n new_state, attention_logits = self._update_state(\n node_type, prev_state, prev_action_emb, prec_h, prec_action_emb, goal_h, desc_enc)\n # output shape: batch (=1) x emb_size\n output = new_state[0]\n\n # gen_logodds shape: batch (=1)\n gen_logodds = self.gen_logodds(output).squeeze(1)\n\n return new_state, output, gen_logodds\n\n def gen_token_loss(\n self,\n output,\n gen_logodds,\n token,\n desc_enc):\n # token_idx shape: batch (=1), LongTensor\n token_idx = self._index(self.terminal_vocab, token)\n # action_emb shape: batch (=1) x emb_size\n action_emb = self.terminal_embedding(token_idx)\n\n # +unk, +in desc: copy\n # +unk, -in desc: gen (an unk token)\n # -unk, +in desc: copy, gen\n # -unk, -in desc: gen\n # gen_logodds shape: batch (=1)\n desc_locs = desc_enc.find_word_occurrences(token)\n if desc_locs:\n # copy: if the token appears in the description at least once\n # copy_loc_logits shape: batch (=1) x desc length\n copy_loc_logits = self.copy_pointer(output, desc_enc.memory)\n copy_logprob = (\n # log p(copy | output)\n # shape: batch (=1)\n torch.nn.functional.logsigmoid(-gen_logodds) -\n # xent_loss: -log p(location | output)\n # TODO: sum the probability of all occurrences\n # shape: batch (=1)\n self.xent_loss(copy_loc_logits, self._tensor(desc_locs[0:1])))\n else:\n copy_logprob = None\n\n # gen: ~(unk & in desc), equivalent to ~unk | ~in desc\n if token in self.terminal_vocab or copy_logprob is None:\n token_logits = self.terminal_logits(output)\n # shape:\n gen_logprob = (\n # log p(gen | output)\n # shape: batch (=1)\n torch.nn.functional.logsigmoid(gen_logodds) -\n # xent_loss: -log p(token | output)\n # shape: batch (=1)\n self.xent_loss(token_logits, token_idx))\n else:\n gen_logprob = None\n\n # loss should be -log p(...), so negate\n loss_piece = -torch.logsumexp(\n maybe_stack([copy_logprob, gen_logprob], dim=1),\n dim=1)\n return loss_piece\n\n def token_infer(self, output, gen_logodds, desc_enc):\n # Copy tokens\n # log p(copy | output)\n # shape: batch (=1)\n copy_logprob = torch.nn.functional.logsigmoid(-gen_logodds)\n copy_loc_logits = self.copy_pointer(output, desc_enc.memory)\n # log p(loc_i | copy, output)\n # shape: batch (=1) x seq length\n copy_loc_logprobs = torch.nn.functional.log_softmax(copy_loc_logits, dim=-1)\n # log p(loc_i, copy | output)\n copy_loc_logprobs += copy_logprob\n\n log_prob_by_word = {}\n # accumulate_logprobs is needed because the same word may appear\n # multiple times in desc_enc.words.\n accumulate_logprobs(\n log_prob_by_word,\n zip(desc_enc.words, copy_loc_logprobs.squeeze(0)))\n\n # Generate tokens\n # log p(~copy | output)\n # shape: batch (=1)\n gen_logprob = torch.nn.functional.logsigmoid(gen_logodds)\n token_logits = self.terminal_logits(output)\n # log p(v | ~copy, output)\n # shape: batch (=1) x vocab size\n token_logprobs = torch.nn.functional.log_softmax(token_logits, dim=-1)\n # log p(v, ~copy| output)\n # shape: batch (=1) x vocab size\n token_logprobs += gen_logprob\n\n accumulate_logprobs(\n log_prob_by_word,\n ((self.terminal_vocab[idx], token_logprobs[0, idx]) for idx in range(token_logprobs.shape[1])))\n\n return list(log_prob_by_word.items())\n\n def compute_pointer(\n self,\n node_type,\n prev_state,\n prev_action_emb,\n parent_h,\n parent_action_emb,\n desc_enc):\n new_state, attention_logits = self._update_state(\n node_type, prev_state, prev_action_emb, parent_h, parent_action_emb, desc_enc)\n # output shape: batch (=1) x emb_size\n output = new_state[0]\n # pointer_logits shape: batch (=1) x num choices\n pointer_logits = self.pointers[node_type](\n output, desc_enc.pointer_memories[node_type])\n\n return output, new_state, pointer_logits, attention_logits\n\n def pointer_infer(self, node_type, logits):\n logprobs = torch.nn.functional.log_softmax(logits, dim=-1)\n return list(zip(\n # TODO batching\n range(logits.shape[1]),\n logprobs[0]))","sub_path":"ratsql/models/head_corner/decoder.py","file_name":"decoder.py","file_ext":"py","file_size_in_byte":39595,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"61145079","text":"import os\nfrom setuptools import setup, find_packages\n\nBASEDIR = os.path.dirname(os.path.abspath(__file__))\nVERSION = open(os.path.join(BASEDIR, 'VERSION')).read().strip()\n\nBASE_DEPENDENCIES = [\n 'wf-database-connection-honeycomb>=0.2.1',\n 'bluepy>=1.3.0',\n 'click>=7.0'\n]\n\n# allow setup.py to be run from any path\nos.chdir(os.path.normpath(BASEDIR))\n\nsetup(\n name='wf-shoe-sensor',\n packages=find_packages(),\n version=VERSION,\n include_package_data=True,\n description='Python package for communicating with Wildflower shoe sensors through BLE interface',\n long_description=open('README.md').read(),\n url='https://github.com/WildflowerSchools/shoe_sensor',\n author='Theodore Quinn',\n author_email='ted.quinn@wildflowerschools.org',\n install_requires=BASE_DEPENDENCIES,\n keywords=['bluetooth'],\n classifiers=[\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n ]\n)\n","sub_path":"setup.py","file_name":"setup.py","file_ext":"py","file_size_in_byte":1043,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"17890509","text":"#!/usr/bin/python\n\nimport os\nimport socket\nimport sys\nimport glob\nimport select\nimport mysql.connector\nimport re\n\t\t\nconfig = {\n'user': 'root',\n'password': '',\n'host': '127.0.0.1',\n'port': '3306',\n'database': 'maison',\n'raise_on_warnings': True,}\n\ndef decodeTrame(encode):\n\treponse = \"null\"\n\tconn = mysql.connector.connect(**config)\n\tcursor = conn.cursor()\n\tencode = re.split('\\n', encode)\n\tif(encode[0] == '1'): # RECHERCHE UTILISATEUR\n\t\tcursor.execute(\"\"\"SELECT NIV FROM utilisateur WHERE NOM=%s AND MDP=%s\"\"\", (encode[1], encode[2], ))\n\t\trows = cursor.fetchall()\n\t\tconn.close()\n\t\tnum = list(sum(rows, ()))\n\t\tif(rows != None):\n\t\t\treponse = num[0]\n\t\t\treturn str(reponse).encode()\n\tif(encode[0] == '2'): # NOMBRE PIECES\n\t\tcursor.execute(\"SELECT COUNT(*) FROM pieces\")\n\t\trows = cursor.fetchall()\n\t\tconn.close()\n\t\tnum = list(sum(rows, ()))\n\t\tif(rows != None):\n\t\t\treponse = num[0]\n\t\t\treturn str(reponse).encode()\n\tif(encode[0] == '3'): # TOUTE LES PIECES\n\t\tcursor.execute(\"SELECT NOM FROM pieces\")\n\t\trows = cursor.fetchall()\n\t\tconn.close()\n\t\tnum = list(sum(rows, ()))\n\t\treponse = \"\"\n\t\tfor row in num:\n\t\t\treponse += row + '\\n'\n\t\tif(rows != None):\n\t\t\treturn reponse.encode()\n\tif(encode[0] == '4'): # NOMBRE COMPOSANT\n\t\tcursor.execute(\"\"\"SELECT COUNT(*) FROM composant WHERE PIECE=%s\"\"\", (encode[1], ))\n\t\trows = cursor.fetchall()\n\t\tconn.close()\n\t\tnum = list(sum(rows, ()))\n\t\tif(rows != None):\n\t\t\treponse = num[0]\n\t\t\treturn str(reponse).encode()\n\tif(encode[0] == '5'): # TOUS LES COMPOSANT\n\t\tcursor.execute(\"\"\"SELECT NOM, GA, NIV FROM composant WHERE PIECE=%s\"\"\", (encode[1], ))\n\t\trows = cursor.fetchall()\n\t\tconn.close()\n\t\tnum = list(sum(rows, ()))\n\t\treponse = \"\"\n\t\tfor row in num:\n\t\t\treponse += str(row) + '\\n'\n\t\tprint(reponse)\n\t\tif(rows != None):\n\t\t\treturn reponse.encode()\n\tif(encode[0] == '6'): # MODIFICATION MDP UTILISATEUR\n\t\ttry:\n\t\t\tcursor.execute(\"\"\"UPDATE utilisateur SET MDP=%s WHERE NOM=%s\"\"\", (encode[2], encode[1], ))\n\t\t\treponse = \"1\"\n\t\texcept (MySQLdb.Error, MySQLdb.Warning) as e:\n\t\t\treponse = \"0\"\n\t\tconn.close()\n\tif(encode[0] == '7'): # MODIFICATION MDP & NIV UTILISATEUR\n\t\ttry:\n\t\t\tcursor.execute(\"\"\"UPDATE utilisateur SET MDP=%s, NIV=%s WHERE NOM=%s\"\"\", (encode[2], encode[3], encode[1], ))\n\t\t\treponse = \"1\"\n\t\texcept (MySQLdb.Error, MySQLdb.Warning) as e:\n\t\t\treponse = \"0\"\n\t\tconn.close()\n\tif(encode[0] == '8'): # RECHERCHE NIV UTILISATEUR\n\t\ttry:\n\t\t\tcursor.execute(\"\"\"UPDATE utilisateur SET NIV=%s WHERE NOM=%s\"\"\", (encode[2], encode[1], ))\n\t\t\treponse = \"1\"\n\t\texcept (MySQLdb.Error, MySQLdb.Warning) as e:\n\t\t\treponse = \"0\"\n\t\tconn.close()\n\tif(encode[0] == '9'): # AJOUT UTILISATEUR\n\t\ttry:\n\t\t\tcursor.execute(\"\"\"INSERT INTO utilisateur (NOM, MDP, NIV) VALUES (%s, %s, %s)\"\"\", (encode[1], encode[2], encode[3], ))\n\t\t\treponse = \"1\"\n\t\texcept (MySQLdb.Error, MySQLdb.Warning) as e:\n\t\t\treponse = \"0\"\n\t\tconn.close()\n\tif(encode[0] == '10'): # DEJA USER\n\t\tcursor.execute(\"\"\"SELECT ID FROM utilisateur WHERE NOM=%s\"\"\", (encode[1], ))\n\t\trows = cursor.fetchall()\n\t\tconn.close()\n\t\tnum = list(sum(rows, ()))\n\t\treponse = \"\"\n\t\tfor row in num:\n\t\t\treponse += str(row) + '\\n'\n\t\tprint(reponse)\n\t\tif(rows != None):\n\t\t\treturn reponse.encode()\n\treturn reponse.encode()\n\nhote = 'localhost' # 192.168.1.104\nport = 3176\nconnexion_principale = None\nconnexion_principale = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nconnexion_principale.bind((hote, port))\nconnexion_principale.listen(3) #nombre de connexion simultané\nprint(\"Le serveur ecoute {}\".format(port))\nwhile 1:\n serveur_lance = True\n print(\"\\t\\t+++++ DEBUT +++++\")\n clients_connectes = []\n while serveur_lance:\n connexions_demandees, wlist, xlist = select.select([connexion_principale],[], [], 0.05)\n for connexion in connexions_demandees:\n connexion_avec_client, infos_connexion = connexion.accept()\n clients_connectes.append(connexion_avec_client)\n clients_a_lire = []\n try:\n clients_a_lire, wlist, xlist = select.select(clients_connectes,[], [], 0.05)\n except select.error:\n pass\n else:\n for client in clients_a_lire:\n # Client est de type socket\n msg_recu = client.recv(2048)\n msg_recu = msg_recu.decode()\n print(\"Recu -> {0}\".format(msg_recu) + \"\\tDepuis -> {0}\".format(infos_connexion))\n client.send(decodeTrame(msg_recu))\n serveur_lance = False\n client.close()\n print(\"Communication fini avec success...\")\n print(\"\\t\\t+++++ END +++++\")\nfor client in clients_connectes:\n client.close()\nconnexion_principale.close()\n","sub_path":"Serveur/serveur.py","file_name":"serveur.py","file_ext":"py","file_size_in_byte":4541,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"298522878","text":"from .designdocs import (\n db_model_token,\n db_definition,\n db_data,\n db_data_item,\n)\n\n\nclass Database(object):\n \"\"\"Object handling all the connections to the couchdb server.\"\"\"\n\n def __init__(self, db):\n self.db = db\n self.save = db.save\n\n def get_definition(self, model_name):\n \"\"\"Get the scheme definition from the model_name.\n\n :param model_name: the name of the definition you want to retrieve\n\n \"\"\"\n results = db_definition(self.db)[model_name]\n for result in results:\n return result.value\n\n def get_definition_token(self, model_name):\n \"\"\"Return the token associated with a definition.\n\n :param model_name: the name of the definition you want to retrieve\n\n \"\"\"\n return db_model_token(self.db)[model_name]\n\n def get_data(self, model_name):\n \"\"\"Get the definition of the model data.\n\n :param model_name: the name of the definition you want to retrieve\n\n \"\"\"\n return db_data(self.db)[model_name]\n\n def get_data_item(self, model_name, data_item_id):\n \"\"\"Get a data-item and checks it behaves to the requested model\"\"\"\n key = [str(data_item_id), str(model_name)]\n data_items = db_data_item(self.db)[key]\n if len(data_items):\n data_item = data_items.rows[0]\n return data_item\n return None\n\n def create_data(self, model_name, data, data_id=None):\n \"\"\"Create a data to a model_name.\"\"\"\n if data_id:\n data_doc = self.db[data_id]\n data_id = data_doc.id\n else:\n data_doc = {\n 'type': 'data',\n 'model_name': model_name,\n }\n data_doc['data'] = data\n\n if data_id:\n self.db[data_id] = data_doc\n else:\n data_id, rev = self.db.save(data_doc)\n\n return data_id\n","sub_path":"daybed/backends/couchdb/database.py","file_name":"database.py","file_ext":"py","file_size_in_byte":1900,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"357765417","text":"#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) 2020-2021 Pod Group Ltd.\n#\n# Authors:\n# - Kostiantyn Chertov \n# - J. Félix Ontañón \n\nimport socket\nimport time\nfrom tlspsk import TLSClientSession\n\nfrom enosim.logger import logger\nfrom enosim.iccid import iccid2bin\n\n\ndef __tlssession(server, port, sim_key, sim_iccid, request):\n quit = False\n sock = None\n\n def callback(data):\n nonlocal quit, sock\n logger.info(data)\n if data == b\"bye\\n\":\n quit = True\n\n psk = bytes.fromhex(sim_key)\n session = TLSClientSession(\n server_names=server, psk=psk, psk_label=bytes.fromhex(sim_iccid), data_callback=callback, psk_only=True\n )\n\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n sock.connect((server, port))\n client_hello = session.pack_client_hello()\n logger.debug('client hello: {0}'.format(client_hello.hex()))\n sock.sendall(client_hello)\n\n parser = session.parser()\n step = 0\n logger.info('TLS1.3-PSK session established. Initialising operation.')\n\n while not quit:\n step += 1\n server_data = sock.recv(10*4096)\n if len(server_data) > 0:\n logger.debug(\"step {0}: {1}\".format(step, server_data.hex()))\n parser.send(server_data)\n data = parser.read()\n if data:\n logger.debug(\"data: {0}\".format(data.hex()))\n sock.sendall(data)\n quit = True\n\n data = bytes(request, 'utf-8')\n\n logger.debug('request: {0}'.format(data))\n app_data = session.pack_application_data(data)\n logger.debug('app_data: {0}'.format(app_data.hex()))\n\n sock.sendall(app_data)\n time.sleep(1)\n resp = sock.recv(4096)\n logger.debug('resp: {0}'.format(resp.hex()))\n parser.send(resp)\n\n time.sleep(0.5)\n resp = sock.recv(4096)\n logger.debug('resp: {0}'.format(resp.hex()))\n parser.send(resp)\n\n sock.sendall(session.pack_close())\n sock.close()\n logger.debug('done!')\n\n\ndef simulate_ztp(server, port, sim_key, sim_iccid, device_id):\n nibbled_iccid = iccid2bin(sim_iccid).hex()\n request = 'GET /v1/config/{0}?iccid={1} HTTP/1.1\\x0d\\x0a\\x0d\\x0a'.format(device_id, nibbled_iccid)\n logger.debug('request: {}'.format(request))\n\n return __tlssession(server, port, sim_key, nibbled_iccid, request)\n\n\ndef simulate_stc(server, port, sim_key, sim_iccid, device_id, json_data):\n data_length = len(json_data)\n nibbled_iccid = iccid2bin(sim_iccid).hex()\n request = 'POST /v1/data/{0}?iccid={1} HTTP/1.1\\x0d\\x0a'.format(device_id, nibbled_iccid) +\\\n 'Host: pod.iot.platform\\x0d\\x0a' +\\\n 'Content-Length: {0:d}\\x0d\\x0a\\x0d\\x0a{1}'.format(data_length, json_data)\n logger.debug('request: {}'.format(request))\n\n return __tlssession(server, port, sim_key, nibbled_iccid, request)\n","sub_path":"enosim/tlsclient.py","file_name":"tlsclient.py","file_ext":"py","file_size_in_byte":2866,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"92637063","text":"import threading\n\nimport tensorflow as tf\nfrom keras.models import load_model\nfacial_expression_model_path = 'models/face_expression.hdf5'\nfall_model_path = 'models/fall_detection.hdf5'\n\nclass MyModel:\n def __init__(self,path):\n self.path=path\n self.model_graph=tf.Graph()\n self.model_sess=tf.Session(graph=self.model_graph)\n self.model=self.load()\n\n\n def load(self):\n with self.model_sess.as_default():\n with self.model_graph.as_default():\n return load_model(self.path)\n\n\n def model_predict(self,roi):\n with self.model_sess.as_default():\n with self.model_graph.as_default():\n return self.model.predict(roi)\n\n\n\n\n","sub_path":"mymodel.py","file_name":"mymodel.py","file_ext":"py","file_size_in_byte":715,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"615920835","text":"from common_module import *\r\n\r\nclass ClubMonaco_Extractor(object):\r\n def extract_categories(self, text):\r\n log_display('ClubMonaco - extracting categories')\r\n soup = get_soup(text)\r\n categories = []\r\n divisions = []\r\n division_items = soup.select('.menuLinks')\r\n for division in division_items:\r\n if 'with-flyout' in division.a['class']:\r\n division_name = division.a.text.strip()\r\n for class_item in division.a['class']:\r\n if class_item.find('flyout-') != -1:\r\n division_flyout_class = class_item\r\n divisions.append({'name' : division_name, 'flyout_class' : division_flyout_class })\r\n base_url = 'http://www.clubmonaco.ca'\r\n pattern_category_id = re.compile('categoryId=(\\d+)')\r\n for division in divisions:\r\n division_menu_div = soup.select('#%s' % division['flyout_class'])[0]\r\n division_menu_item_groups = division_menu_div.select('.col')\r\n for menu_group in division_menu_item_groups:\r\n group_name = string.capwords(menu_group.h3.text.strip())\r\n group_subitems = menu_group.select('.leftnav-group > li')\r\n for subitem in group_subitems:\r\n subitem_name = subitem.a.text.strip()\r\n subitem_url = base_url + subitem.a['href'] \r\n category_id = pattern_category_id.findall(subitem_url)[0]\r\n category_name = division['name'] + ' : ' + group_name + \" : \" + subitem_name\r\n categories.append({'name' : category_name, 'url' : subitem_url})\r\n return categories\r\n def extract_category_productlist(self, text, category_id):\r\n log_display('ClubMonaco - extracting products for category')\r\n base_url = 'http://www.clubmonaco.ca'\r\n soup = get_soup(text)\r\n pattern_product_id = re.compile('productId=(\\d+)')\r\n products = []\r\n product_items = soup.select('.product')\r\n for product in product_items:\r\n product_url = base_url + product.select('.product-details')[0].dt.a['href']\r\n product_id = pattern_product_id.findall(product_url)[0]\r\n product_name = string.capwords(product.select('.product-details')[0].dt.a.text)\r\n products.append({'category_id' : category_id, 'id' : product_id, 'name' : product_name, 'url' : product_url})\r\n return products\r\n def extract_product(self, text, base_product_url):\r\n log_display('ClubMonaco - extracting product')\r\n soup = get_soup(text)\r\n swatches = {}\r\n swatch_items = soup.select('.swatches li')\r\n if not swatch_items:\r\n return None\r\n for item in swatch_items:\r\n swatch_name = item.text.strip()\r\n swatch_url = item.img['src']\r\n swatch_id = item['value']\r\n swatch_product_url = base_product_url + '&color=' + swatch_id\r\n swatches.update({swatch_id: {'name' : swatch_name, 'url' : swatch_url, 'product_url' : swatch_product_url}})\r\n # get prices for swatches\r\n pattern_skus = re.compile('skusGen\\.push\\([^;]+\\);', re.DOTALL)\r\n pattern_color_name = re.compile('color: \"([^\"]+)\"')\r\n pattern_color_id = re.compile('colorCode: \"([^\"]+)\"')\r\n pattern_price_regular = re.compile(\"baseUnformatted: ([\\d.]+)\")\r\n pattern_price_sale = re.compile(\"currentUnformatted: ([\\d.]+)\")\r\n sku_items = pattern_skus.findall(text)\r\n for item in sku_items:\r\n if len(item.replace('skusGen.push(', '').replace(');', '').strip()) == 0:\r\n # empty - may be old sku?\r\n continue\r\n color_id = pattern_color_id.findall(item)[0].strip()\r\n if color_id not in swatches.keys():\r\n # empty - may be unavailable ? \r\n continue\r\n color_name = pattern_color_name.findall(item)[0].strip()\r\n price_regular = float(pattern_price_regular.findall(item)[0].strip())\r\n price_sale = float(pattern_price_sale.findall(item)[0].strip())\r\n if price_sale == price_regular:\r\n price_sale = 0\r\n swatches[color_id].update({'price_regular' : price_regular, 'price_sale' : price_sale, 'pics' : []})\r\n # get description\r\n description = ''\r\n if soup.select('#tab-details') and soup.select('#tab-details')[0].p:\r\n description = soup.select('#tab-details')[0].p.text\r\n # grab pics\r\n pattern_color_slice = re.compile('colorSliceValuesGen\\.push\\([^;]+\\);', re.DOTALL)\r\n pattern_photo_color_id = re.compile('colorId: \"([^\"]+)\"')\r\n pattern_photo_color_name = re.compile('colorName: \"([^\"]+)\"')\r\n pattern_photo_urls = re.compile('enhancedImageURL: \"([^\"]+)\"')\r\n photo_section_items = pattern_color_slice.findall(text)\r\n for photo_item in photo_section_items:\r\n color_id = pattern_photo_color_id.findall(photo_item)[0].strip()\r\n if not color_id in swatches.keys():\r\n continue \r\n color_name = pattern_photo_color_name.findall(photo_item)[0].strip()\r\n photo_urls = pattern_photo_urls.findall(photo_item)\r\n swatches[color_id].update({'pics' : photo_urls})\r\n return {'description' : description, 'swatches' : swatches }\r\n\r\nclass ClubMonaco_Processor(object):\r\n def __init__(self, retailer_db, extractor):\r\n self.retailer_db = retailer_db\r\n self.extractor = extractor\r\n self.extraction_id = scraper_get_extraction_id()\r\n self.page_pause = 3\r\n self.name = 'clubmonaco'\r\n def insert_categories(self, extraction_id, categories):\r\n log_display('ClubMonaco_Processor - inserting categories')\r\n query = 'replace into clubmonaco_category(extraction_id, id, name, url) value (%s, %s, %s, %s)'\r\n params = []\r\n for idx, category in enumerate(categories):\r\n params.append((extraction_id, idx, category['name'], category['url']))\r\n log_display('ClubMonaco_Processor - %s categories to insert' % len(params))\r\n connection = self.retailer_db.get_connection()\r\n cursor = connection.cursor()\r\n cursor.executemany(query, params)\r\n connection.commit()\r\n cursor.close()\r\n def insert_category_productlist(self, extraction_id, products):\r\n log_display('ClubMonaco_Processor - inserting category product list')\r\n query = 'replace into clubmonaco_category_productlist(extraction_id, category_id, product_id, name, url) value (%s, %s, %s, %s, %s)'\r\n params = []\r\n for product in products:\r\n params.append((extraction_id, product['category_id'], product['id'], product['name'], product['url']))\r\n log_display('ClubMonaco_Processor - %s category products to insert' % len(params))\r\n connection = self.retailer_db.get_connection()\r\n cursor = connection.cursor()\r\n cursor.executemany(query, params)\r\n connection.commit()\r\n cursor.close()\r\n def insert_product(self, extraction_id, product):\r\n log_display('ClubMonaco_Processor - inserting product')\r\n product_query = 'replace into clubmonaco_product(extraction_id, id, name, description, url, color_id, color_name, color_pic, price_regular, price_sale) value (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'\r\n photo_query = 'replace into clubmonaco_productphoto(extraction_id, product_id, color_id, url) value (%s, %s, %s, %s)'\r\n product_params = []\r\n photo_params = [] \r\n for swatch_id, swatch in product['swatches'].items():\r\n product_params.append((extraction_id, product['id'], product['name'], product['description'], swatch['product_url'], swatch_id, swatch['name'], swatch['url'], swatch['price_regular'], swatch['price_sale']))\r\n for photo_url in swatch['pics']:\r\n photo_params.append((extraction_id, product['id'], swatch_id, photo_url))\r\n log_display('ClubMonaco_Processor - inserting %s product photos' % len(photo_params))\r\n connection = self.retailer_db.get_connection()\r\n cursor = connection.cursor()\r\n cursor.executemany(product_query, product_params)\r\n cursor.executemany(photo_query, photo_params)\r\n connection.commit()\r\n cursor.close()\r\n def run(self):\r\n if self.extraction_id is None:\r\n log_display('ClubMonaco - inserting fresh extraction run')\r\n self.extraction_id = self.retailer_db.scrape_get_extraction_id_for_run(self.name)\r\n log_display('ClubMonaco - extraction id is %s' % self.extraction_id)\r\n home_url = 'http://www.clubmonaco.ca/'\r\n log_display('ClubMonaco - getting homepage for categories via url %s' % home_url)\r\n text = get_page(home_url)\r\n categories = self.extractor.extract_categories(text)\r\n self.insert_categories(self.extraction_id, categories)\r\n identified_products = {}\r\n for cidx, category in enumerate(categories):\r\n page_no = 0\r\n while True:\r\n time.sleep(self.page_pause)\r\n page_no = page_no + 1\r\n url = '%s&size=99&page=%s' % (category['url'], page_no)\r\n log_display('ClubMonaco - getting category product list via url %s' % url)\r\n text = get_page(url)\r\n products = self.extractor.extract_category_productlist(text, cidx)\r\n self.insert_category_productlist(self.extraction_id, products)\r\n for product in products:\r\n if product['id'] not in identified_products.keys():\r\n identified_products[product['id']] = product\r\n soup = get_soup(text)\r\n if not soup.select('.next'):\r\n break\r\n for pid, product in identified_products.items():\r\n time.sleep(self.page_pause)\r\n log_display('ClubMonaco - getting product via url %s' % product['url'])\r\n text = get_page(product['url'])\r\n product_info = self.extractor.extract_product(text, product['url'])\r\n if product_info is None:\r\n continue\r\n product.update(product_info)\r\n self.insert_product(self.extraction_id, product)\r\n self.retailer_db.scrape_update_endtime_for_extraction_id(self.extraction_id)\r\n\r\nconfig = {\r\n 'RetailerDB' : {\r\n 'user' : 'Akon',\r\n 'pwd' : 'Ef351egUAQ-jZ-V',\r\n 'host' : '104.236.57.214',\r\n 'db' : 'retailers'\r\n }\r\n}\r\nretailerDB = MySQLAdapter(config['RetailerDB'])\r\nextractor = ClubMonaco_Extractor()\r\nprocessor = ClubMonaco_Processor(retailerDB, extractor)\r\n\r\nprocessor.run()\r\n\r\n\r\n\r\n\r\n\r\n","sub_path":"scripts/f21/clubmonaco.py","file_name":"clubmonaco.py","file_ext":"py","file_size_in_byte":9835,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"118182977","text":"#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport sys, time, os, struct, random\nimport asyncio\n\n\ndef data_coll(_rtt):\n global recv_i, sum_rtt\n\n recv_i += 1\n sum_rtt += _rtt\n\n\ndef len_data_head(data):\n len_idh = 1 # data[0]\n len_version = 4 # data[1:5]\n len_DCIL = 1 # data[5]\n len_DCID = data[len_idh + len_version] # data[6:6+len_DCIL]\n len_SCIL = 1 # data[6+len_DCIL]\n len_SCID = data[len_idh + len_version + len_DCIL + len_DCID]\n return 7 + len_DCID + len_SCID # 7 + data[6+data[5]] + data[5]\n\n\ndef dummy_version_packet(f): # 1:23 0:15\n if f: # QUIC Long Header\t[8-f][0-f] 00000000(32bitVer) DCID Len == SCID Len == 8\n # qdata = struct.pack('!B',random.randint(0,255)|0x80) + b'\\x00\\x00\\x00\\x00\\x50' + os.urandom(8)\n qdata = struct.pack('!B',random.randint(0,255)|0x80) + struct.pack('!L',random.randint(0,0xffffffff)&0xfafafafa|0x0a0a0a0a) + b'\\x08' + os.urandom(8) + b'\\x08' + os.urandom(8) + b'\\x00'*1177\n len_h = 23\n else: # SCID Len == 0 DCID Len == [8,20]\n # qdata = struct.pack('!B',random.randint(0,255)&0x7d|9) + os.urandom(8) + struct.pack('!L',random.randint(0,0xffffffff)&0xfafafafa|0x0a0a0a0a) # '\\x00\\x00\\x00\\x00'\n _rl = random.randint(8,0x14)\n qdata = struct.pack('!B',random.randint(0,255)|0x80) + random.choice([b'\\xff',b'\\x00']) + os.urandom(3) + struct.pack('!B',_rl) + os.urandom(_rl) + b'\\x00'*1186\n len_h = 7 + _rl\n print('\\tSend Query: %s'%' '.join('%02X'%x for x in qdata[:len_h]))\n return qdata\n '''\n struct.pack('!B',random.randint(0,255)&0x7d|9) # random.randint(0,7)*16+9+random.choice([0,4]) [0-7][9|d]\n struct.pack('!Q',random.randint(0,0xFFFFFFFFFFFFFFFF)) # os.urandom(8) CID(64bit)\n struct.pack('!L',random.randint(0,0xffffffff)&0xfafafafa|0x0a0a0a0a) # '*a*a*a*a' Ver(32bit)\n '''\n\nclass UdpHandler:\n global query_count\n\n def __init__(self, target_hostname, target_port):\n self.target_hostname = target_hostname\n self.target_port = target_port\n self.recv_count = 0\n\n def connection_made(self, transport):\n self.transport = transport\n self.s_time = time.time()\n\n for _ in range(query_count):\n self.transport.sendto(dummy_version_packet(random.randint(0,1)))\t# random.randint(0,1)\n\n def datagram_received(self, data, addr):\n global QUIC_Ver\n if self.recv_count == 0:\n len_h = len_data_head(data)\n QUIC_Ver = str(data[len_h:])[2:-1] # .decode('utf8','ignore')\n self.recv_count += 1\n print(' Recv Data:\\t(%d/%d)\\n %r'%(self.recv_count, query_count, data))\n data_coll(time.time() - self.s_time)\n # print('\"{}:{}\" is enabled QUIC.\\tRTT={}ms'.format(self.target_hostname, self.target_port, time.time() - self.s_time))\n if self.recv_count == query_count: self.transport.close()\n\n def error_received(self, transport):\n print('\"{}:{}\"\\t{}'.format(self.target_hostname, self.target_port, transport))\n self.transport.close()\n\n def connection_lost(self, transport):\n loop = asyncio.get_event_loop()\n loop.stop()\n\n\ndef stop_event_loop(event_loop, timeout, s_addr, q_port):\n \"\"\"Terminates event loop after the specified timeout.\"\"\"\n def timeout_handler():\n event_loop.stop()\n\n print('\"{}:{}\" \\tTimeout...\\t{}ms'.format(s_addr, q_port, timeout*1000))\n event_loop.call_later(timeout, timeout_handler)\n\n\ndef main():\n \"\"\"Main entry point.\"\"\"\n #print(\"Start:\",time.ctime(), time.time())\n global recv_i, sum_rtt, query_count\n recv_i = sum_rtt = 0\n query_count = 3\n query_timeout = 1.6\n query_port = 443\n server_addr = \"127.0.0.1\"\n if len(sys.argv) > 1 : server_addr = sys.argv[1]\n if len(sys.argv) > 2 : query_port = sys.argv[2]\n# args = cli.parse_args(sys.argv[1:])\n# server_addr = net.resolve_hostname(args.host)\n\n event_loop = asyncio.get_event_loop()\n connect = event_loop.create_datagram_endpoint(\n lambda: UdpHandler(server_addr, query_port),\n remote_addr=(server_addr, query_port)\n )\n event_loop.run_until_complete(connect)\n stop_event_loop(event_loop, query_timeout, server_addr, query_port)\n event_loop.run_forever()\n #print(\"End:\",time.ctime(), time.time())\n if recv_i:\n print('\"{}:{}\" is enabled QUIC. ({})\\tRTT={:.2f}ms\\t{}/{}'.format(server_addr, query_port, QUIC_Ver, sum_rtt*1000/recv_i, recv_i, query_count))\n with open('QUIC-r.txt', 'a') as wf: wf.write('%s\\t%s\\t%.2f\\t%d|%d\\t%s\\n'%(time.strftime('%Y%m%d %X'), server_addr, sum_rtt*1000/recv_i, recv_i, query_count, QUIC_Ver))\n\nif __name__ == '__main__':\n main()\n","sub_path":"quic_version_detector/main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":4683,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"425486301","text":"\n#%%\n\nimport prod_1 as p\nimport pandas as pd\nfrom pandas.io.json import json_normalize\nimport json\nfrom openpyxl import load_workbook\n\np.open_window.browser_setup()\n\n#%%\n\n# PRODUCT NUMBER\nproduct_number = 281175\n\n#CAFE NUMBER\ncafe_name = '7silverfactory7'\n\n# 7silverfactory7\n# dodohi0607\n# soojip114\n\n\nnaver_cookie = 'NRTK=ag#all_gr#1_ma#-2_si#0_en#0_sp#0; NNB=J2ZVYNYXP2MV6; nx_ssl=2; ASID=77f773540000017630e9510000000067; nid_inf=959657076; NID_AUT=KbzwToCw88MVjp3A/Q9SZnknWKQ8yGm6JmX3+mYIOuETiZgaoCpiT6g3ik9LdT5Y; NID_JKL=lNlg2g4l0j+U4oZxs4OTLWsaZr9zUpYRdypoXffXy3A=; NID_SES=AAABo5uEtwhMXAL5KJ7cY5i5J6O53xjuQSTbVC9tUB3vaK7YFdrtHnlaQN4StnwpapRnitS/7PnaQyP8njyY4zNvSoLhSUZoBea9c04hrKb2Y9LUkqDJIFzMxMjHq12l9zHX43XRQSCmsNodvUgZdiuHV7eGkCYleQfXX97YyJXoZyLhOF6i8WjEBS5E3zaz7Hl0lu4+xzjIMuiLaPwVv+BF9T1n8rzrUAIgfdFGnBBoduFiVlwQ5E+opHX+2i9J8T0w2t+Px1L2SM/i5FJ/wXIh0sUp8vnNqu9ziDOPp924vK66mn6itHgDJmABIRNZlLChiTzhzXOwgjIXjaR8SlhNaJZiQhz+fjEcjiXbJ+D8r9PY902NH5h5Gu2UKkPKNLcJNqruO3Wb8/uwVBXDUdkQRaVEIV2RXJn89tL0f4p8mzUGbeWnG98HVEW7KItvCD84f7vtZXnffowWMEzD78bKSHTO3vl4wUZYHvO/vKiTQdJq+T7d20i6ZjOYTNXl9n7k/XFPjyODXXDcWlXB9YlrQLgmPKWCoDUIcYWN9LFXyMbiSkAm2p3YMVOfjOmkv5gSQQ=='\n\n\n\n# TIME\ntime = p.datetime.time(21, 8, 50)\n# ALPHA\nalpha = 200\n# MAX BID\nmax_bid = 15.0 # 만원\n# MIN BID\nmin_bid = 1.0 # 만원\n\n\n\n\n\n# URL\nurl = \"https://cafe.naver.com/{0}/{1}\".format(cafe_name, product_number)\n\n# Cafe number\nif cafe_name == \"soojip114\":\n cafe_number = 12097718\nelif cafe_name == \"dodohi0607\":\n cafe_number = 19278526\nelse:\n cafe_number = 23303375\n\n\n# Delta\ndelta = 0.85\n# Max BID and Min BID Multiplier\nmax_bid *= 10000\nmin_bid *= 10000\n# Run time\nrun_time = p.datetime.datetime.combine(p.datetime.date.today(), time)\np.browser.get(url)\n\n#%%\n\n\ndef price_list(product_number = product_number, naver_cookie = naver_cookie):\n\n headers = {\n 'authority': 'apis.naver.com',\n 'accept': 'application/json, text/plain, */*',\n 'x-cafe-product': 'pc',\n 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36',\n 'origin': 'https://cafe.naver.com',\n 'sec-fetch-site': 'same-site',\n 'sec-fetch-mode': 'cors',\n 'sec-fetch-dest': 'empty',\n 'accept-language': 'en-US,en;q=0.9,es;q=0.8,ko;q=0.7',\n 'cookie': naver_cookie\n }\n\n params = (\n ('requestFrom', 'A'),\n ('orderBy', 'asc'),\n )\n\n response_price_list = p.requests.get('https://apis.naver.com/cafe-web/cafe-articleapi/cafes/{0}/articles/{1}/comments/pages/1'.format(cafe_number, product_number), headers=headers, params=params)\n\n #print(response_price_list.elapsed.total_seconds())\n\n # print('HTTP Status: {0}\\nReason: {1}'.format(response_price_list.status_code, response_price_list.reason))\n\n return response_price_list\n\ndef final_price():\n global final_list\n min_price = pd.Series([min_bid], [1004], name='content')\n list1 = json.loads(price_list(product_number, naver_cookie).text)\n list2 = json_normalize(list1['comments']['items'])['content'].replace(',','', regex=True).replace('요','', regex=True).replace('원','', regex=True).replace('~','', regex=True).replace(' ','', regex=True)\n list3 = list2[list2.apply(lambda x: x.isdigit())].astype(int)\n final_list = list3[list3.apply(lambda x: (x % 100 == 0) and (x < 2000000))].append(min_price)\n final_price = int(max(final_list))\n return final_price\n\n\n\n# '원' '요' '~', ',' 및 스페이스 삭제 후 숫자축출\n# 200만원 까지만 집계\n\n# '.', '\\n' 마크가 있으면 제외됨\n\ndef final_output(final_price, alpha, test = \"Y\"):\n\n global start_time\n global finish_time\n global time_taken\n global bid\n \n # Taking timestamp\n start_time = str(p.datetime.datetime.now())\n start_time2 = p.time.time()\n\n # Get the browser refresh\n p.browser.switch_to.frame(\"cafe_main\")\n\n bid = final_price() + alpha\n\n try:\n p.WebDriverWait(p.browser, 10).until(\n p.EC.visibility_of_element_located((p.By.CLASS_NAME, \"register_box\")))\n finally:\n\n if test == \"N\": # PRODUCTION\n\n # This piece will not write anything as this will run with scheduler\n elem = p.browser.find_element_by_class_name(\"comment_inbox_text\")\n if bid < max_bid:\n elem.send_keys(bid)\n p.browser.find_element_by_class_name(\"register_box\").click()\n else:\n elem.send_keys(bid)\n\n else: # Practice. This piece will write because it will only be used outside the scheduler\n if bid < max_bid:\n print(\"bid executed\")\n else:\n print(\"no bid executed\")\n\n p.browser.switch_to_default_content()\n finish_time = str(p.datetime.datetime.now())\n time_taken = p.time.time() - start_time2\n\n\n# 엑셀파일 저장\ndef saving_excel():\n # new dataframe with same columns\n printing_results = pd.DataFrame({'id_vars':['bid', 'alpha', 'max_bid', 'start_time', 'finish_time', 'time_taken', 'url', 'run_time','final_list'],'value_vars':[bid, alpha, max_bid, start_time, finish_time, time_taken, url, run_time, final_list]})\n\n writer = pd.ExcelWriter('saving_results.xlsx', engine='openpyxl')\n # try to open an existing workbook\n writer.book = load_workbook('saving_results.xlsx')\n # copy existing sheets\n writer.sheets = dict((ws.title, ws) for ws in writer.book.worksheets)\n\n # read existing file\n reader = pd.read_excel(r'saving_results.xlsx')\n # write out the new sheet\n printing_results['value_vars'].to_excel(writer, sheet_name=\"Sheet1\", startcol=writer.sheets['Sheet1'].max_column, index = False,header= False) #startrow=len(reader)+1)\n\n writer.close()\n\ndef final_job(run_time = run_time, delta = delta):\n execution_time = p.time.strptime(str(run_time + p.datetime.timedelta(seconds=9)), '%Y-%m-%d %H:%M:%S') # string 에서 시간으로 변경\n target_time = p.time.mktime(execution_time) - delta # 시간을 epoch 로 변경. 시간 미세하게 delta 로 조정\n p.browser.get(url)\n sleeping = p.time.sleep(target_time - p.time.time())\n final_output(final_price, alpha, \"N\")\n saving_excel()\n\n#%%\n\n# Do not run if this is not a test\n\nfinal_output(final_price, alpha, \"Y\")\n\n#%%\n\n\n# Review Run\n\nprint(final_list)\nprint(\"\\n--- bid: %s ---\" % bid)\nprint(\"--- alpha: %s ---\" % alpha)\nprint(\"--- max bid: %s ---\" % max_bid)\nprint(\"\\n--- %s seconds ---\" % start_time)\nprint(\"--- %s seconds ---\" % finish_time)\nprint(\"--- %s seconds ---\" % time_taken)\nprint(\"\\n--- url: %s ---\" % url)\nprint(\"--- run time: %s ---\" % run_time)\n\n\n#%%\n\n#############################################################################################\n#############################################################################################\n# PRODUCTION!!!!!!!!!\n\n# This code is to hide the main tkinter window\nroot = p.tkinter.Tk()\nroot.withdraw()\n\n# Message Box\np.messagebox.showinfo(\n \"check\", \"***ALPHA: {} ***\\n***MAX BID: {} ***\\n***TIME: {} ***\\n***URL: {} ***\".format(alpha, max_bid, run_time, url))\n\n\n# Production scheduler\n\nsched = p.BackgroundScheduler()\nsched.start()\n\njob = sched.add_job(final_job, 'date', run_date=run_time)\n\n#############################################################################################\n#############################################################################################\n\n\n#%%\n\n# Wrap-up\n\nsched.shutdown(wait=False)\n\n\n# %%\n\n# list(a.keys())\n# list1 = json.loads(price_list().text)\n# df = json_normalize(list1['comments']['items'])\n# df\n\n\n\n# Reference\n\n# https://developers.naver.com/docs/cafe/api/\n# https://blog.naver.com/popqser2/221430894929\n# https://requests.readthedocs.io/en/latest/api/\n# https://stackoverflow.com/questions/47242845/pandas-io-json-json-normalize-with-very-nested-json\n\n# 엑셀파일 생성\n# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_excel.html\n# https://medium.com/better-programming/using-python-pandas-with-excel-d5082102ca27\n\n# %%\n\n\n\n","sub_path":"골드/경매/prod_6.py","file_name":"prod_6.py","file_ext":"py","file_size_in_byte":8047,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"161590039","text":"from datetime import (\n datetime,\n timedelta\n)\n\nfrom django.shortcuts import (\n render,\n redirect\n)\nfrom django.urls import reverse\n\nfrom ..models import (\n Bulletin,\n Product,\n VacationSettings\n)\nfrom django.core.paginator import Paginator\n\n\ndef index(request):\n vacation_settings = VacationSettings.load()\n\n if 'cart' not in request.session:\n request.session['cart'] = []\n\n if vacation_settings.active:\n return redirect(reverse('shop:vacation'))\n else:\n return redirect(reverse('shop:main'))\n\n\ndef vacation_index(request):\n return render(request, 'shop/temp_index.html')\n\n\ndef main_index(request):\n today = datetime.now()\n month_ago = today - timedelta(days=int(30))\n all_products = Product.objects.filter(status='A').order_by('-created_at').prefetch_related('pen').prefetch_related('image')\n paginator = Paginator(all_products, 24)\n products = paginator.page(1)\n context = {\n 'products': products,\n 'bulletins': Bulletin.objects.filter(updated_at__range=(month_ago, today), active=True).order_by('-updated_at')[:1]\n }\n\n return render(request, 'shop/index.html', context)\n\n\ndef news(request):\n bulletins = Bulletin.objects.filter(active=True).order_by('-updated_at')\n context = {\n \"bulletins\": bulletins,\n }\n return render(request, 'shop/news.html', context)\n\n\ndef not_found(request, exception):\n return render(request, 'shop/404.html')\n","sub_path":"apps/shop/views/index.py","file_name":"index.py","file_ext":"py","file_size_in_byte":1456,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"496244131","text":"# LC 654. Maximum Binary Tree\n\n'''\nYou are given an integer array nums with no duplicates. A maximum binary tree can be built recursively from nums using the following algorithm:\n\nCreate a root node whose value is the maximum value in nums.\nRecursively build the left subtree on the subarray prefix to the left of the maximum value.\nRecursively build the right subtree on the subarray suffix to the right of the maximum value.\nReturn the maximum binary tree built from nums.\n\n# Definition for a binary tree node.\n# class TreeNode:\n# def __init__(self, val=0, left=None, right=None):\n# self.val = val\n# self.left = left\n# self.right = right\n'''\n\nclass Solution:\n def constructMaximumBinaryTree(self, nums: List[int]) -> Optional[TreeNode]:\n return self.monostack(nums)\n\n # O(n^2) time | O(n) space\n def recursive(self, nums: List[int]) -> Optional[TreeNode]:\n if len(nums) == 0:\n return None\n\n max_idx = 0\n\n for i in range(len(nums)):\n if nums[i] > nums[max_idx]:\n max_idx = i\n\n node = TreeNode(nums[max_idx])\n node.left = self.constructMaximumBinaryTree(nums[:max_idx])\n node.right = self.constructMaximumBinaryTree(nums[max_idx + 1:])\n\n return node\n\n # O(n) time | O(n) space\n def monostack(self, nums: List[int]) -> Optional[TreeNode]:\n node = TreeNode(float('inf'))\n stack = [node]\n\n for n in nums:\n node = TreeNode(n)\n\n while stack and stack[-1].val < n:\n node.left = stack.pop()\n\n stack[-1].right = node\n stack.append(node)\n\n return stack[0].right\n","sub_path":"1. Problems/d. Stack & Queue/b. Monostack - Build Maximum Binary Tree.py","file_name":"b. Monostack - Build Maximum Binary Tree.py","file_ext":"py","file_size_in_byte":1676,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"541937926","text":"import socket\nimport threading\nimport socketserver\nimport json\nimport time\nimport hashlib\nimport sys\n\n# Lock mechanism to deal with threads synchronization.\n# Controls access to critical sections during runtime.\nlock = threading.Lock()\n\n\n\"\"\" \nGLOBAL VARIABLES\nUsed to share information across all threads.\n\"\"\"\n\n# Store requests and provide access to its info to all threads.\nglobal request_list\nrequest_list = []\n\n# Deals with maximum delay that the clients will experience when requesting.\n# Used when there are not enough requests in a certain period, so that the server does not hang.\n# Provide access to its info to all threads.\nglobal first_request_time\nfirst_request_time = 0\n\n# Store which operation should be executed on a requests \"batch\".\nglobal operation_to_execute\noperation_to_execute = 0\n\n# Keeps track of how many responses was already sent to clients during a \"batch\".\n# Used to check if all clients that sent a request got a response from the server.\nglobal responses_sent\nresponses_sent = 0\n\n# Keeps track of how many requests were received by the server when there\n# are less than 5 requests within a certain period.\nglobal total_requests\ntotal_requests = 0\n\n# Store an execution hash that is generated from a timestamp (first_request_time).\n# This hash is used to provide a unique identification about a requests \"batch\" execution.\nglobal execution_hash\nexecution_hash = 0\n\nglobal results_vector\nresults_vector = []\n\nglobal results_calculated\nresults_calculated = 0\n\n\nclass ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):\n\n def handle(self):\n #cur_thread = threading.current_thread()\n #response = bytes(\"{}: {}\".format(cur_thread.name, data), 'ascii')\n\n global request_list\n global first_request_time\n global responses_sent\n global total_requests\n global execution_hash\n global results_vector\n global results_calculated\n\n if len(request_list) >= 5:\n print(\"[DEBUG] Server is busy processing a batch of 5 clients already\")\n self.request.sendall(\"SERVER IS BUSY\".encode(\"utf-8\"))\n return\n\n lock.acquire()\n if len(request_list) == 0:\n # start counting timeout\n first_request_time = time.time()\n # generate hash from timestamp for execution batch\n hash_object = hashlib.sha256(str(first_request_time).encode(\"utf-8\"))\n execution_hash = hash_object.hexdigest()\n #print(\"[DEBUG] hex_dig: {0}\".format(execution_hash))\n lock.release()\n\n # Load received client data\n data = str(self.request.recv(1024), 'ascii')\n\n # Load JSON data into python variables\n data_dict = json.loads(data)\n\n client_id = data_dict[\"client_id\"]\n request_code = data_dict[\"request_code\"] # which operation to execute (1:sum, 2:sub, 3:mult or 4:div)\n number_array = data_dict[\"number_array\"]\n number_1 = number_array[0]\n number_2 = number_array[1]\n timestamp = data_dict[\"timestamp\"]\n\n client_data = {\n \"client_address\": self.client_address[0],\n \"client_id\": client_id,\n \"request_code\": request_code,\n \"number_1\": number_1,\n \"number_2\": number_2,\n \"timestamp\": timestamp\n }\n\n # Add client request to request_list\n lock.acquire()\n request_list.append(client_data)\n lock.release()\n \n print(\"[DEBUG] request_list length: {0}\".format(len(request_list)))\n\n # wait while there has not been passed 5 seconds nor the server has received five requests whithin 5 seconds\n while(len(request_list) < 5):\n if (time.time() - first_request_time > 5):\n total_requests = len(request_list)\n print(\"[DEBUG] total_requests: {0}\".format(total_requests))\n break\n\n lock.acquire()\n if results_calculated == 0:\n self.check_operation_to_execute()\n lock.release()\n\n result = self.calculate(number_1, number_2)\n\n lock.acquire()\n results_calculated = results_calculated + 1\n results_vector.append(\n {\n \"client_id\":client_id,\n \"result\":result[1]\n })\n lock.release()\n\n print(\"[DEBUG] Server calculation response: {0}\".format(result))\n\n while(results_calculated < len(request_list)):\n print(\"[DEBUG] Waiting for all clients calculations to finish...\")\n\n response_dict = {\n \"execution_hash\":execution_hash,\n \"client_id\":client_id,\n \"operation_executed\":operation_to_execute,\n \"execution_status\":result[0],\n \"result\":result[1],\n \"all_clients_results\":results_vector,\n \"timestamp\":time.time()\n }\n \n response_json = json.dumps(response_dict)\n print(\"[DEBUG] Data that will be sent to clients: {0}\".format(response_json))\n\n # Send response to client\n self.request.sendall(response_json.encode(\"utf-8\"))\n\n lock.acquire()\n try:\n print(\"[DEBUG] Lock acquired\")\n responses_sent = responses_sent + 1\n except:\n print(\"[DEBUG] Error while locking resource\")\n finally:\n print(\"[DEBUG] Releasing lock\")\n lock.release()\n \n print(\"[DEBUG] responses_sent: {0}\".format(responses_sent))\n if responses_sent == total_requests or responses_sent >= 5:\n responses_sent = 0\n results_calculated = 0\n print(\"[DEBUG] Clearing results_vector\")\n results_vector.clear()\n print(\"[DEBUG] Clearing request_list\")\n request_list.clear()\n \n def check_operation_to_execute(self):\n # here we should develop a way to check which operation to execute\n # based on clients requests that are stored in global variable\n global operation_to_execute\n\n operations_requested = {\n \"sum\":0,\n \"sub\":0,\n \"mult\":0,\n \"div\":0\n }\n\n # check for the most requested operation\n for request in request_list:\n if request[\"request_code\"] == 1:\n operations_requested[\"sum\"] = operations_requested[\"sum\"] + 1\n elif request[\"request_code\"] == 2:\n operations_requested[\"sub\"] = operations_requested[\"sub\"] + 1\n elif request[\"request_code\"] == 3:\n operations_requested[\"mult\"] = operations_requested[\"mult\"] + 1\n elif request[\"request_code\"] == 4:\n operations_requested[\"div\"] = operations_requested[\"div\"] + 1\n \n operation_to_execute = max(operations_requested, key=operations_requested.get)\n\n # solve draw problem (when there are two operations that are equally requested)...\n if len(request_list) >= 5 and operations_requested[operation_to_execute] < 3:\n operation_to_execute = \"draw\"\n else:\n operation_to_execute = max(operations_requested, key=operations_requested.get)\n \n print(\"[DEBUG] Operation to execute: {0}\".format(operation_to_execute))\n\n def server_sum(self, n1, n2):\n execution_status = 0 # response execution status, 0 = error, 1 = success\n result = 0 # calculation result\n\n try:\n result = float(n1) + float(n2)\n execution_status = 1\n except:\n print(\"[DEBUG] Server error while trying to sum\")\n result = 0\n execution_status = 0\n\n return (execution_status, result)\n \n def server_subtract(self, n1, n2):\n execution_status = 0\n result = 0\n\n try:\n result = float(n1) - float(n2)\n execution_status = 1\n except:\n print(\"[DEBUG] Server error while trying to subtract\")\n result = 0\n execution_status = 0\n\n return (execution_status, result)\n \n def server_multiply(self, n1, n2):\n execution_status = 0\n result = 0\n\n try:\n result = float(n1) * float(n2)\n execution_status = 1\n except:\n print(\"[DEBUG] Server error while trying to multiply\")\n result = 0\n execution_status = 0\n\n return (execution_status, result)\n \n def server_divide(self, n1, n2):\n execution_status = 0 \n result = 0\n\n try:\n result = float(n1) / float(n2)\n execution_status = 1\n except:\n print(\"[DEBUG] Server error while trying to divide\")\n result = 0\n execution_status = 0\n\n return (execution_status, result)\n \n def calculate(self, n1, n2):\n if operation_to_execute == \"sum\":\n print(\"[DEBUG] Server is going to execute SUM operation\")\n result = self.server_sum(n1, n2)\n\n elif operation_to_execute == \"sub\":\n print(\"[DEBUG] Server is going to execute SUB operation\")\n result = self.server_subtract(n1, n2)\n\n elif operation_to_execute == \"mult\":\n print(\"[DEBUG] Server is going to execute MULT operation\")\n result = self.server_multiply(n1, n2)\n\n elif operation_to_execute == \"div\":\n print(\"[DEBUG] Server is going to execute DIV operation\")\n result = self.server_divide(n1, n2)\n\n elif operation_to_execute == \"draw\":\n print(\"[DEBUG] Server verified that clients did not consent in operation to execute\")\n result = (0,0)\n\n else:\n print(\"[DEBUG] Server error while trying to decide which operation to execute\")\n result = (0,0)\n\n return result\n\n\nclass ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):\n pass\n\n\nif __name__ == \"__main__\":\n # Port 0 means to select an arbitrary unused port\n #HOST = \"localhost\"\n #PORT = 9999\n\n if len(sys.argv) < 2:\n print(\"[DEBUG] Error while parsing data from commando line.\")\n sys.exit(\"[X] IP or port not provided correctly! - Usage: python3 \");\n\n HOST = str(sys.argv[1])\n PORT = int(sys.argv[2])\n \n server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler)\n\n with server:\n ip, port = server.server_address\n\n # Start a thread with the server -- that thread will then start one\n # more thread for each request\n server_thread = threading.Thread(target=server.serve_forever)\n\n # Exit the server thread when the main thread terminates\n server_thread.daemon = True\n server_thread.start()\n\n print(\"[DEBUG] Server loop running in thread:\", server_thread.name)\n\n while True:\n pass\n #server.shutdown()","sub_path":"mtserver.py","file_name":"mtserver.py","file_ext":"py","file_size_in_byte":10852,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"498377705","text":"import csv\nimport datetime\nimport os\n\n\nwhile(1):\n #Defaults\n plu = input(\"What is the PLU? \")\n what = input(\"Description of the item \")\n howmuch = input(\"How much? \")\n who = input(\"Your numbers. Example 206430: \")\n when = input(\"What is the date \")\n\n #CSV Writer\n ofile = open('Whats_In_The_Container.csv', \"a\")\n writer = csv.writer(ofile, delimiter=',')\n writer.writerow([plu, what, howmuch, when, who])\n ofile.close()\n\n print()\n print()\n print()\n\n \n\n\n","sub_path":"WorkContainer/Container.py","file_name":"Container.py","file_ext":"py","file_size_in_byte":495,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"188563123","text":"import win32security\nimport win32api\nimport sys\nfrom ntsecuritycon import *\n\n\ndef AdjustPrivilege(priv, enable = 1):\n # Get the process token.\n flags = TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY\n htoken = win32security.OpenProcessToken(win32api.GetCurrentProcess(), flags)\n # Get the ID for the system shutdown privilege.\n id = win32security.LookupPrivilegeValue(None, priv)\n # Now obtain the privilege for this process.\n # Create a list of the privileges to be added.\n if enable:\n newPrivileges = [(id, SE_PRIVILEGE_ENABLED)]\n else:\n newPrivileges = [(id, 0)]\n # and make the adjustment.\n win32security.AdjustTokenPrivileges(htoken, 0, newPrivileges)\n\n\ndef go_shutdown(is_reboot=0):\n AdjustPrivilege(SE_SHUTDOWN_NAME)\n win32api.InitiateSystemShutdown(None, 'Shutdown msg from API Server', 10, 1, is_reboot)\n\n\nif __name__ == '__main__':\n go_shutdown()\n","sub_path":"agent/winvm/shutdown.py","file_name":"shutdown.py","file_ext":"py","file_size_in_byte":903,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"370750614","text":"from osgeo import osr, gdal\nimport sys\nimport pyproj\n\ndef printnn(line):\n print(\"\\r\\n{}\".format(line))\n\ndef show_crs(fname):\n print(fname)\n ds = gdal.Open(fname)\n old_cs= osr.SpatialReference()\n proj_str = ds.GetProjectionRef()\n old_cs.ImportFromWkt(proj_str)\n printnn(old_cs)\n cs = pyproj.CRS.from_string(proj_str)\n printnn(\"EPSG:{}\".format(cs.to_epsg()))\n\nif __name__ ==\"__main__\":\n fname='/home/tony/changed/h28/83.tif'\n if len(sys.argv)>1:\n fname = sys.argv[1]\n show_crs(fname)","sub_path":"crs.py","file_name":"crs.py","file_ext":"py","file_size_in_byte":524,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"159275682","text":"#coding=utf-8\nfrom uliweb import expose, functions\nfrom uliweb.i18n import ugettext_lazy as _\nimport logging\n\nlog = logging.getLogger(__name__)\n\ndef __begin__():\n functions.require_login()\n\n@expose('/admin/models_config')\nclass AdminModelsConfigView(object):\n \"\"\"\n Model administration config app\n \"\"\"\n\n def __init__(self):\n self.model = functions.get_model('model_config')\n self.model_his = functions.get_model('model_config_his')\n\n @expose('')\n def index(self):\n fields = [\n {'name':'display_name', 'width':200},\n {'name':'model_name', 'width':200},\n {'name':'is_published', 'verbose_name':_('Is Published'), 'width':100},\n 'description',\n {'name':'published_time', 'width':150},\n {'name':'action', 'verbose_name':_('Action'), 'width':120}\n ]\n\n def _action(value, obj):\n from uliweb.core.html import Tag\n\n actions = [\n Tag('a', '', title=_('View'),\n href=url_for(self.__class__.view, model_name=obj.model_name),\n _class=\"btn btn-xs btn-primary\"),\n Tag('a', '', title=_('Delete'),\n href=url_for(self.__class__.delete, model_name=obj.model_name),\n _class=\"btn btn-xs btn-danger action-delete\"),\n\n ]\n if obj.uuid:\n actions.insert(1, Tag('a', '', title=_('Unpublish'),\n href=url_for(self.__class__.unpublish, model_name=obj.model_name),\n _class=\"btn btn-xs btn-warning action-unpublish\"))\n return ' '.join(map(str, actions))\n\n def _is_published(value, obj):\n if obj.uuid:\n return ' (%s)' % obj.uuid\n else:\n return ''\n\n fields_convert_map = {'action':_action, 'is_published':_is_published}\n\n view =functions.ListView(self.model, fields=fields,\n fields_convert_map=fields_convert_map)\n objects = view.objects()\n return {'view':view, 'objects':objects, 'total':view.total}\n\n def add(self):\n from forms import AddForm\n\n fields = ['model_name', 'display_name', 'table_name', 'description',\n 'basemodel', 'has_extension', 'extension_model']\n\n def post_created_form(fcls, model):\n from uliweb.form.widgets import Button\n\n fcls.layout_class = 'bs3t'\n fcls.form_buttons = [\n str(Button(value=_('Save'), _class=\"btn btn-primary btn-sm\",\n name=\"submit\", type=\"submit\")),\n ]\n\n def pre_save(data):\n from uliweb.utils.common import get_uuid, import_attr\n from uliweb.contrib.model_config import get_model_fields, get_model_indexes\n\n data['uuid'] = get_uuid()[:6]\n if not data['table_name']:\n data['table_name'] = data['model_name'].lower()\n\n if not data['display_name']:\n data['display_name'] = data['model_name']\n\n #add import basemodel support\n if data['basemodel']:\n BM = import_attr(data['basemodel'])\n data['fields'] = get_model_fields(BM)\n data['indexes'] = get_model_indexes(BM)\n\n if data['extension_model']:\n EM = import_attr(data['extension_model'])\n data['extension_fields'] = get_model_fields(EM)\n data['extension_indexes'] = get_model_indexes(EM)\n\n def post_save(obj, data):\n r = self.model(model_name=obj.model_name,\n display_name=obj.display_name,\n description=obj.description,\n modified_user=request.user.id)\n r.save(version=True)\n\n\n view = functions.AddView(self.model_his, ok_url=url_for(self.__class__.index),\n post_created_form=post_created_form,\n form_cls=AddForm,\n pre_save=pre_save,\n post_save=post_save,\n fields=fields, version=True)\n return view.run()\n\n def _get_model(self, model_name, uuid):\n return self.model_his.get((self.model_his.c.model_name==model_name) &\n (self.model_his.c.uuid==uuid))\n\n def view(self, model_name):\n model = self.model.get(self.model.c.model_name==model_name)\n\n uuid = request.GET.get('uuid')\n uuids = [row.uuid for row in\n self.model_his.filter(self.model_his.c.model_name==model_name)\\\n .fields(self.model_his.c.uuid)\\\n .order_by(self.model_his.c.create_time.desc())]\n\n obj = None\n if not uuid and len(uuids)>0:\n uuid = uuids[0]\n\n if uuid in uuids:\n obj = self._get_model(model_name, uuid)\n\n template_data = {'uuids':uuids, 'model_name':model_name,\n 'uuid':uuid, 'object':obj, 'published_uuid':model.uuid if model else ''}\n if obj:\n template_data['columns'] = eval(obj.fields or '[]')\n template_data['indexes'] = eval(obj.indexes or '[]')\n template_data['extension_columns'] = eval(obj.extension_fields or '[]')\n template_data['extension_indexes'] = eval(obj.extension_indexes or '[]')\n fields = ['model_name', 'display_name', 'table_name', 'basemodel', 'has_extension', 'extension_model']\n view = functions.DetailView(self.model_his, obj=obj, fields=fields,\n template_data=template_data)\n return view.run()\n else:\n template_data['view'] = ''\n template_data['columns'] = []\n template_data['indexes'] = []\n template_data['extension_columns'] = []\n template_data['extension_indexes'] = []\n return template_data\n\n def save(self, model_name):\n import json as JSON\n from uliweb.utils.common import get_uuid\n\n column_name = request.GET.get('column_name')\n column = JSON.loads(request.POST[column_name])\n uuid = request.GET.get('uuid')\n action = request.GET.get('action')\n\n obj = self._get_model(model_name, uuid)\n old_column = getattr(obj, column_name)\n\n list_columns = column_name in ('fields', 'indexes', 'extension_fields',\n 'extension_indexes')\n if list_columns:\n old_column = eval(old_column or '[]')\n\n index = -1\n if action in ('edit', 'delete'):\n for x in range(len(old_column)):\n if old_column[x]['name'] == column['name']:\n index = x\n break\n\n if index >= 0:\n reserved = old_column[index].pop('_reserved', False)\n else:\n reserved = None\n\n if action in ('add', 'delete') or (index>=0 and old_column[index] != column):\n #if not published, then directly use current record\n if obj.status == '1':\n data = obj.to_dict()\n data.pop('id')\n data.pop('create_time')\n data['status'] = '0'\n obj = self.model_his(**data)\n obj.uuid = get_uuid()[:6]\n\n if list_columns:\n if action == 'add':\n old_column.append(column)\n elif action == 'edit':\n column['_reserved'] = reserved\n old_column[index] = column\n else:\n del old_column[index]\n else:\n old_column = column\n\n setattr(obj, column_name, old_column)\n obj.save(version=True)\n uuid = obj.uuid\n return json({'success':True, 'message':'Success!', 'data':{'uuid':uuid}})\n\n def publish(self, model_name):\n from uliweb.utils import date\n from uliweb.orm import Begin, Commit, Rollback\n\n Begin()\n uuid = request.GET.get('uuid')\n obj = self._get_model(model_name, uuid)\n if not obj:\n return json({'success':False, 'message':\"Model %s(%s) can't be found\" % (model_name, uuid)})\n obj.status = '1'\n obj.published_time = date.now()\n if len(obj.extension_fields) > 0:\n obj.has_extension = True\n obj.save(version=True)\n\n row = self.model.get(self.model.c.model_name==model_name)\n row.uuid = uuid\n row.published_time = date.now()\n row.modified_user = request.user.id\n row.modified_time = obj.published_time\n row.display_name = obj.display_name\n row.description = obj.description\n row.save(version=True)\n\n try:\n M = functions.get_model(model_name)\n M.migrate()\n if obj.has_extension:\n M.ext._model.migrate()\n Commit()\n\n except Exception as e:\n Rollback()\n log.exception(e)\n return json({'success':False, 'message':'Migrate Model %s(%s) Failed!' % (model_name, uuid)})\n return json({'success':True, 'message':'Model %s(%s) has been published successfully!' % (model_name, uuid)})\n\n def unpublish(self, model_name):\n count = self.model_his.filter(self.model_his.c.model_name==model_name).count()\n if count <= 1:\n return json({'success':False, 'message':'There should be at lastest one version existed'})\n row = self.model.get(self.model.c.model_name==model_name)\n row.uuid = ''\n row.published_time = None\n row.save(version=True)\n return json({'success':True})\n\n def delete(self, model_name):\n row = self.model.get(self.model.c.model_name==model_name)\n row.delete()\n\n for obj in self.model_his.filter(self.model_his.c.model_name==model_name):\n obj.delete()\n return json({'success':True})\n\n def delete_version(self, model_name):\n uuid = request.GET.get('uuid')\n row = self.model.get(self.model.c.model_name==model_name)\n\n version = self._get_model(model_name, uuid)\n if version.uuid != row.uuid:\n version.delete()\n else:\n return json({'success':False, 'message':\"You can't delete published version\"})\n\n return json({'success':True})","sub_path":"uliweb_peafowl/admin_models_config/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":10603,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"9540705","text":"# -*- coding: utf-8 -*-\nfrom howard_birnbaum.items import HowardBirnbaumItem\nimport scrapy\nimport json\nimport requests\nimport xmltodict\nfrom pyquery import PyQuery\n\n\nclass eldoradostone(scrapy.Spider):\n name=\"eldoradostone\"\n all_urls = []\n scraped_country = ''\n def __init__(self, country = '', *args,**kwargs):\n\n url = \"https://storage.scrapinghub.com/collections/293632/s/latlong\"\n \n headers = {\n 'content-type': \"application/json\",\n 'authorization': \"Basic ZWE3NmIwMzcxMGU3NDVlOGI2YWIxYTg2MGFiMjcxOGU6\"\n }\n \n response = requests.request(\"GET\", url, headers=headers).json()\n\n for i in response['value']:\n self.all_urls.append(i)\n\n\n \n \n\n def start_requests(self):\n for url in self.all_urls:\n url1 = url.split('$')\n req_url = \"http://www.eldoradostone.com/wp-admin/admin-ajax.php?action=store_search&lat={}&lng={}&max_results=100&search_radius=25\".format(str(url1[1]),str(url1[2]))\n yield scrapy.Request(url=req_url, callback=self.parse)\n \n \n def parse(self, response):\n \n r = json.loads(response.body_as_unicode())\n \n for m in r:\n item = HowardBirnbaumItem()\n try:\n item['company'] = m['store']\n except Exception as e:\n print (e)\n try:\n item['address'] = m['address']+' '+m['address2']\n except Exception as e:\n print (e) \n try:\n item['state'] = m['state']\n except Exception as e:\n print (e) \n try:\n item['country'] = m['country']\n except Exception as e:\n print (e) \n try:\n item['phone_number'] = m['phone']\n except Exception as e:\n print (e) \n try:\n item['web_site_url'] = m['url']\n except Exception as e:\n print (e) \n try:\n item['email'] = m['email']\n except Exception as e:\n print (e) \n try:\n item['city'] = m['city']\n except Exception as e:\n print (e)\n try:\n item['postal_code'] = m['zip']\n except Exception as e:\n print (e) \n \n yield item\n \n\n ","sub_path":"Howard_Birnbaum/howard_birnbaum/build/lib/howard_birnbaum/spiders/eldoradostone.py","file_name":"eldoradostone.py","file_ext":"py","file_size_in_byte":2502,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"523333886","text":"# Takes a 1 character number string and returns the factorial\ndef simpleFact(n):\n\tif (n == '9'):\n\t\treturn 362880\n\tif (n == '8'):\n\t\treturn 40320\n\tif (n == '7'):\n\t\treturn 5040\n\tif (n == '6'):\n\t\treturn 720\n\tif (n == '5'):\n\t\treturn 120\n\tif (n == '4'):\n\t\treturn 24\n\tif (n == '3'):\n\t\treturn 6\n\tif (n == '2'):\n\t\treturn 2\n\telse:\n\t\treturn 1\n\n\ndef factorial(n):\n\tfact = 1\n\twhile (n > 0):\n\t\tfact *= n\n\t\tn -= 1\n\treturn fact\n\ndef sumFactDigits(n):\n\tn = str(n)\n\ttotal = 0\n\tfor x in range(len(n)):\n\t\ttotal += simpleFact(n[x])\n\treturn total\n\n","sub_path":"Euler/P34 Sum Factorial Digits/p34.py","file_name":"p34.py","file_ext":"py","file_size_in_byte":526,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"232508530","text":"from cluster.preprocess.pre_node_feed import PreNodeFeed\nfrom master.workflow.preprocess.workflow_feed_fr2wcnn import WorkflowFeedFr2Wcnn\nimport pandas as pd\nimport warnings\nimport numpy as np\nfrom konlpy.tag import Mecab\nfrom common.utils import *\n\nclass PreNodeFeedFr2Wcnn(PreNodeFeed):\n \"\"\"\n\n \"\"\"\n\n def run(self, conf_data):\n \"\"\"\n override init class\n \"\"\"\n super(PreNodeFeedFr2Wcnn, self).run(conf_data)\n self._init_node_parm(conf_data['node_id'])\n\n def _get_node_parm(self, node_id):\n \"\"\"\n return conf master class\n :return:\n \"\"\"\n return WorkflowFeedFr2Wcnn(node_id)\n\n def _init_node_parm(self, node_id):\n \"\"\"\n\n :param node_id:\n :return:\n \"\"\"\n try:\n wf_conf = WorkflowFeedFr2Wcnn(node_id)\n self.wf_conf = wf_conf\n self.encode_channel = wf_conf.get_encode_channel\n self.encode_col = wf_conf.get_encode_column\n self.encode_len = wf_conf.get_encode_len\n self.decode_col = wf_conf.get_decode_column\n self.lable_size = wf_conf.get_lable_size\n self.char_embed = wf_conf.char_encode\n self.char_max_len = wf_conf.char_max_len\n self.lable_onehot = OneHotEncoder(self.lable_size)\n if (wf_conf.get_lable_list):\n self.lable_onehot.restore(wf_conf.get_lable_list)\n self.preprocess_type = wf_conf.get_preprocess_type\n self.embed_type = wf_conf.get_embed_type\n self.vocab_size = wf_conf.get_vocab_size + 4\n self.char_embed_size = 160\n if (self.char_embed == True) :\n self.word_vector_size = self.vocab_size + (self.char_embed_size * self.char_max_len)\n else :\n self.word_vector_size = self.vocab_size\n if(self.embed_type == 'onehot') :\n self.input_onehot = OneHotEncoder(self.vocab_size)\n if (wf_conf.get_vocab_list):\n self.input_onehot.restore(wf_conf.get_vocab_list)\n except Exception as e:\n raise Exception(e)\n\n def _convert_data_format(self, file_path, index):\n \"\"\"\n\n :param obj:\n :param index:\n :return:\n \"\"\"\n try :\n store = pd.HDFStore(file_path)\n chunk = store.select('table1',\n start=index.start,\n stop=index.stop)\n count = index.stop - index.start\n if(self.encode_col in chunk and self.decode_col in chunk) :\n words = self.encode_pad(self._preprocess(chunk[self.encode_col].values)[0:count], max_len=self.encode_len)\n encode = self._word_embed_data(self.embed_type, words, cls=self.input_onehot)\n encode = np.array(encode).reshape([-1, self.encode_len, self.vocab_size])\n if (self.char_embed == True):\n encode = self._concat_char_vector(encode, words)\n encode = np.array(encode).reshape([-1, self.encode_len, self.word_vector_size, self.encode_channel])\n decode = np.array(chunk[self.decode_col].values).reshape([-1,1]).tolist()\n return encode, self._word_embed_data(self.embed_type, decode, cls=self.lable_onehot)\n else :\n raise Exception (\"WCNN Data convert error : no column name exists\")\n except Exception as e :\n raise Exception (e)\n finally:\n store.close()\n\n def _concat_char_vector(self, encode, words):\n \"\"\"\n concat word embedding vecotr and char level embedding\n :param encode : word vector list\n :param words : word list\n :return: concat vector\n \"\"\"\n return_encode = np.array([])\n for i, vec_list, word_list in zip(range(len(encode)), encode, words) :\n for j, vec, word in zip(range(len(vec_list)), vec_list, word_list) :\n word = word[:self.char_max_len-1] if len(word) > self.char_max_len else word\n pad_len = (self.char_max_len - len(word))\n return_encode = np.append(return_encode,\n np.concatenate([vec,\n np.array(self.get_onehot_vector(word)).reshape([len(word) * self.char_embed_size]),\n np.zeros([pad_len * self.char_embed_size])]))\n return return_encode\n\n\n\n def _preprocess(self, input_data):\n \"\"\"\n\n :param input_data:\n :return:\n \"\"\"\n if(self.preprocess_type == 'mecab') :\n return self._mecab_parse(input_data)\n elif (self.preprocess_type == 'kkma'):\n return self._mecab_parse(input_data)\n elif (self.preprocess_type == 'twitter'):\n return self._mecab_parse(input_data)\n else :\n return list(map(lambda x : x.split(' '), input_data.tolist()))\n\n def data_size(self):\n \"\"\"\n get data array size of this calss\n :return:\n \"\"\"\n try :\n store = pd.HDFStore(self.input_paths[self.pointer])\n table_data = store.select('table1')\n return table_data[table_data.columns.values[0]].count()\n except Exception as e :\n raise Exception (e)\n finally:\n store.close()\n\n def has_next(self):\n \"\"\"\n check if hdf5 file pointer has next\n :return:\n \"\"\"\n if(len(self.input_paths) > self.pointer) :\n return True\n else :\n self.wf_conf.set_lable_list(self.lable_onehot.dics())\n self.wf_conf.set_word_vector_size(self.word_vector_size)\n if (self.embed_type == 'onehot'):\n self.wf_conf.set_vocab_list(self.input_onehot.dics())\n return False\n","sub_path":"cluster/preprocess/pre_node_feed_fr2wcnn.py","file_name":"pre_node_feed_fr2wcnn.py","file_ext":"py","file_size_in_byte":5888,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"23711411","text":"# coding=utf-8\r\nimport numpy as np\r\nimport math\r\nimport keras as kr\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Dense, Activation\r\nimport matplotlib.pyplot as plt\r\nimport random\r\nfrom ann_visualizer.visualize import ann_viz\r\n\r\ndef randomcolor(): \r\n colorArr = ['1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'] \r\n color = \"\" \r\n for i in range(6): \r\n color += colorArr[random.randint(0,14)] \r\n return \"#\"+color\r\n\r\n\r\ndef pl(st,r):\r\n plt.figure(st)\r\n r1los = r[r[3].isin([st])].values\r\n rmse=np.sqrt(np.power(r1los[:, -2]-r1los[:, -1],2).sum()/r1los.shape[0])\r\n plt.plot(r1los[:, 0], r1los[:, -1], randomcolor(),label = 'predict')\r\n plt.plot(r1los[:, 0], r1los[:, -2], randomcolor(),label='raw')\r\n plt.legend()\r\n plt.title(st+'rmse ='+str(rmse))\r\n plt.show()\r\n\r\n\r\nclass NN(object):\r\n model = []\r\n history=[]\r\n\r\n def __init__(self, layers):\r\n self.model = Sequential()\r\n self.model.add(Dense(layers[1][0], activation=layers[1][1], input_dim=layers[0][0]))\r\n #kr.initializers.RandomUniform(minval=init_weight[0], maxval=init_weight[1], seed=None)\r\n kr.layers.ELU(alpha=0.1)\r\n kr.layers.ReLU(negative_slope=0.1)\r\n for i in range(2, len(layers)):\r\n self.model.add(Dense(layers[i][0], activation=layers[i][1]))\r\n\r\n def fit(self, X, Y, learning_rate=0.1, epochs=2, solver=\"Adam\", momentum=1):\r\n if solver == \"SGD\":\r\n solver = kr.optimizers.SGD(\r\n lr=learning_rate)\r\n elif solver == \"Adam\":\r\n solver = kr.optimizers.Adam(\r\n lr=learning_rate)\r\n self.model.compile(optimizer=solver, loss='mean_squared_error') #'mean_squared_error'\r\n history=self.LossHistory()\r\n self.model.fit(X, Y, epochs=epochs,verbose=0,callbacks=[history])\r\n self.history=history\r\n\r\n def predict(self, X):\r\n X = np.array(X)\r\n pr = self.model.predict(X)\r\n return pr\r\n\r\n def loss(self):\r\n return self.history.losses\r\n\r\n class LossHistory(kr.callbacks.Callback):\r\n def on_train_begin(self, logs={}):\r\n self.losses = []\r\n\r\n def on_batch_end(self, batch, logs={}):\r\n self.losses.append(logs.get('loss'))\r\n\r\n def visual(self,tit):\r\n ann_viz(self.model,title=tit,filename=tit+'.dot',view=False)","sub_path":"mxnn.py","file_name":"mxnn.py","file_ext":"py","file_size_in_byte":2380,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"639239356","text":"import pandas as pd\r\nimport numpy as np\r\nfrom matplotlib import pyplot as plt\r\nfrom sklearn.preprocessing import LabelEncoder, StandardScaler\r\nimport sys, argparse\r\n\r\nnp.set_printoptions(precision=4) # Print only four digits\r\n\r\n'''\r\n Author: Samuel Prevost\r\n Date: 22/02/2018 14:22:53 UTC+1\r\n Title: Automatic Linear Discriminant Analysis\r\n Desc:\r\n - Aim of LDA: project a feature space (a dataset n-dimensional samples) onto a smaller subspace k (k <= n-1)\r\n while maintaining the class-discriminatory information.\r\n - LDA requires knowing the classes of the samples\r\n - It can be good to first reduce the dimension using PCA, then project by class using LDA\r\n Main source: http://sebastianraschka.com/Articles/2014_python_lda.html\r\n'''\r\n\r\n\r\ndef main(argv):\r\n ## ------- WELCOMING C.L.I. ------- ## \r\n parser = argparse.ArgumentParser()\r\n parser.add_argument(\"inputFile\", help=\"input file in Comma Separated Value format (with or without headers)\")\r\n parser.add_argument(\"outputFile\", help=\"will contain the data projected in the Linear Discriminants' dimensions, output as CSV\")\r\n parser.add_argument(\"labelCol\", help=\"Column number (counting from 0) containing the name of the class to which the sample belongs, should be a positive integer\", type=int)\r\n parser.add_argument(\"-t\", \"--varThreshold\", help=\"cumulative variance threshold after which to drop the useless eigen vectors, default: 0.8\", type=float)\r\n parser.add_argument(\"-ev\", \"--explainedVar\", help=\"output the explained variance along with the cumulative variance, should be a path where to save the image, if '-' the image will be shown but not save\")\r\n parser.add_argument(\"-pd\", \"--projectedData\", help=\"output the data projected in the first two Linear Discriminants' dimensions as a graph of scattered point, the most spread the better, should be a path where to save the image, if '-' the image will be shown but not save\")\r\n parser.add_argument(\"-pm\", \"--projectionMat\", help=\"path where to save the projection matrix used to project the data into PCs' dimensions (as as binary Numpy file .npy)\")\r\n parser.add_argument(\"-dn\", \"--dropNa\", help=\"exclude every row containing an null, invalid or infinite value. Solves the 'Input contains NaN, infinity or a value too large for dtype('float64')' issue\", action=\"store_true\")\r\n parser.add_argument(\"-v\", \"--verbose\", help=\"enable verbose and show graph as they get generated while still saving them\", action=\"store_true\")\r\n args = parser.parse_args()\r\n\r\n ## ------- ARGUMENTS ------- ## \r\n inputFile = args.inputFile\r\n outputFile = args.outputFile\r\n labelColIndex = np.abs(int(args.labelCol))\r\n varThreshold = 0.8 if not args.varThreshold else args.varThreshold\r\n explainedVarPath = args.explainedVar\r\n projectedDataPath = args.projectedData\r\n projectionMatPath = args.projectionMat\r\n dropNa = args.dropNa\r\n verbose = args.verbose\r\n\r\n ## ------- INPUTS ------- ##\r\n data = pd.read_csv(inputFile)\r\n if dropNa: # Drop every lines containing a NaN val, default: disabled\r\n data.dropna(inplace=True)\r\n # Drop empty lines at file-end\r\n data.dropna(how=\"all\", inplace=True)\r\n # Array containing each row's label (as string)\r\n strLabelVect = data.ix[:,labelColIndex].values\r\n # Transform the labels to integer (starting from 1)\r\n enc = LabelEncoder()\r\n labelEncoder = enc.fit(strLabelVect)\r\n labelVect = labelEncoder.transform(strLabelVect) + 1\r\n \r\n # Ex : labelDict = {1: 'class 1', 2: 'class 2', 3: 'class 3' ... etc }\r\n labelDict = dict()\r\n for key, val in zip(labelVect, strLabelVect):\r\n if not key in labelDict:\r\n labelDict[key] = val\r\n if verbose:\r\n print(\"Identified classes: \", labelDict)\r\n\r\n # Only keep rows with numerical data (float or int)\r\n data.drop(data.columns[[labelColIndex]], axis=1, inplace=True) # Remove label's col\r\n numData = data.ix[:,:]._get_numeric_data()\r\n if verbose:\r\n print(\"Numerical data from input:\\n\", numData.head(), \"\\n...\")\r\n # Convert from pandas dataframe to numpy array\r\n numData = numData.values\r\n ## COMPUTE D-DIMENSIONAL MEAN VECTOR ##\r\n # This vector contains the mean vector of each class\r\n # The mean vector is the mean of each feature of this class\r\n meanVects = []\r\n for label in set(labelVect):\r\n meanVects.append(np.mean(numData[labelVect==label], axis=0))\r\n if verbose:\r\n print(\"Mean vect class {}: \\n{}\".format(label, meanVects[label-1]))\r\n\r\n ## COMPUTE SCATTER MATRICES ##\r\n # -- Within-class scatter matrix (called sW)\r\n featureCount = numData.shape[1]\r\n sW = np.zeros((featureCount, featureCount))\r\n for label, meanVect in zip(set(labelVect), meanVects):\r\n classScatterMat = np.zeros_like(sW) # scatter matrix for class\r\n for row in numData[labelVect == label]:\r\n row = row.reshape(row.shape[0], 1)\r\n meanVect = meanVect.reshape(meanVect.shape[0], 1) # make col vect\r\n classScatterMat += (row-meanVect).dot((row-meanVect).T)\r\n sW += classScatterMat\r\n if verbose:\r\n print(\"Within-class scatter matrix: \\n\", sW)\r\n\r\n # -- Between-class scatter matrix (called sB)\r\n totalMean = np.mean(numData, axis=0)\r\n sB = np.zeros((featureCount, featureCount))\r\n for i, meanVect in enumerate(meanVects):\r\n n = numData[labelVect == i+1, :].shape[0]\r\n meanVect = np.array(meanVect) # avoid strange warning\r\n meanVect = meanVect.reshape(meanVect.shape[0], 1) # make col vect\r\n totalMean = np.array(totalMean) # avoid strange warning\r\n totalMean = totalMean.reshape(totalMean.shape[0], 1) # make col vect\r\n sB += n * (meanVect - totalMean).dot((meanVect - totalMean).T)\r\n\r\n if verbose:\r\n print(\"Between-class scatter matrix: \\n\", sB)\r\n\r\n ## CORE OF LDA ##\r\n eigVals, eigVects = np.linalg.eig(np.linalg.inv(sW).dot(sB))\r\n if verbose:\r\n for i in range(len(eigVals)):\r\n eigVect = eigVects[:,i].reshape(eigVects.shape[0], 1) # make col vect\r\n if eigVect.shape[0] < 7:\r\n print(\"Eigen vect {}: \\n{}\".format(i+1, eigVect.real))\r\n print(\"Eigen val {}: \\n{:.2e}\".format(i+1, eigVals[i].real))\r\n if i > 7:\r\n break\r\n # Checking if eigen vectors/values are alright\r\n print(\"Eigen Vects should be valid solution of sW^(-1)*sB*EigVect = EigVal*EigVect, checking...\", end=\"\\t\")\r\n for i in range(len(eigVals)):\r\n eigVect = eigVects[:,i].reshape(eigVects.shape[0], 1) # make col vect\r\n np.testing.assert_array_almost_equal(np.linalg.inv(sW).dot(sB).dot(eigVect).real,\r\n (eigVals[i] * eigVect).real,\r\n decimal=6,\r\n err_msg=\"Strange, the eigen vectors/values are wrong ?! This often occurs when the values are more than e+10, since numpy checks for differences in decimals regardless of the scale\",\r\n verbose=True)\r\n print(\"... Success !\")\r\n\r\n ## ------- SORT EIGEN VECTS BY EIGEN VALS ------- ##\r\n # \"The eigenvectors with the lowest eigenvalues bear the least information about the distribution of the data\"\r\n # Let's drop 'em\r\n\r\n # List of (eigVal, eigVect) tuples\r\n eigPairs = [(np.abs(eigVals[i]), eigVects[:,i]) for i in range(len(eigVals))]\r\n # Sort it\r\n eigPairs.sort(key=lambda x: x[0], reverse=True)\r\n if verbose and len(eigPairs) < 8:\r\n print(\"List of eigen vals in descending order:\", [i[0] for i in eigPairs])\r\n\r\n ## ------- EXPLAINED VARIANCE ------- ##\r\n eigSum = sum(eigVals)\r\n explnVar = [(i/eigSum)*100 for i in sorted(eigVals, reverse=True)]\r\n cumulativeExplnVar = np.cumsum(explnVar)\r\n if verbose:\r\n for i,j in enumerate(eigPairs):\r\n print(\"Eigen value {0:}: {1:.2%}\".format(i+1, (j[0]/eigSum).real))\r\n if i > 7:\r\n break\r\n ## ------- GRAPH EXPLAINED VAR ------- ##\r\n # Graph of the explained variance compared to the cumulative\r\n if not explainedVarPath is None:\r\n with plt.style.context(\"seaborn-whitegrid\"):\r\n plt.figure(figsize=(9, len(eigVals)))\r\n plt.bar(range(len(eigVals)), explnVar, alpha=0.5, align=\"center\", label=\"individual explained variance\")\r\n plt.step(range(len(eigVals)), cumulativeExplnVar, where=\"mid\", label=\"cumulative explained variance\")\r\n plt.ylabel(\"Explained variance ratio\")\r\n plt.xlabel(\"Eigen vects\")\r\n plt.legend(loc=\"best\")\r\n plt.tight_layout()\r\n if verbose or explainedVarPath == \"-\":\r\n plt.show()\r\n if explainedVarPath != \"-\":\r\n print(\"Explained Variance saved under : {}\".format(explainedVarPath))\r\n plt.savefig(explainedVarPath)\r\n\r\n ## ------- PROJECTION MATRIX ------- ##\r\n amountOfEigVectsToKeep = 0\r\n sortedEigVals = sorted(eigVals, reverse=True)\r\n while sum([sortedEigVals[i]/eigSum for i in range(amountOfEigVectsToKeep)]) < varThreshold:\r\n amountOfEigVectsToKeep += 1\r\n if verbose:\r\n varConserved = sum([sortedEigVals[i]/eigSum for i in range(amountOfEigVectsToKeep)]).real\r\n print(\"Amount of eigen vectors to keep to keep >={0:.2%} of information is {1:}, keeping {2:.2%} variance\".format(varThreshold, amountOfEigVectsToKeep, varConserved))\r\n \r\n # Create the projection matrix using the minimum amount of eigen vects to keep to reach the threshold\r\n eigVectsForLDA = []\r\n for i in range(amountOfEigVectsToKeep):\r\n eigVectLen = len(eigPairs[i][1])\r\n # Tilt the eig vects on their side to get vectors and not lists\r\n eigVectsForLDA.append(eigPairs[i][1].reshape(eigVectLen, 1))\r\n # Combine each eig vects horizontally to get a numberOfInputFeatures x amountOfEigVectsToKeep projection matrix\r\n # in which the data dimension is optimally reduced\r\n matW = np.hstack(tuple(eigVectsForLDA)).real\r\n if verbose:\r\n print(\"Projection Matrix (matrix W):\\n\", matW)\r\n \r\n ## ------- SAVE PROJECTION MATRIX ------- ##\r\n if not projectionMatPath is None:\r\n print(\"Projection Matrix (matrix W) saved under: {} in Numpy binary format (.npy)\".format(projectionMatPath))\r\n np.save(projectionMatPath, matW)\r\n\r\n ## PROJECT DATA ONTO NEW SUBSPACE ##\r\n Y = numData.dot(matW).real\r\n print(\"Input dimensions: {0:}\\nOutput dimensions: {1:}\\nReduction: {2:.2%}\".format(numData.shape[1], Y.shape[1], 1-Y.shape[1]/numData.shape[1]))\r\n assert Y.shape == (numData.shape[0], amountOfEigVectsToKeep), \"The matrix is not {}x{} dimensional !!\".format(numData.shape[0], amountOfEigVectsToKeep)\r\n ## ------- GRAPH PROJECTION DATA ------- ##\r\n # Show a representation in 2D\r\n if not projectedDataPath is None and Y.shape[1] >= 2:\r\n with plt.style.context(\"seaborn-whitegrid\"):\r\n plt.figure(figsize=(40, 20))\r\n ax = plt.subplot(111)\r\n # Generate unique label colour\r\n dicoLabelColor = dict()\r\n listColors = ['b', 'g', 'r', 'c', 'm', 'y', 'k']\r\n for i in range(max(labelVect)):\r\n dicoLabelColor[i+1] = listColors[(i+1)%len(listColors)]\r\n\r\n Y_round = np.around(Y, decimals=2)\r\n for lab in set(labelVect):\r\n # Col 0 is first PC, col 1 is second PC\r\n x = Y_round[lab==labelVect, 0]\r\n y = Y_round[lab==labelVect, 1]\r\n plt.scatter(x, y, alpha=0.5, label=labelDict[lab], c=dicoLabelColor[lab], marker=\".\", s=500)\r\n plt.xlabel(\"LD 1\")\r\n plt.ylabel(\"LD 2\")\r\n legend = plt.legend(loc=\"upper right\", fancybox=True)\r\n legend.get_frame().set_alpha(0.5)\r\n plt.title(\"LDA: {} projection onto the first 2 linear discriminant\".format(inputFile))\r\n # hide axis ticks\r\n plt.tick_params(axis='both', which=\"both\", bottom=\"off\", top=\"off\", labelbottom=\"on\", left=\"off\", right=\"off\", labelleft=\"on\")\r\n # remove axis spines\r\n ax.spines[\"top\"].set_visible(False) \r\n ax.spines[\"right\"].set_visible(False)\r\n ax.spines[\"bottom\"].set_visible(False)\r\n ax.spines[\"left\"].set_visible(False) \r\n plt.grid()\r\n plt.tight_layout()\r\n if verbose or projectedDataPath == \"-\":\r\n plt.show()\r\n if projectedDataPath != \"-\":\r\n print(\"Projection in 2D using the new LD axis saved under : {}\".format(projectedDataPath))\r\n plt.savefig(projectedDataPath)\r\n\r\n columns = [\"LD{}\".format(i+1) for i in range(Y.shape[1])]\r\n columns.append(\"class\")\r\n labelCol = np.array([labelDict[label] for label in labelVect]).reshape(labelVect.shape[0], 1)\r\n Y = np.append(Y, labelCol, axis=1)\r\n Y = pd.DataFrame(Y, columns=columns)\r\n Y.to_csv(outputFile, sep=\",\", encoding=\"utf-8\", index=False)\r\n print(\"Projected data saved under: {}\".format(outputFile))\r\nif __name__ == \"__main__\":\r\n main(sys.argv[1:])","sub_path":"scripts/automatic_lda.py","file_name":"automatic_lda.py","file_ext":"py","file_size_in_byte":13224,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"447668642","text":"\"\"\" Sort Demonstrator/Structures\r\n Author: Victoria Jurkfitz Kessler Thibes\r\n Date: Dec. 02, 2016\r\n Course: CST8333 - Programming Language Research Project\r\n\"\"\"\r\n\r\nfrom copy import deepcopy\r\n\r\nclass LinkedList:\r\n \"\"\" Implements a linked list with one node and a link to another \"\"\"\r\n\r\n node_data = ''\r\n # Data for this node\r\n\r\n next_node = ''\r\n # Next node in the list\r\n\r\n def __init__(self, data):\r\n \"\"\" Constructor \"\"\"\r\n if len(data) > 0:\r\n data_copy = deepcopy(data)\r\n\r\n # Takes first item from array\r\n self.node_data = data_copy.pop(0)\r\n\r\n # Next node is constructed until original array is empty\r\n if len(data_copy) > 0:\r\n self.next_node = LinkedList(data_copy)\r\n\r\n def get_string(self):\r\n \"\"\"\" Used to print results \"\"\"\r\n\r\n if self.next_node != '':\r\n return str(self.node_data) + \" | Next node: \" + str(self.next_node.node_data) + \"\\n\" + self.next_node.get_string()\r\n\r\n else:\r\n return str(self.node_data) + \" | Next node: empty\"\r\n\r\n def insert(self,data):\r\n\r\n if self.next_node == '':\r\n self.next_node = LinkedList(data)\r\n\r\n else:\r\n self.next_node.insert(data)\r\n\r\n\r\nclass BinaryTree:\r\n \"\"\" Implements a binary tree \"\"\"\r\n\r\n parent = ''\r\n # Parent node\r\n\r\n node_data = ''\r\n # Current node's data\r\n\r\n left = ''\r\n # Children to the left\r\n\r\n right = ''\r\n # Children to the right\r\n\r\n def __init__(self,data,parent):\r\n \"\"\" Constructor \"\"\"\r\n\r\n self.parent = parent\r\n\r\n if len(data) > 0:\r\n # Takes the middle item from the array\r\n mid = data.pop(int(len(data)/2))\r\n self.node_data = mid\r\n\r\n if len(data) > 0:\r\n # Next half of the array is linked to the right\r\n self.right = BinaryTree(data[int(len(data)/2):],self)\r\n if self.right.node_data == '':\r\n self.right = ''\r\n\r\n if len(data) > 0:\r\n # Previous half of the array is linked to the left\r\n self.left = BinaryTree(data[:int(len(data)/2)],self)\r\n if self.left.node_data == '':\r\n self.left = ''\r\n\r\n def get_string(self):\r\n \"\"\" Called to show results \"\"\"\r\n\r\n if self.right != '' and self.right.node_data == '':\r\n self.right = ''\r\n\r\n if self.left != '' and self.left.node_data == '':\r\n self.left = ''\r\n\r\n return_string = str(self.node_data) + \"\\tParent: \"\r\n\r\n if self.parent != '':\r\n return_string += str(self.parent.node_data)\r\n\r\n else:\r\n return_string += \" None\"\r\n\r\n return_string += \"\\tChildren: \"\r\n\r\n if self.right != '' and self.left != '':\r\n children_string = str(self.left.node_data) + \",\" + str(self.right.node_data) + \"\\t\\n\"\r\n\r\n return return_string + children_string + self.left.get_string() + \"\\n\" + self.right.get_string()\r\n\r\n elif self.right != '':\r\n children_string = str(self.right.node_data) + \"\\t\\n\"\r\n\r\n return return_string + children_string + self.right.get_string()\r\n\r\n elif self.left != '':\r\n children_string = str(self.left.node_data) + \"\\t\\n\"\r\n\r\n return return_string + children_string + self.left.get_string()\r\n\r\n else:\r\n return return_string + \"None\"\r\n\r\n\r\nclass Stack:\r\n \"\"\"\" Implements a stack \"\"\"\r\n\r\n node_data = ''\r\n # Current node's data\r\n\r\n next_node = ''\r\n # Next item on the stack\r\n\r\n def __init__(self,data):\r\n \"\"\" Constructor \"\"\"\r\n if len(data) > 0:\r\n self.node_data = data.pop()\r\n\r\n if len(data) > 0:\r\n self.next_node = Stack(data)\r\n\r\n def pop(self):\r\n \"\"\"\" Takes the first node out of the stack \"\"\"\r\n popped = self.node_data\r\n\r\n if self.next_node != '':\r\n self.node_data = self.next_node.node_data\r\n self.next_node = self.next_node.next_node\r\n\r\n else:\r\n self.node_data = ''\r\n\r\n return popped\r\n\r\n def push(self,data):\r\n \"\"\"\" Puts a new node on top of the stack \"\"\"\r\n if self.next_node != '':\r\n self.next_node.push(self.node_data)\r\n self.node_data = data\r\n\r\n else:\r\n self.next_node = Stack([self.node_data])\r\n self.node_data = data\r\n\r\n def get_string(self):\r\n \"\"\"\" Called to print results \"\"\"\r\n if self.next_node != '' and self.next_node.node_data == '':\r\n self.next_node = ''\r\n\r\n if self.next_node != '':\r\n return \"\\t\" + str(self.node_data) + \"\\n\" + self.next_node.get_string()\r\n\r\n else:\r\n # Last node is at the bottom\r\n return \"Bottom: \" + str(self.node_data)\r\n\r\n","sub_path":"Python_Sorter/structures.py","file_name":"structures.py","file_ext":"py","file_size_in_byte":4858,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"458546517","text":"# Dawei Li, 001022014\n\nimport re\nfrom .hash_table import MyHashTable\nfrom datetime import datetime\nfrom .read_xlsx import XlsxReader\n\n\n# Create two global variables to hold package and distance data\n# to be used across modules.\npackage_data = None\ndistance_data = None\ntotal_distance = 0\n\n\ndef init():\n read_package_data()\n read_distance_data()\n\n\ndef read_package_data():\n \"\"\"Read package data into a hash table\"\"\"\n # Create an empty hash table to store package data\n # Key: package ID\n # Value: a hash table holding package details\n global package_data\n package_reader = XlsxReader(file_loc=\"./data/WGUPS Package File.xlsx\")\n rawdata = package_reader.read_data_sheet(0)\n package_data = MyHashTable()\n for row in range(8, rawdata.nrows):\n # Package id is the key\n id = rawdata.cell_value(row, 0)\n # Use a hash table to store package details\n hash_table = MyHashTable()\n # Package 9 has a wrong address.\n if row == 16:\n hash_table.add(\"address\", \"410 S State St\")\n else:\n hash_table.add(\"address\", rawdata.cell_value(row, 1))\n hash_table.add(\"city\", rawdata.cell_value(row, 2))\n hash_table.add(\"state\", rawdata.cell_value(row, 3))\n hash_table.add(\"zip\", rawdata.cell_value(row, 4))\n hash_table.add(\"deadline\", deadline(rawdata.cell_value(row, 5)))\n hash_table.add(\"weight\", rawdata.cell_value(row, 6))\n hash_table.add(\"notes\", rawdata.cell_value(row, 7))\n # Add and initialize two additional attributes\n hash_table.add(\"status\", \"idle\")\n hash_table.add(\"delivery time\", None)\n # Add package id (key) and details (value) to the hash table\n package_data.add(id, hash_table)\n\n\ndef deadline(time):\n \"\"\"Convert the deadline data in the EOD column in the package excel file to datetime format\"\"\"\n # If time is \"EOD\", convert it to 17:00\n if time == \"EOD\":\n return datetime(2019, 11, 22, 17)\n time = int(time * 24 * 3600)\n hour = time // 3600\n minute = (time % 3600) // 60\n return datetime(2019, 11, 22, hour, minute)\n\n\ndef read_distance_data():\n \"\"\"Read distance data into a hash table\"\"\"\n # Create an empty hash table to store distance data\n # Key: a tuple of two locations\n # Value: distance between the two locations in the key\n global distance_data\n distance_reader = XlsxReader(file_loc=\"./data/WGUPS Distance Table.xlsx\")\n rawdata = distance_reader.read_data_sheet(0)\n distance_data = MyHashTable()\n # First extract only street number and street name\n # from the full address. This combination matches the \n # address data in the packages data\n addresses = []\n col = 0\n for row in range(8, rawdata.nrows):\n value = rawdata.cell_value(row, col)\n value = re.split(r'\\s*\\n\\s*', value)[1]\n value = re.split(r'\\,', value)[0]\n # Correct a mismatch between the distance file and the package file\n if value == \"3575 W Valley Central Sta bus Loop\":\n value = \"3575 W Valley Central Station bus Loop\"\n addresses.append(value)\n # Extract distance data into the hash table\n for row in range(8, rawdata.nrows):\n for i in range(2, row-7):\n address_tuple = (addresses[row-8], addresses[i-2])\n distance = rawdata.cell_value(row, i)\n distance_data.add(address_tuple, distance)\n for j in range(row+1, rawdata.nrows):\n address_tuple = (addresses[j-8], addresses[row-8])\n distance = rawdata.cell_value(j, row-6)\n distance_data.add(address_tuple, distance)\n # Two packages may be delivered to the same address.\n # So set the distance of a location to itself as 0.\n address_tuple = (addresses[row-8], addresses[row-8])\n distance_data.add(address_tuple, 0)\n\n\ndef all_status():\n \"\"\"Return a hash table of (id, status) for all packages.\"\"\"\n global package_data\n all_status = MyHashTable()\n packages_id = package_data.all_keys()\n for id in packages_id:\n status = package_data.get(id).get(\"status\")\n delivery_time = package_data.get(id).get(\"delivery time\")\n all_status.add(id, (status, delivery_time))\n print(id, \" \", (status, delivery_time))\n return all_status\n","sub_path":"packages/data.py","file_name":"data.py","file_ext":"py","file_size_in_byte":4298,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"336803181","text":"#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\n#\n# Insightly API Test Script\n# Brian McConnell \n#\n# This Python module implements a test suite against the API. This allows users to create\n# whatever test cases they want in addition to the standard set of tests we run against\n# API endpoints.\n#\n# USAGE:\n#\n# i = Insightly()\n# i.test()\n#\n# NOTE:\n#\n# If you run the test suite, we recommend running it against a test instance with dummy data,\n# as there is the potential for data loss. The test suite is primarily intended for use in\n# QA testing. \n\nfrom insightly import Insightly\n\nget_endpoints = ['activitysets', 'contacts', 'countries', 'currencies', 'customfieldgroups', 'customfields', 'emails', 'filecategories','follows',\n 'instance','leads','leadsources','leadstatuses','notes','opportunities','opportunitycategories','opportunitystatereasons',\n 'organisations','pipelines','pipelinestages','projectcategories','projects','relationships','taskcategories','tasks','teammembers','teams','users']\n\ndef test(apikey='', version='2.2', dev=None):\n i = Insightly(apikey=apikey, version=version, dev=dev, test=True)\n i.tests_run = 0\n i.tests_passed = 0\n # test activity sets\n activity_sets = i.read('activitysets')\n if activity_sets is not None:\n activity_set_id = activity_sets[0]['ACTIVITYSET_ID']\n activity_set = i.read('activitysets', id=activity_set_id)\n # test contacts\n contacts = i.read('contacts')\n if contacts is not None:\n contact_id = contacts[0]['CONTACT_ID']\n contact = i.read('contacts', id=contact_id)\n contact = {'FIRST_NAME':'Test','LAST_NAME':'ミスターマコーネル'}\n contact = i.create('contacts', contact)\n if contact is not None:\n contact['FIRST_NAME'] = 'Foo'\n contact = i.update('contacts', contact)\n contact_id = contact['CONTACT_ID']\n i.upload_image('contacts', contact_id, 'apollo17.jpg')\n address = i.create_child('contacts', contact_id, 'addresses', {'ADDRESS_TYPE':'HOME','CITY':'San Francisco', 'STATE':'CA', 'COUNTRY':'United States'})\n if address is not None:\n address_id = address['ADDRESS_ID']\n i.delete('contacts',contact_id,sub_type='addresses',sub_type_id=address_id)\n contactinfo = i.create_child('contacts', contact_id, 'contactinfos', {'TYPE':'EMAIL','SUBTYPE':'Home','DETAIL':'foo@bar.com'})\n if contactinfo is not None:\n contact_info_id = contactinfo['CONTACT_INFO_ID']\n i.delete('contacts', contact_id, sub_type='contactinfos', sub_type_id = contact_info_id)\n contact_date = {'OCCASION_NAME':'Birthday','OCCASION_DATE':'2016-05-02T12:00:00Z'}\n contact_date = i.create_child('contacts', contact_id, 'dates', contact_date)\n if contact_date is not None:\n date_id = contact_date['DATE_ID']\n i.delete('contacts', contact_id, sub_type='dates', sub_type_id=date_id)\n tag = {'TAG_NAME':'foo'}\n i.create_child('contacts', contact_id, 'tags', tag)\n i.delete('contacts', contact_id,sub_type='tags', sub_type_id = 'foo')\n note = {'TITLE':'Test', 'BODY':'This is the body'}\n note = i.create_child('contacts', contact_id, 'notes', note)\n events = i.read('contacts', contact_id, sub_type='events')\n file_attachments = i.read('contacts', contact_id, sub_type='fileattachments')\n i.upload('contacts', contact_id, 'apollo17.jpg')\n i.create_child('contacts', contact_id, 'follow', {})\n i.delete('contacts', contact_id, sub_type='follow')\n tasks = i.read('contacts', contact_id, sub_type='tasks')\n emails = i.read('contacts', contact_id, sub_type='emails')\n i.delete('contacts', contact_id)\n countries = i.read('countries')\n currencies = i.read('currencies')\n custom_field_groups = i.read('customfieldgroups')\n custom_fields = i.read('customfields')\n if custom_fields is not None:\n custom_field_id = custom_fields[0]['CUSTOM_FIELD_ID']\n custom_field = i.read('customfields', custom_field_id)\n emails = i.read('emails')\n if emails is not None:\n email_id = emails[0]['EMAIL_ID']\n email = i.read('emails', email_id)\n i.create_child('emails', email_id, 'tags', {'TAG_NAME':'foo'})\n i.delete('emails', email_id, sub_type='tags', sub_type_id = 'foo')\n comments = i.read('emails', email_id, sub_type='/comments')\n events = i.read('events')\n file_categories = i.read('filecategories')\n if file_categories is not None:\n file_category_id = file_categories[0]['CATEGORY_ID']\n file_category = i.read('filecategories', file_category_id)\n follows = i.read('follows') \n instance = i.read('instance')\n leads = i.read('leads')\n if leads is not None:\n lead_id = leads[0]['LEAD_ID']\n lead = i.read('leads', lead_id)\n lead = i.create('leads', {'FIRST_NAME':'foo', 'LAST_NAME':'bar'})\n if lead is not None:\n lead_id = lead['LEAD_ID']\n lead['FIRST_NAME']='foozle'\n lead = i.update('leads', lead)\n i.upload_image('leads', lead_id, 'apollo17.jpg')\n i.delete('leads', lead_id, sub_type='image')\n i.create_child('leads', lead_id, 'tags', {'TAG_NAME':'foo'})\n i.delete('leads', lead_id, sub_type='tags', sub_type_id='foo')\n i.create_child('leads', lead_id, 'follow', {})\n i.delete('leads', lead_id, sub_type='follow')\n notes = i.read('leads', lead_id, sub_type='notes')\n i.create_child('leads', lead_id, 'notes', {'TITLE':'foo','BODY':'This is the body'})\n events = i.read('leads', lead_id, sub_type='events')\n file_attachments = i.read('leads', lead_id, sub_type='fileattachments')\n i.upload('leads', lead_id, 'apollo17.jpg')\n tasks = i.read('leads', lead_id, sub_type='tasks')\n emails = i.read('leads', lead_id, sub_type='emails')\n i.delete('leads', lead_id)\n leadsources = i.read('leadsources')\n lead_source = i.create('leadsources', {'LEAD_SOURCE':'Foozle Barzle'})\n if lead_source is not None:\n lead_source['LEAD_SOURCE'] = 'Barzle Foozle'\n lead_source_id = lead_source['LEAD_SOURCE_ID']\n lead_source = i.update('leadsources', lead_source)\n i.delete('leadsources', lead_source_id)\n lead_statuses = i.read('leadstatuses')\n lead_status = i.create('leadstatuses', {'LEAD_STATUS':'Foozle'})\n if lead_status is not None:\n lead_status_id = lead_status['LEAD_STATUS_ID']\n lead_status['LEAD_STATUS']='Barzle'\n lead_status['STATUS_TYPE']=1\n lead_status = i.update('leadstatuses', lead_status)\n i.delete('leadstatuses', lead_status_id)\n notes = i.read('notes')\n if notes is not None:\n note_id = notes[0]['NOTE_ID']\n note = i.read('notes', note_id)\n file_attachments = i.read('notes', note_id, sub_type='fileattachments')\n i.create_child('notes', note_id, 'follow', {})\n i.delete('notes', note_id, sub_type='follow')\n comments = i.read('notes', note_id, sub_type='comments')\n opportunities = i.read('opportunities')\n if opportunities is not None:\n opportunity_id = opportunities[0]['OPPORTUNITY_ID']\n opportunity = i.read('opportunities', opportunity_id)\n opportunity = i.create('opportunities', {'OPPORTUNITY_NAME':'Foozle','OPPORTUNITY_STATE':'Open'})\n if opportunity is not None:\n opportunity['OPPORTUNITY_NAME'] = 'Barzle'\n opportunity_id = opportunity['OPPORTUNITY_ID']\n opportunity = i.update('opportunities', opportunity)\n i.upload_image('opportunities', opportunity_id, 'apollo17.jpg')\n i.delete('opportunities', opportunity_id, 'image')\n i.create_child('opportunities', opportunity_id, 'tags', {'TAG_NAME':'foo'})\n i.delete('opportunities', opportunity_id, sub_type='tags', sub_type_id='foo')\n notes = i.read('opportunities', opportunity_id, sub_type='notes')\n i.create_child('opportunities', opportunity_id, 'notes', {'TITLE':'foo','BODY':'This is a test'})\n events = i.read('opportunities', opportunity_id, sub_type='events')\n file_attachments = i.read('opportunities', opportunity_id, sub_type='fileattachments')\n i.upload('opportunities', opportunity_id, 'apollo17.jpg')\n i.create_child('opportunities', opportunity_id, 'follow', {})\n i.delete('opportunities', opportunity_id, sub_type='follow')\n # add call to update opportunity state/state reason here\n opportunity_state_reasons = i.read('opportunities', opportunity_id, sub_type='statehistory')\n tasks = i.read('opportunities', opportunity_id, sub_type='tasks')\n emails = i.read('opportunities', opportunity_id, sub_type='emails')\n email = i.read('opportunities', opportunity_id, sub_type='linkemailaddress')\n i.delete('opportunities', opportunity_id, sub_type='pipeline')\n i.delete('opportunities', opportunity_id)\n opportunity_categories = i.read('opportunitycategories')\n opportunity_state_reasons = i.read('opportunitystatereasons')\n \n organisations = i.read('organisations')\n if organisations is not None:\n organisation_id = organisations[0]['ORGANISATION_ID']\n organisation = i.read('organisations', organisation_id)\n organisation = i.create('organisations', {'ORGANISATION_NAME':'Foo Corporation'})\n if organisation is not None:\n organisation_id = organisation['ORGANISATION_ID']\n organisation['ORGANISATION_NAME']='Bar Corporation'\n organisation = i.update('organisations', organisation)\n address = i.create_child('organisations', organisation_id, 'addresses', {'CITY':'San Francisco', 'STATE':'CA', 'COUNTRY':'United States', 'ADDRESS_TYPE':'Work'})\n if address is not None:\n address_id = address['ADDRESS_ID']\n i.delete('organisations', organisation_id, sub_type='addresses', sub_type_id=address_id)\n contactinfo = i.create_child('organisations', organisation_id, 'contactinfos', {'TYPE':'EMAIL','SUBTYPE':'Home','DETAIL':'foo@bar.com'})\n if contactinfo is not None:\n contact_info_id = contactinfo['CONTACT_INFO_ID']\n i.delete('organisations', organisation_id, sub_type='contactinfos', sub_type_id=contact_info_id)\n odate = i.create_child('organisations', organisation_id, 'dates', {'OCCASION_NAME':'Birthday','OCCASION_DATE':'2016-05-02T12:00:00Z'})\n if odate is not None:\n date_id = odate['DATE_ID']\n i.delete('organisations', organisation_id, sub_type='dates', sub_type_id=date_id)\n i.create_child('organisations', organisation_id, 'tags', {'TAG_NAME':'foo'})\n i.delete('organisations',organisation_id, sub_type='tags', sub_type_id='foo')\n i.upload_image('organisations', organisation_id, 'apollo17.jpg')\n i.delete('organisations', organisation_id, sub_type='image')\n notes = i.read('organisations', organisation_id, sub_type='notes')\n i.create_child('organisations', organisation_id, 'notes', {'TITLE':'Title','BODY':'This is the body'})\n events = i.read('organisations', organisation_id, sub_type='events')\n file_attachments = i.read('organisations', organisation_id, sub_type='fileattachments')\n i.upload('organisations', organisation_id, 'apollo17.jpg')\n i.create_child('organisations', organisation_id, 'follow', {})\n i.delete('organisations', organisation_id, sub_type='follow')\n emails = i.read('organisations', organisation_id, sub_type='emails')\n tasks = i.read('organisations', organisation_id, sub_type='tasks')\n i.delete('organisations', organisation_id)\n \n pipelines = i.read('pipelines')\n if pipelines is not None:\n pipeline_id = pipelines[0]['PIPELINE_ID']\n pipeline = i.read('pipelines', pipeline_id)\n \n pipeline_stages = i.read('pipelinestages')\n if pipeline_stages is not None:\n stage_id = pipeline_stages[0]['STAGE_ID']\n pipeline_stage = i.read('pipelinestages', stage_id)\n \n projects = i.read('projects')\n if projects is not None:\n project_id = projects[0]['PROJECT_ID']\n project = i.read('projects', project_id)\n project = i.create('projects', {'PROJECT_NAME':'Foo Corporation','STATUS':'Not Started'})\n if project is not None:\n project_id = project['PROJECT_ID']\n project['PROJECT_NAME']='Barzle Corporation'\n project = i.update('projects', project)\n i.upload_image('projects', project_id, 'apollo17.jpg')\n i.delete('projects', project_id, sub_type='image')\n i.create_child('projects', project_id, 'tags', {'TAG_NAME':'foo'})\n i.delete('projects', project_id, sub_type='tags', sub_type_id='foo')\n notes = i.read('projects', project_id, sub_type='notes')\n i.create_child('projects', project_id, 'notes', {'TITLE':'Foo','BODY':'This is the body'})\n events = i.read('projects', project_id, sub_type='events')\n file_attachments = i.read('projects', project_id, sub_type='fileattachments')\n i.create_child('projects', project_id, 'follow', {})\n i.delete('projects', project_id, sub_type='follow')\n milestones = i.read('projects', project_id, sub_type='milestones')\n tasks = i.read('projects', project_id, sub_type='tasks')\n emails = i.read('projects', project_id, sub_type='emails')\n email = i.read('projects', project_id, sub_type='linkemailaddress')\n i.delete('projects', project_id, sub_type='pipeline')\n i.delete('projects', project_id)\n relationships = i.read('relationships')\n tags = i.read('tags?record_type=contacts')\n task_categories = i.read('taskcategories')\n tasks = i.read('tasks')\n if tasks is not None:\n task_id = tasks[0]['TASK_ID']\n task = i.read('tasks', task_id)\n users = i.read('users')\n if users is not None:\n user_id = users[0]['USER_ID']\n user = i.read('users', user_id)\n else:\n user_id = None\n me = i.read('users/me')\n if user_id is not None:\n task = i.create('tasks', {'TITLE':'Test','STATUS':'Not Started','COMPLETED':'False','PUBLICLY_VISIBLE':'True','RESPONSIBLE_USER_ID':str(user_id)})\n if task is not None:\n task['TITLE'] = task['TITLE'] + 'foo'\n task_id = task['TASK_ID']\n task = i.update('tasks', task)\n i.create_child('tasks', task_id, 'follow', {})\n i.delete('tasks', task_id, sub_type='follow')\n comments = i.read('tasks', task_id, sub_type='comments')\n i.delete('tasks', task_id)\n team_members = i.read('teammembers')\n if team_members is not None:\n team_member_id = team_members[0]['PERMISSION_ID']\n team_member = i.read('teammembers', team_member_id)\n \n teams = i.read('teams')\n if teams is not None:\n team_id = teams[0]['TEAM_ID']\n team = i.read('teams', team_id)\n team = i.create('teams',{'TEAM_NAME':'Team Foo','ANONYMOUS_TEAM':'False'})\n if team is not None:\n team_id = team['TEAM_ID']\n team['TEAM_NAME'] = 'Team Bar'\n team = i.update('teams', team)\n i.delete('teams', team_id)\n \n #\n # Next, create a few objects, add links between them, and then delete them\n #\n \n contact_id = None\n organisation_id = None\n project_id = None\n opportunity_id = None\n \n contact = i.create('contacts',{'FIRST_NAME':'Foo','LAST_NAME':'Bar'})\n if contact is not None:\n contact_id = contact['CONTACT_ID']\n organisation = i.create('organisations',{'ORGANISATION_NAME':'Foo Corporation'})\n if organisation is not None:\n organisation_id = organisation['ORGANISATION_ID']\n project = i.create('projects',{'PROJECT_NAME':'Foo Corporation','Status':'Not Started'})\n if project is not None:\n project_id = project['PROJECT_ID']\n opportunity = i.create('opportunities',{'OPPORTUNITY_NAME':'Foo Corporation','OPPORTUNITY_STATE':'Open'})\n if opportunity is not None:\n opportunity_id = opportunity['OPPORTUNITY_ID']\n \n contact = i.create_child('contacts', contact_id, 'links', {'ORGANISATION_ID':organisation_id})\n organisation = i.create_child('organisations', organisation_id, 'links', {'PROJECT_ID':project_id})\n project = i.create_child('projects', project_id, 'links', {'ORGANISATION_ID':organisation_id})\n opportunity = i.create_child('opportunities', opportunity_id, 'links', {'CONTACT_ID':contact_id})\n \n if contact_id is not None:\n i.delete('contacts', contact_id)\n if organisation_id is not None:\n i.delete('organisations', organisation_id)\n if project_id is not None:\n i.delete('projects', project_id)\n if opportunity_id is not None:\n i.delete('opportunities', opportunity_id)\n \n print(str(i.tests_passed) + ' out of ' + str(i.tests_run) + ' passed')\n if len(i.test_failures) > 0:\n print ('')\n print ('Test Failures')\n for f in i.test_failures:\n print (f)","sub_path":"insightlytest.py","file_name":"insightlytest.py","file_ext":"py","file_size_in_byte":17384,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"602972039","text":"from database import db\nfrom database.models import Event, Tag, Report, User, Funded, VotedTest\nfrom datetime import datetime\nimport pytz\n\nutc = pytz.UTC\n\n\ndef add_event(data, user_creator):\n name = data.get('name')\n description = data.get('description')\n funding_start_date = datetime.strptime(data.get('funding_start_date'), \"%Y-%m-%dT%H:%M:%S.%fZ\").replace(tzinfo=utc)\n funding_end_date = datetime.strptime(data.get('funding_end_date'), \"%Y-%m-%dT%H:%M:%S.%fZ\").replace(tzinfo=utc)\n goal = data.get('goal')\n event_start_date = datetime.strptime(data.get('event_start_date'), \"%Y-%m-%dT%H:%M:%S.%fZ\").replace(tzinfo=utc)\n event_end_date = datetime.strptime(data.get('event_end_date'), \"%Y-%m-%dT%H:%M:%S.%fZ\").replace(tzinfo=utc)\n location = data.get('location')\n latitude = data.get('lat')\n if latitude is None:\n latitude = 0.000\n longitude = data.get('long')\n if longitude is None:\n longitude = 0.00\n tags = data.get('tags')\n photo = data.get('photo')\n\n if (funding_start_date < utc.localize(datetime.now())):\n raise Exception(\"The event funding date can not start in the past\")\n if (funding_end_date < utc.localize(datetime.now())):\n raise Exception(\"The event funding date can not end in the past\")\n if (funding_start_date > funding_end_date):\n raise Exception(\"Start funding date is after end funding date\")\n if (event_start_date < utc.localize(datetime.now())):\n raise Exception(\"The event can not start in the past\")\n if (event_end_date < utc.localize(datetime.now())):\n raise Exception(\"The event can not end in the past\")\n if (event_start_date > event_end_date):\n raise Exception(\"Start date is after end date\")\n\n new_event = Event(name, description, funding_start_date, funding_end_date, goal, event_start_date, event_end_date,\n location, user_creator, latitude, longitude, photo)\n\n for tag in tags:\n new_tag = Tag(tag)\n new_event.tags.append(new_tag)\n\n db.session.add(new_event)\n db.session.commit()\n\n return new_event.id\n\n\ndef update_event(id, data):\n event = Event.query.get(id)\n event.description = data.get('description')\n event.location = data.get('location')\n event.photo = data.get('photo')\n\n funding_end_date = datetime.strptime(data.get('funding_end_date'), \"%Y-%m-%dT%H:%M:%S.%fZ\").replace(tzinfo=utc)\n event_start_date = datetime.strptime(data.get('event_start_date'), \"%Y-%m-%dT%H:%M:%S.%fZ\").replace(tzinfo=utc)\n event_end_date = datetime.strptime(data.get('event_end_date'), \"%Y-%m-%dT%H:%M:%S.%fZ\").replace(tzinfo=utc)\n\n event.lat = data.get('lat')\n event.long = data.get('long')\n\n if (event_start_date > event_end_date):\n raise Exception(\"Start date is after end date\")\n\n event.event_end_date = event_end_date\n event.event_start_date = event_start_date\n event.funding_end_date = funding_end_date\n\n db.session.add(event)\n db.session.commit()\n\n\ndef delete_event(id):\n event = Event.query.get(id)\n db.session.delete(event)\n db.session.commit()\n return event\n\n\ndef add_report(data, user_id):\n event_id = data.get('event_id')\n content = data.get('content')\n\n new_report = Report(user_id, event_id, content)\n\n db.session.add(new_report)\n db.session.commit()\n\n return new_report.id\n\n\ndef delete_report(id):\n report = Report.query.get(id)\n db.session.delete(report)\n db.session.commit()\n return report\n\n\ndef watch_event(id, data, user_id):\n event = Event.query.get(id)\n user = User.query.get(user_id)\n\n if user in event.watchers:\n event.watchers.remove(user)\n else:\n event.watchers.append(user)\n\n db.session.commit()\n\n\ndef fund_event(id, data, user_id):\n event = Event.query.get(id)\n user = User.query.get(user_id)\n\n test = Funded(fund_amount=data.get('fund_amount'))\n\n test.backed = user\n test.backers = event\n\n event.backers.append(test)\n db.session.commit()\n\n\ndef vote_event(id, data, user_id):\n\n event = Event.query.get(id)\n user = User.query.get(user_id)\n\n test = VotedTest(stars=data.get('stars'))\n\n test.voted = user\n test.votes = event\n\n event.votes.append(test)\n db.session.commit()\n\n\ndef get_info_events(events):\n\n if type(events) != list:\n user = User.query.filter(User.id == events.user_creator).one()\n events.__dict__['user_creator_name'] = user.name\n return events\n\n result = []\n for event in events:\n user = User.query.filter(User.id == event.user_creator).one()\n event.__dict__['user_creator_name'] = user.name\n result.append(event)\n\n return result\n","sub_path":"api/events/logic.py","file_name":"logic.py","file_ext":"py","file_size_in_byte":4672,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"554259161","text":"import logging\n\nimport SMS.config\n\nCONF = SMS.config.CONF\n\n\ndef get_logger():\n return logging.getLogger('SMS')\n\n\ndef configure_logging():\n log_level = logging.DEBUG if CONF.log.debug else logging.INFO\n formatter = logging.Formatter(CONF.log.log_format)\n\n logger = get_logger()\n logger.setLevel(log_level)\n\n def add_handler(h):\n h.setFormatter(formatter)\n h.setLevel(log_level)\n logger.addHandler(h)\n\n if CONF.log.console_log:\n handler = logging.StreamHandler()\n add_handler(handler)\n\n if CONF.log.log_file:\n handler = logging.FileHandler(CONF.log.log_file)\n add_handler(handler)\n","sub_path":"SMS/log.py","file_name":"log.py","file_ext":"py","file_size_in_byte":652,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"383887451","text":"import argparse\nimport json\n\n\ndef parse_template(template, **template_values):\n print(\">>>>\", template.format(**template_values))\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\"--settings\")\n\n args = parser.parse_args()\n config = json.loads(args.settings)\n template = config.get(\"template\")\n template_values = config.get(\"template_values\")\n\n parse_template(template, **template_values)\n\n\n\"\"\"\n------- command ---------\n{\"template\":\"{bucket}/{prefix}/{database}/{schema}\",\"template_values\":{\"bucket\":\"eda0x7b2263-sbx-eu-west-1\",\"prefix\":\"breaking20\",\"database\":\"DTC_BI_DM\",\"schema\":\"dbo\"}}\n\npython3 templating.py --settings '{\"template\":\"{bucket}/{prefix}/{database}/{schema}\",\"template_values\":{\"bucket\":\"eda0x7b2263-sbx-eu-west-1\",\"prefix\":\"breaking20\",\"database\":\"DTC_BI_DM\",\"schema\":\"dbo\"}}'\n \n\n========== example ======\ntemplate = \"Hello, my name is {name}, today is {date} and the weather is {weather}\"\nfields = {'name': 'Michael', 'date': '21/06/2015', 'weather': 'sunny'}\ntemplate.format(**fields)\n\"\"\"\n","sub_path":"Python/playground/templating.py","file_name":"templating.py","file_ext":"py","file_size_in_byte":1071,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"10124375","text":"# - * - coding:utf8 - * - -\n###########################################\n# Author: Tinkle\n# E-mail: shutingnupt@gmail.com\n# Name: Climbing Stairs.py\n# Creation Time: 2017/7/10\n###########################################\n'''\nYou are climbing a stair case. It takes n steps to reach to the top.\n\nEach time you can either climb 1 or 2 steps. In how many distinct ways can you climb to the top?\n'''\nclass Solution(object):\n def climbStairs(self, n):\n #s[n]=s[n-1]+s[n-2]\n if n==1 or n==2:\n return n\n i=3;s=[1,2]\n while(i<=n):\n s.append(s[i-2]+s[i-3])\n i=i+1\n return s.pop()","sub_path":"DynamicProgramming/70. Climbing Stairs.py","file_name":"70. Climbing Stairs.py","file_ext":"py","file_size_in_byte":639,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"634369171","text":"#!/usr/bin/python\n\n\nfrom django.template import loader, Context\nfrom yapsy.IPlugin import IPlugin\n\nfrom server.models import PluginScriptRow\n\n\nclass ARDInfo(IPlugin):\n name = \"ARD Info\"\n\n def widget_width(self):\n return 4\n\n def plugin_type(self):\n return 'machine_detail'\n\n def get_description(self):\n return \"Apple Remote Desktop's Computer Information Fields\"\n\n def widget_content(self, page, machine=None, theid=None):\n template = loader.get_template(\n \"machine_detail_ard_info/templates/ard_info.html\")\n\n ard_info = {}\n\n for i in xrange(1, 5):\n key = 'ARD_Info_{}'.format(i)\n row = PluginScriptRow.objects.filter(\n submission__machine=machine,\n submission__plugin='ARD_Info',\n pluginscript_name=key)\n\n try:\n val = row.first().pluginscript_data\n except Exception:\n val = \"\"\n ard_info[key] = val\n\n c = Context({\n \"title\": self.get_description(),\n \"data\": ard_info,\n \"theid\": theid,\n \"page\": page})\n\n return template.render(c)\n","sub_path":"server/plugins/machine_detail_ard_info/ard_info.py","file_name":"ard_info.py","file_ext":"py","file_size_in_byte":1184,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"120805198","text":"import h5py\nimport numpy as np\nfrom progressbar import ProgressBar\n\nfrom utils import DBI, get_target_collection, get_related_list\n\nrelated_db = DBI().articles_related\narticles_db = DBI().articles\n\nvalid_index = set([doc['index'] for doc in articles_db.find({'target': False},\n {'index': 1})])\norder_valid = {index: i for i, index in enumerate(sorted(list(valid_index)))}\ntarget_index = set([doc['index'] for doc in articles_db.find({'target': True},\n {'index': 1})])\n\nprint(len(valid_index))\n\nf = h5py.File('data_0626/features_y_ndcg_1.hdf5', 'w')\npbar = ProgressBar(max_value=2000)\nfor index in target_index:\n doc = related_db.find_one({'index': index})\n related = get_related_list(doc['related'], 10, valid_index)\n arr = np.empty((len(valid_index), ))\n arr.fill(np.inf)\n for hop in range(10, 0, -1):\n slice = [order_valid[i] for i in related[hop]]\n arr[slice] = hop\n f.create_dataset(str(doc['index']), data=arr, dtype='i8', compression=\"gzip\")\n pbar.update(pbar.value + 1)\npbar.finish()\nf.flush()\nf.close()\n","sub_path":"source/generate_label_nDCG.py","file_name":"generate_label_nDCG.py","file_ext":"py","file_size_in_byte":1165,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"45268012","text":"# ChivTeams Plugin for BigBrotherBot(B3) (www.bigbrotherbot.net)\n# Copyright (C) 2015 ph03n1x\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA\n\n__version__ = '1.0.0'\n__author__ = 'ph03n1x'\n\nimport b3\nimport b3.events\nimport b3.plugin\n\nclass ChivteamsPlugin(b3.plugin.Plugin):\n requiresParsers = ['chiv']\n\n def onStartup(self):\n #register events\n self.registerEvent('EVT_CLIENT_JOIN', self.onJoin)\n\n #########################################################################################\n # EVENT HANDLING #\n #########################################################################################\n def onJoin(self, event):\n \"\"\"\\\n Handle EVT_CLIENT_JOIN\n \"\"\"\n sclient = event.client\n if sclient.maxLevel >= 20 and sclient.maxLevel != 100:\n if sclient.team == b3.TEAM_SPEC:\n self.debug('%s is spectating. Ignoring their team' % sclient.name)\n elif sclient.team == b3.TEAM_BLUE:\n self.info('Forcing %s to red team' % sclient.name)\n self.console.write('AdminChangeTeam %s' % sclient.name)\n sclient.message('^3You can only play on ^1RED ^3team')\n","sub_path":"chivteams.py","file_name":"chivteams.py","file_ext":"py","file_size_in_byte":1918,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"481215999","text":"import unittest\nimport sys\n\nmodule = sys.argv[-1].split(\".py\")[0]\n\nclass PublicTests(unittest.TestCase):\n\n @classmethod\n def setUpClass(cls):\n global devedores\n undertest = __import__(module)\n devedores = getattr(undertest, 'devedores', None)\n\n def test_1_vazio(self):\n contas = { 'Ana':1000, 'Antonio':-500, 'William':0, 'Carlos':2500, 'Kate':-1300 }\n assert devedores(contas) == 2\n\nif __name__ == '__main__':\n loader = unittest.TestLoader()\n runner = unittest.TextTestRunner()\n runner.run(loader.loadTestsFromModule(sys.modules[__name__]))\n","sub_path":"atividades/mini_testes/devedores/public_tests.py","file_name":"public_tests.py","file_ext":"py","file_size_in_byte":595,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"377297672","text":"# /psqtraviscontainer/use.py\n#\n# Module which handles the running of scripts and commands inside of a proot\n#\n# See LICENCE.md for Copyright information\n\"\"\" Module which handles the running of scripts inside of a proot.\"\"\"\n\nimport os\n\nimport platform\n\nimport subprocess\n\nimport sys\n\nfrom collections import namedtuple\n\nfrom psqtraviscontainer import architecture\nfrom psqtraviscontainer import common_options\nfrom psqtraviscontainer import constants\nfrom psqtraviscontainer import distro\n\nProotDistribution = namedtuple(\"ProotDistribution\", \"proot qemu\")\n\n\nclass PtraceRootExecutor(object):\n\n \"\"\"For a distro configured a in container, a mechanism to execute code.\"\"\"\n\n def __init__(self, proot_distro, container_root, config, arch):\n \"\"\"Initialize PtraceRootExecutor for container and distro.\"\"\"\n super(PtraceRootExecutor, self).__init__()\n self._proot_distro = proot_distro\n self._container_root = container_root\n self._config = config\n self._arch = arch\n\n def _execute_argv(self, user_argv):\n \"\"\"Get argv to pass to subprocess later.\"\"\"\n distro_dir = distro.get_dir(self._container_root,\n self._config,\n self._arch)\n proot_command = [self._proot_distro.proot(), \"-S\", distro_dir]\n\n # If we're not the same architecture, interpose qemu's emulator\n # for the target architecture as appropriate\n our_architecture = architecture.Alias.universal(platform.machine())\n target_architecture = architecture.Alias.universal(self._arch)\n\n if our_architecture != target_architecture:\n proot_command += [\"-q\", self._proot_distro.qemu(self._arch)]\n\n return proot_command + user_argv\n\n def execute(self, argv, stdout=None, stderr=None):\n \"\"\"Execute the command specified by argv.\n\n Return tuple of (exit status, stdout, stderr).\n \"\"\"\n argv = self._execute_argv(argv)\n executed_cmd = subprocess.Popen(argv,\n stdout=stdout,\n stderr=stderr,\n universal_newlines=True)\n stdout_data, stderr_data = executed_cmd.communicate()\n\n return (executed_cmd.returncode, stdout_data, stderr_data)\n\n def execute_success(self, argv):\n \"\"\"Execute the command specified by argv, throws on failure.\"\"\"\n returncode, stdout_data, stderr_data = self.execute(argv,\n subprocess.PIPE,\n subprocess.PIPE)\n\n if returncode != 0:\n sys.stderr.write(stdout_data)\n sys.stderr.write(stderr_data)\n raise RuntimeError(\"{0} failed with {1}\".format(\" \".join(argv),\n returncode))\n\n\ndef proot_distro_from_container(container_dir):\n \"\"\"Return a ProotDistribution from a container dir.\"\"\"\n path_to_proot_dir = constants.proot_distribution_dir(container_dir)\n path_to_proot_bin = os.path.join(path_to_proot_dir, \"bin/proot\")\n path_to_qemu_template = os.path.join(path_to_proot_dir,\n \"bin/qemu-{arch}\")\n\n def _get_qemu_binary(arch):\n \"\"\"Get the qemu binary for architecture.\"\"\"\n qemu_arch = architecture.Alias.qemu(arch)\n return path_to_qemu_template.format(arch=qemu_arch)\n\n def _get_proot_binary():\n \"\"\"Get the proot binary.\"\"\"\n return path_to_proot_bin\n\n return ProotDistribution(proot=_get_proot_binary,\n qemu=_get_qemu_binary)\n\n\ndef _parse_arguments(arguments=None):\n \"\"\"Return a parser context result.\"\"\"\n parser = common_options.get_parser(\"Use\")\n parser.add_argument(\"--cmd\",\n nargs=\"*\",\n help=\"Command to run inside of container\",\n default=None,\n required=True)\n return parser.parse_args(arguments)\n\n\ndef _check_if_exists(entity):\n \"\"\"Raise RuntimeError if entity does not exist.\"\"\"\n if not os.path.exists(entity):\n raise RuntimeError(\"A required entity {0} does not exist\\n\"\n \"Try running psq-travis-container-create \"\n \"first before using psq-travis-container-use.\")\n\n\ndef main(arguments=None):\n \"\"\"Select a distro in the container root and runs a comamnd in it.\"\"\"\n result = _parse_arguments(arguments=arguments)\n distro_config, arch = distro.lookup(result.distro[0],\n result.release[0],\n result.arch[0])\n required_entities = [\n constants.have_proot_distribution(result.containerdir[0]),\n distro.get_dir(result.containerdir[0], distro_config, arch)\n ]\n\n for entity in required_entities:\n _check_if_exists(entity)\n\n # Now create an executor and run our command\n proot_distro = proot_distro_from_container(result.containerdir[0])\n proot_executor = PtraceRootExecutor(proot_distro,\n result.containerdir[0],\n distro_config,\n arch)\n\n return proot_executor.execute(result.cmd)[0]\n","sub_path":"psqtraviscontainer/use.py","file_name":"use.py","file_ext":"py","file_size_in_byte":5363,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"612395757","text":"class Test:\n def takeInput(self):\n myText =input(\"enter the string\")\n return myText\n\n def reversTxt(self, msg):\n take=msg.split(\" \")\n take.reverse()\n print(\" \".join(take))\n\n\ntest = Test()\nword = test.takeInput()\ntest.reversTxt(word)\n","sub_path":"Q12Susheel.py","file_name":"Q12Susheel.py","file_ext":"py","file_size_in_byte":274,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"180765115","text":"import streamlit as st\n\n\n#NLP Pakages\nimport spacy\nfrom textblob import TextBlob\nfrom gensim.summarization import summarize\n\n#Sumy Packages\nfrom sumy.parsers.plaintext import PlaintextParser\nfrom sumy.nlp.tokenizers import Tokenizer\nfrom sumy.summarizers.lex_rank import LexRankSummarizer\nimport nltk\nnltk.download('punkt')\n\n#Summary Function\ndef sumy_summarizer(docx):\n\tparser=PlaintextParser.from_string(docx,Tokenizer(\"english\"))\n\tlex_summarizer=LexRankSummarizer()\n\tsummary=lex_summarizer(parser.document,3)\n\tsummary_list=[str(sentence) for sentence in summary]\n\tresult=' '.join(summary_list)\n\treturn result\n\ndef text_analyzer(my_text):\n\tnlp=spacy.load(\"en_core_web_sm\")\n\tdocx=nlp(my_text)\n\n\ttokens=[token.text for token in docx]\n\tallData=[('\"Tokens\":{},\\n\"Lemma\":{}'.format(token.text,token.lemma_)) for token in docx]\n\treturn allData\n\ndef entity_analyzer(my_text):\n\tnlp=spacy.load(\"en_core_web_sm\")\n\tdocx=nlp(my_text)\n\ttokens=[token.text for token in docx]\n\tentities=[(entity.text,entity.label_) for entity in docx.ents]\n\tdata=[('\"Token\":{}, \\n\"Entity\":{}'.format(tokens,entities))]\n\treturn data\t\n\n\n#@st.cache\ndef main():\n\t### NLP App with Streamlit ###\n\tst.title(\"NLPiffy Streamlit\")\n\tst.subheader(\"Natural Language Processing On A Go...\")\n\n\t#Tokenization\n\tif st.checkbox(\"Shaw Named Entities\"):\n\t\tst.subheader(\"Extract Entities from Your Text\")\n\t\tmessage1=st.text_area(\"Enter Your Text\",\"Type Here\")\n\t\tif st.button(\"Extract\"):\n\t\t\tnlp_result=entity_analyzer(message1)\n\t\t\tst.json(nlp_result)\n\n\n\t#Name Entity Recognition\n\tif st.checkbox(\"Show Tokens And Lemma\"):\n\t\tst.subheader(\"Tokenize Your Text\")\n\t\tmessage2= st.text_area(\"Enter Your Text\",\"Type Here\")\n\t\tif st.button(\"Analyze\"):\n\t\t\tnlp_result=text_analyzer(message)\n\t\t\tst.json(nlp_result)\n\n\t#Sentiment Analysis\n\tif st.checkbox(\"Show Sentiment Analysis\"):\n\t\tst.subheader(\"Sentiment Of Your Text\")\n\t\tmessage3= st.text_area(\"Enter Your Text\",\"Type Here\")\n\t\tif st.button(\"Analyze\"):\n\t\t\tblob=TextBlob(message3)\n\t\t\tresult_sentiment=blob.sentiment\n\t\t\tst.success(result_sentiment)\n\t\t\t\n\n\n\t#Text Summarization\n\tif st.checkbox(\"Show Text Summarization\"):\n\t\tst.subheader(\"Summarize Your Text\")\n\t\tmessage3= st.text_area(\"Enter Your Text\",\"Type Here\")\n\t\tsummary_options=st.selectbox(\"Choose Your Summarize\",(\"gensim\",\"sumy\"))\n\t\tif st.button(\"Summarize\"):\n\t\t\tif summary_options=='gensim':\n\t\t\t\tsummary_result=summarize(message3)\n\t\t\telif summary_options=='sumy':\n\t\t\t\tst.text(\"Using Sumy...\")\n\t\t\t\tsummary_result=sumy_summarizer(message3)\n\n\t\t\telse:\n\t\t\t\tst.warning(\"Using Default Summarizer\")\n\t\t\t\tst.text(\"Using Gensim...\")\n\t\t\t\tsummary_result=summarize(message3)\n\n\t\t\tst.success(summary_result)\t\n\n\tst.sidebar.subheader(\"About the App\")\n\tst.sidebar.text(\"NLPiffy App with Streamlit\")\n\tst.sidebar.info(\"Cudos to Streamlit Team\")\n\n\nif __name__ == '__main__':\n\tmain()\n\t\t\t\n\t\t\t\n\n\n\n\n\n\n\n\t","sub_path":"nlp.py","file_name":"nlp.py","file_ext":"py","file_size_in_byte":2818,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"392790859","text":"import numpy as np \nimport pandas as pd # 可視化の際に便利\nfrom IPython.display import display # 可視化に利用\nimport sys # プログラムの終了に利用\n\n# 52枚のトランプを作成する関数\ndef make_cards52():\n suit=np.array([\"♠\",\"♡\",\"♦\",\"♣\"],dtype=\"object\") # ♠,♡,♦,♣の4つのマーク\n number=[\"{:02d}\".format(i) for i in range(1,14)] # 1-13の13個の数字\n \n # 52枚のカードを生成\n s=np.repeat(suit,13) # 52個のマーク \n n=np.array(number*4,dtype=\"object\") # 52個の数字\n cards52=s+n # 52個の[マーク,数字]の組み合わせ\n \n return cards52\n\n# 21枚のカードを選択する関数\ndef choose_cards21(cards52):\n cards21=cards52[np.random.choice(np.arange(52),21,replace=False)] # 21枚をランダムに非復元的に選択\n \n return cards21\n\n# カードを並び替える関数\ndef sort_cards21(cards21,column_chosen):\n cards21_matrix=cards21.reshape(7,3) # 7行3列の行列を作成\n \n nc=(column_chosen-1)%3 # ユーザーの選択した列\n numlist=[i for i in range(3)] # 1-3列目のリスト\n nf,nb=list(set(numlist)-set([nc])) # ユーザーの選択しなかった2列\n \n cf=cards21_matrix[:,nf] # まとめる際、前となる列\n cc=cards21_matrix[:,nc] # まとめる際、真ん中となる列\n cb=cards21_matrix[:,nb] # まとめる際、後ろとなる列 \n \n cards21_sorted=np.concatenate([cf,cc,cb]) # 前、真ん中、後ろをまとめる\n \n return cards21_sorted\n\n# カードを表示\ndef display_cards(cards21):\n display(pd.DataFrame(cards21.reshape(7,3),columns=[\"列1\",\"列2\",\"列3\"])) # 7行3列の行列を表示\n \n# 入力された値が1-3かどうかを確認し、値をintに変換\ndef validate_input(i):\n # 正しい形式の値が入力されるまで繰り返し\n while True:\n column_chosen=input(\"{}回目: 選んだカードは何列目にありますか。1-3の数字で入力してください: \".format(i+1)) # 何列目かの入力値\n \n if column_chosen in [\"1\",\"2\",\"3\"]: # 正しく1-3の値が入力されたら、str型の入力をintに変換\n return int(column_chosen)\n else: # 正しく入力されなかったら、エラーメッセージを表示\n print(\"Error: 1-3の数値を入力してください。\") \n\n# 答えを確認\ndef validate_answer(cards21):\n while True: # 正しい形式の値が入力されるまで繰り返し\n answer=input(\"あなたが選んだのは、{}ですね? [yes or no] : \".format(cards21[10])) # 入力値\n \n if answer in [\"yes\",\"no\"]: # yes,noが入力されたら\n if answer==\"yes\": # yesの場合\n return print(\"でしょ。[end]\")\n elif answer==\"no\": # noの場合\n print(\"そんなことはない。あなたが間違えています。最初からやり直してください。 [end]\")\n sys.exit()\n else: # yes,no以外が入力されたら\n print(\"'yes'もしくは'no'で回答してください。\")\n\n# main関数\ndef main():\n cards52=make_cards52() # 52枚のカードを作成\n cards21=choose_cards21(cards52) # 21枚のカードを選択\n\n display_cards(cards21) # 表示する\n print(\"好きなカードを上から1枚選んでください。忘れないでくださいね。\")\n\n # 3回繰り返す\n for i in range(3):\n column_chosen=validate_input(i) # ユーザーの選択した列\n cards21=sort_cards21(cards21,column_chosen) # カードをソート\n display_cards(cards21) # 21枚のカードを表示\n \n validate_answer(cards21) # 答えてもらう\n \n# 実行\nif __name__==\"__main__\":\n main()","sub_path":"card_trick21/21-card_trick.py","file_name":"21-card_trick.py","file_ext":"py","file_size_in_byte":3763,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"97201711","text":"\"\"\"\nGiven an array nums and a target value k, find the maximum length of a subarray that sums to k.\nIf there isn't one, return 0 instead.\n\nNote:\nThe sum of the entire nums array is guaranteed to fit within the 32-bit signed integer range.\n\nExample 1:\nInput: nums = [1, -1, 5, -2, 3], k = 3\nOutput: 4\nExplanation: The subarray [1, -1, 5, -2] sums to 3 and is the longest.\n\nExample 2:\nInput: nums = [-2, -1, 2, 1], k = 1\nOutput: 2\nExplanation: The subarray [-1, 2] sums to 1 and is the longest.\n\nFollow Up:\nCan you do it in O(n) time?\n\"\"\"\nclass Solution:\n def maxSubArrayLen(self, nums, k):\n \"\"\"\n :type nums: List[int]\n :type k: int\n :rtype: int\n \"\"\"\n n = len(nums)\n presum = dict()\n ret = 0\n total = 0\n for i,num in enumerate(nums):\n total += num\n if total==k:\n ret = max(ret, i+1)\n elif total-k in presum:\n ret = max(ret, i-presum[total-k])\n if total not in presum:\n presum[total] = i\n return ret \n","sub_path":"MaximumSizeSubarraySumEqualsk.py","file_name":"MaximumSizeSubarraySumEqualsk.py","file_ext":"py","file_size_in_byte":1068,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"475525953","text":"from babel.babel_utils import glom, write_compendium, dump_sets, dump_dicts, get_prefixes, filter_out_non_unique_ids, clean_sets\nfrom babel.onto import Onto\nfrom babel.ubergraph import UberGraph\nfrom src.util import Text\nimport os\nfrom datetime import datetime as dt\nfrom functools import reduce\nfrom ast import literal_eval\nfrom collections import defaultdict\n\n#def write_dicts(dicts,fname):\n# with open(fname,'w') as outf:\n# for k in dicts:\n# outf.write(f'{k}\\t{dicts[k]}\\n')\n\ndef read_bad_hp_mappings():\n fn = os.path.join(os.path.dirname(os.path.abspath(__file__)),'input_data','hpo_errors.txt')\n drops = defaultdict(set)\n with open(fn,'r') as infile:\n for line in infile:\n if line.startswith('-'):\n continue\n x = line.strip().split('\\t')\n hps = x[0]\n commaindex = hps.index(',')\n curie = hps[1:commaindex]\n name = hps[commaindex+1:-1]\n badset = literal_eval(x[1])\n drops[curie].update(badset)\n return drops\n\ndef filter_umls(umls_pairs,sets_with_umls):\n # We've got a bunch of umls pairs, but we really only want to use them if they're not\n # already BOTH attached to a hp or mondo.\n #for s in sets_with_umls:\n # if 'UMLS:C2931082' in s:\n # print(s)\n # if 'MESH:C536004' in s:\n # print(s)\n with open('filtered.txt','w') as ff:\n used = set()\n for s in sets_with_umls:\n used.update(s)\n #if 'UMLS:C2931082' in used:\n # print('umls in used')\n #if 'MESH:C536004' in used:\n # print('mesh in used')\n ok_pairs = []\n for pair in umls_pairs:\n p=list(pair)\n # print(p[0])\n # print(p[1])\n ok = ((p[0] not in used) or (p[1] not in used))\n if ok:\n ok_pairs.append(pair)\n else:\n ff.write(f'{p[0]}\\t{p[1]}\\n')\n return ok_pairs\n\ndef combine_id_sets(l1,l2):\n \"\"\"Given lists of sets, combine them, overlapping sets that are exactly the same\"\"\"\n #print(l1[0])\n #print(type(l1[0]))\n #print(l2[0])\n #print(type(l2[0]))\n s = set( [frozenset(x) for x in l1])\n s2 = set( [frozenset(x) for x in l2])\n s.update(s2)\n return [ set(x) for x in s ]\n\ndef read_badxrefs(pref):\n morebad = defaultdict(list)\n fn = os.path.join(os.path.dirname(os.path.abspath(__file__)),'input_data',f'{pref}_badxrefs.txt')\n with open(fn,'r') as inf:\n for line in inf:\n if line.startswith('#'):\n continue\n x = line.strip().split(' ')\n morebad[x[0]].append(x[1])\n return morebad\n\ndef load_diseases_and_phenotypes():\n print('disease/phenotype')\n print('get and write hp sets')\n bad_mappings = read_bad_hp_mappings()\n more_bad_mappings = read_badxrefs('hpo')\n for h,m in more_bad_mappings.items():\n bad_mappings[h].update(m)\n hpo_sets,labels = build_sets('HP:0000118', ignore_list = ['ICD','NCIT'], bad_mappings = bad_mappings)\n print('filter')\n hpo_sets = filter_out_non_unique_ids(hpo_sets)\n print('ok')\n dump_sets(hpo_sets,'hpo_sets.txt')\n print('get and write mondo sets')\n #MONDO has disease, and its sister disease susceptibility. I'm putting both in disease. Biolink q\n #But! this is a problem right now because there are some things that go in both, and they are getting filtered out\n bad_mondo_mappings = read_badxrefs('mondo')\n mondo_sets_1,labels_1 = build_exact_sets('MONDO:0000001',bad_mondo_mappings)\n mondo_sets_2,labels_2 = build_exact_sets('MONDO:0042489',bad_mondo_mappings)\n mondo_close = get_close_matches('MONDO:0000001')\n mondo_close2 = get_close_matches('MONDO:0042489')\n for k,v in mondo_close2.items():\n mondo_close[k] = v\n dump_sets(mondo_sets_1,'mondo1.txt')\n dump_sets(mondo_sets_2,'mondo2.txt')\n labels.update(labels_1)\n labels.update(labels_2)\n #if we just add these together, then any mondo in both lists will get filtered out in the next step.\n #so we need to put them into a set. You can't put sets directly into a set, you have to freeze them first\n mondo_sets = combine_id_sets(mondo_sets_1,mondo_sets_2)\n mondo_sets = filter_out_non_unique_ids(mondo_sets)\n dump_sets(mondo_sets,'mondo_sets.txt')\n print('get and write umls sets')\n bad_umls = read_badxrefs('umls')\n meddra_umls = read_meddra(bad_umls)\n meddra_umls = filter_umls(meddra_umls,mondo_sets+hpo_sets)\n dump_sets(meddra_umls,'meddra_umls_sets.txt')\n dicts = {}\n #EFO has 3 parts that we want here:\n # Disease\n efo_sets_1,l = build_exact_sets('EFO:0000408')\n labels.update(l)\n #phenotype\n efo_sets_2,l = build_exact_sets('EFO:0000651')\n labels.update(l)\n #measurement\n efo_sets_3,l = build_exact_sets('EFO:0001444')\n labels.update(l)\n efo_sets_a = combine_id_sets(efo_sets_1,efo_sets_2)\n efo_sets = combine_id_sets(efo_sets_a, efo_sets_3)\n efo_sets = filter_out_non_unique_ids(efo_sets)\n dump_sets(efo_sets,'efo_sets.txt')\n print('put it all together')\n print('mondo')\n glom(dicts,mondo_sets,unique_prefixes=['MONDO'])\n dump_dicts(dicts,'mondo_dicts.txt')\n print('hpo')\n glom(dicts,hpo_sets,unique_prefixes=['MONDO'],pref='HP')\n dump_dicts(dicts,'mondo_hpo_dicts.txt')\n print('umls')\n glom(dicts,meddra_umls,unique_prefixes=['MONDO'],pref='UMLS',close={'MONDO':mondo_close})\n dump_dicts(dicts,'mondo_hpo_meddra_dicts.txt')\n print('efo')\n glom(dicts,efo_sets,unique_prefixes=['MONDO'],pref='EFO')\n dump_dicts(dicts,'mondo_hpo_meddra_efo_dicts.txt')\n print('dump it')\n diseases,phenotypes = create_typed_sets(set([frozenset(x) for x in dicts.values()]))\n write_compendium(diseases,'disease.txt','disease',labels)\n write_compendium(phenotypes,'phenotypes.txt','phenotypic_feature',labels)\n\n\ndef create_typed_sets(eqsets):\n \"\"\"Given a set of sets of equivalent identifiers, we want to type each one into\n being either a disease or a phenotypic feature. Or something else, that we may want to\n chuck out here.\n Current rules: If it has a mondo, it's a disease, no matter what else it is\n If it doesn't have a mondo, but it does have an HP, then it's a phenotype\n Otherwise, consult the UMLS to see what it might be\n \"\"\"\n umls_types = read_umls_types()\n diseases = set()\n phenotypic_features = set()\n unknown_types = set()\n for equivalent_ids in eqsets:\n #prefixes = set([ Text.get_curie(x) for x in equivalent_ids])\n prefixes = get_prefixes(equivalent_ids)\n if 'MONDO' in prefixes:\n diseases.add(equivalent_ids)\n elif 'HP' in prefixes:\n phenotypic_features.add(equivalent_ids)\n elif 'UMLS' in prefixes:\n umls_ids = [ Text.un_curie(x) for x in equivalent_ids if Text.get_curie(x) == 'UMLS']\n #if len(umls_ids) > 1:\n # print(umls_ids)\n try:\n semtype = umls_types[umls_ids[0]]\n if semtype in ['Disease or Syndrome','Neoplastic Process','Injury or Poisoning',\n 'Mental or Behavioral Dysfunction','Congenital Abnormality',\n 'Anatomical Abnormality']:\n diseases.add(equivalent_ids)\n elif semtype in ['Finding', 'Pathologic Function', 'Sign or Symptom', 'Acquired Abnormality']:\n phenotypic_features.add(equivalent_ids)\n else:\n #Therapeutic or Preventive Procedure, Laboratory Procedure,Laboratory or Test Result\n #Diagnostic Procedure\n if semtype not in unknown_types:\n #print('What is this UMLS type?')\n #print(semtype,umls_ids[0])\n unknown_types.add(semtype)\n pass\n except Exception as e:\n #print(f'Missing UMLS: {umls_ids[0]}')\n #print(equivalent_ids)\n #Calling it a phenotype\n phenotypic_features.add(equivalent_ids)\n elif 'EFO' in prefixes:\n phenotypic_features.add(equivalent_ids)\n #else:\n # print(prefixes)\n return diseases, phenotypic_features\n\ndef build_exact_sets(iri,bad_mappings = defaultdict(set)):\n prefix = Text.get_curie(iri)\n uber = UberGraph()\n uberres = uber.get_subclasses_and_exacts(iri)\n results = []\n labels = {}\n for k,v in uberres.items():\n #Don't hop ontologies here.\n subclass_prefix = Text.get_curie(k[0])\n if subclass_prefix != prefix:\n continue\n if k[1] is not None and k[1].startswith('obsolete'):\n continue\n dbx = set([ norm(x) for x in v ])\n for bm in bad_mappings[k[0]]:\n if bm in dbx:\n dbx.remove(bm)\n dbx.add(k[0])\n labels[k[0]] = k[1]\n results.append(dbx)\n return results,labels\n\ndef get_close_matches(iri):\n prefix = Text.get_curie(iri)\n uber = UberGraph()\n uberres = uber.get_subclasses_and_close(iri)\n close = {}\n for k,v in uberres.items():\n #Don't hop ontologies here.\n subclass_prefix = Text.get_curie(k[0])\n if subclass_prefix != prefix:\n continue\n if k[1] is not None and k[1].startswith('obsolete'):\n continue\n dbx = set([ norm(x) for x in v ])\n close[k[0]] = dbx\n return close\n\n\ndef norm(curie):\n curie = f'{Text.get_curie(curie).upper()}:{Text.un_curie(curie)}'\n if Text.get_curie(curie) == 'MSH':\n return Text.recurie(curie,'MESH')\n if Text.get_curie(curie) in ['SNOMEDCT_US','SCTID']:\n return Text.recurie(curie,'SNOMEDCT')\n return curie\n\ndef build_sets(iri, ignore_list = ['ICD'], bad_mappings = {}):\n \"\"\"Given an IRI create a list of sets. Each set is a set of equivalent LabeledIDs, and there\n is a set for each subclass of the input iri\"\"\"\n uber = UberGraph()\n uberres = uber.get_subclasses_and_xrefs(iri)\n results = []\n labels = {}\n for k,v in uberres.items():\n if k[1] is not None and k[1].startswith('obsolete'):\n continue\n dbx = set([ norm(x) for x in v if not Text.get_curie(x) in ignore_list ])\n labels[k[0]] = k[1]\n head = k[0]\n dbx.add(head)\n bad_guys = bad_mappings[head]\n dbx.difference_update(bad_guys)\n results.append(dbx)\n return results,labels\n\n\n#THIS is bad.\n# We can't distribute MRCONSO.RRF, and dragging it out of UMLS is a manual process.\n# It's possible we could rebuild using the services, but no doubt very slowly\ndef read_meddra(bad_maps):\n pairs = set()\n mrcon = os.path.join(os.path.dirname(__file__),'input_data', 'MRCONSO.RRF')\n nothandled = set()\n with open(mrcon,'r') as inf:\n for line in inf:\n x = line.strip().split('|')\n if x[1] != 'ENG':\n continue\n if x[2] == 'S':\n continue\n #There is a suppress column. Only go forward if it is 'N' (it can be 'O', 'E', 'Y', all mean suppress)\n if x[16] != 'N':\n continue\n oid = x[10]\n if oid == '':\n oid = x[9]\n source = x[11]\n if source == 'HPO':\n otherid = oid\n elif source == 'MDR':\n otherid = f'MEDDRA:{oid}'\n elif source == 'NCI':\n otherid = f'NCIT:{oid}'\n elif source == 'SNOMEDCT_US':\n otherid = f'SNOMEDCT:{oid}'\n elif source == 'MSH':\n otherid = f'MESH:{oid}'\n elif source in ['LNC','SRC']:\n continue\n else:\n if source not in nothandled:\n #print('not handling source:',source)\n nothandled.add(source)\n continue\n uid = f'UMLS:{x[0]}'\n if uid in bad_maps and otherid == bad_maps[uid]:\n continue\n pairs.add( frozenset({uid,otherid}) )\n return list(pairs)\n\ndef read_umls_types():\n types = {}\n mrsty = os.path.join(os.path.dirname(__file__),'input_data','MRSTY.RRF')\n with open(mrsty,'r') as inf:\n for line in inf:\n x = line.split('|')\n types[x[0]] = x[3]\n return types\n\nif __name__ == '__main__':\n load_diseases_and_phenotypes()\n","sub_path":"babel/disease_phenotype.py","file_name":"disease_phenotype.py","file_ext":"py","file_size_in_byte":12440,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"46985922","text":"from pprint import pprint\nword = \"it a string a string string\"\ndef histogram_dict(word):\n word = word.split()\n count = {}\n for i in word:\n if i in count:\n count[i] += 1\n else:\n count[i] = 1\n return count\n\ndef histogram_list(str):\n dict = histogram_dict(str)\n output = []\n for i in dict:\n output.append([i, dict[i]])\n return output\n\ndef histogram_tuple(str):\n dict = histogram_dict(str)\n return output.items()\n\ndef histogram_count(str):\n dict = histogram_dict(str)\n output = {}\n for i in dict:\n if dict[i] in output:\n output[dict[i]].append(i)\n else:\n output[dict[i]] = [i]\n output = output.items()\n return output\n\ndef unique_words(hist):\n return len(hist.values())\n\nif __name__ == '__main__':\n val = histogram_list(word)\n # val2 = unique_words(val)\n print(val)\n # print(val2)","sub_path":"Code/histogram.py","file_name":"histogram.py","file_ext":"py","file_size_in_byte":830,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"506861948","text":"class Items(dict):\n\n # Static Lookups\n UNKNOWN = 0\n POKE_BALL = 1\n GREAT_BALL = 2\n ULTRA_BALL = 3\n MASTER_BALL = 4\n POTION = 101\n SUPER_POTION = 102\n HYPER_POTION = 103\n MAX_POTION = 104\n REVIVE = 201\n MAX_REVIVE = 202\n LUCKY_EGG = 301\n INCENSE_ORDINARY = 401\n INCENSE_SPICY = 402\n INCENSE_COOL = 403\n INCENSE_FLORAL = 404\n TROY_DISK = 501\n X_ATTACK = 602\n X_DEFENSE = 603\n X_MIRACLE = 604\n RAZZ_BERRY = 701\n BLUK_BERRY = 702\n NANAB_BERRY = 703\n WEPAR_BERRY = 704\n PINAP_BERRY = 705\n SPECIAL_CAMERA = 801\n INCUBATOR_BASIC_UNLIMITED = 901\n INCUBATOR_BASIC = 902\n POKEMON_STORAGE_UPGRADE = 1001\n ITEM_STORAGE_UPGRADE = 1002\n\n def __init__(self):\n super(dict, self).__init__(self)\n attributes = inspect.getmembers(Items, lambda attr :not(inspect.isroutine(attr)))\n for attr in attributes:\n if attr[0].isupper():\n self[attr[1]] = attr[0]\n","sub_path":"pogo/itemdex.py","file_name":"itemdex.py","file_ext":"py","file_size_in_byte":977,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"29967332","text":"from sklearn.cluster import MeanShift, DBSCAN\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.base import BaseEstimator, ClusterMixin\nfrom sklearn.utils.validation import NotFittedError\n\n\nclass DBShift(BaseEstimator, ClusterMixin):\n \"\"\"Perform DBShift clustering on a vector array.\n\n DBShift is useful for splitting a feature space into regions for\n systematic exploration. Fitting occurs in two stages:\n 1. DBSCAN is performed over the input to identify \"main\n clusters\" and outliers.\n 2. Mean Shift clustering is performed over the DBSCAN\n outliers, to break these up into regions. These \"outlier\n clusters\" are labelled with negative integers.\n\n Prediction is done by kNN against the dataset used in fitting.\n\n Parameters\n ----------\n eps : float\n The `eps` parameter for DBSCAN. If None, `eps` is chosen to\n be the mean of the ranges of the input data dimensions,\n divided by 33.\n\n min_samples : int\n The `min_samples` parameter for DBSCAN. If None,\n `min_samples` is taken to be 1% of the input data.\n\n n_neighbors : int\n The `n_neighbors` (k) parameter for kNN. If None,\n `n_neighbors` is taken to be the same as `min_samples`.\n\n Attributes\n ----------\n labels_ : The cluster labels identified during fitting\n components_ : The vector array input used in fitting\n\n _dbscan : The internal DBSCAN classifier\n _meanshift : The internal Mean Shift classifier\n _knn : The internal KNN classifier\n \"\"\"\n def __init__(self, eps=None, min_samples=None, n_neighbors=None):\n self.eps = eps\n self.min_samples = min_samples\n self.n_neighbors = n_neighbors\n\n self._dbscan = None\n self._meanshift = None\n self._knn = None\n\n self.labels_ = None\n self.components_ = None\n\n def fit(self, X, y=None):\n \"\"\"Perform clustering.\n\n Parameters\n -----------\n X : array-like, shape=[n_samples, n_features]\n Samples to cluster.\n\n y : Ignored\n\n \"\"\"\n # DBSCAN parameters\n if self.eps is not None:\n eps = self.eps\n else:\n eps = (X.max(axis=0) - X.min(axis=0)).mean() / 33\n\n if self.min_samples is not None:\n m = self.min_samples\n else:\n m = X.shape[0] // 100\n\n # Do dbscan\n self._dbscan = DBSCAN(eps=eps, min_samples=m)\n labels = self._dbscan.fit_predict(X)\n\n # Do mean shift if there are outliers (default parameters)\n outliers = X[labels == -1]\n self._meanshift = MeanShift()\n\n if outliers.shape[0]:\n outlier_clusters = self._meanshift.fit_predict(outliers)\n labels[labels == -1] = -1 - outlier_clusters\n\n # Fit KNN\n if self.n_neighbors is not None:\n k = self.n_neighbors\n else:\n k = self._dbscan.min_samples\n\n self._knn = KNeighborsClassifier(n_neighbors=k).fit(X, labels)\n\n # save output\n self.components_ = X\n self.labels_ = labels\n\n return self\n\n def predict(self, X):\n \"\"\"Predict the cluster labels for the provided data using KNN\n\n Parameters\n ----------\n X : array-like, shape (n_query, n_features)\n Test samples.\n\n Returns\n -------\n y : array of shape [n_samples] or [n_samples, n_outputs]\n Cluster labels for each data sample.\n \"\"\"\n if self._knn is None:\n raise NotFittedError\n\n return self._knn.predict(X)\n\n def predict_proba(self, X):\n \"\"\"Return probability estimates for the test data X.\n\n Parameters\n ----------\n X : array-like, shape (n_query, n_features)\n Test samples.\n\n Returns\n -------\n p : array of shape = [n_samples, n_classes], or a list of\n n_outputs of such arrays if n_outputs > 1.\n The class probabilities of the input samples. Classes are\n ordered by lexicographic order.\n \"\"\"\n if self._knn is None:\n raise NotFittedError\n\n return self._knn.predict_proba(X)\n","sub_path":"olac/clusterers.py","file_name":"clusterers.py","file_ext":"py","file_size_in_byte":4199,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"227823678","text":"import jieba\nimport wordcloud\nfrom scipy.misc import imread\n\nmk = imread('../../Downloads/timg.jpg')\nf = open('CIIE.txt', 'r')\ntxt = f.read()\nw = wordcloud.WordCloud(width=600, height=600, background_color='white', scale=3,\n font_path='/System/Library/Fonts/Hiragino Sans GB.ttc',\n mask=mk, max_words=1000)\nw.generate(' '.join(jieba.lcut(txt)))\nw.to_file('CIIE.png')\n","sub_path":"CIIE-wordcloud.py","file_name":"CIIE-wordcloud.py","file_ext":"py","file_size_in_byte":413,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"494662692","text":"\nfrom player import Player\nfrom room import Room\nfrom items import Item\nfrom errors import IllegalMoveError, NotEnoughItemsError, IllegalCountError\n\n\ndef create_rooms():\n\t# Declare all the rooms\n\n\troom = {\n\t\t'outside': Room(\n\t\t\t\"Outside Cave Entrance\",\n\t\t\t\"North of you, the cave mount beckons\"\n\t\t),\n\t\t'foyer': Room(\n\t\t\t\"Foyer\",\n\t\t\t\"Dim light filters in from the south. Dusty passages run north and east.\"\n\t\t),\n\t\t'overlook': Room(\n\t\t\t\"Grand Overlook\",\n\t\t\t\"A steep cliff appears before you, falling into the darkness. Ahead to the north, a light flickers in the distance, but there is no way across the chasm.\"\n\t\t),\n\t\t'narrow': Room(\n\t\t\t\"Narrow Passage\",\n\t\t\t\"The narrow passage bends here from west to north. The smell of gold permeates the air.\"\n\t\t),\n\t\t'treasure': Room(\n\t\t\t\"Treasure Chamber\",\n\t\t\t\"You've found the long-lost treasure chamber! Sadly, it has already been completely emptied by earlier adventurers. The only exit is to the south.\"\n\t\t),\n\t}\n\n\t# Link rooms together\n\n\troom['outside'].connect_to(room['foyer'], 'n')\n\troom['foyer'].connect_to(room['overlook'], 'n')\n\troom['foyer'].connect_to(room['narrow'], 'e')\n\troom['narrow'].connect_to(room['treasure'], 'n')\n\n\troom['foyer'].add_items(\n\t\titem=Item('rock', 'A fist-sized rock. Could be used as a weapon in a pinch.'),\n\t\tcount=3,\n\t)\n\troom['outside'].add_items(\n\t\titem=Item('stick', 'A fairly thin branch. Not much use on its own.'),\n\t\tcount=8,\n\t)\n\troom['treasure'].add_items(\n\t\titem=Item('coin', 'The scattered remnants of a once-vast hoard.'),\n\t\tcount=41,\n\t)\n\n\treturn room\n\n\nclass AdventureManager:\n\tdirections = ['n', 's', 'e', 'w']\n\n\tdef __init__(self):\n\t\tself.rooms = create_rooms()\n\t\tself.player = Player(self.rooms['outside'])\n\t\tself.describe_current_room()\n\n\tdef print(self, text: str) -> None:\n\t\t'''\n\t\tPrints something to the screen.\n\t\t\tLater, this might be modified to allow for fancier effects.\n\n\t\tArgs:\n\t\t\ttext (str): text to print\n\t\t'''\n\n\t\tprint(text[0].upper() + text[1:])\n\n\tdef step(self) -> None:\n\t\t'''\n\t\tDoes a step of the game world.\n\t\t'''\n\t\tpass\n\n\tdef get_current_room_summary(self):\n\t\treturn f'You are currently in the {self.player.current_room.name}.'\n\n\tdef get_current_room_description(self):\n\t\treturn self.player.current_room.description\n\n\tdef get_current_room_contents(self):\n\t\treturn self.player.current_room.print_items()\n\n\tdef describe_current_room(self):\n\t\tself.print(self.get_current_room_summary())\n\t\tself.print(self.get_current_room_description())\n\t\tif len(self.player.current_room.items):\n\t\t\tself.print(self.get_current_room_contents())\n\n\tdef examine_item(self, item_name):\n\t\ttry:\n\t\t\titem = self.player.get_item_by_name(item_name)\n\t\t\tself.print(item.description)\n\t\texcept KeyError:\n\t\t\ttry:\n\t\t\t\titem = self.player.current_room.get_item_by_name(item_name)\n\t\t\t\tself.print(item.description)\n\t\t\texcept KeyError:\n\t\t\t\tself.print(f'There\\'s no {item_name} around here.')\n\n\tdef move_player(self, direction: str, direction_name: str = None) -> None:\n\t\t'''\n\t\tMoves the player.\n\n\t\tArgs:\n\t\t\tdirection (str): Direction to move.\n\t\t\tdirection_name (str, optional): The name of the direction you moved.\n\t\t'''\n\n\t\tif direction_name is None:\n\t\t\tdirection_name = direction\n\n\t\ttry:\n\t\t\tself.player.move(direction)\n\n\t\t\tself.print(f'You move {direction_name}...')\n\t\t\tself.describe_current_room()\n\n\t\t\tself.step()\n\n\t\texcept IllegalMoveError:\n\t\t\tself.print(f'You can\\'t move {direction_name}.')\n\n\tdef player_take(self, item_name, count):\n\t\ttry:\n\t\t\titem = self.player.current_room.get_item_by_name(item_name)\n\t\t\tself.player.transfer_items_from(self.player.current_room, item, count=count)\n\t\t\tself.print(f'Picked up {count} of {item_name}.')\n\t\texcept KeyError:\n\t\t\tself.print(f'There\\'s not a {item_name} here!')\n\t\texcept NotEnoughItemsError:\n\t\t\tself.print(f'There aren\\'t enough of {item_name} here to take {count}.')\n\t\texcept IllegalCountError:\n\t\t\tself.print(f'Can\\'t take {count} of an item!')\n\n\tdef player_drop(self, item_name, count):\n\t\ttry:\n\t\t\titem = self.player.get_item_by_name(item_name)\n\t\t\tself.player.transfer_items_to(self.player.current_room, item, count=count)\n\t\t\tself.print(f'Dropped {count} of {item_name}.')\n\t\texcept KeyError:\n\t\t\tself.print(f'You don\\'t have a {item_name}!')\n\t\texcept NotEnoughItemsError:\n\t\t\tself.print(f'You don\\'t have enough of {item_name} to drop {count}.')\n\t\texcept IllegalCountError:\n\t\t\tself.print(f'Can\\'t drop {count} of an item!')\n\n\tdef print_player_inventory(self):\n\t\tself.print(self.player.print_inventory())\n\n\tdef print_player_inventory_count(self, item_name):\n\t\tself.print(f'You are currently holding {self.player.get_item_count_by_name(item_name)}')\n","sub_path":"src/manager.py","file_name":"manager.py","file_ext":"py","file_size_in_byte":4559,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"326294640","text":"from math import *\nstring = input(\"Enter the input\")\nstring = string.split()\nprint(type(string))\n\ndef add(*args):\n total = 0\n for i in string:\n if i.isdigit():\n print(\"Is Valid Number\")\n print(i)\n total = total + int(i)\n\n return total\n\nprint(add(string))\n\n\n\n\n\n\n\n","sub_path":"add.py","file_name":"add.py","file_ext":"py","file_size_in_byte":311,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"299989162","text":"from fastapi import FastAPI\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom mongo.db import DB\n\napp = FastAPI()\n\norigins = [\n \"http://localhost.tiangolo.com\",\n \"https://localhost.tiangolo.com\",\n \"http://localhost\",\n \"http://localhost:8080\",\n \"https://burakhanaksoy.azurewebsites.net\",\n \"http://burakhanaksoy.azurewebsites.net\"\n]\n\napp.add_middleware(\n CORSMiddleware,\n allow_origins=origins,\n allow_credentials=True,\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n)\n\ndb_instance = DB()\ndb = db_instance.get_db('my-db')\n\n\n@app.get(\"/skills\")\nasync def get_skills():\n pipeline = []\n pipeline.append({'$project': {'_id': 0}})\n collection = db.get_collection('skills')\n\n skills = []\n async for record in collection.aggregate(pipeline):\n skills.append(record)\n\n return skills\n\n\n@app.get(\"/articles\")\nasync def get_articles():\n pipeline = []\n pipeline.append({'$project': {'_id': 0}})\n collection = db.get_collection('articles')\n\n articles = []\n async for record in collection.aggregate(pipeline):\n articles.append(record)\n print(articles)\n\n return articles\n\n\n@app.get('/demo')\nasync def demo():\n return 'hi'\n","sub_path":"docker/back/backend/main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":1200,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"236862190","text":"\"\"\"Sample solutions from various students\"\"\"\n\n# Exercise 1\n\ndef vowelcount (s):\n # function vowelcount takes a string as an argument and returns the number of vowels in the string.\n\n s = s.lower()\n vowels = s.count('a') + s.count('o') + s.count('e') + s.count('u') + s.count('i')\n\n return vowels\n\n\ndef vowelcount(str):\n \"\"\"Count number of vowels in a string\"\"\"\n vowels = 'aeiouAEIOU'\n count = 0\n for i in str:\n if i in vowels:\n count += 1\n return count\n\n# Martin's solution:\ndef vowelcount(s):\n \"\"\"Count vowels in s\"\"\"\n s = s.lower()\n nv = 0\n for v in 'aeiou':\n nv += s.count(v)\n return nv\n\n# Exercise 2\n\ndef metric(x, y):\n \"\"\"Calculate the difference and sum of two numbers x and y and the quotient\n of the difference and sum\"\"\"\n\n d = x - y\n s = x + y\n\n if s == 0:\n print ('the sum is 0 and you can not divide by 0! Ever tried to divide cake by 0 people?')\n return None\n q = d / s\n return q\n\n\n# Martin's solution:\ndef metric(x, y):\n \"\"\"Calculate difference over sum\"\"\"\n d = x - y\n s = x + y\n print('difference is %g, sum is %g' % (d, s))\n if s == 0:\n return 0\n return d / s\n\n\n\n# Exercise 3\n\ndef multtable(n):\n \"Integers table of 1 to n multiplication\"\n\n for x in range(1,n+1):\n #print(x)\n for y in range(1,n+1):\n result=x*y\n #print(y)\n\n if y turtle points '''\n\n# no need to assign to result but i decided to because it makes it clearer for me \n result = egb.goto( pX , pY )\n result = egb.dot( 4, pColor)\n return result\n\n\n\n# uses the draw function to draw points \ndraw(30 , 100 , \"green\")\n\ndraw(200 , 200 , \"red\")\n\n\n\n\n \n \n","sub_path":"pa-11-points 2.01.58 AM.py","file_name":"pa-11-points 2.01.58 AM.py","file_ext":"py","file_size_in_byte":569,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"79778762","text":"from __future__ import print_function\nimport argparse\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom torch.optim.lr_scheduler import StepLR\nfrom torch.utils.data.sampler import SubsetRandomSampler\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport random\nimport PIL\nimport pickle\n\n\nimport sklearn\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.manifold import TSNE\nimport seaborn as sn\nimport pandas as pd\nfrom math import log\nimport math\n\n\nrandom.seed(2020)\ntorch.manual_seed(2020)\n\n\nimport os\n\n'''\nThis code is adapted from two sources:\n(i) The official PyTorch MNIST example (https://github.com/pytorch/examples/blob/master/mnist/main.py)\n(ii) Starter code from Yisong Yue's CS 155 Course (http://www.yisongyue.com/courses/cs155/2020_winter/)\n'''\n\nclass fcNet(nn.Module):\n '''\n Design your model with fully connected layers (convolutional layers are not\n allowed here). Initial model is designed to have a poor performance. These\n are the sample units you can try:\n Linear, Dropout, activation layers (ReLU, softmax)\n '''\n def __init__(self):\n # Define the units that you will use in your model\n # Note that this has nothing to do with the order in which operations\n # are applied - that is defined in the forward function below.\n super(fcNet, self).__init__()\n self.fc1 = nn.Linear(in_features=784, out_features=20)\n self.fc2 = nn.Linear(20, 10)\n self.dropout1 = nn.Dropout(p=0.5)\n\n def forward(self, x):\n # Define the sequence of operations your model will apply to an input x\n x = torch.flatten(x, start_dim=1)\n x = self.fc1(x)\n x = F.relu(x)\n x = self.dropout1(x)\n x = F.relu(x)\n\n output = F.log_softmax(x, dim=1)\n return output\n\nclass ConvNet(nn.Module):\n '''\n Design your model with convolutional layers.\n '''\n def __init__(self):\n super(ConvNet, self).__init__()\n self.conv1 = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=(3,3), stride=1)\n self.conv2 = nn.Conv2d(8, 8, 3, 1)\n self.dropout1 = nn.Dropout2d(0.5)\n self.dropout2 = nn.Dropout2d(0.5)\n self.fc1 = nn.Linear(200, 64)\n self.fc2 = nn.Linear(64, 10)\n\n def forward(self, x):\n x = self.conv1(x)\n x = F.relu(x)\n x = F.max_pool2d(x, 2)\n x = self.dropout1(x)\n\n x = self.conv2(x)\n x = F.relu(x)\n x = F.max_pool2d(x, 2)\n x = self.dropout2(x)\n\n x = torch.flatten(x, 1)\n x = self.fc1(x)\n x = F.relu(x)\n x = self.fc2(x)\n\n output = F.log_softmax(x, dim=1)\n return output\n\n'''\nclass ConvNet(nn.Module):\n\n def __init__(self):\n super(ConvNet, self).__init__()\n self.conv1 = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=(3,3), stride=1)\n self.conv2 = nn.Conv2d(8, 8, 3, 1)\n self.dropout1 = nn.Dropout2d(0.5)\n self.dropout2 = nn.Dropout2d(0.5)\n self.fc1 = nn.Linear(200, 64)\n self.fc2 = nn.Linear(64, 10)\n\n def forward(self, x):\n x = self.conv1(x)\n x = F.relu(x)\n x = F.max_pool2d(x, 2)\n x = self.dropout1(x)\n\n x = self.conv2(x)\n x = F.relu(x)\n x = F.max_pool2d(x, 2)\n x = self.dropout2(x)\n\n x = torch.flatten(x, 1)\n x = self.fc1(x)\n x = F.relu(x)\n x = self.fc2(x)\n\n output = F.log_softmax(x, dim=1)\n return output\n'''\n\n\nclass Net(nn.Module):\n '''\n COmpared with teh convnet, this has a 3rd linear layer and doubles the number of the convolution in the 2nd convolution layer\n '''\n\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, out_channels=8, kernel_size=(3, 3), stride=1)\n self.conv15 = nn.Conv2d(8, out_channels=16, kernel_size=(3, 3), stride=1)\n #self.conv2 = nn.Conv2d(8, 16, 3, 1)\n self.conv2 = nn.Conv2d(16, 32, 3, 1)\n\n #self.dropout1 = nn.Dropout2d(0.5)\n #self.dropout2 = nn.Dropout2d(0.5)\n\n # follow dimensions:\n # conv1 takes 28 to 26\n # maxpool takes 26 to 13\n # conv2 takes 13 to 11\n # maxpool takes 11 to 5\n\n #self.fc1 = nn.Linear(16 * 5 * 5, 120)\n #self.fc1 = nn.Linear(32 * 4 * 4, 120)\n self.fc1 = nn.Linear(32 * 22 * 22, 3000)\n self.fc15 = nn.Linear(3000, 600)\n self.fc16 = nn.Linear(600, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n self.dropout1 = nn.Dropout2d(0.3)\n self.dropout2 = nn.Dropout2d(0.3)\n\n\n def forward(self, x):\n\n x = self.conv1(x)\n x = F.relu(x)\n #x = F.max_pool2d(x, 2)\n x = self.dropout1(x)\n\n x = self.conv15(x)\n x = F.relu(x)\n x = self.dropout2(x)\n #x = F.max_pool2d(x, 2)\n\n x = self.conv2(x)\n x = F.relu(x)\n #x = F.max_pool2d(x, 2)\n\n size = x.size()[1:]\n dims = 1\n for s in size:\n dims *= s\n x = x.view(-1, dims)\n\n x = self.fc1(x)\n x = F.relu(x)\n\n x = self.fc15(x)\n x = F.relu(x)\n\n x = self.fc16(x)\n x = F.relu(x)\n\n x = self.fc2(x)\n x = F.relu(x)\n\n xf = self.fc3(x)\n\n\n output = F.log_softmax(xf, dim=1)\n return output, x\n\n\nclass Net2(nn.Module):\n '''\n COmpared with the convnet, this has a 3rd linear layer and doubles the number of the convolution in the 2nd convolution layer\n '''\n\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, out_channels=8, kernel_size=(3, 3), stride=1)\n self.conv15 = nn.Conv2d(8, out_channels=16, kernel_size=(3, 3), stride=1)\n #self.conv2 = nn.Conv2d(8, 16, 3, 1)\n self.conv2 = nn.Conv2d(16, 32, 3, 1)\n\n #self.dropout1 = nn.Dropout2d(0.5)\n #self.dropout2 = nn.Dropout2d(0.5)\n\n # follow dimensions:\n # conv1 takes 28 to 26\n # maxpool takes 26 to 13\n # conv2 takes 13 to 11\n # maxpool takes 11 to 5\n\n #self.fc1 = nn.Linear(16 * 5 * 5, 120)\n #self.fc1 = nn.Linear(32 * 4 * 4, 120)\n self.fc1 = nn.Linear(32 * 22 * 22, 3000)\n self.fc15 = nn.Linear(3000, 600)\n self.fc16 = nn.Linear(600, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n self.dropout1 = nn.Dropout2d(0.2)\n self.dropout2 = nn.Dropout2d(0.2)\n\n def num_flat_features(self, x):\n size = x.size()[1:] # all dimensions except the batch dimension\n num_features = 1\n for s in size:\n num_features *= s\n return num_features\n\n def forward(self, x):\n\n x = self.conv1(x)\n x = F.relu(x)\n #x = F.max_pool2d(x, 2)\n x = self.dropout1(x)\n\n x = self.conv15(x)\n x = F.relu(x)\n x = self.dropout2(x)\n #x = F.max_pool2d(x, 2)\n\n x = self.conv2(x)\n x = F.relu(x)\n #x = F.max_pool2d(x, 2)\n\n size = x.size()[1:]\n dims = 1\n for s in size:\n dims *= s\n x = x.view(-1, dims)\n\n x = self.fc1(x)\n x = F.relu(x)\n\n x = self.fc15(x)\n x = F.relu(x)\n\n x = self.fc16(x)\n x = F.relu(x)\n\n x = self.fc2(x)\n x = F.relu(x)\n\n\n\n xf = self.fc3(x)\n\n\n output = F.log_softmax(xf, dim=1)\n return output, x\n\ndef train(args, model, device, train_loader, optimizer, epoch):\n '''\n This is your training function. When you call this function, the model is\n trained for 1 epoch.\n '''\n model.train() # Set the model to training mode\n total_loss = 0\n for batch_idx, (data, target) in enumerate(train_loader):\n data, target = data.to(device), target.to(device)\n optimizer.zero_grad() # Clear the gradient\n output, hidden_layer = model(data) # Make predictions\n loss = F.nll_loss(output, target) # Compute loss\n loss.backward() # Gradient computation\n optimizer.step() # Perform a single optimization step\n if batch_idx % args.log_interval == 0:\n print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n epoch, batch_idx * len(data), len(train_loader.sampler),\n 100. * batch_idx / len(train_loader), loss.item()))\n\n total_loss = total_loss + loss.item()\n #train loss for each epoch is an average of the loss over all mini-batches\n train_loss = total_loss/batch_idx\n\n return train_loss\n\n\ndef test(model, device, test_loader, evaluate = False):\n model.eval() # Set the model to inference mode\n test_loss = 0\n correct = 0\n test_num = 0\n\n images = []\n allimages = []\n master_preds = []\n master_truths = []\n master_hidden_layers = []\n with torch.no_grad(): # For the inference step, gradient is not computed\n for data, target in test_loader:\n data, target = data.to(device), target.to(device)\n output, hidden_layer = model(data)\n\n #feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])\n\n test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss\n pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability\n\n print(len(hidden_layer))\n print(len(hidden_layer[0]))\n #print(hidden_layer[0])\n\n\n correct += pred.eq(target.view_as(pred)).sum().item()\n test_num += len(data)\n\n\n if evaluate:\n for i in range(len(pred)):\n master_preds.append(pred[i][0].item())\n master_truths.append(target[i].item())\n layer = hidden_layer[i].cpu()\n master_hidden_layers.append(layer.numpy())\n image = data[i][0].cpu()\n allimages.append(image.numpy())\n if pred[i][0] == target[i]:\n continue\n else:\n #print(\"not equal\")\n #print(\"pred is \", pred[i][0].item(), \"and target is \", target[i].item())\n image = data[i][0].cpu()\n images.append([image.numpy(),pred[i][0].item(),target[i].item()])\n\n if evaluate:\n\n #print(len(master_hidden_layers))\n #print(master_hidden_layers[0])\n\n distances = np.zeros(len(master_hidden_layers))\n\n #x0 = master_hidden_layers[0]\n\n for i in range(len(distances)):\n length = 0\n for dim in range(len(master_hidden_layers[0])):\n length = length + (master_hidden_layers[i][dim] - master_hidden_layers[15][dim])**2\n length = math.sqrt(length)\n distances[i] = length\n\n sorted_distance_index = np.argsort(distances)\n\n figa = plt.figure()\n\n\n print(\"test\")\n for i in range(9):\n sub = figa.add_subplot(9, 1, i + 1)\n sub.imshow(allimages[sorted_distance_index[i]], interpolation='nearest', cmap='gray')\n\n X = master_hidden_layers\n y = np.array(master_truths)\n tsne = TSNE(n_components=2, random_state=0)\n X_2d = np.array(tsne.fit_transform(X))\n\n target_ids = range(10)\n\n cdict = {0: 'orange', 1: 'red', 2: 'blue', 3: 'green', 4: 'salmon', 5:'c', 6: 'm', 7: 'y', 8: 'k', 9: 'lime'}\n\n fig, ax = plt.subplots()\n for g in np.unique(y):\n ix = np.where(y == g)\n ax.scatter(X_2d[ix, 0], X_2d[ix, 1], c=cdict[g], label=g, s=5)\n ax.legend()\n plt.show()\n\n\n #i = 1\n #plt.figure(figsize=(6, 5))\n #plt.scatter(X_2d[10*i:10*i+10,0],X_2d[:10,1])\n\n\n\n CM = confusion_matrix(master_truths,master_preds)\n CMex = CM\n #for i in range(len(CM)):\n # for j in range(len(CM)):\n # if CM[i][j] > 0:\n # CMex[i][j] = log(CM[i][j])\n # else:\n # CMex[i][j] = CM[i][j]\n\n print(CM)\n print(CMex)\n\n df_cm = pd.DataFrame(CM, range(10), range(10))\n #plt.figure(figsize=(10,7))\n fig0,ax0 = plt.subplots(1)\n sn.set(font_scale=1) # for label size\n sn.heatmap(df_cm, annot=True, annot_kws={\"size\": 11}) # font size\n #ax0.set_ylim(len(CMex) - 0.5, 0.5)\n plt.xlabel(\"predicted\")\n plt.ylabel(\"ground truth\")\n plt.show()\n\n\n\n\n fig = plt.figure()\n\n for i in range(9):\n sub = fig.add_subplot(3, 3, i + 1)\n sub.imshow(images[i + 10][0], interpolation='nearest', cmap='gray')\n\n title = \"Predicted: \" + str(images[i+ 10][1]) + \" True: \" + str(images[i+ 10][2])\n sub.set_title(title)\n\n kernels = model.conv1.weight.cpu().detach().clone()\n kernels = kernels - kernels.min()\n kernels = kernels / kernels.max()\n\n kernels = kernels.numpy()\n print(np.shape(kernels))\n\n fig2 = plt.figure()\n for i in range(8):\n\n sub = fig2.add_subplot(2, 4, i + 1)\n sub.imshow(kernels[i][0], interpolation='nearest', cmap='gray')\n\n title = \"Kernel #\" + str(i + 1)\n sub.set_title(title)\n\n\n #fig, axs = plt.subplots(3, 3, constrained_layout=True)\n #for i in range(9):\n # fig[i].imshow(images[i][0], interpolation='nearest', cmap='gray')\n # axs[i].set_title(\"all titles\")\n\n\n\n\n\n test_loss /= test_num\n\n print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.4f}%)\\n'.format(\n test_loss, correct, test_num,\n 100. * correct / test_num))\n\n return test_loss\n\n\ndef main():\n # Training settings\n # Use the command line to modify the default settings\n parser = argparse.ArgumentParser(description='PyTorch MNIST Example')\n parser.add_argument('--batch-size', type=int, default=64, metavar='N',\n help='input batch size for training (default: 64)')\n parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',\n help='input batch size for testing (default: 1000)')\n parser.add_argument('--epochs', type=int, default=14, metavar='N',\n help='number of epochs to train (default: 14)')\n parser.add_argument('--lr', type=float, default=1.0, metavar='LR',\n help='learning rate (default: 1.0)')\n parser.add_argument('--step', type=int, default=1, metavar='N',\n help='number of epochs between learning rate reductions (default: 1)')\n parser.add_argument('--gamma', type=float, default=0.7, metavar='M',\n help='Learning rate step gamma (default: 0.7)')\n parser.add_argument('--no-cuda', action='store_true', default=False,\n help='disables CUDA training')\n parser.add_argument('--seed', type=int, default=1, metavar='S',\n help='random seed (default: 1)')\n parser.add_argument('--log-interval', type=int, default=10, metavar='N',\n help='how many batches to wait before logging training status')\n\n parser.add_argument('--evaluate', action='store_true', default=False,\n help='evaluate your model on the official test set')\n parser.add_argument('--load-model', type=str,\n help='model file path')\n\n parser.add_argument('--save-model', action='store_true', default=True,\n help='For Saving the current Model')\n args = parser.parse_args()\n use_cuda = not args.no_cuda and torch.cuda.is_available()\n\n torch.manual_seed(args.seed)\n\n device = torch.device(\"cuda\" if use_cuda else \"cpu\")\n\n kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}\n\n # Evaluate on the official test set\n if args.evaluate:\n assert os.path.exists(args.load_model)\n\n # Set the test model\n model = Net().to(device)\n model.load_state_dict(torch.load(args.load_model))\n\n test_dataset = datasets.MNIST('../data', train=False,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ]))\n\n test_loader = torch.utils.data.DataLoader(\n test_dataset, batch_size=args.test_batch_size, shuffle=True, **kwargs)\n\n test(model, device, test_loader, evaluate = True)\n\n return\n\n # Pytorch has default MNIST dataloader which loads data at each iteration\n train_dataset = datasets.MNIST('../data', train=True, download=True,\n transform=transforms.Compose([ # Data preprocessing\n transforms.ToTensor(), # Add data augmentation here\n transforms.Normalize((0.1307,), (0.3081,))\n ]))\n\n train_dataset_augmented = datasets.MNIST('../data', train=True, download=True,\n transform=transforms.Compose([ # Data preprocessing\n #transforms.RandomCrop(28, padding=(1, 1, 1, 1)),\n #transforms.RandomRotation(4, resample=PIL.Image.BILINEAR),\n #transforms.RandomResizedCrop(28, scale=(0.85, 1.0), ratio=(1, 1),\n # interpolation=2),\n transforms.RandomAffine(8, translate=(.065, .065), scale=(0.80, 1.1),\n resample=PIL.Image.BILINEAR),\n transforms.ToTensor(), # Add data augmentation here\n transforms.Normalize((0.1307,), (0.3081,))\n ]))\n\n print(type(train_dataset))\n print(len(train_dataset), type(train_dataset[0][0]), type(train_dataset[0][1]), type(train_dataset[0]))\n\n print(\"the int is: \", train_dataset[2][1])\n print(np.shape(train_dataset[0][0][0].numpy()))\n\n idx = [[] for i in range(10)]\n #each row of indexes is a list of indexes in the train_dataset\n #e.g. row 5 containes a list of indexes for the places in train_dataset with images of 5\n print(idx[4])\n for i, img in enumerate(train_dataset):\n\n #if False:\n if i < 5:\n fig = plt.figure()\n plt.imshow(img[0][0].numpy(), cmap='gray')\n\n fig = plt.figure()\n plt.imshow(train_dataset_augmented[i][0][0].numpy(), cmap='gray')\n\n for number in range(10):\n if img[1] == number:\n idx[number].append(i)\n\n\n val_idx = [[] for i in range(10)]\n train_idx = [[] for i in range(10)]\n #print(idx[0][1:100])\n\n for i, number_indx in enumerate(idx):\n random.shuffle(number_indx)\n l = len(number_indx)\n idx_lim = int(l*0.15)\n val_idx[i] = number_indx[0:idx_lim]\n train_idx[i] = number_indx[idx_lim:]\n\n\n subset_indices_train = [j for sub in train_idx for j in sub]\n subset_indices_valid = [j for sub in val_idx for j in sub]\n\n\n # for adjusting size of train set\n\n train_length = int(len(subset_indices_train))\n #train_length = int(len(subset_indices_train)/2)\n #train_length = int(len(subset_indices_train) / 4)\n #train_length = int(len(subset_indices_train) / 8)\n #train_length = int(len(subset_indices_train) / 16)\n\n\n\n\n # You can assign indices for training/validation or use a random subset for\n # training by using SubsetRandomSampler. Right now the train and validation\n # sets are built from the same indices - this is bad! Change it so that\n # the training and validation sets are disjoint and have the correct relative sizes.\n\n\n train_loader = torch.utils.data.DataLoader(\n train_dataset_augmented, batch_size=args.batch_size,\n sampler=SubsetRandomSampler(subset_indices_train[:train_length])\n )\n val_loader = torch.utils.data.DataLoader(\n train_dataset, batch_size=args.test_batch_size,\n sampler=SubsetRandomSampler(subset_indices_valid)\n )\n\n # Load your model [fcNet, ConvNet, Net]\n model = Net().to(device)\n\n # Try different optimzers here [Adam, SGD, RMSprop]\n optimizer = optim.Adadelta(model.parameters(), lr=args.lr)\n\n\n # Set your learning rate scheduler\n scheduler = StepLR(optimizer, step_size=args.step, gamma=args.gamma)\n\n # Training loop\n train_losses = []\n test_losses = []\n x = []\n fig, ax = plt.subplots(1)\n\n\n if True:\n for epoch in range(1, args.epochs + 1):\n #train and test each epoch\n train_loss = train(args, model, device, train_loader, optimizer, epoch)\n test_loss = test(model, device, val_loader)\n scheduler.step() # learning rate scheduler\n\n train_losses.append(train_loss)\n test_losses.append(test_loss)\n x.append(epoch - 1)\n ax.plot(x, test_losses, label='test_losses', markersize=2)\n ax.plot(x, train_losses, label='train_losses', markersize=2)\n\n plt.pause(0.05)\n\n # You may optionally save your model at each epoch here\n\n if args.save_model:\n\n print(train_losses)\n with open(\"train_losses_one.txt\", \"wb\") as fp: # Pickling\n pickle.dump(train_losses, fp)\n print(test_losses)\n with open(\"test_losses_one.txt\", \"wb\") as fp: # Pickling\n pickle.dump(test_losses, fp)\n\n\n\n torch.save(model.state_dict(), \"mnist_model_onef.pt\")\n\n\nif __name__ == '__main__':\n main()\n","sub_path":"main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":22052,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"539417910","text":"import subprocess as sp\n\ndef installDocker():\n sp.run(\"yum install docker-ce --nobest -y\", shell=True)\n\ndef startDocker():\n sp.run(\"systemctl start docker\",shell=True)\n\ndef dockerCommands():\n ch = 'Y'\n cnt = 0\n while ch =='Y':\n if cnt == 0:\n ans = input(\"would you like to play with containers?[Y/N]\")\n if ans == 'Y':\n print(\"CHOOSE OPERATION: \")\n print(\" 1 : RUN \")\n print(\" 2 : STOP\")\n print(\" 3 : REMOVE CONTAINER\")\n print(\" 4 : REMOVE IMAGE\")\n print(\" 5 : PULL IMAGE\")\n print(\" 6 : Exit\")\n opt = int(input())\n if opt == 1:\n name = input(\"GIVE NAME TO YOUR CONTIANER: \")\n term = input(\"WOULD YOU LIKE TO GET INTERACTIVE TERMINAL OR CLOSE CONT AFTER RUNNING?[Y/N]\")\n image = input(\"IMAGE NAME : \")\n cont_run = sp.getoutput(\"docker run -dit --name {} {}\".format(name, image))\n print(cont_run)\n op_of_cont = sp.getoutput(\"docker ps\")\n print(op_of_cont)\n elif opt == 2:\n all_cont = sp.getoutput(\"docker ps\")\n print(\"LIST OF CONTAINERS RUNNING\")\n print(all_cont)\n cont_name = input(\"ENTER THE NAME OF CONTAINER TO BE STOPPED: \")\n cont_stop = sp.getoutput(\"docker stop {}\".format(cont_name))\n print(cont_stop)\n clear = sp.run(\"clear\",shell=True)\n check_stop_cont = sp.getoutput(\"docker ps\")\n print(check_stop_cont)\n elif opt == 3:\n all_cont = sp.getoutput(\"docker ps -a\")\n print(all_cont)\n rm_cont_name = input(\"IF ALL WANT TO REMOVE TYPE all or ENTER THE NAME OF CONTAINER TO BE REMOVED: \")\n if rm_cont_name == \"all\":\n rm_cont = sp.getoutput(\"docker rm `docker ps -a`\")\n else:\n rm_cont = sp.getoutput(\"docker rm {}\".format(rm_cont_name))\n print(rm_cont)\n check_rm_cont = sp.getoutput(\"docker ps -a\")\n print(check_rm_cont)\n elif opt == 4:\n all_img = sp.getoutput(\"docker images\")\n print(all_img)\n rm_img_name = input(\"IF ALL WANT TO REMOVE TYPE all OR ENTER NAME OF IMAGE TO BE DELETED: \")\n if rm_img_name == \"all\":\n rm_img = sp.getoutput(\"docker rmi `docker images -q` --force\")\n else:\n rm_img = sp.getoutput(\"docker rmi {}\".format(rm_img_name))\n print(rm_img)\n check_rm_img = sp.getoutput(\"docker images\")\n print(check_rm_img)\n elif opt == 5:\n all_img_present = sp.getoutput(\"docker images\")\n print(\"LIST OF ALL CONTAINERS PRESENT\")\n print(all_img_present)\n pull_img_name = input(\"ENTER NAME OF IMAGE TO BE PULLED: \")\n pull_img = sp.getoutput(\"docker pull {}\".format(pull_img_name))\n print(pull_img)\n check_pull_img = sp.getoutput(\"docker images\")\n print(check_pull_img)\n elif opt == 6:\n break;\n ch = input(\"WOULD YOU LIKE TO CONTINUE?[Y/N]: \")\n if ch != 'Y':\n break\n cnt+=1\n else:\n break","sub_path":"linuxAutomation/docker.py","file_name":"docker.py","file_ext":"py","file_size_in_byte":3438,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"117032201","text":"#Tue Sep 10 16:06:10 EST 2013\nN_copies_Y=2\nN_copies_X=2\nZ_origin_offset=0\nshowEtch3=0\nshowEtch2=0\nshowEtch=1\nGRID_N_POINTS=(4,4)\nBAUDRATE=115200\nminDistance=0.001**2 \nmaxDistance=1**2 \nZlift_milling=1.0\nfilePath=\"C:/temp/out/\"\nF_fastMove=70000\nEmulate=True\nrunInGui=True\nshowDrill=0\nF_slowMove=20000\nfavouritesPath=\"C:/Users/dutoitk.FLITECH/Dropbox/Cyclone-PCB-Factory-master/favourites/\"\nZ_PROBING_FILE=\"Z_probing_data.p\"\nmargin_copies_Y=5\ninitial_Z_lowering_distance=-5\nDEVICE=\"COM3\"\nmargin_copies_X=5\nshowEdge=0\nZ_global_offset=0\nZlift=0.5\nfileName=\"Encoder_Board\"\n","sub_path":"Software/configuration.py","file_name":"configuration.py","file_ext":"py","file_size_in_byte":568,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"434077532","text":"#!/usr/bin/env python3 -u\n# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n#\n# This source code is licensed under the license found in the LICENSE file in\n# the root directory of this source tree. An additional grant of patent rights\n# can be found in the PATENTS file in the same directory.\n\"\"\"\nTrain a new model on one or across multiple GPUs.\n\"\"\"\n\nimport collections\nimport math\nimport os\nimport random\nimport torch\nimport subprocess\nimport re\nimport copy\nfrom fairseq import checkpoint_utils, distributed_utils, options, progress_bar, tasks, utils, bleu\nfrom fairseq.data import iterators\nfrom fairseq.trainer import Trainer\nfrom fairseq.meters import AverageMeter, StopwatchMeter\nimport numpy as np\n\n\ndef main(args, init_distributed=False):\n utils.import_user_module(args)\n\n assert args.max_tokens is not None or args.max_sentences is not None, \\\n 'Must specify batch size either with --max-tokens or --max-sentences'\n\n # Initialize CUDA and distributed training\n if torch.cuda.is_available() and not args.cpu:\n torch.cuda.set_device(args.device_id)\n torch.manual_seed(args.seed)\n if init_distributed:\n args.distributed_rank = distributed_utils.distributed_init(args)\n\n # Print args\n print(args)\n\n # Setup task, e.g., translation, language modeling, etc.\n task = tasks.setup_task(args)\n\n # Load valid dataset (we load training data below, based on the latest checkpoint)\n for valid_sub_split in args.valid_subset.split(','):\n task.load_dataset(valid_sub_split, combine=True, epoch=0)\n\n # Build model and criterion\n model = task.build_model(args)\n criterion = task.build_criterion(args)\n print(model)\n print('| model {}, criterion {}'.format(args.arch, criterion.__class__.__name__))\n print('| num. model params: {} (num. trained: {})'.format(\n sum(p.numel() for p in model.parameters()),\n sum(p.numel() for p in model.parameters() if p.requires_grad),\n ))\n\n # added by wxjiao: load bilingual pairs\n src_dict = task.source_dictionary.__dict__['indices']\n tgt_dict = task.target_dictionary.__dict__['indices']\n slang, tlang = args.source_lang, args.target_lang\n st_path = 'bildict/{}-{}/{}-{}.align.txt'.format(slang, tlang, slang, tlang)\n st_aligners, st_prob = get_stpairs(st_path, src_dict, tgt_dict, alpha=0.75) # LongTensor\n print('| bilingual pairs {}; top-5 probs {}'.format(st_aligners.shape[0], st_prob[:5]))\n\n\n # Build trainer\n trainer = Trainer(args, task, model, criterion)\n print('| training on {} GPUs'.format(args.distributed_world_size))\n print('| max tokens per GPU = {} and max sentences per GPU = {}'.format(\n args.max_tokens,\n args.max_sentences,\n ))\n\n #slhe, added for generation\n assert not args.sampling or args.nbest == args.beam, \\\n '--sampling requires --nbest to be equal to --beam'\n assert args.replace_unk is None or args.raw_text, \\\n '--replace-unk requires a raw text dataset (--raw-text)'\n # assert args.gen_subset\n # task.load_dataset(args.gen_subset)\n\n # Load the latest checkpoint if one is available and restore the\n # corresponding train iterator\n extra_state, epoch_itr = checkpoint_utils.load_checkpoint(args, trainer)\n\n # by wxjiao\n print('| Truly trained num. model params: {}'.format(\n sum(p.numel() for p in trainer.model.parameters() if p.requires_grad)))\n\n # Train until the learning rate gets too small\n max_epoch = args.max_epoch or math.inf\n max_update = args.max_update or math.inf\n lr = trainer.get_lr()\n train_meter = StopwatchMeter()\n train_meter.start()\n valid_losses = [None]\n valid_subsets = args.valid_subset.split(',')\n while lr > args.min_lr and epoch_itr.epoch < max_epoch and trainer.get_num_updates() < max_update:\n # train for one epoch\n train(args, trainer, task, epoch_itr, st_aligners, st_prob)\n\n if not args.disable_validation and epoch_itr.epoch % args.validate_interval == 0:\n valid_losses = validate(args, trainer, task, epoch_itr, valid_subsets)\n else:\n valid_losses = [None]\n\n # only use first validation loss to update the learning rate\n lr = trainer.lr_step(epoch_itr.epoch, valid_losses[0])\n\n # save checkpoint\n if epoch_itr.epoch % args.save_interval == 0:\n checkpoint_utils.save_checkpoint(args, trainer, epoch_itr, valid_losses[0])\n\n if ':' in getattr(args, 'data', ''):\n # sharded data: get train iterator for next epoch\n epoch_itr = trainer.get_train_iterator(epoch_itr.epoch)\n train_meter.stop()\n print('| done training in {:.1f} seconds'.format(train_meter.sum))\n\n\n# added by wxjiao: get the biligual pairs\ndef get_stpairs(st_path, src_dict, tgt_dict, alpha=1.0):\n st_aligners = []\n s_prob = []\n t_prob = []\n with open(st_path, 'r') as f:\n for line in f:\n src_l, tgt_l, ps_l, pt_l = line.strip('\\n').split(' ')\n if src_l in src_dict and tgt_l in tgt_dict:\n st_aligners.append([src_dict[src_l], tgt_dict[tgt_l]])\n s_prob.append(int(ps_l))\n t_prob.append(int(pt_l))\n st_aligners = np.array(st_aligners, dtype=int)\n s_prob = np.array(s_prob, dtype=float)\n s_prob = s_prob ** alpha / np.sum(s_prob ** alpha, axis=0, keepdims=True)\n t_prob = np.array(t_prob, dtype=float)\n t_prob = t_prob ** alpha / np.sum(t_prob ** alpha, axis=0, keepdims=True)\n st_prob = (s_prob * t_prob)**0.5\n st_prob = st_prob / np.sum(st_prob)\n return st_aligners, st_prob\n\n\n#def train(args, trainer, task, epoch_itr, st_aligners):\n# modified by wxjiao: add one more arg -- st_aligners\ndef train(args, trainer, task, epoch_itr, st_aligners, st_prob):\n \"\"\"Train the model for one epoch.\"\"\"\n # Update parameters every N batches\n update_freq = args.update_freq[epoch_itr.epoch - 1] \\\n if epoch_itr.epoch <= len(args.update_freq) else args.update_freq[-1]\n\n # Initialize data iterator\n itr = epoch_itr.next_epoch_itr(\n fix_batches_to_gpus=args.fix_batches_to_gpus,\n shuffle=(epoch_itr.epoch >= args.curriculum),\n )\n itr = iterators.GroupedIterator(itr, update_freq)\n progress = progress_bar.build_progress_bar(\n args, itr, epoch_itr.epoch, no_progress_bar='simple',\n )\n\n extra_meters = collections.defaultdict(lambda: AverageMeter())\n valid_subsets = args.valid_subset.split(',')\n max_update = args.max_update or math.inf\n\n # added by wxjiao\n K_buf = 1 # lambda 36.0\n sample_buf = []\n for i, samples in enumerate(progress, start=epoch_itr.iterations_in_epoch):\n #log_output = trainer.train_step(samples)\n # modified by wxjiao: add one more arg -- st_aligners\n log_output = trainer.train_step(samples, st_aligners, st_prob, sample_buf, K_buf)\n if log_output is None:\n continue\n\n # log mid-epoch stats\n stats = get_training_stats(trainer)\n for k, v in log_output.items():\n if k in ['loss', 'nll_loss', 'ntokens', 'nsentences', 'sample_size', 'sample_status']:\n continue # these are already logged above\n if 'loss' in k:\n extra_meters[k].update(v, log_output['sample_size'])\n else:\n extra_meters[k].update(v)\n stats[k] = extra_meters[k].avg\n progress.log(stats, tag='train', step=stats['num_updates'])\n\n # ignore the first mini-batch in words-per-second calculation\n if i == 0:\n trainer.get_meter('wps').reset()\n\n num_updates = trainer.get_num_updates()\n if (\n not args.disable_validation\n and args.save_interval_updates > 0\n and num_updates % args.save_interval_updates == 0\n and num_updates > 0\n ):\n valid_losses = validate(args, trainer, task, epoch_itr, valid_subsets)\n checkpoint_utils.save_checkpoint(args, trainer, epoch_itr, valid_losses[0])\n print(\"dictionary size is {}, {}\".format(len(task.source_dictionary), len(task.target_dictionary)))\n if num_updates >= max_update:\n break\n\n # log end-of-epoch stats\n stats = get_training_stats(trainer)\n for k, meter in extra_meters.items():\n stats[k] = meter.avg\n progress.print(stats, tag='train', step=stats['num_updates'])\n\n # reset training meters\n for k in [\n 'train_loss', 'train_nll_loss', 'wps', 'ups', 'wpb', 'bsz', 'gnorm', 'clip',\n ]:\n meter = trainer.get_meter(k)\n if meter is not None:\n meter.reset()\n\ndef get_training_stats(trainer):\n stats = collections.OrderedDict()\n stats['loss'] = trainer.get_meter('train_loss')\n if trainer.get_meter('train_nll_loss').count > 0:\n nll_loss = trainer.get_meter('train_nll_loss')\n stats['nll_loss'] = nll_loss\n else:\n nll_loss = trainer.get_meter('train_loss')\n stats['ppl'] = utils.get_perplexity(nll_loss.avg)\n stats['wps'] = trainer.get_meter('wps')\n stats['ups'] = trainer.get_meter('ups')\n stats['wpb'] = trainer.get_meter('wpb')\n stats['bsz'] = trainer.get_meter('bsz')\n stats['num_updates'] = trainer.get_num_updates()\n stats['lr'] = trainer.get_lr()\n stats['gnorm'] = trainer.get_meter('gnorm')\n stats['clip'] = trainer.get_meter('clip')\n stats['oom'] = trainer.get_meter('oom')\n if trainer.get_meter('loss_scale') is not None:\n stats['loss_scale'] = trainer.get_meter('loss_scale')\n stats['wall'] = round(trainer.get_meter('wall').elapsed_time)\n stats['train_wall'] = trainer.get_meter('train_wall')\n return stats\n\n\ndef validate(args, trainer, task, epoch_itr, subsets):\n \"\"\"Evaluate the model on the validation set(s) and return the losses.\"\"\"\n valid_losses = []\n valid_bleus = []\n for subset in subsets:\n # Initialize data iterator\n\n if args.sacrebleu:\n scorer = bleu.SacrebleuScorer()\n else:\n tgt_dict = task.target_dictionary\n scorer = bleu.Scorer(tgt_dict.pad(), tgt_dict.eos(), tgt_dict.unk())\n\n src_dict_dup, tgt_dict_dup = task.dict_dup()\n\n itr = task.get_batch_iterator(\n dataset=task.dataset(subset),\n max_tokens=args.max_tokens,\n max_sentences=args.max_sentences_valid,\n max_positions=utils.resolve_max_positions(\n task.max_positions(),\n trainer.get_model().max_positions(),\n ),\n ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test,\n required_batch_size_multiple=args.required_batch_size_multiple,\n seed=args.seed,\n num_shards=args.distributed_world_size,\n shard_id=args.distributed_rank,\n num_workers=args.num_workers,\n ).next_epoch_itr(shuffle=False)\n\n progress = progress_bar.build_progress_bar(\n args, itr, epoch_itr.epoch,\n prefix='valid on \\'{}\\' subset'.format(subset),\n no_progress_bar='simple'\n )\n\n # reset validation loss meters\n for k in ['valid_loss', 'valid_nll_loss', 'valid_bleu']:\n meter = trainer.get_meter(k)\n if meter is not None:\n meter.reset()\n extra_meters = collections.defaultdict(lambda: AverageMeter())\n\n all_generation_output = []\n #import time; t1 = time.time()\n #print(\"start validation on {} data with {} batch\".format(subset, len(itr))\n count = 0\n for sample in progress:\n #count+=1\n #if count%100 == 0:\n # print(\"batch: \", count)\n log_output, generation_output = trainer.valid_step(sample, src_dict_dup, tgt_dict_dup, scorer=scorer)\n all_generation_output.append(generation_output)\n for k, v in log_output.items():\n if k in ['loss', 'nll_loss', 'ntokens', 'nsentences', 'sample_size', 'sample_status']:\n continue\n extra_meters[k].update(v)\n #print(\"finish validation using {} seconds\".format(time.time()-t1))\n if len(all_generation_output) != 0:\n all_generation_output = [gen for gen_list in all_generation_output for gen in gen_list]\n print(\"| Evaluate {} samples from {} data\".format(len(all_generation_output), subset))\n print(\"| {}\".format(scorer.result_string()))\n if args.distributed_rank == 0:\n gene_save_dir = save_gene_file(args, all_generation_output, trainer, epoch_itr)\n bleu_res = external_eval(gene_save_dir, multi_ref=False)\n # log validation stats\n trainer.get_meter('valid_bleu').update(bleu_res)\n stats = get_valid_stats(trainer)\n for k, meter in extra_meters.items():\n stats[k] = meter.avg\n stats['bleu'] = stats['bleu'].avg\n progress.print(stats, tag=subset, step=trainer.get_num_updates())\n valid_losses.append(stats['loss'].avg)\n scorer.reset()\n\n return valid_losses\n\ndef external_eval(gene_save_dir, multi_ref=False):\n print(\"| Evaluate using external metric\")\n hypo_path = gene_save_dir + 'hypo.txt'\n if multi_ref:\n raise NotImplementedError\n else:\n target_path = gene_save_dir + 'target.txt'\n eval_cmd = \"perl ./scripts/multi-bleu.perl %s < %s\"%(target_path, hypo_path)\n # print(eval_cmd)\n output = subprocess.check_output(eval_cmd, shell=True).decode(\"utf-8\")\n print(\"| \" + output)\n p = re.compile(\"BLEU = \\d+\\.\\d+,\")\n bleu_res = float(p.search(output).group(0).replace(',', '').replace('BLEU = ', ''))\n return bleu_res\n\ndef save_gene_file(args, generation_list, trainer, epoch_itr):\n if args.results_path is not None:\n epoch = epoch_itr.epoch\n end_of_epoch = epoch_itr.end_of_epoch()\n updates = trainer.get_num_updates()\n\n gene_folder = 'gene/'\n save_folder = os.path.join(args.results_path, gene_folder)\n if not os.path.exists(save_folder):\n os.makedirs(save_folder, exist_ok=True)\n\n if end_of_epoch and not args.no_epoch_checkpoints and epoch % args.save_interval == 0:\n file_prefix = 'trans_cp_{}_'.format(epoch)\n elif not end_of_epoch and args.save_interval_updates > 0 and updates % args.save_interval_updates == 0:\n file_prefix = 'trans_cp_{}_{}_'.format(epoch, updates)\n else:\n raise NotImplementedError\n\n save_dir = os.path.join(save_folder, file_prefix)\n print('| Save generation results into path %s' % save_dir)\n else:\n print('Save path is not specified, will not save the generation results')\n return\n hypo_file_path = save_dir + 'hypo.txt'\n target_file_path = save_dir + 'target.txt'\n hypo_file = open(hypo_file_path, 'w')\n target_file = open(target_file_path, 'w')\n\n sorted(generation_list, key=lambda x: x[0])\n for item in generation_list:\n hypo_file.write(item[1] + '\\n')\n target_file.write(item[2] + '\\n')\n hypo_file.close()\n target_file.close()\n return save_dir\n\ndef get_valid_stats(trainer):\n stats = collections.OrderedDict()\n stats['loss'] = trainer.get_meter('valid_loss')\n stats['bleu'] = trainer.get_meter('valid_bleu')\n if trainer.get_meter('valid_nll_loss').count > 0:\n nll_loss = trainer.get_meter('valid_nll_loss')\n stats['nll_loss'] = nll_loss\n else:\n nll_loss = stats['loss']\n stats['ppl'] = utils.get_perplexity(nll_loss.avg)\n stats['num_updates'] = trainer.get_num_updates()\n if hasattr(checkpoint_utils.save_checkpoint, 'best'):\n stats['best_loss'] = min(\n checkpoint_utils.save_checkpoint.best, stats['loss'].avg)\n return stats\n\n\ndef distributed_main(i, args, start_rank=0):\n args.device_id = i\n if args.distributed_rank is None: # torch.multiprocessing.spawn\n args.distributed_rank = start_rank + i\n main(args, init_distributed=True)\n\n\ndef cli_main():\n parser = options.get_training_parser()\n args = options.parse_args_and_arch(parser)\n\n if args.distributed_init_method is None:\n distributed_utils.infer_init_method(args)\n\n if args.distributed_init_method is not None:\n # distributed training\n if torch.cuda.device_count() > 1 and not args.distributed_no_spawn:\n start_rank = args.distributed_rank\n args.distributed_rank = None # assign automatically\n torch.multiprocessing.spawn(\n fn=distributed_main,\n args=(args, start_rank),\n nprocs=torch.cuda.device_count(),\n )\n else:\n distributed_main(args.device_id, args)\n elif args.distributed_world_size > 1:\n # fallback for single node with multiple GPUs\n assert args.distributed_world_size <= torch.cuda.device_count()\n port = random.randint(10000, 20000)\n args.distributed_init_method = 'tcp://localhost:{port}'.format(port=port)\n args.distributed_rank = None # set based on device id\n if max(args.update_freq) > 1 and args.ddp_backend != 'no_c10d':\n print('| NOTE: you may get better performance with: --ddp-backend=no_c10d')\n torch.multiprocessing.spawn(\n fn=distributed_main,\n args=(args, ),\n nprocs=args.distributed_world_size,\n )\n else:\n # single GPU training\n main(args)\n\n\nif __name__ == '__main__':\n cli_main()\n","sub_path":"train.py","file_name":"train.py","file_ext":"py","file_size_in_byte":17489,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"428091064","text":"import heapq as hpq\nimport itertools\n\n\nclass PriorityQueueSet:\n \"\"\"\n Combined priority queue and set data structure.\n\n The dictionary wrapper guarantees:\n - a unique set of items\n - the possibility to search for items\n\n and as a result:\n - the possibility to update the priority of an already existing item\n\n\n Provides O(1) membership test, O(log N) insertion and O(log N) removal of the smallest item.\n\n Important: the items of this data structure must be both comparable and\n hashable (i.e. must implement __cmp__ and __hash__). This is true of\n Python's built-in objects, but you should implement those methods if you\n want to use the data structure for custom objects.\n \"\"\"\n\n # placeholder for a removed task\n REMOVED = ''\n\n def __init__(self, items=None):\n \"\"\"\n Create a new PriorityQueueSet.\n\n Arguments:\n items (list): An initial item list - it can be unsorted and\n non-unique. The data structure will be created in O(N).\n\n Attributes:\n self.set: dictionary wrapper for the heap\n self.heap: the actual priority queue\n self.counter: unique sequence count\n \"\"\"\n\n if items is None:\n items = []\n self.set = dict((item, []) for item in items)\n self.heap = list(self.set.keys())\n hpq.heapify(self.heap)\n self.counter = itertools.count()\n\n def has_item(self, item):\n \"\"\"Check if ``item`` exists in the queue.\"\"\"\n return item in self.set\n\n def get_priority(self, item):\n \"\"\"Get the priority of ``item`` if it exists.\"\"\"\n try:\n return self.set[item][0]\n except KeyError:\n print(\"Can't get priority of non-existing item\")\n\n def pop(self):\n \"\"\"Remove and return the lowest priority task. Raise KeyError if empty.\"\"\"\n while self.heap:\n priority, count, smallest = hpq.heappop(self.heap)\n if smallest is not self.REMOVED:\n del self.set[smallest]\n return priority, smallest\n raise KeyError('pop from an empty priority queue')\n\n def remove(self, item):\n \"\"\"Mark an existing task as REMOVED.\"\"\"\n try:\n entry = self.set.pop(item)\n entry[-1] = self.REMOVED\n except KeyError:\n print(\"Can't remove a non-existing item\")\n\n def add(self, item, priority=0):\n \"\"\"Add a new item or update the priority of an existing task\"\"\"\n if item in self.set:\n self.remove(item)\n count = next(self.counter)\n entry = [priority, count, item]\n self.set[item] = entry\n hpq.heappush(self.heap, entry)\n","sub_path":"taquin/algorithm/utility.py","file_name":"utility.py","file_ext":"py","file_size_in_byte":2719,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"35667840","text":"import tensorflow as tf\nimport numpy as np\nfrom lavd.shares import BaseModel, BiRNN, CharCNNHW, CRF, Embedding\nfrom utils.logger import Progbar\n\n\nclass BiLSTMCRFModel(BaseModel):\n def __init__(self, config):\n super(BiLSTMCRFModel, self).__init__(config)\n self._init_configs()\n with tf.Graph().as_default():\n self._add_placeholders()\n self._build_model()\n self.logger.info(\"total params: {}\".format(self.count_params()))\n self._initialize_session()\n\n def _init_configs(self):\n vocab = self.load_dataset(self.cfg.vocab)\n self.word_dict, self.char_dict, self.label_dict = vocab[\"word_dict\"], vocab[\"char_dict\"], vocab[\"label_dict\"]\n del vocab\n self.word_size, self.char_size, self.label_size = len(self.word_dict), len(self.char_dict), len(self.label_dict)\n self.rev_word_dict = dict([(idx, word) for word, idx in self.word_dict.items()])\n self.rev_char_dict = dict([(idx, char) for char, idx in self.char_dict.items()])\n self.rev_label_dict = dict([(idx, tag) for tag, idx in self.label_dict.items()])\n\n def _get_feed_dict(self, data, is_train=False, lr=None):\n feed_dict = {self.words: data[\"words\"], self.seq_len: data[\"seq_len\"], self.chars: data[\"chars\"],\n self.char_seq_len: data[\"char_seq_len\"]}\n if \"labels\" in data:\n feed_dict[self.labels] = data[\"labels\"]\n feed_dict[self.is_train] = is_train\n if lr is not None:\n feed_dict[self.lr] = lr\n return feed_dict\n\n def _add_placeholders(self):\n self.words = tf.placeholder(tf.int32, shape=[None, None], name=\"words\")\n self.seq_len = tf.placeholder(tf.int32, shape=[None], name=\"seq_len\")\n self.chars = tf.placeholder(tf.int32, shape=[None, None, None], name=\"chars\")\n self.char_seq_len = tf.placeholder(tf.int32, shape=[None, None], name=\"char_seq_len\")\n self.labels = tf.placeholder(tf.int32, shape=[None, None], name=\"labels\")\n # hyper-parameters\n self.is_train = tf.placeholder(tf.bool, shape=[], name=\"is_train\")\n self.lr = tf.placeholder(tf.float32, name=\"learning_rate\")\n\n def _build_model(self):\n with tf.variable_scope(\"embeddings_op\"):\n # word table\n word_table = Embedding(self.word_size, self.cfg.word_dim, self.cfg.wordvec, self.cfg.tune_emb,\n self.cfg.word_project, scope=\"word_table\")\n word_emb = word_table(self.words)\n # char table\n char_table = Embedding(self.char_size, self.cfg.char_dim, None, True, False, scope=\"char_table\")\n char_emb = char_table(self.chars)\n\n with tf.variable_scope(\"computation_graph\"):\n # create module\n emb_dropout = tf.layers.Dropout(rate=self.cfg.emb_drop_rate)\n rnn_dropout = tf.layers.Dropout(rate=self.cfg.rnn_drop_rate)\n char_tdnn_hw = CharCNNHW(self.cfg.char_kernels, self.cfg.char_kernel_features, self.cfg.char_dim,\n self.cfg.highway_layers, padding=\"VALID\", activation=tf.nn.tanh, use_bias=True,\n hw_activation=tf.nn.tanh, reuse=False, scope=\"char_tdnn_hw\")\n bi_rnn = BiRNN(self.cfg.num_units, concat=self.cfg.concat_rnn, reuse=tf.AUTO_REUSE, scope=\"bi_rnn\")\n crf_layer = CRF(self.label_size, reuse=False, scope=\"crf\")\n\n # compute logits\n char_cnn = char_tdnn_hw(char_emb)\n emb = emb_dropout(tf.concat([word_emb, char_cnn], axis=-1), training=self.is_train)\n rnn_outputs, _ = bi_rnn(emb, self.seq_len)\n rnn_outputs = rnn_dropout(rnn_outputs, training=self.is_train)\n self.logits, self.transition, self.loss = crf_layer(rnn_outputs, self.labels, self.seq_len)\n\n optimizer = self._build_optimizer()\n if self.cfg.grad_clip is not None and self.cfg.grad_clip > 0:\n grads, vs = zip(*optimizer.compute_gradients(self.loss))\n grads, _ = tf.clip_by_global_norm(grads, self.cfg.grad_clip)\n self.train_op = optimizer.apply_gradients(zip(grads, vs))\n else:\n self.train_op = optimizer.minimize(self.loss)\n\n def _predict_op(self, data):\n feed_dict = self._get_feed_dict(data)\n logits, transition, seq_len = self.sess.run([self.logits, self.transition, self.seq_len], feed_dict=feed_dict)\n return self.viterbi_decode(logits, transition, seq_len)\n\n def train(self, dataset):\n self.logger.info(\"Start training...\")\n best_f1, no_imprv_epoch, init_lr, lr, cur_step = -np.inf, 0, self.cfg.lr, self.cfg.lr, 0\n for epoch in range(1, self.cfg.epochs + 1):\n self.logger.info(\"Epoch {}/{}:\".format(epoch, self.cfg.epochs))\n prog = Progbar(target=dataset.get_num_batches())\n for i, data in enumerate(dataset.get_data_batches()):\n cur_step += 1\n feed_dict = self._get_feed_dict(data, is_train=True, lr=lr)\n _, train_loss = self.sess.run([self.train_op, self.loss], feed_dict=feed_dict)\n prog.update(i + 1, [(\"Global Step\", int(cur_step)), (\"Train Loss\", train_loss)])\n # learning rate decay\n if self.cfg.use_lr_decay:\n if self.cfg.decay_step:\n lr = max(init_lr / (1.0 + self.cfg.lr_decay * epoch / self.cfg.decay_step), self.cfg.minimal_lr)\n # evaluate\n score = self.evaluate(dataset.get_data_batches(\"dev\"), name=\"dev\")\n self.evaluate(dataset.get_data_batches(\"test\"), name=\"test\")\n if score[\"FB1\"] > best_f1:\n best_f1, no_imprv_epoch = score[\"FB1\"], 0\n self.save_session(epoch)\n self.logger.info(\" -- new BEST score on dev dataset: {:04.2f}\".format(best_f1))\n else:\n no_imprv_epoch += 1\n if self.cfg.no_imprv_tolerance is not None and no_imprv_epoch >= self.cfg.no_imprv_tolerance:\n self.logger.info(\"early stop at {}th epoch without improvement\".format(epoch))\n self.logger.info(\"best score on dev set: {}\".format(best_f1))\n break\n\n def evaluate(self, dataset, name):\n all_data = list()\n for data in dataset:\n predicts = self._predict_op(data)\n all_data.append((data[\"labels\"], predicts, data[\"words\"], data[\"seq_len\"]))\n return self.evaluate_f1(all_data, self.rev_word_dict, self.rev_label_dict, name)\n","sub_path":"lavd/base_model.py","file_name":"base_model.py","file_ext":"py","file_size_in_byte":6558,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"19162620","text":"#!/usr/bin/env python\nimport cv2\n\n\"\"\"This program exists to show the syntax used by OpenCV to capture\nand display a camera feed to a window\"\"\"\n\ndef display_webcam():\n # capturing video from camera, storing in 'cap'. 0 selects the camera\n\n cap = cv2.VideoCapture(0)\n\n while True:\n # frame gets the next frame in the camera via cap\n # ret is a boolean for success of capturing frame\n ret, frame = cap.read()\n print(type(frame))\n cv2.imshow('This is a window!!', frame)\n\n if cv2.waitKey(1) & 0xFF == ord('q'):\n print('An error occured')\n break\n cap.release()\n cv2.destroyAllWindows()\n\nif __name__ == \"__main__\":\n display_webcam()\n","sub_path":"video_display.py","file_name":"video_display.py","file_ext":"py","file_size_in_byte":709,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"576948767","text":"from json import dumps, loads\nfrom models.campaign import Campaign\nfrom flask_restplus import reqparse\n\nclass CampaignController:\n def __init__(self, request):\n self.request = request\n\n def new(self):\n\n parser = reqparse.RequestParser()\n parser.add_argument('name', required=True)\n parser.add_argument('gameMaster', required=True)\n parser.add_argument('players', action='append')\n parser.add_argument('characters', action='append')\n parser.add_argument('rules', action='append')\n parse_result = parser.parse_args(req=self.request)\n\n campaign = Campaign.from_json(dumps(parse_result)).save()\n\n return \"{}\".format(campaign.id)\n\n @staticmethod\n def list():\n list_of_campaigns = list(map(lambda campaign: loads(campaign.to_json()), Campaign.objects.all()))\n return list_of_campaigns\n\n @staticmethod\n def get_element_detail(identifier):\n return Campaign.objects.get(id=identifier).to_json()\n \n def edit(self, identifier):\n campaign = Campaign.objects.get(id=identifier)\n parser = reqparse.RequestParser()\n parser.add_argument('name', required=False)\n parser.add_argument('gameMaster', required=False)\n parser.add_argument('players', required=False)\n parser.add_argument('characters', required=False)\n parser.add_argument('rules', required=False)\n parse_result = parser.parse_args(req=self.request)\n\n filtered_result = {k: v for k, v in parse_result.items() if v is not None}\n \n no_docs_updated = campaign.update(**filtered_result)\n\n if no_docs_updated == 1:\n new_campaign = Campaign.objects.get(id=identifier)\n return loads(new_campaign.to_json())\n \n @staticmethod\n def delete(id):\n target = Campaign.objects.get(id=id)\n target_data = loads(target.to_json())\n target.delete()\n\n return target_data","sub_path":"services/campaigns/controller/campaign_controller.py","file_name":"campaign_controller.py","file_ext":"py","file_size_in_byte":1949,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"57417407","text":"#Don't care if factoids isn't a word.\n#Enjoy the plugin!\n#TODO Implement \"|\"\n#TODO Restructure to detect who set the factoid, and maybe to support additional information\nplugName = 'Factoids'\n\nfact_prefix = '\\''\nfact_setPermissions = 0 #Permissions required to set factoids\nfact_usePermissions = 0 #Permissions required to use set factoids\n\ndef fact_isValid(msg, protocol):\n if len(msg) < 9:\n return 'Factoid too short.'\n elif msg[:7] != '':\n if protocol == 'irc' or protocol == 'furc':\n if msg[:8] == '':\n return True\n else:\n return 'Factoid doesn\\'t begin with \"\" or \"\".'\n else:\n return 'Factoid doesn\\'t begin with \"\".'\n return True\n\ndef fact_getResponse(msg, protocol):\n if msg[:7] != '' and msg[:8] == '':\n if protocol == 'irc':\n return '\\x01ACTION ' + msg[8:] + '\\x01'\n elif protocol == 'furc':\n return ':' + msg[8:]\n else:\n return msg[7:]\n return False\n\ndef fact_remember(inMSG):\n if getPermission(inMSG) < fact_setPermissions:\n return\n\n splitMSG = inMSG[0].split(None, 2)\n if len(splitMSG) != 3:\n return\n\n conn = sqlite3.connect(dbLoc)\n\n if getSetting('Facts', splitMSG[1], conn):\n conn.close()\n return 'Factoid \"'+splitMSG[1]+'\" already exists.'\n\n validResponse = fact_isValid(splitMSG[2], inMSG[1])\n\n if validResponse != True:\n conn.close()\n return validResponse\n\n setSetting('Facts', splitMSG[1], (splitMSG[2],), ('Value',), conn)\n conn.close()\n return '\"'+splitMSG[1]+'\" added.'\n\ndef fact_forget(inMSG):\n if getPermission(inMSG) < fact_setPermissions:\n return\n\n splitMSG = inMSG[0].split()\n if len(splitMSG) != 2:\n return\n\n if not delSetting('Facts', splitMSG[1]):\n return 'Error deleting \"'+splitMSG[1]+'\" (Probably doesn\\'t exist).'\n\n return '\"'+splitMSG[1]+'\" deleted.'\n\ndef fact_replace(inMSG):\n if getPermission(inMSG) < fact_setPermissions:\n return\n\n splitMSG = inMSG[0].split(None, 2)\n if len(splitMSG) != 3:\n return\n\n conn = sqlite3.connect(dbLoc)\n\n if not getSetting('Facts', splitMSG[1], conn):\n conn.close()\n return 'Factoid \"'+splitMSG[1]+'\" does not exist.'\n\n validResponse = fact_isValid(splitMSG[2], inMSG[1])\n\n if validResponse != True:\n conn.close()\n return validResponse\n\n setSetting('Facts', splitMSG[1], (splitMSG[2],), ('Value',), conn)\n conn.close()\n return '\"'+splitMSG[1]+'\" replaced.'\n\ndef fact_getFact(inMSG):\n if (not inMSG or len(inMSG) != 6 or getPermission(inMSG) < fact_usePermissions or\n len(inMSG[0]) < len(fact_prefix)+1 or inMSG[0][:len(fact_prefix)] != fact_prefix):\n return\n\n splitMSG = inMSG[0].split()\n if len(splitMSG) > 2:\n return\n elif len(splitMSG) == 2:\n who = splitMSG[1]\n else:\n who = inMSG[4]\n \n fact = getSetting('Facts', splitMSG[0][len(fact_prefix):])\n\n if fact:\n validResponse = fact_isValid(fact[0][1], inMSG[1])\n else:\n return\n\n if validResponse != True:\n sendMSG(validResponse, inMSG[1], inMSG[2], inMSG[3])\n\n validResponse = fact_getResponse(fact[0][1], inMSG[1])\n\n if validResponse:\n sendMSG(validResponse.replace('$inp$', who), inMSG[1], inMSG[2], inMSG[3])\n\ndef load():\n global funcs\n funcs = dict(list(funcs.items()) + [('rem',fact_remember), ('rep',fact_replace), ('f',fact_forget)])\n return fact_getFact\n","sub_path":"3.4/plugins/factoids.py","file_name":"factoids.py","file_ext":"py","file_size_in_byte":3562,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"115622866","text":"# 二分搜索 O(lgn)\nclass Solution(object):\n def findMedianSortedArrays(self, nums1, nums2):\n \"\"\"\n :type nums1: List[int]\n :type nums2: List[int]\n :rtype: float\n \"\"\"\n length = len(nums1) + len(nums2)\n if length&0x1 == 1:\n return self.findKth(nums1, nums2, length/2+1)\n else:\n n1 = self.findKth(nums1, nums2, length/2+1)\n n2 = self.findKth(nums1, nums2, length/2)\n return (n1 + n2)/2.0\n\n def findKth(self, nums1, nums2, k):\n if len(nums1) > len(nums2):\n return self.findKth(nums2, nums1, k)\n if not nums1:\n return nums2[k-1]\n if k == 1:\n return min(nums1[0], nums2[0])\n in1 = min(len(nums1), k/2)\n in2 = k - in1\n if nums1[in1-1] == nums2[in2-1]:\n return nums1[in1-1]\n elif nums1[in1-1] < nums2[in2-1]:\n return self.findKth(nums1[in1:], nums2, k-in1)\n else:\n return self.findKth(nums1, nums2[in2:], k-in2)\n\n# 二路归并 O(n)\nclass Solution(object):\n def findMedianSortedArrays(self, nums1, nums2):\n \"\"\"\n :type nums1: List[int]\n :type nums2: List[int]\n :rtype: float\n \"\"\"\n nums = self.merge(nums1, nums2)\n length = len(nums)\n if length%2 == 0:\n return (nums[length/2] + nums[length/2 -1])/2.0\n else:\n return nums[length/2]\n \n def merge(self, nums1, nums2):\n nums, length = [], len(nums1) + len(nums2)\n index1, index2 = 0, 0\n nums1.append(sys.maxint)\n nums2.append(sys.maxint)\n for i in range(length):\n if nums1[index1] < nums2[index2]:\n nums.append(nums1[index1])\n index1 += 1\n else:\n nums.append(nums2[index2])\n index2 += 1\n return nums\n","sub_path":"004_Median_of_Two_Sorted_Arrays/median_of_two_sorted_arrays.py","file_name":"median_of_two_sorted_arrays.py","file_ext":"py","file_size_in_byte":1889,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"187709053","text":"from django.conf.urls import url, include\nfrom django.contrib.auth import views as auth_views\n\nfrom adminio.views import *\n\nurlpatterns = [\n url(r'^$', index, name=\"index\"),\n # (r\"^media/(.+)\", static.serve, {\"document_root\": settings.MEDIA_ROOT}),\n url(r'^login/$', login_view),\n url(r'^logout/$', auth_views.logout, {'next_page': '/'}, name=\"logout\"),\n\n url(r'^journal/$', journal, name='journal'),\n url(r'^journal/object/$', object_journal, name='object_journal'),\n url(r'^journal/([^/]+)/$', journal_event, name='journal_record'),\n\n url(r'^global_groups/$', global_groups, name='global_groups'),\n url(r'^global_groups/add/$', global_group_edit, name='global_group_add'),\n url(r'^global_groups/([^/]+)/edit/$', global_group_edit, name='global_group_edit'),\n\n]\n","sub_path":"adminio/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":793,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"575616296","text":"''' Test identifying faces for a POLYDATA model.\n'''\nimport sv\nimport sys\nimport vtk\nsys.path.insert(1, '../graphics/')\nimport graphics as gr\n\n## Create renderer and graphics window.\nwin_width = 500\nwin_height = 500\nrenderer, renderer_window = gr.init_graphics(win_width, win_height)\n\n## Create a modeler.\nfile_name = \"../data/models/cylinder.stl\"\nfile_name = \"../data/DemoProject/Models/demo.vtp\"\nmodeler = sv.modeling.Modeler(sv.modeling.Kernel.POLYDATA)\nmodel = modeler.read(file_name)\nprint(\"Model type: \" + str(type(model)))\n\n## Compute model face IDs for STL file.\nif 'stl' in file_name:\n face_ids = model.compute_boundary_faces(angle=60.0)\nface_ids = model.get_face_ids()\nprint(\"Number of model Face IDs: {0:d}\".format(len(face_ids)))\n#print(\"Model Face IDs: {0:s}\".format(str(face_ids)))\n\n## Identify the model faces caps.\nface_caps = model.identify_caps()\n#print(face_types)\n\n## Show the caps.\nnum_caps = 0\nfor face_id,is_cap in zip(face_ids, face_caps):\n face_polydata = model.get_face_polydata(face_id=face_id)\n if is_cap:\n gr.add_geometry(renderer, face_polydata, color=[1.0, 0.0, 0.0], wire=False)\n num_caps += 1\n else:\n gr.add_geometry(renderer, face_polydata, color=[0.0, 1.0, 0.0], wire=False)\n\nprint(\"Number of caps: \" + str(num_caps))\n\n# Display window.\ngr.display(renderer_window)\n\n","sub_path":"new-api-tests/modeling/identify-faces-polydata.py","file_name":"identify-faces-polydata.py","file_ext":"py","file_size_in_byte":1334,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"277814325","text":"from django.db import connection, models\nfrom django.db.models import F\nfrom django.db.models.signals import m2m_changed\nfrom django.dispatch import receiver\nfrom django.utils.translation import ugettext_lazy as _\nfrom future.builtins import super\nfrom mezzanine.conf import settings\n\nfrom ffcsa.shop.models import Product, ProductVariation\nfrom ffcsa.shop.models.Discount import Discount\n\n\nclass Sale(Discount):\n \"\"\"\n Stores sales field values for price and date range which when saved\n are then applied across products and variations according to the\n selected categories and products for the sale.\n \"\"\"\n\n class Meta:\n verbose_name = _(\"Sale\")\n verbose_name_plural = _(\"Sales\")\n\n def save(self, *args, **kwargs):\n super(Sale, self).save(*args, **kwargs)\n self.update_products()\n\n def update_products(self):\n \"\"\"\n Apply sales field value to products and variations according\n to the selected categories and products for the sale.\n \"\"\"\n self._clear()\n if self.active:\n extra_filter = {}\n if self.discount_deduct is not None:\n # Don't apply to prices that would be negative\n # after deduction.\n extra_filter[\"unit_price__gt\"] = self.discount_deduct\n sale_price = models.F(\"unit_price\") - self.discount_deduct\n elif self.discount_percent is not None:\n sale_price = models.F(\"unit_price\") - (\n F(\"unit_price\") / \"100.0\" * self.discount_percent)\n elif self.discount_exact is not None:\n # Don't apply to prices that are cheaper than the sale\n # amount.\n extra_filter[\"unit_price__gt\"] = self.discount_exact\n sale_price = self.discount_exact\n else:\n return\n products = self.all_products()\n variations = ProductVariation.objects.filter(product__in=products)\n for priced_objects in (products, variations):\n update = {\"sale_id\": self.id,\n \"sale_price\": sale_price,\n \"sale_to\": self.valid_to,\n \"sale_from\": self.valid_from}\n using = priced_objects.db\n if \"mysql\" not in settings.DATABASES[using][\"ENGINE\"]:\n priced_objects.filter(**extra_filter).update(**update)\n else:\n # Work around for MySQL which does not allow update\n # to operate on subquery where the FROM clause would\n # have it operate on the same table, so we update\n # each instance individually: http://bit.ly/1xMOGpU\n #\n # Also MySQL may raise a 'Data truncated' warning here\n # when doing a calculation that exceeds the precision\n # of the price column. In this case it's safe to ignore\n # it and the calculation will still be applied, but\n # we need to massage transaction management in order\n # to continue successfully: http://bit.ly/1xMOJCd\n for priced in priced_objects.filter(**extra_filter):\n for field, value in list(update.items()):\n setattr(priced, field, value)\n try:\n priced.save()\n except Warning:\n connection.set_rollback(False)\n\n def delete(self, *args, **kwargs):\n \"\"\"\n Clear this sale from products when deleting the sale.\n \"\"\"\n self._clear()\n super(Sale, self).delete(*args, **kwargs)\n\n def _clear(self):\n \"\"\"\n Clears previously applied sale field values from products prior\n to updating the sale, when deactivating it or deleting it.\n \"\"\"\n update = {\"sale_id\": None, \"sale_price\": None,\n \"sale_from\": None, \"sale_to\": None}\n for priced_model in (Product, ProductVariation):\n priced_model.objects.filter(sale_id=self.id).update(**update)\n\n\n@receiver(m2m_changed, sender=Sale.products.through)\ndef sale_update_products(sender, instance, action, *args, **kwargs):\n \"\"\"\n Signal for updating products for the sale - needed since the\n products won't be assigned to the sale when it is first saved.\n \"\"\"\n if action == \"post_add\":\n instance.update_products()\n","sub_path":"ffcsa/shop/models/Sale.py","file_name":"Sale.py","file_ext":"py","file_size_in_byte":4541,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"372441928","text":"n,m=map(int,input().split())\nab=[]\nfor i in range(m):\n ab.append(list(map(int,input().split())))\n\nans=0\ndef dfs(start,finish):\n global ans\n finish.add(start)#もう訪れたところ\n # print(finish)\n if len(finish)==n:#全部回ってたら終了\n ans+=1#橋じゃない数\n return \n for i in adlist[start-1]:\n\n if i not in finish:#通ったところに入っていないなら通れるということ \n dfs(i,finish)\n\nfor i in range(m):\n adlist=[]\n for h in range(n): \n adlist.append([])\n for j in range(m):\n if i!=j:\n x=ab[j][0]\n y=ab[j][1]\n adlist[x-1].append(y)\n adlist[y-1].append(x)\n finish=set()\n #print(adlist)\n dfs(1,finish)#引数がけすノード\n \n#print(adlist)\n\nprint(m-ans)\n\n\"\"\"\niのノードを消したときすべてのところを回れるかというう動作をn回繰り返す\nこればつ\nノードを消していたから違う線を消さなければならない\nだからそれぞれのはしがないと金の隣接リストを作らないイケナイ\n\"\"\"","sub_path":"ABC/75/C.py","file_name":"C.py","file_ext":"py","file_size_in_byte":1117,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"618105934","text":"import h5py\nimport numpy as np\nfrom i3d_inception import Inception_Inflated3d\nimport keras\nfrom keras import optimizers\nfrom keras.callbacks import TensorBoard, ModelCheckpoint, CSVLogger\nfrom keras.preprocessing.image import ImageDataGenerator\n\nFRAME_HEIGHT = 224\nFRAME_WIDTH = 224\nNUM_FRAMES = 118\nNUM_CLASSES = 10\nBATCH = 4\n\n#Training I3D 5 module inception freeze transfer learning imagenet dan kinetic\n\nhf = h5py.File(\"sibi_rgb_normal.h5\",\"r\")\ntrain_label_file = open(\"train_sibi_1.txt\",\"r\")\ntest_label_file = open(\"test_validation_sibi_1.txt\",\"r\")\nvalidation_label_file = open(\"test_validation_sibi_1.txt\",\"r\")\ntrain_raw_labels = train_label_file.read().split(\"\\n\")\ntest_raw_labels = test_label_file.read().split(\"\\n\")\nvalidation_raw_labels = validation_label_file.read().split(\"\\n\")\ntrain_labels = keras.utils.to_categorical(np.array(train_raw_labels),num_classes = NUM_CLASSES)\ntest_labels = keras.utils.to_categorical(np.array(test_raw_labels),num_classes = NUM_CLASSES)\nvalidation_labels = keras.utils.to_categorical(np.array(validation_raw_labels),num_classes = NUM_CLASSES)\n\n\ndef generator(type):\n i=0\n counter = 0\n if type==\"train\":\n while True:\n batch_features = np.zeros((BATCH,NUM_FRAMES, FRAME_WIDTH, FRAME_HEIGHT,3))\n batch_labels = np.zeros((BATCH,NUM_CLASSES))\n for i in range(BATCH):\n batch_features[i] = hf[\"train\"][counter%120]\n batch_labels[i] = train_labels[counter%120]\n # print(\"Index: \"+str(i)+\", Counter: \"+str(counter))\n # print(batch_labels)\n counter+=1\n yield batch_features,batch_labels\n elif type==\"test\":\n while True:\n batch_features = np.zeros((1,NUM_FRAMES, FRAME_WIDTH, FRAME_HEIGHT,3))\n batch_labels = np.zeros((1,NUM_CLASSES))\n for i in range(1):\n batch_features[i] = hf[\"test\"][counter%40]\n batch_labels[i] = test_labels[counter%40]\n # print(\"Index: \"+str(i))\n # print(batch_labels)\n counter+=1\n yield batch_features,batch_labels\n elif type==\"validation\":\n while True:\n batch_features = np.zeros((BATCH,NUM_FRAMES, FRAME_WIDTH, FRAME_HEIGHT,3))\n batch_labels = np.zeros((BATCH,NUM_CLASSES))\n for i in range(BATCH):\n batch_features[i] = hf[\"validation\"][counter%40]\n batch_labels[i] = validation_labels[counter%40]\n # print(\"Index: \"+str(i))\n # print(batch_labels)\n counter+=1\n yield batch_features,batch_labels\n\nrgb_model = Inception_Inflated3d(\n include_top=False,\n weights='rgb_imagenet_and_kinetics',\n input_shape=(NUM_FRAMES, FRAME_HEIGHT, FRAME_WIDTH,3),\n classes=NUM_CLASSES,endpoint_logit=False)\n\nopt = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=1e-6)\n\nindex_freeze_layer = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,32,33,34,35,36,37,39,40,41,42,43,44,45,46,47,48,49,50,53,54,55,56,57,58,60,61,62,63,64,65,66,67,68,69,70,71,73,74,75,76,77,78,80,81,82,83,84,85,86,87,88,89,90,91,93,94,95,96,97,98,100,101,102,103,104,105,106,107,108,109,110,111]\ncc = 0\nfor layer in rgb_model.layers:\n print(\"Layer - \"+str(cc)+\" \"+layer.name)\n if cc in index_freeze_layer:\n print(\"jadi false\")\n layer.trainable = False\n print(\"Trainable = \"+str(layer.trainable)+\"\\n\")\n cc+=1\n\nrgb_model.summary()\n\nrgb_model.compile(loss=keras.losses.categorical_crossentropy,\n optimizer=opt,\n metrics=['accuracy'])\n\nbest_checkpoint = ModelCheckpoint('sibi_rgb_normal_4_weights_best.hdf5', monitor='val_acc', verbose=1, save_best_only=True, mode='max')\ncheckpoint = ModelCheckpoint('sibi_rgb_normal_4_weights_epoch.hdf5', monitor='val_acc', verbose=1, save_best_only=False, mode='max')\ncsv_logger = CSVLogger('sibi_rgb_normal_4.log', append=False)\ntensorboard = TensorBoard(log_dir='./sibi_rgb_normal_4_tf-logs')\ncallbacks_list = [checkpoint,best_checkpoint, csv_logger, tensorboard]\n\n\n# len(hf[\"train\"])\n# len(hf[\"validation\"])\n# rgb_model.fit_generator(generator(\"train\"), steps_per_epoch=120//BATCH, epochs=200, callbacks=callbacks_list,shuffle=True,validation_data = generator(\"validation\"),validation_steps=40//BATCH)\n\n# score = rgb_model.predict_generator(generator(\"test\"),steps=40)\n# np.save(\"sibi_rgb_normal_result_4\",score)\n# print('Test loss:', score[0])\n# print('Test accuracy:', score[1])\n\nhf.close()\n","sub_path":"train_sibi_rgb_normal_4.py","file_name":"train_sibi_rgb_normal_4.py","file_ext":"py","file_size_in_byte":4350,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"252661166","text":"class A:\n classvar1 = \"I am a class variable in class A\"\n def __init__(self):\n self.var1 = \"I am inside class A constructor\"\n self.classvar1 = \"Instance variable in class A\"\n self.special = \"Special\"\n\nclass B(A):\n classvar1 = \"I am in class B\"\n def __init__(self):\n #super().__init__() # Using super class here prints var1 and classvar1 values as mentioned inside class B constructor as the values are overwritten when class B runs\n self.var1 = \"I am inside class B constructor\"\n self.classvar1 = \"Instance variable in class B\"\n super().__init__() # Using super class here prints var1 and classvar1 values as mentioned inside class A constructor as the class A already ran above.\n\n\na = A()\nb = B()\nprint(b.special, b.var1, b.classvar1)\n\n","sub_path":"11c. Super() and Overriding.py","file_name":"11c. Super() and Overriding.py","file_ext":"py","file_size_in_byte":798,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"622296111","text":"# Batch file for applying an object detection graph to a COCO style dataset,\n# cropping images to the detected animals inside and creating a COCO-\n# style classification dataset out of it. It also saves the detections \n# to a file using pickle\n\nimport numpy as np\nimport os\nimport tqdm\nimport pickle\nimport matplotlib; matplotlib.use('Agg')\nfrom pycocotools.coco import COCO\nfrom PIL import Image\nimport argparse\nimport random\nimport json\nimport sys\nsys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)),\n '../../tfrecords/utils'))\nif sys.version_info.major >= 3:\n import create_tfrecords_py3 as tfr\nelse:\n import create_tfrecords as tfr\nimport uuid\n\nprint('If you run into import errors, please make sure you added \"models/research\" and ' +\\\n ' \"models/research/object_detection\" of the tensorflow models repo to the PYTHONPATH\\n\\n')\nimport tensorflow as tf\nfrom object_detection.utils import ops as utils_ops\nfrom utils import label_map_util\nfrom utils import visualization_utils as vis_util\nfrom distutils.version import StrictVersion\nif StrictVersion(tf.__version__) < StrictVersion('1.9.0'):\n raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!')\n\n\n########################################################## \n### Configuration\n\n# Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file. \nparser = argparse.ArgumentParser()\nparser.add_argument(\"input_json\", type=str, default='CaltechCameraTraps.json',\n help='COCO style dataset annotation')\nparser.add_argument('image_dir', type=str, default='./images/cct_images',\n help='Root folder of the images, as used in the annotations file')\nparser.add_argument('frozen_graph', type=str, default='frozen_inference_graph.pb',\n help='Frozen graph of detection network as create by export_inference_graph.py of TFODAPI.')\n#parser.add_argument('detections_output', type=str, default='detections_final.pkl',\n# help='Pickle file with the detections, which can be used for cropping later on.')\n\nparser.add_argument('--coco_style_output', type=str, default=None,\n help='Output directory for a dataset in COCO format.')\nparser.add_argument('--tfrecords_output', type=str, default=None,\n help='Output directory for a dataset in TFRecords format.')\nparser.add_argument('--location_key', type=str, default='location', metavar='location',\n help='Key in the image-level annotations that specifies the splitting criteria. ' + \\\n 'Usually we split camera-trap datasets by locations, i.e. training and testing locations. ' + \\\n 'In this case, you probably want to pass something like `--split_by location`. ' + \\\n 'The script prints the annotation of a randomly selected image which you can use for reference.')\n\nparser.add_argument('--exclude_categories', type=str, nargs='+', default=[],\n help='Categories to ignore. We will not run detection on images of that categorie and will ' + \\\n 'not use them for the classification dataset.')\nparser.add_argument('--use_detection_file', type=str, default=None,\n help='Uses existing detections from a file generated by this script. You can use this ' + \\\n 'to continue a partially processed dataset. ')\nparser.add_argument('--detection_threshold', type=float, default=0.5,\n help='Threshold for detections to use. Default is 0.5.')\nparser.add_argument('--padding_factor', type=float, default=1.3*1.3,\n help='We will crop a tight square box around the animal enlarged by this factor. ' + \\\n 'Default is 1.3 * 1.3 = 1.69, which accounts for the cropping at test time and for' + \\\n ' a reasonable amount of context')\nparser.add_argument('--test_fraction', type=float, default=0.2,\n help='Proportion of the locations used for testing, should be in [0,1]. Default: 0.2')\nparser.add_argument('--ims_per_record', type=int, default=200,\n help='Number of images to store in each tfrecord file')\nargs = parser.parse_args()\n\n\n##########################################################\n### The actual code\n\n# Check arguments\nINPUT_JSON = args.input_json\nassert os.path.exists(INPUT_JSON), INPUT_JSON + ' does not exist'\nIMAGE_DIR = args.image_dir\nassert os.path.exists(IMAGE_DIR), IMAGE_DIR + ' does not exist'\n# /ai4edevfs/models/object_detection/faster_rcnn_inception_resnet_v2_atrous/megadetector/frozen_inference_graph.pb\nPATH_TO_FROZEN_GRAPH = args.frozen_graph\nCOCO_OUTPUT_DIR = args.coco_style_output\nTFRECORDS_OUTPUT_DIR = args.tfrecords_output\nassert COCO_OUTPUT_DIR or TFRECORDS_OUTPUT_DIR, 'Please provide either --coco_style_output or --tfrecords_output'\nif COCO_OUTPUT_DIR:\n DETECTION_OUTPUT = os.path.join(COCO_OUTPUT_DIR, 'detections_final.pkl')\nelse:\n DETECTION_OUTPUT = os.path.join(TFRECORDS_OUTPUT_DIR, 'detections_final.pkl')\n\nDETECTION_INPUT = args.use_detection_file\nif DETECTION_INPUT:\n assert os.path.exists(DETECTION_INPUT), DETECTION_INPUT + ' does not exist'\n\nSPLIT_BY = args.location_key\nEXCLUDED_CATEGORIES = args.exclude_categories\n\n# Detection threshold should be in [0,1]\nDETECTION_THRESHOLD = args.detection_threshold\nassert DETECTION_THRESHOLD >= 0 and DETECTION_THRESHOLD <= 1, 'Detection threshold should be in [0,1]'\n\n# Padding around the detected objects when cropping\n# 1.3 for the cropping during test time and 1.3 for \n# the context that the CNN requires in the left-over \n# image\nPADDING_FACTOR = args.padding_factor\nassert PADDING_FACTOR >= 1, 'Padding factor should be equal or larger 1'\n\n# Fraction of locations used for testing\nTEST_FRACTION = args.test_fraction\nassert TEST_FRACTION >= 0 and TEST_FRACTION <= 1, 'test_fraction should be a value in [0,1]'\n\nIMS_PER_RECORD = args.ims_per_record\nassert IMS_PER_RECORD > 0, 'The number of images per shard should be greater than 0'\n\nTMP_IMAGE = str(uuid.uuid4()) + '.jpg'\n\n# Create output directories\nif COCO_OUTPUT_DIR and not os.path.exists(COCO_OUTPUT_DIR):\n print('Creating COCO-style dataset output directory.')\n os.makedirs(COCO_OUTPUT_DIR)\nif TFRECORDS_OUTPUT_DIR and not os.path.exists(TFRECORDS_OUTPUT_DIR):\n print('Creating TFRecords output directory.')\n os.makedirs(TFRECORDS_OUTPUT_DIR)\nif not os.path.exists(os.path.dirname(DETECTION_OUTPUT)):\n print('Creating output directory for detection file.')\n os.makedirs(os.path.dirname(DETECTION_OUTPUT))\n\n# Load a (frozen) Tensorflow model into memory.\ndetection_graph = tf.Graph()\nwith detection_graph.as_default():\n od_graph_def = tf.GraphDef()\n with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:\n serialized_graph = fid.read()\n od_graph_def.ParseFromString(serialized_graph)\n tf.import_graph_def(od_graph_def, name='')\ngraph = detection_graph\n\n# Load COCO style annotations from the input dataset\ncoco = COCO(INPUT_JSON)\n\n# Get all categories, their names, and create an updated ID for the json file \ncategories = coco.loadCats(coco.getCatIds())\ncat_id_to_names = {cat['id']:cat['name'] for cat in categories}\ncat_id_to_new_id = {old_key:idx for idx,old_key in enumerate(cat_id_to_names.keys())}\nprint('All categories: \\n\"{}\"\\n'.format('\", \"'.join(cat_id_to_names.values())))\nfor ignore_cat in EXCLUDED_CATEGORIES:\n assert ignore_cat in cat_id_to_names.values(), 'Category %s does not exist in the dataset'%ignore_cat\n\n\n# Prepare the coco-style json files\ntraining_json = dict(images=[], categories=[], annotations=[])\ntest_json = dict(images=[], categories=[], annotations=[])\n\nfor old_cat_id in cat_id_to_names.keys():\n training_json['categories'].append(dict(id = cat_id_to_new_id[old_cat_id], \n name=cat_id_to_names[old_cat_id],\n supercategory='entity'))\ntest_json['categories'] = training_json['categories']\n\n# Split the dataset by locations\nrandom.seed(0)\nprint('Example of the annotation of a single image:')\nprint(list(coco.imgs.items())[0])\nprint('The corresponding category annoation:')\nprint(coco.imgToAnns[list(coco.imgs.items())[0][0]])\nlocations = set([ann[SPLIT_BY] for ann in coco.imgs.values()])\ntest_locations = sorted(random.sample(sorted(locations), max(1, int(TEST_FRACTION * len(locations)))))\ntraining_locations = sorted(list(set(locations) - set(test_locations)))\nprint('{} locations in total, {} will be used for training, {} for testing'.format(len(locations), \n len(training_locations),\n len(test_locations)))\nprint('Training uses locations ', sorted(training_locations))\nprint('Testing uses locations ', sorted(test_locations))\n\n# Load detections\nif DETECTION_INPUT:\n print('Loading existing detections from ' + DETECTION_INPUT)\n with open(DETECTION_INPUT, 'rb') as f:\n detections = pickle.load(f)\nelse:\n detections = dict()\n\n# TFRecords variables\nclass TFRecordsWriter(object):\n def __init__(self, output_file, ims_per_record):\n self.output_file = output_file\n self.ims_per_record = ims_per_record\n self.next_shard_idx = 0\n self.next_shard_img_idx = 0\n self.coder = tfr.ImageCoder()\n self.writer = None\n\n def add(self, data):\n if self.next_shard_img_idx % self.ims_per_record == 0:\n if self.writer:\n self.writer.close()\n self.writer = tf.python_io.TFRecordWriter(self.output_file%self.next_shard_idx)\n self.next_shard_idx = self.next_shard_idx + 1\n image_buffer, height, width = tfr._process_image(data['filename'], self.coder)\n example = tfr._convert_to_example(data, image_buffer, data['height'], data['width'])\n self.writer.write(example.SerializeToString())\n self.next_shard_img_idx = self.next_shard_img_idx + 1\n\n def close(self):\n if self.next_shard_idx == 0 and self.next_shard_img_idx == 0:\n print('WARNING: No images were written to tfrecords!')\n if self.writer:\n self.writer.close()\n\nif TFRECORDS_OUTPUT_DIR:\n training_tfr_writer = TFRecordsWriter(os.path.join(TFRECORDS_OUTPUT_DIR, 'train-%.5d'), IMS_PER_RECORD)\n test_tfr_writer = TFRecordsWriter(os.path.join(TFRECORDS_OUTPUT_DIR, 'test-%.5d'), IMS_PER_RECORD)\nelse:\n training_tfr_writer = None\n test_tfr_writer = None\n\n# The detection part\nimages_missing = False\nwith graph.as_default():\n with tf.Session() as sess:\n ### Preparations: get all the output tensors\n ops = tf.get_default_graph().get_operations()\n all_tensor_names = {output.name for op in ops for output in op.outputs}\n tensor_dict = {}\n for key in [\n 'num_detections', 'detection_boxes', 'detection_scores',\n 'detection_classes'\n ]:\n tensor_name = key + ':0'\n if tensor_name in all_tensor_names:\n tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(\n tensor_name)\n if 'detection_masks' in tensor_dict:\n # The following processing is only for single image\n detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])\n # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.\n real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)\n detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])\n image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')\n\n # For all images listed in the annotations file\n next_image_id = 0\n next_annotation_id = 0\n for cur_image_id in tqdm.tqdm(list(sorted([vv['id'] for vv in coco.imgs.values()]))):\n cur_image = coco.loadImgs([cur_image_id])[0]\n cur_file_name = cur_image['file_name']\n # Path to the input image\n in_file = os.path.join(IMAGE_DIR, cur_file_name)\n # Skip the image if it is annotated with more than one category\n if len(set([ann['category_id'] for ann in coco.imgToAnns[cur_image['id']]])) != 1:\n continue\n # Get category ID for this image\n cur_cat_id = coco.imgToAnns[cur_image['id']][0]['category_id']\n # ... and the corresponding category name\n cur_cat_name = cat_id_to_names[cur_cat_id]\n # The remapped category ID for our json file\n cur_json_cat_id = cat_id_to_new_id[cur_cat_id]\n # Whether it belongs to a training or testing location\n is_train = cur_image[SPLIT_BY] in training_locations\n\n # Skip excluded categories\n if cur_cat_name in EXCLUDED_CATEGORIES:\n continue\n\n # If we already have detection results, we can use them\n if cur_image_id in detections.keys():\n output_dict = detections[cur_image_id]\n # Otherwise run detector\n else:\n # We allow to skip images, which we do not have available right now\n # This is useful for processing parts of large datasets\n if not os.path.isfile(os.path.join(IMAGE_DIR, cur_file_name)):\n if not images_missing:\n print('Could not find ' + cur_file_name)\n print('Suprresing any further warnings about missing files.')\n images_missing = True\n continue\n\n # Load image\n image = np.array(Image.open(os.path.join(IMAGE_DIR, cur_file_name)))\n if image.dtype != np.uint8:\n print('Failed to load image ' + cur_file_name)\n continue\n\n # Run inference\n output_dict = sess.run(tensor_dict,\n feed_dict={image_tensor: np.expand_dims(image, 0)})\n\n # all outputs are float32 numpy arrays, so convert types as appropriate\n output_dict['num_detections'] = int(output_dict['num_detections'][0])\n output_dict['detection_classes'] = output_dict[\n 'detection_classes'][0].astype(np.uint8)\n output_dict['detection_boxes'] = output_dict['detection_boxes'][0]\n output_dict['detection_scores'] = output_dict['detection_scores'][0]\n if 'detection_masks' in output_dict:\n output_dict['detection_masks'] = output_dict['detection_masks'][0]\n\n # Add detections to the collection\n detections[cur_image_id] = output_dict\n\n imsize = cur_image['width'], cur_image['height']\n # Select detections with a confidence larger DETECTION_THRESHOLD\n selection = output_dict['detection_scores'] > DETECTION_THRESHOLD\n # Skip if no detection selected\n if np.sum(selection) < 1 or selection.size == 0:\n continue\n # Get these boxes and convert normalized coordinates to pixel coordinates\n selected_boxes = (output_dict['detection_boxes'][selection] * np.tile([imsize[1],imsize[0]], (1,2)))\n # Pad the detected animal to a square box and additionally by PADDING_FACTOR, the result will be in crop_boxes\n # However, we need to make sure that it box coordinates are still within the image\n bbox_sizes = np.vstack([selected_boxes[:,2] - selected_boxes[:,0], selected_boxes[:,3] - selected_boxes[:,1]]).T\n offsets = (PADDING_FACTOR * np.max(bbox_sizes, axis=1, keepdims=True) - bbox_sizes) / 2\n crop_boxes = selected_boxes + np.hstack([-offsets,offsets])\n crop_boxes = np.maximum(0,crop_boxes).astype(int)\n # For each detected bounding box with high confidence, we will\n # crop the image to the padded box and save it\n for box_id in range(selected_boxes.shape[0]):\n # bbox is the detected box, crop_box the padded / enlarged box\n bbox, crop_box = selected_boxes[box_id], crop_boxes[box_id]\n if COCO_OUTPUT_DIR:\n # The file path as it will appear in the annotation json\n new_file_name = os.path.join(cur_cat_name, cur_file_name)\n # Add numbering to the original file name if there are multiple boxes\n if selected_boxes.shape[0] > 1:\n new_file_base, new_file_ext = os.path.splitext(new_file_name)\n new_file_name = '{}_{}{}'.format(new_file_base, box_id, new_file_ext)\n # The absolute file path where we will store the image\n # Only used if an coco-style dataset is created\n out_file = os.path.join(COCO_OUTPUT_DIR, new_file_name)\n # Create the category directories if necessary\n os.makedirs(os.path.dirname(out_file), exist_ok=True)\n if not os.path.exists(out_file):\n try:\n img = np.array(Image.open(in_file))\n cropped_img = img[crop_box[0]:crop_box[2], crop_box[1]:crop_box[3]]\n Image.fromarray(cropped_img).save(out_file)\n except ValueError:\n continue\n except FileNotFoundError:\n continue\n else:\n # if COCO_OUTPUT_DIR is set, then we will only use the shape\n # of cropped_img in the following code. So instead of reading \n # cropped_img = np.array(Image.open(out_file))\n # we can speed everything up by reading only the size of the image\n cropped_img = np.zeros((3,) + Image.open(out_file).size).T\n else:\n out_file = TMP_IMAGE\n try:\n img = np.array(Image.open(in_file))\n cropped_img = img[crop_box[0]:crop_box[2], crop_box[1]:crop_box[3]]\n Image.fromarray(cropped_img).save(out_file)\n except ValueError:\n continue\n except FileNotFoundError:\n continue\n \n \n # Read the image\n if COCO_OUTPUT_DIR:\n # Add annotations to the appropriate json\n if is_train:\n cur_json = training_json\n cur_tfr_writer = training_tfr_writer\n else:\n cur_json = test_json\n cur_tfr_writer = test_tfr_writer\n cur_json['images'].append(dict(id=next_image_id,\n width=cropped_img.shape[1],\n height=cropped_img.shape[0],\n file_name=new_file_name,\n original_key=cur_image_id))\n cur_json['annotations'].append(dict(id=next_annotation_id,\n image_id=next_image_id,\n category_id=cur_json_cat_id))\n\n if TFRECORDS_OUTPUT_DIR:\n image_data = {}\n if COCO_OUTPUT_DIR:\n image_data['filename'] = out_file\n else:\n Image.fromarray(cropped_img).save(TMP_IMAGE)\n image_data['filename'] = TMP_IMAGE\n image_data['id'] = next_image_id\n\n image_data['class'] = {}\n image_data['class']['label'] = cur_json_cat_id\n image_data['class']['text'] = cur_cat_name\n\n # Propagate optional metadata to tfrecords\n image_data['height'] = cropped_img.shape[0]\n image_data['width'] = cropped_img.shape[1]\n\n cur_tfr_writer.add(image_data)\n if not COCO_OUTPUT_DIR:\n os.remove(TMP_IMAGE)\n\n next_annotation_id = next_annotation_id + 1\n next_image_id = next_image_id + 1\n\n\nif TFRECORDS_OUTPUT_DIR:\n training_tfr_writer.close()\n test_tfr_writer.close()\n\n label_map = []\n for cat in training_json['categories']:\n label_map += ['item {{name: \"{}\" id: {}}}\\n'.format(cat['name'], cat['id'])]\n with open(os.path.join(TFRECORDS_OUTPUT_DIR, 'label_map.pbtxt'), 'w') as f:\n f.write(''.join(label_map))\n\nif COCO_OUTPUT_DIR:\n # Write out COCO-style json files to the output directory\n with open(os.path.join(COCO_OUTPUT_DIR, 'train.json'), 'wt') as fi:\n json.dump(training_json, fi)\n with open(os.path.join(COCO_OUTPUT_DIR, 'test.json'), 'wt') as fi:\n json.dump(test_json, fi)\n\n# Write detections to file with pickle\nwith open(DETECTION_OUTPUT, 'wb') as f:\n pickle.dump(detections, f, pickle.HIGHEST_PROTOCOL)\n","sub_path":"data_management/databases/classification/make_classification_dataset.py","file_name":"make_classification_dataset.py","file_ext":"py","file_size_in_byte":19953,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"145495761","text":"from StackEx import *\n\n# Creates a ranking of users for each tag\n\nfilepath = 'example_data/Posts.xml'\n\npost_data = load_xml(filepath,'posts')\npost_data.Tags = post_data.Tags = to_tag_list(post_data)\npost_data['FixedNames'] = fix_names(post_data)\ntags = all_tags(post_data)\nquestiontags_dict = question_tags(post_data)\nuser_tag_score_dict = user_tag_scores(post_data,questiontags_dict)\ntag_sums = tag_sum(user_tag_score_dict) # got the totals\nwith open('tag_sums.txt','w') as f:\n for tag, scores in tag_sums.iteritems():\n f.write(tag+':\\n')\n c=0\n for x in scores:\n if c>10:\n break\n f.write(str(x)+'\\n')\n c+=1\n f.write('\\n')\n","sub_path":"Tag_sums.py","file_name":"Tag_sums.py","file_ext":"py","file_size_in_byte":702,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"475090942","text":"# Author : R Hariharan\n\n# Contains all the parameters with respect the biped\n\n# Right & Left leg link lengths\nRIGHT_LEG_LINK_LEN = [0,0,0,0,0]\nLEFT_LEG_LINK_LEN = [0,0,0,0,0]\n\n#Distance between the two legs\nDISTANCE_BW_LEGS = 0\nHALF_DIST_BW_LEGS = 0\n\t\ndef setRobotParameters(distanceBwLegs, rightLegLinkLen = [], leftLegLinkLen = []):\n\tglobal HALF_DIST_BW_LEGS, DISTANCE_BW_LEGS, RIGHT_LEG_LINK_LEN, LEFT_LEG_LINK_LEN\n\tDISTANCE_BW_LEGS = distanceBwLegs\n\tHALF_DIST_BW_LEGS = distanceBwLegs/2\n\tRIGHT_LEG_LINK_LEN = rightLegLinkLen\n\tLEFT_LEG_LINK_LEN = leftLegLinkLen\n\ndef checkRobotParametersSet():\n\tif DISTANCE_BW_LEGS == 0:\n\t\treturn False\n\tif all(x == 0 for x in RIGHT_LEG_LINK_LEN):\n\t\treturn False\n\tif all(x == 0 for x in LEFT_LEG_LINK_LEN):\n\t\treturn False\n\treturn True","sub_path":"Code/servo_test/scripts/hari/robot.py","file_name":"robot.py","file_ext":"py","file_size_in_byte":770,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"420039959","text":"from rest_framework_simplejwt.views import TokenObtainPairView\nfrom rest_framework import permissions, status\nfrom rest_framework.response import Response\nfrom .serializers import ProductSerializer, FileSerializer, \\\n TokenObtainSerializer, UserSerializer\nfrom .models import Product, Post\nfrom rest_framework.views import APIView\nfrom rest_framework.parsers import MultiPartParser, FormParser\nfrom cv2 import cvtColor, imread, COLOR_RGB2BGR\nfrom pyzbar.pyzbar import decode, ZBarSymbol\nfrom os import path, mkdir\nfrom numpy import array\nfrom PIL import Image\nfrom cairosvg import svg2png\nfrom io import BytesIO\nfrom rest_framework_simplejwt.tokens import RefreshToken\n\n\n# Класс отвечающий за вывод данных продукта через ручной ввод\nclass ProductTextCode(APIView):\n permission_classes = (permissions.AllowAny,)\n\n def post(self, request, *args, **kwargs):\n product_data = Product.objects.filter(vendor_code=request.data['vendor_code'][7:12])\n if product_data.exists():\n flag = self.CheckDigitCheck(request.data['vendor_code'])\n if flag == True:\n product_date_seriali = ProductSerializer(product_data, many=True)\n return Response(product_date_seriali.data)\n elif flag == False:\n return Response(data={\"err\": \"Штрих код не подлинный\"},\n status=status.HTTP_200_OK)\n else:\n return Response(data={\"err\": \"Такого товара нету в базе\"},\n status=status.HTTP_200_OK)\n\n\n # Метод отвечающий за проверку штрих кода на подлинность\n def CheckDigitCheck(self, num):\n one_step = sum([int(i) for k, i in enumerate(num, start=1) if not k % 2])\n two_step = one_step * 3\n three_step = sum([int(i) for k, i in enumerate(num, start=1) if k % 2 and k < 13])\n four_step = two_step + three_step\n final = 10 - int(str(four_step)[-1:])\n if num[12:] == str(final):\n flag = True\n return flag\n else:\n flag = False\n return flag\n\n\n# Класс отвечающий за вывод данных продукта через изображение\nclass ProductCode(APIView):\n parser_classes = (MultiPartParser, FormParser)\n\n def post(self, request, *args, **kwargs):\n posts_serializer = FileSerializer(data=request.data)\n if posts_serializer.is_valid():\n posts_serializer.save()\n # Тут производиться подмена ответа post запроса при загрузки изображения\n # Ответ на запрос будет информация о товаре\n # -----------------------------------------------------------------------\n serializer = FileSerializer(Post.objects.all(), many=True)\n path_file = serializer.data[0]['cover']\n code = self.Decoder_barcode(path_file[1:])\n product = Product.objects.filter(vendor_code=code.decode('utf-8')[7:12])\n\n if product.exists():\n product_date = ProductSerializer(product, many=True)\n Post.objects.all().delete()\n return Response(product_date.data)\n else:\n Post.objects.all().delete()\n return Response(data={\"err\": \"Такого товара нету в базе\"},\n status=status.HTTP_200_OK)\n # ----------------------------------------------------------------------------\n else:\n return Response(posts_serializer.errors, status=status.HTTP_400_BAD_REQUEST)\n\n\n # Метод отвечающий за распознование штрих кода.\n # Этот метод использунт возможности бибилиотеки pyzbar и opencv\n def Decoder_barcode(self, filename):\n\n if not path.exists('images'):\n mkdir('images/')\n\n files, file_extension = path.splitext(filename)\n # Если приходит файл формата .svg\n # то используем библиотеки PIL и cairosvg для перевода\n # изображения в удобный для opencv формат\n # -----------------------------------------------------------------\n if file_extension == '.svg':\n\n with open(filename, 'r') as file:\n svg = file.read()\n\n png = svg2png(bytestring=svg)\n pil_img = Image.open(BytesIO(png))\n image = cvtColor(array(pil_img), COLOR_RGB2BGR)\n # ------------------------------------------------------------------\n else:\n image = imread(filename)\n detectBarcode = decode(image, symbols=[ZBarSymbol.EAN13])\n\n for barcode in detectBarcode:\n return barcode.data\n\n\n# Класс отвечающий за выдачу токена при входе\nclass ObtainToken(TokenObtainPairView):\n permission_classes = (permissions.AllowAny,)\n serializer_class = TokenObtainSerializer\n\n\n# Класс отвечающий за создание пользователя\nclass CustomUserCreate(APIView):\n permission_classes = (permissions.AllowAny,)\n authentication_classes = ()\n\n def post(self, request, format='json'):\n serializer = UserSerializer(data=request.data)\n if serializer.is_valid():\n user = serializer.save()\n if user:\n return Response(serializer.data, status=status.HTTP_201_CREATED)\n return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)\n\n\n# Класс отправляющий сообщение при успешной авторизации\nclass Message(APIView):\n\n def get(self, request):\n return Response(data={\"hello\": \"Вы успешно вошли в сервис\"},\n status=status.HTTP_200_OK)\n\n\n# Класс отвечающий за занесение устаревших токенов в черный лист\nclass BlacklistToken(APIView):\n permission_classes = (permissions.AllowAny,)\n authentication_classes = ()\n\n def post(self, request):\n try:\n refresh_token = request.data[\"refresh_token\"]\n token = RefreshToken(refresh_token)\n token.blacklist()\n return Response(status=status.HTTP_205_RESET_CONTENT)\n except Exception as erro:\n return Response(status=status.HTTP_400_BAD_REQUEST)\n","sub_path":"bakend/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":6642,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"152639277","text":"#!/usr/bin/python\n\n#Test change#\n\nfrom jsonrpclib import Server\nfrom pprint import pprint as pp\n\nswitch = Server('http://jfurtak:arista123@192.168.56.10/command-api')\noutput = switch.runCmds(1, ['show version'])\npp(output[0]['systemMacAddress'])\nresult = switch.runCmds(1, ['enable',\n 'configure',\n 'interface Management1',\n \"description MAC: %s\" % output[0]\n['systemMacAddress']])\n","sub_path":"basic_eapi.py","file_name":"basic_eapi.py","file_ext":"py","file_size_in_byte":463,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"337647191","text":"# coding: utf-8\n\nfrom __future__ import absolute_import\nfrom datetime import date, datetime # noqa: F401\n\nfrom typing import List, Dict # noqa: F401\n\nfrom swagger_server.models.base_model_ import Model\nfrom swagger_server import util\n\n\nclass Address(Model):\n \"\"\"NOTE: This class is auto generated by the swagger code generator program.\n\n Do not edit the class manually.\n \"\"\"\n def __init__(self, description: str=None, type: str=None, legacy_addr_id: str=None): # noqa: E501\n \"\"\"Address - a model defined in Swagger\n\n :param description: The description of this Address. # noqa: E501\n :type description: str\n :param type: The type of this Address. # noqa: E501\n :type type: str\n :param legacy_addr_id: The legacy_addr_id of this Address. # noqa: E501\n :type legacy_addr_id: str\n \"\"\"\n self.swagger_types = {\n 'description': str,\n 'type': str,\n 'legacy_addr_id': str\n }\n\n self.attribute_map = {\n 'description': 'description',\n 'type': 'type',\n 'legacy_addr_id': 'legacy_addr_id'\n }\n self._description = description\n self._type = type\n self._legacy_addr_id = legacy_addr_id\n\n @classmethod\n def from_dict(cls, dikt) -> 'Address':\n \"\"\"Returns the dict as a model\n\n :param dikt: A dict.\n :type: dict\n :return: The Address of this Address. # noqa: E501\n :rtype: Address\n \"\"\"\n return util.deserialize_model(dikt, cls)\n\n @property\n def description(self) -> str:\n \"\"\"Gets the description of this Address.\n\n Full description of the address with its constituent parts combined. # noqa: E501\n\n :return: The description of this Address.\n :rtype: str\n \"\"\"\n return self._description\n\n @description.setter\n def description(self, description: str):\n \"\"\"Sets the description of this Address.\n\n Full description of the address with its constituent parts combined. # noqa: E501\n\n :param description: The description of this Address.\n :type description: str\n \"\"\"\n if description is None:\n raise ValueError(\"Invalid value for `description`, must not be `None`\") # noqa: E501\n\n self._description = description\n\n @property\n def type(self) -> str:\n \"\"\"Gets the type of this Address.\n\n The type of address. # noqa: E501\n\n :return: The type of this Address.\n :rtype: str\n \"\"\"\n return self._type\n\n @type.setter\n def type(self, type: str):\n \"\"\"Sets the type of this Address.\n\n The type of address. # noqa: E501\n\n :param type: The type of this Address.\n :type type: str\n \"\"\"\n allowed_values = [\"uk\", \"foreign\", \"bfpo\", \"dx\", \"electronic\", \"unknown\"] # noqa: E501\n if type not in allowed_values:\n raise ValueError(\n \"Invalid value for `type` ({0}), must be one of {1}\"\n .format(type, allowed_values)\n )\n\n self._type = type\n\n @property\n def legacy_addr_id(self) -> str:\n \"\"\"Gets the legacy_addr_id of this Address.\n\n The legacy identifier for the address. # noqa: E501\n\n :return: The legacy_addr_id of this Address.\n :rtype: str\n \"\"\"\n return self._legacy_addr_id\n\n @legacy_addr_id.setter\n def legacy_addr_id(self, legacy_addr_id: str):\n \"\"\"Sets the legacy_addr_id of this Address.\n\n The legacy identifier for the address. # noqa: E501\n\n :param legacy_addr_id: The legacy_addr_id of this Address.\n :type legacy_addr_id: str\n \"\"\"\n if legacy_addr_id is None:\n raise ValueError(\"Invalid value for `legacy_addr_id`, must not be `None`\") # noqa: E501\n\n self._legacy_addr_id = legacy_addr_id\n","sub_path":"server/swagger_server/models/address.py","file_name":"address.py","file_ext":"py","file_size_in_byte":3895,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"270508859","text":"from tastypie import fields\nfrom tastypie.resources import ModelResource, ALL, ALL_WITH_RELATIONS\nfrom api.models import Decision, Citation, Paragraph\n\nclass DecisionResource(ModelResource):\n class Meta:\n # object_list = Decision.objects.filter(canlii_id = \"2001scc2\")\n queryset = Decision.objects.all()\n resource_name = 'decision'\n filtering = {\n 'canlii_id' : ALL\n }\n\n# http://127.0.0.1:8000/api/paragraph/citation__decision__id=1\n\nclass CitationResource(ModelResource):\n # cited_case_id = fields.ForeignKey(DecisionResource, 'cited_case_id')\n class Meta:\n # object_list = Decision.objects.filter(canlii_id = \"2001scc2\")\n queryset = Citation.objects.all()\n resource_name = 'citation'\n filtering = {\n 'cited_case_id' : ALL_WITH_RELATIONS\n }\n\n\nclass ParagraphResource(ModelResource):\n citation_id = fields.ForeignKey(CitationResource, 'citation_id')\n sentscore = fields.DecimalField(readonly=True)\n class Meta:\n # object_list = Decision.objects.filter(canlii_id = \"2001scc2\")\n # input_canlii_id = '2001scc2'\n queryset = Paragraph.objects.all()\n # queryset = Paragraph.objects.all().filter(citation_id = Citation.objects.all().filter(cited_case_id = Decision.objects.get(canlii_id = input_canlii_id)))\n resource_name = 'paragraph'\n\n filtering = {\n 'cited_paragraph' : ALL_WITH_RELATIONS\n }\n\n # def dehydrate_sentscore(self, bundle):\n # average_score = 0.0\n # for para in bundle.obj.sentiment_score.all():\n # average_score = 10000.000\n # return average_score\n\n\n # choose field names to include\n # fields = ['cited_paragraph', 'sentiment_score']\n\n\n # # http://django-tastypie.readthedocs.org/en/latest/resources.html#alter-list-data-to-serialize\n # def alter_list_data_to_serialize(self, request, data):\n # if request.GET.get('meta_only'):\n # return {'meta': data['meta']}\n # return data\n\n# # Create your models here.\n\n# class Decision(models.Model):\n# case_id_canlii = models.CharField(max_length = 100)\n# case_name = models.CharField(max_length = 500)\n# case_neutral_citation = models.CharField(max_length = 100)\n\n# def __str__(self):\n# return self.case_name\n\n# class Citation(models.Model):\n# cited_case_id = models.ForeignKey(Decision, related_name=\"cited_case\", on_delete = models.PROTECT)\n# citing_case_id = models.ForeignKey(Decision, related_name=\"citing_case\", on_delete = models.PROTECT)\n\n# def __str__(self):\n# return self.cited_case_id.case_name + \" \" + self.citing_case_id.case_name\n\n# class Paragraph(models.Model):\n# citing_case_id = models.ForeignKey(Citation, on_delete = models.PROTECT)\n# cited_paragraph = models.PositiveIntegerField()\n# citing_paragraph = models.PositiveIntegerField()\n\n# def __str__(self):\n# return self.cited_paragraph","sub_path":"DjangoAPIv2/api/resources.py","file_name":"resources.py","file_ext":"py","file_size_in_byte":3028,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"242188102","text":"# installer for Met Watch\n# Copyright 2016 Chris Davies\n\nfrom setup import ExtensionInstaller\n\ndef loader():\n\treturn MetWatchInstaller()\n\nclass MetWatchInstaller(ExtensionInstaller):\n\tdef __init__(self):\n\t\tsuper(MetWatchInstaller, self).__init__(\n\t\t\tversion='0.1',\n\t\t\tname='metwatch',\n\t\t\tdescription='Uses an RSS Feed from the Met Office to provide template variables to alert to possible weather alerts',\n\t\t\tauthor='Chris Davies',\n\t\t\tauthor_email='weewx@davies-barnard.co.uk',\n\t\t\tconfig={\n\t\t\t\t'StdReport': {\n\t\t\t\t\t\t'metwatch': {\n\t\t\t\t\t\t\t\t'url' : 'http://www.metoffice.gov.uk/public/data/PWSCache/WarningsRSS/Region/sw',\n\t\t\t\t\t\t\t\t'skin': 'metwatch',\n\t\t\t\t\t\t\t\t'HTML_ROOT': 'metwatch'\n\t\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\tfiles=[\n\t\t\t\t('bin/user', ['bin/user/metwatch.py']),\n\t\t\t\t('skins/metwatch', ['skins/metwatch/skin.conf', 'skins/metwatch/index.html.tmpl']),\n\t\t\t]\n\t\t)","sub_path":"install.py","file_name":"install.py","file_ext":"py","file_size_in_byte":850,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"228820607","text":"from django.forms import ModelForm\nfrom newsapp import models\nfrom django import forms\nfrom django.contrib.auth.models import User\n\n#\n# class UserProfileForm(forms.Form):\n# '''用戶信息表 修改表單'''\n#\n# name = forms.CharField(max_length=100)\n# user = forms.ModelMultipleChoiceField(queryset=models.User.objects.all())\n\n\nclass EmailMasterForm(ModelForm):\n '''Email Master 管理表'''\n\n class Meta:\n model = models.EmailMaster\n fields = '__all__'\n\nclass TGMasterForm(ModelForm):\n '''TG Master 管理表'''\n\n class Meta:\n model = models.TelegramMaster\n fields = '__all__'\n\nclass UserForm(ModelForm):\n '''用戶表 修改表單'''\n\n class Meta:\n model = User\n fields = (\n \"email\",\n )\n\n\nclass UserProfileForm(ModelForm):\n '''用戶信息表 修改表單'''\n\n class Meta:\n model = models.UserProfile\n fields = (\n \"name\",\n )\n\n\n\n\nclass SourseAddForm(ModelForm):\n '''Sourse 新增表單'''\n\n class Meta:\n model = models.Source\n fields = ('name', 'url', 'category')\n\n\nclass SourseForm(ModelForm):\n '''Source 修改表單'''\n\n def __init__(self, *args, **kwargs):\n super(SourseForm, self).__init__(*args, **kwargs)\n # Making name required\n self.fields['timezone'].required = False\n\n class Meta:\n model = models.Source\n fields = '__all__'\n\n\nclass CategoryForm(ModelForm):\n class Meta:\n model = models.Category\n fields = '__all__'\n","sub_path":"WebNews/newsapp/forms.py","file_name":"forms.py","file_ext":"py","file_size_in_byte":1537,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"212790603","text":"#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\" A helper to create a tornado service.\n\"\"\"\n\nimport json\nimport logging\n\nimport concurrent.futures\nimport tornado.web\n\nfrom angus.analytics import report\nimport angus.jobs\nimport angus.streams\n\nclass FakeGatewayRoot(tornado.web.RequestHandler):\n def initialize(self, *args, **kwargs):\n self.service_key = kwargs.pop('service_key')\n\n def get(self):\n self.write({\n \"services\": {\n self.service_key: {\n \"url\": \"/services/{}\".format(self.service_key)\n }\n }\n })\n\nclass FakeGatewayService(tornado.web.RequestHandler):\n def initialize(self, *args, **kwargs):\n self.service_key = kwargs.pop('service_key')\n self.version = kwargs.pop('version')\n\n def get(self):\n self.write({\n \"versions\": {\n \"1\": {\"url\": \"/services/{}/{}\".format(self.service_key, self.version)}\n }\n })\n\nclass Description(tornado.web.RequestHandler):\n \"\"\" Every services have a description endpoint.\n \"\"\"\n\n def initialize(self, *args, **kwargs):\n self.description = kwargs.pop('description')\n self.version = kwargs.pop('version')\n self.service_key = kwargs.pop('service_key')\n\n @report\n def get(self):\n public_url = \"%s://%s\" % (self.request.protocol, self.request.host)\n result = {\n \"description\": self.description,\n \"version\": self.version,\n \"url\": \"%s/sevices/%s/%s\" % (public_url,\n self.service_key,\n self.version),\n }\n\n self.write(json.dumps(result))\n\ndef wrap_computer(compute, threads):\n if threads == 0:\n return tornado.gen.coroutine(compute)\n\n executor = concurrent.futures.ThreadPoolExecutor(threads)\n @tornado.gen.coroutine\n def wrap_compute(*args, **kwargs):\n yield executor.submit(compute, *args, **kwargs)\n\n return wrap_compute\n\nclass Service(tornado.web.Application):\n \"\"\" Start a tornado server and configure it to run an angus\n service.\n \"\"\"\n\n def __init__(self, service_key, version,\n port,\n compute,\n resource_storage=None, threads=4,\n description=\"No description\"):\n\n self.logger = logging.getLogger(service_key)\n\n self.port = port\n\n self.queues = dict() # TODO: use celery\n\n conf = {\n 'service_key': service_key,\n 'resource_storage': resource_storage,\n 'version': version,\n 'compute': wrap_computer(compute, threads),\n 'description': description,\n 'streams': self.queues,\n }\n\n basename = \"/services/{}/{}\".format(service_key, version)\n\n super(Service, self).__init__([\n (r\"/services\", FakeGatewayRoot, conf),\n (r\"/services/{}\".format(service_key), FakeGatewayService, conf),\n (r\"{}/jobs/(.*)\".format(basename), angus.jobs.Job, conf),\n (r\"{}/jobs\".format(basename), angus.jobs.JobCollection, conf),\n (r\"{}/streams/(.*)/input\".format(basename), angus.streams.Input, conf),\n (r\"{}/streams/(.*)/output\".format(basename), angus.streams.Output, conf),\n (r\"{}/streams/(.*)\".format(basename), angus.streams.Stream, conf),\n (r\"{}/streams\".format(basename), angus.streams.Streams, conf),\n\n (r\"{}\".format(basename), Description, conf),\n\n ])\n\n def start(self):\n \"\"\" Run the service\n \"\"\"\n self.listen(self.port, xheaders=True)\n tornado.ioloop.IOLoop.instance().start()\n","sub_path":"angus/service.py","file_name":"service.py","file_ext":"py","file_size_in_byte":4472,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"646148491","text":"class ShipmentHelper:\n\n @staticmethod\n def get_lowest_price(provider_pricing, package_size='S'):\n \"\"\" Gets smallest price from price list according to package size.\n :param provider_pricing:\n :param package_size:\n :return: string:\n \"\"\"\n prices = []\n for key, value in provider_pricing.items():\n for size, price in value.items():\n if size == package_size:\n prices.append(price)\n prices.sort()\n return prices[:1][0]\n\n @staticmethod\n def get_price(provider, size, data):\n \"\"\"\n Gets price from price list according to selected provider and package size.\n :param provider:\n :param size:\n :param data:\n :return: string:\n \"\"\"\n return float(data[provider][size])\n\n @staticmethod\n def read_file(file_path):\n \"\"\"Loops overs rows of data in file and puts each row to the list.\n :param file_path:\n :return: list:\n \"\"\"\n with open(file_path) as f:\n content = f.readlines()\n content = [x.strip() for x in content]\n return content\n\n @staticmethod\n def get_package_sizes(data):\n \"\"\"Gets all available package sizes\n :param data:\n :return: list:\n \"\"\"\n sizes = []\n for key, value in data.items():\n for size, price in value.items():\n if size not in sizes:\n sizes.append(size)\n return sizes\n\n @staticmethod\n def get_providers(data):\n \"\"\"Gets all providers. In this context we do not care if they unique.\n :param data:\n :return: list:\n \"\"\"\n return [key for key, value in data.items()]\n\n","sub_path":"helper.py","file_name":"helper.py","file_ext":"py","file_size_in_byte":1747,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"215027571","text":"import kafka_stream.kafka_consumer as c\nimport time\nfrom kafka_stream.utils import argument_parser\nfrom kafka_stream.utils import get_logger\n\n\ndef write_to_file(chunk, file_name):\n \"\"\"Write consumed messages in a file.\"\"\"\n with open(file_name, 'w') as file:\n for item in chunk:\n file.write(str(item.value))\n\n\nif __name__ == '__main__':\n\n parser = argument_parser()\n args = parser.parse_args()\n logger = get_logger('write to file')\n\n consumer = c.create_consumer(args.topic, args.server)\n chunks = c.chunks(consumer, args.count)\n\n for chunk in chunks:\n write_to_file(chunk, args.file+str(time.time()))\n logger.info(\"new file created..\")\n","sub_path":"src/kafka_stream/consumer_write_file.py","file_name":"consumer_write_file.py","file_ext":"py","file_size_in_byte":692,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"531240789","text":"from voximplant.apiclient import VoximplantAPI, VoximplantException\nimport pytz\nimport datetime\n\nif __name__ == \"__main__\":\n voxapi = VoximplantAPI(\"credentials.json\")\n\n # Get statistics for all queues from the specified date\n\n FROM_DATE = datetime.datetime(2017, 1, 1, 0, 0, 0, pytz.utc)\n \n try:\n res = voxapi.get_acd_queue_statistics(FROM_DATE)\n except VoximplantException as e:\n print(\"Error: {}\".format(e.message))\n print(res)\n","sub_path":"samples/get_acd_queue_statistics.py","file_name":"get_acd_queue_statistics.py","file_ext":"py","file_size_in_byte":466,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"320503588","text":"#!/usr/bin/env python3.6\n\nLOCATION = 0\nTIME = 1\nLEGS = 2\n\nfrom aimapython.search import astar_search\nfrom aimapython.search import EightPuzzle\nfrom aimapython.search import Problem\nimport sys\n\n\nclass Legs:\n #profit_list = [] #make some sort of dictionary or tuple?\n #class_list = []\n\n def __init__(self, leg, id):\n self.id = id #is the id needed?\n self.dep_airport = leg[1]\n self.arr_airport = leg[2]\n self.dur = leg[3]\n\n self.profits = { leg[i]:leg[i+1] for i in range(len(leg)) if i >= 4 and i%2 == 0 }\n print(self.profits)\n \"\"\"index = 0\n for fields in leg:\n if index >= 4 and index%2 == 0:\n self.class_list.append(fields[index])\n self.profit_list.append(fields[index+1])\n index=index+1\"\"\"\n\n def get_dur(self):\n return self.dur\n\n def get_arr_airport(self):\n return self.arr_airport\n\n def get_dep_airport(self):\n return self.dep_airport\n\n def get_profit(self, c):\n return self.profits[c]\n\n def get_max_profit(self):\n inverse = [(value, key) for key, value in self.profits.items()]\n return max(inverse)[0]\n\n def get_id(self):\n return self.id\n\n\nclass Airplane:\n\n def __init__(self, plane, id):\n self.id = id\n self.name = plane[1]\n self.c = plane[2]\n\n def get_class(self):\n return self.c\n\n def get_id(self):\n return self.id\n\n def get_name(self):\n return self.name\n\n\nclass ASARProblem(Problem):\n\n def __init__(self, filename):\n #self.initial = # place here the initial state (or None)\n fh = open(filename)\n [A, P, L, C] = self.load(fh)\n\n self.airClasses = {C[i][1] : C[i][2] for i in range(len(C))}\n\n self.airports = {A[i][1]: (A[i][2], A[i][3]) for i in range(len(A))} # verificar closing>opening???\n\n self.legs = [Legs(L[i], i) for i in range(0, len(L))]\n self.planes = [Airplane(P[i], i) for i in range(0, len(P))]\n\n self.n_planes = len(self.planes)\n\n needed_info = [[-1 for p in range(len(P))], [0 for p in range(len(P))], [() for i in range(len(P))]]\n initial = (tuple([l.get_id() for l in self.legs]), tuple(tuple(i) for i in needed_info))\n #initial = state([l.get_id() for l in self.legs], [p.get_location() for p in planes])\n\n Problem.__init__(self, initial)\n\n def actions(self, state):\n # possible actions consist of applying a leg to a given plane\n possible_actions = [[p, l] for p in range(len(self.planes)) for l in state[0]]\n\n for action in possible_actions:\n # pode-se otimizar fazendo pre-processamento\n p = action[0]\n l = action[1]\n s = len(state[1][LEGS][p])\n\n if s == 0:\n pass\n # tem de começar onde o avião está\n else:\n last_leg = int(state[1][LEGS][p][s-1])\n if self.legs[last_leg].get_arr_airport() != self.legs[l].get_dep_airport():\n possible_actions.remove(action)\n continue\n\n # parte a uma hora em que o departure airport já abriu\n \"\"\"apt = self.legs[action[1]].get_dep_airport()\n if self.planes[action[0]].get_time() < self.airports[apt][0]:\n possible_actions.remove(action)\n continue\n \"\"\"\n #chega a uma hora em que o arrival airport já abriu\n \"\"\"apt = self.legs[action[1]].get_arr_airport()\n if self.planes[action[0]].get_time() + self.legs[action[1]].get_dur() < self.airports[apt][0]:\n possible_actions.remove(action)\n continue\n \"\"\"\n # parte a uma hora em que o departure airport ainda não fechou\n apt = self.legs[action[1]].get_dep_airport()\n\n if state[1][TIME][p] > self.airports[apt][1]:\n possible_actions.remove(action)\n continue\n\n #chega a uma hora em que o arrival airport ainda não fechou\n apt = self.legs[action[1]].get_arr_airport()\n if state[1][1][p] + self.legs[action[1]].get_dur() > self.airports[apt][1]:\n possible_actions.remove(action)\n continue\n\n return possible_actions\n\n def result(self, state, action):\n print(\"Current state: \", state)\n #print(self.airClasses)\n #new_state = [list(state[0]), [list(state[1][0]), list(state[1][1]), [list(state[1][2][i]) for i in range(len(state[1][2]))]]]\n new_state = [list(state[0]), [list(state[1][0]), list(state[1][1]), [list(state[1][2][0]), list(state[1][2][1])]]]\n print(\"New state: \", new_state)\n\n new_state[0].remove(action[1])\n p = action[0]\n l = action[1]\n new_state[1][0][p] = self.legs[l].get_arr_airport()\n dep = self.legs[l].get_dep_airport()\n arr = self.legs[l].get_arr_airport()\n leg_dur = self.legs[l].get_dur()\n classe = self.planes[p].get_class()\n new_state[1][1][p] = max(new_state[1][1][p] + leg_dur, self.airports[dep][0] + leg_dur, self.airports[arr][0])\n new_state[1][1][p] = new_state[1][1][p] + self.airClasses[classe]\n new_state[1][TIME][p] = new_state[1][TIME][p] + self.airClasses[classe]\n new_state[1][LEGS][p].append(l)\n\n #next_state = tuple(new_state[0]), (tuple(new_state[1][0]), tuple(new_state[1][1]), tuple(tuple(new_state[1][2][i]) for i in range(len(new_state[1][2])) ))\n #needed_info = [[-1 for p in range(len(P))], [0 for p in range(len(P))], [() for i in range(len(P))]]\n #initial = (tuple([l.get_id() for l in self.legs]), tuple(tuple(i) for i in needed_info))\n\n #planes_info = tuple(new_state[1][0]), tuple(new_state[1][1]), tuple(tuple(i) for i in new_state[1][2])\n #planes_info = tuple(new_state[1][0]), tuple(new_state[1][1]), (tuple(new_state[1][2][0]), tuple(new_state[1][2][1]), tuple(new_state[1][2][2]))\n #next_state = (tuple(new_state[0]), planes_info)\n #print(\"Next state: \", next_state)\n return (tuple(new_state[0]), (tuple(new_state[1][0]), tuple(new_state[1][1]), (tuple(new_state[1][2][0]),tuple(new_state[1][2][1]))) )\n\n\n def goal_test(self, state):\n #percorreu todas as legs?\n print(state)\n if state[0] != ():\n return False\n for l in state[1][LEGS]:\n s = len(l)\n\n if s == 0:\n continue\n else:\n first = self.legs[l[0]].get_dep_airport()\n last = self.legs[l[s-1]].get_arr_airport()\n #print(\" initial and final leg: \",l[0], l[s-1])\n if first != last:\n return False\n return True\n\n def path_cost(self, c, state1, action, state2): # path cost g(n)\n profit = self.legs[action[1]].get_profit(self.planes[action[0]].get_class())\n # print(\"profi: \", profit, \"leg \", action[1], \"plane\", self.planes[action[0]].get_class(), \"\\n\")\n return c + 1/profit\n\n def h(self, node): # heuristic function h(n)\n # note: use node.state to access the state\n h = 0\n\n \"\"\"state = node.state\n\n legs = list(state[1][2])\n\n state_list = [list(state[0]), [list(state[1][0]), list(state[1][1]), [legs[i] for i in range(len(legs))]]]\"\"\"\n\n for leg in node.state[0]:\n h = h + 1 / self.legs[leg].get_max_profit()\n #print(leg, self.legs[leg].get_max_profit())\n\n \"\"\"for leg in state_list[0]:\n h = h + 1/self.legs[leg].get_max_profit()\n print(leg, self.legs[leg].get_max_profit())\"\"\"\n\n # print(\" heuristica\",h)\n\n return h\n\n def load(self, fh):\n # note: fh is an opened file object\n # note: self.initial may also be initialized here\n lines = fh.readlines()\n return process(lines)\n\n\n\n def save(self, fh, state):\n i = 0\n profit = 0\n for legs_list in state[1][LEGS]:\n line = \"S \" + self.planes[i].get_name() + \" \"\n for l in legs_list:\n line = line + \" \" + self.legs[l].get_dep_airport() + \" \" + self.legs[l].get_arr_airport()\n c = self.planes[i].get_class()\n profit = profit + self.legs[l].get_profit(c)\n line = line + \"\\n\"\n fh.write(line)\n i = i +1\n line = \"P \" + str(profit)\n fh.write(line)\n\n\n\ndef load_problem(filename):\n with open(filename) as fh:\n lines = fh.readlines()\n return lines\n\n\ndef process(lines):\n A=[]\n P=[]\n L=[]\n C=[]\n\n for ln in lines:\n if ln[0] == 'A':\n A.append([s for s in ln.split() ])\n\n if ln[0] == 'P':\n P.append([s for s in ln.split()])\n\n if ln[0] == 'L':\n L.append([s for s in ln.split()])\n\n if ln[0] == 'C':\n C.append([s for s in ln.split()])\n for a in A:\n a[2] = int(a[2])\n a[3] = int(a[3])\n for l in L:\n l[3] = int(l[3])\n l[5] = int(l[5])\n l[7] = int(l[7])\n for c in C:\n c[2] = int(c[2])\n return [A, P, L, C]\n\n\n\ndef main():\n\n if len(sys.argv) > 1:\n fh = open(\"solution.txt\", \"w+\")\n prob = ASARProblem(sys.argv[1])\n node = astar_search(prob)\n prob.save(fh, node.state)\n else:\n print(\"Usage: %s \" % (sys.argv[0]))\n\n\nmain()\n\n\n\n\n\n\n\"\"\" puzzle = EightPuzzle((1, 2, 3, 4, 5, 6, 0, 7, 8))\n sol = astar_search(puzzle)\n print(\"parent \", sol.parent, \"state \", sol.state, \"action \", sol.action, \"path cost \", sol.path_cost, \"depth \", sol.depth)\n print(\"grandpa\", sol.parent.parent)\"\"\"\n\n\n","sub_path":"project1/funciona2.py","file_name":"funciona2.py","file_ext":"py","file_size_in_byte":9800,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"202341153","text":"from math import sqrt\n\ntri = \"\"\"75\n95 64\n17 47 82\n18 35 87 10\n20 04 82 47 65\n19 01 23 75 03 34\n88 02 77 73 07 63 67\n99 65 04 28 06 16 70 92\n41 41 26 56 83 40 80 70 33\n41 48 72 33 47 32 37 16 94 29\n53 71 44 65 25 43 91 52 97 51 14\n70 11 33 28 77 73 17 78 39 68 17 57\n91 71 52 38 17 14 91 43 58 50 27 29 48\n63 66 04 68 89 53 67 30 73 16 69 87 40 31\n04 62 98 27 23 09 70 98 73 93 38 53 60 04 23\"\"\"\n\n\n# 0 -> 1, 2\n# 1 -> 3, 4\n# 2 -> 4, 5\n# 3 -> 6, 7\n# 4 -> 7, 8\n# 5 -> 8, 9\n\n# L + x + 1, L + x + 2\n\n# S = n(n+1)/2 => n^2 + n - 2S = 0 => n = ( -1 +- sqrt(1 + 8S) ) / 2\n\n# L = int((sqrt(8S+1) - 1)/2)\n\n\ndef load_from_string(string):\n L = []\n lines = string.split('\\n')\n for line in lines:\n L += [int(num) for num in line.split(' ')]\n return L\n\ndef get_line_from_index(i):\n return int((sqrt(8*i + 1) - 1) / 2)\n\ndef get_left_child(i):\n return get_line_from_index(i) + i + 1\n\ndef get_right_child(i):\n return get_left_child(i) + 1\n\n# C = L(RP) + RP + 1 = L(C) - 1 + RP + 1 = L(C) + RP ==> RP = C - L(C)\n\ndef right_parent(i):\n line = get_line_from_index(i)\n p = i - line\n if get_line_from_index(p) != (line - 1):\n return None\n return p\n\ndef left_parent(i):\n line = get_line_from_index(i)\n p = i - line - 1\n if p < 0 or get_line_from_index(p) != (line - 1):\n return None\n return p\n\ndef max_paths(L):\n M = [0] * len(L)\n M[0] = L[0]\n\n # fill the triangle at the two extremities\n l = get_left_child(0)\n r = get_right_child(0)\n while r < len(L) and l < len(L):\n M[l] = L[l] + M[right_parent(l)]\n M[r] = L[r] + M[left_parent(r)]\n l = get_left_child(l)\n r = get_right_child(r)\n \n # fill the rest of the triangle\n for i in range(len(L)):\n l = left_parent(i)\n r = right_parent(i)\n if l is not None and r is not None:\n M[i] = L[i] + max(M[l], M[r])\n \n return M\n\ndef test():\n for i in range(10):\n print('Index: {} - LP: {} - RP: {} - LC: {} - RC: {}'.format(i, left_parent(i), right_parent(i), get_left_child(i), get_right_child(i)))\n\n\nL = load_from_string(tri)\n\n# test()\n\n\nM = max_paths(L)\n\ndef max_path_to_bottom(L):\n M = max_paths(L)\n last_line = get_line_from_index(len(L) - 1)\n first_of_last_line = last_line * (last_line + 1) // 2\n m = M[first_of_last_line]\n for i in range(first_of_last_line, len(L)):\n if M[i] > m:\n m = M[i]\n return m\n\nprint(M)\nprint(max_path_to_bottom(L))","sub_path":"problem18.py","file_name":"problem18.py","file_ext":"py","file_size_in_byte":2463,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"383190972","text":"import streamlit as st\nfrom PIL import Image\nimport pytesseract as pt\nfrom textblob import TextBlob\nimport cv2 as cv\nimport numpy as np\nimport re\nimport matplotlib.pyplot as plt\nimport os\nimport base64\nimport pandas as pd\nimport time\nfrom pdf2image import convert_from_path\nimport pdfplumber\nimport docx2txt\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\n\ndef intro():\n st.markdown(\"\"\"
Problem Statement
\n
Plagiarism is defined as taking the work of others as your own without credit. This definition applies to source-lacking essays and simply claiming ideas as your own. \n One example of plagiarism is like taking the idea of the theory of relativity as your own, or claiming you wrote a book while it was actually made by another author.
\"\"\",\n unsafe_allow_html=True)\n if(st.checkbox(\"What is Tesseract\")):\n st.markdown(\n \"\"\"
\n# from textblob import TextBlob\n
An optical character recognition (OCR) engine
\n
Tesseract is an OCR engine with support for unicode and \n the ability to recognize more than 100 languages out of the box. It can be trained to recognize other languages.
\n
Tesseract is used for text detection on mobile devices, in video, and \n in Gmail image spam detection.
LCS Problem Statement: \n Given two sequences, find the length of longest subsequence present in both of them. A subsequence is a sequence \n that appears in the same relative order, but not necessarily contiguous. For example, “abc”, “abg”, “bdf”, “aeg”, ‘”acefg”, .. etc are subsequences of “abcdefg”.
\"\"\",\n unsafe_allow_html=True)\n\n with st.echo():\n def lowestcomensubsequence(X, Y):\n m = len(X)\n n = len(Y)\n\n L = [[None]*(n+1) for i in range(m+1)]\n\n for i in range(m+1):\n for j in range(n+1):\n if i == 0 or j == 0:\n L[i][j] = 0\n elif X[i-1] == Y[j-1]:\n L[i][j] = L[i-1][j-1]+1\n else:\n L[i][j] = max(L[i-1][j], L[i][j-1])\n\n return L[m][n]\n\n\ndef Extract(fil):\n text = \"\"\n if fil.type == \"text/plain\":\n text = str(fil.read(), \"utf-8\")\n\n elif fil.type == \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\":\n text = docx2txt.process(fil)\n\n elif fil.type == \"application/pdf\":\n with pdfplumber.open(fil) as pdf:\n for j in range(0, len(pdf.pages)):\n page = pdf.pages[j]\n text += page.extract_text()\n else:\n file_bytes = np.asarray(bytearray(fil.read()), dtype=np.uint8)\n img = cv.imdecode(file_bytes, cv.IMREAD_COLOR)\n text = pt.image_to_string(img)\n text = text.replace('\\n', ' ').replace('\\r', '')\n return text\n\n\ndef lcs(X, Y):\n m = len(X)\n n = len(Y)\n\n L = [[None]*(n+1) for i in range(m+1)]\n\n for i in range(m+1):\n for j in range(n+1):\n if i == 0 or j == 0:\n L[i][j] = 0\n elif X[i-1] == Y[j-1]:\n L[i][j] = L[i-1][j-1]+1\n else:\n L[i][j] = max(L[i-1][j], L[i][j-1])\n\n return L[m][n]*100/max(m, n)\n\n\ndef download_link(object_to_download, download_filename, download_link_text):\n if isinstance(object_to_download, pd.DataFrame):\n object_to_download = object_to_download.to_csv(index=False)\n\n # some strings <-> bytes conversions necessary here\n b64 = base64.b64encode(object_to_download.encode()).decode()\n\n return f'{download_link_text}'\n\n\ndef heatmap(x_labels, y_labels, values):\n fig, ax = plt.subplots()\n im = ax.imshow(values)\n\n # We want to show all ticks...\n ax.set_xticks(np.arange(len(x_labels)))\n ax.set_yticks(np.arange(len(y_labels)))\n # and label them with the respective list entries\n ax.set_xticklabels(x_labels)\n ax.set_yticklabels(y_labels)\n\n # Rotate the tick labels and set their alignment.\n plt.setp(ax.get_xticklabels(), rotation=45, ha=\"right\", fontsize=10,\n rotation_mode=\"anchor\")\n\n # Loop over data dimensions and create text annotations.\n for i in range(len(y_labels)):\n for j in range(len(x_labels)):\n if values[i][j] < 0.5:\n text = ax.text(j, i, \"%.2f\" % values[i, j],\n ha=\"center\", va=\"center\", color=\"w\", fontsize=6)\n else:\n text = ax.text(j, i, \"%.2f\" % values[i, j],\n ha=\"center\", va=\"center\", color=\"b\", fontsize=6)\n st.pyplot(fig)\n\n\ndef OTM():\n MAJOR = st.sidebar.file_uploader(\"MAJOR FILE\", type=[\"pdf\", \"txt\", \"docx\"])\n ALL = st.sidebar.file_uploader(\"ALL FILE\", type=[\"pdf\", \"txt\", \"docx\"],\n accept_multiple_files=True)\n\n if MAJOR is not None and ALL is not None:\n key_val = {}\n for i in range(len(ALL)):\n key_val[ALL[i].name] = i\n corpus = [Extract(ALL[key_val[j]]) for j in sorted(key_val.keys())]\n corpus.append(Extract(MAJOR))\n names = [ALL[key_val[j]].name for j in sorted(key_val.keys())]\n vect = TfidfVectorizer(min_df=1, stop_words=\"english\")\n tfidf = vect.fit_transform(corpus)\n pairwise_similarity = tfidf * tfidf.T\n pairwise_similarity = tfidf * tfidf.T\n pairwise_similarity = pairwise_similarity.toarray()\n l = len(ALL)\n # pairwise_similarity = a * b.T\n\n # print(a.toarray()[0])\n # print(b.toarray())\n # text_major = Extract(MAJOR)\n # key_val = {}\n arr = np.zeros((len(ALL), 1))\n # i = int(0)\n # for fil in ALL:\n # text = Extract(fil)\n # key_val[fil.name] = lcs(text_major, text)\n\n for i in range(l):\n arr[i][0] = pairwise_similarity[l][i]\n chart_data = pd.DataFrame(\n arr,\n columns=[\"Plagiarism\"])\n st.bar_chart(chart_data)\n tmp_download_link = download_link(\n chart_data, 'extracted_text.csv', 'Download as csv')\n st.markdown(f\"\"\"
{tmp_download_link}
\"\"\", unsafe_allow_html=True)\n\n\ndef MTM():\n ALL = None\n ALL = st.sidebar.file_uploader(\"ALL FILE's\", type=[\"pdf\", \"txt\", \"docx\"],\n accept_multiple_files=True)\n if ALL:\n key_val = {}\n for i in range(len(ALL)):\n key_val[ALL[i].name] = i\n corpus = [Extract(ALL[key_val[j]]) for j in sorted(key_val.keys())]\n names = [ALL[key_val[j]].name for j in sorted(key_val.keys())]\n vect = TfidfVectorizer(min_df=1, stop_words=\"english\")\n tfidf = vect.fit_transform(corpus)\n pairwise_similarity = tfidf * tfidf.T\n heatmap(names, names, pairwise_similarity.toarray())\n toprint = pd.DataFrame(pairwise_similarity.toarray())\n tmp_download_link = download_link(\n toprint, 'pairwise_similarity.csv', 'Download as csv')\n st.markdown(f\"\"\"
\",\n unsafe_allow_html=True)\n percentage = round(lcs(texts[0], texts[1]), 2)\n k = 2\n if percentage > 75:\n k = 1\n if percentage < 45:\n k = 0\n percentage = str(percentage)\n if k == 0:\n st.markdown(\n f\"\"\"
\n
Plagiarism: \n {percentage}%
\n
These Docs Are Not Copied!
\n
\n \"\"\", unsafe_allow_html=True)\n if k == 1:\n st.markdown(\n f\"\"\"
\n
Plagiarism: \n {percentage}%
\n
These Docs Are Copied!
\n
\n \"\"\", unsafe_allow_html=True)\n if k == 2:\n st.markdown(\n f\"\"\"
\", unsafe_allow_html=True)\n st.markdown(\n \"\"\"\"\"\", unsafe_allow_html=True)\n Comparison_Type = ['intro', 'One-to-Many', 'Many-to-Many', 'One-to-One']\n Comparison = st.selectbox(\"Select Comparison Type\", Comparison_Type)\n if Comparison == Comparison_Type[0]:\n intro()\n if Comparison == Comparison_Type[1]:\n OTM()\n elif Comparison == Comparison_Type[2]:\n MTM()\n else:\n OTO()\n\n\nmain()\n","sub_path":"app.py","file_name":"app.py","file_ext":"py","file_size_in_byte":12820,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"442778185","text":"import argparse\n\nimport numpy as np\n\nDATA_KEYS = [\"Allele\", \"Peptide\", \"MHC\", \"Binding result\"]\n\ndef write_all_2_file(filename, data, size_limit=10):\n \"\"\"\n Write all formatted data to @p filename\n \"\"\"\n with open(filename, \"w\") as f:\n line = \"\"\n for key in DATA_KEYS:\n line += key + \";\"\n f.write(line[:-1] + \"\\n\")\n\n linecount = 0\n for entry in data:\n if linecount >= size_limit and size_limit > 0:\n return\n linecount += 1\n f.write(str(entry[0]) + \";\" + str(entry[1]) + \";\" + str(entry[2]) \\\n + \";\" + str(entry[3]) + \"\\n\")\n\n\ndef write_peplen9_2_file(filename, data, size_limit=10):\n \"\"\"\n Write formatted data with peptide length 9 to @p filename\n \"\"\"\n with open(filename, \"w\") as f:\n line = \"\"\n for key in DATA_KEYS:\n line += key + \";\"\n f.write(line[:-1] + \"\\n\")\n\n linecount = 0\n for entry in data:\n if linecount >= size_limit and size_limit > 0:\n return\n if len(entry[1]) == 9:\n linecount += 1\n f.write(str(entry[0]) + \";\" + str(entry[1]) + \";\" + str(entry[2]) \\\n + \";\" + str(entry[3]) + \"\\n\")\n\n\ndef write_peplen10_2_file(filename, data, size_limit=10):\n \"\"\"\n Write formatted data with peptide length 9 to @p filename\n \"\"\"\n with open(filename, \"w\") as f:\n line = \"\"\n for key in DATA_KEYS:\n line += key + \";\"\n f.write(line[:-1] + \"\\n\")\n\n linecount = 0\n for entry in data:\n if linecount >= size_limit and size_limit > 0:\n return\n if len(entry[1]) == 10:\n linecount += 1\n f.write(str(entry[0]) + \";\" + str(entry[1]) + \";\" + str(entry[2]) \\\n + \";\" + str(entry[3]) + \"\\n\")\n print(\"Wrote \" + str(linecount) + \" entries to \" + filename)\n\n\ndef write_A_2_file(filename, data, size_limit=10):\n \"\"\"\n Write formatted data with peptide length 9 to @p filename\n \"\"\"\n with open(filename, \"w\") as f:\n line = \"\"\n for key in DATA_KEYS:\n line += key + \";\"\n f.write(line[:-1] + \"\\n\")\n\n linecount = 0\n for entry in data:\n if linecount >= size_limit and size_limit > 0:\n return\n if 'HLA-A' in entry[0]:\n linecount += 1\n f.write(str(entry[0]) + \";\" + str(entry[1]) + \";\" + str(entry[2]) \\\n + \";\" + str(entry[3]) + \"\\n\")\n print(\"Wrote \" + str(linecount) + \" entries to \" + filename)\n\n\ndef write_B_2_file(filename, data, size_limit=10):\n \"\"\"\n Write formatted data with peptide length 9 to @p filename\n \"\"\"\n with open(filename, \"w\") as f:\n line = \"\"\n for key in DATA_KEYS:\n line += key + \";\"\n f.write(line[:-1] + \"\\n\")\n\n linecount = 0\n for entry in data:\n if linecount >= size_limit and size_limit > 0:\n return\n if 'HLA-B' in entry[0]:\n linecount += 1\n f.write(str(entry[0]) + \";\" + str(entry[1]) + \";\" + str(entry[2]) \\\n + \";\" + str(entry[3]) + \"\\n\")\n print(\"Wrote \" + str(linecount) + \" entries to \" + filename)\n\n\ndef write_C_2_file(filename, data, size_limit=10):\n \"\"\"\n Write formatted data with peptide length 9 to @p filename\n \"\"\"\n with open(filename, \"w\") as f:\n line = \"\"\n for key in DATA_KEYS:\n line += key + \";\"\n f.write(line[:-1] + \"\\n\")\n\n linecount = 0\n for entry in data:\n if linecount >= size_limit and size_limit > 0:\n return\n if 'HLA-C' in entry[0]:\n linecount += 1\n f.write(str(entry[0]) + \";\" + str(entry[1]) + \";\" + str(entry[2]) \\\n + \";\" + str(entry[3]) + \"\\n\")\n print(\"Wrote \" + str(linecount) + \" entries to \" + filename)\n\n\ndef read_data(file):\n \"\"\"\n Read data from file\n \"\"\"\n data = np.genfromtxt(file,\n dtype='str',\n skip_header=True,\n delimiter=',')\n\n return data\n\n\ndef mhc_data(raw_data):\n \"\"\"\n Parse raw MHC data.\n\n returns tuple of lists: (mhc_name_list, mhc_sequence_list)\n \"\"\"\n print(\"Checking MHC data\")\n mhcs = []\n data = []\n temp_data = []\n for d in raw_data:\n temp_data.append(list(d))\n\n raw_data = temp_data\n\n for d in raw_data:\n # entry = list2string(extract_numeric_sequence(d, index=2))\n entry = d[2]\n # Allele name = index 1\n mhcs.append(d[1])\n data.append(entry)\n\n print(\"Size of data = \" + str(len(raw_data)))\n return (mhcs, data)\n\n\ndef peptide_data(raw_data):\n \"\"\"\n Parse raw peptide data.\n\n returns tuple of lists: (mhc_name_list, peptide_sequence_list, boolean_bindings_list)\n \"\"\"\n print(\"Checking Peptide data\")\n mhc_names = []\n data = []\n bindings = []\n temp_data = []\n for d in raw_data:\n temp_data.append(list(d))\n\n raw_data = temp_data\n for d in raw_data:\n entry = d[0]\n # Only keep peptides of length 9 or 10\n if filter_size(entry, 9) or filter_size(entry, 10):\n entry = d[0]\n data.append(entry)\n # Allele name = index 1\n mhc_names.append(d[2])\n # Append if binding is True or False\n bindings.append(int(d[1] == 'True'))\n\n return (mhc_names, data, bindings)\n\n\ndef filter_size(sequence, size):\n \"\"\"\n Only keep sequence of length @p size.\n \"\"\"\n if len(sequence) == size:\n return sequence\n return None\n\n\ndef get_valid_mhc_names(peptides_tuple, mhcs_tuple):\n \"\"\"\n Get list of MHC names that are also used in experiments with peptides.\n \"\"\"\n # Extract MHC names from peptides and mhcs data.\n mhcs_peptides = peptides_tuple[0]\n mhcs_mhcs = mhcs_tuple[0]\n\n inter_set = set(mhcs_peptides).intersection(set(mhcs_mhcs))\n\n return list(inter_set)\n\n\ndef merge_valid_data(peptides_tuple, mhcs_tuple, valid_mhcs):\n \"\"\"\n Merge the (valid) mhcs with the peptide experiments.\n \"\"\"\n # Get the indices of the peptide experiments that are done with valid MHCs.\n peptides_indices = [index for index, mhc in enumerate(peptides_tuple[0]) if mhc in valid_mhcs]\n\n result = []\n for index in peptides_indices:\n # Create data entry and append to the results\n mhc_name = peptides_tuple[0][index] # MHC name\n full_entry = [\n mhc_name, # MHC name\n peptides_tuple[1][index], # Peptide sequence\n mhcs_tuple[1][mhcs_tuple[0].index(mhc_name)], # MHC sequence\n peptides_tuple[2][index] # Binding result\n ]\n\n result.append(full_entry)\n\n return result\n\n\ndef main(mhc_file, peptide_file):\n \"\"\"\n Main function\n \"\"\"\n # Collect and clean MHC data\n raw_mhc_data = read_data(mhc_file)\n mhcs_tuple = mhc_data(raw_mhc_data)\n\n # Collect and clean experimental peptide data\n raw_binding_data = read_data(peptide_file)\n peptides_tuple = peptide_data(raw_binding_data)\n\n # Drop all useless data\n useful_mhcs = get_valid_mhc_names(peptides_tuple, mhcs_tuple)\n\n # Merge the remaining (valid) data\n training_data = merge_valid_data(peptides_tuple, mhcs_tuple, useful_mhcs)\n\n # Write the data to a file\n # Size limit set to 500k = larger than total data = process all\n write_all_2_file(\"TrainingDataAll.csv\", training_data, 500000)\n # write_peplen10_2_file(\"Peplen10TrialFile.csv\", training_data, 500000)\n # write_peplen9_2_file(\"Peplen9TrialFile.csv\", training_data, 500000)\n # write_A_2_file(\"ATrialFile.csv\", training_data, 500000)\n # write_B_2_file(\"BTrialFile.csv\", training_data, 500000)\n # write_C_2_file(\"CTrialFile.csv\", training_data, 500000)\n\n print(str(len(training_data)) + \" usefull entries found available\")\n","sub_path":"src/preprocess/Data.py","file_name":"Data.py","file_ext":"py","file_size_in_byte":7994,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"197616223","text":"import numpy as np\nimport scipy.sparse as sparse\n\n\nclass LaminateDisplacement(object):\n\n def __init__(self, fem, coords):\n\n self._elements = fem.dof.dof_elements\n self._xtip = coords[0]\n self._ytip = coords[1]\n self._a = fem.a # distance in um\n self._b = fem.b # distance in um\n self._dimension = fem.dof.n_mdof\n \n # Initial values of the operator.\n self._assemble(self._operator_element)\n\n \n def get_operator(self):\n \n return self._opr\n \n \n def _assemble(self, element_func):\n \n num = 20 * len(self._elements)\n row = np.zeros(num)\n col = np.zeros(num) \n val = np.zeros(num)\n ntriplet = 0\n \n for e in self._elements:\n dof = e.mechanical_dof\n ge = element_func(e)\n if ge is not None:\n for ii in range(20):\n row[ntriplet] = 0\n col[ntriplet] = dof[ii]\n val[ntriplet] = ge[0, ii]\n ntriplet += 1\n\n shape = (1, self._dimension)\n self._opr = sparse.coo_matrix((val, (row, col)), shape=shape).tocsr()\n \n \n def _operator_element(self, element):\n \n x0 = element.element.i + 0.5\n y0 = element.element.j + 0.5\n xi = (self._xtip / self._a - 2 * x0) \n eta = (self._ytip / self._b - 2 * y0) \n \n #if (x0 == 29.5 or x0 == 30.5) and (y0 == 59.5):\n # print(xi, eta)\n # print(self._xtip, self._ytip)\n # print(self._xtip - 2 * self._a * x0)\n \n if -1 < xi <= 1 and -1 < eta <= 1:\n xi_sign = np.array([-1, 1, 1, -1])\n eta_sign = np.array([-1, -1, 1, 1])\n n = 0.25 * (1 + xi_sign * xi) * (1 + eta_sign * eta)\n ge = np.array([[0, 0, n[0], 0, 0, 0, 0, n[1], 0, 0, 0, 0, n[2], 0, 0, 0, 0, n[3], 0, 0]])\n return ge\n \n return None\n","sub_path":"microfem/analysis_laminate_displacement.py","file_name":"analysis_laminate_displacement.py","file_ext":"py","file_size_in_byte":1964,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"168317783","text":"# To suppress warnings\n#!/usr/bin/env python -W ignore::DeprecationWarning\nimport warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)\nwarnings.filterwarnings('ignore')\n### Suppressing the warnings, does not seem to work...\n\n# Import modules\nimport matplotlib as mpl # to fix the runtime error on Mac (i.e. Python not installed as a framework)\nmpl.use('TkAgg')\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import ion\nimport pandas as pd\nimport os\nimport nilearn as nil\nimport numpy as np\nimport math\nimport gc # to garbage collect unreferenced memory use gc.collect()\nfrom numpy import array\nfrom os.path import join\nfrom nistats.first_level_model import FirstLevelModel, run_glm\nfrom nistats.design_matrix import make_first_level_design_matrix\nfrom nistats.reporting import plot_contrast_matrix\nfrom nistats.reporting import plot_design_matrix\nfrom nistats.reporting import get_clusters_table\nfrom nistats.contrasts import compute_contrast\nfrom nistats.second_level_model import SecondLevelModel\nfrom nilearn.plotting import plot_stat_map, plot_anat, plot_img, show\nfrom nilearn import image, masking\nfrom nilearn.image import math_img\n\n# Run the following analyses\nplot_design = False\nplot_zMap = False\nplot_FPR = False\nplot_FDR = False\nplot_FWER = False\nplot_clusterCorrect = False\nclusterReport = False\nplot_surface = False\npause_analysis = False\nplot_all = True\nmake_mask = True\nmake_mask_plot = False\nsecond_level = True\n\n# Set parameters\nthreshold = 1.96 # threshold for the localizer masks as well as plotting // for t contrast = 2.3 and z contrast = 1.96 for p = .05\nt_r = 0.7 # Repetition rate\n\n# Set paths and create folders\nderDataFolder = '/Users/michlf/NRoST_analysis/derivatives/'\nrawDataFolder = '/Volumes/VUHDD/NROST_analysis/'\noutdir = 'results'\nif not os.path.exists(rawDataFolder+outdir+'/figures/'):\n try:\n os.makedirs(rawDataFolder+outdir+'/figures/')\n except:\n pass\n\n# Let's go\nif any('sub-' in s for s in os.listdir(rawDataFolder)): # check whether there is any subject data\n\n # Preparations\n os.chdir(derDataFolder+'fmriprep/')\n folderlist = next(os.walk('.'))[1]\n folderlist.sort()\n os.chdir(rawDataFolder)\n subList = []\n z_map_all = []\n model_all = []\n \n for i in range(len(folderlist)):\n try:\n subList.append(folderlist[i].split(\"sub-\",1)[1])\n nrSubj = folderlist[i].split(\"sub-\",1)[1]\n print('\\n...running subject {0}...\\n'.format(nrSubj))\n # Get entire run and average across it\n try:\n fmri_img = image.concat_imgs(derDataFolder+'fmriprep/sub-{0}/ses-01/func/sub-{0}_ses-01_task-Localizer_run-9_bold_space-MNI152NLin2009cAsym_preproc.nii'.format(nrSubj))\n mean_img = image.mean_img(derDataFolder+'fmriprep/sub-{0}/ses-01/func/sub-{0}_ses-01_task-Localizer_run-9_bold_space-MNI152NLin2009cAsym_preproc.nii'.format(nrSubj))\n events = pd.read_csv(rawDataFolder+'sub-{0}/ses-01/func/sub-{0}_ses-01_task-Localizer_run-9_events.tsv'.format(nrSubj), sep='\\t')\n # Get confounds file and drop unnecessary confounds\n # Gilles recommended for my design aComCors and FD. We can use all derivatives from aComCor as well because the t_r is so low. \n # We also include the six motion parameters (X,Y,Z,and three rotations)\n # I read somewhere that averaging FD on group level for the conditions is also beneficial (I guess for second level analysis)\n confounds = pd.read_csv(derDataFolder+'fmriprep/sub-{0}/ses-01/func/sub-{0}_ses-01_task-Localizer_run-9_bold_confounds.tsv'.format(nrSubj), \n usecols=['FramewiseDisplacement','aCompCor00','aCompCor01','aCompCor02','aCompCor03','aCompCor04','aCompCor05',\n 'Cosine00', 'Cosine01', 'Cosine02', 'Cosine03', 'X','Y','Z','RotX','RotY','RotZ'], sep='\\t')\n except:\n fmri_img = image.concat_imgs(derDataFolder+'fmriprep/sub-{0}/ses-02/func/sub-{0}_ses-02_task-Localizer_run-9_bold_space-MNI152NLin2009cAsym_preproc.nii'.format(nrSubj))\n mean_img = image.mean_img(derDataFolder+'fmriprep/sub-{0}/ses-02/func/sub-{0}_ses-02_task-Localizer_run-9_bold_space-MNI152NLin2009cAsym_preproc.nii'.format(nrSubj))\n events = pd.read_csv(rawDataFolder+'sub-{0}/ses-02/func/sub-{0}_ses-02_task-Localizer_run-9_events.tsv'.format(nrSubj), sep='\\t')\n confounds = pd.read_csv(derDataFolder+'fmriprep/sub-{0}/ses-02/func/sub-{0}_ses-02_task-Localizer_run-9_bold_confounds.tsv'.format(nrSubj), \n usecols=['FramewiseDisplacement','aCompCor00','aCompCor01','aCompCor02','aCompCor03','aCompCor04','aCompCor05',\n 'Cosine00', 'Cosine01', 'Cosine02', 'Cosine03', 'X','Y','Z','RotX','RotY','RotZ'], sep='\\t')\n\n except Exception as e:\n print('', e, '')\n continue # if not a subject folder or no localizer image, move on to next iteration\n\n # The first TR cannot have a value for FD, so we set it to the value of the second slice\n confounds['FramewiseDisplacement'].iloc[0] = confounds['FramewiseDisplacement'].iloc[1]\n\n # Create first level model\n fmri_glm = FirstLevelModel(t_r=t_r,\n noise_model='ar1',\n standardize=False,\n hrf_model='glover',\n drift_model='cosine',\n period_cut=160,\n subject_label='sub-{0}'.format(nrSubj),\n mask=None,\n minimize_memory=True,\n n_jobs=-1)\n fmri_glm = fmri_glm.fit(fmri_img, events, confounds)\n design_matrix = fmri_glm.design_matrices_[0]\n\n if plot_design:\n # Plot design matrix\n plot_design_matrix(design_matrix) \n\n # Save the matrix image to disc\n #plot_design_matrix(design_matrix, output_file=join(outdir, 'design_matrix.png'))\n\n # Plot expected response to negative cows\n plt.plot(design_matrix['dresserReal1'])\n plt.xlabel('scan')\n plt.title('Expected dresser1 intact response')\n\n # Detect voxels with significant effects fro positive vs drop contrast\n otherRegressors = [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.] # not conditions\n conditions = {\n 'intact': array([1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]\n +otherRegressors),\n 'scrambled': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n ]+otherRegressors),\n 'fixation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n ]+otherRegressors),\n }\n\n #intact_minus_scrambled = conditions['scrambled'] - conditions['fixation']\n #intact_minus_scrambled = conditions['intact'] - conditions['fixation']\n intact_minus_scrambled = conditions['intact'] - conditions['scrambled']\n\n if plot_design:\n plot_contrast_matrix(intact_minus_scrambled, design_matrix=design_matrix)\n\n eff_map = fmri_glm.compute_contrast(intact_minus_scrambled,\n output_type='effect_size')\n\n z_map = fmri_glm.compute_contrast(intact_minus_scrambled,\n output_type='z_score')\n \n t_map = fmri_glm.compute_contrast(intact_minus_scrambled,\n output_type='stat') # stat corresponds to t map\n\n ###############################################################################\n # Plot thresholded z scores map.\n #\n # We display it on top of the average\n # functional image of the series (could be the anatomical image of the\n # subject). We use arbitrarily a threshold of 3.0 in z-scale. We'll\n # see later how to use corrected thresholds. we show to display 3\n # axial views: display_mode='z', cut_coords=3\n if plot_zMap:\n plot_stat_map(z_map, bg_img=mean_img, threshold=3.0,\n display_mode='z', cut_coords=3, black_bg=True,\n title='Sub{0}: Intact vs. scrambled (Z>3)'.format(nrSubj))\n plt.show()\n\n ###############################################################################\n # Statistical signifiance testing. One should worry about the\n # statistical validity of the procedure: here we used an arbitrary\n # threshold of 3.0 but the threshold should provide some guarantees on\n # the risk of false detections (aka type-1 errors in statistics). One\n # first suggestion is to control the false positive rate (fpr) at a\n # certain level, e.g. 0.001: this means that there is.1% chance of\n # declaring active an inactive voxel.\n\n if plot_FPR:\n from nistats.thresholding import map_threshold\n _, threshold = map_threshold(z_map, level=.001, height_control='fpr')\n print('Uncorrected p<0.001 threshold: %.3f' % threshold)\n plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,\n display_mode='z', cut_coords=3, black_bg=True,\n title='Sub{0}: Intact vs. scrambled (p<0.001)'.format(nrSubj))\n plt.show()\n\n ###############################################################################\n # The problem is that with this you expect 0.001 * n_voxels to show up\n # while they're not active --- tens to hundreds of voxels. A more\n # conservative solution is to control the family wise error rate,\n # i.e. the probability of making ony one false detection, say at\n # 5%. For that we use the so-called Bonferroni correction\n\n if plot_FWER:\n _, threshold = map_threshold(z_map, level=.05, height_control='bonferroni')\n print('Bonferroni-corrected, p<0.05 threshold: %.3f' % threshold)\n plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,\n display_mode='z', cut_coords=3, black_bg=True,\n title='Sub{0}: Intact vs. scrambled (p<0.05, corrected)'.format(nrSubj))\n plt.show()\n\n ###############################################################################\n # This is quite conservative indeed ! A popular alternative is to\n # control the false discovery rate, i.e. the expected proportion of\n # false discoveries among detections. This is called the false\n # disovery rate\n\n if plot_FDR:\n _, threshold = map_threshold(z_map, level=.05, height_control='fdr')\n print('False Discovery rate = 0.05 threshold: %.3f' % threshold)\n plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,\n display_mode='z', cut_coords=3, black_bg=True,\n title='Sub{0}: Intact vs. scrambled (fdr=0.05)'.format(nrSubj))\n plt.show()\n\n ###############################################################################\n # Finally people like to discard isolated voxels (aka \"small\n # clusters\") from these images. It is possible to generate a\n # thresholded map with small clusters removed by providing a\n # cluster_threshold argument. here clusters smaller than 10 voxels\n # will be discarded.\n\n if plot_clusterCorrect:\n clean_map, threshold = map_threshold(\n z_map, level=.05, height_control='fdr', cluster_threshold=10)\n plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold,\n display_mode='z', cut_coords=3, black_bg=True,\n title='Sub{0}: Intact vs. scrambled (fdr=0.05), clusters > 10 voxels'.format(nrSubj))\n plt.show()\n\n ###############################################################################\n # Report the found positions in a table\n if clusterReport:\n table = get_clusters_table(z_map, stat_threshold=threshold,\n cluster_threshold=10)\n print(table)\n\n if plot_surface:\n ###############################################################################\n # Let's do a surface-based first level analysis\n #########################################################################\n # Project the fMRI image to the surface\n # -------------------------------------\n #\n # For this we need to get a mesh representing the geometry of the\n # surface. we could use an individual mesh, but we first resort to a\n # standard mesh, the so-called fsaverage5 template from the Freesurfer\n # software.\n fsaverage = nil.datasets.fetch_surf_fsaverage(mesh='fsaverage5') # fsaverage is high, fsaverage5 is low resolution\n\n #########################################################################\n # The projection function simply takes the fMRI data and the mesh.\n # Note that those correspond spatially, are they are bothin MNI space.\n full_run = '/Users/michlf/NROST_analysis/derivatives/fmriprep/sub-{0}/ses-01/func/sub-{0}_ses-01_task-Localizer_run-9_bold_space-MNI152NLin2009cAsym_preproc.nii'.format(nrSubj)\n texture = nil.surface.vol_to_surf(full_run, fsaverage.pial_right)\n\n #########################################################################\n # Perform first level analysis\n # ----------------------------\n #\n # This involves computing the design matrix and fitting the model.\n # We start by specifying the timing of fMRI frames\n n_scans = texture.shape[1]\n frame_times = t_r * (np.arange(n_scans) + .5)\n\n #########################################################################\n # Create the design matrix\n #\n # We specify an hrf model containing SPM (for now not Glover model and its time derivative)\n # the drift model is implicitly a cosine basis with period cutoff 128s.\n # MF: We can also add regressors of choice with add_regs & add_reg_names, so we will add some confound regressors\n design_matrix = make_first_level_design_matrix(frame_times,\n events=events,\n hrf_model='spm',\n add_regs=confounds.values,\n add_reg_names=list(confounds))\n\n #########################################################################\n # Setup and fit GLM.\n # Note that the output consists in 2 variables: `labels` and `fit`\n # `labels` tags voxels according to noise autocorrelation.\n # `estimates` contains the parameter estimates.\n # We keep them for later contrast computation.\n labels, estimates = run_glm(texture.T, design_matrix.values)\n\n #########################################################################\n # Estimate contrasts\n # ------------------\n # Specify the contrasts\n # For practical purpose, we first generate an identity matrix whose size is\n # the number of columns of the design matrix\n otherRegressors = [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.] # not conditions\n basic_contrasts = {\n 'intact': array([1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]\n +otherRegressors),\n 'scrambled': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n ]+otherRegressors),\n 'fixation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n ]+otherRegressors),\n }\n\n contrasts = {\n \"scrambled vs fixation\": basic_contrasts['scrambled'] - basic_contrasts['fixation'],\n \"intact vs fixation\": basic_contrasts['intact'] - basic_contrasts['fixation'],\n \"intact vs scrambled\": basic_contrasts['intact'] - basic_contrasts['scrambled']\n }\n\n #########################################################################\n # contrast estimation\n\n # iterate over contrasts\n for index, (contrast_id, contrast_val) in enumerate(contrasts.items()):\n print(' Contrast % i out of %i: %s, right hemisphere' %\n (index + 1, len(contrasts), contrast_id))\n # compute contrast-related statistics\n contrast = compute_contrast(labels, estimates, contrast_val,\n contrast_type='t')\n # we present the Z-transform of the t map\n z_score = contrast.z_score() # present Z-transformed t map\n #z_score = contrast.effect # present t map\n # we plot it on the surface, on the inflated fsaverage mesh,\n # together with a suitable background to give an impression\n # of the cortex folding.\n nil.plotting.plot_surf_stat_map(\n fsaverage.infl_right, z_score, hemi='right',\n title=contrast_id, colorbar=True,\n threshold=2.3, bg_map=fsaverage.sulc_right)\n\n #########################################################################\n # Analysing the left hemisphere\n # -----------------------------\n\n #########################################################################\n # Project the fMRI data to the mesh\n texture = nil.surface.vol_to_surf(full_run, fsaverage.pial_left)\n\n #########################################################################\n # Estimate the General Linear Model\n labels, estimates = run_glm(texture.T, design_matrix.values)\n\n #########################################################################\n # Create contrast-specific maps\n for index, (contrast_id, contrast_val) in enumerate(contrasts.items()):\n print(' Contrast % i out of %i: %s, left hemisphere' %\n (index + 1, len(contrasts), contrast_id))\n # compute contrasts\n contrast = compute_contrast(labels, estimates, contrast_val,\n contrast_type='t')\n z_score = contrast.z_score()\n # Plot the result\n nil.plotting.plot_surf_stat_map(\n fsaverage.infl_left, z_score, hemi='left',\n title=contrast_id, colorbar=True,\n threshold=2.3, bg_map=fsaverage.sulc_left)\n\n # Other plotting\n # from nilearn import datasets\n # img = datasets.fetch_localizer_button_task()['tmap']\n # view = nil.plotting.view_img_on_surf(z_score, surf_mesh='fsaverage')\n\n # view.open_in_browser()\n\n nil.plotting.show()\n\n if make_mask:\n ### Compute anatomically constraint ROI based on functional localizer\n # An important question here is the moment of resampling\n # Do we resample before thresholding or after?\n # Resampling seems to lower activity of individual voxel\n # because the response becomes smeared out.\n # Alternatively, we could try resampled the template\n # to the preprocessed data instead of the other way around\n # However, original MNI space seems to be of the same size as the template\n # (thus larger) than the MNI coregistered preprocessed date from fMRI prep\n\n # Load anatomical atlas of pF\n atlas = image.load_img('/Users/michlf/Documents/GitHub/fMRI_NRoST/analysis/LOCatlas.nii.gz')\n atlasBool = image.load_img('/Users/michlf/Documents/GitHub/fMRI_NRoST/analysis/LOCatlas.nii.gz').get_data().astype(bool)\n # Resample contrast image to size of the atlas\n resampled_zmap = image.resample_to_img(z_map, atlas)\n # Threshold the contrast image and make an Niim object again\n zmapPreBool = resampled_zmap.dataobj.copy() # copy instead of referencing\n zmapPreBool[zmapPreBool < threshold] = 0 # Thresholded for checking\n zmapThreshed = image.new_img_like(resampled_zmap, zmapPreBool)\n zmapBool = (resampled_zmap.dataobj[:] > threshold) # Thresholded boolean for intersection\n zmapThreshedBool = image.new_img_like(resampled_zmap, zmapBool.astype(np.int))\n # Intersect boolean arrays of contrast image and anatomical image and reformat in Niim objects\n intersection = np.logical_and(zmapBool, atlasBool)\n intersection_img = image.new_img_like(resampled_zmap, intersection.astype(np.int))\n if make_mask_plot: # commenting out the show command will lead to memory overflow due to too many figures\n # Check the maps \n nil.plotting.plot_glass_brain(atlas, colorbar=True, title='Original Harvard atlas of posterior temporal fusiform gyrus',\n plot_abs=False, display_mode='ortho')\n nil.plotting.plot_glass_brain(z_map, colorbar=True, title='contrast image',\n plot_abs=False, display_mode='ortho')\n nil.plotting.plot_glass_brain(resampled_zmap, colorbar=True, title='resampled contrast image',\n plot_abs=False, display_mode='ortho')\n nil.plotting.plot_glass_brain(zmapThreshed, colorbar=True, title='contrast image thresholded at {0}'.format(threshold),\n plot_abs=False, display_mode='ortho')\n nil.plotting.plot_glass_brain(zmapThreshedBool, colorbar=True, title='contrast image thresholded at {0} in Boolean'.format(threshold),\n plot_abs=False, display_mode='ortho')\n # Plot anatomically constraint functional localizer\n nil.plotting.plot_glass_brain(intersection_img, colorbar=True, title='Anatomically constraint functional localizer mask',\n plot_abs=False, display_mode='ortho')\n nil.plotting.show()\n print('','time to inspect the plots','')\n # Assess the number of voxels in the mask\n # Katya used 31 voxel before substituting the intersection mask with the atlas map\n # Not sure whether that was 31 per hemisphere...\n if 'mask_info' in locals():\n mask_info = np.vstack( (mask_info, np.array([nrSubj, np.sum(zmapBool), np.sum(intersection)])) )\n else:\n mask_info = np.array([nrSubj, np.sum(zmapBool), np.sum(intersection)])\n print(mask_info)\n \n # We can now save the roi as mask\n # Note that we could subsegment the mask in connected regions\n # For details see tutorial https://nilearn.github.io/auto_examples/04_manipulating_images/plot_roi_extraction.html\n intersection_img.to_filename(join(outdir,'LOCmask_sub{0}.nii.gz'.format(nrSubj))) # or nibabel.save()\n\n # Pause to inspect\n if pause_analysis:\n input(\"Press Enter to continue...\")\n \n # Store each subjects data\n z_map_all.append(z_map)\n model_all.append(fmri_glm)\n\n # Plot first model for each subject\n if plot_all:\n fig1, axes1 = plt.subplots(nrows=2, ncols=math.ceil(len(z_map_all)/2), figsize=(8, 4.5))\n print(os.getcwd())\n for i in range(len(z_map_all)):\n # Using the following way of masking works if you provide the keyword threshold to the plot\n thresh_1stModel = math_img(\"np.ma.masked_less(img, [{0}])\".format(threshold), img=z_map_all[i])\n nil.plotting.plot_glass_brain(thresh_1stModel, colorbar=True, threshold=threshold,\n title=(model_all[i].subject_label),\n axes=axes1[int(i / math.ceil(len(z_map_all)/2)), int(i % math.ceil(len(z_map_all)/2))],\n plot_abs=False, display_mode='lr')\n nil.plotting.plot_glass_brain(thresh_1stModel, colorbar=True, threshold=threshold,\n title=(model_all[i].subject_label),\n plot_abs=False, display_mode='lr',\n output_file=outdir+'/figures/{0}.pdf'.format(model_all[i].subject_label))\n fig1.suptitle('subjects z_map intact vs. scrambled (unc z<{0})'.format(threshold))\n #nil.plotting.show()\n\n # Second level analysis (this still needs some work)\n if second_level:\n second_level_input = model_all\n second_level_model = SecondLevelModel()\n second_level_model = second_level_model.fit(second_level_input)\n zmap = second_level_model.compute_contrast(first_level_contrast=intact_minus_scrambled, output_type='z_score')\n \n # Plot\n nil.plotting.plot_glass_brain(zmap, colorbar=True, threshold=threshold,\n title='Object network (only z threshold of +-{0})'.format(threshold),\n plot_abs=False, display_mode='ortho')\n #nil.plotting.show()\n # Plot only positive z voxels above threshold\n masked_zmap = math_img(\"np.ma.masked_less(img, [{0}])\".format(threshold), img=zmap)\n nil.plotting.plot_glass_brain(masked_zmap, colorbar=True, threshold=None,\n title='Object network (only masked z<{0})'.format(threshold),\n plot_abs=False, display_mode='ortho')\n #nil.plotting.show()\n # Plot only positive z voxels\n masked_zmap1 = math_img(\"np.ma.masked_less(img, [{0}])\".format(threshold), img=zmap)\n nil.plotting.plot_glass_brain(masked_zmap1, colorbar=True, threshold=-threshold,\n title='Object network (masked z<{0} and thresholded at -{0}'.format(threshold),\n plot_abs=False, display_mode='ortho')\n #nil.plotting.show()\n\n # Localizer mask for 2nd level\n atlas1 = image.load_img('/Users/michlf/Documents/GitHub/fMRI_NRoST/analysis/LOCatlas.nii.gz')\n # resample the lower res zmap to the higher res atlas\n resampled_zmap = image.resample_to_img(zmap, atlas1)\n # Thresholdand make an Niim object again\n zmapPreBool = resampled_zmap.dataobj.copy()\n zmapPreBool[zmapPreBool < threshold] = 0\n zmapThreshed = image.new_img_like(resampled_zmap, zmapPreBool)\n zmapBool = (resampled_zmap.dataobj > threshold)\n zmapThreshedBool = image.new_img_like(resampled_zmap, zmapBool.astype(np.int))\n # Check it\n nil.plotting.plot_glass_brain(zmap, colorbar=True, title='original zmap',\n plot_abs=False, display_mode='ortho')\n nil.plotting.plot_glass_brain(resampled_zmap, colorbar=True, title='resampled zmap',\n plot_abs=False, display_mode='ortho')\n nil.plotting.plot_glass_brain(zmapThreshed, colorbar=True, title='original zmap thresholded at {0}'.format(threshold),\n plot_abs=False, display_mode='ortho')\n nil.plotting.plot_glass_brain(zmapThreshedBool, colorbar=True, title='original zmap thresholded at {0} in Boolean'.format(threshold),\n plot_abs=False, display_mode='ortho')\n #nil.plotting.show()\n # Now make all boolean so that we intersect the maps\n atlasBool = image.load_img('/Users/michlf/Documents/GitHub/fMRI_NRoST/analysis/LOCatlas.nii.gz').get_data().astype(bool)\n # Intersect and make it Niim object\n intersection = np.logical_and(zmapBool, atlasBool)\n atlasBoolPlt = image.new_img_like(resampled_zmap, atlasBool.astype(np.int))\n intersection_img = image.new_img_like(resampled_zmap, intersection.astype(np.int))\n # Plot the map\n nil.plotting.plot_glass_brain(atlas1, colorbar=True, title='Original Harvard atlas of posterior temporal fusiform gyrus',\n plot_abs=False, display_mode='ortho')\n nil.plotting.plot_roi(atlas1, title='Original Harvard atlas',\n display_mode='ortho')\n nil.plotting.plot_roi(atlasBoolPlt, title='Original Harvard atlas (boolean)',\n display_mode='ortho')\n nil.plotting.plot_glass_brain(intersection_img, colorbar=True, title='Intersection map',\n plot_abs=False, display_mode='ortho')\n nil.plotting.show()\n # We can now save the roi as mask\n # Note that we could subsegment the mask in connected regions\n # For details see tutorial https://nilearn.github.io/auto_examples/04_manipulating_images/plot_roi_extraction.html\n intersection_img.to_filename(join(outdir,'LOCmask_2ndlevel.nii.gz'))\n # Let's also save the resampled anatomical mask\n atlasBoolPlt.to_filename(join(outdir,'LOCmask_anat.nii.gz'))\n # Assess the number of voxels in the mask\n # Katya used 31 voxel before substituting the intersection mask with the atlas map\n # Not sure whether that was 31 per hemisphere...\n if 'mask_info' in locals():\n mask_info = np.vstack( (mask_info, np.array([0, np.sum(zmapBool), np.sum(intersection)])) )\n else:\n mask_info = np.array([nrSubj, np.sum(zmapBool), np.sum(intersection)])\n print(mask_info)\n\n# END\n","sub_path":"analysis/GLM_testing_loc.py","file_name":"GLM_testing_loc.py","file_ext":"py","file_size_in_byte":31666,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"277543552","text":"from .values.dimens import *\nfrom .values.colors import CHESS_WHITE, AlertDialogBG, AlertDialogFG\nfrom .values.assets import gameFont\n\n\nclass AlertDialog:\n def __init__(self, win, alertText, title='Chess', positiveBtn=None, negativeBtn=None):\n self.win = win\n self.alertText = alertText\n self.title = title\n self.pBtn = positiveBtn\n self.nBtn = negativeBtn\n self.pBtnRect = None\n self.nBtnRect = None\n\n def show(self):\n pygame.draw.rect(self.win, AlertDialogBG, ((AlertDialogStartX, AlertDialogStartY),\n (AlertDialogLenX, AlertDialogLenY)),\n border_radius=DialogTitleHeight // 2)\n pygame.draw.rect(self.win, AlertDialogFG,\n ((DialogInX, DialogInY), (DialogInLenX, DialogInLenY)),\n border_bottom_left_radius=DialogTitleHeight // 2,\n border_bottom_right_radius=DialogTitleHeight // 2)\n\n self.drawText(self.title, 50, AlertDialogStartX + AlertDialogLenX // 2, AlertDialogStartY +\n (dialogPad + DialogTitleHeight) // 2, CHESS_WHITE, font=gameFont, centre='XY')\n\n if '*' not in self.alertText:\n self.drawText(self.alertText, 40, DialogInX + DialogInLenX // 2, DialogInY + SquareDimen,\n CHESS_WHITE, centre=True)\n else:\n texts = self.alertText.split('*')\n length = SquareDimen // (len(texts) + (len(texts) % 2))\n for txt in texts:\n self.drawText(txt, 30, DialogInX + DialogInLenX // 2, DialogInY + length, CHESS_WHITE, centre=True)\n length += SquareDimen // 2\n\n btnX = DialogInX + int(0.125 * DialogInLenX)\n btnLenY = int(DialogInLenY * 0.2)\n btnY = DialogInY + int(0.7 * DialogInLenY)\n if self.nBtn:\n btnLenX = min(int(1.5 * SquareDimen), max(SquareDimen, 30 * (len(self.nBtn) + 2)))\n self.nBtnRect = pygame.draw.rect(self.win, CHESS_WHITE, ((btnX, btnY), (btnLenX, btnLenY)),\n border_radius=15)\n self.drawText(self.nBtn[0], 20, btnX + btnLenX // 2, btnY + btnLenY // 2, (0, 0, 0), centre=True)\n if self.pBtn:\n btnLenX = min(int(1.5 * SquareDimen), max(SquareDimen, 30 * (len(self.pBtn) + 2)))\n self.pBtnRect = pygame.draw.rect(self.win, CHESS_WHITE, ((btnX + DialogInLenX // 2, btnY),\n (btnLenX, btnLenY)), border_radius=15)\n self.drawText(self.pBtn[0], 20, btnX + DialogInLenX // 2 + btnLenX // 2, btnY + btnLenY // 2, (0, 0, 0),\n centre=True)\n pygame.display.update()\n\n def drawText(self, text, size, txtX, txtY, color, colorBg=None, font=gameFont, centre=False):\n Txt = pygame.font.Font(font, size).render(text, True, color, colorBg)\n nameRect = Txt.get_rect()\n if centre in [False, 'X', 'Y']:\n if centre == 'Y':\n txtX += nameRect.center[0]\n elif centre == 'X':\n txtY += nameRect.center[1]\n else:\n txtX += nameRect.center[0]\n txtY += nameRect.center[1]\n nameRect.center = (txtX, txtY)\n self.win.blit(Txt, nameRect)\n","sub_path":"Game/alertDialog.py","file_name":"alertDialog.py","file_ext":"py","file_size_in_byte":3346,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"5712374","text":"import webbrowser\nimport time\n\n\ntotal_breaks = 3\nbreak_count= 0\n\nwhile(break_count < total_breaks):\n\ttime.sleep(2*60*60)\n\twebbrowser.open(\"https://www.youtube.com/watch?v=StTqXEQ2l-Y\")\n\tbreak_count = break_count + 1\n\ttime.ctime()\n\tprint (\"This program started on \" + time.ctime())\n","sub_path":"take_a_break.py","file_name":"take_a_break.py","file_ext":"py","file_size_in_byte":281,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"553753951","text":"import asyncio\r\nimport concurrent.futures\r\nimport requests\r\nimport json\r\nimport socket\r\nimport pycurl\r\nfrom urllib.parse import urlparse\r\nfrom haversine import haversine\r\nfrom io import BytesIO\r\n\r\n\r\ndef _ipToGeoloc(ipaddr):\r\n res = requests.get(\"http://api.ipstack.com/\"\\\r\n + ipaddr\\\r\n + \"?access_key=1e7beb7517ccc392c760dd63e3fa0917\")\r\n\r\n if res.status_code == 200:\r\n result = json.loads(res.text)\r\n return result['latitude'], result['longitude']\r\n else:\r\n raise Exception(res.status_code)\r\n\r\n\r\ndef _getMyGeoloc():\r\n return _ipToGeoloc('check')\r\n\r\n\r\ndef _urlToIp(url):\r\n o = urlparse(url)\r\n hostname = o.hostname\r\n port = o.port or (443 if o.scheme == 'https' else 80)\r\n ip_addr = socket.getaddrinfo(hostname, port)[0][4][0]\r\n return ip_addr\r\n\r\n\r\ndef getMirrorUrls(country='mirrors'):\r\n res = requests.get('http://mirrors.ubuntu.com/' + country + '.txt')\r\n\r\n if res.status_code == 200:\r\n return res.text.split(\"\\n\")\r\n else:\r\n raise Exception(res.status_code)\r\n \r\n\r\ndef getMirrorsWithLocation(sorted=True):\r\n mirrors = []\r\n \r\n for url in getMirrorUrls():\r\n try:\r\n ip_addr = _urlToIp(url)\r\n geoloc = _ipToGeoloc(ip_addr)\r\n mirrors.append((url, geoloc))\r\n except Exception as e:\r\n # current mirror is unreachable\r\n pass\r\n\r\n if sorted:\r\n myloc = _getMyGeoloc()\r\n mirrors.sort(key=lambda x: haversine(x[1], myloc))\r\n\r\n return mirrors\r\n\r\n\r\ndef getCandidates():\r\n mirrors = getMirrorsWithLocation()\r\n return [url for url, loc in mirrors]\r\n\r\n\r\ndef __getServerStat(url):\r\n c = pycurl.Curl()\r\n c.setopt(c.URL, url)\r\n c.setopt(c.WRITEDATA, BytesIO())\r\n c.setopt(c.TIMEOUT, 1)\r\n try:\r\n c.perform()\r\n stat = {\r\n \"time_namelookup\": c.getinfo(pycurl.NAMELOOKUP_TIME),\r\n \"time_connect\": c.getinfo(pycurl.CONNECT_TIME),\r\n \"time_appconnect\": c.getinfo(pycurl.APPCONNECT_TIME),\r\n \"time_pretransfer\": c.getinfo(pycurl.PRETRANSFER_TIME),\r\n \"time_redirect\": c.getinfo(pycurl.REDIRECT_TIME),\r\n \"time_starttransfer\": c.getinfo(pycurl.STARTTRANSFER_TIME),\r\n \"time_total\": c.getinfo(pycurl.TOTAL_TIME),\r\n \"speed_download\": c.getinfo(pycurl.SPEED_DOWNLOAD),\r\n \"speed_upload\": c.getinfo(pycurl.SPEED_UPLOAD),\r\n \"local_ip\": c.getinfo(pycurl.LOCAL_IP),\r\n \"local_port\": c.getinfo(pycurl.LOCAL_PORT)\r\n }\r\n except Exception as e:\r\n stat = {\r\n \"time_total\": 999999\r\n }\r\n c.close()\r\n\r\n return stat\r\n\r\n\r\nasync def __getAvgLatency(pool, url, trial=3):\r\n loop = asyncio.get_running_loop()\r\n times = await asyncio.gather(*[loop.run_in_executor(pool, lambda : __getServerStat(url)) for _ in range(trial)])\r\n return url, sum(map(lambda x: x['time_total'], times)) * 1000 // trial\r\n\r\n\r\nasync def _testLatencies(urls):\r\n loop = asyncio.get_running_loop()\r\n with concurrent.futures.ThreadPoolExecutor() as pool:\r\n wait_target = {__getAvgLatency(pool, url) for url in urls}\r\n done, pending = await asyncio.wait(wait_target) #, return_when=asyncio.FIRST_COMPLETED)\r\n for p in pending:\r\n p.cancel()\r\n\r\n responseTimes = list(map(lambda x: x.result(), done))\r\n responseTimes.sort(key=lambda x: x[1])\r\n return responseTimes\r\n\r\n\r\ndef testLatencies(urls):\r\n return asyncio.run(_testLatencies(urls))\r\n\r\n\r\ndef findOptimalMirror(urls):\r\n latencies = testLatencies(urls)\r\n return latencies[0][0]","sub_path":"eapt/resolver.py","file_name":"resolver.py","file_ext":"py","file_size_in_byte":3632,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"204065265","text":"from zoo.pipeline.api.keras.layers import *\nfrom zoo.models.recommendation import UserItemFeature\nfrom zoo.models.recommendation import NeuralCF\nfrom zoo.common.nncontext import init_nncontext\nimport matplotlib\nfrom sklearn import metrics\nfrom operator import itemgetter\nfrom bigdl.dataset import movielens\nfrom bigdl.util.common import *\nimport random\nimport pickle\n\nfrom pyspark.sql import SparkSession\nfrom pyspark.sql.functions import col, udf, array, broadcast, log, explode, struct, collect_list\n\nsc = init_nncontext(\"NCF evaluation\")\nspark = SparkSession.builder \\\n .master(\"local[1]\") \\\n .appName(\"SparkByExamples.com\") \\\n .getOrCreate()\n\nmovielens_data = movielens.get_id_ratings(\"./data/movielens/\")\n\nrate_file = \"./data/movielens/ml-1m/ratings.dat\"\nratings = []\nwith open(rate_file) as infile:\n for cnt, line in enumerate(infile):\n x = line.strip().split(\"::\")\n y = list(map(lambda item: int(item), x))\n # y[3] = datetime.datetime.fromtimestamp(y[3])\n ratings.append(y)\n\nmin_user_id = np.min(movielens_data[:,0])\nmax_user_id = np.max(movielens_data[:,0])\nmin_movie_id = np.min(movielens_data[:,1])\nmax_movie_id = np.max(movielens_data[:,1])\nrating_labels= np.unique(movielens_data[:,2])\n\n# open a file, where you ant to store the data\nwith open(\"./data/movielens/train.pickle\", \"rb\") as f:\n train_data = pickle.load(f)\nwith open(\"./data/movielens/test.pickle\", \"rb\") as f:\n test_data = pickle.load(f)\n\nprint(min_user_id, max_user_id, min_movie_id, max_movie_id, rating_labels)\n\ndef build_sample(user_id, item_id, rating):\n sample = Sample.from_ndarray(np.array([user_id, item_id]), np.array([rating]))\n return UserItemFeature(user_id, item_id, sample)\n\ntrainPairFeatureRdds = sc.parallelize(train_data)\\\n .map(lambda x: build_sample(x[0], x[1], x[2]))\nvalPairFeatureRdds = sc.parallelize(test_data) \\\n .map(lambda x: build_sample(x[0], x[1], x[2]))\nvalPairFeatureRdds.cache()\n\ntrain_rdd= trainPairFeatureRdds.map(lambda pair_feature: pair_feature.sample)\nval_rdd= valPairFeatureRdds.map(lambda pair_feature: pair_feature.sample)\nval_rdd.persist()\n\nncf = NeuralCF(user_count=max_user_id,\n item_count=max_movie_id,\n class_num=5,\n hidden_layers=[20, 10],\n include_mf = False)\n\nncf.compile(optimizer= \"adam\",\n loss= \"sparse_categorical_crossentropy\",\n metrics=['accuracy'])\n\nloaded = ncf.load_model(\"./save_model/movie_ncf1.zoomodel\") #old\n\npredictions = ncf.predict_classes(val_rdd).collect()\n#print(predictions[1:10000])\n\nrecommendations = ncf.recommend_for_user(valPairFeatureRdds, 10)\n#for rec in recommendations.take(5): print(rec)\nuser_items_rec10 = recommendations.map(lambda x: [x.user_id, x.item_id]).collect()\nimport pandas\n\nrec_df = pandas.DataFrame(user_items_rec10, columns = ['uid', 'mid'])\nrec_df = rec_df.groupby('uid', as_index=False).agg(lambda x: list(x))\nprint(\"length:\")\nprint(len(rec_df))\n\ntest_df = pandas.DataFrame(test_data, columns=['uid', 'mid', 'rate', 'timestamp'])\ntest_df['rate'] = test_df['rate'] + 1\ntest_df['Rank'] = test_df.groupby('uid', as_index=False)['rate'].transform(lambda x: x.rank(ascending=False, method=\"first\"))\ntest_df = test_df[test_df['Rank'] < 11]\n#print(test_df.head(100))\ntest_df = test_df[['uid', 'mid']]\ntest_df = test_df.groupby('uid', as_index=False).agg(lambda x: list(x))\n\nprint(\"length:\")\nprint(len(test_df))\n\nrec_df = spark.createDataFrame(rec_df)\nrec_df.show(10)\n\ntest_df = spark.createDataFrame(test_df)\ntest_df.show(10)\n\njoined = rec_df.withColumnRenamed('mid', 'midrec').join(test_df, on=['uid'])\njoined.show(10)\n\ndef precision(prediction, groundtruth):\n sum = 0.0\n for ele in prediction:\n if ele in groundtruth:\n sum = sum + 1\n return sum/(len(prediction))\n\nprecision_udf = udf(lambda c1, c2: precision(c1, c2))\n\njoined=joined.withColumn(\"precision\", precision_udf('midrec', 'mid'))\n#def precision():\njoined.show(10, False)\nfrom pyspark.sql.functions import mean as _mean\n\nstats = joined.select(_mean(col('precision')).alias('mean')).collect()\nmean = stats[0]['mean']\nprint(\"precision @ k:\", mean)\n","sub_path":"src/bigdlmodels/evaluate_ncf.py","file_name":"evaluate_ncf.py","file_ext":"py","file_size_in_byte":4134,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"586839918","text":"# Import packages and set working directory if needed here\nimport datetime\nimport glob\nimport os\n\nimport tempfile\n\nimport earthpy.spatial as es\nimport geopandas as gpds\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport numpy.ma as ma\nimport pandas as pd\nfrom pyproj import Proj, transform\nfrom rasterio import mask\nfrom rasterio.transform import from_origin\nimport rasterio as rio\nimport rasterio.plot\nimport rasterstats as rs\nimport tarfile\nimport warnings; \nwarnings.simplefilter('ignore')\n\nimport common_functions as common\n\nlandsat_file_root = os.path.join(common.original_raster_data, 'landsat_summer')\nlandsat_raw_data = os.path.join(common.original_raster_data, 'landsat_raw')\ndata_year_list = ['2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018']\n\navalanche_overlap_shape = None\n\n# This should match the qa layer\nqa_match = 'pixel_qa'\n\n# Set an elevation threshold, in meters, to limit our elevation for our dNDVI analysis\nminimum_elevation_threshold = 1000\nmaximum_elevation_threshold = 3000\n\nrunning_max_dndvi = None\n\n# The order which we will be concatenating our tifs\ncolor_order = ['red', 'green', 'blue', 'nir']\n\nndvi_by_year = {}\n\nmean_below_thresh = pd.DataFrame(columns=[\"year\", \n \"mean_ndvi_in_slide\", \n \"mean_ndvi_out_of_slide\", \n \"mean_dndvi_in_slide\", \n \"mean_dndvi_out_of_slide\", \n \"snow_depth\"])\n \nfile_list = glob.glob(os.path.join(landsat_file_root, \"L*\"))\ntotal_array_count = 0\nrunning_ndvi_sum_array = None\nonly_analyze_version_number = None\navalanches_only = None\n\nband_colors_landsat_versions = {\n '7':{ \n 'blue':'band1',\n 'green':'band2',\n 'red':'band3',\n 'nir':'band4'\n },\n '8':{ \n 'blue':'band2',\n 'green':'band3',\n 'red':'band4',\n 'nir':'band5'\n }\n }\n\ndef open_and_crop_geotiff(geotiff_path, out_path, crop_by):\n \"\"\"\n Open geotiff and crop to the extent of the crop_by geodataframe\n \n Parameters\n ----------\n geotiff_path: string\n Path of the input geotiff to be cropped\n out_path: string\n Target path to write the output to\n crop_by: pandas geodataframe\n Geodataframe or shape to crop by \n \"\"\"\n with rio.open(geotiff_path) as src:\n # Reproject our shape to whatever projection the landsat data is in\n src_meta = src.meta.copy()\n crop_by_reprojected = crop_by.to_crs(src.crs)\n band_masked, transform_cropped = mask.mask(src, crop_by_reprojected.geometry, crop=True)\n src_meta['transform'] = transform_cropped\n print(transform_cropped)\n src_meta['width'] = band_masked.shape[2]\n src_meta['height'] = band_masked.shape[1]\n with rasterio.open(out_path, 'w', **src_meta) as dst:\n dst.write(band_masked) \n \n\ndef create_archive_from_tgz(zipped_data_dir, target_data_dir):\n \"\"\"\n Unzip the tgz files that are in the zipped_data_folder and move them to the\n target_dir only if they don't already exist in the target_dir.\n \n Parameters\n ----------\n zipped_data_dir: string\n Source directory\n target_dir: string\n Target directory\n \"\"\"\n file_list = glob.glob(os.path.join(zipped_data_dir, \"*.tar.gz\"))\n print(\"Number of files found: %d\" % len(file_list))\n files_in_data_dir = [os.path.basename(base_name) for base_name in glob.glob(os.path.join(target_data_dir, \"*\"))]\n print(\"Number of files already present: %d\" % len(files_in_data_dir))\n files_to_upload = [file_to_unzip \n for file_to_unzip \n in file_list \n if os.path.basename(file_to_unzip.replace(\".tar.gz\", \"\"))\n not in files_in_data_dir] \n print(\"Number of files to upload: %d\" % len(files_to_upload))\n for file_name in files_to_upload:\n with tempfile.TemporaryDirectory() as temporary_directory:\n scene_name = unzip_file(file_name, target_data_dir=temporary_directory) \n band_list = glob.glob(os.path.join(temporary_directory, scene_name, \"*.tif\"))\n landsat_version = landsat_path_to_version(scene_name)\n bands_to_save = [i for i in band_colors_landsat_versions[landsat_version].values()] + [\"qa\"]\n band_list = [fname \n for fname in band_list \n for bands_to_save_match in bands_to_save \n if bands_to_save_match in fname]\n scene_directory = os.path.join(target_data_dir, scene_name)\n os.mkdir(scene_directory)\n for band in band_list:\n band_basename = os.path.basename(band)\n open_and_crop_geotiff(band, os.path.join(scene_directory, band_basename), common.study_area_box_gdf)\n \ndef unzip_file(file_name, target_data_dir=\"./\"): \n \"\"\"\n Unzip all tgz files in file_list and save to target_data_dir. Each file will\n be saved into its own directory, named the same as the .tgz name, without\n the extension.\n \n Parameters\n ----------\n file_name: string\n Path of the file to unzip\n target_data_dir: str\n Directory to put the files into\n \"\"\"\n tar = tarfile.open(file_name, \"r:gz\")\n file_base_no_ext = os.path.basename(file_name.replace(\".tar.gz\", \"\"))\n directory_name = os.path.join(target_data_dir, file_base_no_ext)\n tar.extractall(path=directory_name)\n tar.close()\n return file_base_no_ext\n\n\ndef mask_clouds(qa_arr, landsat_ver=\"8\"):\n \"\"\"\n Creates a cloud mask given a qa_raster.\n \n Parameters\n ----------\n qa_arr: ndarray\n A qa raster containing information about cloud cover.\n landsat_ver: str\n A string representation of the landsat version.\n \n Returns\n ----------\n cloud_mask: ndarray\n A boolean array the same shape as qa_raster containing \n True values where clouds are present and False values \n where there are no clouds.\n \"\"\" \n if landsat_ver == \"8\":\n # Much of the terrain was being marked as cloud with the cloud_shadow and cloud \n # mask values, so had to only mask high confidence clouds.\n cloud_shadow = []#[328, 392, 840, 904, 1350]\n cloud = []#[352, 368, 416, 432, 480, 864, 880, 928, 944, 992]\n high_confidence_cloud = [480, 992]\n high_confidence_cirrus = [834, 836, 840, 848, 864, 880, 898, 900, 904, 912, 928, 944, 992]\n snow_ice = [336, 368, 400, 432, 848, 880, 912, 944, 1352]\n water = [324, 388, 836, 900, 1348]\n combined_list = list(set(cloud_shadow + \n cloud + \n high_confidence_cloud + \n high_confidence_cirrus + \n snow_ice + \n water))\n elif landsat_ver == \"7\":\n # Much of the terrain was being marked as cloud with the cloud_shadow and cloud \n # mask values, so had to only mask high confidence clouds.\n cloud_shadow = []#[72, 136]\n cloud = [] #[96, 112, 160, 176, 224]\n low_confidence_cloud = [] #[66, 68, 72, 80, 96, 112]\n medium_confidence_cloud = [] #[130, 132, 136, 144, 160, 176]\n high_confidence_cloud = [224]\n snow_ice = [80, 112, 144, 176]\n water = [68, 132]\n combined_list = list(set(cloud_shadow + \n cloud + \n low_confidence_cloud + \n medium_confidence_cloud + \n high_confidence_cloud + \n snow_ice + \n water))\n else:\n print(\"Landsat version %s not recognized. No cloud removal performed.\" % landsat_ver)\n combined_list = []\n # Create a mask with True values indicating non-cloud pixels\n all_masked_values = np.array(combined_list)\n cloud_mask = np.isin(qa_arr, all_masked_values)\n \n return cloud_mask\n\n\ndef files_from_pattern(pattern, expect_single_file=False):\n \"\"\"\n From a given pattern, retrieve the filenames. If expect_single_file is True,\n raise an error if multiple files are returned. If no files are returned, print\n a message.\n \n TODO: expand pattern to regex instead of only wildcards.\n \n Parameters\n ----------\n pattern: str\n A pattern to match filename. At this time, only wildcards are accepted (no regular\n expressions).\n expect_single_file: bool\n When True, a valueError is raised if more than one file is returned.\n \n Returns\n ----------\n [file names]: list\n A list of returned file names.\n \"\"\"\n returned_files = glob.glob(pattern)\n if len(returned_files) == 0:\n print(\"No files found for pattern %s.\" % pattern)\n if expect_single_file and len(returned_files) > 1:\n raise ValueError(\"Expecting a single value to be returned \"\n \"and found %d values for pattern %s.\" \n % (len(returned_files), pattern))\n return returned_files\n\ndef scene_path_to_year(path):\n \"\"\"\n Determine the year given a scene ID\n \n Parameters\n ----------\n path: string\n Landsat scene ID (see https://landsat.usgs.gov/landsat-collections#Prod%20IDs)\n \n Returns\n ----------\n [year]: string\n Landsat year\n \"\"\"\n return os.path.basename(path)[10:14]\n\ndef landsat_path_to_version(path):\n \"\"\"\n Determine the landsat version given a scene ID\n \n Parameters\n ----------\n path: string\n Landsat scene ID (see https://landsat.usgs.gov/landsat-collections#Prod%20IDs)\n \n Returns\n ----------\n [year]: string\n Landsat version (single digit string)\n \"\"\"\n return os.path.basename(path)[3:4]\n\n\ndef generate_ndvi():\n ndvi_df = pd.DataFrame()\n\n # Loop through each year in the outer loop\n for year in data_year_list: \n print(\"Analyzing year %s\" % year)\n year_list_subset = [file for file in file_list if scene_path_to_year(file) == year]\n if not year_list_subset:\n print(\"No files found for year %s\" % (year))\n continue\n accumulated_ndvi_arrays = []\n \n # Loop through each file for the specified year in the inner loop\n for file in year_list_subset:\n file_basename = os.path.basename(file)\n landsat_version_number = landsat_path_to_version(file_basename)\n \n # If this landsat version number is not to be analyzed, continue to the next iteration\n if only_analyze_version_number is not None and only_analyze_version_number == landsat_version_number:\n continue\n band_colors = band_colors_landsat_versions[landsat_version_number] \n accumulated_bands_list = []\n accumulated_bands_list_unmasked = [] \n\n # Get QA layer to create our cloud mask\n qa_file_name = files_from_pattern(os.path.join(file, \"*%s*\" % qa_match), \n expect_single_file=True)[0]\n\n with rio.open(qa_file_name) as src:\n \n # Reproject our shape to whatever projection the landsat data is in\n landsat_crs = src.crs\n landsat_affine = src.transform\n qa_arr = src.read()\n qa_arr = np.squeeze(qa_arr)\n cloud_mask = mask_clouds(qa_arr, landsat_ver=landsat_version_number)\n\n # Loop through the colors necessary to create NDVI\n for color in color_order:\n band_file_name = files_from_pattern(os.path.join(file, \"*%s*\" % band_colors[color]), \n expect_single_file=True)[0]\n with rio.open(band_file_name) as src:\n \n # Reproject our shape to whatever projection the landsat data is in\n band = src.read()\n\n # Cast to float so we can assign nan values\n band = np.squeeze(band).astype(\"float\")\n \n # Mask invalid values\n band[band == src.nodatavals] = False\n \n # Remove the banding effect due to sattelite malfunction with landsat 7 after 2003\n if landsat_version_number == 7 and int(year) > 2003:\n band[band == 0] = False\n \n # Mask clouds \n band[cloud_mask] = False\n \n accumulated_bands_list.append(band) \n \n # Create arrays from our cloud-masked and no-cloud-masked band lists\n accumulated_bands_arr = np.array(accumulated_bands_list)\n\n # Calculate the NDVI array and append to list\n ndvi_arr = common.calculate_NDVI(accumulated_bands_arr)\n ndvi_df = ndvi_df.append({\"year\": year, \n \"fname\": file_basename, \n \"RGB_arr\": accumulated_bands_arr, \n \"NDVI_arr\": ndvi_arr, \n \"landsat_ver\": landsat_version_number,\n \"valid_vals\": band[band != False].size\n }, \n ignore_index=True)\n\n # Metadata with pandas is unfortunate; would move to something that handles metadata more robustly like xarray\n # but there's additional overhead/complexity with that. Don't copy this dataframe otherwise this metadata will\n # disappear in the copy.\n ndvi_df.affine = landsat_affine\n ndvi_df.crs = landsat_crs\n\n return ndvi_df\n\n\ndef generate_dndvi(ndvi_df, avalanche_overlap_shape):\n shapefile_below_threshold = avalanche_overlap_shape[\n (avalanche_overlap_shape['height_bucket'] < maximum_elevation_threshold) & \n (avalanche_overlap_shape['height_bucket'] > minimum_elevation_threshold)]\n\n mean_below_thresh = pd.DataFrame()\n\n ndvi_year = ndvi_df.groupby(by=\"year\")\n for year, group in ndvi_year:\n ndvi_vals = np.array(group['NDVI_arr'].values.tolist())\n annual_ndvi_array = np.nanmax(ndvi_vals, axis=0)\n ndvi_below_elevation_thresh = common.rasterstats_grouped_by_height(shapefile_below_threshold, \n annual_ndvi_array, \n ndvi_df.affine, \n \"mean\")\n mean_below_thresh = mean_below_thresh.append(\n {\n \"year\": year,\n \"mean_NDVI_in_slide\": ndvi_below_elevation_thresh\n .replace([np.inf, -np.inf], np.nan)['mean_avalanche']\n .mean(), \n \"mean_NDVI_out_of_slide\": ndvi_below_elevation_thresh\n .replace([np.inf, -np.inf], np.nan)['mean_no_avalanche']\n .mean(), \n \"snow_depth\": common.snowfall_data_df\n .loc[common.snowfall_data_df['Year'] == int(year), \"Total\"]\n .iat[0]\n },\n ignore_index=True\n )\n mean_below_thresh['mean_dNDVI_in_slide'] = mean_below_thresh['mean_NDVI_in_slide'].diff()\n mean_below_thresh['mean_dNDVI_out_of_slide'] = mean_below_thresh['mean_NDVI_out_of_slide'].diff()\n \n return mean_below_thresh\n\ndef generate_avalanche_shapes(ndvi_crs):\n # Generate a single shapefile that contains the union of the\n # avalanche path and the elevation buckets for our entire study area\n # This step takes forever when you run it for the first \n # time and if the geojson isn't available on disk\n return common.generate_unioned_avalanche_overlay(ndvi_crs)\n\n\ndef ndvi_analysis(ndvi_df, avalanche_overlap_shape):\n # The rgb image that has the most valid values to be used as the background for plotting\n best_rgb = ndvi_df.loc[ndvi_df['valid_vals'].idxmax()]['RGB_arr']\n\n # Create a 3-d array and take the mean over the 0th dimension (time)\n ndvi_vals = np.array(ndvi_df['NDVI_arr'].values.tolist())\n mean_ndvi_array = np.nanmax(ndvi_vals, axis=0)\n \n # This is the stat we are using in our zonal stats - taking the spatial mean\n stat = \"mean\"\n slide_paths_elev_buckets = avalanche_overlap_shape[~(pd.isna(avalanche_overlap_shape['avalanche_id']))]\n ndvi_slide_paths = common.get_zonal_stats_dataframe(slide_paths_elev_buckets, \n mean_ndvi_array, \n ndvi_df.affine, \n stat)\n _, _ = common.plot_rgb_and_vector(best_rgb, \n ndvi_df.crs,\n ndvi_slide_paths,\n \"Maximum NDVI in Slide Path Height Intervals\\n(Landsat Fig. 1)\", \n \"Imagery: Landsat, 2008-2018, \" + \\\n \"Avalanche Shapes: Utah Automated Geographic Reference Center\",\n vmax=1,\n vmin=-1,\n color=stat)\n\n # Find the deviation from the slide path NDVI and the remainder of the height bin\n # plot the result with a colormap\n ndvi_elevation_buckets = common.rasterstats_grouped_by_height(avalanche_overlap_shape, \n mean_ndvi_array, \n ndvi_df.affine, \n stat)\n slide_paths_elev_buckets[\"NDVI_deviation\"] = np.nan\n\n # Loop through the different elevation buckets\n for _, row in ndvi_elevation_buckets.iterrows():\n\n # Isolate just the slide paths in this elevation bucket\n is_in_height_bucket = slide_paths_elev_buckets['height_bucket'] == row['height_bucket']\n\n # Subtract the mean no-avalanche NDVI from the equivalent elevation bucket within the slide paths \n slide_paths_elev_buckets.loc[is_in_height_bucket, \"NDVI_deviation\"] = \\\n ndvi_slide_paths[stat] - row[stat + '_no_avalanche']\n\n _, _ = common.plot_rgb_and_vector(best_rgb, \n ndvi_df.crs,\n slide_paths_elev_buckets,\n \"Deviation From Typical Elevation NDVI\\n(Landsat Fig. 2)\", \n \"Imagery: Landsat Avalanche Shapes: Utah Automated Geographic Reference Center\",\n vmax=.25,\n vmin=-.25,\n color=\"NDVI_deviation\")\n\n common.plot_bar(ndvi_elevation_buckets[ndvi_elevation_buckets['height_bucket'] != 0], \n \"height_bucket\", \n \"Elevation (meters)\", \n ['mean_avalanche', 'mean_no_avalanche'], \n \"NDVI\", \n \"Maximum NDVI in Avalanche-Prone Areas vs Low Avalanche-Risk Areas\\n\" + \\\n \"maximum of %d Landsat datasets\\n\" % len(ndvi_df) + \\\n \"(Landsat Fig. 3)\", \n \"Landsat, 2008-2018\",\n series_names=['Within Avalanche Paths', \n 'Outside Avalanche Paths'])\n\n\ndef dndvi_analysis(dndvi_df_below_altitude_thresh):\n # First row is na since we don't have a previous NDVI to compare it to. Drop this.\n dndvi_df_below_altitude_thresh_no_na = dndvi_df_below_altitude_thresh.dropna(axis=0)\n dndvi_df_below_altitude_thresh_no_na.set_index(\"year\")\n ax1 = common.plot_bar(dndvi_df_below_altitude_thresh_no_na, \n 'year', \n 'Year', \n ['mean_dNDVI_in_slide','mean_dNDVI_out_of_slide'], \n \"Mean dNDVI Between Years\\n(positive indicates growth)\", \n \"dNDVI Below %s Meters and above %s Meters vs Snowfall\\n(Landsat Fig. 5)\"\n % (maximum_elevation_threshold, minimum_elevation_threshold), \n \"Imagery Data: Landsat 2008-2018\\n\" + \\\n \"Snowfall Data: Utah Department of Transportation\\n\" + \\\n \"Avalanche information: Utah Automated Geographic Reference Center\", \n series_names=[\"Annual dNDVI In Slide Paths\", \"Annual dNDVI Outside Slide Paths\"],\n display_plot=False)\n ax2 = ax1.twinx()\n dndvi_df_below_altitude_thresh_no_na['snow_depth'].plot(x='year', ax=ax2)\n ax2.set_ylabel('Total Snowfall in Prior Winter (inches)', fontsize=22)\n plt.show()\n\n\ndef calculate_maximum_diff(ndvi_df):\n # Get absolute max dNDVI for each pixel in our study period\n combined_arr = np.array(ndvi_df['NDVI_arr'].values.tolist())\n absolute_max_diff_ndvi = np.nanmax(combined_arr, axis=0) - np.nanmin(combined_arr, axis=0)\n _ = common.plot_array_and_vector(absolute_max_diff_ndvi, \n ndvi_df.crs,\n common.avalanche_shapes_object,\n \"Maximum Absolute Difference in NDVI, 2008-2018\\n(Landsat Fig. 4)\", \n \"Imagery: Landsat, 2008-2018\",\n vmax=2,\n vmin=0,\n cmap_array='OrRd')","sub_path":"landsat_analysis.py","file_name":"landsat_analysis.py","file_ext":"py","file_size_in_byte":22443,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"322552425","text":"import json\n\n\ndef write_order_to_json(item, quantity, price, buyer, date):\n add_data = {'item': item, 'quantity': quantity, 'price': price, 'buyer': buyer, 'date': date}\n try:\n with open('orders.json', 'r') as f:\n data = json.loads(f.read())\n data['data'].append(add_data)\n except FileNotFoundError:\n data = {'data': [add_data]}\n with open('orders.json', 'w') as f:\n f.write(json.dumps(data, indent=4))\n\n\nwrite_order_to_json('Toys', 11, 565, 'HNM', '11.01.18')\n","sub_path":"HW2/task2/task2.py","file_name":"task2.py","file_ext":"py","file_size_in_byte":516,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"282627016","text":"from selenium import webdriver\n\nimport bs4 as bs\n\nquestion = input(\"question >>> \")\n\n_driver = None\n\ndef main () :\n\tSTART_PHANTOM()\n\n\twords = question.split(\" \")\n\tweb_question = \"\"\n\tfor word in words :\n\t\tweb_question += (word + \"+\")\n\tweb_question = web_question[:-1]\n\tprint (\"searching >>> \" + web_question)\n\n\t_driver.get(\"https://stackoverflow.com/search?q=\" + web_question)\n\t_driver.find_element_by_class_name(\"question-hyperlink\").click()\n\n\tsrc = _driver.page_source\n\n\tsoup = bs.BeautifulSoup(src,\"lxml\")\n\n\ta = soup.find('div', class_='answer accepted-answer')\n\tfinal = a.find('div', class_='post-text')\n\n\twith open(\"data.txt\",\"w\") as f :\n\t\tf.write(final.text)\n\t\tf.close()\n\n\tprint(\"### ANSWER ###\")\n\tprint(final.text)\n\n\t_driver.quit()\n\t\n\n\ndef START_CHROME () :\n\tglobal _driver\n\t\n\tfrom selenium.webdriver.chrome.options import Options\n\n\tchrome_options = Options() \n\tchrome_options.add_argument(\"--headless\")\n\tchrome_options.binary_location = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary'\n\t\n\t_driver = webdriver.Chrome(executable_path= \"/Users/machina/Desktop/WebScraping/StackOverflowBot/chrome_driver\" , chrome_options=chrome_options)\n\t\n\ndef START_PHANTOM () :\n\tglobal _driver\n\n\t_driver = webdriver.PhantomJS()\n\nif __name__ == \"__main__\":\n\tmain()","sub_path":"stackoverflowcli/stackoverflowcli.py","file_name":"stackoverflowcli.py","file_ext":"py","file_size_in_byte":1283,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"270616161","text":"import urllib.request\nimport xml.etree.ElementTree as ET\n\nurl = input('Enter location: ')\n\ncount = 0\nsum = 0\nprint('Retrieving', url)\nuh = urllib.request.urlopen(url)\ndata = uh.read()\nprint('Retrieved', len(data), 'characters')\ntree = ET.fromstring(data)\n\nvalues = tree.findall('.//count')\n\nfor value in values:\n count = count + 1\n text = value.text\n sum = sum + int(text)\n\nprint('Count:', count)\nprint('Sum:', sum)\n\n","sub_path":"student/geoxml.py","file_name":"geoxml.py","file_ext":"py","file_size_in_byte":426,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"364527262","text":"\"\"\"\r\n 猫眼TOP100榜单排名\r\n 2019-10-09\r\n 邱深知\r\n\"\"\"\r\n\r\nfrom requests.exceptions import RequestException\r\nimport requests\r\nfrom bs4 import BeautifulSoup\r\nimport re\r\n\r\ndef get_one_page(url):\r\n headers = {\r\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36\"\r\n }\r\n try:\r\n response = requests.get(url,headers=headers)\r\n if response.status_code == 200:\r\n return response.content.decode()\r\n return None\r\n except RequestException:\r\n return None\r\n\r\ndef parse_one_page(html):\r\n soup = BeautifulSoup(html,'lxml')\r\n title = re.compile('title=\"(.*?)\"')\r\n imgst = re.compile('data-src=\"(.*?)\"')\r\n shangyin = re.compile('
(.*?)
')\r\n zhuyans = re.compile('
\\S*\\s*(.*?)\\s*\\S*
')\r\n pingfen1 = re.compile('
(.*?)')\r\n pingfen2 = re.compile('(.*?)
')\r\n\r\n\r\n #标题\r\n page = soup.select('div > div > div.movie-item-info > p.name > a')\r\n ps = re.findall(title,str(page))\r\n # 图片\r\n imgs = soup.select('a > img.board-img')\r\n imgss = re.findall(imgst, str(imgs))\r\n #主演\r\n zhuyan = soup.select('div > div > div.movie-item-info > p.star')\r\n zhuyanss = re.findall(zhuyans,str(zhuyan))\r\n #上映时间\r\n shangying = soup.select('div > div > div.movie-item-info > p.releasetime')\r\n shanyings = re.findall(shangyin,str(shangying))\r\n #评分\r\n pingfens = soup.select('div > div > div.movie-item-number.score-num > p')\r\n pinfen1 = re.findall(pingfen1,str(pingfens))\r\n pinfen2 = re.findall(pingfen2,str(pingfens))\r\n\r\n\r\n for title,imgss,zhuya,shanyin,pinfen1,pinfen2 in zip(ps,imgss,zhuyanss,shanyings,pinfen1,pinfen2):\r\n # print(\"标题:\"+title)\r\n # print(\"图片:\"+imgss)\r\n # print(zhuya)\r\n # print(shanyin)\r\n # print(\"评分:\"+pinfen1+pinfen2)\r\n text = \"标题:\"+title+\"\\n\"+\"图片:\"+imgss+\"\\n\"+zhuya+\"\\n\"+shanyin+\"\\n\"+\"评分:\"+pinfen1+pinfen2+\"\\n\"\r\n with open('TOP100排名.txt', 'a', encoding='utf-8')as f:\r\n f.write(text)\r\n f.close()\r\n\r\n\r\ndef main():\r\n pages = 0\r\n # try:\r\n while True:\r\n if pages >=101 :\r\n print(\"已经爬取10页!\")\r\n break\r\n else:\r\n url = \"https://maoyan.com/board/4?offset={}\".format(pages)\r\n html = get_one_page(url)\r\n parse_one_page(html)\r\n pages += 10\r\n print(url)\r\n # except:\r\n # print(\"爬取%d\"%pages+\"成功!\")\r\n\r\nif __name__ == '__main__':\r\n main()\r\n\r\n\r\n","sub_path":"spider.py","file_name":"spider.py","file_ext":"py","file_size_in_byte":2665,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"278596886","text":"__author__ = 'kakshilshah'\nimport constants\nimport re\nfrom api.models import *\nfrom django.core.exceptions import ObjectDoesNotExist, MultipleObjectsReturned\n\nfrom responseParser import ResponseParser\nclass Validator(object):\n \"\"\"A validator object validates parameters and stores appropriate response.\n Attributes:\n statusCode: A string representing the customer's name.\n parameters: A float tracking the current balance of the customer's account.\n message:\n \"\"\"\n def __init__(self,request,parametersList):\n self.request = request\n self.requestQueryDictionary = self.convertToDict(request.POST)\n self.parametersList = parametersList\n\n def applyConstraints(self,constraintsList):\n self.constraintsList = constraintsList\n\n def isValidated(self):\n\n if not len(self.parametersList) == len(self.constraintsList):\n return False\n\n for i in range(0,len(self.parametersList)):\n\n parameterName = self.parametersList[i]\n parameterConstraint = self.constraintsList[i]\n\n if parameterConstraint == constants.TYPE_NOTNULL:\n if not parameterName in self.requestQueryDictionary:\n return False\n\n elif parameterConstraint == constants.TYPE_TEXT:\n if not parameterName in self.requestQueryDictionary:\n return False\n if len(self.requestQueryDictionary[parameterName]) == 0:\n return False\n\n elif parameterConstraint == constants.TYPE_PHONE:\n if not parameterName in self.requestQueryDictionary:\n return False\n if not len(self.requestQueryDictionary[parameterName]) == 10:\n return False\n\n elif parameterConstraint == constants.TYPE_DEVICE:\n if not parameterName in self.requestQueryDictionary:\n return False\n if not self.requestQueryDictionary[parameterName] in constants.DEVICE_TYPES:\n return False\n\n elif parameterConstraint == constants.TYPE_EMAIL:\n if not parameterName in self.requestQueryDictionary:\n return False\n return self.validateEmail(self.requestQueryDictionary[parameterName])\n\n elif parameterConstraint == constants.TYPE_LATLONG:\n if not parameterName in self.requestQueryDictionary:\n return False\n return self.validateLatLong(self.requestQueryDictionary[parameterName])\n\n elif parameterConstraint == constants.TYPE_USERID:\n if not parameterName in self.requestQueryDictionary:\n return False\n return self.validateUserID(self.requestQueryDictionary[parameterName])\n\n\n return True\n\n def getParametersDictionary(self):\n return self.requestQueryDictionary\n\n def getErrorBlock(self):\n if not len(self.parametersList) == len(self.constraintsList):\n return ResponseParser.getParsedErrorMessage(constants.ERROR_INTERNAL_PARAMETER)\n\n\n for i in range(0,len(self.parametersList)):\n parameterName = self.parametersList[i]\n parameterConstraint = self.constraintsList[i]\n\n if parameterConstraint == constants.TYPE_NOTNULL:\n if not parameterName in self.requestQueryDictionary:\n return ResponseParser.getParsedValidMessage(self.requestQueryDictionary[parameterName])\n\n elif parameterConstraint == constants.TYPE_TEXT:\n if not parameterName in self.requestQueryDictionary or len(self.requestQueryDictionary[parameterName]) == 0:\n return ResponseParser.getParsedValidMessage(parameterName)\n\n elif parameterConstraint == constants.TYPE_PHONE:\n if not parameterName in self.requestQueryDictionary or not len(self.requestQueryDictionary[parameterName]) == 10:\n return ResponseParser.getParsedValidMessage(parameterName)\n\n elif parameterConstraint == constants.TYPE_DEVICE:\n if not parameterName in self.requestQueryDictionary or not self.requestQueryDictionary[parameterName] in constants.DEVICE_TYPES:\n return ResponseParser.getParsedValidMessage(parameterName)\n\n elif parameterConstraint == constants.TYPE_EMAIL:\n if not parameterName in self.requestQueryDictionary:\n return ResponseParser.getParsedValidMessage(parameterName)\n if not self.validateEmail(self.requestQueryDictionary[parameterName]):\n return ResponseParser.getParsedValidMessage(parameterName)\n\n elif parameterConstraint == constants.TYPE_LATLONG:\n if not parameterName in self.requestQueryDictionary:\n return ResponseParser.getParsedValidMessage(parameterName)\n if not self.validateLatLong(self.requestQueryDictionary[parameterName]):\n return ResponseParser.getParsedValidMessage(parameterName)\n\n elif parameterConstraint == constants.TYPE_USERID:\n if not parameterName in self.requestQueryDictionary:\n return ResponseParser.getParsedValidMessage(parameterName)\n if not self.validateUserID(self.requestQueryDictionary[parameterName]):\n return ResponseParser.getParsedValidMessage(parameterName)\n\n return ResponseParser.gerGenericErrorMessage()\n\n\n def validateEmail( self,email ):\n from django.core.validators import validate_email\n from django.core.exceptions import ValidationError\n try:\n validate_email( email )\n return True\n except ValidationError:\n return False\n\n\n def validateLatLong(self,latLong):\n latLongRegex = re.compile(\"^(\\+)?(\\-)?([\\d]{1,3})(\\.)(\\d+)$\")\n if not latLongRegex.match(latLong):\n return False\n else:\n return True\n\n def convertToDict(self,queryDict):\n outputDict = {}\n for key in queryDict:\n value = queryDict[key]\n outputDict[key] = value\n return outputDict\n\n def validateUserID(self,userID):\n try:\n userObject = PapsterUser.objects.get(userID = userID)\n return True\n except ObjectDoesNotExist:\n return False\n except MultipleObjectsReturned:\n return False","sub_path":"api/core/validator.py","file_name":"validator.py","file_ext":"py","file_size_in_byte":6515,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"206376256","text":"from sierra.base_parameters import BaseParameter\n\nfrom sierra.utilities.converter import convert\n\n\nclass Upper_Collierville_Tunnel_1_Capacity(BaseParameter):\n\n def _value(self, timestep, scenario_index):\n capacity_cms = 800 / 35.31 # cfs to cms\n if self.model.mode == 'scheduling':\n if (6, 1) <= (timestep.month, timestep.day) <= (8, 1):\n capacity_cms = 100 / 35.31\n union_utica = self.model.nodes['Union-Utica Reservoir']\n # relief_reservoir = self.model.nodes['Relief Reservoir']\n if timestep.index == 0:\n prev_storage = union_utica.initial_volume\n else:\n prev_storage = union_utica.volume[scenario_index.global_id]\n prev_storage /= 1.2335\n\n if prev_storage <= 2:\n capacity_cms = 0\n elif prev_storage <= 3:\n capacity_cms = 150 / 35.31\n elif prev_storage <= 4:\n capacity_cms = 300 / 35.31\n\n else:\n capacity_cms *= self.days_in_month\n\n return capacity_cms\n\n def value(self, timestep, scenario_index):\n try:\n return convert(self._value(timestep, scenario_index), \"m^3 s^-1\", \"m^3 day^-1\", scale_in=1,\n scale_out=1000000.0)\n except Exception as err:\n print('\\nERROR for parameter {}'.format(self.name))\n print('File where error occurred: {}'.format(__file__))\n print(err)\n\n @classmethod\n def load(cls, model, data):\n return cls(model, **data)\n\n\nUpper_Collierville_Tunnel_1_Capacity.register()\n","sub_path":"sierra/models/stanislaus/_parameters/Upper_Collierville_Tunnel_1_Capacity.py","file_name":"Upper_Collierville_Tunnel_1_Capacity.py","file_ext":"py","file_size_in_byte":1623,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"559236707","text":"# uncompyle6 version 3.7.4\n# Python bytecode 2.7 (62211)\n# Decompiled from: Python 3.6.9 (default, Apr 18 2020, 01:56:04) \n# [GCC 8.4.0]\n# Embedded file name: build/bdist.macosx-10.11-x86_64/egg/reviewbot/tools/flake8.py\n# Compiled at: 2018-07-31 04:26:56\n\"\"\"Review Bot tool to run flake8.\"\"\"\nfrom __future__ import unicode_literals\nfrom reviewbot.tools import Tool\nfrom reviewbot.utils.process import execute, is_exe_in_path\n\nclass Flake8Tool(Tool):\n \"\"\"Review Bot tool to run flake8.\"\"\"\n name = b'flake8'\n version = b'1.0'\n description = b'Checks Python code for style and programming errors.'\n timeout = 30\n options = [\n {b'name': b'max_line_length', \n b'field_type': b'django.forms.IntegerField', \n b'default': 79, \n b'field_options': {b'label': b'Maximum Line Length', \n b'help_text': b'The maximum line length to allow.', \n b'required': True}},\n {b'name': b'ignore', \n b'field_type': b'django.forms.CharField', \n b'default': b'', \n b'field_options': {b'label': b'Ignore', \n b'help_text': b'A comma-separated list of errors and warnings to ignore. This will be passed to the --ignore command line argument (e.g. E4,W).', \n b'required': False}}]\n\n def check_dependencies(self):\n \"\"\"Verify that the tool's dependencies are installed.\n\n Returns:\n bool:\n True if all dependencies for the tool are satisfied. If this\n returns False, the worker will not be listed for this Tool's queue,\n and a warning will be logged.\n \"\"\"\n return is_exe_in_path(b'flake8')\n\n def handle_file(self, f, settings):\n \"\"\"Perform a review of a single file.\n\n Args:\n f (reviewbot.processing.review.File):\n The file to process.\n\n settings (dict):\n Tool-specific settings.\n \"\"\"\n if not f.dest_file.lower().endswith(b'.py'):\n return\n path = f.get_patched_file_path()\n if not path:\n return\n output = execute([\n b'flake8',\n b'--exit-zero',\n b'--max-line-length=%s' % settings[b'max_line_length'],\n b'--ignore=%s' % settings[b'ignore'],\n path], split_lines=True)\n for line in output:\n try:\n line = line[len(path) + 1:]\n line_num, column, message = line.split(b':', 2)\n f.comment(message.strip(), int(line_num))\n except Exception:\n pass","sub_path":"pycfiles/reviewbot_worker-1.0.1.1-py2.7/flake8.py","file_name":"flake8.py","file_ext":"py","file_size_in_byte":2606,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"4776714","text":"# -----------------------------------------------------------------------------\n# Gated working memory with an echo state network\n# Copyright (c) 2018 Nicolas P. Rougier\n#\n# Distributed under the terms of the BSD License.\n# -----------------------------------------------------------------------------\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom data import generate_data, smoothen\nfrom model import generate_model, train_model, test_model\n\n\nif __name__ == '__main__':\n \n # Random generator initialization\n np.random.seed(123)\n \n # Testing data\n n_gate = 1\n n = 2500\n values = smoothen(np.random.uniform(-1, +1, n))\n ticks = np.random.uniform(0, 1, (n, n_gate)) < 0.01\n data = generate_data(values, ticks)\n\n y = data[\"input\"][0,0]\n output = []\n states = np.zeros((3,len(data)))\n a = 1000\n b = .001\n \n for i in range(len(data)):\n v,t = data[\"input\"][i]\n x0 = states[0,i] = b*v\n x1 = states[1,i] = b*v + a*t\n x2 = states[2,i] = a*t + b*y\n y = (np.tanh(x0) - np.tanh(x1) + np.tanh(x2))/b\n output.append(y)\n\n model = {\"output\" : np.array(output).reshape(len(output),1),\n \"state\" : np.array(states) }\n error = np.sqrt(np.mean((model[\"output\"] - data[\"output\"])**2))\n print(\"Error: {0}\".format(error))\n\n # Display\n fig = plt.figure(figsize=(14,6))\n fig.patch.set_alpha(0.0)\n n_subplots = 4\n\n ax1 = plt.subplot(n_subplots, 1, 1)\n ax1.tick_params(axis='both', which='major', labelsize=8)\n ax1.plot(data[\"input\"][:,0], color='0.75', lw=1.0)\n ax1.plot(data[\"output\"], color='0.75', lw=1.0)\n ax1.plot(model[\"output\"], color='0.00', lw=1.5)\n X, Y = np.arange(len(data)), np.ones(len(data))\n C = np.zeros((len(data),4))\n C[:,3] = data[\"input\"][:,1]\n ax1.scatter(X, -0.9*Y, s=1, facecolors=C, edgecolors=None)\n ax1.text(-25, -0.9, \"Ticks:\",\n fontsize=8, transform=ax1.transData,\n horizontalalignment=\"right\", verticalalignment=\"center\")\n ax1.set_ylim(-1.1,1.1)\n ax1.yaxis.tick_right()\n ax1.set_ylabel(\"Input & Output\")\n ax1.text(0.01, 0.9, \"A\",\n fontsize=16, fontweight=\"bold\", transform=ax1.transAxes,\n horizontalalignment=\"left\", verticalalignment=\"top\")\n\n\n for i in range(3):\n ax = plt.subplot(n_subplots, 1, 2+i, sharex=ax1)\n ax.tick_params(axis='both', which='major', labelsize=8)\n ax.set_ylim(-0.001, +0.001)\n ax.yaxis.tick_right()\n ax.text(0.01, 0.9, chr(ord(\"B\")+i),\n fontsize=16, fontweight=\"bold\", transform=ax.transAxes,\n horizontalalignment=\"left\", verticalalignment=\"top\")\n ax.plot(model[\"state\"][i,:], color='k', alpha=.5, lw=.5)\n ax.set_ylabel(\"Activity\")\n ax.set_yticks([-0.001,0.001])\n ax.set_yticklabels([\"$-10^{-3}$\",\"$+10^{-3}$\"])\n \n \n plt.tight_layout()\n plt.savefig(\"figure5.pdf\")\n plt.show()\n","sub_path":"figure5.py","file_name":"figure5.py","file_ext":"py","file_size_in_byte":2935,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"69267981","text":"#from main_script import read_mhunter_csv, calculate_labelling, read_atomic_composition\nimport SauerFunction as sf\nfrom SauerClass import Record, Labelling\nimport argparse\n\n\n\n# Read all the relevant files\n#records = sf.read_mhunter_csv('sample_files/sample_input_data.csv')\n#atomic_composition, N_dict = sf.read_atomic_composition('sample_files/sample_input_formulae.csv')\n#fp = open('sample_files/out_temp.csv', 'w')\n\n\nparser = argparse.ArgumentParser(description='Do enrichment analysis.')\nparser.add_argument('-1','--input_data', help='Input data file name', required=True)\nparser.add_argument('-2','--input_formulae', help='Input formulae file name', required=True)\nparser.add_argument('-o','--output_filename', help='Name of output file', required=True)\nargs = vars(parser.parse_args())\n\nrecords = sf.read_mhunter_csv(args[\"input_data\"])\natomic_composition, N_dict = sf.read_atomic_composition(args[\"input_formulae\"])\nfp = open(args[\"output_filename\"], 'w')\n\n\nlabelling_list = []\nmax_results_length = 0\n\nfor record in records:\n results_dict = sf.calculate_labelling(record, N_dict, atomic_composition)\n for key, value in results_dict.items():\n if len(value) > max_results_length:\n max_results_length = len(value)\n labelling_list.append(Labelling(record.get_name(), results_dict))\n\nfp.write(\"Species, Labelling Source, Sample Name, Labelling %,\")\nfor i in range(max_results_length-1):\n fp.write(\"m\" + str(i) + \",\")\nfp.write(\"\\n\")\n\nfor label in labelling_list:\n species = label.get_species()\n label_dict = label.get_label_dict()\n names = label_dict.keys()\n names = sorted(names)\n i = 0\n for name in names:\n for key, value in label_dict.items():\n if name == key:\n fp.write(species + ',')\n if len(key.split(',')) == 2:\n fp.write(key.split(',')[1].strip('\"') + ',')\n fp.write(key.split(',')[0].strip('\"') + ',')\n else:\n fp.write(' ,')\n fp.write(key + ',')\n for val in value:\n fp.write(str(val) + ', ')\n fp.write('\\n')\nprint(\"Process complete; wrote out results.\")\n\n","sub_path":"proc.py","file_name":"proc.py","file_ext":"py","file_size_in_byte":2201,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"375625735","text":"#!/usr/bin/env python3\n#-*-coding:utf-8-*-\nimport getpass\nimport json\nimport os\nimport socket\nimport subprocess\nimport sys\nimport threading\nimport time\n\nimport mysql\nimport mysql.connector\n\nfrom Library.DBook import Database\nfrom Library.TBook import Tools\n\n\nclass Creator():\n def __init__(self) :\n self.MagicWord=self.getModuleData(\"localDbPassword\",\"Tobias\") \n\n def choiceList(self):\n \"\"\"SETUP THE FIRST MENU\"\"\"\n \n eachChoices = [\n \"Créer un bloc\",\n \"Créer une fonctionnalité\",\n \"Lancer une fonctionnalité\",\n ] #Choices the user can choose to move on\n\n execute = {\n \"1\" : \"self.createBloc()\",\n \"2\" : \"self.createFeature()\",\n \"3\" : \"self.feature()\"\n } #Action launched when the user chooses a number\n\n os.system(\"clear\")\n\n for element in range(eachChoices.__len__()): #Fetch choices\n \n print(f\"({element+1}) {eachChoices[element]} |\",end = \" \\n\") #Display the menu\n\n answer = input(\"----> Votre choix : \")\n\n for keys,values in execute.items() : # If the answer is correct, check what the answer does\n if answer is keys :\n exec(values)\n else : #If not, do while the answer is not a key\n while answer != keys :\n answer = input(\"----> Votre choix : \")\n if answer is keys :\n exec(values)\n\n def createBloc(self):\n\n os.system(\"clear\")\n\n self.nom = input(\"Nom du bloc : \")\n self.category = input(\"Catégorie du bloc [ Primaire | Secondaire | Script | multipleInput ]: \")\n self.command = input(\"Commande à Éxecuter: \\n > \")\n \n if self.category != \"Primaire\":\n self.pattern = input(\"Emplacement dans le pattern: \\n > \")\n else :\n self.pattern = 0\n\n Database(\"creator\",\"id\",\"name\",\"type\",\n \"command\",\"category\",\"pattern\",\n \"NULL\",f\"{self.nom}\",\"Bloc\",f\"{self.command}\",\n f\"{self.category}\",f\"{self.pattern}\").insertInDatabase()\n\n print(\"Bloc créer avec succès\")\n time.sleep(1)\n self.choiceList()\n # Tools().Notification(\"Block Successfully Created\")\n\n def fill(self):\n self.MagicWord=self.getModuleData(\"localDbPassword\",\"Tobias\")\n \n self.mydb = mysql.connector.connect(\n host=\"localhost\",\n user=\"root\",\n passwd=self.MagicWord, port=\"8889\",\n )\n\n self.primaryContent = []\n self.secondaryContent = []\n self.featureContent = []\n self.globalContent = []\n self.globalPrimaryItems = []\n self.Weird = []\n self.scriptContent = []\n self.multipleContent = []\n\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT DISTINCT(name) FROM creator WHERE category='Primaire' AND type='Bloc'\")\n\n for x in Command:\n for lettre in x :\n self.primaryContent.append(lettre)\n\n\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT DISTINCT(name) FROM creator WHERE category='Primaire' AND type='Bloc'\")\n\n for x in Command:\n for lettre in x :\n self.primaryContent.append(lettre)\n\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT DISTINCT(name) FROM creator WHERE category='Secondaire' AND type='Bloc'\")\n\n for x in Command:\n for lettre in x :\n self.secondaryContent.append(lettre)\n\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT DISTINCT(name) FROM creator WHERE category='Script' AND type='Bloc'\")\n\n for x in Command:\n for lettre in x :\n self.scriptContent.append(lettre)\n\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT DISTINCT(name) FROM creator\")\n\n for x in Command:\n for lettre in x :\n self.globalContent.append(lettre)\n\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT DISTINCT(name) FROM creator WHERE type='Feature'\")\n\n for x in Command:\n for lettre in x :\n self.featureContent.append(lettre)\n\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT DISTINCT(name) FROM creator WHERE category='multipleInputs'\")\n\n for x in Command:\n for lettre in x :\n self.multipleContent.append(lettre)\n\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT DISTINCT(name) FROM creator WHERE category='Primaire'\")\n\n for x in Command:\n for lettre in x :\n self.globalPrimaryItems.append(lettre)\n\n os.system(\"clear\")\n blocDisplayed = []\n\n for i in range(0,len(self.multipleContent)):\n if len(self.multipleContent) == 0 :\n print(\"Aucun bloc prenant plusieur points d'entrées\")\n else :\n print(self.multipleContent[i]+\"\\n\")\n blocDisplayed.append(self.multipleContent[i])\n print(\" \"+\"_\"*len(self.multipleContent[i]))\n print(\"|\"+self.multipleContent[i]+\"|\"+f\" Bloc à MultiInput | Command : [{self.getCommand(self.multipleContent[i])}]\")\n print(\" \"+\"-\"*len(self.multipleContent[i]))\n\n for i in range(0,len(self.primaryContent)):\n if len(self.primaryContent) == 0 :\n print(\"Aucun blocs Primaires\")\n else :\n print(\" \"+\"_\"*len(self.primaryContent[i]))\n blocDisplayed.append(self.primaryContent[i])\n print(\"|\"+self.primaryContent[i]+\"|\"+f\" Bloc Primaire | Command : [{self.getCommand(self.primaryContent[i])}]\")\n print(\" \"+\"-\"*len(self.primaryContent[i]))\n\n for i in range(0,len(self.secondaryContent)):\n if len(self.secondaryContent) == 0 :\n print(\"Aucun blocs Secondaires\")\n else :\n print(\" \"+\"_\"*len(self.secondaryContent[i]))\n blocDisplayed.append(self.secondaryContent[i])\n print(\"|\"+self.secondaryContent[i]+\"|\"+f\" Bloc Secondaire | Command : [{self.getCommand(self.secondaryContent[i])}]\")\n print(\" \"+\"-\"*len(self.secondaryContent[i]))\n\n for i in range(0,len(self.scriptContent)):\n if len(self.scriptContent) == 0 :\n print(\"Aucun blocs Scripts\")\n else :\n print(\" \"+\"_\"*len(self.scriptContent[i]))\n blocDisplayed.append(self.scriptContent[i])\n print(\"|\"+self.scriptContent[i]+\"|\"+f\" Bloc Scripts | Command : [{self.getCommand(self.scriptContent[i])}]\")\n print(\" \"+\"-\"*len(self.scriptContent[i]))\n\n if len(self.blocList) != 0 :\n print(f\"Vos blocs sélectionnés : {self.blocList}\")\n print(\"_______________________________________________________________\")\n nextAction = input(\"\\n(1) Utiliser un bloc \\n(2) Chercher par lettres \\n(3) Créer une fonctionnalité avec les blocs actuelles \\n(4) Supprimer un bloc de la pioche \\n(5) Retour \\n----> Votre choix : \")\n \n self.goodWords = []\n good = 1\n\n choices = [\"1\",\"2\",\"3\",\"4\",\"5\"]\n for y in range(len(choices)):\n if choices[y] == nextAction :\n good = 0\n\n if good != 0 :\n for i in range(len(blocDisplayed)):\n if nextAction == blocDisplayed[i]:\n self.blocList.append(nextAction)\n self.fill()\n else :\n if blocDisplayed[i].count(nextAction) > 0:\n self.goodWords.append(blocDisplayed[i])\n\n if len(self.goodWords) == 0:\n print(\"Votre bloc n'existe pas\")\n time.sleep(0.5)\n self.fill()\n else: \n\n for i in range(len(self.goodWords)):\n print(f\"Bloc correspondant à la description {self.goodWords[i]}\\n ------\")\n \n answer = input(\"Lequel est ce ? [nom/aucun] \\n ----> Votre choix : \")\n if answer == \"aucun \":\n self.fill()\n else : \n self.blocList.append(answer)\n self.fill()\n\n\n if nextAction == \"1\" :\n\n blocToSelect = input(\"Quel est son nom ou numéro ? \\n : \")\n right = 0\n for i in range(len(blocDisplayed)):\n if blocToSelect != blocDisplayed[i] :\n right += 1\n if right == len(blocDisplayed) :\n print(\"Votre bloc n'existe pas\")\n time.sleep(0.5)\n self.fill()\n else :\n self.blocList.append(blocToSelect)\n self.fill()\n\n elif nextAction == \"2\" :\n count = 0 \n blocToSearchFor = input(\"Quel est son nom ? \\n : \")\n for i in range(len(blocDisplayed)):\n if blocDisplayed[i].count(blocToSearchFor) > 0:\n count +=1 \n print(f\"Bloc correspondant à la description {blocDisplayed[i]}\\n ------\")\n answer = input(\"Lequel est ce ? [nom/aucun] \\n ----> Votre choix : \")\n if answer == \"aucun \":\n self.fill()\n else : \n self.blocList.append(answer)\n self.fill()\n \n if count == 0:\n print(\"Votre bloc n'existe pas\")\n self.fill()\n\n elif nextAction == \"3\":\n if len(self.blocList) != 0:\n self.save()\n self.feature()\n else :\n print(\"[!] Votre pioche est vide...\")\n time.sleep(0.5)\n self.fill()\n\n elif nextAction == \"4\":\n if len(self.blocList) != 0:\n if len(self.blocList) == 1 : \n self.blocList.remove(self.blocList[0])\n self.fill()\n \n name = input(\"Nom du bloc à retirer \\n : \")\n \n if len(self.blocList) >1 : \n self.blocList.remove(name)\n self.fill()\n else :\n print(\"[!] Votre pioche est vide...\")\n time.sleep(0.5)\n self.fill()\n\n elif nextAction == \"5\":\n os.system(\"clear\")\n self.choiceList()\n\n else :\n self.fill()\n\n def save(self):\n os.system(\"clear\")\n foncName = input(\"Nom de la fonctionnalité ? \\n : \")\n indexObjets = []\n nObjets = len(self.blocList)\n\n #Récupérer les éléments\n indexObjets = self.blocList\n\n if nObjets > 0:\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n\n Command.execute(f\"INSERT INTO creator (id , name , type , command , category,pattern) VALUES (NULL , '{foncName}' , 'Feature' , \\\"{indexObjets}\\\", 'Primaire',0)\")\n self.mydb.commit()\n # Tools().Notification(\"Feature Successfully Added\")\n os.system(\"clear\")\n self.choiceList()\n\n def createFeature(self):\n os.system(\"clear\")\n self.fill()\n\n def feature(self):\n self.MagicWord=self.getModuleData(\"localDbPassword\",\"Tobias\")\n self.featureContent = []\n self.mydb = mysql.connector.connect(\n host=\"localhost\",\n user=\"root\",\n passwd=self.MagicWord, port=\"8889\",\n )\n\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT DISTINCT(name) FROM creator WHERE type='Feature'\")\n\n for x in Command:\n for lettre in x :\n self.featureContent.append(lettre)\n\n os.system(\"clear\")\n print(f\"Liste des fonctionnalités : {self.featureContent}\")\n featureToExecute = input(\"Que voulez vous éxecuter ? \\n : \")\n\n pattern = []\n\n self.mydb = mysql.connector.connect(\n host=\"localhost\",\n user=\"root\",\n passwd=self.MagicWord, port=\"8889\",\n )\n\n #Déterminer les éléments de type Fonctionnalité\n\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n Command.execute(f\"SELECT command FROM creator WHERE name='{featureToExecute}' AND category !='Script'\")\n\n for x in Command:\n for lettre in x :\n pass\n\n lettre = str(lettre)\n lettre = lettre.strip(\"[]\")\n lettre = lettre.replace(\"',\",\"\")\n lettre = lettre.replace(\"'\",\"\")\n firstStageCommands = lettre.split()\n\n #Déterminer les commandes de ces éléments\n for i in range(0,len(firstStageCommands)):\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT command FROM creator WHERE name='\"+firstStageCommands[i]+\"' AND type='Bloc' AND category!='Script'\")\n for x in Command:\n for lettre in x :\n firstStageCommands[i] = lettre\n\n for i in range(0,len(firstStageCommands)):\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n Command.execute(f\"SELECT command FROM creator WHERE name='{firstStageCommands[i]}' AND type='Bloc' AND category='Script'\")\n for x in Command:\n for lettre in x :\n a = subprocess.getoutput(lettre)\n firstStageCommands[i] = a\n pattern.append(a)\n\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT pattern FROM creator WHERE command='\"+lettre+\"' AND type='Bloc' AND category='Script'\")\n for x in Command:\n for lettre in x:\n pattern.append(lettre)\n\n #Check for Pattern\n for i in range(0,len(firstStageCommands)):\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT command,pattern FROM creator WHERE command='\"+firstStageCommands[i]+\"' AND type='Bloc' AND category='Secondaire'\")\n for x in Command:\n for lettre in x :\n pattern.append(lettre)\n\n list_of_strings = [str(s) for s in firstStageCommands]\n joined_string = \" \".join(list_of_strings)\n string = joined_string.split()\n stringList = []\n for i in range(0,len(string)):\n stringList.append(string[i])\n stringList.append(\" \")\n workingUnits = []\n byOne = []\n #Détermine les secteurs ayant besoin d'une modification\n pos = []\n var_pos = 0\n for sector in stringList:\n num = sector.count(\"$\")\n if num > 0:\n workingUnits.append(sector)\n pos.append(var_pos)\n var_pos +=1\n else :\n var_pos +=1\n\n #Change le string en liste de caractères\n for elements in range(0,len(workingUnits)):\n elementString = workingUnits[elements]\n for caracter in range(0,len(elementString)):\n byOne.append(elementString[caracter])\n byOne.append(\" \")\n\n byOne = byOne[:-1]\n val = 0\n #Change $n par la bonne valeur du pattern\n for cell in range(0,len(byOne)):\n if byOne[cell] == \"$\":\n if int(byOne[cell+1]) == 1:\n val = -1\n elif int(byOne[cell+1]) == 2:\n val = 0\n else :\n val +=1\n\n byOne[cell+1] = pattern[int(byOne[cell+1])+val]\n\n final = \"\"\n for i in byOne:\n final = final+str(i)\n final = final.replace(\"$\",\"\")\n final = final.split()\n\n finalList = []\n for i in range(0,len(final)):\n finalList.append(final[i])\n\n for i in range(0,len(stringList)):\n for y in range(0,len(pattern)):\n if stringList[i] == pattern[y]:\n stringList[i] = \" \"\n\n for i in range(0,len(pos)):\n stringList[pos[i]] = finalList[i]\n\n full = stringList\n if stringList.count(\"X1\") > 0:\n #TROUVER LA BOUCLE\n position = 0\n indexGet = []\n\n for i in range (0,len(stringList)):\n if stringList[i] == \"for\":\n indexGet.append(position)\n position +=1\n elif stringList[i] == \"end\" :\n indexGet.append(position)\n position +=1\n else :\n position +=1\n\n #TROUVER LE COMMENCEMENT ET LA FIN\n doing = []\n i = 0\n while i != int(len(indexGet)):\n start = indexGet[i]\n end = indexGet[i+1]\n\n start = int(start)\n end = int(end)\n\n #LES RECUPERER\n condition = stringList[start:end]\n stage_1 = condition[0:9]\n stage_1[6] = stage_1[8]\n numberOT = stage_1[8]\n stage_1.pop(8)\n i = i+2\n condition = stringList[start:end]\n\n condition[6] = stage_1[6]\n condition.pop(8)\n condition.pop(8)\n\n\n #GET THE THINGS THAT NEED TO BE DONE\n do = condition[7:end]\n do = \"\".join(do)\n doing.append(do)\n\n inputs=[]\n #Déterminer les commandes de ces éléments\n\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n Command.execute(\"SELECT command FROM creator WHERE category='multipleInputs' AND type='Bloc'\")\n for x in Command:\n for lettre in x :\n for i in range(0,len(stringList)):\n if lettre in stringList[i]:\n inputs.append(lettre)\n if len(inputs) == 0:\n inputs.append(numberOT)\n\n numberOI = 0\n try :\n numberOT = int(numberOT)\n numberOI = numberOT\n except:\n numberOT = None\n numberOI = len(inputs)\n\n for i in range(0,numberOI):\n if numberOT is not None:\n a = numberOT\n a = str(a)\n else :\n proc = subprocess.Popen(['python3', inputs[i]], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)\n a = proc.communicate()[0]\n a = a.decode(\"utf-8\")\n a = str(a)\n a = a.strip(\"[\")\n a = a.strip(\"]\")\n a = a[:-2]\n\n a = a.replace(\"',\",\"\")\n a = a.replace(\"'\",\"\")\n a = a.split()\n i=0\n while i != len(doing):\n word = doing[i]\n word = word.split()\n for k in range(0,len(word)):\n print(word)\n if word[k] == \"X2\":\n for y in range(0,len(a)):\n word[k] = a[y]\n word = \" \".join(word)\n print(word)\n os.system(word)\n word = do.split()\n i +=len(doing)\n\n\n #A REFAIRE\n for i in range(indexGet[0],indexGet[-1]+1):\n full[i] = \"TODELETE\"\n\n\n good = []\n for i in range(0,len(full)):\n if full[i] != \"TODELETE\":\n good.append(full[i])\n\n commande = \"\".join(good)\n print(f\"La commande à éxecuter est : {commande}\")\n os.system(commande)\n else :\n commande = \"\".join(stringList)\n print(f\"La commande à éxecuter est : {commande}\")\n os.system(commande)\n\n def getCommand(self,blocName):\n Command = self.mydb.cursor()\n Command.execute(\"USE tobiasdb\")\n Command.execute(f\"SELECT command FROM creator WHERE name='{blocName}' AND type='Bloc'\")\n\n for x in Command:\n for lettre in x :\n return lettre\n\nclass Tobias(Creator):\n\n def __init__(self):\n self.blocList = []\n self.confs = self.getPaths('path','ConfigurationJson')\n self.networkB = self.getPaths('network','sysCommandsDirectory')\n self.user = os.environ[\"USER\"] #User's name\n\n\n # try :\n # open(f\".started.txt\")\n\n # except IOError:\n\n # self.fromZeroToHero() #Working\n\n threadLogin = threading.Thread(target=self.login())\n threadLogin.start()\n\n def fromZeroToHero(self):\n\n \"\"\"Install needed packages\"\"\"\n packages = [\n \"python-nmap\",\n \"PyQt5\",\n \"notify2\",\n \"django\",\n \"django-debug-toolbar\",\n \"gtts\",\n \"pyshark\",\n \"nginx\",\n \"speechRecognition\" ,\n \"mysql\",\n \"mysql-connector\",\n \"paramiko\",\n \"install --pre scapy[basic]\",\n \"mechanicalsoup\",\n \"beautifulsoup4\",\n \"git\",\n \"pandas\",\n \"unidecode\"\n ]\n\n for i in range (0,len(packages)):\n a = subprocess.getoutput(f\"which {packages[i]}\")\n if a != f\"/usr/bin/{packages[i]}\" :\n #print(f\"{packages[i]} is not installed\")\n os.system(f\"pip3 install {packages[i]}\")\n\n os.system(\"clear\")\n\n with open(f\"/Users/{self.user}/Terminal/.started.txt\",\"w\") as variable :\n variable.write(\"Started\")\n\n def myIp(self,toSearch):\n \"\"\"Give my Ip Address and Mask\"\"\"\n if toSearch == \"ip\":\n try :\n\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n s.connect((\"8.8.8.8\", 80))\n myIp=s.getsockname()[0]\n s.close()\n return myIp\n\n except OSError :\n\n myIp = subprocess.getoutput(\"ip a | egrep 'inet' | egrep 'brd' | awk '{print $2}' | sed -re 's/\\\\/..//g'|head -n1\")\n\n return myIp\n\n finally :\n\n osBasedIp = subprocess.getoutput(\"ip a | egrep 'inet' | egrep 'brd' | awk '{print $2}' | sed -re 's/\\\\/..//g' | head -n1\")\n\n if myIp != osBasedIp :\n raise Exception(\"IP Non concordante\")\n\n elif toSearch == \"mask\":\n\n mask = subprocess.getoutput(\"ip a | egrep \\\"inet\\\" | head -n3 | tail -1 | awk '{print $2}'\")\n return mask\n\n else :\n raise Exception(\"'ip' and 'mask' are the only parameters accepted\")\n\n def retour(self):\n question = input(\"\\nRevenir au menu principal ? [yes/no] \\n > \")\n if question == \"yes\":\n self.start()\n\n def initializeViolet(self):\n os.system(\"cd ./.Violet && ./Violet.py\")\n\n def start(self):\n\n os.system(\"clear\")\n print(f\"----------------- Bonjour {self.getModuleData('prenom')} -----------------\\n\")\n print(f\"[?] Violet : Computer Status --------\\n\")\n if self.myIp('ip') != None :\n print(f\"Network Access : YES \")\n else :\n print(f\"Network Access : NO \")\n\n print(f\"IP Address: {self.myIp('ip')}\")\n\n try :\n open(f\"/Users/{self.user}/Archetype/Tobi/Terminal/.Violet/Report.txt\")\n print(f\"Is Violet deployed : YES\\n\")\n print(\"Violet Report File Content: \\n\")\n with open(f\"/Users/{self.user}/Archetype/Tobi/Terminal/.Violet/Report.txt\",\"r\") as variable:\n print(variable.read()+\"\\n\")\n\n except IOError:\n\n print(f\"Is Violet deployed : NO \\n\")\n\n print(\"|(1) Outils # (2) Réseau # (3) Internet |\\n|(4) Stockage # (5) Serveur # (6) Configuration | \\n|(7) Créateur # (8) LoopSequence # (9) Deploy Violet |\\n\")\n self.chooseAction()\n\n def login(self):\n from Library import toHash\n print(\"--------------- Tobias Login Page : ---------------\\n\")\n username = input(\" Nom d'utilisateur : \")\n password = getpass.getpass(\" Mot de passe : \")\n if username == self.getModuleData(\"prenom\"):\n if toHash.HASH(password) == self.getModuleData(\"password\"):\n self.start()\n else :\n print(\"WRONG PASSWORD\")\n sys.exit(0)\n\n def creator(self):\n self.choiceList()\n\n def chooseAction(self):\n choice = input(\"----> Votre choix : \")\n if choice == \"1\":\n\n os.system(\"clear\")\n print(\"[?] Tobias : Onglet Outil ----\\n\")\n print(\"----- Bloc Note (1) | Handler (2) | Raw (3) -----\\n\")\n\n answer = input(\"----> Votre choix : \")\n if answer == \"1\":\n self.blocNote()\n self.retour()\n\n if answer == \"2\":\n self.handler()\n self.retour()\n\n if answer == \"3\":\n self.raw()\n self.retour()\n\n\n if choice == \"2\":\n os.system(\"clear\")\n print(\"[?] Tobias : Onglet Réseau ----\\n\")\n print(\"----- Page Reseau (1) | Paquet (2) \\n\")\n\n answer = input(\"----> Votre choix : \")\n if answer == \"1\":\n self.pageReseau()\n self.retour()\n\n if answer == \"2\":\n self.paquet()\n self.retour()\n\n\n if choice == \"4\":\n os.system(\"clear\")\n print(\"[?] Tobias : Onglet Stockage ----\\n\")\n print(\"----- Coffre Fort (1) | GetFromDb (2) | Archiver\\n\")\n\n answer = input(\"----> Votre choix : \")\n if answer == \"1\":\n self.coffreFort()\n self.retour()\n\n if answer == \"2\":\n self.getFromDb()\n self.retour()\n\n if answer == \"3\":\n self.Archives()\n self.retour()\n\n\n if choice == \"5\":\n os.system(\"clear\")\n print(\"[?] Tobias : Onglet Serveur ----\\n\")\n print(\"----- Page Serveur (1) | Transférer \\n\")\n\n answer = input(\"----> Votre choix : \")\n if answer == \"1\":\n self.pageServeur()\n self.retour()\n\n if answer == \"2\":\n self.transfert()\n self.retour()\n\n\n if choice == \"7\":\n os.system(\"clear\")\n print(\"[?] Tobias : Onglet Créateur ----\\n\")\n self.creator()\n # self.start()\n\n\n elif choice == \"8\" :\n threadLoopSequence = threading.Thread(target=self.loopSequence())\n threadLoopSequence.start()\n self.retour()\n\n elif choice == \"9\" :\n threadViolet = threading.Thread(target=self.initializeViolet())\n threadViolet.start()\n self.retour()\n\n def pageServeur(self):\n pass\n\n def transfert(self):\n pass\n\n def coffreFort(self):\n pass\n\n def getFromDb(self):\n pass\n\n def Archives(self):\n pass\n\n def pageReseau(self):\n pass\n\n def paquet(self):\n pass\n\n def blocNote(self):\n self.notesPath = self.getPaths('noteFile','Notes')\n\n with open(f\"{self.notesPath}\",\"r\") as variable :\n print(variable.read())\n\n def raw(self):\n pass\n\n def handler(self):\n pass\n\n def loopSequence(self):\n\n import time\n from multiprocessing import Process\n\n from Library.NBook import Network\n from Library.RawNetwork import Ally_Computers, internetProtocol\n from Security.Backbone import Backbone\n from Security.Riot import Security\n\n\n \"\"\" Execute every methods in order to make them properly available to the user \"\"\"\n\n print(\"Launching loopSequence\")\n\n\n #BackBone\n if self.getModuleData(\"ipScan\",\"Backbone\") == \"True\" :\n loopBackboneIpScan = threading.Thread(target=Backbone().innerPortScan())\n loopBackboneIpScan.start()\n\n\n if self.getModuleData(\"networkSpace\",\"Backbone\") == \"True\" :\n loopBackboneNetworkSpace = threading.Thread(target=Backbone().networkSpace())\n loopBackboneNetworkSpace.start()\n\n if self.getModuleData(\"allow/Deny access\",\"Backbone\") == \"True\" :\n loopBackbone = threading.Thread(target=Backbone().Etapes_de_Fonctionnement())\n loopBackbone.start()\n\n # Riot\n if self.getModuleData(\"authorized_keys\",\"Riot\") == \"True\" :\n loopRiot = threading.Thread(target=Security().autorized_keysCheck())\n loopRiot.start()\n\n # if self.getModuleData(\"crontabCheck\",\"Riot\") == \"True\" :\n # print(\"starting\")\n # loopCrontab = threading.Thread(target=Security().crontabCheck())\n # loopCrontab.start()\n\n if self.getModuleData(\"connexions\",\"Riot\") == \"True\" :\n loopRiot = threading.Thread(target=Security().connexion())\n loopRiot.start()\n\n if self.getModuleData(\"processus\",\"Riot\") == \"True\" :\n loopRiotProc = threading.Thread(target=Security().Processus())\n loopRiotProc.start()\n\n #RawNetwork\n # if self.getModuleData(\"AllyComputer\",\"General\") == \"True\" :\n # self.threads(Ally_Computers().Main())\n\n if self.getModuleData(\"Internet Protocol\",\"General\") == \"True\" :\n loopRaw = threading.Thread(target=internetProtocol().Main())\n loopRaw.start()\n\n def getModuleData(self,searchingFor,fieldName='user'):\n import json\n with open(f\"{self.confs}\",\"r\") as config :\n content = json.load(config)\n\n for parameters in content['Configurations'] :\n for keys,values in parameters[fieldName].items() :\n if searchingFor == keys :\n return values\n\n def getPaths(self,searchingFor,fieldName='Archives'):\n\n with open(f\"Settings/Paths/filesPaths.json\",\"r\") as variable:\n content = json.load(variable)\n\n for parameters in content['Paths'] :\n for keys,values in parameters[fieldName].items() :\n if searchingFor == keys :\n return values\n\n#Tobias Main Task\nif __name__ == \"__main__\":\n Tobias()\n # creatorDebug = Creator()\n\n\n\n\n","sub_path":"Terminal/Tobias.py","file_name":"Tobias.py","file_ext":"py","file_size_in_byte":30935,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"367247496","text":"import os\n\nimport torch\nfrom torch.utils.data import Dataset as dataset\nimport SimpleITK as sitk\nimport numpy as np\n\n\nclass Dataset(dataset):\n def __init__(self, CT_dir, GT_dir):\n\n self.CT_list = list(map(lambda x: os.path.join(CT_dir, x), os.listdir(CT_dir)))\n self.GT_list = list(map(lambda x: os.path.join(GT_dir, x), os.listdir(GT_dir)))\n\n\n def __getitem__(self, index):\n\n CT_path = self.CT_list[index]\n GT_path = self.GT_list[index]\n\n # 将CT和金标准读入到内存中\n CT = sitk.ReadImage(CT_path)\n GT = sitk.ReadImage(GT_path)\n\n CT_nd = sitk.GetArrayFromImage(CT)\n GT_nd = sitk.GetArrayFromImage(GT)\n\n if len(CT_nd.shape) == 2:\n CT_nd = np.expand_dims(CT_nd, axis=2)\n GT_nd=np.expand_dims(GT_nd,axis=2)\n # HWC to CHW\n CT = CT_nd.transpose((2, 0, 1))\n GT=GT_nd.transpose((2,0,1))\n\n # 处理完毕,将array转换为tensor\n CT_array = torch.from_numpy(CT).float()\n GT_array = torch.from_numpy(GT).float().squeeze(0)\n\n return CT_array, GT_array\n\n def __len__(self):\n\n return len(self.CT_list)\n\nCT_dir = \"/share/xianqim/UNet/data/img_process_600\"\nGT_dir = \"/share/xianqim/UNet/data/label_600\"\n\nData2d = Dataset(CT_dir, GT_dir)\n\n\n","sub_path":"dataloaders/Dataset.py","file_name":"Dataset.py","file_ext":"py","file_size_in_byte":1303,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"284921269","text":"import random, sys\r\n\r\ndef roll():\r\n return random.randint(1, 20)\r\n\r\nclass character:\r\n def __init__(self,name,age,home,intel,char,agil,stre,luck,type, life, batt):\r\n self.name = name\r\n self.age = age\r\n self.home = home\r\n self.intel = intel\r\n self.char = char\r\n self.agil = agil\r\n self.stre = stre\r\n self.luck = luck\r\n self.type = type\r\n self.life = life\r\n self.batt = batt\r\n\r\ndef translate(word):\r\n lordex = 0\r\n lord = \"\"\r\n for letter in word:\r\n if random.randint(1,5) > 3:\r\n if letter.lower() in \"aiu\":\r\n letter = \"o\"\r\n lord += letter\r\n elif letter.lower() in \"eoy\":\r\n letter = \"i\"\r\n lord += letter\r\n elif letter.lower() in \"h\":\r\n letter = \"u\"\r\n lord += letter\r\n elif letter.lower() in \"spt\":\r\n letter = \"z\"\r\n lord += letter\r\n elif letter.lower() in \"mlt\":\r\n letter = \"n\"\r\n lord += letter\r\n elif letter.lower() in \"djz\":\r\n letter = \"ch\"\r\n lord += letter\r\n elif letter.lower() in \"nv\":\r\n letter = \"m\"\r\n lord += letter\r\n else:\r\n lord += letter\r\n # print(letter)\r\n return lord\r\n\r\ndef fight(good, bad):\r\n atk = good.stre+random.randint(-3,7)\r\n if atk < 0:\r\n atk = 0\r\n bad.life = bad.life - atk\r\n if bad.life < 1:\r\n print(\"\\nYou caused \" + str(atk) + \" damage...\\n\" + bad.name + \" has been destroyed.\\n\")\r\n return \"break\"\r\n else:\r\n print(\"\\nYour attack did \" + str(atk) + \" damage.\")\r\n print(bad.name + \" is still alive with \" + str(bad.life) +\r\n \" life points remaining.\")\r\n atk = bad.stre + random.randint(-3, 5)\r\n if atk < 0:\r\n atk = 0\r\n good.life = good.life - atk\r\n if good.life > 0:\r\n print(\"\\n\" + bad.name + \" attacked you and caused \" + str(atk) + \" damage...\\nYou have \" + str(good.life) + \" remaining.\\n\")\r\n else:\r\n print(\"\\n\" + bad.name + \" attacked you with \" + str(atk) + \" damage...\\nYou died.\\n\\n\")\r\n sys.exit()\r\n\r\ndef stats(npc):\r\n return (\"name: \" + npc.name + \"\\nhome town: \" + npc.home + \"\\nage: \" + str(npc.age) + \"\\ntype: \" + str(npc.type) +\r\n \"\\n\\nintelligence: \" + str(npc.intel) + \"\\ncharisma: \" + str(npc.char) +\r\n \"\\nagillity: \" + str(npc.agil) + \"\\nstrength: \" + str(npc.stre) +\r\n \"\\nLife points: \" + str(npc.life) + \"\\n\")\r\n\r\ndef action(hero,npc):\r\n choice = input(\"what do you want to do: \").lower()\r\n if \"fight\" in choice:\r\n hero.batt = True\r\n if fight(hero, npc) == \"break\":\r\n return \"break\"\r\n hero.batt = True\r\n elif \"leave\" in choice:\r\n if hero.batt == True:\r\n if roll() + hero.luck > npc.life:\r\n print(\"You've succesfully escaped.\")\r\n return \"break\"\r\n else:\r\n print(\"\\nYou can't do that now.\")\r\n atk = npc.stre + random.randint(-3, 5)\r\n if atk < 0:\r\n atk = 0\r\n hero.life = hero.life - atk\r\n if hero.life > 0:\r\n print(npc.name + \" attacked you and caused \" + str(atk) + \" damage...\\nYou have \" + str(\r\n hero.life) + \" remaining.\\n\")\r\n else:\r\n print(npc.name + \" attacked you with \" + str(atk) + \" damage...\\nYou died.\")\r\n return \"break\"\r\n sys.exit()\r\n else:\r\n print(\"You walk away\")\r\n return \"break\"\r\n elif \"stat\" in choice:\r\n print(stats(npc))\r\n elif \"talk\" in choice:\r\n if hero.batt == False:\r\n if hero.char >= npc.char:\r\n if npc.talked == False:\r\n talk_list = open(\"/Users/Axyl Brosseau/PycharmProjects/role playing adventure/\" + npc.type + \"_dialogue\", \"r\")\r\n speech = \"My name is, \" + npc.name.title() + \". \" + random.choice(list(talk_list)).lower().capitalize().replace(\"\\n\", \"\")\r\n talk_list.close()\r\n print(npc.name + \" says, \\\"\" + speech + \"\\\"\")\r\n npc.talked = True\r\n else:\r\n talk_list = open( \"/Users/Axyl Brosseau/PycharmProjects/role playing adventure/\" + npc.type + \"_dialogue\", \"r\")\r\n speech = random.choice(list(talk_list)).lower().capitalize().replace(\"\\n\", \"\")\r\n talk_list.close()\r\n print(npc.name + \" says, \\\"\" + speech + \"\\\"\")\r\n else:\r\n talk_list = open(\"/Users/Axyl Brosseau/PycharmProjects/role playing adventure/\" + npc.type + \"_bad_dialogue\", \"r\")\r\n speech = random.choice(list(talk_list)).lower().capitalize().replace(\"\\n\", \"\")\r\n talk_list.close()\r\n print(npc.name + \" says, \\\"\" + speech + \"\\\"\")\r\n else:\r\n print(\"It's too late for words!\")\r\n\r\ndef enemy():\r\n\r\n name_list = open(\"/Users/Axyl Brosseau/PycharmProjects/role playing adventure/names\", \"r\")\r\n name = translate(random.choice(list(name_list))).lower().title()\r\n name_list.close()\r\n\r\n if random.randint(1,35) == 1:\r\n age = random.randint(3,200)\r\n elif random.randint(1,12) == 1:\r\n age = random.randint(6,105)\r\n else:\r\n age = random.randint(10,65)\r\n\r\n skillpts = 20\r\n try:\r\n if random.randint(0,15) < 6:\r\n intel = random.randint(0, 12)\r\n skillpts = skillpts - intel\r\n char = random.randint(0, 7)\r\n skillpts = skillpts - char\r\n agil = random.randint(0, 5)\r\n skillpts = skillpts - agil\r\n stre = random.randint(0, skillpts)\r\n skillpts = skillpts - stre\r\n luck = skillpts\r\n elif random.randint(0,15) > 9:\r\n intel = random.randint(0, 5)\r\n skillpts = skillpts - intel\r\n char = random.randint(0, 6)\r\n skillpts = skillpts - char\r\n agil = random.randint(0, 7)\r\n skillpts = skillpts - agil\r\n stre = random.randint(0, skillpts)\r\n skillpts = skillpts - stre\r\n luck = skillpts\r\n else:\r\n intel = random.randint(2, 6)\r\n skillpts = skillpts - intel\r\n char = random.randint(2, 6)\r\n skillpts = skillpts - char\r\n agil = random.randint(2, 6)\r\n skillpts = skillpts - agil\r\n stre = random.randint(0, skillpts)\r\n skillpts = skillpts - stre\r\n luck = skillpts\r\n except:\r\n intel = random.randint(3, 7)\r\n char = random.randint(2, 6)\r\n agil = random.randint(2, 7)\r\n stre = random.randint(2, 8)\r\n luck = random.randint(2, 8)\r\n type_try = random.randint(1, 7)\r\n if type_try == 1:\r\n type = \"wizard\"\r\n intel += 4\r\n luck += 2\r\n life = 20+stre+(luck%4)\r\n elif type_try == 2:\r\n type = \"thief\"\r\n agil += 3\r\n luck += 2\r\n life = 17+stre+(luck%4)\r\n elif type_try == 3:\r\n type = \"warrior\"\r\n stre += 2\r\n char += 2\r\n life = 23+stre+(luck%4)\r\n elif type_try == 4:\r\n type = \"orc\"\r\n stre += 5\r\n life = 25+stre+(luck%4)\r\n else:\r\n type = \"villager\"\r\n stre = stre%4\r\n life = 5+stre+(luck%4)\r\n\r\n\r\n town_list = open(\"/Users/Axyl Brosseau/PycharmProjects/role playing adventure/towns\", \"r\")\r\n town = translate(random.choice(list(town_list))).lower().title()\r\n town_list.close()\r\n\r\n if age < 26:\r\n agil += 3\r\n elif age < 40:\r\n stre += 3\r\n elif age < 50:\r\n char += 3\r\n elif age < 70:\r\n intel += 3\r\n else:\r\n luck += 3\r\n talked = False\r\n new_bad = [name, age, town, intel, char, agil, stre, luck, type, life, talked]\r\n return new_bad","sub_path":"characters.py","file_name":"characters.py","file_ext":"py","file_size_in_byte":8045,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"370065284","text":"import cv2\nimport numpy as np\nimport copy\nimport math\n\n\ndef show_img(str, img):\n cv2.imshow(str, img)\n cv2.waitKey(0)\n\n\ndef line_angle(line):\n x1, y1 = line[0:2]\n x2, y2 = line[2:]\n\n if y1 - y2 == 0:\n return 0\n\n if x1 - x2 == 0:\n return 90\n\n angle = np.rad2deg(np.arctan2(y2 - y1, x2 - x1))\n\n # make it in multiplcation of 5\n angle = 5 * (int(angle / 5))\n return angle\n\n\ndef filter_angles(lines, angles):\n lines_filtered = [line for line in lines if line_angle(line) in angles]\n return lines_filtered\n\n\ndef mark_traffic_signs(image_in, signs_dict):\n img = image_in.copy()\n for sign_name, center in signs_dict.items():\n x, y = int(center[0]), int(center[1])\n text = \"(({},{}),'{}')\".format(x, y, sign_name)\n xs, ys = image_in.shape[0], image_in.shape[1]\n orgx = x + 50 if x + 50 + 200 < xs else x - 200\n orgy = y\n cv2.putText(img, text, (orgx, orgy), cv2.FONT_HERSHEY_SIMPLEX, .5, (0, 0, 0))\n cv2.putText(img, \"*\", (x - 8, y + 9), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), thickness=2)\n return img\n\n\ndef draw_tl_center(image_in, center, state):\n img = image_in.copy()\n x, y = int(center[0]), int(center[1])\n text = \"(({},{}),'{}')\".format(x, y, state)\n xs, ys = image_in.shape[0], image_in.shape[1]\n orgx = x + 50 if x + 50 + 200 < xs else x - 200\n orgy = y\n cv2.putText(img, text, (orgx, orgy), cv2.FONT_HERSHEY_SIMPLEX, .5, (0, 0, 0))\n cv2.putText(img, \"*\", (x - 8, y + 9), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), thickness=2)\n return img\n\n\ndef get_midpt(line):\n x1, y1 = line[0], line[1]\n x2, y2 = line[2], line[3]\n return [(x1 / 2) + (x2 / 2), (y1 / 2) + (y2 / 2)]\n\n\ndef get_length(line):\n x1, y1 = line[0], line[1]\n x2, y2 = line[2], line[3]\n length = math.sqrt(abs(x1 - x2) ** 2 + abs(y1 - y2) ** 2)\n return length\n\n\ndef get_centers(lines):\n m = 0\n l1m = None\n l2m = None\n for l1 in lines:\n for l2 in lines:\n if tuple(l1) == tuple(l2):\n continue\n l1c = get_midpt(l1)\n l2c = get_midpt(l2)\n distance = get_length([l1c[0], l1c[1], l2c[0], l2c[1]])\n if m < distance:\n m = distance\n l1m = l1\n l2m = l2\n l1c = get_midpt(l1m)\n l2c = get_midpt(l2m)\n center = get_midpt([l1c[0], l1c[1], l2c[0], l2c[1]])\n return center\n\n\ndef get_square_centers(lines):\n # filter lines with +45 and -45 angles\n side_45 = filter_angles(lines, [45])\n side__45 = filter_angles(lines, [-45])\n\n # list of set of square lines\n squares = []\n for line1 in side_45:\n l1c = get_midpt(line1)\n for line2 in side__45:\n l2c = get_midpt(line2)\n \"\"\"\n if both lines are almost same length and \n either their mid point x values or mid point y values are approximately the same\n then they belong to the same set of square\n \"\"\"\n if ((abs(get_length(line1) - get_length(line2)) < 3)\n and ((abs(l1c[0] - l2c[0]) < 3) or (abs(l1c[1] - l2c[1]) < 3))):\n placed = False\n l1 = tuple(line1)\n l2 = tuple(line2)\n if len(squares) != 0:\n for square in squares:\n if l1 in square or l2 in square:\n square.add(l1)\n square.add(l2)\n placed = True\n if not placed:\n square = set({l1, l2})\n squares.append(square)\n\n print(squares)\n # for square in squares:\n # for line in square:\n # #cv2.line(sign_draw, (line[0], line[1]), (line[2], line[3]), (0, 0, 255), 2)\n # x, y = line[0],line[1]\n # cv2.putText(sign_draw, \"*\", (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255),\n # thickness=2)\n x, y = (749.75, 349.75)\n cv2.putText(sign_draw, \"*\", (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255),\n thickness=2)\n show_img(\"lines\", sign_draw)\n\n centers = []\n\n for square in squares:\n \"\"\"\n get lines of same angle, get their mid points and the midpoint of these midpoints\n from the two groups, find mid-mid point\n \"\"\"\n\n # lines = [np.array(line) for line in list(square)]\n # side45 = filter_angles(lines, [45])\n # side_45 = filter_angles(lines, [-45])\n #\n # c = np.array([get_centers(side45), get_centers(side_45)])\n #\n # center = np.mean(c, axis=1)\n # centers.append(center)\n return centers\n\n\ndef proximal_pts(p1, p2, threshold):\n if (abs(p1[0] - p2[0]) < threshold and abs(p1[1] - p2[1]) < threshold):\n return True\n else:\n return False\n\n\ndef get_diamonds(lines):\n l45 = filter_angles(lines, [45])\n l_45 = filter_angles(lines, [-45])\n\n diamonds = []\n\n for l1 in l45:\n common = []\n p1 = (l1[0], l1[1])\n p2 = (l1[2], l1[3])\n for l2 in l_45:\n p3 = (l2[0], l2[1])\n p4 = (l2[2], l2[3])\n for pin1 in [p1, p2]:\n for pin2 in [p3, p4]:\n # print(\"{} {} {} {}\".format(pin1, pin2, abs(pin1[0]-pin2[0]), abs(pin1[1]-pin2[1])))\n if proximal_pts(pin1, pin2, 10):\n common = [(pin1[0] + pin2[0]) / 2, (pin1[1] + pin2[1]) / 2]\n placed = False\n l1t = tuple(l1)\n l2t = tuple(l2)\n ct = tuple(common)\n for diamond in diamonds:\n if l1t in diamond[\"lines\"] or l2t in diamond[\"lines\"]:\n diamond[\"lines\"].add(l1t)\n diamond[\"lines\"].add(l2t)\n diamond[\"common\"].add(ct)\n placed = True\n if not placed:\n diamond = {\"lines\": set([l1t, l2t]), \"common\": set({ct})}\n diamonds.append(diamond)\n return diamonds\n\n\ndef draw_circles(circles, img):\n for i in circles[0, :]:\n center = (i[0], i[1])\n # circle center\n cv2.circle(img, center, 1, (0, 100, 100), 3)\n # circle outline\n radius = i[2]\n cv2.circle(img, center, radius, (255, 0, 255), 3)\n\n\ndef pt_in_circle(c, r, p):\n if (c[0] - r < p[0] < c[0] + r) and (c[1] - r < p[1] < c[1] + r):\n return True\n return False\n\n\ndef get_lines_in_circles(lines, circles):\n linesIn = []\n for circle in circles:\n c = (circle[0], circle[1])\n r = circle[2]\n for line in lines:\n p1 = (line[0], line[1])\n p2 = (line[2], line[3])\n if pt_in_circle(c, r, p1) and pt_in_circle(c, r, p2):\n placed = False\n for l_pair in linesIn:\n if tuple(circle) == l_pair[\"cir\"]:\n l_pair[\"l\"].add(tuple(line))\n placed = True\n if not placed:\n linesIn.append({\"cir\": tuple(circle), \"l\": set({tuple(line)})})\n return linesIn\n\n\nsign = cv2.imread(\"input_images\\\\scene_stp_1.png\")\n# sign = cv2.imread(\"input_images\\\\scene_wrng_1.png\")\n\n# sign = cv2.imread(\"input_images\\\\scene_all_signs.png\")\nsign = cv2.imread(\"input_images\\\\test_images\\\\stop_249_149_blank.png\")\nsign_draw = copy.deepcopy(sign)\n# show_img(\"sample tl\", tl)\n\n# gray_img = cv2.cvtColor(sign, cv2.COLOR_BGR2GRAY)\n# blured = cv2.GaussianBlur(gray_img, (5,5),2)\n# edges_blur = cv2.Canny(blured, 5, 10)\n# show_img(\"blurred edges\", edges_blur)\n\n# check how canny edge filter shows up)\nedges = cv2.Canny(sign_draw, 100, 200)\nshow_img(\"edges of tl\", edges)\n\nlines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 36, threshold=20, minLineLength=5, maxLineGap=5)\nlines = lines.reshape(lines.shape[0], lines.shape[2])\n\nlengths = [5 * int(get_length(line) / 5) for line in lines]\nprint(sorted(lengths))\n\nlines = np.array([lines[i] for i in range(len(lines)) if 25 <= lengths[i] <= 40])\nprint(len(lines))\n\ni = 0\nfor line in lines:\n if 40 >= 5 * int(get_length(line) / 5) >= 25 or lengths[i] == 995:\n i += 1\n cv2.line(sign_draw, (line[0], line[1]), (line[2], line[3]), (0, 0, 0), 2)\nshow_img(\"\", sign_draw)\n\nlinesS = filter_angles(lines, [0, 90])\nlinesA = filter_angles(lines, [45, -45])\n\nlinesV = filter_angles(lines, [90])\nlinesH = filter_angles(lines, [0])\nlinesP = filter_angles(lines, [45])\nlinesN = filter_angles(lines, [-45])\n\nprint(linesS)\nprint(linesA)\n\noctagons = []\n\nfor l1 in linesS:\n p1 = (l1[0], l1[1])\n p2 = (l1[2], l1[3])\n for l2 in linesA:\n p3 = (l2[0], l2[1])\n p4 = (l2[2], l2[3])\n for pin1 in [p1, p2]:\n for pin2 in [p3, p4]:\n # print(\"{} {} {} {}\".format(pin1, pin2, abs(pin1[0]-pin2[0]), abs(pin1[1]-pin2[1])))\n if proximal_pts(pin1, pin2, 10):\n common = [int((pin1[0] + pin2[0]) / 2), int((pin1[1] + pin2[1]) / 2)]\n placed = False\n l1t = tuple(l1)\n l2t = tuple(l2)\n ct = tuple(common)\n l1end = p2 if pin1 == p1 else p1\n l2end = p4 if pin2 == p3 else p3\n for octagon in octagons:\n if l1t in octagon[\"lines\"] or l2t in octagon[\"lines\"]:\n octagon[\"lines\"].add(l1t)\n octagon[\"lines\"].add(l2t)\n octagon[\"common\"].add(ct)\n octagon[\"common\"].add(l1end)\n octagon[\"common\"].add(l2end)\n placed = True\n if not placed:\n octagon = {\"lines\": set([l1t, l2t]), \"common\": set({ct, l1end, l2end})}\n octagons.append(octagon)\n\nfo = {\"lines\": set(), \"common\": set()}\nfor o in octagons:\n if len(o[\"lines\"]) >= 3:\n fo[\"lines\"] = fo[\"lines\"].union(o[\"lines\"])\n fo[\"common\"] = fo[\"common\"].union(o[\"common\"])\nprint(len(fo[\"lines\"]))\n\npoints = list(fo[\"common\"])\npd = list([0 for i in range(len(fo[\"common\"]))])\n\nfor i in range(len(points)):\n distances = [proximal_pts(points[i], pt, 15) for pt in points]\n distances[i] = False\n cx = points[i][0]\n cy = points[i][1]\n cx = np.mean([points[i][0] for i in range(len(points)) if distances[i]])\n for j in range(i + 1, len(distances)):\n if pd[j] == 0 and distances[j]:\n pd[j] = 1\nprint(pd)\n\ncenterx = int(np.mean([points[i][0] for i in range(len(points)) if pd[i] == 0]))\ncentery = int(np.mean([points[i][1] for i in range(len(points)) if pd[i] == 0]))\n\narea = sign_draw[centery - 5:centery + 5, centerx - 5:centerx + 5]\nred = np.mean(area[:, :, 2])\ngreen = np.mean(area[:, :, 1])\nblue = np.mean(area[:, :, 0])\nif red > 200:\n print(\"finally!!\")\n\ncv2.putText(sign_draw, \"*\", (int(centerx), int(centery)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), thickness=2)\n\nprint(points)\nfor i in range(len(points)):\n if pd[i] == 0:\n cv2.putText(sign_draw, \"*\", (int(points[i][0]), int(points[i][1])), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0),\n thickness=2)\nshow_img(\"\", sign_draw)\n\n\"\"\"\nfor each point, calculate the distance with each point in the array \nremove the points from array which have distance with the point < 15\n\"\"\"\n\n# for o in octagons:\n# print(\"lines: {} \\n common: {}\".format(o[\"lines\"], o[\"common\"]))\n# for point in o[\"common\"]:\n# cv2.putText(sign_draw, \"*\", (int(point[0]), int(point[1])), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0),\n# thickness=2)\n# show_img(\"\",sign_draw)\n\n# oct2 = copy.deepcopy(octagons)\n\n\n# final_octagons = []\n# inter = [set({}) for i in range(len(octagons))]\n#\n# for i in range(len(octagons)):\n# for j in range(i+1, len(octagons)):\n# if octagons[i][\"lines\"].intersection(octagons[j][\"lines\"]) != set():\n# inter[i].add(j)\n#\n# oc = copy.deepcopy(octagons)\n# print(oc)\n# for i in range(len(inter)-1, -1, -1):\n# new_set = octagons[i]\n# dependents = list(inter[i])\n# dependents.sort()\n# for j in range(len(dependents)-1,-1,-1):\n# if dependents[j] >= len(octagons):\n# continue\n# old = octagons[dependents[j]]\n# new_set[\"lines\"] = new_set[\"lines\"].union(old[\"lines\"])\n# new_set[\"common\"] = new_set[\"common\"].union(old[\"common\"])\n# octagons.pop(j)\n# octagons[i] = new_set\n# final_octagons.append(new_set)\n#\n#\n# for o in octagons:\n# print(\"lines: {} \\n common: {}\".format(o[\"lines\"], o[\"common\"]))\n# for point in o[\"common\"]:\n# cv2.putText(sign_draw, \"*\", (int(point[0]), int(point[1])), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0),\n# thickness=2)\n# show_img(\"\",sign_draw)\n\n\n# for o1 in octagons:\n# for o2 in octagons:\n# if o1[\"lines\"].intersection(o2[\"lines\"]) != set():\n#\n\n# print(len(lines))\n#\n# i=0\n# for line in lines:\n# if 40 >= 5*int(get_length(line)/5) >= 25:\n# i+=1\n# cv2.line(sign_draw, (line[0], line[1]), (line[2], line[3]), (0,0,0), 2)\n# show_img(\"\",sign_draw)\n\n\n\"\"\"\n# for line in lines:\n# cv2.line(sign_draw, (line[0], line[1]), (line[2], line[3]), (0,0,0), 2)\n\nlinesS = filter_angles(lines, [0,90])\nlinesA = filter_angles(lines, [45, -45])\n\nlinesV = filter_angles(lines, [90])\nlinesH = filter_angles(lines, [0])\nlinesP = filter_angles(lines, [45])\nlinesN = filter_angles(lines, [-45])\n\n# print(linesV)\n# print(linesH)\n# print(linesP)\n# print(linesN)\n\noctagons = []\n\nfor lV in linesV:\n p1 = (lV[0], lV[1])\n p2 = (lV[2], lV[3])\n for lP in linesP:\n p3 = (lP[0], lP[1])\n p4 = (lP[2], lP[3])\n if abs(get_length(lV) - get_length(lP)) <10:\n for lH in linesH:\n p5 = (lH[0], lH[1])\n p6 = (lH[2], lH[3])\n if abs(get_length(lH) - get_length(lP)) < 10:\n for pin1 in [p1, p2]:\n for pin2 in [p3,p4]:\n for pin3 in [p5,p6]:\n cv2.line(sign_draw, pin1, pin2, (0,0,0), 2)\n cv2.line(sign_draw, pin2, pin3, (0, 0, 0), 2)\n show_img(\"\", sign_draw)\n if proximal_pts(pin1, pin2, 10) and abs(get_length(lV) - get_length(lP)) <10:\n print(\"I am V- P \")\n common = [int((pin1[0] + pin2[0]) / 2), int((pin1[1] + pin2[1]) / 2)]\n placed = False\n lVt = tuple(lV)\n lPt = tuple(lP)\n ct = tuple(common)\n for o in octagons:\n if lVt in o[\"lines\"] or lPt in o[\"lines\"]:\n o[\"lines\"].add(lVt)\n o[\"lines\"].add(lPt)\n o[\"common\"].add(ct)\n placed = True\n if not placed:\n o = {\"lines\":set({lVt, lPt}), \"common\":set({ct})}\n if proximal_pts(pin2, pin3, 10) and abs(get_length(lH) - get_length(lP)) <10:\n print(\"I am P - H\")\n common = [int((pin2[0] + pin3[0]) / 2), int((pin2[1] + pin3[1]) / 2)]\n placed = False\n lHt = tuple(lH)\n lPt = tuple(lP)\n ct = tuple(common)\n for o in octagons:\n if lHt in o[\"lines\"] or lPt in o[\"lines\"]:\n o[\"lines\"].add(lHt)\n o[\"lines\"].add(lPt)\n o[\"common\"].add(ct)\n placed = True\n if not placed:\n o = {\"lines\":set({lHt, lPt}), \"common\":set({ct})}\n octagons.append(o)\n\n #points of lH and lV will be the last ones.\n\n\"\"\"\n\n# for l1 in linesS:\n# p1 = (l1[0], l1[1])\n# p2 = (l1[2], l1[3])\n# for l2 in linesA:\n# p3 = (l2[0], l2[1])\n# p4 = (l2[2], l2[3])\n# for pin1 in [p1, p2]:\n# for pin2 in [p3, p4]:\n# # print(\"{} {} {} {}\".format(pin1, pin2, abs(pin1[0]-pin2[0]), abs(pin1[1]-pin2[1])))\n# if proximal_pts(pin1, pin2, 10) and abs(get_length(l1)-get_length(l2)) < 10:\n# common = [(pin1[0] + pin2[0]) / 2, (pin1[1] + pin2[1]) / 2]\n# placed = False\n# l1t = tuple(l1)\n# l2t = tuple(l2)\n# ct = tuple(common)\n# for octagon in octagons:\n# if l1t in octagon[\"lines\"] or l2t in octagon[\"lines\"]:\n# octagon[\"lines\"].add(l1t)\n# octagon[\"lines\"].add(l2t)\n# octagon[\"common\"].add(ct)\n# placed = True\n# if not placed:\n# octagon = {\"lines\": set([l1t, l2t]), \"common\": set({ct})}\n# octagons.append(octagon)\n\"\"\"\nprint(len(octagons))\n\nfor o in octagons:\n print(\"lines: {} \\n common: {}\".format(o[\"lines\"], o[\"common\"]))\n for point in o[\"common\"]:\n cv2.putText(sign_draw, \"*\", (int(point[0]), int(point[1])), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0),\n thickness=2)\n show_img(\"\",sign_draw)\n\nFor Do Not Enter signs\ncircles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 20,\n param1=15, param2=20,\n minRadius=5, maxRadius=50)\nif circles is None: exit() # should become return 0, 0\n\ncircles = np.uint16(np.around(circles))\ncshape = circles.shape\ncenters = circles.reshape(cshape[1], cshape[2])\n\n\n# will get circles with lines inside of it\nlinesIn = get_lines_in_circles(lines, centers)\n\nfor l_pair in linesIn:\n circle = np.array(l_pair[\"cir\"])\n r = (circle[2] / 2).astype(int)\n red = np.mean(sign_draw[circle[1] - r:circle[1] + r,\n circle[0] - r:circle[0] + r,\n 2])\n # check circle color\n if red > 200:\n c = (circle[0], circle[1])\n area = sign_draw[c[1]-5:c[1]+5, c[0]-5:c[0]+5]\n red = np.mean(area[:, :, 2])\n green = np.mean(area[:, :, 1])\n blue = np.mean(area[:, :, 0])\n if red > 200 and blue > 200 and green > 200:\n print(\"{} {}\".format(int(c[0]), int(c[1])))\n cv2.putText(sign_draw, \"*\", (int(c[0]), int(c[1])), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255),\n thickness=2)\n show_img(\"sign\",sign_draw)\n\n\n#show_img(\"lines and circles\", sign_draw)\n# if circles is not None:\n# circles = np.uint16(np.around(circles))\n# draw_circles(circles, sign_draw)\n# show_img(\"lines and circles\", sign_draw)\n\n\"\"\"\n\n\"\"\"for construction and warning\n# for diamond in diamonds:\n# print(\"lines: {} \\n common: {}\".format(diamond[\"lines\"], diamond[\"common\"]))\n# for point in diamond[\"common\"]:\n# cv2.putText(sign_draw, \"*\", (int(point[0]), int(point[1])), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255),\n# thickness=2)\n# show_img(\"\",sign_draw)\n# diamonds = get_diamonds(lines)\n#\n# for diamond in diamonds:\n# centerx = np.mean([c[0] for c in diamond[\"common\"]])\n# centery = np.mean([c[1] for c in diamond[\"common\"]])\n# area = sign_draw[int(centery) - 5:int(centery) + 5, int(centerx) - 5:int(centerx) + 5]\n# red = np.mean(area[:, :, 2])\n# green = np.mean(area[:, :, 1])\n# print(\"{} {}\".format(red, green))\n# if red > 200 and green > 200:\n# print(\"warning\")\n# cv2.putText(sign_draw, \"* warning\", (int(centerx), int(centery)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255),\n# thickness=2)\n# show_img(\"\", sign_draw)\n\n\n# centers = get_square_centers(lines)\n#\n# for center in centers:\n# area = sign_draw[int(center[0])-5:int(center[0])+5, int(center[1])-5:int(center[1]) +5]\n# red = np.mean(area[:,:,2])\n# green = np.mean(area[:,:,1])\n# cv2.putText(sign_draw, \"*\", (int(center[1]),int(center[0])), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), thickness=2)\n# show_img(\"sign\",sign_draw)\n# print(red)\n# print(green)\n# if red > 200 and green > 200:\n# print(\"warning\")\n#\n#\n# print(centers)\n# #print(\"{} {}\".format(x,y))\n\n#draw_tl_center(sign_draw, (x,y), \"yield\")\n# show_img(\"warning\", sign_draw)\n\"\"\"\n\ncv2.destroyAllWindows()\n","sub_path":"ps02/playing_signs.py","file_name":"playing_signs.py","file_ext":"py","file_size_in_byte":21082,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"70368114","text":"# _*_ coding=utf-8 _*_\n\nimport re\nimport os,sys\n\nprojectPath=os.getcwd()\n\n\nf=open(projectPath+\"/white_list/result.txt\",'r')\nL=[]\nwith open(projectPath+\"/white_list/white_list.txt\",\n 'w') as g:\n for i in f.readlines():\n j=i.strip('\\r\\n')\n l2=re.search('[0-9A-Za-z]+\\.[0-9A-Za-z]+\\.[0-9A-Za-z]+', j)\n l3 = re.search('[0-9A-Za-z]+\\.[0-9A-Za-z]+', j)\n l1 = re.search('[0-9A-Za-z]+\\.[0-9A-Za-z]+\\.[0-9A-Za-z]+\\.[0-9A-Za-z]+', j)\n if l1 :\n print(l1.group())\n\n L.append(l1)\n elif l2:\n print(l2.group())\n\n L.append(l2)\n elif l3:\n print(l3.group())\n L.append(l3)\n else:\n print(\"no urls\")\n for l in L:\n\n url=l.group(0)\n\n url=re.sub('www.','',url)\n g.write(url+'\\n')\n\nf.close()","sub_path":"com-fj-phishing-2/white_list/Qingxi.py","file_name":"Qingxi.py","file_ext":"py","file_size_in_byte":823,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"27351433","text":"# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Tue May 14 10:28:40 2019\n\n@author: Pedro Ball\n\"\"\"\n\n# Importando as bibliotecas necessárias.\nimport pygame\nfrom os import path\n\n# Estabelece a pasta que contem as figuras.\nimg_dir = path.join(path.dirname(__file__), 'img')\n\n# Dados gerais do jogo.\nWIDTH = 480 # Largura da tela\nHEIGHT = 450 # Altura da tela\nFPS = 60 # Frames por segundo\n\n# Define algumas variáveis com as cores básicas\nWHITE = (255, 255, 255)\nBLACK = (0, 0, 0)\nRED = (255, 0, 0)\nGREEN = (0, 255, 0)\nBLUE = (0, 0, 255)\nYELLOW = (255, 255, 0)\n\n# Classe Jogador que representa a nave\nclass Mochila(pygame.sprite.Sprite):\n \n # Construtor da classe.\n def __init__(self):\n \n # Construtor da classe pai (Sprite).\n pygame.sprite.Sprite.__init__(self)\n \n # Carregando a imagem de fundo.\n mochila = pygame.image.load(path.join(img_dir, \"inventario.png\")).convert()\n self.image = mochila\n \n # Diminuindo o tamanho da imagem.\n self.image = pygame.transform.scale(mochila, (50, 38))\n \n # Deixando transparente.\n self.image.set_colorkey(BLACK)\n \n # Detalhes sobre o posicionamento.\n self.rect = self.image.get_rect()\n \n # Centraliza embaixo da tela.\n self.rect.centerx = WIDTH / 2\n self.rect.bottom = HEIGHT - 10\n\n# Classe Jogador que representa a seta\nclass Seta(pygame.sprite.Sprite):\n \n # Construtor da classe.\n def __init__(self):\n \n # Construtor da classe pai (Sprite).\n pygame.sprite.Sprite.__init__(self)\n \n # Carregando a imagem de fundo.\n seta = pygame.image.load(path.join(img_dir, \"SETA.png\")).convert()\n self.image = seta\n \n # Diminuindo o tamanho da imagem.\n self.image = pygame.transform.scale(seta, (50, 38))\n \n # Deixando transparente.\n self.image.set_colorkey(BLACK)\n \n # Detalhes sobre o posicionamento.\n self.rect = self.image.get_rect()\n \n # Centraliza embaixo da tela.\n self.rect.centerx = WIDTH - 40\n self.rect.bottom = HEIGHT - 400\n \nimg_local = \"\"\n\ndef load_assets(img_dir):\n assets = {}\n assets[\"Inicio\"] = pygame.image.load(path.join(img_dir, \"inicio.png\")).convert()\n assets[\"Quarto\"] = pygame.image.load(path.join(img_dir, \"quarto.png\")).convert()\n assets[\"Cama\"] = pygame.image.load(path.join(img_dir, \"cama.png\")).convert()\n assets[\"Recompensa\"] = pygame.image.load(path.join(img_dir,\"recompensa_1.png\")).convert()\n# assets[\"Inferno\"] = pygame.image.load(path.join(img_dir, \"inferno\"))\n assets[\"Inventario\"] = pygame.image.load(path.join(img_dir,\"inventario_0.png\"))\n if img_local == \"Cama\":\n assets[\"Inventario\"] = pygame.image.load(path.join(img_dir,\"inventario_1.png\"))\n return assets\n\ndef load_inventario(img_dir):\n inventory = {}\n inventory[\"Água Benta\"]\n\n# Inicialização do Pygame.\npygame.init()\npygame.mixer.init()\n\n# Tamanho da tela.\nscreen = pygame.display.set_mode((WIDTH, HEIGHT))\n\n# Nome do jogo\npygame.display.set_caption(\"Doom Escape\")\n\n# Variável para o ajuste de velocidade\nclock = pygame.time.Clock()\n\n# Carrega o fundo do jogo\nimg_dic = load_assets(img_dir)\n\n\n# Cria uma nave. O construtor será chamado automaticamente.\nmochila = Mochila()\nseta = Seta()\n# Cria um grupo de sprites e adiciona a nave.\nall_sprites = pygame.sprite.Group()\nall_sprites.add(mochila)\nall_sprites.add(seta)\n\n# Comando para evitar travamentos.\nimg_local = \"Inicio\"\nimg_local_0 = \"\"\n\n\ntry:\n \n # Loop principal.\n running = True\n while running:\n \n # Ajusta a velocidade do jogo.\n clock.tick(FPS)\n \n # Processa os eventos (mouse, teclado, botão, etc).\n for event in pygame.event.get():\n \n # Verifica se foi fechado\n if event.type == pygame.QUIT:\n running = False\n\n elif img_local == \"Inicio\":\n if event.type == pygame.MOUSEBUTTONDOWN:\n px = event.pos[0]\n py = event.pos[1]\n if px > 0 and px < 480 and py > 0 and py < 450:\n img_local = \"Quarto\"\n if px > 220 and px < 259 and py > 406 and py < 440:\n img_local_0 = \"Inicio\"\n img_local = \"Inventario\"\n \n elif img_local == \"Quarto\": \n if event.type == pygame.MOUSEBUTTONDOWN:\n px = event.pos[0]\n py = event.pos[1]\n if px > 70 and px < 310 and py > 270 and py < 370:\n img_local = \"Cama\"\n if px > 387 and px < 449 and py > 244 and py < 270:\n img_local = \"Recompensa\"\n img_local_0 = \"\"\n if px > 220 and px < 259 and py > 406 and py < 440:\n img_local_0 = \"Quarto\"\n img_local = \"Inventario\"\n elif img_local == \"Cama\":\n if event.type == pygame.MOUSEBUTTONDOWN:\n px = event.pos[0]\n py = event.pos[1]\n if px > 220 and px < 259 and py > 406 and py < 440:\n img_local_0 = \"Cama\"\n img_local = \"Inventario\"\n \n elif img_local == \"Inventario\":\n if event.type == pygame.MOUSEBUTTONDOWN:\n px = event.pos[0]\n py = event.pos[1]\n if px > 220 and px < 259 and py > 406 and py < 440:\n img_local = img_local_0\n \n # A cada loop, redesenha o fundo e os sprites\n screen.fill(BLACK)\n reindera_imagem = pygame.transform.scale(img_dic[img_local], (480, 450))\n screen.blit(reindera_imagem, img_dic[img_local].get_rect())\n #screen.blit(img_dic[\"Inventario\"], img_dic[\"Inventario\"].get_rect())\n all_sprites.draw(screen)\n \n # Depois de desenhar tudo, inverte o display.\n pygame.display.flip()\n \nfinally:\n pygame.quit()","sub_path":"Desenvolvimento e Tentativas/teste.py","file_name":"teste.py","file_ext":"py","file_size_in_byte":6161,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"48044627","text":"# import socket\r\n# import random\r\n# \r\n# def client(string):\r\n# HOST, PORT = 'localhost', 31100\r\n# # SOCK_STREAM == a TCP socket\r\n# sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\n# #sock.setblocking(0) # optional non-blocking\r\n# sock.connect((HOST, PORT))\r\n# \r\n# self.logger.info (\"sending data => [%s]\" % (string))\r\n# \r\n# sock.send(bytes(string, \"utf-8\"))\r\n# reply = sock.recv(16384) # limit reply to 16K\r\n# self.logger.info (\"reply => \\n [%s]\" % (reply))\r\n# sock.close()\r\n# return reply\r\n# \r\n# def main():\r\n# client('Python Rocks')\r\n# \r\n# if __name__ == \"__main__\":\r\n# main()\r\n\r\nfrom tkinter import *\r\nfrom tkinter import ttk\r\nfrom threading import RLock\r\nfrom queue import Queue\r\nfrom threading import Thread\r\nfrom PIL import ImageTk\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\nimport os\r\nimport math\r\nfrom PIRQueryObject import PIRQueryObject\r\n\r\n'''''\r\nhttp://python.about.com/od/python30/ss/30_strings_3.htm\r\n'''''\r\nimport logging\r\nimport threading\r\nimport socket\r\nfrom FrameBuilder import FrameBuilder\r\nfrom time import sleep\r\nfrom OpCodes import OpCodes\r\nimport binascii\r\nimport random\r\nfrom bitstring import BitArray\r\nimport time\r\nfrom Crypto import Random\r\nfrom Crypto.Cipher import AES\r\nfrom sys import byteorder\r\nimport random\r\nfrom itertools import count\r\nfrom tkinter import messagebox\r\nimport pickle \r\n\r\ncodes = OpCodes()\r\nS_M_PORT = 31100\r\nactive_servers = {}\r\nserversPool = Queue()\r\nserversQueryReply = list()\r\nSEED_LENGTH = 128\r\nclass client_window(Frame):\r\n \r\n \r\n pirClient = None\r\n \r\n def __init__(self, parent):\r\n logging.basicConfig(level=logging.DEBUG,format='%(name)s: %(message)s',)\r\n self.logger = logging.getLogger(\"Client Window\")\r\n \r\n \r\n \r\n #################### Global variables ####################\r\n \r\n self.DB_LENGTH = 500\r\n self.queryMethod=IntVar()\r\n self.desirableBit=0\r\n self.myParent = parent \r\n \r\n self.myParent.title('PIR Client')\r\n self.masterFrame = ttk.Frame(self.myParent,padding=(0,10,0,5)) ###\r\n self.masterFrame.grid(column=0, row=0, sticky=(N, S, E, W))\r\n \r\n self.pirClient = PIRClient(self)\r\n \r\n #------ constants for controlling layout ------\r\n button_width = 12 ### (1)\r\n button_height = 3\r\n \r\n button_padx = \"2\" ### (2)\r\n button_pady = \"1\" ### (2)\r\n\r\n buttons_frame_padx = \"3\" ### (3)\r\n buttons_frame_pady = \"3\" ### (3) \r\n buttons_frame_ipadx = \"3\" ### (3)\r\n buttons_frame_ipady = \"3\" ### (3)\r\n # -------------- end constants ----------------\r\n\r\n self.style = ttk.Style(self.myParent)\r\n self.style.configure('TButton', font=(\"Arial\", 8,'bold'))#larger Font for buttons\r\n self.style.configure('TLabel', font=(\"Arial\", 10,'bold'))#larger Font for buttons\r\n \r\n self.icn_SM_disconnected = ImageTk.PhotoImage(file=\"icons/Red-icon32.png\")\r\n self.icn_SM_connected = ImageTk.PhotoImage(file=\"icons/Green-icon32.png\")\r\n self.icn_compare = ImageTk.PhotoImage(file=\"icons/compare24.png\")\r\n self.icn_query = ImageTk.PhotoImage(file=\"icons/download24.png\")\r\n self.icn_exit = ImageTk.PhotoImage(file=\"icons/exit24.png\")\r\n\r\n self.lbl_SMAddress = ttk.Label(self.masterFrame, compound=LEFT, style='TLabel', text=\"Server manager address: \" )\r\n self.lbl_SMAddress.grid(row=0,column=0, columnspan=2, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(W))\r\n\r\n self.txt_SMAddress = ttk.Entry(self.masterFrame, justify=CENTER, width=button_width)\r\n self.txt_SMAddress.insert(0, '192.168.4.1')\r\n self.txt_SMAddress.grid(row=0,column=7, columnspan=1, rowspan=1, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(E))\r\n \r\n# self.lbl_padding = ttk.Label(self.masterFrame, compound=LEFT)\r\n# self.lbl_padding.grid(row=0,column=3, columnspan=3, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(W))\r\n# \r\n self.btn_connect = ttk.Button(self.masterFrame, compound=RIGHT, command=self.clickConnect, style='TButton', text=\"Connect \",width=button_width )\r\n self.btn_connect.grid(row=2,column=0, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(W))\r\n \r\n self.lbl_connectionSts = Label(self.masterFrame, image=self.icn_SM_disconnected)\r\n self.lbl_connectionSts.grid(row=2, column=7, sticky=(E))\r\n \r\n self.chk_pir = ttk.Radiobutton(self.masterFrame, text='PIR', variable=self.queryMethod, value=1)\r\n self.chk_pir.grid(row=3,column=0,columnspan=2, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(W, S))\r\n self.chk_pir.state(['selected'])\r\n self.queryMethod.set(1)\r\n \r\n self.chk_regular = ttk.Radiobutton(self.masterFrame, text='Standard', variable=self.queryMethod, value=2)\r\n self.chk_regular.grid(row=4,column=0,columnspan=2, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(W, S))\r\n \r\n self.scl_bitChoice = ttk.Scale(self.masterFrame, orient=HORIZONTAL, from_=0.0, to=self.DB_LENGTH,command=self.updatDesiedBit)\r\n self.scl_bitChoice.grid(row=5, column=0, columnspan=8, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady,sticky=(W,E))\r\n self.scl_bitChoice.state([\"disabled\"]) # Disable the scale bar.\r\n \r\n self.lbl_bitIndex = ttk.Label(self.masterFrame, compound=CENTER, style='TLabel' , text = '0')\r\n self.lbl_bitIndex.grid(row=6,column=1, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(W,E))\r\n \r\n \r\n self.btn_query = ttk.Button(self.masterFrame, compound=RIGHT, command=self.clickQuery, style='TButton', text=\"Get value \",image=self.icn_query, width=button_width )\r\n self.btn_query.grid(row=7,column=0, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(W))\r\n \r\n self.lbl_result = ttk.Label(self.masterFrame, compound=CENTER, style='TLabel', text='XX',justify=LEFT)\r\n self.lbl_result.grid(row=7,column=7, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(E))\r\n \r\n \r\n self.btn_compare = ttk.Button(self.masterFrame, compound=RIGHT, command=self.clickCompare, style='TButton', text=\"Compare \", image=self.icn_compare, width=button_width )\r\n self.btn_compare.grid(row=8,column=0, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(W))\r\n \r\n self.btn_exit = ttk.Button(self.masterFrame, compound=RIGHT, command=self.clickExit, style='TButton', text=\"Exit \", image=self.icn_exit, width=button_width )\r\n self.btn_exit.grid(row=8,column=7, ipadx=button_padx, ipady=button_pady, padx=buttons_frame_padx, pady=buttons_frame_ipady, sticky=(E))\r\n \r\n def updatDesiedBit(self,value): \r\n self.lbl_bitIndex.configure(text= str(int(float(value))))\r\n \r\n def clickConnect(self): \r\n self.pirClient.initateConnection(self.txt_SMAddress.get())\r\n \r\n \r\n \r\n \r\n def disableConnectBtn(self):\r\n self.btn_connect.state([\"disabled\"])\r\n \r\n def enableConnectBtn(self):\r\n self.btn_connect.state([\"!disabled\"])\r\n \r\n def ServerConnected_icon(self):\r\n self.lbl_connectionSts.config(image=self.icn_SM_connected)\r\n def ServerDisconnected_icon(self):\r\n self.lbl_connectionSts.config(image=self.icn_SM_disconnected)\r\n \r\n def enableScale(self):\r\n self.scl_bitChoice.state([\"!disabled\"]) # Enable the scale bar.\r\n \r\n def getCurrentScaleNum(self):\r\n return (int)(self.scl_bitChoice.get())\r\n \r\n def clickExit(self):\r\n self.myParent.destroy()\r\n \r\n def configureDBScale(self,valueToUpdate):\r\n self.scl_bitChoice.configure(to=valueToUpdate) \r\n \r\n def clickQuery(self):\r\n if self.queryMethod.get()==1:\r\n self.logger.info(\"PIR radio selected\")\r\n self.pirClient.executeQuery()\r\n\r\n elif self.queryMethod.get()==2:\r\n self.logger.info(\"Regular radio selected\")\r\n self.generateRegQuery()\r\n \r\n def writeResultToLabel(self,aResultToWrite):\r\n self.lbl_result.configure(text=aResultToWrite,justify=LEFT) \r\n \r\n def showWarningPopUp(self,aTitle,aMsg):\r\n messagebox.showinfo(aTitle,aMsg)\r\n \r\n def generateRegQuery(self):\r\n self.pirClient.executeQuery()\r\n \r\n \r\n def clickCompare(self):\r\n N = 2\r\n# queryResult = bin(int(self.lbl_result.cget(\"text\")))\r\n results = (self.pirClient.currentQueryLength,self.pirClient.DB_LENGTH)\r\n# queryResult = 2\r\n# results = 10\r\n ind = np.arange(N) # the x locations for the groups\r\n width = 0.1 # the width of the bars: can also be len(x) sequence\r\n\r\n plt.bar(ind, results, width, color='r')\r\n plt.bar(ind, results, width, color='g')\r\n \r\n plt.ylabel('Bits')\r\n plt.title('PIR vs. non PIR data transfer')\r\n plt.xticks(ind+width/2., ('PIR','regular') )\r\n\r\n plt.show()\r\n \r\n \r\n \r\n \r\n \r\n###############################################################################\r\n## PIR Algorithm functions section ##\r\n############################################################################### \r\n# def encrypt(self,message, key=None, key_size=128):\r\n# def pad(s):\r\n# x = AES.block_size - len(s) % AES.block_size\r\n# return s + (bytes([x]) * x)\r\n# \r\n# padded_message = pad(message)\r\n# if key is None:\r\n# key = Random.new().read(key_size // 8)\r\n# \r\n# cipher = AES.new(key)\r\n# return (cipher.encrypt(padded_message)) \r\n# \r\n# \r\n# def createRandomSeeds(self):\r\n# global amountOfSeeds\r\n# seedListAsBytes = []\r\n# amountOfSeeds = int((math.sqrt(self.DB_LENGTH)-1)*(2**(self.currentServersQuantity-1)-1)+2**(self.currentServersQuantity-1))\r\n# self.logger.info(\"amount of seeds:\",amountOfSeeds)\r\n# for _ in range (0,amountOfSeeds):\r\n# seedAsByte = BitArray(os.urandom(16))\r\n# seedListAsBytes.append(seedAsByte.hex)\r\n# return seedListAsBytes \r\n# \r\n# \r\n# def createCWListPool(self):\r\n# global CWListPool\r\n# CWListPool = [] \r\n# #tempCw = BitArray(os.urandom((int)(math.sqrt(dataBaseSizeVar)/8)))\r\n# # CWList.append(tempCw)\r\n# for _ in range (1,2**(self.currentServersQuantity-1)):\r\n# tempCw = BitArray(os.urandom((int)(math.sqrt(self.DB_LENGTH)/8)))\r\n# CWListPool.append(tempCw)\r\n# self.logger.info(\"size of CWlist:\",CWListPool.__len__()) \r\n# \r\n# \r\n# def convertMatrix(self,aMatrixToConvert):\r\n# global matrixA\r\n# global matrixB\r\n# matrixA = []\r\n# matrixB = []\r\n# \r\n# for index in range (0,numRows):\r\n# if (aMatrixToConvert[index].count(1)%2) == 0:\r\n# matrixA.append(aMatrixToConvert[index])\r\n# else:\r\n# matrixB.append(aMatrixToConvert[index])\r\n# # self.logger.info(\"matrix A\",matrixA)\r\n# # self.logger.info(\"matrix B\",matrixB)\r\n# t=0 \r\n# for i in matrixA: \r\n# t+=i[0] \r\n# self.logger.info(\"seed per section:\",t) \r\n# \r\n# \r\n# def buildMatrices(self):\r\n# matrix = []\r\n# global numRows\r\n# numRows = 2**self.currentServersQuantity\r\n# self.logger.info(\"#self.currentServersQuantity = %d\" %self.currentServersQuantity)\r\n# for index in range (0,numRows):\r\n# matrix.append([int(d) for d in bin(index)[2:].zfill(self.currentServersQuantity)])\r\n# # self.logger.info(matrix)\r\n# self.convertMatrix(matrix) \r\n# \r\n# \r\n# def transformIndexBit(self):\r\n# global transformedRowIndex\r\n# global transformedColumnIndex\r\n# \r\n# bitToExtractIndex = int(self.scl_bitChoice.get())\r\n# transformedRowIndex = int(bitToExtractIndex/(math.sqrt(self.DB_LENGTH))) #### i'\r\n# transformedColumnIndex = int(bitToExtractIndex%(math.sqrt(self.DB_LENGTH))) #### j\r\n# \r\n# \r\n# def createGFunctions(self,aSeedList):\r\n# listOfGFunction = []\r\n# \r\n# for seed in aSeedList: \r\n# listOfGFunction.append(self.inflatorFunction(seed)) \r\n# \r\n# return listOfGFunction \r\n# \r\n# \r\n# def createEjVector(self):\r\n# global unitVectorJAsBytes\r\n# rootedDatabaseSizeVar = (int)(math.sqrt(self.DB_LENGTH))\r\n# for index in range(0,rootedDatabaseSizeVar):\r\n# if index == transformedColumnIndex:\r\n# break \r\n# unitVectorJAsBytes = BitArray(int = 1, length = rootedDatabaseSizeVar)\r\n# unitVectorJAsBytes <<= ((rootedDatabaseSizeVar-1) - index)\r\n# self.logger.info (unitVectorJAsBytes.bin)\r\n# \r\n# \r\n# def inflatorFunction(self,aSeedToInflate):\r\n# gFunction = BitArray(self.encrypt(str(0).encode('utf-8'),aSeedToInflate))\r\n# for i in range (1,(int)(math.sqrt(self.DB_LENGTH)/SEED_LENGTH)):\r\n# gFunction.append(self.encrypt(str(i).encode('utf-8'),aSeedToInflate))\r\n# return BitArray(gFunction) \r\n# \r\n# \r\n# ###############################################################################\r\n# ## Communication stuff starts here ##\r\n# ###############################################################################\r\n# \r\n# \r\n# def createTIndicatorsPool(self):\r\n# tIndicatorsPool = []\r\n# for tIndicatorIndex in range (0,2**(self.currentServersQuantity-1)):\r\n# tIndicatorsPool.append((tIndicatorIndex,tIndicatorIndex))\r\n# return tIndicatorsPool \r\n# \r\n# \r\n# def findKthCW(self,aListOfGFunction):\r\n# tempGFunctionResult = BitArray(int = 0,length = (int)(math.sqrt(self.DB_LENGTH)))\r\n# tempCWResult = BitArray(int = 0,length =(int)(math.sqrt(self.DB_LENGTH)))\r\n# \r\n# for gFunctionIndex in range (0,amountOfSeeds):\r\n# tempGFunctionResult = tempGFunctionResult ^ aListOfGFunction[gFunctionIndex]\r\n# for cwIndex in range(0,2**(self.currentServersQuantity-1)-1):\r\n# tempCWResult = tempCWResult ^ CWListPool[cwIndex]\r\n# kthCw = tempGFunctionResult ^ tempCWResult ^ unitVectorJAsBytes\r\n# self.logger.info(\"kthCW:\",kthCw)\r\n# CWListPool.append(kthCw)\r\n# \r\n# \r\n# def appendCWListPoolToQuery(self):\r\n# global serversQueryWithCWAppended \r\n# serversQueryWithCWAppended = {}\r\n# \r\n# for serversIndex in range(0,self.currentServersQuantity):\r\n# seedsList,indicatorsList = self.serversQuery[serversIndex]\r\n# serversQueryWithCWAppended[serversIndex] = (seedsList,indicatorsList,CWListPool)\r\n# self.logger.info(\"amount of CW:\",CWListPool.__len__()) \r\n# # self.logger.info(\"server query with CWListPool\",serversQueryWithCWAppended[0][2].__len__())\r\n# # self.logger.info(\"server query with CWListPool\",serversQueryWithCWAppended[1])\r\n# # self.logger.info(\"server query with CWListPool\",serversQueryWithCWAppended[2]\r\n# \r\n# \r\n# \r\n# \r\n# \r\n# \r\n# \r\n# \r\n# \r\n# \r\n# \r\n# \r\n# \r\n# def generatePIRQuery(self,aSeedsList,aTIndicatorsList):\r\n# targetBit = int(self.scl_bitChoice.get())\r\n# self.currentServersQuantity = active_servers.__len__()\r\n# \r\n# self.serversQuery\r\n# rootedDatabaseSizeVar = (int)(math.sqrt(self.DB_LENGTH))\r\n# # self.logger.info(\"rootedSize\",rootedDatabaseSizeVar)\r\n# # self.logger.info(\"j index:\",transformedColumnIndex)\r\n# # self.logger.info(\"i' index:\",transformedRowIndex)\r\n# \r\n# \r\n# tempSeedListPerSection = []\r\n# serversQuery = {}\r\n# startIndex = 0\r\n# endIndex = 0\r\n# \r\n# # self.logger.info(\"seed pool\",seedListAsBytes)\r\n# for serverAmountIndex in range(0,self.currentServersQuantity):\r\n# serversQuery[serverAmountIndex] = ([],[])\r\n# for sectionIndex in range(0,rootedDatabaseSizeVar):\r\n# seedIndex = 0\r\n# \r\n# if sectionIndex == transformedRowIndex:# important section\r\n# choosingMatrix = matrixB\r\n# importantSectionIndicator = 0\r\n# else:#not important section\r\n# choosingMatrix = matrixA\r\n# importantSectionIndicator = 1\r\n# \r\n# #prepares the indices for next withdrawal from seed pool for all runs except the first run\r\n# startIndex = endIndex\r\n# endIndex = endIndex + 2**(self.currentServersQuantity-1) - importantSectionIndicator \r\n# \r\n# if sectionIndex == 0: #first run\r\n# startIndex = 0\r\n# endIndex = 2**(self.currentServersQuantity-1)-importantSectionIndicator \r\n# \r\n# tempSeedListPerSection = aSeedsList[startIndex:endIndex]\r\n# if importantSectionIndicator == 1:# when we in a section that is not important we have an array of 2^(k-1) seeds, and we need to add another seed to be modular\r\n# tempSeedListPerSection.insert(0,0xFF)\r\n# # self.logger.info(\"choosing matrix \",choosingMatrix,\"section number:\",sectionIndex+1)\r\n# # self.logger.info(\"list for a section\",tempSeedListPerSection) \r\n# for choosingMatrixColumnIndex in choosingMatrix:\r\n# matchSeedToServerList = [i for i, j in enumerate(choosingMatrixColumnIndex) if j == 1]\r\n# # self.logger.info(\"matchedListMatrix A\",matchSeedToServerList)\r\n# for serverIndex in matchSeedToServerList:\r\n# tmpSeedList,tmpIndicatorsList = serversQuery[serverIndex]\r\n# tmpSeedList.append(tempSeedListPerSection[seedIndex])\r\n# tmpIndicatorsList.append((aTIndicatorsList[seedIndex])[0])\r\n# serversQuery[serverIndex] = (tmpSeedList,tmpIndicatorsList)\r\n# seedIndex+=1\r\n# # self.logger.info(\"server query\",serversQuery) \r\n# random.shuffle(aTIndicatorsList)\r\n# # self.logger.info(\"server query\",serversQuery[0])\r\n# # self.logger.info(\"server query\",serversQuery[1])\r\n# # self.logger.info(\"server query\",serversQuery[2])\r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n\r\n\r\n# def clickQueryHandler(self):\r\n# bitToExtractIndex = int(self.scl_bitChoice.get())\r\n# \r\n# self.buildMatrices()\r\n# seedList = self.createRandomSeeds()\r\n# listOfGFuncion = self.createGFunctions(seedList)\r\n# self.transformIndexBit()\r\n# self.createCWListPool()\r\n# listOfIndicators = self.createTIndicatorsPool()\r\n# self.createEjVector()\r\n# self.findKthCW(listOfGFuncion)\r\n# self.generateQuery(seedList,listOfIndicators)\r\n# self.appendCWListPoolToQuery() \r\n \r\n \r\n \r\n \r\n \r\nclass PIRClient():\r\n \r\n clientWindowManager = None\r\n lock = RLock()\r\n frameBuilder = FrameBuilder()\r\n pirQuery = None\r\n def __init__(self,aWindowManager): \r\n logging.basicConfig(level=logging.DEBUG,format='%(name)s: %(message)s',)\r\n self.logger = logging.getLogger(\"Client\")\r\n self.clientWindowManager = aWindowManager\r\n \r\n def initateConnection(self,aSMAddress):\r\n# self.clientWindowManager.showWarningPopUp(\"connection failed\",\"asdsfd\")\r\n\r\n self.t_SMConnection = threading.Thread(target = self.connect2SM, args=(aSMAddress,))\r\n self.t_SMConnection.setDaemon(True)\r\n self.t_SMConnection.start()\r\n \r\n \r\n \r\n def connect2SM(self,aSMAddress):\r\n self.logger.debug('creating socket')\r\n soc_serverManager = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\n self.logger.debug('connecting to server')\r\n \r\n try:\r\n\r\n soc_serverManager.connect((aSMAddress, S_M_PORT))\r\n self.saveServerDetails((aSMAddress,S_M_PORT,soc_serverManager)) ##tuple format: (IP, PORT, Active Socket)\r\n\r\n self.sayHelloToSM()\r\n self.getDBLength()\r\n self.getSTDservers()\r\n \r\n self.clientWindowManager.enableScale() \r\n self.clientWindowManager.disableConnectBtn()\r\n except OSError: \r\n self.logger.debug('connection failed')\r\n \r\n\r\n ##TODO move this.\r\n# soc_serverManager.send(self.frameBuilder.getFrame()) ##\r\n# self.readSocketForResponse(soc_serverManager)\r\n def calacLengthSent(self,queryToSend):\r\n self.currentQueryLength = self.pirQuery.calacQuerySize()\r\n \r\n \r\n def pushServersAndQueriesToSendIntoQueue(self,queriesToSend):\r\n serversPool.queue.clear()\r\n serversQueryReply.clear()\r\n# while serversPool.qsize() != 0:\r\n# serversPool.get_nowait()\r\n self.verifyServersAlive()\r\n for serverIndex in range(0,active_servers.__len__()):\r\n queryAsString = pickle.dumps(queriesToSend[serverIndex])\r\n# self.logger.info(\"Pickle query: %s type: %s\",queryAsString,type(queryAsString))\r\n serversPool.put((active_servers[serverIndex][2],queryAsString))\r\n \r\n \r\n\r\n def executeQuery(self):\r\n \r\n \r\n self.pirQuery = PIRQueryObject(self.currentServersQuantity,self.DB_LENGTH)\r\n self.logger.info(\"Requested bit: %d\", self.clientWindowManager.getCurrentScaleNum())\r\n queryToSend = self.pirQuery.getPIRQuery(self.clientWindowManager.getCurrentScaleNum())\r\n self.calacLengthSent(queryToSend)\r\n# self.logger.info(\"query to send 0: %s\",queryToSend[0])\r\n# self.logger.info(\"query to send 1: %s\",queryToSend[1])\r\n\r\n self.pushServersAndQueriesToSendIntoQueue(queryToSend)\r\n for _ in queryToSend:\r\n# serverTuple = active_servers[targetServer]\r\n# avtiveTargetSocket = serverTuple[2]\r\n worker = Thread(target=self.sendQueries)\r\n# self.logger.info(\"Worker %s: created, connected to %s:%s\",worker.getName(),serverTuple[0],serverTuple[1])\r\n worker.setDaemon(True)\r\n worker.start()\r\n# worker.join()\r\n serversPool.join()\r\n# self.logger.info(\"All threads returned\")\r\n if(serversQueryReply.__len__() == active_servers.__len__()):\r\n self.logger.info(\"All servers replied to the query\")\r\n \r\n desiredBit = self.pirQuery.calculateQueryResult(serversQueryReply)\r\n self.logger.info(\"List of responses: %s\",serversQueryReply)\r\n\r\n self.clientWindowManager.writeResultToLabel(desiredBit)\r\n \r\n \r\n \r\n def verifyServersAlive(self):\r\n serversToRemovePool = []\r\n for (serverKey,(_,_,currentServer)) in active_servers.items():\r\n self.logger.info(\"Check server: %s alive\",serverKey)\r\n try:\r\n self.sayHelloToServer(currentServer)\r\n except Exception:\r\n self.logger.info(\"server: %s is not present, removed\",serverKey)\r\n serversToRemovePool.append(serverKey)\r\n for server2Remove in serversToRemovePool:\r\n del active_servers[server2Remove]\r\n \r\n def readSocketForResponse(self,runnigSocket):\r\n cur_thread = threading.currentThread()\r\n\r\n while True:\r\n # Echo the back to the client\r\n try:\r\n data = runnigSocket.recv(2)\r\n if data == '' or len(data) == 0:\r\n break\r\n except Exception: \r\n self.logger.debug('recv failed opcode and num bytes of Length')\r\n break\r\n #Got data Successfully\r\n self.recvOpcode = data[0] #first byte is op code\r\n lengthOfSize = data[1]\r\n try:\r\n data = runnigSocket.recv(lengthOfSize)\r\n if data == '' or len(data) == 0:\r\n break\r\n except Exception: \r\n self.logger.debug('recv data failed size')\r\n break\r\n size = int.from_bytes (data,byteorder='big')\r\n try:\r\n data = runnigSocket.recv(size)\r\n if data == '' or len(data) == 0:\r\n break\r\n except Exception: \r\n self.logger.debug('recv data failed payload')\r\n break\r\n self.payload = data\r\n self.logger.info(\"%s Received: %s %s\",cur_thread.getName(),self.recvOpcode,self.payload )\r\n self.reponseHandler(self.recvOpcode,self.payload)\r\n break\r\n \r\n def reponseHandler(self,recvOpcode,msg):\r\n self.code = OpCodes.getCode(self, self.recvOpcode)\r\n \r\n if self.code == 'hello_ack':\r\n self.logger.info (self.code)\r\n self.clientWindowManager.ServerConnected_icon()\r\n elif self.code == 'server_quantity_reply':\r\n self.logger.info (self.code)\r\n self.handleQuantityReply(msg)\r\n elif self.code == 'servers_up':\r\n self.logger.info (self.code)\r\n elif self.code == 'servers_failed':\r\n self.logger.info (self.code)\r\n elif self.code == 'db_length':\r\n self.logger.info (self.code)\r\n self.handleDBLengthReply(msg)\r\n elif self.code == 'pir_query_reply':\r\n self.logger.info (self.code)\r\n self.handleQueryReply(msg) \r\n elif self.code == 'std_query_reply':\r\n self.logger.info (self.code)\r\n self.handleQueryReply(msg)\r\n elif self.code == 'ip_and_port_reply':\r\n self.logger.info (self.code)\r\n self.handleIpAndPortReply(msg)\r\n else:\r\n self.logger.info(\"Bad opCode\") \r\n \r\n \r\n def sayHelloToSM(self):\r\n self.frameBuilder.assembleFrame(codes.getValue('client_hello')[0],\"client says hello\")\r\n# self.frameBuilder.assembleFramePickle(codes.getValue('client_hello')[0],\"client says hello\")\r\n# self.logger.info(\"Say hello frame: %s \",self.frameBuilder.getFramePickle())\r\n self.sendAndHandleResponse(active_servers[0][2]) \r\n \r\n def sayHelloToServer(self,activeTargetSocket):\r\n self.frameBuilder.assembleFrame(codes.getValue('client_hello')[0],\"client says hello\")\r\n self.sendAndHandleResponse(activeTargetSocket) \r\n \r\n def getSTDservers(self):\r\n self.frameBuilder.assembleFrame(codes.getValue('server_quantity_request')[0],\"server currentServersQuantity request\")\r\n self.sendAndHandleResponse(active_servers[0][2])\r\n \r\n def getDBLength(self):\r\n self.frameBuilder.assembleFrame(codes.getValue('db_length_request')[0],\"DB length request\")\r\n self.sendAndHandleResponse(active_servers[0][2])\r\n\r\n def sendQueries(self):\r\n t_frameBuilder = FrameBuilder()\r\n while True:\r\n# self.logger.info('%s Fetching socket from to queue ',threading.currentThread().getName())\r\n (targetSocket,query2Send) = serversPool.get(True)\r\n self.logger.info(\"length of query to send %d\",len(query2Send))\r\n# self.lock.acquire(blocking=True)\r\n t_frameBuilder.assembleFrame(codes.getValue('pir_query')[0],query2Send)\r\n# self.lock.release()\r\n targetSocket.send(t_frameBuilder.getFrame())\r\n self.readSocketForResponse(targetSocket)\r\n# self.sendAndHandleResponse(targetSocket)\r\n serversPool.task_done()\r\n \r\n def sendAndHandleResponse(self,activeSocket):\r\n activeSocket.send(self.frameBuilder.getFrame())\r\n# activeSocket.send(pickle.dump(self.frameBuilder.getFramePickle()),flags=0)\r\n self.readSocketForResponse(activeSocket)\r\n \r\n \r\n \r\n###############################################################################\r\n## Handling functions in this section ##\r\n############################################################################### \r\n def handleQuantityReply(self,msg):\r\n self.currentServersQuantity = int(msg)\r\n soc_serverManager = active_servers[0][2]\r\n for currentServer in range(1, self.currentServersQuantity):\r\n self.frameBuilder.assembleFrame(codes.getValue('ip_and_port_request')[0],str(currentServer))\r\n soc_serverManager.send(self.frameBuilder.getFrame())\r\n self.readSocketForResponse(soc_serverManager)\r\n self.currentServersQuantity = active_servers.__len__()\r\n \r\n def saveServerDetails(self,serverCredential):\r\n self.lock.acquire(blocking=True)\r\n active_servers[active_servers.__len__()] = serverCredential\r\n self.lock.release()\r\n \r\n def handleIpAndPortReply(self,msg):\r\n modifiedMsg = msg.decode('utf-8')\r\n try:\r\n index,stdIP,stdPort = modifiedMsg.split(':',3)\r\n except Exception: \r\n self.logger.debug('Bad ipAndPortRequest format')\r\n \r\n soc_stdServer=self.connect2Target((stdIP,int(stdPort)))\r\n self.sayHelloToServer(soc_stdServer) \r\n self.lock.acquire(blocking=True)\r\n active_servers[int(index)] = (stdIP,int(stdPort),soc_stdServer)\r\n self.lock.release()\r\n self.logger.debug('STD Server:%s on Port:%s was added in index:%s ', stdIP,stdPort,str(index))\r\n \r\n def handleQueryReply(self,msg):\r\n serversQueryReply.append(int(msg))\r\n \r\n def handleDBLengthReply(self,msg):\r\n modifiedMsg = msg.decode('utf-8')\r\n self.DB_LENGTH = int(modifiedMsg)\r\n self.clientWindowManager.configureDBScale(self.DB_LENGTH-1)\r\n self.logger.info('Data base size is updated to:%s',self.DB_LENGTH )\r\n\r\n def connect2Target(self,tu_address):\r\n self.logger.debug('creating socket connection to %s' , tu_address)\r\n s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\n try:\r\n s.connect(tu_address)\r\n return s\r\n# connectedFlag = True\r\n except Exception: \r\n self.logger.debug('connection to %s failed',tu_address)\r\n \r\n \r\n def setSMIPAdress(self,aIPAdress):\r\n pass\r\n \r\nif __name__ == \"__main__\":\r\n# logger = logging.getLogger('Client')\r\n root = Tk()\r\n appWindownManager = client_window(root)\r\n root.mainloop()\r\n# logger = logging.getLogger(\"Client computer\")\r\n# \r\n# # while True:\r\n# # sleep(1)\r\n# # self.logger.info(int(time.time()%1000000))\r\n# # \r\n# \r\n# s_c = ('123',344)\r\n# a_s = {1: ('123',344),2:('345',1254667)}\r\n# whatreturned = [ k for k, element in a_s.items() if element == s_c]\r\n# for key, element in a_s.items(): \r\n# self.logger.info(key,element)\r\n# \r\n# \r\n# \r\n# # p_bitstring = BitArray(hex(random.getrandbits(2**20)))\r\n# # logger.debug('BitArray s: %s' ,p_bitstring)\r\n# \r\n# frameBuilder = FrameBuilder()\r\n# ip, port = '192.168.4.1', 31101\r\n# # self.logger.info (codes.get_code(b'242'))\r\n# \r\n# logger.info('Server on %s:%s', ip, port)\r\n# \r\n# # Connect to the server\r\n# logger.debug('creating socket')\r\n# s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\n# logger.debug('connecting to server')\r\n# connectedFlag = True\r\n# try:\r\n# s.connect((ip, port))\r\n# except Exception: \r\n# logger.debug('connection failed')\r\n# connectedFlag = False\r\n# frameBuilder.assembleFrame(codes.getValue('clientHello')[0],\"dfgsdf\")\r\n# s.send(frameBuilder.getFrame())\r\n# while connectedFlag:\r\n# \r\n# message = input(\"Enter your message to the EchoServer: \")\r\n# # self.logger.info (codes.getValue('db_length')[0])\r\n# frameBuilder.assembleFrame(codes.getValue('clientHello')[0],message)\r\n# # my_bytes.append(245)\r\n# # my_bytes.append(len(message))\r\n# # my_bytes.extend(str.encode(message))\r\n# logger.debug('sending data: \"%s\"', message)\r\n# len_sent = s.send(frameBuilder.getFrame())\r\n# # len_sent = s.send('240')\r\n# \r\n# # Receive a response\r\n# logger.debug('waiting for response')\r\n# response = s.recv(len_sent + len(threading.currentThread().getName()) + 3)\r\n# logger.debug('response from server: \"%s\"', response)\r\n# # self.logger.info('response from server: ', response.encode(\"utf-8\"))\r\n# sleep(0.05)\r\n# # connectedFlag = False\r\n# \r\n# # Clean up\r\n# logger.debug('closing socket')\r\n# s.close()\r\n# logger.debug('Client done')\r\n\r\n \r\n \r\n","sub_path":"main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":33604,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"173707354","text":"temperature_button = u\"\\U0001F321\" + \"Temperature\"\nget_status_button = u\"\\U0001F49A\" + \"Get Status\"\nset_status_button = u\"\\U0001F9E1\" + \"Set Status\"\nenable_hurt_button = u\"\\U0001F494\" + \"Enable Hurt\"\ndisable_hurt_button = u\"\\U00002764\" + \"Disable Hurt\"\nwallet_button = u\"\\U0001F4B0\" + \"Wallet\"\ndaily_zeit_button = u\"\\U0000231A\" + \"Daily Zeit\"\nget_photo_button = u\"\\U0001F4F8\" + \"Get Photo\"\nday_deal_button = u\"\\U0001F4B9\" + \"Day Deal\"\nweek_deal_button = u\"\\U0001F911\" + \"Week Deal\"\ngerman_button = u\"\\U0001F468\" + \"German\"\ndigitec_deal_button = u\"\\U0001F4BB\" + \"Digitec Deal\"\n\nback_button = u\"\\U0001F448\" + \" Back\"\nnext_button = u\"\\U0001F449\" + \" Next\"\n\nnot_enough_permissions = \"You are not authorized\"\nnot_yet_in_production = \"I'm sorry but this feature is not yet in production\"\n\nSTART_MENU_RESULT, SECOND_PAGE_RESULT, THIRD_PAGE_RESULT = range(3)\n","sub_path":"bin/utils/Constant.py","file_name":"Constant.py","file_ext":"py","file_size_in_byte":851,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"563630484","text":"# -*- coding: utf-8 -*-\n\nimport os\nimport sys\nimport unittest\nimport tempfile\nimport netCDF4\n\nclass test_filepath(unittest.TestCase):\n\n def setUp(self):\n self.netcdf_file = os.path.join(os.getcwd(), \"netcdf_dummy_file.nc\")\n self.nc = netCDF4.Dataset(self.netcdf_file)\n\n def test_filepath(self):\n assert self.nc.filepath() == str(self.netcdf_file)\n\n def test_filepath_with_non_ascii_characters(self):\n python3 = sys.version_info[0] > 2\n if python3:\n encoding = 'utf-8'\n else:\n encoding = 'mbsc'\n # create nc-file in a filepath with Non-Ascii-Characters\n tempdir = tempfile.mkdtemp(prefix=u'ÄÖÜß_')\n filename = u\"Besançonalléestraße.nc\"\n nc_non_ascii_file = os.path.join(tempdir, filename)\n try:\n nc_non_ascii = netCDF4.Dataset(nc_non_ascii_file, 'w')\n except OSError:\n msg = u'cannot create file {} in folder {}\\n using encoding: {}'.format(\n tempdir, filename, encoding)\n raise OSError(msg)\n \n # test that no UnicodeDecodeError occur in the filepath() method\n msg = u'original: {}\\nfilepath: {}'.format(\n nc_non_ascii_file,\n nc_non_ascii.filepath())\n assert nc_non_ascii.filepath() == nc_non_ascii_file, msg\n \n # cleanup\n nc_non_ascii.close()\n os.remove(nc_non_ascii_file)\n os.rmdir(tempdir)\n \n \nif __name__ == '__main__':\n unittest.main()\n","sub_path":"test/tst_filepath.py","file_name":"tst_filepath.py","file_ext":"py","file_size_in_byte":1512,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"349049968","text":"from django.db import models\nfrom django.contrib.contenttypes import generic\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.utils.http import urlquote\nfrom django.utils.translation import ugettext as _\n\nDEFAULT_VISITS = 0\n\nSTATUS_CHOICES = (\n (1, 'Inactive'),\n (2, 'Active'),\n)\n\nurl_help = \"\"\"\nNot a formal URL field. This accepts a string which will have string formatting operations performed on it. Valid key \nmappings for the string formatting includes:\n
\n
%(url)s Url to be provided to social bookmarking service
\n
%(title)s Title of object being submitted to social bookmarking service
\n
%(description)s Summary or description of the object being submitted
\n
\n\"\"\"\n\nimage_help = \"\"\"\nBookmark image icon stored in media/social_bookmarking/img folder. Stored there so easier to install with fixtures.\"\n\"\"\"\n\njs_help = \"\"\"\nJavascript placed here will be inserted in the page in a body. Lines will be stripped so make sure that \nyou end your lines of code correctly.\n\"\"\"\n\nclass BookmarkManager(models.Manager):\n \"\"\"\n QuerySet for all acive bookmarks.\n \"\"\"\n def get_active(self):\n return self.get_query_set().filter(status=2)\n\nclass Bookmark(models.Model):\n title = models.CharField(max_length=255, blank=False)\n slug = models.SlugField(_('slug'))\n status = models.IntegerField(choices=STATUS_CHOICES, default=2) \n description = models.CharField(max_length=255, blank=True, help_text=_(\"Because some things want it\"))\n url = models.CharField(blank=False, max_length=255, help_text=_(url_help))\n image = models.CharField(help_text=_(image_help), max_length=100, blank=False)\n js = models.TextField(help_text=_(js_help), blank=True)\n \n objects = BookmarkManager()\n \n class Meta:\n ordering = ('title',)\n\n def __unicode__(self):\n return unicode(self.title)\n\nclass BookmarkRelated(models.Model):\n content_type = models.ForeignKey(ContentType)\n object_id = models.PositiveIntegerField()\n content_object = generic.GenericForeignKey('content_type', 'object_id')\n bookmark = models.ForeignKey(Bookmark, blank=False, null=False)\n visits = models.IntegerField(_('visits'), default=DEFAULT_VISITS, editable=False)","sub_path":"social_bookmarking/models.py","file_name":"models.py","file_ext":"py","file_size_in_byte":2421,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"127509475","text":"from webservice import calculate_average, diagnosis, check_input\nimport pytest\nimport math\n\n\ndef test_calculate_average():\n\n output_1 = calculate_average([10, 20, 30])\n assert output_1 == 20\n\n output_2 = calculate_average([50, 76, 82, 99, 43, 46, 76, 35])\n assert output_2 == 63.375\n\n\ndef test_diagnosis():\n\n output_3 = diagnosis(120)\n assert output_3 == \"Tachycardia\"\n\n output_4 = diagnosis(100)\n assert output_4 == \"Normal\"\n\n output_5 = diagnosis(61)\n assert output_5 == \"Normal\"\n\n output_6 = diagnosis(45)\n assert output_6 == \"Bradycardia\"\n\n\ndef check_input():\n\n input_7 = {\n \"user_email\": \"suyash@suyashkumar.com\",\n \"user_age\": 50,\n \"heart_rate\": 100\n }\n\n output_7 = check_input(input_7)\n assert output_7 is True\n\n input_8 = {\n \"user_email\": \"suyash@suyashkumar.com\",\n \"user_age\": \"50\",\n \"heart_rate\": \"100\"\n }\n\n output_8 = check_input(input_8)\n assert output_8 is True\n\n input_9 = {\n \"user_age\": 50,\n \"heart_rate\": 100\n }\n\n with pytest.raises(KeyError):\n output_9 = check_input(input_9)\n\n assert output_9 is False\n\n input_10 = {\n \"user_email\": 45,\n \"user_age\": 50,\n \"heart_rate\": 100\n }\n\n output_10 = check_input(input_10)\n assert output_10 is False\n\n input_11 = {\n \"user_email\": \"suyash@suyashkumar.com\",\n \"user_age\": \"fifty\",\n \"heart_rate\": 100\n }\n\n with pytest.raises(ValueError):\n output_11 = check_input(input_11)\n assert output_11 is False\n\n input_12 = {\n \"user_email\": \"suyash@suyashkumar.com\",\n \"user_age\": 50,\n \"heart_rate\": math.sqrt(-100)\n }\n\n with pytest.raises(ValueError):\n output_12 = check_input(input_12)\n assert output_12 is False\n","sub_path":"test_webservice.py","file_name":"test_webservice.py","file_ext":"py","file_size_in_byte":1898,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"169997906","text":"#!/usr/bin/python\n\n# %h is the IP address of the client\n# %l is identity of the client, or \"-\" if it's unavailable\n# %u is username of the client, or \"-\" if it's unavailable\n# %t is the time that the server finished processing the request. The format is [day/month/year:hour:minute:second zone]\n# %r is the request line from the client is given (in double quotes). It contains the method, path, query-string, and protocol or the request.\n# %>s is the status code that the server sends back to the client. You will see see mostly status codes 200 (OK - The request has succeeded), 304 (Not Modified) and 404 (Not Found). See more information on status codes in W3C.org\n# %b is the size of the object returned to the client, in bytes. It will be \"-\" in case of status code 304.\n#\n# 10.223.157.186 - - [15/Jul/2009:15:50:35 -0700] \"GET /assets/js/lowpro.js HTTP/1.1\" 200 10469\n# %h %l %u %t \\\"%r\\\" %>s %b\n\nimport sys\n\n\nclass CommonLogLine:\n\n def __init__(self, line):\n line = line.strip()\n\n self.ip = ''\n self.identity = ''\n self.username = ''\n self.timestamp = ''\n self.method = ''\n self.path = ''\n self.querystring = ''\n self.protocol = ''\n self.status = 0\n self.size = 0\n\n try:\n ini, end = line.split(' [', 1)\n except:\n return\n\n self.timestamp, end = end.split('] \"', 1) # timestamp\n self.ip, ini = ini.split(' ', 1) # ini -> identity + username\n\n request, end = end.split('\" ')\n try:\n request = request.split(' ') # ignore: \"LIMIT 0\"\n self.method = request[0]\n self.path = request[1]\n self.protocol = request[-1]\n except Exception as e:\n raise Exception(str(e), request, line)\n\n status, size = end.split(' ')\n self.status = int(status)\n self.size = 0 if size == '-' else int(size)\n\n def __str__(self):\n lst = []\n if any(self.ip):\n lst.append('ip=%s' % self.ip)\n if any(self.identity):\n lst.append('identity=%s' % self.identity)\n if any(self.username):\n lst.append('username=%s' % self.username)\n if any(self.timestamp):\n lst.append('ts=%s' % self.timestamp)\n if any(self.method):\n lst.append('method=%s' % self.method)\n if any(self.path):\n lst.append('path=%s' % self.path)\n if any(self.querystring):\n lst.append('qs=%s' % self.querystring)\n if any(self.protocol):\n lst.append('protocol=%s' % self.protocol)\n if self.status > 0:\n lst.append('status=%d' % self.status)\n if self.size > 0:\n lst.append('size=%d' % self.size)\n \n return ', '.join(lst)\n \n \nfor line in sys.stdin:\n\n line = CommonLogLine(line)\n if not line.path:\n continue\n\n print('{0}\\t1'.format(line.path))\n","sub_path":"courses/udacity/hadoop.and.mapreduce/project.01/part.02/01.hits.to.page/mapper.py","file_name":"mapper.py","file_ext":"py","file_size_in_byte":2926,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"493844927","text":"# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Feb 1 11:44:41 2016\n\n@author: pilgrim\n\"\"\"\n\nclass Settings():\n '''a clas to store all settings for alien invation'''\n def __init__(self):\n '''init game settings'''\n #screen settings\n self.scrn_width = 1200\n self.scrn_hgt = 800\n self.bgcolor = (0, 0, 81)\n #ship settings\n self.ship_speed = 1.5\n #bullet settings\n self.torpedo_speed = 1\n self.torpedo_width = 3\n self.torpedo_height = 15\n self.torpedo_color = (0, 204, 0)\n self.torpedo_number = 3\n self.comet_number = 5\n","sub_path":"settings.py","file_name":"settings.py","file_ext":"py","file_size_in_byte":614,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"253152796","text":"from django.db.models.signals import post_save\n\nfrom django.contrib.auth.models import User\nfrom .models import Profile\n\n\ndef create_profile(sender, instance, created, **kw):\n if created:\n user = instance\n profile = Profile.objects.create(\n user=user,\n )\n\n\npost_save.connect(create_profile, sender=User)\n","sub_path":"users/signals.py","file_name":"signals.py","file_ext":"py","file_size_in_byte":339,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"491207371","text":"# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Sat Apr 27 23:12:58 2019\n\n@author: iason\n\"\"\"\n\nimport numpy as np\nimport sys\n\nsys.path.append('../traceAnalysis - Ivo')\nimport traceAnalysisCode as analysis\nimport pandas as pd\nimport os\nimport itertools\nimport matplotlib.pyplot as plt\nimport matplotlib.widgets\nimport seaborn as sns\nfrom cursor_matplotlib import SnaptoCursor\n#plt.rcParams['toolbar'] = 'toolmanager'\n#from matplotlib.backend_tools import ToolBase\n#mainPath = r'D:\\ivoseverins\\SURFdrive\\Promotie\\Code\\Python\\traceAnalysis\\twoColourExampleData\\HJ A'\n\n\nclass InteractivePlot(object):\n def __init__(self, file):\n self.file = file\n self.mol_indx = 0 #From which molecule to start the analysis\n # See if there are saved analyzed molecules\n self.file.importExcel(filename=self.file.name+'_steps_data.xlsx')\n\n def plot_initialize(self):\n sns.set(style=\"dark\")\n sns.set_color_codes()\n plt.style.use('dark_background')\n self.fig, self.axes = plt.subplots(2, 1, sharex=True, figsize=(10,5))\n self.fig.canvas.set_window_title(f'Dataset: {self.file.name}')\n\n plt.subplots_adjust(bottom=0.23)\n\n # Create the axes for the widgets\n self.rax = plt.axes([0.91, 0.65, 0.08, 0.15])\n\n self.axcheckfret = plt.axes([0.91, 0.35, 0.08, 0.08])\n self.axcorred = plt.axes([0.95, 0.6, 0.028, 0.06])\n self.axcorgreen = plt.axes([0.95, 0.53, 0.028, 0.06])\n self.axcorrfretI = plt.axes([0.95, 0.3, 0.028, 0.06])\n self.axthrsliders = [plt.axes([0.26, 0.072, 0.10, 0.03]),\n plt.axes([0.26, 0.033, 0.10, 0.03])]\n # Create the buttons\n self.axthresb = plt.axes([0.1, 0.03, 0.13, 0.062]) # Button to calculate dwell times by thresholding\n self.axrejb = plt.axes([0.41, 0.03, 0.07, 0.062]) # Button to reject calculated dwell times by thresholding\n self.axclearb = plt.axes([0.51, 0.03, 0.11, 0.062]) # Button to clear the clicked points (clears vlines)\n self.axthrowb = plt.axes([0.64, 0.03, 0.11, 0.062]) # Button to throw away already calculated dwell times and de-select molecule\n self.axconclb = plt.axes([0.77, 0.03, 0.15, 0.062]) # Button to conlcude analysis by saving all the calculated steps and metadata\n\n self.axnextb = plt.axes([0.17, 0.90, 0.065, 0.062]) # Buttons to cycle through molecules\n self.axprevb = plt.axes([0.083, 0.90, 0.08, 0.062])\n [ax.set_frame_on(False) for ax in self.fig.get_axes()[2:]]\n # Radiobutton to select red or green\n self.radio = matplotlib.widgets.RadioButtons(self.rax, (\"red\", \"green\"))\n self.radio.circles[0].set_color(\"r\")\n for circle in self.radio.circles: # adjust radius here. The default is 0.05\n circle.set_radius(0.07)\n self.radio.on_clicked(self.radio_manage)\n\n # Connect clicking to draw lines class\n self.draw = Draw_lines(self.fig, self.radio)\n self.fig.canvas.mpl_connect('button_press_event', self.draw.onclick)\n # Create the buttons with\n\n bp = {'color': 'black', 'hovercolor': 'gray'}\n self.bauto = matplotlib.widgets.Button(self.axthresb,'autoThreshold' , **bp)\n self.bauto.on_clicked(self.autoThreshold_plot)\n self.brejauto = matplotlib.widgets.Button(self.axrejb,'reject' , **bp)\n self.brejauto.on_clicked(self.auto_reject)\n self.bclear = matplotlib.widgets.Button(self.axclearb,'clear clicks' , **bp)\n self.bclear.on_clicked(self.draw.clear_all)\n self.bthrow = matplotlib.widgets.Button(self.axthrowb,'throw away' , **bp)\n self.bthrow.on_clicked(self.throw_away)\n self.bconcl = matplotlib.widgets.Button(self.axconclb,'conclude analysis' , **bp)\n self.bconcl.on_clicked(self.conclude_analysis)\n self.bnext = matplotlib.widgets.Button(self.axnextb,'Next' , **bp)\n self.bnext.on_clicked(self.save_molecule)\n self.bprev = matplotlib.widgets.Button(self.axprevb,'Previous' , **bp)\n self.bprev.on_clicked(self.save_molecule)\n\n # A checkbutton for fret autothreshold dwell-time calculation\n self.checkbfret = matplotlib.widgets.CheckButtons(self.axcheckfret, [\"E fret\"],\n actives=[False])\n self.checkbfret.rectangles[0].set_color(\"black\")\n self.checkbfret.rectangles[0].set_height( 0.2)\n [line.remove() for line in self.checkbfret.lines[0]]\n self.checkbfret.on_clicked(self.check_fret)\n\n # Entryboxes for offset corrections\n corrdict = {'initial': str(0), 'color':'k', 'hovercolor': \"k\", 'label_pad':.2}\n corrlabels = [r'$I_{R_{off}}$', r'$I_{G_{off}}$', r'$I_{min}$']\n corraxes = [self.axcorred, self.axcorgreen, self.axcorrfretI]\n self.correntries = [matplotlib.widgets.TextBox(ax, label, **corrdict)\n for ax, label in zip(corraxes, corrlabels)]\n [entry.on_submit(lambda _: self.plot_molecule()) for entry in self.correntries]\n\n # Sliders for assigning the threshold\n self.thrsliders = []\n self.thrsliders.append(matplotlib.widgets.Slider(self.axthrsliders[0], label=r\"$I_R$\", valmin=0,\n valmax=500, valinit=100, valfmt=\"%i\", color=\"r\"))\n self.thrsliders.append(matplotlib.widgets.Slider(self.axthrsliders[1], label=r\"$E$\", valmin=0,\n valfmt=\"%.2f\", valinit=0.5, color=\"b\", valmax=1.0))\n [slider.vline.remove() for slider in self.thrsliders]\n\n self.fig.show()\n plt.pause(0.1)\n\n def plot_molecule(self, draw_plot=True):\n #clear the appropriate axes first\n [ax.clear() for ax in self.fig.get_axes()[:2]]\n # find the current molecule instance\n self.mol = self.file.molecules[self.mol_indx]\n\n # Check if molecule is selected\n if self.mol.isSelected: self.select_molecule(toggle=False)\n #load saved steps\n self.load_from_Molecule()\n # load kon if existing or assign a False 3x3 boolean\n self.prev_mol = self.file.molecules[self.mol_indx - 1]\n if all(kon is None for kon in [self.mol.kon_boolean, self.prev_mol.kon_boolean]):\n\n self.kon = np.zeros((3,3), dtype=bool)\n elif self.mol.kon_boolean is None:\n self.kon = np.copy(self.prev_mol.kon_boolean) # if no kon is defined for current molecule\n else:\n self.kon = self.mol.kon_boolean\n # update the edge color from self.kon:\n self.load_edges(load_fret=True)\n\n self.axes[0].set_title(f\"Molecule: {self.mol.index} /{len(self.file.molecules)}\")\n self.Iroff, self.Igoff, self.Imin = [float(c.text) for c in self.correntries]\n\n self.red = self.mol.I(1, Ioff=self.Iroff)\n self.green = self.mol.I(0, Ioff=self.Igoff)\n self.fret = self.mol.E(Imin=self.Imin)\n self.exp_time = self.file.exposure_time\n self.time = np.arange(0,len(self.red)*self.exp_time, self.exp_time)\n\n if not draw_plot:\n return\n\n self.axes[0].plot(self.time, self.green, \"g\", lw=.75)\n self.axes[0].plot(self.time, self.red, \"r\", lw=.75)\n\n self.axes[1].plot(self.time, self.fret, \"b\", lw=.75)\n self.axes[1].set_ylim((0,1.1))\n self.axes[1].set_xlim((-10, self.time[-1]))\n self.axes[1].set_xlabel(\"time (s)\")\n # vertical lines to indicate the threshold in the two axes\n self.slidel = [ax.axhline(0, lw=1, ls=\":\", zorder=3, visible=False) for ax in self.axes]\n # Creat cursor particular to the molelcule and connect it to mouse movement event\n self.cursors = []\n self.cursors.append(SnaptoCursor(self.axes[0], self.time, self.red))\n self.cursors.append(SnaptoCursor(self.axes[0], self.time, self.green))\n self.cursors.append(SnaptoCursor(self.axes[1], self.time, self.fret))\n self.connect_events_to_canvas()\n self.fig.canvas.draw()\n plt.pause(0.1)\n\n def connect_events_to_canvas(self):\n self.fig.canvas.mpl_connect('key_press_event', self.key_bind)\n self.fig.canvas.mpl_connect('motion_notify_event', self.mouse_cursor)\n for cursor in self.cursors:\n self.fig.canvas.mpl_connect('axes_leave_event', cursor.leave_axis)\n self.fig.canvas.mpl_connect('axes_leave_event',\n lambda _: [[self.slidel[i].set_visible(False), self.fig.canvas.draw()] for i in [0,1]])\n\n def key_bind(self, event):\n k = event.key\n if k == 'a': self.autoThreshold_plot(event, find_all=False)\n if k == 'ctrl+a': self.autoThreshold_plot(event, find_all=True)\n elif k in ['left', 'right']: self.save_molecule(event, move=True)\n elif k == 'z': self.auto_reject(event)\n elif k == 'c': self.draw.clear_all(event)\n elif k in [',', '.', '/']: self.select_edge(k)\n elif k == ' ': self.select_molecule(toggle=True)\n elif k == 'r': self.radio_manage('red')\n elif k == 'g': self.radio_manage('green')\n elif k == 'e': self.check_fret('E')\n elif k == 't': self.throw_away(event)\n\n self.fig.canvas.draw()\n\n def load_from_Molecule(self):\n if self.mol.steps is None:\n return\n else:\n s = self.mol.steps\n [self.axes[0].axvline(f, zorder=0, lw=0.65, label=\"saved r\")\n for f in s.time[s.trace == 'red'].values]\n [self.axes[0].axvline(f, zorder=0, lw=0.65, label=\"saved g\")\n for f in s.time[s.trace == 'green'].values]\n [self.axes[1].axvline(f, zorder=0, lw=0.65, label=\"saved E\")\n for f in s.time[s.trace == 'E'].values]\n\n def select_molecule(self, toggle=True, deselect=False):\n if toggle:\n self.mol.isSelected = not self.mol.isSelected\n elif deselect:\n self.mol.isSelected = False\n else:\n self.mol.isSelected = True\n title = f'Molecule: {self.mol.index} /{len(self.file.molecules)}'\n title += ' selected'*(self.mol.isSelected)\n rgba = matplotlib.colors.to_rgba\n c = rgba('g')*self.mol.isSelected + rgba('w')*(not self.mol.isSelected)\n self.axes[0].set_title(title, color=c)\n self.fig.canvas.draw()\n\n def throw_away(self, event):\n if self.mol.steps is not None:\n self.mol.steps = None\n lines = self.axes[0].get_lines() + self.axes[1].get_lines()\n [l.remove() for l in lines if l.get_label().split()[0] in ['man', 'thres', 'saved']]\n self.select_molecule(toggle=False, deselect=True)\n self.fig.canvas.draw()\n\n\n def save_molecule(self, event=None, move=True, draw=True):\n # Assume acceptance of auto matically found and manually selected dwell times\n lines = self.axes[0].get_lines() + self.axes[1].get_lines()\n lines = [l for l in lines if l.get_label().split()[0] in [\"man\", \"thres\"]]\n self.mol.kon_boolean = self.kon\n if lines:\n if len(lines) % 2 != 0:\n print(f'Found an odd number of steps. Molecule {self.mol.index} not added')\n return\n if self.mol.steps is None:\n self.mol.steps = pd.DataFrame(columns=['time', 'trace', 'state',\n 'method','thres'])\n self.mol.isSelected = True\n\n for l in lines:\n method = l.get_label().split()[0]\n thres = \"N/A\"*(method=='man') + str(self.thrsliders[0].val)*(method =='thres')\n\n d = {'time': l.get_xdata()[0], 'trace': l.get_label().split()[1],\n 'state': 1, 'method': method, 'thres': thres}\n\n self.mol.steps= self.mol.steps.append(d, ignore_index=True)\n self.mol.steps.drop_duplicates(inplace=True)\n kon = [f'{int(i)}' for i in self.mol.kon_boolean.flatten()]\n kon = ''.join(kon)\n if 'kon' not in self.mol.steps.columns:\n kon = pd.DataFrame.from_records([{\"kon\": kon}])\n self.mol.steps = pd.concat([self.mol.steps, kon], axis=1)\n self.mol.steps.fillna(value='-')\n else:\n self.mol.steps.loc[0, 'kon'] = kon\n\n if move:\n if event.inaxes == self.axnextb or event.key in ['right']:\n if self.mol_indx > len(self.file.molecules):\n self.mol_indx = 1\n else:\n self.mol_indx += 1\n elif event.inaxes == self.axprevb or event.key in ['left']:\n self.mol_indx -= 1\n\n self.plot_molecule(draw_plot=draw)\n\n def conclude_analysis(self, event=None, save=True):\n # Save current molecule if it was analyzed\n self.save_molecule(move=False)\n # Concatenate all steps dataframes that are not None\n mol_data = [mol.steps for mol in self.file.molecules if mol.steps is not None]\n if not mol_data:\n print('no data to save')\n return\n keys = [f'mol {mol.index}' for mol in self.file.molecules if mol.steps is not None]\n steps_data = pd.concat(mol_data, keys=keys)\n if save:\n print(\"steps saved\")\n writer = pd.ExcelWriter(f'{self.file.name}_steps_data.xlsx')\n steps_data.to_excel(writer, self.file.name)\n writer.save()\n\n\n def autoThreshold_plot(self, event=None, find_all=False):\n self.auto_reject()\n # Find the steps for the checked buttons\n sel = self.radio.value_selected\n color = self.red*bool(sel == \"red\") + self.green*bool(sel == \"green\") # Select red or green\n steps = self.mol.find_steps(color, threshold=self.thrsliders[0].val)\n l_props = {\"lw\": 0.75, \"zorder\": 5, \"label\": \"thres \"+sel}\n [self.axes[0].axvline(s*self.exp_time, **l_props) for s in steps[\"start_frames\"]]\n [self.axes[0].axvline(s*self.exp_time, ls=\"--\", **l_props) for s in steps[\"stop_frames\"]]\n if self.checkbfret.get_status()[0]:\n steps = self.mol.find_steps(self.fret, threshold=self.thrsliders[1].val)\n l_props = {\"lw\": 0.75, \"zorder\": 5, \"label\": \"thres E\"}\n [self.axes[1].axvline(s*self.exp_time, **l_props) for s in steps[\"start_frames\"]]\n [self.axes[1].axvline(s*self.exp_time, ls=\"--\", **l_props) for s in steps[\"stop_frames\"]]\n self.fig.canvas.draw()\n if find_all:\n for mol in self.file.molecules:\n self.autoThreshold_plot(find_all=False)\n print(f'Analyzed mol {self.mol.index} /{len(self.file.molecules)}')\n e = matplotlib.backend_bases.KeyEvent('key_press_event', self.fig.canvas, 'right')\n if mol != self.file.molecules[-1]:\n self.save_molecule(event=e, move=True, draw=False)\n elif mol == self.file.molecules[-1]:\n self.conclude_analysis()\n return\n\n def auto_reject(self, event=None):\n for ax in self.axes:\n lines = ax.get_lines()\n [l.remove() for l in lines if l.get_label().split()[0] == 'thres']\n self.fig.canvas.draw()\n\n def mouse_cursor(self, event):\n if not event.inaxes :\n self.fret_edge_lock = True\n return\n ax = event.inaxes\n if ax == self.axes[0]:\n self.fret_edge_lock = True\n self.fig.canvas.mpl_connect('motion_notify_event', self.cursors[0].mouse_move)\n self.fig.canvas.mpl_connect('motion_notify_event', self.cursors[1].mouse_move)\n\n rad = self.radio.value_selected\n i = ['red', 'green'].index(rad)\n t, I = self.cursors[i].ly.get_xdata(), self.cursors[i].lx.get_ydata()\n try:\n labels = [rf\"t = {t:.1f}, $I_R$ = {I:.0f}\", rf\"t = {t:.1f}, $I_G$ = {I:.0f}\"]\n self.cursors[i].txt.set_text(labels[i])\n except TypeError:\n pass\n self.fig.canvas.draw()\n\n elif ax == self.axes[1]:\n self.fret_edge_lock = False\n self.fig.canvas.mpl_connect('motion_notify_event', self.cursors[-1].mouse_move)\n t, E = self.cursors[-1].ly.get_xdata(), self.cursors[-1].lx.get_ydata()\n try:\n self.cursors[-1].txt.set_text(f\"t = {t:.1f}, E = {E:.2f}\")\n except TypeError:\n pass\n self.fig.canvas.draw()\n\n elif ax in self.axthrsliders:\n indx = int(ax == self.axthrsliders[1]) # gives 0 if ax is upper (I) plot, 1 if ax is lower (E) plot\n self.slidel[indx].set_ydata(self.thrsliders[indx].val)\n self.slidel[indx].set_visible(True)\n self.fig.canvas.draw()\n\n\n def radio_manage(self, label):\n def update_slider(color, label):\n s = self.thrsliders[0]\n s.poly.set_color(color); s.label.set(text=label)\n\n indx = int(label == 'green') # 1 if green, 0 if red\n self.axes[0].get_lines()[not indx].set_zorder((not indx)+2)\n self.axes[0].get_lines()[indx].set_zorder(indx)\n self.radio.circles[indx].set_color(label[0])\n self.radio.circles[not indx].set_color(\"black\")\n update_slider(label[0], r\"$I_G$\"*bool(indx)+r\"$I_R$\"*bool(not indx))\n # Check the edge colors and set to white if not selected color\n sel = self.radio.value_selected\n selcol = matplotlib.colors.to_rgba(sel[0])\n spcol = [self.axes[0].spines[s].get_edgecolor() for s in ['left','bottom','right']]\n if selcol not in spcol:\n [self.axes[0].spines[s].set_color('white') for s in ['left','bottom','right']]\n\n self.load_edges()\n\n def load_edges(self, load_fret=False): # loads edge color from kon array\n sel = self.radio.value_selected\n kons = [self.kon[int(sel == 'green')]] ; colors = [sel[0]]\n if load_fret: kons.append(self.kon[2]) ;colors.append('blueviolet')\n\n for i, kon in enumerate(kons):\n selected_sides = list(itertools.compress(['left','bottom','right'], kon))\n unselected_sides = list(itertools.compress(['left','bottom','right'], np.invert(kon)))\n [self.axes[i].spines[s].set_color(colors[i]) for s in selected_sides]\n [self.axes[i].spines[s].set_color('white') for s in unselected_sides]\n\n self.fig.canvas.draw()\n\n def select_edge(self, key):\n if self.fret_edge_lock:\n ax = self.axes[0]\n sel = self.radio.value_selected[0] # get the selected color of the radiobutton\n elif not self.fret_edge_lock:\n ax = self.axes[1]\n sel = 'blueviolet' # this refers to the fret color\n\n side = 'left'*(key == ',') + 'bottom'*(key == '.') + 'right'*(key == '/')\n\n spcolor = ax.spines[side].get_edgecolor()\n selcol, w = matplotlib.colors.to_rgba(sel), matplotlib.colors.to_rgba('white')\n c = selcol*(spcolor == w) + w*(spcolor == selcol)\n ax.spines[side].set_color(c)\n\n self.update_kon(sel, selcol, side, ax)\n\n def update_kon(self, sel=None, selcol=None, side=None, ax=None):\n i = ['r', 'g', 'blueviolet'].index(sel) # These are the colors of the sides. blueviolet refers to fret\n j = ['left', 'bottom', 'right'].index(side)\n self.kon[i][j] = (ax.spines[side].get_edgecolor() == selcol)\n\n\n def check_fret(self, label):\n if self.checkbfret.get_status()[0]:\n self.checkbfret.rectangles[0].set_color(\"b\")\n elif not self.checkbfret.get_status()[0]:\n self.checkbfret.rectangles[0].set_color(\"black\")\n self.fig.canvas.draw()\n\nclass Draw_lines(object):\n def __init__(self, fig, iplot_radio):\n self.lines = []\n self.fig = fig\n self.radio = iplot_radio # The InteractivePlot instance\n\n def onclick(self, event):\n if self.fig.canvas.manager.toolbar.mode != '': # self.fig.canvas.manager.toolmanager.active_toggle[\"default\"] is not None:\n return\n if event.inaxes is None:\n return\n ax = event.inaxes\n if event.button == 1:\n if ax == self.fig.get_axes()[0] or ax == self.fig.get_axes()[1]:\n sel = self.radio.value_selected*(ax == self.fig.get_axes()[0])\n sel = sel + \"E\"*(ax == self.fig.get_axes()[1])\n l = ax.axvline(x=event.xdata, zorder=0, lw=0.65, label=\"man \"+sel)\n self.lines.append(l)\n\n if event.button == 3 and self.lines != []:\n self.lines.pop().remove()\n self.fig.canvas.draw()\n\n def clear_all(self, event):\n while self.lines:\n self.lines.pop().remove()\n self.fig.canvas.draw()\n\n\nos.chdir(os.path.dirname(os.path.abspath(__file__)))\n#mainPath = './traces'\nmainPath = './simulations'\nexp = analysis.Experiment(mainPath, 0.1)\ni = InteractivePlot(exp.files[0])\ni.plot_initialize()\ni.plot_molecule()\n#plt.show()\n\n\n#self.fig.canvas.manager.toolmanager.add_tool('Next', NextTool)\n#self.fig.canvas.manager.toolbar.add_tool('Next', 'foo')\n#class NextTool(ToolBase, InteractivePlot):\n# '''Go to next molecule'''\n# default_keymap = 'enter, right'\n# description = 'Next Molecule 1'\n#\n# def trigger(self, *args, **kwargs):\n# pass\n# InteractivePlot.__init__(InteractivePlot, self.file)\n# print(self.mol_indx\n# )\n# InteractivePlot.plot_setup(InteractivePlot)\n# print(InteractivePlot.mol)","sub_path":"interactive_analysis_v3.1.py","file_name":"interactive_analysis_v3.1.py","file_ext":"py","file_size_in_byte":21470,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"343577337","text":"#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Dec 14 12:46:58 2018\n\n@author: bdus\n\nbaseline:\nhttps://github.com/bdus/programpractice/blob/master/mxnet/mnist_semi/supervised/experiments/finetune/train_finetune.py\n\nmean teacher are better role model\n\n\"\"\"\n\n\nimport _init_paths\n\nimport os\nimport mxnet as mx\nimport numpy as np\nfrom gluoncv import model_zoo as mzoo\n\nfrom mxnet import autograd, gluon, init, nd\nfrom mxnet.gluon import nn, loss as gloss\n\nfrom symbols import symbols\n\nbatch_size = 100\nstochastic_ratio = 0.01\n\nctx = mx.cpu() #[mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]\n\n# get data csv\ntransform=lambda data, label: (data.reshape(784,).astype(np.float32)/255, label)\ntrain_data = gluon.data.DataLoader(dataset= gluon.data.vision.MNIST(train=True,transform=transform), batch_size=100,shuffle=True,last_batch='discard')\nval_data = gluon.data.DataLoader(dataset= gluon.data.vision.MNIST(train=False,transform=transform), batch_size=100,shuffle=False)\n\n\n# network\nmodelname = 'semi_mt_simple2'\n\nbasemodel_zoo = 'simple2'\nnet_s = symbols.get_model(basemodel_zoo)\nnet_t = symbols.get_model(basemodel_zoo)\n#net_s.initialize(mx.init.Xavier(magnitude=2.24))\n#net_t.initialize(mx.init.Xavier(magnitude=2.24))\nnet_t.load_parameters( os.path.join('symbols','para','%s_t.params'%(modelname)) )\nnet_s.load_parameters( os.path.join('symbols','para','%s_s.params'%(modelname)) )\n\n#net.load_parameters(os.path.join('symbols','para','%s.params'%(modelname)))\n\n# g(x) : stochastic input augmentation function\ndef g(x):\n return x + nd.random.normal(0,stochastic_ratio,shape=x.shape)\n\n\n# loss function\nl_logistic = gloss.SoftmaxCrossEntropyLoss()\nl_l2loss = gloss.L2Loss() \nmetric = mx.metric.Accuracy()\n\ndef net_liner(net,net2,x1,x2,b):\n # net.para = x1 * net2.para + x2 * net.para + b\n for (k, v) , (k2, v2) in zip( net.collect_params().items() , net2.collect_params().items() ):\n v.set_data(v2.data() * x1 + v.data() * x2 + b)\n \n# train\ndef test():\n metric = mx.metric.Accuracy()\n for data, label in val_data:\n X = data.reshape((-1,1,28,28))\n #img = nd.concat(X,X,X,dim=1)\n output = net_t(X)\n metric.update([label], [output])\n return metric.get()\n \ndef train(epochs,alpha=0,beta=0,lr=0.1):\n #net.initialize(mx.init.Xavier(magnitude=2.24))\n print('ems_alpha = %f, consis_beta = %f ,lr = %f'%(alpha,beta,lr))\n trainer = gluon.Trainer(net_s.collect_params(),'sgd',{'learning_rate':lr})\n for epoch in range(epochs):\n metric.reset()\n for i, (X, y) in enumerate(train_data):\n X = nd.array(X)\n X = X.reshape((-1,1,28,28)) \n y = nd.array(y)\n #y = nd.one_hot(y,10) \n with autograd.record():\n y_s = net_s(g(X)) \n y_t = net_t(g(X)) \n L = l_logistic(y_s,y) + beta * l_l2loss(y_s,y_t)\n L.backward()\n trainer.step(batch_size)\n # net_t = alpha * net_t + (1-alpha) * net_s\n # alpha == 0 : copy student ; \n net_liner(net_t,net_s,1-alpha,alpha,0)\n #metric.update(y,y_t)\n #if i % 50 == 0:\n #name, acc = metric.get()\n #print('[Epoch %d Batch %d] Training: %s=%f'%(epoch, i, name, acc))\n metric.update(y,y_t)\n name, acc = metric.get()\n print('[Epoch %d] Training: %s=%f'%(epoch, name, acc))\n name, val_acc = test()\n print('[Epoch %d] Validation: %s=%f'%(epoch, name, val_acc)) \n if epoch % 10 == 0:\n net_t.save_parameters( os.path.join('symbols','para','%s_t.params'%(modelname)) )\n net_s.save_parameters( os.path.join('symbols','para','%s_s.params'%(modelname)) )\n net_t.save_parameters( os.path.join('symbols','para','%s_t.params'%(modelname)) )\n net_s.save_parameters( os.path.join('symbols','para','%s_s.params'%(modelname)) )\n\nif __name__ == '__main__':\n num_epochs = 10\n alpha = 0\n beta = 0\n train(20)\n for i in range(10):\n train(10,beta=0.1*i)\n for i in range(10):\n train(10,alpha=0.01*i, beta=0.1*i)\n for i in range(10):\n train(10,alpha=0.01*i, beta=1) \n train(100,alpha=0.1,beta=1)\n train(100,alpha=0.01,beta=1,lr=0.1)\n train(100,alpha=0.01,beta=1,lr=0.01)\n ","sub_path":"mxnet/mnist_semi/semi/train_mt.py","file_name":"train_mt.py","file_ext":"py","file_size_in_byte":4378,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"649328625","text":"\"\"\"\nUsed to make a summary plot of network properties and classification accuracy\n\"\"\"\n\nimport glob\nimport os\nimport sys\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom params import *\n\n# parameters has to be tweaked manually\nintra_mode = 'unimodal'\ninter_mode = 'unimodal'\nPATH = os.getcwd() + '/data/sum/'\nxticks = [\"M0\", \"M1\", \"M2\", \"M3\"] # xlabels are module indices\n\n\n\n\"\"\"\nmeasures figures\n\"\"\"\n# load data and melt in an appropriate format\nmeasures = pd.read_csv(PATH + 'measures_intra={}_inter={}.csv'.format(intra_mode, inter_mode), keep_default_na=False)\nmetrics = measures.columns[1:] # 4 different metrics, first column is indices\nmeasures = measures\nmeasures['network type'] = pd.Categorical(measures['network type'], categories=['noise', 'random', 'topo'])\nmeasures = measures.melt(id_vars=['module index', 'network type', 'intra type', 'intra params', 'inter type',\n 'inter params'], var_name='metric').sort_values(by=['network type', 'module index'])\nprint(measures)\n\n# plot\nsns.set(font_scale=1.5)\ng = sns.FacetGrid(measures, col=\"metric\", row=\"network type\", hue='network type',\n sharex=True, sharey='col', margin_titles=False)\ng.map_dataframe(sns.lineplot, \"module index\", 'value', style='intra params', legend='full')\n\n# ticks and labels\nylabels = [\"Pearson CC\", \"LvR\", \"spikes/sec\", \"Fano factor\"] # ylabels are units of metrics\nfor row_i in range(3):\n for ax_i, ax in enumerate(g.axes[row_i]):\n ax.set(title=None, ylabel=None)\nfor ax_i, ax in enumerate(g.axes[0]):\n ax.set(title=metrics[ax_i], ylabel=ylabels[ax_i], xticklabels=xticks, xticks=np.arange(4))\n\n# legends\ng.add_legend()\nhandles, labels = g.axes[-1][-1].get_legend_handles_labels()\ng.axes[-1][-1].legend(handles=handles[5:], labels=labels[5:], bbox_to_anchor=(1.9, 1.0))\n\n# save the figure\nplt.savefig(\"ultimate_intra={}_inter={}.pdf\".format(intra_mode, inter_mode), bbox_to_inches=\"tight\")\n\n\n\n# \"\"\"\n# training figures\n# \"\"\"\n# training = pd.read_csv(PATH + 'training_intra={}_inter={}.csv'.format(intra_mode, inter_mode), keep_default_na=False)\n# training = training.melt(id_vars=['module index', 'network type', 'intra type',\n# 'inter type', 'intra params', 'inter params'], var_name='metric')\n# print(training)\n#\n# sns.set(font_scale=2)\n# g = sns.catplot(x=\"module index\", y=\"value\", hue='intra params', data=training,\n# kind='bar', row=\"metric\", col='network type', sharey='row', margin_titles=True,\n# ci='sd', alpha=0.7)\n# for ax in g.axes[0]:\n# ax.axhline(y=0.1, color='black', linewidth=2.0)\n\n#\n# plt.savefig(\"training_intra={}_inter={}.pdf\".format(intra_mode, inter_mode), bbox_to_inches=\"tight\")\n","sub_path":"plotter.py","file_name":"plotter.py","file_ext":"py","file_size_in_byte":2756,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"9830936","text":"from my_utils import *\nimport numpy as np\nfrom datetime import datetime\n\ndef NSGA(dimension):\n c,w = gen_model(dimension)\n\n # 从中群中选择一个父节点出来(选目标函数大的)\n def BinaryTournament(P):\n size = P.shape[0]\n t1 = P[np.random.randint(size)]\n t2 = P[np.random.randint(size)]\n minus = target_function(c,w,t1) - target_function(c,w,t2)\n return t1 if np.sum(minus) > 0 else t2\n\n # 返回排好序的解集列表(大的在前小的在后)\n def fastNonDominatedSorting(P):\n rank = np.zeros(P.shape[0])\n\n S = [[] for i in range(P.shape[0])]\n n = np.zeros(P.shape[0])\n F = []\n for x in range(P.shape[0]):\n for y in range(P.shape[0]):\n if dominatedBy(c,w,P[y],P[x]):\n S[x].append(y)\n elif dominatedBy(c,w,P[x],P[y]):\n n[x] += 1\n if n[x] == 0:\n rank[x] = 1\n F.append(x)\n \n i = 0\n while len(F)>0:\n Q = []\n for x in F:\n for y in S[x]:\n n[y] -= 1\n if n[y] == 0:\n rank[y] = i+1\n Q.append(y)\n i += 1\n F = Q\n \n ret = []\n for i in range(1,P.shape[0]+1):\n tmp = []\n for j in range(len(rank)):\n if rank[j] == i:\n tmp.append(P[j])\n if len(tmp)>0:\n ret.append(tmp)\n else:\n break\n \n # 反转,使大的排在前面\n return ret[::-1]\n \n # 计算crosding分数,返回 ndarray\n def crowdingDistance(P):\n target = [target_function(c,w,p) for p in P]\n target = np.array(target).T # 转置之后大小为: target维度 x individual\n # 设置初始distance为0\n distance = np.zeros([len(P)])\n for dim in range(target.shape[0]):\n Q = target[dim,:]\n Q = np.sort(Q)\n scale = Q.max() - Q.min()\n for i in range(len(P)):\n index = int(np.where(Q == target[dim][i])[0][0])\n if index==0 or index==target.shape[1]-1:\n distance[i] = 100000 # 设为无穷大\n else:\n distance[i] += (Q[index+1] - Q[index-1]) / scale\n return distance\n\n\n # 要开始咯\n p = init_population()\n for epoch in range(Epoch):\n print(datetime.now().strftime('20%y-%m-%d %H:%M:%S'),\" NSGA-II on\",str(dimension)+'d',\" Epoch:\",epoch)\n # 父代挑选、生成子代\n offspring = []\n while len(offspring) < PopulationSize:\n p1 = BinaryTournament(p)\n p2 = BinaryTournament(p)\n \n p1 = mutation(p1)\n p2 = mutation(p2)\n\n child = crossover(p1,p2)\n offspring.append(child)\n \n new_pop = np.array(list(p)[:] + offspring[:])\n\n # N+N selection\n pops = fastNonDominatedSorting(new_pop)\n p = [] # 重置种群\n for tmp in pops: # tmp是个List\n if len(p) == PopulationSize:\n break\n elif len(p) + len(tmp) <= PopulationSize:\n p += tmp\n else:\n dis = crowdingDistance(tmp)\n k = PopulationSize-len(p)\n ids = np.argpartition(dis,-k)\n for index in ids[-k:]:\n p.append(tmp[index])\n \n p = np.array(p)\n\n print('NSGA-II Success!')\n ret = [gen_result(individual) for individual in p]\n\n print(\"Final Population Size:\",end=' ')\n for i in range(len(ret)):\n print(len(ret[i]),end=' ')\n print()\n\n return ret","sub_path":"启发式搜索与演化算法/hw4/my_NSGA_II.py","file_name":"my_NSGA_II.py","file_ext":"py","file_size_in_byte":3793,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"316455407","text":"\"\"\"\nNodule for asynchronously saving files and computing data\n\"\"\"\nimport os\nimport asyncio\nimport sys\n\n__author__ = \"Przemek\"\n\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\nprint(BASE_DIR)\nFILES_DIR = os.path.dirname(os.path.join(BASE_DIR, 'async', ''))\n\n\ndef create_directory(directory):\n \"\"\"Create directory for files, if not exist\"\"\"\n base_dir = BASE_DIR\n if not os.path.exists(directory):\n os.makedirs(directory)\n return os.path.dirname(os.path.abspath(directory)) + '\\{}'.format(directory)\n\nasync def compute_values(x, y, z):\n return (x * y) / z\n\n\nasync def save_values(x, y, z, path, name):\n while True:\n await compute_values(x, y, z)\n with open(path + '\\{}.txt'.format(name), mode='w+') as file:\n file.write(compute_values(x, y ))\n\n\ndef got_result(future):\n print(future.result())\n\n\nfiles_dir = create_directory('filess')\n# main event loop\nloop = asyncio.get_event_loop()\n\n# Create a task from coroutine\ntask = loop.create_task(save_values(2, 4, 6, files_dir, 'log'))\n\n# Please notify when task is completed\ntask.add_done_callback(got_result)\n\n# The loop will run forever\nloop.run_until_complete(task)\n","sub_path":"async/[ASYNC] calculate_data_save_to_file.py","file_name":"[ASYNC] calculate_data_save_to_file.py","file_ext":"py","file_size_in_byte":1188,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"401228397","text":"# from django.urls import path\n# from . import views\n\n\n# urlpatterns = [\n# path('location/', views.get_post_locations.as_view(), name=\"location-all\")\n# ]\nfrom django.conf.urls import url\nfrom . import views\n\n\nurlpatterns = [\n url(r'^location/(?P[0-9]+)$', # urls with details i.e /movies/(1-9)\n views.get_delete_update_location,\n name='get_delete_update_location'\n ), \n url(\n r'^locations/$', # urls list all and create new one\n views.get_post_locations,\n name='get_post_locations'\n ), \n url(\n r'^viewer', # urls list all and create new one\n views.index,\n name='index'\n ),\n # url(\n # r'^static', # urls list all and create new one\n # views.static,\n # name='static'\n # ), \n]","sub_path":"lotrlocations/locations/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":786,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"47931136","text":"import csv\nimport io\nimport json\nimport os.path\nimport shutil\n\nfrom http import HTTPStatus\nfrom http.server import ThreadingHTTPServer, BaseHTTPRequestHandler\n\n__version__ = \"0.0\"\n\n\nclass AbcHTTPRequestHandler(BaseHTTPRequestHandler):\n server_version = \"AbcHTTP/\" + __version__\n protocol_version = \"HTTP/1.1\"\n\n pages_path = \"www\"\n main_page_path = os.path.join(pages_path, \"MainPage.html\")\n\n user_accounts_page_path = os.path.join(pages_path, \"UserAccountsPage.html\")\n user_accounts_page_above_rows = None\n user_account_row_template = None\n user_accounts_page_below_rows = None\n\n user_accounts_action_result_page_path = os.path.join(pages_path, \"UserAccountsActionResult.html\")\n user_accounts_action_result_page = None\n\n request_user_login = \"/user/?login=\"\n\n # noinspection PyPep8Naming\n def do_HEAD(self):\n self.send_response(HTTPStatus.NOT_IMPLEMENTED)\n\n # noinspection PyPep8Naming\n def do_GET(self):\n request = self.path\n print(request)\n # Main page\n if request == \"/\":\n self.send_main_page()\n # User sign in\n elif request.startswith(self.request_user_login):\n self.send_user_accounts(request[len(self.request_user_login):])\n # 404\n else:\n self.send_error(HTTPStatus.NOT_FOUND)\n\n # noinspection PyPep8Naming\n def do_POST(self):\n request = self.path\n print(request)\n # read request body\n content_length = 0\n h = self.headers\n for item in h.items():\n if item[0].lower() == \"content-length\":\n content_length = int(item[1])\n break\n if content_length != 0:\n request_body = self.rfile.read(content_length).decode('utf-8')\n else:\n self.send_error(HTTPStatus.BAD_REQUEST)\n return\n print(request_body)\n\n # request_body: param=value¶m=value&parm=value\n param_dict = {}\n for param in request_body.split('&'):\n key_and_value = param.split('=')\n param_dict[key_and_value[0]] = key_and_value[1]\n \n # action with account\n if request == \"/user/accounts/action\":\n # request_body: login=user1&account-number=001&amount=100&account-action=deposit\n dbh = UserDatabaseHandler(param_dict[\"login\"])\n result_of_request_to_db = dbh.action_on_account(param_dict)\n # create new account\n elif request == \"/user/accounts/create\":\n # request_body: login=user1¤cy=RUB\n dbh = UserDatabaseHandler(param_dict[\"login\"])\n result_of_request_to_db = dbh.create_new_account(param_dict)\n # 404\n else:\n self.send_error(HTTPStatus.NOT_FOUND)\n return\n\n # send result\n self.send_user_accounts_action_result_page(param_dict['login'], result_of_request_to_db)\n\n def send_main_page(self):\n with open(self.main_page_path, 'rb') as main_page:\n self.send_response(HTTPStatus.OK)\n self.send_header(\"Content-Type\", \"text/html; charset=UTF-8\")\n self.send_header(\"Content-Length\", str(os.fstat(main_page.fileno())[6]))\n self.end_headers()\n shutil.copyfileobj(main_page, self.wfile)\n\n def __init_user_accounts_template__(self):\n with open(self.user_accounts_page_path, 'r') as f:\n start_mark = \"{begin account-row-template}\"\n end_mark = \"{end account-row-template}\"\n page_template = f.read()\n self.user_accounts_page_above_rows = page_template[:page_template.find(start_mark)]\n self.user_account_row_template = \\\n page_template[page_template.find(start_mark) + len(start_mark):page_template.find(end_mark)]\n self.user_accounts_page_below_rows = page_template[page_template.find(end_mark) + len(end_mark):]\n\n def send_user_accounts(self, username):\n dbh = UserDatabaseHandler(username)\n user_accounts = dbh.get_user_accounts('r')\n if user_accounts is None:\n dbh.create_new_user()\n\n if self.user_accounts_page_above_rows is None:\n self.__init_user_accounts_template__()\n\n accounts_rows = \"\"\n accounts_dict = json.load(user_accounts)\n user_accounts.close()\n for account_number in accounts_dict:\n a_row = self.user_account_row_template.\\\n replace(\"{number}\", account_number).\\\n replace(\"{currency}\", accounts_dict[account_number][\"currency\"]).\\\n replace(\"{amount}\", str(accounts_dict[account_number][\"amount\"])).\\\n replace(\"{username}\", username)\n accounts_rows += a_row + '\\n'\n\n response_body = \\\n self.user_accounts_page_above_rows.replace(\"{username}\", username) \\\n + accounts_rows \\\n + self.user_accounts_page_below_rows\n response_body = response_body.encode('utf-8')\n response_body_stream = io.BytesIO()\n response_body_stream.write(response_body)\n response_body_stream.seek(0)\n\n self.send_response(HTTPStatus.OK)\n self.send_header(\"Content-Type\", \"text/html; charset=UTF-8\")\n self.send_header(\"Content-Length\", str(len(response_body)))\n self.end_headers()\n shutil.copyfileobj(response_body_stream, self.wfile)\n\n def send_user_accounts_action_result_page(self, username, result_of_request_to_db):\n if self.user_accounts_action_result_page is None:\n with open(self.user_accounts_action_result_page_path, 'r') as file:\n self.user_accounts_action_result_page = file.read()\n\n response_body = self.user_accounts_action_result_page.\\\n replace(\"{username}\", username).\\\n replace(\"{result}\", result_of_request_to_db[1])\n response_body = response_body.encode('utf-8')\n response_body_stream = io.BytesIO()\n response_body_stream.write(response_body)\n response_body_stream.seek(0)\n\n self.send_response(HTTPStatus.OK)\n self.send_header(\"Content-Type\", \"text/html; charset=UTF-8\")\n self.send_header(\"Content-Length\", str(len(response_body)))\n self.end_headers()\n shutil.copyfileobj(response_body_stream, self.wfile)\n\n\n# TODO:\n# py file for class\n# request result constants outside the class\nclass UserDatabaseHandler:\n database_path = \"database\"\n index = os.path.join(database_path, \"database.index\")\n\n REQUEST_RESULT_OK = (0, \"Success\")\n REQUEST_RESULT_ERROR_WRONG_REQUEST_OR_DATABASE_ERROR = (100, \"Wrong request or internal database error\")\n REQUEST_RESULT_ERROR_WRONG_REQUEST = (200, \"Wrong request\")\n REQUEST_RESULT_ERROR_WRONG_REQUEST_INSUFFICIENT_FUNDS = (201, \"Insufficient funds\")\n REQUEST_RESULT_ERROR_WRONG_REQUEST_UNSUPPORTED_ACTION = (202, \"Unsupported action\")\n REQUEST_RESULT_ERROR_DATABASE_ERROR = (300, \"Internal database error\")\n\n def __init__(self, username):\n self.username = username\n\n def create_new_user(self):\n user_accounts_path = self.username + \".acc\"\n user_index = [self.username, user_accounts_path]\n with open(self.index, 'a') as index_file:\n csv.writer(index_file).writerow(user_index)\n with open(os.path.join(self.database_path, user_accounts_path), 'w') as account_file:\n empty_dict = {}\n json.dump(empty_dict, account_file)\n\n def get_user_accounts_path(self):\n \"\"\"\n Search the path to the user accounts info file from database index file. Return path or None.\n :return: path or empty string\n \"\"\"\n file_path = \"\"\n with open(self.index, 'r') as index_file:\n for row in csv.reader(index_file):\n if row[0] == self.username:\n file_path = os.path.join(self.database_path, row[1])\n break\n if os.path.isfile(file_path):\n return file_path\n else:\n return \"\"\n\n def get_user_accounts(self, mode):\n \"\"\"\n Open file self.get_user_accounts_path and return corresponding file object.\n If self.get_user_accounts_path return None, this function return None too.\n :param mode: have the same meaning as in built-in function open()\n :return: file object or None\n \"\"\"\n file_path = self.get_user_accounts_path()\n if file_path != \"\":\n return open(file_path, mode)\n else:\n return None\n\n def create_new_account(self, param_dict):\n \"\"\"\n Crete new account with currency param_dict['currency']\n :param param_dict: dictionary {'login': str, 'currency': str}\n :return: REQUEST_RESULT_* constant\n \"\"\"\n if 'currency' not in param_dict:\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_WRONG_REQUEST\n currency = param_dict['currency']\n\n accounts_file_path = self.get_user_accounts_path()\n if accounts_file_path == \"\":\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_WRONG_REQUEST_OR_DATABASE_ERROR\n with open(accounts_file_path, 'r') as file:\n accounts = json.load(file)\n\n acc_number_lst = []\n for acc in accounts:\n acc_number_lst.append(int(acc))\n acc_number_lst.sort()\n\n if len(acc_number_lst) != 0:\n new_account_number = acc_number_lst[-1] + 1\n if new_account_number > 999:\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_DATABASE_ERROR\n else:\n new_account_number = 1\n\n accounts[str(new_account_number).rjust(3, '0')] = {'currency': currency, 'amount': 0.0}\n with open(accounts_file_path, 'w') as file:\n json.dump(accounts, file)\n return UserDatabaseHandler.REQUEST_RESULT_OK\n\n def action_on_account(self, param_dict):\n \"\"\"\n Perform action param_dict['account-action'] (deposit/withdraw) on account\n witch number is param_dict['account-number'] and which belongs to the user param_dict['login']\n :param param_dict: dictionary {'login': str, 'account-number': str, 'amount': str, 'account-action': str}\n :return: REQUEST_RESULT_* constant\n \"\"\"\n if 'account-number' not in param_dict:\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_WRONG_REQUEST\n if 'account-action' not in param_dict:\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_WRONG_REQUEST\n if 'amount' not in param_dict:\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_WRONG_REQUEST\n\n number = param_dict['account-number']\n action = param_dict['account-action']\n amount_for_action = round(float(param_dict['amount']), 2)\n\n accounts_file_path = self.get_user_accounts_path()\n if accounts_file_path == \"\":\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_WRONG_REQUEST_OR_DATABASE_ERROR\n with open(accounts_file_path, 'r') as file:\n accounts = json.load(file)\n\n if number not in accounts:\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_DATABASE_ERROR\n acc = accounts[number]\n if 'amount' not in acc:\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_DATABASE_ERROR\n amount_in_account = acc['amount']\n\n if action == 'deposit':\n amount_in_account += amount_for_action\n elif action == 'withdraw':\n if amount_in_account >= amount_for_action:\n amount_in_account -= amount_for_action\n else:\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_WRONG_REQUEST_INSUFFICIENT_FUNDS\n else:\n return UserDatabaseHandler.REQUEST_RESULT_ERROR_WRONG_REQUEST_UNSUPPORTED_ACTION\n\n with open(accounts_file_path, 'w') as file:\n acc['amount'] = round(amount_in_account, 2)\n json.dump(accounts, file)\n return UserDatabaseHandler.REQUEST_RESULT_OK\n\n\ndef run(server_class=ThreadingHTTPServer, handler_class=AbcHTTPRequestHandler):\n server_address = ('localhost', 8080)\n httpd = server_class(server_address, handler_class)\n print(\"Python system version:\", handler_class.sys_version)\n print(\"Server version:\", handler_class.server_version)\n httpd.serve_forever()\n\n\nif __name__ == \"__main__\":\n run()\n","sub_path":"abc_server.py","file_name":"abc_server.py","file_ext":"py","file_size_in_byte":12337,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"264989943","text":"class ResultSaved():\n\tdef __init__(self):\n\t\tself.Name \t\t=''\n\t\tself.amount_Column\t=1\n\tdef Saved(self):\n\t\tcode=open('main.f08','a')\n\t\tcode.write(\"\topen(30,file='\"+self.Name+\".csv')\\n\")\n\t\tcode.write(\"\t\twrite(30,*)' NO3 NO2 NO N2O N2 S2 Sul SO4 X_Mox X_Mred X_VSS'\\n\")\n\t\tcode.write('\t\twrite(30,\"('+str(self.amount_Column)+'f20.10)\")'+self.Name+'\\n')\n\t\tcode.write('\tclose(30)\\n\\n')\n\t\tcode.close()","sub_path":"sludge/tests/ResultSave_old.py","file_name":"ResultSave_old.py","file_ext":"py","file_size_in_byte":435,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"219274057","text":"from flask import Flask, render_template\nimport requests\nimport random\n\napp = Flask(__name__)\napp.debug = True\n\ndef get_data():\n\turl = \"https://raw.githubusercontent.com/ischurov/dj-prog/master/pushkin1.json\"\n\tr = requests.get(url)\n\treturn r.json()\n\ndef get_poems_list(json):\n poems = json['poems']\n poems_list = []\n for i in range(len(poems)):\n title = poems[i]['title'][0]\n if title == \"* * *\":\n title = poems[i]['verses'][0]\n year = poems[i]['year']\n if title[-1] == \",\":\n title = title.replace(\",\", \"\")\n if title.isupper != True:\n title = title.upper()\n title = title.strip()\n string = \"{0}, {1}\".format(title, year)\n poems_list.append(string)\n return poems_list\n\n\n@app.route('/')\ndef poems_list():\n\treturn render_template(\"poems_list.html\", data = get_poems_list(get_data()))\n\n@app.route('/poem/')\ndef show_poem(n):\n raw_data = get_data()\n data = raw_data['poems']\n poem = data[n - 1]\n return render_template(\"poem.html\", div = poem, n = n)\n\n@app.route('/random')\ndef show_random():\n rand_number = random.randrange(0, 231, 1)\n raw_data = get_data()\n data = raw_data['poems']\n rand_poem = data[rand_number]\n return render_template(\"random.html\", div = rand_poem)\n\n\nif __name__ == '__main__':\n app.run()\n","sub_path":"pushkin.py","file_name":"pushkin.py","file_ext":"py","file_size_in_byte":1349,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"442583806","text":"def classifica_lista(lista):\n if len(lista)<2:\n return 'nenhum'\n y = lista[1]-lista[0]\n if w>0:\n for i in range (len(lista)-1):\n if lista[i]-lista[i+1]<=0:\n return 'nenhum'\n if w<0:\n for i in range (len(lista)-1):\n if lista[i]-lista[i+1]>=0:\n return 'nenhum'\n if w==0:\n return 'nenhum'\n elif w>0:\n return 'crescente'\n elif w<0:\n return 'decrescente'","sub_path":"backup/user_022/ch151_2020_04_13_19_40_39_601011.py","file_name":"ch151_2020_04_13_19_40_39_601011.py","file_ext":"py","file_size_in_byte":464,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"651882968","text":"#!/usr/bin/env python3\r\n\"\"\"Continuous bayesian inference\"\"\"\r\n\r\n\r\nfrom scipy import math, special\r\n\r\n\r\ndef posterior(x, n, p1, p2):\r\n \"\"\"Continuous bayesian inference\"\"\"\r\n if type(n) is not int or n <= 0:\r\n raise ValueError(\"n must be a positive integer\")\r\n if type(x) is not int or x < 0:\r\n raise ValueError(\"x must be a positive integer\")\r\n if x > n:\r\n raise ValueError(\"x cannot be greater than n\")\r\n if type(p1) is not float or p1 < 0 or p1 > 1:\r\n raise ValueError(\"p1 must be a float in the range [0, 1]\")\r\n if type(p2) is not float or p2 < 0 or p2 > 1:\r\n raise ValueError(\"p2 must be a float in the range [0, 1]\")\r\n if p2 <= p1:\r\n raise ValueError(\"p2 must be greater than p1\")\r\n return None\r\n","sub_path":"math/0x07-bayesian_prob/100-continuous.py","file_name":"100-continuous.py","file_ext":"py","file_size_in_byte":764,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"469658970","text":"# 500小时粤语导出\n# author:zhaowenhua, date:2019-10-09\n\nimport os\nimport subprocess\nimport shutil\nfrom collections import defaultdict\n\n\ndef mkdir_if_not_exists(filepath):\n if not os.path.exists(filepath):\n os.makedirs(filepath)\n\n\ndef check(src):\n # 检查wav和textgrid是否成对出现\n names = defaultdict(list)\n for path, dirs, filenames in os.walk(src):\n for filename in filenames:\n name, _ = os.path.splitext(os.path.join(path, filename))\n names[name].append(filename)\n errors = []\n for name, contents in names.items():\n if len(contents) != 2:\n errors.append(name)\n\n if len(errors) > 0:\n print(errors)\n raise NameError\n\n\ndef rename(src, dst):\n # 客户重命名\n for path, dirs, filenames in os.walk(src):\n for filename in filenames:\n if not filename.endswith('.TextGrid'):\n continue\n dirname = os.path.basename(path)\n new_name = dirname + '_' + filename\n src_path = os.path.join(path, filename)\n if not filename.startswith('0'):\n dst_path = os.path.join(dst, os.path.relpath(path, src), new_name)\n else:\n dst_path = os.path.join(dst, os.path.relpath(path, src), filename)\n mkdir_if_not_exists(os.path.dirname(dst_path))\n # 转换格式\n # transform(src_path, dst_path)\n shutil.copy(src_path.replace('.TextGrid', '.TextGrid'), dst_path.replace('.TextGrid', '.TextGrid'))\n\n\n\ndef transform(src, dst):\n cmd_line = u'bin/ffmpeg.exe -i \"{src}\" -ar 16k -ac 1 -y \"{dst}\"'.format(src=src, dst=dst)\n try:\n subprocess.check_call(cmd_line, shell=False, stderr=open(os.devnull, 'w'))\n except Exception as e:\n print(e)\n\n\nif __name__ == '__main__':\n src = r'C:\\Users\\Aorus\\Desktop\\500小时粤语自然对话生成textgrid\\第一次交付'\n dst = r'\\\\10.10.8.123\\500小时粤语自然对话语音采集\\交付数据\\1011\\500人粤语返工\\500人粤语返工\\返工交付第一批'\n check(dst)\n # rename(src, dst)","sub_path":"yueyuexport.py","file_name":"yueyuexport.py","file_ext":"py","file_size_in_byte":2103,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"546317205","text":"import asyncio\nimport requests\nimport json\n\n\ndef construct_data_for_prometheus(data):\n def construct_for_field(field, type, help, value):\n if field in data:\n return f\"# TYPE {field} {type}\\n# HELP {field} {help}\\n{field} {value}\\n\"\n return \"\"\n\n s = \"\"\n s += construct_for_field(\n \"manual_memory_avail_bytes\",\n \"gauge\",\n \"RAM bytes available.\",\n data[\"manual_memory_avail_bytes\"],\n )\n s += construct_for_field(\n \"manual_uptime\",\n \"gauge\",\n \"RAM bytes available.\",\n data[\"manual_uptime\"],\n )\n return s\n\n\ndef json_from_everything(everything):\n s = everything.decode().replace(\"'\", '\"')\n s = s.replace(\"\\t\", \"\")\n s = s.replace(\"\\n\", \"\")\n s = s.replace(\",}\", \"}\")\n s = s.replace(\",]\", \"]\")\n return json.loads(s)\n\n\ndef http_importer(ip):\n # Netis n4 (https://4pda.to/forum/index.php?showtopic=1031030&st=40)\n # RAM: 64Mb\n resp = requests.get(\n f\"http://{ip}/cgi-bin/skk_get.cgi\",\n auth=requests.auth.HTTPDigestAuth(\"guest\", \"!watr00shka4ever\"),\n )\n j = json_from_everything(resp.content)\n return {\n \"manual_memory_avail_bytes\": 64\n * 1024\n * 1024\n * (100 - int(j[\"mem\"][:-1]))\n / 100,\n \"manual_uptime\": j[\"system_uptime\"],\n }\n\n\nasync def minutely():\n while True:\n d = construct_data_for_prometheus(http_importer(\"192.168.1.254\"))\n url = \"http://192.168.1.19/pushgateway/metrics/job/manual/instance/netis-n4\"\n try:\n r = requests.post(url, data=d)\n except:\n pass\n print(r.status_code, r.text)\n await asyncio.sleep(60)\n\n\ndef stop():\n task.cancel()\n\n\nloop = asyncio.get_event_loop()\ntask = loop.create_task(minutely())\n\ntry:\n loop.run_until_complete(task)\nexcept asyncio.CancelledError:\n pass\n","sub_path":"termux/stat_importer.py","file_name":"stat_importer.py","file_ext":"py","file_size_in_byte":1856,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"51880159","text":"from statistics.statistic import Statistic\nfrom statistics.statisticrepository import StatisticRepository\n\nfrom utils import arrayToJson\n\nclass StatisticController():\n def __init__(self, connection):\n\n self.connection = connection \n self.repo = StatisticRepository(connection)\n\n def getInfoByCourseId(self, courseID):\n p = self.repo.findByCourseId(courseID)\n\n return arrayToJson(p,\"statistics\")\n\n def update(self,variabile_conditie,valori_conditie,variabile_put,valori_put):\n\n if \"courseID\" in variabile_put:\n return b'{\"code\":\"405\",\"result\":{\"error\":\"Interdictie modificare date\"}}'\n\t\t\n result = self.repo.update(variabile_conditie,valori_conditie,variabile_put,valori_put)\n\n if result == -1:\n return b'{\"code\":\"404\",\"result\":{\"error\":\"Statistica inexistenta\"}}'\n else:\n return b'{\"code\":\"200\",\"result\":{\"error\":\"none\"}}'","sub_path":"server/statistics/statisticcontroller.py","file_name":"statisticcontroller.py","file_ext":"py","file_size_in_byte":920,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"292945026","text":"# pylint: disable=missing-docstring, global-statement, invalid-name\n#\n# Copyright (C) 2017 Jonas Colmsjö, Claes Strannegård\n#\n\n\n# Imports\n# ======\n\nimport unittest\n\nfrom sea import Sea\nfrom agents import Agent\nfrom agents import Obstacle\nfrom myutils import Logging\n\n\n# Setup logging\n# =============\n\nDEBUG_MODE = True\nl = Logging('test_cachalot', DEBUG_MODE)\n\n\n# Unit tests\n# ==========\n\nlane = ('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\\n' +\n 'wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww\\n' +\n 'wwwwwsssswwwwwwwwwwwwwwwwwwwwwwwwwwsssswwwwwwwwwww\\n')\n\n# the mother and calf have separate and identical lanes\nworld = lane + lane + 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'\n\noptions = {\n 'world': [x for x in world.split(\"\\n\")]\n}\n\nclass TestCachalot(unittest.TestCase):\n\n def setUp(self):\n l.info('Testing cachalot...')\n\n def test_add_squid(self):\n l.info('test_add_squid')\n\n sea = Sea(options)\n\n for i in range(5, 4):\n self.assertTrue(len(sea.list_things_at((i, 2))) == 1)\n self.assertTrue(len(sea.list_things_at((i, 5))) == 1)\n\n for i in range(0, 50):\n self.assertTrue(isinstance(sea.list_things_at((i, 3))[0], Obstacle))\n\n\n def test_moving_cachalot(self):\n l.info('test_moving_cachalot')\n\n e = Sea(options)\n a = Agent()\n e.add_thing(a, (1, 1))\n\n self.assertTrue(a.location == (1, 1))\n e.execute_action(a, 'DiveAndForward', 1)\n self.assertTrue(a.location == (2, 2))\n\n # Should hit the wall\n e.execute_action(a, 'DiveAndForward', 1)\n self.assertTrue(a.location == (3, 2))\n\n e.execute_action(a, 'Forward', 1)\n self.assertTrue(a.location == (4, 2))\n\n e.execute_action(a, 'UpAndforward', 1)\n self.assertTrue(a.location == (5, 1))\n\n # check that the world is torus, should get back to the same location\n for _ in range(0, 50):\n e.execute_action(a, 'Forward', 1)\n\n self.assertTrue(a.location == (5, 1))\n\n def test_singing_cachalot(self):\n l.info('test_singing_cachalot')\n\n e = Sea(options)\n a = Agent()\n e.add_thing(a, (1, 1))\n\n # song at time=1 will be heard by other agents at time=2\n e.execute_ns_action(a, 'sign', 1)\n self.assertTrue(len(e.list_ns_artifacts_at(2)) == 1)\n\n\n def tearDown(self):\n l.info('...done with test_sea.')\n\n\n# Main\n# ====\n\nif __name__ == '__main__':\n unittest.main()\n","sub_path":"cachalot/test/test_sea.py","file_name":"test_sea.py","file_ext":"py","file_size_in_byte":2508,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"277369398","text":"import sys\nsys.stdin = open('input.txt','r')\n\nN, M = map(int,input().split())\nmap = []\nfor i in range(N):\n l = input()\n templ = []\n for j in range(M):\n templ.append(l[j])\n map.append(templ)\n# for a in map: print(*a)\n\ndef check(i,j,li):\n global broken\n for c in li:\n if c == '1' :\n if map[i-1][j] == '.':\n broken.append([(i-1,j),i,j])\n elif c == '2' :\n if map[i][j+1] == '.':\n broken.append([(i,j+1),i,j])\n elif c == '3' :\n if map[i+1][j] == '.':\n broken.append([(i+1,j),i,j])\n elif c == '4' :\n if map[i][j-1] == '.':\n broken.append([(i,j-1),i,j])\n\nbroken = []\nfor i in range(N):\n for j in range(M):\n point = map[i][j]\n if point == chr(124) : check(i,j,['1','3'])\n elif point == '-' : check(i,j,['2','4'])\n elif point == '+' : check(i,j,['1','2','3','4'])\n elif point == '1' : check(i,j,['2','3'])\n elif point == '2' : check(i,j,['1','2'])\n elif point == '3' : check(i,j,['1','4'])\n elif point == '4' : check(i,j,['3','4'])\n\n# print(broken)\nfix = set()\ndi, dj = [-1,0,1,0],[0,1,0,-1]\nbi, bj = broken[0][0][0],broken[0][0][1]\nif len(broken) == 4:\n print(bi,bj,'+')\nelse:\n for d in range(4):\n ci, cj = bi+di[d] , bj+dj[d]\n if 0 <= ci < N and 0 <= cj < M :\n for cij in broken:\n if cij[1] == ci and cij[2] == cj :\n fix.add(d+1)\n# print(fix)\nif fix == {1,3} : print(bi+1,bj+1,chr(124))\nelif fix == {2,4} : print(bi+1,bj+1,'-')\nelif fix == {2,3} : print(bi+1,bj+1,1)\nelif fix == {1,2} : print(bi+1,bj+1,2)\nelif fix == {1,4} : print(bi+1,bj+1,3)\nelif fix == {3,4} : print(bi+1,bj+1,4)","sub_path":"ForNovTest/BeakJoon/2931.py","file_name":"2931.py","file_ext":"py","file_size_in_byte":1758,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"102063556","text":"import sys, math\nfrom collections import defaultdict as dd\n\ndef mean(scores):\n total = 0.0\n for score in scores:\n total += score\n return total * 1.0 /len(scores)\n\ndef variance(scores, mu):\n sigma = 0\n for score in scores:\n sigma += math.pow(score-mu,2)\n return math.sqrt(sigma*1.0/len(scores))\n\nclass perUserStats:\n def __init__(self):\n self._userWise = dd(list)\n self._tweets = []\n self._scores = []\n self._correct = []\n self._mean = dd(float)\n self._variance = dd(float)\n \n def classScore(self, score):\n score = float(score)\n if score > 0:\n return '1'\n else:\n return '-1'\n \n def loadScores(self, scoredTweets, userIds):\n self._scores = [float(s.strip().split('\\t')[1]) for s in open(scoredTweets)]\n self._tweets = ['\\t'.join(s.strip().split('\\t')[3:]) for s in open(scoredTweets)]\n self._correct = map(lambda x:x[0]==self.classScore(x[1]),[s.strip().split('\\t')[:2] for s in open(scoredTweets)])\n ids = [i.strip().split('\\t')[0] for i in open(userIds)]\n for index in range(len(self._scores)):\n self._userWise[ids[index]].append((self._tweets[index],self._scores[index], self._correct[index]))\n\n def meanAnalysis(self, outputFile):\n outputFile = open(outputFile,'w')\n for user in self._userWise.iterkeys():\n totalOutside = 0\n userMean = mean(map(lambda x:x[1],self._userWise[user]))\n userVar = variance(map(lambda x:x[1], self._userWise[user]), userMean)\n userPos = []\n userNeg = []\n userMid = []\n sys.stderr.write(\"User Mean:\"+str(userMean)+\"\\t User Variance:\"+str(userVar)+'\\n') \n for (tweet,score,correct) in self._userWise[user]:\n if not correct:\n continue\n if score > userMean + 2*userVar:\n userPos.append(tweet)\n totalOutside += 1\n if score < userMean - 2 * userVar:\n userNeg.append(tweet)\n totalOutside += 1\n if (score > userMean and score < userMean + 0.5*userVar) or (score < userMean and score > userMean - 0.5*userVar):\n userMid.append(tweet)\n \n outputFile.write('-'*60+'\\n')\n outputFile.write(\"User Id:\" + str(user)+ \" has \" + str(totalOutside) + \" tweets outside of 2 StDs out of \" + str(len(self._userWise[user]))+'\\n')\n outputFile.write('-'*60+'\\n\\n')\n outputFile.write('Positive(AAE):\\n')\n outputFile.write('\\n'.join(userPos)+'\\n\\n')\n outputFile.write('Middle:\\n')\n outputFile.write('\\n'.join(userMid[:5])+'\\n\\n')\n outputFile.write('Negative(MSE):\\n')\n outputFile.write('\\n'.join(userNeg)+'\\n\\n')\n outputFile.close()\n \nif __name__ == \"__main__\":\n scoredTweets = \"/usr0/home/pgadde/Work/Ethnic/AAEness/Exp/Social/Users/Data/annotatedTweets.tsv.scored\"\n userIds = \"/usr0/home/pgadde/Work/Ethnic/AAEness/Exp/Social/Users/Data/tweetsCleaned.txt\"\n outputFile = \"/usr0/home/pgadde/Work/Ethnic/AAEness/Exp/Social/Users/Data/perUserStats.txt\"\n P = perUserStats()\n P.loadScores(scoredTweets, userIds)\n P.meanAnalysis(outputFile) \n","sub_path":"EthnicGroups/src/SocialContexts/perUserStats.py","file_name":"perUserStats.py","file_ext":"py","file_size_in_byte":3004,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"497845739","text":"########\n# Copyright (c) 2015 GigaSpaces Technologies Ltd. All rights reserved\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# * See the License for the specific language governing permissions and\n# * limitations under the License.\n\n\"\"\"\nHandles all commands that start with 'cfy plugins'\n\"\"\"\nimport tarfile\n\nfrom cloudify_cli import utils\nfrom cloudify_cli import messages\nfrom cloudify_cli.logger import get_logger\nfrom cloudify_cli.utils import print_table\nfrom cloudify_cli.exceptions import CloudifyCliError\n\n\ndef validate(plugin_path):\n logger = get_logger()\n\n logger.info(\n messages.VALIDATING_PLUGIN.format(plugin_path.name))\n if not tarfile.is_tarfile(plugin_path.name):\n raise CloudifyCliError('Archive {0} is of an unsupported archive type.'\n ' Only tar.gz is allowed'\n .format(plugin_path.name))\n with tarfile.open(plugin_path.name, 'r') as tar:\n tar_members = tar.getmembers()\n package_json_path = '{0}/package.json'.format(tar_members[0].name)\n try:\n package_member = tar.getmember(package_json_path)\n except KeyError:\n raise CloudifyCliError(messages.VALIDATING_PLUGIN_FAILED\n .format(plugin_path, 'package.json was not '\n 'found in archive'))\n try:\n tar.extractfile(package_member).read()\n except:\n raise CloudifyCliError(messages.VALIDATING_PLUGIN_FAILED\n .format(plugin_path, 'unable to read '\n 'package.json'))\n\n logger.info(messages.VALIDATING_PLUGIN_SUCCEEDED)\n\n\ndef delete(plugin_id):\n logger = get_logger()\n management_ip = utils.get_management_server_ip()\n client = utils.get_rest_client(management_ip)\n\n logger.info(messages.PLUGIN_DELETE.format(plugin_id, management_ip))\n client.plugins.delete(plugin_id)\n\n logger.info(messages.PLUGIN_DELETE_SUCCEEDED.format(plugin_id))\n\n\ndef upload(plugin_path):\n server_ip = utils.get_management_server_ip()\n utils.upload_plugin(plugin_path, server_ip,\n utils.get_rest_client(server_ip), validate)\n\n\ndef download(plugin_id,\n output):\n logger = get_logger()\n management_ip = utils.get_management_server_ip()\n logger.info(messages.DOWNLOADING_PLUGIN.format(plugin_id))\n client = utils.get_rest_client(management_ip)\n target_file = client.plugins.download(plugin_id, output)\n logger.info(messages.DOWNLOADING_PLUGIN_SUCCEEDED.format(plugin_id,\n target_file))\n\n\nfields = ['id', 'package_name', 'package_version', 'supported_platform',\n 'distribution', 'distribution_release', 'uploaded_at']\n\n\ndef get(plugin_id):\n logger = get_logger()\n management_ip = utils.get_management_server_ip()\n client = utils.get_rest_client(management_ip)\n\n logger.info(messages.PLUGINS_GET.format(plugin_id, management_ip))\n plugin = client.plugins.get(plugin_id, _include=fields)\n\n pt = utils.table(fields, data=[plugin])\n print_table('Plugin:', pt)\n\n\ndef ls():\n logger = get_logger()\n management_ip = utils.get_management_server_ip()\n client = utils.get_rest_client(management_ip)\n\n logger.info(messages.PLUGINS_LIST.format(management_ip))\n plugins = client.plugins.list(_include=fields)\n\n pt = utils.table(fields, data=plugins)\n print_table('Plugins:', pt)\n","sub_path":"cloudify_cli/commands/plugins.py","file_name":"plugins.py","file_ext":"py","file_size_in_byte":3938,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"370975286","text":"from viewer.settings_common import *\n\nROOT_URL = '/'\nSTATIC_URL_PATH = 'static/'\nSTATIC_URL = ROOT_URL + STATIC_URL_PATH\n\nHOSTNAME = 'viewer.legacysurvey.org'\n#HOSTNAME = 'spin.legacysurvey.org'\nTILE_URL = 'https://{s}.%s%s{id}/{ver}/{z}/{x}/{y}.jpg' % (HOSTNAME, ROOT_URL)\n\nDEBUG = True\n\nDEBUG_LOGGING = True\n\nREAD_ONLY_BASEDIR = True\n\nUSER_QUERY_DIR = '/tmp/viewer-user'\n\nFORCE_SCRIPT_NAME = ROOT_URL\n\nSTATIC_TILE_URL_B = 'http://{s}.imagine.legacysurvey.org/static/tiles/{id}/{ver}/{z}/{x}/{y}.jpg'\nSUBDOMAINS_B = SUBDOMAINS\n\n# no CORS -- so don't use subdomains, or specify hostname (www.legacysurvey.org vs legacysurvey.org)\nCAT_URL = '%s/{id}/{ver}/{z}/{x}/{y}.cat.json' % (ROOT_URL)\n\nENABLE_DR5 = False\nENABLE_DR9 = True\nENABLE_DR9SV = False\nENABLE_OLDER = False\n# public version\nENABLE_SCIENCE = False\n\nENABLE_CUTOUTS = False\nENABLE_SPECTRA = False\nENABLE_DESI_TARGETS = False\n","sub_path":"viewer/settings_pr.py","file_name":"settings_pr.py","file_ext":"py","file_size_in_byte":886,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"207130226","text":"#!/usr/bin/env python\nfrom __future__ import print_function\n\nimport yaml as yml\n\n\ndef maintainers(pkgfile, repofile):\n with open(pkgfile) as p, open(repofile) as r:\n pkgs, repo = yml.load(p), yml.load(r)\n\n maints = set()\n for pkg, ver in pkgs.items():\n maints.add((pkg, ver, repo[pkg][ver]['maintainer']))\n return maints\n\n\nif __name__ == '__main__':\n import argparse\n parser = argparse.ArgumentParser(description='print maintainers')\n parser.add_argument('pkgfile', type=str)\n parser.add_argument('repofile', type=str)\n args = parser.parse_args()\n maints = maintainers(args.pkgfile, args.repofile)\n for pkg, ver, maintainer in maints:\n print('%s %s %s' % (pkg, ver, maintainer))\n","sub_path":"komodo/maintainer.py","file_name":"maintainer.py","file_ext":"py","file_size_in_byte":733,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"278814087","text":"import time\nimport json\nimport hashlib\nimport logging\n\nfrom pulsar.apps import ws\nfrom builtins import isinstance\n\nLUX_CONNECTION = 'lux:connection_established'\nLUX_MESSAGE = 'lux:message'\nLUX_ERROR = 'lux:error'\n\nLOGGER = logging.getLogger('lux.sockjs')\n\n\nclass WsClient:\n '''Server side of a websocket client\n '''\n def __init__(self, transport, handler):\n request = transport.handshake\n self.transport = transport\n self.handler = handler\n self.started = time.time()\n self.address = request.get_client_address()\n session_id = request.urlargs.get('session_id')\n if not session_id:\n key = '%s - %s' % (self.address, self.started)\n session_id = hashlib.sha224(key.encode('utf-8')).hexdigest()\n self.session_id = session_id\n request.cache.websocket = self\n transport.on_open(self)\n\n def __str__(self):\n return '%s - %s' % (self.address, self.session_id)\n\n def __call__(self, channel, message):\n message = message.decode('utf-8')\n self.write(LUX_MESSAGE, channel, message)\n\n # Lux Implementation\n def write(self, event, channel=None, data=None, **kw):\n msg = {'event': event}\n if channel:\n msg['channel'] = channel\n if kw:\n if data:\n data.update(kw)\n else:\n data = kw\n if data:\n if not isinstance(data, str):\n data = json.dumps(data)\n msg['data'] = data\n array = [json.dumps(msg)]\n self.transport.write('a%s' % json.dumps(array))\n\n def error_message(self, ws, exc):\n msg = {'event': LUX_ERROR}\n code = getattr(exc, 'code', None)\n if code:\n msg['code'] = code\n msg['message'] = str(exc)\n\n\nclass LuxWs(ws.WS):\n '''Lux websocket\n '''\n pubsub = None\n\n def on_open(self, websocket):\n ws = WsClient(websocket, self)\n if self.pubsub:\n self.pubsub.add_client(ws)\n app = websocket.app\n app.fire('on_websocket_open', websocket, self)\n #\n # Send the LUX_CONNECTION event with socket id and start time\n ws.write(LUX_CONNECTION, socket_id=ws.session_id, time=ws.started)\n\n def on_message(self, websocket, message):\n ws = websocket.handshake.cache.websocket\n try:\n msg = json.loads(message)\n\n except Exception as exc:\n ws.error_message(exc)\n\n def on_close(self, websocket):\n ws = websocket.handshake.cache.websocket\n if self.pubsub:\n self.pubsub.remove_client(ws)\n LOGGER.info('closing socket %s', ws)\n","sub_path":"lux/extensions/sockjs/ws.py","file_name":"ws.py","file_ext":"py","file_size_in_byte":2663,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"316863943","text":"from typing import Union\nimport numpy as np\nimport torch\nfrom torchvision.ops import nms\n\nfrom face_detection.Config import cig\nfrom face_detection.utils.bbox import to_offsets, bbox_iou, to_real_bbox\n\ndef safe_to_numpy(data : Union[np.ndarray, torch.Tensor]) -> np.ndarray:\n if isinstance(data, np.ndarray):\n return data\n if isinstance(data, torch.Tensor):\n return data.detach().cpu().numpy()\n\ndef safe_to_tensor(data : Union[np.ndarray, torch.Tensor], use_cuda : bool = cig.use_cuda) -> torch.Tensor:\n if isinstance(data, np.ndarray):\n tensor = torch.from_numpy(data)\n if isinstance(data, torch.Tensor):\n tensor = data.detach()\n if use_cuda:\n tensor = tensor.cuda()\n return tensor\n\nclass ProposalTargetCreator(object):\n def __init__(self,\n n_sample=128,\n pos_ratio=0.25, pos_iou_thresh=0.5,\n neg_iou_thresh_hi=0.5, neg_iou_thresh_lo=0.0\n ):\n self.n_sample = n_sample # number of the bbox left\n self.pos_ratio = pos_ratio\n self.pos_iou_thresh = pos_iou_thresh\n self.neg_iou_thresh_hi = neg_iou_thresh_hi\n self.neg_iou_thresh_lo = neg_iou_thresh_lo # NOTE:default 0.1 in py-faster-rcnn\n\n def __call__(self, roi, bbox, label, loc_normalize_mean=(0., 0., 0., 0.), loc_normalize_std=(0.1, 0.1, 0.2, 0.2)):\n n_bbox, _ = bbox.shape\n roi = np.concatenate((roi, bbox), axis=0)\n\n pos_roi_per_image = np.round(self.n_sample * self.pos_ratio)\n iou = bbox_iou(roi, bbox)\n gt_assignment = iou.argmax(axis=1)\n max_iou = iou.max(axis=1)\n # Offset range of classes from [0, n_fg_class - 1] to [1, n_fg_class].\n # The label with value 0 is the background.\n gt_roi_label = label[gt_assignment] + 1\n\n # Select foreground RoIs as those with >= pos_iou_thresh IoU.\n pos_index = np.where(max_iou >= self.pos_iou_thresh)[0]\n pos_roi_per_this_image = int(min(pos_roi_per_image, pos_index.size))\n if pos_index.size > 0:\n pos_index = np.random.choice(\n pos_index, size=pos_roi_per_this_image, replace=False)\n\n # Select background RoIs as those within\n # [neg_iou_thresh_lo, neg_iou_thresh_hi).\n neg_index = np.where((max_iou < self.neg_iou_thresh_hi) &\n (max_iou >= self.neg_iou_thresh_lo))[0]\n neg_roi_per_this_image = self.n_sample - pos_roi_per_this_image\n neg_roi_per_this_image = int(min(neg_roi_per_this_image,\n neg_index.size))\n if neg_index.size > 0:\n neg_index = np.random.choice(\n neg_index, size=neg_roi_per_this_image, replace=False)\n\n # The indices that we're selecting (both positive and negative).\n keep_index = np.append(pos_index, neg_index)\n gt_roi_label = gt_roi_label[keep_index]\n gt_roi_label[pos_roi_per_this_image:] = 0 # negative labels --> 0\n sample_roi = roi[keep_index]\n\n # Compute offsets and scales to match sampled RoIs to the GTs.\n gt_roi_loc = to_offsets(sample_roi, bbox[gt_assignment[keep_index]])\n gt_roi_loc = ((gt_roi_loc - np.array(loc_normalize_mean, np.float32)) / np.array(loc_normalize_std, np.float32))\n\n return sample_roi, gt_roi_loc, gt_roi_label\n\n# assign anchor to target\nclass AnchorTargetCreator(object):\n def __init__(self, n_sample=256, pos_iou_thresh=0.7, neg_iou_thresh=0.3, pos_ratio=0.5):\n self.n_sample = n_sample\n self.pos_iou_thresh = pos_iou_thresh\n self.neg_iou_thresh = neg_iou_thresh\n self.pos_ratio = pos_ratio\n\n def __call__(self, bbox, anchor, img_size):\n img_H, img_W = img_size\n n_anchor = len(anchor)\n inside_index = _get_inside_index(anchor, img_H, img_W)\n anchor = anchor[inside_index]\n argmax_ious, label = self._create_label(\n inside_index, anchor, bbox)\n\n # compute bounding box regression targets\n loc = to_offsets(anchor, bbox[argmax_ious])\n\n # map up to original set of anchors\n label = _unmap(label, n_anchor, inside_index, fill=-1)\n loc = _unmap(loc, n_anchor, inside_index, fill=0)\n\n return loc, label\n\n def _create_label(self, inside_index, anchor, bbox):\n # label: 1 is positive, 0 is negative, -1 is dont care\n label = np.empty((len(inside_index),), dtype=np.int32)\n label.fill(-1)\n\n argmax_ious, max_ious, gt_argmax_ious = \\\n self._calc_ious(anchor, bbox, inside_index)\n\n # assign negative labels first so that positive labels can clobber them\n label[max_ious < self.neg_iou_thresh] = 0\n\n # positive label: for each gt, anchor with highest iou\n label[gt_argmax_ious] = 1\n\n # positive label: above threshold IOU\n label[max_ious >= self.pos_iou_thresh] = 1\n\n # subsample positive labels if we have too many\n n_pos = int(self.pos_ratio * self.n_sample)\n pos_index = np.where(label == 1)[0]\n if len(pos_index) > n_pos:\n disable_index = np.random.choice(\n pos_index, size=(len(pos_index) - n_pos), replace=False)\n label[disable_index] = -1\n\n # subsample negative labels if we have too many\n n_neg = self.n_sample - np.sum(label == 1)\n neg_index = np.where(label == 0)[0]\n if len(neg_index) > n_neg:\n disable_index = np.random.choice(\n neg_index, size=(len(neg_index) - n_neg), replace=False)\n label[disable_index] = -1\n\n return argmax_ious, label\n\n def _calc_ious(self, anchor, bbox, inside_index):\n # ious between the anchors and the gt boxes\n ious = bbox_iou(anchor, bbox)\n argmax_ious = ious.argmax(axis=1)\n max_ious = ious[np.arange(len(inside_index)), argmax_ious]\n gt_argmax_ious = ious.argmax(axis=0)\n gt_max_ious = ious[gt_argmax_ious, np.arange(ious.shape[1])]\n gt_argmax_ious = np.where(ious == gt_max_ious)[0]\n\n return argmax_ious, max_ious, gt_argmax_ious\n\n\ndef _unmap(data, count, index, fill=0):\n # Unmap a subset of item (data) back to the original set of items (of\n # size count)\n\n if len(data.shape) == 1:\n ret = np.empty((count,), dtype=data.dtype)\n ret.fill(fill)\n ret[index] = data\n else:\n ret = np.empty((count,) + data.shape[1:], dtype=data.dtype)\n ret.fill(fill)\n ret[index, :] = data\n return ret\n\n\ndef _get_inside_index(anchor, H, W):\n # Calc indicies of anchors which are located completely inside of the image\n # whose size is speficied.\n index_inside = np.where(\n (anchor[:, 0] >= 0) &\n (anchor[:, 1] >= 0) &\n (anchor[:, 2] <= W) &\n (anchor[:, 3] <= H)\n )[0]\n return index_inside\n\n\nclass ProposalCreator:\n def __init__(self,\n parent_model,\n nms_thresh=0.7,\n n_train_pre_nms=12000,\n n_train_post_nms=2000,\n n_test_pre_nms=6000,\n n_test_post_nms=300,\n min_size=16\n ):\n self.parent_model = parent_model\n self.nms_thresh = nms_thresh\n self.n_train_pre_nms = n_train_pre_nms\n self.n_train_post_nms = n_train_post_nms\n self.n_test_pre_nms = n_test_pre_nms\n self.n_test_post_nms = n_test_post_nms\n self.min_size = min_size\n\n def __call__(self, loc, score, anchor, img_size, scale=1.):\n if self.parent_model.training:\n n_pre_nms = self.n_train_pre_nms\n n_post_nms = self.n_train_post_nms\n else:\n n_pre_nms = self.n_test_pre_nms\n n_post_nms = self.n_test_post_nms\n\n # Convert anchors into proposal via bbox transformations.\n # roi = to_real_bbox(anchor, loc)\n roi = to_real_bbox(anchor, loc)\n \n # Clip predicted boxes to image.\n roi[:, slice(0, 4, 2)] = np.clip(roi[:, slice(0, 4, 2)], 0, img_size[0])\n roi[:, slice(1, 4, 2)] = np.clip(roi[:, slice(1, 4, 2)], 0, img_size[1])\n\n # Remove predicted boxes with either height or width < threshold.\n min_size = self.min_size * scale\n hs = roi[:, 3] - roi[:, 1]\n ws = roi[:, 2] - roi[:, 0]\n keep = np.where((hs >= min_size) & (ws >= min_size))[0]\n roi = roi[keep, :]\n score = score[keep]\n\n # Sort all (proposal, score) pairs by score from highest to lowest.\n # Take top pre_nms_topN (e.g. 6000).\n order = score.ravel().argsort()[::-1]\n if n_pre_nms > 0:\n order = order[:n_pre_nms]\n roi = roi[order, :]\n score = score[order]\n # Apply nms (e.g. threshold = 0.7).\n # Take after_nms_topN (e.g. 300).\n\n # unNOTE: somthing is wrong here!\n # TODO: remove cuda.to_gpu\n keep = nms(\n torch.from_numpy(roi).cuda() if cig.use_cuda else torch.from_numpy(roi).cpu(),\n torch.from_numpy(score).cuda() if cig.use_cuda else torch.from_numpy(score).cpu(),\n self.nms_thresh\n )\n if n_post_nms > 0:\n keep = keep[:n_post_nms]\n roi = roi[keep.cpu().numpy()]\n return roi\n","sub_path":"face_detection/utils/transform.py","file_name":"transform.py","file_ext":"py","file_size_in_byte":9249,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"390639317","text":"import tensorflow as tf\n\nimport argparse\n\n# command line arguments\nparser = argparse.ArgumentParser(\n description='Convert a checkpoint to frozen graph')\nparser.add_argument(\n '--checkpoint',\n type=str,\n default=\"model.ckpt\",\n help='The checkpoint file to be converted')\nparser.add_argument(\n '--graph',\n type=str,\n default=\"graph.pb\",\n help='Output graph name.')\n\nargs = parser.parse_args()\n\n\n# add pb extension if not present\nif not args.graph.endswith(\".pb\"):\n args.graph = args.graph + \".pb\"\n\n# initialise the saver\nsaver = tf.train.Saver()\n\nwith tf.Session() as sess:\n # restore all variables from checkpoint\n saver.restore(sess, args.checkpoint)\n\n # node that are required output nodes\n output_node_names = [\"Openpose/concat_stage7\"]\n\n # We use a built-in TF helper to export variables to constants\n output_graph_def = tf.graph_util.convert_variables_to_constants(\n sess,\n tf.get_default_graph().as_graph_def(),\n # The graph_def is used to retrieve the nodes\n output_node_names # The output node names are used to select the usefull nodes\n )\n\n # convert variables to constants\n output_graph_def = tf.graph_util.remove_training_nodes(output_graph_def)\n\n # Finally we serialize and dump the output graph to the filesystem\n output_graph = args.graph\n with tf.gfile.GFile(output_graph, \"wb\") as f:\n f.write(output_graph_def.SerializeToString())\n\n print(\"Frozen graph file {} created successfully\".format(args.graph))","sub_path":"tensorflow_checkpoint_to_graph.py","file_name":"tensorflow_checkpoint_to_graph.py","file_ext":"py","file_size_in_byte":1523,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"382004131","text":"# -*- coding: utf-8 -*-\n# @Time : 18-8-11 下午6:54\n# @Author : unicoe\n# @Email : unicoe@163.com\n# @File : draw_loss.py\n# @Software: PyCharm Community Edition\n\nimport re\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pdb\n\nrf = open(\"/home/user/PycharmProjects/setting_weight_by_learn/src/loss.txt\")\n\ncontent = rf.readline()\n\nresult = []\n\nwhile content:\n res = re.findall(\"\\[\\d.\\d+\\]\",content)\n\n if len(res) != 0:\n tmp = res[0][1:-2]\n result.append(float(tmp))\n content = rf.readline()\n\ndraw_train_all_loss = pd.Series(result, index = range(0,len(result),1))\n\n#draw\nfig = plt.figure()\nw = 25\nh = 10\nfig.set_size_inches(w,h)\n\nplt.plot(draw_train_all_loss,'r')\nplt.title(u\"all loss\")\n#plt.legend((u'accuracy'),loc='best')\nplt.xlabel(u\"iter\")\nplt.ylabel(u\"loss\")\n\n\nplt.savefig(\"/home/user/PycharmProjects/setting_weight_by_learn/img/loss_add_wegiht_08_11.png\")\n#save format maybe : format=\"eps\" or \"pdf\"\n\n\nplt.show()\n","sub_path":"some_learn/Data_Set_handle/Caltech-Dateset/setting_weight_by_learn/src/pdb_learn_draw_loss.py","file_name":"pdb_learn_draw_loss.py","file_ext":"py","file_size_in_byte":978,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"10"}
+{"seq_id":"62834005","text":"import pickle\nfrom collections import defaultdict\nfrom pathlib import Path\nfrom typing import Optional, Callable\n\nimport numpy as np\nimport torch\nimport torch.utils.data as torchdata\nfrom ignite.contrib.handlers import ProgressBar\nfrom ignite.engine import create_supervised_evaluator, Events, Engine\nfrom ignite.metrics import Accuracy, Loss\nfrom torch import nn\nfrom torch.nn import functional as F\n\nfrom alr import ALRModel\nfrom alr import MCDropout\nfrom alr.acquisition import BALD\nfrom alr.data import DataManager\nfrom alr.data import RelabelDataset, PseudoLabelDataset, UnlabelledDataset\nfrom alr.data.datasets import Dataset\nfrom alr.training import Trainer\nfrom alr.training.samplers import RandomFixedLengthSampler\nfrom alr.training.utils import EarlyStopper, PLPredictionSaver\nfrom alr.utils import eval_fwd_exp, timeop, manual_seed\nfrom alr.utils._type_aliases import _DeviceType, _Loss_fn\n\n\nclass PseudoLabelManager:\n def __init__(\n self,\n pool: UnlabelledDataset,\n model: nn.Module,\n threshold: float,\n log_dir: Optional[str] = None,\n device: _DeviceType = None,\n **kwargs,\n ):\n bs = kwargs.pop(\"batch_size\", 1024)\n shuffle = kwargs.pop(\"shuffle\", False)\n assert not shuffle\n self._pool = pool\n self._loader = torchdata.DataLoader(\n pool, batch_size=bs, shuffle=shuffle, **kwargs\n )\n self._model = model\n self._log_dir = log_dir\n self._device = device\n self._threshold = threshold\n self.acquired_sizes = []\n\n def attach(self, engine: Engine):\n engine.add_event_handler(Events.STARTED, self._initialise)\n # could also be EPOCH_COMPLETED since there's only one iteration in each epoch\n engine.add_event_handler(Events.ITERATION_COMPLETED, self._load_labels)\n\n def _load_labels(self, engine: Engine):\n evaluator = create_supervised_evaluator(\n self._model, metrics=None, device=self._device\n )\n plc = PseudoLabelCollector(\n self._threshold,\n log_dir=self._log_dir,\n )\n plc.attach(evaluator, batch_size=self._loader.batch_size)\n plc.global_step_from_engine(engine)\n evaluator.run(self._loader)\n indices, pseudo_labels = (\n evaluator.state.pl_indices.cpu().numpy(),\n evaluator.state.pl_plabs.cpu().numpy(),\n )\n self.acquired_sizes.append(indices.shape[0])\n if indices.shape[0]:\n confident_points = torchdata.Subset(self._pool, indices)\n if self._pool.debug:\n # pool returns target labels too\n engine.state.pseudo_labelled_dataset = RelabelDataset(\n confident_points, pseudo_labels\n )\n else:\n engine.state.pseudo_labelled_dataset = PseudoLabelDataset(\n confident_points, pseudo_labels\n )\n else:\n engine.state.pseudo_labelled_dataset = None\n\n @staticmethod\n def _initialise(engine: Engine):\n engine.state.pseudo_labelled_dataset = None\n\n\nclass PseudoLabelCollector:\n def __init__(\n self,\n threshold: float,\n log_dir: Optional[str] = None,\n pred_transform: Callable[[torch.Tensor], torch.Tensor] = lambda x: x.exp(),\n ):\n self._indices = []\n self._plabs = []\n self._pred_transform = pred_transform\n self._output_transform = lambda x: x\n self._thresh = threshold\n self._targets = []\n self._preds = []\n if log_dir:\n self._saver = PLPredictionSaver(log_dir, pred_transform=pred_transform)\n else:\n self._saver = None\n self._batch_size = None\n\n def _parse(self, engine: Engine):\n preds, targets = self._output_transform(engine.state.output)\n # state.iteration starts with 1\n iteration = engine.state.iteration - 1\n offset = iteration * self._batch_size\n with torch.no_grad():\n preds = self._pred_transform(preds)\n preds_max, plabs = torch.max(preds, dim=-1)\n mask = torch.nonzero(preds_max >= self._thresh).flatten()\n if mask.shape[0]:\n # plabs = [N,]\n self._plabs.append(plabs[mask])\n self._indices.append(mask + offset)\n\n def _flush(self, engine: Engine):\n if self._indices and self._plabs:\n engine.state.pl_indices = torch.cat(self._indices)\n engine.state.pl_plabs = torch.cat(self._plabs)\n else:\n engine.state.pl_indices = torch.Tensor([])\n engine.state.pl_plabs = torch.Tensor([])\n self._indices = []\n self._plabs = []\n\n def attach(self, engine: Engine, batch_size: int, output_transform=lambda x: x):\n r\"\"\"\n\n Args:\n engine (Engine): ignite engine object\n batch_size (int): engine's batch size\n output_transform (Callable): if engine.state.output is not (preds, target),\n then output_transform should return aforementioned tuple.\n\n Returns:\n NoneType: None\n \"\"\"\n engine.add_event_handler(Events.ITERATION_COMPLETED, self._parse)\n engine.add_event_handler(Events.COMPLETED, self._flush)\n self._output_transform = output_transform\n self._batch_size = batch_size\n if self._saver:\n self._saver.attach(engine, output_transform=output_transform)\n\n def global_step_from_engine(self, engine: Engine):\n if self._saver:\n self._saver.global_step_from_engine(engine)\n\n\ndef _update_dataloader(\n loader: torchdata.DataLoader,\n dataset: torchdata.Dataset,\n sampler: Optional[torchdata.Sampler] = None,\n):\n # attributes that usually go in dataloader's constructor\n attrs = [k for k in loader.__dict__.keys() if not k.startswith(\"_\")]\n drop = [\"dataset\", \"sampler\", \"batch_sampler\", \"dataset_kind\"]\n kwargs = {k: getattr(loader, k) for k in attrs if k not in drop}\n if not isinstance(\n loader.sampler,\n (\n torchdata.SequentialSampler,\n torchdata.RandomSampler,\n RandomFixedLengthSampler,\n ),\n ):\n raise ValueError(\n f\"Only sequential, random, and random fixed length samplers \"\n f\"are supported in _update_dataloader\"\n )\n kwargs[\"dataset\"] = dataset\n # Sequential and Random will be automatically determined if sampler is None (depending on shuffle)\n kwargs[\"sampler\"] = sampler\n return torchdata.DataLoader(**kwargs)\n\n\ndef create_pseudo_label_trainer(\n model: ALRModel,\n loss: _Loss_fn,\n optimiser: str,\n train_loader: torchdata.DataLoader,\n val_loader: torchdata.DataLoader,\n pseudo_label_manager: PseudoLabelManager,\n rfls_len: Optional[int] = None,\n patience: Optional[int] = None,\n reload_best: Optional[bool] = None,\n epochs: Optional[int] = 1,\n device: _DeviceType = None,\n *args,\n **kwargs,\n):\n def _step(engine: Engine, _):\n # update loader accordingly: if pld is not none, concatenate them\n new_loader = train_loader\n pld = engine.state.pseudo_labelled_dataset\n if pld is not None:\n # only reset weights if engine.state.epoch != 1\n model.reset_weights()\n train_ds = torchdata.ConcatDataset((train_loader.dataset, pld))\n # update dataloader's dataset attribute\n if rfls_len:\n new_loader = _update_dataloader(\n train_loader,\n train_ds,\n RandomFixedLengthSampler(train_ds, length=rfls_len, shuffle=True),\n )\n else:\n new_loader = _update_dataloader(train_loader, train_ds)\n else:\n assert engine.state.epoch == 1\n\n # begin supervised training\n trainer = Trainer(\n model,\n loss,\n optimiser,\n patience,\n reload_best,\n device=device,\n *args,\n **kwargs,\n )\n history = trainer.fit(\n new_loader,\n val_loader=val_loader,\n epochs=epochs,\n )\n\n # if early stopping was applied w/ patience, then the actual train acc and loss should be\n # -patience from the final loss/acc UNLESS we reached the maximum number of epochs.\n if patience and len(history[\"train_loss\"]) != epochs:\n return history[\"train_loss\"][-patience], history[\"train_acc\"][-patience]\n return history[\"train_loss\"][-1], history[\"train_acc\"][-1]\n\n e = Engine(_step)\n pseudo_label_manager.attach(e)\n return e\n\n\nclass EphemeralTrainer:\n def __init__(\n self,\n model: ALRModel,\n pool: UnlabelledDataset,\n loss: _Loss_fn,\n optimiser: str,\n threshold: float,\n random_fixed_length_sampler_length: Optional[int] = None,\n log_dir: Optional[str] = None,\n patience: Optional[int] = None,\n reload_best: Optional[bool] = False,\n device: _DeviceType = None,\n pool_loader_kwargs: Optional[dict] = {},\n *args,\n **kwargs,\n ):\n self._pool = pool\n self._model = model\n self._loss = loss\n self._optimiser = optimiser\n self._patience = patience\n self._reload_best = reload_best\n self._device = device\n self._args = args\n self._kwargs = kwargs\n self._threshold = threshold\n self._log_dir = log_dir\n self._pool_loader_kwargs = pool_loader_kwargs\n self._rfls_len = random_fixed_length_sampler_length\n\n def fit(\n self,\n train_loader: torchdata.DataLoader,\n val_loader: Optional[torchdata.DataLoader] = None,\n iterations: Optional[int] = 1,\n epochs: Optional[int] = 1,\n ):\n if self._patience and val_loader is None:\n raise ValueError(\n \"If patience is specified, then val_loader must be provided in .fit().\"\n )\n\n val_evaluator = create_supervised_evaluator(\n self._model,\n metrics={\"acc\": Accuracy(), \"loss\": Loss(self._loss)},\n device=self._device,\n )\n\n history = defaultdict(list)\n pbar = ProgressBar()\n\n def _log_metrics(engine: Engine):\n # train_loss and train_acc are moving averages of the last epoch\n # in the supervised training loop\n train_loss, train_acc = engine.state.output\n history[f\"train_loss\"].append(train_loss)\n history[f\"train_acc\"].append(train_acc)\n pbar.log_message(\n f\"Eph. iteration {engine.state.epoch}/{engine.state.max_epochs}\\n\"\n f\"\\ttrain acc = {train_acc}, train loss = {train_loss}\"\n )\n if val_loader is None:\n return # job done\n # val loader - save to history and print metrics. Also, add handlers to\n # evaluator (e.g. early stopping, model checkpointing that depend on val_acc)\n metrics = val_evaluator.run(val_loader).metrics\n\n history[f\"val_acc\"].append(metrics[\"acc\"])\n history[f\"val_loss\"].append(metrics[\"loss\"])\n pbar.log_message(\n f\"\\tval acc = {metrics['acc']}, val loss = {metrics['loss']}\"\n )\n\n pseudo_label_manager = PseudoLabelManager(\n pool=self._pool,\n model=self._model,\n threshold=self._threshold,\n log_dir=self._log_dir,\n device=self._device,\n **self._pool_loader_kwargs,\n )\n trainer = create_pseudo_label_trainer(\n model=self._model,\n loss=self._loss,\n optimiser=self._optimiser,\n train_loader=train_loader,\n val_loader=val_loader,\n pseudo_label_manager=pseudo_label_manager,\n rfls_len=self._rfls_len,\n patience=self._patience,\n reload_best=self._reload_best,\n epochs=epochs,\n device=self._device,\n *self._args,\n **self._kwargs,\n )\n # output of trainer are running averages of train_loss and train_acc (from the\n # last epoch of the supervised trainer)\n pbar.attach(trainer, output_transform=lambda x: {\"loss\": x[0], \"acc\": x[1]})\n if val_loader is not None and self._patience:\n es = EarlyStopper(\n self._model, self._patience, trainer, key=\"acc\", mode=\"max\"\n )\n es.attach(val_evaluator)\n trainer.add_event_handler(Events.EPOCH_COMPLETED, _log_metrics)\n trainer.run(\n range(iterations),\n max_epochs=iterations,\n epoch_length=1,\n )\n if val_loader is not None and self._patience and self._reload_best:\n es.reload_best()\n\n history[\"train_size\"] = np.array(pseudo_label_manager.acquired_sizes) + len(\n train_loader.dataset\n )\n return history\n\n def evaluate(self, data_loader: torchdata.DataLoader) -> dict:\n evaluator = create_supervised_evaluator(\n self._model,\n metrics={\"acc\": Accuracy(), \"loss\": Loss(self._loss)},\n device=self._device,\n )\n return evaluator.run(data_loader).metrics\n\n\ndef main(threshold: float, b: int):\n manual_seed(42)\n device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n kwargs = dict(num_workers=4, pin_memory=True)\n\n BATCH_SIZE = 64\n REPS = 6\n ITERS = 24\n VAL_SIZE = 5_000\n MIN_TRAIN_LEN = 12_500\n SSL_ITERATIONS = 200\n EPOCHS = 200\n\n accs = defaultdict(list)\n\n template = f\"thresh_{threshold}_b_{b}\"\n calib_metrics = Path(\"calib_metrics\") / template\n saved_models = Path(\"saved_models\") / template\n metrics = Path(\"metrics\") / template\n calib_metrics.mkdir(parents=True)\n saved_models.mkdir(parents=True)\n metrics.mkdir(parents=True)\n\n train, pool, test = Dataset.MNIST.get_fixed()\n val, pool = torchdata.random_split(pool, (VAL_SIZE, len(pool) - VAL_SIZE))\n pool = UnlabelledDataset(pool)\n test_loader = torchdata.DataLoader(test, batch_size=512, shuffle=False, **kwargs)\n val_loader = torchdata.DataLoader(val, batch_size=512, shuffle=False, **kwargs)\n\n for r in range(1, REPS + 1):\n model = MCDropout(Dataset.MNIST.model, forward=20, fast=True).to(device)\n bald = BALD(eval_fwd_exp(model), device=device, batch_size=512, **kwargs)\n dm = DataManager(train, pool, bald)\n dm.reset() # to reset pool\n print(f\"=== repeat #{r} of {REPS} ===\")\n for i in range(1, ITERS + 1):\n # don't reset weights: let ephemeral trainer take care of it\n # since we're collecting calibration metrics,\n # make pool return targets too. (i.e. debug mode)\n with dm.unlabelled.tmp_debug():\n trainer = EphemeralTrainer(\n model,\n dm.unlabelled,\n F.nll_loss,\n \"Adam\",\n threshold=threshold,\n random_fixed_length_sampler_length=MIN_TRAIN_LEN,\n log_dir=(calib_metrics / f\"rep_{r}\" / f\"iter_{i}\"),\n patience=3,\n reload_best=True,\n device=device,\n pool_loader_kwargs=kwargs,\n )\n train_loader = torchdata.DataLoader(\n dm.labelled,\n batch_size=BATCH_SIZE,\n sampler=RandomFixedLengthSampler(\n dm.labelled, MIN_TRAIN_LEN, shuffle=True\n ),\n **kwargs,\n )\n with timeop() as t:\n history = trainer.fit(\n train_loader,\n val_loader,\n iterations=SSL_ITERATIONS,\n epochs=EPOCHS,\n )\n # eval on test set\n test_metrics = trainer.evaluate(test_loader)\n accs[dm.n_labelled].append(test_metrics[\"acc\"])\n print(f\"-- Iteration {i} of {ITERS} --\")\n print(\n f\"\\ttrain: {dm.n_labelled}; pool: {dm.n_unlabelled}\\n\"\n f\"\\t[test] acc: {test_metrics['acc']}; time: {t}\"\n )\n\n # save stuff\n with open(metrics / f\"rep_{r}_iter_{i}.pkl\", \"wb\") as fp:\n payload = {\n \"history\": history,\n \"test_metrics\": test_metrics,\n \"labelled_classes\": dm.unlabelled.labelled_classes,\n \"labelled_indices\": dm.unlabelled.labelled_indices,\n }\n pickle.dump(payload, fp)\n torch.save(model.state_dict(), saved_models / f\"rep_{r}_iter_{i}.pth\")\n\n # finally, acquire points\n dm.acquire(b)\n\n with open(f\"{template}_accs.pkl\", \"wb\") as fp:\n pickle.dump(accs, fp)\n\n\nif __name__ == \"__main__\":\n main(threshold=0.95, b=10)\n","sub_path":"docs/source/experiments/old/ephemeral/mnist/legacy/dont_reset_weights_more_iters/train.py","file_name":"train.py","file_ext":"py","file_size_in_byte":17138,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"242970747","text":"import numpy as np\nimport pandas as pd\n\ndf = pd.read_csv('../../data/spx2014to2017.csv', index_col = 'date', parse_dates = True)\ndf.drop(df.index[0], inplace = True)\n# df.count() # this could help\ndf.dropna(inplace=True, axis = 1)\n\n# risk free rate\nrisk_free = df[\"USGG10YR Index\"] / 100\n\n# forward looking YoY returns\nspx = df[\"SPX INDEX\"]\nyoy = spx.pct_change(250).shift(-250)\nyoy.dropna(inplace = True)\n\n# Forward looking annual returns for all index members\nann_returns = df.pct_change(250).shift(-250)\nann_returns.dropna(inplace = True, axis = 0, how = 'all')\nann_returns.drop(\"USGG10YR Index\", inplace=True, axis = 1)\n\n\n# subtract risk free returns as qutoed on the day from forward annual stock returns to get excess annual returns\nexcess_returns = ann_returns.subtract(risk_free, axis = 0)\nexcess_returns.drop('SPX INDEX', axis = 1, inplace=True)\nexcess_returns.dropna(inplace=True, how='all')\n\n\n# Save that shit so I don't have to do it again\n#excess_returns.to_csv('../data/excess_returns.csv')\n\n# Get the Factor data and set categories as large and small market cap - I just took the most recent market cap, but I think historical market cap (based on the first day of forward looking returns) should be used. I'm not sure if using an updating market cap would be beneficial since it's based on the price.\n# using that sweet French sauce\nfactor_returns = pd.read_csv('../../data/factors.csv', parse_dates=True, index_col = 'date')\n\n\n# this might be wrong because these are daily returns - not annual returns\n# you want the annual geometric average of factor returns lagged by one year\nnew_data = excess_returns.merge(factor_returns, left_index=True, right_index=True)","sub_path":"chap3/fm/data_prep.py","file_name":"data_prep.py","file_ext":"py","file_size_in_byte":1680,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"443150896","text":"from django.contrib import admin\n#from django.db import models as django_models\n#from django.forms.extras.widgets import SelectDateWidget\nfrom gigs.gig_registry import models\n\n\nclass VenueAdmin(admin.ModelAdmin):\n prepopulated_fields = {'uid':('name',)}\n\nclass MusicianInline(admin.TabularInline):\n fields = ['musician', 'started', 'finished', 'date_of_birth', 'instrument',]\n model = models.Musician\n\n#class MusicianAdmin(admin.ModelAdmin):\n# inlines = [MembershipInline]\n\nclass MembershipAdmin(admin.ModelAdmin):\n pass#inlines = [MusicianInline]\n\nclass MembershipInline(admin.TabularInline):\n model = models.BandMembership\n verbose_name = \"Band Member\"\n verbose_name_plural = \"Band Members\"\n fields = ['musician', 'started', 'finished']\n\n extra = 3\n\nclass BandAdmin(admin.ModelAdmin):\n inlines = [MembershipInline]\n\nclass BandInline(admin.TabularInline):\n model = models.Gig.bands.through\n\n\nclass GigAdmin(admin.ModelAdmin):\n fieldsets = [\n (None, {'fields': ['name', 'venue','bands', 'cost']}),\n ('Dates', {'fields': ['start', 'finish']}),\n ('Meta', {'fields': ['comment']}),\n ]\n \n filter_horizontal = ('bands',)\n list_filter = ('venue', 'bands',)\n\nclass VenueAdmin(admin.ModelAdmin):\n list_display = ['name', 'location']\n\nclass LocationAdmin(admin.ModelAdmin):\n #fields = ['street_address', 'suburb', 'state', 'post_code', 'country', 'lat', 'lon']\n fieldsets = [\n ('Address', \n {'fields': \n [\n 'street_address', \n 'suburb', \n 'state', \n 'post_code', \n 'country', \n ]\n }\n ),\n ('Co-ordinates',\n {'fields':\n [\n 'lat',\n 'lon',\n ]\n }\n )\n ]\n\nadmin.site.register(models.Band, BandAdmin)\nadmin.site.register(models.Musician)\nadmin.site.register(models.Owner)\nadmin.site.register(models.Venue, VenueAdmin)\nadmin.site.register(models.Location, LocationAdmin)\nadmin.site.register(models.Genre)\nadmin.site.register(models.Gig, GigAdmin)\nadmin.site.register(models.BandMembership, MembershipAdmin)\n","sub_path":"gigs/gig_registry/admin.py","file_name":"admin.py","file_ext":"py","file_size_in_byte":2334,"program_lang":"python","lang":"en","doc_type":"code","dataset":"code-starcoder2","pt":"11"}
+{"seq_id":"228127828","text":"from flask import Flask, request, redirect, render_template\nimport cgi\n\napp = Flask(__name__)\napp.config['DEBUG'] = True # Displays runtime errors in the browser, too.\n\n# A list of movies that nobody should have to watch.\nterrible_movies = [\n \"Gigli\",\n \"Star Wars Episode 1: Attack of the Clones\",\n \"Paul Blart: Mall Cop 2\",\n \"Nine Lives\",\n \"Mission to Mars\"\n]\n\ndef get_current_watchlist():\n # Returns user's current watchlist -- hard coded for now\n return [\"Heartbreakers\", \"She-Devil\", \"Jackie Brown\", \"Heathers\", \"Death Becomes Her\"]\n\n# TODO ---\n# Modify \"My Watchlist\" so you eliminate the need for the \"crossoff\" form in\n# edit.html. Now, next to every list item/movie listed in \"My Watchlist\" you\n# should display a button that says \"I Watched It!\".\n# Clicking the button will result in a confirmation message that the movie has\n# been watched. So, you'll need to add a form withint the
tags of \"My\n# Watchlist\". Once this is done, delete the \"crossoff\" form in edit.html\n\n# TODO ---\n# Make a ratings.html template which lists all movies that have been crossed off.\n# It should have a header of
Movies I Have Watched
\n# Add a form for rating EACH list item/movie using a