{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadjust a value to be within a range 0 - 360.", "response": "def adjust_to_360(val, key):\n \"\"\"\n Take in a value and a key. If the key is of the type:\n declination/longitude/azimuth/direction, adjust it to be within\n the range 0-360 as required by the MagIC data model\n \"\"\"\n CheckDec = ['_dec', '_lon', '_azimuth', 'dip_direction']\n adjust = False\n for dec_key in CheckDec:\n if dec_key in key:\n if key.endswith(dec_key) or key.endswith('_'):\n adjust = True\n if not val:\n return ''\n elif not adjust:\n return val\n elif adjust:\n new_val = float(val) % 360\n if new_val != float(val):\n print('-I- adjusted {} {} to 0=>360.: {}'.format(key, val, new_val))\n return new_val"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef adjust_all_to_360(dictionary):\n for key in dictionary:\n dictionary[key] = adjust_to_360(dictionary[key], key)\n return dictionary", "response": "Adjusts all declination, longitude, and azimuth/direction values of all items in a dictionary to be within 0 - 360"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning lists of declination, inclination and intensities for lat/lon grid for desired model and date. 
Parameters: _________________ date = Required date in decimal years (Common Era, negative for Before Common Era) Optional Parameters: ______________ mod = model to use ('arch3k','cals3k','pfm9k','hfm10k','cals10k.2','shadif14k','cals10k.1b','custom') file = l m g h formatted file for custom model lon_0 : central longitude for Hammer projection alt = altitude resolution = ['low','high'] default is low Returns: ______________ Bdec=list of declinations Binc=list of inclinations B = list of total field intensities in nT Br = list of radial field intensities lons = list of longitudes evaluated lats = list of latitudes evaluated", "response": "def do_mag_map(date, lon_0=0, alt=0, file=\"\", mod=\"cals10k\",resolution='low'):\n \"\"\"\n returns lists of declination, inclination and intensities for lat/lon grid for\n desired model and date.\n\n Parameters:\n _________________\n date = Required date in decimal years (Common Era, negative for Before Common Era)\n\n Optional Parameters:\n ______________\n mod = model to use ('arch3k','cals3k','pfm9k','hfm10k','cals10k.2','shadif14k','cals10k.1b','custom')\n file = l m g h formatted file for custom model\n lon_0 : central longitude for Hammer projection\n alt = altitude\n resolution = ['low','high'] default is low\n\n Returns:\n ______________\n Bdec=list of declinations\n Binc=list of inclinations\n B = list of total field intensities in nT\n Br = list of radial field intensities\n lons = list of longitudes evaluated\n lats = list of latitudes evaluated\n\n \"\"\"\n if resolution=='low':\n incr = 10 # we can vary this to match the resolution of the model\n elif resolution=='high':\n incr = 2 # we can vary this to match the resolution of the model\n if lon_0 == 180:\n lon_0 = 179.99\n if lon_0 > 180:\n lon_0 = lon_0-360.\n # get some parameters for our arrays of lat/lon\n lonmax = (lon_0 + 180.) 
% 360 + incr\n lonmin = (lon_0 - 180.)\n latmax = 90 + incr\n # make a 1D array of longitudes (like elons)\n lons = np.arange(lonmin, lonmax, incr)\n # make a 1D array of longitudes (like elats)\n lats = np.arange(-90, latmax, incr)\n # set up some containers for the field elements\n B = np.zeros((len(lats), len(lons)))\n Binc = np.zeros((len(lats), len(lons)))\n Bdec = np.zeros((len(lats), len(lons)))\n Brad = np.zeros((len(lats), len(lons)))\n if mod == 'custom' and file != '':\n gh = []\n lmgh = np.loadtxt(file).transpose()\n gh.append(lmgh[2][0])\n for i in range(1, lmgh.shape[1]):\n gh.append(lmgh[2][i])\n if lmgh[1][i] != 0:\n gh.append(lmgh[3][i])\n for j in range(len(lats)): # step through the latitudes\n for i in range(len(lons)): # and the longitudes\n # get the field elements\n if mod == 'custom':\n x, y, z, f = docustom(lons[i], lats[j], alt, gh)\n else:\n x, y, z, f = doigrf(\n lons[i], lats[j], alt, date, mod=mod, file=file)\n # turn them into polar coordinates\n Dec, Inc, Int = cart2dir([x, y, z])\n if mod != 'custom':\n # convert the string to microtesla (from nT)\n B[j][i] = Int * 1e-3\n else:\n B[j][i] = Int # convert the string to microtesla (from nT)\n Binc[j][i] = Inc # store the inclination value\n if Dec > 180:\n Dec = Dec-360.\n Bdec[j][i] = Dec # store the declination value\n if mod != 'custom':\n Brad[j][i] = z*1e-3\n else:\n Brad[j][i] = z\n return Bdec, Binc, B, Brad, lons, lats"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef doeqdi(x, y, UP=False):\n xp, yp = y, x # need to switch into geographic convention\n r = np.sqrt(xp**2+yp**2)\n z = 1.-r**2\n t = np.arcsin(z)\n if UP == 1:\n t = -t\n p = np.arctan2(yp, xp)\n dec, inc = np.degrees(p) % 360, np.degrees(t)\n return dec, inc", "response": "Takes digitized x y data and returns the dec inc assuming an upper hemisphere projection\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 
to\nseparate set of directions into two lists of nested dec inc pairs.", "response": "def separate_directions(di_block):\n \"\"\"\n Separates set of directions into two modes based on principal direction\n\n Parameters\n _______________\n di_block : block of nested dec,inc pairs\n\n Return\n mode_1_block,mode_2_block : two lists of nested dec,inc pairs\n \"\"\"\n ppars = doprinc(di_block)\n di_df = pd.DataFrame(di_block) # turn into a data frame for easy filtering\n di_df.columns = ['dec', 'inc']\n di_df['pdec'] = ppars['dec']\n di_df['pinc'] = ppars['inc']\n di_df['angle'] = angle(di_df[['dec', 'inc']].values,\n di_df[['pdec', 'pinc']].values)\n mode1_df = di_df[di_df['angle'] <= 90]\n mode2_df = di_df[di_df['angle'] > 90]\n mode1 = mode1_df[['dec', 'inc']].values.tolist()\n mode2 = mode2_df[['dec', 'inc']].values.tolist()\n return mode1, mode2"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef dovandamme(vgp_df):\n vgp_df['delta'] = 90.-vgp_df['vgp_lat'].values\n ASD = np.sqrt(np.sum(vgp_df.delta**2)/(vgp_df.shape[0]-1))\n A = 1.8 * ASD + 5.\n delta_max = vgp_df.delta.max()\n while delta_max > A:\n delta_max = vgp_df.delta.max()\n if delta_max < A:\n return vgp_df, A, ASD\n vgp_df = vgp_df[vgp_df.delta < delta_max]\n ASD = np.sqrt(np.sum(vgp_df.delta**2)/(vgp_df.shape[0]-1))\n A = 1.8 * ASD + 5.", "response": "This function is used to determine the S_b value for VGPs using the Vandamme method."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef scalc_vgp_df(vgp_df, anti=0, rev=0, cutoff=180., kappa=0, n=0, spin=0, v=0, boot=0, mm97=0, nb=1000):\n vgp_df['delta'] = 90.-vgp_df.vgp_lat.values\n # filter by cutoff, kappa, and n\n vgp_df = vgp_df[vgp_df.delta <= cutoff]\n vgp_df = vgp_df[vgp_df.dir_k >= kappa]\n vgp_df = vgp_df[vgp_df.dir_n_samples >= n]\n if spin: # do transformation to pole\n Pvgps = vgp_df[['vgp_lon', 'vgp_lat']].values\n ppars = 
doprinc(Pvgps)\n Bdirs = np.full((Pvgps.shape[0]), ppars['dec']-180.)\n Bdips = np.full((Pvgps.shape[0]), 90.-ppars['inc'])\n Pvgps = np.column_stack((Pvgps, Bdirs, Bdips))\n lons, lats = dotilt_V(Pvgps)\n vgp_df['vgp_lon'] = lons\n vgp_df['vgp_lat'] = lats\n vgp_df['delta'] = 90.-vgp_df.vgp_lat\n if anti:\n print('flipping reverse')\n vgp_rev = vgp_df[vgp_df.vgp_lat < 0]\n vgp_norm = vgp_df[vgp_df.vgp_lat >= 0]\n vgp_anti = vgp_rev\n vgp_anti['vgp_lat'] = -vgp_anti['vgp_lat']\n vgp_anti['vgp_lon'] = (vgp_anti['vgp_lon']-180) % 360\n vgp_df = pd.concat([vgp_norm, vgp_anti], sort=True)\n if rev:\n vgp_df = vgp_df[vgp_df.vgp_lat < 0] # use only reverse data\n if v:\n vgp_df, cutoff, S_v = dovandamme(vgp_df) # do vandamme cutoff\n S_B = get_sb_df(vgp_df, mm97=mm97) # get\n N = vgp_df.shape[0]\n SBs, low, high = [], 0, 0\n if boot:\n for i in range(nb): # now do bootstrap\n bs_df = vgp_df.sample(n=N, replace=True)\n Sb_bs = get_sb_df(bs_df)\n SBs.append(Sb_bs)\n SBs.sort()\n low = SBs[int(.025 * nb)]\n high = SBs[int(.975 * nb)]\n return N, S_B, low, high, cutoff", "response": "Calculates the Sf for a dataframe with VGP Lat. 
Optional Fisher's k, site latitude, and N information can be used to correct for within-site scatter."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncalculates Watson's F statistic for two data sets.", "response": "def watsons_f(DI1, DI2):\n \"\"\"\n calculates Watson's F statistic (equation 11.16 in Essentials text book).\n\n Parameters\n _________\n DI1 : nested array of [Dec,Inc] pairs\n DI2 : nested array of [Dec,Inc] pairs\n\n Returns\n _______\n F : Watson's F\n Fcrit : critical value from F table\n \"\"\"\n # first calculate R for the combined data set, then R1 and R2 for each individually.\n # create a new array from two smaller ones\n DI = np.concatenate((DI1, DI2), axis=0)\n fpars = fisher_mean(DI) # re-use our function from problem 1b\n fpars1 = fisher_mean(DI1)\n fpars2 = fisher_mean(DI2)\n N = fpars['n']\n R = fpars['r']\n R1 = fpars1['r']\n R2 = fpars2['r']\n F = (N-2.)*((R1+R2-R)/(N-R1-R2))\n Fcrit = fcalc(2, 2*(N-2))\n return F, Fcrit"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncalculating expected pole positions and directions for given plate, location and age Parameters _________ data : [plate,lat,lon,age] plate : [NA, SA, AF, IN, EU, AU, ANT, GL] NA : North America SA : South America AF : Africa IN : India EU : Eurasia AU : Australia ANT: Antarctica GL : Greenland lat/lon : latitude/longitude in degrees N/E age : age in millions of years print_results : if True will print out nicely formatted results Returns _________ if print_results is False, [Age,Paleolat, Dec, Inc, Pole_lat, Pole_lon]", "response": "def apwp(data, print_results=False):\n \"\"\"\n calculates expected pole positions and directions for given plate, location and age\n Parameters\n _________\n data : [plate,lat,lon,age]\n plate : [NA, SA, AF, IN, EU, AU, ANT, GL]\n NA : North America\n SA : South America\n AF : Africa\n IN : India\n EU : Eurasia\n AU : Australia\n ANT: Antarctica\n GL : 
Greenland\n lat/lon : latitude/longitude in degrees N/E\n age : age in millions of years\n print_results : if True will print out nicely formatted results\n Returns\n _________\n if print_results is False, [Age,Paleolat, Dec, Inc, Pole_lat, Pole_lon]\n \"\"\"\n pole_lat, pole_lon = bc02(data) # get the pole for these parameters\n # get the declination and inclination for that pole\n ExpDec, ExpInc = vgp_di(pole_lat, pole_lon, data[1], data[2])\n # convert the inclination to paleo latitude\n paleo_lat = magnetic_lat(ExpInc)\n if print_results:\n # print everything out\n print(' Age Paleolat. Dec. Inc. Pole_lat. Pole_Long.')\n print('%7.1f %7.1f %7.1f %7.1f %7.1f %7.1f\\n'\n % (data[3], paleo_lat, ExpDec, ExpInc, pole_lat, pole_lon))\n\n else:\n return [data[3], paleo_lat, ExpDec, ExpInc, pole_lat, pole_lon]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef chart_maker(Int, Top, start=100, outfile='chart.txt'):\n low, k, iz = start, 0, 0\n Tzero = []\n f = open('chart.txt', 'w')\n vline = '\\t%s\\n' % (\n ' | | | | | | | |')\n hline = '______________________________________________________________________________\\n'\n f.write('file:_________________ field:___________uT\\n\\n\\n')\n f.write('%s\\n' % (\n ' date | run# | zone I | zone II | zone III | start | sp | cool|'))\n f.write(hline)\n f.write('\\t%s' % (' 0.0'))\n f.write(vline)\n f.write(hline)\n for k in range(len(Top)):\n for t in range(low, Top[k]+Int[k], Int[k]):\n if iz == 0:\n Tzero.append(t) # zero field first step\n f.write('%s \\t %s' % ('Z', str(t)+'.'+str(iz)))\n f.write(vline)\n f.write(hline)\n if len(Tzero) > 1:\n f.write('%s \\t %s' % ('P', str(Tzero[-2])+'.'+str(2)))\n f.write(vline)\n f.write(hline)\n iz = 1\n # infield after zero field first\n f.write('%s \\t %s' % ('I', str(t)+'.'+str(iz)))\n f.write(vline)\n f.write(hline)\n\n# f.write('%s \\t %s'%('T',str(t)+'.'+str(3))) # print second zero field (tail check)\n# 
f.write(vline)\n# f.write(hline)\n\n elif iz == 1:\n # infield first step\n f.write('%s \\t %s' % ('I', str(t)+'.'+str(iz)))\n f.write(vline)\n f.write(hline)\n iz = 0\n # zero field step (after infield)\n f.write('%s \\t %s' % ('Z', str(t)+'.'+str(iz)))\n f.write(vline)\n f.write(hline)\n try:\n low = Top[k]+Int[k+1] # increment to next temp step\n except:\n f.close()\n print(\"output stored in: chart.txt\")", "response": "This function creates a chart for performing IZZI experiments."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntries to import Basemap and print out a useful help message if it can t be imported.", "response": "def import_basemap():\n \"\"\"\n Try to import Basemap and print out a useful help message\n if Basemap is either not installed or is missing required\n environment variables.\n\n Returns\n ---------\n has_basemap : bool\n Basemap : Basemap package if possible else None\n \"\"\"\n\n Basemap = None\n has_basemap = True\n has_cartopy = import_cartopy()[0]\n try:\n from mpl_toolkits.basemap import Basemap\n WARNINGS['has_basemap'] = True\n except ImportError:\n has_basemap = False\n # if they have installed cartopy, no warning is needed\n if has_cartopy:\n return has_basemap, False\n # if they haven't installed Basemap or cartopy, they need to be warned\n if not WARNINGS['basemap']:\n print(\n \"-W- You haven't installed a module for plotting maps (cartopy or Basemap)\")\n print(\" Recommended: install cartopy. 
With conda:\")\n print(\" conda install cartopy\")\n print(\n \" For more information, see http://earthref.org/PmagPy/Cookbook#getting_python\")\n except (KeyError, FileNotFoundError):\n has_basemap = False\n # if cartopy is installed, no warning is needed\n if has_cartopy:\n return has_basemap, False\n if not WARNINGS['basemap']:\n print('-W- Basemap is installed but could not be imported.')\n print(' You are probably missing a required environment variable')\n print(\n ' If you need to use Basemap, you will need to run this program or notebook in a conda env.')\n print(' For more on how to create a conda env, see: https://conda.io/docs/user-guide/tasks/manage-environments.html')\n print(\n ' Recommended alternative: install cartopy for plotting maps. With conda:')\n print(' conda install cartopy')\n if has_basemap and not has_cartopy:\n print(\"-W- You have installed Basemap but not cartopy.\")\n print(\" In the future, Basemap will no longer be supported.\")\n print(\" To continue to make maps, install using conda:\")\n print(' conda install cartopy')\n\n WARNINGS['basemap'] = True\n return has_basemap, Basemap"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntrying to import cartopy and print out a help message if it is not installed", "response": "def import_cartopy():\n \"\"\"\n Try to import cartopy and print out a help message\n if it is not installed\n\n Returns\n ---------\n has_cartopy : bool\n cartopy : cartopy package if available else None\n \"\"\"\n cartopy = None\n has_cartopy = True\n try:\n import cartopy\n WARNINGS['has_cartopy'] = True\n except ImportError:\n has_cartopy = False\n if not WARNINGS['cartopy']:\n print('-W- cartopy is not installed')\n print(' If you want to make maps, install using conda:')\n print(' conda install cartopy')\n WARNINGS['cartopy'] = True\n return has_cartopy, cartopy"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
age_to_BP(age, age_unit):\n ageBP = -1e9\n if age_unit == \"Years AD (+/-)\" or age_unit == \"Years Cal AD (+/-)\":\n if age < 0:\n age = age+1 # to correct for there being no 0 AD\n ageBP = 1950-age\n elif age_unit == \"Years BP\" or age_unit == \"Years Cal BP\":\n ageBP = age\n elif age_unit == \"ka\":\n ageBP = age*1000\n elif age_unit == \"Ma\":\n ageBP = age*1e6\n elif age_unit == \"Ga\":\n ageBP = age*1e9\n else:\n print(\"Age unit invalid. Age set to -1.0e9\")\n\n return ageBP", "response": "Converts an age value into the equivalent in time Before Present ( BP"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting a MagIC database vocabulary to other standards.", "response": "def vocab_convert(vocab, standard, key=''):\n \"\"\"\n Converts MagIC database terms (method codes, geologic_types, etc) to other standards.\n May not be comprehensive for each standard. Terms added to standards as people need them\n and may not be up-to-date.\n\n 'key' can be used to distinguish vocab terms that exist in two different lists.\n\n Returns:\n value of the MagIC vocab in the standard requested\n\n Example:\n vocab_convert('Egypt','GEOMAGIA') will return '1'\n \"\"\"\n\n places_to_geomagia = {\n 'Egypt': \"1\",\n 'Japan': \"2\",\n 'France': \"3\",\n 'Ukraine': \"5\",\n 'India': \"6\",\n 'China': \"7\",\n 'Finland': \"8\",\n 'Greece': \"9\",\n 'Italy': \"11\",\n 'Switzerland': \"12\",\n 'Bulgaria': \"13\",\n 'Syria': \"14\",\n 'Hungary': \"15\",\n 'East Pacific Ridge': \"17\",\n 'Hawaii': \"18\",\n 'Morocco': \"19\",\n 'Australia': \"20\",\n 'Georgia': \"21\",\n 'Azerbaijan': \"22\",\n 'Spain': \"24\",\n 'England': \"25\",\n 'Czech Republic': \"26\",\n 'Mexico': \"27\",\n 'Iraq': \"28\",\n 'Israel': \"29\",\n 'Iran': \"30\",\n 'Uzbekistan': \"31\",\n 'Turkmenistan': \"32\",\n 'Mongolia': \"33\",\n 'Iceland': \"34\",\n 'New Zealand': \"35\",\n 'Amsterdam Island': \"36\",\n 'Guadeloupe': \"37\",\n 'Mid Atlantic Ridge': \"38\",\n 'Austria': 
\"39\",\n 'Belgium': \"40\",\n 'Romania': \"41\",\n 'Guatemala': \"42\",\n 'El Salvador': \"43\",\n 'Canary Islands': \"45\",\n 'Moldova': \"46\",\n 'Latvia': \"47\",\n 'Lithuania': \"48\",\n 'Russia': \"49\",\n 'Germany': \"51\",\n 'Martinique': \"52\",\n 'Netherlands': \"53\",\n 'Turkey': \"54\",\n 'Denmark': \"55\",\n 'Cameroon': \"56\",\n 'Honduras': \"57\",\n 'Jordan': \"58\",\n 'Brazil': \"59\",\n 'Estonia': \"61\",\n 'Sweden': \"62\",\n 'Peru': \"63\",\n 'Bolivia': \"64\",\n 'Ecuador': \"65\",\n 'Ontario': \"66\",\n 'New Mexico': \"67\",\n 'Arizona': \"68\",\n 'California': \"69\",\n 'Colorado': \"70\",\n 'Utah': \"71\",\n 'Washington': \"72\",\n 'Oregon': \"73\",\n 'British Columbia': \"74\",\n 'Idaho': \"75\",\n 'Arkansas': \"76\",\n 'Tennessee': \"78\",\n 'Serbia': \"79\",\n 'Kosovo': \"80\",\n 'Portugal': \"81\",\n 'Thailand': \"82\",\n 'South Korea': \"83\",\n 'Kazakhstan': \"84\",\n 'Nebraska': \"85\",\n 'La Reunion': \"86\",\n 'Cyprus': \"87\",\n 'Papua New Guinea': \"88\",\n 'Vanuatu': \"89\",\n 'Fiji': \"90\",\n 'Argentina': \"91\",\n 'Tunisia': \"92\",\n 'Mali': \"93\",\n 'Senegal': \"95\",\n 'Alaska': \"96\",\n 'North Atlantic': \"97\",\n 'South Atlantic': \"98\",\n 'Beaufort Sea': \"99\",\n 'Chukchi Sea': \"100\",\n 'Kyrgyzstan': \"101\",\n 'Indonesia': \"102\",\n 'Azores': \"103\",\n 'Quebec': \"104\",\n 'Norway': \"105\",\n 'Northern Ireland': \"106\",\n 'Wales': \"107\",\n 'Scotland': \"108\",\n 'Virginia': \"109\",\n 'North West Pacific': \"110\",\n 'Mediterranean': \"111\",\n 'Slovakia': \"121\",\n 'Poland': \"124\"\n }\n\n geologic_types_to_geomagia = {\n \"Baked Clay\": \"2\",\n \"Tile\": \"3\",\n \"Lava\": \"4\",\n \"Pottery\": \"5\",\n \"Sun Dried Object\": \"6\",\n \"Porcelain\": \"7\",\n \"Ceramic\": \"8\",\n \"Kiln\": \"9\",\n \"Oven or Hearth (GEOMAGIA Only)\": \"10\",\n \"Mixed Archeological Objects\": \"11\",\n \"Slag\": \"12\",\n \"Baked Rock\": \"13\",\n \"Fresco\": \"14\",\n \"Mosaic\": \"15\",\n \"Wall\": \"16\",\n \"Bath\": 
\"17\",\n \"Burnt Floor\": \"18\",\n \"Funeral Pyre\": \"19\",\n \"Hypocaust\": \"20\",\n \"Burnt Pit\": \"21\",\n \"Bell Mould\": \"22\",\n \"Smoking Chamber\": \"23\",\n \"Baked Mud\": \"24\",\n \"Volcanic Ash\": \"25\",\n \"Burnt Structure\": \"26\",\n \"Burnt Castle Wall\": \"27\",\n \"Charcoal Pile\": \"28\",\n \"Burnt Earth\": \"29\",\n \"Vitrified Object\": \"30\",\n \"Unbaked Sediment\": \"31\",\n \"Tuyere\": \"32\",\n \"Sauna\": \"33\",\n \"Pit Structure\": \"35\",\n \"Room\": \"36\",\n \"Pit House\": \"37\",\n \"Salt Kiln\": \"38\",\n \"Burnt Sediment\": \"39\",\n \"Archeological Ashes\": \"40\",\n \"Volcanic Other or Undefined (GEOMAGIA Only)\": \"41\",\n \"Mural\": \"42\",\n \"Vitrified Stone\": \"43\",\n \"Soil\": \"44\",\n \"Kamadogu\": \"45\",\n \"Foundry\": \"46\",\n \"Obsidian\": \"47\",\n \"Chert\": \"48\",\n \"Burnt daub\": \"49\",\n \"Amphora\": \"50\",\n \"Granite\": \"51\",\n \"Volcanic Glass\": \"52\",\n \"Furnace\": \"53\",\n \"Roasting Pit\": \"54\"\n\n }\n\n# Some of the simple method code mappings are done here\n\n method_codes_to_geomagia = {\n \"GM-NO\": \"0\",\n \"GM-CC-ARCH\": \"101\",\n \"GM-C14-CAL\": \"102\",\n \"GM-C14-UNCAL\": \"103\",\n \"GM-LUM-TH\": \"104\",\n \"GM-HIST\": \"105\",\n \"GM-PMAG-ARCH\": \"106\",\n \"GM-ARAR\": \"107\",\n \"GM-CC-TEPH\": \"108\",\n \"GM-CC-STRAT\": \"109\",\n \"GM-CC-REL\": \"110\",\n \"GM-DENDRO\": \"111\",\n \"GM-RATH\": \"112\",\n \"GM-KAR\": \"113\",\n \"GM-UTH\": \"114\",\n \"GM-FT\": \"115\",\n \"GM-C14-AMS\": \"116\",\n \"GM-LUM-OS\": \"117\",\n \"GM-HE3\": \"118\",\n \"GM-VARVE\": \"119\",\n \"GM-CS137\": \"120\",\n \"GM-USD-PB210\": \"121\",\n \"GM-C14-BETA\": \"122\",\n \"GM-O18\": \"123\",\n \"GM-PA\": \"124\"\n }\n\n standard = standard.lower()\n standard_value = \"\"\n if standard == \"geomagia\":\n if vocab in places_to_geomagia.keys():\n standard_value = places_to_geomagia[vocab]\n if vocab in geologic_types_to_geomagia.keys():\n standard_value = geologic_types_to_geomagia[vocab]\n 
if vocab in method_codes_to_geomagia.keys():\n standard_value = method_codes_to_geomagia[vocab]\n if standard_value == \"\":\n if vocab=='':\n standard_value=\"Fail:vocab_to_convert_is_null\"\n elif vocab.isspace():\n standard_value=\"Fail:vocab_to_convert_is_all_whitespace\"\n else:\n print(\"pmag.vocab_convert:Magic vocab '\", vocab, \"' not found for standard \", standard, sep='')\n return(vocab)\n return standard_value"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntakes arguments input_directory output_directory and returns realpath for both values.", "response": "def fix_directories(input_dir_path, output_dir_path):\n \"\"\"\n Takes input/output directory arguments and fixes them.\n If no input_directory, default to output_dir_path for both.\n Then return realpath for both values.\n\n Parameters\n ----------\n input_dir_path : str\n output_dir_path : str\n\n Returns\n ---------\n input_dir_path, output_dir_path\n \"\"\"\n if not input_dir_path:\n input_dir_path = output_dir_path\n input_dir_path = os.path.realpath(input_dir_path)\n output_dir_path = os.path.realpath(output_dir_path)\n return input_dir_path, output_dir_path"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main():\n if '-h' in sys.argv: # check if help is needed\n print(main.__doc__)\n sys.exit() # graceful quit\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n file_name=sys.argv[ind+1]\n else:\n print(\"MagIC file name needed. 
Please add the file name after the -f option.\")\n\n\n# Create all the table files from the magic.txt file so they can be imported by the cb\n command = \"download_magic.py -f \" + file_name\n os.system(command)\n\n md = cb.Contribution() #md stands for magic file data\n md.propagate_location_to_measurements()\n md.propagate_location_to_specimens()\n md.propagate_location_to_samples()\n if not md.tables:\n print('-E- No MagIC tables could be found in this directory')\n error_log(\"No MagIC tables found\")\n return\n\n doi=md.tables['contribution'].df.iloc[0]['reference']\n id=md.tables['contribution'].df.iloc[0]['id']\n timestamp=md.tables['contribution'].df.iloc[0]['timestamp']\n contributor=md.tables['contribution'].df.iloc[0]['contributor']\n print(\"c=\",contributor)\n contributor=contributor.replace('@','')\n print(\"c=\",contributor)\n \n cr = Crossref()\n ref=cr.works(doi)\n \n# authors = \"Doe J.X., Alexander,T.G.\"\n status= ref[\"status\"]\n message= ref[\"message\"]\n# print(\"message=\",message)\n authors= message[\"author\"]\n# print(\"authors=\",authors)\n authorList=\"\"\n for author in authors:\n# print (\"Name:\",author['given'], author['family']) \n author_given=\"\"\n names=author['given'].split(' ')\n for name in names:\n author_given +=name[0]+\".\"\n authorList += author['family'] + \" \" + author_given + \", \" \n# print(authorList)\n authorList=authorList[:-2]\n# print(authorList)\n\n title = message['title'][0]\n year = message['created']['date-parts'][0][0]\n# print(year)\n journal = message['short-container-title'][0]\n volume = message['volume']\n# print(volume)\n pages='0'\n if \"page\" in message.keys():\n pages = message['page']\n# print(pages)\n url = \"https://earthref.org/MagIC/doi/\" + doi\n\n print(\"REFS\") \n print(\"Insert into REFS values(NULL,'\", authorList, \"','\", title, \"', \", year, \", '\", journal, \"', \", volume, \", '\", pages, \"', '\", doi, \"', '\", url, \"');\", sep='')\n \n print()\n print(\"ARCHEODIJ\") \n 
\n sites=md.tables['sites'].df\n locations=md.tables['locations'].df\n\n print(\"UID,NUM_SAMPLES,NUM_ACC_SPEC,NUM_MEAS_SPEC,BA,SIGMA_BA,AGE, AGE_MIN,AGE_MAX,NUM_SIGMAS,AGE_ERROR_TYPE_ID,SITE_LAT, SITE_LON,VADM,SIGMA_VADM,SITE_ID,PI_METHODS_ID,AC_ID,MD_CK_ ID,AN_CORR_ID,CR_CORR_ID,DM_METHOD_ID,AF_STEP,T_STEP,DM_ ANALYSIS_ID,SPECIMEN_TYPE_ID,MATERIAL_ID,REFERENCE_ID,NUM_ C14_SAMPLES,C14_ID,CALIB_C14_AGE,CALIB_C14_AGE_SIGMA_MIN, CALIB_C14_AGE_SIGMA_MAX,NUM_C14_SIGMAS,CALC_CALIB_C14_AGE, CALC_CALIB_C14_AGE_SIGMA_MIN,CALC_CALIB_C14_AGE_SIGMA_MAX, C14_CALIB_SOFTWARE_ID,CALC_C14_CALIB_SOFTWARE_ID,C14_CALIB_DATASET_ID,CALC_C14_ CALIB_DATASET_ID,DENDRO_ID,TOT_NUM_DENDRO,NUM_DENDRO_ USED,DATING_METHOD_ID,NUM_DIR_SAMPLES,NUM_DIR_SPECIMENS,NUM_ DIR_SPEC_COLLECTED,DECL,INCL,ALPHA_95,K,VDM,SIGMA_VDM,SAMPLE_ID,c_csv,SITE_NAME, SITE_HORIZON,1000,1001,1002,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,1013,1014, SUPERSEEDED,UPLOAD_YEAR,UPLOAD_MONTH,UPLOADER,EDITOR,EDIT_DATE,NOTES\")\n\n for index, row in sites.iterrows():\n int_n_samples,int_n_specimens,int_n_total_specimens,int_abs,int_abs_sigma=-1,-1,-1,-1,-1\n if 'int_n_samples' in sites.columns.values: \n int_n_samples=row['int_n_samples']\n if 'int_n_specimens' in sites.columns.values: \n int_n_specimens=row['int_n_specimens']\n if 'int_n_total_specimens' in sites.columns.values: \n int_n_total_specimens=row['int_n_total_specimens']\n\n if int_n_specimens == -1 and int_n_samples >0:\n int_n_spcimens = int_n_samples\n\n if 'int_abs' in sites.columns.values: \n int_abs=row['int_abs']\n if int_abs is not None:\n int_abs=round(int_abs*1e6,1)\n if 'int_abs_sigma' in sites.columns.values: \n int_abs_sigma=row['int_abs_sigma']\n if int_abs_sigma is not None:\n int_abs_sigma=round(row['int_abs_sigma']*1e6,1)\n\n age,age_high,age_low=-1e9,-1e9,-1e9\n age_error_type='0' # \n \n if 'age_unit' not in sites.columns.values: \n print(\"Malformed Magic sites data table. 
Required column 'age_unit' is missing\")\n sys.exit()\n age_unit=row['age_unit']\n if 'age' in sites.columns.values: \n age=row['age'] \n age=pmag.age_to_BP(age,age_unit)\n if 'age_high' in sites.columns.values: \n age_high=row['age_high'] \n age_high=pmag.age_to_BP(age_high,age_unit)\n if 'age_low' in sites.columns.values: \n age_low=row['age_low'] \n age_low=pmag.age_to_BP(age_low,age_unit)\n if 'age_sigma' in sites.columns.values: \n age_sigma=row['age_sigma'] \n age_sigma=pmag.age_to_BP(age_sigma,age_unit)\n age_high=age+age_sigma\n age_low=age-age_sigma\n age_error_type='5' #MagIC uses one sigma for all ages\n\n if age_low > age_high: # MagIC lets age_high and age_low be in any order. Fix that for GEOMAGIA \n temp=age_high\n age_high=age_low\n age_low=temp\n if age == -1e9: # If only age_low and age_high are in the MagIC file then calculate the age.\n age=(age_high+age_low)/2\n age_error_type='8' #If MagIC age only high and low then error type is \"range\"\n\n age_min=age-age_low # GEOMAGIA has the max and min as differences from the age, not absolute. 
\n age_max=age_high-age\n age_BP=age\n age=1950-age #GEOMAGIA wants +-AD/BC so convert BP to AD/-BC\n\n lat=row['lat']\n lon=row['lon']\n\n vadm,vadm_sigma=-1,-1\n \n if 'vadm' in sites.columns.values: \n vadm=row['vadm'] \n vadm=vadm/1e22\n if 'vadm_sigma' in sites.columns.values: \n vadm_sigma=row['vadm_sigma'] \n vadm_sigma=vadm_sigma/1e22\n\n site_name=row['site'] \n \n# For paleointensity codes just give the method code list and Max will decide on the right \n# GEOMAGIA code.\n method_codes=\"No MagIC method codes available\"\n if 'method_codes' in sites.columns.values: \n method_codes=row['method_codes']\n\n# Just give Max all the method codes for him to decide for now\n paleointensity_procedure=method_codes\n \n alteration_monitor=\"0\"\n alteration_monitor=method_codes_to_geomagia(method_codes,'ALTERATION_MONIT_CORR')\n multidomain_check=\"0\" \n multidomain_check=method_codes_to_geomagia(method_codes,'MD_CHECKS')\n anisotropy_correction=\"0\"\n anisotropy_correction=method_codes_to_geomagia(method_codes,'ANISOTROPY_CORRECTION')\n cooling_rate=\"0\"\n cooling_rate=method_codes_to_geomagia(method_codes,'COOLING_RATE')\n demag_method=\"0\"\n demag_method=method_codes_to_geomagia(method_codes,'DM_METHODS')\n demag_analysis=\"0\"\n demag_analysis=method_codes_to_geomagia(method_codes,'DM_ANALYSIS')\n specimen_shape=\"0\"\n specimen_shape=method_codes_to_geomagia(method_codes,'SPECIMEN_TYPE_ID')\n \n materials=\"\" \n geologic_types=\"\" \n if 'geologic_types' in sites.columns.values: \n geologic_types=row['geologic_types'] \n if \":\" in geologic_types:\n gtypes=geologic_types.split(\":\")\n for gtype in gtypes:\n materials=materials+pmag.vocab_convert(gtype,\"geomagia\")+\":\"\n materials=materials[:-1]\n else:\n materials=pmag.vocab_convert(geologic_types,\"geomagia\")\n \n geochron_codes=\"\" \n if \":\" in method_codes:\n gcodes=method_codes.split(\":\")\n for gcode in gcodes:\n if \"GM-\" == gcode[:3]:\n geochron_codes=geochron_codes+pmag.vocab_convert(gcode,\"geomagia\")+\":\"\n 
geochron_codes=geochron_codes[:-1]\n else:\n geochron_codes=pmag.vocab_convert(geochron_codes,\"geomagia\")\n if geochron_codes == \"\": \n geochron_codes=\"0\"\n\n dir_n_samples=\"-1\"\n if 'dir_n_samples' in sites.columns.values: \n dir_n_samples=row['dir_n_samples']\n\n# Not in MagIC\n dir_n_specimens=\"-1\"\n\n# using total number of samples for total specimen number\n dir_n_total_samples=\"-1\"\n if 'dir_n_total_samples' in sites.columns.values: \n dir_n_total_samples=row['dir_n_total_samples']\n\n dir_dec=\"999\"\n if 'dir_dec' in sites.columns.values: \n dir_dec=row['dir_dec']\n\n dir_inc=\"999\"\n if 'dir_inc' in sites.columns.values: \n dir_inc=row['dir_inc']\n\n dir_alpha95=\"-1\"\n if 'dir_alpha95' in sites.columns.values: \n dir_alpha95=row['dir_alpha95']\n\n dir_k=\"-1\"\n if 'dir_k' in sites.columns.values: \n dir_k=row['dir_k']\n\n vdm=-1\n if 'vdm' in sites.columns.values: \n vdm=float(row['vdm'])\n vdm=vdm/1e22\n\n vdm_sigma=-1\n if 'vdm_sigma' in sites.columns.values: \n vdm_sigma=float(row['vdm_sigma'])\n vdm_sigma=vdm_sigma/1e22\n\n# Could try and get sample names from samples table (using Contribution object) but just taking the list \n# if it exists for now.\n sample_list=\"-1\"\n if 'samples' in sites.columns.values: \n sample_list=row['samples']\n\n# c_csv is in GEOMAGIA insert. What it is I don't know. 
Max said set to 0\n    c_csv='0'\n\n# This place_id is SITE_ID in GEOMAGIA\n\n    place_id=\"0\"\n    location=row['location']\n    if 'state_province' in locations.columns.values: \n        place=locations.loc[location,'state_province']\n        if place != \"\":\n            place_id=pmag.vocab_convert(place,'GEOMAGIA')\n    if place_id == \"0\":\n        if 'country' in locations.columns.values: \n            place=locations.loc[location,'country']\n            if place != \"\":\n                place_id=pmag.vocab_convert(place,'GEOMAGIA')\n    if place_id == \"0\":\n        if 'continent_ocean' in locations.columns.values: \n            place=locations.loc[location,'continent_ocean']\n            if place != \"\":\n                place_id=pmag.vocab_convert(place,'GEOMAGIA')\n\n    site=row['site']\n    dt=dateutil.parser.parse(timestamp)\n\n    description=\"-1\" \n    if 'description' in sites.columns.values: \n        description=row['description'] \n\n    if age_BP <= 50000:\n        print(\"0\",int_n_samples,int_n_specimens,int_n_total_specimens,int_abs,int_abs_sigma,age,age_min,age_max,\"1\",age_error_type,lat,lon,vadm,vadm_sigma,place_id,paleointensity_procedure,alteration_monitor,multidomain_check,anisotropy_correction,cooling_rate,demag_method,\"0\",\"0\",demag_analysis,specimen_shape,materials,doi,\"-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1\",geochron_codes,dir_n_samples,dir_n_samples,dir_n_total_samples,dir_dec,dir_inc,dir_alpha95,dir_k,vdm,vdm_sigma,sample_list,c_csv,location,site,\"-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1\",\"-1\",dt.year,dt.month,contributor,\"-1,-1\",description,sep=',')", "response": "Main routine for converting data from a MagIC file for easier input into Max Brown's GEOMAGIA database."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef method_codes_to_geomagia(magic_method_codes,geomagia_table):\n    codes=magic_method_codes\n    geomagia=geomagia_table.lower()\n    geomagia_code='0'\n \n    if geomagia=='alteration_monit_corr':\n        if \"DA-ALT-V\" in codes or
\"LP-PI-ALT-PTRM\" in codes or \"LP-PI-ALT-PMRM\" in codes:\n            geomagia_code='1' \n        elif \"LP-PI-ALT-SUSC\" in codes:\n            geomagia_code='2' \n        elif \"DA-ALT-RS\" in codes or \"LP-PI-ALT-AFARM\" in codes:\n            geomagia_code='3' \n        elif \"LP-PI-ALT-WALTON\" in codes:\n            geomagia_code='4' \n        elif \"LP-PI-ALT-TANGUY\" in codes:\n            geomagia_code='5' \n        elif \"DA-ALT\" in codes:\n            geomagia_code='6' #at end to fill generic if others don't exist\n        elif \"LP-PI-ALT-FABIAN\" in codes:\n            geomagia_code='7' \n\n    if geomagia=='md_checks':\n        if (\"LT-PTRM-MD\" in codes) or (\"LT-PMRM-MD\" in codes):\n            geomagia_code='1:' \n        if (\"LP-PI-BT-LT\" in codes) or (\"LT-LT-Z\" in codes):\n            if \"0\" in geomagia_code: \n                geomagia_code=\"23:\"\n            else:\n                geomagia_code+='2:' \n        geomagia_code=geomagia_code[:-1] \n\n    if geomagia=='anisotropy_correction':\n        if \"DA-AC-AMS\" in codes:\n            geomagia_code='1' \n        elif \"DA-AC-AARM\" in codes:\n            geomagia_code='2' \n        elif \"DA-AC-ATRM\" in codes:\n            geomagia_code='3' \n        elif \"LT-NRM-PAR\" in codes:\n            geomagia_code='4' \n        elif \"DA-AC-AIRM\" in codes:\n            geomagia_code='6' \n        elif \"DA-AC\" in codes: #at end to fill generic if others don't exist\n            geomagia_code='5' \n\n    if geomagia=='cooling_rate':\n        if \"DA-CR\" in codes: #all current CR codes but CR-EG are a 1 but may change in the future \n            geomagia_code='1' \n        if \"DA-CR-EG\" in codes:\n            geomagia_code='2' \n\n    if geomagia=='dm_methods':\n        if \"LP-DIR-AF\" in codes:\n            geomagia_code='1' \n        elif \"LT-AF-D\" in codes:\n            geomagia_code='1' \n        elif \"LT-AF-G\" in codes:\n            geomagia_code='1' \n        elif \"LT-AF-Z\" in codes:\n            geomagia_code='1' \n        elif \"LP-DIR-T\" in codes:\n            geomagia_code='2' \n        elif \"LT-T-Z\" in codes:\n            geomagia_code='2' \n        elif \"LP-DIR-M\" in codes:\n            geomagia_code='5' \n        elif \"LT-M-Z\" in codes:\n            geomagia_code='5' \n\n    if geomagia=='dm_analysis':\n        if \"DE-BFL\" in codes:\n            geomagia_code='1' \n        elif \"DE-BLANKET\" in codes:\n            geomagia_code='2' \n        elif \"DE-FM\" in codes:\n            geomagia_code='3' \n        elif \"DE-NRM\" in
codes:\n geomagia_code='6' \n\n if geomagia=='specimen_type_id':\n if \"SC-TYPE-CYC\" in codes:\n geomagia_code='1' \n elif \"SC-TYPE-CUBE\" in codes:\n geomagia_code='2' \n elif \"SC-TYPE-MINI\" in codes:\n geomagia_code='3' \n elif \"SC-TYPE-SC\" in codes:\n geomagia_code='4' \n elif \"SC-TYPE-UC\" in codes:\n geomagia_code='5' \n elif \"SC-TYPE-LARGE\" in codes:\n geomagia_code='6' \n\n return geomagia_code", "response": "Converts a list of MagIC method codes to a list of GEOMAGIA codes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nnaming microwave_magic.py DESCRIPTION plots microwave paleointensity data, allowing interactive setting of bounds. Saves and reads interpretations from a pmag_specimen formatted table, default: microwave_specimens.txt SYNTAX microwave_magic.py [command line options] OPTIONS -h prints help message and quits -f MEAS, set magic_measurements input file -fsp PRIOR, set pmag_specimen prior interpretations file -fcr CRIT, set criteria file for grading. 
-fmt [svg,png,jpg], format for images - default is svg -sav, saves plots without review (default format) -spc SPEC, plots single specimen SPEC, saves plot with specified format with optional -b bounds and quits -b BEG END: sets bounds for calculation BEG: starting step for slope calculation END: ending step for slope calculation DEFAULTS MEAS: magic_measurements.txt CRIT: NONE PRIOR: microwave_specimens.txt OUTPUT figures: ALL: numbers refer to temperature steps in command line window 1) Arai plot: closed circles are zero-field first/infield open circles are infield first/zero-field triangles are pTRM checks squares are pTRM tail checks VDS is vector difference sum diamonds are bounds for interpretation 2) Zijderveld plot: closed (open) symbols are X-Y (X-Z) planes X rotated to NRM direction 3) (De/Re)Magnetization diagram: circles are NRM remaining squares are pTRM gained command line window: list is: temperature step numbers, power (J), Dec, Inc, Int (units of magic_measurements) list of possible commands: type letter followed by return to select option saving of plots creates .svg format files with specimen_name, plot type as name", "response": "def main():\n    \"\"\"\n    NAME\n        microwave_magic.py\n\n    DESCRIPTION\n        plots microwave paleointensity data, allowing interactive setting of bounds.\n        Saves and reads interpretations\n        from a pmag_specimen formatted table, default: microwave_specimens.txt\n\n    SYNTAX\n        microwave_magic.py [command line options]\n\n    OPTIONS\n        -h prints help message and quits\n        -f MEAS, set magic_measurements input file\n        -fsp PRIOR, set pmag_specimen prior interpretations file\n        -fcr CRIT, set criteria file for grading.\n        -fmt [svg,png,jpg], format for images - default is svg\n        -sav, saves plots without review (default format)\n        -spc SPEC, plots single specimen SPEC, saves plot with specified format\n            with optional -b bounds and quits\n        -b BEG END: sets bounds for calculation\n            BEG: starting step for slope calculation\n            END: ending step for
slope calculation\n\n DEFAULTS\n MEAS: magic_measurements.txt\n CRIT: NONE\n PRIOR: microwave_specimens.txt\n\n OUTPUT\n figures:\n ALL: numbers refer to temperature steps in command line window\n 1) Arai plot: closed circles are zero-field first/infield\n open circles are infield first/zero-field\n triangles are pTRM checks\n squares are pTRM tail checks\n VDS is vector difference sum\n diamonds are bounds for interpretation\n 2) Zijderveld plot: closed (open) symbols are X-Y (X-Z) planes\n X rotated to NRM direction\n 3) (De/Re)Magnetization diagram:\n circles are NRM remaining\n squares are pTRM gained\n command line window:\n list is: temperature step numbers, power (J), Dec, Inc, Int (units of magic_measuements)\n list of possible commands: type letter followed by return to select option\n saving of plots creates .svg format files with specimen_name, plot type as name\n \"\"\"\n#\n# initializations\n#\n meas_file,critout,inspec=\"magic_measurements.txt\",\"\",\"microwave_specimens.txt\"\n inlt=0\n version_num=pmag.get_version()\n Tinit,DCZ,field,first_save=0,0,-1,1\n user,comment=\"\",''\n ans,specimen,recnum,start,end=0,0,0,0,0\n plots,pmag_out,samp_file,style=0,\"\",\"\",\"svg\"\n fmt='.'+style\n#\n# default acceptance criteria\n#\n accept_keys=['specimen_int_ptrm_n','specimen_md','specimen_fvds','specimen_b_beta','specimen_dang','specimen_drats','specimen_Z']\n accept={}\n accept['specimen_int_ptrm_n']=2\n accept['specimen_md']=10\n accept['specimen_fvds']=0.35\n accept['specimen_b_beta']=.1\n accept['specimen_int_mad']=7\n accept['specimen_dang']=10\n accept['specimen_drats']=10\n accept['specimen_Z']=10\n#\n# parse command line options\n#\n spc,BEG,END=\"\",\"\",\"\"\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n meas_file=sys.argv[ind+1]\n if '-fsp' in sys.argv:\n ind=sys.argv.index('-fsp')\n inspec=sys.argv[ind+1]\n if '-fcr' in sys.argv:\n ind=sys.argv.index('-fcr')\n 
critout=sys.argv[ind+1]\n if '-fmt' in sys.argv:\n ind=sys.argv.index('-fmt')\n fmt='.'+sys.argv[ind+1]\n if '-spc' in sys.argv:\n ind=sys.argv.index('-spc')\n spc=sys.argv[ind+1]\n if '-b' in sys.argv:\n ind=sys.argv.index('-b')\n BEG=int(sys.argv[ind+1])\n END=int(sys.argv[ind+2])\n if critout!=\"\":\n crit_data,file_type=pmag.magic_read(critout)\n if pmagplotlib.verbose:\n print(\"Acceptance criteria read in from \", critout)\n accept={}\n accept['specimen_int_ptrm_n']=2.0\n for critrec in crit_data:\n if critrec[\"pmag_criteria_code\"]==\"IE-SPEC\":\n for key in accept_keys:\n if key not in list(critrec.keys()):\n accept[key]=-1\n else:\n accept[key]=float(critrec[key])\n try:\n open(inspec,'r')\n PriorRecs,file_type=pmag.magic_read(inspec)\n if file_type != 'pmag_specimens':\n print(file_type)\n print(file_type,inspec,\" is not a valid pmag_specimens file \")\n sys.exit()\n for rec in PriorRecs:\n if 'magic_software_packages' not in list(rec.keys()):rec['magic_software_packages']=\"\"\n except IOError:\n PriorRecs=[]\n if pmagplotlib.verbose:print(\"starting new specimen interpretation file: \",inspec)\n meas_data,file_type=pmag.magic_read(meas_file)\n if file_type != 'magic_measurements':\n print(file_type)\n print(file_type,\"This is not a valid magic_measurements file \")\n sys.exit()\n backup=0\n # define figure numbers for arai, zijderveld and\n # de-,re-magization diagrams\n AZD={}\n AZD['deremag'], AZD['zijd'],AZD['arai'],AZD['eqarea']=1,2,3,4\n pmagplotlib.plot_init(AZD['arai'],4,4)\n pmagplotlib.plot_init(AZD['zijd'],4,4)\n pmagplotlib.plot_init(AZD['deremag'],4,4)\n pmagplotlib.plot_init(AZD['eqarea'],4,4)\n #\n #\n #\n # get list of unique specimen names\n #\n CurrRec=[]\n sids=pmag.get_specs(meas_data)\n # get plots for specimen s - default is just to step through arai diagrams\n #\n if spc!=\"\": specimen =sids.index(spc)\n while specimen < len(sids):\n methcodes=[]\n if pmagplotlib.verbose and spc!=\"\":\n print(sids[specimen],specimen+1, 'of ', 
len(sids))\n MeasRecs=[]\n s=sids[specimen]\n datablock,trmblock=[],[]\n PmagSpecRec={}\n PmagSpecRec[\"er_analyst_mail_names\"]=user\n PmagSpecRec[\"specimen_correction\"]='u'\n #\n # find the data from the meas_data file for this specimen\n #\n for rec in meas_data:\n if rec[\"er_specimen_name\"]==s:\n MeasRecs.append(rec)\n methods=rec[\"magic_method_codes\"].split(\":\")\n meths=[]\n for meth in methods:\n meths.append(meth.strip()) # take off annoying spaces\n methods=\"\"\n for meth in meths:\n if meth.strip() not in methcodes and \"LP-\" in meth:methcodes.append(meth.strip())\n methods=methods+meth+\":\"\n methods=methods[:-1]\n rec[\"magic_method_codes\"]=methods\n if \"LP-PI-M\" in meths: datablock.append(rec)\n if \"LP-MRM\" in meths: trmblock.append(rec)\n if len(trmblock)>2 and inspec!=\"\":\n if Tinit==0:\n Tinit=1\n AZD['MRM']=4\n pmagplotlib.plot_init(AZD['MRM'],4,4)\n elif Tinit==1:\n pmagplotlib.clearFIG(AZD['MRM'])\n if len(datablock) <4:\n if backup==0:\n specimen+=1\n if pmagplotlib.verbose:\n print('skipping specimen - moving forward ', s)\n else:\n specimen-=1\n if pmagplotlib.verbose:\n print('skipping specimen - moving backward ', s)\n #\n # collect info for the PmagSpecRec dictionary\n #\n else:\n rec=datablock[0]\n PmagSpecRec[\"er_citation_names\"]=\"This study\"\n PmagSpecRec[\"er_specimen_name\"]=s\n PmagSpecRec[\"er_sample_name\"]=rec[\"er_sample_name\"]\n PmagSpecRec[\"er_site_name\"]=rec[\"er_site_name\"]\n PmagSpecRec[\"er_location_name\"]=rec[\"er_location_name\"]\n if \"magic_instrument_codes\" not in list(rec.keys()):rec[\"magic_instrument_codes\"]=\"\"\n PmagSpecRec[\"magic_instrument_codes\"]=rec[\"magic_instrument_codes\"]\n PmagSpecRec[\"measurement_step_unit\"]=\"J\"\n if \"magic_experiment_name\" not in list(rec.keys()):\n rec[\"magic_experiment_name\"]=\"\"\n else:\n PmagSpecRec[\"magic_experiment_names\"]=rec[\"magic_experiment_name\"]\n\n meths=rec[\"magic_method_codes\"].split(':')\n # sort data into types\n if 
\"LP-PI-M-D\" in meths: # this is a double heating experiment\n exp_type=\"LP-PI-M-D\"\n elif \"LP-PI-M-S\" in meths:\n exp_type=\"LP-PI-M-S\"\n else:\n print(\"experiment type not supported yet \")\n break\n araiblock,field=pmag.sortmwarai(datablock,exp_type)\n first_Z=araiblock[0]\n first_I=araiblock[1]\n GammaChecks=araiblock[-3]\n ThetaChecks=araiblock[-2]\n DeltaChecks=araiblock[-1]\n if len(first_Z)<3:\n if backup==0:\n specimen+=1\n if pmagplotlib.verbose:\n print('skipping specimen - moving forward ', s)\n else:\n specimen-=1\n if pmagplotlib.verbose:\n print('skipping specimen - moving backward ', s)\n else:\n backup=0\n zijdblock,units=pmag.find_dmag_rec(s,meas_data)\n if exp_type==\"LP-PI-M-D\":\n recnum=0\n print(\"ZStep Watts Dec Inc Int\")\n for plotrec in zijdblock:\n if pmagplotlib.verbose:\n print('%i %i %7.1f %7.1f %8.3e ' % (recnum,plotrec[0],plotrec[1],plotrec[2],plotrec[3]))\n recnum += 1\n recnum = 1\n if GammaChecks!=\"\":\n print(\"IStep Watts Gamma\")\n for gamma in GammaChecks:\n if pmagplotlib.verbose: print('%i %i %7.1f ' % (recnum, gamma[0],gamma[1]))\n recnum += 1\n if exp_type==\"LP-PI-M-S\":\n if pmagplotlib.verbose:\n print(\"IStep Watts Theta\")\n kk=0\n for theta in ThetaChecks:\n kk+=1\n print('%i %i %7.1f ' % (kk,theta[0],theta[1]))\n if pmagplotlib.verbose:\n print(\"Watts Delta\")\n for delta in DeltaChecks:\n print('%i %7.1f ' % (delta[0],delta[1]))\n pmagplotlib.plot_arai_zij(AZD,araiblock,zijdblock,s,units[0])\n if inspec !=\"\":\n if pmagplotlib.verbose: print('Looking up saved interpretation....')\n found = 0\n for k in range(len(PriorRecs)):\n try:\n if PriorRecs[k][\"er_specimen_name\"]==s:\n found =1\n CurrRec.append(PriorRecs[k])\n for j in range(len(araiblock[0])):\n if float(araiblock[0][j][0])==float(PriorRecs[k][\"measurement_step_min\"]):start=j\n if float(araiblock[0][j][0])==float(PriorRecs[k][\"measurement_step_max\"]):end=j\n pars,errcode=pmag.PintPars(araiblock,zijdblock,start,end)\n 
pars['measurement_step_unit']=\"J\"\n del PriorRecs[k] # put in CurrRec, take out of PriorRecs\n if errcode!=1:\n pars[\"specimen_lab_field_dc\"]=field\n pars[\"specimen_int\"]=-1*field*pars[\"specimen_b\"]\n pars[\"er_specimen_name\"]=s\n if pmagplotlib.verbose:\n print('Saved interpretation: ')\n pars=pmag.scoreit(pars,PmagSpecRec,accept,'',0)\n pmagplotlib.plot_b(AZD,araiblock,zijdblock,pars)\n if len(trmblock)>2:\n blab=field\n best=pars[\"specimen_int\"]\n Bs,TRMs=[],[]\n for trec in trmblock:\n Bs.append(float(trec['treatment_dc_field']))\n TRMs.append(float(trec['measurement_magn_moment']))\n NLpars=nlt.NLtrm(Bs,TRMs,best,blab,0) # calculate best fit parameters through TRM acquisition data, and get new banc\n Mp,Bp=[],[]\n for k in range(int(max(Bs)*1e6)):\n Bp.append(float(k)*1e-6)\n npred=nlt.TRM(Bp[-1],NLpars['xopt'][0],NLpars['xopt'][1]) # predicted NRM for this field\n Mp.append(npred)\n pmagplotlib.plot_trm(AZD['MRM'],Bs,TRMs,Bp,Mp,NLpars,trec['magic_experiment_name'])\n print(npred)\n print('Banc= ',float(NLpars['banc'])*1e6)\n if pmagplotlib.verbose:\n print('Banc= ',float(NLpars['banc'])*1e6)\n pmagplotlib.draw_figs(AZD)\n else:\n print('error on specimen ',s)\n except:\n pass\n if pmagplotlib.verbose and found==0: print(' None found :( ')\n if spc!=\"\":\n if BEG!=\"\":\n pars,errcode=pmag.PintPars(araiblock,zijdblock,BEG,END)\n pars['measurement_step_unit']=\"J\"\n pars[\"specimen_lab_field_dc\"]=field\n pars[\"specimen_int\"]=-1*field*pars[\"specimen_b\"]\n pars[\"er_specimen_name\"]=s\n pars['specimen_grade']='' # ungraded\n pmagplotlib.plot_b(AZD,araiblock,zijdblock,pars)\n if len(trmblock)>2:\n if inlt==0:\n donlt()\n inlt=1\n blab=field\n best=pars[\"specimen_int\"]\n Bs,TRMs=[],[]\n for trec in trmblock:\n Bs.append(float(trec['treatment_dc_field']))\n TRMs.append(float(trec['measurement_magn_moment']))\n NLpars=nlt.NLtrm(Bs,TRMs,best,blab,0) # calculate best fit parameters through TRM acquisition data, and get new banc\n #\n Mp,Bp=[],[]\n 
for k in range(int(max(Bs)*1e6)):\n                        Bp.append(float(k)*1e-6)\n                        npred=nlt.TRM(Bp[-1],NLpars['xopt'][0],NLpars['xopt'][1]) # predicted NRM for this field\n                files={}\n                for key in list(AZD.keys()):\n                    files[key]=s+'_'+key+fmt\n                pmagplotlib.save_plots(AZD,files)\n                sys.exit()\n            if plots==0:\n                ans='b'\n                while ans != \"\":\n                    print(\"\"\"\n       s[a]ve plot, set [b]ounds for calculation, [d]elete current interpretation, [p]revious, [s]ample, [q]uit:\n       \"\"\")\n                    ans=input('Return for next specimen \\n')\n                    if ans==\"\":\n                        specimen +=1\n                    if ans==\"d\":\n                        save_redo(PriorRecs,inspec)\n                        CurrRec=[]\n                        pmagplotlib.plot_arai_zij(AZD,araiblock,zijdblock,s,units[0])\n                        pmagplotlib.draw_figs(AZD)\n                    if ans=='a':\n                        files={}\n                        for key in list(AZD.keys()):\n                            files[key]=s+'_'+key+fmt\n                        pmagplotlib.save_plots(AZD,files)\n                        ans=\"\"\n                    if ans=='q':\n                        print(\"Good bye\")\n                        sys.exit()\n                    if ans=='p':\n                        specimen =specimen -1\n                        backup = 1\n                        ans=\"\"\n                    if ans=='s':\n                        keepon=1\n                        spec=input('Enter desired specimen name (or first part thereof): ')\n                        while keepon==1:\n                            try:\n                                specimen =sids.index(spec)\n                                keepon=0\n                            except:\n                                tmplist=[]\n                                for qq in range(len(sids)):\n                                    if spec in sids[qq]:tmplist.append(sids[qq])\n                                print(specimen,\" not found, but this was: \")\n                                print(tmplist)\n                                spec=input('Select one or try again\\n ')\n                        ans=\"\"\n                    if ans=='b':\n                        if end==0 or end >=len(araiblock[0]):end=len(araiblock[0])-1\n                        GoOn=0\n                        while GoOn==0:\n                            print('Enter index of first point for calculation: ','[',start,']')\n                            answer=input('return to keep default  ')\n                            if answer != \"\":start=int(answer)\n                            print('Enter index of last point for calculation: ','[',end,']')\n                            answer=input('return to keep default  ')\n                            if answer != \"\":\n                                end=int(answer)\n                            if start >=0 and start <len(araiblock[0]) and end >0 and end <len(araiblock[0]):\n                                GoOn=1\n                                pars,errcode=pmag.PintPars(araiblock,zijdblock,start,end)\n                                pars['measurement_step_unit']=\"J\"\n                                pars[\"specimen_lab_field_dc\"]=field\n                                pars[\"specimen_int\"]=-1*field*pars[\"specimen_b\"]\n                                pars[\"er_specimen_name\"]=s\n                                pmagplotlib.plot_b(AZD,araiblock,zijdblock,pars)\n                                if len(trmblock)>2:\n                                    blab=field\n                                    best=pars[\"specimen_int\"]\n                                    Bs,TRMs=[],[]\n                                    for trec in trmblock:\n                                        Bs.append(float(trec['treatment_dc_field']))\n                                        TRMs.append(float(trec['measurement_magn_moment']))\n                                    NLpars=nlt.NLtrm(Bs,TRMs,best,blab,0) # calculate best fit
parameters through TRM acquisition data, and get new banc\n Mp,Bp=[],[]\n for k in range(int(max(Bs)*1e6)):\n Bp.append(float(k)*1e-6)\n npred=nlt.TRM(Bp[-1],NLpars['xopt'][0],NLpars['xopt'][1]) # predicted NRM for this field\n Mp.append(npred)\n pmagplotlib.plot_trm(AZD['MRM'],Bs,TRMs,Bp,Mp,NLpars,trec['magic_experiment_name'])\n print('Banc= ',float(NLpars['banc'])*1e6)\n pmagplotlib.draw_figs(AZD)\n pars[\"specimen_lab_field_dc\"]=field\n pars[\"specimen_int\"]=-1*field*pars[\"specimen_b\"]\n saveit=input(\"Save this interpretation? [y]/n \\n\")\n if saveit!='n':\n specimen+=1\n PriorRecs.append(PmagSpecRec) # put back an interpretation\n save_redo(PriorRecs,inspec)\n ans=\"\"\n else:\n specimen+=1\n if fmt != \".pmag\":\n basename=s+'_microwave'+fmt\n files={}\n for key in list(AZD.keys()):\n files[key]=s+'_'+key+fmt\n if pmagplotlib.isServer:\n black = '#000000'\n purple = '#800080'\n titles={}\n titles['deremag']='DeReMag Plot'\n titles['zijd']='Zijderveld Plot'\n titles['arai']='Arai Plot'\n AZD = pmagplotlib.add_borders(AZD,titles,black,purple)\n pmagplotlib.save_plots(AZD,files)\n # pmagplotlib.combineFigs(s,files,3)\n if len(CurrRec)>0:\n for rec in CurrRec:\n PriorRecs.append(rec)\n CurrRec=[]\n if plots!=1:\n ans=input(\" Save last plot? 
1/[0] \")\n if ans==\"1\":\n if fmt != \".pmag\":\n files={}\n for key in list(AZD.keys()):\n files[key]=s+'_'+key+fmt\n pmagplotlib.save_plots(AZD,files)\n if len(CurrRec)>0:PriorRecs.append(CurrRec) # put back an interpretation\n if len(PriorRecs)>0:\n save_redo(PriorRecs,inspec)\n print('Updated interpretations saved in ',inspec)\n if pmagplotlib.verbose:\n print(\"Good bye\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef main():\n# initialize some stuff\n noave=0\n methcode,inst=\"\",\"\"\n phi,theta,peakfield,labfield=0,0,0,0\n dec=[315,225,180,135,45,90,270,270,270,90,180,180,0,0,0]\n inc=[0,0,0,0,0,-45,-45,0,45,45,45,-45,-90,-45,45]\n missing=1\n demag=\"N\"\n er_location_name=\"\"\n citation='This study'\n args=sys.argv\n methcode=\"LP-NO\"\n trm=0\n irm=0\n dc=\"0\"\n dir_path='.'\n#\n# get command line arguments\n#\n meas_file=\"magic_measurements.txt\"\n user=\"\"\n if \"-WD\" in args:\n ind=args.index(\"-WD\")\n dir_path=args[ind+1]\n samp_file=dir_path+'/'+'er_samples.txt'\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if '-F' in args:\n ind=args.index(\"-F\")\n meas_file=args[ind+1]\n if '-f' in args:\n ind=args.index(\"-f\")\n mag_file=dir_path+'/'+args[ind+1]\n try:\n input=open(mag_file,'r')\n except:\n print(\"bad mag file name\")\n sys.exit()\n else:\n print(\"spn_file field is required option\")\n print(main.__doc__)\n sys.exit()\n vol=10.5e-6 # default for spinner program\n if \"-V\" in args:\n ind=args.index(\"-V\")\n vol=float(args[ind+1])*1e-6 # convert volume to m^3\n if \"-A\" in args: noave=1\n if '-LP' in args:\n ind=args.index(\"-LP\")\n codelist=args[ind+1]\n codes=codelist.split(':')\n if \"AF\" in codes:\n demag='AF'\n methcode=\"LT-AF-Z\"\n if \"T\" in codes:\n demag=\"T\"\n methcode=\"LT-T-Z\"\n if \"I\" in codes:\n methcode=\"LP-IRM\"\n if \"A\" in codes:\n methcode=\"LT-AF-I\"\n dc='%10.3e'%(1e-3*float(args[ind+1]))\n MagRecs=[]\n 
version_num=pmag.get_version()\n    meas_file=dir_path+'/'+meas_file\n    for line in input.readlines():\n        instcode=\"ODP-MSPN\"\n        rec=line.split()\n        if len(rec)>2 and \"Leg\" not in line:\n            MagRec={}\n            MagRec['er_expedition_name']=rec[0]\n            MagRec['er_location_name']=rec[1]+rec[2]\n            MagRec[\"er_specimen_name\"]=rec[0]+'-'+'U'+rec[1]+rec[2].upper()+\"-\"+rec[3]+rec[4].upper()+'-'+rec[5]+'-'+'W'+'-'+rec[6]\n            MagRec[\"er_site_name\"]=MagRec['er_specimen_name']\n            MagRec[\"er_sample_name\"]=MagRec['er_specimen_name']\n            MagRec['magic_software_packages']=version_num\n            MagRec[\"treatment_temp\"]='%8.3e' % (273) # room temp in kelvin\n            MagRec[\"measurement_temp\"]='%8.3e' % (273) # room temp in kelvin\n            MagRec[\"treatment_ac_field\"]='0'\n            MagRec[\"treatment_dc_field\"]=dc\n            MagRec[\"treatment_dc_field_phi\"]='0'\n            MagRec[\"treatment_dc_field_theta\"]='0'\n            meas_type=\"LT-NO\"\n            if float(rec[11])==0:\n                pass\n            elif demag==\"AF\":\n                MagRec[\"treatment_ac_field\"]='%8.3e' %(float(rec[11])*1e-3) # peak field in tesla\n                meas_type=\"LT-AF-Z\"\n                MagRec[\"treatment_dc_field\"]='0'\n            else:\n                MagRec[\"treatment_temp\"]='%8.3e' % (float(rec[11])+273.) # temp in kelvin\n                meas_type=\"LT-T-Z\"\n            intens=1e-3*float(rec[10])*vol # convert mA/m to Am^2\n            MagRec[\"measurement_magn_moment\"]='%10.3e'% (intens)\n            MagRec[\"measurement_dec\"]=rec[8]\n            MagRec[\"measurement_inc\"]=rec[9]\n            MagRec[\"magic_instrument_codes\"]=\"ODP-MSPN\"\n            MagRec[\"er_analyst_mail_names\"]=user\n            MagRec[\"er_citation_names\"]=citation\n            MagRec[\"magic_method_codes\"]=meas_type\n            MagRec[\"measurement_flag\"]='g'\n            MagRec[\"measurement_csd\"]=''\n            MagRec[\"measurement_number\"]='1'\n            MagRecs.append(MagRec)\n    MagOuts=pmag.measurements_methods(MagRecs,noave)\n    pmag.magic_write(meas_file,MagOuts,'magic_measurements')\n    print(\"results put in \",meas_file)", "response": "This is the main entry point for converting ODP Molspin .
spn format files."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef convert(**kwargs):\n\n #get parameters from kwargs.get(parameter_name, default_value)\n user = kwargs.get('user', '')\n magfile = kwargs.get('magfile')\n\n #do any extra formating you need to variables here\n\n #open magfile to start reading data\n try:\n infile=open(magfile,'r')\n except Exception as ex:\n print((\"bad file path: \", magfile))\n return False, \"bad file path\"\n\n #Depending on the dataset you may need to read in all data here put it in a list of dictionaries or something here. If you do just replace the \"for line in infile.readlines():\" bellow with \"for d in data:\" where data is the structure you put your data into\n\n #define the lists that hold each line of data for their respective tables\n SpecRecs,SampRecs,SiteRecs,LocRecs,MeasRecs=[],[],[],[],[]\n\n #itterate over the contence of the file\n for line in infile.readlines():\n MeasRec,SpecRec,SampRec,SiteRec,LocRec={},{},{},{},{}\n\n #extract data from line and put it in variables\n\n #fill this line of the Specimen table using above variables\n if specimen!=\"\" and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else \"\" for x in SpecRecs]:\n SpecRec['analysts']=user\n SpecRecs.append(SpecRec)\n #fill this line of the Sample table using above variables\n if sample!=\"\" and sample not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['analysts']=user\n SampRecs.append(SampRec)\n #fill this line of the Site table using above variables\n if site!=\"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['analysts']=user\n SiteRecs.append(SiteRec)\n #fill this line of the Location table using above variables\n if location!=\"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['analysts']=user\n 
LocRecs.append(LocRec)\n\n #Fill this line of Meas Table using data in line\n MeasRec['analysts']=user\n MeasRecs.append(MeasRec)\n\n #close your file object so Python3 doesn't throw an annoying warning\n infile.close()\n\n #open a Contribution object\n con = cb.Contribution(output_dir_path,read_tables=[])\n\n #Create Magic Tables and add to a contribution\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts=pmag.measurements_methods3(MeasRecs,noave) #figures out method codes for measuremet data\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n #write to file\n con.write_table_to_file('specimens', custom_name=spec_file)\n con.write_table_to_file('samples', custom_name=samp_file)\n con.write_table_to_file('sites', custom_name=site_file)\n con.write_table_to_file('locations', custom_name=loc_file)\n meas_file = con.write_table_to_file('measurements', custom_name=meas_file)\n\n return True, meas_file", "response": "This function converts a single file into a single structure."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n inp,out=\"\",\"\"\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n f=open(file,'r')\n inp=f.readlines()\n if '-F' in sys.argv:\n ind=sys.argv.index('-F')\n o=sys.argv[ind+1]\n out=open(o,'w')\n if '-i' in sys.argv:\n cont=1\n while cont==1:\n try:\n b=1e-6*float(input('B (in microtesla): 0:\n for grab_key in grab_keys:\n for rec in Data: \n if change=='l':\n rec[grab_key]=rec[grab_key].lower()\n else:\n rec[grab_key]=rec[grab_key].upper()\n else:\n print('bad file name')\n pmag.magic_write(out_file,Data,file_type)", "response": "NAME \n change_case_magic.py\n\n 
DESCRIPTION\n picks out key and converts to upper or lower case\n\n SYNTAX\n change_case_magic.py [command line options]\n\n OPTIONS\n -h prints help message and quits\n -f FILE: specify input magic format file \n -F FILE: specify output magic format file , default is to overwrite input file\n -keys KEY1:KEY2 specify colon delimited list of keys to convert\n -[U,l] : specify [U]PPER or [l]ower case, default is lower"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_pole(continent,age):\n age=int(age)\n if continent=='congo':continent='nwaf'\n if continent=='kala':continent='neaf'\n if continent=='sam':continent='sac'\n if continent=='ant':continent='eant'\n if continent=='af':\n return [0,0,0] # assume africa fixed\n if continent=='aus':\n cont= [[5,9.7,54.3,-3.3],[10,10.4,52.8,-6.2],[15,11.5,49.8,-9.0],[20,12.4,48.0,-11.8],[25,12.9,48.3,-15.0],[30,12.8,49.9,-18.1],[35,13.5,50.8,-20.9],[40,14.1,52.7,-22.1],[45,14.4,54.7,-22.9],[50,14.7,56.5,-23.6],[55,14.0,57.3,-24.7],[60,12.9,57.9,-25.7],[65,13.6,58.8,-26.3],[70,17.3,60.2,-26.3],[75,19.8,63.3,-26.7],[80,20.5,68.5,-26.6],[85,19.8,74.6,-26.9],[90,17.7,80.9,-28.9],[95,15.9,86.2,-31.1],[100,18.4,89.3,-30.7],[105,17.9,95.6,-32.6],[110,17.3,101.0,-34.8],[115,16.8,105.6,-37.4],[120,16.4,109.4,-40.3],[125,15.7,110.3,-42.3],[130,15.9,111.6,-44.4],[135,15.9,113.1,-46.6],[140,15.6,113.7,-48.3],[145,15.0,113.1,-50.5],[150,15.5,113.5,-52.5],[155,17.6,115.7,-54.3],[160,19.5,117.8,-56.2],[165,19.5,117.8,-56.2],[170,19.5,117.8,-56.2],[175,19.5,117.8,-56.2],[180,19.5,117.8,-56.2],[185,19.5,117.8,-56.2],[190,19.5,117.8,-56.2],[195,19.5,117.8,-56.2],[200,19.5,117.8,-56.2],[205,19.5,117.8,-56.2],[210,19.5,117.8,-56.2],[215,19.5,117.8,-56.2],[220,19.5,117.8,-56.2],[225,19.5,117.8,-56.2],[230,19.5,117.8,-56.2],[235,19.5,117.8,-56.2],[240,19.5,117.8,-56.2],[245,19.5,117.8,-56.2],[250,19.5,117.8,-56.2],[255,19.5,117.8,-56.2],[260,19.5,117.8,-56.2],[265,19.5,117.8,-56.2],[270,
19.5,117.8,-56.2],[275,19.5,117.8,-56.2],[280,19.5,117.8,-56.2],[285,19.5,117.8,-56.2],[290,19.5,117.8,-56.2],[295,19.5,117.8,-56.2],[300,19.5,117.8,-56.2],[305,19.5,117.8,-56.2],[310,19.5,117.8,-56.2],[315,19.5,117.8,-56.2],[320,19.5,117.8,-56.2]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='eur':\n cont= [[5,17.9,-27.1,0.6],[10,18.4,-26.3,1.2],[15,18.9,-24.6,1.8],[20,17.2,-22.7,2.4],[25,20.7,-19.0,3.0],[30,24.9,-19.5,4.3],[35,27.2,-19.3,5.8],[40,28.7,-18.5,7.5],[45,30.3,-18.2,9.0],[50,30.8,-16.7,10.0],[55,32.7,-15.4,11.3],[60,34.8,-15.7,12.6],[65,36.0,-15.8,13.6],[70,35.4,-16.1,14.9],[75,35.5,-15.7,15.5],[80,36.1,-15.2,16.9],[85,37.0,-14.2,18.8],[90,39.6,-13.7,21.9],[95,39.8,-13.7,25.2],[100,40.2,-12.5,28.5],[105,41.6,-11.2,31.7],[110,42.6,-9.8,34.5],[115,43.4,-8.5,37.3],[120,44.5,-6.9,40.3],[125,45.3,-6.3,42.0],[130,45.9,-5.7,43.0],[135,46.6,-5.3,44.0],[140,47.3,-4.9,45.2],[145,47.8,-4.8,46.4],[150,48.6,-4.0,47.9],[155,49.8,-2.2,50.0],[160,50.6,-1.2,52.1],[165,51.4,-0.3,54.2],[170,52.1,0.6,56.3],[175,52.9,1.9,59.6],[180,53.0,2.0,60.0],[185,53.0,2.0,60.4],[190,53.1,2.1,60.8],[195,53.2,2.2,61.1],[200,53.3,2.2,61.5],[205,53.2,2.6,59.7],[210,53.1,2.9,57.8],[215,53.1,3.3,55.9],[220,52.9,3.6,53.6],[225,52.7,4.0,51.4],[230,52.4,4.4,49.1],[235,52.2,4.8,46.8],[240,51.9,5.3,44.5],[245,51.9,5.3,44.5],[250,51.9,5.3,44.5],[255,51.9,5.3,44.5],[260,51.9,5.3,44.5],[265,51.9,5.3,44.5],[270,51.9,5.3,44.5],[275,51.9,5.3,44.5],[280,51.9,5.3,44.5],[285,51.9,5.3,44.5],[290,51.9,5.3,44.5],[295,51.9,5.3,44.5],[300,51.9,5.3,44.5],[305,51.9,5.3,44.5],[310,51.9,5.3,44.5],[315,51.9,5.3,44.5],[320,51.9,5.3,44.5]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='mad':\n cont= 
[[5,90.0,0.0,0.0],[10,90.0,0.0,0.0],[15,90.0,0.0,0.0],[20,90.0,0.0,0.0],[25,90.0,0.0,0.0],[30,90.0,0.0,0.0],[35,90.0,0.0,0.0],[40,90.0,0.0,0.0],[45,90.0,0.0,0.0],[50,90.0,0.0,0.0],[55,90.0,0.0,0.0],[60,90.0,0.0,0.0],[65,90.0,0.0,0.0],[70,90.0,0.0,0.0],[75,90.0,0.0,0.0],[80,90.0,0.0,0.0],[85,90.0,0.0,0.0],[90,90.0,0.0,0.0],[95,90.0,0.0,0.0],[100,90.0,0.0,0.0],[105,90.0,0.0,0.0],[110,90.0,0.0,0.0],[115,90.0,0.0,0.0],[120,90.0,0.0,0.0],[125,2.6,-63.3,1.8],[130,2.6,-63.3,3.9],[135,1.5,-57.6,5.7],[140,1.0,-55.9,7.2],[145,0.6,118.3,-8.9],[150,4.5,119.8,-10.9],[155,10.6,130.1,-13.0],[160,14.8,137.5,-15.4],[165,14.8,137.5,-15.4],[170,14.8,137.5,-15.4],[175,14.8,137.5,-15.4],[180,14.8,137.5,-15.4],[185,14.8,137.5,-15.4],[190,14.8,137.5,-15.4],[195,14.8,137.5,-15.4],[200,14.8,137.5,-15.4],[205,14.8,137.5,-15.4],[210,14.8,137.5,-15.4],[215,14.8,137.5,-15.4],[220,14.8,137.5,-15.4],[225,14.8,137.5,-15.4],[230,14.8,137.5,-15.4],[235,14.8,137.5,-15.4],[240,14.8,137.5,-15.4],[245,14.8,137.5,-15.4],[250,14.8,137.5,-15.4],[255,14.8,137.5,-15.4],[260,14.8,137.5,-15.4],[265,14.8,137.5,-15.4],[270,14.8,137.5,-15.4],[275,14.8,137.5,-15.4],[280,14.8,137.5,-15.4],[285,14.8,137.5,-15.4],[290,14.8,137.5,-15.4],[295,14.8,137.5,-15.4],[300,14.8,137.5,-15.4],[305,14.8,137.5,-15.4],[310,14.8,137.5,-15.4],[315,14.8,137.5,-15.4],[320,14.8,137.5,-15.4]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='nwaf':\n cont= 
[[5,90.0,0.0,0.0],[10,90.0,0.0,0.0],[15,90.0,0.0,0.0],[20,90.0,0.0,0.0],[25,90.0,0.0,0.0],[30,90.0,0.0,0.0],[35,90.0,0.0,0.0],[40,90.0,0.0,0.0],[45,90.0,0.0,0.0],[50,90.0,0.0,0.0],[55,90.0,0.0,0.0],[60,90.0,0.0,0.0],[65,90.0,0.0,0.0],[70,90.0,0.0,0.0],[75,90.0,0.0,0.0],[80,90.0,0.0,0.0],[85,19.6,6.7,0.0],[90,16.6,6.7,-0.2],[95,16.5,6.7,-0.4],[100,16.5,6.7,-0.5],[105,16.5,6.7,-0.7],[110,16.5,6.7,-0.8],[115,16.5,6.7,-1.0],[120,16.5,6.7,-1.1],[125,16.5,6.7,-1.2],[130,16.5,6.7,-1.2],[135,16.5,6.7,-1.2],[140,16.5,6.7,-1.2],[145,16.5,6.7,-1.2],[150,16.5,6.7,-1.2],[155,16.5,6.7,-1.2],[160,16.5,6.7,-1.2],[165,16.5,6.7,-1.2],[170,16.5,6.7,-1.2],[175,16.5,6.7,-1.2],[180,16.5,6.7,-1.2],[185,16.5,6.7,-1.2],[190,16.5,6.7,-1.2],[195,16.5,6.7,-1.2],[200,16.5,6.7,-1.2],[205,16.5,6.7,-1.2],[210,16.5,6.7,-1.2],[215,16.5,6.7,-1.2],[220,16.5,6.7,-1.2],[225,16.5,6.7,-1.2],[230,16.5,6.7,-1.2],[235,16.5,6.7,-1.2],[240,16.5,6.7,-1.2],[245,16.5,6.7,-1.2],[250,16.5,6.7,-1.2],[255,16.5,6.7,-1.2],[260,16.5,6.7,-1.2],[265,16.5,6.7,-1.2],[270,16.5,6.7,-1.2],[275,16.5,6.7,-1.2],[280,16.5,6.7,-1.2],[285,16.5,6.7,-1.2],[290,16.5,6.7,-1.2],[295,16.5,6.7,-1.2],[300,16.5,6.7,-1.2],[305,16.5,6.7,-1.2],[310,16.5,6.7,-1.2],[315,16.5,6.7,-1.2],[320,16.5,6.7,-1.2]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='col':\n cont= 
[[5,62.1,-40.2,1.6],[10,61.8,-40.3,3.3],[15,59.6,-38.1,5.4],[20,58.5,-37.1,7.5],[25,57.7,-36.4,9.6],[30,56.7,-34.5,11.9],[35,56.5,-33.4,14.3],[40,57.1,-32.6,16.6],[45,57.0,-31.4,18.6],[50,58.2,-31.2,20.5],[55,60.7,-31.9,22.0],[60,62.5,-32.8,23.3],[65,63.7,-33.5,24.6],[70,63.5,-33.4,26.1],[75,63.2,-33.9,28.6],[80,62.7,-34.3,31.5],[85,61.2,-34.3,34.4],[90,59.1,-34.5,37.3],[95,57.2,-34.7,40.3],[100,55.7,-34.8,43.3],[105,54.3,-34.9,46.4],[110,53.1,-35.0,49.5],[115,52.2,-35.0,51.7],[120,51.6,-35.0,52.8],[125,50.7,-33.9,54.0],[130,48.5,-33.4,55.4],[135,47.5,-33.3,56.0],[140,47.5,-33.3,56.1],[145,47.5,-33.3,56.1],[150,47.5,-33.3,56.2],[155,47.5,-33.3,56.2],[160,47.5,-33.3,56.2],[165,47.5,-33.3,56.2],[170,47.5,-33.3,56.2],[175,47.5,-33.3,56.2],[180,47.5,-33.3,56.2],[185,47.5,-33.3,56.2],[190,47.5,-33.3,56.2],[195,47.5,-33.3,56.2],[200,47.5,-33.3,56.2],[205,47.5,-33.3,56.2],[210,47.5,-33.3,56.2],[215,47.5,-33.3,56.2],[220,47.5,-33.3,56.2],[225,47.5,-33.3,56.2],[230,47.5,-33.3,56.2],[235,47.5,-33.3,56.2],[240,47.5,-33.3,56.2],[245,47.5,-33.3,56.2],[250,47.5,-33.3,56.2],[255,47.5,-33.3,56.2],[260,47.5,-33.3,56.2],[265,47.5,-33.3,56.2],[270,47.5,-33.3,56.2],[275,47.5,-33.3,56.2],[280,47.5,-33.3,56.2],[285,47.5,-33.3,56.2],[290,47.5,-33.3,56.2],[295,47.5,-33.3,56.2],[300,47.5,-33.3,56.2],[305,47.5,-33.3,56.2],[310,47.5,-33.3,56.2],[315,47.5,-33.3,56.2],[320,47.5,-33.3,56.2]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='grn':\n cont= 
[[5,80.9,22.8,1.3],[10,80.9,22.9,2.6],[15,80.9,23.2,4.1],[20,80.6,24.4,5.5],[25,79.5,28.1,6.8],[30,77.3,12.5,8.6],[35,74.8,7.2,10.2],[40,72.6,9.5,11.5],[45,71.4,11.4,12.7],[50,71.0,20.7,14.2],[55,71.8,29.6,16.8],[60,71.9,30.5,17.5],[65,71.3,32.9,17.6],[70,69.8,29.0,17.9],[75,69.0,26.6,18.5],[80,67.6,21.0,19.8],[85,66.3,16.4,21.5],[90,65.9,11.5,24.2],[95,64.2,5.5,26.9],[100,62.7,2.8,30.1],[105,62.4,1.6,33.3],[110,62.1,0.9,36.5],[115,61.8,0.5,39.7],[120,61.8,0.8,43.1],[125,61.9,1.0,44.9],[130,62.2,1.3,46.0],[135,62.4,1.6,47.1],[140,62.7,1.6,48.4],[145,62.9,1.3,49.7],[150,63.2,1.8,51.4],[155,63.7,3.6,53.8],[160,64.1,4.2,56.0],[165,64.4,4.8,58.3],[170,64.7,5.3,60.6],[175,64.8,6.0,64.1],[180,64.9,6.0,64.5],[185,64.9,5.9,64.9],[190,65.0,5.9,65.4],[195,65.0,5.8,65.8],[200,65.1,5.8,66.2],[205,65.1,5.7,66.7],[210,65.2,5.7,67.1],[215,65.2,5.6,67.5],[220,65.2,5.6,67.5],[225,65.2,5.6,67.5],[230,65.2,5.6,67.5],[235,65.2,5.6,67.5],[240,65.2,5.6,67.5],[245,65.2,5.6,67.5],[250,65.2,5.6,67.5],[255,65.2,5.6,67.5],[260,65.2,5.6,67.5],[265,65.2,5.6,67.5],[270,65.2,5.6,67.5],[275,65.2,5.6,67.5],[280,65.2,5.6,67.5],[285,65.2,5.6,67.5],[290,65.2,5.6,67.5],[295,65.2,5.6,67.5],[300,65.2,5.6,67.5],[305,65.2,5.6,67.5],[310,65.2,5.6,67.5],[315,65.2,5.6,67.5],[320,65.2,5.6,67.5]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='nam':\n cont= 
[[5,80.9,22.8,1.3],[10,80.9,22.9,2.6],[15,80.9,23.2,4.1],[20,80.6,24.4,5.5],[25,79.5,28.1,6.8],[30,77.3,12.5,8.6],[35,75.4,3.5,10.5],[40,74.5,-1.1,12.6],[45,74.3,-4.3,14.6],[50,75.9,-3.5,16.2],[55,79.8,4.1,17.6],[60,81.6,5.1,19.1],[65,82.6,3.2,20.7],[70,81.6,-6.5,22.4],[75,80.4,-13.1,24.6],[80,78.2,-18.8,27.5],[85,76.2,-21.3,30.5],[90,74.6,-23.0,33.8],[95,72.0,-24.7,36.9],[100,70.0,-24.0,40.2],[105,69.1,-23.3,43.6],[110,68.3,-22.6,47.0],[115,67.6,-21.8,50.4],[120,67.1,-20.4,53.9],[125,67.0,-19.7,55.6],[130,67.0,-19.1,56.7],[135,67.1,-18.7,57.9],[140,67.2,-18.4,59.2],[145,67.1,-18.3,60.5],[150,67.3,-17.6,62.2],[155,67.6,-15.5,64.6],[160,67.6,-14.5,66.8],[165,67.7,-13.6,69.1],[170,67.8,-12.8,71.4],[175,67.7,-11.5,74.8],[180,67.7,-11.5,75.3],[185,67.7,-11.5,75.7],[190,67.7,-11.5,76.1],[195,67.7,-11.5,76.6],[200,67.7,-11.5,77.0],[205,67.7,-11.5,77.4],[210,67.7,-11.5,77.9],[215,67.7,-11.5,78.3],[220,67.7,-11.5,78.3],[225,67.7,-11.5,78.3],[230,67.7,-11.5,78.3],[235,67.7,-11.5,78.3],[240,67.7,-11.5,78.3],[245,67.7,-11.5,78.3],[250,67.7,-11.5,78.3],[255,67.7,-11.5,78.3],[260,67.7,-11.5,78.3],[265,67.7,-11.5,78.3],[270,67.7,-11.5,78.3],[275,67.7,-11.5,78.3],[280,67.7,-11.5,78.3],[285,67.7,-11.5,78.3],[290,67.7,-11.5,78.3],[295,67.7,-11.5,78.3],[300,67.7,-11.5,78.3],[305,67.7,-11.5,78.3],[310,67.7,-11.5,78.3],[315,67.7,-11.5,78.3],[320,67.7,-11.5,78.3]]\n for rec in cont:\n if int(age)==int(rec[0]): \n pole= [rec[1],rec[2],rec[3]] \n return pole\n if continent=='par':\n cont= 
[[5,62.1,-40.2,1.6],[10,61.8,-40.3,3.3],[15,59.6,-38.1,5.4],[20,58.5,-37.1,7.5],[25,57.7,-36.4,9.6],[30,56.7,-34.5,11.9],[35,56.5,-33.4,14.3],[40,57.1,-32.6,16.6],[45,57.0,-31.4,18.6],[50,58.2,-31.2,20.5],[55,60.7,-31.9,22.0],[60,62.5,-32.8,23.3],[65,63.7,-33.5,24.6],[70,63.5,-33.4,26.1],[75,63.2,-33.9,28.6],[80,62.7,-34.3,31.5],[85,61.2,-34.3,34.4],[90,59.1,-34.5,37.3],[95,57.2,-34.7,40.3],[100,55.7,-34.8,43.3],[105,54.3,-34.9,46.4],[110,53.1,-35.0,49.5],[115,52.2,-35.0,51.7],[120,51.6,-35.0,52.8],[125,50.7,-33.9,54.0],[130,48.5,-33.4,55.4],[135,47.5,-33.3,56.0],[140,47.5,-33.3,56.1],[145,47.5,-33.3,56.1],[150,47.5,-33.3,56.2],[155,47.5,-33.3,56.2],[160,47.5,-33.3,56.2],[165,47.5,-33.3,56.2],[170,47.5,-33.3,56.2],[175,47.5,-33.3,56.2],[180,47.5,-33.3,56.2],[185,47.5,-33.3,56.2],[190,47.5,-33.3,56.2],[195,47.5,-33.3,56.2],[200,47.5,-33.3,56.2],[205,47.5,-33.3,56.2],[210,47.5,-33.3,56.2],[215,47.5,-33.3,56.2],[220,47.5,-33.3,56.2],[225,47.5,-33.3,56.2],[230,47.5,-33.3,56.2],[235,47.5,-33.3,56.2],[240,47.5,-33.3,56.2],[245,47.5,-33.3,56.2],[250,47.5,-33.3,56.2],[255,47.5,-33.3,56.2],[260,47.5,-33.3,56.2],[265,47.5,-33.3,56.2],[270,47.5,-33.3,56.2],[275,47.5,-33.3,56.2],[280,47.5,-33.3,56.2],[285,47.5,-33.3,56.2],[290,47.5,-33.3,56.2],[295,47.5,-33.3,56.2],[300,47.5,-33.3,56.2],[305,47.5,-33.3,56.2],[310,47.5,-33.3,56.2],[315,47.5,-33.3,56.2],[320,47.5,-33.3,56.2]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='eant':\n cont= 
[[5,8.2,-49.4,0.8],[10,8.2,-49.4,1.5],[15,9.8,-48.4,2.1],[20,10.7,-47.9,2.8],[25,11.4,-48.2,3.8],[30,11.8,-48.3,4.8],[35,12.5,-46.1,6.0],[40,13.6,-41.5,7.4],[45,11.1,-41.1,8.5],[50,9.1,-40.9,9.6],[55,9.4,-43.5,10.3],[60,10.6,-47.4,10.8],[65,8.1,-47.7,11.3],[70,0.4,-43.3,12.2],[75,3.7,138.9,-13.8],[80,2.7,142.7,-16.1],[85,0.6,144.7,-18.8],[90,1.4,-37.0,22.3],[95,2.9,-38.3,25.8],[100,3.1,146.5,-26.8],[105,5.5,148.9,-30.3],[110,7.4,150.7,-33.9],[115,9.0,152.3,-37.6],[120,10.3,153.6,-41.3],[125,9.4,152.4,-43.0],[130,9.1,151.5,-45.3],[135,8.6,150.9,-47.6],[140,8.0,150.1,-49.2],[145,7.3,148.1,-50.7],[150,7.4,147.1,-52.6],[155,9.0,148.0,-55.4],[160,10.5,148.8,-58.2],[165,10.5,148.8,-58.2],[170,10.5,148.8,-58.2],[175,10.5,148.8,-58.2],[180,10.5,148.8,-58.2],[185,10.5,148.8,-58.2],[190,10.5,148.8,-58.2],[195,10.5,148.8,-58.2],[200,10.5,148.8,-58.2],[205,10.5,148.8,-58.2],[210,10.5,148.8,-58.2],[215,10.5,148.8,-58.2],[220,10.5,148.8,-58.2],[225,10.5,148.8,-58.2],[230,10.5,148.8,-58.2],[235,10.5,148.8,-58.2],[240,10.5,148.8,-58.2],[245,10.5,148.8,-58.2],[250,10.5,148.8,-58.2],[255,10.5,148.8,-58.2],[260,10.5,148.8,-58.2],[265,10.5,148.8,-58.2],[270,10.5,148.8,-58.2],[275,10.5,148.8,-58.2],[280,10.4,148.8,-58.2],[285,10.5,148.8,-58.2],[290,10.5,148.8,-58.2],[295,10.5,148.8,-58.2],[300,10.5,148.8,-58.2],[305,10.4,148.8,-58.2],[310,10.5,148.8,-58.2],[315,10.5,148.8,-58.2],[320,10.5,148.8,-58.2]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='ind':\n cont= 
[[5,22.7,32.9,-2.3],[10,23.8,33.1,-4.6],[15,27.1,27.4,-6.0],[20,29.6,23.9,-7.5],[25,25.1,33.2,-10.3],[30,22.5,38.5,-13.3],[35,22.6,41.3,-15.9],[40,25.5,42.7,-17.4],[45,24.2,40.1,-19.7],[50,24.0,34.2,-23.5],[55,22.1,29.2,-28.3],[60,19.5,25.2,-34.4],[65,19.0,21.9,-40.2],[70,20.5,18.9,-44.4],[75,21.8,18.2,-47.3],[80,22.3,18.2,-49.1],[85,21.8,22.1,-53.8],[90,20.0,27.5,-58.8],[95,20.7,28.1,-57.8],[100,21.3,28.8,-56.8],[105,21.9,29.6,-55.9],[110,22.6,30.3,-54.9],[115,23.3,31.1,-54.0],[120,24.0,32.0,-53.1],[125,23.4,34.8,-55.2],[130,21.2,36.2,-60.1],[135,21.2,36.2,-61.6],[140,21.9,37.5,-61.5],[145,22.6,39.0,-62.5],[150,24.1,40.4,-62.9],[155,26.9,41.2,-61.6],[160,29.8,42.1,-60.5],[165,29.8,42.1,-60.5],[170,29.8,42.1,-60.5],[175,29.8,42.1,-60.5],[180,29.8,42.1,-60.5],[185,29.8,42.1,-60.5],[190,29.8,42.1,-60.5],[195,29.8,42.1,-60.5],[200,29.8,42.1,-60.5],[205,29.8,42.1,-60.5],[210,29.8,42.1,-60.5],[215,29.8,42.1,-60.5],[220,29.8,42.1,-60.5],[225,29.8,42.1,-60.5],[230,29.8,42.1,-60.5],[235,29.8,42.1,-60.5],[240,29.8,42.1,-60.5],[245,29.8,42.1,-60.5],[250,29.8,42.1,-60.5],[255,29.8,42.1,-60.5],[260,29.8,42.1,-60.5],[265,29.8,42.1,-60.5],[270,29.8,42.1,-60.5],[275,29.8,42.1,-60.5],[280,29.8,42.1,-60.5],[285,29.8,42.1,-60.5],[290,29.8,42.1,-60.5],[295,29.8,42.1,-60.5],[300,29.8,42.1,-60.5],[305,29.8,42.1,-60.5],[310,29.8,42.1,-60.5],[315,29.8,42.1,-60.5],[320,29.8,42.1,-60.5]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='neaf':\n cont= 
[[5,90.0,0.0,0.0],[10,90.0,0.0,0.0],[15,90.0,0.0,0.0],[20,90.0,0.0,0.0],[25,90.0,0.0,0.0],[30,90.0,0.0,0.0],[35,90.0,0.0,0.0],[40,90.0,0.0,0.0],[45,90.0,0.0,0.0],[50,90.0,0.0,0.0],[55,90.0,0.0,0.0],[60,90.0,0.0,0.0],[65,90.0,0.0,0.0],[70,90.0,0.0,0.0],[75,90.0,0.0,0.0],[80,90.0,0.0,0.0],[85,27.9,-61.4,0.0],[90,39.8,-61.4,-0.1],[95,40.8,-61.4,-0.2],[100,40.3,-61.4,-0.3],[105,40.6,-61.4,-0.4],[110,40.6,-61.4,-0.5],[115,40.5,-61.4,-0.6],[120,40.5,-61.4,-0.7],[125,40.5,-61.4,-0.7],[130,40.5,-61.4,-0.7],[135,40.5,-61.4,-0.7],[140,40.5,-61.4,-0.7],[145,40.5,-61.4,-0.7],[150,40.5,-61.4,-0.7],[155,40.5,-61.4,-0.7],[160,40.5,-61.4,-0.7],[165,40.5,-61.4,-0.7],[170,40.5,-61.4,-0.7],[175,40.5,-61.4,-0.7],[180,40.5,-61.4,-0.7],[185,40.5,-61.4,-0.7],[190,40.5,-61.4,-0.7],[195,40.5,-61.4,-0.7],[200,40.5,-61.4,-0.7],[205,40.5,-61.4,-0.7],[210,40.5,-61.4,-0.7],[215,40.5,-61.4,-0.7],[220,40.5,-61.4,-0.7],[225,40.5,-61.4,-0.7],[230,40.5,-61.4,-0.7],[235,40.5,-61.4,-0.7],[240,40.5,-61.4,-0.7],[245,40.4,-61.4,-0.7],[250,40.4,-61.4,-0.7],[255,40.4,-61.4,-0.7],[260,40.4,-61.4,-0.7],[265,40.4,-61.4,-0.7],[270,40.4,-61.4,-0.7],[275,40.4,-61.4,-0.7],[280,40.4,-61.4,-0.7],[285,40.4,-61.4,-0.7],[290,40.4,-61.4,-0.7],[295,40.4,-61.4,-0.7],[300,40.4,-61.4,-0.7],[305,40.4,-61.4,-0.7],[310,40.4,-61.4,-0.7],[315,40.4,-61.4,-0.7],[320,40.4,-61.4,-0.7]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='sac':\n cont= 
[[5,62.1,-40.2,1.6],[10,61.8,-40.3,3.3],[15,59.6,-38.1,5.4],[20,58.5,-37.1,7.5],[25,57.7,-36.4,9.6],[30,56.7,-34.5,11.9],[35,56.5,-33.4,14.3],[40,57.1,-32.6,16.6],[45,57.0,-31.4,18.6],[50,58.2,-31.2,20.5],[55,60.7,-31.9,22.0],[60,62.5,-32.8,23.3],[65,63.7,-33.5,24.6],[70,63.5,-33.4,26.1],[75,63.2,-33.9,28.6],[80,62.7,-34.3,31.5],[85,61.2,-34.3,34.4],[90,59.1,-34.5,37.3],[95,57.2,-34.7,40.3],[100,55.7,-34.8,43.3],[105,54.3,-34.9,46.4],[110,53.1,-35.0,49.5],[115,52.2,-35.0,51.7],[120,51.6,-35.0,52.8],[125,50.7,-33.9,54.0],[130,50.1,-32.8,54.9],[135,50.0,-32.5,55.1],[140,50.0,-32.5,55.1],[145,50.0,-32.5,55.1],[150,50.0,-32.5,55.1],[155,50.0,-32.5,55.1],[160,50.0,-32.5,55.1],[165,50.0,-32.5,55.1],[170,50.0,-32.5,55.1],[175,50.0,-32.5,55.1],[180,50.0,-32.5,55.1],[185,50.0,-32.5,55.1],[190,50.0,-32.5,55.1],[195,50.0,-32.5,55.1],[200,50.0,-32.5,55.1],[205,50.0,-32.5,55.1],[210,50.0,-32.5,55.1],[215,50.0,-32.5,55.1],[220,50.0,-32.5,55.1],[225,50.0,-32.5,55.1],[230,50.0,-32.5,55.1],[235,50.0,-32.5,55.1],[240,50.0,-32.5,55.1],[245,50.0,-32.5,55.1],[250,50.0,-32.5,55.1],[255,50.0,-32.5,55.1],[260,50.0,-32.5,55.1],[265,50.0,-32.5,55.1],[270,50.0,-32.5,55.1],[275,50.0,-32.5,55.1],[280,50.0,-32.5,55.1],[285,50.0,-32.5,55.1],[290,50.0,-32.5,55.1],[295,50.0,-32.5,55.1],[300,50.0,-32.5,55.1],[305,50.0,-32.5,55.1],[310,50.0,-32.5,55.1],[315,50.0,-32.5,55.1],[320,50.0,-32.5,55.1]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='ib':\n cont= 
[[5,0,0,0],[10,0,0,0],[15,77.93,59.14,.12],[20,77.93,59.14,.24],[25,77.93,59.14,.24],[30,-31.21,166.79,1.73],[35,-31.21,166.79,1.73],[40,-27,160,1.73],[45,-23.85,157.12,1.72],[50,-20.6,157.88,2.1],[55,-20.72,162.4,2.61],[60,-16,164,3],[65,-12.95,165.77,3.1],[70,-16.45,167.49,3.1],[75,-16.45,167.49,3.1],[80,-37.17,169,8.04],[85,-38.86,169.85,10.28],[90,-42.64,173.2,16.56],[95,-43.,174.,20],[100,-43.,174.,25],[105,-43.,174.,30],[110,-43.,174.,35],[115,-43.,174.,35],[120,-43.,174.,44.77],[120,-43.86,174.17,44.77],[125,-43.86,174.17,44.77],[130,-46.19,177.47,45.91],[135,-46.19,177.47,45.91],[140,-46.19,177.47,45.91],[145,-46.19,177.47,45.91],[150,-46.19,177.47,45.91],[155,-47.12,179.45,46.29],[160,-47.12,179.45,46.29],[165,-47.12,179.45,46.29],[170,-47.55,180.35,50.62],[175,-46.8,181.1,50.33],[180,-46.8,181.1,50.33],[185,-46.8,181.1,50.33],[190,-46.8,181.1,50.33],[195,-46.8,181.1,50.33],[200,-46.8,181.1,50.33],[205,-46.8,181.1,50.33],[210,-46.8,181.1,50.33],[215,-46.8,181.1,50.33],[220,-46.8,181.1,50.33],[225,-46.8,181.1,50.33],[230,-46.8,181.1,50.33],[235,-46.8,181.1,50.33],[240,-46.8,181.1,50.33],[245,-46.8,181.1,50.33],[250,-46.8,181.1,50.33],[255,-46.8,181.1,50.33],[260,-46.8,181.1,50.33],[265,-46.8,181.1,50.33],[270,-46.8,181.1,50.33],[275,-46.8,181.1,50.33],[280,-46.8,181.1,50.33],[285,-46.8,181.1,50.33],[290,-46.8,181.1,50.33],[295,-46.8,181.1,50.33],[300,-46.8,181.1,50.33],[305,-46.8,181.1,50.33],[310,-46.8,181.1,50.33],[315,-46.8,181.1,50.33],[320,-46.8,181.1,50.33]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n if continent=='saf':\n cont= 
[[0,0,56.0,2.2],[5,0,57.6,2.5],[10,0,53.9,2.5],[15,0,66.5,3.0],[20,0,75.5,4.7],[25,0,84.1,6.8],[30,0,95.8,7.9],[35,0,98.8,8.7],[40,0,107.5,9.2],[45,0,110.9,10.3],[50,0,111.6,13.2],[55,0,115.7,13.9],[60,0,123.5,15.7],[65,0,127.8,17.5],[70,0,137.2,17.5],[75,0,140.3,19.2],[80,0,138.1,19.3],[85,0,142.9,19.6],[90,0,144.7,20.5],[95,0,144.3,20.8],[100,0,150.8,22.3],[105,0,160.2,26.9],[110,0,169.2,32.1],[115,0,170.3,35.6],[120,0,171.3,36.2],[125,0,172.1,37.5],[130,0,170.0,39.4],[135,0,172.6,42.1],[140,0,163.1,40.8],[145,0,155.2,38.1],[150,0,155.0,34.8],[155,0,155.0,33.2],[160,0,157.0,30.7],[165,0,159.5,32.5],[170,0,167.6,28.8],[175,0,167.8,27.7],[180,0,167.4,25.9],[185,0,168.4,21.6],[190,0,158.8,18.2],[195,0,147.9,17.8],[200,0,144.4,19.2],[205,0,137.4,20.7],[210,0,133.6,23.1],[215,0,129.9,26.4],[220,0,127.2,27.2],[225,0,128.0,29.4],[230,0,130.0,31.4],[235,0,133.6,35.3],[240,0,137.4,36.5],[245,0,143.1,39.6],[250,0,145.4,40.4],[255,0,145.6,41.8],[260,0,144.8,41.9],[265,0,141.6,47.1],[270,0,140.3,46.8],[275,0,138.2,51.1],[280,0,138.6,51.6],[285,0,136.5,51.8],[290,0,135.8,52.8],[295,0,136.8,53.5],[300,0,136.9,55.4],[305,0,138.9,56.3],[310,0,139.9,59.5],[315,0,138.9,60.8],[320,0,132.5,61.6]]\n for rec in cont:\n if age==int(rec[0]): return [rec[1],rec[2],rec[3]] \n return 'NONE'", "response": "Returns the pole longitude and angle for the specified continent and age."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef main():\n cmap = 'RdYlBu'\n date = 2016.\n if not ccrs:\n print(\"-W- You must intstall the cartopy module to run plot_magmap.py\")\n sys.exit()\n dir_path = '.'\n lincr = 1 # level increment for contours\n if '-WD' in sys.argv:\n ind = sys.argv.index('-WD')\n dir_path = sys.argv[ind+1]\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-fmt' in sys.argv:\n ind = sys.argv.index('-fmt')\n fmt = sys.argv[ind+1]\n if fmt == 'jpg':\n print('jpg not a supported option')\n print(main.__doc__)\n 
sys.exit()\n else:\n fmt = 'png'\n if '-cm' in sys.argv:\n ind = sys.argv.index('-cm')\n cmap = sys.argv[ind+1]\n if '-el' in sys.argv:\n ind = sys.argv.index('-el')\n el = sys.argv[ind+1]\n else:\n el = 'B'\n if '-alt' in sys.argv:\n ind = sys.argv.index('-alt')\n alt = sys.argv[ind+1]\n else:\n alt = 0\n if '-lon0' in sys.argv:\n ind = sys.argv.index('-lon0')\n lon_0 = float(sys.argv[ind+1])\n else:\n lon_0 = 0\n if '-mod' in sys.argv:\n ind = sys.argv.index('-mod')\n mod = sys.argv[ind+1]\n ghfile = ''\n elif '-f' in sys.argv:\n ind = sys.argv.index('-f')\n ghfile = sys.argv[ind+1]\n mod = 'custom'\n date = ''\n else:\n mod, ghfile = 'cals10k', ''\n if '-age' in sys.argv:\n ind = sys.argv.index('-age')\n date = float(sys.argv[ind+1])\n if '-alt' in sys.argv:\n ind = sys.argv.index('-alt')\n alt = float(sys.argv[ind+1])\n else:\n alt = 0\n\n # doesn't work correctly with mod other than default\n Ds, Is, Bs, Brs, lons, lats = pmag.do_mag_map(\n date, mod=mod, lon_0=lon_0, alt=alt, file=ghfile)\n ax = plt.axes(projection=ccrs.Mollweide(central_longitude=lon_0))\n ax.coastlines()\n xx, yy = meshgrid(lons, lats)\n if mod == 'custom':\n str_date = 'Custom'\n else:\n str_date = str(date)\n if el == 'B':\n levmax = Bs.max()+lincr\n levmin = round(Bs.min()-lincr)\n levels = np.arange(levmin, levmax, lincr)\n plt.contourf(xx, yy, Bs, levels=levels, cmap=cmap,\n transform=ccrs.PlateCarree(central_longitude=lon_0))\n plt.title('Field strength ($\\mu$T): '+ str_date)\n if el == 'Br':\n levmax = Brs.max()+lincr\n levmin = round(Brs.min()-lincr)\n plt.contourf(xx, yy, Brs,\n levels=np.arange(levmin, levmax, lincr),\n cmap=cmap, transform=ccrs.PlateCarree(central_longitude=lon_0))\n plt.title('Radial field strength ($\\mu$T): '+ str_date)\n if el == 'I':\n levmax = Is.max()+lincr\n levmin = round(Is.min()-lincr)\n plt.contourf(xx, yy, Is,\n levels=np.arange(levmin, levmax, lincr),\n cmap=cmap, transform=ccrs.PlateCarree(central_longitude=lon_0))\n plt.contour(xx, yy, Is, 
levels=np.arange(-80, 90, 10),\n                colors='black', transform=ccrs.PlateCarree(central_longitude=lon_0))\n        plt.title('Field inclination: '+ str_date)\n    if el == 'D':\n        plt.contourf(xx, yy, Ds,\n                     levels=np.arange(-180, 180, 10),\n                     cmap=cmap, transform=ccrs.PlateCarree(central_longitude=lon_0))\n        plt.contour(xx, yy, Ds, levels=np.arange(-180,\n                                                 180, 10), colors='black')\n        # cs=m.contourf(x,y,Ds,levels=np.arange(-180,180,10),cmap=cmap)\n        # m.contour(x,y,Ds,levels=np.arange(-180,180,10),colors='black')\n        plt.title('Field declination: '+ str_date)\n    cbar = plt.colorbar(orientation='horizontal')\n    figname = 'geomagnetic_field_' + str_date + '.'+fmt\n    plt.savefig(figname)\n    print('Figure saved as: ', figname)", "response": "This is the main function of the plot_magmap.py script. It draws a contour map of the requested element (declination, inclination, total or radial intensity) of the chosen geomagnetic field model at the given date and saves the figure to a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main():\n    if '-h' in sys.argv:\n        print(main.__doc__)\n        sys.exit()\n    if '-WD' in sys.argv:\n        ind=sys.argv.index('-WD')\n        dir_path=sys.argv[ind+1]\n    # interactive entry\n    if '-i' in sys.argv:\n        infile=input(\"Magic txt file for unpacking? 
\")\n        dir_path = '.'\n        input_dir_path = '.'\n    # non-interactive\n    else:\n        infile = pmag.get_named_arg(\"-f\", reqd=True)\n        # if -O flag is present, overwrite is False\n        overwrite = pmag.get_flag_arg_from_sys(\"-O\", true=False, false=True)\n        # if -sep flag is present, sep is True\n        sep = pmag.get_flag_arg_from_sys(\"-sep\", true=True, false=False)\n        data_model = pmag.get_named_arg(\"-DM\", default_val=3, reqd=False)\n        dir_path = pmag.get_named_arg(\"-WD\", default_val=\".\", reqd=False)\n        input_dir_path = pmag.get_named_arg(\"-ID\", default_val=\".\", reqd=False)\n\n    #if '-ID' not in sys.argv and '-WD' in sys.argv:\n    #    input_dir_path = dir_path\n    if \"-WD\" not in sys.argv and \"-ID\" not in sys.argv:\n        input_dir_path = os.path.split(infile)[0]\n        if not input_dir_path:\n            input_dir_path = \".\"\n\n    ipmag.download_magic(infile, dir_path, input_dir_path, overwrite, True, data_model, sep)", "response": "NAME\n    download_magic.py\n\n DESCRIPTION\n    unpacks a MagIC-format smartbook .
txt file into the tab delimited MagIC format txt files for use with the PmagPy programs."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n    args = sys.argv\n    if \"-h\" in args:\n        print(main.__doc__)\n        sys.exit()\n    pltspec = \"\"\n    verbose = pmagplotlib.verbose\n    dir_path = pmag.get_named_arg('-WD', '.')\n    dir_path = os.path.realpath(dir_path)\n    meas_file = pmag.get_named_arg('-f', 'measurements.txt')\n    fmt = pmag.get_named_arg('-fmt', 'png')\n    save_plots = False\n    interactive = True\n    if '-sav' in args:\n        verbose = False\n        save_plots = True\n        interactive = False\n    if '-spc' in args:\n        ind = args.index(\"-spc\")\n        pltspec = args[ind+1]\n        verbose = False\n        save_plots = True\n    ipmag.quick_hyst(dir_path, meas_file, save_plots,\n                     interactive, fmt, pltspec, verbose)", "response": "NAME\n    quick_hyst.py\n\n    DESCRIPTION\n    makes plots of hysteresis data\n\n    SYNTAX\n    quick_hyst.py [command line options]\n\n    OPTIONS\n    -h prints help message and quits\n    -f: specify input file, default is measurements.txt\n    -spc SPEC: specify specimen name to plot and quit\n    -sav save all plots and quit\n    -fmt [png,svg,eps,jpg]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef smooth(x,window_len,window='bartlett'):\n\n    if x.ndim != 1:\n        raise ValueError(\"smooth only accepts 1 dimension arrays.\")\n\n    if x.size < window_len:\n        raise ValueError(\"Input vector needs to be bigger than window size.\")\n\n    if window_len<3:\n        return x\n\n    # numpy available windows\n    if not window in ['flat', 'hanning', 'hamming', 'bartlett', 'blackman']:\n        raise ValueError(\"Window must be one of 'flat', 'hanning', 'hamming', 'bartlett', 'blackman'\")\n\n    # padding the beginning and the end of the signal with an average value to avoid edge effects\n    start=[average(x[0:10])]*window_len\n    end=[average(x[-10:])]*window_len\n    s=start+list(x)+end\n\n\n 
#s=numpy.r_[2*x[0]-x[window_len:1:-1],x,2*x[-1]-x[-1:-window_len:-1]]\n    if window == 'flat': #moving average\n        w=ones(window_len,'d')\n    else:\n        w=eval('numpy.'+window+'(window_len)')\n    y=numpy.convolve(old_div(w,w.sum()),s,mode='same')\n    return array(y[window_len:-window_len])", "response": "Smooths the data using a sliding window of the requested size."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n    plot,fmt=0,'svg'\n    if '-h' in sys.argv:\n        print(main.__doc__)\n        sys.exit()\n    if '-f' in sys.argv:\n        ind=sys.argv.index('-f')\n        meas_file=sys.argv[ind+1]\n    else:\n        print(\"missing -f\\n\")\n        sys.exit()\n    if '-w' in sys.argv:\n        ind=sys.argv.index('-w')\n        window_len=int(sys.argv[ind+1])\n    else:\n        window_len=3\n    if '-t' in sys.argv:\n        ind=sys.argv.index('-t')\n        t_begin=int(sys.argv[ind+1])\n        t_end=int(sys.argv[ind+2])\n    else:\n        t_begin=''\n        t_end=''\n    if '-sav' in sys.argv:plot=1\n    if '-fmt' in sys.argv:\n        ind=sys.argv.index('-fmt')\n        fmt=sys.argv[ind+1]\n\n\n    # read data from file\n    Data=numpy.loadtxt(meas_file,dtype=numpy.float)\n    T=Data.transpose()[0]\n    M=Data.transpose()[1]\n    T=list(T)\n    M=list(M)\n    # cut the data if -t is one of the flags\n    if t_begin:\n        while T[0]<t_begin:\n            M.pop(0);T.pop(0)\n        while T[-1]>t_end:\n            M.pop(-1);T.pop(-1)\n\n\n    # prepare the signal:\n    # from M(T) array with unequal deltaT\n    # to M(T) array with deltaT=(1 degree).\n    # if deltaT is larger, then points are added using linear fit between\n    # consecutive data points.\n    # exit if deltaT is not integer\n    i=0\n    while i<(len(T)-1):\n        if (T[i+1]-T[i])%1>0.001:\n            print(\"delta T should be integer, this program will not work!\")\n            print(\"temperature range:\",T[i],T[i+1])\n            sys.exit()\n        if (T[i+1]-T[i])==0.:\n            M[i]=average([M[i],M[i+1]])\n            M.pop(i+1);T.pop(i+1)\n        elif (T[i+1]-T[i])<0.:\n            M.pop(i+1);T.pop(i+1)\n            print(\"check data in T=%.0f ,M[T] is ignored\"%(T[i]))\n        elif (T[i+1]-T[i])>1.:\n            slope,b=polyfit([T[i],T[i+1]],[M[i],M[i+1]],1)\n            for j in 
range(int(T[i+1])-int(T[i])-1):\n M.insert(i+1,slope*(T[i]+1.)+b)\n T.insert(i+1,(T[i]+1.))\n i=i+1\n i=i+1\n\n # calculate the smoothed signal\n M=array(M,'f')\n T=array(T,'f')\n M_smooth=[]\n M_smooth=smooth(M,window_len)\n\n #plot the original data and the smooth data\n PLT={'M_T':1,'der1':2,'der2':3,'Curie':4}\n pmagplotlib.plot_init(PLT['M_T'],5,5)\n string='M-T (sliding window=%i)'%int(window_len)\n pmagplotlib.plot_xy(PLT['M_T'],T,M_smooth,sym='-')\n pmagplotlib.plot_xy(PLT['M_T'],T,M,sym='--',xlab='Temperature C',ylab='Magnetization',title=string)\n\n #calculate first derivative\n d1,T_d1=[],[]\n for i in range(len(M_smooth)-1):\n Dy=M_smooth[i-1]-M_smooth[i+1]\n Dx=T[i-1]-T[i+1]\n d1.append(old_div(Dy,Dx))\n T_d1=T[1:len(T-1)]\n d1=array(d1,'f')\n d1_smooth=smooth(d1,window_len)\n\n #plot the first derivative\n pmagplotlib.plot_init(PLT['der1'],5,5)\n string='1st derivative (sliding window=%i)'%int(window_len)\n pmagplotlib.plot_xy(PLT['der1'],T_d1,d1_smooth,sym='-',xlab='Temperature C',title=string)\n pmagplotlib.plot_xy(PLT['der1'],T_d1,d1,sym='b--')\n\n #calculate second derivative\n d2,T_d2=[],[]\n for i in range(len(d1_smooth)-1):\n Dy=d1_smooth[i-1]-d1_smooth[i+1]\n Dx=T[i-1]-T[i+1]\n #print Dy/Dx\n d2.append(old_div(Dy,Dx))\n T_d2=T[2:len(T-2)]\n d2=array(d2,'f')\n d2_smooth=smooth(d2,window_len)\n\n #plot the second derivative\n pmagplotlib.plot_init(PLT['der2'],5,5)\n string='2nd derivative (sliding window=%i)'%int(window_len)\n pmagplotlib.plot_xy(PLT['der2'],T_d2,d2,sym='-',xlab='Temperature C',title=string)\n d2=list(d2)\n print('second derivative maximum is at T=%i'%int(T_d2[d2.index(max(d2))]))\n\n # calculate Curie temperature for different width of sliding windows\n curie,curie_1=[],[]\n wn=list(range(5,50,1))\n for win in wn:\n # calculate the smoothed signal\n M_smooth=[]\n M_smooth=smooth(M,win)\n #calculate first derivative\n d1,T_d1=[],[]\n for i in range(len(M_smooth)-1):\n Dy=M_smooth[i-1]-M_smooth[i+1]\n Dx=T[i-1]-T[i+1]\n 
d1.append(old_div(Dy,Dx))\n        T_d1=T[1:len(T-1)]\n        d1=array(d1,'f')\n        d1_smooth=smooth(d1,win)\n        #calculate second derivative\n        d2,T_d2=[],[]\n        for i in range(len(d1_smooth)-1):\n            Dy=d1_smooth[i-1]-d1_smooth[i+1]\n            Dx=T[i-1]-T[i+1]\n            d2.append(old_div(Dy,Dx))\n        T_d2=T[2:len(T-2)]\n        d2=array(d2,'f')\n        d2_smooth=smooth(d2,win)\n        d2=list(d2)\n        d2_smooth=list(d2_smooth)\n        curie.append(T_d2[d2.index(max(d2))])\n        curie_1.append(T_d2[d2_smooth.index(max(d2_smooth))])\n\n    #plot Curie temp for different sliding window length\n    pmagplotlib.plot_init(PLT['Curie'],5,5)\n    pmagplotlib.plot_xy(PLT['Curie'],wn,curie,sym='.',xlab='Sliding Window Width (degrees)',ylab='Curie Temp',title='Curie Statistics')\n    files = {}\n    for key in list(PLT.keys()): files[key]=str(key) + \".\" +fmt\n    if plot==0:\n        pmagplotlib.draw_figs(PLT)\n        ans=input(\" S[a]ve to save plot, [q]uit, Return to continue: \")\n        if ans==\"q\": sys.exit()\n        if ans==\"a\": pmagplotlib.save_plots(PLT,files)\n    else: pmagplotlib.save_plots(PLT,files)\n    sys.exit()", "response": "NAME\n    curie.py\n\n DESCRIPTION\n    plots and smooths thermomagnetic (M-T) data, computes the first and second derivatives, and estimates the Curie temperature from the maximum of the second derivative over a range of sliding-window widths."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nnaming jr6_jr6_magic.py DESCRIPTION converts JR6 .jr6 format files to magic_measurements format files SYNTAX jr6_jr6_magic.py [command line options] OPTIONS -h: prints the help message and quits. 
-f FILE: specify input file, or -F FILE: specify output file, default is magic_measurements.txt -Fsa: specify er_samples format file for appending, default is new er_samples.txt (Not working yet) -spc NUM : specify number of characters to designate a specimen, default = 1 -loc LOCNAME : specify location/study name -A: don't average replicate measurements -ncn NCON: specify sample naming convention (6 and 7 not yet implemented) -mcd [SO-MAG,SO-SUN,SO-SIGHT...] supply how these samples were oriented -JR IODP samples measured on the JOIDES RESOLUTION -v NUM : specify the volume in cc of the sample, default 2.5^3cc Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name same as sample [6] site is entered under a separate column -- NOT CURRENTLY SUPPORTED [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY NB: all others you will have to customize yourself or e-mail ltauxe@ucsd.edu for help. 
INPUT JR6 .jr6 format file", "response": "def main(command_line=True, **kwargs):\n \"\"\"\n NAME\n jr6_jr6_magic.py\n \n DESCRIPTION\n converts JR6 .jr6 format files to magic_measurements format files\n\n SYNTAX\n jr6_jr6_magic.py [command line options]\n\n OPTIONS\n -h: prints the help message and quits.\n -f FILE: specify input file, or\n -F FILE: specify output file, default is magic_measurements.txt\n -Fsa: specify er_samples format file for appending, default is new er_samples.txt (Not working yet)\n -spc NUM : specify number of characters to designate a specimen, default = 1\n -loc LOCNAME : specify location/study name\n -A: don't average replicate measurements\n -ncn NCON: specify sample naming convention (6 and 7 not yet implemented)\n -mcd [SO-MAG,SO-SUN,SO-SIGHT...] supply how these samples were oriented\n -JR IODP samples measured on the JOIDES RESOLUTION\n -v NUM : specify the volume in cc of the sample, default 2.5^3cc\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. 
[default]\n        [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n        [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n        [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n        [5] site name same as sample\n        [6] site is entered under a separate column -- NOT CURRENTLY SUPPORTED\n        [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY\n        NB: all others you will have to customize yourself\n             or e-mail ltauxe@ucsd.edu for help.\n \n    INPUT\n        JR6 .jr6 format file\n    \"\"\"\n# initialize some stuff\n    noave=0\n    #volume=2.5**3 #default volume is a 2.5cm cube\n    volume = 2.5 * 1e-6 #default volume is a 2.5 cm cube, translated to meters cubed\n    inst=\"\"\n    samp_con,Z='1',\"\"\n    missing=1\n    demag=\"N\"\n    er_location_name=\"unknown\"\n    citation='This study'\n    args=sys.argv\n    meth_code=\"LP-NO\"\n    specnum=1\n    version_num=pmag.get_version()\n    Samps=[] # keeps track of sample orientations\n\n    user=\"\"\n    mag_file=\"\"\n    dir_path='.'\n    MagRecs=[]\n    ErSamps=[]\n    SampOuts=[]\n\n    samp_file = 'er_samples.txt'\n    meas_file = 'magic_measurements.txt'\n    tmp_file= \"fixed.jr6\"\n    meth_code,JR=\"\",0\n    #\n    # get command line arguments\n    #\n    \n    if command_line:\n        if '-WD' in sys.argv:\n            ind = sys.argv.index('-WD')\n            dir_path=sys.argv[ind+1]\n        if '-ID' in sys.argv:\n            ind = sys.argv.index('-ID')\n            input_dir_path = sys.argv[ind+1]\n        else:\n            input_dir_path = dir_path\n        output_dir_path = dir_path\n        if \"-h\" in args:\n            print(main.__doc__)\n            return False\n        if '-F' in args:\n            ind=args.index(\"-F\")\n            meas_file = args[ind+1]\n        if '-Fsa' in args:\n            ind = args.index(\"-Fsa\")\n            samp_file = args[ind+1]\n            #try:\n            #    open(samp_file,'r')\n            #    ErSamps,file_type=pmag.magic_read(samp_file)\n            #    print 'sample information will be appended to ', samp_file \n            #except:\n            #    print samp_file,' not found: sample information will be stored in new er_samples.txt file'\n            #    samp_file = output_dir_path+'/er_samples.txt'\n        if '-f' in args:\n            ind = 
args.index(\"-f\")\n mag_file= args[ind+1]\n if \"-spc\" in args:\n ind = args.index(\"-spc\")\n specnum = int(args[ind+1])\n if \"-ncn\" in args:\n ind=args.index(\"-ncn\")\n samp_con=sys.argv[ind+1]\n if \"-loc\" in args:\n ind=args.index(\"-loc\")\n er_location_name=args[ind+1]\n if \"-A\" in args: noave=1\n if \"-mcd\" in args: \n ind=args.index(\"-mcd\")\n meth_code=args[ind+1]\n if \"-JR\" in args: \n meth_code=meth_code+\":FS-C-DRILL-IODP:SP-SS-C:SO-V\"\n meth_code=meth_code.strip(\":\")\n JR=1\n samp_con='5'\n if \"-v\" in args: \n ind=args.index(\"-v\")\n volume=float(args[ind+1])*1e-6 # enter volume in cc, convert to m^3\n if not command_line:\n dir_path = kwargs.get('dir_path', '.')\n input_dir_path = kwargs.get('input_dir_path', dir_path)\n output_dir_path = dir_path\n meas_file = kwargs.get('meas_file', 'magic_measurements.txt')\n mag_file = kwargs.get('mag_file')\n samp_file = kwargs.get('samp_file', 'er_samples.txt')\n specnum = kwargs.get('specnum', 1)\n samp_con = kwargs.get('samp_con', '1')\n er_location_name = kwargs.get('er_location_name', '')\n noave = kwargs.get('noave', 0) # default (0) means DO average\n meth_code = kwargs.get('meth_code', \"LP-NO\")\n volume = float(kwargs.get('volume', 0))\n if not volume:\n volume = 2.5 * 1e-6 #default volume is a 2.5 cm cube, translated to meters cubed\n else:\n #convert cm^3 to m^3\n volume *= 1e-6\n JR = kwargs.get('JR', 0)\n if JR:\n if meth_code == \"LP-NO\":\n meth_code = \"\"\n meth_code=meth_code+\":FS-C-DRILL-IODP:SP-SS-C:SO-V\"\n meth_code=meth_code.strip(\":\")\n samp_con='5'\n\n # format variables\n mag_file = input_dir_path+\"/\" + mag_file\n meas_file = output_dir_path+\"/\" + meas_file\n samp_file = output_dir_path+\"/\" + samp_file\n tmp_file = output_dir_path+\"/\" + tmp_file\n if specnum!=0:\n specnum=-specnum\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"option [4] must be in form 4-Z where Z is an 
integer\"\n else:\n Z=samp_con.split(\"-\")[1]\n samp_con=\"4\"\n if \"7\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z=samp_con.split(\"-\")[1]\n samp_con=\"7\"\n\n ErSampRec,ErSiteRec={},{}\n\n # parse data\n\n # fix .jr6 file so that there are spaces between all the columns.\n pre_data=open(mag_file, 'r')\n tmp_data=open(tmp_file, 'w')\n line=pre_data.readline()\n while line !='':\n line=line.replace('-',' -')\n #print \"line=\", line\n tmp_data.write(line)\n line=pre_data.readline()\n tmp_data.close()\n pre_data.close()\n\n data=pd.read_csv(tmp_file, delim_whitespace=True,header=None)\n\n if JR==0: #\n data.columns=['er_specimen_name','step','x','y','z','expon','sample_azimuth','sample_dip', 'sample_bed_dip_direction','sample_bed_dip','bed_dip_dir2','bed_dip2','param1','param2','param3','param4','measurement_csd']\n cart=np.array([data['x'],data['y'],data['z']]).transpose()\n else: # measured on the Joides Resolution JR6\n data.columns=['er_specimen_name','step','negz','y','x','expon','sample_azimuth','sample_dip', 'sample_bed_dip_direction','sample_bed_dip','bed_dip_dir2','bed_dip2','param1','param2','param3','param4','measurement_csd']\n cart=np.array([data['x'],data['y'],-data['negz']]).transpose()\n dir= pmag.cart2dir(cart).transpose()\n data['measurement_dec']=dir[0]\n data['measurement_inc']=dir[1]\n data['measurement_magn_moment']=dir[2]*(10.0**data['expon'])*volume # the data are in A/m - this converts to Am^2\n data['measurement_magn_volume']=dir[2]*(10.0**data['expon']) # A/m - data in A/m\n data['sample_dip']=-data['sample_dip']\n DGEOs,IGEOs=[],[]\n for ind in range(len(data)):\n dgeo,igeo=pmag.dogeo(data.iloc[ind]['measurement_dec'],data.iloc[ind]['measurement_inc'],data.iloc[ind]['sample_azimuth'],data.iloc[ind]['sample_dip'])\n DGEOs.append(dgeo)\n IGEOs.append(igeo)\n data['specimen_dec']=DGEOs\n 
data['specimen_inc']=IGEOs\n data['specimen_tilt']='1'\n if specnum!=0:\n data['er_sample_name']=data['er_specimen_name'].str[:specnum]\n else:\n data['er_sample_name']=data['er_specimen_name']\n\n if int(samp_con) in [1, 2, 3, 4, 5, 7]:\n data['er_site_name']=pmag.parse_site(data['er_sample_name'],samp_con,Z)\n # else:\n # if 'er_site_name' in ErSampRec.keys():er_site_name=ErSampRec['er_site_name']\n # if 'er_location_name' in ErSampRec.keys():er_location_name=ErSampRec['er_location_name']\n\n # Configure the er_sample table \n\n for rowNum, row in data.iterrows():\n sampleFlag=0\n for sampRec in SampOuts:\n if sampRec['er_sample_name'] == row['er_sample_name']:\n sampleFlag=1\n break\n if sampleFlag == 0:\n ErSampRec['er_sample_name']=row['er_sample_name']\n ErSampRec['sample_azimuth']=str(row['sample_azimuth'])\n ErSampRec['sample_dip']=str(row['sample_dip'])\n ErSampRec['magic_method_codes']=meth_code \n ErSampRec['er_location_name']=er_location_name\n ErSampRec['er_site_name']=row['er_site_name']\n ErSampRec['er_citation_names']='This study'\n SampOuts.append(ErSampRec.copy())\n\n # Configure the magic_measurements table\n\n for rowNum, row in data.iterrows():\n MagRec={}\n# MagRec['measurement_description']='Date: '+date\n MagRec[\"er_citation_names\"]=\"This study\"\n MagRec['er_location_name']=er_location_name\n MagRec['er_site_name']=row['er_site_name']\n MagRec['er_sample_name']=row['er_sample_name']\n MagRec['magic_software_packages']=version_num\n MagRec[\"treatment_temp\"]='%8.3e' % (273) # room temp in kelvin\n MagRec[\"measurement_temp\"]='%8.3e' % (273) # room temp in kelvin\n MagRec[\"measurement_flag\"]='g'\n MagRec[\"measurement_standard\"]='u'\n MagRec[\"measurement_number\"]='1'\n MagRec[\"er_specimen_name\"]=row['er_specimen_name']\n MagRec[\"treatment_ac_field\"]='0'\n if row['step'] == 'NRM':\n meas_type=\"LT-NO\"\n elif row['step'][0:2] == 'AD':\n meas_type=\"LT-AF-Z\"\n treat=float(row['step'][2:])\n MagRec[\"treatment_ac_field\"]='%8.3e' 
%(treat*1e-3) # convert from mT to tesla\n elif row['step'][0:2] == 'TD':\n meas_type=\"LT-T-Z\"\n treat=float(row['step'][2:])\n MagRec[\"treatment_temp\"]='%8.3e' % (treat+273.) # temp in kelvin\n else: # need to add IRM, and ARM options\n print(\"measurement type unknown\", row['step'])\n return False, \"measurement type unknown\"\n MagRec[\"measurement_magn_moment\"]=str(row['measurement_magn_moment'])\n MagRec[\"measurement_magn_volume\"]=str(row['measurement_magn_volume'])\n MagRec[\"measurement_dec\"]=str(row['measurement_dec'])\n MagRec[\"measurement_inc\"]=str(row['measurement_inc'])\n MagRec['magic_method_codes']=meas_type\n MagRecs.append(MagRec.copy())\n pmag.magic_write(samp_file,SampOuts,'er_samples')\n print(\"sample orientations put in \",samp_file)\n MagOuts=pmag.measurements_methods(MagRecs,noave)\n pmag.magic_write(meas_file,MagOuts,'magic_measurements')\n print(\"results put in \",meas_file)\n print(\"exit!\")\n return True, meas_file"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nnames extract_methods.py DESCRIPTION reads in a magic table and creates a file with method codes SYNTAX extract_methods.py [command line options] OPTIONS -h: prints the help message and quits. 
-f FILE: specify magic format input file, default is magic_measurements.txt -F FILE: specify method code output file, default is magic_methods.txt", "response": "def main():\n \"\"\"\n NAME\n extract_methods.py\n \n DESCRIPTION\n reads in a magic table and creates a file with method codes\n\n SYNTAX\n extract_methods.py [command line options]\n\n OPTIONS\n -h: prints the help message and quits.\n -f FILE: specify magic format input file, default is magic_measurements.txt\n -F FILE: specify method code output file, default is magic_methods.txt\n \"\"\"\n citation='This study'\n args=sys.argv\n outfile='magic_methods.txt'\n infile='magic_measurements.txt'\n#\n# get command line arguments\n#\n dir_path='.'\n if '-WD' in args:\n ind=args.index(\"-WD\")\n dir_path=args[ind+1]\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if '-F' in args:\n ind=args.index(\"-F\")\n outfile=args[ind+1]\n if '-f' in args:\n ind=args.index(\"-f\")\n infile=args[ind+1]\n infile=dir_path+'/'+infile\n outfile=dir_path+'/'+outfile\n data,file_type=pmag.magic_read(infile)\n MethRecs=[]\n methods=[]\n for rec in data:\n meths=rec['magic_method_codes'].split(\":\")\n for meth in meths:\n if meth not in methods:\n MethRec={}\n methods.append(meth)\n MethRec['magic_method_code']=meth\n MethRecs.append(MethRec)\n pmag.magic_write(outfile,MethRecs,'magic_methods')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nname gofish.py, whose main function calculates Fisher parameters from dec/inc data.", "response": "def main():\n \"\"\"\n NAME\n gofish.py\n\n DESCRIPTION\n calculates fisher parameters from dec inc data\n\n INPUT FORMAT\n takes dec/inc as first two columns in space delimited file\n\n SYNTAX\n gofish.py [options] [< filename]\n\n OPTIONS\n -h prints help message and quits\n -i for interactive filename entry\n -f FILE, specify input file\n -F FILE, specifies output file name\n < filename for reading from standard input\n \n OUTPUT\n mean dec, mean inc, N, R, k, a95, csd\n\n \"\"\"\n if '-h' in sys.argv: # check if help is needed\n print(main.__doc__)\n sys.exit() # graceful quit\n if '-i' in sys.argv: # ask for filename\n file=input(\"Enter file name with dec, inc data: \")\n f=open(file,'r')\n data=f.readlines()\n elif '-f' in sys.argv:\n dat=[]\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n f=open(file,'r')\n data=f.readlines()\n else:\n data = sys.stdin.readlines() # read from standard input\n ofile = \"\"\n if '-F' in sys.argv:\n ind = sys.argv.index('-F')\n ofile= sys.argv[ind+1]\n out = open(ofile, 'w')\n DIs= [] # set up list for dec inc data\n for line in data: # read in the data from standard input\n if '\\t' in line:\n rec=line.split('\\t') # split each line on tab to get records\n else:\n rec=line.split() # split each line on space to get records\n DIs.append((float(rec[0]),float(rec[1])))\n#\n fpars=pmag.fisher_mean(DIs)\n outstring='%7.1f %7.1f %i %10.4f %8.1f %7.1f %7.1f'%(fpars['dec'],fpars['inc'],fpars['n'],fpars['r'],fpars['k'],fpars['alpha95'], fpars['csd'])\n if ofile == \"\":\n print(outstring)\n else:\n out.write(outstring+'\\n')"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nnaming trmaq_magic.py DESCRIPTION does non-linear trm acquisition correction SYNTAX trmaq_magic.py [-h][-i][command line options] OPTIONS -h prints help message and quits -i allows interactive setting of file names -f MFILE, sets 
magic_measurements input file -ft TSPEC, sets thellier_specimens input file -F OUT, sets output for non-linear TRM acquisition corrected data -sav save figures and quit -fmt [png, svg, pdf] -DM [2, 3] MagIC data model, default 3 DEFAULTS MFILE: trmaq_measurements.txt TSPEC: thellier_specimens.txt OUT: NLT_specimens.txt", "response": "def main():\n \"\"\"\n NAME\n trmaq_magic.py\n\n DESCRIPTION\n does non-linear trm acquisition correction\n\n SYNTAX\n trmaq_magic.py [-h][-i][command line options]\n\n OPTIONS\n -h prints help message and quits\n -i allows interactive setting of file names\n -f MFILE, sets magic_measurements input file\n -ft TSPEC, sets thellier_specimens input file\n -F OUT, sets output for non-linear TRM acquisition corrected data\n -sav save figures and quit\n -fmt [png, svg, pdf]\n -DM [2, 3] MagIC data model, default 3\n\n\n DEFAULTS\n MFILE: trmaq_measurements.txt\n TSPEC: thellier_specimens.txt\n OUT: NLT_specimens.txt\n \"\"\"\n meas_file = 'trmaq_measurements.txt'\n tspec = \"thellier_specimens.txt\"\n output = 'NLT_specimens.txt'\n data_model_num = int(float(pmag.get_named_arg(\"-DM\", 3)))\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-i' in sys.argv:\n meas_file = input(\n \"Input magic_measurements file name? [trmaq_measurements.txt] \")\n if meas_file == \"\":\n meas_file = \"trmaq_measurements.txt\"\n tspec = input(\n \" thellier_specimens file name? 
[thellier_specimens.txt] \")\n if tspec == \"\":\n tspec = \"thellier_specimens.txt\"\n output = input(\n \"File for non-linear TRM adjusted specimen data: [NLT_specimens.txt] \")\n if output == \"\":\n output = \"NLT_specimens.txt\"\n if '-f' in sys.argv:\n ind = sys.argv.index('-f')\n meas_file = sys.argv[ind+1]\n if '-ft' in sys.argv:\n ind = sys.argv.index('-ft')\n tspec = sys.argv[ind+1]\n if '-F' in sys.argv:\n ind = sys.argv.index('-F')\n output = sys.argv[ind+1]\n if '-sav' in sys.argv:\n save_plots = True\n else:\n save_plots = False\n fmt = pmag.get_named_arg(\"-fmt\", \"svg\")\n #\n PLT = {'aq': 1}\n if not save_plots:\n pmagplotlib.plot_init(PLT['aq'], 5, 5)\n #\n # get name of file from command line\n #\n comment = \"\"\n #\n #\n meas_data, file_type = pmag.magic_read(meas_file)\n if 'measurements' not in file_type:\n print(file_type, \"This is not a valid measurements file \")\n sys.exit()\n\n if data_model_num == 2:\n spec_col = \"er_specimen_name\"\n lab_field_dc_col = \"specimen_lab_field_dc\"\n int_col = \"specimen_int\"\n meth_col = \"magic_method_codes\"\n treat_dc_col = \"treatment_dc_field\"\n magn_moment_col = \"measurement_magn_moment\"\n experiment_col = \"magic_experiment_name\"\n outfile_type = \"pmag_specimens\"\n else:\n spec_col = \"specimen\"\n lab_field_dc_col = \"int_treat_dc_field\"\n int_col = \"int_abs\"\n meth_col = \"method_codes\"\n treat_dc_col = \"treat_dc_field\"\n magn_moment_col = \"magn_moment\"\n experiment_col = \"experiment\"\n outfile_type = \"specimens\"\n\n sids = pmag.get_specs(meas_data)\n specimen = 0\n #\n # read in thellier_specimen data\n #\n nrm, file_type = pmag.magic_read(tspec)\n PmagSpecRecs = []\n while specimen < len(sids):\n #\n # find corresponding paleointensity data for this specimen\n #\n s = sids[specimen]\n blab, best = \"\", \"\"\n for nrec in nrm: # pick out the Banc data for this spec\n if nrec[spec_col] == s:\n try:\n blab = float(nrec[lab_field_dc_col])\n except ValueError:\n continue\n best 
= float(nrec[int_col])\n TrmRec = nrec\n break\n if blab == \"\":\n print(\"skipping \", s, \" : no best \")\n specimen += 1\n else:\n print(sids[specimen], specimen+1, 'of ',\n len(sids), 'Best = ', best*1e6)\n MeasRecs = []\n #\n # find the data from the meas_data file for this specimen\n #\n for rec in meas_data:\n if rec[spec_col] == s:\n meths = rec[meth_col].split(\":\")\n methcodes = []\n for meth in meths:\n methcodes.append(meth.strip())\n if \"LP-TRM\" in methcodes:\n MeasRecs.append(rec)\n if len(MeasRecs) < 2:\n specimen += 1\n print('skipping specimen - no trm acquisition data ', s)\n #\n # collect info for the PmagSpecRec dictionary\n #\n else:\n TRMs, Bs = [], []\n for rec in MeasRecs:\n Bs.append(float(rec[treat_dc_col]))\n TRMs.append(float(rec[magn_moment_col]))\n # calculate best fit parameters through TRM acquisition data, and get new banc\n NLpars = nlt.NLtrm(Bs, TRMs, best, blab, 0)\n #\n Mp, Bp = [], []\n for k in range(int(max(Bs)*1e6)):\n Bp.append(float(k)*1e-6)\n # predicted NRM for this field\n npred = nlt.TRM(Bp[-1], NLpars['xopt']\n [0], NLpars['xopt'][1])\n Mp.append(npred)\n pmagplotlib.plot_trm(\n PLT['aq'], Bs, TRMs, Bp, Mp, NLpars, rec[experiment_col])\n if not save_plots:\n pmagplotlib.draw_figs(PLT)\n print('Banc= ', float(NLpars['banc'])*1e6)\n trmTC = {}\n for key in list(TrmRec.keys()):\n # copy of info from thellier_specimens record\n trmTC[key] = TrmRec[key]\n trmTC[int_col] = '%8.3e' % (NLpars['banc'])\n trmTC[meth_col] = TrmRec[meth_col]+\":DA-NL\"\n PmagSpecRecs.append(trmTC)\n if not save_plots:\n ans = input(\"Return for next specimen, s[a]ve plot \")\n if ans == 'a':\n Name = {'aq': rec[spec_col]+'_TRM.{}'.format(fmt)}\n pmagplotlib.save_plots(PLT, Name)\n else:\n Name = {'aq': rec[spec_col]+'_TRM.{}'.format(fmt)}\n pmagplotlib.save_plots(PLT, Name)\n\n specimen += 1\n\n pmag.magic_write(output, PmagSpecRecs, outfile_type)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nname 
huji_sample_magic.py DESCRIPTION takes tab delimited Hebrew University sample file and converts to MagIC formatted tables SYNTAX huji_sample_magic.py [command line options] OPTIONS -f FILE: specify input file -Fsa FILE: specify sample output file, default is: samples.txt -Fsi FILE: specify site output file, default is: sites.txt -Iso: import sample orientation info - default is to set sample_az/dip to 0,0 -ncn NCON: specify naming convention: default is #1 below -mcd: specify sampling method codes as a colon delimited string: [default is: FS-FD:SO-POM:SO-SUN] FS-FD field sampling done with a drill FS-H field sampling done with hand samples FS-LOC-GPS field location done with GPS FS-LOC-MAP field location done with map SO-POM a Pomeroy orientation device was used SO-ASC an ASC orientation device was used SO-MAG orientation with magnetic compass -loc: location name, default=\"unknown\" -DM: data model number (MagIC 2 or 3, default 3) INPUT FORMAT Input files must be tab delimited: Samp Az Dip Dip_dir Dip Orientation convention: Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip e.g. field_dip is degrees from horizontal of drill direction Magnetic declination convention: Az is already corrected in file Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY NB: all others you will have to either customize yourself or e-mail ltauxe@ucsd.edu for help. 
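The sample naming conventions listed above can be illustrated with a short sketch. This is a hypothetical helper written for illustration only, not PmagPy's actual `pmag.parse_site` implementation; the function name `site_from_sample` and its signature are assumptions:

```python
# Simplified illustration of the huji_sample_magic naming conventions.
# NOT pmag.parse_site -- a hypothetical sketch covering conventions
# [1], [2], [3], [4-Z], [5] and [7-Z] for plain string sample names.

def site_from_sample(sample, convention, z=0):
    """Derive a site name from a sample name under a given convention."""
    if convention == "1":   # XXXXY: strip the single trailing sample character
        return sample[:-1]
    if convention == "2":   # XXXX-YY: site is the part before the dash
        return sample.split("-")[0]
    if convention == "3":   # XXXX.YY: site is the part before the dot
        return sample.split(".")[0]
    if convention == "4":   # 4-Z: sample designation is the last Z characters
        return sample[:-z]
    if convention == "5":   # site name equals sample name
        return sample
    if convention == "7":   # 7-Z: site designation is the first Z characters
        return sample[:z]
    raise ValueError("unsupported naming convention: %s" % convention)

print(site_from_sample("TG001a", "1"))        # TG001
print(site_from_sample("TG001-1a", "2"))      # TG001
print(site_from_sample("TG001a1", "4", z=2))  # TG001
```

Convention [6] is omitted because, as noted above, it requires a site_name column in the orient.txt input rather than a rule on the sample name itself.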
OUTPUT output saved in samples will overwrite any existing files", "response": "def main():\n \"\"\"\n NAME\n huji_sample_magic.py\n\n DESCRIPTION\n takes tab delimited Hebrew University sample file and converts to MagIC formatted tables\n\n SYNTAX\n huji_sample_magic.py [command line options]\n\n OPTIONS\n -f FILE: specify input file\n -Fsa FILE: specify sample output file, default is: samples.txt\n -Fsi FILE: specify site output file, default is: sites.txt\n -Iso: import sample orientation info - default is to set sample_az/dip to 0,0\n -ncn NCON: specify naming convention: default is #1 below\n -mcd: specify sampling method codes as a colon delimited string: [default is: FS-FD:SO-POM:SO-SUN]\n FS-FD field sampling done with a drill\n FS-H field sampling done with hand samples\n FS-LOC-GPS field location done with GPS\n FS-LOC-MAP field location done with map\n SO-POM a Pomeroy orientation device was used\n SO-ASC an ASC orientation device was used\n SO-MAG orientation with magnetic compass\n -loc: location name, default=\"unknown\"\n -DM: data model number (MagIC 2 or 3, default 3)\n\n INPUT FORMAT\n Input files must be tab delimited:\n Samp Az Dip Dip_dir Dip\n Orientation convention:\n Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip\n e.g. field_dip is degrees from horizontal of drill direction\n\n Magnetic declination convention:\n Az is already corrected in file\n\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. 
[default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n NB: all others you will have to either customize your\n self or e-mail ltauxe@ucsd.edu for help.\n\n OUTPUT\n output saved in samples will overwrite any existing files\n \"\"\"\n args = sys.argv\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n #\n # initialize variables\n Z = 1\n # get arguments from the command line\n orient_file = pmag.get_named_arg(\"-f\", reqd=True)\n data_model_num = int(float(pmag.get_named_arg(\"-DM\", 3)))\n if data_model_num == 2:\n samp_file = pmag.get_named_arg(\"-Fsa\", \"er_samples.txt\")\n site_file = pmag.get_named_arg(\"-Fsi\", \"er_sites.txt\")\n else:\n samp_file = pmag.get_named_arg(\"-Fsa\", \"samples.txt\")\n site_file = pmag.get_named_arg(\"-Fsi\", \"sites.txt\")\n samp_con = pmag.get_named_arg(\"-ncn\", \"1\")\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n sys.exit()\n else:\n Z = samp_con.split(\"-\")[1]\n #samp_con = \"4\"\n print(samp_con)#, Z)\n meths = pmag.get_named_arg(\"-mcd\", 'FS-FD:SO-POM:SO-SUN')\n location_name = pmag.get_named_arg(\"-loc\", \"unknown\")\n if \"-Iso\" in args:\n ignore = 0\n else:\n ignore = 1\n\n convert.huji_sample(orient_file, meths, location_name, samp_con, ignore)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nname vector_mean.py DESCRIPTION calculates vector mean of vector data INPUT FORMAT takes dec, inc, int from an input file SYNTAX vector_mean.py [command line options] [< filename] OPTIONS -h prints help message and quits -f FILE, specify input 
file -F FILE, specify output file < filename for reading from standard input OUTPUT mean dec, mean inc, R, N", "response": "def main():\n \"\"\"\n NAME\n vector_mean.py\n\n DESCRIPTION\n calculates vector mean of vector data\n\n INPUT FORMAT\n takes dec, inc, int from an input file\n\n SYNTAX\n vector_mean.py [command line options] [< filename]\n\n OPTIONS\n -h prints help message and quits\n -f FILE, specify input file\n -F FILE, specify output file\n < filename for reading from standard input\n \n OUTPUT\n mean dec, mean inc, R, N\n\n \"\"\"\n if '-h' in sys.argv: # check if help is needed\n print(main.__doc__)\n sys.exit() # graceful quit\n if '-f' in sys.argv:\n dat=[]\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n else:\n file = sys.stdin # read from standard input\n ofile=\"\"\n if '-F' in sys.argv:\n ind = sys.argv.index('-F')\n ofile= sys.argv[ind+1]\n out = open(ofile, 'w')\n DIIs=numpy.loadtxt(file,dtype=float) # read in the data\n#\n vpars,R=pmag.vector_mean(DIIs)\n outstring='%7.1f %7.1f %10.3e %i'%(vpars[0],vpars[1],R,len(DIIs))\n if ofile == \"\":\n print(outstring)\n else:\n out.write(outstring + \"\\n\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main():\n kappa, cutoff = 0, 180\n rev, anti, boot = 0, 0, 0\n spin,n,v,mm97 = 0,0,0,0\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind = sys.argv.index(\"-f\")\n in_file = sys.argv[ind + 1]\n vgp_df=pd.read_csv(in_file,delim_whitespace=True,header=None) \n else:\n vgp_df=pd.read_csv(sys.stdin,delim_whitespace=True,header=None) \n if '-c' in sys.argv:\n ind = sys.argv.index('-c')\n cutoff = float(sys.argv[ind + 1])\n if '-k' in sys.argv:\n ind = sys.argv.index('-k')\n kappa = float(sys.argv[ind + 1])\n if '-n' in sys.argv:\n ind = sys.argv.index('-n')\n n = int(sys.argv[ind + 1])\n if '-a' in sys.argv: anti = 1\n if '-r' in sys.argv: rev=1\n if '-b' in sys.argv: boot = 1\n if '-v' in sys.argv: v = 1\n 
if '-p' in sys.argv: spin = 1\n if '-mm97' in sys.argv: mm97=1\n #\n #\n if len(list(vgp_df.columns))==2:\n vgp_df.columns=['vgp_lon','vgp_lat']\n vgp_df['dir_k'],vgp_df['dir_n_samples'],vgp_df['lat']=0,0,0\n else:\n vgp_df.columns=['vgp_lon','vgp_lat','dir_k','dir_n_samples','lat']\n N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,anti=anti,rev=rev,cutoff=cutoff,kappa=kappa,n=n,spin=spin,v=v,boot=boot,mm97=mm97)\n if high!=0:\n print(N, '%7.1f %7.1f %7.1f %7.1f ' % (S_B, low, high, cutoff))\n else:\n print(N, '%7.1f %7.1f ' % (S_B, cutoff))", "response": "Reads a file of VGP longitude/latitude positions (optionally with dir_k, dir_n_samples and site latitude columns) and calculates the VGP scatter statistic S_B, with options for a colatitude cutoff, antipodal or reverse-only treatment, bootstrap confidence bounds, spin-axis poles, and the MM97 within-site correction"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nname irmaq_magic.py, which plots IRM acquisition curves from a MagIC measurements file.", "response": "def main():\n \"\"\"\n NAME\n irmaq_magic.py\n\n DESCRIPTION\n plots IRM acquisition curves from measurements file\n\n SYNTAX\n irmaq_magic [command line options]\n\n INPUT\n takes magic formatted magic_measurements.txt files\n\n OPTIONS\n -h prints help message and quits\n -f FILE: specify input file, default is: magic_measurements.txt/measurements.txt\n -obj OBJ: specify object [loc, sit, sam, spc] for plot, default is by location\n -N ; do not normalize by last point - use original units\n -fmt [png,jpg,eps,pdf] set plot file format [default is svg]\n -sav save plot[s] and quit\n -DM MagIC data model number, default is 3\n NOTE\n loc: location (study); sit: site; sam: sample; spc: specimen\n \"\"\"\n FIG = {} # plot dictionary\n FIG['exp'] = 1 # exp is figure 1\n dir_path = './'\n plot, fmt = 0, 'svg'\n units = 'T'\n XLP = []\n norm = 1\n LP = \"LP-IRM\"\n if len(sys.argv) > 1:\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n data_model = int(pmag.get_named_arg(\"-DM\", 3))\n if '-N' in sys.argv:\n norm = 0\n if '-sav' in sys.argv:\n plot = 
1\n if '-fmt' in sys.argv:\n ind = sys.argv.index(\"-fmt\")\n fmt = sys.argv[ind + 1]\n if data_model == 3:\n in_file = pmag.get_named_arg(\"-f\", 'measurements.txt')\n else:\n in_file = pmag.get_named_arg(\"-f\", 'magic_measurements.txt')\n if '-WD' in sys.argv:\n ind = sys.argv.index('-WD')\n dir_path = sys.argv[ind + 1]\n dir_path = os.path.realpath(dir_path)\n in_file = pmag.resolve_file_name(in_file, dir_path)\n if '-WD' not in sys.argv:\n dir_path = os.path.split(in_file)[0]\n plot_by = pmag.get_named_arg(\"-obj\", \"loc\")\n if data_model == 3:\n plot_key = 'location'\n if plot_by == 'sit':\n plot_key = 'site'\n if plot_by == 'sam':\n plot_key = 'sample'\n if plot_by == 'spc':\n plot_key = 'specimen'\n else:\n plot_key = 'er_location_name'\n if plot_by == 'sit':\n plot_key = 'er_site_name'\n if plot_by == 'sam':\n plot_key = 'er_sample_name'\n if plot_by == 'spc':\n plot_key = 'er_specimen_name'\n\n # set defaults and get more information if needed\n if data_model == 3:\n dmag_key = 'treat_dc_field'\n else:\n dmag_key = 'treatment_dc_field'\n #\n if data_model == 3 and plot_key != 'specimen':\n # gonna need to read in more files\n print('-W- You are trying to plot measurements by {}'.format(plot_key))\n print(' By default, this information is not available in your measurement file.')\n print(' Trying to acquire this information from {}'.format(dir_path))\n con = cb.Contribution(dir_path)\n meas_df = con.propagate_location_to_measurements()\n if meas_df is None:\n print('-W- No data found in {}'.format(dir_path))\n return\n if plot_key not in meas_df.columns:\n print('-W- Could not find required data.')\n print(' Try a different plot key.')\n return\n else:\n print('-I- Found {} information, continuing with plotting'.format(plot_key))\n # need to take the data directly from the contribution here, to keep\n # location/site/sample columns in the measurements table\n data = con.tables['measurements'].convert_to_pmag_data_list()\n file_type = \"measurements\"\n 
else:\n data, file_type = pmag.magic_read(in_file)\n # read in data\n sids = pmag.get_specs(data)\n pmagplotlib.plot_init(FIG['exp'], 6, 6)\n #\n #\n # find desired intensity data\n #\n # get plotlist\n #\n plotlist = []\n if data_model == 3:\n intlist = ['magn_moment', 'magn_volume', 'magn_mass', 'magnitude']\n else:\n intlist = ['measurement_magnitude', 'measurement_magn_moment',\n 'measurement_magn_volume', 'measurement_magn_mass']\n IntMeths = []\n # get all the records with this lab protocol\n #print('data', len(data))\n #print('data[0]', data[0])\n if data_model == 3:\n data = pmag.get_dictitem(data, 'method_codes', LP, 'has')\n else:\n data = pmag.get_dictitem(data, 'magic_method_codes', LP, 'has')\n Ints = {}\n NoInts, int_key = 1, \"\"\n for key in intlist:\n # get all non-blank data for intensity type\n Ints[key] = pmag.get_dictitem(data, key, '', 'F')\n if len(Ints[key]) > 0:\n NoInts = 0\n if int_key == \"\":\n int_key = key\n if NoInts == 1:\n print('No intensity information found')\n sys.exit()\n for rec in Ints[int_key]:\n if rec[plot_key] not in plotlist:\n plotlist.append(rec[plot_key])\n plotlist.sort()\n for plt in plotlist:\n print(plt)\n INTblock = []\n # get data with right intensity info whose plot_key matches plot\n data = pmag.get_dictitem(Ints[int_key], plot_key, plt, 'T')\n # get a list of specimens with appropriate data\n sids = pmag.get_specs(data)\n if len(sids) > 0:\n title = data[0][plot_key]\n for s in sids:\n INTblock = []\n # get data for each specimen\n if data_model == 3:\n sdata = pmag.get_dictitem(data, 'specimen', s, 'T')\n else:\n sdata = pmag.get_dictitem(data, 'er_specimen_name', s, 'T')\n for rec in sdata:\n INTblock.append([float(rec[dmag_key]), 0, 0,\n float(rec[int_key]), 1, 'g'])\n pmagplotlib.plot_mag(FIG['exp'], INTblock, title, 0, units, norm)\n files = {}\n for key in list(FIG.keys()):\n files[key] = title + '_' + LP + '.' 
+ fmt\n if plot == 0:\n pmagplotlib.draw_figs(FIG)\n ans = input(\" S[a]ve to save plot, [q]uit, Return to continue: \")\n if ans == 'q':\n sys.exit()\n if ans == \"a\":\n pmagplotlib.save_plots(FIG, files)\n if plt != plotlist[-1]: # if it isn't the last plot, init the next one\n pmagplotlib.plot_init(FIG['exp'], 6, 6)\n else:\n pmagplotlib.save_plots(FIG, files)\n pmagplotlib.clearFIG(FIG['exp'])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a slice covering all combinations with coordinate i along a given axis.", "response": "def all_but_axis(i, axis, num_axes):\n \"\"\"\n Return a slice covering all combinations with coordinate i along\n axis. (Effectively the hyperplane perpendicular to axis at i.)\n \"\"\"\n the_slice = ()\n for j in range(num_axes):\n if j == axis:\n the_slice = the_slice + (i,)\n else:\n the_slice = the_slice + (slice(None),)\n return the_slice"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef array_map(f, ar):\n \"Apply an ordinary function to all values in an array.\"\n flat_ar = ravel(ar)\n out = zeros(len(flat_ar), flat_ar.typecode())\n for i in range(len(flat_ar)):\n out[i] = f(flat_ar[i])\n out.shape = ar.shape\n return out", "response": "Apply an ordinary function to all values in an array."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef on_plot_select(self,event):\n if not self.xdata or not self.ydata: return\n pos=event.GetPosition()\n width, height = self.canvas.get_width_height()\n pos[1] = height - pos[1]\n xpick_data,ypick_data = pos\n xdata_org = self.xdata\n ydata_org = self.ydata\n data_corrected = self.map.transData.transform(vstack([xdata_org,ydata_org]).T)\n xdata,ydata = data_corrected.T\n xdata = list(map(float,xdata))\n ydata = list(map(float,ydata))\n e = 4e0\n\n index = None\n for i,(x,y) in enumerate(zip(xdata,ydata)):\n if 0 < sqrt((x-xpick_data)**2. 
+ (y-ypick_data)**2.) < e:\n index = i\n break\n if index==None: print(\"Couldn't find point %.1f,%.1f\"%(xpick_data,ypick_data))\n\n self.change_selected(index)", "response": "Select data point if cursor is in range of a data point"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef on_change_plot_cursor(self,event):\n if not self.xdata or not self.ydata: return\n pos=event.GetPosition()\n width, height = self.canvas.get_width_height()\n pos[1] = height - pos[1]\n xpick_data,ypick_data = pos\n xdata_org = self.xdata\n ydata_org = self.ydata\n data_corrected = self.map.transData.transform(vstack([xdata_org,ydata_org]).T)\n xdata,ydata = data_corrected.T\n xdata = list(map(float,xdata))\n ydata = list(map(float,ydata))\n e = 4e0\n\n if self.plot_setting == \"Zoom\":\n self.canvas.SetCursor(wx.Cursor(wx.CURSOR_CROSS))\n else:\n self.canvas.SetCursor(wx.Cursor(wx.CURSOR_ARROW))\n for i,(x,y) in enumerate(zip(xdata,ydata)):\n if 0 < sqrt((x-xpick_data)**2. + (y-ypick_data)**2.) 
< e:\n self.canvas.SetCursor(wx.Cursor(wx.CURSOR_HAND))\n break\n event.Skip()", "response": "change the shape of the cursor"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a tuple with first entry being a boolean which specifies if the user entered all values and second entry being a dictionary of input names to parsed values.", "response": "def get_values(self):\n \"\"\"\n Applies parsing functions to each input as specified in init before returning a tuple with first entry being a boolean which specifies if the user entered all values and a second entry which is a dictionary of input names to parsed values.\n \"\"\"\n return_dict = {}\n for i,ctrl in enumerate(self.list_ctrls):\n if hasattr(self.parse_funcs,'__getitem__') and len(self.parse_funcs)>i and hasattr(self.parse_funcs[i],'__call__'):\n try: return_dict[self.inputs[i]] = self.parse_funcs[i](ctrl.GetValue())\n except: return_dict[self.inputs[i]] = ctrl.GetValue()\n else:\n return_dict[self.inputs[i]] = ctrl.GetValue()\n return ('' not in list(return_dict.values()), return_dict)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nnaming upload_magic.py DESCRIPTION This program will prepare your MagIC text files for uploading to the MagIC database it will check for all the MagIC text files and skip the missing ones SYNTAX upload_magic.py INPUT MagIC txt files OPTIONS -h prints help message and quits -all include all the measurement data, default is only those used in interpretations -DM specify which MagIC data model number to use (2 or 3). Default is 3. 
OUTPUT upload file: file for uploading to MagIC database", "response": "def main():\n \"\"\"\n NAME\n upload_magic.py\n\n DESCRIPTION\n This program will prepare your MagIC text files for uploading to the MagIC database\n it will check for all the MagIC text files and skip the missing ones\n\n SYNTAX\n upload_magic.py\n\n INPUT\n MagIC txt files\n\n OPTIONS\n -h prints help message and quits\n -all include all the measurement data, default is only those used in interpretations\n -DM specify which MagIC data model number to use (2 or 3). Default is 3.\n\n OUTPUT\n upload file: file for uploading to MagIC database\n \"\"\"\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n else:\n data_model_num = pmag.get_named_arg(\"-DM\", 3)\n dataframe = extractor.command_line_dataframe([['cat', False, 0], ['F', False, ''], ['f', False, '']])\n checked_args = extractor.extract_and_check_args(sys.argv, dataframe)\n dir_path, concat = extractor.get_vars(['WD', 'cat'], checked_args)\n data_model_num = int(float(data_model_num))\n if data_model_num == 2:\n ipmag.upload_magic2(concat, dir_path)\n else:\n ipmag.upload_magic(concat, dir_path)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef d2_func(x, y, z):\n '''\n #=================================================\n /poly fit for every SF grid data\n #=================================================\n '''\n X, Y = np.meshgrid(x, y, copy=False)\n X = X.flatten()\n Y = Y.flatten()\n A = np.array([np.ones(len(X)), X, X**2, Y, Y**2, X*Y]).T\n Z = np.array(z)\n B = Z.flatten()\n # print(A.shape,B.shape)\n coeff, r, rank, s = np.linalg.lstsq(A, B, rcond=None)\n return -coeff[5]", "response": "Compute the D2 function for the given data."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef grid_list(data):\n '''\n #=================================================\n /process the grid data\n /convert to list data 
for poly fitting\n #=================================================\n '''\n a = []\n b = []\n M = []\n for i in data:\n a.append(i[0]) # np.array([i[1] for i in data], dtype=np.float64)\n b.append(i[1]) # np.array([i[0] for i in data], dtype=np.float64)\n M.append(i[2]) # np.array([i[2] for i in data], dtype=np.float64)\n a = np.array(a, dtype=np.float64).tolist()\n b = np.array(b, dtype=np.float64).tolist()\n M = np.array(M, dtype=np.float64).tolist()\n a = list(set(a))\n b = list(set(b))\n return a, b, M", "response": "Converts grid data into separate lists of x coordinates, y coordinates and z values for polynomial fitting."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfitting the FORC distribution for a given set of values", "response": "def fit(self, SF, x_range, y_range, matrix_z):\n '''\n #=================================================\n /the main fitting process\n /xx,yy,zz = Hb,Ha,p\n /p is the FORC distribution\n /m0,n0 is the index of values on Ha = Hb\n /then loop m0 and n0\n /based on smooth factor(SF)\n /select data grid from the matrix_z for curve fitting\n #=================================================\n '''\n xx, yy, zz = [], [], []\n m0, n0 = [], []\n for m, n in itertools.product(np.arange(0, len(x_range), step=SF), np.arange(0, len(y_range), step=SF)):\n if x_range[m] > y_range[n]: # Ha nearly equal Hb\n m0.append(m)\n n0.append(n)\n\n aa, bb, cc = [], [], []\n for m, n in zip(m0, n0):\n s = 0\n try:\n grid_data = []\n a_ = x_range[m+s]\n b_ = y_range[n-s]\n for i, j in itertools.product(np.arange(3*SF+1), np.arange(3*SF+1)):\n try:\n grid_data.append(\n [x_range[m+s+i], y_range[n-s-j], matrix_z.item(n-s-j, m+s+i)])\n except:\n try:\n for i, j in itertools.product(np.arange(3), np.arange(3)):\n grid_data.append(\n [x_range[m+i], y_range[n-j], matrix_z.item(n-j, m+i)])\n except:\n pass\n # print(grid_data)\n '''\n #=================================================\n /when SF = n\n /data grid as (2*n+1)x(2*n+1)\n /grid_list: convert grid to list\n /every grid produces one
FORC distribution p\n /the poly fitting use d2_func\n #=================================================\n '''\n x, y, z = grid_list(grid_data)\n try:\n p = d2_func(x, y, z)\n # print(p)\n xx.append((a_-b_)/2)\n yy.append((a_+b_)/2)\n zz.append(p)\n except Exception as e:\n # print(e)\n pass\n except:\n pass\n\n '''\n #=================================================\n /the data will be saved as a pandas dataframe\n /all the rows with nan values will be deleted by dropna()\n #=================================================\n '''\n # print(zz)\n df = pd.DataFrame({'x': xx, 'y': yy, 'z': zz})\n #df = df.replace(0,np.nan)\n df = df.dropna()\n '''\n #=================================================\n /due to the space near Bc = zero\n /the Bi values when Bc < 0.03 will be mirrored to -Bc\n #=================================================\n '''\n df_negative = df[(df.x < 0.03)].copy()\n df_negative.x = df_negative.x*-1\n df = pd.concat([df, df_negative]) # DataFrame.append was removed in pandas 2.0\n df = df.drop_duplicates(['x', 'y'])\n df = df.sort_values('x')\n # plt.scatter(df.x,df.y,c=df.z)\n # plt.show()\n '''\n #=================================================\n /reset the Bc and Bi range by X,Y\n /use interpolation to obtain the FORC distribution\n #=================================================\n '''\n xrange = [0, int((np.max(df.x)+0.05)*10)/10]\n yrange = [int((np.min(df.y)-0.05)*10)/10,\n int((np.max(df.y)+0.05)*10)/10]\n X = np.linspace(xrange[0], xrange[1], 200)\n Y = np.linspace(yrange[0], yrange[1], 200)\n self.yi, self.xi = np.mgrid[yrange[0]:yrange[1]:200j, xrange[0]:xrange[1]:200j]\n\n #self.xi,self.yi = np.mgrid[0:0.2:400j,-0.15:0.15:400j]\n z = df.z/np.max(df.z)\n z = np.asarray(z.tolist())\n self.zi = griddata((df.x, df.y), z, (self.xi, self.yi), method='cubic')"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef rawData(self, fileAdres=None):\n # skip skiprows\n skiprows = None\n skip_from = [b'Field', b'Moment']\n with
open(fileAdres, 'rb') as fr:\n #f = fr.read()\n for i, line in enumerate(fr, 1):\n # print(line.split())\n if skip_from == line.split():\n skiprows = i+2\n break\n # else:\n # print('file format wrong, cannot find the data row.')\n skiprows = 34 if skiprows == None else skiprows\n df = pd.read_csv(fileAdres, skiprows=skiprows, sep='\\s+',\n delimiter=',', names=['H', 'M'], skipfooter=1,\n engine='python')\n\n H = df.H # measured field\n M = df.M # measured magnetic moment\n '''\n #=================================================\n /datainterval_H/_M\n /slice the measured data into pieces\n /for every measured FORC\n #=================================================\n '''\n dataInterval_H = []\n dataInterval_M = []\n # print(H)\n cretia = df.H.mean() # edge of linear programing for selecting data\n H0 = df.H.max() # the maximum field\n self.x, self.y, self.z = [[], [], []]\n for i in np.arange(1, len(H)):\n dataInterval_H.append(H[i])\n dataInterval_M.append(M[i])\n if abs(H[i]-H0) <= 0.001: # when the filed reach the max, a new forc\n if len(dataInterval_H) >= 0 and len(dataInterval_H) <= 200:\n # print(dataInterval_H)\n Ha = dataInterval_H[0]\n dataInterval_H.pop(-1)\n dataInterval_M.pop(-1)\n Hb = dataInterval_H[1:-1]\n Hm = dataInterval_M[1:-1]\n for t in np.arange(len(Hb)):\n self.x.append(Hb[t])\n self.y.append(Ha)\n self.z.append(Hm[t])\n # print(Ha)\n dataInterval_H = []\n dataInterval_M = []\n self.rawdf = df\n '''\n #=================================================\n transfer the data set to matrix as len(x)*len(y) with z value\n /mesh up the rawdata\n /select the data area by X,Y ranges\n /obtain regular spaced data potins by np.linspace\n /use interplote to caculate the Hm values\n /loop Ha(Y),Hb(X)\n /fill every position with Hm, else with np.nan\n #=================================================\n '''\n self.z = self.z/np.max(self.z)\n # print(int(np.min(self.x)*100)/100,np.max(self.x))\n xrange = [int((np.min(self.x)-0.1)*10)/10,\n 
int((np.max(self.x)+0.1)*10)/10]\n yrange = [int((np.min(self.y)-0.1)*10)/10,\n int((np.max(self.y)+0.1)*10)/10]\n X = np.linspace(xrange[0], xrange[1], 200)\n Y = np.linspace(yrange[0], yrange[1], 200)\n yi, xi = np.mgrid[yrange[0]:yrange[1]:200j, xrange[0]:xrange[1]:200j]\n\n #X = np.linspace(-0.2,0.3,200)\n #Y = np.linspace(-0.2,0.3,200)\n #xi,yi = np.mgrid[-0.2:0.3:200j,-0.2:0.3:200j]\n\n zi = griddata((self.x, self.y), self.z, (xi, yi),\n method='linear') # !!! must linear\n self.matrix_z = zi\n self.x_range = X\n self.y_range = Y", "response": "Reads the raw FORC data from a file and stores the measured fields and moments in self.x, self.y and self.z."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_data_model():\n #print(\"-I- getting data model, please be patient!!!!\")\n url = 'http://earthref.org/services/MagIC-data-model.txt'\n offline = True # always get cached data model, as 2.5 is now static\n #try:\n # data = urllib2.urlopen(url)\n #except urllib2.URLError:\n # print '-W- Unable to fetch data model online\\nTrying to use cached data model instead'\n # offline = True\n #except httplib.BadStatusLine:\n # print '-W- Website: {} not responding\\nTrying to use cached data model instead'.format(url)\n # offline = True\n if offline:\n data = get_data_offline()\n data_model, file_type = pmag.magic_read(None, data)\n if file_type in ('bad file', 'empty_file'):\n print('-W- Unable to read online data model.\\nTrying to use cached data model instead')\n data = get_data_offline()\n data_model, file_type = pmag.magic_read(None, data)\n ref_dicts = [d for d in data_model if d['column_nmb'] != '>>>>>>>>>>']\n file_types = [d['field_name'] for d in data_model if d['column_nmb'] == 'tab delimited']\n file_types.insert(0, file_type)\n complete_ref = {}\n\n dictionary = {}\n n = 0\n for d in ref_dicts:\n if d['field_name'] in file_types:\n complete_ref[file_types[n]] = dictionary\n n += 1\n dictionary = {}\n else:
dictionary[d['field_name_oracle']] = {'data_type': d['data_type'], 'data_status': d['data_status']}\n return complete_ref", "response": "Get the data model document from the EarthRef site."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef read_upload(up_file, data_model=None):\n print(\"-I- Running validation for your upload file\")\n\n ## Read file\n f = open(up_file)\n lines = f.readlines()\n f.close()\n data = split_lines(lines)\n data_dicts = get_dicts(data)\n ## initialize\n invalid_data = {}\n missing_data = {}\n non_numeric = {}\n bad_vocab = {}\n bad_coords = {}\n invalid_col_names = {}\n missing_file_type = False\n ## make sure you have the data model\n if not data_model:\n data_model = get_data_model()\n reqd_file_types = ['er_locations']\n provided_file_types = set()\n if not data_model:\n return False, None\n ## Iterate through data\n # each dictionary is one tab delimited line in a csv file\n for dictionary in data_dicts:\n for k, v in list(dictionary.items()):\n if k == \"file_type\": # meta data\n provided_file_types.add(v)\n continue\n file_type = dictionary['file_type']\n # need to deal with pmag_criteria type file, too\n item_type = file_type.split('_')[1][:-1]\n if item_type == 'criteria':\n item_name = dictionary.get('criteria_definition')\n elif item_type == 'result':\n item_name = dictionary.get('pmag_result_name', None)\n elif item_type in ('specimen', 'sample', 'site', 'location'):\n item_name = dictionary.get('er_' + item_type + '_name', None)\n elif item_type == 'age':\n # get the lowest level er_*_name column that is filled in\n for dtype in ('specimen', 'sample', 'site', 'location'):\n item_name = dictionary.get('er_' + dtype + '_name', None)\n if item_name:\n break\n elif item_type == 'measurement':\n exp_name = dictionary.get('magic_experiment_name')\n meas_num = dictionary.get('measurement_number')\n item_name = exp_name + '_' + str(meas_num)\n else:\n item_name = 
None\n\n if file_type not in list(data_model.keys()):\n continue\n specific_data_model = data_model[file_type]\n\n ## Function for building problems list\n def add_to_invalid_data(item_name, item_type, invalid_data,\n validation, problem_type):\n \"\"\"\n correctly create or add to the dictionary of invalid values\n \"\"\"\n if item_name:\n if item_type not in invalid_data:\n invalid_data[item_type] = {}\n if item_name not in invalid_data[item_type]:\n invalid_data[item_type][item_name] = {}\n if problem_type not in invalid_data[item_type][item_name]:\n invalid_data[item_type][item_name][problem_type] = []\n invalid_data[item_type][item_name][problem_type].append(validation)\n\n ## Validate for each problem type\n\n # check if column header is in the data model\n invalid_col_name = validate_for_recognized_column(k, v, specific_data_model)\n if invalid_col_name:\n if item_type not in list(invalid_col_names.keys()):\n invalid_col_names[item_type] = set()\n invalid_col_names[item_type].add(invalid_col_name)\n # skip to next item, as additional validations won't work\n # (key is not in the data model)\n\n ## new style\n add_to_invalid_data(item_name, item_type, invalid_data,\n invalid_col_name, 'invalid_col')\n # skip to next item, as additional validations won't work\n # (key is not in the data model)\n continue\n\n # make a list of missing, required data\n missing_item = validate_for_presence(k, v, specific_data_model)\n #print 'k, v', k, v\n if missing_item:\n if item_type not in list(missing_data.keys()):\n missing_data[item_type] = set()\n missing_data[item_type].add(missing_item)\n if item_name:\n # don't double count if a site is missing its parent location\n if item_type == 'age' and missing_item == 'er_location_name':\n pass\n # ignore er_synthetic_name (data model is incorrect here)\n if missing_item == 'er_synthetic_name':\n pass\n else:\n add_to_invalid_data(item_name, item_type, invalid_data,\n missing_item, 'missing_data')\n\n # vocabulary problems\n 
vocab_problem = validate_for_controlled_vocab(k, v, specific_data_model)\n if vocab_problem:\n if item_type not in list(bad_vocab.keys()):\n bad_vocab[item_type] = set()\n bad_vocab[item_type].add(vocab_problem)\n add_to_invalid_data(item_name, item_type, invalid_data,\n vocab_problem, 'vocab_problem')\n\n # illegal coordinates\n coord_problem = validate_for_coordinates(k, v, specific_data_model)\n if coord_problem:\n if item_type not in list(bad_coords.keys()):\n bad_coords[item_type] = set()\n bad_coords[item_type].add(coord_problem)\n add_to_invalid_data(item_name, item_type, invalid_data,\n coord_problem, 'coordinates')\n\n # make a list of data that should be numeric, but aren't\n number_fail = validate_for_numericality(k, v, specific_data_model)\n if number_fail:\n if item_type not in list(non_numeric.keys()):\n non_numeric[item_type] = set()\n non_numeric[item_type].add(number_fail)\n add_to_invalid_data(item_name, item_type, invalid_data,\n number_fail, 'number_fail')\n\n ## Print out all issues\n\n for file_type, invalid_names in list(invalid_col_names.items()):\n print(\"-W- In your {} file, you are using the following unrecognized columns: {}\".format(file_type, ', '.join(invalid_names)))\n\n for file_type, wrong_cols in list(non_numeric.items()):\n print(\"-W- In your {} file, you must provide only valid numbers, in the following columns: {}\".format(file_type, ', '.join(wrong_cols)))\n\n for file_type, empty_cols in list(missing_data.items()):\n print(\"-W- In your {} file, you are missing data in the following required columns: {}\".format(file_type, ', '.join(empty_cols)))\n\n for file_type in reqd_file_types:\n if file_type not in provided_file_types:\n print(\"-W- You have not provided a(n) {} type file, which is required data\".format(file_type))\n missing_file_type = True\n\n for file_type, vocab_types in list(bad_vocab.items()):\n print(\"-W- In your {} file, you are using an unrecognized value for these controlled vocabularies: 
{}\".format(file_type, ', '.join(vocab_types)))\n\n for file_type, coords in list(bad_coords.items()):\n print(\"-W- In your {} file, you are using an illegal value for these columns: {}. (Latitude must be between -90 and +90)\".format(file_type, ', '.join(coords)))\n\n\n if any((invalid_col_names, non_numeric, missing_data, missing_file_type, bad_vocab, bad_coords)):\n return False, invalid_data\n else:\n print(\"-I- validation was successful\")\n return True, None", "response": "Read a file and validate it"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsplitting a MagIC upload format file into lists.", "response": "def split_lines(lines):\n \"\"\"\n split a MagIC upload format file into lists.\n the lists are split by the '>>>' lines between file_types.\n \"\"\"\n container = []\n new_list = []\n for line in lines:\n if '>>>' in line:\n container.append(new_list)\n new_list = []\n else:\n new_list.append(line)\n container.append(new_list)\n return container"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a list of dicts that are lists of lists of file types and headers.", "response": "def get_dicts(data):\n \"\"\"\n data must be a list of lists, from a tab delimited file.\n in each list:\n the first list item will be the type of data.\n the second list item will be a tab delimited list of headers.\n the remaining items will be a tab delimited list following the list of headers.\n \"\"\"\n data_dictionaries = []\n for chunk in data[:-1]:\n if not chunk:\n continue\n data1 = data[0]\n file_type = chunk[0].split('\\t')[1].strip('\\n').strip('\\r')\n keys = chunk[1].split('\\t')\n clean_keys = []\n\n # remove new-line characters, and any empty string keys\n for key in keys:\n clean_key = key.strip('\\n').strip('\\r')\n if clean_key:\n clean_keys.append(clean_key)\n for line in chunk[2:]:\n data_dict = {}\n for key in clean_keys:\n data_dict[key] = \"\"\n line = line.split('\\t')\n for n, key in 
enumerate(clean_keys):\n data_dict[key] = line[n].strip('\\n').strip('\\r')\n data_dict['file_type'] = file_type\n data_dictionaries.append(data_dict)\n return data_dictionaries"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining Declination, Inclination and Intensity from the IGRF model. (http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html) Parameters ---------- input_list : list with format [Date, Altitude, Latitude, Longitude] date must be in decimal year format XXXX.XXXX (Common Era) mod : desired model \"\" : Use the IGRF custom : use values supplied in ghfile or choose from this list ['arch3k','cals3k','pfm9k','hfm10k','cals10k.2','cals10k.1b'] where: arch3k (Korte et al., 2009) cals3k (Korte and Constable, 2011) cals10k.1b (Korte et al., 2011) pfm9k (Nilsson et al., 2014) hfm10k is the hfm.OL1.A1 of Constable et al. (2016) cals10k.2 (Constable et al., 2016) the first four of these models, are constrained to agree with gufm1 (Jackson et al., 2000) for the past four centuries gh : path to file with l m g h data Returns ------- igrf_array : array of IGRF values (0: dec; 1: inc; 2: intensity (in nT)) Examples -------- >>> local_field = ipmag.igrf([2013.6544, .052, 37.87, -122.27]) >>> local_field array([ 1.39489916e+01, 6.13532008e+01, 4.87452644e+04]) >>> ipmag.igrf_print(local_field) Declination: 13.949 Inclination: 61.353 Intensity: 48745.264 nT", "response": "def igrf(input_list, mod='', ghfile=\"\"):\n \"\"\"\n Determine Declination, Inclination and Intensity from the IGRF model.\n (http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html)\n\n Parameters\n ----------\n input_list : list with format [Date, Altitude, Latitude, Longitude]\n date must be in decimal year format XXXX.XXXX (Common Era)\n mod : desired model\n \"\" : Use the IGRF\n custom : use values supplied in ghfile\n or choose from this list\n ['arch3k','cals3k','pfm9k','hfm10k','cals10k.2','cals10k.1b']\n where:\n arch3k (Korte et al., 2009)\n cals3k (Korte and Constable, 
2011)\n cals10k.1b (Korte et al., 2011)\n pfm9k (Nilsson et al., 2014)\n hfm10k is the hfm.OL1.A1 of Constable et al. (2016)\n cals10k.2 (Constable et al., 2016)\n the first four of these models, are constrained to agree\n with gufm1 (Jackson et al., 2000) for the past four centuries\n\n\n gh : path to file with l m g h data\n\n Returns\n -------\n igrf_array : array of IGRF values (0: dec; 1: inc; 2: intensity (in nT))\n\n Examples\n --------\n >>> local_field = ipmag.igrf([2013.6544, .052, 37.87, -122.27])\n >>> local_field\n array([ 1.39489916e+01, 6.13532008e+01, 4.87452644e+04])\n >>> ipmag.igrf_print(local_field)\n Declination: 13.949\n Inclination: 61.353\n Intensity: 48745.264 nT\n \"\"\"\n if ghfile != \"\":\n lmgh = np.loadtxt(ghfile)\n gh = []\n lmgh = np.loadtxt(ghfile).transpose()\n gh.append(lmgh[2][0])\n for i in range(1, lmgh.shape[1]):\n gh.append(lmgh[2][i])\n gh.append(lmgh[3][i])\n if len(gh) == 0:\n print('no valid gh file')\n return\n mod = 'custom'\n if mod == \"\":\n x, y, z, f = pmag.doigrf(\n input_list[3] % 360., input_list[2], input_list[1], input_list[0])\n elif mod != 'custom':\n x, y, z, f = pmag.doigrf(\n input_list[3] % 360., input_list[2], input_list[1], input_list[0], mod=mod)\n else:\n x, y, z, f = pmag.docustom(\n input_list[3] % 360., input_list[2], input_list[1], gh)\n\n igrf_array = pmag.cart2dir((x, y, z))\n return igrf_array"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef dms2dd(degrees, minutes, seconds):\n dd = float(degrees) + old_div(float(minutes), 60) + \\\n old_div(float(seconds), (60 * 60))\n return dd", "response": "Convert latitude and longitude of a location that is in degrees minutes seconds to decimal degrees"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fisher_mean(dec=None, inc=None, di_block=None):\n if di_block is None:\n di_block = make_di_block(dec, inc)\n return pmag.fisher_mean(di_block)\n 
else:\n return pmag.fisher_mean(di_block)", "response": "Calculates the Fisher mean and associated parameters from either a list of declination values and a separate list of inclination values or a single di_block."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalculating the critical angle of interest from the true mean within which a chosen percentage of directions lie can be calculated from the Fisher distribution.", "response": "def fisher_angular_deviation(dec=None, inc=None, di_block=None, confidence=95):\n '''\n The angle from the true mean within which a chosen percentage of directions\n lie can be calculated from the Fisher distribution. This function uses the\n calculated Fisher concentration parameter to estimate this angle from\n directional data. The 63 percent confidence interval is often called the\n angular standard deviation.\n\n Parameters\n ----------\n dec : list of declinations or longitudes\n inc : list of inclinations or latitudes\n di_block : a nested list of [dec,inc,1.0]\n A di_block can be provided instead of dec, inc lists in which case it\n will be used. 
Either dec, inc lists or a di_block need to be provided.\n confidence : 50 percent, 63 percent or 95 percent\n\n Returns\n -------\n theta : critical angle of interest from the mean which contains the\n percentage of directions specified by the confidence parameter\n '''\n if di_block is None:\n di_block = make_di_block(dec, inc)\n mean = pmag.fisher_mean(di_block)\n else:\n mean = pmag.fisher_mean(di_block)\n if confidence == 50:\n theta = old_div(67.5, np.sqrt(mean['k']))\n if confidence == 63:\n theta = old_div(81, np.sqrt(mean['k']))\n if confidence == 95:\n theta = old_div(140, np.sqrt(mean['k']))\n return theta"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncalculating the Bingham mean and associated statistical parameters from a list of declination values and inclination values.", "response": "def bingham_mean(dec=None, inc=None, di_block=None):\n \"\"\"\n Calculates the Bingham mean and associated statistical parameters from\n either a list of declination values and a separate list of inclination\n values or from a di_block (a nested list of [dec,inc,1.0]).\n Returns a dictionary with the Bingham mean and statistical parameters.\n\n Parameters\n ----------\n dec: list of declinations\n inc: list of inclinations\n\n or\n\n di_block: a nested list of [dec,inc,1.0]\n\n A di_block can be provided instead of dec, inc lists in which case it will\n be used.
Either dec, inc lists or a di_block need to be passed to the function.\n\n Returns\n ---------\n bpars : dictionary containing the Bingham mean and associated statistics.\n\n Examples\n --------\n Use lists of declination and inclination to calculate a Bingham mean:\n\n >>> ipmag.bingham_mean(dec=[140,127,142,136],inc=[21,23,19,22])\n {'Edec': 220.84075754194598,\n 'Einc': -13.745780972597291,\n 'Eta': 9.9111522306938742,\n 'Zdec': 280.38894136954474,\n 'Zeta': 9.8653370276451113,\n 'Zinc': 64.23509410796224,\n 'dec': 136.32637167111312,\n 'inc': 21.34518678073179,\n 'n': 4}\n\n Use a di_block to calculate a Bingham mean (will give the same output as the\n example with the lists):\n\n >>> ipmag.bingham_mean(di_block=[[140,21],[127,23],[142,19],[136,22]])\n \"\"\"\n if di_block is None:\n di_block = make_di_block(dec, inc)\n return pmag.dobingham(di_block)\n else:\n return pmag.dobingham(di_block)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncalculate the Kent mean and associated statistical parameters from either a list of declination values and a separate list of inclination values or a single di_block.", "response": "def kent_mean(dec=None, inc=None, di_block=None):\n \"\"\"\n Calculates the Kent mean and associated statistical parameters from either a list of\n declination values and a separate list of inclination values or from a\n di_block (a nested list of [dec,inc,1.0]). Returns a\n dictionary with the Kent mean and statistical parameters.\n\n Parameters\n ----------\n dec: list of declinations\n inc: list of inclinations\n\n or\n\n di_block: a nested list of [dec,inc,1.0]\n\n A di_block can be provided instead of dec, inc lists in which case it will\n be used.
Either dec, inc lists or a di_block need to be passed to the function.\n\n Returns\n ----------\n kpars : dictionary containing Kent mean and associated statistics.\n\n Examples\n --------\n Use lists of declination and inclination to calculate a Kent mean:\n\n >>> ipmag.kent_mean(dec=[140,127,142,136],inc=[21,23,19,22])\n {'Edec': 280.38683553668795,\n 'Einc': 64.236598921744289,\n 'Eta': 0.72982112760919715,\n 'Zdec': 40.824690028412761,\n 'Zeta': 6.7896823241008795,\n 'Zinc': 13.739412321974067,\n 'dec': 136.30838974272072,\n 'inc': 21.347784026899987,\n 'n': 4}\n\n Use a di_block to calculate a Kent mean (will give the same output as the\n example with the lists):\n\n >>> ipmag.kent_mean(di_block=[[140,21],[127,23],[142,19],[136,22]])\n \"\"\"\n if di_block is None:\n di_block = make_di_block(dec, inc)\n return pmag.dokent(di_block, len(di_block))\n else:\n return pmag.dokent(di_block, len(di_block))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef print_direction_mean(mean_dictionary):\n print('Dec: ' + str(round(mean_dictionary['dec'], 1)) +\n ' Inc: ' + str(round(mean_dictionary['inc'], 1)))\n print('Number of directions in mean (n): ' + str(mean_dictionary['n']))\n print('Angular radius of 95% confidence (a_95): ' +\n str(round(mean_dictionary['alpha95'], 1)))\n print('Precision parameter (k) estimate: ' +\n str(round(mean_dictionary['k'], 1)))", "response": "This function prints a Fisher mean and associated statistics for the directional data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndoing a pretty job printing a Fisher mean and associated statistics for mean paleomagnetic poles.
Parameters ---------- mean_dictionary: output dictionary of pmag.fisher_mean Examples -------- Generate a Fisher mean using ``ipmag.fisher_mean`` and then print it nicely using ``ipmag.print_pole_mean`` >>> my_mean = ipmag.fisher_mean(di_block=[[140,21],[127,23],[142,19],[136,22]]) >>> ipmag.print_pole_mean(my_mean) Plon: 136.3 Plat: 21.3 Number of directions in mean (n): 4 Angular radius of 95% confidence (A_95): 7.3 Precision parameter (k) estimate: 159.7", "response": "def print_pole_mean(mean_dictionary):\n \"\"\"\n Does a pretty job printing a Fisher mean and associated statistics for\n mean paleomagnetic poles.\n\n Parameters\n ----------\n mean_dictionary: output dictionary of pmag.fisher_mean\n\n Examples\n --------\n Generate a Fisher mean using ``ipmag.fisher_mean`` and then print it nicely\n using ``ipmag.print_pole_mean``\n\n >>> my_mean = ipmag.fisher_mean(di_block=[[140,21],[127,23],[142,19],[136,22]])\n >>> ipmag.print_pole_mean(my_mean)\n Plon: 136.3 Plat: 21.3\n Number of directions in mean (n): 4\n Angular radius of 95% confidence (A_95): 7.3\n Precision parameter (k) estimate: 159.7\n \"\"\"\n print('Plon: ' + str(round(mean_dictionary['dec'], 1)) +\n ' Plat: ' + str(round(mean_dictionary['inc'], 1)))\n print('Number of directions in mean (n): ' + str(mean_dictionary['n']))\n print('Angular radius of 95% confidence (A_95): ' +\n str(round(mean_dictionary['alpha95'], 1)))\n print('Precision parameter (k) estimate: ' +\n str(round(mean_dictionary['k'], 1)))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fishrot(k=20, n=100, dec=0, inc=90, di_block=True):\n directions = []\n declinations = []\n inclinations = []\n if di_block == True:\n for data in range(n):\n d, i = pmag.fshdev(k)\n drot, irot = pmag.dodirot(d, i, dec, inc)\n directions.append([drot, irot, 1.])\n return directions\n else:\n for data in range(n):\n d, i = pmag.fshdev(k)\n drot, irot = pmag.dodirot(d, i, dec, inc)\n 
declinations.append(drot)\n inclinations.append(irot)\n return declinations, inclinations", "response": "Generates Fisher-distributed unit vectors from a specified distribution."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef tk03(n=100, dec=0, lat=0, rev='no', G2=0, G3=0):\n tk_03_output = []\n for k in range(n):\n gh = pmag.mktk03(8, k, G2, G3) # terms and random seed\n # get a random longitude, between 0 and 360 (randint is inclusive)\n lon = random.randint(0, 360)\n vec = pmag.getvec(gh, lat, lon) # send field model and lat to getvec\n vec[0] += dec\n if vec[0] >= 360.:\n vec[0] -= 360.\n if k % 2 == 0 and rev == 'yes':\n vec[0] += 180.\n vec[1] = -vec[1]\n tk_03_output.append([vec[0], vec[1], vec[2]])\n return tk_03_output", "response": "Returns a nested list of declination, inclination and intensity values simulating secular variation."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef squish(incs, f):\n try:\n length = len(incs)\n incs_squished = []\n for n in range(0, length):\n inc_rad = incs[n] * np.pi / 180. # convert to radians\n inc_new_rad = f * np.tan(inc_rad)\n inc_new = np.arctan(inc_new_rad) * 180. / \\\n np.pi # convert back to degrees\n incs_squished.append(inc_new)\n return incs_squished\n except:\n inc_rad = incs * np.pi / 180. # convert to radians\n inc_new_rad = f * np.tan(inc_rad)\n inc_new = np.arctan(inc_new_rad) * 180. / \\\n np.pi # convert back to degrees\n return inc_new", "response": "This function applies a flattening factor to inclination data and returns the squished values."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef do_flip(dec=None, inc=None, di_block=None):\n if di_block is None:\n dec_flip = []\n inc_flip = []\n for n in range(0, len(dec)):\n dec_flip.append((dec[n] - 180.)
% 360.0)\n inc_flip.append(-inc[n])\n return dec_flip, inc_flip\n else:\n dflip = []\n for rec in di_block:\n d, i = (rec[0] - 180.) % 360., -rec[1]\n dflip.append([d, i, 1.0])\n return dflip", "response": "This function flips declinations and inclinations (or a di_block) to their antipodal directions."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nConduct a bootstrap test (Tauxe, 2010) for a common mean on two declination, inclination data sets. Plots are generated of the cumulative distributions of the Cartesian coordinates of the means of the pseudo-samples (one for x, one for y and one for z). If the 95 percent confidence bounds for each component overlap, the two directions are not significantly different. Parameters ---------- Data1 : a nested list of directional data [dec,inc] (a di_block) Data2 : a nested list of directional data [dec,inc] (a di_block) if Data2 is length of 1, treat as single direction NumSims : number of bootstrap samples (default is 1000) save : optional save of plots (default is False) save_folder : path to directory where plots should be saved fmt : format of figures to be saved (default is 'svg') figsize : optionally adjust figure size (default is (7, 2.3)) x_tick_bins : because they occasionally overlap depending on the data, this argument allows you adjust number of tick marks on the x axis of graphs (default is 4) Returns ------- three plots : cumulative distributions of the X, Y, Z of bootstrapped means Examples -------- Develop two populations of directions using ``ipmag.fishrot``. Use the function to determine if they share a common mean (through visual inspection of resulting plots).
>>> directions_A = ipmag.fishrot(k=20, n=30, dec=40, inc=60) >>> directions_B = ipmag.fishrot(k=35, n=25, dec=42, inc=57) >>> ipmag.common_mean_bootstrap(directions_A, directions_B)", "response": "def common_mean_bootstrap(Data1, Data2, NumSims=1000, save=False, save_folder='.', fmt='svg', figsize=(7, 2.3), x_tick_bins=4):\n \"\"\"\n Conduct a bootstrap test (Tauxe, 2010) for a common mean on two declination,\n inclination data sets. Plots are generated of the cumulative distributions\n of the Cartesian coordinates of the means of the pseudo-samples (one for x,\n one for y and one for z). If the 95 percent confidence bounds for each\n component overlap, the two directions are not significantly different.\n\n Parameters\n ----------\n Data1 : a nested list of directional data [dec,inc] (a di_block)\n Data2 : a nested list of directional data [dec,inc] (a di_block)\n if Data2 is length of 1, treat as single direction\n NumSims : number of bootstrap samples (default is 1000)\n save : optional save of plots (default is False)\n save_folder : path to directory where plots should be saved\n fmt : format of figures to be saved (default is 'svg')\n figsize : optionally adjust figure size (default is (7, 2.3))\n x_tick_bins : because they occasionally overlap depending on the data, this\n argument allows you adjust number of tick marks on the x axis of graphs\n (default is 4)\n\n Returns\n -------\n three plots : cumulative distributions of the X, Y, Z of bootstrapped means\n\n Examples\n --------\n Develop two populations of directions using ``ipmag.fishrot``. 
Use the\n function to determine if they share a common mean (through visual inspection\n of resulting plots).\n\n >>> directions_A = ipmag.fishrot(k=20, n=30, dec=40, inc=60)\n >>> directions_B = ipmag.fishrot(k=35, n=25, dec=42, inc=57)\n >>> ipmag.common_mean_bootstrap(directions_A, directions_B)\n \"\"\"\n counter = 0\n BDI1 = pmag.di_boot(Data1)\n cart1 = pmag.dir2cart(BDI1).transpose()\n X1, Y1, Z1 = cart1[0], cart1[1], cart1[2]\n if np.array(Data2).shape[0] > 2:\n BDI2 = pmag.di_boot(Data2)\n cart2 = pmag.dir2cart(BDI2).transpose()\n X2, Y2, Z2 = cart2[0], cart2[1], cart2[2]\n else:\n cart = pmag.dir2cart(Data2).transpose()\n\n fignum = 1\n fig = plt.figure(figsize=figsize)\n fig = plt.subplot(1, 3, 1)\n\n minimum = int(0.025 * len(X1))\n maximum = int(0.975 * len(X1))\n\n X1, y = pmagplotlib.plot_cdf(fignum, X1, \"X component\", 'r', \"\")\n bounds1 = [X1[minimum], X1[maximum]]\n pmagplotlib.plot_vs(fignum, bounds1, 'r', '-')\n if np.array(Data2).shape[0] > 2:\n X2, y = pmagplotlib.plot_cdf(fignum, X2, \"X component\", 'b', \"\")\n bounds2 = [X2[minimum], X2[maximum]]\n pmagplotlib.plot_vs(fignum, bounds2, 'b', '--')\n else:\n pmagplotlib.plot_vs(fignum, [cart[0]], 'k', '--')\n plt.ylim(0, 1)\n plt.locator_params(nbins=x_tick_bins)\n\n plt.subplot(1, 3, 2)\n\n Y1, y = pmagplotlib.plot_cdf(fignum, Y1, \"Y component\", 'r', \"\")\n bounds1 = [Y1[minimum], Y1[maximum]]\n pmagplotlib.plot_vs(fignum, bounds1, 'r', '-')\n if np.array(Data2).shape[0] > 2:\n Y2, y = pmagplotlib.plot_cdf(fignum, Y2, \"Y component\", 'b', \"\")\n bounds2 = [Y2[minimum], Y2[maximum]]\n pmagplotlib.plot_vs(fignum, bounds2, 'b', '--')\n else:\n pmagplotlib.plot_vs(fignum, [cart[1]], 'k', '--')\n plt.ylim(0, 1)\n\n plt.subplot(1, 3, 3)\n\n Z1, y = pmagplotlib.plot_cdf(fignum, Z1, \"Z component\", 'r', \"\")\n bounds1 = [Z1[minimum], Z1[maximum]]\n pmagplotlib.plot_vs(fignum, bounds1, 'r', '-')\n\n if np.array(Data2).shape[0] > 2:\n Z2, y = pmagplotlib.plot_cdf(fignum, Z2, \"Z component\", 
'b', \"\")\n bounds2 = [Z2[minimum], Z2[maximum]]\n pmagplotlib.plot_vs(fignum, bounds2, 'b', '--')\n else:\n pmagplotlib.plot_vs(fignum, [cart[2]], 'k', '--')\n plt.ylim(0, 1)\n plt.locator_params(nbins=x_tick_bins)\n\n plt.tight_layout()\n if save == True:\n plt.savefig(os.path.join(\n save_folder, 'common_mean_bootstrap') + '.' + fmt)\n plt.show()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconduct a Watson V test for a common mean on two directional data sets. This function calculates Watson's V statistic from input files through Monte Carlo simulation in order to test whether two populations of directional data could have been drawn from a common mean. The critical angle between the two sample mean directions and the corresponding McFadden and McElhinny (1990) classification is printed. Parameters ---------- Data1 : a nested list of directional data [dec,inc] (a di_block) Data2 : a nested list of directional data [dec,inc] (a di_block) NumSims : number of Monte Carlo simulations (default is 5000) print_result : default is to print the test result (True) plot : the default is no plot ('no'). Putting 'yes' will the plot the CDF from the Monte Carlo simulations. save : optional save of plots (default is False) save_folder : path to where plots will be saved (default is current) fmt : format of figures to be saved (default is 'svg') Returns ------- printed text : text describing the test result is printed result : a boolean where 0 is fail and 1 is pass angle : angle between the Fisher means of the two data sets critical_angle : critical angle for the test to pass Examples -------- Develop two populations of directions using ``ipmag.fishrot``. Use the function to determine if they share a common mean. 
>>> directions_A = ipmag.fishrot(k=20, n=30, dec=40, inc=60) >>> directions_B = ipmag.fishrot(k=35, n=25, dec=42, inc=57) >>> ipmag.common_mean_watson(directions_A, directions_B)", "response": "def common_mean_watson(Data1, Data2, NumSims=5000, print_result=True, plot='no', save=False, save_folder='.', fmt='svg'):\n \"\"\"\n Conduct a Watson V test for a common mean on two directional data sets.\n\n This function calculates Watson's V statistic from input files through\n Monte Carlo simulation in order to test whether two populations of\n directional data could have been drawn from a common mean. The critical\n angle between the two sample mean directions and the corresponding\n McFadden and McElhinny (1990) classification is printed.\n\n Parameters\n ----------\n Data1 : a nested list of directional data [dec,inc] (a di_block)\n Data2 : a nested list of directional data [dec,inc] (a di_block)\n NumSims : number of Monte Carlo simulations (default is 5000)\n print_result : default is to print the test result (True)\n plot : the default is no plot ('no'). Putting 'yes' will the plot the CDF\n from the Monte Carlo simulations.\n save : optional save of plots (default is False)\n save_folder : path to where plots will be saved (default is current)\n fmt : format of figures to be saved (default is 'svg')\n\n Returns\n -------\n printed text : text describing the test result is printed\n result : a boolean where 0 is fail and 1 is pass\n angle : angle between the Fisher means of the two data sets\n critical_angle : critical angle for the test to pass\n\n Examples\n --------\n Develop two populations of directions using ``ipmag.fishrot``. 
Use the\n function to determine if they share a common mean.\n\n >>> directions_A = ipmag.fishrot(k=20, n=30, dec=40, inc=60)\n >>> directions_B = ipmag.fishrot(k=35, n=25, dec=42, inc=57)\n >>> ipmag.common_mean_watson(directions_A, directions_B)\n \"\"\"\n pars_1 = pmag.fisher_mean(Data1)\n pars_2 = pmag.fisher_mean(Data2)\n cart_1 = pmag.dir2cart([pars_1[\"dec\"], pars_1[\"inc\"], pars_1[\"r\"]])\n cart_2 = pmag.dir2cart([pars_2['dec'], pars_2['inc'], pars_2[\"r\"]])\n Sw = pars_1['k'] * pars_1['r'] + pars_2['k'] * pars_2['r'] # k1*r1+k2*r2\n xhat_1 = pars_1['k'] * cart_1[0] + pars_2['k'] * cart_2[0] # k1*x1+k2*x2\n xhat_2 = pars_1['k'] * cart_1[1] + pars_2['k'] * cart_2[1] # k1*y1+k2*y2\n xhat_3 = pars_1['k'] * cart_1[2] + pars_2['k'] * cart_2[2] # k1*z1+k2*z2\n Rw = np.sqrt(xhat_1**2 + xhat_2**2 + xhat_3**2)\n V = 2 * (Sw - Rw)\n # keep weighted sum for later when determining the \"critical angle\"\n # let's save it as Sr (notation of McFadden and McElhinny, 1990)\n Sr = Sw\n\n # do monte carlo simulation of datasets with same kappas as data,\n # but a common mean\n counter = 0\n Vp = [] # set of Vs from simulations\n for k in range(NumSims):\n\n # get a set of N1 fisher distributed vectors with k1,\n # calculate fisher stats\n Dirp = []\n for i in range(pars_1[\"n\"]):\n Dirp.append(pmag.fshdev(pars_1[\"k\"]))\n pars_p1 = pmag.fisher_mean(Dirp)\n # get a set of N2 fisher distributed vectors with k2,\n # calculate fisher stats\n Dirp = []\n for i in range(pars_2[\"n\"]):\n Dirp.append(pmag.fshdev(pars_2[\"k\"]))\n pars_p2 = pmag.fisher_mean(Dirp)\n # get the V for these\n Vk = pmag.vfunc(pars_p1, pars_p2)\n Vp.append(Vk)\n\n # sort the Vs, get Vcrit (95th percentile one)\n\n Vp.sort()\n k = int(.95 * NumSims)\n Vcrit = Vp[k]\n\n # equation 18 of McFadden and McElhinny, 1990 calculates the critical\n # value of R (Rwc)\n\n Rwc = Sr - (old_div(Vcrit, 2))\n\n # following equation 19 of McFadden and McElhinny (1990) the critical\n # angle is calculated. 
If the observed angle (also calculated below)\n # between the data set means exceeds the critical angle the hypothesis\n # of a common mean direction may be rejected at the 95% confidence\n # level. The critical angle is simply a different way to present\n # Watson's V parameter so it makes sense to use the Watson V parameter\n # in comparison with the critical value of V for considering the test\n # results. What calculating the critical angle allows for is the\n # classification of McFadden and McElhinny (1990) to be made\n # for data sets that are consistent with sharing a common mean.\n\n k1 = pars_1['k']\n k2 = pars_2['k']\n R1 = pars_1['r']\n R2 = pars_2['r']\n critical_angle = np.degrees(np.arccos(old_div(((Rwc**2) - ((k1 * R1)**2)\n - ((k2 * R2)**2)),\n (2 * k1 * R1 * k2 * R2))))\n D1 = (pars_1['dec'], pars_1['inc'])\n D2 = (pars_2['dec'], pars_2['inc'])\n angle = pmag.angle(D1, D2)\n\n if print_result == True:\n print(\"Results of Watson V test: \")\n print(\"\")\n print(\"Watson's V: \" '%.1f' % (V))\n print(\"Critical value of V: \" '%.1f' % (Vcrit))\n\n if V < Vcrit:\n if print_result == True:\n print('\"Pass\": Since V is less than Vcrit, the null hypothesis')\n print('that the two populations are drawn from distributions')\n print('that share a common mean direction can not be rejected.')\n result = 1\n elif V > Vcrit:\n if print_result == True:\n print('\"Fail\": Since V is greater than Vcrit, the two means can')\n print('be distinguished at the 95% confidence level.')\n result = 0\n\n if print_result == True:\n print(\"\")\n print(\"M&M1990 classification:\")\n print(\"\")\n print(\"Angle between data set means: \" '%.1f' % (angle))\n print(\"Critical angle for M&M1990: \" '%.1f' % (critical_angle))\n\n if print_result == True:\n if V > Vcrit:\n print(\"\")\n elif V < Vcrit:\n if critical_angle < 5:\n print(\"The McFadden and McElhinny (1990) classification for\")\n print(\"this test is: 'A'\")\n elif critical_angle < 10:\n print(\"The McFadden and 
McElhinny (1990) classification for\")\n print(\"this test is: 'B'\")\n elif critical_angle < 20:\n print(\"The McFadden and McElhinny (1990) classification for\")\n print(\"this test is: 'C'\")\n else:\n print(\"The McFadden and McElhinny (1990) classification for\")\n print(\"this test is: 'INDETERMINATE'\")\n\n if plot == 'yes':\n CDF = {'cdf': 1}\n # pmagplotlib.plot_init(CDF['cdf'],5,5)\n plt.figure(figsize=(3.5, 2.5))\n p1 = pmagplotlib.plot_cdf(CDF['cdf'], Vp, \"Watson's V\", 'r', \"\")\n p2 = pmagplotlib.plot_vs(CDF['cdf'], [V], 'g', '-')\n p3 = pmagplotlib.plot_vs(CDF['cdf'], [Vp[k]], 'b', '--')\n # pmagplotlib.draw_figs(CDF)\n if save == True:\n plt.savefig(os.path.join(\n save_folder, 'common_mean_watson') + '.' + fmt)\n pmagplotlib.show_fig(CDF['cdf'])\n\n return result, angle[0], critical_angle"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fishqq(lon=None, lat=None, di_block=None):\n if di_block is None:\n all_dirs = make_di_block(lon, lat)\n else:\n all_dirs = di_block\n\n ppars = pmag.doprinc(all_dirs) # get principal directions\n\n rDIs = []\n nDIs = []\n QQ_dict1 = {}\n QQ_dict2 = {}\n\n for rec in all_dirs:\n angle = pmag.angle([rec[0], rec[1]], [ppars['dec'], ppars['inc']])\n if angle > 90.:\n rDIs.append(rec)\n else:\n nDIs.append(rec)\n\n if len(rDIs) >= 10 or len(nDIs) >= 10:\n D1, I1 = [], []\n QQ = {'unf': 1, 'exp': 2}\n if len(nDIs) < 10:\n ppars = pmag.doprinc(rDIs) # get principal directions\n Drbar, Irbar = ppars['dec'] - 180., -ppars['inc']\n Nr = len(rDIs)\n for di in rDIs:\n d, irot = pmag.dotilt(\n di[0], di[1], Drbar - 180., 90. 
- Irbar) # rotate to mean\n drot = d - 180.\n if drot < 0:\n drot = drot + 360.\n D1.append(drot)\n I1.append(irot)\n Dtit = 'Mode 2 Declinations'\n Itit = 'Mode 2 Inclinations'\n else:\n ppars = pmag.doprinc(nDIs) # get principal directions\n Dnbar, Inbar = ppars['dec'], ppars['inc']\n Nn = len(nDIs)\n for di in nDIs:\n d, irot = pmag.dotilt(\n di[0], di[1], Dnbar - 180., 90. - Inbar) # rotate to mean\n drot = d - 180.\n if drot < 0:\n drot = drot + 360.\n D1.append(drot)\n I1.append(irot)\n Dtit = 'Mode 1 Declinations'\n Itit = 'Mode 1 Inclinations'\n plt.figure(figsize=(6, 3))\n Mu_n, Mu_ncr = pmagplotlib.plot_qq_unf(\n QQ['unf'], D1, Dtit, subplot=True) # make plot\n Me_n, Me_ncr = pmagplotlib.plot_qq_exp(\n QQ['exp'], I1, Itit, subplot=True) # make plot\n plt.tight_layout()\n if Mu_n <= Mu_ncr and Me_n <= Me_ncr:\n F_n = 'consistent with Fisherian model'\n else:\n F_n = 'Fisherian model rejected'\n QQ_dict1['Mode'] = 'Mode 1'\n QQ_dict1['Dec'] = Dnbar\n QQ_dict1['Inc'] = Inbar\n QQ_dict1['N'] = Nn\n QQ_dict1['Mu'] = Mu_n\n QQ_dict1['Mu_critical'] = Mu_ncr\n QQ_dict1['Me'] = Me_n\n QQ_dict1['Me_critical'] = Me_ncr\n QQ_dict1['Test_result'] = F_n\n\n if len(rDIs) > 10 and len(nDIs) > 10:\n D2, I2 = [], []\n ppars = pmag.doprinc(rDIs) # get principal directions\n Drbar, Irbar = ppars['dec'] - 180., -ppars['inc']\n Nr = len(rDIs)\n for di in rDIs:\n d, irot = pmag.dotilt(\n di[0], di[1], Drbar - 180., 90. 
- Irbar) # rotate to mean\n drot = d - 180.\n if drot < 0:\n drot = drot + 360.\n D2.append(drot)\n I2.append(irot)\n Dtit = 'Mode 2 Declinations'\n Itit = 'Mode 2 Inclinations'\n plt.figure(figsize=(6, 3))\n Mu_r, Mu_rcr = pmagplotlib.plot_qq_unf(\n QQ['unf'], D2, Dtit, subplot=True) # make plot\n Me_r, Me_rcr = pmagplotlib.plot_qq_exp(\n QQ['exp'], I2, Itit, subplot=True) # make plot\n plt.tight_layout()\n\n if Mu_r <= Mu_rcr and Me_r <= Me_rcr:\n F_r = 'consistent with Fisherian model'\n else:\n F_r = 'Fisherian model rejected'\n QQ_dict2['Mode'] = 'Mode 2'\n QQ_dict2['Dec'] = Drbar\n QQ_dict2['Inc'] = Irbar\n QQ_dict2['N'] = Nr\n QQ_dict2['Mu'] = Mu_r\n QQ_dict2['Mu_critical'] = Mu_rcr\n QQ_dict2['Me'] = Me_r\n QQ_dict2['Me_critical'] = Me_rcr\n QQ_dict2['Test_result'] = F_r\n\n if QQ_dict2:\n return QQ_dict1, QQ_dict2\n elif QQ_dict1:\n return QQ_dict1\n else:\n print('you need N> 10 for at least one mode')", "response": "Test whether a directional data set is Fisherian and make corresponding Q-Q plots."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncalculate the paleolatitude from inclination using the dipole equation", "response": "def lat_from_inc(inc, a95=None):\n \"\"\"\n Calculate paleolatitude from inclination using the dipole equation\n\n Required Parameter\n ----------\n inc: (paleo)magnetic inclination in degrees\n\n Optional Parameter\n ----------\n a95: 95% confidence interval from Fisher mean\n\n Returns\n ----------\n if a95 is provided paleo_lat, paleo_lat_max, paleo_lat_min are returned\n otherwise, it just returns paleo_lat\n \"\"\"\n rad = old_div(np.pi, 180.)\n paleo_lat = old_div(np.arctan(0.5 * np.tan(inc * rad)), rad)\n if a95 is not None:\n paleo_lat_max = old_div(\n np.arctan(0.5 * np.tan((inc + a95) * rad)), rad)\n paleo_lat_min = old_div(\n np.arctan(0.5 * np.tan((inc - a95) * rad)), rad)\n return paleo_lat, paleo_lat_max, paleo_lat_min\n else:\n return paleo_lat"} {"SOURCE": "codesearchnet", 
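The dipole equation used in `lat_from_inc` above, tan(I) = 2 tan(lat), and its inverse can be sketched without the `old_div` compatibility helper; the `_simple` function names are illustrative, not part of the library:

```python
import numpy as np

def lat_from_inc_simple(inc):
    """Paleolatitude (degrees) from inclination via tan(I) = 2*tan(lat)."""
    return np.degrees(np.arctan(0.5 * np.tan(np.radians(inc))))

def inc_from_lat_simple(lat):
    """Expected dipole inclination (degrees) at a given latitude."""
    return np.degrees(np.arctan(2.0 * np.tan(np.radians(lat))))

print(round(lat_from_inc_simple(45.0), 3))  # tan(45) = 1, so lat = atan(0.5) = 26.565
```

The two functions are exact inverses of one another away from the poles, which makes for an easy round-trip check.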
"instruction": "Create a Python 3 function to\ncalculate the paleolatitude for a reference location based on a paleomagnetic pole", "response": "def lat_from_pole(ref_loc_lon, ref_loc_lat, pole_plon, pole_plat):\n \"\"\"\n Calculate paleolatitude for a reference location based on a paleomagnetic pole\n\n Required Parameters\n ----------\n ref_loc_lon: longitude of reference location in degrees\n ref_loc_lat: latitude of reference location\n pole_plon: paleopole longitude in degrees\n pole_plat: paleopole latitude in degrees\n \"\"\"\n\n ref_loc = (ref_loc_lon, ref_loc_lat)\n pole = (pole_plon, pole_plat)\n paleo_lat = 90 - pmag.angle(pole, ref_loc)\n return float(paleo_lat)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef inc_from_lat(lat):\n rad = old_div(np.pi, 180.)\n inc = old_div(np.arctan(2 * np.tan(lat * rad)), rad)\n return inc", "response": "Calculate inclination predicted from latitude using the dipole equation\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_net(fignum):\n\n# make the perimeter\n plt.figure(num=fignum,)\n plt.clf()\n plt.axis(\"off\")\n Dcirc = np.arange(0, 361.)\n Icirc = np.zeros(361, 'f')\n Xcirc, Ycirc = [], []\n for k in range(361):\n XY = pmag.dimap(Dcirc[k], Icirc[k])\n Xcirc.append(XY[0])\n Ycirc.append(XY[1])\n plt.plot(Xcirc, Ycirc, 'k')\n\n# put on the tick marks\n Xsym, Ysym = [], []\n for I in range(10, 100, 10):\n XY = pmag.dimap(0., I)\n Xsym.append(XY[0])\n Ysym.append(XY[1])\n plt.plot(Xsym, Ysym, 'k+')\n Xsym, Ysym = [], []\n for I in range(10, 90, 10):\n XY = pmag.dimap(90., I)\n Xsym.append(XY[0])\n Ysym.append(XY[1])\n plt.plot(Xsym, Ysym, 'k+')\n Xsym, Ysym = [], []\n for I in range(10, 90, 10):\n XY = pmag.dimap(180., I)\n Xsym.append(XY[0])\n Ysym.append(XY[1])\n plt.plot(Xsym, Ysym, 'k+')\n Xsym, Ysym = [], []\n for I in range(10, 90, 10):\n XY = pmag.dimap(270., I)\n 
Xsym.append(XY[0])\n Ysym.append(XY[1])\n plt.plot(Xsym, Ysym, 'k+')\n for D in range(0, 360, 10):\n Xtick, Ytick = [], []\n for I in range(4):\n XY = pmag.dimap(D, I)\n Xtick.append(XY[0])\n Ytick.append(XY[1])\n plt.plot(Xtick, Ytick, 'k')\n plt.axis(\"equal\")\n plt.axis((-1.05, 1.05, -1.05, 1.05))", "response": "Plots the perimeter and tick marks for equal area projection."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef plot_di(dec=None, inc=None, di_block=None, color='k', marker='o', markersize=20, legend='no', label='', title='', edge='',alpha=1):\n X_down = []\n X_up = []\n Y_down = []\n Y_up = []\n color_down = []\n color_up = []\n\n if di_block is not None:\n di_lists = unpack_di_block(di_block)\n if len(di_lists) == 3:\n dec, inc, intensity = di_lists\n if len(di_lists) == 2:\n dec, inc = di_lists\n try:\n length = len(dec)\n for n in range(len(dec)):\n XY = pmag.dimap(dec[n], inc[n])\n if inc[n] >= 0:\n X_down.append(XY[0])\n Y_down.append(XY[1])\n if type(color) == list:\n color_down.append(color[n])\n else:\n color_down.append(color)\n else:\n X_up.append(XY[0])\n Y_up.append(XY[1])\n if type(color) == list:\n color_up.append(color[n])\n else:\n color_up.append(color)\n except:\n XY = pmag.dimap(dec, inc)\n if inc >= 0:\n X_down.append(XY[0])\n Y_down.append(XY[1])\n color_down.append(color)\n else:\n X_up.append(XY[0])\n Y_up.append(XY[1])\n color_up.append(color)\n\n if len(X_up) > 0:\n plt.scatter(X_up, Y_up, facecolors='none', edgecolors=color_up,\n s=markersize, marker=marker, label=label,alpha=alpha)\n\n if len(X_down) > 0:\n plt.scatter(X_down, Y_down, facecolors=color_down, edgecolors=edge,\n s=markersize, marker=marker, label=label,alpha=alpha)\n if legend == 'yes':\n plt.legend(loc=2)\n plt.tight_layout()\n if title != \"\":\n plt.title(title)", "response": "This function plots declination inclination data on an equal area plot."} {"SOURCE": "codesearchnet", "instruction": "Given 
the following Python 3 function, write the documentation\ndef plot_di_mean(dec, inc, a95, color='k', marker='o', markersize=20, label='', legend='no'):\n DI_dimap = pmag.dimap(dec, inc)\n if inc < 0:\n plt.scatter(DI_dimap[0], DI_dimap[1],\n edgecolors=color, facecolors='white',\n marker=marker, s=markersize, label=label)\n if inc >= 0:\n plt.scatter(DI_dimap[0], DI_dimap[1],\n edgecolors=color, facecolors=color,\n marker=marker, s=markersize, label=label)\n Xcirc, Ycirc = [], []\n Da95, Ia95 = pmag.circ(dec, inc, a95)\n if legend == 'yes':\n plt.legend(loc=2)\n for k in range(len(Da95)):\n XY = pmag.dimap(Da95[k], Ia95[k])\n Xcirc.append(XY[0])\n Ycirc.append(XY[1])\n plt.plot(Xcirc, Ycirc, c=color)\n plt.tight_layout()", "response": "Plots a mean direction (declination, inclination) with its alpha_95 confidence ellipse on an equal area plot."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef plot_di_mean_bingham(bingham_dictionary, fignum=1, color='k', marker='o', markersize=20, label='', legend='no'):\n plot_di_mean_ellipse(bingham_dictionary, fignum=fignum, color=color,\n marker=marker, markersize=markersize, label=label, legend=legend)", "response": "Plots the mean direction and confidence ellipse from a Bingham statistics dictionary."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef plot_di_mean_ellipse(dictionary, fignum=1, color='k', marker='o', markersize=20, label='', legend='no'):\n pars = []\n pars.append(dictionary['dec'])\n pars.append(dictionary['inc'])\n pars.append(dictionary['Zeta'])\n pars.append(dictionary['Zdec'])\n pars.append(dictionary['Zinc'])\n pars.append(dictionary['Eta'])\n pars.append(dictionary['Edec'])\n pars.append(dictionary['Einc'])\n\n DI_dimap = pmag.dimap(dictionary['dec'], dictionary['inc'])\n if dictionary['inc'] < 0:\n plt.scatter(DI_dimap[0], DI_dimap[1],\n edgecolors=color, facecolors='white',\n marker=marker, s=markersize, label=label)\n if dictionary['inc'] >= 0:\n 
plt.scatter(DI_dimap[0], DI_dimap[1],\n edgecolors=color, facecolors=color,\n marker=marker, s=markersize, label=label)\n pmagplotlib.plot_ell(fignum, pars, color, 0, 1)", "response": "Plots a confidence ellipse of the di_mean_ellipse of the given dictionary."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfunctioning creates and returns an orthographic map projection using cartopy", "response": "def make_orthographic_map(central_longitude=0, central_latitude=0, figsize=(8, 8),\n add_land=True, land_color='tan', add_ocean=False, ocean_color='lightblue', grid_lines=True,\n lat_grid=[-80., -60., -30.,\n 0., 30., 60., 80.],\n lon_grid=[-180., -150., -120., -90., -60., -30., 0., 30., 60., 90., 120., 150., 180.]):\n '''\n Function creates and returns an orthographic map projection using cartopy\n\n Example\n -------\n >>> map_axis = make_orthographic_map(central_longitude=200,central_latitude=30)\n\n Optional Parameters\n -----------\n central_longitude : central longitude of projection (default is 0)\n central_latitude : central latitude of projection (default is 0)\n figsize : size of the figure (default is 8x8)\n add_land : chose whether land is plotted on map (default is true)\n land_color : specify land color (default is 'tan')\n add_ocean : chose whether land is plotted on map (default is False, change to True to plot)\n ocean_color : specify ocean color (default is 'lightblue')\n grid_lines : chose whether gird lines are plotted on map (default is true)\n lat_grid : specify the latitude grid (default is 30 degree spacing)\n lon_grid : specify the longitude grid (default is 30 degree spacing)\n '''\n if not has_cartopy:\n print('-W- cartopy must be installed to run ipmag.make_orthographic_map')\n return\n fig = plt.figure(figsize=figsize)\n map_projection = ccrs.Orthographic(\n central_longitude=central_longitude, central_latitude=central_latitude)\n ax = plt.axes(projection=map_projection)\n ax.set_global()\n if add_ocean == True:\n 
ax.add_feature(cartopy.feature.OCEAN, zorder=0, facecolor=ocean_color)\n if add_land == True:\n ax.add_feature(cartopy.feature.LAND, zorder=0,\n facecolor=land_color, edgecolor='black')\n if grid_lines == True:\n ax.gridlines(xlocs=lon_grid, ylocs=lat_grid, linewidth=1,\n color='black', linestyle='dotted')\n return ax"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef plot_pole_basemap(mapname, plon, plat, A95, label='', color='k', edgecolor='k', marker='o', markersize=20, legend='no'):\n centerlon, centerlat = mapname(plon, plat)\n A95_km = A95 * 111.32\n mapname.scatter(centerlon, centerlat, marker=marker,\n color=color, edgecolors=edgecolor, s=markersize, label=label, zorder=101)\n equi_basemap(mapname, plon, plat, A95_km, color)\n if legend == 'yes':\n plt.legend(loc=2)", "response": "This function plots a paleomagnetic pole and A95 error ellipse on the current map."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef plot_pole_dp_dm(map_axis, plon, plat, slon, slat, dp, dm, pole_label='pole', site_label='site',\n pole_color='k', pole_edgecolor='k', pole_marker='o',\n site_color='r', site_edgecolor='r', site_marker='s',\n markersize=20, legend=True):\n \"\"\"\n This function plots a paleomagnetic pole and a dp/dm confidence ellipse on a cartopy map axis.\n\n Before this function is called, a plot needs to be initialized with code\n such as that in the make_orthographic_map function.\n\n Example\n -------\n >>> dec = 280\n >>> inc = 45\n >>> a95 = 5\n >>> site_lat = 45\n >>> site_lon = -100\n >>> pole = pmag.dia_vgp(dec, inc, a95, site_lat, site_lon)\n >>> pole_lon = pole[0]\n >>> pole_lat = pole[1]\n >>> dp = pole[2]\n >>> dm = pole[3]\n\n >>> map_axis = ipmag.make_orthographic_map(central_longitude=200,central_latitude=30)\n >>> ipmag.plot_pole_dp_dm(map_axis,pole_lon,pole_lat,site_lon,site_lat,dp,dm)\n\n Required Parameters\n -----------\n map_axis : the 
name of the current map axis that has been developed using cartopy\n plon : the longitude of the paleomagnetic pole being plotted (in degrees E)\n plat : the latitude of the paleomagnetic pole being plotted (in degrees)\n slon : the longitude of the site (in degrees E)\n slat : the latitude of the site (in degrees)\n dp : the semi-minor axis of the confidence ellipse (in degrees)\n dm : the semi-major axis of the confidence ellipse (in degrees)\n\n Optional Parameters (defaults are used if not specified)\n -----------\n pole_color : the default color is black. Other colors can be chosen (e.g. 'g')\n site_color : the default color is red. Other colors can be chosen (e.g. 'g')\n pole_marker : the default is a circle. Other symbols can be chosen (e.g. 's')\n site_marker : the default is a square. Other symbols can be chosen (e.g. '^')\n markersize : the default is 20. Other size can be chosen\n pole_label : string that labels the pole.\n site_label : string that labels the site\n legend : the default is a legend (True). Putting False will suppress legend plotting.\n \"\"\"\n if not has_cartopy:\n print('-W- cartopy must be installed to run ipmag.plot_pole_dp_dm')\n return\n dp_km = dp*111.32\n dm_km = dm*111.32\n map_axis.scatter(plon, plat, marker=pole_marker,\n color=pole_color, edgecolors=pole_edgecolor, s=markersize,\n label=pole_label, zorder=101, transform=ccrs.Geodetic())\n map_axis.scatter(slon, slat, marker=site_marker,\n color=site_color, edgecolors=site_edgecolor, s=markersize,\n label=site_label, zorder=101, transform=ccrs.Geodetic())\n # the orientation of the ellipse needs to be determined using the\n # two laws of cosines for spherical triangles where the triangle is\n # A: site, B: north pole, C: paleomagnetic pole (see Fig. 
A.2 of Butler)\n\n site_lon_rad = np.deg2rad(slon)\n site_lat_rad = np.deg2rad(slat)\n c_rad = np.deg2rad(90-slat)\n\n pole_lon_rad = np.deg2rad(plon)\n pole_lat_rad = np.deg2rad(plat)\n a_rad = np.deg2rad(90-plat)\n\n B_rad = np.abs(pole_lon_rad-site_lon_rad)\n\n cos_b = np.cos(c_rad)*np.cos(a_rad) + np.sin(c_rad) * \\\n np.sin(a_rad)*np.cos(B_rad)\n b_rad = np.arccos(cos_b)\n\n sin_C = (np.sin(B_rad)/np.sin(b_rad))*np.sin(c_rad)\n C_rad = np.arcsin(sin_C)\n\n # need to make the rotation of the ellipse go the right way\n if slon-plon > 180:\n if plon >= slon and plat >= slat:\n C_deg = -np.abs(np.rad2deg(C_rad))\n elif plon <= slon and plat >= slat:\n C_deg = np.abs(np.rad2deg(C_rad))\n elif plon >= slon and plat <= slat:\n C_deg = np.abs(np.rad2deg(C_rad))\n elif plon <= slon and plat <= slat:\n C_deg = -np.abs(np.rad2deg(C_rad))\n elif slon-plon <= 180:\n if plon >= slon and plat >= slat:\n C_deg = np.abs(np.rad2deg(C_rad))\n elif plon <= slon and plat >= slat:\n C_deg = -np.abs(np.rad2deg(C_rad))\n elif plon >= slon and plat <= slat:\n C_deg = -np.abs(np.rad2deg(C_rad))\n elif plon <= slon and plat <= slat:\n C_deg = np.abs(np.rad2deg(C_rad))\n\n print(C_deg)\n ellipse(map_axis, plon, plat, dp_km, dm_km, C_deg)\n\n if legend == True:\n plt.legend(loc=2)", "response": "This function plots a paleomagnetic pole and a semi - major axis of the confidence ellipse on a cartopy map axis."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef plot_vgp_basemap(mapname, vgp_lon=None, vgp_lat=None, di_block=None, label='', color='k', marker='o', markersize=20, legend='no'):\n if di_block != None:\n di_lists = unpack_di_block(di_block)\n if len(di_lists) == 3:\n vgp_lon, vgp_lat, intensity = di_lists\n if len(di_lists) == 2:\n vgp_lon, vgp_lat = di_lists\n centerlon, centerlat = mapname(vgp_lon, vgp_lat)\n mapname.scatter(centerlon, centerlat, marker=marker,\n s=markersize, color=color, label=label, 
zorder=100)\n if legend == 'yes':\n plt.legend(loc=2)", "response": "This function plots a paleomagnetic pole on the current map."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef vgp_calc(dataframe, tilt_correction='yes', site_lon='site_lon', site_lat='site_lat', dec_is='dec_is', inc_is='inc_is', dec_tc='dec_tc', inc_tc='inc_tc'):\n dataframe.is_copy = False\n if tilt_correction == 'yes':\n # calculate the paleolatitude/colatitude\n dataframe['paleolatitude'] = np.degrees(\n np.arctan(0.5 * np.tan(np.radians(dataframe[inc_tc]))))\n dataframe['colatitude'] = 90 - dataframe['paleolatitude']\n # calculate the latitude of the pole\n dataframe['vgp_lat'] = np.degrees(np.arcsin(np.sin(np.radians(dataframe[site_lat])) *\n np.cos(np.radians(dataframe['colatitude'])) +\n np.cos(np.radians(dataframe[site_lat])) *\n np.sin(np.radians(dataframe['colatitude'])) *\n np.cos(np.radians(dataframe[dec_tc]))))\n # calculate the longitudinal difference between the pole and the site\n # (beta)\n dataframe['beta'] = np.degrees(np.arcsin(old_div((np.sin(np.radians(dataframe['colatitude'])) *\n np.sin(np.radians(dataframe[dec_tc]))),\n (np.cos(np.radians(dataframe['vgp_lat']))))))\n # generate a boolean array (mask) to use to distinguish between the two possibilities for pole longitude\n # and then calculate pole longitude using the site location and\n # calculated beta\n mask = np.cos(np.radians(dataframe['colatitude'])) > np.sin(\n np.radians(dataframe[site_lat])) * np.sin(np.radians(dataframe['vgp_lat']))\n dataframe['vgp_lon'] = np.where(mask, (dataframe[site_lon] + dataframe['beta']) %\n 360., (dataframe[site_lon] + 180 - dataframe['beta']) % 360.)\n # calculate the antipode of the poles\n dataframe['vgp_lat_rev'] = -dataframe['vgp_lat']\n dataframe['vgp_lon_rev'] = (dataframe['vgp_lon'] - 180.) 
% 360.\n # the 'colatitude' and 'beta' columns were created for the purposes of the pole calculations\n # but aren't of further use and are deleted\n del dataframe['colatitude']\n del dataframe['beta']\n if tilt_correction == 'no':\n # calculate the paleolatitude/colatitude\n dataframe['paleolatitude'] = np.degrees(\n np.arctan(0.5 * np.tan(np.radians(dataframe[inc_is]))))\n dataframe['colatitude'] = 90 - dataframe['paleolatitude']\n # calculate the latitude of the pole\n dataframe['vgp_lat'] = np.degrees(np.arcsin(np.sin(np.radians(dataframe[site_lat])) *\n np.cos(np.radians(dataframe['colatitude'])) +\n np.cos(np.radians(dataframe[site_lat])) *\n np.sin(np.radians(dataframe['colatitude'])) *\n np.cos(np.radians(dataframe[dec_is]))))\n # calculate the longitudinal difference between the pole and the site\n # (beta)\n dataframe['beta'] = np.degrees(np.arcsin(old_div((np.sin(np.radians(dataframe['colatitude'])) *\n np.sin(np.radians(dataframe[dec_is]))),\n (np.cos(np.radians(dataframe['vgp_lat']))))))\n # generate a boolean array (mask) to use to distinguish between the two possibilities for pole longitude\n # and then calculate pole longitude using the site location and\n # calculated beta\n mask = np.cos(np.radians(dataframe['colatitude'])) > np.sin(\n np.radians(dataframe[site_lat])) * np.sin(np.radians(dataframe['vgp_lat']))\n dataframe['vgp_lon'] = np.where(mask, (dataframe[site_lon] + dataframe['beta']) %\n 360., (dataframe[site_lon] + 180 - dataframe['beta']) % 360.)\n # calculate the antipode of the poles\n dataframe['vgp_lat_rev'] = -dataframe['vgp_lat']\n dataframe['vgp_lon_rev'] = (dataframe['vgp_lon'] - 180.) % 360.\n # the 'colatitude' and 'beta' columns were created for the purposes of the pole calculations\n # but aren't of further use and are deleted\n del dataframe['colatitude']\n del dataframe['beta']\n return(dataframe)", "response": "Calculates paleomagnetic poles using directional data and site location data within a pandas. 
DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_diddd_array(dec, inc, dip_direction, dip):\n diddd_block = []\n for n in range(0, len(dec)):\n diddd_block.append([dec[n], inc[n], dip_direction[n], dip[n]])\n diddd_array = np.array(diddd_block)\n return diddd_array", "response": "This function creates a numpy array of [dec, inc, dip_direction, dip] rows."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ellipse(map_axis, centerlon, centerlat, major_axis, minor_axis, angle, n=360, filled=False, **kwargs):\n if not has_cartopy:\n print('-W- cartopy must be installed to run ipmag.ellipse')\n return False\n angle = angle*(np.pi/180)\n glon1 = centerlon\n glat1 = centerlat\n X = []\n Y = []\n for azimuth in np.linspace(0, 360, n):\n az_rad = azimuth*(np.pi/180)\n radius = ((major_axis*minor_axis)/(((minor_axis*np.cos(az_rad-angle))\n ** 2 + (major_axis*np.sin(az_rad-angle))**2)**.5))\n glon2, glat2, baz = shoot(glon1, glat1, azimuth, radius)\n X.append((360+glon2) % 360)\n Y.append(glat2)\n X.append(X[0])\n Y.append(Y[0])\n\n if filled:\n ellip = np.array((X, Y)).T\n ellip = map_axis.projection.transform_points(\n ccrs.PlateCarree(), ellip[:, 0], ellip[:, 1])\n poly = Polygon(ellip[:, :2], **kwargs)\n map_axis.add_patch(poly)\n else:\n try:\n map_axis.plot(X, Y, transform=ccrs.Geodetic(), **kwargs)\n return True\n except ValueError:\n return False", "response": "This function creates an ellipse and draws it on the map object, either filled or as an outline, passing keyword arguments through to matplotlib."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncombines a list of MagIC formatted files into one.", "response": "def combine_magic(filenames, outfile='measurements.txt', data_model=3, magic_table='measurements',\n dir_path=\".\", input_dir_path=\"\"):\n \"\"\"\n Takes a list of magic-formatted files, concatenates them, and creates a\n single file. 
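For the MagIC 3 path, the core of this concatenation is plain pandas. A minimal, self-contained sketch, with toy in-memory "files" invented for illustration (real MagIC files carry a one-line `tab delimited<TAB><table>` header above the column names, which is why `header=1` is used):

```python
import io

import pandas as pd

# Toy stand-ins for two MagIC-format files: line 1 is the
# "tab delimited<TAB><table name>" header, line 2 holds the column
# names -- hence header=1 when reading, as combine_magic does.
f1 = io.StringIO("tab delimited\tmeasurements\nspecimen\ttreat_ac_field\ns1\t0.01\n")
f2 = io.StringIO("tab delimited\tmeasurements\nspecimen\ttreat_ac_field\ns2\t0.02\n")

frames = [pd.read_csv(f, sep="\t", header=1) for f in (f1, f2)]
combined = pd.concat(frames, ignore_index=True, sort=True)
combined = combined.drop_duplicates()  # drop fully duplicated rows
```

Writing `combined` back out with `DataFrame.to_csv(..., sep='\t')` beneath a regenerated two-line header would complete the round trip.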
Returns output filename if the operation was successful.\n\n Parameters\n -----------\n filenames : list of MagIC formatted files\n outfile : name of output file [e.g., measurements.txt]\n data_model : data model number (2.5 or 3), default 3\n magic_table : name of magic table, default 'measurements'\n dir_path : str\n output directory, default \".\"\n input_dir_path : str\n input file directory (if different from dir_path), default \"\"\n\n\n Returns\n ----------\n outfile name if success, False if failure\n \"\"\"\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n if float(data_model) == 3.0:\n outfile = pmag.resolve_file_name(outfile, output_dir_path)\n output_dir_path, file_name = os.path.split(outfile)\n con = cb.Contribution(output_dir_path, read_tables=[])\n # make sure files actually exist\n filenames = [pmag.resolve_file_name(f, input_dir_path) for f in filenames]\n #filenames = [os.path.realpath(f) for f in filenames]\n filenames = [f for f in filenames if os.path.exists(f)]\n if not filenames:\n print(\"You have provided no valid file paths, so nothing will be combined\".format(\n magic_table))\n return False\n # figure out file type from first of files to join\n with open(filenames[0]) as f:\n file_type = f.readline().split()[1]\n if file_type in ['er_specimens', 'er_samples', 'er_sites',\n 'er_locations', 'er_ages', 'pmag_specimens',\n 'pmag_samples', 'pmag_sites', 'pmag_results',\n 'magic_measurements', 'rmag_anisotropy',\n 'rmag_results', 'rmag_specimens']:\n print(\n '-W- You are working in MagIC 3 but have provided a MagIC 2.5 file: {}'.format(file_type))\n return False\n if file_type not in con.table_names:\n file_type = magic_table\n infiles = [pd.read_csv(infile, sep='\\t', header=1)\n for infile in filenames]\n df = pd.concat(infiles, ignore_index=True, sort=True)\n # drop any fully duplicated rows\n df.drop_duplicates(inplace=True)\n con.add_magic_table(dtype=file_type, df=df)\n # drop any mostly empty rows IF 
they have duplicate index\n parent, child = con.get_parent_and_child(file_type)\n ignore_cols = [col[:-1] for col in [file_type, parent] if col]\n ignore_cols.extend(['software_packages', 'citations'])\n con.tables[file_type].drop_duplicate_rows(ignore_cols)\n # correctly handle measurements.sequence column\n if 'sequence' in con.tables[file_type].df:\n con.tables[file_type].df['sequence'] = range(1, len(con.tables[file_type].df) + 1)\n # write table to file, use custom name\n res = con.write_table_to_file(file_type, custom_name=file_name)\n return res\n else:\n datasets = []\n if not filenames:\n print(\"You must provide at least one file\")\n return False\n for infile in filenames:\n if not os.path.isfile(infile):\n print(\"{} is not a valid file name\".format(infile))\n return False\n try:\n dataset, file_type = pmag.magic_read(infile)\n except IndexError:\n print('-W- Could not get records from {}'.format(infile))\n print(' Skipping...')\n continue\n print(\"File \", infile, \" read in with \", len(dataset), \" records\")\n for rec in dataset:\n datasets.append(rec)\n\n Recs, keys = pmag.fillkeys(datasets)\n if Recs:\n pmag.magic_write(outfile, Recs, file_type)\n print(\"All records stored in \", outfile)\n return outfile\n print(\"No file could be created\")\n return False"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn matplotlib figure with anisotropy data plotted against depth available depth scales: 'sample_composite_depth', 'sample_core_depth', or 'age' (you must provide an age file to use this option)", "response": "def ani_depthplot2(ani_file='rmag_anisotropy.txt', meas_file='magic_measurements.txt', samp_file='er_samples.txt', age_file=None, sum_file=None, fmt='svg', dmin=-1, dmax=-1, depth_scale='sample_core_depth', dir_path='.'):\n \"\"\"\n returns matplotlib figure with anisotropy data plotted against depth\n available depth scales: 'sample_composite_depth', 'sample_core_depth', or 'age' (you must 
provide an age file to use this option)\n\n \"\"\"\n pcol = 4\n tint = 9\n plots = 0\n\n # format files to use full path\n # os.path.join(dir_path, ani_file)\n ani_file = pmag.resolve_file_name(ani_file, dir_path)\n if not os.path.isfile(ani_file):\n print(\"Could not find rmag_anisotropy type file: {}.\\nPlease provide a valid file path and try again\".format(ani_file))\n return False, \"Could not find rmag_anisotropy type file: {}.\\nPlease provide a valid file path and try again\".format(ani_file)\n\n # os.path.join(dir_path, meas_file)\n meas_file = pmag.resolve_file_name(meas_file, dir_path)\n\n if age_file:\n if not os.path.isfile(age_file):\n print(\n 'Warning: you have provided an invalid age file. Attempting to use sample file instead')\n age_file = None\n depth_scale = 'sample_core_depth'\n # os.path.join(dir_path, samp_file)\n samp_file = pmag.resolve_file_name(samp_file, dir_path)\n else:\n # os.path.join(dir_path, age_file)\n samp_file = pmag.resolve_file_name(samp_file, dir_path)\n depth_scale = 'age'\n print(\n 'Warning: you have provided an er_ages format file, which will take precedence over er_samples')\n else:\n samp_file = pmag.resolve_file_name(samp_file, dir_path)\n\n label = 1\n\n if sum_file:\n sum_file = os.path.join(dir_path, sum_file)\n\n dmin, dmax = float(dmin), float(dmax)\n\n # get data read in\n isbulk = 0 # tests if there are bulk susceptibility measurements\n AniData, file_type = pmag.magic_read(ani_file) # read in tensor elements\n if not age_file:\n # read in sample depth info from er_sample.txt format file\n Samps, file_type = pmag.magic_read(samp_file)\n else:\n # read in sample age info from er_ages.txt format file\n Samps, file_type = pmag.magic_read(samp_file)\n age_unit = Samps[0]['age_unit']\n for s in Samps:\n # change to upper case for every sample name\n s['er_sample_name'] = s['er_sample_name'].upper()\n Meas, file_type = pmag.magic_read(meas_file)\n # print 'meas_file', meas_file\n # print 'file_type', file_type\n if 
file_type == 'magic_measurements':\n isbulk = 1\n Data = []\n Bulks = []\n BulkDepths = []\n for rec in AniData:\n # look for depth record for this sample\n samprecs = pmag.get_dictitem(Samps, 'er_sample_name',\n rec['er_sample_name'].upper(), 'T')\n # see if there are non-blank depth data\n sampdepths = pmag.get_dictitem(samprecs, depth_scale, '', 'F')\n if dmax != -1:\n # fishes out records within depth bounds\n sampdepths = pmag.get_dictitem(\n sampdepths, depth_scale, dmax, 'max')\n sampdepths = pmag.get_dictitem(\n sampdepths, depth_scale, dmin, 'min')\n if len(sampdepths) > 0: # if there are any....\n # set the core depth of this record\n rec['core_depth'] = sampdepths[0][depth_scale]\n Data.append(rec) # fish out data with core_depth\n if isbulk: # if there are bulk data\n chis = pmag.get_dictitem(\n Meas, 'er_specimen_name', rec['er_specimen_name'], 'T')\n # get the non-zero values for this specimen\n chis = pmag.get_dictitem(\n chis, 'measurement_chi_volume', '', 'F')\n if len(chis) > 0: # if there are any....\n # put in microSI\n Bulks.append(\n 1e6 * float(chis[0]['measurement_chi_volume']))\n BulkDepths.append(float(sampdepths[0][depth_scale]))\n if len(Bulks) > 0: # set min and max bulk values\n bmin = min(Bulks)\n bmax = max(Bulks)\n xlab = \"Depth (m)\"\n if len(Data) > 0:\n location = Data[0]['er_location_name']\n else:\n return False, 'no data to plot'\n # collect the data for plotting tau V3_inc and V1_dec\n Depths, Tau1, Tau2, Tau3, V3Incs, P, V1Decs = [], [], [], [], [], [], []\n F23s = []\n Axs = [] # collect the plot ids\n# START HERE\n if len(Bulks) > 0:\n pcol += 1\n # get all the s1 values from Data as floats\n s1 = pmag.get_dictkey(Data, 'anisotropy_s1', 'f')\n s2 = pmag.get_dictkey(Data, 'anisotropy_s2', 'f')\n s3 = pmag.get_dictkey(Data, 'anisotropy_s3', 'f')\n s4 = pmag.get_dictkey(Data, 'anisotropy_s4', 'f')\n s5 = pmag.get_dictkey(Data, 'anisotropy_s5', 'f')\n s6 = pmag.get_dictkey(Data, 'anisotropy_s6', 'f')\n nmeas = 
pmag.get_dictkey(Data, 'anisotropy_n', 'int')\n sigma = pmag.get_dictkey(Data, 'anisotropy_sigma', 'f')\n Depths = pmag.get_dictkey(Data, 'core_depth', 'f')\n # Ss=np.array([s1,s4,s5,s4,s2,s6,s5,s6,s3]).transpose() # make an array\n Ss = np.array([s1, s2, s3, s4, s5, s6]).transpose() # make an array\n # Ts=np.reshape(Ss,(len(Ss),3,-1)) # and re-shape to be n-length array of\n # 3x3 sub-arrays\n for k in range(len(Depths)):\n # tau,Evecs= pmag.tauV(Ts[k]) # get the sorted eigenvalues and eigenvectors\n # v3=pmag.cart2dir(Evecs[2])[1] # convert to inclination of the minimum\n # eigenvector\n fpars = pmag.dohext(nmeas[k] - 6, sigma[k], Ss[k])\n V3Incs.append(fpars['v3_inc'])\n V1Decs.append(fpars['v1_dec'])\n Tau1.append(fpars['t1'])\n Tau2.append(fpars['t2'])\n Tau3.append(fpars['t3'])\n P.append(old_div(Tau1[-1], Tau3[-1]))\n F23s.append(fpars['F23'])\n if len(Depths) > 0:\n if dmax == -1:\n dmax = max(Depths)\n dmin = min(Depths)\n tau_min = 1\n for t in Tau3:\n if t > 0 and t < tau_min:\n tau_min = t\n tau_max = max(Tau1)\n # tau_min=min(Tau3)\n P_max = max(P)\n P_min = min(P)\n # dmax=dmax+.05*dmax\n # dmin=dmin-.05*dmax\n\n main_plot = plt.figure(1, figsize=(10, 8)) # make the figure\n\n version_num = pmag.get_version()\n plt.figtext(.02, .01, version_num) # attach the pmagpy version number\n ax = plt.subplot(1, pcol, 1) # make the first column\n Axs.append(ax)\n ax.plot(Tau1, Depths, 'rs')\n ax.plot(Tau2, Depths, 'b^')\n ax.plot(Tau3, Depths, 'ko')\n if sum_file:\n core_depth_key, core_label_key, Cores = read_core_csv_file(\n sum_file)\n for core in Cores:\n depth = float(core[core_depth_key])\n if depth > dmin and depth < dmax:\n plt.plot([0, 90], [depth, depth], 'b--')\n ax.axis([tau_min, tau_max, dmax, dmin])\n ax.set_xlabel('Eigenvalues')\n if depth_scale == 'sample_core_depth':\n ax.set_ylabel('Depth (mbsf)')\n elif depth_scale == 'age':\n ax.set_ylabel('Age (' + age_unit + ')')\n else:\n ax.set_ylabel('Depth (mcd)')\n ax2 = plt.subplot(1, pcol, 2) # make 
the second column\n ax2.plot(P, Depths, 'rs')\n ax2.axis([P_min, P_max, dmax, dmin])\n ax2.set_xlabel('P')\n ax2.set_title(location)\n if sum_file:\n for core in Cores:\n depth = float(core[core_depth_key])\n if depth > dmin and depth < dmax:\n plt.plot([0, 90], [depth, depth], 'b--')\n Axs.append(ax2)\n ax3 = plt.subplot(1, pcol, 3)\n Axs.append(ax3)\n ax3.plot(V3Incs, Depths, 'ko')\n ax3.axis([0, 90, dmax, dmin])\n ax3.set_xlabel('V3 Inclination')\n if sum_file:\n for core in Cores:\n depth = float(core[core_depth_key])\n if depth > dmin and depth < dmax:\n plt.plot([0, 90], [depth, depth], 'b--')\n ax4 = plt.subplot(1, np.abs(pcol), 4)\n Axs.append(ax4)\n ax4.plot(V1Decs, Depths, 'rs')\n ax4.axis([0, 360, dmax, dmin])\n ax4.set_xlabel('V1 Declination')\n if sum_file:\n for core in Cores:\n depth = float(core[core_depth_key])\n if depth >= dmin and depth <= dmax:\n plt.plot([0, 360], [depth, depth], 'b--')\n if pcol == 4 and label == 1:\n plt.text(360, depth + tint, core[core_label_key])\n # ax5=plt.subplot(1,np.abs(pcol),5)\n # Axs.append(ax5)\n # ax5.plot(F23s,Depths,'rs')\n # bounds=ax5.axis()\n # ax5.axis([bounds[0],bounds[1],dmax,dmin])\n # ax5.set_xlabel('F_23')\n # ax5.semilogx()\n # if sum_file:\n # for core in Cores:\n # depth=float(core[core_depth_key])\n # if depth>=dmin and depth<=dmax:\n # plt.plot([bounds[0],bounds[1]],[depth,depth],'b--')\n # if pcol==5 and label==1:plt.text(bounds[1],depth+tint,core[core_label_key])\n # if pcol==6:\n if pcol == 5:\n # ax6=plt.subplot(1,pcol,6)\n ax6 = plt.subplot(1, pcol, 5)\n Axs.append(ax6)\n ax6.plot(Bulks, BulkDepths, 'bo')\n ax6.axis([bmin - 1, 1.1 * bmax, dmax, dmin])\n ax6.set_xlabel('Bulk Susc. 
(uSI)')\n if sum_file:\n for core in Cores:\n depth = float(core[core_depth_key])\n if depth >= dmin and depth <= dmax:\n plt.plot([0, bmax], [depth, depth], 'b--')\n if label == 1:\n plt.text(1.1 * bmax, depth + tint,\n core[core_label_key])\n for x in Axs:\n # this makes the x-tick labels more reasonable - they were\n # overcrowded using the defaults\n pmagplotlib.delticks(x)\n fig_name = location + '_ani_depthplot.' + fmt\n return main_plot, fig_name\n else:\n return False, \"No data to plot\""} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ani_depthplot(spec_file='specimens.txt', samp_file='samples.txt',\n meas_file='measurements.txt', site_file='sites.txt',\n age_file=\"\", sum_file=\"\", fmt='svg', dmin=-1, dmax=-1,\n depth_scale='core_depth', dir_path='.', contribution=None):\n \"\"\"\n returns matplotlib figure with anisotropy data plotted against depth\n available depth scales: 'composite_depth', 'core_depth' or 'age' (you must provide an age file to use this option).\n You must provide valid specimens and sites files, and either a samples or an ages file.\n You may additionally provide measurements and a summary file (csv).\n\n Parameters\n ----------\n spec_file : str, default \"specimens.txt\"\n samp_file : str, default \"samples.txt\"\n meas_file : str, default \"measurements.txt\"\n site_file : str, default \"sites.txt\"\n age_file : str, default \"\"\n sum_file : str, default \"\"\n fmt : str, default \"svg\"\n format for figures, [\"svg\", \"jpg\", \"pdf\", \"png\"]\n dmin : number, default -1\n minimum depth to plot (if -1, default to plotting all)\n dmax : number, default -1\n maximum depth to plot (if -1, default to plotting all)\n depth_scale : str, default \"core_depth\"\n scale to plot, ['composite_depth', 'core_depth', 'age'].\n if 'age' is selected, you must provide an ages file.\n dir_path : str, default \".\"\n directory for input files\n contribution : cb.Contribution, default 
None\n if provided, use Contribution object instead of reading in\n data from files\n\n Returns\n ---------\n plot : matplotlib plot, or False if no plot could be created\n name : figure name, or error message if no plot could be created\n \"\"\"\n if depth_scale == 'sample_core_depth':\n depth_scale = 'core_depth'\n if depth_scale == 'sample_composite_depth':\n depth_scale = 'composite_depth'\n\n pcol = 4\n tint = 9\n plots = 0\n\n dmin, dmax = float(dmin), float(dmax)\n\n # if contribution object is not provided, read in data from files\n if isinstance(contribution, cb.Contribution):\n con = contribution\n else:\n # format files to use full path\n meas_file = pmag.resolve_file_name(meas_file, dir_path)\n spec_file = pmag.resolve_file_name(spec_file, dir_path)\n samp_file = pmag.resolve_file_name(samp_file, dir_path)\n site_file = pmag.resolve_file_name(site_file, dir_path)\n\n if age_file:\n age_file = pmag.resolve_file_name(age_file, dir_path)\n if not os.path.isfile(age_file):\n print(\n 'Warning: you have provided an invalid age file. 
Attempting to use sample file instead')\n age_file = None\n depth_scale = 'core_depth'\n else:\n samp_file = age_file\n depth_scale = 'age'\n print(\n 'Warning: you have provided an ages format file, which will take precedence over samples')\n\n samp_file = pmag.resolve_file_name(samp_file, dir_path)\n\n label = 1\n\n if sum_file:\n sum_file = pmag.resolve_file_name(sum_file, dir_path)\n core_df=pd.read_csv(sum_file)\n depths=core_df['Top depth cored CSF (m)'].values\n\n # contribution\n\n dir_path = os.path.split(spec_file)[0]\n tables = ['measurements', 'specimens', 'samples', 'sites']\n con = cb.Contribution(dir_path, read_tables=tables,\n custom_filenames={'measurements': meas_file, 'specimens': spec_file,\n 'samples': samp_file, 'sites': site_file})\n\n\n\n for ftype in ['specimens', 'samples', 'sites']:\n if not con.tables.get(ftype):\n if ftype == 'samples':\n if con.tables.get('ages'):\n depth_scale = 'age'\n continue\n print(\"-W- This function requires a {} file to run.\".format(ftype))\n print(\" Make sure you include one in your working directory\")\n return False, \"missing required file type: {}\".format(ftype)\n\n\n # propagate needed values\n con.propagate_cols(['core_depth'], 'samples', 'sites')\n con.propagate_location_to_specimens()\n\n # get data read in\n isbulk = 0 # tests if there are bulk susceptibility measurements\n ani_file = spec_file\n\n SampData = con.tables['samples'].df\n AniData = con.tables['specimens'].df\n # add sample into specimens (AniData)\n AniData = pd.merge(\n AniData, SampData[['sample', depth_scale]], how='inner', on='sample')\n # trim down AniData\n cond = AniData[depth_scale].astype(bool)\n AniData = AniData[cond]\n if dmin != -1:\n AniData = AniData[AniData[depth_scale] < dmax]\n if dmax != -1:\n AniData = AniData[AniData[depth_scale] > dmin]\n AniData['core_depth'] = AniData[depth_scale]\n\n if not age_file:\n Samps = con.tables['samples'].convert_to_pmag_data_list()\n else:\n con.add_magic_table(dtype='ages', 
fname=age_file)\n Samps = con.tables['ages'].convert_to_pmag_data_list()\n # get age unit\n age_unit = con.tables['ages'].df['age_unit'][0]\n # propagate ages down to sample level\n\n for s in Samps:\n # change to upper case for every sample name\n s['sample'] = s['sample'].upper()\n\n if 'measurements' in con.tables:\n isbulk = 1\n Meas = con.tables['measurements'].df # convert_to_pmag_data_list()\n\n if isbulk:\n Meas = Meas[Meas['specimen'].astype('bool')]\n Meas = Meas[Meas['susc_chi_volume'].astype(bool)]\n # add core_depth into Measurements dataframe\n Meas = pd.merge(Meas[['susc_chi_volume', 'specimen']], AniData[[\n 'specimen', 'core_depth']], how='inner', on='specimen')\n Bulks = list(Meas['susc_chi_volume'] * 1e6)\n BulkDepths = list(Meas['core_depth'])\n else:\n Bulks, BulkDepths = [], []\n\n # now turn Data from pandas dataframe to a list of dicts\n Data = list(AniData.T.apply(dict))\n\n if len(Bulks) > 0: # set min and max bulk values\n bmin = min(Bulks)\n bmax = max(Bulks)\n xlab = \"Depth (m)\"\n\n #\n if len(Data) > 0:\n location = Data[0].get('location', 'unknown')\n if cb.is_null(location):\n location = 'unknown'\n try:\n location = con.tables['sites'].df['location'][0]\n except KeyError:\n pass\n else:\n return False, 'no data to plot'\n\n # collect the data for plotting tau V3_inc and V1_dec\n Depths, Tau1, Tau2, Tau3, V3Incs, P, V1Decs = [], [], [], [], [], [], []\n F23s = []\n Axs = [] # collect the plot ids\n if len(Bulks) > 0:\n pcol += 1\n\n Data = pmag.get_dictitem(Data, 'aniso_s', '', 'not_null')\n # get all the s1 values from Data as floats\n aniso_s = pmag.get_dictkey(Data, 'aniso_s', '')\n aniso_s = [a.split(':') for a in aniso_s if a is not None]\n #print('aniso_s', aniso_s)\n s1 = [float(a[0]) for a in aniso_s]\n s2 = [float(a[1]) for a in aniso_s]\n s3 = [float(a[2]) for a in aniso_s]\n s4 = [float(a[3]) for a in aniso_s]\n s5 = [float(a[4]) for a in aniso_s]\n s6 = [float(a[5]) for a in aniso_s]\n # we are good with s1 - s2\n nmeas 
= pmag.get_dictkey(Data, 'aniso_s_n_measurements', 'int')\n sigma = pmag.get_dictkey(Data, 'aniso_s_sigma', 'f')\n Depths = pmag.get_dictkey(Data, 'core_depth', 'f')\n # Ss=np.array([s1,s4,s5,s4,s2,s6,s5,s6,s3]).transpose() # make an array\n Ss = np.array([s1, s2, s3, s4, s5, s6]).transpose() # make an array\n # Ts=np.reshape(Ss,(len(Ss),3,-1)) # and re-shape to be n-length array of\n # 3x3 sub-arrays\n for k in range(len(Depths)):\n # tau,Evecs= pmag.tauV(Ts[k]) # get the sorted eigenvalues and eigenvectors\n # v3=pmag.cart2dir(Evecs[2])[1] # convert to inclination of the minimum\n # eigenvector\n fpars = pmag.dohext(nmeas[k] - 6, sigma[k], Ss[k])\n V3Incs.append(fpars['v3_inc'])\n V1Decs.append(fpars['v1_dec'])\n Tau1.append(fpars['t1'])\n Tau2.append(fpars['t2'])\n Tau3.append(fpars['t3'])\n P.append(old_div(Tau1[-1], Tau3[-1]))\n F23s.append(fpars['F23'])\n if len(Depths) > 0:\n if dmax == -1:\n dmax = max(Depths)\n dmin = min(Depths)\n tau_min = 1\n for t in Tau3:\n if t > 0 and t < tau_min:\n tau_min = t\n tau_max = max(Tau1)\n # tau_min=min(Tau3)\n P_max = max(P)\n P_min = min(P)\n # dmax=dmax+.05*dmax\n # dmin=dmin-.05*dmax\n\n main_plot = plt.figure(1, figsize=(11, 7)) # make the figure\n # main_plot = plt.figure(1, figsize=(10, 8)) # make the figure\n\n version_num = pmag.get_version()\n plt.figtext(.02, .01, version_num) # attach the pmagpy version number\n ax = plt.subplot(1, pcol, 1) # make the first column\n Axs.append(ax)\n ax.plot(Tau1, Depths, 'rs')\n ax.plot(Tau2, Depths, 'b^')\n ax.plot(Tau3, Depths, 'ko')\n if sum_file:\n for depth in depths:\n if depth >= dmin and depth < dmax:\n plt.axhline(depth,color='blue',linestyle='dotted')\n if tau_min>.3: tau_min=.3\n if tau_max<.36: tau_max=.36\n ax.axis([tau_min, tau_max, dmax, dmin])\n ax.set_xlabel('Eigenvalues')\n if depth_scale == 'core_depth':\n ax.set_ylabel('Depth (mbsf)')\n elif depth_scale == 'age':\n ax.set_ylabel('Age (' + age_unit + ')')\n else:\n ax.set_ylabel('Depth (mcd)')\n ax2 = 
plt.subplot(1, pcol, 2) # make the second column\n ax2.yaxis.set_major_locator(plt.NullLocator())\n ax2.plot(P, Depths, 'rs')\n ax2.axis([P_min, P_max, dmax, dmin])\n ax2.set_xlabel('P')\n ax2.set_title(location)\n if sum_file:\n for depth in depths:\n if depth >= dmin and depth < dmax:\n plt.axhline(depth,color='blue',linestyle='dotted')\n Axs.append(ax2)\n ax3 = plt.subplot(1, pcol, 3)\n Axs.append(ax3)\n ax3.plot(V3Incs, Depths, 'ko')\n ax3.axis([0, 90, dmax, dmin])\n ax3.set_xlabel('V3 Inclination')\n ax3.yaxis.set_major_locator(plt.NullLocator())\n if sum_file:\n for depth in depths:\n if depth >= dmin and depth < dmax:\n plt.axhline(depth,color='blue',linestyle='dotted')\n ax4 = plt.subplot(1, np.abs(pcol), 4)\n Axs.append(ax4)\n ax4.plot(V1Decs, Depths, 'rs')\n ax4.axis([0, 360, dmax, dmin])\n ax4.set_xlabel('V1 Declination')\n ax4.yaxis.set_major_locator(plt.NullLocator())\n if sum_file:\n for depth in depths:\n if depth >= dmin and depth < dmax:\n plt.axhline(depth,color='blue',linestyle='dotted')\n # ax5=plt.subplot(1,np.abs(pcol),5)\n # Axs.append(ax5)\n # ax5.plot(F23s,Depths,'rs')\n # bounds=ax5.axis()\n # ax5.axis([bounds[0],bounds[1],dmax,dmin])\n # ax5.set_xlabel('F_23')\n # ax5.semilogx()\n # if sum_file:\n # for core in Cores:\n # depth=float(core[core_depth_key])\n # if depth>=dmin and depth<=dmax:\n # plt.plot([bounds[0],bounds[1]],[depth,depth],'b--')\n # if pcol==5 and label==1:plt.text(bounds[1],depth+tint,core[core_label_key])\n # if pcol==6:\n if pcol == 5:\n # ax6=plt.subplot(1,pcol,6)\n ax6 = plt.subplot(1, pcol, 5)\n Axs.append(ax6)\n ax6.plot(Bulks, BulkDepths, 'bo')\n ax6.axis([bmin - 1, 1.1 * bmax, dmax, dmin])\n ax6.set_xlabel('Bulk Susc. 
(uSI)')\n ax6.yaxis.set_major_locator(plt.NullLocator())\n if sum_file:\n for depth in depths:\n if depth >= dmin and depth < dmax:\n plt.axhline(depth,color='blue',linestyle='dotted')\n for x in Axs:\n # this makes the x-tick labels more reasonable - they were\n # overcrowded using the defaults\n pmagplotlib.delticks(x)\n fig_name = location + '_ani_depthplot.' + fmt\n return main_plot, [fig_name]\n else:\n return False, \"No data to plot\"", "response": "Plots anisotropy data against depth (or age) from a MagIC contribution and returns the matplotlib figure and its file name."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef core_depthplot(input_dir_path='.', meas_file='measurements.txt', spc_file='',\n samp_file='samples.txt', age_file='', sum_file='', wt_file='',\n depth_scale='core_depth', dmin=-1, dmax=-1, sym='bo',\n size=5, spc_sym='ro', spc_size=5, meth='', step=0, fmt='svg',\n pltDec=True, pltInc=True, pltMag=True, pltLine=True, pltSus=True,\n logit=False, pltTime=False, timescale=None, amin=-1, amax=-1,\n norm=False, data_model_num=3,location=\"\"):\n \"\"\"\n depth scale can be 'core_depth' or 'composite_depth' (for data model=3)\n if age file is provided, depth_scale will be set to 'age' by default.\n You must provide at least measurements, specimens, and samples files to plot.\n\n Parameters\n ----------\n input_dir_path : str, default \".\"\n file input directory\n meas_file : str, default \"measurements.txt\"\n input measurements file\n spc_file : str, default \"\"\n input specimens file\n samp_file : str, default \"samples.txt\"\n input samples file\n age_file : str, default \"\"\n input ages file\n sum_file : str, default \"\"\n input csv summary file\n wt_file : str, default \"\"\n input file with weights\n depth_scale : str, default \"core_depth\"\n ['core_depth', 'composite_depth']\n dmin : number, default -1\n minimum depth to plot (if -1, default to plotting all)\n dmax : number, default -1\n maximum depth to plot (if -1, default to plotting all)\n 
sym : str, default \"bo\"\n symbol color and shape, default blue circles\n (see matplotlib documentation for more options)\n size : int, default 5\n symbol size\n spc_sym : str, default 'ro'\n specimen symbol color and shape, default red circles\n (see matplotlib documentation for more options)\n meth : str, default \"\"\n method codes, [\"LT-NO\", \"AF\", \"T\", \"ARM\", \"IRM\", \"X\"]\n step : int, default 0\n treatment step for plotting:\n for AF, in mT; for T, in degrees C\n fmt : str, default \"svg\"\n format for figures, [svg,jpg,png,pdf]\n pltDec : bool, default True\n plot declination\n pltInc : bool, default True\n plot inclination\n pltMag : bool, default True\n plot magnetization\n pltLine : bool, default True\n connect dots with a line\n pltSus : bool, default True\n plot blanket treatment\n logit : bool, default False\n plot magnetization on a log scale\n amin : int, default -1\n minimum time to plot (if -1, default to plotting all)\n amax : int, default -1\n maximum time to plot (if -1, default to plotting all)\n norm : bool, default False\n normalize by weight\n data_model_num : int, default 3\n MagIC data model (please, use data model 3)\n \"\"\"\n data_model_num = int(data_model_num)\n # replace MagIC 3 defaults with MagIC 2.5 defaults if needed\n if data_model_num == 2 and meas_file == 'measurements.txt':\n meas_file = 'magic_measurements.txt'\n if data_model_num == 2 and samp_file == 'samples.txt':\n samp_file = 'er_samples.txt'\n if data_model_num == 2 and age_file == 'ages.txt':\n age_file = 'er_ages.txt'\n if data_model_num == 2 and depth_scale == \"core_depth\":\n depth_scale = \"sample_core_depth\"\n # initialize MagIC 3.0 vs 2.5 column names\n loc_col_name = \"location\" if data_model_num == 3 else \"er_location_name\"\n site_col_name = \"site\" if data_model_num == 3 else \"er_site_name\"\n samp_col_name = \"sample\" if data_model_num == 3 else \"er_sample_name\"\n spec_col_name = \"specimen\" if data_model_num == 3 else \"er_specimen_name\"\n 
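The data-model-dependent names here are chosen one variable at a time; the same mapping can be sketched as a lookup table. This `col_name` helper is hypothetical (not part of ipmag), shown only to make the MagIC 3.0-to-2.5 correspondence explicit:

```python
# Hypothetical helper: map a MagIC 3 column name to its MagIC 2.5
# equivalent, mirroring the per-variable conditionals used here.
MAGIC2_NAMES = {
    "location": "er_location_name",
    "site": "er_site_name",
    "sample": "er_sample_name",
    "specimen": "er_specimen_name",
    "method_codes": "magic_method_codes",
}

def col_name(magic3_name, data_model_num=3):
    """Return the column name appropriate for the requested data model."""
    if int(data_model_num) == 3:
        return magic3_name
    # fall back to the name itself when no 2.5 equivalent is listed
    return MAGIC2_NAMES.get(magic3_name, magic3_name)
```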
meth_col_name = \"method_codes\" if data_model_num == 3 else \"magic_method_codes\"\n spec_dec_col_name = \"dir_dec\" if data_model_num == 3 else \"specimen_dec\"\n spec_inc_col_name = \"dir_inc\" if data_model_num == 3 else \"specimen_inc\"\n avg_weight_col_name = \"weight\" if data_model_num == 3 else \"average_weight\"\n spec_weight_col_name = \"weight\" if data_model_num == 3 else \"specimen_weight\"\n age_col_name = \"age\" if data_model_num == 3 else \"average_age\"\n height_col_name = \"height\" if data_model_num == 3 else \"average_height\"\n average_dec_col_name = \"dir_dec\" if data_model_num == 3 else \"average_dec\"\n average_inc_col_name = \"dir_inc\" if data_model_num == 3 else \"average_inc\"\n # initialize other variables\n width = 10\n Ssym, Ssize = 'cs', 5\n pcol = 3\n pel = 3\n maxInt = -1000\n minInt = 1e10\n maxSuc = -1000\n minSuc = 10000\n main_plot = None\n if size:\n size = int(size)\n if spc_size:\n spc_size = int(spc_size)\n title = \"\"\n if location:title=location\n\n # file formats not supported for the moment\n ngr_file = \"\" # nothing needed, not implemented fully in original script\n suc_file = \"\" # nothing else needed, also was not implemented in original script\n res_file = \"\" # need also res_sym, res_size\n wig_file = \"\" # if wig_file: pcol+=1; width+=2\n\n # which plots to make\n if not pltDec:\n pcol -= 1\n pel -= 1\n width -= 2\n if not pltInc:\n pcol -= 1\n pel -= 1\n width -= 2\n if not pltMag:\n pcol -= 1\n pel -= 1\n width -= 2\n\n # method and step\n if not step or meth == 'LT-NO':\n step = 0\n method = 'LT-NO'\n elif meth == \"AF\":\n step = round(float(step) * 1e-3, 6)\n method = 'LT-AF-Z'\n elif meth == 'T':\n step = round(float(step) + 273, 6)\n method = 'LT-T-Z'\n elif meth == 'ARM':\n method = 'LT-AF-I'\n step = round(float(step) * 1e-3, 6)\n elif meth == 'IRM':\n method = 'LT-IRM'\n step = round(float(step) * 1e-3, 6)\n # not supporting susceptibility at the moment LJ\n elif meth == 'X':\n method = 'LP-X'\n 
pcol += 1\n ind = sys.argv.index('-LP')\n if sys.argv[ind+2] == 'mass':\n if data_model_num != 3:\n suc_key = 'measurement_chi_mass'\n else:\n suc_key = 'susc_chi_mass'\n elif sys.argv[ind+2] == 'vol':\n if data_model_num != 3:\n suc_key = 'measurement_chi_volume'\n else:\n suc_key = 'susc_chi_volume'\n else:\n print('error in susceptibility units')\n return False, 'error in susceptibility units'\n else:\n print('method: {} not supported'.format(meth))\n return False, 'method: \"{}\" not supported'.format(meth)\n\n if wt_file:\n norm = True\n\n if dmin and dmax:\n dmin, dmax = float(dmin), float(dmax)\n else:\n dmin, dmax = -1, -1\n\n if pltTime:\n amin = float(amin)\n amax = float(amax)\n pcol += 1\n width += 2\n if not (amax and timescale):\n return False, \"To plot time, you must provide amin, amax, and timescale\"\n\n #\n #\n # read in 3.0 data and translate to 2.5\n\n if meas_file:\n meas_file = pmag.resolve_file_name(meas_file, input_dir_path)\n if spc_file:\n spc_file = pmag.resolve_file_name(spc_file, input_dir_path)\n if samp_file:\n samp_file = pmag.resolve_file_name(samp_file, input_dir_path)\n if age_file:\n age_file = pmag.resolve_file_name(age_file, input_dir_path)\n if data_model_num == 3:\n fnames = {'specimens': spc_file, 'samples': samp_file,\n 'ages': age_file, 'measurements': meas_file}\n fnames = {k: v for (k, v) in fnames.items() if v}\n con = cb.Contribution(input_dir_path, custom_filenames=fnames)\n for dtype in ['measurements', 'specimens']:\n if dtype not in con.tables:\n print(\n '-E- You must have a {} file in your input directory ({}) to run core_depthplot'.format(dtype, input_dir_path))\n print(' If needed, you can specify your input directory on the command line with \"core_depthplot.py -ID dirname ... 
\"')\n print(' Or with ipmag.core_depthplot(input_dir_path=dirname, ...)')\n # return False, '-E- You must have a {} file in your input directory ({}) to run core_depthplot'.format(dtype, input_dir_path)\n # propagate data to measurements\n con.propagate_name_down('sample', 'measurements')\n con.propagate_name_down('site', 'measurements')\n\n # propagate depth info from sites --> samples\n con.propagate_cols(\n ['core_depth', 'composite_depth'], 'samples', 'sites')\n\n if age_file == \"\":\n # get sample data straight from the contribution\n Samps = []\n if 'samples' in con.tables:\n Samps = con.tables['samples'].convert_to_pmag_data_list()\n else:\n depth_scale = 'age'\n Samps = []\n # get age data from contribution\n if 'ages' in con.tables:\n # we need to get sample in here\n # this doesn't do the trick by itself\n con.propagate_ages()\n con.propagate_cols(['age', 'age_unit'], 'samples', 'sites')\n Samps = con.tables['samples'].convert_to_pmag_data_list()\n age_unit = \"\"\n if spc_file:\n Specs3 = []\n # get specimen data from contribution\n Specs = []\n if 'specimens' in con.tables:\n Specs = con.tables['specimens'].convert_to_pmag_data_list()\n\n if res_file:\n warn = '-W- result file option is not currently available for MagIC data model 3'\n print(warn)\n return False, warn\n #Results, file_type = pmag.magic_read(res_file)\n if norm:\n #warn = '-W- norm option is not currently available for MagIC data model 3'\n # print(warn)\n # return False, warn\n Specs3, file_type = pmag.magic_read(wt_file)\n # translate specimen records to 2.5\n ErSpecs = []\n # for spec in Specs3:\n # ErSpecs.append(map_magic.mapping(spec, spec_magic3_2_magic2_map))\n ErSpecs = Specs3\n\n print(len(ErSpecs), ' specimens read in from ', wt_file)\n\n if not os.path.isfile(spc_file):\n if not os.path.isfile(meas_file):\n return False, \"You must provide either a magic_measurements file or a pmag_specimens file\"\n if not age_file and not samp_file:\n print('-W- You must provide either an 
age file or a sample file')\n return False, '-W- You must provide either an age file or a sample file'\n\n # read in 2.5 data\n elif data_model_num == 2:\n if age_file == \"\":\n if samp_file:\n samp_file = os.path.join(input_dir_path, samp_file)\n Samps, file_type = pmag.magic_read(samp_file)\n else:\n depth_scale = 'age'\n if age_file:\n age_file = os.path.join(input_dir_path, age_file)\n Samps, file_type = pmag.magic_read(age_file)\n age_unit = \"\"\n if spc_file:\n Specs, file_type = pmag.magic_read(spc_file)\n if res_file:\n Results, file_type = pmag.magic_read(res_file)\n if norm:\n ErSpecs, file_type = pmag.magic_read(wt_file)\n print(len(ErSpecs), ' specimens read in from ', wt_file)\n\n if not os.path.isfile(spc_file):\n if not os.path.isfile(meas_file):\n return False, \"You must provide either a magic_measurements file or a pmag_specimens file\"\n else:\n return False, \"Invalid data model number: {}\".format(str(data_model_num))\n\n Cores = []\n core_depth_key = \"Top depth cored CSF (m)\"\n if sum_file:\n # os.path.join(input_dir_path, sum_file)\n sum_file = pmag.resolve_file_name(sum_file, input_dir_path)\n with open(sum_file, 'r') as fin:\n indat = fin.readlines()\n if \"Core Summary\" in indat[0]:\n headline = 1\n else:\n headline = 0\n keys = indat[headline].replace('\\n', '').split(',')\n if \"Core Top (m)\" in keys:\n core_depth_key = \"Core Top (m)\"\n if \"Top depth cored CSF (m)\" in keys:\n core_depth_key = \"Top depth cored CSF (m)\"\n if \"Core Label\" in keys:\n core_label_key = \"Core Label\"\n if \"Core label\" in keys:\n core_label_key = \"Core label\"\n for line in indat[2:]:\n if 'TOTALS' not in line:\n CoreRec = {}\n for k in range(len(keys)):\n CoreRec[keys[k]] = line.split(',')[k]\n Cores.append(CoreRec)\n if len(Cores) == 0:\n print('no Core depth information available: import core summary file')\n sum_file = \"\"\n Data = []\n if 'core_depth' in depth_scale or depth_scale == 'mbsf':\n ylab = \"Depth (mbsf)\"\n depth_scale = 
'core_depth'\n elif depth_scale == 'age':\n ylab = \"Age\"\n elif 'composite_depth' in depth_scale or depth_scale == 'mcd':\n ylab = \"Depth (mcd)\"\n depth_scale = 'composite_depth'\n else:\n print('Warning: You have provided unsupported depth scale: {}.\\nUsing default (mbsf) instead.'.format(\n depth_scale))\n depth_scale = 'core_depth'\n ylab = \"Depth (mbsf)\"\n # fix depth scale for data model 2 if needed\n if data_model_num == 2 and not depth_scale.startswith('sample_'):\n if depth_scale != \"age\":\n depth_scale = \"sample_\" + depth_scale\n # collect the data for plotting declination\n Depths, Decs, Incs, Ints = [], [], [], []\n SDepths, SDecs, SIncs, SInts = [], [], [], []\n SSucs = []\n samples = []\n methods, steps, m2 = [], [], []\n\n if os.path.isfile(meas_file): # plot the bulk measurement data\n if data_model_num == 3:\n Meas = []\n if 'measurements' in con.tables:\n Meas = con.tables['measurements'].convert_to_pmag_data_list()\n # has measurement_magn_mass ....\n dec_key, inc_key = 'dir_dec', 'dir_inc'\n meth_key, temp_key, ac_key, dc_key = 'method_codes', 'treat_temp', 'treat_ac_field', 'treat_dc_field'\n intlist = ['magnitude', 'magn_moment',\n 'magn_volume', 'magn_mass']\n meas_key = \"magn_moment\"\n elif data_model_num == 2:\n intlist = ['measurement_magnitude', 'measurement_magn_moment',\n 'measurement_magn_volume', 'measurement_magn_mass']\n temp_key, ac_key, dc_key = 'treatment_temp', 'treatment_ac_field', 'treatment_dc_field'\n dec_key, inc_key = 'measurement_dec', 'measurement_inc'\n Meas, file_type = pmag.magic_read(meas_file)\n meas_key = 'measurement_magn_moment'\n #\n print(len(Meas), ' measurements read in from ', meas_file)\n #\n for m in intlist: # find the intensity key with data\n # get all non-blank data for this specimen\n meas_data = pmag.get_dictitem(Meas, m, '', 'F')\n if len(meas_data) > 0:\n print('using intensity key:', m)\n meas_key = m\n break\n # fish out the desired method code\n m1 = pmag.get_dictitem(Meas, 
meth_col_name, method, 'has')\n if method == 'LT-T-Z':\n m2 = pmag.get_dictitem(m1, temp_key, str(\n step), 'eval') # fish out the desired step\n elif 'LT-AF' in method:\n m2 = pmag.get_dictitem(m1, ac_key, str(step), 'eval')\n elif 'LT-IRM' in method:\n m2 = pmag.get_dictitem(m1, dc_key, str(step), 'eval')\n elif 'LP-X' in method:\n m2 = pmag.get_dictitem(m1, suc_key, '', 'F')\n if len(m2) > 0:\n for rec in m2: # fish out depths and weights\n D = pmag.get_dictitem(\n Samps, samp_col_name, rec[samp_col_name], 'T')\n if not D: # if using an age_file, you may need to sort by site\n D = pmag.get_dictitem(\n Samps, site_col_name, rec[site_col_name], 'T')\n depth = pmag.get_dictitem(D, depth_scale, '', 'F')\n if len(depth) > 0:\n if ylab == 'Age':\n # get units of ages - assume they are all the same!\n ylab = ylab + ' (' + depth[0]['age_unit'] + ')'\n\n rec[depth_scale] = float(depth[0][depth_scale])\n rec[meth_col_name] = rec[meth_col_name] + \\\n ':' + depth[0][meth_col_name]\n if norm:\n specrecs = pmag.get_dictitem(\n ErSpecs, spec_col_name, rec[spec_col_name], 'T')\n specwts = pmag.get_dictitem(\n specrecs, spec_weight_col_name, \"\", 'F')\n if len(specwts) > 0:\n rec[weight_col_name] = specwts[0][spec_weight_col_name]\n # fish out data with core_depth and (if needed)\n # weights\n Data.append(rec)\n else:\n # fish out data with core_depth and (if needed) weights\n Data.append(rec)\n if title == \"\":\n pieces = rec[samp_col_name].split('-')\n location = rec.get(loc_col_name, '')\n title = location\n\n SData = pmag.sort_diclist(Data, depth_scale)\n for rec in SData: # fish out bulk measurement data from desired depths\n if dmax == -1 or float(rec[depth_scale]) < dmax and float(rec[depth_scale]) > dmin:\n Depths.append((rec[depth_scale]))\n if method == \"LP-X\":\n SSucs.append(float(rec[suc_key]))\n else:\n if pltDec:\n Decs.append(float(rec[dec_key]))\n if pltInc:\n Incs.append(float(rec[inc_key]))\n if not norm and pltMag:\n Ints.append(float(rec[meas_key]))\n if 
norm and pltMag:\n Ints.append(\n float(rec[meas_key]) / float(rec[spec_weight_col_name]))\n if len(SSucs) > 0:\n maxSuc = max(SSucs)\n minSuc = min(SSucs)\n if len(Ints) > 1:\n maxInt = max(Ints)\n minInt = min(Ints)\n if len(Depths) == 0:\n print('no bulk measurement data matched your request')\n else:\n print(len(Depths), \"depths found\")\n SpecDepths, SpecDecs, SpecIncs = [], [], []\n FDepths, FDecs, FIncs = [], [], []\n if spc_file: # add depths to spec data\n # get all the discrete data with best fit lines\n BFLs = pmag.get_dictitem(Specs, meth_col_name, 'DE-BFL', 'has')\n for spec in BFLs:\n if location == \"\":\n location = spec.get(loc_col_name, \"\")\n samp = pmag.get_dictitem(\n Samps, samp_col_name, spec[samp_col_name], 'T')\n if len(samp) > 0 and depth_scale in list(samp[0].keys()) and samp[0][depth_scale] != \"\":\n if ylab == 'Age':\n # get units of ages - assume they are all the same!\n ylab = ylab + ' (' + samp[0]['age_unit'] + ')'\n # filter for depth\n if dmax == -1 or float(samp[0][depth_scale]) < dmax and float(samp[0][depth_scale]) > dmin:\n # fish out data with core_depth\n SpecDepths.append(float(samp[0][depth_scale]))\n # fish out data with core_depth\n SpecDecs.append(float(spec[spec_dec_col_name]))\n # fish out data with core_depth\n SpecIncs.append(float(spec[spec_inc_col_name]))\n else:\n print('no core_depth found for: ', spec[spec_col_name])\n # get all the discrete data with best fit lines\n FMs = pmag.get_dictitem(Specs, meth_col_name, 'DE-FM', 'has')\n for spec in FMs:\n if location == \"\":\n location = spec.get(loc_col_name, \"\")\n samp = pmag.get_dictitem(\n Samps, samp_col_name, spec[samp_col_name], 'T')\n if len(samp) > 0 and depth_scale in list(samp[0].keys()) and samp[0][depth_scale] != \"\":\n if ylab == 'Age':\n # get units of ages - assume they are all the same!\n ylab = ylab + ' (' + samp[0]['age_unit'] + ')'\n # filter for depth\n if dmax == -1 or float(samp[0][depth_scale]) < dmax and float(samp[0][depth_scale]) > 
dmin:\n # fish out data with core_depth\n FDepths.append(float(samp[0][depth_scale]))\n # fish out data with core_depth\n FDecs.append(float(spec[spec_dec_col_name]))\n # fish out data with core_depth\n FIncs.append(float(spec[spec_inc_col_name]))\n else:\n print('no core_depth found for: ', spec[spec_col_name])\n ResDepths, ResDecs, ResIncs = [], [], []\n if 'age' in depth_scale: # set y-key\n res_scale = age_col_name\n else:\n res_scale = height_col_name\n if res_file: # creates lists of Result Data\n for res in Results:\n meths = res[meth_col_name].split(\":\")\n if 'DE-FM' in meths:\n # filter for depth\n if dmax == -1 or float(res[res_scale]) < dmax and float(res[res_scale]) > dmin:\n # fish out data with core_depth\n ResDepths.append(float(res[res_scale]))\n # fish out data with core_depth\n ResDecs.append(float(res['average_dec']))\n # fish out data with core_depth\n ResIncs.append(float(res['average_inc']))\n Susc, Sus_depths = [], []\n if dmin == -1:\n if len(Depths) > 0:\n dmin, dmax = Depths[0], Depths[-1]\n if len(FDepths) > 0:\n dmin, dmax = FDepths[0], FDepths[-1]\n if pltSus and len(SDepths) > 0:\n if SDepths[0] < dmin:\n dmin = SDepths[0]\n if SDepths[-1] > dmax:\n dmax = SDepths[-1]\n if len(SpecDepths) > 0:\n if min(SpecDepths) < dmin:\n dmin = min(SpecDepths)\n if max(SpecDepths) > dmax:\n dmax = max(SpecDepths)\n if len(ResDepths) > 0:\n if min(ResDepths) < dmin:\n dmin = min(ResDepths)\n if max(ResDepths) > dmax:\n dmax = max(ResDepths)\n # wig_file and suc_file not currently supported options\n # if suc_file:\n # with open(suc_file, 'r') as s_file:\n # sucdat = s_file.readlines()\n # keys = sucdat[0].replace('\\n', '').split(',') # splits on underscores\n # for line in sucdat[1:]:\n # SucRec = {}\n # for k in range(len(keys)):\n # SucRec[keys[k]] = line.split(',')[k]\n # if float(SucRec['Top Depth (m)']) < dmax and float(SucRec['Top Depth (m)']) > dmin and SucRec['Magnetic Susceptibility (80 mm)'] != \"\":\n # Susc.append(float(SucRec['Magnetic 
Susceptibility (80 mm)']))\n # if Susc[-1] > maxSuc:\n # maxSuc = Susc[-1]\n # if Susc[-1] < minSuc:\n # minSuc = Susc[-1]\n # Sus_depths.append(float(SucRec['Top Depth (m)']))\n #WIG, WIG_depths = [], []\n # if wig_file:\n # wigdat, file_type = pmag.magic_read(wig_file)\n # swigdat = pmag.sort_diclist(wigdat, depth_scale)\n # keys = list(wigdat[0].keys())\n # for key in keys:\n # if key != depth_scale:\n # plt_key = key\n # break\n # for wig in swigdat:\n # if float(wig[depth_scale]) < dmax and float(wig[depth_scale]) > dmin:\n # WIG.append(float(wig[plt_key]))\n # WIG_depths.append(float(wig[depth_scale]))\n tint = 4.5\n plot = 1\n #print('Decs', len(Decs))\n #print('Depths', len(Depths), 'SpecDecs', len(SpecDecs))\n #print('SpecDepths', len(SpecDepths), 'ResDecs', len(ResDecs))\n #print('ResDepths', len(ResDepths), 'SDecs', len(SDecs))\n #print('SDepths', len(SDepths), 'SIincs', len(SIncs))\n #print('Incs', len(Incs))\n if (Decs and Depths) or (SpecDecs and SpecDepths) or (ResDecs and ResDepths) or (SDecs and SDepths) or (SInts and SDepths) or (SIncs and SDepths) or (Incs and Depths):\n main_plot = plt.figure(1, figsize=(width, 8)) # this works\n # pylab.figure(1,figsize=(width,8))\n version_num = pmag.get_version()\n plt.figtext(.02, .01, version_num)\n if pltDec:\n ax = plt.subplot(1, pcol, plot)\n if pltLine:\n plt.plot(Decs, Depths, 'k')\n if len(Decs) > 0:\n plt.plot(Decs, Depths, sym, markersize=size)\n if len(Decs) == 0 and pltLine and len(SDecs) > 0:\n plt.plot(SDecs, SDepths, 'k')\n if len(SDecs) > 0:\n plt.plot(SDecs, SDepths, Ssym, markersize=Ssize)\n if spc_file:\n plt.plot(SpecDecs, SpecDepths, spc_sym, markersize=spc_size)\n if spc_file and len(FDepths) > 0:\n plt.scatter(\n FDecs, FDepths, marker=spc_sym[-1], edgecolor=spc_sym[0], facecolor='white', s=spc_size**2)\n if res_file:\n plt.plot(ResDecs, ResDepths, res_sym, markersize=res_size)\n if sum_file:\n for core in Cores:\n depth = float(core[core_depth_key])\n if depth > dmin and depth < 
dmax:\n plt.plot([0, 360.], [depth, depth], 'b--')\n if pel == plot:\n plt.text(360, depth + tint, core[core_label_key])\n if pel == plot:\n plt.axis([0, 400, dmax, dmin])\n else:\n plt.axis([0, 360., dmax, dmin])\n plt.xlabel('Declination')\n plt.ylabel(ylab)\n plot += 1\n pmagplotlib.delticks(ax) # dec xticks are too crowded otherwise\n else:\n print('no data!')\n return False, 'No data found to plot\\nTry again with different parameters'\n if pltInc:\n plt.subplot(1, pcol, plot)\n if pltLine:\n plt.plot(Incs, Depths, 'k')\n if len(Incs) > 0:\n plt.plot(Incs, Depths, sym, markersize=size)\n if len(Incs) == 0 and pltLine and len(SIncs) > 0:\n plt.plot(SIncs, SDepths, 'k')\n if len(SIncs) > 0:\n plt.plot(SIncs, SDepths, Ssym, markersize=Ssize)\n if spc_file and len(SpecDepths) > 0:\n plt.plot(SpecIncs, SpecDepths, spc_sym, markersize=spc_size)\n if spc_file and len(FDepths) > 0:\n plt.scatter(\n FIncs, FDepths, marker=spc_sym[-1], edgecolor=spc_sym[0], facecolor='white', s=spc_size**2)\n if res_file:\n plt.plot(ResIncs, ResDepths, res_sym, markersize=res_size)\n if sum_file:\n for core in Cores:\n depth = float(core[core_depth_key])\n if depth > dmin and depth < dmax:\n if pel == plot:\n plt.text(90, depth + tint, core[core_label_key])\n plt.plot([-90, 90], [depth, depth], 'b--')\n plt.plot([0, 0], [dmax, dmin], 'k-')\n if pel == plot:\n plt.axis([-90, 110, dmax, dmin])\n else:\n plt.axis([-90, 90, dmax, dmin])\n plt.xlabel('Inclination')\n plt.ylabel('')\n plot += 1\n if pltMag and len(Ints) > 0 or len(SInts) > 0:\n plt.subplot(1, pcol, plot)\n for pow in range(-10, 10):\n if maxInt * 10**pow > 1:\n break\n if not logit:\n for k in range(len(Ints)):\n Ints[k] = Ints[k] * 10**pow\n for k in range(len(SInts)):\n SInts[k] = SInts[k] * 10**pow\n if pltLine and len(Ints) > 0:\n plt.plot(Ints, Depths, 'k')\n if len(Ints) > 0:\n plt.plot(Ints, Depths, sym, markersize=size)\n if len(Ints) == 0 and pltLine and len(SInts) > 0:\n plt.plot(SInts, SDepths, 'k-')\n if len(SInts) 
> 0:\n plt.plot(SInts, SDepths, Ssym, markersize=Ssize)\n if sum_file:\n for core in Cores:\n depth = float(core[core_depth_key])\n plt.plot([0, maxInt * 10**pow + .1], [depth, depth], 'b--')\n if depth > dmin and depth < dmax:\n plt.text(maxInt * 10**pow - .2 * maxInt * 10 **\n pow, depth + tint, core[core_label_key])\n plt.axis([0, maxInt * 10**pow + .1, dmax, dmin])\n if not norm:\n plt.xlabel('Intensity (10^-%i Am^2)' % pow)\n else:\n plt.xlabel('Intensity (10^-%i Am^2/kg)' % pow)\n else:\n if pltLine:\n plt.semilogx(Ints, Depths, 'k')\n if len(Ints) > 0:\n plt.semilogx(Ints, Depths, sym, markersize=size)\n if len(Ints) == 0 and pltLine and len(SInts) > 0:\n plt.semilogx(SInts, SDepths, 'k')\n if len(SInts) > 0:\n plt.semilogx(SInts, SDepths, Ssym, markersize=Ssize)\n if sum_file:\n for core in Cores:\n depth = float(core[core_depth_key])\n plt.semilogx([minInt, maxInt], [depth, depth], 'b--')\n if depth > dmin and depth < dmax:\n plt.text(maxInt - .2 * maxInt, depth +\n tint, core[core_label_key])\n minInt = plt.axis()[0]\n plt.axis([minInt, maxInt, dmax, dmin])\n if not norm:\n plt.xlabel('Intensity (Am^2)')\n else:\n plt.xlabel('Intensity (Am^2/kg)')\n plot += 1\n if suc_file or len(SSucs) > 0:\n plt.subplot(1, pcol, plot)\n if len(Susc) > 0:\n if pltLine:\n plt.plot(Susc, Sus_depths, 'k')\n if not logit:\n plt.plot(Susc, Sus_depths, sym, markersize=size)\n if logit:\n plt.semilogx(Susc, Sus_depths, sym, markersize=size)\n if len(SSucs) > 0:\n if not logit:\n plt.plot(SSucs, SDepths, sym, markersize=size)\n if logit:\n plt.semilogx(SSucs, SDepths, sym, markersize=size)\n if sum_file:\n for core in Cores:\n depth = float(core[core_depth_key])\n if not logit:\n plt.plot([minSuc, maxSuc], [depth, depth], 'b--')\n if logit:\n plt.semilogx([minSuc, maxSuc], [depth, depth], 'b--')\n plt.axis([minSuc, maxSuc, dmax, dmin])\n 
plt.xlabel('Susceptibility')\n plot += 1\n # if wig_file:\n # plt.subplot(1, pcol, plot)\n # plt.plot(WIG, WIG_depths, 'k')\n # if sum_file:\n # for core in Cores:\n # depth = float(core[core_depth_key])\n # plt.plot([WIG[0], WIG[-1]], [depth, depth], 'b--')\n # plt.axis([min(WIG), max(WIG), dmax, dmin])\n # plt.xlabel(plt_key)\n # plot += 1\n if pltTime:\n ax1 = plt.subplot(1, pcol, plot)\n ax1.axis([-.25, 1.5, amax, amin])\n plot += 1\n TS, Chrons = pmag.get_ts(timescale)\n X, Y, Y2 = [0, 1], [], []\n cnt = 0\n if amin < TS[1]: # in the Brunhes\n Y = [amin, amin] # minimum age\n Y1 = [TS[1], TS[1]] # age of the B/M boundary\n # color in Brunhes, black\n ax1.fill_between(X, Y, Y1, facecolor='black')\n for d in TS[1:]:\n pol = cnt % 2\n cnt += 1\n if d <= amax and d >= amin:\n ind = TS.index(d)\n Y = [TS[ind], TS[ind]]\n Y1 = [TS[ind + 1], TS[ind + 1]]\n if pol:\n # fill in every other time\n ax1.fill_between(X, Y, Y1, facecolor='black')\n ax1.plot([0, 1, 1, 0, 0], [amin, amin, amax, amax, amin], 'k-')\n ax2 = ax1.twinx()\n plt.ylabel(\"Age (Ma): \" + timescale)\n for k in range(len(Chrons) - 1):\n c = Chrons[k]\n cnext = Chrons[k + 1]\n d = cnext[1] - old_div((cnext[1] - c[1]), 3.)\n if d >= amin and d < amax:\n # make the Chron boundary tick\n ax2.plot([1, 1.5], [c[1], c[1]], 'k-')\n ax2.text(1.05, d, c[0])\n ax2.axis([-.25, 1.5, amax, amin])\n figname = location + '_m:_' + method + '_core-depthplot.' + fmt\n plt.title(location)\n return main_plot, figname", "response": "Plots declination, inclination, and/or intensity from measurement, specimen, and result records against core depth (or age) for a MagIC contribution, returning the matplotlib figure and a suggested figure file name."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef download_magic(infile, dir_path='.', input_dir_path='',\n overwrite=False, print_progress=True,\n data_model=3., separate_locs=False):\n \"\"\"\n takes the name of a text file downloaded from the MagIC database and\n unpacks it into magic-formatted files. 
by default, download_magic assumes\n that you are doing everything in your current directory. if not, you may\n provide optional arguments dir_path (where you want the results to go) and\n input_dir_path (where the downloaded file is IF that location is different from\n dir_path).\n\n Parameters\n ----------\n infile : str\n MagIC-format file to unpack\n dir_path : str\n output directory (default \".\")\n input_dir_path : str, default \"\"\n path for intput file if different from output_dir_path (default is same)\n overwrite: bool\n overwrite current directory (default False)\n print_progress: bool\n verbose output (default True)\n data_model : float\n MagIC data model 2.5 or 3 (default 3)\n separate_locs : bool\n create a separate directory for each location (Location_*)\n (default False)\n \"\"\"\n if data_model == 2.5:\n method_col = \"magic_method_codes\"\n else:\n method_col = \"method_codes\"\n\n input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)\n infile = pmag.resolve_file_name(infile, input_dir_path)\n # try to deal reasonably with unicode errors\n try:\n f = codecs.open(infile, 'r', \"utf-8\")\n infile = f.readlines()\n except UnicodeDecodeError:\n f = codecs.open(infile, 'r', \"Latin-1\")\n infile = f.readlines()\n f.close()\n File = [] # will contain all non-blank lines from downloaded file\n for line in infile:\n line = line.replace('\\n', '')\n if line[0:4] == '>>>>' or len(line.strip()) > 0: # skip blank lines\n File.append(line)\n LN = 0 # tracks our progress iterating through File\n type_list = []\n filenum = 0\n while LN < len(File) - 1:\n line = File[LN]\n if \">>>>\" in line:\n LN += 1\n continue\n file_type = line.split('\\t')[1]\n file_type = file_type.lower()\n if file_type[-1] == \"\\n\":\n file_type = file_type[:-1]\n if print_progress == True:\n print('working on: ', repr(file_type))\n if file_type not in type_list:\n type_list.append(file_type)\n else:\n filenum += 1\n LN += 1\n line = File[LN]\n # skip empty 
tables\n if line == \">>>>>>>>>>\":\n LN += 1\n continue\n keys = line.replace('\\n', '').split('\\t')\n if keys[0][0] == '.':\n keys = line.replace('\\n', '').replace('.', '').split('\\t')\n keys.append('RecNo') # cludge for new MagIC download format\n LN += 1\n Recs = []\n while LN < len(File):\n line = File[LN]\n # finish up one file type and then break\n if \">>>>\" in line and len(Recs) > 0:\n if filenum == 0:\n outfile = os.path.join(dir_path, file_type.strip() + '.txt')\n else:\n outfile = os.path.join(dir_path, file_type.strip() + '_' + str(filenum) + '.txt')\n NewRecs = []\n for rec in Recs:\n if method_col in list(rec.keys()):\n meths = rec[method_col].split(\":\")\n if len(meths) > 0:\n methods = \"\"\n for meth in meths:\n methods = methods + meth.strip() + \":\" # get rid of nasty spaces!!!!!!\n rec[method_col] = methods[:-1]\n NewRecs.append(rec)\n pmag.magic_write(outfile, NewRecs, file_type)\n if print_progress == True:\n print(file_type, \" data put in \", outfile)\n Recs = []\n LN += 1\n break\n # keep adding records of the same file type\n else:\n rec = line.split('\\t')\n Rec = {}\n if len(rec) == len(keys):\n for k in range(len(rec)):\n Rec[keys[k]] = rec[k]\n Recs.append(Rec)\n # in case of magic_search_results.txt, which has an extra\n # column:\n elif len(rec) - len(keys) == 1:\n for k in range(len(rec))[:-1]:\n Rec[keys[k]] = rec[k]\n Recs.append(Rec)\n elif len(rec) < len(keys):\n for k in range(len(rec)):\n Rec[keys[k]] = rec[k]\n for k in range(len(rec), len(keys)):\n Rec[keys[k]] = \"\"\n Recs.append(Rec)\n else:\n print('WARNING: problem in file with line: ')\n print(line)\n print('skipping....')\n LN += 1\n if len(Recs) > 0:\n if filenum == 0:\n outfile = os.path.join(dir_path, file_type.strip() + '.txt')\n else:\n outfile = os.path.join(dir_path, file_type.strip() + '_' + str(filenum) + '.txt')\n NewRecs = []\n for rec in Recs:\n if method_col in list(rec.keys()):\n meths = rec[method_col].split(\":\")\n if len(meths) > 0:\n methods = 
\"\"\n for meth in meths:\n methods = methods + meth.strip() + \":\" # get rid of nasty spaces!!!!!!\n rec[method_col] = methods[:-1]\n NewRecs.append(rec)\n pmag.magic_write(outfile, NewRecs, file_type)\n if print_progress == True:\n print(file_type, \" data put in \", outfile)\n # look through locations table and create separate directories for each\n # location\n if separate_locs:\n con = cb.Contribution(dir_path)\n con.propagate_location_to_measurements()\n con.propagate_name_down('location', 'samples')\n for dtype in con.tables:\n con.write_table_to_file(dtype)\n locs, locnum = [], 1\n if 'locations' in type_list:\n locs, file_type = pmag.magic_read(\n os.path.join(dir_path, 'locations.txt'))\n if len(locs) > 0: # at least one location\n # go through unique location names\n for loc_name in set([loc.get('location') for loc in locs]):\n if print_progress == True:\n print('location_' + str(locnum) + \": \", loc_name)\n lpath = os.path.join(dir_path, 'Location_' + str(locnum))\n locnum += 1\n try:\n os.mkdir(lpath)\n except FileExistsError:\n print('directory ', lpath,\n ' already exists - overwriting everything: {}'.format(overwrite))\n if not overwrite:\n print(\"-W- download_magic encountered a duplicate subdirectory ({}) and could not finish.\\nRerun with overwrite=True, or unpack this file in a different directory.\".format(lpath))\n return False\n for f in type_list:\n fname = os.path.join(dir_path, f + '.txt')\n if print_progress == True:\n print('unpacking: ', fname)\n recs, file_type = pmag.magic_read(fname)\n if print_progress == True:\n print(len(recs), ' read in')\n lrecs = pmag.get_dictitem(recs, 'location', loc_name, 'T')\n if len(lrecs) > 0:\n outfile_name = os.path.join(lpath, f + \".txt\")\n pmag.magic_write(outfile_name, lrecs, file_type)\n if print_progress == True:\n print(len(lrecs), ' stored in ', outfile_name)\n return True", "response": "Unpacks a text file downloaded from the MagIC database into separate MagIC-format files, optionally splitting the results into one subdirectory per location."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in 
Python 3, explain what it does\ndef upload_magic2(concat=0, dir_path='.', data_model=None):\n SpecDone = []\n locations = []\n concat = int(concat)\n files_list = [\"er_expeditions.txt\", \"er_locations.txt\", \"er_samples.txt\", \"er_specimens.txt\", \"er_sites.txt\", \"er_ages.txt\", \"er_citations.txt\", \"er_mailinglist.txt\", \"magic_measurements.txt\",\n \"rmag_hysteresis.txt\", \"rmag_anisotropy.txt\", \"rmag_remanence.txt\", \"rmag_results.txt\", \"pmag_specimens.txt\", \"pmag_samples.txt\", \"pmag_sites.txt\", \"pmag_results.txt\", \"pmag_criteria.txt\", \"magic_instruments.txt\"]\n file_names = [os.path.join(dir_path, f) for f in files_list]\n\n # begin the upload process\n up = os.path.join(dir_path, \"upload.txt\")\n if os.path.exists(up):\n os.remove(up)\n RmKeys = ['citation_label', 'compilation', 'calculation_type', 'average_n_lines', 'average_n_planes',\n 'specimen_grade', 'site_vgp_lat', 'site_vgp_lon', 'direction_type', 'specimen_Z',\n 'magic_instrument_codes', 'cooling_rate_corr', 'cooling_rate_mcd', 'anisotropy_atrm_alt',\n 'anisotropy_apar_perc', 'anisotropy_F', 'anisotropy_F_crit', 'specimen_scat',\n 'specimen_gmax', 'specimen_frac', 'site_vadm', 'site_lon', 'site_vdm', 'site_lat',\n 'measurement_chi', 'specimen_k_prime', 'specimen_k_prime_sse', 'external_database_names',\n 'external_database_ids', 'Further Notes', 'Typology', 'Notes (Year/Area/Locus/Level)',\n 'Site', 'Object Number', 'dir_n_specimens']\n print(\"-I- Removing: \", RmKeys)\n CheckDec = ['_dec', '_lon', '_azimuth', 'dip_direction']\n CheckSign = ['specimen_b_beta']\n last = file_names[-1]\n methods, first_file = [], 1\n for File in file_names:\n # read in the data\n Data, file_type = pmag.magic_read(File)\n if (file_type != \"bad_file\") and (file_type != \"empty_file\"):\n print(\"-I- file\", File, \" successfully read in\")\n if len(RmKeys) > 0:\n for rec in Data:\n # remove unwanted keys\n for key in RmKeys:\n if key == 'specimen_Z' and key in list(rec.keys()):\n # change # 
change this to lower case\n rec[key] = 'specimen_z'\n if key in list(rec.keys()):\n del rec[key] # get rid of unwanted keys\n # make sure b_beta is positive\n # ignore blanks\n if 'specimen_b_beta' in list(rec.keys()) and rec['specimen_b_beta'] != \"\":\n if float(rec['specimen_b_beta']) < 0:\n # make sure value is positive\n rec['specimen_b_beta'] = str(\n -float(rec['specimen_b_beta']))\n print('-I- adjusted to positive: ',\n 'specimen_b_beta', rec['specimen_b_beta'])\n # make all declinations/azimuths/longitudes in range\n # 0=>360.\n rec = pmag.adjust_all_to_360(rec)\n if file_type == 'er_locations':\n for rec in Data:\n locations.append(rec['er_location_name'])\n if file_type in ['pmag_samples', 'pmag_sites', 'pmag_specimens']:\n # if there is NO pmag data for specimens (samples/sites),\n # do not try to write it to file\n # (this causes validation errors, elsewise)\n ignore = True\n for rec in Data:\n if ignore == False:\n break\n keys = list(rec.keys())\n exclude_keys = ['er_citation_names', 'er_site_name', 'er_sample_name',\n 'er_location_name', 'er_specimen_names', 'er_sample_names']\n for key in exclude_keys:\n if key in keys:\n keys.remove(key)\n for key in keys:\n if rec[key]:\n ignore = False\n break\n if ignore:\n continue\n\n if file_type == 'er_samples': # check to only upload top priority orientation record!\n NewSamps, Done = [], []\n for rec in Data:\n if rec['er_sample_name'] not in Done:\n orient, az_type = pmag.get_orient(\n Data, rec['er_sample_name'])\n NewSamps.append(orient)\n Done.append(rec['er_sample_name'])\n Data = NewSamps\n print(\n 'only highest priority orientation record from er_samples.txt read in ')\n if file_type == 'er_specimens': # only specimens that have sample names\n NewData, SpecDone = [], []\n for rec in Data:\n if rec['er_sample_name'] in Done:\n NewData.append(rec)\n SpecDone.append(rec['er_specimen_name'])\n else:\n print('no valid sample record found for: ')\n print(rec)\n Data = NewData\n # print 'only 
measurements that have specimen/sample info'\n if file_type == 'magic_measurements': # only measurements that have specimen names\n no_specs = []\n NewData = []\n for rec in Data:\n if rec['er_specimen_name'] in SpecDone:\n NewData.append(rec)\n else:\n print('no valid specimen record found for: ')\n print(rec)\n no_specs.append(rec)\n # print set([record['er_specimen_name'] for record in\n # no_specs])\n Data = NewData\n # write out the data\n if len(Data) > 0:\n if first_file == 1:\n keystring = pmag.first_rec(up, Data[0], file_type)\n first_file = 0\n else:\n keystring = pmag.first_up(up, Data[0], file_type)\n for rec in Data:\n # collect the method codes\n if \"magic_method_codes\" in list(rec.keys()):\n meths = rec[\"magic_method_codes\"].split(':')\n for meth in meths:\n if meth.strip() not in methods:\n if meth.strip() != \"LP-DIR-\":\n methods.append(meth.strip())\n try:\n pmag.putout(up, keystring, rec)\n except IOError:\n print('-W- File input error: slowing down')\n time.sleep(1)\n pmag.putout(up, keystring, rec)\n\n # write out the file separator\n f = open(up, 'a')\n f.write('>>>>>>>>>>\\n')\n f.close()\n print(file_type, 'written to ', up)\n else:\n print('File:', File)\n print(file_type, 'is bad or non-existent - skipping ')\n\n # write out the methods table\n first_rec, MethRec = 1, {}\n for meth in methods:\n MethRec[\"magic_method_code\"] = meth\n if first_rec == 1:\n meth_keys = pmag.first_up(up, MethRec, \"magic_methods\")\n first_rec = 0\n try:\n pmag.putout(up, meth_keys, MethRec)\n except IOError:\n print('-W- File input error: slowing down')\n time.sleep(1)\n pmag.putout(up, meth_keys, MethRec)\n if concat == 1:\n f = open(up, 'a')\n f.write('>>>>>>>>>>\\n')\n f.close()\n\n if os.path.isfile(up):\n from . 
import validate_upload2 as validate_upload\n validated = False\n validated, errors = validate_upload.read_upload(up, data_model)\n\n else:\n print(\"no data found, upload file not created\")\n return False, \"no data found, upload file not created\", None\n\n # rename upload.txt according to location + timestamp\n format_string = \"%d.%b.%Y\"\n if locations:\n location = locations[0].replace(' ', '_')\n new_up = location + '_' + time.strftime(format_string) + '.txt'\n else:\n new_up = 'unknown_location_' + time.strftime(format_string) + '.txt'\n\n new_up = os.path.join(dir_path, new_up)\n if os.path.isfile(new_up):\n fname, extension = os.path.splitext(new_up)\n for i in range(1, 100):\n if os.path.isfile(fname + \"_\" + str(i) + extension):\n continue\n else:\n new_up = fname + \"_\" + str(i) + extension\n break\n os.rename(up, new_up)\n print(\"Finished preparing upload file: {} \".format(new_up))\n if not validated:\n print(\"-W- validation of upload file has failed.\\nYou can still upload {} to MagIC,\\nbut you will need to fix the above errors before your contribution can be activated.\".format(new_up))\n return False, \"Validation of your upload file has failed.\\nYou can still upload {} to MagIC,\\nbut you will need to fix the above errors before your contribution can be activated.\".format(new_up), errors\n return new_up, '', None", "response": "Uploads a magic file to MagIC database."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef upload_magic(concat=False, dir_path='.', dmodel=None, vocab=\"\", contribution=None,\n input_dir_path=\"\"):\n \"\"\"\n Finds all magic files in a given directory, and compiles them into an\n upload.txt file which can be uploaded into the MagIC database.\n Parameters\n ----------\n concat : boolean where True means do concatenate to upload.txt file in dir_path,\n False means write a new file (default is False)\n dir_path : string for input/output directory 
(default \".\")\n dmodel : pmagpy data_model.DataModel object,\n if not provided will be created (default None)\n vocab : pmagpy controlled_vocabularies3.Vocabulary object,\n if not provided will be created (default None)\n contribution : pmagpy contribution_builder.Contribution object, if not provided will be created\n in directory (default None)\n input_dir_path : str, default \"\"\n path for input files if different from output dir_path (default is same)\n\n Returns\n ----------\n tuple of either: (False, error_message, errors, all_failing_items)\n if there was a problem creating/validating the upload file\n or: (filename, '', None, None) if the file creation was fully successful.\n \"\"\"\n input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)\n locations = []\n concat = int(concat)\n dtypes = [\"locations\", \"samples\", \"specimens\", \"sites\", \"ages\", \"measurements\",\n \"criteria\", \"contribution\", \"images\"]\n fnames = [os.path.join(input_dir_path, dtype + \".txt\") for dtype in dtypes]\n file_names = [fname for fname in fnames if os.path.exists(fname)]\n error_fnames = [dtype + \"_errors.txt\" for dtype in dtypes]\n error_full_fnames = [os.path.join(\n dir_path, fname) for fname in error_fnames if os.path.exists(os.path.join(dir_path, fname))]\n print('-I- Removing old error files from {}: {}'.format(dir_path,\n \", \".join(error_fnames)))\n for error in error_full_fnames:\n os.remove(error)\n if isinstance(contribution, cb.Contribution):\n # if contribution object provided, use it\n con = contribution\n for table_name in con.tables:\n con.tables[table_name].write_magic_file()\n elif file_names:\n # otherwise create a new Contribution in dir_path\n con = Contribution(input_dir_path, vocabulary=vocab)\n else:\n # if no contribution is provided and no contribution could be created,\n # you are out of luck\n print(\"-W- No 3.0 files found in your directory: {}, upload file not created\".format(input_dir_path))\n return False, \"no 
3.0 files found, upload file not created\", None, None\n\n # if the contribution has no tables, you can't make an upload file\n if not con.tables.keys():\n print(\"-W- No tables found in your contribution in directory {}, file not created\".format(input_dir_path))\n return False, \"-W- No tables found in your contribution, file not created\", None, None\n\n con.propagate_cols(['core_depth', 'composite_depth'],\n 'sites', 'samples', down=False)\n\n # take out any extra added columns\n # con.remove_non_magic_cols()\n\n # begin the upload process\n up = os.path.join(dir_path, \"upload.txt\")\n if os.path.exists(up):\n os.remove(up)\n RmKeys = ('citation_label', 'compilation', 'calculation_type', 'average_n_lines', 'average_n_planes',\n 'specimen_grade', 'site_vgp_lat', 'site_vgp_lon', 'direction_type', 'specimen_Z',\n 'magic_instrument_codes', 'cooling_rate_corr', 'cooling_rate_mcd', 'anisotropy_atrm_alt',\n 'anisotropy_apar_perc', 'anisotropy_F', 'anisotropy_F_crit', 'specimen_scat',\n 'specimen_gmax', 'specimen_frac', 'site_vadm', 'site_lon', 'site_vdm', 'site_lat',\n 'measurement_chi', 'specimen_k_prime', 'specimen_k_prime_sse', 'external_database_names',\n 'external_database_ids', 'Further Notes', 'Typology', 'Notes (Year/Area/Locus/Level)',\n 'Site', 'Object Number', 'version', 'site_definition')\n #print(\"-I- Removing: \", RmKeys)\n extra_RmKeys = {'measurements': ['sample', 'site', 'location'],\n 'specimens': ['site', 'location', 'age', 'age_unit', 'age_high',\n 'age_low', 'age_sigma', 'specimen_core_depth'],\n 'samples': ['location', 'age', 'age_unit', 'age_high', 'age_low',\n 'age_sigma', 'core_depth', 'composite_depth'],\n 'sites': ['texture', 'azimuth', 'azimuth_dec_correction', 'dip',\n 'orientation_quality', 'sample_alternatives', 'timestamp'],\n 'ages': ['level']}\n\n failing = []\n all_failing_items = {}\n if not dmodel:\n dmodel = data_model.DataModel()\n last_file_type = sorted(con.tables.keys())[-1]\n for file_type in sorted(con.tables.keys()):\n 
container = con.tables[file_type]\n # format all float values to have correct number of decimals\n container.all_to_str()\n # make sure all nans and Nones are changed to ''\n container.df = container.df.fillna('')\n df = container.df\n if len(df):\n print(\"-I- {} file successfully read in\".format(file_type))\n # make some adjustments to clean up data\n # drop non MagIC keys\n DropKeys = list(RmKeys) + extra_RmKeys.get(file_type, [])\n DropKeys = set(DropKeys).intersection(df.columns)\n if DropKeys:\n print(\n '-I- dropping these columns: {} from the {} table'.format(', '.join(DropKeys), file_type))\n df.drop(DropKeys, axis=1, inplace=True)\n container.df = df\n unrecognized_cols = container.get_non_magic_cols()\n if unrecognized_cols:\n print('-W- {} table still has some unrecognized columns: {}'.format(file_type.title(),\n \", \".join(unrecognized_cols)))\n # make sure int_b_beta is positive\n if 'int_b_beta' in df.columns:\n # get rid of empty strings\n df = df.replace(r'\\s+( +\\.)|#', np.nan,\n regex=True).replace('', np.nan)\n try:\n df['int_b_beta'] = df['int_b_beta'].astype(\n float).apply(abs)\n except ValueError:\n print(\"-W- Non numeric values found in int_b_beta column.\\n Could not apply absolute value.\")\n # make all declinations/azimuths/longitudes in range 0=>360.\n relevant_cols = val_up3.get_degree_cols(df)\n for col in relevant_cols:\n df[col] = df[col].apply(pmag.adjust_val_to_360)\n # get list of location names\n if file_type == 'locations':\n locations = sorted(df['location'].unique())\n # LJ: need to deal with this\n # use only highest priority orientation -- not sure how this works\n elif file_type == 'samples':\n # orient,az_type=pmag.get_orient(Data,rec['sample'])\n pass\n # include only specimen records with samples\n elif file_type == 'specimens':\n df = df[df['sample'].notnull()]\n if 'samples' in con.tables:\n samp_df = con.tables['samples'].df\n df = df[df['sample'].isin(samp_df.index.unique())]\n # include only measurements with specimens\n elif 
file_type == 'measurements':\n df = df[df['specimen'].notnull()]\n if 'specimens' in con.tables:\n spec_df = con.tables['specimens'].df\n df = df[df['specimen'].isin(spec_df.index.unique())]\n # run validations\n res = val_up3.validate_table(\n con, file_type, output_dir=dir_path) # , verbose=True)\n if res:\n dtype, bad_rows, bad_cols, missing_cols, missing_groups, failing_items = res\n if dtype not in all_failing_items:\n all_failing_items[dtype] = {}\n all_failing_items[dtype][\"rows\"] = failing_items\n all_failing_items[dtype][\"missing_columns\"] = missing_cols\n all_failing_items[dtype][\"missing_groups\"] = missing_groups\n failing.append(dtype)\n # write out the data\n if len(df):\n container.write_magic_file(up, append=True, multi_type=True)\n # write out the file separator\n if last_file_type != file_type:\n f = open(up, 'a')\n f.write('>>>>>>>>>>\\n')\n f.close()\n print(\"-I-\", file_type, 'written to ', up)\n else: # last file, no newline at end of file\n #f = open(up, 'a')\n # f.write('>>>>>>>>>>')\n # f.close()\n print(\"-I-\", file_type, 'written to ', up)\n # if there was no understandable data\n else:\n print(file_type, 'is bad or non-existent - skipping ')\n # add to existing file\n if concat:\n f = open(up, 'a')\n f.write('>>>>>>>>>>\\n')\n f.close()\n\n if not os.path.isfile(up):\n print(\"no data found, upload file not created\")\n return False, \"no data found, upload file not created\", None, None\n\n # rename upload.txt according to location + timestamp\n format_string = \"%d.%b.%Y\"\n if locations:\n locs = set(locations)\n locs = sorted(locs)[:3]\n #location = locations[0].replace(' ', '_')\n try:\n locs = [loc.replace(' ', '-') for loc in locs]\n except AttributeError:\n locs = [\"unknown_location\"]\n location = \"_\".join(locs)\n new_up = location + '_' + time.strftime(format_string) + '.txt'\n else:\n new_up = 'unknown_location_' + time.strftime(format_string) + '.txt'\n\n new_up = os.path.join(dir_path, new_up)\n if 
os.path.isfile(new_up):\n fname, extension = os.path.splitext(new_up)\n for i in range(1, 100):\n if os.path.isfile(fname + \"_\" + str(i) + extension):\n continue\n else:\n new_up = fname + \"_\" + str(i) + extension\n break\n if not up:\n print(\"-W- Could not create an upload file\")\n return False, \"Could not create an upload file\", None, None\n os.rename(up, new_up)\n print(\"Finished preparing upload file: {} \".format(new_up))\n if failing:\n print(\"-W- These tables have errors: {}\".format(\", \".join(failing)))\n print(\"-W- validation of upload file has failed.\\nYou can still upload {} to MagIC,\\nbut you will need to fix the above errors before your contribution can be activated.\".format(new_up))\n return False, \"Validation of your upload file has failed.\\nYou can still upload {} to MagIC,\\nbut you will need to fix the above errors before your contribution can be activated.\".format(new_up), failing, all_failing_items\n else:\n print(\"-I- Your file has passed validation. You should be able to upload it to the MagIC database without trouble!\")\n return new_up, '', None, None", "response": "Compiles and validates all MagIC tables in a directory, returning the path of the new upload.
txt file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef specimens_results_magic(infile='pmag_specimens.txt', measfile='magic_measurements.txt', sampfile='er_samples.txt', sitefile='er_sites.txt', agefile='er_ages.txt', specout='er_specimens.txt', sampout='pmag_samples.txt', siteout='pmag_sites.txt', resout='pmag_results.txt', critout='pmag_criteria.txt', instout='magic_instruments.txt', plotsites=False, fmt='svg', dir_path='.', cors=[], priorities=['DA-AC-ARM', 'DA-AC-TRM'], coord='g', user='', vgps_level='site', do_site_intensity=True, DefaultAge=[\"none\"], avg_directions_by_sample=False, avg_intensities_by_sample=False, avg_all_components=False, avg_by_polarity=False, skip_directions=False, skip_intensities=False, use_sample_latitude=False, use_paleolatitude=False, use_criteria='default'):\n # initialize some variables\n plotsites = False # cannot use draw_figs from within ipmag\n Comps = [] # list of components\n version_num = pmag.get_version()\n args = sys.argv\n model_lat_file = \"\"\n Dcrit, Icrit, nocrit = 0, 0, 0\n corrections = []\n nocorrection = ['DA-NL', 'DA-AC', 'DA-CR']\n\n # do some data adjustments\n for cor in cors:\n nocorrection.remove('DA-' + cor)\n corrections.append('DA-' + cor)\n\n for p in priorities:\n if not p.startswith('DA-AC-'):\n p = 'DA-AC-' + p\n\n # translate coord into coords\n if coord == 's':\n coords = ['-1']\n if coord == 'g':\n coords = ['0']\n if coord == 't':\n coords = ['100']\n if coord == 'b':\n coords = ['0', '100']\n\n if vgps_level == 'sample':\n vgps = 1 # save sample level VGPS/VADMs\n else:\n vgps = 0 # site level\n\n if do_site_intensity:\n nositeints = 0\n else:\n nositeints = 1\n\n # chagne these all to True/False instead of 1/0\n\n if not skip_intensities:\n # set model lat and\n if use_sample_latitude and use_paleolatitude:\n print(\"you should set a paleolatitude file OR use present day lat - not both\")\n return False\n elif 
use_sample_latitude:\n get_model_lat = 1\n elif use_paleolatitude:\n get_model_lat = 2\n try:\n model_lat_file = dir_path + '/' + args[ind + 1]\n get_model_lat = 2\n mlat = open(model_lat_file, 'r')\n ModelLats = []\n for line in mlat.readlines():\n ModelLat = {}\n tmp = line.split()\n ModelLat[\"er_site_name\"] = tmp[0]\n ModelLat[\"site_model_lat\"] = tmp[1]\n ModelLat[\"er_sample_name\"] = tmp[0]\n ModelLat[\"sample_lat\"] = tmp[1]\n ModelLats.append(ModelLat)\n mlat.close()\n except:\n print(\"use_paleolatitude option requires a valid paleolatitude file\")\n else:\n get_model_lat = 0 # skips VADM calculation entirely\n\n if plotsites and not skip_directions: # plot by site - set up plot window\n EQ = {}\n EQ['eqarea'] = 1\n # define figure 1 as equal area projection\n pmagplotlib.plot_init(EQ['eqarea'], 5, 5)\n # I don't know why this has to be here, but otherwise the first plot\n # never plots...\n pmagplotlib.plot_net(EQ['eqarea'])\n pmagplotlib.draw_figs(EQ)\n\n infile = os.path.join(dir_path, infile)\n measfile = os.path.join(dir_path, measfile)\n instout = os.path.join(dir_path, instout)\n sampfile = os.path.join(dir_path, sampfile)\n sitefile = os.path.join(dir_path, sitefile)\n agefile = os.path.join(dir_path, agefile)\n specout = os.path.join(dir_path, specout)\n sampout = os.path.join(dir_path, sampout)\n siteout = os.path.join(dir_path, siteout)\n resout = os.path.join(dir_path, resout)\n critout = os.path.join(dir_path, critout)\n\n if use_criteria == 'none':\n Dcrit, Icrit, nocrit = 1, 1, 1 # no selection criteria\n crit_data = pmag.default_criteria(nocrit)\n elif use_criteria == 'default':\n crit_data = pmag.default_criteria(nocrit) # use default criteria\n elif use_criteria == 'existing':\n crit_data, file_type = pmag.magic_read(\n critout) # use pmag_criteria file\n print(\"Acceptance criteria read in from \", critout)\n accept = {}\n for critrec in crit_data:\n for key in list(critrec.keys()):\n # need to migrate specimen_dang to 
specimen_int_dang for intensity\n # data using old format\n if 'IE-SPEC' in list(critrec.keys()) and 'specimen_dang' in list(critrec.keys()) and 'specimen_int_dang' not in list(critrec.keys()):\n critrec['specimen_int_dang'] = critrec['specimen_dang']\n del critrec['specimen_dang']\n# need to get rid of ron shaars sample_int_sigma_uT\n if 'sample_int_sigma_uT' in list(critrec.keys()):\n critrec['sample_int_sigma'] = '%10.3e' % (\n eval(critrec['sample_int_sigma_uT']) * 1e-6)\n if key not in list(accept.keys()) and critrec[key] != '':\n accept[key] = critrec[key]\n\n if use_criteria == 'default':\n pmag.magic_write(critout, [accept], 'pmag_criteria')\n print(\"\\n Pmag Criteria stored in \", critout, '\\n')\n\n# now we're done slow dancing\n\n # read in site data - has the lats and lons\n SiteNFO, file_type = pmag.magic_read(sitefile)\n # read in site data - has the lats and lons\n SampNFO, file_type = pmag.magic_read(sampfile)\n # find all the sites with height info.\n height_nfo = pmag.get_dictitem(SiteNFO, 'site_height', '', 'F')\n if agefile:\n AgeNFO, file_type = pmag.magic_read(\n agefile) # read in the age information\n # read in specimen interpretations\n Data, file_type = pmag.magic_read(infile)\n # retrieve specimens with intensity data\n IntData = pmag.get_dictitem(Data, 'specimen_int', '', 'F')\n comment, orient = \"\", []\n samples, sites = [], []\n for rec in Data: # run through the data filling in missing keys and finding all components, coordinates available\n # fill in missing fields, collect unique sample and site names\n if 'er_sample_name' not in list(rec.keys()):\n rec['er_sample_name'] = \"\"\n elif rec['er_sample_name'] not in samples:\n samples.append(rec['er_sample_name'])\n if 'er_site_name' not in list(rec.keys()):\n rec['er_site_name'] = \"\"\n elif rec['er_site_name'] not in sites:\n sites.append(rec['er_site_name'])\n if 'specimen_int' not in list(rec.keys()):\n rec['specimen_int'] = ''\n if 'specimen_comp_name' not in list(rec.keys()) 
or rec['specimen_comp_name'] == \"\":\n rec['specimen_comp_name'] = 'A'\n if rec['specimen_comp_name'] not in Comps:\n Comps.append(rec['specimen_comp_name'])\n rec['specimen_tilt_correction'] = rec['specimen_tilt_correction'].strip(\n '\\n')\n if \"specimen_tilt_correction\" not in list(rec.keys()):\n rec[\"specimen_tilt_correction\"] = \"-1\" # assume sample coordinates\n if rec[\"specimen_tilt_correction\"] not in orient:\n # collect available coordinate systems\n orient.append(rec[\"specimen_tilt_correction\"])\n if \"specimen_direction_type\" not in list(rec.keys()):\n # assume direction is line - not plane\n rec[\"specimen_direction_type\"] = 'l'\n if \"specimen_dec\" not in list(rec.keys()):\n # if no declination, set direction type to blank\n rec[\"specimen_direction_type\"] = ''\n if \"specimen_n\" not in list(rec.keys()):\n rec[\"specimen_n\"] = '' # put in n\n if \"specimen_alpha95\" not in list(rec.keys()):\n rec[\"specimen_alpha95\"] = '' # put in alpha95\n if \"magic_method_codes\" not in list(rec.keys()):\n rec[\"magic_method_codes\"] = ''\n # start parsing data into SpecDirs, SpecPlanes, SpecInts\n SpecInts, SpecDirs, SpecPlanes = [], [], []\n samples.sort() # get sorted list of samples and sites\n sites.sort()\n if not skip_intensities: # don't skip intensities\n # retrieve specimens with intensity data\n IntData = pmag.get_dictitem(Data, 'specimen_int', '', 'F')\n if nocrit == 0: # use selection criteria\n for rec in IntData: # do selection criteria\n kill = pmag.grade(rec, accept, 'specimen_int')\n if len(kill) == 0:\n # intensity record to be included in sample, site\n # calculations\n SpecInts.append(rec)\n else:\n SpecInts = IntData[:] # take everything - no selection criteria\n# check for required data adjustments\n if len(corrections) > 0 and len(SpecInts) > 0:\n for cor in corrections:\n # only take specimens with the required corrections\n SpecInts = pmag.get_dictitem(\n SpecInts, 'magic_method_codes', cor, 'has')\n if len(nocorrection) > 
0 and len(SpecInts) > 0:\n for cor in nocorrection:\n # exclude the corrections not specified for inclusion\n SpecInts = pmag.get_dictitem(\n SpecInts, 'magic_method_codes', cor, 'not')\n# take top priority specimen of its name in remaining specimens (only one\n# per customer)\n PrioritySpecInts = []\n specimens = pmag.get_specs(SpecInts) # get list of uniq specimen names\n for spec in specimens:\n # all the records for this specimen\n ThisSpecRecs = pmag.get_dictitem(\n SpecInts, 'er_specimen_name', spec, 'T')\n if len(ThisSpecRecs) == 1:\n PrioritySpecInts.append(ThisSpecRecs[0])\n elif len(ThisSpecRecs) > 1: # more than one\n prec = []\n for p in priorities:\n # all the records for this specimen\n ThisSpecRecs = pmag.get_dictitem(\n SpecInts, 'magic_method_codes', p, 'has')\n if len(ThisSpecRecs) > 0:\n prec.append(ThisSpecRecs[0])\n PrioritySpecInts.append(prec[0]) # take the best one\n SpecInts = PrioritySpecInts # this has the first specimen record\n if not skip_directions: # don't skip directions\n # retrieve specimens with directed lines and planes\n AllDirs = pmag.get_dictitem(Data, 'specimen_direction_type', '', 'F')\n # get all specimens with specimen_n information\n Ns = pmag.get_dictitem(AllDirs, 'specimen_n', '', 'F')\n if nocrit != 1: # use selection criteria\n for rec in Ns: # look through everything with specimen_n for \"good\" data\n kill = pmag.grade(rec, accept, 'specimen_dir')\n if len(kill) == 0: # nothing killed it\n SpecDirs.append(rec)\n else: # no criteria\n SpecDirs = AllDirs[:] # take them all\n# SpecDirs is now the list of all specimen directions (lines and planes)\n# that pass muster\n # list of all sample data and list of those that pass the DE-SAMP criteria\n PmagSamps, SampDirs = [], []\n PmagSites, PmagResults = [], [] # list of all site data and selected results\n SampInts = []\n for samp in samples: # run through the sample names\n if avg_directions_by_sample: # average by sample if desired\n # get all the directional data for 
this sample\n SampDir = pmag.get_dictitem(SpecDirs, 'er_sample_name', samp, 'T')\n if len(SampDir) > 0: # there are some directions\n for coord in coords: # step through desired coordinate systems\n # get all the directions for this sample\n CoordDir = pmag.get_dictitem(\n SampDir, 'specimen_tilt_correction', coord, 'T')\n if len(CoordDir) > 0: # there are some with this coordinate system\n if not avg_all_components: # look component by component\n for comp in Comps:\n # get all directions from this component\n CompDir = pmag.get_dictitem(\n CoordDir, 'specimen_comp_name', comp, 'T')\n if len(CompDir) > 0: # there are some\n # get a sample average from all specimens\n PmagSampRec = pmag.lnpbykey(\n CompDir, 'sample', 'specimen')\n # decorate the sample record\n PmagSampRec[\"er_location_name\"] = CompDir[0]['er_location_name']\n PmagSampRec[\"er_site_name\"] = CompDir[0]['er_site_name']\n PmagSampRec[\"er_sample_name\"] = samp\n PmagSampRec[\"er_citation_names\"] = \"This study\"\n PmagSampRec[\"er_analyst_mail_names\"] = user\n PmagSampRec['magic_software_packages'] = version_num\n if CompDir[0]['specimen_flag'] == 'g':\n PmagSampRec['sample_flag'] = 'g'\n else:\n PmagSampRec['sample_flag'] = 'b'\n if nocrit != 1:\n PmagSampRec['pmag_criteria_codes'] = \"ACCEPT\"\n if agefile != \"\":\n PmagSampRec = pmag.get_age(\n PmagSampRec, \"er_site_name\", \"sample_inferred_\", AgeNFO, DefaultAge)\n site_height = pmag.get_dictitem(\n height_nfo, 'er_site_name', PmagSampRec['er_site_name'], 'T')\n if len(site_height) > 0:\n # add in height if available\n PmagSampRec[\"sample_height\"] = site_height[0]['site_height']\n PmagSampRec['sample_comp_name'] = comp\n PmagSampRec['sample_tilt_correction'] = coord\n PmagSampRec['er_specimen_names'] = pmag.get_list(\n CompDir, 'er_specimen_name') # get a list of the specimen names used\n PmagSampRec['magic_method_codes'] = pmag.get_list(\n CompDir, 'magic_method_codes') # get a list of the methods used\n if nocrit != 1: # apply 
selection criteria\n kill = pmag.grade(\n PmagSampRec, accept, 'sample_dir')\n else:\n kill = []\n if len(kill) == 0:\n SampDirs.append(PmagSampRec)\n if vgps == 1: # if sample level VGP info desired, do that now\n PmagResRec = pmag.getsampVGP(\n PmagSampRec, SiteNFO)\n if PmagResRec != \"\":\n PmagResults.append(PmagResRec)\n# print(PmagSampRec)\n PmagSamps.append(PmagSampRec)\n if avg_all_components: # average all components together basically same as above\n PmagSampRec = pmag.lnpbykey(\n CoordDir, 'sample', 'specimen')\n PmagSampRec[\"er_location_name\"] = CoordDir[0]['er_location_name']\n PmagSampRec[\"er_site_name\"] = CoordDir[0]['er_site_name']\n PmagSampRec[\"er_sample_name\"] = samp\n PmagSampRec[\"er_citation_names\"] = \"This study\"\n PmagSampRec[\"er_analyst_mail_names\"] = user\n PmagSampRec['magic_software_packages'] = version_num\n if all(i['specimen_flag'] == 'g' for i in CoordDir):\n PmagSampRec['sample_flag'] = 'g'\n else:\n PmagSampRec['sample_flag'] = 'b'\n if nocrit != 1:\n PmagSampRec['pmag_criteria_codes'] = \"\"\n if agefile != \"\":\n PmagSampRec = pmag.get_age(\n PmagSampRec, \"er_site_name\", \"sample_inferred_\", AgeNFO, DefaultAge)\n site_height = pmag.get_dictitem(\n height_nfo, 'er_site_name', site, 'T')\n if len(site_height) > 0:\n # add in height if available\n PmagSampRec[\"sample_height\"] = site_height[0]['site_height']\n PmagSampRec['sample_tilt_correction'] = coord\n PmagSampRec['sample_comp_name'] = pmag.get_list(\n CoordDir, 'specimen_comp_name') # get components used\n PmagSampRec['er_specimen_names'] = pmag.get_list(\n CoordDir, 'er_specimen_name') # get specimne names averaged\n PmagSampRec['magic_method_codes'] = pmag.get_list(\n CoordDir, 'magic_method_codes') # assemble method codes\n if nocrit != 1: # apply selection criteria\n kill = pmag.grade(\n PmagSampRec, accept, 'sample_dir')\n if len(kill) == 0: # passes the mustard\n SampDirs.append(PmagSampRec)\n if vgps == 1:\n PmagResRec = pmag.getsampVGP(\n PmagSampRec, 
SiteNFO)\n if PmagResRec != \"\":\n PmagResults.append(PmagResRec)\n else: # take everything\n SampDirs.append(PmagSampRec)\n if vgps == 1:\n PmagResRec = pmag.getsampVGP(\n PmagSampRec, SiteNFO)\n if PmagResRec != \"\":\n PmagResults.append(PmagResRec)\n PmagSamps.append(PmagSampRec)\n if avg_intensities_by_sample: # average by sample if desired\n # get all the intensity data for this sample\n SampI = pmag.get_dictitem(SpecInts, 'er_sample_name', samp, 'T')\n if len(SampI) > 0: # there are some\n # get average intensity stuff\n PmagSampRec = pmag.average_int(SampI, 'specimen', 'sample')\n # decorate sample record\n PmagSampRec[\"sample_description\"] = \"sample intensity\"\n PmagSampRec[\"sample_direction_type\"] = \"\"\n PmagSampRec['er_site_name'] = SampI[0][\"er_site_name\"]\n PmagSampRec['er_sample_name'] = samp\n PmagSampRec['er_location_name'] = SampI[0][\"er_location_name\"]\n PmagSampRec[\"er_citation_names\"] = \"This study\"\n PmagSampRec[\"er_analyst_mail_names\"] = user\n if agefile != \"\":\n PmagSampRec = pmag.get_age(\n PmagSampRec, \"er_site_name\", \"sample_inferred_\", AgeNFO, DefaultAge)\n site_height = pmag.get_dictitem(\n height_nfo, 'er_site_name', PmagSampRec['er_site_name'], 'T')\n if len(site_height) > 0:\n # add in height if available\n PmagSampRec[\"sample_height\"] = site_height[0]['site_height']\n PmagSampRec['er_specimen_names'] = pmag.get_list(\n SampI, 'er_specimen_name')\n PmagSampRec['magic_method_codes'] = pmag.get_list(\n SampI, 'magic_method_codes')\n if nocrit != 1: # apply criteria!\n kill = pmag.grade(PmagSampRec, accept, 'sample_int')\n if len(kill) == 0:\n PmagSampRec['pmag_criteria_codes'] = \"ACCEPT\"\n SampInts.append(PmagSampRec)\n PmagSamps.append(PmagSampRec)\n else:\n PmagSampRec = {} # sample rejected\n else: # no criteria\n SampInts.append(PmagSampRec)\n PmagSamps.append(PmagSampRec)\n PmagSampRec['pmag_criteria_codes'] = \"\"\n if vgps == 1 and get_model_lat != 0 and PmagSampRec != {}:\n if get_model_lat == 1: # 
use sample latitude\n PmagResRec = pmag.getsampVDM(PmagSampRec, SampNFO)\n # get rid of the model lat key\n del(PmagResRec['model_lat'])\n elif get_model_lat == 2: # use model latitude\n PmagResRec = pmag.getsampVDM(PmagSampRec, ModelLats)\n if PmagResRec != {}:\n PmagResRec['magic_method_codes'] = PmagResRec['magic_method_codes'] + \":IE-MLAT\"\n if PmagResRec != {}:\n PmagResRec['er_specimen_names'] = PmagSampRec['er_specimen_names']\n PmagResRec['er_sample_names'] = PmagSampRec['er_sample_name']\n PmagResRec['pmag_criteria_codes'] = 'ACCEPT'\n PmagResRec['average_int_sigma_perc'] = PmagSampRec['sample_int_sigma_perc']\n PmagResRec['average_int_sigma'] = PmagSampRec['sample_int_sigma']\n PmagResRec['average_int_n'] = PmagSampRec['sample_int_n']\n PmagResRec['vadm_n'] = PmagSampRec['sample_int_n']\n PmagResRec['data_type'] = 'i'\n PmagResults.append(PmagResRec)\n if len(PmagSamps) > 0:\n # fill in missing keys from different types of records\n TmpSamps, keylist = pmag.fillkeys(PmagSamps)\n # save in sample output file\n pmag.magic_write(sampout, TmpSamps, 'pmag_samples')\n print(' sample averages written to ', sampout)\n\n#\n# create site averages from specimens or samples as specified\n#\n for site in sites:\n for coord in coords:\n if not avg_directions_by_sample:\n key, dirlist = 'specimen', SpecDirs # if specimen averages at site level desired\n if avg_directions_by_sample:\n key, dirlist = 'sample', SampDirs # if sample averages at site level desired\n # get all the sites with directions\n tmp = pmag.get_dictitem(dirlist, 'er_site_name', site, 'T')\n # use only the last coordinate if avg_all_components==False\n tmp1 = pmag.get_dictitem(tmp, key + '_tilt_correction', coord, 'T')\n # fish out site information (lat/lon, etc.)\n sd = pmag.get_dictitem(SiteNFO, 'er_site_name', site, 'T')\n if len(sd) > 0:\n sitedat = sd[0]\n if not avg_all_components: # do component wise averaging\n for comp in Comps:\n # get all components comp\n siteD = pmag.get_dictitem(\n 
tmp1, key + '_comp_name', comp, 'T')\n # remove bad data from means\n quality_siteD = []\n # remove any records for which specimen_flag or sample_flag are 'b'\n # assume 'g' if flag is not provided\n for rec in siteD:\n spec_quality = rec.get('specimen_flag', 'g')\n samp_quality = rec.get('sample_flag', 'g')\n if (spec_quality == 'g') and (samp_quality == 'g'):\n quality_siteD.append(rec)\n siteD = quality_siteD\n if len(siteD) > 0: # there are some for this site and component name\n # get an average for this site\n PmagSiteRec = pmag.lnpbykey(siteD, 'site', key)\n # decorate the site record\n PmagSiteRec['site_comp_name'] = comp\n PmagSiteRec[\"er_location_name\"] = siteD[0]['er_location_name']\n PmagSiteRec[\"er_site_name\"] = siteD[0]['er_site_name']\n PmagSiteRec['site_tilt_correction'] = coord\n PmagSiteRec['site_comp_name'] = pmag.get_list(\n siteD, key + '_comp_name')\n if avg_directions_by_sample:\n PmagSiteRec['er_sample_names'] = pmag.get_list(\n siteD, 'er_sample_name')\n else:\n PmagSiteRec['er_specimen_names'] = pmag.get_list(\n siteD, 'er_specimen_name')\n # determine the demagnetization code (DC3,4 or 5) for this site\n AFnum = len(pmag.get_dictitem(\n siteD, 'magic_method_codes', 'LP-DIR-AF', 'has'))\n Tnum = len(pmag.get_dictitem(\n siteD, 'magic_method_codes', 'LP-DIR-T', 'has'))\n DC = 3\n if AFnum > 0:\n DC += 1\n if Tnum > 0:\n DC += 1\n PmagSiteRec['magic_method_codes'] = pmag.get_list(\n siteD, 'magic_method_codes') + ':' + 'LP-DC' + str(DC)\n PmagSiteRec['magic_method_codes'].strip(\":\")\n if plotsites:\n print(PmagSiteRec['er_site_name'])\n # plot and list the data\n pmagplotlib.plot_site(\n EQ['eqarea'], PmagSiteRec, siteD, key)\n pmagplotlib.draw_figs(EQ)\n PmagSites.append(PmagSiteRec)\n else: # last component only\n # get the last orientation system specified\n siteD = tmp1[:]\n if len(siteD) > 0: # there are some\n # get the average for this site\n PmagSiteRec = pmag.lnpbykey(siteD, 'site', key)\n # decorate the record\n 
PmagSiteRec[\"er_location_name\"] = siteD[0]['er_location_name']\n PmagSiteRec[\"er_site_name\"] = siteD[0]['er_site_name']\n PmagSiteRec['site_comp_name'] = comp\n PmagSiteRec['site_tilt_correction'] = coord\n PmagSiteRec['site_comp_name'] = pmag.get_list(\n siteD, key + '_comp_name')\n PmagSiteRec['er_specimen_names'] = pmag.get_list(\n siteD, 'er_specimen_name')\n PmagSiteRec['er_sample_names'] = pmag.get_list(\n siteD, 'er_sample_name')\n AFnum = len(pmag.get_dictitem(\n siteD, 'magic_method_codes', 'LP-DIR-AF', 'has'))\n Tnum = len(pmag.get_dictitem(\n siteD, 'magic_method_codes', 'LP-DIR-T', 'has'))\n DC = 3\n if AFnum > 0:\n DC += 1\n if Tnum > 0:\n DC += 1\n PmagSiteRec['magic_method_codes'] = pmag.get_list(\n siteD, 'magic_method_codes') + ':' + 'LP-DC' + str(DC)\n PmagSiteRec['magic_method_codes'].strip(\":\")\n if not avg_directions_by_sample:\n PmagSiteRec['site_comp_name'] = pmag.get_list(\n siteD, key + '_comp_name')\n if plotsites:\n pmagplotlib.plot_site(\n EQ['eqarea'], PmagSiteRec, siteD, key)\n pmagplotlib.draw_figs(EQ)\n PmagSites.append(PmagSiteRec)\n else:\n print('site information not found in er_sites for site, ',\n site, ' site will be skipped')\n for PmagSiteRec in PmagSites: # now decorate each dictionary some more, and calculate VGPs etc. 
for results table\n PmagSiteRec[\"er_citation_names\"] = \"This study\"\n PmagSiteRec[\"er_analyst_mail_names\"] = user\n PmagSiteRec['magic_software_packages'] = version_num\n if agefile != \"\":\n PmagSiteRec = pmag.get_age(\n PmagSiteRec, \"er_site_name\", \"site_inferred_\", AgeNFO, DefaultAge)\n PmagSiteRec['pmag_criteria_codes'] = 'ACCEPT'\n if 'site_n_lines' in list(PmagSiteRec.keys()) and 'site_n_planes' in list(PmagSiteRec.keys()) and PmagSiteRec['site_n_lines'] != \"\" and PmagSiteRec['site_n_planes'] != \"\":\n if int(PmagSiteRec[\"site_n_planes\"]) > 0:\n PmagSiteRec[\"magic_method_codes\"] = PmagSiteRec['magic_method_codes'] + \":DE-FM-LP\"\n elif int(PmagSiteRec[\"site_n_lines\"]) > 2:\n PmagSiteRec[\"magic_method_codes\"] = PmagSiteRec['magic_method_codes'] + \":DE-FM\"\n kill = pmag.grade(PmagSiteRec, accept, 'site_dir')\n if len(kill) == 0:\n PmagResRec = {} # set up dictionary for the pmag_results table entry\n PmagResRec['data_type'] = 'i' # decorate it a bit\n PmagResRec['magic_software_packages'] = version_num\n PmagSiteRec['site_description'] = 'Site direction included in results table'\n PmagResRec['pmag_criteria_codes'] = 'ACCEPT'\n dec = float(PmagSiteRec[\"site_dec\"])\n inc = float(PmagSiteRec[\"site_inc\"])\n if 'site_alpha95' in list(PmagSiteRec.keys()) and PmagSiteRec['site_alpha95'] != \"\":\n a95 = float(PmagSiteRec[\"site_alpha95\"])\n else:\n a95 = 180.\n sitedat = pmag.get_dictitem(SiteNFO, 'er_site_name', PmagSiteRec['er_site_name'], 'T')[\n 0] # fish out site information (lat/lon, etc.)\n lat = float(sitedat['site_lat'])\n lon = float(sitedat['site_lon'])\n plon, plat, dp, dm = pmag.dia_vgp(\n dec, inc, a95, lat, lon) # get the VGP for this site\n if PmagSiteRec['site_tilt_correction'] == '-1':\n C = ' (spec coord) '\n if PmagSiteRec['site_tilt_correction'] == '0':\n C = ' (geog. coord) '\n if PmagSiteRec['site_tilt_correction'] == '100':\n C = ' (strat. 
coord) '\n PmagResRec[\"pmag_result_name\"] = \"VGP Site: \" + \\\n PmagSiteRec[\"er_site_name\"] # decorate some more\n PmagResRec[\"result_description\"] = \"Site VGP, coord system = \" + \\\n str(coord) + ' component: ' + comp\n PmagResRec['er_site_names'] = PmagSiteRec['er_site_name']\n PmagResRec['pmag_criteria_codes'] = 'ACCEPT'\n PmagResRec['er_citation_names'] = 'This study'\n PmagResRec['er_analyst_mail_names'] = user\n PmagResRec[\"er_location_names\"] = PmagSiteRec[\"er_location_name\"]\n if avg_directions_by_sample:\n PmagResRec[\"er_sample_names\"] = PmagSiteRec[\"er_sample_names\"]\n else:\n PmagResRec[\"er_specimen_names\"] = PmagSiteRec[\"er_specimen_names\"]\n PmagResRec[\"tilt_correction\"] = PmagSiteRec['site_tilt_correction']\n PmagResRec[\"pole_comp_name\"] = PmagSiteRec['site_comp_name']\n PmagResRec[\"average_dec\"] = PmagSiteRec[\"site_dec\"]\n PmagResRec[\"average_inc\"] = PmagSiteRec[\"site_inc\"]\n PmagResRec[\"average_alpha95\"] = PmagSiteRec[\"site_alpha95\"]\n PmagResRec[\"average_n\"] = PmagSiteRec[\"site_n\"]\n PmagResRec[\"average_n_lines\"] = PmagSiteRec[\"site_n_lines\"]\n PmagResRec[\"average_n_planes\"] = PmagSiteRec[\"site_n_planes\"]\n PmagResRec[\"vgp_n\"] = PmagSiteRec[\"site_n\"]\n PmagResRec[\"average_k\"] = PmagSiteRec[\"site_k\"]\n PmagResRec[\"average_r\"] = PmagSiteRec[\"site_r\"]\n PmagResRec[\"average_lat\"] = '%10.4f ' % (lat)\n PmagResRec[\"average_lon\"] = '%10.4f ' % (lon)\n if agefile != \"\":\n PmagResRec = pmag.get_age(\n PmagResRec, \"er_site_names\", \"average_\", AgeNFO, DefaultAge)\n site_height = pmag.get_dictitem(\n height_nfo, 'er_site_name', site, 'T')\n if len(site_height) > 0:\n PmagResRec[\"average_height\"] = site_height[0]['site_height']\n PmagResRec[\"vgp_lat\"] = '%7.1f ' % (plat)\n PmagResRec[\"vgp_lon\"] = '%7.1f ' % (plon)\n PmagResRec[\"vgp_dp\"] = '%7.1f ' % (dp)\n PmagResRec[\"vgp_dm\"] = '%7.1f ' % (dm)\n PmagResRec[\"magic_method_codes\"] = PmagSiteRec[\"magic_method_codes\"]\n if '0' in 
PmagSiteRec['site_tilt_correction'] and \"DA-DIR-GEO\" not in PmagSiteRec['magic_method_codes']:\n PmagSiteRec['magic_method_codes'] = PmagSiteRec['magic_method_codes'] + \":DA-DIR-GEO\"\n if '100' in PmagSiteRec['site_tilt_correction'] and \"DA-DIR-TILT\" not in PmagSiteRec['magic_method_codes']:\n PmagSiteRec['magic_method_codes'] = PmagSiteRec['magic_method_codes'] + \":DA-DIR-TILT\"\n PmagSiteRec['site_polarity'] = \"\"\n if avg_by_polarity: # assign polarity based on angle of pole lat to spin axis - may want to re-think this sometime\n angle = pmag.angle([0, 0], [0, (90 - plat)])\n if angle <= 55.:\n PmagSiteRec[\"site_polarity\"] = 'n'\n if angle > 55. and angle < 125.:\n PmagSiteRec[\"site_polarity\"] = 't'\n if angle >= 125.:\n PmagSiteRec[\"site_polarity\"] = 'r'\n PmagResults.append(PmagResRec)\n if avg_by_polarity:\n # find the tilt corrected data\n crecs = pmag.get_dictitem(\n PmagSites, 'site_tilt_correction', '100', 'T')\n if len(crecs) < 2:\n # if there aren't any, find the geographic corrected data\n crecs = pmag.get_dictitem(\n PmagSites, 'site_tilt_correction', '0', 'T')\n if len(crecs) > 2: # if there are some,\n comp = pmag.get_list(crecs, 'site_comp_name').split(':')[\n 0] # find the first component\n # fish out all of the first component\n crecs = pmag.get_dictitem(crecs, 'site_comp_name', comp, 'T')\n precs = []\n for rec in crecs:\n precs.append({'dec': rec['site_dec'], 'inc': rec['site_inc'],\n 'name': rec['er_site_name'], 'loc': rec['er_location_name']})\n # calculate average by polarity\n polpars = pmag.fisher_by_pol(precs)\n # hunt through all the modes (normal=A, reverse=B, all=ALL)\n for mode in list(polpars.keys()):\n PolRes = {}\n PolRes['er_citation_names'] = 'This study'\n PolRes[\"pmag_result_name\"] = \"Polarity Average: Polarity \" + mode\n PolRes[\"data_type\"] = \"a\"\n PolRes[\"average_dec\"] = '%7.1f' % (polpars[mode]['dec'])\n PolRes[\"average_inc\"] = '%7.1f' % (polpars[mode]['inc'])\n PolRes[\"average_n\"] = '%i' % 
(polpars[mode]['n'])\n PolRes[\"average_r\"] = '%5.4f' % (polpars[mode]['r'])\n PolRes[\"average_k\"] = '%6.0f' % (polpars[mode]['k'])\n PolRes[\"average_alpha95\"] = '%7.1f' % (\n polpars[mode]['alpha95'])\n PolRes['er_site_names'] = polpars[mode]['sites']\n PolRes['er_location_names'] = polpars[mode]['locs']\n PolRes['magic_software_packages'] = version_num\n PmagResults.append(PolRes)\n\n if not skip_intensities and nositeints != 1:\n for site in sites: # now do intensities for each site\n if plotsites:\n print(site)\n if not avg_intensities_by_sample:\n key, intlist = 'specimen', SpecInts # if using specimen level data\n if avg_intensities_by_sample:\n key, intlist = 'sample', PmagSamps # if using sample level data\n # get all the intensities for this site\n Ints = pmag.get_dictitem(intlist, 'er_site_name', site, 'T')\n if len(Ints) > 0: # there are some\n # get average intensity stuff for site table\n PmagSiteRec = pmag.average_int(Ints, key, 'site')\n # get average intensity stuff for results table\n PmagResRec = pmag.average_int(Ints, key, 'average')\n if plotsites: # if site by site examination requested - print this site out to the screen\n for rec in Ints:\n print(rec['er_' + key + '_name'], ' %7.1f' %\n (1e6 * float(rec[key + '_int'])))\n if len(Ints) > 1:\n print('Average: ', '%7.1f' % (\n 1e6 * float(PmagResRec['average_int'])), 'N: ', len(Ints))\n print('Sigma: ', '%7.1f' % (\n 1e6 * float(PmagResRec['average_int_sigma'])), 'Sigma %: ', PmagResRec['average_int_sigma_perc'])\n input('Press any key to continue\\n')\n er_location_name = Ints[0][\"er_location_name\"]\n # decorate the records\n PmagSiteRec[\"er_location_name\"] = er_location_name\n PmagSiteRec[\"er_citation_names\"] = \"This study\"\n PmagResRec[\"er_location_names\"] = er_location_name\n PmagResRec[\"er_citation_names\"] = \"This study\"\n PmagSiteRec[\"er_analyst_mail_names\"] = user\n PmagResRec[\"er_analyst_mail_names\"] = user\n PmagResRec[\"data_type\"] = 'i'\n if not 
avg_intensities_by_sample:\n PmagSiteRec['er_specimen_names'] = pmag.get_list(\n Ints, 'er_specimen_name') # list of all specimens used\n PmagResRec['er_specimen_names'] = pmag.get_list(\n Ints, 'er_specimen_name')\n PmagSiteRec['er_sample_names'] = pmag.get_list(\n Ints, 'er_sample_name') # list of all samples used\n PmagResRec['er_sample_names'] = pmag.get_list(\n Ints, 'er_sample_name')\n PmagSiteRec['er_site_name'] = site\n PmagResRec['er_site_names'] = site\n PmagSiteRec['magic_method_codes'] = pmag.get_list(\n Ints, 'magic_method_codes')\n PmagResRec['magic_method_codes'] = pmag.get_list(\n Ints, 'magic_method_codes')\n kill = pmag.grade(PmagSiteRec, accept, 'site_int')\n if nocrit == 1 or len(kill) == 0:\n b, sig = float(PmagResRec['average_int']), \"\"\n if(PmagResRec['average_int_sigma']) != \"\":\n sig = float(PmagResRec['average_int_sigma'])\n # fish out site direction\n sdir = pmag.get_dictitem(\n PmagResults, 'er_site_names', site, 'T')\n # get the VDM for this record using last average\n # inclination (hope it is the right one!)\n if len(sdir) > 0 and sdir[-1]['average_inc'] != \"\":\n inc = float(sdir[0]['average_inc'])\n # get magnetic latitude using dipole formula\n mlat = pmag.magnetic_lat(inc)\n # get VDM with magnetic latitude\n PmagResRec[\"vdm\"] = '%8.3e ' % (pmag.b_vdm(b, mlat))\n PmagResRec[\"vdm_n\"] = PmagResRec['average_int_n']\n if 'average_int_sigma' in list(PmagResRec.keys()) and PmagResRec['average_int_sigma'] != \"\":\n vdm_sig = pmag.b_vdm(\n float(PmagResRec['average_int_sigma']), mlat)\n PmagResRec[\"vdm_sigma\"] = '%8.3e ' % (vdm_sig)\n else:\n PmagResRec[\"vdm_sigma\"] = \"\"\n mlat = \"\" # define a model latitude\n if get_model_lat == 1: # use present site latitude\n mlats = pmag.get_dictitem(\n SiteNFO, 'er_site_name', site, 'T')\n if len(mlats) > 0:\n mlat = mlats[0]['site_lat']\n # use a model latitude from some plate reconstruction model\n # (or something)\n elif get_model_lat == 2:\n mlats = pmag.get_dictitem(\n 
ModelLats, 'er_site_name', site, 'T')\n if len(mlats) > 0:\n PmagResRec['model_lat'] = mlats[0]['site_model_lat']\n mlat = PmagResRec['model_lat']\n if mlat != \"\":\n # get the VADM using the desired latitude\n PmagResRec[\"vadm\"] = '%8.3e ' % (\n pmag.b_vdm(b, float(mlat)))\n if sig != \"\":\n vdm_sig = pmag.b_vdm(\n float(PmagResRec['average_int_sigma']), float(mlat))\n PmagResRec[\"vadm_sigma\"] = '%8.3e ' % (vdm_sig)\n PmagResRec[\"vadm_n\"] = PmagResRec['average_int_n']\n else:\n PmagResRec[\"vadm_sigma\"] = \"\"\n # fish out site information (lat/lon, etc.)\n sitedat = pmag.get_dictitem(\n SiteNFO, 'er_site_name', PmagSiteRec['er_site_name'], 'T')\n if len(sitedat) > 0:\n sitedat = sitedat[0]\n PmagResRec['average_lat'] = sitedat['site_lat']\n PmagResRec['average_lon'] = sitedat['site_lon']\n else:\n PmagResRec['average_lat'] = 'UNKNOWN'\n PmagResRec['average_lon'] = 'UNKNOWN'\n PmagResRec['magic_software_packages'] = version_num\n PmagResRec[\"pmag_result_name\"] = \"V[A]DM: Site \" + site\n PmagResRec[\"result_description\"] = \"V[A]DM of site\"\n PmagResRec[\"pmag_criteria_codes\"] = \"ACCEPT\"\n if agefile != \"\":\n PmagResRec = pmag.get_age(\n PmagResRec, \"er_site_names\", \"average_\", AgeNFO, DefaultAge)\n site_height = pmag.get_dictitem(\n height_nfo, 'er_site_name', site, 'T')\n if len(site_height) > 0:\n PmagResRec[\"average_height\"] = site_height[0]['site_height']\n PmagSites.append(PmagSiteRec)\n PmagResults.append(PmagResRec)\n if len(PmagSites) > 0:\n Tmp, keylist = pmag.fillkeys(PmagSites)\n pmag.magic_write(siteout, Tmp, 'pmag_sites')\n print(' sites written to ', siteout)\n else:\n print(\"No Site level table\")\n if len(PmagResults) > 0:\n TmpRes, keylist = pmag.fillkeys(PmagResults)\n pmag.magic_write(resout, TmpRes, 'pmag_results')\n print(' results written to ', resout)\n else:\n print(\"No Results level table\")", "response": "Writes the data from the pmag_specimens. txt file to the pmag_samples. txt and pmag_sites. 
txt file to the pmag_results. txt file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef orientation_magic(or_con=1, dec_correction_con=1, dec_correction=0, bed_correction=True,\n samp_con='1', hours_from_gmt=0, method_codes='', average_bedding=False,\n orient_file='orient.txt', samp_file='samples.txt', site_file='sites.txt',\n output_dir_path='.', input_dir_path='', append=False, data_model=3):\n \"\"\"\n use this function to convert tab delimited field notebook information to MagIC formatted tables (er_samples and er_sites)\n\n INPUT FORMAT\n Input files must be tab delimited and have in the first line:\ntab location_name\n Note: The \"location_name\" will facilitate searching in the MagIC database. Data from different\n \"locations\" should be put in separate files. The definition of a \"location\" is rather loose.\n Also this is the word 'tab' not a tab, which will be indicated by '\\t'.\n The second line has the names of the columns (tab delimited), e.g.:\nsite_name sample_name mag_azimuth field_dip date lat long sample_lithology sample_type sample_class shadow_angle hhmm stratigraphic_height bedding_dip_direction bedding_dip GPS_baseline image_name image_look image_photographer participants method_codes site_description sample_description GPS_Az, sample_igsn, sample_texture, sample_cooling_rate, cooling_rate_corr, cooling_rate_mcd\n\n\n Notes:\n 1) column order doesn't matter but the NAMES do.\n 2) sample_name, sample_lithology, sample_type, sample_class, lat and long are required. all others are optional.\n 3) If subsequent data are the same (e.g., date, bedding orientation, participants, stratigraphic_height),\n you can leave the field blank and the program will fill in the last recorded information. BUT if you really want a blank stratigraphic_height, enter a '-1'. 
These will not be inherited and must be specified for each entry: image_name, look, photographer or method_codes\n 4) hhmm must be in the format: hh:mm and the hh must be in 24 hour time.\n date must be mm/dd/yy (years < 50 will be converted to 20yy and >50 will be assumed 19yy). hours_from_gmt is the number of hours to SUBTRACT from hh to get to GMT.\n 5) image_name, image_look and image_photographer are colon delimited lists of file name (e.g., IMG_001.jpg) image look direction and the name of the photographer respectively. If all images had same look and photographer, just enter info once. The images will be assigned to the site for which they were taken - not at the sample level.\n 6) participants: Names of who helped take the samples. These must be a colon delimited list.\n 7) method_codes: Special method codes on a sample level, e.g., SO-GT5 which means the orientation is has an uncertainty of >5 degrees\n for example if it broke off before orienting....\n 8) GPS_Az is the place to put directly determined GPS Azimuths, using, e.g., points along the drill direction.\n 9) sample_cooling_rate is the cooling rate in K per Ma\n 10) int_corr_cooling_rate\n 11) cooling_rate_mcd: data adjustment method code for cooling rate correction; DA-CR-EG is educated guess; DA-CR-PS is percent estimated from pilot samples; DA-CR-TRM is comparison between 2 TRMs acquired with slow and rapid cooling rates.\nis the percent cooling rate factor to apply to specimens from this sample, DA-CR-XX is the method code\n\n\n defaults:\n orientation_magic(or_con=1, dec_correction_con=1, dec_correction=0, bed_correction=True, samp_con='1', hours_from_gmt=0, method_codes='', average_bedding=False, orient_file='orient.txt', samp_file='er_samples.txt', site_file='er_sites.txt', output_dir_path='.', input_dir_path='', append=False):\n orientation conventions:\n [1] Standard Pomeroy convention of azimuth and hade (degrees from vertical down)\n of the drill direction (field arrow). 
lab arrow azimuth= sample_azimuth = mag_azimuth;\n lab arrow dip = sample_dip =-field_dip. i.e. the lab arrow dip is minus the hade.\n [2] Field arrow is the strike of the plane orthogonal to the drill direction,\n Field dip is the hade of the drill direction. Lab arrow azimuth = mag_azimuth-90\n Lab arrow dip = -field_dip\n [3] Lab arrow is the same as the drill direction;\n hade was measured in the field.\n Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip\n [4] lab azimuth and dip are same as mag_azimuth, field_dip : use this for unoriented samples too\n [5] Same as AZDIP convention explained below -\n azimuth and inclination of the drill direction are mag_azimuth and field_dip;\n lab arrow is as in [1] above.\n lab azimuth is same as mag_azimuth,lab arrow dip=field_dip-90\n [6] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = 90-field_dip\n [7] see http://earthref.org/PmagPy/cookbook/#field_info for more information. You can customize other format yourself, or email ltauxe@ucsd.edu for help.\n\n\n Magnetic declination convention:\n [1] Use the IGRF value at the lat/long and date supplied [default]\n [2] Will supply declination correction\n [3] mag_az is already corrected in file\n [4] Correct mag_az but not bedding_dip_dir\n\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. 
[default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n NB: all others you will have to either customize\n yourself or e-mail ltauxe@ucsd.edu for help.\n\n\n \"\"\"\n # initialize some variables\n # bed_correction used to be BedCorr\n # dec_correction_con used to be corr\n # dec_correction used to be DecCorr\n # meths is now method_codes\n # delta_u is now hours_from_gmt\n\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, output_dir_path)\n or_con, dec_correction_con, dec_correction = int(\n or_con), int(dec_correction_con), float(dec_correction)\n hours_from_gmt = float(hours_from_gmt)\n stratpos = \"\"\n # date of sampling, latitude (pos North), longitude (pos East)\n date, lat, lon = \"\", \"\", \"\"\n bed_dip, bed_dip_dir = \"\", \"\"\n Lats, Lons = [], [] # list of latitudes and longitudes\n # lists of Sample records and Site records\n SampOuts, SiteOuts, ImageOuts = [], [], []\n samplelist, sitelist, imagelist = [], [], []\n Z = 1\n newbaseline, newbeddir, newbeddip = \"\", \"\", \"\"\n fpars = []\n sclass, lithology, sample_type = \"\", \"\", \"\"\n newclass, newlith, newtype = '', '', ''\n BPs = [] # bedding pole declinations, bedding pole inclinations\n image_file = \"er_images.txt\"\n #\n # use 3.0. 
default filenames when in 3.0.\n # but, still allow for custom names\n data_model = int(data_model)\n if data_model == 3:\n if samp_file == \"er_samples.txt\":\n samp_file = \"samples.txt\"\n if site_file == \"er_sites.txt\":\n site_file = \"sites.txt\"\n image_file = \"images.txt\"\n orient_file = pmag.resolve_file_name(orient_file, input_dir_path)\n if not os.path.exists(orient_file):\n return False, \"No such file: {}. If the orientation file is not in your current working directory, make sure you have specified the correct input directory.\".format(orient_file)\n samp_file = os.path.join(output_dir_path, samp_file)\n site_file = os.path.join(output_dir_path, site_file)\n image_file = os.path.join(output_dir_path, image_file)\n\n # validate input\n if '4' in samp_con[0]:\n pattern = re.compile('[4][-]\\d')\n result = pattern.match(samp_con)\n if not result:\n raise Exception(\n \"If using sample naming convention 4, you must provide the number of characters with which to distinguish sample from site. [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX)\")\n\n if '7' in samp_con[0]:\n pattern = re.compile('[7][-]\\d')\n result = pattern.match(samp_con)\n if not result:\n raise Exception(\n \"If using sample naming convention 7, you must provide the number of characters with which to distinguish sample from site. [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\")\n\n if dec_correction_con == 2 and not dec_correction:\n raise Exception(\n \"If using magnetic declination convention 2, you must also provide a declincation correction in degrees\")\n\n SampRecs, SiteRecs, ImageRecs = [], [], []\n SampRecs_sorted, SiteRecs_sorted = {}, {}\n\n if append:\n try:\n SampRecs, file_type = pmag.magic_read(samp_file)\n # convert 3.0. 
sample file to 2.5 format\n if data_model == 3:\n SampRecs3 = SampRecs\n SampRecs = []\n for samp_rec in SampRecs3:\n rec = map_magic.mapping(\n samp_rec, map_magic.samp_magic3_2_magic2_map)\n SampRecs.append(rec)\n # magic_data dictionary sorted by sample_name\n SampRecs_sorted = pmag.sort_magic_data(SampRecs, 'er_sample_name')\n print('sample data to be appended to: ', samp_file)\n except Exception as ex:\n print(ex)\n print('problem with existing file: ',\n samp_file, ' will create new.')\n try:\n SiteRecs, file_type = pmag.magic_read(site_file)\n # convert 3.0. site file to 2.5 format\n if data_model == 3:\n SiteRecs3 = SiteRecs\n SiteRecs = []\n for site_rec in SiteRecs3:\n SiteRecs.append(map_magic.mapping(\n site_rec, map_magic.site_magic3_2_magic2_map))\n # magic_data dictionary sorted by site_name\n SiteRecs_sorted = pmag.sort_magic_data(SiteRecs, 'er_site_name')\n print('site data to be appended to: ', site_file)\n except Exception as ex:\n print(ex)\n print('problem with existing file: ',\n site_file, ' will create new.')\n try:\n ImageRecs, file_type = pmag.magic_read(image_file)\n # convert from 3.0. 
--> 2.5\n if data_model == 3:\n ImageRecs3 = ImageRecs\n ImageRecs = []\n for image_rec in ImageRecs3:\n ImageRecs.append(map_magic.mapping(\n image_rec, map_magic.image_magic3_2_magic2_map))\n print('image data to be appended to: ', image_file)\n except:\n print('problem with existing file: ',\n image_file, ' will create new.')\n #\n # read in file to convert\n #\n OrData, location_name = pmag.magic_read(orient_file)\n if location_name == \"demag_orient\":\n location_name = \"\"\n #\n # step through the data sample by sample\n #\n # use map_magic in here...\n\n for OrRec in OrData:\n if 'mag_azimuth' not in list(OrRec.keys()):\n OrRec['mag_azimuth'] = \"\"\n if 'field_dip' not in list(OrRec.keys()):\n OrRec['field_dip'] = \"\"\n if OrRec['mag_azimuth'] == \" \":\n OrRec[\"mag_azimuth\"] = \"\"\n if OrRec['field_dip'] == \" \":\n OrRec[\"field_dip\"] = \"\"\n if 'sample_description' in list(OrRec.keys()):\n sample_description = OrRec['sample_description']\n else:\n sample_description = \"\"\n if 'cooling_rate_corr' in list(OrRec.keys()):\n if 'cooling_rate_mcd' not in list(OrRec.keys()):\n OrRec['cooling_rate_mcd'] = 'DA-CR'\n sample_orientation_flag = 'g'\n if 'sample_orientation_flag' in list(OrRec.keys()):\n if OrRec['sample_orientation_flag'] == 'b' or OrRec[\"mag_azimuth\"] == \"\":\n sample_orientation_flag = 'b'\n methcodes = method_codes # initialize method codes\n if methcodes:\n if 'method_codes' in list(OrRec.keys()) and OrRec['method_codes'].strip() != \"\":\n methcodes = methcodes + \":\" + \\\n OrRec['method_codes'] # add notes\n else:\n if 'method_codes' in list(OrRec.keys()) and OrRec['method_codes'].strip() != \"\":\n methcodes = OrRec['method_codes'] # add notes\n\n codes = methcodes.replace(\" \", \"\").split(\":\")\n sample_name = OrRec[\"sample_name\"]\n # patch added by rshaar 7/2016\n # if sample_name already exists in er_samples.txt:\n # merge the new data colmuns calculated by orientation_magic with the existing data colmuns\n # this is 
done to make sure no previous data in er_samples.txt and\n # er_sites.txt is lost.\n\n if sample_name in list(SampRecs_sorted.keys()):\n Prev_MagRec = SampRecs_sorted[sample_name][-1]\n MagRec = Prev_MagRec\n else:\n Prev_MagRec = {}\n MagRec = {}\n MagRec[\"er_citation_names\"] = \"This study\"\n\n # the following keys were calculated or defined in the code above:\n for key in ['sample_igsn', 'sample_texture', 'sample_cooling_rate',\n 'cooling_rate_corr', 'cooling_rate_mcd']:\n val = OrRec.get(key, '')\n if val:\n MagRec[key] = val\n elif key in list(Prev_MagRec.keys()):\n MagRec[key] = Prev_MagRec[key]\n else:\n MagRec[key] = \"\"\n\n if location_name != \"\":\n MagRec[\"er_location_name\"] = location_name\n elif \"er_location_name\" in list(Prev_MagRec.keys()):\n MagRec[\"er_location_name\"] = Prev_MagRec[\"er_location_name\"]\n else:\n MagRec[\"er_location_name\"] = \"\"\n\n # the following keys are taken directly from OrRec dictionary:\n for key in [\"sample_height\", \"er_sample_alternatives\", \"sample_orientation_flag\"]:\n if key in list(OrRec.keys()) and OrRec[key] != \"\":\n MagRec[key] = OrRec[key]\n elif key in list(Prev_MagRec.keys()):\n MagRec[key] = Prev_MagRec[key]\n else:\n MagRec[key] = \"\"\n\n # the following keys, if blank, used to be defined here as \"Not Specified\" :\n\n for key in [\"sample_class\", \"sample_lithology\", \"sample_type\"]:\n if key in list(OrRec.keys()) and OrRec[key] != \"\" and OrRec[key] != \"Not Specified\":\n MagRec[key] = OrRec[key]\n elif key in list(Prev_MagRec.keys()) and Prev_MagRec[key] != \"\" and Prev_MagRec[key] != \"Not Specified\":\n MagRec[key] = Prev_MagRec[key]\n else:\n MagRec[key] = \"\" # \"Not Specified\"\n\n # (rshaar) From here parse new information and replace previous, if exists:\n #\n # parse information common to all orientation methods\n #\n MagRec[\"er_sample_name\"] = OrRec[\"sample_name\"]\n if \"IGSN\" in list(OrRec.keys()):\n MagRec[\"sample_igsn\"] = OrRec[\"IGSN\"]\n else:\n 
MagRec[\"sample_igsn\"] = \"\"\n # MagRec[\"sample_height\"],MagRec[\"sample_bed_dip_direction\"],MagRec[\"sample_bed_dip\"]=\"\",\"\",\"\"\n MagRec[\"sample_bed_dip_direction\"], MagRec[\"sample_bed_dip\"] = \"\", \"\"\n # if \"er_sample_alternatives\" in OrRec.keys():\n # MagRec[\"er_sample_alternatives\"]=OrRec[\"sample_alternatives\"]\n sample = OrRec[\"sample_name\"]\n if OrRec['mag_azimuth'] == \"\" and OrRec['field_dip'] != \"\":\n OrRec['mag_azimuth'] = '999'\n if OrRec[\"mag_azimuth\"] != \"\":\n labaz, labdip = pmag.orient(\n float(OrRec[\"mag_azimuth\"]), float(OrRec[\"field_dip\"]), or_con)\n if labaz < 0:\n labaz += 360.\n else:\n labaz, labdip = \"\", \"\"\n if OrRec['mag_azimuth'] == '999':\n labaz = \"\"\n if \"GPS_baseline\" in list(OrRec.keys()) and OrRec['GPS_baseline'] != \"\":\n newbaseline = OrRec[\"GPS_baseline\"]\n if newbaseline != \"\":\n baseline = float(newbaseline)\n MagRec['er_scientist_mail_names'] = OrRec.get('participants', '')\n newlat = OrRec[\"lat\"]\n if newlat != \"\":\n lat = float(newlat)\n if lat == \"\":\n print(\"No latitude specified for ! \", sample,\n \". Latitude is required for all samples.\")\n return False, \"No latitude specified for ! \" + sample + \". Latitude is required for all samples.\"\n MagRec[\"sample_lat\"] = '%11.5f' % (lat)\n newlon = OrRec[\"long\"]\n if newlon != \"\":\n lon = float(newlon)\n if lon == \"\":\n print(\"No longitude specified for ! \", sample,\n \". Longitude is required for all samples.\")\n return False, str(\"No longitude specified for ! \" + sample + \". 
Longitude is required for all samples.\")\n MagRec[\"sample_lon\"] = '%11.5f' % (lon)\n if 'bedding_dip_direction' in list(OrRec.keys()):\n newbeddir = OrRec[\"bedding_dip_direction\"]\n if newbeddir != \"\":\n bed_dip_dir = OrRec['bedding_dip_direction']\n if 'bedding_dip' in list(OrRec.keys()):\n newbeddip = OrRec[\"bedding_dip\"]\n if newbeddip != \"\":\n bed_dip = OrRec['bedding_dip']\n MagRec[\"sample_bed_dip\"] = bed_dip\n MagRec[\"sample_bed_dip_direction\"] = bed_dip_dir\n # MagRec[\"sample_type\"]=sample_type\n if labdip != \"\":\n MagRec[\"sample_dip\"] = '%7.1f' % labdip\n else:\n MagRec[\"sample_dip\"] = \"\"\n if \"date\" in list(OrRec.keys()) and OrRec[\"date\"] != \"\":\n newdate = OrRec[\"date\"]\n if newdate != \"\":\n date = newdate\n mmddyy = date.split('/')\n yy = int(mmddyy[2])\n if yy > 50:\n yy = 1900 + yy\n else:\n yy = 2000 + yy\n decimal_year = yy + old_div(float(mmddyy[0]), 12)\n sample_date = '%i:%s:%s' % (yy, mmddyy[0], mmddyy[1])\n time = OrRec['hhmm']\n if time:\n sample_date += (':' + time)\n MagRec[\"sample_date\"] = sample_date.strip(':')\n if labaz != \"\":\n MagRec[\"sample_azimuth\"] = '%7.1f' % (labaz)\n else:\n MagRec[\"sample_azimuth\"] = \"\"\n if \"stratigraphic_height\" in list(OrRec.keys()):\n if OrRec[\"stratigraphic_height\"] != \"\":\n MagRec[\"sample_height\"] = OrRec[\"stratigraphic_height\"]\n stratpos = OrRec[\"stratigraphic_height\"]\n elif OrRec[\"stratigraphic_height\"] == '-1':\n MagRec[\"sample_height\"] = \"\" # make empty\n elif stratpos != \"\":\n # keep last record if blank\n MagRec[\"sample_height\"] = stratpos\n#\n # get magnetic declination (corrected with igrf value)\n if dec_correction_con == 1 and MagRec['sample_azimuth'] != \"\":\n x, y, z, f = pmag.doigrf(lon, lat, 0, decimal_year)\n Dir = pmag.cart2dir((x, y, z))\n dec_correction = Dir[0]\n if \"bedding_dip\" in list(OrRec.keys()):\n if OrRec[\"bedding_dip\"] != \"\":\n MagRec[\"sample_bed_dip\"] = OrRec[\"bedding_dip\"]\n bed_dip = 
OrRec[\"bedding_dip\"]\n else:\n MagRec[\"sample_bed_dip\"] = bed_dip\n else:\n MagRec[\"sample_bed_dip\"] = '0'\n if \"bedding_dip_direction\" in list(OrRec.keys()):\n if OrRec[\"bedding_dip_direction\"] != \"\" and bed_correction == 1:\n dd = float(OrRec[\"bedding_dip_direction\"]) + dec_correction\n if dd > 360.:\n dd = dd - 360.\n MagRec[\"sample_bed_dip_direction\"] = '%7.1f' % (dd)\n dip_dir = MagRec[\"sample_bed_dip_direction\"]\n else:\n MagRec[\"sample_bed_dip_direction\"] = OrRec['bedding_dip_direction']\n else:\n MagRec[\"sample_bed_dip_direction\"] = '0'\n if average_bedding:\n if str(MagRec[\"sample_bed_dip_direction\"]) and str(MagRec[\"sample_bed_dip\"]):\n BPs.append([float(MagRec[\"sample_bed_dip_direction\"]),\n float(MagRec[\"sample_bed_dip\"]) - 90., 1.])\n if MagRec['sample_azimuth'] == \"\" and MagRec['sample_dip'] == \"\":\n MagRec[\"sample_declination_correction\"] = ''\n methcodes = methcodes + ':SO-NO'\n MagRec[\"magic_method_codes\"] = methcodes\n MagRec['sample_description'] = sample_description\n #\n # work on the site stuff too\n if 'site_name' in list(OrRec.keys()) and OrRec['site_name'] != \"\":\n site = OrRec['site_name']\n elif 'site_name' in list(Prev_MagRec.keys()) and Prev_MagRec['site_name'] != \"\":\n site = Prev_MagRec['site_name']\n else:\n # parse out the site name\n site = pmag.parse_site(OrRec[\"sample_name\"], samp_con, Z)\n MagRec[\"er_site_name\"] = site\n site_description = \"\" # overwrite any prior description\n if 'site_description' in list(OrRec.keys()) and OrRec['site_description'] != \"\":\n site_description = OrRec['site_description'].replace(\",\", \";\")\n if \"image_name\" in list(OrRec.keys()):\n images = OrRec[\"image_name\"].split(\":\")\n if \"image_look\" in list(OrRec.keys()):\n looks = OrRec['image_look'].split(\":\")\n else:\n looks = []\n if \"image_photographer\" in list(OrRec.keys()):\n photographers = OrRec['image_photographer'].split(\":\")\n else:\n photographers = []\n for image in images:\n 
if image != \"\" and image not in imagelist:\n imagelist.append(image)\n ImageRec = {}\n ImageRec['er_image_name'] = image\n ImageRec['image_type'] = \"outcrop\"\n ImageRec['image_date'] = sample_date\n ImageRec['er_citation_names'] = \"This study\"\n ImageRec['er_location_name'] = location_name\n ImageRec['er_site_name'] = MagRec['er_site_name']\n k = images.index(image)\n if len(looks) > k:\n ImageRec['er_image_description'] = \"Look direction: \" + looks[k]\n elif len(looks) >= 1:\n ImageRec['er_image_description'] = \"Look direction: \" + looks[-1]\n else:\n ImageRec['er_image_description'] = \"Look direction: unknown\"\n if len(photographers) > k:\n ImageRec['er_photographer_mail_names'] = photographers[k]\n elif len(photographers) >= 1:\n ImageRec['er_photographer_mail_names'] = photographers[-1]\n else:\n ImageRec['er_photographer_mail_names'] = \"unknown\"\n ImageOuts.append(ImageRec)\n if site not in sitelist:\n sitelist.append(site) # collect unique site names\n # patch added by rshaar 7/2016\n # if sample_name already exists in er_samples.txt:\n # merge the new data columns calculated by orientation_magic with the existing data columns\n # this is done to make sure no previous data in er_samples.txt and\n # er_sites.txt is lost.\n\n if site in list(SiteRecs_sorted.keys()):\n Prev_MagRec = SiteRecs_sorted[site][-1]\n SiteRec = Prev_MagRec\n else:\n Prev_MagRec = {}\n SiteRec = {}\n SiteRec[\"er_citation_names\"] = \"This study\"\n SiteRec[\"er_site_name\"] = site\n SiteRec[\"site_definition\"] = \"s\"\n\n if \"er_location_name\" in SiteRec and SiteRec.get(\"er_location_name\"):\n pass\n elif \"er_location_name\" in list(Prev_MagRec.keys()) and Prev_MagRec[\"er_location_name\"] != \"\":\n SiteRec[\"er_location_name\"] = Prev_MagRec[\"er_location_name\"]\n else:\n print('setting location name to \"\"')\n SiteRec[\"er_location_name\"] = \"\"\n\n for key in [\"lat\", \"lon\", \"height\"]:\n if \"site_\" + key in list(Prev_MagRec.keys()) and Prev_MagRec[\"site_\" + key] != \"\":\n SiteRec[\"site_\" + key] = Prev_MagRec[\"site_\" + key]\n 
else:\n SiteRec[\"site_\" + key] = MagRec[\"sample_\" + key]\n\n # SiteRec[\"site_lat\"]=MagRec[\"sample_lat\"]\n # SiteRec[\"site_lon\"]=MagRec[\"sample_lon\"]\n # SiteRec[\"site_height\"]=MagRec[\"sample_height\"]\n\n for key in [\"class\", \"lithology\", \"type\"]:\n if \"site_\" + key in list(Prev_MagRec.keys()) and Prev_MagRec[\"site_\" + key] != \"Not Specified\":\n SiteRec[\"site_\" + key] = Prev_MagRec[\"site_\" + key]\n else:\n SiteRec[\"site_\" + key] = MagRec[\"sample_\" + key]\n\n # SiteRec[\"site_class\"]=MagRec[\"sample_class\"]\n # SiteRec[\"site_lithology\"]=MagRec[\"sample_lithology\"]\n # SiteRec[\"site_type\"]=MagRec[\"sample_type\"]\n\n if site_description != \"\": # overwrite only if site_description has something\n SiteRec[\"site_description\"] = site_description\n SiteOuts.append(SiteRec)\n if sample not in samplelist:\n samplelist.append(sample)\n if MagRec['sample_azimuth'] != \"\": # assume magnetic compass only\n MagRec['magic_method_codes'] = MagRec['magic_method_codes'] + ':SO-MAG'\n MagRec['magic_method_codes'] = MagRec['magic_method_codes'].strip(\n \":\")\n SampOuts.append(MagRec)\n if MagRec['sample_azimuth'] != \"\" and dec_correction_con != 3:\n az = labaz + dec_correction\n if az > 360.:\n az = az - 360.\n CMDRec = {}\n for key in list(MagRec.keys()):\n CMDRec[key] = MagRec[key] # make a copy of MagRec\n CMDRec[\"sample_azimuth\"] = '%7.1f' % (az)\n CMDRec[\"magic_method_codes\"] = methcodes + ':SO-CMD-NORTH'\n CMDRec[\"magic_method_codes\"] = CMDRec['magic_method_codes'].strip(\n ':')\n CMDRec[\"sample_declination_correction\"] = '%7.1f' % (\n dec_correction)\n if dec_correction_con == 1:\n CMDRec['sample_description'] = sample_description + \\\n ':Declination correction calculated from IGRF'\n else:\n CMDRec['sample_description'] = sample_description + \\\n ':Declination correction supplied by user'\n CMDRec[\"sample_description\"] = CMDRec['sample_description'].strip(\n ':')\n SampOuts.append(CMDRec)\n if \"mag_az_bs\" in 
list(OrRec.keys()) and OrRec[\"mag_az_bs\"] != \"\" and OrRec[\"mag_az_bs\"] != \" \":\n SRec = {}\n for key in list(MagRec.keys()):\n SRec[key] = MagRec[key] # make a copy of MagRec\n labaz = float(OrRec[\"mag_az_bs\"])\n az = labaz + dec_correction\n if az > 360.:\n az = az - 360.\n SRec[\"sample_azimuth\"] = '%7.1f' % (az)\n SRec[\"sample_declination_correction\"] = '%7.1f' % (\n dec_correction)\n SRec[\"magic_method_codes\"] = methcodes + \\\n ':SO-SIGHT-BACK:SO-CMD-NORTH'\n SampOuts.append(SRec)\n #\n # check for suncompass data\n #\n # there are sun compass data\n if \"shadow_angle\" in list(OrRec.keys()) and OrRec[\"shadow_angle\"] != \"\":\n if hours_from_gmt == \"\":\n #hours_from_gmt=raw_input(\"Enter hours to subtract from time for GMT: [0] \")\n hours_from_gmt = 0\n SunRec, sundata = {}, {}\n shad_az = float(OrRec[\"shadow_angle\"])\n if not OrRec[\"hhmm\"]:\n print('If using the column shadow_angle for sun compass data, you must also provide the time for each sample. Sample ',\n sample, ' has shadow_angle but is missing the \"hh:mm\" column.')\n else: # calculate sun declination\n sundata[\"date\"] = '%i:%s:%s:%s' % (\n yy, mmddyy[0], mmddyy[1], OrRec[\"hhmm\"])\n sundata[\"delta_u\"] = hours_from_gmt\n sundata[\"lon\"] = lon # do not truncate!\n sundata[\"lat\"] = lat # do not truncate!\n sundata[\"shadow_angle\"] = OrRec[\"shadow_angle\"]\n # now you can truncate\n sundec = '%7.1f' % (pmag.dosundec(sundata))\n for key in list(MagRec.keys()):\n SunRec[key] = MagRec[key] # make a copy of MagRec\n SunRec[\"sample_azimuth\"] = sundec # do not truncate!\n SunRec[\"sample_declination_correction\"] = ''\n SunRec[\"magic_method_codes\"] = methcodes + ':SO-SUN'\n SunRec[\"magic_method_codes\"] = SunRec['magic_method_codes'].strip(\n ':')\n SampOuts.append(SunRec)\n #\n # check for differential GPS data\n #\n # there are diff GPS data\n if \"prism_angle\" in list(OrRec.keys()) and OrRec[\"prism_angle\"] != \"\":\n GPSRec = {}\n for key in 
list(MagRec.keys()):\n GPSRec[key] = MagRec[key] # make a copy of MagRec\n prism_angle = float(OrRec[\"prism_angle\"])\n laser_angle = float(OrRec[\"laser_angle\"])\n if OrRec[\"GPS_baseline\"] != \"\":\n baseline = float(OrRec[\"GPS_baseline\"]) # new baseline\n gps_dec = baseline + laser_angle + prism_angle - 90.\n while gps_dec > 360.:\n gps_dec = gps_dec - 360.\n while gps_dec < 0:\n gps_dec = gps_dec + 360.\n for key in list(MagRec.keys()):\n GPSRec[key] = MagRec[key] # make a copy of MagRec\n GPSRec[\"sample_azimuth\"] = '%7.1f' % (gps_dec)\n GPSRec[\"sample_declination_correction\"] = ''\n GPSRec[\"magic_method_codes\"] = methcodes + ':SO-GPS-DIFF'\n SampOuts.append(GPSRec)\n # there are differential GPS Azimuth data\n if \"GPS_Az\" in list(OrRec.keys()) and OrRec[\"GPS_Az\"] != \"\":\n GPSRec = {}\n for key in list(MagRec.keys()):\n GPSRec[key] = MagRec[key] # make a copy of MagRec\n GPSRec[\"sample_azimuth\"] = '%7.1f' % (float(OrRec[\"GPS_Az\"]))\n GPSRec[\"sample_declination_correction\"] = ''\n GPSRec[\"magic_method_codes\"] = methcodes + ':SO-GPS-DIFF'\n SampOuts.append(GPSRec)\n if average_bedding != \"0\" and fpars:\n fpars = pmag.fisher_mean(BPs)\n print('over-writing all bedding with average ')\n Samps = []\n for rec in SampOuts:\n 
if average_bedding != \"0\" and fpars:\n rec['sample_bed_dip_direction'] = '%7.1f' % (fpars['dec'])\n rec['sample_bed_dip'] = '%7.1f' % (fpars['inc'] + 90.)\n Samps.append(rec)\n else:\n Samps.append(rec)\n for rec in SampRecs:\n if rec['er_sample_name'] not in samplelist: # overwrite prior for this sample\n Samps.append(rec)\n for rec in SiteRecs:\n if rec['er_site_name'] not in sitelist: # overwrite prior for this sample\n SiteOuts.append(rec)\n for rec in ImageRecs:\n if rec['er_image_name'] not in imagelist: # overwrite prior for this sample\n ImageOuts.append(rec)\n print('saving data...')\n SampsOut, keys = pmag.fillkeys(Samps)\n Sites, keys = pmag.fillkeys(SiteOuts)\n if data_model == 3:\n SampsOut3 = []\n Sites3 = []\n for samp_rec in SampsOut:\n new_rec = map_magic.mapping(\n samp_rec, map_magic.samp_magic2_2_magic3_map)\n SampsOut3.append(new_rec)\n for site_rec in Sites:\n new_rec = map_magic.mapping(\n site_rec, map_magic.site_magic2_2_magic3_map)\n Sites3.append(new_rec)\n wrote_samps = pmag.magic_write(samp_file, SampsOut3, \"samples\")\n wrote_sites = pmag.magic_write(site_file, Sites3, \"sites\")\n else:\n wrote_samps = pmag.magic_write(samp_file, SampsOut, \"er_samples\")\n wrote_sites = pmag.magic_write(site_file, Sites, \"er_sites\")\n if wrote_samps:\n print(\"Data saved in \", samp_file, ' and ', site_file)\n else:\n print(\"No data found\")\n if len(ImageOuts) > 0:\n # need to do conversion here 3.0. 
--> 2.5\n Images, keys = pmag.fillkeys(ImageOuts)\n image_type = \"er_images\"\n if data_model == 3:\n # convert 2.5 --> 3.0.\n image_type = \"images\"\n Images2 = Images\n Images = []\n for image_rec in Images2:\n Images.append(map_magic.mapping(\n image_rec, map_magic.image_magic2_2_magic3_map))\n pmag.magic_write(image_file, Images, image_type)\n print(\"Image info saved in \", image_file)\n return True, None", "response": "This function converts field notebook orientation information to MagIC formatted sample, site, and image tables."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntake space delimited AzDip file and converts to MagIC formatted tables Parameters __________ orient_file : name of azdip formatted input file samp_file : name of samples.txt formatted output file samp_con : integer of sample orientation convention [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name same as sample [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY method_codes : colon delimited string with the following as desired FS-FD field sampling done with a drill FS-H field sampling done with hand samples FS-LOC-GPS field location done with GPS FS-LOC-MAP field location done with map SO-POM a Pomeroy orientation device was used SO-ASC an ASC orientation device was used SO-MAG orientation with magnetic compass location_name : location of samples append : boolean. if True, append to the output file output_dir : path to output file directory input_dir : path to input file directory data_model : MagIC data model. 
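The sample-naming conventions listed above can be sketched as a small helper. This is an illustrative stand-in for conventions [1]-[3] only, not the actual `pmag.parse_site` (the function name and error handling here are assumptions):

```python
def parse_site_sketch(sample, samp_con="1"):
    """Return the site portion of a sample name for naming conventions 1-3."""
    if samp_con == "1":
        # [1] XXXXY: site is everything but the final character
        return sample[:-1]
    if samp_con == "2":
        # [2] XXXX-YY: site is the part before the first dash
        return sample.split("-")[0]
    if samp_con == "3":
        # [3] XXXX.YY: site is the part before the first dot
        return sample.split(".")[0]
    raise ValueError("convention not handled in this sketch: %s" % samp_con)
```

For example, `parse_site_sketch("TG001a", "1")` yields `"TG001"`, matching the TG001a example in the convention list; conventions [4-Z] and [7-Z] additionally need the `Z` character count and are omitted here.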
INPUT FORMAT Input files must be space delimited: Samp Az Dip Strike Dip Orientation convention: Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip e.g. field_dip is degrees from horizontal of drill direction Magnetic declination convention: Az is already corrected in file", "response": "def azdip_magic(orient_file='orient.txt', samp_file=\"samples.txt\", samp_con=\"1\", Z=1, method_codes='FS-FD', location_name='unknown', append=False, output_dir='.', input_dir='.', data_model=3):\n \"\"\"\n takes space delimited AzDip file and converts to MagIC formatted tables\n\n Parameters\n __________\n orient_file : name of azdip formatted input file\n samp_file : name of samples.txt formatted output file\n samp_con : integer of sample orientation convention\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. [default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name same as sample\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY\n\n method_codes : colon delimited string with the following as desired\n FS-FD field sampling done with a drill\n FS-H field sampling done with hand samples\n FS-LOC-GPS field location done with GPS\n FS-LOC-MAP field location done with map\n SO-POM a Pomeroy orientation device was used\n SO-ASC an ASC orientation device was used\n SO-MAG orientation with magnetic compass\n location_name : location of samples\n append : boolean. 
if True, append to the output file\n output_dir : path to output file directory\n input_dir : path to input file directory\n data_model : MagIC data model.\n\n INPUT FORMAT\n Input files must be space delimited:\n Samp Az Dip Strike Dip\n Orientation convention:\n Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip\n e.g. field_dip is degrees from horizontal of drill direction\n Magnetic declination convention:\n Az is already corrected in file\n\n \"\"\"\n #\n # initialize variables\n #\n data_model = int(data_model)\n if (data_model != 3) and (samp_file == \"samples.txt\"):\n samp_file = \"er_samples.txt\"\n if (data_model == 2) and (samp_file == \"er_samples.txt\"):\n samp_file = \"samples.txt\"\n DEBUG = 0\n version_num = pmag.get_version()\n or_con, corr = \"3\", \"1\"\n # date of sampling, latitude (pos North), longitude (pos East)\n date, lat, lon = \"\", \"\", \"\"\n bed_dip, bed_dip_dir = \"\", \"\"\n participantlist = \"\"\n sites = [] # list of site names\n Lats, Lons = [], [] # list of latitudes and longitudes\n # lists of Sample records and Site records\n SampRecs, SiteRecs, ImageRecs, imagelist = [], [], [], []\n average_bedding = \"1\", 1, \"0\"\n newbaseline, newbeddir, newbeddip = \"\", \"\", \"\"\n delta_u = \"0\"\n sclass, lithology, type = \"\", \"\", \"\"\n newclass, newlith, newtype = '', '', ''\n user = \"\"\n corr == \"3\"\n DecCorr = 0.\n samp_file = pmag.resolve_file_name(samp_file, output_dir)\n orient_file = pmag.resolve_file_name(orient_file, input_dir)\n input_dir = os.path.split(orient_file)[0]\n output_dir = os.path.split(samp_file)[0]\n #\n #\n if append:\n try:\n SampRecs, file_type = pmag.magic_read(samp_file)\n print(\"sample data to be appended to: \", samp_file)\n except:\n print('problem with existing samp file: ',\n samp_file, ' will create new')\n #\n # read in file to convert\n #\n azfile = open(orient_file, 'r')\n AzDipDat = azfile.readlines()\n azfile.close()\n if not AzDipDat:\n return False, 'No data in 
orientation file, please try again'\n azfile.close()\n SampOut, samplist = [], []\n for line in AzDipDat:\n orec = line.split()\n if len(orec) > 2:\n labaz, labdip = pmag.orient(float(orec[1]), float(orec[2]), or_con)\n bed_dip = float(orec[4])\n if bed_dip != 0:\n bed_dip_dir = float(orec[3]) - \\\n 90. # assume dip to right of strike\n else:\n bed_dip_dir = float(orec[3]) # assume dip to right of strike\n MagRec = {}\n MagRec[\"er_location_name\"] = location_name\n MagRec[\"er_citation_names\"] = \"This study\"\n #\n # parse information common to all orientation methods\n #\n MagRec[\"er_sample_name\"] = orec[0]\n MagRec[\"sample_bed_dip\"] = '%7.1f' % (bed_dip)\n MagRec[\"sample_bed_dip_direction\"] = '%7.1f' % (bed_dip_dir)\n MagRec[\"sample_dip\"] = '%7.1f' % (labdip)\n MagRec[\"sample_azimuth\"] = '%7.1f' % (labaz)\n methods = method_codes.replace(\" \", \"\").split(\":\")\n OR = 0\n for method in methods:\n method_type = method.split(\"-\")\n if \"SO\" in method_type:\n OR = 1\n if OR == 0:\n method_codes = method_codes + \":SO-NO\"\n MagRec[\"magic_method_codes\"] = method_codes\n # parse out the site name\n site = pmag.parse_site(orec[0], samp_con, Z)\n MagRec[\"er_site_name\"] = site\n MagRec['magic_software_packages'] = version_num\n SampOut.append(MagRec)\n if MagRec['er_sample_name'] not in samplist:\n samplist.append(MagRec['er_sample_name'])\n for samp in SampRecs:\n if samp not in samplist:\n SampOut.append(samp)\n Samps, keys = pmag.fillkeys(SampOut)\n if data_model == 2:\n # write to file\n pmag.magic_write(samp_file, Samps, \"er_samples\")\n else:\n # translate sample records to MagIC 3\n Samps3 = []\n for samp in Samps:\n Samps3.append(map_magic.mapping(\n samp, map_magic.samp_magic2_2_magic3_map))\n # write to file\n pmag.magic_write(samp_file, Samps3, \"samples\")\n print(\"Data saved in \", samp_file)\n return True, None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef 
dayplot_magic(path_to_file='.', hyst_file=\"specimens.txt\", rem_file='',\n save=True, save_folder='.', fmt='svg', data_model=3,\n interactive=False, contribution=None):\n \"\"\"\n Makes 'day plots' (Day et al. 1977) and squareness/coercivity plots\n (Neel, 1955; plots after Tauxe et al., 2002); plots 'linear mixing'\n curve from Dunlop and Carter-Stiglitz (2006).\n\n Optional Parameters (defaults are used if not specified)\n ----------\n path_to_file : path to directory that contains files (default is current directory, '.')\n the default input file is 'specimens.txt' (data_model=3\n if data_model = 2, then must these are the defaults:\n hyst_file : hysteresis file (default is 'rmag_hysteresis.txt')\n rem_file : remanence file (default is 'rmag_remanence.txt')\n save : boolean argument to save plots (default is True)\n save_folder : relative directory where plots will be saved (default is current directory, '.')\n fmt : format of saved figures (default is 'pdf')\n \"\"\"\n args = sys.argv\n hyst_path = os.path.join(path_to_file, hyst_file)\n if data_model == 2 and rem_file != '':\n rem_path = os.path.join(path_to_file, rem_file)\n # hyst_file,rem_file=\"rmag_hysteresis.txt\",\"rmag_remanence.txt\"\n dir_path = path_to_file\n verbose = pmagplotlib.verbose\n # initialize some variables\n # define figure numbers for Day,S-Bc,S-Bcr\n DSC = {}\n DSC['day'], DSC['S-Bc'], DSC['S-Bcr'], DSC['bcr1-bcr2'] = 1, 2, 3, 4\n hyst_data, file_type = pmag.magic_read(hyst_path)\n rem_data = []\n if data_model == 2 and rem_file != \"\":\n rem_data, file_type = pmag.magic_read(rem_path)\n S, BcrBc, Bcr2, Bc, hsids, Bcr = [], [], [], [], [], []\n Ms, Bcr1, Bcr1Bc, S1 = [], [], [], []\n locations = ''\n if data_model == 2:\n for rec in hyst_data:\n if 'er_location_name' in list(rec.keys()) and rec['er_location_name'] not in locations:\n locations = locations + rec['er_location_name'] + '_'\n if rec['hysteresis_bcr'] != \"\" and rec['hysteresis_mr_moment'] != \"\":\n 
S.append(old_div(float(rec['hysteresis_mr_moment']), float(\n rec['hysteresis_ms_moment'])))\n Bcr.append(float(rec['hysteresis_bcr']))\n Bc.append(float(rec['hysteresis_bc']))\n BcrBc.append(old_div(Bcr[-1], Bc[-1]))\n if 'er_synthetic_name' in list(rec.keys()) and rec['er_synthetic_name'] != \"\":\n rec['er_specimen_name'] = rec['er_synthetic_name']\n hsids.append(rec['er_specimen_name'])\n if len(rem_data) > 0:\n for rec in rem_data:\n if rec['remanence_bcr'] != \"\" and float(rec['remanence_bcr']) > 0:\n try:\n ind = hsids.index(rec['er_specimen_name'])\n Bcr1.append(float(rec['remanence_bcr']))\n Bcr1Bc.append(old_div(Bcr1[-1], Bc[ind]))\n S1.append(S[ind])\n Bcr2.append(Bcr[ind])\n except ValueError:\n if verbose:\n print('hysteresis data for ',\n rec['er_specimen_name'], ' not found')\n else:\n fnames = {'specimens': hyst_file}\n if contribution:\n con = contribution\n else:\n con = cb.Contribution(dir_path, read_tables=['specimens'],\n custom_filenames=fnames)\n if 'specimens' not in con.tables:\n print('-E- No specimen file found in {}'.format(os.path.realpath(dir_path)))\n return False, []\n spec_container = con.tables['specimens']\n spec_df = spec_container.df\n\n # get as much data as possible for naming plots\n #if pmagplotlib.isServer:\n con.propagate_location_to_specimens()\n\n\n loc_list = []\n\n if 'location' in spec_df.columns:\n loc_list = spec_df['location'].unique()\n do_rem = bool('rem_bcr' in spec_df.columns)\n\n for ind, row in spec_df.iterrows():\n if row['hyst_bcr'] and row['hyst_mr_moment']:\n S.append(\n old_div(float(row['hyst_mr_moment']), float(row['hyst_ms_moment'])))\n Bcr.append(float(row['hyst_bcr']))\n Bc.append(float(row['hyst_bc']))\n BcrBc.append(old_div(Bcr[-1], Bc[-1]))\n hsids.append(row['specimen'])\n if do_rem:\n if row['rem_bcr'] and float(row['rem_bcr']) > 0:\n try:\n Bcr1.append(float(row['rem_bcr']))\n Bcr1Bc.append(old_div(Bcr1[-1], Bc[-1]))\n S1.append(S[-1])\n Bcr2.append(Bcr[-1])\n except ValueError:\n if 
verbose:\n print('hysteresis data for ',\n row['specimen'], end=' ')\n print(' not found')\n\n #\n # now plot the day and S-Bc, S-Bcr plots\n #\n fnames = {'day': os.path.join(save_folder, \"_\".join(loc_list) + '_Day.' + fmt),\n 'S-Bcr': os.path.join(save_folder, \"_\".join(loc_list) + '_S-Bcr.' + fmt),\n 'S-Bc': os.path.join(save_folder, \"_\".join(loc_list) + '_S-Bc.' + fmt)}\n if len(Bcr1) > 0:\n\n plt.figure(num=DSC['day'], figsize=(5, 5))\n #plt.figure(num=DSC['S-Bc'], figsize=(5, 5))\n plt.figure(num=DSC['S-Bcr'], figsize=(5, 5))\n plt.figure(num=DSC['bcr1-bcr2'], figsize=(5, 5))\n\n pmagplotlib.plot_day(DSC['day'], Bcr1Bc, S1, 'ro')\n pmagplotlib.plot_s_bcr(DSC['S-Bcr'], Bcr1, S1, 'ro')\n #pmagplotlib.plot_init(DSC['bcr1-bcr2'], 5, 5)\n pmagplotlib.plot_bcr(DSC['bcr1-bcr2'], Bcr1, Bcr2)\n fnames.pop('S-Bc')\n fnames['bcr1-bcr2'] = os.path.join(save_folder, 'bcr1-bcr2.png')\n DSC.pop('S-Bc')\n if pmagplotlib.isServer:\n for key in list(DSC.keys()):\n fnames[key] = 'LO:_' + \":\".join(set(loc_list)) + '_' + 'SI:__SA:__SP:__TY:_' + key + '_.' + fmt\n if save:\n pmagplotlib.save_plots(DSC, fnames, incl_directory=True)\n return True, fnames.values()\n if interactive:\n pmagplotlib.draw_figs(DSC)\n ans = pmagplotlib.save_or_quit()\n if ans == 'a':\n pmagplotlib.save_plots(DSC, fnames, incl_directory=True)\n return True, fnames.values()\n\n else:\n plt.figure(num=DSC['day'], figsize=(5, 5))\n plt.figure(num=DSC['S-Bc'], figsize=(5, 5))\n plt.figure(num=DSC['S-Bcr'], figsize=(5, 5))\n #plt.figure(num=DSC['bcr1-bcr2'], figsize=(5, 5))\n del DSC['bcr1-bcr2']\n # do other plots instead\n pmagplotlib.plot_day(DSC['day'], BcrBc, S, 'bs')\n pmagplotlib.plot_s_bcr(DSC['S-Bcr'], Bcr, S, 'bs')\n pmagplotlib.plot_s_bc(DSC['S-Bc'], Bc, S, 'bs')\n if pmagplotlib.isServer:\n for key in list(DSC.keys()):\n fnames[key] = 'LO:_' + \":\".join(set(loc_list)) + '_' + 'SI:__SA:__SP:__TY:_' + key + '_.' 
+ fmt\n if save:\n pmagplotlib.save_plots(DSC, fnames, incl_directory=True)\n return True, fnames.values()\n elif interactive:\n pmagplotlib.draw_figs(DSC)\n ans = pmagplotlib.save_or_quit()\n if ans == 'a':\n pmagplotlib.save_plots(DSC, fnames, incl_directory=True)\n return True, fnames.values()\n return True, []", "response": "Makes Day plots (Day et al., 1977) and squareness/coercivity plots from MagIC hysteresis data."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef curie(path_to_file='.', file_name='', magic=False,\n window_length=3, save=False, save_folder='.', fmt='svg', t_begin=\"\", t_end=\"\"):\n \"\"\"\n Plots and interprets curie temperature data.\n ***\n The 1st derivative is calculated from smoothed M-T curve (convolution\n with triangular window with width= <-w> degrees)\n ***\n The 2nd derivative is calculated from smoothed 1st derivative curve\n (using the same sliding window width)\n ***\n The estimated curie temp. is the maximum of the 2nd derivative.\n Temperature steps should be in multiples of 1.0 degrees.\n\n Parameters\n __________\n file_name : name of file to be opened\n\n Optional Parameters (defaults are used if not specified)\n ----------\n path_to_file : path to directory that contains file (default is current directory, '.')\n window_length : dimension of smoothing window (input to smooth() function)\n save : boolean argument to save plots (default is False)\n save_folder : relative directory where plots will be saved (default is current directory, '.')\n fmt : format of saved figures\n t_begin: start of truncated window for search\n t_end: end of truncated window for search\n magic : True if MagIC formatted measurements.txt file\n \"\"\"\n plot = 0\n window_len = window_length\n\n # read data from file\n complete_path = os.path.join(path_to_file, file_name)\n if magic:\n data_df = pd.read_csv(complete_path, sep='\\t', header=1)\n T = data_df['meas_temp'].values-273\n magn_key = cb.get_intensity_col(data_df)\n M = 
data_df[magn_key].values\n else:\n Data = np.loadtxt(complete_path, dtype=np.float)\n T = Data.transpose()[0]\n M = Data.transpose()[1]\n T = list(T)\n M = list(M)\n # cut the data if -t is one of the flags\n if t_begin != \"\":\n while T[0] < t_begin:\n M.pop(0)\n T.pop(0)\n while T[-1] > t_end:\n M.pop(-1)\n T.pop(-1)\n\n # prepare the signal:\n # from M(T) array with unequal deltaT\n # to M(T) array with deltaT=(1 degree).\n # if delataT is larger, then points are added using linear fit between\n # consecutive data points.\n # exit if deltaT is not integer\n i = 0\n while i < (len(T) - 1):\n if (T[i + 1] - T[i]) % 1 > 0.001:\n print(\"delta T should be integer, this program will not work!\")\n print(\"temperature range:\", T[i], T[i + 1])\n sys.exit()\n if (T[i + 1] - T[i]) == 0.:\n M[i] = np.average([M[i], M[i + 1]])\n M.pop(i + 1)\n T.pop(i + 1)\n elif (T[i + 1] - T[i]) < 0.:\n M.pop(i + 1)\n T.pop(i + 1)\n print(\"check data in T=%.0f ,M[T] is ignored\" % (T[i]))\n elif (T[i + 1] - T[i]) > 1.:\n slope, b = np.polyfit([T[i], T[i + 1]], [M[i], M[i + 1]], 1)\n for j in range(int(T[i + 1]) - int(T[i]) - 1):\n M.insert(i + 1, slope * (T[i] + 1.) 
+ b)\n T.insert(i + 1, (T[i] + 1.))\n i = i + 1\n i = i + 1\n\n # calculate the smoothed signal\n M = np.array(M, 'f')\n T = np.array(T, 'f')\n M_smooth = []\n M_smooth = smooth(M, window_len)\n\n # plot the original data and the smooth data\n PLT = {'M_T': 1, 'der1': 2, 'der2': 3, 'Curie': 4}\n plt.figure(num=PLT['M_T'], figsize=(5, 5))\n string = 'M-T (sliding window=%i)' % int(window_len)\n pmagplotlib.plot_xy(PLT['M_T'], T, M_smooth, sym='-')\n pmagplotlib.plot_xy(PLT['M_T'], T, M, sym='--',\n xlab='Temperature C', ylab='Magnetization', title=string)\n\n # calculate first derivative\n d1, T_d1 = [], []\n for i in range(len(M_smooth) - 1):\n Dy = M_smooth[i - 1] - M_smooth[i + 1]\n Dx = T[i - 1] - T[i + 1]\n d1.append(old_div(Dy, Dx))\n T_d1 = T[1:len(T - 1)]\n d1 = np.array(d1, 'f')\n d1_smooth = smooth(d1, window_len)\n\n # plot the first derivative\n plt.figure(num=PLT['der1'], figsize=(5, 5))\n string = '1st derivative (sliding window=%i)' % int(window_len)\n pmagplotlib.plot_xy(PLT['der1'], T_d1, d1_smooth,\n sym='-', xlab='Temperature C', title=string)\n pmagplotlib.plot_xy(PLT['der1'], T_d1, d1, sym='b--')\n\n # calculate second derivative\n d2, T_d2 = [], []\n for i in range(len(d1_smooth) - 1):\n Dy = d1_smooth[i - 1] - d1_smooth[i + 1]\n Dx = T[i - 1] - T[i + 1]\n # print Dy/Dx\n d2.append(old_div(Dy, Dx))\n T_d2 = T[2:len(T - 2)]\n d2 = np.array(d2, 'f')\n d2_smooth = smooth(d2, window_len)\n\n # plot the second derivative\n plt.figure(num=PLT['der2'], figsize=(5, 5))\n string = '2nd derivative (sliding window=%i)' % int(window_len)\n pmagplotlib.plot_xy(PLT['der2'], T_d2, d2, sym='-',\n xlab='Temperature C', title=string)\n d2 = list(d2)\n print('second derivative maximum is at T=%i' %\n int(T_d2[d2.index(max(d2))]))\n\n # calculate Curie temperature for different width of sliding windows\n curie, curie_1 = [], []\n wn = list(range(5, 50, 1))\n for win in wn:\n # calculate the smoothed signal\n M_smooth = []\n M_smooth = smooth(M, win)\n # calculate 
first derivative\n d1, T_d1 = [], []\n for i in range(len(M_smooth) - 1):\n Dy = M_smooth[i - 1] - M_smooth[i + 1]\n Dx = T[i - 1] - T[i + 1]\n d1.append(old_div(Dy, Dx))\n T_d1 = T[1:len(T - 1)]\n d1 = np.array(d1, 'f')\n d1_smooth = smooth(d1, win)\n # calculate second derivative\n d2, T_d2 = [], []\n for i in range(len(d1_smooth) - 1):\n Dy = d1_smooth[i - 1] - d1_smooth[i + 1]\n Dx = T[i - 1] - T[i + 1]\n d2.append(old_div(Dy, Dx))\n T_d2 = T[2:len(T - 2)]\n d2 = np.array(d2, 'f')\n d2_smooth = smooth(d2, win)\n d2 = list(d2)\n d2_smooth = list(d2_smooth)\n curie.append(T_d2[d2.index(max(d2))])\n curie_1.append(T_d2[d2_smooth.index(max(d2_smooth))])\n\n # plot Curie temp for different sliding window length\n plt.figure(num=PLT['Curie'], figsize=(5, 5))\n pmagplotlib.plot_xy(PLT['Curie'], wn, curie, sym='.',\n xlab='Sliding Window Width (degrees)', ylab='Curie Temp', title='Curie Statistics')\n files = {}\n for key in list(PLT.keys()):\n files[key] = str(key) + '.' + fmt\n if save == True:\n for key in list(PLT.keys()):\n try:\n plt.figure(num=PLT[key])\n plt.savefig(save_folder + '/' + files[key].replace('/', '-'))\n except:\n print('could not save: ', PLT[key], files[key])\n print(\"output file format not supported \")\n plt.show()", "response": "Plots magnetization versus temperature data and estimates the Curie temperature from the maximum of the second derivative."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef chi_magic2(path_to_file='.', file_name='magic_measurements.txt',\n save=False, save_folder='.', fmt='svg'):\n \"\"\"\n Generates plots that compare susceptibility to temperature at different\n frequencies.\n\n Optional Parameters (defaults are used if not specified)\n ----------\n path_to_file : path to directory that contains file (default is current directory, '.')\n file_name : name of file to be opened (default is 'magic_measurements.txt')\n save : boolean argument to save plots (default is False)\n save_folder : relative directory where plots will be saved 
(default is current directory, '.')\n \"\"\"\n cont, FTinit, BTinit, k = \"\", 0, 0, 0\n complete_path = os.path.join(path_to_file, file_name)\n Tind, cont = 0, \"\"\n EXP = \"\"\n #\n meas_data, file_type = pmag.magic_read(complete_path)\n #\n # get list of unique experiment names\n #\n # initialize some variables (a continuation flag, plot initialization\n # flags and the experiment counter\n experiment_names = []\n for rec in meas_data:\n if rec['magic_experiment_name'] not in experiment_names:\n experiment_names.append(rec['magic_experiment_name'])\n #\n # hunt through by experiment name\n if EXP != \"\":\n try:\n k = experiment_names.index(EXP)\n except:\n print(\"Bad experiment name\")\n sys.exit()\n while k < len(experiment_names):\n e = experiment_names[k]\n if EXP == \"\":\n print(e, k + 1, 'out of ', len(experiment_names))\n #\n # initialize lists of data, susceptibility, temperature, frequency and\n # field\n X, T, F, B = [], [], [], []\n for rec in meas_data:\n methcodes = rec['magic_method_codes']\n meths = methcodes.strip().split(':')\n if rec['magic_experiment_name'] == e and \"LP-X\" in meths: # looking for chi measurement\n if 'measurement_temp' not in list(rec.keys()):\n rec['measurement_temp'] = '300' # set defaults\n if 'measurement_freq' not in list(rec.keys()):\n rec['measurement_freq'] = '0' # set defaults\n if 'measurement_lab_field_ac' not in list(rec.keys()):\n rec['measurement_lab_field_ac'] = '0' # set default\n X.append(float(rec['measurement_x']))\n T.append(float(rec['measurement_temp']))\n F.append(float(rec['measurement_freq']))\n B.append(float(rec['measurement_lab_field_ac']))\n #\n # get unique list of Ts,Fs, and Bs\n #\n Ts, Fs, Bs = [], [], []\n for k in range(len(X)): # hunt through all the measurements\n if T[k] not in Ts:\n Ts.append(T[k]) # append if not in list\n if F[k] not in Fs:\n Fs.append(F[k])\n if B[k] not in Bs:\n Bs.append(B[k])\n Ts.sort() # sort list of temperatures, frequencies and fields\n Fs.sort()\n 
Bs.sort()\n if '-x' in sys.argv:\n k = len(experiment_names) + 1 # just plot the one\n else:\n k += 1 # increment experiment number\n #\n # plot chi versus T and F holding B constant\n #\n plotnum = 1 # initialize plot number to 1\n if len(X) > 2: # if there are any data to plot, continue\n b = Bs[-1] # keeping field constant and at maximum\n XTF = [] # initialize list of chi versus Temp and freq\n for f in Fs: # step through frequencies sequentially\n XT = [] # initialize list of chi versus temp\n for kk in range(len(X)): # hunt through all the data\n if F[kk] == f and B[kk] == b: # select data with given freq and field\n XT.append([X[kk], T[kk]]) # append to list\n XTF.append(XT) # append list to list of frequencies\n if len(XT) > 1: # if there are any temperature dependent data\n plt.figure(num=plotnum, figsize=(5, 5)) # initialize plot\n # call the plotting function\n pmagplotlib.plot_xtf(plotnum, XTF, Fs, e, b)\n pmagplotlib.show_fig(plotnum)\n plotnum += 1 # increment plot number\n f = Fs[0] # set frequency to minimum\n XTB = [] # initialize list if chi versus Temp and field\n for b in Bs: # step through field values\n XT = [] # initial chi versus temp list for this field\n for kk in range(len(X)): # hunt through all the data\n if F[kk] == f and B[kk] == b: # select data with given freq and field\n XT.append([X[kk], T[kk]]) # append to list\n XTB.append(XT)\n if len(XT) > 1: # if there are any temperature dependent data\n plt.figure(num=plotnum, figsize=(5, 5)) # set up plot\n # call the plotting function\n pmagplotlib.plot_xtb(plotnum, XTB, Bs, e, f)\n pmagplotlib.show_fig(plotnum)\n plotnum += 1 # increment plot number\n if save == True:\n files = {}\n PLTS = {}\n for p in range(1, plotnum):\n key = str(p)\n files[key] = e + '_' + key + '.' 
+ fmt\n PLTS[key] = p\n for key in list(PLTS.keys()):\n try:\n plt.figure(num=PLTS[key])\n plt.savefig(save_folder + '/' +\n files[key].replace('/', '-'))\n except:\n print('could not save: ', PLTS[key], files[key])\n print(\"output file format not supported \")", "response": "Generates susceptibility versus temperature plots at different frequencies from a magic_measurements.txt file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef pmag_results_extract(res_file=\"pmag_results.txt\", crit_file=\"\", spec_file=\"\",\n age_file=\"\", latex=False, grade=False, WD=\".\"):\n \"\"\"\n Generate tab delimited output file(s) with result data.\n Save output files and return True if successful.\n Possible output files: Directions, Intensities, SiteNfo, Criteria,\n Specimens\n\n Optional Parameters (defaults are used if not specified)\n ----------\n res_file : name of pmag_results file (default is \"pmag_results.txt\")\n crit_file : name of criteria file (default is \"pmag_criteria.txt\")\n spec_file : name of specimen file (default is \"pmag_specimens.txt\")\n age_file : name of age file (default is \"er_ages.txt\")\n latex : boolean argument to output in LaTeX (default is False)\n WD : path to directory that contains input files and takes output (default is current directory, '.')\n \"\"\"\n # format outfiles\n if latex:\n latex = 1\n file_type = '.tex'\n else:\n latex = 0\n file_type = '.txt'\n dir_path = os.path.realpath(WD)\n outfile = os.path.join(dir_path, 'Directions' + file_type)\n Ioutfile = os.path.join(dir_path, 'Intensities' + file_type)\n Soutfile = os.path.join(dir_path, 'SiteNfo' + file_type)\n Specout = os.path.join(dir_path, 'Specimens' + file_type)\n Critout = os.path.join(dir_path, 'Criteria' + file_type)\n # format infiles\n res_file = os.path.join(dir_path, res_file)\n if crit_file:\n crit_file = os.path.join(dir_path, crit_file)\n if spec_file:\n spec_file = os.path.join(dir_path, spec_file)\n else:\n grade = False\n # open output 
files\n f = open(outfile, 'w')\n sf = open(Soutfile, 'w')\n fI = open(Ioutfile, 'w')\n if crit_file:\n cr = open(Critout, 'w')\n # set up column headers\n Sites, file_type = pmag.magic_read(res_file)\n if crit_file:\n Crits, file_type = pmag.magic_read(crit_file)\n else:\n Crits = []\n SiteCols = [\"Site\", \"Location\",\n \"Lat. (N)\", \"Long. (E)\", \"Age \", \"Age sigma\", \"Units\"]\n SiteKeys = [\"er_site_names\", \"average_lat\", \"average_lon\", \"average_age\",\n \"average_age_sigma\", \"average_age_unit\"]\n DirCols = [\"Site\", 'Comp.', \"perc TC\", \"Dec.\", \"Inc.\", \"Nl\", \"Np\", \"k \", \"R\", \"a95\",\n \"PLat\", \"PLong\"]\n DirKeys = [\"er_site_names\", \"pole_comp_name\", \"tilt_correction\", \"average_dec\", \"average_inc\",\n \"average_n_lines\", \"average_n_planes\", \"average_k\", \"average_r\", \"average_alpha95\",\n \"vgp_lat\", \"vgp_lon\"]\n IntCols = [\"Site\", \"N\", \"B (uT)\", \"sigma\",\n \"sigma perc\", \"VADM\", \"VADM sigma\"]\n IntKeys = [\"er_site_names\", \"average_int_n\", \"average_int\", \"average_int_sigma\",\n 'average_int_sigma_perc', \"vadm\", \"vadm_sigma\"]\n AllowedKeys = ['specimen_frac', 'specimen_scat', 'specimen_gap_max', 'measurement_step_min',\n 'measurement_step_max', 'measurement_step_unit', 'specimen_polarity',\n 'specimen_nrm', 'specimen_direction_type', 'specimen_comp_nmb', 'specimen_mad',\n 'specimen_alpha95', 'specimen_n', 'specimen_int_sigma',\n 'specimen_int_sigma_perc', 'specimen_int_rel_sigma',\n 'specimen_int_rel_sigma_perc', 'specimen_int_mad', 'specimen_int_n',\n 'specimen_w', 'specimen_q', 'specimen_f', 'specimen_fvds', 'specimen_b_sigma',\n 'specimen_b_beta', 'specimen_g', 'specimen_dang', 'specimen_md',\n 'specimen_ptrm', 'specimen_drat', 'specimen_drats', 'specimen_rsc',\n 'specimen_viscosity_index', 'specimen_magn_moment', 'specimen_magn_volume',\n 'specimen_magn_mass', 'specimen_int_ptrm_n', 'specimen_delta', 'specimen_theta',\n 'specimen_gamma', 'sample_polarity', 'sample_nrm', 
'sample_direction_type',\n 'sample_comp_nmb', 'sample_sigma', 'sample_alpha95', 'sample_n',\n 'sample_n_lines', 'sample_n_planes', 'sample_k', 'sample_r',\n 'sample_tilt_correction', 'sample_int_sigma', 'sample_int_sigma_perc',\n 'sample_int_rel_sigma', 'sample_int_rel_sigma_perc', 'sample_int_n',\n 'sample_magn_moment', 'sample_magn_volume', 'sample_magn_mass', 'site_polarity',\n 'site_nrm', 'site_direction_type', 'site_comp_nmb', 'site_sigma',\n 'site_alpha95', 'site_n', 'site_n_lines', 'site_n_planes', 'site_k', 'site_r',\n 'site_tilt_correction', 'site_int_sigma', 'site_int_sigma_perc',\n 'site_int_rel_sigma', 'site_int_rel_sigma_perc', 'site_int_n',\n 'site_magn_moment', 'site_magn_volume', 'site_magn_mass', 'average_age_min',\n 'average_age_max', 'average_age_sigma', 'average_age_unit', 'average_sigma',\n 'average_alpha95', 'average_n', 'average_nn', 'average_k', 'average_r',\n 'average_int_sigma', 'average_int_rel_sigma', 'average_int_rel_sigma_perc',\n 'average_int_n', 'average_int_nn', 'vgp_dp', 'vgp_dm', 'vgp_sigma',\n 'vgp_alpha95', 'vgp_n', 'vdm_sigma', 'vdm_n', 'vadm_sigma', 'vadm_n']\n if crit_file:\n crit = Crits[0] # get a list of useful keys\n for key in list(crit.keys()):\n if key not in AllowedKeys:\n del(crit[key])\n for key in list(crit.keys()):\n if (not crit[key]) or (eval(crit[key]) > 1000) or (eval(crit[key]) == 0):\n # get rid of all blank or too big ones or too little ones\n del(crit[key])\n CritKeys = list(crit.keys())\n if spec_file:\n Specs, file_type = pmag.magic_read(spec_file)\n fsp = open(Specout, 'w') # including specimen intensities if desired\n SpecCols = [\"Site\", \"Specimen\", \"B (uT)\", \"MAD\", \"Beta\", \"N\", \"Q\", \"DANG\", \"f-vds\",\n \"DRATS\", \"T (C)\"]\n SpecKeys = ['er_site_name', 'er_specimen_name', 'specimen_int', 'specimen_int_mad',\n 'specimen_b_beta', 'specimen_int_n', 'specimen_q', 'specimen_dang',\n 'specimen_fvds', 'specimen_drats', 'trange']\n Xtra = ['specimen_frac', 'specimen_scat', 'specimen_gmax']\n 
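The criteria-pruning step in pmag_results_extract above drops blank, zero, or implausibly large (> 1000) criterion values by calling eval on each string. A minimal standalone sketch of the same filtering idea, using float() instead of eval() for safety (the helper name prune_criteria is hypothetical, not part of pmagpy):

```python
def prune_criteria(crit):
    """Drop criteria whose values are blank, non-numeric, zero, or
    implausibly large (> 1000), mirroring the pruning step above but
    parsing with float() rather than eval()."""
    pruned = {}
    for key, val in crit.items():
        try:
            num = float(val)
        except (TypeError, ValueError):
            continue  # blank or non-numeric values are dropped
        if num != 0 and num <= 1000:
            pruned[key] = val
    return pruned

print(prune_criteria({'specimen_mad': '5', 'specimen_n': '0',
                      'specimen_scat': '', 'site_k': '99999'}))
# keeps only the usable criterion: {'specimen_mad': '5'}
```

Using float() avoids executing arbitrary strings from the criteria file while preserving the same keep/drop behavior for numeric values.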
if grade:\n SpecCols.append('Grade')\n SpecKeys.append('specimen_grade')\n for x in Xtra: # put in the new intensity keys if present\n if x in list(Specs[0].keys()):\n SpecKeys.append(x)\n newkey = \"\"\n for k in x.split('_')[1:]:\n newkey = newkey + k + '_'\n SpecCols.append(newkey.strip('_'))\n SpecCols.append('Corrections')\n SpecKeys.append('corrections')\n # these should be multiplied by 1e6\n Micro = ['specimen_int', 'average_int', 'average_int_sigma']\n Zeta = ['vadm', 'vadm_sigma'] # these should be multiplied by 1e21\n # write out the header information for each output file\n if latex: # write out the latex header stuff\n sep = ' & '\n end = '\\\\\\\\'\n f.write('\\\\documentclass{article}\\n')\n f.write('\\\\usepackage[margin=1in]{geometry}\\n')\n f.write('\\\\usepackage{longtable}\\n')\n f.write('\\\\begin{document}\\n')\n sf.write('\\\\documentclass{article}\\n')\n sf.write('\\\\usepackage[margin=1in]{geometry}\\n')\n sf.write('\\\\usepackage{longtable}\\n')\n sf.write('\\\\begin{document}\\n')\n fI.write('\\\\documentclass{article}\\n')\n fI.write('\\\\usepackage[margin=1in]{geometry}\\n')\n fI.write('\\\\usepackage{longtable}\\n')\n fI.write('\\\\begin{document}\\n')\n if crit_file:\n cr.write('\\\\documentclass{article}\\n')\n cr.write('\\\\usepackage[margin=1in]{geometry}\\n')\n cr.write('\\\\usepackage{longtable}\\n')\n cr.write('\\\\begin{document}\\n')\n if spec_file:\n fsp.write('\\\\documentclass{article}\\n')\n fsp.write('\\\\usepackage[margin=1in]{geometry}\\n')\n fsp.write('\\\\usepackage{longtable}\\n')\n fsp.write('\\\\begin{document}\\n')\n tabstring = '\\\\begin{longtable}{'\n fstring = tabstring\n for k in range(len(SiteCols)):\n fstring = fstring + 'r'\n sf.write(fstring + '}\\n')\n sf.write('\\hline\\n')\n fstring = tabstring\n for k in range(len(DirCols)):\n fstring = fstring + 'r'\n f.write(fstring + '}\\n')\n f.write('\\hline\\n')\n fstring = tabstring\n for k in range(len(IntCols)):\n fstring = fstring + 'r'\n fI.write(fstring + 
'}\\n')\n fI.write('\\hline\\n')\n fstring = tabstring\n if crit_file:\n for k in range(len(CritKeys)):\n fstring = fstring + 'r'\n cr.write(fstring + '}\\n')\n cr.write('\\hline\\n')\n if spec_file:\n fstring = tabstring\n for k in range(len(SpecCols)):\n fstring = fstring + 'r'\n fsp.write(fstring + '}\\n')\n fsp.write('\\hline\\n')\n else: # just set the tab and line endings for tab delimited\n sep = ' \\t '\n end = ''\n# now write out the actual column headers\n Soutstring, Doutstring, Ioutstring, Spoutstring, Croutstring = \"\", \"\", \"\", \"\", \"\"\n for k in range(len(SiteCols)):\n Soutstring = Soutstring + SiteCols[k] + sep\n Soutstring = Soutstring.strip(sep)\n Soutstring = Soutstring + end + '\\n'\n sf.write(Soutstring)\n for k in range(len(DirCols)):\n Doutstring = Doutstring + DirCols[k] + sep\n Doutstring = Doutstring.strip(sep)\n Doutstring = Doutstring + end + '\\n'\n f.write(Doutstring)\n for k in range(len(IntCols)):\n Ioutstring = Ioutstring + IntCols[k] + sep\n Ioutstring = Ioutstring.strip(sep)\n Ioutstring = Ioutstring + end + '\\n'\n fI.write(Ioutstring)\n if crit_file:\n for k in range(len(CritKeys)):\n Croutstring = Croutstring + CritKeys[k] + sep\n Croutstring = Croutstring.strip(sep)\n Croutstring = Croutstring + end + '\\n'\n cr.write(Croutstring)\n if spec_file:\n for k in range(len(SpecCols)):\n Spoutstring = Spoutstring + SpecCols[k] + sep\n Spoutstring = Spoutstring.strip(sep)\n Spoutstring = Spoutstring + end + \"\\n\"\n fsp.write(Spoutstring)\n if latex: # put in a horizontal line in latex file\n f.write('\\hline\\n')\n sf.write('\\hline\\n')\n fI.write('\\hline\\n')\n if crit_file:\n cr.write('\\hline\\n')\n if spec_file:\n fsp.write('\\hline\\n')\n # do criteria\n if crit_file:\n for crit in Crits:\n Croutstring = \"\"\n for key in CritKeys:\n Croutstring = Croutstring + crit[key] + sep\n Croutstring = Croutstring.strip(sep) + end\n cr.write(Croutstring + '\\n')\n # do directions\n # get all results with VGPs\n VGPs = 
pmag.get_dictitem(Sites, 'vgp_lat', '', 'F')\n VGPs = pmag.get_dictitem(VGPs, 'data_type', 'i',\n 'T') # get site level stuff\n for site in VGPs:\n if len(site['er_site_names'].split(\":\")) == 1:\n if 'er_sample_names' not in list(site.keys()):\n site['er_sample_names'] = ''\n if 'pole_comp_name' not in list(site.keys()):\n site['pole_comp_name'] = \"A\"\n if 'average_nn' not in list(site.keys()) and 'average_n' in list(site.keys()):\n site['average_nn'] = site['average_n']\n if 'average_n_lines' not in list(site.keys()):\n site['average_n_lines'] = site['average_nn']\n if 'average_n_planes' not in list(site.keys()):\n site['average_n_planes'] = \"\"\n Soutstring, Doutstring = \"\", \"\"\n for key in SiteKeys:\n if key in list(site.keys()):\n Soutstring = Soutstring + site[key] + sep\n Soutstring = Soutstring.strip(sep) + end\n sf.write(Soutstring + '\\n')\n for key in DirKeys:\n if key in list(site.keys()):\n Doutstring = Doutstring + site[key] + sep\n Doutstring = Doutstring.strip(sep) + end\n f.write(Doutstring + '\\n')\n# now do intensities\n VADMs = pmag.get_dictitem(Sites, 'vadm', '', 'F')\n VADMs = pmag.get_dictitem(VADMs, 'data_type', 'i', 'T')\n for site in VADMs: # do results level stuff\n if site not in VGPs:\n Soutstring = \"\"\n for key in SiteKeys:\n if key in list(site.keys()):\n Soutstring = Soutstring + site[key] + sep\n else:\n Soutstring = Soutstring + \" \" + sep\n Soutstring = Soutstring.strip(sep) + end\n sf.write(Soutstring + '\\n')\n if len(site['er_site_names'].split(\":\")) == 1 and site['data_type'] == 'i':\n if 'average_int_sigma_perc' not in list(site.keys()):\n site['average_int_sigma_perc'] = \"0\"\n if site[\"average_int_sigma\"] == \"\":\n site[\"average_int_sigma\"] = \"0\"\n if site[\"average_int_sigma_perc\"] == \"\":\n site[\"average_int_sigma_perc\"] = \"0\"\n if site[\"vadm\"] == \"\":\n site[\"vadm\"] = \"0\"\n if site[\"vadm_sigma\"] == \"\":\n site[\"vadm_sigma\"] = \"0\"\n for key in list(site.keys()): # reformat vadms, 
intensities\n if key in Micro:\n site[key] = '%7.1f' % (float(site[key]) * 1e6)\n if key in Zeta:\n site[key] = '%7.1f' % (float(site[key]) * 1e-21)\n outstring = \"\"\n for key in IntKeys:\n if key not in list(site.keys()):\n site[key] = \"\"\n outstring = outstring + site[key] + sep\n outstring = outstring.strip(sep) + end + '\\n'\n fI.write(outstring)\n# VDMs=pmag.get_dictitem(Sites,'vdm','','F') # get non-blank VDMs\n# for site in VDMs: # do results level stuff\n# if len(site['er_site_names'].split(\":\"))==1:\n# if 'average_int_sigma_perc' not in site.keys():site['average_int_sigma_perc']=\"0\"\n# if site[\"average_int_sigma\"]==\"\":site[\"average_int_sigma\"]=\"0\"\n# if site[\"average_int_sigma_perc\"]==\"\":site[\"average_int_sigma_perc\"]=\"0\"\n# if site[\"vadm\"]==\"\":site[\"vadm\"]=\"0\"\n# if site[\"vadm_sigma\"]==\"\":site[\"vadm_sigma\"]=\"0\"\n# for key in site.keys(): # reformat vadms, intensities\n# if key in Micro: site[key]='%7.1f'%(float(site[key])*1e6)\n# if key in Zeta: site[key]='%7.1f'%(float(site[key])*1e-21)\n# outstring=\"\"\n# for key in IntKeys:\n# outstring=outstring+site[key]+sep\n# fI.write(outstring.strip(sep)+'\\n')\n if spec_file:\n SpecsInts = pmag.get_dictitem(Specs, 'specimen_int', '', 'F')\n for spec in SpecsInts:\n spec['trange'] = '%i' % (int(float(spec['measurement_step_min']) - 273)) + \\\n '-' + '%i' % (int(float(spec['measurement_step_max']) - 273))\n meths = spec['magic_method_codes'].split(':')\n corrections = ''\n for meth in meths:\n if 'DA' in meth:\n corrections = corrections + meth[3:] + ':'\n corrections = corrections.strip(':')\n if corrections.strip() == \"\":\n corrections = \"None\"\n spec['corrections'] = corrections\n outstring = \"\"\n for key in SpecKeys:\n if key in Micro:\n spec[key] = '%7.1f' % (float(spec[key]) * 1e6)\n if key in Zeta:\n spec[key] = '%7.1f' % (float(spec[key]) * 1e-21)\n outstring = outstring + spec[key] + sep\n fsp.write(outstring.strip(sep) + end + '\\n')\n #\n if latex: # write 
out the tail stuff\n f.write('\\hline\\n')\n sf.write('\\hline\\n')\n fI.write('\\hline\\n')\n f.write('\\end{longtable}\\n')\n sf.write('\\end{longtable}\\n')\n fI.write('\\end{longtable}\\n')\n f.write('\\end{document}\\n')\n sf.write('\\end{document}\\n')\n fI.write('\\end{document}\\n')\n if spec_file:\n fsp.write('\\hline\\n')\n fsp.write('\\end{longtable}\\n')\n fsp.write('\\end{document}\\n')\n if crit_file:\n cr.write('\\hline\\n')\n cr.write('\\end{longtable}\\n')\n cr.write('\\end{document}\\n')\n f.close()\n sf.close()\n fI.close()\n print('data saved in: ', outfile, Ioutfile, Soutfile)\n outfiles = [outfile, Ioutfile, Soutfile]\n if spec_file:\n fsp.close()\n print('specimen data saved in: ', Specout)\n outfiles.append(Specout)\n if crit_file:\n cr.close()\n print('Selection criteria saved in: ', Critout)\n outfiles.append(Critout)\n return True, outfiles", "response": "Extract data from pmag_results. txt and return a list of files."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef demag_magic(path_to_file='.', file_name='magic_measurements.txt',\n save=False, save_folder='.', fmt='svg', plot_by='loc',\n treat=None, XLP=\"\", individual=None, average_measurements=False,\n single_plot=False):\n '''\n Takes demagnetization data (from magic_measurements file) and outputs\n intensity plots (with optional save).\n\n Parameters\n -----------\n path_to_file : path to directory that contains files (default is current directory, '.')\n file_name : name of measurements file (default is 'magic_measurements.txt')\n save : boolean argument to save plots (default is False)\n save_folder : relative directory where plots will be saved (default is current directory, '.')\n fmt : format of saved figures (default is 'svg')\n plot_by : specifies what sampling level you wish to plot the data at\n ('loc' -- plots all samples of the same location on the same plot\n 'exp' -- plots all samples of the same expedition on the same 
plot\n 'site' -- plots all samples of the same site on the same plot\n 'sam' -- plots all measurements of the same sample on the same plot\n 'spc' -- plots each specimen individually)\n treat : treatment step\n 'T' = thermal demagnetization\n 'AF' = alternating field demagnetization\n 'M' = microwave radiation demagnetization\n (default is 'AF')\n XLP : filter data by a particular method\n individual : This function outputs all plots by default. If plotting by sample\n or specimen, you may not wish to see (or wait for) every single plot. You can\n therefore specify a particular plot by setting this keyword argument to\n a string of the site/sample/specimen name.\n average_measurements : Option to average demagnetization measurements by\n the grouping specified with the 'plot_by' keyword argument (default is False)\n single_plot : Option to output a single plot with all measurements (default is False)\n '''\n\n FIG = {} # plot dictionary\n FIG['demag'] = 1 # demag is figure 1\n in_file, plot_key, LT = os.path.join(\n path_to_file, file_name), 'er_location_name', \"LT-AF-Z\"\n norm = 1\n units, dmag_key = 'T', 'treatment_ac_field'\n plot_num = 0\n\n if plot_by == 'loc':\n plot_key = 'er_location_name'\n elif plot_by == 'exp':\n plot_key = 'er_expedition_name'\n elif plot_by == 'site':\n plot_key = 'er_site_name'\n elif plot_by == 'sam':\n plot_key = 'er_sample_name'\n elif plot_by == 'spc':\n plot_key = 'er_specimen_name'\n\n if treat != None:\n LT = 'LT-' + treat + '-Z' # get lab treatment for plotting\n if LT == 'LT-T-Z':\n units, dmag_key = 'K', 'treatment_temp'\n elif LT == 'LT-AF-Z':\n units, dmag_key = 'T', 'treatment_ac_field'\n elif LT == 'LT-M-Z':\n units, dmag_key = 'J', 'treatment_mw_energy'\n else:\n units = 'U'\n else:\n LT = 'LT-AF-Z'\n\n plot_dict = {}\n data, file_type = pmag.magic_read(in_file)\n sids = pmag.get_specs(data)\n plt.figure(num=FIG['demag'], figsize=(5, 5))\n print(len(data), ' records read from ', in_file)\n #\n #\n # 
find desired intensity data\n #\n # get plotlist\n #\n plotlist, intlist = [], ['measurement_magnitude', 'measurement_magn_moment',\n 'measurement_magn_volume', 'measurement_magn_mass']\n IntMeths = []\n FixData = []\n for rec in data:\n meths = []\n methcodes = rec['magic_method_codes'].split(':')\n for meth in methcodes:\n meths.append(meth.strip())\n for key in list(rec.keys()):\n if key in intlist and rec[key] != \"\":\n if key not in IntMeths:\n IntMeths.append(key)\n if rec[plot_key] not in plotlist and LT in meths:\n plotlist.append(rec[plot_key])\n if 'measurement_flag' not in list(rec.keys()):\n rec['measurement_flag'] = 'g'\n FixData.append(rec)\n plotlist.sort()\n if len(IntMeths) == 0:\n print('No intensity information found')\n data = FixData\n # plot first intensity method found - normalized to initial value anyway -\n # doesn't matter which used\n int_key = IntMeths[0]\n # print plotlist\n if individual is not None:\n if type(individual) == list or type(individual) == tuple:\n plotlist = list(individual)\n else:\n plotlist = []\n plotlist.append(individual)\n for plot in plotlist:\n print(plot, 'plotting by: ', plot_key)\n # fish out all the data for this type of plot\n PLTblock = pmag.get_dictitem(data, plot_key, plot, 'T')\n # fish out all the dmag for this experiment type\n PLTblock = pmag.get_dictitem(PLTblock, 'magic_method_codes', LT, 'has')\n # get all with this intensity key non-blank\n PLTblock = pmag.get_dictitem(PLTblock, int_key, '', 'F')\n if XLP != \"\":\n # reject data with XLP in method_code\n PLTblock = pmag.get_dictitem(\n PLTblock, 'magic_method_codes', XLP, 'not')\n # for plot in plotlist:\n if len(PLTblock) > 2:\n title = PLTblock[0][plot_key]\n spcs = []\n for rec in PLTblock:\n if rec['er_specimen_name'] not in spcs:\n spcs.append(rec['er_specimen_name'])\n if average_measurements is False:\n for spc in spcs:\n # plot specimen by specimen\n SPCblock = pmag.get_dictitem(\n PLTblock, 'er_specimen_name', spc, 'T')\n INTblock = 
[]\n for rec in SPCblock:\n INTblock.append([float(rec[dmag_key]), 0, 0, float(\n rec[int_key]), 1, rec['measurement_flag']])\n if len(INTblock) > 2:\n pmagplotlib.plot_mag(\n FIG['demag'], INTblock, title, 0, units, norm)\n else:\n AVGblock = {}\n for spc in spcs:\n # plot specimen by specimen\n SPCblock = pmag.get_dictitem(\n PLTblock, 'er_specimen_name', spc, 'T')\n for rec in SPCblock:\n if rec['measurement_flag'] == 'g':\n if float(rec[dmag_key]) not in list(AVGblock.keys()):\n AVGblock[float(rec[dmag_key])] = [\n float(rec[int_key])]\n else:\n AVGblock[float(rec[dmag_key])].append(\n float(rec[int_key]))\n INTblock = []\n for step in sorted(AVGblock.keys()):\n INTblock.append([float(step), 0, 0, old_div(\n float(sum(AVGblock[step])), float(len(AVGblock[step]))), 1, 'g'])\n pmagplotlib.plot_mag(FIG['demag'], INTblock,\n title, 0, units, norm)\n if save == True:\n plt.savefig(os.path.join(save_folder, title) + '.' + fmt)\n if single_plot is False:\n plt.show()\n if single_plot is True:\n plt.show()", "response": "Reads demagnetization data from a magic_measurements file and generates intensity decay plots, optionally restricted to a single site, sample, or specimen."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function to\nplot hysteresis data for use within a Jupyter notebook", "response": "def iplot_hys(fignum, B, M, s):\n \"\"\"\n function to plot hysteresis data\n\n This function has been adapted from pmagplotlib.iplot_hys for specific use\n within a Jupyter notebook.\n\n Parameters\n -----------\n fignum : reference number for matplotlib figure being created\n B : list of B (flux density) values of hysteresis experiment\n M : list of M (magnetization) values of hysteresis experiment\n s : specimen name\n \"\"\"\n if fignum != 0:\n plt.figure(num=fignum)\n plt.clf()\n hpars = {}\n# close up loop\n Npts = len(M)\n B70 = 0.7 * B[0] # 70 percent of maximum field\n for b in B:\n if b < B70:\n break\n Nint = B.index(b) - 1\n if Nint > 
30:\n Nint = 30\n if Nint < 10:\n Nint = 10\n Bzero, Mzero, Mfix, Mnorm, Madj, MadjN = \"\", \"\", [], [], [], []\n Mazero = \"\"\n m_init = 0.5 * (M[0] + M[1])\n m_fin = 0.5 * (M[-1] + M[-2])\n diff = m_fin - m_init\n Bmin = 0.\n for k in range(Npts):\n frac = old_div(float(k), float(Npts - 1))\n Mfix.append((M[k] - diff * frac))\n if Bzero == \"\" and B[k] < 0:\n Bzero = k\n if B[k] < Bmin:\n Bmin = B[k]\n kmin = k\n # adjust slope with first 30 data points (throwing out first 3)\n Bslop = B[2:Nint + 2]\n Mslop = Mfix[2:Nint + 2]\n polyU = polyfit(Bslop, Mslop, 1) # best fit line to high field points\n # adjust slope with first 30 points of ascending branch\n Bslop = B[kmin:kmin + (Nint + 1)]\n Mslop = Mfix[kmin:kmin + (Nint + 1)]\n polyL = polyfit(Bslop, Mslop, 1) # best fit line to high field points\n xhf = 0.5 * (polyU[0] + polyL[0]) # mean of two slopes\n # convert B to A/m, high field slope in m^3\n hpars['hysteresis_xhf'] = '%8.2e' % (xhf * 4 * np.pi * 1e-7)\n meanint = 0.5 * (polyU[1] + polyL[1]) # mean of two intercepts\n Msat = 0.5 * (polyU[1] - polyL[1]) # mean of saturation remanence\n Moff = []\n for k in range(Npts):\n # take out linear slope and offset (makes symmetric about origin)\n Moff.append((Mfix[k] - xhf * B[k] - meanint))\n if Mzero == \"\" and Moff[k] < 0:\n Mzero = k\n if Mzero != \"\" and Mazero == \"\" and Moff[k] > 0:\n Mazero = k\n hpars['hysteresis_ms_moment'] = '%8.3e' % (Msat) # Ms in Am^2\n#\n# split into upper and lower loops for splining\n Mupper, Bupper, Mlower, Blower = [], [], [], []\n deltaM, Bdm = [], [] # diff between upper and lower curves at Bdm\n for k in range(kmin - 2, 0, -2):\n Mupper.append(old_div(Moff[k], Msat))\n Bupper.append(B[k])\n for k in range(kmin + 2, len(B)-1):\n Mlower.append(Moff[k] / Msat)\n Blower.append(B[k])\n Iupper = spline.Spline(Bupper, Mupper) # get splines for upper up and down\n Ilower = spline.Spline(Blower, Mlower) # get splines for lower\n for b in np.arange(B[0]): # get range of field 
values\n Mpos = ((Iupper(b) - Ilower(b))) # evaluate on both sides of B\n Mneg = ((Iupper(-b) - Ilower(-b)))\n Bdm.append(b)\n deltaM.append(0.5 * (Mpos + Mneg)) # take average delta M\n print('whew')\n for k in range(Npts):\n MadjN.append(old_div(Moff[k], Msat))\n Mnorm.append(old_div(M[k], Msat))\n# find Mr : average of two spline fits evaluated at B=0 (times Msat)\n Mr = Msat * 0.5 * (Iupper(0.) - Ilower(0.))\n hpars['hysteresis_mr_moment'] = '%8.3e' % (Mr)\n# find Bc (x intercept), interpolate between two bounding points\n Bz = B[Mzero - 1:Mzero + 1]\n Mz = Moff[Mzero - 1:Mzero + 1]\n Baz = B[Mazero - 1:Mazero + 1]\n Maz = Moff[Mazero - 1:Mazero + 1]\n try:\n poly = polyfit(Bz, Mz, 1) # best fit line through two bounding points\n Bc = old_div(-poly[1], poly[0]) # x intercept\n # best fit line through two bounding points\n poly = polyfit(Baz, Maz, 1)\n Bac = old_div(-poly[1], poly[0]) # x intercept\n hpars['hysteresis_bc'] = '%8.3e' % (0.5 * (abs(Bc) + abs(Bac)))\n except:\n hpars['hysteresis_bc'] = '0'\n return hpars, deltaM, Bdm, B, Mnorm, MadjN"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncalculate hysteresis parameters and save them in an rmag_hysteresis format file. If selected, this function also plots hysteresis loops, delta M curves, d (Delta M)/dB curves, and IRM backfield curves. 
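The iplot_hys routine ending above estimates the coercivity Bc by fitting a line through the two points that bracket a zero crossing and taking the x intercept of that line. The bounding-point trick can be sketched on its own (the helper name zero_crossing_intercept is hypothetical, not part of pmagpy):

```python
import numpy as np

def zero_crossing_intercept(x_pair, y_pair):
    """Fit a line through two points bracketing a sign change in y and
    return the x value where that line crosses y = 0, as done for the
    coercivity (Bc) estimate above."""
    slope, intercept = np.polyfit(x_pair, y_pair, 1)
    return -intercept / slope

# y = 2x - 1 crosses zero at x = 0.5
print(zero_crossing_intercept([0.0, 1.0], [-1.0, 1.0]))  # -> 0.5
```

Because only the two bounding points are used, the fit reduces to linear interpolation of the zero crossing, which is why the code averages the intercepts from the descending (Bc) and ascending (Bac) branches.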
Parameters (defaults are used if not specified) ---------- path_to_file : path to directory that contains files (default is current directory, '.') hyst_file : hysteresis file (default is 'rmag_hysteresis.txt') save : boolean argument to save plots (default is False) save_folder : relative directory where plots will be saved (default is current directory, '.') fmt : format of saved figures (default is 'svg') plots: whether or not to display the plots (default is True)", "response": "def hysteresis_magic2(path_to_file='.', hyst_file=\"rmag_hysteresis.txt\",\n save=False, save_folder='.',\n fmt=\"svg\", plots=True):\n \"\"\"\n Calculates hysteresis parameters, saves them in rmag_hysteresis format file.\n If selected, this function also plots hysteresis loops, delta M curves,\n d (Delta M)/dB curves, and IRM backfield curves.\n\n Parameters (defaults are used if not specified)\n ----------\n path_to_file : path to directory that contains files (default is current directory, '.')\n hyst_file : hysteresis file (default is 'rmag_hysteresis.txt')\n save : boolean argument to save plots (default is False)\n save_folder : relative directory where plots will be saved (default is current directory, '.')\n fmt : format of saved figures (default is 'svg')\n plots: whether or not to display the plots (default is True)\n \"\"\"\n user, meas_file, rmag_out, rmag_file = \"\", \"agm_measurements.txt\", \"rmag_hysteresis.txt\", \"\"\n pltspec = \"\"\n dir_path = save_folder\n verbose = pmagplotlib.verbose\n version_num = pmag.get_version()\n rmag_out = save_folder + '/' + rmag_out\n meas_file = path_to_file + '/' + hyst_file\n rmag_rem = save_folder + \"/rmag_remanence.txt\"\n #\n #\n meas_data, file_type = pmag.magic_read(meas_file)\n if file_type != 'magic_measurements':\n print(hysteresis_magic2.__doc__)\n print('bad file')\n return\n\n # initialize some variables\n # define figure numbers for hyst,deltaM,DdeltaM curves\n HystRecs, RemRecs = [], []\n HDD = {}\n HDD['hyst'], 
HDD['deltaM'], HDD['DdeltaM'] = 1, 2, 3\n experiment_names, sids = [], []\n for rec in meas_data:\n meths = rec['magic_method_codes'].split(':')\n methods = []\n for meth in meths:\n methods.append(meth.strip())\n if 'LP-HYS' in methods:\n if 'er_synthetic_name' in list(rec.keys()) and rec['er_synthetic_name'] != \"\":\n rec['er_specimen_name'] = rec['er_synthetic_name']\n if rec['magic_experiment_name'] not in experiment_names:\n experiment_names.append(rec['magic_experiment_name'])\n if rec['er_specimen_name'] not in sids:\n sids.append(rec['er_specimen_name'])\n #\n fignum = 1\n sample_num = 0\n # initialize variables to record some bulk info in first loop\n first_dcd_rec, first_rec, first_imag_rec = 1, 1, 1\n while sample_num < len(sids):\n sample = sids[sample_num]\n print(sample, sample_num + 1, 'out of ', len(sids))\n # B,M for hysteresis, Bdcd,Mdcd for irm-dcd data\n B, M, Bdcd, Mdcd = [], [], [], []\n Bimag, Mimag = [], [] # Bimag,Mimag for initial magnetization curves\n for rec in meas_data:\n methcodes = rec['magic_method_codes'].split(':')\n meths = []\n for meth in methcodes:\n meths.append(meth.strip())\n if rec['er_specimen_name'] == sample and \"LP-HYS\" in meths:\n B.append(float(rec['measurement_lab_field_dc']))\n M.append(float(rec['measurement_magn_moment']))\n if first_rec == 1:\n e = rec['magic_experiment_name']\n HystRec = {}\n first_rec = 0\n if \"er_location_name\" in list(rec.keys()):\n HystRec[\"er_location_name\"] = rec[\"er_location_name\"]\n locname = rec['er_location_name'].replace('/', '-')\n if \"er_sample_name\" in list(rec.keys()):\n HystRec[\"er_sample_name\"] = rec[\"er_sample_name\"]\n if \"er_site_name\" in list(rec.keys()):\n HystRec[\"er_site_name\"] = rec[\"er_site_name\"]\n if \"er_synthetic_name\" in list(rec.keys()) and rec['er_synthetic_name'] != \"\":\n HystRec[\"er_synthetic_name\"] = rec[\"er_synthetic_name\"]\n else:\n HystRec[\"er_specimen_name\"] = rec[\"er_specimen_name\"]\n if rec['er_specimen_name'] == sample 
and \"LP-IRM-DCD\" in meths:\n Bdcd.append(float(rec['treatment_dc_field']))\n Mdcd.append(float(rec['measurement_magn_moment']))\n if first_dcd_rec == 1:\n RemRec = {}\n irm_exp = rec['magic_experiment_name']\n first_dcd_rec = 0\n if \"er_location_name\" in list(rec.keys()):\n RemRec[\"er_location_name\"] = rec[\"er_location_name\"]\n if \"er_sample_name\" in list(rec.keys()):\n RemRec[\"er_sample_name\"] = rec[\"er_sample_name\"]\n if \"er_site_name\" in list(rec.keys()):\n RemRec[\"er_site_name\"] = rec[\"er_site_name\"]\n if \"er_synthetic_name\" in list(rec.keys()) and rec['er_synthetic_name'] != \"\":\n RemRec[\"er_synthetic_name\"] = rec[\"er_synthetic_name\"]\n else:\n RemRec[\"er_specimen_name\"] = rec[\"er_specimen_name\"]\n if rec['er_specimen_name'] == sample and \"LP-IMAG\" in meths:\n if first_imag_rec == 1:\n imag_exp = rec['magic_experiment_name']\n first_imag_rec = 0\n Bimag.append(float(rec['measurement_lab_field_dc']))\n Mimag.append(float(rec['measurement_magn_moment']))\n if len(B) > 0:\n hmeths = []\n for meth in meths:\n hmeths.append(meth)\n # fignum = 1\n fig = plt.figure(figsize=(8, 8))\n hpars, deltaM, Bdm, B, Mnorm, MadjN = iplot_hys(1, B, M, sample)\n ax1 = fig.add_subplot(2, 2, 1)\n ax1.axhline(0, color='k')\n ax1.axvline(0, color='k')\n ax1.plot(B, Mnorm, 'r')\n ax1.plot(B, MadjN, 'b')\n ax1.set_xlabel('B (T)')\n ax1.set_ylabel(\"M/Msat\")\n # ax1.set_title(sample)\n ax1.set_xlim(-1, 1)\n ax1.set_ylim(-1, 1)\n bounds = ax1.axis()\n n4 = 'Ms: ' + \\\n '%8.2e' % (float(hpars['hysteresis_ms_moment'])) + ' Am^2'\n ax1.text(bounds[1] - .9 * bounds[1], -.9, n4, fontsize=9)\n n1 = 'Mr: ' + \\\n '%8.2e' % (float(hpars['hysteresis_mr_moment'])) + ' Am^2'\n ax1.text(bounds[1] - .9 * bounds[1], -.7, n1, fontsize=9)\n n2 = 'Bc: ' + '%8.2e' % (float(hpars['hysteresis_bc'])) + ' T'\n ax1.text(bounds[1] - .9 * bounds[1], -.5, n2, fontsize=9)\n if 'hysteresis_xhf' in list(hpars.keys()):\n n3 = r'Xhf: ' + \\\n '%8.2e' % 
(float(hpars['hysteresis_xhf'])) + ' m^3'\n ax1.text(bounds[1] - .9 * bounds[1], -.3, n3, fontsize=9)\n # plt.subplot(1,2,2)\n # plt.subplot(1,3,3)\n DdeltaM = []\n Mhalf = \"\"\n for k in range(2, len(Bdm)):\n # differential\n DdeltaM.append(\n old_div(abs(deltaM[k] - deltaM[k - 2]), (Bdm[k] - Bdm[k - 2])))\n for k in range(len(deltaM)):\n if old_div(deltaM[k], deltaM[0]) < 0.5:\n Mhalf = k\n break\n try:\n Bhf = Bdm[Mhalf - 1:Mhalf + 1]\n Mhf = deltaM[Mhalf - 1:Mhalf + 1]\n # best fit line through two bounding points\n poly = polyfit(Bhf, Mhf, 1)\n Bcr = old_div((.5 * deltaM[0] - poly[1]), poly[0])\n hpars['hysteresis_bcr'] = '%8.3e' % (Bcr)\n hpars['magic_method_codes'] = \"LP-BCR-HDM\"\n if HDD['deltaM'] != 0:\n ax2 = fig.add_subplot(2, 2, 2)\n ax2.plot(Bdm, deltaM, 'b')\n ax2.set_xlabel('B (T)')\n ax2.set_ylabel('Delta M')\n linex = [0, Bcr, Bcr]\n liney = [old_div(deltaM[0], 2.), old_div(deltaM[0], 2.), 0]\n ax2.plot(linex, liney, 'r')\n # ax2.set_title(sample)\n ax3 = fig.add_subplot(2, 2, 3)\n ax3.plot(Bdm[(len(Bdm) - len(DdeltaM)):], DdeltaM, 'b')\n ax3.set_xlabel('B (T)')\n ax3.set_ylabel('d (Delta M)/dB')\n # ax3.set_title(sample)\n\n ax4 = fig.add_subplot(2, 2, 4)\n ax4.plot(Bdcd, Mdcd)\n ax4.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.2e'))\n ax4.axhline(0, color='k')\n ax4.axvline(0, color='k')\n ax4.set_xlabel('B (T)')\n ax4.set_ylabel('M/Mr')\n except:\n print(\"Bcr calculation failed; skipping delta M plots\")\n hpars['hysteresis_bcr'] = '0'\n hpars['magic_method_codes'] = \"\"\n plt.gcf()\n plt.gca()\n plt.tight_layout()\n if save:\n plt.savefig(save_folder + '/' + sample + '_hysteresis.' 
+ fmt)\n plt.show()\n sample_num += 1"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_ei(data, nb=1000, save=False, save_folder='.', fmt='svg',\n site_correction=False, return_new_dirs=False):\n \"\"\"\n Applies a series of assumed flattening factors and \"unsquishes\" inclinations assuming tangent function.\n Finds flattening factor that gives elongation/inclination pair consistent with TK03;\n or, if correcting by site instead of for study-level secular variation,\n finds flattening factor that minimizes elongation and most resembles a\n Fisherian distribution.\n Finds bootstrap confidence bounds.\n\n Required Parameter\n -----------\n data: a nested list of dec/inc pairs\n\n Optional Parameters (defaults are used unless specified)\n -----------\n nb: number of bootstrapped pseudo-samples (default is 1000)\n save: Boolean argument to save plots (default is False)\n save_folder: path to folder in which plots should be saved (default is current directory)\n fmt: specify format of saved plots (default is 'svg')\n site_correction: Boolean argument to specify whether to \"unsquish\" data to\n 1) the elongation/inclination pair consistent with TK03 secular variation model\n (site_correction = False)\n or\n 2) a Fisherian distribution (site_correction = True). Default is FALSE.\n Note that many directions (~ 100) are needed for this correction to be reliable.\n return_new_dirs: optional return of newly \"unflattened\" directions (default is False)\n\n Returns\n -----------\n four plots: 1) equal area plot of original directions\n 2) Elongation/inclination pairs as a function of f, data plus 25 bootstrap samples\n 3) Cumulative distribution of bootstrapped optimal inclinations plus uncertainties.\n Estimate from original data set plotted as solid line\n 4) Orientation of principal direction through unflattening\n\n NOTE: If distribution does not have a solution, plot labeled: Pathological. 
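The find_ei routine above assumes the tangent flattening relation tan(I_observed) = f * tan(I_field). A minimal sketch of the corresponding "unsquish" correction, written as a simplified stand-in for the pmagpy helper of the same name (assumptions: degrees in and out, f in (0, 1]):

```python
import numpy as np

def unsquish(incs_deg, f):
    """Undo inclination shallowing with the tangent relation
    tan(I_obs) = f * tan(I_field): divide tan(I_obs) by the
    flattening factor f and take the arctangent."""
    incs = np.radians(np.asarray(incs_deg, dtype=float))
    return np.degrees(np.arctan(np.tan(incs) / f))

# an observed inclination of 30 degrees with f = 0.5 unsquishes to about 49.1 degrees
print(unsquish([30.0], 0.5))
```

With f = 1 the data are returned unchanged; smaller f steepens the inclinations, which is the sweep find_ei performs while tracking elongation.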
Some bootstrap samples may have\n valid solutions and those are plotted in the CDFs and E/I plot.\n \"\"\"\n print(\"Bootstrapping.... be patient\")\n print(\"\")\n sys.stdout.flush()\n\n upper, lower = int(round(.975 * nb)), int(round(.025 * nb))\n E, I = [], []\n\n plt.figure(num=1, figsize=(4, 4))\n plot_net(1)\n plot_di(di_block=data)\n plt.title('Original')\n ppars = pmag.doprinc(data)\n Io = ppars['inc']\n n = ppars[\"N\"]\n Es, Is, Fs, V2s = pmag.find_f(data)\n if site_correction == True:\n Inc, Elong = Is[Es.index(min(Es))], Es[Es.index(min(Es))]\n flat_f = Fs[Es.index(min(Es))]\n else:\n Inc, Elong = Is[-1], Es[-1]\n flat_f = Fs[-1]\n\n plt.figure(num=2, figsize=(4, 4))\n plt.plot(Is, Es, 'r')\n plt.xlabel(\"Inclination\")\n plt.ylabel(\"Elongation\")\n plt.text(Inc, Elong, ' %3.1f' % (flat_f))\n plt.text(Is[0] - 2, Es[0], ' %s' % ('f=1'))\n\n b = 0\n\n while b < nb:\n bdata = pmag.pseudo(data)\n Esb, Isb, Fsb, V2sb = pmag.find_f(bdata)\n if b < 25:\n plt.plot(Isb, Esb, 'y')\n if Esb[-1] != 0:\n ppars = pmag.doprinc(bdata)\n if site_correction == True:\n I.append(abs(Isb[Esb.index(min(Esb))]))\n E.append(Esb[Esb.index(min(Esb))])\n else:\n I.append(abs(Isb[-1]))\n E.append(Esb[-1])\n b += 1\n\n I.sort()\n E.sort()\n Eexp = []\n for i in I:\n Eexp.append(pmag.EI(i))\n plt.plot(I, Eexp, 'g-')\n if Inc == 0:\n title = 'Pathological Distribution: ' + \\\n '[%7.1f, %7.1f]' % (I[lower], I[upper])\n else:\n title = '%7.1f [%7.1f, %7.1f]' % (Inc, I[lower], I[upper])\n\n cdf_fig_num = 3\n plt.figure(num=cdf_fig_num, figsize=(4, 4))\n pmagplotlib.plot_cdf(cdf_fig_num, I, 'Inclinations', 'r', title)\n pmagplotlib.plot_vs(cdf_fig_num, [I[lower], I[upper]], 'b', '--')\n pmagplotlib.plot_vs(cdf_fig_num, [Inc], 'g', '-')\n pmagplotlib.plot_vs(cdf_fig_num, [Io], 'k', '-')\n\n # plot corrected directional data\n\n di_lists = unpack_di_block(data)\n if len(di_lists) == 3:\n decs, incs, intensity = di_lists\n if len(di_lists) == 2:\n decs, incs = di_lists\n if flat_f:\n 
unsquished_incs = unsquish(incs, flat_f)\n plt.figure(num=4, figsize=(4, 4))\n plot_net(4)\n plot_di(decs, unsquished_incs)\n plt.title('Corrected for flattening')\n else:\n plt.figure(num=4, figsize=(4, 4))\n plot_net(4)\n plot_di(decs, incs)\n plt.title('Corrected for flattening')\n\n if (Inc, Elong, flat_f) == (0, 0, 0):\n print(\"PATHOLOGICAL DISTRIBUTION\")\n print(\"The original inclination was: \" + str(Io))\n print(\"\")\n print(\"The corrected inclination is: \" + str(Inc))\n print(\"with bootstrapped confidence bounds of: \" +\n str(I[lower]) + ' to ' + str(I[upper]))\n print(\"and elongation parameter of: \" + str(Elong))\n print(\"The flattening factor is: \" + str(flat_f))\n if return_new_dirs is True:\n return make_di_block(decs, unsquished_incs)", "response": "Find the Elongation and Inclination Pairs for a given set of data."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncalculate the rate of a single plate in the Ma .", "response": "def plate_rate_mc(pole1_plon, pole1_plat, pole1_kappa, pole1_N, pole1_age, pole1_age_error,\n pole2_plon, pole2_plat, pole2_kappa, pole2_N, pole2_age, pole2_age_error,\n ref_loc_lon, ref_loc_lat, samplesize=10000, random_seed=None, plot=True,\n savefig=True, save_directory='./', figure_name=''):\n \"\"\"\n Determine the latitudinal motion implied by a pair of poles and utilize\n the Monte Carlo sampling method of Swanson-Hysell (2014) to determine the\n associated uncertainty.\n\n Parameters:\n ------------\n plon : longitude of pole\n plat : latitude of pole\n kappa : Fisher precision parameter for VPGs in pole\n N : number of VGPs in pole\n age : age assigned to pole in Ma\n age_error : 1 sigma age uncertainty in million years\n ref_loc_lon : longitude of reference location\n ref_loc_lat : latitude of reference location\n samplesize : number of draws from pole and age distributions (default set to 10000)\n random_seed : set random seed for reproducible number generation (default is 
None)\n plot : whether to make figures (default is True, optional)\n savefig : whether to save figures (default is True, optional)\n save_directory = default is local directory (optional)\n figure_name = prefix for file names (optional)\n\n Returns\n --------\n rate : rate of latitudinal motion in cm/yr along with estimated 2.5 and 97.5\n percentile rate estimates\n \"\"\"\n ref_loc = [ref_loc_lon, ref_loc_lat]\n pole1 = (pole1_plon, pole1_plat)\n pole1_paleolat = 90 - pmag.angle(pole1, ref_loc)\n pole2 = (pole2_plon, pole2_plat)\n pole2_paleolat = 90 - pmag.angle(pole2, ref_loc)\n print(\"The paleolatitude for ref_loc resulting from pole 1 is:\" +\n str(pole1_paleolat))\n print(\"The paleolatitude for ref_loc resulting from pole 2 is:\" +\n str(pole2_paleolat))\n rate = old_div(((pole1_paleolat - pole2_paleolat) * 111 *\n 100000), ((pole1_age - pole2_age) * 1000000))\n print(\"The rate of paleolatitudinal change implied by the poles pairs in cm/yr is:\" + str(rate))\n\n if random_seed != None:\n np.random.seed(random_seed)\n pole1_MCages = np.random.normal(pole1_age, pole1_age_error, samplesize)\n pole2_MCages = np.random.normal(pole2_age, pole2_age_error, samplesize)\n\n plt.hist(pole1_MCages, 100, histtype='stepfilled',\n color='darkred', label='Pole 1 ages')\n plt.hist(pole2_MCages, 100, histtype='stepfilled',\n color='darkblue', label='Pole 2 ages')\n plt.xlabel('Age (Ma)')\n plt.ylabel('n')\n plt.legend(loc=3)\n if savefig == True:\n plot_extension = '_1.svg'\n plt.savefig(save_directory + figure_name + plot_extension)\n plt.show()\n\n pole1_MCpoles = []\n pole1_MCpole_lat = []\n pole1_MCpole_long = []\n pole1_MCpaleolat = []\n for n in range(samplesize):\n vgp_samples = []\n for vgp in range(pole1_N):\n # pmag.dev returns a direction from a fisher distribution with\n # specified kappa\n direction_atN = pmag.fshdev(pole1_kappa)\n # this direction is centered at latitude of 90 degrees and needs to be rotated\n # to be centered on the mean pole position\n 
tilt_direction = pole1_plon\n tilt_amount = 90 - pole1_plat\n direction = pmag.dotilt(\n direction_atN[0], direction_atN[1], tilt_direction, tilt_amount)\n vgp_samples.append([direction[0], direction[1], 1.])\n mean = pmag.fisher_mean(vgp_samples)\n mean_pole_position = (mean['dec'], mean['inc'])\n pole1_MCpoles.append([mean['dec'], mean['inc'], 1.])\n pole1_MCpole_lat.append(mean['inc'])\n pole1_MCpole_long.append(mean['dec'])\n paleolat = 90 - pmag.angle(mean_pole_position, ref_loc)\n pole1_MCpaleolat.append(paleolat[0])\n\n pole2_MCpoles = []\n pole2_MCpole_lat = []\n pole2_MCpole_long = []\n pole2_MCpaleolat = []\n for n in range(samplesize):\n vgp_samples = []\n for vgp in range(pole2_N):\n # pmag.dev returns a direction from a fisher distribution with\n # specified kappa\n direction_atN = pmag.fshdev(pole2_kappa)\n # this direction is centered at latitude of 90 degrees and needs to be rotated\n # to be centered on the mean pole position\n tilt_direction = pole2_plon\n tilt_amount = 90 - pole2_plat\n direction = pmag.dotilt(\n direction_atN[0], direction_atN[1], tilt_direction, tilt_amount)\n vgp_samples.append([direction[0], direction[1], 1.])\n mean = pmag.fisher_mean(vgp_samples)\n mean_pole_position = (mean['dec'], mean['inc'])\n pole2_MCpoles.append([mean['dec'], mean['inc'], 1.])\n pole2_MCpole_lat.append(mean['inc'])\n pole2_MCpole_long.append(mean['dec'])\n paleolat = 90 - pmag.angle(mean_pole_position, ref_loc)\n pole2_MCpaleolat.append(paleolat[0])\n\n if plot is True:\n plt.figure(figsize=(5, 5))\n map_axis = make_mollweide_map()\n\n plot_vgp(map_axis, pole1_MCpole_long, pole1_MCpole_lat, color='b')\n plot_vgp(map_axis, pole2_MCpole_long, pole2_MCpole_lat, color='g')\n if savefig == True:\n plot_extension = '_2.svg'\n plt.savefig(save_directory + figure_name + plot_extension)\n plt.show()\n\n # calculating the change in paleolatitude between the Monte Carlo pairs\n pole1_pole2_Delta_degrees = []\n pole1_pole2_Delta_kilometers = []\n 
pole1_pole2_Delta_myr = []\n pole1_pole2_degrees_per_myr = []\n pole1_pole2_cm_per_yr = []\n\n for n in range(samplesize):\n Delta_degrees = pole1_MCpaleolat[n] - pole2_MCpaleolat[n]\n Delta_Myr = pole1_MCages[n] - pole2_MCages[n]\n pole1_pole2_Delta_degrees.append(Delta_degrees)\n degrees_per_myr = old_div(Delta_degrees, Delta_Myr)\n cm_per_yr = old_div(((Delta_degrees * 111) * 100000),\n (Delta_Myr * 1000000))\n pole1_pole2_degrees_per_myr.append(degrees_per_myr)\n pole1_pole2_cm_per_yr.append(cm_per_yr)\n\n if plot is True:\n plotnumber = 100\n plt.figure(num=None, figsize=(10, 4))\n plt.subplot(1, 2, 1)\n for n in range(plotnumber):\n plt.plot([pole1_MCpaleolat[n], pole2_MCpaleolat[n]],\n [pole1_MCages[n], pole2_MCages[n]], 'k-', linewidth=0.1, alpha=0.3)\n plt.scatter(pole1_MCpaleolat[:plotnumber],\n pole1_MCages[:plotnumber], color='b', s=3)\n plt.scatter(pole1_paleolat, pole1_age, color='lightblue',\n s=100, edgecolor='w', zorder=10000)\n plt.scatter(pole2_MCpaleolat[:plotnumber],\n pole2_MCages[:plotnumber], color='g', s=3)\n plt.scatter(pole2_paleolat, pole2_age, color='lightgreen',\n s=100, edgecolor='w', zorder=10000)\n plt.plot([pole1_paleolat, pole2_paleolat], [\n pole1_age, pole2_age], 'w-', linewidth=2)\n plt.gca().invert_yaxis()\n plt.xlabel('paleolatitude (degrees)', size=14)\n plt.ylabel('time (Ma)', size=14)\n\n plt.subplot(1, 2, 2)\n plt.hist(pole1_pole2_cm_per_yr, bins=600)\n plt.ylabel('n', size=14)\n plt.xlabel('latitudinal drift rate (cm/yr)', size=14)\n # plt.xlim([0,90])\n if savefig == True:\n plot_extension = '_3.svg'\n plt.savefig(save_directory + figure_name + plot_extension)\n plt.show()\n\n twopointfive_percentile = stats.scoreatpercentile(\n pole1_pole2_cm_per_yr, 2.5)\n fifty_percentile = stats.scoreatpercentile(pole1_pole2_cm_per_yr, 50)\n ninetysevenpointfive_percentile = stats.scoreatpercentile(\n pole1_pole2_cm_per_yr, 97.5)\n print(\"2.5th percentile is: \" +\n str(round(twopointfive_percentile, 2)) + \" cm/yr\")\n 
print(\"50th percentile is: \" + str(round(fifty_percentile, 2)) + \" cm/yr\")\n print(\"97.5th percentile is: \" +\n str(round(ninetysevenpointfive_percentile, 2)) + \" cm/yr\")\n return rate[0], twopointfive_percentile, ninetysevenpointfive_percentile"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef zeq(path_to_file='.', file='', data=\"\", units='U', calculation_type=\"DE-BFL\",\n save=False, save_folder='.', fmt='svg', begin_pca=\"\", end_pca=\"\", angle=0):\n \"\"\"\n NAME\n zeq.py\n\n DESCRIPTION\n plots demagnetization data for a single specimen:\n - The solid (open) symbols in the Zijderveld diagram are X,Y (X,Z) pairs. The demagnetization diagram plots the\n fractional remanence remaining after each step. The green line is the fraction of the total remanence removed\n between each step. If the principal direction is desired, specify begin_pca and end_pca steps as bounds for calculation.\n\n -The equal area projection has the X direction (usually North in geographic coordinates)\n to the top. The red line is the X axis of the Zijderveld diagram. Solid symbols are lower hemisphere.\n\n - red dots and blue line are the remanence remaining after each step. 
The green line is the partial TRM removed in each interval\n\n INPUT FORMAT\n reads from file_name or takes a Pandas DataFrame data with specimen treatment intensity declination inclination as columns\n\n Keywords:\n file= FILE a space or tab delimited file with\n specimen treatment declination inclination intensity\n units= [mT,C] specify units of mT OR C, default is unscaled\n save=[True,False] save figure and quit, default is False\n fmt [svg,jpg,png,pdf] set figure format [default is svg]\n begin_pca [step number] treatment step for beginning of PCA calculation, default\n end_pca [step number] treatment step for end of PCA calculation, last step is default\n calculation_type [DE-BFL,DE-BFP,DE-FM] Calculation Type: best-fit line, plane or fisher mean; line is default\n angle=[0-360]: angle to subtract from declination to rotate in horizontal plane, default is 0\n\n \"\"\"\n if units == \"C\":\n SIunits = \"K\"\n if units == \"mT\":\n SIunits = \"T\"\n if units == \"U\":\n SIunits = \"U\"\n if file != \"\":\n f = pd.read_csv(os.path.join(path_to_file, file),\n delim_whitespace=True, header=None)\n f.columns = ['specimen', 'treatment',\n 'intensity', 'declination', 'inclination']\n # adjust for angle rotation\n f['declination'] = (f['declination']-angle) % 360\n f['quality'] = 'g'\n f['type'] = ''\n#\n s = f['specimen'].tolist()[0]\n if units == 'mT':\n f['treatment'] = f['treatment']*1e-3\n if units == 'C':\n f['treatment'] = f['treatment']+273\n data = f[['treatment', 'declination',\n 'inclination', 'intensity', 'type', 'quality']]\n print(s)\n datablock = data.values.tolist()\n# define figure numbers in a dictionary for equal area, zijderveld,\n# and intensity vs. 
demagnetiztion step respectively\n ZED = {}\n ZED['eqarea'], ZED['zijd'], ZED['demag'] = 2, 1, 3\n plt.figure(num=ZED['zijd'], figsize=(5, 5))\n plt.figure(num=ZED['eqarea'], figsize=(5, 5))\n plt.figure(num=ZED['demag'], figsize=(5, 5))\n#\n#\n pmagplotlib.plot_zed(ZED, datablock, angle, s, SIunits) # plot the data\n#\n# print out data for this sample to screen\n#\n recnum = 0\n print('step treat intensity dec inc')\n for plotrec in datablock:\n if units == 'mT':\n print('%i %7.1f %8.3e %7.1f %7.1f ' %\n (recnum, plotrec[0]*1e3, plotrec[3], plotrec[1], plotrec[2]))\n if units == 'C':\n print('%i %7.1f %8.3e %7.1f %7.1f ' %\n (recnum, plotrec[0]-273., plotrec[3], plotrec[1], plotrec[2]))\n if units == 'U':\n print('%i %7.1f %8.3e %7.1f %7.1f ' %\n (recnum, plotrec[0], plotrec[3], plotrec[1], plotrec[2]))\n recnum += 1\n pmagplotlib.draw_figs(ZED)\n if begin_pca != \"\" and end_pca != \"\" and calculation_type != \"\":\n pmagplotlib.plot_zed(ZED, datablock, angle, s,\n SIunits) # plot the data\n # get best-fit direction/great circle\n mpars = pmag.domean(datablock, begin_pca, end_pca, calculation_type)\n # plot the best-fit direction/great circle\n pmagplotlib.plot_dir(ZED, mpars, datablock, angle)\n print('Specimen, calc_type, N, min, max, MAD, dec, inc')\n if units == 'mT':\n print('%s %s %i %6.2f %6.2f %6.1f %7.1f %7.1f' % (s, calculation_type,\n mpars[\"specimen_n\"], mpars[\"measurement_step_min\"]*1e3, mpars[\"measurement_step_max\"]*1e3, mpars[\"specimen_mad\"], mpars[\"specimen_dec\"], mpars[\"specimen_inc\"]))\n if units == 'C':\n print('%s %s %i %6.2f %6.2f %6.1f %7.1f %7.1f' % (s, calculation_type,\n mpars[\"specimen_n\"], mpars[\"measurement_step_min\"]-273, mpars[\"measurement_step_max\"]-273, mpars[\"specimen_mad\"], mpars[\"specimen_dec\"], mpars[\"specimen_inc\"]))\n if units == 'U':\n print('%s %s %i %6.2f %6.2f %6.1f %7.1f %7.1f' % (s, calculation_type,\n mpars[\"specimen_n\"], mpars[\"measurement_step_min\"], mpars[\"measurement_step_max\"], 
mpars[\"specimen_mad\"], mpars[\"specimen_dec\"], mpars[\"specimen_inc\"]))\n if save:\n files = {}\n for key in list(ZED.keys()):\n files[key] = s+'_'+key+'.'+fmt\n pmagplotlib.save_plots(ZED, files)", "response": "A function that creates a Zijderveld diagram showing the demagnetization data for a single specimen."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmake plots of anisotropy eigenvectors, eigenvalues and confidence bounds All directions are on the lower hemisphere. Parameters __________ infile : specimens formatted file with aniso_s data samp_file : samples formatted file with sample => site relationship site_file : sites formatted file with site => location relationship verbose : if True, print messages to output confidence bounds options: ipar : if True - perform parametric bootstrap - requires non-blank aniso_s_sigma ihext : if True - Hext ellipses ivec : if True - plot bootstrapped eigenvectors instead of ellipses isite : if True plot by site, requires non-blank samp_file #iloc : if True plot by location, requires non-blank samp_file, and site_file NOT IMPLEMENTED iboot : if True - bootstrap ellipses vec : eigenvector for comparison with Dir Dir : [Dec,Inc] list for comparison direction PDir : [Pole_dec, Pole_Inc] for pole to plane for comparison green dots are on the lower hemisphere, cyan are on the upper hemisphere crd : ['s','g','t'], coordinate system for plotting whereby: s : specimen coordinates, aniso_tile_correction = -1, or unspecified g : geographic coordinates, aniso_tile_correction = 0 t : tilt corrected coordinates, aniso_tile_correction = 100 num_bootstraps : how many bootstraps to do, default 1000 dir_path : directory path fignum : matplotlib figure number, default 1 save_plots : bool, default True if True, create and save all requested plots interactive : bool, default False interactively plot and display for each specimen (this is best used on the command line only) fmt : str, default \"svg\" 
format for figures, [svg, jpg, pdf, png]", "response": "def aniso_magic_nb(infile='specimens.txt', samp_file='samples.txt', site_file='sites.txt', verbose=True,\n ipar=False, ihext=True, ivec=False, isite=False, iloc=False, iboot=False, vec=0,\n Dir=[], PDir=[], crd=\"s\", num_bootstraps=1000, dir_path=\".\", fignum=1,\n save_plots=True, interactive=False, fmt=\"png\"):\n \"\"\"\n Makes plots of anisotropy eigenvectors, eigenvalues and confidence bounds\n All directions are on the lower hemisphere.\n\n Parameters\n __________\n infile : specimens formatted file with aniso_s data\n samp_file : samples formatted file with sample => site relationship\n site_file : sites formatted file with site => location relationship\n verbose : if True, print messages to output\n confidence bounds options:\n ipar : if True - perform parametric bootstrap - requires non-blank aniso_s_sigma\n ihext : if True - Hext ellipses\n ivec : if True - plot bootstrapped eigenvectors instead of ellipses\n isite : if True plot by site, requires non-blank samp_file\n #iloc : if True plot by location, requires non-blank samp_file, and site_file NOT IMPLEMENTED\n iboot : if True - bootstrap ellipses\n vec : eigenvector for comparison with Dir\n Dir : [Dec,Inc] list for comparison direction\n PDir : [Pole_dec, Pole_Inc] for pole to plane for comparison\n green dots are on the lower hemisphere, cyan are on the upper hemisphere\n crd : ['s','g','t'], coordinate system for plotting whereby:\n s : specimen coordinates, aniso_tile_correction = -1, or unspecified\n g : geographic coordinates, aniso_tile_correction = 0\n t : tilt corrected coordinates, aniso_tile_correction = 100\n num_bootstraps : how many bootstraps to do, default 1000\n dir_path : directory path\n fignum : matplotlib figure number, default 1\n save_plots : bool, default True\n if True, create and save all requested plots\n interactive : bool, default False\n interactively plot and display for each specimen\n (this is best used on the 
command line only)\n fmt : str, default \"svg\"\n format for figures, [svg, jpg, pdf, png]\n\n \"\"\"\n figs = {}\n saved = []\n # make sure boolean values are in integer form\n # for backwards compatibility\n ipar = int(ipar)\n ihext = int(ihext)\n ivec = int(ivec)\n isite = int(isite)\n #iloc = int(iloc) # NOT USED\n iboot = int(iboot)\n # fix directory\n input_dir_path = os.path.realpath(dir_path)\n # initialize some variables\n version_num = pmag.get_version()\n hpars, bpars = [], []\n # set aniso_tilt_correction value\n CS = -1 # specimen\n if crd == 'g':\n CS = 0\n if crd == 't':\n CS = 100\n\n #\n #\n # read in the data\n\n fnames = {'specimens': infile, 'samples': samp_file, 'sites': site_file}\n con = cb.Contribution(input_dir_path, read_tables=['specimens', 'samples', 'sites', 'contribution'],\n custom_filenames=fnames)\n # get contribution id if available\n con_id = \"\"\n if 'contribution' in con.tables:\n if 'id' in con.tables['contribution'].df.columns:\n con_id = str(con.tables['contribution'].df['id'].values[0])\n # get other data\n con.propagate_location_to_specimens()\n spec_container = con.tables['specimens']\n spec_df = spec_container.df\n # use only anisotropy records\n spec_df = spec_df.dropna(subset=['aniso_s']).copy()\n if 'aniso_tilt_correction' not in spec_df.columns:\n spec_df['aniso_tilt_correction'] = -1 # assume specimen coordinates\n if \"aniso_s_n_measurements\" not in spec_df.columns:\n spec_df[\"aniso_s_n_measurements\"] = \"6\"\n if \"aniso_s_sigma\" not in spec_df.columns:\n spec_df[\"aniso_sigma\"] = \"0\"\n orlist = spec_df['aniso_tilt_correction'].dropna().unique()\n if CS not in orlist:\n if len(orlist) > 0:\n CS = orlist[0]\n else:\n CS = -1\n if CS == -1:\n crd = 's'\n if CS == 0:\n crd = 'g'\n if CS == 100:\n crd = 't'\n if verbose:\n print(\"desired coordinate system not available, using available: \", crd)\n cs_df = spec_df[spec_df['aniso_tilt_correction'] == CS]\n if isite:\n sites = cs_df['site'].unique()\n for site in 
list(sites):\n site_df = cs_df[cs_df.site == site]\n loc = \"\"\n if 'sites' in con.tables:\n if 'location' in con.tables['sites'].df.columns:\n locs = con.tables['sites'].df.loc[site, 'location'].dropna()\n if any(locs):\n loc = locs.iloc[0]\n figs = plot_aniso(fignum, site_df, Dir=Dir, PDir=PDir, ipar=ipar,\n ihext=ihext, ivec=ivec, iboot=iboot,\n vec=vec, num_bootstraps=num_bootstraps, title=site)\n files = {key: loc + \"_\" + site +\"_\" + crd + \"_aniso-\" + key + \".png\" for (key, value) in figs.items()}\n if pmagplotlib.isServer:\n for key in figs.keys():\n files[key] = \"LO:_\" + loc + \"_SI:_\" + site + '_TY:_aniso_' + key + '_.' + fmt\n titles = {}\n titles['data'] = \"Eigenvectors\"\n titles['tcdf'] = \"Eigenvalue Confidence\"\n titles['conf'] = \"Confidence Ellipses\"\n for key in figs:\n if key not in titles:\n titles[key] = key\n pmagplotlib.add_borders(figs, titles, con_id=con_id)\n if save_plots:\n saved.extend(pmagplotlib.save_plots(figs, files))\n elif interactive:\n pmagplotlib.draw_figs(figs)\n ans = pmagplotlib.save_or_quit()\n if ans == 'a':\n saved.extend(pmagplotlib.save_plots(figs, files))\n else:\n continue\n else:\n fignum += 2\n if iboot:\n fignum += 1\n if len(Dir) > 0:\n fignum += 1\n else:\n figs = plot_aniso(fignum, cs_df, Dir=Dir, PDir=PDir, ipar=ipar, ihext=ihext,\n ivec=ivec, iboot=iboot, vec=vec, num_bootstraps=num_bootstraps)\n try:\n locs = cs_df['location'].unique()\n except:\n locs = [\"\"]\n locs = \"-\".join(locs)\n files = {key: locs + \"_\" + crd + \"_aniso-\" + key + \".png\" for (key, value) in figs.items()}\n if pmagplotlib.isServer:\n for key in figs.keys():\n files[key] = 'MC:_' + con_id + '_TY:_aniso_' + key + '_.' 
+ fmt\n titles = {}\n titles['data'] = \"Eigenvectors\"\n titles['tcdf'] = \"Eigenvalue Confidence\"\n titles['conf'] = \"Confidence Ellipses\"\n for key in figs:\n if key not in titles:\n titles[key] = key\n pmagplotlib.add_borders(figs, titles, con_id=con_id)\n if save_plots:\n saved.extend(pmagplotlib.save_plots(figs, files))\n elif interactive:\n pmagplotlib.draw_figs(figs)\n ans = pmagplotlib.save_or_quit()\n if ans == 'a':\n saved.extend(pmagplotlib.save_plots(figs, files))\n return True, saved"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nplot demagnetization data versus step for all specimens in pandas dataframe datablock Parameters ______________ data : Pandas dataframe with MagIC data model 3 columns: fignum : figure number specimen : specimen name dmag_key : one of these: ['treat_temp','treat_ac_field','treat_mw_energy'] selected using method_codes : ['LT-T-Z','LT-AF-Z','LT-M-Z'] respectively intensity : if blank will choose one of these: ['magn_moment', 'magn_volume', 'magn_mass'] quality : if True use the quality column of the DataFrame title : title for plot norm : if True, normalize data to first step Output : matplotlib plot", "response": "def plot_dmag(data=\"\", title=\"\", fignum=1, norm=1,dmag_key='treat_ac_field',intensity='',\n quality=False):\n \"\"\"\n plots demagnetization data versus step for all specimens in pandas dataframe datablock\n\n Parameters\n ______________\n data : Pandas dataframe with MagIC data model 3 columns:\n fignum : figure number\n specimen : specimen name\n dmag_key : one of these: ['treat_temp','treat_ac_field','treat_mw_energy']\n selected using method_codes : ['LT-T-Z','LT-AF-Z','LT-M-Z'] respectively\n intensity : if blank will choose one of these: ['magn_moment', 'magn_volume', 'magn_mass']\n quality : if True use the quality column of the DataFrame\n title : title for plot\n norm : if True, normalize data to first step\n Output :\n matplotlib plot\n \"\"\"\n 
plt.figure(num=fignum, figsize=(5, 5))\n if intensity:\n int_key=intensity\n else:\n intlist = ['magn_moment', 'magn_volume', 'magn_mass']\n # get which key we have\n IntMeths = [col_name for col_name in data.columns if col_name in intlist]\n int_key = IntMeths[0]\n data = data[data[int_key].notnull()] # fish out all data with this key\n units = \"U\" # this sets the units for plotting to undefined\n if not dmag_key:\n if 'treat_temp' in data.columns: units = \"K\" # kelvin\n elif 'treat_ac_field' in data.columns: units = \"T\" # tesla\n elif 'treat_mw_energy' in data.columns: units = \"J\" # joules\n if dmag_key=='treat_temp': units='K'\n if dmag_key=='treat_ac_field': units='T'\n if dmag_key=='treat_mw_energy': units='J'\n spcs = data.specimen.unique() # get a list of all specimens in DataFrame data\n if len(spcs)==0:\n print('no data for plotting')\n return\n # step through specimens to put on plot\n for spc in spcs:\n spec_data = data[data.specimen.str.contains(spc)]\n INTblock = []\n for ind, rec in spec_data.iterrows():\n INTblock.append([float(rec[dmag_key]), 0, 0,\n float(rec[int_key]), 1, rec['quality']])\n if len(INTblock) > 2:\n pmagplotlib.plot_mag(fignum, INTblock, title, 0, units, norm)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert eigenparameters format data to s format", "response": "def eigs_s(infile=\"\", dir_path='.'):\n \"\"\"\n Converts eigenparameters format data to s format\n\n Parameters\n ___________________\n Input:\n file : input file name with eigenvalues (tau) and eigenvectors (V) with format:\n tau_1 V1_dec V1_inc tau_2 V2_dec V2_inc tau_3 V3_dec V3_inc\n Output\n the six tensor elements as a nested array\n [[x11,x22,x33,x12,x23,x13],....]\n\n \"\"\"\n file = os.path.join(dir_path, infile)\n eigs_data = np.loadtxt(file)\n Ss = []\n for ind in range(eigs_data.shape[0]):\n tau, Vdirs = [], []\n for k in range(0, 9, 3):\n tau.append(eigs_data[ind][k])\n Vdirs.append([eigs_data[ind][k+1], 
eigs_data[ind][k+2]])\n s = list(pmag.doeigs_s(tau, Vdirs))\n Ss.append(s)\n return Ss"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef plot_gc(poles, color='g', fignum=1):\n for pole in poles:\n pmagplotlib.plot_circ(fignum, pole, 90., color)", "response": "Plots a great circle on an equal area projection"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef atrm_magic(meas_file, dir_path=\".\", input_dir_path=\"\",\n input_spec_file='specimens.txt', output_spec_file='specimens.txt',\n data_model_num=3):\n \"\"\"\n Converts ATRM data to best-fit tensor (6 elements plus sigma)\n\n Parameters\n ----------\n meas_file : str\n input measurement file\n dir_path : str\n output directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n input_spec_file : str\n input specimen file name, default \"specimens.txt\"\n output_spec_file : str\n output specimen file name, default \"specimens.txt\"\n data_model_num : number\n MagIC data model [2, 3], default 3\n\n Returns\n ---------\n Tuple : (True or False indicating if conversion was successful, output file name written)\n\n \"\"\"\n # fix up file names\n input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)\n meas_file = pmag.resolve_file_name(meas_file, input_dir_path)\n rmag_anis = os.path.join(dir_path, 'rmag_anisotropy.txt')\n rmag_res = os.path.join(dir_path, 'rmag_results.txt')\n input_spec_file = pmag.resolve_file_name(input_spec_file, input_dir_path)\n output_spec_file = pmag.resolve_file_name(output_spec_file, dir_path)\n\n # read in data\n if data_model_num == 3:\n meas_data = []\n meas_data3, file_type = pmag.magic_read(meas_file)\n if file_type != 'measurements':\n print(\n \"-E- {} is not a valid measurements file, {}\".format(meas_file, file_type))\n return False\n # convert meas_data to 2.5\n for rec in meas_data3:\n 
meas_map = map_magic.meas_magic3_2_magic2_map\n meas_data.append(map_magic.mapping(rec, meas_map))\n old_spec_recs, file_type = pmag.magic_read(input_spec_file)\n if file_type != 'specimens':\n print(\"-W- {} is not a valid specimens file \".format(input_spec_file))\n old_spec_recs = []\n spec_recs = []\n for rec in old_spec_recs:\n spec_map = map_magic.spec_magic3_2_magic2_map\n spec_recs.append(map_magic.mapping(rec, spec_map))\n\n else:\n meas_data, file_type = pmag.magic_read(meas_file)\n if file_type != 'magic_measurements':\n print(\"-E- {} is is not a valid magic_measurements file \".format(file_type))\n return False, \"{} is not a valid magic_measurements file, {}\".format(meas_file, file_type)\n\n meas_data = pmag.get_dictitem(\n meas_data, 'magic_method_codes', 'LP-AN-TRM', 'has')\n if not len(meas_data):\n print(\"-E- No measurement records found with code LP-AN-TRM\")\n return False, \"No measurement records found with code LP-AN-TRM\"\n #\n #\n # get sorted list of unique specimen names\n ssort = []\n for rec in meas_data:\n spec = rec[\"er_specimen_name\"]\n if spec not in ssort:\n ssort.append(spec)\n sids = sorted(ssort)\n #\n #\n # work on each specimen\n #\n specimen, npos = 0, 6\n RmagSpecRecs, RmagResRecs = [], []\n SpecRecs, SpecRecs3 = [], []\n while specimen < len(sids):\n nmeas = 0\n s = sids[specimen]\n RmagSpecRec = {}\n RmagResRec = {}\n # get old specrec here if applicable\n if data_model_num == 3:\n if spec_recs:\n try:\n RmagResRec = pmag.get_dictitem(\n spec_recs, 'er_specimen_name', s, 'T')[0]\n RmagSpecRec = pmag.get_dictitem(\n spec_recs, 'er_specimen_name', s, 'T')[0]\n except IndexError:\n pass\n\n BX, X = [], []\n method_codes = []\n Spec0 = \"\"\n #\n # find the data from the meas_data file for this sample\n # and get dec, inc, int and convert to x,y,z\n #\n # fish out data for this specimen name\n data = pmag.get_dictitem(meas_data, 'er_specimen_name', s, 'T')\n if len(data) > 5:\n RmagSpecRec[\"rmag_anisotropy_name\"] = 
data[0][\"er_specimen_name\"]\n RmagSpecRec[\"er_location_name\"] = data[0].get(\n \"er_location_name\", \"\")\n RmagSpecRec[\"er_specimen_name\"] = data[0][\"er_specimen_name\"]\n if not \"er_sample_name\" in RmagSpecRec:\n RmagSpecRec[\"er_sample_name\"] = data[0].get(\n \"er_sample_name\", \"\")\n RmagSpecRec[\"er_site_name\"] = data[0].get(\"er_site_name\", \"\")\n RmagSpecRec[\"magic_experiment_names\"] = RmagSpecRec[\"rmag_anisotropy_name\"] + \":ATRM\"\n RmagSpecRec[\"er_citation_names\"] = \"This study\"\n RmagResRec[\"rmag_result_name\"] = data[0][\"er_specimen_name\"] + \":ATRM\"\n RmagResRec[\"er_location_names\"] = data[0].get(\n \"er_location_names\", \"\")\n RmagResRec[\"er_specimen_names\"] = data[0][\"er_specimen_name\"]\n if data_model_num == 2:\n RmagResRec[\"er_sample_names\"] = data[0].get(\n \"er_sample_name\", \"\")\n RmagResRec[\"er_site_names\"] = data[0].get(\"er_site_name\", \"\")\n RmagResRec[\"magic_experiment_names\"] = RmagSpecRec[\"rmag_anisotropy_name\"] + \":ATRM\"\n RmagResRec[\"er_citation_names\"] = \"This study\"\n RmagSpecRec[\"anisotropy_type\"] = \"ATRM\"\n if \"magic_instrument_codes\" in list(data[0].keys()):\n RmagSpecRec[\"magic_instrument_codes\"] = data[0][\"magic_instrument_codes\"]\n else:\n RmagSpecRec[\"magic_instrument_codes\"] = \"\"\n RmagSpecRec[\"anisotropy_description\"] = \"Hext statistics adapted to ATRM\"\n for rec in data:\n meths = rec['magic_method_codes'].strip().split(':')\n Dir = []\n Dir.append(float(rec[\"measurement_dec\"]))\n Dir.append(float(rec[\"measurement_inc\"]))\n Dir.append(float(rec[\"measurement_magn_moment\"]))\n if \"LT-T-Z\" in meths:\n BX.append(pmag.dir2cart(Dir)) # append baseline steps\n elif \"LT-T-I\" in meths:\n X.append(pmag.dir2cart(Dir))\n nmeas += 1\n #\n if len(BX) == 1:\n for i in range(len(X) - 1):\n BX.append(BX[0]) # assume first 0 field step as baseline\n elif len(BX) == 0: # assume baseline is zero\n for i in range(len(X)):\n BX.append([0., 0., 0.]) # assume baseline 
of 0\n elif len(BX) != len(X): # if BX isn't just one measurement or one in between every infield step, just assume it is zero\n print('something odd about the baselines - just assuming zero')\n for i in range(len(X)):\n BX.append([0., 0., 0.]) # assume baseline of 0\n if nmeas < 6: # must have at least 6 measurements right now -\n print('skipping specimen ', s, ' too few measurements')\n specimen += 1\n else:\n # B matrix made from design matrix for positions\n B, H, tmpH = pmag.designATRM(npos)\n #\n # subtract optional baseline and put in a work array\n #\n work = np.zeros((nmeas, 3), 'f')\n for i in range(nmeas):\n for j in range(3):\n # subtract baseline, if available\n work[i][j] = X[i][j] - BX[i][j]\n #\n # calculate tensor elements\n # first put ARM components in w vector\n #\n w = np.zeros((npos * 3), 'f')\n index = 0\n for i in range(npos):\n for j in range(3):\n w[index] = work[i][j]\n index += 1\n s = np.zeros((6), 'f') # initialize the s matrix\n for i in range(6):\n for j in range(len(w)):\n s[i] += B[i][j] * w[j]\n trace = s[0] + s[1] + s[2] # normalize by the trace\n for i in range(6):\n s[i] = s[i] / trace\n a = pmag.s2a(s)\n\n # ------------------------------------------------------------\n # Calculating dels is different than in the Kappabridge\n # routine. Use trace normalized tensor (a) and the applied\n # unit field directions (tmpH) to generate model X,Y,Z\n # components. Then compare these with the measured values.\n # ------------------------------------------------------------\n S = 0.\n comp = np.zeros((npos * 3), 'f')\n for i in range(npos):\n for j in range(3):\n index = i * 3 + j\n compare = a[j][0] * tmpH[i][0] + a[j][1] * \\\n tmpH[i][1] + a[j][2] * tmpH[i][2]\n comp[index] = compare\n for i in range(npos * 3):\n d = (w[i] / trace) - comp[i] # del values\n S += d * d\n nf = float(npos * 3. - 6.) 
# number of degrees of freedom\n if S > 0:\n sigma = np.sqrt(S / nf)\n else:\n sigma = 0\n hpars = pmag.dohext(nf, sigma, s)\n #\n # prepare for output\n #\n RmagSpecRec[\"anisotropy_s1\"] = '%8.6f' % (s[0])\n RmagSpecRec[\"anisotropy_s2\"] = '%8.6f' % (s[1])\n RmagSpecRec[\"anisotropy_s3\"] = '%8.6f' % (s[2])\n RmagSpecRec[\"anisotropy_s4\"] = '%8.6f' % (s[3])\n RmagSpecRec[\"anisotropy_s5\"] = '%8.6f' % (s[4])\n RmagSpecRec[\"anisotropy_s6\"] = '%8.6f' % (s[5])\n RmagSpecRec[\"anisotropy_mean\"] = '%8.3e' % (trace / 3)\n RmagSpecRec[\"anisotropy_sigma\"] = '%8.6f' % (sigma)\n RmagSpecRec[\"anisotropy_unit\"] = \"Am^2\"\n RmagSpecRec[\"anisotropy_n\"] = '%i' % (npos)\n RmagSpecRec[\"anisotropy_tilt_correction\"] = '-1'\n # used by thellier_gui - must be taken out for uploading\n RmagSpecRec[\"anisotropy_F\"] = '%7.1f ' % (hpars[\"F\"])\n # used by thellier_gui - must be taken out for uploading\n RmagSpecRec[\"anisotropy_F_crit\"] = hpars[\"F_crit\"]\n RmagResRec[\"anisotropy_t1\"] = '%8.6f ' % (hpars[\"t1\"])\n RmagResRec[\"anisotropy_t2\"] = '%8.6f ' % (hpars[\"t2\"])\n RmagResRec[\"anisotropy_t3\"] = '%8.6f ' % (hpars[\"t3\"])\n RmagResRec[\"anisotropy_v1_dec\"] = '%7.1f ' % (hpars[\"v1_dec\"])\n RmagResRec[\"anisotropy_v2_dec\"] = '%7.1f ' % (hpars[\"v2_dec\"])\n RmagResRec[\"anisotropy_v3_dec\"] = '%7.1f ' % (hpars[\"v3_dec\"])\n RmagResRec[\"anisotropy_v1_inc\"] = '%7.1f ' % (hpars[\"v1_inc\"])\n RmagResRec[\"anisotropy_v2_inc\"] = '%7.1f ' % (hpars[\"v2_inc\"])\n RmagResRec[\"anisotropy_v3_inc\"] = '%7.1f ' % (hpars[\"v3_inc\"])\n RmagResRec[\"anisotropy_ftest\"] = '%7.1f ' % (hpars[\"F\"])\n RmagResRec[\"anisotropy_ftest12\"] = '%7.1f ' % (hpars[\"F12\"])\n RmagResRec[\"anisotropy_ftest23\"] = '%7.1f ' % (hpars[\"F23\"])\n RmagResRec[\"result_description\"] = 'Critical F: ' + \\\n hpars[\"F_crit\"] + ';Critical F12/F13: ' + hpars[\"F12_crit\"]\n if hpars[\"e12\"] > hpars[\"e13\"]:\n RmagResRec[\"anisotropy_v1_zeta_semi_angle\"] = '%7.1f ' % (\n 
hpars['e12'])\n RmagResRec[\"anisotropy_v1_zeta_dec\"] = '%7.1f ' % (\n hpars['v2_dec'])\n RmagResRec[\"anisotropy_v1_zeta_inc\"] = '%7.1f ' % (\n hpars['v2_inc'])\n RmagResRec[\"anisotropy_v2_zeta_semi_angle\"] = '%7.1f ' % (\n hpars['e12'])\n RmagResRec[\"anisotropy_v2_zeta_dec\"] = '%7.1f ' % (\n hpars['v1_dec'])\n RmagResRec[\"anisotropy_v2_zeta_inc\"] = '%7.1f ' % (\n hpars['v1_inc'])\n RmagResRec[\"anisotropy_v1_eta_semi_angle\"] = '%7.1f ' % (\n hpars['e13'])\n RmagResRec[\"anisotropy_v1_eta_dec\"] = '%7.1f ' % (\n hpars['v3_dec'])\n RmagResRec[\"anisotropy_v1_eta_inc\"] = '%7.1f ' % (\n hpars['v3_inc'])\n RmagResRec[\"anisotropy_v3_eta_semi_angle\"] = '%7.1f ' % (\n hpars['e13'])\n RmagResRec[\"anisotropy_v3_eta_dec\"] = '%7.1f ' % (\n hpars['v1_dec'])\n RmagResRec[\"anisotropy_v3_eta_inc\"] = '%7.1f ' % (\n hpars['v1_inc'])\n else:\n RmagResRec[\"anisotropy_v1_zeta_semi_angle\"] = '%7.1f ' % (\n hpars['e13'])\n RmagResRec[\"anisotropy_v1_zeta_dec\"] = '%7.1f ' % (\n hpars['v3_dec'])\n RmagResRec[\"anisotropy_v1_zeta_inc\"] = '%7.1f ' % (\n hpars['v3_inc'])\n RmagResRec[\"anisotropy_v3_zeta_semi_angle\"] = '%7.1f ' % (\n hpars['e13'])\n RmagResRec[\"anisotropy_v3_zeta_dec\"] = '%7.1f ' % (\n hpars['v1_dec'])\n RmagResRec[\"anisotropy_v3_zeta_inc\"] = '%7.1f ' % (\n hpars['v1_inc'])\n RmagResRec[\"anisotropy_v1_eta_semi_angle\"] = '%7.1f ' % (\n hpars['e12'])\n RmagResRec[\"anisotropy_v1_eta_dec\"] = '%7.1f ' % (\n hpars['v2_dec'])\n RmagResRec[\"anisotropy_v1_eta_inc\"] = '%7.1f ' % (\n hpars['v2_inc'])\n RmagResRec[\"anisotropy_v2_eta_semi_angle\"] = '%7.1f ' % (\n hpars['e12'])\n RmagResRec[\"anisotropy_v2_eta_dec\"] = '%7.1f ' % (\n hpars['v1_dec'])\n RmagResRec[\"anisotropy_v2_eta_inc\"] = '%7.1f ' % (\n hpars['v1_inc'])\n if hpars[\"e23\"] > hpars['e12']:\n RmagResRec[\"anisotropy_v2_zeta_semi_angle\"] = '%7.1f ' % (\n hpars['e23'])\n RmagResRec[\"anisotropy_v2_zeta_dec\"] = '%7.1f ' % (\n hpars['v3_dec'])\n RmagResRec[\"anisotropy_v2_zeta_inc\"] = 
'%7.1f ' % (\n hpars['v3_inc'])\n RmagResRec[\"anisotropy_v3_zeta_semi_angle\"] = '%7.1f ' % (\n hpars['e23'])\n RmagResRec[\"anisotropy_v3_zeta_dec\"] = '%7.1f ' % (\n hpars['v2_dec'])\n RmagResRec[\"anisotropy_v3_zeta_inc\"] = '%7.1f ' % (\n hpars['v2_inc'])\n RmagResRec[\"anisotropy_v3_eta_semi_angle\"] = '%7.1f ' % (\n hpars['e13'])\n RmagResRec[\"anisotropy_v3_eta_dec\"] = '%7.1f ' % (\n hpars['v1_dec'])\n RmagResRec[\"anisotropy_v3_eta_inc\"] = '%7.1f ' % (\n hpars['v1_inc'])\n RmagResRec[\"anisotropy_v2_eta_semi_angle\"] = '%7.1f ' % (\n hpars['e12'])\n RmagResRec[\"anisotropy_v2_eta_dec\"] = '%7.1f ' % (\n hpars['v1_dec'])\n RmagResRec[\"anisotropy_v2_eta_inc\"] = '%7.1f ' % (\n hpars['v1_inc'])\n else:\n RmagResRec[\"anisotropy_v2_zeta_semi_angle\"] = '%7.1f ' % (\n hpars['e12'])\n RmagResRec[\"anisotropy_v2_zeta_dec\"] = '%7.1f ' % (\n hpars['v1_dec'])\n RmagResRec[\"anisotropy_v2_zeta_inc\"] = '%7.1f ' % (\n hpars['v1_inc'])\n RmagResRec[\"anisotropy_v3_eta_semi_angle\"] = '%7.1f ' % (\n hpars['e23'])\n RmagResRec[\"anisotropy_v3_eta_dec\"] = '%7.1f ' % (\n hpars['v2_dec'])\n RmagResRec[\"anisotropy_v3_eta_inc\"] = '%7.1f ' % (\n hpars['v2_inc'])\n RmagResRec[\"anisotropy_v3_zeta_semi_angle\"] = '%7.1f ' % (\n hpars['e13'])\n RmagResRec[\"anisotropy_v3_zeta_dec\"] = '%7.1f ' % (\n hpars['v1_dec'])\n RmagResRec[\"anisotropy_v3_zeta_inc\"] = '%7.1f ' % (\n hpars['v1_inc'])\n RmagResRec[\"anisotropy_v2_eta_semi_angle\"] = '%7.1f ' % (\n hpars['e23'])\n RmagResRec[\"anisotropy_v2_eta_dec\"] = '%7.1f ' % (\n hpars['v3_dec'])\n RmagResRec[\"anisotropy_v2_eta_inc\"] = '%7.1f ' % (\n hpars['v3_inc'])\n RmagResRec[\"tilt_correction\"] = '-1'\n RmagResRec[\"anisotropy_type\"] = 'ATRM'\n RmagResRec[\"magic_method_codes\"] = 'LP-AN-TRM:AE-H'\n RmagSpecRec[\"magic_method_codes\"] = 'LP-AN-TRM:AE-H'\n RmagResRec[\"magic_software_packages\"] = pmag.get_version()\n RmagSpecRec[\"magic_software_packages\"] = pmag.get_version()\n RmagSpecRecs.append(RmagSpecRec)\n 
RmagResRecs.append(RmagResRec)\n specimen += 1\n if data_model_num == 3:\n SpecRec = RmagResRec.copy()\n SpecRec.update(RmagSpecRec)\n SpecRecs.append(SpecRec)\n\n # finished iterating through specimens,\n # now we need to write out the data to files\n if data_model_num == 3:\n # translate records\n for rec in SpecRecs:\n rec3 = map_magic.convert_aniso('magic3', rec)\n SpecRecs3.append(rec3)\n\n # write output to 3.0 specimens file\n pmag.magic_write(output_spec_file, SpecRecs3, 'specimens')\n print(\"specimen data stored in {}\".format(output_spec_file))\n return True, output_spec_file\n\n else:\n # write output to 2.5 rmag_ files\n pmag.magic_write(rmag_anis, RmagSpecRecs, 'rmag_anisotropy')\n print(\"specimen tensor elements stored in \", rmag_anis)\n pmag.magic_write(rmag_res, RmagResRecs, 'rmag_results')\n print(\"specimen statistics and eigenparameters stored in \", rmag_res)\n return True, rmag_anis", "response": "Convert ATRM data to best fit tensor"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntransform decs and incs to geographic coordinates.", "response": "def transform_to_geographic(this_spec_meas_df, samp_df, samp, coord=\"0\"):\n \"\"\"\n Transform decs/incs to geographic coordinates.\n Calls pmag.dogeo_V for the heavy lifting\n\n Parameters\n ----------\n this_spec_meas_df : pandas dataframe of measurements for a single specimen\n samp_df : pandas dataframe of samples\n samp : samp name\n\n Returns\n ---------\n this_spec_meas_df : measurements dataframe with transformed coordinates\n \"\"\"\n # we could return the type of coordinates ACTUALLY used\n # transform geographic\n decs = this_spec_meas_df['dir_dec'].values.tolist()\n incs = this_spec_meas_df['dir_inc'].values.tolist()\n or_info, az_type = pmag.get_orient(samp_df,samp,data_model=3)\n if 'azimuth' in or_info.keys() and cb.not_null(or_info['azimuth'], False):\n azimuths=len(decs)*[or_info['azimuth']]\n dips=len(decs)*[or_info['dip']]\n # if azimuth/dip is 
missing, or orientation is bad,\n # stick with specimen coordinates\n else:\n return this_spec_meas_df\n dirs = [decs, incs, azimuths, dips]\n dirs_geo = np.array(list(map(list, list(zip(*dirs)))))\n decs, incs = pmag.dogeo_V(dirs_geo)\n if coord == '100' and 'bed_dip_direction' in or_info.keys() and or_info['bed_dip_direction']!=\"\": # need to do tilt correction too\n bed_dip_dirs = len(decs)*[or_info['bed_dip_direction']]\n bed_dips = len(decs) * [or_info['bed_dip']]\n dirs = [decs, incs, bed_dip_dirs, bed_dips]\n ## this transposes the columns and rows of the list of lists\n dirs_tilt = np.array(list(map(list, list(zip(*dirs)))))\n decs, incs = pmag.dotilt_V(dirs_tilt)\n this_spec_meas_df['dir_dec'] = decs\n this_spec_meas_df['dir_inc'] = incs\n return this_spec_meas_df"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef hysteresis_magic(output_dir_path=\".\", input_dir_path=\"\", spec_file=\"specimens.txt\",\n meas_file=\"measurements.txt\", fmt=\"svg\",\n save_plots=True, make_plots=True, pltspec=\"\", n_specs=5, interactive=False):\n \"\"\"\n Calculate hysteresis parameters and plot hysteresis data.\n Plotting may be called interactively with save_plots==False,\n or be suppressed entirely with make_plots==False.\n\n Parameters\n ----------\n output_dir_path : str, default \".\"\n Note: if using Windows, all figures will be saved to the working directory,\n *not* dir_path\n input_dir_path : str\n path for input file if different from output_dir_path (default is same)\n spec_file : str, default \"specimens.txt\"\n output file to save hysteresis data\n meas_file : str, default \"measurements.txt\"\n input measurement file\n fmt : str, default \"svg\"\n format for figures, [svg, jpg, pdf, png]\n save_plots : bool, default True\n if True, generate and save all requested plots\n make_plots : bool, default True\n if False, skip making plots and just save hysteresis data\n (if False, save_plots will be set to 
False also)\n pltspec : str, default \"\"\n specimen name to plot, otherwise will plot all specimens\n n_specs : int\n number of specimens to plot, default 5\n if you want to make all possible plots, specify \"all\"\n interactive : bool, default False\n interactively plot and display for each specimen\n (this is best used on the command line or in the Python interpreter)\n\n\n Returns\n ---------\n Tuple : (True or False indicating if conversion was sucessful, output file names written)\n \"\"\"\n # put plots in output_dir_path, unless isServer\n incl_directory = True\n if pmagplotlib.isServer or set_env.IS_WIN:\n incl_directory = False\n # figure out directory/file paths\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, output_dir_path)\n spec_file = pmag.resolve_file_name(spec_file, input_dir_path)\n meas_file = pmag.resolve_file_name(meas_file, input_dir_path)\n # format and initialize variables\n verbose = pmagplotlib.verbose\n version_num = pmag.get_version()\n if not make_plots:\n irm_init, imag_init = -1, -1\n save_plots = False\n if save_plots:\n verbose = False\n if pltspec:\n pass\n if interactive:\n save_plots = False\n\n SpecRecs = []\n #\n #\n meas_data, file_type = pmag.magic_read(meas_file)\n if file_type != 'measurements':\n print('bad file', meas_file)\n return False, []\n #\n # initialize some variables\n # define figure numbers for hyst,deltaM,DdeltaM curves\n HystRecs, RemRecs = [], []\n HDD = {}\n if verbose and make_plots:\n print(\"Plots may be on top of each other - use mouse to place \")\n if make_plots:\n HDD['hyst'], HDD['deltaM'], HDD['DdeltaM'] = 1, 2, 3\n if make_plots and (not save_plots):\n pmagplotlib.plot_init(HDD['DdeltaM'], 5, 5)\n pmagplotlib.plot_init(HDD['deltaM'], 5, 5)\n pmagplotlib.plot_init(HDD['hyst'], 5, 5)\n imag_init = 0\n irm_init = 0\n else:\n HDD['hyst'], HDD['deltaM'], HDD['DdeltaM'], HDD['irm'], HDD['imag'] = 0, 0, 0, 0, 0\n #\n if spec_file:\n prior_data, file_type = 
pmag.magic_read(spec_file)\n #\n # get list of unique experiment names and specimen names\n #\n experiment_names, sids = [], []\n hys_data = pmag.get_dictitem(meas_data, 'method_codes', 'LP-HYS', 'has')\n dcd_data = pmag.get_dictitem(\n meas_data, 'method_codes', 'LP-IRM-DCD', 'has')\n imag_data = pmag.get_dictitem(meas_data, 'method_codes', 'LP-IMAG', 'has')\n for rec in hys_data:\n if rec['experiment'] not in experiment_names:\n experiment_names.append(rec['experiment'])\n if rec['specimen'] not in sids:\n sids.append(rec['specimen'])\n #\n k = 0\n # if plotting only one specimen, find it\n if pltspec:\n k = sids.index(pltspec)\n # if plotting only n specimens, remove others from the list\n elif n_specs != \"all\":\n try:\n sids = sids[:n_specs]\n except:\n pass\n cnt = 0\n while k < len(sids):\n specimen = sids[k]\n if pltspec:\n if specimen != pltspec:\n k += 1\n continue\n else:\n for key, value in HDD.items():\n cnt += 1\n HDD[key] = cnt\n\n #HDD = {key: value + len(HDD) + k for (key, value) in HDD.items()}\n # initialize a new specimen hysteresis record\n HystRec = {'specimen': specimen, 'experiment': \"\"}\n if verbose and make_plots:\n print(specimen, k+1, 'out of ', len(sids))\n #\n #\n # B,M for hysteresis, Bdcd,Mdcd for irm-dcd data\n B, M, Bdcd, Mdcd = [], [], [], []\n Bimag, Mimag = [], [] # Bimag,Mimag for initial magnetization curves\n # fish out all the LP-HYS data for this specimen\n spec_data = pmag.get_dictitem(hys_data, 'specimen', specimen, 'T')\n if len(spec_data) > 0:\n meths = spec_data[0]['method_codes'].split(':')\n e = spec_data[0]['experiment']\n HystRec['experiment'] = spec_data[0]['experiment']\n for rec in spec_data:\n B.append(float(rec['meas_field_dc']))\n M.append(float(rec['magn_moment']))\n # fish out all the data for this specimen\n spec_data = pmag.get_dictitem(dcd_data, 'specimen', specimen, 'T')\n if len(spec_data) > 0:\n HystRec['experiment'] = HystRec['experiment'] + \\\n ':'+spec_data[0]['experiment']\n irm_exp = 
spec_data[0]['experiment']\n for rec in spec_data:\n Bdcd.append(float(rec['treat_dc_field']))\n Mdcd.append(float(rec['magn_moment']))\n # fish out all the data for this specimen\n spec_data = pmag.get_dictitem(imag_data, 'specimen', specimen, 'T')\n if len(spec_data) > 0:\n imag_exp = spec_data[0]['experiment']\n for rec in spec_data:\n Bimag.append(float(rec['meas_field_dc']))\n Mimag.append(float(rec['magn_moment']))\n #\n # now plot the hysteresis curve\n #\n if len(B) > 0:\n hmeths = []\n for meth in meths:\n hmeths.append(meth)\n\n hpars = pmagplotlib.plot_hdd(HDD, B, M, e)\n if interactive:\n if not set_env.IS_WIN:\n pmagplotlib.draw_figs(HDD)\n #\n if make_plots:\n pmagplotlib.plot_hpars(HDD, hpars, 'bs')\n HystRec['hyst_mr_moment'] = hpars['hysteresis_mr_moment']\n HystRec['hyst_ms_moment'] = hpars['hysteresis_ms_moment']\n HystRec['hyst_bc'] = hpars['hysteresis_bc']\n HystRec['hyst_bcr'] = hpars['hysteresis_bcr']\n HystRec['hyst_xhf'] = hpars['hysteresis_xhf']\n HystRec['experiments'] = e\n HystRec['software_packages'] = version_num\n if hpars[\"magic_method_codes\"] not in hmeths:\n hmeths.append(hpars[\"magic_method_codes\"])\n methods = \"\"\n for meth in hmeths:\n methods = methods+meth.strip()+\":\"\n HystRec[\"method_codes\"] = methods[:-1]\n HystRec[\"citations\"] = \"This study\"\n #\n if len(Bdcd) > 0:\n rmeths = []\n for meth in meths:\n rmeths.append(meth)\n if verbose and make_plots:\n print('plotting IRM')\n if irm_init == 0:\n cnt += 1\n HDD['irm'] = cnt #5 if 'imag' in HDD else 4\n if make_plots and (not save_plots):\n pmagplotlib.plot_init(HDD['irm'], 5, 5)\n irm_init = 1\n rpars = pmagplotlib.plot_irm(HDD['irm'], Bdcd, Mdcd, irm_exp)\n HystRec['rem_mr_moment'] = rpars['remanence_mr_moment']\n HystRec['rem_bcr'] = rpars['remanence_bcr']\n HystRec['experiments'] = specimen+':'+irm_exp\n if rpars[\"magic_method_codes\"] not in meths:\n meths.append(rpars[\"magic_method_codes\"])\n methods = \"\"\n for meth in rmeths:\n methods = 
methods+meth.strip()+\":\"\n HystRec[\"method_codes\"] = HystRec['method_codes']+':'+methods[:-1]\n HystRec[\"citations\"] = \"This study\"\n else:\n if irm_init:\n pmagplotlib.clearFIG(HDD['irm'])\n if len(Bimag) > 0:\n if verbose and make_plots:\n print('plotting initial magnetization curve')\n# first normalize by Ms\n Mnorm = []\n for m in Mimag:\n Mnorm.append(m / float(hpars['hysteresis_ms_moment']))\n if imag_init == 0:\n HDD['imag'] = 4\n if make_plots and (not save_plots):\n pmagplotlib.plot_init(HDD['imag'], 5, 5)\n imag_init = 1\n pmagplotlib.plot_imag(HDD['imag'], Bimag, Mnorm, imag_exp)\n else:\n if imag_init:\n pmagplotlib.clearFIG(HDD['imag'])\n if len(list(HystRec.keys())) > 0:\n HystRecs.append(HystRec)\n #\n files = {}\n if save_plots and make_plots:\n if pltspec:\n s = pltspec\n else:\n s = specimen\n files = {}\n for key in list(HDD.keys()):\n if incl_directory:\n files[key] = os.path.join(output_dir_path, s+'_'+key+'.'+fmt)\n else:\n files[key] = s+'_'+key+'.'+fmt\n if make_plots and save_plots:\n pmagplotlib.save_plots(HDD, files, incl_directory=incl_directory)\n #if pltspec:\n # return True, []\n if interactive:\n pmagplotlib.draw_figs(HDD)\n ans = input(\n \"S[a]ve plots, [s]pecimen name, [q]uit, to continue\\n \")\n if ans == \"a\":\n files = {}\n for key in list(HDD.keys()):\n if incl_directory:\n files[key] = os.path.join(output_dir_path, specimen+'_'+key+'.'+fmt)\n else:\n files[key] = specimen+'_'+key+'.'+fmt\n pmagplotlib.save_plots(HDD, files, incl_directory=incl_directory)\n if ans == '':\n k += 1\n if ans == \"p\":\n del HystRecs[-1]\n k -= 1\n if ans == 'q':\n print(\"Good bye\")\n return True, []\n if ans == 's':\n keepon = 1\n specimen = input(\n 'Enter desired specimen name (or first part there of): ')\n while keepon == 1:\n try:\n k = sids.index(specimen)\n keepon = 0\n except:\n tmplist = []\n for qq in range(len(sids)):\n if specimen in sids[qq]:\n tmplist.append(sids[qq])\n print(specimen, \" not found, but this was: \")\n 
print(tmplist)\n specimen = input('Select one or try again\\n ')\n k = sids.index(specimen)\n\n else:\n k += 1\n if len(B) == 0 and len(Bdcd) == 0:\n if verbose:\n print('skipping this one - no hysteresis data')\n k += 1\n if k < len(sids):\n # must re-init figs for Windows to keep size\n if make_plots and set_env.IS_WIN:\n if not save_plots:\n pmagplotlib.plot_init(HDD['DdeltaM'], 5, 5)\n pmagplotlib.plot_init(HDD['deltaM'], 5, 5)\n pmagplotlib.plot_init(HDD['hyst'], 5, 5)\n if len(Bimag) > 0:\n HDD['imag'] = 4\n if not save_plots:\n pmagplotlib.plot_init(HDD['imag'], 5, 5)\n if len(Bdcd) > 0:\n HDD['irm'] = 5 if 'imag' in HDD else 4\n if not save_plots:\n pmagplotlib.plot_init(HDD['irm'], 5, 5)\n elif not make_plots and set_env.IS_WIN:\n HDD['hyst'], HDD['deltaM'], HDD['DdeltaM'], HDD['irm'], HDD['imag'] = 0, 0, 0, 0, 0\n if len(HystRecs) > 0:\n # go through prior_data, clean out prior results and save combined file as spec_file\n SpecRecs, keys = [], list(HystRecs[0].keys())\n if len(prior_data) > 0:\n prior_keys = list(prior_data[0].keys())\n else:\n prior_keys = []\n for rec in prior_data:\n for key in keys:\n if key not in list(rec.keys()):\n rec[key] = \"\"\n if 'LP-HYS' not in rec['method_codes']:\n SpecRecs.append(rec)\n for rec in HystRecs:\n for key in prior_keys:\n if key not in list(rec.keys()):\n rec[key] = \"\"\n prior = pmag.get_dictitem(\n prior_data, 'specimen', rec['specimen'], 'T')\n if len(prior) > 0 and 'sample' in list(prior[0].keys()):\n # pull sample name from prior specimens table\n rec['sample'] = prior[0]['sample']\n SpecRecs.append(rec)\n # drop unnecessary/duplicate rows\n #dir_path = os.path.split(spec_file)[0]\n con = cb.Contribution(input_dir_path, read_tables=[])\n con.add_magic_table_from_data('specimens', SpecRecs)\n con.tables['specimens'].drop_duplicate_rows(\n ignore_cols=['specimen', 'sample', 'citations', 'software_packages'])\n con.tables['specimens'].df = con.tables['specimens'].df.drop_duplicates()\n spec_file = 
os.path.join(output_dir_path, os.path.split(spec_file)[1])\n con.write_table_to_file('specimens', custom_name=spec_file)\n if verbose:\n print(\"hysteresis parameters saved in \", spec_file)\n return True, [spec_file]", "response": "This function calculates hysteresis parameters and plots hysteresis data for the requested specimens, returning a tuple of a success flag (True/False) and the list of output file names written."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nextracting directional and/or intensity data from a MagIC 3.0 format sites.txt file. Default output format is an Excel file. Optional latex format longtable file which can be uploaded to Overleaf or typeset with latex on your own computer. Parameters ___________ site_file : str input file name directions_file : str output file name for directional data intensity_file : str output file name for intensity data info_file : str output file name for site information (lat, lon, location, age....) output_dir_path : str path for output files input_dir_path : str path for input file if different from output_dir_path (default is same) latex : boolean if True, output file should be latex formatted table with a .tex ending Return : [True,False], error type : True if successful Effects : writes Excel or LaTeX formatted tables for use in publications", "response": "def sites_extract(site_file='sites.txt', directions_file='directions.xls',\n intensity_file='intensity.xls', info_file='site_info.xls',\n output_dir_path='.', input_dir_path='', latex=False):\n \"\"\"\n Extracts directional and/or intensity data from a MagIC 3.0 format sites.txt file.\n Default output format is an Excel file.\n Optional latex format longtable file which can be uploaded to Overleaf or\n typeset with latex on your own computer.\n\n Parameters\n ___________\n site_file : str\n input file name\n directions_file : str\n output file name for directional data\n intensity_file : str\n output file name for intensity data\n info_file : str\n output 
file name for site information (lat, lon, location, age....)\n output_dir_path : str\n path for output files\n input_dir_path : str\n path for intput file if different from output_dir_path (default is same)\n latex : boolean\n if True, output file should be latex formatted table with a .tex ending\n\n Return :\n [True,False], error type : True if successful\n\n Effects :\n writes Excel or LaTeX formatted tables for use in publications\n \"\"\"\n # initialize outfile names\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, output_dir_path)\n try:\n fname = pmag.resolve_file_name(site_file, input_dir_path)\n except IOError:\n print(\"bad site file name\")\n return False, \"bad site file name\"\n sites_df = pd.read_csv(fname, sep='\\t', header=1)\n dir_df = map_magic.convert_site_dm3_table_directions(sites_df)\n dir_file = pmag.resolve_file_name(directions_file, output_dir_path)\n if len(dir_df):\n if latex:\n if dir_file.endswith('.xls'):\n dir_file = dir_file[:-4] + \".tex\"\n directions_out = open(dir_file, 'w+', errors=\"backslashreplace\")\n directions_out.write('\\documentclass{article}\\n')\n directions_out.write('\\\\usepackage{booktabs}\\n')\n directions_out.write('\\\\usepackage{longtable}\\n')\n directions_out.write('\\\\begin{document}')\n directions_out.write(dir_df.to_latex(\n index=False, longtable=True, multicolumn=False))\n directions_out.write('\\end{document}\\n')\n directions_out.close()\n else:\n dir_df.to_excel(dir_file, index=False)\n else:\n print(\"No directional data for ouput.\")\n dir_file = None\n intensity_file = pmag.resolve_file_name(intensity_file, output_dir_path)\n int_df = map_magic.convert_site_dm3_table_intensity(sites_df)\n if len(int_df):\n if latex:\n if intensity_file.endswith('.xls'):\n intensity_file = intensity_file[:-4] + \".tex\"\n intensities_out = open(intensity_file, 'w+',\n errors=\"backslashreplace\")\n intensities_out.write('\\documentclass{article}\\n')\n 
intensities_out.write('\\\\usepackage{booktabs}\\n')\n intensities_out.write('\\\\usepackage{longtable}\\n')\n intensities_out.write('\\\\begin{document}')\n intensities_out.write(int_df.to_latex(\n index=False, longtable=True, multicolumn=False))\n intensities_out.write('\\end{document}\\n')\n intensities_out.close()\n else:\n int_df.to_excel(intensity_file, index=False)\n else:\n print(\"No intensity data for ouput.\")\n intensity_file = None\n# site info\n nfo_df = sites_df.dropna(subset=['lat', 'lon']) # delete blank locations\n if len(nfo_df) > 0:\n SiteCols = [\"Site\", \"Location\", \"Lat. (N)\", \"Long. (E)\"]\n info_file = pmag.resolve_file_name(info_file, output_dir_path)\n age_cols = ['age', 'age_sigma', 'age_unit']\n for col in age_cols:\n if col not in nfo_df:\n nfo_df[col] = None\n test_age = nfo_df.dropna(subset=['age', 'age_sigma', 'age_unit'])\n if len(test_age) > 0:\n SiteCols.append(\"Age \")\n SiteCols.append(\"Age sigma\")\n SiteCols.append(\"Units\")\n nfo_df = nfo_df[['site', 'location', 'lat',\n 'lon', 'age', 'age_sigma', 'age_unit']]\n else:\n nfo_df = nfo_df[['site', 'location', 'lat', 'lon']]\n nfo_df.drop_duplicates(inplace=True)\n nfo_df.columns = SiteCols\n nfo_df.fillna(value='', inplace=True)\n if latex:\n if info_file.endswith('.xls'):\n info_file = info_file[:-4] + \".tex\"\n info_out = open(info_file, 'w+', errors=\"backslashreplace\")\n info_out.write('\\documentclass{article}\\n')\n info_out.write('\\\\usepackage{booktabs}\\n')\n info_out.write('\\\\usepackage{longtable}\\n')\n info_out.write('\\\\begin{document}')\n info_out.write(nfo_df.to_latex(\n index=False, longtable=True, multicolumn=False))\n info_out.write('\\end{document}\\n')\n info_out.close()\n else:\n nfo_df.to_excel(info_file, index=False)\n else:\n print(\"No location information for ouput.\")\n info_file = None\n return True, [fname for fname in [info_file, intensity_file, dir_file] if fname]"} {"SOURCE": "codesearchnet", "instruction": "Given the following 
Python 3 function, write the documentation\ndef specimens_extract(spec_file='specimens.txt', output_file='specimens.xls', landscape=False,\n longtable=False, output_dir_path='.', input_dir_path='', latex=False):\n \"\"\"\n Extracts specimen results from a MagIC 3.0 format specimens.txt file.\n Default output format is an Excel file.\n Optionally a latex formatted table which can be\n typeset with latex on your own computer.\n\n Parameters\n ___________\n spec_file : str, default \"specimens.txt\"\n input file name\n output_file : str, default \"specimens.xls\"\n output file name\n landscape : boolean, default False\n if True output latex landscape table\n longtable : boolean\n if True output latex longtable\n output_dir_path : str, default \".\"\n output file directory\n input_dir_path : str, default \"\"\n path for input file if different from output_dir_path (default is same)\n latex : boolean, default False\n if True, output file should be latex formatted table with a .tex ending\n\n Return :\n [True,False], data table error type : True if successful\n\n Effects :\n writes xls or latex formatted tables for use in publications\n \"\"\"\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, output_dir_path)\n try:\n fname = pmag.resolve_file_name(spec_file, input_dir_path)\n except IOError:\n print(\"bad specimen file name\")\n return False, \"bad specimen file name\"\n spec_df = pd.read_csv(fname, sep='\\t', header=1)\n spec_df.dropna('columns', how='all', inplace=True)\n if 'int_abs' in spec_df.columns:\n spec_df.dropna(subset=['int_abs'], inplace=True)\n if len(spec_df) > 0:\n table_df = map_magic.convert_specimen_dm3_table(spec_df)\n out_file = pmag.resolve_file_name(output_file, output_dir_path)\n if latex:\n if out_file.endswith('.xls'):\n out_file = out_file.rsplit('.')[0] + \".tex\"\n info_out = open(out_file, 'w+', errors=\"backslashreplace\")\n info_out.write('\\documentclass{article}\\n')\n info_out.write('\\\\usepackage{booktabs}\\n')\n if landscape:\n 
info_out.write('\\\\usepackage{lscape}')\n if longtable:\n info_out.write('\\\\usepackage{longtable}\\n')\n info_out.write('\\\\begin{document}\\n')\n if landscape:\n info_out.write('\\\\begin{landscape}\\n')\n info_out.write(table_df.to_latex(index=False, longtable=longtable,\n escape=True, multicolumn=False))\n if landscape:\n info_out.write('\\end{landscape}\\n')\n info_out.write('\\end{document}\\n')\n info_out.close()\n else:\n table_df.to_excel(out_file, index=False)\n\n else:\n print(\"No specimen data for ouput.\")\n return True, [out_file]", "response": "This function extracts specimen results from a MagIC 3. 0 format specimens. txt file and returns a list of lists of boolean values."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nextract criteria from a MagIC 3.0 format criteria.txt file. Default output format is an Excel file. typeset with latex on your own computer. Parameters ___________ crit_file : str, default \"criteria.txt\" input file name output_file : str, default \"criteria.xls\" output file name output_dir_path : str, default \".\" output file directory input_dir_path : str, default \"\" path for intput file if different from output_dir_path (default is same) latex : boolean, default False if True, output file should be latex formatted table with a .tex ending Return : [True,False], data table error type : True if successful Effects : writes xls or latex formatted tables for use in publications", "response": "def criteria_extract(crit_file='criteria.txt', output_file='criteria.xls',\n output_dir_path='.', input_dir_path='', latex=False):\n \"\"\"\n Extracts criteria from a MagIC 3.0 format criteria.txt file.\n Default output format is an Excel file.\n typeset with latex on your own computer.\n\n Parameters\n ___________\n crit_file : str, default \"criteria.txt\"\n input file name\n output_file : str, default \"criteria.xls\"\n output file name\n output_dir_path : str, default \".\"\n output file 
directory\n input_dir_path : str, default \"\"\n path for intput file if different from output_dir_path (default is same)\n latex : boolean, default False\n if True, output file should be latex formatted table with a .tex ending\n\n Return :\n [True,False], data table error type : True if successful\n\n Effects :\n writes xls or latex formatted tables for use in publications\n \"\"\"\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, output_dir_path)\n try:\n fname = pmag.resolve_file_name(crit_file, input_dir_path)\n except IOError:\n print(\"bad criteria file name\")\n return False, \"bad criteria file name\"\n crit_df = pd.read_csv(fname, sep='\\t', header=1)\n if len(crit_df) > 0:\n out_file = pmag.resolve_file_name(output_file, output_dir_path)\n\n s = crit_df['table_column'].str.split(pat='.', expand=True)\n crit_df['table'] = s[0]\n crit_df['column'] = s[1]\n crit_df = crit_df[['table', 'column',\n 'criterion_value', 'criterion_operation']]\n\n crit_df.columns = ['Table', 'Statistic', 'Threshold', 'Operation']\n\n if latex:\n if out_file.endswith('.xls'):\n out_file = out_file.rsplit('.')[0] + \".tex\"\n crit_df.loc[crit_df['Operation'].str.contains(\n '<'), 'operation'] = 'maximum'\n crit_df.loc[crit_df['Operation'].str.contains(\n '>'), 'operation'] = 'minimum'\n crit_df.loc[crit_df['Operation'] == '=', 'operation'] = 'equal to'\n info_out = open(out_file, 'w+', errors=\"backslashreplace\")\n info_out.write('\\documentclass{article}\\n')\n info_out.write('\\\\usepackage{booktabs}\\n')\n # info_out.write('\\\\usepackage{longtable}\\n')\n # T1 will ensure that symbols like '<' are formatted correctly\n info_out.write(\"\\\\usepackage[T1]{fontenc}\\n\")\n info_out.write('\\\\begin{document}')\n info_out.write(crit_df.to_latex(index=False, longtable=False,\n escape=True, multicolumn=False))\n info_out.write('\\end{document}\\n')\n info_out.close()\n else:\n crit_df.to_excel(out_file, index=False)\n\n else:\n print(\"No criteria for 
ouput.\")\n return True, [out_file]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef eqarea_magic(in_file='sites.txt', dir_path=\".\", input_dir_path=\"\",\n spec_file=\"specimens.txt\", samp_file=\"samples.txt\",\n site_file=\"sites.txt\", loc_file=\"locations.txt\",\n plot_by=\"all\", crd=\"g\", ignore_tilt=False,\n save_plots=True, fmt=\"svg\", contour=False, color_map=\"coolwarm\",\n plot_ell=\"\", n_plots=5, interactive=False):\n\n \"\"\"\n makes equal area projections from declination/inclination data\n\n Parameters\n ----------\n in_file : str, default \"sites.txt\"\n dir_path : str\n output directory, default \".\"\n input_dir_path : str\n input file directory (if different from dir_path), default \"\"\n spec_file : str\n input specimen file name, default \"specimens.txt\"\n samp_file: str\n input sample file name, default \"samples.txt\"\n site_file : str\n input site file name, default \"sites.txt\"\n loc_file : str\n input location file name, default \"locations.txt\"\n plot_by : str\n [spc, sam, sit, loc, all] (specimen, sample, site, location, all), default \"all\"\n crd : ['s','g','t'], coordinate system for plotting whereby:\n s : specimen coordinates, aniso_tile_correction = -1\n g : geographic coordinates, aniso_tile_correction = 0 (default)\n t : tilt corrected coordinates, aniso_tile_correction = 100\n ignore_tilt : bool\n default False. 
If True, data are unoriented (allows plotting of measurement dec/inc)\n save_plots : bool\n plot and save non-interactively, default True\n fmt : str\n [\"png\", \"svg\", \"pdf\", \"jpg\"], default \"svg\"\n contour : bool\n plot as color contour\n color_map : str\n color map for contour plotting, default \"coolwarm\"\n see matplotlib documentation for more options\n plot_ell : str\n [F,K,B,Be,Bv] plot Fisher, Kent, Bingham, Bootstrap ellipses or Bootstrap eigenvectors\n default \"\" plots none\n n_plots : int\n maximum number of plots to make, default 5\n if you want to make all possible plots, specify \"all\"\n interactive : bool, default False\n interactively plot and display for each specimen\n (this is best used on the command line or in the Python interpreter)\n\n\n Returns\n ---------\n type - Tuple : (True or False indicating if conversion was successful, file name(s) written)\n \"\"\"\n saved = []\n # parse out input/out directories\n input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)\n # initialize some variables\n verbose = pmagplotlib.verbose\n FIG = {} # plot dictionary\n FIG['eqarea'] = 1 # eqarea is figure 1\n pmagplotlib.plot_init(FIG['eqarea'], 5, 5)\n # get coordinate system\n if crd == \"s\":\n coord = \"-1\"\n elif crd == \"t\":\n coord = \"100\"\n else:\n coord = \"0\"\n # get item to plot by\n if plot_by == 'all':\n plot_key = 'all'\n elif plot_by == 'sit':\n plot_key = 'site'\n elif plot_by == 'sam':\n plot_key = 'sample'\n elif plot_by == 'spc':\n plot_key = 'specimen'\n else:\n plot_by = 'all'\n plot_key = 'all'\n # get distribution to plot ellipses/eigenvectors if desired\n\n if save_plots:\n verbose = False\n\n # set keys\n dec_key = 'dir_dec'\n inc_key = 'dir_inc'\n tilt_key = 'dir_tilt_correction'\n\n # create contribution\n fnames = {\"specimens\": spec_file, \"samples\": samp_file,\n 'sites': site_file, 'locations': loc_file}\n\n if not os.path.exists(pmag.resolve_file_name(in_file, input_dir_path)):\n print('-E- 
Could not find {}'.format(in_file))\n return False, []\n\n contribution = cb.Contribution(input_dir_path, custom_filenames=fnames,\n single_file=in_file)\n table_name = list(contribution.tables.keys())[0]\n contribution.add_magic_table(\"contribution\")\n\n # get contribution id if available for server plots\n if pmagplotlib.isServer:\n con_id = contribution.get_con_id()\n # try to propagate all names to measurement level\n try:\n contribution.propagate_location_to_samples()\n contribution.propagate_location_to_specimens()\n contribution.propagate_location_to_measurements()\n except KeyError as ex:\n pass\n\n # the object that contains the DataFrame + useful helper methods:\n data_container = contribution.tables[table_name]\n # the actual DataFrame:\n data = data_container.df\n plot_type = data_container.dtype\n\n if plot_key != \"all\" and plot_key not in data.columns:\n print(\"-E- You can't plot by {} with the data provided\".format(plot_key))\n return False, []\n\n # add tilt key into DataFrame columns if it isn't there already\n if tilt_key not in data.columns:\n data.loc[:, tilt_key] = None\n\n if verbose:\n print(len(data), ' records read from ', in_file)\n\n # find desired dec,inc data:\n dir_type_key = ''\n #\n # get plotlist if not plotting all records\n #\n plotlist = []\n if plot_key != \"all\":\n\n # return all where plot_key is not blank\n if plot_key not in data.columns:\n print('-E- Can\\'t plot by \"{}\". 
That header is not in infile: {}'.format(\n plot_key, in_file))\n return False, []\n plots = data[data[plot_key].notnull()]\n plotlist = plots[plot_key].unique() # grab unique values\n else:\n plotlist.append('All')\n\n if n_plots != \"all\":\n if len(plotlist) > n_plots:\n plotlist = plotlist[:n_plots]\n fignum = 0\n for plot in plotlist:\n fignum += 1\n FIG['eqarea'] = fignum\n pmagplotlib.plot_init(FIG['eqarea'], 5, 5)\n if plot_ell:\n dist = plot_ell.upper()\n # if dist type is unrecognized, use Fisher\n if dist not in ['F', 'K', 'B', 'BE', 'BV']:\n dist = 'F'\n if dist == \"BV\":\n fignum += 1\n FIG['bdirs'] = fignum\n pmagplotlib.plot_init(FIG['bdirs'], 5, 5)\n\n\n if verbose:\n print(plot)\n if plot == 'All':\n # plot everything at once\n plot_data = data\n else:\n # pull out only partial data\n plot_data = data[data[plot_key] == plot]\n\n # get location names for the data\n locs = []\n if 'location' in plot_data.columns:\n locs = plot_data['location'].dropna().unique()\n\n DIblock = []\n GCblock = []\n # SLblock, SPblock = [], []\n title = plot\n mode = 1\n\n if dec_key not in plot_data.columns:\n print(\"-W- No dec/inc data\")\n continue\n # get all records where dec & inc values exist\n plot_data = plot_data[plot_data[dec_key].notnull()\n & plot_data[inc_key].notnull()]\n if plot_data.empty:\n print(\"-W- No dec/inc data\")\n continue\n\n # get metadata for naming the plot file\n locations = str(data_container.get_name('location', df_slice=plot_data))\n site = str(data_container.get_name('site', df_slice=plot_data))\n sample = str(data_container.get_name('sample', df_slice=plot_data))\n specimen = str(data_container.get_name('specimen', df_slice=plot_data))\n\n # make sure method_codes is in plot_data\n if 'method_codes' not in plot_data.columns:\n plot_data['method_codes'] = ''\n\n # get data blocks\n # would have to ignore tilt to use measurement level data\n DIblock = data_container.get_di_block(df_slice=plot_data,\n tilt_corr=coord, excl=['DE-BFP'], 
ignore_tilt=ignore_tilt)\n\n if title == 'All':\n if len(locs):\n title = \", \".join(locs) + \" - {} {} plotted\".format(str(len(DIblock)), plot_type)\n else:\n title = \"{} {} plotted\".format(str(len(DIblock)), plot_type)\n #SLblock = [[ind, row['method_codes']] for ind, row in plot_data.iterrows()]\n # get great circles\n great_circle_data = data_container.get_records_for_code('DE-BFP', incl=True,\n use_slice=True, sli=plot_data)\n\n if len(great_circle_data) > 0:\n gc_cond = great_circle_data[tilt_key] == coord\n GCblock = [[float(row[dec_key]), float(row[inc_key])]\n for ind, row in great_circle_data[gc_cond].iterrows()]\n #SPblock = [[ind, row['method_codes']] for ind, row in great_circle_data[gc_cond].iterrows()]\n\n if len(DIblock) > 0:\n if not contour:\n pmagplotlib.plot_eq(FIG['eqarea'], DIblock, title)\n else:\n pmagplotlib.plot_eq_cont(\n FIG['eqarea'], DIblock, color_map=color_map)\n else:\n pmagplotlib.plot_net(FIG['eqarea'])\n if len(GCblock) > 0:\n for rec in GCblock:\n pmagplotlib.plot_circ(FIG['eqarea'], rec, 90., 'g')\n if len(DIblock) == 0 and len(GCblock) == 0:\n if verbose:\n print(\"no records for plotting\")\n fignum -= 1\n if 'bdirs' in FIG:\n fignum -= 1\n continue\n # sys.exit()\n if plot_ell:\n ppars = pmag.doprinc(DIblock) # get principal directions\n nDIs, rDIs, npars, rpars = [], [], [], []\n for rec in DIblock:\n angle = pmag.angle([rec[0], rec[1]], [\n ppars['dec'], ppars['inc']])\n if angle > 90.:\n rDIs.append(rec)\n else:\n nDIs.append(rec)\n if dist == 'B': # do on whole dataset\n etitle = \"Bingham confidence ellipse\"\n bpars = pmag.dobingham(DIblock)\n for key in list(bpars.keys()):\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (bpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (bpars[key]))\n npars.append(bpars['dec'])\n npars.append(bpars['inc'])\n npars.append(bpars['Zeta'])\n npars.append(bpars['Zdec'])\n npars.append(bpars['Zinc'])\n npars.append(bpars['Eta'])\n 
npars.append(bpars['Edec'])\n npars.append(bpars['Einc'])\n if dist == 'F':\n etitle = \"Fisher confidence cone\"\n if len(nDIs) > 2:\n fpars = pmag.fisher_mean(nDIs)\n for key in list(fpars.keys()):\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (fpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (fpars[key]))\n mode += 1\n npars.append(fpars['dec'])\n npars.append(fpars['inc'])\n npars.append(fpars['alpha95']) # Beta\n npars.append(fpars['dec'])\n isign = abs(fpars['inc']) / fpars['inc']\n npars.append(fpars['inc']-isign*90.) # Beta inc\n npars.append(fpars['alpha95']) # gamma\n npars.append(fpars['dec']+90.) # Beta dec\n npars.append(0.) # Beta inc\n if len(rDIs) > 2:\n fpars = pmag.fisher_mean(rDIs)\n if verbose:\n print(\"mode \", mode)\n for key in list(fpars.keys()):\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (fpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (fpars[key]))\n mode += 1\n rpars.append(fpars['dec'])\n rpars.append(fpars['inc'])\n rpars.append(fpars['alpha95']) # Beta\n rpars.append(fpars['dec'])\n isign = abs(fpars['inc']) / fpars['inc']\n rpars.append(fpars['inc']-isign*90.) # Beta inc\n rpars.append(fpars['alpha95']) # gamma\n rpars.append(fpars['dec']+90.) # Beta dec\n rpars.append(0.) 
# Beta inc\n if dist == 'K':\n etitle = \"Kent confidence ellipse\"\n if len(nDIs) > 3:\n kpars = pmag.dokent(nDIs, len(nDIs))\n if verbose:\n print(\"mode \", mode)\n for key in list(kpars.keys()):\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (kpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (kpars[key]))\n mode += 1\n npars.append(kpars['dec'])\n npars.append(kpars['inc'])\n npars.append(kpars['Zeta'])\n npars.append(kpars['Zdec'])\n npars.append(kpars['Zinc'])\n npars.append(kpars['Eta'])\n npars.append(kpars['Edec'])\n npars.append(kpars['Einc'])\n if len(rDIs) > 3:\n kpars = pmag.dokent(rDIs, len(rDIs))\n if verbose:\n print(\"mode \", mode)\n for key in list(kpars.keys()):\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (kpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (kpars[key]))\n mode += 1\n rpars.append(kpars['dec'])\n rpars.append(kpars['inc'])\n rpars.append(kpars['Zeta'])\n rpars.append(kpars['Zdec'])\n rpars.append(kpars['Zinc'])\n rpars.append(kpars['Eta'])\n rpars.append(kpars['Edec'])\n rpars.append(kpars['Einc'])\n else: # assume bootstrap\n if dist == 'BE':\n if len(nDIs) > 5:\n BnDIs = pmag.di_boot(nDIs)\n Bkpars = pmag.dokent(BnDIs, 1.)\n if verbose:\n print(\"mode \", mode)\n for key in list(Bkpars.keys()):\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (Bkpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (Bkpars[key]))\n mode += 1\n npars.append(Bkpars['dec'])\n npars.append(Bkpars['inc'])\n npars.append(Bkpars['Zeta'])\n npars.append(Bkpars['Zdec'])\n npars.append(Bkpars['Zinc'])\n npars.append(Bkpars['Eta'])\n npars.append(Bkpars['Edec'])\n npars.append(Bkpars['Einc'])\n if len(rDIs) > 5:\n BrDIs = pmag.di_boot(rDIs)\n Bkpars = pmag.dokent(BrDIs, 1.)\n if verbose:\n print(\"mode \", mode)\n for key in list(Bkpars.keys()):\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (Bkpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, 
' %i' % (Bkpars[key]))\n mode += 1\n rpars.append(Bkpars['dec'])\n rpars.append(Bkpars['inc'])\n rpars.append(Bkpars['Zeta'])\n rpars.append(Bkpars['Zdec'])\n rpars.append(Bkpars['Zinc'])\n rpars.append(Bkpars['Eta'])\n rpars.append(Bkpars['Edec'])\n rpars.append(Bkpars['Einc'])\n etitle = \"Bootstrapped confidence ellipse\"\n elif dist == 'BV':\n sym = {'lower': ['o', 'c'], 'upper': [\n 'o', 'g'], 'size': 3, 'edgecolor': 'face'}\n if len(nDIs) > 5:\n BnDIs = pmag.di_boot(nDIs)\n pmagplotlib.plot_eq_sym(\n FIG['bdirs'], BnDIs, 'Bootstrapped Eigenvectors', sym)\n if len(rDIs) > 5:\n BrDIs = pmag.di_boot(rDIs)\n if len(nDIs) > 5: # plot on existing plots\n pmagplotlib.plot_di_sym(FIG['bdirs'], BrDIs, sym)\n else:\n pmagplotlib.plot_eq(\n FIG['bdirs'], BrDIs, 'Bootstrapped Eigenvectors')\n if dist == 'B':\n if len(nDIs) > 3 or len(rDIs) > 3:\n pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], npars, 0)\n elif len(nDIs) > 3 and dist != 'BV':\n pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], npars, 0)\n if len(rDIs) > 3:\n pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], rpars, 0)\n elif len(rDIs) > 3 and dist != 'BV':\n pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], rpars, 0)\n\n for key in list(FIG.keys()):\n files = {}\n #if filename: # use provided filename\n # filename += '.' 
+ fmt\n if pmagplotlib.isServer: # use server plot naming convention\n if plot_key == 'all':\n filename = 'LO:_'+locations+'_SI:__SA:__SP:__CO:_'+crd+'_TY:_'+key+'_.'+fmt\n else:\n filename = 'LO:_'+locations+'_SI:_'+site+'_SA:_'+sample + \\\n '_SP:_'+str(specimen)+'_CO:_'+crd+'_TY:_'+key+'_.'+fmt\n elif plot_key == 'all':\n filename = 'all'\n if locs:\n loc_string = \"_\".join(\n [str(loc).replace(' ', '_') for loc in locs])\n filename += \"_\" + loc_string\n filename += \"_\" + crd + \"_\" + key\n filename += \".{}\".format(fmt)\n else: # use more readable naming convention\n filename = ''\n # fix this if plot_by is location, for example\n use_names = {'location': [locations], 'site': [locations, site],\n 'sample': [locations, site, sample],\n 'specimen': [locations, site, sample, specimen]}\n use = use_names[plot_key]\n use.extend([crd, key])\n # [locations, site, sample, specimen, crd, key]:\n for item in use:\n if item:\n item = item.replace(' ', '_')\n filename += item + '_'\n if filename.endswith('_'):\n filename = filename[:-1]\n filename += \".{}\".format(fmt)\n\n if not pmagplotlib.isServer:\n filename = os.path.join(dir_path, filename)\n files[key] = filename\n\n if pmagplotlib.isServer:\n titles = {'eqarea': 'Equal Area Plot'}\n FIG = pmagplotlib.add_borders(FIG, titles, con_id=con_id)\n saved_figs = pmagplotlib.save_plots(FIG, files)\n saved.extend(saved_figs)\n\n elif save_plots:\n saved_figs = pmagplotlib.save_plots(FIG, files, incl_directory=True)\n saved.extend(saved_figs)\n continue\n elif interactive:\n pmagplotlib.draw_figs(FIG)\n ans = input(\" S[a]ve to save plot, [q]uit, Return to continue: \")\n if ans == \"q\":\n return True, []\n if ans == \"a\":\n saved_figs = pmagplotlib.save_plots(FIG, files, incl_directory=True)\n saved.extend(saved_figs)\n continue\n return True, saved", "response": "This function makes an equal area projection from the declination and inclination data."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 
function for\nusing a MagIC format locations table to plot poles. Parameters ---------- loc_file : str, default \"locations.txt\" dir_path : str, default \".\" directory name to find loc_file in (if not included in loc_file) interactive : bool, default False if True, interactively plot and display (this is best used on the command line only) crd : str, default \"\" coordinate system [g, t] (geographic, tilt_corrected) sym : str, default \"ro\" symbol color and shape, default red circles (see matplotlib documentation for more options) symsize : int, default 40 symbol size rsym : str, default \"g^\" symbol for plotting reverse poles rsymsize : int, default 40 symbol size for reverse poles fmt : str, default \"pdf\" format for figures, [\"svg\", \"jpg\", \"pdf\", \"png\"] res : str, default \"c\" resolution [c, l, i, h] (crude, low, intermediate, high) proj : str, default \"ortho\" ortho = orthographic lcc = lambert conformal moll = mollweide merc = mercator flip : bool, default False if True, flip reverse poles to normal antipode anti : bool, default False if True, plot antipodes for each pole fancy : bool, default False if True, plot topography (not yet implemented) ell : bool, default False if True, plot ellipses ages : bool, default False if True, plot ages lat_0 : float, default 90. eyeball latitude lon_0 : float, default 0. 
eyeball longitude save_plots : bool, default True if True, create and save all requested plots", "response": "def polemap_magic(loc_file=\"locations.txt\", dir_path=\".\", interactive=False, crd=\"\",\n sym='ro', symsize=40, rsym='g^', rsymsize=40,\n fmt=\"pdf\", res=\"c\", proj=\"ortho\",\n flip=False, anti=False, fancy=False,\n ell=False, ages=False, lat_0=90., lon_0=0., save_plots=True):\n \"\"\"\n Use a MagIC format locations table to plot poles.\n\n Parameters\n ----------\n loc_file : str, default \"locations.txt\"\n dir_path : str, default \".\"\n directory name to find loc_file in (if not included in loc_file)\n interactive : bool, default False\n if True, interactively plot and display\n (this is best used on the command line only)\n crd : str, default \"\"\n coordinate system [g, t] (geographic, tilt_corrected)\n sym : str, default \"ro\"\n symbol color and shape, default red circles\n (see matplotlib documentation for more options)\n symsize : int, default 40\n symbol size\n rsym : str, default \"g^\"\n symbol for plotting reverse poles\n rsymsize : int, default 40\n symbol size for reverse poles\n fmt : str, default \"pdf\"\n format for figures, [\"svg\", \"jpg\", \"pdf\", \"png\"]\n res : str, default \"c\"\n resolution [c, l, i, h] (crude, low, intermediate, high)\n proj : str, default \"ortho\"\n ortho = orthographic\n lcc = lambert conformal\n moll = mollweide\n merc = mercator\n flip : bool, default False\n if True, flip reverse poles to normal antipode\n anti : bool, default False\n if True, plot antipodes for each pole\n fancy : bool, default False\n if True, plot topography (not yet implemented)\n ell : bool, default False\n if True, plot ellipses\n ages : bool, default False\n if True, plot ages\n lat_0 : float, default 90.\n eyeball latitude\n lon_0 : float, default 0.\n eyeball longitude\n save_plots : bool, default True\n if True, create and save all requested plots\n \"\"\"\n # initialize and format variables\n saved = []\n lats, lons = [], 
[]\n Pars = []\n dates, rlats, rlons = [], [], []\n polarities = []\n if interactive:\n save_plots = False\n full_path = pmag.resolve_file_name(loc_file, dir_path)\n dir_path, loc_file = os.path.split(full_path)\n # create MagIC contribution\n con = cb.Contribution(dir_path, single_file=loc_file)\n if not list(con.tables.keys()):\n print(\"-W- Couldn't read in data\")\n return False, \"Couldn't read in data\"\n FIG = {'map': 1}\n pmagplotlib.plot_init(FIG['map'], 6, 6)\n\n\n pole_container = con.tables['locations']\n pole_df = pole_container.df\n if 'pole_lat' not in pole_df.columns or 'pole_lon' not in pole_df.columns:\n print(\"-W- pole_lat and pole_lon are required columns to run polemap_magic.py\")\n return False, \"pole_lat and pole_lon are required columns to run polemap_magic.py\"\n # use records with pole_lat and pole_lon\n cond1, cond2 = pole_df['pole_lat'].notnull(), pole_df['pole_lon'].notnull()\n Results = pole_df[cond1 & cond2]\n # don't plot identical poles twice\n Results.drop_duplicates(subset=['pole_lat', 'pole_lon', 'location'], inplace=True)\n # use tilt correction if available\n # prioritize tilt-corrected poles\n if 'dir_tilt_correction' in Results.columns:\n if not crd:\n coords = Results['dir_tilt_correction'].unique()\n if 100. in coords:\n crd = 't'\n elif 0. 
in coords:\n crd = 'g'\n else:\n crd = ''\n coord_dict = {'g': 0, 't': 100}\n coord = coord_dict[crd] if crd else \"\"\n # filter results by dir_tilt_correction if available\n if (coord or coord == 0) and 'dir_tilt_correction' in Results.columns:\n Results = Results[Results['dir_tilt_correction'] == coord]\n # get location name and average ages\n loc_list = Results['location'].values\n locations = \":\".join(Results['location'].unique())\n if 'age' not in Results.columns and 'age_low' in Results.columns and 'age_high' in Results.columns:\n Results['age'] = Results['age_low']+0.5 * \\\n (Results['age_high']-Results['age_low'])\n if 'age' in Results.columns and ages:\n dates = Results['age'].unique()\n\n if not any(Results.index):\n print(\"-W- No poles could be plotted\")\n return False, \"No poles could be plotted\"\n\n # go through rows and extract data\n for ind, row in Results.iterrows():\n lat, lon = float(row['pole_lat']), float(row['pole_lon'])\n if 'dir_polarity' in row:\n polarities.append(row['dir_polarity'])\n if anti:\n lats.append(-lat)\n lon = lon + 180.\n if lon > 360:\n lon = lon - 360.\n lons.append(lon)\n elif not flip:\n lats.append(lat)\n lons.append(lon)\n elif flip:\n if lat < 0:\n rlats.append(-lat)\n lon = lon + 180.\n if lon > 360:\n lon = lon - 360\n rlons.append(lon)\n else:\n lats.append(lat)\n lons.append(lon)\n ppars = []\n ppars.append(lon)\n ppars.append(lat)\n ell1, ell2 = \"\", \"\"\n if 'pole_dm' in list(row.keys()) and row['pole_dm']:\n ell1 = float(row['pole_dm'])\n if 'pole_dp' in list(row.keys()) and row['pole_dp']:\n ell2 = float(row['pole_dp'])\n if 'pole_alpha95' in list(row.keys()) and row['pole_alpha95']:\n ell1, ell2 = float(row['pole_alpha95']), float(row['pole_alpha95'])\n if ell1 and ell2 and lons:\n ppars = []\n ppars.append(lons[-1])\n ppars.append(lats[-1])\n ppars.append(ell1)\n ppars.append(lons[-1])\n try:\n isign = abs(lats[-1]) / lats[-1]\n except ZeroDivisionError:\n isign = 1\n ppars.append(lats[-1] - isign * 
90.)\n ppars.append(ell2)\n ppars.append(lons[-1] + 90.)\n ppars.append(0.)\n Pars.append(ppars)\n\n locations = locations.strip(':')\n Opts = {'latmin': -90, 'latmax': 90, 'lonmin': 0., 'lonmax': 360.,\n 'lat_0': lat_0, 'lon_0': lon_0, 'proj': proj, 'sym': 'b+',\n 'symsize': 40, 'pltgrid': 0, 'res': res, 'boundinglat': 0.,\n 'edgecolor': 'face'}\n Opts['details'] = {'coasts': 1, 'rivers': 0, 'states': 0,\n 'countries': 0, 'ocean': 1, 'fancy': fancy}\n base_Opts = Opts.copy()\n\n # make the base map with a blue triangle at the pole\n pmagplotlib.plot_map(FIG['map'], [90.], [0.], Opts)\n\n #Opts['pltgrid'] = -1\n if proj == 'merc':\n Opts['pltgrid'] = 1\n Opts['sym'] = sym\n Opts['symsize'] = symsize\n if len(dates) > 0:\n Opts['names'] = dates\n if len(lats) > 0:\n pole_lats = []\n pole_lons = []\n for num, lat in enumerate(lats):\n lon = lons[num]\n if lat > 0:\n pole_lats.append(lat)\n pole_lons.append(lon)\n # plot the lats and lons of the poles\n pmagplotlib.plot_map(FIG['map'], pole_lats, pole_lons, Opts)\n\n # do reverse poles\n if len(rlats) > 0:\n reverse_Opts = Opts.copy()\n reverse_Opts['sym'] = rsym\n reverse_Opts['symsize'] = rsymsize\n reverse_Opts['edgecolor'] = 'black'\n # plot the lats and lons of the reverse poles\n pmagplotlib.plot_map(FIG['map'], rlats, rlons, reverse_Opts)\n\n Opts['names'] = []\n titles = {}\n files = {}\n\n if pmagplotlib.isServer:\n # plot each individual pole for the server\n for ind in range(len(lats)):\n lat = lats[ind]\n lon = lons[ind]\n polarity = \"\"\n if len(polarities) > ind:\n polarity = polarities[ind]\n polarity = \"_\" + polarity if polarity else \"\"\n location = loc_list[ind]\n FIG[\"map_{}\".format(ind)] = ind+2\n pmagplotlib.plot_init(FIG['map_{}'.format(ind)], 6, 6)\n pmagplotlib.plot_map(FIG['map_{}'.format(ind)], [90.], [0.], base_Opts)\n pmagplotlib.plot_map(ind+2, [lat], [lon], Opts)\n titles[\"map_{}\".format(ind)] = location\n if crd:\n fname = \"LO:_{}{}_TY:_POLE_map_{}.{}\".format(location, polarity, crd, 
fmt)\n fname_short = \"LO:_{}{}_TY:_POLE_map_{}\".format(location, polarity, crd)\n else:\n fname = \"LO:_{}{}_TY:_POLE_map.{}\".format(location, polarity, fmt)\n fname_short = \"LO:_{}{}_TY:_POLE_map\".format(location, polarity)\n\n # don't allow identically named files\n if files:\n file_values = files.values()\n file_values_short = [fname.rsplit('.')[0] for fname in file_values]\n if fname_short in file_values_short:\n for val in [str(n) for n in range(1, 10)]:\n fname = fname_short + \"_{}.\".format(val) + fmt\n if fname not in file_values:\n break\n files[\"map_{}\".format(ind)] = fname\n\n # truncate location names so that ultra long filenames are not created\n if len(locations) > 50:\n locations = locations[:50]\n if pmagplotlib.isServer:\n # use server plot naming convention\n con_id = ''\n if 'contribution' in con.tables:\n # try to get contribution id\n if 'id' in con.tables['contribution'].df.columns:\n con_id = con.tables['contribution'].df.iloc[0]['id']\n files['map'] = 'MC:_{}_TY:_POLE_map_{}.{}'.format(con_id, crd, fmt)\n else:\n # no contribution id available\n files['map'] = 'LO:_' + locations + '_TY:_POLE_map_{}.{}'.format(crd, fmt)\n else:\n # use readable naming convention for non-database use\n files['map'] = '{}_POLE_map_{}.{}'.format(locations, crd, fmt)\n\n #\n if interactive and (not set_env.IS_WIN):\n pmagplotlib.draw_figs(FIG)\n if ell: # add ellipses if desired.\n Opts['details'] = {'coasts': 0, 'rivers': 0, 'states': 0,\n 'countries': 0, 'ocean': 0, 'fancy': fancy}\n Opts['pltgrid'] = -1 # turn off meridian replotting\n Opts['symsize'] = 2\n Opts['sym'] = 'g-'\n for ppars in Pars:\n if ppars[2] != 0:\n PTS = pmagplotlib.plot_ell(FIG['map'], ppars, 'g.', 0, 0)\n elats, elons = [], []\n for pt in PTS:\n elons.append(pt[0])\n elats.append(pt[1])\n # make the base map with a blue triangle at the pole\n pmagplotlib.plot_map(FIG['map'], elats, elons, Opts)\n if interactive and (not set_env.IS_WIN):\n pmagplotlib.draw_figs(FIG)\n\n if 
pmagplotlib.isServer:\n black = '#000000'\n purple = '#800080'\n titles['map'] = 'LO:_' + locations + '_POLE_map'\n con_id = ''\n if 'contribution' in con.tables:\n if 'id' in con.tables['contribution'].df.columns:\n con_id = con.tables['contribution'].df.iloc[0]['id']\n\n loc_string = \"\"\n if 'locations' in con.tables:\n num_locs = len(con.tables['locations'].df.index.unique())\n loc_string = \"{} location{}\".format(num_locs, 's' if num_locs > 1 else '')\n num_lats = len([lat for lat in lats if lat > 0])\n num_rlats = len(rlats)\n npole_string = \"\"\n rpole_string = \"\"\n if num_lats:\n npole_string = \"{} normal \".format(num_lats) #, 's' if num_lats > 1 else '')\n if num_rlats:\n rpole_string = \"{} reverse\".format(num_rlats)\n if num_lats + num_rlats > 1:\n pole_string = \"poles\"\n elif num_lats + num_rlats == 0:\n pole_string = \"\"\n else:\n pole_string = \"pole\"\n title = \"MagIC contribution {}\\n {} {}{} {}\".format(con_id, loc_string, npole_string, rpole_string, pole_string)\n titles['map'] = title.replace(' ', ' ')\n FIG = pmagplotlib.add_borders(FIG, titles, black, purple, con_id)\n saved = pmagplotlib.save_plots(FIG, files)\n elif interactive:\n pmagplotlib.draw_figs(FIG)\n ans = input(\" S[a]ve to save plot, Return to quit: \")\n if ans == \"a\":\n saved = pmagplotlib.save_plots(FIG, files)\n else:\n print(\"Good bye\")\n elif save_plots:\n saved = pmagplotlib.save_plots(FIG, files)\n\n return True, saved"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef chi_magic(infile=\"measurements.txt\", dir_path=\".\", experiments=\"\",\n fmt=\"svg\", save_plots=True, interactive=False, contribution=None):\n \"\"\"\n Parameters\n ----------\n infile : str, default \"measurements.txt\"\n measurement infile\n dir_path : str, default \".\"\n input directory\n experiments : str, default \"\"\n experiment name to plot\n fmt : str, default \"svg\"\n format for figures, [\"svg\", \"jpg\", \"pdf\", 
\"png\"]\n save_plots : bool, default True\n save figures\n interactive : bool, default False\n if True, interactively plot and display\n (this is best used on the command line only)\n contribution : cb.Contribution, default None\n if provided, use Contribution object instead of reading in\n data from files\n\n Returns\n ---------\n (status, output_files) - Tuple : (True or False indicating if conversion was successful, file name(s) written)\n \"\"\"\n saved = []\n if contribution:\n chi_data_all = contribution.tables['measurements'].df\n else:\n infile = pmag.resolve_file_name(infile, dir_path)\n chi_data_all = pd.read_csv(infile, sep='\\t', header=1)\n if not experiments:\n try:\n experiments = chi_data_all.experiment.unique()\n except Exception as ex:\n print(ex)\n experiments = [\"all\"]\n else:\n experiments = [experiments]\n\n plotnum = 0\n figs = {}\n fnames = {}\n for exp in experiments:\n if exp == \"all\":\n chi_data = chi_data_all\n else:\n chi_data = chi_data_all[chi_data_all.experiment == exp]\n if len(chi_data) <= 1:\n print('Not enough data to plot {}'.format(exp))\n continue\n\n plotnum += 1\n if not save_plots:\n pmagplotlib.plot_init(plotnum, 5, 5) # set up plot\n else:\n plt.figure(plotnum)\n figs[str(plotnum)] = plotnum\n fnames[str(plotnum)] = exp + '_temperature.{}'.format(fmt)\n\n # get arrays of available temps, frequencies and fields\n Ts = np.sort(chi_data.meas_temp.unique())\n Fs = np.sort(chi_data.meas_freq.unique())\n Bs = np.sort(chi_data.meas_field_ac.unique())\n\n # plot chi versus temperature at constant field\n b = Bs.max()\n for num, f in enumerate(Fs):\n this_f = chi_data[chi_data.meas_freq == f]\n this_f = this_f[this_f.meas_field_ac == b]\n plt.plot(this_f.meas_temp, 1e6*this_f.susc_chi_volume,\n label='%i' % (f)+' Hz')\n plt.legend()\n plt.xlabel('Temperature (K)')\n plt.ylabel('$\\chi$ ($\\mu$SI)')\n plt.title('B = '+'%7.2e' % (b) + ' T')\n\n plotnum += 1\n figs[str(plotnum)] = plotnum\n fnames[str(plotnum)] = exp + 
'_frequency.{}'.format(fmt)\n\n if not save_plots:\n pmagplotlib.plot_init(plotnum, 5, 5) # set up plot\n else:\n plt.figure(plotnum)\n ## plot chi versus frequency at constant B\n b = Bs.max()\n t = Ts.min()\n this_t = chi_data[chi_data.meas_temp == t]\n this_t = this_t[this_t.meas_field_ac == b]\n plt.semilogx(this_t.meas_freq, 1e6 *\n this_t.susc_chi_volume, label='%i' % (t)+' K')\n plt.legend()\n plt.xlabel('Frequency (Hz)')\n plt.ylabel('$\\chi$ ($\\mu$SI)')\n plt.title('B = '+'%7.2e' % (b) + ' T')\n\n if interactive:\n pmagplotlib.draw_figs(figs)\n ans = input(\n \"enter s[a]ve to save files, [return] to quit \")\n if ans == 'a':\n saved.extend(pmagplotlib.save_plots(figs, fnames))\n else:\n return True, []\n elif save_plots:\n saved.extend(pmagplotlib.save_plots(figs, fnames))\n\n return True, saved", "response": "This function plots magnetic susceptibility (chi) versus temperature and versus frequency for each experiment in a MagIC measurements file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef quick_hyst(dir_path=\".\", meas_file=\"measurements.txt\", save_plots=True,\n interactive=False, fmt=\"png\", specimen=\"\", verbose=True, n_plots=10,\n contribution=None):\n \"\"\"\n makes specimen plots of hysteresis data\n\n Parameters\n ----------\n dir_path : str, default \".\"\n input directory\n meas_file : str, default \"measurements.txt\"\n name of MagIC measurement file\n save_plots : bool, default True\n save figures\n interactive : bool, default False\n if True, interactively plot and display\n (this is best used on the command line only)\n fmt : str, default \"png\"\n format for figures, [\"svg\", \"jpg\", \"pdf\", \"png\"]\n specimen : str, default \"\"\n specific specimen to plot\n verbose : bool, default True\n if True, print more verbose output\n\n Returns\n ---------\n Tuple : (True or False indicating if conversion was successful, output file name(s) written)\n \"\"\"\n\n if contribution is None:\n con = 
cb.Contribution(dir_path, read_tables=['measurements'],\n custom_filenames={'measurements': meas_file})\n else:\n con = contribution\n # get as much name data as possible (used for naming plots)\n if 'measurements' not in con.tables:\n print(\"-W- No measurement file found\")\n return False, []\n con.propagate_location_to_measurements()\n\n if 'measurements' not in con.tables:\n print(main.__doc__)\n print('bad file')\n return False, []\n meas_container = con.tables['measurements']\n #meas_df = meas_container.df\n\n #\n # initialize some variables\n # define figure numbers for hyst,deltaM,DdeltaM curves\n saved = []\n HystRecs = []\n HDD = {}\n HDD['hyst'] = 1\n pmagplotlib.plot_init(HDD['hyst'], 5, 5)\n #\n # get list of unique experiment names and specimen names\n #\n sids = []\n hyst_data = meas_container.get_records_for_code('LP-HYS')\n #experiment_names = hyst_data['experiment_name'].unique()\n if not len(hyst_data):\n print(\"-W- No hysteresis data found\")\n return False, []\n if 'specimen' not in hyst_data.columns:\n print('-W- No specimen names in measurements data, cannot complete quick_hyst.py')\n return False, []\n sids = hyst_data['specimen'].unique()\n\n # if 'treat_temp' is provided, use that value, otherwise assume 300\n hyst_data['treat_temp'].where(\n hyst_data['treat_temp'].notnull(), '300', inplace=True)\n # start at first specimen, or at provided specimen ('-spc')\n k = 0\n if specimen:\n try:\n print(sids)\n k = list(sids).index(specimen)\n except ValueError:\n print('-W- No specimen named: {}.'.format(specimen))\n print('-W- Please provide a valid specimen name')\n return False, []\n intlist = ['magn_moment', 'magn_volume', 'magn_mass']\n\n if len(sids) > n_plots:\n sids = sids[:n_plots]\n\n while k < len(sids):\n locname, site, sample, synth = '', '', '', ''\n s = sids[k]\n if verbose:\n print(s, k + 1, 'out of ', len(sids))\n # B, M for hysteresis, Bdcd,Mdcd for irm-dcd data\n B, M = [], []\n # get all measurements for this specimen\n spec 
= hyst_data[hyst_data['specimen'] == s]\n # get names\n if 'location' in spec:\n locname = spec['location'].iloc[0]\n if 'site' in spec:\n site = spec['site'].iloc[0]\n if 'sample' in spec:\n sample = spec['sample'].iloc[0]\n # get all records with non-blank values in any intlist column\n # find intensity data\n for int_column in intlist:\n if int_column in spec.columns:\n int_col = int_column\n break\n meas_data = spec[spec[int_col].notnull()]\n if len(meas_data) == 0:\n break\n #\n c = ['k-', 'b-', 'c-', 'g-', 'm-', 'r-', 'y-']\n cnum = 0\n Temps = []\n xlab, ylab, title = '', '', ''\n Temps = meas_data['treat_temp'].unique()\n for t in Temps:\n print('working on t: ', t)\n t_data = meas_data[meas_data['treat_temp'] == t]\n m = int_col\n B = t_data['meas_field_dc'].astype(float).values\n M = t_data[m].astype(float).values\n # now plot the hysteresis curve(s)\n #\n if len(B) > 0:\n B = np.array(B)\n M = np.array(M)\n if t == Temps[-1]:\n xlab = 'Field (T)'\n ylab = m\n title = 'Hysteresis: ' + s\n if t == Temps[0]:\n pmagplotlib.clearFIG(HDD['hyst'])\n pmagplotlib.plot_xy(\n HDD['hyst'], B, M, sym=c[cnum], xlab=xlab, ylab=ylab, title=title)\n pmagplotlib.plot_xy(HDD['hyst'], [\n 1.1*B.min(), 1.1*B.max()], [0, 0], sym='k-', xlab=xlab, ylab=ylab, title=title)\n pmagplotlib.plot_xy(HDD['hyst'], [0, 0], [\n 1.1*M.min(), 1.1*M.max()], sym='k-', xlab=xlab, ylab=ylab, title=title)\n if not save_plots and not set_env.IS_WIN:\n pmagplotlib.draw_figs(HDD)\n cnum += 1\n if cnum == len(c):\n cnum = 0\n #\n files = {}\n if save_plots:\n if specimen != \"\":\n s = specimen\n for key in list(HDD.keys()):\n if pmagplotlib.isServer:\n if synth == '':\n files[key] = \"LO:_\"+locname+'_SI:_'+site + \\\n '_SA:_'+sample+'_SP:_'+s+'_TY:_'+key+'_.'+fmt\n else:\n files[key] = 'SY:_'+synth+'_TY:_'+key+'_.'+fmt\n else:\n if synth == '':\n filename = ''\n for item in [locname, site, sample, s, key]:\n if item:\n item = item.replace(' ', '_')\n filename += item + '_'\n if 
filename.endswith('_'):\n filename = filename[:-1]\n filename += \".{}\".format(fmt)\n files[key] = filename\n else:\n files[key] = \"{}_{}.{}\".format(synth, key, fmt)\n\n pmagplotlib.save_plots(HDD, files)\n saved.extend([value for value in files.values()])\n if specimen:\n return True, saved\n if interactive:\n pmagplotlib.draw_figs(HDD)\n ans = input(\n \"S[a]ve plots, [s]pecimen name, [q]uit, to continue\\n \")\n if ans == \"a\":\n files = {}\n for key in list(HDD.keys()):\n if pmagplotlib.isServer: # use server plot naming convention\n locname = locname if locname else \"\"\n site = site if site else \"\"\n sample = sample if sample else \"\"\n files[key] = \"LO:_\"+locname+'_SI:_'+site + \\\n '_SA:_'+sample+'_SP:_'+s+'_TY:_'+key+'_.'+fmt\n else: # use more readable plot naming convention\n filename = ''\n for item in [locname, site, sample, s, key]:\n if item:\n item = item.replace(' ', '_')\n filename += item + '_'\n if filename.endswith('_'):\n filename = filename[:-1]\n filename += \".{}\".format(fmt)\n files[key] = filename\n\n pmagplotlib.save_plots(HDD, files)\n saved.extend([value for value in files.values()])\n if ans == '':\n k += 1\n if ans == \"p\":\n del HystRecs[-1]\n k -= 1\n if ans == 'q':\n print(\"Good bye\")\n return True, []\n if ans == 's':\n keepon = 1\n specimen = input(\n 'Enter desired specimen name (or first part there of): ')\n while keepon == 1:\n try:\n k = list(sids).index(specimen)\n keepon = 0\n except ValueError:\n tmplist = []\n for qq in range(len(sids)):\n if specimen in sids[qq]:\n tmplist.append(sids[qq])\n print(specimen, \" not found, but this was: \")\n print(tmplist)\n specimen = input('Select one or try again\\n ')\n k = list(sids).index(specimen)\n else:\n k += 1\n if not len(B):\n if verbose:\n print('skipping this one - no hysteresis data')\n k += 1\n return True, saved", "response": "This function will take a directory path and a measurement file and plot the hysteresis data."} {"SOURCE": "codesearchnet", 
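Outside the server branch, quick_hyst builds plot file names by joining the non-empty name parts with underscores. A standalone sketch of that naming step (the helper name `plot_filename` is ours, not pmagpy's):

```python
def plot_filename(parts, fmt="png"):
    # Join the non-empty name parts (location, site, sample, specimen,
    # plot type) with underscores, replacing spaces, as the non-server
    # naming branch above does. (Helper name is ours, not pmagpy's.)
    filename = ''
    for item in parts:
        if item:
            filename += item.replace(' ', '_') + '_'
    if filename.endswith('_'):
        filename = filename[:-1]
    return filename + '.' + fmt
```

For example, `plot_filename(['loc 1', '', 'samp1', 'spec1', 'hyst'])` yields `loc_1_samp1_spec1_hyst.png`, silently skipping the blank site name.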
"instruction": "How would you code a function in Python 3 to\nmake a map of vgps and a95/dp,dm for site means in a sites table Parameters ---------- dir_path : str, default \".\" input directory path results_file : str, default \"sites.txt\" name of MagIC format sites file crd : str, default \"\" coordinate system [g, t] (geographic, tilt_corrected) sym : str, default \"ro\" symbol color and shape, default red circles (see matplotlib documentation for more color/shape options) size : int, default 8 symbol size rsym : str, default \"g^\" symbol for plotting reverse poles (see matplotlib documentation for more color/shape options) rsize : int, default 8 symbol size for reverse poles fmt : str, default \"pdf\" format for figures, [\"svg\", \"jpg\", \"pdf\", \"png\"] res : str, default \"c\" resolution [c, l, i, h] (crude, low, intermediate, high) proj : str, default \"ortho\" ortho = orthographic lcc = lambert conformal moll = molweide merc = mercator flip : bool, default False if True, flip reverse poles to normal antipode anti : bool, default False if True, plot antipodes for each pole fancy : bool, default False if True, plot topography (not yet implemented) ell : bool, default False if True, plot ellipses ages : bool, default False if True, plot ages lat_0 : float, default 0. eyeball latitude lon_0 : float, default 0. 
eyeball longitude save_plots : bool, default True if True, create and save all requested plots interactive : bool, default False if True, interactively plot and display (this is best used on the command line only) Returns --------- (status, output_files) - Tuple : (True or False indicating if conversion was sucessful, file name(s) written)", "response": "def vgpmap_magic(dir_path=\".\", results_file=\"sites.txt\", crd=\"\",\n sym='ro', size=8, rsym=\"g^\", rsize=8,\n fmt=\"pdf\", res=\"c\", proj=\"ortho\",\n flip=False, anti=False, fancy=False,\n ell=False, ages=False, lat_0=0, lon_0=0,\n save_plots=True, interactive=False, contribution=None):\n \"\"\"\n makes a map of vgps and a95/dp,dm for site means in a sites table\n\n Parameters\n ----------\n dir_path : str, default \".\"\n input directory path\n results_file : str, default \"sites.txt\"\n name of MagIC format sites file\n crd : str, default \"\"\n coordinate system [g, t] (geographic, tilt_corrected)\n sym : str, default \"ro\"\n symbol color and shape, default red circles\n (see matplotlib documentation for more color/shape options)\n size : int, default 8\n symbol size\n rsym : str, default \"g^\"\n symbol for plotting reverse poles\n (see matplotlib documentation for more color/shape options)\n rsize : int, default 8\n symbol size for reverse poles\n fmt : str, default \"pdf\"\n format for figures, [\"svg\", \"jpg\", \"pdf\", \"png\"]\n res : str, default \"c\"\n resolution [c, l, i, h] (crude, low, intermediate, high)\n proj : str, default \"ortho\"\n ortho = orthographic\n lcc = lambert conformal\n moll = molweide\n merc = mercator\n flip : bool, default False\n if True, flip reverse poles to normal antipode\n anti : bool, default False\n if True, plot antipodes for each pole\n fancy : bool, default False\n if True, plot topography (not yet implemented)\n ell : bool, default False\n if True, plot ellipses\n ages : bool, default False\n if True, plot ages\n lat_0 : float, default 0.\n eyeball latitude\n 
lon_0 : float, default 0.\n eyeball longitude\n save_plots : bool, default True\n if True, create and save all requested plots\n interactive : bool, default False\n if True, interactively plot and display\n (this is best used on the command line only)\n\n Returns\n ---------\n (status, output_files) - Tuple : (True or False indicating if conversion was sucessful, file name(s) written)\n\n \"\"\"\n coord_dict = {'g': 0, 't': 100}\n coord = coord_dict[crd] if crd else \"\"\n if contribution is None:\n con = cb.Contribution(dir_path, single_file=results_file)\n else:\n con = contribution\n if not list(con.tables.keys()):\n print(\"-W - Couldn't read in data\")\n return False, []\n if 'sites' not in con.tables:\n print(\"-W - No sites data\")\n return False, []\n\n FIG = {'map': 1}\n pmagplotlib.plot_init(FIG['map'], 6, 6)\n # read in sites file\n lats, lons = [], []\n Pars = []\n dates, rlats, rlons = [], [], []\n\n site_container = con.tables['sites']\n site_container.front_and_backfill(['location'])\n site_df = site_container.df\n # use records with vgp_lat and vgp_lon\n if 'vgp_lat' in site_df.columns and 'vgp_lon' in site_df.columns:\n cond1, cond2 = site_df['vgp_lat'].notnull(), site_df['vgp_lon'].notnull()\n else:\n print ('nothing to plot')\n sys.exit()\n Results = site_df[cond1 & cond2]\n # use tilt correction\n if coord and 'dir_tilt_correction' in Results.columns:\n Results = Results[Results['dir_tilt_correction'] == coord]\n # get location name and average ages\n locs = Results['location'].dropna().unique()\n if len(locs):\n location = \":\".join(Results['location'].unique())\n else:\n location = \"\"\n if 'age' in Results.columns and ages == 1:\n dates = Results['age'].unique()\n\n # go through rows and extract data\n for ind, row in Results.iterrows():\n try:\n lat, lon = float(row['vgp_lat']), float(row['vgp_lon'])\n except ValueError:\n lat = float(str(row['vgp_lat']).replace(' ', '').translate({0x2c: '.', 0xa0: None, 0x2212: '-'}))\n lon = 
float(str(row['vgp_lon']).replace(' ', '').translate({0x2c: '.', 0xa0: None, 0x2212: '-'}))\n if anti == 1:\n lats.append(-lat)\n lon = lon + 180.\n if lon > 360:\n lon = lon - 360.\n lons.append(lon)\n elif flip == 0:\n lats.append(lat)\n lons.append(lon)\n elif flip == 1:\n if lat < 0:\n rlats.append(-lat)\n lon = lon + 180.\n if lon > 360:\n lon = lon - 360\n rlons.append(lon)\n else:\n lats.append(lat)\n lons.append(lon)\n\n ppars = []\n ppars.append(lon)\n ppars.append(lat)\n ell1, ell2 = \"\", \"\"\n if 'vgp_dm' in list(row.keys()) and row['vgp_dm']:\n ell1 = float(row['vgp_dm'])\n if 'vgp_dp' in list(row.keys()) and row['vgp_dp']:\n ell2 = float(row['vgp_dp'])\n if 'vgp_alpha95' in list(row.keys()) and (row['vgp_alpha95'] or row['vgp_alpha95'] == 0):\n ell1, ell2 = float(row['vgp_alpha95']), float(row['vgp_alpha95'])\n if ell1 and ell2:\n ppars = []\n ppars.append(lons[-1])\n ppars.append(lats[-1])\n ppars.append(ell1)\n ppars.append(lons[-1])\n try:\n isign = abs(lats[-1]) / lats[-1]\n except ZeroDivisionError:\n isign = 1\n ppars.append(lats[-1] - isign * 90.)\n ppars.append(ell2)\n ppars.append(lons[-1] + 90.)\n ppars.append(0.)\n Pars.append(ppars)\n\n location = location.strip(':')\n Opts = {'latmin': -90, 'latmax': 90, 'lonmin': 0., 'lonmax': 360.,\n 'lat_0': lat_0, 'lon_0': lon_0, 'proj': proj, 'sym': 'bs',\n 'symsize': 3, 'pltgrid': 0, 'res': res, 'boundinglat': 0.}\n Opts['details'] = {'coasts': 1, 'rivers': 0, 'states': 0,\n 'countries': 0, 'ocean': 1, 'fancy': fancy}\n # make the base map with a blue triangle at the pole\n pmagplotlib.plot_map(FIG['map'], [90.], [0.], Opts)\n Opts['pltgrid'] = -1\n Opts['sym'] = sym\n Opts['symsize'] = size\n if len(dates) > 0:\n Opts['names'] = dates\n if len(lats) > 0:\n # add the lats and lons of the poles\n pmagplotlib.plot_map(FIG['map'], lats, lons, Opts)\n Opts['names'] = []\n if len(rlats) > 0:\n Opts['sym'] = rsym\n Opts['symsize'] = rsize\n # add the lats and lons of the poles\n 
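The flip/anti branches above move a pole to its antipode by negating the latitude and rotating the longitude 180°, wrapping back into 0–360. In isolation (the function name is ours):

```python
def flip_to_antipode(lat, lon):
    # Negate the latitude and rotate the longitude by 180 degrees,
    # wrapping back into the 0-360 range -- the same arithmetic as
    # the flip/anti branches above. (Function name is ours.)
    lat = -lat
    lon = lon + 180.
    if lon > 360:
        lon = lon - 360.
    return lat, lon
```

So a reverse pole at (−60°, 10°) plots at its normal antipode (60°, 190°).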
pmagplotlib.plot_map(FIG['map'], rlats, rlons, Opts)\n if not save_plots and not set_env.IS_WIN:\n pmagplotlib.draw_figs(FIG)\n if ell == 1: # add ellipses if desired.\n Opts['details'] = {'coasts': 0, 'rivers': 0, 'states': 0,\n 'countries': 0, 'ocean': 0, 'fancy': fancy}\n Opts['pltgrid'] = -1 # turn off meridian replotting\n Opts['symsize'] = 2\n Opts['sym'] = 'g-'\n for ppars in Pars:\n if ppars[2] != 0:\n PTS = pmagplotlib.plot_ell(FIG['map'], ppars, 'g.', 0, 0)\n elats, elons = [], []\n for pt in PTS:\n elons.append(pt[0])\n elats.append(pt[1])\n # make the base map with a blue triangle at the pole\n pmagplotlib.plot_map(FIG['map'], elats, elons, Opts)\n if not save_plots and not set_env.IS_WIN:\n pmagplotlib.draw_figs(FIG)\n files = {}\n for key in list(FIG.keys()):\n if pmagplotlib.isServer: # use server plot naming convention\n files[key] = 'LO:_' + location + '_TY:_VGP_map.' + fmt\n con.add_magic_table('contribution')\n con_id = con.get_con_id()\n if con_id:\n files[key] = 'MC:_' + str(con_id) + '_' + files[key]\n else: # use more readable naming convention\n files[key] = '{}_VGP_map.{}'.format(location, fmt)\n\n if pmagplotlib.isServer:\n black = '#000000'\n purple = '#800080'\n titles = {}\n titles['map'] = location + ' VGP map'\n FIG = pmagplotlib.add_borders(FIG, titles, black, purple)\n pmagplotlib.save_plots(FIG, files)\n elif interactive:\n pmagplotlib.draw_figs(FIG)\n ans = input(\" S[a]ve to save plot, Return to quit: \")\n if ans == \"a\":\n pmagplotlib.save_plots(FIG, files)\n return True, files.values()\n else:\n print(\"Good bye\")\n return True, []\n elif save_plots:\n pmagplotlib.save_plots(FIG, files)\n return True, files.values()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nfunction that plots a histogram of the data in infile.", "response": "def histplot(infile=\"\", data=(), outfile=\"\",\n xlab='x', binsize=False, norm=1,\n fmt='svg', save_plots=True, interactive=False):\n \"\"\"\n makes histograms for 
data\n\n Parameters\n ----------\n infile : str, default \"\"\n input file name\n format: single variable\n data : list-like, default ()\n list/array of values to plot if infile is not provided\n outfile : str, default \"\"\n name for plot, if not provided defaults to hist.FMT\n xlab : str, default 'x'\n label for x axis\n binsize : int, default False\n desired binsize. if not specified, an appropriate binsize will be calculated.\n norm : int, default 1\n 1: norm, 0: don't norm, -1: show normed and non-normed axes\n fmt : str, default \"svg\"\n format for figures, [\"svg\", \"jpg\", \"pdf\", \"png\"]\n save_plots : bool, default True\n if True, create and save all requested plots\n interactive : bool, default False\n interactively plot and display\n (this is best used on the command line only)\n \"\"\"\n # set outfile name\n if outfile:\n fmt = \"\"\n else:\n outfile = 'hist.'+fmt\n # read in data from infile or use data argument\n if os.path.exists(infile):\n D = np.loadtxt(infile)\n else:\n D = np.array(data)\n try:\n if not len(D):\n print('-W- No data found')\n return False, []\n except ValueError:\n pass\n fig = pmagplotlib.plot_init(1, 8, 7)\n try:\n len(D)\n except TypeError:\n D = np.array([D])\n if len(D) < 5:\n print(\"-W- Not enough points to plot histogram ({} point(s) provided, 5 required)\".format(len(D)))\n return False, []\n\n # if binsize not provided, calculate reasonable binsize\n if not binsize:\n binsize = int(np.around(1 + 3.22 * np.log(len(D))))\n binsize = int(binsize)\n Nbins = int(len(D) / binsize)\n ax = fig.add_subplot(111)\n if norm == 1:\n print('normalizing')\n n, bins, patches = ax.hist(\n D, bins=Nbins, facecolor='#D3D3D3', histtype='stepfilled', color='black', density=True)\n ax.set_ylabel('Frequency')\n elif norm == 0:\n print('not normalizing')\n n, bins, patches = ax.hist(\n D, bins=Nbins, facecolor='#D3D3D3', histtype='stepfilled', color='black', density=False)\n ax.set_ylabel('Number')\n elif norm == -1:\n #print('trying 
twin')\n n, bins, patches = ax.hist(\n D, bins=Nbins, facecolor='#D3D3D3', histtype='stepfilled', color='black', density=True)\n ax.set_ylabel('Frequency')\n ax2 = ax.twinx()\n n, bins, patches = ax2.hist(\n D, bins=Nbins, facecolor='#D3D3D3', histtype='stepfilled', color='black', density=False)\n ax2.set_ylabel('Number', rotation=-90)\n plt.axis([D.min(), D.max(), 0, n.max()+.1*n.max()])\n ax.set_xlabel(xlab)\n name = 'N = ' + str(len(D))\n plt.title(name)\n if interactive:\n pmagplotlib.draw_figs({1: 'hist'})\n p = input('s[a]ve to save plot, [q]uit to exit without saving ')\n if p != 'a':\n return True, []\n plt.savefig(outfile)\n print('plot saved in ', outfile)\n return True, [outfile]\n if pmagplotlib.isServer:\n pmagplotlib.add_borders({'hist': 1}, {'hist': 'Intensity Histogram'})\n if save_plots:\n plt.savefig(outfile)\n print('plot saved in ', outfile)\n return True, [outfile]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_fits(self, fit_name):\n '''USE PARSE_ALL_FITS unless otherwise necessary\n Isolate fits by the name of the fit; we also set 'specimen_tilt_correction' to zero in order\n to only include data in geographic coordinates - THIS NEEDS TO BE GENERALIZED\n '''\n fits = self.fits.loc[self.fits.specimen_comp_name ==\n fit_name].loc[self.fits.specimen_tilt_correction == 0]\n fits.reset_index(inplace=True)\n means = self.means.loc[self.means.site_comp_name ==\n fit_name].loc[self.means.site_tilt_correction == 0]\n means.reset_index(inplace=True)\n mean_name = str(fit_name) + \"_mean\"\n setattr(self, fit_name, fits)\n setattr(self, mean_name, means)", "response": "USE PARSE_ALL_FITS unless otherwise necessary\n Isolate fits by the name of the fit ; we also set specimen_tilt_correction to zero in order\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main():\n #\n # initialize variables\n #\n #\n #\n dir_path='.'\n if \"-WD\" 
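When no binsize is passed, histplot derives one from the sample count. Isolated, the rule is (the `1 + 3.22 * ln(n)` constant is taken straight from the code above):

```python
import numpy as np

def default_binsize(n):
    # Sturges-style rule as used by histplot above; note it takes the
    # *natural* log with a 3.22 multiplier rather than log10 with 3.322.
    return int(np.around(1 + 3.22 * np.log(n)))

n = 100
binsize = default_binsize(n)   # 16 for 100 points
nbins = int(n / binsize)       # histplot then draws len(D)//binsize bins
```

So 100 points get a binsize of 16 and hence 6 histogram bins.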
in sys.argv:\n ind=sys.argv.index(\"-WD\")\n dir_path=sys.argv[ind+1]\n meas_file,spec_file= dir_path+\"/magic_measurements.txt\",dir_path+\"/er_specimens.txt\"\n out_file=meas_file\n MeasRecs,SpecRecs=[],[]\n OutRecs=[]\n if \"-h\" in sys.argv:\n print(main.__doc__)\n sys.exit()\n if \"-f\" in sys.argv:\n ind=sys.argv.index(\"-f\")\n meas_file=dir_path+'/'+sys.argv[ind+1]\n if \"-fsp\" in sys.argv:\n ind=sys.argv.index(\"-fsp\")\n spec_file=dir_path+'/'+sys.argv[ind+1]\n if \"-F\" in sys.argv:\n ind=sys.argv.index(\"-F\")\n out_file=dir_path+'/'+sys.argv[ind+1]\n MeasRecs,file_type=pmag.magic_read(meas_file)\n Specs,file_type=pmag.magic_read(spec_file)\n for rec in MeasRecs:\n if 'measurement_magn_moment' in list(rec.keys()) and rec['measurement_magn_moment'] != \"\":\n for spec in Specs:\n if spec['er_specimen_name']==rec['er_specimen_name']:\n if 'specimen_weight' in list(spec.keys()) and spec['specimen_weight']!=\"\":\n rec['measurement_magn_mass']='%e'%(old_div(float(rec['measurement_magn_moment']),float(spec['specimen_weight'])))\n if 'specimen_volume' in list(spec.keys()) and spec['specimen_volume']!=\"\":\n rec['measurement_magn_volume']='%e'%(old_div(float(rec['measurement_magn_moment']),float(spec['specimen_volume'])))\n break\n if 'measurement_magn_volume' not in list(rec.keys()): rec['measurement_magn_volume']=''\n if 'measurement_magn_mass' not in list(rec.keys()): rec['measurement_magn_mass']=''\n OutRecs.append(rec) \n pmag.magic_write(out_file,OutRecs,\"magic_measurements\")\n print(\"Data saved in \", out_file)", "response": "NAME measurements_normalize. 
py DESCRIPTION\n\n takes a magic_measurements file and normalizes the measured moment by the specimen weight and volume given in the er_specimens table, adding measurement_magn_mass and measurement_magn_volume columns to the output measurements file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nshutting down application if in the main frame. otherwise, destroy the top window (wind) and restore the main frame.", "response": "def on_quit(self, event, wind=None):\n \"\"\"\n shut down application if in the main frame.\n otherwise, destroy the top window (wind) and restore\n the main frame.\n \"\"\"\n if wind:\n wind.Destroy()\n if not self.parent.IsShown():\n self.on_show_mainframe(None)\n # re-do the quit binding\n self.parent.Bind(wx.EVT_MENU, self.on_quit, self.file_quit)\n else:\n self.parent.Close()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef on_clear(self, event):\n dia = pmag_menu_dialogs.ClearWD(self.parent, self.parent.WD)\n clear = dia.do_clear()\n if clear:\n # clear directory, but use previously acquired data_model\n if self.data_model_num == 2.5:\n self.parent.er_magic = builder.ErMagicBuilder(self.parent.WD, self.parent.er_magic.data_model)\n elif self.data_model_num == 3:\n self.parent.contribution = cb.Contribution(self.parent.WD,\n dmodel=self.parent.contribution.data_model)", "response": "clear the working directory"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nname plotxy_magic.py DESCRIPTION Makes simple X,Y plots INPUT FORMAT Any MagIC formatted file SYNTAX plotxy_magic.py [command line options] OPTIONS -h prints this help message -f FILE to set file name on command rec -c col1 col2 specify columns names to plot -sym SYM SIZE specify symbol and size to plot: default is red dots -S don't plot symbols -xlab XLAB -ylab YLAB -l connect symbols with lines -b 
xmin xmax ymin ymax, sets bounds # -b [key:max:min,key:max:min,etc.] leave or min blank for no cutoff", "response": "def main():\n \"\"\"\n NAME\n plotxy_magic.py\n\n DESCRIPTION\n Makes simple X,Y plots \n\n INPUT FORMAT\n Any MagIC formatted file\n\n SYNTAX\n plotxy_magic.py [command line options] \n\n OPTIONS\n -h prints this help message\n -f FILE to set file name on command rec \n -c col1 col2 specify columns names to plot\n -sym SYM SIZE specify symbol and size to plot: default is red dots\n -S don't plot symbols\n -xlab XLAB\n -ylab YLAB\n -l connect symbols with lines\n -b xmin xmax ymin ymax, sets bounds\n# -b [key:max:min,key:max:min,etc.] leave or min blank for no cutoff\n \"\"\"\n col1,col2=0,1\n sym,size = 'ro',20\n xlab,ylab='',''\n lines=0\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n else:\n '-f option is a required field'\n print(main.__doc__)\n sys.exit()\n if '-c' in sys.argv:\n ind=sys.argv.index('-c')\n col1=sys.argv[ind+1]\n col2=sys.argv[ind+2]\n else:\n 'Column headers a required field'\n print(main.__doc__)\n sys.exit()\n if '-xlab' in sys.argv:\n ind=sys.argv.index('-xlab')\n xlab=sys.argv[ind+1]\n if '-ylab' in sys.argv:\n ind=sys.argv.index('-ylab')\n ylab=sys.argv[ind+1]\n# if '-b' in sys.argv:\n# ind=sys.argv.index('-b')\n# bounds=sys.argv[ind+1].split(',')\n if '-b' in sys.argv:\n ind=sys.argv.index('-b')\n xmin=float(sys.argv[ind+1])\n xmax=float(sys.argv[ind+2])\n ymin=float(sys.argv[ind+3])\n ymax=float(sys.argv[ind+4])\n if '-sym' in sys.argv:\n ind=sys.argv.index('-sym')\n sym=sys.argv[ind+1]\n size=int(sys.argv[ind+2])\n if '-l' in sys.argv: lines=1\n if '-S' in sys.argv: sym=''\n X,Y=[],[]\n data,file_type=pmag.magic_read(file)\n print(file_type)\n for rec in data:\n if col1 not in list(rec.keys()) or col2 not in list(rec.keys()):\n print(col1,' and/or ',col2, ' not in file headers')\n print('try again')\n sys.exit()\n if rec[col1]!='' and 
rec[col2]!='':\n skip=0\n if '-crit' in sys.argv:\n for crit in bounds:\n crits=crit.split(':')\n crit_key=crits[0]\n crit_min=crits[1]\n crit_max=crits[2]\n if rec[crit_key]==\"\":\n skip=1\n else:\n if crit_min!=\"\" and float(rec[crit_key])<float(crit_min):skip=1\n if crit_max!=\"\" and float(rec[crit_key])>float(crit_max):skip=1\n if skip==0:\n X.append(float(rec[col1]))\n Y.append(float(rec[col2]))\n if len(X)==0:\n print(col1,' and/or ',col2, ' have no data ')\n print('try again')\n sys.exit()\n else:\n print(len(X),' data points')\n if sym!='':pylab.scatter(X,Y,c=sym[0],marker=sym[1],s=size)\n if xlab!='':pylab.xlabel(xlab)\n if ylab!='':pylab.ylabel(ylab)\n if lines==1:pylab.plot(X,Y,'k-')\n if '-b' in sys.argv:pylab.axis([xmin,xmax,ymin,ymax])\n pylab.draw()\n ans=input(\"Press return to quit \")\n sys.exit()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef main():\n fmt, plot = 'svg', 0\n FIG = {} # plot dictionary\n FIG['lowrie'] = 1 # demag is figure 1\n pmagplotlib.plot_init(FIG['lowrie'], 6, 6)\n norm = 1 # default is to normalize by maximum axis\n if len(sys.argv) > 1:\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-N' in sys.argv:\n norm = 0 # don't normalize\n if '-sav' in sys.argv:\n plot = 1 # save figures without displaying\n if '-fmt' in sys.argv: # sets input filename\n ind = sys.argv.index(\"-fmt\")\n fmt = sys.argv[ind + 1]\n if '-f' in sys.argv: # sets input filename\n ind = sys.argv.index(\"-f\")\n in_file = sys.argv[ind + 1]\n else:\n print(main.__doc__)\n print('you must supply a file name')\n sys.exit()\n else:\n print(main.__doc__)\n print('you must supply a file name')\n sys.exit()\n data = pmag.open_file(in_file)\n PmagRecs = [] # set up a list for the results\n keys = ['specimen', 'treatment', 'csd', 'M', 'dec', 'inc']\n for line in data:\n PmagRec = {}\n rec = line.replace('\\n', '').split()\n for k in range(len(keys)):\n PmagRec[keys[k]] = rec[k]\n PmagRecs.append(PmagRec)\n specs = pmag.get_dictkey(PmagRecs, 'specimen', '')\n sids = []\n for spec in 
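The -crit cutoff test in plotxy_magic (each bound written key:min:max, with an empty min or max meaning no cutoff on that side) can be sketched as a standalone predicate (the function name is ours):

```python
def passes_bounds(rec, bounds):
    # Each bound is 'key:min:max'; an empty min or max means no cutoff
    # on that side, and a blank value fails outright -- mirroring the
    # -crit filtering in plotxy_magic. (Function name is ours.)
    for crit in bounds:
        key, lo, hi = crit.split(':')
        if rec.get(key, '') == '':
            return False
        val = float(rec[key])
        if lo != '' and val < float(lo):
            return False
        if hi != '' and val > float(hi):
            return False
    return True
```

A record only contributes an (X, Y) point when every bound passes.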
specs:\n if spec not in sids:\n sids.append(spec) # get list of unique specimen names\n for spc in sids: # step through the specimen names\n pmagplotlib.plot_init(FIG['lowrie'], 6, 6)\n print(spc)\n specdata = pmag.get_dictitem(\n PmagRecs, 'specimen', spc, 'T') # get all this one's data\n DIMs, Temps = [], []\n for dat in specdata: # step through the data\n DIMs.append([float(dat['dec']), float(\n dat['inc']), float(dat['M']) * 1e-3])\n Temps.append(float(dat['treatment']))\n carts = pmag.dir2cart(DIMs).transpose()\n # if norm==1: # want to normalize\n # nrm=max(max(abs(carts[0])),max(abs(carts[1])),max(abs(carts[2]))) # by maximum of x,y,z values\n # ylab=\"M/M_max\"\n if norm == 1: # want to normalize\n nrm = (DIMs[0][2]) # normalize by NRM\n ylab = \"M/M_o\"\n else:\n nrm = 1. # don't normalize\n ylab = \"Magnetic moment (Am^2)\"\n xlab = \"Temperature (C)\"\n pmagplotlib.plot_xy(FIG['lowrie'], Temps, old_div(\n abs(carts[0]), nrm), sym='r-')\n pmagplotlib.plot_xy(FIG['lowrie'], Temps, old_div(\n abs(carts[0]), nrm), sym='ro') # X direction\n pmagplotlib.plot_xy(FIG['lowrie'], Temps, old_div(\n abs(carts[1]), nrm), sym='c-')\n pmagplotlib.plot_xy(FIG['lowrie'], Temps, old_div(\n abs(carts[1]), nrm), sym='cs') # Y direction\n pmagplotlib.plot_xy(FIG['lowrie'], Temps, old_div(\n abs(carts[2]), nrm), sym='k-')\n pmagplotlib.plot_xy(FIG['lowrie'], Temps, old_div(\n abs(carts[2]), nrm), sym='k^', title=spc, xlab=xlab, ylab=ylab) # Z direction\n files = {'lowrie': 'lowrie:_' + spc + '_.' + fmt}\n if plot == 0:\n pmagplotlib.draw_figs(FIG)\n ans = input('S[a]ve figure? [q]uit, to continue ')\n if ans == 'a':\n pmagplotlib.save_plots(FIG, files)\n elif ans == 'q':\n sys.exit()\n else:\n pmagplotlib.save_plots(FIG, files)\n pmagplotlib.clearFIG(FIG['lowrie'])", "response": "Reads a whitespace-delimited file of specimen/treatment/csd/M/dec/inc records and, for each specimen, plots the absolute x, y and z components of magnetization against demagnetization temperature (a Lowrie 3-D IRM experiment), normalized by the NRM unless -N is given."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nnaming apwp.py DESCRIPTION returns predicted paleolatitudes, directions and pole latitude/longitude from apparent polar wander paths of Besse and Courtillot (2002). SYNTAX apwp.py [command line options][< filename] OPTIONS -h prints help message and quits -i allows interactive data entry -f file: read plate, lat, lon, age data from file -F output_file: write output to output_file -P [NA, SA, AF, IN, EU, AU, ANT, GL] plate -lat LAT specify present latitude (positive = North; negative=South) -lon LON specify present longitude (positive = East, negative=West) -age AGE specify Age in Ma Note: must have all -P, -lat, -lon, -age or none. OUTPUT Age Paleolat. Dec. Inc. Pole_lat. Pole_Long.", "response": "def main():\n \"\"\"\n NAME\n apwp.py\n\n DESCRIPTION\n returns predicted paleolatitudes, directions and pole latitude/longitude\n from apparent polar wander paths of Besse and Courtillot (2002).\n\n SYNTAX\n apwp.py [command line options][< filename]\n\n OPTIONS\n -h prints help message and quits\n -i allows interactive data entry\n -f file: read plate, lat, lon, age data from file\n -F output_file: write output to output_file \n -P [NA, SA, AF, IN, EU, AU, ANT, GL] plate\n -lat LAT specify present latitude (positive = North; negative=South)\n -lon LON specify present longitude (positive = East, negative=West)\n -age AGE specify Age in Ma\n\n Note: must have all -P, -lat, -lon, -age or none.\n\n OUTPUT\n Age Paleolat. Dec. Inc. Pole_lat. Pole_Long. 
\n\n \"\"\"\n infile,outfile,data,indata=\"\",\"\",[],[]\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-F' in sys.argv:\n ind=sys.argv.index('-F')\n outfile=sys.argv[ind+1]\n out=open(outfile,'w')\n if '-i' in sys.argv:\n print(\"Welcome to paleolatitude calculator\\n\")\n while 1:\n data=[]\n print(\"pick a plate: NA, SA, AF, IN, EU, AU, ANT, GL \\n cntl-D to quit\")\n try:\n plate=input(\"Plate\\n\").upper()\n except:\n print(\"Goodbye \\n\")\n sys.exit()\n lat=float(input( \"Site latitude\\n\"))\n lon=float(input(\" Site longitude\\n\"))\n age=float(input(\" Age\\n\"))\n data=[plate,lat,lon,age]\n print(\"Age Paleolat. Dec. Inc. Pole_lat. Pole_Long.\")\n print(spitout(data))\n elif '-f' in sys.argv:\n ind=sys.argv.index('-f')\n infile=sys.argv[ind+1]\n f=open(infile,'r')\n inp=f.readlines()\n elif '-P' in sys.argv:\n ind=sys.argv.index('-P')\n plate=sys.argv[ind+1].upper()\n if '-lat' in sys.argv:\n ind=sys.argv.index('-lat')\n lat=float(sys.argv[ind+1])\n else:\n print(main.__doc__)\n sys.exit()\n if '-lon' in sys.argv:\n ind=sys.argv.index('-lon')\n lon=float(sys.argv[ind+1])\n else:\n print(main.__doc__)\n sys.exit()\n if '-age' in sys.argv:\n ind=sys.argv.index('-age')\n age=float(sys.argv[ind+1])\n else:\n print(main.__doc__)\n sys.exit()\n data=[plate,lat,lon,age]\n outstring=spitout(data)\n if outfile==\"\": \n print(\"Age Paleolat. Dec. Inc. Pole_lat. 
Pole_Long.\")\n print(outstring)\n else:\n out.write(outstring)\n sys.exit()\n else:\n inp=sys.stdin.readlines() # read from standard input\n if len(inp)>0:\n for line in inp:\n data=[]\n rec=line.split()\n data.append(rec[0])\n for k in range(1,4): data.append(float(rec[k]))\n indata.append(data) \n if len(indata)>0:\n for line in indata:\n outstring=spitout(line)\n if outfile==\"\": \n print(outstring)\n else:\n out.write(outstring)\n else:\n print('no input data')\n sys.exit()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the eigenvalues and eigenvectors of a single object from a matrix T", "response": "def tauV(T):\n \"\"\" \n gets the eigenvalues (tau) and eigenvectors (V) from matrix T\n \"\"\"\n t,V,tr=[],[],0.\n ind1,ind2,ind3=0,1,2\n evalues,evectmps=numpy.linalg.eig(T)\n evectors=numpy.transpose(evectmps) # to make compatible with Numeric convention\n for tau in evalues:\n tr += tau # tr totals tau values\n if tr != 0:\n for i in range(3):\n evalues[i]=old_div(evalues[i], tr) # convention is norming eigenvalues so they sum to 1.\n else:\n return t,V # if eigenvalues add up to zero, no sorting is needed\n# sort evalues,evectors \n t1, t2, t3 = 0., 0., 1.\n for k in range(3): \n if evalues[k] > t1:\n t1,ind1 = evalues[k],k\n if evalues[k] < t3:\n t3,ind3 = evalues[k],k\n for k in range(3):\n if evalues[k] != t1 and evalues[k] != t3:\n t2,ind2=evalues[k],k\n V.append(evectors[ind1])\n V.append(evectors[ind2])\n V.append(evectors[ind3])\n t.append(t1)\n t.append(t2)\n t.append(t3)\n return t,V"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntakes arrays of X1_prime X2_prime X3_prime and the PD. checks that the PD vector direction is correct", "response": "def get_PD_direction(X1_prime, X2_prime, X3_prime, PD):\n \"\"\"takes arrays of X1_prime, X2_prime, X3_prime, and the PD. 
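The eigenvalue handling in tauV — normalize the eigenvalues to sum to 1, then sort them (and their eigenvectors) in descending order — condenses to a few lines of numpy. This sketch skips the zero-trace early return of the original:

```python
import numpy as np

def tauV(T):
    # Normalize the eigenvalues so they sum to 1, then sort them (and
    # the matching eigenvectors) in descending order, as in the
    # routine above. Skips the zero-trace early return.
    evals, evecs = np.linalg.eig(T)
    order = np.argsort(evals)[::-1]
    evals = evals[order] / evals.sum()
    return evals, evecs.T[order]

t, V = tauV(np.diag([3., 1., 6.]))
```

For the diagonal test matrix the normalized eigenvalues come out as 0.6, 0.3, 0.1, with the leading eigenvector along z.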
\n checks that the PD vector direction is correct\"\"\"\n n = len(X1_prime) - 1\n X1 = X1_prime[0] - X1_prime[n]\n X2 = X2_prime[0] - X2_prime[n]\n X3 = X3_prime[0] - X3_prime[n]\n R= numpy.array([X1, X2, X3])\n #print 'R (reference vector for PD direction)', R\n dot = numpy.dot(PD, R) # dot product of reference vector and the principal axis of the V matrix\n #print 'dot (dot of PD and R)', dot\n if dot < -1:\n dot = -1\n elif dot > 1:\n dot = 1\n if numpy.arccos(dot) > old_div(numpy.pi, 2.):\n #print 'numpy.arccos(dot) {} > numpy.pi / 2. {}'.format(numpy.arccos(dot), numpy.pi / 2)\n #print 'correcting PD direction'\n PD = -1. * numpy.array(PD)\n #print 'PD after get PD direction', PD\n return PD"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_MAD(tau):\n # tau is ordered so that tau[0] > tau[1] > tau[2]\n for t in tau:\n if isinstance(t, complex):\n return -999\n MAD = math.degrees(numpy.arctan(numpy.sqrt(old_div((tau[1] + tau[2]), tau[0]))) )\n return MAD", "response": "Calculates the maximum angular deviation of a PCA matrix"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dir2cart(d): # from pmag.py\n ints = numpy.ones(len(d)).transpose() # get an array of ones to plug into dec,inc pairs \n d = numpy.array(d)\n rad = old_div(numpy.pi, 180.)\n if len(d.shape) > 1: # array of vectors \n decs, incs = d[:,0] * rad, d[:,1] * rad\n if d.shape[1] == 3: ints = d[:,2] # take the given lengths \n else: # single vector \n decs, incs = numpy.array(d[0]) * rad, numpy.array(d[1]) * rad\n if len(d) == 3:\n ints = numpy.array(d[2])\n else:\n ints = numpy.array([1.])\n cart = numpy.array([ints * numpy.cos(decs) * numpy.cos(incs),\n ints * numpy.sin(decs) * numpy.cos(incs),\n ints * numpy.sin(incs)\n ]).transpose()\n return cart", "response": "converts list or array of vector directions in degrees to array of cartesian coordinates in x y z 
form"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pmag_angle(D1,D2): # use this \n D1 = numpy.array(D1)\n if len(D1.shape) > 1:\n D1 = D1[:,0:2] # strip off intensity\n else: D1 = D1[:2]\n D2 = numpy.array(D2)\n if len(D2.shape) > 1:\n D2 = D2[:,0:2] # strip off intensity\n else: D2 = D2[:2]\n X1 = dir2cart(D1) # convert to cartesian from polar\n X2 = dir2cart(D2)\n angles = [] # set up a list for angles\n for k in range(X1.shape[0]): # single vector\n angle = numpy.arccos(numpy.dot(X1[k],X2[k]))*180./numpy.pi # take the dot product\n angle = angle%360.\n angles.append(angle)\n return numpy.array(angles)", "response": "returns the angle between two lists of two directions D1 D2"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef new_get_angle_diff(v1,v2):\n v1 = numpy.array(v1)\n v2 = numpy.array(v2)\n angle = numpy.arctan2(numpy.linalg.norm(numpy.cross(v1, v2)), numpy.dot(v1, v2))\n return math.degrees(angle)", "response": "returns angular difference in degrees between two vectors. may be more precise in certain cases."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_angle_difference(v1, v2):\n v1 = numpy.array(v1)\n v2 = numpy.array(v2)\n angle=numpy.arccos(old_div((numpy.dot(v1, v2) ), (numpy.sqrt(math.fsum(v1**2)) * numpy.sqrt(math.fsum(v2**2))))) \n return math.degrees(angle)", "response": "returns angular difference in degrees between two vectors. takes in cartesian coordinates"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngives angle between principal direction of the ptrm data and the b_lab vector. this is NOT in SPD, but taken from Ron Shaar's old thellier_gui.py code. 
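The `new_get_angle_diff` record above notes that the `arctan2` form "may be more precise in certain cases" than the `arccos` form used in `get_angle_difference`: rounding can push a normalized dot product slightly past ±1, which is outside the `arccos` domain, while `atan2(|v1 × v2|, v1 · v2)` is safe for all inputs. A self-contained sketch of that formulation:

```python
import math
import numpy

def angle_between(v1, v2):
    """Angle in degrees between two cartesian vectors.
    atan2(|v1 x v2|, v1 . v2) avoids the arccos domain error that
    occurs when a normalized dot product rounds past +/-1."""
    v1 = numpy.asarray(v1, dtype=float)
    v2 = numpy.asarray(v2, dtype=float)
    return math.degrees(math.atan2(numpy.linalg.norm(numpy.cross(v1, v2)),
                                   numpy.dot(v1, v2)))
```

Orthogonal vectors give 90 degrees, parallel give 0, antiparallel give 180, with no clamping step needed.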
see PmagPy on github", "response": "def get_ptrms_angle(ptrms_best_fit_vector, B_lab_vector):\n \"\"\"\n gives angle between principal direction of the ptrm data and the b_lab vector. this is NOT in SPD, but taken from Ron Shaar's old thellier_gui.py code. see PmagPy on github\n \"\"\"\n ptrms_angle = math.degrees(math.acos(old_div(numpy.dot(ptrms_best_fit_vector,B_lab_vector),(numpy.sqrt(sum(ptrms_best_fit_vector**2)) * numpy.sqrt(sum(B_lab_vector**2)))))) # from old thellier_gui.py code \n return ptrms_angle"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntaking out dos problem characters from any file", "response": "def main():\n \"\"\"\n Take out dos problem characters from any file\n \"\"\"\n filename = pmag.get_named_arg('-f')\n if not filename:\n return\n with open(filename, 'rb+') as f:\n content = f.read()\n f.seek(0)\n f.write(content.replace(b'\\r', b''))\n f.truncate()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding criteria used only in thellier gui only", "response": "def add_thellier_gui_criteria(acceptance_criteria):\n '''criteria used only in thellier gui\n these criteria are not written to pmag_criteria.txt\n '''\n category=\"thellier_gui\"\n for crit in ['sample_int_n_outlier_check','site_int_n_outlier_check']:\n acceptance_criteria[crit]={}\n acceptance_criteria[crit]['category']=category\n acceptance_criteria[crit]['criterion_name']=crit\n acceptance_criteria[crit]['value']=-999\n acceptance_criteria[crit]['threshold_type']=\"low\"\n acceptance_criteria[crit]['decimal_points']=0\n\n for crit in ['sample_int_interval_uT','sample_int_interval_perc',\\\n 'site_int_interval_uT','site_int_interval_perc',\\\n 'sample_int_BS_68_uT','sample_int_BS_95_uT','sample_int_BS_68_perc','sample_int_BS_95_perc','specimen_int_max_slope_diff']:\n acceptance_criteria[crit]={}\n acceptance_criteria[crit]['category']=category\n acceptance_criteria[crit]['criterion_name']=crit\n 
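The DOS-character cleanup `main()` above reads the file in binary, strips `\r` bytes, and truncates in place. The core of that step, pulled out as a standalone helper (the function name here is illustrative, not PmagPy API):

```python
def strip_carriage_returns(path):
    """Sketch of the '-f' cleanup step: remove DOS carriage-return
    bytes from a file in place. Reads the whole file into memory,
    which is fine for the small text files this tool targets."""
    with open(path, 'rb+') as f:
        content = f.read()
        f.seek(0)
        f.write(content.replace(b'\r', b''))
        f.truncate()  # drop the bytes left over past the shorter content
```

Opening with `'rb+'` and calling `truncate()` after the rewrite matters: without it, the tail of the original (longer) content would survive past the new end of file.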
acceptance_criteria[crit]['value']=-999\n acceptance_criteria[crit]['threshold_type']=\"high\"\n if crit in ['specimen_int_max_slope_diff']:\n acceptance_criteria[crit]['decimal_points']=-999\n else:\n acceptance_criteria[crit]['decimal_points']=1\n acceptance_criteria[crit]['comments']=\"thellier_gui_only\"\n\n for crit in ['average_by_sample_or_site','interpreter_method']:\n acceptance_criteria[crit]={}\n acceptance_criteria[crit]['category']=category\n acceptance_criteria[crit]['criterion_name']=crit\n if crit in ['average_by_sample_or_site']:\n acceptance_criteria[crit]['value']='sample'\n if crit in ['interpreter_method']:\n acceptance_criteria[crit]['value']='stdev_opt'\n acceptance_criteria[crit]['threshold_type']=\"flag\"\n acceptance_criteria[crit]['decimal_points']=-999\n\n for crit in ['include_nrm']:\n acceptance_criteria[crit]={}\n acceptance_criteria[crit]['category']=category\n acceptance_criteria[crit]['criterion_name']=crit\n acceptance_criteria[crit]['value']=True\n acceptance_criteria[crit]['threshold_type']=\"bool\"\n acceptance_criteria[crit]['decimal_points']=-999"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef on_okButton(self, event):\n def check_input_dir_path(input_dir_path, new_dir_path):\n if input_dir_path and input_dir_path != new_dir_path:\n pw.simple_warning(\"Please make sure that all input files come from the same directory\")\n return False\n if not input_dir_path and new_dir_path:\n return new_dir_path\n elif input_dir_path == new_dir_path:\n return input_dir_path\n\n wait = wx.BusyInfo('Making plots, please wait...')\n wx.SafeYield()\n\n os.chdir(self.WD)\n input_dir_path = None\n meas_file = self.bSizer0.return_value()\n if meas_file:\n input_dir_path, meas_file = os.path.split(meas_file)\n pmag_spec_file = self.bSizer0a.return_value()\n if pmag_spec_file:\n new_dir_path, pmag_spec_file = os.path.split(pmag_spec_file)\n input_dir_path = 
check_input_dir_path(input_dir_path, new_dir_path)\n if not input_dir_path:\n del wait\n return False\n sum_file = self.bSizer2.return_value()\n if sum_file:\n new_dir_path, sum_file = os.path.split(sum_file)\n input_dir_path = check_input_dir_path(input_dir_path, new_dir_path)\n if not input_dir_path:\n del wait\n return False\n\n spec_sym, spec_sym_shape, spec_sym_color, spec_sym_size = \"\", \"\", \"\", \"\"\n\n if pmag_spec_file:\n # get symbol/size for dots\n spec_sym_shape = self.shape_choices_dict[self.bSizer0a2.return_value()]\n spec_sym_color = self.bSizer0a1.return_value()[0]\n spec_sym_size = self.bSizer0a3.return_value()\n spec_sym = str(spec_sym_color) + str(spec_sym_shape)\n\n use_sampfile = self.bSizer1a.return_value()\n if use_sampfile:\n new_dir_path, samp_file = os.path.split(str(self.bSizer1.return_value()))\n age_file = ''\n input_dir_path = check_input_dir_path(input_dir_path, new_dir_path)\n if not input_dir_path:\n del wait\n return False\n else:\n samp_file = ''\n new_dir_path, age_file = os.path.split(self.bSizer1.return_value())\n input_dir_path = check_input_dir_path(input_dir_path, new_dir_path)\n if not input_dir_path:\n del wait\n return False\n\n depth_scale = self.bSizer8.return_value()\n if age_file:\n depth_scale='age'\n elif depth_scale:\n depth_scale = 'sample_core_depth' #'mbsf'\n else:\n depth_scale = 'sample_composite_depth' #'mcd'\n dmin = self.bSizer6.return_value()\n dmax = self.bSizer7.return_value()\n if self.bSizer9.return_value(): # if plot GPTS is checked\n pltTime = 1\n timescale = self.bSizer10.return_value()\n amin = self.bSizer11.return_value()\n amax = self.bSizer12.return_value()\n if not amin or not amax:\n del wait\n pw.simple_warning(\"If plotting timescale, you must provide both a lower and an upper bound.\\nIf you don't want to plot timescale, uncheck the 'Plot GPTS' checkbox\")\n return False\n else: # if plot GPTS is not checked\n pltTime, timescale, amin, amax = 0, '', -1, -1\n sym_shape = 
self.shape_choices_dict[self.bSizer5.return_value()]\n sym_color = self.bSizer4.return_value()[0]\n sym = sym_color + sym_shape\n size = self.bSizer5a.return_value()\n pltLine = self.bSizer5b.return_value()\n if pltLine:\n pltLine = 1\n else:\n pltLine = 0\n method = str(self.bSizer13.return_value())\n step = self.bSizer14.return_value()\n if not step:\n step = 0\n method = 'LT-NO'\n #if not step:\n # #-LP [AF,T,ARM,IRM, X] step [in mT,C,mT,mT, mass/vol] to plot\n # units_dict = {'AF': 'millitesla', 'T': 'degrees C', 'ARM': 'millitesla', 'IRM': 'millitesla', 'X': 'mass/vol'}\n #unit = units_dict[method]\n #pw.simple_warning(\"You must provide the experiment step in {}\".format(unit))\n #return False\n pltDec, pltInc, pltMag, logit = 0, 0, 0, 0\n for val in self.bSizer3.return_value():\n if 'declination' in val:\n pltDec = 1\n if 'inclination' in val:\n pltInc = 1\n if 'magnetization' in val:\n pltMag = 1\n if 'log' in val:\n logit = 1\n\n #pltSus = self.bSizer15.return_value()\n #if pltSus:\n # pltSus = 0\n #else:\n # pltSus = 1\n\n fmt = self.bSizer16.return_value()\n #print \"meas_file\", meas_file, \"pmag_spec_file\", pmag_spec_file, \"spec_sym_shape\", spec_sym_shape, \"spec_sym_color\", spec_sym_color, \"spec_sym_size\", spec_sym_size, \"samp_file\", samp_file, \"age_file\", age_file, \"depth_scale\", depth_scale, \"dmin\", dmin, \"dmax\", dmax, \"timescale\", timescale, \"amin\", amin, \"amax\", amax, \"sym\", sym, \"size\", size, \"method\", method, \"step\", step, \"pltDec\", pltDec, \"pltInc\", pltInc, \"pltMag\", pltMag, \"pltTime\", pltTime, \"logit\", logit, \"fmt\", fmt\n\n # for use as module:\n #print \"pltLine:\", pltLine\n #print \"pltSus:\", pltSus\n\n fig, figname = ipmag.core_depthplot(input_dir_path or self.WD, meas_file, pmag_spec_file, samp_file, age_file, sum_file, '', depth_scale, dmin, dmax, sym, size, spec_sym, spec_sym_size, method, step, fmt, pltDec, pltInc, pltMag, pltLine, 1, logit, pltTime, timescale, amin, amax)\n if fig:\n 
self.Destroy()\n dpi = fig.get_dpi()\n pixel_width = dpi * fig.get_figwidth()\n pixel_height = dpi * fig.get_figheight()\n plot_frame = PlotFrame((pixel_width, pixel_height + 50), fig, figname)\n del wait\n return plot_frame\n else:\n del wait\n pw.simple_warning(\"No data points met your criteria - try again\\nError message: {}\".format(figname))\n return False\n\n\n # for use as command_line:\n if meas_file:\n meas_file = os.path.split(meas_file)[1]\n meas_file = pmag.add_flag(meas_file, '-f')\n if pmag_spec_file:\n pmag_spec_file = os.path.split(pmag_spec_file)[1]\n pmag_spec_file = pmag.add_flag(pmag_spec_file, '-fsp')\n pmag_spec_file = pmag_spec_file + ' ' + spec_sym_color + spec_sym_shape + ' ' + str(spec_sym_size)\n sym = '-sym ' + sym + ' ' + str(size)\n if samp_file:\n samp_file = os.path.split(samp_file)[1]\n samp_file = pmag.add_flag(samp_file, '-fsa')\n if age_file:\n age_file = os.path.split(age_file)[1]\n age_file = pmag.add_flag(age_file, '-fa')\n depth_scale = pmag.add_flag(depth_scale, '-ds')\n depth_range = ''\n if dmin and dmax:\n depth_range = '-d ' + str(dmin) + ' ' + str(dmax)\n if pltTime and amin and amax:\n timescale = '-ts ' + timescale + ' ' + str(amin) + ' ' + str(amax)\n else:\n timescale = ''\n method = pmag.add_flag(method, '-LP') + ' ' + str(step)\n #if not pltSus:\n # pltSus = \"-L\"\n #else:\n # pltSus = ''\n if not pltDec:\n pltDec = \"-D\"\n else:\n pltDec = ''\n if not pltInc:\n pltInc = \"-I\"\n else:\n pltInc = ''\n if not pltMag:\n pltMag = \"-M\"\n else:\n pltMag = ''\n if pltLine:\n pltLine = \"\"\n else:\n pltLine = '-L' # suppress line\n if logit:\n logit = \"-log\"\n else:\n logit = ''\n fmt = pmag.add_flag(fmt, '-fmt')\n\n COMMAND = \"core_depthplot.py {meas_file} {pmag_spec_file} {sym} {samp_file} {age_file} {depth_scale} {depth_range} {timescale} {method} {pltDec} {pltInc} {pltMag} {logit} {fmt} {pltLine} -WD {WD}\".format(meas_file=meas_file, pmag_spec_file=pmag_spec_file, sym=sym, samp_file=samp_file, 
age_file=age_file, depth_scale=depth_scale, depth_range=depth_range, timescale=timescale, method=method, pltDec=pltDec, pltInc=pltInc, pltMag=pltMag, logit=logit, fmt=fmt, pltLine=pltLine, WD=self.WD)\n print(COMMAND)\n #os.system(COMMAND)\n\n\n \"\"\"\n haven't done these options yet\n wt_file (also sets norm)# -n specimen_filename\n spc_file, spc_sym, spc_size # -fsp spec_file symbol_shape symbol_size\n res_file, res_sym, res_size # -fres pmag_results_file symbol_shape symbol_size\n wig_file (also sets pcol, width) # -fwig wiggle_file(???)\n sum_file # -fsum IODP_core_summary_csv_file\n\n (sets plots & verbose) # -sav\n \"\"\"", "response": "Function called when OK button is clicked"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nnames odp_srm_magic.py DESCRIPTION converts ODP measurement format files to magic_measurements format files SYNTAX odp_srm_magic.py [command line options] OPTIONS -h: prints the help message and quits. -F FILE: specify output measurements file, default is magic_measurements.txt -Fsa FILE: specify output er_sample.txt file, default is er_sample.txt -A : don't average replicate measurements INPUT put data from a single core into a directory. depths will be below core top", "response": "def main():\n \"\"\"\n NAME\n odp_srm_magic.py\n\n DESCRIPTION\n converts ODP measurement format files to magic_measurements format files\n\n SYNTAX\n odp_srm_magic.py [command line options]\n\n OPTIONS\n -h: prints the help message and quits.\n -F FILE: specify output measurements file, default is magic_measurements.txt\n -Fsa FILE: specify output er_sample.txt file, default is er_sample.txt\n -A : don't average replicate measurements\n INPUT\n put data from a single core into a directory. 
depths will be below core top\n\n \"\"\"\n#\n#\n version_num=pmag.get_version()\n meas_file='magic_measurements.txt'\n samp_file='er_samples.txt'\n ErSpecs,ErSamps,ErSites,ErLocs,ErCits=[],[],[],[],[]\n MagRecs=[]\n citation=\"This study\"\n dir_path,demag='.','NRM'\n args=sys.argv\n noave=0,\n if '-WD' in args:\n ind=args.index(\"-WD\")\n dir_path=args[ind+1]\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if \"-A\" in args: noave=1\n if '-F' in args:\n ind=args.index(\"-F\")\n meas_file=args[ind+1]\n if '-Fsa' in args:\n ind=args.index(\"-Fsa\")\n samp_file=args[ind+1]\n if '-LP' in args:\n ind=args.index(\"-LP\")\n codelist=args[ind+1]\n codes=codelist.split(':')\n if \"AF\" in codes:\n demag='AF'\n if'-dc' not in args: methcode=\"LT-AF-Z\"\n if'-dc' in args: methcode=\"LT-AF-I\"\n if \"T\" in codes:\n demag=\"T\"\n if '-dc' not in args: methcode=\"LT-T-Z\"\n if '-dc' in args: methcode=\"LT-T-I\"\n if \"I\" in codes:\n methcode=\"LP-IRM\"\n if \"S\" in codes:\n demag=\"S\"\n methcode=\"LP-PI-TRM:LP-PI-ALT-AFARM\"\n trm_labfield=labfield\n ans=input(\"DC lab field for ARM step: [50uT] \")\n if ans==\"\":\n arm_labfield=50e-6\n else:\n arm_labfield=float(ans)*1e-6\n ans=input(\"temperature for total trm step: [600 C] \")\n if ans==\"\":\n trm_peakT=600+273 # convert to kelvin\n else:\n trm_peakT=float(ans)+273 # convert to kelvin\n if \"G\" in codes: methcode=\"LT-AF-G\"\n if \"D\" in codes: methcode=\"LT-AF-D\"\n if \"TRM\" in codes:\n demag=\"T\"\n trm=1\n if demag==\"T\" and \"ANI\" in codes:\n methcode=\"LP-AN-TRM\"\n if demag==\"AF\" and \"ANI\" in codes:\n methcode=\"LP-AN-ARM\"\n if labfield==0: labfield=50e-6\n if peakfield==0: peakfield=.180\n samp_file=dir_path+'/'+samp_file\n meas_file=dir_path+'/'+meas_file\n filelist=os.listdir(dir_path) # read in list of files to import\n specimens,samples,sites=[],[],[]\n MagRecs,SpecRecs,SampRecs=[],[],[]\n for file in filelist: # parse each file\n if file[-3:].lower()=='srm':\n print('processing: 
',file)\n Nfo=file.split('_')[0].split('-')\n try:\n sect=int(Nfo[3][:-1])\n except:\n sect=1\n input=open(file,'r').readlines()\n MagRec,SpecRec,SampRec={},{},{}\n alt_spec,treatment_type,treatment_value,user=\"\",\"\",\"\",\"\"\n inst=\"ODP-SRM\"\n SampRec['sample_azimuth']='0'\n SampRec['sample_dip']='0'\n SampRec['magic_method_code']='FS-C-DRILL-IODP:SP-SS-C'\n MagRec['er_analyst_mail_names']=user\n MagRec['magic_method_codes']='LT-NO'\n MagRec['magic_software_packages']=version_num\n MagRec[\"treatment_temp\"]='%8.3e' % (273) # room temp in kelvin\n MagRec[\"measurement_temp\"]='%8.3e' % (273) # room temp in kelvin\n MagRec[\"treatment_ac_field\"]='0'\n MagRec[\"treatment_dc_field\"]='0'\n MagRec[\"treatment_dc_field_phi\"]='0'\n MagRec[\"treatment_dc_field_theta\"]='0'\n MagRec[\"measurement_flag\"]='g' # assume all data are \"good\"\n MagRec[\"measurement_standard\"]='u' # assume all data are \"good\"\n MagRec[\"measurement_csd\"]='' # set csd to blank\n SpecRec['er_specimen_alternatives']=alt_spec\n vol=7e-6 # assume 7 cc samples\n datestamp=input[1].split() # date time is second line of file\n mmddyy=datestamp[0].split('/') # break into month day year\n date=mmddyy[2]+':'+mmddyy[0]+\":\"+mmddyy[1] +':' +datestamp[1]\n MagRec[\"measurement_date\"]=date\n treatment_value,inst=\"\",\"ODP-SRM\"\n k=0\n while 1:\n fields= input[k].replace('\\n','').split(\"=\")\n if 'treatment_type' in fields[0]:\n if \"Alternating Frequency Demagnetization\" in fields[1]:\n MagRec['magic_method_codes'] = 'LT-AF-Z'\n inst=inst+':ODP-DTECH' # measured on shipboard AF DTECH D2000\n if \"treatment_value\" in fields[0]:\n value=fields[1]\n if value!=\" \":\n treatment_value=float(value)*1e-3\n MagRec[\"treatment_ac_field\"]='%8.3e'%(treatment_value) # AF demag in treat mT => T\n if 'user' in fields[0]:\n user=fields[-1]\n MagRec[\"er_analyst_mail_names\"]=user\n MagRec[\"measurement_standard\"]='u' # assume all data are \"good\"\n if 'sample_area' in fields[0]: 
vol=float(fields[1])*1e-6 # takes volume (cc) and converts to m^3\n if 'run_number' in fields[0]:\n MagRec['external_database_ids']=fields[1] # run number is the LIMS measurement number\n MagRec['external_database_names']='LIMS'\n k+=1\n if input[k][0:7]=='':\n break\n while 1:\n k+=1\n line = input[k]\n if line[0:5]=='':\n break\n treatment_value=\"\"\n rec=line.replace('\\n','').split(',') # list of data\n if len(rec)>2:\n MeasRec,SampRec={},{'core_depth':'0','er_sample_name':'0','er_site_name':'0','er_location_name':'location'}\n for key in list(MagRec.keys()):MeasRec[key]=MagRec[key]\n for item in rec:\n items=item.split('=')\n if 'demag_level' in items[0]:\n treat= float(items[1])\n if treat!=0:\n MeasRec['magic_method_codes']='LT-AF-Z'\n inst=inst+':ODP-SRM-AF'\n MeasRec[\"treatment_ac_field\"]='%8.3e'%(treat*1e-3) # AF demag in treat mT => T\n if 'inclination_w_tray_w_bkgrd' in items[0]: MeasRec['measurement_inc']=items[1]\n if 'declination_w_tray_w_bkgrd' in items[0]: MeasRec['measurement_dec']=items[1]\n if 'intensity_w_tray_w_bkgrd' in items[0]: MeasRec['measurement_magn_moment']='%8.3e'%(float(items[1])*vol) # convert intensity from A/m to Am^2 using vol\n MeasRec['magic_instrument_codes']=inst\n if 'offset' in items[0]:\n depth='%7.3f'%(float(sect-1)*1.5+float(items[1]))\n SampRec['core_depth']=depth\n MeasRec['er_specimen_name']=depth\n MeasRec['er_sample_name']=depth\n MeasRec['er_site_name']=depth\n MeasRec['er_location_name']='location'\n SampRec['er_sample_name']=depth\n SampRec['er_site_name']=depth\n SampRec['er_location_name']='location'\n MeasRec['measurement_number']='1'\n SampRecs.append(SampRec)\n MagRecs.append(MeasRec)\n pmag.magic_write(samp_file,SampRecs,'er_samples')\n print('samples stored in ',samp_file)\n Fixed=pmag.measurements_methods(MagRecs,noave)\n pmag.magic_write(meas_file,Fixed,'magic_measurements')\n print('data stored in ',meas_file)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nnaming 
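The ODP SRM converter above does two small numeric conversions that are easy to get wrong: shipboard intensity in A/m is multiplied by the sample volume (given in cc, converted to m³) to obtain a moment in Am², and the sub-bottom depth is built from the 1.5 m section length plus the within-section offset. A minimal sketch of just that arithmetic (function and parameter names are illustrative, not from the original):

```python
def srm_line_to_moment(intensity_Am, volume_cc=7.0, section=1, offset_m=0.0):
    """Unit handling from the ODP SRM conversion: A/m intensity to an
    Am^2 moment via sample volume, and section+offset to depth in m."""
    volume_m3 = volume_cc * 1e-6            # cc -> m^3
    moment_Am2 = intensity_Am * volume_m3   # (A/m) * m^3 = Am^2
    depth_m = (section - 1) * 1.5 + offset_m  # 1.5 m per core section
    return moment_Am2, depth_m
```

With the script's default 7 cc volume, an intensity of 1 A/m becomes a 7e-6 Am² moment, matching the `float(items[1])*vol` line in the converter.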
kly4s_magic.py DESCRIPTION converts files generated by SIO kly4S labview program to MagIC formated files for use with PmagPy plotting software SYNTAX kly4s_magic.py -h [command line options] OPTIONS -h: prints the help message and quits -f FILE: specify .ams input file name -fad AZDIP: specify AZDIP file with orientations, will create er_samples.txt file -fsa SFILE: specify existing er_samples.txt file with orientation information -fsp SPFILE: specify existing er_specimens.txt file for appending -F MFILE: specify magic_measurements output file -Fa AFILE: specify rmag_anisotropy output file -ocn ORCON: specify orientation convention: default is #3 below -only with AZDIP file -usr USER: specify who made the measurements -loc LOC: specify location name for study -ins INST: specify instrument used -spc SPEC: specify number of characters to specify specimen from sample -ncn NCON: specify naming convention: default is #1 below DEFAULTS MFILE: magic_measurements.txt AFILE: rmag_anisotropy.txt SPFILE: create new er_specimens.txt file USER: \"\" LOC: \"unknown\" INST: \"SIO-KLY4S\" SPEC: 1 specimen name is same as sample (if SPEC is 1, sample is all but last character) NOTES: Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXXYYY: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY NB: all others you will have to either customize your self or e-mail ltauxe@ucsd.edu for help. 
Orientation convention: [1] Lab arrow azimuth= azimuth; Lab arrow dip=-dip i.e., dip is degrees from vertical down - the hade [default] [2] Lab arrow azimuth = azimuth-90; Lab arrow dip = -dip i.e., azimuth is strike and dip is hade [3] Lab arrow azimuth = azimuth; Lab arrow dip = dip-90 e.g. dip is degrees from horizontal of drill direction [4] Lab arrow azimuth = azimuth; Lab arrow dip = dip [5] Lab arrow azimuth = azimuth; Lab arrow dip = 90-dip [6] all others you will have to either customize your self or e-mail ltauxe@ucsd.edu for help.", "response": "def main():\n \"\"\"\n NAME\n kly4s_magic.py\n\n DESCRIPTION\n converts files generated by SIO kly4S labview program to MagIC formated\n files for use with PmagPy plotting software\n\n SYNTAX\n kly4s_magic.py -h [command line options]\n\n OPTIONS\n -h: prints the help message and quits\n -f FILE: specify .ams input file name\n -fad AZDIP: specify AZDIP file with orientations, will create er_samples.txt file\n -fsa SFILE: specify existing er_samples.txt file with orientation information\n -fsp SPFILE: specify existing er_specimens.txt file for appending\n -F MFILE: specify magic_measurements output file\n -Fa AFILE: specify rmag_anisotropy output file\n -ocn ORCON: specify orientation convention: default is #3 below -only with AZDIP file\n -usr USER: specify who made the measurements\n -loc LOC: specify location name for study\n -ins INST: specify instrument used\n -spc SPEC: specify number of characters to specify specimen from sample\n -ncn NCON: specify naming convention: default is #1 below\n\n DEFAULTS\n MFILE: magic_measurements.txt\n AFILE: rmag_anisotropy.txt\n SPFILE: create new er_specimens.txt file\n USER: \"\"\n LOC: \"unknown\"\n INST: \"SIO-KLY4S\"\n SPEC: 1 specimen name is same as sample (if SPEC is 1, sample is all but last character)\n NOTES:\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. 
e.g., TG001a is the\n first sample from site TG001. [default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [4-Z] XXXXYYY: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n NB: all others you will have to either customize your\n self or e-mail ltauxe@ucsd.edu for help.\n\n Orientation convention:\n [1] Lab arrow azimuth= azimuth; Lab arrow dip=-dip\n i.e., dip is degrees from vertical down - the hade [default]\n [2] Lab arrow azimuth = azimuth-90; Lab arrow dip = -dip\n i.e., azimuth is strike and dip is hade\n [3] Lab arrow azimuth = azimuth; Lab arrow dip = dip-90\n e.g. dip is degrees from horizontal of drill direction\n [4] Lab arrow azimuth = azimuth; Lab arrow dip = dip\n [5] Lab arrow azimuth = azimuth; Lab arrow dip = 90-dip\n [6] all others you will have to either customize your\n self or e-mail ltauxe@ucsd.edu for help.\n\n \"\"\"\n\n args = sys.argv\n if '-h' in args:\n print(main.__doc__)\n sys.exit()\n dataframe = extractor.command_line_dataframe([['f', True, ''], ['fad', False, ''],\n ['fsa', False, ''], ['fsp', False, ''],\n ['Fsp', False, 'specimens.txt'], ['F', False, 'measurements.txt'],\n ['Fa', False, 'rmag_anisotropy.txt'], ['ocn', False, '3'],\n ['usr', False, ''], ['loc', False, ''],\n ['ins', False, 'SIO-KLY4S'], ['spc', False, 0],\n ['ncn', False, '1'], ['WD', False, '.'],\n ['ID', False, '.'], ['DM', False, 3 ]])\n checked_args = extractor.extract_and_check_args(args, dataframe)\n infile, azdip_infile, samp_infile, spec_infile, spec_outfile, measfile, aniso_outfile, or_con, user, locname, inst, specnum, samp_con, output_dir_path, input_dir_path, data_model_num = extractor.get_vars(['f', 'fad', 'fsa', 'fsp', 'Fsp', 'F', 'Fa', 
'ocn', 'usr', 'loc', 'ins', 'spc', 'ncn', 'WD', 'ID', 'DM'], checked_args)\n convert.kly4s(infile, specnum=specnum, locname=locname, inst=inst,\n user=user, measfile=measfile,or_con=or_con,\n samp_con=samp_con, aniso_outfile=aniso_outfile,\n samp_infile=samp_infile, spec_infile=spec_infile,\n spec_outfile=spec_outfile, azdip_infile=azdip_infile,\n dir_path=output_dir_path, input_dir_path=input_dir_path,\n data_model_num=data_model_num)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nnaming plotXY.py DESCRIPTION Makes simple X,Y plots INPUT FORMAT X,Y data in columns SYNTAX plotxy.py [command line options] OPTIONS -h prints this help message -f FILE to set file name on command line -c col1 col2 specify columns to plot -xsig col3 specify xsigma if desired -ysig col4 specify xsigma if desired -b xmin xmax ymin ymax, sets bounds -sym SYM SIZE specify symbol to plot: default is red dots, 10 pt -S don't plot the symbols -xlab XLAB -ylab YLAB -l connect symbols with lines -fmt [svg,png,pdf,eps] specify output format, default is svg -sav saves plot and quits -poly X plot a degree X polynomial through the data -skip n Number of lines to skip before reading in data", "response": "def main():\n \"\"\"\n NAME\n plotXY.py\n\n DESCRIPTION\n Makes simple X,Y plots\n\n INPUT FORMAT\n X,Y data in columns\n\n SYNTAX\n plotxy.py [command line options] \n\n OPTIONS\n -h prints this help message\n -f FILE to set file name on command line \n -c col1 col2 specify columns to plot\n -xsig col3 specify xsigma if desired\n -ysig col4 specify xsigma if desired\n -b xmin xmax ymin ymax, sets bounds \n -sym SYM SIZE specify symbol to plot: default is red dots, 10 pt\n -S don't plot the symbols\n -xlab XLAB\n -ylab YLAB\n -l connect symbols with lines\n -fmt [svg,png,pdf,eps] specify output format, default is svg\n -sav saves plot and quits\n -poly X plot a degree X polynomial through the data\n -skip n Number of lines to skip before reading in data\n \"\"\"\n 
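Several of the `main()` functions in this collection repeat the same `sys.argv` pattern by hand: find a flag, take the next token, cast it. A hypothetical helper capturing that idiom (this is not a PmagPy function, just a sketch of the repeated pattern):

```python
def get_flag_value(argv, flag, default=None, cast=str):
    """Sketch of the repeated argv pattern in these scripts: if `flag`
    is present, return the following token cast to `cast`, else the
    default. No error handling for a flag given as the last token."""
    if flag in argv:
        return cast(argv[argv.index(flag) + 1])
    return default
```

This mirrors calls like `pmag.get_named_arg('-skip', default_val=0)` used in `plotXY.py`, which wraps the same index-and-take-next logic.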
fmt,plot='svg',0\n col1,col2=0,1\n sym,size = 'ro',50\n xlab,ylab='',''\n lines=0\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n if '-fmt' in sys.argv:\n ind=sys.argv.index('-fmt')\n fmt=sys.argv[ind+1]\n if '-sav' in sys.argv:plot=1\n if '-c' in sys.argv:\n ind=sys.argv.index('-c')\n col1=int(sys.argv[ind+1])-1\n col2=int(sys.argv[ind+2])-1\n if '-xsig' in sys.argv:\n ind=sys.argv.index('-xsig')\n col3=int(sys.argv[ind+1])-1\n if '-ysig' in sys.argv:\n ind=sys.argv.index('-ysig')\n col4=int(sys.argv[ind+1])-1\n if '-xlab' in sys.argv:\n ind=sys.argv.index('-xlab')\n xlab=sys.argv[ind+1]\n if '-ylab' in sys.argv:\n ind=sys.argv.index('-ylab')\n ylab=sys.argv[ind+1]\n if '-b' in sys.argv:\n ind=sys.argv.index('-b')\n xmin=float(sys.argv[ind+1])\n xmax=float(sys.argv[ind+2])\n ymin=float(sys.argv[ind+3])\n ymax=float(sys.argv[ind+4])\n if '-poly' in sys.argv:\n ind=sys.argv.index('-poly')\n degr=sys.argv[ind+1]\n if '-sym' in sys.argv:\n ind=sys.argv.index('-sym')\n sym=sys.argv[ind+1]\n size=int(sys.argv[ind+2])\n if '-l' in sys.argv: lines=1\n if '-S' in sys.argv: sym=''\n skip = int(pmag.get_named_arg('-skip', default_val=0))\n X,Y=[],[]\n Xerrs,Yerrs=[],[]\n f=open(file,'r')\n for num in range(skip):\n f.readline()\n data=f.readlines()\n for line in data:\n line.replace('\\n','')\n line.replace('\\t',' ')\n rec=line.split()\n X.append(float(rec[col1]))\n Y.append(float(rec[col2]))\n if '-xsig' in sys.argv:Xerrs.append(float(rec[col3]))\n if '-ysig' in sys.argv:Yerrs.append(float(rec[col4]))\n if '-poly' in sys.argv:\n pylab.plot(xs,ys)\n coeffs=numpy.polyfit(X,Y,degr)\n correl=numpy.corrcoef(X,Y)**2\n polynomial=numpy.poly1d(coeffs)\n xs=numpy.linspace(numpy.min(X),numpy.max(X),10)\n ys=polynomial(xs)\n pylab.plot(xs,ys)\n print(polynomial)\n if degr=='1': print('R-square value =', '%5.4f'%(correl[0,1]))\n if sym!='':\n pylab.scatter(X,Y,marker=sym[1],c=sym[0],s=size)\n else:\n 
pylab.plot(X,Y)\n if '-xsig' in sys.argv and '-ysig' in sys.argv:\n pylab.errorbar(X,Y,xerr=Xerrs,yerr=Yerrs,fmt=None)\n if '-xsig' in sys.argv and '-ysig' not in sys.argv:\n pylab.errorbar(X,Y,xerr=Xerrs,fmt=None)\n if '-xsig' not in sys.argv and '-ysig' in sys.argv:\n pylab.errorbar(X,Y,yerr=Yerrs,fmt=None)\n if xlab!='':pylab.xlabel(xlab)\n if ylab!='':pylab.ylabel(ylab)\n if lines==1:pylab.plot(X,Y,'k-')\n if '-b' in sys.argv:pylab.axis([xmin,xmax,ymin,ymax])\n if plot==0:\n pylab.show()\n else:\n pylab.savefig('plotXY.'+fmt) \n print('Figure saved as ','plotXY.'+fmt)\n sys.exit()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main():\n dir_path='.'\n inspec=\"pmag_specimens.txt\"\n if '-WD' in sys.argv:\n ind=sys.argv.index('-WD')\n dir_path=sys.argv[ind+1]\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n inspec=sys.argv[ind+1]\n basename=inspec.split('.')[:-1]\n inspec=dir_path+\"/\"+inspec\n ofile_base=dir_path+\"/\"+basename[0]\n#\n# read in data\n#\n prior_spec_data,file_type=pmag.magic_read(inspec)\n if file_type != 'pmag_specimens':\n print(file_type, \" this is not a valid pmag_specimens file\")\n sys.exit()\n# get list of specimens in file, components, coordinate systems available\n specs,comps,coords=[],[],[]\n for spec in prior_spec_data:\n if spec['er_specimen_name'] not in specs:specs.append(spec['er_specimen_name'])\n if 'specimen_comp_name' not in list(spec.keys()):spec['specimen_comp_name']='A'\n if 'specimen_tilt_correction' not in list(spec.keys()):spec['tilt_correction']='-1' # assume specimen coordinates\n if spec['specimen_comp_name'] not in comps:comps.append(spec['specimen_comp_name'])\n if spec['specimen_tilt_correction'] not in coords:coords.append(spec['specimen_tilt_correction'])\n# work on separating out components, coordinate systems by specimen\n for coord in coords:\n print(coord)\n for comp in comps:\n print(comp)\n 
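The `-poly` branch of `plotXY.py` above fits a degree-n polynomial with `numpy.polyfit`, wraps it in `numpy.poly1d`, and evaluates it on a 10-point grid; note that the curve must be computed before it is plotted (the original calls `pylab.plot(xs, ys)` once before `xs`/`ys` exist). A plotting-free sketch of the fit itself, in the correct order:

```python
import numpy

# Exactly linear test data, so a degree-1 fit should recover it perfectly.
X = numpy.array([0.0, 1.0, 2.0, 3.0])
Y = 2.0 * X + 1.0

coeffs = numpy.polyfit(X, Y, 1)            # least-squares fit, degree 1
polynomial = numpy.poly1d(coeffs)          # callable polynomial
xs = numpy.linspace(X.min(), X.max(), 10)  # smooth grid for the curve
ys = polynomial(xs)
r_square = numpy.corrcoef(X, Y)[0, 1] ** 2 # R^2, as printed for degree 1
```

`polyfit` returns coefficients highest degree first, so here `coeffs[0]` is the slope (2.0) and `coeffs[1]` the intercept (1.0), and the perfectly linear data gives an R² of 1.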
speclist=[]\n for spec in prior_spec_data:\n if spec['specimen_tilt_correction']==coord and spec['specimen_comp_name']==comp:speclist.append(spec)\n ofile=ofile_base+'_'+coord+'_'+comp+'.txt'\n pmag.magic_write(ofile,speclist,'pmag_specimens')\n print('coordinate system: ',coord,' component name: ',comp,' saved in ',ofile)", "response": "NAME sort_specimens.py DESCRIPTION Reads in a pmag_specimens formatted file and separates it into different components and coordinate systems"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef InitSpecCheck(self):\n #wait = wx.BusyInfo(\"Please wait, working...\")\n #wx.SafeYield()\n self.contribution.propagate_lithology_cols()\n spec_df = self.contribution.tables['specimens'].df\n self.panel = wx.Panel(self, style=wx.SIMPLE_BORDER)\n self.grid_frame = grid_frame3.GridFrame(self.contribution, self.WD,\n 'specimens', 'specimens', self.panel,\n main_frame=self.main_frame)\n # redefine default 'save & exit grid' button to go to next dialog instead\n self.grid_frame.exitButton.SetLabel('Save and continue')\n grid = self.grid_frame.grid\n self.grid_frame.Bind(wx.EVT_BUTTON,\n lambda event: self.onContinue(event, grid, self.InitSampCheck),\n self.grid_frame.exitButton)\n\n # add back button\n self.backButton = wx.Button(self.grid_frame.panel, id=-1, label='Back',\n name='back_btn')\n self.backButton.Disable()\n self.grid_frame.main_btn_vbox.Add(self.backButton, flag=wx.ALL, border=5)\n # re-do fit\n self.grid_frame.do_fit(None, self.min_size)\n # center\n self.grid_frame.Centre()\n return", "response": "This method creates an interactive grid in which users can edit the specimens table."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef InitSiteCheck(self):\n # propagate average lat/lon info from samples table if\n # available in samples and missing in sites\n self.contribution.propagate_average_up(cols=['lat', 'lon', 'height'],\n target_df_name='sites',\n 
source_df_name='samples')\n # propagate lithology columns\n self.contribution.propagate_lithology_cols()\n\n site_df = self.contribution.tables['sites'].df\n self.panel = wx.Panel(self, style=wx.SIMPLE_BORDER)\n self.grid_frame = grid_frame3.GridFrame(self.contribution, self.WD,\n 'sites', 'sites', self.panel,\n main_frame=self.main_frame)\n # redefine default 'save & exit grid' button to go to next dialog instead\n self.grid_frame.exitButton.SetLabel('Save and continue')\n grid = self.grid_frame.grid\n self.grid_frame.Bind(wx.EVT_BUTTON,\n lambda event: self.onContinue(event, grid, self.InitLocCheck),\n self.grid_frame.exitButton)\n # add back button\n self.backButton = wx.Button(self.grid_frame.panel, id=-1, label='Back',\n name='back_btn')\n self.Bind(wx.EVT_BUTTON,\n lambda event: self.onbackButton(event, self.InitSampCheck),\n self.backButton)\n self.grid_frame.main_btn_vbox.Add(self.backButton, flag=wx.ALL, border=5)\n # re-do fit\n self.grid_frame.do_fit(None, self.min_size)\n # center\n self.grid_frame.Centre()\n return", "response": "This method creates an interactive grid in which users can edit site names and locations a site belongs to."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ninitializes the locations grid", "response": "def InitLocCheck(self):\n \"\"\"\n make an interactive grid in which users can edit locations\n \"\"\"\n # if there is a location without a name, name it 'unknown'\n self.contribution.rename_item('locations', 'nan', 'unknown')\n # propagate lat/lon values from sites table\n self.contribution.get_min_max_lat_lon()\n # propagate lithologies & geologic classes from sites table\n self.contribution.propagate_cols_up(['lithologies',\n 'geologic_classes'], 'locations', 'sites')\n res = self.contribution.propagate_min_max_up()\n if cb.not_null(res):\n self.contribution.propagate_cols_up(['age_unit'], 'locations', 'sites')\n\n # set up frame\n self.panel = wx.Panel(self, 
style=wx.SIMPLE_BORDER)\n self.grid_frame = grid_frame3.GridFrame(self.contribution, self.WD,\n 'locations', 'locations', self.panel,\n main_frame=self.main_frame)\n # redefine default 'save & exit grid' button to go to next dialog instead\n self.grid_frame.exitButton.SetLabel('Save and continue')\n grid = self.grid_frame.grid\n self.grid_frame.Bind(wx.EVT_BUTTON,\n lambda event: self.onContinue(event, grid, self.InitAgeCheck),\n self.grid_frame.exitButton)\n # add back button\n self.backButton = wx.Button(self.grid_frame.panel, id=-1, label='Back',\n name='back_btn')\n self.Bind(wx.EVT_BUTTON,\n lambda event: self.onbackButton(event, self.InitSiteCheck),\n self.backButton)\n self.grid_frame.main_btn_vbox.Add(self.backButton, flag=wx.ALL, border=5)\n # re-do fit\n self.grid_frame.do_fit(None, min_size=self.min_size)\n # center\n self.grid_frame.Centre()\n return"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef InitAgeCheck(self):\n age_df = self.contribution.tables['ages'].df\n self.panel = wx.Panel(self, style=wx.SIMPLE_BORDER)\n self.grid_frame = grid_frame3.GridFrame(self.contribution, self.WD,\n 'ages', 'ages', self.panel,\n main_frame=self.main_frame)\n self.grid_frame.exitButton.SetLabel('Save and continue')\n grid = self.grid_frame.grid\n self.grid_frame.Bind(wx.EVT_BUTTON, lambda event: self.onContinue(event, grid, None),\n self.grid_frame.exitButton)\n # add back button\n self.backButton = wx.Button(self.grid_frame.panel, id=-1, label='Back',\n name='back_btn')\n self.Bind(wx.EVT_BUTTON,\n lambda event: self.onbackButton(event, self.InitLocCheck),\n self.backButton)\n self.grid_frame.main_btn_vbox.Add(self.backButton, flag=wx.ALL, border=5)\n # re-do fit\n self.grid_frame.do_fit(None, self.min_size)\n # center\n self.grid_frame.Centre()\n return", "response": "make an interactive grid in which users can edit ages"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following 
Python 3 function doing\ndef onContinue(self, event, grid, next_dia=None):#, age_data_type='site'):\n # deselect column, including remove 'EDIT ALL' label\n if self.grid_frame.drop_down_menu:\n self.grid_frame.drop_down_menu.clean_up()\n\n # remove '**' and '^^' from col names\n #self.remove_starred_labels(grid)\n grid.remove_starred_labels()\n\n grid.SaveEditControlValue() # locks in value in cell currently edited\n grid_name = str(grid.GetName())\n\n # save all changes to data object and write to file\n self.grid_frame.grid_builder.save_grid_data()\n\n # check that all required data are present\n validation_errors = self.validate(grid)\n if validation_errors:\n warn_string = \"\"\n for error_name, error_cols in list(validation_errors.items()):\n if error_cols:\n warn_string += \"You have {}: {}.\\n\\n\".format(error_name, \", \".join(error_cols))\n warn_string += \"Are you sure you want to continue?\"\n result = pw.warning_with_override(warn_string)\n if result == wx.ID_YES:\n pass\n else:\n return False\n else:\n wx.MessageBox('Saved!', 'Info',\n style=wx.OK | wx.ICON_INFORMATION)\n\n self.panel.Destroy()\n if next_dia:\n next_dia()\n else:\n # propagate any type/lithology/class data from sites to samples table\n # will only overwrite if sample values are blank or \"Not Specified\"\n self.contribution.propagate_lithology_cols()\n wx.MessageBox('Done!', 'Info',\n style=wx.OK | wx.ICON_INFORMATION)", "response": "Save grid data in the data object and save it to file"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate(self, grid):\n grid_name = str(grid.GetName())\n dmodel = self.contribution.dmodel\n reqd_headers = dmodel.get_reqd_headers(grid_name)\n df = self.contribution.tables[grid_name].df\n df = df.replace('', np.nan) # python does not view empty strings as null\n if df.empty:\n return {}\n col_names = set(df.columns)\n missing_headers = set(reqd_headers) - col_names\n present_headers = set(reqd_headers) 
- set(missing_headers)\n non_null_headers = df.dropna(how='all', axis='columns').columns\n null_reqd_headers = present_headers - set(non_null_headers)\n if any(missing_headers) or any (null_reqd_headers):\n warnings = {'missing required column(s)': sorted(missing_headers),\n 'no data in required column(s)': sorted(null_reqd_headers)}\n else:\n warnings = {}\n return warnings", "response": "Validate the MagIC data model for a specific MagIC grid."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsave any editing of the grid but does not continue to the next window", "response": "def on_saveButton(self, event, grid):\n \"\"\"saves any editing of the grid but does not continue to the next window\"\"\"\n wait = wx.BusyInfo(\"Please wait, working...\")\n wx.SafeYield()\n\n if self.grid_frame.drop_down_menu: # unhighlight selected columns, etc.\n self.grid_frame.drop_down_menu.clean_up()\n\n # remove '**' and '^^' from col labels\n starred_cols, hatted_cols = grid.remove_starred_labels()\n\n grid.SaveEditControlValue() # locks in value in cell currently edited\n grid.HideCellEditControl() # removes focus from cell that was being edited\n\n if grid.changes:\n self.onSave(grid)\n\n for col in starred_cols:\n label = grid.GetColLabelValue(col)\n grid.SetColLabelValue(col, label + '**')\n for col in hatted_cols:\n label = grid.GetColLabelValue(col)\n grid.SetColLabelValue(col, label + '^^')\n del wait"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef InitSpecCheck(self):\n\n self.panel = wx.Panel(self, style=wx.SIMPLE_BORDER)\n #import wx.lib.scrolledpanel as libpanel # does not work well\n #self.panel = libpanel.ScrolledPanel(self, style=wx.SIMPLE_BORDER)\n\n text = \"\"\"Step 1:\nCheck that all specimens belong to the correct sample\n(if sample name is simply wrong, that will be fixed in step 2)\"\"\"\n label = wx.StaticText(self.panel, label=text)\n self.grid_builder 
= grid_frame2.GridBuilder(self.er_magic_data, 'specimen',\n self.er_magic_data.headers, self.panel,\n 'sample')\n self.spec_grid = self.grid_builder.make_grid(incl_pmag=False)\n self.grid = self.spec_grid\n\n self.spec_grid.InitUI()\n\n self.grid_builder.add_data_to_grid(self.spec_grid, 'specimen', incl_pmag=False)\n\n samples = self.er_magic_data.make_name_list(self.er_magic_data.samples)\n self.drop_down_menu = drop_down_menus.Menus(\"specimen\", self, self.spec_grid, samples)\n\n\n #### Create Buttons ####\n hbox_one = wx.BoxSizer(wx.HORIZONTAL)\n self.addSampleButton = wx.Button(self.panel, label=\"Add a new sample\")\n self.samples = [name for name in self.er_magic_data.samples]\n self.Bind(wx.EVT_BUTTON, self.on_addSampleButton, self.addSampleButton)\n self.helpButton = wx.Button(self.panel, label=\"Help\")\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_helpButton(event, \"ErMagicSpecimenHelp.html\"), self.helpButton)\n hbox_one.Add(self.addSampleButton, flag=wx.ALIGN_LEFT|wx.RIGHT, border=10)\n hbox_one.Add(self.helpButton)\n\n #\n hboxok = wx.BoxSizer(wx.HORIZONTAL)\n self.saveButton = wx.Button(self.panel, id=-1, label='Save')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_saveButton(event, self.spec_grid), self.saveButton)\n self.cancelButton = wx.Button(self.panel, wx.ID_CANCEL, '&Cancel')\n self.Bind(wx.EVT_BUTTON, self.on_cancelButton, self.cancelButton)\n self.continueButton = wx.Button(self.panel, id=-1, label='Save and continue')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_continueButton(event, self.spec_grid, next_dia=self.InitSampCheck), self.continueButton)\n\n hboxok.Add(self.saveButton, flag=wx.ALIGN_LEFT|wx.RIGHT, border=10)\n hboxok.Add(self.cancelButton, flag=wx.ALIGN_LEFT|wx.RIGHT, border=10)\n hboxok.Add(self.continueButton, flag=wx.ALIGN_LEFT)\n\n #\n hboxgrid = pw.hbox_grid(self.panel, self.onDeleteRow, 'specimen', self.grid)\n self.deleteRowButton = hboxgrid.deleteRowButton\n\n self.panel.Bind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK, 
self.onLeftClickLabel, self.grid)\n\n ### Create Containers ###\n vbox = wx.BoxSizer(wx.VERTICAL)\n vbox.AddSpacer(10)\n vbox.Add(label, flag=wx.ALIGN_CENTER|wx.TOP|wx.BOTTOM, border=10)\n\n vbox.Add(hbox_one, flag=wx.TOP|wx.LEFT|wx.BOTTOM, border=10)\n vbox.Add(hboxok, flag=wx.BOTTOM|wx.LEFT, border=10)\n vbox.Add(hboxgrid, flag=wx.BOTTOM|wx.LEFT, border=10)\n vbox.Add(self.spec_grid, flag=wx.ALL, border=10)#|wx.EXPAND, border=30)\n vbox.AddSpacer(20)\n\n self.hbox_all = wx.BoxSizer(wx.HORIZONTAL)\n self.hbox_all.AddSpacer(20)\n self.hbox_all.Add(vbox)\n self.hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(self.hbox_all)\n #self.panel.SetScrollbars(20, 20, 50, 50)\n self.hbox_all.Fit(self)\n self.Centre()\n self.Show()\n self.Hide()\n self.Show()", "response": "Initialize the specimen check dialog"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef InitSampCheck(self):\n self.sample_window += 1\n self.panel = wx.Panel(self, style=wx.SIMPLE_BORDER)\n\n if self.sample_window == 1:\n text = \"\"\"Step 2:\nCheck that all samples are correctly named,\nand that they belong to the correct site\n(if site name is simply wrong, that will be fixed in step 3)\"\"\"\n step_label = wx.StaticText(self.panel, label=text)#, size=(900, 100))\n else:\n text = \"\"\"Step 4:\nSome of the data from the er_sites table has propagated into er_samples.\nCheck that these data are correct, and fill in missing cells using controlled vocabularies.\nThe columns for class, lithology, and type can take multiple values in the form of a colon-delimited list.\nYou may use the drop-down menus to add as many values as needed in these columns.\n(see Help button for more details)\\n\\n** Denotes controlled vocabulary\"\"\"\n step_label = wx.StaticText(self.panel, label=text)#, size=(900, 100))\n\n if self.sample_window == 1:\n # provide no extra headers\n headers = {'sample': {'er': [[], [], []],\n 'pmag': [[], [], []]}}\n self.grid_builder = 
grid_frame2.GridBuilder(self.er_magic_data, 'sample',\n headers, self.panel,\n 'site')\n\n if self.sample_window > 1:\n self.grid_builder = grid_frame2.GridBuilder(self.er_magic_data, 'sample',\n self.er_magic_data.headers, self.panel,\n 'site')\n\n self.samp_grid = self.grid_builder.make_grid(incl_pmag=False)\n self.samp_grid.InitUI()\n self.grid_builder.add_data_to_grid(self.samp_grid, 'sample', incl_pmag=False)\n self.grid = self.samp_grid\n\n sites = sorted(self.er_magic_data.make_name_list(self.er_magic_data.sites))\n self.drop_down_menu = drop_down_menus.Menus(\"sample\", self, self.samp_grid, sites) # initialize all needed drop-down menus\n\n ### Create Buttons ###\n hbox_one = wx.BoxSizer(wx.HORIZONTAL)\n self.addSiteButton = wx.Button(self.panel, label=\"Add a new site\")\n self.Bind(wx.EVT_BUTTON, self.on_addSiteButton, self.addSiteButton)\n hbox_one.Add(self.addSiteButton, flag=wx.RIGHT, border=10)\n if self.sample_window == 1:\n html_help = \"ErMagicSampleHelp1.html\"\n if self.sample_window > 1:\n html_help = \"ErMagicSampleHelp.html\"\n self.helpButton = wx.Button(self.panel, label=\"Help\")\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_helpButton(event, html_help), self.helpButton)\n hbox_one.Add(self.helpButton)\n\n hboxok = wx.BoxSizer(wx.HORIZONTAL)\n self.saveButton = wx.Button(self.panel, id=-1, label='Save')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_saveButton(event, self.samp_grid), self.saveButton)\n self.cancelButton = wx.Button(self.panel, wx.ID_CANCEL, '&Cancel')\n self.Bind(wx.EVT_BUTTON, self.on_cancelButton, self.cancelButton)\n self.continueButton = wx.Button(self.panel, id=-1, label='Save and continue')\n next_dia = self.InitSiteCheck if self.sample_window < 2 else self.InitLocCheck\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_continueButton(event, self.samp_grid, next_dia=next_dia), self.continueButton)\n self.backButton = wx.Button(self.panel, wx.ID_ANY, \"&Back\")\n previous_dia = self.InitSpecCheck if 
self.sample_window < 2 else self.InitSiteCheck\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_backButton(event, previous_dia=previous_dia), self.backButton)\n\n hboxok.Add(self.saveButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.cancelButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.continueButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.backButton)\n\n hboxgrid = pw.hbox_grid(self.panel, self.onDeleteRow, 'sample', self.grid)\n self.deleteRowButton = hboxgrid.deleteRowButton\n\n self.Bind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK, self.onLeftClickLabel, self.grid)\n\n ### Make Containers ###\n vbox = wx.BoxSizer(wx.VERTICAL)\n vbox.Add(step_label, flag=wx.ALIGN_LEFT|wx.TOP|wx.BOTTOM, border=20)\n\n vbox.Add(hbox_one, flag=wx.BOTTOM|wx.LEFT, border=10)\n vbox.Add(hboxok, flag=wx.BOTTOM|wx.LEFT, border=10)\n vbox.Add(hboxgrid, flag=wx.BOTTOM|wx.LEFT, border=10)\n vbox.Add(self.samp_grid, flag=wx.ALL, border=10) # using wx.EXPAND or not does not affect re-size problem\n vbox.AddSpacer(20)\n\n self.hbox_all = wx.BoxSizer(wx.HORIZONTAL)\n self.hbox_all.AddSpacer(20)\n self.hbox_all.Add(vbox)\n self.hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(self.hbox_all)\n #if sys.platform in ['win32', 'win64']:\n # self.panel.SetScrollbars(20, 20, 50, 50)\n self.hbox_all.Fit(self)\n\n self.Centre()\n self.Show()\n\n ## this combination may prevent a display error that (without the fix) only resolves on manually resizing the window\n self.panel.Refresh()\n self.samp_grid.ForceRefresh()\n self.panel.Refresh()\n self.Refresh()\n\n # this prevents display errors\n self.Hide()\n self.Show()", "response": "Initialize the main sample check dialog"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef InitSiteCheck(self):\n self.panel = wx.Panel(self, style=wx.SIMPLE_BORDER)\n\n text = \"\"\"Step 3:\nCheck that all sites are correctly named, and that they belong to the correct location.\nFill in the additional columns with controlled 
vocabularies.\nThe columns for class, lithology, and type can take multiple values in the form of a colon-delimited list.\nYou may use the drop-down menus to add as many values as needed in these columns.\n(see the help button for more details)\nnote: Changes to site_class, site_lithology, or site_type will overwrite er_samples.txt\nHowever, you will be able to edit sample_class, sample_lithology, and sample_type in step 4\n\n**Denotes controlled vocabulary\"\"\"\n label = wx.StaticText(self.panel, label=text)\n #self.Data_hierarchy = self.ErMagic.Data_hierarchy\n self.sites = sorted(self.er_magic_data.make_name_list(self.er_magic_data.sites))\n\n #for val in ['er_citation_names', 'er_location_name', 'er_site_name', 'site_class', 'site_lithology', 'site_type', 'site_definition', 'site_lat', 'site_lon']: #\n # try:\n # self.er_magic_data.headers['site']['er'][0].remove(val)\n # except ValueError:\n # pass\n\n self.grid_builder = grid_frame2.GridBuilder(self.er_magic_data, 'site',\n self.er_magic_data.headers, self.panel,\n 'location')\n\n self.site_grid = self.grid_builder.make_grid(incl_pmag=False)\n self.site_grid.InitUI()\n self.grid_builder.add_data_to_grid(self.site_grid, 'site', incl_pmag=False)\n self.grid = self.site_grid\n\n # populate site_definition as 's' by default if no value is provided (indicates that site is single, not composite)\n rows = self.site_grid.GetNumberRows()\n col = 6\n for row in range(rows):\n cell = self.site_grid.GetCellValue(row, col)\n if not cell:\n self.site_grid.SetCellValue(row, col, 's')\n\n # initialize all needed drop-down menus\n locations = sorted(self.er_magic_data.make_name_list(self.er_magic_data.locations))\n self.drop_down_menu = drop_down_menus.Menus(\"site\", self, self.site_grid, locations)\n\n ### Create Buttons ###\n hbox_one = wx.BoxSizer(wx.HORIZONTAL)\n self.addLocButton = wx.Button(self.panel, label=\"Add a new location\")\n\n self.Bind(wx.EVT_BUTTON, self.on_addLocButton, self.addLocButton)\n 
hbox_one.Add(self.addLocButton, flag=wx.RIGHT, border=10)\n\n self.helpButton = wx.Button(self.panel, label=\"Help\")\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_helpButton(event, \"ErMagicSiteHelp.html\"), self.helpButton)\n hbox_one.Add(self.helpButton)\n\n hboxok = wx.BoxSizer(wx.HORIZONTAL)\n self.saveButton = wx.Button(self.panel, id=-1, label='Save')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_saveButton(event, self.site_grid), self.saveButton)\n self.cancelButton = wx.Button(self.panel, wx.ID_CANCEL, '&Cancel')\n self.Bind(wx.EVT_BUTTON, self.on_cancelButton, self.cancelButton)\n self.continueButton = wx.Button(self.panel, id=-1, label='Save and continue')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_continueButton(event, self.site_grid, next_dia=self.InitSampCheck), self.continueButton)\n self.backButton = wx.Button(self.panel, wx.ID_ANY, \"&Back\")\n previous_dia = self.InitSampCheck\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_backButton(event, previous_dia=previous_dia), self.backButton)\n\n hboxok.Add(self.saveButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.cancelButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.continueButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.backButton)\n\n #\n hboxgrid = pw.hbox_grid(self.panel, self.onDeleteRow, 'site', self.grid)\n self.deleteRowButton = hboxgrid.deleteRowButton\n\n self.Bind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK, self.onLeftClickLabel, self.grid)\n\n\n ### Make Containers ###\n vbox = wx.BoxSizer(wx.VERTICAL)\n vbox.Add(label, flag=wx.ALIGN_CENTER|wx.BOTTOM|wx.TOP, border=20)\n vbox.Add(hbox_one, flag=wx.BOTTOM|wx.LEFT, border=10)\n vbox.Add(hboxok, flag=wx.BOTTOM|wx.LEFT, border=10)\n vbox.Add(hboxgrid, flag=wx.BOTTOM|wx.LEFT, border=10)\n vbox.Add(self.site_grid, flag=wx.ALL|wx.EXPAND, border=10) # EXPAND ??\n vbox.AddSpacer(20)\n\n self.hbox_all = wx.BoxSizer(wx.HORIZONTAL)\n self.hbox_all.AddSpacer(20)\n self.hbox_all.Add(vbox)\n self.hbox_all.AddSpacer(20)\n\n 
self.panel.SetSizer(self.hbox_all)\n #if sys.platform in ['win32', 'win64']:\n # self.panel.SetScrollbars(20, 20, 50, 50)\n self.hbox_all.Fit(self)\n self.Centre()\n self.Show()\n # this combination prevents a display error that (without the fix) only resolves on manually resizing the window\n self.site_grid.ForceRefresh()\n self.panel.Refresh()\n self.Hide()\n self.Show()", "response": "Initialize the site check dialog"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef InitLocCheck(self):\n self.panel = wx.Panel(self, style=wx.SIMPLE_BORDER)\n\n text = \"\"\"Step 5:\nCheck that locations are correctly named.\nFill in any blank cells using controlled vocabularies.\n(See Help button for details)\n\n** Denotes controlled vocabulary\"\"\"\n label = wx.StaticText(self.panel, label=text)\n #self.Data_hierarchy = self.ErMagic.Data_hierarchy\n self.locations = self.er_magic_data.locations\n #\n if not self.er_magic_data.locations:\n msg = "You have no data in er_locations, so we are skipping step 5.\\n Note that location names must be entered at the measurements level, so you may need to re-import your data, or you can add a location in step 3"\n dlg = wx.MessageDialog(None, caption=\"Message:\", message=msg, style=wx.OK|wx.ICON_INFORMATION)\n dlg.ShowModal()\n dlg.Destroy()\n self.panel.Destroy()\n self.InitAgeCheck()\n return\n\n self.grid_builder = grid_frame2.GridBuilder(self.er_magic_data, 'location',\n self.er_magic_data.headers, self.panel)\n self.loc_grid = self.grid_builder.make_grid(incl_pmag=False)\n self.loc_grid.InitUI()\n self.grid_builder.add_data_to_grid(self.loc_grid, 'location', incl_pmag=False)\n self.grid = self.loc_grid\n # initialize all needed drop-down menus\n self.drop_down_menu = drop_down_menus.Menus(\"location\", self,\n self.loc_grid, None)\n\n # need to find max/min lat/lon here IF they were added in the previous grid\n sites = self.er_magic_data.sites\n location_lat_lon = 
self.er_magic_data.get_min_max_lat_lon(self.er_magic_data.locations)\n\n col_names = ('location_begin_lat', 'location_end_lat', 'location_begin_lon', 'location_end_lon')\n col_inds = [self.grid.col_labels.index(name) for name in col_names]\n col_info = list(zip(col_names, col_inds))\n for loc in self.er_magic_data.locations:\n row_ind = self.grid.row_labels.index(loc.name)\n for col_name, col_ind in col_info:\n info = location_lat_lon[loc.name][col_name]\n self.grid.SetCellValue(row_ind, col_ind, str(info))\n\n ### Create Buttons ###\n hbox_one = wx.BoxSizer(wx.HORIZONTAL)\n self.helpButton = wx.Button(self.panel, label=\"Help\")\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_helpButton(event, \"ErMagicLocationHelp.html\"), self.helpButton)\n hbox_one.Add(self.helpButton)\n\n hboxok = wx.BoxSizer(wx.HORIZONTAL)\n self.saveButton = wx.Button(self.panel, id=-1, label='Save')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_saveButton(event, self.loc_grid), self.saveButton)\n self.cancelButton = wx.Button(self.panel, wx.ID_CANCEL, '&Cancel')\n self.Bind(wx.EVT_BUTTON, self.on_cancelButton, self.cancelButton)\n self.continueButton = wx.Button(self.panel, id=-1, label='Save and continue')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_continueButton(event, self.loc_grid, next_dia=self.InitAgeCheck), self.continueButton)\n self.backButton = wx.Button(self.panel, wx.ID_ANY, \"&Back\")\n previous_dia = self.InitSampCheck\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_backButton(event, previous_dia, current_dia=self.InitLocCheck), self.backButton)\n\n hboxok.Add(self.saveButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.cancelButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.continueButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.backButton)\n\n #\n hboxgrid = pw.hbox_grid(self.panel, self.onDeleteRow, 'location', self.grid)\n self.deleteRowButton = hboxgrid.deleteRowButton\n\n self.Bind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK, self.onLeftClickLabel, 
self.grid)\n\n\n ### Make Containers ###\n vbox = wx.BoxSizer(wx.VERTICAL)\n vbox.Add(label, flag=wx.ALIGN_CENTER|wx.TOP|wx.BOTTOM, border=20)\n vbox.Add(hbox_one, flag=wx.BOTTOM|wx.ALIGN_LEFT, border=10)\n vbox.Add(hboxok, flag=wx.BOTTOM|wx.ALIGN_LEFT, border=10)\n vbox.Add(hboxgrid, flag=wx.BOTTOM|wx.ALIGN_LEFT, border=10)\n vbox.Add(self.loc_grid, flag=wx.TOP|wx.BOTTOM, border=10)\n vbox.AddSpacer(20)\n\n self.hbox_all = wx.BoxSizer(wx.HORIZONTAL)\n self.hbox_all.AddSpacer(20)\n self.hbox_all.Add(vbox)\n self.hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(self.hbox_all)\n #if sys.platform in ['win32', 'win64']:\n # self.panel.SetScrollbars(20, 20, 50, 50)\n self.hbox_all.Fit(self)\n self.Centre()\n self.Show()\n self.Hide()\n self.Show()", "response": "Initialize locations and locations grid"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef InitAgeCheck(self):\n self.panel = wx.Panel(self, style=wx.SIMPLE_BORDER)\n\n text = \"\"\"Step 6:\nFill in or correct any cells with information about ages.\nThe column for magic_method_codes can take multiple values in the form of a colon-delimited list.\nYou may use the drop-down menus to add as many values as needed in these columns.\n(See Help button for details)\n\n**Denotes controlled vocabulary \"\"\"\n label = wx.StaticText(self.panel, label=text)\n\n self.items = self.er_magic_data.data_lists[self.er_magic_data.age_type][0]\n\n self.grid_builder = grid_frame2.GridBuilder(self.er_magic_data, 'age',\n self.er_magic_data.headers, self.panel, 'location')\n self.age_grid = self.grid_builder.make_grid(incl_pmag=False)\n self.age_grid.InitUI()\n self.grid_builder.add_data_to_grid(self.age_grid, 'age', incl_pmag=False)\n self.grid_builder.add_age_data_to_grid()\n\n self.grid = self.age_grid\n #\n # make it impossible to edit the 1st and 3rd columns\n for row in range(self.age_grid.GetNumberRows()):\n for col in (0, 2):\n self.age_grid.SetReadOnly(row, col, True)\n\n # 
initialize all needed drop-down menus\n self.drop_down_menu = drop_down_menus.Menus(\"age\", self, self.age_grid, None)\n\n # re-set first column name\n self.age_grid.SetColLabelValue(0, 'er_site_name')\n\n ### Create Buttons ###\n hbox_one = wx.BoxSizer(wx.HORIZONTAL)\n self.helpButton = wx.Button(self.panel, label=\"Help\")\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_helpButton(event, \"ErMagicAgeHelp.html\"), self.helpButton)\n hbox_one.Add(self.helpButton)\n\n hboxok = wx.BoxSizer(wx.HORIZONTAL)\n self.saveButton = wx.Button(self.panel, id=-1, label='Save')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_saveButton(event, self.age_grid), self.saveButton)\n self.cancelButton = wx.Button(self.panel, wx.ID_CANCEL, '&Cancel')\n self.Bind(wx.EVT_BUTTON, self.on_cancelButton, self.cancelButton)\n self.continueButton = wx.Button(self.panel, id=-1, label='Save and continue')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_continueButton(event, self.age_grid, next_dia=None), self.continueButton)\n self.backButton = wx.Button(self.panel, wx.ID_ANY, \"&Back\")\n previous_dia = self.InitLocCheck\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_backButton(event, previous_dia), self.backButton)\n\n self.panel.Bind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK, self.onLeftClickLabel, self.grid)\n\n hboxok.Add(self.saveButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.cancelButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.continueButton, flag=wx.RIGHT, border=10)\n hboxok.Add(self.backButton)\n\n ### Make Containers ###\n vbox = wx.BoxSizer(wx.VERTICAL)\n vbox.Add(label, flag=wx.ALIGN_CENTER|wx.TOP|wx.BOTTOM, border=20)#, flag=wx.ALIGN_LEFT|wx.BOTTOM, border=20)\n vbox.Add(hbox_one, flag=wx.BOTTOM, border=10)\n vbox.Add(hboxok, flag=wx.BOTTOM, border=10)\n vbox.Add(self.age_grid, flag=wx.TOP|wx.BOTTOM, border=10) # EXPAND ??\n vbox.AddSpacer(20)\n\n self.hbox_all = wx.BoxSizer(wx.HORIZONTAL)\n self.hbox_all.AddSpacer(20)\n self.hbox_all.Add(vbox)\n 
self.hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(self.hbox_all)\n #if sys.platform in ['win32', 'win64']:\n # self.panel.SetScrollbars(20, 20, 50, 50)\n self.hbox_all.Fit(self)\n self.Centre()\n self.Show()\n self.Hide()\n self.Show()", "response": "Initialize the age check dialog"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndisplays a tooltip over any cell in a certain column", "response": "def onMouseOver(self, event, grid):\n \"\"\"\n Displays a tooltip over any cell in a certain column\n \"\"\"\n x, y = grid.CalcUnscrolledPosition(event.GetX(), event.GetY())\n coords = grid.XYToCell(x, y)\n col = coords[1]\n row = coords[0]\n\n # creates tooltip message for cells with long values\n # note: this works with EPD for windows, and modern wxPython, but not with Canopy Python\n msg = grid.GetCellValue(row, col)\n if len(msg) > 15:\n event.GetEventObject().SetToolTipString(msg)\n else:\n event.GetEventObject().SetToolTipString('')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef on_helpButton(self, event, page=None):\n # for use on the command line:\n path = find_pmag_dir.get_pmag_dir()\n # for use with pyinstaller\n #path = self.main_frame.resource_dir\n help_page = os.path.join(path, 'dialogs', 'help_files', page)\n # if using with py2app, the directory structure is flat,\n # so check to see where the resource actually is\n if not os.path.exists(help_page):\n help_page = os.path.join(path, 'help_files', page)\n html_frame = pw.HtmlFrame(self, page=help_page)\n html_frame.Show()", "response": "shows html help page"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npulling up next dialog, if there is one. 
gets any updated information from the current grid and runs ErMagicBuilder", "response": "def on_continueButton(self, event, grid, next_dia=None):\n \"\"\"\n pulls up next dialog, if there is one.\n gets any updated information from the current grid and runs ErMagicBuilder\n \"\"\"\n #wait = wx.BusyInfo(\"Please wait, working...\")\n # unhighlight selected columns, etc.\n if self.drop_down_menu:\n self.drop_down_menu.clean_up()\n\n # remove '**' from col names\n #self.remove_starred_labels(grid)\n grid.remove_starred_labels()\n\n grid.SaveEditControlValue() # locks in value in cell currently edited\n grid_name = str(grid.GetName())\n\n # check that all required data are present\n validation_errors = self.validate(grid)\n if validation_errors:\n result = pw.warning_with_override(\"You are missing required data in these columns: {}\\nAre you sure you want to continue without these data?\".format(', '.join(validation_errors)))\n if result == wx.ID_YES:\n pass\n else:\n return False\n\n if grid.changes:\n self.onSave(grid)\n\n self.deleteRowButton = None\n #self.panel.Destroy() # calling Destroy here breaks with Anaconda Python (segfault)\n\n # make sure that specimens get propagated with\n # any default sample info\n if next_dia == self.InitLocCheck:\n if self.er_magic_data.specimens:\n for spec in self.er_magic_data.specimens:\n spec.propagate_data()\n\n if next_dia:\n wait = wx.BusyInfo(\"Please wait, working...\")\n wx.SafeYield()\n wx.CallAfter(self.panel.Destroy) # no segfault here!\n next_dia()\n # need to wait to process the resize:\n event = wx.PyCommandEvent(wx.EVT_SIZE.typeId, self.GetId())\n wx.CallAfter(self.GetEventHandler().ProcessEvent, event)\n del wait\n else:\n wait = wx.BusyInfo(\"Please wait, writing data to files...\")\n wx.SafeYield()\n # actually write data:\n self.er_magic_data.write_files()\n self.Destroy()\n del wait"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndeleting the selected object from both the data 
model and the grid.", "response": "def onDeleteRow(self, event, data_type):\n \"\"\"\n On button click, remove relevant object from both the data model and the grid.\n \"\"\"\n ancestry = self.er_magic_data.ancestry\n child_type = ancestry[ancestry.index(data_type) - 1]\n names = [self.grid.GetCellValue(row, 0) for row in self.selected_rows]\n if data_type == 'site':\n how_to_fix = 'Make sure to select a new site for each orphaned sample in the next step'\n else:\n how_to_fix = 'Go back a step and select a new {} for each orphaned {}'.format(data_type, child_type)\n\n orphans = []\n for name in names:\n row = self.grid.row_labels.index(name)\n orphan = self.er_magic_data.delete_methods[data_type](name)\n if orphan:\n orphans.extend(orphan)\n self.grid.remove_row(row)\n if orphans:\n orphan_names = self.er_magic_data.make_name_list(orphans)\n pw.simple_warning('You have deleted:\\n\\n {}\\n\\nthe parent(s) of {}(s):\\n\\n {}\\n\\n{}'.format(', '.join(names), child_type, ', '.join(orphan_names), how_to_fix))\n\n self.selected_rows = set()\n\n # update grid and data model\n self.update_grid(self.grid)#, grids[grid_name])\n\n self.grid.Refresh()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nhighlighting or unhighlight a row for possible deletion.", "response": "def onSelectRow(self, event):\n \"\"\"\n Highlight or unhighlight a row for possible deletion.\n \"\"\"\n grid = self.grid\n row = event.Row\n default = (255, 255, 255, 255)\n highlight = (191, 216, 216, 255)\n cell_color = grid.GetCellBackgroundColour(row, 0)\n attr = wx.grid.GridCellAttr()\n if cell_color == default:\n attr.SetBackgroundColour(highlight)\n self.selected_rows.add(row)\n else:\n attr.SetBackgroundColour(default)\n try:\n self.selected_rows.remove(row)\n except KeyError:\n pass\n if self.selected_rows and self.deleteRowButton:\n self.deleteRowButton.Enable()\n else:\n self.deleteRowButton.Disable()\n grid.SetRowAttr(row, attr)\n grid.Refresh()"} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function to\nupdate the internal data structures of the current level of the grid", "response": "def update_grid(self, grid):\n \"\"\"\n takes in wxPython grid and ErMagic data object to be updated\n \"\"\"\n data_methods = {'specimen': self.er_magic_data.change_specimen,\n 'sample': self.er_magic_data.change_sample,\n 'site': self.er_magic_data.change_site,\n 'location': self.er_magic_data.change_location,\n 'age': self.er_magic_data.change_age}\n\n grid_name = str(grid.GetName())\n\n cols = list(range(grid.GetNumberCols()))\n\n col_labels = []\n for col in cols:\n col_labels.append(grid.GetColLabelValue(col))\n\n for row in grid.changes: # go through changes and update data structures\n if row == -1:\n continue\n else:\n data_dict = {}\n for num, label in enumerate(col_labels):\n if label:\n data_dict[str(label)] = str(grid.GetCellValue(row, num))\n new_name = str(grid.GetCellValue(row, 0))\n old_name = self.temp_data[grid_name][row]\n data_methods[grid_name](new_name, old_name, data_dict)\n\n grid.changes = False"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef onSave(self, grid):#, age_data_type='site'):\n # deselect column, including remove 'EDIT ALL' label\n if self.drop_down_menu:\n self.drop_down_menu.clean_up()\n\n # save all changes to er_magic data object\n self.grid_builder.save_grid_data()\n\n # don't actually write data in this step (time-consuming)\n # instead, write to files when user is done editing\n #self.er_magic_data.write_files()\n\n wx.MessageBox('Saved!', 'Info',\n style=wx.OK | wx.ICON_INFORMATION)", "response": "Save the data in the object"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n dir_path='.'\n meas_file='magic_measurements.txt'\n samp_file=\"er_samples.txt\"\n out_file='magic_measurements.txt'\n if '-h' in sys.argv:\n print(main.__doc__)\n 
sys.exit()\n if '-WD' in sys.argv:\n ind = sys.argv.index('-WD')\n dir_path=sys.argv[ind+1]\n if '-f' in sys.argv:\n ind = sys.argv.index('-f')\n meas_file=sys.argv[ind+1]\n if '-fsa' in sys.argv:\n ind = sys.argv.index('-fsa')\n samp_file=sys.argv[ind+1]\n if '-F' in sys.argv:\n ind = sys.argv.index('-F')\n out_file=sys.argv[ind+1]\n # read in measurements file\n meas_file=dir_path+'/'+meas_file\n out_file=dir_path+'/'+out_file\n samp_file=dir_path+'/'+samp_file\n data,file_type=pmag.magic_read(meas_file)\n samps,file_type=pmag.magic_read(samp_file)\n MeasRecs=[]\n sampnames,sflag=[],0\n for rec in data:\n for samp in samps:\n if samp['er_sample_name'].lower()==rec['er_sample_name'].lower():\n if samp['er_sample_name'] not in sampnames:sampnames.append(samp['er_sample_name'].lower())\n rec['er_site_name']=samp['er_site_name']\n rec['er_location_name']=samp['er_location_name']\n MeasRecs.append(rec)\n break\n if rec['er_sample_name'].lower() not in sampnames:\n sampnames.append(rec['er_sample_name'].lower())\n sflag=1\n SampRec={}\n for key in list(samps[0].keys()):SampRec[key]=\"\"\n SampRec['er_sample_name']=rec['er_sample_name']\n SampRec['er_citation_names']=\"This study\"\n SampRec['er_site_name']='MISSING'\n SampRec['er_location_name']='MISSING'\n SampRec['sample_description']='record added by update_measurements - edit as needed'\n samps.append(SampRec)\n print(rec['er_sample_name'],' missing from er_samples.txt file - edit orient.txt file and re-import')\n rec['er_site_name']='MISSING'\n rec['er_location_name']='MISSING'\n MeasRecs.append(rec)\n pmag.magic_write(out_file,MeasRecs,'magic_measurements')\n print(\"updated measurements file stored in \", out_file)\n if sflag==1:\n pmag.magic_write(samp_file,samps,'er_samples')\n print(\"updated sample file stored in \", samp_file)", "response": "NAME update_measurements. 
py merges er_site_name and er_location_name from er_samples.txt into a magic_measurements file, adding placeholder sample records for any samples missing from er_samples.txt."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nnaming ldeo_magic.py DESCRIPTION converts LDEO format files to magic_measurements format files SYNTAX ldeo_magic.py [command line options] OPTIONS -h: prints the help message and quits. -usr USER: identify user, default is \"\" -f FILE: specify .ldeo format input file, required -F FILE: specify output file, default is magic_measurements.txt -Fsy: specify er_synthetics file, default is er_synthetics.txt -LP [colon delimited list of protocols, include all that apply] AF: af demag T: thermal including thellier but not trm acquisition S: Shaw method I: IRM (acquisition) N: NRM only TRM: trm acquisition ANI: anisotropy experiment D: double AF demag G: triple AF demag (GRM protocol) -V [1,2,3] units of IRM field in volts using ASC coil #1,2 or 3 -spc NUM : specify number of characters to designate a specimen, default = 0 -loc LOCNAME : specify location/study name, must have either LOCNAME or SAMPFILE or be a synthetic -syn INST TYPE: sets these specimens as synthetics created at institution INST and of type TYPE -ins INST : specify which demag instrument was used (e.g., SIO-Suzy or SIO-Odette), default is \"\" -dc B PHI THETA: dc lab field (in micro tesla) and phi,theta, default is none NB: use PHI, THETA = -1 -1 to signal that it changes, i.e. in anisotropy experiment -ac B : peak AF field (in mT) for ARM acquisition, default is none -ARM_dc # default value is 50e-6 -ARM_temp # default is 600 C -ncn NCON: specify naming convention: default is #1 below -A: don't average replicate measurements Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. 
[default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name same as sample [6] site is entered under a separate column -- NOT CURRENTLY SUPPORTED [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY NB: all others you will have to customize yourself or e-mail ltauxe@ucsd.edu for help. [8] synthetic - has no site name INPUT Best to put separate experiments (all AF, thermal, thellier, trm acquisition, Shaw, etc.) in separate .mag files (e.g. af.mag, thermal.mag, etc.) Format of LDEO files: isaf2.fix LAT: .00 LON: .00 ID TREAT I CD J CDECL CINCL GDECL GINCL BDECL BINCL SUSC M/V ________________________________________________________________________________ is031c2 .0 SD 0 461.600 163.9 17.5 337.1 74.5 319.1 74.4 .0 .0 ID: specimen name TREAT: treatment step I: Instrument CD: Circular standard deviation J: intensity. 
assumed to be total moment in 10^-4 (emu) CDECL: Declination in specimen coordinate system CINCL: Inclination in specimen coordinate system GDECL: Declination in geographic coordinate system GINCL: Inclination in geographic coordinate system BDECL: Declination in bedding adjusted coordinate system BINCL: Inclination in bedding adjusted coordinate system SUSC: magnetic susceptibility (in micro SI) M/V: mass or volume for normalizing (0 won't normalize)", "response": "def main(command_line=True, **kwargs):\n \"\"\"\n NAME\n ldeo_magic.py\n \n DESCRIPTION\n converts LDEO format files to magic_measurements format files\n\n SYNTAX\n ldeo_magic.py [command line options]\n\n OPTIONS\n -h: prints the help message and quits.\n -usr USER: identify user, default is \"\"\n -f FILE: specify .ldeo format input file, required\n -F FILE: specify output file, default is magic_measurements.txt\n -Fsy: specify er_synthetics file, default is er_synthetics.txt\n -LP [colon delimited list of protocols, include all that apply]\n AF: af demag\n T: thermal including thellier but not trm acquisition\n S: Shaw method\n I: IRM (acquisition)\n N: NRM only\n TRM: trm acquisition\n ANI: anisotropy experiment\n D: double AF demag\n G: triple AF demag (GRM protocol)\n -V [1,2,3] units of IRM field in volts using ASC coil #1,2 or 3\n -spc NUM : specify number of characters to designate a specimen, default = 0\n -loc LOCNAME : specify location/study name, must have either LOCNAME or SAMPFILE or be a synthetic\n -syn INST TYPE: sets these specimens as synthetics created at institution INST and of type TYPE\n -ins INST : specify which demag instrument was used (e.g., SIO-Suzy or SIO-Odette), default is \"\"\n -dc B PHI THETA: dc lab field (in micro tesla) and phi,theta, default is none\n NB: use PHI, THETA = -1 -1 to signal that it changes, i.e. 
in anisotropy experiment\n -ac B : peak AF field (in mT) for ARM acquisition, default is none\n\n -ARM_dc # default value is 50e-6\n -ARM_temp # default is 600c\n\n -ncn NCON: specify naming convention: default is #1 below\n -A: don't average replicate measurements\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. [default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name same as sample\n [6] site is entered under a separate column -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY\n NB: all others you will have to customize your self\n or e-mail ltauxe@ucsd.edu for help.\n \n [8] synthetic - has no site name\n INPUT\n Best to put separate experiments (all AF, thermal, thellier, trm aquisition, Shaw, etc.) in \n seperate .mag files (eg. af.mag, thermal.mag, etc.)\n\n Format of LDEO files: \nisaf2.fix \nLAT: .00 LON: .00\n ID TREAT I CD J CDECL CINCL GDECL GINCL BDECL BINCL SUSC M/V\n________________________________________________________________________________\nis031c2 .0 SD 0 461.600 163.9 17.5 337.1 74.5 319.1 74.4 .0 .0\n \n ID: specimen name\n TREAT: treatment step\n I: Instrument \n CD: Circular standard devation\n J: intensity. 
assumed to be total moment in 10^-4 (emu)\n CDECL: Declination in specimen coordinate system\n CINCL: Declination in specimen coordinate system\n GDECL: Declination in geographic coordinate system\n GINCL: Declination in geographic coordinate system\n BDECL: Declination in bedding adjusted coordinate system\n BINCL: Declination in bedding adjusted coordinate system\n SUSC: magnetic susceptibility (in micro SI)a\n M/V: mass or volume for nomalizing (0 won't normalize)\n \n \"\"\"\n# initialize some stuff\n noave=0\n codelist = ''\n methcode,inst=\"LP-NO\",\"\"\n phi,theta,peakfield,labfield=0,0,0,0\n pTRM,MD,samp_con,Z=0,0,'1',1\n dec=[315,225,180,135,45,90,270,270,270,90,180,180,0,0,0]\n inc=[0,0,0,0,0,-45,-45,0,45,45,45,-45,-90,-45,45]\n tdec=[0,90,0,180,270,0,0,90,0]\n tinc=[0,0,90,0,0,-90,0,0,90]\n missing=1\n demag=\"N\"\n er_location_name=\"\"\n citation='This study'\n args=sys.argv\n fmt='old'\n syn=0\n synfile='er_synthetics.txt'\n magfile = ''\n trm=0\n irm=0\n specnum=0\n coil=\"\"\n arm_labfield = 50e-6\n trm_peakT = 600+273\n #\n # get command line arguments\n #\n\n meas_file=\"magic_measurements.txt\"\n user=\"\"\n if command_line:\n if \"-h\" in args:\n print(main.__doc__)\n return False\n if \"-usr\" in args:\n ind=args.index(\"-usr\")\n user=args[ind+1]\n if '-F' in args:\n ind=args.index(\"-F\")\n meas_file=args[ind+1]\n if '-Fsy' in args:\n ind=args.index(\"-Fsy\")\n synfile=args[ind+1]\n if '-f' in args:\n ind=args.index(\"-f\")\n magfile=args[ind+1]\n if \"-dc\" in args:\n ind=args.index(\"-dc\")\n labfield=float(args[ind+1])*1e-6\n phi=float(args[ind+2])\n theta=float(args[ind+3])\n if \"-ac\" in args:\n ind=args.index(\"-ac\")\n peakfield=float(args[ind+1])*1e-3\n if \"-spc\" in args:\n ind=args.index(\"-spc\")\n specnum=int(args[ind+1])\n if \"-loc\" in args:\n ind=args.index(\"-loc\")\n er_location_name=args[ind+1]\n if '-syn' in args:\n syn=1\n ind=args.index(\"-syn\")\n institution=args[ind+1]\n syntype=args[ind+2]\n if '-fsy' in args:\n 
ind=args.index(\"-fsy\")\n synfile=args[ind+1]\n if \"-ins\" in args:\n ind=args.index(\"-ins\")\n inst=args[ind+1]\n if \"-A\" in args: noave=1\n if \"-ncn\" in args:\n ind=args.index(\"-ncn\")\n samp_con=sys.argv[ind+1]\n if '-LP' in args:\n ind=args.index(\"-LP\")\n codelist=args[ind+1]\n if \"-V\" in args:\n ind=args.index(\"-V\")\n coil=args[ind+1]\n if '-ARM_dc' in args:\n ind = args.index(\"-ARM_dc\")\n arm_labfield = args[ind+1]\n if '-ARM_temp' in args:\n ind = args.index('-ARM_temp')\n trm_peakT = args[ind+1]\n\n\n if not command_line:\n user = kwargs.get('user', '')\n meas_file = kwargs.get('meas_file', 'magic_measurements.txt')\n synfile = kwargs.get('synfile', 'er_synthetics.txt')\n # rm samp_file = kwargs.get('samp_file', '')\n magfile = kwargs.get('magfile', '')\n labfield = int(kwargs.get('labfield', 0)) *1e-6\n phi = int(kwargs.get('phi', 0))\n theta = int(kwargs.get('theta', 0))\n peakfield = int(kwargs.get('peakfield', 0))*1e-3\n specnum = int(kwargs.get('specnum', 0))\n er_location_name = kwargs.get('er_location_name', '')\n # rm samp_infile = kwargs.get('samp_infile', '')\n syn = kwargs.get('syn', 0) \n institution = kwargs.get('institution', '')\n syntype = kwargs.get('syntype', '')\n inst = kwargs.get('inst', '')\n noave = kwargs.get('noave', 0) # 0 means \"do average\", is default\n samp_con = kwargs.get('samp_con', '1')\n codelist = kwargs.get('codelist', '')\n coil = kwargs.get('coil', '')\n arm_labfield = kwargs.get('arm_labfield', 50e-6)\n trm_peakT = kwargs.get('trm_peakT', 600+273)\n\n\n # format/organize variables\n if magfile:\n try:\n input=open(magfile,'r')\n except:\n print(\"bad mag file name\")\n return False, \"bad mag file name\"\n else: \n print(\"mag_file field is required option\")\n print(main.__doc__)\n return False, \"mag_file field is required option\"\n \n if specnum!=0:specnum=-specnum\n\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\"naming convention option [4] must be in form 4-Z where Z is an 
integer\")\n return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z=samp_con.split(\"-\")[1]\n samp_con=\"4\"\n if \"7\" in samp_con:\n if \"-\" not in samp_con:\n print(\"naming convention option [7] must be in form 7-Z where Z is an integer\")\n return False, \"naming convention option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z=samp_con.split(\"-\")[1]\n samp_con=\"7\"\n\n codes=codelist.split(':')\n if \"AF\" in codes:\n demag='AF' \n if not labfield: methcode=\"LT-AF-Z\"\n if labfield: methcode=\"LT-AF-I\"\n if \"T\" in codes:\n demag=\"T\"\n if not labfield: methcode=\"LT-T-Z\"\n if labfield: methcode=\"LT-T-I\"\n if \"I\" in codes:\n methcode=\"LP-IRM\"\n irmunits=\"mT\"\n if \"S\" in codes: \n demag=\"S\"\n methcode=\"LP-PI-TRM:LP-PI-ALT-AFARM\"\n trm_labfield=labfield\n # should use arm_labfield and trm_peakT as well, but these values are currently never asked for\n if \"G\" in codes: methcode=\"LT-AF-G\"\n if \"D\" in codes: methcode=\"LT-AF-D\"\n if \"TRM\" in codes: \n demag=\"T\"\n trm=1\n\n if coil:\n methcode=\"LP-IRM\"\n irmunits=\"V\"\n if coil not in [\"1\",\"2\",\"3\"]:\n print(main.__doc__)\n print('not a valid coil specification')\n return False, 'not a valid coil specification'\n\n if demag==\"T\" and \"ANI\" in codes:\n methcode=\"LP-AN-TRM\"\n if demag==\"AF\" and \"ANI\" in codes:\n methcode=\"LP-AN-ARM\"\n if labfield==0: labfield=50e-6\n if peakfield==0: peakfield=.180\n SynRecs,MagRecs=[],[]\n version_num=pmag.get_version()\n if 1: # ldeo file format\n#\n# find start of data:\n#\n DIspec=[]\n Data,k=input.readlines(),0\n for k in range(len(Data)):\n rec=Data[k].split()\n if rec[0][0]==\"_\" or rec[0][0:2]==\"!_\":\n break\n start=k+1\n for k in range(start,len(Data)):\n rec=Data[k].split()\n if len(rec)>0:\n MagRec={}\n MagRec[\"treatment_temp\"]='%8.3e' % (273) # room temp in kelvin\n MagRec[\"measurement_temp\"]='%8.3e' % (273) # room temp in kelvin\n 
MagRec[\"treatment_ac_field\"]='0'\n MagRec[\"treatment_dc_field\"]='0'\n MagRec[\"treatment_dc_field_phi\"]='0'\n MagRec[\"treatment_dc_field_theta\"]='0'\n meas_type=\"LT-NO\"\n MagRec[\"measurement_flag\"]='g'\n MagRec[\"measurement_standard\"]='u'\n MagRec[\"measurement_number\"]='1'\n MagRec[\"er_specimen_name\"]=rec[0]\n if specnum!=0:\n MagRec[\"er_sample_name\"]=rec[0][:specnum]\n else:\n MagRec[\"er_sample_name\"]=rec[0]\n site=pmag.parse_site(MagRec['er_sample_name'],samp_con,Z)\n MagRec[\"er_site_name\"]=site\n MagRec[\"er_location_name\"]=er_location_name\n MagRec[\"measurement_csd\"]=rec[3]\n MagRec[\"measurement_magn_moment\"]='%10.3e'% (float(rec[4])*1e-7) # moment in Am^2 (from 10^-4 emu)\n#\n #if samp_file!=\"\" and MagRec[\"er_sample_name\"] not in Samps: # create er_samples.txt file with these data \n # cdec,cinc=float(rec[5]),float(rec[6])\n # gdec,ginc=float(rec[7]),float(rec[8])\n # az,pl=pmag.get_azpl(cdec,cinc,gdec,ginc)\n # bdec,binc=float(rec[9]),float(rec[10])\n # if rec[7]!=rec[9] and rec[6]!=rec[8]:\n # dipdir,dip=pmag.get_tilt(gdec,ginc,bdec,binc)\n # else:\n # dipdir,dip=0,0\n # ErSampRec={}\n # ErSampRec['er_location_name']=MagRec['er_location_name']\n # ErSampRec['er_sample_name']=MagRec['er_sample_name']\n # ErSampRec['er_site_name']=MagRec['er_site_name']\n # ErSampRec['sample_azimuth']='%7.1f'%(az)\n # ErSampRec['sample_dip']='%7.1f'%(pl)\n # ErSampRec['sample_bed_dip_direction']='%7.1f'%(dipdir)\n # ErSampRec['sample_bed_dip']='%7.1f'%(dip)\n # ErSampRec['sample_description']='az,pl,dip_dir and dip recalculated from [c,g,b][dec,inc] in ldeo file'\n # ErSampRec['magic_method_codes']='SO-REC'\n # ErSamps.append(ErSampRec)\n # Samps.append(ErSampRec['er_sample_name'])\n MagRec[\"measurement_dec\"]=rec[5]\n MagRec[\"measurement_inc\"]=rec[6]\n MagRec[\"measurement_chi\"]='%10.3e'%(float(rec[11])*1e-5)#convert to SI (assume Bartington, 10-5 SI)\n #MagRec[\"magic_instrument_codes\"]=rec[2]\n #MagRec[\"er_analyst_mail_names\"]=\"\"\n 
MagRec[\"er_citation_names\"]=\"This study\"\n MagRec[\"magic_method_codes\"]=meas_type\n if demag==\"AF\":\n if methcode != \"LP-AN-ARM\":\n MagRec[\"treatment_ac_field\"]='%8.3e' %(float(rec[1])*1e-3) # peak field in tesla\n meas_type=\"LT-AF-Z\"\n MagRec[\"treatment_dc_field\"]='0'\n else: # AARM experiment\n treat=rec[1].split('.') # parse treatment step (position.infield flag) before first use\n if len(treat)==1:treat.append('0')\n if treat[1][0]=='0':\n meas_type=\"LT-AF-Z\"\n MagRec[\"treatment_ac_field\"]='%8.3e' %(peakfield) # peak field in tesla\n else:\n meas_type=\"LT-AF-I\"\n ipos=int(treat[0])-1\n MagRec[\"treatment_dc_field_phi\"]='%7.1f' %(dec[ipos])\n MagRec[\"treatment_dc_field_theta\"]='%7.1f'% (inc[ipos])\n MagRec[\"treatment_dc_field\"]='%8.3e'%(labfield)\n MagRec[\"treatment_ac_field\"]='%8.3e' %(peakfield) # peak field in tesla\n elif demag==\"T\":\n if rec[1][0]==\".\":rec[1]=\"0\"+rec[1]\n treat=rec[1].split('.')\n if len(treat)==1:treat.append('0')\n meas_type=\"LT-T-Z\"\n MagRec[\"treatment_temp\"]='%8.3e' % (float(treat[0])+273.) 
# temp in kelvin\n if trm==0: # demag=T and not trmaq\n if treat[1][0]=='0':\n meas_type=\"LT-T-Z\"\n else: \n MagRec[\"treatment_dc_field\"]='%8.3e' % (labfield) # labfield in tesla (convert from microT)\n MagRec[\"treatment_dc_field_phi\"]='%7.1f' % (phi) # labfield phi\n MagRec[\"treatment_dc_field_theta\"]='%7.1f' % (theta) # labfield theta\n if treat[1][0]=='1':meas_type=\"LT-T-I\" # in-field thermal step\n if treat[1][0]=='2':\n meas_type=\"LT-PTRM-I\" # pTRM check\n pTRM=1\n if treat[1][0]=='3':\n MagRec[\"treatment_dc_field\"]='0' # this is a zero field step\n meas_type=\"LT-PTRM-MD\" # pTRM tail check\n else: \n meas_type=\"LT-T-I\" # trm acquisition experiment\n MagRec['magic_method_codes']=meas_type\n MagRecs.append(MagRec) \n MagOuts=pmag.measurements_methods(MagRecs,noave)\n pmag.magic_write(meas_file,MagOuts,'magic_measurements')\n print(\"results put in \",meas_file)\n if len(SynRecs)>0:\n pmag.magic_write(synfile,SynRecs,'er_synthetics')\n print(\"synthetics put in \",synfile)\n return True, meas_file"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nnames odp_dcs_magic.py DESCRIPTION converts ODP discrete sample format files to magic_measurements format files SYNTAX odp_dsc_magic.py [command line options] OPTIONS -h: prints the help message and quits. -F FILE: specify output measurements file, default is magic_measurements.txt -Fsp FILE: specify output er_specimens.txt file, default is er_specimens.txt -Fsa FILE: specify output er_samples.txt file for appending, default is er_samples.txt -Fsi FILE: specify output er_sites.txt file, default is er_sites.txt -dc B PHI THETA: dc lab field (in micro tesla) and phi,theta, default is none NB: use PHI, THETA = -1 -1 to signal that it changes, i.e. 
in anisotropy experiment -ac B : peak AF field (in mT) for ARM acquisition, default is none -A : don't average replicate measurements INPUT Put data from separate experiments (all AF, thermal, thellier, trm acquisition, Shaw, etc.) in separate directory", "response": "def main():\n \"\"\"\n NAME\n odp_dsc_magic.py\n\n DESCRIPTION\n converts ODP discrete sample format files to magic_measurements format files\n\n SYNTAX\n odp_dsc_magic.py [command line options]\n\n OPTIONS\n -h: prints the help message and quits.\n -F FILE: specify output measurements file, default is magic_measurements.txt\n -Fsp FILE: specify output er_specimens.txt file, default is er_specimens.txt\n -Fsa FILE: specify output er_samples.txt file for appending, default is er_samples.txt\n -Fsi FILE: specify output er_sites.txt file, default is er_sites.txt\n -dc B PHI THETA: dc lab field (in micro tesla) and phi,theta, default is none\n NB: use PHI, THETA = -1 -1 to signal that it changes, i.e. in anisotropy experiment\n -ac B : peak AF field (in mT) for ARM acquisition, default is none\n -A : don't average replicate measurements\n INPUT\n Put data from separate experiments (all AF, thermal, thellier, trm acquisition, Shaw, etc.) 
in separate directory\n\n \"\"\"\n#\n#\n version_num=pmag.get_version()\n meas_file='magic_measurements.txt'\n spec_file='er_specimens.txt'\n samp_file='er_samples.txt'\n site_file='er_sites.txt'\n ErSpecs,ErSamps,ErSites,ErLocs,ErCits=[],[],[],[],[]\n MagRecs=[]\n citation=\"This study\"\n dir_path,demag='.','NRM'\n args=sys.argv\n noave=0\n if '-WD' in args:\n ind=args.index(\"-WD\")\n dir_path=args[ind+1]\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if \"-A\" in args: noave=1\n if '-F' in args:\n ind=args.index(\"-F\")\n meas_file=args[ind+1]\n if '-Fsp' in args:\n ind=args.index(\"-Fsp\")\n spec_file=args[ind+1]\n if '-Fsa' in args:\n ind=args.index(\"-Fsa\")\n samp_file=dir_path+'/'+args[ind+1]\n ErSamps,file_type=pmag.magic_read(samp_file)\n else:\n samp_file=dir_path+'/'+samp_file\n if '-LP' in args:\n ind=args.index(\"-LP\")\n codelist=args[ind+1]\n codes=codelist.split(':')\n if \"AF\" in codes:\n demag='AF'\n if'-dc' not in args: methcode=\"LT-AF-Z\"\n if'-dc' in args: methcode=\"LT-AF-I\"\n if \"T\" in codes:\n demag=\"T\"\n if '-dc' not in args: methcode=\"LT-T-Z\"\n if '-dc' in args: methcode=\"LT-T-I\"\n if \"I\" in codes:\n methcode=\"LP-IRM\"\n if \"S\" in codes:\n demag=\"S\"\n methcode=\"LP-PI-TRM:LP-PI-ALT-AFARM\"\n trm_labfield=labfield\n ans=input(\"DC lab field for ARM step: [50uT] \")\n if ans==\"\":\n arm_labfield=50e-6\n else:\n arm_labfield=float(ans)*1e-6\n ans=input(\"temperature for total trm step: [600 C] \")\n if ans==\"\":\n trm_peakT=600+273 # convert to kelvin\n else:\n trm_peakT=float(ans)+273 # convert to kelvin\n if \"G\" in codes: methcode=\"LT-AF-G\"\n if \"D\" in codes: methcode=\"LT-AF-D\"\n if \"TRM\" in codes:\n demag=\"T\"\n trm=1\n if demag==\"T\" and \"ANI\" in codes:\n methcode=\"LP-AN-TRM\"\n if demag==\"AF\" and \"ANI\" in codes:\n methcode=\"LP-AN-ARM\"\n if labfield==0: labfield=50e-6\n if peakfield==0: peakfield=.180\n spec_file=dir_path+'/'+spec_file\n site_file=dir_path+'/'+site_file\n 
meas_file=dir_path+'/'+meas_file\n filelist=os.listdir(dir_path) # read in list of files to import\n specimens,samples,sites=[],[],[]\n MagRecs,SpecRecs,SampRecs=[],[],[]\n for samp in ErSamps:\n if samp['er_sample_name'] not in samples:\n samples.append(samp['er_sample_name'])\n SampRecs.append(samp)\n for file in filelist: # parse each file\n if file[-3:].lower()=='dsc':\n print('processing: ',file)\n MagRec,SpecRec,SampRec={},{},{}\n treatment_type,treatment_value,user=\"\",\"\",\"\"\n inst=\"ODP-SRM\"\n input=open(dir_path+'/'+file,'r').readlines()\n IDs=file.split('_') # splits on underscores\n pieces=IDs[0].split('-')\n expedition=pieces[0]\n location=pieces[1]\n if file[0]!='_':\n while len(pieces[2])<4:pieces[2]='0'+pieces[2] # pad core to be 3 characters\n specimen=\"\"\n else:\n specimen=\"test\"\n for piece in pieces:\n specimen=specimen+piece+'-'\n specimen=specimen[:-1]\n alt_spec=IDs[1] # alternate specimen is second field in field name\n# set up specimen record for Er_specimens table\n SpecRec['er_expedition_name']=expedition\n SpecRec['er_location_name']=location\n SpecRec['er_site_name']=specimen\n SpecRec['er_sample_name']=specimen\n SpecRec['er_citation_names']=citation\n for key in list(SpecRec.keys()):SampRec[key]=SpecRec[key]\n SampRec['sample_azimuth']='0'\n SampRec['sample_dip']='0'\n SampRec['magic_method_codes']='FS-C-DRILL-IODP:SP-SS-C:SO-V'\n SpecRec['er_specimen_name']=specimen\n SampRec['er_specimen_names']=specimen\n for key in list(SpecRec.keys()):MagRec[key]=SpecRec[key]\n# set up measurement record - default is NRM\n MagRec['er_analyst_mail_names']=user\n MagRec['magic_method_codes']='LT-NO'\n MagRec['magic_software_packages']=version_num\n MagRec[\"treatment_temp\"]='%8.3e' % (273) # room temp in kelvin\n MagRec[\"measurement_temp\"]='%8.3e' % (273) # room temp in kelvin\n MagRec[\"treatment_ac_field\"]=0.\n MagRec[\"treatment_dc_field\"]='0'\n MagRec[\"treatment_dc_field_phi\"]='0'\n MagRec[\"treatment_dc_field_theta\"]='0'\n 
MagRec[\"measurement_flag\"]='g' # assume all data are \"good\"\n MagRec[\"measurement_standard\"]='u' # assume all data are \"good\"\n MagRec[\"measurement_csd\"]='' # set csd to blank\n SpecRec['er_specimen_alternatives']=alt_spec\n vol=7e-6 # assume 7 cc samples\n datestamp=input[1].split() # date time is second line of file\n mmddyy=datestamp[0].split('/') # break into month day year\n date=mmddyy[2]+':'+mmddyy[0]+\":\"+mmddyy[1] +':' +datestamp[1]\n MagRec[\"measurement_date\"]=date\n for k in range(len(input)):\n fields= input[k].split(\"=\")\n if 'treatment_type' in fields[0]:\n if \"Alternating Frequency Demagnetization\" in fields[1]:\n MagRec['magic_method_codes'] = 'LT-AF-Z'\n inst=inst+':ODP-DTECH' # measured on shipboard AF DTECH D2000\n treatment_type=\"AF\"\n if \"Anhysteretic Remanent Magnetization\" in fields[1]:\n MagRec['magic_method_codes'] = 'LT-AF-I'\n inst=inst+':ODP-DTECH' # measured on shipboard AF DTECH D2000\n treatment_type=\"ARM\"\n if \"Isothermal Remanent Magnetization\" in fields[1]:\n MagRec['magic_method_codes'] = 'LT-IRM'\n inst=inst+':ODP-IMP' # measured on shipboard ASC IMPULSE magnetizer\n treatment_type=\"IRM\"\n if \"treatment_value\" in fields[0]:\n values=fields[1].split(',')\n value=values[0]\n if value!=\" \\n\":\n if treatment_type==\"AF\":\n treatment_value=float(value)*1e-3\n MagRec[\"treatment_ac_field\"]=treatment_value # AF demag in treat mT => T\n elif treatment_type==\"IRM\":\n treatment_value=float(value)*1e-3\n MagRec[\"treatment_dc_field\"]='%8.3e'%(treatment_value) # IRM treat mT => T\n if treatment_type==\"ARM\":\n treatment_value=float(value)*1e-3\n dc_value=float(values[1])*1e-3\n MagRec[\"treatment_ac_field\"]=treatment_value # AF demag in treat mT => T\n MagRec[\"treatment_dc_field\"]='%8.3e'%(dc_value) # DC mT => T\n if 'user' in fields[0]:\n user=fields[-1]\n MagRec[\"er_analyst_mail_names\"]=user\n if 'sample_orientation' in fields[0]:\n MagRec[\"measurement_description\"]=fields[-1]\n 
MagRec[\"measurement_standard\"]='u' # assume all data are \"good\"\n if 'sample_area' in fields[0]: vol=float(fields[1])*1e-6 # takes volume (cc) and converts to m^3\n if 'run_number' in fields[0]:\n MagRec['external_database_ids']=fields[1] # run number is the LIMS measurement number\n MagRec['external_database_names']='LIMS'\n if input[k][0:7]=='':\n rec=input[k+1].split(',') # list of data\n for item in rec:\n items=item.split('=')\n if items[0].strip()=='demag_level' and treatment_value==\"\" :\n treat= float(items[1])\n if treat!=0:\n MagRec['magic_method_codes']='LT-AF-Z'\n inst=inst+':ODP-SRM-AF'\n MagRec[\"treatment_ac_field\"]=treat*1e-3 # AF demag in treat mT => T\n if items[0].strip()=='inclination_w_tray_w_bkgrd': MagRec['measurement_inc']=items[1]\n if items[0].strip()=='declination_w_tray_w_bkgrd': MagRec['measurement_dec']=items[1]\n if items[0].strip()=='intensity_w_tray_w_bkgrd': MagRec['measurement_magn_moment']='%8.3e'%(float(items[1])*vol) # convert intensity from A/m to Am^2 using vol\n if items[0].strip()=='x_stdev':MagRec['measurement_x_sd']=items[1]\n if items[0].strip()=='y_stdev':MagRec['measurement_y_sd']=items[1]\n if items[0].strip()=='z_stdev':MagRec['measurement_sd_z']=items[1]\n MagRec['magic_instrument_codes']=inst\n MagRec['measurement_number']='1'\n MagRec['measurement_positions']=''\n MagRecs.append(MagRec)\n if specimen not in specimens:\n specimens.append(specimen)\n SpecRecs.append(SpecRec)\n if MagRec['er_sample_name'] not in samples:\n samples.append(MagRec['er_sample_name'])\n SampRecs.append(SampRec)\n MagOuts=pmag.sort_diclist(MagRecs,'treatment_ac_field')\n for MagRec in MagOuts:\n MagRec[\"treatment_ac_field\"]='%8.3e'%(MagRec[\"treatment_ac_field\"]) # convert to string\n pmag.magic_write(spec_file,SpecRecs,'er_specimens')\n if len(SampRecs)>0:\n SampOut,keys=pmag.fillkeys(SampRecs)\n pmag.magic_write(samp_file,SampOut,'er_samples')\n print('samples stored in ',samp_file)\n 
pmag.magic_write(samp_file,SampRecs,'er_samples')\n print('specimens stored in ',spec_file)\n\n Fixed=pmag.measurements_methods(MagOuts,noave)\n pmag.magic_write(meas_file,Fixed,'magic_measurements')\n print('data stored in ',meas_file)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main():\n N,kappa,D,I=100,20.,0.,90.\n if len(sys.argv)!=0 and '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n elif '-i' in sys.argv:\n ans=input(' Kappa: ')\n kappa=float(ans)\n ans=input(' N: ')\n N=int(ans)\n ans=input(' Mean Dec: ')\n D=float(ans)\n ans=input(' Mean Inc: ')\n I=float(ans)\n else:\n if '-k' in sys.argv:\n ind=sys.argv.index('-k')\n kappa=float(sys.argv[ind+1])\n if '-n' in sys.argv:\n ind=sys.argv.index('-n')\n N=int(sys.argv[ind+1])\n if '-D' in sys.argv:\n ind=sys.argv.index('-D')\n D=float(sys.argv[ind+1])\n if '-I' in sys.argv:\n ind=sys.argv.index('-I')\n I=float(sys.argv[ind+1])\n for k in range(N): \n dec,inc= pmag.fshdev(kappa) # send kappa to fshdev\n drot,irot=pmag.dodirot(dec,inc,D,I) \n print('%7.1f %7.1f ' % (drot,irot))", "response": "NAME fishrot. 
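The fishrot script above leans on `pmag.fshdev(kappa)` to draw one direction from a Fisher distribution about the vertical. The standard inversion method for that draw can be sketched as follows; treat the details as an assumption about what the library does, not its exact code:

```python
# Sketch of drawing a (dec, inc) pair from a Fisher distribution with
# precision parameter kappa, mean direction vertical (inc = 90).
import math
import random

def fisher_deviate(kappa, rng=random):
    r1, r2 = rng.random(), rng.random()
    L = math.exp(-2.0 * kappa)
    a = r1 * (1.0 - L) + L                       # a in [L, 1], so fac <= 1
    fac = math.sqrt(-math.log(a) / (2.0 * kappa))
    inc = 90.0 - math.degrees(2.0 * math.asin(fac))  # off-vertical angle
    dec = math.degrees(2.0 * math.pi * r2)           # declination uniform in 0-360
    return dec, inc

random.seed(0)
dec, inc = fisher_deviate(20.0)
print('%7.1f %7.1f' % (dec, inc))
```

Large kappa keeps `fac` small, so inclinations cluster near 90°; the script then rotates each draw to the requested mean direction with `pmag.dodirot`.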
py\n\nDESCRIPTION\n generates a set of Fisher distributed directions drawn from a single Fisher distribution"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef main():\n if \"-h\" in sys.argv:\n print(main.__doc__)\n sys.exit()\n\n dataframe = extractor.command_line_dataframe([['WD', False, '.'], ['ID', False, '.'], ['f', True, ''], ['Fsa', False, 'samples.txt'], ['DM', False, 3]])\n args = sys.argv\n checked_args = extractor.extract_and_check_args(args, dataframe)\n samp_file, output_samp_file, output_dir_path, input_dir_path, data_model_num = extractor.get_vars(['f', 'Fsa', 'WD', 'ID', 'DM'], checked_args)\n data_model_num = int(float(data_model_num))\n if '-Fsa' not in args and data_model_num == 2:\n output_samp_file = \"er_samples.txt\"\n ran, error = convert.iodp_samples(samp_file, output_samp_file, output_dir_path,\n input_dir_path, data_model_num=data_model_num)\n if not ran:\n print(\"-W- \" + error)", "response": "iodp_samples_magic.py\n OPTIONS:\n -f FILE, input csv file\n -Fsa FILE, output samples file for updating, default is to overwrite existing samples file"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts cartesian coordinates to a direction vector", "response": "def cart2dir(self,cart):\n \"\"\"\n converts cartesian coordinates to a direction (declination, inclination, magnitude)\n \"\"\"\n# print \"calling cart2dir(), not in anything\"\n cart=numpy.array(cart)\n rad=old_div(numpy.pi,180.) # constant to convert degrees to radians\n if len(cart.shape)>1:\n Xs,Ys,Zs=cart[:,0],cart[:,1],cart[:,2]\n else: #single vector\n Xs,Ys,Zs=cart[0],cart[1],cart[2]\n Rs=numpy.sqrt(Xs**2+Ys**2+Zs**2) # calculate resultant vector length\n Decs=(old_div(numpy.arctan2(Ys,Xs),rad))%360. 
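The `cart2dir` method above (despite what its docstring says) converts cartesian x, y, z into declination, inclination, and vector length. Stripped of the Python 2 `old_div` compatibility shims, the same conversion is a few lines of plain numpy; this is a restatement, not the library's exact code:

```python
# Compact restatement of the cartesian -> (declination, inclination, length)
# conversion performed by cart2dir above.
import numpy as np

def cart2dir(cart):
    cart = np.atleast_2d(cart).astype(float)       # accept single vectors too
    xs, ys, zs = cart[:, 0], cart[:, 1], cart[:, 2]
    rs = np.sqrt(xs**2 + ys**2 + zs**2)
    decs = np.degrees(np.arctan2(ys, xs)) % 360.0  # arctan2 handles quadrants
    incs = np.degrees(np.arcsin(zs / rs))          # undefined for zero-length vectors
    return np.column_stack([decs, incs, rs])

print(cart2dir([0.0, 1.0, 0.0]))   # due east, horizontal: dec 90, inc 0, length 1
```

The division-by-zero case that the original guards with a bare `try/except` shows up here as `rs == 0`, which a caller would want to screen out first.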
# calculate declination taking care of correct quadrants (arctan2) and making modulo 360.\n try:\n Incs=old_div(numpy.arcsin(old_div(Zs,Rs)),rad) # calculate inclination (converting to degrees) #\n except:\n print('trouble in cart2dir') # most likely division by zero somewhere\n return numpy.zeros(3)\n\n return numpy.array([Decs,Incs,Rs]).transpose()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef magic_read(self,infile):\n# print \"calling magic_read(self, infile)\", infile\n hold,magic_data,magic_record,magic_keys=[],[],{},[]\n try:\n f=open(infile,\"r\")\n except:\n return [],'bad_file'\n d = f.readline()[:-1].strip('\\n')\n if d[0]==\"s\" or d[1]==\"s\":\n delim='space'\n elif d[0]==\"t\" or d[1]==\"t\":\n delim='tab'\n else:\n print('error reading ', infile)\n sys.exit()\n if delim=='space':file_type=d.split()[1]\n if delim=='tab':file_type=d.split('\\t')[1]\n if file_type=='delimited':\n if delim=='space':file_type=d.split()[2]\n if delim=='tab':file_type=d.split('\\t')[2]\n if delim=='space':line =f.readline()[:-1].split()\n if delim=='tab':line =f.readline()[:-1].split('\\t')\n for key in line:\n magic_keys.append(key)\n lines=f.readlines()\n for line in lines[:-1]:\n line.replace('\\n','')\n if delim=='space':rec=line[:-1].split()\n if delim=='tab':rec=line[:-1].split('\\t')\n hold.append(rec)\n line = lines[-1].replace('\\n','')\n if delim=='space':rec=line[:-1].split()\n if delim=='tab':rec=line.split('\\t')\n hold.append(rec)\n for rec in hold:\n magic_record={}\n if len(magic_keys) != len(rec):\n\n print(\"Warning: Uneven record lengths detected: \")\n #print magic_keys\n #print rec\n for k in range(len(rec)):\n magic_record[magic_keys[k]]=rec[k].strip('\\n')\n magic_data.append(magic_record)\n magictype=file_type.lower().split(\"_\")\n Types=['er','magic','pmag','rmag']\n if magictype in Types:file_type=file_type.lower()\n# print \"magic data from magic_read:\"\n# print 
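The hand-rolled parsing in `magic_read` above deals with a MagIC file layout: a first line such as `tab<TAB>magic_measurements` naming the delimiter and table type, then a delimited header row, then data rows. The same layout can be read with `csv.DictReader`, which also sidesteps the uneven-record-length warning; a sketch assuming tab-delimited input:

```python
# Hedged sketch of magic_read for the tab-delimited case: header line names
# the table type, csv.DictReader turns each row into a dictionary.
import csv
import io

def magic_read(handle):
    first = handle.readline().strip('\n').split('\t')
    file_type = first[1]                      # e.g. 'magic_measurements'
    reader = csv.DictReader(handle, delimiter='\t')
    return list(reader), file_type

sample = ("tab\tmagic_measurements\n"
          "er_specimen_name\tmeasurement_dec\n"
          "spc01\t354.1\n")
data, ftype = magic_read(io.StringIO(sample))
print(ftype, data[0]['measurement_dec'])
```

The original also handles space-delimited files and a `delimited` keyword in the header line; those branches are omitted here for brevity.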
str(magic_data)[:500] + \"...\"\n# print \"file_type\", file_type\n return magic_data,file_type", "response": "reads a Magic template file puts data in a list of dictionaries"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntaking a magic format file and returns a list of unique specimen names", "response": "def get_specs(self,data):\n \"\"\"\n takes a magic format file and returns a list of unique specimen names\n \"\"\"\n # sort the specimen names\n #\n# print \"calling get_specs()\"\n speclist=[]\n for rec in data:\n spec=rec[\"er_specimen_name\"]\n if spec not in speclist:speclist.append(spec)\n speclist.sort()\n #print speclist\n return speclist"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsort the data block in to first_Z first_I first_I... Zdiff is the number of times the data block is in.", "response": "def sortarai(self,datablock,s,Zdiff):\n \"\"\"\n sorts data block in to first_Z, first_I, etc.\n \"\"\"\n# print \"calling sortarai()\"\n first_Z,first_I,zptrm_check,ptrm_check,ptrm_tail=[],[],[],[],[]\n field,phi,theta=\"\",\"\",\"\"\n starthere=0\n Treat_I,Treat_Z,Treat_PZ,Treat_PI,Treat_M,Treat_AC=[],[],[],[],[],[]\n ISteps,ZSteps,PISteps,PZSteps,MSteps,ACSteps=[],[],[],[],[],[]\n GammaChecks=[] # comparison of pTRM direction acquired and lab field\n Mkeys=['measurement_magn_moment','measurement_magn_volume','measurement_magn_mass','measurement_magnitude']\n rec=datablock[0] # finds which type of magnetic measurement is present in magic_measurements.txt, then assigns momkey to that value\n for key in Mkeys:\n if key in list(rec.keys()) and rec[key]!=\"\":\n momkey=key\n break\n # first find all the steps\n for k in range(len(datablock)): # iterates through records.\n rec=datablock[k]\n if \"treatment_temp\" in list(rec.keys()):\n temp=float(rec[\"treatment_temp\"])\n elif \"treatment_mw_power\" in list(rec.keys()):\n temp=float(rec[\"treatment_mw_power\"])\n\n methcodes=[]\n 
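The `get_specs` loop above (append if not already seen, then sort) is equivalent to collecting the unique names in a set and sorting once:

```python
# One-line equivalent of the get_specs accumulate-and-sort loop above.

def get_specs(data):
    return sorted({rec['er_specimen_name'] for rec in data})

records = [{'er_specimen_name': n} for n in ['b1', 'a1', 'b1', 'a2']]
print(get_specs(records))
```

The set comprehension drops duplicates in one pass, and `sorted` replaces the in-place `speclist.sort()`.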
tmp=rec[\"magic_method_codes\"].split(\":\")\n for meth in tmp:\n methcodes.append(meth.strip())\n # methchodes contains all codes for a particular record\n # for thellier-thellier\n if 'LT-T-I' in methcodes and 'LP-PI-TRM' in methcodes and 'LP-TRM' not in methcodes :\n # IF specimen cooling AND using a laboratory trm AND NOT trm acquisition\n Treat_I.append(temp)\n ISteps.append(k)\n if field==\"\":field=float(rec[\"treatment_dc_field\"])\n if phi==\"\":\n phi=float(rec['treatment_dc_field_phi'])\n theta=float(rec['treatment_dc_field_theta'])\n\n # for Microwave\n if 'LT-M-I' in methcodes and 'LP-PI-M' in methcodes :\n # if using microwave radiation in lab field AND using microwave demagnetisation\n Treat_I.append(temp)\n ISteps.append(k)\n if field==\"\":field=float(rec[\"treatment_dc_field\"])\n if phi==\"\":\n phi=float(rec['treatment_dc_field_phi'])\n theta=float(rec['treatment_dc_field_theta'])\n\n # stick first zero field stuff into first_Z\n if 'LT-NO' in methcodes:\n # if no treatments applied before measurements\n Treat_Z.append(temp)\n ZSteps.append(k)\n if 'LT-T-Z' in methcodes or 'LT-M-Z' in methcodes:\n # if specimen cooling in zero field OR using microwave radiation: In zero field\n Treat_Z.append(temp)\n ZSteps.append(k)\n if 'LT-PTRM-Z' : # maybe this should be in methcodes ?? 
note I no longer understand\n # if pTRM tail check\n Treat_PZ.append(temp)\n PZSteps.append(k)\n if 'LT-PTRM-I' in methcodes or 'LT-PMRM-I' in methcodes:\n # if pTRM check\n Treat_PI.append(temp)\n PISteps.append(k)\n if 'LT-PTRM-MD' in methcodes:\n # if pTRM tail check\n Treat_M.append(temp)\n MSteps.append(k)\n if 'LT-PTRM-AC' in methcodes or 'LT-PMRM-AC' in methcodes:\n Treat_AC.append(temp)\n ACSteps.append(k)\n if 'LT-NO' in methcodes:\n # if no treatments applied before measurement\n dec=float(rec[\"measurement_dec\"])\n inc=float(rec[\"measurement_inc\"])\n str=float(rec[momkey])\n if 'LP-PI-M' not in methcodes:\n # if not using microwave demagnetisation\n first_I.append([273,0.,0.,0.,1])\n first_Z.append([273,dec,inc,str,1]) # NRM step\n else:\n first_I.append([0,0.,0.,0.,1])\n first_Z.append([0,dec,inc,str,1]) # NRM step\n\n # the block above seems to be sorting out into wheter it is Treat_Z (zero field), Treat_I (infield), a ptrm check, or a ptrm tail check. so, each record has been appended to whichever of those it belongs in.\n #---------------------\n # find IZ and ZI\n #---------------------\n\n\n for temp in Treat_I: # look through infield steps and find matching Z step\n if temp in Treat_Z: # found a match\n istep=ISteps[Treat_I.index(temp)]\n irec=datablock[istep]\n methcodes=[]\n tmp=irec[\"magic_method_codes\"].split(\":\")\n for meth in tmp: methcodes.append(meth.strip())\n brec=datablock[istep-1] # take last record as baseline to subtract\n zstep=ZSteps[Treat_Z.index(temp)]\n zrec=datablock[zstep]\n # sort out first_Z records\n if \"LP-PI-TRM-IZ\" in methcodes or \"LP-PI-M-IZ\" in methcodes:\n ZI=0\n else:\n ZI=1\n dec=float(zrec[\"measurement_dec\"])\n inc=float(zrec[\"measurement_inc\"])\n str=float(zrec[momkey])\n first_Z.append([temp,dec,inc,str,ZI])\n # sort out first_I records\n #print 'irec', irec # full data set for infield measurement\n #print 'zrec', zrec # coresponding zerofield measurement\n idec=float(irec[\"measurement_dec\"])\n 
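The IZ/ZI matching loop in `sortarai` pairs each in-field step with the zero-field step at the same temperature and subtracts the zero-field vector to isolate the lab-acquired pTRM. A dictionary keyed by temperature makes that pairing explicit; this is a sketch of the bookkeeping, not the original's index-list approach:

```python
# Sketch of pairing in-field and zero-field steps by treatment temperature
# and vector-subtracting the zero-field measurement (hypothetical helper).

def pair_steps(infield, zerofield):
    """infield / zerofield: lists of (temp, (x, y, z)) tuples."""
    zf = dict(zerofield)
    pairs = []
    for temp, ivec in infield:
        if temp in zf:                       # found the matching zero-field step
            zvec = zf[temp]
            ptrm = tuple(i - z for i, z in zip(ivec, zvec))
            pairs.append((temp, ptrm))
    return pairs

infield = [(373.0, (1.0, 0.0, 2.0))]
zerofield = [(373.0, (1.0, 0.0, 0.5)), (473.0, (0.8, 0.0, 0.2))]
print(pair_steps(infield, zerofield))   # pTRM at 373 K is (0.0, 0.0, 1.5)
```

The original works in (dec, inc, moment) triples and converts through `dir2cart`/`cart2dir` around the subtraction; the cartesian difference is the step shown here.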
iinc=float(irec[\"measurement_inc\"])\n istr=float(irec[momkey])\n X=self.dir2cart([idec,iinc,istr])\n BL=self.dir2cart([dec,inc,str])\n I=[]\n for c in range(3):\n I.append((X[c]-BL[c]))\n iDir=self.cart2dir(I)\n first_I.append([temp,iDir[0],iDir[1],iDir[2],ZI])\n now_ignore = \"\"\"\n #if I[2]!=0: # lj PUT THIS BACK\n if True:\n iDir=self.cart2dir(I)\n if Zdiff==0:\n print \"Zdiff == 0, appending to first_I\" #lj\n print [temp,iDir[0],iDir[1],iDir[2],ZI] #lj\n first_I.append([temp,iDir[0],iDir[1],iDir[2],ZI])\n else:\n print \"Zdiff != 0, appending to first_I\" #lj\n print [temp,0.,0.,I[2],ZI] #lj\n first_I.append([temp,0.,0.,I[2],ZI])\n## gamma=angle([iDir[0],iDir[1]],[phi,theta])\n else:\n print \"0,0,0 appending to first_I\"\n print [temp,0.,0.,0.,ZI]\n first_I.append([temp,0.,0.,0.,ZI])\n## gamma=0.0\n## # put in Gamma check (infield trm versus lab field)\n## if 180.-gamma 0:\n for rec in rem_data:\n if rec['remanence_bcr'] != \"\" and float(rec['remanence_bcr']) > 0:\n try:\n ind = hsids.index(rec['er_specimen_name'])\n Bcr1.append(float(rec['remanence_bcr']))\n Bcr1Bc.append(Bcr1[-1]/Bc[ind])\n S1.append(S[ind])\n Bcr2.append(Bcr[ind])\n except ValueError:\n if verbose:\n print('hysteresis data for ',\n rec['er_specimen_name'], ' not found')\n #\n # now plot the day and S-Bc, S-Bcr plots\n #\n leglist = []\n if label == 0:\n names = []\n if len(Bcr1) > 0:\n pmagplotlib.plot_day(DSC['day'], Bcr1Bc, S1, 'ro', names=names)\n pmagplotlib.plot_s_bcr(DSC['S-Bcr'], Bcr1, S1, 'ro')\n pmagplotlib.plot_init(DSC['bcr1-bcr2'], 5, 5)\n pmagplotlib.plot_bcr(DSC['bcr1-bcr2'], Bcr1, Bcr2)\n else:\n del DSC['bcr1-bcr2']\n pmagplotlib.plot_day(DSC['day'], BcrBc, S, 'bs', names=names)\n pmagplotlib.plot_s_bcr(DSC['S-Bcr'], Bcr, S, 'bs')\n pmagplotlib.plot_s_bc(DSC['S-Bc'], Bc, S, 'bs')\n files = {}\n if len(locations) > 0:\n locations = locations[:-1]\n for key in DSC.keys():\n if pmagplotlib.isServer: # use server plot naming convention\n files[key] = 'LO:_'+locations+'_' + 
\\\n 'SI:__SA:__SP:__TY:_'+key+'_.'+fmt\n else: # use more readable plot naming convention\n files[key] = '{}_{}.{}'.format(locations, key, fmt)\n if verbose:\n pmagplotlib.draw_figs(DSC)\n ans = raw_input(\" S[a]ve to save plots, return to quit: \")\n if ans == \"a\":\n pmagplotlib.save_plots(DSC, files)\n else:\n sys.exit()\n if plots:\n pmagplotlib.save_plots(DSC, files)", "response": "NAME\n dayplot_magic.py\n\n DESCRIPTION\n makes 'day plots' (Day et al. 1977) and squareness/coercivity,\n plots 'linear mixing' curve from Dunlop and Carter-Stiglitz (2006).\n squareness coercivity of remanence (Neel, 1955) plots after\n Tauxe et al. (2002)\n\n SYNTAX\n dayplot_magic.py [command line options]\n\n OPTIONS\n -h prints help message and quits\n -f: specify input hysteresis file, default is rmag_hysteresis.txt\n -fr: specify input remanence file, default is rmag_remanence.txt\n -fmt [svg,png,jpg] format for output plots\n -sav saves plots and quits quietly\n -n label specimen names"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nnames orientation_magic.py DESCRIPTION takes tab delimited field notebook information and converts to MagIC formatted tables SYNTAX orientation_magic.py [command line options] OPTIONS -f FILE: specify input file, default is: orient.txt -Fsa FILE: specify output file, default is: er_samples.txt -Fsi FILE: specify output site location file, default is: er_sites.txt -app append/update these data in existing er_samples.txt, er_sites.txt files -ocn OCON: specify orientation convention, default is #1 below -dcn DCON [DEC]: specify declination convention, default is #1 below if DCON = 2, you must supply the declination correction -BCN don't correct bedding_dip_dir for magnetic declination -already corrected -ncn NCON: specify naming convention: default is #1 below -a: averages all bedding poles and uses average for all samples: default is NO -gmt HRS: specify hours to subtract from local time to get GMT: 
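The Day plot built above positions each specimen by squareness (Mrs/Ms) against the coercivity ratio (Bcr/Bc). Computing those coordinates from a hysteresis record is just two divisions; the field names here follow the 2.x `rmag_hysteresis`/`rmag_remanence` convention used in the script, and the helper name is hypothetical:

```python
# Day plot coordinates (Day et al., 1977) from a hysteresis record:
# squareness Mrs/Ms versus coercivity ratio Bcr/Bc.

def day_plot_coords(rec):
    mrs_ms = float(rec['hysteresis_mr_moment']) / float(rec['hysteresis_ms_moment'])
    bcr_bc = float(rec['remanence_bcr']) / float(rec['hysteresis_bc'])
    return mrs_ms, bcr_bc

spec = {'hysteresis_mr_moment': '1.5e-5', 'hysteresis_ms_moment': '1.0e-4',
        'remanence_bcr': '60', 'hysteresis_bc': '20'}
print(day_plot_coords(spec))   # (0.15, 3.0): pseudo-single-domain territory
```

Values in MagIC tables arrive as strings, hence the `float()` calls; the script above does the same conversions before plotting.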
default is 0 -mcd: specify sampling method codes as a colon delimited string: [default is: FS-FD:SO-POM] FS-FD field sampling done with a drill FS-H field sampling done with hand samples FS-LOC-GPS field location done with GPS FS-LOC-MAP field location done with map SO-POM a Pomeroy orientation device was used SO-ASC an ASC orientation device was used -DM: specify data model (2 or 3). Default: 3. Will output to the appropriate format. Orientation convention: Samples are oriented in the field with a \"field arrow\" and measured in the laboratory with a \"lab arrow\". The lab arrow is the positive X direction of the right handed coordinate system of the specimen measurements. The lab and field arrows may not be the same. In the MagIC database, we require the orientation (azimuth and plunge) of the X direction of the measurements (lab arrow). Here are some popular conventions that convert the field arrow azimuth (mag_azimuth in the orient.txt file) and dip (field_dip in orient.txt) to the azimuth and plunge of the laboratory arrow (sample_azimuth and sample_dip in er_samples.txt). The two angles, mag_azimuth and field_dip are explained below. [1] Standard Pomeroy convention of azimuth and hade (degrees from vertical down) of the drill direction (field arrow). lab arrow azimuth= sample_azimuth = mag_azimuth; lab arrow dip = sample_dip =-field_dip. i.e. the lab arrow dip is minus the hade. [2] Field arrow is the strike of the plane orthogonal to the drill direction, Field dip is the hade of the drill direction. Lab arrow azimuth = mag_azimuth-90 Lab arrow dip = -field_dip [3] Lab arrow is the same as the drill direction; hade was measured in the field. Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip [4] lab azimuth and dip are same as mag_azimuth, field_dip : use this for unoriented samples too [5] Same as AZDIP convention explained below - azimuth and inclination of the drill direction are mag_azimuth and field_dip; lab arrow is as in [1] above. 
lab azimuth is same as mag_azimuth,lab arrow dip=field_dip-90 [6] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = 90-field_dip [7] all others you will have to either customize your self or e-mail ltauxe@ucsd.edu for help. Magnetic declination convention: [1] Use the IGRF value at the lat/long and date supplied [default] [2] Will supply declination correction [3] mag_az is already corrected in file [4] Correct mag_az but not bedding_dip_dir Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY NB: all others you will have to either customize your self or e-mail ltauxe@ucsd.edu for help. 
OUTPUT output saved in er_samples.txt and er_sites.txt (or samples.txt and sites.txt if using data model 3.0) - this will overwrite any existing files", "response": "def main():\n \"\"\"\n NAME\n orientation_magic.py\n\n DESCRIPTION\n takes tab delimited field notebook information and converts to MagIC formatted tables\n\n SYNTAX\n orientation_magic.py [command line options]\n\n OPTIONS\n -f FILE: specify input file, default is: orient.txt\n -Fsa FILE: specify output file, default is: er_samples.txt\n -Fsi FILE: specify output site location file, default is: er_sites.txt\n -app append/update these data in existing er_samples.txt, er_sites.txt files\n -ocn OCON: specify orientation convention, default is #1 below\n -dcn DCON [DEC]: specify declination convention, default is #1 below\n if DCON = 2, you must supply the declination correction\n -BCN don't correct bedding_dip_dir for magnetic declination -already corrected\n -ncn NCON: specify naming convention: default is #1 below\n -a: averages all bedding poles and uses average for all samples: default is NO\n -gmt HRS: specify hours to subtract from local time to get GMT: default is 0\n -mcd: specify sampling method codes as a colon delimited string: [default is: FS-FD:SO-POM]\n FS-FD field sampling done with a drill\n FS-H field sampling done with hand samples\n FS-LOC-GPS field location done with GPS\n FS-LOC-MAP field location done with map\n SO-POM a Pomeroy orientation device was used\n SO-ASC an ASC orientation device was used\n -DM: specify data model (2 or 3). Default: 3. Will output to the appropriate format.\n\n\n Orientation convention:\n Samples are oriented in the field with a \"field arrow\" and measured in the laboratory with a \"lab arrow\". The lab arrow is the positive X direction of the right handed coordinate system of the specimen measurements. The lab and field arrows may not be the same. 
In the MagIC database, we require the orientation (azimuth and plunge) of the X direction of the measurements (lab arrow). Here are some popular conventions that convert the field arrow azimuth (mag_azimuth in the orient.txt file) and dip (field_dip in orient.txt) to the azimuth and plunge of the laboratory arrow (sample_azimuth and sample_dip in er_samples.txt). The two angles, mag_azimuth and field_dip are explained below.\n\n [1] Standard Pomeroy convention of azimuth and hade (degrees from vertical down)\n of the drill direction (field arrow). lab arrow azimuth= sample_azimuth = mag_azimuth;\n lab arrow dip = sample_dip =-field_dip. i.e. the lab arrow dip is minus the hade.\n [2] Field arrow is the strike of the plane orthogonal to the drill direction,\n Field dip is the hade of the drill direction. Lab arrow azimuth = mag_azimuth-90\n Lab arrow dip = -field_dip\n [3] Lab arrow is the same as the drill direction;\n hade was measured in the field.\n Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip\n [4] lab azimuth and dip are same as mag_azimuth, field_dip : use this for unoriented samples too\n [5] Same as AZDIP convention explained below -\n azimuth and inclination of the drill direction are mag_azimuth and field_dip;\n lab arrow is as in [1] above.\n lab azimuth is same as mag_azimuth,lab arrow dip=field_dip-90\n [6] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = 90-field_dip\n [7] all others you will have to either customize your\n self or e-mail ltauxe@ucsd.edu for help.\n\n Magnetic declination convention:\n [1] Use the IGRF value at the lat/long and date supplied [default]\n [2] Will supply declination correction\n [3] mag_az is already corrected in file\n [4] Correct mag_az but not bedding_dip_dir\n\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. 
[default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n NB: all others you will have to either customize your\n self or e-mail ltauxe@ucsd.edu for help.\n OUTPUT\n output saved in er_samples.txt and er_sites.txt (or samples.txt and sites.txt if using data model 3.0)\n - this will overwrite any existing files\n \"\"\"\n args = sys.argv\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n else:\n info = [['WD', False, '.'], ['ID', False, ''], ['f', False, 'orient.txt'],\n ['app', False, False], ['ocn', False, 1], ['dcn', False, 1],\n ['BCN', False, True], ['ncn', False, '1'], ['gmt', False, 0],\n ['mcd', False, ''], ['a', False, False], ['DM', False, 3]]\n\n #output_dir_path, input_dir_path, orient_file, append, or_con, dec_correction_con, samp_con, hours_from_gmt, method_codes, average_bedding\n # leave off -Fsa, -Fsi b/c defaults in command_line_extractor\n dataframe = extractor.command_line_dataframe(info)\n checked_args = extractor.extract_and_check_args(args, dataframe)\n output_dir_path, input_dir_path, orient_file, append, or_con, dec_correction_con, bed_correction, samp_con, hours_from_gmt, method_codes, average_bedding, samp_file, site_file, data_model = extractor.get_vars(['WD', 'ID', 'f', 'app', 'ocn', 'dcn', 'BCN', 'ncn', 'gmt', 'mcd', 'a', 'Fsa', 'Fsi', 'DM'], checked_args)\n if input_dir_path == '.':\n input_dir_path = output_dir_path\n\n if not isinstance(dec_correction_con, int):\n if len(dec_correction_con) > 1:\n dec_correction = int(dec_correction_con.split()[1])\n dec_correction_con = int(dec_correction_con.split()[0])\n else:\n dec_correction = 0\n else:\n dec_correction = 0\n\n 
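The block above splits a `-dcn CON [DEC]` argument into the convention number and an optional declination correction (required only for convention 2). Isolated as a helper, the parsing looks like this (helper name hypothetical):

```python
# Parse a "-dcn CON [DEC]" value: convention number plus optional
# declination correction, defaulting the correction to 0.

def parse_dec_option(raw):
    """'2 10' -> (2, 10); '1' -> (1, 0); already-int input passes through."""
    if isinstance(raw, int):
        return raw, 0
    parts = str(raw).split()
    if len(parts) > 1:
        return int(parts[0]), int(parts[1])
    return int(parts[0]), 0

print(parse_dec_option('2 10'), parse_dec_option(1))
```

Keeping the default correction at 0 matches the script: conventions other than 2 ignore the correction value entirely.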
ipmag.orientation_magic(or_con, dec_correction_con, dec_correction, bed_correction, samp_con, hours_from_gmt, method_codes, average_bedding, orient_file, samp_file, site_file, output_dir_path, input_dir_path, append, data_model)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd items and or update existing items in grid", "response": "def add_items(self, items_list, incl_pmag=True, incl_parents=True):\n \"\"\"\n Add items and/or update existing items in grid\n \"\"\"\n num_rows = self.GetNumberRows()\n current_grid_rows = [self.GetCellValue(num, 0) for num in range(num_rows)]\n er_data = {item.name: item.er_data for item in items_list}\n pmag_data = {item.name: item.pmag_data for item in items_list}\n items_list = sorted(items_list, key=lambda item: item.name)\n for item in items_list[:]:\n if item.name in current_grid_rows:\n pass\n else:\n self.add_row(item.name, item)\n self.add_data(er_data)#, pmag=False)\n if incl_pmag:\n self.add_data(pmag_data, pmag=True)\n if incl_parents:\n self.add_parents()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a row to the grid", "response": "def add_row(self, label='', item=''):\n \"\"\"\n Add a row to the grid\n \"\"\"\n self.AppendRows(1)\n last_row = self.GetNumberRows() - 1\n self.SetCellValue(last_row, 0, str(label))\n self.row_labels.append(label)\n self.row_items.append(item)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving a row from the grid.", "response": "def remove_row(self, row_num=None):\n \"\"\"\n Remove a row from the grid\n \"\"\"\n #DeleteRows(self, pos, numRows, updateLabel\n if not row_num and row_num != 0:\n row_num = self.GetNumberRows() - 1\n label = self.GetCellValue(row_num, 0)\n self.DeleteRows(pos=row_num, numRows=1, updateLabels=True)\n\n # remove label from row_labels\n try:\n self.row_labels.remove(label)\n except ValueError:\n # if label name hasn't been saved yet, simply truncate row_labels\n 
self.row_labels = self.row_labels[:-1]\n self.row_items.pop(row_num)\n if not self.changes:\n self.changes = set()\n self.changes.add(-1)\n # fix #s for rows edited:\n self.update_changes_after_row_delete(row_num)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a correctly formatted drop - down menu for given col_label.", "response": "def add_drop_down(self, col_number, col_label):\n \"\"\"\n Add a correctly formatted drop-down-menu for given col_label, if required.\n Otherwise do nothing.\n \"\"\"\n if col_label in ['magic_method_codes', 'magic_method_codes++']:\n self.add_method_drop_down(col_number, col_label)\n if col_label in vocab.possible_vocabularies:\n if col_number not in list(self.choices.keys()): # if not already assigned above\n self.grid.SetColLabelValue(col_number, col_label + \"**\") # mark it as using a controlled vocabulary\n url = 'http://api.earthref.org/MagIC/vocabularies/{}.json'.format(col_label)\n controlled_vocabulary = pd.io.json.read_json(url)\n stripped_list = []\n for item in controlled_vocabulary[col_label][0]:\n try:\n stripped_list.append(str(item['item']))\n except UnicodeEncodeError:\n # skips items with non ASCII characters\n pass\n #stripped_list = [item['item'] for item in controlled_vocabulary[label][0]]\n\n if len(stripped_list) > 100:\n # split out the list alphabetically, into a dict of lists {'A': ['alpha', 'artist'], 'B': ['beta', 'beggar']...}\n dictionary = {}\n for item in stripped_list:\n letter = item[0].upper()\n if letter not in list(dictionary.keys()):\n dictionary[letter] = []\n dictionary[letter].append(item)\n\n stripped_list = dictionary\n\n two_tiered = True if isinstance(stripped_list, dict) else False\n self.choices[col_number] = (stripped_list, two_tiered)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding drop - down - menu options for magic_method_codes columns", "response": "def add_method_drop_down(self, col_number, 
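When a controlled vocabulary has more than 100 items, `add_drop_down` above re-shapes the flat list into a dict of lists keyed by first letter so the menu can be two-tiered (`{'A': ['alpha', 'artist'], 'B': ['beta', 'beggar']}`). The same bucketing, stand-alone:

```python
# Bucket a long flat vocabulary alphabetically for a two-tiered menu,
# as add_drop_down above does for lists over 100 items.

def bucket_by_letter(terms):
    buckets = {}
    for term in terms:
        buckets.setdefault(term[0].upper(), []).append(term)
    return buckets

print(bucket_by_letter(['alpha', 'artist', 'beta', 'beggar']))
```

`setdefault` replaces the explicit `if letter not in keys` check in the original, and items stay in their incoming order within each bucket.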
col_label):\n \"\"\"\n Add drop-down-menu options for magic_method_codes columns\n \"\"\"\n if self.data_type == 'age':\n method_list = vocab.age_methods\n elif '++' in col_label:\n method_list = vocab.pmag_methods\n elif self.data_type == 'result':\n method_list = vocab.pmag_methods\n else:\n method_list = vocab.er_methods\n self.choices[col_number] = (method_list, True)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a popup menu when user clicks on the column.", "response": "def on_left_click(self, event, grid, choices):\n \"\"\"\n creates popup menu when user clicks on the column\n if that column is in the list of choices that get a drop-down menu.\n allows user to edit the column, but only from available values\n \"\"\"\n color = self.grid.GetCellBackgroundColour(event.GetRow(), event.GetCol())\n # allow user to cherry-pick cells for editing. gets selection of meta key for mac, ctrl key for pc\n if event.ControlDown() or event.MetaDown():\n row, col = event.GetRow(), event.GetCol()\n if (row, col) not in self.dispersed_selection:\n self.dispersed_selection.append((row, col))\n self.grid.SetCellBackgroundColour(row, col, 'light blue')\n else:\n self.dispersed_selection.remove((row, col))\n self.grid.SetCellBackgroundColour(row, col, color)# 'white'\n self.grid.ForceRefresh()\n return\n if event.ShiftDown(): # allow user to highlight multiple consecutive cells in a column\n previous_col = self.grid.GetGridCursorCol()\n previous_row = self.grid.GetGridCursorRow()\n col = event.GetCol()\n row = event.GetRow()\n if col != previous_col:\n return\n else:\n if row > previous_row:\n row_range = list(range(previous_row, row+1))\n else:\n row_range = list(range(row, previous_row+1))\n for r in row_range:\n self.grid.SetCellBackgroundColour(r, col, 'light blue')\n self.selection.append((r, col))\n self.grid.ForceRefresh()\n return\n\n selection = False\n\n if self.dispersed_selection:\n is_dispersed = True\n selection = 
self.dispersed_selection\n\n if self.selection:\n is_dispersed = False\n selection = self.selection\n\n try:\n col = event.GetCol()\n row = event.GetRow()\n except AttributeError:\n row, col = selection[0][0], selection[0][1]\n\n self.grid.SetGridCursor(row, col)\n\n if col in list(choices.keys()): # column should have a pop-up menu\n menu = wx.Menu()\n two_tiered = choices[col][1]\n choices = choices[col][0]\n if not two_tiered: # menu is one tiered\n if 'CLEAR cell of all values' not in choices:\n choices.insert(0, 'CLEAR cell of all values')\n for choice in choices:\n if not choice:\n choice = \" \" # prevents error if choice is an empty string\n menuitem = menu.Append(wx.ID_ANY, str(choice))\n self.window.Bind(wx.EVT_MENU, lambda event: self.on_select_menuitem(event, grid, row, col, selection), menuitem)\n self.show_menu(event, menu)\n else: # menu is two_tiered\n clear = menu.Append(-1, 'CLEAR cell of all values')\n self.window.Bind(wx.EVT_MENU, lambda event: self.on_select_menuitem(event, grid, row, col, selection), clear)\n for choice in sorted(choices.items()):\n submenu = wx.Menu()\n for item in choice[1]:\n menuitem = submenu.Append(-1, str(item))\n self.window.Bind(wx.EVT_MENU, lambda event: self.on_select_menuitem(event, grid, row, col, selection), menuitem)\n menu.Append(-1, choice[0], submenu)\n self.show_menu(event, menu)\n\n if selection:\n # re-whiten the cells that were previously highlighted\n for row, col in selection:\n self.grid.SetCellBackgroundColour(row, col, self.col_color)\n self.dispersed_selection = []\n self.selection = []\n self.grid.ForceRefresh()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset value of selected cell to value selected from menuitem", "response": "def on_select_menuitem(self, event, grid, row, col, selection):\n \"\"\"\n sets value of selected cell to value selected from menu\n \"\"\"\n if self.grid.changes: # if user selects a menuitem, that is an edit\n 
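The ctrl/meta-click branch of `on_left_click` above toggles individual cells in and out of a dispersed selection: clicking a cell adds it, clicking again removes it. Reduced to its data structure (the GUI highlighting stripped away):

```python
# Toggle semantics of the dispersed-selection list in on_left_click above:
# a (row, col) cell is appended if absent, removed if present.

def toggle(selection, cell):
    if cell in selection:
        selection.remove(cell)
    else:
        selection.append(cell)
    return selection

sel = []
toggle(sel, (2, 3))
toggle(sel, (4, 1))
toggle(sel, (2, 3))   # second click on (2, 3) deselects it
print(sel)
```

A list (rather than a set) preserves click order, which the original relies on when it later takes `selection[0]` as the cursor position.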
self.grid.changes.add(row)\n else:\n self.grid.changes = {row}\n\n item_id = event.GetId()\n item = event.EventObject.FindItemById(item_id)\n label = item.Label\n cell_value = grid.GetCellValue(row, col)\n if str(label) == \"CLEAR cell of all values\":\n label = \"\"\n\n col_label = grid.GetColLabelValue(col).strip('\\nEDIT ALL').strip('**')\n if col_label in self.colon_delimited_lst and label:\n if not label.lower() in cell_value.lower():\n label += (\":\" + cell_value).rstrip(':')\n else:\n label = cell_value\n\n if self.selected_col and self.selected_col == col:\n for row in range(self.grid.GetNumberRows()):\n grid.SetCellValue(row, col, label)\n if self.grid.changes:\n self.grid.changes.add(row)\n else:\n self.grid.changes = {row}\n\n #self.selected_col = None\n else:\n grid.SetCellValue(row, col, label)\n\n if selection:\n for cell in selection:\n row = cell[0]\n grid.SetCellValue(row, col, label)\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nnames plot_cdf.py DESCRIPTION makes plots of cdfs of data in input file SYNTAX plot_cdf.py [-h][command line options] OPTIONS -h prints help message and quits -f FILE -t TITLE -fmt [svg,eps,png,pdf,jpg..] specify format of output figure, default is svg -sav saves plot and quits", "response": "def main():\n \"\"\"\n NAME\n plot_cdf.py\n\n DESCRIPTION\n makes plots of cdfs of data in input file \n\n SYNTAX\n plot_cdf.py [-h][command line options]\n\n OPTIONS\n -h prints help message and quits\n -f FILE\n -t TITLE\n -fmt [svg,eps,png,pdf,jpg..] 
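For colon-delimited columns such as `magic_method_codes`, `on_select_menuitem` above appends the chosen code onto the existing cell value unless it is already present (case-insensitively). The string logic on its own, as a hypothetical helper:

```python
# Merge a chosen method code into an existing colon-delimited cell value,
# skipping case-insensitive duplicates, as on_select_menuitem above does.

def merge_code(label, cell_value):
    if label.lower() in cell_value.lower():
        return cell_value                      # already present: keep cell as-is
    return (label + ':' + cell_value).rstrip(':')

print(merge_code('LT-AF-Z', 'LT-NO'))   # 'LT-AF-Z:LT-NO'
print(merge_code('LT-NO', 'LT-NO'))     # unchanged
```

The `rstrip(':')` handles the empty-cell case, so selecting a code into a blank cell yields just the code with no trailing colon.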
specify format of output figure, default is svg\n -sav saves plot and quits\n \n \"\"\"\n fmt,plot='svg',0\n title=\"\"\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-sav' in sys.argv:plot=1\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1] \n X=numpy.loadtxt(file)\n# else:\n# X=numpy.loadtxt(sys.stdin,dtype=numpy.float)\n else:\n print('-f option required')\n print(main.__doc__)\n sys.exit()\n if '-fmt' in sys.argv:\n ind=sys.argv.index('-fmt')\n fmt=sys.argv[ind+1] \n if '-t' in sys.argv:\n ind=sys.argv.index('-t')\n title=sys.argv[ind+1] \n CDF={'X':1}\n pmagplotlib.plot_init(CDF['X'],5,5)\n pmagplotlib.plot_cdf(CDF['X'],X,title,'r','')\n files={'X':'CDF_.'+fmt}\n if plot==0:\n pmagplotlib.draw_figs(CDF)\n ans= input('S[a]ve plot, to quit ')\n if ans=='a':\n pmagplotlib.save_plots(CDF,files)\n else:\n pmagplotlib.save_plots(CDF,files)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncalculating bootstrap statistics for tensor data (bootams.py)", "response": "def main():\n \"\"\"\n NAME\n bootams.py\n\n DESCRIPTION\n calculates bootstrap statistics for tensor data\n\n SYNTAX\n bootams.py [-h][command line options]\n\n OPTIONS:\n -h prints help message and quits\n -f FILE specifies input file name\n -par specifies parametric bootstrap [by whole data set]\n -n N specifies the number of bootstrap samples, default is N=5000\n\n INPUT\n x11 x22 x33 x12 x23 x13\n\n OUTPUT\n tau_1 tau_1_sigma V1_dec V1_inc V1_eta V1_eta_dec V1_eta_inc V1_zeta V1_zeta_dec V1_zeta_inc\n tau_2 tau_2_sigma V2_dec V2_inc V2_eta V2_eta_dec V2_eta_inc V2_zeta V2_zeta_dec V2_zeta_inc\n tau_3 tau_3_sigma V3_dec V3_inc V3_eta V3_eta_dec V3_eta_inc V3_zeta V3_zeta_dec V3_zeta_inc\n \"\"\"\n# set options\n ipar,nb=0,5000\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n Ss=np.loadtxt(file)\n #f=open(file,'r')\n #data=f.readlines()\n if '-par' in 
sys.argv:ipar=1\n if '-n' in sys.argv:\n ind=sys.argv.index('-n')\n nb=int(sys.argv[ind+1])\n# read in the data\n print(\"Doing bootstrap - be patient\")\n #Ss=[]\n #for line in data:\n # s=[]\n # rec=line.split()\n # for i in range(6):\n # s.append(float(rec[i]))\n # Ss.append(s)\n Tmean,Vmean,Taus,Vs=pmag.s_boot(Ss,ipar=ipar,nb=nb)\n bpars=pmag.sbootpars(Taus,Vs) # calculate kent parameters for bootstrap\n bpars[\"v1_dec\"]=Vmean[0][0]\n bpars[\"v1_inc\"]=Vmean[0][1]\n bpars[\"v2_dec\"]=Vmean[1][0]\n bpars[\"v2_inc\"]=Vmean[1][1]\n bpars[\"v3_dec\"]=Vmean[2][0]\n bpars[\"v3_inc\"]=Vmean[2][1]\n bpars[\"t1\"]=Tmean[0]\n bpars[\"t2\"]=Tmean[1]\n bpars[\"t3\"]=Tmean[2]\n print(\"\"\"\ntau tau_sigma V_dec V_inc V_eta V_eta_dec V_eta_inc V_zeta V_zeta_dec V_zeta_inc\n\"\"\")\n outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(bpars[\"t1\"],bpars[\"t1_sigma\"],bpars[\"v1_dec\"],bpars[\"v1_inc\"],bpars[\"v1_zeta\"],bpars[\"v1_zeta_dec\"],bpars[\"v1_zeta_inc\"],bpars[\"v1_eta\"],bpars[\"v1_eta_dec\"],bpars[\"v1_eta_inc\"])\n print(outstring)\n outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(bpars[\"t2\"],bpars[\"t2_sigma\"],bpars[\"v2_dec\"],bpars[\"v2_inc\"],bpars[\"v2_zeta\"],bpars[\"v2_zeta_dec\"],bpars[\"v2_zeta_inc\"],bpars[\"v2_eta\"],bpars[\"v2_eta_dec\"],bpars[\"v2_eta_inc\"])\n print(outstring)\n outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(bpars[\"t3\"],bpars[\"t3_sigma\"],bpars[\"v3_dec\"],bpars[\"v3_inc\"],bpars[\"v3_zeta\"],bpars[\"v3_zeta_dec\"],bpars[\"v3_zeta_inc\"],bpars[\"v3_eta\"],bpars[\"v3_eta_dec\"],bpars[\"v3_eta_inc\"])\n print(outstring)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_json_online(self, url):\n if not requests:\n return False\n try:\n req = requests.get(url, timeout=.2)\n if not req.ok:\n return []\n return req\n except (requests.exceptions.ConnectTimeout, 
requests.exceptions.ConnectionError,\n requests.exceptions.ReadTimeout):\n return []", "response": "Get JSON from a URL. Returns the response object on success, an empty list on timeout or connection error, and False if the requests library is not available."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting a tiered list of all er and pmag_age codes for a given method type", "response": "def get_tiered_meth_category(self, mtype, all_codes, code_types):\n \"\"\"\n Get a tiered list of all er/pmag_age codes\n i.e. pmag_codes = {'anisotropy_codes': ['code1', 'code2'],\n 'sample_preparation': [code1, code2], ...}\n \"\"\"\n categories = Series(code_types[code_types[mtype] == True].index)\n codes = {cat: list(self.get_one_meth_type(cat, all_codes).index) for cat in categories}\n return codes"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_controlled_vocabularies(self, vocab_types=default_vocab_types):\n if len(VOCAB):\n self.set_vocabularies()\n return\n data = []\n controlled_vocabularies = []\n # try to get online\n if not set_env.OFFLINE:\n url = 'https://www2.earthref.org/vocabularies/controlled.json'\n try:\n raw = self.get_json_online(url)\n data = pd.DataFrame(raw.json())\n print('-I- Importing controlled vocabularies from https://earthref.org')\n except Exception as ex:\n pass\n #print(ex, type(ex))\n # use cached\n if not len(data):\n print('-I- Using cached vocabularies')\n fname = os.path.join(data_model_dir, \"controlled_vocabularies_December_10_2018.json\")\n data = pd.io.json.read_json(fname, encoding='utf-8-sig')\n # parse data\n possible_vocabularies = data.columns\n ## this line means, grab every single controlled vocabulary\n vocab_types = list(possible_vocabularies)\n\n def get_cv_from_list(lst):\n \"\"\"\n Check a validations list from the data model.\n If there is a controlled vocabulary validation,\n return which category of controlled vocabulary it is.\n This will generally be applied to the validations col\n of the data model\n \"\"\"\n try:\n for i in lst:\n if \"cv(\" in 
i:\n return i[4:-2]\n except TypeError:\n return None\n else:\n return None\n\n vocab_col_names = []\n data_model = self.data_model\n for dm_key in data_model.dm:\n df = data_model.dm[dm_key]\n df['vocab_name'] = df['validations'].apply(get_cv_from_list)\n lst = list(zip(df[df['vocab_name'].notnull()]['vocab_name'], df[df['vocab_name'].notnull()].index))\n # in lst, first value is the name of the controlled vocabulary\n # second value is the name of the dataframe column\n vocab_col_names.extend(lst)\n\n # vocab_col_names is now a list of tuples\n # consisting of the vocabulary name and the column name\n # i.e., (u'type', u'geologic_types')\n\n # remove duplicate col_names:\n vocab_col_names = sorted(set(vocab_col_names))\n\n # add in boolean category to controlled vocabularies\n bool_items = [{'item': True}, {'item': False}, {'item': 'true'},\n {'item': 'false'}, {'item': 0}, {'item': 1},\n {'item': 0.0}, {'item': 1.0},\n {'item': 't'}, {'item': 'f'},\n {'item': 'T'}, {'item': 'F'}]\n series = Series({'label': 'Boolean', 'items': bool_items})\n data['boolean'] = series\n # use vocabulary name to get possible values for the column name\n for vocab in vocab_col_names[:]:\n if vocab[0] == \"magic_table_column\":\n vocab_col_names.remove((\"magic_table_column\", \"table_column\"))\n continue\n items = data[vocab[0]]['items']\n stripped_list = [item['item'] for item in items]\n controlled_vocabularies.append(stripped_list)\n # create series with the column name as the index,\n # and the possible values as the values\n ind_values = [i[1] for i in vocab_col_names]\n vocabularies = pd.Series(controlled_vocabularies, index=ind_values)\n return vocabularies", "response": "Get all controlled vocabularies from Earthref 10. 
2018"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n\n\n #==================================================================================\n \n meas_file=\"magic_measurements.txt\"\n args=sys.argv\n dir_path='.'\n #\n # get name of file from command line\n #\n if '-WD' in args:\n ind=args.index('-WD')\n dir_path=args[ind+1]\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if \"-f\" in args:\n ind=args.index(\"-f\")\n meas_file=sys.argv[ind+1]\n else:\n meas_file=dir_path+'/'+meas_file\n WD=dir_path\n\n \n #======================================\n # functions\n #======================================\n\n def get_Data(magic_file):\n \n #------------------------------------------------\n # Read magic measurement file and sort to blocks\n #------------------------------------------------\n Data={}\n try:\n meas_data,file_type=pmag.magic_read(magic_file)\n except:\n print(\"-E- ERROR: Cant read magic_measurement.txt file. 
File is corrupted.\")\n return Data\n \n # get list of unique specimen names\n \n #sids=pmag.get_specs(meas_data) # samples ID's\n \n for rec in meas_data:\n s=rec[\"er_specimen_name\"]\n method_codes= rec[\"magic_method_codes\"].strip('\\n')\n method_codes.replace(\" \",\"\")\n methods=method_codes.split(\":\")\n if \"LP-AN-TRM\" in methods:\n if s not in list(Data.keys()):\n Data[s]={}\n if 'atrmblock' not in list(Data[s].keys()):\n Data[s]['atrmblock']=[]\n Data[s]['atrmblock'].append(rec)\n \n \n if \"LP-AN-ARM\" in methods:\n if s not in list(Data.keys()):\n Data[s]={}\n if 'aarmblock' not in list(Data[s].keys()):\n Data[s]['aarmblock']=[]\n Data[s]['aarmblock'].append(rec)\n return (Data) \n\n #======================================\n # better to put this one in pmagpy\n #======================================\n \n def calculate_aniso_parameters(B,K):\n \n aniso_parameters={}\n S_bs=dot(B,K)\n \n # normalize by trace\n trace=S_bs[0]+S_bs[1]+S_bs[2]\n S_bs=old_div(S_bs,trace)\n s1,s2,s3,s4,s5,s6=S_bs[0],S_bs[1],S_bs[2],S_bs[3],S_bs[4],S_bs[5]\n s_matrix=[[s1,s4,s6],[s4,s2,s5],[s6,s5,s3]]\n \n # calculate eigen vector,\n t,evectors=eig(s_matrix)\n # sort vectors\n t=list(t)\n t1=max(t)\n ix_1=t.index(t1)\n t3=min(t)\n ix_3=t.index(t3)\n for tt in range(3):\n if t[tt]!=t1 and t[tt]!=t3:\n t2=t[tt]\n ix_2=t.index(t2)\n \n v1=[evectors[0][ix_1],evectors[1][ix_1],evectors[2][ix_1]]\n v2=[evectors[0][ix_2],evectors[1][ix_2],evectors[2][ix_2]]\n v3=[evectors[0][ix_3],evectors[1][ix_3],evectors[2][ix_3]]\n \n \n DIR_v1=pmag.cart2dir(v1)\n DIR_v2=pmag.cart2dir(v2)\n DIR_v3=pmag.cart2dir(v3)\n \n \n aniso_parameters['anisotropy_s1']=\"%f\"%s1\n aniso_parameters['anisotropy_s2']=\"%f\"%s2\n aniso_parameters['anisotropy_s3']=\"%f\"%s3\n aniso_parameters['anisotropy_s4']=\"%f\"%s4\n aniso_parameters['anisotropy_s5']=\"%f\"%s5\n aniso_parameters['anisotropy_s6']=\"%f\"%s6\n aniso_parameters['anisotropy_degree']=\"%f\"%(old_div(t1,t3))\n 
aniso_parameters['anisotropy_t1']=\"%f\"%t1\n aniso_parameters['anisotropy_t2']=\"%f\"%t2\n aniso_parameters['anisotropy_t3']=\"%f\"%t3\n aniso_parameters['anisotropy_v1_dec']=\"%.1f\"%DIR_v1[0]\n aniso_parameters['anisotropy_v1_inc']=\"%.1f\"%DIR_v1[1]\n aniso_parameters['anisotropy_v2_dec']=\"%.1f\"%DIR_v2[0]\n aniso_parameters['anisotropy_v2_inc']=\"%.1f\"%DIR_v2[1]\n aniso_parameters['anisotropy_v3_dec']=\"%.1f\"%DIR_v3[0]\n aniso_parameters['anisotropy_v3_inc']=\"%.1f\"%DIR_v3[1]\n \n # modified from pmagpy:\n if old_div(len(K),3)==9 or old_div(len(K),3)==6 or old_div(len(K),3)==15:\n n_pos=old_div(len(K),3)\n tmpH = Matrices[n_pos]['tmpH']\n a=s_matrix\n S=0.\n comp=zeros((n_pos*3),'f')\n for i in range(n_pos):\n for j in range(3):\n index=i*3+j\n compare=a[j][0]*tmpH[i][0]+a[j][1]*tmpH[i][1]+a[j][2]*tmpH[i][2]\n comp[index]=compare\n for i in range(n_pos*3):\n d=old_div(K[i],trace) - comp[i] # del values\n S+=d*d\n nf=float(n_pos*3-6) # number of degrees of freedom\n if S >0: \n sigma=math.sqrt(old_div(S,nf))\n hpars=pmag.dohext(nf,sigma,[s1,s2,s3,s4,s5,s6])\n \n aniso_parameters['anisotropy_sigma']=\"%f\"%sigma\n aniso_parameters['anisotropy_ftest']=\"%f\"%hpars[\"F\"]\n aniso_parameters['anisotropy_ftest12']=\"%f\"%hpars[\"F12\"]\n aniso_parameters['anisotropy_ftest23']=\"%f\"%hpars[\"F23\"]\n aniso_parameters['result_description']=\"Critical F: %s\"%(hpars['F_crit'])\n aniso_parameters['anisotropy_F_crit']=\"%f\"%float(hpars['F_crit'])\n aniso_parameters['anisotropy_n']=n_pos\n \n return(aniso_parameters)\n \n \n #======================================\n # Main\n #======================================\n \n \n \n aniso_logfile=open(WD+\"/rmag_anisotropy.log\",'w')\n aniso_logfile.write(\"------------------------\\n\")\n aniso_logfile.write( \"-I- Start rmag anisrotropy script\\n\")\n aniso_logfile.write( \"------------------------\\n\")\n\n Data=get_Data(meas_file)\n #try: \n # Data=get_Data(meas_file)\n #except:\n # aniso_logfile.write( \"-E- Cant open 
measurement file %s\\n\" %meas_file)\n # print \"-E- Cant open measurement file %s\\n exiting\" %meas_file\n # exit()\n \n aniso_logfile.write( \"-I- Open measurement file %s\\n\" %meas_file)\n \n \n Data_anisotropy={} \n specimens=list(Data.keys())\n specimens.sort()\n \n \n \n #-----------------------------------\n # Prepare rmag_anisotropy.txt file for writing\n #-----------------------------------\n \n rmag_anisotropy_file =open(WD+\"/rmag_anisotropy.txt\",'w')\n rmag_anisotropy_file.write(\"tab\\trmag_anisotropy\\n\")\n \n rmag_results_file =open(WD+\"/rmag_results.txt\",'w')\n rmag_results_file.write(\"tab\\trmag_results\\n\")\n \n rmag_anistropy_header=['er_specimen_name','er_sample_name','er_site_name','anisotropy_type','anisotropy_n','anisotropy_description','anisotropy_s1','anisotropy_s2','anisotropy_s3','anisotropy_s4','anisotropy_s5','anisotropy_s6','anisotropy_sigma','anisotropy_alt','magic_experiment_names','magic_method_codes']\n \n String=\"\"\n for i in range (len(rmag_anistropy_header)):\n String=String+rmag_anistropy_header[i]+'\\t'\n rmag_anisotropy_file.write(String[:-1]+\"\\n\")\n \n \n \n rmag_results_header=['er_specimen_names','er_sample_names','er_site_names','anisotropy_type','magic_method_codes','magic_experiment_names','result_description','anisotropy_t1','anisotropy_t2','anisotropy_t3','anisotropy_ftest','anisotropy_ftest12','anisotropy_ftest23',\\\n 'anisotropy_v1_dec','anisotropy_v1_inc','anisotropy_v2_dec','anisotropy_v2_inc','anisotropy_v3_dec','anisotropy_v3_inc']\n \n \n String=\"\"\n for i in range (len(rmag_results_header)):\n String=String+rmag_results_header[i]+'\\t'\n rmag_results_file.write(String[:-1]+\"\\n\")\n \n #-----------------------------------\n # Matrices definitions:\n # A design matrix\n # B dot(inv(dot(A.transpose(),A)),A.transpose())\n # tmpH is used for sigma calculation (9,15 measurements only)\n # \n # Anisotropy tensor:\n #\n # |Mx| |s1 s4 s6| |Bx|\n # |My| = |s4 s2 s5| . 
|By|\n # |Mz| |s6 s5 s3| |Bz|\n #\n # A matrix (measurement matrix):\n # Each mesurement yields three lines in \"A\" matrix\n #\n # |Mi | |Bx 0 0 By 0 Bz| |s1|\n # |Mi+1| = |0 By 0 Bx Bz 0 | . |s2|\n # |Mi+2| |0 0 Bz 0 By Bx| |s3|\n # |s4|\n # |s5|\n #\n #-----------------------------------\n \n Matrices={}\n \n for n_pos in [6,9,15]:\n \n Matrices[n_pos]={}\n \n A=zeros((n_pos*3,6),'f')\n \n if n_pos==6:\n positions=[[0.,0.,1.],[90.,0.,1.],[0.,90.,1.],\\\n [180.,0.,1.],[270.,0.,1.],[0.,-90.,1.]]\n \n if n_pos==15:\n positions=[[315.,0.,1.],[225.,0.,1.],[180.,0.,1.],[135.,0.,1.],[45.,0.,1.],\\\n [90.,-45.,1.],[270.,-45.,1.],[270.,0.,1.],[270.,45.,1.],[90.,45.,1.],\\\n [180.,45.,1.],[180.,-45.,1.],[0.,-90.,1.],[0,-45.,1.],[0,45.,1.]]\n if n_pos==9:\n positions=[[315.,0.,1.],[225.,0.,1.],[180.,0.,1.],\\\n [90.,-45.,1.],[270.,-45.,1.],[270.,0.,1.],\\\n [180.,45.,1.],[180.,-45.,1.],[0.,-90.,1.]]\n \n \n tmpH=zeros((n_pos,3),'f') # define tmpH\n for i in range(len(positions)):\n CART=pmag.dir2cart(positions[i])\n a=CART[0];b=CART[1];c=CART[2]\n A[3*i][0]=a\n A[3*i][3]=b\n A[3*i][5]=c\n \n A[3*i+1][1]=b\n A[3*i+1][3]=a\n A[3*i+1][4]=c\n \n A[3*i+2][2]=c\n A[3*i+2][4]=b\n A[3*i+2][5]=a\n \n tmpH[i][0]=CART[0]\n tmpH[i][1]=CART[1]\n tmpH[i][2]=CART[2]\n \n B=dot(inv(dot(A.transpose(),A)),A.transpose())\n \n Matrices[n_pos]['A']=A\n Matrices[n_pos]['B']=B\n Matrices[n_pos]['tmpH']=tmpH\n \n \n \n \n \n for specimen in specimens:\n \n if 'atrmblock' in list(Data[specimen].keys()):\n \n #-----------------------------------\n # aTRM 6 positions\n #-----------------------------------\n \n aniso_logfile.write(\"-I- Start calculating ATRM tensor for specimen %s\\n \"%specimen)\n atrmblock=Data[specimen]['atrmblock']\n if len(atrmblock)<6:\n aniso_logfile.write(\"-W- specimen %s has not enough measurementf for ATRM calculation\\n\"%specimen)\n continue\n \n B=Matrices[6]['B']\n \n Reject_specimen = False\n \n # The zero field step is a \"baseline\"\n \n # Search the baseline in 
the ATRM measurement\n \n baseline=\"\"\n Alteration_check=\"\"\n Alteration_check_index=\"\"\n baselines=[]\n \n # search for baseline in atrm blocks\n for rec in atrmblock:\n dec=float(rec['measurement_dec'])\n inc=float(rec['measurement_inc'])\n moment=float(rec['measurement_magn_moment'])\n # find the temperature of the atrm\n if float(rec['treatment_dc_field'])!=0 and float(rec['treatment_temp'])!=273:\n atrm_temperature=float(rec['treatment_temp'])\n # find baseline\n if float(rec['treatment_dc_field'])==0 and float(rec['treatment_temp'])!=273:\n baselines.append(array(pmag.dir2cart([dec,inc,moment])))\n # Find alteration check\n #print rec['measurement_number']\n \n if len(baselines)!=0:\n aniso_logfile.write( \"-I- found ATRM baseline for specimen %s\\n\"%specimen)\n baselines=array(baselines)\n baseline=array([mean(baselines[:,0]),mean(baselines[:,1]),mean(baselines[:,2])]) \n \n else:\n baseline=zeros(3,'f')\n aniso_logfile.write( \"-I- No aTRM baseline for specimen %s\\n\"%specimen)\n \n # sort measurements\n \n M=zeros([6,3],'f')\n \n for rec in atrmblock:\n \n dec=float(rec['measurement_dec'])\n inc=float(rec['measurement_inc'])\n moment=float(rec['measurement_magn_moment'])\n CART=array(pmag.dir2cart([dec,inc,moment]))-baseline\n \n if float(rec['treatment_dc_field'])==0: # Ignore zero field steps\n continue\n if \"LT-PTRM-I\" in rec['magic_method_codes'].split(\":\"): # alteration check\n Alteration_check=CART\n Alteration_check_dc_field_phi=float(rec['treatment_dc_field_phi'])\n Alteration_check_dc_field_theta=float(rec['treatment_dc_field_theta'])\n if Alteration_check_dc_field_phi==0 and Alteration_check_dc_field_theta==0 :\n Alteration_check_index=0\n if Alteration_check_dc_field_phi==90 and Alteration_check_dc_field_theta==0 :\n Alteration_check_index=1\n if Alteration_check_dc_field_phi==0 and Alteration_check_dc_field_theta==90 :\n Alteration_check_index=2\n if Alteration_check_dc_field_phi==180 and Alteration_check_dc_field_theta==0 :\n 
Alteration_check_index=3\n if Alteration_check_dc_field_phi==270 and Alteration_check_dc_field_theta==0 :\n Alteration_check_index=4\n if Alteration_check_dc_field_phi==0 and Alteration_check_dc_field_theta==-90 :\n Alteration_check_index=5\n aniso_logfile.write( \"-I- found alteration check for specimen %s\\n\"%specimen)\n continue\n \n treatment_dc_field_phi=float(rec['treatment_dc_field_phi'])\n treatment_dc_field_theta=float(rec['treatment_dc_field_theta'])\n treatment_dc_field=float(rec['treatment_dc_field'])\n \n #+x, M[0]\n if treatment_dc_field_phi==0 and treatment_dc_field_theta==0 :\n M[0]=CART\n #+Y , M[1]\n if treatment_dc_field_phi==90 and treatment_dc_field_theta==0 :\n M[1]=CART\n #+Z , M[2]\n if treatment_dc_field_phi==0 and treatment_dc_field_theta==90 :\n M[2]=CART\n #-x, M[3]\n if treatment_dc_field_phi==180 and treatment_dc_field_theta==0 :\n M[3]=CART\n #-Y , M[4]\n if treatment_dc_field_phi==270 and treatment_dc_field_theta==0 :\n M[4]=CART\n #-Z , M[5]\n if treatment_dc_field_phi==0 and treatment_dc_field_theta==-90 :\n M[5]=CART\n \n # check if at least one measurement in missing\n for i in range(len(M)):\n if M[i][0]==0 and M[i][1]==0 and M[i][2]==0: \n aniso_logfile.write( \"-E- ERROR: missing atrm data for specimen %s\\n\"%(specimen))\n Reject_specimen=True\n \n # alteration check \n \n anisotropy_alt=0\n if Alteration_check!=\"\":\n for i in range(len(M)):\n if Alteration_check_index==i:\n M_1=sqrt(sum((array(M[i])**2)))\n M_2=sqrt(sum(Alteration_check**2))\n diff=abs(M_1-M_2)\n diff_ratio=old_div(diff,mean([M_1,M_2]))\n diff_ratio_perc=100*diff_ratio\n if diff_ratio_perc > anisotropy_alt:\n anisotropy_alt=diff_ratio_perc\n else:\n aniso_logfile.write( \"-W- Warning: no alteration check for specimen %s \\n \"%specimen )\n \n # Check for maximum difference in anti parallel directions.\n # if the difference between the two measurements is more than maximum_diff\n # The specimen is rejected\n \n # i.e. 
+x versus -x, +y versus -y, etc.s\n \n for i in range(3):\n M_1=sqrt(sum(array(M[i])**2))\n M_2=sqrt(sum(array(M[i+3])**2))\n \n diff=abs(M_1-M_2)\n diff_ratio=old_div(diff,max(M_1,M_2))\n diff_ratio_perc=100*diff_ratio\n \n if diff_ratio_perc>anisotropy_alt:\n anisotropy_alt=diff_ratio_perc\n \n if not Reject_specimen:\n \n # K vector (18 elements, M1[x], M1[y], M1[z], ... etc.) \n K=zeros(18,'f')\n K[0],K[1],K[2]=M[0][0],M[0][1],M[0][2]\n K[3],K[4],K[5]=M[1][0],M[1][1],M[1][2]\n K[6],K[7],K[8]=M[2][0],M[2][1],M[2][2]\n K[9],K[10],K[11]=M[3][0],M[3][1],M[3][2]\n K[12],K[13],K[14]=M[4][0],M[4][1],M[4][2]\n K[15],K[16],K[17]=M[5][0],M[5][1],M[5][2]\n \n if specimen not in list(Data_anisotropy.keys()):\n Data_anisotropy[specimen]={}\n aniso_parameters=calculate_aniso_parameters(B,K)\n Data_anisotropy[specimen]['ATRM']=aniso_parameters\n Data_anisotropy[specimen]['ATRM']['anisotropy_alt']=\"%.2f\"%anisotropy_alt \n Data_anisotropy[specimen]['ATRM']['anisotropy_type']=\"ATRM\"\n Data_anisotropy[specimen]['ATRM']['er_sample_name']=atrmblock[0]['er_sample_name']\n Data_anisotropy[specimen]['ATRM']['er_specimen_name']=specimen\n Data_anisotropy[specimen]['ATRM']['er_site_name']=atrmblock[0]['er_site_name']\n Data_anisotropy[specimen]['ATRM']['anisotropy_description']='Hext statistics adapted to ATRM'\n Data_anisotropy[specimen]['ATRM']['magic_experiment_names']=specimen+\";ATRM\"\n Data_anisotropy[specimen]['ATRM']['magic_method_codes']=\"LP-AN-TRM:AE-H\"\n #Data_anisotropy[specimen]['ATRM']['rmag_anisotropy_name']=specimen\n \n \n if 'aarmblock' in list(Data[specimen].keys()): \n \n #-----------------------------------\n # AARM - 6, 9 or 15 positions\n #-----------------------------------\n \n aniso_logfile.write( \"-I- Start calculating AARM tensors specimen %s\\n\"%specimen)\n \n aarmblock=Data[specimen]['aarmblock']\n if len(aarmblock)<12:\n aniso_logfile.write( \"-W- WARNING: not enough aarm measurement for specimen %s\\n\"%specimen)\n continue\n elif 
len(aarmblock)==12:\n n_pos=6\n B=Matrices[6]['B']\n M=zeros([6,3],'f')\n elif len(aarmblock)==18:\n n_pos=9\n B=Matrices[9]['B']\n M=zeros([9,3],'f')\n # 15 positions\n elif len(aarmblock)==30:\n n_pos=15\n B=Matrices[15]['B']\n M=zeros([15,3],'f')\n else:\n aniso_logfile.write( \"-E- ERROR: number of measurements in aarm block is incorrect sample %s\\n\"%specimen)\n continue\n \n Reject_specimen = False\n \n for i in range(n_pos):\n for rec in aarmblock:\n if float(rec['measurement_number'])==i*2+1:\n dec=float(rec['measurement_dec'])\n inc=float(rec['measurement_inc'])\n moment=float(rec['measurement_magn_moment']) \n M_baseline=array(pmag.dir2cart([dec,inc,moment]))\n \n if float(rec['measurement_number'])==i*2+2:\n dec=float(rec['measurement_dec'])\n inc=float(rec['measurement_inc'])\n moment=float(rec['measurement_magn_moment']) \n M_arm=array(pmag.dir2cart([dec,inc,moment]))\n M[i]=M_arm-M_baseline\n \n \n K=zeros(3*n_pos,'f')\n for i in range(n_pos):\n K[i*3]=M[i][0]\n K[i*3+1]=M[i][1]\n K[i*3+2]=M[i][2] \n \n if specimen not in list(Data_anisotropy.keys()):\n Data_anisotropy[specimen]={}\n aniso_parameters=calculate_aniso_parameters(B,K)\n Data_anisotropy[specimen]['AARM']=aniso_parameters\n Data_anisotropy[specimen]['AARM']['anisotropy_alt']=\"\" \n Data_anisotropy[specimen]['AARM']['anisotropy_type']=\"AARM\"\n Data_anisotropy[specimen]['AARM']['er_sample_name']=aarmblock[0]['er_sample_name']\n Data_anisotropy[specimen]['AARM']['er_site_name']=aarmblock[0]['er_site_name']\n Data_anisotropy[specimen]['AARM']['er_specimen_name']=specimen\n Data_anisotropy[specimen]['AARM']['anisotropy_description']='Hext statistics adapted to AARM'\n Data_anisotropy[specimen]['AARM']['magic_experiment_names']=specimen+\";AARM\"\n Data_anisotropy[specimen]['AARM']['magic_method_codes']=\"LP-AN-ARM:AE-H\"\n #Data_anisotropy[specimen]['AARM']['rmag_anisotropy_name']=specimen\n \n \n #----------------------------------- \n \n specimens=list(Data_anisotropy.keys())\n 
specimens.sort()\n \n # remove previous anisotropy data, and replace with the new one:\n s_list=list(Data.keys())\n for sp in s_list:\n if 'AniSpec' in list(Data[sp].keys()):\n del Data[sp]['AniSpec']\n for specimen in specimens:\n # if both AARM and ATRM exist prefer the AARM !!\n if 'AARM' in list(Data_anisotropy[specimen].keys()):\n TYPES=['AARM']\n if 'ATRM' in list(Data_anisotropy[specimen].keys()):\n TYPES=['ATRM']\n if 'AARM' in list(Data_anisotropy[specimen].keys()) and 'ATRM' in list(Data_anisotropy[specimen].keys()):\n TYPES=['ATRM','AARM']\n aniso_logfile.write( "-W- WARNING: both aarm and atrm data exist for specimen %s. using AARM by default. If you prefer using one of them, delete the other!\\n"%specimen)\n for TYPE in TYPES:\n String=""\n for i in range (len(rmag_anistropy_header)):\n try:\n String=String+Data_anisotropy[specimen][TYPE][rmag_anistropy_header[i]]+'\\t'\n except:\n String=String+"%f"%(Data_anisotropy[specimen][TYPE][rmag_anistropy_header[i]])+'\\t'\n rmag_anisotropy_file.write(String[:-1]+"\\n")\n \n String=""\n Data_anisotropy[specimen][TYPE]['er_specimen_names']=Data_anisotropy[specimen][TYPE]['er_specimen_name']\n Data_anisotropy[specimen][TYPE]['er_sample_names']=Data_anisotropy[specimen][TYPE]['er_sample_name']\n Data_anisotropy[specimen][TYPE]['er_site_names']=Data_anisotropy[specimen][TYPE]['er_site_name']\n for i in range (len(rmag_results_header)):\n try:\n String=String+Data_anisotropy[specimen][TYPE][rmag_results_header[i]]+'\\t'\n except:\n String=String+"%f"%(Data_anisotropy[specimen][TYPE][rmag_results_header[i]])+'\\t'\n rmag_results_file.write(String[:-1]+"\\n")\n \n if 'AniSpec' not in Data[specimen]:\n Data[specimen]['AniSpec']={}\n Data[specimen]['AniSpec'][TYPE]=Data_anisotropy[specimen][TYPE]\n \n aniso_logfile.write("------------------------\\n")\n aniso_logfile.write("-I- remanence_aniso_magic script finished successfully\\n")\n aniso_logfile.write( "------------------------\\n")\n \n 
rmag_anisotropy_file.close()\n print("Anisotropy tensor elements are saved in rmag_anisotropy.txt")\n print("Other anisotropy statistics are saved in rmag_results.txt")\n print("log file is in rmag_anisotropy.log")", "response": "This is the main function of the remanence_aniso_magic.py program."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef main():\n FIG = {} # plot dictionary\n FIG['eqarea'] = 1 # eqarea is figure 1\n in_file, plot_key, coord, crd = 'pmag_results.txt', 'all', \"0\", 'g'\n plotE, contour = 0, 0\n dir_path = '.'\n fmt = 'svg'\n verbose = pmagplotlib.verbose\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-WD' in sys.argv:\n ind = sys.argv.index('-WD')\n dir_path = sys.argv[ind+1]\n pmagplotlib.plot_init(FIG['eqarea'], 5, 5)\n if '-f' in sys.argv:\n ind = sys.argv.index(\"-f\")\n in_file = dir_path+\"/\"+sys.argv[ind+1]\n if '-obj' in sys.argv:\n ind = sys.argv.index('-obj')\n plot_by = sys.argv[ind+1]\n if plot_by == 'all':\n plot_key = 'all'\n if plot_by == 'sit':\n plot_key = 'er_site_name'\n if plot_by == 'sam':\n plot_key = 'er_sample_name'\n if plot_by == 'spc':\n plot_key = 'er_specimen_name'\n if '-c' in sys.argv:\n contour = 1\n plt = 0\n if '-sav' in sys.argv:\n plt = 1\n verbose = 0\n if '-ell' in sys.argv:\n plotE = 1\n ind = sys.argv.index('-ell')\n ell_type = sys.argv[ind+1]\n if ell_type == 'F':\n dist = 'F'\n if ell_type == 'K':\n dist = 'K'\n if ell_type == 'B':\n dist = 'B'\n if ell_type == 'Be':\n dist = 'BE'\n if ell_type == 'Bv':\n dist = 'BV'\n FIG['bdirs'] = 2\n pmagplotlib.plot_init(FIG['bdirs'], 5, 5)\n if '-crd' in sys.argv:\n ind = sys.argv.index(\"-crd\")\n crd = sys.argv[ind+1]\n if crd == 's':\n coord = \"-1\"\n if crd == 'g':\n coord = \"0\"\n if crd == 't':\n coord = \"100\"\n if '-fmt' in sys.argv:\n ind = sys.argv.index(\"-fmt\")\n fmt = sys.argv[ind+1]\n Dec_keys = ['site_dec', 'sample_dec', 'specimen_dec',\n 'measurement_dec', 
'average_dec', 'none']\n Inc_keys = ['site_inc', 'sample_inc', 'specimen_inc',\n 'measurement_inc', 'average_inc', 'none']\n Tilt_keys = ['tilt_correction', 'site_tilt_correction',\n 'sample_tilt_correction', 'specimen_tilt_correction', 'none']\n Dir_type_keys = ['', 'site_direction_type',\n 'sample_direction_type', 'specimen_direction_type']\n Name_keys = ['er_specimen_name', 'er_sample_name',\n 'er_site_name', 'pmag_result_name']\n data, file_type = pmag.magic_read(in_file)\n if file_type == 'pmag_results' and plot_key != \"all\":\n plot_key = plot_key+'s' # need plural for results table\n if verbose:\n print(len(data), ' records read from ', in_file)\n #\n #\n # find desired dec,inc data:\n #\n dir_type_key = ''\n #\n # get plotlist if not plotting all records\n #\n plotlist = []\n if plot_key != \"all\":\n plots = pmag.get_dictitem(data, plot_key, '', 'F')\n for rec in plots:\n if rec[plot_key] not in plotlist:\n plotlist.append(rec[plot_key])\n plotlist.sort()\n else:\n plotlist.append('All')\n for plot in plotlist:\n # if verbose: print plot\n DIblock = []\n GCblock = []\n SLblock, SPblock = [], []\n title = plot\n mode = 1\n dec_key, inc_key, tilt_key, name_key, k = \"\", \"\", \"\", \"\", 0\n if plot != \"All\":\n odata = pmag.get_dictitem(data, plot_key, plot, 'T')\n else:\n odata = data # data for this obj\n for dec_key in Dec_keys:\n # get all records with this dec_key not blank\n Decs = pmag.get_dictitem(odata, dec_key, '', 'F')\n if len(Decs) > 0:\n break\n for inc_key in Inc_keys:\n # get all records with this inc_key not blank\n Incs = pmag.get_dictitem(Decs, inc_key, '', 'F')\n if len(Incs) > 0:\n break\n for tilt_key in Tilt_keys:\n if tilt_key in Incs[0].keys():\n break # find the tilt_key for these records\n if tilt_key == 'none': # no tilt key in data, need to fix this with fake data which will be unknown tilt\n tilt_key = 'tilt_correction'\n for rec in Incs:\n rec[tilt_key] = ''\n # get all records matching specified coordinate system\n cdata = 
pmag.get_dictitem(Incs, tilt_key, coord, 'T')\n if coord == '0': # geographic\n # get all the blank records - assume geographic\n udata = pmag.get_dictitem(Incs, tilt_key, '', 'T')\n if len(cdata) == 0:\n crd = ''\n if len(udata) > 0:\n for d in udata:\n cdata.append(d)\n crd = crd+'u'\n for name_key in Name_keys:\n # get all records with this name_key not blank\n Names = pmag.get_dictitem(cdata, name_key, '', 'F')\n if len(Names) > 0:\n break\n for dir_type_key in Dir_type_keys:\n # get all records with this direction type\n Dirs = pmag.get_dictitem(cdata, dir_type_key, '', 'F')\n if len(Dirs) > 0:\n break\n if dir_type_key == \"\":\n dir_type_key = 'direction_type'\n locations, site, sample, specimen = \"\", \"\", \"\", \"\"\n for rec in cdata: # pick out the data\n if 'er_location_name' in rec.keys() and rec['er_location_name'] != \"\" and rec['er_location_name'] not in locations:\n locations = locations + \\\n rec['er_location_name'].replace(\"/\", \"\")+\"_\"\n if 'er_location_names' in rec.keys() and rec['er_location_names'] != \"\":\n locs = rec['er_location_names'].split(':')\n for loc in locs:\n if loc not in locations:\n locations = locations+loc.replace(\"/\", \"\")+'_'\n if plot_key == 'er_site_name' or plot_key == 'er_sample_name' or plot_key == 'er_specimen_name':\n site = rec['er_site_name']\n if plot_key == 'er_sample_name' or plot_key == 'er_specimen_name':\n sample = rec['er_sample_name']\n if plot_key == 'er_specimen_name':\n specimen = rec['er_specimen_name']\n if plot_key == 'er_site_names' or plot_key == 'er_sample_names' or plot_key == 'er_specimen_names':\n site = rec['er_site_names']\n if plot_key == 'er_sample_names' or plot_key == 'er_specimen_names':\n sample = rec['er_sample_names']\n if plot_key == 'er_specimen_names':\n specimen = rec['er_specimen_names']\n if dir_type_key not in rec.keys() or rec[dir_type_key] == \"\":\n rec[dir_type_key] = 'l'\n if 'magic_method_codes' not in rec.keys():\n rec['magic_method_codes'] = \"\"\n 
DIblock.append([float(rec[dec_key]), float(rec[inc_key])])\n SLblock.append([rec[name_key], rec['magic_method_codes']])\n if rec[tilt_key] == coord and rec[dir_type_key] != 'l' and rec[dec_key] != \"\" and rec[inc_key] != \"\":\n GCblock.append([float(rec[dec_key]), float(rec[inc_key])])\n SPblock.append([rec[name_key], rec['magic_method_codes']])\n if len(DIblock) == 0 and len(GCblock) == 0:\n if verbose:\n print(\"no records for plotting\")\n sys.exit()\n if verbose:\n for k in range(len(SLblock)):\n print('%s %s %7.1f %7.1f' % (\n SLblock[k][0], SLblock[k][1], DIblock[k][0], DIblock[k][1]))\n for k in range(len(SPblock)):\n print('%s %s %7.1f %7.1f' % (\n SPblock[k][0], SPblock[k][1], GCblock[k][0], GCblock[k][1]))\n if len(DIblock) > 0:\n if contour == 0:\n pmagplotlib.plot_eq(FIG['eqarea'], DIblock, title)\n else:\n pmagplotlib.plot_eq_cont(FIG['eqarea'], DIblock)\n else:\n pmagplotlib.plot_net(FIG['eqarea'])\n if len(GCblock) > 0:\n for rec in GCblock:\n pmagplotlib.plot_circ(FIG['eqarea'], rec, 90., 'g')\n if plotE == 1:\n ppars = pmag.doprinc(DIblock) # get principal directions\n nDIs, rDIs, npars, rpars = [], [], [], []\n for rec in DIblock:\n angle = pmag.angle([rec[0], rec[1]], [\n ppars['dec'], ppars['inc']])\n if angle > 90.:\n rDIs.append(rec)\n else:\n nDIs.append(rec)\n if dist == 'B': # do on whole dataset\n etitle = \"Bingham confidence ellipse\"\n bpars = pmag.dobingham(DIblock)\n for key in bpars.keys():\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (bpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (bpars[key]))\n npars.append(bpars['dec'])\n npars.append(bpars['inc'])\n npars.append(bpars['Zeta'])\n npars.append(bpars['Zdec'])\n npars.append(bpars['Zinc'])\n npars.append(bpars['Eta'])\n npars.append(bpars['Edec'])\n npars.append(bpars['Einc'])\n if dist == 'F':\n etitle = \"Fisher confidence cone\"\n if len(nDIs) > 2:\n fpars = pmag.fisher_mean(nDIs)\n for key in fpars.keys():\n if key != 'n' and verbose:\n 
print(\" \", key, '%7.1f' % (fpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (fpars[key]))\n mode += 1\n npars.append(fpars['dec'])\n npars.append(fpars['inc'])\n npars.append(fpars['alpha95']) # Beta\n npars.append(fpars['dec'])\n isign = abs(fpars['inc'])/fpars['inc']\n npars.append(fpars['inc']-isign*90.) # Beta inc\n npars.append(fpars['alpha95']) # gamma\n npars.append(fpars['dec']+90.) # Beta dec\n npars.append(0.) # Beta inc\n if len(rDIs) > 2:\n fpars = pmag.fisher_mean(rDIs)\n if verbose:\n print(\"mode \", mode)\n for key in fpars.keys():\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (fpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (fpars[key]))\n mode += 1\n rpars.append(fpars['dec'])\n rpars.append(fpars['inc'])\n rpars.append(fpars['alpha95']) # Beta\n rpars.append(fpars['dec'])\n isign = abs(fpars['inc'])/fpars['inc']\n rpars.append(fpars['inc']-isign*90.) # Beta inc\n rpars.append(fpars['alpha95']) # gamma\n rpars.append(fpars['dec']+90.) # Beta dec\n rpars.append(0.) 
# Beta inc\n if dist == 'K':\n etitle = \"Kent confidence ellipse\"\n if len(nDIs) > 3:\n kpars = pmag.dokent(nDIs, len(nDIs))\n if verbose:\n print(\"mode \", mode)\n for key in kpars.keys():\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (kpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (kpars[key]))\n mode += 1\n npars.append(kpars['dec'])\n npars.append(kpars['inc'])\n npars.append(kpars['Zeta'])\n npars.append(kpars['Zdec'])\n npars.append(kpars['Zinc'])\n npars.append(kpars['Eta'])\n npars.append(kpars['Edec'])\n npars.append(kpars['Einc'])\n if len(rDIs) > 3:\n kpars = pmag.dokent(rDIs, len(rDIs))\n if verbose:\n print(\"mode \", mode)\n for key in kpars.keys():\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (kpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (kpars[key]))\n mode += 1\n rpars.append(kpars['dec'])\n rpars.append(kpars['inc'])\n rpars.append(kpars['Zeta'])\n rpars.append(kpars['Zdec'])\n rpars.append(kpars['Zinc'])\n rpars.append(kpars['Eta'])\n rpars.append(kpars['Edec'])\n rpars.append(kpars['Einc'])\n else: # assume bootstrap\n if dist == 'BE':\n if len(nDIs) > 5:\n BnDIs = pmag.di_boot(nDIs)\n Bkpars = pmag.dokent(BnDIs, 1.)\n if verbose:\n print(\"mode \", mode)\n for key in Bkpars.keys():\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (Bkpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % (Bkpars[key]))\n mode += 1\n npars.append(Bkpars['dec'])\n npars.append(Bkpars['inc'])\n npars.append(Bkpars['Zeta'])\n npars.append(Bkpars['Zdec'])\n npars.append(Bkpars['Zinc'])\n npars.append(Bkpars['Eta'])\n npars.append(Bkpars['Edec'])\n npars.append(Bkpars['Einc'])\n if len(rDIs) > 5:\n BrDIs = pmag.di_boot(rDIs)\n Bkpars = pmag.dokent(BrDIs, 1.)\n if verbose:\n print(\"mode \", mode)\n for key in Bkpars.keys():\n if key != 'n' and verbose:\n print(\" \", key, '%7.1f' % (Bkpars[key]))\n if key == 'n' and verbose:\n print(\" \", key, ' %i' % 
(Bkpars[key]))\n mode += 1\n rpars.append(Bkpars['dec'])\n rpars.append(Bkpars['inc'])\n rpars.append(Bkpars['Zeta'])\n rpars.append(Bkpars['Zdec'])\n rpars.append(Bkpars['Zinc'])\n rpars.append(Bkpars['Eta'])\n rpars.append(Bkpars['Edec'])\n rpars.append(Bkpars['Einc'])\n etitle = \"Bootstrapped confidence ellipse\"\n elif dist == 'BV':\n sym = {'lower': ['o', 'c'], 'upper': [\n 'o', 'g'], 'size': 3, 'edgecolor': 'face'}\n if len(nDIs) > 5:\n BnDIs = pmag.di_boot(nDIs)\n pmagplotlib.plot_eq_sym(\n FIG['bdirs'], BnDIs, 'Bootstrapped Eigenvectors', sym)\n if len(rDIs) > 5:\n BrDIs = pmag.di_boot(rDIs)\n if len(nDIs) > 5: # plot on existing plots\n pmagplotlib.plot_di_sym(FIG['bdirs'], BrDIs, sym)\n else:\n pmagplotlib.plot_eq(\n FIG['bdirs'], BrDIs, 'Bootstrapped Eigenvectors')\n if dist == 'B':\n if len(nDIs) > 3 or len(rDIs) > 3:\n pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], npars, 0)\n elif len(nDIs) > 3 and dist != 'BV':\n pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], npars, 0)\n if len(rDIs) > 3:\n pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], rpars, 0)\n elif len(rDIs) > 3 and dist != 'BV':\n pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], rpars, 0)\n if verbose:\n pmagplotlib.draw_figs(FIG)\n #\n files = {}\n locations = locations[:-1]\n for key in FIG.keys():\n if pmagplotlib.isServer: # use server plot naming convention\n filename = 'LO:_'+locations+'_SI:_'+site+'_SA:_'+sample + \\\n '_SP:_'+specimen+'_CO:_'+crd+'_TY:_'+key+'_.'+fmt\n else: # use more readable plot naming convention\n filename = ''\n for item in [locations, site, sample, specimen, crd, key]:\n if item:\n item = item.replace(' ', '_')\n filename += item + '_'\n if filename.endswith('_'):\n filename = filename[:-1]\n filename += \".{}\".format(fmt)\n files[key] = filename\n if pmagplotlib.isServer:\n black = '#000000'\n purple = '#800080'\n titles = {}\n titles['eq'] = 'Equal Area Plot'\n FIG = pmagplotlib.add_borders(FIG, titles, black, purple)\n pmagplotlib.save_plots(FIG, 
files)\n elif verbose:\n ans = input(\n \" S[a]ve to save plot, [q]uit, Return to continue: \")\n if ans == \"q\":\n sys.exit()\n if ans == \"a\":\n pmagplotlib.save_plots(FIG, files)\n if plt:\n pmagplotlib.save_plots(FIG, files)", "response": "NAME\n eqarea_magic.py\n\n DESCRIPTION\n makes equal area projections from declination/inclination data\n\n SYNTAX\n eqarea_magic.py [command line options]\n\n INPUT\n takes magic formatted pmag_results, pmag_sites, pmag_samples or pmag_specimens\n\n OPTIONS\n -h prints help message and quits\n -f FILE: specify input magic format file from magic, default='pmag_results.txt'\n supported types=[magic_measurements,pmag_specimens, pmag_samples, pmag_sites, pmag_results, magic_web]\n -obj OBJ: specify level of plot [all, sit, sam, spc], default is all\n -crd [s,g,t]: specify coordinate system, [s]pecimen, [g]eographic, [t]ilt adjusted\n default is geographic, unspecified assumed geographic\n -fmt [svg,png,jpg] format for output plots\n -ell [F,K,B,Be,Bv] plot Fisher, Kent, Bingham, Bootstrap ellipses or Bootstrap eigenvectors\n -c plot as colour contour\n -sav save plot and quit quietly\n NOTE\n all: entire file; sit: site; sam: sample; spc: specimen"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main(command_line=True, **kwargs):\n\n #--------------------------------------\n # functions\n #--------------------------------------\n\n\n def sort_magic_file(path,ignore_lines_n,sort_by_this_name):\n '''\n reads a file with headers.
Each line is stored as a dictionary following the headers.\n Lines are sorted in DATA by the sort_by_this_name header\n DATA[sort_by_this_name]=[dictionary1,dictionary2,...]\n '''\n DATA={}\n fin=open(path,'r')\n #ignore first lines\n for i in range(ignore_lines_n):\n fin.readline()\n #header\n line=fin.readline()\n header=line.strip('\\n').split('\\t')\n #print header\n for line in fin.readlines():\n if line[0]==\"#\":\n continue\n tmp_data={}\n tmp_line=line.strip('\\n').split('\\t')\n #print tmp_line\n for i in range(len(tmp_line)):\n if i>= len(header):\n continue\n tmp_data[header[i]]=tmp_line[i]\n DATA[tmp_data[sort_by_this_name]]=tmp_data\n fin.close()\n return(DATA)\n\n def read_generic_file(path,average_replicates):\n '''\n reads a generic file format. If average_replicates==True average replicate measurements.\n Returns a Data dictionary with measurement lines sorted by specimen\n Data[specimen_name][dict1,dict2,...]\n '''\n Data={}\n Fin=open(path,'r')\n header=Fin.readline().strip('\\n').split('\\t')\n duplicates=[]\n for line in Fin.readlines():\n tmp_data={}\n l=line.strip('\\n').split('\\t')\n for i in range(min(len(header),len(l))):\n tmp_data[header[i]]=l[i]\n specimen=tmp_data['specimen']\n if specimen not in list(Data.keys()):\n Data[specimen]=[]\n Data[specimen].append(tmp_data)\n # search for duplicates\n for specimen in list(Data.keys()):\n x=len(Data[specimen])-1\n new_data=[]\n duplicates=[]\n for i in range(1,x):\n while i< len(Data[specimen]) and Data[specimen][i]['treatment']==Data[specimen][i-1]['treatment'] and Data[specimen][i]['treatment_type']==Data[specimen][i-1]['treatment_type']:\n duplicates.append(Data[specimen][i])\n del(Data[specimen][i])\n if len(duplicates)>0:\n if average_replicates:\n duplicates.append(Data[specimen][i-1])\n Data[specimen][i-1]=average_duplicates(duplicates)\n print(\"-W- WARNING: averaging %i duplicates for specimen %s treatment %s\"%(len(duplicates),specimen,duplicates[-1]['treatment']))\n duplicates=[]\n else:\n Data[specimen][i-1]=duplicates[-1]\n print(\"-W- WARNING: found %i duplicates for specimen %s treatment %s. Taking the last measurement only\"%(len(duplicates),specimen,duplicates[-1]['treatment']))\n duplicates=[]\n\n if i==len(Data[specimen])-1:\n break\n\n return(Data)\n\n def average_duplicates(duplicates):\n '''\n average replicate measurements.\n '''\n carts_s,carts_g,carts_t=[],[],[]\n for rec in duplicates:\n moment=float(rec['moment'])\n if 'dec_s' in list(rec.keys()) and 'inc_s' in list(rec.keys()):\n if rec['dec_s']!=\"\" and rec['inc_s']!=\"\":\n dec_s=float(rec['dec_s'])\n inc_s=float(rec['inc_s'])\n cart_s=pmag.dir2cart([dec_s,inc_s,moment])\n carts_s.append(cart_s)\n if 'dec_g' in list(rec.keys()) and 'inc_g' in list(rec.keys()):\n if rec['dec_g']!=\"\" and rec['inc_g']!=\"\":\n dec_g=float(rec['dec_g'])\n inc_g=float(rec['inc_g'])\n cart_g=pmag.dir2cart([dec_g,inc_g,moment])\n carts_g.append(cart_g)\n if 'dec_t' in list(rec.keys()) and 'inc_t' in list(rec.keys()):\n if rec['dec_t']!=\"\" and rec['inc_t']!=\"\":\n dec_t=float(rec['dec_t'])\n inc_t=float(rec['inc_t'])\n cart_t=pmag.dir2cart([dec_t,inc_t,moment])\n carts_t.append(cart_t)\n if len(carts_s)>0:\n carts=scipy.array(carts_s)\n x_mean=scipy.mean(carts[:,0])\n y_mean=scipy.mean(carts[:,1])\n z_mean=scipy.mean(carts[:,2])\n mean_dir=pmag.cart2dir([x_mean,y_mean,z_mean])\n mean_dec_s=\"%.2f\"%mean_dir[0]\n mean_inc_s=\"%.2f\"%mean_dir[1]\n mean_moment=\"%10.3e\"%mean_dir[2]\n else:\n mean_dec_s,mean_inc_s=\"\",\"\"\n if len(carts_g)>0:\n carts=scipy.array(carts_g)\n x_mean=scipy.mean(carts[:,0])\n y_mean=scipy.mean(carts[:,1])\n z_mean=scipy.mean(carts[:,2])\n mean_dir=pmag.cart2dir([x_mean,y_mean,z_mean])\n mean_dec_g=\"%.2f\"%mean_dir[0]\n mean_inc_g=\"%.2f\"%mean_dir[1]\n mean_moment=\"%10.3e\"%mean_dir[2]\n else:\n mean_dec_g,mean_inc_g=\"\",\"\"\n\n if len(carts_t)>0:\n carts=scipy.array(carts_t)\n x_mean=scipy.mean(carts[:,0])\n y_mean=scipy.mean(carts[:,1])\n z_mean=scipy.mean(carts[:,2])\n mean_dir=pmag.cart2dir([x_mean,y_mean,z_mean])\n mean_dec_t=\"%.2f\"%mean_dir[0]\n
mean_inc_t=\"%.2f\"%mean_dir[1]\n mean_moment=\"%10.3e\"%mean_dir[2]\n else:\n mean_dec_t,mean_inc_t=\"\",\"\"\n\n meanrec={}\n for key in list(duplicates[0].keys()):\n if key in ['dec_s','inc_s','dec_g','inc_g','dec_t','inc_t','moment']:\n continue\n else:\n meanrec[key]=duplicates[0][key]\n meanrec['dec_s']=mean_dec_s\n meanrec['dec_g']=mean_dec_g\n meanrec['dec_t']=mean_dec_t\n meanrec['inc_s']=mean_inc_s\n meanrec['inc_g']=mean_inc_g\n meanrec['inc_t']=mean_inc_t\n meanrec['moment']=mean_moment\n return meanrec\n\n def get_upper_level_name(name,nc):\n '''\n get sample/site name from specimen/sample using naming convention\n '''\n if float(nc[0])==0:\n if float(nc[1])!=0:\n number_of_char=int(nc[1])\n high_name=name[:number_of_char]\n else:\n high_name=name\n elif float(nc[0])==1:\n if float(nc[1])!=0:\n number_of_char=int(nc[1])*-1\n high_name=name[:number_of_char]\n else:\n high_name=name\n elif float(nc[0])==2:\n d=str(nc[1])\n name_splitted=name.split(d)\n if len(name_splitted)==1:\n high_name=name_splitted[0]\n else:\n high_name=d.join(name_splitted[:-1])\n else:\n high_name=name\n return high_name\n\n def merge_pmag_recs(old_recs):\n recs={}\n recs=copy.deepcopy(old_recs)\n headers=[]\n for rec in recs:\n for key in list(rec.keys()):\n if key not in headers:\n headers.append(key)\n for rec in recs:\n for header in headers:\n if header not in list(rec.keys()):\n rec[header]=\"\"\n return recs\n\n\n # initialize some variables\n experiment = ''\n sample_nc = [1, 0]\n site_nc = [1, 0]\n meas_file = \"magic_measurements.txt\"\n labfield = 0\n\n #--------------------------------------\n # get command line arguments\n #--------------------------------------\n\n if command_line:\n args=sys.argv\n user=\"\"\n if \"-h\" in args:\n print(main.__doc__)\n return False\n if \"-usr\" in args:\n ind=args.index(\"-usr\")\n user=args[ind+1]\n else:\n user=\"\"\n if '-F' in args:\n ind=args.index(\"-F\")\n meas_file=args[ind+1]\n if '-Fsa' in args:\n 
ind=args.index(\"-Fsa\")\n samp_file=args[ind+1]\n else:\n samp_file=\"er_samples.txt\"\n\n if '-f' in args:\n ind=args.index(\"-f\")\n magfile=args[ind+1]\n \n if \"-dc\" in args:\n ind=args.index(\"-dc\")\n labfield=float(args[ind+1])*1e-6\n labfield_phi=float(args[ind+2])\n labfield_theta=float(args[ind+3])\n if '-exp' in args:\n ind=args.index(\"-exp\")\n experiment=args[ind+1]\n if \"-samp\" in args:\n ind=args.index(\"-samp\")\n sample_nc=[]\n sample_nc.append(args[ind+1])\n sample_nc.append(args[ind+2])\n if \"-site\" in args:\n ind=args.index(\"-site\")\n site_nc=[]\n site_nc.append(args[ind+1])\n site_nc.append(args[ind+2])\n if \"-loc\" in args:\n ind=args.index(\"-loc\")\n er_location_name=args[ind+1]\n else:\n er_location_name=\"\"\n if \"-A\" in args:\n noave=1\n else:\n noave=0\n\n if \"-WD\" in args:\n ind=args.index(\"-WD\")\n WD=args[ind+1]\n os.chdir(WD)\n\n # unpack keyword args if using as module\n if not command_line:\n user = kwargs.get('user', '')\n meas_file = kwargs.get('meas_file', 'magic_measurements.txt')\n samp_file = kwargs.get('samp_file', 'er_samples.txt')\n magfile = kwargs.get('magfile', '')\n labfield = int(kwargs.get('labfield', 0))\n if labfield:\n labfield *= 1e-6\n labfield_phi = int(kwargs.get('labfield_phi', 0))\n labfield_theta = int(kwargs.get('labfield_theta', 0))\n experiment = kwargs.get('experiment', '')\n cooling_times = kwargs.get('cooling_times_list', '')\n sample_nc = kwargs.get('sample_nc', [1, 0])\n site_nc = kwargs.get('site_nc', [1, 0])\n er_location_name = kwargs.get('er_location_name', '')\n noave = kwargs.get('noave', 0) # 0 is default, means do average\n WD = kwargs.get('WD', '.')\n #os.chdir(WD)\n\n # format and validate variables\n if magfile:\n try:\n input=open(magfile,'r')\n except:\n print(\"bad mag file:\",magfile)\n return False, \"bad mag file\"\n else:\n print(\"mag_file field is required option\")\n print(main.__doc__)\n return False, \"mag_file field is required option\"\n\n if not experiment:\n 
print(\"-exp is required option. Please provide experiment type of: Demag, PI, ATRM n (n of positions), CR (see below for format), NLT\")\n print(main.__doc__)\n return False, \"-exp is required option\"\n\n if experiment=='ATRM':\n if command_line:\n ind=args.index(\"ATRM\")\n atrm_n_pos=int(args[ind+1])\n else:\n atrm_n_pos = 6\n\n if experiment=='AARM':\n if command_line:\n ind=args.index(\"AARM\")\n aarm_n_pos=int(args[ind+1])\n else:\n aarm_n_pos = 6\n\n if experiment=='CR':\n if command_line:\n ind=args.index(\"CR\")\n cooling_times=args[ind+1]\n cooling_times_list=cooling_times.split(',')\n # if not command line, cooling_times_list is already set\n\n\n\n\n #--------------------------------------\n # read data from er_samples.txt\n #--------------------------------------\n\n #if \"-fsa\" in args:\n # ind=args.index(\"-fsa\")\n # er_sample_file=args[ind+1]\n #else:\n # er_sample_file=\"er_samples.txt\"\n\n er_sample_data={}\n #er_sample_data=sort_magic_file(samp_file,1,'er_sample_name')\n try:\n er_sample_data=sort_magic_file(samp_file,1,'er_sample_name')\n print(\"-I- Found er_samples.txt\")\n print('-I- sample information will be appended to existing er_samples.txt file')\n except:\n print(\"-I- Cant find file er_samples.txt\")\n print('-I- sample information will be stored in new er_samples.txt file')\n\n #--------------------------------------\n # read data from generic file\n #--------------------------------------\n\n if noave:\n mag_data=read_generic_file(magfile,False)\n else:\n mag_data=read_generic_file(magfile,True)\n\n #--------------------------------------\n # for each specimen get the data, and translate it to MagIC format\n #--------------------------------------\n\n ErSamplesRecs=[]\n MagRecs=[]\n specimens_list=list(mag_data.keys())\n specimens_list.sort()\n for specimen in specimens_list:\n measurement_running_number=0\n this_specimen_treatments=[] # a list of all treatments\n MagRecs_this_specimen=[]\n LP_this_specimen=[] # a list of all 
lab protocols\n IZ,ZI=0,0 # counter for IZ and ZI steps\n\n for meas_line in mag_data[specimen]:\n\n #------------------\n # trivial MagRec data\n #------------------\n\n MagRec={}\n MagRec['er_citation_names']=\"This study\"\n MagRec[\"er_specimen_name\"]=meas_line['specimen']\n MagRec[\"er_sample_name\"]=get_upper_level_name(MagRec[\"er_specimen_name\"],sample_nc)\n MagRec[\"er_site_name\"]=get_upper_level_name(MagRec[\"er_sample_name\"],site_nc)\n MagRec['er_location_name']=er_location_name\n MagRec['er_analyst_mail_names']=user\n MagRec[\"magic_instrument_codes\"]=\"\"\n MagRec[\"measurement_flag\"]='g'\n MagRec[\"measurement_number\"]=\"%i\"%measurement_running_number\n\n MagRec[\"measurement_magn_moment\"]='%10.3e'%(float(meas_line[\"moment\"])*1e-3) # in Am^2\n MagRec[\"measurement_temp\"]='273.' # room temp in kelvin\n\n #------------------\n # decode treatments from treatment column in the generic file\n #------------------\n\n treatment=[]\n treatment_code=str(meas_line['treatment']).split(\".\")\n treatment.append(float(treatment_code[0]))\n if len(treatment_code)==1:\n treatment.append(0)\n else:\n treatment.append(float(treatment_code[1]))\n\n #------------------\n # lab field direction\n #------------------\n\n if experiment in ['PI','NLT','CR']:\n\n if float(treatment[1])==0:\n MagRec[\"treatment_dc_field\"]=\"0\"\n MagRec[\"treatment_dc_field_phi\"]=\"0\"\n MagRec[\"treatment_dc_field_theta\"]=\"0\"\n elif not labfield:\n print(\"-W- WARNING: labfield (-dc) is a required argument for this experiment type\")\n return False, \"labfield (-dc) is a required argument for this experiment type\"\n\n else:\n MagRec[\"treatment_dc_field\"]='%8.3e'%(float(labfield))\n MagRec[\"treatment_dc_field_phi\"]=\"%.2f\"%(float(labfield_phi))\n MagRec[\"treatment_dc_field_theta\"]=\"%.2f\"%(float(labfield_theta))\n else:\n MagRec[\"treatment_dc_field\"]=\"\"\n MagRec[\"treatment_dc_field_phi\"]=\"\"\n MagRec[\"treatment_dc_field_theta\"]=\"\"\n\n #------------------\n 
# treatment temperature/peak field\n #------------------\n\n if experiment == 'Demag':\n if meas_line['treatment_type']=='A':\n MagRec['treatment_temp']=\"273.\"\n MagRec[\"treatment_ac_field\"]=\"%.3e\"%(treatment[0]*1e-3)\n elif meas_line['treatment_type']=='N':\n MagRec['treatment_temp']=\"273.\"\n MagRec[\"treatment_ac_field\"]=\"\"\n else:\n MagRec['treatment_temp']=\"%.2f\"%(treatment[0]+273.)\n MagRec[\"treatment_ac_field\"]=\"\"\n else:\n MagRec['treatment_temp']=\"%.2f\"%(treatment[0]+273.)\n MagRec[\"treatment_ac_field\"]=\"\"\n\n\n #---------------------\n # Lab treatment\n # Lab protocol\n #---------------------\n\n #---------------------\n # Lab treatment and lab protocol for NRM:\n #---------------------\n\n if float(meas_line['treatment'])==0:\n LT=\"LT-NO\"\n LP=\"\" # will be filled later after finishing reading all measurement lines\n\n #---------------------\n # Lab treatment and lab protocol for paleointensity experiment\n #---------------------\n\n elif experiment =='PI':\n LP=\"LP-PI-TRM\"\n if treatment[1]==0:\n LT=\"LT-T-Z\"\n elif treatment[1]==1 or treatment[1]==10: # infield\n LT=\"LT-T-I\"\n elif treatment[1]==2 or treatment[1]==20: # pTRM check\n LT=\"LT-PTRM-I\"\n LP=LP+\":\"+\"LP-PI-ALT-PTRM\"\n elif treatment[1]==3 or treatment[1]==30: # Tail check\n LT=\"LT-PTRM-MD\"\n LP=LP+\":\"+\"LP-PI-BT-MD\"\n elif treatment[1]==4 or treatment[1]==40: # Additivity check\n LT=\"LT-PTRM-AC\"\n LP=LP+\":\"+\"LP-PI-BT-MD\"\n elif treatment[1]==5 or treatment[1]==50: # Thellier protocol, second infield step\n LT=\"LT-T-I\"\n LP=LP+\":\"+\"LP-PI-II\"\n\n # adjust field direction in Thellier protocol\n MagRec[\"treatment_dc_field_phi\"]=\"%.2f\"%( (float(labfield_phi) +180.0)%360. )\n MagRec[\"treatment_dc_field_theta\"]=\"%.2f\"%( float(labfield_theta)*-1 )\n\n else:\n print(\"-E- unknown measurement code specimen %s treatment %s\"%(meas_line['specimen'],meas_line['treatment']))\n MagRec={}\n continue\n # save all treatments in a list\n # we will use this later to distinguish between ZI / IZ / and IZZI\n\n this_specimen_treatments.append(float(meas_line['treatment']))\n if LT==\"LT-T-Z\":\n if float(treatment[0]+0.1) in this_specimen_treatments:\n LP=LP+\":\"+\"LP-PI-IZ\"\n if LT==\"LT-T-I\":\n if float(treatment[0]+0.0) in this_specimen_treatments:\n LP=LP+\":\"+\"LP-PI-ZI\"\n #---------------------\n # Lab treatment and lab protocol for demag experiment\n #---------------------\n\n elif \"Demag\" in experiment:\n if meas_line['treatment_type']=='A':\n LT=\"LT-AF-Z\"\n LP=\"LP-DIR-AF\"\n else:\n LT=\"LT-T-Z\"\n LP=\"LP-DIR-T\"\n\n #---------------------\n # Lab treatment and lab protocol for ATRM experiment\n #---------------------\n\n elif experiment in ['ATRM','AARM']:\n\n if experiment=='ATRM':\n LP=\"LP-AN-TRM\"\n n_pos=atrm_n_pos\n if n_pos!=6:\n print(\"the program does not support ATRM in %i positions.\"%n_pos)\n continue\n\n if experiment=='AARM':\n LP=\"LP-AN-ARM\"\n n_pos=aarm_n_pos\n if n_pos!=6:\n print(\"the program does not support AARM in %i positions.\"%n_pos)\n continue\n\n if treatment[1]==0:\n if experiment=='ATRM':\n LT=\"LT-T-Z\"\n MagRec['treatment_temp']=\"%.2f\"%(treatment[0]+273.)\n MagRec[\"treatment_ac_field\"]=\"\"\n\n else:\n LT=\"LT-AF-Z\"\n MagRec['treatment_temp']=\"273.\"\n MagRec[\"treatment_ac_field\"]=\"%.3e\"%(treatment[0]*1e-3)\n MagRec[\"treatment_dc_field\"]='0'\n MagRec[\"treatment_dc_field_phi\"]='0'\n MagRec[\"treatment_dc_field_theta\"]='0'\n else:\n if experiment=='ATRM':\n if float(treatment[1])==70 or float(treatment[1])==7: # alteration check as final measurement\n LT=\"LT-PTRM-I\"\n else:\n LT=\"LT-T-I\"\n else:\n LT=\"LT-AF-I\"\n MagRec[\"treatment_dc_field\"]='%8.3e'%(float(labfield))\n\n # find the direction of the lab field in two ways:\n\n # (1) using the treatment coding (XX.1=+x, XX.2=+y, XX.3=+z, XX.4=-x, XX.5=-y, XX.6=-z)\n tdec=[0,90,0,180,270,0,0,90,0]\n tinc=[0,0,90,0,0,-90,0,0,90]\n if treatment[1] < 10:\n ipos_code=int(treatment[1])-1\n else:\n ipos_code=int(old_div(treatment[1],10))-1\n\n # (2) using the magnetization\n if meas_line[\"dec_s\"]!=\"\":\n DEC=float(meas_line[\"dec_s\"])\n INC=float(meas_line[\"inc_s\"])\n elif meas_line[\"dec_g\"]!=\"\":\n DEC=float(meas_line[\"dec_g\"])\n INC=float(meas_line[\"inc_g\"])\n elif meas_line[\"dec_t\"]!=\"\":\n DEC=float(meas_line[\"dec_t\"])\n INC=float(meas_line[\"inc_t\"])\n if DEC<0 and DEC>-359:\n DEC=360.+DEC\n\n if INC < 45 and INC > -45:\n if DEC>315 or DEC<45: ipos_guess=0\n if DEC>45 and DEC<135: ipos_guess=1\n if DEC>135 and DEC<225: ipos_guess=3\n if DEC>225 and DEC<315: ipos_guess=4\n else:\n if INC >45: ipos_guess=2\n if INC <-45: ipos_guess=5\n # prefer the guess over the code\n ipos=ipos_guess\n # check it\n if treatment[1]!= 7 and treatment[1]!= 70:\n if ipos_guess!=ipos_code:\n print(\"-W- WARNING: check specimen %s step %s, anisotropy measurements, coding does not match the direction of the lab field\"%(specimen,meas_line['treatment']))\n MagRec[\"treatment_dc_field_phi\"]='%7.1f' %(tdec[ipos])\n MagRec[\"treatment_dc_field_theta\"]='%7.1f'% (tinc[ipos])\n\n\n #---------------------\n # Lab treatment and lab protocol for cooling rate experiment\n #---------------------\n\n elif experiment == \"CR\":\n\n LP=\"LP-CR-TRM\"\n MagRec[\"treatment_temp\"]='%8.3e' % (float(treatment[0])+273.) # temp in kelvin\n\n if treatment[1]==0:\n LT=\"LT-T-Z\"\n MagRec[\"treatment_dc_field\"]=\"0\"\n MagRec[\"treatment_dc_field_phi\"]='0'\n MagRec[\"treatment_dc_field_theta\"]='0'\n else:\n if treatment[1]==7: # alteration check as final measurement\n LT=\"LT-PTRM-I\"\n else:\n LT=\"LT-T-I\"\n MagRec[\"treatment_dc_field\"]='%8.3e'%(labfield)\n MagRec[\"treatment_dc_field_phi\"]='%7.1f' % (labfield_phi) # labfield phi\n MagRec[\"treatment_dc_field_theta\"]='%7.1f' % (labfield_theta) # labfield theta\n\n indx=int(treatment[1])-1\n # alteration check marked as 0.7 in the measurement file\n if indx==6:\n cooling_time= cooling_times_list[-1]\n else:\n cooling_time=cooling_times_list[indx]\n MagRec[\"measurement_description\"]=\"cooling_rate\"+\":\"+cooling_time+\":\"+\"K/min\"\n\n\n #---------------------\n # Lab treatment and lab protocol for NLT experiment\n #---------------------\n\n elif 'NLT' in experiment:\n print(\"NLT experiment files are not yet supported. Contact rshaar@ucsd.edu\")\n\n #---------------------\n # magic_method_codes for this measurement only\n # LP will be fixed after all measurement lines are read\n #---------------------\n\n MagRec[\"magic_method_codes\"]=LT+\":\"+LP\n\n #---------------------\n # Demag experiments only:\n # search if orientation data exists in er_samples.txt\n # if not: create one and save\n #---------------------\n\n # see if core azimuth and tilt-corrected data are in er_samples.txt\n sample=MagRec[\"er_sample_name\"]\n found_sample_azimuth,found_sample_dip,found_sample_bed_dip_direction,found_sample_bed_dip=False,False,False,False\n if sample in list(er_sample_data.keys()):\n if \"sample_azimuth\" in list(er_sample_data[sample].keys()) and er_sample_data[sample]['sample_azimuth'] !=\"\":\n sample_azimuth=float(er_sample_data[sample]['sample_azimuth'])\n found_sample_azimuth=True\n if \"sample_dip\" in list(er_sample_data[sample].keys()) and er_sample_data[sample]['sample_dip']!=\"\":\n
sample_dip=float(er_sample_data[sample]['sample_dip'])\n found_sample_dip=True\n if \"sample_bed_dip_direction\" in list(er_sample_data[sample].keys()) and er_sample_data[sample]['sample_bed_dip_direction']!=\"\":\n sample_bed_dip_direction=float(er_sample_data[sample]['sample_bed_dip_direction'])\n found_sample_bed_dip_direction=True\n if \"sample_bed_dip\" in list(er_sample_data[sample].keys()) and er_sample_data[sample]['sample_bed_dip']!=\"\":\n sample_bed_dip=float(er_sample_data[sample]['sample_bed_dip'])\n found_sample_bed_dip=True\n else:\n er_sample_data[sample]={}\n\n #--------------------\n # deal with specimen orientation and different coordinate system\n #--------------------\n\n found_s,found_geo,found_tilt=False,False,False\n if \"dec_s\" in list(meas_line.keys()) and \"inc_s\" in list(meas_line.keys()):\n if meas_line[\"dec_s\"]!=\"\" and meas_line[\"inc_s\"]!=\"\":\n found_s=True\n MagRec[\"measurement_dec\"]=meas_line[\"dec_s\"]\n MagRec[\"measurement_inc\"]=meas_line[\"inc_s\"]\n if \"dec_g\" in list(meas_line.keys()) and \"inc_g\" in list(meas_line.keys()):\n if meas_line[\"dec_g\"]!=\"\" and meas_line[\"inc_g\"]!=\"\":\n found_geo=True\n if \"dec_t\" in list(meas_line.keys()) and \"inc_t\" in list(meas_line.keys()):\n if meas_line[\"dec_t\"]!=\"\" and meas_line[\"inc_t\"]!=\"\":\n found_tilt=True\n\n #-----------------------------\n # specimen coordinates: no\n # geographic coordinates: yes\n #-----------------------------\n\n if found_geo and not found_s:\n MagRec[\"measurement_dec\"]=meas_line[\"dec_g\"]\n MagRec[\"measurement_inc\"]=meas_line[\"inc_g\"]\n\n # core azimuth/plunge is not in er_samples.txt\n if not found_sample_dip or not found_sample_azimuth:\n er_sample_data[sample]['sample_azimuth']=\"0\"\n er_sample_data[sample]['sample_dip']=\"0\"\n\n # core azimuth/plunge is in er_samples.txt\n else:\n sample_azimuth=float(er_sample_data[sample]['sample_azimuth'])\n sample_dip=float(er_sample_data[sample]['sample_dip'])\n if 
sample_azimuth!=0 and sample_dip!=0:\n print(\"-W- WARNING: delete core azimuth/plunge in er_samples.txt\\n\\\n because dec_s and inc_s are unavailable\")\n\n #-----------------------------\n # specimen coordinates: no\n # geographic coordinates: no\n #-----------------------------\n if not found_geo and not found_s:\n print(\"-E- ERROR: sample %s does not have dec_s/inc_s or dec_g/inc_g. Ignore specimen %s \"%(sample,specimen))\n break\n\n #-----------------------------\n # specimen coordinates: yes\n # geographic coordinates: yes\n #\n # comment: Ron, this needs to be tested !!\n #-----------------------------\n if found_geo and found_s:\n\n cdec,cinc=float(meas_line[\"dec_s\"]),float(meas_line[\"inc_s\"])\n gdec,ginc=float(meas_line[\"dec_g\"]),float(meas_line[\"inc_g\"])\n az,pl=pmag.get_azpl(cdec,cinc,gdec,ginc)\n\n # core azimuth/plunge is not in er_samples.txt:\n # calculate core az/pl and add it to er_samples.txt\n if not found_sample_dip or not found_sample_azimuth:\n er_sample_data[sample]['sample_azimuth']=\"%.1f\"%az\n er_sample_data[sample]['sample_dip']=\"%.1f\"%pl\n\n # core azimuth/plunge is in er_samples.txt\n else:\n if float(er_sample_data[sample]['sample_azimuth'])!= az:\n print(\"-E- ERROR in sample_azimuth sample %s. Check it! using the value in er_samples.txt\"%sample)\n\n if float(er_sample_data[sample]['sample_dip'])!= pl:\n print(\"-E- ERROR in sample_dip sample %s. Check it!
using the value in er_samples.txt\"%sample)\n\n #-----------------------------\n # specimen coordinates: yes\n # geographic coordinates: no\n #-----------------------------\n if not found_geo and found_s:\n if found_sample_dip and found_sample_azimuth:\n pass\n # (nothing to do)\n else:\n if \"Demag\" in experiment:\n print(\"-W- WARNING: missing sample_dip or sample_azimuth for sample %s\"%sample)\n\n #-----------------------------\n # tilt-corrected coordinates: yes\n # geographic coordinates: no\n #-----------------------------\n if found_tilt and not found_geo:\n print(\"-E- ERROR: missing geographic data for sample %s. Ignoring tilt-corrected data \"%sample)\n if found_tilt and found_geo:\n dec_geo,inc_geo=float(meas_line[\"dec_g\"]),float(meas_line[\"inc_g\"])\n dec_tilt,inc_tilt=float(meas_line[\"dec_t\"]),float(meas_line[\"inc_t\"])\n if dec_geo==dec_tilt and inc_geo==inc_tilt:\n DipDir,Dip=0.,0.\n else:\n DipDir,Dip=pmag.get_tilt(dec_geo,inc_geo,dec_tilt,inc_tilt)\n\n if not found_sample_bed_dip_direction or not found_sample_bed_dip:\n print(\"-I- calculating dip and dip direction used for tilt correction sample %s. 
results are put in er_samples.txt\"%sample)\n er_sample_data[sample]['sample_bed_dip_direction']=\"%.1f\"%DipDir\n er_sample_data[sample]['sample_bed_dip']=\"%.1f\"%Dip\n\n #-----------------------------\n # er_samples method codes\n # geographic coordinates: no\n #-----------------------------\n if found_tilt or found_geo:\n er_sample_data[sample]['magic_method_codes']=\"SO-NO\"\n\n\n #-----------------\n # er_samples_data\n #-----------------\n if sample in list(er_sample_data.keys()):\n er_sample_data[sample]['er_sample_name']=sample\n er_sample_data[sample]['er_site_name']=MagRec[\"er_site_name\"]\n er_sample_data[sample]['er_location_name']=MagRec[\"er_location_name\"]\n\n #MagRec[\"magic_method_codes\"]=LT\n MagRecs_this_specimen.append(MagRec)\n\n #if LP!=\"\" and LP not in LP_this_specimen:\n # LP_this_specimen.append(LP)\n\n measurement_running_number+=1\n #-------\n\n #-------\n # after reading all the measurements lines for this specimen\n # 1) add magic_experiment_name\n # 2) fix magic_method_codes with the correct lab protocol\n #-------\n LP_this_specimen=[]\n for MagRec in MagRecs_this_specimen:\n magic_method_codes=MagRec[\"magic_method_codes\"].split(\":\")\n for code in magic_method_codes:\n if \"LP\" in code and code not in LP_this_specimen:\n LP_this_specimen.append(code)\n # check IZ/ZI/IZZI\n if \"LP-PI-ZI\" in LP_this_specimen and \"LP-PI-IZ\" in LP_this_specimen:\n LP_this_specimen.remove(\"LP-PI-ZI\")\n LP_this_specimen.remove(\"LP-PI-IZ\")\n LP_this_specimen.append(\"LP-PI-BT-IZZI\")\n\n # add the right LP codes and fix experiment name\n for MagRec in MagRecs_this_specimen:\n MagRec[\"magic_experiment_name\"]=MagRec[\"er_specimen_name\"]+\":\"+\":\".join(LP_this_specimen)\n magic_method_codes=MagRec[\"magic_method_codes\"].split(\":\")\n LT=\"\"\n for code in magic_method_codes:\n if code[:3]==\"LT-\":\n LT=code;\n break\n MagRec[\"magic_method_codes\"]=LT+\":\"+\":\".join(LP_this_specimen)\n\n MagRecs.append(MagRec)\n\n #--\n # write 
magic_measurements.txt\n #--\n MagRecs_fixed=merge_pmag_recs(MagRecs)\n pmag.magic_write(meas_file,MagRecs_fixed,'magic_measurements')\n print(\"-I- MagIC file is saved in %s\"%meas_file)\n\n #--\n # write er_samples.txt\n #--\n ErSamplesRecs=[]\n samples=list(er_sample_data.keys())\n samples.sort()\n for sample in samples:\n ErSamplesRecs.append(er_sample_data[sample])\n ErSamplesRecs_fixed=merge_pmag_recs(ErSamplesRecs)\n pmag.magic_write(samp_file,ErSamplesRecs_fixed,'er_samples')\n return True, meas_file", "response": "Converts a generic-format measurement file into MagIC format, writing magic_measurements.txt and er_samples.txt; it is the entry point for the generic_magic.py script."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nloading the data from a file.", "response": "def loadData(filePath=None):\n '''\n #====================================================================\n in:\n read measured raw data file,\n search the line [' Field Remanence '] in measured data file,\n skip all the rows above and the last line,\n otherwise, load data as two columns.\n #\n out:\n rawDf,fitDf are the pandas dataframes of the measured raw data and the\n log10-interpolated data\n #====================================================================\n '''\n skip_from = ' Field Remanence '\n with open(filePath,'rb') as fr:\n #f = fr.read()\n for i,line in enumerate(fr,1):\n #print(line)\n if skip_from in str(line):\n skiprows=i+2\n break\n else:\n skiprows=None\n skiprows = skiprows if isinstance(skiprows,int) else 1\n rawDf = pd.read_csv(filePath, sep='\\s+', delimiter=',', names=['field','remanance'],\n dtype=np.float64, skiprows=skiprows, skipfooter=1,engine='python')\n rawDf = rawDf[(rawDf['field']>0)]\n rawDf = rawDf.sort_values(by=['field'])\n rawDf['field'] = rawDf['field']*10**3 # convert T to mT\n y_measure=rawDf['remanance']\n rawDf = rawDf[(rawDf['field']>=2)]\n rawDf['field_log'] = np.log10(rawDf['field'])\n rawDf['rem_gradient'] = np.gradient(rawDf['remanance'])\n rawDf['rem_grad_norm'] = 
rawDf['rem_gradient']/rawDf['rem_gradient'].max()\n field_fit = np.linspace(np.log10(rawDf['field'].min()),\n np.log10(rawDf['field'].max()), 100)\n y_gradient = interpolate.splev(field_fit,\n interpolate.splrep(np.log10(rawDf['field']),\n np.gradient(rawDf['remanance'])))\n fitDf = pd.DataFrame({'field':field_fit,'remanance':y_gradient})\n fitDf.remanance[fitDf.remanance<=0] = 10**-15\n return rawDf,fitDf"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fit_plots(ax,xfit,xraw,yfit,yraw):\n '''\n #====================================================================\n plot the fitted results for data fit and refit\n #====================================================================\n '''\n global _yfits_\n _yfits_ = yfit\n ax.plot(xfit, yfit)\n ax.plot(xfit, np.sum(yfit,axis=1))\n ax.scatter(xraw, yraw)\n ax.set_xlabel('Field (log10(mT))')\n ax.set_ylabel('IRM normalization')", "response": "Plots fitted data fit and refit\n plots"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nname VIRTUAL_BASENAME DESCRIPTION draws N directions from uniform distribution on a sphere SYNTAX - h prints help message and quits VIRTUAL_BASENAME default is 100", "response": "def main():\n \"\"\"\n NAME\n uniform.py\n\n DESCRIPTION\n draws N directions from uniform distribution on a sphere\n \n SYNTAX \n uniform.py [-h][command line options]\n -h prints help message and quits\n -n N, specify N on the command line (default is 100)\n -F file, specify output file name, default is standard output\n \"\"\"\n outf=\"\"\n N=100\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-F' in sys.argv:\n ind=sys.argv.index('-F')\n outf=sys.argv[ind+1]\n if outf!=\"\": out=open(outf,'w')\n if '-n' in sys.argv:\n ind=sys.argv.index('-n')\n N=int(sys.argv[ind+1])\n dirs=pmag.get_unf(N)\n if outf=='':\n for dir in dirs:\n print('%7.1f %7.1f'%(dir[0],dir[1]))\n else:\n numpy.savetxt(outf,dirs,fmt='%7.1f 
%7.1f')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _2g_bin(dir_path=\".\", mag_file=\"\", meas_file='measurements.txt',\n spec_file=\"specimens.txt\", samp_file=\"samples.txt\", site_file=\"sites.txt\",\n loc_file=\"locations.txt\", or_con='3', specnum=0, samp_con='2', corr='1',\n gmeths=\"FS-FD:SO-POM\", location=\"unknown\", inst=\"\", user=\"\", noave=False, input_dir=\"\",\n lat=\"\", lon=\"\"):\n\n \"\"\"\n Convert 2G binary format file to MagIC file(s)\n\n Parameters\n ----------\n dir_path : str\n output directory, default \".\"\n mag_file : str\n input file name\n meas_file : str\n output measurement file name, default \"measurements.txt\"\n spec_file : str\n output specimen file name, default \"specimens.txt\"\n samp_file: str\n output sample file name, default \"samples.txt\"\n site_file : str\n output site file name, default \"sites.txt\"\n loc_file : str\n output location file name, default \"locations.txt\"\n or_con : number\n orientation convention, default '3', see info below\n specnum : int\n number of characters to designate a specimen, default 0\n samp_con : str\n sample/site naming convention, default '2', see info below\n corr: str\n default '1'\n gmeths : str\n sampling method codes, default \"FS-FD:SO-POM\", see info below\n location : str\n location name, default \"unknown\"\n inst : str\n instrument, default \"\"\n user : str\n user name, default \"\"\n noave : bool\n do not average duplicate measurements, default False (so by default, DO average)\n input_dir : str\n input file directory IF different from dir_path, default \"\"\n lat : float\n latitude, default \"\"\n lon : float\n longitude, default \"\"\n\n Returns\n ---------\n Tuple : (True or False indicating if conversion was sucessful, meas_file name written)\n\n\n Info\n ----------\n Orientation convention:\n [1] Lab arrow azimuth= mag_azimuth; Lab arrow dip=-field_dip\n i.e., field_dip is degrees from vertical down - 
the hade [default]\n [2] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = -field_dip\n i.e., mag_azimuth is strike and field_dip is hade\n [3] Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip\n i.e., lab arrow same as field arrow, but field_dip was a hade.\n [4] lab azimuth and dip are same as mag_azimuth, field_dip\n [5] lab azimuth is same as mag_azimuth,lab arrow dip=field_dip-90\n [6] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = 90-field_dip\n [7] all others you will have to either customize it\n yourself or e-mail ltauxe@ucsd.edu for help.\n\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. [default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n\n Sampling method codes:\n FS-FD field sampling done with a drill\n FS-H field sampling done with hand samples\n FS-LOC-GPS field location done with GPS\n FS-LOC-MAP field location done with map\n SO-POM a Pomeroy orientation device was used\n SO-ASC an ASC orientation device was used\n SO-MAG orientation with magnetic compass\n SO-SUN orientation with sun compass\n\n \"\"\"\n\n def skip(N, ind, L):\n for b in range(N):\n ind += 1\n while L[ind] == \"\":\n ind += 1\n ind += 1\n while L[ind] == \"\":\n ind += 1\n return ind\n\n #\n # initialize variables\n #\n bed_dip, bed_dip_dir = \"\", \"\"\n sclass, lithology, _type = \"\", \"\", \"\"\n DecCorr = 0.\n months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',\n 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']\n\n # format and fix 
variables\n specnum = int(specnum)\n specnum = -specnum\n input_dir_path, dir_path = pmag.fix_directories(input_dir, dir_path)\n\n if samp_con:\n Z = 1\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"4\"\n if \"7\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"7\"\n if \"6\" in samp_con:\n print('Naming convention option [6] not currently supported')\n return False, 'Naming convention option [6] not currently supported'\n # Z=1\n # try:\n # SampRecs,file_type=pmag.magic_read(os.path.join(input_dir_path, 'er_samples.txt'))\n # except:\n # print(\"there is no er_samples.txt file in your input directory - you can't use naming convention #6\")\n # return False, \"there is no er_samples.txt file in your input directory - you can't use naming convention #6\"\n # if file_type == 'bad_file':\n # print(\"there is no er_samples.txt file in your input directory - you can't use naming convention #6\")\n # return False, \"there is no er_samples.txt file in your input directory - you can't use naming convention #6\"\n # else: Z=1\n\n if not mag_file:\n print(\"mag file is required input\")\n return False, \"mag file is required input\"\n output_dir_path = dir_path\n mag_file = pmag.resolve_file_name(mag_file, input_dir_path)\n\n samplist = []\n try:\n SampRecs, file_type = pmag.magic_read(samp_file)\n except:\n SampRecs = []\n MeasRecs, SpecRecs, SiteRecs, LocRecs = [], [], [], []\n try:\n f = open(mag_file, 'br')\n input = str(f.read()).strip(\"b '\")\n f.close()\n except Exception as ex:\n print('ex', ex)\n print(\"bad mag file\")\n return False, \"bad mag file\"\n firstline, date = 1, \"\"\n d = 
input.split('\\\\xcd')\n for line in d:\n rec = line.split('\\\\x00')\n # skip nearly empty lines\n rec_not_null = [i for i in rec if i]\n if len(rec_not_null) < 5:\n continue\n if firstline == 1:\n firstline = 0\n spec, vol = \"\", 1\n el = 51\n #while line[el:el+1] != \"\\\\\":\n # spec = spec+line[el]\n # el += 1\n spec = rec[7]\n # check for bad sample name\n test = spec.split('.')\n date = \"\"\n if len(test) > 1:\n spec = test[0]\n kk = 24\n while line[kk] != '\\\\x01' and line[kk] != '\\\\x00':\n kk += 1\n vcc = line[24:kk]\n el = 10\n while rec[el].strip() != '':\n el += 1\n date, comments = rec[el+7], []\n else:\n el = 9\n while rec[el] != '\\\\x01':\n el += 1\n vcc, date, comments = rec[el-3], rec[el+7], []\n specname = spec.lower()\n print('importing ', specname)\n el += 8\n while rec[el].isdigit() == False:\n comments.append(rec[el])\n el += 1\n while rec[el] == \"\":\n el += 1\n az = float(rec[el])\n el += 1\n while rec[el] == \"\":\n el += 1\n pl = float(rec[el])\n el += 1\n while rec[el] == \"\":\n el += 1\n bed_dip_dir = float(rec[el])\n el += 1\n while rec[el] == \"\":\n el += 1\n bed_dip = float(rec[el])\n el += 1\n while rec[el] == \"\":\n el += 1\n if rec[el] == '\\\\x01':\n bed_dip = 180.-bed_dip\n el += 1\n while rec[el] == \"\":\n el += 1\n fold_az = float(rec[el])\n el += 1\n while rec[el] == \"\":\n el += 1\n fold_pl = rec[el]\n el += 1\n while rec[el] == \"\":\n el += 1\n if rec[el] != \"\" and rec[el] != '\\\\x02' and rec[el] != '\\\\x01':\n deccorr = float(rec[el])\n az += deccorr\n bed_dip_dir += deccorr\n fold_az += deccorr\n if bed_dip_dir >= 360:\n bed_dip_dir = bed_dip_dir-360.\n if az >= 360.:\n az = az-360.\n if fold_az >= 360.:\n fold_az = fold_az-360.\n else:\n deccorr = 0\n if specnum != 0:\n sample = specname[:specnum]\n else:\n sample = specname\n\n methods = gmeths.split(':')\n if deccorr != \"0\":\n if 'SO-MAG' in methods:\n del methods[methods.index('SO-MAG')]\n methods.append('SO-CMD-NORTH')\n meths = reduce(lambda x, y: 
x+':'+y, methods)\n method_codes = meths\n # parse out the site name\n site = pmag.parse_site(sample, samp_con, Z)\n SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}\n SpecRec[\"specimen\"] = specname\n SpecRec[\"sample\"] = sample\n if vcc.strip() != \"\":\n vol = float(vcc)*1e-6 # convert to m^3 from cc\n SpecRec[\"volume\"] = '%10.3e' % (vol)\n SpecRec[\"geologic_classes\"] = sclass\n SpecRec[\"lithologies\"] = lithology\n SpecRec[\"geologic_types\"] = _type\n SpecRecs.append(SpecRec)\n\n if sample != \"\" and sample not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec[\"sample\"] = sample\n SampRec[\"site\"] = site\n # convert to labaz, labpl\n labaz, labdip = pmag.orient(az, pl, or_con)\n SampRec[\"bed_dip\"] = '%7.1f' % (bed_dip)\n SampRec[\"bed_dip_direction\"] = '%7.1f' % (bed_dip_dir)\n SampRec[\"dip\"] = '%7.1f' % (labdip)\n SampRec[\"azimuth\"] = '%7.1f' % (labaz)\n SampRec[\"azimuth_dec_correction\"] = '%7.1f' % (deccorr)\n SampRec[\"geologic_classes\"] = sclass\n SampRec[\"lithologies\"] = lithology\n SampRec[\"geologic_types\"] = _type\n SampRec[\"method_codes\"] = method_codes\n SampRecs.append(SampRec)\n\n if site != \"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec['lat'] = lat\n SiteRec['lon'] = lon\n SiteRec[\"geologic_classes\"] = sclass\n SiteRec[\"lithologies\"] = lithology\n SiteRec[\"geologic_types\"] = _type\n SiteRecs.append(SiteRec)\n\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec['lat_n'] = lat\n LocRec['lon_e'] = lon\n LocRec['lat_s'] = lat\n LocRec['lon_w'] = lon\n # LocRec[\"geologic_classes\"]=sclass\n # LocRec[\"lithologies\"]=lithology\n # LocRec[\"geologic_types\"]=_type\n LocRecs.append(LocRec)\n\n else:\n MeasRec = {}\n MeasRec[\"treat_temp\"] = '%8.3e' % (273) # room temp 
in kelvin\n MeasRec[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"treat_ac_field\"] = '0'\n MeasRec[\"treat_dc_field\"] = '0'\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n meas_type = \"LT-NO\"\n MeasRec[\"quality\"] = 'g'\n MeasRec[\"standard\"] = 'u'\n MeasRec[\"treat_step_num\"] = 0\n MeasRec[\"specimen\"] = specname\n el, demag = 1, ''\n treat = rec[el]\n if treat[-1] == 'C':\n demag = 'T'\n elif treat != 'NRM':\n demag = 'AF'\n el += 1\n while rec[el] == \"\":\n el += 1\n MeasRec[\"dir_dec\"] = rec[el]\n cdec = float(rec[el])\n el += 1\n while rec[el] == \"\":\n el += 1\n MeasRec[\"dir_inc\"] = rec[el]\n cinc = float(rec[el])\n el += 1\n while rec[el] == \"\":\n el += 1\n gdec = rec[el]\n el += 1\n while rec[el] == \"\":\n el += 1\n ginc = rec[el]\n el = skip(2, el, rec) # skip bdec,binc\n# el=skip(4,el,rec) # skip gdec,ginc,bdec,binc\n# print 'moment emu: ',rec[el]\n MeasRec[\"magn_moment\"] = '%10.3e' % (\n float(rec[el])*1e-3) # moment in Am^2 (from emu)\n MeasRec[\"magn_volume\"] = '%10.3e' % (\n float(rec[el])*1e-3/vol) # magnetization in A/m\n el = skip(2, el, rec) # skip to xsig\n MeasRec[\"magn_x_sigma\"] = '%10.3e' % (\n float(rec[el])*1e-3) # convert from emu\n el = skip(3, el, rec) # skip to ysig\n MeasRec[\"magn_y_sigma\"] = '%10.3e' % (\n float(rec[el])*1e-3) # convert from emu\n el = skip(3, el, rec) # skip to zsig\n MeasRec[\"magn_z_sigma\"] = '%10.3e' % (\n float(rec[el])*1e-3) # convert from emu\n el += 1 # skip to positions\n MeasRec[\"meas_n_orient\"] = rec[el]\n# el=skip(5,el,rec) # skip to date\n# mm=str(months.index(date[0]))\n# if len(mm)==1:\n# mm='0'+str(mm)\n# else:\n# mm=str(mm)\n# dstring=date[2]+':'+mm+':'+date[1]+\":\"+date[3]\n# MeasRec['measurement_date']=dstring\n MeasRec[\"instrument_codes\"] = inst\n MeasRec[\"analysts\"] = user\n MeasRec[\"citations\"] = \"This study\"\n MeasRec[\"method_codes\"] = meas_type\n if demag == \"AF\":\n MeasRec[\"treat_ac_field\"] = 
'%8.3e' % (\n float(treat[:-2])*1e-3) # peak field in tesla\n meas_type = \"LT-AF-Z\"\n MeasRec[\"treat_dc_field\"] = '0'\n elif demag == \"T\":\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treat[:-1])+273.) # temp in kelvin\n meas_type = \"LT-T-Z\"\n MeasRec['method_codes'] = meas_type\n MeasRecs.append(MeasRec)\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.write_table_to_file('specimens', custom_name=spec_file)\n con.write_table_to_file('samples', custom_name=samp_file)\n con.write_table_to_file('sites', custom_name=site_file)\n con.write_table_to_file('locations', custom_name=loc_file)\n con.write_table_to_file('measurements', custom_name=meas_file)\n return True, meas_file", "response": "Convert 2G binary format file to MagIC file"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting AGM format file to MagIC file", "response": "def agm(agm_file, dir_path=\".\", input_dir_path=\"\",\n meas_outfile=\"\", spec_outfile=\"\", samp_outfile=\"\",\n site_outfile=\"\", loc_outfile=\"\", spec_infile=\"\",\n samp_infile=\"\", site_infile=\"\",\n specimen=\"\", specnum=0, samp_con=\"1\", location=\"unknown\",\n instrument=\"\", institution=\"\", bak=False, syn=False, syntype=\"\",\n units=\"cgs\", fmt='new', user=''):\n \"\"\"\n Convert AGM format file to MagIC file(s)\n\n Parameters\n ----------\n agm_file : str\n input file name\n dir_path : str\n working directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n meas_outfile : str\n output measurement file name, default \"\"\n 
(default output is SPECNAME.magic)\n spec_outfile : str\n output specimen file name, default \"\"\n (default output is SPEC_specimens.txt)\n samp_outfile: str\n output sample file name, default \"\"\n (default output is SPEC_samples.txt)\n site_outfile : str\n output site file name, default \"\"\n (default output is SPEC_sites.txt)\n loc_outfile : str\n output location file name, default \"\"\n (default output is SPEC_locations.txt)\n spec_infile : str\n existing specimen infile (not required), default \"\"\n samp_infile : str\n existing sample infile (not required), default \"\"\n site_infile : str\n existing site infile (not required), default \"\"\n specimen : str\n specimen name, default \"\"\n (default is to take base of input file name, e.g. SPEC.agm)\n specnum : int\n number of characters to designate a specimen, default 0\n samp_con : str\n sample/site naming convention, default '1', see info below\n location : str\n location name, default \"unknown\"\n instrument : str\n instrument name, default \"\"\n institution : str\n institution name, default \"\"\n bak : bool\n IRM backfield curve, default False\n syn : bool\n synthetic, default False\n syntype : str\n synthetic type, default \"\"\n units : str\n units, default \"cgs\"\n fmt: str\n input format, options: ('new', 'old', 'xy', default 'new')\n user : str\n user name, default \"\"\n\n Returns\n ---------\n Tuple : (True or False indicating if conversion was successful, meas_file name written)\n\n Info\n --------\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. 
[default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n\n\n \"\"\"\n # initialize some stuff\n citations = 'This study'\n meth = \"LP-HYS\"\n version_num = pmag.get_version()\n Samps, Sites = [], []\n #noave = 1\n\n # get args\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n specnum = - int(specnum)\n if not specimen:\n # grab the specimen name from the input file name\n specimen = agm_file.split('.')[0]\n if not meas_outfile:\n meas_outfile = specimen + '.magic'\n if not spec_outfile:\n spec_outfile = specimen + '_specimens.txt'\n if not samp_outfile:\n samp_outfile = specimen + '_samples.txt'\n if not site_outfile:\n site_outfile = specimen + '_sites.txt'\n if not loc_outfile:\n loc_outfile = specimen + '_locations.txt'\n if bak:\n meth = \"LP-IRM-DCD\"\n output = output_dir_path + \"/irm.magic\"\n if \"4\" == samp_con[0]:\n if \"-\" not in samp_con:\n print(\n \"naming convention option [4] must be in form 4-Z where Z is an integer\")\n print('---------------')\n return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"4\"\n if \"7\" == samp_con[0]:\n if \"-\" not in samp_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"7\"\n else:\n Z = 0\n\n # read stuff in\n if site_infile:\n Sites, file_type = pmag.magic_read(site_infile)\n if samp_infile:\n Samps, file_type = pmag.magic_read(samp_infile)\n if spec_infile:\n Specs, 
file_type = pmag.magic_read(spec_infile)\n if agm_file:\n agm_file = pmag.resolve_file_name(agm_file, input_dir_path)\n Data = pmag.open_file(agm_file)\n if not Data:\n print(\"you must provide a valid agm_file\")\n return False, \"you must provide a valid agm_file\"\n if not agm_file:\n print(__doc__)\n print(\"agm_file field is required option\")\n return False, \"agm_file field is required option\"\n if \"ASCII\" not in Data[0] and fmt != 'xy':\n fmt = 'new'\n measnum, start, end = 1, 0, 0\n if fmt == 'new': # new Micromag formatted file\n end = 2\n for skip in range(len(Data)):\n line = Data[skip]\n rec = line.strip('\\n').strip('\\r').split()\n if 'Units' in line:\n units = rec[-1]\n if \"Raw\" in rec:\n start = skip + 2\n if (\"Field\" in rec) and (\"Moment\" in rec) and (not start):\n start = skip + 2\n break\n elif fmt == 'old':\n start = 2\n end = 1\n\n MeasRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n version_num = pmag.get_version()\n\n ##################################\n # parse data\n stop = len(Data) - end\n for line in Data[start:stop]: # skip header stuff\n MeasRec, SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}, {}\n # take care of some paper work\n if not syn:\n MeasRec[\"specimen\"] = specimen\n if specnum != 0:\n sample = specimen[:specnum]\n else:\n sample = specimen\n if samp_infile and Samps: # if samp_infile was provided AND yielded sample data\n samp = pmag.get_dictitem(Samps, 'sample', sample, 'T')\n if len(samp) > 0:\n site = samp[0][\"site\"]\n else:\n site = ''\n if site_infile and Sites: # if samp_infile was provided AND yielded sample data\n sites = pmag.get_dictitem(Sites, 'sample', sample, 'T')\n if len(sites) > 0:\n site = sites[0][\"site\"]\n else:\n site = ''\n else:\n site = pmag.parse_site(sample, samp_con, Z)\n if location != '' and location not in [x['location'] if 'location' in list(x.keys()) else '' for x in LocRecs]:\n LocRec['location'] = location\n LocRecs.append(LocRec)\n if site != '' and site 
not in [x['site'] if 'site' in list(x.keys()) else '' for x in SiteRecs]:\n SiteRec['location'] = location\n SiteRec['site'] = site\n SiteRecs.append(SiteRec)\n if sample != '' and sample not in [x['sample'] if 'sample' in list(x.keys()) else '' for x in SampRecs]:\n SampRec['site'] = site\n SampRec['sample'] = sample\n SampRecs.append(SampRec)\n if specimen != '' and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else '' for x in SpecRecs]:\n SpecRec[\"specimen\"] = specimen\n SpecRec['sample'] = sample\n SpecRecs.append(SpecRec)\n else:\n SampRec[\"material_type\"] = syntype\n MeasRec[\"specimen\"] = specimen\n MeasRec[\"user\"] = user\n if specnum != 0:\n sample = specimen[:specnum]\n else:\n sample = specimen\n site = pmag.parse_site(sample, samp_con, Z)\n if location != '' and location not in [x['location'] if 'location' in list(x.keys()) else '' for x in LocRecs]:\n LocRec['location'] = location\n LocRecs.append(LocRec)\n if site != '' and site not in [x['site'] if 'site' in list(x.keys()) else '' for x in SiteRecs]:\n SiteRec['location'] = location\n SiteRec['site'] = site\n SiteRecs.append(SiteRec)\n if sample != '' and sample not in [x['sample'] if 'sample' in list(x.keys()) else '' for x in SampRecs]:\n SampRec['site'] = site\n SampRec['sample'] = sample\n SampRecs.append(SampRec)\n if specimen != '' and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else '' for x in SpecRecs]:\n SpecRec[\"specimen\"] = specimen\n SpecRec['sample'] = sample\n SpecRecs.append(SpecRec)\n MeasRec['instrument_codes'] = instrument\n MeasRec['institution'] = institution\n MeasRec['method_codes'] = meth\n MeasRec['experiment'] = specimen + ':' + meth\n if fmt == 'xy':\n rec = list(line.strip('\\n').split())\n else:\n rec = list(line.strip('\\n').strip('\\r').split(','))\n #print (rec)\n if rec[0] != \"\":\n if units == 'cgs':\n field = float(rec[0]) * 1e-4 # convert from oe to tesla\n else:\n field = float(rec[0]) # field in tesla\n if meth == 
\"LP-HYS\":\n MeasRec['meas_field_dc'] = '%10.3e' % (field)\n MeasRec['treat_dc_field'] = '0'\n else:\n MeasRec['meas_field_dc'] = '0'\n MeasRec['treat_dc_field'] = '%10.3e' % (field)\n if units == 'cgs':\n MeasRec['magn_moment'] = '%10.3e' % (\n float(rec[1]) * 1e-3) # convert from emu to Am^2\n else:\n MeasRec['magn_moment'] = '%10.3e' % (float(rec[1])) # Am^2\n MeasRec['treat_temp'] = '273' # temp in kelvin\n MeasRec['meas_temp'] = '273' # temp in kelvin\n MeasRec['quality'] = 'g'\n MeasRec['standard'] = 'u'\n MeasRec['treat_step_num'] = '%i' % (measnum)\n MeasRec['measurement'] = specimen + \":\" + meth + '%i' % (measnum)\n measnum += 1\n MeasRec['software_packages'] = version_num\n MeasRec['description'] = \"\"\n MeasRecs.append(MeasRec)\n # we have to relabel LP-HYS method codes. initial loop is LP-IMT, minor\n # loops are LP-M - do this in measurements_methods function\n if meth == 'LP-HYS':\n recnum = 0\n while float(MeasRecs[recnum]['meas_field_dc']) < float(MeasRecs[recnum + 1]['meas_field_dc']) and recnum + 1 < len(MeasRecs): # this is LP-IMAG\n MeasRecs[recnum]['method_codes'] = 'LP-IMAG'\n MeasRecs[recnum]['experiment'] = MeasRecs[recnum]['specimen'] + \\\n \":\" + 'LP-IMAG'\n recnum += 1\n#\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n # create MagIC tables\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n # MeasOuts=pmag.measurements_methods3(MeasRecs,noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasRecs)\n # write MagIC tables to file\n con.write_table_to_file('specimens', custom_name=spec_outfile)\n con.write_table_to_file('samples', custom_name=samp_outfile)\n con.write_table_to_file('sites', custom_name=site_outfile)\n con.write_table_to_file('locations', custom_name=loc_outfile)\n 
con.write_table_to_file('measurements', custom_name=meas_outfile)\n\n return True, meas_outfile"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert BGC format file to MagIC file", "response": "def bgc(mag_file, dir_path=\".\", input_dir_path=\"\",\n meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt',\n site_file='sites.txt', loc_file='locations.txt', append=False,\n location=\"unknown\", site=\"\", samp_con='1', specnum=0,\n meth_code=\"LP-NO\", volume=12, user=\"\", timezone='US/Pacific', noave=False):\n\n \"\"\"\n Convert BGC format file to MagIC file(s)\n\n Parameters\n ----------\n mag_file : str\n input file name\n dir_path : str\n working directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n meas_file : str\n output measurement file name, default \"measurements.txt\"\n spec_file : str\n output specimen file name, default \"specimens.txt\"\n samp_file: str\n output sample file name, default \"samples.txt\"\n site_file : str\n output site file name, default \"sites.txt\"\n loc_file : str\n output location file name, default \"locations.txt\"\n append : bool\n append output files to existing files instead of overwrite, default False\n location : str\n location name, default \"unknown\"\n site : str\n site name, default \"\"\n samp_con : str\n sample/site naming convention, default '1', see info below\n specnum : int\n number of characters to designate a specimen, default 0\n meth_code : str\n orientation method codes, default \"LP-NO\"\n e.g. 
[SO-MAG, SO-SUN, SO-SIGHT, ...]\n volume : float\n volume in ccs, default 12.\n user : str\n user name, default \"\"\n timezone : str\n timezone in pytz library format, default \"US/Pacific\"\n list of timezones can be found at http://pytz.sourceforge.net/\n noave : bool\n do not average duplicate measurements, default False (so by default, DO average)\n\n Returns\n ---------\n Tuple : (True or False indicating if conversion was successful, meas_file name written)\n\n Info\n --------\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. [default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name same as sample\n [6] site is entered under a separate column -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY\n\n \"\"\"\n\n version_num = pmag.get_version()\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n samp_con = str(samp_con)\n specnum = - int(specnum)\n volume *= 1e-6 # convert cc to m^3\n\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = int(samp_con.split(\"-\")[1])\n samp_con = \"4\"\n elif \"7\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z = int(samp_con.split(\"-\")[1])\n samp_con = \"7\"\n else:\n Z = 1\n\n # format variables\n mag_file = os.path.join(input_dir_path, mag_file)\n if not os.path.isfile(mag_file):\n print(\"%s is not a BGC file\" %
mag_file)\n return False, 'You must provide a BGC format file'\n\n # Open up the BGC file and read the header information\n print('mag_file in bgc_magic', mag_file)\n pre_data = open(mag_file, 'r')\n line = pre_data.readline()\n line_items = line.split(' ')\n specimen = line_items[2]\n specimen = specimen.replace('\\n', '')\n line = pre_data.readline()\n line = pre_data.readline()\n line_items = line.split('\\t')\n azimuth = float(line_items[1])\n dip = float(line_items[2])\n bed_dip = line_items[3]\n sample_bed_azimuth = line_items[4]\n lon = line_items[5]\n lat = line_items[6]\n tmp_volume = line_items[7]\n if float(tmp_volume) != 0.0:\n volume = float(tmp_volume) * 1e-6\n pre_data.close()\n\n data = pd.read_csv(mag_file, sep='\\t', header=3, index_col=False)\n\n cart = np.array([data['X'], data['Y'], data['Z']]).transpose()\n direction = pmag.cart2dir(cart).transpose()\n\n data['dir_dec'] = direction[0]\n data['dir_inc'] = direction[1]\n # the data are in EMU - this converts to Am^2\n data['magn_moment'] = direction[2] / 1000\n data['magn_volume'] = (direction[2] / 1000) / \\\n volume # EMU - data converted to A/m\n\n # Configure the magic_measurements table\n MeasRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n for rowNum, row in data.iterrows():\n MeasRec, SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}, {}\n\n if specnum != 0:\n sample = specimen[:specnum]\n else:\n sample = specimen\n if site == '':\n site = pmag.parse_site(sample, samp_con, Z)\n\n if specimen != \"\" and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else \"\" for x in SpecRecs]:\n SpecRec['specimen'] = specimen\n SpecRec['sample'] = sample\n SpecRec['volume'] = volume\n SpecRec['analysts'] = user\n SpecRec['citations'] = 'This study'\n SpecRecs.append(SpecRec)\n if sample != \"\" and sample not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['sample'] = sample\n SampRec['site'] = site\n SampRec['azimuth'] = azimuth\n 
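The converter above divides EMU moments by 1000 to get A m^2 and divides again by the specimen volume (in m^3) for volume-normalized magnetization. A small sketch of those unit conversions (`emu_to_si` is a hypothetical helper, not part of pmagpy):

```python
EMU_TO_AM2 = 1e-3  # 1 emu = 1e-3 A m^2

def emu_to_si(moment_emu, volume_m3):
    """Convert an EMU moment to (A m^2, A/m), as the magn_moment and
    magn_volume columns are built above."""
    magn_moment = moment_emu * EMU_TO_AM2
    magn_volume = magn_moment / volume_m3  # moment per unit volume
    return magn_moment, magn_volume

# a 12 cc specimen (12e-6 m^3) measured at 2.4 emu
moment, magnetization = emu_to_si(2.4, 12e-6)
```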
SampRec['dip'] = dip\n SampRec['bed_dip_direction'] = sample_bed_azimuth\n SampRec['bed_dip'] = bed_dip\n SampRec['method_codes'] = meth_code\n SampRec['analysts'] = user\n SampRec['citations'] = 'This study'\n SampRecs.append(SampRec)\n if site != \"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec['lat'] = lat\n SiteRec['lon'] = lon\n SiteRec['analysts'] = user\n SiteRec['citations'] = 'This study'\n SiteRecs.append(SiteRec)\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec['analysts'] = user\n LocRec['citations'] = 'This study'\n LocRec['lat_n'] = lat\n LocRec['lon_e'] = lon\n LocRec['lat_s'] = lat\n LocRec['lon_w'] = lon\n LocRecs.append(LocRec)\n\n MeasRec['description'] = 'Date: ' + \\\n str(row['Date']) + ' Time: ' + str(row['Time'])\n if '.' in row['Date']:\n datelist = row['Date'].split('.')\n elif '/' in row['Date']:\n datelist = row['Date'].split('/')\n elif '-' in row['Date']:\n datelist = row['Date'].split('-')\n else:\n print(\n \"unrecognized date formatting on one of the measurement entries for specimen %s\" % specimen)\n datelist = ['', '', '']\n if ':' in row['Time']:\n timelist = row['Time'].split(':')\n else:\n print(\n \"unrecognized time formatting on one of the measurement entries for specimen %s\" % specimen)\n timelist = ['', '', '']\n datelist[2] = '19' + \\\n datelist[2] if len(datelist[2]) <= 2 else datelist[2]\n dt = \":\".join([datelist[1], datelist[0], datelist[2],\n timelist[0], timelist[1], timelist[2]])\n local = pytz.timezone(timezone)\n naive = datetime.datetime.strptime(dt, \"%m:%d:%Y:%H:%M:%S\")\n local_dt = local.localize(naive, is_dst=None)\n utc_dt = local_dt.astimezone(pytz.utc)\n timestamp = utc_dt.strftime(\"%Y-%m-%dT%H:%M:%S\")+\"Z\"\n MeasRec[\"timestamp\"] = timestamp\n MeasRec[\"citations\"] = \"This study\"\n 
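The timestamp handling above localizes a naive lab time with pytz and converts it to a UTC "Z" timestamp. The same idea with only the standard library, where a fixed UTC-8 offset stands in for the `pytz.timezone('US/Pacific')` lookup (so, as an assumption of this sketch, no DST handling):

```python
import datetime

# Parse the colon-joined month:day:year:hour:minute:second string built
# above, attach a fixed UTC-8 offset, and emit the ISO "Z" timestamp.
dt = "12:25:1999:14:30:00"
naive = datetime.datetime.strptime(dt, "%m:%d:%Y:%H:%M:%S")
local = naive.replace(tzinfo=datetime.timezone(datetime.timedelta(hours=-8)))
utc_dt = local.astimezone(datetime.timezone.utc)
timestamp = utc_dt.strftime("%Y-%m-%dT%H:%M:%S") + "Z"
```

pytz's `localize(..., is_dst=None)` additionally raises on ambiguous or nonexistent local times, which a fixed offset cannot reproduce.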
MeasRec['software_packages'] = version_num\n MeasRec[\"treat_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"quality\"] = 'g'\n MeasRec[\"standard\"] = 'u'\n MeasRec[\"treat_step_num\"] = rowNum\n MeasRec[\"specimen\"] = specimen\n MeasRec[\"treat_ac_field\"] = '0'\n if row['DM Val'] == '0':\n meas_type = \"LT-NO\"\n elif int(row['DM Type']) > 0.0:\n meas_type = \"LT-AF-Z\"\n treat = float(row['DM Val'])\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n treat*1e-3) # convert from mT to tesla\n elif int(row['DM Type']) == -1:\n meas_type = \"LT-T-Z\"\n treat = float(row['DM Val'])\n MeasRec[\"treat_temp\"] = '%8.3e' % (treat+273.) # temp in kelvin\n else:\n print(\"measurement type unknown:\",\n row['DM Type'], \" in row \", rowNum)\n MeasRec[\"magn_moment\"] = str(row['magn_moment'])\n MeasRec[\"magn_volume\"] = str(row['magn_volume'])\n MeasRec[\"dir_dec\"] = str(row['dir_dec'])\n MeasRec[\"dir_inc\"] = str(row['dir_inc'])\n MeasRec['method_codes'] = meas_type\n MeasRec['dir_csd'] = '0.0' # added due to magic.write error\n MeasRec['meas_n_orient'] = '1' # added due to magic.write error\n MeasRecs.append(MeasRec.copy())\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.write_table_to_file('specimens', custom_name=spec_file, append=append)\n con.write_table_to_file('samples', custom_name=samp_file, append=append)\n con.write_table_to_file('sites', custom_name=site_file, append=append)\n con.write_table_to_file('locations', custom_name=loc_file, append=append)\n meas_file = con.write_table_to_file(\n 
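The treatment fields above are converted to SI before writing: AF steps from mT to tesla (`*1e-3`) and thermal steps from Celsius to kelvin (`+273`), both rendered with `%8.3e`. A sketch with hypothetical helper names:

```python
def af_field_tesla(millitesla):
    """AF demag step: mT -> tesla, formatted as in the records above."""
    return '%8.3e' % (float(millitesla) * 1e-3)

def thermal_step_kelvin(celsius):
    """Thermal demag step: Celsius -> kelvin (273 offset, as above)."""
    return '%8.3e' % (float(celsius) + 273.0)

ac = af_field_tesla('10')          # 10 mT AF step
temp = thermal_step_kelvin('200')  # 200 C thermal step
```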
'measurements', custom_name=meas_file, append=append)\n\n return True, meas_file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef cit(dir_path=\".\", input_dir_path=\"\", magfile=\"\", user=\"\", meas_file=\"measurements.txt\",\n spec_file=\"specimens.txt\", samp_file=\"samples.txt\",\n site_file=\"sites.txt\", loc_file=\"locations.txt\", locname=\"unknown\",\n sitename=\"\", methods=['SO-MAG'], specnum=0, samp_con='3',\n norm='cc', oersted=False, noave=False, meas_n_orient='8',\n labfield=0, phi=0, theta=0):\n \"\"\"\n Converts CIT formated Magnetometer data into MagIC format for Analysis and contribution to the MagIC database\n\n Parameters\n -----------\n dir_path : directory to output files to (default : current directory)\n input_dir_path : directory to input files (only needed if different from dir_path!)\n magfile : magnetometer file (.sam) to convert to MagIC (required)\n user : colon delimited list of analysts (default : \"\")\n meas_file : measurement file name to output (default : measurements.txt)\n spec_file : specimen file name to output (default : specimens.txt)\n samp_file : sample file name to output (default : samples.txt)\n site_file : site file name to output (default : site.txt)\n loc_file : location file name to output (default : locations.txt)\n locname : location name\n sitename : site name set by user instead of using samp_con\n methods : colon delimited list of sample method codes. full list here (https://www2.earthref.org/MagIC/method-codes) (default : SO-MAG)\n specnum : number of terminal characters that identify a specimen\n norm : is volume or mass normalization using cgs or si units (options : cc,m3,g,kg) (default : cc)\n oersted : demag step values are in Oersted\n noave : average measurement data or not. False is average, True is don't average. 
(default : False)\n samp_con : sample naming convention options as follows:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. [default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in sitename column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n meas_n_orient : Number of different orientations in measurement (default : 8)\n labfield : DC_FIELD in microTesla (default : 0)\n phi : DC_PHI in degrees (default : 0)\n theta : DC_THETA in degrees (default : 0)\n\n Returns\n -----------\n type - Tuple : (True or False indicating if conversion was successful, meas_file name written)\n \"\"\"\n\n def get_dc_params(FIRST_GET_DC, specimen, treat_type, yn):\n \"\"\"\n Prompts user for DC field data if not provided, just an encapsulation function for the above program and should not be used elsewhere.\n\n Parameters\n -----------\n FIRST_GET_DC : is this the first time you are asking for DC data?\n specimen : what specimen do you want DC data for?\n treat_type : what kind of step was it? PTM, Tail, in field, zero field?\n yn : is DC field constant or varying? (y = constant, n = varying)\n\n Returns\n -----------\n GET_DC_PARAMS : whether or not to rerun this function\n FIRST_GET_DC : same as above\n yn : same as above\n DC_FIELD : field strength in Tesla\n DC_PHI : field azimuth\n DC_THETA : field polar angle\n \"\"\"\n if FIRST_GET_DC:\n yn = input(\n \"Is the DC field used in this IZZI study constant or does it vary between specimen or step?
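Naming-convention options [4] and [7] carry a `-Z` suffix giving the character count, as the checks in the converters above enforce. A simplified sketch (it uses `startswith` rather than the substring test in the source, and raises instead of returning a `(False, message)` tuple):

```python
def parse_samp_con(samp_con):
    """Split naming-convention options like '4-2' or '7-3' into the
    base convention and the character count Z; other options keep Z=1."""
    samp_con = str(samp_con)
    if samp_con.startswith('4') or samp_con.startswith('7'):
        if '-' not in samp_con:
            raise ValueError('options [4] and [7] must be in form 4-Z / 7-Z')
        base, z = samp_con.split('-')
        return base, int(z)
    return samp_con, 1

base, Z = parse_samp_con('4-2')
```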
(y=const) [y/N]: \")\n FIRST_GET_DC = False\n if \"y\" == yn:\n DC_FIELD, DC_PHI, DC_THETA = list(map(float, eval(input(\n \"What DC field, Phi, and Theta was used for all steps? (float (in microTesla),float,float): \"))))\n GET_DC_PARAMS = False\n else:\n DC_FIELD, DC_PHI, DC_THETA = list(map(float, eval(input(\n \"What DC field, Phi, and Theta was used for specimen %s and step %s? (float (in microTesla),float,float): \" % (str(specimen), str(treat_type))))))\n return GET_DC_PARAMS, FIRST_GET_DC, yn, DC_FIELD*1e-6, DC_PHI, DC_THETA\n\n specnum = - int(specnum)\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n DC_FIELD = float(labfield) * 1e-6\n DC_PHI = float(phi)\n DC_THETA = float(theta)\n #try:\n # DC_FIELD = float(labfield) * 1e-6\n # DC_PHI = float(phi)\n # DC_THETA = float(theta)\n# except ValueError:\n# raise ValueError(\n# 'problem with your dc parameters. please provide a labfield in microTesla and a phi and theta in degrees.')\n# yn = ''\n# if DC_FIELD == 0 and DC_PHI == 0 and DC_THETA == 0:\n# print('-I- Required values for labfield, phi, and theta not provided! Will try to get these interactively')\n# GET_DC_PARAMS = True\n# else:\n# GET_DC_PARAMS = False\n\n# With the above commented out GET_DC_PARAMS does not get set and this function fail below so setting it here. 
NAJ\n GET_DC_PARAMS = False\n if locname == '' or locname == None:\n locname = 'unknown'\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"4\"\n elif \"7\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"naming convention option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"7\"\n else:\n Z = 1\n\n # get file names and open magfile to start reading data\n magfile = pmag.resolve_file_name(magfile, input_dir_path)\n input_dir_path = os.path.split(magfile)[0]\n FIRST_GET_DC = True\n try:\n file_input = open(magfile, 'r')\n except IOError as ex:\n print(ex)\n print((\"bad sam file name: \", magfile))\n return False, \"bad sam file name\"\n File = file_input.readlines()\n file_input.close()\n if len(File) == 1:\n File = File[0].split('\\r')\n File = [x+\"\\r\\n\" for x in File]\n\n # define initial variables\n SpecRecs, SampRecs, SiteRecs, LocRecs, MeasRecs = [], [], [], [], []\n sids, ln, format, citations = [], 0, 'CIT', \"This study\"\n formats = ['CIT', '2G', 'APP', 'JRA']\n\n if File[ln].strip() == 'CIT':\n ln += 1\n LocRec = {}\n LocRec[\"location\"] = locname\n LocRec[\"citations\"] = citations\n LocRec['analysts'] = user\n comment = File[ln]\n if comment == 'CIT':\n format = comment\n ln += 1\n comment = File[ln]\n print(comment)\n ln += 1\n specimens, samples, sites = [], [], []\n if format == 'CIT':\n line = File[ln].split()\n site_lat = line[0]\n site_lon = line[1]\n LocRec[\"lat_n\"] = site_lat\n LocRec[\"lon_e\"] = site_lon\n LocRec[\"lat_s\"] = site_lat\n LocRec[\"lon_w\"] = site_lon\n LocRecs.append(LocRec)\n Cdec = float(line[2])\n for k in range(ln+1, len(File)):\n line = File[k]\n rec = line.split()\n if rec == 
[]:\n continue\n specimen = rec[0]\n specimens.append(specimen)\n for specimen in specimens:\n SpecRec, SampRec, SiteRec = {}, {}, {}\n if specnum != 0:\n sample = specimen[:specnum]\n else:\n sample = specimen\n if sitename:\n site = sitename\n else:\n site = pmag.parse_site(sample, samp_con, Z)\n SpecRec['specimen'] = specimen\n SpecRec['sample'] = sample\n SpecRec['citations'] = citations\n SpecRec['analysts'] = user\n SampRec['sample'] = sample\n SampRec['site'] = site\n SampRec['citations'] = citations\n SampRec['method_codes'] = methods\n SampRec['azimuth_dec_correction'] = '%7.1f' % (Cdec)\n SampRec['analysts'] = user\n SiteRec['site'] = site\n SiteRec['location'] = locname\n SiteRec['citations'] = citations\n SiteRec['lat'] = site_lat\n SiteRec['lon'] = site_lon\n SiteRec['analysts'] = user\n f = open(os.path.join(input_dir_path, specimen), 'r')\n Lines = f.readlines()\n f.close()\n comment = \"\"\n line = Lines[0].split()\n if len(line) > 2:\n comment = line[2]\n info = Lines[1].split()\n volmass = float(info[-1])\n if volmass == 1.0:\n print('Warning: Specimen volume set to 1.0.')\n print('Warning: If volume/mass really is 1.0, set volume/mass to 1.001')\n print('Warning: specimen method code LP-NOMAG set.')\n SpecRec['weight'] = \"\"\n SpecRec['volume'] = \"\"\n SpecRec['method_codes'] = 'LP-NOMAG'\n elif norm == \"gm\":\n SpecRec['volume'] = ''\n SpecRec['weight'] = '%10.3e' % (volmass*1e-3)\n elif norm == \"kg\":\n SpecRec['volume'] = ''\n SpecRec['weight'] = '%10.3e' % volmass\n elif norm == \"cc\":\n SpecRec['weight'] = \"\"\n SpecRec['volume'] = '%10.3e' % (volmass*1e-6)\n elif norm == \"m3\":\n SpecRec['weight'] = \"\"\n SpecRec['volume'] = '%10.3e' % (volmass)\n else:\n print('Warning: Unknown normalization unit ',\n norm, '.
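The normalization branches above choose between a weight and a volume column and rescale to SI. A condensed sketch; note the parentheses around `volmass * 1e-3` — writing `'%10.3e' % volmass * 1e-3` would format first and then try to multiply the resulting string by a float, which raises a `TypeError`:

```python
def normalize(volmass, norm='cc'):
    """Return (weight, volume) strings in SI units, as the branches
    above do (the LP-NOMAG special case for volmass == 1.0 is omitted)."""
    if norm == 'gm':
        return '%10.3e' % (volmass * 1e-3), ''   # grams -> kg
    if norm == 'kg':
        return '%10.3e' % volmass, ''            # already kg
    if norm == 'm3':
        return '', '%10.3e' % volmass            # already m^3
    return '', '%10.3e' % (volmass * 1e-6)       # cc -> m^3 (default)

weight, volume = normalize(12.0, 'cc')
```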
Using default of cc')\n SpecRec['weight'] = \"\"\n SpecRec['volume'] = '%10.3e' % (volmass*1e-6)\n dip = float(info[-2])\n dip_direction = float(info[-3])+Cdec+90.\n sample_dip = -float(info[-4])\n sample_azimuth = float(info[-5])+Cdec-90.\n if len(info) > 5:\n SampRec['height'] = info[-6]\n else:\n SampRec['height'] = '0'\n SampRec['azimuth'] = '%7.1f' % (sample_azimuth)\n SampRec['dip'] = '%7.1f' % (sample_dip)\n SampRec['bed_dip'] = '%7.1f' % (dip)\n SampRec['bed_dip_direction'] = '%7.1f' % (dip_direction)\n SampRec['geologic_classes'] = ''\n SampRec['geologic_types'] = ''\n SampRec['lithologies'] = ''\n if Cdec != 0:\n SampRec['method_codes'] = 'SO-CMD-NORTH'\n else:\n SampRec['method_codes'] = 'SO-MAG'\n for line in Lines[2:len(Lines)]:\n if line == '\\n':\n continue\n MeasRec = SpecRec.copy()\n MeasRec.pop('sample')\n MeasRec['analysts'] = user\n# Remove volume and weight as they do not exist in the magic_measurement table\n del MeasRec[\"volume\"]\n del MeasRec[\"weight\"]\n # USGS files have blank for an AF demag value when measurement is the NRM.
njarboe\n if line[0:6] == 'AF ':\n line = 'NRM' + line[3:]\n treat_type = line[0:3]\n if treat_type[1] == '.':\n treat_type = 'NRM'\n treat = line[2:6]\n try:\n float(treat)\n except ValueError:\n treat = line[3:6]\n if treat.split() == '':\n treat = '0'\n try:\n float(treat)\n except ValueError:\n treat = line.split()[1]\n if treat_type.startswith('NRM'):\n MeasRec['method_codes'] = 'LT-NO'\n MeasRec['meas_temp'] = '273'\n MeasRec['treat_temp'] = '273'\n MeasRec['treat_dc_field'] = '0'\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n MeasRec['treat_ac_field'] = '0'\n elif treat_type.startswith('LT') or treat_type.upper().startswith('LN2'):\n MeasRec['method_codes'] = 'LT-LT-Z'\n MeasRec['meas_temp'] = '273'\n MeasRec['treat_temp'] = '77'\n MeasRec['treat_dc_field'] = '0'\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n MeasRec['treat_ac_field'] = '0'\n elif treat_type.startswith('AF') or treat_type.startswith('MAF'):\n MeasRec['method_codes'] = 'LT-AF-Z'\n MeasRec['meas_temp'] = '273'\n MeasRec['treat_temp'] = '273'\n MeasRec['treat_dc_field'] = '0'\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n if treat.strip() == '':\n MeasRec['treat_ac_field'] = '0'\n else:\n try:\n MeasRec['treat_ac_field'] = '%10.3e' % (\n float(treat)*1e-3)\n except ValueError as e:\n print(os.path.join(input_dir_path, specimen))\n raise e\n if MeasRec['treat_ac_field'] != '0':\n MeasRec['treat_ac_field'] = '%10.3e' % (\n float(MeasRec['treat_ac_field'])/10)\n elif treat_type.startswith('ARM'):\n MeasRec['method_codes'] = \"LP-ARM\"\n MeasRec['meas_temp'] = '273'\n MeasRec['treat_temp'] = '273'\n MeasRec['treat_dc_field'] = '0'\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n if treat.strip() == '':\n MeasRec['treat_ac_field'] = '0'\n else:\n 
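The branch ladder above maps the leading characters of a CIT treatment line to a MagIC method code. A condensed classifier sketch covering only the string-prefix cases (the decimal IZZI cases keyed on `line[4]`, and the field/temperature bookkeeping, are omitted):

```python
def classify_treatment(treat_type):
    """Map a CIT treatment prefix to a MagIC method code, a condensed
    sketch of the branch ladder above."""
    t = treat_type.upper()
    if t.startswith('NRM'):
        return 'LT-NO'     # natural remanent magnetization
    if t.startswith('LT') or t.startswith('LN2'):
        return 'LT-LT-Z'   # liquid-nitrogen (77 K) step
    if t.startswith('AF') or t.startswith('MAF'):
        return 'LT-AF-Z'   # alternating-field demagnetization
    if t.startswith('ARM'):
        return 'LP-ARM'    # anhysteretic remanence
    if t.startswith('IRM'):
        return 'LT-IRM'    # isothermal remanence
    if t.startswith('TT'):
        return 'LT-T-Z'    # thermal demagnetization
    return None

code = classify_treatment('AF 100')
```

As in the source, order matters: the `AF` test must come before `ARM` would ever be confused with it, and `NRM` is checked first because blank AF values on the NRM line are rewritten to `NRM` beforehand.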
MeasRec['method_codes'] = \"LP-ARM-AFD\"\n MeasRec['treat_ac_field'] = '%10.3e' % (float(treat)*1e-3)\n elif treat_type.startswith('IRM'):\n if GET_DC_PARAMS:\n GET_DC_PARAMS, FIRST_GET_DC, yn, DC_FIELD, DC_PHI, DC_THETA = get_dc_params(\n FIRST_GET_DC, specimen, treat_type, yn)\n MeasRec['method_codes'] = \"LT-IRM\"\n MeasRec['meas_temp'] = '273'\n MeasRec['treat_temp'] = '273'\n MeasRec['treat_dc_field'] = '%1.2e' % DC_FIELD\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n MeasRec['treat_ac_field'] = '0'\n elif treat_type.startswith('TT'):\n MeasRec['method_codes'] = 'LT-T-Z'\n MeasRec['meas_temp'] = '273'\n if treat.strip() == '':\n MeasRec['treat_temp'] = '273'\n else:\n MeasRec['treat_temp'] = '%7.1f' % (float(treat)+273)\n MeasRec['treat_dc_field'] = '0'\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n MeasRec['treat_ac_field'] = '0'\n elif line[4] == '0': # assume decimal IZZI format 0 field thus can hardcode the dc fields\n MeasRec['method_codes'] = 'LT-T-Z'\n MeasRec['meas_temp'] = '273'\n try:\n MeasRec['treat_temp'] = str(int(treat_type) + 273)\n except ValueError as e:\n print(specimen)\n raise e\n MeasRec['treat_dc_field'] = '0'\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n MeasRec['treat_ac_field'] = '0'\n elif line[4] == '1': # assume decimal IZZI format in constant field\n if GET_DC_PARAMS:\n GET_DC_PARAMS, FIRST_GET_DC, yn, DC_FIELD, DC_PHI, DC_THETA = get_dc_params(\n FIRST_GET_DC, specimen, treat_type, yn)\n MeasRec['method_codes'] = 'LT-T-I'\n MeasRec['meas_temp'] = '273'\n MeasRec['treat_temp'] = str(int(treat_type) + 273)\n MeasRec['treat_dc_field'] = '%1.2e' % DC_FIELD\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n MeasRec['treat_ac_field'] = '0'\n elif line[4] == '2': # assume decimal IZZI format PTRM 
step\n if GET_DC_PARAMS:\n GET_DC_PARAMS, FIRST_GET_DC, yn, DC_FIELD, DC_PHI, DC_THETA = get_dc_params(\n FIRST_GET_DC, specimen, treat_type, yn)\n MeasRec['method_codes'] = 'LT-PTRM-I'\n MeasRec['meas_temp'] = '273'\n MeasRec['treat_temp'] = str(int(treat_type) + 273)\n MeasRec['treat_dc_field'] = '%1.2e' % DC_FIELD\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n MeasRec['treat_ac_field'] = '0'\n elif line[4] == '3': # assume decimal IZZI format PTRM tail check\n if GET_DC_PARAMS:\n GET_DC_PARAMS, FIRST_GET_DC, yn, DC_FIELD, DC_PHI, DC_THETA = get_dc_params(\n FIRST_GET_DC, specimen, treat_type, yn)\n MeasRec['method_codes'] = 'LT-PTRM-Z'\n MeasRec['meas_temp'] = '273'\n MeasRec['treat_temp'] = str(int(treat_type) + 273)\n MeasRec['treat_dc_field'] = '0'\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n MeasRec['treat_ac_field'] = '0'\n else:\n print(\"trouble with your treatment steps\")\n MeasRec['dir_dec'] = line[46:51]\n MeasRec['dir_inc'] = line[52:58]\n# Some MIT files have and extra digit in the exponent of the magnetude.\n# That makes those files not compliant with the cit measurement file spec.\n# Not sure if we should just print an error message and exit. 
For now we accept the file and fix it.\n# The first digit of the exponent, which should always be zero, is cut out of the line if column 39 is not ' '\n if line[39] != ' ':\n line = line[0:37] + line[38:]\n M = '%8.2e' % (float(line[31:39])*volmass*1e-3) # convert to Am2\n MeasRec['magn_moment'] = M\n MeasRec['dir_csd'] = '%7.1f' % (eval(line[41:46]))\n MeasRec[\"meas_n_orient\"] = meas_n_orient\n MeasRec['standard'] = 'u'\n if len(line) > 60:\n MeasRec['instrument_codes'] = line[85:].strip('\\n \\r \\t \"')\n MeasRec['magn_x_sigma'] = '%8.2e' % (\n float(line[58:67])*1e-8) # (convert e-5emu to Am2)\n MeasRec['magn_y_sigma'] = '%8.2e' % (float(line[67:76])*1e-8)\n MeasRec['magn_z_sigma'] = '%8.2e' % (float(line[76:85])*1e-8)\n MeasRecs.append(MeasRec)\n SpecRecs.append(SpecRec)\n if sample not in samples:\n samples.append(sample)\n SampRecs.append(SampRec)\n if site not in sites:\n sites.append(site)\n SiteRecs.append(SiteRec)\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.tables['specimens'].write_magic_file(\n custom_name=spec_file, dir_path=output_dir_path)\n con.tables['samples'].write_magic_file(\n custom_name=samp_file, dir_path=output_dir_path)\n con.tables['sites'].write_magic_file(\n custom_name=site_file, dir_path=output_dir_path)\n con.tables['locations'].write_magic_file(\n custom_name=loc_file, dir_path=output_dir_path)\n con.tables['measurements'].write_magic_file(\n custom_name=meas_file, dir_path=output_dir_path)\n\n return True, meas_file", "response": "This function converts CIT formated Magnetometer data into MagIC format for Analysis and 
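The comment above describes MIT files whose moment field carries a three-digit exponent, detected by a non-blank column 39 and repaired by cutting out the leading exponent digit. A sketch of that column surgery on a synthetic line (the column positions follow the `line[0:37] + line[38:]` fix above):

```python
def fix_exponent_column(line):
    """Drop the extra leading exponent digit (index 37) when the moment
    field at columns 31:39 has spilled into column 39, as the fix above
    does for non-compliant MIT files."""
    if len(line) > 39 and line[39] != ' ':
        line = line[0:37] + line[38:]
    return line

# columns 31:39 hold the moment; a 3-digit exponent shifts into column 39
bad = ' ' * 31 + '1.20E-005' + ' rest'
good = fix_exponent_column(bad)
moment_field = good[31:39]
```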
contribution to the MagIC database."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert generic file to MagIC file", "response": "def generic(magfile=\"\", dir_path=\".\", meas_file=\"measurements.txt\",\n spec_file=\"specimens.txt\", samp_file=\"samples.txt\", site_file=\"sites.txt\",\n loc_file=\"locations.txt\", user=\"\", labfield=0, labfield_phi=0, labfield_theta=0,\n experiment=\"\", cooling_times_list=[], sample_nc=[1, 0], site_nc=[1, 0],\n location=\"unknown\", lat=\"\", lon=\"\", noave=False, input_dir_path=\"\"):\n\n \"\"\"\n Convert generic file to MagIC file(s)\n\n Parameters\n ----------\n mag_file : str\n input file name\n dir_path : str\n output directory, default \".\"\n meas_file : str\n output measurement file name, default \"measurements.txt\"\n spec_file : str\n output specimen file name, default \"specimens.txt\"\n samp_file: str\n output sample file name, default \"samples.txt\"\n site_file : str\n output site file name, default \"sites.txt\"\n loc_file : str\n output location file name, default \"locations.txt\"\n user : str\n user name, default \"\"\n labfield : float\n dc lab field (in micro tesla)\n labfield_phi : float\n declination 0-360\n labfield_theta : float\n inclination -90 - 90\n experiment : str\n experiment type, see info below\n cooling_times_list : list\n cooling times in [K/minutes] seperated by comma,\n ordered at the same order as XXX.10,XXX.20 ...XX.70\n sample_nc : list\n sample naming convention, default [1, 0], see info below\n site_nc : list\n site naming convention, default [1, 0], see info below\n location : str\n location name, default \"unknown\"\n lat : float\n latitude, default \"\"\n lon : float\n longitude, default \"\"\n noave : bool\n do not average duplicate measurements, default False (so by default, DO average)\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n\n\n Info\n --------\n Experiment type:\n Demag:\n AF and/or Thermal\n 
PI:\n paleointensity thermal experiment (ZI/IZ/IZZI)\n ATRM n:\n\n ATRM in n positions (n=6)\n\n AARM n:\n AARM in n positions\n CR:\n cooling rate experiment\n The treatment coding of the measurement file should be: XXX.00,XXX.10, XXX.20 ...XX.70 etc. (XXX.00 is optional)\n where XXX is the temperature and .10,.20... are running numbers of the cooling rate steps.\n XXX.00 is an optional zerofield baseline. XXX.70 is the alteration check.\n if using this type, you must also provide cooling rates in [K/minutes] in cooling_times_list\n separated by comma, ordered in the same order as XXX.10,XXX.20 ...XX.70\n\n No need to specify the cooling rate for the zerofield\n But users need to make sure that there are no duplicate measurements in the file\n\n NLT:\n non-linear-TRM experiment\n\n Specimen-sample naming convention:\n X determines which kind of convention (initial characters, terminal characters, or delimiter)\n Y determines how many characters to remove to go from specimen --> sample OR which delimiter to use\n X=0 Y=n: specimen is distinguished from sample by n initial characters.\n (example: generic(samp_nc=[0, 4], ...)\n if n=4 and specimen = mgf13a then sample = mgf13)\n X=1 Y=n: specimen is distinguished from sample by n terminal characters.\n (example: generic(samp_nc=[1, 1], ...))\n if n=1 and specimen = mgf13a then sample = mgf13)\n X=2 Y=c: specimen is distinguished from sample by a delimiter.\n (example: generic([2, \"-\"]))\n if c=- and specimen = mgf13-a then sample = mgf13)\n default: sample is the same as specimen name\n\n Sample-site naming convention:\n X determines which kind of convention (initial characters, terminal characters, or delimiter)\n Y determines how many characters to remove to go from sample --> site OR which delimiter to use\n X=0 Y=n: sample is distinguished from site by n initial characters.\n (example: generic(site_nc=[0, 3]))\n if n=3 and sample = mgf13 then site = mgf)\n X=1 Y=n: sample is distinguished from site
by n terminal characters.\n (example: generic(site_nc=[1, 2]))\n if n=2 and sample = mgf13 then site = mgf)\n X=2 Y=c: sample is distinguished from site by a delimiter.\n (example: generic(site_nc=[2, \"-\"]))\n if c='-' and sample = 'mgf-13' then site = mgf)\n default: site name is the same as sample name\n\n\n \"\"\"\n\n # --------------------------------------\n # functions\n # --------------------------------------\n\n def sort_magic_file(path, ignore_lines_n, sort_by_this_name):\n '''\n reads a file with headers. Each line is stored as a dictionary following the headers.\n Lines are sorted in DATA by the sort_by_this_name header\n DATA[sort_by_this_name]=[dictionary1,dictionary2,...]\n '''\n DATA = {}\n fin = open(path, 'r')\n # ignore first lines\n for i in range(ignore_lines_n):\n fin.readline()\n # header\n line = fin.readline()\n header = line.strip('\\n').split('\\t')\n # print header\n for line in fin.readlines():\n if line[0] == \"#\":\n continue\n tmp_data = {}\n tmp_line = line.strip('\\n').split('\\t')\n # print tmp_line\n for i in range(len(tmp_line)):\n if i >= len(header):\n continue\n tmp_data[header[i]] = tmp_line[i]\n DATA[tmp_data[sort_by_this_name]] = tmp_data\n fin.close()\n return(DATA)\n\n def read_generic_file(path, average_replicates):\n '''\n reads a generic file format.
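The `[X, Y]` naming conventions documented above reduce a specimen name to a sample name (or a sample to a site name). A sketch following the behavior of the `get_upper_level_name` helper defined later (X=0 keeps the first Y characters, X=1 strips the last Y, X=2 splits on a delimiter); the X=0 docstring example is ambiguous, so this follows the code path rather than the example:

```python
def upper_level_name(name, nc):
    """Derive a sample/site name from a specimen/sample name using the
    [X, Y] convention described above."""
    kind, val = nc
    if kind == 0 and int(val) != 0:
        return name[:int(val)]           # keep first Y characters
    if kind == 1 and int(val) != 0:
        return name[:-int(val)]          # strip last Y characters
    if kind == 2:
        parts = name.split(str(val))     # split on delimiter Y
        return str(val).join(parts[:-1]) if len(parts) > 1 else name
    return name

sample = upper_level_name('mgf13a', [1, 1])  # strip 1 terminal char
site = upper_level_name('mgf-13', [2, '-'])  # split on delimiter
```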
If average_replicates==True average replicate measurements.\n Returns a Data dictionary with measurement lines sorted by specimen\n Data[specimen_name][dict1,dict2,...]\n '''\n Data = {}\n Fin = open(path, 'r')\n header = Fin.readline().strip('\\n').split('\\t')\n duplicates = []\n for line in Fin.readlines():\n tmp_data = {}\n # found_duplicate=False\n l = line.strip('\\n').split('\\t')\n for i in range(min(len(header), len(l))):\n tmp_data[header[i]] = l[i]\n specimen = tmp_data['specimen']\n if specimen not in list(Data.keys()):\n Data[specimen] = []\n Data[specimen].append(tmp_data)\n Fin.close()\n # search for duplicates\n for specimen in list(Data.keys()):\n x = len(Data[specimen])-1\n new_data = []\n duplicates = []\n for i in range(1, x):\n while i < len(Data[specimen]) and Data[specimen][i]['treatment'] == Data[specimen][i-1]['treatment'] and Data[specimen][i]['treatment_type'] == Data[specimen][i-1]['treatment_type']:\n duplicates.append(Data[specimen][i])\n del(Data[specimen][i])\n if len(duplicates) > 0:\n if average_replicates:\n duplicates.append(Data[specimen][i-1])\n Data[specimen][i-1] = average_duplicates(duplicates)\n print(\"-W- WARNING: averaging %i duplicates for specimen %s treatment %s\" %\n (len(duplicates), specimen, duplicates[-1]['treatment']))\n duplicates = []\n else:\n Data[specimen][i-1] = duplicates[-1]\n print(\"-W- WARNING: found %i duplicates for specimen %s treatment %s.
Taking the last measurement only\" %\n (len(duplicates), specimen, duplicates[-1]['treatment']))\n duplicates = []\n\n if i == len(Data[specimen])-1:\n break\n\n return(Data)\n\n def average_duplicates(duplicates):\n '''\n avarage replicate measurements.\n '''\n carts_s, carts_g, carts_t = [], [], []\n for rec in duplicates:\n moment = float(rec['moment'])\n if 'dec_s' in list(rec.keys()) and 'inc_s' in list(rec.keys()):\n if rec['dec_s'] != \"\" and rec['inc_s'] != \"\":\n dec_s = float(rec['dec_s'])\n inc_s = float(rec['inc_s'])\n cart_s = pmag.dir2cart([dec_s, inc_s, moment])\n carts_s.append(cart_s)\n if 'dec_g' in list(rec.keys()) and 'inc_g' in list(rec.keys()):\n if rec['dec_g'] != \"\" and rec['inc_g'] != \"\":\n dec_g = float(rec['dec_g'])\n inc_g = float(rec['inc_g'])\n cart_g = pmag.dir2cart([dec_g, inc_g, moment])\n carts_g.append(cart_g)\n if 'dec_t' in list(rec.keys()) and 'inc_t' in list(rec.keys()):\n if rec['dec_t'] != \"\" and rec['inc_t'] != \"\":\n dec_t = float(rec['dec_t'])\n inc_t = float(rec['inc_t'])\n cart_t = pmag.dir2cart([dec_t, inc_t, moment])\n carts_t.append(cart_t)\n if len(carts_s) > 0:\n carts = scipy.array(carts_s)\n x_mean = scipy.mean(carts[:, 0])\n y_mean = scipy.mean(carts[:, 1])\n z_mean = scipy.mean(carts[:, 2])\n mean_dir = pmag.cart2dir([x_mean, y_mean, z_mean])\n mean_dec_s = \"%.2f\" % mean_dir[0]\n mean_inc_s = \"%.2f\" % mean_dir[1]\n mean_moment = \"%10.3e\" % mean_dir[2]\n else:\n mean_dec_s, mean_inc_s = \"\", \"\"\n if len(carts_g) > 0:\n carts = scipy.array(carts_g)\n x_mean = scipy.mean(carts[:, 0])\n y_mean = scipy.mean(carts[:, 1])\n z_mean = scipy.mean(carts[:, 2])\n mean_dir = pmag.cart2dir([x_mean, y_mean, z_mean])\n mean_dec_g = \"%.2f\" % mean_dir[0]\n mean_inc_g = \"%.2f\" % mean_dir[1]\n mean_moment = \"%10.3e\" % mean_dir[2]\n else:\n mean_dec_g, mean_inc_g = \"\", \"\"\n\n if len(carts_t) > 0:\n carts = scipy.array(carts_t)\n x_mean = scipy.mean(carts[:, 0])\n y_mean = scipy.mean(carts[:, 1])\n z_mean 
= scipy.mean(carts[:, 2])\n mean_dir = pmag.cart2dir([x_mean, y_mean, z_mean])\n mean_dec_t = \"%.2f\" % mean_dir[0]\n mean_inc_t = \"%.2f\" % mean_dir[1]\n mean_moment = \"%10.3e\" % mean_dir[2]\n else:\n mean_dec_t, mean_inc_t = \"\", \"\"\n\n meanrec = {}\n for key in list(duplicates[0].keys()):\n if key in ['dec_s', 'inc_s', 'dec_g', 'inc_g', 'dec_t', 'inc_t', 'moment']:\n continue\n else:\n meanrec[key] = duplicates[0][key]\n meanrec['dec_s'] = mean_dec_s\n meanrec['dec_g'] = mean_dec_g\n meanrec['dec_t'] = mean_dec_t\n meanrec['inc_s'] = mean_inc_s\n meanrec['inc_g'] = mean_inc_g\n meanrec['inc_t'] = mean_inc_t\n meanrec['moment'] = mean_moment\n return meanrec\n\n def get_upper_level_name(name, nc):\n '''\n get sample/site name from specimen/sample using naming convention\n '''\n if float(nc[0]) == 0:\n if float(nc[1]) != 0:\n number_of_char = int(nc[1])\n high_name = name[:number_of_char]\n else:\n high_name = name\n elif float(nc[0]) == 1:\n if float(nc[1]) != 0:\n number_of_char = int(nc[1])*-1\n high_name = name[:number_of_char]\n else:\n high_name = name\n elif float(nc[0]) == 2:\n d = str(nc[1])\n name_splitted = name.split(d)\n if len(name_splitted) == 1:\n high_name = name_splitted[0]\n else:\n high_name = d.join(name_splitted[:-1])\n else:\n high_name = name\n return high_name\n\n def merge_pmag_recs(old_recs):\n recs = {}\n recs = copy.deepcopy(old_recs)\n headers = []\n for rec in recs:\n for key in list(rec.keys()):\n if key not in headers:\n headers.append(key)\n for rec in recs:\n for header in headers:\n if header not in list(rec.keys()):\n rec[header] = \"\"\n return recs\n\n # --------------------------------------\n # start conversion from generic\n # --------------------------------------\n\n # format and validate variables\n\n input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)\n labfield = float(labfield)\n labfield_phi = float(labfield_phi)\n labfield_theta = float(labfield_theta)\n\n if magfile:\n magfile = 
pmag.resolve_file_name(magfile, input_dir_path)\n try:\n input = open(magfile, 'r')\n except:\n print(\"bad mag file:\", magfile)\n return False, \"bad mag file\"\n else:\n print(\"mag_file field is a required option\")\n return False, \"mag_file field is a required option\"\n\n if not experiment:\n print(\"-E- Must provide experiment. Please provide experiment type of: Demag, PI, ATRM n (n of positions), CR (see below for format), NLT\")\n return False, \"Must provide experiment. Please provide experiment type of: Demag, PI, ATRM n (n of positions), CR (see help for format), NLT\"\n\n if 'ATRM' in experiment:\n try:\n experiment, atrm_n_pos = experiment.split()\n atrm_n_pos = int(atrm_n_pos)\n except:\n experiment = 'ATRM'\n atrm_n_pos = 6\n if 'AARM' in experiment:\n try:\n experiment, aarm_n_pos = experiment.split()\n aarm_n_pos = int(aarm_n_pos)\n except:\n experiment = 'AARM'\n aarm_n_pos = 6\n\n if experiment == 'CR':\n if command_line:\n ind = sys.argv.index(\"CR\")\n cooling_times = sys.argv[ind+1]\n cooling_times_list = cooling_times.split(',')\n noave = True\n # if not command line, cooling_times_list is already set\n\n # --------------------------------------\n # read data from generic file\n # --------------------------------------\n\n mag_data = read_generic_file(magfile, not noave)\n\n # --------------------------------------\n # for each specimen get the data, and translate it to MagIC format\n # --------------------------------------\n\n MeasRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n specimens_list = sorted(mag_data.keys())\n for specimen in specimens_list:\n measurement_running_number = 0\n this_specimen_treatments = [] # a list of all treatments\n MeasRecs_this_specimen = []\n LP_this_specimen = [] # a list of all lab protocols\n IZ, ZI = 0, 0 # counter for IZ and ZI steps\n\n for meas_line in mag_data[specimen]:\n\n # ------------------\n # trivial MeasRec data\n # ------------------\n\n MeasRec, SpecRec, SampRec, SiteRec, 
LocRec = {}, {}, {}, {}, {}\n\n specimen = meas_line['specimen']\n sample = get_upper_level_name(specimen, sample_nc)\n site = get_upper_level_name(sample, site_nc)\n sample_method_codes = \"\"\n azimuth, dip, DipDir, Dip = \"\", \"\", \"\", \"\"\n\n MeasRec['citations'] = \"This study\"\n MeasRec[\"specimen\"] = specimen\n MeasRec['analysts'] = user\n MeasRec[\"instrument_codes\"] = \"\"\n MeasRec[\"quality\"] = 'g'\n MeasRec[\"treat_step_num\"] = \"%i\" % measurement_running_number\n MeasRec[\"magn_moment\"] = '%10.3e' % (\n float(meas_line[\"moment\"])*1e-3) # in Am^2\n MeasRec[\"meas_temp\"] = '273.' # room temp in kelvin\n\n # ------------------\n # decode treatments from treatment column in the generic file\n # ------------------\n\n treatment = []\n treatment_code = str(meas_line['treatment']).split(\".\")\n treatment.append(float(treatment_code[0]))\n if len(treatment_code) == 1:\n treatment.append(0)\n else:\n treatment.append(float(treatment_code[1]))\n\n # ------------------\n # lab field direction\n # ------------------\n\n if experiment in ['PI', 'NLT', 'CR']:\n if float(treatment[1]) in [0., 3.]: # zerofield step or tail check\n MeasRec[\"treat_dc_field\"] = \"0\"\n MeasRec[\"treat_dc_field_phi\"] = \"0\"\n MeasRec[\"treat_dc_field_theta\"] = \"0\"\n elif not labfield:\n print(\n \"-W- WARNING: labfield (-dc) is a required argument for this experiment type\")\n return False, \"labfield (-dc) is a required argument for this experiment type\"\n else:\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (float(labfield))\n MeasRec[\"treat_dc_field_phi\"] = \"%.2f\" % (\n float(labfield_phi))\n MeasRec[\"treat_dc_field_theta\"] = \"%.2f\" % (\n float(labfield_theta))\n else:\n MeasRec[\"treat_dc_field\"] = \"\"\n MeasRec[\"treat_dc_field_phi\"] = \"\"\n MeasRec[\"treat_dc_field_theta\"] = \"\"\n\n # ------------------\n # treatment temperature/peak field\n # ------------------\n\n if experiment == 'Demag':\n if meas_line['treatment_type'] == 'A':\n 
MeasRec['treat_temp'] = \"273.\"\n MeasRec[\"treat_ac_field\"] = \"%.3e\" % (treatment[0]*1e-3)\n elif meas_line['treatment_type'] == 'N':\n MeasRec['treat_temp'] = \"273.\"\n MeasRec[\"treat_ac_field\"] = \"\"\n else:\n MeasRec['treat_temp'] = \"%.2f\" % (treatment[0]+273.)\n MeasRec[\"treat_ac_field\"] = \"\"\n else:\n MeasRec['treat_temp'] = \"%.2f\" % (treatment[0]+273.)\n MeasRec[\"treat_ac_field\"] = \"\"\n\n # ---------------------\n # Lab treatment\n # Lab protocol\n # ---------------------\n\n # ---------------------\n # Lab treatment and lab protocol for NRM:\n # ---------------------\n\n if float(meas_line['treatment']) == 0:\n LT = \"LT-NO\"\n LP = \"\" # will be filled later after finishing reading all measurement lines\n\n # ---------------------\n # Lab treatment and lab protocol for paleointensity experiment\n # ---------------------\n\n elif experiment == 'PI':\n LP = \"LP-PI-TRM\"\n if treatment[1] == 0:\n LT = \"LT-T-Z\"\n elif treatment[1] == 1 or treatment[1] == 10: # infield\n LT = \"LT-T-I\"\n elif treatment[1] == 2 or treatment[1] == 20: # pTRM check\n LT = \"LT-PTRM-I\"\n LP = LP+\":\"+\"LP-PI-ALT-PTRM\"\n elif treatment[1] == 3 or treatment[1] == 30: # Tail check\n LT = \"LT-PTRM-MD\"\n LP = LP+\":\"+\"LP-PI-BT-MD\"\n elif treatment[1] == 4 or treatment[1] == 40: # Additivity check\n LT = \"LT-PTRM-AC\"\n LP = LP+\":\"+\"LP-PI-BT-MD\"\n else:\n print(\"-E- unknown measurement code specimen %s treatment %s\" %\n (meas_line['specimen'], meas_line['treatment']))\n MeasRec = {}\n continue\n # save all treatments in a list\n # we will use this later to distinguish between ZI / IZ / and IZZI\n\n this_specimen_treatments.append(float(meas_line['treatment']))\n if LT == \"LT-T-Z\":\n if float(treatment[0]+0.1) in this_specimen_treatments:\n LP = LP+\":\"+\"LP-PI-IZ\"\n if LT == \"LT-T-I\":\n if float(treatment[0]+0.0) in this_specimen_treatments:\n LP = LP+\":\"+\"LP-PI-ZI\"\n # ---------------------\n # Lab treatment and lab protocol for demag 
experiment\n # ---------------------\n\n elif \"Demag\" in experiment:\n if meas_line['treatment_type'] == 'A':\n LT = \"LT-AF-Z\"\n LP = \"LP-DIR-AF\"\n else:\n LT = \"LT-T-Z\"\n LP = \"LP-DIR-T\"\n\n # ---------------------\n # Lab treatment and lab protocol for ATRM experiment\n # ---------------------\n\n elif experiment in ['ATRM', 'AARM']:\n\n if experiment == 'ATRM':\n LP = \"LP-AN-TRM\"\n n_pos = atrm_n_pos\n if n_pos != 6:\n print(\n \"the program does not support ATRM in %i positions.\" % n_pos)\n continue\n\n if experiment == 'AARM':\n LP = \"LP-AN-ARM\"\n n_pos = aarm_n_pos\n if n_pos != 6:\n print(\n \"the program does not support AARM in %i positions.\" % n_pos)\n continue\n\n if treatment[1] == 0:\n if experiment == 'ATRM':\n LT = \"LT-T-Z\"\n MeasRec['treat_temp'] = \"%.2f\" % (treatment[0]+273.)\n MeasRec[\"treat_ac_field\"] = \"\"\n\n else:\n LT = \"LT-AF-Z\"\n MeasRec['treat_temp'] = \"273.\"\n MeasRec[\"treat_ac_field\"] = \"%.3e\" % (\n treatment[0]*1e-3)\n MeasRec[\"treat_dc_field\"] = '0'\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n else:\n if experiment == 'ATRM':\n # alteration check as final measurement\n if float(treatment[1]) == 70 or float(treatment[1]) == 7:\n LT = \"LT-PTRM-I\"\n else:\n LT = \"LT-T-I\"\n else:\n LT = \"LT-AF-I\"\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (float(labfield))\n\n # find the direction of the lab field in two ways:\n\n # (1) using the treatment coding (XX.1=+x, XX.2=+y, XX.3=+z, XX.4=-x, XX.5=-y, XX.6=-z)\n tdec = [0, 90, 0, 180, 270, 0, 0, 90, 0]\n tinc = [0, 0, 90, 0, 0, -90, 0, 0, 90]\n if treatment[1] < 10:\n ipos_code = int(treatment[1]) - 1\n else:\n ipos_code = int(treatment[1] / 10) - 1\n\n # (2) using the magnetization\n if meas_line[\"dec_s\"] != \"\":\n DEC = float(meas_line[\"dec_s\"])\n INC = float(meas_line[\"inc_s\"])\n elif meas_line[\"dec_g\"] != \"\":\n DEC = float(meas_line[\"dec_g\"])\n INC = float(meas_line[\"inc_g\"])\n elif meas_line[\"dec_t\"] != 
\"\":\n DEC = float(meas_line[\"dec_t\"])\n INC = float(meas_line[\"inc_t\"])\n if DEC < 0 and DEC > -359:\n DEC = 360.+DEC\n\n if INC < 45 and INC > -45:\n if DEC > 315 or DEC < 45:\n ipos_guess = 0\n if DEC > 45 and DEC < 135:\n ipos_guess = 1\n if DEC > 135 and DEC < 225:\n ipos_guess = 3\n if DEC > 225 and DEC < 315:\n ipos_guess = 4\n else:\n if INC > 45:\n ipos_guess = 2\n if INC < -45:\n ipos_guess = 5\n # prefer the guess over the code\n ipos = ipos_guess\n # check it\n if treatment[1] != 7 and treatment[1] != 70:\n if ipos_guess != ipos_code:\n print(\"-W- WARNING: check specimen %s step %s, anisotropy measurements, coding does not match the direction of the lab field\" % (\n specimen, meas_line['treatment']))\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (tdec[ipos])\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (tinc[ipos])\n\n # ---------------------\n # Lab treatment and lab protocol for cooling rate experiment\n # ---------------------\n\n elif experiment == \"CR\":\n\n LP = \"LP-CR-TRM\"\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatment[0])+273.) 
# temp in kelvin\n\n if treatment[1] == 0:\n LT = \"LT-T-Z\"\n MeasRec[\"treat_dc_field\"] = \"0\"\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n else:\n if treatment[1] == 7: # alteration check as final measurement\n LT = \"LT-PTRM-I\"\n else:\n LT = \"LT-T-I\"\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (\n labfield_phi) # labfield phi\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (\n labfield_theta) # labfield theta\n\n indx = int(treatment[1])-1\n # alteration check marked as 0.7 in the measurement file\n if indx == 6:\n cooling_time = cooling_times_list[-1]\n else:\n cooling_time = cooling_times_list[indx]\n MeasRec[\"measurement_description\"] = \"cooling_rate\" + \\\n \":\"+cooling_time+\":\"+\"K/min\"\n\n # ---------------------\n # Lab treatment and lab protocol for NLT experiment\n # ---------------------\n\n elif 'NLT' in experiment:\n print(\n \"NLT rate experiment files are not supported yet. 
Contact rshaar@ucsd.edu\")\n\n # ---------------------\n # method_codes for this measurement only\n # LP will be fixed after all measurement lines are read\n # ---------------------\n\n MeasRec[\"method_codes\"] = LT+\":\"+LP\n\n # --------------------\n # deal with specimen orientation and different coordinate system\n # --------------------\n\n found_s, found_geo, found_tilt = False, False, False\n if \"dec_s\" in list(meas_line.keys()) and \"inc_s\" in list(meas_line.keys()):\n if meas_line[\"dec_s\"] != \"\" and meas_line[\"inc_s\"] != \"\":\n found_s = True\n MeasRec[\"dir_dec\"] = meas_line[\"dec_s\"]\n MeasRec[\"dir_inc\"] = meas_line[\"inc_s\"]\n if \"dec_g\" in list(meas_line.keys()) and \"inc_g\" in list(meas_line.keys()):\n if meas_line[\"dec_g\"] != \"\" and meas_line[\"inc_g\"] != \"\":\n found_geo = True\n if \"dec_t\" in list(meas_line.keys()) and \"inc_t\" in list(meas_line.keys()):\n if meas_line[\"dec_t\"] != \"\" and meas_line[\"inc_t\"] != \"\":\n found_tilt = True\n\n # -----------------------------\n # specimen coordinates: no\n # geographic coordinates: yes\n # -----------------------------\n\n if found_geo and not found_s:\n MeasRec[\"dir_dec\"] = meas_line[\"dec_g\"]\n MeasRec[\"dir_inc\"] = meas_line[\"inc_g\"]\n azimuth = \"0\"\n dip = \"0\"\n\n # -----------------------------\n # specimen coordinates: no\n # geographic coordinates: no\n # -----------------------------\n if not found_geo and not found_s:\n print(\"-E- ERROR: sample %s does not have dec_s/inc_s or dec_g/inc_g. 
Ignoring specimen %s \" %\n (sample, specimen))\n break\n\n # -----------------------------\n # specimen coordinates: yes\n # geographic coordinates: yes\n #\n # comment: Ron, this needs to be tested!!\n # -----------------------------\n if found_geo and found_s:\n cdec, cinc = float(meas_line[\"dec_s\"]), float(\n meas_line[\"inc_s\"])\n gdec, ginc = float(meas_line[\"dec_g\"]), float(\n meas_line[\"inc_g\"])\n az, pl = pmag.get_azpl(cdec, cinc, gdec, ginc)\n azimuth = \"%.1f\" % az\n dip = \"%.1f\" % pl\n\n # -----------------------------\n # specimen coordinates: yes\n # geographic coordinates: no\n # -----------------------------\n if not found_geo and found_s and \"Demag\" in experiment:\n print(\"-W- WARNING: missing dip or azimuth for sample %s\" % sample)\n\n # -----------------------------\n # tilt-corrected coordinates: yes\n # geographic coordinates: no\n # -----------------------------\n if found_tilt and not found_geo:\n print(\n \"-E- ERROR: missing geographic data for sample %s. 
Ignoring tilt-corrected data \" % sample)\n\n # -----------------------------\n # tilt-corrected coordinates: yes\n # geographic coordinates: yes\n # -----------------------------\n if found_tilt and found_geo:\n dec_geo, inc_geo = float(\n meas_line[\"dec_g\"]), float(meas_line[\"inc_g\"])\n dec_tilt, inc_tilt = float(\n meas_line[\"dec_t\"]), float(meas_line[\"inc_t\"])\n if dec_geo == dec_tilt and inc_geo == inc_tilt:\n DipDir, Dip = 0., 0.\n else:\n DipDir, Dip = pmag.get_tilt(\n dec_geo, inc_geo, dec_tilt, inc_tilt)\n\n # -----------------------------\n # samples method codes\n # geographic coordinates: no\n # -----------------------------\n if found_tilt or found_geo:\n sample_method_codes = \"SO-NO\"\n\n if specimen != \"\" and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else \"\" for x in SpecRecs]:\n SpecRec['specimen'] = specimen\n SpecRec['sample'] = sample\n SpecRec['citations'] = \"This study\"\n SpecRecs.append(SpecRec)\n if sample != \"\" and sample not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['sample'] = sample\n SampRec['site'] = site\n SampRec['citations'] = \"This study\"\n SampRec['azimuth'] = azimuth\n SampRec['dip'] = dip\n SampRec['bed_dip_direction'] = DipDir\n SampRec['bed_dip'] = Dip\n SampRec['method_codes'] = sample_method_codes\n SampRecs.append(SampRec)\n if site != \"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec['citations'] = \"This study\"\n SiteRec['lat'] = lat\n SiteRec['lon'] = lon\n SiteRecs.append(SiteRec)\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec['citations'] = \"This study\"\n LocRec['lat_n'] = lat\n LocRec['lon_e'] = lon\n LocRec['lat_s'] = lat\n LocRec['lon_w'] = lon\n LocRecs.append(LocRec)\n\n MeasRecs_this_specimen.append(MeasRec)\n 
measurement_running_number += 1\n # -------\n\n # -------\n # after reading all the measurements lines for this specimen\n # 1) add experiments\n # 2) fix method_codes with the correct lab protocol\n # -------\n LP_this_specimen = []\n for MeasRec in MeasRecs_this_specimen:\n method_codes = MeasRec[\"method_codes\"].split(\":\")\n for code in method_codes:\n if \"LP\" in code and code not in LP_this_specimen:\n LP_this_specimen.append(code)\n # check IZ/ZI/IZZI\n if \"LP-PI-ZI\" in LP_this_specimen and \"LP-PI-IZ\" in LP_this_specimen:\n LP_this_specimen.remove(\"LP-PI-ZI\")\n LP_this_specimen.remove(\"LP-PI-IZ\")\n LP_this_specimen.append(\"LP-PI-BT-IZZI\")\n\n # add the right LP codes and fix experiment name\n for MeasRec in MeasRecs_this_specimen:\n # MeasRec[\"experiment\"]=MeasRec[\"specimen\"]+\":\"+\":\".join(LP_this_specimen)\n method_codes = MeasRec[\"method_codes\"].split(\":\")\n LT = \"\"\n for code in method_codes:\n if code[:3] == \"LT-\":\n LT = code\n break\n MeasRec[\"method_codes\"] = LT+\":\"+\":\".join(LP_this_specimen)\n MeasRec[\"method_codes\"] = MeasRec[\"method_codes\"].strip(\":\")\n MeasRecs.append(MeasRec)\n\n # --\n # write tables to file\n # --\n\n con = cb.Contribution(dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.tables['specimens'].write_magic_file(custom_name=spec_file,dir_path=dir_path)\n con.tables['samples'].write_magic_file(custom_name=samp_file,dir_path=dir_path)\n con.tables['sites'].write_magic_file(custom_name=site_file,dir_path=dir_path)\n con.tables['locations'].write_magic_file(custom_name=loc_file,dir_path=dir_path)\n 
con.tables['measurements'].write_magic_file(custom_name=meas_file,dir_path=dir_path)\n\n return True, meas_file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef huji(magfile=\"\", dir_path=\".\", input_dir_path=\"\", datafile=\"\", codelist=\"\",\n meas_file=\"measurements.txt\", spec_file=\"specimens.txt\",\n samp_file=\"samples.txt\", site_file=\"sites.txt\", loc_file=\"locations.txt\",\n user=\"\", specnum=0, samp_con=\"1\", labfield=0, phi=0, theta=0,\n location=\"\", CR_cooling_times=None, noave=False):\n \"\"\"\n Convert HUJI format file to MagIC file(s)\n\n Parameters\n ----------\n magfile : str\n input file name\n dir_path : str\n working directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n datafile : str\n HUJI datafile with sample orientations, default \"\"\n codelist : str\n colon-delimited protocols, include all that apply\n see info below\n meas_file : str\n output measurement file name, default \"measurements.txt\"\n spec_file : str\n output specimen file name, default \"specimens.txt\"\n samp_file : str\n output sample file name, default \"samples.txt\"\n site_file : str\n output site file name, default \"sites.txt\"\n loc_file : str\n output location file name, default \"locations.txt\"\n user : str\n user name, default \"\"\n specnum : int\n number of characters to designate a specimen, default 0\n samp_con : str\n sample/site naming convention, default '1', see info below\n labfield : float\n dc lab field (in micro tesla)\n phi : float\n declination 0-360\n theta : float\n inclination -90 - 90\n location : str\n location name, default \"\"\n CR_cooling_times : list\n default None\n cooling times in [K/minutes] separated by commas,\n ordered in the same order as XXX.10,XXX.20 ...XX.70\n noave : bool\n do not average duplicate measurements, default False (so by default, DO average)\n\n Info\n 
--------\n Code list:\n AF: af demag\n T: thermal including thellier but not trm acquisition\n N: NRM only\n TRM: trm acquisition\n ANI: anisotropy experiment\n CR: cooling rate experiment.\n The treatment coding of the measurement file should be: XXX.00,XXX.10, XXX.20 ...XX.70 etc. (XXX.00 is optional)\n where XXX is the temperature and .10,.20... are running numbers of the cooling rate steps.\n XXX.00 is optional zerofield baseline. XXX.70 is alteration check.\n syntax in sio_magic is: -LP CR xxx,yyy,zzz,.....xx\n where xx, yyy,zzz...xxx are cooling times in [K/minutes], separated by commas, ordered in the same order as XXX.10,XXX.20 ...XX.70\n if you use a zerofield step then no need to specify the cooling rate for the zerofield\n\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. [default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from sample XXXYYY\n\n\n \"\"\"\n\n # format and validate variables\n specnum = int(specnum)\n labfield = float(labfield) * 1e-6\n phi = int(phi)\n theta = int(theta)\n input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)\n if magfile:\n try:\n fname = pmag.resolve_file_name(magfile, input_dir_path)\n infile = open(fname, 'r')\n except IOError as ex:\n print(ex)\n print(\"bad mag file name\")\n return False, \"bad mag file name\"\n else:\n print(\"mag_file field is a required option\")\n return False, \"mag_file field is a required option\"\n\n if specnum != 0:\n specnum = -specnum\n if \"4\" in samp_con:\n 
if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = int(samp_con.split(\"-\")[1])\n samp_con = \"4\"\n if \"7\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z = int(samp_con.split(\"-\")[1])\n samp_con = \"7\"\n else:\n Z = 1\n\n if codelist:\n codes = codelist.split(':')\n else:\n print(\n \"-E- Must select experiment type (codelist/-LP, options are: [AF, T, ANI, TRM, CR])\")\n return False, \"Must select experiment type (codelist/-LP, options are: [AF, T, ANI, TRM, CR])\"\n if \"AF\" in codes:\n demag = 'AF'\n LPcode = \"LP-DIR-AF\"\n if \"T\" in codes:\n demag = \"T\"\n if not labfield:\n LPcode = \"LP-DIR-T\"\n if labfield:\n LPcode = \"LP-PI-TRM\"\n if \"ANI\" in codes:\n if not labfield:\n print(\"missing lab field option\")\n return False, \"missing lab field option\"\n LPcode = \"LP-AN-TRM\"\n\n if \"TRM\" in codes:\n demag = \"T\"\n LPcode = \"LP-TRM\"\n # trm=1\n\n if \"CR\" in codes:\n demag = \"T\"\n # dc should be in the code\n if not labfield:\n print(\"missing lab field option\")\n return False, \"missing lab field option\"\n\n LPcode = \"LP-CR-TRM\" # TRM in different cooling rates\n if command_line:\n ind = sys.argv.index(\"-LP\")\n CR_cooling_times = sys.argv[ind+2].split(\",\")\n\n version_num = pmag.get_version()\n\n # --------------------------------------\n # Read the file\n # Assumptions:\n # 1. different lab protocols are in different files\n # 2. 
measurements are in the correct order\n # --------------------------------------\n\n Data = {}\n line_no = 0\n for line in infile.readlines():\n line_no += 1\n this_line_data = {}\n instcode = \"\"\n if len(line) < 2:\n continue\n if line[0] == \"#\": # HUJI way of marking bad data points\n continue\n\n rec = line.strip('\\n').split()\n specimen = rec[0]\n date = rec[2].split(\"/\")\n hour = rec[3].split(\":\")\n treatment_type = rec[4]\n treatment = rec[5].split(\".\")\n dec_core = rec[6]\n inc_core = rec[7]\n moment_emu = float(rec[-1])\n\n if specimen not in list(Data.keys()):\n Data[specimen] = []\n\n # check duplicate treatments:\n # if yes, delete the first and use the second\n\n if len(Data[specimen]) > 0:\n if treatment == Data[specimen][-1]['treatment']:\n del(Data[specimen][-1])\n print(\"-W- Identical treatments in file %s magfile line %i: specimen %s, treatment %s; ignoring the first. \" %\n (magfile, line_no, specimen, \".\".join(treatment)))\n\n this_line_data = {}\n this_line_data['specimen'] = specimen\n this_line_data['date'] = date\n this_line_data['hour'] = hour\n this_line_data['treatment_type'] = treatment_type\n this_line_data['treatment'] = treatment\n this_line_data['dec_core'] = dec_core\n this_line_data['inc_core'] = inc_core\n this_line_data['moment_emu'] = moment_emu\n this_line_data['azimuth'] = ''\n this_line_data['dip'] = ''\n this_line_data['bed_dip_direction'] = ''\n this_line_data['bed_dip'] = ''\n this_line_data['lat'] = ''\n this_line_data['lon'] = ''\n this_line_data['volume'] = ''\n Data[specimen].append(this_line_data)\n infile.close()\n print(\"-I- done reading file %s\" % magfile)\n\n if datafile:\n dinfile = open(datafile)\n for line in dinfile.readlines():\n data = line.split()\n if len(data) < 8 or data[0] == '':\n continue\n elif data[0] in list(Data.keys()):\n for i in range(len(Data[data[0]])):\n Data[data[0]][i]['azimuth'] = data[1]\n Data[data[0]][i]['dip'] = data[2]\n try:\n 
Data[data[0]][i]['bed_dip_direction'] = float(\n data[3])+90\n except ValueError:\n Data[data[0]][i]['bed_dip_direction'] = ''\n Data[data[0]][i]['bed_dip'] = data[4]\n Data[data[0]][i]['lat'] = data[5]\n Data[data[0]][i]['lon'] = data[6]\n Data[data[0]][i]['volume'] = data[7]\n else:\n print(\n \"no specimen %s found in magnetometer data file when reading specimen orientation data file, or data file record for specimen too short\" % data[0])\n dinfile.close()\n\n # --------------------------------------\n # Convert to MagIC\n # --------------------------------------\n\n specimens_list = list(Data.keys())\n specimens_list.sort()\n\n MeasRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n for specimen in specimens_list:\n for i in range(len(Data[specimen])):\n this_line_data = Data[specimen][i]\n methcode = \"\"\n MeasRec, SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}, {}\n specimen = this_line_data['specimen']\n if specnum != 0:\n sample = this_line_data['specimen'][:specnum]\n else:\n sample = this_line_data['specimen']\n site = pmag.parse_site(sample, samp_con, Z)\n if not location:\n location = site\n if specimen != \"\" and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else \"\" for x in SpecRecs]:\n SpecRec['specimen'] = specimen\n SpecRec['sample'] = sample\n SpecRecs.append(SpecRec)\n if sample != \"\" and sample not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['sample'] = sample\n SampRec['site'] = site\n SampRec['azimuth'] = this_line_data['azimuth']\n SampRec['dip'] = this_line_data['dip']\n SampRec['bed_dip_direction'] = this_line_data['bed_dip_direction']\n SampRec['bed_dip'] = this_line_data['bed_dip']\n SampRecs.append(SampRec)\n if site != \"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec['lat'] = this_line_data['lat']\n SiteRec['lon'] = this_line_data['lon']\n 
SiteRecs.append(SiteRec)\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec['lat_n'] = this_line_data['lat']\n LocRec['lon_e'] = this_line_data['lon']\n LocRec['lat_s'] = this_line_data['lat']\n LocRec['lon_w'] = this_line_data['lon']\n LocRecs.append(LocRec)\n\n MeasRec['specimen'] = specimen\n MeasRec[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"magn_moment\"] = '%10.3e' % (\n float(this_line_data['moment_emu'])*1e-3) # moment in Am^2 (from emu)\n MeasRec[\"dir_dec\"] = this_line_data['dec_core']\n MeasRec[\"dir_inc\"] = this_line_data['inc_core']\n\n date = this_line_data['date']\n hour = this_line_data['hour']\n if len(date[2]) < 4 and float(date[2]) >= 70:\n yyyy = \"19\"+date[2]\n elif len(date[2]) < 4 and float(date[2]) < 70:\n yyyy = \"20\"+date[2]\n else:\n yyyy = date[2]\n if len(date[0]) == 1:\n date[0] = \"0\"+date[0]\n if len(date[1]) == 1:\n date[1] = \"0\"+date[1]\n dt = \":\".join([date[0], date[1], yyyy, hour[0], hour[1], \"0\"])\n local = pytz.timezone(\"America/New_York\")\n naive = datetime.datetime.strptime(dt, \"%m:%d:%Y:%H:%M:%S\")\n local_dt = local.localize(naive, is_dst=None)\n utc_dt = local_dt.astimezone(pytz.utc)\n timestamp = utc_dt.strftime(\"%Y-%m-%dT%H:%M:%S\")+\"Z\"\n\n MeasRec['analysts'] = user\n MeasRec[\"citations\"] = \"This study\"\n MeasRec[\"instrument_codes\"] = \"HUJI-2G\"\n MeasRec[\"quality\"] = \"g\"\n MeasRec[\"meas_n_orient\"] = \"1\"\n MeasRec[\"standard\"] = \"u\"\n MeasRec[\"description\"] = \"\"\n\n # ----------------------------------------\n # AF demag\n # do not support AARM yet\n # ----------------------------------------\n\n if demag == \"AF\":\n treatment_type = this_line_data['treatment_type']\n # demag in zero field\n if LPcode != \"LP-AN-ARM\":\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n float(this_line_data['treatment'][0])*1e-3) # peak field in tesla\n 
MeasRec[\"treat_dc_field\"] = '0'\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n if treatment_type == \"N\":\n methcode = \"LP-DIR-AF:LT-NO\"\n elif treatment_type == \"A\":\n methcode = \"LP-DIR-AF:LT-AF-Z\"\n else:\n print(\n \"ERROR in treatment field line %i... exiting until you fix the problem\" % line_no)\n print(this_line_data)\n return False, \"ERROR in treatment field line %i... exiting until you fix the problem\" % line_no\n # AARM experiment\n else:\n print(\"AARM in HUJI format is not supported yet, sorry... \")\n MeasRec[\"method_codes\"] = methcode\n MeasRec[\"experiment\"] = specimen + \":\" + LPcode\n MeasRec[\"treat_step_num\"] = \"%i\" % i\n MeasRec[\"description\"] = \"\"\n\n MeasRecs.append(MeasRec)\n\n # ----------------------------------------\n # Thermal:\n # Thellier experiment: \"IZ\", \"ZI\", \"IZZI\", pTRM checks\n # Thermal demag\n # Thermal cooling rate experiment\n # Thermal NLT\n # ----------------------------------------\n\n if demag == \"T\":\n\n treatment = this_line_data['treatment']\n treatment_type = this_line_data['treatment_type']\n\n # ----------------------------------------\n # Thellier experiment\n # ----------------------------------------\n\n if LPcode == \"LP-PI-TRM\": # Thellier experiment\n MeasRec[\"experiment\"] = specimen + \":\" + LPcode\n methcode = LPcode\n if treatment_type == \"N\" or ((treatment[1] == '0' or treatment[1] == '00') and float(treatment[0]) == 0):\n LT_code = \"LT-NO\"\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n MeasRec[\"treat_dc_field\"] = '0'\n MeasRec[\"treat_temp\"] = '273.'\n\n elif treatment[1] == '0' or treatment[1] == '00':\n LT_code = \"LT-T-Z\"\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (0)\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatment[0])+273.) 
# temp in kelvin\n\n # check if this is ZI or IZ:\n # check if the same temperature already measured:\n methcode = \"LP-PI-TRM:LP-PI-TRM-ZI\"\n for j in range(0, i):\n if Data[specimen][j]['treatment'][0] == treatment[0]:\n if Data[specimen][j]['treatment'][1] == '1' or Data[specimen][j]['treatment'][1] == '10':\n methcode = \"LP-PI-TRM:LP-PI-TRM-IZ\"\n else:\n methcode = \"LP-PI-TRM:LP-PI-TRM-ZI\"\n\n elif treatment[1] == '1' or treatment[1] == '10':\n LT_code = \"LT-T-I\"\n # labfield in tesla (convert from microT)\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (\n phi) # labfield phi\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (\n theta) # labfield theta\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatment[0])+273.) # temp in kelvin\n\n # check if this is ZI or IZ:\n # check if the same temperature already measured:\n methcode = \"LP-PI-TRM:LP-PI-TRM-IZ\"\n for j in range(0, i):\n if Data[specimen][j]['treatment'][0] == treatment[0]:\n if Data[specimen][j]['treatment'][1] == '0' or Data[specimen][j]['treatment'][1] == '00':\n methcode = \"LP-PI-TRM:LP-PI-TRM-ZI\"\n else:\n methcode = \"LP-PI-TRM:LP-PI-TRM-IZ\"\n elif treatment[1] == '2' or treatment[1] == '20':\n LT_code = \"LT-PTRM-I\"\n # labfield in tesla (convert from microT)\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (\n phi) # labfield phi\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (\n theta) # labfield theta\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatment[0])+273.) # temp in kelvin\n methcode = \"LP-PI-TRM:LP-PI-TRM-IZ\"\n\n else:\n print(\n \"ERROR in treatment field line %i... exiting until you fix the problem\" % line_no)\n return False, \"ERROR in treatment field line %i... 
exiting until you fix the problem\" % line_no\n                MeasRec[\"method_codes\"] = LT_code+\":\"+methcode\n                MeasRec[\"treat_step_num\"] = \"%i\" % i\n                MeasRec[\"description\"] = \"\"\n                MeasRecs.append(MeasRec)\n\n            # ----------------------------------------\n            # demag experiment\n            # ----------------------------------------\n\n            if LPcode == \"LP-DIR-T\":\n                MeasRec[\"experiment\"] = specimen + \":\" + LPcode\n                methcode = LPcode\n                if treatment_type == \"N\":\n                    LT_code = \"LT-NO\"\n                else:\n                    LT_code = \"LT-T-Z\"\n                    methcode = LPcode+\":\"+\"LT-T-Z\"\n                MeasRec[\"treat_dc_field_phi\"] = '0'\n                MeasRec[\"treat_dc_field_theta\"] = '0'\n                MeasRec[\"treat_dc_field\"] = '%8.3e' % (0)\n                MeasRec[\"treat_temp\"] = '%8.3e' % (\n                    float(treatment[0])+273.)  # temp in kelvin\n                MeasRec[\"method_codes\"] = LT_code+\":\"+methcode\n                MeasRec[\"treat_step_num\"] = \"%i\" % i\n                MeasRec[\"description\"] = \"\"\n                MeasRecs.append(MeasRec)\n                # continue\n\n            # ----------------------------------------\n            # ATRM measurements\n            # The direction of the magnetization is used to determine the\n            # direction of the lab field.\n            # ----------------------------------------\n\n            if LPcode == \"LP-AN-TRM\":\n                MeasRec[\"experiment\"] = specimen + \":\" + LPcode\n                methcode = LPcode\n\n                if float(treatment[1]) == 0:\n                    MeasRec[\"method_codes\"] = \"LP-AN-TRM:LT-T-Z\"\n                    MeasRec[\"treat_dc_field_phi\"] = '0'\n                    MeasRec[\"treat_dc_field_theta\"] = '0'\n                    MeasRec[\"treat_temp\"] = '%8.3e' % (\n                        float(treatment[0])+273.) 
# temp in kelvin\n                    MeasRec[\"treat_dc_field\"] = '0'\n                else:\n                    if float(treatment[1]) == 7:\n                        # alteration check\n                        methcode = \"LP-AN-TRM:LT-PTRM-I\"\n                        MeasRec[\"treat_step_num\"] = '7'  # -z\n                    else:\n                        MeasRec[\"method_codes\"] = \"LP-AN-TRM:LT-T-I\"\n                        inc = float(MeasRec[\"dir_inc\"])\n                        dec = float(MeasRec[\"dir_dec\"])\n                        if abs(inc) < 45 and (dec < 45 or dec > 315):  # +x\n                            tdec, tinc = 0, 0\n                            MeasRec[\"treat_step_num\"] = '1'\n                        if abs(inc) < 45 and (dec < 135 and dec > 45):\n                            tdec, tinc = 90, 0\n                            MeasRec[\"treat_step_num\"] = '2'  # +y\n                        if inc > 45:\n                            tdec, tinc = 0, 90\n                            MeasRec[\"treat_step_num\"] = '3'  # +z\n                        if abs(inc) < 45 and (dec < 225 and dec > 135):\n                            tdec, tinc = 180, 0\n                            MeasRec[\"treat_step_num\"] = '4'  # -x\n                        if abs(inc) < 45 and (dec < 315 and dec > 225):\n                            tdec, tinc = 270, 0\n                            MeasRec[\"treat_step_num\"] = '5'  # -y\n                        if inc < -45:\n                            tdec, tinc = 0, -90\n                            MeasRec[\"treat_step_num\"] = '6'  # -z\n\n                    MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (tdec)\n                    MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (tinc)\n                    MeasRec[\"treat_temp\"] = '%8.3e' % (\n                        float(treatment[0])+273.)  # temp in kelvin\n                    MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n                MeasRec[\"description\"] = \"\"\n                MeasRecs.append(MeasRec)\n                # continue\n\n            # ----------------------------------------\n            # NLT measurements\n            # or TRM acquisition experiment\n            # ----------------------------------------\n\n            if LPcode == \"LP-TRM\":\n                MeasRec[\"experiment\"] = specimen + \":\" + LPcode\n                MeasRec[\"method_codes\"] = \"LP-TRM:LT-T-I\"\n                if float(treatment[1]) == 0:\n                    labfield = 0\n                else:\n                    labfield = float(treatment[1])*1e-6\n                MeasRec[\"treat_temp\"] = '%8.3e' % (\n                    float(treatment[0])+273.) 
# temp in kelvin\n # labfield in tesla (convert from microT)\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (\n phi) # labfield phi\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (\n theta) # labfield theta\n MeasRec[\"treat_step_num\"] = \"%i\" % i\n MeasRec[\"description\"] = \"\"\n MeasRecs.append(MeasRec)\n # continue\n\n # ----------------------------------------\n # Cooling rate experiments\n # ----------------------------------------\n\n if LPcode == \"LP-CR-TRM\":\n index = int(treatment[1][0])\n # print index,\"index\"\n # print CR_cooling_times,\"CR_cooling_times\"\n # print CR_cooling_times[index-1]\n # print CR_cooling_times[0:index-1]\n if index == 7 or index == 70: # alteration check as final measurement\n meas_type = \"LT-PTRM-I:LP-CR-TRM\"\n CR_cooling_time = CR_cooling_times[-1]\n else:\n meas_type = \"LT-T-I:LP-CR-TRM\"\n CR_cooling_time = CR_cooling_times[index-1]\n MeasRec[\"method_codes\"] = meas_type\n MeasRec[\"experiment\"] = specimen + \":\" + LPcode\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatment[0])+273.) 
# temp in kelvin\n # labfield in tesla (convert from microT)\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (\n phi) # labfield phi\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (\n theta) # labfield theta\n MeasRec[\"treat_step_num\"] = \"%i\" % index\n MeasRec[\"description\"] = \"cooling_rate\" + \\\n \":\"+CR_cooling_time+\":\"+\"K/min\"\n #MeasRec[\"description\"]=\"%.1f minutes per cooling time\"%int(CR_cooling_time)\n MeasRecs.append(MeasRec)\n # continue\n\n con = cb.Contribution(dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.tables['specimens'].write_magic_file(custom_name=spec_file,dir_path=dir_path)\n con.tables['samples'].write_magic_file(custom_name=samp_file,dir_path=dir_path)\n con.tables['sites'].write_magic_file(custom_name=site_file,dir_path=dir_path)\n con.tables['locations'].write_magic_file(custom_name=loc_file,dir_path=dir_path)\n con.tables['measurements'].write_magic_file(custom_name=meas_file,dir_path=dir_path)\n\n return True, meas_file", "response": "Convert HUJI format file to MagIC file"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef huji_sample(orient_file, meths='FS-FD:SO-POM:SO-SUN', location_name='unknown',\n samp_con=\"1\", ignore_dip=True, data_model_num=3,\n samp_file=\"samples.txt\", site_file=\"sites.txt\",\n dir_path=\".\", input_dir_path=\"\"):\n \"\"\"\n Convert HUJI sample file to MagIC file(s)\n\n Parameters\n ----------\n orient_file : str\n input file name\n meths : str\n colon-delimited sampling methods, default FS-FD:SO-POM:SO-SUN\n for 
more options, see info below\n    location_name : str\n        location name, default \"unknown\"\n    samp_con : str\n        sample/site naming convention, default '1', see info below\n    ignore_dip : bool\n        set sample az/dip to 0, default True\n    data_model_num : int\n        MagIC data model 2 or 3, default 3\n    samp_file : str\n        sample file name to output (default : samples.txt)\n    site_file : str\n        site file name to output (default : sites.txt)\n    dir_path : str\n        output directory, default \".\"\n    input_dir_path : str\n        input file directory IF different from dir_path, default \"\"\n\n    Returns\n    --------\n    type - Tuple : (True or False indicating if conversion was successful, file name written)\n\n    Info\n    --------\n    Sampling method codes:\n        FS-FD field sampling done with a drill\n        FS-H field sampling done with hand samples\n        FS-LOC-GPS  field location done with GPS\n        FS-LOC-MAP  field location done with map\n        SO-POM   a Pomeroy orientation device was used\n        SO-ASC   an ASC orientation device was used\n        SO-MAG   orientation with magnetic compass\n\n    Sample naming convention:\n        [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n            is the single character sample designation.  e.g., TG001a is the\n            first sample from site TG001. 
[default]\n        [2] XXXX-YY: YY sample from site XXXX (XXXX, YY of arbitrary length)\n        [3] XXXX.YY: YY sample from site XXXX (XXXX, YY of arbitrary length)\n        [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXXX\n        [5] site name = sample name\n        [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n        [7-Z] [XXX]YYY:  XXX is site designation with Z characters from samples  XXXYYY\n\n\n    \"\"\"\n\n    try:\n        samp_con, Z = samp_con.split(\"-\")\n    except ValueError:\n        samp_con = samp_con\n        Z = 1\n\n    version_num = pmag.get_version()\n    if data_model_num == 2:\n        loc_col = \"er_location_name\"\n        site_col = \"er_site_name\"\n        samp_col = \"er_sample_name\"\n        citation_col = \"er_citation_names\"\n        class_col = \"site_class\"\n        lithology_col = \"site_lithology\"\n        definition_col = \"site_definition\"\n        type_col = \"site_type\"\n        sample_bed_dip_direction_col = \"sample_bed_dip_direction\"\n        sample_bed_dip_col = \"sample_bed_dip\"\n        site_bed_dip_direction_col = \"site_bed_dip_direction\"\n        site_bed_dip_col = \"site_bed_dip\"\n        sample_dip_col = \"sample_dip\"\n        sample_az_col = \"sample_azimuth\"\n        sample_lat_col = \"sample_lat\"\n        sample_lon_col = \"sample_lon\"\n        site_lat_col = \"site_lat\"\n        site_lon_col = \"site_lon\"\n        meth_col = \"magic_method_codes\"\n        software_col = \"magic_software_packages\"\n    else:\n        loc_col = \"location\"\n        site_col = \"site\"\n        samp_col = \"sample\"\n        citation_col = \"citations\"\n        class_col = \"class\"\n        lithology_col = \"lithology\"\n        definition_col = \"definition\"\n        type_col = \"type\"\n        sample_bed_dip_direction_col = 'bed_dip_direction'\n        sample_bed_dip_col = 'bed_dip'\n        site_bed_dip_direction_col = 'bed_dip_direction'\n        site_bed_dip_col = \"bed_dip\"\n        sample_dip_col = \"dip\"\n        sample_az_col = \"azimuth\"\n        sample_lat_col = \"lat\"\n        sample_lon_col = \"lon\"\n        site_lat_col = \"lat\"\n        site_lon_col = \"lon\"\n        meth_col = \"method_codes\"\n        software_col = \"software_packages\"\n\n    
input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)\n samp_file = pmag.resolve_file_name(samp_file, dir_path)\n site_file = pmag.resolve_file_name(site_file, dir_path)\n orient_file = pmag.resolve_file_name(orient_file, input_dir_path)\n print(\"-I- reading in: {}\".format(orient_file))\n #\n # read in file to convert\n #\n with open(orient_file, 'r') as azfile:\n AzDipDat = azfile.readlines()\n SampOut = []\n SiteOut = []\n for line in AzDipDat[1:]:\n orec = line.split()\n if len(orec) > 1:\n labaz, labdip = pmag.orient(float(orec[1]), float(orec[2]), '3')\n bed_dip_dir = (orec[3])\n bed_dip = (orec[4])\n SampRec = {}\n SiteRec = {}\n SampRec[loc_col] = location_name\n SampRec[citation_col] = \"This study\"\n SiteRec[loc_col] = location_name\n SiteRec[citation_col] = \"This study\"\n SiteRec[class_col] = \"\"\n SiteRec[lithology_col] = \"\"\n SiteRec[type_col] = \"\"\n SiteRec[definition_col] = \"s\"\n #\n # parse information common to all orientation methods\n #\n SampRec[samp_col] = orec[0]\n SampRec[sample_bed_dip_direction_col] = orec[3]\n SampRec[sample_bed_dip_col] = orec[4]\n SiteRec[site_bed_dip_direction_col] = orec[3]\n SiteRec[site_bed_dip_col] = orec[4]\n if not ignore_dip:\n SampRec[sample_dip_col] = '%7.1f' % (labdip)\n SampRec[sample_az_col] = '%7.1f' % (labaz)\n else:\n SampRec[sample_dip_col] = '0'\n SampRec[sample_az_col] = '0'\n SampRec[sample_lat_col] = orec[5]\n SampRec[sample_lon_col] = orec[6]\n SiteRec[site_lat_col] = orec[5]\n SiteRec[site_lon_col] = orec[6]\n SampRec[meth_col] = meths\n # parse out the site name\n site = pmag.parse_site(orec[0], samp_con, Z)\n SampRec[site_col] = site\n SampRec[software_col] = version_num\n SiteRec[site_col] = site\n SiteRec[software_col] = version_num\n SampOut.append(SampRec)\n SiteOut.append(SiteRec)\n if data_model_num == 2:\n pmag.magic_write(samp_file, SampOut, \"er_samples\")\n pmag.magic_write(site_file, SiteOut, \"er_sites\")\n else:\n pmag.magic_write(samp_file, SampOut, 
\"samples\")\n pmag.magic_write(site_file, SiteOut, \"sites\")\n\n print(\"Sample info saved in \", samp_file)\n print(\"Site info saved in \", site_file)\n return True, samp_file", "response": "Convert HUJI sample file to MagIC file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef iodp_dscr_lore(dscr_file,dscr_ex_file=\"\", dir_path=\".\", input_dir_path=\"\",volume=7,noave=False,\\\n meas_file=\"measurements.txt\", offline_meas_file=\"\",spec_file=\"specimens.txt\"):\n \"\"\"\n Convert IODP discrete measurement files in MagIC file(s). This program\n assumes that you have created the specimens, samples, sites and location\n files using convert_2_magic.iodp_samples_csv from files downloaded from the LIMS online\n repository and that all samples are in that file.\n\n If there are offline treatments, you will also need the extended version of the SRM discrete\n download file from LORE.\n\n Parameters\n ----------\n dscr_file : str\n input csv file downloaded from LIMS online repository\n dscr_ex_file : str\n input extended csv file downloaded from LIMS online repository\n dir_path : str\n output directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n meas_file : str\n output measurement file name, default \"measurements.txt\"\n offline_meas_file : str\n output measurement file for offline measurements , default \"\". must be specified if dscr_ex_file supplied\n spec_file : str\n specimens file name created by, for example, convert_2_magic.iodp_samples_csv, default \"specimens.txt\"\n file should already be in dir_path\n volume : float\n volume in cm^3 assumed during measurement on SRM. 
The so-called \"Japanese\" cubes have a volume of 7cc\n    noave : Boolean\n         if False, average replicate measurements\n\n    Returns\n    --------\n    type - Tuple : (True or False indicating if conversion was successful, meas_file name written)\n    \"\"\"\n    # initialize defaults\n    version_num = pmag.get_version()\n    # format variables\n    input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n    # convert cc to m^3\n    volume = volume * 1e-6\n    meas_reqd_columns=['specimen','measurement','experiment','sequence','quality','method_codes',\\\n                       'instrument_codes','citations',\\\n                       'treat_temp','treat_ac_field','treat_dc_field',\\\n                       'treat_dc_field_phi','treat_dc_field_theta','meas_temp',\\\n                       'dir_dec','dir_inc','magn_moment','magn_volume',\\\n                       'description','timestamp','software_packages',\\\n                       'external_database_ids','treat_step_num','meas_n_orient']\n    dscr_file = pmag.resolve_file_name(dscr_file, input_dir_path)\n    if dscr_ex_file:dscr_ex_file = pmag.resolve_file_name(dscr_ex_file, input_dir_path)\n    spec_file = pmag.resolve_file_name(spec_file, dir_path)\n    specimens_df=pd.read_csv(spec_file,sep='\\t',header=1)\n    if len(specimens_df)==0:\n        print ('you must download and process the samples table from LORE prior to using this')\n        print ('see convert_2_magic.iodp_samples_csv for help')\n        return False\n    LORE_specimens=list(specimens_df.specimen.unique())\n    in_df=pd.read_csv(dscr_file)\n    in_df['offline_treatment']=\"\"\n    if dscr_ex_file:\n        ex_df=pd.read_csv(dscr_ex_file)\n        ex_df['Test No.']=ex_df['test test_number']\n        ex_df['offline_treatment']=ex_df['comments']\n        ex_df=ex_df[['Test No.','offline_treatment']]\n        in_df=in_df.merge(ex_df,on='Test No.')\n        in_df['offline_treatment']=in_df['offline_treatment_y']\n        in_df['offline_treatment'].fillna(\"\",inplace=True)\n        in_df.drop_duplicates(inplace=True)\n    if len(in_df)==0:\n        print ('you must download a csv file from the LIMS database and place it in your input_dir_path')\n        return False\n    in_df.sort_values(by='Test 
No.',inplace=True)\n in_df.reset_index(inplace=True)\n measurements_df=pd.DataFrame(columns=meas_reqd_columns)\n meas_out = os.path.join(output_dir_path, meas_file)\n if offline_meas_file:\n offline_meas_out = os.path.join(output_dir_path, offline_meas_file)\n if dscr_ex_file and not offline_meas_file:\n print (\"You must specify an output file for the offline measurements with dscr_ex_file\")\n return\n hole,srm_specimens=iodp_sample_names(in_df)\n for spec in list(srm_specimens.unique()):\n if spec not in LORE_specimens:\n print (' -W- ',spec, ' not found in specimens table ')\n print ( 'check your sample name or add to specimens table by hand\\n')\n # set up defaults\n measurements_df['specimen']=srm_specimens\n measurements_df['offline_treatment']=in_df['offline_treatment']\n measurements_df['sequence']=in_df['Test No.']\n measurements_df['offline_list']=\"\"\n measurements_df['quality']='g'\n measurements_df['citations']='This study'\n measurements_df['meas_temp']=273\n measurements_df['software_packages']=version_num\n measurements_df[\"treat_temp\"] = '%8.3e' % (273) # room temp in kelvin\n measurements_df[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n measurements_df['meas_n_orient']=1 # at least one orientation\n measurements_df[\"treat_ac_field\"] = '0'\n measurements_df[\"treat_dc_field\"] = '0'\n measurements_df[\"treat_dc_field_phi\"] = '0'\n measurements_df[\"treat_dc_field_theta\"] = '0'\n measurements_df[\"treat_step_num\"] = '1'\n measurements_df[\"standard\"] = 'u' # assume all data are \"good\"\n measurements_df[\"dir_csd\"] = '0' # assume all data are \"good\"\n measurements_df[\"method_codes\"] = 'LT-NO' # assume all are NRMs\n measurements_df['instrument_codes']=\"IODP-SRM\" # assume all measurements on shipboard 2G\n measurements_df['timestamp']=pd.to_datetime(\\\n in_df['Timestamp (UTC)']).dt.strftime(\"%Y-%m-%dT%H:%M:%S\")+'Z'\n measurements_df['dir_dec']=in_df['Declination background & drift corrected (deg)'] # declination\n 
measurements_df['dir_inc']=in_df['Inclination background & drift corrected (deg)'] # inclination\n measurements_df['magn_volume']=in_df['Intensity background & drift corrected (A/m)'] # magnetization\n measurements_df['magn_moment']=measurements_df['magn_volume']*volume # moment in Am^2\n measurements_df['description']=in_df['Treatment Type'] # temporary column\n measurements_df['treat_ac_field']=in_df['Treatment Value']*1e-3 # assume all treatments are AF\n measurements_df.loc[measurements_df['description']=='IN-LINE AF DEMAG',\\\n 'method_codes']='LT-AF-Z'\n measurements_df.loc[measurements_df['description']=='IN-LINE AF DEMAG',\\\n 'instrument_codes']='IODP-SRM:IODP-SRM-AF'\n measurements_df['external_database_ids']='LORE['+in_df['Test No.'].astype('str')+']'\n measurements_df.fillna(\"\",inplace=True)\n measurements_df.sort_values(by='sequence',inplace=True)\n if dscr_ex_file:\n meas_df=measurements_df[measurements_df.offline_treatment==\"\"] # all the records with no offline treatments\n offline_df=pd.DataFrame(columns=meas_df.columns) # make a container for offline measurements\n arm_df=measurements_df[measurements_df['offline_treatment'].str.contains('ARM')]\n if len(arm_df)>0: # there are ARM treatment steps\n arm_df['offline_list']=arm_df['offline_treatment'].str.split(\":\")\n arm_list=arm_df.specimen.unique()\n for spc in arm_list: # get all the ARM treated specimens\n spc_df=arm_df[arm_df.specimen.str.match(spc)] # get all the measurements for this specimen\n seq_no=spc_df[spc_df.specimen.str.match(spc)].sequence.values[0] # get the sequence number of the ARM step\n end_seq_no=spc_df[spc_df.specimen.str.match(spc)].sequence.values[-1] # get the sequence number of the last ARM demag step\n arm_df.loc[arm_df.sequence==seq_no,'method_codes']='LT-AF-I:LP-ARM-AFD' # label the ARM record\n arm_df.loc[arm_df.sequence==seq_no,'experiment']=spc+'_LT-AF-I_LT-AF-Z_LP-ARM-AFD' # label the ARM record\n 
arm_df.loc[arm_df.sequence==seq_no,'treat_ac_field']=arm_df['offline_list'].str.get(1).astype('float')*1e-3 # AF peak field in mT converted to tesla\n arm_df.loc[arm_df.sequence==seq_no,'treat_dc_field']=arm_df['offline_list'].str.get(2).astype('float')*1e-3 # AF peak field in mT converted to tesla\n arm_df.loc[arm_df.sequence==seq_no,'instrument_codes']='IODP-SRM:IODP-DTECH'\n arm_df.loc[(arm_df.specimen.str.match(spc)) &\n (arm_df.sequence>seq_no) &\n (arm_df.sequence<=end_seq_no),'method_codes']= 'LT-AF-Z:LP-ARM-AFD'\n arm_df.loc[(arm_df.specimen.str.match(spc)) &\n (arm_df.sequence>seq_no) &\n (arm_df.sequence<=end_seq_no),'experiment']= spc+'LT-AF-I_LT-AF-Z_LP-ARM-AFD'\n arm_df.loc[(arm_df.specimen.str.match(spc)) &\n (arm_df.sequence>seq_no) &\n (arm_df.sequence<=end_seq_no),'instrument_codes']= 'IODP-SRM:IODP-SRM-AF'\n strings=[]\n for i in range(len(arm_df)):strings.append(str(i))\n arm_df['measurement']=arm_df['experiment']+strings\n arm_df['description']=arm_df['offline_treatment']\n offline_df=pd.concat([offline_df,arm_df]) # put the arm data into the offline dataframe\n irm_in_df=measurements_df[measurements_df['offline_treatment'].str.contains('IRM')]\n if len(irm_in_df)>0: # there are IRM treatment steps\n irm_in_df['offline_list']=irm_in_df['offline_treatment'].str.split(\":\")\n irm_list=irm_in_df.specimen.unique()\n irm_out_df=pd.DataFrame(columns=irm_in_df.columns) # make an output container\n for spc in irm_list: # get all the IRM treated specimens\n # first do IRM acquisition steps\n spc_df=irm_in_df[irm_in_df.specimen.str.match(spc)] # get all the measurements for this specimen\n spc_acq_df=spc_df[spc_df.treat_ac_field==0] # IRM acquisition step\n spc_acq_df['method_codes']='LT-IRM:LP-IRM' # label the IRM records\n spc_acq_df['experiment']=spc+'_LT-IRM_LP-IRM' # label the IRM experiment\n spc_acq_df['treat_dc_field']=spc_acq_df['offline_list'].str.get(1).astype('float')*1e-3 # IRM field in mT converted to tesla\n 
spc_acq_df['instrument_codes']='IODP-SRM:IODP-IM-10'\n # do the AF demag of the IRM\n sirm_seq_no=spc_acq_df[spc_acq_df.specimen.str.match(spc)].sequence.values[-1] # get the sequence number of the last IRM step\n spc_afd_df=spc_df[(spc_df.treat_ac_field!=0)] #\n spc_afd_df['method_codes']= 'LP-IRM:LP-IRM-AFD'\n spc_afd_df['experiment']= spc+'LP-IRM:LP-IRM-AFD'\n spc_afd_df['instrument_codes']= 'IODP-SRM:IODP-SRM-AFD'\n irm_out_df=pd.concat([irm_out_df,spc_acq_df,spc_afd_df])\n strings=[]\n for i in range(len(irm_out_df)):strings.append(str(i))\n irm_out_df['measurement']=irm_out_df['experiment']+strings\n irm_out_df['description']=irm_out_df['offline_treatment']\n offline_df=pd.concat([offline_df,irm_out_df]) # put the irm data into the offline dataframe\n if dscr_ex_file:\n offline_df.drop(columns=['offline_list'],inplace=True)\n offline_df.drop(columns=['offline_treatment'],inplace=True)\n offline_df.sort_values(by='sequence',inplace=True)\n offline_df.drop_duplicates(subset=['sequence'],inplace=True)\n offline_df.fillna(\"\",inplace=True)\n offline_dicts = offline_df.to_dict('records')\n pmag.magic_write(offline_meas_out, offline_dicts, 'measurements')\n measurements_df=meas_df # put all the non-offline treatments back into measurements_df\n if 'offline_treatment' in measurements_df.columns:\n measurements_df.drop(columns=['offline_treatment'],inplace=True)\n if 'offline_list' in measurements_df.columns:\n measurements_df.drop(columns=['offline_list'],inplace=True)\n measurements_df.sort_values(by='sequence',inplace=True)\n measurements_df.drop_duplicates(subset=['sequence'],inplace=True)\n measurements_df['treat_step_num']=measurements_df['sequence']\n measurements_df.fillna(\"\",inplace=True)\n meas_dicts = measurements_df.to_dict('records')\n meas_dicts=pmag.measurements_methods3(meas_dicts,noave=noave)\n pmag.magic_write(meas_out, meas_dicts, 'measurements')\n\n return True", "response": "Convert IODP discrete measurement files from LORE to MagIC file."} 
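The offline-treatment handling in `iodp_dscr_lore` above hinges on splitting colon-delimited comment strings (e.g. an ARM comment carrying the treatment type, AF peak field, and DC bias) and converting mT values to tesla. A minimal standalone sketch of that pattern, with an invented comment string and column names for illustration only (not the LORE schema):

```python
import pandas as pd

# Hypothetical offline-treatment comments: "ARM:<AF peak in mT>:<DC bias in mT>"
df = pd.DataFrame({"offline_treatment": ["ARM:80:0.05", "IRM:1000", ""]})

# Split each comment into a list, as the converter does for its 'offline_list' column
df["offline_list"] = df["offline_treatment"].str.split(":")

# Select only the ARM rows, mirroring the str.contains('ARM') filter
arm = df[df["offline_treatment"].str.contains("ARM")].copy()

# Field values arrive in mT; MagIC stores tesla, hence the 1e-3 factor
arm["treat_ac_field"] = arm["offline_list"].str.get(1).astype(float) * 1e-3

print(arm["treat_ac_field"].tolist())
```

The empty-string row survives `str.contains` as `False` rather than `NaN` because the column has no missing values, which is why the converter calls `fillna("")` on `offline_treatment` before filtering.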
{"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef iodp_jr6_lore(jr6_file, dir_path=\".\", input_dir_path=\"\",volume=7,noave=False,dc_field=50e-6,\\\n meas_file=\"measurements.txt\", spec_file=\"specimens.txt\"):\n \"\"\"\n Convert IODP JR6 measurement files in MagIC file(s). This program\n assumes that you have created the specimens, samples, sites and location\n files using convert_2_magic.iodp_samples_csv from files downloaded from the LIMS online\n repository and that all samples are in that file.\n\n Parameters\n ----------\n jr6_file : str\n input csv file downloaded from LIMS online repository\n dir_path : str\n output directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n meas_file : str\n output measurement file name, default \"measurements.txt\"\n spec_file : str\n specimens file name created by, for example, convert_2_magic.iodp_samples_csv, default \"specimens.txt\"\n file should already be in dir_path\n volume : float\n volume in cm^3 assumed during measurement on SRM. 
The so-called \"Japanese\" cubes have a volume of 7cc\n    noave : Boolean\n         if False, average replicate measurements\n    dc_field : float\n         if ARM measurements are in the file, this was the DC bias field applied\n\n    Returns\n    --------\n    type - Tuple : (True or False indicating if conversion was successful, meas_file name written)\n    \"\"\"\n    # initialize defaults\n    version_num = pmag.get_version()\n    # format variables\n    input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n    # convert cc to m^3\n    volume = volume * 1e-6\n    meas_reqd_columns=['specimen','measurement','experiment','sequence','quality','method_codes',\\\n                       'instrument_codes','citations',\\\n                       'treat_temp','treat_ac_field','treat_dc_field',\\\n                       'treat_dc_field_phi','treat_dc_field_theta','meas_temp',\\\n                       'dir_dec','dir_inc','magn_moment','magn_volume',\\\n                       'description','timestamp','software_packages',\\\n                       'external_database_ids','treat_step_num']\n    jr6_file = pmag.resolve_file_name(jr6_file, input_dir_path)\n    spec_file = pmag.resolve_file_name(spec_file, dir_path)\n    specimens_df=pd.read_csv(spec_file,sep='\\t',header=1)\n    if len(specimens_df)==0:\n        print ('you must download and process the samples table from LORE prior to using this')\n        print ('see convert_2_magic.iodp_samples_csv for help')\n        return False\n    LORE_specimens=list(specimens_df.specimen.unique())\n    measurements_df=pd.DataFrame(columns=meas_reqd_columns)\n    meas_out = os.path.join(output_dir_path, meas_file)\n    citations = \"This study\"\n    in_df=pd.read_csv(jr6_file)\n    if len(in_df)==0:\n        print ('you must download a csv file from the LIMS database and place it in your input_dir_path')\n        return False\n    in_df.sort_values(by='Treatment value (mT or \u00b0C or ex. 
B14)',inplace=True)\n hole,jr6_specimens=iodp_sample_names(in_df)\n for spec in list(jr6_specimens.unique()):\n if spec not in LORE_specimens:\n print (spec, ' not found in sample table')\n # set up defaults\n measurements_df['specimen']=jr6_specimens\n measurements_df['quality']='g'\n measurements_df['citations']='This study'\n measurements_df['meas_temp']=273\n measurements_df['software_packages']=version_num\n measurements_df[\"treat_temp\"] = '%8.3e' % (273) # room temp in kelvin\n measurements_df[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n measurements_df[\"treat_ac_field\"] = '0'\n measurements_df[\"treat_dc_field\"] = '0'\n measurements_df[\"treat_dc_field_phi\"] = '0'\n measurements_df[\"treat_dc_field_theta\"] = '0'\n measurements_df[\"treat_step_num\"] = 1\n measurements_df[\"standard\"] = 'u' # assume all data are \"good\"\n measurements_df[\"dir_csd\"] = '0' # assume all data are \"good\"\n measurements_df[\"method_codes\"] = 'LT-NO' # assume all are NRMs\n measurements_df['instrument_codes']=\"IODP-JR6\" # assume all measurements on shipboard JR6\n measurements_df['timestamp']=pd.to_datetime(\\\n in_df['Timestamp (UTC)']).dt.strftime(\"%Y-%m-%dT%H:%M:%S\")+'Z'\n # use the sample coordinates, assume was placed correctly in the JR6 and use az=0,pl=-90 to\n # put into 'geographic' coordinates\n incs=in_df['Inclination (\u00b0)'].values\n decs=in_df['Declination (\u00b0)'].values\n azs=np.ones(len(decs))*0\n pls=np.ones(len(decs))*-90\n geo=np.vstack((decs,incs,azs,pls)).transpose()\n drot,irot=pmag.dogeo_V(geo)\n\n measurements_df['dir_dec']=drot # declination\n measurements_df['dir_inc']=irot # inclination\n measurements_df['magn_volume']=in_df['Intensity raw (A/m)'] # magnetization\n measurements_df['magn_moment']=measurements_df['magn_volume']*volume # moment in Am^2\n measurements_df['description']=in_df['Treatment type']\n measurements_df['treat_ac_field']=in_df['Treatment value (mT or \u00b0C or ex. 
B14)']*1e-3 # initialize all treatments to AF, converting mT to tesla\n    measurements_df.loc[measurements_df['description']=='AD',\\\n                        'method_codes']='LT-AF-Z'\n    measurements_df.loc[measurements_df['description']=='AD',\\\n                        'instrument_codes']='IODP-JR6:IODP-D2000'\n    measurements_df.loc[measurements_df['description']=='ARM',\\\n                        'method_codes']='LT-AF-I'\n    measurements_df.loc[measurements_df['description']=='ARM',\\\n                        'instrument_codes']='IODP-JR6:IODP-D2000'\n    measurements_df.loc[measurements_df['description']=='ARM',\"treat_dc_field\"] = dc_field #\n    measurements_df['external_database_ids']='LORE['+in_df['Test No.'].astype('str')+']'\n# add these later when controlled vocabs implemented\n    measurements_df.loc[measurements_df['description']=='TD','method_codes']='LT-T-Z'\n    measurements_df.loc[measurements_df['description']=='TD',\\\n                        'instrument_codes']='IODP-SRM:IODP-TDS'\n    measurements_df.loc[measurements_df['description']=='TD',\"treat_temp\"] =\\\n        in_df['Treatment value (mT or \u00b0C or ex. B14)']+273\n    #measurements_df.loc[measurements_df['description']=='IRM','method_codes']='LT-IRM'\n    #measurements_df.loc[measurements_df['description']=='IRM',\\\n    #                    'instrument_codes']='IODP-SRM:IODP-IRM'\n\n    measurements_df.fillna(\"\",inplace=True)\n    meas_dicts = measurements_df.to_dict('records')\n    meas_dicts=pmag.measurements_methods3(meas_dicts,noave=noave)\n    pmag.magic_write(meas_out, meas_dicts, 'measurements')\n\n    return True", "response": "Convert IODP JR6 measurement files from LORE to MagIC file."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting a kly4s. 
txt file from the LIMS online LORE downloaded sample file to MagIC files.", "response": "def iodp_kly4s_lore(kly4s_file, meas_out='measurements.txt',\n                    spec_infile='specimens.txt', spec_out='specimens.txt', instrument='IODP-KLY4S',\n                    actual_volume=\"\",dir_path='.', input_dir_path=''):\n    \"\"\"\n    Converts ascii files generated by SUFAR ver.4.0 and downloaded from the LIMS online\n    repository to MagIC (datamodel 3) files\n\n    Parameters\n    ----------\n    kly4s_file : str\n        input LORE downloaded csv file, required\n    meas_out : str\n        measurement output filename, default \"measurements.txt\"\n    spec_infile : str\n        specimen infile, default specimens.txt\n        [file created by iodp_samples_csv from LORE downloaded sample file]\n    spec_out : str\n        specimen outfile, default \"specimens.txt\"\n    instrument : str\n        instrument name, default \"IODP-KLY4S\"\n    actual_volume : float\n        the nominal volume is assumed to be 8cc or even 10cc, depending on the shipboard\n        software used, actual_volume is the actual specimen volume in cc\n    dir_path : str\n        output directory, default \".\"\n    input_dir_path : str\n        input file directory IF different from dir_path, default \"\"\n\n    Returns\n    --------\n    type - Tuple : (True or False indicating if conversion was successful, file name written)\n\n    \"\"\"\n    # initialize defaults\n    version_num = pmag.get_version()\n    # format variables\n    input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n    # set up required columns\n    meas_reqd_columns=['specimen','measurement','experiment','sequence','quality','method_codes',\\\n                       'instrument_codes','citations',\\\n                       'treat_temp','treat_ac_field','treat_dc_field',\\\n                       'treat_dc_field_phi','treat_dc_field_theta','meas_temp',\\\n                       'dir_dec','dir_inc','magn_moment','magn_volume',\\\n                       'description','timestamp','software_packages',\\\n                       'external_database_ids','experiments','treat_step_num']\n    spec_reqd_columns=['specimen','sample','result_quality','method_codes','volume',\\\n                       
'specimen_name_alternatives','citations',\\\n 'aniso_type','aniso_s_n_measurements',\\\n 'azimuth','dip','aniso_s_sigma',\\\n 'aniso_s_unit','aniso_tilt_correction',]\n # sort out file paths\n kly4s_file = pmag.resolve_file_name(kly4s_file, input_dir_path)\n spec_out = pmag.resolve_file_name(spec_out, dir_path)\n spec_file=pmag.resolve_file_name(spec_infile, dir_path)\n meas_out = pmag.resolve_file_name(meas_out, dir_path)\n # read in necessary data:\n specs=pd.read_csv(spec_file,sep='\\t',header=1) # read in existing specimen table\n if len(specs)==0:\n print ('you must download and process the samples table from LORE prior to using this')\n print ('see convert_2_magic.iodp_samples_csv for help')\n return False\n LORE_specimens=list(specs.specimen.unique())\n in_df=pd.read_csv(kly4s_file)\n if len(in_df)==0:\n print ('you must download a csv file from the LIMS database and place it in your input_dir_path')\n return False\n measurements_df=pd.DataFrame(columns=meas_reqd_columns)\n specimens_df=pd.DataFrame(columns=spec_reqd_columns)\n hole,kly4s_specimens=iodp_sample_names(in_df)\n for spec in list(kly4s_specimens.unique()):\n if spec not in LORE_specimens:\n print (spec, ' not found in specimen table')\n\n # set up defaults\n # specimens table defaults\n specimens_df['specimen']=kly4s_specimens\n specimens_df['sample']=kly4s_specimens\n specimens_df['result_quality']='g'\n specimens_df['citations']='This study'\n specimens_df['aniso_type']='AMS'\n specimens_df['azimuth']=0\n specimens_df['dip']=0\n specimens_df['aniso_s_n_measurements']=192\n specimens_df['aniso_tilt_correction']=0\n specimens_df['aniso_s_unit']='SI'\n specimens_df['aniso_s_sigma']=''\n specimens_df['method_codes']= \"LP-X:AE-H:LP-AN-MS\"\n specimens_df['experiments']=specimens_df['specimen'].astype('str')+'_'+ \"LP-AN-MS\"\n # measurements table\n measurements_df['specimen']=kly4s_specimens\n measurements_df['quality']='g'\n measurements_df['citations']='This study'\n 
measurements_df['meas_temp']=273\n measurements_df['software_packages']=version_num\n measurements_df[\"treat_temp\"] = '%8.3e' % (273) # room temp in kelvin\n measurements_df[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n measurements_df[\"treat_ac_field\"] = '0'\n measurements_df[\"treat_dc_field\"] = '0'\n measurements_df[\"treat_dc_field_phi\"] = '0'\n measurements_df[\"treat_dc_field_theta\"] = '0'\n measurements_df[\"treat_step_num\"] = '1'\n measurements_df[\"standard\"] = 'u' # assume all specimens are unknowns, not standards\n measurements_df['instrument_codes']=\"IODP-KLY4S\" # assume all measurements on shipboard KLY4S\n measurements_df['description']='Bulk susceptibility measurement'\n measurements_df['method_codes']='LP-X'\n measurements_df['experiment']=measurements_df['specimen'].astype('str')+'_'+\\\n measurements_df['method_codes'].astype('str')\n meas_num=range(len(kly4s_specimens))\n measurements_df['sequence']=meas_num\n measurements_df['measurement']=measurements_df['experiment'].astype('str')+'-'+\\\n measurements_df['sequence'].astype('str')\n# parse the measurement data into columns\n nominal_volume=in_df['Sample volume (CC)']*1e-6 # convert cc to m^3\n if actual_volume:\n actual_volume=(1e-6*actual_volume)\n factor=nominal_volume/actual_volume\n else:\n actual_volume=nominal_volume\n factor=1\n measurements_df['susc_chi_volume']=in_df['Bulk susceptibility(SI)']*factor\n measurements_df['external_database_ids']='LORE['+in_df['Test No.'].astype('str')+']'\n# parse specimen data into columns\n specimens_df['specimen_name_alternatives']=in_df['Text ID']\n specimens_df['volume']=actual_volume\n s1=in_df['Normalized tensor K11']\n s2=in_df['Normalized tensor K22']\n s3=in_df['Normalized tensor K33']\n s4=in_df['Normalized tensor K12']\n s5=in_df['Normalized tensor K23']\n s6=in_df['Normalized tensor K13']\n if 'Standard deviation(SI)' in in_df.columns:\n specimens_df['aniso_s_sigma']=in_df['Standard deviation(SI)']\n #if (s1+s2+s3)>.35: # AGICO format\n # 
s1=s1/3\n # s2=s2/3\n # s3=s3/3\n # s4=s4/3\n # s5=s5/3\n # s6=s6/3\n specimens_df['aniso_s'] = s1.astype('str')+':'+ s2.astype('str')+':'+s3.astype('str')+':'+\\\n s4.astype('str')+':'+ s5.astype('str')+':'+ s6.astype('str')\n tau1=in_df['Kmax susceptibility (SI)']/3\n v1_dec=in_df['Kmax dec (deg)']\n v1_inc=in_df['Kmax inc (deg)']\n specimens_df['aniso_v1']=tau1.astype('str')+\":\"+v1_dec.astype('str')+\":\"+v1_inc.astype('str')\n tau2=in_df['Kint susceptibility (SI)']/3\n v2_dec=in_df['Kint dec (deg)']\n v2_inc=in_df['Kint inc (deg)']\n specimens_df['aniso_v2']=tau2.astype('str')+\":\"+v2_dec.astype('str')+\":\"+v2_inc.astype('str')\n tau3=in_df['Kmin susceptibility (SI)']/3\n v3_dec=in_df['Kmin dec (deg)']\n v3_inc=in_df['Kmin inc (deg)']\n specimens_df['aniso_v3']=tau3.astype('str')+\":\"+v3_dec.astype('str')+\":\"+v3_inc.astype('str')\n\n\n# output data files\n measurements_df.fillna(\"\",inplace=True)\n meas_dicts = measurements_df.to_dict('records')\n\n pmag.magic_write(meas_out, meas_dicts, 'measurements')\n specimens_df.fillna(\"\",inplace=True)\n spec_dicts = specimens_df.to_dict('records')\n pmag.magic_write(spec_out, spec_dicts, 'specimens')\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts IODP samples data file into MagIC samples file.", "response": "def iodp_samples(samp_file, output_samp_file=None, output_dir_path='.',\n input_dir_path='', data_model_num=3):\n \"\"\"\n Convert IODP samples data file into MagIC samples file.\n Default is to overwrite samples.txt in your working directory.\n\n Parameters\n ----------\n samp_file : str\n path to IODP sample file to convert\n output_samp_file : str\n MagIC-format samples file to append to, default None\n output_dir_path : str\n output file directory, default \".\"\n input_dir_path : str\n input file directory IF different from output_dir_path, default \"\"\n data_model_num : int\n MagIC data model [2, 3], default 3\n\n Returns\n 
--------\n type - Tuple : (True or False indicating if conversion was successful, samp_file name written)\n\n \"\"\"\n samp_file_name = \"samples.txt\"\n sample_alternatives = \"sample_alternatives\"\n method_codes = \"method_codes\"\n sample_name = \"sample\"\n site_name = \"site\"\n expedition_name = \"expedition_name\"\n location_name = \"location\"\n citation_name = \"citation\"\n dip = \"dip\"\n azimuth = \"azimuth\"\n core_depth = \"core_depth\"\n composite_depth = \"composite_depth\"\n timestamp = \"timestamp\"\n file_type = \"samples\"\n data_model_num = int(float(data_model_num))\n if data_model_num == 2:\n samp_file_name = \"er_samples.txt\"\n sample_alternatives = \"er_sample_alternatives\"\n method_codes = \"magic_method_codes\"\n sample_name = \"er_sample_name\"\n site_name = \"er_site_name\"\n expedition_name = \"er_expedition_name\"\n location_name = \"er_location_name\"\n citation_name = \"er_citation_names\"\n dip = \"sample_dip\"\n azimuth = \"sample_azimuth\"\n core_depth = \"sample_core_depth\"\n composite_depth = \"sample_composite_depth\"\n timestamp = \"sample_date\"\n file_type = \"er_samples\"\n\n\n text_key = None\n comp_depth_key = \"\"\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, output_dir_path)\n samp_file = pmag.resolve_file_name(samp_file, input_dir_path)\n Samps = []\n samp_out = os.path.join(output_dir_path, samp_file_name)\n if output_samp_file:\n if os.path.exists(output_samp_file):\n samp_out = os.path.join(output_dir_path, output_samp_file)\n Samps, file_type = pmag.magic_read(samp_out)\n print(len(Samps), ' read in from: ', samp_out)\n fin = open(samp_file, \"r\")\n file_input = fin.readlines()\n fin.close()\n keys = file_input[0].replace('\\n', '').split(',')\n if \"CSF-B Top (m)\" in keys:\n comp_depth_key = \"CSF-B Top (m)\"\n elif \"Top depth CSF-B (m)\" in keys:\n comp_depth_key = \"Top depth CSF-B (m)\"\n # incorporate changes to LIMS data model, while maintaining backward\n # compatibility\n keys = [key.strip('\"').strip(\"'\") for key in keys]\n if \"Top Depth (m)\" in keys:\n depth_key = \"Top Depth (m)\"\n elif \"CSF-A Top (m)\" in keys:\n depth_key = \"CSF-A Top (m)\"\n elif \"Top depth CSF-A (m)\" in keys:\n depth_key = \"Top depth CSF-A (m)\"\n if \"Text Id\" in keys:\n text_key = \"Text Id\"\n elif \"Text identifier\" in keys:\n text_key = \"Text identifier\"\n elif \"Text ID\" in keys:\n text_key = \"Text ID\"\n if \"Sample Date Logged\" in keys:\n date_key = \"Sample Date Logged\"\n elif \"Sample date logged\" in keys:\n date_key = \"Sample date logged\"\n elif \"Date sample logged\" in keys:\n date_key = \"Date sample logged\"\n elif \"Timestamp (UTC)\" in keys:\n date_key = \"Timestamp (UTC)\"\n if 'Volume (cc)' in keys:\n volume_key = 'Volume (cc)'\n if 'Volume (cm^3)' in keys:\n volume_key = 'Volume (cm^3)'\n if 'Volume (cm3)' in keys:\n volume_key = 'Volume (cm3)'\n if not text_key:\n return False, \"Could not extract the necessary data from your input file.\\nPlease make sure you are providing a correctly formatted IODP samples csv file.\"\n ErSamples, samples, file_format = [], [], 'old'\n for line in file_input[1:]:\n if line[0] != '0':\n ODPRec, SampRec = {}, {}\n interval, core = \"\", \"\"\n rec = line.replace('\\n', '').split(',')\n if len(rec) < 2:\n print(\"Error in csv file, blank columns\")\n break\n for k in range(len(keys)):\n ODPRec[keys[k]] = rec[k].strip('\"')\n SampRec[sample_alternatives] = ODPRec[text_key]\n if \"Label Id\" in keys: # old format\n label = ODPRec['Label Id'].split()\n if len(label) > 1:\n interval = label[1].split('/')[0]\n pieces = label[0].split('-')\n core = pieces[2]\n while len(core) < 4:\n core = '0' + core # my way\n else: # new format\n file_format = 'new'\n pieces = [ODPRec['Exp'], ODPRec['Site'] + ODPRec['Hole'],\n ODPRec['Core'] + ODPRec['Type'], ODPRec['Sect'], ODPRec['A/W']]\n interval = ODPRec['Top offset (cm)'].split(\n '.')[0].strip() # only integers allowed!\n core = ODPRec['Core'] + 
ODPRec['Type']\n if core != \"\" and interval != \"\":\n SampRec[method_codes] = 'FS-C-DRILL-IODP:SP-SS-C:SO-V'\n if file_format == 'old':\n SampRec[sample_name] = pieces[0] + '-' + pieces[1] + \\\n '-' + core + '-' + pieces[3] + \\\n '-' + pieces[4] + '-' + interval\n else:\n SampRec[sample_name] = pieces[0] + '-' + pieces[1] + '-' + core + '-' + \\\n pieces[3] + '_' + pieces[4] + '_' + \\\n interval # change in sample name convention\n SampRec[site_name] = SampRec[sample_name]\n # pieces=SampRec['er_sample_name'].split('-')\n SampRec[expedition_name] = pieces[0]\n SampRec[location_name] = pieces[1]\n SampRec[citation_name] = \"This study\"\n SampRec[dip] = \"0\"\n SampRec[azimuth] = \"0\"\n SampRec[core_depth] = ODPRec[depth_key]\n if ODPRec[volume_key] != \"\":\n SampRec['sample_volume'] = str(\n float(ODPRec[volume_key]) * 1e-6)\n else:\n SampRec['sample_volume'] = '1'\n if comp_depth_key != \"\":\n SampRec[composite_depth] = ODPRec[comp_depth_key]\n dates = ODPRec[date_key].split()\n if '/' in dates[0]: # have a date\n mmddyy = dates[0].split('/')\n yyyy = '20' + mmddyy[2]\n mm = mmddyy[0]\n if len(mm) == 1:\n mm = '0' + mm\n dd = mmddyy[1]\n if len(dd) == 1:\n dd = '0' + dd\n date = yyyy + ':' + mm + ':' + \\\n dd + ':' + dates[1] + \":00.00\"\n else:\n date = \"\"\n SampRec[timestamp] = date\n ErSamples.append(SampRec)\n samples.append(SampRec[sample_name])\n if len(Samps) > 0:\n for samp in Samps:\n if samp[sample_name] not in samples:\n ErSamples.append(samp)\n Recs, keys = pmag.fillkeys(ErSamples)\n pmag.magic_write(samp_out, Recs, file_type)\n print('sample information written to: ', samp_out)\n return True, samp_out"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting DataFrame df to list of sample names in format U999A - 1H - 1 - W - 1", "response": "def iodp_sample_names(df):\n \"\"\"\n Convert expedition, hole, section, type, interval to sample name in format:\n exp-hole-type-sect-interval\n\n Parameters\n 
___________\n df : Pandas DataFrame\n dataframe read in from .csv file downloaded from LIMS online database\n\n Returns\n --------------\n\n holes : list\n IODP Holes name in format U999A\n\n specimens : pandas Series\n series with sample names, e.g. 999-U999A-1H-1-W-1\n\n\n \"\"\"\n if 'Top offset (cm)' in df.columns:\n offset_key='Top offset (cm)'\n elif 'Offset (cm)' in df.columns:\n offset_key='Offset (cm)'\n else:\n print ('No offset key found')\n return False,[]\n interval=df[offset_key].astype('float').astype('str')\n holes=df['Site'].astype('str')+df['Hole']\n specimens=df['Exp'].astype('str')+'-'+holes +'-'+df['Core'].astype('str')+df['Type'].astype('str')+\\\n '-'+df['Sect'].astype('str')+ '-' + df['A/W']+'-'+ interval\n return holes, specimens"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef iodp_samples_csv(lims_sample_file, spec_file='specimens.txt',samp_file=\"samples.txt\",\n site_file=\"sites.txt\",loc_file=\"locations.txt\",dir_path='.',\n input_dir_path='',comp_depth_key=\"\",lat=\"\",lon=\"\",\n exp_name=\"\",exp_desc=\"\",age_low=0,age_high=200e6):\n \"\"\"\n Convert IODP samples data file downloaded from LIMS as .csv file into datamodel 3.0 MagIC samples file.\n Default is to overwrite samples.txt in your working directory.\n\n Parameters\n ----------\n lims_sample_file : str\n path to IODP sample file to convert\n dir_path : str\n working directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n\n spec_file : str\n output specimens.txt file name\n samp_file : str\n output samples.txt file name\n site_file : str\n output sites.txt file name\n loc_file : str\n output locations.txt name\n comp_depth_key : str\n if not \"\", there is a composite depth model, for example 'Top depth CSF-B (m)'\n lat : float\n latitude of hole location\n lon : float\n longitude of hole location\n exp_name : str\n expedition number (e.g., 
382)\n exp_desc : str\n expedition nickname (e.g., Iceberg Alley)\n age_high : float\n maximum age of hole in Ma\n age_low : float\n minimum age of hole in Ma\n\n NOTE: all output files will overwrite existing files.\n\n Returns\n --------\n type - Tuple : (True or False indicating if conversion was successful, file names written)\n\n \"\"\"\n # define required columns for samples, sites, locations (holes).\n spec_reqd_columns=['specimen','sample','result_quality','method_codes','volume',\\\n 'specimen_name_alternatives','citations']\n samp_reqd_columns=['sample','site','result_type','result_quality',\\\n 'azimuth','dip','azimuth_correction','method_codes','citations']\n site_reqd_columns=['site','location','lat','lon','result_type','result_quality','method_codes',\\\n 'core_depth','composite_depth',\\\n 'geologic_types','geologic_classes','geologic_lithologies',\\\n 'age','age_low','age_high','age_unit','citations']\n loc_reqd_columns=['expedition_name','expedition_ship','expedition_description','lat_s','lat_n',\\\n 'geologic_classes','geologic_lithologies','location_type',\\\n 'lon_w','lon_e','age_low','age_high','age_unit','citations']\n # fix the path names for input and output directories (if different)\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n # find the input sample file name downloaded from LORE\n samp_file_name = pmag.resolve_file_name(lims_sample_file, input_dir_path)\n # set the output file names\n spec_out = os.path.join(output_dir_path, spec_file)\n samp_out = os.path.join(output_dir_path, samp_file)\n site_out = os.path.join(output_dir_path, site_file)\n loc_out = os.path.join(output_dir_path, loc_file)\n # read in the data from the downloaded .csv file\n iodp_sample_data=pd.read_csv(samp_file_name)\n # get rid of incomplete sample rows\n iodp_sample_data.dropna(subset=['Hole','Type','Sect','Top offset (cm)','Top depth CSF-A (m)','A/W'],\n inplace=True)\n\n # replace the 'nan' values with blanks\n iodp_sample_data.fillna('',inplace=True)\n # check if there are data\n if len(iodp_sample_data)==0:\n return False, \"Could not extract the necessary data from your input file.\\nPlease make sure you are providing a correctly formatted IODP samples csv file.\"\n # create the MagIC data model 3.0 DataFrames with the required columns\n specimens_df=pd.DataFrame(columns = spec_reqd_columns)\n samples_df=pd.DataFrame(columns = samp_reqd_columns)\n sites_df=pd.DataFrame(columns = site_reqd_columns)\n # set some column headers for the sample master .csv file\n depth_key = \"Top depth CSF-A (m)\"\n text_key = \"Text ID\"\n date_key = \"Date sample logged\"\n volume_key = 'Volume (cm3)'\n if volume_key not in iodp_sample_data.columns:\n volume_key = 'Sample volume (CC)'\n # convert the meta data in the sample master to MagIC datamodel 3\n # construct the sample name\n holes,specimens=iodp_sample_names(iodp_sample_data)\n # put the sample name in the specimen, sample, site\n specimens_df['specimen']=specimens\n specimens_df['sample']=specimens\n samples_df['sample']=specimens\n samples_df['site']=specimens\n\n # add required meta-data for each hole to the locations table\n unique_holes=list(holes.unique())\n loc_dicts=[]\n for hole in unique_holes:\n loc_dict={}\n print (hole)\n loc_dict['location_name']=hole\n loc_dict['expedition_name']=exp_name\n loc_dict['expedition_ship']='JOIDES Resolution'\n loc_dict['expedition_description']=exp_desc\n loc_dict['lat_s']=lat\n loc_dict['lat_n']=lat\n loc_dict['lon_w']=lon\n loc_dict['lon_e']=lon\n loc_dict['location_type']=\"Drill Site\"\n loc_dict['citations']='This study'\n loc_dict['age_high']=age_high\n loc_dict['age_low']=age_low\n loc_dict['age_unit']='Ma'\n\n loc_dicts.append(loc_dict)\n # add in the rest to the sites table\n sites_df['site']=specimens\n sites_df['core_depth']=iodp_sample_data['Top depth CSF-A (m)']\n sites_df['lat']=lat\n sites_df['lon']=lon\n sites_df['result_type']='i'\n sites_df['result_quality']='g'\n 
sites_df['method_codes']='FS-C-DRILL-IODP'\n sites_df['location']=hole\n sites_df['citations']=\"This study\"\n if comp_depth_key: sites_df['composite_depth']=iodp_sample_data[comp_depth_key]\n # now do the samples\n\n samples_df['method_codes']='FS-C-DRILL-IODP:SP-SS-C:SO-V' # could put in sampling tool...\n samples_df['site']=specimens # same as sample and specimen name\n samples_df['result_type']='i'\n samples_df['result_quality']='g'\n samples_df['azimuth']='0'\n samples_df['dip']='0'\n samples_df['citations']='This study'\n\n # NEED TO ADD TIMESTAMP FROM TIMESTAMP KEY SOMEDAY\n # and the specimens\n specimens_df['specimen_name_alternatives']=iodp_sample_data[text_key]\n specimens_df['result_quality']='g'\n specimens_df['method_codes']='FS-C-DRILL-IODP:SP-SS-C:SO-V'\n specimens_df['volume']=iodp_sample_data[volume_key]*1e-6 # convert from cm^3 to m^3\n specimens_df['citations']='This study'\n # fill in the np.nan with blanks\n specimens_df.fillna(\"\",inplace=True)\n # make specimen_df format compatible with MagicDataFrame\n specimens_df.index = specimens_df['specimen']\n specimens_df.index.name = 'specimen name'\n # combine with old specimen records if available\n if os.path.exists(spec_file):\n old_specimens = cb.MagicDataFrame(spec_file)\n old_specimens.df = old_specimens.merge_dfs(specimens_df)\n old_specimens.write_magic_file(spec_file)\n else:\n spec_dicts = specimens_df.to_dict('records')\n pmag.magic_write(spec_out, spec_dicts, 'specimens')\n\n # fill in the np.nan with blanks\n samples_df.fillna(\"\",inplace=True)\n sites_df.fillna(\"\",inplace=True)\n # save the files in the designated spots samp_out, site_out and loc_out\n samp_dicts = samples_df.to_dict('records')\n pmag.magic_write(samp_out, samp_dicts, 'samples')\n site_dicts = sites_df.to_dict('records')\n pmag.magic_write(site_out, site_dicts, 'sites')\n pmag.magic_write(loc_out, loc_dicts, 'locations')\n return True", "response": "Convert IODP samples data file downloaded from LIMS as .csv file into datamodel 3.0 MagIC samples file."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts IODP samples data generated from the SRM measurements file into datamodel 3.0 MagIC samples file.", "response": "def iodp_samples_srm(df, spec_file='specimens.txt',samp_file=\"samples.txt\",site_file=\"sites.txt\",dir_path='.',\n input_dir_path='',comp_depth_key=\"\",lat=\"\",lon=\"\"):\n \"\"\"\n Convert IODP samples data generated from the SRM measurements file into datamodel 3.0 MagIC samples file.\n Default is to overwrite the output files in your working directory.\n\n Parameters\n ----------\n df : pandas DataFrame\n Pandas DataFrame of SRM Archive data\n dir_path : str\n working directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n\n spec_file : str\n output specimens.txt file name\n samp_file : str\n output samples.txt file name\n site_file : str\n output sites.txt file name\n comp_depth_key : str\n if not \"\", there is a composite depth model, for example 'Depth CSF-B (m)'\n lat : float\n latitude of hole location\n lon : float\n longitude of hole location\n\n NOTE: all output files will overwrite existing files.\n\n Returns\n --------\n type - Tuple : (hole name in format U999A, pandas Series of specimen names)\n\n \"\"\"\n # define required columns for samples, sites, locations (holes).\n spec_reqd_columns=['specimen','sample','result_quality','method_codes','volume',\\\n 'specimen_name_alternatives','citations']\n samp_reqd_columns=['sample','site','result_type','result_quality',\\\n 'azimuth','dip','azimuth_correction','method_codes','citations','core_depth','composite_depth']\n site_reqd_columns=['site','location','lat','lon','result_type','result_quality','method_codes',\\\n 'core_depth','composite_depth',\\\n 'geologic_types','geologic_classes','geologic_lithologies',\\\n 'age','age_low','age_high','age_unit','citations']\n # fix the 
path names for input and output directories (if different)\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n # find the input sample file name downloaded from LORE\n # set the output file names\n spec_out = os.path.join(output_dir_path, spec_file)\n samp_out = os.path.join(output_dir_path, samp_file)\n site_out = os.path.join(output_dir_path, site_file)\n # read in the data from the downloaded .csv file\n # create the MagIC data model 3.0 DataFrames with the required column\n specimens_df=pd.DataFrame(columns = spec_reqd_columns)\n samples_df=pd.DataFrame(columns = samp_reqd_columns)\n sites_df=pd.DataFrame(columns = site_reqd_columns)\n # set some column headers for the sample master .csv file\n depth_key = \"Depth CSF-A (m)\"\n text_key = \"Text ID\"\n date_key = \"Date sample logged\"\n volume_key = 'Volume (cm3)'\n # convert the meta data in the sample master to MagIC datamodel 3\n # construct the sample name\n\n holes,specimens=iodp_sample_names(df)\n # put the sample name in the specimen, sample, site\n specimens_df['specimen']=specimens\n specimens_df['sample']=specimens\n samples_df['sample']=specimens\n samples_df['site']=specimens\n samples_df['core_depth']=df[depth_key]\n if comp_depth_key: samples_df['composite_depth']=df[comp_depth_key]\n\n # add in the rest to the sites table\n sites_df['site']=specimens\n sites_df['core_depth']=df[depth_key]\n sites_df['lat']=lat\n sites_df['lon']=lon\n sites_df['result_type']='i'\n sites_df['result_quality']='g'\n sites_df['method_codes']='FS-C-DRILL-IODP'\n sites_df['location']=holes[0]\n sites_df['citations']=\"This study\"\n if comp_depth_key: sites_df['composite_depth']=df[comp_depth_key]\n # now do the samples\n\n samples_df['method_codes']='FS-C-DRILL-IODP:SP-SS-C:SO-V' # could put in sampling tool...\n samples_df['site']=specimens # same as sample and specimen name\n samples_df['result_type']='i'\n samples_df['result_quality']='g'\n samples_df['azimuth']='0'\n 
samples_df['dip']='0'\n samples_df['citations']='This study'\n\n # NEED TO ADD TIMESTAMP FROM TIMESTAMP KEY SOMEDAY\n # and the specimens\n specimens_df['result_quality']='g'\n specimens_df['method_codes']='FS-C-DRILL-IODP'\n specimens_df['citations']='This study'\n # fill in the np.nan with blanks\n specimens_df.fillna(\"\",inplace=True)\n samples_df.fillna(\"\",inplace=True)\n sites_df.fillna(\"\",inplace=True)\n # save the files in the designated spots spec_out, samp_out, site_out and loc_out\n spec_dicts = specimens_df.to_dict('records')\n pmag.magic_write(spec_out, spec_dicts, 'specimens')\n samp_dicts = samples_df.to_dict('records')\n pmag.magic_write(samp_out, samp_dicts, 'samples')\n site_dicts = sites_df.to_dict('records')\n pmag.magic_write(site_out, site_dicts, 'sites')\n\n return holes[0],specimens"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts IODP archive half measurement files into MagIC files", "response": "def iodp_srm_lore(srm_file, dir_path=\".\", input_dir_path=\"\",noave=False,comp_depth_key='Depth CSF-B (m)',\\\n meas_file=\"srm_arch_measurements.txt\", spec_file=\"srm_arch_specimens.txt\",\\\n samp_file='srm_arch_samples.txt',site_file='srm_arch_sites.txt',lat=\"\",lon=\"\"):\n \"\"\"\n Convert IODP archive half measurement files into MagIC file(s).\n\n Parameters\n ----------\n srm_file : str\n input csv file downloaded from LIMS online repository\n dir_path : str\n output directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n meas_file : str\n output measurement file name, default \"srm_arch_measurements.txt\"\n spec_file : str\n specimens file name as output file - these specimens are not already in the specimen table\n created by iodp_samples_csv\n samp_file : str\n samples file name as output file - these are not already in the samples table\n created by iodp_samples_csv\n site_file : str\n sites file name as output file - these are not already in the sites table\n noave : Boolean\n if False, average replicate measurements\n\n Returns\n --------\n type - Tuple : (True or False indicating if conversion was successful, meas_file name written)\n \"\"\"\n # initialize defaults\n version_num = pmag.get_version()\n # format variables\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n # convert cc to m^3\n meas_reqd_columns=['specimen','measurement','experiment','sequence','quality','method_codes',\\\n 'instrument_codes','citations',\\\n 'treat_temp','treat_ac_field','treat_dc_field',\\\n 'treat_dc_field_phi','treat_dc_field_theta','meas_temp',\\\n 'dir_dec','dir_inc','magn_moment','magn_volume',\\\n 'description','timestamp','software_packages',\\\n 'external_database_ids','treat_step_num','meas_n_orient']\n srm_file = pmag.resolve_file_name(srm_file, input_dir_path)\n spec_file = pmag.resolve_file_name(spec_file, dir_path)\n in_df=pd.read_csv(srm_file,header=0)\n in_df.drop_duplicates(inplace=True)\n in_df.sort_values(by='Treatment Value',inplace=True)\n if len(in_df)==0:\n print ('you must download a csv file from the LIMS database and place it in your input_dir_path')\n return False\n measurements_df=pd.DataFrame(columns=meas_reqd_columns)\n meas_out = os.path.join(output_dir_path, meas_file)\n hole,srm_specimens=iodp_samples_srm(in_df, spec_file=spec_file,samp_file=samp_file,site_file=site_file,\\\n dir_path=dir_path,input_dir_path=input_dir_path,\\\n comp_depth_key=comp_depth_key,lat=lat,lon=lon)\n # assume only one hole\n # set up defaults\n measurements_df['specimen']=srm_specimens\n measurements_df['quality']='g'\n measurements_df['citations']='This study'\n measurements_df['meas_temp']=273\n measurements_df['software_packages']=version_num\n measurements_df[\"treat_temp\"] = '%8.3e' % (273) # room temp in kelvin\n measurements_df[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n measurements_df[\"treat_ac_field\"] = '0'\n 
measurements_df[\"treat_dc_field\"] = '0'\n measurements_df[\"treat_dc_field_phi\"] = '0'\n measurements_df[\"treat_dc_field_theta\"] = '0'\n measurements_df[\"standard\"] = 'u' # assume all data are \"good\"\n measurements_df[\"dir_csd\"] = '0' # assume all data are \"good\"\n measurements_df[\"method_codes\"] = 'LT-NO' # assume all are NRMs\n measurements_df['instrument_codes']=\"IODP-SRM\" # assume all measurements on shipboard 2G\n measurements_df['treat_step_num']='0' # assign a number\n measurements_df['meas_n_orient']=1 # at least one orientation\n measurements_df['timestamp']=pd.to_datetime(\\\n in_df['Timestamp (UTC)']).dt.strftime(\"%Y-%m-%dT%H:%M:%S\")+'Z'\n measurements_df['dir_dec']=in_df['Declination background & drift corrected (deg)'] # declination\n measurements_df['dir_inc']=in_df['Inclination background & drift corrected (deg)'] # inclination\n measurements_df['magn_volume']=in_df['Intensity background & drift corrected (A/m)'] # magnetization\n Xs=in_df['Magnetic moment x (Am\u00b2)']\n Ys=in_df['Magnetic moment y (Am\u00b2)']\n Zs=in_df['Magnetic moment z (Am\u00b2)']\n magn_moment=np.sqrt(Xs**2+Ys**2+Zs**2)\n measurements_df['magn_moment']=magn_moment # moment in Am^2\n measurements_df['description']=in_df['Treatment Type'] # temporary column\n measurements_df['treat_ac_field']=in_df['Treatment Value']*1e-3 # assume all treatments are AF\n measurements_df.loc[measurements_df['description']=='IN-LINE AF DEMAG',\\\n 'method_codes']='LT-AF-Z'\n measurements_df.loc[measurements_df['description']=='IN-LINE AF DEMAG',\\\n 'instrument_codes']='IODP-SRM:IODP-SRM-AF'\n measurements_df['external_database_ids']='LORE['+in_df['Test No.'].astype('str')+']'\n\n measurements_df.fillna(\"\",inplace=True)\n meas_dicts = measurements_df.to_dict('records')\n meas_dicts=pmag.measurements_methods3(meas_dicts,noave=noave)\n pmag.magic_write(meas_out, meas_dicts, 'measurements')\n return True"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the 
following Python 3 code\ndef iodp_srm(csv_file=\"\", dir_path=\".\", input_dir_path=\"\",\n meas_file=\"measurements.txt\", spec_file=\"specimens.txt\",\n samp_file=\"samples.txt\", site_file=\"sites.txt\", loc_file=\"locations.txt\",\n lat=\"\", lon=\"\", noave=0):\n\n \"\"\"\n Converts IODP LIMS and LORE SRM archive half sample format files\n to measurements format files.\n\n Parameters\n ----------\n csv_file : str\n input csv file, default \"\"\n if no file name is provided, find any .csv files in the provided dir_path\n dir_path : str\n working directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n meas_file : str\n output measurement file name, default \"measurements.txt\"\n spec_file : str\n output specimen file name, default \"specimens.txt\"\n samp_file : str\n output sample file name, default \"samples.txt\"\n site_file : str\n output site file name, default \"sites.txt\"\n loc_file : str\n output location file name, default \"locations.txt\"\n lat : float\n latitude, default \"\"\n lon : float\n longitude, default \"\"\n noave : bool\n do not average duplicate measurements, default False (so by default, DO average)\n\n Returns\n --------\n type - Tuple : (True or False indicating if conversion was successful, meas_file name written)\n\n \"\"\"\n\n # helper function\n\n def pad_year(date, ind, warn=False, fname=''):\n \"\"\"\n Parameters\n ----------\n date : str\n date as colon-delimited string, i.e. 
04:10:2015:06:45:00\n ind: int\n index of year position\n warn : bool (default False)\n verbose or not\n fname: str (default '')\n filename for more informative warning\n\n Returns\n ---------\n new_date: str\n date as colon-delimited,\n with year padded if it wasn't before\n \"\"\"\n date_list = date.split(':')\n year = date_list[ind]\n if len(year) == 2:\n padded_year = '20' + year\n date_list[ind] = padded_year\n if warn:\n print('-W- Ambiguous year \"{}\" in {}.'.format(year, fname))\n print(' Assuming {}.'.format(padded_year))\n new_date = ':'.join(date_list)\n if new_date != date and warn:\n print(' Date translated to {}'.format(new_date))\n return new_date\n\n\n # initialize defaults\n version_num = pmag.get_version()\n citations = \"This study\"\n demag = 'NRM'\n depth_method = 'a'\n # format variables\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n if csv_file == \"\":\n # read in list of files to import\n filelist = os.listdir(input_dir_path)\n else:\n filelist = [csv_file]\n\n # parsing the data\n MeasRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n file_found = False\n for f in filelist: # parse each file\n year_warning = True\n if f[-3:].lower() == 'csv':\n print('processing:', f)\n file_found = True\n # get correct full filename and read data\n fname = pmag.resolve_file_name(f, input_dir_path)\n full_file = open(fname)\n file_input = full_file.readlines()\n full_file.close()\n keys = file_input[0].replace('\\n', '').split(\n ',') # splits on commas\n keys = [k.strip('\"') for k in keys]\n if \"Interval Top (cm) on SHLF\" in keys:\n interval_key = \"Interval Top (cm) on SHLF\"\n elif \" Interval Bot (cm) on SECT\" in keys:\n interval_key = \" Interval Bot (cm) on SECT\"\n elif \"Offset (cm)\" in keys:\n interval_key = \"Offset (cm)\"\n else:\n print(\"couldn't find interval or offset amount\")\n # get depth key\n if \"Top Depth (m)\" in keys:\n depth_key = \"Top Depth (m)\"\n elif \"CSF-A Top 
(m)\" in keys:\n depth_key = \"CSF-A Top (m)\"\n elif \"Depth CSF-A (m)\" in keys:\n depth_key = \"Depth CSF-A (m)\"\n else:\n print(\"couldn't find depth\")\n # get comp depth key\n if \"CSF-B Top (m)\" in keys:\n comp_depth_key = \"CSF-B Top (m)\" # use this model if available\n elif \"Depth CSF-B (m)\" in keys:\n comp_depth_key = \"Depth CSF-B (m)\"\n else:\n comp_depth_key = \"\"\n print(\"couldn't find composite depth\")\n if \"Demag level (mT)\" in keys:\n demag_key = \"Demag level (mT)\"\n elif \"Demag Level (mT)\" in keys:\n demag_key = \"Demag Level (mT)\"\n elif \"Treatment Value\" in keys:\n demag_key = \"Treatment Value\"\n else:\n print(\"couldn't find demag type\")\n # Get inclination key\n if \"Inclination (Tray- and Bkgrd-Corrected) (deg)\" in keys:\n inc_key = \"Inclination (Tray- and Bkgrd-Corrected) (deg)\"\n elif \"Inclination background + tray corrected (deg)\" in keys:\n inc_key = \"Inclination background + tray corrected (deg)\"\n elif \"Inclination background + tray corrected (\\xc2\\xb0)\" in keys:\n inc_key = \"Inclination background + tray corrected (\\xc2\\xb0)\"\n elif \"Inclination background & tray corrected (deg)\" in keys:\n inc_key = \"Inclination background & tray corrected (deg)\"\n elif \"Inclination background & tray corrected (deg)\" in keys:\n inc_key = \"Inclination background & tray corrected (deg)\"\n elif \"Inclination background & drift corrected (deg)\" in keys:\n inc_key = \"Inclination background & drift corrected (deg)\"\n elif \"Inclination background + tray corrected (\\N{DEGREE SIGN})\" in keys:\n inc_key = \"Inclination background + tray corrected (\\N{DEGREE SIGN})\"\n else:\n print(\"couldn't find inclination\")\n # get declination key\n if \"Declination (Tray- and Bkgrd-Corrected) (deg)\" in keys:\n dec_key = \"Declination (Tray- and Bkgrd-Corrected) (deg)\"\n elif \"Declination background + tray corrected (\\N{DEGREE SIGN})\" in keys:\n dec_key = \"Declination background + tray corrected (\\N{DEGREE 
SIGN})\"\n elif \"Declination background + tray corrected (deg)\" in keys:\n dec_key = \"Declination background + tray corrected (deg)\"\n elif \"Declination background + tray corrected (\\xc2\\xb0)\" in keys:\n dec_key = \"Declination background + tray corrected (\\xc2\\xb0)\"\n elif \"Declination background & tray corrected (deg)\" in keys:\n dec_key = \"Declination background & tray corrected (deg)\"\n elif \"Declination background & tray corrected (deg)\" in keys:\n dec_key = \"Declination background & tray corrected (deg)\"\n elif \"Declination background & drift corrected (deg)\" in keys:\n dec_key = \"Declination background & drift corrected (deg)\"\n else:\n print(\"couldn't find declination\")\n if \"Intensity (Tray- and Bkgrd-Corrected) (A/m)\" in keys:\n int_key = \"Intensity (Tray- and Bkgrd-Corrected) (A/m)\"\n elif \"Intensity background + tray corrected (A/m)\" in keys:\n int_key = \"Intensity background + tray corrected (A/m)\"\n elif \"Intensity background & tray corrected (A/m)\" in keys:\n int_key = \"Intensity background & tray corrected (A/m)\"\n elif \"Intensity background & tray corrected (A/m)\" in keys:\n int_key = \"Intensity background & tray corrected (A/m)\"\n elif \"Intensity background & drift corrected (A/m)\" in keys:\n int_key = \"Intensity background & drift corrected (A/m)\"\n else:\n print(\"couldn't find magnetic moment\")\n if \"Core Type\" in keys:\n core_type = \"Core Type\"\n elif \"Type\" in keys:\n core_type = \"Type\"\n else:\n print(\"couldn't find core type\")\n if 'Run Number' in keys:\n run_number_key = 'Run Number'\n elif 'Test No.' 
in keys:\n run_number_key = 'Test No.'\n else:\n print(\"couldn't find run number\")\n if 'Test Changed On' in keys:\n date_key = 'Test Changed On'\n elif \"Timestamp (UTC)\" in keys:\n date_key = \"Timestamp (UTC)\"\n else:\n print(\"couldn't find timestamp\")\n if \"Section\" in keys:\n sect_key = \"Section\"\n elif \"Sect\" in keys:\n sect_key = \"Sect\"\n else:\n print(\"couldn't find section number\")\n if 'Section Half' in keys:\n half_key = 'Section Half'\n elif \"A/W\" in keys:\n half_key = \"A/W\"\n else:\n print(\"couldn't find half number\")\n if \"Text ID\" in keys:\n text_id = \"Text ID\"\n elif \"Text Id\" in keys:\n text_id = \"Text Id\"\n else:\n print(\"couldn't find ID number\")\n for line in file_input[1:]:\n InRec = {}\n test = 0\n recs = line.split(',')\n for k in range(len(keys)):\n if len(recs) == len(keys):\n InRec[keys[k]] = line.split(',')[k].strip(\"\"\" \" ' \"\"\")\n if 'Exp' in list(InRec.keys()) and InRec['Exp'] != \"\":\n # get rid of pesky blank lines (why is this a thing?)\n test = 1\n if not test:\n continue\n run_number = \"\"\n inst = \"IODP-SRM\"\n volume = '15.59' # set default volume to this\n if 'Sample Area (cm?)' in list(InRec.keys()) and InRec['Sample Area (cm?)'] != \"\":\n volume = InRec['Sample Area (cm?)']\n MeasRec, SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}, {}\n expedition = InRec['Exp']\n location = InRec['Site']+InRec['Hole']\n# Maintain backward compatibility for the ever-changing LIMS format (Argh!)\n while len(InRec['Core']) < 3:\n InRec['Core'] = '0'+InRec['Core']\n # assume discrete sample\n if \"Last Tray Measurment\" in list(InRec.keys()) and \"SHLF\" not in InRec[text_id] or 'dscr' in csv_file:\n specimen = expedition+'-'+location+'-' + \\\n InRec['Core']+InRec[core_type]+\"-\"+InRec[sect_key] + \\\n '-'+InRec[half_key]+'-'+str(InRec[interval_key])\n else: # mark as continuous measurements\n specimen = expedition+'-'+location+'-' + \\\n InRec['Core']+InRec[core_type]+\"_\"+InRec[sect_key] + \\\n 
InRec[half_key]+'-'+str(InRec[interval_key])\n sample = expedition+'-'+location + \\\n '-'+InRec['Core']+InRec[core_type]\n site = expedition+'-'+location\n\n if not InRec[dec_key] or not InRec[inc_key]:\n print(\"No dec or inc found for specimen %s, skipping\" % specimen)\n continue\n\n if specimen != \"\" and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else \"\" for x in SpecRecs]:\n SpecRec['specimen'] = specimen\n SpecRec['sample'] = sample\n SpecRec['citations'] = citations\n SpecRec['volume'] = volume\n SpecRec['specimen_alternatives'] = InRec[text_id]\n SpecRecs.append(SpecRec)\n if sample != \"\" and sample not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['sample'] = sample\n SampRec['site'] = site\n SampRec['citations'] = citations\n SampRec['azimuth'] = '0'\n SampRec['dip'] = '0'\n SampRec['core_depth'] = InRec[depth_key]\n if comp_depth_key != '':\n SampRec['composite_depth'] = InRec[comp_depth_key]\n SiteRec['composite_depth'] = InRec[comp_depth_key]\n if \"SHLF\" not in InRec[text_id]:\n SampRec['method_codes'] = 'FS-C-DRILL-IODP:SP-SS-C:SO-V'\n else:\n SampRec['method_codes'] = 'FS-C-DRILL-IODP:SO-V'\n SampRecs.append(SampRec)\n if site != \"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec['citations'] = citations\n SiteRec['lat'] = lat\n SiteRec['lon'] = lon\n SiteRecs.append(SiteRec)\n SiteRec['core_depth'] = InRec[depth_key]\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec['citations'] = citations\n LocRec['expedition_name'] = expedition\n LocRec['lat_n'] = lat\n LocRec['lon_e'] = lon\n LocRec['lat_s'] = lat\n LocRec['lon_w'] = lon\n LocRecs.append(LocRec)\n\n MeasRec['specimen'] = specimen\n MeasRec['software_packages'] = version_num\n MeasRec[\"treat_temp\"] = '%8.3e' % 
(273) # room temp in kelvin\n MeasRec[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"treat_ac_field\"] = 0\n MeasRec[\"treat_dc_field\"] = '0'\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n MeasRec[\"quality\"] = 'g' # assume all data are \"good\"\n MeasRec[\"standard\"] = 'u' # assume all data are \"good\"\n if run_number_key in list(InRec.keys()) and InRec[run_number_key] != \"\":\n run_number = InRec[run_number_key]\n # date time is second line of file\n datestamp = InRec[date_key].split()\n if '/' in datestamp[0]:\n # break into month day year\n mmddyy = datestamp[0].split('/')\n if len(mmddyy[0]) == 1:\n mmddyy[0] = '0'+mmddyy[0] # make 2 characters\n if len(mmddyy[1]) == 1:\n mmddyy[1] = '0'+mmddyy[1] # make 2 characters\n if len(mmddyy[2]) == 1:\n mmddyy[2] = '0'+mmddyy[2] # make 2 characters\n if len(datestamp[1]) == 1:\n datestamp[1] = '0'+datestamp[1] # make 2 characters\n hour, minute = datestamp[1].split(':')\n if len(hour) == 1:\n hour = '0' + hour\n date = mmddyy[0]+':'+mmddyy[1]+\":\" + \\\n mmddyy[2] + ':' + hour + \":\" + minute + \":00\"\n #date=mmddyy[2] + ':'+mmddyy[0]+\":\"+mmddyy[1] +':' + hour + \":\" + minute + \":00\"\n if '-' in datestamp[0]:\n # break into month day year\n mmddyy = datestamp[0].split('-')\n date = mmddyy[0]+':'+mmddyy[1]+\":\" + \\\n mmddyy[2] + ':' + datestamp[1]+\":0\"\n if len(date.split(\":\")) > 6:\n date = date[:-2]\n # try with month:day:year\n try:\n utc_dt = datetime.datetime.strptime(\n date, \"%m:%d:%Y:%H:%M:%S\")\n except ValueError:\n # try with year:month:day\n try:\n utc_dt = datetime.datetime.strptime(\n date, \"%Y:%m:%d:%H:%M:%S\")\n except ValueError:\n # if all else fails, assume the year is in the third position\n # and try padding with '20'\n new_date = pad_year(\n date, ind=2, warn=year_warning, fname=os.path.split(f)[1])\n utc_dt = datetime.datetime.strptime(\n new_date, \"%m:%d:%Y:%H:%M:%S\")\n # only give warning once per csv file\n 
year_warning = False\n MeasRec['timestamp'] = utc_dt.strftime(\"%Y-%m-%dT%H:%M:%S\")+\"Z\"\n MeasRec[\"method_codes\"] = 'LT-NO'\n if 'Treatment Type' in list(InRec.keys()) and InRec['Treatment Type'] != \"\":\n if \"AF\" in InRec['Treatment Type'].upper():\n MeasRec['method_codes'] = 'LT-AF-Z'\n inst = inst+':IODP-SRM-AF' # measured on shipboard in-line 2G AF\n treatment_value = float(\n InRec[demag_key].strip('\"'))*1e-3 # convert mT => T\n # AF demag in treat mT => T\n MeasRec[\"treat_ac_field\"] = treatment_value\n elif \"T\" in InRec['Treatment Type'].upper():\n MeasRec['method_codes'] = 'LT-T-Z'\n inst = inst+':IODP-TDS' # measured on shipboard Schonstedt thermal demagnetizer\n try:\n treatment_value = float(\n InRec['Treatment Value'])+273 # convert C => K\n except KeyError:\n try:\n treatment_value = float(\n InRec[\"Treatment value
(mT or \\N{DEGREE SIGN}C)\"]) + 273\n except KeyError:\n print([k for k in InRec.keys() if 'treat' in k.lower()])\n print(\"-W- Couldn't find column for Treatment Value\")\n treatment_value = \"\"\n MeasRec[\"treat_temp\"] = str(treatment_value)\n elif 'Alternating Frequency' in InRec['Treatment Type']:\n MeasRec['method_codes'] = 'LT-AF-Z'\n inst = inst+':IODP-DTECH' # measured on shipboard Dtech D2000\n treatment_value = float(\n InRec[demag_key])*1e-3 # convert mT => T\n # AF demag in treat mT => T\n MeasRec[\"treat_ac_field\"] = treatment_value\n elif 'Thermal' in InRec['Treatment Type']:\n MeasRec['method_codes'] = 'LT-T-Z'\n inst = inst+':IODP-TDS' # measured on shipboard Schonstedt thermal demagnetizer\n treatment_value = float(\n InRec[demag_key])+273 # convert C => K\n MeasRec[\"treat_temp\"] = '%8.3e' % (treatment_value)\n elif InRec[demag_key] != \"0\":\n MeasRec['method_codes'] = 'LT-AF-Z'\n inst = inst+':IODP-SRM-AF' # measured on shipboard in-line 2G AF\n try:\n treatment_value = float(\n InRec[demag_key])*1e-3 # convert mT => T\n except ValueError:\n print(\"Couldn't determine treatment value was given treatment value of %s and demag key %s; setting to blank you will have to manually correct this (or fix it)\" % (\n InRec[demag_key], demag_key))\n treatment_value = ''\n # AF demag in treat mT => T\n MeasRec[\"treat_ac_field\"] = treatment_value\n MeasRec[\"standard\"] = 'u' # assume all data are \"good\"\n vol = float(volume)*1e-6 # convert from cc to m^3\n if run_number != \"\":\n MeasRec['external_database_ids'] = {'LIMS': run_number}\n else:\n MeasRec['external_database_ids'] = \"\"\n MeasRec['dir_inc'] = InRec[inc_key]\n MeasRec['dir_dec'] = InRec[dec_key]\n intens = InRec[int_key]\n try:\n # convert intensity from A/m to Am^2 using vol\n MeasRec['magn_moment'] = '%8.3e' % (float(intens)*vol)\n except ValueError:\n print(\"couldn't find magnetic moment for specimen %s and int_key %s; leaving this field blank you'll have to fix this manually\" % 
(\n specimen, int_key))\n MeasRec['magn_moment'] = ''\n MeasRec['instrument_codes'] = inst\n MeasRec['treat_step_num'] = 0\n MeasRec['dir_csd'] = '0'\n MeasRec['meas_n_orient'] = ''\n MeasRecs.append(MeasRec)\n if not file_found:\n print(\"No .csv files were found\")\n return (False, \"No .csv files were found\")\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n #MeasSort=sorted(MeasRecs,key=lambda x: (x['specimen'], float(x['treat_ac_field'])))\n #MeasSort=sorted(MeasRecs,key=lambda x: float(x['treat_ac_field']))\n # MeasOuts=pmag.measurements_methods3(MeasSort,noave)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.write_table_to_file('specimens', custom_name=spec_file,dir_path=dir_path)\n con.write_table_to_file('samples', custom_name=samp_file,dir_path=dir_path)\n con.write_table_to_file('sites', custom_name=site_file,dir_path=dir_path)\n con.write_table_to_file('locations', custom_name=loc_file,dir_path=dir_path)\n con.write_table_to_file('measurements', custom_name=meas_file,dir_path=dir_path)\n\n return (True, meas_file)", "response": "This function converts IODP LIMS and LORE SRM archive half sample format files to measurements format files."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts JR6. 
jr6 files to MagIC files.", "response": "def jr6_jr6(mag_file, dir_path=\".\", input_dir_path=\"\",\n meas_file=\"measurements.txt\", spec_file=\"specimens.txt\",\n samp_file=\"samples.txt\", site_file=\"sites.txt\", loc_file=\"locations.txt\",\n specnum=1, samp_con='1', location='unknown', lat='', lon='',\n noave=False, meth_code=\"LP-NO\", volume=12, JR=False, user=\"\"):\n\n \"\"\"\n Convert JR6 .jr6 files to MagIC file(s)\n\n Parameters\n ----------\n mag_file : str\n input file name\n dir_path : str\n working directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n meas_file : str\n output measurement file name, default \"measurements.txt\"\n spec_file : str\n output specimen file name, default \"specimens.txt\"\n samp_file: str\n output sample file name, default \"samples.txt\"\n site_file : str\n output site file name, default \"sites.txt\"\n loc_file : str\n output location file name, default \"locations.txt\"\n specnum : int\n number of characters to designate a specimen, default 1\n samp_con : str\n sample/site naming convention, default '1', see info below\n location : str\n location name, default \"unknown\"\n lat : float\n latitude, default \"\"\n lon : float\n longitude, default \"\"\n noave : bool\n do not average duplicate measurements, default False (so by default, DO average)\n meth_code : str\n colon-delimited method codes, default \"LP-NO\"\n volume : float\n volume in ccs, default 12\n JR : bool\n IODP samples were measured on the JOIDES RESOLUTION, default False\n user : str\n user name, default \"\"\n\n Returns\n ---------\n Tuple : (True or False indicating if conversion was successful, meas_file name written)\n\n Info\n --------\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. [default]\n [2] XXXX-YY: YY sample from site XXXX (XXXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXXX, YY of arbitrary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXXX\n [5] site name same as sample\n [6] site is entered under a separate column -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY\n\n\n \"\"\"\n\n version_num = pmag.get_version()\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n specnum = - int(specnum)\n samp_con = str(samp_con)\n volume = float(volume) * 1e-6\n # need to add these\n meas_file = pmag.resolve_file_name(meas_file, output_dir_path)\n spec_file = pmag.resolve_file_name(spec_file, output_dir_path)\n samp_file = pmag.resolve_file_name(samp_file, output_dir_path)\n site_file = pmag.resolve_file_name(site_file, output_dir_path)\n loc_file = pmag.resolve_file_name(loc_file, output_dir_path)\n mag_file = pmag.resolve_file_name(mag_file, input_dir_path)\n\n if JR:\n if meth_code == \"LP-NO\":\n meth_code = \"\"\n meth_code = meth_code+\":FS-C-DRILL-IODP:SP-SS-C:SO-V\"\n meth_code = meth_code.strip(\":\")\n samp_con = '5'\n\n # format variables\n tmp_file = mag_file.split(os.extsep)[0]+os.extsep+'tmp'\n mag_file = pmag.resolve_file_name(mag_file, input_dir_path)\n if samp_con.startswith(\"4\"):\n if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"4\"\n elif samp_con.startswith(\"7\"):\n if \"-\" not in samp_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"naming convention option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"7\"\n else:\n Z = 1\n\n # parse data\n # fix .jr6 file so that there are spaces between all 
the columns.\n pre_data = open(mag_file, 'r')\n tmp_data = open(tmp_file, 'w')\n if samp_con != '2':\n fixed_data = pre_data.read().replace('-', ' -')\n else:\n fixed_data = \"\"\n for line in pre_data.readlines():\n entries = line.split()\n if len(entries) < 2:\n continue\n fixed_line = entries[0] + ' ' + reduce(\n lambda x, y: x+' '+y, [x.replace('-', ' -') for x in entries[1:]])\n fixed_data += fixed_line+os.linesep\n tmp_data.write(fixed_data)\n tmp_data.close()\n pre_data.close()\n\n if not JR:\n column_names = ['specimen', 'step', 'x', 'y', 'z', 'expon', 'azimuth', 'dip', 'bed_dip_direction',\n 'bed_dip', 'bed_dip_dir2', 'bed_dip2', 'param1', 'param2', 'param3', 'param4', 'dir_csd']\n else: # measured on the Joides Resolution JR6\n column_names = ['specimen', 'step', 'negz', 'y', 'x', 'expon', 'azimuth', 'dip', 'bed_dip_direction',\n 'bed_dip', 'bed_dip_dir2', 'bed_dip2', 'param1', 'param2', 'param3', 'param4', 'dir_csd']\n data = pd.read_csv(tmp_file, delim_whitespace=True,\n names=column_names, index_col=False)\n if isinstance(data['x'][0], str):\n column_names = ['specimen', 'step', 'step_unit', 'x', 'y', 'z', 'expon', 'azimuth', 'dip', 'bed_dip_direction',\n 'bed_dip', 'bed_dip_dir2', 'bed_dip2', 'param1', 'param2', 'param3', 'param4', 'dir_csd']\n data = pd.read_csv(tmp_file, delim_whitespace=True,\n names=column_names, index_col=False)\n if JR:\n data['z'] = -data['negz']\n cart = np.array([data['x'], data['y'], data['z']]).transpose()\n dir_dat = pmag.cart2dir(cart).transpose()\n data['dir_dec'] = dir_dat[0]\n data['dir_inc'] = dir_dat[1]\n # the data are in A/m - this converts to Am^2\n data['magn_moment'] = dir_dat[2]*(10.0**data['expon'])*volume\n data['magn_volume'] = dir_dat[2] * \\\n (10.0**data['expon']) # A/m - data in A/m\n data['dip'] = -data['dip']\n\n data['specimen']\n # put data into magic tables\n MagRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n for rowNum, row in data.iterrows():\n MeasRec, SpecRec, SampRec, 
SiteRec, LocRec = {}, {}, {}, {}, {}\n specimen = row['specimen']\n if specnum != 0:\n sample = specimen[:specnum]\n else:\n sample = specimen\n site = pmag.parse_site(sample, samp_con, Z)\n if specimen != \"\" and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else \"\" for x in SpecRecs]:\n SpecRec['specimen'] = specimen\n SpecRec['sample'] = sample\n SpecRec[\"citations\"] = \"This study\"\n SpecRec[\"analysts\"] = user\n SpecRec['volume'] = volume\n SpecRecs.append(SpecRec)\n if sample != \"\" and sample not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['sample'] = sample\n SampRec['site'] = site\n SampRec[\"citations\"] = \"This study\"\n SampRec[\"analysts\"] = user\n SampRec['azimuth'] = row['azimuth']\n SampRec['dip'] = row['dip']\n SampRec['bed_dip_direction'] = row['bed_dip_direction']\n SampRec['bed_dip'] = row['bed_dip']\n SampRec['method_codes'] = meth_code\n SampRecs.append(SampRec)\n if site != \"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec[\"citations\"] = \"This study\"\n SiteRec[\"analysts\"] = user\n SiteRec['lat'] = lat\n SiteRec['lon'] = lon\n SiteRecs.append(SiteRec)\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec[\"citations\"] = \"This study\"\n LocRec[\"analysts\"] = user\n LocRec['lat_n'] = lat\n LocRec['lon_e'] = lon\n LocRec['lat_s'] = lat\n LocRec['lon_w'] = lon\n LocRecs.append(LocRec)\n MeasRec[\"citations\"] = \"This study\"\n MeasRec[\"analysts\"] = user\n MeasRec[\"specimen\"] = specimen\n MeasRec['software_packages'] = version_num\n MeasRec[\"treat_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"quality\"] = 'g'\n MeasRec[\"standard\"] = 'u'\n MeasRec[\"treat_step_num\"] = 0\n 
MeasRec[\"treat_ac_field\"] = '0'\n if row['step'] == 'NRM':\n meas_type = \"LT-NO\"\n elif 'step_unit' in row and row['step_unit'] == 'C':\n meas_type = \"LT-T-Z\"\n treat = float(row['step'])\n MeasRec[\"treat_temp\"] = '%8.3e' % (treat+273.) # temp in kelvin\n elif row['step'][0:2] == 'AD':\n meas_type = \"LT-AF-Z\"\n treat = float(row['step'][2:])\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n treat*1e-3) # convert from mT to tesla\n elif row['step'][0] == 'A':\n meas_type = \"LT-AF-Z\"\n treat = float(row['step'][1:])\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n treat*1e-3) # convert from mT to tesla\n elif row['step'][0] == 'TD':\n meas_type = \"LT-T-Z\"\n treat = float(row['step'][2:])\n MeasRec[\"treat_temp\"] = '%8.3e' % (treat+273.) # temp in kelvin\n elif row['step'][0] == 'T':\n meas_type = \"LT-T-Z\"\n treat = float(row['step'][1:])\n MeasRec[\"treat_temp\"] = '%8.3e' % (treat+273.) # temp in kelvin\n else: # need to add IRM, and ARM options\n print(\"measurement type unknown\", row['step'])\n return False, \"measurement type unknown\"\n MeasRec[\"magn_moment\"] = str(row['magn_moment'])\n MeasRec[\"magn_volume\"] = str(row['magn_volume'])\n MeasRec[\"dir_dec\"] = str(row['dir_dec'])\n MeasRec[\"dir_inc\"] = str(row['dir_inc'])\n MeasRec['method_codes'] = meas_type\n MagRecs.append(MeasRec)\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MagRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.tables['specimens'].write_magic_file(custom_name=spec_file)\n con.tables['samples'].write_magic_file(custom_name=samp_file)\n con.tables['sites'].write_magic_file(custom_name=site_file)\n 
con.tables['locations'].write_magic_file(custom_name=loc_file)\n con.tables['measurements'].write_magic_file(custom_name=meas_file)\n\n try:\n os.remove(tmp_file)\n except (OSError, IOError) as e:\n print(\"couldn't remove temperary fixed JR6 file %s\" % tmp_file)\n\n return True, meas_file"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef jr6_txt(mag_file, dir_path=\".\", input_dir_path=\"\",\n meas_file=\"measurements.txt\", spec_file=\"specimens.txt\",\n samp_file=\"samples.txt\", site_file=\"sites.txt\", loc_file=\"locations.txt\",\n user=\"\", specnum=1, samp_con='1', location='unknown', lat='', lon='',\n noave=False, volume=12, timezone=\"UTC\", meth_code=\"LP-NO\"):\n\n \"\"\"\n Converts JR6 .txt format files to MagIC measurements format files.\n\n Parameters\n ----------\n mag_file : str\n input file name\n dir_path : str\n working directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n meas_file : str\n output measurement file name, default \"measurements.txt\"\n spec_file : str\n output specimen file name, default \"specimens.txt\"\n samp_file: str\n output sample file name, default \"samples.txt\"\n site_file : str\n output site file name, default \"sites.txt\"\n loc_file : str\n output location file name, default \"locations.txt\"\n user : str\n user name, default \"\"\n specnum : int\n number of characters to designate a specimen, default 0\n samp_con : str\n sample/site naming convention, default '1', see info below\n location : str\n location name, default \"unknown\"\n lat : float\n latitude, default \"\"\n lon : float\n longitude, default \"\"\n noave : bool\n do not average duplicate measurements, default False (so by default, DO average)\n volume : float\n volume in ccs, default 12\n timezone : timezone of date/time string in comment string, default UTC\n meth_code : str\n default \"LP-NO\"\n\n Returns\n ---------\n Tuple : (True or False 
indicating if conversion was sucessful, meas_file name written)\n\n \"\"\"\n\n version_num = pmag.get_version()\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n mag_file = pmag.resolve_file_name(mag_file, input_dir_path)\n input_dir_path = os.path.split(mag_file)[0]\n specnum = - int(specnum)\n samp_con = str(samp_con)\n volume = float(volume) * 1e-6\n\n # format variables\n mag_file = pmag.resolve_file_name(mag_file, input_dir_path)\n if samp_con.startswith(\"4\"):\n if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"4\"\n elif samp_con.startswith(\"7\"):\n if \"-\" not in samp_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"naming convention option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"7\"\n else:\n Z = 1\n\n # create data holders\n MeasRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n\n data = pmag.open_file(mag_file)\n # remove garbage/blank lines\n data = [i for i in data if len(i) >= 5]\n if not len(data):\n print('No data')\n return\n\n n = 0\n end = False\n # loop through records\n while not end:\n first_line = data[n].split()\n sampleName = first_line[0]\n demagLevel = first_line[2]\n date = first_line[3] + \":0:0:0\"\n n += 2\n third_line = data[n].split()\n if not third_line[0].startswith('SPEC.ANGLES'):\n print('third line of a block should start with SPEC.ANGLES')\n print(third_line)\n return\n specimenAngleDec = third_line[1]\n specimenAngleInc = third_line[2]\n n += 4\n while not data[n].startswith('MEAN'):\n n += 1\n mean_line = data[n]\n Mx = mean_line[1]\n My = mean_line[2]\n Mz = mean_line[3]\n n += 1\n precision_line = data[n].split()\n if not precision_line[0].startswith('Modulus'):\n print('precision line 
should start with \"Modulus\"')\n return\n splitExp = precision_line[2].split('A')\n intensityVolStr = precision_line[1] + splitExp[0]\n intensityVol = float(intensityVolStr)\n # check and see if Prec is too big and messes with the parcing.\n precisionStr = ''\n if len(precision_line) == 6: # normal line\n precisionStr = precision_line[5][0:-1]\n else:\n precisionStr = precision_line[4][0:-1]\n\n precisionPer = float(precisionStr)\n precision = intensityVol * precisionPer/100\n\n while not data[n].startswith('SPEC.'):\n n += 1\n specimen_line = data[n].split()\n specimenDec = specimen_line[2]\n specimenInc = specimen_line[3]\n n += 1\n geographic_line = data[n]\n if not geographic_line.startswith('GEOGR'):\n geographic_dec = ''\n geographic_inc = ''\n else:\n geographic_line = geographic_line.split()\n geographicDec = geographic_line[1]\n geographicInc = geographic_line[2]\n # Add data to various MagIC data tables.\n specimen = sampleName\n if specnum != 0:\n sample = specimen[:specnum]\n else:\n sample = specimen\n site = pmag.parse_site(sample, samp_con, Z)\n\n MeasRec, SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}, {}\n\n if specimen != \"\" and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else \"\" for x in SpecRecs]:\n SpecRec['specimen'] = specimen\n SpecRec['sample'] = sample\n SpecRec[\"citations\"] = \"This study\"\n SpecRec[\"analysts\"] = user\n SpecRec['volume'] = volume\n SpecRecs.append(SpecRec)\n if sample != \"\" and sample not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['sample'] = sample\n SampRec['site'] = site\n SampRec[\"citations\"] = \"This study\"\n SampRec[\"analysts\"] = user\n SampRec['azimuth'] = specimenAngleDec\n # convert to magic orientation\n sample_dip = str(float(specimenAngleInc)-90.0)\n SampRec['dip'] = sample_dip\n SampRec['method_codes'] = meth_code\n SampRecs.append(SampRec)\n if site != \"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x 
in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec[\"citations\"] = \"This study\"\n SiteRec[\"analysts\"] = user\n SiteRec['lat'] = lat\n SiteRec['lon'] = lon\n SiteRecs.append(SiteRec)\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec[\"citations\"] = \"This study\"\n LocRec[\"analysts\"] = user\n LocRec['lat_n'] = lat\n LocRec['lon_e'] = lon\n LocRec['lat_s'] = lat\n LocRec['lon_w'] = lon\n LocRecs.append(LocRec)\n\n local = pytz.timezone(timezone)\n naive = datetime.datetime.strptime(date, \"%m-%d-%Y:%H:%M:%S\")\n local_dt = local.localize(naive, is_dst=None)\n utc_dt = local_dt.astimezone(pytz.utc)\n timestamp = utc_dt.strftime(\"%Y-%m-%dT%H:%M:%S\")+\"Z\"\n MeasRec[\"specimen\"] = specimen\n MeasRec[\"timestamp\"] = timestamp\n MeasRec['description'] = ''\n MeasRec[\"citations\"] = \"This study\"\n MeasRec['software_packages'] = version_num\n MeasRec[\"treat_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"quality\"] = 'g'\n MeasRec[\"standard\"] = 'u'\n MeasRec[\"treat_step_num\"] = 0\n MeasRec[\"treat_ac_field\"] = '0'\n if demagLevel == 'NRM':\n meas_type = \"LT-NO\"\n elif demagLevel[0] == 'A':\n if demagLevel[:2] == 'AD':\n treat = float(demagLevel[2:])\n else:\n treat = float(demagLevel[1:])\n meas_type = \"LT-AF-Z\"\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n treat*1e-3) # convert from mT to tesla\n elif demagLevel[0] == 'T':\n meas_type = \"LT-T-Z\"\n treat = float(demagLevel[1:])\n MeasRec[\"treat_temp\"] = '%8.3e' % (treat+273.) 
# temp in kelvin\n else:\n print(\"measurement type unknown\", demagLevel)\n return False, \"measurement type unknown\"\n\n MeasRec[\"magn_moment\"] = str(intensityVol*volume) # Am^2\n MeasRec[\"magn_volume\"] = intensityVolStr # A/m\n MeasRec[\"dir_dec\"] = specimenDec\n MeasRec[\"dir_inc\"] = specimenInc\n MeasRec['method_codes'] = meas_type\n MeasRecs.append(MeasRec)\n\n # ignore all the rest of the special characters. Some data files not consistently formatted.\n n += 1\n while ((len(data[n]) <= 5 and data[n] != '') or data[n].startswith('----')):\n n += 1\n if n >= len(data):\n break\n if n >= len(data):\n # we're done!\n end = True\n\n # end of data while loop\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.tables['specimens'].write_magic_file(custom_name=spec_file,dir_path=dir_path)\n con.tables['samples'].write_magic_file(custom_name=samp_file,dir_path=dir_path)\n con.tables['sites'].write_magic_file(custom_name=site_file,dir_path=dir_path)\n con.tables['locations'].write_magic_file(custom_name=loc_file,dir_path=dir_path)\n con.tables['measurements'].write_magic_file(custom_name=meas_file,dir_path=dir_path)\n\n return True, meas_file", "response": "This function converts JR6.txt format files to MagIC measurements format files."}
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef k15(k15file, dir_path='.', input_dir_path='',\n meas_file='measurements.txt', aniso_outfile='specimens.txt',\n samp_file=\"samples.txt\", result_file =\"rmag_anisotropy.txt\",\n specnum=0, sample_naming_con='1', location=\"unknown\",\n data_model_num=3):\n \"\"\"\n converts .k15 format data to MagIC format.\n assumes Jelinek Kappabridge measurement scheme.\n\n Parameters\n ----------\n k15file : str\n input file name\n dir_path : str\n output file directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n meas_file : str\n output measurement file name, default \"measurements.txt\"\n aniso_outfile : str\n output specimen file name, default \"specimens.txt\"\n samp_file: str\n output sample file name, default \"samples.txt\"\n aniso_results_file : str\n output result file name, default \"rmag_results.txt\", data model 2 only\n specnum : int\n number of characters to designate a specimen, default 0\n samp_con : str\n sample/site naming convention, default '1', see info below\n location : str\n location name, default \"unknown\"\n data_model_num : int\n MagIC data model [2, 3], default 3\n\n Returns\n --------\n type - Tuple : (True or False indicating if conversion was successful, samp_file name written)\n\n\n Info\n --------\n Infile format:\n name [az,pl,strike,dip], followed by\n 3 rows of 5 measurements for each specimen\n\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001.
[default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [4-Z] XXXXYYY: YYY is sample designation with Z characters from site XXX\n [5] site name same as sample\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY\n NB: all others you will have to customize yourself\n or e-mail ltauxe@ucsd.edu for help.\n\n \"\"\"\n #\n # initialize some variables\n #\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n version_num = pmag.get_version()\n syn = 0\n itilt, igeo, linecnt, key = 0, 0, 0, \"\"\n first_save = 1\n k15 = []\n citation = 'This study'\n\n data_model_num = int(float(data_model_num))\n\n # set column names for MagIC 3\n spec_name_col = 'specimen' #\n samp_name_col = 'sample' #\n site_name_col = 'site' #\n loc_name_col = 'location' #\n citation_col = 'citations'\n method_col = 'method_codes'\n site_description_col = 'description'\n expedition_col = 'expedition_name'\n instrument_col = 'instrument_codes'\n experiment_col = 'experiments'\n analyst_col = 'analysts'\n quality_col = 'quality'\n aniso_quality_col = 'result_quality'\n meas_standard_col = 'standard'\n meas_description_col = 'description'\n aniso_type_col = 'aniso_type'\n aniso_unit_col = 'aniso_s_unit'\n aniso_n_col = 'aniso_s_n_measurements'\n azimuth_col = 'azimuth'\n spec_volume_col = 'volume'\n samp_dip_col = 'dip'\n bed_dip_col = 'bed_dip'\n bed_dip_direction_col = 'bed_dip_direction'\n chi_vol_col = 'susc_chi_volume'\n aniso_sigma_col = 'aniso_s_sigma'\n aniso_unit_col = 'aniso_s_unit'\n aniso_tilt_corr_col = 'aniso_tilt_correction'\n meas_table_name = 'measurements'\n spec_table_name = 'specimens'\n samp_table_name = 'samples'\n site_table_name = 'sites'\n meas_name_col = 'measurement'\n meas_time_col = 'timestamp'\n meas_ac_col = 
'meas_field_ac'\n meas_temp_col = \"meas_temp\"\n #\n software_col = 'software_packages'\n description_col = 'description' # sites.description\n treat_temp_col = 'treat_temp'\n meas_orient_phi_col = \"meas_orient_phi\"\n meas_orient_theta_col = \"meas_orient_theta\"\n aniso_mean_col = 'aniso_s_mean'\n result_description_col = \"description\"\n\n # set defaults correctly for MagIC 2\n if data_model_num == 2:\n if meas_file == 'measurements.txt':\n meas_file = 'magic_measurements.txt'\n if samp_file == 'samples.txt':\n samp_file = 'er_samples.txt'\n if aniso_outfile == 'specimens.txt':\n aniso_outfile = 'rmag_anisotropy.txt'\n\n # set column names for MagIC 2\n if data_model_num == 2:\n spec_name_col = 'er_specimen_name'\n samp_name_col = 'er_sample_name'\n site_name_col = 'er_site_name'\n loc_name_col = 'er_location_name'\n citation_col = 'er_citation_names'\n method_col = 'magic_method_codes'\n site_description_col = 'site_description'\n expedition_col = 'er_expedition_name'\n instrument_col = 'magic_instrument_codes'\n experiment_col = 'magic_experiment_names'\n analyst_col = 'er_analyst_mail_names'\n quality_col = 'measurement_flag'\n aniso_quality_col = 'anisotropy_flag'\n meas_standard_col = 'measurement_standard'\n meas_description_col = 'measurement_description'\n aniso_type_col = 'anisotropy_type'\n aniso_unit_col = 'anisotropy_unit'\n aniso_n_col = 'anisotropy_n'\n azimuth_col = 'sample_azimuth'\n spec_volume_col = 'specimen_volume'\n samp_dip_col = 'sample_dip'\n bed_dip_col = 'sample_bed_dip'\n bed_dip_direction_col = 'sample_bed_dip_direction'\n chi_vol_col = 'measurement_chi_volume'\n aniso_sigma_col = 'anisotropy_sigma'\n aniso_unit_col = 'anisotropy_unit'\n aniso_tilt_corr_col = 'anisotropy_tilt_correction'\n meas_table_name = 'magic_measurements'\n spec_table_name = 'er_specimens'\n samp_table_name = 'er_samples'\n site_table_name = 'er_sites'\n meas_name_col = 'measurement_number'\n meas_time_col = 'measurement_date'\n meas_ac_col = 
'measurement_lab_field_ac'\n meas_temp_col = \"measurement_temp\"\n #\n software_col = 'magic_software_packages'\n description_col = 'rmag_result_name'\n treat_temp_col = 'treatment_temp'\n meas_temp_col = \"measurement_temp\"\n meas_orient_phi_col = \"measurement_orient_phi\"\n meas_orient_theta_col = \"measurement_orient_theta\"\n aniso_mean_col = 'anisotropy_mean'\n result_description_col = \"result_description\"\n\n\n# pick off stuff from command line\n Z = \"\"\n if \"4\" in sample_naming_con:\n if \"-\" not in sample_naming_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = sample_naming_con.split(\"-\")[1]\n sample_naming_con = \"4\"\n if sample_naming_con == '6':\n Samps, filetype = pmag.magic_read(\n os.path.join(input_dir_path, samp_table_name + \".txt\"))\n samp_file = pmag.resolve_file_name(samp_file, output_dir_path)\n meas_file = pmag.resolve_file_name(meas_file, output_dir_path)\n aniso_outfile = pmag.resolve_file_name(aniso_outfile, output_dir_path)\n result_file = pmag.resolve_file_name(result_file, output_dir_path)\n k15file = pmag.resolve_file_name(k15file, input_dir_path)\n if not os.path.exists(k15file):\n print(k15file)\n return False, \"You must provide a valid k15 format file\"\n try:\n SampRecs, filetype = pmag.magic_read(\n samp_file) # append new records to existing\n samplist = []\n for samp in SampRecs:\n if samp[samp_name_col] not in samplist:\n samplist.append(samp[samp_name_col])\n except IOError:\n SampRecs = []\n # measurement directions for Jelinek 1977 protocol:\n Decs = [315, 225, 180, 135, 45, 90, 270, 270, 270, 90, 180, 180, 0, 0, 0]\n Incs = [0, 0, 0, 0, 0, -45, -45, 0, 45, 45, 45, -45, -90, -45, 45]\n # some defaults to read in .k15 file format\n # list of measurements and default number of characters for specimen ID\n# some magic default definitions\n #\n # read in data\n with open(k15file, 'r') as finput:\n lines = 
finput.readlines()\n MeasRecs, SpecRecs, AnisRecs, ResRecs = [], [], [], []\n # read in data\n MeasRec, SpecRec, SampRec, SiteRec, AnisRec, ResRec = {}, {}, {}, {}, {}, {}\n for line in lines:\n linecnt += 1\n rec = line.split()\n if linecnt == 1:\n MeasRec[method_col] = \"\"\n SpecRec[method_col] = \"\"\n SampRec[method_col] = \"\"\n AnisRec[method_col] = \"\"\n SiteRec[method_col] = \"\"\n ResRec[method_col] = \"\"\n MeasRec[software_col] = version_num\n SpecRec[software_col] = version_num\n SampRec[software_col] = version_num\n AnisRec[software_col] = version_num\n SiteRec[software_col] = version_num\n ResRec[software_col] = version_num\n MeasRec[method_col] = \"LP-X\"\n MeasRec[quality_col] = \"g\"\n MeasRec[meas_standard_col] = \"u\"\n MeasRec[citation_col] = \"This study\"\n SpecRec[citation_col] = \"This study\"\n SampRec[citation_col] = \"This study\"\n AnisRec[citation_col] = \"This study\"\n ResRec[citation_col] = \"This study\"\n MeasRec[spec_name_col] = rec[0]\n MeasRec[experiment_col] = rec[0] + \":LP-AN-MS\"\n AnisRec[experiment_col] = rec[0] + \":AMS\"\n ResRec[experiment_col] = rec[0] + \":AMS\"\n SpecRec[spec_name_col] = rec[0]\n AnisRec[spec_name_col] = rec[0]\n SampRec[spec_name_col] = rec[0]\n if data_model_num == 2:\n ResRec[description_col] = rec[0]\n if data_model_num == 3:\n ResRec[spec_name_col] = rec[0]\n specnum = int(specnum)\n if specnum != 0:\n MeasRec[samp_name_col] = rec[0][:-specnum]\n if specnum == 0:\n MeasRec[samp_name_col] = rec[0]\n SampRec[samp_name_col] = MeasRec[samp_name_col]\n SpecRec[samp_name_col] = MeasRec[samp_name_col]\n AnisRec[samp_name_col] = MeasRec[samp_name_col]\n if data_model_num == 3:\n ResRec[samp_name_col] = MeasRec[samp_name_col]\n else:\n ResRec[samp_name_col + \"s\"] = MeasRec[samp_name_col]\n if sample_naming_con == \"6\":\n for samp in Samps:\n if samp[samp_name_col] == AnisRec[samp_name_col]:\n sitename = samp[site_name_col]\n location = samp[loc_name_col]\n elif sample_naming_con != \"\":\n sitename 
= pmag.parse_site(\n AnisRec[samp_name_col], sample_naming_con, Z)\n MeasRec[site_name_col] = sitename\n MeasRec[loc_name_col] = location\n SampRec[site_name_col] = MeasRec[site_name_col]\n SpecRec[site_name_col] = MeasRec[site_name_col]\n AnisRec[site_name_col] = MeasRec[site_name_col]\n ResRec[loc_name_col] = location\n ResRec[site_name_col] = MeasRec[site_name_col]\n if data_model_num == 2:\n ResRec[site_name_col + \"s\"] = MeasRec[site_name_col]\n SampRec[loc_name_col] = MeasRec[loc_name_col]\n SpecRec[loc_name_col] = MeasRec[loc_name_col]\n AnisRec[loc_name_col] = MeasRec[loc_name_col]\n if data_model_num == 2 :\n ResRec[loc_name_col + \"s\"] = MeasRec[loc_name_col]\n if len(rec) >= 3:\n SampRec[azimuth_col], SampRec[samp_dip_col] = rec[1], rec[2]\n az, pl, igeo = float(rec[1]), float(rec[2]), 1\n if len(rec) == 5:\n SampRec[bed_dip_direction_col], SampRec[bed_dip_col] = '%7.1f' % (\n 90. + float(rec[3])), (rec[4])\n bed_az, bed_dip, itilt, igeo = 90. + \\\n float(rec[3]), float(rec[4]), 1, 1\n else:\n for i in range(5):\n # assume measurements in micro SI\n k15.append(1e-6 * float(rec[i]))\n if linecnt == 4:\n sbar, sigma, bulk = pmag.dok15_s(k15)\n hpars = pmag.dohext(9, sigma, sbar)\n MeasRec[treat_temp_col] = '%8.3e' % (\n 273) # room temp in kelvin\n MeasRec[meas_temp_col] = '%8.3e' % (\n 273) # room temp in kelvin\n for i in range(15):\n NewMeas = copy.deepcopy(MeasRec)\n NewMeas[meas_orient_phi_col] = '%7.1f' % (Decs[i])\n NewMeas[meas_orient_theta_col] = '%7.1f' % (Incs[i])\n NewMeas[chi_vol_col] = '%12.10f' % (k15[i])\n NewMeas[meas_name_col] = '%i' % (i + 1)\n if data_model_num == 2:\n NewMeas[\"magic_experiment_name\"] = rec[0] + \":LP-AN-MS\"\n else:\n NewMeas[\"experiment\"] = rec[0] + \":LP-AN-MS\"\n MeasRecs.append(NewMeas)\n if SampRec[samp_name_col] not in samplist:\n SampRecs.append(SampRec)\n samplist.append(SampRec[samp_name_col])\n SpecRecs.append(SpecRec)\n AnisRec[aniso_type_col] = \"AMS\"\n ResRec[aniso_type_col] = \"AMS\"\n s1_val = 
'{:12.10f}'.format(sbar[0])\n s2_val = '{:12.10f}'.format(sbar[1])\n s3_val = '{:12.10f}'.format(sbar[2])\n s4_val = '{:12.10f}'.format(sbar[3])\n s5_val = '{:12.10f}'.format(sbar[4])\n s6_val = '{:12.10f}'.format(sbar[5])\n # MAgIC 2\n if data_model_num == 2:\n AnisRec[\"anisotropy_s1\"] = s1_val\n AnisRec[\"anisotropy_s2\"] = s2_val\n AnisRec[\"anisotropy_s3\"] = s3_val\n AnisRec[\"anisotropy_s4\"] = s4_val\n AnisRec[\"anisotropy_s5\"] = s5_val\n AnisRec[\"anisotropy_s6\"] = s6_val\n # MagIC 3\n else:\n vals = [s1_val, s2_val, s3_val, s4_val, s5_val, s6_val]\n AnisRec['aniso_s'] = \":\".join([str(v).strip() for v in vals])\n AnisRec[aniso_mean_col] = '%12.10f' % (bulk)\n AnisRec[aniso_sigma_col] = '%12.10f' % (sigma)\n AnisRec[aniso_mean_col] = '{:12.10f}'.format(bulk)\n AnisRec[aniso_sigma_col] = '{:12.10f}'.format(sigma)\n AnisRec[aniso_unit_col] = 'SI'\n AnisRec[aniso_n_col] = '15'\n AnisRec[aniso_tilt_corr_col] = '-1'\n AnisRec[method_col] = 'LP-X:AE-H:LP-AN-MS'\n AnisRecs.append(AnisRec)\n ResRec[method_col] = 'LP-X:AE-H:LP-AN-MS'\n ResRec[aniso_tilt_corr_col] = '-1'\n\n if data_model_num == 3:\n aniso_v1 = ':'.join([str(i) for i in (hpars['t1'], hpars['v1_dec'], hpars['v1_inc'], hpars['v2_dec'], hpars['v2_inc'], hpars['e12'], hpars['v3_dec'], hpars['v3_inc'], hpars['e13'])])\n aniso_v2 = ':'.join([str(i) for i in (hpars['t2'], hpars['v2_dec'], hpars['v2_inc'], hpars['v1_dec'], hpars['v1_inc'], hpars['e12'], hpars['v3_dec'], hpars['v3_inc'], hpars['e23'])])\n aniso_v3 = ':'.join([str(i) for i in (hpars['t3'], hpars['v3_dec'], hpars['v3_inc'], hpars['v1_dec'], hpars['v1_inc'], hpars['e13'], hpars['v2_dec'], hpars['v2_inc'], hpars['e23'])])\n ResRec['aniso_v1'] = aniso_v1\n ResRec['aniso_v2'] = aniso_v2\n ResRec['aniso_v3'] = aniso_v3\n else: # data model 2\n ResRec[\"anisotropy_t1\"] = '%12.10f' % (hpars['t1'])\n ResRec[\"anisotropy_t2\"] = '%12.10f' % (hpars['t2'])\n ResRec[\"anisotropy_t3\"] = '%12.10f' % (hpars['t3'])\n ResRec[\"anisotropy_fest\"] = 
'%12.10f' % (hpars['F'])\n ResRec[\"anisotropy_ftest12\"] = '%12.10f' % (hpars['F12'])\n ResRec[\"anisotropy_ftest23\"] = '%12.10f' % (hpars['F23'])\n ResRec[\"anisotropy_v1_dec\"] = '%7.1f' % (hpars['v1_dec'])\n ResRec[\"anisotropy_v2_dec\"] = '%7.1f' % (hpars['v2_dec'])\n ResRec[\"anisotropy_v3_dec\"] = '%7.1f' % (hpars['v3_dec'])\n ResRec[\"anisotropy_v1_inc\"] = '%7.1f' % (hpars['v1_inc'])\n ResRec[\"anisotropy_v2_inc\"] = '%7.1f' % (hpars['v2_inc'])\n ResRec[\"anisotropy_v3_inc\"] = '%7.1f' % (hpars['v3_inc'])\n ResRec['anisotropy_v1_eta_dec'] = ResRec['anisotropy_v2_dec']\n ResRec['anisotropy_v1_eta_inc'] = ResRec['anisotropy_v2_inc']\n ResRec['anisotropy_v1_zeta_dec'] = ResRec['anisotropy_v3_dec']\n ResRec['anisotropy_v1_zeta_inc'] = ResRec['anisotropy_v3_inc']\n ResRec['anisotropy_v2_eta_dec'] = ResRec['anisotropy_v1_dec']\n ResRec['anisotropy_v2_eta_inc'] = ResRec['anisotropy_v1_inc']\n ResRec['anisotropy_v2_zeta_dec'] = ResRec['anisotropy_v3_dec']\n ResRec['anisotropy_v2_zeta_inc'] = ResRec['anisotropy_v3_inc']\n ResRec['anisotropy_v3_eta_dec'] = ResRec['anisotropy_v1_dec']\n ResRec['anisotropy_v3_eta_inc'] = ResRec['anisotropy_v1_inc']\n ResRec['anisotropy_v3_zeta_dec'] = ResRec['anisotropy_v2_dec']\n ResRec['anisotropy_v3_zeta_inc'] = ResRec['anisotropy_v2_inc']\n ResRec[\"anisotropy_v1_eta_semi_angle\"] = '%7.1f' % (\n hpars['e12'])\n ResRec[\"anisotropy_v1_zeta_semi_angle\"] = '%7.1f' % (\n hpars['e13'])\n ResRec[\"anisotropy_v2_eta_semi_angle\"] = '%7.1f' % (\n hpars['e12'])\n ResRec[\"anisotropy_v2_zeta_semi_angle\"] = '%7.1f' % (\n hpars['e23'])\n ResRec[\"anisotropy_v3_eta_semi_angle\"] = '%7.1f' % (\n hpars['e13'])\n ResRec[\"anisotropy_v3_zeta_semi_angle\"] = '%7.1f' % (\n hpars['e23'])\n ResRec[result_description_col] = 'Critical F: ' + hpars[\"F_crit\"] + ';Critical F12/F13: ' + hpars[\"F12_crit\"]\n #\n ResRecs.append(ResRec)\n if igeo == 1:\n sbarg = pmag.dosgeo(sbar, az, pl)\n hparsg = pmag.dohext(9, sigma, sbarg)\n AnisRecG = 
copy.copy(AnisRec)\n ResRecG = copy.copy(ResRec)\n if data_model_num == 3:\n AnisRecG[\"aniso_s\"] = \":\".join('{:12.10f}'.format(i) for i in sbarg)\n if data_model_num == 2:\n AnisRecG[\"anisotropy_s1\"] = '%12.10f' % (sbarg[0])\n AnisRecG[\"anisotropy_s2\"] = '%12.10f' % (sbarg[1])\n AnisRecG[\"anisotropy_s3\"] = '%12.10f' % (sbarg[2])\n AnisRecG[\"anisotropy_s4\"] = '%12.10f' % (sbarg[3])\n AnisRecG[\"anisotropy_s5\"] = '%12.10f' % (sbarg[4])\n AnisRecG[\"anisotropy_s6\"] = '%12.10f' % (sbarg[5])\n\n AnisRecG[aniso_tilt_corr_col] = '0'\n ResRecG[aniso_tilt_corr_col] = '0'\n\n if data_model_num == 3:\n aniso_v1 = ':'.join([str(i) for i in (hparsg['t1'], hparsg['v1_dec'], hparsg['v1_inc'], hparsg['v2_dec'], hparsg['v2_inc'], hparsg['e12'], hparsg['v3_dec'], hparsg['v3_inc'], hparsg['e13'])])\n aniso_v2 = ':'.join([str(i) for i in (hparsg['t2'], hparsg['v2_dec'], hparsg['v2_inc'], hparsg['v1_dec'], hparsg['v1_inc'], hparsg['e12'], hparsg['v3_dec'], hparsg['v3_inc'], hparsg['e23'])])\n aniso_v3 = ':'.join([str(i) for i in (hparsg['t3'], hparsg['v3_dec'], hparsg['v3_inc'], hparsg['v1_dec'], hparsg['v1_inc'], hparsg['e13'], hparsg['v2_dec'], hparsg['v2_inc'], hparsg['e23'])])\n ResRecG['aniso_v1'] = aniso_v1\n ResRecG['aniso_v2'] = aniso_v2\n ResRecG['aniso_v3'] = aniso_v3\n #\n if data_model_num == 2:\n ResRecG[\"anisotropy_v1_dec\"] = '%7.1f' % (hparsg['v1_dec'])\n ResRecG[\"anisotropy_v2_dec\"] = '%7.1f' % (hparsg['v2_dec'])\n ResRecG[\"anisotropy_v3_dec\"] = '%7.1f' % (hparsg['v3_dec'])\n ResRecG[\"anisotropy_v1_inc\"] = '%7.1f' % (hparsg['v1_inc'])\n ResRecG[\"anisotropy_v2_inc\"] = '%7.1f' % (hparsg['v2_inc'])\n ResRecG[\"anisotropy_v3_inc\"] = '%7.1f' % (hparsg['v3_inc'])\n ResRecG['anisotropy_v1_eta_dec'] = ResRecG['anisotropy_v2_dec']\n ResRecG['anisotropy_v1_eta_inc'] = ResRecG['anisotropy_v2_inc']\n ResRecG['anisotropy_v1_zeta_dec'] = ResRecG['anisotropy_v3_dec']\n ResRecG['anisotropy_v1_zeta_inc'] = ResRecG['anisotropy_v3_inc']\n 
ResRecG['anisotropy_v2_eta_dec'] = ResRecG['anisotropy_v1_dec']\n ResRecG['anisotropy_v2_eta_inc'] = ResRecG['anisotropy_v1_inc']\n ResRecG['anisotropy_v2_zeta_dec'] = ResRecG['anisotropy_v3_dec']\n ResRecG['anisotropy_v2_zeta_inc'] = ResRecG['anisotropy_v3_inc']\n ResRecG['anisotropy_v3_eta_dec'] = ResRecG['anisotropy_v1_dec']\n ResRecG['anisotropy_v3_eta_inc'] = ResRecG['anisotropy_v1_inc']\n ResRecG['anisotropy_v3_zeta_dec'] = ResRecG['anisotropy_v2_dec']\n ResRecG['anisotropy_v3_zeta_inc'] = ResRecG['anisotropy_v2_inc']\n #\n ResRecG[result_description_col] = 'Critical F: ' + \\\n hpars[\"F_crit\"] + ';Critical F12/F13: ' + \\\n hpars[\"F12_crit\"]\n ResRecs.append(ResRecG)\n AnisRecs.append(AnisRecG)\n if itilt == 1:\n sbart = pmag.dostilt(sbarg, bed_az, bed_dip)\n hparst = pmag.dohext(9, sigma, sbart)\n AnisRecT = copy.copy(AnisRec)\n ResRecT = copy.copy(ResRec)\n if data_model_num == 3:\n aniso_v1 = ':'.join([str(i) for i in (hparst['t1'], hparst['v1_dec'], hparst['v1_inc'], hparst['v2_dec'], hparst['v2_inc'], hparst['e12'], hparst['v3_dec'], hparst['v3_inc'], hparst['e13'])])\n aniso_v2 = ':'.join([str(i) for i in (hparst['t2'], hparst['v2_dec'], hparst['v2_inc'], hparst['v1_dec'], hparst['v1_inc'], hparst['e12'], hparst['v3_dec'], hparst['v3_inc'], hparst['e23'])])\n aniso_v3 = ':'.join([str(i) for i in (hparst['t3'], hparst['v3_dec'], hparst['v3_inc'], hparst['v1_dec'], hparst['v1_inc'], hparst['e13'], hparst['v2_dec'], hparst['v2_inc'], hparst['e23'])])\n ResRecT['aniso_v1'] = aniso_v1\n ResRecT['aniso_v2'] = aniso_v2\n ResRecT['aniso_v3'] = aniso_v3\n #\n if data_model_num == 2:\n AnisRecT[\"anisotropy_s1\"] = '%12.10f' % (sbart[0])\n AnisRecT[\"anisotropy_s2\"] = '%12.10f' % (sbart[1])\n AnisRecT[\"anisotropy_s3\"] = '%12.10f' % (sbart[2])\n AnisRecT[\"anisotropy_s4\"] = '%12.10f' % (sbart[3])\n AnisRecT[\"anisotropy_s5\"] = '%12.10f' % (sbart[4])\n AnisRecT[\"anisotropy_s6\"] = '%12.10f' % (sbart[5])\n AnisRecT[\"anisotropy_tilt_correction\"] = 
'100'\n ResRecT[\"anisotropy_v1_dec\"] = '%7.1f' % (hparst['v1_dec'])\n ResRecT[\"anisotropy_v2_dec\"] = '%7.1f' % (hparst['v2_dec'])\n ResRecT[\"anisotropy_v3_dec\"] = '%7.1f' % (hparst['v3_dec'])\n ResRecT[\"anisotropy_v1_inc\"] = '%7.1f' % (hparst['v1_inc'])\n ResRecT[\"anisotropy_v2_inc\"] = '%7.1f' % (hparst['v2_inc'])\n ResRecT[\"anisotropy_v3_inc\"] = '%7.1f' % (hparst['v3_inc'])\n ResRecT['anisotropy_v1_eta_dec'] = ResRecT['anisotropy_v2_dec']\n ResRecT['anisotropy_v1_eta_inc'] = ResRecT['anisotropy_v2_inc']\n ResRecT['anisotropy_v1_zeta_dec'] = ResRecT['anisotropy_v3_dec']\n ResRecT['anisotropy_v1_zeta_inc'] = ResRecT['anisotropy_v3_inc']\n ResRecT['anisotropy_v2_eta_dec'] = ResRecT['anisotropy_v1_dec']\n ResRecT['anisotropy_v2_eta_inc'] = ResRecT['anisotropy_v1_inc']\n ResRecT['anisotropy_v2_zeta_dec'] = ResRecT['anisotropy_v3_dec']\n ResRecT['anisotropy_v2_zeta_inc'] = ResRecT['anisotropy_v3_inc']\n ResRecT['anisotropy_v3_eta_dec'] = ResRecT['anisotropy_v1_dec']\n ResRecT['anisotropy_v3_eta_inc'] = ResRecT['anisotropy_v1_inc']\n ResRecT['anisotropy_v3_zeta_dec'] = ResRecT['anisotropy_v2_dec']\n ResRecT['anisotropy_v3_zeta_inc'] = ResRecT['anisotropy_v2_inc']\n #\n ResRecT[aniso_tilt_corr_col] = '100'\n ResRecT[result_description_col] = 'Critical F: ' + \\\n hparst[\"F_crit\"] + ';Critical F12/F13: ' + \\\n hparst[\"F12_crit\"]\n ResRecs.append(ResRecT)\n AnisRecs.append(AnisRecT)\n k15, linecnt = [], 0\n MeasRec, SpecRec, SampRec, SiteRec, AnisRec = {}, {}, {}, {}, {}\n\n # samples\n pmag.magic_write(samp_file, SampRecs, samp_table_name)\n # specimens / rmag_anisotropy / rmag_results\n if data_model_num == 3:\n AnisRecs.extend(ResRecs)\n SpecRecs = AnisRecs.copy()\n SpecRecs, keys = pmag.fillkeys(SpecRecs)\n pmag.magic_write(aniso_outfile, SpecRecs, 'specimens')\n flist = [meas_file, aniso_outfile, samp_file]\n else:\n pmag.magic_write(aniso_outfile, AnisRecs, 'rmag_anisotropy') # add to specimens?\n pmag.magic_write(result_file, ResRecs, 'rmag_results') 
# added to specimens (NOT sites)\n flist = [meas_file, samp_file, aniso_outfile, result_file]\n # measurements\n pmag.magic_write(meas_file, MeasRecs, meas_table_name)\n\n print(\"Data saved to: \" + \", \".join(flist))\n return True, meas_file", "response": "This function converts .k15 format data to MagIC format."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ldeo(magfile, dir_path=\".\", input_dir_path=\"\",\n meas_file=\"measurements.txt\", spec_file=\"specimens.txt\",\n samp_file=\"samples.txt\", site_file=\"sites.txt\", loc_file=\"locations.txt\",\n specnum=0, samp_con=\"1\", location=\"unknown\", codelist=\"\",\n coil=\"\", arm_labfield=50e-6, trm_peakT=873., peakfield=0,\n labfield=0, phi=0, theta=0, mass_or_vol=\"v\", noave=0):\n \"\"\"\n converts Lamont Doherty Earth Observatory measurement files to MagIC data base model 3.0\n\n Parameters\n _________\n magfile : input measurement file\n dir_path : output directory path, default \".\"\n input_dir_path : input file directory IF different from dir_path, default \"\"\n meas_file : output file measurement file name, default \"measurements.txt\"\n spec_file : output file specimen file name, default \"specimens.txt\"\n samp_file : output file sample file name, default \"samples.txt\"\n site_file : output file site file name, default \"sites.txt\"\n loc_file : output file location file name, default \"locations.txt\"\n specnum : number of terminal characters distinguishing specimen from sample, default 0\n samp_con : sample/site naming convention, default \"1\"\n \"1\" XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001.
[default]\n \"2\" XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n \"3\" XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n \"4-Z\" XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n \"5\" site name same as sample\n \"6\" site is entered under a separate column NOT CURRENTLY SUPPORTED\n \"7-Z\" [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY\n NB: all others you will have to customize yourself\n or e-mail ltauxe@ucsd.edu for help.\n\n \"8\" synthetic - has no site name\n \"9\" ODP naming convention\n\n codelist : colon delimited string of lab protocols (e.g., codelist=\"AF\"), default \"\"\n AF: af demag\n T: thermal including thellier but not trm acquisition\n S: Shaw method\n I: IRM (acquisition)\n N: NRM only\n TRM: trm acquisition\n ANI: anisotropy experiment\n D: double AF demag\n G: triple AF demag (GRM protocol)\n coil : 1,2, or 3 units of IRM field in volts using ASC coil #1,2 or 3, default \"\"\n arm_labfield : dc field for ARM in tesla, default 50e-6\n peakfield : peak af field for ARM, default 0\n trm_peakT : peak temperature for TRM, default 873.\n labfield : lab field in tesla for TRM, default 0\n phi, theta : direction of lab field, default 0, 0\n mass_or_vol : is the parameter in the file mass 'm' or volume 'v', default \"v\"\n noave : boolean, if False, average replicates, default False\n\n Returns\n --------\n type - Tuple : (True or False indicating if conversion was successful, meas_file name written)\n\n Effects\n _______\n creates MagIC formatted tables\n \"\"\"\n # initialize some stuff\n dec = [315, 225, 180, 135, 45, 90, 270, 270, 270, 90, 180, 180, 0, 0, 0]\n inc = [0, 0, 0, 0, 0, -45, -45, 0, 45, 45, 45, -45, -90, -45, 45]\n tdec = [0, 90, 0, 180, 270, 0, 0, 90, 0]\n tinc = [0, 0, 90, 0, 0, -90, 0, 0, 90]\n demag = \"N\"\n trm = 0\n irm = 0\n\n # format/organize variables\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n 
labfield = int(labfield) * 1e-6\n phi = int(phi)\n theta = int(theta)\n specnum = - int(specnum)\n\n if magfile:\n try:\n fname = pmag.resolve_file_name(magfile, input_dir_path)\n infile = open(fname, 'r')\n except IOError:\n print(\"bad mag file name\")\n return False, \"bad mag file name\"\n else:\n print(\"mag_file field is required option\")\n return False, \"mag_file field is required option\"\n\n samp_con = str(samp_con)\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\n \"naming convention option [4] must be in form 4-Z where Z is an integer\")\n return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"4\"\n elif \"7\" in samp_con:\n if \"-\" not in samp_con:\n print(\n \"naming convention option [7] must be in form 7-Z where Z is an integer\")\n return False, \"naming convention option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"4\"\n else:\n Z = 1\n\n codes = codelist.split(':')\n if \"AF\" in codes:\n demag = 'AF'\n if not labfield:\n methcode = \"LT-AF-Z\"\n if labfield:\n methcode = \"LT-AF-I\"\n if \"T\" in codes:\n demag = \"T\"\n if not labfield:\n methcode = \"LT-T-Z\"\n if labfield:\n methcode = \"LT-T-I\"\n if \"I\" in codes:\n methcode = \"LP-IRM\"\n irmunits = \"mT\"\n if \"S\" in codes:\n demag = \"S\"\n methcode = \"LP-PI-TRM:LP-PI-ALT-AFARM\"\n trm_labfield = labfield\n # should use arm_labfield and trm_peakT as well, but these values are currently never asked for\n if \"G\" in codes:\n methcode = \"LT-AF-G\"\n if \"D\" in codes:\n methcode = \"LT-AF-D\"\n if \"TRM\" in codes:\n demag = \"T\"\n trm = 1\n\n if coil:\n methcode = \"LP-IRM\"\n irmunits = \"V\"\n if coil not in [\"1\", \"2\", \"3\"]:\n print('not a valid coil specification')\n return False, 'not a valid coil specification'\n\n if demag == \"T\" and \"ANI\" in codes:\n methcode = \"LP-AN-TRM\"\n if demag == \"AF\" and \"ANI\" 
in codes:\n methcode = \"LP-AN-ARM\"\n if labfield == 0:\n labfield = 50e-6\n if peakfield == 0:\n peakfield = .180\n MeasRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n version_num = pmag.get_version()\n # find start of data:\n DIspec = []\n Data = infile.readlines()\n infile.close()\n for k in range(len(Data)):\n rec = Data[k].split()\n if len(rec) <= 2:\n continue\n if rec[0].upper() == \"LAT:\" and len(rec) > 3:\n lat, lon = rec[1], rec[3]\n continue\n elif rec[0].upper() == \"ID\":\n continue\n MeasRec, SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}, {}\n specimen = rec[0]\n if specnum != 0:\n sample = specimen[:specnum]\n else:\n sample = specimen\n site = pmag.parse_site(sample, samp_con, Z)\n if mass_or_vol == 'v':\n volume = float(rec[12])\n if volume > 0:\n # convert to SI (assume Bartington, 10-5 SI); format AFTER dividing by volume\n susc_chi_volume = '%10.3e' % (\n float(rec[11])*1e-5 / volume)\n else:\n # convert to SI (assume Bartington, 10-5 SI)\n susc_chi_volume = '%10.3e' % (float(rec[11])*1e-5)\n else:\n mass = float(rec[12])\n if mass > 0:\n # convert to SI (assume Bartington, 10-5 SI); format AFTER dividing by mass\n susc_chi_mass = '%10.3e' % (\n float(rec[11])*1e-5 / mass)\n else:\n # convert to SI (assume Bartington, 10-5 SI)\n susc_chi_mass = '%10.3e' % (float(rec[11])*1e-5)\n # print((specimen,sample,site,samp_con,Z))\n\n # fill tables besides measurements\n if specimen != \"\" and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else \"\" for x in SpecRecs]:\n SpecRec['specimen'] = specimen\n SpecRec['sample'] = sample\n if mass_or_vol == 'v':\n SpecRec[\"susc_chi_volume\"] = susc_chi_volume\n SpecRec[\"volume\"] = volume\n else:\n SpecRec[\"susc_chi_mass\"] = susc_chi_mass\n SpecRec[\"magn_mass\"] = mass\n SpecRecs.append(SpecRec)\n if sample != \"\" and sample not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['sample'] = sample\n SampRec['site'] = site\n SampRecs.append(SampRec)\n if site != \"\" and site not in [x['site'] 
if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec['lat'] = lat\n SiteRec['lon'] = lon\n SiteRecs.append(SiteRec)\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec['lat_n'] = lat\n LocRec['lon_e'] = lon\n LocRec['lat_s'] = lat\n LocRec['lon_w'] = lon\n LocRecs.append(LocRec)\n\n # fill measurements\n MeasRec[\"treat_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"treat_ac_field\"] = '0'\n MeasRec[\"treat_dc_field\"] = '0'\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n meas_type = \"LT-NO\"\n MeasRec[\"quality\"] = 'g'\n MeasRec[\"standard\"] = 'u'\n MeasRec[\"treat_step_num\"] = 0\n MeasRec[\"specimen\"] = specimen\n# if mass_or_vol=='v': MeasRec[\"susc_chi_volume\"]=susc_chi_volume\n# else: MeasRec[\"susc_chi_mass\"]=susc_chi_mass\n try:\n float(rec[3])\n MeasRec[\"dir_csd\"] = rec[3]\n except ValueError:\n MeasRec[\"dir_csd\"] = ''\n MeasRec[\"magn_moment\"] = '%10.3e' % (float(rec[4])*1e-7)\n MeasRec[\"dir_dec\"] = rec[5]\n MeasRec[\"dir_inc\"] = rec[6]\n MeasRec[\"citations\"] = \"This study\"\n if demag == \"AF\":\n if methcode != \"LP-AN-ARM\":\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n float(rec[1])*1e-3) # peak field in tesla\n meas_type = \"LT-AF-Z\"\n MeasRec[\"treat_dc_field\"] = '0'\n else: # AARM experiment\n if treat[1][0] == '0':\n meas_type = \"LT-AF-Z\"\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n peakfield) # peak field in tesla\n else:\n meas_type = \"LT-AF-I\"\n ipos = int(treat[0])-1\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (dec[ipos])\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (inc[ipos])\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n peakfield) # peak field in tesla\n elif demag == 
\"T\":\n if rec[1][0] == \".\":\n rec[1] = \"0\"+rec[1]\n treat = rec[1].split('.')\n if len(treat) == 1:\n treat.append('0')\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(rec[1])+273.) # temp in kelvin\n meas_type = \"LT-T-Z\"\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treat[0])+273.) # temp in kelvin\n if trm == 0: # demag=T and not trmaq\n if treat[1][0] == '0':\n meas_type = \"LT-T-Z\"\n else:\n # labfield in tesla (convert from microT)\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (\n phi) # labfield phi\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (\n theta) # labfield theta\n if treat[1][0] == '1':\n meas_type = \"LT-T-I\" # in-field thermal step\n if treat[1][0] == '2':\n meas_type = \"LT-PTRM-I\" # pTRM check\n pTRM = 1\n if treat[1][0] == '3':\n # this is a zero field step\n MeasRec[\"treat_dc_field\"] = '0'\n meas_type = \"LT-PTRM-MD\" # pTRM tail check\n else:\n meas_type = \"LT-T-I\" # trm acquisition experiment\n MeasRec['method_codes'] = meas_type\n MeasRecs.append(MeasRec)\n # need to add these\n meas_file = pmag.resolve_file_name(meas_file, output_dir_path)\n spec_file = pmag.resolve_file_name(spec_file, output_dir_path)\n samp_file = pmag.resolve_file_name(samp_file, output_dir_path)\n site_file = pmag.resolve_file_name(site_file, output_dir_path)\n loc_file = pmag.resolve_file_name(loc_file, output_dir_path)\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.tables['specimens'].write_magic_file(custom_name=spec_file)\n con.tables['samples'].write_magic_file(custom_name=samp_file)\n 
con.tables['sites'].write_magic_file(custom_name=site_file)\n    con.tables['locations'].write_magic_file(custom_name=loc_file)\n    con.tables['measurements'].write_magic_file(custom_name=meas_file)\n\n    return True, meas_file", "response": "This function converts a Lamont Doherty Earth Observatory measurement file to MagIC data base model 3.0."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef mini(magfile, dir_path='.', meas_file='measurements.txt',\n         data_model_num=3, volume=12, noave=0,\n         inst=\"\", user=\"\", methcode=\"LP-NO\", input_dir_path=\"\"):\n    \"\"\"\n    Convert the Yale minispin format to MagIC format files\n\n    Parameters\n    ----------\n    magfile : str\n        input file name, required\n    dir_path : str\n        working directory, default \".\"\n    meas_file : str\n        output measurement file name, default \"measurements.txt\"\n    data_model_num : int\n        MagIC data model 2 or 3, default 3\n    volume : float\n        volume in ccs, default 12.\n    noave : bool\n        do not average duplicate measurements, default False (so by default, DO average)\n    inst : str\n        instrument, default \"\"\n    user : str\n        user name, default \"\"\n    methcode : str\n        colon-delimited protocols, include all that apply\n        default \"LP-NO\"\n        options include \"AF\" for demag and \"T\" for thermal\n    input_dir_path : str,\n        input file directory IF different from dir_path, default \"\"\n\n    Returns\n    ---------\n    Tuple : (True or False indicating if conversion was successful, meas_file name written)\n\n    \"\"\"\n\n    codes = methcode.split(':')\n    demag = \"N\"\n    if \"AF\" in codes:\n        demag = 'AF'\n        methcode = \"LT-AF-Z\"\n    if \"T\" in codes:\n        demag = \"T\"\n        methcode = \"LP-TRM-TD\"\n\n    # initialize\n    citation = 'This study'\n    MagRecs = []\n    version_num = pmag.get_version()\n    input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)\n    magfile = pmag.resolve_file_name(magfile, input_dir_path)\n    input_dir_path = os.path.split(magfile)[0]\n    try:\n        with 
open(magfile, 'r') as finput:\n            lines = finput.readlines()\n    except OSError:\n        print(\"bad mag file name\")\n        return False, \"bad mag file name\"\n    # convert volume\n    volume = 1e-6 * float(volume)\n    # set col names based on MagIC 2 or 3\n    if data_model_num == 2:\n        spec_col = \"er_specimen_name\"\n        loc_col = \"er_location_name\"\n        site_col = \"er_site_name\"\n        samp_col = \"er_sample_name\"\n        software_col = \"magic_software_packages\"\n        treat_temp_col = \"treatment_temp\"\n        meas_temp_col = \"measurement_temp\"\n        treat_ac_col = \"treatment_ac_field\"\n        treat_dc_col = \"treatment_dc_field\"\n        treat_dc_phi_col = \"treatment_dc_field_phi\"\n        treat_dc_theta_col = \"treatment_dc_field_theta\"\n        moment_col = \"measurement_magn_moment\"\n        dec_col = \"measurement_dec\"\n        inc_col = \"measurement_inc\"\n        instrument_col = \"magic_instrument_codes\"\n        analyst_col = \"er_analyst_mail_names\"\n        citations_col = \"er_citation_names\"\n        methods_col = \"magic_method_codes\"\n        quality_col = \"measurement_flag\"\n        meas_standard_col = \"measurement_standard\"\n        meas_name_col = \"measurement_number\"\n    else:\n        spec_col = \"specimen\"\n        loc_col = \"location\"\n        site_col = \"site\"\n        samp_col = \"sample\"\n        software_col = \"software_packages\"\n        treat_temp_col = \"treat_temp\"\n        meas_temp_col = \"meas_temp\"\n        treat_ac_col = \"treat_ac_field\"\n        treat_dc_col = \"treat_dc_field\"\n        treat_dc_phi_col = \"treat_dc_field_phi\"\n        treat_dc_theta_col = \"treat_dc_field_theta\"\n        moment_col = \"magn_moment\"\n        dec_col = \"dir_dec\"\n        inc_col = \"dir_inc\"\n        instrument_col = \"instrument_codes\"\n        analyst_col = \"analysts\"\n        citations_col = \"citations\"\n        methods_col = \"method_codes\"\n        quality_col = \"quality\"\n        meas_standard_col = \"standard\"\n        meas_name_col = \"experiment\"\n        meas_seq_col = \"sequence\"\n\n    # go through the measurements\n    seq = 1\n    for line in lines:\n        rec = line.split(',')\n        if len(rec) > 1:\n            MagRec = {}\n            IDs = rec[0].split('_')\n            treat = IDs[1]\n            MagRec[spec_col] = IDs[0]\n            # 
print(MagRec[spec_col])\n sids = IDs[0].split('-')\n MagRec[loc_col] = sids[0]\n MagRec[site_col] = sids[0]+'-'+sids[1]\n if len(sids) == 2:\n MagRec[samp_col] = IDs[0]\n else:\n MagRec[samp_col] = sids[0]+'-'+sids[1]+'-'+sids[2]\n # print(MagRec)\n MagRec[software_col] = version_num\n MagRec[treat_temp_col] = '%8.3e' % (273) # room temp in kelvin\n MagRec[meas_temp_col] = '%8.3e' % (273) # room temp in kelvin\n MagRec[treat_ac_col] = '0'\n MagRec[treat_dc_col] = '0'\n MagRec[treat_dc_phi_col] = '0'\n MagRec[treat_dc_theta_col] = '0'\n meas_type = \"LT-NO\"\n if demag == \"AF\":\n MagRec[treat_ac_col] = '%8.3e' % (\n float(treat)*1e-3) # peak field in tesla\n if demag == \"T\":\n meas_type = \"LT-T-Z\"\n MagRec[treat_dc_col] = '%8.3e' % (0)\n MagRec[treat_ac_col] = '0'\n MagRec[treat_temp_col] = '%8.3e' % (\n float(treat)+273.) # temp in kelvin\n if demag == \"N\":\n meas_type = \"LT-NO\"\n MagRec[treat_ac_col] = '0'\n MagRec[treat_dc_col] = '0'\n MagRec[moment_col] = '%10.3e' % (\n volume*float(rec[3])*1e-3) # moment in Am2 (from mA/m)\n MagRec[dec_col] = rec[1]\n MagRec[inc_col] = rec[2]\n MagRec[instrument_col] = inst\n MagRec[analyst_col] = user\n MagRec[citations_col] = citation\n MagRec[methods_col] = methcode.strip(':')\n MagRec[quality_col] = 'g'\n MagRec[meas_standard_col] = 'u'\n MagRec[meas_name_col] = '1'\n MagRecs.append(MagRec)\n\n if data_model_num == 2:\n MagOuts = pmag.measurements_methods(MagRecs, noave)\n pmag.magic_write(meas_file, MagOuts, 'magic_measurements')\n else:\n MagOuts = pmag.measurements_methods3(MagRecs, noave)\n pmag.magic_write(meas_file, MagOuts, 'measurements')\n # nicely parse all the specimen/sample/site/location data\n # and write it to file as well\n dir_path = os.path.split(meas_file)[0]\n con = cb.Contribution(dir_path, read_tables=['measurements'],\n custom_filenames={'measurements': meas_file})\n con.propagate_measurement_info()\n for table in con.tables:\n con.write_table_to_file(table)\n print(\"results put in \", 
meas_file)\n    return True, meas_file", "response": "Convert the Yale minispin format to MagIC format files."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef mst(infile, spec_name='unknown', dir_path=\".\", input_dir_path=\"\",\n        meas_file=\"measurements.txt\", samp_infile=\"samples.txt\",\n        user=\"\", specnum=0, samp_con=\"1\", labfield=0.5,\n        location='unknown', syn=False, data_model_num=3):\n    \"\"\"\n    Convert MsT data (T,M) to MagIC measurements format files\n\n    Parameters\n    ----------\n    infile : str\n        input file name\n    spec_name : str\n        specimen name, default \"unknown\"\n    dir_path : str\n        working directory, default \".\"\n    input_dir_path : str\n        input file directory IF different from dir_path, default \"\"\n    meas_file : str\n        output measurement file name, default \"measurements.txt\"\n    samp_infile : str\n        existing sample infile (not required), default \"samples.txt\"\n    user : str\n        user name, default \"\"\n    specnum : int\n        number of characters to designate a specimen, default 0\n    samp_con : str\n        sample/site naming convention, default '1', see info below\n    labfield : float\n        DC_FIELD in Tesla, default : .5\n    location : str\n        location name, default \"unknown\"\n    syn : bool\n        synthetic, default False\n    data_model_num : int\n        MagIC data model 2 or 3, default 3\n\n    Returns\n    --------\n    type - Tuple : (True or False indicating if conversion was successful, file name written)\n\n\n    Info\n    --------\n    Sample naming convention:\n        [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n            is the single character sample designation.  e.g., TG001a is the\n            first sample from site TG001.    
[default]\n        [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n        [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n        [4-Z] XXXX[YYY]:  YYY is sample designation with Z characters from site XXX\n        [5] site name = sample name\n        [6] site name entered in site_name column in the orient.txt format input file  -- NOT CURRENTLY SUPPORTED\n        [7-Z] [XXX]YYY:  XXX is site designation with Z characters from samples  XXXYYY\n\n    \"\"\"\n\n    # deal with input files\n    input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)\n    try:\n        infile = pmag.resolve_file_name(infile, input_dir_path)\n        with open(infile, 'r') as finput:\n            data = finput.readlines()\n    except (IOError, FileNotFoundError) as ex:\n        print(ex)\n        print(\"bad mag file name\")\n        return False, \"bad mag file name\"\n\n    samp_file = pmag.resolve_file_name(samp_infile, input_dir_path)\n    if os.path.exists(samp_file):\n        Samps, file_type = pmag.magic_read(samp_file)\n    else:\n        Samps = []\n\n    # parse out samp_con\n    Z = \"0\"  # default; overridden by the 4-Z / 7-Z conventions below\n    if \"4\" in samp_con:\n        if \"-\" not in samp_con:\n            print(\"option [4] must be in form 4-Z where Z is an integer\")\n            return False, \"option [4] must be in form 4-Z where Z is an integer\"\n        else:\n            Z = samp_con.split(\"-\")[1]\n            samp_con = \"4\"\n    if \"7\" in samp_con:\n        if \"-\" not in samp_con:\n            print(\"option [7] must be in form 7-Z where Z is an integer\")\n            return False, \"option [7] must be in form 7-Z where Z is an integer\"\n        else:\n            Z = samp_con.split(\"-\")[1]\n            samp_con = \"7\"\n\n    # initialize some stuff\n    specnum = - int(specnum)\n    citation = 'This study'\n    measnum = 1\n    MagRecs, specs = [], []\n    version_num = pmag.get_version()\n\n    if data_model_num == 2:\n        loc_col = \"er_location_name\"\n        software_col = \"magic_software_packages\"\n        treat_dc_col = \"treatment_dc_field\"\n        meas_temp_col = \"measurement_temp\"\n        meth_codes_col = \"magic_method_codes\"\n        meas_magnitude_col = \"measurement_magnitude\"\n        spec_col = \"er_specimen_name\"\n        samp_col = 
\"er_sample_name\"\n site_col = \"er_site_name\"\n synthetic_name_col = \"er_synthetic_name\"\n inst_col = \"magic_instrument_codes\"\n analyst_col = \"er_analyst_mail_names\"\n citation_col = \"er_citation_names\"\n quality_col = \"measurement_flag\"\n meas_name_col = \"measurement_number\"\n experiment_col = \"magic_experiment_name\"\n meas_file_type = \"magic_measurements\"\n if meas_file == \"measurements.txt\":\n meas_file = \"magic_measurements.txt\"\n else:\n loc_col = \"location\"\n software_col = \"software_packages\"\n treat_dc_col = \"treat_dc_field\"\n meas_temp_col = \"meas_temp\"\n meth_codes_col = \"method_codes\"\n meas_magnitude_col = \"magn_uncal\"\n spec_col = \"specimen\"\n samp_col = \"sample\"\n site_col = \"site\"\n synthetic_name_col = \"specimen\"\n inst_col = \"instrument_codes\"\n analyst_col = \"analysts\"\n citation_col = \"citations\"\n quality_col = \"quality\"\n meas_name_col = \"measurement\"\n experiment_col = \"experiment\"\n meas_file_type = \"measurements\"\n\n meas_file = pmag.resolve_file_name(meas_file, dir_path)\n dir_path = os.path.split(meas_file)[0]\n\n T0 = float(data[0].split()[0])\n for line in data:\n instcode = \"\"\n if len(line) > 1:\n MagRec = {}\n if syn == 0:\n MagRec[loc_col] = location\n MagRec[software_col] = version_num\n MagRec[treat_dc_col] = labfield\n rec = line.split()\n T = float(rec[0])\n MagRec[meas_temp_col] = '%8.3e' % (\n float(rec[0])+273.) 
# temp in kelvin\n if T > T0:\n MagRec[meth_codes_col] = 'LP-MW-I'\n elif T < T0:\n MagRec[meth_codes_col] = 'LP-MC-I'\n T0 = T\n elif data.index(line) == 0: # NRM step\n MagRec[meth_codes_col] = \"LP-NO\"\n else:\n print('skipping repeated temperature step')\n MagRec[meth_codes_col] = ''\n T0 = T\n MagRec[meas_magnitude_col] = '%10.3e' % (\n float(rec[1])) # uncalibrated magnitude\n if syn == 0:\n MagRec[spec_col] = spec_name\n MagRec[site_col] = \"\"\n if specnum != 0:\n MagRec[samp_col] = spec_name[:specnum]\n else:\n MagRec[samp_col] = spec_name\n if Samps:\n for samp in Samps:\n if samp[samp_col] == MagRec[samp_col]:\n MagRec[loc_col] = samp.get(loc_col, location)\n MagRec[site_col] = samp[site_col]\n break\n elif int(samp_con) != 6:\n site = pmag.parse_site(\n MagRec[samp_col], samp_con, Z)\n MagRec[site_col] = site\n if MagRec[site_col] == \"\":\n print('No site name found for: ',\n MagRec[spec_col], MagRec[samp_col])\n if not MagRec[loc_col]:\n print('no location name for: ', MagRec[spec_col])\n else: # synthetic\n MagRec[loc_col] = \"\"\n MagRec[samp_col] = \"\"\n MagRec[site_col] = \"\"\n MagRec[spec_col] = \"\"\n MagRec[synthetic_name_col] = spec_name\n\n MagRec[inst_col] = instcode\n MagRec[analyst_col] = user\n MagRec[citation_col] = citation\n MagRec[quality_col] = 'g'\n MagRec[meas_name_col] = str(measnum)\n if data_model_num == 3:\n MagRec['sequence'] = measnum\n MagRecs.append(MagRec)\n measnum += 1\n new_MagRecs = []\n for rec in MagRecs: # sort out the measurements by experiment type\n rec[experiment_col] = spec_name\n if rec[meth_codes_col] == 'LP-MW-I':\n rec[experiment_col] = spec_name + ':LP-MW-I:Curie'\n elif rec[meth_codes_col] == 'LP-MC-I':\n rec[experiment_col] = spec_name +':LP-MC-I'\n elif rec[meth_codes_col] == \"LP-NO\":\n rec[experiment_col] = spec_name + \":LP-NO\"\n if data_model_num == 3:\n rec[meas_name_col] = rec[experiment_col] + \"_\" + rec[meas_name_col]\n new_MagRecs.append(rec)\n\n pmag.magic_write(meas_file, new_MagRecs, 
meas_file_type)\n print(\"results put in \", meas_file)\n # try to extract location/site/sample/specimen info into tables\n con = cb.Contribution(dir_path)\n con.propagate_measurement_info()\n for table in con.tables:\n if table in ['samples', 'sites', 'locations']:\n # add in location name by hand\n if table == 'sites':\n con.tables['sites'].df['location'] = location\n con.write_table_to_file(table)\n\n return True, meas_file", "response": "Convert MsT data to MagIC measurements format files"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pmd(mag_file, dir_path=\".\", input_dir_path=\"\",\n meas_file=\"measurements.txt\", spec_file='specimens.txt',\n samp_file='samples.txt', site_file=\"sites.txt\", loc_file=\"locations.txt\",\n lat=\"\", lon=\"\", specnum=0, samp_con='1', location=\"unknown\",\n noave=0, meth_code=\"LP-NO\"):\n \"\"\"\n converts PMD (Enkin) format files to MagIC format files\n\n Parameters\n ----------\n mag_file : str\n input file name, required\n dir_path : str\n working directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n spec_file : str\n output specimen file name, default \"specimens.txt\"\n samp_file: str\n output sample file name, default \"samples.txt\"\n site_file : str\n output site file name, default \"sites.txt\"\n loc_file : str\n output location file name, default \"locations.txt\"\n lat : float or str\n latitude, default \"\"\n lon : float or str\n longitude, default \"\"\n specnum : int\n number of characters to designate a specimen, default 0\n samp_con : str\n sample/site naming convention, default '1', see info below\n location : str\n location name, default \"unknown\"\n noave : bool\n do not average duplicate measurements, default False (so by default, DO average)\n meth_code : str\n default \"LP-NO\"\n e.g. 
[SO-MAG, SO-SUN, SO-SIGHT, ...]\n\n    Returns\n    ---------\n    Tuple : (True or False indicating if conversion was successful, file name written)\n\n\n    Info\n    --------\n    Sample naming convention:\n        [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n            is the single character sample designation.  e.g., TG001a is the\n            first sample from site TG001.    [default]\n        [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n        [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n        [4-Z] XXXX[YYY]:  YYY is sample designation with Z characters from site XXX\n        [5] site name = sample name\n        [6] site name entered in site_name column in the orient.txt format input file  -- NOT CURRENTLY SUPPORTED\n        [7-Z] [XXX]YYY:  XXX is site designation with Z characters from samples  XXXYYY\n\n\n    \"\"\"\n    input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n    specnum = - int(specnum)\n    samp_con = str(samp_con)\n    version_num = pmag.get_version()\n\n    if \"4\" in samp_con:\n        if \"-\" not in samp_con:\n            print(\n                \"naming convention option [4] must be in form 4-Z where Z is an integer\")\n            return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n        else:\n            Z = samp_con.split(\"-\")[1]\n            samp_con = \"4\"\n    elif \"7\" in samp_con:\n        if \"-\" not in samp_con:\n            print(\"option [7] must be in form 7-Z where Z is an integer\")\n            return False, \"naming convention option [7] must be in form 7-Z where Z is an integer\"\n        else:\n            Z = samp_con.split(\"-\")[1]\n            samp_con = \"7\"\n    else:\n        Z = 1\n\n    # format variables\n    mag_file = pmag.resolve_file_name(mag_file, input_dir_path)\n    meas_file = pmag.resolve_file_name(meas_file, output_dir_path)\n    spec_file = pmag.resolve_file_name(spec_file, output_dir_path)\n    samp_file = pmag.resolve_file_name(samp_file, output_dir_path)\n    site_file = pmag.resolve_file_name(site_file, output_dir_path)\n\n    # parse data\n    try:\n        with open(mag_file, 'r') as f:\n            data = f.readlines()\n    except 
IOError:\n        print('mag_file field is required option')\n        return False, 'mag_file field is required option'\n    comment = data[0]\n    line = data[1].strip()\n    line = line.replace(\"=\", \"= \")  # make finding orientations easier\n    rec = line.split()  # read in sample orientation, etc.\n    specimen = rec[0]\n    SpecRecs, SampRecs, SiteRecs, LocRecs, MeasRecs = [], [], [], [], []\n    SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}  # make a sample record\n    if specnum != 0:\n        sample = rec[0][:specnum]\n    else:\n        sample = rec[0]\n    if int(samp_con) < 6:\n        site = pmag.parse_site(sample, samp_con, Z)\n    else:\n        if 'site' in list(SampRec.keys()):\n            site = SampRec['site']\n        if 'location' in list(SampRec.keys()):\n            location = SampRec['location']\n    az_ind = rec.index('a=')+1\n    SampRec['sample'] = sample\n    SampRec['description'] = comment.replace('\\n', '').replace('\\t', '')\n    SampRec['azimuth'] = rec[az_ind]\n    dip_ind = rec.index('b=')+1\n    dip = -float(rec[dip_ind])\n    SampRec['dip'] = '%7.1f' % (dip)\n    strike_ind = rec.index('s=')+1\n    SampRec['bed_dip_direction'] = '%7.1f' % (float(rec[strike_ind])+90.)\n    bd_ind = rec.index('d=')+1\n    SampRec['bed_dip'] = rec[bd_ind]\n    v_ind = rec.index('v=')+1\n    vol = rec[v_ind][:-3]\n    date = rec[-2]\n    time = rec[-1]\n    SampRec['method_codes'] = meth_code\n    SampRec['site'] = site\n    SampRec['citations'] = 'This study'\n    SampRec['method_codes'] = 'SO-NO'\n\n    SpecRec['specimen'] = specimen\n    SpecRec['sample'] = sample\n    SpecRec['citations'] = 'This study'\n\n    SiteRec['site'] = site\n    SiteRec['location'] = location\n    SiteRec['citations'] = 'This study'\n    SiteRec['lat'] = lat\n    SiteRec['lon'] = lon\n\n    LocRec['location'] = location\n    LocRec['citations'] = 'This study'\n    LocRec['lat_n'] = lat\n    LocRec['lat_s'] = lat\n    LocRec['lon_e'] = lon\n    LocRec['lon_w'] = lon\n\n    SpecRecs.append(SpecRec)\n    SampRecs.append(SampRec)\n    SiteRecs.append(SiteRec)\n    LocRecs.append(LocRec)\n    for k in range(3, len(data)):  # read in data\n        line = data[k]\n        rec = line.split()\n        if 
len(rec) > 1: # skip blank lines at bottom\n MeasRec = {}\n MeasRec['description'] = 'Date: '+date+' '+time\n MeasRec[\"citations\"] = \"This study\"\n MeasRec['software_packages'] = version_num\n MeasRec[\"treat_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"quality\"] = 'g'\n MeasRec[\"standard\"] = 'u'\n MeasRec[\"treat_step_num\"] = 0\n MeasRec[\"specimen\"] = specimen\n if rec[0] == 'NRM':\n meas_type = \"LT-NO\"\n elif rec[0][0] == 'M' or rec[0][0] == 'H':\n meas_type = \"LT-AF-Z\"\n elif rec[0][0] == 'T':\n meas_type = \"LT-T-Z\"\n else:\n print(\"measurement type unknown\")\n return False, \"measurement type unknown\"\n X = [float(rec[1]), float(rec[2]), float(rec[3])]\n Vec = pmag.cart2dir(X)\n MeasRec[\"magn_moment\"] = '%10.3e' % (Vec[2]) # Am^2\n MeasRec[\"magn_volume\"] = rec[4] # A/m\n MeasRec[\"dir_dec\"] = '%7.1f' % (Vec[0])\n MeasRec[\"dir_inc\"] = '%7.1f' % (Vec[1])\n MeasRec[\"treat_ac_field\"] = '0'\n if meas_type != 'LT-NO':\n treat = float(rec[0][1:])\n else:\n treat = 0\n if meas_type == \"LT-AF-Z\":\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n treat*1e-3) # convert from mT to tesla\n elif meas_type == \"LT-T-Z\":\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n treat+273.) 
# temp in kelvin\n MeasRec['method_codes'] = meas_type\n MeasRecs.append(MeasRec)\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.tables['specimens'].write_magic_file(custom_name=spec_file,dir_path=dir_path)\n con.tables['samples'].write_magic_file(custom_name=samp_file,dir_path=dir_path)\n con.tables['sites'].write_magic_file(custom_name=site_file,dir_path=dir_path)\n con.tables['locations'].write_magic_file(custom_name=loc_file,dir_path=dir_path)\n con.tables['measurements'].write_magic_file(custom_name=meas_file,dir_path=dir_path)\n\n return True, meas_file", "response": "This function converts PMD format files to MagIC format files."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert Scripps Institution of Oceanography measurement files to MagIC data base model 3.0 Parameters _________ magfile : input measurement file dir_path : output directory path, default \".\" input_dir_path : input file directory IF different from dir_path, default \"\" meas_file : output file measurement file name, default \"measurements.txt\" spec_file : output file specimen file name, default \"specimens.txt\" samp_file : output file sample file name, default \"samples.tt\" site_file : output file site file name, default \"sites.txt\" loc_file : output file location file name, default \"locations.txt\" samp_infile : output file to append to, default \"\" syn : if True, this is a synthetic specimen, default False syntype : sample material type, default \"\" instrument : instrument on which the measurements were made (e.g., \"SIO-2G\"), default \"\" labfield : 
lab field in microtesla for TRM, default 0 phi, theta : direction of lab field [-1,-1 for anisotropy experiments], default 0, 0 peakfield : peak af field in mT for ARM, default 0 specnum : number of terminal characters distinguishing specimen from sample, default 0 samp_con : sample/site naming convention, default '1' \"1\" XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] \"2\" XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length) \"3\" XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length) \"4-Z\" XXXX[YYY]: YYY is sample designation with Z characters from site XXX \"5\" site name same as sample \"6\" site is entered under a separate column NOT CURRENTLY SUPPORTED \"7-Z\" [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY NB: all others you will have to customize yourself or e-mail ltauxe@ucsd.edu for help. \"8\" synthetic - has no site name \"9\" ODP naming convention location : location name for study, default \"unknown\" lat : latitude of sites, default \"\" lon : longitude of sites, default \"\" noave : boolean, if False, average replicates, default False codelist : colon delimited string of lab protocols (e.g., codelist=\"AF\"), default \"\" AF: af demag T: thermal including thellier but not trm acquisition S: Shaw method I: IRM (acquisition) N: NRM only TRM: trm acquisition ANI: anisotropy experiment D: double AF demag G: triple AF demag (GRM protocol) CR: cooling rate experiment. The treatment coding of the measurement file should be: XXX.00,XXX.10, XXX.20 ...XX.70 etc. (XXX.00 is optional) where XXX is the temperature and .10,.20... are running numbers of the cooling rates steps. XXX.00 is optional zerofield baseline. XXX.70 is alteration check. syntax in sio_magic is: -LP CR xxx,yyy,zzz,..... 
xxx -A where xxx, yyy, zzz...xxx are cooling time in [K/minutes], separated by comma, ordered in the same order as XXX.10,XXX.20 ...XX.70 if you use a zerofield step then no need to specify the cooling rate for the zerofield It is important to add to the command line the -A option so the measurements will not be averaged. But users need to make sure that there are no duplicate measurements in the file cooling_rates : cooling rate in K/sec for cooling rate dependence studies (K/minutes) in comma separated list for each cooling rate (e.g., \"43.6,1.3,43.6\") coil : 1,2, or 3 units of IRM field in volts using ASC coil #1,2 or 3 the fast and slow experiments in comma separated string (e.g., fast: 43.6 K/min, slow: 1.3 K/min) timezone : timezone of date/time string in comment string, default \"UTC\" user : analyst, default \"\" Effects _______ creates MagIC formatted tables", "response": "def sio(mag_file, dir_path=\".\", input_dir_path=\"\",\n        meas_file=\"measurements.txt\", spec_file=\"specimens.txt\",\n        samp_file=\"samples.txt\", site_file=\"sites.txt\", loc_file=\"locations.txt\",\n        samp_infile=\"\", institution=\"\", syn=False, syntype=\"\", instrument=\"\",\n        labfield=0, phi=0, theta=0, peakfield=0,\n        specnum=0, samp_con='1', location=\"unknown\", lat=\"\", lon=\"\",\n        noave=False, codelist=\"\", cooling_rates=\"\", coil='', timezone=\"UTC\",\n        user=\"\"):\n    \"\"\"\n    converts Scripps Institution of Oceanography measurement files to MagIC data base model 3.0\n\n    Parameters\n    _________\n    magfile : input measurement file\n    dir_path : output directory path, default \".\"\n    input_dir_path : input file directory IF different from dir_path, default \"\"\n    meas_file : output file measurement file name, default \"measurements.txt\"\n    spec_file : output file specimen file name, default \"specimens.txt\"\n    samp_file : output file sample file name, default \"samples.txt\"\n    site_file : output file site file name, default \"sites.txt\"\n    loc_file : output file location file 
name, default \"locations.txt\"\n samp_infile : output file to append to, default \"\"\n syn : if True, this is a synthetic specimen, default False\n syntype : sample material type, default \"\"\n instrument : instrument on which the measurements were made (e.g., \"SIO-2G\"), default \"\"\n labfield : lab field in microtesla for TRM, default 0\n phi, theta : direction of lab field [-1,-1 for anisotropy experiments], default 0, 0\n peakfield : peak af field in mT for ARM, default 0\n specnum : number of terminal characters distinguishing specimen from sample, default 0\n samp_con : sample/site naming convention, default '1'\n \"1\" XXXXY: where XXXX is an arbitr[ary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. [default]\n \"2\" XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length)\n \"3\" XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length)\n \"4-Z\" XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n \"5\" site name same as sample\n \"6\" site is entered under a separate column NOT CURRENTLY SUPPORTED\n \"7-Z\" [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY\n NB: all others you will have to customize your self\n or e-mail ltauxe@ucsd.edu for help.\n\n \"8\" synthetic - has no site name\n \"9\" ODP naming convention\n location : location name for study, default \"unknown\"\n lat : latitude of sites, default \"\"\n lon : longitude of sites, default \"\"\n noave : boolean, if False, average replicates, default False\n codelist : colon delimited string of lab protocols (e.g., codelist=\"AF\"), default \"\"\n AF: af demag\n T: thermal including thellier but not trm acquisition\n S: Shaw method\n I: IRM (acquisition)\n N: NRM only\n TRM: trm acquisition\n ANI: anisotropy experiment\n D: double AF demag\n G: triple AF demag (GRM protocol)\n CR: cooling rate experiment.\n The treatment coding of the measurement file 
should be: XXX.00,XXX.10, XXX.20 ...XX.70 etc. (XXX.00 is optional)\n            where XXX is the temperature and .10,.20... are running numbers of the cooling rates steps.\n            XXX.00 is optional zerofield baseline. XXX.70 is alteration check.\n            syntax in sio_magic is: -LP CR xxx,yyy,zzz,..... xxx -A\n            where xxx, yyy, zzz...xxx  are cooling time in [K/minutes], separated by comma, ordered in the same order as XXX.10,XXX.20 ...XX.70\n            if you use a zerofield step then no need to specify the cooling rate for the zerofield\n            It is important to add to the command line the -A option so the measurements will not be averaged.\n            But users need to make sure that there are no duplicate measurements in the file\n    cooling_rates :  cooling rate in K/sec for cooling rate dependence studies (K/minutes)\n            in comma separated list for each cooling rate (e.g., \"43.6,1.3,43.6\")\n    coil : 1,2, or 3 units of IRM field in volts using ASC coil #1,2 or 3\n        the fast and slow experiments in comma separated string (e.g., fast:  43.6 K/min, slow:  1.3 K/min)\n    timezone : timezone of date/time string in comment string, default \"UTC\"\n    user : analyst, default \"\"\n\n    Effects\n    _______\n    creates MagIC formatted tables\n    \"\"\"\n\n    # initialize some stuff\n    methcode = \"LP-NO\"\n    pTRM, MD = 0, 0\n    dec = [315, 225, 180, 135, 45, 90, 270, 270, 270, 90, 180, 180, 0, 0, 0]\n    inc = [0, 0, 0, 0, 0, -45, -45, 0, 45, 45, 45, -45, -90, -45, 45]\n    tdec = [0, 90, 0, 180, 270, 0, 0, 90, 0]\n    tinc = [0, 0, 90, 0, 0, -90, 0, 0, 90]\n    missing = 1\n    demag = \"N\"\n    citations = 'This study'\n    fmt = 'old'\n    Samps = []\n    trm = 0\n    irm = 0\n\n    # get args\n    input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n    # measurement outfile\n    meas_file = pmag.resolve_file_name(meas_file, output_dir_path)\n    spec_file = pmag.resolve_file_name(spec_file, output_dir_path)\n    samp_file = pmag.resolve_file_name(samp_file, output_dir_path)\n    site_file = pmag.resolve_file_name(site_file, output_dir_path)\n    loc_file = 
pmag.resolve_file_name(loc_file, output_dir_path)\n    mag_file = pmag.resolve_file_name(mag_file, input_dir_path)\n    labfield = float(labfield) * 1e-6\n    phi = float(phi)\n    theta = float(theta)\n    peakfield = float(peakfield) * 1e-3\n    specnum = int(specnum)\n    samp_con = str(samp_con)\n\n    # make sure all initial values are correctly set up (whether they come from the command line or a GUI)\n    if samp_infile:\n        Samps, file_type = pmag.magic_read(samp_infile)\n    if coil:\n        coil = str(coil)\n        methcode = \"LP-IRM\"\n        irmunits = \"V\"\n        if coil not in [\"1\", \"2\", \"3\"]:\n            print(__doc__)\n            print('not a valid coil specification')\n            return False, '{} is not a valid coil specification'.format(coil)\n    if mag_file:\n        lines = pmag.open_file(mag_file)\n        if not lines:\n            print(\"you must provide a valid mag_file\")\n            return False, \"you must provide a valid mag_file\"\n    if not mag_file:\n        print(__doc__)\n        print(\"mag_file field is required option\")\n        return False, \"mag_file field is required option\"\n    if specnum != 0:\n        specnum = -specnum\n    Z = 0  # default; overridden below by the 4-Z / 7-Z naming conventions\n    if \"4\" == samp_con[0]:\n        if \"-\" not in samp_con:\n            print(\n                \"naming convention option [4] must be in form 4-Z where Z is an integer\")\n            print('---------------')\n            return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n        else:\n            Z = samp_con.split(\"-\")[1]\n            samp_con = \"4\"\n    if \"7\" == samp_con[0]:\n        if \"-\" not in samp_con:\n            print(\"option [7] must be in form 7-Z where Z is an integer\")\n            return False, \"option [7] must be in form 7-Z where Z is an integer\"\n        else:\n            Z = samp_con.split(\"-\")[1]\n            samp_con = \"7\"\n\n    if codelist:\n        codes = codelist.split(':')\n        if \"AF\" in codes:\n            demag = 'AF'\n            if '-dc' not in sys.argv:\n                methcode = \"LT-AF-Z\"\n            if '-dc' in sys.argv:\n                methcode = \"LT-AF-I\"\n        if \"T\" in codes:\n            demag = \"T\"\n            if '-dc' not in sys.argv:\n                methcode = \"LT-T-Z\"\n            if '-dc' in sys.argv:\n                methcode = \"LT-T-I\"\n        if \"I\" in codes:\n            methcode = \"LP-IRM\"\n            
irmunits = \"mT\"\n if \"I3d\" in codes:\n methcode = \"LT-T-Z:LP-IRM-3D\"\n if \"S\" in codes:\n demag = \"S\"\n methcode = \"LP-PI-TRM:LP-PI-ALT-AFARM\"\n trm_labfield = labfield\n ans = input(\"DC lab field for ARM step: [50uT] \")\n if ans == \"\":\n arm_labfield = 50e-6\n else:\n arm_labfield = float(ans)*1e-6\n ans = input(\"temperature for total trm step: [600 C] \")\n if ans == \"\":\n trm_peakT = 600+273 # convert to kelvin\n else:\n trm_peakT = float(ans)+273 # convert to kelvin\n if \"G\" in codes:\n methcode = \"LT-AF-G\"\n if \"D\" in codes:\n methcode = \"LT-AF-D\"\n if \"TRM\" in codes:\n demag = \"T\"\n trm = 1\n if \"CR\" in codes:\n demag = \"T\"\n cooling_rate_experiment = 1\n # command_line does not exist in this code\n cooling_rates_list = cooling_rates.split(',')\n # if command_line:\n # ind=sys.argv.index(\"CR\")\n # cooling_rates=sys.argv[ind+1]\n # cooling_rates_list=cooling_rates.split(',')\n # else:\n # cooling_rates_list=str(cooling_rates).split(',')\n if demag == \"T\" and \"ANI\" in codes:\n methcode = \"LP-AN-TRM\"\n if demag == \"T\" and \"CR\" in codes:\n methcode = \"LP-CR-TRM\"\n if demag == \"AF\" and \"ANI\" in codes:\n methcode = \"LP-AN-ARM\"\n if labfield == 0:\n labfield = 50e-6\n if peakfield == 0:\n peakfield = .180\n\n MeasRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n version_num = pmag.get_version()\n\n ##################################\n\n for line in lines:\n instcode = \"\"\n if len(line) > 2:\n MeasRec, SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}, {}\n MeasRec['software_packages'] = version_num\n MeasRec[\"description\"] = \"\"\n MeasRec[\"treat_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"treat_ac_field\"] = '0'\n MeasRec[\"treat_dc_field\"] = '0'\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n meas_type = \"LT-NO\"\n rec = line.split()\n try:\n float(rec[0])\n print(\"No 
specimen name for line #%d in the measurement file\" %\n lines.index(line))\n continue\n except ValueError:\n pass\n if rec[1] == \".00\":\n rec[1] = \"0.00\"\n treat = rec[1].split('.')\n if methcode == \"LP-IRM\":\n if irmunits == 'mT':\n labfield = float(treat[0])*1e-3\n else:\n labfield = pmag.getfield(irmunits, coil, treat[0])\n if rec[1][0] != \"-\":\n phi, theta = 0., 90.\n else:\n phi, theta = 0., -90.\n meas_type = \"LT-IRM\"\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (phi)\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (theta)\n if len(rec) > 6:\n # break e.g., 10/15/02;7:45 indo date and time\n code1 = rec[6].split(';')\n if len(code1) == 2: # old format with AM/PM\n missing = 0\n code2 = code1[0].split('/') # break date into mon/day/year\n # break e.g., AM;C34;200 into time;instr/axes/measuring pos;number of measurements\n code3 = rec[7].split(';')\n yy = int(code2[2])\n if yy < 90:\n yyyy = str(2000+yy)\n else:\n yyyy = str(1900+yy)\n mm = int(code2[0])\n if mm < 10:\n mm = \"0\"+str(mm)\n else:\n mm = str(mm)\n dd = int(code2[1])\n if dd < 10:\n dd = \"0\"+str(dd)\n else:\n dd = str(dd)\n time = code1[1].split(':')\n hh = int(time[0])\n if code3[0] == \"PM\":\n hh = hh+12\n if hh < 10:\n hh = \"0\"+str(hh)\n else:\n hh = str(hh)\n min = int(time[1])\n if min < 10:\n min = \"0\"+str(min)\n else:\n min = str(min)\n dt = yyyy+\":\"+mm+\":\"+dd+\":\"+hh+\":\"+min+\":00\"\n local = pytz.timezone(timezone)\n naive = datetime.datetime.strptime(dt, \"%Y:%m:%d:%H:%M:%S\")\n local_dt = local.localize(naive, is_dst=None)\n utc_dt = local_dt.astimezone(pytz.utc)\n MeasRec[\"timestamp\"] = utc_dt.strftime(\n \"%Y-%m-%dT%H:%M:%S\")+\"Z\"\n if instrument == \"\":\n if code3[1][0] == 'C':\n instcode = 'SIO-bubba'\n if code3[1][0] == 'G':\n instcode = 'SIO-flo'\n else:\n instcode = ''\n MeasRec[\"meas_n_orient\"] = code3[1][2]\n elif len(code1) > 2: # newest format (cryo7 or later)\n if \"LP-AN-ARM\" not in 
methcode:\n labfield = 0\n fmt = 'new'\n date = code1[0].split('/') # break date into mon/day/year\n yy = int(date[2])\n if yy < 90:\n yyyy = str(2000+yy)\n else:\n yyyy = str(1900+yy)\n mm = int(date[0])\n if mm < 10:\n mm = \"0\"+str(mm)\n else:\n mm = str(mm)\n dd = int(date[1])\n if dd < 10:\n dd = \"0\"+str(dd)\n else:\n dd = str(dd)\n time = code1[1].split(':')\n hh = int(time[0])\n if hh < 10:\n hh = \"0\"+str(hh)\n else:\n hh = str(hh)\n min = int(time[1])\n if min < 10:\n min = \"0\"+str(min)\n else:\n min = str(min)\n dt = yyyy+\":\"+mm+\":\"+dd+\":\"+hh+\":\"+min+\":00\"\n local = pytz.timezone(timezone)\n naive = datetime.datetime.strptime(dt, \"%Y:%m:%d:%H:%M:%S\")\n local_dt = local.localize(naive, is_dst=None)\n utc_dt = local_dt.astimezone(pytz.utc)\n MeasRec[\"timestamp\"] = utc_dt.strftime(\n \"%Y-%m-%dT%H:%M:%S\")+\"Z\"\n if instrument == \"\":\n if code1[6][0] == 'C':\n instcode = 'SIO-bubba'\n if code1[6][0] == 'G':\n instcode = 'SIO-flo'\n else:\n instcode = ''\n if len(code1) > 1:\n MeasRec[\"meas_n_orient\"] = code1[6][2]\n else:\n # takes care of awkward format with bubba and flo being different\n MeasRec[\"meas_n_orient\"] = code1[7]\n if user == \"\":\n user = code1[5]\n if code1[2][-1].upper() == 'C':\n demag = \"T\"\n if code1[4] == 'microT' and float(code1[3]) != 0. and \"LP-AN-ARM\" not in methcode:\n labfield = float(code1[3])*1e-6\n if code1[2] == 'mT' and methcode != \"LP-IRM\":\n demag = \"AF\"\n if code1[4] == 'microT' and float(code1[3]) != 0.:\n labfield = float(code1[3])*1e-6\n if code1[4] == 'microT' and labfield != 0. and meas_type != \"LT-IRM\":\n phi, theta = 0., -90.\n if demag == \"T\":\n meas_type = \"LT-T-I\"\n if demag == \"AF\":\n meas_type = \"LT-AF-I\"\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (phi)\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (theta)\n if code1[4] == '' or labfield == 0. 
and meas_type != \"LT-IRM\":\n if demag == 'T':\n meas_type = \"LT-T-Z\"\n if demag == \"AF\":\n meas_type = \"LT-AF-Z\"\n MeasRec[\"treat_dc_field\"] = '0'\n if not syn:\n specimen = rec[0]\n MeasRec[\"specimen\"] = specimen\n if specnum != 0:\n sample = rec[0][:specnum]\n else:\n sample = rec[0]\n if samp_infile and Samps: # if samp_infile was provided AND yielded sample data\n samp = pmag.get_dictitem(Samps, 'sample', sample, 'T')\n if len(samp) > 0:\n location = samp[0][\"location\"]\n site = samp[0][\"site\"]\n else:\n location = ''\n site = ''\n else:\n site = pmag.parse_site(sample, samp_con, Z)\n if location != '' and location not in [x['location'] if 'location' in list(x.keys()) else '' for x in LocRecs]:\n LocRec['location'] = location\n LocRec['lat_n'] = lat\n LocRec['lat_s'] = lat\n LocRec['lon_e'] = lon\n LocRec['lon_w'] = lon\n LocRecs.append(LocRec)\n if site != '' and site not in [x['site'] if 'site' in list(x.keys()) else '' for x in SiteRecs]:\n SiteRec['location'] = location\n SiteRec['site'] = site\n SiteRec['lat'] = lat\n SiteRec['lon'] = lon\n SiteRecs.append(SiteRec)\n if sample != '' and sample not in [x['sample'] if 'sample' in list(x.keys()) else '' for x in SampRecs]:\n SampRec['site'] = site\n SampRec['sample'] = sample\n SampRecs.append(SampRec)\n if specimen != '' and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else '' for x in SpecRecs]:\n SpecRec[\"specimen\"] = specimen\n SpecRec['sample'] = sample\n SpecRecs.append(SpecRec)\n else:\n specimen = rec[0]\n MeasRec[\"specimen\"] = specimen\n if specnum != 0:\n sample = rec[0][:specnum]\n else:\n sample = rec[0]\n site = pmag.parse_site(sample, samp_con, Z)\n if location != '' and location not in [x['location'] if 'location' in list(x.keys()) else '' for x in LocRecs]:\n LocRec['location'] = location\n LocRec['lat_n'] = lat\n LocRec['lat_s'] = lat\n LocRec['lon_e'] = lon\n LocRec['lon_w'] = lon\n LocRecs.append(LocRec)\n if site != '' and site not in [x['site'] if 
'site' in list(x.keys()) else '' for x in SiteRecs]:\n SiteRec['location'] = location\n SiteRec['site'] = site\n SiteRec['lat'] = lat\n SiteRec['lon'] = lon\n SiteRecs.append(SiteRec)\n if sample != '' and sample not in [x['sample'] if 'sample' in list(x.keys()) else '' for x in SampRecs]:\n SampRec['site'] = site\n SampRec['sample'] = sample\n SampRecs.append(SampRec)\n if specimen != '' and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else '' for x in SpecRecs]:\n SpecRec[\"specimen\"] = specimen\n SpecRec['sample'] = sample\n SpecRecs.append(SpecRec)\n SampRec[\"institution\"] = institution\n SampRec[\"material_type\"] = syntype\n # MeasRec[\"sample\"]=sample\n if float(rec[1]) == 0:\n pass\n elif demag == \"AF\":\n if methcode != \"LP-AN-ARM\":\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n float(rec[1])*1e-3) # peak field in tesla\n if meas_type == \"LT-AF-Z\":\n MeasRec[\"treat_dc_field\"] = '0'\n else: # AARM experiment\n if treat[1][0] == '0':\n meas_type = \"LT-AF-Z:LP-AN-ARM:\"\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n peakfield) # peak field in tesla\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (0)\n if labfield != 0 and methcode != \"LP-AN-ARM\":\n print(\n \"Warning - inconsistency in mag file with lab field - overriding file with 0\")\n else:\n meas_type = \"LT-AF-I:LP-AN-ARM\"\n ipos = int(treat[0])-1\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (dec[ipos])\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (inc[ipos])\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n peakfield) # peak field in tesla\n elif demag == \"T\" and methcode == \"LP-AN-TRM\":\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treat[0])+273.) 
# temp in kelvin\n if treat[1][0] == '0':\n meas_type = \"LT-T-Z:LP-AN-TRM\"\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (0)\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n else:\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n if treat[1][0] == '7': # alteration check as final measurement\n meas_type = \"LT-PTRM-I:LP-AN-TRM\"\n else:\n meas_type = \"LT-T-I:LP-AN-TRM\"\n\n # find the direction of the lab field in two ways:\n # (1) using the treatment coding (XX.1=+x, XX.2=+y, XX.3=+z, XX.4=-x, XX.5=-y, XX.6=-z)\n ipos_code = int(treat[1][0])-1\n # (2) using the magnetization\n DEC = float(rec[4])\n INC = float(rec[5])\n if INC < 45 and INC > -45:\n if DEC > 315 or DEC < 45:\n ipos_guess = 0\n if DEC > 45 and DEC < 135:\n ipos_guess = 1\n if DEC > 135 and DEC < 225:\n ipos_guess = 3\n if DEC > 225 and DEC < 315:\n ipos_guess = 4\n else:\n if INC > 45:\n ipos_guess = 2\n if INC < -45:\n ipos_guess = 5\n # prefer the guess over the code\n ipos = ipos_guess\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (tdec[ipos])\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (tinc[ipos])\n # check it\n if ipos_guess != ipos_code and treat[1][0] != '7':\n print(\"-E- ERROR: check specimen %s step %s, ATRM measurements, coding does not match the direction of the lab field!\" %\n (rec[0], \".\".join(list(treat))))\n\n elif demag == \"S\": # Shaw experiment\n if treat[1][1] == '0':\n if int(treat[0]) != 0:\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n float(treat[0])*1e-3) # AF field in tesla\n MeasRec[\"treat_dc_field\"] = '0'\n meas_type = \"LT-AF-Z\" # first AF\n else:\n meas_type = \"LT-NO\"\n MeasRec[\"treat_ac_field\"] = '0'\n MeasRec[\"treat_dc_field\"] = '0'\n elif treat[1][1] == '1':\n if int(treat[0]) == 0:\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n peakfield) # peak field in tesla\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (arm_labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (phi)\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' 
% (theta)\n meas_type = \"LT-AF-I\"\n else:\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n float(treat[0])*1e-3) # AF field in tesla\n MeasRec[\"treat_dc_field\"] = '0'\n meas_type = \"LT-AF-Z\"\n elif treat[1][1] == '2':\n if int(treat[0]) == 0:\n MeasRec[\"treat_ac_field\"] = '0'\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (trm_labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (phi)\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (theta)\n MeasRec[\"treat_temp\"] = '%8.3e' % (trm_peakT)\n meas_type = \"LT-T-I\"\n else:\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n float(treat[0])*1e-3) # AF field in tesla\n MeasRec[\"treat_dc_field\"] = '0'\n meas_type = \"LT-AF-Z\"\n elif treat[1][1] == '3':\n if int(treat[0]) == 0:\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n peakfield) # peak field in tesla\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (arm_labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (phi)\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (theta)\n meas_type = \"LT-AF-I\"\n else:\n MeasRec[\"treat_ac_field\"] = '%8.3e' % (\n float(treat[0])*1e-3) # AF field in tesla\n MeasRec[\"treat_dc_field\"] = '0'\n meas_type = \"LT-AF-Z\"\n\n # Cooling rate experient # added by rshaar\n elif demag == \"T\" and methcode == \"LP-CR-TRM\":\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treat[0])+273.) 
# temp in kelvin\n if treat[1][0] == '0':\n meas_type = \"LT-T-Z:LP-CR-TRM\"\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (0)\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n else:\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n if treat[1][0] == '7': # alteration check as final measurement\n meas_type = \"LT-PTRM-I:LP-CR-TRM\"\n else:\n meas_type = \"LT-T-I:LP-CR-TRM\"\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (\n phi) # labfield phi\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (\n theta) # labfield theta\n\n indx = int(treat[1][0])-1\n # alteration check matjed as 0.7 in the measurement file\n if indx == 6:\n cooling_time = cooling_rates_list[-1]\n else:\n cooling_time = cooling_rates_list[indx]\n MeasRec[\"description\"] = \"cooling_rate\" + \\\n \":\"+cooling_time+\":\"+\"K/min\"\n noave = 1\n elif demag != 'N':\n if len(treat) == 1:\n treat.append('0')\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treat[0])+273.) # temp in kelvin\n if trm == 0: # demag=T and not trmaq\n if treat[1][0] == '0':\n meas_type = \"LT-T-Z\"\n else:\n # labfield in tesla (convert from microT)\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (\n phi) # labfield phi\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (\n theta) # labfield theta\n if treat[1][0] == '1':\n meas_type = \"LT-T-I\" # in-field thermal step\n if treat[1][0] == '2':\n meas_type = \"LT-PTRM-I\" # pTRM check\n pTRM = 1\n if treat[1][0] == '3':\n # this is a zero field step\n MeasRec[\"treat_dc_field\"] = '0'\n meas_type = \"LT-PTRM-MD\" # pTRM tail check\n else:\n labfield = float(treat[1])*1e-6\n # labfield in tesla (convert from microT)\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (\n phi) # labfield phi\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (\n theta) # labfield theta\n meas_type = \"LT-T-I:LP-TRM\" # trm acquisition experiment\n\n MeasRec[\"dir_csd\"] = 
rec[2]\n MeasRec[\"magn_moment\"] = '%10.3e' % (\n float(rec[3])*1e-3) # moment in Am^2 (from emu)\n MeasRec[\"dir_dec\"] = rec[4]\n MeasRec[\"dir_inc\"] = rec[5]\n MeasRec[\"instrument_codes\"] = instcode\n MeasRec[\"analysts\"] = user\n MeasRec[\"citations\"] = citations\n if \"LP-IRM-3D\" in methcode:\n meas_type = methcode\n # MeasRec[\"method_codes\"]=methcode.strip(':')\n MeasRec[\"method_codes\"] = meas_type\n MeasRec[\"quality\"] = 'g'\n if 'std' in rec[0]:\n MeasRec[\"standard\"] = 's'\n else:\n MeasRec[\"standard\"] = 'u'\n MeasRec[\"treat_step_num\"] = 0\n # print MeasRec['treat_temp']\n MeasRecs.append(MeasRec)\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n # create MagIC tables\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n # write MagIC tables to file\n con.tables['specimens'].write_magic_file(custom_name=spec_file,dir_path=dir_path)\n con.tables['samples'].write_magic_file(custom_name=samp_file,dir_path=dir_path)\n con.tables['sites'].write_magic_file(custom_name=site_file,dir_path=dir_path)\n con.tables['locations'].write_magic_file(custom_name=loc_file,dir_path=dir_path)\n meas_file = con.tables['measurements'].write_magic_file(\n custom_name=meas_file,dir_path=dir_path)\n return True, meas_file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef s_magic(sfile, anisfile=\"specimens.txt\", dir_path=\".\", atype=\"AMS\",\n coord_type=\"s\", sigma=False, samp_con=\"1\", specnum=0,\n location=\"unknown\", spec=\"unknown\", sitename=\"unknown\",\n user=\"\", data_model_num=3, name_in_file=False, input_dir_path=\"\"):\n \"\"\"\n converts 
.s format data to measurements format.\n\n Parameters\n ----------\n sfile : str\n .s format file, required\n anisfile : str\n specimen filename, default 'specimens.txt'\n dir_path : str\n output directory, default \".\"\n atype : str\n anisotropy type (AMS, AARM, ATRM, default AMS)\n coord_type : str\n coordinate system ('s' for specimen, 't' for tilt-corrected,\n or 'g' for geographic, default 's')\n sigma : bool\n if True, last column has sigma, default False\n samp_con : str\n sample/site naming convention, default '1', see info below\n specnum : int\n number of characters to designate a specimen, default 0\n location : str\n location name, default \"unknown\"\n spec : str\n specimen name, default \"unknown\"\n sitename : str\n site name, default \"unknown\"\n user : str\n user name, default \"\"\n data_model_num : int\n MagIC data model 2 or 3, default 3\n name_in_file : bool\n first entry of each line is specimen name, default False\n input_dir_path : str\n input directory path IF different from dir_path, default \"\"\n\n Returns\n ---------\n Tuple : (True or False indicating if conversion was successful, meas_file name written)\n\n\n Input format\n --------\n X11,X22,X33,X12,X23,X13 (.s format file)\n X11,X22,X33,X12,X23,X13,sigma (.s format file with -sig option)\n SID, X11,X22,X33,X12,X23,X13 (.s format file with -n option)\n\n Info\n --------\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. 
[default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n\n\n \"\"\"\n con, Z = \"\", 1\n if samp_con:\n samp_con = str(samp_con)\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = samp_con.split(\"-\")[1]\n samp_con = \"4\"\n if samp_con == '6':\n print(\"option [6] is not currently supported\")\n return\n else:\n samp_con = con\n\n\n coord_dict = {'s': '-1', 't': '100', 'g': '0'}\n coord = coord_dict.get(coord_type, '-1')\n specnum = -specnum\n\n if data_model_num == 2:\n specimen_col = \"er_specimen_name\"\n sample_col = \"er_sample_name\"\n site_col = \"er_site_name\"\n loc_col = \"er_location_name\"\n citation_col = \"er_citation_names\"\n analyst_col = \"er_analyst_mail_names\"\n aniso_type_col = \"anisotropy_type\"\n experiment_col = \"magic_experiment_names\"\n sigma_col = \"anisotropy_sigma\"\n unit_col = \"anisotropy_unit\"\n tilt_corr_col = \"anisotropy_tilt_correction\"\n method_col = \"magic_method_codes\"\n outfile_type = \"rmag_anisotropy\"\n else:\n specimen_col = \"specimen\"\n sample_col = \"sample\"\n site_col = \"site\"\n loc_col = \"location\"\n citation_col = \"citations\"\n analyst_col = \"analysts\"\n aniso_type_col = \"aniso_type\"\n experiment_col = \"experiments\"\n sigma_col = \"aniso_s_sigma\"\n unit_col = \"aniso_s_unit\"\n tilt_corr_col = \"aniso_tilt_correction\"\n method_col = \"method_codes\"\n outfile_type = \"specimens\"\n # get down to bidness\n input_dir_path, dir_path = 
pmag.fix_directories(input_dir_path, dir_path)\n sfile = pmag.resolve_file_name(sfile, input_dir_path)\n anisfile = pmag.resolve_file_name(anisfile, dir_path)\n try:\n with open(sfile, 'r') as f:\n lines = f.readlines()\n except FileNotFoundError:\n return False, \"No such file: {}\".format(sfile)\n AnisRecs = []\n citation = \"This study\"\n # read in data\n for line in lines:\n AnisRec = {}\n rec = line.split()\n if name_in_file:\n k = 1\n spec = rec[0]\n else:\n k = 0\n trace = float(rec[k])+float(rec[k+1])+float(rec[k+2])\n s1 = '%10.9e' % (float(rec[k]) / trace)\n s2 = '%10.9e' % (float(rec[k+1]) / trace)\n s3 = '%10.9e' % (float(rec[k+2]) / trace)\n s4 = '%10.9e' % (float(rec[k+3]) / trace)\n s5 = '%10.9e' % (float(rec[k+4]) / trace)\n s6 = '%10.9e' % (float(rec[k+5]) / trace)\n AnisRec[citation_col] = citation\n AnisRec[specimen_col] = spec\n if specnum != 0:\n AnisRec[sample_col] = spec[:specnum]\n else:\n AnisRec[sample_col] = spec\n # if samp_con == \"6\":\n # for samp in Samps:\n # if samp['er_sample_name'] == AnisRec[\"er_sample_name\"]:\n # sitename = samp['er_site_name']\n # location = samp['er_location_name']\n if samp_con != \"\":\n sitename = pmag.parse_site(AnisRec[sample_col], samp_con, Z)\n AnisRec[loc_col] = location\n AnisRec[site_col] = sitename\n AnisRec[analyst_col] = user\n if atype == 'AMS':\n AnisRec[aniso_type_col] = \"AMS\"\n AnisRec[experiment_col] = spec+\":LP-X\"\n else:\n AnisRec[aniso_type_col] = atype\n AnisRec[experiment_col] = spec+\":LP-\"+atype\n if data_model_num != 3:\n AnisRec[\"anisotropy_s1\"] = s1\n AnisRec[\"anisotropy_s2\"] = s2\n AnisRec[\"anisotropy_s3\"] = s3\n AnisRec[\"anisotropy_s4\"] = s4\n AnisRec[\"anisotropy_s5\"] = s5\n AnisRec[\"anisotropy_s6\"] = s6\n else:\n AnisRec['aniso_s'] = \":\".join(\n [str(s) for s in [s1, s2, s3, s4, s5, s6]])\n if sigma:\n AnisRec[sigma_col] = '%10.8e' % (\n float(rec[k+6]) / trace)\n AnisRec[unit_col] = 'SI'\n AnisRec[tilt_corr_col] = coord\n AnisRec[method_col] = 'LP-' + 
atype\n AnisRecs.append(AnisRec)\n pmag.magic_write(anisfile, AnisRecs, outfile_type)\n print('data saved in ', anisfile)\n # try to extract location/site/sample info into tables\n con = cb.Contribution(dir_path, custom_filenames={\"specimens\": anisfile})\n con.propagate_all_tables_info()\n for table in con.tables:\n if table in ['samples', 'sites', 'locations']:\n # add in location name by hand\n if table == 'sites':\n con.tables['sites'].df['location'] = location\n con.write_table_to_file(table)\n return True, anisfile", "response": "This function converts. s format data to measurements."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts ascii files generated by SUFAR ver.4.0 to MagIC files Parameters ---------- ascfile : str input ASC file, required meas_output : str measurement output filename, default \"measurements.txt\" aniso_output : str anisotropy output filename, MagIC 2 only, \"rmag_anisotropy.txt\" spec_infile : str specimen infile, default None spec_outfile : str specimen outfile, default \"specimens.txt\" samp_outfile : str sample outfile, default \"samples.txt\" site_outfile : str site outfile, default \"sites.txt\" specnum : int number of characters to designate a specimen, default 0 sample_naming_con : str sample/site naming convention, default '1', see info below user : str user name, default \"\" locname : str location name, default \"unknown\" instrument : str instrument name, default \"\" static_15_position_mode : bool specify static 15 position mode - default False (is spinning) dir_path : str output directory, default \".\" input_dir_path : str input file directory IF different from dir_path, default \"\" data_model_num : int MagIC data model 2 or 3, default 3 Returns -------- type - Tuple : (True or False indicating if conversion was sucessful, file name written) Info -------- Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single 
character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY", "response": "def sufar4(ascfile, meas_output='measurements.txt', aniso_output='rmag_anisotropy.txt',\n spec_infile=None, spec_outfile='specimens.txt', samp_outfile='samples.txt',\n site_outfile='sites.txt', specnum=0, sample_naming_con='1', user=\"\",\n locname=\"unknown\", instrument='', static_15_position_mode=False,\n dir_path='.', input_dir_path='', data_model_num=3):\n \"\"\"\n Converts ascii files generated by SUFAR ver.4.0 to MagIC files\n\n Parameters\n ----------\n ascfile : str\n input ASC file, required\n meas_output : str\n measurement output filename, default \"measurements.txt\"\n aniso_output : str\n anisotropy output filename, MagIC 2 only, \"rmag_anisotropy.txt\"\n spec_infile : str\n specimen infile, default None\n spec_outfile : str\n specimen outfile, default \"specimens.txt\"\n samp_outfile : str\n sample outfile, default \"samples.txt\"\n site_outfile : str\n site outfile, default \"sites.txt\"\n specnum : int\n number of characters to designate a specimen, default 0\n sample_naming_con : str\n sample/site naming convention, default '1', see info below\n user : str\n user name, default \"\"\n locname : str\n location name, default \"unknown\"\n instrument : str\n instrument name, default \"\"\n static_15_position_mode : bool\n specify static 15 position mode - default False (is spinning)\n dir_path : str\n output directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n data_model_num : 
int\n MagIC data model 2 or 3, default 3\n\n Returns\n --------\n type - Tuple : (True or False indicating if conversion was sucessful, file name written)\n\n Info\n --------\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. [default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n\n \"\"\"\n\n citation = 'This study'\n cont = 0\n Z = 1\n AniRecSs, AniRecs, SpecRecs, SampRecs, SiteRecs, MeasRecs = [], [], [], [], [], []\n isspec = '0'\n spin = 0\n data_model_num = int(float(data_model_num))\n\n # set defaults for MagIC 2\n if data_model_num == 2:\n if meas_output == 'measurements.txt':\n meas_output = 'magic_measurements.txt'\n if spec_outfile == 'specimens.txt':\n spec_outfile = 'er_specimens.txt'\n if samp_outfile == 'samples.txt':\n samp_outfile = 'er_samples.txt'\n if site_outfile == 'sites.txt':\n site_outfile = 'er_sites.txt'\n\n # set column names for MagIC 3\n spec_name_col = 'specimen'\n samp_name_col = 'sample'\n site_name_col = 'site'\n loc_name_col = 'location'\n citation_col = 'citations'\n method_col = 'method_codes'\n site_description_col = 'description'\n expedition_col = 'expedition_name'\n instrument_col = 'instrument_codes'\n experiment_col = 'experiments'\n analyst_col = 'analysts'\n quality_col = 'quality'\n aniso_quality_col = 'result_quality'\n meas_standard_col = 'standard'\n meas_description_col = 'description'\n aniso_type_col = 'aniso_type'\n aniso_unit_col = 'aniso_s_unit'\n aniso_n_col = 'aniso_s_n_measurements'\n 
azimuth_col = 'azimuth'\n spec_volume_col = 'volume'\n samp_dip_col = 'dip'\n bed_dip_col = 'bed_dip'\n bed_dip_direction_col = 'bed_dip_direction'\n chi_vol_col = 'susc_chi_volume'\n aniso_sigma_col = 'aniso_s_sigma'\n aniso_unit_col = 'aniso_s_unit'\n aniso_tilt_corr_col = 'aniso_tilt_correction'\n meas_table_name = 'measurements'\n spec_table_name = 'specimens'\n samp_table_name = 'samples'\n site_table_name = 'sites'\n\n # set column names for MagIC 2\n if data_model_num == 2:\n spec_name_col = 'er_specimen_name'\n samp_name_col = 'er_sample_name'\n site_name_col = 'er_site_name'\n loc_name_col = 'er_location_name'\n citation_col = 'er_citation_names'\n method_col = 'magic_method_codes'\n site_description_col = 'site_description'\n expedition_col = 'er_expedition_name'\n instrument_col = 'magic_instrument_codes'\n experiment_col = 'magic_experiment_names'\n analyst_col = 'er_analyst_mail_names'\n quality_col = 'measurement_flag'\n aniso_quality_col = 'anisotropy_flag'\n meas_standard_col = 'measurement_standard'\n meas_description_col = 'measurement_description'\n aniso_type_col = 'anisotropy_type'\n aniso_unit_col = 'anisotropy_unit'\n aniso_n_col = 'anisotropy_n'\n azimuth_col = 'sample_azimuth'\n spec_volume_col = 'specimen_volume'\n samp_dip_col = 'sample_dip'\n bed_dip_col = 'sample_bed_dip'\n bed_dip_direction_col = 'sample_bed_dip_direction'\n chi_vol_col = 'measurement_chi_volume'\n aniso_sigma_col = 'anisotropy_sigma'\n aniso_unit_col = 'anisotropy_unit'\n aniso_tilt_corr_col = 'anisotropy_tilt_correction'\n meas_table_name = 'magic_measurements'\n spec_table_name = 'er_specimens'\n samp_table_name = 'er_samples'\n site_table_name = 'er_sites'\n\n # create full path for files\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n ascfile = os.path.join(input_dir_path, ascfile)\n aniso_output = os.path.join(output_dir_path, aniso_output)\n # initialized but not used\n meas_output = os.path.join(output_dir_path, 
meas_output)\n\n spec_outfile = os.path.join(output_dir_path, spec_outfile)\n samp_outfile = os.path.join(output_dir_path, samp_outfile)\n site_outfile = os.path.join(output_dir_path, site_outfile)\n\n if \"4\" in sample_naming_con:\n if \"-\" not in sample_naming_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z = sample_naming_con.split(\"-\")[1]\n sample_naming_con = \"4\"\n if \"7\" in sample_naming_con:\n if \"-\" not in sample_naming_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z = sample_naming_con.split(\"-\")[1]\n sample_naming_con = \"7\"\n\n if static_15_position_mode:\n spin = 0\n\n if spec_infile:\n if os.path.isfile(os.path.join(input_dir_path, str(spec_infile))):\n # means an er_specimens.txt file has been provided with sample,\n # site, location (etc.) info\n isspec = '1'\n\n specnum = int(specnum)\n\n if isspec == \"1\":\n specs, file_type = pmag.magic_read(spec_infile)\n\n specnames, sampnames, sitenames = [], [], []\n # if '-new' not in sys.argv: # see if there are already specimen,sample, site files lying around\n # try:\n # SpecRecs,file_type=pmag.magic_read(input_dir_path+'/er_specimens.txt')\n # for spec in SpecRecs:\n # if spec['er_specimen_name'] not in specnames:\n # specnames.append(samp['er_specimen_name'])\n # except:\n # SpecRecs,specs=[],[]\n # try:\n # SampRecs,file_type=pmag.magic_read(input_dir_path+'/er_samples.txt')\n # for samp in SampRecs:\n # if samp['er_sample_name'] not in sampnames:sampnames.append(samp['er_sample_name'])\n # except:\n # sampnames,SampRecs=[],[]\n # try:\n # SiteRecs,file_type=pmag.magic_read(input_dir_path+'/er_sites.txt')\n # for site in SiteRecs:\n # if site['er_site_names'] not in sitenames:sitenames.append(site['er_site_name'])\n # except:\n # sitenames,SiteRecs=[],[]\n\n try:\n file_input = 
open(ascfile, 'r')\n except:\n print('Error opening file: ', ascfile)\n return False, 'Error opening file: {}'.format(ascfile)\n Data = file_input.readlines()\n file_input.close()\n k = 0\n while k < len(Data):\n line = Data[k]\n words = line.split()\n if \"ANISOTROPY\" in words: # first line of data for the spec\n MeasRec, AniRec, SpecRec, SampRec, SiteRec = {}, {}, {}, {}, {}\n specname = words[0]\n AniRec[spec_name_col] = specname\n if isspec == \"1\":\n for spec in specs:\n if spec[spec_name_col] == specname:\n AniRec[samp_name_col] = spec[samp_name_col]\n AniRec[site_name_col] = spec[site_name_col]\n AniRec[loc_name_col] = spec[loc_name_col]\n break\n elif isspec == \"0\":\n if specnum != 0:\n sampname = specname[:-specnum]\n else:\n sampname = specname\n AniRec[samp_name_col] = sampname\n SpecRec[spec_name_col] = specname\n SpecRec[samp_name_col] = sampname\n SampRec[samp_name_col] = sampname\n SiteRec[samp_name_col] = sampname\n SiteRec[site_description_col] = 's'\n if sample_naming_con != \"9\":\n AniRec[site_name_col] = pmag.parse_site(\n AniRec[samp_name_col], sample_naming_con, Z)\n SpecRec[site_name_col] = pmag.parse_site(\n AniRec[samp_name_col], sample_naming_con, Z)\n SampRec[site_name_col] = pmag.parse_site(\n AniRec[samp_name_col], sample_naming_con, Z)\n SiteRec[site_name_col] = pmag.parse_site(\n AniRec[samp_name_col], sample_naming_con, Z)\n else:\n AniRec[site_name_col] = specname\n SpecRec[site_name_col] = specname\n SampRec[site_name_col] = specname\n SiteRec[site_name_col] = specname\n pieces = specname.split('-')\n AniRec[expedition_col] = pieces[0]\n SpecRec[expedition_col] = pieces[0]\n SampRec[expedition_col] = pieces[0]\n SiteRec[expedition_col] = pieces[0]\n location = pieces[1]\n AniRec[loc_name_col] = locname\n SpecRec[loc_name_col] = locname\n SampRec[loc_name_col] = locname\n SiteRec[loc_name_col] = locname\n AniRec[citation_col] = \"This study\"\n SpecRec[citation_col] = \"This study\"\n SampRec[citation_col] = \"This study\"\n 
SiteRec[citation_col] = \"This study\"\n AniRec[citation_col] = \"This study\"\n AniRec[instrument_col] = instrument\n AniRec[method_col] = \"LP-X:AE-H:LP-AN-MS\"\n AniRec[experiment_col] = specname + \":\" + \"LP-AN-MS\"\n AniRec[analyst_col] = user\n for key in list(AniRec.keys()):\n MeasRec[key] = AniRec[key]\n if data_model_num == 2:\n MeasRec['magic_experiment_name'] = AniRec.get('magic_experiment_names', '')\n if 'magic_experiment_names' in MeasRec:\n MeasRec.pop('magic_experiment_names')\n if data_model_num == 3:\n MeasRec['experiment'] = AniRec.get('experiments', '')\n if 'experiments' in MeasRec:\n MeasRec.pop('experiments')\n MeasRec[quality_col] = 'g'\n AniRec[aniso_quality_col] = 'g'\n MeasRec[meas_standard_col] = 'u'\n MeasRec[meas_description_col] = 'Bulk sucsecptibility measurement'\n AniRec[aniso_type_col] = \"AMS\"\n AniRec[aniso_unit_col] = \"Normalized by trace\"\n if spin == 1:\n AniRec[aniso_n_col] = \"192\"\n else:\n AniRec[aniso_n_col] = \"15\"\n if 'Azi' in words and isspec == '0':\n az = float(words[1])\n P1 = float(words[4])\n P2 = float(words[5])\n P3 = float(words[6])\n # P4 relates to a fabric or bedding measurement -- not dealt with\n # here\n P4 = float(words[7])\n az = az + P1 * 360. / 12. - P3 * 360. 
/ 12.\n if az >= 360:\n az = az - 360\n elif az <= -360:\n az = az + 360\n labaz = az\n SampRec[azimuth_col] = str(round(az, 1))\n if 'Dip' in words:\n # convert actual volume to m^3 from cm^3\n SpecRec[spec_volume_col] = '%8.3e' % (float(words[10]) * 1e-6)\n dip = float(words[1])\n if P2 == 90:\n dip = dip - 90.\n labdip = dip\n SampRec[samp_dip_col] = str(round(dip, 1))\n if 'T1' in words and 'F1' in words:\n k += 2 # read in fourth line down\n line = Data[k]\n rec = line.split()\n dd = rec[1].split('/')\n dip_direction = int(dd[0]) + 90\n SampRec[bed_dip_direction_col] = '%i' % (dip_direction)\n SampRec[bed_dip_col] = dd[1]\n bed_dip = float(dd[1])\n if \"Mean\" in words:\n k += 4 # read in fourth line down\n line = Data[k]\n rec = line.split()\n MeasRec[chi_vol_col] = rec[1]\n sigma = .01 * float(rec[2]) / 3.\n AniRec[aniso_sigma_col] = '%7.4f' % (sigma)\n AniRec[aniso_unit_col] = 'SI'\n if \"factors\" in words:\n k += 4 # read in second line down\n line = Data[k]\n rec = line.split()\n if \"Specimen\" in words: # first part of specimen data\n # eigenvalues sum to unity - not 3\n s1_val = '%7.4f' % (float(words[5]) / 3.)\n s2_val = '%7.4f' % (float(words[6]) / 3.)\n s3_val = '%7.4f' % (float(words[7]) / 3.)\n k += 1\n line = Data[k]\n rec = line.split()\n # eigenvalues sum to unity - not 3\n s4_val= '%7.4f' % (float(rec[5]) / 3.)\n s5_val = '%7.4f' % (float(rec[6]) / 3.)\n s6_val = '%7.4f' % (float(rec[7]) / 3.)\n # parse for data model 2\n if data_model_num == 2:\n AniRec['anisotropy_s1'] = s1_val\n AniRec['anisotropy_s2'] = s2_val\n AniRec['anisotropy_s3'] = s3_val\n AniRec['anisotropy_s4'] = s4_val\n AniRec['anisotropy_s5'] = s5_val\n AniRec['anisotropy_s6'] = s6_val\n # parse for data model 3\n else:\n vals = (s1_val, s2_val, s3_val, s4_val, s5_val, s6_val)\n AniRec['aniso_s'] = \":\".join([v.strip() for v in vals])\n #\n AniRec[aniso_tilt_corr_col] = '-1'\n AniRecs.append(AniRec)\n AniRecG, AniRecT = {}, {}\n for key in list(AniRec.keys()):\n AniRecG[key] 
= AniRec[key]\n for key in list(AniRec.keys()):\n AniRecT[key] = AniRec[key]\n sbar = []\n sbar.append(float(s1_val))\n sbar.append(float(s2_val))\n sbar.append(float(s3_val))\n sbar.append(float(s4_val))\n sbar.append(float(s5_val))\n sbar.append(float(s6_val))\n sbarg = pmag.dosgeo(sbar, labaz, labdip)\n s1_g = '%12.10f' % (sbarg[0])\n s2_g = '%12.10f' % (sbarg[1])\n s3_g = '%12.10f' % (sbarg[2])\n s4_g = '%12.10f' % (sbarg[3])\n s5_g = '%12.10f' % (sbarg[4])\n s6_g = '%12.10f' % (sbarg[5])\n if data_model_num == 2:\n AniRecG[\"anisotropy_s1\"] = s1_g\n AniRecG[\"anisotropy_s2\"] = s2_g\n AniRecG[\"anisotropy_s3\"] = s3_g\n AniRecG[\"anisotropy_s4\"] = s4_g\n AniRecG[\"anisotropy_s5\"] = s5_g\n AniRecG[\"anisotropy_s6\"] = s6_g\n else:\n vals = (s1_g, s2_g, s3_g, s4_g, s5_g, s6_g)\n AniRecG['aniso_s'] = \":\".join([v.strip() for v in vals])\n AniRecG[aniso_tilt_corr_col] = '0'\n AniRecs.append(AniRecG)\n if bed_dip != \"\" and bed_dip != 0: # have tilt correction\n sbart = pmag.dostilt(sbarg, dip_direction, bed_dip)\n s1_t = '%12.10f' % (sbart[0])\n s2_t = '%12.10f' % (sbart[1])\n s3_t = '%12.10f' % (sbart[2])\n s4_t = '%12.10f' % (sbart[3])\n s5_t = '%12.10f' % (sbart[4])\n s6_t = '%12.10f' % (sbart[5])\n if data_model_num == 2:\n AniRecT[\"anisotropy_s1\"] = s1_t\n AniRecT[\"anisotropy_s2\"] = s2_t\n AniRecT[\"anisotropy_s3\"] = s3_t\n AniRecT[\"anisotropy_s4\"] = s4_t\n AniRecT[\"anisotropy_s5\"] = s5_t\n AniRecT[\"anisotropy_s6\"] = s6_t\n else:\n vals = (s1_t, s2_t, s3_t, s4_t, s5_t, s6_t)\n AniRecT[\"aniso_s\"] = \":\".join([v.strip() for v in vals])\n AniRecT[aniso_tilt_corr_col] = '100'\n AniRecs.append(AniRecT)\n MeasRecs.append(MeasRec)\n if SpecRec[spec_name_col] not in specnames:\n SpecRecs.append(SpecRec)\n specnames.append(SpecRec[spec_name_col])\n if SampRec[samp_name_col] not in sampnames:\n SampRecs.append(SampRec)\n sampnames.append(SampRec[samp_name_col])\n if SiteRec[site_name_col] not in sitenames:\n SiteRecs.append(SiteRec)\n 
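The block above normalizes the six eigenparameters so they sum to unity, then rotates the six-element tensor into geographic coordinates with `pmag.dosgeo` and, when a bedding dip is available, into tilt-corrected coordinates with `pmag.dostilt`. As a minimal sketch of what such a coordinate change does, the hypothetical `rotate_s6` below rebuilds the symmetric 3x3 tensor from the `[s11, s22, s33, s12, s23, s13]` convention and applies an orthogonal rotation; the exact rotation conventions used by pmagpy differ in detail, so this only illustrates the mechanics (note that any rotation leaves the trace, and hence the unity normalization, unchanged).

```python
import math

def rotate_s6(s6, azimuth, plunge):
    """Rotate a six-element anisotropy tensor [s11, s22, s33, s12, s23, s13]
    into a new frame. Illustration only: pmag.dosgeo/pmag.dostilt use their
    own angle conventions, so treat this as a sketch, not a replacement."""
    s11, s22, s33, s12, s23, s13 = s6
    A = [[s11, s12, s13], [s12, s22, s23], [s13, s23, s33]]
    ca, sa = math.cos(math.radians(azimuth)), math.sin(math.radians(azimuth))
    cp, sp = math.cos(math.radians(plunge)), math.sin(math.radians(plunge))
    # rotate about z by the azimuth, then about y by the plunge
    Rz = [[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]]
    Ry = [[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    R = matmul(Rz, Ry)
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    B = matmul(matmul(R, A), Rt)
    # back to the six-element convention
    return [B[0][0], B[1][1], B[2][2], B[0][1], B[1][2], B[0][2]]
```

Because the rotation is orthogonal, `s11 + s22 + s33` is the same before and after, which is why the "normalized by trace" bookkeeping above survives the geographic and tilt corrections.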
sitenames.append(SiteRec[site_name_col])\n k += 1 # skip to next specimen\n\n pmag.magic_write(meas_output, MeasRecs, meas_table_name)\n print(\"bulk measurements put in \", meas_output)\n # if isspec==\"0\":\n SpecOut, keys = pmag.fillkeys(SpecRecs)\n #\n # for MagIC 2, anisotropy records go in rmag_anisotropy\n if data_model_num == 2:\n pmag.magic_write(aniso_output, AniRecs, 'rmag_anisotropy')\n print(\"anisotropy tensors put in \", aniso_output)\n # for MagIC 3, anisotropy records go in specimens\n if data_model_num == 3:\n full_SpecOut = []\n spec_list = []\n for rec in SpecOut:\n full_SpecOut.append(rec)\n spec_name = rec[spec_name_col]\n if spec_name not in spec_list:\n spec_list.append(spec_name)\n ani_recs = pmag.get_dictitem(AniRecs, spec_name_col, spec_name, 'T')\n full_SpecOut.extend(ani_recs)\n\n\n # FILL KEYS\n full_SpecOut, keys = pmag.fillkeys(full_SpecOut)\n else:\n full_SpecOut = SpecOut\n pmag.magic_write(spec_outfile, full_SpecOut, spec_table_name)\n print(\"specimen/anisotropy info put in \", spec_outfile)\n SampOut, keys = pmag.fillkeys(SampRecs)\n pmag.magic_write(samp_outfile, SampOut, samp_table_name)\n print(\"sample info put in \", samp_outfile)\n SiteOut, keys = pmag.fillkeys(SiteRecs)\n pmag.magic_write(site_outfile, SiteOut, site_table_name)\n print(\"site info put in \", site_outfile)\n return True, meas_output"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting TDT formatted files to measurements format files Parameters ---------- input_dir_path : str directory with one or more .tdt files experiment: str one of: [\"Thellier\", \"ATRM 6 pos\", \"NLT\"], default \"Thellier\" meas_file_name : str default \"measurements.txt\" spec_file_name : str default \"specimens.txt\" samp_file_name : str default \"samples.txt\" site_file_name : str default \"sites.txt\" loc_file_name : str default \"locations.txt\" user : str default \"\" location : str default \"\" lab_dec: int default: 0 lab_inc: int default 90 
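For data model 3 the code above interleaves each specimen record with its matching anisotropy records (via `pmag.get_dictitem(..., 'T')`) before calling `pmag.fillkeys` and writing the combined specimens table. The sketch below shows that merge pattern with a hypothetical stand-in for the exact-match mode of `get_dictitem`; names like `filter_records` and `merge_spec_aniso` are illustrative, not pmagpy API.

```python
def filter_records(records, key, value):
    """Minimal stand-in for pmag.get_dictitem(records, key, value, 'T'):
    keep records whose column `key` exactly matches `value`."""
    return [rec for rec in records if rec.get(key) == value]

def merge_spec_aniso(spec_recs, ani_recs, name_col="specimen"):
    """Interleave each specimen record with its anisotropy records, as the
    data-model-3 branch does before pmag.fillkeys pads the columns."""
    out, seen = [], []
    for rec in spec_recs:
        out.append(rec)
        name = rec[name_col]
        if name not in seen:
            seen.append(name)
            out.extend(filter_records(ani_recs, name_col, name))
    return out
```

Keeping the specimen row immediately ahead of its tensor rows means a single `fillkeys` pass can pad both record types to a common column set for one output file.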
moment_units : str must be one of: [\"mA/m\", \"emu\", \"Am^2\"], default \"mA/m\" samp_name_con : str or int {1: \"sample=specimen\", 2: \"no. of terminate characters\", 3: \"character delimited\"} samp_name_chars : str or int number of characters to remove for sample name, (or delimiting character), default 0 site_name_con : str or int {1: \"site=sample\", 2: \"no. of terminate characters\", 3: \"character delimited\"} site_name_chars : str or int number of characters to remove for site name, (or delimiting character), default 0 volume : float volume in cc, default 12 output_dir_path : str path for file output, defaults to input_dir_path Returns --------- tuple : (True if program ran else False, measurement outfile name or error message if failed)", "response": "def tdt(input_dir_path, experiment_name=\"Thellier\", meas_file_name=\"measurements.txt\",\n spec_file_name=\"specimens.txt\", samp_file_name=\"samples.txt\",\n site_file_name=\"sites.txt\", loc_file_name=\"locations.txt\",\n user=\"\", location=\"\", lab_dec=0, lab_inc=90, moment_units=\"mA/m\",\n samp_name_con=\"sample=specimen\", samp_name_chars=0,\n site_name_con=\"site=sample\", site_name_chars=0, volume=12.,\n output_dir_path=\"\"):\n\n \"\"\"\n converts TDT formatted files to measurements format files\n\n Parameters\n ----------\n input_dir_path : str\n directory with one or more .tdt files\n experiment: str\n one of: [\"Thellier\", \"ATRM 6 pos\", \"NLT\"], default \"Thellier\"\n meas_file_name : str\n default \"measurements.txt\"\n spec_file_name : str\n default \"specimens.txt\"\n samp_file_name : str\n default \"samples.txt\"\n site_file_name : str\n default \"sites.txt\"\n loc_file_name : str\n default \"locations.txt\"\n user : str\n default \"\"\n location : str\n default \"\"\n lab_dec: int\n default: 0\n lab_inc: int\n default 90\n moment_units : str\n must be one of: [\"mA/m\", \"emu\", \"Am^2\"], default \"mA/m\"\n samp_name_con : str or int\n {1: \"sample=specimen\", 2: \"no. 
of terminate characters\", 3: \"character delimited\"}\n samp_name_chars : str or int\n number of characters to remove for sample name, (or delimiting character), default 0\n site_name_con : str or int\n {1: \"site=sample\", 2: \"no. of terminate characters\", 3: \"character delimited\"}\n site_name_chars : str or int\n number of characters to remove for site name, (or delimiting character), default 0\n volume : float\n volume in cc, default 12\n output_dir_path : str\n path for file output, defaults to input_dir_path\n\n\n Returns\n ---------\n tuple : (True if program ran else False, measurement outfile name or error message if failed)\n \"\"\"\n\n # --------------------------------------\n # Read the files\n #\n # Database structure\n # Thellier_type experiment:\n #\n # 1) Each file contains the data one specimen\n # 2) First line is the header: \"Thellier-tdt\"\n # 3) Second line in header inlucdes 4 fields:\n # [Blab] ,[unknown_1] , [unknown_2] , [unknown_3] , [unknown_4]\n # 4) Body includes 5 fields\n # [specimen_name], [treatments], [moment],[meas_dec],[meas_dec\n # Tretment: XXX.0 (zerofield)\n # XXX.1 (infield)\n # XXX.2 (pTRM check)\n # XXX.3 (Tail check)\n # XXX.4 (Additivity check; Krasa et al., 2003)\n # XXX.5 (Original Thellier-Thellier protocol. 
)\n # (where .5 is for the second direction and .1 in the first)\n # XXX = temperature in degrees\n #\n #\n # IMPORTANT ASSUMPTION:\n # (1) lab field is always in Z direction (theta=0, phi=90)\n # (2) Thermal demagnetization - NO MICROWAVE\n # (3) if if XXX <50 then assuming that this is NRM (273K)\n #\n # -------------------------------------\n #\n # ATRM in six positions\n #\n # Tretment: XXX.0 zerofield\n # XXX.1 +x\n # XXX.2 +y\n # XXX.3 +z\n # XXX.4 -x\n # XXX.5 -y\n # XXX.6 -z\n # XXX.7 alteration check\n # IMPORTANT REMARKS:\n #\n # (1) If the program check if the direction of the magnetization fits the coding above\n # if not, an error message will appear\n # (2) Alteration ckeck can be in any direction\n # (3) the order of the measurements is not important\n #\n\n\n def get_sample_name(specimen, sample_naming_convention):\n if sample_naming_convention[0] == \"sample=specimen\":\n sample = specimen\n elif sample_naming_convention[0] == \"no. of terminate characters\":\n n = int(sample_naming_convention[1])*-1\n sample = specimen[:n]\n elif sample_naming_convention[0] == \"charceter delimited\":\n d = sample_naming_convention[1]\n sample_splitted = specimen.split(d)\n if len(sample_splitted) == 1:\n sample = sample_splitted[0]\n else:\n sample = d.join(sample_splitted[:-1])\n return sample\n\n def get_site_name(sample, site_naming_convention):\n if site_naming_convention[0] == \"site=sample\":\n site = sample\n elif site_naming_convention[0] == \"no. 
of terminate characters\":\n n = int(site_naming_convention[1])*-1\n site = sample[:n]\n elif site_naming_convention[0] == \"charceter delimited\":\n d = site_naming_convention[1]\n site_splitted = sample.split(d)\n if len(site_splitted) == 1:\n site = site_splitted[0]\n else:\n site = d.join(site_splitted[:-1])\n return site\n\n ## format some variables\n\n # convert volume from cc to m^3\n volume = float(volume) * 1e-6\n if not output_dir_path:\n output_dir_path = input_dir_path\n samp_name_cons = {1: 'sample=specimen', 2: 'no. of terminate characters', 3: 'character delimited'}\n if not samp_name_con:\n samp_name_con = \"sample=specimen\"\n elif samp_name_con not in samp_name_cons.values():\n try:\n samp_name_con = samp_name_cons.get(int(samp_name_con), 'sample=specimen')\n except ValueError:\n samp_name_con=\"sample=specimen\"\n if samp_name_con == 'no. of terminate characters' and not samp_name_chars:\n print(\"-W- You have selected the sample naming convention: 'no. of terminate characters',\\n but have provided the number of characters as 0.\\n Defaulting to use 'sample=specimen' instead.\")\n samp_name_con = 'sample=specimen'\n\n site_name_cons = {1: 'site=sample', 2: 'no. of terminate characters', 3: 'character delimited'}\n if not site_name_con:\n site_name_con = \"site=sample\"\n elif site_name_con not in site_name_cons.values():\n try:\n site_name_con = site_name_cons.get(int(site_name_con), 'site=sample')\n except ValueError:\n site_name_con = \"site=sample\"\n if site_name_con == 'no. of terminate characters' and not site_name_chars:\n print(\"-W- You have selected the site naming convention: 'no. 
of terminate characters',\\n but have provided the number of characters as 0.\\n Defaulting to use 'site=sample' instead.\")\n site_name_con = 'site=sample'\n\n\n Data = {}\n\n # -----------------------------------\n # First, read all files and sort data by specimen and by Experiment type\n # -----------------------------------\n\n for files in os.listdir(input_dir_path):\n if files.endswith(\".tdt\"):\n fname = os.path.join(input_dir_path, files)\n print(\"Open file: \", fname)\n fin = open(fname, 'r')\n header_codes = ['labfield', 'core_azimuth',\n 'core_plunge', 'bedding_dip_direction', 'bedding_dip']\n body_codes = ['specimen_name',\n 'treatment', 'moment', 'dec', 'inc']\n tmp_body = []\n tmp_header_data = {}\n line_number = 0\n continue_reading = True\n line = fin.readline() # ignore first line\n lines = fin.readlines()\n fin.close()\n for line in lines:\n\n if \"END\" in line:\n break\n\n if line.strip('\\n') == \"\":\n break\n\n this_line = line.strip('\\n').split()\n\n if len(this_line) < 5:\n continue\n\n # ---------------------------------------------------\n # fix muxworthy funky data format\n # ---------------------------------------------------\n if len(this_line) < 5 and line_number != 0:\n new_line = []\n for i in range(len(this_line)):\n if i > 1 and \"-\" in this_line[i]:\n tmp = this_line[i].replace(\"-\", \" -\")\n tmp1 = tmp.split()\n for i in range(len(tmp1)):\n new_line.append(tmp1[i])\n else:\n new_line.append(this_line[i])\n this_line = list(copy(new_line))\n\n # -------------------------------\n # Read infromation from Header and body\n # The data is stored in a dictionary:\n # Data[specimen][Experiment_Type]['header_data']=tmp_header_data --> a dictionary with header data\n # Data[specimen][Experiment_Type]['meas_data']=[dict1, dict2, ...] 
--> a list of dictionaries with measurement data\n # -------------------------------\n\n # ---------------------------------------------------\n # header\n # ---------------------------------------------------\n if line_number == 0:\n\n for i in range(len(this_line)):\n tmp_header_data[header_codes[i]] = this_line[i]\n\n line_number += 1\n\n # ---------------------------------------------------\n # body\n # ---------------------------------------------------\n\n else:\n tmp_data = {}\n for i in range(min(len(this_line), len(body_codes))):\n tmp_data[body_codes[i]] = this_line[i]\n tmp_body.append(tmp_data)\n\n # ------------\n\n specimen = tmp_body[0]['specimen_name']\n line_number += 1\n\n if specimen not in list(Data.keys()):\n Data[specimen] = {}\n Experiment_Type = experiment_name\n if Experiment_Type not in list(Data[specimen].keys()):\n Data[specimen][Experiment_Type] = {}\n Data[specimen][Experiment_Type]['meas_data'] = tmp_body\n Data[specimen][Experiment_Type]['header_data'] = tmp_header_data\n Data[specimen][Experiment_Type]['sample_naming_convention'] = [samp_name_con, samp_name_chars]\n Data[specimen][Experiment_Type]['site_naming_convention'] = [site_name_con, site_name_chars]\n Data[specimen][Experiment_Type]['location'] = location\n Data[specimen][Experiment_Type]['user_name'] = user\n Data[specimen][Experiment_Type]['volume'] = volume\n Data[specimen][Experiment_Type]['moment_units'] = moment_units\n Data[specimen][Experiment_Type]['labfield_DI'] = [lab_dec, lab_inc]\n\n # -----------------------------------\n # Convert Data{} to MagIC\n # -----------------------------------\n MeasRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n specimens_list = list(Data.keys())\n specimens_list.sort()\n for specimen in specimens_list:\n Experiment_Types_list = list(Data[specimen].keys())\n Experiment_Types_list.sort()\n for Experiment_Type in Experiment_Types_list:\n if Experiment_Type in [\"Thellier\"]:\n\n tmp_MeasRecs = []\n\n # IMORTANT:\n # 
phi and theta of lab field are not defined\n # defaults are defined here:\n phi, theta = '0.', '90.'\n\n header_line = Data[specimen][Experiment_Type]['header_data']\n experiment_treatments = []\n measurement_running_number = 0\n # start to make a list of the methcodes. and later will merge it to one string\n methcodes = [\"LP-PI-TRM\"]\n\n for i in range(len(Data[specimen][Experiment_Type]['meas_data'])):\n meas_line = Data[specimen][Experiment_Type]['meas_data'][i]\n\n # ------------------\n # check if the same treatment appears more than once. If yes, assuming that the measurements is repeated twice,\n # ignore the first, and take only the second one\n # ------------------\n\n if i < (len(Data[specimen][Experiment_Type]['meas_data'])-2):\n Repeating_measurements = True\n for key in ['treatment', 'specimen_name']:\n if Data[specimen][Experiment_Type]['meas_data'][i][key] != Data[specimen][Experiment_Type]['meas_data'][i+1][key]:\n Repeating_measurements = False\n if Repeating_measurements == True:\n \"Found a repeating measurement at line %i, sample %s. 
taking the last one\" % (\n i, specimen)\n continue\n # ------------------\n # Special treatment for first line (NRM data).\n # ------------------\n\n if i == 0:\n if \".\" not in meas_line['treatment']:\n meas_line['treatment'] = \"0.0\"\n # if NRM is in the form of \".0\" instead of \"0.0\"\n elif meas_line['treatment'].split(\".\")[0] == \"\" and meas_line['treatment'].split(\".\")[1] == '0':\n meas_line['treatment'] = \"0.0\"\n # if NRM is in the form of \"20.0\" instead of \"0.0\"\n elif float(meas_line['treatment'].split(\".\")[0]) < 50 and float(meas_line['treatment'].split(\".\")[-1]) == 0:\n meas_line['treatment'] = \"0.0\"\n\n # ------------------\n # fix line in format of XX instead of XX.YY\n # ------------------\n if \".\" not in meas_line['treatment']:\n meas_line['treatment'] = meas_line['treatment']+\".0\"\n if meas_line['treatment'].split(\".\")[1] == \"\":\n meas_line['treatment'] = meas_line['treatment']+\"0\"\n\n # ------------------\n # init names and dictionaries\n # ------------------\n\n MeasRec, SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}, {}\n\n # convert from microT to Tesla\n labfield = float(header_line['labfield'])*1e-6\n sample = get_sample_name(\n specimen, Data[specimen][Experiment_Type]['sample_naming_convention'])\n site = get_site_name(\n sample, Data[specimen][Experiment_Type]['site_naming_convention'])\n location = Data[specimen][Experiment_Type]['location']\n if location == '':\n location = 'unknown'\n\n # ------------------\n # Fill data\n # ------------------\n\n # Start with S'tables and Loc Table\n if specimen != \"\" and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else \"\" for x in SpecRecs]:\n SpecRec['specimen'] = specimen\n SpecRec['sample'] = sample\n SpecRec['volume'] = Data[specimen][Experiment_Type]['volume']\n SpecRec['citations'] = \"This study\"\n SpecRec['analysts'] = Data[specimen][Experiment_Type]['user_name']\n SpecRecs.append(SpecRec)\n if sample != \"\" and sample not in 
[x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['sample'] = sample\n SampRec['site'] = site\n SampRec['citations'] = \"This study\"\n SampRec['analysts'] = Data[specimen][Experiment_Type]['user_name']\n SampRecs.append(SampRec)\n if site != \"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec['citations'] = \"This study\"\n SiteRec['analysts'] = Data[specimen][Experiment_Type]['user_name']\n SiteRecs.append(SiteRec)\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec['citations'] = \"This study\"\n LocRec['analysts'] = Data[specimen][Experiment_Type]['user_name']\n LocRecs.append(LocRec)\n\n # now the measurement Rec\n MeasRec['citations'] = \"This study\"\n # experiment is set in pmag.measurements_methods3\n # MeasRec[\"experiments\"]=\"\"\n MeasRec[\"specimen\"] = specimen\n MeasRec['analysts'] = Data[specimen][Experiment_Type]['user_name']\n MeasRec[\"quality\"] = 'g'\n MeasRec[\"standard\"] = 'u'\n MeasRec[\"treat_step_num\"] = \"%i\" % measurement_running_number\n MeasRec[\"dir_dec\"] = meas_line['dec']\n MeasRec[\"dir_inc\"] = meas_line['inc']\n if Data[specimen][Experiment_Type]['moment_units'] == 'mA/m':\n MeasRec[\"magn_moment\"] = \"%5e\" % (float(meas_line['moment'])*1e-3*float(\n Data[specimen][Experiment_Type]['volume'])) # converted to Am^2\n if Data[specimen][Experiment_Type]['moment_units'] == 'emu':\n MeasRec[\"magn_moment\"] = \"%5e\" % (\n float(meas_line['moment'])*1e-3) # converted to Am^2\n if Data[specimen][Experiment_Type]['moment_units'] == 'Am^2':\n MeasRec[\"magn_moment\"] = \"%5e\" % (\n float(meas_line['moment'])) # converted to Am^2\n MeasRec[\"meas_temp\"] = '273.' 
# room temp in kelvin\n\n # Date and time\n## date=meas_line['Measurement Date'].strip(\"\\\"\").split('-')\n# yyyy=date[2];dd=date[1];mm=date[0]\n## hour=meas_line['Measurement Time'].strip(\"\\\"\")\n# MeasRec[\"measurement_date\"]=yyyy+':'+mm+\":\"+dd+\":\"+hour\n\n # lab field data: distinguish between PI experiments to AF/Thermal\n treatments = meas_line['treatment'].split(\".\")\n if float(treatments[1]) == 0:\n MeasRec[\"treat_dc_field\"] = '0'\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n else:\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n MeasRec[\"treat_dc_field_phi\"] = Data[specimen][Experiment_Type]['labfield_DI'][0]\n MeasRec[\"treat_dc_field_theta\"] = Data[specimen][Experiment_Type]['labfield_DI'][1]\n\n # ------------------\n # Lab Treatments\n # ------------------\n\n # NRM\n if float(treatments[0]) == 0 and float(treatments[1]) == 0:\n MeasRec[\"method_codes\"] = \"LT-NO\"\n experiment_treatments.append('0')\n MeasRec[\"treat_temp\"] = '273.'\n IZorZI = \"\"\n\n # Zerofield step\n elif float(treatments[1]) == 0:\n MeasRec[\"method_codes\"] = \"LT-T-Z\"\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatments[0])+273.) # temp in kelvin\n\n # check if this is ZI or IZ:\n for j in range(0, i):\n previous_lines = Data[specimen][Experiment_Type]['meas_data'][j]\n if previous_lines['treatment'].split(\".\")[0] == meas_line['treatment'].split(\".\")[0]:\n if float(previous_lines['treatment'].split(\".\")[1]) == 1 or float(previous_lines['treatment'].split(\".\")[1]) == 10:\n if \"LP-PI-TRM-IZ\" not in methcodes:\n methcodes.append(\"LP-PI-TRM-IZ\")\n IZorZI = \"\"\n else:\n IZorZI = \"Z\"\n\n # Infield step\n elif float(treatments[1]) == 1 or float(treatments[1]) == 10:\n MeasRec[\"method_codes\"] = \"LT-T-I\"\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatments[0])+273.) 
# temp in kelvin\n\n # check if this is ZI,IZ:\n for j in range(0, i):\n previous_lines = Data[specimen][Experiment_Type]['meas_data'][j]\n if previous_lines['treatment'].split(\".\")[0] == meas_line['treatment'].split(\".\")[0]:\n if float(previous_lines['treatment'].split(\".\")[1]) == 0:\n if \"LP-PI-TRM-ZI\" not in methcodes:\n methcodes.append(\"LP-PI-TRM-ZI\")\n IZorZI = \"\"\n else:\n IZorZI = \"I\"\n # pTRM check step\n elif float(treatments[1]) == 2 or float(treatments[1]) == 20:\n MeasRec[\"method_codes\"] = \"LT-PTRM-I\"\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatments[0])+273.) # temp in kelvin\n if \"LP-PI-ALT\" not in methcodes:\n methcodes.append(\"LP-PI-ALT\")\n\n # Tail check step\n elif float(treatments[1]) == 3 or float(treatments[1]) == 30:\n MeasRec[\"method_codes\"] = \"LT-PTRM-MD\"\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatments[0])+273.) # temp in kelvin\n if \"LP-PI-BT-MD\" not in methcodes:\n methcodes.append(\"LP-PI-BT-MD\")\n MeasRec[\"treat_dc_field\"] = \"0\"\n MeasRec[\"treat_dc_field_phi\"] = \"0\"\n MeasRec[\"treat_dc_field_theta\"] = \"0\"\n\n # Additivity check step\n elif float(treatments[1]) == 4 or float(treatments[1]) == 40:\n MeasRec[\"method_codes\"] = \"LT-PTRM-AC\"\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatments[0])+273.) # temp in kelvin\n if \"LP-PI-BT\" not in methcodes:\n methcodes.append(\"LP-PI-BT\")\n\n # Thellier Thellier protocol (1 for one direction and 5 for the antiparallel)\n # Lab field direction of 1 is as put in the GUI dialog box\n # Lab field direction of 5 is the anti-parallel direction of 1\n\n elif float(treatments[1]) == 5 or float(treatments[1]) == 50:\n MeasRec[\"method_codes\"] = \"LT-T-I\"\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatments[0])+273.) # temp in kelvin\n MeasRec[\"treat_dc_field_phi\"] = \"%.2f\" % (\n (float(Data[specimen][Experiment_Type]['labfield_DI'][0])+180.) 
% 360.)\n MeasRec[\"treat_dc_field_theta\"] = \"%.2f\" % (\n float(Data[specimen][Experiment_Type]['labfield_DI'][1])*-1.)\n if \"LP-PI-II\" not in methcodes:\n methcodes.append(\"LP-PI-II\")\n\n else:\n print(\"-E- ERROR in file %s\" % Experiment_Type)\n print(\"-E- ERROR in treatment \",\n meas_line['treatment'])\n print(\"... exiting until you fix the problem\")\n\n # -----------------------------------\n\n # MeasRec[\"method_codes\"]=lab_treatment+\":\"+lab_protocols_string\n # MeasRec[\"experiments\"]=specimen+\":\"+lab_protocols_string\n\n tmp_MeasRecs.append(MeasRec)\n measurement_running_number += 1\n\n # arrange method_codes and experiments:\n method_codes = \"LP-PI-TRM\"\n # Coe mothod\n if \"LP-PI-TRM-ZI\" in methcodes and \"LP-PI-TRM-IZ\" not in methcodes and \"LP-PI-II\" not in methcodes:\n method_codes = method_codes+\":LP-PI-TRM-ZI\"\n if \"LP-PI-TRM-ZI\" not in methcodes and \"LP-PI-TRM-IZ\" in methcodes and \"LP-PI-II\" not in methcodes:\n method_codes = method_codes+\":LP-PI-TRM-IZ\"\n if \"LP-PI-TRM-ZI\" in methcodes and \"LP-PI-TRM-IZ\" in methcodes and \"LP-PI-II\" not in methcodes:\n method_codes = method_codes+\":LP-PI-BT-IZZI\"\n if \"LP-PI-II\" in methcodes:\n method_codes = method_codes+\":LP-PI-II\"\n if \"LP-PI-ALT\" in methcodes:\n method_codes = method_codes+\":LP-PI-ALT\"\n if \"LP-PI-BT-MD\" in methcodes:\n method_codes = method_codes+\":LP-PI-BT-MD\"\n if \"LP-PI-BT\" in methcodes:\n method_codes = method_codes+\":LP-PI-BT\"\n for i in range(len(tmp_MeasRecs)):\n STRING = \":\".join(\n [tmp_MeasRecs[i][\"method_codes\"], method_codes])\n tmp_MeasRecs[i][\"method_codes\"] = STRING\n # experiment is set in pmag.measurements_methods3\n # STRING=\":\".join([tmp_MeasRecs[i][\"specimen\"],method_codes])\n # tmp_MeasRecs[i][\"experiments\"]=STRING\n MeasRecs.append(tmp_MeasRecs[i])\n\n elif Experiment_Type in [\"ATRM 6 positions\"]:\n\n tmp_MeasRecs = []\n\n header_line = Data[specimen][Experiment_Type]['header_data']\n 
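The method-code assembly just above decides the overall Thellier protocol from the step-level codes collected while parsing: ZI steps alone give `LP-PI-TRM-ZI`, IZ alone gives `LP-PI-TRM-IZ`, both together mean an IZZI experiment, and alteration/tail/additivity checks are appended afterwards. That logic can be condensed into one small function (a sketch mirroring the branches above; `protocol_string` is an illustrative name, not a pmagpy routine):

```python
def protocol_string(methcodes):
    """Assemble the experiment-level method-code string from the set of
    protocol codes gathered while reading a specimen's measurements."""
    codes = "LP-PI-TRM"
    zi = "LP-PI-TRM-ZI" in methcodes
    iz = "LP-PI-TRM-IZ" in methcodes
    ii = "LP-PI-II" in methcodes
    if zi and not iz and not ii:          # Coe method
        codes += ":LP-PI-TRM-ZI"
    elif iz and not zi and not ii:        # Aitken method
        codes += ":LP-PI-TRM-IZ"
    elif zi and iz and not ii:            # alternating ZI/IZ -> IZZI
        codes += ":LP-PI-BT-IZZI"
    if ii:                                # original Thellier-Thellier
        codes += ":LP-PI-II"
    for extra in ("LP-PI-ALT", "LP-PI-BT-MD", "LP-PI-BT"):
        if extra in methcodes:            # pTRM, tail, additivity checks
            codes += ":" + extra
    return codes
```

The resulting string is then joined onto every measurement's step-level `method_codes`, which is exactly what the loop over `tmp_MeasRecs` does.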
experiment_treatments = []\n measurement_running_number = 0\n # start to make a list of the methcodes. and later will merge it to one string\n methcodes = [\"LP-AN-TRM\"]\n\n for i in range(len(Data[specimen][Experiment_Type]['meas_data'])):\n meas_line = Data[specimen][Experiment_Type]['meas_data'][i]\n\n # ------------------\n # check if the same treatment appears more than once. If yes, assuming that the measurements is repeated twice,\n # ignore the first, and take only the second one\n # ------------------\n\n if i < (len(Data[specimen][Experiment_Type]['meas_data'])-2):\n Repeating_measurements = True\n for key in ['treatment', 'specimen_name']:\n if Data[specimen][Experiment_Type]['meas_data'][i][key] != Data[specimen][Experiment_Type]['meas_data'][i+1][key]:\n Repeating_measurements = False\n if Repeating_measurements == True:\n \"Found a repeating measurement at line %i, sample %s. taking the last one\" % (\n i, specimen)\n continue\n\n # ------------------\n # fix line in format of XX instead of XX.0\n # ------------------\n if \".\" not in meas_line['treatment']:\n meas_line['treatment'] = meas_line['treatment']+\".0\"\n if meas_line['treatment'].split(\".\")[1] == \"\":\n meas_line['treatment'] = meas_line['treatment']+\"0\"\n\n # ------------------\n # init names and dictionaries\n # ------------------\n\n MeasRec, SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}, {}\n\n # convert from microT to Tesla\n labfield = float(header_line['labfield'])*1e-6\n sample = get_sample_name(\n specimen, Data[specimen][Experiment_Type]['sample_naming_convention'])\n site = get_site_name(\n sample, Data[specimen][Experiment_Type]['site_naming_convention'])\n location = Data[specimen][Experiment_Type]['location']\n\n # ------------------\n # Fill data\n # ------------------\n\n # Start with S'tables and Loc Table\n if specimen != \"\" and specimen not in [x['specimen'] if 'specimen' in list(x.keys()) else \"\" for x in SpecRecs]:\n SpecRec['specimen'] = specimen\n 
SpecRec['sample'] = sample\n SpecRec['volume'] = Data[specimen][Experiment_Type]['volume']\n SpecRec['citations'] = \"This study\"\n SpecRec['analysts'] = Data[specimen][Experiment_Type]['user_name']\n SpecRecs.append(SpecRec)\n if sample != \"\" and sample not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['sample'] = sample\n SampRec['site'] = site\n SampRec['citations'] = \"This study\"\n SampRec['analysts'] = Data[specimen][Experiment_Type]['user_name']\n SampRecs.append(SampRec)\n if site != \"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec['citations'] = \"This study\"\n SiteRec['analysts'] = Data[specimen][Experiment_Type]['user_name']\n SiteRecs.append(SiteRec)\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec['citations'] = \"This study\"\n LocRec['analysts'] = Data[specimen][Experiment_Type]['user_name']\n LocRecs.append(LocRec)\n\n # Meas data now\n MeasRec[\"specimen\"] = specimen\n MeasRec['analysts'] = Data[specimen][Experiment_Type]['user_name']\n MeasRec['citations'] = \"This study\"\n MeasRec[\"quality\"] = 'g'\n MeasRec[\"standard\"] = 'u'\n MeasRec[\"treat_step_num\"] = \"%i\" % measurement_running_number\n MeasRec[\"dir_dec\"] = meas_line['dec']\n MeasRec[\"dir_inc\"] = meas_line['inc']\n MeasRec[\"magn_moment\"] = \"%5e\" % (float(meas_line['moment'])*1e-3*float(\n Data[specimen][Experiment_Type]['volume'])) # converted to Am^2\n MeasRec[\"meas_temp\"] = '273.' # room temp in kelvin\n\n treatments = meas_line['treatment'].split(\".\")\n if len(treatments[1]) > 1:\n treatments[1] = treatments[1][0]\n\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(treatments[0])+273.) 
# temp in kelvin\n\n # labfield direction\n if float(treatments[1]) == 0:\n MeasRec[\"treat_dc_field\"] = '0'\n MeasRec[\"treat_dc_field_phi\"] = '0'\n MeasRec[\"treat_dc_field_theta\"] = '0'\n MeasRec[\"method_codes\"] = \"LT-T-Z:LP-AN-TRM\"\n else:\n\n MeasRec[\"treat_dc_field\"] = '%8.3e' % (labfield)\n\n # alteration check as final measurement\n if float(treatments[1]) == 7 or float(treatments[1]) == 70:\n MeasRec[\"method_codes\"] = \"LT-PTRM-I:LP-AN-TRM\"\n else:\n MeasRec[\"method_codes\"] = \"LT-T-I:LP-AN-TRM\"\n\n # find the direction of the lab field in two ways:\n # (1) using the treatment coding (XX.1=+x, XX.2=+y, XX.3=+z, XX.4=-x, XX.5=-y, XX.6=-z)\n # atrm declination/inlclination order\n tdec = [0, 90, 0, 180, 270, 0, 0, 90, 0]\n # atrm declination/inlclination order\n tinc = [0, 0, 90, 0, 0, -90, 0, 0, 90]\n\n ipos_code = int(treatments[1])-1\n # (2) using the magnetization\n DEC = float(MeasRec[\"dir_dec\"])\n INC = float(MeasRec[\"dir_inc\"])\n if INC < 45 and INC > -45:\n if DEC > 315 or DEC < 45:\n ipos_guess = 0\n if DEC > 45 and DEC < 135:\n ipos_guess = 1\n if DEC > 135 and DEC < 225:\n ipos_guess = 3\n if DEC > 225 and DEC < 315:\n ipos_guess = 4\n else:\n if INC > 45:\n ipos_guess = 2\n if INC < -45:\n ipos_guess = 5\n # prefer the guess over the code\n ipos = ipos_guess\n MeasRec[\"treat_dc_field_phi\"] = '%7.1f' % (\n tdec[ipos])\n MeasRec[\"treat_dc_field_theta\"] = '%7.1f' % (\n tinc[ipos])\n # check it\n if ipos_guess != ipos_code and treatments[1] != '7':\n print(\"-E- ERROR: check specimen %s step %s, ATRM measurements, coding does not match the direction of the lab field!\" % (\n MeasRec[\"specimen\"], \".\".join(list(treatments))))\n\n MeasRecs.append(MeasRec)\n measurement_running_number += 1\n\n else:\n print(\n \"-E- ERROR. sorry, file format %s is not supported yet. 
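The ATRM block above infers the lab-field axis two ways, from the treatment code and from the measured direction, and prefers the direction-based guess. A minimal sketch of that guess, using the same declination/inclination windows as the code above (the helper name is ours, not PmagPy's):

```python
def guess_atrm_position(dec, inc):
    """Guess the 6-position ATRM axis index (0=+x, 1=+y, 2=+z, 3=-x, 4=-y, 5=-z)
    from a measured declination/inclination, mirroring the windows above."""
    if -45 < inc < 45:              # roughly horizontal: declination decides
        if dec > 315 or dec < 45:
            return 0                # +x
        if 45 < dec < 135:
            return 1                # +y
        if 135 < dec < 225:
            return 3                # -x
        if 225 < dec < 315:
            return 4                # -y
    return 2 if inc >= 45 else 5    # steeply up (+z) or steeply down (-z)
```

As in the original loop, directions falling exactly on a window boundary are edge cases the treatment code is expected to resolve.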
Please contact rshaar@ucsd.edu\" % Experiment_Type)\n\n # -------------------------------------------\n # measurements.txt\n # -------------------------------------------\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave=False)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n con.write_table_to_file('specimens', spec_file_name)\n con.write_table_to_file('samples', samp_file_name)\n con.write_table_to_file('sites', site_file_name)\n con.write_table_to_file('locations', loc_file_name)\n meas_file = con.write_table_to_file('measurements', meas_file_name)\n return True, meas_file"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef utrecht(mag_file, dir_path=\".\", input_dir_path=\"\", meas_file=\"measurements.txt\",\n spec_file=\"specimens.txt\", samp_file=\"samples.txt\", site_file=\"sites.txt\",\n loc_file=\"locations.txt\", location=\"unknown\", lat=\"\", lon=\"\", dmy_flag=False,\n noave=False, meas_n_orient=8, meth_code=\"LP-NO\", specnum=1, samp_con='2',\n labfield=0, phi=0, theta=0):\n \"\"\"\n Converts Utrecht magnetometer data files to MagIC files\n\n Parameters\n ----------\n mag_file : str\n input file name\n dir_path : str\n working directory, default \".\"\n input_dir_path : str\n input file directory IF different from dir_path, default \"\"\n spec_file : str\n output specimen file name, default \"specimens.txt\"\n samp_file: str\n output sample file name, default \"samples.txt\"\n site_file : str\n output site file name, default \"sites.txt\"\n loc_file : str\n output location file name, default \"locations.txt\"\n append : bool\n append output files to existing files 
instead of overwrite, default False\n location : str\n location name, default \"unknown\"\n lat : float\n latitude, default \"\"\n lon : float\n longitude, default \"\"\n dmy_flag : bool\n default False\n noave : bool\n do not average duplicate measurements, default False (so by default, DO average)\n meas_n_orient : int\n Number of different orientations in measurement (default : 8)\n meth_code : str\n sample method codes, default \"LP-NO\"\n e.g. [SO-MAG, SO-SUN, SO-SIGHT, ...]\n specnum : int\n number of characters to designate a specimen, default 0\n samp_con : str\n sample/site naming convention, default '2', see info below\n labfield : float\n DC_FIELD in microTesla (default : 0)\n phi : float\n DC_PHI in degrees (default : 0)\n theta : float\n DC_THETA in degrees (default : 0)\n\n Returns\n ----------\n type - Tuple : (True or False indicating if conversion was sucessful, meas_file name written)\n\n Info\n ---------\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. 
[default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n\n\n \"\"\"\n # initialize some stuff\n version_num = pmag.get_version()\n MeasRecs, SpecRecs, SampRecs, SiteRecs, LocRecs = [], [], [], [], []\n input_dir_path, output_dir_path = pmag.fix_directories(input_dir_path, dir_path)\n specnum = -int(specnum)\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n else:\n site_num = samp_con.split(\"-\")[1]\n samp_con = \"4\"\n elif \"7\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"naming convention option [7] must be in form 7-Z where Z is an integer\"\n else:\n site_num = samp_con.split(\"-\")[1]\n samp_con = \"7\"\n else:\n site_num = 1\n try:\n DC_FIELD = float(labfield)*1e-6\n DC_PHI = float(phi)\n DC_THETA = float(theta)\n except ValueError:\n raise ValueError(\n 'problem with your dc parameters. 
please provide a labfield in microTesla and a phi and theta in degrees.')\n\n # format variables\n if not mag_file:\n return False, 'You must provide a Utrecht format file'\n mag_file = pmag.resolve_file_name(mag_file, input_dir_path)\n # need to add these\n meas_file = pmag.resolve_file_name(meas_file, output_dir_path)\n spec_file = pmag.resolve_file_name(spec_file, output_dir_path)\n samp_file = pmag.resolve_file_name(samp_file, output_dir_path)\n site_file = pmag.resolve_file_name(site_file, output_dir_path)\n loc_file = pmag.resolve_file_name(loc_file, output_dir_path)\n\n\n # parse data\n\n # Open up the Utrecht file and read the header information\n AF_or_T = mag_file.split('.')[-1]\n data = open(mag_file, 'r')\n line = data.readline()\n line_items = line.split(',')\n operator = line_items[0]\n operator = operator.replace(\"\\\"\", \"\")\n machine = line_items[1]\n machine = machine.replace(\"\\\"\", \"\")\n machine = machine.rstrip('\\n')\n# print(\"operator=\", operator)\n# print(\"machine=\", machine)\n\n # read in measurement data\n line = data.readline()\n while line != \"END\" and line != '\"END\"':\n SpecRec, SampRec, SiteRec, LocRec = {}, {}, {}, {}\n line_items = line.split(',')\n spec_name = line_items[0]\n spec_name = spec_name.replace(\"\\\"\", \"\")\n# print(\"spec_name=\", spec_name)\n free_string = line_items[1]\n free_string = free_string.replace(\"\\\"\", \"\")\n# print(\"free_string=\", free_string)\n dec = line_items[2]\n# print(\"dec=\", dec)\n inc = line_items[3]\n# print(\"inc=\", inc)\n volume = float(line_items[4])\n volume = volume * 1e-6 # enter volume in cm^3, convert to m^3\n# print(\"volume=\", volume)\n bed_plane = line_items[5]\n# print(\"bed_plane=\", bed_plane)\n bed_dip = line_items[6]\n# print(\"bed_dip=\", bed_dip)\n\n # Configure et er_ tables\n if specnum == 0:\n sample_name = spec_name\n else:\n sample_name = spec_name[:specnum]\n site = pmag.parse_site(sample_name, samp_con, site_num)\n SpecRec['specimen'] = spec_name\n 
SpecRec['sample'] = sample_name\n if volume != 0:\n SpecRec['volume'] = volume\n SpecRecs.append(SpecRec)\n if sample_name != \"\" and sample_name not in [x['sample'] if 'sample' in list(x.keys()) else \"\" for x in SampRecs]:\n SampRec['sample'] = sample_name\n SampRec['azimuth'] = dec\n SampRec['dip'] = str(float(inc)-90)\n SampRec['bed_dip_direction'] = bed_plane\n SampRec['bed_dip'] = bed_dip\n SampRec['method_codes'] = meth_code\n SampRec['site'] = site\n SampRecs.append(SampRec)\n if site != \"\" and site not in [x['site'] if 'site' in list(x.keys()) else \"\" for x in SiteRecs]:\n SiteRec['site'] = site\n SiteRec['location'] = location\n SiteRec['lat'] = lat\n SiteRec['lon'] = lon\n SiteRecs.append(SiteRec)\n if location != \"\" and location not in [x['location'] if 'location' in list(x.keys()) else \"\" for x in LocRecs]:\n LocRec['location'] = location\n LocRec['lat_n'] = lat\n LocRec['lon_e'] = lon\n LocRec['lat_s'] = lat\n LocRec['lon_w'] = lon\n LocRecs.append(LocRec)\n\n # measurement data\n line = data.readline()\n line = line.rstrip(\"\\n\")\n items = line.split(\",\")\n while line != '9999':\n step = items[0]\n step = step.split('.')\n step_value = step[0]\n step_type = \"\"\n if len(step) == 2:\n step_type = step[1]\n if step_type == '5':\n step_value = items[0]\n A = float(items[1])\n B = float(items[2])\n C = float(items[3])\n# convert to MagIC coordinates\n Z = -A\n X = -B\n Y = C\n cart = np.array([X, Y, Z]).transpose()\n direction = pmag.cart2dir(cart).transpose()\n measurement_dec = direction[0]\n measurement_inc = direction[1]\n # the data are in pico-Am^2 - this converts to Am^2\n magn_moment = direction[2] * 1.0e-12\n if volume != 0:\n # data volume normalized - converted to A/m\n magn_volume = direction[2] * 1.0e-12 / volume\n# print(\"magn_moment=\", magn_moment)\n# print(\"magn_volume=\", magn_volume)\n error = items[4]\n date = items[5]\n date = date.strip('\"').replace(' ', '')\n if date.count(\"-\") > 0:\n date = date.split(\"-\")\n 
elif date.count(\"/\") > 0:\n date = date.split(\"/\")\n else:\n print(\"date format seperator cannot be identified\")\n# print(date)\n time = items[6]\n time = time.strip('\"').replace(' ', '')\n time = time.split(\":\")\n# print(time)\n dt = date[0] + \":\" + date[1] + \":\" + date[2] + \\\n \":\" + time[0] + \":\" + time[1] + \":\" + \"0\"\n local = pytz.timezone(\"Europe/Amsterdam\")\n try:\n if dmy_flag:\n naive = datetime.datetime.strptime(dt, \"%d:%m:%Y:%H:%M:%S\")\n else:\n naive = datetime.datetime.strptime(dt, \"%m:%d:%Y:%H:%M:%S\")\n except ValueError:\n try:\n naive = datetime.datetime.strptime(dt, \"%Y:%m:%d:%H:%M:%S\")\n except ValueError:\n print('-W- Could not parse date format')\n return False, 'Could not parse date format'\n local_dt = local.localize(naive, is_dst=None)\n utc_dt = local_dt.astimezone(pytz.utc)\n timestamp = utc_dt.strftime(\"%Y-%m-%dT%H:%M:%S\")+\"Z\"\n# print(timestamp)\n\n MeasRec = {}\n MeasRec[\"timestamp\"] = timestamp\n MeasRec[\"analysts\"] = operator\n MeasRec[\"instrument_codes\"] = \"Utrecht_\" + machine\n MeasRec[\"description\"] = \"free string = \" + free_string\n MeasRec[\"citations\"] = \"This study\"\n MeasRec['software_packages'] = version_num\n MeasRec[\"meas_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MeasRec[\"quality\"] = 'g'\n MeasRec[\"standard\"] = 'u'\n # will be overwritten by measurements_methods3\n MeasRec[\"experiment\"] = location + site + spec_name\n MeasRec[\"treat_step_num\"] = location + site + spec_name + items[0]\n MeasRec[\"specimen\"] = spec_name\n # MeasRec[\"treat_ac_field\"] = '0'\n if AF_or_T.lower() == \"th\":\n MeasRec[\"treat_temp\"] = '%8.3e' % (\n float(step_value)+273.) 
# temp in kelvin\n MeasRec['treat_ac_field'] = '0'\n lab_treat_type = \"T\"\n else:\n MeasRec['treat_temp'] = '273'\n MeasRec['treat_ac_field'] = '%10.3e' % (float(step_value)*1e-3)\n lab_treat_type = \"AF\"\n MeasRec['treat_dc_field'] = '0'\n if step_value == '0':\n meas_type = \"LT-NO\"\n# print(\"step_type=\", step_type)\n if step_type == '0' or step_type == '00':\n meas_type = \"LT-%s-Z\" % lab_treat_type\n elif step_type == '1' or step_type == '11':\n meas_type = \"LT-%s-I\" % lab_treat_type\n MeasRec['treat_dc_field'] = '%1.2e' % DC_FIELD\n elif step_type == '2' or step_type == '12':\n meas_type = \"LT-PTRM-I\"\n MeasRec['treat_dc_field'] = '%1.2e' % DC_FIELD\n elif step_type == '3' or step_type == '13':\n meas_type = \"LT-PTRM-Z\"\n# print(\"meas_type=\", meas_type)\n MeasRec['treat_dc_field_phi'] = '%1.2f' % DC_PHI\n MeasRec['treat_dc_field_theta'] = '%1.2f' % DC_THETA\n MeasRec['method_codes'] = meas_type\n MeasRec[\"magn_moment\"] = magn_moment\n if volume != 0:\n MeasRec[\"magn_volume\"] = magn_volume\n MeasRec[\"dir_dec\"] = measurement_dec\n MeasRec[\"dir_inc\"] = measurement_inc\n MeasRec['dir_csd'] = error\n MeasRec['meas_n_orient'] = meas_n_orient\n# print(MeasRec)\n MeasRecs.append(MeasRec)\n\n line = data.readline()\n line = line.rstrip(\"\\n\")\n items = line.split(\",\")\n line = data.readline()\n line = line.rstrip(\"\\n\")\n items = line.split(\",\")\n\n data.close()\n\n con = cb.Contribution(output_dir_path, read_tables=[])\n\n con.add_magic_table_from_data(dtype='specimens', data=SpecRecs)\n con.add_magic_table_from_data(dtype='samples', data=SampRecs)\n con.add_magic_table_from_data(dtype='sites', data=SiteRecs)\n con.add_magic_table_from_data(dtype='locations', data=LocRecs)\n # figures out method codes for measuremet data\n MeasOuts = pmag.measurements_methods3(MeasRecs, noave)\n con.add_magic_table_from_data(dtype='measurements', data=MeasOuts)\n\n con.tables['specimens'].write_magic_file(custom_name=spec_file)\n 
con.tables['samples'].write_magic_file(custom_name=samp_file)\n con.tables['sites'].write_magic_file(custom_name=site_file)\n con.tables['locations'].write_magic_file(custom_name=loc_file)\n con.tables['measurements'].write_magic_file(custom_name=meas_file)\n\n return True, meas_file", "response": "This function converts Utrecht magnetometer data files to MagIC files."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_thing():\n cwd = os.getcwd()\n main_dir = cwd + '/SPD'\n try:\n import new_lj_thellier_gui_spd as tgs\n gui = tgs.Arai_GUI('/magic_measurements.txt', main_dir)\n specimens = list(gui.Data.keys())\n thing = PintPars(gui.Data, '0238x6011044', 473., 623.)\n thing.calculate_all_statistics()\n #new_thing = PintPars(gui.Data, '0238x5721062', 100. + 273., 525. + 273.)\n #new_thing.calculate_all_statistics()\n gui2 = tgs.Arai_GUI('/magic_measurements.txt', '/Users/nebula/Desktop/MagIC_experiments/ODP-SBG_1')\n thing2 = PintPars(gui2.Data, '0335x1031411', 273., 743.)\n return thing, thing2\n except Exception as ex:\n print('could not make standard specimen objects')\n print(ex)", "response": "make example PintPars object"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the dec and inc from the PTRMS table", "response": "def get_ptrm_dec_and_inc(self):\n \"\"\"not included in spd.\"\"\"\n PTRMS = self.PTRMS[1:]\n CART_pTRMS_orig = numpy.array([lib_direct.dir2cart(row[1:4]) for row in PTRMS])\n #B_lab_dir = [self.B_lab_dir[0], self.B_lab_dir[1], 1.] 
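The Utrecht converter above localizes each naive lab timestamp to Europe/Amsterdam and converts it to UTC before writing it out. That step can be sketched with the standard-library zoneinfo module instead of pytz (the function name and default format are illustrative, not part of the converter):

```python
import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9; pytz plays this role above

def to_utc_timestamp(dt_string, fmt="%d:%m:%Y:%H:%M:%S", tz="Europe/Amsterdam"):
    """Parse a local wall-clock string and return an ISO-8601 UTC timestamp."""
    naive = datetime.datetime.strptime(dt_string, fmt)
    local_dt = naive.replace(tzinfo=ZoneInfo(tz))      # attach the local zone
    utc_dt = local_dt.astimezone(datetime.timezone.utc)
    return utc_dt.strftime("%Y-%m-%dT%H:%M:%SZ")
```

Amsterdam is UTC+2 in summer (CEST) and UTC+1 in winter (CET), so the same wall-clock time maps to different UTC instants depending on the date, which is why the converter localizes rather than applying a fixed offset.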
# dir\n tmin, tmax = self.t_Arai[0], self.t_Arai[-1]\n ptrms_dec_Free, ptrms_inc_Free, ptrm_best_fit_vector_Free, ptrm_tau_Free, ptrm_v_Free, ptrm_mass_center_Free, ptrm_PCA_sigma_Free = lib_direct.get_dec_and_inc(CART_pTRMS_orig, self.t_Arai, tmin, tmax, anchored=False)\n ptrms_angle = lib_direct.get_ptrms_angle(ptrm_best_fit_vector_Free, self.B_lab_cart)\n self.pars['ptrms_dec_Free'], self.pars['ptrms_inc_Free'] = ptrms_dec_Free, ptrms_inc_Free\n self.pars['ptrms_tau_Free'] = ptrm_tau_Free\n self.pars['ptrms_angle_Free'] = ptrms_angle"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main():\n\n#\n# parse command line options\n#\n\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n dir_path = pmag.get_named_arg(\"-WD\", default_val=\".\")\n input_dir_path = pmag.get_named_arg('-ID', \"\")\n input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)\n meas_file = pmag.get_named_arg(\n \"-f\", default_val=\"measurements.txt\")\n #spec_file = pmag.get_named_arg(\n # \"-fsp\", default_val=\"specimens.txt\")\n #crit_file = pmag.get_named_arg(\"-fcr\", default_val=\"criteria.txt\")\n #spec_file = os.path.join(dir_path, spec_file)\n #crit_file = os.path.join(dir_path, crit_file)\n meas_file = pmag.resolve_file_name(meas_file, input_dir_path)\n fmt = pmag.get_named_arg(\"-fmt\", \"svg\")\n save_plots = False\n interactive = True\n if '-sav' in sys.argv:\n save_plots = True\n interactive=False\n spec = pmag.get_named_arg(\"-spc\", default_val=\"\")\n n_specs = pmag.get_named_arg(\"-n\", default_val=\"all\")\n try:\n n_specs = int(n_specs)\n except ValueError:\n pass\n ipmag.thellier_magic(meas_file, dir_path, input_dir_path,\n spec, n_specs, save_plots, fmt, interactive)", "response": "Command-line wrapper for ipmag.thellier_magic: parses the options and generates Thellier paleointensity interpretations and plots for the requested specimens, using MagIC data model 3.
0."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntrying to download the data model from Earthref and return the corresponding object. If that fails return the online data model.", "response": "def get_data_model(self):\n \"\"\"\n Try to download the data model from Earthref.\n If that fails, grab the cached data model.\n \"\"\"\n if len(DM):\n self.dm = DM\n self.crit_map = CRIT_MAP\n return\n if not set_env.OFFLINE:\n dm = self.get_dm_online()\n if dm:\n print('-I- Using online data model')\n #self.cache_data_model(dm)\n return self.parse_response(dm)\n # if online is not available, get cached dm\n dm = self.get_dm_offline()\n print('-I- Using cached data model')\n return self.parse_cache(dm)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngrabs the 3. 0 data model from the PmagPy directory Returns --------- DataFrame full", "response": "def get_dm_offline(self):\n \"\"\"\n Grab the 3.0 data model from the PmagPy/pmagpy directory\n\n Returns\n ---------\n full : DataFrame\n cached data model json in DataFrame format\n \"\"\"\n model_file = self.find_cached_dm()\n try:\n f = open(model_file, 'r', encoding='utf-8-sig')\n except TypeError:\n f = open(model_file, 'r')\n string = '\\n'.join(f.readlines())\n f.close()\n raw = json.loads(string)\n full = pd.DataFrame(raw)\n return full"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting data model from Earthref.", "response": "def get_dm_online(self):\n \"\"\"\n Use requests module to get data model from Earthref.\n If this fails or times out, return false.\n\n Returns\n ---------\n result : requests.models.Response, False if unsuccessful\n \"\"\"\n if not requests:\n return False\n try:\n req = requests.get(\"https://earthref.org/MagIC/data-models/3.0.json\", timeout=3)\n if not req.ok:\n return False\n return req\n except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError,\n 
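get_data_model above tries Earthref first and silently falls back to the cached copy when the request fails or times out. The control flow can be sketched independently of the requests library; fetch_online and load_cached below are hypothetical stand-ins, not PmagPy APIs:

```python
def load_with_fallback(fetch_online, load_cached, offline=False):
    """Prefer a freshly fetched data model; fall back to the local cache.

    fetch_online should return None (rather than raise) on any failure,
    matching how get_dm_online returns False on timeouts and bad responses.
    """
    if not offline:
        model = fetch_online()
        if model is not None:
            return "online", model      # got a live copy
    return "cached", load_cached()      # offline mode or fetch failed
```

Keeping the failure signal in the return value, instead of letting exceptions escape, is what lets the caller degrade gracefully to the cache.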
requests.exceptions.ReadTimeout):\n return False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_cache(self, full_df):\n data_model = {}\n levels = ['specimens', 'samples', 'sites', 'locations',\n 'ages', 'measurements', 'criteria', 'contribution',\n 'images']\n criteria_map = pd.DataFrame(full_df['criteria_map'])\n for level in levels:\n df = pd.DataFrame(full_df['tables'][level]['columns'])\n data_model[level] = df.transpose()\n # replace np.nan with None\n data_model[level] = data_model[level].where((pd.notnull(data_model[level])), None)\n return data_model, criteria_map", "response": "Format the cached data model into a dictionary of DataFrames\n and a criteria map DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse(self, data_model, crit):\n # data model\n tables = pd.DataFrame(data_model)\n data_model = {}\n for table_name in tables.columns:\n data_model[table_name] = pd.DataFrame(tables[table_name]['columns']).T\n # replace np.nan with None\n data_model[table_name] = data_model[table_name].where((pd.notnull(data_model[table_name])), None)\n # criteria map\n zipped = list(zip(crit.keys(), crit.values()))\n crit_map = pd.DataFrame(zipped)\n crit_map.index = crit_map[0]\n crit_map.drop(0, axis='columns', inplace=True)\n crit_map.rename({1: 'criteria_map'}, axis='columns', inplace=True)\n crit_map.index.rename(\"\", inplace=True)\n for table_name in ['measurements', 'specimens', 'samples', 'sites', 'locations',\n 'contribution', 'criteria', 'images', 'ages']:\n crit_map.loc[table_name] = np.nan\n\n return data_model, crit_map", "response": "Take the relevant pieces of the data model json\n and parse into data model and criteria map."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef parse_response(self, raw):\n\n tables = raw.json()['tables']\n crit = 
raw.json()['criteria_map']\n return self.parse(tables, crit)", "response": "Parse the response from the Earthref API into a dictionary of DataFrames\n and a criteria map DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding filename where cached data model json is stored.", "response": "def find_cached_dm(self):\n \"\"\"\n Find filename where cached data model json is stored.\n\n Returns\n ---------\n model_file : str\n data model json file location\n \"\"\"\n pmag_dir = find_pmag_dir.get_pmag_dir()\n if pmag_dir is None:\n pmag_dir = '.'\n model_file = os.path.join(pmag_dir, 'pmagpy',\n 'data_model', 'data_model.json')\n # for py2app:\n if not os.path.isfile(model_file):\n model_file = os.path.join(pmag_dir, 'data_model',\n 'data_model.json')\n if not os.path.isfile(model_file):\n model_file = os.path.join(os.path.split(os.path.dirname(__file__))[0],'pmagpy', 'data_model','data_model.json')\n if not os.path.isfile(model_file):\n model_file = os.path.join(os.path.split(os.path.dirname(__file__))[0], 'data_model','data_model.json')\n return model_file"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncache the data model json.", "response": "def cache_data_model(self, raw):\n \"\"\"\n Cache the data model json.\n Take data returned by a requests.get call to Earthref.\n\n Parameters\n ----------\n raw: requests.models.Response\n\n \"\"\"\n output_json = json.loads(raw.content)\n output_file = self.find_cached_dm()\n json.dump(output_json, open(output_file, 'w+'))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning list of all groups for a particular data type", "response": "def get_groups(self, table_name):\n \"\"\"\n Return list of all groups for a particular data type\n \"\"\"\n df = self.dm[table_name]\n return list(df['group'].unique())"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a 
list of all headers for a given group", "response": "def get_group_headers(self, table_name, group_name):\n \"\"\"\n Return a list of all headers for a given group\n \"\"\"\n # get all headers of a particular group\n df = self.dm[table_name]\n cond = df['group'] == group_name\n return df[cond].index"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_reqd_headers(self, table_name):\n df = self.dm[table_name]\n cond = df['validations'].map(lambda x: 'required()' in str(x))\n return df[cond].index", "response": "Return a list of all required headers for a particular table"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking data model to find group name for a given column header", "response": "def get_group_for_col(self, table_name, col_name):\n \"\"\"\n Check data model to find group name for a given column header\n\n Parameters\n ----------\n table_name: str\n col_name: str\n\n Returns\n ---------\n group_name: str\n \"\"\"\n df = self.dm[table_name]\n try:\n group_name = df.loc[col_name, 'group']\n except KeyError:\n return ''\n return group_name"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nname fisher. py AttributeNames fisher. 
py: generate a set of Fisher-distributed directions (dec, inc) for a given concentration parameter kappa and sample size N.", "response": "def main():\n \"\"\"\n NAME\n fisher.py\n\n DESCRIPTION\n generates a set of Fisher distributed data from a specified distribution\n\n INPUT (COMMAND LINE ENTRY)\n OUTPUT\n dec, inc\n\n SYNTAX\n fisher.py [-h] [-i] [command line options]\n\n OPTIONS\n -h prints help message and quits\n -i for interactive entry\n -k specify kappa as next argument, default is 20\n -n specify N as next argument, default is 100\n where:\n kappa: fisher distribution concentration parameter\n N: number of directions desired\n\n \"\"\"\n N,kappa=100,20\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n elif '-i' in sys.argv:\n ans=input(' Kappa: ')\n kappa=float(ans)\n ans=input(' N: ')\n N=int(ans)\n else:\n if '-k' in sys.argv:\n ind=sys.argv.index('-k')\n kappa=float(sys.argv[ind+1])\n if '-n' in sys.argv:\n ind=sys.argv.index('-n')\n N=int(sys.argv[ind+1])\n for k in range(N): spitout(kappa)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nname pmm_redo. 
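fisher.py above delegates the actual draw to spitout(kappa). A common way to draw one Fisher-distributed direction centered on the vertical is inversion of the colatitude CDF, the approach used by PmagPy's pmag.fshdev; the sketch below follows that algorithm but is our own illustration, not the library routine:

```python
import math
import random

def fisher_deviate(kappa, rng=random):
    """Draw one (dec, inc) pair from a Fisher distribution with mean at inc=90."""
    L = math.exp(-2.0 * kappa)
    a = rng.random() * (1.0 - L) + L               # inverse-CDF sample, a in (L, 1]
    fac = math.sqrt(-math.log(a) / (2.0 * kappa))  # sin(theta/2) for colatitude theta
    inc = 90.0 - math.degrees(2.0 * math.asin(fac))
    dec = 360.0 * rng.random()                     # longitude is uniform
    return dec, inc
```

Larger kappa concentrates the draws ever more tightly around the mean direction, which the test below checks statistically.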
py: convert the UCSC PMM file format to a PmagPy redo file ('zeq_redo')", "response": "def main():\n \"\"\"\n NAME\n pmm_redo.py\n\n DESCRIPTION\n converts the UCSC PMM file format to a PmagPy redo file\n\n SYNTAX \n pmm_redo.py [-h] [command line options]\n\n OPTIONS\n -h: prints help message and quits\n -f FILE: specify input file\n -F FILE: specify output file, default is 'zeq_redo' \n \"\"\"\n dir_path='.'\n if '-WD' in sys.argv:\n ind=sys.argv.index('-WD')\n dir_path=sys.argv[ind+1]\n zfile=dir_path+'/zeq_redo'\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n inspec=dir_path+'/'+sys.argv[ind+1]\n if '-F' in sys.argv:\n ind=sys.argv.index('-F')\n zfile=dir_path+'/'+sys.argv[ind+1]\n zredo=open(zfile,\"w\")\n#\n# read in PMM file\n#\n specs=[]\n prior_spec_data=open(inspec,'r').readlines()\n for line in prior_spec_data:\n rec=line.split(',')\n if rec[0][0]!='\"' and rec[0]!=\"ID\" and len(rec)>2: # skip all the header stuff\n spec=rec[0]\n specs.append(spec) \n comp_name=string.ascii_uppercase[specs.count(spec)-1] # assign component names (string.uppercase in the Python 2 original)\n calculation_type=\"DE-FM\"\n if rec[1].strip()=='DirPCA': calculation_type=\"DE-BFL\" # assume default calculation type is best-fit line\n if rec[1].strip()=='DirOPCA': calculation_type=\"DE-BFL-A\" # anchored best-fit line\n if rec[1].strip()=='GCPCA' or rec[1]=='GCnPCA' : calculation_type=\"DE-BFP\" # best-fit plane\n steps=rec[2].strip().split('-')\n min,max=steps[0],steps[1]\n beg,end=\"\",\"\"\n if min==\"NRM\": \n beg=0 \n elif min[0]=='M' or min[0]=='H':\n beg=float(min[1:])*1e-3 # convert to T from mT\n elif min[-1]=='M':\n beg=float(min[:-1])*1e-3 # convert to T from mT\n elif min[0]=='T':\n beg=float(min[1:])+273 # convert from C to kelvin\n if max[0]=='M' or max[0]=='H':\n end=float(max[1:])*1e-3 # convert to T from mT\n elif max[0]=='T':\n 
end=float(max[1:])+273 # convert from C to kelvin\n elif max[-1]=='M':\n end=float(max[:-1])*1e-3 # convert to T from mT\n if beg==0:beg=273\n outstring='%s %s %s %s %s \\n'%(spec,calculation_type,beg,end,comp_name)\n zredo.write(outstring)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmaking a Q-Q plot from dec/inc pairs to test whether each mode is Fisher distributed (fishqq.py)", "response": "def main():\n \"\"\"\n NAME\n fishqq.py\n\n DESCRIPTION\n makes qq plot from dec,inc input data\n\n INPUT FORMAT\n takes dec/inc pairs in space delimited file\n\n SYNTAX\n fishqq.py [command line options]\n\n OPTIONS\n -h help message\n -f FILE, specify file on command line\n -F FILE, specify output file for statistics\n -sav save and quit [saves as input file name plus fmt extension]\n -fmt specify format for output [png, eps, svg, pdf] \n\n OUTPUT:\n Dec Inc N Mu Mu_crit Me Me_crit Y/N\n where direction is the principal component and Y/N is Fisherian or not\n separate lines for each mode with N >=10 (N and R)\n \"\"\"\n fmt,plot='svg',0\n outfile=\"\"\n if '-h' in sys.argv: # check if help is needed\n print(main.__doc__)\n sys.exit() # graceful quit\n elif '-f' in sys.argv: # ask for filename\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n f=open(file,'r')\n data=f.readlines()\n if '-F' in sys.argv:\n ind=sys.argv.index('-F')\n outfile=open(sys.argv[ind+1],'w') # open output file\n if '-sav' in sys.argv: plot=1\n if '-fmt' in sys.argv:\n ind=sys.argv.index('-fmt')\n fmt=sys.argv[ind+1]\n DIs,nDIs,rDIs= [],[],[] # set up list for data\n for line in data: # read in the data from standard input\n if '\\t' in line:\n rec=line.split('\\t') # split each line on tab to get records\n else:\n rec=line.split() # split each line on whitespace to get records\n DIs.append([float(rec[0]),float(rec[1])]) # append data to DIs\n# split into two modes\n ppars=pmag.doprinc(DIs) # get principal directions\n for rec in DIs:\n angle=pmag.angle([rec[0],rec[1]],[ppars['dec'],ppars['inc']])\n if 
angle>90.:\n rDIs.append(rec)\n else:\n nDIs.append(rec)\n \n#\n if len(rDIs) >=10 or len(nDIs) >=10:\n D1,I1=[],[]\n QQ={'unf1':1,'exp1':2}\n pmagplotlib.plot_init(QQ['unf1'],5,5)\n pmagplotlib.plot_init(QQ['exp1'],5,5)\n if len(nDIs) < 10: \n ppars=pmag.doprinc(rDIs) # get principal directions\n Drbar,Irbar=ppars['dec']-180.,-ppars['inc']\n Nr=len(rDIs)\n for di in rDIs:\n d,irot=pmag.dotilt(di[0],di[1],Drbar-180.,90.-Irbar) # rotate to mean\n drot=d-180.\n if drot<0:drot=drot+360.\n D1.append(drot) \n I1.append(irot) \n Dtit='Mode 2 Declinations'\n Itit='Mode 2 Inclinations'\n else: \n ppars=pmag.doprinc(nDIs) # get principal directions\n Dnbar,Inbar=ppars['dec'],ppars['inc']\n Nn=len(nDIs)\n for di in nDIs:\n d,irot=pmag.dotilt(di[0],di[1],Dnbar-180.,90.-Inbar) # rotate to mean\n drot=d-180.\n if drot<0:drot=drot+360.\n D1.append(drot)\n I1.append(irot)\n Dtit='Mode 1 Declinations'\n Itit='Mode 1 Inclinations'\n Mu_n,Mu_ncr=pmagplotlib.plot_qq_unf(QQ['unf1'],D1,Dtit) # make plot\n Me_n,Me_ncr=pmagplotlib.plot_qq_exp(QQ['exp1'],I1,Itit) # make plot\n #print Mu_n,Mu_ncr,Me_n, Me_ncr\n if outfile!=\"\":\n# Dec Inc N Mu Mu_crit Me Me_crit Y/N\n if Mu_n<=Mu_ncr and Me_n<=Me_ncr:\n F='Y'\n else:\n F='N'\n outstring='%7.1f %7.1f %i %5.3f %5.3f %5.3f %5.3f %s \\n'%(Dnbar,Inbar,Nn,Mu_n,Mu_ncr,Me_n,Me_ncr,F)\n outfile.write(outstring)\n else:\n print('you need N> 10 for at least one mode')\n sys.exit()\n if len(rDIs)>10 and len(nDIs)>10:\n D2,I2=[],[]\n QQ['unf2']=3\n QQ['exp2']=4\n pmagplotlib.plot_init(QQ['unf2'],5,5)\n pmagplotlib.plot_init(QQ['exp2'],5,5)\n ppars=pmag.doprinc(rDIs) # get principal directions\n Drbar,Irbar=ppars['dec']-180.,-ppars['inc']\n Nr=len(rDIs)\n for di in rDIs:\n d,irot=pmag.dotilt(di[0],di[1],Drbar-180.,90.-Irbar) # rotate to mean\n drot=d-180.\n if drot<0:drot=drot+360.\n D2.append(drot) \n I2.append(irot) \n Dtit='Mode 2 Declinations'\n Itit='Mode 2 Inclinations'\n Mu_r,Mu_rcr=pmagplotlib.plot_qq_unf(QQ['unf2'],D2,Dtit) # make plot\n 
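The rotation loops in fishqq above recentre each rotated declination with drot = d - 180 followed by a +360 wrap for negative values. As a tiny standalone sketch of that recurring idiom (the helper name is ours, not PmagPy's):

```python
def recentre_declination(d):
    """Shift a rotated declination by 180 degrees and wrap into [0, 360),
    as done inside the fishqq rotation loops."""
    drot = d - 180.0
    if drot < 0:
        drot += 360.0
    return drot
```

For inputs already in [0, 360) this is equivalent to `(d - 180.0) % 360.0`, the same modulo trick the MagIC 0-360 adjustment uses.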
Me_r,Me_rcr=pmagplotlib.plot_qq_exp(QQ['exp2'],I2,Itit) # make plot\n if outfile!=\"\":\n# Dec Inc N Mu Mu_crit Me Me_crit Y/N\n if Mu_r<=Mu_rcr and Me_r<=Me_rcr:\n F='Y'\n else:\n F='N'\n outstring='%7.1f %7.1f %i %5.3f %5.3f %5.3f %5.3f %s \\n'%(Drbar,Irbar,Nr,Mu_r,Mu_rcr,Me_r,Me_rcr,F)\n outfile.write(outstring)\n files={}\n for key in list(QQ.keys()):\n files[key]=file+'_'+key+'.'+fmt \n if pmagplotlib.isServer:\n black = '#000000'\n purple = '#800080'\n titles={}\n titles['eq']='Equal Area Plot'\n QQ = pmagplotlib.add_borders(QQ,titles,black,purple)\n pmagplotlib.save_plots(QQ,files)\n elif plot==1:\n pmagplotlib.save_plots(QQ,files)\n else:\n pmagplotlib.draw_figs(QQ) \n ans=input(\" S[a]ve to save plot, [q]uit without saving: \")\n if ans==\"a\": pmagplotlib.save_plots(QQ,files)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nprints site_name n_lines n_planes K alpha95 dec inc R for each site (lnp_magic.py)", "response": "def main():\n \"\"\"\n NAME\n lnp_magic.py\n\n DESCRIPTION\n makes equal area projections site by site\n from specimen formatted file with\n Fisher confidence ellipse using McFadden and McElhinny (1988)\n technique for combining lines and planes\n\n SYNTAX\n lnp_magic [command line options]\n\n INPUT\n takes magic formatted specimens file\n\n OUTPUT\n prints site_name n_lines n_planes K alpha95 dec inc R\n\n OPTIONS\n -h prints help message and quits\n -f FILE: specify input file, default is 'specimens.txt', ('pmag_specimens.txt' for legacy data model 2)\n -fsa FILE: specify samples file, required to plot by site for data model 3 (otherwise will plot by sample)\n default is 'samples.txt'\n -crd [s,g,t]: specify coordinate system, [s]pecimen, [g]eographic, [t]ilt adjusted\n default is specimen\n -fmt [svg,png,jpg] format for plots, default is svg\n -sav save plots and quit\n -P: do not plot\n -F FILE, specify output file of dec, inc, alpha95 data for plotting with plotdi_a and plotdi_e\n -exc use criteria in criteria table # NOT 
IMPLEMENTED\n -DM NUMBER MagIC data model (2 or 3, default 3)\n \"\"\"\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n dir_path = pmag.get_named_arg(\"-WD\", \".\")\n data_model = int(float(pmag.get_named_arg(\"-DM\", 3)))\n fmt = pmag.get_named_arg(\"-fmt\", 'svg')\n if data_model == 2:\n in_file = pmag.get_named_arg('-f', 'pmag_specimens.txt')\n crit_file = \"pmag_criteria.txt\"\n else:\n in_file = pmag.get_named_arg('-f', 'specimens.txt')\n samp_file = pmag.get_named_arg('-fsa', 'samples.txt')\n crit_file = \"criteria.txt\"\n in_file = pmag.resolve_file_name(in_file, dir_path)\n dir_path = os.path.split(in_file)[0]\n if data_model == 3:\n samp_file = pmag.resolve_file_name(samp_file, dir_path)\n if '-crd' in sys.argv:\n ind = sys.argv.index(\"-crd\")\n crd = sys.argv[ind+1]\n if crd == 's':\n coord = \"-1\"\n if crd == 'g':\n coord = \"0\"\n if crd == 't':\n coord = \"100\"\n else:\n crd = 's' # default to specimen coordinates; crd is used later in plot file names\n coord = \"-1\"\n\n\n out_file = pmag.get_named_arg('-F', '')\n if out_file:\n out = open(dir_path+'/'+out_file, 'w')\n if '-P' in sys.argv:\n make_plots = 0 # do not plot\n else:\n make_plots = 1 # do plot\n if '-sav' in sys.argv:\n plot = 1 # save plots and quit\n else:\n plot = 0 # show plots interactively (if make_plots)\n#\n\n\n if data_model == 2:\n Specs, file_type = pmag.magic_read(in_file)\n if 'specimens' not in file_type:\n print('Error opening ', in_file, file_type)\n sys.exit()\n else:\n fnames = {'specimens': in_file, 'samples': samp_file}\n con = cb.Contribution(dir_path, read_tables=['samples', 'specimens'],\n custom_filenames=fnames)\n con.propagate_name_down('site', 'specimens')\n if 'site' in con.tables['specimens'].df.columns:\n site_col = 'site'\n else:\n site_col = 'sample'\n tilt_corr_col = \"dir_tilt_correction\"\n mad_col = \"dir_mad_free\"\n alpha95_col = \"dir_alpha95\"\n site_alpha95_col = \"dir_alpha95\"\n dec_col = \"dir_dec\"\n inc_col = \"dir_inc\"\n num_meas_col = \"dir_n_measurements\"\n k_col = \"dir_k\"\n cols = [site_col, 
tilt_corr_col, mad_col, alpha95_col, dec_col, inc_col]\n con.tables['specimens'].front_and_backfill(cols)\n con.tables['specimens'].df = con.tables['specimens'].df.where(con.tables['specimens'].df.notnull(), \"\")\n Specs = con.tables['specimens'].convert_to_pmag_data_list()\n\n\n ## using criteria file was never fully implemented\n #if '-exc' in sys.argv:\n # Crits, file_type = pmag.magic_read(pmag.resolve_file_name(crit_file, dir_path))\n # for crit in Crits:\n # if mad_col in crit:\n # M = float(crit['specimen_mad'])\n # if num_meas_col in crit:\n # N = float(crit['specimen_n'])\n # if site_alpha95_col in crit and 'site' in crit:\n # acutoff = float(crit['site_alpha95'])\n # if k_col in crit:\n # kcutoff = float(crit['site_k'])\n #else:\n # Crits = \"\"\n\n sitelist = []\n\n # initialize some variables\n FIG = {} # plot dictionary\n FIG['eqarea'] = 1 # eqarea is figure 1\n M, N, acutoff, kcutoff = 180., 1, 180., 0.\n\n if data_model == 2:\n site_col = 'er_site_name'\n tilt_corr_col = \"specimen_tilt_correction\"\n mad_col = \"specimen_mad\"\n alpha95_col = 'specimen_alpha95'\n dec_col = \"specimen_dec\"\n inc_col = \"specimen_inc\"\n num_meas_col = \"specimen_n\"\n site_alpha95_col = \"site_alpha95\"\n else: # data model 3\n pass\n\n for rec in Specs:\n if rec[site_col] not in sitelist:\n sitelist.append(rec[site_col])\n sitelist.sort()\n if make_plots == 1:\n EQ = {}\n EQ['eqarea'] = 1\n for site in sitelist:\n pmagplotlib.plot_init(EQ['eqarea'], 4, 4)\n print(site)\n data = []\n for spec in Specs:\n if tilt_corr_col not in list(spec.keys()):\n spec[tilt_corr_col] = '-1' # assume unoriented\n if spec[site_col] == site:\n if mad_col not in list(spec.keys()) or spec[mad_col] == \"\":\n if alpha95_col in list(spec.keys()) and spec[alpha95_col] != \"\":\n spec[mad_col] = spec[alpha95_col]\n else:\n spec[mad_col] = '180'\n if not spec[num_meas_col]:\n continue\n if (float(spec[tilt_corr_col]) == float(coord)) and (float(spec[mad_col]) <= M) and 
(float(spec[num_meas_col]) >= N):\n rec = {}\n for key in list(spec.keys()):\n rec[key] = spec[key]\n rec[\"dec\"] = float(spec[dec_col])\n rec[\"inc\"] = float(spec[inc_col])\n rec[\"tilt_correction\"] = spec[tilt_corr_col]\n data.append(rec)\n if len(data) > 2:\n fpars = pmag.dolnp(data, 'specimen_direction_type')\n print(\"Site lines planes kappa a95 dec inc\")\n print(site, fpars[\"n_lines\"], fpars[\"n_planes\"], fpars[\"K\"],\n fpars[\"alpha95\"], fpars[\"dec\"], fpars[\"inc\"], fpars[\"R\"])\n if out_file != \"\":\n if float(fpars[\"alpha95\"]) <= acutoff and float(fpars[\"K\"]) >= kcutoff:\n out.write('%s %s %s\\n' %\n (fpars[\"dec\"], fpars['inc'], fpars['alpha95']))\n print('% tilt correction: ', coord)\n if make_plots == 1:\n files = {}\n files['eqarea'] = site+'_'+crd+'_'+'eqarea'+'.'+fmt\n pmagplotlib.plot_lnp(EQ['eqarea'], site,\n data, fpars, 'specimen_direction_type')\n if plot == 0:\n pmagplotlib.draw_figs(EQ)\n ans = input(\n \"s[a]ve plot, [q]uit, to continue:\\n \")\n if ans == \"a\":\n pmagplotlib.save_plots(EQ, files)\n if ans == \"q\":\n sys.exit()\n else:\n pmagplotlib.save_plots(EQ, files)\n else:\n print('skipping site - not enough data with specified coordinate system')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_item_string(items_list):\n if not items_list:\n return ''\n string_list = []\n for item in items_list:\n try:\n name = item.name\n string_list.append(name)\n except AttributeError:\n pass\n return \":\".join(string_list)", "response": "Take in a list of pmag_objects and\n return a colon-delimited string of the findable names.\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef combine_dicts(new_dict, old_dict):\n old_data_keys = list(old_dict.keys())\n new_data_keys = list(new_dict.keys())\n all_keys = set(old_data_keys).union(new_data_keys)\n combined_data_dict = {}\n for k in 
all_keys:\n try:\n combined_data_dict[k] = new_dict[k]\n except KeyError:\n combined_data_dict[k] = old_dict[k]\n return combined_data_dict", "response": "Combine two dictionaries into one containing every key from either; when a key appears in both, the value from new_dict wins."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_by_name(self, item_name, items_list, name_list=None):\n if not name_list:\n names = [item.name for item in items_list if item]\n else:\n names = name_list\n if item_name in names:\n ind = names.index(item_name)\n return items_list[ind]\n return False", "response": "Find an item in items_list with the given name; return False if no match is found."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef find_or_create_by_name(self, item_name, items_list, item_type):\n item = self.find_by_name(item_name, items_list)\n if not item:\n item = self.data_lists[item_type][2](item_name, None)\n return item", "response": "Find or create an item by name"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef init_default_headers(self):\n if not self.data_model:\n self.data_model = validate_upload.get_data_model()\n if not self.data_model:\n print(\"Can't access MagIC-data-model at the moment.\\nIf you are working offline, make sure MagIC-data-model.txt is in your PmagPy directory (or download it from https://github.com/ltauxe/PmagPy and put it in your PmagPy directory).\\nOtherwise, check your internet connection\")\n return False\n\n # actual is at position 0, reqd is at position 1, optional at position 2\n self.headers['measurement']['er'][1], self.headers['measurement']['er'][2] = self.get_headers('magic_measurements')\n self.headers['specimen']['er'][1], self.headers['specimen']['er'][2] = self.get_headers('er_specimens')\n self.headers['sample']['er'][1], self.headers['sample']['er'][2] = self.get_headers('er_samples')\n self.headers['site']['er'][1], self.headers['site']['er'][2] = 
self.get_headers('er_sites')\n self.headers['location']['er'][1], self.headers['location']['er'][2] = self.get_headers('er_locations')\n self.headers['age']['er'][1], self.headers['age']['er'][2] = self.get_headers('er_ages')\n\n self.headers['result']['pmag'][1], self.headers['result']['pmag'][2] = self.get_headers('pmag_results')\n self.headers['specimen']['pmag'][1], self.headers['specimen']['pmag'][2] = self.get_headers('pmag_specimens')\n self.headers['sample']['pmag'][1], self.headers['sample']['pmag'][2] = self.get_headers('pmag_samples')\n self.headers['site']['pmag'][1], self.headers['site']['pmag'][2] = self.get_headers('pmag_sites')", "response": "initialize default required headers."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a new measurement to the data object.", "response": "def add_measurement(self, exp_name, meas_num, spec_name=None, er_data=None, pmag_data=None):\n \"\"\"\n Find actual data object for specimen.\n Then create a measurement belonging to that specimen and add it to the data object\n \"\"\"\n specimen = self.find_by_name(spec_name, self.specimens)\n measurement = Measurement(exp_name, meas_num, specimen, er_data)\n self.measurements.append(measurement)\n return measurement"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef change_specimen(self, old_spec_name, new_spec_name,\n new_sample_name=None, new_er_data=None, new_pmag_data=None,\n replace_data=False):\n \"\"\"\n Find actual data objects for specimen and sample.\n Then call Specimen class change method to update specimen name and data.\n \"\"\"\n specimen = self.find_by_name(old_spec_name, self.specimens)\n if not specimen:\n print('-W- {} is not a currently existing specimen, so cannot be updated'.format(old_spec_name))\n return False\n if new_sample_name:\n new_sample = self.find_by_name(new_sample_name, self.samples)\n if not new_sample:\n print(\"\"\"-W- {} is not a currently 
existing sample.\nCreating a new sample named: {} \"\"\".format(new_sample_name, new_sample_name))\n new_sample = self.add_sample(new_sample_name)\n else:\n new_sample = None\n specimen.change_specimen(new_spec_name, new_sample, new_er_data, new_pmag_data, replace_data)\n return specimen", "response": "Change the specimen name and data."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_specimen(self, spec_name):\n specimen = self.find_by_name(spec_name, self.specimens)\n if not specimen:\n return False\n sample = specimen.sample\n if sample:\n sample.specimens.remove(specimen)\n self.specimens.remove(specimen)\n del specimen\n return []", "response": "Delete a specimen from the specimen list."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a new specimen to the internal list of specimen objects.", "response": "def add_specimen(self, spec_name, samp_name=None, er_data=None, pmag_data=None):\n \"\"\"\n Create a Specimen object and add it to self.specimens.\n If a sample name is provided, add the specimen to sample.specimens as well.\n \"\"\"\n if samp_name:\n sample = self.find_by_name(samp_name, self.samples)\n if not sample:\n print(\"\"\"-W- {} is not a currently existing sample.\nCreating a new sample named: {} \"\"\".format(samp_name, samp_name))\n sample = self.add_sample(samp_name)\n else:\n sample = None\n\n specimen = Specimen(spec_name, sample, self.data_model, er_data, pmag_data)\n\n self.specimens.append(specimen)\n if sample:\n sample.specimens.append(specimen)\n return specimen"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef change_sample(self, old_samp_name, new_samp_name, new_site_name=None,\n new_er_data=None, new_pmag_data=None, replace_data=False):\n \"\"\"\n Find actual data objects for sample and site.\n Then call Sample class change method to update sample name and 
data..\n \"\"\"\n sample = self.find_by_name(old_samp_name, self.samples)\n if not sample:\n print('-W- {} is not a currently existing sample, so it cannot be updated'.format(old_samp_name))\n return False\n if new_site_name:\n new_site = self.find_by_name(new_site_name, self.sites)\n if not new_site:\n print(\"\"\"-W- {} is not a currently existing site.\nAdding site named: {}\"\"\".format(new_site_name, new_site_name))#sample.site or '*empty*', sample)\n new_site = self.add_site(new_site_name)\n else:\n new_site = None\n sample.change_sample(new_samp_name, new_site, new_er_data, new_pmag_data, replace_data)\n return sample", "response": "Change the name of a sample and site."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding a sample to the internal list of samples.", "response": "def add_sample(self, samp_name, site_name=None, er_data=None, pmag_data=None):\n \"\"\"\n Create a Sample object and add it to self.samples.\n If a site name is provided, add the sample to site.samples as well.\n \"\"\"\n if site_name:\n site = self.find_by_name(site_name, self.sites)\n if not site:\n print(\"\"\"-W- {} is not a currently existing site.\nCreating a new site named: {} \"\"\".format(site_name, site_name))\n site = self.add_site(site_name)\n else:\n site = None\n sample = Sample(samp_name, site, self.data_model, er_data, pmag_data)\n self.samples.append(sample)\n if site:\n site.samples.append(sample)\n return sample"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_sample(self, sample_name, replacement_samp=None):\n sample = self.find_by_name(sample_name, self.samples)\n if not sample:\n return False\n specimens = sample.specimens\n site = sample.site\n if site:\n site.samples.remove(sample)\n self.samples.remove(sample)\n for spec in specimens:\n spec.sample = \"\"\n return specimens", "response": "Removes a sample from the list of samples."} {"SOURCE": "codesearchnet", 
"instruction": "Can you create a Python 3 function that\nchanges the site name and data.", "response": "def change_site(self, old_site_name, new_site_name, new_location_name=None,\n new_er_data=None, new_pmag_data=None, replace_data=False):\n \"\"\"\n Find actual data objects for site and location.\n Then call the Site class change method to update site name and data.\n \"\"\"\n site = self.find_by_name(old_site_name, self.sites)\n if not site:\n print('-W- {} is not a currently existing site, so it cannot be updated.'.format(old_site_name))\n return False\n if new_location_name:\n if site.location:\n old_location = self.find_by_name(site.location.name, self.locations)\n if old_location:\n old_location.sites.remove(site)\n new_location = self.find_by_name(new_location_name, self.locations)\n if not new_location:\n print(\"\"\"-W- {} is not a currently existing location.\nAdding location with name: {}\"\"\".format(new_location_name, new_location_name))\n new_location = self.add_location(new_location_name)\n new_location.sites.append(site)\n else:\n new_location = None\n ## check all declinations/azimuths/longitudes in range 0=>360.\n #for key, value in new_er_data.items():\n # new_er_data[key] = pmag.adjust_to_360(value, key)\n site.change_site(new_site_name, new_location, new_er_data, new_pmag_data, replace_data)\n return site"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_site(self, site_name, location_name=None, er_data=None, pmag_data=None):\n if location_name:\n location = self.find_by_name(location_name, self.locations)\n if not location:\n location = self.add_location(location_name)\n else:\n location = None\n\n ## check all declinations/azimuths/longitudes in range 0=>360.\n #for key, value in er_data.items():\n # er_data[key] = pmag.adjust_to_360(value, key)\n\n new_site = Site(site_name, location, self.data_model, er_data, pmag_data)\n self.sites.append(new_site)\n if location:\n 
location.sites.append(new_site)\n return new_site", "response": "Create a Site object and add it to self.sites; if a location is given, also append the new site to that location's sites."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delete_site(self, site_name, replacement_site=None):\n site = self.find_by_name(site_name, self.sites)\n if not site:\n return False\n self.sites.remove(site)\n if site.location:\n site.location.sites.remove(site)\n samples = site.samples\n for samp in samples:\n samp.site = ''\n del site\n return samples", "response": "Remove a site from the list of sites and return its now-orphaned samples."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchanges location name and data.", "response": "def change_location(self, old_location_name, new_location_name, new_parent_name=None,\n new_er_data=None, new_pmag_data=None, replace_data=False):\n \"\"\"\n Find actual data object for location with old_location_name.\n Then call Location class change method to update location name and data.\n \"\"\"\n location = self.find_by_name(old_location_name, self.locations)\n if not location:\n print('-W- {} is not a currently existing location, so it cannot be updated.'.format(old_location_name))\n return False\n location.change_location(new_location_name, new_er_data, new_pmag_data, replace_data)\n return location"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a Location object to the list of locations.", "response": "def add_location(self, location_name, parent_name=None, er_data=None, pmag_data=None):\n \"\"\"\n Create a Location object and add it to self.locations.\n \"\"\"\n if not location_name:\n return False\n location = Location(location_name, data_model=self.data_model, er_data=er_data, pmag_data=pmag_data)\n self.locations.append(location)\n return location"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code 
does\ndef delete_location(self, location_name):\n location = self.find_by_name(location_name, self.locations)\n if not location:\n return False\n sites = location.sites\n self.locations.remove(location)\n for site in sites:\n if site:\n site.location = ''\n del location\n return sites", "response": "Delete a location from the set of available locations."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef change_result(self, old_result_name, new_result_name, new_er_data=None,\n new_pmag_data=None, spec_names=None, samp_names=None,\n site_names=None, loc_names=None, replace_data=False):\n \"\"\"\n Find actual data object for result with old_result_name.\n Then call Result class change method to update result name and data.\n \"\"\"\n result = self.find_by_name(old_result_name, self.results)\n if not result:\n msg = '-W- {} is not a currently existing result, so it cannot be updated.'.format(old_result_name)\n print(msg)\n return False\n else:\n specimens, samples, sites, locations = None, None, None, None\n if spec_names:\n specimens = [self.find_or_create_by_name(spec, self.specimens, 'specimen') for spec in spec_names]\n if samp_names:\n samples = [self.find_or_create_by_name(samp, self.samples, 'sample') for samp in samp_names]\n if site_names:\n sites = [self.find_or_create_by_name(site, self.sites, 'site') for site in site_names]\n if loc_names:\n locations = [self.find_or_create_by_name(loc, self.locations, 'location') for loc in loc_names]\n\n result.change_result(new_result_name, new_pmag_data, specimens, samples,\n sites, locations, replace_data)\n return result", "response": "Change the result name and data of a currently existing result."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading the magic_measurements. 
txt file and return a dictionary of data", "response": "def get_data(self):\n \"\"\"\n attempt to read measurements file in working directory.\n \"\"\"\n meas_file = os.path.join(self.WD, 'magic_measurements.txt')\n if not os.path.isfile(meas_file):\n print(\"-I- No magic_measurements.txt file\")\n return {}\n try:\n meas_data, file_type = pmag.magic_read(meas_file)\n except IOError:\n print(\"-I- No magic_measurements.txt file\")\n return {}\n if file_type == 'bad_file':\n print(\"-E- ERROR: Can't read magic_measurements.txt file. File is corrupted.\")\n\n old_specimen_name = ''\n #start_time = time.time()\n meas_name_list = [measurement.name for measurement in self.measurements]\n\n for rec in meas_data:\n # get citation information\n citation = rec.get('er_citation_names', 'This study')\n if 'This study' not in citation:\n citation = citation.strip() + ':This study'\n er_data = {'er_citation_names': citation}\n pmag_data = {'er_citation_names': 'This study'}\n specimen_name = rec[\"er_specimen_name\"]\n # ignore measurement if there is no specimen\n if specimen_name == \"\" or specimen_name == \" \":\n continue\n # if we've moved onto a new specimen, make sure a sample/site/location\n # exists for that specimen\n if specimen_name != old_specimen_name:\n sample_name = rec[\"er_sample_name\"]\n site_name = rec[\"er_site_name\"]\n location_name = rec[\"er_location_name\"]\n\n # add items and parents\n location = self.find_by_name(location_name, self.locations)\n if location_name and not location:\n location = self.add_location(location_name, er_data=er_data,\n pmag_data=pmag_data)\n site = self.find_by_name(site_name, self.sites)\n if site_name and not site:\n site = self.add_site(site_name, location_name,\n er_data, pmag_data)\n sample = self.find_by_name(sample_name, self.samples)\n if sample_name and not sample:\n sample = self.add_sample(sample_name, site_name,\n er_data, pmag_data)\n specimen = self.find_by_name(specimen_name, self.specimens)\n if 
specimen_name and not specimen:\n specimen = self.add_specimen(specimen_name, sample_name,\n er_data, pmag_data)\n\n # add child_items\n if sample and not self.find_by_name(specimen_name, sample.specimens):\n sample.specimens.append(specimen)\n if site and not self.find_by_name(sample_name, site.samples):\n site.samples.append(sample)\n if location and not self.find_by_name(site_name, location.sites):\n location.sites.append(site)\n\n exp_name = rec['magic_experiment_name']\n meas_num = rec['measurement_number']\n\n meas_name = exp_name + '_' + str(meas_num)\n measurement = self.find_by_name(meas_name, self.measurements, meas_name_list)\n if not measurement:\n self.add_measurement(exp_name, meas_num, specimen.name, rec)\n meas_name_list.append(meas_name)\n\n old_specimen_name = specimen_name"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads the info of a specific child type and parent type from a specific file.", "response": "def get_magic_info(self, child_type, parent_type=None, attr='er',\n filename=None, sort_by_file_type=False):\n \"\"\"\n Read er_*.txt or pmag_*.txt file.\n If no filename is provided, use er_* or pmag_* file in WD.\n If sort_by_file_type, use file header to determine child, parent types,\n instead of passing those in as arguments.\n Once file is open, parse information into dictionaries for each item.\n If the item does not yet exist, add it to the builder data object.\n Then add info to the item object as object.er_data or object.pmag_data.\n \"\"\"\n parent = ''\n grandparent_type = None\n magic_name = 'er_' + child_type + '_name'\n expected_item_type = child_type\n if not filename:\n short_filename = attr + '_' + child_type + 's.txt'\n magic_file = os.path.join(self.WD, short_filename)\n else:\n short_filename = os.path.split(filename)[1]\n magic_file = filename\n attr = short_filename.split('_')[0]\n print('-I- Attempting to read {}'.format(magic_file))\n if not os.path.isfile(magic_file):\n 
print('-W- Could not find {}'.format(magic_file))\n return False\n # get the data from the appropriate .txt file\n\n data_dict, header, file_type = self.read_magic_file(magic_file, magic_name,\n sort_by_file_type=sort_by_file_type)\n if not data_dict:\n print('-W- Could not read in file: {}.\\n Make sure it is a MagIC-format file'.format(magic_file))\n return False\n\n item_type = file_type.split('_')[1][:-1]\n # if a file was named wrong, use the type of data that is actually in that file\n if item_type != expected_item_type:\n print('-W- Expected data of type: {} but instead got: {}'.format(expected_item_type,\n item_type))\n print('-W- Using type: {}'.format(item_type))\n if item_type == 'age':\n self.get_age_info(filename)\n return 'age'\n child_type = item_type\n magic_name = 'er_' + child_type + '_name'\n ind = self.ancestry.index(child_type)\n parent_type = self.ancestry[ind+1]\n if item_type != 'location':\n grandparent_type = self.ancestry[ind+2]\n else:\n grandparent_type = ''\n if not grandparent_type:\n ind = self.ancestry.index(child_type)\n try:\n grandparent_type = self.ancestry[ind+2]\n except IndexError:\n grandparent_type = None\n child_list, child_class, child_constructor = self.data_lists[child_type]\n\n if parent_type:\n parent_list, parent_class, parent_constructor = self.data_lists[parent_type]\n else:\n parent_list, parent_name = None, None\n for child_name in data_dict:\n # if there is a possible parent, try to find parent object in the data model\n if parent_type:\n parent_name = data_dict[child_name].get('er_' + parent_type + '_name', '')\n parent = self.find_by_name(parent_name, parent_list)\n if parent:\n remove_dict_headers(parent.er_data)\n remove_dict_headers(parent.pmag_data)\n # if there should be a parent\n # (meaning there is a name for it and the child object should have a parent)\n # but none exists in the data model, go ahead and create that parent object.\n if parent_name and parent_type and not parent:\n # try to get 
grandparent\n grandparent = None\n grandparent_name = None\n if grandparent_type:\n grandparent_list, grandparent_class, grandparent_constructor = self.data_lists[grandparent_type]\n grandparent_name = data_dict[child_name]['er_' + grandparent_type + '_name']\n grandparent = self.find_by_name(grandparent_name, grandparent_list)\n if grandparent_name and not grandparent:\n grandparent = grandparent_constructor(grandparent_name, None)\n parent = parent_constructor(parent_name, grandparent_name)\n # otherwise there is no parent and none can be created, so use an empty string\n elif not parent:\n parent_name = None\n parent = ''\n child = self.find_by_name(child_name, child_list)\n # if the child object does not exist yet in the data model\n if not child:\n child = child_constructor(child_name, parent_name)\n else:\n # bind parent to child and child to parent\n if parent:\n child.set_parent(parent)\n if parent and (child not in parent.children):\n parent.add_child(child)\n\n # add in the appropriate data dictionary to the child object\n if attr == 'er':\n self.update_methods[child_type](child_name, child_name, parent_name,\n new_er_data=data_dict[child_name])\n else:\n self.update_methods[child_type](child_name, child_name, parent_name,\n new_pmag_data=data_dict[child_name])\n # old way\n #child.__setattr__(attr + '_data', data_dict[child_name])\n remove_dict_headers(child.er_data)\n remove_dict_headers(child.pmag_data)\n #\n return child_type"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_age_info(self, filename=None):\n # use filename if provided, otherwise find er_ages.txt in WD\n if not filename:\n short_filename = 'er_ages.txt'\n magic_file = os.path.join(self.WD, short_filename)\n else:\n magic_file = filename\n if not os.path.isfile(magic_file):\n print('-W- Could not find {}'.format(magic_file))\n return False\n\n data_dict, header, file_type = self.read_magic_file(magic_file, 
'by_line_number')\n # if provided file is not an age_file,\n # try to read it in as whatever type of file it actually is\n if file_type != 'er_ages':\n item_type = file_type.split('_')[1][:-1]\n self.get_magic_info(item_type, filename=filename, sort_by_file_type=True)\n return file_type\n\n # if it is an age file,\n # determine level for each age and assign it to the appropriate pmag object\n for item_dict in list(data_dict.values()):\n item_type = None\n for dtype in ['specimen', 'sample', 'site', 'location']:\n header_name = 'er_' + dtype + '_name'\n if header_name in list(item_dict.keys()):\n if item_dict[header_name]:\n item_type = dtype\n item_name = item_dict[header_name].strip()\n break\n if not item_type:\n print('-W- You must provide a name for your age')\n print(' These data:\\n{}\\n will not be imported'.format(item_dict))\n continue\n items_list = self.data_lists[item_type][0]\n item = self.find_by_name(item_name, items_list)\n if not item:\n ## the following code creates any item in er_ages that does not exist already\n ## however, we may not WANT that behavior\n print(\"\"\"-I- A {} named {} in your age file was not found in the data object:\n Now initializing {} {}\"\"\".format(item_type, item_name, item_type, item_name))\n ind = self.ancestry.index(item_type)\n parent_type = self.ancestry[ind+1]\n parent_header, parent_constructor = None, None\n if parent_type:\n parent_list, parent_class, parent_constructor = self.data_lists[parent_type]\n parent_header = 'er_' + parent_type + '_name'\n parent_name = item_dict.get(parent_header, '')\n parent = self.find_by_name(parent_name, parent_list)\n # if the parent item doesn't exist, and should, create it\n if parent_name and not parent:\n print(\"\"\"-I- A {} named {} in your age file was not found in the data object:\n Now initializing {} {}\"\"\".format(parent_type, parent_name, parent_type, parent_name))\n parent = parent_constructor(parent_name, None)\n item_constructor = self.data_lists[item_type][2]\n 
if not parent:\n parent_name = None\n item = item_constructor(item_name, parent_name)\n # add the age data to the object\n item.age_data = remove_dict_headers(item_dict)\n # note that data is available to write\n self.write_ages = True\n return file_type", "response": "Read the er_ages.txt file and return the age information."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreading the pmag_results.txt file, parsing it into dictionaries, and adding the data to the results_data attribute.", "response": "def get_results_info(self, filename=None):\n \"\"\"\n Read pmag_results.txt file.\n Parse information into dictionaries for each item.\n Then add it to the item object as object.results_data.\n \"\"\"\n if not filename:\n short_filename = \"pmag_results.txt\"\n magic_file = os.path.join(self.WD, short_filename)\n else:\n magic_file = filename\n short_filename = os.path.split(filename)[1] # so the warning below can name the file\n if not os.path.isfile(magic_file):\n print('-W- Could not find {} in your working directory {}'.format(short_filename, self.WD))\n return False\n # get the data from the pmag_results.txt file\n data_dict = self.read_magic_file(magic_file, 'by_line_number')[0]\n\n def make_items_list(string, search_items_list):\n names = string.split(':')\n items = []\n for name in names:\n name = name.strip(' ')\n item = self.find_by_name(name, search_items_list)\n if item:\n items.append(item)\n return items\n\n for num, result in list(data_dict.items()):\n name, specimens, samples, sites, locations = None, None, None, None, None\n for key, value in list(result.items()):\n #print key, ':', value\n if key == 'er_specimen_names':\n specimens = make_items_list(value, self.specimens)\n if key == 'er_sample_names':\n samples = make_items_list(value, self.samples)\n if key == 'er_site_names':\n sites = make_items_list(value, self.sites)\n if key == 'er_location_names':\n locations = make_items_list(value, self.locations)\n if key == 'pmag_result_name':\n name = value\n for header_name in ['er_specimen_names', 'er_site_names',\n 
'er_sample_names', 'er_location_names']:\n if header_name in list(result.keys()):\n result.pop(header_name)\n if not name:\n name = num\n result_item = self.find_by_name(name, self.results)\n if not result_item:\n result_item = Result(name, specimens, samples, sites, locations, result, self.data_model)\n else:\n print('-W- Two or more results with name: {} found in your result file.\\n Taking only the first.'.format(name))\n if result_item and result_item not in self.results:\n self.results.append(result_item)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreading a MagIC-formatted tab-delimited file and returning a dictionary of dictionaries keyed by the requested name column (or by line number).", "response": "def read_magic_file(self, path, sort_by_this_name, sort_by_file_type=False):\n \"\"\"\n read a magic-formatted tab-delimited file.\n return a dictionary of dictionaries, with this format:\n {'Z35.5a': {'specimen_weight': '1.000e-03', 'er_citation_names': 'This study', 'specimen_volume': '', 'er_location_name': '', 'er_site_name': 'Z35.', 'er_sample_name': 'Z35.5', 'specimen_class': '', 'er_specimen_name': 'Z35.5a', 'specimen_lithology': '', 'specimen_type': ''}, ....}\n \"\"\"\n DATA = {}\n with open(path, 'r') as fin:\n lines = list(fin.readlines())\n first_line = lines[0]\n if not first_line:\n return False, None, 'empty_file'\n if first_line[0] == "s" or first_line[1] == "s":\n delim = ' '\n elif first_line[0] == "t" or first_line[1] == "t":\n delim = '\\t'\n else:\n print('-W- error reading ', path)\n return False, None, 'bad_file'\n\n file_type = first_line.strip('\\n').split(delim)[1]\n if sort_by_file_type:\n item_type = file_type.split('_')[1][:-1]\n if item_type == 'age':\n sort_by_this_name = "by_line_number"\n else:\n sort_by_this_name = 'er_' + item_type + '_name'\n line = lines[1]\n header = line.strip('\\n').split(delim)\n counter = 0\n for line in 
lines[2:]:\n tmp_data = {}\n tmp_line = line.strip('\\n').split(delim)\n for i in range(len(header)):\n if i < len(tmp_line):\n tmp_data[header[i]] = tmp_line[i].strip()\n else:\n tmp_data[header[i]] = \"\"\n if sort_by_this_name == \"by_line_number\":\n DATA[counter] = tmp_data\n counter += 1\n else:\n if tmp_data[sort_by_this_name] != \"\":\n DATA[tmp_data[sort_by_this_name]] = tmp_data\n return DATA, header, file_type"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef write_files(self):\n warnings = self.validate_data()\n\n print('-I- Writing all saved data to files')\n if self.measurements:\n self.write_measurements_file()\n for dtype in ['specimen', 'sample', 'site']:\n if self.data_lists[dtype][0]:\n do_pmag = dtype in self.incl_pmag_data\n self.write_magic_file(dtype, do_er=True, do_pmag=do_pmag)\n if not do_pmag:\n pmag_file = os.path.join(self.WD, 'pmag_' + dtype + 's.txt')\n if os.path.isfile(pmag_file):\n os.remove(pmag_file)\n\n if self.locations:\n self.write_magic_file('location', do_er=True, do_pmag=False)\n\n self.write_age_file()\n\n if self.results:\n self.write_result_file()\n\n if warnings:\n print('-W- ' + str(warnings))\n return False, warnings\n\n return True, None", "response": "write all data into er_* and pmag_* files as appropriate"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwriting er_ages. 
txt based on updated ErMagicBuilder data object", "response": "def write_age_file(self):\n \"\"\"\n Write er_ages.txt based on updated ErMagicBuilder data object\n \"\"\"\n if not self.write_ages:\n print('-I- No age data available to write')\n return\n first_headers = self.first_age_headers\n actual_headers = sorted(self.headers['age']['er'][0])\n for header in first_headers:\n if header in actual_headers:\n actual_headers.remove(header)\n add_headers = ['er_specimen_name', 'er_sample_name', 'er_site_name', 'er_location_name']\n actual_headers[:0] = first_headers\n full_headers = add_headers[:]\n full_headers.extend(actual_headers)\n\n header_string = '\\t'.join(full_headers)\n ages = []\n for dtype in ['specimen', 'sample', 'site', 'location']:\n ages_list = sorted(self.data_lists[dtype][0], key=lambda item: item.name)\n ages.extend(ages_list)\n\n age_strings = []\n for age in ages:\n ind = self.ancestry.index(age.dtype)\n ancestors = ['' for num in range(len(self.ancestry) - (ind+2))]\n data_found = False\n string = ''\n if age.dtype == 'specimen':\n string += age.name + '\\t'\n elif age.dtype == 'sample':\n string += '\\t' + age.name + '\\t'\n elif age.dtype == 'site':\n string += '\\t\\t' + age.name + '\\t'\n elif age.dtype == 'location':\n string += '\\t\\t\\t' + age.name + '\\t'\n parent = age.get_parent()\n grandparent = None\n if parent:\n ancestors[0] = parent.name\n grandparent = parent.get_parent()\n if grandparent:\n ancestors[1] = grandparent.name\n greatgrandparent = grandparent.get_parent()\n if greatgrandparent:\n ancestors[2] = greatgrandparent.name\n for ancestor in ancestors:\n string += ancestor + '\\t'\n for key in actual_headers:\n try:\n add_string = age.age_data[key]\n except KeyError:\n add_string = ''\n age.age_data[key] = ''\n if add_string and not key == 'er_citation_names':\n data_found = True\n if key == 'er_citation_names' and not add_string.strip('\\t'):\n add_string = 'This study'\n string += add_string + '\\t'\n # prevent extra '' 
at the end of age string\n if string.endswith('\\t'):\n string = string[:-1]\n # only write ages to file if there is data provided\n if data_found:\n age_strings.append(string)\n outfile = open(os.path.join(self.WD, 'er_ages.txt'), 'w')\n outfile.write('tab\\ter_ages\\n')\n outfile.write(header_string + '\\n')\n if not age_strings:\n outfile.close()\n os.remove(os.path.join(self.WD, 'er_ages.txt'))\n return False\n for string in age_strings:\n outfile.write(string + '\\n')\n outfile.close()\n return outfile"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate_data(self):\n warnings = {}\n spec_warnings, samp_warnings, site_warnings, loc_warnings = {}, {}, {}, {}\n if self.specimens:\n spec_warnings = self.validate_items(self.specimens, 'specimen')\n if self.samples:\n samp_warnings = self.validate_items(self.samples, 'sample')\n if self.sites:\n site_warnings = self.validate_items(self.sites, 'site')\n if self.locations:\n loc_warnings = self.validate_items(self.locations, 'location')\n return spec_warnings, samp_warnings, site_warnings, loc_warnings", "response": "Validate specimen sample site and location data."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nvalidate a list of items and return a dictionary of errors.", "response": "def validate_items(self, item_list, item_type):\n \"\"\"\n Go through a list Pmag_objects and check for:\n parent errors,\n children errors,\n type errors.\n Return a dictionary of exceptions in this format:\n {sample1: {'parent': [warning1, warning2, warning3], 'child': [warning1, warning2]},\n sample2: {'child': [warning1], 'type': [warning1, warning2]},\n ...}\n\n \"\"\"\n def append_or_create_dict_item(warning_type, dictionary, key, value):\n \"\"\"\n Add to dictionary with this format:\n {key1: {warning_type1: [value1, value2], warning_type2: [value1]},\n ...}\n \"\"\"\n if not value:\n return\n try:\n name = key.name\n except AttributeError:\n 
name = key\n if not name in dictionary:\n dictionary[name] = {}\n if not warning_type in dictionary[name]:\n dictionary[name][warning_type] = []\n for v in value:\n dictionary[name][warning_type].append(v)\n\n def check_item_type(item, item_type):#, warnings=None):\n \"\"\"\n Make sure that item has appropriate type, and is in the data object.\n \"\"\"\n warnings = []\n item_list, item_class, item_constructor = self.data_lists[item_type]\n if not isinstance(item, item_class):\n warnings.append(PmagException('wrong type'))\n if item not in item_list:\n warnings.append(PmagException('not in data object'))\n return warnings\n\n def check_item_for_parent(item, item_type, parent_type):\n \"\"\"\n Make sure that item has a parent of the correct type\n \"\"\"\n if not parent_type:\n return []\n if not isinstance(item, Pmag_object):\n return []\n warnings = []\n parent = item.get_parent()\n parent_list, parent_class, parent_constructor = self.data_lists[parent_type]\n if not parent or not parent.name:\n warnings.append(PmagException('missing parent'))\n return warnings\n if not isinstance(parent, parent_class):\n warnings.append(PmagException('invalid parent type', parent))\n if not parent in parent_list:\n warnings.append(PmagException('parent not in data object', parent))\n return warnings\n\n def check_item_for_children(item, child_type):\n \"\"\"\n Make sure that any children are of the correct type,\n and are in the data object\n \"\"\"\n if not child_type:\n return []\n warnings = []\n children = item.children\n child_list, child_class, child_constructor = self.data_lists[child_type]\n for child in children:\n if not isinstance(child, child_class):\n warnings.append(PmagException('child has wrong type', child))\n if not child in child_list:\n warnings.append(PmagException('child not in data object', child))\n return warnings\n\n warnings = {}\n type_ind = self.ancestry.index(item_type)\n parent_type = self.ancestry[type_ind+1]\n child_type = 
self.ancestry[type_ind-1]\n for item in item_list:\n #warnings[item] = []\n type_warnings = check_item_type(item, item_type)\n append_or_create_dict_item('type', warnings, item, type_warnings)\n parent_warnings = check_item_for_parent(item, item_type, parent_type)\n append_or_create_dict_item('parent', warnings, item, parent_warnings)\n child_warnings = check_item_for_children(item, child_type)\n append_or_create_dict_item('children', warnings, item, child_warnings)\n return warnings"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_min_max_lat_lon(self, locations):\n d = {}\n for location in locations:\n sites = location.sites\n max_lat, min_lat = '', ''\n max_lon, min_lon = '', ''\n if not any(sites):\n d[location.name] = {'location_begin_lat': min_lat, 'location_begin_lon': min_lon,\n 'location_end_lat': max_lat, 'location_end_lon': max_lon}\n #return d\n continue\n lats, lons = [], []\n # try to fill in min/max latitudes/longitudes from sites\n for site in sites:\n if site.er_data['site_lon']:\n lons.append(site.er_data['site_lon'])\n if site.er_data['site_lat']:\n lats.append(site.er_data['site_lat'])\n if lats:\n lats = [float(lat) for lat in lats]\n max_lat = max(lats)\n min_lat = min(lats)\n if lons:\n lons = [float(lon) for lon in lons]\n max_lon = max(lons)\n min_lon = min(lons)\n d[location.name] = {'location_begin_lat': min_lat, 'location_begin_lon': min_lon,\n 'location_end_lat': max_lat, 'location_end_lon': max_lon}\n return d", "response": "Take a list of locations and return a dictionary with the keys location_begin_lat location_begin_lon location_end_lat and location_end_lon."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_parent(self, new_samp):\n self.sample = new_samp\n if new_samp:\n if not isinstance(new_samp, Sample):\n raise Exception\n self.propagate_data()\n return new_samp", "response": "Set self. 
sample as either an empty string or a new Sample."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_parent(self, new_site):\n if new_site:\n if not isinstance(new_site, Site):\n raise Exception\n self.site = new_site\n self.propagate_data()\n return new_site", "response": "Set self.site as either an empty string or a new Site."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nchange the name, location, er_data, and pmag_data of a specific site.", "response": "def change_site(self, new_name, new_location=None, new_er_data=None,\n new_pmag_data=None, replace_data=False):\n \"\"\"\n Update a site's name, location, er_data, and pmag_data.\n By default, new data will be added to pre-existing data, overwriting existing values.\n If replace_data is True, the new data dictionary will simply take the place of the existing dict.\n \"\"\"\n self.name = new_name\n if new_location:\n self.location = new_location\n self.update_data(new_er_data, new_pmag_data, replace_data)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nnames thellier_magic.py DESCRIPTION plots Thellier-Thellier, allowing interactive setting of bounds and customizing of selection criteria. Saves and reads interpretations from a pmag_specimen formatted table, default: thellier_specimens.txt SYNTAX thellier_magic.py [command line options] OPTIONS -h prints help message and quits -f MEAS, set magic_measurements input file -fsp PRIOR, set pmag_specimen prior interpretations file -fan ANIS, set rmag_anisotropy file for doing the anisotropy corrections -fcr CRIT, set criteria file for grading. 
-fmt [svg,png,jpg], format for images - default is svg -sav, saves plots without review (default format) -spc SPEC, plots single specimen SPEC, saves plot with specified format with optional -b bounds and quits -b BEG END: sets bounds for calculation BEG: starting step for slope calculation END: ending step for slope calculation -z use only z component difference for pTRM calculation DEFAULTS MEAS: magic_measurements.txt REDO: thellier_redo CRIT: NONE PRIOR: NONE OUTPUT figures: ALL: numbers refer to temperature steps in command line window 1) Arai plot: closed circles are zero-field first/infield open circles are infield first/zero-field triangles are pTRM checks squares are pTRM tail checks VDS is vector difference sum diamonds are bounds for interpretation 2) Zijderveld plot: closed (open) symbols are X-Y (X-Z) planes X rotated to NRM direction 3) (De/Re)Magnetization diagram: circles are NRM remaining squares are pTRM gained 4) equal area projections: green triangles are pTRM gained direction red (purple) circles are lower(upper) hemisphere of ZI step directions blue (cyan) squares are lower(upper) hemisphere IZ step directions 5) Optional: TRM acquisition 6) Optional: TDS normalization command line window: list is: temperature step numbers, temperatures (C), Dec, Inc, Int (units of magic_measurements) list of possible commands: type letter followed by return to select option saving of plots creates .svg format files with specimen_name, plot type as name", "response": "def main():\n \"\"\"\n NAME\n thellier_magic.py\n\n DESCRIPTION\n plots Thellier-Thellier, allowing interactive setting of bounds\n and customizing of selection criteria. 
Saves and reads interpretations\n from a pmag_specimen formatted table, default: thellier_specimens.txt\n\n SYNTAX\n thellier_magic.py [command line options]\n\n OPTIONS\n -h prints help message and quits\n -f MEAS, set magic_measurements input file\n -fsp PRIOR, set pmag_specimen prior interpretations file\n -fan ANIS, set rmag_anisotropy file for doing the anisotropy corrections\n -fcr CRIT, set criteria file for grading.\n -fmt [svg,png,jpg], format for images - default is svg\n -sav, saves plots without review (default format)\n -spc SPEC, plots single specimen SPEC, saves plot with specified format\n with optional -b bounds and quits\n -b BEG END: sets bounds for calculation\n BEG: starting step for slope calculation\n END: ending step for slope calculation\n -z use only z component difference for pTRM calculation\n\n DEFAULTS\n MEAS: magic_measurements.txt\n REDO: thellier_redo\n CRIT: NONE\n PRIOR: NONE\n\n OUTPUT\n figures:\n ALL: numbers refer to temperature steps in command line window\n 1) Arai plot: closed circles are zero-field first/infield\n open circles are infield first/zero-field\n triangles are pTRM checks\n squares are pTRM tail checks\n VDS is vector difference sum\n diamonds are bounds for interpretation\n 2) Zijderveld plot: closed (open) symbols are X-Y (X-Z) planes\n X rotated to NRM direction\n 3) (De/Re)Magnetization diagram:\n circles are NRM remaining\n squares are pTRM gained\n 4) equal area projections:\n green triangles are pTRM gained direction\n red (purple) circles are lower(upper) hemisphere of ZI step directions\n blue (cyan) squares are lower(upper) hemisphere IZ step directions\n 5) Optional: TRM acquisition\n 6) Optional: TDS normalization\n command line window:\n list is: temperature step numbers, temperatures (C), Dec, Inc, Int (units of magic_measurements)\n list of possible commands: type letter followed by return to select option\n saving of plots creates .svg format files with specimen_name, plot type as name\n 
\"\"\"\n#\n# initializations\n#\n meas_file, critout, inspec = \"magic_measurements.txt\", \"\", \"thellier_specimens.txt\"\n first = 1\n inlt = 0\n version_num = pmag.get_version()\n TDinit, Tinit, field, first_save = 0, 0, -1, 1\n user, comment, AniSpec, locname = \"\", '', \"\", \"\"\n ans, specimen, recnum, start, end = 0, 0, 0, 0, 0\n plots, pmag_out, samp_file, style = 0, \"\", \"\", \"svg\"\n verbose = pmagplotlib.verbose\n fmt = '.'+style\n#\n# default acceptance criteria\n#\n accept = pmag.default_criteria(0)[0] # set the default criteria\n#\n# parse command line options\n#\n Zdiff, anis = 0, 0\n spc, BEG, END = \"\", \"\", \"\"\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind = sys.argv.index('-f')\n meas_file = sys.argv[ind+1]\n if '-fsp' in sys.argv:\n ind = sys.argv.index('-fsp')\n inspec = sys.argv[ind+1]\n if '-fan' in sys.argv:\n ind = sys.argv.index('-fan')\n anisfile = sys.argv[ind+1]\n anis = 1\n anis_data, file_type = pmag.magic_read(anisfile)\n if verbose:\n print(\"Anisotropy data read in from \", anisfile)\n if '-fmt' in sys.argv:\n ind = sys.argv.index('-fmt')\n fmt = '.'+sys.argv[ind+1]\n if '-dpi' in sys.argv:\n ind = sys.argv.index('-dpi')\n dpi = int(sys.argv[ind+1])\n else:\n dpi = 100\n if '-sav' in sys.argv:\n plots = 1\n verbose = 0\n if '-z' in sys.argv:\n Zdiff = 1\n if '-spc' in sys.argv:\n ind = sys.argv.index('-spc')\n spc = sys.argv[ind+1]\n if '-b' in sys.argv:\n ind = sys.argv.index('-b')\n BEG = int(sys.argv[ind+1])\n END = int(sys.argv[ind+2])\n if '-fcr' in sys.argv:\n ind = sys.argv.index('-fcr')\n critout = sys.argv[ind+1]\n crit_data, file_type = pmag.magic_read(critout)\n if file_type != 'pmag_criteria':\n if verbose:\n print('bad pmag_criteria file, using no acceptance criteria')\n accept = pmag.default_criteria(1)[0]\n else:\n if verbose:\n print(\"Acceptance criteria read in from \", critout)\n accept = {'pmag_criteria_code': 'ACCEPTANCE',\n 'er_citation_names': 'This study'}\n 
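The option handling above repeats one pattern: test membership in `sys.argv`, then take the next token. A minimal standalone sketch of that pattern on a plain list (the helper name and the argument values here are made up for illustration):

```python
def get_flag_value(argv, flag, default=None):
    """Return the token following `flag` in argv, or `default` if the
    flag is absent -- the same '-f value' scan used repeatedly above."""
    if flag in argv:
        ind = argv.index(flag)
        return argv[ind + 1]
    return default

# Hypothetical command line; the filenames are made up.
argv = ['thellier_magic.py', '-f', 'my_measurements.txt', '-fmt', 'png']
print(get_flag_value(argv, '-f', 'magic_measurements.txt'))    # my_measurements.txt
print(get_flag_value(argv, '-fsp', 'thellier_specimens.txt'))  # thellier_specimens.txt
```

A dedicated parser such as `argparse` would also report unknown flags; the membership scan silently ignores them, which matches the script's permissive behavior.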
for critrec in crit_data:\n if 'sample_int_sigma_uT' in critrec.keys(): # accommodate Shaar's new criterion\n critrec['sample_int_sigma'] = '%10.3e' % (\n eval(critrec['sample_int_sigma_uT'])*1e-6)\n for key in critrec.keys():\n if key not in accept.keys() and critrec[key] != '':\n accept[key] = critrec[key]\n try:\n open(inspec, 'rU')\n PriorRecs, file_type = pmag.magic_read(inspec)\n if file_type != 'pmag_specimens':\n print(file_type)\n print(file_type, inspec, \" is not a valid pmag_specimens file \")\n sys.exit()\n for rec in PriorRecs:\n if 'magic_software_packages' not in rec.keys():\n rec['magic_software_packages'] = \"\"\n except IOError:\n PriorRecs = []\n if verbose:\n print(\"starting new specimen interpretation file: \", inspec)\n meas_data, file_type = pmag.magic_read(meas_file)\n if file_type != 'magic_measurements':\n print(file_type)\n print(file_type, \"This is not a valid magic_measurements file \")\n sys.exit()\n backup = 0\n # define figure numbers for arai, zijderveld and\n # de-,re-magization diagrams\n AZD = {}\n AZD['deremag'], AZD['zijd'], AZD['arai'], AZD['eqarea'] = 1, 2, 3, 4\n pmagplotlib.plot_init(AZD['arai'], 5, 5)\n pmagplotlib.plot_init(AZD['zijd'], 5, 5)\n pmagplotlib.plot_init(AZD['deremag'], 5, 5)\n pmagplotlib.plot_init(AZD['eqarea'], 5, 5)\n #\n #\n #\n # get list of unique specimen names\n #\n CurrRec = []\n sids = pmag.get_specs(meas_data)\n # get plots for specimen s - default is just to step through arai diagrams\n #\n if spc != \"\":\n specimen = sids.index(spc)\n while specimen < len(sids):\n methcodes = []\n\n if verbose:\n print(sids[specimen], specimen+1, 'of ', len(sids))\n MeasRecs = []\n s = sids[specimen]\n datablock, trmblock, tdsrecs = [], [], []\n PmagSpecRec = {}\n if first == 0:\n for key in keys:\n # make sure all new records have same set of keys\n PmagSpecRec[key] = \"\"\n PmagSpecRec[\"er_analyst_mail_names\"] = user\n PmagSpecRec[\"specimen_correction\"] = 'u'\n #\n # find the data from the meas_data 
file for this specimen\n #\n for rec in meas_data:\n if rec[\"er_specimen_name\"] == s:\n MeasRecs.append(rec)\n if \"magic_method_codes\" not in rec.keys():\n rec[\"magic_method_codes\"] = \"\"\n methods = rec[\"magic_method_codes\"].split(\":\")\n meths = []\n for meth in methods:\n meths.append(meth.strip()) # take off annoying spaces\n methods = \"\"\n for meth in meths:\n if meth.strip() not in methcodes and \"LP-\" in meth:\n methcodes.append(meth.strip())\n methods = methods+meth+\":\"\n methods = methods[:-1]\n rec[\"magic_method_codes\"] = methods\n if \"LP-PI-TRM\" in meths:\n datablock.append(rec)\n if \"LP-TRM\" in meths:\n trmblock.append(rec)\n if \"LP-TRM-TD\" in meths:\n tdsrecs.append(rec)\n if len(trmblock) > 2 and inspec != \"\":\n if Tinit == 0:\n Tinit = 1\n AZD['TRM'] = 5\n pmagplotlib.plot_init(AZD['TRM'], 5, 5)\n elif Tinit == 1: # clear the TRM figure if not needed\n pmagplotlib.clearFIG(AZD['TRM'])\n if len(tdsrecs) > 2:\n if TDinit == 0:\n TDinit = 1\n AZD['TDS'] = 6\n pmagplotlib.plot_init(AZD['TDS'], 5, 5)\n elif TDinit == 1: # clear the TDS figure if not needed\n pmagplotlib.clearFIG(AZD['TDS'])\n if len(datablock) < 4:\n if backup == 0:\n specimen += 1\n if verbose:\n print('skipping specimen - moving forward ', s)\n else:\n specimen -= 1\n if verbose:\n print('skipping specimen - moving backward ', s)\n #\n # collect info for the PmagSpecRec dictionary\n #\n else:\n rec = datablock[0]\n PmagSpecRec[\"er_citation_names\"] = \"This study\"\n PmagSpecRec[\"er_specimen_name\"] = s\n PmagSpecRec[\"er_sample_name\"] = rec[\"er_sample_name\"]\n PmagSpecRec[\"er_site_name\"] = rec[\"er_site_name\"]\n PmagSpecRec[\"er_location_name\"] = rec[\"er_location_name\"]\n locname = rec['er_location_name'].replace('/', '-')\n if \"er_expedition_name\" in rec.keys():\n PmagSpecRec[\"er_expedition_name\"] = rec[\"er_expedition_name\"]\n if \"magic_instrument_codes\" not in rec.keys():\n rec[\"magic_instrument_codes\"] = \"\"\n 
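The record loop above tidies colon-delimited `magic_method_codes` strings and collects the `LP-` lab-protocol codes. A condensed sketch of that bookkeeping (the helper name is illustrative; the original works in place on each measurement record):

```python
def tidy_method_codes(code_string):
    """Split a colon-delimited magic_method_codes string, strip stray
    spaces, and collect the 'LP-' lab-protocol codes."""
    meths = [meth.strip() for meth in code_string.split(':')]
    lp_codes = [meth for meth in meths if 'LP-' in meth]
    return ':'.join(meths), lp_codes

cleaned, lp = tidy_method_codes('LP-PI-TRM : LT-T-Z: LP-TRM')
print(cleaned)  # LP-PI-TRM:LT-T-Z:LP-TRM
print(lp)       # ['LP-PI-TRM', 'LP-TRM']
```

The cleaned codes are what later steps match against (`LP-PI-TRM` selects Thellier data, `LP-TRM` the TRM-acquisition block, `LP-TRM-TD` the TDS block).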
PmagSpecRec[\"magic_instrument_codes\"] = rec[\"magic_instrument_codes\"]\n PmagSpecRec[\"measurement_step_unit\"] = \"K\"\n if \"magic_experiment_name\" not in rec.keys():\n rec[\"magic_experiment_name\"] = \"\"\n else:\n PmagSpecRec[\"magic_experiment_names\"] = rec[\"magic_experiment_name\"]\n\n meths = rec[\"magic_method_codes\"].split()\n # sort data into types\n araiblock, field = pmag.sortarai(datablock, s, Zdiff)\n first_Z = araiblock[0]\n GammaChecks = araiblock[5]\n if len(first_Z) < 3:\n if backup == 0:\n specimen += 1\n if verbose:\n print('skipping specimen - moving forward ', s)\n else:\n specimen -= 1\n if verbose:\n print('skipping specimen - moving backward ', s)\n else:\n backup = 0\n zijdblock, units = pmag.find_dmag_rec(s, meas_data)\n recnum = 0\n if verbose:\n print(\"index step Dec Inc Int Gamma\")\n for plotrec in zijdblock:\n if GammaChecks != \"\":\n gamma = \"\"\n for g in GammaChecks:\n if g[0] == plotrec[0]-273:\n gamma = g[1]\n break\n if gamma != \"\":\n print('%i %i %7.1f %7.1f %8.3e %7.1f' % (\n recnum, plotrec[0]-273, plotrec[1], plotrec[2], plotrec[3], gamma))\n else:\n print('%i %i %7.1f %7.1f %8.3e ' % (\n recnum, plotrec[0]-273, plotrec[1], plotrec[2], plotrec[3]))\n recnum += 1\n pmagplotlib.plot_arai_zij(AZD, araiblock, zijdblock, s, units[0])\n if verbose:\n pmagplotlib.draw_figs(AZD)\n if len(tdsrecs) > 2: # a TDS experiment\n tdsblock = [] # make a list for the TDS data\n Mkeys = ['measurement_magnitude', 'measurement_magn_moment',\n 'measurement_magn_volume', 'measurement_magn_mass']\n mkey, k = \"\", 0\n # find which type of intensity\n while mkey == \"\" and k < len(Mkeys)-1:\n key = Mkeys[k]\n if key in tdsrecs[0].keys() and tdsrecs[0][key] != \"\":\n mkey = key\n k += 1\n if mkey == \"\":\n break # get outta here\n Tnorm = \"\"\n for tdrec in tdsrecs:\n meths = tdrec['magic_method_codes'].split(\":\")\n # strip off potential nasty spaces\n meths = [meth.replace(\" \", \"\") for meth in meths]\n if 'LT-T-I' in meths and 
Tnorm == \"\": # found first total TRM\n # normalize by total TRM\n Tnorm = float(tdrec[mkey])\n # put in the zero step\n tdsblock.append([273, zijdblock[0][3]/Tnorm, 1.])\n # found a LP-TRM-TD demag step, now need complementary LT-T-Z from zijdblock\n if 'LT-T-Z' in meths and Tnorm != \"\":\n step = float(tdrec['treatment_temp'])\n Tint = \"\"\n if mkey != \"\":\n Tint = float(tdrec[mkey])\n if Tint != \"\":\n for zrec in zijdblock:\n if zrec[0] == step: # found matching\n tdsblock.append(\n [step, zrec[3]/Tnorm, Tint/Tnorm])\n break\n if len(tdsblock) > 2:\n pmagplotlib.plot_tds(\n AZD['TDS'], tdsblock, s+':LP-PI-TDS:')\n if verbose:\n pmagplotlib.draw_figs(AZD)\n else:\n print(\"Something wrong here\")\n if anis == 1: # look up anisotropy data for this specimen\n AniSpec = \"\"\n for aspec in anis_data:\n if aspec[\"er_specimen_name\"] == PmagSpecRec[\"er_specimen_name\"]:\n AniSpec = aspec\n if verbose:\n print('Found anisotropy record...')\n break\n if inspec != \"\":\n if verbose:\n print('Looking up saved interpretation....')\n found = 0\n for k in range(len(PriorRecs)):\n try:\n if PriorRecs[k][\"er_specimen_name\"] == s:\n found = 1\n CurrRec.append(PriorRecs[k])\n for j in range(len(zijdblock)):\n if float(zijdblock[j][0]) == float(PriorRecs[k][\"measurement_step_min\"]):\n start = j\n if float(zijdblock[j][0]) == float(PriorRecs[k][\"measurement_step_max\"]):\n end = j\n pars, errcode = pmag.PintPars(\n datablock, araiblock, zijdblock, start, end, accept)\n pars['measurement_step_unit'] = \"K\"\n pars['experiment_type'] = 'LP-PI-TRM'\n # put in CurrRec, take out of PriorRecs\n del PriorRecs[k]\n if errcode != 1:\n pars[\"specimen_lab_field_dc\"] = field\n pars[\"specimen_int\"] = -1 * \\\n field*pars[\"specimen_b\"]\n pars[\"er_specimen_name\"] = s\n if verbose:\n print('Saved interpretation: ')\n pars, kill = pmag.scoreit(\n pars, PmagSpecRec, accept, '', verbose)\n pmagplotlib.plot_b(\n AZD, araiblock, zijdblock, pars)\n if verbose:\n 
pmagplotlib.draw_figs(AZD)\n if len(trmblock) > 2:\n blab = field\n best = pars[\"specimen_int\"]\n Bs, TRMs = [], []\n for trec in trmblock:\n Bs.append(\n float(trec['treatment_dc_field']))\n TRMs.append(\n float(trec['measurement_magn_moment']))\n # calculate best fit parameters through TRM acquisition data, and get new banc\n NLpars = nlt.NLtrm(\n Bs, TRMs, best, blab, 0)\n Mp, Bp = [], []\n for k in range(int(max(Bs)*1e6)):\n Bp.append(float(k)*1e-6)\n # predicted NRM for this field\n npred = nlt.TRM(\n Bp[-1], NLpars['xopt'][0], NLpars['xopt'][1])\n Mp.append(npred)\n pmagplotlib.plot_trm(\n AZD['TRM'], Bs, TRMs, Bp, Mp, NLpars, trec['magic_experiment_name'])\n PmagSpecRec['specimen_int'] = NLpars['banc']\n if verbose:\n print('Banc= ', float(\n NLpars['banc'])*1e6)\n pmagplotlib.draw_figs(AZD)\n mpars = pmag.domean(\n araiblock[1], start, end, 'DE-BFL')\n if verbose:\n print('pTRM direction= ', '%7.1f' % (mpars['specimen_dec']), ' %7.1f' % (\n mpars['specimen_inc']), ' MAD:', '%7.1f' % (mpars['specimen_mad']))\n if AniSpec != \"\":\n CpTRM = pmag.Dir_anis_corr(\n [mpars['specimen_dec'], mpars['specimen_inc']], AniSpec)\n AniSpecRec = pmag.doaniscorr(\n PmagSpecRec, AniSpec)\n if verbose:\n print('Anisotropy corrected TRM direction= ', '%7.1f' % (\n CpTRM[0]), ' %7.1f' % (CpTRM[1]))\n print('Anisotropy corrected intensity= ', float(\n AniSpecRec['specimen_int'])*1e6)\n else:\n print('error on specimen ', s)\n except:\n pass\n if verbose and found == 0:\n print(' None found :( ')\n if spc != \"\":\n if BEG != \"\":\n pars, errcode = pmag.PintPars(\n datablock, araiblock, zijdblock, BEG, END, accept)\n pars['measurement_step_unit'] = \"K\"\n pars[\"specimen_lab_field_dc\"] = field\n pars[\"specimen_int\"] = -1*field*pars[\"specimen_b\"]\n pars[\"er_specimen_name\"] = s\n pars['specimen_grade'] = '' # ungraded\n pmagplotlib.plot_b(AZD, araiblock, zijdblock, pars)\n if verbose:\n pmagplotlib.draw_figs(AZD)\n if len(trmblock) > 2:\n if inlt == 0:\n inlt = 1\n blab 
= field\n best = pars[\"specimen_int\"]\n Bs, TRMs = [], []\n for trec in trmblock:\n Bs.append(float(trec['treatment_dc_field']))\n TRMs.append(\n float(trec['measurement_magn_moment']))\n # calculate best fit parameters through TRM acquisition data, and get new banc\n NLpars = nlt.NLtrm(Bs, TRMs, best, blab, 0)\n #\n Mp, Bp = [], []\n for k in range(int(max(Bs)*1e6)):\n Bp.append(float(k)*1e-6)\n # predicted NRM for this field\n npred = nlt.TRM(\n Bp[-1], NLpars['xopt'][0], NLpars['xopt'][1])\n files = {}\n for key in AZD.keys():\n files[key] = s+'_'+key+fmt\n pmagplotlib.save_plots(AZD, files, dpi=dpi)\n sys.exit()\n if verbose:\n ans = 'b'\n while ans != \"\":\n print(\"\"\"\n s[a]ve plot, set [b]ounds for calculation, [d]elete current interpretation, [p]revious, [s]ample, [q]uit:\n \"\"\")\n ans = input('Return for next specimen \\n')\n if ans == \"\":\n specimen += 1\n if ans == \"d\":\n save_redo(PriorRecs, inspec)\n CurrRec = []\n pmagplotlib.plot_arai_zij(\n AZD, araiblock, zijdblock, s, units[0])\n if verbose:\n pmagplotlib.draw_figs(AZD)\n if ans == 'a':\n files = {}\n for key in AZD.keys():\n files[key] = \"LO:_\"+locname+'_SI:_'+PmagSpecRec['er_site_name'] + \\\n '_SA:_' + \\\n PmagSpecRec['er_sample_name'] + \\\n '_SP:_'+s+'_CO:_s_TY:_'+key+fmt\n pmagplotlib.save_plots(AZD, files)\n ans = \"\"\n if ans == 'q':\n print(\"Good bye\")\n sys.exit()\n if ans == 'p':\n specimen = specimen - 1\n backup = 1\n ans = \"\"\n if ans == 's':\n keepon = 1\n spec = input(\n 'Enter desired specimen name (or first part there of): ')\n while keepon == 1:\n try:\n specimen = sids.index(spec)\n keepon = 0\n except:\n tmplist = []\n for qq in range(len(sids)):\n if spec in sids[qq]:\n tmplist.append(sids[qq])\n print(specimen, \" not found, but this was: \")\n print(tmplist)\n spec = input('Select one or try again\\n ')\n ans = \"\"\n if ans == 'b':\n if end == 0 or end >= len(zijdblock):\n end = len(zijdblock)-1\n GoOn = 0\n while GoOn == 0:\n answer = input(\n 'Enter 
index of first point for calculation: ['+str(start)+'] ')\n try:\n start = int(answer)\n answer = input(\n 'Enter index of last point for calculation: ['+str(end)+'] ')\n end = int(answer)\n if start >= 0 and start < len(zijdblock)-2 and end > 0 and end < len(zijdblock) and start < end:\n GoOn = 1\n else:\n print(\"Bad endpoints - try again! \")\n start, end = 0, len(zijdblock)\n except ValueError:\n print(\"Bad endpoints - try again! \")\n start, end = 0, len(zijdblock)\n s = sids[specimen]\n pars, errcode = pmag.PintPars(\n datablock, araiblock, zijdblock, start, end, accept)\n pars['measurement_step_unit'] = \"K\"\n pars[\"specimen_lab_field_dc\"] = field\n pars[\"specimen_int\"] = -1*field*pars[\"specimen_b\"]\n pars[\"er_specimen_name\"] = s\n pars, kill = pmag.scoreit(\n pars, PmagSpecRec, accept, '', 0)\n PmagSpecRec['specimen_scat'] = pars['specimen_scat']\n PmagSpecRec['specimen_frac'] = '%5.3f' % (\n pars['specimen_frac'])\n PmagSpecRec['specimen_gmax'] = '%5.3f' % (\n pars['specimen_gmax'])\n PmagSpecRec[\"measurement_step_min\"] = '%8.3e' % (\n pars[\"measurement_step_min\"])\n PmagSpecRec[\"measurement_step_max\"] = '%8.3e' % (\n pars[\"measurement_step_max\"])\n PmagSpecRec[\"measurement_step_unit\"] = \"K\"\n PmagSpecRec[\"specimen_int_n\"] = '%i' % (\n pars[\"specimen_int_n\"])\n PmagSpecRec[\"specimen_lab_field_dc\"] = '%8.3e' % (\n pars[\"specimen_lab_field_dc\"])\n PmagSpecRec[\"specimen_int\"] = '%9.4e ' % (\n pars[\"specimen_int\"])\n PmagSpecRec[\"specimen_b\"] = '%5.3f ' % (\n pars[\"specimen_b\"])\n PmagSpecRec[\"specimen_q\"] = '%5.1f ' % (\n pars[\"specimen_q\"])\n PmagSpecRec[\"specimen_f\"] = '%5.3f ' % (\n pars[\"specimen_f\"])\n PmagSpecRec[\"specimen_fvds\"] = '%5.3f' % (\n pars[\"specimen_fvds\"])\n PmagSpecRec[\"specimen_b_beta\"] = '%5.3f' % (\n pars[\"specimen_b_beta\"])\n PmagSpecRec[\"specimen_int_mad\"] = '%7.1f' % (\n pars[\"specimen_int_mad\"])\n PmagSpecRec[\"specimen_Z\"] = '%7.1f' % (\n pars[\"specimen_Z\"])\n 
PmagSpecRec[\"specimen_gamma\"] = '%7.1f' % (\n pars[\"specimen_gamma\"])\n PmagSpecRec[\"specimen_grade\"] = pars[\"specimen_grade\"]\n if pars[\"method_codes\"] != \"\":\n tmpcodes = pars[\"method_codes\"].split(\":\")\n for t in tmpcodes:\n if t.strip() not in methcodes:\n methcodes.append(t.strip())\n PmagSpecRec[\"specimen_dec\"] = '%7.1f' % (\n pars[\"specimen_dec\"])\n PmagSpecRec[\"specimen_inc\"] = '%7.1f' % (\n pars[\"specimen_inc\"])\n PmagSpecRec[\"specimen_tilt_correction\"] = '-1'\n PmagSpecRec[\"specimen_direction_type\"] = 'l'\n # this is redundant, but helpful - won't be imported\n PmagSpecRec[\"direction_type\"] = 'l'\n PmagSpecRec[\"specimen_int_dang\"] = '%7.1f ' % (\n pars[\"specimen_int_dang\"])\n PmagSpecRec[\"specimen_drats\"] = '%7.1f ' % (\n pars[\"specimen_drats\"])\n PmagSpecRec[\"specimen_drat\"] = '%7.1f ' % (\n pars[\"specimen_drat\"])\n PmagSpecRec[\"specimen_int_ptrm_n\"] = '%i ' % (\n pars[\"specimen_int_ptrm_n\"])\n PmagSpecRec[\"specimen_rsc\"] = '%6.4f ' % (\n pars[\"specimen_rsc\"])\n PmagSpecRec[\"specimen_md\"] = '%i ' % (\n int(pars[\"specimen_md\"]))\n if PmagSpecRec[\"specimen_md\"] == '-1':\n PmagSpecRec[\"specimen_md\"] = \"\"\n PmagSpecRec[\"specimen_b_sigma\"] = '%5.3f ' % (\n pars[\"specimen_b_sigma\"])\n if \"IE-TT\" not in methcodes:\n methcodes.append(\"IE-TT\")\n methods = \"\"\n for meth in methcodes:\n methods = methods+meth+\":\"\n PmagSpecRec[\"magic_method_codes\"] = methods[:-1]\n PmagSpecRec[\"specimen_description\"] = comment\n PmagSpecRec[\"magic_software_packages\"] = version_num\n pmagplotlib.plot_arai_zij(\n AZD, araiblock, zijdblock, s, units[0])\n pmagplotlib.plot_b(AZD, araiblock, zijdblock, pars)\n if verbose:\n pmagplotlib.draw_figs(AZD)\n if len(trmblock) > 2:\n blab = field\n best = pars[\"specimen_int\"]\n Bs, TRMs = [], []\n for trec in trmblock:\n Bs.append(\n float(trec['treatment_dc_field']))\n TRMs.append(\n float(trec['measurement_magn_moment']))\n # calculate best fit parameters through 
TRM acquisition data, and get new banc\n NLpars = nlt.NLtrm(Bs, TRMs, best, blab, 0)\n Mp, Bp = [], []\n for k in range(int(max(Bs)*1e6)):\n Bp.append(float(k)*1e-6)\n # predicted NRM for this field\n npred = nlt.TRM(\n Bp[-1], NLpars['xopt'][0], NLpars['xopt'][1])\n Mp.append(npred)\n pmagplotlib.plot_trm(\n AZD['TRM'], Bs, TRMs, Bp, Mp, NLpars, trec['magic_experiment_name'])\n if verbose:\n print(\n 'Non-linear TRM corrected intensity= ', float(NLpars['banc'])*1e6)\n if verbose:\n pmagplotlib.draw_figs(AZD)\n pars[\"specimen_lab_field_dc\"] = field\n pars[\"specimen_int\"] = -1*field*pars[\"specimen_b\"]\n pars, kill = pmag.scoreit(\n pars, PmagSpecRec, accept, '', verbose)\n saveit = input(\n \"Save this interpretation? [y]/n \\n\")\n if saveit != 'n':\n # put back an interpretation\n PriorRecs.append(PmagSpecRec)\n specimen += 1\n save_redo(PriorRecs, inspec)\n ans = \"\"\n elif plots == 1:\n specimen += 1\n if fmt != \".pmag\":\n files = {}\n for key in AZD.keys():\n files[key] = \"LO:_\"+locname+'_SI:_'+PmagSpecRec['er_site_name']+'_SA:_' + \\\n PmagSpecRec['er_sample_name'] + \\\n '_SP:_'+s+'_CO:_s_TY:_'+key+'_'+fmt\n if pmagplotlib.isServer:\n black = '#000000'\n purple = '#800080'\n titles = {}\n titles['deremag'] = 'DeReMag Plot'\n titles['zijd'] = 'Zijderveld Plot'\n titles['arai'] = 'Arai Plot'\n AZD = pmagplotlib.add_borders(\n AZD, titles, black, purple)\n pmagplotlib.save_plots(AZD, files, dpi=dpi)\n # pmagplotlib.combineFigs(s,files,3)\n else: # save in pmag format\n script = \"grep \"+s+\" output.mag | thellier -mfsi\"\n script = script+' %8.4e' % (field)\n min = '%i' % ((pars[\"measurement_step_min\"]-273))\n Max = '%i' % ((pars[\"measurement_step_max\"]-273))\n script = script+\" \"+min+\" \"+Max\n script = script+\" |plotxy;cat mypost >>thellier.ps\\n\"\n pltf.write(script)\n pmag.domagicmag(outf, MeasRecs)\n if len(CurrRec) > 0:\n for rec in CurrRec:\n PriorRecs.append(rec)\n CurrRec = []\n if plots != 1 and verbose:\n ans = input(\" Save last 
plot? 1/[0] \")\n if ans == \"1\":\n if fmt != \".pmag\":\n files = {}\n for key in AZD.keys():\n files[key] = s+'_'+key+fmt\n pmagplotlib.save_plots(AZD, files, dpi=dpi)\n else:\n print(\"\\n Good bye\\n\")\n sys.exit()\n if len(CurrRec) > 0:\n PriorRecs.append(CurrRec) # put back an interpretation\n if len(PriorRecs) > 0:\n save_redo(PriorRecs, inspec)\n print('Updated interpretations saved in ', inspec)\n if verbose:\n print(\"Good bye\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nnames nrm_specimens_magic.py Note: this program has been deprecated and is not maintained DESCRIPTION converts NRM data in a measurements type file to geographic and tilt corrected data in a specimens type file SYNTAX nrm_specimens_magic.py [-h][command line options] OPTIONS: -h prints the help message and quits -f MFILE: specify input file -fsa SFILE: specify samples format file [with orientations] -F PFILE: specify output file -A do not average replicate measurements -crd [g, t]: specify coordinate system ([g]eographic or [t]ilt adjusted) NB: you must have the SFILE in this directory DEFAULTS MFILE: measurements.txt PFILE: nrm_specimens.txt SFILE: samples.txt coord: g average replicate measurements?: YES", "response": "def main():\n \"\"\"\n NAME\n nrm_specimens_magic.py\n Note: this program has been deprecated and is not maintained\n\n DESCRIPTION\n converts NRM data in a measurements type file to\n geographic and tilt corrected data in a specimens type file\n\n SYNTAX\n nrm_specimens_magic.py [-h][command line options]\n\n OPTIONS:\n -h prints the help message and quits\n -f MFILE: specify input file\n -fsa SFILE: specify samples format file [with orientations]\n -F PFILE: specify output file\n -A do not average replicate measurements\n -crd [g, t]: specify coordinate system ([g]eographic or [t]ilt adjusted)\n NB: you must have the SFILE in this directory\n\n DEFAULTS\n MFILE: measurements.txt\n PFILE: nrm_specimens.txt\n 
SFILE: samples.txt\n coord: g\n average replicate measurements?: YES\n\n\n \"\"\"\n#\n# define some variables\n#\n beg, end, pole, geo, tilt, askave, save = 0, 0, [], 0, 0, 0, 0\n samp_file = 1\n args = sys.argv\n geo, tilt, orient = 0, 0, 0\n doave = 1\n user, comment, doave, coord = \"\", \"\", 1, \"g\"\n geo, orient = 1, 1\n\n dir_path = '.'\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if '-WD' in sys.argv:\n ind = sys.argv.index('-WD')\n dir_path = sys.argv[ind + 1]\n meas_file = dir_path + \"/measurements.txt\"\n spec_file = dir_path + \"/specimens.txt\"\n samp_file = dir_path + \"/samples.txt\"\n out_file = dir_path + \"/nrm_specimens.txt\"\n if \"-A\" in args:\n doave = 0\n if \"-f\" in args:\n ind = args.index(\"-f\")\n meas_file = sys.argv[ind + 1]\n if \"-F\" in args:\n ind = args.index(\"-F\")\n out_file = dir_path + '/' + sys.argv[ind + 1]\n speclist = []\n if \"-fsa\" in args:\n ind = args.index(\"-fsa\")\n samp_file = dir_path + '/' + sys.argv[ind + 1]\n if \"-crd\" in args:\n ind = args.index(\"-crd\")\n coord = sys.argv[ind + 1]\n if coord == \"t\":\n tilt, orient, geo = 1, 1, 1\n#\n# read in data\n #\n meas_data = pd.read_csv(meas_file, header=1, sep='\\t')\n meas_data = meas_data[meas_data['method_codes'].str.contains(\n 'LT-NO') == True] # fish out NRM data\n meas_data = meas_data[['specimen', 'dir_dec', 'dir_inc']]\n meas_data = meas_data.dropna(subset=['dir_dec', 'dir_inc'])\n meas_data = pd.DataFrame(meas_data)\n# import samples for orientation info\n#\n##\n if orient == 1:\n spec_data = pd.read_csv(spec_file, header=1, sep='\\t')\n spec_data = spec_data[['specimen', 'sample']]\n meas_data = pd.merge(meas_data, spec_data,\n how='inner', on=['specimen'])\n samp_data = pd.read_csv(samp_file, header=1, sep='\\t')\n #\n sids = meas_data.specimen.unique() # list of specimen names\n #\n #\n NrmSpecRecs = []\n for spec in sids:\n gdec, ginc, bdec, binc = \"\", \"\", \"\", \"\"\n this_spec_data = 
meas_data[meas_data.specimen.str.contains(spec)]\n this_sample = this_spec_data['sample'].iloc[-1]\n this_sample_data = samp_data[samp_data['sample'].str.contains(\n this_sample)]\n this_sample_data = this_sample_data.to_dict('records')\n for m in this_spec_data.to_dict('records'):\n NrmSpecRec = {'specimen': spec, 'sample': this_sample}\n if coord == 'g':\n NrmSpecRec['dir_tilt_correction'] = '0'\n elif coord == 't':\n NrmSpecRec['dir_tilt_correction'] = '100'\n else:\n NrmSpecRec['dir_tilt_correction'] = '-1'\n if not orient:\n NrmSpecRec['dir_dec'] = m['dir_dec']\n NrmSpecRec['dir_inc'] = m['dir_inc']\n NrmSpecRec['method_codes'] = 'SO-NO'\n NrmSpecRecs.append(NrmSpecRec)\n else: # do geographic correction\n # get the azimuth\n or_info, az_type = pmag.get_orient(\n this_sample_data, this_sample, data_model=3)\n if 'azimuth' in or_info.keys() and or_info['azimuth'] != \"\":\n azimuth = or_info['azimuth']\n dip = or_info['dip']\n gdec, ginc = pmag.dogeo(\n m['dir_dec'], m['dir_inc'], azimuth, dip)\n if tilt:\n # try tilt correction\n if 'bed_dip' in or_info.keys() and or_info['bed_dip'] != \"\":\n bed_dip = or_info['bed_dip']\n bed_dip_dir = or_info['bed_dip_direction']\n bdec, binc = pmag.dogeo(\n gdec, ginc, bed_dip_dir, bed_dip)\n NrmSpecRec['dir_dec'] = bdec\n NrmSpecRec['dir_inc'] = binc\n NrmSpecRec['method_codes'] = az_type\n NrmSpecRecs.append(NrmSpecRec)\n else:\n print('no bedding orientation data for ', spec)\n\n else:\n NrmSpecRec['dir_dec'] = gdec\n NrmSpecRec['dir_inc'] = ginc\n NrmSpecRec['method_codes'] = az_type\n NrmSpecRecs.append(NrmSpecRec)\n else:\n print('no geo orientation data for ', spec)\n\n pmag.magic_write(out_file, NrmSpecRecs, 'specimens')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrunning the interpreter and returns the unique identifier for the interpreter.", "response": "def run_interpreter(self):\n \"\"\"\n Run thellier_auto_interpreter\n \"\"\"\n\n import random\n import copy\n\n start_time = 
time.time()\n #------------------------------------------------\n # Clean work directory\n #------------------------------------------------\n\n # self.write_acceptance_criteria_to_file()\n try:\n shutil.rmtree(os.path.join(self.WD, \"thellier_interpreter\"))\n except:\n pass\n\n try:\n os.mkdir(os.path.join(self.WD, \"thellier_interpreter\"))\n except:\n pass\n\n #------------------------------------------------\n # Intialize interpreter output files:\n # Prepare header for \"Thellier_auto_interpretation.all.txt\"\n # All the acceptable interpretation are saved in this file\n #------------------------------------------------\n\n # sort acceptance criteria\n self.specimen_criteria = []\n for crit in list(self.acceptance_criteria.keys()):\n if 'category' in list(self.acceptance_criteria[crit].keys()):\n if self.acceptance_criteria[crit]['category'] == \"IE-SPEC\":\n if self.acceptance_criteria[crit]['value'] != -999:\n self.specimen_criteria.append(crit)\n\n # sort acceptance criteria\n sample_criteria = []\n for crit in list(self.acceptance_criteria.keys()):\n if 'category' in list(self.acceptance_criteria[crit].keys()):\n if self.acceptance_criteria[crit]['category'] == \"IE-SAMP\":\n if self.acceptance_criteria[crit]['value'] != -999:\n sample_criteria.append(crit)\n\n # sort acceptance criteria\n site_criteria = []\n for crit in list(self.acceptance_criteria.keys()):\n if 'category' in list(self.acceptance_criteria[crit].keys()):\n if self.acceptance_criteria[crit]['category'] == \"thellier_gui\":\n if self.acceptance_criteria[crit]['value'] != -999:\n site_criteria.append(crit)\n\n # sort acceptance criteria\n thellier_gui_criteria = []\n for crit in list(self.acceptance_criteria.keys()):\n if 'category' in list(self.acceptance_criteria[crit].keys()):\n if self.acceptance_criteria[crit]['category'] == \"thellier_gui\":\n if self.acceptance_criteria[crit]['value'] != -999:\n thellier_gui_criteria.append(crit)\n\n #----------------------------\n\n # log file\n 
self.thellier_interpreter_log = open(\n self.WD + \"/\" + \"/thellier_interpreter//thellier_interpreter.log\", 'w')\n self.thellier_interpreter_log.write(\"-I- Start auto interpreter\\n\")\n\n # \"all grade A interpretation\n thellier_interpreter_all = open(\n self.WD + \"/thellier_interpreter/thellier_interpreter_all.txt\", 'w')\n thellier_interpreter_all.write(\"tab\\tpmag_specimens\\n\")\n String = \"er_specimen_name\\tmeasurement_step_min\\tmeasurement_step_max\\tspecimen_lab_field_dc_uT\\tspecimen_int_corr_anisotropy\\tspecimen_int_corr_nlt\\tspecimen_int_corr_cooling_rate\\tspecimen_int_uT\\t\"\n # + [\"specimen_b\"] + ['specimen_cm_x'] + ['specimen_cm_y']:\n for crit in self.specimen_criteria:\n String = String + crit + \"\\t\"\n String = String[:-1] + \"\\n\"\n thellier_interpreter_all.write(String)\n\n # specimen_bound\n Fout_specimens_bounds = open(\n self.WD + \"/thellier_interpreter/thellier_interpreter_specimens_bounds.txt\", 'w')\n String = \"acceptance criteria:\\n\"\n for crit in self.specimen_criteria:\n String = String + crit + \"\\t\"\n Fout_specimens_bounds.write(String[:-1] + \"\\n\")\n String = \"\"\n for crit in self.specimen_criteria:\n if type(self.acceptance_criteria[crit]['value']) == str:\n string = self.acceptance_criteria[crit]['value']\n elif type(self.acceptance_criteria[crit]['value']) == bool:\n string = str(self.acceptance_criteria[crit]['value'])\n elif type(self.acceptance_criteria[crit]['value']) == int or type(self.acceptance_criteria[crit]['value']) == float:\n if self.acceptance_criteria[crit]['decimal_points'] == -999:\n string = \"{:.3f}\".format(self.acceptance_criteria[crit]['value'])\n else:\n string = \"{:.{}f}\".format(self.acceptance_criteria[crit]['value'],\n self.acceptance_criteria[crit]['decimal_points'])\n\n else:\n string=\"\"\n\n String=String+\"%s\\t\"%string\n Fout_specimens_bounds.write(String[:-1]+\"\\n\")\n Fout_specimens_bounds.write(\"--------------------------------\\n\")\n 
Fout_specimens_bounds.write(\"er_sample_name\\ter_specimen_name\\tspecimen_int_corr_anisotropy\\tAnisotropy_code\\tspecimen_int_corr_nlt\\tspecimen_int_corr_cooling_rate\\tspecimen_lab_field_dc_uT\\tspecimen_int_min_uT\\tspecimen_int_max_uT\\tWARNING\\n\")\n\n #----------------------------------\n\n criteria_string = \"acceptance criteria:\\n\"\n for crit in self.specimen_criteria + sample_criteria + site_criteria + thellier_gui_criteria:\n criteria_string = criteria_string + crit + \"\\t\"\n criteria_string = criteria_string[:-1] + \"\\n\"\n for crit in self.specimen_criteria + sample_criteria + site_criteria + thellier_gui_criteria:\n if type(self.acceptance_criteria[crit]['value']) == str:\n string = self.acceptance_criteria[crit]['value']\n elif type(self.acceptance_criteria[crit]['value']) == bool:\n string = str(self.acceptance_criteria[crit]['value'])\n elif type(self.acceptance_criteria[crit]['value']) == int or type(self.acceptance_criteria[crit]['value']) == float:\n if self.acceptance_criteria[crit]['decimal_points'] == -999:\n string = \"%.3e\" % (\n float(self.acceptance_criteria[crit]['value']))\n else:\n string = \"{:.{}f}\".format(self.acceptance_criteria[crit]['value'],\n self.acceptance_criteria[crit]['decimal_points'])\n else:\n string = \"\"\n\n criteria_string = criteria_string + \"%s\\t\" % string\n criteria_string = criteria_string[:-1] + \"\\n\"\n criteria_string = criteria_string + \"---------------------------------\\n\"\n\n # STDEV-OPT output files\n if self.acceptance_criteria['interpreter_method']['value'] == 'stdev_opt':\n self.Fout_STDEV_OPT_redo = open(\n self.WD + \"/thellier_interpreter/thellier_interpreter_STDEV-OPT_redo\", 'w')\n self.Fout_STDEV_OPT_specimens = open(\n self.WD + \"/thellier_interpreter/thellier_interpreter_STDEV-OPT_specimens.txt\", 'w')\n\n self.Fout_STDEV_OPT_specimens.write(\"tab\\tpmag_specimens\\n\")\n String = 
\"er_sample_name\\ter_specimen_name\\tspecimen_int_uT\\tmeasurement_step_min\\tmeasurement_step_min\\tspecimen_lab_field_dc\\tAnisotropy_correction_factor\\tNLT_correction_factor\\tCooling_rate_correction_factor\\t\"\n for crit in self.specimen_criteria:\n String = String + crit + \"\\t\"\n self.Fout_STDEV_OPT_specimens.write(String[:-1] + \"\\n\")\n\n if self.acceptance_criteria['average_by_sample_or_site']['value'] == 'sample':\n self.Fout_STDEV_OPT_samples = open(\n self.WD + \"/thellier_interpreter/thellier_interpreter_STDEV-OPT_samples.txt\", 'w')\n self.Fout_STDEV_OPT_samples.write(criteria_string)\n self.Fout_STDEV_OPT_samples.write(\n \"er_sample_name\\tsample_int_n\\tsample_int_uT\\tsample_int_sigma_uT\\tsample_int_sigma_perc\\tsample_int_min_uT\\tsample_int_min_sigma_uT\\tsample_int_max_uT\\tsample_int_max_sigma_uT\\tsample_int_interval_uT\\tsample_int_interval_perc\\tWarning\\n\")\n else:\n self.Fout_STDEV_OPT_sites = open(\n self.WD + \"/thellier_interpreter/thellier_interpreter_STDEV-OPT_sites.txt\", 'w')\n self.Fout_STDEV_OPT_sites.write(criteria_string)\n self.Fout_STDEV_OPT_sites.write(\n \"er_site_name\\tsite_int_n\\tsite_int_uT\\tsite_int_sigma_uT\\tsite_int_sigma_perc\\tsite_int_min_uT\\tsite_int_min_sigma_uT\\tsite_int_max_uT\\tsite_int_max_sigma_uT\\tsite_int_interval_uT\\tsite_int_interval_perc\\tWarning\\n\")\n\n # simple bootstrap output files\n # Dont supports site yet!\n\n if self.acceptance_criteria['interpreter_method']['value'] == 'bs':\n Fout_BS_samples = open(\n self.WD + \"/thellier_interpreter/thellier_interpreter_BS_samples.txt\", 'w')\n Fout_BS_samples.write(criteria_string)\n # Fout_BS_samples.write(\"---------------------------------\\n\")\n Fout_BS_samples.write(\n \"er_sample_name\\tsample_int_n\\tsample_int_uT\\tsample_int_68_low\\tsample_int_68_high\\tsample_int_95_low\\tsample_int_95_high\\tsample_int_sigma_uT\\tsample_int_sigma_perc\\tWARNING\\n\")\n # parameteric bootstrap output files\n\n if 
self.acceptance_criteria['interpreter_method']['value'] == 'bs_par':\n Fout_BS_PAR_samples = open(\n self.WD + \"/thellier_interpreter/thellier_interpreter_BS-PAR_samples.txt\", 'w')\n Fout_BS_PAR_samples.write(criteria_string)\n # Fout_BS_PAR_samples.write(\"---------------------------------\\n\")\n Fout_BS_PAR_samples.write(\n \"er_sample_name\\tsample_int_n\\tsample_int_uT\\tsample_int_68_low\\tsample_int_68_high\\tsample_int_95_low\\tsample_int_95_high\\tsample_int_sigma_uT\\tsample_int_sigma_perc\\tWARNING\\n\")\n\n self.thellier_interpreter_log.write(\n \"-I- using paleointenisty statistics:\\n\")\n self.thellier_interpreter_log.write(criteria_string)\n\n #------------------------------------------------\n\n specimens_list = list(self.Data.keys())\n specimens_list.sort()\n self.thellier_interpreter_log.write(\n \"-I- Found %i specimens\\n\" % (len(specimens_list)))\n\n # try:\n All_grade_A_Recs = {}\n print('-I- Running through specimens:')\n for s in specimens_list:\n self.thellier_interpreter_log.write(\n \"-I- doing now specimen %s\\n\" % s)\n self.Data[s]['pars'] = {}\n self.Data[s]['pars']['lab_dc_field'] = self.Data[s]['lab_dc_field']\n self.Data[s]['pars']['er_specimen_name'] = s\n self.Data[s]['pars']['er_sample_name'] = self.Data_hierarchy['specimens'][s]\n temperatures = self.Data[s]['t_Arai']\n\n # check that all temperatures are in right order:\n ignore_specimen = False\n for t in range(len(temperatures) - 1):\n if float(temperatures[t + 1]) < float(temperatures[t]):\n self.thellier_interpreter_log.write(\n \"-W- Found problem in the temperature order of specimen %s. 
skipping specimen\\n\" % (s))\n ignore_specimen = True\n if ignore_specimen:\n continue\n if self.acceptance_criteria['specimen_int_n']['value'] != -999:\n specimen_int_n = min(\n 3, int(self.acceptance_criteria['specimen_int_n']['value']))\n else:\n specimen_int_n = 3\n #-------------------------------------------------\n # loop through all possible tmin,tmax and check if pass criteria\n #-------------------------------------------------\n # print s\n for tmin_i in range(len(temperatures) - specimen_int_n + 1):\n # check if to include NRM\n # print temperatures\n # print self.acceptance_criteria['include_nrm']['value']\n if self.acceptance_criteria['include_nrm']['value'] == -999:\n # print \" Its False\"\n if temperatures[tmin_i] == 273:\n continue\n # print \"ignoring NRM\",tmin_i,temperatures[tmin_i]\n # print tmin_i\n for tmax_i in range(tmin_i + specimen_int_n - 1, len(temperatures)):\n # print tmax_i\n # print len(temperatures)\n tmin = temperatures[tmin_i]\n tmax = temperatures[tmax_i]\n pars = thellier_gui_lib.get_PI_parameters(\n self.Data, self.acceptance_criteria, self.preferences, s, tmin, tmax, self.GUI_log, THERMAL, MICROWAVE)\n if not pars: # error with getting pars\n message_string = '-W- Problem in SPD. Could not calculate any parameters for {} with tmin: {} and tmax {}. Check data for typos, make sure temperatures are correct, etc.'.format(\n s, tmin, tmax)\n self.thellier_interpreter_log.write(\n message_string + \"\\n\")\n continue\n if 'NLT_specimen_correction_factor' not in list(pars.keys()):\n # problem in get_PI_parameters (probably with\n # tmin/zdata). can't run specimen\n message_string = '-W- Problem in get_PI_parameters. Could not get all parameters for {} with tmin: {} and tmax: {}. 
Check data for typos, make sure temperatures are correct, etc.'.format(\n s, tmin, tmax)\n self.thellier_interpreter_log.write(\n message_string + \"\\n\")\n continue\n pars = thellier_gui_lib.check_specimen_PI_criteria(\n pars, self.acceptance_criteria)\n #-------------------------------------------------\n # check if pass the criteria\n #-------------------------------------------------\n\n if 'specimen_fail_criteria' in list(pars.keys()) and len(pars['specimen_fail_criteria']) > 0:\n # Fail:\n message_string = \"-I- specimen %s (%.0f-%.0f) FAIL on: \" % (s, float(\n pars[\"measurement_step_min\"]) - 273, float(pars[\"measurement_step_max\"]) - 273)\n for parameter in pars['specimen_fail_criteria']:\n if \"scat\" not in parameter:\n message_string = message_string + \\\n parameter + \"= %f, \" % pars[parameter]\n else:\n message_string = message_string + parameter + \\\n \"= %s, \" % str(pars[parameter])\n\n self.thellier_interpreter_log.write(\n message_string + \"\\n\")\n elif 'specimen_fail_criteria' in list(pars.keys()) and len(pars['specimen_fail_criteria']) == 0:\n\n # PASS:\n message_string = \"-I- specimen %s (%.0f-%.0f) PASS\" % (s, float(\n pars[\"measurement_step_min\"]) - 273, float(pars[\"measurement_step_max\"]) - 273)\n self.thellier_interpreter_log.write(\n message_string + \"\\n\")\n\n #----------------------------------------------------\n # Save all the grade A interpretation in thellier_interpreter_all.txt\n #----------------------------------------------------\n\n String = s + \"\\t\"\n String = String + \\\n \"%.0f\\t\" % (\n float(pars[\"measurement_step_min\"]) - 273.)\n String = String + \\\n \"%.0f\\t\" % (\n float(pars[\"measurement_step_max\"]) - 273.)\n String = String + \\\n \"%.0f\\t\" % (float(pars[\"lab_dc_field\"]) * 1e6)\n\n if \"Anisotropy_correction_factor\" in list(pars.keys()):\n String = String + \\\n \"%.2f\\t\" % float(\n pars[\"Anisotropy_correction_factor\"])\n else:\n String = String + \"-\\t\"\n if 
float(pars[\"NLT_specimen_correction_factor\"]) != -999:\n String = String + \\\n \"%.2f\\t\" % float(\n pars[\"NLT_specimen_correction_factor\"])\n else:\n String = String + \"-\\t\"\n if float(pars[\"specimen_int_corr_cooling_rate\"]) != -999 and float(pars[\"specimen_int_corr_cooling_rate\"]) != -1:\n String = String + \\\n \"%.2f\\t\" % float(\n pars[\"specimen_int_corr_cooling_rate\"])\n else:\n String = String + \"-\\t\"\n Bancient = float(pars['specimen_int_uT'])\n String = String + \"%.1f\\t\" % (Bancient)\n # + [\"specimen_b\"] + [\"specimen_cm_x\"] + [\"specimen_cm_y\"]:\n for key in self.specimen_criteria:\n if type(pars[key]) == str:\n String = String + pars[key] + \"\\t\"\n else:\n String = String + \\\n \"%.3e\" % (float(pars[key])) + \"\\t\"\n String = String[:-1] + \"\\n\"\n\n thellier_interpreter_all.write(String)\n\n #-------------------------------------------------\n # save 'acceptable' (grade A) specimen interpretaion\n # All_grade_A_Recs={}\n # All_grade_A_Recs[specimen_name][\"tmin,tmax\"]={PI pars sorted in dictionary}\n #-------------------------------------------------\n\n if s not in list(All_grade_A_Recs.keys()):\n All_grade_A_Recs[s] = {}\n new_pars = {}\n for k in list(pars.keys()):\n new_pars[k] = pars[k]\n TEMP = \"%.0f,%.0f\" % (float(\n pars[\"measurement_step_min\"]) - 273, float(pars[\"measurement_step_max\"]) - 273)\n All_grade_A_Recs[s][TEMP] = new_pars\n\n specimens_list = list(All_grade_A_Recs.keys())\n specimens_list.sort()\n Grade_A_samples = {}\n Grade_A_sites = {}\n Redo_data_specimens = {}\n\n #--------------------------------------------------------------\n # specimens bound file\n #--------------------------------------------------------------\n\n for s in specimens_list:\n\n sample = self.Data_hierarchy['specimens'][s]\n site = thellier_gui_lib.get_site_from_hierarchy(\n sample, self.Data_hierarchy)\n B_lab = float(self.Data[s]['lab_dc_field']) * 1e6\n B_min, B_max = 1e10, 0.\n NLT_factor_min, NLT_factor_max = 1e10, 
0.\n all_B_tmp_array = []\n\n for TEMP in list(All_grade_A_Recs[s].keys()):\n pars = All_grade_A_Recs[s][TEMP]\n if \"AC_anisotropy_type\" in list(pars.keys()):\n AC_correction_factor = pars[\"Anisotropy_correction_factor\"]\n AC_correction_type = pars[\"AC_anisotropy_type\"]\n WARNING = \"\"\n if \"AC_WARNING\" in list(pars.keys()):\n WARNING = WARNING + pars[\"AC_WARNING\"]\n else:\n AC_correction_factor = 1.\n AC_correction_type = \"-\"\n WARNING = \"WARNING: No anisotropy correction\"\n\n B_anc = pars['specimen_int_uT']\n\n if B_anc < B_min:\n B_min = B_anc\n if B_anc > B_max:\n B_max = B_anc\n if pars[\"NLT_specimen_correction_factor\"] != -1:\n NLT_f = pars['NLT_specimen_correction_factor']\n if NLT_f < NLT_factor_min:\n NLT_factor_min = NLT_f\n if NLT_f > NLT_factor_max:\n NLT_factor_max = NLT_f\n\n # sort by samples\n #------------------------------------------------------------\n\n if cb.not_null(sample, False):\n if sample not in list(Grade_A_samples.keys()):\n Grade_A_samples[sample] = {}\n if s not in list(Grade_A_samples[sample].keys()) and len(All_grade_A_Recs[s]) > 0:\n Grade_A_samples[sample][s] = []\n if cb.not_null(B_anc, False):\n Grade_A_samples[sample][s].append(B_anc)\n\n # sort by sites\n #------------------------------------------------------------\n\n if cb.not_null(site, False):\n if site not in list(Grade_A_sites.keys()):\n Grade_A_sites[site] = {}\n if s not in list(Grade_A_sites[site].keys()) and len(All_grade_A_Recs[s]) > 0:\n Grade_A_sites[site][s] = []\n if cb.not_null(B_anc, False):\n Grade_A_sites[site][s].append(B_anc)\n\n # ? 
check\n #------------------------------------------------------------\n\n if s not in list(Redo_data_specimens.keys()):\n Redo_data_specimens[s] = {}\n\n # write to specimen_bounds\n #----------------------------------------------------------------\n\n if pars[\"NLT_specimen_correction_factor\"] != -1:\n NLT_factor = \"%.2f\" % (NLT_factor_max)\n else:\n NLT_factor = \"-\"\n\n if pars[\"specimen_int_corr_cooling_rate\"] != -1 and pars[\"specimen_int_corr_cooling_rate\"] != -999:\n CR_factor = \"%.2f\" % (\n float(pars[\"specimen_int_corr_cooling_rate\"]))\n else:\n CR_factor = \"-\"\n if 'cooling_rate_data' in list(self.Data[s].keys()):\n if 'CR_correction_factor_flag' in list(self.Data[s]['cooling_rate_data'].keys()):\n if self.Data[s]['cooling_rate_data']['CR_correction_factor_flag'] != \"calculated\":\n if \"inferred\" in self.Data[s]['cooling_rate_data']['CR_correction_factor_flag']:\n WARNING = WARNING + \";\" + \"cooling rate correction inferred from sister specimens\"\n if \"alteration\" in self.Data[s]['cooling_rate_data']['CR_correction_factor_flag']:\n WARNING = WARNING + \";\" + \"cooling rate experiment failed alteration\"\n if \"bad\" in self.Data[s]['cooling_rate_data']['CR_correction_factor_flag']:\n WARNING = WARNING + \";\" + \"cooling rate experiment failed\"\n\n if AC_correction_type == \"-\":\n AC_correction_factor_to_print = \"-\"\n else:\n AC_correction_factor_to_print = \"%.2f\" % AC_correction_factor\n\n String = \"%s\\t%s\\t%s\\t%s\\t%s\\t%s\\t%.1f\\t%.1f\\t%.1f\\t%s\\n\"\\\n % (sample, s, AC_correction_factor_to_print, AC_correction_type, NLT_factor, CR_factor, B_lab, B_min, B_max, WARNING)\n Fout_specimens_bounds.write(String)\n\n #--------------------------------------------------------------\n # Find the STDEV-OPT 'best mean':\n # the interprettaions that give\n # the minimum standrad deviation (perc!)\n # not nesseserily the standrad deviation in microT\n #\n #--------------------------------------------------------------\n\n # Sort 
all grade A interpretation\n\n samples = list(Grade_A_samples.keys())\n samples.sort()\n\n sites = list(Grade_A_sites.keys())\n sites.sort()\n\n #--------------------------------------------------------------\n # thellier-interpreter can work by averaging specimens by sample (default)\n # or by averaging specimens by site\n #--------------------------------------------------------------\n\n if self.acceptance_criteria['average_by_sample_or_site']['value'] == 'sample':\n self.Grade_A_sorted = copy.deepcopy(Grade_A_samples)\n\n else:\n self.Grade_A_sorted = copy.deepcopy(Grade_A_sites)\n\n self.clean_workspace()\n\n samples_or_sites = list(self.Grade_A_sorted.keys())\n samples_or_sites.sort()\n # print Grade_A_sorted\n for sample_or_site in samples_or_sites:\n if len(list(self.Grade_A_sorted[sample_or_site].keys())) == 1:\n specimen = list(self.Grade_A_sorted[sample_or_site].keys())[0]\n self.choose_interpretation_max_frac(All_grade_A_Recs, specimen)\n else:\n self.thellier_interpreter_pars = self.calc_upper_level_mean(\n self.Grade_A_sorted, All_grade_A_Recs, sample_or_site)\n self.update_data_with_interpreter_pars(\n self.Grade_A_sorted, All_grade_A_Recs, sample_or_site, self.thellier_interpreter_pars)\n self.update_files_with_intrepretation(\n self.Grade_A_sorted, All_grade_A_Recs, sample_or_site, self.thellier_interpreter_pars)\n self.thellier_interpreter_log.write(\"-I- Statistics:\\n\")\n self.thellier_interpreter_log.write(\n \"-I- number of specimens analzyed = %i\\n\" % len(specimens_list))\n self.thellier_interpreter_log.write(\n \"-I- number of sucsessful 'acceptable' specimens = %i\\n\" % len(list(All_grade_A_Recs.keys())))\n\n runtime_sec = time.time() - start_time\n m, s = divmod(runtime_sec, 60)\n h, m = divmod(m, 60)\n self.thellier_interpreter_log.write(\n \"-I- runtime hh:mm:ss is \" + \"%d:%02d:%02d\\n\" % (h, m, s))\n if len(specimens_list) != 0:\n self.thellier_interpreter_log.write(\n \"-I- runtime per specimen: %.1f seconds\" % 
(float(runtime_sec) / len(specimens_list)))\n\n self.thellier_interpreter_log.write(\"-I- Finished sucsessfuly.\\n\")\n self.thellier_interpreter_log.write(\"-I- DONE\\n\")\n\n # close all files\n\n self.thellier_interpreter_log.close()\n thellier_interpreter_all.close()\n Fout_specimens_bounds.close()\n if self.acceptance_criteria['interpreter_method']['value'] == 'stdev_opt':\n self.Fout_STDEV_OPT_redo.close()\n self.Fout_STDEV_OPT_specimens.close()\n if self.acceptance_criteria['interpreter_method']['value'] == 'stdev_opt':\n if self.acceptance_criteria['average_by_sample_or_site']['value'] == 'sample':\n self.Fout_STDEV_OPT_samples.close()\n else:\n self.Fout_STDEV_OPT_sites.close()\n\n if self.acceptance_criteria['interpreter_method']['value'] == 'bs':\n Fout_BS_samples.close()\n\n if self.acceptance_criteria['interpreter_method']['value'] == 'bs_par':\n Fout_BS_PAR_samples.close()\n # try:\n # os.system('\\a')\n # except:\n # pass\n return True, len(specimens_list)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find_close_value(self, LIST, value):\n '''\n take a LIST and find the nearest value in LIST to 'value'\n '''\n diff = inf\n for a in LIST:\n if abs(value - a) < diff:\n diff = abs(value - a)\n result = a\n return(result)", "response": "take a LIST and find the nearest value in LIST to value"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_sample_min_std(self, Intensities):\n '''\n find the best interpretation with the minimum stratard deviation (in units of percent % !)\n '''\n\n Best_array = []\n best_array_std_perc = inf\n Best_array_tmp = []\n Best_interpretations = {}\n Best_interpretations_tmp = {}\n for this_specimen in list(Intensities.keys()):\n for value in Intensities[this_specimen]:\n Best_interpretations_tmp[this_specimen] = value\n Best_array_tmp = [value]\n all_other_specimens = list(Intensities.keys())\n 
all_other_specimens.remove(this_specimen)\n\n for other_specimen in all_other_specimens:\n closest_value = self.find_close_value(\n Intensities[other_specimen], value)\n Best_array_tmp.append(closest_value)\n Best_interpretations_tmp[other_specimen] = closest_value\n\n if std(Best_array_tmp, ddof=1) / mean(Best_array_tmp) < best_array_std_perc:\n Best_array = Best_array_tmp\n best_array_std_perc = std(\n Best_array, ddof=1) / mean(Best_array_tmp)\n Best_interpretations = copy.deepcopy(\n Best_interpretations_tmp)\n Best_interpretations_tmp = {}\n return Best_interpretations, mean(Best_array), std(Best_array, ddof=1)", "response": "find the best interpretation with the minimum standard deviation"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncalculate the Thellier Interpreter Parameters for the given site or sample.", "response": "def thellier_interpreter_pars_calc(self, Grade_As):\n '''\n calculate sample or site STDEV-OPT paleointensities\n and statistics\n Grade_As={}\n\n '''\n thellier_interpreter_pars = {}\n thellier_interpreter_pars['stdev-opt'] = {}\n # thellier_interpreter_pars['stdev-opt']['B']=\n # thellier_interpreter_pars['stdev-opt']['std']=\n thellier_interpreter_pars['min-value'] = {}\n # thellier_interpreter_pars['min-value']['B']=\n # thellier_interpreter_pars['min-value']['std']=\n thellier_interpreter_pars['max-value'] = {}\n # thellier_interpreter_pars['max-value']['B']=\n # thellier_interpreter_pars['max-value']['std']=\n thellier_interpreter_pars['fail_criteria'] = []\n thellier_interpreter_pars['pass_or_fail'] = 'pass'\n\n # acceptance criteria\n if self.acceptance_criteria['average_by_sample_or_site']['value'] == 'sample':\n int_n_cutoff = self.acceptance_criteria['sample_int_n']['value']\n int_sigma_cutoff = self.acceptance_criteria['sample_int_sigma']['value']\n int_sigma_perc_cutoff = self.acceptance_criteria['sample_int_sigma_perc']['value']\n int_interval_cutoff = 
self.acceptance_criteria['sample_int_interval_uT']['value']\n int_interval_perc_cutoff = self.acceptance_criteria['sample_int_interval_perc']['value']\n else:\n int_n_cutoff = self.acceptance_criteria['site_int_n']['value']\n int_sigma_cutoff = self.acceptance_criteria['site_int_sigma']['value']\n int_sigma_perc_cutoff = self.acceptance_criteria['site_int_sigma_perc']['value']\n int_interval_cutoff = self.acceptance_criteria['site_int_interval_uT']['value']\n int_interval_perc_cutoff = self.acceptance_criteria['site_int_interval_perc']['value']\n\n N = len(list(Grade_As.keys()))\n if N <= 1:\n thellier_interpreter_pars['pass_or_fail'] = 'fail'\n thellier_interpreter_pars['fail_criteria'].append(\"int_n\")\n return(thellier_interpreter_pars)\n\n Best_interpretations, best_mean, best_std = self.find_sample_min_std(\n Grade_As)\n sample_acceptable_min, sample_acceptable_min_std, sample_acceptable_max, sample_acceptable_max_std = self.find_sample_min_max_interpretation(\n Grade_As)\n sample_int_interval_uT = sample_acceptable_max - sample_acceptable_min\n sample_int_interval_perc = 100 * \\\n ((sample_acceptable_max - sample_acceptable_min) / best_mean)\n thellier_interpreter_pars['stdev_opt_interpretations'] = Best_interpretations\n thellier_interpreter_pars['stdev-opt']['B'] = best_mean\n thellier_interpreter_pars['stdev-opt']['std'] = best_std\n thellier_interpreter_pars['stdev-opt']['std_perc'] = 100. 
* \\\n (best_std / best_mean)\n thellier_interpreter_pars['min-value']['B'] = sample_acceptable_min\n thellier_interpreter_pars['min-value']['std'] = sample_acceptable_min_std\n thellier_interpreter_pars['max-value']['B'] = sample_acceptable_max\n thellier_interpreter_pars['max-value']['std'] = sample_acceptable_max_std\n thellier_interpreter_pars['sample_int_interval_uT'] = sample_int_interval_uT\n thellier_interpreter_pars['sample_int_interval_perc'] = sample_int_interval_perc\n\n if N < int_n_cutoff:\n thellier_interpreter_pars['pass_or_fail'] = 'fail'\n thellier_interpreter_pars['fail_criteria'].append(\"int_n\")\n\n pass_int_sigma, pass_int_sigma_perc = True, True\n pass_int_interval, pass_int_interval_perc = True, True\n\n if not (int_sigma_cutoff == -999 and int_sigma_perc_cutoff == -999):\n if best_std <= int_sigma_cutoff * 1e6 and int_sigma_cutoff != -999:\n pass_sigma = True\n else:\n pass_sigma = False\n if 100. * (best_std / best_mean) <= int_sigma_perc_cutoff and int_sigma_perc_cutoff != -999:\n pass_sigma_perc = True\n else:\n pass_sigma_perc = False\n if not (pass_sigma or pass_sigma_perc):\n thellier_interpreter_pars['pass_or_fail'] = 'fail'\n thellier_interpreter_pars['fail_criteria'].append(\"int_sigma\")\n\n if not (int_interval_cutoff == -999 and int_interval_perc_cutoff == -999):\n # compare the uT interval against the uT cutoff (not the percent cutoff)\n if sample_int_interval_uT <= int_interval_cutoff and int_interval_cutoff != -999:\n pass_interval = True\n else:\n pass_interval = False\n if sample_int_interval_perc <= int_interval_perc_cutoff and int_interval_perc_cutoff != -999:\n pass_interval_perc = True\n else:\n pass_interval_perc = False\n if not (pass_interval or pass_interval_perc):\n thellier_interpreter_pars['pass_or_fail'] = 'fail'\n thellier_interpreter_pars['fail_criteria'].append(\n \"int_interval\")\n\n return(thellier_interpreter_pars)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntaking in a dictionary and a mapping which contains new key names and returns 
a new dictionary with the updated key names.", "response": "def mapping(dictionary, mapping):\n \"\"\"\n takes in a dictionary and a mapping which contains new key names,\n and returns a new dictionary with the updated key names, i.e.:\n dictionary = {'a': 1, 'b': 2, 'c': 3}\n mapping = {'a': 'aa', 'c': 'cc'}\n mapped_dictionary = mapping(dictionary, mapping)\n mapped_dictionary = {'aa': 1, 'b': 2, 'cc': 3}\n \"\"\"\n mapped_dictionary = {}\n for key, value in dictionary.items():\n if key in list(mapping.keys()):\n new_key = mapping[key]\n # if there is already a mapped value, try to figure out which value to use\n # (i.e., if both er_synthetic_name and er_specimen_name are in one measurement file)\n if new_key in mapped_dictionary:\n if hasattr(value, 'any'):\n if not value.any():\n # if new value is null, leave the old value there\n continue\n # check the already-stored value (not the dict itself) for array semantics\n if hasattr(mapped_dictionary[new_key], 'any'):\n if value.any() and not mapped_dictionary[new_key].any():\n # choose the one that has a non-null value\n mapped_dictionary[new_key] = value\n elif value.any() and mapped_dictionary[new_key].any():\n # if both have values, choose which one to replace and warn\n #print('-W- Two possible values found for {}'.format(new_key))\n #print(' Replacing {} with {}'.format(mapped_dictionary[new_key], value))\n mapped_dictionary[new_key] = value\n else:\n # stored value is a plain object, so use truthiness instead of .any()\n if value.any() and not mapped_dictionary[new_key]:\n # choose the one that has a non-null value\n mapped_dictionary[new_key] = value\n elif value.any() and mapped_dictionary[new_key]:\n # if both have values, choose which one to replace and warn\n #print('-W- Two possible values found for {}'.format(new_key))\n #print(' Replacing {} with {}'.format(mapped_dictionary[new_key], value))\n mapped_dictionary[new_key] = value\n else:\n if not value:\n # if new value is null, leave the old value there\n continue\n elif value and not mapped_dictionary[new_key]:\n # choose the one that has a non-null value\n mapped_dictionary[new_key] = value\n 
elif value and mapped_dictionary[new_key]:\n # if both have values, choose which one to replace and warn\n #print('-W- Two possible values found for {}'.format(new_key))\n #print(' Replacing {} with {}'.format(mapped_dictionary[new_key], value))\n mapped_dictionary[new_key] = value\n # if there is no mapped_value already:\n else:\n mapped_dictionary[new_key] = value\n else:\n # if this line is left in, it gives everything from the original dictionary\n mapped_dictionary[key] = value\n return mapped_dictionary"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmaking a full mapping for 2 --> 3 columns. Output the mapping to json in the specified file_path. Note: This file is currently called maps.py, full path is PmagPy/pmagpy/mapping/maps.py. Parameters ---------- file_path : string with full file path to dump mapping json. Returns --------- maps : nested dictionary with format {table_name: {magic2_col_name: magic3_col_name, ...}, ...}", "response": "def cache_mappings(file_path):\n \"\"\"\n Make a full mapping for 2 --> 3 columns.\n Output the mapping to json in the specified file_path.\n Note: This file is currently called maps.py,\n full path is PmagPy/pmagpy/mapping/maps.py.\n\n Parameters\n ----------\n file_path : string with full file path to dump mapping json.\n\n Returns\n ---------\n maps : nested dictionary with format {table_name: {magic2_col_name: magic3_col_name, ...}, ...}\n \"\"\"\n def get_2_to_3(dm_type, dm):\n table_names3_2_table_names2 = {'measurements': ['magic_measurements'],\n 'locations': ['er_locations'],\n 'sites': ['er_sites', 'pmag_sites'],\n 'samples': ['er_samples', 'pmag_samples'],\n 'specimens': ['er_specimens', 'pmag_specimens'],\n 'ages': ['er_ages'],\n 'criteria': ['pmag_criteria'],\n 'images': ['er_images'],\n 'contribution': []}\n table_names3 = table_names3_2_table_names2[dm_type]\n dictionary = {}\n for label, row in dm.iterrows():\n # if there are one or more corresponding 2.5 columns:\n if 
isinstance(row['previous_columns'], list):\n for previous_values in row['previous_columns']:\n previous_table = previous_values['table']\n previous_value = previous_values['column']\n if previous_table in table_names3:\n add_to_dict(previous_value, label, dictionary)\n elif previous_table in [\"pmag_results\", \"rmag_results\"]:\n if label not in dictionary.values():\n if previous_value not in dictionary.keys():\n add_to_dict(previous_value, label, dictionary)\n return dictionary\n\n def add_to_dict(key, value, dictionary):\n if key in dictionary:\n if value != dictionary[key]:\n print('W- OVERWRITING')\n print('was:', key, dictionary[key])\n print('now:', key, value)\n dictionary[key] = value\n\n # begin\n data_model = DataModel()\n maps = {}\n for table_name in data_model.dm:\n dm = data_model.dm[table_name]\n new_mapping = get_2_to_3(table_name, dm)\n maps[table_name] = new_mapping\n # write maps out to file\n f = open(file_path, 'w')\n f.write(\"all_maps = \")\n json.dump(maps, f)\n f.close()\n return maps"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_thellier_gui_meas_mapping(input_df, output=2):\n if int(output) == 2:\n thellier_gui_meas3_2_meas2_map = meas_magic3_2_magic2_map.copy()\n if 'treat_step_num' in input_df.columns:\n thellier_gui_meas3_2_meas2_map.update(\n {'treat_step_num': 'measurement_number'})\n thellier_gui_meas3_2_meas2_map.pop('measurement')\n return thellier_gui_meas3_2_meas2_map\n # 2 --> 3\n else:\n thellier_gui_meas2_2_meas3_map = meas_magic2_2_magic3_map.copy()\n if 'measurement' in input_df.columns:\n thellier_gui_meas2_2_meas3_map.pop('measurement_number')\n try:\n res = int(input_df.iloc[0]['measurement_number'])\n if res < 100:\n thellier_gui_meas2_2_meas3_map['measurement_number'] = 'treat_step_num'\n except ValueError as ex:\n pass\n return thellier_gui_meas2_2_meas3_map", "response": "Returns the appropriate mapping for translating measurements in Thellier 
GUI to MagIC records."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting a dataframe with measurement names to MagIC 2 or 3 or vice versa.", "response": "def convert_meas_df_thellier_gui(meas_df_in, output):\n \"\"\"\n Take a measurement dataframe and convert column names\n from MagIC 2 --> 3 or vice versa.\n Use treat_step_num --> measurement_number if available,\n otherwise measurement --> measurement_number.\n\n Parameters\n ----------\n meas_df_in : pandas DataFrame\n input dataframe with measurement data\n output : int\n output to MagIC 2 or MagIC 3\n \"\"\"\n output = int(output)\n meas_mapping = get_thellier_gui_meas_mapping(meas_df_in, output)\n meas_df_out = meas_df_in.rename(columns=meas_mapping)\n if 'measurement' not in meas_df_out.columns:\n meas_df_out['measurement'] = meas_df_in['measurement']\n return meas_df_out"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting measurements tables from magic 2 to 3", "response": "def convert_meas(direction, Rec):\n \"\"\"\n converts measurements tables from magic 2 to 3 (direction=magic3)\n or from model 3 to 2.5 (direction=magic2) [not available]\n \"\"\"\n if direction == 'magic3':\n columns = meas_magic2_2_magic3_map\n MeasRec = {}\n for key in columns:\n if key in list(Rec.keys()):\n # transfer info and change column name to data model 3.0\n MeasRec[columns[key]] = Rec[key]\n return MeasRec\n else: # haven't added this way yet\n pass"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert MagIC site headers to short or readable tables for a figure.", "response": "def convert_site_dm3_table_intensity(sites_df):\n \"\"\"\n Convert MagIC site headers to short/readable\n headers for a figure (used by ipmag.sites_extract)\n Intensity data only.\n\n Parameters\n ----------\n sites_df : pandas DataFrame\n sites information\n\n Returns\n ---------\n int_df : pandas DataFrame\n intensity site data with easily 
readable headers\n \"\"\"\n # now for the intensities\n has_vadms, has_vdms = False, False\n if 'int_abs' not in sites_df:\n sites_df['int_abs'] = None\n if 'int_n_samples' not in sites_df:\n sites_df['int_n_samples'] = None\n int_df = sites_df.copy().dropna(subset=['int_abs'])\n int_df['int_n_samples'] = int_df['int_n_samples'].values.astype('int')\n if len(int_df) > 0:\n int_df['int_abs_uT'] = 1e6*int_df.int_abs.values # convert to uT\n int_df['int_abs_sigma_uT'] = 1e6 * \\\n int_df.int_abs_sigma.values # convert to uT\n int_df['int_abs_uT'] = int_df['int_abs_uT'].values.astype('int')\n int_df['int_abs_sigma_uT'] = int_df['int_abs_sigma_uT'].values.astype(\n 'int')\n int_df['int_abs_sigma_perc'] = int_df['int_abs_sigma_perc'].values.astype(\n 'int')\n\n IntCols = [\"Site\", \"N\", \"B\", \"B sigma\", \"sigma (%)\"]\n if 'vadm' in int_df.columns:\n test_vadm = int_df.dropna(subset=['vadm'])\n if len(test_vadm) > 0:\n has_vadms = True\n\n if 'vdm' in int_df.columns:\n test_vdm = int_df.dropna(subset=['vdm'])\n if len(test_vdm) > 0:\n has_vdms = True\n\n if has_vadms:\n IntCols.append(\"VADM\")\n IntCols.append(\"VADM sigma\")\n if has_vdms:\n IntCols.append(\"VDM\")\n IntCols.append(\"VDM sigma\")\n if not has_vadms and not has_vdms:\n int_df = int_df[['site', 'int_n_samples', 'int_abs_uT', 'int_abs_sigma_uT',\n 'int_abs_sigma_perc']]\n if has_vadms and not has_vdms:\n int_df.sort_values(by=['site', 'vadm'],\n ascending=False, inplace=True)\n int_df.drop_duplicates(subset=['int_abs_uT', 'site'], inplace=True)\n\n int_df['vadm_ZAm2'] = 1e-21*int_df.vadm.values\n int_df['vadm_sigma_ZAm2'] = 1e-21*int_df.vadm_sigma.values\n int_df = int_df[['site', 'int_n_samples', 'int_abs_uT', 'int_abs_sigma_uT',\n 'int_abs_sigma_perc', 'vadm_ZAm2', 'vadm_sigma_ZAm2']]\n if not has_vadms and has_vdms:\n int_df.sort_values(by=['site', 'vdm'],\n ascending=False, inplace=True)\n int_df.drop_duplicates(subset=['int_abs_uT', 'site'], inplace=True)\n int_df['vdm_ZAm2'] = 
1e-21*int_df.vdm.values\n int_df['vdm_sigma_ZAm2'] = 1e-21*int_df.vdm_sigma.values\n\n int_df = int_df[['site', 'int_n_samples', 'int_abs_uT', 'int_abs_sigma_uT',\n 'int_abs_sigma_perc', 'vdm_ZAm2', 'vdm_sigma_ZAm2']]\n if has_vadms and has_vdms:\n int_df.sort_values(by=['site', 'vadm'],\n ascending=False, inplace=True)\n int_df.drop_duplicates(subset=['int_abs_uT', 'site'], inplace=True)\n int_df['vadm_ZAm2'] = 1e-21*int_df.vadm.values\n int_df['vadm_sigma_ZAm2'] = 1e-21*int_df.vadm_sigma.values\n int_df['vdm_ZAm2'] = 1e-21*int_df.vdm.values\n int_df['vdm_sigma_ZAm2'] = 1e-21*int_df.vdm_sigma.values\n int_df = int_df[['site', 'int_n_samples', 'int_abs_uT', 'int_abs_sigma_uT',\n 'int_abs_sigma_perc', 'vadm_ZAm2', 'vadm_sigma_ZAm2', 'vdm_ZAm2', 'vdm_sigma_ZAm2']]\n int_df.columns = IntCols\n int_df.sort_values(by=['Site'], inplace=True, ascending=True)\n int_df.fillna(value='', inplace=True)\n return int_df"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting MagIC site headers to short or readable headers for a figure.", "response": "def convert_site_dm3_table_directions(sites_df):\n \"\"\"\n Convert MagIC site headers to short/readable\n headers for a figure (used by ipmag.sites_extract)\n Directional table only.\n\n Parameters\n ----------\n sites_df : pandas DataFrame\n sites information\n\n Returns\n ---------\n dir_df : pandas DataFrame\n directional site data with easily readable headers\n \"\"\"\n # directional\n # do directional stuff first\n # a few things need cleaning up\n dir_df = sites_df.copy().dropna(\n subset=['dir_dec', 'dir_inc']) # delete blank directions\n # sort by absolute value of vgp_lat in order to eliminate duplicate rows for\n # directions put in by accident on intensity rows\n DirCols = [\"Site\", \"TC (%)\", \"Dec.\", \"Inc.\", \"N\",\n \"k \", \"R\", \"a95\", \"VGP Lat\", \"VGP Long\"]\n columns = ['site', 'dir_tilt_correction', 'dir_dec', 'dir_inc',\n 'dir_n_samples', 'dir_k', 'dir_r', 
'dir_alpha95', 'vgp_lat', 'vgp_lon']\n dm3_to_readable = dict(zip(columns, DirCols))\n if len(dir_df) > 0:\n\n for col in ['dir_n_samples', 'dir_tilt_correction']:\n if col in dir_df.columns:\n dir_df[col] = dir_df[col].values.astype('int')\n\n columns = dir_df.columns.intersection(columns)\n has_vgps = False\n if 'vgp_lat' in dir_df.columns:\n test_vgp = dir_df.dropna(subset=['vgp_lat', 'vgp_lon'])\n if len(test_vgp) > 0:\n has_vgps = True\n if has_vgps:\n dir_df['vgp_lat_abs'] = dir_df.vgp_lat.abs()\n dir_df.sort_values(by=['site', 'vgp_lat_abs'],\n ascending=False, inplace=True)\n dir_df = dir_df[columns]\n # this will take the first record for each site's directions (including VGP lat if present)\n dir_df.drop_duplicates(\n subset=['dir_dec', 'dir_inc', 'site'], inplace=True)\n else:\n dir_df.drop_duplicates(\n subset=['dir_dec', 'dir_inc', 'site'], inplace=True)\n dir_df = dir_df[['site', 'dir_tilt_correction', 'dir_dec', 'dir_inc',\n 'dir_n_samples', 'dir_k', 'dir_r', 'dir_alpha95']]\n dir_df.rename(dm3_to_readable, axis='columns', inplace=True)\n dir_df.sort_values(by=['Site'], inplace=True, ascending=True)\n new_cols = list(dir_df.columns.drop('Site'))\n dir_df = dir_df[['Site'] + new_cols]\n return dir_df"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert MagIC specimen dataframe to short or readable format for a figure", "response": "def convert_specimen_dm3_table(spec_df):\n \"\"\"\n Convert MagIC specimen headers to short/readable\n headers for a figure (used by ipmag.specimens_extract)\n \"\"\"\n from pmagpy import data_model3 as dm3\n # get the data model\n dm = dm3.DataModel()\n # get the specimen absolute intensity column headers\n meas_group = ['meas_step_min', 'meas_step_max', 'meas_step_unit']\n pint_group = dm.get_group_headers('specimens', 'Paleointensity')\n arai_group = dm.get_group_headers(\n 'specimens', 'Paleointensity Arai Statistics')\n # filter out the relative intensity stuff\n dm3_columns = 
list(meas_group)+list(pint_group)+list(arai_group)\n dm3_columns = filter(lambda x: '_rel' not in x, dm3_columns)\n # apply to specimen dataframe\n meas_group_columns = ['meas_step_min', 'meas_step_max', 'meas_step_unit']\n pint_group_columns = list(spec_df.columns.intersection(pint_group))\n arai_group_columns = list(spec_df.columns.intersection(arai_group))\n columns = ['specimen', 'sample']+meas_group_columns + \\\n pint_group_columns+arai_group_columns\n spec_df = spec_df.copy()[columns]\n muT_list = ['int_abs', 'int_abs_sigma', 'int_treat_dc_field']\n meas_list = ['meas_step_min', 'meas_step_max']\n for el in muT_list:\n if el in columns:\n spec_df[el] = 1e6*spec_df[el]\n if 'meas_step_unit' in columns:\n for el in meas_list:\n spec_df.loc[spec_df['meas_step_unit'] == 'K', el] = spec_df[el]-273\n spec_df.loc[spec_df['meas_step_unit'] == 'T', el] = 1e3*spec_df[el]\n spec_df.loc[spec_df['meas_step_unit'] == 'K', 'meas_step_unit'] = 'C'\n spec_df.loc[spec_df['meas_step_unit'] == 'T', 'meas_step_unit'] = 'mT'\n spec_df['meas_step_min'] = spec_df['meas_step_min'].astype('int')\n spec_df['meas_step_max'] = spec_df['meas_step_max'].astype('int')\n dm3_columns = ['meas_step_min', 'meas_step_max', 'meas_step_unit', 'int_abs', 'int_abs_sigma',\n 'int_abs_sigma_perc', 'int_n_measurements', 'int_corr', 'int_corr_cooling_rate',\n 'int_corr_aniso', 'int_corr_nlt', 'int_corr_arm', 'int_viscosity_index', 'int_treat_dc_field',\n 'int_b', 'int_b_sigma', 'int_b_beta', 'int_rsc', 'int_f', 'int_fvds', 'int_frac',\n 'int_g', 'int_gmax', 'int_q', 'int_w', 'int_k', 'int_k_sse', 'int_k_prime', 'int_k_prime_sse',\n 'int_scat', 'int_r2_corr', 'int_r2_det', 'int_z', 'int_z_md']\n table_columns = ['Min', 'Max', 'Units', 'B (uT)', 'sigma', 'percent', 'N', 'c/u', 'CR', 'Aniso.', 'NLT', 'AARM', 'VI', 'Lab Field',\n 'b', 'b sigma', 'beta', 'R2', 'f', 'fvds', 'frac', 'g', 'gap max', 'q', 'w', 'k', 'k sse', 'k prime', 'k prime sse',\n 'scat', 'r2 corr', 'r2 det', 'Z', 'Z md']\n spec_mapping = 
dict(list(zip(dm3_columns, table_columns)))\n spec_df_out = spec_df.rename(columns=spec_mapping)\n if 'N' in spec_df_out.columns:\n spec_df_out['N'] = spec_df_out['N'].astype('int')\n if 'Lab Field' in spec_df_out.columns:\n spec_df_out['Lab Field'] = spec_df_out['Lab Field'].round().astype('int')\n return spec_df_out"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nnames sufar4-asc_magic.py DESCRIPTION converts ascii files generated by SUFAR ver.4.0 to MagIC formated files for use with PmagPy plotting software SYNTAX sufar4-asc_magic.py -h [command line options] OPTIONS -h: prints the help message and quits -f FILE: specify .asc input file name -fsp SINFILE: specify er_specimens input file with location, sample, site, etc. information -F MFILE: specify measurements output file -Fa AFILE: specify rmag_anisotropy output file # MagIC 2 only -Fsi SFILE: specify specimens output file -usr USER: specify who made the measurements -loc LOC: specify location name for study -ins INST: specify instrument used -spc SPEC: specify number of characters to specify specimen from sample -ncn NCON: specify naming convention: default is #2 below -k15 : specify static 15 position mode - default is spinning -new : replace all existing magic files DEFAULTS AFILE: rmag_anisotropy.txt # MagIC 2 only SFILE: default is to create new specimen file USER: \"\" LOC: \"unknown\" INST: \"\" SPEC: 0 sample name is same as site (if SPEC is 1, sample is all but last character) appends to specimen/sample/site files Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. 
[default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name same as sample [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY NB: all others you will have to customize yourself or e-mail ltauxe@ucsd.edu for help. [8] This is a synthetic [9] ODP naming convention", "response": "def main():\n \"\"\"\n NAME\n sufar4-asc_magic.py\n\n DESCRIPTION\n converts ascii files generated by SUFAR ver.4.0 to MagIC formatted\n files for use with PmagPy plotting software\n\n SYNTAX\n sufar4-asc_magic.py -h [command line options]\n\n OPTIONS\n -h: prints the help message and quits\n -f FILE: specify .asc input file name\n -fsp SINFILE: specify er_specimens input file with location, sample, site, etc. information\n -F MFILE: specify measurements output file\n -Fa AFILE: specify rmag_anisotropy output file # MagIC 2 only\n -Fsi SFILE: specify specimens output file\n -usr USER: specify who made the measurements\n -loc LOC: specify location name for study\n -ins INST: specify instrument used\n -spc SPEC: specify number of characters to specify specimen from sample\n -ncn NCON: specify naming convention: default is #2 below\n -k15 : specify static 15 position mode - default is spinning\n -new : replace all existing magic files\n\n DEFAULTS\n AFILE: rmag_anisotropy.txt # MagIC 2 only\n SFILE: default is to create new specimen file\n USER: \"\"\n LOC: \"unknown\"\n INST: \"\"\n SPEC: 0 sample name is same as site (if SPEC is 1, sample is all but last character)\n appends to specimen/sample/site files\n Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. 
e.g., TG001a is the\n first sample from site TG001. [default]\n [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name same as sample\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY\n NB: all others you will have to customize yourself\n or e-mail ltauxe@ucsd.edu for help.\n [8] This is a synthetic\n [9] ODP naming convention\n\n \"\"\"\n\n args = sys.argv\n if '-h' in args:\n print(main.__doc__)\n sys.exit()\n\n dataframe = extractor.command_line_dataframe([ ['WD', False, '.'], ['ID', False, ''],\n ['usr', False, ''], ['ncn', False, '1'],\n ['k15', False, False], ['ins', False, ''],\n ['f', True, ''], ['F', False, 'measurements.txt'],\n ['Fa', False, 'rmag_anisotropy.txt'],\n ['Fsi', False, 'specimens.txt'],\n ['loc', False, 'unknown'], ['spc', False, 0],\n ['fsi', False, None], ['DM', False, 3] ])\n #'WD', 'ID', 'usr', 'ncn', 'k15', 'ins', 'f', 'F', 'Fa', 'Fsi', 'loc', 'spc',\n checked_args = extractor.extract_and_check_args(args, dataframe)\n output_dir_path, input_dir_path, user, sample_naming_con, static_15_position_mode, instrument, ascfile, meas_output, aniso_output, spec_outfile, locname, specnum, spec_infile, data_model_num = extractor.get_vars(['WD', 'ID', 'usr', 'ncn', 'k15', 'ins', 'f', 'F', 'Fa', 'Fsi', 'loc', 'spc', 'fsi', 'DM'], checked_args)\n\n convert.sufar4(ascfile, meas_output, aniso_output, spec_infile,\n spec_outfile, specnum=specnum, sample_naming_con=sample_naming_con,\n user=user, locname=locname, instrument=instrument,\n static_15_position_mode=static_15_position_mode, dir_path=output_dir_path,\n input_dir_path=input_dir_path, data_model_num=data_model_num)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 
function for\nnaming make_magic_plots.py DESCRIPTION inspects magic directory for available plots. SYNTAX make_magic_plots.py [command line options] INPUT magic files OPTIONS -h prints help message and quits -f FILE specifies input file name -fmt [png,eps,svg,jpg,pdf] specify format, default is png", "response": "def main():\n \"\"\"\n NAME\n make_magic_plots.py\n\n DESCRIPTION\n inspects magic directory for available plots.\n\n SYNTAX\n make_magic_plots.py [command line options]\n\n INPUT\n magic files\n\n OPTIONS\n -h prints help message and quits\n -f FILE specifies input file name\n -fmt [png,eps,svg,jpg,pdf] specify format, default is png\n \"\"\"\n dirlist = ['./']\n dir_path = os.getcwd()\n names = os.listdir(dir_path)\n for n in names:\n if 'Location' in n:\n dirlist.append(n)\n if '-fmt' in sys.argv:\n ind = sys.argv.index(\"-fmt\")\n fmt = sys.argv[ind+1]\n else:\n fmt = 'png'\n if '-f' in sys.argv:\n ind = sys.argv.index(\"-f\")\n filelist = [sys.argv[ind+1]]\n else:\n filelist = os.listdir(dir_path)\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n for loc in dirlist:\n print('working on: ', loc)\n os.chdir(loc) # change working directories to each location\n crd = 's'\n if 'er_samples.txt' in filelist: # find coordinate systems\n samps, file_type = pmag.magic_read(\n 'er_samples.txt') # read in data\n # get all none blank sample orientations\n Srecs = pmag.get_dictitem(samps, 'sample_azimuth', '', 'F')\n if len(Srecs) > 0:\n crd = 'g'\n if 'magic_measurements.txt' in filelist: # start with measurement data\n print('working on measurements data')\n data, file_type = pmag.magic_read(\n 'magic_measurements.txt') # read in data\n if loc == './':\n # get all the blank location names from data file\n data = pmag.get_dictitem(data, 'er_location_name', '', 'T')\n # looking for zeq_magic possibilities\n # get all none blank method codes\n AFZrecs = pmag.get_dictitem(\n data, 'magic_method_codes', 'LT-AF-Z', 'has')\n # get all none blank method codes\n 
TZrecs = pmag.get_dictitem(\n data, 'magic_method_codes', 'LT-T-Z', 'has')\n # get all none blank method codes\n MZrecs = pmag.get_dictitem(\n data, 'magic_method_codes', 'LT-M-Z', 'has')\n # get all dec measurements\n Drecs = pmag.get_dictitem(data, 'measurement_dec', '', 'F')\n # get all dec measurements\n Irecs = pmag.get_dictitem(data, 'measurement_inc', '', 'F')\n Mkeys = ['measurement_magnitude', 'measurement_magn_moment',\n 'measurement_magn_volume', 'measurement_magn_mass']\n for key in Mkeys:\n Mrecs = pmag.get_dictitem(\n data, key, '', 'F') # get intensity data\n if len(Mrecs) > 0:\n break\n # potential for stepwise demag curves\n if len(AFZrecs) > 0 or len(TZrecs) > 0 or len(MZrecs) > 0 and len(Drecs) > 0 and len(Irecs) > 0 and len(Mrecs) > 0:\n print('zeq_magic.py -fsp pmag_specimens.txt -sav -fmt ' +\n fmt+' -crd '+crd)\n os.system('zeq_magic.py -sav -fmt '+fmt+' -crd '+crd)\n # looking for thellier_magic possibilities\n if len(pmag.get_dictitem(data, 'magic_method_codes', 'LP-PI-TRM', 'has')) > 0:\n print('thellier_magic.py -fsp pmag_specimens.txt -sav -fmt '+fmt)\n os.system('thellier_magic.py -sav -fmt '+fmt)\n # looking for hysteresis possibilities\n if len(pmag.get_dictitem(data, 'magic_method_codes', 'LP-HYS', 'has')) > 0: # find hyst experiments\n print('quick_hyst.py -sav -fmt '+fmt)\n os.system('quick_hyst.py -sav -fmt '+fmt)\n if 'pmag_results.txt' in filelist: # start with measurement data\n data, file_type = pmag.magic_read(\n 'pmag_results.txt') # read in data\n print('number of datapoints: ', len(data))\n if loc == './':\n # get all the concatenated location names from data file\n data = pmag.get_dictitem(data, 'er_location_names', ':', 'has')\n print('number of datapoints: ', len(data), loc)\n print('working on pmag_results directions')\n SiteDIs = pmag.get_dictitem(\n data, 'average_dec', \"\", 'F') # find decs\n print('number of directions: ', len(SiteDIs))\n SiteDIs = pmag.get_dictitem(\n SiteDIs, 'average_inc', \"\", 'F') # find 
decs and incs\n print('number of directions: ', len(SiteDIs))\n # only individual results - not poles\n SiteDIs = pmag.get_dictitem(SiteDIs, 'data_type', 'i', 'has')\n print('number of directions: ', len(SiteDIs))\n # tilt corrected coordinates\n SiteDIs_t = pmag.get_dictitem(\n SiteDIs, 'tilt_correction', '100', 'T')\n print('number of directions: ', len(SiteDIs))\n if len(SiteDIs_t) > 0:\n print('eqarea_magic.py -sav -crd t -fmt '+fmt)\n os.system('eqarea_magic.py -sav -crd t -fmt '+fmt)\n elif len(SiteDIs) > 0 and 'tilt_correction' not in SiteDIs[0].keys():\n print('eqarea_magic.py -sav -fmt '+fmt)\n os.system('eqarea_magic.py -sav -fmt '+fmt)\n else:\n SiteDIs_g = pmag.get_dictitem(\n SiteDIs, 'tilt_correction', '0', 'T') # geographic coordinates\n if len(SiteDIs_g) > 0:\n print('eqarea_magic.py -sav -crd g -fmt '+fmt)\n os.system('eqarea_magic.py -sav -crd g -fmt '+fmt)\n else:\n SiteDIs_s = pmag.get_dictitem(\n SiteDIs, 'tilt_correction', '-1', 'T') # sample coordinates\n if len(SiteDIs_s) > 0:\n print('eqarea_magic.py -sav -crd s -fmt '+fmt)\n os.system('eqarea_magic.py -sav -crd s -fmt '+fmt)\n else:\n SiteDIs_x = pmag.get_dictitem(\n SiteDIs, 'tilt_correction', '', 'T') # no coordinates\n if len(SiteDIs_x) > 0:\n print('eqarea_magic.py -sav -fmt '+fmt)\n os.system('eqarea_magic.py -sav -fmt '+fmt)\n print('working on pmag_results VGP map')\n VGPs = pmag.get_dictitem(\n SiteDIs, 'vgp_lat', \"\", 'F') # are there any VGPs?\n if len(VGPs) > 0: # YES!\n os.system(\n 'vgpmap_magic.py -prj moll -res c -sym ro 5 -sav -fmt png')\n print('working on pmag_results intensities')\n os.system(\n 'magic_select.py -f pmag_results.txt -key data_type i T -F tmp.txt')\n os.system(\n 'magic_select.py -f tmp.txt -key average_int 0. 
has -F tmp1.txt')\n os.system(\n \"grab_magic_key.py -f tmp1.txt -key average_int | awk '{print $1*1e6}' >tmp2.txt\")\n data, file_type = pmag.magic_read('tmp1.txt') # read in data\n locations = pmag.get_dictkey(data, 'er_location_names', \"\")\n histfile = 'LO:_'+locations[0]+'_intensities_histogram:_.'+fmt\n os.system(\n \"histplot.py -b 1 -xlab 'Intensity (uT)' -sav -f tmp2.txt -F \" + histfile)\n print(\n \"histplot.py -b 1 -xlab 'Intensity (uT)' -sav -f tmp2.txt -F \" + histfile)\n os.system('rm tmp*.txt')\n if 'rmag_hysteresis.txt' in filelist: # work on hysteresis data\n print('working on rmag_hysteresis')\n data, file_type = pmag.magic_read(\n 'rmag_hysteresis.txt') # read in data\n if loc == './':\n # get all the blank location names from data file\n data = pmag.get_dictitem(data, 'er_location_name', '', 'T')\n hdata = pmag.get_dictitem(data, 'hysteresis_bcr', '', 'F')\n hdata = pmag.get_dictitem(hdata, 'hysteresis_mr_moment', '', 'F')\n hdata = pmag.get_dictitem(hdata, 'hysteresis_ms_moment', '', 'F')\n # there are data for a dayplot\n hdata = pmag.get_dictitem(hdata, 'hysteresis_bc', '', 'F')\n if len(hdata) > 0:\n print('dayplot_magic.py -sav -fmt '+fmt)\n os.system('dayplot_magic.py -sav -fmt '+fmt)\n # if 'er_sites.txt' in filelist: # start with measurement data\n # print 'working on er_sites'\n #os.system('basemap_magic.py -sav -fmt '+fmt)\n if 'rmag_anisotropy.txt' in filelist: # do anisotropy plots if possible\n print('working on rmag_anisotropy')\n data, file_type = pmag.magic_read(\n 'rmag_anisotropy.txt') # read in data\n if loc == './':\n # get all the blank location names from data file\n data = pmag.get_dictitem(data, 'er_location_name', '', 'T')\n # get specimen coordinates\n sdata = pmag.get_dictitem(\n data, 'anisotropy_tilt_correction', '-1', 'T')\n # get geographic coordinates\n gdata = pmag.get_dictitem(\n data, 'anisotropy_tilt_correction', '0', 'T')\n # get tilt-corrected coordinates\n tdata = pmag.get_dictitem(\n data, 
'anisotropy_tilt_correction', '100', 'T')\n if len(sdata) > 3:\n print('aniso_magic.py -x -B -crd s -sav -fmt '+fmt)\n os.system('aniso_magic.py -x -B -crd s -sav -fmt '+fmt)\n if len(gdata) > 3:\n os.system('aniso_magic.py -x -B -crd g -sav -fmt '+fmt)\n if len(tdata) > 3:\n os.system('aniso_magic.py -x -B -crd t -sav -fmt '+fmt)\n if loc != './':\n os.chdir('..')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef init_UI(self):\n# --------------------------------------------------------------------------\n # Setup ScrolledPanel Ctrls---------------------------------------------\n# --------------------------------------------------------------------------\n\n # ----------------------------------------------------------------------\n # set ctrl size and style variables\n # ----------------------------------------------------------------------\n dw, dh = wx.DisplaySize()\n r1 = dw/1210.\n r2 = dh/640.\n\n self.GUI_RESOLUTION = min(r1, r2, 1)\n top_bar_2v_space = 5\n top_bar_h_space = 10\n spec_button_space = 10\n side_bar_v_space = 10\n\n # ----------------------------------------------------------------------\n # set font size and style\n # ----------------------------------------------------------------------\n\n FONT_WEIGHT = 1\n if sys.platform.startswith('win'):\n FONT_WEIGHT = -1\n font1 = wx.Font(9+FONT_WEIGHT, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n font2 = wx.Font(12+FONT_WEIGHT, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n font = wx.SystemSettings.GetFont(wx.SYS_SYSTEM_FONT)\n font.SetPointSize(10+FONT_WEIGHT)\n\n# --------------------------------------------------------------------------\n # Setup ScrolledPanel Ctrls---------------------------------------------\n# --------------------------------------------------------------------------\n\n # ----------------------------------------------------------------------\n # Create Figures and FigCanvas objects.\n # 
----------------------------------------------------------------------\n\n self.fig1 = Figure((5.*self.GUI_RESOLUTION, 5. *\n self.GUI_RESOLUTION), dpi=self.dpi)\n self.canvas1 = FigCanvas(self.scrolled_panel, -1, self.fig1)\n self.toolbar1 = NavigationToolbar(self.canvas1)\n self.toolbar1.Hide()\n self.zijderveld_setting = \"Zoom\"\n self.toolbar1.zoom()\n self.canvas1.Bind(wx.EVT_RIGHT_DOWN, self.right_click_zijderveld)\n self.canvas1.Bind(wx.EVT_MIDDLE_DOWN, self.home_zijderveld)\n self.canvas1.Bind(wx.EVT_LEFT_DCLICK, self.on_zijd_select)\n self.canvas1.Bind(wx.EVT_RIGHT_DCLICK, self.on_zijd_mark)\n self.canvas1.Bind(wx.EVT_MOTION, self.on_change_zijd_mouse_cursor)\n self.canvas1.SetHelpText(dgh.zij_help)\n\n self.fig2 = Figure((2.5*self.GUI_RESOLUTION, 2.5 *\n self.GUI_RESOLUTION), dpi=self.dpi)\n self.specimen_eqarea = self.fig2.add_subplot(111)\n draw_net(self.specimen_eqarea)\n self.canvas2 = FigCanvas(self.scrolled_panel, -1, self.fig2)\n self.toolbar2 = NavigationToolbar(self.canvas2)\n self.toolbar2.Hide()\n self.toolbar2.zoom()\n self.specimen_EA_setting = \"Zoom\"\n self.canvas2.Bind(wx.EVT_LEFT_DCLICK,\n self.on_equalarea_specimen_select)\n self.canvas2.Bind(wx.EVT_RIGHT_DOWN,\n self.right_click_specimen_equalarea)\n self.canvas2.Bind(wx.EVT_MOTION, self.on_change_specimen_mouse_cursor)\n self.canvas2.Bind(wx.EVT_MIDDLE_DOWN, self.home_specimen_equalarea)\n self.canvas2.SetHelpText(dgh.spec_eqarea_help)\n self.specimen_EA_xdata = []\n self.specimen_EA_ydata = []\n\n self.fig3 = Figure((2.5*self.GUI_RESOLUTION, 2.5 *\n self.GUI_RESOLUTION), dpi=self.dpi)\n self.mplot = self.fig3.add_axes(\n [0.2, 0.15, 0.7, 0.7], frameon=True, facecolor='None')\n self.canvas3 = FigCanvas(self.scrolled_panel, -1, self.fig3)\n self.toolbar3 = NavigationToolbar(self.canvas3)\n self.toolbar3.Hide()\n self.toolbar3.zoom()\n self.MM0_setting = \"Zoom\"\n self.canvas3.Bind(wx.EVT_RIGHT_DOWN, self.right_click_MM0)\n self.canvas3.Bind(wx.EVT_MIDDLE_DOWN, self.home_MM0)\n 
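The `pmag.get_dictitem` calls in the plotting routine above filter a list of MagIC record dictionaries by key, value, and a flag ('T' for an exact match, 'F' for records whose value differs from the given one — used with '' to keep non-blank records — and 'has' for a substring match). A minimal GUI-free sketch of that filtering behavior (a hypothetical reimplementation for illustration, not the actual PmagPy function, whose flag set is richer):

```python
def get_dictitem(recs, key, value, flag):
    """Filter a list of record dicts the way pmag.get_dictitem is used above.

    flag == 'T'   : keep records whose value for key equals value
    flag == 'F'   : keep records whose value for key differs from value
                    (called with value='' above to keep non-blank records)
    flag == 'has' : keep records whose value for key contains value
    """
    out = []
    for rec in recs:
        v = str(rec.get(key, ''))
        if flag == 'T' and v == value:
            out.append(rec)
        elif flag == 'F' and v != value:
            out.append(rec)
        elif flag == 'has' and value in v:
            out.append(rec)
    return out

# toy records mimicking MagIC measurement rows
data = [{'magic_method_codes': 'LT-AF-Z:LP-DIR', 'measurement_dec': '12.5'},
        {'magic_method_codes': 'LT-T-Z', 'measurement_dec': ''}]
afz = get_dictitem(data, 'magic_method_codes', 'LT-AF-Z', 'has')  # substring match
decs = get_dictitem(data, 'measurement_dec', '', 'F')             # non-blank dec
```

This is why the plotting code can chain several calls to narrow the record set one criterion at a time.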
self.canvas3.SetHelpText(dgh.MM0_help)\n\n self.fig4 = Figure((2.5*self.GUI_RESOLUTION, 2.5 *\n self.GUI_RESOLUTION), dpi=self.dpi)\n self.canvas4 = FigCanvas(self.scrolled_panel, -1, self.fig4)\n self.toolbar4 = NavigationToolbar(self.canvas4)\n self.toolbar4.Hide()\n self.toolbar4.zoom()\n self.high_EA_setting = \"Zoom\"\n self.canvas4.Bind(wx.EVT_LEFT_DCLICK, self.on_equalarea_high_select)\n self.canvas4.Bind(wx.EVT_RIGHT_DOWN, self.right_click_high_equalarea)\n self.canvas4.Bind(wx.EVT_MOTION, self.on_change_high_mouse_cursor)\n self.canvas4.Bind(wx.EVT_MIDDLE_DOWN, self.home_high_equalarea)\n self.canvas4.SetHelpText(dgh.high_level_eqarea_help)\n self.old_pos = None\n self.high_EA_xdata = []\n self.high_EA_ydata = []\n self.high_level_eqarea = self.fig4.add_subplot(111)\n draw_net(self.high_level_eqarea)\n\n # ----------------------------------------------------------------------\n # High level Stats Sizer and Switch Stats Button\n # ----------------------------------------------------------------------\n\n self.stats_sizer = wx.StaticBoxSizer(wx.StaticBox(\n self.panel, wx.ID_ANY, \"mean statistics\"), wx.VERTICAL)\n\n for parameter in ['mean_type', 'dec', 'inc', 'alpha95', 'K', 'R', 'n_lines', 'n_planes']:\n COMMAND = \"self.%s_window=wx.TextCtrl(self.scrolled_panel,style=wx.TE_CENTER|wx.TE_READONLY,size=(50*self.GUI_RESOLUTION,25))\" % parameter\n exec(COMMAND)\n COMMAND = \"self.%s_window.SetBackgroundColour(wx.WHITE)\" % parameter\n exec(COMMAND)\n COMMAND = \"self.%s_window.SetFont(font2)\" % parameter\n exec(COMMAND)\n COMMAND = \"self.%s_outer_window = wx.GridSizer(1,2,5*self.GUI_RESOLUTION,15*self.GUI_RESOLUTION)\" % parameter\n exec(COMMAND)\n COMMAND = \"\"\"self.%s_outer_window.AddMany([\n (wx.StaticText(self.scrolled_panel,label='%s',style=wx.TE_CENTER),1,wx.EXPAND),\n (self.%s_window, 1, wx.EXPAND)])\"\"\" % (parameter, parameter, parameter)\n exec(COMMAND)\n COMMAND = \"self.stats_sizer.Add(self.%s_outer_window, 1, wx.ALIGN_LEFT|wx.EXPAND)\" % 
parameter\n exec(COMMAND)\n\n self.switch_stats_button = wx.SpinButton(\n self.scrolled_panel, id=wx.ID_ANY, style=wx.SP_HORIZONTAL | wx.SP_ARROW_KEYS | wx.SP_WRAP, name=\"change stats\")\n self.Bind(wx.EVT_SPIN, self.on_select_stats_button,\n self.switch_stats_button)\n self.switch_stats_button.SetHelpText(dgh.switch_stats_btn_help)\n\n# --------------------------------------------------------------------------\n # Side Bar Options and Logger-----------------------------------------\n# --------------------------------------------------------------------------\n\n # ----------------------------------------------------------------------\n # Create text_box for presenting the measurements\n # ----------------------------------------------------------------------\n\n self.logger = wx.ListCtrl(self.side_panel, id=wx.ID_ANY, size=(\n 100*self.GUI_RESOLUTION, 100*self.GUI_RESOLUTION), style=wx.LC_REPORT)\n self.logger.SetFont(font1)\n self.logger.InsertColumn(0, 'i', width=25*self.GUI_RESOLUTION)\n self.logger.InsertColumn(1, 'Step', width=25*self.GUI_RESOLUTION)\n self.logger.InsertColumn(2, 'Tr', width=35*self.GUI_RESOLUTION)\n self.logger.InsertColumn(3, 'Dec', width=35*self.GUI_RESOLUTION)\n self.logger.InsertColumn(4, 'Inc', width=35*self.GUI_RESOLUTION)\n self.logger.InsertColumn(5, 'M', width=45*self.GUI_RESOLUTION)\n self.logger.InsertColumn(6, 'csd', width=45*self.GUI_RESOLUTION)\n self.Bind(wx.EVT_LIST_ITEM_ACTIVATED,\n self.on_click_listctrl, self.logger)\n self.Bind(wx.EVT_LIST_ITEM_RIGHT_CLICK,\n self.on_right_click_listctrl, self.logger)\n self.Bind(wx.EVT_LIST_ITEM_SELECTED,\n self.on_select_measurement, self.logger)\n self.logger.SetHelpText(dgh.logger_help)\n\n # ----------------------------------------------------------------------\n # select specimen box\n # ----------------------------------------------------------------------\n\n # Combo-box with a list of specimen\n self.specimens_box = wx.ComboBox(self.side_panel, id=wx.ID_ANY, value=self.s, 
size=(\n 200*self.GUI_RESOLUTION, 25), choices=self.specimens, style=wx.CB_DROPDOWN | wx.TE_PROCESS_ENTER, name=\"specimen\")\n self.Bind(wx.EVT_COMBOBOX, self.onSelect_specimen, self.specimens_box)\n self.Bind(wx.EVT_TEXT_ENTER, self.on_enter_specimen,\n self.specimens_box)\n self.specimens_box.SetHelpText(dgh.specimens_box_help)\n\n # buttons to move forward and backwards from specimens\n self.nextbutton = wx.Button(\n self.side_panel, id=wx.ID_ANY, label='next', size=(100*self.GUI_RESOLUTION, 25))\n self.Bind(wx.EVT_BUTTON, self.on_next_button, self.nextbutton)\n self.nextbutton.SetFont(font2)\n self.nextbutton.SetHelpText(dgh.nextbutton_help)\n\n self.prevbutton = wx.Button(\n self.side_panel, id=wx.ID_ANY, label='previous', size=(100*self.GUI_RESOLUTION, 25))\n self.prevbutton.SetFont(font2)\n self.Bind(wx.EVT_BUTTON, self.on_prev_button, self.prevbutton)\n self.prevbutton.SetHelpText(dgh.prevbutton_help)\n\n # ----------------------------------------------------------------------\n # select coordinate box\n # ----------------------------------------------------------------------\n\n self.COORDINATE_SYSTEM, self.coordinate_list = self.get_coordinate_system()\n self.coordinates_box = wx.ComboBox(self.side_panel, id=wx.ID_ANY, size=(\n 200*self.GUI_RESOLUTION, 25), choices=self.coordinate_list, value=self.COORDINATE_SYSTEM, style=wx.CB_DROPDOWN | wx.TE_READONLY, name=\"coordinates\")\n self.Bind(wx.EVT_COMBOBOX, self.onSelect_coordinates,\n self.coordinates_box)\n self.coordinates_box.SetHelpText(dgh.coordinates_box_help)\n\n # ----------------------------------------------------------------------\n # Orthogonal Zijderveld Options box\n # ----------------------------------------------------------------------\n\n self.orthogonal_box = wx.ComboBox(self.side_panel, id=wx.ID_ANY, value='X=East', size=(\n 200*self.GUI_RESOLUTION, 25), choices=['X=NRM dec', 'X=East', 'X=North'], style=wx.CB_DROPDOWN | wx.TE_READONLY, name=\"orthogonal_plot\")\n # remove 'X=best fit 
line dec' as option given that it isn't implemented for multiple components\n self.Bind(wx.EVT_COMBOBOX, self.onSelect_orthogonal_box,\n self.orthogonal_box)\n self.orthogonal_box.SetHelpText(dgh.orthogonal_box_help)\n\n# --------------------------------------------------------------------------\n # Top Bar Options ----------------------------------------------------\n# --------------------------------------------------------------------------\n\n # ----------------------------------------------------------------------\n # select bounds box\n # ----------------------------------------------------------------------\n\n self.T_list = []\n\n self.tmin_box = wx.ComboBox(self.panel, id=wx.ID_ANY, size=(\n 50*self.GUI_RESOLUTION, 25), choices=self.T_list, style=wx.CB_DROPDOWN | wx.TE_READONLY)\n self.Bind(wx.EVT_COMBOBOX, self.get_new_PCA_parameters, self.tmin_box)\n self.tmin_box.SetHelpText(dgh.tmin_box_help)\n\n self.tmax_box = wx.ComboBox(self.panel, id=wx.ID_ANY, size=(\n 50*self.GUI_RESOLUTION, 25), choices=self.T_list, style=wx.CB_DROPDOWN | wx.TE_READONLY)\n self.Bind(wx.EVT_COMBOBOX, self.get_new_PCA_parameters, self.tmax_box)\n self.tmax_box.SetHelpText(dgh.tmax_box_help)\n\n # ----------------------------------------------------------------------\n # Specimens interpretations Management box\n # ----------------------------------------------------------------------\n\n list_fits = []\n\n self.fit_box = wx.ComboBox(self.panel, id=wx.ID_ANY, size=(\n 50*self.GUI_RESOLUTION, 25), choices=list_fits, style=wx.TE_PROCESS_ENTER)\n self.Bind(wx.EVT_COMBOBOX, self.on_select_fit, self.fit_box)\n self.Bind(wx.EVT_TEXT_ENTER, self.on_enter_fit_name, self.fit_box)\n self.fit_box.SetHelpText(dgh.fit_box_help)\n\n self.add_fit_button = wx.Button(\n self.panel, id=wx.ID_ANY, label='add fit', size=(50*self.GUI_RESOLUTION, 25))\n self.add_fit_button.SetFont(font2)\n self.Bind(wx.EVT_BUTTON, self.on_btn_add_fit, self.add_fit_button)\n 
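The `exec()` loops in `init_UI` assemble attribute names like `self.dec_window` from code strings. The same bulk-creation pattern can be written without `exec` using `setattr`/`getattr`; below is a GUI-free sketch in which a stub class stands in for `wx.TextCtrl` (the stub and its names are illustrative assumptions, not part of the original GUI):

```python
class StubTextCtrl:
    """Stand-in for wx.TextCtrl so the pattern can run without a GUI."""
    def __init__(self, name):
        self.name = name
        self.value = ''

class StatsPanel:
    """Builds one '<parameter>_window' attribute per statistic, as the
    exec() loops above do, but via setattr instead of code strings."""
    def __init__(self, parameters):
        for parameter in parameters:
            setattr(self, '%s_window' % parameter, StubTextCtrl(parameter))

    def window(self, parameter):
        # dynamic lookup by name, the getattr counterpart of the setattr above
        return getattr(self, '%s_window' % parameter)

panel = StatsPanel(['mean_type', 'dec', 'inc', 'alpha95', 'K', 'R'])
panel.dec_window.value = '350.1'  # attributes exist by name, as with exec()
```

`setattr`/`getattr` keeps the dynamic naming while remaining debuggable and free of string-quoting pitfalls.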
self.add_fit_button.SetHelpText(dgh.add_fit_button_help)\n\n # save/delete interpretation buttons\n self.save_fit_button = wx.Button(self.panel, id=wx.ID_ANY, label='save', size=(\n 50*self.GUI_RESOLUTION, 25)) # ,style=wx.BU_EXACTFIT)#, size=(175, 28))\n self.save_fit_button.SetFont(font2)\n self.save_fit_button.SetHelpText(dgh.save_fit_btn_help)\n\n self.delete_fit_button = wx.Button(self.panel, id=wx.ID_ANY, label='delete fit', size=(\n 50*self.GUI_RESOLUTION, 25)) # ,style=wx.BU_EXACTFIT)#, size=(175, 28))\n self.delete_fit_button.SetFont(font2)\n self.delete_fit_button.SetHelpText(dgh.delete_fit_btn_help)\n\n self.Bind(wx.EVT_BUTTON, self.on_save_interpretation_button,\n self.save_fit_button)\n self.Bind(wx.EVT_BUTTON, self.on_btn_delete_fit,\n self.delete_fit_button)\n\n # auto-save interpretation buttons\n self.auto_save = wx.CheckBox(self.panel, wx.ID_ANY, 'auto-save')\n if self.preferences['auto_save']:\n self.auto_save.SetValue(True)\n\n self.auto_save_info = wx.Button(self.panel, wx.ID_ANY, \"?\")\n self.Bind(wx.EVT_BUTTON, self.on_btn_info_click, self.auto_save_info)\n\n # ----------------------------------------------------------------------\n # Interpretation Type and Display window\n # ----------------------------------------------------------------------\n\n self.PCA_type_box = wx.ComboBox(self.panel, id=wx.ID_ANY, size=(50*self.GUI_RESOLUTION, 25), value='line', choices=[\n 'line', 'line-anchored', 'line-with-origin', 'plane', 'Fisher'], style=wx.CB_DROPDOWN | wx.TE_READONLY, name=\"coordinates\")\n self.Bind(wx.EVT_COMBOBOX,\n self.on_select_specimen_mean_type_box, self.PCA_type_box)\n self.PCA_type_box.SetHelpText(dgh.PCA_type_help)\n\n self.plane_display_box = wx.ComboBox(self.panel, id=wx.ID_ANY, size=(50*self.GUI_RESOLUTION, 25), value='whole plane', choices=[\n 'whole plane', 'u. hemisphere', 'l. 
hemisphere', 'poles', 'bfv', 'wp + bfv'], style=wx.CB_DROPDOWN | wx.TE_READONLY, name=\"PlaneType\")\n self.Bind(wx.EVT_COMBOBOX, self.on_select_plane_display_box,\n self.plane_display_box)\n self.plane_display_box.SetHelpText(dgh.plane_display_help)\n\n # ----------------------------------------------------------------------\n # Interpretation Statistics StaticSizer\n # ----------------------------------------------------------------------\n\n box_sizer_specimen_stat = wx.StaticBoxSizer(wx.StaticBox(\n self.panel, wx.ID_ANY, \"Interpretation Direction and Statistics\"), wx.HORIZONTAL)\n\n for parameter in ['dec', 'inc', 'n', 'mad', 'dang', 'alpha95']:\n COMMAND = \"self.s%s_window=wx.TextCtrl(self.panel,style=wx.TE_CENTER|wx.TE_READONLY,size=(25*self.GUI_RESOLUTION,25))\" % parameter\n exec(COMMAND)\n COMMAND = \"self.s%s_window.SetBackgroundColour(wx.WHITE)\" % parameter\n exec(COMMAND)\n COMMAND = \"self.s%s_window.SetFont(font2)\" % parameter\n exec(COMMAND)\n\n specimen_stat_window = wx.GridSizer(2, 6, 0, 5)\n specimen_stat_window.AddMany([(wx.StaticText(self.panel, label=\"dec\", style=wx.TE_CENTER), 1, wx.EXPAND | wx.TOP, 2*top_bar_2v_space),\n (wx.StaticText(self.panel, label=\"inc\", style=wx.TE_CENTER),\n 1, wx.EXPAND | wx.TOP, 2*top_bar_2v_space),\n (wx.StaticText(self.panel, label=\"n\", style=wx.TE_CENTER),\n 1, wx.EXPAND | wx.TOP, 2*top_bar_2v_space),\n (wx.StaticText(self.panel, label=\"mad\", style=wx.TE_CENTER),\n 1, wx.EXPAND | wx.TOP, 2*top_bar_2v_space),\n (wx.StaticText(self.panel, label=\"dang\", style=wx.TE_CENTER),\n 1, wx.TE_CENTER | wx.EXPAND | wx.TOP, 2*top_bar_2v_space),\n (wx.StaticText(self.panel, label=\"a95\", style=wx.TE_CENTER),\n 1, wx.TE_CENTER | wx.EXPAND | wx.TOP, 2*top_bar_2v_space),\n (self.sdec_window, 1, wx.EXPAND),\n (self.sinc_window, 1, wx.EXPAND),\n (self.sn_window, 1, wx.EXPAND),\n (self.smad_window, 1, wx.EXPAND),\n (self.sdang_window, 1, wx.EXPAND),\n (self.salpha95_window, 1, wx.EXPAND)])\n 
box_sizer_specimen_stat.Add(\n specimen_stat_window, 1, wx.ALIGN_LEFT | wx.EXPAND)\n\n # ----------------------------------------------------------------------\n # High level mean window\n # ----------------------------------------------------------------------\n\n self.level_box = wx.ComboBox(self.panel, id=wx.ID_ANY, size=(50*self.GUI_RESOLUTION, 25), value='site', choices=[\n 'sample', 'site', 'location', 'study'], style=wx.CB_DROPDOWN | wx.TE_READONLY, name=\"high_level\")\n self.Bind(wx.EVT_COMBOBOX, self.onSelect_high_level, self.level_box)\n self.level_box.SetHelpText(dgh.level_box_help)\n\n self.level_names = wx.ComboBox(self.panel, id=wx.ID_ANY, size=(\n 50*self.GUI_RESOLUTION, 25), value=self.site, choices=self.sites, style=wx.CB_DROPDOWN | wx.TE_READONLY, name=\"high_level_names\")\n self.Bind(wx.EVT_COMBOBOX, self.onSelect_level_name, self.level_names)\n self.level_names.SetHelpText(dgh.level_names_help)\n\n # ----------------------------------------------------------------------\n # mean types box\n # ----------------------------------------------------------------------\n\n self.mean_type_box = wx.ComboBox(self.panel, id=wx.ID_ANY, size=(50*self.GUI_RESOLUTION, 25), value='None', choices=[\n 'Fisher', 'Fisher by polarity', 'None'], style=wx.CB_DROPDOWN | wx.TE_READONLY, name=\"high_type\")\n self.Bind(wx.EVT_COMBOBOX, self.onSelect_mean_type_box,\n self.mean_type_box)\n self.mean_type_box.SetHelpText(dgh.mean_type_help)\n\n self.mean_fit_box = wx.ComboBox(self.panel, id=wx.ID_ANY, size=(50*self.GUI_RESOLUTION, 25), value='None', choices=[\n 'None', 'All'] + list_fits, style=wx.CB_DROPDOWN | wx.TE_READONLY, name=\"high_type\")\n self.Bind(wx.EVT_COMBOBOX, self.onSelect_mean_fit_box,\n self.mean_fit_box)\n self.mean_fit_box.SetHelpText(dgh.mean_fit_help)\n self.mean_fit = 'None'\n\n # ----------------------------------------------------------------------\n # Warnings TextCtrl\n # ----------------------------------------------------------------------\n 
warning_sizer = wx.StaticBoxSizer(wx.StaticBox(\n self.panel, wx.ID_ANY, \"Current Data Warnings\"), wx.VERTICAL)\n\n self.warning_box = wx.TextCtrl(self.panel, id=wx.ID_ANY, size=(50*self.GUI_RESOLUTION, 50 + 2*top_bar_2v_space),\n value=\"No Problems\", style=wx.TE_MULTILINE | wx.TE_READONLY | wx.HSCROLL, name=\"warning_box\")\n self.warning_box.SetHelpText(dgh.warning_help)\n\n warning_sizer.Add(self.warning_box, 1, wx.TOP | wx.EXPAND)\n\n # ----------------------------------------------------------------------\n # Design the panel\n # ----------------------------------------------------------------------\n\n # Top Bar-----------------------------------------------------------\n top_bar_sizer = wx.BoxSizer(wx.HORIZONTAL)\n\n bounds_sizer = wx.StaticBoxSizer(wx.StaticBox(\n self.panel, wx.ID_ANY, \"Bounds\"), wx.VERTICAL)\n bounds_sizer.AddMany([(self.tmin_box, 1, wx.ALIGN_TOP | wx.EXPAND | wx.BOTTOM, top_bar_2v_space),\n (self.tmax_box, 1, wx.ALIGN_BOTTOM | wx.EXPAND | wx.TOP, top_bar_2v_space)])\n top_bar_sizer.Add(bounds_sizer, 1, wx.ALIGN_LEFT)\n\n fit_sizer = wx.StaticBoxSizer(wx.StaticBox(\n self.panel, wx.ID_ANY, \"Interpretation Options\"), wx.VERTICAL)\n fit_grid = wx.GridSizer(2, 3, top_bar_h_space, 2*top_bar_2v_space)\n fit_grid.AddMany([(self.add_fit_button, 1, wx.ALIGN_TOP | wx.ALIGN_LEFT | wx.EXPAND),\n (self.save_fit_button, 1, wx.ALIGN_TOP |\n wx.ALIGN_LEFT | wx.EXPAND),\n (self.auto_save, 1, wx.ALIGN_CENTER),\n (self.fit_box, 1, wx.ALIGN_BOTTOM |\n wx.ALIGN_LEFT | wx.EXPAND),\n (self.delete_fit_button, 1, wx.ALIGN_BOTTOM | wx.ALIGN_LEFT | wx.EXPAND),\n (self.auto_save_info, 1, wx.ALIGN_CENTER)])\n fit_sizer.Add(fit_grid, 1, wx.EXPAND)\n top_bar_sizer.Add(fit_sizer, 2, wx.ALIGN_LEFT |\n wx.LEFT, top_bar_h_space)\n\n fit_type_sizer = wx.StaticBoxSizer(wx.StaticBox(\n self.panel, wx.ID_ANY, \"Interpretation Type\"), wx.VERTICAL)\n fit_type_sizer.AddMany([(self.PCA_type_box, 1, wx.ALIGN_TOP | wx.EXPAND | wx.BOTTOM, top_bar_2v_space),\n 
(self.plane_display_box, 1, wx.ALIGN_BOTTOM | wx.EXPAND | wx.TOP, top_bar_2v_space)])\n top_bar_sizer.Add(fit_type_sizer, 1, wx.ALIGN_LEFT |\n wx.LEFT, top_bar_h_space)\n\n top_bar_sizer.Add(box_sizer_specimen_stat, 3,\n wx.ALIGN_LEFT | wx.LEFT, top_bar_h_space)\n\n level_sizer = wx.StaticBoxSizer(wx.StaticBox(\n self.panel, wx.ID_ANY, \"Display Level\"), wx.VERTICAL)\n level_sizer.AddMany([(self.level_box, 1, wx.ALIGN_TOP | wx.EXPAND | wx.BOTTOM, top_bar_2v_space),\n (self.level_names, 1, wx.ALIGN_BOTTOM | wx.EXPAND | wx.TOP, top_bar_2v_space)])\n top_bar_sizer.Add(level_sizer, 1, wx.ALIGN_LEFT |\n wx.LEFT, top_bar_h_space)\n\n mean_options_sizer = wx.StaticBoxSizer(wx.StaticBox(\n self.panel, wx.ID_ANY, \"Mean Options\"), wx.VERTICAL)\n mean_options_sizer.AddMany([(self.mean_type_box, 1, wx.ALIGN_TOP | wx.EXPAND | wx.BOTTOM, top_bar_2v_space),\n (self.mean_fit_box, 1, wx.ALIGN_BOTTOM | wx.EXPAND | wx.TOP, top_bar_2v_space)])\n top_bar_sizer.Add(mean_options_sizer, 1,\n wx.ALIGN_LEFT | wx.LEFT, top_bar_h_space)\n\n top_bar_sizer.Add(warning_sizer, 2, wx.ALIGN_LEFT |\n wx.LEFT, top_bar_h_space)\n\n # Side Bar------------------------------------------------------------\n side_bar_sizer = wx.BoxSizer(wx.VERTICAL)\n\n spec_sizer = wx.StaticBoxSizer(wx.StaticBox(\n self.side_panel, wx.ID_ANY, \"Specimen\"), wx.VERTICAL)\n spec_buttons_sizer = wx.GridSizer(1, 2, 0, spec_button_space)\n spec_buttons_sizer.AddMany([(self.prevbutton, 1, wx.ALIGN_LEFT | wx.EXPAND),\n (self.nextbutton, 1, wx.ALIGN_RIGHT | wx.EXPAND)])\n spec_sizer.AddMany([(self.specimens_box, 1, wx.ALIGN_TOP | wx.EXPAND | wx.BOTTOM, side_bar_v_space/2),\n (spec_buttons_sizer, 1, wx.ALIGN_BOTTOM | wx.EXPAND | wx.TOP, side_bar_v_space/2)])\n side_bar_sizer.Add(spec_sizer, .5, wx.ALIGN_TOP | wx.EXPAND)\n side_bar_sizer.Add(wx.StaticLine(self.side_panel), .5,\n wx.ALL | wx.EXPAND, side_bar_v_space)\n\n coordinate_sizer = wx.StaticBoxSizer(wx.StaticBox(\n self.side_panel, wx.ID_ANY, \"Coordinate System\"), 
wx.VERTICAL)\n coordinate_sizer.Add(self.coordinates_box, .5, wx.EXPAND)\n side_bar_sizer.Add(coordinate_sizer, .5, wx.ALIGN_TOP | wx.EXPAND)\n side_bar_sizer.Add(wx.StaticLine(self.side_panel), .5,\n wx.ALL | wx.EXPAND, side_bar_v_space)\n\n zijderveld_option_sizer = wx.StaticBoxSizer(wx.StaticBox(\n self.side_panel, wx.ID_ANY, \"Zijderveld Plot Options\"), wx.VERTICAL)\n zijderveld_option_sizer.Add(self.orthogonal_box, 1, wx.EXPAND)\n side_bar_sizer.Add(zijderveld_option_sizer, .5,\n wx.ALIGN_TOP | wx.EXPAND)\n\n side_bar_sizer.Add(self.logger, proportion=1,\n flag=wx.ALIGN_TOP | wx.TOP | wx.EXPAND, border=8)\n\n # Mean Stats and button Sizer-----------------------------------------\n stats_and_button_sizer = wx.BoxSizer(wx.HORIZONTAL)\n stats_and_button_sizer.AddMany([(self.stats_sizer, 1, wx.ALIGN_LEFT | wx.EXPAND),\n (self.switch_stats_button, .3, wx.ALIGN_RIGHT | wx.EXPAND)])\n\n # EQ area MM0 and stats sizer-----------------------------------------\n eqarea_MM0_stats_sizer = wx.GridSizer(2, 2, 0, 0)\n eqarea_MM0_stats_sizer.AddMany([(self.canvas2, 1, wx.ALIGN_LEFT | wx.EXPAND),\n (self.canvas4, 1, wx.ALIGN_RIGHT | wx.EXPAND),\n (self.canvas3, 1, wx.ALIGN_LEFT | wx.EXPAND),\n (stats_and_button_sizer, 1, wx.ALIGN_RIGHT | wx.EXPAND)])\n\n # Plots and Stats Sizer-----------------------------------------------\n full_plots_sizer = wx.BoxSizer(wx.HORIZONTAL)\n full_plots_sizer.Add(self.canvas1, 1, wx.ALIGN_LEFT | wx.EXPAND)\n full_plots_sizer.Add(eqarea_MM0_stats_sizer, 1.5,\n wx.ALIGN_RIGHT | wx.EXPAND)\n\n self.panel.SetSizerAndFit(top_bar_sizer)\n self.side_panel.SetSizerAndFit(side_bar_sizer)\n self.scrolled_panel.SetSizer(full_plots_sizer)\n\n # Outer Sizer---------------------------------------------------------\n add_side_bar_sizer = wx.BoxSizer(wx.HORIZONTAL)\n add_side_bar_sizer.Add(self.side_panel, 1, wx.ALIGN_LEFT | wx.EXPAND)\n add_side_bar_sizer.Add(self.scrolled_panel, 5,\n wx.ALIGN_RIGHT | wx.EXPAND)\n\n outersizer = wx.BoxSizer(wx.VERTICAL)\n 
outersizer.Add(self.panel, .2, wx.ALIGN_TOP | wx.EXPAND)\n outersizer.Add(add_side_bar_sizer, 1, wx.ALIGN_BOTTOM | wx.EXPAND)\n\n self.SetSizer(outersizer)\n outersizer.Fit(self)\n\n self.GUI_SIZE = self.GetSize()", "response": "Initializes the main GUI: creates the figures and canvases, the side-bar and top-bar controls, and lays them out with sizers."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the available coordinate system.", "response": "def get_coordinate_system(self):\n \"\"\"\n Check self.Data for available coordinate systems.\n\n Returns\n ---------\n initial_coordinate, coordinate_list : str, list\n i.e., 'geographic', ['specimen', 'geographic']\n \"\"\"\n coordinate_list = ['specimen']\n initial_coordinate = 'specimen'\n for specimen in self.specimens:\n if 'geographic' not in coordinate_list and self.Data[specimen]['zijdblock_geo']:\n coordinate_list.append('geographic')\n initial_coordinate = 'geographic'\n if 'tilt-corrected' not in coordinate_list and self.Data[specimen]['zijdblock_tilt']:\n coordinate_list.append('tilt-corrected')\n return initial_coordinate, coordinate_list"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_menu(self):\n self.menubar = wx.MenuBar()\n\n # -----------------\n # File Menu\n # -----------------\n\n menu_file = wx.Menu()\n\n m_change_WD = menu_file.Append(-1,\n \"Change Working Directory\\tCtrl-W\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_change_working_directory, m_change_WD)\n\n\n\n m_import_meas_file = menu_file.Append(-1,\n \"Change measurements file\", \"\")\n\n self.Bind(wx.EVT_MENU, self.on_menu_import_meas_file, m_import_meas_file)\n\n m_import_LSQ = menu_file.Append(-1,\n \"&Import Interpretations from LSQ file\\tCtrl-L\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_read_from_LSQ, m_import_LSQ)\n\n m_previous_interpretation = menu_file.Append(\n -1, \"&Import interpretations from a redo file\\tCtrl-R\", \"\")\n self.Bind(wx.EVT_MENU, 
self.on_menu_previous_interpretation,\n m_previous_interpretation)\n\n m_save_interpretation = menu_file.Append(\n -1, \"&Save interpretations to a redo file\\tCtrl-S\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_save_interpretation,\n m_save_interpretation)\n\n m_make_MagIC_results_tables = menu_file.Append(\n -1, \"&Save MagIC tables\\tCtrl-Shift-S\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_make_MagIC_results_tables,\n m_make_MagIC_results_tables)\n\n submenu_save_plots = wx.Menu()\n\n m_save_zij_plot = submenu_save_plots.Append(\n -1, \"&Save Zijderveld plot\", \"\")\n self.Bind(wx.EVT_MENU, self.on_save_Zij_plot, m_save_zij_plot, \"Zij\")\n\n m_save_eq_plot = submenu_save_plots.Append(\n -1, \"&Save specimen equal area plot\", \"\")\n self.Bind(wx.EVT_MENU, self.on_save_Eq_plot,\n m_save_eq_plot, \"specimen-Eq\")\n\n m_save_M_t_plot = submenu_save_plots.Append(-1, \"&Save M-t plot\", \"\")\n self.Bind(wx.EVT_MENU, self.on_save_M_t_plot, m_save_M_t_plot, \"M_t\")\n\n m_save_high_level = submenu_save_plots.Append(\n -1, \"&Save high level plot\", \"\")\n self.Bind(wx.EVT_MENU, self.on_save_high_level,\n m_save_high_level, \"Eq\")\n\n m_save_all_plots = submenu_save_plots.Append(-1, \"&Save all plots\", \"\")\n self.Bind(wx.EVT_MENU, self.on_save_all_figures, m_save_all_plots)\n\n m_new_sub_plots = menu_file.AppendSubMenu(submenu_save_plots, \"&Save plot\")\n\n menu_file.AppendSeparator()\n m_exit = menu_file.Append(-1, \"E&xit\\tCtrl-Q\", \"Exit\")\n self.Bind(wx.EVT_MENU, self.on_menu_exit, m_exit)\n\n # -----------------\n # Edit Menu\n # -----------------\n\n menu_edit = wx.Menu()\n\n m_new = menu_edit.Append(-1, \"&New interpretation\\tCtrl-N\", \"\")\n self.Bind(wx.EVT_MENU, self.on_btn_add_fit, m_new)\n\n m_delete = menu_edit.Append(-1, \"&Delete interpretation\\tCtrl-D\", \"\")\n self.Bind(wx.EVT_MENU, self.on_btn_delete_fit, m_delete)\n\n m_next_interp = menu_edit.Append(-1,\n \"&Next interpretation\\tCtrl-Up\", \"\")\n self.Bind(wx.EVT_MENU, 
self.on_menu_next_interp, m_next_interp)\n\n m_previous_interp = menu_edit.Append(-1,\n \"&Previous interpretation\\tCtrl-Down\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_prev_interp, m_previous_interp)\n\n m_next_specimen = menu_edit.Append(-1,\n \"&Next Specimen\\tCtrl-Right\", \"\")\n self.Bind(wx.EVT_MENU, self.on_next_button, m_next_specimen)\n\n m_previous_specimen = menu_edit.Append(-1,\n \"&Previous Specimen\\tCtrl-Left\", \"\")\n self.Bind(wx.EVT_MENU, self.on_prev_button, m_previous_specimen)\n\n menu_flag_meas = wx.Menu()\n\n m_good = menu_flag_meas.Append(-1, \"&Good Measurement\\tCtrl-Alt-G\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_flag_meas_good, m_good)\n m_bad = menu_flag_meas.Append(-1, \"&Bad Measurement\\tCtrl-Alt-B\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_flag_meas_bad, m_bad)\n\n m_flag_meas = menu_edit.AppendSubMenu(menu_flag_meas, \"&Flag Measurement Data\")\n\n menu_coordinates = wx.Menu()\n\n m_speci = menu_coordinates.Append(-1,\n \"&Specimen Coordinates\\tCtrl-P\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_change_speci_coord, m_speci)\n if \"geographic\" in self.coordinate_list:\n m_geo = menu_coordinates.Append(-1,\n \"&Geographic Coordinates\\tCtrl-G\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_change_geo_coord, m_geo)\n if \"tilt-corrected\" in self.coordinate_list:\n m_tilt = menu_coordinates.Append(-1,\n \"&Tilt-Corrected Coordinates\\tCtrl-T\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_change_tilt_coord, m_tilt)\n\n m_coords = menu_edit.AppendSubMenu(menu_coordinates, \"&Coordinate Systems\")\n\n # -----------------\n # Analysis Menu\n # -----------------\n\n menu_Analysis = wx.Menu()\n\n submenu_criteria = wx.Menu()\n\n m_change_criteria_file = submenu_criteria.Append(\n -1, \"&Change acceptance criteria\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_change_criteria,\n m_change_criteria_file)\n\n m_import_criteria_file = submenu_criteria.Append(\n -1, \"&Import criteria file\", \"\")\n self.Bind(wx.EVT_MENU, 
self.on_menu_criteria_file,\n m_import_criteria_file)\n\n m_new_sub = menu_Analysis.AppendSubMenu(submenu_criteria, \"Acceptance criteria\")\n\n menu_flag_fit = wx.Menu()\n\n m_good_fit = menu_flag_fit.Append(-1,\n \"&Good Interpretation\\tCtrl-Shift-G\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_flag_fit_good, m_good_fit)\n m_bad_fit = menu_flag_fit.Append(-1,\n \"&Bad Interpretation\\tCtrl-Shift-B\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_flag_fit_bad, m_bad_fit)\n\n m_flag_fit = menu_Analysis.AppendSubMenu(menu_flag_fit, \"&Flag Interpretations\")\n\n submenu_sample_check = wx.Menu()\n\n m_check_orient = submenu_sample_check.Append(\n -1, \"&Check Sample Orientations\\tCtrl-O\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_check_orient, m_check_orient)\n\n m_mark_samp_bad = submenu_sample_check.Append(\n -1, \"&Mark Sample Bad\\tCtrl-.\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_mark_samp_bad, m_mark_samp_bad)\n\n m_mark_samp_good = submenu_sample_check.Append(\n -1, \"&Mark Sample Good\\tCtrl-,\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_mark_samp_good, m_mark_samp_good)\n\n m_submenu = menu_Analysis.AppendSubMenu(submenu_sample_check, \"Sample Orientation\")\n\n submenu_toggle_mean_display = wx.Menu()\n\n lines = [\"m_%s_toggle_mean = submenu_toggle_mean_display.AppendCheckItem(-1, '&%s', ''); self.Bind(wx.EVT_MENU, self.on_menu_toggle_mean, m_%s_toggle_mean)\" % (\n f, f) for f in self.all_fits_list]\n\n for line in lines:\n exec(line)\n\n menu_Analysis.AppendSubMenu(submenu_toggle_mean_display, \"Toggle Mean Display\")\n\n\n # -----------------\n # Tools Menu\n # -----------------\n\n menu_Tools = wx.Menu()\n\n# m_auto_interpret = menu_Tools.Append(-1, \"&Auto interpret (alpha version)\\tCtrl-A\", \"\")\n# self.Bind(wx.EVT_MENU, self.autointerpret, m_auto_interpret)\n\n m_edit_interpretations = menu_Tools.Append(\n -1, \"&Interpretation editor\\tCtrl-E\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_edit_interpretations,\n 
m_edit_interpretations)\n\n m_view_VGP = menu_Tools.Append(-1, \"&View VGPs\\tCtrl-Shift-V\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_view_vgps, m_view_VGP)\n\n # -----------------\n # Help Menu\n # -----------------\n\n menu_Help = wx.Menu()\n\n m_help = menu_Help.Append(-1, \"&Usage and Tips\\tCtrl-H\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_help, m_help)\n\n m_cookbook = menu_Help.Append(-1, \"&PmagPy Cookbook\\tCtrl-Shift-W\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_cookbook, m_cookbook)\n\n m_docs = menu_Help.Append(-1, \"&Open Docs\\tCtrl-Shift-H\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_docs, m_docs)\n\n m_git = menu_Help.Append(-1, \"&Github Page\\tCtrl-Shift-G\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_git, m_git)\n\n m_debug = menu_Help.Append(-1, \"&Open Debugger\\tCtrl-Shift-D\", \"\")\n self.Bind(wx.EVT_MENU, self.on_menu_debug, m_debug)\n\n # -----------------\n\n #self.menubar.Append(menu_preferences, \"& Preferences\")\n self.menubar.Append(menu_file, \"&File\")\n self.menubar.Append(menu_edit, \"&Edit\")\n self.menubar.Append(menu_Analysis, \"&Analysis\")\n self.menubar.Append(menu_Tools, \"&Tools\")\n self.menubar.Append(menu_Help, \"&Help\")\n #self.menubar.Append(menu_Plot, \"&Plot\")\n #self.menubar.Append(menu_results_table, \"&Table\")\n #self.menubar.Append(menu_MagIC, \"&MagIC\")\n self.SetMenuBar(self.menubar)", "response": "Create the menu bar for the current GUI structure."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef draw_figure(self, s, update_high_plots=True):\n self.initialize_CART_rot(s)\n\n # Draw Zij plot\n self.draw_zijderveld()\n\n # Draw specimen equal area\n self.draw_spec_eqarea()\n\n # Draw M/M0 plot (or NLT data on the same area in the GUI)\n self.draw_MM0()\n\n # If measurements are selected redisplay selected data\n if len(self.selected_meas) > 0:\n self.plot_selected_meas()\n\n # Draw high level equal area\n if update_high_plots:\n
self.plot_high_levels_data()\n self.canvas4.draw()", "response": "This method draws the current figure for the current specimen and updates all plots."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef draw_zijderveld(self):\n self.fig1.clf()\n axis_bounds = [0, .1, 1, .85]\n self.zijplot = self.fig1.add_axes(\n axis_bounds, frameon=False, facecolor='None', label='zig_orig', zorder=0)\n self.zijplot.clear()\n self.zijplot.axis('equal')\n self.zijplot.xaxis.set_visible(False)\n self.zijplot.yaxis.set_visible(False)\n\n self.MS = 6*self.GUI_RESOLUTION\n self.dec_MEC = 'k'\n self.dec_MFC = 'r'\n self.inc_MEC = 'k'\n self.inc_MFC = 'b'\n self.MS = 6*self.GUI_RESOLUTION\n self.zijdblock_steps = self.Data[self.s]['zijdblock_steps']\n self.vds = self.Data[self.s]['vds']\n\n self.zijplot.plot(self.CART_rot_good[:, 0], -1*self.CART_rot_good[:, 1], 'ro-',\n markersize=self.MS, clip_on=False, picker=True, zorder=1) # x,y or N,E\n self.zijplot.plot(self.CART_rot_good[:, 0], -1*self.CART_rot_good[:, 2], 'bs-',\n markersize=self.MS, clip_on=False, picker=True, zorder=1) # x-z or N,D\n\n for i in range(len(self.CART_rot_bad)):\n self.zijplot.plot(self.CART_rot_bad[:, 0][i], -1 * self.CART_rot_bad[:, 1][i], 'o', mfc='None',\n mec=self.dec_MEC, markersize=self.MS, clip_on=False, picker=False) # x,y or N,E\n self.zijplot.plot(self.CART_rot_bad[:, 0][i], -1 * self.CART_rot_bad[:, 2][i], 's', mfc='None',\n mec=self.inc_MEC, markersize=self.MS, clip_on=False, picker=False) # x-z or N,D\n\n if self.preferences['show_Zij_treatments']:\n for i in range(len(self.zijdblock_steps)):\n if int(self.preferences['show_Zij_treatments_steps']) != 1:\n if i != 0 and (i+1) % int(self.preferences['show_Zij_treatments_steps']) == 0:\n self.zijplot.text(self.CART_rot[i][0], -1*self.CART_rot[i][2], \" %s\" % (\n self.zijdblock_steps[i]), fontsize=8*self.GUI_RESOLUTION, color='gray', ha='left', va='center') # inc\n else:\n 
self.zijplot.text(self.CART_rot[i][0], -1*self.CART_rot[i][2], \" %s\" % (\n self.zijdblock_steps[i]), fontsize=10*self.GUI_RESOLUTION, color='gray', ha='left', va='center') # inc\n\n # -----\n\n xmin, xmax = self.zijplot.get_xlim()\n if xmax < 0:\n xmax = 0\n if xmin > 0:\n xmin = 0\n # else:\n # xmin=xmin+xmin%0.2\n\n props = dict(color='black', linewidth=1.0, markeredgewidth=0.5)\n\n xlocs = array(list(arange(0.2, xmax, 0.2)) +\n list(arange(-0.2, xmin, -0.2)))\n if len(xlocs) > 0:\n xtickline, = self.zijplot.plot(\n xlocs, [0]*len(xlocs), linestyle='', marker='+', **props)\n xtickline.set_clip_on(False)\n\n axxline, = self.zijplot.plot([xmin, xmax], [0, 0], **props)\n axxline.set_clip_on(False)\n\n TEXT = \"\"\n if self.COORDINATE_SYSTEM == 'specimen':\n self.zijplot.text(xmax, 0, ' x', fontsize=10,\n verticalalignment='bottom')\n else:\n if self.ORTHO_PLOT_TYPE == 'N-S':\n TEXT = \" N\"\n elif self.ORTHO_PLOT_TYPE == 'E-W':\n TEXT = \" E\"\n else:\n TEXT = \" x\"\n self.zijplot.text(xmax, 0, TEXT, fontsize=10,\n verticalalignment='bottom')\n\n # -----\n\n ymin, ymax = self.zijplot.get_ylim()\n if ymax < 0:\n ymax = 0\n if ymin > 0:\n ymin = 0\n\n ylocs = array(list(arange(0.2, ymax, 0.2)) +\n list(arange(-0.2, ymin, -0.2)))\n if len(ylocs) > 0:\n ytickline, = self.zijplot.plot(\n [0]*len(ylocs), ylocs, linestyle='', marker='+', **props)\n ytickline.set_clip_on(False)\n\n axyline, = self.zijplot.plot([0, 0], [ymin, ymax], **props)\n axyline.set_clip_on(False)\n\n TEXT1, TEXT2 = \"\", \"\"\n if self.COORDINATE_SYSTEM == 'specimen':\n TEXT1, TEXT2 = \" y\", \" z\"\n else:\n if self.ORTHO_PLOT_TYPE == 'N-S':\n TEXT1, TEXT2 = \" E\", \" D\"\n elif self.ORTHO_PLOT_TYPE == 'E-W':\n TEXT1, TEXT2 = \" S\", \" D\"\n else:\n TEXT1, TEXT2 = \" y\", \" z\"\n self.zijplot.text(0, ymin, TEXT1, fontsize=10,\n color='r', verticalalignment='top')\n self.zijplot.text(0, ymin, ' ,', fontsize=10,\n color='k', verticalalignment='top')\n self.zijplot.text(0, ymin, TEXT2, 
fontsize=10,\n color='b', verticalalignment='top')\n\n # ----\n\n if self.ORTHO_PLOT_TYPE == 'N-S':\n STRING = \"\"\n #STRING1=\"N-S orthogonal plot\"\n self.fig1.text(0.01, 0.98, \"Zijderveld plot: x = North\", {\n 'family': self.font_type, 'fontsize': 10*self.GUI_RESOLUTION, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n elif self.ORTHO_PLOT_TYPE == 'E-W':\n STRING = \"\"\n #STRING1=\"E-W orthogonal plot\"\n self.fig1.text(0.01, 0.98, \"Zijderveld plot: x = East\", {\n 'family': self.font_type, 'fontsize': 10*self.GUI_RESOLUTION, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n\n elif self.ORTHO_PLOT_TYPE == 'PCA_dec':\n self.fig1.text(0.01, 0.98, \"Zijderveld plot\", {\n 'family': self.font_type, 'fontsize': 10*self.GUI_RESOLUTION, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n if 'specimen_dec' in list(self.current_fit.pars.keys()) and type(self.current_fit.pars['specimen_dec']) != str:\n STRING = \"X-axis rotated to best fit line declination (%.0f); \" % (\n self.current_fit.pars['specimen_dec'])\n else:\n STRING = \"X-axis rotated to NRM (%.0f); \" % (\n self.zijblock[0][1])\n else:\n self.fig1.text(0.01, 0.98, \"Zijderveld plot\", {\n 'family': self.font_type, 'fontsize': 10*self.GUI_RESOLUTION, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n STRING = \"X-axis rotated to NRM (%.0f); \" % (self.zijblock[0][1])\n #STRING1=\"Zijderveld plot\"\n\n STRING = STRING+\"NRM=%.2e \" % (self.zijblock[0][3]) + 'Am^2'\n self.fig1.text(0.01, 0.95, STRING, {'family': self.font_type, 'fontsize': 8 *\n self.GUI_RESOLUTION, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n\n xmin, xmax = self.zijplot.get_xlim()\n ymin, ymax = self.zijplot.get_ylim()\n\n self.zij_xlim_initial = (xmin, xmax)\n self.zij_ylim_initial = (ymin, ymax)\n\n self.canvas1.draw()", "response": "Draws the Zijderveld plot on canvas1"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalculating point positions and drawing the Specimen eqarea plot on canvas2",
"response": "def draw_spec_eqarea(self):\n \"\"\"\n Calculates point positions and draws the Specimen eqarea plot on\n canvas2\n \"\"\"\n draw_net(self.specimen_eqarea)\n self.specimen_eqarea.text(-1.2, 1.15, \"specimen: %s\" % self.s, {\n 'family': self.font_type, 'fontsize': 10*self.GUI_RESOLUTION, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n\n x_eq = array([row[0] for row in self.zij_norm])\n y_eq = array([row[1] for row in self.zij_norm])\n z_eq = abs(array([row[2] for row in self.zij_norm]))\n\n # remove bad data from plotting:\n x_eq_good, y_eq_good, z_eq_good = [], [], []\n x_eq_bad, y_eq_bad, z_eq_bad = [], [], []\n for i in range(len(list(self.zij_norm))):\n if self.Data[self.s]['measurement_flag'][i] == 'g':\n x_eq_good.append(self.zij_norm[i][0])\n y_eq_good.append(self.zij_norm[i][1])\n z_eq_good.append(abs(self.zij_norm[i][2]))\n else:\n x_eq_bad.append(self.zij_norm[i][0])\n y_eq_bad.append(self.zij_norm[i][1])\n z_eq_bad.append(abs(self.zij_norm[i][2]))\n\n x_eq_good, y_eq_good, z_eq_good = array(\n x_eq_good), array(y_eq_good), array(z_eq_good)\n x_eq_bad, y_eq_bad, z_eq_bad = array(\n x_eq_bad), array(y_eq_bad), array(z_eq_bad)\n\n R_good = array(sqrt(1-z_eq_good)/sqrt(x_eq_good**2 +\n y_eq_good**2)) # from Collinson 1983\n # from Collinson 1983\n R_bad = array(sqrt(1-z_eq_bad)/sqrt(x_eq_bad**2+y_eq_bad**2))\n\n eqarea_data_x_good = y_eq_good*R_good\n eqarea_data_y_good = x_eq_good*R_good\n\n eqarea_data_x_bad = y_eq_bad*R_bad\n eqarea_data_y_bad = x_eq_bad*R_bad\n\n self.specimen_eqarea.plot(\n eqarea_data_x_good, eqarea_data_y_good, lw=0.5, color='gray') # ,zorder=0)\n\n # --------------------\n # scatter plot\n # --------------------\n\n x_eq_dn, y_eq_dn, z_eq_dn, eq_dn_temperatures = [], [], [], []\n x_eq_dn = array([row[0] for row in self.zij_norm if row[2] > 0])\n y_eq_dn = array([row[1] for row in self.zij_norm if row[2] > 0])\n z_eq_dn = abs(array([row[2] for row in self.zij_norm if row[2] > 0]))\n\n if len(x_eq_dn) > 0:\n # from 
Collinson 1983\n R = array(sqrt(1-z_eq_dn)/sqrt(x_eq_dn**2+y_eq_dn**2))\n eqarea_data_x_dn = y_eq_dn*R\n eqarea_data_y_dn = x_eq_dn*R\n self.specimen_eqarea.scatter([eqarea_data_x_dn], [eqarea_data_y_dn], marker='o',\n edgecolor='black', facecolor=\"#808080\", s=15*self.GUI_RESOLUTION, lw=1, clip_on=False)\n\n x_eq_up, y_eq_up, z_eq_up = [], [], []\n x_eq_up = array([row[0] for row in self.zij_norm if row[2] <= 0])\n y_eq_up = array([row[1] for row in self.zij_norm if row[2] <= 0])\n z_eq_up = abs(array([row[2] for row in self.zij_norm if row[2] <= 0]))\n if len(x_eq_up) > 0:\n # from Collinson 1983\n R = array(sqrt(1-z_eq_up)/sqrt(x_eq_up**2+y_eq_up**2))\n eqarea_data_x_up = y_eq_up*R\n eqarea_data_y_up = x_eq_up*R\n self.specimen_eqarea.scatter([eqarea_data_x_up], [eqarea_data_y_up], marker='o',\n edgecolor='black', facecolor=\"#FFFFFF\", s=15*self.GUI_RESOLUTION, lw=1, clip_on=False)\n\n # self.preferences['show_eqarea_treatments']=True\n if self.preferences['show_eqarea_treatments']:\n for i in range(len(self.zijdblock_steps)):\n self.specimen_eqarea.text(eqarea_data_x[i], eqarea_data_y[i], \"%.1f\" % float(\n self.zijdblock_steps[i]), fontsize=8*self.GUI_RESOLUTION, color=\"0.5\")\n\n # add line to show the direction of the x axis in the Zijderveld plot\n\n if str(self.orthogonal_box.GetValue()) in [\"X=best fit line dec\", \"X=NRM dec\"]:\n XY = []\n if str(self.orthogonal_box.GetValue()) == \"X=NRM dec\":\n dec_zij = self.zijblock[0][1]\n XY = pmag.dimap(dec_zij, 0)\n if str(self.orthogonal_box.GetValue()) == \"X=best fit line dec\":\n if 'specimen_dec' in list(self.current_fit.pars.keys()) and type(self.current_fit.pars['specimen_dec']) != str:\n dec_zij = self.current_fit.pars['specimen_dec']\n XY = pmag.dimap(dec_zij, 0)\n if XY != []:\n self.specimen_eqarea.plot(\n [0, XY[0]], [0, XY[1]], ls='-', c='gray', lw=0.5) # ,zorder=0)\n\n self.canvas2.draw()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndraws the M 
and M0 plot on canvas3.", "response": "def draw_MM0(self):\n \"\"\"\n Draws the M/M0 plot in the GUI on canvas3\n \"\"\"\n self.fig3.clf()\n self.fig3.text(0.02, 0.96, 'M/M0', {'family': self.font_type, 'fontsize': 10 *\n self.GUI_RESOLUTION, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n self.mplot = self.fig3.add_axes(\n [0.2, 0.15, 0.7, 0.7], frameon=True, facecolor='None')\n\n thermal_x, thermal_y = [], []\n thermal_x_bad, thermal_y_bad = [], []\n af_x, af_y = [], []\n af_x_bad, af_y_bad = [], []\n for i in range(len(self.Data[self.s]['zijdblock'])):\n step = self.Data[self.s]['zijdblock_steps'][i]\n # bad point\n if self.Data[self.s]['measurement_flag'][i] == 'b':\n if step == \"0\":\n thermal_x_bad.append(self.Data[self.s]['zijdblock'][i][0])\n af_x_bad.append(self.Data[self.s]['zijdblock'][i][0])\n thermal_y_bad.append(\n self.Data[self.s]['zijdblock'][i][3]/self.Data[self.s]['zijdblock'][0][3])\n af_y_bad.append(\n self.Data[self.s]['zijdblock'][i][3]/self.Data[self.s]['zijdblock'][0][3])\n elif \"C\" in step:\n thermal_x_bad.append(self.Data[self.s]['zijdblock'][i][0])\n thermal_y_bad.append(\n self.Data[self.s]['zijdblock'][i][3]/self.Data[self.s]['zijdblock'][0][3])\n elif \"T\" in step:\n af_x_bad.append(self.Data[self.s]['zijdblock'][i][0])\n af_y_bad.append(\n self.Data[self.s]['zijdblock'][i][3]/self.Data[self.s]['zijdblock'][0][3])\n else:\n continue\n\n else:\n if step == \"0\":\n thermal_x.append(self.Data[self.s]['zijdblock'][i][0])\n af_x.append(self.Data[self.s]['zijdblock'][i][0])\n thermal_y.append(\n self.Data[self.s]['zijdblock'][i][3]/self.Data[self.s]['zijdblock'][0][3])\n af_y.append(self.Data[self.s]['zijdblock'][i]\n [3]/self.Data[self.s]['zijdblock'][0][3])\n elif \"C\" in step:\n thermal_x.append(self.Data[self.s]['zijdblock'][i][0])\n thermal_y.append(\n self.Data[self.s]['zijdblock'][i][3]/self.Data[self.s]['zijdblock'][0][3])\n elif \"T\" in step:\n af_x.append(self.Data[self.s]['zijdblock'][i][0])\n 
af_y.append(self.Data[self.s]['zijdblock'][i]\n [3]/self.Data[self.s]['zijdblock'][0][3])\n else:\n continue\n\n if len(thermal_x)+len(thermal_x_bad) > self.Data[self.s]['zijdblock_steps'].count('0'):\n self.mplot.plot(thermal_x, thermal_y, 'ro-',\n markersize=self.MS, lw=1, clip_on=False, zorder=1)\n for i in range(len(thermal_x_bad)):\n self.mplot.plot([thermal_x_bad[i]], [thermal_y_bad[i]], 'o',\n mfc='None', mec='k', markersize=self.MS, clip_on=False, zorder=1)\n\n self.mplot.set_xlabel('Thermal (C)', color='r')\n for tl in self.mplot.get_xticklabels():\n tl.set_color('r')\n\n self.mplot_af = self.mplot.twiny()\n if len(af_x)+len(af_x_bad) > self.Data[self.s]['zijdblock_steps'].count('0'):\n self.mplot_af.plot(\n af_x, af_y, 'bo-', markersize=self.MS, lw=1, clip_on=False, zorder=1)\n for i in range(len(af_x_bad)):\n self.mplot_af.plot([af_x_bad[i]], [\n af_y_bad[i]], 'o', mfc='None', mec='k', markersize=self.MS, clip_on=False, zorder=1)\n\n self.mplot_af.set_xlabel('AF (mT)', color='b')\n for tl in self.mplot_af.get_xticklabels():\n tl.set_color('b')\n\n self.mplot.tick_params(axis='both', which='major', labelsize=7)\n self.mplot_af.tick_params(axis='both', which='major', labelsize=7)\n self.mplot.spines[\"right\"].set_visible(False)\n self.mplot_af.spines[\"right\"].set_visible(False)\n self.mplot.get_xaxis().tick_bottom()\n self.mplot.get_yaxis().tick_left()\n self.mplot.set_ylabel(\"M / NRM0\", fontsize=8*self.GUI_RESOLUTION)\n\n self.canvas3.draw()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef plot_selected_meas(self):\n self.close_warning = True\n # set hex colors for cover and size of selected meas marker\n blue_cover = \"#9999FF\"\n red_cover = \"#FF9999\"\n eqarea_outline = \"#FF0000\"\n MS_selected = 40\n\n # remove old selected points\n for a in self.selected_meas_artists:\n if a in self.zijplot.collections:\n self.zijplot.collections.remove(a)\n if a in 
self.specimen_eqarea.collections:\n self.specimen_eqarea.collections.remove(a)\n if a in self.mplot.collections:\n self.mplot.collections.remove(a)\n if a in self.mplot_af.collections:\n self.mplot_af.collections.remove(a)\n\n # do zijderveld plot\n self.selected_meas_artists = []\n x, y, z = self.CART_rot[self.selected_meas,\n 0], self.CART_rot[self.selected_meas, 1], self.CART_rot[self.selected_meas, 2]\n self.selected_meas_artists.append(self.zijplot.scatter(\n x, -1*y, c=red_cover, marker='o', s=MS_selected, zorder=2))\n self.selected_meas_artists.append(self.zijplot.scatter(\n x, -1*z, c=blue_cover, marker='s', s=MS_selected, zorder=2))\n\n # do down data for eqarea\n x_eq = array([row[0] for i, row in enumerate(self.zij_norm)\n if i in self.selected_meas and row[2] > 0])\n y_eq = array([row[1] for i, row in enumerate(self.zij_norm)\n if i in self.selected_meas and row[2] > 0])\n z_eq = abs(array([row[2] for i, row in enumerate(\n self.zij_norm) if i in self.selected_meas and row[2] > 0]))\n if len(x_eq) > 0:\n # from Collinson 1983\n R = array(sqrt(1-z_eq)/sqrt(x_eq**2+y_eq**2))\n eqarea_data_x = y_eq*R\n eqarea_data_y = x_eq*R\n self.selected_meas_artists.append(self.specimen_eqarea.scatter([eqarea_data_x], [\n eqarea_data_y], marker='o', edgecolor=eqarea_outline, facecolor=\"#808080\", s=15*self.GUI_RESOLUTION, lw=1, clip_on=False))\n\n # do up data for eqarea\n x_eq = array([row[0] for i, row in enumerate(self.zij_norm)\n if i in self.selected_meas and row[2] < 0])\n y_eq = array([row[1] for i, row in enumerate(self.zij_norm)\n if i in self.selected_meas and row[2] < 0])\n z_eq = abs(array([row[2] for i, row in enumerate(\n self.zij_norm) if i in self.selected_meas and row[2] < 0]))\n if len(x_eq) > 0:\n # from Collinson 1983\n R = array(sqrt(1-z_eq)/sqrt(x_eq**2+y_eq**2))\n eqarea_data_x = y_eq*R\n eqarea_data_y = x_eq*R\n self.selected_meas_artists.append(self.specimen_eqarea.scatter([eqarea_data_x], [\n eqarea_data_y], marker='o', 
edgecolor=eqarea_outline, facecolor=\"#FFFFFF\", s=15*self.GUI_RESOLUTION, lw=1, clip_on=False))\n\n # do M/M0 plot\n steps = self.Data[self.s]['zijdblock_steps']\n flags = self.Data[self.s]['measurement_flag']\n selected_af_meas = [\n i for i in self.selected_meas if \"T\" in steps[i] or steps[i] == \"0\" and flags[i] != \"b\"]\n selected_T_meas = [i for i in self.selected_meas if \"C\" in steps[i]\n or steps[i] == \"0\" and flags[i] != \"b\"]\n data = array(self.Data[self.s]['zijdblock'])\n af_x = array(list(map(float, data[selected_af_meas, 0])))\n af_y = array(\n list(map(float, data[selected_af_meas, 3])))/float(data[0, 3])\n T_x = array(list(map(float, data[selected_T_meas, 0])))\n T_y = array(\n list(map(float, data[selected_T_meas, 3])))/float(data[0, 3])\n\n xmin, xmax = self.mplot.get_xlim()\n ymin, ymax = self.mplot.get_ylim()\n if T_x.astype(float).any() or T_y.astype(float).any():\n self.selected_meas_artists.append(self.mplot.scatter(\n T_x, T_y, facecolor=red_cover, edgecolor=\"#000000\", marker='o', s=30, lw=1, clip_on=False, zorder=3))\n self.mplot.set_xlim([xmin, xmax])\n self.mplot.set_ylim([ymin, ymax])\n\n xmin, xmax = self.mplot_af.get_xlim()\n ymin, ymax = self.mplot_af.get_ylim()\n if af_x.astype(float).any() or af_y.astype(float).any():\n self.selected_meas_artists.append(self.mplot_af.scatter(\n af_x, af_y, facecolor=blue_cover, edgecolor=\"#000000\", marker='o', s=30, lw=1, clip_on=False, zorder=3))\n self.mplot_af.set_xlim([xmin, xmax])\n self.mplot_af.set_ylim([ymin, ymax])\n\n self.canvas1.draw()\n self.canvas2.draw()\n self.canvas3.draw()", "response": "Goes through all measurements selected in logger and draws darker and eqarea plots to display which measurements have been selected in specimen plots."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef draw_interpretations(self):\n\n problems = {}\n\n if self.s in self.pmag_results_data['specimens'] and \\\n 
self.pmag_results_data['specimens'][self.s] != []:\n # self.zijplot.collections=[] # delete fit points\n self.specimen_EA_xdata = [] # clear saved x positions on specimen equal area\n self.specimen_EA_ydata = [] # clear saved y positions on specimen equal area\n\n # check to see if there's a results log or not\n if not (self.s in list(self.pmag_results_data['specimens'].keys())):\n self.pmag_results_data['specimens'][self.s] = []\n\n for fit in self.pmag_results_data['specimens'][self.s]:\n\n pars = fit.get(self.COORDINATE_SYSTEM)\n\n if (fit.tmin == None or fit.tmax == None or not pars):\n if 'no bounds' not in list(problems.keys()):\n problems['no bounds'] = []\n problems['no bounds'].append(fit)\n continue\n\n for line in fit.lines:\n if line in self.zijplot.lines:\n self.zijplot.lines.remove(line)\n for point in fit.points:\n if point in self.zijplot.collections:\n self.zijplot.collections.remove(point)\n\n PCA_type = fit.PCA_type\n\n tmin_index, tmax_index = self.get_indices(fit)\n\n marker_shape = 'o'\n SIZE = 20\n if fit == self.current_fit:\n marker_shape = 'D'\n if pars['calculation_type'] == \"DE-BFP\":\n marker_shape = 's'\n if \"bfv\" in self.plane_display_box.GetValue():\n marker_shape = '>'\n if fit in self.bad_fits:\n marker_shape = (4, 1, 0)\n SIZE = 30*self.GUI_RESOLUTION\n\n # Zijderveld plot\n\n ymin, ymax = self.zijplot.get_ylim()\n xmin, xmax = self.zijplot.get_xlim()\n\n for i in range(1):\n if (len(self.CART_rot[:, i]) <= tmin_index or\n len(self.CART_rot[:, i]) <= tmax_index):\n self.Add_text()\n\n self.zijplot.scatter([self.CART_rot[:, 0][tmin_index], self.CART_rot[:, 0][tmax_index]], [-1 * self.CART_rot[:, 1][tmin_index], -\n 1 * self.CART_rot[:, 1][tmax_index]], marker=marker_shape, s=40, facecolor=fit.color, edgecolor='k', zorder=100, clip_on=False)\n self.zijplot.scatter([self.CART_rot[:, 0][tmin_index], self.CART_rot[:, 0][tmax_index]], [-1 * self.CART_rot[:, 2][tmin_index], -\n 1 * self.CART_rot[:, 2][tmax_index]], 
marker=marker_shape, s=40, facecolor=fit.color, edgecolor='k', zorder=100, clip_on=False)\n fit.points[0] = self.zijplot.collections[-1]\n fit.points[1] = self.zijplot.collections[-2]\n\n if pars['calculation_type'] in ['DE-BFL', 'DE-BFL-A', 'DE-BFL-O']:\n\n # rotated zijderveld\n if self.COORDINATE_SYSTEM == 'geographic' and len(self.Data[self.s]['zdata_geo']) > 0:\n first_data = self.Data[self.s]['zdata_geo'][0]\n elif self.COORDINATE_SYSTEM == 'tilt-corrected' and len(self.Data[self.s]['zdata_tilt']) > 0:\n first_data = self.Data[self.s]['zdata_tilt'][0]\n else:\n first_data = self.Data[self.s]['zdata'][0]\n if self.COORDINATE_SYSTEM != 'specimen':\n self.on_menu_change_speci_coord(-1)\n pars = fit.get(self.COORDINATE_SYSTEM)\n\n if self.ORTHO_PLOT_TYPE == 'N-S':\n rotation_declination = 0.\n elif self.ORTHO_PLOT_TYPE == 'E-W':\n rotation_declination = 90.\n elif self.ORTHO_PLOT_TYPE == 'PCA_dec':\n if 'specimen_dec' in list(pars.keys()) and type(pars['specimen_dec']) != str:\n rotation_declination = pars['specimen_dec']\n else:\n rotation_declination = pmag.cart2dir(first_data)[0]\n else: # Zijderveld\n rotation_declination = pmag.cart2dir(first_data)[0]\n\n PCA_dir = [pars['specimen_dec'], pars['specimen_inc'], 1]\n PCA_dir_rotated = [PCA_dir[0] -\n rotation_declination, PCA_dir[1], 1]\n PCA_CART_rotated = pmag.dir2cart(PCA_dir_rotated)\n\n slop_xy_PCA = -1*PCA_CART_rotated[1]/PCA_CART_rotated[0]\n slop_xz_PCA = -1*PCA_CART_rotated[2]/PCA_CART_rotated[0]\n\n # Center of mass rotated for plotting\n CM_x = mean(self.CART_rot_good[:, 0][tmin_index:tmax_index+1])\n CM_y = mean(self.CART_rot_good[:, 1][tmin_index:tmax_index+1])\n CM_z = mean(self.CART_rot_good[:, 2][tmin_index:tmax_index+1])\n\n # intercpet from the center of mass\n intercept_xy_PCA = -1*CM_y - slop_xy_PCA*CM_x\n intercept_xz_PCA = -1*CM_z - slop_xz_PCA*CM_x\n\n xx = array([self.CART_rot[:, 0][tmax_index],\n self.CART_rot[:, 0][tmin_index]])\n yy = slop_xy_PCA*xx+intercept_xy_PCA\n zz = 
slop_xz_PCA*xx+intercept_xz_PCA\n\n if (pars['calculation_type'] in ['DE-BFL-A']): # CHECK\n xx = [0.] + xx\n yy = [0.] + yy\n zz = [0.] + zz\n\n self.zijplot.plot(xx, yy, '-', color=fit.color,\n lw=3, alpha=0.5, zorder=0)\n self.zijplot.plot(xx, zz, '-', color=fit.color,\n lw=3, alpha=0.5, zorder=0)\n\n fit.lines[0] = self.zijplot.lines[-2]\n fit.lines[1] = self.zijplot.lines[-1]\n\n # Equal Area plot\n self.toolbar2.home()\n\n # delete old interpretation data\n for d in fit.eqarea_data:\n if d in self.specimen_eqarea.lines:\n self.specimen_eqarea.lines.remove(d)\n if d in self.specimen_eqarea.collections:\n self.specimen_eqarea.collections.remove(d)\n\n if pars['calculation_type'] == 'DE-BFP' and \\\n self.plane_display_box.GetValue() != \"poles\":\n\n # draw a best-fit plane\n ymin, ymax = self.specimen_eqarea.get_ylim()\n xmin, xmax = self.specimen_eqarea.get_xlim()\n\n D_c, I_c = pmag.circ(\n pars[\"specimen_dec\"], pars[\"specimen_inc\"], 90)\n X_c_up, Y_c_up = [], []\n X_c_d, Y_c_d = [], []\n for k in range(len(D_c)):\n XY = pmag.dimap(D_c[k], I_c[k])\n if I_c[k] < 0:\n X_c_up.append(XY[0])\n Y_c_up.append(XY[1])\n if I_c[k] > 0:\n X_c_d.append(XY[0])\n Y_c_d.append(XY[1])\n\n if self.plane_display_box.GetValue() == \"u. hemisphere\" or \\\n self.plane_display_box.GetValue() == \"whole plane\" or \\\n \"wp\" in self.plane_display_box.GetValue():\n self.specimen_eqarea.plot(X_c_d, Y_c_d, 'b')\n if self.plane_display_box.GetValue() == \"l. 
hemisphere\" or \\\n self.plane_display_box.GetValue() == \"whole plane\" or \\\n \"wp\" in self.plane_display_box.GetValue():\n self.specimen_eqarea.plot(X_c_up, Y_c_up, 'c')\n eqarea_x = XY[0]\n eqarea_y = XY[1]\n z = 1\n fit.eqarea_data[0] = self.specimen_eqarea.lines[-1]\n fit.eqarea_data[1] = self.specimen_eqarea.lines[-2]\n\n if \"bfv\" in self.plane_display_box.GetValue():\n if 'bfv_dec' not in list(pars.keys()) or 'bfv_inc' not in list(pars.keys()):\n self.calculate_best_fit_vectors()\n pars = fit.get(self.COORDINATE_SYSTEM)\n try:\n CART = pmag.dir2cart(\n [pars['bfv_dec'], pars['bfv_inc'], 1])\n x = CART[0]\n y = CART[1]\n z = CART[2]\n R = array(sqrt(1-abs(z))/sqrt(x**2+y**2))\n eqarea_x = y*R\n eqarea_y = x*R\n except KeyError:\n print((\"specimen %s fit %s is marked bad or there was an error calculating bfv pole will be displayed instead\" % (\n element, fit.name)))\n\n else:\n CART = pmag.dir2cart(\n [pars['specimen_dec'], pars['specimen_inc'], 1])\n x = CART[0]\n y = CART[1]\n z = CART[2]\n R = array(sqrt(1-abs(z))/sqrt(x**2+y**2))\n eqarea_x = y*R\n eqarea_y = x*R\n\n self.specimen_EA_xdata.append(eqarea_x)\n self.specimen_EA_ydata.append(eqarea_y)\n if z > 0:\n FC = fit.color\n EC = '0.1'\n else:\n FC = (1, 1, 1)\n EC = fit.color\n self.specimen_eqarea.scatter([eqarea_x], [\n eqarea_y], marker=marker_shape, edgecolor=EC, facecolor=FC, s=SIZE, lw=1, clip_on=False)\n fit.eqarea_data[0] = self.specimen_eqarea.collections[-1]\n\n # M/M0 plot (only if C or mT - not both)\n for d in fit.mm0_data:\n if d in self.mplot.collections:\n self.mplot.collections.remove(d)\n elif d in self.mplot_af.collections:\n self.mplot_af.collections.remove(d)\n temp_data_exists = any(\n ['C' in step for step in self.Data[self.s]['zijdblock_steps']])\n af_data_exists = any(\n ['T' in step for step in self.Data[self.s]['zijdblock_steps']])\n if \"C\" in fit.tmin and temp_data_exists:\n tmin_ax = self.mplot\n elif af_data_exists:\n tmin_ax = self.mplot_af\n else:\n tmin_ax = 
self.mplot\n if \"C\" in fit.tmax and temp_data_exists:\n tmax_ax = self.mplot\n elif af_data_exists:\n tmax_ax = self.mplot_af\n else:\n tmax_ax = self.mplot\n tmin_ymin, tmin_ymax = tmin_ax.get_ylim()\n tmin_xmin, tmin_xmax = tmin_ax.get_xlim()\n tmax_ymin, tmax_ymax = tmax_ax.get_ylim()\n tmax_xmin, tmax_xmax = tmax_ax.get_xlim()\n fit.mm0_data[0] = tmin_ax.scatter([self.Data[self.s]['zijdblock'][tmin_index][0]], [self.Data[self.s]['zijdblock'][tmin_index][3] /\n self.Data[self.s]['zijdblock'][0][3]], marker=marker_shape, s=30, facecolor=fit.color, edgecolor='k', zorder=10000, clip_on=False)\n fit.mm0_data[1] = tmax_ax.scatter([self.Data[self.s]['zijdblock'][tmax_index][0]], [self.Data[self.s]['zijdblock'][tmax_index][3] /\n self.Data[self.s]['zijdblock'][0][3]], marker=marker_shape, s=30, facecolor=fit.color, edgecolor='k', zorder=10000, clip_on=False)\n tmin_ax.set_xlim(tmin_xmin, tmin_xmax)\n tmin_ax.set_ylim(tmin_ymin, tmin_ymax)\n tmax_ax.set_xlim(tmax_xmin, tmax_xmax)\n tmax_ax.set_ylim(tmax_ymin, tmax_ymax)\n\n # logger\n if fit == self.current_fit:\n for item in range(self.logger.GetItemCount()):\n if item >= tmin_index and item <= tmax_index:\n self.logger.SetItemBackgroundColour(item, \"LIGHT BLUE\")\n else:\n self.logger.SetItemBackgroundColour(item, \"WHITE\")\n try:\n relability = self.Data[self.s]['measurement_flag'][item]\n except IndexError:\n relability = 'b'\n if relability == 'b':\n self.logger.SetItemBackgroundColour(item, \"red\")\n\n if problems != {}:\n if 'no bounds' in list(problems.keys()):\n text = \"Fits \"\n for problem in problems['no bounds']:\n text += fit.name + ' '\n text += \" for the current specimen are missing bounds and will not be displayed.\"\n\n self.canvas1.draw()\n self.canvas2.draw()\n self.canvas3.draw()", "response": "draw the interpretations on the specimen and the M - M0 plots"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the current level of the high level mean plot and the name of 
the corresponding site, study, etc.", "response": "def get_levels_and_coordinates_names(self):\n \"\"\"\n Get the current level of the high level mean plot and the name of\n the corresponding site, study, etc. As well as the code for the\n current coordinate system.\n\n Returns\n -------\n (high_level_type,high_level_name,coordinate_system) : tuple object\n containing current high level type, name, and coordinate system\n being analyzed\n \"\"\"\n if self.COORDINATE_SYSTEM == \"geographic\":\n dirtype = 'DA-DIR-GEO'\n elif self.COORDINATE_SYSTEM == \"tilt-corrected\":\n dirtype = 'DA-DIR-TILT'\n else:\n dirtype = 'DA-DIR'\n\n if self.level_box.GetValue() == 'sample':\n high_level_type = 'samples'\n if self.level_box.GetValue() == 'site':\n high_level_type = 'sites'\n if self.level_box.GetValue() == 'location':\n high_level_type = 'locations'\n if self.level_box.GetValue() == 'study':\n high_level_type = 'study'\n high_level_name = str(self.level_names.GetValue())\n return (high_level_type, high_level_name, dirtype)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nplot the wrong arrow, wrong compass, and rotated sample error directions for the current specimen when sample orientation checking is on", "response": "def calc_and_plot_sample_orient_check(self):\n \"\"\"\n If sample orientation checking is on, plots the wrong arrow, wrong compass,\n and rotated sample error directions for the current specimen\n interpretation on the high level mean plot so that you can check\n sample orientation good/bad.\n \"\"\"\n fit = self.current_fit\n if fit is None:\n return\n pars = fit.get('specimen')\n if 'specimen_dec' not in list(pars.keys()) or 'specimen_inc' not in list(pars.keys()):\n fit.put(self.s, 'specimen', self.get_PCA_parameters(\n self.s, fit, fit.tmin, fit.tmax, 'specimen', fit.PCA_type))\n pars = fit.get('specimen')\n if not pars:\n
self.user_warning(\n \"could not calculate fit %s for specimen %s in specimen coordinate system while checking sample orientation please check data\" % (fit.name, self.s))\n return\n dec, inc = pars['specimen_dec'], pars['specimen_inc']\n sample = self.Data_hierarchy['sample_of_specimen'][self.s]\n if sample not in list(self.Data_info[\"er_samples\"].keys()) or \"sample_azimuth\" not in list(self.Data_info[\"er_samples\"][sample].keys()) or \"sample_dip\" not in list(self.Data_info[\"er_samples\"][sample].keys()):\n self.user_warning(\n \"Could not display sample orientation checks because sample azimuth or sample dip is missing from er_samples table for sample %s\" % sample)\n self.check_orient_on = False # stop trying because this raises a ton of warnings\n return\n try:\n azimuth = float(self.Data_info[\"er_samples\"][sample]['sample_azimuth'])\n dip = float(self.Data_info[\"er_samples\"][sample]['sample_dip'])\n except TypeError:\n self.user_warning(\n \"Could not display sample orientation checks because azimuth or dip is missing (or invalid) for sample %s\" % sample)\n self.check_orient_on = False # stop trying because this raises a ton of warnings\n return\n\n # first test wrong direction of drill arrows (flip drill direction in opposite direction and re-calculate d,i)\n d, i = pmag.dogeo(dec, inc, azimuth-180., -dip)\n XY = pmag.dimap(d, i)\n if i > 0:\n FC = fit.color\n SIZE = 15*self.GUI_RESOLUTION\n else:\n FC = 'white'\n SIZE = 15*self.GUI_RESOLUTION\n self.high_level_eqarea.scatter([XY[0]], [\n XY[1]], marker='^', edgecolor=fit.color, facecolor=FC, s=SIZE, lw=1, clip_on=False)\n if self.ie_open:\n self.ie.scatter([XY[0]], [XY[1]], marker='^', edgecolor=fit.color,\n facecolor=FC, s=SIZE, lw=1, clip_on=False)\n # Then test wrong end of compass (take az-180.)\n d, i = pmag.dogeo(dec, inc, azimuth-180., dip)\n XY = pmag.dimap(d, i)\n if i > 0:\n FC = fit.color\n SIZE = 15*self.GUI_RESOLUTION\n else:\n FC = 'white'\n SIZE = 15*self.GUI_RESOLUTION\n 
self.high_level_eqarea.scatter([XY[0]], [\n XY[1]], marker='v', edgecolor=fit.color, facecolor=FC, s=SIZE, lw=1, clip_on=False)\n if self.ie_open:\n self.ie.scatter([XY[0]], [XY[1]], marker='v', edgecolor=fit.color,\n facecolor=FC, s=SIZE, lw=1, clip_on=False)\n # did the sample spin in the hole?\n # now spin around specimen's z\n X_up, Y_up, X_d, Y_d = [], [], [], []\n for incr in range(0, 360, 5):\n d, i = pmag.dogeo(dec+incr, inc, azimuth, dip)\n XY = pmag.dimap(d, i)\n if i >= 0:\n X_d.append(XY[0])\n Y_d.append(XY[1])\n else:\n X_up.append(XY[0])\n Y_up.append(XY[1])\n self.high_level_eqarea.scatter(\n X_d, Y_d, marker='.', color=fit.color, alpha=.5, s=SIZE/2, lw=1, clip_on=False)\n self.high_level_eqarea.scatter(\n X_up, Y_up, marker='.', color=fit.color, s=SIZE/2, lw=1, clip_on=False)\n if self.ie_open:\n self.ie.scatter(X_d, Y_d, marker='.', color=fit.color,\n alpha=.5, s=SIZE/2, lw=1, clip_on=False)\n self.ie.scatter(X_up, Y_up, marker='.',\n color=fit.color, s=SIZE/2, lw=1, clip_on=False)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_high_level_equalarea(self, element):\n if self.ie_open:\n high_level = self.ie.show_box.GetValue()\n else:\n high_level = self.UPPER_LEVEL_SHOW\n fits = []\n if high_level not in self.pmag_results_data:\n print((\"no level: \" + str(high_level)))\n return\n if element not in self.pmag_results_data[high_level]:\n print((\"no element: \" + str(element)))\n return\n if self.mean_fit == 'All':\n fits = self.pmag_results_data[high_level][element]\n elif self.mean_fit != 'None' and self.mean_fit != None:\n fits = [fit for fit in self.pmag_results_data[high_level]\n [element] if fit.name == self.mean_fit]\n else:\n fits = []\n fig = self.high_level_eqarea\n if fits:\n for fit in fits:\n pars = fit.get(self.COORDINATE_SYSTEM)\n if not pars:\n if element in self.specimens:\n fit.put(element, self.COORDINATE_SYSTEM, self.get_PCA_parameters(\n element, fit, 
fit.tmin, fit.tmax, self.COORDINATE_SYSTEM, self.PCA_type_box.GetValue()))\n pars = fit.get(self.COORDINATE_SYSTEM)\n if not pars:\n print((\"No data for %s on element %s\" %\n (fit.name, element)))\n return\n if \"specimen_dec\" in list(pars.keys()) and \"specimen_inc\" in list(pars.keys()):\n dec = pars[\"specimen_dec\"]\n inc = pars[\"specimen_inc\"]\n elif \"dec\" in list(pars.keys()) and \"inc\" in list(pars.keys()):\n dec = pars[\"dec\"]\n inc = pars[\"inc\"]\n else:\n print((\"-E- no dec and inc values for:\\n\" + str(fit)))\n XY = pmag.dimap(dec, inc)\n if inc > 0:\n FC = fit.color\n SIZE = 15*self.GUI_RESOLUTION\n else:\n FC = 'white'\n SIZE = 15*self.GUI_RESOLUTION\n marker_shape = 'o'\n if pars['calculation_type'] == \"DE-BFP\":\n marker_shape = 's'\n if 'bfv' in self.plane_display_box.GetValue():\n marker_shape = '>'\n SIZE = 25*self.GUI_RESOLUTION\n if fit == self.current_fit:\n marker_shape = 'D'\n if fit in self.bad_fits:\n marker_shape = (4, 1, 0)\n SIZE = 25*self.GUI_RESOLUTION\n\n # draw a best-fit plane\n if pars['calculation_type'] == 'DE-BFP' and \\\n self.plane_display_box.GetValue() != \"poles\":\n\n if \"plane\" in self.plane_display_box.GetValue() or \"hemisphere\" in self.plane_display_box.GetValue() or \"wp\" in self.plane_display_box.GetValue():\n\n ymin, ymax = self.specimen_eqarea.get_ylim()\n xmin, xmax = self.specimen_eqarea.get_xlim()\n\n D_c, I_c = pmag.circ(\n pars[\"specimen_dec\"], pars[\"specimen_inc\"], 90)\n X_c_up, Y_c_up = [], []\n X_c_d, Y_c_d = [], []\n for k in range(len(D_c)):\n XY = pmag.dimap(D_c[k], I_c[k])\n if I_c[k] < 0:\n X_c_up.append(XY[0])\n Y_c_up.append(XY[1])\n if I_c[k] > 0:\n X_c_d.append(XY[0])\n Y_c_d.append(XY[1])\n\n if self.plane_display_box.GetValue() == \"u. 
hemisphere\" or \\\n self.plane_display_box.GetValue() == \"whole plane\" or \\\n self.plane_display_box.GetValue() == \"wp + bfv\":\n fig.plot(X_c_d, Y_c_d, 'b')\n if self.ie_open:\n self.ie.plot(X_c_d, Y_c_d, 'b')\n if self.plane_display_box.GetValue() == \"l. hemisphere\" or \\\n self.plane_display_box.GetValue() == \"whole plane\" or \\\n self.plane_display_box.GetValue() == \"wp + bfv\":\n fig.plot(X_c_up, Y_c_up, 'c')\n if self.ie_open:\n self.ie.plot(X_c_up, Y_c_up, 'c')\n\n if \"bfv\" in self.plane_display_box.GetValue():\n if 'bfv_dec' not in list(pars.keys()) or 'bfv_inc' not in list(pars.keys()):\n self.calculate_best_fit_vectors()\n pars = fit.get(self.COORDINATE_SYSTEM)\n try:\n XY = pmag.dimap(pars['bfv_dec'], pars['bfv_inc'])\n except KeyError:\n print((\"specimen %s fit %s is marked bad or there was an error calculating bfv pole will be displayed instead\" % (\n element, fit.name)))\n# self.user_warning(\"There was an error calculating bfv for %s of %s, please raise an issue on github and/or contact a dev about this problem and restart the GUI\"%(fit.name,element))\n if inc > 0:\n FC = fit.color\n else:\n FC = 'white'\n\n self.high_EA_xdata.append(XY[0])\n self.high_EA_ydata.append(XY[1])\n fig.scatter([XY[0]], [XY[1]], marker=marker_shape,\n edgecolor=fit.color, facecolor=FC, s=SIZE, lw=1, clip_on=False)\n if self.ie_open:\n self.ie.scatter([XY[0]], [XY[1]], marker=marker_shape,\n edgecolor=fit.color, facecolor=FC, s=SIZE, lw=1, clip_on=False)", "response": "Given an element in the GUI plot to the high level mean plot"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef plot_eqarea_pars(self, pars, fig):\n if pars == {}:\n pass\n elif 'calculation_type' in list(pars.keys()) and pars['calculation_type'] == 'DE-BFP':\n ymin, ymax = fig.get_ylim()\n xmin, xmax = fig.get_xlim()\n\n D_c, I_c = pmag.circ(pars[\"specimen_dec\"],\n pars[\"specimen_inc\"], 90)\n X_c_up, Y_c_up = [], []\n X_c_d, Y_c_d 
= [], []\n for k in range(len(D_c)):\n XY = pmag.dimap(D_c[k], I_c[k])\n if I_c[k] < 0:\n X_c_up.append(XY[0])\n Y_c_up.append(XY[1])\n if I_c[k] > 0:\n X_c_d.append(XY[0])\n Y_c_d.append(XY[1])\n fig.plot(X_c_d, Y_c_d, 'b', lw=0.5)\n fig.plot(X_c_up, Y_c_up, 'c', lw=0.5)\n if self.ie_open:\n self.ie.plot(X_c_d, Y_c_d, 'b', lw=0.5)\n self.ie.plot(X_c_up, Y_c_up, 'c', lw=0.5)\n\n fig.set_xlim(xmin, xmax)\n fig.set_ylim(ymin, ymax)\n # plot best-fit direction\n else:\n if \"specimen_dec\" in list(pars.keys()) and \"specimen_inc\" in list(pars.keys()):\n dec = pars[\"specimen_dec\"]\n inc = pars[\"specimen_inc\"]\n elif \"dec\" in list(pars.keys()) and \"inc\" in list(pars.keys()):\n dec = pars[\"dec\"]\n inc = pars[\"inc\"]\n else:\n print((\"either dec or inc missing from values recived for high level plot, was given %s, aborting\" % (\n str(pars))))\n return\n XY = pmag.dimap(float(dec), float(inc))\n if inc > 0:\n if 'color' in list(pars.keys()):\n FC = pars['color']\n EC = pars['color']\n SIZE = 15*self.GUI_RESOLUTION\n else:\n FC = 'grey'\n EC = 'grey'\n SIZE = 15*self.GUI_RESOLUTION\n else:\n if 'color' in list(pars.keys()):\n FC = 'white'\n EC = pars['color']\n SIZE = 15*self.GUI_RESOLUTION\n else:\n FC = 'white'\n EC = 'grey'\n SIZE = 15*self.GUI_RESOLUTION\n fig.scatter([XY[0]], [XY[1]], marker='o', edgecolor=EC,\n facecolor=FC, s=SIZE, lw=1, clip_on=False)\n if self.ie_open:\n self.ie.scatter([XY[0]], [XY[1]], marker='o', edgecolor=EC,\n facecolor=FC, s=SIZE, lw=1, clip_on=False)", "response": "Plots the parameters that are returned from pmag. 
domean for the given specimen on the given equal-area figure."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_eqarea_mean(self, meanpars, fig):\n mpars_to_plot = []\n if meanpars == {}:\n return\n if meanpars['calculation_type'] == 'Fisher by polarity':\n for mode in list(meanpars.keys()):\n if type(meanpars[mode]) == dict and meanpars[mode] != {}:\n mpars_to_plot.append(meanpars[mode])\n else:\n mpars_to_plot.append(meanpars)\n ymin, ymax = fig.get_ylim()\n xmin, xmax = fig.get_xlim()\n if 'color' in meanpars:\n color = meanpars['color']\n else:\n color = 'black'\n size, alpha = 30, 1.\n # put on the mean direction\n for mpars in mpars_to_plot:\n XYM = pmag.dimap(float(mpars[\"dec\"]), float(mpars[\"inc\"]))\n if float(mpars[\"inc\"]) > 0:\n FC = color\n EC = 'black'\n else:\n FC = 'white'\n EC = color\n self.displayed_means.append(fig.scatter([XYM[0]], [\n XYM[1]], marker='o', edgecolor=EC, facecolor=FC, s=size, lw=1, clip_on=False, alpha=alpha))\n\n if \"alpha95\" in list(mpars.keys()):\n # get the alpha95\n Xcirc, Ycirc = [], []\n Da95, Ia95 = pmag.circ(float(mpars[\"dec\"]), float(\n mpars[\"inc\"]), float(mpars[\"alpha95\"]))\n for k in range(len(Da95)):\n XY = pmag.dimap(Da95[k], Ia95[k])\n Xcirc.append(XY[0])\n Ycirc.append(XY[1])\n self.displayed_means.append(\n fig.plot(Xcirc, Ycirc, color, alpha=alpha))\n\n if self.ie_open:\n self.displayed_means.append(self.ie.scatter([XYM[0]], [\n XYM[1]], marker='o', edgecolor=EC, facecolor=FC, s=size, lw=1, clip_on=False, alpha=alpha))\n if \"alpha95\" in list(mpars.keys()):\n self.displayed_means.append(\n self.ie.plot(Xcirc, Ycirc, color, alpha=alpha))\n self.ie.eqarea.set_xlim(xmin, xmax)\n self.ie.eqarea.set_ylim(ymin, ymax)\n\n fig.set_xlim(xmin, xmax)\n fig.set_ylim(ymin, ymax)", "response": "Plots the given mean directions, with their alpha95 confidence circles, on the equal-area figure."} {"SOURCE": "codesearchnet", "instruction": "How would you explain 
what the following Python 3 function does\ndef initialize_CART_rot(self, s):\n self.s = s # only place in code where self.s is to be set directly\n if self.orthogonal_box.GetValue() == \"X=East\":\n self.ORTHO_PLOT_TYPE = 'E-W'\n elif self.orthogonal_box.GetValue() == \"X=North\":\n self.ORTHO_PLOT_TYPE = 'N-S'\n elif self.orthogonal_box.GetValue() == \"X=best fit line dec\":\n self.ORTHO_PLOT_TYPE = 'PCA_dec'\n else:\n self.ORTHO_PLOT_TYPE = 'ZIJ'\n if self.COORDINATE_SYSTEM == 'geographic':\n # self.CART_rot=self.Data[self.s]['zij_rotated_geo']\n self.zij = array(self.Data[self.s]['zdata_geo'])\n self.zijblock = self.Data[self.s]['zijdblock_geo']\n elif self.COORDINATE_SYSTEM == 'tilt-corrected':\n # self.CART_rot=self.Data[self.s]['zij_rotated_tilt']\n self.zij = array(self.Data[self.s]['zdata_tilt'])\n self.zijblock = self.Data[self.s]['zijdblock_tilt']\n else:\n # self.CART_rot=self.Data[self.s]['zij_rotated']\n self.zij = array(self.Data[self.s]['zdata'])\n self.zijblock = self.Data[self.s]['zijdblock']\n\n if self.COORDINATE_SYSTEM == 'geographic':\n if self.ORTHO_PLOT_TYPE == 'N-S':\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata_geo'], 0.)\n elif self.ORTHO_PLOT_TYPE == 'E-W':\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata_geo'], 90.)\n elif self.ORTHO_PLOT_TYPE == 'PCA_dec':\n if 'specimen_dec' in list(self.current_fit.pars.keys()) and type(self.current_fit.pars['specimen_dec']) != str:\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata_geo'], self.current_fit.pars['specimen_dec'])\n else:\n self.CART_rot = Rotate_zijderveld(self.Data[self.s]['zdata_geo'], pmag.cart2dir(\n self.Data[self.s]['zdata_geo'][0])[0])\n else:\n self.CART_rot = Rotate_zijderveld(self.Data[self.s]['zdata_geo'], pmag.cart2dir(\n self.Data[self.s]['zdata_geo'][0])[0])\n\n elif self.COORDINATE_SYSTEM == 'tilt-corrected':\n if self.ORTHO_PLOT_TYPE == 'N-S':\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata_tilt'], 0.)\n elif 
self.ORTHO_PLOT_TYPE == 'E-W':\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata_tilt'], 90)\n elif self.ORTHO_PLOT_TYPE == 'PCA_dec':\n if 'specimen_dec' in list(self.current_fit.pars.keys()) and type(self.current_fit.pars['specimen_dec']) != str:\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata_tilt'], self.current_fit.pars['specimen_dec'])\n else:\n self.CART_rot = Rotate_zijderveld(self.Data[self.s]['zdata_tilt'], pmag.cart2dir(\n self.Data[self.s]['zdata_tilt'][0])[0])\n else:\n self.CART_rot = Rotate_zijderveld(self.Data[self.s]['zdata_tilt'], pmag.cart2dir(\n self.Data[self.s]['zdata_tilt'][0])[0])\n else:\n if self.ORTHO_PLOT_TYPE == 'N-S':\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata'], 0.)\n elif self.ORTHO_PLOT_TYPE == 'E-W':\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata'], 90)\n elif self.ORTHO_PLOT_TYPE == 'PCA_dec':\n if 'specimen_dec' in list(self.current_fit.pars.keys()) and type(self.current_fit.pars['specimen_dec']) != str:\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata'], self.current_fit.pars['specimen_dec'])\n else: # Zijderveld\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata'], pmag.cart2dir(self.Data[self.s]['zdata'][0])[0])\n\n else: # Zijderveld\n self.CART_rot = Rotate_zijderveld(\n self.Data[self.s]['zdata'], pmag.cart2dir(self.Data[self.s]['zdata'][0])[0])\n\n self.zij_norm = array([row/sqrt(sum(row**2)) for row in self.zij])\n\n # remove bad data from plotting:\n self.CART_rot_good = []\n self.CART_rot_bad = []\n for i in range(len(self.CART_rot)):\n if self.Data[self.s]['measurement_flag'][i] == 'g':\n self.CART_rot_good.append(list(self.CART_rot[i]))\n else:\n self.CART_rot_bad.append(list(self.CART_rot[i]))\n\n self.CART_rot = array(self.CART_rot)\n self.CART_rot_good = array(self.CART_rot_good)\n self.CART_rot_bad = array(self.CART_rot_bad)", "response": "Initializes the internal structure of the current specimen and calculates the 
data necessary to plot the specimen plots."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a new fit (interpretation) to the given specimen.", "response": "def add_fit(self, specimen, name, fmin, fmax, PCA_type=\"DE-BFL\", color=None, suppress_warnings=False, saved=True):\n \"\"\"\n Goes through the data checks required to add an interpretation to\n the param specimen with the name param name, the bounds param fmin\n and param fmax, and calculation type param PCA_type.\n\n Parameters\n ----------\n specimen : specimen with measurement data to add the\n interpretation to\n name : name of the new interpretation\n fmin : lower bound of new interpretation\n fmax : upper bound of new interpretation\n PCA_type : type of regression or mean for new interpretation\n (default: DE-BFL or line)\n color : color to plot the new interpretation in\n\n Returns\n -------\n new Fit object or None if fit could not be added\n \"\"\"\n if specimen not in list(self.Data.keys()) and not suppress_warnings:\n self.user_warning(\n \"there is no measurement data for %s and therefore no interpretation can be created for this specimen\" % (specimen))\n return\n if fmax != None and fmax not in self.Data[specimen]['zijdblock_steps'] or fmin != None and fmin not in self.Data[specimen]['zijdblock_steps']:\n return\n if not (specimen in list(self.pmag_results_data['specimens'].keys())):\n self.pmag_results_data['specimens'][specimen] = []\n next_fit = str(len(self.pmag_results_data['specimens'][specimen]) + 1)\n if name == None or name in [x.name for x in self.pmag_results_data['specimens'][specimen]] or name == \"\" or name.replace(\" \", \"\") == \"\":\n name = ('Fit ' + next_fit)\n if name in [x.name for x in self.pmag_results_data['specimens'][specimen]]:\n print('bad name')\n return\n if color == None:\n color = self.colors[(int(next_fit)-1) % len(self.colors)]\n new_fit = Fit(name, fmax, fmin, color, self, PCA_type, saved)\n if fmin != None and fmax != None:\n 
new_fit.put(specimen, self.COORDINATE_SYSTEM, self.get_PCA_parameters(\n specimen, new_fit, fmin, fmax, self.COORDINATE_SYSTEM, PCA_type))\n if ('specimen_dec' not in list(new_fit.get(self.COORDINATE_SYSTEM).keys())\n or 'specimen_inc' not in list(new_fit.get(self.COORDINATE_SYSTEM).keys()))\\\n and not suppress_warnings:\n TEXT = \"Could not calculate dec or inc for specimen %s component %s with bounds %s and %s in coordinate_system %s, component not added\" % (\n specimen, name, fmin, fmax, self.COORDINATE_SYSTEM)\n self.user_warning(TEXT)\n print(TEXT)\n return\n self.pmag_results_data['specimens'][specimen].append(new_fit)\n samp = self.Data_hierarchy['sample_of_specimen'][specimen]\n if samp in list(self.Data_info['er_samples'].keys()):\n if 'sample_orientation_flag' not in self.Data_info['er_samples'][samp]:\n self.Data_info['er_samples'][samp]['sample_orientation_flag'] = 'g'\n samp_flag = self.Data_info['er_samples'][samp]['sample_orientation_flag']\n if samp_flag == 'b':\n self.mark_fit_bad(new_fit)\n self.close_warning = True\n return new_fit"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove fit from GUI results data", "response": "def delete_fit(self, fit, specimen=None):\n \"\"\"\n removes fit from GUI results data\n\n Parameters\n ----------\n fit : fit to remove\n specimen : specimen of fit to remove, if not provided and\n set to None then the function will find the specimen itself\n \"\"\"\n if specimen == None:\n for spec in self.pmag_results_data['specimens']:\n if fit in self.pmag_results_data['specimens'][spec]:\n specimen = spec\n break\n if specimen not in self.pmag_results_data['specimens']:\n return\n if fit in self.pmag_results_data['specimens'][specimen]:\n self.pmag_results_data['specimens'][specimen].remove(fit)\n if fit == self.current_fit:\n if self.pmag_results_data['specimens'][specimen]:\n self.pmag_results_data['specimens'][specimen][-1].select()\n else:\n self.current_fit = None\n 
self.close_warning = True\n self.calculate_high_levels_data()\n if self.ie_open:\n self.ie.update_editor()\n self.update_selection()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef calculate_vgp_data(self):\n # get criteria if it exists else use default\n crit_data = self.read_criteria_file()\n if crit_data == None:\n crit_data = pmag.default_criteria(0)\n accept = {}\n for critrec in crit_data:\n if type(critrec) != dict:\n continue\n for key in list(critrec.keys()):\n # need to migrate specimen_dang to specimen_int_dang\n if 'IE-SPEC' in list(critrec.keys()) and 'specimen_dang' in list(critrec.keys()) and 'specimen_int_dang' not in list(critrec.keys()):\n critrec['specimen_int_dang'] = critrec['specimen_dang']\n del critrec['specimen_dang']\n # need to get rid of ron shaars sample_int_sigma_uT\n if 'sample_int_sigma_uT' in list(critrec.keys()):\n critrec['sample_int_sigma'] = '%10.3e' % (\n eval(critrec['sample_int_sigma_uT'])*1e-6)\n if key not in list(accept.keys()) and critrec[key] != '':\n accept[key] = critrec[key]\n\n Ns = []\n # retrieve specimen data to calculate VGPS with\n for s in list(self.pmag_results_data['specimens'].keys()):\n for fit in self.pmag_results_data['specimens'][s]:\n if fit in self.bad_fits:\n continue\n pars = fit.get(self.COORDINATE_SYSTEM)\n # check for interpretation data for fit\n if not pars:\n pars = self.get_PCA_parameters(\n s, fit, fit.tmin, fit.tmax, self.COORDINATE_SYSTEM, fit.PCA_type)\n if not pars or 'specimen_dec' not in list(pars.keys()) or 'specimen_inc' not in list(pars.keys()):\n print((\"Could not calculate interpretation for specimen %s and fit %s while calculating VGP data, skipping this component\" % (\n s, fit.name)))\n continue\n pars['er_specimen_name'] = s\n pars['specimen_comp_name'] = fit.name\n Ns.append(pars)\n\n SpecDirs = []\n if crit_data != None: # use selection criteria\n for rec in Ns: # look through everything with specimen_n for \"good\" data\n 
kill = pmag.grade(rec, accept, 'specimen_dir', data_model=2.5)\n if len(kill) == 0: # nothing killed it\n SpecDirs.append(rec)\n else: # no criteria\n SpecDirs = Ns[:] # take them all\n\n for i in range(len(SpecDirs)):\n if SpecDirs[i] == {}:\n continue\n specimen = SpecDirs[i]['er_specimen_name']\n SpecDirs[i]['er_sample_name'] = self.Data_hierarchy['sample_of_specimen'][specimen]\n SpecDirs[i]['er_site_name'] = self.Data_hierarchy['site_of_specimen'][specimen]\n SpecDirs[i]['er_location_name'] = self.Data_hierarchy['location_of_specimen'][specimen]\n\n # init VGP data\n VGP_Data = {'samples': [], 'sites': [], 'locations': []}\n\n # obtain lat lon data\n SiteNFO = list(self.Data_info['er_sites'].values())\n for val in SiteNFO:\n site = val['er_site_name']\n not_found = []\n # check that lat/lon columns are present and non-null\n if 'site_lat' not in val:\n not_found.append('latitude')\n elif not val['site_lat'] and val['site_lat'] is not 0:\n not_found.append('latitude')\n if 'site_lon' not in val:\n not_found.append('longitude')\n elif not val['site_lon'] and val['site_lon'] is not 0:\n not_found.append('longitude')\n if not_found == []:\n continue\n TEXT = \"%s not found for site %s. 
Select 'yes' to enter the values now or 'no' to skip this site and all samples contained in it.\" % (\n str(not_found), val['er_site_name'])\n dlg = wx.MessageDialog(\n self, caption=\"Missing Data\", message=TEXT, style=wx.YES_NO | wx.ICON_QUESTION)\n result = self.show_dlg(dlg)\n dlg.Destroy()\n if result == wx.ID_YES:\n ui_dialog = demag_dialogs.user_input(self, ['Latitude', 'Longitude'], parse_funcs=[\n float, float], heading=\"Missing Latitude or Longitude data for site: %s\" % val['er_site_name'])\n self.show_dlg(ui_dialog)\n ui_data = ui_dialog.get_values()\n if ui_data[0]:\n self.Data_info['er_sites'][site]['site_lat'] = ui_data[1]['Latitude']\n self.Data_info['er_sites'][site]['site_lon'] = ui_data[1]['Longitude']\n val['site_lat'] = ui_data[1]['Latitude']\n val['site_lon'] = ui_data[1]['Longitude']\n\n # calculate sample vgps\n for samp in self.samples:\n SampDir = pmag.get_dictitem(SpecDirs, 'er_sample_name', samp, 'T')\n if len(SampDir) <= 0:\n continue\n for comp in self.all_fits_list:\n CompDir = pmag.get_dictitem(\n SampDir, 'specimen_comp_name', comp, 'T')\n if len(CompDir) <= 0:\n continue # no data for comp\n samp_mean = pmag.lnpbykey(CompDir, 'sample', 'specimen')\n site = pmag.get_dictitem(\n SiteNFO, 'er_site_name', CompDir[0]['er_site_name'], 'T')\n dec = float(samp_mean['sample_dec'])\n inc = float(samp_mean['sample_inc'])\n if 'sample_alpha95' in samp_mean and samp_mean['sample_alpha95'] != \"\":\n a95 = float(samp_mean['sample_alpha95'])\n else:\n a95 = 180.\n try:\n lat = float(site[0]['site_lat'])\n lon = float(site[0]['site_lon'])\n except (KeyError, IndexError, ValueError, TypeError) as e:\n continue\n plong, plat, dp, dm = pmag.dia_vgp(dec, inc, a95, lat, lon)\n PmagResRec = {}\n PmagResRec['name'] = samp\n PmagResRec['comp_name'] = comp\n PmagResRec['n'] = len(CompDir)\n PmagResRec['color'] = [e.color for sl in list(self.pmag_results_data['specimens'].values(\n )) for e in sl if e not in self.bad_fits and e.name == comp][0]\n 
PmagResRec['vgp_lon'] = plong\n PmagResRec['vgp_lat'] = plat\n PmagResRec['lon'] = lon\n PmagResRec['lat'] = lat\n PmagResRec['vgp_dp'] = dp\n PmagResRec['vgp_dm'] = dm\n VGP_Data['samples'].append(PmagResRec)\n\n for site in self.sites:\n SiteDir = pmag.get_dictitem(SpecDirs, 'er_site_name', site, 'T')\n erSite = pmag.get_dictitem(SiteNFO, 'er_site_name', site, 'T')\n for comp in self.all_fits_list:\n siteD = pmag.get_dictitem(\n SiteDir, 'specimen_comp_name', comp, 'T')\n if len(siteD) <= 0:\n print((\"no data for comp %s and site %s\" % (comp, site)))\n continue\n SiteData = pmag.lnpbykey(siteD, 'site', 'specimen')\n dec = float(SiteData['site_dec'])\n inc = float(SiteData['site_inc'])\n if 'site_alpha95' in SiteData and SiteData['site_alpha95'] != \"\":\n a95 = float(SiteData['site_alpha95'])\n else:\n a95 = 180.\n try:\n lat = float(erSite[0]['site_lat'])\n lon = float(erSite[0]['site_lon'])\n except (KeyError, IndexError, TypeError) as e:\n continue\n plong, plat, dp, dm = pmag.dia_vgp(dec, inc, a95, lat, lon)\n SiteData['name'] = site\n SiteData['comp_name'] = comp\n SiteData['n'] = len(siteD)\n SiteData['vgp_lon'] = plong\n SiteData['vgp_lat'] = plat\n SiteData['lon'] = lon\n SiteData['lat'] = lat\n SiteData['vgp_dp'] = dp\n SiteData['vgp_dm'] = dm\n SiteData['color'] = [e.color for sl in list(self.pmag_results_data['specimens'].values(\n )) for e in sl if e not in self.bad_fits and e.name == comp][0]\n VGP_Data['sites'].append(SiteData)\n\n for loc in self.locations:\n LocDir = pmag.get_dictitem(SpecDirs, 'er_location_name', loc, 'T')\n for comp in self.all_fits_list:\n LocCompData = pmag.get_dictitem(\n LocDir, 'specimen_comp_name', comp, 'T')\n if len(LocCompData) < 3:\n print((\"insufficient data for comp %s\" % comp))\n continue\n precs = []\n for rec in LocCompData:\n prec = {'dec': rec['specimen_dec'], 'inc': rec['specimen_inc'],\n 'name': rec['er_site_name'], 'loc': rec['er_location_name']}\n prec = {k: v if v != None else '' for k,\n v in 
list(prec.items())}\n precs.append(prec)\n # you need at least 3 records to get anything back from pmag.fisher_by_pol\n polpars = pmag.fisher_by_pol(precs)\n # hunt through all the modes (normal=A, reverse=B, all=ALL)\n for mode in list(polpars.keys()):\n PolRes = {}\n PolRes['name'] = polpars[mode]['locs']\n PolRes[\"comp_name\"] = comp+':'+mode\n PolRes[\"dec\"] = '%7.1f' % (polpars[mode]['dec'])\n PolRes[\"inc\"] = '%7.1f' % (polpars[mode]['inc'])\n PolRes[\"n\"] = '%i' % (polpars[mode]['n'])\n PolRes[\"r\"] = '%5.4f' % (polpars[mode]['r'])\n PolRes[\"k\"] = '%6.0f' % (polpars[mode]['k'])\n PolRes['a95'] = '%7.1f' % (polpars[mode]['alpha95'])\n dec, inc, a95 = PolRes[\"dec\"], PolRes[\"inc\"], PolRes[\"a95\"]\n lat, lon, loc_data = \"\", \"\", self.Data_info['er_locations']\n if loc in loc_data and 'location_begin_lat' in loc_data[loc]:\n lat = loc_data[loc]['location_begin_lat']\n elif loc in loc_data and 'location_end_lat' in loc_data[loc]:\n lat = loc_data[loc]['location_end_lat']\n if loc in loc_data and 'location_begin_lon' in loc_data[loc]:\n lon = loc_data[loc]['location_begin_lon']\n elif loc in loc_data and 'location_end_lon' in loc_data[loc]:\n lon = loc_data[loc]['location_end_lon']\n if lat == \"\" or lon == \"\" or lat == None or lon == None:\n ui_dialog = demag_dialogs.user_input(self, ['Latitude', 'Longitude'], parse_funcs=[\n float, float], heading=\"Missing Latitude or Longitude data for location: %s\" % loc)\n self.show_dlg(ui_dialog)\n ui_data = ui_dialog.get_values()\n if ui_data[0]:\n lat = ui_data[1]['Latitude']\n lon = ui_data[1]['Longitude']\n if loc not in loc_data:\n loc_data[loc] = {}\n if len(loc_data) > 0:\n loc_data[loc]['location_begin_lat'] = lat\n loc_data[loc]['location_begin_lon'] = lon\n else:\n continue\n try:\n plong, plat, dp, dm = pmag.dia_vgp(\n *list(map(float, [dec, inc, a95, lat, lon])))\n except TypeError:\n print((\"Not valid parameters for vgp calculation on location: %s\" %\n loc, dec, inc, a95, lat, lon))\n 
continue\n PolRes['vgp_lon'] = plong\n PolRes['vgp_lat'] = plat\n PolRes['lon'] = lon\n PolRes['lat'] = lat\n PolRes['vgp_dp'] = dp\n PolRes['vgp_dm'] = dm\n PolRes['color'] = [e.color for sl in list(self.pmag_results_data['specimens'].values(\n )) for e in sl if e not in self.bad_fits and e.name == comp][0]\n VGP_Data['locations'].append(PolRes)\n\n return VGP_Data", "response": "Calculates VGPs for all samples, sites, and locations"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef convert_ages_to_calendar_year(self, er_ages_rec):\n if \"age\" not in list(er_ages_rec.keys()):\n return(er_ages_rec)\n if \"age_unit\" not in list(er_ages_rec.keys()):\n return(er_ages_rec)\n if er_ages_rec[\"age_unit\"] == \"\":\n return(er_ages_rec)\n\n if er_ages_rec[\"age\"] == \"\":\n if \"age_range_high\" in list(er_ages_rec.keys()) and \"age_range_low\" in list(er_ages_rec.keys()):\n if er_ages_rec[\"age_range_high\"] != \"\" and er_ages_rec[\"age_range_low\"] != \"\":\n er_ages_rec[\"age\"] = scipy.mean(\n [float(er_ages_rec[\"age_range_high\"]), float(er_ages_rec[\"age_range_low\"])])\n if er_ages_rec[\"age\"] == \"\":\n return(er_ages_rec)\n\n age_unit = er_ages_rec[\"age_unit\"]\n\n # Fix 'age':\n multiplier = 1\n if age_unit == \"Ga\":\n multiplier = -1e9\n if age_unit == \"Ma\":\n multiplier = -1e6\n if age_unit == \"Ka\":\n multiplier = -1e3\n if age_unit == \"Years AD (+/-)\" or age_unit == \"Years Cal AD (+/-)\":\n multiplier = 1\n if age_unit == \"Years BP\" or age_unit == \"Years Cal BP\":\n multiplier = 1\n age = float(er_ages_rec[\"age\"])*multiplier\n if age_unit == \"Years BP\" or age_unit == \"Years Cal BP\":\n age = 1950-age\n er_ages_rec['age_cal_year'] = age\n\n # Fix 'age_range_low':\n age_range_low = age\n age_range_high = age\n age_sigma = 0\n\n if \"age_sigma\" in list(er_ages_rec.keys()) and er_ages_rec[\"age_sigma\"] != \"\":\n age_sigma = float(er_ages_rec[\"age_sigma\"])*multiplier\n if age_unit 
== \"Years BP\" or age_unit == \"Years Cal BP\":\n age_sigma = 1950-age_sigma\n age_range_low = age-age_sigma\n age_range_high = age+age_sigma\n\n if \"age_range_high\" in list(er_ages_rec.keys()) and \"age_range_low\" in list(er_ages_rec.keys()):\n if er_ages_rec[\"age_range_high\"] != \"\" and er_ages_rec[\"age_range_low\"] != \"\":\n age_range_high = float(\n er_ages_rec[\"age_range_high\"])*multiplier\n if age_unit == \"Years BP\" or age_unit == \"Years Cal BP\":\n age_range_high = 1950-age_range_high\n age_range_low = float(er_ages_rec[\"age_range_low\"])*multiplier\n if age_unit == \"Years BP\" or age_unit == \"Years Cal BP\":\n age_range_low = 1950-age_range_low\n er_ages_rec['age_cal_year_range_low'] = age_range_low\n er_ages_rec['age_cal_year_range_high'] = age_range_high\n\n return(er_ages_rec)", "response": "Converts ages in any supported unit to calendar years"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngenerate the warning text for the current specimen and add it to the warning_text attribute of the GUI.", "response": "def generate_warning_text(self):\n \"\"\"\n generates warnings for the current specimen then adds them to the\n current warning text for the GUI which will be rendered on a call to\n update_warning_box.\n \"\"\"\n self.warning_text = \"\"\n if self.s in list(self.pmag_results_data['specimens'].keys()):\n for fit in self.pmag_results_data['specimens'][self.s]:\n beg_pca, end_pca = self.get_indices(\n fit, fit.tmin, fit.tmax, self.s)\n if beg_pca == None or end_pca == None:\n self.warning_text += \"%s to %s are invalid bounds, to fit %s.\\n\" % (\n fit.tmin, fit.tmax, fit.name)\n elif end_pca - beg_pca < 2:\n self.warning_text += \"there are not enough points between %s to %s, on fit %s.\\n\" % (\n fit.tmin, fit.tmax, fit.name)\n else:\n check_duplicates = []\n warning_issued = [] # keep track of warnings issued to avoid redundant warnings\n\n # if within range, attempt to go one additional step beyond\n # tmax 
so that duplicates at the upper bound are caught\n if (end_pca + 2) < len(self.Data[self.s]['zijdblock_steps']):\n check_endpoint = end_pca + 2\n else:\n check_endpoint = end_pca + 1\n\n for s, f in zip(self.Data[self.s]['zijdblock_steps'][beg_pca:check_endpoint],\n self.Data[self.s]['measurement_flag'][beg_pca:check_endpoint]):\n if f == 'g' and [s, 'g'] in check_duplicates:\n if s == fit.tmin and s not in warning_issued:\n self.warning_text += (\"There are multiple good %s \" +\n \"steps at the lower bound of Fit %s. The first \" +\n \"measurement will be used as the lower bound.\\n\") % (\n s, fit.name)\n # warning_issued_low.append(s)\n warning_issued.append(s)\n elif s == fit.tmax and s not in warning_issued:\n self.warning_text += (\"There are multiple good %s \" +\n \"steps at the upper bound of Fit %s. The first \" +\n \"measurement will be used as the upper bound.\\n\") % (\n s, fit.name)\n # warning_issued_high.append(s)\n warning_issued.append(s)\n elif s not in warning_issued:\n self.warning_text += (\"Within Fit %s, there are \" +\n \"multiple good measurements at the %s step. 
All \" +\n \"good measurements are included in the fit.\\n\") % (\n fit.name, s)\n warning_issued.append(s)\n else:\n pass\n else:\n check_duplicates.append([s, f])\n if self.s in list(self.Data.keys()):\n if not self.Data[self.s]['zijdblock_geo']:\n self.warning_text += \"There is no geographic data for this specimen.\\n\"\n if not self.Data[self.s]['zijdblock_tilt']:\n self.warning_text += \"There is no tilt-corrected data for this specimen.\\n\""} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_criteria_file(self, criteria_file_name=None):\n# import pdb; pdb.set_trace()\n acceptance_criteria = pmag.initialize_acceptance_criteria()\n if self.data_model == 3:\n if criteria_file_name == None:\n criteria_file_name = \"criteria.txt\"\n contribution = cb.Contribution(self.WD, read_tables=[\n 'criteria'], custom_filenames={'criteria': criteria_file_name})\n if 'criteria' in contribution.tables:\n crit_container = contribution.tables['criteria']\n crit_data = crit_container.df\n crit_data = crit_data.to_dict('records')\n for crit in crit_data:\n m2_name = map_magic.convert_direction_criteria(\n 'magic2', crit['table_column'])\n if m2_name != \"\":\n try:\n if crit['criterion_value'] == 'True':\n acceptance_criteria[m2_name]['value'] = 1\n else:\n acceptance_criteria[m2_name]['value'] = 0\n acceptance_criteria[m2_name]['value'] = float(\n crit['criterion_value'])\n except ValueError:\n self.user_warning(\"%s is not a valid comparitor for %s, skipping this criteria\" % (\n str(crit['criterion_value']), m2_name))\n continue\n acceptance_criteria[m2_name]['pmag_criteria_code'] = crit['criterion']\n return acceptance_criteria\n else:\n if criteria_file_name == None:\n criteria_file_name = \"pmag_criteria.txt\"\n try:\n acceptance_criteria = pmag.read_criteria_from_file(\n os.path.join(self.WD, criteria_file_name), acceptance_criteria)\n except (IOError, OSError) as e:\n self.user_warning(\"File %s not found 
in directory %s aborting operation\" % (\n criteria_file_name, self.WD))\n return acceptance_criteria", "response": "Reads the pmag_criteria.txt file and returns a set of dicts that can be passed into pmag.grade to filter data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_PCA_parameters(self, specimen, fit, tmin, tmax, coordinate_system, calculation_type):\n if tmin == '' or tmax == '':\n return\n beg_pca, end_pca = self.get_indices(fit, tmin, tmax, specimen)\n\n if coordinate_system == 'geographic' or coordinate_system == 'DA-DIR-GEO':\n block = self.Data[specimen]['zijdblock_geo']\n elif coordinate_system == 'tilt-corrected' or coordinate_system == 'DA-DIR-TILT':\n block = self.Data[specimen]['zijdblock_tilt']\n else:\n block = self.Data[specimen]['zijdblock']\n if block == []:\n print((\"-E- no measurement data for specimen %s in coordinate system %s\" %\n (specimen, coordinate_system)))\n mpars = {}\n elif end_pca > beg_pca and end_pca - beg_pca > 1:\n\n try:\n # performs regression\n mpars = pmag.domean(block, beg_pca, end_pca, calculation_type)\n except:\n print((block, beg_pca, end_pca, calculation_type,\n specimen, fit.name, tmin, tmax, coordinate_system))\n return\n\n if 'specimen_direction_type' in mpars and mpars['specimen_direction_type'] == 'Error':\n print((\"-E- no measurement data for specimen %s in coordinate system %s\" %\n (specimen, coordinate_system)))\n return {}\n else:\n mpars = {}\n for k in list(mpars.keys()):\n try:\n if math.isnan(float(mpars[k])):\n mpars[k] = 0\n except:\n pass\n if \"DE-BFL\" in calculation_type and 'specimen_dang' not in list(mpars.keys()):\n mpars['specimen_dang'] = 0\n\n if 'best fit vector' in self.plane_display_box.GetValue():\n self.calculate_best_fit_vectors()\n\n return(mpars)", "response": "Returns a dictionary of PCA parameters for the specified specimen and fit."} {"SOURCE": "codesearchnet", "instruction": "Can
you generate the documentation for the following Python 3 function\ndef autointerpret(self, event, step_size=None, calculation_type=\"DE-BFL\"):\n if not self.user_warning(\"This feature is in ALPHA and still in development and testing. It is subject to bugs and will often create a LOT of new interpretations. This feature should only be used to get a general idea of the trend of the data before actually manually interpreting the data, and the output of this function should certainly not be trusted as 100% accurate and usable for publication. Would you like to continue?\"):\n return\n if not self.clear_interpretations():\n return\n\n print(\"Autointerpretation Start\")\n\n self.set_test_mode(True)\n for specimen in self.specimens:\n self.autointerpret_specimen(specimen, step_size, calculation_type)\n self.set_test_mode(False)\n\n if self.pmag_results_data['specimens'][self.s] != []:\n self.current_fit = self.pmag_results_data['specimens'][self.s][-1]\n else:\n self.current_fit = None\n print(\"Autointerpretation Complete\")\n self.update_selection()\n if self.ie_open:\n self.ie.update_editor()", "response": "Automatically interprets every specimen, adding the resulting fits to each one."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the parameters of a high level mean of specimen interpretations", "response": "def get_high_level_mean_pars(self, high_level_type, high_level_name, calculation_type, elements_type, mean_fit, dirtype):\n \"\"\"\n Gets the parameters of a mean of lower level data, such as a site\n level Fisher mean of specimen interpretations\n\n Parameters\n ----------\n high_level_type : 'samples','sites','locations','study'\n high_level_name : sample, site, location, or study whose\n data the mean is calculated from\n calculation_type : 'Bingham','Fisher','Fisher by polarity'\n elements_type : what to average:
'specimens', 'samples',\n 'sites' (Ron. ToDo allow VGP and maybe locations?)\n mean_fit : name of the interpretation to average; if 'All', uses all\n figures out what level to average, and what elements to average\n (specimen, samples, sites, vgp)\n \"\"\"\n\n elements_list = self.Data_hierarchy[high_level_type][high_level_name][elements_type]\n pars_for_mean = {}\n pars_for_mean[\"All\"] = []\n\n for element in elements_list:\n if elements_type == 'specimens' and element in self.pmag_results_data['specimens']:\n for fit in self.pmag_results_data['specimens'][element]:\n if fit in self.bad_fits:\n continue\n if fit.name not in list(pars_for_mean.keys()):\n pars_for_mean[fit.name] = []\n try:\n # is this fit to be included in mean\n if mean_fit == 'All' or mean_fit == fit.name:\n pars = fit.get(dirtype)\n if pars == {} or pars == None:\n pars = self.get_PCA_parameters(\n element, fit, fit.tmin, fit.tmax, dirtype, fit.PCA_type)\n if pars == {} or pars == None:\n print((\"cannot calculate parameters for element %s and fit %s in calculate_high_level_mean, leaving out of Fisher mean, please check this value.\" % (\n element, fit.name)))\n continue\n fit.put(element, dirtype, pars)\n else:\n continue\n if \"calculation_type\" in list(pars.keys()) and pars[\"calculation_type\"] == 'DE-BFP':\n dec, inc, direction_type = pars[\"specimen_dec\"], pars[\"specimen_inc\"], 'p'\n elif \"specimen_dec\" in list(pars.keys()) and \"specimen_inc\" in list(pars.keys()):\n dec, inc, direction_type = pars[\"specimen_dec\"], pars[\"specimen_inc\"], 'l'\n elif \"dec\" in list(pars.keys()) and \"inc\" in list(pars.keys()):\n dec, inc, direction_type = pars[\"dec\"], pars[\"inc\"], 'l'\n else:\n print(\n (\"-E- ERROR: can't find mean for specimen interpretation: %s , %s\" % (element, fit.name)))\n print(pars)\n continue\n # add for calculation\n pars_for_mean[fit.name].append({'dec': float(dec), 'inc': float(\n inc), 'direction_type': direction_type, 'element_name': element})\n 
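The pooling loop above gathers {'dec', 'inc', 'direction_type'} records that are later handed to pmag.dolnp for a Fisher mean. As a rough illustration of what that averaging does (a simplified sketch, not the PmagPy implementation — the name `fisher_mean` and the skipping of great-circle records are assumptions here):

```python
import math

def fisher_mean(directions):
    # Simplified stand-in for pmag.dolnp: great-circle ('p') records are
    # skipped here, whereas the real routine combines them with the lines.
    lines = [d for d in directions if d.get('direction_type', 'l') == 'l']
    n = len(lines)
    if n == 0:
        return {}
    # sum unit vectors built from each declination/inclination pair
    x = y = z = 0.0
    for d in lines:
        dec = math.radians(float(d['dec']))
        inc = math.radians(float(d['inc']))
        x += math.cos(inc) * math.cos(dec)
        y += math.cos(inc) * math.sin(dec)
        z += math.sin(inc)
    r = math.sqrt(x * x + y * y + z * z)  # resultant vector length
    mean_dec = math.degrees(math.atan2(y, x)) % 360
    mean_inc = math.degrees(math.asin(z / r))
    # Fisher precision parameter; undefined for a single direction
    k = (n - 1) / (n - r) if n - r > 1e-9 else float('inf')
    return {'dec': mean_dec, 'inc': mean_inc, 'n': n, 'r': r, 'k': k}
```

With two directions such as (dec=10, inc=40) and (dec=20, inc=50), the mean direction falls between them, and k grows as the directions cluster more tightly.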
pars_for_mean[\"All\"].append({'dec': float(dec), 'inc': float(\n inc), 'direction_type': direction_type, 'element_name': element})\n except KeyError:\n print(\n (\"KeyError in calculate_high_level_mean for element: \" + str(element)))\n continue\n else:\n try:\n pars = self.high_level_means[elements_type][element][mean_fit][dirtype]\n if \"dec\" in list(pars.keys()) and \"inc\" in list(pars.keys()):\n dec, inc, direction_type = pars[\"dec\"], pars[\"inc\"], 'l'\n else:\n # print \"-E- ERROR: can't find mean for element %s\"%element\n continue\n except KeyError:\n # print(\"KeyError in calculate_high_level_mean for element: \" + str(element) + \" please report to a dev\")\n continue\n\n return pars_for_mean"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef calculate_mean(self, pars_for_mean, calculation_type):\n\n if len(pars_for_mean) == 0:\n return({})\n\n elif len(pars_for_mean) == 1:\n return ({\"dec\": float(pars_for_mean[0]['dec']), \"inc\": float(pars_for_mean[0]['inc']), \"calculation_type\": calculation_type, \"n\": 1})\n\n# elif calculation_type =='Bingham':\n# data=[]\n# for pars in pars_for_mean:\n# # ignore great circle\n# if 'direction_type' in pars.keys() and 'direction_type'=='p':\n# continue\n# else:\n# data.append([pars['dec'],pars['inc']])\n# mpars=pmag.dobingham(data)\n\n elif calculation_type == 'Fisher':\n mpars = pmag.dolnp(pars_for_mean, 'direction_type')\n\n elif calculation_type == 'Fisher by polarity':\n mpars = pmag.fisher_by_pol(pars_for_mean)\n for key in list(mpars.keys()):\n mpars[key]['n_planes'] = 0\n mpars[key]['calculation_type'] = 'Fisher'\n\n mpars['calculation_type'] = calculation_type\n\n return mpars", "response": "Calculates the mean of the in - memory data for a given set of parameters."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncalculating the high level mean data for the high level mean plot using the information in 
level_box, level_names, mean_type_box, and mean_fit_box.", "response": "def calculate_high_levels_data(self):\n \"\"\"\n calculates high level mean data for the high level mean plot using\n information in level_box, level_names, mean_type_box, and\n mean_fit_box; also updates the information in the ie to match high\n level mean data in the main GUI.\n \"\"\"\n high_level_type = str(self.level_box.GetValue())\n if high_level_type == 'sample':\n high_level_type = 'samples'\n if high_level_type == 'site':\n high_level_type = 'sites'\n if high_level_type == 'location':\n high_level_type = 'locations'\n high_level_name = str(self.level_names.GetValue())\n calculation_type = str(self.mean_type_box.GetValue())\n elements_type = self.UPPER_LEVEL_SHOW\n if self.ie_open:\n self.ie.mean_type_box.SetStringSelection(calculation_type)\n self.calculate_high_level_mean(\n high_level_type, high_level_name, calculation_type, elements_type, self.mean_fit)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef quiet_reset_backend(self, reset_interps=True):\n new_Data_info = self.get_data_info()\n new_Data, new_Data_hierarchy = self.get_data()\n\n if not new_Data:\n print(\"Data read in failed when resetting, aborting reset\")\n return\n else:\n self.Data, self.Data_hierarchy, self.Data_info = new_Data, new_Data_hierarchy, new_Data_info\n\n if reset_interps:\n self.pmag_results_data = {}\n for level in ['specimens', 'samples', 'sites', 'locations', 'study']:\n self.pmag_results_data[level] = {}\n self.high_level_means = {}\n\n high_level_means = {}\n for high_level in ['samples', 'sites', 'locations', 'study']:\n if high_level not in list(self.high_level_means.keys()):\n self.high_level_means[high_level] = {}\n\n # get list of locations\n self.locations = list(self.Data_hierarchy['locations'].keys())\n self.locations.sort()\n # get list of sites\n self.sites = list(self.Data_hierarchy['sites'].keys())\n self.sites.sort(key=spec_key_func)
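`spec_key_func`, used above as the sort key, orders specimen names so that digit runs compare numerically ('spec2' ahead of 'spec10'). A hypothetical equivalent (the helper `natural_key` is an assumption for illustration, not the actual PmagPy function):

```python
import re

def natural_key(name):
    # Split the name into text and digit chunks; digit chunks compare as
    # integers, so 'spec2' sorts before 'spec10'.
    return [int(chunk) if chunk.isdigit() else chunk.lower()
            for chunk in re.split(r'(\d+)', str(name))]

names = ['spec10', 'spec2', 'spec1a', 'spec1']
names.sort(key=natural_key)
# names is now ['spec1', 'spec1a', 'spec2', 'spec10']
```

A plain lexicographic sort would instead put 'spec10' before 'spec2', which is why a key function like this is needed for specimen lists.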
# get list of sites\n self.samples = [] # sort the samples within each site\n for site in self.sites:\n self.samples.extend(\n sorted(self.Data_hierarchy['sites'][site]['samples'], key=spec_key_func))\n self.specimens = [] # sort the specimens within each sample\n for samp in self.samples:\n self.specimens.extend(\n sorted(self.Data_hierarchy['samples'][samp]['specimens'], key=spec_key_func))\n\n # --------------------------------------------------------------------\n # initialize first specimen in list as current specimen\n # --------------------------------------------------------------------\n if self.s in self.specimens:\n pass\n elif len(self.specimens) > 0:\n self.select_specimen(str(self.specimens[0]))\n else:\n self.select_specimen(\"\")\n try:\n self.sample = self.Data_hierarchy['sample_of_specimen'][self.s]\n except KeyError:\n self.sample = \"\"\n try:\n self.site = self.Data_hierarchy['site_of_specimen'][self.s]\n except KeyError:\n self.site = \"\"\n\n if self.Data and reset_interps:\n self.update_pmag_tables()\n\n if self.ie_open:\n self.ie.specimens_list = self.specimens", "response": "Resets all the data in the backend to the original data and returns the new data and the new data_info."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef reset_backend(self, warn_user=True, reset_interps=True):\n if warn_user and not self.data_loss_warning():\n return False\n\n # reset backend, including get_data(), get_data_info()\n self.quiet_reset_backend(reset_interps=reset_interps)\n\n # reset specimens box\n self.specimens_box.SetItems(self.specimens)\n self.specimens_box.SetStringSelection(str(self.s))\n\n # reset site level means box\n self.level_names.Clear()\n self.level_names.AppendItems(self.sites)\n if self.sites:\n self.level_names.SetSelection(0)\n\n # reset coordinate system\n self.COORDINATE_SYSTEM, self.coordinate_list = self.get_coordinate_system()\n self.coordinates_box.Clear()\n 
self.coordinates_box.AppendItems(self.coordinate_list)\n self.coordinates_box.SetStringSelection(self.COORDINATE_SYSTEM)\n\n # get cart rot\n self.initialize_CART_rot(str(self.s))\n\n # draw everything\n if self.Data:\n if not self.current_fit:\n self.draw_figure(self.s)\n self.update_selection()\n else:\n self.Add_text()\n self.update_fit_boxes()\n\n if self.ie_open:\n self.ie.update_editor()", "response": "Resets the current backend and updates GUI displays such as plots and selection boxes."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef recalculate_current_specimen_interpreatations(self):\n self.initialize_CART_rot(self.s)\n if str(self.s) in self.pmag_results_data['specimens']:\n for fit in self.pmag_results_data['specimens'][self.s]:\n if fit.get('specimen') and 'calculation_type' in fit.get('specimen'):\n fit.put(self.s, 'specimen', self.get_PCA_parameters(\n self.s, fit, fit.tmin, fit.tmax, 'specimen', fit.get('specimen')['calculation_type']))\n if len(self.Data[self.s]['zijdblock_geo']) > 0 and fit.get('geographic') and 'calculation_type' in fit.get('geographic'):\n fit.put(self.s, 'geographic', self.get_PCA_parameters(\n self.s, fit, fit.tmin, fit.tmax, 'geographic', fit.get('geographic')['calculation_type']))\n if len(self.Data[self.s]['zijdblock_tilt']) > 0 and fit.get('tilt-corrected') and 'calculation_type' in fit.get('tilt-corrected'):\n fit.put(self.s, 'tilt-corrected', self.get_PCA_parameters(self.s, fit, fit.tmin,\n fit.tmax, 'tilt-corrected', fit.get('tilt-corrected')['calculation_type']))", "response": "Recalculates all interpretations for the current specimen in all coordinate systems.
Does not display the recalculated data."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_bound_data(self, tmin0, tmax0, specimen):\n if specimen not in self.Data:\n print(\n (\"no measurement data found loaded for specimen %s and will be ignored\" % (specimen)))\n return (None, None)\n if self.Data[specimen]['measurement_step_unit'] == \"C\":\n if float(tmin0) == 0 or float(tmin0) == 273:\n tmin = \"0\"\n else:\n tmin = \"%.0fC\" % (float(tmin0)-273)\n if float(tmax0) == 0 or float(tmax0) == 273:\n tmax = \"0\"\n else:\n tmax = \"%.0fC\" % (float(tmax0)-273)\n elif self.Data[specimen]['measurement_step_unit'] == \"mT\":\n if float(tmin0) == 0:\n tmin = \"0\"\n else:\n tmin = \"%.1fmT\" % (float(tmin0)*1000)\n if float(tmax0) == 0:\n tmax = \"0\"\n else:\n tmax = \"%.1fmT\" % (float(tmax0)*1000)\n else: # combined experiment T:AF\n if float(tmin0) == 0:\n tmin = \"0\"\n elif \"%.0fC\" % (float(tmin0)-273) in self.Data[specimen]['zijdblock_steps']:\n tmin = \"%.0fC\" % (float(tmin0)-273)\n elif \"%.1fmT\" % (float(tmin0)*1000) in self.Data[specimen]['zijdblock_steps']:\n tmin = \"%.1fmT\" % (float(tmin0)*1000)\n else:\n tmin = None\n if float(tmax0) == 0:\n tmax = \"0\"\n elif \"%.0fC\" % (float(tmax0)-273) in self.Data[specimen]['zijdblock_steps']:\n tmax = \"%.0fC\" % (float(tmax0)-273)\n elif \"%.1fmT\" % (float(tmax0)*1000) in self.Data[specimen]['zijdblock_steps']:\n tmax = \"%.1fmT\" % (float(tmax0)*1000)\n else:\n tmax = None\n return tmin, tmax", "response": "Converts the lower and upper bound values from the kelvin and tesla units stored in MagIC and redo files to Celsius and millitesla, which are often more intuitive, and returns the formatted bounds for the given specimen."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_indices(self, fit=None, tmin=None, tmax=None, specimen=None):\n if specimen == None:\n
specimen = self.s\n if fit and not tmin and not tmax:\n tmin = fit.tmin\n tmax = fit.tmax\n if specimen not in list(self.Data.keys()):\n self.user_warning(\"No data for specimen \" + specimen)\n if tmin in self.Data[specimen]['zijdblock_steps']:\n tmin_index = self.Data[specimen]['zijdblock_steps'].index(tmin)\n elif type(tmin) == str or type(tmin) == str and tmin != '':\n int_steps = [float(x.strip(\"C mT\"))\n for x in self.Data[specimen]['zijdblock_steps']]\n if tmin == '':\n tmin = self.Data[specimen]['zijdblock_steps'][0]\n print((\"No lower bound for %s on specimen %s using lowest step (%s) for lower bound\" % (\n fit.name, specimen, tmin)))\n if fit != None:\n fit.tmin = tmin\n int_tmin = float(tmin.strip(\"C mT\"))\n diffs = [abs(x-int_tmin) for x in int_steps]\n tmin_index = diffs.index(min(diffs))\n else:\n tmin_index = self.tmin_box.GetSelection()\n if tmax in self.Data[specimen]['zijdblock_steps']:\n tmax_index = self.Data[specimen]['zijdblock_steps'].index(tmax)\n elif type(tmax) == str or type(tmax) == str and tmax != '':\n int_steps = [float(x.strip(\"C mT\"))\n for x in self.Data[specimen]['zijdblock_steps']]\n if tmax == '':\n tmax = self.Data[specimen]['zijdblock_steps'][-1]\n print((\"No upper bound for fit %s on specimen %s using last step (%s) for upper bound\" % (\n fit.name, specimen, tmax)))\n if fit != None:\n fit.tmax = tmax\n int_tmax = float(tmax.strip(\"C mT\"))\n diffs = [abs(x-int_tmax) for x in int_steps]\n tmax_index = diffs.index(min(diffs))\n else:\n tmax_index = self.tmin_box.GetSelection()\n\n max_index = len(self.Data[specimen]['zijdblock_steps'])-1\n while (self.Data[specimen]['measurement_flag'][max_index] == 'b' and max_index-1 > 0):\n max_index -= 1\n\n if tmin_index >= max_index:\n print((\"lower bound is greater or equal to max step cannot determine bounds for specimen: \" + specimen))\n return (None, None)\n\n if (tmin_index >= 0):\n while (self.Data[specimen]['measurement_flag'][tmin_index] == 'b' and\n tmin_index+1 < 
len(self.Data[specimen]['zijdblock_steps'])):\n if (self.Data[specimen]['zijdblock_steps'][tmin_index+1] == tmin):\n tmin_index += 1\n else:\n tmin_old = tmin\n while (self.Data[specimen]['measurement_flag'][tmin_index] == 'b' and\n tmin_index+1 < len(self.Data[specimen]['zijdblock_steps'])):\n tmin_index += 1\n tmin = self.Data[specimen]['zijdblock_steps'][tmin_index]\n if fit != None:\n fit.tmin = tmin\n self.tmin_box.SetStringSelection(tmin)\n print((\"For specimen \" + str(specimen) + \" there are no good measurement steps with value - \" +\n str(tmin_old) + \" using step \" + str(tmin) + \" as lower bound instead\"))\n break\n\n if (tmax_index < max_index):\n while (self.Data[specimen]['measurement_flag'][tmax_index] == 'b' and\n tmax_index+1 < len(self.Data[specimen]['zijdblock_steps'])):\n if (self.Data[specimen]['zijdblock_steps'][tmax_index+1] == tmax):\n tmax_index += 1\n else:\n tmax_old = tmax\n while (self.Data[specimen]['measurement_flag'][tmax_index] == 'b' and\n tmax_index >= 0):\n tmax_index -= 1\n tmax = self.Data[specimen]['zijdblock_steps'][tmax_index]\n if fit != None:\n fit.tmax = tmax\n self.tmax_box.SetStringSelection(tmax)\n print((\"For specimen \" + str(specimen) + \" there are no good measurement steps with value - \" +\n str(tmax_old) + \" using step \" + str(tmax) + \" as upper bound instead\"))\n break\n\n if (tmin_index < 0):\n tmin_index = 0\n if (tmax_index > max_index):\n tmax_index = max_index\n\n return (tmin_index, tmax_index)", "response": "Finds the indices of the tmin and tmax bounding steps within the measurement data of the given specimen."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nnormalize a list of dictionaries old_recs so that every dictionary in the returned list has the same keys as all the others.", "response": "def merge_pmag_recs(self, old_recs):\n \"\"\"\n Takes in a list of dictionaries old_recs and returns a list of\n dictionaries where every dictionary
in the returned list has the\n same keys as all the others.\n\n Parameters\n ----------\n old_recs : list of dictionaries to fix\n\n Returns\n -------\n recs : list of dictionaries with same keys\n \"\"\"\n recs = {}\n recs = deepcopy(old_recs)\n headers = []\n for rec in recs:\n for key in list(rec.keys()):\n if key not in headers:\n headers.append(key)\n for rec in recs:\n for header in headers:\n if header not in list(rec.keys()):\n rec[header] = \"\"\n return recs"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nselecting the specimen and sets the current GUI specimen to the specimen.", "response": "def select_specimen(self, specimen):\n \"\"\"\n Goes through the calculations necessary to plot measurement data for\n specimen and sets specimen as current GUI specimen, also attempts to\n handle changing current fit.\n \"\"\"\n try:\n fit_index = self.pmag_results_data['specimens'][self.s].index(\n self.current_fit)\n except KeyError:\n fit_index = None\n except ValueError:\n fit_index = None\n # sets self.s to specimen calculates params etc.\n self.initialize_CART_rot(specimen)\n self.list_bound_loc = 0\n if fit_index != None and self.s in self.pmag_results_data['specimens']:\n try:\n self.current_fit = self.pmag_results_data['specimens'][self.s][fit_index]\n except IndexError:\n self.current_fit = None\n else:\n self.current_fit = None\n if self.s != self.specimens_box.GetValue():\n self.specimens_box.SetValue(self.s)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nclearing all interpretations in the current directory", "response": "def clear_interpretations(self, message=None):\n \"\"\"\n Clears all specimen interpretations\n\n Parameters\n ----------\n message : message to display when warning the user that all\n fits will be deleted. 
If None, the default message is used (None is\n the default)\n \"\"\"\n if self.total_num_of_interpertations() == 0:\n print(\"There are no interpretations\")\n return True\n\n if message == None:\n message = \"All interpretations will be deleted and all unsaved data will be irretrievable, continue?\"\n dlg = wx.MessageDialog(self, caption=\"Delete?\",\n message=message, style=wx.OK | wx.CANCEL)\n result = self.show_dlg(dlg)\n dlg.Destroy()\n if result != wx.ID_OK:\n return False\n\n for specimen in list(self.pmag_results_data['specimens'].keys()):\n self.pmag_results_data['specimens'][specimen] = []\n # later on when high level means are fixed remove the below loop and loop over pmag_results_data\n for high_level_type in ['samples', 'sites', 'locations', 'study']:\n self.high_level_means[high_level_type] = {}\n self.current_fit = None\n if self.ie_open:\n self.ie.update_editor()\n return True"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmark the g_index'th measurement of current specimen good", "response": "def mark_meas_good(self, g_index):\n \"\"\"\n Marks the g_index'th measurement of the current specimen good\n\n Parameters\n ----------\n g_index : int that gives the index of the measurement to mark good,\n indexed from 0\n \"\"\"\n meas_index, ind_data = 0, []\n for i, meas_data in enumerate(self.mag_meas_data):\n if meas_data['er_specimen_name'] == self.s:\n ind_data.append(i)\n meas_index = ind_data[g_index]\n\n self.Data[self.s]['measurement_flag'][g_index] = 'g'\n if len(self.Data[self.s]['zijdblock'][g_index]) < 6:\n self.Data[self.s]['zijdblock'][g_index].append('g')\n self.Data[self.s]['zijdblock'][g_index][5] = 'g'\n if 'zijdblock_geo' in self.Data[self.s] and g_index < len(self.Data[self.s]['zijdblock_geo']):\n if len(self.Data[self.s]['zijdblock_geo'][g_index]) < 6:\n self.Data[self.s]['zijdblock_geo'][g_index].append('g')\n self.Data[self.s]['zijdblock_geo'][g_index][5] = 'g'\n if 'zijdblock_tilt' in self.Data[self.s] and 
g_index < len(self.Data[self.s]['zijdblock_tilt']):\n if len(self.Data[self.s]['zijdblock_tilt'][g_index]) < 6:\n self.Data[self.s]['zijdblock_tilt'][g_index].append('g')\n self.Data[self.s]['zijdblock_tilt'][g_index][5] = 'g'\n self.mag_meas_data[meas_index]['measurement_flag'] = 'g'\n\n if self.data_model == 3.0:\n meas_name = str(self.Data[self.s]['measurement_names'][g_index])\n mdf = self.con.tables['measurements'].df\n # check for multiple measurements with the same name\n if not isinstance(mdf.loc[meas_name], pd.Series):\n res = self.user_warning(\"Your measurements table has non-unique measurement names.\\nYou may end up marking more than one measurement as good.\\nRight click this measurement again to undo.\")\n # mark measurement as good\n mdf.loc[meas_name, 'quality'] = 'g'"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef mark_fit_good(self, fit, spec=None):\n if spec == None:\n for spec, fits in list(self.pmag_results_data['specimens'].items()):\n if fit in fits:\n break\n samp = self.Data_hierarchy['sample_of_specimen'][spec]\n if 'sample_orientation_flag' not in self.Data_info['er_samples'][samp]:\n self.Data_info['er_samples'][samp]['sample_orientation_flag'] = 'g'\n samp_flag = self.Data_info['er_samples'][samp]['sample_orientation_flag']\n if samp_flag == 'g':\n self.bad_fits.remove(fit)\n return True\n else:\n self.user_warning(\n \"Cannot mark this interpretation good its sample orientation has been marked bad\")\n return False", "response": "Mark a fit good so it is used in high level means\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mark_fit_bad(self, fit):\n if fit not in self.bad_fits:\n self.bad_fits.append(fit)\n return True\n else:\n return False", "response": "Mark a fit bad so it is excluded from high level means\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreading data 
from the current WD measurement.txt or magic_measurements.txt and sorts it into the main measurements data structures given below", "response": "def get_data(self):\n \"\"\"\n reads data from current WD measurement.txt or magic_measurements.txt\n depending on data model and sorts it into main measurements data\n structures given below:\n Data - {specimen: {\n zijdblock:[[treatment temp-str,dec-float, inc-float,\n mag_moment-float, ZI-float, meas_flag-str ('b','g'),\n method_codes-str]],\n zijdblock_geo:[[treatment temp-str,dec-float, inc-float,\n mag_moment-float, ZI-float, meas_flag-str ('b','g'),\n method_codes-str]],\n zijdblock_tilt:[[treatment temp-str,dec-float, inc-float,\n mag_moment-float, ZI-float, meas_flag-str ('b','g'),\n method_codes-str]],\n zijdblock_lab_treatments: [str],\n zijdblock_steps: [str],\n measurement_flag: [str ('b','g')],\n mag_meas_data_index: [int],\n csds: [float],\n pars: {},\n zdata: array.shape = 2x2 (float),\n zdata_geo: array.shape = 2x2 (float),\n zdata_tilt: array.shape = 2x2 (float),\n vector_diffs: [float],\n vds: float }}\n Data_hierarchy - {specimen: {\n study: {}\n locations: {}\n sites: {}\n samples: {}\n specimens: {}\n sample_of_specimen: {}\n site_of_specimen: {}\n site_of_sample: {}\n location_of_site: {}\n location_of_specimen: {}\n study_of_specimen: {}\n expedition_name_of_specimen: {} }}\n \"\"\"\n # ------------------------------------------------\n # Read magic measurement file and sort to blocks\n # ------------------------------------------------\n\n # All meas data information is stored in Data[specimen]={}\n Data = {}\n Data_hierarchy = {}\n Data_hierarchy['study'] = {}\n Data_hierarchy['locations'] = {}\n Data_hierarchy['sites'] = {}\n Data_hierarchy['samples'] = {}\n Data_hierarchy['specimens'] = {}\n Data_hierarchy['sample_of_specimen'] = {}\n Data_hierarchy['site_of_specimen'] = {}\n Data_hierarchy['site_of_sample'] = {}\n Data_hierarchy['location_of_site'] = {}\n Data_hierarchy['location_of_specimen'] = 
{}\n Data_hierarchy['study_of_specimen'] = {}\n Data_hierarchy['expedition_name_of_specimen'] = {}\n\n if self.data_model == 3:\n\n if 'measurements' not in self.con.tables:\n self.user_warning(\n \"Measurement data file is empty and the GUI cannot start, aborting\")\n return Data, Data_hierarchy\n\n if self.con.tables['measurements'].df.empty:\n self.user_warning(\n \"Measurement data file is empty and the GUI cannot start, aborting\")\n return Data, Data_hierarchy\n\n # extract specimen data from measurements table\n if not len(self.spec_data):\n specs = self.con.tables['measurements'].df['specimen'].unique()\n df = pd.DataFrame(index=specs, columns=['specimen'])\n df.index.name = 'specimen_name'\n df['specimen'] = specs\n self.con.tables['specimens'].df = df\n self.spec_data = df\n\n if not len(self.spec_data):\n self.user_warning(\n \"Measurement data file does not seem to have specimen data and the GUI cannot start, aborting\")\n return Data, Data_hierarchy\n\n\n if 'sample' not in self.spec_data.columns or 'sample' not in self.samp_data.columns:\n if 'specimen' not in self.spec_data.columns:\n self.spec_data['specimen'] = self.con.tables['measurements'].df['specimen']\n self.spec_data.set_index('specimen', inplace=True)\n self.spec_data['specimen'] = self.spec_data.index\n\n ui_dialog = demag_dialogs.user_input(\n self, [\"# of characters to remove\"], heading=\"Sample data could not be found attempting to generate sample names by removing characters from specimen names\")\n self.show_dlg(ui_dialog)\n ui_data = ui_dialog.get_values()\n try:\n samp_ncr = int(ui_data[1][\"# of characters to remove\"])\n except ValueError:\n self.user_warning(\n \"Invalid input, specimen names will be used for sample names instead\")\n samp_ncr = 0\n self.spec_data['sample'] = [x[:-samp_ncr]\n for x in self.spec_data['specimen']]\n\n self.samp_data['sample'] = self.spec_data['sample']\n self.samp_data.set_index('sample', inplace=True)\n self.samp_data['sample'] = 
self.samp_data.index\n\n if 'site' not in self.samp_data.columns or 'site' not in self.site_data.columns:\n ui_dialog = demag_dialogs.user_input(\n self, [\"# of characters to remove\", \"site delimiter\"], heading=\"No Site Data found attempting to create site names from specimen names\")\n self.show_dlg(ui_dialog)\n ui_data = ui_dialog.get_values()\n try:\n site_ncr = int(ui_data[1][\"# of characters to remove\"])\n self.samp_data['site'] = [x[:-site_ncr]\n for x in self.spec_data['specimen']]\n except ValueError:\n try:\n sd = ui_data[1][\"site delimiter\"]\n self.samp_data['site'] = [\n x.split(sd)[0] for x in self.spec_data['specimen']]\n except ValueError:\n self.samp_data['site'] = [\n x for x in self.spec_data['specimen']]\n\n self.site_data['site'] = self.samp_data['site']\n self.site_data.drop_duplicates(inplace=True)\n self.site_data.set_index('site', inplace=True)\n self.site_data['site'] = self.site_data.index\n\n if 'location' not in self.site_data.columns or 'location' not in self.loc_data.columns:\n ui_dialog = demag_dialogs.user_input(\n self, [\"location name for all sites\"], heading=\"No Location found\")\n self.show_dlg(ui_dialog)\n ui_data = ui_dialog.get_values()\n self.site_data['location'] = ui_data[1][\"location name for all sites\"]\n\n self.loc_data['location'] = self.site_data['location']\n self.loc_data.drop_duplicates(inplace=True)\n self.loc_data.set_index('location', inplace=True)\n self.loc_data['location'] = self.loc_data.index\n\n # add data to other dataframes\n self.con.propagate_location_to_measurements()\n self.con.propagate_location_to_specimens()\n\n # get measurement data from contribution object\n meas_container = self.con.tables['measurements']\n meas_data3_0 = meas_container.df\n\n meas_data3_0.replace({'specimen': {nan: 'unknown'}, 'sample': {nan: 'unknown'}, 'site': {\n nan: 'unknown'}, 'location': {nan: 'unknown'}}, inplace=True)\n meas_data3_0['specimen'] = meas_data3_0['specimen'].apply(str)\n 
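The replace/apply(str) idiom above guarantees that the hierarchy columns hold plain strings, with missing values mapped to 'unknown'. A minimal reproduction of that idiom on a toy table (the column values here are invented for illustration):

```python
import numpy as np
import pandas as pd

# toy measurements-like table with missing hierarchy values
df = pd.DataFrame({
    'specimen': ['s1', np.nan, 's3'],
    'site': ['site1', 'site1', np.nan],
})

# same pattern as above: map NaN to 'unknown', then force string dtype
df.replace({'specimen': {np.nan: 'unknown'},
            'site': {np.nan: 'unknown'}}, inplace=True)
df['specimen'] = df['specimen'].apply(str)
df['site'] = df['site'].apply(str)
```

Forcing string dtype matters because the NaN placeholders are floats, and mixed float/str keys would break the dictionary-based hierarchy lookups built later.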
meas_data3_0['sample'] = meas_data3_0['sample'].apply(str)\n meas_data3_0['site'] = meas_data3_0['site'].apply(str)\n meas_data3_0['location'] = meas_data3_0['location'].apply(str)\n\n # do some filtering\n# if 'location' in meas_data3_0.columns:\n# if any(meas_data3_0['location'].isnull()):\n# print(\"-W- Some measurements are missing location data, and will not be used\")\n# meas_data3_0 = meas_data3_0[meas_data3_0['location'].notnull()]\n# meas_data3_0.replace({'location':float('nan')},'unknown',inplace=True)\n# if 'site' in meas_data3_0.columns:\n# if any(meas_data3_0['site'].isnull()):\n# print(\"-W- Some measurements are missing site data, and will not be used\")\n# meas_data3_0 = meas_data3_0[meas_data3_0['site'].notnull()]\n# meas_data3_0.replace({'site':float('nan')},'unknown',inplace=True)\n# if 'sample' in meas_data3_0.columns:\n# if any(meas_data3_0['sample'].isnull()):\n# print(\"-W- Some measurements are missing sample data, and will not be used\")\n# meas_data3_0 = meas_data3_0[meas_data3_0['sample'].notnull()]\n# meas_data3_0.replace({'sample':float('nan')},'unknown',inplace=True)\n# if 'specimen' in meas_data3_0.columns:\n# missing = meas_data3_0[meas_data3_0['specimen'].isnull()]\n# if len(missing):\n# print(\"-W- {} measurements are missing specimen data, and will not be used\".format(missing))\n# meas_data3_0 = meas_data3_0[meas_data3_0['specimen'].notnull()]\n# meas_data3_0.replace({'specimen':float('nan')},'unknown',inplace=True)\n\n# col_names = ['specimen', 'sample', 'site', 'location']\n# for col_name in col_names:\n# if col_name in meas_data3_0.columns:\n# pruned = meas_data3_0[meas_data3_0[col_name].apply(cb.not_null)]\n# num_missing = len(meas_data3_0) - len(pruned)\n# if num_missing:\n# msg = \"{} measurements cannot be associated with a {} and will be excluded\\nTry using Pmag GUI (step 3) to make sure you have provided the full chain from specimen to location.\".format(num_missing, col_name)\n# pw.simple_warning(msg)\n# print(\"-W- {} 
measurements are missing {} data and will be excluded\".format(num_missing, col_name))\n# meas_data3_0 = pruned\n Mkeys = ['magn_moment', 'magn_volume', 'magn_mass']\n# fish out all the relevant data\n meas_data3_0 = meas_data3_0[meas_data3_0['method_codes'].str.contains(\n 'LT-NO|LT-AF-Z|LT-T-Z|LT-M-Z|LT-LT-Z') == True]\n if not len(meas_data3_0):\n self.user_warning(\"Your measurements table contains none of the required method codes to run Demag GUI: [LT-NO, LT-AF-Z, LT-T-Z, LT-M-Z, LT-LT-Z]\")\n return {}, {}\n# now convert back to 2.5 changing only those keys that are necessary for thellier_gui\n meas_con_dict = map_magic.get_thellier_gui_meas_mapping(\n meas_data3_0, output=2)\n intensity_col = cb.get_intensity_col(meas_data3_0)\n if not intensity_col:\n self.user_warning(\"Your measurements table must have one of the following columns to run Demag GUI: 'magn_moment', 'magn_volume', 'magn_mass', or 'magn_uncal'\")\n return {}, {}\n\n print('-I- Using {} for intensity'.format(intensity_col))\n self.intensity_col = meas_con_dict[intensity_col]\n meas_data2_5 = meas_data3_0.rename(columns=meas_con_dict)\n # make a list of dictionaries to maintain backward compatibility\n mag_meas_data = meas_data2_5.to_dict(\"records\")\n\n else: # data model 2.5\n try:\n print((\"-I- Read magic file %s\" % self.magic_file))\n except ValueError:\n self.magic_measurement = self.choose_meas_file()\n print((\"-I- Read magic file %s\" % self.magic_file))\n mag_meas_data, file_type = pmag.magic_read(self.magic_file)\n if file_type != \"magic_measurements\":\n self.user_warning(\"You have selected data model 2.5, but your measurements file is either not in 2.5, or is not a measurements file.\\n{} has file type: {}\".format(self.magic_file, file_type))\n return {}, {}\n\n\n self.mag_meas_data = self.merge_pmag_recs(mag_meas_data)\n\n # get list of unique specimen names with measurement data\n CurrRec = []\n sids = pmag.get_specs(self.mag_meas_data) # specimen ID's\n for s in sids:\n if 
s not in list(Data.keys()):\n Data[s] = {}\n Data[s]['zijdblock'] = []\n Data[s]['zijdblock_geo'] = []\n Data[s]['zijdblock_tilt'] = []\n Data[s]['zijdblock_lab_treatments'] = []\n Data[s]['pars'] = {}\n Data[s]['csds'] = []\n Data[s]['zijdblock_steps'] = []\n Data[s]['measurement_flag'] = [] # a list of points 'g' or 'b'\n # index in original magic_measurements.txt\n Data[s]['mag_meas_data_index'] = []\n Data[s]['measurement_names'] = []\n\n prev_s = None\n cnt = -1\n # list of excluded lab protocols. copied from pmag.find_dmag_rec(s,data)\n self.excluded_methods = [\"LP-AN-ARM\", \"LP-AN-TRM\", \"LP-ARM-AFD\",\n \"LP-ARM2-AFD\", \"LP-TRM-AFD\", \"LP-TRM\", \"LP-TRM-TD\", \"LP-X\", \"LP-PI-ARM\"]\n self.included_methods = [\n \"LT-NO\", \"LT-AF-Z\", \"LT-T-Z\", \"LT-M-Z\", \"LT-LT-Z\"]\n# self.mag_meas_data.sort(key=meas_key)\n # asiigned default values for NRM\n if len(self.mag_meas_data) > 0 and self.intensity_col in list(self.mag_meas_data[0].keys()):\n NRM = float(self.mag_meas_data[0][self.intensity_col])\n for rec in self.mag_meas_data:\n # if \"measurement_number\" in rec.keys() and str(rec['measurement_number']) == '1' and \"magic_method_codes\" in rec.keys() and \"LT-NO\" not in rec[\"magic_method_codes\"].split(':'):\n # NRM = 1 #not really sure how to handle this case but assume that data is already normalized\n cnt += 1 # index counter\n s = rec[\"er_specimen_name\"]\n if \"er_sample_name\" in list(rec.keys()):\n sample = rec[\"er_sample_name\"]\n else:\n sample = ''\n if \"er_site_name\" in list(rec.keys()):\n site = rec[\"er_site_name\"]\n else:\n site = ''\n if \"er_location_name\" in list(rec.keys()):\n location = rec[\"er_location_name\"]\n else:\n location = ''\n expedition_name = \"\"\n if \"er_expedition_name\" in list(rec.keys()):\n expedition_name = rec[\"er_expedition_name\"]\n\n methods = rec[\"magic_method_codes\"].replace(\n \" \", \"\").strip(\"\\n\").split(\":\")\n LP_methods = []\n LT_methods = []\n\n for k in ['zdata', 'zdata_geo', 
'zdata_tilt', 'vector_diffs']:\n if k not in Data[s]:\n Data[s][k] = []\n\n for i in range(len(methods)):\n methods[i] = methods[i].strip()\n if 'measurement_flag' not in list(rec.keys()):\n rec['measurement_flag'] = 'g'\n SKIP = True\n lab_treatment = \"\"\n for meth in methods:\n if 'DIR' in meth:\n SKIP = False\n if meth in self.included_methods:\n lab_treatment = meth\n SKIP = False\n if \"LP\" in meth:\n LP_methods.append(meth)\n for meth in self.excluded_methods:\n if meth in methods:\n SKIP = True\n break\n if SKIP:\n continue\n tr, LPcode, measurement_step_unit = \"\", \"\", \"\"\n if \"LT-NO\" in methods:\n tr = 0\n if prev_s != s and self.intensity_col in rec:\n try:\n NRM = float(rec[self.intensity_col])\n except ValueError:\n NRM = 1\n for method in methods:\n if \"AF\" in method:\n LPcode = \"LP-DIR-AF\"\n measurement_step_unit = \"mT\"\n if \"TRM\" in method:\n LPcode = \"LP-DIR-T\"\n measurement_step_unit = \"C\"\n elif \"LT-AF-Z\" in methods:\n try:\n tr = float(rec[\"treatment_ac_field\"])*1e3 # (mT)\n except ValueError:\n print((\"Could not convert ac field for measurement, was given %s, skipping\" %\n rec[\"treatment_ac_field\"]))\n continue\n measurement_step_unit = \"mT\" # in magic its T in GUI its mT\n LPcode = \"LP-DIR-AF\"\n elif \"LT-T-Z\" in methods or \"LT-LT-Z\" in methods:\n try:\n tr = float(rec[\"treatment_temp\"])-273. 
# celsius\n except ValueError:\n print(\n (\"Could not convert temperature for measurement, was given %s, skipping\" % rec[\"treatment_temp\"]))\n continue\n measurement_step_unit = \"C\" # in magic its K in GUI its C\n LPcode = \"LP-DIR-T\"\n elif \"LT-M-Z\" in methods:\n # temporary for microwave\n tr = float(rec[\"measurement_number\"])\n else:\n # attempt to determine from treatment data\n if all(im not in methods for im in self.included_methods):\n if 'treatment_temp' in list(rec.keys()) and not str(rec['treatment_temp']).isalpha() and rec['treatment_temp'] != '' and float(rec['treatment_temp']) > 0:\n tr = float(rec[\"treatment_temp\"])-273. # celsius\n measurement_step_unit = \"C\" # in magic its K in GUI its C\n LPcode = \"LP-DIR-T\"\n elif 'treatment_ac_field' in list(rec.keys()) and not str(rec['treatment_ac_field']).isalpha() and rec['treatment_ac_field'] != '' and float(rec['treatment_ac_field']) > 0:\n tr = float(rec[\"treatment_ac_field\"])*1e3 # (mT)\n measurement_step_unit = \"mT\" # in magic its T in GUI its mT\n LPcode = \"LP-DIR-AF\"\n else:\n tr = 0\n if prev_s != s and self.intensity_col in rec:\n try:\n NRM = float(rec[self.intensity_col])\n except ValueError:\n NRM = 1\n for method in methods:\n if \"AF\" in method:\n LPcode = \"LP-DIR-AF\"\n measurement_step_unit = \"mT\"\n if \"TRM\" in method:\n LPcode = \"LP-DIR-T\"\n measurement_step_unit = \"C\"\n else:\n tr = float(rec[\"measurement_number\"])\n if prev_s != s and len(Data[s]['zijdblock']) > 0:\n NRM = Data[s]['zijdblock'][0][3]\n\n ZI = 0\n if tr != \"\":\n Data[s]['mag_meas_data_index'].append(\n cnt) # magic_measurement file intex\n if not int(self.data_model) == 2:\n try:\n Data[s]['measurement_names'].append(rec['measurement'])\n except KeyError:\n Data[s]['measurement_names'].append(rec['measurement_number'])\n Data[s]['zijdblock_lab_treatments'].append(lab_treatment)\n if measurement_step_unit != \"\":\n if 'measurement_step_unit' in list(Data[s].keys()):\n if 
measurement_step_unit not in Data[s]['measurement_step_unit'].split(\":\"):\n Data[s]['measurement_step_unit'] = Data[s]['measurement_step_unit'] + \\\n \":\"+measurement_step_unit\n else:\n Data[s]['measurement_step_unit'] = measurement_step_unit\n dec, inc, inten = \"\", \"\", \"\"\n if \"measurement_dec\" in list(rec.keys()) and cb.not_null(rec[\"measurement_dec\"], False):\n dec = float(rec[\"measurement_dec\"])\n else:\n continue\n if \"measurement_inc\" in list(rec.keys()) and cb.not_null(rec[\"measurement_inc\"], False):\n inc = float(rec[\"measurement_inc\"])\n else:\n continue\n if self.intensity_col in list(rec.keys()) and cb.not_null(rec[self.intensity_col], False):\n intensity = float(rec[self.intensity_col])\n else:\n intensity = 1. # just assume a normal vector\n if 'magic_instrument_codes' not in list(rec.keys()):\n rec['magic_instrument_codes'] = ''\n if 'measurement_csd' in list(rec.keys()):\n csd = str(rec['measurement_csd'])\n else:\n csd = ''\n Data[s]['zijdblock'].append(\n [tr, dec, inc, intensity, ZI, rec['measurement_flag'], rec['magic_instrument_codes']])\n Data[s]['csds'].append(csd)\n DIR = [dec, inc, intensity/NRM]\n cart = pmag.dir2cart(DIR)\n Data[s]['zdata'].append(array([cart[0], cart[1], cart[2]]))\n\n if 'magic_experiment_name' in list(Data[s].keys()) and Data[s]['magic_experiment_name'] != rec[\"magic_experiment_name\"]:\n print((\"-E- ERROR: specimen %s has more than one demagnetization experiment name. 
You may need to merge them into one experiment name.\" % (s)))\n if float(tr) == 0 or float(tr) == 273:\n Data[s]['zijdblock_steps'].append(\"0\")\n elif measurement_step_unit == \"C\":\n Data[s]['zijdblock_steps'].append(\n \"%.0f%s\" % (tr, measurement_step_unit))\n else:\n Data[s]['zijdblock_steps'].append(\n \"%.1f%s\" % (tr, measurement_step_unit))\n # --------------\n if 'magic_experiment_name' in list(rec.keys()):\n Data[s]['magic_experiment_name'] = rec[\"magic_experiment_name\"]\n if \"magic_instrument_codes\" in list(rec.keys()):\n Data[s]['magic_instrument_codes'] = rec['magic_instrument_codes']\n Data[s][\"magic_method_codes\"] = LPcode\n\n # --------------\n # \"good\" or \"bad\" data\n # --------------\n\n flag = 'g'\n if 'measurement_flag' in list(rec.keys()):\n if str(rec[\"measurement_flag\"]) == 'b':\n flag = 'b'\n Data[s]['measurement_flag'].append(flag)\n\n # geographic coordinates\n try:\n sample_azimuth = float(\n self.Data_info[\"er_samples\"][sample]['sample_azimuth'])\n sample_dip = float(\n self.Data_info[\"er_samples\"][sample]['sample_dip'])\n d_geo, i_geo = pmag.dogeo(\n dec, inc, sample_azimuth, sample_dip)\n # if d_geo or i_geo is null, we can't do geographic coordinates\n # otherwise, go ahead\n if not any([np.isnan(val) for val in [d_geo, i_geo]]):\n Data[s]['zijdblock_geo'].append(\n [tr, d_geo, i_geo, intensity, ZI, rec['measurement_flag'], rec['magic_instrument_codes']])\n DIR = [d_geo, i_geo, intensity/NRM]\n cart = pmag.dir2cart(DIR)\n Data[s]['zdata_geo'].append([cart[0], cart[1], cart[2]])\n except (IOError, KeyError, ValueError, TypeError) as e:\n pass\n # if prev_s != s:\n # print( \"-W- can't find sample_azimuth,sample_dip for sample %s\"%sample)\n\n # tilt-corrected coordinates\n try:\n sample_bed_dip_direction = float(\n self.Data_info[\"er_samples\"][sample]['sample_bed_dip_direction'])\n sample_bed_dip = float(\n self.Data_info[\"er_samples\"][sample]['sample_bed_dip'])\n d_tilt, i_tilt = pmag.dotilt(\n d_geo, i_geo, 
sample_bed_dip_direction, sample_bed_dip)\n Data[s]['zijdblock_tilt'].append(\n [tr, d_tilt, i_tilt, intensity, ZI, rec['measurement_flag'], rec['magic_instrument_codes']])\n DIR = [d_tilt, i_tilt, intensity/NRM]\n cart = pmag.dir2cart(DIR)\n Data[s]['zdata_tilt'].append([cart[0], cart[1], cart[2]])\n except (IOError, KeyError, TypeError, ValueError, UnboundLocalError) as e:\n pass\n # if prev_s != s:\n # printd(\"-W- can't find tilt-corrected data for sample %s\"%sample)\n\n if len(Data[s]['zdata']) > 1:\n Data[s]['vector_diffs'].append(\n sqrt(sum((array(Data[s]['zdata'][-2])-array(Data[s]['zdata'][-1]))**2)))\n\n # ---------------------\n # hierarchy is determined from magic_measurements.txt\n # ---------------------\n\n if sample not in list(Data_hierarchy['samples'].keys()):\n Data_hierarchy['samples'][sample] = {}\n Data_hierarchy['samples'][sample]['specimens'] = []\n\n if site not in list(Data_hierarchy['sites'].keys()):\n Data_hierarchy['sites'][site] = {}\n Data_hierarchy['sites'][site]['samples'] = []\n Data_hierarchy['sites'][site]['specimens'] = []\n\n if location not in list(Data_hierarchy['locations'].keys()):\n Data_hierarchy['locations'][location] = {}\n Data_hierarchy['locations'][location]['sites'] = []\n Data_hierarchy['locations'][location]['samples'] = []\n Data_hierarchy['locations'][location]['specimens'] = []\n\n if 'this study' not in list(Data_hierarchy['study'].keys()):\n Data_hierarchy['study']['this study'] = {}\n Data_hierarchy['study']['this study']['sites'] = []\n Data_hierarchy['study']['this study']['samples'] = []\n Data_hierarchy['study']['this study']['specimens'] = []\n\n if s not in Data_hierarchy['samples'][sample]['specimens']:\n Data_hierarchy['samples'][sample]['specimens'].append(s)\n\n if s not in Data_hierarchy['sites'][site]['specimens']:\n Data_hierarchy['sites'][site]['specimens'].append(s)\n\n if s not in Data_hierarchy['locations'][location]['specimens']:\n 
Data_hierarchy['locations'][location]['specimens'].append(s)\n\n if s not in Data_hierarchy['study']['this study']['specimens']:\n Data_hierarchy['study']['this study']['specimens'].append(s)\n\n if sample not in Data_hierarchy['sites'][site]['samples']:\n Data_hierarchy['sites'][site]['samples'].append(sample)\n\n if sample not in Data_hierarchy['locations'][location]['samples']:\n Data_hierarchy['locations'][location]['samples'].append(sample)\n\n if sample not in Data_hierarchy['study']['this study']['samples']:\n Data_hierarchy['study']['this study']['samples'].append(sample)\n\n if site not in Data_hierarchy['locations'][location]['sites']:\n Data_hierarchy['locations'][location]['sites'].append(site)\n\n if site not in Data_hierarchy['study']['this study']['sites']:\n Data_hierarchy['study']['this study']['sites'].append(site)\n\n # Data_hierarchy['specimens'][s]=sample\n Data_hierarchy['sample_of_specimen'][s] = sample\n Data_hierarchy['site_of_specimen'][s] = site\n Data_hierarchy['site_of_sample'][sample] = site\n Data_hierarchy['location_of_site'][site] = location\n Data_hierarchy['location_of_specimen'][s] = location\n if expedition_name != \"\":\n Data_hierarchy['expedition_name_of_specimen'][s] = expedition_name\n prev_s = s\n\n print(\"-I- done sorting meas data\")\n self.specimens = list(Data.keys())\n\n for s in self.specimens:\n if len(Data[s]['zdata']) > 0:\n Data[s]['vector_diffs'].append(\n sqrt(sum(array(Data[s]['zdata'][-1])**2))) # last vector of the vds\n vds = sum(Data[s]['vector_diffs']) # vds calculation\n Data[s]['vector_diffs'] = array(Data[s]['vector_diffs'])\n Data[s]['vds'] = vds\n Data[s]['zdata'] = array(Data[s]['zdata'])\n Data[s]['zdata_geo'] = array(Data[s]['zdata_geo'])\n Data[s]['zdata_tilt'] = array(Data[s]['zdata_tilt'])\n return(Data, Data_hierarchy)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_interpretations3(self):\n if \"specimen\" not in 
self.spec_data.columns or \\\n \"meas_step_min\" not in self.spec_data.columns or \\\n \"meas_step_max\" not in self.spec_data.columns or \\\n \"meas_step_unit\" not in self.spec_data.columns or \\\n \"method_codes\" not in self.spec_data.columns:\n return\n if \"dir_comp\" in self.spec_data.columns:\n fnames = 'dir_comp'\n elif \"dir_comp_name\" in self.spec_data.columns:\n fnames = 'dir_comp_name'\n else:\n fnames = 'dir_comp'\n self.spec_data['dir_comp'] = 'Fit 1'\n #print(\"No specimen interpretation name found in specimens.txt\")\n #return\n if \"result_quality\" not in self.spec_data.columns:\n self.spec_data[\"result_quality\"] = \"g\"\n if \"dir_tilt_correction\" not in self.spec_data.columns:\n self.spec_data[\"dir_tilt_correction\"] = \"\"\n fdict = self.spec_data[['specimen', fnames, 'meas_step_min', 'meas_step_max', 'meas_step_unit',\n 'dir_tilt_correction', 'method_codes', 'result_quality']].to_dict(\"records\")\n for i in range(len(fdict)):\n spec = fdict[i]['specimen']\n if spec not in self.specimens:\n print((\"-E- specimen %s does not exist in measurement data\" % (spec)))\n continue\n fname = fdict[i][fnames]\n if fname == None or (spec in list(self.pmag_results_data['specimens'].keys()) and fname in [x.name for x in self.pmag_results_data['specimens'][spec]]):\n continue\n if fdict[i]['meas_step_unit'] == \"K\":\n fmin = int(float(fdict[i]['meas_step_min'])-273)\n fmax = int(float(fdict[i]['meas_step_max'])-273)\n if fmin == 0:\n fmin = str(fmin)\n else:\n fmin = str(fmin)+\"C\"\n if fmax == 0:\n fmax = str(fmax)\n else:\n fmax = str(fmax)+\"C\"\n elif fdict[i]['meas_step_unit'] == \"T\":\n fmin = float(fdict[i]['meas_step_min'])*1000\n fmax = float(fdict[i]['meas_step_max'])*1000\n if fmin == 0:\n fmin = str(int(fmin))\n else:\n fmin = str(fmin)+\"mT\"\n if fmax == 0:\n fmax = str(int(fmax))\n else:\n fmax = str(fmax)+\"mT\"\n else:\n fmin = fdict[i]['meas_step_min']\n fmax = fdict[i]['meas_step_max']\n\n PCA_types = [\"DE-BFL\", \"DE-BFL-A\", 
\"DE-BFL-O\", \"DE-FM\", \"DE-BFP\"]\n PCA_type_list = [x for x in str(fdict[i]['method_codes']).split(\n ':') if x.strip() in PCA_types]\n if len(PCA_type_list) > 0:\n PCA_type = PCA_type_list[0].strip()\n else:\n PCA_type = \"DE-BFL\"\n\n fit = self.add_fit(spec, fname, fmin, fmax, PCA_type)\n\n if fdict[i]['result_quality'] == 'b':\n self.bad_fits.append(fit)", "response": "Get interpretations from contribution objects in data model 3. 0."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_data_info(self):\n Data_info = {}\n data_er_samples = {}\n data_er_sites = {}\n data_er_locations = {}\n data_er_ages = {}\n\n if self.data_model == 3.0:\n print((\"data model: %1.1f\" % (self.data_model)))\n Data_info[\"er_samples\"] = []\n Data_info[\"er_sites\"] = []\n Data_info[\"er_locations\"] = []\n Data_info[\"er_ages\"] = []\n\n # self.magic_file may have a full path, but this breaks cb.Contribution\n # determine if magic_file exists in WD, and if it doesn't, copy it in\n\n magic_file_real = os.path.realpath(self.magic_file)\n magic_file_short = os.path.split(self.magic_file)[1]\n WD_file_real = os.path.realpath(\n os.path.join(self.WD, magic_file_short))\n if magic_file_real == WD_file_real:\n fnames = {'measurements': magic_file_short}\n else:\n # copy measurements file to WD, keeping original name\n shutil.copy(magic_file_real, WD_file_real)\n fnames = {'measurements': magic_file_short}\n self.con = cb.Contribution(self.WD, custom_filenames=fnames, read_tables=[\n 'measurements', 'specimens', 'samples', 'sites', 'locations', 'criteria', 'ages'])\n if 'specimens' in self.con.tables:\n spec_container = self.con.tables['specimens']\n self.spec_data = spec_container.df\n else:\n self.con.add_empty_magic_table('specimens')\n self.spec_data = self.con.tables['specimens'].df\n if 'samples' in self.con.tables:\n samp_container = self.con.tables['samples']\n samp_container.front_and_backfill(['azimuth', 'dip'])\n 
self.samp_data = samp_container.df\n samp_data2 = self.samp_data.rename(\n columns=map_magic.samp_magic3_2_magic2_map)\n data_er_samples = samp_data2.T.to_dict()\n else:\n self.con.add_empty_magic_table('samples')\n self.samp_data = self.con.tables['samples'].df\n if 'sites' in self.con.tables:\n site_container = self.con.tables['sites']\n self.site_data = site_container.df\n if 'age' in self.site_data.columns:\n self.site_data = self.site_data[self.site_data['age'].notnull(\n )]\n age_ids = [col for col in self.site_data.columns if col.startswith(\n \"age\") or col == \"site\"]\n age_data = self.site_data[age_ids].rename(\n columns=map_magic.site_magic3_2_magic2_map)\n # save this in 2.5 format\n er_ages = age_data.to_dict('records')\n data_er_ages = {}\n for s in er_ages:\n s = self.convert_ages_to_calendar_year(s)\n data_er_ages[s['er_site_name']] = s\n sites = self.site_data.rename(\n columns=map_magic.site_magic3_2_magic2_map)\n # pick out what is needed by thellier_gui and put in 2.5 format\n er_sites = sites.to_dict('records')\n data_er_sites = {}\n for s in er_sites:\n data_er_sites[s['er_site_name']] = s\n else:\n self.con.add_empty_magic_table('sites')\n self.site_data = self.con.tables['sites'].df\n if 'locations' in self.con.tables:\n location_container = self.con.tables[\"locations\"]\n self.loc_data = location_container.df # only need this for saving tables\n if self.loc_data['location'].isnull().any():\n self.loc_data.replace(\n {'location': {None: 'unknown'}}, inplace=True)\n self.loc_data.set_index('location', inplace=True)\n self.loc_data['location'] = self.loc_data.index\n loc2_data = self.loc_data.rename(\n columns=map_magic.loc_magic3_2_magic2_map)\n data_er_locations = loc2_data.to_dict('index')\n else:\n self.con.add_empty_magic_table('locations')\n self.loc_data = self.con.tables['locations'].df\n\n else: # try 2.5 data model\n\n print((\"data model: %1.1f\" % (self.data_model)))\n self.read_magic_file(os.path.join(\n self.WD, 
\"er_samples.txt\"), 'er_sample_name')\n\n try:\n data_er_samples = self.read_magic_file(\n os.path.join(self.WD, \"er_samples.txt\"), 'er_sample_name')\n except:\n print(\"-W- Can't find er_sample.txt in project directory\")\n\n try:\n data_er_sites = self.read_magic_file(\n os.path.join(self.WD, \"er_sites.txt\"), 'er_site_name')\n except:\n print(\"-W- Can't find er_sites.txt in project directory\")\n\n try:\n data_er_locations = self.read_magic_file(os.path.join(\n self.WD, \"er_locations.txt\"), 'er_location_name')\n except:\n print(\"-W- Can't find er_locations.txt in project directory\")\n\n try:\n data_er_ages = self.read_magic_file(\n os.path.join(self.WD, \"er_ages.txt\"), 'er_sample_name')\n except:\n try:\n data_er_ages = self.read_magic_file(\n os.path.join(self.WD, \"er_ages.txt\"), 'er_site_name')\n except:\n print(\"-W- Can't find er_ages in project directory\")\n\n Data_info[\"er_samples\"] = data_er_samples\n Data_info[\"er_sites\"] = data_er_sites\n Data_info[\"er_locations\"] = data_er_locations\n Data_info[\"er_ages\"] = data_er_ages\n\n return(Data_info)", "response": "Get the data structure of the current instance of the class."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_preferences(self):\n # default\n preferences = {}\n preferences['gui_resolution'] = 100.\n preferences['show_Zij_treatments'] = True\n preferences['show_Zij_treatments_steps'] = 2.\n preferences['show_eqarea_treatments'] = False\n preferences['auto_save'] = True\n # preferences['show_statistics_on_gui']=[\"int_n\",\"int_ptrm_n\",\"frac\",\"scat\",\"gmax\",\"b_beta\",\"int_mad\",\"dang\",\"f\",\"fvds\",\"g\",\"q\",\"drats\"]#,'ptrms_dec','ptrms_inc','ptrms_mad','ptrms_angle']\n #\n # try to read preferences file:\n #user_data_dir = find_pmag_dir.find_user_data_dir(\"demag_gui\")\n # if not user_data_dir:\n # return preferences\n # if os.path.exists(user_data_dir):\n # pref_file = os.path.join(user_data_dir, 
\"demag_gui_preferences.json\")\n # if os.path.exists(pref_file):\n # with open(pref_file, \"r\") as pfile:\n # return json.load(pfile)\n return preferences", "response": "Gets preferences for certain display variables from the zeq_gui_preferences.json file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef read_magic_file(self, path, sort_by_this_name):\n DATA = {}\n try:\n with open(path, 'r') as finput:\n lines = list(finput.readlines()[1:])\n except FileNotFoundError:\n return {}\n # fin=open(path,'r')\n # fin.readline()\n line = lines[0]\n header = line.strip('\\n').split('\\t')\n error_strings = []\n for line in lines[1:]:\n tmp_data = {}\n tmp_line = line.strip('\\n').split('\\t')\n for i in range(len(tmp_line)):\n tmp_data[header[i]] = tmp_line[i]\n if tmp_data[sort_by_this_name] in list(DATA.keys()):\n error_string = \"-E- ERROR: magic file %s has more than one line for %s %s\" % (\n path, sort_by_this_name, tmp_data[sort_by_this_name])\n # only print each error message once\n if error_string not in error_strings:\n print(error_string)\n error_strings.append(error_string)\n DATA[tmp_data[sort_by_this_name]] = tmp_data\n # the with-statement already closed the file\n return(DATA)", "response": "Reads a MagIC-formatted data file from path and returns a dictionary of its rows keyed by the sort_by_this_name column"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read_from_LSQ(self, LSQ_file):\n cont = self.user_warning(\n \"LSQ import only works if all measurements are present and not averaged during import from magnetometer files to magic format. 
Do you wish to continue reading interpretations?\")\n if not cont:\n return\n self.clear_interpretations(\n message=\"\"\"Do you wish to clear all previous interpretations on import?\"\"\")\n old_s = self.s\n for specimen in self.specimens:\n self.select_specimen(specimen)\n for i in range(len(self.Data[specimen]['zijdblock'])):\n self.mark_meas_good(i)\n self.select_specimen(old_s)\n print(\"Reading LSQ file\")\n interps = read_LSQ(LSQ_file)\n for interp in interps:\n specimen = interp['er_specimen_name']\n if specimen not in self.specimens:\n print(\n (\"specimen %s has no registered measurement data, skipping interpretation import\" % specimen))\n continue\n PCA_type = interp['magic_method_codes'].split(':')[0]\n tmin = self.Data[specimen]['zijdblock_steps'][interp['measurement_min_index']]\n tmax = self.Data[specimen]['zijdblock_steps'][interp['measurement_max_index']]\n if 'specimen_comp_name' in list(interp.keys()):\n name = interp['specimen_comp_name']\n else:\n name = None\n new_fit = self.add_fit(specimen, name, tmin, tmax, PCA_type)\n if 'bad_measurement_index' in list(interp.keys()):\n old_s = self.s\n self.select_specimen(specimen)\n for bmi in interp[\"bad_measurement_index\"]:\n try:\n self.mark_meas_bad(bmi)\n except IndexError:\n print(\n \"Magic measurements length does not match that recorded in LSQ file\")\n self.select_specimen(old_s)\n if self.ie_open:\n self.ie.update_editor()\n self.update_selection()", "response": "Reads interpretations from an LSQ file, optionally clearing all previous interpretations, and registers them as fits."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread a .redo formatted file and replace all current interpretations with interpretations taken from the 
.redo file.", "response": "def read_redo_file(self, redo_file):\n \"\"\"\n Reads a .redo formatted file and replaces all current interpretations\n with interpretations taken from the .redo file\n\n Parameters\n ----------\n redo_file : path to .redo file to read\n \"\"\"\n if not self.clear_interpretations():\n return\n print(\"-I- read redo file and processing new bounds\")\n fin = open(redo_file, 'r')\n\n new_s = \"\"\n for Line in fin.read().splitlines():\n line = Line.split('\\t')\n specimen = line[0]\n if specimen.startswith(\"current_\"):\n # slice off the prefix (lstrip would strip a character set, not the prefix)\n specimen = specimen[len(\"current_\"):]\n new_s = specimen\n if len(line) < 6:\n print((\"insufficient data for specimen %s\" %\n (line[0],)))\n continue\n if len(line) == 6:\n line.append('g')\n if specimen not in self.specimens:\n print(\n (\"specimen %s not found in this data set and will be ignored\" % (specimen)))\n continue\n\n tmin, tmax = self.parse_bound_data(line[2], line[3], specimen)\n new_fit = self.add_fit(\n specimen, line[4], tmin, tmax, line[1], line[5])\n\n if line[6] == 'b' and new_fit != None:\n self.bad_fits.append(new_fit)\n\n fin.close()\n\n if new_s != \"\":\n self.select_specimen(new_s)\n if (self.s not in self.pmag_results_data['specimens']) or (not self.pmag_results_data['specimens'][self.s]):\n self.current_fit = None\n else:\n self.current_fit = self.pmag_results_data['specimens'][self.s][-1]\n self.calculate_high_levels_data()\n if self.ie_open:\n self.ie.update_editor()\n self.update_selection()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nchange Demag GUI's current WD to new_WD", "response": "def change_WD(self, new_WD, meas_file=\"\"):\n \"\"\"\n Changes Demag GUI's current WD to new_WD if possible\n\n Parameters\n ----------\n new_WD : WD to change to current GUI's WD\n \"\"\"\n new_WD = os.path.abspath(new_WD)\n if not os.path.isdir(new_WD):\n return\n self.WD = new_WD\n if not meas_file:\n if 
self.data_model is None:\n if os.path.isfile(os.path.join(self.WD, \"measurements.txt\")) and os.path.isfile(os.path.join(self.WD, \"magic_measurements.txt\")):\n ui_dialog = demag_dialogs.user_input(self, ['data_model'], parse_funcs=[\n float], heading=\"More than one measurement file found in CWD with different data models; please input preferred data model (2.5, 3.0)\", values=[3])\n self.show_dlg(ui_dialog)\n ui_data = ui_dialog.get_values()\n self.data_model = ui_data[1]['data_model']\n elif os.path.isfile(os.path.join(self.WD, \"measurements.txt\")):\n self.data_model = 3.0\n elif os.path.isfile(os.path.join(self.WD, \"magic_measurements.txt\")):\n self.data_model = 2.5\n else:\n self.user_warning(\n \"No measurement file found in chosen directory\")\n self.data_model = 3\n try:\n self.data_model = float(self.data_model)\n if int(self.data_model) == 3:\n meas_file = os.path.join(self.WD, \"measurements.txt\")\n elif int(self.data_model) == 2:\n meas_file = os.path.join(self.WD, \"magic_measurements.txt\")\n else:\n meas_file = ''\n self.data_model = 3\n except (ValueError, TypeError) as e:\n self.user_warning(\n \"Provided data model is unrecognized or invalid, assuming you want data model 3\")\n self.data_model = 3\n\n if os.path.isfile(meas_file):\n self.magic_file = meas_file\n else:\n self.magic_file = self.choose_meas_file()\n\n if not self.data_model:\n self.data_model = 3"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef init_log_file(self):\n # redirect terminal output\n self.old_stdout = sys.stdout\n sys.stdout = open(os.path.join(self.WD, \"demag_gui.log\"), 'w+')", "response": "Redirect stdout to a log file to prevent printing to a hanging\n terminal when dealing with the compiled binary."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_pmag_tables(self):\n pmag_specimens, pmag_samples, pmag_sites = [], [], 
[]\n print(\"-I- Reading previous interpretations from specimen/sample/site tables\")\n try:\n pmag_specimens, file_type = pmag.magic_read(\n os.path.join(self.WD, \"pmag_specimens.txt\"))\n except:\n print(\"-I- Can't read pmag_specimens.txt\")\n try:\n pmag_samples, file_type = pmag.magic_read(\n os.path.join(self.WD, \"pmag_samples.txt\"))\n except:\n print(\"-I- Can't read pmag_samples.txt\")\n try:\n pmag_sites, file_type = pmag.magic_read(\n os.path.join(self.WD, \"pmag_sites.txt\"))\n except:\n print(\"-I- Can't read pmag_sites.txt\")\n # --------------------------\n # reads pmag_specimens.txt and\n # update pmag_results_data['specimens'][specimen]\n # with the new interpretation\n # --------------------------\n\n if self.COORDINATE_SYSTEM == 'geographic':\n current_tilt_correction = 0\n elif self.COORDINATE_SYSTEM == 'tilt-corrected':\n current_tilt_correction = 100\n else:\n current_tilt_correction = -1\n\n if 'specimens' not in list(self.pmag_results_data.keys()):\n self.pmag_results_data['specimens'] = {}\n for rec in pmag_specimens:\n if 'er_specimen_name' in rec:\n specimen = rec['er_specimen_name']\n else:\n continue\n\n # initialize list of interpretations\n if specimen not in list(self.pmag_results_data['specimens'].keys()):\n self.pmag_results_data['specimens'][specimen] = []\n\n if any(k not in list(rec.keys()) for k in ['measurement_step_min', 'measurement_step_max']):\n continue\n\n methods = rec['magic_method_codes'].strip(\n \"\\n\").replace(\" \", \"\").split(\":\")\n LPDIR = False\n calculation_type = \"\"\n\n for method in methods:\n if \"LP-DIR\" in method:\n LPDIR = True\n if \"DE-\" in method:\n calculation_type = method\n\n msu = ''\n if 'measurement_step_unit' in list(rec.keys()):\n msu = rec['measurement_step_unit']\n\n # if interpretation doesn't exsist create it.\n if float(rec['measurement_step_min']) == 0. 
or float(rec['measurement_step_min']) == 273.:\n tmin = \"0\"\n elif msu == 'K' or float(rec['measurement_step_min']) > 3.: # thermal\n tmin = \"%.0fC\" % (float(rec['measurement_step_min'])-273.)\n else: # AF\n tmin = \"%.1fmT\" % (float(rec['measurement_step_min'])*1000.)\n\n if float(rec['measurement_step_max']) == 0. or float(rec['measurement_step_max']) == 273.:\n tmax = \"0\"\n elif msu == 'K' or float(rec['measurement_step_max']) > 3.: # thermal\n tmax = \"%.0fC\" % (float(rec['measurement_step_max'])-273.)\n else: # AF\n tmax = \"%.1fmT\" % (float(rec['measurement_step_max'])*1000.)\n\n if 'specimen_comp_name' in list(rec.keys()):\n fname = rec['specimen_comp_name']\n else:\n fname = None\n if calculation_type == \"\":\n calculation_type = \"DE-BFL\"\n\n if (tmin, tmax) in [(f.tmin, f.tmax) for f in self.pmag_results_data['specimens'][specimen]] and calculation_type in [f.PCA_type for f in self.pmag_results_data['specimens'][specimen]]:\n fit = None\n else:\n fit = self.add_fit(specimen, fname, tmin,\n tmax, calculation_type)\n\n if 'specimen_flag' in list(rec.keys()) and rec['specimen_flag'] == 'b':\n self.bad_fits.append(fit)\n\n # BUG FIX-almost replaced first sample with last due to above assignment to self.s\n if self.specimens:\n self.select_specimen(self.specimens[0])\n self.specimens_box.SetSelection(0)\n if self.s in self.pmag_results_data['specimens'] and self.pmag_results_data['specimens'][self.s]:\n self.initialize_CART_rot(self.specimens[0])\n self.pmag_results_data['specimens'][self.s][-1].select()\n\n # --------------------------\n # reads pmag_sample.txt and\n # if finds a mean in pmag_samples.txt\n # calculate the mean for self.high_level_means['samples'][samples]\n # If the program finds a codes \"DE-FM\",\"DE-FM-LP\",\"DE-FM-UV\"in magic_method_codes\n # then the program repeat teh fisher mean\n # --------------------------\n\n for rec in pmag_samples:\n if \"magic_method_codes\" in list(rec.keys()):\n methods = 
rec['magic_method_codes'].strip(\n \"\\n\").replace(\" \", \"\").split(\":\")\n\n else:\n methods = \"\"\n sample = rec['er_sample_name'].strip(\"\\n\")\n LPDIR = False\n calculation_method = \"\"\n for method in methods:\n if \"LP-DIR\" in method:\n LPDIR = True\n if \"DE-\" in method:\n calculation_method = method\n if LPDIR: # this is a mean of directions\n calculation_type = \"Fisher\"\n for dirtype in self.dirtypes:\n self.calculate_high_level_mean(\n 'samples', sample, calculation_type, 'specimens', self.mean_fit)\n\n # --------------------------\n # reads pmag_sites.txt and\n # if it finds a mean in pmag_sites.txt\n # calculate the mean for self.high_level_means['sites'][site]\n # using specimens or samples, depending on er_specimen_names or er_sample_names\n # The program repeats the fisher calculation and overwrites it\n # --------------------------\n\n for rec in pmag_sites:\n methods = rec['magic_method_codes'].strip(\n \"\\n\").replace(\" \", \"\").split(\":\")\n site = rec['er_site_name'].strip(\"\\n\")\n LPDIR = False\n calculation_method = \"\"\n elements_type = \"specimens\"\n for method in methods:\n if \"LP-DIR\" in method or \"DA-DIR\" in method or \"DE-FM\" in method:\n LPDIR = True\n if \"DE-\" in method:\n calculation_method = method\n if LPDIR: # this is a mean of directions\n if calculation_method in [\"DE-BS\"]:\n calculation_type = \"Bingham\"\n else:\n calculation_type = \"Fisher\"\n if 'er_sample_names' in list(rec.keys()) and len(rec['er_sample_names'].strip('\\n').replace(\" \", \"\").split(\":\")) > 0:\n elements_type = 'samples'\n if 'er_specimen_names' in list(rec.keys()) and len(rec['er_specimen_names'].strip('\\n').replace(\" \", \"\").split(\":\")) > 0:\n elements_type = 'specimens'\n self.calculate_high_level_mean(\n 'sites', site, calculation_type, elements_type, self.mean_fit)", "response": "Reads pmag tables from data model 2.5 and updates the GUI's interpretation data structures with the interpretations they contain"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwriting current GUI acceptance criteria to criteria.txt or pmag_criteria.txt depending on data model", "response": "def write_acceptance_criteria_to_file(self):\n \"\"\"\n Writes current GUI acceptance criteria to criteria.txt or\n pmag_criteria.txt depending on data model\n \"\"\"\n crit_list = list(self.acceptance_criteria.keys())\n crit_list.sort()\n rec = {}\n rec['pmag_criteria_code'] = \"ACCEPT\"\n # rec['criteria_definition']=\"\"\n rec['criteria_definition'] = \"acceptance criteria for study\"\n rec['er_citation_names'] = \"This study\"\n\n for crit in crit_list:\n if type(self.acceptance_criteria[crit]['value']) == str:\n if self.acceptance_criteria[crit]['value'] != \"-999\" and self.acceptance_criteria[crit]['value'] != \"\":\n rec[crit] = self.acceptance_criteria[crit]['value']\n elif type(self.acceptance_criteria[crit]['value']) == int:\n if self.acceptance_criteria[crit]['value'] != -999:\n rec[crit] = \"%.i\" % (\n self.acceptance_criteria[crit]['value'])\n elif type(self.acceptance_criteria[crit]['value']) == float:\n if float(self.acceptance_criteria[crit]['value']) == -999:\n continue\n decimal_points = self.acceptance_criteria[crit]['decimal_points']\n if decimal_points != -999:\n command = \"rec[crit]='%%.%sf'%%(self.acceptance_criteria[crit]['value'])\" % (\n decimal_points)\n exec(command)\n else:\n rec[crit] = \"%e\" % (\n self.acceptance_criteria[crit]['value'])\n pmag.magic_write(os.path.join(self.WD, \"pmag_criteria.txt\"), [\n rec], \"pmag_criteria\")"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nshows a dialog that allows the user to choose a working directory", "response": "def get_DIR(self):\n \"\"\"\n Dialog that allows user to choose a working directory\n \"\"\"\n dlg = wx.DirDialog(self, \"Choose a directory:\", defaultPath=self.currentDirectory,\n 
style=wx.DD_DEFAULT_STYLE | wx.DD_NEW_DIR_BUTTON | wx.DD_CHANGE_DIR)\n ok = self.show_dlg(dlg)\n if ok == wx.ID_OK:\n new_WD = dlg.GetPath()\n dlg.Destroy()\n else:\n new_WD = os.getcwd()\n dlg.Destroy()\n return new_WD"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nopens a dialog allowing the user to choose a measurement file", "response": "def choose_meas_file(self, event=None):\n \"\"\"\n Opens a dialog allowing the user to pick a measurement file\n \"\"\"\n dlg = wx.FileDialog(\n self, message=\"Please choose a measurement file\",\n defaultDir=self.WD,\n defaultFile=\"measurements.txt\",\n wildcard=\"measurement files (*.magic,*.txt)|*.magic;*.txt\",\n style=wx.FD_OPEN | wx.FD_CHANGE_DIR\n )\n if self.show_dlg(dlg) == wx.ID_OK:\n meas_file = dlg.GetPath()\n dlg.Destroy()\n else:\n meas_file = ''\n self.data_model = 2.5\n dlg.Destroy()\n return meas_file"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nshow a dialog that tells the user that a file has been saved", "response": "def saved_dlg(self, message, caption='Saved:'):\n \"\"\"\n Shows a dialog that tells the user that a file has been saved\n\n Parameters\n ----------\n message : message to display to user\n caption : title for dialog (default: \"Saved:\")\n \"\"\"\n dlg = wx.MessageDialog(self, caption=caption,\n message=message, style=wx.OK)\n result = self.show_dlg(dlg)\n dlg.Destroy()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nshow a dialog that warns the user about some action", "response": "def user_warning(self, message, caption='Warning!'):\n \"\"\"\n Shows a dialog that warns the user about some action\n\n Parameters\n ----------\n message : message to display to user\n caption : title for dialog (default: \"Warning!\")\n\n Returns\n -------\n continue_bool : True or False\n \"\"\"\n dlg = wx.MessageDialog(self, message, caption,\n wx.OK | wx.CANCEL | wx.ICON_WARNING)\n 
if self.show_dlg(dlg) == wx.ID_OK:\n continue_bool = True\n else:\n continue_bool = False\n dlg.Destroy()\n return continue_bool"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nhandle the close of the change acceptance criteria dialog", "response": "def on_close_criteria_box(self, dia):\n \"\"\"\n Function called on close of the change acceptance criteria dialog that\n writes the new criteria to the hard drive and sets the new criteria as the GUI's\n current criteria.\n\n Parameters\n ----------\n dia : closed change criteria dialog\n \"\"\"\n window_list_specimens = [\n 'specimen_n', 'specimen_mad', 'specimen_dang', 'specimen_alpha95']\n window_list_samples = ['sample_n', 'sample_n_lines',\n 'sample_n_planes', 'sample_k', 'sample_r', 'sample_alpha95']\n window_list_sites = ['site_n', 'site_n_lines',\n 'site_n_planes', 'site_k', 'site_r', 'site_alpha95']\n demag_gui_supported_criteria = window_list_specimens + \\\n window_list_samples+window_list_sites\n\n if self.data_model == 3:\n new_crits = []\n for crit in demag_gui_supported_criteria:\n new_crit = {}\n command = \"dia.set_%s.GetValue()\" % (crit)\n new_value = pmag.execute(command, dia=dia)\n if new_value == None or new_value == '':\n continue\n d = findall(r\"[-+]?\\d*\\.\\d+|\\d+\", new_value)\n if len(d) > 0:\n d = d[0]\n comp = new_value.strip(str(d))\n if comp == '':\n comp = '>='\n if 'specimen' in crit:\n col = \"specimens.\"+map_magic.spec_magic2_2_magic3_map[crit]\n elif 'sample' in crit:\n col = \"samples.\"+map_magic.samp_magic2_2_magic3_map[crit]\n elif 'site' in crit:\n col = \"sites.\"+map_magic.site_magic2_2_magic3_map[crit]\n else:\n print(\"no way, this is impossible\")\n continue\n new_crit['criterion'] = \"ACCEPT\"\n new_crit['criterion_value'] = d\n new_crit['criterion_operation'] = comp\n new_crit['table_column'] = col\n new_crit['citations'] = \"This study\"\n new_crit['description'] = ''\n new_crits.append(new_crit)\n cdf = 
DataFrame(new_crits)\n cdf = cdf.set_index(\"table_column\")\n cdf[\"table_column\"] = cdf.index\n cdf = cdf.reindex_axis(sorted(cdf.columns), axis=1)\n if 'criteria' not in self.con.tables:\n cols = ['criterion', 'criterion_value', 'criterion_operation',\n 'table_column', 'citations', 'description']\n self.con.add_empty_magic_table('criteria', col_names=cols)\n self.con.tables['criteria'].df = cdf\n self.con.tables['criteria'].write_magic_file(dir_path=self.WD)\n else:\n for crit in demag_gui_supported_criteria:\n command = \"new_value=dia.set_%s.GetValue()\" % (crit)\n exec(command)\n # empty box\n if new_value == \"\":\n self.acceptance_criteria[crit]['value'] = -999\n continue\n # box with no valid number\n try:\n float(new_value)\n except:\n self.show_crit_window_err_messege(crit)\n continue\n self.acceptance_criteria[crit]['value'] = float(new_value)\n\n # message dialog\n self.saved_dlg(message=\"changes saved to criteria\")\n self.write_acceptance_criteria_to_file()\n dia.Destroy()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nshows an error dialog if a valid number is not entered into a criteria dialog box", "response": "def show_crit_window_err_messege(self, crit):\n \"\"\"\n error message if a valid number is not entered into the criteria dialog\n boxes\n \"\"\"\n dlg = wx.MessageDialog(\n self, caption=\"Error:\", message=\"not a valid value for statistic %s\\n ignoring value\" % crit, style=wx.OK)\n result = self.show_dlg(dlg)\n if result == wx.ID_OK:\n dlg.Destroy()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef On_close_MagIC_dialog(self, dia):\n\n def add_missing_ages(df):\n \"\"\"\n Take a dataframe and add min/max age, age, age_sigma, age_units.\n Use values gained from dialog, but don't overwrite values already\n in the dataframe.\n \"\"\"\n age_columns = df.columns[df.columns.str.startswith('age')]\n for col, value in {'age_high': 
max_age, 'age_low': min_age,\n 'age': age, 'age_sigma': age_sigma, 'age_unit': age_units}.items():\n if cb.not_null(value):\n if col in age_columns:\n df[col] = np.where(df[col].apply(cb.not_null), df[col], value)\n else:\n df[col] = value\n return df\n\n\n if dia.cb_acceptance_criteria.GetValue():\n use_criteria = 'existing'\n else:\n use_criteria = 'none'\n\n # -- coordinate system\n if dia.rb_spec_coor.GetValue():\n coord = \"s\"\n elif dia.rb_geo_coor.GetValue():\n coord = \"g\"\n elif dia.rb_tilt_coor.GetValue():\n coord = \"t\"\n elif dia.rb_geo_tilt_coor.GetValue():\n coord = \"b\"\n else:\n coord = \"s\"\n\n # -- default age options\n DefaultAge = [\"none\"]\n add_ages = dia.add_ages.GetValue()\n if add_ages:\n age_units = dia.default_age_unit.GetValue()\n min_age = dia.default_age_min.GetValue()\n max_age = dia.default_age_max.GetValue()\n age = dia.default_age.GetValue()\n age_sigma = dia.default_age_sigma.GetValue()\n if (min_age and max_age) or age:\n # enough age data provided\n pass\n else:\n go_on = self.user_warning(\"Not enough age data provided (you must provide lower and upper bound, or age).\\nPress OK to skip ages for now, or cancel to end this process.\")\n if not go_on:\n self.user_warning(\"Aborting, please try again.\", caption=\"Message\")\n return\n add_ages = False\n\n if min_age and max_age:\n DefaultAge = [min_age, max_age, age_units]\n\n # -- sample mean\n avg_directions_by_sample = False\n if dia.cb_sample_mean.GetValue():\n avg_directions_by_sample = True\n\n vgps_level = 'site'\n if dia.cb_sample_mean_VGP.GetValue():\n vgps_level = 'sample'\n\n # -- site mean\n\n if dia.cb_site_mean.GetValue():\n pass\n\n # -- location mean\n avg_by_polarity = False\n if dia.cb_location_mean.GetValue():\n avg_by_polarity = True\n\n if self.data_model == 3.0:\n # update or add age data to the sites table, but don't overwrite existing data\n if add_ages:\n site_df = self.con.tables['sites'].df\n self.con.tables['sites'].df = 
add_missing_ages(site_df)\n self.con.write_table_to_file(\"sites\")\n\n # set some variables\n priorities = ['DA-AC-ARM', 'DA-AC-TRM']\n for p in priorities:\n if not p.startswith('DA-AC-'):\n p = 'DA-AC-'+p\n # translate coord into coords\n if coord == 's':\n coords = ['-1']\n elif coord == 'g':\n coords = ['0']\n elif coord == 't':\n coords = ['100']\n elif coord == 'b':\n coords = ['0', '100']\n else:\n coords = ['-1']\n\n if vgps_level == 'sample':\n vgps = 1 # save sample level VGPS/VADMs\n else:\n vgps = 0 # site level\n\n nositeints = 0\n version_num = pmag.get_version()\n get_model_lat = 0 # skips VADM calculation entirely\n Dcrit, Icrit, nocrit = 0, 0, 0 # default criteria input\n\n # still broken (needs translation or determination of translation necessity)\n if use_criteria == 'none':\n Dcrit, Icrit, nocrit = 1, 1, 1 # no selection criteria\n crit_data = pmag.default_criteria(nocrit)\n elif use_criteria == 'existing':\n crit_data = self.read_criteria_file()\n if crit_data == None:\n crit_data = pmag.default_criteria(nocrit)\n print(\n \"No acceptance criteria found in criteria.txt defualt PmagPy criteria used instead\")\n else:\n print(\"Acceptance criteria from criteria.txt used\")\n else:\n # use default criteria\n crit_data = pmag.default_criteria(nocrit)\n print(\"PmagPy default criteria used\")\n\n accept = {}\n for critrec in crit_data:\n if type(critrec) != dict:\n continue\n for key in list(critrec.keys()):\n # need to migrate specimen_dang to specimen_int_dang for intensity data using old format\n if 'IE-SPEC' in list(critrec.keys()) and 'specimen_dang' in list(critrec.keys()) and 'specimen_int_dang' not in list(critrec.keys()):\n critrec['specimen_int_dang'] = critrec['specimen_dang']\n del critrec['specimen_dang']\n # need to get rid of ron shaars sample_int_sigma_uT\n if 'sample_int_sigma_uT' in list(critrec.keys()):\n critrec['sample_int_sigma'] = '%10.3e' % (\n eval(critrec['sample_int_sigma_uT'])*1e-6)\n if key not in list(accept.keys()) 
and critrec[key] != '':\n accept[key] = critrec[key]\n\n if use_criteria == 'default':\n pmag.magic_write(critout, [accept], 'pmag_criteria')\n print(\"\\n Pmag Criteria stored in \", critout, '\\n')\n\n if 'specimens' not in self.con.tables:\n self.user_warning(\n \"No specimen interpretations found in the current contribution samples, sites, and locations cannot be exported, aborting\")\n return\n spec_df = self.con.tables['specimens'].df\n if 'sites' not in self.con.tables:\n self.con.add_empty_magic_table('sites')\n site_df = self.con.tables['sites'].df\n SiteNFO = site_df.to_dict(\"records\")\n Data = spec_df.to_dict(\"records\")\n\n comment = \"\"\n orient = list(spec_df[spec_df['dir_tilt_correction'].notnull(\n )]['dir_tilt_correction'].drop_duplicates())\n samples = sorted(list(spec_df['sample'].dropna().drop_duplicates()))\n sites = sorted(list(spec_df['site'].dropna().drop_duplicates()))\n locations = sorted(list(spec_df['location'].dropna().drop_duplicates()))\n Comps = sorted(\n list(spec_df[spec_df['dir_comp'].notnull()]['dir_comp'].drop_duplicates()))\n Comps = [c for c in Comps if type(c) == str]\n # find all the sites with height info.\n height_info = pmag.get_dictitem(SiteNFO, 'height', '', 'F')\n\n nocorrection = ['DA-NL', 'DA-AC', 'DA-CR']\n SpecInts = []\n # retrieve specimens with intensity data\n IntData = pmag.get_dictitem(Data, 'int_abs', '', 'F')\n if nocrit == 0: # use selection criteria\n for rec in IntData: # do selection criteria\n kill = pmag.grade(\n rec, accept, 'specimen_int', data_model=3.0)\n if len(kill) == 0:\n # intensity record to be included in sample, site calculations\n SpecInts.append(rec)\n else:\n # take everything - no selection criteria\n SpecInts = IntData[:]\n # check for required data adjustments\n if len(nocorrection) > 0 and len(SpecInts) > 0:\n for cor in nocorrection:\n # exclude the corrections not specified for inclusion\n SpecInts = pmag.get_dictitem(\n SpecInts, 'method_codes', cor, 'not')\n # take top 
priority specimen of its name in remaining specimens (only one per customer)\n PrioritySpecInts = []\n # get list of uniq specimen names\n specimens = pmag.get_specs(SpecInts)\n for spec in specimens:\n # all the records for this specimen\n ThisSpecRecs = pmag.get_dictitem(\n SpecInts, 'specimen', spec, 'T')\n if len(ThisSpecRecs) == 1:\n PrioritySpecInts.append(ThisSpecRecs[0])\n elif len(ThisSpecRecs) > 1: # more than one\n prec = []\n for p in priorities:\n # all the records for this specimen\n ThisSpecRecs = pmag.get_dictitem(\n SpecInts, 'method_codes', p, 'has')\n if len(ThisSpecRecs) > 0:\n prec.append(ThisSpecRecs[0])\n PrioritySpecInts.append(\n prec[0]) # take the best one\n SpecInts = PrioritySpecInts # this has the first specimen record\n\n # apply criteria to directional data\n # retrieve specimens with directed lines and planes and some measuremnt data\n Ns = spec_df[spec_df['dir_n_measurements']\n != ''].to_dict(\"records\")\n SpecDirs = []\n if nocrit != 1: # use selection criteria\n for rec in Ns: # look through everything with specimen_n for \"good\" data\n kill = pmag.grade(\n rec, accept, 'specimen_dir', data_model=3.0)\n if len(kill) == 0: # nothing killed it\n SpecDirs.append(rec)\n else: # no criteria\n SpecDirs = Ns[:] # take them all\n\n # list of all sample data and list of those that pass the DE-SAMP criteria\n PmagSamps, SampDirs = [], []\n PmagSites = [] # list of all site data\n SampInts = []\n renamelnp = {'R': 'dir_r', 'n': 'dir_n_samples', 'n_total': 'dir_n_specimens', 'alpha95': 'dir_alpha95',\n 'n_lines': 'dir_n_specimens_lines', 'K': 'dir_k', 'dec': 'dir_dec', 'n_planes': 'dir_n_specimens_planes', 'inc': 'dir_inc'}\n for samp in samples: # run through the sample names\n if not avg_directions_by_sample:\n break\n # get all the directional data for this sample\n SampDir = pmag.get_dictitem(SpecDirs, 'sample', samp, 'T')\n if len(SampDir) <= 0:\n continue # if no directions\n for coord in coords: # step through desired coordinate 
systems\n # get all the directions for this sample\n CoordDir = pmag.get_dictitem(\n SampDir, 'dir_tilt_correction', coord, 'T')\n if len(CoordDir) <= 0:\n continue # no data for this coordinate system\n for comp in Comps:\n # get all directions from this component\n CompDir = pmag.get_dictitem(\n CoordDir, 'dir_comp', comp, 'T')\n CompDir = [\n x for x in CompDir if 'result_quality' in x and x['result_quality'] == 'g']\n if len(CompDir) <= 0:\n continue # no data for comp\n PmagSampRec = pmag.dolnp3_0(CompDir)\n for k, v in list(renamelnp.items()):\n if k in PmagSampRec:\n PmagSampRec[v] = PmagSampRec[k]\n del PmagSampRec[k]\n # decorate the sample record\n PmagSampRec[\"location\"] = CompDir[0]['location']\n PmagSampRec[\"site\"] = CompDir[0]['site']\n PmagSampRec[\"sample\"] = samp\n PmagSampRec[\"citations\"] = \"This study\"\n PmagSampRec['software_packages'] = version_num + \\\n ': demag_gui.v.3.0'\n if CompDir[0]['result_quality'] == 'g':\n PmagSampRec['result_quality'] = 'g'\n else:\n PmagSampRec['result_quality'] = 'b'\n if nocrit != 1:\n PmagSampRec['criteria'] = \"ACCEPT\"\n site_height = pmag.get_dictitem(\n height_info, 'site', PmagSampRec['site'], 'T')\n if len(site_height) > 0:\n # add in height if available\n PmagSampRec[\"height\"] = site_height[0]['height']\n PmagSampRec['dir_comp_name'] = comp\n PmagSampRec['dir_tilt_correction'] = coord\n specs = [d['specimen'] for d in CompDir]\n if 'dir_n_specimens' not in PmagSampRec:\n PmagSampRec['dir_n_specimens'] = len(specs)\n # get a list of the specimen names used\n PmagSampRec['specimens'] = reduce(\n lambda x, y: str(x)+':'+str(y), specs)\n # get old method codes\n prev_meth_codes = self.con.tables['samples'].df.loc[samp]['method_codes']\n # get a list of the methods used\n new_meth_codes = pmag.get_list(CompDir, 'method_codes')\n if isinstance(prev_meth_codes, Series):\n merged_meths = self.merge_meth_codes(\n prev_meth_codes.iloc[0], new_meth_codes)\n else:\n merged_meths = self.merge_meth_codes(\n 
prev_meth_codes, new_meth_codes)\n PmagSampRec['method_codes'] = merged_meths\n if nocrit != 1: # apply selection criteria\n kill = pmag.grade(\n PmagSampRec, accept, 'sample_dir', data_model=3.0)\n else:\n kill = []\n if len(kill) > 0:\n PmagSampRec['result_quality'] = 'b'\n else:\n SampDirs.append(PmagSampRec)\n if vgps == 1: # if sample level VGP info desired, do that now\n try:\n PmagResRec = pmag.getsampVGP(\n PmagSampRec, SiteNFO, data_model=self.data_model)\n except KeyError:\n print(\n (\"no lat lon data for sample %s skipping VGP calculation\" % samp))\n PmagResRec = \"\"\n if PmagResRec != \"\":\n for k in ['vgp_dp', 'vgp_dm', 'vgp_lat', 'vgp_lon']:\n PmagSampRec[k] = PmagResRec[k]\n PmagSamps.append(PmagSampRec)\n\n # removed average_all_components check because demag GUI never averages directional components\n # removed intensity average portion as demag GUI has no need of this also cause translating this is a bitch\n\n if len(PmagSamps) > 0:\n if 'samples' not in self.con.tables:\n self.con.add_empty_magic_table('samples')\n for dc in ['magic_method_codes']:\n if dc in self.con.tables['samples'].df:\n del self.con.tables['samples'].df[dc]\n samps_df = DataFrame(PmagSamps)\n samps_df = samps_df.set_index('sample')\n samps_df['sample'] = samps_df.index\n nsdf = self.con.tables['samples'].merge_dfs(samps_df)\n if not vgps == 1:\n nsdf.drop([col for col in nsdf.columns if type(\n col) == str and col.startswith('vgp')], axis=1, inplace=True)\n nsdf = nsdf.reindex_axis(sorted(nsdf.columns), axis=1)\n self.con.tables['samples'].df = nsdf\n self.con.tables['samples'].write_magic_file(dir_path=self.WD)\n\n # create site averages from specimens or samples as specified\n for site in sites:\n for coord in coords:\n if dia.combo_site_mean.GetValue() == 'samples' and avg_directions_by_sample:\n # if sample averages at site level desired\n key, comp_key, dirlist = 'sample', 'dir_comp_name', SampDirs\n else:\n # if specimen averages at site level desired\n key, 
comp_key, dirlist = 'specimen', 'dir_comp', SpecDirs\n # get all the sites with directions\n tmp = pmag.get_dictitem(dirlist, 'site', site, 'T')\n tmp1 = pmag.get_dictitem(\n tmp, 'dir_tilt_correction', coord, 'T', float_to_int=True)\n # fish out site information (lat/lon, etc.)\n sd = pmag.get_dictitem(SiteNFO, 'site', site, 'T')\n if len(sd) <= 0: # no data for this site\n print(\n ('site information not found in sites.txt for site, %s. skipping.' % site))\n continue\n for comp in Comps:\n # get all components comp\n siteD = pmag.get_dictitem(tmp1, comp_key, comp, 'T')\n # remove bad data from means\n siteD = [\n x for x in siteD if 'result_quality' in x and x['result_quality'] == 'g']\n if len(siteD) <= 0:\n # print(\"no data for comp %s in site %s. skipping\"%(comp,site))\n continue\n PmagSiteRec = PmagSampRec = pmag.dolnp3_0(\n siteD) # get an average for this site\n for k, v in list(renamelnp.items()):\n if k in PmagSiteRec:\n PmagSiteRec[v] = PmagSiteRec[k]\n del PmagSiteRec[k]\n # decorate the site record\n PmagSiteRec['dir_comp_name'] = comp\n PmagSiteRec[\"location\"] = siteD[0]['location']\n PmagSiteRec[\"site\"] = siteD[0]['site']\n PmagSiteRec['dir_tilt_correction'] = coord\n PmagSiteRec['samples'] = pmag.get_list(siteD, 'sample')\n if dia.combo_site_mean.GetValue() == 'samples' and avg_directions_by_sample:\n PmagSiteRec['specimens'] = pmag.get_list(\n siteD, 'specimens')\n else:\n PmagSiteRec['specimens'] = pmag.get_list(\n siteD, 'specimen')\n if 'dir_n_samples' not in list(PmagSiteRec.keys()):\n PmagSiteRec['dir_n_samples'] = len(\n PmagSiteRec['samples'].split(':'))\n if 'dir_n_specimens' not in list(PmagSiteRec.keys()):\n PmagSiteRec['dir_n_specimens'] = len(\n PmagSiteRec['specimens'].split(':'))\n # determine the demagnetization code (DC3,4 or 5) for this site\n AFnum = len(pmag.get_dictitem(\n siteD, 'method_codes', 'LP-DIR-AF', 'has'))\n Tnum = len(pmag.get_dictitem(\n siteD, 'method_codes', 'LP-DIR-T', 'has'))\n DC = 3\n if AFnum > 0:\n DC 
+= 1\n if Tnum > 0:\n DC += 1\n PmagSiteRec['method_codes'] = pmag.get_list(\n siteD, 'method_codes')+':' + 'LP-DC'+str(DC)\n PmagSiteRec['method_codes'].strip(\":\")\n\n PmagSiteRec[\"citations\"] = \"This study\"\n PmagSiteRec['software_packages'] = version_num + \\\n ': demag_gui.v.3.0'\n\n # here we need to grab ages from sites or ages\n site_age_rec = Series()\n if 'ages' in self.con.tables:\n ages_df = self.con.tables['ages'].df\n if 'site' in ages_df.columns:\n site_age_records = ages_df[ages_df['site'] == site]\n if len(site_age_records):\n if isinstance(site_age_records, Series):\n site_age_rec = site_age_records\n else:\n site_age_rec = site_age_records.iloc[0]\n if not len(site_age_rec):\n sites_df = self.con.tables['sites'].df\n site_age_records = sites_df.loc[site]\n if len(site_age_records):\n if isinstance(site_age_records, Series):\n site_age_rec = site_age_records\n else:\n site_age_rec = site_age_records.iloc[0]\n\n PmagSiteRec['criteria'] = 'ACCEPT'\n if 'dir_n_specimens_lines' in list(PmagSiteRec.keys()) and 'dir_n_specimens_planes' in list(PmagSiteRec.keys()) and PmagSiteRec['dir_n_specimens_lines'] != \"\" and PmagSiteRec['dir_n_specimens_planes'] != \"\":\n if int(PmagSiteRec[\"dir_n_specimens_planes\"]) > 0:\n PmagSiteRec[\"method_codes\"] = PmagSiteRec['method_codes']+\":DE-FM-LP\"\n elif int(PmagSiteRec[\"dir_n_specimens_lines\"]) > 2:\n PmagSiteRec[\"method_codes\"] = PmagSiteRec['method_codes']+\":DE-FM\"\n\n site_height = pmag.get_dictitem(\n height_info, 'site', site, 'T')\n if len(site_height) > 0:\n PmagSiteRec[\"height\"] = site_height[0]['height']\n if '0' in PmagSiteRec['dir_tilt_correction'] and \"DA-DIR-GEO\" not in PmagSiteRec['method_codes']:\n PmagSiteRec['method_codes'] = PmagSiteRec['method_codes']+\":DA-DIR-GEO\"\n if '100' in PmagSiteRec['dir_tilt_correction'] and \"DA-DIR-TILT\" not in PmagSiteRec['method_codes']:\n PmagSiteRec['method_codes'] = PmagSiteRec['method_codes'] + \\\n \":DA-DIR-TILT\"\n 
PmagSiteRec['dir_polarity'] = \"\"\n# assign polarity based on angle of pole lat to spin axis - may want to re-think this sometime\n\n if dia.cb_site_mean_VGP.GetValue():\n dec = float(PmagSiteRec[\"dir_dec\"])\n inc = float(PmagSiteRec[\"dir_inc\"])\n if 'dir_alpha95' in list(PmagSiteRec.keys()) and PmagSiteRec['dir_alpha95'] != \"\":\n a95 = float(PmagSiteRec[\"dir_alpha95\"])\n else:\n a95 = 180.\n # fish out site information (lat/lon, etc.)\n sitedat = [x for x in pmag.get_dictitem(\n SiteNFO, 'site', PmagSiteRec['site'], 'T') if x['lat'] != None and x['lon'] != None]\n if len(sitedat) != 0:\n sitedat = sitedat[0]\n else:\n sitedat = {}\n try:\n PmagSiteRec['lat'] = float(sitedat['lat'])\n PmagSiteRec['lon'] = float(sitedat['lon'])\n lat, lon = PmagSiteRec['lat'], PmagSiteRec['lon']\n calculate = True\n except (KeyError, ValueError, TypeError) as e:\n calculate = False\n ui_dialog = demag_dialogs.user_input(self, ['Latitude', 'Longitude'], parse_funcs=[\n float, float], heading=\"Missing Latitude or Longitude data for site: %s\" % site)\n self.show_dlg(ui_dialog)\n ui_data = ui_dialog.get_values()\n if ui_data[0]:\n PmagSiteRec['lat'] = ui_data[1]['Latitude']\n PmagSiteRec['lon'] = ui_data[1]['Longitude']\n lat, lon = PmagSiteRec['lat'], PmagSiteRec['lon']\n calculate = True\n else:\n self.user_warning(\n \"insuffecent data provided skipping VGP calculation for site %s and comp %s\" % (site, comp))\n if calculate:\n plong, plat, dp, dm = pmag.dia_vgp(\n dec, inc, a95, lat, lon) # get the VGP for this site\n PmagSiteRec[\"vgp_lat\"] = '%7.1f ' % (plat)\n PmagSiteRec[\"vgp_lon\"] = '%7.1f ' % (plong)\n PmagSiteRec[\"vgp_dp\"] = '%7.1f ' % (dp)\n PmagSiteRec[\"vgp_dm\"] = '%7.1f ' % (dm)\n\n angle = pmag.angle([0, 0], [0, (90-plat)])\n if angle <= 55.:\n PmagSiteRec[\"dir_polarity\"] = 'n'\n if angle > 55. 
and angle < 125.:\n PmagSiteRec[\"dir_polarity\"] = 't'\n if angle >= 125.:\n PmagSiteRec[\"dir_polarity\"] = 'r'\n\n kill = pmag.grade(PmagSiteRec, accept, 'site_dir')\n if len(kill) > 0:\n PmagSiteRec['result_quality'] = 'b'\n else:\n PmagSiteRec['result_quality'] = 'g'\n\n # Carry over data from previous sites\n prev_dat = pmag.get_dictitem(\n SiteNFO, 'site', PmagSiteRec['site'], 'T')\n if len(prev_dat) != 0:\n for ndd in set(PmagSiteRec.keys()).symmetric_difference(set(prev_dat[0].keys())):\n sitedat = [\n x for x in prev_dat if ndd in x and x[ndd] != None]\n if len(sitedat) != 0:\n sitedat = sitedat[0]\n else:\n continue\n PmagSiteRec[ndd] = sitedat[ndd]\n\n PmagSites.append(PmagSiteRec)\n\n if len(PmagSites) > 0:\n if 'sites' not in self.con.tables:\n self.con.tables.add_empty_magic_table('sites')\n sites_df = DataFrame(PmagSites)\n if 'tilt_correction' in sites_df.columns:\n sites_df.drop('tilt_correction', axis=1, inplace=True)\n sites_df = sites_df.set_index('site')\n sites_df['site'] = sites_df.index\n nsdf = self.con.tables['sites'].merge_dfs(sites_df)\n if not dia.cb_site_mean_VGP.GetValue():\n nsdf.drop([col for col in nsdf.columns if type(\n col) == str and col.startswith('vgp')], axis=1, inplace=True)\n nsdf = nsdf.reindex_axis(sorted(nsdf.columns), axis=1)\n self.con.tables['sites'].df = nsdf\n self.con.tables['sites'].write_magic_file(dir_path=self.WD)\n\n # location mean section\n PmagLocs = []\n for location in locations:\n if not avg_by_polarity:\n break\n locrecs = pmag.get_dictitem(\n PmagSites, 'location', location, 'T')\n if len(locrecs) < 2:\n print((\"no data for location %s\" % location))\n continue\n for coord in coords:\n # find the tilt corrected data\n coordrecs = pmag.get_dictitem(\n locrecs, 'dir_tilt_correction', coord, 'T')\n if len(coordrecs) < 2:\n print(\n (\"not enough data of %s percent tilt corrected data in sites to calculate locations mean or vgp\" % coord))\n continue\n for comp in Comps:\n # fish out all of the 
component\n crecs = pmag.get_dictitem(\n coordrecs, 'dir_comp_name', comp, 'T')\n if len(crecs) < 2:\n print(\n (\"insuffecent data for comp %s when calculating location\" % comp))\n continue\n precs = []\n for rec in crecs:\n prec = {'dec': rec['dir_dec'], 'inc': rec['dir_inc'],\n 'name': rec['site'], 'loc': rec['location']}\n prec = {k: v if v != None else 'None' for k,\n v in list(prec.items())}\n precs.append(prec)\n # calculate average by polarity\n polpars = pmag.fisher_by_pol(precs)\n # hunt through all the modes (normal=A, reverse=B, all=ALL)\n for mode in list(polpars.keys()):\n PolRes = {}\n PolRes['citations'] = 'This study'\n PolRes[\"result_name\"] = \"Polarity Average: Polarity \"+mode\n PolRes[\"pole_comp_name\"] = comp+':'+mode\n PolRes[\"dir_dec\"] = '%7.1f' % (\n polpars[mode]['dec'])\n PolRes[\"dir_inc\"] = '%7.1f' % (\n polpars[mode]['inc'])\n PolRes[\"dir_n_sites\"] = '%i' % (polpars[mode]['n'])\n PolRes[\"dir_r\"] = '%5.4f' % (polpars[mode]['r'])\n PolRes[\"dir_k\"] = '%6.0f' % (polpars[mode]['k'])\n PolRes[\"dir_alpha95\"] = '%7.1f' % (\n polpars[mode]['alpha95'])\n PolRes['sites'] = polpars[mode]['sites']\n sites_dat = self.con.tables['sites'].df\n for e in ['samples', 'specimens']:\n PolRes[e] = reduce(lambda x, y: x+':'+y, [sites_dat.loc[site][e].iloc[0] if isinstance(\n sites_dat.loc[site][e], Series) else sites_dat.loc[site][e] for site in PolRes['sites'].split(':')])\n PolRes['dir_n_samples'] = len(\n PolRes['samples'].split(':'))\n PolRes['dir_n_specimens'] = len(\n PolRes['specimens'].split(':'))\n PolRes['location'] = polpars[mode]['locs']\n PolRes['software_packages'] = version_num + \\\n ': demag_gui.v.3.0'\n PolRes['dir_tilt_correction'] = coord\n if add_ages:\n loc_rec = {}\n if 'locations' in self.con.tables:\n locs_df = self.con.tables['locations'].df\n self.con.tables['locations'].df = add_missing_ages(locs_df)\n loc_recs = locs_df.loc[location]\n if len(loc_recs):\n if isinstance(loc_recs, Series):\n loc_rec = loc_recs\n 
else:\n                            loc_rec = loc_recs.iloc[0]\n                        PolRes['age_high'] = loc_rec.get('age_high', max_age)\n                        PolRes['age_low'] = loc_rec.get('age_low', min_age)\n                        PolRes['age'] = loc_rec.get('age', age)\n                        PolRes['age_sigma'] = loc_rec.get('age_sigma', age_sigma)\n                        PolRes['age_unit'] = loc_rec.get('age_unit', age_units)\n                    if dia.cb_location_mean_VGP.GetValue():\n                        sucess_lat_lon_info = True\n                        if 'locations' in self.con.tables:\n                            locs_dat = self.con.tables['locations'].df\n                            if 'lat_n' in locs_dat.columns:\n                                lat = locs_dat['lat_n'][location].iloc[0] if isinstance(\n                                    locs_dat['lat_n'][location], Series) else locs_dat['lat_n'][location]\n                            elif 'lat_s' in locs_dat.columns:\n                                lat = locs_dat['lat_s'][location].iloc[0] if isinstance(\n                                    locs_dat['lat_s'][location], Series) else locs_dat['lat_s'][location]\n                            else:\n                                sucess_lat_lon_info = False\n                            if 'lon_e' in locs_dat.columns:\n                                lon = locs_dat['lon_e'][location].iloc[0] if isinstance(\n                                    locs_dat['lon_e'][location], Series) else locs_dat['lon_e'][location]\n                            elif 'lon_w' in locs_dat.columns:\n                                lon = locs_dat['lon_w'][location].iloc[0] if isinstance(\n                                    locs_dat['lon_w'][location], Series) else locs_dat['lon_w'][location]\n                            else:\n                                sucess_lat_lon_info = False\n                        if not sucess_lat_lon_info:\n                            ui_dialog = demag_dialogs.user_input(self, ['North Boundary Latitude', 'South Boundary Latitude', 'East Boundary Longitude', 'West Boundary Longitude'], parse_funcs=[\n                                                                 float, float, float, float], heading=\"Missing Latitude or Longitude data for location %s please define the boundary of this region so VGP calculations can be performed\" % location)\n                            ui_data = ui_dialog.get_values()\n                            if ui_data[0]:\n                                PolRes['lat_n'] = ui_data[1]['North Boundary Latitude']\n                                PolRes['lat_s'] = ui_data[1]['South Boundary Latitude']\n                                PolRes['lon_e'] = ui_data[1]['East Boundary Longitude']\n                                PolRes['lon_w'] = ui_data[1]['West Boundary Longitude']\n                                lat, lon = PolRes['lat_n'], PolRes['lon_e']\n                                sucess_lat_lon_info = True\n                            else:\n                                self.user_warning(\n                                    \"insufficient data provided 
skipping VGP calculation for location %s\" % location)\n try:\n dec, inc, a95, lat, lon = float(polpars[mode]['dec']), float(\n polpars[mode]['inc']), float(polpars[mode]['alpha95']), float(lat), float(lon)\n except (UnboundLocalError, TypeError):\n print(\n \"unable to obtain all data needed for VGP calculation, skipping\")\n sucess_lat_lon_info = False\n if sucess_lat_lon_info:\n # get the VGP for this pole component\n plong, plat, dp, dm = pmag.dia_vgp(\n dec, inc, a95, lat, lon)\n PolRes[\"pole_lat\"] = '%7.1f ' % (plat)\n PolRes[\"pole_lon\"] = '%7.1f ' % (plong)\n PolRes[\"pole_dp\"] = '%7.1f ' % (dp)\n PolRes[\"pole_dm\"] = '%7.1f ' % (dm)\n PolRes[\"pole_alpha95\"] = PolRes['dir_alpha95']\n PolRes[\"pole_r\"] = PolRes['dir_r']\n PolRes[\"pole_k\"] = PolRes['dir_k']\n angle = pmag.angle([0, 0], [0, (90-plat)])\n if angle <= 55.:\n PolRes[\"dir_polarity\"] = 'n'\n if angle > 55. and angle < 125.:\n PolRes[\"dir_polarity\"] = 't'\n if angle >= 125.:\n PolRes[\"dir_polarity\"] = 'r'\n\n # fix not duplicating data\n prev_dat = pmag.get_dictitem(locs_dat.to_dict(\n 'records'), 'location', location, 'T')\n # merge method codes\n methods = []\n for prd in prev_dat:\n if 'method_codes' not in list(prd.keys()) or prd['method_codes'] == None:\n continue\n for mc in prd['method_codes'].split(':'):\n if mc not in methods:\n methods.append(mc)\n if len(methods) > 1:\n PolRes['method_codes'] = reduce(\n lambda x, y: x+':'+y, methods)\n elif len(methods) > 0:\n PolRes['method_codes'] = methods[0]\n else:\n PolRes['method_codes'] = ''\n if len(prev_dat) != 0:\n for ndd in set(PolRes.keys()).symmetric_difference(set(prev_dat[0].keys())):\n dat = [\n x for x in prev_dat if ndd in x and x[ndd] != None]\n if len(dat) != 0:\n dat = dat[0]\n else:\n continue\n PolRes[ndd] = dat[ndd]\n\n PmagLocs.append(PolRes)\n\n if len(PmagLocs) > 0:\n locs_df = DataFrame(PmagLocs)\n locs_df = locs_df.set_index('location')\n locs_df['location'] = locs_df.index\n nsdf = 
self.con.tables['locations'].merge_dfs(locs_df)\n if not dia.cb_location_mean_VGP.GetValue():\n nsdf.drop([col for col in nsdf.columns if type(col) == str and col.startswith(\n 'pole') and col != 'pol_comp_name'], axis=1, inplace=True)\n nsdf = nsdf.reindex_axis(sorted(nsdf.columns), axis=1)\n self.con.tables['locations'].df = nsdf\n self.con.tables['locations'].write_magic_file(dir_path=self.WD)\n\n else:\n for FILE in ['pmag_samples.txt', 'pmag_sites.txt', 'pmag_results.txt']:\n self.PmagRecsOld[FILE] = []\n try:\n meas_data, file_type = pmag.magic_read(\n os.path.join(self.WD, FILE))\n print((\"-I- Read old magic file %s\" %\n os.path.join(self.WD, FILE)))\n if FILE != 'pmag_specimens.txt':\n os.remove(os.path.join(self.WD, FILE))\n print((\"-I- Delete old magic file %s\" %\n os.path.join(self.WD, FILE)))\n except:\n continue\n\n for rec in meas_data:\n if \"magic_method_codes\" in list(rec.keys()):\n if \"LP-DIR\" not in rec['magic_method_codes'] and \"DE-\" not in rec['magic_method_codes']:\n self.PmagRecsOld[FILE].append(rec)\n\n print(('coord', coord, 'vgps_level', vgps_level, 'DefaultAge', DefaultAge, 'avg_directions_by_sample',\n avg_directions_by_sample, 'avg_by_polarity', avg_by_polarity, 'use_criteria', use_criteria))\n prev_cwd = os.getcwd()\n os.chdir(self.WD)\n ipmag.specimens_results_magic(coord=coord, vgps_level=vgps_level, DefaultAge=DefaultAge,\n avg_directions_by_sample=avg_directions_by_sample, avg_by_polarity=avg_by_polarity, use_criteria=use_criteria)\n os.chdir(prev_cwd)\n\n # reads new pmag tables, and merge the old lines:\n for FILE in ['pmag_samples.txt', 'pmag_sites.txt', 'pmag_results.txt']:\n pmag_data = []\n try:\n pmag_data, file_type = pmag.magic_read(\n os.path.join(self.WD, FILE))\n except:\n pass\n if FILE in list(self.PmagRecsOld.keys()):\n for rec in self.PmagRecsOld[FILE]:\n pmag_data.append(rec)\n if len(pmag_data) > 0:\n pmag_data_fixed = self.merge_pmag_recs(pmag_data)\n pmag.magic_write(os.path.join(self.WD, FILE),\n 
pmag_data_fixed, FILE.split(\".\")[0])\n                print((\"write new interpretations in %s\\n\" %\n                       (os.path.join(self.WD, FILE))))\n\n            # make pmag_criteria.txt if it does not exist\n            if not os.path.isfile(os.path.join(self.WD, \"pmag_criteria.txt\")):\n                Fout = open(os.path.join(self.WD, \"pmag_criteria.txt\"), 'w')\n                Fout.write(\"tab\\tpmag_criteria\\n\")\n                Fout.write(\"er_citation_names\\tpmag_criteria_code\\n\")\n                Fout.write(\"This study\\tACCEPT\\n\")\n\n            self.update_pmag_tables()\n            self.update_selection()\n\n        TEXT = \"interpretations saved\"\n        self.saved_dlg(TEXT)\n        self.close_warning = False", "response": "Function called when the user closes the MagIC dialog. It calculates VGPs and saves them to the hard drive."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef update_selection(self):\n\n        self.clear_boxes()\n        # commented out to allow propagation of higher level viewing state\n        self.clear_high_level_pars()\n\n        if self.UPPER_LEVEL_SHOW != \"specimens\":\n            self.mean_type_box.SetValue(\"None\")\n\n        # --------------------------\n        # check if the coordinate system in the window exists (if not change to \"specimen\" coordinate system)\n        # --------------------------\n\n        coordinate_system = self.coordinates_box.GetValue()\n        if coordinate_system == 'tilt-corrected' and \\\n           len(self.Data[self.s]['zijdblock_tilt']) == 0:\n            self.coordinates_box.SetStringSelection('specimen')\n        elif coordinate_system == 'geographic' and \\\n                len(self.Data[self.s]['zijdblock_geo']) == 0:\n            self.coordinates_box.SetStringSelection(\"specimen\")\n        if coordinate_system != self.coordinates_box.GetValue() and self.ie_open:\n            self.ie.coordinates_box.SetStringSelection(\n                self.coordinates_box.GetValue())\n            self.ie.update_editor()\n        coordinate_system = self.coordinates_box.GetValue()\n        self.COORDINATE_SYSTEM = coordinate_system\n\n        # --------------------------\n        # update treatment list\n        # --------------------------\n\n        
self.update_bounds_boxes()\n\n # --------------------------\n # update high level boxes\n # --------------------------\n\n high_level = self.level_box.GetValue()\n old_string = self.level_names.GetValue()\n new_string = old_string\n if high_level == 'sample':\n if self.s in self.Data_hierarchy['sample_of_specimen']:\n new_string = self.Data_hierarchy['sample_of_specimen'][self.s]\n else:\n new_string = ''\n if high_level == 'site':\n if self.s in self.Data_hierarchy['site_of_specimen']:\n new_string = self.Data_hierarchy['site_of_specimen'][self.s]\n else:\n new_string = ''\n if high_level == 'location':\n if self.s in self.Data_hierarchy['location_of_specimen']:\n new_string = self.Data_hierarchy['location_of_specimen'][self.s]\n else:\n new_string = ''\n self.level_names.SetValue(new_string)\n if self.ie_open and new_string != old_string:\n self.ie.level_names.SetValue(new_string)\n self.ie.on_select_level_name(-1, True)\n\n # --------------------------\n # update PCA box\n # --------------------------\n\n self.update_PCA_box()\n\n # update warning\n self.generate_warning_text()\n self.update_warning_box()\n # update choices in the fit box\n self.update_fit_boxes()\n self.update_mean_fit_box()\n # measurements text box\n self.Add_text()\n # draw figures\n if self.current_fit:\n self.draw_figure(self.s, False)\n else:\n self.draw_figure(self.s, True)\n # update high level stats\n self.update_high_level_stats()\n # redraw interpretations\n self.update_GUI_with_new_interpretation()", "response": "Update the selection of the current specimen and its high level boxes and statistics windows."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update_warning_box(self):\n self.warning_box.Clear()\n if self.warning_text == \"\":\n self.warning_box.AppendText(\"No Problems\")\n else:\n self.warning_box.AppendText(self.warning_text)", "response": "Updates the warning box with the current warning_text 
variable."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate statistics boxes and figures with a new interpretation when selecting new temperature bound", "response": "def update_GUI_with_new_interpretation(self):\n        \"\"\"\n        update statistics boxes and figures with a new interpretation when\n        selecting new temperature bound\n        \"\"\"\n\n        self.update_fit_bounds_and_statistics()\n        self.draw_interpretations()\n        self.calculate_high_levels_data()\n        self.plot_high_levels_data()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the high level statistics in bottom left of GUI.", "response": "def update_high_level_stats(self):\n        \"\"\"\n        updates high level statistics in bottom left of GUI.\n        \"\"\"\n        self.clear_high_level_pars()\n        dirtype = str(self.coordinates_box.GetValue())\n        if dirtype == 'specimen':\n            dirtype = 'DA-DIR'\n        elif dirtype == 'geographic':\n            dirtype = 'DA-DIR-GEO'\n        elif dirtype == 'tilt-corrected':\n            dirtype = 'DA-DIR-TILT'\n        if str(self.level_box.GetValue()) == 'sample':\n            high_level_type = 'samples'\n        elif str(self.level_box.GetValue()) == 'site':\n            high_level_type = 'sites'\n        elif str(self.level_box.GetValue()) == 'location':\n            high_level_type = 'locations'\n        elif str(self.level_box.GetValue()) == 'study':\n            high_level_type = 'study'\n        high_level_name = str(self.level_names.GetValue())\n        elements_type = self.UPPER_LEVEL_SHOW\n        if high_level_name in list(self.high_level_means[high_level_type].keys()):\n            mpars = []\n            for mf in list(self.high_level_means[high_level_type][high_level_name].keys()):\n                if mf in list(self.high_level_means[high_level_type][high_level_name].keys()) and self.mean_fit == 'All' or mf == self.mean_fit:\n                    if dirtype in list(self.high_level_means[high_level_type][high_level_name][mf].keys()):\n                        mpar = deepcopy(\n                            self.high_level_means[high_level_type][high_level_name][mf][dirtype])\n                        if 'n' in mpar and mpar['n'] == 1:\n                            mpar['calculation_type'] = \"Fisher:\"+mf\n                            
mpars.append(mpar)\n elif mpar['calculation_type'] == 'Fisher by polarity':\n for k in list(mpar.keys()):\n if k == 'color' or k == 'calculation_type':\n continue\n mpar[k]['calculation_type'] += ':'+k+':'+mf\n mpar[k]['color'] = mpar['color']\n if 'K' not in mpar[k] and 'k' in mpar[k]:\n mpar[k]['K'] = mpar[k]['k']\n if 'R' not in mpar[k] and 'r' in mpar[k]:\n mpar[k]['R'] = mpar[k]['r']\n if 'n_lines' not in mpar[k] and 'n' in mpar[k]:\n mpar[k]['n_lines'] = mpar[k]['n']\n mpars.append(mpar[k])\n else:\n mpar['calculation_type'] += \":\"+mf\n mpars.append(mpar)\n self.switch_stats_button.SetRange(0, len(mpars)-1)\n self.show_high_levels_pars(mpars)\n if self.ie_open:\n self.ie.switch_stats_button.SetRange(0, len(mpars)-1)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_bounds_boxes(self):\n if self.s not in list(self.Data.keys()):\n self.select_specimen(list(self.Data.keys())[0])\n self.T_list = self.Data[self.s]['zijdblock_steps']\n if self.current_fit:\n self.tmin_box.SetItems(self.T_list)\n self.tmax_box.SetItems(self.T_list)\n if type(self.current_fit.tmin) == str and type(self.current_fit.tmax) == str:\n self.tmin_box.SetStringSelection(self.current_fit.tmin)\n self.tmax_box.SetStringSelection(self.current_fit.tmax)\n if self.ie_open:\n self.ie.update_bounds_boxes(self.T_list)", "response": "Updates bounds boxes with current specimen and fit."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_PCA_box(self):\n if self.s in list(self.pmag_results_data['specimens'].keys()):\n\n if self.current_fit:\n tmin = self.current_fit.tmin\n tmax = self.current_fit.tmax\n calculation_type = self.current_fit.PCA_type\n else:\n calculation_type = self.PCA_type_box.GetValue()\n PCA_type = \"None\"\n\n # update calculation type windows\n if calculation_type == \"DE-BFL\":\n PCA_type = \"line\"\n elif calculation_type == \"DE-BFL-A\":\n PCA_type 
= \"line-anchored\"\n            elif calculation_type == \"DE-BFL-O\":\n                PCA_type = \"line-with-origin\"\n            elif calculation_type == \"DE-FM\":\n                PCA_type = \"Fisher\"\n            elif calculation_type == \"DE-BFP\":\n                PCA_type = \"plane\"\n            else:\n                print(\"no PCA type found, setting to line\")\n                PCA_type = \"line\"\n            self.PCA_type_box.SetStringSelection(PCA_type)", "response": "updates PCA box with the current fit's PCA type"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate the fit box and mean fit box lists to match changes in the specimen or new or removed interpretations.", "response": "def update_fit_boxes(self, new_fit=False):\n        \"\"\"\n        alters fit_box and mean_fit_box lists to match with changes in\n        specimen or new/removed interpretations\n\n        Parameters\n        ----------\n        new_fit : boolean representing if there is a new fit\n\n        Alters\n        ------\n        fit_box selection, tmin_box selection, tmax_box selection,\n        mean_fit_box selection, current_fit\n        \"\"\"\n        # update the fit box\n        self.update_fit_box(new_fit)\n        # select new fit\n        self.on_select_fit(None)\n        # update the high level fits box\n        self.update_mean_fit_box()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_fit_box(self, new_fit=False):\n        # get new fit data\n        if self.s in list(self.pmag_results_data['specimens'].keys()):\n            self.fit_list = list(\n                [x.name for x in self.pmag_results_data['specimens'][self.s]])\n        else:\n            self.fit_list = []\n        # find new index to set fit_box to\n        if not self.fit_list:\n            new_index = 'None'\n        elif new_fit:\n            new_index = len(self.fit_list) - 1\n        else:\n            if self.fit_box.GetValue() in self.fit_list:\n                new_index = self.fit_list.index(self.fit_box.GetValue())\n            else:\n                new_index = 'None'\n        # clear old box\n        self.fit_box.Clear()\n        # update fit box\n        self.fit_box.SetItems(self.fit_list)\n        fit_index = None\n        # select defaults\n        if new_index == 'None':\n            
self.fit_box.SetStringSelection('None')\n        else:\n            self.fit_box.SetSelection(new_index)", "response": "Update the fit_box to match changes in the specimen or new/removed interpretations"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating the high level mean fit box and the menu for the current entry.", "response": "def update_mean_fit_box(self):\n        \"\"\"\n        alters mean_fit_box list to match with changes in specimen or new/\n        removed interpretations\n\n        Alters\n        ------\n        mean_fit_box selection and choices, mean_types_box string selection\n        \"\"\"\n        self.mean_fit_box.Clear()\n        # update high level mean fit box\n        fit_index = None\n        if self.mean_fit in self.all_fits_list:\n            fit_index = self.all_fits_list.index(self.mean_fit)\n        self.all_fits_list = []\n        for specimen in self.specimens:\n            if specimen in self.pmag_results_data['specimens']:\n                for name in [x.name for x in self.pmag_results_data['specimens'][specimen]]:\n                    if name not in self.all_fits_list:\n                        self.all_fits_list.append(name)\n        self.mean_fit_box.SetItems(['None', 'All'] + self.all_fits_list)\n        # select defaults\n        if not isinstance(fit_index, type(None)):\n            self.mean_fit_box.SetValue(self.all_fits_list[fit_index])\n        elif self.mean_fit == 'All':\n            self.mean_fit_box.SetValue('All')\n        else:\n            self.mean_fit_box.SetValue('None')\n            self.mean_type_box.SetValue('None')\n            self.clear_high_level_pars()\n        self.update_toggleable_means_menu()\n        if self.ie_open:\n            self.ie.mean_fit_box.Clear()\n            self.ie.mean_fit_box.SetItems(['None', 'All'] + self.all_fits_list)\n            if fit_index:\n                self.ie.mean_fit_box.SetValue(self.all_fits_list[fit_index])\n            elif self.mean_fit == 'All':\n                self.ie.mean_fit_box.SetValue('All')\n            else:\n                self.ie.mean_fit_box.SetValue('None')\n                self.ie.mean_type_box.SetValue('None')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef show_high_levels_pars(self, mpars):\n        FONT_WEIGHT = self.GUI_RESOLUTION+(self.GUI_RESOLUTION-1)*5\n        font2 = 
wx.Font(12+min(1, FONT_WEIGHT), wx.SWISS,\n                        wx.NORMAL, wx.NORMAL, False, self.font_type)\n\n        if self.mean_type_box.GetValue() != \"None\" and self.mean_fit_box.GetValue() != \"None\" and mpars:\n\n            if isinstance(mpars, list):\n                i = self.switch_stats_button.GetValue()\n                if i >= len(mpars):\n                    print((\"Cannot display statistics as mpars does not have index %d. Was given mpars %s, aborting.\" % (\n                        i, str(mpars))))\n                    return\n                self.show_high_levels_pars(mpars[i])\n            elif mpars[\"calculation_type\"].startswith('Fisher'):\n                if \"alpha95\" in list(mpars.keys()):\n                    for val in ['mean_type:calculation_type', 'dec:dec', 'inc:inc', 'alpha95:alpha95', 'K:K', 'R:R', 'n_lines:n_lines', 'n_planes:n_planes']:\n                        val, ind = val.split(\":\")\n                        COMMAND = \"\"\"self.%s_window.SetValue(str(mpars['%s']))\"\"\" % (\n                            val, ind)\n                        exec(COMMAND)\n\n                if self.ie_open:\n                    ie = self.ie\n                    if \"alpha95\" in list(mpars.keys()):\n                        for val in ['mean_type:calculation_type', 'dec:dec', 'inc:inc', 'alpha95:alpha95', 'K:K', 'R:R', 'n_lines:n_lines', 'n_planes:n_planes']:\n                            val, ind = val.split(\":\")\n                            COMMAND = \"\"\"ie.%s_window.SetValue(str(mpars['%s']))\"\"\" % (\n                                val, ind)\n                            exec(COMMAND)\n\n            elif mpars[\"calculation_type\"].startswith('Fisher by polarity'):\n                i = self.switch_stats_button.GetValue()\n                keys = list(mpars.keys())\n                keys.remove('calculation_type')\n                if 'color' in keys:\n                    keys.remove('color')\n                keys.sort()\n                name = keys[i % len(keys)]\n                mpars = mpars[name]\n                if type(mpars) != dict:\n                    print(\n                        (\"error in showing high level mean, received %s\" % str(mpars)))\n                    return\n                if mpars[\"calculation_type\"] == 'Fisher' and \"alpha95\" in list(mpars.keys()):\n                    for val in ['mean_type:calculation_type', 'dec:dec', 'inc:inc', 'alpha95:alpha95', 'K:k', 'R:r', 'n_lines:n', 'n_planes:n_planes']:\n                        val, ind = val.split(\":\")\n                        if val == 'mean_type':\n                            COMMAND = \"\"\"self.%s_window.SetValue('%s')\"\"\" % (\n                                val, mpars[ind] + \":\" + name)\n                        else:\n                            COMMAND = 
\"\"\"self.%s_window.SetValue(str(mpars['%s']))\"\"\" % (\n                                val, ind)\n                        exec(COMMAND)\n\n                if self.ie_open:\n                    ie = self.ie\n                    if mpars[\"calculation_type\"] == 'Fisher' and \"alpha95\" in list(mpars.keys()):\n                        for val in ['mean_type:calculation_type', 'dec:dec', 'inc:inc', 'alpha95:alpha95', 'K:k', 'R:r', 'n_lines:n', 'n_planes:n_planes']:\n                            val, ind = val.split(\":\")\n                            if val == 'mean_type':\n                                COMMAND = \"\"\"ie.%s_window.SetValue('%s')\"\"\" % (\n                                    val, mpars[ind] + \":\" + name)\n                            else:\n                                COMMAND = \"\"\"ie.%s_window.SetValue(str(mpars['%s']))\"\"\" % (\n                                    val, ind)\n                            exec(COMMAND)\n\n        self.set_mean_stats_color()", "response": "shows the high level mean in the display area in the bottom left of the main window"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef clear_boxes(self):\n        self.tmin_box.Clear()\n        self.tmin_box.SetStringSelection(\"\")\n        if self.current_fit:\n            self.tmin_box.SetItems(self.T_list)\n            self.tmin_box.SetSelection(-1)\n\n        self.tmax_box.Clear()\n        self.tmax_box.SetStringSelection(\"\")\n        if self.current_fit:\n            self.tmax_box.SetItems(self.T_list)\n            self.tmax_box.SetSelection(-1)\n\n        self.fit_box.Clear()\n        self.fit_box.SetStringSelection(\"\")\n        if self.s in self.pmag_results_data['specimens'] and self.pmag_results_data['specimens'][self.s]:\n            self.fit_box.SetItems(\n                list([x.name for x in self.pmag_results_data['specimens'][self.s]]))\n\n        for parameter in ['dec', 'inc', 'n', 'mad', 'dang', 'alpha95']:\n            COMMAND = \"self.s%s_window.SetValue('')\" % parameter\n            exec(COMMAND)\n            COMMAND = \"self.s%s_window.SetBackgroundColour(wx.Colour('grey'))\" % parameter\n            exec(COMMAND)", "response": "Clears all the fit and statistics display boxes in the GUI"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef clear_high_level_pars(self):\n        for val in ['mean_type', 'dec', 'inc', 'alpha95', 'K', 'R', 'n_lines', 'n_planes']:\n            COMMAND = 
\"\"\"self.%s_window.SetValue(\"\")\"\"\" % (val)\n exec(COMMAND)\n if self.ie_open:\n for val in ['mean_type', 'dec', 'inc', 'alpha95', 'K', 'R', 'n_lines', 'n_planes']:\n COMMAND = \"\"\"self.ie.%s_window.SetValue(\"\")\"\"\" % (val)\n exec(COMMAND)\n self.set_mean_stats_color()", "response": "Clears all high level pars display boxes"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef on_menu_make_MagIC_results_tables(self, event):\n\n # ---------------------------------------\n # save pmag_*.txt.tmp without directional data\n # ---------------------------------------\n self.on_menu_save_interpretation(None)\n\n # ---------------------------------------\n # dialog box to choose coordinate systems for pmag_specimens.txt\n # ---------------------------------------\n dia = demag_dialogs.magic_pmag_specimens_table_dialog(None)\n CoorTypes = []\n if self.test_mode:\n CoorTypes = ['DA-DIR']\n elif dia.ShowModal() == wx.ID_OK: # Until the user clicks OK, show the message\n if dia.cb_spec_coor.GetValue() == True:\n CoorTypes.append('DA-DIR')\n if dia.cb_geo_coor.GetValue() == True:\n CoorTypes.append('DA-DIR-GEO')\n if dia.cb_tilt_coor.GetValue() == True:\n CoorTypes.append('DA-DIR-TILT')\n else:\n self.user_warning(\"MagIC tables not saved\")\n print(\"MagIC tables not saved\")\n return\n # ------------------------------\n\n self.PmagRecsOld = {}\n if self.data_model == 3.0:\n FILES = []\n else:\n FILES = ['pmag_specimens.txt']\n for FILE in FILES:\n self.PmagRecsOld[FILE] = []\n meas_data = []\n try:\n meas_data, file_type = pmag.magic_read(\n os.path.join(self.WD, FILE))\n print((\"-I- Read old magic file %s\\n\" %\n os.path.join(self.WD, FILE)))\n # if FILE !='pmag_specimens.txt':\n os.remove(os.path.join(self.WD, FILE))\n print((\"-I- Delete old magic file %s\\n\" %\n os.path.join(self.WD, FILE)))\n\n except (OSError, IOError) as e:\n continue\n\n for rec in meas_data:\n if \"magic_method_codes\" 
in list(rec.keys()):\n if \"LP-DIR\" not in rec['magic_method_codes'] and \"DE-\" not in rec['magic_method_codes']:\n self.PmagRecsOld[FILE].append(rec)\n\n # ---------------------------------------\n # write a new pmag_specimens.txt\n # ---------------------------------------\n\n specimens_list = list(self.pmag_results_data['specimens'].keys())\n specimens_list.sort()\n PmagSpecs = []\n for specimen in specimens_list:\n for dirtype in CoorTypes:\n i = 0\n for fit in self.pmag_results_data['specimens'][specimen]:\n mpars = fit.get(dirtype)\n if not mpars:\n mpars = self.get_PCA_parameters(\n specimen, fit, fit.tmin, fit.tmax, dirtype, fit.PCA_type)\n if not mpars or 'specimen_dec' not in list(mpars.keys()):\n self.user_warning(\"Could not calculate interpretation for specimen %s and fit %s in coordinate system %s while exporting pmag tables, skipping\" % (\n specimen, fit.name, dirtype))\n continue\n\n PmagSpecRec = {}\n PmagSpecRec[\"magic_software_packages\"] = pmag.get_version(\n ) + ': demag_gui'\n PmagSpecRec[\"er_specimen_name\"] = specimen\n PmagSpecRec[\"er_sample_name\"] = self.Data_hierarchy['sample_of_specimen'][specimen]\n PmagSpecRec[\"er_site_name\"] = self.Data_hierarchy['site_of_specimen'][specimen]\n PmagSpecRec[\"er_location_name\"] = self.Data_hierarchy['location_of_specimen'][specimen]\n if specimen in list(self.Data_hierarchy['expedition_name_of_specimen'].keys()):\n PmagSpecRec[\"er_expedition_name\"] = self.Data_hierarchy['expedition_name_of_specimen'][specimen]\n PmagSpecRec[\"er_citation_names\"] = \"This study\"\n if \"magic_experiment_name\" in self.Data[specimen]:\n PmagSpecRec[\"magic_experiment_names\"] = self.Data[specimen][\"magic_experiment_name\"]\n if 'magic_instrument_codes' in list(self.Data[specimen].keys()):\n PmagSpecRec[\"magic_instrument_codes\"] = self.Data[specimen]['magic_instrument_codes']\n PmagSpecRec['specimen_correction'] = 'u'\n PmagSpecRec['specimen_direction_type'] = mpars[\"specimen_direction_type\"]\n 
PmagSpecRec['specimen_dec'] = \"%.1f\" % mpars[\"specimen_dec\"]\n                    PmagSpecRec['specimen_inc'] = \"%.1f\" % mpars[\"specimen_inc\"]\n                    PmagSpecRec['specimen_flag'] = \"g\"\n                    if fit in self.bad_fits:\n                        PmagSpecRec['specimen_flag'] = \"b\"\n\n                    if \"C\" in fit.tmin or \"C\" in fit.tmax:\n                        PmagSpecRec['measurement_step_unit'] = \"K\"\n                    else:\n                        PmagSpecRec['measurement_step_unit'] = \"T\"\n\n                    if \"C\" in fit.tmin:\n                        PmagSpecRec['measurement_step_min'] = \"%.0f\" % (\n                            mpars[\"measurement_step_min\"]+273.)\n                    elif \"mT\" in fit.tmin:\n                        PmagSpecRec['measurement_step_min'] = \"%8.3e\" % (\n                            mpars[\"measurement_step_min\"]*1e-3)\n                    else:\n                        if PmagSpecRec['measurement_step_unit'] == \"K\":\n                            PmagSpecRec['measurement_step_min'] = \"%.0f\" % (\n                                mpars[\"measurement_step_min\"]+273.)\n                        else:\n                            PmagSpecRec['measurement_step_min'] = \"%8.3e\" % (\n                                mpars[\"measurement_step_min\"]*1e-3)\n\n                    if \"C\" in fit.tmax:\n                        PmagSpecRec['measurement_step_max'] = \"%.0f\" % (\n                            mpars[\"measurement_step_max\"]+273.)\n                    elif \"mT\" in fit.tmax:\n                        PmagSpecRec['measurement_step_max'] = \"%8.3e\" % (\n                            mpars[\"measurement_step_max\"]*1e-3)\n                    else:\n                        # note: the tmax branch should set measurement_step_max, not _min\n                        if PmagSpecRec['measurement_step_unit'] == \"K\":\n                            PmagSpecRec['measurement_step_max'] = \"%.0f\" % (\n                                mpars[\"measurement_step_max\"]+273.)\n                        else:\n                            PmagSpecRec['measurement_step_max'] = \"%8.3e\" % (\n                                mpars[\"measurement_step_max\"]*1e-3)\n\n                    PmagSpecRec['specimen_n'] = \"%.0f\" % mpars[\"specimen_n\"]\n                    calculation_type = mpars['calculation_type']\n                    PmagSpecRec[\"magic_method_codes\"] = self.Data[specimen]['magic_method_codes'] + \\\n                        \":\"+calculation_type+\":\"+dirtype\n                    PmagSpecRec[\"specimen_comp_n\"] = str(\n                        len(self.pmag_results_data[\"specimens\"][specimen]))\n                    PmagSpecRec[\"specimen_comp_name\"] = fit.name\n                    if fit in self.bad_fits:\n                        PmagSpecRec[\"specimen_flag\"] = \"b\"\n                    else:\n                        PmagSpecRec[\"specimen_flag\"] = \"g\"\n                    if calculation_type in [\"DE-BFL\", \"DE-BFL-A\", \"DE-BFL-O\"]:\n                        PmagSpecRec['specimen_direction_type'] = 'l'\n                        
PmagSpecRec['specimen_mad'] = \"%.1f\" % float(\n mpars[\"specimen_mad\"])\n PmagSpecRec['specimen_dang'] = \"%.1f\" % float(\n mpars['specimen_dang'])\n PmagSpecRec['specimen_alpha95'] = \"\"\n elif calculation_type in [\"DE-BFP\"]:\n PmagSpecRec['specimen_direction_type'] = 'p'\n PmagSpecRec['specimen_mad'] = \"%.1f\" % float(\n mpars['specimen_mad'])\n PmagSpecRec['specimen_dang'] = \"\"\n PmagSpecRec['specimen_alpha95'] = \"\"\n if self.data_model == 3.0:\n if 'bfv_dec' not in list(mpars.keys()) or \\\n 'bfv_inc' not in list(mpars.keys()):\n self.calculate_best_fit_vectors(\n high_level_type=\"sites\", high_level_name=PmagSpecRec[\"er_site_name\"], dirtype=dirtype)\n mpars = fit.get(dirtype)\n try:\n PmagSpecRec['dir_bfv_dec'] = \"%.1f\" % mpars['bfv_dec']\n PmagSpecRec['dir_bfv_inc'] = \"%.1f\" % mpars['bfv_inc']\n except KeyError:\n print(\"Error calculating BFV during export of interpretations for %s, %s, %s\" % (\n fit.name, specimen, dirtype))\n elif calculation_type in [\"DE-FM\"]:\n PmagSpecRec['specimen_direction_type'] = 'l'\n PmagSpecRec['specimen_mad'] = \"\"\n PmagSpecRec['specimen_dang'] = \"\"\n PmagSpecRec['specimen_alpha95'] = \"%.1f\" % float(\n mpars['specimen_alpha95'])\n if dirtype == 'DA-DIR-TILT':\n PmagSpecRec['specimen_tilt_correction'] = \"100\"\n elif dirtype == 'DA-DIR-GEO':\n PmagSpecRec['specimen_tilt_correction'] = \"0\"\n else:\n PmagSpecRec['specimen_tilt_correction'] = \"-1\"\n PmagSpecs.append(PmagSpecRec)\n i += 1\n\n # add the 'old' lines with no \"LP-DIR\" in\n if 'pmag_specimens.txt' in list(self.PmagRecsOld.keys()):\n for rec in self.PmagRecsOld['pmag_specimens.txt']:\n PmagSpecs.append(rec)\n PmagSpecs_fixed = self.merge_pmag_recs(PmagSpecs)\n\n if len(PmagSpecs_fixed) == 0:\n self.user_warning(\n \"No data to save to MagIC tables please create some interpretations before saving\")\n print(\"No data to save, MagIC tables not written\")\n return\n\n if self.data_model == 3.0:\n\n # translate demag_gui output to 3.0 
DataFrame\n ndf2_5 = DataFrame(PmagSpecs_fixed)\n if 'specimen_direction_type' in ndf2_5.columns:\n # doesn't exist in new model\n del ndf2_5['specimen_direction_type']\n ndf3_0 = ndf2_5.rename(columns=map_magic.spec_magic2_2_magic3_map)\n if 'specimen' in ndf3_0.columns:\n ndf3_0 = ndf3_0.set_index(\"specimen\")\n # replace the removed specimen column\n ndf3_0['specimen'] = ndf3_0.index\n # prefer keeping analyst_names in txt\n if 'analyst_names' in ndf3_0:\n del ndf3_0['analyst_names']\n\n # get current 3.0 DataFrame from contribution object\n if 'specimens' not in self.con.tables:\n cols = ndf3_0.columns\n self.con.add_empty_magic_table('specimens', col_names=cols)\n spmdf = self.con.tables['specimens']\n # remove translation collisions or deprecated terms\n for dc in [\"dir_comp_name\", \"magic_method_codes\"]:\n if dc in spmdf.df.columns:\n del spmdf.df[dc]\n\n # merge previous df with new interpretations DataFrame\n # (do not include non-directional data in the merge or else it\n # will be overwritten)\n\n # fix index names\n spmdf.df.index.name = \"specimen_name\"\n ndf3_0.index.name = \"specimen_name\"\n # pull out directional/non-directional data\n if 'method_codes' not in spmdf.df:\n spmdf.df['method_codes'] = ''\n directional = spmdf.df['method_codes'].str.contains('LP-DIR').astype(bool)\n non_directional_df = spmdf.df[~directional]\n spmdf.df = spmdf.df[directional]\n # merge new interpretations with old specimen information\n directional_df = spmdf.merge_dfs(ndf3_0)\n # add any missing columns to non_directional_df\n for col in directional_df.columns:\n if col not in non_directional_df.columns:\n non_directional_df[col] = \"\"\n # make sure columns are ordered the same so that we can concatenate\n non_directional_df.sort_index(axis='columns', inplace=True)\n directional_df.sort_index(axis='columns', inplace=True)\n # put directional/non-directional data back together\n merged = pd.concat([non_directional_df, directional_df])\n 
merged.sort_index(inplace=True)\n            spmdf.df = merged\n\n            # write to disk\n            spmdf.write_magic_file(dir_path=self.WD)\n\n            TEXT = \"specimens interpretations are saved in specimens.txt.\\nPress OK to save to samples/sites/locations/ages tables.\"\n            self.dlg = wx.MessageDialog(\n                self, caption=\"Other Tables\", message=TEXT, style=wx.OK | wx.CANCEL)\n            result = self.show_dlg(self.dlg)\n            if result == wx.ID_OK:\n                self.dlg.Destroy()\n            else:\n                self.dlg.Destroy()\n                return\n\n        else:\n            pmag.magic_write(os.path.join(\n                self.WD, \"pmag_specimens.txt\"), PmagSpecs_fixed, 'pmag_specimens')\n            print((\"specimen data stored in %s\\n\" %\n                   os.path.join(self.WD, \"pmag_specimens.txt\")))\n\n            TEXT = \"specimens interpretations are saved in pmag_specimens.txt.\\nPress OK for pmag_samples/pmag_sites/pmag_results tables.\"\n            dlg = wx.MessageDialog(\n                self, caption=\"Other Pmag Tables\", message=TEXT, style=wx.OK | wx.CANCEL)\n            result = self.show_dlg(dlg)\n            if result == wx.ID_OK:\n                dlg.Destroy()\n            else:\n                dlg.Destroy()\n                return\n\n        # --------------------------------\n\n        dia = demag_dialogs.magic_pmag_tables_dialog(\n            None, self.WD, self.Data, self.Data_info)\n\n        if self.show_dlg(dia) == wx.ID_OK:  # Until the user clicks OK, show the message\n            self.On_close_MagIC_dialog(dia)", "response": "Creates or updates MagIC tables for later contribution to the MagIC results table."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef on_menu_import_meas_file(self, event):\n        # use new measurement file and corresponding WD\n        meas_file = self.choose_meas_file()\n        WD = os.path.split(meas_file)[0]\n        self.WD = WD\n        self.magic_file = meas_file\n        # reset backend with new files\n        self.reset_backend()", "response": "Open a new measurement file, reset self.magic_file and self.WD, and reset the backend."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads pmag_criteria. 
txt file and open changecriteria dialog", "response": "def on_menu_criteria_file(self, event):\n \"\"\"\n read pmag_criteria.txt file\n and open changecriteria dialog\n \"\"\"\n if self.data_model == 3:\n default_file = \"criteria.txt\"\n else:\n default_file = \"pmag_criteria.txt\"\n read_success = False\n dlg = wx.FileDialog(\n self, message=\"choose pmag criteria file\",\n defaultDir=self.WD,\n defaultFile=default_file,\n style=wx.FD_OPEN | wx.FD_CHANGE_DIR\n )\n if self.show_dlg(dlg) == wx.ID_OK:\n criteria_file = dlg.GetPath()\n print((\"-I- Read new criteria file: %s\" % criteria_file))\n\n # check if this is a valid pmag_criteria file\n try:\n mag_meas_data, file_type = pmag.magic_read(criteria_file)\n except:\n dlg = wx.MessageDialog(\n self, caption=\"Error\", message=\"not a valid pmag_criteria file\", style=wx.OK)\n result = self.show_dlg(dlg)\n if result == wx.ID_OK:\n dlg.Destroy()\n dlg.Destroy()\n return\n\n # initialize criteria\n self.acceptance_criteria = self.read_criteria_file(criteria_file)\n read_success = True\n\n dlg.Destroy()\n if read_success:\n self.on_menu_change_criteria(None)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntoggling between zoom and pan effects for the zijderveld plot on right click", "response": "def right_click_zijderveld(self, event):\n \"\"\"\n toggles between zoom and pan effects for the zijderveld on right\n click\n\n Parameters\n ----------\n event : the wx.MouseEvent that triggered the call of this function\n\n Alters\n ------\n zijderveld_setting, toolbar1 setting\n \"\"\"\n if event.LeftIsDown() or event.ButtonDClick():\n return\n elif self.zijderveld_setting == \"Zoom\":\n self.zijderveld_setting = \"Pan\"\n try:\n self.toolbar1.pan('off')\n except TypeError:\n pass\n elif self.zijderveld_setting == \"Pan\":\n self.zijderveld_setting = \"Zoom\"\n try:\n self.toolbar1.zoom()\n except TypeError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nhandle mouse 
cursor events for the zijd area", "response": "def on_change_zijd_mouse_cursor(self, event):\n \"\"\"\n If mouse is over data point making it selectable change the shape of\n the cursor\n\n Parameters\n ----------\n event : the wx Mouseevent for that click\n \"\"\"\n if not array(self.CART_rot).any():\n return\n pos = event.GetPosition()\n width, height = self.canvas1.get_width_height()\n pos[1] = height - pos[1]\n xpick_data, ypick_data = pos\n xdata_org = list(self.CART_rot[:, 0]) + list(self.CART_rot[:, 0])\n ydata_org = list(-1*self.CART_rot[:, 1]) + list(-1*self.CART_rot[:, 2])\n data_corrected = self.zijplot.transData.transform(\n vstack([xdata_org, ydata_org]).T)\n xdata, ydata = data_corrected.T\n xdata = list(map(float, xdata))\n ydata = list(map(float, ydata))\n e = 4e0\n\n if self.zijderveld_setting == \"Zoom\":\n self.canvas1.SetCursor(wx.Cursor(wx.CURSOR_CROSS))\n else:\n self.canvas1.SetCursor(wx.Cursor(wx.CURSOR_ARROW))\n for i, (x, y) in enumerate(zip(xdata, ydata)):\n if 0 < sqrt((x-xpick_data)**2. + (y-ypick_data)**2.) 
< e:\n self.canvas1.SetCursor(wx.Cursor(wx.CURSOR_HAND))\n break\n event.Skip()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nselecting the nearest interpretation to the mouse position on double click.", "response": "def on_zijd_select(self, event):\n \"\"\"\n Get mouse position on double click find the nearest interpretation\n to the mouse\n position then select that interpretation\n\n Parameters\n ----------\n event : the wx Mouseevent for that click\n\n Alters\n ------\n current_fit\n \"\"\"\n if not array(self.CART_rot_good).any():\n return\n pos = event.GetPosition()\n width, height = self.canvas1.get_width_height()\n pos[1] = height - pos[1]\n xpick_data, ypick_data = pos\n xdata_org = list(\n self.CART_rot_good[:, 0]) + list(self.CART_rot_good[:, 0])\n ydata_org = list(-1*self.CART_rot_good[:, 1]) + \\\n list(-1*self.CART_rot_good[:, 2])\n data_corrected = self.zijplot.transData.transform(\n vstack([xdata_org, ydata_org]).T)\n xdata, ydata = data_corrected.T\n xdata = list(map(float, xdata))\n ydata = list(map(float, ydata))\n e = 4.0\n\n index = None\n for i, (x, y) in enumerate(zip(xdata, ydata)):\n if 0 < sqrt((x-xpick_data)**2. + (y-ypick_data)**2.) 
< e:\n index = i\n break\n if index != None:\n steps = self.Data[self.s]['zijdblock_steps']\n bad_count = self.Data[self.s]['measurement_flag'][:index].count(\n 'b')\n if index > len(steps):\n bad_count *= 2\n if not self.current_fit:\n self.on_btn_add_fit(event)\n self.select_bounds_in_logger((index+bad_count) % len(steps))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfinding the measurement nearest the mouse position on double right click and marking that measurement bad or good", "response": "def on_zijd_mark(self, event):\n \"\"\"\n Get mouse position on double right click find the interpretation in\n range of mouse\n position then mark that interpretation bad or good\n\n Parameters\n ----------\n event : the wx Mouseevent for that click\n\n Alters\n ------\n current_fit\n \"\"\"\n if not array(self.CART_rot).any():\n return\n pos = event.GetPosition()\n width, height = self.canvas1.get_width_height()\n pos[1] = height - pos[1]\n xpick_data, ypick_data = pos\n xdata_org = list(self.CART_rot[:, 0]) + list(self.CART_rot[:, 0])\n ydata_org = list(-1*self.CART_rot[:, 1]) + list(-1*self.CART_rot[:, 2])\n data_corrected = self.zijplot.transData.transform(\n vstack([xdata_org, ydata_org]).T)\n xdata, ydata = data_corrected.T\n xdata = list(map(float, xdata))\n ydata = list(map(float, ydata))\n e = 4e0\n\n index = None\n for i, (x, y) in enumerate(zip(xdata, ydata)):\n if 0 < sqrt((x-xpick_data)**2. + (y-ypick_data)**2.) 
< e:\n index = i\n break\n if index != None:\n steps = self.Data[self.s]['zijdblock']\n if self.Data[self.s]['measurement_flag'][index % len(steps)] == \"g\":\n self.mark_meas_bad(index % len(steps))\n else:\n self.mark_meas_good(index % len(steps))\n pmag.magic_write(os.path.join(\n self.WD, \"magic_measurements.txt\"), self.mag_meas_data, \"magic_measurements\")\n\n self.recalculate_current_specimen_interpreatations()\n\n if self.ie_open:\n self.ie.update_current_fit_data()\n self.calculate_high_levels_data()\n self.update_selection()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef right_click_specimen_equalarea(self, event):\n if event.LeftIsDown() or event.ButtonDClick():\n return\n elif self.specimen_EA_setting == \"Zoom\":\n self.specimen_EA_setting = \"Pan\"\n try:\n self.toolbar2.pan('off')\n except TypeError:\n pass\n elif self.specimen_EA_setting == \"Pan\":\n self.specimen_EA_setting = \"Zoom\"\n try:\n self.toolbar2.zoom()\n except TypeError:\n pass", "response": "Toggles between zoom and pan for the specimen equal area plot on right click."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef on_change_specimen_mouse_cursor(self, event):\n if not self.specimen_EA_xdata or not self.specimen_EA_ydata:\n return\n pos = event.GetPosition()\n width, height = self.canvas2.get_width_height()\n pos[1] = height - pos[1]\n xpick_data, ypick_data = pos\n xdata_org = self.specimen_EA_xdata\n ydata_org = self.specimen_EA_ydata\n data_corrected = self.specimen_eqarea.transData.transform(\n vstack([xdata_org, ydata_org]).T)\n xdata, ydata = data_corrected.T\n xdata = list(map(float, xdata))\n ydata = list(map(float, ydata))\n e = 4e0\n\n if self.specimen_EA_setting == \"Zoom\":\n self.canvas2.SetCursor(wx.Cursor(wx.CURSOR_CROSS))\n else:\n self.canvas2.SetCursor(wx.Cursor(wx.CURSOR_ARROW))\n for i, (x, y) in enumerate(zip(xdata, ydata)):\n if 0 < sqrt((x-xpick_data)**2. 
+ (y-ypick_data)**2.) < e:\n self.canvas2.SetCursor(wx.Cursor(wx.CURSOR_HAND))\n break\n event.Skip()", "response": "Handles mouse cursor events for specimen related data points"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nselects the nearest interpretation to the mouse position on double click and draw the figure.", "response": "def on_equalarea_specimen_select(self, event):\n \"\"\"\n Get mouse position on double click find the nearest interpretation\n to the mouse\n position then select that interpretation\n\n Parameters\n ----------\n event : the wx Mouseevent for that click\n\n Alters\n ------\n current_fit\n \"\"\"\n if not self.specimen_EA_xdata or not self.specimen_EA_ydata:\n return\n pos = event.GetPosition()\n width, height = self.canvas2.get_width_height()\n pos[1] = height - pos[1]\n xpick_data, ypick_data = pos\n xdata_org = self.specimen_EA_xdata\n ydata_org = self.specimen_EA_ydata\n data_corrected = self.specimen_eqarea.transData.transform(\n vstack([xdata_org, ydata_org]).T)\n xdata, ydata = data_corrected.T\n xdata = list(map(float, xdata))\n ydata = list(map(float, ydata))\n e = 4e0\n\n index = None\n for i, (x, y) in enumerate(zip(xdata, ydata)):\n if 0 < sqrt((x-xpick_data)**2. + (y-ypick_data)**2.) 
< e:\n index = i\n break\n if index != None:\n self.fit_box.SetSelection(index)\n self.draw_figure(self.s, True)\n self.on_select_fit(event)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef right_click_high_equalarea(self, event):\n if event.LeftIsDown():\n return\n elif self.high_EA_setting == \"Zoom\":\n self.high_EA_setting = \"Pan\"\n try:\n self.toolbar4.pan('off')\n except TypeError:\n pass\n elif self.high_EA_setting == \"Pan\":\n self.high_EA_setting = \"Zoom\"\n try:\n self.toolbar4.zoom()\n except TypeError:\n pass", "response": "Toggles between zoom and pan for the high level equal area plot on right click."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nchange the shape of the mouse cursor", "response": "def on_change_high_mouse_cursor(self, event):\n \"\"\"\n If mouse is over data point making it selectable change the shape of\n the cursor\n\n Parameters\n ----------\n event : the wx Mouseevent for that click\n \"\"\"\n if self.ie_open and self.ie.show_box.GetValue() != \"specimens\":\n return\n pos = event.GetPosition()\n width, height = self.canvas4.get_width_height()\n pos[1] = height - pos[1]\n xpick_data, ypick_data = pos\n xdata_org = self.high_EA_xdata\n ydata_org = self.high_EA_ydata\n data_corrected = self.high_level_eqarea.transData.transform(\n vstack([xdata_org, ydata_org]).T)\n xdata, ydata = data_corrected.T\n xdata = list(map(float, xdata))\n ydata = list(map(float, ydata))\n e = 4e0\n\n if self.high_EA_setting == \"Zoom\":\n self.canvas4.SetCursor(wx.Cursor(wx.CURSOR_CROSS))\n else:\n self.canvas4.SetCursor(wx.Cursor(wx.CURSOR_ARROW))\n if not self.high_EA_xdata or not self.high_EA_ydata:\n return\n for i, (x, y) in enumerate(zip(xdata, ydata)):\n if 0 < sqrt((x-xpick_data)**2. + (y-ypick_data)**2.) 
< e:\n self.canvas4.SetCursor(wx.Cursor(wx.CURSOR_HAND))\n break\n event.Skip()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nselecting interpretation for high level interpretation", "response": "def on_equalarea_high_select(self, event, fig=None, canvas=None):\n \"\"\"\n Get mouse position on double click find the nearest interpretation\n to the mouse position then select that interpretation\n\n Parameters\n ----------\n event : the wx Mouseevent for that click\n\n Alters\n ------\n current_fit, s, mean_fit, fit_box selection, mean_fit_box selection,\n specimens_box selection, tmin_box selection, tmax_box selection\n \"\"\"\n if self.ie_open and self.ie.show_box.GetValue() != \"specimens\":\n return\n if not self.high_EA_xdata or not self.high_EA_ydata:\n return\n if fig == None:\n fig = self.high_level_eqarea\n if canvas == None:\n canvas = self.canvas4\n pos = event.GetPosition()\n width, height = canvas.get_width_height()\n pos[1] = height - pos[1]\n xpick_data, ypick_data = pos\n xdata_org = self.high_EA_xdata\n ydata_org = self.high_EA_ydata\n data_corrected = fig.transData.transform(\n vstack([xdata_org, ydata_org]).T)\n xdata, ydata = data_corrected.T\n xdata = list(map(float, xdata))\n ydata = list(map(float, ydata))\n e = 4e0\n\n index = None\n for i, (x, y) in enumerate(zip(xdata, ydata)):\n if 0 < sqrt((x-xpick_data)**2. + (y-ypick_data)**2.) 
< e:\n index = i\n break\n if index != None:\n disp_fit_name = self.mean_fit_box.GetValue()\n\n if self.level_box.GetValue() == 'sample':\n high_level_type = 'samples'\n if self.level_box.GetValue() == 'site':\n high_level_type = 'sites'\n if self.level_box.GetValue() == 'location':\n high_level_type = 'locations'\n if self.level_box.GetValue() == 'study':\n high_level_type = 'study'\n\n high_level_name = str(self.level_names.GetValue())\n calculation_type = str(self.mean_type_box.GetValue())\n elements_type = self.UPPER_LEVEL_SHOW\n\n elements_list = self.Data_hierarchy[high_level_type][high_level_name][elements_type]\n\n new_fit_index = 0\n for i, specimen in enumerate(elements_list):\n if disp_fit_name == \"All\" and \\\n specimen in self.pmag_results_data[elements_type]:\n l = 0\n for fit in self.pmag_results_data[elements_type][specimen]:\n l += 1\n else:\n try:\n disp_fit_index = map(\n lambda x: x.name, self.pmag_results_data[elements_type][specimen]).index(disp_fit_name)\n if self.pmag_results_data[elements_type][specimen][disp_fit_index] in self.bad_fits:\n l = 0\n else:\n l = 1\n except IndexError:\n l = 0\n except KeyError:\n l = 0\n except ValueError:\n l = 0\n\n if index < l:\n self.specimens_box.SetStringSelection(specimen)\n self.select_specimen(specimen)\n self.draw_figure(specimen, False)\n if disp_fit_name == \"All\":\n new_fit_index = index\n else:\n new_fit_index = disp_fit_index\n break\n\n index -= l\n self.update_fit_box()\n self.fit_box.SetSelection(new_fit_index)\n self.on_select_fit(event)\n if disp_fit_name != \"All\":\n self.mean_fit = self.current_fit.name\n self.mean_fit_box.SetSelection(2+new_fit_index)\n self.update_selection()\n else:\n self.Add_text()\n if self.ie_open:\n self.ie.change_selected(self.current_fit)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd measurement data lines to the text window.", "response": "def Add_text(self):\n \"\"\"\n Add measurement data lines to the text window.\n 
\"\"\"\n self.selected_meas = []\n if self.COORDINATE_SYSTEM == 'geographic':\n zijdblock = self.Data[self.s]['zijdblock_geo']\n elif self.COORDINATE_SYSTEM == 'tilt-corrected':\n zijdblock = self.Data[self.s]['zijdblock_tilt']\n else:\n zijdblock = self.Data[self.s]['zijdblock']\n\n tmin_index, tmax_index = -1, -1\n if self.current_fit and self.current_fit.tmin and self.current_fit.tmax:\n tmin_index, tmax_index = self.get_indices(self.current_fit)\n\n TEXT = \"\"\n self.logger.DeleteAllItems()\n for i in range(len(zijdblock)):\n lab_treatment = self.Data[self.s]['zijdblock_lab_treatments'][i]\n Step = \"\"\n methods = lab_treatment.split('-')\n if \"NO\" in methods:\n Step = \"N\"\n elif \"AF\" in methods:\n Step = \"AF\"\n elif \"ARM\" in methods:\n Step = \"ARM\"\n elif \"IRM\" in methods:\n Step = \"IRM\"\n elif \"T\" in methods:\n Step = \"T\"\n elif \"LT\" in methods:\n Step = \"LT\"\n Tr = zijdblock[i][0]\n Dec = zijdblock[i][1]\n Inc = zijdblock[i][2]\n Int = zijdblock[i][3]\n csd = self.Data[self.s]['csds'][i]\n self.logger.InsertItem(i, \"%i\" % i)\n self.logger.SetItem(i, 1, Step)\n self.logger.SetItem(i, 2, \"%.1f\" % Tr)\n self.logger.SetItem(i, 3, \"%.1f\" % Dec)\n self.logger.SetItem(i, 4, \"%.1f\" % Inc)\n self.logger.SetItem(i, 5, \"%.2e\" % Int)\n self.logger.SetItem(i, 6, csd)\n self.logger.SetItemBackgroundColour(i, \"WHITE\")\n if i >= tmin_index and i <= tmax_index:\n self.logger.SetItemBackgroundColour(i, \"LIGHT BLUE\")\n if self.Data[self.s]['measurement_flag'][i] == 'b':\n self.logger.SetItemBackgroundColour(i, \"red\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nselects the index in the logger based on what the other bound is and sets it in the logger.", "response": "def select_bounds_in_logger(self, index):\n \"\"\"\n sets index as the upper or lower bound of a fit based on what the\n other bound is and selects it in the logger. Requires 2 calls to\n completely update a interpretation. 
NOTE: Requires an interpretation\n to exist before it is called.\n\n Parameters\n ----------\n index : index of the step to select in the logger\n \"\"\"\n tmin_index, tmax_index = \"\", \"\"\n if str(self.tmin_box.GetValue()) != \"\":\n tmin_index = self.tmin_box.GetSelection()\n if str(self.tmax_box.GetValue()) != \"\":\n tmax_index = self.tmax_box.GetSelection()\n\n if self.list_bound_loc != 0:\n if self.list_bound_loc == 1:\n if index < tmin_index:\n self.tmin_box.SetSelection(index)\n self.tmax_box.SetSelection(tmin_index)\n elif index == tmin_index:\n pass\n else:\n self.tmax_box.SetSelection(index)\n else:\n if index > tmax_index:\n self.tmin_box.SetSelection(tmax_index)\n self.tmax_box.SetSelection(index)\n elif index == tmax_index:\n pass\n else:\n self.tmin_box.SetSelection(index)\n self.list_bound_loc = 0\n else:\n if index < tmax_index:\n self.tmin_box.SetSelection(index)\n self.list_bound_loc = 1\n else:\n self.tmax_box.SetSelection(index)\n self.list_bound_loc = 2\n\n# if tmin_index==\"\" or indextmax_index:\n# self.tmax_box.SetSelection(index)\n# else:\n# self.tmin_box.SetSelection(index)\n# self.tmax_box.SetValue(\"\")\n\n self.logger.Select(index, on=0)\n self.get_new_PCA_parameters(-1)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef on_right_click_listctrl(self, event):\n g_index = event.GetIndex()\n\n if self.Data[self.s]['measurement_flag'][g_index] == 'g':\n self.mark_meas_bad(g_index)\n else:\n self.mark_meas_good(g_index)\n\n if self.data_model == 3.0:\n self.con.tables['measurements'].write_magic_file(dir_path=self.WD)\n else:\n pmag.magic_write(os.path.join(\n self.WD, \"magic_measurements.txt\"), self.mag_meas_data, \"magic_measurements\")\n\n self.recalculate_current_specimen_interpreatations()\n\n if self.ie_open:\n self.ie.update_current_fit_data()\n self.calculate_high_levels_data()\n self.update_selection()", "response": "right click on the listctrl toggles measurement 
bad\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef onSelect_specimen(self, event):\n self.selected_meas = []\n self.select_specimen(str(self.specimens_box.GetValue()))\n if self.ie_open:\n self.ie.change_selected(self.current_fit)\n self.update_selection()", "response": "update figures and text when a new specimen is selected"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef on_enter_specimen(self, event):\n new_specimen = self.specimens_box.GetValue()\n if new_specimen not in self.specimens:\n self.user_warning(\n \"%s is not a valid specimen with measurement data, aborting\" % (new_specimen))\n self.specimens_box.SetValue(self.s)\n return\n self.select_specimen(new_specimen)\n if self.ie_open:\n self.ie.change_selected(self.current_fit)\n self.update_selection()", "response": "When the specimen is entered it makes that specimen the current specimen and the specimen is selected"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_new_PCA_parameters(self, event):\n\n tmin = str(self.tmin_box.GetValue())\n tmax = str(self.tmax_box.GetValue())\n if tmin == \"\" or tmax == \"\":\n return\n\n if tmin in self.T_list and tmax in self.T_list and \\\n (self.T_list.index(tmax) <= self.T_list.index(tmin)):\n return\n\n PCA_type = self.PCA_type_box.GetValue()\n if PCA_type == \"line\":\n calculation_type = \"DE-BFL\"\n elif PCA_type == \"line-anchored\":\n calculation_type = \"DE-BFL-A\"\n elif PCA_type == \"line-with-origin\":\n calculation_type = \"DE-BFL-O\"\n elif PCA_type == \"Fisher\":\n calculation_type = \"DE-FM\"\n elif PCA_type == \"plane\":\n calculation_type = \"DE-BFP\"\n coordinate_system = self.COORDINATE_SYSTEM\n if self.current_fit:\n self.current_fit.put(self.s, coordinate_system, self.get_PCA_parameters(\n self.s, self.current_fit, tmin, tmax, coordinate_system, calculation_type))\n if self.ie_open:\n 
self.ie.update_current_fit_data()\n self.update_GUI_with_new_interpretation()", "response": "calculate statistics when temperatures are selected and PCA type is changed"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nselects the current fit of the current entry in the fit combobox and sets the current fit to the fit of the GUI", "response": "def on_select_fit(self, event):\n \"\"\"\n Picks out the fit selected in the fit combobox and sets it to the\n current fit of the GUI then calls the select function of the fit to\n set the GUI's bounds boxes and alter other such parameters\n\n Parameters\n ----------\n event : the wx.ComboBoxEvent that triggers this function\n\n Alters\n ------\n current_fit, fit_box selection, tmin_box selection, tmax_box\n selection\n \"\"\"\n fit_val = self.fit_box.GetValue()\n if self.s not in self.pmag_results_data['specimens'] or not self.pmag_results_data['specimens'][self.s] or fit_val == 'None':\n self.clear_boxes()\n self.current_fit = None\n self.fit_box.SetStringSelection('None')\n self.tmin_box.SetStringSelection('')\n self.tmax_box.SetStringSelection('')\n else:\n try:\n fit_num = list(\n map(lambda x: x.name, self.pmag_results_data['specimens'][self.s])).index(fit_val)\n except ValueError:\n fit_num = -1\n self.pmag_results_data['specimens'][self.s][fit_num].select()\n if self.ie_open:\n self.ie.change_selected(self.current_fit)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nallowing the entering of new fit names in the fit combobox", "response": "def on_enter_fit_name(self, event):\n \"\"\"\n Allows the entering of new fit names in the fit combobox\n\n Parameters\n ----------\n event : the wx.ComboBoxEvent that triggers this function\n\n Alters\n ------\n current_fit.name\n \"\"\"\n if self.current_fit == None:\n self.on_btn_add_fit(event)\n value = self.fit_box.GetValue()\n if ':' in value:\n name, color = value.split(':')\n else:\n name, color = 
value, None\n if name in [x.name for x in self.pmag_results_data['specimens'][self.s]]:\n print('bad name')\n return\n self.current_fit.name = name\n if color in list(self.color_dict.keys()):\n self.current_fit.color = self.color_dict[color]\n self.update_fit_boxes()\n self.plot_high_levels_data()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef on_save_interpretation_button(self, event):\n\n if self.current_fit:\n self.current_fit.saved = True\n calculation_type = self.current_fit.get(self.COORDINATE_SYSTEM)[\n 'calculation_type']\n tmin = str(self.tmin_box.GetValue())\n tmax = str(self.tmax_box.GetValue())\n\n self.current_fit.put(self.s, 'specimen', self.get_PCA_parameters(\n self.s, self.current_fit, tmin, tmax, 'specimen', calculation_type))\n if len(self.Data[self.s]['zijdblock_geo']) > 0:\n self.current_fit.put(self.s, 'geographic', self.get_PCA_parameters(\n self.s, self.current_fit, tmin, tmax, 'geographic', calculation_type))\n if len(self.Data[self.s]['zijdblock_tilt']) > 0:\n self.current_fit.put(self.s, 'tilt-corrected', self.get_PCA_parameters(\n self.s, self.current_fit, tmin, tmax, 'tilt-corrected', calculation_type))\n\n # calculate high level data\n self.calculate_high_levels_data()\n self.plot_high_levels_data()\n self.on_menu_save_interpretation(event)\n self.update_selection()\n self.close_warning = True", "response": "Save the current interpretation to the pmag results tables."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds a new interpretation to the current specimen", "response": "def on_btn_add_fit(self, event):\n \"\"\"\n add a new interpretation to the current specimen\n\n Parameters\n ----------\n event : the wx.ButtonEvent that triggered this function\n\n Alters\n ------\n pmag_results_data\n \"\"\"\n if self.auto_save.GetValue():\n 
self.current_fit = self.add_fit(self.s, None, None, None, saved=True)\n else:\n self.current_fit = self.add_fit(self.s, None, None, None, saved=False)\n self.generate_warning_text()\n self.update_warning_box()\n\n if self.ie_open:\n self.ie.update_editor()\n\n self.update_fit_boxes(True)\n # Draw figures and add text\n self.get_new_PCA_parameters(event)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef on_btn_delete_fit(self, event):\n self.delete_fit(self.current_fit, specimen=self.s)", "response": "Delete the current interpretation from the log"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef do_auto_save(self):\n if not self.auto_save.GetValue():\n if self.current_fit:\n if not self.current_fit.saved:\n self.delete_fit(self.current_fit, specimen=self.s)", "response": "Delete current fit if auto_save == False."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate figures and text when a next button is selected", "response": "def on_next_button(self, event):\n \"\"\"\n update figures and text when a next button is selected\n \"\"\"\n self.do_auto_save()\n self.selected_meas = []\n index = self.specimens.index(self.s)\n try:\n fit_index = self.pmag_results_data['specimens'][self.s].index(\n self.current_fit)\n except KeyError:\n fit_index = None\n except ValueError:\n fit_index = None\n if index == len(self.specimens)-1:\n index = 0\n else:\n index += 1\n # sets self.s calculates params etc.\n self.initialize_CART_rot(str(self.specimens[index]))\n self.specimens_box.SetStringSelection(str(self.s))\n if fit_index != None and self.s in self.pmag_results_data['specimens']:\n try:\n self.current_fit = self.pmag_results_data['specimens'][self.s][fit_index]\n except IndexError:\n self.current_fit = None\n else:\n self.current_fit = None\n if self.ie_open:\n self.ie.change_selected(self.current_fit)\n 
self.update_selection()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n if \"-h\" in sys.argv:\n print(main.__doc__)\n return\n infile = pmag.get_named_arg(\"-f\", \"measurements.txt\")\n dir_path = pmag.get_named_arg(\"-WD\", \".\")\n infile = pmag.resolve_file_name(infile, dir_path)\n fmt = pmag.get_named_arg(\"-fmt\", \"svg\")\n save_plots = False\n interactive = True\n if \"-sav\" in sys.argv:\n interactive = False\n save_plots = True\n experiments = pmag.get_named_arg(\"-e\", \"\")\n ipmag.chi_magic(infile, dir_path, experiments, fmt, save_plots, interactive)", "response": "NAME\n chi_magic.py\n\n DESCRIPTION\n plots magnetic susceptibility as a function of frequency and temperature and AC field\n\n SYNTAX\n chi_magic.py [command line options]\n\n OPTIONS\n -h prints help message and quits\n -f FILE, specify measurements format file, default \"measurements.txt\"\n -T IND, specify temperature step to plot -- NOT IMPLEMENTED\n -e EXP, specify experiment name to plot\n -fmt [svg,jpg,png,pdf] set figure format [default is svg]\n -sav save figure and quit"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nnaming thellier_magic_redo.py DESCRIPTION Calculates paleointensity parameters for thellier-thellier type data using bounds stored in the \"redo\" file SYNTAX thellier_magic_redo [command line options] OPTIONS -h prints help message -usr USER: identify user, default is \"\" -fcr CRIT, set criteria for grading -f IN: specify input file, default is magic_measurements.txt -fre REDO: specify redo file, default is \"thellier_redo\" -F OUT: specify output file, default is thellier_specimens.txt -leg: attaches \"Recalculated from original measurements; supercedes published results. 
" to comment field -CR PERC TYPE: apply a blanket cooling rate correction if none supplied in the er_samples.txt file PERC should be a percentage of original (say reduce to 90%) TYPE should be one of the following: EG (for educated guess); PS (based on pilots); TRM (based on comparison of two TRMs) -ANI: perform anisotropy correction -fsa SAMPFILE: er_samples.txt file with cooling rate correction information, default is NO CORRECTION -Fcr CRout: specify pmag_specimen format file for cooling rate corrected data -fan ANIFILE: specify rmag_anisotropy format file, default is rmag_anisotropy.txt -Fac ACout: specify pmag_specimen format file for anisotropy corrected data default is AC_specimens.txt -fnl NLTFILE: specify magic_measurements format file, default is magic_measurements.txt -Fnl NLTout: specify pmag_specimen format file for non-linear trm corrected data default is NLT_specimens.txt -z use z component differences for pTRM calculation INPUT a thellier_redo file is Specimen_name Tmin Tmax (where Tmin and Tmax are in Centigrade)", "response": "def main():\n \"\"\"\n NAME\n thellier_magic_redo.py\n\n DESCRIPTION\n Calculates paleointensity parameters for thellier-thellier type data using bounds\n stored in the \"redo\" file\n\n SYNTAX\n thellier_magic_redo [command line options]\n\n OPTIONS\n -h prints help message\n -usr USER: identify user, default is \"\"\n -fcr CRIT, set criteria for grading\n -f IN: specify input file, default is magic_measurements.txt\n -fre REDO: specify redo file, default is \"thellier_redo\"\n -F OUT: specify output file, default is thellier_specimens.txt\n -leg: attaches \"Recalculated from original measurements; supercedes published results. 
\" to comment field\n -CR PERC TYPE: apply a blanket cooling rate correction if none supplied in the er_samples.txt file\n PERC should be a percentage of original (say reduce to 90%)\n TYPE should be one of the following:\n EG (for educated guess); PS (based on pilots); TRM (based on comparison of two TRMs)\n -ANI: perform anisotropy correction\n -fsa SAMPFILE: er_samples.txt file with cooling rate correction information, default is NO CORRECTION\n -Fcr CRout: specify pmag_specimen format file for cooling rate corrected data\n -fan ANIFILE: specify rmag_anisotropy format file, default is rmag_anisotropy.txt\n -Fac ACout: specify pmag_specimen format file for anisotropy corrected data\n default is AC_specimens.txt\n -fnl NLTFILE: specify magic_measurments format file, default is magic_measurements.txt\n -Fnl NLTout: specify pmag_specimen format file for non-linear trm corrected data\n default is NLT_specimens.txt\n -z use z component differenences for pTRM calculation\n\n INPUT\n a thellier_redo file is Specimen_name Tmin Tmax (where Tmin and Tmax are in Centigrade)\n \"\"\"\n dir_path='.'\n critout=\"\"\n version_num=pmag.get_version()\n field,first_save=-1,1\n spec,recnum,start,end=0,0,0,0\n crfrac=0\n NltRecs,PmagSpecs,AniSpecRecs,NltSpecRecs,CRSpecs=[],[],[],[],[]\n meas_file,pmag_file,mk_file=\"magic_measurements.txt\",\"thellier_specimens.txt\",\"thellier_redo\"\n anis_file=\"rmag_anisotropy.txt\"\n anisout,nltout=\"AC_specimens.txt\",\"NLT_specimens.txt\"\n crout=\"CR_specimens.txt\"\n nlt_file=\"\"\n samp_file=\"\"\n comment,user=\"\",\"unknown\"\n anis,nltrm=0,0\n jackknife=0 # maybe in future can do jackknife\n args=sys.argv\n Zdiff=0\n if '-WD' in args:\n ind=args.index('-WD')\n dir_path=args[ind+1]\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if \"-usr\" in args:\n ind=args.index(\"-usr\")\n user=sys.argv[ind+1]\n if \"-leg\" in args: comment=\"Recalculated from original measurements; supercedes published results. 
\"\n cool=0\n if \"-CR\" in args:\n cool=1\n ind=args.index(\"-CR\")\n crfrac=.01*float(sys.argv[ind+1])\n crtype='DA-CR-'+sys.argv[ind+2]\n if \"-Fcr\" in args:\n ind=args.index(\"-Fcr\")\n crout=sys.argv[ind+1]\n if \"-f\" in args:\n ind=args.index(\"-f\")\n meas_file=sys.argv[ind+1]\n if \"-F\" in args:\n ind=args.index(\"-F\")\n pmag_file=sys.argv[ind+1]\n if \"-fre\" in args:\n ind=args.index(\"-fre\")\n mk_file=args[ind+1]\n if \"-fsa\" in args:\n ind=args.index(\"-fsa\")\n samp_file=dir_path+'/'+args[ind+1]\n Samps,file_type=pmag.magic_read(samp_file)\n SampCRs=pmag.get_dictitem(Samps,'cooling_rate_corr','','F') # get samples cooling rate corrections\n cool=1\n if file_type!='er_samples':\n print('not a valid er_samples.txt file')\n sys.exit()\n #\n #\n if \"-ANI\" in args:\n anis=1\n ind=args.index(\"-ANI\")\n if \"-Fac\" in args:\n ind=args.index(\"-Fac\")\n anisout=args[ind+1]\n if \"-fan\" in args:\n ind=args.index(\"-fan\")\n anis_file=args[ind+1]\n #\n if \"-NLT\" in args:\n if \"-Fnl\" in args:\n ind=args.index(\"-Fnl\")\n nltout=args[ind+1]\n if \"-fnl\" in args:\n ind=args.index(\"-fnl\")\n nlt_file=args[ind+1]\n if \"-z\" in args: Zdiff=1\n if '-fcr' in sys.argv:\n ind=args.index(\"-fcr\")\n critout=sys.argv[ind+1]\n#\n# start reading in data:\n#\n meas_file=dir_path+\"/\"+meas_file\n mk_file=dir_path+\"/\"+mk_file\n accept=pmag.default_criteria(1)[0] # set criteria to none\n if critout!=\"\":\n critout=dir_path+\"/\"+critout\n crit_data,file_type=pmag.magic_read(critout)\n if file_type!='pmag_criteria':\n print('bad pmag_criteria file, using no acceptance criteria')\n print(\"Acceptance criteria read in from \", critout)\n for critrec in crit_data:\n if 'sample_int_sigma_uT' in list(critrec.keys()): # accommodate Shaar's new criterion\n critrec['sample_int_sigma']='%10.3e'%(eval(critrec['sample_int_sigma_uT'])*1e-6)\n for key in list(critrec.keys()):\n if key not in list(accept.keys()) and critrec[key]!='':\n accept[key]=critrec[key]\n 
meas_data,file_type=pmag.magic_read(meas_file)\n if file_type != 'magic_measurements':\n print(file_type)\n print(file_type,\"This is not a valid magic_measurements file \")\n sys.exit()\n try:\n mk_f=open(mk_file,'r')\n except:\n print(\"Bad redo file\")\n sys.exit()\n mkspec=[]\n speclist=[]\n for line in mk_f.readlines():\n tmp=line.split()\n mkspec.append(tmp)\n speclist.append(tmp[0])\n if anis==1:\n anis_file=dir_path+\"/\"+anis_file\n anis_data,file_type=pmag.magic_read(anis_file)\n if file_type != 'rmag_anisotropy':\n print(file_type)\n print(file_type,\"This is not a valid rmag_anisotropy file \")\n sys.exit()\n if nlt_file==\"\":\n nlt_data=pmag.get_dictitem(meas_data,'magic_method_codes','LP-TRM','has') # look for trm acquisition data in the meas_data file\n else:\n nlt_file=dir_path+\"/\"+nlt_file\n nlt_data,file_type=pmag.magic_read(nlt_file)\n if len(nlt_data)>0:\n nltrm=1\n#\n# sort the specimen names and step through one by one\n#\n sids=pmag.get_specs(meas_data)\n#\n print('Processing ',len(speclist),' specimens - please wait ')\n while spec < len(speclist):\n s=speclist[spec]\n recnum=0\n datablock=[]\n PmagSpecRec={}\n PmagSpecRec[\"er_analyst_mail_names\"]=user\n PmagSpecRec[\"er_citation_names\"]=\"This study\"\n PmagSpecRec[\"magic_software_packages\"]=version_num\n methcodes,inst_code=[],\"\"\n #\n # find the data from the meas_data file for this specimen\n #\n datablock=pmag.get_dictitem(meas_data,'er_specimen_name',s,'T')\n datablock=pmag.get_dictitem(datablock,'magic_method_codes','LP-PI-TRM','has') #pick out the thellier experiment data\n if len(datablock)>0:\n for rec in datablock:\n if \"magic_instrument_codes\" not in list(rec.keys()): rec[\"magic_instrument_codes\"]=\"unknown\"\n #\n # collect info for the PmagSpecRec dictionary\n #\n rec=datablock[0]\n PmagSpecRec[\"er_specimen_name\"]=s\n PmagSpecRec[\"er_sample_name\"]=rec[\"er_sample_name\"]\n PmagSpecRec[\"er_site_name\"]=rec[\"er_site_name\"]\n 
PmagSpecRec[\"er_location_name\"]=rec[\"er_location_name\"]\n PmagSpecRec[\"measurement_step_unit\"]=\"K\"\n PmagSpecRec[\"specimen_correction\"]='u'\n if \"er_expedition_name\" in list(rec.keys()):PmagSpecRec[\"er_expedition_name\"]=rec[\"er_expedition_name\"]\n if \"magic_instrument_codes\" not in list(rec.keys()):\n PmagSpecRec[\"magic_instrument_codes\"]=\"unknown\"\n else:\n PmagSpecRec[\"magic_instrument_codes\"]=rec[\"magic_instrument_codes\"]\n if \"magic_experiment_name\" not in list(rec.keys()):\n rec[\"magic_experiment_name\"]=\"\"\n else:\n PmagSpecRec[\"magic_experiment_names\"]=rec[\"magic_experiment_name\"]\n meths=rec[\"magic_experiment_name\"].split(\":\")\n for meth in meths:\n if meth.strip() not in methcodes and \"LP-\" in meth:methcodes.append(meth.strip())\n #\n # sort out the data into first_Z, first_I, ptrm_check, ptrm_tail\n #\n araiblock,field=pmag.sortarai(datablock,s,Zdiff)\n first_Z=araiblock[0]\n first_I=araiblock[1]\n ptrm_check=araiblock[2]\n ptrm_tail=araiblock[3]\n if len(first_I)<3 or len(first_Z)<4:\n spec+=1\n print('skipping specimen ', s)\n else:\n #\n # get start, end\n #\n for redospec in mkspec:\n if redospec[0]==s:\n b,e=float(redospec[1]),float(redospec[2])\n break\n if e > float(first_Z[-1][0]):e=float(first_Z[-1][0])\n for recnum in range(len(first_Z)):\n if first_Z[recnum][0]==b:start=recnum\n if first_Z[recnum][0]==e:end=recnum\n nsteps=end-start\n if nsteps>2:\n zijdblock,units=pmag.find_dmag_rec(s,meas_data)\n pars,errcode=pmag.PintPars(datablock,araiblock,zijdblock,start,end,accept)\n if 'specimen_scat' in list(pars.keys()): PmagSpecRec['specimen_scat']=pars['specimen_scat']\n if 'specimen_frac' in list(pars.keys()): PmagSpecRec['specimen_frac']='%5.3f'%(pars['specimen_frac'])\n if 'specimen_gmax' in list(pars.keys()): PmagSpecRec['specimen_gmax']='%5.3f'%(pars['specimen_gmax'])\n pars['measurement_step_unit']=units\n pars[\"specimen_lab_field_dc\"]=field\n pars[\"specimen_int\"]=-1*field*pars[\"specimen_b\"]\n 
PmagSpecRec[\"measurement_step_min\"]='%8.3e' % (pars[\"measurement_step_min\"])\n PmagSpecRec[\"measurement_step_max\"]='%8.3e' % (pars[\"measurement_step_max\"])\n PmagSpecRec[\"specimen_int_n\"]='%i'%(pars[\"specimen_int_n\"])\n PmagSpecRec[\"specimen_lab_field_dc\"]='%8.3e'%(pars[\"specimen_lab_field_dc\"])\n PmagSpecRec[\"specimen_int\"]='%9.4e '%(pars[\"specimen_int\"])\n PmagSpecRec[\"specimen_b\"]='%5.3f '%(pars[\"specimen_b\"])\n PmagSpecRec[\"specimen_q\"]='%5.1f '%(pars[\"specimen_q\"])\n PmagSpecRec[\"specimen_f\"]='%5.3f '%(pars[\"specimen_f\"])\n PmagSpecRec[\"specimen_fvds\"]='%5.3f'%(pars[\"specimen_fvds\"])\n PmagSpecRec[\"specimen_b_beta\"]='%5.3f'%(pars[\"specimen_b_beta\"])\n PmagSpecRec[\"specimen_int_mad\"]='%7.1f'%(pars[\"specimen_int_mad\"])\n PmagSpecRec[\"specimen_gamma\"]='%7.1f'%(pars[\"specimen_gamma\"])\n if pars[\"magic_method_codes\"]!=\"\" and pars[\"magic_method_codes\"] not in methcodes: methcodes.append(pars[\"magic_method_codes\"])\n PmagSpecRec[\"specimen_dec\"]='%7.1f'%(pars[\"specimen_dec\"])\n PmagSpecRec[\"specimen_inc\"]='%7.1f'%(pars[\"specimen_inc\"])\n PmagSpecRec[\"specimen_tilt_correction\"]='-1'\n PmagSpecRec[\"specimen_direction_type\"]='l'\n PmagSpecRec[\"direction_type\"]='l' # this is redundant, but helpful - won't be imported\n PmagSpecRec[\"specimen_dang\"]='%7.1f '%(pars[\"specimen_dang\"])\n PmagSpecRec[\"specimen_drats\"]='%7.1f '%(pars[\"specimen_drats\"])\n PmagSpecRec[\"specimen_drat\"]='%7.1f '%(pars[\"specimen_drat\"])\n PmagSpecRec[\"specimen_int_ptrm_n\"]='%i '%(pars[\"specimen_int_ptrm_n\"])\n PmagSpecRec[\"specimen_rsc\"]='%6.4f '%(pars[\"specimen_rsc\"])\n PmagSpecRec[\"specimen_md\"]='%i '%(int(pars[\"specimen_md\"]))\n if PmagSpecRec[\"specimen_md\"]=='-1':PmagSpecRec[\"specimen_md\"]=\"\"\n PmagSpecRec[\"specimen_b_sigma\"]='%5.3f '%(pars[\"specimen_b_sigma\"])\n if \"IE-TT\" not in methcodes:methcodes.append(\"IE-TT\")\n methods=\"\"\n for meth in methcodes:\n methods=methods+meth+\":\"\n 
PmagSpecRec[\"magic_method_codes\"]=methods.strip(':')\n PmagSpecRec[\"magic_software_packages\"]=version_num\n PmagSpecRec[\"specimen_description\"]=comment\n if critout!=\"\":\n kill=pmag.grade(PmagSpecRec,accept,'specimen_int')\n if len(kill)>0:\n Grade='F' # fails\n else:\n Grade='A' # passes\n PmagSpecRec[\"specimen_grade\"]=Grade\n else:\n PmagSpecRec[\"specimen_grade\"]=\"\" # not graded\n if nltrm==0 and anis==0 and cool!=0: # apply cooling rate correction\n SCR=pmag.get_dictitem(SampCRs,'er_sample_name',PmagSpecRec['er_sample_name'],'T') # get this samples, cooling rate correction\n CrSpecRec=pmag.cooling_rate(PmagSpecRec,SCR,crfrac,crtype)\n if CrSpecRec['er_specimen_name']!='none':CrSpecs.append(CrSpecRec)\n PmagSpecs.append(PmagSpecRec)\n NltSpecRec=\"\"\n #\n # check on non-linear TRM correction\n #\n if nltrm==1:\n #\n # find the data from the nlt_data list for this specimen\n #\n TRMs,Bs=[],[]\n NltSpecRec=\"\"\n NltRecs=pmag.get_dictitem(nlt_data,'er_specimen_name',PmagSpecRec['er_specimen_name'],'has') # fish out all the NLT data for this specimen\n if len(NltRecs) > 2:\n for NltRec in NltRecs:\n Bs.append(float(NltRec['treatment_dc_field']))\n TRMs.append(float(NltRec['measurement_magn_moment']))\n NLTpars=nlt.NLtrm(Bs,TRMs,float(PmagSpecRec['specimen_int']),float(PmagSpecRec['specimen_lab_field_dc']),0)\n if NLTpars['banc']>0:\n NltSpecRec={}\n for key in list(PmagSpecRec.keys()):\n NltSpecRec[key]=PmagSpecRec[key]\n NltSpecRec['specimen_int']='%9.4e'%(NLTpars['banc'])\n NltSpecRec['magic_method_codes']=PmagSpecRec[\"magic_method_codes\"]+\":DA-NL\"\n NltSpecRec[\"specimen_correction\"]='c'\n NltSpecRec['specimen_grade']=PmagSpecRec['specimen_grade']\n NltSpecRec[\"magic_software_packages\"]=version_num\n print(NltSpecRec['er_specimen_name'], ' Banc= ',float(NLTpars['banc'])*1e6)\n if anis==0 and cool!=0:\n SCR=pmag.get_dictitem(SampCRs,'er_sample_name',NltSpecRec['er_sample_name'],'T') # get this samples, cooling rate correction\n 
CrSpecRec=pmag.cooling_rate(NltSpecRec,SCR,crfrac,crtype)\n if CrSpecRec['er_specimen_name']!='none':CrSpecs.append(CrSpecRec)\n NltSpecRecs.append(NltSpecRec)\n #\n # check on anisotropy correction\n if anis==1:\n if NltSpecRec!=\"\":\n Spc=NltSpecRec\n else: # find uncorrected data\n Spc=PmagSpecRec\n AniSpecs=pmag.get_dictitem(anis_data,'er_specimen_name',PmagSpecRec['er_specimen_name'],'T')\n if len(AniSpecs)>0:\n AniSpec=AniSpecs[0]\n AniSpecRec=pmag.doaniscorr(Spc,AniSpec)\n AniSpecRec['specimen_grade']=PmagSpecRec['specimen_grade']\n AniSpecRec[\"magic_instrument_codes\"]=PmagSpecRec['magic_instrument_codes']\n AniSpecRec[\"specimen_correction\"]='c'\n AniSpecRec[\"magic_software_packages\"]=version_num\n if cool!=0:\n SCR=pmag.get_dictitem(SampCRs,'er_sample_name',AniSpecRec['er_sample_name'],'T') # get this sample's cooling rate correction\n CrSpecRec=pmag.cooling_rate(AniSpecRec,SCR,crfrac,crtype)\n if CrSpecRec['er_specimen_name']!='none':CrSpecs.append(CrSpecRec)\n AniSpecRecs.append(AniSpecRec)\n elif anis==1:\n AniSpecs=pmag.get_dictitem(anis_data,'er_specimen_name',PmagSpecRec['er_specimen_name'],'T')\n if len(AniSpecs)>0:\n AniSpec=AniSpecs[0]\n AniSpecRec=pmag.doaniscorr(PmagSpecRec,AniSpec)\n AniSpecRec['specimen_grade']=PmagSpecRec['specimen_grade']\n AniSpecRec[\"magic_instrument_codes\"]=PmagSpecRec[\"magic_instrument_codes\"]\n AniSpecRec[\"specimen_correction\"]='c'\n AniSpecRec[\"magic_software_packages\"]=version_num\n if crfrac!=0:\n CrSpecRec={}\n for key in list(AniSpecRec.keys()):CrSpecRec[key]=AniSpecRec[key]\n inten=crfrac*float(CrSpecRec['specimen_int'])\n CrSpecRec[\"specimen_int\"]='%9.4e '%(inten) # adjust specimen intensity by cooling rate correction\n CrSpecRec['magic_method_codes'] = CrSpecRec['magic_method_codes']+':'+crtype # crtype already carries the DA-CR- prefix\n CRSpecs.append(CrSpecRec)\n AniSpecRecs.append(AniSpecRec)\n spec +=1\n else:\n print(\"skipping \",s)\n spec+=1\n pmag_file=dir_path+'/'+pmag_file\n 
pmag.magic_write(pmag_file,PmagSpecs,'pmag_specimens')\n print('uncorrected thellier data saved in: ',pmag_file)\n if anis==1 and len(AniSpecRecs)>0:\n anisout=dir_path+'/'+anisout\n pmag.magic_write(anisout,AniSpecRecs,'pmag_specimens')\n print('anisotropy corrected data saved in: ',anisout)\n if nltrm==1 and len(NltSpecRecs)>0:\n nltout=dir_path+'/'+nltout\n pmag.magic_write(nltout,NltSpecRecs,'pmag_specimens')\n print('non-linear TRM corrected data saved in: ',nltout)\n if crfrac!=0:\n crout=dir_path+'/'+crout\n pmag.magic_write(crout,CRSpecs,'pmag_specimens')\n print('cooling rate corrected data saved in: ',crout)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef main(command_line=True, **kwargs):\n# initialize some stuff\n noave=0\n inst=\"\"\n samp_con,Z='1',\"\"\n missing=1\n demag=\"N\"\n er_location_name=\"unknown\"\n citation='This study'\n args=sys.argv\n meth_code=\"LP-NO\"\n specnum=-1\n MagRecs=[]\n version_num=pmag.get_version()\n Samps=[] # keeps track of sample orientations\n DIspec=[]\n MagFiles=[]\n\n user=\"\"\n mag_file=\"\"\n dir_path='.'\n ErSamps=[]\n SampOuts,SpecOuts,SiteOuts=[],[],[] # output lists for sample, specimen and site records\n\n samp_file = 'er_samples.txt'\n spec_file = 'er_specimens.txt'\n site_file = 'er_sites.txt'\n meas_file = 'magic_measurements.txt'\n\n\n #\n # get command line arguments\n #\n \n if command_line:\n if '-WD' in sys.argv:\n ind = sys.argv.index('-WD')\n dir_path=sys.argv[ind+1]\n if '-ID' in sys.argv:\n ind = sys.argv.index('-ID')\n input_dir_path = sys.argv[ind+1]\n else:\n input_dir_path = dir_path\n output_dir_path = dir_path\n if \"-h\" in args:\n print(main.__doc__)\n return False\n if '-F' in args:\n ind=args.index(\"-F\")\n meas_file = args[ind+1]\n if '-Fsa' in args:\n ind = args.index(\"-Fsa\")\n samp_file = args[ind+1]\n #try:\n # open(samp_file,'r')\n # ErSamps,file_type=pmag.magic_read(samp_file)\n # print 'sample information will be appended to ', samp_file \n #except:\n # print samp_file,' not found: sample information will be stored in new er_samples.txt file'\n # 
samp_file = output_dir_path+'/er_samples.txt'\n if '-f' in args:\n ind = args.index(\"-f\")\n mag_file= args[ind+1]\n if \"-spc\" in args:\n ind = args.index(\"-spc\")\n specnum = int(args[ind+1])\n if \"-ncn\" in args:\n ind=args.index(\"-ncn\")\n samp_con=sys.argv[ind+1]\n if \"-loc\" in args:\n ind=args.index(\"-loc\")\n er_location_name=args[ind+1]\n if \"-A\" in args: noave=1\n if \"-mcd\" in args: \n ind=args.index(\"-mcd\")\n meth_code=args[ind+1]\n if \"-lat\" in args:\n ind=args.index(\"-lat\")\n site_lat=args[ind+1]\n if \"-lon\" in args:\n ind=args.index(\"-lon\")\n site_lon=args[ind+1]\n\n if not command_line:\n dir_path = kwargs.get('dir_path', '.')\n input_dir_path = kwargs.get('input_dir_path', dir_path)\n output_dir_path = dir_path\n meas_file = kwargs.get('meas_file', 'magic_measurements.txt')\n mag_file = kwargs.get('mag_file')\n spec_file = kwargs.get('spec_file', 'er_specimens.txt')\n samp_file = kwargs.get('samp_file', 'er_samples.txt')\n site_file = kwargs.get('site_file', 'er_sites.txt')\n site_lat = kwargs.get('site_lat', 0)\n site_lon = kwargs.get('site_lon', 0)\n specnum = kwargs.get('specnum', 0)\n samp_con = kwargs.get('samp_con', '1')\n er_location_name = kwargs.get('er_location_name', '')\n noave = kwargs.get('noave', 0) # default (0) means DO average\n meth_code = kwargs.get('meth_code', \"LP-NO\")\n\n print(samp_con)\n # format variables\n mag_file = os.path.join(input_dir_path,mag_file)\n meas_file = os.path.join(output_dir_path,meas_file)\n spec_file = os.path.join(output_dir_path,spec_file)\n samp_file = os.path.join(output_dir_path,samp_file)\n site_file = os.path.join(output_dir_path,site_file)\n if specnum!=0:specnum=-specnum\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\"naming convention option [4] must be in form 4-Z where Z is an integer\")\n return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n else:\n Z=samp_con.split(\"-\")[1]\n samp_con=\"4\"\n if \"7\" in samp_con:\n 
if \"-\" not in samp_con:\n print(\"naming convention option [7] must be in form 7-Z where Z is an integer\")\n return False, \"naming convention option [7] must be in form 7-Z where Z is an integer\"\n else:\n Z=samp_con.split(\"-\")[1]\n samp_con=\"7\"\n\n # parse data\n data=open(mag_file,'r').readlines() # read in data from file\n comment=data[0]\n line=data[1].strip()\n line=line.replace(\"=\",\"= \") # make finding orientations easier\n rec=line.split() # read in sample orientation, etc.\n er_specimen_name=rec[0]\n ErSpecRec,ErSampRec,ErSiteRec={},{},{} # make specimen, sample and site records\n if specnum!=0:\n er_sample_name=rec[0][:specnum]\n else:\n er_sample_name=rec[0]\n if len(ErSamps)>0: # need to copy existing\n for samp in ErSamps:\n if samp['er_sample_name']==er_sample_name:\n ErSampRec=samp # we'll amend this one\n else:\n SampOuts.append(samp) # keep all the others\n if int(samp_con)<6:\n er_site_name=pmag.parse_site(er_sample_name,samp_con,Z)\n else:\n if 'er_site_name' in list(ErSampRec.keys()):er_site_name=ErSampRec['er_site_name']\n if 'er_location_name' in list(ErSampRec.keys()):er_location_name=ErSampRec['er_location_name']\n az_ind=rec.index('a=')+1\n ErSampRec['er_sample_name']=er_sample_name\n ErSampRec['er_sample_description']=comment\n ErSampRec['sample_azimuth']=rec[az_ind]\n dip_ind=rec.index('b=')+1\n dip=-float(rec[dip_ind])\n ErSampRec['sample_dip']='%7.1f'%(dip)\n strike_ind=rec.index('s=')+1\n ErSampRec['sample_bed_dip_direction']='%7.1f'%(float(rec[strike_ind])+90.)\n bd_ind=rec.index('d=')+1\n ErSampRec['sample_bed_dip']=rec[bd_ind]\n v_ind=rec.index('v=')+1\n vol=rec[v_ind][:-3]\n date=rec[-2]\n time=rec[-1]\n ErSampRec['magic_method_codes']=meth_code\n if 'er_location_name' not in list(ErSampRec.keys()):ErSampRec['er_location_name']=er_location_name\n if 'er_site_name' not in list(ErSampRec.keys()):ErSampRec['er_site_name']=er_site_name\n if 'er_citation_names' not in list(ErSampRec.keys()):ErSampRec['er_citation_names']='This study'\n if 'magic_method_codes' 
not in list(ErSampRec.keys()):ErSampRec['magic_method_codes']='SO-NO'\n\n ErSpecRec['er_specimen_name'] = er_specimen_name\n ErSpecRec['er_sample_name'] = er_sample_name\n ErSpecRec['er_site_name'] = er_site_name\n ErSpecRec['er_location_name'] = er_location_name\n ErSpecRec['er_citation_names']='This study'\n\n ErSiteRec['er_site_name'] = er_site_name\n ErSiteRec['er_location_name'] = er_location_name\n ErSiteRec['er_citation_names']='This study'\n ErSiteRec['site_lat'] = site_lat\n ErSiteRec['site_lon']= site_lon\n\n SpecOuts.append(ErSpecRec)\n SampOuts.append(ErSampRec)\n SiteOuts.append(ErSiteRec)\n for k in range(3,len(data)): # read in data\n line=data[k]\n rec=line.split()\n if len(rec)>1: # skip blank lines at bottom \n MagRec={}\n MagRec['measurement_description']='Date: '+date+' '+time\n MagRec[\"er_citation_names\"]=\"This study\"\n MagRec['er_location_name']=er_location_name\n MagRec['er_site_name']=er_site_name\n MagRec['er_sample_name']=er_sample_name\n MagRec['magic_software_packages']=version_num\n MagRec[\"treatment_temp\"]='%8.3e' % (273) # room temp in kelvin\n MagRec[\"measurement_temp\"]='%8.3e' % (273) # room temp in kelvin\n MagRec[\"measurement_flag\"]='g'\n MagRec[\"measurement_standard\"]='u'\n MagRec[\"measurement_number\"]='1'\n MagRec[\"er_specimen_name\"]=er_specimen_name\n if rec[0]=='NRM': \n meas_type=\"LT-NO\"\n elif rec[0][0]=='M' or rec[0][0]=='H': \n meas_type=\"LT-AF-Z\"\n elif rec[0][0]=='T': \n meas_type=\"LT-T-Z\"\n else:\n print(\"measurement type unknown\")\n return False, \"measurement type unknown\"\n X=[float(rec[1]),float(rec[2]),float(rec[3])]\n Vec=pmag.cart2dir(X)\n MagRec[\"measurement_magn_moment\"]='%10.3e'% (Vec[2]) # Am^2 \n MagRec[\"measurement_magn_volume\"]=rec[4] # A/m \n MagRec[\"measurement_dec\"]='%7.1f'%(Vec[0])\n MagRec[\"measurement_inc\"]='%7.1f'%(Vec[1])\n MagRec[\"treatment_ac_field\"]='0'\n if meas_type!='LT-NO':\n treat=float(rec[0][1:])\n else:\n treat=0\n if meas_type==\"LT-AF-Z\":\n 
MagRec[\"treatment_ac_field\"]='%8.3e' %(treat*1e-3) # convert from mT to tesla\n elif meas_type==\"LT-T-Z\":\n MagRec[\"treatment_temp\"]='%8.3e' % (treat+273.) # temp in kelvin\n MagRec['magic_method_codes']=meas_type\n MagRecs.append(MagRec) \n MagOuts=pmag.measurements_methods(MagRecs,noave)\n pmag.magic_write(meas_file,MagOuts,'magic_measurements')\n print(\"results put in \",meas_file)\n pmag.magic_write(spec_file,SpecOuts,'er_specimens')\n pmag.magic_write(samp_file,SampOuts,'er_samples')\n pmag.magic_write(site_file,SiteOuts,'er_sites')\n return True, meas_file", "response": "Main entry point: parses an input magnetometer file and writes MagIC-format specimen, sample, site and magic_measurements files."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninput x_segment, y_segment, x_mean, y_mean, n, lab_dc_field, steps_Arai output: x_err, y_err, x_tag, y_tag, b, b_sigma, specimen_b_beta, y_intercept, x_intercept, x_prime, y_prime, delta_x_prime, delta_y_prime, f_Coe, g_Coe, g_lim, specimen_q, specimen_w, count_IZ, count_ZI, B_lab, B_anc, B_anc_sigma, specimen_int", "response": "def York_Regression(x_segment, y_segment, x_mean, y_mean, n, lab_dc_field, steps_Arai):\n \"\"\"\n input: x_segment, y_segment, x_mean, y_mean, n, lab_dc_field, steps_Arai\n output: x_err, y_err, x_tag, y_tag, b, b_sigma, specimen_b_beta, y_intercept,\n x_intercept, x_prime, y_prime, delta_x_prime, delta_y_prime, f_Coe,\n g_Coe, g_lim, specimen_q, specimen_w, count_IZ, count_ZI, B_lab, B_anc,\n B_anc_sigma, specimen_int\n \"\"\"\n x_err = x_segment - x_mean\n y_err = y_segment - y_mean\n york_b = -1* numpy.sqrt(old_div(sum(y_err**2), sum(x_err**2)) ) # initial slope estimate (replaced below)\n\n b = numpy.sign(sum(x_err * y_err)) * numpy.std(y_segment, ddof=1)/numpy.std(x_segment, ddof=1) # ddof is degrees of freedom\n if b == 0:\n york_b = 1e-10\n else:\n york_b = b\n\n york_sigma= numpy.sqrt( old_div((2 * sum(y_err**2) - 2*york_b* sum(x_err*y_err)), ( (n-2) * sum(x_err**2) )) )\n if york_sigma == 0: # prevent 
york_sigma = 1e-10\n beta_Coe = abs(old_div(york_sigma,york_b))\n # y_T is the intercept of the extrapolated line\n # through the center of mass (see figure 7 in Coe (1978))\n y_T = y_mean - (york_b* x_mean)\n x_T = old_div((-1 * y_T), york_b) # x intercept\n # calculate the extrapolated data points for f and fvds\n x_tag = old_div((y_segment - y_T ), york_b) # returns array of y points minus the y intercept, divided by slope\n y_tag = york_b*x_segment + y_T\n\n # intersect of the dashed square and the horizontal dashed line next to delta-y-5 in figure 7, Coe (1978)\n x_prime = old_div((x_segment+x_tag), 2)\n y_prime = old_div((y_segment+y_tag), 2)\n\n delta_x_prime = abs(max(x_prime) - min(x_prime)) # TRM length of best fit line\n delta_y_prime = abs(max(y_prime) - min(y_prime)) # NRM length of best fit line\n\n f_Coe = old_div(delta_y_prime, abs(y_T))\n if delta_y_prime:\n g_Coe = 1 - (old_div(sum((y_prime[:-1]-y_prime[1:])**2), delta_y_prime ** 2)) # gap factor\n else:\n g_Coe = float('nan')\n g_lim = old_div((float(n) - 2), (float(n) - 1))\n q_Coe = abs(york_b)*f_Coe*g_Coe/york_sigma\n w_Coe = old_div(q_Coe, numpy.sqrt(n - 2))\n count_IZ = steps_Arai.count('IZ')\n count_ZI = steps_Arai.count('ZI')\n B_lab = lab_dc_field * 1e6\n B_anc = abs(york_b) * B_lab # in microtesla\n B_anc_sigma = york_sigma * B_lab\n specimen_int = -1* lab_dc_field * york_b # in tesla\n specimen_int_sigma = york_sigma * lab_dc_field\n return {'x_err': x_err, 'y_err': y_err, 'x_tag': x_tag, 'y_tag': y_tag,\n 'specimen_b': york_b, 'specimen_b_sigma': york_sigma, 'specimen_b_beta': beta_Coe,\n 'y_int': y_T, 'x_int': x_T, 'x_prime': x_prime, 'y_prime': y_prime,\n 'delta_x_prime': delta_x_prime, 'delta_y_prime': delta_y_prime, 'specimen_f': f_Coe,\n 'specimen_g': g_Coe, 'specimen_g_lim': g_lim, 'specimen_q': q_Coe, 'specimen_w': w_Coe,\n 'count_IZ': count_IZ, 'count_ZI': count_ZI, 'B_lab': B_lab, 'B_anc': B_anc,\n 'B_anc_sigma': B_anc_sigma, 'specimen_int': specimen_int, 
'specimen_int_sigma': specimen_int_sigma}"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_vds(zdata, delta_y_prime, start, end):\n vector_diffs = []\n for k in range(len(zdata)-1): # gets diff between two vectors\n vector_diffs.append(numpy.sqrt(sum((numpy.array(zdata[k+1]) - numpy.array(zdata[k]))**2) ))\n last_vector = numpy.linalg.norm(zdata[-1])\n vector_diffs.append(last_vector)\n vds = sum(vector_diffs)\n f_vds = abs(old_div(delta_y_prime, vds)) # fvds varies, because of delta_y_prime, but vds does not.\n vector_diffs_segment = vector_diffs[start:end]\n partial_vds = sum(vector_diffs_segment)\n max_diff = max(vector_diffs_segment)\n GAP_MAX = old_div(max_diff, partial_vds) #\n return {'max_diff': max_diff, 'vector_diffs': vector_diffs, 'specimen_vds': vds,\n 'specimen_fvds': f_vds, 'vector_diffs_segment': vector_diffs_segment,\n 'partial_vds': partial_vds, 'GAP-MAX': GAP_MAX}", "response": "gets vds and f_vds and GAP_MAX from the data array"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_SCAT_box(slope, x_mean, y_mean, beta_threshold = .1):\n # if beta_threshold is -999, that means null\n if beta_threshold == -999:\n beta_threshold = .1\n slope_err_threshold = abs(slope) * beta_threshold\n x, y = x_mean, y_mean\n # get lines that pass through mass center, with opposite slope\n slope1 = slope + (2* slope_err_threshold)\n line1_y_int = y - (slope1 * x)\n line1_x_int = -1 * (old_div(line1_y_int, slope1))\n slope2 = slope - (2 * slope_err_threshold)\n line2_y_int = y - (slope2 * x)\n line2_x_int = -1 * (old_div(line2_y_int, slope2))\n # l1_y_int and l2_x_int form the bottom line of the box\n # l2_y_int and l1_x_int form the top line of the box\n# print \"_diagonal line1:\", (0, line2_y_int), (line2_x_int, 0), (x, y)\n# print \"_diagonal line2:\", (0, line1_y_int), (line1_x_int, 0), (x, y)\n# print \"_bottom line:\", [(0, 
line1_y_int), (line2_x_int, 0)]\n# print \"_top line:\", [(0, line2_y_int), (line1_x_int, 0)]\n low_bound = [(0, line1_y_int), (line2_x_int, 0)]\n high_bound = [(0, line2_y_int), (line1_x_int, 0)]\n x_max = high_bound[1][0]#\n y_max = high_bound[0][1]\n # function for low_bound\n low_slope = old_div((low_bound[0][1] - low_bound[1][1]), (low_bound[0][0] - low_bound[1][0])) #\n low_y_int = low_bound[0][1]\n def low_bound(x):\n y = low_slope * x + low_y_int\n return y\n # function for high_bound\n high_slope = old_div((high_bound[0][1] - high_bound[1][1]), (high_bound[0][0] - high_bound[1][0])) # y_0-y_1/x_0-x_1\n high_y_int = high_bound[0][1]\n def high_bound(x):\n y = high_slope * x + high_y_int\n return y\n high_line = [high_y_int, high_slope]\n low_line = [low_y_int, low_slope]\n return low_bound, high_bound, x_max, y_max, low_line, high_line", "response": "returns information about a SCAT box"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining if a particular point falls within a box", "response": "def in_SCAT_box(x, y, low_bound, high_bound, x_max, y_max):\n \"\"\"determines if a particular point falls within a box\"\"\"\n passing = True\n upper_limit = high_bound(x)\n lower_limit = low_bound(x)\n if x > x_max or y > y_max:\n passing = False\n if x < 0 or y < 0:\n passing = False\n if y > upper_limit:\n passing = False\n if y < lower_limit:\n passing = False\n return passing"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_SCAT_points(x_Arai_segment, y_Arai_segment, tmin, tmax, ptrm_checks_temperatures,\n ptrm_checks_starting_temperatures, x_ptrm_check, y_ptrm_check,\n tail_checks_temperatures, tail_checks_starting_temperatures,\n x_tail_check, y_tail_check):\n \"\"\"returns relevant points for a SCAT test\"\"\"\n points = []\n points_arai = []\n points_ptrm = []\n points_tail = []\n for i in range(len(x_Arai_segment)): # uses only the best_fit segment, so no 
need for further selection\n x = x_Arai_segment[i]\n y = y_Arai_segment[i]\n points.append((x, y))\n points_arai.append((x,y))\n\n for num, temp in enumerate(ptrm_checks_temperatures): #\n if temp >= tmin and temp <= tmax: # if temp is within selected range\n if (ptrm_checks_starting_temperatures[num] >= tmin and\n ptrm_checks_starting_temperatures[num] <= tmax): # and also if it was not done after an out-of-range temp\n x = x_ptrm_check[num]\n y = y_ptrm_check[num]\n points.append((x, y))\n points_ptrm.append((x,y))\n\n for num, temp in enumerate(tail_checks_temperatures):\n if temp >= tmin and temp <= tmax:\n if (tail_checks_starting_temperatures[num] >= tmin and\n tail_checks_starting_temperatures[num] <= tmax):\n x = x_tail_check[num]\n y = y_tail_check[num]\n points.append((x, y))\n points_tail.append((x,y))\n # print \"points (tail checks added)\", points\n fancy_points = {'points_arai': points_arai, 'points_ptrm': points_ptrm, 'points_tail': points_tail}\n return points, fancy_points", "response": "returns relevant points for a SCAT test"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_SCAT(points, low_bound, high_bound, x_max, y_max):\n # iterate through all relevant points and see if any of them fall outside of your SCAT box\n SCAT = True\n for point in points:\n result = in_SCAT_box(point[0], point[1], low_bound, high_bound, x_max, y_max)\n if result == False:\n # print \"SCAT TEST FAILED\"\n SCAT = False\n return SCAT", "response": "runs SCAT test and returns boolean\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrun SCAT test and returns Pass or Fail", "response": "def fancy_SCAT(points, low_bound, high_bound, x_max, y_max):\n \"\"\"\n runs SCAT test and returns 'Pass' or 'Fail'\n \"\"\"\n # iterate through all relevant points and see if any of them fall outside of your SCAT box\n # {'points_arai': [(x,y),(x,y)], 'points_ptrm': [(x,y),(x,y)], ...}\n SCAT = 'Pass'\n 
SCATs = {'SCAT_arai': 'Pass', 'SCAT_ptrm': 'Pass', 'SCAT_tail': 'Pass'}\n for point_type in points:\n #print 'point_type', point_type\n for point in points[point_type]:\n #print 'point', point\n result = in_SCAT_box(point[0], point[1], low_bound, high_bound, x_max, y_max)\n if not result:\n # print \"SCAT TEST FAILED\"\n x = 'SCAT' + point_type[6:]\n #print 'lib point type', point_type\n #print 'xxxx', x\n SCATs[x] = 'Fail'\n SCAT = 'Fail'\n return SCAT, SCATs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the FRAC of a single element in a list of vector diffs", "response": "def get_FRAC(vds, vector_diffs_segment):\n \"\"\"\n input: vds, vector_diffs_segment\n output: FRAC\n \"\"\"\n for num in vector_diffs_segment:\n if num < 0:\n raise ValueError('vector diffs should not be negative')\n if vds == 0:\n raise ValueError('attempting to divide by zero. vds should be a positive number')\n FRAC = old_div(sum(vector_diffs_segment), vds)\n return FRAC"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_R_corr2(x_avg, y_avg, x_segment, y_segment): #\n xd = x_segment - x_avg # detrend x_segment\n yd = y_segment - y_avg # detrend y_segment\n if sum(xd**2) * sum(yd**2) == 0: # prevent divide by zero error\n return float('nan')\n rcorr = old_div(sum((xd * yd))**2, (sum(xd**2) * sum(yd**2)))\n return rcorr", "response": "Compute R_corr2 based on average of two segments."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the R_det2 of the last two values of the y_prime and y_avg", "response": "def get_R_det2(y_segment, y_avg, y_prime):\n \"\"\"\n takes in an array of y values, the mean of those values, and the array of y prime values.\n returns R_det2\n \"\"\"\n numerator = sum((numpy.array(y_segment) - numpy.array(y_prime))**2)\n denominator = sum((numpy.array(y_segment) - y_avg)**2)\n if denominator: # prevent 
divide by zero error\n R_det2 = 1 - (old_div(numerator, denominator))\n return R_det2\n else:\n return float('nan')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_b_wiggle(x, y, y_int):\n if x == 0:\n b_wiggle = 0\n else:\n b_wiggle = old_div((y_int - y), x)\n return b_wiggle", "response": "returns instantaneous slope from the ratio of NRM lost to TRM gained at the ith step"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets Z from Arai plot", "response": "def get_Z(x_segment, y_segment, x_int, y_int, slope):\n \"\"\"\n input: x_segment, y_segment, x_int, y_int, slope\n output: Z (Arai plot zigzag parameter)\n \"\"\"\n Z = 0\n first_time = True\n for num, x in enumerate(x_segment):\n b_wiggle = get_b_wiggle(x, y_segment[num], y_int)\n z = old_div((x * abs(b_wiggle - abs(slope)) ), abs(x_int))\n Z += z\n first_time = False\n return Z"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncalculate the Zstar of a set of segments.", "response": "def get_Zstar(x_segment, y_segment, x_int, y_int, slope, n):\n \"\"\"\n input: x_segment, y_segment, x_int, y_int, slope, n\n output: Z* (Arai plot zigzag parameter (alternate))\n \"\"\"\n total = 0\n first_time = True\n for num, x in enumerate(x_segment):\n b_wiggle = get_b_wiggle(x, y_segment[num], y_int)\n result = 100 * ( old_div((x * abs(b_wiggle - abs(slope)) ), abs(y_int)) )\n total += result\n first_time = False\n Zstar = (old_div(1., (n - 1.))) * total\n return Zstar"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the normalized points", "response": "def get_normed_points(point_array, norm): # good to go\n \"\"\"\n input: point_array, norm\n output: normed array\n \"\"\"\n norm = float(norm)\n #floated_array = []\n #for p in point_array: # need to make sure each point is a float\n #floated_array.append(float(p))\n points = 
old_div(numpy.array(point_array), norm)\n return points"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting xy array from x_segment y_segment", "response": "def get_xy_array(x_segment, y_segment):\n \"\"\"\n input: x_segment, y_segment\n output: xy_array, format: [(x[0], y[0]), (x[1], y[1]), ...]\n \"\"\"\n xy_array = []\n for num, x in enumerate(x_segment):\n xy_array.append((x, y_segment[num]))\n return xy_array"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef main():\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-i' in sys.argv:\n file=input(\"Enter file name: \")\n f=open(file,'r')\n elif '-f' in sys.argv:\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n f=open(file,'r')\n else:\n f=sys.stdin\n ofile = \"\"\n if '-F' in sys.argv:\n ind = sys.argv.index('-F')\n ofile= sys.argv[ind+1]\n out = open(ofile, 'w')\n data=f.readlines()\n dat=[]\n sum=0\n for line in data:\n rec=line.split()\n dat.append(float(rec[0]))\n sum+=float(rec[0])\n mean,std=pmag.gausspars(dat)\n outdata = len(dat),mean,sum,std,100*std/mean\n if ofile == \"\":\n print(len(dat),mean,sum,std,100*std/mean)\n else:\n for i in outdata:\n i = str(i)\n out.write(i + \" \")", "response": "NAME\n stats.py\n \n DEFINITION\n calculates Gauss statistics for input data\n\n SYNTAX\n stats [command line options][< filename]\n\n INPUT\n single column of numbers\n\n OPTIONS\n -h prints help message and quits\n -i interactive entry of file name\n -f input file name\n -F output file name\n \n OUTPUT\n N, mean, sum, sigma, (%) \n where sigma is the standard deviation\n where % is sigma as percentage of the mean\n stderr is the standard error and \n 95% conf.= 1.96*sigma/sqrt(N)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nname magic_select.py DESCRIPTION picks out records using dictitem options, saves to magic_special file SYNTAX magic_select.py [command line 
options] OPTIONS -h prints help message and quits -f FILE: specify input magic format file -F FILE: specify output magic format file -dm : data model (default is 3.0, otherwise use 2.5) -key KEY string [T,F,has, not, eval,min,max] returns records where the value of the key either: matches exactly the string (T) does not match the string (F) contains the string (has) does not contain the string (not) the value equals the numerical value of the string (eval) the value is greater than the numerical value of the string (min) the value is less than the numerical value of the string (max) NOTES for age range: use KEY: age (converts to Ma, takes mid point of low, high if no value for age. for paleolat: use KEY: model_lat (uses lat, if age<5 Ma, else, model_lat, or attempts calculation from average_inc if no model_lat.) returns estimate in model_lat key EXAMPLE: # here I want to output all records where the site column exactly matches \"MC01\" magic_select.py -f samples.txt -key site MC01 T -F select_samples.txt", "response": "def main():\n \"\"\"\n NAME\n magic_select.py\n\n DESCRIPTION\n picks out records and dictitem options saves to magic_special file\n\n SYNTAX\n magic_select.py [command line options]\n\n OPTIONS\n -h prints help message and quits\n -f FILE: specify input magic format file\n -F FILE: specify output magic format file\n -dm : data model (default is 3.0, otherwise use 2.5)\n -key KEY string [T,F,has, not, eval,min,max]\n returns records where the value of the key either:\n matches exactly the string (T)\n does not match the string (F)\n contains the string (has)\n does not contain the string (not)\n the value equals the numerical value of the string (eval)\n the value is greater than the numerical value of the string (min)\n the value is less than the numerical value of the string (max)\n NOTES\n for age range:\n use KEY: age (converts to Ma, takes mid point of low, high if no value for age.\n for paleolat:\n use KEY: model_lat (uses lat, if age<5 Ma, 
else, model_lat, or attempts calculation from average_inc if no model_lat.) returns estimate in model_lat key\n\n EXAMPLE:\n # here I want to output all records where the site column exactly matches \"MC01\"\n magic_select.py -f samples.txt -key site MC01 T -F select_samples.txt\n\n \"\"\"\n dir_path = \".\"\n flag = ''\n if '-WD' in sys.argv:\n ind = sys.argv.index('-WD')\n dir_path = sys.argv[ind+1]\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind = sys.argv.index('-f')\n magic_file = dir_path+'/'+sys.argv[ind+1]\n else:\n print(main.__doc__)\n print('-W- \"-f\" is a required option')\n sys.exit()\n if '-dm' in sys.argv:\n ind = sys.argv.index('-dm')\n data_model_num=sys.argv[ind+1]\n if data_model_num!='3':data_model_num=2.5\n else : data_model_num=3\n if '-F' in sys.argv:\n ind = sys.argv.index('-F')\n outfile = dir_path+'/'+sys.argv[ind+1]\n else:\n print(main.__doc__)\n print('-W- \"-F\" is a required option')\n sys.exit()\n if '-key' in sys.argv:\n ind = sys.argv.index('-key')\n grab_key = sys.argv[ind+1]\n v = sys.argv[ind+2]\n flag = sys.argv[ind+3]\n else:\n print(main.__doc__)\n print('-key is required')\n sys.exit()\n #\n # get data read in\n Data, file_type = pmag.magic_read(magic_file)\n if grab_key == 'age':\n grab_key = 'average_age'\n Data = pmag.convert_ages(Data,data_model=data_model_num)\n if grab_key == 'model_lat':\n Data = pmag.convert_lat(Data)\n Data = pmag.convert_ages(Data,data_model=data_model_num)\n #print(Data[0])\n Selection = pmag.get_dictitem(Data, grab_key, v, flag, float_to_int=True)\n if len(Selection) > 0:\n pmag.magic_write(outfile, Selection, file_type)\n else:\n print('no data matched your criteria')"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef main():\n#\n dir_path = \".\"\n version_num = pmag.get_version()\n verbose = pmagplotlib.verbose\n args = sys.argv\n ipar, ihext, ivec, iboot, imeas, isite, iplot, vec = 0, 0, 0, 1, 1, 0, 1, 0\n 
hpars, bpars, PDir = [], [], []\n CS, crd = '-1', 's'\n nb = 1000\n fmt = 'pdf'\n ResRecs = []\n orlist = []\n outfile, comp, Dir, gtcirc, PDir = 'rmag_results.txt', 0, [], 0, []\n infile = 'rmag_anisotropy.txt'\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if '-WD' in args:\n ind = args.index('-WD')\n dir_path = args[ind+1]\n if '-n' in args:\n ind = args.index('-n')\n nb = int(args[ind+1])\n if '-usr' in args:\n ind = args.index('-usr')\n user = args[ind+1]\n else:\n user = \"\"\n if '-B' in args:\n iboot, ihext = 0, 1\n if '-par' in args:\n ipar = 1\n if '-x' in args:\n ihext = 1\n if '-v' in args:\n ivec = 1\n if '-sit' in args:\n isite = 1\n if '-P' in args:\n iplot = 0\n if '-f' in args:\n ind = args.index('-f')\n infile = args[ind+1]\n if '-F' in args:\n ind = args.index('-F')\n outfile = args[ind+1]\n if '-crd' in sys.argv:\n ind = sys.argv.index('-crd')\n crd = sys.argv[ind+1]\n if crd == 'g':\n CS = '0'\n if crd == 't':\n CS = '100'\n if '-fmt' in args:\n ind = args.index('-fmt')\n fmt = args[ind+1]\n if '-sav' in args:\n plots = 1\n verbose = 0\n else:\n plots = 0\n if '-gtc' in args:\n ind = args.index('-gtc')\n d, i = float(args[ind+1]), float(args[ind+2])\n PDir.append(d)\n PDir.append(i)\n if '-d' in args:\n comp = 1\n ind = args.index('-d')\n vec = int(args[ind+1])-1\n Dir = [float(args[ind+2]), float(args[ind+3])]\n#\n# set up plots\n#\n if infile[0] != '/':\n infile = dir_path+'/'+infile\n if outfile[0] != '/':\n outfile = dir_path+'/'+outfile\n ANIS = {}\n initcdf, inittcdf = 0, 0\n ANIS['data'], ANIS['conf'] = 1, 2\n if iboot == 1:\n ANIS['tcdf'] = 3\n if iplot == 1:\n inittcdf = 1\n pmagplotlib.plot_init(ANIS['tcdf'], 5, 5)\n if comp == 1 and iplot == 1:\n initcdf = 1\n ANIS['vxcdf'], ANIS['vycdf'], ANIS['vzcdf'] = 4, 5, 6\n pmagplotlib.plot_init(ANIS['vxcdf'], 5, 5)\n pmagplotlib.plot_init(ANIS['vycdf'], 5, 5)\n pmagplotlib.plot_init(ANIS['vzcdf'], 5, 5)\n if iplot == 1:\n pmagplotlib.plot_init(ANIS['conf'], 5, 5)\n 
pmagplotlib.plot_init(ANIS['data'], 5, 5)\n# read in the data\n data, ifiletype = pmag.magic_read(infile)\n for rec in data: # find all the orientation systems\n if 'anisotropy_tilt_correction' not in rec.keys():\n rec['anisotropy_tilt_correction'] = '-1'\n if rec['anisotropy_tilt_correction'] not in orlist:\n orlist.append(rec['anisotropy_tilt_correction'])\n if CS not in orlist:\n if len(orlist) > 0:\n CS = orlist[0]\n else:\n CS = '-1'\n if CS == '-1':\n crd = 's'\n if CS == '0':\n crd = 'g'\n if CS == '100':\n crd = 't'\n if verbose:\n print(\"desired coordinate system not available, using available: \", crd)\n if isite == 1:\n sitelist = []\n for rec in data:\n if rec['er_site_name'] not in sitelist:\n sitelist.append(rec['er_site_name'])\n sitelist.sort()\n plt = len(sitelist)\n else:\n plt = 1\n k = 0\n while k < plt:\n site = \"\"\n sdata, Ss = [], [] # list of S format data\n Locs, Sites, Samples, Specimens, Cits = [], [], [], [], []\n if isite == 0:\n sdata = data\n else:\n site = sitelist[k]\n for rec in data:\n if rec['er_site_name'] == site:\n sdata.append(rec)\n anitypes = []\n csrecs = pmag.get_dictitem(\n sdata, 'anisotropy_tilt_correction', CS, 'T')\n for rec in csrecs:\n if rec['anisotropy_type'] not in anitypes:\n anitypes.append(rec['anisotropy_type'])\n if rec['er_location_name'] not in Locs:\n Locs.append(rec['er_location_name'])\n if rec['er_site_name'] not in Sites:\n Sites.append(rec['er_site_name'])\n if rec['er_sample_name'] not in Samples:\n Samples.append(rec['er_sample_name'])\n if rec['er_specimen_name'] not in Specimens:\n Specimens.append(rec['er_specimen_name'])\n if rec['er_citation_names'] not in Cits:\n Cits.append(rec['er_citation_names'])\n s = []\n s.append(float(rec[\"anisotropy_s1\"]))\n s.append(float(rec[\"anisotropy_s2\"]))\n s.append(float(rec[\"anisotropy_s3\"]))\n s.append(float(rec[\"anisotropy_s4\"]))\n s.append(float(rec[\"anisotropy_s5\"]))\n s.append(float(rec[\"anisotropy_s6\"]))\n if s[0] <= 1.0:\n Ss.append(s) 
# protect against crap\n # tau,Vdirs=pmag.doseigs(s)\n ResRec = {}\n ResRec['er_location_names'] = rec['er_location_name']\n ResRec['er_citation_names'] = rec['er_citation_names']\n ResRec['er_site_names'] = rec['er_site_name']\n ResRec['er_sample_names'] = rec['er_sample_name']\n ResRec['er_specimen_names'] = rec['er_specimen_name']\n ResRec['rmag_result_name'] = rec['er_specimen_name'] + \\\n \":\"+rec['anisotropy_type']\n ResRec[\"er_analyst_mail_names\"] = user\n ResRec[\"tilt_correction\"] = CS\n ResRec[\"anisotropy_type\"] = rec['anisotropy_type']\n if \"anisotropy_n\" not in rec.keys():\n rec[\"anisotropy_n\"] = \"6\"\n if \"anisotropy_sigma\" not in rec.keys():\n rec[\"anisotropy_sigma\"] = \"0\"\n fpars = pmag.dohext(\n int(rec[\"anisotropy_n\"])-6, float(rec[\"anisotropy_sigma\"]), s)\n ResRec[\"anisotropy_v1_dec\"] = '%7.1f' % (fpars['v1_dec'])\n ResRec[\"anisotropy_v2_dec\"] = '%7.1f' % (fpars['v2_dec'])\n ResRec[\"anisotropy_v3_dec\"] = '%7.1f' % (fpars['v3_dec'])\n ResRec[\"anisotropy_v1_inc\"] = '%7.1f' % (fpars['v1_inc'])\n ResRec[\"anisotropy_v2_inc\"] = '%7.1f' % (fpars['v2_inc'])\n ResRec[\"anisotropy_v3_inc\"] = '%7.1f' % (fpars['v3_inc'])\n ResRec[\"anisotropy_t1\"] = '%10.8f' % (fpars['t1'])\n ResRec[\"anisotropy_t2\"] = '%10.8f' % (fpars['t2'])\n ResRec[\"anisotropy_t3\"] = '%10.8f' % (fpars['t3'])\n ResRec[\"anisotropy_ftest\"] = '%10.3f' % (fpars['F'])\n ResRec[\"anisotropy_ftest12\"] = '%10.3f' % (fpars['F12'])\n ResRec[\"anisotropy_ftest23\"] = '%10.3f' % (fpars['F23'])\n ResRec[\"result_description\"] = 'F_crit: ' + \\\n fpars['F_crit']+'; F12,F23_crit: '+fpars['F12_crit']\n ResRec['anisotropy_type'] = pmag.makelist(anitypes)\n ResRecs.append(ResRec)\n if len(Ss) > 1:\n if pmagplotlib.isServer:\n title = \"LO:_\"+ResRec['er_location_names'] + \\\n '_SI:_'+site+'_SA:__SP:__CO:_'+crd\n else:\n title = ResRec['er_location_names']\n if site:\n title += \"_{}\".format(site)\n title += '_{}'.format(crd)\n ResRec['er_location_names'] = 
pmag.makelist(Locs)\n bpars, hpars = pmagplotlib.plot_anis(\n ANIS, Ss, iboot, ihext, ivec, ipar, title, iplot, comp, vec, Dir, nb)\n if len(PDir) > 0:\n pmagplotlib.plot_circ(ANIS['data'], PDir, 90., 'g')\n pmagplotlib.plot_circ(ANIS['conf'], PDir, 90., 'g')\n if verbose and plots == 0:\n pmagplotlib.draw_figs(ANIS)\n ResRec['er_location_names'] = pmag.makelist(Locs)\n if plots == 1:\n save(ANIS, fmt, title)\n ResRec = {}\n ResRec['er_citation_names'] = pmag.makelist(Cits)\n ResRec['er_location_names'] = pmag.makelist(Locs)\n ResRec['er_site_names'] = pmag.makelist(Sites)\n ResRec['er_sample_names'] = pmag.makelist(Samples)\n ResRec['er_specimen_names'] = pmag.makelist(Specimens)\n ResRec['rmag_result_name'] = pmag.makelist(\n Sites)+\":\"+pmag.makelist(anitypes)\n ResRec['anisotropy_type'] = pmag.makelist(anitypes)\n ResRec[\"er_analyst_mail_names\"] = user\n ResRec[\"tilt_correction\"] = CS\n if isite == \"0\":\n ResRec['result_description'] = \"Study average using coordinate system: \" + CS\n if isite == \"1\":\n ResRec['result_description'] = \"Site average using coordinate system: \" + CS\n if hpars != [] and ihext == 1:\n HextRec = {}\n for key in ResRec.keys():\n HextRec[key] = ResRec[key] # copy over stuff\n HextRec[\"anisotropy_v1_dec\"] = '%7.1f' % (hpars[\"v1_dec\"])\n HextRec[\"anisotropy_v2_dec\"] = '%7.1f' % (hpars[\"v2_dec\"])\n HextRec[\"anisotropy_v3_dec\"] = '%7.1f' % (hpars[\"v3_dec\"])\n HextRec[\"anisotropy_v1_inc\"] = '%7.1f' % (hpars[\"v1_inc\"])\n HextRec[\"anisotropy_v2_inc\"] = '%7.1f' % (hpars[\"v2_inc\"])\n HextRec[\"anisotropy_v3_inc\"] = '%7.1f' % (hpars[\"v3_inc\"])\n HextRec[\"anisotropy_t1\"] = '%10.8f' % (hpars[\"t1\"])\n HextRec[\"anisotropy_t2\"] = '%10.8f' % (hpars[\"t2\"])\n HextRec[\"anisotropy_t3\"] = '%10.8f' % (hpars[\"t3\"])\n HextRec[\"anisotropy_hext_F\"] = '%7.1f ' % (hpars[\"F\"])\n HextRec[\"anisotropy_hext_F12\"] = '%7.1f ' % (hpars[\"F12\"])\n HextRec[\"anisotropy_hext_F23\"] = '%7.1f ' % (hpars[\"F23\"])\n 
HextRec[\"anisotropy_v1_eta_semi_angle\"] = '%7.1f ' % (\n hpars[\"e12\"])\n HextRec[\"anisotropy_v1_eta_dec\"] = '%7.1f ' % (hpars[\"v2_dec\"])\n HextRec[\"anisotropy_v1_eta_inc\"] = '%7.1f ' % (hpars[\"v2_inc\"])\n HextRec[\"anisotropy_v1_zeta_semi_angle\"] = '%7.1f ' % (\n hpars[\"e13\"])\n HextRec[\"anisotropy_v1_zeta_dec\"] = '%7.1f ' % (\n hpars[\"v3_dec\"])\n HextRec[\"anisotropy_v1_zeta_inc\"] = '%7.1f ' % (\n hpars[\"v3_inc\"])\n HextRec[\"anisotropy_v2_eta_semi_angle\"] = '%7.1f ' % (\n hpars[\"e12\"])\n HextRec[\"anisotropy_v2_eta_dec\"] = '%7.1f ' % (hpars[\"v1_dec\"])\n HextRec[\"anisotropy_v2_eta_inc\"] = '%7.1f ' % (hpars[\"v1_inc\"])\n HextRec[\"anisotropy_v2_zeta_semi_angle\"] = '%7.1f ' % (\n hpars[\"e23\"])\n HextRec[\"anisotropy_v2_zeta_dec\"] = '%7.1f ' % (\n hpars[\"v3_dec\"])\n HextRec[\"anisotropy_v2_zeta_inc\"] = '%7.1f ' % (\n hpars[\"v3_inc\"])\n HextRec[\"anisotropy_v3_eta_semi_angle\"] = '%7.1f ' % (\n hpars[\"e12\"])\n HextRec[\"anisotropy_v3_eta_dec\"] = '%7.1f ' % (hpars[\"v1_dec\"])\n HextRec[\"anisotropy_v3_eta_inc\"] = '%7.1f ' % (hpars[\"v1_inc\"])\n HextRec[\"anisotropy_v3_zeta_semi_angle\"] = '%7.1f ' % (\n hpars[\"e23\"])\n HextRec[\"anisotropy_v3_zeta_dec\"] = '%7.1f ' % (\n hpars[\"v2_dec\"])\n HextRec[\"anisotropy_v3_zeta_inc\"] = '%7.1f ' % (\n hpars[\"v2_inc\"])\n HextRec[\"magic_method_codes\"] = 'LP-AN:AE-H'\n if verbose:\n print(\"Hext Statistics: \")\n print(\n \" tau_i, V_i_D, V_i_I, V_i_zeta, V_i_zeta_D, V_i_zeta_I, V_i_eta, V_i_eta_D, V_i_eta_I\")\n print(HextRec[\"anisotropy_t1\"], HextRec[\"anisotropy_v1_dec\"], HextRec[\"anisotropy_v1_inc\"], HextRec[\"anisotropy_v1_eta_semi_angle\"], HextRec[\"anisotropy_v1_eta_dec\"],\n HextRec[\"anisotropy_v1_eta_inc\"], HextRec[\"anisotropy_v1_zeta_semi_angle\"], HextRec[\"anisotropy_v1_zeta_dec\"], HextRec[\"anisotropy_v1_zeta_inc\"])\n print(HextRec[\"anisotropy_t2\"], HextRec[\"anisotropy_v2_dec\"], HextRec[\"anisotropy_v2_inc\"], 
HextRec[\"anisotropy_v2_eta_semi_angle\"], HextRec[\"anisotropy_v2_eta_dec\"],\n HextRec[\"anisotropy_v2_eta_inc\"], HextRec[\"anisotropy_v2_zeta_semi_angle\"], HextRec[\"anisotropy_v2_zeta_dec\"], HextRec[\"anisotropy_v2_zeta_inc\"])\n print(HextRec[\"anisotropy_t3\"], HextRec[\"anisotropy_v3_dec\"], HextRec[\"anisotropy_v3_inc\"], HextRec[\"anisotropy_v3_eta_semi_angle\"], HextRec[\"anisotropy_v3_eta_dec\"],\n HextRec[\"anisotropy_v3_eta_inc\"], HextRec[\"anisotropy_v3_zeta_semi_angle\"], HextRec[\"anisotropy_v3_zeta_dec\"], HextRec[\"anisotropy_v3_zeta_inc\"])\n HextRec['magic_software_packages'] = version_num\n ResRecs.append(HextRec)\n if bpars != []:\n BootRec = {}\n for key in ResRec.keys():\n BootRec[key] = ResRec[key] # copy over stuff\n BootRec[\"anisotropy_v1_dec\"] = '%7.1f' % (bpars[\"v1_dec\"])\n BootRec[\"anisotropy_v2_dec\"] = '%7.1f' % (bpars[\"v2_dec\"])\n BootRec[\"anisotropy_v3_dec\"] = '%7.1f' % (bpars[\"v3_dec\"])\n BootRec[\"anisotropy_v1_inc\"] = '%7.1f' % (bpars[\"v1_inc\"])\n BootRec[\"anisotropy_v2_inc\"] = '%7.1f' % (bpars[\"v2_inc\"])\n BootRec[\"anisotropy_v3_inc\"] = '%7.1f' % (bpars[\"v3_inc\"])\n BootRec[\"anisotropy_t1\"] = '%10.8f' % (bpars[\"t1\"])\n BootRec[\"anisotropy_t2\"] = '%10.8f' % (bpars[\"t2\"])\n BootRec[\"anisotropy_t3\"] = '%10.8f' % (bpars[\"t3\"])\n BootRec[\"anisotropy_v1_eta_inc\"] = '%7.1f ' % (\n bpars[\"v1_eta_inc\"])\n BootRec[\"anisotropy_v1_eta_dec\"] = '%7.1f ' % (\n bpars[\"v1_eta_dec\"])\n BootRec[\"anisotropy_v1_eta_semi_angle\"] = '%7.1f ' % (\n bpars[\"v1_eta\"])\n BootRec[\"anisotropy_v1_zeta_inc\"] = '%7.1f ' % (\n bpars[\"v1_zeta_inc\"])\n BootRec[\"anisotropy_v1_zeta_dec\"] = '%7.1f ' % (\n bpars[\"v1_zeta_dec\"])\n BootRec[\"anisotropy_v1_zeta_semi_angle\"] = '%7.1f ' % (\n bpars[\"v1_zeta\"])\n BootRec[\"anisotropy_v2_eta_inc\"] = '%7.1f ' % (\n bpars[\"v2_eta_inc\"])\n BootRec[\"anisotropy_v2_eta_dec\"] = '%7.1f ' % (\n bpars[\"v2_eta_dec\"])\n BootRec[\"anisotropy_v2_eta_semi_angle\"] = '%7.1f 
' % (\n bpars[\"v2_eta\"])\n BootRec[\"anisotropy_v2_zeta_inc\"] = '%7.1f ' % (\n bpars[\"v2_zeta_inc\"])\n BootRec[\"anisotropy_v2_zeta_dec\"] = '%7.1f ' % (\n bpars[\"v2_zeta_dec\"])\n BootRec[\"anisotropy_v2_zeta_semi_angle\"] = '%7.1f ' % (\n bpars[\"v2_zeta\"])\n BootRec[\"anisotropy_v3_eta_inc\"] = '%7.1f ' % (\n bpars[\"v3_eta_inc\"])\n BootRec[\"anisotropy_v3_eta_dec\"] = '%7.1f ' % (\n bpars[\"v3_eta_dec\"])\n BootRec[\"anisotropy_v3_eta_semi_angle\"] = '%7.1f ' % (\n bpars[\"v3_eta\"])\n BootRec[\"anisotropy_v3_zeta_inc\"] = '%7.1f ' % (\n bpars[\"v3_zeta_inc\"])\n BootRec[\"anisotropy_v3_zeta_dec\"] = '%7.1f ' % (\n bpars[\"v3_zeta_dec\"])\n BootRec[\"anisotropy_v3_zeta_semi_angle\"] = '%7.1f ' % (\n bpars[\"v3_zeta\"])\n BootRec[\"anisotropy_hext_F\"] = ''\n BootRec[\"anisotropy_hext_F12\"] = ''\n BootRec[\"anisotropy_hext_F23\"] = ''\n # regular bootstrap\n BootRec[\"magic_method_codes\"] = 'LP-AN:AE-H:AE-BS'\n if ipar == 1:\n # parametric bootstrap\n BootRec[\"magic_method_codes\"] = 'LP-AN:AE-H:AE-BS-P'\n if verbose:\n print(\"Boostrap Statistics: \")\n print(\n \" tau_i, V_i_D, V_i_I, V_i_zeta, V_i_zeta_D, V_i_zeta_I, V_i_eta, V_i_eta_D, V_i_eta_I\")\n print(BootRec[\"anisotropy_t1\"], BootRec[\"anisotropy_v1_dec\"], BootRec[\"anisotropy_v1_inc\"], BootRec[\"anisotropy_v1_eta_semi_angle\"], BootRec[\"anisotropy_v1_eta_dec\"],\n BootRec[\"anisotropy_v1_eta_inc\"], BootRec[\"anisotropy_v1_zeta_semi_angle\"], BootRec[\"anisotropy_v1_zeta_dec\"], BootRec[\"anisotropy_v1_zeta_inc\"])\n print(BootRec[\"anisotropy_t2\"], BootRec[\"anisotropy_v2_dec\"], BootRec[\"anisotropy_v2_inc\"], BootRec[\"anisotropy_v2_eta_semi_angle\"], BootRec[\"anisotropy_v2_eta_dec\"],\n BootRec[\"anisotropy_v2_eta_inc\"], BootRec[\"anisotropy_v2_zeta_semi_angle\"], BootRec[\"anisotropy_v2_zeta_dec\"], BootRec[\"anisotropy_v2_zeta_inc\"])\n print(BootRec[\"anisotropy_t3\"], BootRec[\"anisotropy_v3_dec\"], BootRec[\"anisotropy_v3_inc\"], BootRec[\"anisotropy_v3_eta_semi_angle\"], 
BootRec[\"anisotropy_v3_eta_dec\"],\n BootRec[\"anisotropy_v3_eta_inc\"], BootRec[\"anisotropy_v3_zeta_semi_angle\"], BootRec[\"anisotropy_v3_zeta_dec\"], BootRec[\"anisotropy_v3_zeta_inc\"])\n BootRec['magic_software_packages'] = version_num\n ResRecs.append(BootRec)\n k += 1\n goon = 1\n while goon == 1 and iplot == 1 and verbose:\n if iboot == 1:\n print(\"compare with [d]irection \")\n print(\n \" plot [g]reat circle, change [c]oord. system, change [e]llipse calculation, s[a]ve plots, [q]uit \")\n if isite == 1:\n print(\" [p]revious, [s]ite, [q]uit, for next \")\n ans = input(\"\")\n if ans == \"q\":\n sys.exit()\n if ans == \"e\":\n iboot, ipar, ihext, ivec = 1, 0, 0, 0\n e = input(\"Do Hext Statistics 1/[0]: \")\n if e == \"1\":\n ihext = 1\n e = input(\"Suppress bootstrap 1/[0]: \")\n if e == \"1\":\n iboot = 0\n if iboot == 1:\n e = input(\"Parametric bootstrap 1/[0]: \")\n if e == \"1\":\n ipar = 1\n e = input(\"Plot bootstrap eigenvectors: 1/[0]: \")\n if e == \"1\":\n ivec = 1\n if iplot == 1:\n if inittcdf == 0:\n ANIS['tcdf'] = 3\n pmagplotlib.plot_init(ANIS['tcdf'], 5, 5)\n inittcdf = 1\n bpars, hpars = pmagplotlib.plot_anis(\n ANIS, Ss, iboot, ihext, ivec, ipar, title, iplot, comp, vec, Dir, nb)\n if verbose and plots == 0:\n pmagplotlib.draw_figs(ANIS)\n if ans == \"c\":\n print(\"Current Coordinate system is: \")\n if CS == '-1':\n print(\" Specimen\")\n if CS == '0':\n print(\" Geographic\")\n if CS == '100':\n print(\" Tilt corrected\")\n key = input(\n \" Enter desired coordinate system: [s]pecimen, [g]eographic, [t]ilt corrected \")\n if key == 's':\n CS = '-1'\n if key == 'g':\n CS = '0'\n if key == 't':\n CS = '100'\n if CS not in orlist:\n if len(orlist) > 0:\n CS = orlist[0]\n else:\n CS = '-1'\n if CS == '-1':\n crd = 's'\n if CS == '0':\n crd = 'g'\n if CS == '100':\n crd = 't'\n print(\n \"desired coordinate system not available, using available: \", crd)\n k -= 1\n goon = 0\n if ans == \"\":\n if isite == 1:\n goon = 0\n else:\n 
print(\"Good bye \")\n sys.exit()\n if ans == 'd':\n if initcdf == 0:\n initcdf = 1\n ANIS['vxcdf'], ANIS['vycdf'], ANIS['vzcdf'] = 4, 5, 6\n pmagplotlib.plot_init(ANIS['vxcdf'], 5, 5)\n pmagplotlib.plot_init(ANIS['vycdf'], 5, 5)\n pmagplotlib.plot_init(ANIS['vzcdf'], 5, 5)\n Dir, comp = [], 1\n print(\"\"\"\n Input: Vi D I to compare eigenvector Vi with direction D/I\n where Vi=1: principal\n Vi=2: major\n Vi=3: minor\n D= declination of comparison direction\n I= inclination of comparison direction\"\"\")\n con = 1\n while con == 1:\n try:\n vdi = input(\"Vi D I: \").split()\n vec = int(vdi[0])-1\n Dir = [float(vdi[1]), float(vdi[2])]\n con = 0\n except IndexError:\n print(\" Incorrect entry, try again \")\n bpars, hpars = pmagplotlib.plot_anis(\n ANIS, Ss, iboot, ihext, ivec, ipar, title, iplot, comp, vec, Dir, nb)\n Dir, comp = [], 0\n if ans == 'g':\n con, cnt = 1, 0\n while con == 1:\n try:\n print(\n \" Input: input pole to great circle ( D I) to plot a great circle: \")\n di = input(\" D I: \").split()\n PDir.append(float(di[0]))\n PDir.append(float(di[1]))\n con = 0\n except:\n cnt += 1\n if cnt < 10:\n print(\n \" enter the dec and inc of the pole on one line \")\n else:\n print(\n \"ummm - you are doing something wrong - i give up\")\n sys.exit()\n pmagplotlib.plot_circ(ANIS['data'], PDir, 90., 'g')\n pmagplotlib.plot_circ(ANIS['conf'], PDir, 90., 'g')\n if verbose and plots == 0:\n pmagplotlib.draw_figs(ANIS)\n if ans == \"p\":\n k -= 2\n goon = 0\n if ans == \"q\":\n k = plt\n goon = 0\n if ans == \"s\":\n keepon = 1\n site = input(\" print site or part of site desired: \")\n while keepon == 1:\n try:\n k = sitelist.index(site)\n keepon = 0\n except:\n tmplist = []\n for qq in range(len(sitelist)):\n if site in sitelist[qq]:\n tmplist.append(sitelist[qq])\n print(site, \" not found, but this was: \")\n print(tmplist)\n site = input('Select one or try again\\n ')\n k = sitelist.index(site)\n goon, ans = 0, \"\"\n if ans == \"a\":\n locs = 
pmag.makelist(Locs)\n if pmagplotlib.isServer: # use server plot naming convention\n title = \"LO:_\"+locs+'_SI:__'+'_SA:__SP:__CO:_'+crd\n else: # use more readable plot naming convention\n title = \"{}_{}\".format(locs, crd)\n save(ANIS, fmt, title)\n goon = 0\n else:\n if verbose:\n print('skipping plot - not enough data points')\n k += 1\n# put rmag_results stuff here\n if len(ResRecs) > 0:\n ResOut, keylist = pmag.fillkeys(ResRecs)\n pmag.magic_write(outfile, ResOut, 'rmag_results')\n if verbose:\n print(\" Good bye \")", "response": "This is the main routine of an anisotropy plotting program: it reads anisotropy tensor data from an rmag_anisotropy file, computes Hext and/or bootstrap statistics by site or for the whole study, plots the results, and writes the summary statistics to an rmag_results file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main():\n FIG={} # plot dictionary\n FIG['eq']=1 # eqarea is figure 1\n fmt,dist,mode,plot='svg','F',1,0\n sym={'lower':['o','r'],'upper':['o','w'],'size':10}\n plotE=0\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if not set_env.IS_WIN:\n pmagplotlib.plot_init(FIG['eq'],5,5)\n if '-sav' in sys.argv:plot=1\n if '-f' in sys.argv:\n ind=sys.argv.index(\"-f\")\n title=sys.argv[ind+1]\n data=numpy.loadtxt(title).transpose()\n if '-ell' in sys.argv:\n plotE=1\n ind=sys.argv.index('-ell')\n ell_type=sys.argv[ind+1]\n if ell_type=='F':dist='F'\n if ell_type=='K':dist='K'\n if ell_type=='B':dist='B'\n if ell_type=='Be':dist='BE'\n if ell_type=='Bv':\n dist='BV'\n FIG['bdirs']=2\n pmagplotlib.plot_init(FIG['bdirs'],5,5)\n if '-fmt' in sys.argv:\n ind=sys.argv.index(\"-fmt\")\n fmt=sys.argv[ind+1]\n DIblock=numpy.array([data[0],data[1]]).transpose()\n if len(DIblock)>0:\n pmagplotlib.plot_eq_sym(FIG['eq'],DIblock,title,sym)\n #if plot==0:pmagplotlib.draw_figs(FIG)\n else:\n print(\"no data to plot\")\n sys.exit()\n if plotE==1:\n ppars=pmag.doprinc(DIblock) # get principal directions\n nDIs,rDIs,npars,rpars=[],[],[],[]\n for rec in 
DIblock:\n angle=pmag.angle([rec[0],rec[1]],[ppars['dec'],ppars['inc']])\n if angle>90.:\n rDIs.append(rec)\n else:\n nDIs.append(rec)\n if dist=='B': # do on whole dataset\n etitle=\"Bingham confidence ellipse\"\n bpars=pmag.dobingham(DIblock)\n for key in list(bpars.keys()):\n if key!='n' and pmagplotlib.verbose:print(\" \",key, '%7.1f'%(bpars[key]))\n if key=='n' and pmagplotlib.verbose:print(\" \",key, ' %i'%(bpars[key]))\n npars.append(bpars['dec'])\n npars.append(bpars['inc'])\n npars.append(bpars['Zeta'])\n npars.append(bpars['Zdec'])\n npars.append(bpars['Zinc'])\n npars.append(bpars['Eta'])\n npars.append(bpars['Edec'])\n npars.append(bpars['Einc'])\n if dist=='F':\n etitle=\"Fisher confidence cone\"\n if len(nDIs)>3:\n fpars=pmag.fisher_mean(nDIs)\n for key in list(fpars.keys()):\n if key!='n' and pmagplotlib.verbose:print(\" \",key, '%7.1f'%(fpars[key]))\n if key=='n' and pmagplotlib.verbose:print(\" \",key, ' %i'%(fpars[key]))\n mode+=1\n npars.append(fpars['dec'])\n npars.append(fpars['inc'])\n npars.append(fpars['alpha95']) # Beta\n npars.append(fpars['dec'])\n isign=abs(fpars['inc']) / fpars['inc']\n npars.append(fpars['inc']-isign*90.) #Beta inc\n npars.append(fpars['alpha95']) # gamma\n npars.append(fpars['dec']+90.) # Beta dec\n npars.append(0.) #Beta inc\n if len(rDIs)>3:\n fpars=pmag.fisher_mean(rDIs)\n if pmagplotlib.verbose:print(\"mode \",mode)\n for key in list(fpars.keys()):\n if key!='n' and pmagplotlib.verbose:print(\" \",key, '%7.1f'%(fpars[key]))\n if key=='n' and pmagplotlib.verbose:print(\" \",key, ' %i'%(fpars[key]))\n mode+=1\n rpars.append(fpars['dec'])\n rpars.append(fpars['inc'])\n rpars.append(fpars['alpha95']) # Beta\n rpars.append(fpars['dec'])\n isign=abs(fpars['inc']) / fpars['inc']\n rpars.append(fpars['inc']-isign*90.) #Beta inc\n rpars.append(fpars['alpha95']) # gamma\n rpars.append(fpars['dec']+90.) # Beta dec\n rpars.append(0.) 
#Beta inc\n if dist=='K':\n etitle=\"Kent confidence ellipse\"\n if len(nDIs)>3:\n kpars=pmag.dokent(nDIs,len(nDIs))\n if pmagplotlib.verbose:print(\"mode \",mode)\n for key in list(kpars.keys()):\n if key!='n' and pmagplotlib.verbose:print(\" \",key, '%7.1f'%(kpars[key]))\n if key=='n' and pmagplotlib.verbose:print(\" \",key, ' %i'%(kpars[key]))\n mode+=1\n npars.append(kpars['dec'])\n npars.append(kpars['inc'])\n npars.append(kpars['Zeta'])\n npars.append(kpars['Zdec'])\n npars.append(kpars['Zinc'])\n npars.append(kpars['Eta'])\n npars.append(kpars['Edec'])\n npars.append(kpars['Einc'])\n if len(rDIs)>3:\n kpars=pmag.dokent(rDIs,len(rDIs))\n if pmagplotlib.verbose:print(\"mode \",mode)\n for key in list(kpars.keys()):\n if key!='n' and pmagplotlib.verbose:print(\" \",key, '%7.1f'%(kpars[key]))\n if key=='n' and pmagplotlib.verbose:print(\" \",key, ' %i'%(kpars[key]))\n mode+=1\n rpars.append(kpars['dec'])\n rpars.append(kpars['inc'])\n rpars.append(kpars['Zeta'])\n rpars.append(kpars['Zdec'])\n rpars.append(kpars['Zinc'])\n rpars.append(kpars['Eta'])\n rpars.append(kpars['Edec'])\n rpars.append(kpars['Einc'])\n else: # assume bootstrap\n if len(nDIs)<10 and len(rDIs)<10:\n print('too few data points for bootstrap')\n sys.exit()\n if dist=='BE':\n print('Be patient for bootstrap...')\n if len(nDIs)>=10:\n BnDIs=pmag.di_boot(nDIs)\n Bkpars=pmag.dokent(BnDIs,1.)\n if pmagplotlib.verbose:print(\"mode \",mode)\n for key in list(Bkpars.keys()):\n if key!='n' and pmagplotlib.verbose:print(\" \",key, '%7.1f'%(Bkpars[key]))\n if key=='n' and pmagplotlib.verbose:print(\" \",key, ' %i'%(Bkpars[key]))\n mode+=1\n npars.append(Bkpars['dec'])\n npars.append(Bkpars['inc'])\n npars.append(Bkpars['Zeta'])\n npars.append(Bkpars['Zdec'])\n npars.append(Bkpars['Zinc'])\n npars.append(Bkpars['Eta'])\n npars.append(Bkpars['Edec'])\n npars.append(Bkpars['Einc'])\n if len(rDIs)>=10:\n BrDIs=pmag.di_boot(rDIs)\n Bkpars=pmag.dokent(BrDIs,1.)\n if pmagplotlib.verbose:print(\"mode 
\",mode)\n for key in list(Bkpars.keys()):\n if key!='n' and pmagplotlib.verbose:print(\" \",key, '%7.1f'%(Bkpars[key]))\n if key=='n' and pmagplotlib.verbose:print(\" \",key, ' %i'%(Bkpars[key]))\n mode+=1\n rpars.append(Bkpars['dec'])\n rpars.append(Bkpars['inc'])\n rpars.append(Bkpars['Zeta'])\n rpars.append(Bkpars['Zdec'])\n rpars.append(Bkpars['Zinc'])\n rpars.append(Bkpars['Eta'])\n rpars.append(Bkpars['Edec'])\n rpars.append(Bkpars['Einc'])\n etitle=\"Bootstrapped confidence ellipse\"\n elif dist=='BV':\n print('Be patient for bootstrap...')\n vsym={'lower':['+','k'],'upper':['x','k'],'size':5}\n if len(nDIs)>5:\n BnDIs=pmag.di_boot(nDIs)\n pmagplotlib.plot_eq_sym(FIG['bdirs'],BnDIs,'Bootstrapped Eigenvectors',vsym)\n if len(rDIs)>5:\n BrDIs=pmag.di_boot(rDIs)\n if len(nDIs)>5: # plot on existing plots\n pmagplotlib.plot_di_sym(FIG['bdirs'],BrDIs,vsym)\n else:\n pmagplotlib.plot_eq(FIG['bdirs'],BrDIs,'Bootstrapped Eigenvectors',vsym)\n if dist=='B':\n if len(nDIs)> 3 or len(rDIs)>3: pmagplotlib.plot_conf(FIG['eq'],etitle,[],npars,0)\n elif len(nDIs)>3 and dist!='BV':\n pmagplotlib.plot_conf(FIG['eq'],etitle,[],npars,0)\n if len(rDIs)>3:\n pmagplotlib.plot_conf(FIG['eq'],etitle,[],rpars,0)\n elif len(rDIs)>3 and dist!='BV':\n pmagplotlib.plot_conf(FIG['eq'],etitle,[],rpars,0)\n #if plot==0:pmagplotlib.draw_figs(FIG)\n if plot==0:pmagplotlib.draw_figs(FIG)\n #\n files={}\n for key in list(FIG.keys()):\n files[key]=title+'_'+key+'.'+fmt\n if pmagplotlib.isServer:\n black = '#000000'\n purple = '#800080'\n titles={}\n titles['eq']='Equal Area Plot'\n FIG = pmagplotlib.add_borders(FIG,titles,black,purple)\n pmagplotlib.save_plots(FIG,files)\n elif plot==0:\n ans=input(\" S[a]ve to save plot, [q]uit, Return to continue: \")\n if ans==\"q\": sys.exit()\n if ans==\"a\":\n pmagplotlib.save_plots(FIG,files)\n else:\n pmagplotlib.save_plots(FIG,files)", "response": "NAME\n eqarea_ell.py\n\n DESCRIPTION\n makes equal area projections from declination/inclination data\n 
and plot ellipses\n\n SYNTAX\n eqarea_ell.py -h [command line options]\n\n INPUT\n takes space delimited Dec/Inc data\n\n OPTIONS\n -h prints help message and quits\n -f FILE\n -fmt [svg,png,jpg] format for output plots\n -sav saves figures and quits\n -ell [F,K,B,Be,Bv] plot Fisher, Kent, Bingham, Bootstrap ellipses or Bootstrap eigenvectors"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert dec, inc pairs to x,y pairs using an equal area projection.", "response": "def main():\n \"\"\"\n NAME\n di_eq.py\n \n DESCRIPTION\n converts dec, inc pairs to x,y pairs using equal area projection\n NB: do only upper or lower hemisphere at a time: does not distinguish between up and down.\n \n SYNTAX\n di_eq.py [command line options] [< filename]\n \n OPTIONS\n -h prints help message and quits\n -f FILE, input file\n \"\"\"\n out=\"\"\n UP=0\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1] \n DI=numpy.loadtxt(file,dtype=numpy.float)\n else:\n DI = numpy.loadtxt(sys.stdin,dtype=numpy.float) # read from standard input\n Ds=DI.transpose()[0]\n Is=DI.transpose()[1]\n if len(DI)>1: #array of data\n XY=pmag.dimap_V(Ds,Is)\n for xy in XY:\n print('%f %f'%(xy[0],xy[1]))\n else: # single data point\n XY=pmag.dimap(Ds,Is)\n print('%f %f'%(XY[0],XY[1]))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef spline_interpolate(x1, y1, x2):\n sp = Spline(x1, y1)\n return sp(x2)", "response": "Interpolate a function sampled at points x1, y1 with a spline and evaluate it at points x2."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef logspline_interpolate(x1, y1, x2):\n sp = Spline(log(x1), log(y1))\n return exp(sp(log(x2)))", "response": "Interpolate a function sampled at points x1, y1 with a spline in log-log space and evaluate it at points x2."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef linear_interpolate(x1, y1, x2):\n li = LinInt(x1, y1)\n return li(x2)", "response": "Linearly interpolate a function sampled at points x1, y1 and evaluate it at points x2."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nevaluate a linear interpolant at a scalar x, clamping to the nearest endpoint when x is out of range.", "response": "def call(self, x):\n \"\"\"\n Evaluate the interpolant, assuming x is a scalar.\n \"\"\"\n # if out of range, return endpoint\n if x <= self.x_vals[0]:\n return self.y_vals[0]\n if x >= self.x_vals[-1]:\n return self.y_vals[-1]\n\n pos = numpy.searchsorted(self.x_vals, x)\n\n h = self.x_vals[pos]-self.x_vals[pos-1]\n if h == 0.0:\n raise BadInput\n\n a = old_div((self.x_vals[pos] - x), h)\n b = old_div((x - self.x_vals[pos-1]), h)\n return a*self.y_vals[pos-1] + b*self.y_vals[pos]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef main():\n # initialize some parameters\n args = sys.argv\n\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n\n #meas_file = \"aarm_measurements.txt\"\n #rmag_anis = \"arm_anisotropy.txt\"\n #rmag_res = \"aarm_results.txt\"\n #\n # get name of file from command line\n #\n data_model_num = int(pmag.get_named_arg(\"-DM\", 3))\n spec_file = pmag.get_named_arg(\"-Fsi\", \"specimens.txt\")\n if data_model_num == 3:\n samp_file = pmag.get_named_arg(\"-fsa\", \"samples.txt\")\n else:\n samp_file = pmag.get_named_arg(\"-fsa\", \"er_samples.txt\")\n dir_path = pmag.get_named_arg('-WD', '.')\n input_dir_path = pmag.get_named_arg('-ID', '')\n infile = pmag.get_named_arg('-f', reqd=True)\n coord = pmag.get_named_arg('-crd', '-1')\n\n #if \"-Fa\" in args:\n # ind = args.index(\"-Fa\")\n # rmag_anis = args[ind + 1]\n #if \"-Fr\" in args:\n # ind = args.index(\"-Fr\")\n # rmag_res = args[ind + 1]\n ipmag.aarm_magic(infile, dir_path, 
input_dir_path,\n spec_file, samp_file, data_model_num,\n coord)", "response": "Main function for aarm_magic.py: parses the command line options and calls ipmag.aarm_magic to compute the best-fit ARM anisotropy tensors from the AARM measurement data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main():\n FIG = {} # plot dictionary\n FIG['demag'] = 1 # demag is figure 1\n in_file, plot_key, LT = 'magic_measurements.txt', 'er_location_name', \"LT-AF-Z\"\n XLP = \"\"\n norm = 1\n LT = 'LT-AF-Z'\n units, dmag_key = 'T', 'treatment_ac_field'\n plot = 0\n fmt = 'svg'\n if len(sys.argv) > 1:\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-N' in sys.argv:\n norm = 0\n if '-sav' in sys.argv:\n plot = 1\n if '-f' in sys.argv:\n ind = sys.argv.index(\"-f\")\n in_file = sys.argv[ind+1]\n if '-fmt' in sys.argv:\n ind = sys.argv.index(\"-fmt\")\n fmt = sys.argv[ind+1]\n if '-obj' in sys.argv:\n ind = sys.argv.index('-obj')\n plot_by = sys.argv[ind+1]\n if plot_by == 'sit':\n plot_key = 'er_site_name'\n if plot_by == 'sam':\n plot_key = 'er_sample_name'\n if plot_by == 'spc':\n plot_key = 'er_specimen_name'\n if '-XLP' in sys.argv:\n ind = sys.argv.index(\"-XLP\")\n XLP = sys.argv[ind+1] # get lab protocol for excluding\n if '-LT' in sys.argv:\n ind = sys.argv.index(\"-LT\")\n LT = 'LT-'+sys.argv[ind+1]+'-Z' # get lab treatment for plotting\n if LT == 'LT-T-Z':\n units, dmag_key = 'K', 'treatment_temp'\n elif LT == 'LT-AF-Z':\n units, dmag_key = 'T', 'treatment_ac_field'\n elif LT == 'LT-M-Z':\n units, dmag_key = 'J', 'treatment_mw_energy'\n else:\n units = 'U'\n data, file_type = pmag.magic_read(in_file)\n sids = pmag.get_specs(data)\n pmagplotlib.plot_init(FIG['demag'], 5, 5)\n print(len(data), ' records read from ', in_file)\n #\n #\n # find desired intensity data\n #\n #\n plotlist, intlist = [], ['measurement_magnitude', 'measurement_magn_moment',\n 'measurement_magn_volume', 'measurement_magn_mass']\n IntMeths = 
[]\n FixData = []\n for rec in data:\n meths = []\n methcodes = rec['magic_method_codes'].split(':')\n for meth in methcodes:\n meths.append(meth.strip())\n for key in rec.keys():\n if key in intlist and rec[key] != \"\":\n if key not in IntMeths:\n IntMeths.append(key)\n if rec[plot_key] not in plotlist and LT in meths:\n plotlist.append(rec[plot_key])\n if 'measurement_flag' not in rec.keys():\n rec['measurement_flag'] = 'g'\n FixData.append(rec)\n plotlist.sort()\n if len(IntMeths) == 0:\n print('No intensity information found')\n sys.exit()\n data = FixData\n # plot first intensity method found - normalized to initial value anyway - doesn't matter which used\n int_key = IntMeths[0]\n for plt in plotlist:\n if plot == 0:\n print(plt, 'plotting by: ', plot_key)\n # fish out all the data for this type of plot\n PLTblock = pmag.get_dictitem(data, plot_key, plt, 'T')\n # fish out all the dmag for this experiment type\n PLTblock = pmag.get_dictitem(PLTblock, 'magic_method_codes', LT, 'has')\n # get all with this intensity key non-blank\n PLTblock = pmag.get_dictitem(PLTblock, int_key, '', 'F')\n if XLP != \"\":\n # reject data with XLP in method_code\n PLTblock = pmag.get_dictitem(\n PLTblock, 'magic_method_codes', XLP, 'not')\n if len(PLTblock) > 2:\n title = PLTblock[0][plot_key]\n spcs = []\n for rec in PLTblock:\n if rec['er_specimen_name'] not in spcs:\n spcs.append(rec['er_specimen_name'])\n for spc in spcs:\n # plot specimen by specimen\n SPCblock = pmag.get_dictitem(\n PLTblock, 'er_specimen_name', spc, 'T')\n INTblock = []\n for rec in SPCblock:\n INTblock.append([float(rec[dmag_key]), 0, 0, float(\n rec[int_key]), 1, rec['measurement_flag']])\n if len(INTblock) > 2:\n pmagplotlib.plot_mag(\n FIG['demag'], INTblock, title, 0, units, norm)\n if plot == 1:\n files = {}\n for key in FIG.keys():\n files[key] = title+'_'+LT+'.'+fmt\n pmagplotlib.save_plots(FIG, files)\n sys.exit()\n else:\n pmagplotlib.draw_figs(FIG)\n ans = input(\n \" S[a]ve to save plot, 
[q]uit, Return to continue: \")\n if ans == 'q':\n sys.exit()\n if ans == \"a\":\n files = {}\n for key in FIG.keys():\n files[key] = title+'_'+LT+'.'+fmt\n pmagplotlib.save_plots(FIG, files)\n pmagplotlib.clearFIG(FIG['demag'])", "response": "NAME\n dmag_magic2.py\n\nDESCRIPTION\n plots demagnetization data from a magic_measurements format file;\n data can be plotted by location, site, sample, or specimen"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmini_magic.py, which converts the Yale minispin format to magic_measurements format files.", "response": "def main():\n \"\"\"\n NAME\n mini_magic.py\n\n DESCRIPTION\n converts the Yale minispin format to magic_measurements format files\n\n SYNTAX\n mini_magic.py [command line options]\n\n OPTIONS\n -h: prints the help message and quits.\n -usr USER: identify user, default is \"\"\n -f FILE: specify input file, required\n -F FILE: specify output file, default is magic_measurements.txt\n -LP [colon delimited list of protocols, include all that apply]\n AF: af demag\n T: thermal including thellier but not trm acquisition\n -A: don't average replicate measurements\n -vol: volume assumed for measurement in cm^3 (default 12 cc)\n -DM NUM: MagIC data model (2 or 3, default 3)\n\n INPUT\n Must put separate experiments (all AF, thermal, etc.) 
in\n separate files\n\n Format of Yale MINI files:\n LL-SI-SP_STEP, Declination, Inclination, Intensity (mA/m), X,Y,Z\n\n\n \"\"\"\n args = sys.argv\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n# initialize some stuff\n methcode = \"LP-NO\"\n demag = \"N\"\n\n#\n# get command line arguments\n#\n data_model_num = int(float(pmag.get_named_arg(\"-DM\", 3)))\n user = pmag.get_named_arg(\"-usr\", \"\")\n dir_path = pmag.get_named_arg(\"-WD\", \".\")\n inst = pmag.get_named_arg(\"-inst\", \"\")\n magfile = pmag.get_named_arg(\"-f\", reqd=True)\n magfile = pmag.resolve_file_name(magfile, dir_path)\n if \"-A\" in args:\n noave = 1\n else:\n noave = 0\n\n if data_model_num == 2:\n meas_file = pmag.get_named_arg(\"-F\", \"magic_measurements.txt\")\n else:\n meas_file = pmag.get_named_arg(\"-F\", \"measurements.txt\")\n meas_file = pmag.resolve_file_name(meas_file, dir_path)\n volume = pmag.get_named_arg(\"-vol\", 12) # assume a volume of 12 cc if not provided\n methcode = pmag.get_named_arg(\"-LP\", \"LP-NO\")\n #ind = args.index(\"-LP\")\n #codelist = args[ind+1]\n #codes = codelist.split(':')\n #if \"AF\" in codes:\n # demag = 'AF'\n # methcode = \"LT-AF-Z\"\n #if \"T\" in codes:\n # demag = \"T\"\n\n convert.mini(magfile, dir_path, meas_file, data_model_num,\n volume, noave, inst, user, methcode)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncustomize_criteria.py. NB: This program has been deprecated - use demag_gui or thellier_gui to customize acceptance criteria - OR pandas from within a jupyter notebook DESCRIPTION Allows user to specify acceptance criteria, saves them in pmag_criteria.txt SYNTAX customize_criteria.py [-h][command line options] OPTIONS -h prints help message and quits -f IFILE, reads in existing criteria -F OFILE, writes to pmag_criteria format file DEFAULTS IFILE: pmag_criteria.txt OFILE: pmag_criteria.txt OUTPUT creates a pmag_criteria.txt formatted output file", "response": "def main():\n \"\"\"\n 
NAME\n customize_criteria.py\n NB: This program has been deprecated - use demag_gui or thellier_gui\n to customize acceptance criteria - OR pandas from within a jupyter notebook\n\n DESCRIPTION\n Allows user to specify acceptance criteria, saves them in pmag_criteria.txt\n\n SYNTAX\n customize_criteria.py [-h][command line options]\n\n OPTIONS\n -h prints help message and quits\n -f IFILE, reads in existing criteria\n -F OFILE, writes to pmag_criteria format file\n\n DEFAULTS\n IFILE: pmag_criteria.txt\n OFILE: pmag_criteria.txt\n \n OUTPUT\n creates a pmag_criteria.txt formatted output file\n \"\"\"\n infile,critout=\"\",\"pmag_criteria.txt\"\n# parse command line options\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n infile=sys.argv[ind+1]\n crit_data,file_type=pmag.magic_read(infile)\n if file_type!='pmag_criteria':\n print('bad input file')\n print(main.__doc__)\n sys.exit()\n print(\"Acceptance criteria read in from \", infile)\n if '-F' in sys.argv:\n ind=sys.argv.index('-F')\n critout=sys.argv[ind+1]\n Dcrit,Icrit,nocrit=0,0,0\n custom='1'\n crit=input(\" [0] Use no acceptance criteria?\\n [1] Use default criteria\\n [2] customize criteria \\n \")\n if crit=='0':\n print('Very very loose criteria saved in ',critout)\n crit_data=pmag.default_criteria(1)\n pmag.magic_write(critout,crit_data,'pmag_criteria')\n sys.exit()\n crit_data=pmag.default_criteria(0)\n if crit=='1':\n print('Default criteria saved in ',critout)\n pmag.magic_write(critout,crit_data,'pmag_criteria')\n sys.exit()\n CritRec=crit_data[0]\n crit_keys=list(CritRec.keys())\n crit_keys.sort()\n print(\"Enter new threshold value.\\n Return to keep default.\\n Leave blank to not use as a criterion\\n \")\n for key in crit_keys:\n if key!='pmag_criteria_code' and key!='er_citation_names' and key!='criteria_definition' and CritRec[key]!=\"\":\n print(key, CritRec[key])\n new=input('new value: ')\n if new != \"\": CritRec[key]=(new)\n 
pmag.magic_write(critout,[CritRec],'pmag_criteria')\n print(\"Criteria saved in pmag_criteria.txt\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_DIR(self, WD=None):\n\n if \"-WD\" in sys.argv and FIRST_RUN:\n ind = sys.argv.index('-WD')\n self.WD = sys.argv[ind + 1]\n elif not WD: # if no arg was passed in for WD, make a dialog to choose one\n dialog = wx.DirDialog(None, \"Choose a directory:\", defaultPath=self.currentDirectory,\n style=wx.DD_DEFAULT_STYLE | wx.DD_NEW_DIR_BUTTON | wx.DD_CHANGE_DIR)\n ok = self.show_dlg(dialog)\n if ok == wx.ID_OK:\n self.WD = dialog.GetPath()\n else:\n self.WD = os.getcwd()\n dialog.Destroy()\n self.WD = os.path.realpath(self.WD)\n # name measurement file\n if self.data_model == 3:\n meas_file = 'measurements.txt'\n else:\n meas_file = 'magic_measurements.txt'\n self.magic_file = os.path.join(self.WD, meas_file)\n # initialize GUI_log\n self.GUI_log = open(os.path.join(self.WD, \"thellier_GUI.log\"), 'w+')\n self.GUI_log.write(\"starting...\\n\")\n self.GUI_log.close()\n self.GUI_log = open(os.path.join(self.WD, \"thellier_GUI.log\"), 'a')\n os.chdir(self.WD)\n self.WD = os.getcwd()", "response": "Get the working directory: from the -WD command line argument on the first run, otherwise from a directory dialog; then set the measurement file path and initialize the GUI log."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild main frame of the main panel", "response": "def Main_Frame(self):\n \"\"\"\n Build main frame of panel: buttons, etc.\n choose the first specimen and display data\n \"\"\"\n\n #--------------------------------------------------------------------\n # initialize first specimen in list as current specimen\n #--------------------------------------------------------------------\n\n try:\n self.s = self.specimens[0]\n except:\n self.s = \"\"\n print(\"No specimens during UI build\")\n\n #--------------------------------------------------------------------\n # create main panel in the right size\n 
#--------------------------------------------------------------------\n\n dw, dh = wx.DisplaySize()\n w, h = self.GetSize()\n r1 = dw / 1250.\n r2 = dw / 750.\n\n GUI_RESOLUTION = min(r1, r2, 1.3)\n if 'gui_resolution' in list(self.preferences.keys()):\n if float(self.preferences['gui_resolution']) != 1:\n self.GUI_RESOLUTION = float(\n self.preferences['gui_resolution']) / 100\n else:\n self.GUI_RESOLUTION = min(r1, r2, 1.3)\n else:\n self.GUI_RESOLUTION = min(r1, r2, 1.3)\n\n #--------------------------------------------------------------------\n # adjust font size\n #--------------------------------------------------------------------\n\n self.font_type = \"Arial\"\n if sys.platform.startswith(\"linux\"):\n self.font_type = \"Liberation Serif\"\n\n if self.GUI_RESOLUTION >= 1.1 and self.GUI_RESOLUTION <= 1.3:\n font2 = wx.Font(13, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n elif self.GUI_RESOLUTION <= 0.9 and self.GUI_RESOLUTION < 1.0:\n font2 = wx.Font(11, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n elif self.GUI_RESOLUTION <= 0.9:\n font2 = wx.Font(10, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n else:\n font2 = wx.Font(12, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n print(\" self.GUI_RESOLUTION\", self.GUI_RESOLUTION)\n\n h_space = 4\n v_space = 4\n\n # set font size and style\n #font1 = wx.Font(10, wx.SWISS, wx.NORMAL, wx.NORMAL, False, u'Comic Sans MS')\n FONT_RATIO = self.GUI_RESOLUTION + (self.GUI_RESOLUTION - 1) * 5\n font1 = wx.Font(9 + FONT_RATIO, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n # GUI headers\n\n font3 = wx.Font(11 + FONT_RATIO, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n font = wx.SystemSettings.GetFont(wx.SYS_SYSTEM_FONT)\n font.SetPointSize(10 + FONT_RATIO)\n\n #--------------------------------------------------------------------\n # Create Figures and FigCanvas objects.\n #--------------------------------------------------------------------\n\n 
self.fig1 = Figure((5. * self.GUI_RESOLUTION, 5. *\n self.GUI_RESOLUTION), dpi=self.dpi)\n self.canvas1 = FigCanvas(self.plot_panel, wx.ID_ANY, self.fig1)\n self.fig1.text(0.01, 0.98, \"Arai plot\", {\n 'family': self.font_type, 'fontsize': 10, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n self.toolbar1 = NavigationToolbar(self.canvas1)\n self.toolbar1.Hide()\n self.fig1_setting = \"Zoom\"\n self.toolbar1.zoom()\n self.canvas1.Bind(wx.EVT_RIGHT_DOWN, self.on_right_click_fig)\n self.canvas1.Bind(wx.EVT_MIDDLE_DOWN, self.on_home_fig)\n\n self.fig2 = Figure((2.5 * self.GUI_RESOLUTION, 2.5 *\n self.GUI_RESOLUTION), dpi=self.dpi)\n self.canvas2 = FigCanvas(self.plot_panel, wx.ID_ANY, self.fig2)\n self.fig2.text(0.02, 0.96, \"Zijderveld\", {\n 'family': self.font_type, 'fontsize': 10, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n self.toolbar2 = NavigationToolbar(self.canvas2)\n self.toolbar2.Hide()\n self.fig2_setting = \"Zoom\"\n self.toolbar2.zoom()\n self.canvas2.Bind(wx.EVT_RIGHT_DOWN, self.on_right_click_fig)\n self.canvas2.Bind(wx.EVT_MIDDLE_DOWN, self.on_home_fig)\n\n self.fig3 = Figure((2.5 * self.GUI_RESOLUTION, 2.5 *\n self.GUI_RESOLUTION), dpi=self.dpi)\n self.canvas3 = FigCanvas(self.plot_panel, wx.ID_ANY, self.fig3)\n #self.fig3.text(0.02,0.96,\"Equal area\",{'family':self.font_type, 'fontsize':10*self.GUI_RESOLUTION, 'style':'normal','va':'center', 'ha':'left' })\n self.toolbar3 = NavigationToolbar(self.canvas3)\n self.toolbar3.Hide()\n self.fig3_setting = \"Zoom\"\n self.toolbar3.zoom()\n self.canvas3.Bind(wx.EVT_RIGHT_DOWN, self.on_right_click_fig)\n self.canvas3.Bind(wx.EVT_MIDDLE_DOWN, self.on_home_fig)\n\n self.fig4 = Figure((2.5 * self.GUI_RESOLUTION, 2.5 *\n self.GUI_RESOLUTION), dpi=self.dpi)\n self.canvas4 = FigCanvas(self.plot_panel, wx.ID_ANY, self.fig4)\n if self.acceptance_criteria['average_by_sample_or_site']['value'] == 'site':\n TEXT = \"Site data\"\n else:\n TEXT = \"Sample data\"\n self.fig4.text(0.02, 0.96, TEXT, {\n 
'family': self.font_type, 'fontsize': 10, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n self.toolbar4 = NavigationToolbar(self.canvas4)\n self.toolbar4.Hide()\n self.fig4_setting = \"Zoom\"\n self.toolbar4.zoom()\n self.canvas4.Bind(wx.EVT_RIGHT_DOWN, self.on_right_click_fig)\n self.canvas4.Bind(wx.EVT_MIDDLE_DOWN, self.on_home_fig)\n\n self.fig5 = Figure((2.5 * self.GUI_RESOLUTION, 2.5 *\n self.GUI_RESOLUTION), dpi=self.dpi)\n self.canvas5 = FigCanvas(self.plot_panel, wx.ID_ANY, self.fig5)\n #self.fig5.text(0.02,0.96,\"M/M0\",{'family':self.font_type, 'fontsize':10, 'style':'normal','va':'center', 'ha':'left' })\n self.toolbar5 = NavigationToolbar(self.canvas5)\n self.toolbar5.Hide()\n self.fig5_setting = \"Zoom\"\n self.toolbar5.zoom()\n self.canvas5.Bind(wx.EVT_RIGHT_DOWN, self.on_right_click_fig)\n self.canvas5.Bind(wx.EVT_MIDDLE_DOWN, self.on_home_fig)\n\n # make axes of the figures\n self.araiplot = self.fig1.add_axes([0.1, 0.1, 0.8, 0.8])\n self.zijplot = self.fig2.add_subplot(111)\n self.eqplot = self.fig3.add_subplot(111)\n self.sampleplot = self.fig4.add_axes(\n [0.2, 0.3, 0.7, 0.6], frameon=True, facecolor='None')\n self.mplot = self.fig5.add_axes(\n [0.2, 0.15, 0.7, 0.7], frameon=True, facecolor='None')\n\n #--------------------------------------------------------------------\n # text box displaying measurement data\n #--------------------------------------------------------------------\n\n self.logger = wx.ListCtrl(self.side_panel, id=wx.ID_ANY, size=(\n 100 * self.GUI_RESOLUTION, 100 * self.GUI_RESOLUTION), style=wx.LC_REPORT)\n self.logger.SetFont(font1)\n self.logger.InsertColumn(0, 'i', width=45 * self.GUI_RESOLUTION)\n self.logger.InsertColumn(1, 'Step', width=45 * self.GUI_RESOLUTION)\n self.logger.InsertColumn(2, 'Tr', width=65 * self.GUI_RESOLUTION)\n self.logger.InsertColumn(3, 'Dec', width=65 * self.GUI_RESOLUTION)\n self.logger.InsertColumn(4, 'Inc', width=65 * self.GUI_RESOLUTION)\n self.logger.InsertColumn(5, 'M', width=75 * 
self.GUI_RESOLUTION)\n self.Bind(wx.EVT_LIST_ITEM_ACTIVATED,\n self.on_click_listctrl, self.logger)\n self.Bind(wx.EVT_LIST_ITEM_RIGHT_CLICK,\n self.on_right_click_listctrl, self.logger)\n\n #--------------------------------------------------------------------\n # select a specimen box\n #--------------------------------------------------------------------\n\n # Combo-box with a list of specimen\n self.specimens_box = wx.ComboBox(self.top_panel, wx.ID_ANY, self.s, (250 * self.GUI_RESOLUTION, 25),\n wx.DefaultSize, self.specimens, wx.CB_DROPDOWN | wx.TE_PROCESS_ENTER, name=\"specimen\")\n self.specimens_box.SetFont(font2)\n self.Bind(wx.EVT_COMBOBOX, self.onSelect_specimen, self.specimens_box)\n self.Bind(wx.EVT_TEXT_ENTER, self.onSelect_specimen,\n self.specimens_box)\n\n # buttons to move forward and backwards from specimens\n nextbutton = wx.Button(self.top_panel, id=wx.ID_ANY, label='next', size=(\n 75 * self.GUI_RESOLUTION, 25)) # ,style=wx.BU_EXACTFIT)#, size=(175, 28))\n self.Bind(wx.EVT_BUTTON, self.on_next_button, nextbutton)\n nextbutton.SetFont(font2)\n\n prevbutton = wx.Button(self.top_panel, id=wx.ID_ANY, label='previous', size=(\n 75 * self.GUI_RESOLUTION, 25)) # ,style=wx.BU_EXACTFIT)#, size=(175, 28))\n prevbutton.SetFont(font2)\n self.Bind(wx.EVT_BUTTON, self.on_prev_button, prevbutton)\n\n #--------------------------------------------------------------------\n # select temperature bounds\n #--------------------------------------------------------------------\n\n try:\n if self.Data[self.s]['T_or_MW'] == \"T\":\n self.temperatures = np.array(\n self.Data[self.s]['t_Arai']) - 273.\n self.T_list = [\"%.0f\" % T for T in self.temperatures]\n elif self.Data[self.s]['T_or_MW'] == \"MW\":\n self.temperatures = np.array(self.Data[self.s]['t_Arai'])\n self.T_list = [\"%.0f\" % T for T in self.temperatures]\n except (ValueError, TypeError, KeyError) as e:\n self.T_list = []\n\n self.tmin_box = wx.ComboBox(self.top_panel, wx.ID_ANY, size=(\n 100 * 
self.GUI_RESOLUTION, 25), choices=self.T_list, style=wx.CB_DROPDOWN | wx.TE_READONLY)\n self.Bind(wx.EVT_COMBOBOX, self.get_new_T_PI_parameters, self.tmin_box)\n\n self.tmax_box = wx.ComboBox(self.top_panel, -1, size=(100 * self.GUI_RESOLUTION, 25),\n choices=self.T_list, style=wx.CB_DROPDOWN | wx.TE_READONLY)\n self.Bind(wx.EVT_COMBOBOX, self.get_new_T_PI_parameters, self.tmax_box)\n\n #--------------------------------------------------------------------\n # save/delete buttons\n #--------------------------------------------------------------------\n\n # save/delete interpretation buttons\n self.save_interpretation_button = wx.Button(self.top_panel, id=-1, label='save', size=(\n 75 * self.GUI_RESOLUTION, 25)) # ,style=wx.BU_EXACTFIT)#, size=(175, 28))\n self.save_interpretation_button.SetFont(font2)\n self.delete_interpretation_button = wx.Button(self.top_panel, id=-1, label='delete', size=(\n 75 * self.GUI_RESOLUTION, 25)) # ,style=wx.BU_EXACTFIT)#, size=(175, 28))\n self.delete_interpretation_button.SetFont(font2)\n self.Bind(wx.EVT_BUTTON, self.on_save_interpretation_button,\n self.save_interpretation_button)\n self.Bind(wx.EVT_BUTTON, self.on_delete_interpretation_button,\n self.delete_interpretation_button)\n\n self.auto_save = wx.CheckBox(self.top_panel, wx.ID_ANY, 'auto-save')\n self.auto_save_info = wx.Button(self.top_panel, wx.ID_ANY, \"?\")\n self.Bind(wx.EVT_BUTTON, self.on_info_click, self.auto_save_info)\n #self.auto_save_text = wx.StaticText(self.top_panel, wx.ID_ANY, label=\"(saves with 'next')\")\n\n\n #--------------------------------------------------------------------\n # specimen interpretation and statistics window (Blab; Banc, Dec, Inc, correction factors etc.)\n #--------------------------------------------------------------------\n\n self.Blab_window = wx.TextCtrl(\n self.top_panel, style=wx.TE_CENTER | wx.TE_READONLY, size=(50 * self.GUI_RESOLUTION, 25))\n self.Banc_window = wx.TextCtrl(\n self.top_panel, style=wx.TE_CENTER | 
wx.TE_READONLY, size=(50 * self.GUI_RESOLUTION, 25))\n self.Aniso_factor_window = wx.TextCtrl(\n self.top_panel, style=wx.TE_CENTER | wx.TE_READONLY, size=(50 * self.GUI_RESOLUTION, 25))\n self.NLT_factor_window = wx.TextCtrl(\n self.top_panel, style=wx.TE_CENTER | wx.TE_READONLY, size=(50 * self.GUI_RESOLUTION, 25))\n self.CR_factor_window = wx.TextCtrl(\n self.top_panel, style=wx.TE_CENTER | wx.TE_READONLY, size=(50 * self.GUI_RESOLUTION, 25))\n self.declination_window = wx.TextCtrl(\n self.top_panel, style=wx.TE_CENTER | wx.TE_READONLY, size=(50 * self.GUI_RESOLUTION, 25))\n self.inclination_window = wx.TextCtrl(\n self.top_panel, style=wx.TE_CENTER | wx.TE_READONLY, size=(50 * self.GUI_RESOLUTION, 25))\n\n for stat in ['Blab', 'Banc', 'Aniso_factor', 'NLT_factor', 'CR_factor', 'declination', 'inclination']:\n exec(\"self.%s_window.SetBackgroundColour(wx.WHITE)\" % stat)\n\n self.Blab_label = wx.StaticText(\n self.top_panel, label=\"\\nB_lab\", style=wx.ALIGN_CENTRE)\n self.Blab_label.SetFont(font2)\n self.Banc_label = wx.StaticText(\n self.top_panel, label=\"\\nB_anc\", style=wx.ALIGN_CENTRE)\n self.Banc_label.SetFont(font2)\n self.aniso_corr_label = wx.StaticText(\n self.top_panel, label=\"Aniso\\ncorr\", style=wx.ALIGN_CENTRE)\n self.aniso_corr_label.SetFont(font2)\n self.nlt_corr_label = wx.StaticText(\n self.top_panel, label=\"NLT\\ncorr\", style=wx.ALIGN_CENTRE)\n self.nlt_corr_label.SetFont(font2)\n self.cr_corr_label = wx.StaticText(\n self.top_panel, label=\"CR\\ncorr\", style=wx.ALIGN_CENTRE)\n self.cr_corr_label.SetFont(font2)\n self.dec_label = wx.StaticText(\n self.top_panel, label=\"\\nDec\", style=wx.ALIGN_CENTRE)\n self.dec_label.SetFont(font2)\n self.inc_label = wx.StaticText(\n self.top_panel, label=\"\\nInc\", style=wx.ALIGN_CENTRE)\n self.inc_label.SetFont(font2)\n\n # handle Specimen Results Sizer\n sizer_specimen_results = wx.StaticBoxSizer(wx.StaticBox(\n self.top_panel, wx.ID_ANY, \"specimen results\"), wx.HORIZONTAL)\n 
specimen_stat_window = wx.GridSizer(2, 7, h_space, v_space)\n specimen_stat_window.AddMany([(self.Blab_label, 1, wx.ALIGN_BOTTOM),\n ((self.Banc_label), 1, wx.ALIGN_BOTTOM),\n ((self.aniso_corr_label), 1, wx.ALIGN_BOTTOM),\n ((self.nlt_corr_label), 1, wx.ALIGN_BOTTOM),\n ((self.cr_corr_label), 1, wx.ALIGN_BOTTOM),\n ((self.dec_label), 1,\n wx.TE_CENTER | wx.ALIGN_BOTTOM),\n ((self.inc_label), 1, wx.ALIGN_BOTTOM),\n (self.Blab_window, 1, wx.EXPAND),\n (self.Banc_window, 1, wx.EXPAND),\n (self.Aniso_factor_window, 1, wx.EXPAND),\n (self.NLT_factor_window, 1, wx.EXPAND),\n (self.CR_factor_window, 1, wx.EXPAND),\n (self.declination_window, 1, wx.EXPAND),\n (self.inclination_window, 1, wx.EXPAND)])\n sizer_specimen_results.Add(\n specimen_stat_window, 1, wx.EXPAND | wx.ALIGN_LEFT, 0)\n\n #--------------------------------------------------------------------\n # Sample interpretation window\n #--------------------------------------------------------------------\n\n for key in [\"sample_int_n\", \"sample_int_uT\", \"sample_int_sigma\", \"sample_int_sigma_perc\"]:\n command = \"self.%s_window=wx.TextCtrl(self.top_panel,style=wx.TE_CENTER|wx.TE_READONLY,size=(50*self.GUI_RESOLUTION,25))\" % key\n exec(command)\n exec(\"self.%s_window.SetBackgroundColour(wx.WHITE)\" % key)\n\n sample_mean_label = wx.StaticText(\n self.top_panel, label=\"\\nmean\", style=wx.TE_CENTER)\n sample_mean_label.SetFont(font2)\n sample_N_label = wx.StaticText(\n self.top_panel, label=\"\\nN \", style=wx.TE_CENTER)\n sample_N_label.SetFont(font2)\n sample_std_label = wx.StaticText(\n self.top_panel, label=\"\\nstd uT\", style=wx.TE_CENTER)\n sample_std_label.SetFont(font2)\n sample_std_per_label = wx.StaticText(\n self.top_panel, label=\"\\nstd %\", style=wx.TE_CENTER)\n sample_std_per_label.SetFont(font2)\n\n # handle samples/sites results sizers\n sizer_sample_results = wx.StaticBoxSizer(wx.StaticBox(\n self.top_panel, wx.ID_ANY, \"sample/site results\"), wx.HORIZONTAL)\n sample_stat_window = 
wx.GridSizer(2, 4, h_space, v_space)\n sample_stat_window.AddMany([(sample_mean_label, 1, wx.ALIGN_BOTTOM),\n (sample_N_label, 1, wx.ALIGN_BOTTOM),\n (sample_std_label, 1, wx.ALIGN_BOTTOM),\n (sample_std_per_label, 1, wx.ALIGN_BOTTOM),\n (self.sample_int_uT_window, 1, wx.EXPAND),\n (self.sample_int_n_window, 1, wx.EXPAND),\n (self.sample_int_sigma_window, 1, wx.EXPAND),\n (self.sample_int_sigma_perc_window, 1, wx.EXPAND)])\n sizer_sample_results.Add(\n sample_stat_window, 1, wx.EXPAND | wx.ALIGN_LEFT, 0)\n\n #--------------------------------------------------------------------\n\n label_0 = wx.StaticText(\n self.bottom_panel, label=\" \", style=wx.ALIGN_CENTER, size=(180, 25))\n label_1 = wx.StaticText(\n self.bottom_panel, label=\"Acceptance criteria:\", style=wx.ALIGN_CENTER, size=(180, 25))\n label_2 = wx.StaticText(\n self.bottom_panel, label=\"Specimen statistics:\", style=wx.ALIGN_CENTER, size=(180, 25))\n\n for statistic in self.preferences['show_statistics_on_gui']:\n self.stat_windows[statistic] = wx.TextCtrl(\n self.bottom_panel, style=wx.TE_CENTER | wx.TE_READONLY, size=(50 * self.GUI_RESOLUTION, 25))\n self.stat_windows[statistic].SetBackgroundColour(wx.WHITE)\n self.stat_windows[statistic].SetFont(font2)\n self.threshold_windows[statistic] = wx.TextCtrl(\n self.bottom_panel, style=wx.TE_CENTER | wx.TE_READONLY, size=(50 * self.GUI_RESOLUTION, 25))\n self.threshold_windows[statistic].SetFont(font2)\n self.threshold_windows[statistic].SetBackgroundColour(wx.WHITE)\n label = statistic.replace(\"specimen_\", \"\").replace(\"int_\", \"\")\n self.stat_labels[statistic] = wx.StaticText(\n self.bottom_panel, label=label, style=wx.ALIGN_CENTRE_HORIZONTAL | wx.ALIGN_BOTTOM)\n self.stat_labels[statistic].SetFont(font2)\n\n #-------------------------------------------------------------------\n # Design the panels\n #-------------------------------------------------------------------\n\n # Plots Panel--------------------------------------------------------\n 
sizer_grid_plots = wx.GridSizer(2, 2, 0, 0)\n sizer_grid_plots.AddMany([(self.canvas2, 1, wx.EXPAND),\n (self.canvas4, 1, wx.EXPAND),\n (self.canvas3, 1, wx.EXPAND),\n (self.canvas5, 1, wx.EXPAND)])\n\n sizer_plots_outer = wx.BoxSizer(wx.HORIZONTAL)\n sizer_plots_outer.Add(self.canvas1, 1, wx.EXPAND)\n sizer_plots_outer.Add(sizer_grid_plots, 1, wx.EXPAND)\n\n # Top Bar Sizer-------------------------------------------------------\n #-------------Specimens Sizer----------------------------------------\n sizer_prev_next_btns = wx.BoxSizer(wx.HORIZONTAL)\n sizer_prev_next_btns.Add(prevbutton, 1, wx.EXPAND | wx.RIGHT, h_space)\n sizer_prev_next_btns.Add(nextbutton, 1, wx.EXPAND | wx.LEFT, h_space)\n\n sizer_select_specimen = wx.StaticBoxSizer(wx.StaticBox(\n self.top_panel, wx.ID_ANY, \"specimen\"), wx.VERTICAL)\n sizer_select_specimen.Add(\n self.specimens_box, 1, wx.EXPAND | wx.BOTTOM, v_space)\n sizer_select_specimen.Add(\n sizer_prev_next_btns, 1, wx.EXPAND | wx.TOP, v_space)\n\n #-------------Bounds Sizer----------------------------------------\n sizer_grid_bounds_btns = wx.GridSizer(2, 3, 2 * h_space, 2 * v_space)\n sizer_grid_bounds_btns.AddMany([(self.tmin_box, 1, wx.EXPAND),\n (self.save_interpretation_button,\n 1, wx.EXPAND), (self.auto_save, 1, wx.EXPAND),\n (self.tmax_box, 1, wx.EXPAND),\n (self.delete_interpretation_button, 1, wx.EXPAND),\n (self.auto_save_info, 1, wx.EXPAND)])\n\n\n if self.s in list(self.Data.keys()) and self.Data[self.s]['T_or_MW'] == \"T\":\n sizer_select_temp = wx.StaticBoxSizer(wx.StaticBox(\n self.top_panel, wx.ID_ANY, \"temperatures\"), wx.HORIZONTAL)\n else:\n sizer_select_temp = wx.StaticBoxSizer(wx.StaticBox(\n self.top_panel, wx.ID_ANY, \"MW power\"), wx.HORIZONTAL)\n sizer_select_temp.Add(sizer_grid_bounds_btns, 1, wx.EXPAND)\n\n #-------------Top Bar Outer Sizer------------------------------------\n sizer_top_bar = wx.BoxSizer(wx.HORIZONTAL)\n sizer_top_bar.AddMany([(sizer_select_specimen, 1, wx.EXPAND | wx.ALIGN_LEFT | 
wx.RIGHT, 2 * h_space),\n (sizer_select_temp, 1, wx.EXPAND |\n wx.ALIGN_LEFT | wx.RIGHT, 2 * h_space),\n (sizer_specimen_results, 2, wx.EXPAND |\n wx.ALIGN_LEFT | wx.RIGHT, 2 * h_space),\n (sizer_sample_results, 1, wx.EXPAND | wx.ALIGN_LEFT | wx.RIGHT, 0)])\n\n # Bottom Bar Sizer----------------------------------------------------\n #----------------Criteria Labels Sizer-------------------------------\n sizer_criteria_labels = wx.BoxSizer(wx.HORIZONTAL)\n sizer_criteria_labels.Add(label_0, 3, wx.EXPAND | wx.LEFT, 2 * h_space)\n sizer_criteria_boxes = wx.BoxSizer(wx.HORIZONTAL)\n sizer_criteria_boxes.Add(label_1, 3, wx.EXPAND | wx.LEFT, 2 * h_space)\n sizer_stats_boxes = wx.BoxSizer(wx.HORIZONTAL)\n sizer_stats_boxes.Add(label_2, 3, wx.EXPAND | wx.LEFT, 2 * h_space)\n for statistic in self.preferences['show_statistics_on_gui']:\n sizer_criteria_labels.Add(\n self.stat_labels[statistic], 1, wx.ALIGN_BOTTOM, 0)\n\n #----------------Acceptance Criteria Boxes---------------------------\n sizer_criteria_boxes.Add(\n self.threshold_windows[statistic], 1, wx.EXPAND | wx.LEFT, h_space)\n\n #----------------Specimen Statistics Boxes---------------------------\n sizer_stats_boxes.Add(\n self.stat_windows[statistic], 1, wx.EXPAND | wx.LEFT, h_space)\n\n #----------------Bottom Outer Sizer----------------------------------\n sizer_bottom_bar = wx.BoxSizer(wx.VERTICAL)\n sizer_bottom_bar.AddMany([(sizer_criteria_labels, 1, wx.EXPAND | wx.ALIGN_BOTTOM | wx.BOTTOM, v_space),\n (sizer_criteria_boxes, 1, wx.EXPAND |\n wx.BOTTOM | wx.ALIGN_TOP, v_space),\n (sizer_stats_boxes, 1, wx.EXPAND | wx.ALIGN_TOP)])\n\n # Logger Sizer--------------------------------------------------------\n sizer_logger = wx.BoxSizer(wx.HORIZONTAL)\n sizer_logger.Add(self.logger, 1, wx.EXPAND)\n\n # Set Panel Sizers----------------------------------------------------\n self.plot_panel.SetSizer(sizer_plots_outer)\n self.side_panel.SetSizerAndFit(sizer_logger)\n self.top_panel.SetSizerAndFit(sizer_top_bar)\n 
self.bottom_panel.SetSizerAndFit(sizer_bottom_bar)\n\n # Outer Sizer for Frame-----------------------------------------------\n sizer_logger_plots = wx.BoxSizer(wx.HORIZONTAL)\n sizer_logger_plots.Add(self.side_panel, 1, wx.EXPAND | wx.ALIGN_LEFT)\n sizer_logger_plots.Add(self.plot_panel, 3, wx.EXPAND | wx.ALIGN_LEFT)\n\n sizer_outer = wx.BoxSizer(wx.VERTICAL)\n sizer_outer.AddMany([(self.top_panel, 1, wx.EXPAND | wx.ALIGN_TOP | wx.BOTTOM, v_space / 2),\n (sizer_logger_plots, 4, wx.EXPAND |\n wx.ALIGN_TOP | wx.BOTTOM, v_space / 2),\n (self.bottom_panel, 1, wx.EXPAND | wx.ALIGN_TOP)])\n\n self.SetSizer(sizer_outer)\n sizer_outer.Fit(self)\n self.Layout()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsaves the current interpretation temporarily", "response": "def on_save_interpretation_button(self, event):\n \"\"\"\n save the current interpretation temporarily (not to a file)\n \"\"\"\n if \"specimen_int_uT\" not in self.Data[self.s]['pars']:\n return\n if 'deleted' in self.Data[self.s]['pars']:\n self.Data[self.s]['pars'].pop('deleted')\n self.Data[self.s]['pars']['saved'] = True\n\n # collect all interpretation by sample\n sample = self.Data_hierarchy['specimens'][self.s]\n if sample not in list(self.Data_samples.keys()):\n self.Data_samples[sample] = {}\n if self.s not in list(self.Data_samples[sample].keys()):\n self.Data_samples[sample][self.s] = {}\n self.Data_samples[sample][self.s]['B'] = self.Data[self.s]['pars'][\"specimen_int_uT\"]\n\n # collect all interpretation by site\n # site=thellier_gui_lib.get_site_from_hierarchy(sample,self.Data_hierarchy)\n site = thellier_gui_lib.get_site_from_hierarchy(\n sample, self.Data_hierarchy)\n if site not in list(self.Data_sites.keys()):\n self.Data_sites[site] = {}\n if self.s not in list(self.Data_sites[site].keys()):\n self.Data_sites[site][self.s] = {}\n self.Data_sites[site][self.s]['B'] = self.Data[self.s]['pars'][\"specimen_int_uT\"]\n\n self.draw_sample_mean()\n 
self.write_sample_box()\n self.close_warning = True"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes the current interpretation temporarily", "response": "def on_delete_interpretation_button(self, event):\n \"\"\"\n delete the current interpretation temporarily (not to a file)\n \"\"\"\n\n del self.Data[self.s]['pars']\n self.Data[self.s]['pars'] = {}\n self.Data[self.s]['pars']['deleted'] = True\n self.Data[self.s]['pars']['lab_dc_field'] = self.Data[self.s]['lab_dc_field']\n self.Data[self.s]['pars']['er_specimen_name'] = self.Data[self.s]['er_specimen_name']\n self.Data[self.s]['pars']['er_sample_name'] = self.Data[self.s]['er_sample_name']\n sample = self.Data_hierarchy['specimens'][self.s]\n if sample in list(self.Data_samples.keys()):\n if self.s in list(self.Data_samples[sample].keys()):\n if 'B' in list(self.Data_samples[sample][self.s].keys()):\n del self.Data_samples[sample][self.s]['B']\n\n site = thellier_gui_lib.get_site_from_hierarchy(\n sample, self.Data_hierarchy)\n if site in list(self.Data_sites.keys()):\n if self.s in list(self.Data_sites[site].keys()):\n del self.Data_sites[site][self.s]['B']\n # if 'B' in self.Data_sites[site][self.s].keys():\n # del self.Data_sites[site][self.s]['B']\n\n self.tmin_box.SetValue(\"\")\n self.tmax_box.SetValue(\"\")\n self.clear_boxes()\n self.draw_figure(self.s)\n self.draw_sample_mean()\n self.write_sample_box()\n self.close_warning = True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write_acceptance_criteria_to_boxes(self):\n\n self.ignore_parameters, value = {}, ''\n for crit_short_name in self.preferences['show_statistics_on_gui']:\n crit = \"specimen_\" + crit_short_name\n if self.acceptance_criteria[crit]['value'] == -999:\n self.threshold_windows[crit_short_name].SetValue(\"\")\n 
self.threshold_windows[crit_short_name].SetBackgroundColour(\n wx.Colour(128, 128, 128))\n self.ignore_parameters[crit] = True\n continue\n elif crit == \"specimen_scat\":\n if self.acceptance_criteria[crit]['value'] in ['g', 1, '1', True, \"True\", \"t\"]:\n value = \"t\"\n #self.scat_threshold_window.SetBackgroundColour(wx.SetBackgroundColour(128, 128, 128))\n else:\n value = \"f\"\n self.threshold_windows['scat'].SetBackgroundColour(\n (128, 128, 128))\n #self.scat_threshold_window.SetBackgroundColour((128, 128, 128))\n\n elif type(self.acceptance_criteria[crit]['value']) == int:\n value = \"%i\" % self.acceptance_criteria[crit]['value']\n elif type(self.acceptance_criteria[crit]['value']) == float:\n if self.acceptance_criteria[crit]['decimal_points'] == -999:\n value = \"%.3e\" % self.acceptance_criteria[crit]['value']\n else:\n value = \"{:.{}f}\".format(self.acceptance_criteria[crit]['value'],\n self.acceptance_criteria[crit]['decimal_points'])\n else:\n continue\n\n self.threshold_windows[crit_short_name].SetValue(value)\n self.threshold_windows[crit_short_name].SetBackgroundColour(\n wx.WHITE)", "response": "Writes acceptance criteria to the acceptance criteria boxes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding text to measurement data window.", "response": "def Add_text(self, s):\n \"\"\"\n Add text to measurement data window.\n \"\"\"\n self.logger.DeleteAllItems()\n FONT_RATIO = self.GUI_RESOLUTION + (self.GUI_RESOLUTION - 1) * 5\n if self.GUI_RESOLUTION > 1.1:\n font1 = wx.Font(11, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n elif self.GUI_RESOLUTION <= 0.9:\n font1 = wx.Font(8, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n else:\n font1 = wx.Font(10, wx.SWISS, wx.NORMAL,\n wx.NORMAL, False, self.font_type)\n\n # get temperature indices to display current interp steps in logger\n t1 = self.tmin_box.GetValue()\n t2 = self.tmax_box.GetValue()\n\n # 
microwave or thermal\n if \"LP-PI-M\" in self.Data[s]['datablock'][0]['magic_method_codes']:\n MICROWAVE = True\n THERMAL = False\n\n steps_tr = []\n for rec in self.Data[s]['datablock']:\n\n if \"measurement_description\" in rec:\n MW_step = rec[\"measurement_description\"].strip(\n '\\n').split(\":\")\n for STEP in MW_step:\n if \"Number\" in STEP:\n temp = float(STEP.split(\"-\")[-1])\n steps_tr.append(temp)\n\n else:\n power = rec['treatment_mw_power']\n if '-' in str(power):\n power = power.split('-')[-1]\n steps_tr.append(int(power))\n\n\n\n #steps_tr = [float(d['treatment_mw_power'].split(\"-\")[-1])\n # for d in self.Data[s]['datablock']]\n else:\n MICROWAVE = False\n THERMAL = True\n steps_tr = [float(d['treatment_temp']) -\n 273 for d in self.Data[s]['datablock']]\n\n if (t1 == \"\" or t2 == \"\") or float(t2) < float(t1):\n tmin_index, tmax_index = -1, -1\n else:\n tmin_index = steps_tr.index(int(t1))\n tmax_index = steps_tr.index(int(t2))\n\n self.logger.SetFont(font1)\n for i, rec in enumerate(self.Data[s]['datablock']):\n if \"LT-NO\" in rec['magic_method_codes']:\n step = \"N\"\n elif \"LT-AF-Z\" in rec['magic_method_codes']:\n step = \"AFD\"\n elif \"LT-T-Z\" in rec['magic_method_codes'] or 'LT-M-Z' in rec['magic_method_codes']:\n step = \"Z\"\n elif \"LT-T-I\" in rec['magic_method_codes'] or 'LT-M-I' in rec['magic_method_codes']:\n step = \"I\"\n elif \"LT-PTRM-I\" in rec['magic_method_codes'] or \"LT-PMRM-I\" in rec['magic_method_codes']:\n step = \"P\"\n elif \"LT-PTRM-MD\" in rec['magic_method_codes'] or \"LT-PMRM-MD\" in rec['magic_method_codes']:\n step = \"T\"\n elif \"LT-PTRM-AC\" in rec['magic_method_codes'] or \"LT-PMRM-AC\" in rec['magic_method_codes']:\n step = \"A\"\n else:\n print((\"unrecognized step in specimen %s Method codes: %s\" %\n (s, str(rec['magic_method_codes']))))\n if THERMAL:\n self.logger.InsertItem(i, \"%i\" % i)\n self.logger.SetItem(i, 1, step)\n self.logger.SetItem(i, 2, \"%1.0f\" %\n (float(rec['treatment_temp']) - 
273.))\n self.logger.SetItem(i, 3, \"%.1f\" %\n float(rec['measurement_dec']))\n self.logger.SetItem(i, 4, \"%.1f\" %\n float(rec['measurement_inc']))\n self.logger.SetItem(i, 5, \"%.2e\" %\n float(rec['measurement_magn_moment']))\n elif MICROWAVE: # microwave\n if \"measurement_description\" in list(rec.keys()):\n MW_step = rec[\"measurement_description\"].strip(\n '\\n').split(\":\")\n for STEP in MW_step:\n if \"Number\" not in STEP:\n continue\n temp = float(STEP.split(\"-\")[-1])\n\n self.logger.InsertItem(i, \"%i\" % i)\n self.logger.SetItem(i, 1, step)\n self.logger.SetItem(i, 2, \"%1.0f\" % temp)\n self.logger.SetItem(i, 3, \"%.1f\" %\n float(rec['measurement_dec']))\n self.logger.SetItem(i, 4, \"%.1f\" %\n float(rec['measurement_inc']))\n self.logger.SetItem(i, 5, \"%.2e\" % float(\n rec['measurement_magn_moment']))\n self.logger.SetItemBackgroundColour(i, \"WHITE\")\n if i >= tmin_index and i <= tmax_index:\n self.logger.SetItemBackgroundColour(i, \"LIGHT BLUE\")\n if 'measurement_flag' not in list(rec.keys()):\n rec['measurement_flag'] = 'g'"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef select_bounds_in_logger(self, index):\n tmin_index, tmax_index = \"\", \"\"\n if str(self.tmin_box.GetValue()) != \"\":\n tmin_index = self.tmin_box.GetSelection()\n if str(self.tmax_box.GetValue()) != \"\":\n tmax_index = self.tmax_box.GetSelection()\n # if there is no prior interpretation, assume first click is\n # tmin and set highest possible temp as tmax\n if not tmin_index and not tmax_index:\n tmin_index = index\n self.tmin_box.SetSelection(index)\n # set to the highest step\n max_step_data = self.Data[self.s]['datablock'][-1]\n step_key = 'treatment_temp'\n if MICROWAVE:\n step_key = 'treatment_mw_power'\n max_step = max_step_data[step_key]\n tmax_index = self.tmax_box.GetCount() - 1\n self.tmax_box.SetSelection(tmax_index)\n elif self.list_bound_loc != 0:\n if self.list_bound_loc == 1:\n if index < tmin_index:\n 
self.tmin_box.SetSelection(index)\n self.tmax_box.SetSelection(tmin_index)\n elif index == tmin_index:\n pass\n else:\n self.tmax_box.SetSelection(index)\n else:\n if index > tmax_index:\n self.tmin_box.SetSelection(tmax_index)\n self.tmax_box.SetSelection(index)\n elif index == tmax_index:\n pass\n else:\n self.tmin_box.SetSelection(index)\n self.list_bound_loc = 0\n else:\n if index < tmax_index:\n self.tmin_box.SetSelection(index)\n self.list_bound_loc = 1\n else:\n self.tmax_box.SetSelection(index)\n self.list_bound_loc = 2\n\n self.logger.Select(index, on=0)\n self.get_new_T_PI_parameters(-1)", "response": "Sets the clicked logger index as either the tmin or tmax bound, depending on which bound was selected previously, then recalculates the interpretation parameters."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating figures and statistics windows with a new selection of specimen", "response": "def update_selection(self):\n \"\"\"\n update figures and statistics windows with a new selection of specimen\n \"\"\"\n\n # clear all boxes\n self.clear_boxes()\n self.draw_figure(self.s)\n\n # update temperature list\n if self.Data[self.s]['T_or_MW'] == \"T\":\n self.temperatures = np.array(self.Data[self.s]['t_Arai']) - 273.\n else:\n self.temperatures = np.array(self.Data[self.s]['t_Arai'])\n\n self.T_list = [\"%.0f\" % T for T in self.temperatures]\n self.tmin_box.SetItems(self.T_list)\n self.tmax_box.SetItems(self.T_list)\n self.tmin_box.SetValue(\"\")\n self.tmax_box.SetValue(\"\")\n self.Blab_window.SetValue(\n \"%.0f\" % (float(self.Data[self.s]['pars']['lab_dc_field']) * 1e6))\n if \"saved\" in self.Data[self.s]['pars']:\n self.pars = self.Data[self.s]['pars']\n self.update_GUI_with_new_interpretation()\n self.Add_text(self.s)\n self.write_sample_box()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef onSelect_specimen(self, event):\n new_s = self.specimens_box.GetValue()\n if self.select_specimen(new_s):\n 
self.update_selection()\n else:\n self.specimens_box.SetValue(self.s)\n self.user_warning(\n \"no specimen %s reverting to old specimen %s\" % (new_s, self.s))", "response": "update figures and text when a new specimen is selected\n "} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nshow popup info window when user clicks \"?\"", "response": "def on_info_click(self, event):\n \"\"\"\n Show popup info window when user clicks \"?\"\n \"\"\"\n def on_close(event, wind):\n wind.Close()\n wind.Destroy()\n event.Skip()\n wind = wx.PopupTransientWindow(self, wx.RAISED_BORDER)\n if self.auto_save.GetValue():\n info = \"'auto-save' is currently selected. Temperature bounds will be saved when you click 'next' or 'back'.\"\n else:\n info = \"'auto-save' is not selected. Temperature bounds will only be saved when you click 'save'.\"\n text = wx.StaticText(wind, -1, info)\n box = wx.StaticBox(wind, -1, 'Info:')\n boxSizer = wx.StaticBoxSizer(box, wx.VERTICAL)\n boxSizer.Add(text, 5, wx.ALL | wx.CENTER)\n exit_btn = wx.Button(wind, wx.ID_EXIT, 'Close')\n wind.Bind(wx.EVT_BUTTON, lambda evt: on_close(evt, wind), exit_btn)\n boxSizer.Add(exit_btn, 5, wx.ALL | wx.CENTER)\n wind.SetSizer(boxSizer)\n wind.Layout()\n wind.Popup()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates figures and text when a previous button is selected", "response": "def on_prev_button(self, event):\n \"\"\"\n update figures and text when a previous button is selected\n \"\"\"\n if 'saved' not in self.Data[self.s]['pars'] or self.Data[self.s]['pars']['saved'] != True:\n # check preferences\n if self.auto_save.GetValue():\n self.on_save_interpretation_button(None)\n else:\n del self.Data[self.s]['pars']\n self.Data[self.s]['pars'] = {}\n self.Data[self.s]['pars']['lab_dc_field'] = self.Data[self.s]['lab_dc_field']\n self.Data[self.s]['pars']['er_specimen_name'] = self.Data[self.s]['er_specimen_name']\n 
self.Data[self.s]['pars']['er_sample_name'] = self.Data[self.s]['er_sample_name']\n # return to last saved interpretation if exist\n if 'er_specimen_name' in list(self.last_saved_pars.keys()) and self.last_saved_pars['er_specimen_name'] == self.s:\n for key in list(self.last_saved_pars.keys()):\n self.Data[self.s]['pars'][key] = self.last_saved_pars[key]\n self.last_saved_pars = {}\n\n index = self.specimens.index(self.s)\n if index == 0:\n index = len(self.specimens)\n index -= 1\n self.s = self.specimens[index]\n self.specimens_box.SetStringSelection(self.s)\n self.update_selection()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef clear_boxes(self):\n self.tmin_box.Clear()\n self.tmin_box.SetItems(self.T_list)\n self.tmin_box.SetSelection(-1)\n\n self.tmax_box.Clear()\n self.tmax_box.SetItems(self.T_list)\n self.tmax_box.SetSelection(-1)\n\n self.Blab_window.SetValue(\"\")\n self.Banc_window.SetValue(\"\")\n self.Banc_window.SetBackgroundColour(wx.Colour('grey'))\n self.Aniso_factor_window.SetValue(\"\")\n self.Aniso_factor_window.SetBackgroundColour(wx.Colour('grey'))\n self.NLT_factor_window.SetValue(\"\")\n self.NLT_factor_window.SetBackgroundColour(wx.Colour('grey'))\n self.CR_factor_window.SetValue(\"\")\n self.CR_factor_window.SetBackgroundColour(wx.Colour('grey'))\n self.declination_window.SetValue(\"\")\n self.declination_window.SetBackgroundColour(wx.Colour('grey'))\n self.inclination_window.SetValue(\"\")\n self.inclination_window.SetBackgroundColour(wx.Colour('grey'))\n\n window_list = ['sample_int_n', 'sample_int_uT',\n 'sample_int_sigma', 'sample_int_sigma_perc']\n for key in window_list:\n command = \"self.%s_window.SetValue(\\\"\\\")\" % key\n exec(command)\n command = \"self.%s_window.SetBackgroundColour(wx.Colour('grey'))\" % key\n exec(command)\n\n # 
window_list=['int_n','int_ptrm_n','frac','scat','gmax','f','fvds','b_beta','g','q','int_mad','int_dang','drats','md','ptrms_dec','ptrms_inc','ptrms_mad','ptrms_angle']\n # for key in window_list:\n for key in self.preferences['show_statistics_on_gui']:\n self.stat_windows[key].SetValue(\"\")\n self.stat_windows[key].SetBackgroundColour(wx.Colour('grey'))", "response": "Clear all interpretation and statistics boxes in the GUI and reset them to grey."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_preferences_file(self):\n user_data_dir = find_pmag_dir.find_user_data_dir(\"thellier_gui\")\n if not user_data_dir:\n return {}\n if os.path.exists(user_data_dir):\n pref_file = os.path.join(user_data_dir, \"thellier_gui_preferences.json\")\n if os.path.exists(pref_file):\n with open(pref_file, \"r\") as pfile:\n return json.load(pfile)\n return {}", "response": "Read json preferences file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef write_preferences_file(self):\n user_data_dir = find_pmag_dir.find_user_data_dir(\"thellier_gui\")\n if not os.path.exists(user_data_dir):\n find_pmag_dir.make_user_data_dir(user_data_dir)\n pref_file = os.path.join(user_data_dir, \"thellier_gui_preferences.json\")\n with open(pref_file, \"w+\") as pfile:\n print('-I- writing preferences to {}'.format(pref_file))\n json.dump(self.preferences, pfile)", "response": "Write json preferences file to the thellier_gui user data directory."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrun whenever Thellier GUI exits and closes the current instance of the class.", "response": "def on_menu_exit(self, event):\n \"\"\"\n Runs whenever Thellier GUI exits\n \"\"\"\n if self.close_warning:\n TEXT = \"Data is not saved to a file yet!\\nTo properly save your data:\\n1) Analysis --> Save current interpretations to a redo file.\\nor\\n1) File --> Save MagIC tables.\\n\\n Press OK to exit 
without saving.\"\n dlg1 = wx.MessageDialog(\n None, caption=\"Warning:\", message=TEXT, style=wx.OK | wx.CANCEL | wx.ICON_EXCLAMATION)\n if self.show_dlg(dlg1) == wx.ID_OK:\n dlg1.Destroy()\n self.GUI_log.close()\n self.Destroy()\n # if a custom quit event is specified, fire it\n if self.evt_quit:\n event = self.evt_quit(self.GetId())\n self.GetEventHandler().ProcessEvent(event)\n if self.standalone:\n sys.exit()\n else:\n self.GUI_log.close()\n self.Destroy()\n # if a custom quit event is specified, fire it\n if self.evt_quit:\n event = self.evt_quit(self.GetId())\n self.GetEventHandler().ProcessEvent(event)\n if self.standalone:\n sys.exit()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef on_menu_previous_interpretation(self, event):\n save_current_specimen = self.s\n dlg = wx.FileDialog(\n self, message=\"choose a file in a pmagpy redo format\",\n defaultDir=self.WD,\n defaultFile=\"thellier_GUI.redo\",\n wildcard=\"*.redo\",\n style=wx.FD_OPEN | wx.FD_CHANGE_DIR\n )\n if self.show_dlg(dlg) == wx.ID_OK:\n redo_file = dlg.GetPath()\n if self.test_mode:\n redo_file = \"thellier_GUI.redo\"\n else:\n redo_file = None\n dlg.Destroy()\n\n print(\"redo_file\", redo_file)\n if redo_file:\n self.read_redo_file(redo_file)", "response": "Open FileDialog for upload previous interpretation"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread pmag_criteria. 
txt file and open change criteria dialog", "response": "def on_menu_criteria_file(self, event):\n \"\"\"\n read pmag_criteria.txt file\n and open change criteria dialog\n \"\"\"\n if self.data_model == 3:\n dlg = wx.FileDialog(\n self, message=\"choose a file in MagIC Data Model 3.0 format\",\n defaultDir=self.WD,\n defaultFile=\"criteria.txt\",\n style=wx.FD_OPEN | wx.FD_CHANGE_DIR\n )\n else:\n dlg = wx.FileDialog(\n self, message=\"choose a file in a MagIC Data Model 2.5 pmagpy format\",\n defaultDir=self.WD,\n defaultFile=\"pmag_criteria.txt\",\n # wildcard=wildcard,\n style=wx.FD_OPEN | wx.FD_CHANGE_DIR\n )\n\n if self.show_dlg(dlg) == wx.ID_OK:\n criteria_file = dlg.GetPath()\n self.GUI_log.write(\n \"-I- Read new criteria file: %s\\n\" % criteria_file)\n dlg.Destroy()\n replace_acceptance_criteria = pmag.initialize_acceptance_criteria(\n data_model=self.data_model)\n try:\n if self.data_model == 3:\n self.read_criteria_file(criteria_file)\n replace_acceptance_criteria = self.acceptance_criteria\n # replace_acceptance_criteria=pmag.read_criteria_from_file(criteria_file,replace_acceptance_criteria,data_model=self.data_model)\n # # just to see if file exists\n print(replace_acceptance_criteria)\n else:\n replace_acceptance_criteria = pmag.read_criteria_from_file(\n criteria_file, replace_acceptance_criteria, data_model=self.data_model) # just to see if file exists\n except Exception as ex:\n print('-W-', ex)\n dlg1 = wx.MessageDialog(\n self, caption=\"Error:\", message=\"error in reading file\", style=wx.OK)\n result = self.show_dlg(dlg1)\n if result == wx.ID_OK:\n dlg1.Destroy()\n return\n self.add_thellier_gui_criteria()\n self.read_criteria_file(criteria_file)\n # check if some statistics are in the new criteria but not in old. 
If\n # yes, add to self.preferences['show_statistics_on_gui']\n crit_list_not_in_pref = []\n for crit in list(self.acceptance_criteria.keys()):\n if self.acceptance_criteria[crit]['category'] == \"IE-SPEC\":\n if self.acceptance_criteria[crit]['value'] != -999:\n short_crit = crit.split('specimen_')[-1]\n if short_crit not in self.preferences['show_statistics_on_gui']:\n print(\"-I- statistic %s is not in your preferences\" % crit)\n self.preferences['show_statistics_on_gui'].append(\n short_crit)\n crit_list_not_in_pref.append(crit)\n if len(crit_list_not_in_pref) > 0:\n stat_list = \":\".join(crit_list_not_in_pref)\n dlg1 = wx.MessageDialog(self, caption=\"WARNING:\",\n message=\"statistics '%s' is in the imported criteria file but not in your appearance preferences.\\nThis statistic will not appear on the gui panel.\\n The program will exit after saving new acceptance criteria, and it will be added automatically the next time you open it \" % stat_list,\n style=wx.OK | wx.ICON_INFORMATION)\n self.show_dlg(dlg1)\n dlg1.Destroy()\n\n dia = thellier_gui_dialogs.Criteria_Dialog(\n None, self.acceptance_criteria, self.preferences, title='Acceptance Criteria')\n dia.Center()\n result = self.show_dlg(dia)\n if result == wx.ID_OK: # Until the user clicks OK, show the message\n self.On_close_criteria_box(dia)\n if len(crit_list_not_in_pref) > 0:\n dlg1 = wx.MessageDialog(self, caption=\"WARNING:\",\n message=\"Exiting now! 
When you restart the gui all the new statistics will be added.\",\n style=wx.OK | wx.ICON_INFORMATION)\n self.show_dlg(dlg1)\n dlg1.Destroy()\n self.on_menu_exit(None)\n # self.Destroy()\n # sys.exit()\n\n if result == wx.ID_CANCEL: # Until the user clicks OK, show the message\n for crit in crit_list_not_in_pref:\n short_crit = crit.split('specimen_')[-1]\n self.preferences['show_statistics_on_gui'].remove(short_crit)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef on_menu_criteria(self, event):\n\n dia = thellier_gui_dialogs.Criteria_Dialog(\n None, self.acceptance_criteria, self.preferences, title='Set Acceptance Criteria')\n dia.Center()\n result = self.show_dlg(dia)\n\n if result == wx.ID_OK: # Until the user clicks OK, show the message\n self.On_close_criteria_box(dia)", "response": "Show the dialog to change the acceptance criteria and save it to the criteria file"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nis called when the criteria dialog window is closed.", "response": "def On_close_criteria_box(self, dia):\n \"\"\"\n after criteria dialog window is closed.\n Take the acceptance criteria values and update\n self.acceptance_criteria\n \"\"\"\n criteria_list = list(self.acceptance_criteria.keys())\n criteria_list.sort()\n\n #---------------------------------------\n # check if averaging by sample or by site\n # and initialize sample/site criteria\n #---------------------------------------\n avg_by = dia.set_average_by_sample_or_site.GetValue()\n if avg_by == 'sample':\n for crit in ['site_int_n', 'site_int_sigma', 'site_int_sigma_perc', 'site_aniso_mean', 'site_int_n_outlier_check']:\n self.acceptance_criteria[crit]['value'] = -999\n\n if avg_by == 'site':\n for crit in ['sample_int_n', 'sample_int_sigma', 'sample_int_sigma_perc', 'sample_aniso_mean', 'sample_int_n_outlier_check']:\n self.acceptance_criteria[crit]['value'] = 
-999\n\n #---------\n # get value for each criterion\n for i in range(len(criteria_list)):\n crit = criteria_list[i]\n value, accept = dia.get_value_for_crit(crit, self.acceptance_criteria)\n if accept:\n self.acceptance_criteria.update(accept)\n #---------\n # thellier interpreter calculation type\n if dia.set_stdev_opt.GetValue() == True:\n self.acceptance_criteria['interpreter_method']['value'] = 'stdev_opt'\n elif dia.set_bs.GetValue() == True:\n self.acceptance_criteria['interpreter_method']['value'] = 'bs'\n elif dia.set_bs_par.GetValue() == True:\n self.acceptance_criteria['interpreter_method']['value'] = 'bs_par'\n\n # message dialog\n dlg1 = wx.MessageDialog(\n self, caption=\"Warning:\", message=\"changes are saved to the criteria file\\n \", style=wx.OK)\n result = self.show_dlg(dlg1)\n if result == wx.ID_OK:\n try:\n self.clear_boxes()\n except IndexError:\n pass\n try:\n self.write_acceptance_criteria_to_boxes()\n except IOError:\n pass\n if self.data_model == 3:\n crit_file = 'criteria.txt'\n else:\n crit_file = 'pmag_criteria.txt'\n try:\n pmag.write_criteria_to_file(os.path.join(\n self.WD, crit_file), self.acceptance_criteria, data_model=self.data_model, prior_crits=self.crit_data)\n except AttributeError as ex:\n print(ex)\n print(\"no criteria given to save\")\n dlg1.Destroy()\n dia.Destroy()\n\n self.fig4.texts[0].remove()\n txt = \"{} data\".format(avg_by).capitalize()\n self.fig4.text(0.02, 0.96, txt, {\n 'family': self.font_type, 'fontsize': 10, 'style': 'normal', 'va': 'center', 'ha': 'left'})\n self.recalculate_satistics()\n try:\n self.update_GUI_with_new_interpretation()\n except KeyError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef recalculate_satistics(self):\n '''\n update self.Data[specimen]['pars'] for all specimens.\n '''\n gframe = wx.BusyInfo(\n \"Re-calculating statistics for all specimens\\n Please wait..\", self)\n\n for specimen in 
list(self.Data.keys()):\n if 'pars' not in list(self.Data[specimen].keys()):\n continue\n if 'specimen_int_uT' not in list(self.Data[specimen]['pars'].keys()):\n continue\n tmin = self.Data[specimen]['pars']['measurement_step_min']\n tmax = self.Data[specimen]['pars']['measurement_step_max']\n pars = thellier_gui_lib.get_PI_parameters(\n self.Data, self.acceptance_criteria, self.preferences, specimen, tmin, tmax, self.GUI_log, THERMAL, MICROWAVE)\n self.Data[specimen]['pars'] = pars\n self.Data[specimen]['pars']['lab_dc_field'] = self.Data[specimen]['lab_dc_field']\n self.Data[specimen]['pars']['er_specimen_name'] = self.Data[specimen]['er_specimen_name']\n self.Data[specimen]['pars']['er_sample_name'] = self.Data[specimen]['er_sample_name']\n del gframe", "response": "Recalculates the statistics for all the available specimens."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_criteria_file(self, criteria_file):\n '''\n read criteria file.\n initialize self.acceptance_criteria\n try to guess if averaging by sample or by site.\n '''\n if self.data_model == 3:\n self.acceptance_criteria = pmag.initialize_acceptance_criteria(\n data_model=self.data_model)\n self.add_thellier_gui_criteria()\n fnames = {'criteria': criteria_file}\n contribution = cb.Contribution(\n self.WD, custom_filenames=fnames, read_tables=['criteria'])\n if 'criteria' in contribution.tables:\n crit_container = contribution.tables['criteria']\n crit_data = crit_container.df\n crit_data['definition'] = 'acceptance criteria for study'\n # convert to list of dictionaries\n self.crit_data = crit_data.to_dict('records')\n for crit in self.crit_data: # step through and rename every one\n # magic2[magic3.index(crit['table_column'])] # find data\n # model 2.5 name\n m2_name = map_magic.convert_intensity_criteria(\n 'magic2', crit['table_column'])\n if not m2_name:\n pass\n elif m2_name not in self.acceptance_criteria:\n print('-W- Your 
criteria file contains {}, which is not currently supported in Thellier GUI.'.format(m2_name))\n print(' This record will be skipped:\\n {}'.format(crit))\n else:\n if m2_name != crit['table_column'] and 'scat' not in m2_name != \"\":\n self.acceptance_criteria[m2_name]['value'] = float(\n crit['criterion_value'])\n self.acceptance_criteria[m2_name]['pmag_criteria_code'] = crit['criterion']\n if m2_name != crit['table_column'] and 'scat' in m2_name != \"\":\n if crit['criterion_value'] == 'True':\n self.acceptance_criteria[m2_name]['value'] = 1\n else:\n self.acceptance_criteria[m2_name]['value'] = 0\n else:\n print(\"-E- Can't read criteria file\")\n\n else: # Do it the data model 2.5 way:\n self.crit_data = {}\n try:\n self.acceptance_criteria = pmag.read_criteria_from_file(\n criteria_file, self.acceptance_criteria)\n except:\n print(\"-E- Can't read pmag criteria file\")\n # guesss if average by site or sample:\n by_sample = True\n flag = False\n for crit in ['sample_int_n', 'sample_int_sigma_perc', 'sample_int_sigma']:\n if self.acceptance_criteria[crit]['value'] == -999:\n flag = True\n if flag:\n for crit in ['site_int_n', 'site_int_sigma_perc', 'site_int_sigma']:\n if self.acceptance_criteria[crit]['value'] != -999:\n by_sample = False\n if not by_sample:\n self.acceptance_criteria['average_by_sample_or_site']['value'] = 'site'", "response": "read the criteria file and initialize self. 
acceptance_criteria"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsaves interpretations to a redo file", "response": "def on_menu_save_interpretation(self, event):\n '''\n save interpretations to a redo file\n '''\n\n thellier_gui_redo_file = open(\n os.path.join(self.WD, \"thellier_GUI.redo\"), 'w')\n\n #--------------------------------------------------\n # write interpretations to thellier_GUI.redo\n #--------------------------------------------------\n spec_list = list(self.Data.keys())\n spec_list.sort()\n redo_specimens_list = []\n for sp in spec_list:\n if 'saved' not in self.Data[sp]['pars']:\n continue\n if not self.Data[sp]['pars']['saved']:\n continue\n redo_specimens_list.append(sp)\n\n thellier_gui_redo_file.write(\"%s %.0f %.0f\\n\" % (\n sp, self.Data[sp]['pars']['measurement_step_min'], self.Data[sp]['pars']['measurement_step_max']))\n dlg1 = wx.MessageDialog(\n self, caption=\"Saved:\", message=\"File thellier_GUI.redo is saved in MagIC working folder\", style=wx.OK)\n result = self.show_dlg(dlg1)\n if result == wx.ID_OK:\n dlg1.Destroy()\n thellier_gui_redo_file.close()\n return\n\n thellier_gui_redo_file.close()\n self.close_warning = False"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef on_menu_clear_interpretation(self, event):\n '''\n clear all current interpretations.\n '''\n\n # delete all previous interpretation\n for sp in list(self.Data.keys()):\n del self.Data[sp]['pars']\n self.Data[sp]['pars'] = {}\n self.Data[sp]['pars']['lab_dc_field'] = self.Data[sp]['lab_dc_field']\n self.Data[sp]['pars']['er_specimen_name'] = self.Data[sp]['er_specimen_name']\n self.Data[sp]['pars']['er_sample_name'] = self.Data[sp]['er_sample_name']\n self.Data_samples = {}\n self.Data_sites = {}\n self.tmin_box.SetValue(\"\")\n self.tmax_box.SetValue(\"\")\n self.clear_boxes()\n self.draw_figure(self.s)", "response": "clear all current interpretations."} 
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read_redo_file(self, redo_file):\n self.GUI_log.write(\n \"-I- reading redo file and processing new temperature bounds\")\n self.redo_specimens = {}\n # first delete all previous interpretation\n for sp in list(self.Data.keys()):\n del self.Data[sp]['pars']\n self.Data[sp]['pars'] = {}\n self.Data[sp]['pars']['lab_dc_field'] = self.Data[sp]['lab_dc_field']\n self.Data[sp]['pars']['er_specimen_name'] = self.Data[sp]['er_specimen_name']\n self.Data[sp]['pars']['er_sample_name'] = self.Data[sp]['er_sample_name']\n # print sp\n # print self.Data[sp]['pars']\n self.Data_samples = {}\n self.Data_sites = {}\n\n fin = open(redo_file, 'r')\n lines = fin.readlines()\n fin.close()\n for Line in lines:\n line = Line.strip('\\n').split()\n specimen = line[0]\n tmin_kelvin = float(line[1])\n tmax_kelvin = float(line[2])\n if specimen not in list(self.redo_specimens.keys()):\n self.redo_specimens[specimen] = {}\n self.redo_specimens[specimen]['t_min'] = float(tmin_kelvin)\n self.redo_specimens[specimen]['t_max'] = float(tmax_kelvin)\n if specimen in list(self.Data.keys()):\n if tmin_kelvin not in self.Data[specimen]['t_Arai'] or tmax_kelvin not in self.Data[specimen]['t_Arai']:\n self.GUI_log.write(\n \"-W- WARNING: can't fit temperature bounds in the redo file to the actual measurement. 
specimen %s\\n\" % specimen)\n else:\n self.Data[specimen]['pars'] = thellier_gui_lib.get_PI_parameters(\n self.Data, self.acceptance_criteria, self.preferences, specimen, float(tmin_kelvin), float(tmax_kelvin), self.GUI_log, THERMAL, MICROWAVE)\n try:\n self.Data[specimen]['pars'] = thellier_gui_lib.get_PI_parameters(\n self.Data, self.acceptance_criteria, self.preferences, specimen, float(tmin_kelvin), float(tmax_kelvin), self.GUI_log, THERMAL, MICROWAVE)\n self.Data[specimen]['pars']['saved'] = True\n # write intrepretation into sample data\n sample = self.Data_hierarchy['specimens'][specimen]\n if sample not in list(self.Data_samples.keys()):\n self.Data_samples[sample] = {}\n if specimen not in list(self.Data_samples[sample].keys()):\n self.Data_samples[sample][specimen] = {}\n self.Data_samples[sample][specimen]['B'] = self.Data[specimen]['pars']['specimen_int_uT']\n site = thellier_gui_lib.get_site_from_hierarchy(\n sample, self.Data_hierarchy)\n if site not in list(self.Data_sites.keys()):\n self.Data_sites[site] = {}\n if specimen not in list(self.Data_sites[site].keys()):\n self.Data_sites[site][specimen] = {}\n self.Data_sites[site][specimen]['B'] = self.Data[specimen]['pars']['specimen_int_uT']\n\n except:\n print(\"-E- ERROR 1\")\n self.GUI_log.write(\n \"-E- ERROR. Can't calculate PI paremeters for specimen %s using redo file. 
Check!\\n\" % (specimen))\n else:\n self.GUI_log.write(\n \"-W- WARNING: Can't find specimen %s from redo file in measurement file!\\n\" % specimen)\n print(\n \"-W- WARNING: Can't find specimen %s from redo file in measurement file!\\n\" % specimen)\n if not fin.closed:\n fin.close()\n self.pars = self.Data[self.s]['pars']\n self.clear_boxes()\n self.draw_figure(self.s)\n self.update_GUI_with_new_interpretation()", "response": "Read the previous interpretation from a redo file and update the gui with the new interpretation"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreading er_ages file and convert it by site or sample names and return a dictionary of calendar year values.", "response": "def read_er_ages_file(self, path, ignore_lines_n, sort_by_these_names):\n '''\n read er_ages, sort it by site or sample (the header that is not empty)\n and convert ages to calendar year\n\n '''\n DATA = {}\n fin = open(path, 'r')\n # ignore first lines\n for i in range(ignore_lines_n):\n fin.readline()\n # header\n line = fin.readline()\n header = line.strip('\\n').split('\\t')\n # print header\n for line in fin.readlines():\n if line[0] == \"#\":\n continue\n tmp_data = {}\n tmp_line = line.strip('\\n').split('\\t')\n for i in range(len(tmp_line)):\n if i >= len(header):\n continue\n tmp_data[header[i]] = tmp_line[i]\n for name in sort_by_these_names:\n if name in list(tmp_data.keys()) and tmp_data[name] != \"\":\n er_ages_rec = self.convert_ages_to_calendar_year(tmp_data)\n DATA[tmp_data[name]] = er_ages_rec\n fin.close()\n return(DATA)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef calculate_sample_mean(self, Data_sample_or_site):\n '''\n Data_sample_or_site is a dictonary holding the samples_or_sites mean\n Data_sample_or_site ={}\n Data_sample_or_site[specimen]=B (in units of microT)\n '''\n\n pars = {}\n tmp_B = []\n for spec in list(Data_sample_or_site.keys()):\n if 'B' in 
list(Data_sample_or_site[spec].keys()):\n tmp_B.append(Data_sample_or_site[spec]['B'])\n if len(tmp_B) < 1:\n pars['N'] = 0\n pars['pass_or_fail'] = 'fail'\n return pars\n\n tmp_B = np.array(tmp_B)\n pars['pass_or_fail'] = 'pass'\n pars['N'] = len(tmp_B)\n pars['B_uT'] = np.mean(tmp_B)\n if len(tmp_B) > 1:\n pars['B_std_uT'] = np.std(tmp_B, ddof=1)\n pars['B_std_perc'] = 100 * (pars['B_std_uT'] / pars['B_uT'])\n else:\n pars['B_std_uT'] = 0\n pars['B_std_perc'] = 0\n pars['sample_int_interval_uT'] = (max(tmp_B) - min(tmp_B))\n pars['sample_int_interval_perc'] = 100 * \\\n (pars['sample_int_interval_uT'] / pars['B_uT'])\n pars['fail_list'] = []\n # check if pass criteria\n #----------\n # int_n\n if self.acceptance_criteria['average_by_sample_or_site']['value'] == 'sample':\n average_by_sample_or_site = 'sample'\n else:\n average_by_sample_or_site = 'site'\n\n if average_by_sample_or_site == 'sample':\n cutoff_value = self.acceptance_criteria['sample_int_n']['value']\n else:\n cutoff_value = self.acceptance_criteria['site_int_n']['value']\n if cutoff_value != -999:\n if pars['N'] < cutoff_value:\n pars['pass_or_fail'] = 'fail'\n pars['fail_list'].append(\"int_n\")\n #----------\n # int_sigma ; int_sigma_perc\n pass_sigma, pass_sigma_perc = False, False\n if self.acceptance_criteria['average_by_sample_or_site']['value'] == 'sample':\n sigma_cutoff_value = self.acceptance_criteria['sample_int_sigma']['value']\n else:\n sigma_cutoff_value = self.acceptance_criteria['site_int_sigma']['value']\n\n if sigma_cutoff_value != -999:\n if pars['B_std_uT'] * 1e-6 <= sigma_cutoff_value:\n pass_sigma = True\n\n if self.acceptance_criteria['average_by_sample_or_site']['value'] == 'sample':\n sigma_perc_cutoff_value = self.acceptance_criteria['sample_int_sigma_perc']['value']\n else:\n sigma_perc_cutoff_value = self.acceptance_criteria['site_int_sigma_perc']['value']\n if sigma_perc_cutoff_value != -999:\n if pars['B_std_perc'] <= sigma_perc_cutoff_value:\n pass_sigma_perc = 
True\n\n if not (sigma_cutoff_value == -999 and sigma_perc_cutoff_value == -999):\n if not (pass_sigma or pass_sigma_perc):\n pars['pass_or_fail'] = 'fail'\n pars['fail_list'].append(\"int_sigma\")\n\n pass_int_interval, pass_int_interval_perc = False, False\n if self.acceptance_criteria['average_by_sample_or_site']['value'] == 'sample':\n cutoff_value = self.acceptance_criteria['sample_int_interval_uT']['value']\n if cutoff_value != -999:\n if pars['sample_int_interval_uT'] <= cutoff_value:\n pass_int_interval = True\n\n cutoff_value_perc = self.acceptance_criteria['sample_int_interval_perc']['value']\n if cutoff_value_perc != -999:\n if pars['sample_int_interval_perc'] <= cutoff_value_perc:\n pass_int_interval_perc = True\n\n if not (cutoff_value == -999 and cutoff_value_perc == -999):\n if not (pass_int_interval or pass_int_interval_perc):\n pars['pass_or_fail'] = 'fail'\n pars['fail_list'].append(\"int_interval\")\n\n # if cutoff_value != -999 or cutoff_value_perc != -999:\n # if not (pass_int_interval or pass_int_interval_perc):\n # pars['pass_or_fail']='fail'\n # pars['fail_list'].append(\"int_interval\")\n\n #\n #\n #\n #\n # if (acceptance_criteria['sample_int_sigma_uT']==0 and acceptance_criteria['sample_int_sigma_perc']==0) or\\\n # (pars['B_uT'] <= acceptance_criteria['sample_int_sigma_uT'] or pars['B_std_perc'] <= acceptance_criteria['sample_int_sigma_perc']):\n # if ( pars['sample_int_interval_uT'] <= acceptance_criteria['sample_int_interval_uT'] or pars['sample_int_interval_perc'] <= acceptance_criteria['sample_int_interval_perc']):\n # pars['pass_or_fail']='pass'\n return(pars)", "response": "Calculates the sample mean of the given samples_or_sites."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert all age units to calendar year", "response": "def convert_ages_to_calendar_year(self, er_ages_rec):\n '''\n convert all age units to calendar year\n '''\n if (\"age\" not in list(er_ages_rec.keys())) or 
(cb.is_null(er_ages_rec['age'], False)):\n        return(er_ages_rec)\n    if (\"age_unit\" not in list(er_ages_rec.keys())) or (cb.is_null(er_ages_rec['age_unit'])):\n        return(er_ages_rec)\n    if cb.is_null(er_ages_rec[\"age\"], False):\n        if \"age_range_high\" in list(er_ages_rec.keys()) and \"age_range_low\" in list(er_ages_rec.keys()):\n            if cb.not_null(er_ages_rec[\"age_range_high\"], False) and cb.not_null(er_ages_rec[\"age_range_low\"], False):\n                er_ages_rec[\"age\"] = np.mean(\n                    [float(er_ages_rec[\"age_range_high\"]), float(er_ages_rec[\"age_range_low\"])])\n    if cb.is_null(er_ages_rec[\"age\"], False):\n        return(er_ages_rec)\n\n    age_unit = er_ages_rec[\"age_unit\"]\n\n    # Fix 'age':\n    multiplier = 1\n    if age_unit == \"Ga\":\n        multiplier = -1e9\n    if age_unit == \"Ma\":\n        multiplier = -1e6\n    if age_unit == \"Ka\":\n        multiplier = -1e3\n    if age_unit == \"Years AD (+/-)\" or age_unit == \"Years Cal AD (+/-)\":\n        multiplier = 1\n    if age_unit == \"Years BP\" or age_unit == \"Years Cal BP\":\n        multiplier = 1\n    age = float(er_ages_rec[\"age\"]) * multiplier\n    if age_unit == \"Years BP\" or age_unit == \"Years Cal BP\":\n        age = 1950 - age\n    er_ages_rec['age_cal_year'] = age\n\n    # Fix 'age_range_low':\n    age_range_low = age\n    age_range_high = age\n    age_sigma = 0\n\n    if \"age_sigma\" in list(er_ages_rec.keys()) and cb.not_null(er_ages_rec[\"age_sigma\"], False):\n        # age_sigma is an interval, not a date, so it is scaled but not recalibrated to 1950\n        age_sigma = float(er_ages_rec[\"age_sigma\"]) * multiplier\n        age_range_low = age - age_sigma\n        age_range_high = age + age_sigma\n\n    if \"age_range_high\" in list(er_ages_rec.keys()) and \"age_range_low\" in list(er_ages_rec.keys()):\n        if cb.not_null(er_ages_rec[\"age_range_high\"], False) and cb.not_null(er_ages_rec[\"age_range_low\"], False):\n            age_range_high = float(\n                er_ages_rec[\"age_range_high\"]) * multiplier\n            if age_unit == \"Years BP\" or age_unit == \"Years 
Cal BP\":\n                age_range_high = 1950 - age_range_high\n            age_range_low = float(\n                er_ages_rec[\"age_range_low\"]) * multiplier\n            if age_unit == \"Years BP\" or age_unit == \"Years Cal BP\":\n                age_range_low = 1950 - age_range_low\n    er_ages_rec['age_cal_year_range_low'] = age_range_low\n    er_ages_rec['age_cal_year_range_high'] = age_range_high\n\n    return(er_ages_rec)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_new_T_PI_parameters(self, event):\n\n    # remember the last saved interpretation\n    if \"saved\" in list(self.pars.keys()):\n        if self.pars['saved']:\n            self.last_saved_pars = {}\n            for key in list(self.pars.keys()):\n                self.last_saved_pars[key] = self.pars[key]\n    self.pars['saved'] = False\n    t1 = self.tmin_box.GetValue()\n    t2 = self.tmax_box.GetValue()\n\n    if (t1 == \"\" or t2 == \"\"):\n        print(\"empty interpretation bounds\")\n        return\n    if float(t2) < float(t1):\n        print(\"upper bound less than lower bound\")\n        return\n\n    index_1 = self.T_list.index(t1)\n    index_2 = self.T_list.index(t2)\n\n    # if (index_2-index_1)+1 >= self.acceptance_criteria['specimen_int_n']:\n    if (index_2 - index_1) + 1 >= 3:\n        if self.Data[self.s]['T_or_MW'] != \"MW\":\n            self.pars = thellier_gui_lib.get_PI_parameters(self.Data, self.acceptance_criteria, self.preferences, self.s, float(\n                t1) + 273., float(t2) + 273., self.GUI_log, THERMAL, MICROWAVE)\n            self.Data[self.s]['pars'] = self.pars\n        else:\n            self.pars = thellier_gui_lib.get_PI_parameters(\n                self.Data, self.acceptance_criteria, self.preferences, self.s, float(t1), float(t2), self.GUI_log, THERMAL, MICROWAVE)\n            self.Data[self.s]['pars'] = self.pars\n        self.update_GUI_with_new_interpretation()\n    self.Add_text(self.s)", "response": "get new interpretation parameters for temperatures"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_thellier_gui_criteria(self):\n    '''criteria used only in thellier gui\n    these criteria are 
not written to pmag_criteria.txt\n '''\n category = \"thellier_gui\"\n for crit in ['sample_int_n_outlier_check', 'site_int_n_outlier_check']:\n self.acceptance_criteria[crit] = {}\n self.acceptance_criteria[crit]['category'] = category\n self.acceptance_criteria[crit]['criterion_name'] = crit\n self.acceptance_criteria[crit]['value'] = -999\n self.acceptance_criteria[crit]['threshold_type'] = \"low\"\n self.acceptance_criteria[crit]['decimal_points'] = 0\n\n for crit in ['sample_int_interval_uT', 'sample_int_interval_perc',\n 'site_int_interval_uT', 'site_int_interval_perc',\n 'sample_int_BS_68_uT', 'sample_int_BS_95_uT', 'sample_int_BS_68_perc', 'sample_int_BS_95_perc', 'specimen_int_max_slope_diff']:\n self.acceptance_criteria[crit] = {}\n self.acceptance_criteria[crit]['category'] = category\n self.acceptance_criteria[crit]['criterion_name'] = crit\n self.acceptance_criteria[crit]['value'] = -999\n self.acceptance_criteria[crit]['threshold_type'] = \"high\"\n if crit in ['specimen_int_max_slope_diff']:\n self.acceptance_criteria[crit]['decimal_points'] = -999\n else:\n self.acceptance_criteria[crit]['decimal_points'] = 1\n self.acceptance_criteria[crit]['comments'] = \"thellier_gui_only\"\n\n for crit in ['average_by_sample_or_site', 'interpreter_method']:\n self.acceptance_criteria[crit] = {}\n self.acceptance_criteria[crit]['category'] = category\n self.acceptance_criteria[crit]['criterion_name'] = crit\n if crit in ['average_by_sample_or_site']:\n self.acceptance_criteria[crit]['value'] = 'sample'\n if crit in ['interpreter_method']:\n self.acceptance_criteria[crit]['value'] = 'stdev_opt'\n self.acceptance_criteria[crit]['threshold_type'] = \"flag\"\n self.acceptance_criteria[crit]['decimal_points'] = -999\n\n for crit in ['include_nrm']:\n self.acceptance_criteria[crit] = {}\n self.acceptance_criteria[crit]['category'] = category\n self.acceptance_criteria[crit]['criterion_name'] = crit\n self.acceptance_criteria[crit]['value'] = True\n 
self.acceptance_criteria[crit]['threshold_type'] = \"bool\"\n self.acceptance_criteria[crit]['decimal_points'] = -999\n\n # define internal Thellier-GUI definitions:\n self.average_by_sample_or_site = 'sample'\n self.stdev_opt = True\n self.bs = False\n self.bs_par = False", "response": "add the available criteria used only in thellier gui only"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsorts data block in to first_Z, first_I, etc.", "response": "def sortarai(self, datablock, s, Zdiff):\n \"\"\"\n sorts data block in to first_Z, first_I, etc.\n \"\"\"\n\n first_Z, first_I, zptrm_check, ptrm_check, ptrm_tail = [], [], [], [], []\n field, phi, theta = \"\", \"\", \"\"\n starthere = 0\n Treat_I, Treat_Z, Treat_PZ, Treat_PI, Treat_M, Treat_AC = [], [], [], [], [], []\n ISteps, ZSteps, PISteps, PZSteps, MSteps, ACSteps = [], [], [], [], [], []\n GammaChecks = [] # comparison of pTRM direction acquired and lab field\n Mkeys = ['measurement_magn_moment', 'measurement_magn_volume',\n 'measurement_magn_mass', 'measurement_magnitude']\n rec = datablock[0]\n for key in Mkeys:\n if key in list(rec.keys()) and rec[key] != \"\":\n momkey = key\n break\n # first find all the steps\n for k in range(len(datablock)):\n rec = datablock[k]\n if \"treatment_temp\" in list(rec.keys()) and rec[\"treatment_temp\"] != \"\":\n temp = float(rec[\"treatment_temp\"])\n THERMAL = True\n MICROWAVE = False\n elif \"treatment_mw_power\" in list(rec.keys()) and rec[\"treatment_mw_power\"] != \"\":\n THERMAL = False\n MICROWAVE = True\n if \"measurement_description\" in list(rec.keys()):\n MW_step = rec[\"measurement_description\"].strip(\n '\\n').split(\":\")\n for STEP in MW_step:\n if \"Number\" in STEP:\n temp = float(STEP.split(\"-\")[-1])\n\n methcodes = []\n tmp = rec[\"magic_method_codes\"].split(\":\")\n for meth in tmp:\n methcodes.append(meth.strip())\n # for thellier-thellier\n if 'LT-T-I' in methcodes and 'LP-PI-TRM' in methcodes and 'LP-TRM' 
not in methcodes:\n Treat_I.append(temp)\n ISteps.append(k)\n if field == \"\":\n field = float(rec[\"treatment_dc_field\"])\n if phi == \"\":\n phi = float(rec['treatment_dc_field_phi'])\n theta = float(rec['treatment_dc_field_theta'])\n\n # for Microwave\n if 'LT-M-I' in methcodes and 'LP-PI-M' in methcodes:\n Treat_I.append(temp)\n ISteps.append(k)\n if field == \"\":\n field = float(rec[\"treatment_dc_field\"])\n if phi == \"\":\n phi = float(rec['treatment_dc_field_phi'])\n theta = float(rec['treatment_dc_field_theta'])\n\n # stick first zero field stuff into first_Z\n if 'LT-NO' in methcodes:\n Treat_Z.append(temp)\n ZSteps.append(k)\n if \"LT-AF-Z\" in methcodes and 'treatment_ac_field' in list(rec.keys()):\n if rec['treatment_ac_field'] != \"\":\n AFD_after_NRM = True\n # consider AFD before T-T experiment ONLY if it comes before\n # the experiment\n for i in range(len(first_I)):\n # check if there was an infield step before the AFD\n if float(first_I[i][3]) != 0:\n AFD_after_NRM = False\n if AFD_after_NRM:\n AF_field = 0\n if 'treatment_ac_field' in rec:\n try:\n AF_field = float(rec['treatment_ac_field']) * 1000\n except ValueError:\n pass\n\n dec = float(rec[\"measurement_dec\"])\n inc = float(rec[\"measurement_inc\"])\n intensity = float(rec[momkey])\n first_I.append([273. - AF_field, 0., 0., 0., 1])\n first_Z.append(\n [273. 
- AF_field, dec, inc, intensity, 1])  # NRM step\n        if 'LT-T-Z' in methcodes or 'LT-M-Z' in methcodes:\n            Treat_Z.append(temp)\n            ZSteps.append(k)\n        if 'LT-PTRM-Z' in methcodes:\n            Treat_PZ.append(temp)\n            PZSteps.append(k)\n        if 'LT-PTRM-I' in methcodes or 'LT-PMRM-I' in methcodes:\n            Treat_PI.append(temp)\n            PISteps.append(k)\n        if 'LT-PTRM-MD' in methcodes or 'LT-PMRM-MD' in methcodes:\n            Treat_M.append(temp)\n            MSteps.append(k)\n        if 'LT-PTRM-AC' in methcodes or 'LT-PMRM-AC' in methcodes:\n            Treat_AC.append(temp)\n            ACSteps.append(k)\n        if 'LT-NO' in methcodes:\n            dec = float(rec[\"measurement_dec\"])\n            inc = float(rec[\"measurement_inc\"])\n            moment = float(rec[\"measurement_magn_moment\"])\n            if 'LP-PI-M' not in methcodes:\n                first_I.append([273, 0., 0., 0., 1])\n                first_Z.append([273, dec, inc, moment, 1])  # NRM step\n            else:\n                first_I.append([0, 0., 0., 0., 1])\n                first_Z.append([0, dec, inc, moment, 1])  # NRM step\n\n    #---------------------\n    # find IZ and ZI\n    #---------------------\n\n    for temp in Treat_I:  # look through infield steps and find matching Z step\n        if temp in Treat_Z:  # found a match\n            istep = ISteps[Treat_I.index(temp)]\n            irec = datablock[istep]\n            methcodes = []\n            tmp = irec[\"magic_method_codes\"].split(\":\")\n            for meth in tmp:\n                methcodes.append(meth.strip())\n            # take last record as baseline to subtract\n            brec = datablock[istep - 1]\n            zstep = ZSteps[Treat_Z.index(temp)]\n            zrec = datablock[zstep]\n            # sort out first_Z records\n            # check if ZI/IZ is in method codes:\n            ZI = \"\"\n            if \"LP-PI-TRM-IZ\" in methcodes or \"LP-PI-M-IZ\" in methcodes or \"LP-PI-IZ\" in methcodes:\n                ZI = 0\n            elif \"LP-PI-TRM-ZI\" in methcodes or \"LP-PI-M-ZI\" in methcodes or \"LP-PI-ZI\" in methcodes:\n                ZI = 1\n            elif \"LP-PI-BT-IZZI\" in methcodes:\n                ZI = \"\"\n                i_intex, z_intex = 0, 0\n                foundit = False\n                for i in range(len(datablock)):\n                    if THERMAL:\n                        if ('treatment_temp' in list(datablock[i].keys()) and float(temp) == float(datablock[i]['treatment_temp'])):\n                            foundit = True\n                    if MICROWAVE:\n                        if 
('measurement_description' in list(datablock[i].keys())):\n MW_step = datablock[i][\"measurement_description\"].strip(\n '\\n').split(\":\")\n for STEP in MW_step:\n if \"Number\" in STEP:\n ThisStep = float(STEP.split(\"-\")[-1])\n if ThisStep == float(temp):\n foundit = True\n if foundit:\n if \"LT-T-Z\" in datablock[i]['magic_method_codes'].split(\":\") or \"LT-M-Z\" in datablock[i]['magic_method_codes'].split(\":\"):\n z_intex = i\n if \"LT-T-I\" in datablock[i]['magic_method_codes'].split(\":\") or \"LT-M-I\" in datablock[i]['magic_method_codes'].split(\":\"):\n i_intex = i\n foundit = False\n\n if z_intex < i_intex:\n ZI = 1\n else:\n ZI = 0\n dec = float(zrec[\"measurement_dec\"])\n inc = float(zrec[\"measurement_inc\"])\n str = float(zrec[momkey])\n first_Z.append([temp, dec, inc, str, ZI])\n # sort out first_I records\n idec = float(irec[\"measurement_dec\"])\n iinc = float(irec[\"measurement_inc\"])\n istr = float(irec[momkey])\n X = pmag.dir2cart([idec, iinc, istr])\n BL = pmag.dir2cart([dec, inc, str])\n I = []\n for c in range(3):\n I.append((X[c] - BL[c]))\n if I[2] != 0:\n iDir = pmag.cart2dir(I)\n if Zdiff == 0:\n first_I.append([temp, iDir[0], iDir[1], iDir[2], ZI])\n else:\n first_I.append([temp, 0., 0., I[2], ZI])\n# gamma=angle([iDir[0],iDir[1]],[phi,theta])\n else:\n first_I.append([temp, 0., 0., 0., ZI])\n# gamma=0.0\n# put in Gamma check (infield trm versus lab field)\n# if 180.-gamma-c to quit')\n data = np.loadtxt(sys.stdin, dtype=np.float)\n\n ipmag.histplot(fname, data, outfile, xlab, binsize, norm,\n fmt, save_plots, interactive)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nextracts the arguments from sys. 
argv that is used to call a command - line script and returns a correctly split list of arguments", "response": "def extract_args(argv):\n \"\"\"\n take sys.argv that is used to call a command-line script and return a correctly split list of arguments\n for example, this input: [\"eqarea.py\", \"-f\", \"infile\", \"-F\", \"outfile\", \"-A\"]\n will return this output: [['f', 'infile'], ['F', 'outfile'], ['A']]\n \"\"\"\n string = \" \".join(argv)\n string = string.split(' -')\n program = string[0]\n arguments = [s.split() for s in string[1:]]\n return arguments"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef check_args(arguments, data_frame):\n stripped_args = [a[0] for a in arguments]\n df = data_frame.df\n # first make sure all args are valid\n for a in arguments:\n if a[0] not in df.index:\n print(\"-I- ignoring invalid argument: {}\".format(a[0]))\n print(\"-\")\n # next make sure required arguments are present\n condition = df['reqd']\n reqd_args = df[condition]\n for arg in reqd_args['arg_name']:\n if arg not in stripped_args:\n raise pmag.MissingCommandLineArgException(\"-\"+arg)\n #next, assign any default values as needed\n\n #condition = df['default'] != '' # don't need this, and sometimes the correct default argument IS ''\n default_args = df #[condition]\n using_defaults = []\n for arg_name, row in default_args.iterrows():\n default = row['default']\n if arg_name not in stripped_args:\n using_defaults.append(arg_name)\n arguments.append([arg_name, default])\n using_defaults = [\"-\" + arg for arg in using_defaults]\n print('Using default arguments for: {}'.format(', '.join(using_defaults)))\n return arguments", "response": "checks that all arguments are valid and assign any required arguments to the passed in list of available objects"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef InitUI(self):\n if self.data_type in ['orient', 'ages']:\n 
belongs_to = []\n else:\n parent_table_name = self.parent_type + \"s\"\n if parent_table_name in self.contribution.tables:\n belongs_to = sorted(self.contribution.tables[parent_table_name].df.index.unique())\n else:\n belongs_to = []\n\n self.choices = {}\n if self.data_type in ['specimens', 'samples', 'sites']:\n self.choices = {1: (belongs_to, False)}\n if self.data_type == 'orient':\n self.choices = {1: (['g', 'b'], False)}\n if self.data_type == 'ages':\n for level in ['specimen', 'sample', 'site', 'location']:\n if level in self.grid.col_labels:\n level_names = []\n if level + \"s\" in self.contribution.tables:\n level_names = list(self.contribution.tables[level+\"s\"].df.index.unique())\n num = self.grid.col_labels.index(level)\n self.choices[num] = (level_names, False)\n # Bind left click to drop-down menu popping out\n self.grid.Bind(wx.grid.EVT_GRID_CELL_LEFT_CLICK,\n lambda event: self.on_left_click(event, self.grid, self.choices))\n\n cols = self.grid.GetNumberCols()\n col_labels = [self.grid.GetColLabelValue(col) for col in range(cols)]\n\n # check if any additional columns have controlled vocabularies\n # if so, get the vocabulary list\n for col_number, label in enumerate(col_labels):\n self.add_drop_down(col_number, label)", "response": "Initialize the UI for the current object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a correctly formatted drop - down menu for given col_label.", "response": "def add_drop_down(self, col_number, col_label):\n \"\"\"\n Add a correctly formatted drop-down-menu for given col_label,\n if required or suggested.\n Otherwise do nothing.\n\n Parameters\n ----------\n col_number : int\n grid position at which to add a drop down menu\n col_label : str\n column name\n \"\"\"\n if col_label.endswith('**') or col_label.endswith('^^'):\n col_label = col_label[:-2]\n # add drop-down for experiments\n if col_label == \"experiments\":\n if 'measurements' in self.contribution.tables:\n 
meas_table = self.contribution.tables['measurements'].df\n if 'experiment' in meas_table.columns:\n exps = meas_table['experiment'].unique()\n self.choices[col_number] = (sorted(exps), False)\n self.grid.SetColLabelValue(col_number, col_label + \"**\")\n return\n #\n if col_label == 'method_codes':\n self.add_method_drop_down(col_number, col_label)\n elif col_label == 'magic_method_codes':\n self.add_method_drop_down(col_number, 'method_codes')\n elif col_label in ['specimens', 'samples', 'sites', 'locations']:\n if col_label in self.contribution.tables:\n item_df = self.contribution.tables[col_label].df\n item_names = item_df.index.unique() #[col_label[:-1]].unique()\n self.choices[col_number] = (sorted(item_names), False)\n elif col_label in ['specimen', 'sample', 'site', 'location']:\n if col_label + \"s\" in self.contribution.tables:\n item_df = self.contribution.tables[col_label + \"s\"].df\n item_names = item_df.index.unique() #[col_label[:-1]].unique()\n self.choices[col_number] = (sorted(item_names), False)\n # add vocabularies\n if col_label in self.contribution.vocab.suggested:\n typ = 'suggested'\n elif col_label in self.contribution.vocab.vocabularies:\n typ = 'controlled'\n else:\n return\n\n # add menu, if not already set\n if col_number not in list(self.choices.keys()):\n if typ == 'suggested':\n self.grid.SetColLabelValue(col_number, col_label + \"^^\")\n controlled_vocabulary = self.contribution.vocab.suggested[col_label]\n else:\n self.grid.SetColLabelValue(col_number, col_label + \"**\")\n controlled_vocabulary = self.contribution.vocab.vocabularies[col_label]\n #\n stripped_list = []\n for item in controlled_vocabulary:\n try:\n stripped_list.append(str(item))\n except UnicodeEncodeError:\n # skips items with non ASCII characters\n pass\n\n if len(stripped_list) > 100:\n # split out the list alphabetically, into a dict of lists {'A': ['alpha', 'artist'], 'B': ['beta', 'beggar']...}\n dictionary = {}\n for item in stripped_list:\n letter = 
item[0].upper()\n if letter not in list(dictionary.keys()):\n dictionary[letter] = []\n dictionary[letter].append(item)\n stripped_list = dictionary\n\n two_tiered = True if isinstance(stripped_list, dict) else False\n self.choices[col_number] = (stripped_list, two_tiered)\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding drop - down - menu options for magic_method_codes columns", "response": "def add_method_drop_down(self, col_number, col_label):\n \"\"\"\n Add drop-down-menu options for magic_method_codes columns\n \"\"\"\n if self.data_type == 'ages':\n method_list = self.contribution.vocab.age_methods\n else:\n method_list = self.contribution.vocab.age_methods.copy()\n method_list.update(self.contribution.vocab.methods)\n self.choices[col_number] = (method_list, True)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef on_left_click(self, event, grid, choices):\n row, col = event.GetRow(), event.GetCol()\n if col == 0 and self.grid.name != 'ages':\n default_val = self.grid.GetCellValue(row, col)\n msg = \"Choose a new name for {}.\\nThe new value will propagate throughout the contribution.\".format(default_val)\n dia = wx.TextEntryDialog(self.grid, msg,\n \"Rename {}\".format(self.grid.name, default_val),\n default_val)\n res = dia.ShowModal()\n if res == wx.ID_OK:\n new_val = dia.GetValue()\n # update the contribution with new name\n self.contribution.rename_item(self.grid.name,\n default_val, new_val)\n # don't propagate changes if we are just assigning a new name\n # and not really renaming\n # (i.e., if a blank row was added then named)\n if default_val == '':\n self.grid.SetCellValue(row, 0, new_val)\n return\n\n # update the current grid with new name\n for row in range(self.grid.GetNumberRows()):\n cell_value = self.grid.GetCellValue(row, 0)\n if cell_value == default_val:\n self.grid.SetCellValue(row, 0, new_val)\n else:\n continue\n return\n color = 
self.grid.GetCellBackgroundColour(event.GetRow(), event.GetCol())\n # allow user to cherry-pick cells for editing.\n # gets selection of meta key for mac, ctrl key for pc\n if event.ControlDown() or event.MetaDown():\n row, col = event.GetRow(), event.GetCol()\n if (row, col) not in self.dispersed_selection:\n self.dispersed_selection.append((row, col))\n self.grid.SetCellBackgroundColour(row, col, 'light blue')\n else:\n self.dispersed_selection.remove((row, col))\n self.grid.SetCellBackgroundColour(row, col, color)# 'white'\n self.grid.ForceRefresh()\n return\n if event.ShiftDown(): # allow user to highlight multiple consecutive cells in a column\n previous_col = self.grid.GetGridCursorCol()\n previous_row = self.grid.GetGridCursorRow()\n col = event.GetCol()\n row = event.GetRow()\n if col != previous_col:\n return\n else:\n if row > previous_row:\n row_range = list(range(previous_row, row+1))\n else:\n row_range = list(range(row, previous_row+1))\n for r in row_range:\n self.grid.SetCellBackgroundColour(r, col, 'light blue')\n self.selection.append((r, col))\n self.grid.ForceRefresh()\n return\n\n selection = False\n\n if self.dispersed_selection:\n is_dispersed = True\n selection = self.dispersed_selection\n\n if self.selection:\n is_dispersed = False\n selection = self.selection\n\n try:\n col = event.GetCol()\n row = event.GetRow()\n except AttributeError:\n row, col = selection[0][0], selection[0][1]\n\n self.grid.SetGridCursor(row, col)\n\n if col in list(choices.keys()): # column should have a pop-up menu\n menu = wx.Menu()\n two_tiered = choices[col][1]\n choices = choices[col][0]\n if not two_tiered: # menu is one tiered\n if 'CLEAR cell of all values' not in choices:\n choices.insert(0, 'CLEAR cell of all values')\n for choice in choices:\n if not choice:\n choice = \" \" # prevents error if choice is an empty string\n menuitem = menu.Append(wx.ID_ANY, str(choice))\n self.window.Bind(wx.EVT_MENU, lambda event: self.on_select_menuitem(event, grid, row, 
col, selection), menuitem)\n self.show_menu(event, menu)\n else: # menu is two_tiered\n clear = menu.Append(-1, 'CLEAR cell of all values')\n self.window.Bind(wx.EVT_MENU, lambda event: self.on_select_menuitem(event, grid, row, col, selection), clear)\n for choice in sorted(choices.items()):\n submenu = wx.Menu()\n for item in choice[1]:\n menuitem = submenu.Append(-1, str(item))\n self.window.Bind(wx.EVT_MENU, lambda event: self.on_select_menuitem(event, grid, row, col, selection), menuitem)\n menu.Append(-1, choice[0], submenu)\n self.show_menu(event, menu)\n\n if selection:\n # re-whiten the cells that were previously highlighted\n for row, col in selection:\n self.grid.SetCellBackgroundColour(row, col, self.col_color)\n self.dispersed_selection = []\n self.selection = []\n self.grid.ForceRefresh()", "response": "Creates a popup menu when user clicks on the column\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-F' in sys.argv:\n ind=sys.argv.index('-F')\n ofile=sys.argv[ind+1]\n out=open(ofile,'w')\n print(ofile, ' opened for output')\n else: ofile=\"\"\n if '-i' in sys.argv: # interactive flag\n while 1:\n try:\n Dec=float(input(\"Declination: to quit \"))\n except:\n print(\"\\n Good-bye\\n\")\n sys.exit()\n Inc=float(input(\"Inclination: \"))\n Dip_dir=float(input(\"Dip direction: \"))\n Dip=float(input(\"Dip: \"))\n print('%7.1f %7.1f'%(pmag.dotilt(Dec,Inc,Dip_dir,Dip)))\n elif '-f' in sys.argv:\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n data=numpy.loadtxt(file)\n else:\n data=numpy.loadtxt(sys.stdin,dtype=numpy.float) # read in the data from the datafile\n D,I=pmag.dotilt_V(data)\n for k in range(len(D)):\n if ofile==\"\":\n print('%7.1f %7.1f'%(D[k],I[k]))\n else:\n out.write('%7.1f %7.1f\\n'%(D[k],I[k]))", "response": "NAME di_tilt. 
py entry."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nnames convert2unix.py DESCRIPTION converts mac or dos formatted file to unix file in place SYNTAX convert2unix.py FILE OPTIONS -h prints help and quits", "response": "def main():\n \"\"\"\n NAME\n convert2unix.py\n \n DESCRIPTION\n converts mac or dos formatted file to unix file in place\n \n SYNTAX\n convert2unix.py FILE \n \n OPTIONS\n -h prints help and quits\n \n \"\"\"\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n file=sys.argv[1]\n f=open(file,'r')\n Input=f.readlines()\n f.close()\n out=open(file,'w')\n for line in Input:\n out.write(line)\n out.close()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef main():\n plot, fmt = 0, 'svg'\n mod, alt, make_plot, lat, lon = 'cals10k', 0, 0, 0, 0\n if '-loc' in sys.argv:\n ind = sys.argv.index('-loc')\n lat = float(sys.argv[ind+1])\n lon = float(sys.argv[ind+2])\n if '-alt' in sys.argv:\n ind = sys.argv.index('-alt')\n alt = float(sys.argv[ind+1])\n if '-fmt' in sys.argv:\n ind = sys.argv.index('-fmt')\n fmt = sys.argv[ind+1]\n if len(sys.argv) != 0 and '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-mod' in sys.argv:\n ind = sys.argv.index('-mod')\n mod = sys.argv[ind+1]\n if '-fgh' in sys.argv:\n ind = sys.argv.index('-fgh')\n ghfile = sys.argv[ind+1]\n lmgh = numpy.loadtxt(ghfile)\n gh = []\n lmgh = numpy.loadtxt(ghfile).transpose()\n gh.append(lmgh[2][0])\n for i in range(1, lmgh.shape[1]):\n gh.append(lmgh[2][i])\n gh.append(lmgh[3][i])\n mod = 'custom'\n inp = [[0, alt, lat, lon]]\n elif '-f' in sys.argv:\n ind = sys.argv.index('-f')\n file = sys.argv[ind+1]\n inp = numpy.loadtxt(file)\n elif '-i' in sys.argv:\n while 1:\n try:\n line = []\n if mod != 'custom':\n line.append(\n float(input(\"Decimal year: \")))\n else:\n line.append(0)\n alt = input(\"Elevation in km [0] \")\n if alt == \"\":\n alt = \"0\"\n line.append(float(alt))\n 
line.append(float(input(\"Latitude (positive north) \")))\n line.append(float(input(\"Longitude (positive east) \")))\n if mod == '':\n x, y, z, f = pmag.doigrf(\n line[3] % 360., line[2], line[1], line[0])\n elif mod == 'custom':\n x, y, z, f = pmag.docustom(\n line[3] % 360., line[2], line[1], gh)\n else:\n x, y, z, f = pmag.doigrf(\n line[3] % 360., line[2], line[1], line[0], mod=mod)\n Dir = pmag.cart2dir((x, y, z))\n print('%8.2f %8.2f %8.0f' % (Dir[0], Dir[1], f))\n except EOFError:\n print(\"\\n Good-bye\\n\")\n sys.exit()\n elif '-ages' in sys.argv:\n ind = sys.argv.index('-ages')\n agemin = float(sys.argv[ind+1])\n agemax = float(sys.argv[ind+2])\n ageincr = float(sys.argv[ind+3])\n ages = numpy.arange(agemin, agemax, ageincr)\n lats = numpy.ones(len(ages))*lat\n lons = numpy.ones(len(ages))*lon\n alts = numpy.ones(len(ages))*alt\n inp = numpy.array([ages, alts, lats, lons]).transpose()\n else:\n inp = numpy.loadtxt(sys.stdin, dtype=numpy.float)\n if '-F' in sys.argv:\n ind = sys.argv.index('-F')\n outfile = sys.argv[ind+1]\n out = open(outfile, 'w')\n else:\n outfile = \"\"\n if '-sav' in sys.argv:\n plot = 1\n if '-plt' in sys.argv:\n make_plot = 1\n Ages, Decs, Incs, Ints, VADMs = [], [], [], [], []\n for line in inp:\n if mod != 'custom':\n x, y, z, f = pmag.doigrf(\n line[3] % 360., line[2], line[1], line[0], mod=mod)\n else:\n x, y, z, f = pmag.docustom(line[3] % 360., line[2], line[1], gh)\n Dir = pmag.cart2dir((x, y, z))\n if outfile != \"\":\n out.write('%8.2f %8.2f %8.0f %7.1f %7.1f %7.1f %7.1f\\n' %\n (Dir[0], Dir[1], f, line[0], line[1], line[2], line[3]))\n elif make_plot:\n Ages.append(line[0])\n if Dir[0] > 180:\n Dir[0] = Dir[0]-360.0\n Decs.append(Dir[0])\n Incs.append(Dir[1])\n Ints.append(f*1e-3)\n VADMs.append(pmag.b_vdm(f*1e-9, line[2])*1e-21)\n else:\n print('%8.2f %8.2f %8.0f %7.1f %7.1f %7.1f %7.1f' %\n (Dir[0], Dir[1], f, line[0], line[1], line[2], line[3]))\n if make_plot:\n pmagplotlib.plot_init(1, 7, 9)\n fig = plt.figure(num=1, 
figsize=(7, 9))\n fig.add_subplot(411)\n plt.plot(Ages, Decs)\n plt.ylabel('Declination ($^{\\circ}$)')\n fig.add_subplot(412)\n plt.plot(Ages, Incs)\n plt.ylabel('Inclination ($^{\\circ}$)')\n fig.add_subplot(413)\n plt.plot(Ages, Ints)\n plt.ylabel('Intensity ($\\mu$T)')\n fig.add_subplot(414)\n plt.plot(Ages, VADMs)\n plt.ylabel('VADMs (ZAm$^2$)')\n plt.xlabel('Ages')\n # show plot\n if plot == 0:\n pmagplotlib.draw_figs({'time series': 1})\n ans = input(\"S[a]ve to save figure, to quit \")\n if ans == 'a':\n plt.savefig('igrf.'+fmt)\n print('Figure saved as: ', 'igrf.'+fmt)\n # save plot without showing\n else:\n plt.savefig('igrf.'+fmt)\n print('Figure saved as: ', 'igrf.'+fmt)\n sys.exit()", "response": "Computes geomagnetic field vectors (declination, inclination, intensity) for the requested dates and locations, using the IGRF routine of Malin and Barraclough or a paleofield model such as CALS10k, then prints, saves, or plots the results."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main():\n dir_path = '.'\n res, ages = 'c', 0\n plot = 0\n proj = 'ortho'\n results_file = 'pmag_results.txt'\n ell, flip = 0, 0\n lat_0, lon_0 = 90., 0.\n fmt = 'pdf'\n sym, size = 'ro', 8\n rsym, rsize = 'g^', 8\n anti = 0\n fancy = 0\n coord = \"\"\n if '-WD' in sys.argv:\n ind = sys.argv.index('-WD')\n dir_path = sys.argv[ind+1]\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-S' in sys.argv:\n anti = 1\n if '-fmt' in sys.argv:\n ind = sys.argv.index('-fmt')\n fmt = sys.argv[ind+1]\n if '-sav' in sys.argv:\n plot = 1\n if '-res' in sys.argv:\n ind = sys.argv.index('-res')\n res = sys.argv[ind+1]\n if '-etp' in sys.argv:\n fancy = 1\n if '-prj' in sys.argv:\n ind = sys.argv.index('-prj')\n proj = sys.argv[ind+1]\n if '-rev' in sys.argv:\n flip = 1\n ind = sys.argv.index('-rev')\n rsym = (sys.argv[ind+1])\n rsize = int(sys.argv[ind+2])\n if '-sym' in sys.argv:\n ind = sys.argv.index('-sym')\n sym = (sys.argv[ind+1])\n size = int(sys.argv[ind+2])\n if '-eye' in sys.argv:\n ind = sys.argv.index('-eye')\n 
lat_0 = float(sys.argv[ind+1])\n lon_0 = float(sys.argv[ind+2])\n if '-ell' in sys.argv:\n ell = 1\n if '-age' in sys.argv:\n ages = 1\n if '-f' in sys.argv:\n ind = sys.argv.index('-f')\n results_file = sys.argv[ind+1]\n if '-crd' in sys.argv:\n ind = sys.argv.index('-crd')\n crd = sys.argv[ind+1]\n if crd == 'g':\n coord = '0'\n if crd == 't':\n coord = '100'\n results_file = dir_path+'/'+results_file\n data, file_type = pmag.magic_read(results_file)\n if file_type != 'pmag_results':\n print(\"bad results file\")\n sys.exit()\n FIG = {'map': 1}\n pmagplotlib.plot_init(FIG['map'], 6, 6)\n # read in er_sites file\n lats, lons, dp, dm, a95 = [], [], [], [], []\n Pars = []\n dates, rlats, rlons = [], [], []\n if 'data_type' in data[0].keys():\n # get all site level data\n Results = pmag.get_dictitem(data, 'data_type', 'i', 'T')\n else:\n Results = data\n # get all non-blank latitudes\n Results = pmag.get_dictitem(Results, 'vgp_lat', '', 'F')\n # get all non-blank longitudes\n Results = pmag.get_dictitem(Results, 'vgp_lon', '', 'F')\n if coord != \"\":\n # get specified coordinate system\n Results = pmag.get_dictitem(Results, 'tilt_correction', coord, 'T')\n location = \"\"\n for rec in Results:\n if rec['er_location_names'] not in location:\n location = location+':'+rec['er_location_names']\n if 'average_age' in rec.keys() and rec['average_age'] != \"\" and ages == 1:\n dates.append(rec['average_age'])\n lat = float(rec['vgp_lat'])\n lon = float(rec['vgp_lon'])\n if flip == 0:\n lats.append(lat)\n lons.append(lon)\n elif flip == 1:\n if lat < 0:\n rlats.append(-lat)\n lon = lon+180.\n if lon > 360:\n lon = lon-360.\n rlons.append(lon)\n else:\n lats.append(lat)\n lons.append(lon)\n elif anti == 1:\n lats.append(-lat)\n lon = lon+180.\n if lon > 360:\n lon = lon-360.\n lons.append(lon)\n ppars = []\n ppars.append(lon)\n ppars.append(lat)\n ell1, ell2 = \"\", \"\"\n if 'vgp_dm' in rec.keys() and rec['vgp_dm'] != \"\":\n ell1 = float(rec['vgp_dm'])\n if 'vgp_dp' in 
rec.keys() and rec['vgp_dp'] != \"\":\n ell2 = float(rec['vgp_dp'])\n if 'vgp_alpha95' in rec.keys() and rec['vgp_alpha95'] != \"\":\n ell1, ell2 = float(rec['vgp_alpha95']), float(rec['vgp_alpha95'])\n if ell1 != \"\" and ell2 != \"\":\n ppars = []\n ppars.append(lons[-1])\n ppars.append(lats[-1])\n ppars.append(ell1)\n ppars.append(lons[-1])\n isign = abs(lats[-1])/lats[-1]\n ppars.append(lats[-1]-isign*90.)\n ppars.append(ell2)\n ppars.append(lons[-1]+90.)\n ppars.append(0.)\n Pars.append(ppars)\n location = location.strip(':')\n Opts = {'latmin': -90, 'latmax': 90, 'lonmin': 0., 'lonmax': 360., 'lat_0': lat_0, 'lon_0': lon_0,\n 'proj': proj, 'sym': 'bs', 'symsize': 3, 'pltgrid': 0, 'res': res, 'boundinglat': 0.}\n Opts['details'] = {'coasts': 1, 'rivers': 0, 'states': 0,\n 'countries': 0, 'ocean': 1, 'fancy': fancy}\n # make the base map with a blue triangle at the pole`\n pmagplotlib.plot_map(FIG['map'], [90.], [0.], Opts)\n Opts['pltgrid'] = -1\n Opts['sym'] = sym\n Opts['symsize'] = size\n if len(dates) > 0:\n Opts['names'] = dates\n if len(lats) > 0:\n # add the lats and lons of the poles\n pmagplotlib.plot_map(FIG['map'], lats, lons, Opts)\n Opts['names'] = []\n if len(rlats) > 0:\n Opts['sym'] = rsym\n Opts['symsize'] = rsize\n # add the lats and lons of the poles\n pmagplotlib.plot_map(FIG['map'], rlats, rlons, Opts)\n if plot == 0:\n pmagplotlib.draw_figs(FIG)\n if ell == 1: # add ellipses if desired.\n Opts['details'] = {'coasts': 0, 'rivers': 0,\n 'states': 0, 'countries': 0, 'ocean': 0}\n Opts['pltgrid'] = -1 # turn off meridian replotting\n Opts['symsize'] = 2\n Opts['sym'] = 'g-'\n for ppars in Pars:\n if ppars[2] != 0:\n PTS = pmagplotlib.plot_ell(FIG['map'], ppars, 'g.', 0, 0)\n elats, elons = [], []\n for pt in PTS:\n elons.append(pt[0])\n elats.append(pt[1])\n # make the base map with a blue triangle at the pole`\n pmagplotlib.plot_map(FIG['map'], elats, elons, Opts)\n if plot == 0:\n pmagplotlib.draw_figs(FIG)\n files = {}\n for key in 
FIG.keys():\n if pmagplotlib.isServer: # use server plot naming convention\n files[key] = 'LO:_'+location+'_VGP_map.'+fmt\n else: # use more readable plot naming convention\n files[key] = '{}_VGP_map.{}'.format(\n location.replace(' ', '_'), fmt)\n if pmagplotlib.isServer:\n black = '#000000'\n purple = '#800080'\n titles = {}\n titles['eq'] = 'LO:_'+location+'_VGP_map'\n FIG = pmagplotlib.add_borders(FIG, titles, black, purple)\n pmagplotlib.save_plots(FIG, files)\n elif plot == 0:\n pmagplotlib.draw_figs(FIG)\n ans = input(\" S[a]ve to save plot, Return to quit: \")\n if ans == \"a\":\n pmagplotlib.save_plots(FIG, files)\n else:\n print(\"Good bye\")\n sys.exit()\n else:\n pmagplotlib.save_plots(FIG, files)", "response": "Main function of the vgpmap_magic.py script: reads a pmag_results file and plots the virtual geomagnetic poles (with optional reversed-polarity symbols and confidence ellipses) on the chosen map projection."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nnames gokent.py DESCRIPTION calculates Kent parameters from dec inc data INPUT FORMAT takes dec/inc as first two columns in space delimited file SYNTAX gokent.py [options] OPTIONS -h prints help message and quits -i for interactive filename entry -f FILE, specify filename -F FILE, specifies output file name < filename for reading from standard input OUTPUT mean dec, mean inc, Eta, Deta, Ieta, Zeta, Zdec, Zinc, N", "response": "def main():\n \"\"\"\n NAME\n gokent.py\n\n DESCRIPTION\n calculates Kent parameters from dec inc data\n\n INPUT FORMAT\n takes dec/inc as first two columns in space delimited file\n\n SYNTAX\n gokent.py [options]\n\n OPTIONS\n -h prints help message and quits\n -i for interactive filename entry\n -f FILE, specify filename\n -F FILE, specifies output file name\n < filename for reading from standard input\n\n OUTPUT\n mean dec, mean inc, Eta, Deta, Ieta, Zeta, Zdec, Zinc, N\n \"\"\"\n if len(sys.argv) > 0:\n if '-h' in sys.argv: # check if help is needed\n print(main.__doc__)\n sys.exit() # graceful quit\n 
if '-f' in sys.argv:\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n f=open(file,'r')\n data=f.readlines()\n elif '-i' in sys.argv: # ask for filename\n file=input(\"Enter file name with dec, inc data: \")\n f=open(file,'r')\n data=f.readlines()\n else:\n#\n data=sys.stdin.readlines() # read in data from standard input\n ofile = \"\"\n if '-F' in sys.argv:\n ind = sys.argv.index('-F')\n ofile= sys.argv[ind+1]\n out = open(ofile, 'a')\n DIs= [] # set up list for dec inc data\n for line in data: # parse the dec/inc data\n if '\\t' in line:\n rec=line.split('\\t') # split each line on tabs to get records\n else:\n rec=line.split() # split each line on whitespace to get records\n DIs.append((float(rec[0]),float(rec[1])))\n#\n kpars=pmag.dokent(DIs,len(DIs))\n output = '%7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %i' % (kpars[\"dec\"],kpars[\"inc\"],kpars[\"Eta\"],kpars[\"Edec\"],kpars[\"Einc\"],kpars[\"Zeta\"],kpars[\"Zdec\"],kpars[\"Zinc\"],kpars[\"n\"])\n if ofile == \"\":\n print(output)\n else:\n out.write(output+'\\n')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef draw_figs(FIGS):\n is_win = True if sys.platform in ['win32', 'win64'] else False\n if not is_win:\n plt.ion()\n for fig in list(FIGS.keys()):\n plt.draw()\n plt.show()\n plt.ioff()\n if is_win:\n # this style basically works for Windows\n plt.draw()\n print(\"You must manually close all plots to continue\")\n plt.show()", "response": "Draw all the matplotlib figures in FIGS, with special handling for the Windows backend."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delticks(fig):\n locs = fig.xaxis.get_ticklocs()\n nlocs = np.delete(locs, list(range(0, len(locs), 2)))\n fig.set_xticks(nlocs)", "response": "Delete every other x-axis tick mark in the matplotlib figure."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef plot_init(fignum, w, h):\n global 
fig_x_pos, fig_y_pos, plt_num\n dpi = 80\n if isServer:\n dpi = 240\n # plt.ion()\n plt_num += 1\n fig = plt.figure(num=fignum, figsize=(w, h), dpi=dpi)\n if (not isServer) and (not set_env.IS_NOTEBOOK):\n plt.get_current_fig_manager().show()\n # plt.get_current_fig_manager().window.wm_geometry('+%d+%d' %\n # (fig_x_pos,fig_y_pos)) # this only works with matplotlib.use('TKAgg')\n fig_x_pos = fig_x_pos + dpi * (w) + 25\n if plt_num == 3:\n plt_num = 0\n fig_x_pos = 25\n fig_y_pos = fig_y_pos + dpi * (h) + 25\n plt.figtext(.02, .01, version_num)\n# plt.connect('button_press_event',click)\n#\n # plt.ioff()\n return fig", "response": "Initializes the matplotlib figure with the given width w and height h."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the cumulative normal distribution function of the variable y with mean ybar and standard deviation sigma", "response": "def gaussfunc(y, ybar, sigma):\n \"\"\"\n cumulative normal distribution function of the variable y\n with mean ybar,standard deviation sigma\n uses expression 7.1.26 from Abramowitz & Stegun\n accuracy better than 1.5e-7 absolute\n Parameters\n _________\n y : input variable\n ybar : mean\n sigma : standard deviation\n\n \"\"\"\n x = old_div((y - ybar), (np.sqrt(2.) 
* sigma))\n t = old_div(1.0, (1.0 + .3275911 * abs(x)))\n erf = 1.0 - np.exp(-x * x) * t * (.254829592 - t * (.284496736 -\n t * (1.421413741 - t * (1.453152027 - t * 1.061405429))))\n erf = abs(erf)\n sign = old_div(x, abs(x))\n return 0.5 * (1.0 + sign * erf)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef k_s(X):\n xbar, sigma = pmag.gausspars(X)\n d, f = 0, 0.\n for i in range(1, len(X) + 1):\n b = old_div(float(i), float(len(X)))\n a = gaussfunc(X[i - 1], xbar, sigma)\n if abs(f - a) > abs(b - a):\n delta = abs(f - a)\n else:\n delta = abs(b - a)\n if delta > d:\n d = delta\n f = b\n return d, xbar, sigma", "response": "Kolmorgorov - Smirnov statistic."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef plot_xy(fignum, X, Y, **kwargs):\n plt.figure(num=fignum)\n# if 'poly' in kwargs.keys():\n# coeffs=np.polyfit(X,Y,kwargs['poly'])\n# polynomial=np.poly1d(coeffs)\n# xs=np.arange(np.min(X),np.max(X))\n# ys=polynomial(xs)\n# plt.plot(xs,ys)\n# print coefs\n# print polynomial\n if 'sym' in list(kwargs.keys()):\n sym = kwargs['sym']\n else:\n sym = 'ro'\n if 'lw' in list(kwargs.keys()):\n lw = kwargs['lw']\n else:\n lw = 1\n if 'xerr' in list(kwargs.keys()):\n plt.errorbar(X, Y, fmt=sym, xerr=kwargs['xerr'])\n if 'yerr' in list(kwargs.keys()):\n plt.errorbar(X, Y, fmt=sym, yerr=kwargs['yerr'])\n if 'axis' in list(kwargs.keys()):\n if kwargs['axis'] == 'semilogx':\n plt.semilogx(X, Y, marker=sym[1], markerfacecolor=sym[0])\n if kwargs['axis'] == 'semilogy':\n plt.semilogy(X, Y, marker=sym[1], markerfacecolor=sym[0])\n if kwargs['axis'] == 'loglog':\n plt.loglog(X, Y, marker=sym[1], markerfacecolor=sym[0])\n else:\n plt.plot(X, Y, sym, linewidth=lw)\n if 'xlab' in list(kwargs.keys()):\n plt.xlabel(kwargs['xlab'])\n if 'ylab' in list(kwargs.keys()):\n plt.ylabel(kwargs['ylab'])\n if 'title' in list(kwargs.keys()):\n 
plt.title(kwargs['title'])\n if 'xmin' in list(kwargs.keys()):\n plt.axis([kwargs['xmin'], kwargs['xmax'],\n kwargs['ymin'], kwargs['ymax']])\n if 'notes' in list(kwargs.keys()):\n for note in kwargs['notes']:\n plt.text(note[0], note[1], note[2])", "response": "This function plots the X and Y of a curie."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nplotting the site mean data", "response": "def plot_site(fignum, SiteRec, data, key):\n \"\"\"\n deprecated (used in ipmag)\n \"\"\"\n print('Site mean data: ')\n print(' dec inc n_lines n_planes kappa R alpha_95 comp coord')\n print(SiteRec['site_dec'], SiteRec['site_inc'], SiteRec['site_n_lines'], SiteRec['site_n_planes'], SiteRec['site_k'],\n SiteRec['site_r'], SiteRec['site_alpha95'], SiteRec['site_comp_name'], SiteRec['site_tilt_correction'])\n print('sample/specimen, dec, inc, n_specs/a95,| method codes ')\n for i in range(len(data)):\n print('%s: %s %s %s / %s | %s' % (data[i]['er_' + key + '_name'], data[i][key + '_dec'], data[i]\n [key + '_inc'], data[i][key + '_n'], data[i][key + '_alpha95'], data[i]['magic_method_codes']))\n plot_slnp(fignum, SiteRec, data, key)\n plot = input(\"s[a]ve plot, [q]uit or to continue: \")\n if plot == 'q':\n print(\"CUL8R\")\n sys.exit()\n if plot == 'a':\n files = {}\n for key in list(EQ.keys()):\n files[key] = site + '_' + key + '.' 
+ fmt\n save_plots(EQ, files)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef plot_qq_norm(fignum, Y, title):\n plt.figure(num=fignum)\n if type(Y) == list:\n Y = np.array(Y)\n Y = np.sort(Y) # sort the data\n n = len(Y)\n d, mean, sigma = k_s(Y)\n dc = old_div(0.886, np.sqrt(float(n)))\n X = [] # list for normal quantile\n for i in range(1, n + 1):\n p = old_div(float(i), float(n + 1))\n X.append(qsnorm(p))\n plt.plot(X, Y, 'ro')\n plt.title(title)\n plt.xlabel('Normal Quantile')\n plt.ylabel('Data Quantile')\n bounds = plt.axis()\n notestr = 'N: ' + '%i' % (n)\n plt.text(-.9 * bounds[1], .9 * bounds[3], notestr)\n notestr = 'mean: ' + '%8.3e' % (mean)\n plt.text(-.9 * bounds[1], .8 * bounds[3], notestr)\n notestr = 'std dev: ' + '%8.3e' % (sigma)\n plt.text(-.9 * bounds[1], .7 * bounds[3], notestr)\n notestr = 'D: ' + '%8.3e' % (d)\n plt.text(-.9 * bounds[1], .6 * bounds[3], notestr)\n notestr = 'Dc: ' + '%8.3e' % (dc)\n plt.text(-.9 * bounds[1], .5 * bounds[3], notestr)\n return d, dc", "response": "Function that makes a Quantile - Quantile - Normal plot for the data Y"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfunctions to plot a Quantile plot of data against a uniform distribution in 0 => 360.", "response": "def plot_qq_unf(fignum, D, title, subplot=False, degrees=True):\n \"\"\"\n plots data against a uniform distribution in 0=>360.\n Parameters\n _________\n fignum : matplotlib figure number\n D : data\n title : title for plot\n subplot : if True, make this number one of two subplots\n degrees : if True, assume that these are degrees\n\n Return\n Mu : Mu statistic (Fisher et al., 1987)\n Mu_crit : critical value of Mu for uniform distribution\n\n Effect\n ______\n makes a Quantile Quantile plot of data\n \"\"\"\n if subplot == True:\n plt.subplot(1, 2, fignum)\n else:\n plt.figure(num=fignum)\n X, Y, dpos, dneg = [], [], 0., 0.\n if degrees:\n D = 
(np.array(D)) % 360\n X = D/D.max()\n X = np.sort(X)\n n = float(len(D))\n i = np.arange(0, len(D))\n Y = (i-0.5)/n\n ds = (i/n)-X\n dpos = ds.max()\n dneg = ds.min()\n plt.plot(Y, X, 'ro')\n v = dneg + dpos # kuiper's v\n # Mu of fisher et al. equation 5.16\n Mu = v * (np.sqrt(n) - 0.567 + (old_div(1.623, (np.sqrt(n)))))\n plt.axis([0, 1., 0., 1.])\n bounds = plt.axis()\n notestr = 'N: ' + '%i' % (n)\n plt.text(.1 * bounds[1], .9 * bounds[3], notestr)\n notestr = 'Mu: ' + '%7.3f' % (Mu)\n plt.text(.1 * bounds[1], .8 * bounds[3], notestr)\n if Mu > 1.347:\n notestr = \"Non-uniform (99%)\"\n elif Mu < 1.207:\n notestr = \"Uniform (95%)\"\n elif Mu > 1.207:\n notestr = \"Uniform (99%)\"\n plt.text(.1 * bounds[1], .7 * bounds[3], notestr)\n plt.text(.1 * bounds[1], .7 * bounds[3], notestr)\n plt.title(title)\n plt.xlabel('Uniform Quantile')\n plt.ylabel('Data Quantile')\n return Mu, 1.207"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nplotting data against an exponential distribution in 0=>90.", "response": "def plot_qq_exp(fignum, I, title, subplot=False):\n \"\"\"\n plots data against an exponential distribution in 0=>90.\n\n Parameters\n _________\n fignum : matplotlib figure number\n I : data\n title : plot title\n subplot : boolean, if True plot as subplot with 1 row, two columns with fignum the plot number\n \"\"\"\n if subplot == True:\n plt.subplot(1, 2, fignum)\n else:\n plt.figure(num=fignum)\n X, Y, dpos, dneg = [], [], 0., 0.\n rad = old_div(np.pi, 180.)\n xsum = 0\n for i in I:\n theta = (90. - i) * rad\n X.append(1. - np.cos(theta))\n xsum += X[-1]\n X.sort()\n n = float(len(X))\n kappa = old_div((n - 1.), xsum)\n for i in range(len(X)):\n p = old_div((float(i) - 0.5), n)\n Y.append(-np.log(1. - p))\n f = 1. 
- np.exp(-kappa * X[i])\n ds = old_div(float(i), n) - f\n if dpos < ds:\n dpos = ds\n ds = f - old_div((float(i) - 1.), n)\n if dneg < ds:\n dneg = ds\n if dneg > dpos:\n ds = dneg\n else:\n ds = dpos\n Me = (ds - (old_div(0.2, n))) * (np.sqrt(n) + 0.26 +\n (old_div(0.5, (np.sqrt(n))))) # Eq. 5.15 from Fisher et al. (1987)\n\n plt.plot(Y, X, 'ro')\n bounds = plt.axis()\n plt.axis([0, bounds[1], 0., bounds[3]])\n notestr = 'N: ' + '%i' % (n)\n plt.text(.1 * bounds[1], .9 * bounds[3], notestr)\n notestr = 'Me: ' + '%7.3f' % (Me)\n plt.text(.1 * bounds[1], .8 * bounds[3], notestr)\n if Me > 1.094:\n notestr = \"Not Exponential\"\n else:\n notestr = \"Exponential (95%)\"\n plt.text(.1 * bounds[1], .7 * bounds[3], notestr)\n plt.title(title)\n plt.xlabel('Exponential Quantile')\n plt.ylabel('Data Quantile')\n return Me, 1.094"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef plot_di(fignum, DIblock):\n global globals\n \"\"\"\n plots directions on equal area net\n Parameters\n _________\n fignum : matplotlib figure number\n DIblock : nested list of dec, inc pairs\n \"\"\"\n X_down, X_up, Y_down, Y_up = [], [], [], [] # initialize some variables\n plt.figure(num=fignum)\n#\n# plot the data - separate upper and lower hemispheres\n#\n for rec in DIblock:\n Up, Down = 0, 0\n XY = pmag.dimap(rec[0], rec[1])\n if rec[1] >= 0:\n X_down.append(XY[0])\n Y_down.append(XY[1])\n else:\n X_up.append(XY[0])\n Y_up.append(XY[1])\n#\n if len(X_down) > 0:\n # plt.scatter(X_down,Y_down,marker='s',c='r')\n plt.scatter(X_down, Y_down, marker='o', c='blue')\n if globals != 0:\n globals.DIlist = X_down\n globals.DIlisty = Y_down\n if len(X_up) > 0:\n # plt.scatter(X_up,Y_up,marker='s',facecolor='none',edgecolor='black')\n plt.scatter(X_up, Y_up, marker='o',\n facecolor='white', edgecolor='blue')\n if globals != 0:\n globals.DIlist = X_up\n globals.DIlisty = Y_up", "response": "Plots the DIblock in the same direction as the current net."} 
{"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfunctions to plot the DIblock and the corresponding symbol", "response": "def plot_di_sym(fignum, DIblock, sym):\n global globals\n \"\"\"\n plots directions on equal area net\n Parameters\n _________\n fignum : matplotlib figure number\n DIblock : nested list of dec, inc pairs\n sym : set matplotlib symbol (e.g., 'bo' for blue circles)\n \"\"\"\n X_down, X_up, Y_down, Y_up = [], [], [], [] # initialize some variables\n plt.figure(num=fignum)\n#\n# plot the data - separate upper and lower hemispheres\n#\n for rec in DIblock:\n Up, Down = 0, 0\n XY = pmag.dimap(rec[0], rec[1])\n if rec[1] >= 0:\n X_down.append(XY[0])\n Y_down.append(XY[1])\n else:\n X_up.append(XY[0])\n Y_up.append(XY[1])\n#\n if 'size' not in list(sym.keys()):\n size = 50\n else:\n size = sym['size']\n if 'edgecolor' not in list(sym.keys()):\n sym['edgecolor'] = 'k'\n if len(X_down) > 0:\n plt.scatter(X_down, Y_down, marker=sym['lower'][0],\n c=sym['lower'][1], s=size, edgecolor=sym['edgecolor'])\n if globals != 0:\n globals.DIlist = X_down\n globals.DIlisty = Y_down\n if len(X_up) > 0:\n plt.scatter(X_up, Y_up, marker=sym['upper'][0],\n c=sym['upper'][1], s=size, edgecolor=sym['edgecolor'])\n if globals != 0:\n globals.DIlist = X_up\n globals.DIlisty = Y_up"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef plot_circ(fignum, pole, ang, col):\n plt.figure(num=fignum)\n D_c, I_c = pmag.circ(pole[0], pole[1], ang)\n X_c_up, Y_c_up = [], []\n X_c_d, Y_c_d = [], []\n for k in range(len(D_c)):\n XY = pmag.dimap(D_c[k], I_c[k])\n if I_c[k] < 0:\n X_c_up.append(XY[0])\n Y_c_up.append(XY[1])\n else:\n X_c_d.append(XY[0])\n Y_c_d.append(XY[1])\n plt.plot(X_c_d, Y_c_d, col + '.', ms=5)\n plt.plot(X_c_up, Y_c_up, 'c.', ms=2)", "response": "function to plot a small circle on an equal area projection plot"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function 
to\nfunction to make a Zijderveld diagrams for a given site.", "response": "def plot_zij(fignum, datablock, angle, s, norm=True):\n \"\"\"\n function to make Zijderveld diagrams\n\n Parameters\n __________\n fignum : matplotlib figure number\n datablock : nested list of [step, dec, inc, M (Am2), type, quality]\n where type is a string, either 'ZI' or 'IZ' for IZZI experiments\n angle : desired rotation in the horizontal plane (0 puts X on X axis)\n s : specimen name\n norm : if True, normalize to initial magnetization = unity\n\n Effects\n _______\n makes a zijderveld plot\n\n \"\"\"\n global globals\n plt.figure(num=fignum)\n plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n amin, amax = 0., -100.\n if norm == 0:\n fact = 1.\n else:\n fact = (1./datablock[0][3]) # normalize to NRM=1\n # convert datablock to DataFrame data with dec,inc, int\n data = pd.DataFrame(datablock)\n if len(data.columns) == 5:\n data.columns = ['treat', 'dec', 'inc', 'int', 'quality']\n if len(data.columns) == 6:\n data.columns = ['treat', 'dec', 'inc', 'int', 'type', 'quality']\n elif len(data.columns) == 7:\n data.columns = ['treat', 'dec', 'inc', 'int', 'type', 'quality', 'y']\n #print (len(data.columns))\n data['int'] = data['int']*fact # normalize\n data['dec'] = (data['dec']-angle) % 360 # adjust X axis angle\n gdata = data[data['quality'].str.contains('g')]\n bdata = data[data['quality'].str.contains('b')]\n forVDS = gdata[['dec', 'inc', 'int']].values\n gXYZ = pd.DataFrame(pmag.dir2cart(forVDS))\n gXYZ.columns = ['X', 'Y', 'Z']\n amax = np.maximum(gXYZ.X.max(), gXYZ.Z.max())\n amin = np.minimum(gXYZ.X.min(), gXYZ.Z.min())\n if amin > 0:\n amin = 0\n bXYZ = pmag.dir2cart(bdata[['dec', 'inc', 'int']].values).transpose()\n# plotting stuff\n if angle != 0:\n tempstr = \"\\n Declination rotated by: \" + str(angle) + '\\n'\n if globals != 0:\n globals.text.insert(globals.END, tempstr)\n globals.Zlist = gXYZ['x'].tolist()\n globals.Zlisty = gXYZ['y'].tolist()\n 
globals.Zlistz = gXYZ['z'].tolist()\n if len(bXYZ) > 0:\n plt.scatter(bXYZ[0], bXYZ[1], marker='d', c='y', s=30)\n plt.scatter(bXYZ[0], bXYZ[2], marker='d', c='y', s=30)\n plt.plot(gXYZ['X'], gXYZ['Y'], 'ro')\n plt.plot(gXYZ['X'], gXYZ['Z'], 'ws', markeredgecolor='blue')\n plt.plot(gXYZ['X'], gXYZ['Y'], 'r-')\n plt.plot(gXYZ['X'], gXYZ['Z'], 'b-')\n for k in range(len(gXYZ)):\n plt.annotate(str(k), (gXYZ['X'][k], gXYZ['Z']\n [k]), ha='left', va='bottom')\n if amin > 0 and amax >0:amin=0 # complete the line\n if amin < 0 and amax <0:amax=0 # complete the line\n xline = [amin, amax]\n # yline=[-amax,-amin]\n yline = [amax, amin]\n zline = [0, 0]\n plt.plot(xline, zline, 'k-')\n plt.plot(zline, xline, 'k-')\n if angle != 0:\n xlab = \"X: rotated to Dec = \" + '%7.1f' % (angle)\n if angle == 0:\n xlab = \"X: rotated to Dec = \" + '%7.1f' % (angle)\n plt.xlabel(xlab)\n plt.ylabel(\"Circles: Y; Squares: Z\")\n tstring = s + ': NRM = ' + '%9.2e' % (datablock[0][3])\n plt.axis([amin, amax, amax, amin])\n plt.axis(\"equal\")\n plt.title(tstring)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nplots the magnetization against the current state of the object.", "response": "def plot_mag(fignum, datablock, s, num, units, norm):\n \"\"\"\n plots magnetization against (de)magnetizing temperature or field\n\n Parameters\n _________________\n fignum : matplotlib figure number for plotting\n datablock : nested list of [step, 0, 0, magnetization, 1,quality]\n s : string for title\n num : matplotlib figure number, can set to 1\n units : [T,K,U] for tesla, kelvin or arbitrary\n norm : [True,False] if True, normalize\n\n Effects\n ______\n plots figure\n \"\"\"\n global globals, graphmenu\n Ints = []\n for plotrec in datablock:\n Ints.append(plotrec[3])\n Ints.sort()\n plt.figure(num=fignum)\n T, M, Tv, recnum = [], [], [], 0\n Mex, Tex, Vdif = [], [], []\n recbak = []\n for rec in datablock:\n if rec[5] == 'g':\n if units == \"T\":\n 
T.append(rec[0] * 1e3)\n Tv.append(rec[0] * 1e3)\n if recnum > 0:\n Tv.append(rec[0] * 1e3)\n elif units == \"U\":\n T.append(rec[0])\n Tv.append(rec[0])\n if recnum > 0:\n Tv.append(rec[0])\n elif units == \"K\":\n T.append(rec[0] - 273)\n Tv.append(rec[0] - 273)\n if recnum > 0:\n Tv.append(rec[0] - 273)\n elif \"T\" in units and \"K\" in units:\n if rec[0] < 1.:\n T.append(rec[0] * 1e3)\n Tv.append(rec[0] * 1e3)\n else:\n T.append(rec[0] - 273)\n Tv.append(rec[0] - 273)\n if recnum > 0:\n Tv.append(rec[0] - 273)\n else:\n T.append(rec[0])\n Tv.append(rec[0])\n if recnum > 0:\n Tv.append(rec[0])\n if norm:\n M.append(old_div(rec[3], Ints[-1]))\n else:\n M.append(rec[3])\n if recnum > 0 and len(rec) > 0 and len(recbak) > 0:\n v = []\n if recbak[0] != rec[0]:\n V0 = pmag.dir2cart([recbak[1], recbak[2], recbak[3]])\n V1 = pmag.dir2cart([rec[1], rec[2], rec[3]])\n for el in range(3):\n v.append(abs(V1[el] - V0[el]))\n vdir = pmag.cart2dir(v)\n # append vector difference\n Vdif.append(old_div(vdir[2], Ints[-1]))\n Vdif.append(old_div(vdir[2], Ints[-1]))\n recbak = []\n for el in rec:\n recbak.append(el)\n delta = .005 * M[0]\n if num == 1:\n if recnum % 2 == 0:\n plt.text(T[-1] + delta, M[-1],\n (' ' + str(recnum)), fontsize=9)\n recnum += 1\n else:\n if rec[0] < 200:\n Tex.append(rec[0] * 1e3)\n if rec[0] >= 200:\n Tex.append(rec[0] - 273)\n Mex.append(old_div(rec[3], Ints[-1]))\n recnum += 1\n if globals != 0:\n globals.MTlist = T\n globals.MTlisty = M\n if len(Mex) > 0 and len(Tex) > 0:\n plt.scatter(Tex, Mex, marker='d', color='k')\n if len(Vdif) > 0:\n Vdif.append(old_div(vdir[2], Ints[-1]))\n Vdif.append(0)\n if Tv:\n Tv.append(Tv[-1])\n plt.plot(T, M)\n plt.plot(T, M, 'ro')\n if len(Tv) == len(Vdif) and norm:\n plt.plot(Tv, Vdif, 'g-')\n if units == \"T\":\n plt.xlabel(\"Step (mT)\")\n elif units == \"K\":\n plt.xlabel(\"Step (C)\")\n elif units == \"J\":\n plt.xlabel(\"Step (J)\")\n else:\n plt.xlabel(\"Step [mT,C]\")\n if norm == 1:\n plt.ylabel(\"Fractional 
Magnetization\")\n if norm == 0:\n plt.ylabel(\"Magnetization\")\n plt.axvline(0, color='k')\n plt.axhline(0, color='k')\n tstring = s\n plt.title(tstring)\n plt.draw()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef plot_zed(ZED, datablock, angle, s, units):\n for fignum in list(ZED.keys()):\n fig = plt.figure(num=ZED[fignum])\n plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n DIbad, DIgood = [], []\n for rec in datablock:\n if cb.is_null(rec[1],zero_as_null=False):\n print('-W- You are missing a declination for specimen', s, ', skipping this row')\n continue\n if cb.is_null(rec[2],zero_as_null=False):\n print('-W- You are missing an inclination for specimen', s, ', skipping this row')\n continue\n if rec[5] == 'b':\n DIbad.append((rec[1], rec[2]))\n else:\n DIgood.append((rec[1], rec[2]))\n badsym = {'lower': ['+', 'g'], 'upper': ['x', 'c']}\n if len(DIgood) > 0:\n plot_eq(ZED['eqarea'], DIgood, s)\n if len(DIbad) > 0:\n plot_di_sym(ZED['eqarea'], DIbad, badsym)\n elif len(DIbad) > 0:\n plot_eq_sym(ZED['eqarea'], DIbad, s, badsym)\n AngleX, AngleY = [], []\n XY = pmag.dimap(angle, 90.)\n AngleX.append(XY[0])\n AngleY.append(XY[1])\n XY = pmag.dimap(angle, 0.)\n AngleX.append(XY[0])\n AngleY.append(XY[1])\n plt.figure(num=ZED['eqarea'])\n # Draw a line for Zijderveld horizontal axis\n plt.plot(AngleX, AngleY, 'r-')\n if AngleX[-1] == 0:\n AngleX[-1] = 0.01\n plt.text(AngleX[-1] + (old_div(AngleX[-1], abs(AngleX[-1]))) * .1,\n AngleY[-1] + (old_div(AngleY[-1], abs(AngleY[-1]))) * .1, 'X')\n norm = 1\n #if units==\"U\": norm=0\n # if there are NO good points, don't try to plot\n if DIgood:\n plot_mag(ZED['demag'], datablock, s, 1, units, norm)\n plot_zij(ZED['zijd'], datablock, angle, s, norm)\n else:\n ZED.pop('demag')\n ZED.pop('zijd')\n return ZED", "response": "function to make a zijderveld plot and a demag"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 
function that can\nfunction to plot the great circle on the equal area projection and plot start and end points of the current object", "response": "def plot_dir(ZED, pars, datablock, angle):\n \"\"\"\n function to put the great circle on the equal area projection\n and plot start and end points of calculation\n\n DEPRECATED (used in zeq_magic)\n \"\"\"\n#\n# find start and end points from datablock\n#\n if pars[\"calculation_type\"] == 'DE-FM':\n x, y = [], []\n plt.figure(num=ZED['eqarea'])\n XY = pmag.dimap(pars[\"specimen_dec\"], pars[\"specimen_inc\"])\n x.append(XY[0])\n y.append(XY[1])\n plt.scatter(x, y, marker='^', s=80, c='r')\n plt.show()\n return\n StartDir, EndDir = [0, 0, 1.], [0, 0, 1.]\n for rec in datablock:\n if rec[0] == pars[\"measurement_step_min\"]:\n StartDir[0] = rec[1]\n StartDir[1] = rec[2]\n if pars[\"specimen_direction_type\"] == 'l':\n StartDir[2] = rec[3]/datablock[0][3]\n if rec[0] == pars[\"measurement_step_max\"]:\n EndDir[0] = rec[1]\n EndDir[1] = rec[2]\n if pars[\"specimen_direction_type\"] == 'l':\n EndDir[2] = rec[3]/datablock[0][3]\n\n#\n# put them on the plots\n#\n x, y, z, pole = [], [], [], []\n if pars[\"calculation_type\"] != 'DE-BFP':\n plt.figure(num=ZED['eqarea'])\n XY = pmag.dimap(pars[\"specimen_dec\"], pars[\"specimen_inc\"])\n x.append(XY[0])\n y.append(XY[1])\n plt.scatter(x, y, marker='d', s=80, c='b')\n x, y, z = [], [], []\n StartDir[0] = StartDir[0] - angle\n EndDir[0] = EndDir[0] - angle\n XYZs = pmag.dir2cart(StartDir)\n x.append(XYZs[0])\n y.append(XYZs[1])\n z.append(XYZs[2])\n XYZe = pmag.dir2cart(EndDir)\n x.append(XYZe[0])\n y.append(XYZe[1])\n z.append(XYZe[2])\n plt.figure(num=ZED['zijd'])\n plt.scatter(x, y, marker='d', s=80, c='g')\n plt.scatter(x, z, marker='d', s=80, c='g')\n plt.scatter(x, y, marker='o', c='r', s=20)\n plt.scatter(x, z, marker='s', c='w', s=20)\n#\n# put on best fit line\n# new way (from Jeff Gee's favorite website http://GET THIS):\n# 
P1=pmag.dir2cart([(pars[\"specimen_dec\"]-angle),pars[\"specimen_inc\"],1.]) # princ comp.\n# P2=pmag.dir2cart([(pars[\"specimen_dec\"]-angle-180.),-pars[\"specimen_inc\"],1.]) # antipode of princ comp.\n# P21,Ps,Pe,Xs,Xe=[],[],[],[],[]\n# for i in range(3):\n# P21.append(P2[i]-P1[i])\n# Ps.append(XYZs[i]-P1[i])\n# Pe.append(XYZe[i]-P1[i])\n# norm=pmag.cart2dir(P21)[2]\n# us=(Ps[0]*P21[0]+Ps[1]*P21[1]+Ps[2]*P21[2])/(norm**2)\n# ue=(Pe[0]*P21[0]+Pe[1]*P21[1]+Pe[2]*P21[2])/(norm**2)\n# px,py,pz=[],[],[]\n# for i in range(3):\n# Xs.append(P1[i]+us*(P2[i]-P1[i]))\n# Xe.append(P1[i]+ue*(P2[i]-P1[i]))\n# old way:\n cm = pars[\"center_of_mass\"]\n if cm != [0., 0., 0.]:\n cm = np.array(pars[\"center_of_mass\"])/datablock[0][3]\n cmDir = pmag.cart2dir(cm)\n cmDir[0] = cmDir[0] - angle\n cm = pmag.dir2cart(cmDir)\n diff = []\n for i in range(3):\n diff.append(XYZe[i] - XYZs[i])\n R = np.sqrt(diff[0]**2 + diff[1]**2 + diff[2]**2)\n P = pmag.dir2cart(\n ((pars[\"specimen_dec\"] - angle), pars[\"specimen_inc\"], R/2.5))\n px, py, pz = [], [], []\n px.append((cm[0] + P[0]))\n py.append((cm[1] + P[1]))\n pz.append((cm[2] + P[2]))\n px.append((cm[0] - P[0]))\n py.append((cm[1] - P[1]))\n pz.append((cm[2] - P[2]))\n\n plt.plot(px, py, 'g', linewidth=2)\n plt.plot(px, pz, 'g', linewidth=2)\n plt.axis(\"equal\")\n else:\n plt.figure(num=ZED['eqarea'])\n XY = pmag.dimap(StartDir[0], StartDir[1])\n x.append(XY[0])\n y.append(XY[1])\n XY = pmag.dimap(EndDir[0], EndDir[1])\n x.append(XY[0])\n y.append(XY[1])\n plt.scatter(x, y, marker='d', s=80, c='b')\n pole.append(pars[\"specimen_dec\"])\n pole.append(pars[\"specimen_inc\"])\n plot_circ(ZED['eqarea'], pole, 90., 'g')\n plt.xlim((-1., 1.))\n plt.ylim((-1., 1.))\n plt.axis(\"equal\")"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmaking Arai plots for Thellier-Thellier type experiments Parameters __________ fignum : figure number of matplotlib plot object indata : nested list of data for Arai plots: the 
araiblock of data prepared by pmag.sortarai() s : specimen name units : [K, J, \"\"] (kelvin, joules, unknown) Effects _______ makes the Arai plot", "response": "def plot_arai(fignum, indata, s, units):\n \"\"\"\n makes Arai plots for Thellier-Thellier type experiments\n\n Parameters\n __________\n fignum : figure number of matplotlib plot object\n indata : nested list of data for Arai plots:\n the araiblock of data prepared by pmag.sortarai()\n s : specimen name\n units : [K, J, \"\"] (kelvin, joules, unknown)\n Effects\n _______\n makes the Arai plot\n \"\"\"\n global globals\n plt.figure(num=fignum)\n plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n x, y, x_zi, y_zi, x_iz, y_iz, xptrm, yptrm, xptrmt, yptrmt = [\n ], [], [], [], [], [], [], [], [], []\n xzptrm, yzptrm = [], [] # zero field ptrm checks\n zptrm_check = []\n first_Z, first_I, ptrm_check, ptrm_tail, zptrm_check = indata[\n 0], indata[1], indata[2], indata[3], indata[4]\n if len(indata) > 6:\n if len(indata[-1]) > 1:\n s = s + \":PERP\" # there are Delta checks, must be a LP-PI-M-S perp experiment\n recnum, yes, Nptrm, Nptrmt, diffcum = 0, 0, 0, 0, 0\n# plot the NRM-pTRM data\n forVDS = []\n for zrec in first_Z:\n forVDS.append([zrec[1], zrec[2], old_div(zrec[3], first_Z[0][3])])\n ZI = zrec[4]\n if zrec[0] == '0':\n irec = ['0', 0, 0, 0]\n if zrec[0] == '273' and units == 'K':\n irec = ['273', 0, 0, 0]\n else:\n for irec in first_I:\n if irec[0] == zrec[0]:\n break\n# save the NRM data used for calculation in Vi\n x.append(old_div(irec[3], first_Z[0][3]))\n y.append(old_div(zrec[3], first_Z[0][3]))\n if ZI == 1:\n x_zi.append(old_div(irec[3], first_Z[0][3]))\n y_zi.append(old_div(zrec[3], first_Z[0][3]))\n else:\n x_iz.append(old_div(irec[3], first_Z[0][3]))\n y_iz.append(old_div(zrec[3], first_Z[0][3]))\n plt.text(x[-1], y[-1], (' ' + str(recnum)), fontsize=9)\n recnum += 1\n# now deal with ptrm checks.\n if len(ptrm_check) != 0:\n for prec in ptrm_check:\n step = prec[0]\n for 
zrec in first_Z:\n if zrec[0] == step:\n break\n xptrm.append(old_div(prec[3], first_Z[0][3]))\n yptrm.append(old_div(zrec[3], first_Z[0][3]))\n# now deal with zptrm checks.\n if len(zptrm_check) != 0:\n for prec in zptrm_check:\n step = prec[0]\n for zrec in first_Z:\n if zrec[0] == step:\n break\n xzptrm.append(old_div(prec[3], first_Z[0][3]))\n yzptrm.append(old_div(zrec[3], first_Z[0][3]))\n# and the pTRM tails\n if len(ptrm_tail) != 0:\n for trec in ptrm_tail:\n step = trec[0]\n for irec in first_I:\n if irec[0] == step:\n break\n xptrmt.append(old_div(irec[3], first_Z[0][3]))\n yptrmt.append((old_div(trec[3], first_Z[0][3])))\n# now plot stuff\n if len(x) == 0:\n print(\"Can't do nuttin for ya\")\n return\n try:\n if len(x_zi) > 0:\n plt.scatter(x_zi, y_zi, marker='o', c='r',\n edgecolors=\"none\") # zero field-infield\n if len(x_iz) > 0:\n plt.scatter(x_iz, y_iz, marker='s', c='b',\n faceted=\"True\") # infield-zerofield\n except:\n if len(x_zi) > 0:\n plt.scatter(x_zi, y_zi, marker='o', c='r') # zero field-infield\n if len(x_iz) > 0:\n plt.scatter(x_iz, y_iz, marker='s', c='b') # infield-zerofield\n plt.plot(x, y, 'r')\n if globals != 0:\n globals.MTlist = x\n globals.MTlisty = y\n if len(xptrm) > 0:\n plt.scatter(xptrm, yptrm, marker='^', c='g', s=80)\n if len(xzptrm) > 0:\n plt.scatter(xzptrm, yzptrm, marker='v', c='c', s=80)\n if len(xptrmt) > 0:\n plt.scatter(xptrmt, yptrmt, marker='s', c='b', s=80)\n try:\n plt.axhline(0, color='k')\n plt.axvline(0, color='k')\n except:\n pass\n plt.xlabel(\"pTRM gained\")\n plt.ylabel(\"NRM remaining\")\n tstring = s + ': NRM = ' + '%9.2e' % (first_Z[0][3])\n plt.title(tstring)\n# put on VDS\n vds = pmag.dovds(forVDS)\n plt.axhline(vds, color='b')\n plt.text(1., vds - .1, ('VDS '), fontsize=9)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef plot_np(fignum, indata, s, units):\n global globals\n first_Z, first_I, ptrm_check, ptrm_tail = indata[0], indata[1], indata[2], 
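The core of the Arai construction in plot_arai is pairing each zero-field step with the in-field step at the same treatment level and normalizing both moments by the initial NRM. A toy sketch with hypothetical records (real araiblock rows carry more fields than the four shown here):

```python
# hypothetical minimal rows: [treatment step, dec, inc, moment]
first_Z = [[0, 0, 0, 1.0], [100, 1, 2, 0.8], [200, 2, 3, 0.5]]
first_I = [[0, 0, 0, 0.0], [100, 5, 6, 0.3], [200, 7, 8, 0.6]]

nrm0 = first_Z[0][3]
x, y = [], []
for zrec in first_Z:
    # find the in-field record taken at the same treatment step
    irec = next((r for r in first_I if r[0] == zrec[0]), None)
    if irec is None:
        continue
    x.append(irec[3] / nrm0)   # pTRM gained
    y.append(zrec[3] / nrm0)   # NRM remaining
```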
indata[3]\n plt.figure(num=fignum)\n plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n X, Y, recnum = [], [], 0\n#\n for rec in first_Z:\n if units == \"K\":\n if rec[0] != 0:\n X.append(rec[0] - 273.)\n else:\n X.append(rec[0])\n if (units == \"J\") or (not units) or (units == \"T\"):\n X.append(rec[0])\n Y.append(old_div(rec[3], first_Z[0][3]))\n delta = .02 * Y[0]\n if recnum % 2 == 0:\n plt.text(X[-1] - delta, Y[-1] + delta,\n (' ' + str(recnum)), fontsize=9)\n recnum += 1\n plt.plot(X, Y)\n plt.scatter(X, Y, marker='o', color='b')\n X, Y = [], []\n for rec in first_I:\n if units == \"K\":\n if rec[0] != 0:\n X.append(rec[0] - 273)\n else:\n X.append(rec[0])\n if (units == \"J\") or (not units) or (units == \"T\"):\n X.append(rec[0])\n Y.append(old_div(rec[3], first_Z[0][3]))\n if globals != 0:\n globals.DIlist = X\n globals.DIlisty = Y\n plt.plot(X, Y)\n plt.scatter(X, Y, marker='s', color='r')\n plt.ylabel(\"Circles: NRM; Squares: pTRM\")\n if units == \"K\":\n plt.xlabel(\"Temperature (C)\")\n elif units == \"J\":\n plt.xlabel(\"Microwave Energy (J)\")\n else:\n plt.xlabel(\"\")\n title = s + \": NRM = \" + '%9.2e' % (first_Z[0][3])\n plt.title(title)\n plt.axhline(y=0, xmin=0, xmax=1, color='k')\n plt.axvline(x=0, ymin=0, ymax=1, color='k')", "response": "This function plots the data for a specific Thellier - Thellier type experiment."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef plot_arai_zij(ZED, araiblock, zijdblock, s, units):\n angle = zijdblock[0][1]\n norm = 1\n if units == \"U\":\n norm = 0\n plot_arai(ZED['arai'], araiblock, s, units)\n plot_teq(ZED['eqarea'], araiblock, s, \"\")\n plot_zij(ZED['zijd'], zijdblock, angle, s, norm)\n plot_np(ZED['deremag'], araiblock, s, units)\n return ZED", "response": "Plots the Arai and Zijderveld diagrams for Thellier - Thellier experiments."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function 
for\nplotting the b - element of the araiblock.", "response": "def plot_b(Figs, araiblock, zijdblock, pars):\n \"\"\"\n deprecated (used in thellier_magic/microwave_magic)\n \"\"\"\n angle = zijdblock[0][1]\n plotblock = []\n Dir, zx, zy, zz, ax, ay = [], [], [], [], [], []\n zstart, zend = 0, len(zijdblock)\n first_Z, first_I = araiblock[0], araiblock[1]\n for rec in zijdblock:\n if rec[0] == pars[\"measurement_step_min\"]:\n Dir.append((rec[1] - angle, rec[2],\n old_div(rec[3], zijdblock[0][3])))\n if rec[0] == pars[\"measurement_step_max\"]:\n Dir.append((rec[1] - angle, rec[2],\n old_div(rec[3], zijdblock[0][3])))\n for drec in Dir:\n cart = pmag.dir2cart(drec)\n zx.append(cart[0])\n # zy.append(-cart[1])\n # zz.append(-cart[2])\n zy.append(cart[1])\n zz.append(cart[2])\n if len(zx) > 0:\n plt.figure(num=Figs['zijd'])\n plt.scatter(zx, zy, marker='d', s=100, c='y')\n plt.scatter(zx, zz, marker='d', s=100, c='y')\n plt.axis(\"equal\")\n ax.append(old_div(first_I[0][3], first_Z[0][3]))\n ax.append(old_div(first_I[-1][3], first_Z[0][3]))\n ay.append(old_div(first_Z[0][3], first_Z[0][3]))\n ay.append(old_div(first_Z[-1][3], first_Z[0][3]))\n for k in range(len(first_Z)):\n if first_Z[k][0] == pars[\"measurement_step_min\"]:\n ay[0] = (old_div(first_Z[k][3], first_Z[0][3]))\n if first_Z[k][0] == pars[\"measurement_step_max\"]:\n ay[1] = (old_div(first_Z[k][3], first_Z[0][3]))\n if first_I[k][0] == pars[\"measurement_step_min\"]:\n ax[0] = (old_div(first_I[k][3], first_Z[0][3]))\n if first_I[k][0] == pars[\"measurement_step_max\"]:\n ax[1] = (old_div(first_I[k][3], first_Z[0][3]))\n new_Z, new_I = [], []\n for zrec in first_Z:\n if zrec[0] >= pars['measurement_step_min'] and zrec[0] <= pars['measurement_step_max']:\n new_Z.append(zrec)\n for irec in first_I:\n if irec[0] >= pars['measurement_step_min'] and irec[0] <= pars['measurement_step_max']:\n new_I.append(irec)\n newblock = [new_Z, new_I]\n plot_teq(Figs['eqarea'], newblock, \"\", pars)\n 
plt.figure(num=Figs['arai'])\n plt.scatter(ax, ay, marker='d', s=100, c='y')\n#\n# find midpoint between two endpoints\n#\n sy = []\n sy.append((pars[\"specimen_b\"] * ax[0] +\n old_div(pars[\"specimen_ytot\"], first_Z[0][3])))\n sy.append((pars[\"specimen_b\"] * ax[1] +\n old_div(pars[\"specimen_ytot\"], first_Z[0][3])))\n plt.plot(ax, sy, 'g', linewidth=2)\n bounds = plt.axis()\n if pars['specimen_grade'] != '':\n notestr = 'Grade: ' + pars[\"specimen_grade\"]\n plt.text(.7 * bounds[1], .9 * bounds[3], notestr)\n notestr = 'B: ' + '%6.2f' % (pars[\"specimen_int\"] * 1e6) + ' uT'\n plt.text(.7 * bounds[1], .8 * bounds[3], notestr)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmaking a single line plot on a great circle with alpha 95 and mean deprecated ( used in pmagplotlib", "response": "def plot_slnp(fignum, SiteRec, datablock, key):\n \"\"\"\n plots lines and planes on a great circle with alpha 95 and mean\n deprecated (used in pmagplotlib)\n \"\"\"\n# make the stereonet\n plt.figure(num=fignum)\n plot_net(fignum)\n s = SiteRec['er_site_name']\n#\n# plot on the data\n#\n coord = SiteRec['site_tilt_correction']\n title = ''\n if coord == '-1':\n title = s + \": specimen coordinates\"\n if coord == '0':\n title = s + \": geographic coordinates\"\n if coord == '100':\n title = s + \": tilt corrected coordinates\"\n DIblock, GCblock = [], []\n for plotrec in datablock:\n if plotrec[key + '_direction_type'] == 'p': # direction is pole to plane\n GCblock.append(\n (float(plotrec[key + \"_dec\"]), float(plotrec[key + \"_inc\"])))\n else: # assume direction is a directed line\n DIblock.append(\n (float(plotrec[key + \"_dec\"]), float(plotrec[key + \"_inc\"])))\n if len(DIblock) > 0:\n plot_di(fignum, DIblock) # plot directed lines\n if len(GCblock) > 0:\n for pole in GCblock:\n plot_circ(fignum, pole, 90., 'g') # plot directed lines\n#\n# put on the mean direction\n#\n x, y = [], []\n XY = pmag.dimap(float(SiteRec[\"site_dec\"]), 
float(SiteRec[\"site_inc\"]))\n x.append(XY[0])\n y.append(XY[1])\n plt.scatter(x, y, marker='d', s=80, c='g')\n plt.title(title)\n#\n# get the alpha95\n#\n Xcirc, Ycirc = [], []\n Da95, Ia95 = pmag.circ(float(SiteRec[\"site_dec\"]), float(\n SiteRec[\"site_inc\"]), float(SiteRec[\"site_alpha95\"]))\n for k in range(len(Da95)):\n XY = pmag.dimap(Da95[k], Ia95[k])\n Xcirc.append(XY[0])\n Ycirc.append(XY[1])\n plt.plot(Xcirc, Ycirc, 'g')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef plot_lnp(fignum, s, datablock, fpars, direction_type_key):\n# make the stereonet\n plot_net(fignum)\n#\n# plot on the data\n#\n dec_key, inc_key, tilt_key = 'dec', 'inc', 'tilt_correction'\n if 'dir_dec' in datablock[0].keys(): # this is data model 3.0\n dec_key, inc_key, tilt_key = 'dir_dec', 'dir_inc', 'dir_tilt_correction'\n coord = datablock[0][tilt_key]\n title = s\n if coord == '-1':\n title = title + \": specimen coordinates\"\n if coord == '0':\n title = title + \": geographic coordinates\"\n if coord == '100':\n title = title + \": tilt corrected coordinates\"\n DIblock, GCblock = [], []\n for plotrec in datablock:\n if plotrec[direction_type_key] == 'p': # direction is pole to plane\n GCblock.append((float(plotrec[dec_key]), float(plotrec[inc_key])))\n else: # assume direction is a directed line\n DIblock.append((float(plotrec[dec_key]), float(plotrec[inc_key])))\n if len(DIblock) > 0:\n plot_di(fignum, DIblock) # plot directed lines\n if len(GCblock) > 0:\n for pole in GCblock:\n plot_circ(fignum, pole, 90., 'g') # plot directed lines\n#\n# put on the mean direction\n#\n x, y = [], []\n XY = pmag.dimap(float(fpars[\"dec\"]), float(fpars[\"inc\"]))\n x.append(XY[0])\n y.append(XY[1])\n plt.figure(num=fignum)\n plt.scatter(x, y, marker='d', s=80, c='g')\n plt.title(title)\n#\n# get the alpha95\n#\n Xcirc, Ycirc = [], []\n Da95, Ia95 = pmag.circ(float(fpars[\"dec\"]), float(\n fpars[\"inc\"]), 
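plot_slnp and plot_lnp both rely on pmag.dimap to place a dec/inc direction on the equal area net. Judging from the R-scaling that appears later in plot_ell (R = sqrt(1 - |z|) / sqrt(x**2 + y**2), with plot coordinates y*R, x*R), dimap presumably works like the following stdlib sketch; this is an inference from the surrounding code, not the PmagPy source:

```python
import math

def dimap(dec, inc):
    # equal-area projection of a direction to plot (x, y); vertical
    # directions map to the origin, horizontal ones to the unit circle
    d, i = math.radians(dec), math.radians(abs(inc))
    x = math.cos(d) * math.cos(i)
    y = math.sin(d) * math.cos(i)
    z = math.sin(i)
    r_h = math.sqrt(x * x + y * y)
    if r_h == 0:
        return (0.0, 0.0)
    R = math.sqrt(1.0 - z) / r_h
    return (y * R, x * R)
```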
float(fpars[\"alpha95\"]))\n for k in range(len(Da95)):\n XY = pmag.dimap(Da95[k], Ia95[k])\n Xcirc.append(XY[0])\n Ycirc.append(XY[1])\n plt.plot(Xcirc, Ycirc, 'g')", "response": "Makes a lines and planes (LnP) equal area plot for s, including the Fisher mean direction with its alpha95 confidence circle."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nplots directions on an equal area projection", "response": "def plot_eq(fignum, DIblock, s):\n \"\"\"\n plots directions on eqarea projection\n Parameters\n __________\n fignum : matplotlib figure number\n DIblock : nested list of dec/inc pairs\n s : specimen name\n \"\"\"\n# make the stereonet\n plt.figure(num=fignum)\n if len(DIblock) < 1:\n return\n # plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n plot_net(fignum)\n#\n# put on the directions\n#\n plot_di(fignum, DIblock) # plot directions\n plt.axis(\"equal\")\n plt.text(-1.1, 1.15, s)\n plt.draw()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nplotting the directions in DIblock on an equal area projection using a specified symbol.", "response": "def plot_eq_sym(fignum, DIblock, s, sym):\n \"\"\"\n plots directions with specified symbol\n Parameters\n __________\n fignum : matplotlib figure number\n DIblock : nested list of dec/inc pairs\n s : specimen name\n sym : matplotlib symbol (e.g., 'bo' for blue circle)\n \"\"\"\n# make the stereonet\n plt.figure(num=fignum)\n if len(DIblock) < 1:\n return\n # plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n plot_net(fignum)\n#\n# put on the directions\n#\n plot_di_sym(fignum, DIblock, sym) # plot directions with symbols in sym\n plt.axis(\"equal\")\n plt.text(-1.1, 1.15, s)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nplot the directions of pTRM and zero field steps on an equal area projection.", "response": "def plot_teq(fignum, araiblock, s, pars):\n \"\"\"\n plots directions of pTRM steps and zero field steps\n\n Parameters\n __________\n fignum : figure number for matplotlib object\n araiblock : nested list of data from 
pmag.sortarai()\n s : specimen name\n pars : default is \"\",\n otherwise is dictionary with keys:\n 'measurement_step_min' and 'measurement_step_max'\n\n Effects\n _______\n makes the equal area projection with color coded symbols\n red circles: ZI steps\n blue squares: IZ steps\n yellow : pTRM steps\n\n\n \"\"\"\n first_Z, first_I = araiblock[0], araiblock[1]\n# make the stereonet\n plt.figure(num=fignum)\n plt.clf()\n ZIblock, IZblock, pTblock = [], [], []\n for zrec in first_Z: # sort out the zerofield steps\n if zrec[4] == 1: # this is a ZI step\n ZIblock.append([zrec[1], zrec[2]])\n else:\n IZblock.append([zrec[1], zrec[2]])\n plot_net(fignum)\n if pars != \"\":\n min, max = float(pars[\"measurement_step_min\"]), float(\n pars[\"measurement_step_max\"])\n else:\n min, max = first_I[0][0], first_I[-1][0]\n for irec in first_I:\n if irec[1] != 0 and irec[1] != 0 and irec[0] >= min and irec[0] <= max:\n pTblock.append([irec[1], irec[2]])\n if len(ZIblock) < 1 and len(IZblock) < 1 and len(pTblock) < 1:\n return\n if not isServer:\n plt.figtext(.02, .01, version_num)\n#\n# put on the directions\n#\n sym = {'lower': ['o', 'r'], 'upper': ['o', 'm']}\n if len(ZIblock) > 0:\n plot_di_sym(fignum, ZIblock, sym) # plot ZI directions\n sym = {'lower': ['s', 'b'], 'upper': ['s', 'c']}\n if len(IZblock) > 0:\n plot_di_sym(fignum, IZblock, sym) # plot IZ directions\n sym = {'lower': ['^', 'g'], 'upper': ['^', 'y']}\n if len(pTblock) > 0:\n plot_di_sym(fignum, pTblock, sym) # plot pTRM directions\n plt.axis(\"equal\")\n plt.text(-1.1, 1.15, s)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef save_plots(Figs, filenames, **kwargs):\n saved = []\n for key in list(Figs.keys()):\n try:\n plt.figure(num=Figs[key])\n fname = filenames[key]\n if set_env.IS_WIN: # always truncate filenames if on Windows\n fname = os.path.split(fname)[1]\n if not isServer: # remove illegal ':' character for windows\n fname = 
fname.replace(':', '_')\n if 'incl_directory' in kwargs.keys() and not set_env.IS_WIN:\n if kwargs['incl_directory']:\n pass # do not flatten file name\n else:\n fname = fname.replace('/', '-') # flatten file name\n else:\n fname = fname.replace('/', '-') # flatten file name\n if 'dpi' in list(kwargs.keys()):\n plt.savefig(fname, dpi=kwargs['dpi'])\n elif isServer:\n plt.savefig(fname, dpi=240)\n else:\n plt.savefig(fname)\n if verbose:\n print(Figs[key], \" saved in \", fname)\n saved.append(fname)\n plt.close(Figs[key])\n except Exception as ex:\n print(type(ex), ex)\n print('could not save: ', Figs[key], filenames[key])\n print(\"output file format not supported \")\n return saved", "response": "Saves each matplotlib figure in Figs to its corresponding file name in filenames and returns the list of saved file names."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nplotting eigenvector directions of S vectors", "response": "def plot_evec(fignum, Vs, symsize, title):\n \"\"\"\n plots eigenvector directions of S vectors\n\n Parameters\n ________\n fignum : matplotlib figure number\n Vs : nested list of eigenvectors\n symsize : size in pts for symbol\n title : title for plot\n \"\"\"\n#\n plt.figure(num=fignum)\n plt.text(-1.1, 1.15, title)\n # plot V1s as squares, V2s as triangles and V3s as circles\n symb, symkey = ['s', 'v', 'o'], 0\n col = ['r', 'b', 'k'] # plot V1s red, V2s blue, V3s black\n for VEC in range(3):\n X, Y = [], []\n for Vdirs in Vs:\n #\n #\n # plot the V1 data first\n #\n XY = pmag.dimap(Vdirs[VEC][0], Vdirs[VEC][1])\n X.append(XY[0])\n Y.append(XY[1])\n plt.scatter(X, Y, s=symsize,\n marker=symb[VEC], c=col[VEC], edgecolors='none')\n plt.axis(\"equal\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncalculating/plotting points on an ellipse about Pdec, Pinc with angles beta and gamma.", "response": "def plot_ell(fignum, pars, col, lower, plot):\n \"\"\"\n function to calculate/plot points on an ellipse about Pdec,Pinc with angle 
beta,gamma\n Parameters\n _________\n fignum : matplotlib figure number\n pars : list of [Pdec, Pinc, beta, Bdec, Binc, gamma, Gdec, Ginc ]\n where P is direction, Bdec,Binc are beta direction, and Gdec,Ginc are gamma direction\n col : color for ellipse\n lower : boolean, if True, lower hemisphere projection\n plot : boolean, if False, return the points, if False, make the plot\n \"\"\"\n plt.figure(num=fignum)\n rad = old_div(np.pi, 180.)\n Pdec, Pinc, beta, Bdec, Binc, gamma, Gdec, Ginc = pars[0], pars[\n 1], pars[2], pars[3], pars[4], pars[5], pars[6], pars[7]\n if beta > 90. or gamma > 90:\n beta = 180. - beta\n gamma = 180. - gamma\n Pdec = Pdec - 180.\n Pinc = -Pinc\n beta, gamma = beta * rad, gamma * rad # convert to radians\n X_ell, Y_ell, X_up, Y_up, PTS = [], [], [], [], []\n nums = 201\n xnum = old_div(float(nums - 1.), 2.)\n# set up t matrix\n t = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]\n X = pmag.dir2cart((Pdec, Pinc, 1.0)) # convert to cartesian coordintes\n if lower == 1 and X[2] < 0:\n for i in range(3):\n X[i] = -X[i]\n# set up rotation matrix t\n t[0][2] = X[0]\n t[1][2] = X[1]\n t[2][2] = X[2]\n X = pmag.dir2cart((Bdec, Binc, 1.0))\n if lower == 1 and X[2] < 0:\n for i in range(3):\n X[i] = -X[i]\n t[0][0] = X[0]\n t[1][0] = X[1]\n t[2][0] = X[2]\n X = pmag.dir2cart((Gdec, Ginc, 1.0))\n if lower == 1 and X[2] < 0:\n for i in range(3):\n X[i] = -X[i]\n t[0][1] = X[0]\n t[1][1] = X[1]\n t[2][1] = X[2]\n# set up v matrix\n v = [0, 0, 0]\n for i in range(nums): # incremental point along ellipse\n psi = float(i) * np.pi / xnum\n v[0] = np.sin(beta) * np.cos(psi)\n v[1] = np.sin(gamma) * np.sin(psi)\n v[2] = np.sqrt(1. - v[0]**2 - v[1]**2)\n elli = [0, 0, 0]\n# calculate points on the ellipse\n for j in range(3):\n for k in range(3):\n # cartesian coordinate j of ellipse\n elli[j] = elli[j] + t[j][k] * v[k]\n pts = pmag.cart2dir(elli)\n PTS.append([pts[0], pts[1]])\n # put on an equal area projection\n R = old_div(np.sqrt(\n 1. 
- abs(elli[2])), (np.sqrt(elli[0]**2 + elli[1]**2)))\n if elli[2] <= 0:\n # for i in range(3): elli[i]=-elli[i]\n X_up.append(elli[1] * R)\n Y_up.append(elli[0] * R)\n else:\n X_ell.append(elli[1] * R)\n Y_ell.append(elli[0] * R)\n if plot == 1:\n col = col[0]+'.'\n if X_ell != []:\n plt.plot(X_ell, Y_ell, col, markersize=3)\n if X_up != []:\n plt.plot(X_up, Y_up, col, markersize=3)\n else:\n return PTS"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_strat(fignum, data, labels):\n vertical_plot_init(fignum, 10, 3)\n xlab, ylab, title = labels[0], labels[1], labels[2]\n X, Y = [], []\n for rec in data:\n X.append(rec[0])\n Y.append(rec[1])\n plt.plot(X, Y)\n plt.plot(X, Y, 'ro')\n plt.xlabel(xlab)\n plt.ylabel(ylab)\n plt.title(title)", "response": "Plots a stratigraphic section: data values against stratigraphic position, labeled with the given x label, y label, and title."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_cdf(fignum, data, xlab, sym, title, **kwargs):\n#\n #if len(sym)==1:sym=sym+'-'\n fig = plt.figure(num=fignum)\n # sdata=np.array(data).sort()\n sdata = []\n for d in data:\n sdata.append(d) # have to copy the data to avoid overwriting it!\n sdata.sort()\n X, Y = [], []\n color = \"\"\n for j in range(len(sdata)):\n Y.append(old_div(float(j), float(len(sdata))))\n X.append(sdata[j])\n if 'color' in list(kwargs.keys()):\n color = kwargs['color']\n if 'linewidth' in list(kwargs.keys()):\n lw = kwargs['linewidth']\n else:\n lw = 1\n if color != \"\":\n plt.plot(X, Y, color=sym, linewidth=lw)\n else:\n plt.plot(X, Y, sym, linewidth=lw)\n\n plt.xlabel(xlab)\n plt.ylabel('Cumulative Distribution')\n plt.title(title)\n return X, Y", "response": "Plots the cumulative distribution function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nplot horizontal lines at the given Ys values", "response": "def plot_hs(fignum, Ys, c, ls):\n \"\"\"\n plots horizontal lines at Ys 
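plot_cdf's coordinates reduce to sorting a copy of the data (so the caller's list is untouched) and assigning the j-th sorted sample the cumulative fraction j/n. The same computation without matplotlib:

```python
# empirical CDF coordinates as computed in plot_cdf
data = [3.0, 1.0, 2.0, 2.0]   # example values
sdata = sorted(data)          # sorted copy; original list is unmodified
n = len(sdata)
X = sdata
Y = [j / n for j in range(n)]  # cumulative fraction for each sorted sample
```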
values\n\n Parameters\n _________\n fignum : matplotlib figure number\n Ys : list of Y values for lines\n c : color for lines\n ls : linestyle for lines\n \"\"\"\n fig = plt.figure(num=fignum)\n for yv in Ys:\n bounds = plt.axis()\n plt.axhline(y=yv, xmin=0, xmax=1, linewidth=1, color=c, linestyle=ls)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nplot the vertical lines at Xs values at Xs", "response": "def plot_vs(fignum, Xs, c, ls):\n \"\"\"\n plots vertical lines at Xs values\n\n Parameters\n _________\n fignum : matplotlib figure number\n Xs : list of X values for lines\n c : color for lines\n ls : linestyle for lines\n \"\"\"\n fig = plt.figure(num=fignum)\n for xv in Xs:\n bounds = plt.axis()\n plt.axvline(\n x=xv, ymin=bounds[2], ymax=bounds[3], linewidth=1, color=c, linestyle=ls)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nplot the geomagnetic polarity time scale for a set of dates.", "response": "def plot_ts(fignum, dates, ts):\n \"\"\"\n plot the geomagnetic polarity time scale\n\n Parameters\n __________\n fignum : matplotlib figure number\n dates : bounding dates for plot\n ts : time scale ck95, gts04, or gts12\n \"\"\"\n vertical_plot_init(fignum, 10, 3)\n TS, Chrons = pmag.get_ts(ts)\n p = 1\n X, Y = [], []\n for d in TS:\n if d <= dates[1]:\n if d >= dates[0]:\n if len(X) == 0:\n ind = TS.index(d)\n X.append(TS[ind - 1])\n Y.append(p % 2)\n X.append(d)\n Y.append(p % 2)\n p += 1\n X.append(d)\n Y.append(p % 2)\n else:\n X.append(dates[1])\n Y.append(p % 2)\n plt.plot(X, Y, 'k')\n plot_vs(fignum, dates, 'w', '-')\n plot_hs(fignum, [1.1, -.1], 'w', '-')\n plt.xlabel(\"Age (Ma): \" + ts)\n isign = -1\n for c in Chrons:\n off = -.1\n isign = -1 * isign\n if isign > 0:\n off = 1.05\n if c[1] >= X[0] and c[1] < X[-1]:\n plt.text(c[1] - .2, off, c[0])\n return"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfunction to plot Delta M curves", "response": "def 
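The polarity square wave in plot_ts is built by appending each reversal age twice, toggling the level p % 2 in between, so the curve steps between 0 and 1 at every chron boundary. A simplified sketch with hypothetical boundary ages; the real function reads the ages from pmag.get_ts and also handles the edges of the plotting window:

```python
# square-wave polarity signal from a list of reversal ages (Ma)
boundaries = [0.78, 0.99, 1.07, 1.77]   # hypothetical chron boundaries
X, Y, p = [], [], 1
for d in boundaries:
    X.append(d)
    Y.append(p % 2)   # level before the reversal
    p += 1
    X.append(d)
    Y.append(p % 2)   # level after the reversal (vertical step)
```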
plot_delta_m(fignum, B, DM, Bcr, s):\n \"\"\"\n function to plot Delta M curves\n\n Parameters\n __________\n fignum : matplotlib figure number\n B : array of field values\n DM : array of difference between top and bottom curves in hysteresis loop\n Bcr : coercivity of remanence\n s : specimen name\n \"\"\"\n plt.figure(num=fignum)\n plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n plt.plot(B, DM, 'b')\n plt.xlabel('B (T)')\n plt.ylabel('Delta M')\n linex = [0, Bcr, Bcr]\n liney = [old_div(DM[0], 2.), old_div(DM[0], 2.), 0]\n plt.plot(linex, liney, 'r')\n plt.title(s)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef plot_d_delta_m(fignum, Bdm, DdeltaM, s):\n plt.figure(num=fignum)\n plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n start = len(Bdm) - len(DdeltaM)\n plt.plot(Bdm[start:], DdeltaM, 'b')\n plt.xlabel('B (T)')\n plt.ylabel('d (Delta M)/dB')\n plt.title(s)", "response": "Function to plot d delta M"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef plot_imag(fignum, Bimag, Mimag, s):\n plt.figure(num=fignum)\n plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n plt.plot(Bimag, Mimag, 'r')\n plt.xlabel('B (T)')\n plt.ylabel('M/Ms')\n plt.axvline(0, color='k')\n plt.title(s)", "response": "function to plot the imaginary Bimag and Mimag"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef plot_hdd(HDD, B, M, s):\n hpars, deltaM, Bdm = plot_hys(\n HDD['hyst'], B, M, s) # Moff is the \"fixed\" loop data\n DdeltaM = []\n Mhalf = \"\"\n for k in range(2, len(Bdm)):\n # differnential\n DdeltaM.append(\n old_div(abs(deltaM[k] - deltaM[k - 2]), (Bdm[k] - Bdm[k - 2])))\n for k in range(len(deltaM)):\n if old_div(deltaM[k], deltaM[0]) < 0.5:\n Mhalf = k\n break\n try:\n Bhf = Bdm[Mhalf - 1:Mhalf + 1]\n Mhf = deltaM[Mhalf - 
1:Mhalf + 1]\n # best fit line through two bounding points\n poly = np.polyfit(Bhf, Mhf, 1)\n Bcr = old_div((.5 * deltaM[0] - poly[1]), poly[0])\n hpars['hysteresis_bcr'] = '%8.3e' % (Bcr)\n hpars['magic_method_codes'] = \"LP-BCR-HDM\"\n if HDD['deltaM'] != 0:\n plot_delta_m(HDD['deltaM'], Bdm, deltaM, Bcr, s)\n plt.axhline(0, color='k')\n plt.axvline(0, color='k')\n plot_d_delta_m(HDD['DdeltaM'], Bdm, DdeltaM, s)\n except:\n hpars['hysteresis_bcr'] = '0'\n hpars['magic_method_codes'] = \"\"\n return hpars", "response": "Function to make hysteresis plot for a given set of fields in the tesla t t"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef plot_day(fignum, BcrBc, S, sym, **kwargs):\n plt.figure(num=fignum)\n plt.plot(BcrBc, S, sym)\n plt.axhline(0, color='k')\n plt.axhline(.05, color='k')\n plt.axhline(.5, color='k')\n plt.axvline(1, color='k')\n plt.axvline(4, color='k')\n plt.xlabel('Bcr/Bc')\n plt.ylabel('Mr/Ms')\n plt.title('Day Plot')\n plt.xlim(0, 6)\n #bounds= plt.axis()\n #plt.axis([0, bounds[1],0, 1])\n mu_o = 4. * np.pi * 1e-7\n Bc_sd = 46e-3 # (MV1H) dunlop and carter-stiglitz 2006 (in T)\n Bc_md = 5.56e-3 # (041183) dunlop and carter-stiglitz 2006 (in T)\n chi_sd = 5.20e6 * mu_o # now in T\n chi_md = 4.14e6 * mu_o # now in T\n chi_r_sd = 4.55e6 * mu_o # now in T\n chi_r_md = 0.88e6 * mu_o # now in T\n Bcr_sd, Bcr_md = 52.5e-3, 26.1e-3 # (MV1H and 041183 in DC06 in tesla)\n Ms = 480e3 # A/m\n p = .1 # from Dunlop 2002\n N = old_div(1., 3.) # demagnetizing factor\n f_sd = np.arange(1., 0., -.01) # fraction of sd\n f_md = 1. - f_sd # fraction of md\n f_sp = 1. - f_sd # fraction of sp\n # Mr/Ms ratios for USD,MD and Jax shaped\n sdrat, mdrat, cbrat = 0.498, 0.048, 0.6\n Mrat = f_sd * sdrat + f_md * mdrat # linear mixing - eq. 9 in Dunlop 2002\n Bc = old_div((f_sd * chi_sd * Bc_sd + f_md * chi_md * Bc_md),\n (f_sd * chi_sd + f_md * chi_md)) # eq. 
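plot_hdd estimates Bcr as the field where the deltaM curve falls to half its initial value, by fitting a straight line through the two points bracketing the half-height crossing (np.polyfit on two points) and solving for the crossing field. The same arithmetic in plain Python, on made-up example data:

```python
# find first index where deltaM drops below half its initial value
Bdm    = [0.00, 0.02, 0.04, 0.06, 0.08]   # example fields (T)
deltaM = [1.00, 0.90, 0.60, 0.40, 0.10]   # example delta-M values

half = deltaM[0] / 2.0
k = next(i for i, m in enumerate(deltaM) if m / deltaM[0] < 0.5)

# line through the two bracketing points (k-1, k); equivalent to a
# degree-1 polyfit on two points
slope = (deltaM[k] - deltaM[k - 1]) / (Bdm[k] - Bdm[k - 1])
intercept = deltaM[k - 1] - slope * Bdm[k - 1]
Bcr = (half - intercept) / slope   # field where deltaM = deltaM[0]/2
```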
10 in Dunlop 2002\n Bcr = old_div((f_sd * chi_r_sd * Bcr_sd + f_md * chi_r_md * Bcr_md),\n (f_sd * chi_r_sd + f_md * chi_r_md)) # eq. 11 in Dunlop 2002\n chi_sps = np.arange(1, 5) * chi_sd\n plt.plot(old_div(Bcr, Bc), Mrat, 'r-')\n if 'names' in list(kwargs.keys()):\n names = kwargs['names']\n for k in range(len(names)):\n plt.text(BcrBc[k], S[k], names[k])", "response": "Function to make a Day plot (Mr/Ms versus Bcr/Bc) of hysteresis ratios, including the SD-MD linear mixing curve of Dunlop (2002)."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef plot_s_bc(fignum, Bc, S, sym):\n plt.figure(num=fignum)\n plt.plot(Bc, S, sym)\n plt.xlabel('Bc')\n plt.ylabel('Mr/Ms')\n plt.title('Squareness-Coercivity Plot')\n bounds = plt.axis()\n plt.axis([0, bounds[1], 0, 1])", "response": "Function to plot squareness (Mr/Ms) against coercivity (Bc)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef plot_s_bcr(fignum, Bcr, S, sym):\n plt.figure(num=fignum)\n plt.plot(Bcr, S, sym)\n plt.xlabel('Bcr')\n plt.ylabel('Mr/Ms')\n plt.title('Squareness-Bcr Plot')\n bounds = plt.axis()\n plt.axis([0, bounds[1], 0, 1])", "response": "Function to plot squareness (Mr/Ms) against coercivity of remanence (Bcr)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef plot_bcr(fignum, Bcr1, Bcr2):\n plt.figure(num=fignum)\n plt.plot(Bcr1, Bcr2, 'ro')\n plt.xlabel('Bcr1')\n plt.ylabel('Bcr2')\n plt.title('Compare coercivity of remanence')", "response": "function to plot two estimates of Bcr against each other"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef plot_hpars(HDD, hpars, sym):\n plt.figure(num=HDD['hyst'])\n X, Y = [], []\n X.append(0)\n Y.append(old_div(float(hpars['hysteresis_mr_moment']), float(\n hpars['hysteresis_ms_moment'])))\n X.append(float(hpars['hysteresis_bc']))\n Y.append(0)\n plt.plot(X, Y, sym)\n bounds = plt.axis()\n n4 = 'Ms: 
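The red curve in plot_day comes from linear SD-MD mixing (eq. 9 of Dunlop 2002), using the end-member squareness ratios hard-coded in the function (sdrat = 0.498 for single-domain, mdrat = 0.048 for multidomain). A stdlib sketch of just that mixing line:

```python
# linear SD-MD mixing of squareness (eq. 9 in Dunlop 2002),
# with the end-member Mr/Ms ratios used in plot_day
sdrat, mdrat = 0.498, 0.048

def mrat(f_sd):
    # squareness of a mixture containing fraction f_sd of SD grains
    f_md = 1.0 - f_sd
    return f_sd * sdrat + f_md * mdrat

curve = [mrat(f) for f in (1.0, 0.5, 0.0)]  # pure SD, 50/50 mix, pure MD
```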
' + '%8.2e' % (float(hpars['hysteresis_ms_moment'])) + ' Am^2'\n plt.text(bounds[1] - .9 * bounds[1], -.9, n4)\n n1 = 'Mr: ' + '%8.2e' % (float(hpars['hysteresis_mr_moment'])) + ' Am^2'\n plt.text(bounds[1] - .9 * bounds[1], -.7, n1)\n n2 = 'Bc: ' + '%8.2e' % (float(hpars['hysteresis_bc'])) + ' T'\n plt.text(bounds[1] - .9 * bounds[1], -.5, n2)\n if 'hysteresis_xhf' in list(hpars.keys()):\n n3 = r'Xhf: ' + '%8.2e' % (float(hpars['hysteresis_xhf'])) + ' m^3'\n plt.text(bounds[1] - .9 * bounds[1], -.3, n3)\n plt.figure(num=HDD['deltaM'])\n X, Y, Bcr = [], [], \"\"\n if 'hysteresis_bcr' in list(hpars.keys()):\n X.append(float(hpars['hysteresis_bcr']))\n Y.append(0)\n Bcr = float(hpars['hysteresis_bcr'])\n plt.plot(X, Y, sym)\n bounds = plt.axis()\n if Bcr != \"\":\n n1 = 'Bcr: ' + '%8.2e' % (Bcr) + ' T'\n plt.text(bounds[1] - .5 * bounds[1], .9 * bounds[3], n1)", "response": "function to plot hysteresis parameters"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_irm(fignum, B, M, title):\n rpars = {}\n Mnorm = []\n backfield = 0\n X, Y = [], []\n for k in range(len(B)):\n if M[k] < 0:\n break\n if k <= 5:\n kmin = 0\n else:\n kmin = k - 5\n for k in range(kmin, k + 1):\n X.append(B[k])\n if B[k] < 0:\n backfield = 1\n Y.append(M[k])\n if backfield == 1:\n poly = np.polyfit(X, Y, 1)\n if poly[0] != 0:\n bcr = (old_div(-poly[1], poly[0]))\n else:\n bcr = 0\n rpars['remanence_mr_moment'] = '%8.3e' % (M[0])\n rpars['remanence_bcr'] = '%8.3e' % (-bcr)\n rpars['magic_method_codes'] = 'LP-BCR-BF'\n if M[0] != 0:\n for m in M:\n Mnorm.append(old_div(m, M[0])) # normalize to unity Msat\n title = title + ':' + '%8.3e' % (M[0])\n else:\n if M[-1] != 0:\n for m in M:\n Mnorm.append(old_div(m, M[-1])) # normalize to unity Msat\n title = title + ':' + '%8.3e' % (M[-1])\n# do plots if desired\n if fignum != 0 and M[0] != 0: # skip plot for fignum = 0\n plt.figure(num=fignum)\n plt.clf()\n if not isServer:\n 
plt.figtext(.02, .01, version_num)\n plt.plot(B, Mnorm)\n plt.axhline(0, color='k')\n plt.axvline(0, color='k')\n plt.xlabel('B (T)')\n plt.ylabel('M/Mr')\n plt.title(title)\n if backfield == 1:\n plt.scatter([bcr], [0], marker='s', c='b')\n bounds = plt.axis()\n n1 = 'Bcr: ' + '%8.2e' % (-bcr) + ' T'\n plt.figtext(.2, .5, n1)\n n2 = 'Mr: ' + '%8.2e' % (M[0]) + ' Am^2'\n plt.figtext(.2, .45, n2)\n elif fignum != 0:\n plt.figure(num=fignum)\n # plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n print('M[0]=0, skipping specimen')\n return rpars", "response": "function to plot IRM backfield curves"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfunctioning to plot series of chi measurements as a function of temperature holding field constant and varying frequency", "response": "def plot_xtf(fignum, XTF, Fs, e, b):\n \"\"\"\n function to plot series of chi measurements as a function of temperature, holding field constant and varying frequency\n \"\"\"\n plt.figure(num=fignum)\n plt.xlabel('Temperature (K)')\n plt.ylabel('Susceptibility (m^3/kg)')\n k = 0\n Flab = []\n for freq in XTF:\n T, X = [], []\n for xt in freq:\n X.append(xt[0])\n T.append(xt[1])\n plt.plot(T, X)\n plt.text(T[-1], X[-1], str(int(Fs[k])) + ' Hz')\n# Flab.append(str(int(Fs[k]))+' Hz')\n k += 1\n plt.title(e + ': B = ' + '%8.1e' % (b) + ' T')"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfunctioning to plot series of chi measurements as a function of temperature holding frequency constant and varying B", "response": "def plot_xtb(fignum, XTB, Bs, e, f):\n \"\"\" function to plot series of chi measurements as a function of temperature, holding frequency constant and varying B\n \"\"\"\n plt.figure(num=fignum)\n plt.xlabel('Temperature (K)')\n plt.ylabel('Susceptibility (m^3/kg)')\n k = 0\n Blab = []\n for field in XTB:\n T, X = [], []\n for xt in field:\n X.append(xt[0])\n T.append(xt[1])\n plt.plot(T, X)\n 
plt.text(T[-1], X[-1], '%8.2e' % (Bs[k]) + ' T')\n# Blab.append('%8.1e'%(Bs[k])+' T')\n k += 1\n plt.title(e + ': f = ' + '%i' % (int(f)) + ' Hz')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef plot_xft(fignum, XF, T, e, b):\n plt.figure(num=fignum)\n plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n plt.xlabel('Frequency (Hz)')\n plt.ylabel('Susceptibility (m^3/kg)')\n k = 0\n F, X = [], []\n for xf in XF:\n X.append(xf[0])\n F.append(xf[1])\n plt.plot(F, X)\n plt.semilogx()\n plt.title(e + ': B = ' + '%8.1e' % (b) + ' T')\n\n plt.legend(['%i' % (int(T)) + ' K'])", "response": "function to plot a series of chi measurements as a function of frequency, holding temperature and field constant"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_xbt(fignum, XB, T, e, b):\n plt.figure(num=fignum)\n plt.clf()\n if not isServer:\n plt.figtext(.02, .01, version_num)\n plt.xlabel('Field (T)')\n plt.ylabel('Susceptibility (m^3/kg)')\n k = 0\n B, X = [], []\n for xb in XB:\n X.append(xb[0])\n B.append(xb[1])\n plt.plot(B, X)\n plt.legend(['%i' % (int(T)) + ' K'])\n plt.title(e + ': f = ' + '%i' % (int(b)) + ' Hz')", "response": "function to plot a series of chi measurements as a function of field, holding temperature and frequency constant"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_ltc(LTC_CM, LTC_CT, LTC_WM, LTC_WT, e):\n leglist, init = [], 0\n if len(LTC_CM) > 2:\n if init == 0:\n plot_init(1, 5, 5)\n plt.plot(LTC_CT, LTC_CM, 'b')\n leglist.append('RT SIRM, measured while cooling')\n init = 1\n if len(LTC_WM) > 2:\n if init == 0:\n plot_init(1, 5, 5)\n plt.plot(LTC_WT, LTC_WM, 'r')\n leglist.append('RT SIRM, measured while warming')\n if init != 0:\n plt.legend(leglist, 'lower left')\n plt.xlabel('Temperature (K)')\n plt.ylabel('Magnetization 
(Am^2/kg)')\n if len(LTC_CM) > 2:\n plt.plot(LTC_CT, LTC_CM, 'bo')\n if len(LTC_WM) > 2:\n plt.plot(LTC_WT, LTC_WM, 'ro')\n plt.title(e)", "response": "function to plot low temperature cycling experiments"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_conf(fignum, s, datablock, pars, new):\n# make the stereonet\n if new == 1:\n plot_net(fignum)\n#\n# plot the data\n#\n DIblock = []\n for plotrec in datablock:\n DIblock.append((float(plotrec[\"dec\"]), float(plotrec[\"inc\"])))\n if len(DIblock) > 0:\n plot_di(fignum, DIblock) # plot directed lines\n#\n# put on the mean direction\n#\n x, y = [], []\n XY = pmag.dimap(float(pars[0]), float(pars[1]))\n x.append(XY[0])\n y.append(XY[1])\n plt.figure(num=fignum)\n if new == 1:\n plt.scatter(x, y, marker='d', s=80, c='r')\n else:\n if float(pars[1]) > 0:\n plt.scatter(x, y, marker='^', s=100, c='r')\n else:\n plt.scatter(x, y, marker='^', s=100, c='y')\n plt.title(s)\n#\n# plot the ellipse\n#\n plot_ell(fignum, pars, 'r-,', 0, 1)", "response": "function to plot directions, their mean direction, and its confidence ellipse on an equal-area net"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_borders(Figs, titles, border_color='#000000', text_color='#800080', con_id=\"\"):\n\n \"\"\"\n Formatting for generating plots on the server\n Default border color: black\n Default text color: purple\n \"\"\"\n def split_title(s):\n \"\"\"\n Add '\\n's to split up overly long titles\n \"\"\"\n s_list = s.split(\",\")\n lines = []\n tot = 0\n line = []\n for i in s_list:\n tot += len(i)\n if tot < 30:\n line.append(i + \",\")\n else:\n lines.append(\" \".join(line))\n line = [i]\n tot = 0\n lines.append(\" \".join(line))\n return \"\\n\".join(lines).strip(',')\n\n # format contribution id if available\n if con_id:\n if not str(con_id).startswith(\"/\"):\n con_id = \"/\" + str(con_id)\n\n import datetime\n now = datetime.datetime.utcnow()\n\n for key in list(Figs.keys()):\n\n 
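The plot_conf code above leans on pmag.dimap to project a declination/inclination pair onto the equal-area net before scattering the mean direction. A rough, self-contained sketch of what such a mapping does, assuming the Collinson (1983) equal-area convention that plot_eq_cont also uses (dimap_sketch is an illustrative stand-in, not the actual pmagpy function):

```python
import numpy as np

def dimap_sketch(dec, inc):
    # hypothetical stand-in for pmag.dimap: equal-area (Schmidt)
    # projection of a declination/inclination pair onto the unit circle
    d, i = np.radians(dec), np.radians(abs(inc))
    # unit-vector Cartesian components (north, east, down)
    x1 = np.cos(i) * np.cos(d)
    x2 = np.cos(i) * np.sin(d)
    x3 = np.sin(i)
    h = np.sqrt(x1 ** 2 + x2 ** 2)
    if h == 0:
        return 0.0, 0.0  # exactly vertical direction plots at the origin
    r = np.sqrt(1.0 - x3) / h  # equal-area radial scaling (Collinson, 1983)
    return x1 * r, x2 * r
```

A horizontal direction (inc = 0) lands on the rim of the net, a vertical one (inc = 90) at the center, which is the behavior plot_conf depends on when it scatters the mean point.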
fig = plt.figure(Figs[key])\n plot_title = split_title(titles[key]).strip().strip('\\n')\n fig.set_figheight(5.5)\n #get returns Bbox with x0, y0, x1, y1\n pos = fig.gca().get_position()\n # tweak some of the default values\n w = pos.x1 - pos.x0\n h = (pos.y1 - pos.y0) / 1.1\n x = pos.x0\n y = pos.y0 * 1.3\n # set takes: left, bottom, width, height\n fig.gca().set_position([x, y, w, h])\n\n # add an axis covering the entire figure\n border_ax = fig.add_axes([0, 0, 1, 1])\n border_ax.set_frame_on(False)\n border_ax.set_xticks([])\n border_ax.set_yticks([])\n # add a border\n if \"\\n\" in plot_title:\n y_val = 1.0 # lower border\n #fig.set_figheight(6.25)\n else:\n y_val = 1.04 # higher border\n #border_ax.text(-0.02, y_val, \" # |\",\n # horizontalalignment='left',\n # verticalalignment='top',\n # color=text_color,\n # bbox=dict(edgecolor=border_color,\n # facecolor='#FFFFFF', linewidth=0.25),\n # size=50)\n #border_ax.text(-0.02, 0, \"| # |\",\n # horizontalalignment='left',\n # verticalalignment='bottom',\n # color=text_color,\n # bbox=dict(edgecolor=border_color,\n # facecolor='#FFFFFF', linewidth=0.25),\n # size=20)#18)\n\n # add text\n\n border_ax.text((4. / fig.get_figwidth()) * 0.015, 0.03, now.strftime(\"%Y-%m-%d, %I:%M:%S {}\".format('UT')),\n horizontalalignment='left',\n verticalalignment='top',\n color=text_color,\n size=10)\n border_ax.text(0.5, 0.98, plot_title,\n horizontalalignment='center',\n verticalalignment='top',\n color=text_color,\n size=20)\n border_ax.text(1 - (4. 
/ fig.get_figwidth()) * 0.015, 0.03, 'earthref.org/MagIC{}'.format(con_id),\n horizontalalignment='right',\n verticalalignment='top',\n color=text_color,\n size=10)\n return Figs", "response": "Add borders to the server figure."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef plot_map(fignum, lats, lons, Opts):\n if not has_cartopy:\n print('This function requires installation of cartopy')\n return\n from matplotlib import cm\n # draw meridian labels on the bottom [left,right,top,bottom]\n mlabels = [0, 0, 0, 1]\n plabels = [1, 0, 0, 0] # draw parallel labels on the left\n Opts_defaults = {'latmin': -90, 'latmax': 90, 'lonmin': 0, 'lonmax': 360,\n 'lat_0': 0, 'lon_0': 0, 'proj': 'moll', 'sym': 'ro', 'symsize': 5,\n 'edge': None, 'pltgrid': 1, 'res': 'c', 'boundinglat': 0.,\n 'padlon': 0, 'padlat': 0, 'gridspace': 30, 'global': 1, 'cmap': 'jet','oceancolor':'azure','landcolor':'bisque',\n 'details': {'fancy': 0, 'coasts': 0, 'rivers': 0, 'states': 0, 'countries': 0, 'ocean': 0},\n 'edgecolor': 'face'}\n for key in Opts_defaults.keys():\n if key not in Opts.keys() and key != 'details':\n Opts[key] = Opts_defaults[key]\n if key == 'details':\n if key not in Opts.keys():\n Opts[key] = Opts_defaults[key]\n for detail_key in Opts_defaults[key].keys():\n if detail_key not in Opts[key].keys():\n Opts[key][detail_key] = Opts_defaults[key][detail_key]\n if Opts['proj'] == 'pc':\n ax = plt.axes(projection=ccrs.PlateCarree())\n ax.set_extent([Opts['lonmin'], Opts['lonmax'], Opts['latmin'], Opts['latmax']],\n crs=ccrs.PlateCarree())\n if Opts['proj'] == 'aea':\n ax = plt.axes(projection=ccrs.AlbersEqualArea(\n central_longitude=Opts['lon_0'],\n central_latitude=Opts['lat_0'],\n false_easting=0.0, false_northing=0.0, standard_parallels=(20.0, 50.0),\n globe=None))\n if Opts['proj'] == 'lcc':\n proj = ccrs.LambertConformal(central_longitude=Opts['lon_0'], central_latitude=Opts['lat_0'],\\\n false_easting=0.0, false_northing=0.0, 
standard_parallels=(20.0, 50.0),\n globe=None)\n fig=plt.figure(fignum,figsize=(6,6),frameon=True)\n ax = plt.axes(projection=proj)\n ax.set_extent([Opts['lonmin'], Opts['lonmax'], Opts['latmin'], Opts['latmax']],\n crs=ccrs.PlateCarree())\n if Opts['proj'] == 'lcyl':\n ax = plt.axes(projection=ccrs.LambertCylindrical(\n central_longitude=Opts['lon_0']))\n\n if Opts['proj'] == 'merc':\n ax = plt.axes(projection=ccrs.Mercator(\n central_longitude=Opts['lon_0'], min_latitude=Opts['latmin'],\n max_latitude=Opts['latmax'], latitude_true_scale=0.0, globe=None))\n ax.set_extent([Opts['lonmin'],Opts['lonmax'],\\\n Opts['latmin'],Opts['latmax']])\n if Opts['proj'] == 'mill':\n ax = plt.axes(projection=ccrs.Miller(\n central_longitude=Opts['lon_0']))\n if Opts['proj'] == 'moll':\n ax = plt.axes(projection=ccrs.Mollweide(\n central_longitude=Opts['lat_0'], globe=None))\n if Opts['proj'] == 'ortho':\n ax = plt.axes(projection=ccrs.Orthographic(\n central_longitude=Opts['lon_0'],\n central_latitude=Opts['lat_0']))\n if Opts['proj'] == 'robin':\n ax = plt.axes(projection=ccrs.Robinson(\n central_longitude=Opts['lon_0'],\n globe=None))\n\n if Opts['proj'] == 'sinu':\n ax = plt.axes(projection=ccrs.Sinusoidal(\n central_longitude=Opts['lon_0'],\n false_easting=0.0, false_northing=0.0,\n globe=None))\n\n if Opts['proj'] == 'stere':\n ax = plt.axes(projection=ccrs.Stereographic(\n central_longitude=Opts['lon_0'],\n false_easting=0.0, false_northing=0.0,\n true_scale_latitude=None,\n scale_factor=None,\n globe=None))\n if Opts['proj'] == 'tmerc':\n ax = plt.axes(projection=ccrs.TransverseMercator(\n central_longitude=Opts['lon_0'], central_latitude=Opts['lat_0'],\n false_easting=0.0, false_northing=0.0,\n scale_factor=None,\n globe=None))\n if Opts['proj'] == 'utm':\n ax = plt.axes(projection=ccrs.UTM(\n zone=Opts['zone'],\n southern_hemisphere=Opts['south'],\n globe=None))\n if Opts['proj'] == 'geos':\n ax = plt.axes(projection=ccrs.Geostationary(\n 
central_longitude=Opts['lon_0'],\n false_easting=0.0, false_northing=0.0,\n satellite_height=35785831,\n sweep_axis='y',\n globe=None))\n if Opts['proj'] == 'laea':\n ax = plt.axes(projection=ccrs.LambertAzimuthalEqualArea(\n central_longitude=Opts['lon_0'], central_latitude=Opts['lat_0'],\n false_easting=0.0, false_northing=0.0,\n globe=None))\n if Opts['proj'] == 'npstere':\n ax = plt.axes(projection=ccrs.NorthPolarStereo(\n central_longitude=Opts['lon_0'],\n true_scale_latitude=None,\n globe=None))\n if Opts['proj'] == 'spstere':\n ax = plt.axes(projection=ccrs.SouthPolarStereo(\n central_longitude=Opts['lon_0'],\n true_scale_latitude=None,\n globe=None))\n\n if 'details' in list(Opts.keys()):\n if Opts['details']['fancy'] == 1:\n pmag_data_dir = find_pmag_dir.get_data_files_dir()\n EDIR = os.path.join(pmag_data_dir, \"etopo20\")\n etopo_path = os.path.join(EDIR, 'etopo20data.gz')\n etopo = np.loadtxt(os.path.join(EDIR, 'etopo20data.gz'))\n elons = np.loadtxt(os.path.join(EDIR, 'etopo20lons.gz'))\n elats = np.loadtxt(os.path.join(EDIR, 'etopo20lats.gz'))\n xx, yy = np.meshgrid(elons, elats)\n levels = np.arange(-10000, 8000, 500) # define contour intervals\n m = ax.contourf(xx, yy, etopo, levels,\n transform=ccrs.PlateCarree(),\n cmap=Opts['cmap'])\n cbar=plt.colorbar(m)\n if Opts['details']['coasts'] == 1:\n if Opts['res']=='c' or Opts['res']=='l':\n ax.coastlines(resolution='110m')\n elif Opts['res']=='i':\n ax.coastlines(resolution='50m')\n elif Opts['res']=='h':\n ax.coastlines(resolution='10m')\n if Opts['details']['rivers'] == 1:\n ax.add_feature(cfeature.RIVERS)\n if Opts['details']['states'] == 1:\n states_provinces = cfeature.NaturalEarthFeature(\n category='cultural',\n name='admin_1_states_provinces_lines',\n scale='50m',\n edgecolor='black',\n facecolor='none',\n linestyle='dotted')\n ax.add_feature(states_provinces)\n if Opts['details']['countries'] == 1:\n ax.add_feature(BORDERS, linestyle='-', linewidth=2)\n if Opts['details']['ocean'] == 1:\n 
ax.add_feature(OCEAN, color=Opts['oceancolor'])\n ax.add_feature(LAND, color=Opts['landcolor'])\n ax.add_feature(LAKES, color=Opts['oceancolor'])\n if Opts['proj'] in ['merc', 'pc','lcc']:\n if Opts['pltgrid']:\n if Opts['proj']=='lcc':\n fig.canvas.draw()\n #xticks=list(np.arange(Opts['lonmin'],Opts['lonmax']+Opts['gridspace'],Opts['gridspace']))\n #yticks=list(np.arange(Opts['latmin'],Opts['latmax']+Opts['gridspace'],Opts['gridspace']))\n xticks=list(np.arange(-180,180,Opts['gridspace']))\n yticks=list(np.arange(-90,90,Opts['gridspace']))\n ax.gridlines(ylocs=yticks,xlocs=xticks,linewidth=2,\n linestyle='dotted')\n ax.xaxis.set_major_formatter(LONGITUDE_FORMATTER) # you need this here\n ax.yaxis.set_major_formatter(LATITUDE_FORMATTER)# you need this here, too\n\n try:\n import pmagpy.lcc_ticks as lcc_ticks\n lcc_ticks.lambert_xticks(ax, xticks)\n lcc_ticks.lambert_yticks(ax, yticks)\n\n except:\n print ('plotting of tick marks on Lambert Conformal requires the package \"shapely\".\\n Try importing with \"conda install -c conda-forge shapely\"')\n else:\n gl = ax.gridlines(crs=ccrs.PlateCarree(), linewidth=2,\n linestyle='dotted', draw_labels=True)\n gl.ylocator = mticker.FixedLocator(np.arange(-80, 81, Opts['gridspace']))\n gl.xlocator = mticker.FixedLocator(np.arange(-180, 181, Opts['gridspace']))\n gl.xformatter = LONGITUDE_FORMATTER\n gl.yformatter = LATITUDE_FORMATTER\n gl.xlabels_top = False\n else:\n gl = ax.gridlines(crs=ccrs.PlateCarree(),\n linewidth=2, linestyle='dotted')\n elif Opts['pltgrid']:\n print('gridlines only supported for PlateCarree, Lambert Conformal, and Mercator plots currently')\n prn_name, symsize = 0, 5\n # if 'names' in list(Opts.keys()) > 0:\n # names = Opts['names']\n # if len(names) > 0:\n # prn_name = 1\n##\n X, Y, T, k = [], [], [], 0\n if 'symsize' in list(Opts.keys()):\n symsize = Opts['symsize']\n if Opts['sym'][-1] != '-': # just plot points\n color, symbol = Opts['sym'][0], Opts['sym'][1]\n ax.scatter(lons, lats, 
s=Opts['symsize'], c=color, marker=symbol,\n transform=ccrs.Geodetic(), edgecolors=Opts['edgecolor'])\n if prn_name == 1:\n print('labels not yet implemented in plot_map')\n # for pt in range(len(lats)):\n # T.append(plt.text(X[pt] + 5000, Y[pt] - 5000, names[pt]))\n else: # for lines, need to separate chunks using lat==100.\n ax.plot(lons, lats, Opts['sym'], transform=ccrs.Geodetic())\n if Opts['global']:\n ax.set_global()", "response": "Plots a cartopy map with the specified lats and lons."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nplots a basemap of the desired field element", "response": "def plot_mag_map_basemap(fignum, element, lons, lats, element_type, cmap='RdYlBu', lon_0=0, date=\"\"):\n \"\"\"\n makes a color contour map of geomagnetic field element\n\n Parameters\n ____________\n fignum : matplotlib figure number\n element : field element array from pmag.do_mag_map for plotting\n lons : longitude array from pmag.do_mag_map for plotting\n lats : latitude array from pmag.do_mag_map for plotting\n element_type : [B,Br,I,D] geomagnetic element type\n B : field intensity\n Br : radial field intensity\n I : inclinations\n D : declinations\n Optional\n _________\n cmap : matplotlib color map\n lon_0 : central longitude of the Hammer projection\n date : date used for field evaluation,\n if custom ghfile was used, supply filename\n\n\n Effects\n ______________\n plots a Hammer projection color contour with the desired field element\n \"\"\"\n if not has_basemap:\n print('-W- Basemap must be installed to run plot_mag_map_basemap')\n return\n from matplotlib import cm # matplotlib's color map module\n lincr = 1\n if type(date) != str:\n date = str(date)\n fig = plt.figure(fignum)\n m = Basemap(projection='hammer', lon_0=lon_0)\n x, y = m(*meshgrid(lons, lats))\n m.drawcoastlines()\n if element_type == 'B':\n levmax = element.max()+lincr\n levmin = round(element.min()-lincr)\n levels = np.arange(levmin, levmax, 
lincr)\n cs = m.contourf(x, y, element, levels=levels, cmap=cmap)\n plt.title('Field strength ($\\mu$T): '+date)\n if element_type == 'Br':\n levmax = element.max()+lincr\n levmin = round(element.min()-lincr)\n cs = m.contourf(x, y, element, levels=np.arange(\n levmin, levmax, lincr), cmap=cmap)\n plt.title('Radial field strength ($\\mu$T): '+date)\n if element_type == 'I':\n levmax = element.max()+lincr\n levmin = round(element.min()-lincr)\n cs = m.contourf(\n x, y, element, levels=np.arange(-90, 100, 20), cmap=cmap)\n m.contour(x, y, element, levels=np.arange(-80, 90, 10), colors='black')\n plt.title('Field inclination: '+date)\n if element_type == 'D':\n # cs=m.contourf(x,y,element,levels=np.arange(-180,180,10),cmap=cmap)\n cs = m.contourf(\n x, y, element, levels=np.arange(-180, 180, 10), cmap=cmap)\n m.contour(x, y, element, levels=np.arange(-180, 180, 10), colors='black')\n plt.title('Field declination: '+date)\n cbar = m.colorbar(cs, location='bottom')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef plot_mag_map(fignum, element, lons, lats, element_type, cmap='coolwarm', lon_0=0, date=\"\", contours=False, proj='PlateCarree'):\n\n if not has_cartopy:\n print('This function requires installation of cartopy')\n return\n from matplotlib import cm\n if lon_0 == 180:\n lon_0 = 179.99\n if lon_0 > 180:\n lon_0 = lon_0-360.\n lincr = 1\n if type(date) != str:\n date = str(date)\n if proj == 'PlateCarree':\n fig = plt.figure(fignum)\n ax = plt.axes(projection=ccrs.PlateCarree(central_longitude=lon_0))\n if proj == 'Mollweide':\n fig = plt.figure(fignum)\n # this issue is fixed in >=0.17\n if not LooseVersion(Cartopy.__version__) > LooseVersion('0.16.0'):\n if lon_0 != 0:\n print('This projection requires lon_0=0')\n return\n\n ax = plt.axes(projection=ccrs.Mollweide(central_longitude=lon_0))\n xx, yy = np.meshgrid(lons, lats)\n levmax = 5*round(element.max()/5)+5\n levmin = 5*round(element.min()/5)-5\n 
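In plot_mag_map above, levmin and levmax round the field range outward to multiples of 5 so the contour levels begin and end on tidy values. The same arithmetic in isolation (contour_levels is just an illustrative wrapper, not a pmagpy function):

```python
import numpy as np

def contour_levels(element, lincr=1):
    # round the data range outward to multiples of 5, as plot_mag_map does
    levmax = 5 * round(element.max() / 5) + 5
    levmin = 5 * round(element.min() / 5) - 5
    return np.arange(levmin, levmax, lincr)
```

For example, a field grid spanning -12.3 to 47.2 microtesla yields levels running from -15 up through 49, so the contourf call never clips the extremes of the data.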
if element_type == 'Br' or element_type == 'B':\n plt.contourf(xx, yy, element,\n levels=np.arange(levmin, levmax, 1),\n cmap=cmap, transform=ccrs.PlateCarree())\n cbar = plt.colorbar(orientation='horizontal')\n if contours:\n plt.contour(xx, yy, element, levels=np.arange(levmin, levmax, 10),\n colors='black', transform=ccrs.PlateCarree())\n\n if element_type == 'Br':\n plt.title('Radial field strength ($\\mu$T): '+date)\n else:\n plt.title('Total field strength ($\\mu$T): '+date)\n if element_type == 'I':\n plt.contourf(xx, yy, element,\n levels=np.arange(-90, 90, lincr),\n cmap=cmap, transform=ccrs.PlateCarree())\n cbar = plt.colorbar(orientation='horizontal')\n if contours:\n plt.contour(xx, yy, element, levels=np.arange(-80, 90, 10),\n colors='black', transform=ccrs.PlateCarree())\n plt.title('Field inclination: '+date)\n if element_type == 'D':\n plt.contourf(xx, yy, element,\n levels=np.arange(-180, 180, 10),\n cmap=cmap, transform=ccrs.PlateCarree())\n cbar = plt.colorbar(orientation='horizontal')\n if contours:\n plt.contour(xx, yy, element, levels=np.arange(-180, 180, 10),\n colors='black', transform=ccrs.PlateCarree())\n plt.title('Field declination: '+date)\n ax.coastlines()\n ax.set_global()\n return ax", "response": "Plots a color contour map of the desired field element."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nplots the dec inc block as a color contour", "response": "def plot_eq_cont(fignum, DIblock, color_map='coolwarm'):\n \"\"\"\n plots dec inc block as a color contour\n Parameters\n __________________\n Input:\n fignum : figure number\n DIblock : nested pairs of [Declination, Inclination]\n color_map : matplotlib color map [default is coolwarm]\n Output:\n figure\n \"\"\"\n import random\n plt.figure(num=fignum)\n plt.axis(\"off\")\n XY = []\n centres = []\n counter = 0\n for rec in DIblock:\n counter = counter + 1\n X = pmag.dir2cart([rec[0], rec[1], 1.])\n # from Collinson 1983\n R = 
old_div(np.sqrt(1. - X[2]), (np.sqrt(X[0]**2 + X[1]**2)))\n XY.append([X[0] * R, X[1] * R])\n # radius of the circle\n radius = (old_div(3., (np.sqrt(np.pi * (9. + float(counter)))))) + 0.01\n num = 2. * (old_div(1., radius)) # number of circles\n # a,b are the extent of the grids over which the circles are equispaced\n a1, a2 = (0. - (radius * num / 2.)), (0. + (radius * num / 2.))\n b1, b2 = (0. - (radius * num / 2.)), (0. + (radius * num / 2.))\n # this is to get an array (a list of list wont do) of x,y values\n xlist = np.linspace(a1, a2, int(np.ceil(num)))\n ylist = np.linspace(b1, b2, int(np.ceil(num)))\n X, Y = np.meshgrid(xlist, ylist)\n # to put z in the array I just multiply both x,y with zero. I will add to\n # the zero values later\n Z = X * Y * 0.\n # keeping the centres of the circles as a separate list instead of in\n # array helps later\n for j in range(len(ylist)):\n for i in range(len(xlist)):\n centres.append([xlist[i], ylist[j]])\n # the following lines are to figure out what happens at the edges where part of a circle might lie outside\n # a thousand random numbers are generated within the x,y limit of the circles and tested whether it is contained in\n # the eq area net space....their ratio gives the fraction of circle\n # contained in the net\n fraction = []\n beta, alpha = 0.001, 0.001 # to avoid those 'division by float' thingy\n for i in range(0, int(np.ceil(num))**2):\n if np.sqrt(((centres[i][0])**2) + ((centres[i][1])**2)) - 1. < radius:\n for j in range(1, 1000):\n rnd1 = random.uniform(\n centres[i][0] - radius, centres[i][0] + radius)\n rnd2 = random.uniform(\n centres[i][1] - radius, centres[i][1] + radius)\n if ((centres[i][0] - rnd1)**2 + (centres[i][1] - rnd2)**2) <= radius**2:\n if (rnd1**2) + (rnd2**2) < 1.:\n alpha = alpha + 1.\n beta = beta + 1.\n else:\n alpha = alpha + 1.\n fraction.append(old_div(alpha, beta))\n alpha, beta = 0.001, 0.001\n else:\n fraction.append(1.) 
# if the whole circle lies in the net\n\n # for every circle count the number of points lying in it\n count = 0\n dotspercircle = 0.\n for j in range(0, int(np.ceil(num))):\n for i in range(0, int(np.ceil(num))):\n for k in range(0, counter):\n if (XY[k][0] - centres[count][0])**2 + (XY[k][1] - centres[count][1])**2 <= radius**2:\n dotspercircle += 1.\n Z[i][j] = Z[i][j] + (dotspercircle * fraction[count])\n count += 1\n dotspercircle = 0.\n im = plt.imshow(Z, interpolation='bilinear', origin='lower',\n # cmap=plt.color_map.hot, extent=(-1., 1., -1., 1.))\n cmap=color_map, extent=(-1., 1., -1., 1.))\n plt.colorbar(shrink=0.5)\n x, y = [], []\n # Draws the border\n for i in range(0, 360):\n x.append(np.sin((old_div(np.pi, 180.)) * float(i)))\n y.append(np.cos((old_div(np.pi, 180.)) * float(i)))\n plt.plot(x, y, 'w-')\n x, y = [], []\n # the map will be a square of 1X1..this is how I erase the redundant area\n for j in range(1, 4):\n for i in range(0, 360):\n x.append(np.sin((old_div(np.pi, 180.)) * float(i))\n * (1. + (old_div(float(j), 10.))))\n y.append(np.cos((old_div(np.pi, 180.)) * float(i))\n * (1. 
+ (old_div(float(j), 10.))))\n plt.plot(x, y, 'w-', linewidth=26)\n x, y = [], []\n # the axes\n plt.axis(\"equal\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmaking a time scale plot between the specified ages.", "response": "def plot_ts(ax, agemin, agemax, timescale='gts12', ylabel=\"Age (Ma)\"):\n \"\"\"\n Make a time scale plot between specified ages.\n\n Parameters:\n ------------\n ax : figure object\n agemin : Minimum age for timescale\n agemax : Maximum age for timescale\n timescale : Time Scale [ default is Gradstein et al., (2012)]\n for other options see pmag.get_ts()\n ylabel : if set, plot as ylabel\n \"\"\"\n ax.set_title(timescale.upper())\n ax.axis([-.25, 1.5, agemax, agemin])\n ax.axes.get_xaxis().set_visible(False)\n # get dates and chron names for timescale\n TS, Chrons = pmag.get_ts(timescale)\n X, Y, Y2 = [0, 1], [], []\n cnt = 0\n if agemin < TS[1]: # in the Brunhes\n Y = [agemin, agemin] # minimum age\n Y1 = [TS[1], TS[1]] # age of the B/M boundary\n ax.fill_between(X, Y, Y1, facecolor='black') # color in Brunhes, black\n for d in TS[1:]:\n pol = cnt % 2\n cnt += 1\n if d <= agemax and d >= agemin:\n ind = TS.index(d)\n Y = [TS[ind], TS[ind]]\n Y1 = [TS[ind+1], TS[ind+1]]\n if pol:\n # fill in every other time\n ax.fill_between(X, Y, Y1, facecolor='black')\n ax.plot([0, 1, 1, 0, 0], [agemin, agemin, agemax, agemax, agemin], 'k-')\n plt.yticks(np.arange(agemin, agemax+1, 1))\n if ylabel != \"\":\n ax.set_ylabel(ylabel)\n ax2 = ax.twinx()\n ax2.axis('off')\n for k in range(len(Chrons)-1):\n c = Chrons[k]\n cnext = Chrons[k+1]\n d = cnext[1]-(cnext[1]-c[1])/3.\n if d >= agemin and d < agemax:\n # make the Chron boundary tick\n ax2.plot([1, 1.5], [c[1], c[1]], 'k-')\n ax2.text(1.05, d, c[0])\n ax2.axis([-.25, 1.5, agemax, agemin])"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef main():\n args = sys.argv\n PLT = 1\n plots = 0\n user, meas_file = 
\"\", \"magic_measurements.txt\"\n pltspec = \"\"\n dir_path = '.'\n fmt = 'png'\n verbose = pmagplotlib.verbose\n version_num = pmag.get_version()\n if '-WD' in args:\n ind = args.index('-WD')\n dir_path = args[ind+1]\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if \"-usr\" in args:\n ind = args.index(\"-usr\")\n user = args[ind+1]\n if '-f' in args:\n ind = args.index(\"-f\")\n meas_file = args[ind+1]\n if '-sav' in args:\n verbose = 0\n plots = 1\n if '-spc' in args:\n ind = args.index(\"-spc\")\n pltspec = args[ind+1]\n verbose = 0\n plots = 1\n if '-fmt' in args:\n ind = args.index(\"-fmt\")\n fmt = args[ind+1]\n meas_file = dir_path+'/'+meas_file\n #\n #\n meas_data, file_type = pmag.magic_read(meas_file)\n if file_type != 'magic_measurements':\n print(main.__doc__)\n print('bad file')\n sys.exit()\n #\n # initialize some variables\n # define figure numbers for hyst,deltaM,DdeltaM curves\n HystRecs, RemRecs = [], []\n HDD = {}\n HDD['hyst'] = 1\n pmagplotlib.plot_init(HDD['hyst'], 5, 5)\n #\n # get list of unique experiment names and specimen names\n #\n experiment_names, sids = [], []\n hyst_data = pmag.get_dictitem(\n meas_data, 'magic_method_codes', 'LP-HYS', 'has') # get all hysteresis data\n for rec in hyst_data:\n if 'er_synthetic_name' in rec.keys() and rec['er_synthetic_name'] != \"\":\n rec['er_specimen_name'] = rec['er_synthetic_name']\n if rec['magic_experiment_name'] not in experiment_names:\n experiment_names.append(rec['magic_experiment_name'])\n if rec['er_specimen_name'] not in sids:\n sids.append(rec['er_specimen_name'])\n if 'measurement_temp' not in rec.keys():\n # assume room T measurement unless otherwise specified\n rec['measurement_temp'] = '300'\n #\n k = 0\n if pltspec != \"\":\n k = sids.index(pltspec)\n intlist = ['measurement_magnitude', 'measurement_magn_moment',\n 'measurement_magn_volume', 'measurement_magn_mass']\n while k < len(sids):\n locname, site, sample, synth = '', '', '', ''\n s = sids[k]\n hmeths = []\n if 
verbose:\n print(s, k+1, 'out of ', len(sids))\n #\n #\n B, M = [], [] # B,M for hysteresis, Bdcd,Mdcd for irm-dcd data\n # get all measurements for this specimen\n spec = pmag.get_dictitem(hyst_data, 'er_specimen_name', s, 'T')\n if 'er_location_name' in spec[0].keys():\n locname = spec[0]['er_location_name']\n if 'er_site_name' in spec[0].keys():\n site = spec[0]['er_site_name']\n if 'er_sample_name' in spec[0].keys():\n sample = spec[0]['er_sample_name']\n if 'er_synthetic_name' in spec[0].keys():\n synth = spec[0]['er_synthetic_name']\n for m in intlist:\n # get all non-blank data for this specimen\n meas_data = pmag.get_dictitem(spec, m, '', 'F')\n if len(meas_data) > 0:\n break\n c = ['k-', 'b-', 'c-', 'g-', 'm-', 'r-', 'y-']\n cnum = 0\n if len(meas_data) > 0:\n Temps = []\n xlab, ylab, title = '', '', ''\n for rec in meas_data:\n if rec['measurement_temp'] not in Temps:\n Temps.append(rec['measurement_temp'])\n for t in Temps:\n print('working on t: ', t)\n t_data = pmag.get_dictitem(\n meas_data, 'measurement_temp', t, 'T')\n B, M = [], []\n for rec in t_data:\n B.append(float(rec['measurement_lab_field_dc']))\n M.append(float(rec[m]))\n # now plot the hysteresis curve(s)\n #\n if len(B) > 0:\n B = numpy.array(B)\n M = numpy.array(M)\n if t == Temps[-1]:\n xlab = 'Field (T)'\n ylab = m\n title = 'Hysteresis: '+s\n if t == Temps[0]:\n pmagplotlib.clearFIG(HDD['hyst'])\n pmagplotlib.plot_xy(\n HDD['hyst'], B, M, sym=c[cnum], xlab=xlab, ylab=ylab, title=title)\n pmagplotlib.plot_xy(HDD['hyst'], [\n 1.1*B.min(), 1.1*B.max()], [0, 0], sym='k-', xlab=xlab, ylab=ylab, title=title)\n pmagplotlib.plot_xy(HDD['hyst'], [0, 0], [\n 1.1*M.min(), 1.1*M.max()], sym='k-', xlab=xlab, ylab=ylab, title=title)\n if verbose:\n pmagplotlib.draw_figs(HDD)\n cnum += 1\n if cnum == len(c):\n cnum = 0\n #\n files = {}\n if plots:\n if pltspec != \"\":\n s = pltspec\n files = {}\n for key in HDD.keys():\n if pmagplotlib.isServer: # use server plot naming convention\n if synth == 
'':\n                    filename = \"LO:_\"+locname+'_SI:_'+site + \\\n                        '_SA:_'+sample+'_SP:_'+s+'_TY:_'+key+'_.'+fmt\n                else:\n                    filename = 'SY:_'+synth+'_TY:_'+key+'_.'+fmt\n                files[key] = filename\n            else:  # use more readable plot naming convention\n                if synth == '':\n                    filename = ''\n                    for item in [locname, site, sample, s, key]:\n                        if item:\n                            item = item.replace(' ', '_')\n                            filename += item + '_'\n                    if filename.endswith('_'):\n                        filename = filename[:-1]\n                    filename += \".{}\".format(fmt)\n                else:\n                    filename = synth+'_'+key+'.'+fmt\n                files[key] = filename\n\n            pmagplotlib.save_plots(HDD, files)\n            if pltspec != \"\":\n                sys.exit()\n        if verbose:\n            pmagplotlib.draw_figs(HDD)\n            ans = input(\n                \"S[a]ve plots, [s]pecimen name, [q]uit, to continue\\n \")\n            if ans == \"a\":\n                files = {}\n                for key in HDD.keys():\n                    if pmagplotlib.isServer:\n                        print('server')\n                        files[key] = \"LO:_\"+locname+'_SI:_'+site + \\\n                            '_SA:_'+sample+'_SP:_'+s+'_TY:_'+key+'_.'+fmt\n                    else:\n                        print('not server')\n                        filename = ''\n                        for item in [locname, site, sample, s, key]:\n                            if item:\n                                item = item.replace(' ', '_')\n                                filename += item + '_'\n                        if filename.endswith('_'):\n                            filename = filename[:-1]\n                        filename += \".{}\".format(fmt)\n                        files[key] = filename\n                print('files', files)\n                pmagplotlib.save_plots(HDD, files)\n            if ans == '':\n                k += 1\n            if ans == \"p\":\n                del HystRecs[-1]\n                k -= 1\n            if ans == 'q':\n                print(\"Goodbye\")\n                sys.exit()\n            if ans == 's':\n                keepon = 1\n                specimen = input(\n                    'Enter desired specimen name (or first part thereof): ')\n                while keepon == 1:\n                    try:\n                        k = sids.index(specimen)\n                        keepon = 0\n                    except ValueError:\n                        tmplist = []\n                        for qq in range(len(sids)):\n                            if specimen in sids[qq]:\n                                tmplist.append(sids[qq])\n                        print(specimen, \" not found, but this was: \")\n                        print(tmplist)\n                        specimen = input('Select one or try again\\n ')\n        else:\n            k += 1\n        if len(B) == 0:\n            if verbose:\n                print('skipping this one - no hysteresis data')\n            k += 1", "response": "NAME\n    quick_hyst.py\n\n    DESCRIPTION\n    makes plots of hysteresis data\n\n    SYNTAX\n        quick_hyst.py [command line options]\n\n    OPTIONS\n        -h prints help message and quits\n        -usr USER: identify user, default is \"\"\n        -f: specify input file, default is magic_measurements.txt\n        -spc SPEC: specify specimen name to plot and quit\n        -sav save all plots and quit\n        -fmt [png,svg,eps,jpg]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfunctions misfit evaluation for best - fit tanh curve", "response": "def funk(p, x, y):\n    \"\"\"\n    Function misfit evaluation for best-fit tanh curve\n    f(x[:]) = alpha*tanh(beta*x[:])\n    alpha = p[0]\n    beta = p[1]\n    funk(p) = sqrt(sum((y[:] - f(x[:]))**2)/len(y[:]))\n    Output is RMS misfit\n    \"\"\"\n    alpha=p[0]\n    beta=p[1]\n    dev=0\n    for i in range(len(x)):\n        dev=dev+((y[i]-(alpha*math.tanh(beta*x[i])))**2)\n    rms=math.sqrt(old_div(dev,float(len(y))))\n    return rms"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef compare(a, b):\n    s=0\n    for i in range(len(a)):\n        s=s+abs(a[i]-b[i])\n    return s", "response": "Compare items in 2 arrays. 
Returns the sum of absolute differences."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncalculate the TRM using tanh relationship", "response": "def TRM(f,a,b):\n    \"\"\"\n    Calculate TRM using tanh relationship\n    TRM(f)=a*math.tanh(b*f)\n    \"\"\"\n    m = float(a) * math.tanh(float(b) * float(f))\n    return float(m)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef TRMinv(m,a,b):\n    \"\"\"\n    Calculate applied field from TRM using tanh relationship\n    TRMinv(m)=(1/b)*atanh(m/a)\n    \"\"\"\n    WARN = True  # Warn, rather than stop if I encounter a NaN...\n    if float(a)==0:\n        print('ERROR: TRMinv: a==0.')\n        if not WARN: sys.exit()\n    if float(b)==0:\n        print('ERROR: TRMinv: b==0.')\n        if not WARN: sys.exit()\n    x = (old_div(float(m), float(a)))\n    if (1-x)<=0:\n        print('ERROR: TRMinv: (1-x)<=0.')\n        if not WARN: sys.exit()\n        return -1\n    f = (old_div(1.,float(b))) * 0.5 * math.log(old_div((1+x), (1-x)))\n    return float(f)", "response": "Calculate applied field from TRM using tanh relationship."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef NRM(f,a,b,best):\n    \"\"\"\n    Calculate NRM expected lab field and estimated ancient field\n    NRM(blab,best)= (best/blab)*TRM(blab)\n    \"\"\"\n    WARN = True  # Warn, rather than stop if I encounter a NaN...\n    if float(f)==0:\n        print('ERROR: NRM: f==0.')\n        if not WARN: sys.exit()\n    m = (old_div(float(best),float(f))) * TRM(f,a,b)\n    return float(m)", "response": "Calculate NRM expected lab field and estimated ancient field"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main():\n    dist='F' # default distribution is Fisherian\n    mode=1\n    title=\"\"\n    EQ={'eq':1}\n    if len(sys.argv) > 0:\n        if '-h' in sys.argv: # check if help is needed\n            print(main.__doc__)\n            sys.exit() # graceful quit\n        if '-i' in sys.argv: # ask for filename\n            file=input(\"Enter file name with dec, inc data: \")\n            dist=input(\"Enter desired distribution: [Fish]er, [Bing]ham, [Kent] [Boot] [default is Fisher]: \")\n            if dist==\"\":dist=\"F\"\n            if dist==\"Bing\":dist=\"B\"\n            if dist==\"Kent\":dist=\"K\"\n            if dist==\"Boot\":\n                type=input(\" Ellipses or distribution of vectors? [E]/V \")\n                if type==\"\" or type==\"E\":\n                    dist=\"BE\"\n                else:\n                    dist=\"BV\"\n                    EQ['bdirs']=2\n                    pmagplotlib.plot_init(EQ['bdirs'],5,5)\n        else:\n#\n            if '-f' in sys.argv:\n                ind=sys.argv.index('-f')\n                file=sys.argv[ind+1]\n            else:\n                print('you must specify a file name')\n                print(main.__doc__)\n                sys.exit()\n            if '-Bing' in sys.argv:dist='B'\n            if '-Kent' in sys.argv:dist='K'\n            if '-Boot' in sys.argv:\n                ind=sys.argv.index('-Boot')\n                type=sys.argv[ind+1]\n                if type=='E':\n                    dist='BE'\n                elif type=='V':\n                    dist='BV'\n                    EQ['bdirs']=2\n                    pmagplotlib.plot_init(EQ['bdirs'],5,5)\n                else:\n                    print(main.__doc__)\n                    sys.exit()\n    pmagplotlib.plot_init(EQ['eq'],5,5)\n#\n# get to work\n    f=open(file,'r')\n    data=f.readlines()\n#\n    DIs= [] # set up list for dec inc data\n    DiRecs=[]\n    pars=[]\n    nDIs,rDIs,npars,rpars=[],[],[],[]\n    mode=1\n    for line in data: # read in the data from standard input\n        DiRec={}\n        rec=line.split() # split each line on space to get records\n        DIs.append((float(rec[0]),float(rec[1]),1.))\n        DiRec['dec']=rec[0]\n        DiRec['inc']=rec[1]\n        DiRec['direction_type']='l'\n        DiRecs.append(DiRec)\n    # split into two modes\n    ppars=pmag.doprinc(DIs) # get principal directions\n    for rec in DIs:\n        angle=pmag.angle([rec[0],rec[1]],[ppars['dec'],ppars['inc']])\n        if angle>90.:\n            rDIs.append(rec)\n        else:\n            nDIs.append(rec)\n    if dist=='B': # do on whole dataset\n        title=\"Bingham confidence ellipse\"\n        bpars=pmag.dobingham(DIs)\n        for key in list(bpars.keys()):\n            if key!='n':print(\"    \",key, '%7.1f'%(bpars[key]))\n            if key=='n':print(\"    \",key, '    %i'%(bpars[key]))\n        npars.append(bpars['dec'])\n        npars.append(bpars['inc'])\n        npars.append(bpars['Zeta'])\n        npars.append(bpars['Zdec'])\n        npars.append(bpars['Zinc'])\n        npars.append(bpars['Eta'])\n        npars.append(bpars['Edec'])\n        
npars.append(bpars['Einc'])\n if dist=='F':\n title=\"Fisher confidence cone\"\n if len(nDIs)>3:\n fpars=pmag.fisher_mean(nDIs)\n print(\"mode \",mode)\n for key in list(fpars.keys()):\n if key!='n':print(\" \",key, '%7.1f'%(fpars[key]))\n if key=='n':print(\" \",key, ' %i'%(fpars[key]))\n mode+=1\n npars.append(fpars['dec'])\n npars.append(fpars['inc'])\n npars.append(fpars['alpha95']) # Beta\n npars.append(fpars['dec'])\n isign=abs(fpars['inc']) / fpars['inc']\n npars.append(fpars['inc']-isign*90.) #Beta inc\n npars.append(fpars['alpha95']) # gamma\n npars.append(fpars['dec']+90.) # Beta dec\n npars.append(0.) #Beta inc\n if len(rDIs)>3:\n fpars=pmag.fisher_mean(rDIs)\n print(\"mode \",mode)\n for key in list(fpars.keys()):\n if key!='n':print(\" \",key, '%7.1f'%(fpars[key]))\n if key=='n':print(\" \",key, ' %i'%(fpars[key]))\n mode+=1\n rpars.append(fpars['dec'])\n rpars.append(fpars['inc'])\n rpars.append(fpars['alpha95']) # Beta\n rpars.append(fpars['dec'])\n isign=abs(fpars['inc']) / fpars['inc']\n rpars.append(fpars['inc']-isign*90.) #Beta inc\n rpars.append(fpars['alpha95']) # gamma\n rpars.append(fpars['dec']+90.) # Beta dec\n rpars.append(0.) 
#Beta inc\n if dist=='K':\n title=\"Kent confidence ellipse\"\n if len(nDIs)>3:\n kpars=pmag.dokent(nDIs,len(nDIs))\n print(\"mode \",mode)\n for key in list(kpars.keys()):\n if key!='n':print(\" \",key, '%7.1f'%(kpars[key]))\n if key=='n':print(\" \",key, ' %i'%(kpars[key]))\n mode+=1\n npars.append(kpars['dec'])\n npars.append(kpars['inc'])\n npars.append(kpars['Zeta'])\n npars.append(kpars['Zdec'])\n npars.append(kpars['Zinc'])\n npars.append(kpars['Eta'])\n npars.append(kpars['Edec'])\n npars.append(kpars['Einc'])\n if len(rDIs)>3:\n kpars=pmag.dokent(rDIs,len(rDIs))\n print(\"mode \",mode)\n for key in list(kpars.keys()):\n if key!='n':print(\" \",key, '%7.1f'%(kpars[key]))\n if key=='n':print(\" \",key, ' %i'%(kpars[key]))\n mode+=1\n rpars.append(kpars['dec'])\n rpars.append(kpars['inc'])\n rpars.append(kpars['Zeta'])\n rpars.append(kpars['Zdec'])\n rpars.append(kpars['Zinc'])\n rpars.append(kpars['Eta'])\n rpars.append(kpars['Edec'])\n rpars.append(kpars['Einc'])\n else: # assume bootstrap\n if dist=='BE':\n if len(nDIs)>5:\n BnDIs=pmag.di_boot(nDIs)\n Bkpars=pmag.dokent(BnDIs,1.)\n print(\"mode \",mode)\n for key in list(Bkpars.keys()):\n if key!='n':print(\" \",key, '%7.1f'%(Bkpars[key]))\n if key=='n':print(\" \",key, ' %i'%(Bkpars[key]))\n mode+=1\n npars.append(Bkpars['dec'])\n npars.append(Bkpars['inc'])\n npars.append(Bkpars['Zeta'])\n npars.append(Bkpars['Zdec'])\n npars.append(Bkpars['Zinc'])\n npars.append(Bkpars['Eta'])\n npars.append(Bkpars['Edec'])\n npars.append(Bkpars['Einc'])\n if len(rDIs)>5:\n BrDIs=pmag.di_boot(rDIs)\n Bkpars=pmag.dokent(BrDIs,1.)\n print(\"mode \",mode)\n for key in list(Bkpars.keys()):\n if key!='n':print(\" \",key, '%7.1f'%(Bkpars[key]))\n if key=='n':print(\" \",key, ' %i'%(Bkpars[key]))\n mode+=1\n rpars.append(Bkpars['dec'])\n rpars.append(Bkpars['inc'])\n rpars.append(Bkpars['Zeta'])\n rpars.append(Bkpars['Zdec'])\n rpars.append(Bkpars['Zinc'])\n rpars.append(Bkpars['Eta'])\n rpars.append(Bkpars['Edec'])\n 
rpars.append(Bkpars['Einc'])\n title=\"Bootstrapped confidence ellipse\"\n elif dist=='BV':\n if len(nDIs)>5:\n pmagplotlib.plot_eq(EQ['eq'],nDIs,'Data')\n BnDIs=pmag.di_boot(nDIs)\n pmagplotlib.plot_eq(EQ['bdirs'],BnDIs,'Bootstrapped Eigenvectors')\n if len(rDIs)>5:\n BrDIs=pmag.di_boot(rDIs)\n if len(nDIs)>5: # plot on existing plots\n pmagplotlib.plot_di(EQ['eq'],rDIs)\n pmagplotlib.plot_di(EQ['bdirs'],BrDIs)\n else:\n pmagplotlib.plot_eq(EQ['eq'],rDIs,'Data')\n pmagplotlib.plot_eq(EQ['bdirs'],BrDIs,'Bootstrapped Eigenvectors')\n pmagplotlib.draw_figs(EQ)\n ans=input('s[a]ve, [q]uit ')\n if ans=='q':sys.exit()\n if ans=='a':\n files={}\n for key in list(EQ.keys()):\n files[key]='BE_'+key+'.svg'\n pmagplotlib.save_plots(EQ,files)\n sys.exit()\n if len(nDIs)>5:\n pmagplotlib.plot_conf(EQ['eq'],title,DiRecs,npars,1)\n if len(rDIs)>5 and dist!='B':\n pmagplotlib.plot_conf(EQ['eq'],title,[],rpars,0)\n elif len(rDIs)>5 and dist!='B':\n pmagplotlib.plot_conf(EQ['eq'],title,DiRecs,rpars,1)\n pmagplotlib.draw_figs(EQ)\n ans=input('s[a]ve, [q]uit ')\n if ans=='q':sys.exit()\n if ans=='a':\n files={}\n for key in list(EQ.keys()):\n files[key]=key+'.svg'\n pmagplotlib.save_plots(EQ,files)", "response": "NAME is the main function for the get_n_tweets function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninitialize the GUI for the interpretation editor", "response": "def init_UI(self):\n \"\"\"\n Builds User Interface for the interpretation Editor\n \"\"\"\n\n #set fonts\n FONT_WEIGHT=1\n if sys.platform.startswith('win'): FONT_WEIGHT=-1\n font1 = wx.Font(9+FONT_WEIGHT, wx.SWISS, wx.NORMAL, wx.NORMAL, False, self.font_type)\n font2 = wx.Font(12+FONT_WEIGHT, wx.SWISS, wx.NORMAL, wx.NORMAL, False, self.font_type)\n\n #if you're on mac do some funny stuff to make it look okay\n is_mac = False\n if sys.platform.startswith(\"darwin\"):\n is_mac = True\n\n self.search_bar = wx.SearchCtrl(self.panel, size=(350*self.GUI_RESOLUTION,25) 
,style=wx.TE_PROCESS_ENTER | wx.TE_PROCESS_TAB | wx.TE_NOHIDESEL)\n self.Bind(wx.EVT_TEXT_ENTER, self.on_enter_search_bar,self.search_bar)\n self.Bind(wx.EVT_SEARCHCTRL_SEARCH_BTN, self.on_enter_search_bar,self.search_bar)\n self.search_bar.SetHelpText(dieh.search_help)\n# self.Bind(wx.EVT_TEXT, self.on_complete_search_bar,self.search_bar)\n\n #build logger\n self.logger = wx.ListCtrl(self.panel, -1, size=(100*self.GUI_RESOLUTION,475*self.GUI_RESOLUTION),style=wx.LC_REPORT)\n self.logger.SetFont(font1)\n self.logger.InsertColumn(0, 'specimen',width=75*self.GUI_RESOLUTION)\n self.logger.InsertColumn(1, 'fit name',width=65*self.GUI_RESOLUTION)\n self.logger.InsertColumn(2, 'max',width=55*self.GUI_RESOLUTION)\n self.logger.InsertColumn(3, 'min',width=55*self.GUI_RESOLUTION)\n self.logger.InsertColumn(4, 'n',width=25*self.GUI_RESOLUTION)\n self.logger.InsertColumn(5, 'fit type',width=60*self.GUI_RESOLUTION)\n self.logger.InsertColumn(6, 'dec',width=45*self.GUI_RESOLUTION)\n self.logger.InsertColumn(7, 'inc',width=45*self.GUI_RESOLUTION)\n self.logger.InsertColumn(8, 'mad',width=45*self.GUI_RESOLUTION)\n self.logger.InsertColumn(9, 'dang',width=45*self.GUI_RESOLUTION)\n self.logger.InsertColumn(10, 'a95',width=45*self.GUI_RESOLUTION)\n self.logger.InsertColumn(11, 'K',width=45*self.GUI_RESOLUTION)\n self.logger.InsertColumn(12, 'R',width=45*self.GUI_RESOLUTION)\n self.Bind(wx.EVT_LIST_ITEM_ACTIVATED, self.OnClick_listctrl, self.logger)\n self.Bind(wx.EVT_LIST_ITEM_RIGHT_CLICK,self.OnRightClickListctrl,self.logger)\n self.logger.SetHelpText(dieh.logger_help)\n\n #set fit attributes boxsizers\n self.display_sizer = wx.StaticBoxSizer(wx.StaticBox(self.panel, wx.ID_ANY, \"display options\"), wx.HORIZONTAL)\n self.name_sizer = wx.StaticBoxSizer(wx.StaticBox(self.panel, wx.ID_ANY, \"fit name/color\"), wx.VERTICAL)\n self.bounds_sizer = wx.StaticBoxSizer(wx.StaticBox(self.panel, wx.ID_ANY, \"fit bounds\"), wx.VERTICAL)\n self.buttons_sizer = 
wx.StaticBoxSizer(wx.StaticBox(self.panel, wx.ID_ANY), wx.VERTICAL)\n\n #logger display selection box\n UPPER_LEVEL = self.parent.level_box.GetValue()\n if UPPER_LEVEL=='sample':\n name_choices = self.parent.samples\n if UPPER_LEVEL=='site':\n name_choices = self.parent.sites\n if UPPER_LEVEL=='location':\n name_choices = self.parent.locations\n if UPPER_LEVEL=='study':\n name_choices = ['this study']\n\n self.level_box = wx.ComboBox(self.panel, -1, size=(110*self.GUI_RESOLUTION, 25), value=UPPER_LEVEL, choices=['sample','site','location','study'], style=wx.CB_DROPDOWN|wx.TE_READONLY)\n self.Bind(wx.EVT_COMBOBOX, self.on_select_high_level,self.level_box)\n self.level_box.SetHelpText(dieh.level_box_help)\n\n self.level_names = wx.ComboBox(self.panel, -1, size=(110*self.GUI_RESOLUTION, 25), value=self.parent.level_names.GetValue(), choices=name_choices, style=wx.CB_DROPDOWN|wx.TE_READONLY)\n self.Bind(wx.EVT_COMBOBOX, self.on_select_level_name,self.level_names)\n self.level_names.SetHelpText(dieh.level_names_help)\n\n #mean type and plot display boxes\n self.mean_type_box = wx.ComboBox(self.panel, -1, size=(110*self.GUI_RESOLUTION, 25), value=self.parent.mean_type_box.GetValue(), choices=['Fisher','Fisher by polarity','None'], style=wx.CB_DROPDOWN|wx.TE_READONLY, name=\"high_type\")\n self.Bind(wx.EVT_COMBOBOX, self.on_select_mean_type_box,self.mean_type_box)\n self.mean_type_box.SetHelpText(dieh.mean_type_help)\n\n self.mean_fit_box = wx.ComboBox(self.panel, -1, size=(110*self.GUI_RESOLUTION, 25), value=self.parent.mean_fit, choices=(['None','All'] + self.parent.fit_list), style=wx.CB_DROPDOWN|wx.TE_READONLY, name=\"high_type\")\n self.Bind(wx.EVT_COMBOBOX, self.on_select_mean_fit_box,self.mean_fit_box)\n self.mean_fit_box.SetHelpText(dieh.mean_fit_help)\n\n #show box\n if UPPER_LEVEL == \"study\" or UPPER_LEVEL == \"location\":\n show_box_choices = ['specimens','samples','sites']\n if UPPER_LEVEL == \"site\":\n show_box_choices = ['specimens','samples']\n if 
UPPER_LEVEL == \"sample\":\n show_box_choices = ['specimens']\n\n self.show_box = wx.ComboBox(self.panel, -1, size=(110*self.GUI_RESOLUTION, 25), value='specimens', choices=show_box_choices, style=wx.CB_DROPDOWN|wx.TE_READONLY,name=\"high_elements\")\n self.Bind(wx.EVT_COMBOBOX, self.on_select_show_box,self.show_box)\n self.show_box.SetHelpText(dieh.show_help)\n\n #coordinates box\n self.coordinates_box = wx.ComboBox(self.panel, -1, size=(110*self.GUI_RESOLUTION, 25), choices=self.parent.coordinate_list, value=self.parent.coordinates_box.GetValue(), style=wx.CB_DROPDOWN|wx.TE_READONLY, name=\"coordinates\")\n self.Bind(wx.EVT_COMBOBOX, self.on_select_coordinates,self.coordinates_box)\n self.coordinates_box.SetHelpText(dieh.coordinates_box_help)\n\n #bounds select boxes\n self.tmin_box = wx.ComboBox(self.panel, -1, size=(80*self.GUI_RESOLUTION, 25), choices=[''] + self.parent.T_list, style=wx.CB_DROPDOWN|wx.TE_READONLY, name=\"lower bound\")\n self.tmin_box.SetHelpText(dieh.tmin_box_help)\n\n self.tmax_box = wx.ComboBox(self.panel, -1, size=(80*self.GUI_RESOLUTION, 25), choices=[''] + self.parent.T_list, style=wx.CB_DROPDOWN|wx.TE_READONLY, name=\"upper bound\")\n self.tmax_box.SetHelpText(dieh.tmax_box_help)\n\n #color box\n self.color_dict = self.parent.color_dict\n self.color_box = wx.ComboBox(self.panel, -1, size=(80*self.GUI_RESOLUTION, 25), choices=[''] + sorted(self.color_dict.keys()), style=wx.CB_DROPDOWN|wx.TE_PROCESS_ENTER, name=\"color\")\n self.Bind(wx.EVT_TEXT_ENTER, self.add_new_color, self.color_box)\n self.color_box.SetHelpText(dieh.color_box_help)\n\n #name box\n self.name_box = wx.TextCtrl(self.panel, -1, size=(80*self.GUI_RESOLUTION, 25), name=\"name\")\n self.name_box.SetHelpText(dieh.name_box_help)\n\n #more mac stuff\n h_size_buttons,button_spacing = 25,5.5\n if is_mac: h_size_buttons,button_spacing = 18,0.\n\n #buttons\n self.add_all_button = wx.Button(self.panel, id=-1, label='add new fit to all 
specimens',size=(160*self.GUI_RESOLUTION,h_size_buttons))\n self.add_all_button.SetFont(font1)\n self.Bind(wx.EVT_BUTTON, self.add_fit_to_all, self.add_all_button)\n self.add_all_button.SetHelpText(dieh.add_all_help)\n\n self.add_fit_button = wx.Button(self.panel, id=-1, label='add fit to highlighted specimens',size=(160*self.GUI_RESOLUTION,h_size_buttons))\n self.add_fit_button.SetFont(font1)\n self.Bind(wx.EVT_BUTTON, self.add_highlighted_fits, self.add_fit_button)\n self.add_fit_button.SetHelpText(dieh.add_fit_btn_help)\n\n self.delete_fit_button = wx.Button(self.panel, id=-1, label='delete highlighted fits',size=(160*self.GUI_RESOLUTION,h_size_buttons))\n self.delete_fit_button.SetFont(font1)\n self.Bind(wx.EVT_BUTTON, self.delete_highlighted_fits, self.delete_fit_button)\n self.delete_fit_button.SetHelpText(dieh.delete_fit_btn_help)\n\n self.apply_changes_button = wx.Button(self.panel, id=-1, label='apply changes to highlighted fits',size=(160*self.GUI_RESOLUTION,h_size_buttons))\n self.apply_changes_button.SetFont(font1)\n self.Bind(wx.EVT_BUTTON, self.apply_changes, self.apply_changes_button)\n self.apply_changes_button.SetHelpText(dieh.apply_changes_help)\n\n #windows\n display_window_0 = wx.GridSizer(2, 1, 10*self.GUI_RESOLUTION, 19*self.GUI_RESOLUTION)\n display_window_1 = wx.GridSizer(2, 1, 10*self.GUI_RESOLUTION, 19*self.GUI_RESOLUTION)\n display_window_2 = wx.GridSizer(2, 1, 10*self.GUI_RESOLUTION, 19*self.GUI_RESOLUTION)\n name_window = wx.GridSizer(2, 1, 10*self.GUI_RESOLUTION, 19*self.GUI_RESOLUTION)\n bounds_window = wx.GridSizer(2, 1, 10*self.GUI_RESOLUTION, 19*self.GUI_RESOLUTION)\n buttons1_window = wx.GridSizer(4, 1, 5*self.GUI_RESOLUTION, 19*self.GUI_RESOLUTION)\n display_window_0.AddMany( [(self.coordinates_box, wx.ALIGN_LEFT),\n (self.show_box, wx.ALIGN_LEFT)] )\n display_window_1.AddMany( [(self.level_box, wx.ALIGN_LEFT),\n (self.level_names, wx.ALIGN_LEFT)] )\n display_window_2.AddMany( [(self.mean_type_box, wx.ALIGN_LEFT),\n 
(self.mean_fit_box, wx.ALIGN_LEFT)] )\n name_window.AddMany( [(self.name_box, wx.ALIGN_LEFT),\n (self.color_box, wx.ALIGN_LEFT)] )\n bounds_window.AddMany( [(self.tmin_box, wx.ALIGN_LEFT),\n (self.tmax_box, wx.ALIGN_LEFT)] )\n buttons1_window.AddMany( [(self.add_fit_button, wx.ALL|wx.ALIGN_CENTER|wx.SHAPED, 0),\n (self.add_all_button, wx.ALL|wx.ALIGN_CENTER|wx.SHAPED, 0),\n (self.delete_fit_button, wx.ALL|wx.ALIGN_CENTER|wx.SHAPED, 0),\n (self.apply_changes_button, wx.ALL|wx.ALIGN_CENTER|wx.SHAPED, 0)])\n self.display_sizer.Add(display_window_0, 1, wx.TOP|wx.EXPAND, 8)\n self.display_sizer.Add(display_window_1, 1, wx.TOP | wx.LEFT|wx.EXPAND, 8)\n self.display_sizer.Add(display_window_2, 1, wx.TOP | wx.LEFT|wx.EXPAND, 8)\n self.name_sizer.Add(name_window, 1, wx.TOP, 5.5)\n self.bounds_sizer.Add(bounds_window, 1, wx.TOP, 5.5)\n self.buttons_sizer.Add(buttons1_window, 1, wx.TOP, 0)\n\n #duplicate high levels plot\n self.fig = Figure((2.5*self.GUI_RESOLUTION, 2.5*self.GUI_RESOLUTION), dpi=100)\n self.canvas = FigCanvas(self.panel, -1, self.fig, )\n self.toolbar = NavigationToolbar(self.canvas)\n self.toolbar.Hide()\n self.toolbar.zoom()\n self.high_EA_setting = \"Zoom\"\n self.canvas.Bind(wx.EVT_LEFT_DCLICK,self.on_equalarea_high_select)\n self.canvas.Bind(wx.EVT_MOTION,self.on_change_high_mouse_cursor)\n self.canvas.Bind(wx.EVT_MIDDLE_DOWN,self.home_high_equalarea)\n self.canvas.Bind(wx.EVT_RIGHT_DOWN,self.pan_zoom_high_equalarea)\n self.canvas.SetHelpText(dieh.eqarea_help)\n\n self.eqarea = self.fig.add_subplot(111)\n draw_net(self.eqarea)\n\n #Higher Level Statistics Box\n self.stats_sizer = wx.StaticBoxSizer( wx.StaticBox( self.panel, wx.ID_ANY,\"mean statistics\" ), wx.VERTICAL)\n\n for parameter in ['mean_type','dec','inc','alpha95','K','R','n_lines','n_planes']:\n COMMAND=\"self.%s_window=wx.TextCtrl(self.panel,style=wx.TE_CENTER|wx.TE_READONLY,size=(100*self.GUI_RESOLUTION,25))\"%parameter\n exec(COMMAND)\n 
COMMAND=\"self.%s_window.SetBackgroundColour(wx.WHITE)\"%parameter\n exec(COMMAND)\n COMMAND=\"self.%s_window.SetFont(font2)\"%parameter\n exec(COMMAND)\n COMMAND=\"self.%s_outer_window = wx.GridSizer(1,2,5*self.GUI_RESOLUTION,15*self.GUI_RESOLUTION)\"%parameter\n exec(COMMAND)\n COMMAND=\"\"\"self.%s_outer_window.AddMany([\n (wx.StaticText(self.panel,label='%s',style=wx.TE_CENTER),wx.EXPAND),\n (self.%s_window, wx.EXPAND)])\"\"\"%(parameter,parameter,parameter)\n exec(COMMAND)\n COMMAND=\"self.stats_sizer.Add(self.%s_outer_window, 1, wx.ALIGN_LEFT|wx.EXPAND, 0)\"%parameter\n exec(COMMAND)\n\n self.switch_stats_button = wx.SpinButton(self.panel, id=wx.ID_ANY, style=wx.SP_HORIZONTAL|wx.SP_ARROW_KEYS|wx.SP_WRAP, name=\"change stats\")\n self.Bind(wx.EVT_SPIN, self.on_select_stats_button,self.switch_stats_button)\n self.switch_stats_button.SetHelpText(dieh.switch_stats_btn_help)\n\n #construct panel\n hbox0 = wx.BoxSizer(wx.HORIZONTAL)\n hbox0.Add(self.name_sizer,flag=wx.ALIGN_TOP|wx.EXPAND,border=8)\n hbox0.Add(self.bounds_sizer,flag=wx.ALIGN_TOP|wx.EXPAND,border=8)\n\n vbox0 = wx.BoxSizer(wx.VERTICAL)\n vbox0.Add(hbox0,flag=wx.ALIGN_TOP,border=8)\n vbox0.Add(self.buttons_sizer,flag=wx.ALIGN_TOP,border=8)\n\n hbox1 = wx.BoxSizer(wx.HORIZONTAL)\n hbox1.Add(vbox0,flag=wx.ALIGN_TOP,border=8)\n hbox1.Add(self.stats_sizer,flag=wx.ALIGN_TOP,border=8)\n hbox1.Add(self.switch_stats_button,flag=wx.ALIGN_TOP|wx.EXPAND,border=8)\n\n vbox1 = wx.BoxSizer(wx.VERTICAL)\n vbox1.Add(self.display_sizer,flag=wx.ALIGN_TOP,border=8)\n vbox1.Add(hbox1,flag=wx.ALIGN_TOP,border=8)\n vbox1.Add(self.canvas,proportion=1,flag=wx.ALIGN_CENTER_HORIZONTAL | wx.ALIGN_CENTER_VERTICAL | wx.EXPAND,border=8)\n\n vbox2 = wx.BoxSizer(wx.VERTICAL)\n vbox2.Add(self.search_bar,proportion=.5,flag=wx.ALIGN_LEFT | wx.ALIGN_BOTTOM | wx.EXPAND, border=8)\n vbox2.Add(self.logger,proportion=1,flag=wx.ALIGN_LEFT|wx.EXPAND,border=8)\n\n hbox2 = wx.BoxSizer(wx.HORIZONTAL)\n 
hbox2.Add(vbox2,proportion=1,flag=wx.ALIGN_LEFT|wx.EXPAND)\n hbox2.Add(vbox1,flag=wx.ALIGN_TOP|wx.EXPAND)\n\n self.panel.SetSizerAndFit(hbox2)\n hbox2.Fit(self)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the interpretation editor window to reflect the changes made to the interpretation data.", "response": "def update_editor(self):\n \"\"\"\n updates the logger and plot on the interpretation editor window\n \"\"\"\n\n self.fit_list = []\n self.search_choices = []\n for specimen in self.specimens_list:\n if specimen not in self.parent.pmag_results_data['specimens']: continue\n self.fit_list += [(fit,specimen) for fit in self.parent.pmag_results_data['specimens'][specimen]]\n\n self.logger.DeleteAllItems()\n offset = 0\n for i in range(len(self.fit_list)):\n i -= offset\n v = self.update_logger_entry(i)\n if v == \"s\": offset += 1"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_logger_entry(self,i):\n if i < len(self.fit_list):\n tup = self.fit_list[i]\n elif i < self.logger.GetItemCount():\n self.logger.DeleteItem(i)\n return\n else: return\n\n coordinate_system = self.parent.COORDINATE_SYSTEM\n fit = tup[0]\n pars = fit.get(coordinate_system)\n fmin,fmax,n,ftype,dec,inc,mad,dang,a95,sk,sr2 = \"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\"\n\n specimen = tup[1]\n if coordinate_system=='geographic':\n block_key = 'zijdblock_geo'\n elif coordinate_system=='tilt-corrected':\n block_key = 'zijdblock_tilt'\n else:\n block_key = 'zijdblock'\n\n name = fit.name\n if pars == {} and self.parent.Data[specimen][block_key] != []:\n fit.put(specimen, coordinate_system, self.parent.get_PCA_parameters(specimen,fit,fit.tmin,fit.tmax,coordinate_system,fit.PCA_type))\n pars = fit.get(coordinate_system)\n if self.parent.Data[specimen][block_key]==[]:\n spars = fit.get('specimen')\n fmin = fit.tmin\n fmax = fit.tmax\n if 'specimen_n' in list(spars.keys()): n = 
str(spars['specimen_n'])\n else: n = 'No Data'\n if 'calculation_type' in list(spars.keys()): ftype = spars['calculation_type']\n else: ftype = 'No Data'\n dec = 'No Data'\n inc = 'No Data'\n mad = 'No Data'\n dang = 'No Data'\n a95 = 'No Data'\n sk = 'No Data'\n sr2 = 'No Data'\n else:\n if 'measurement_step_min' in list(pars.keys()): fmin = str(fit.tmin)\n else: fmin = \"N/A\"\n if 'measurement_step_max' in list(pars.keys()): fmax = str(fit.tmax)\n else: fmax = \"N/A\"\n if 'specimen_n' in list(pars.keys()): n = str(pars['specimen_n'])\n else: n = \"N/A\"\n if 'calculation_type' in list(pars.keys()): ftype = pars['calculation_type']\n else: ftype = \"N/A\"\n if 'specimen_dec' in list(pars.keys()): dec = \"%.1f\"%pars['specimen_dec']\n else: dec = \"N/A\"\n if 'specimen_inc' in list(pars.keys()): inc = \"%.1f\"%pars['specimen_inc']\n else: inc = \"N/A\"\n if 'specimen_mad' in list(pars.keys()): mad = \"%.1f\"%pars['specimen_mad']\n else: mad = \"N/A\"\n if 'specimen_dang' in list(pars.keys()): dang = \"%.1f\"%pars['specimen_dang']\n else: dang = \"N/A\"\n if 'specimen_alpha95' in list(pars.keys()): a95 = \"%.1f\"%pars['specimen_alpha95']\n else: a95 = \"N/A\"\n if 'specimen_k' in list(pars.keys()): sk = \"%.1f\"%pars['specimen_k']\n else: sk = \"N/A\"\n if 'specimen_r' in list(pars.keys()): sr2 = \"%.1f\"%pars['specimen_r']\n else: sr2 = \"N/A\"\n\n if self.search_query != \"\":\n entry = (specimen+name+fmin+fmax+n+ftype+dec+inc+mad+dang+a95+sk+sr2).replace(\" \",\"\").lower()\n if self.search_query not in entry:\n self.fit_list.pop(i)\n if i < self.logger.GetItemCount():\n self.logger.DeleteItem(i)\n return \"s\"\n for e in (specimen,name,fmin,fmax,n,ftype,dec,inc,mad,dang,a95,sk,sr2):\n if e not in self.search_choices:\n self.search_choices.append(e)\n\n if i < self.logger.GetItemCount():\n self.logger.DeleteItem(i)\n self.logger.InsertItem(i, str(specimen))\n self.logger.SetItem(i, 1, name)\n self.logger.SetItem(i, 2, fmin)\n self.logger.SetItem(i, 3, fmax)\n 
self.logger.SetItem(i, 4, n)\n self.logger.SetItem(i, 5, ftype)\n self.logger.SetItem(i, 6, dec)\n self.logger.SetItem(i, 7, inc)\n self.logger.SetItem(i, 8, mad)\n self.logger.SetItem(i, 9, dang)\n self.logger.SetItem(i, 10, a95)\n self.logger.SetItem(i, 11, sk)\n self.logger.SetItem(i, 12, sr2)\n self.logger.SetItemBackgroundColour(i,\"WHITE\")\n a,b = False,False\n if fit in self.parent.bad_fits:\n self.logger.SetItemBackgroundColour(i,\"red\")\n b = True\n if self.parent.current_fit == fit:\n self.logger.SetItemBackgroundColour(i,\"LIGHT BLUE\")\n self.logger_focus(i)\n self.current_fit_index = i\n a = True\n if a and b:\n self.logger.SetItemBackgroundColour(i,\"red\")", "response": "helper function that updates the logger entry with the information from the fit_list parameter"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating passed in fit or index as current fit for the editor (does not affect parent), if no parameters are passed in it sets first fit as current @param: new_fit -> fit object to highlight as selected", "response": "def change_selected(self,new_fit):\n \"\"\"\n updates passed in fit or index as current fit for the editor (does not affect parent),\n if no parameters are passed in it sets first fit as current\n @param: new_fit -> fit object to highlight as selected\n \"\"\"\n if len(self.fit_list)==0: return\n if self.search_query and self.parent.current_fit not in [x[0] for x in self.fit_list]: return\n if self.current_fit_index == None:\n if not self.parent.current_fit: return\n for i,(fit,specimen) in enumerate(self.fit_list):\n if fit == self.parent.current_fit:\n self.current_fit_index = i\n break\n i = 0\n if isinstance(new_fit, Fit):\n for i, (fit,speci) in enumerate(self.fit_list):\n if fit == new_fit:\n break\n elif type(new_fit) is int:\n i = new_fit\n elif new_fit != None:\n print(('cannot select fit of type: ' + str(type(new_fit))))\n if self.current_fit_index != None and \\\n len(self.fit_list) > 
0 and \\\n self.fit_list[self.current_fit_index][0] in self.parent.bad_fits:\n self.logger.SetItemBackgroundColour(self.current_fit_index,\"\")\n else:\n self.logger.SetItemBackgroundColour(self.current_fit_index,\"WHITE\")\n self.current_fit_index = i\n if self.fit_list[self.current_fit_index][0] in self.parent.bad_fits:\n self.logger.SetItemBackgroundColour(self.current_fit_index,\"red\")\n else:\n self.logger.SetItemBackgroundColour(self.current_fit_index,\"LIGHT BLUE\")"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfocuses the logger on an index 12 entries below i", "response": "def logger_focus(self,i,focus_shift=16):\n \"\"\"\n focuses the logger on an index 12 entries below i\n @param: i -> index to focus on\n \"\"\"\n if self.logger.GetItemCount()-1 > i+focus_shift:\n i += focus_shift\n else:\n i = self.logger.GetItemCount()-1\n self.logger.Focus(i)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nhandling the double click on the listctrl", "response": "def OnClick_listctrl(self, event):\n \"\"\"\n Edits the logger and the Zeq_GUI parent object to select the fit that was newly selected by a double click\n @param: event -> wx.ListCtrlEvent that triggered this function\n \"\"\"\n i = event.GetIndex()\n if self.parent.current_fit == self.fit_list[i][0]: return\n self.parent.initialize_CART_rot(self.fit_list[i][1])\n si = self.parent.specimens.index(self.fit_list[i][1])\n self.parent.specimens_box.SetSelection(si)\n self.parent.select_specimen(self.fit_list[i][1])\n self.change_selected(i)\n fi = 0\n while (self.parent.s == self.fit_list[i][1] and i >= 0): i,fi = (i-1,fi+1)\n self.parent.update_fit_box()\n self.parent.fit_box.SetSelection(fi-1)\n self.parent.update_selection()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef OnRightClickListctrl(self, event):\n i = event.GetIndex()\n fit,spec = self.fit_list[i][0],self.fit_list[i][1]\n 
if fit in self.parent.bad_fits:\n if not self.parent.mark_fit_good(fit,spec=spec): return\n if i == self.current_fit_index:\n self.logger.SetItemBackgroundColour(i,\"LIGHT BLUE\")\n else:\n self.logger.SetItemBackgroundColour(i,\"WHITE\")\n else:\n if not self.parent.mark_fit_bad(fit): return\n self.logger.SetItemBackgroundColour(i,\"red\")\n self.parent.calculate_high_levels_data()\n self.parent.plot_high_levels_data()\n self.logger_focus(i)", "response": "Called when the user right-clicks the logger list control; toggles the clicked fit between good and bad and refreshes its row highlighting."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef on_select_show_box(self,event):\n self.parent.UPPER_LEVEL_SHOW=self.show_box.GetValue()\n self.parent.calculate_high_levels_data()\n self.parent.plot_high_levels_data()", "response": "This function is called when the user selects a new type of mean shown on the high levels mean plot."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets the selected specimen interpretations in the level_names combobox to give the user selections for which interpretations are shown in the logger.", "response": "def on_select_high_level(self,event,called_by_parent=False):\n \"\"\"\n alters the possible entries in level_names combobox to give the user selections for which specimen interpretations to display in the logger\n @param: event -> the wx.COMBOBOXEVENT that triggered this function\n \"\"\"\n UPPER_LEVEL=self.level_box.GetValue()\n\n if UPPER_LEVEL=='sample':\n self.level_names.SetItems(self.parent.samples)\n self.level_names.SetStringSelection(self.parent.Data_hierarchy['sample_of_specimen'][self.parent.s])\n\n if UPPER_LEVEL=='site':\n self.level_names.SetItems(self.parent.sites)\n self.level_names.SetStringSelection(self.parent.Data_hierarchy['site_of_specimen'][self.parent.s])\n\n if 
UPPER_LEVEL=='location':\n self.level_names.SetItems(self.parent.locations)\n self.level_names.SetStringSelection(self.parent.Data_hierarchy['location_of_specimen'][self.parent.s])\n\n if UPPER_LEVEL=='study':\n self.level_names.SetItems(['this study'])\n self.level_names.SetStringSelection('this study')\n\n if not called_by_parent:\n self.parent.level_box.SetStringSelection(UPPER_LEVEL)\n self.parent.onSelect_high_level(event,True)\n\n self.on_select_level_name(event)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nselecting the specimen interpretatoins in this objects logger", "response": "def on_select_level_name(self,event,called_by_parent=False):\n \"\"\"\n change this objects specimens_list to control which specimen interpretatoins are displayed in this objects logger\n @param: event -> the wx.ComboBoxEvent that triggered this function\n \"\"\"\n high_level_name=str(self.level_names.GetValue())\n\n if self.level_box.GetValue()=='sample':\n self.specimens_list=self.parent.Data_hierarchy['samples'][high_level_name]['specimens']\n elif self.level_box.GetValue()=='site':\n self.specimens_list=self.parent.Data_hierarchy['sites'][high_level_name]['specimens']\n elif self.level_box.GetValue()=='location':\n self.specimens_list=self.parent.Data_hierarchy['locations'][high_level_name]['specimens']\n elif self.level_box.GetValue()=='study':\n self.specimens_list=self.parent.Data_hierarchy['study']['this study']['specimens']\n\n if not called_by_parent:\n self.parent.level_names.SetStringSelection(high_level_name)\n self.parent.onSelect_level_name(event,True)\n\n self.specimens_list.sort(key=spec_key_func)\n self.update_editor()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef on_select_mean_type_box(self, event):\n new_mean_type = self.mean_type_box.GetValue()\n if new_mean_type == \"None\":\n self.parent.clear_high_level_pars()\n 
self.parent.mean_type_box.SetStringSelection(new_mean_type)\n self.parent.onSelect_mean_type_box(event)", "response": "set parent Zeq_GUI to reflect the change in this box, clearing the high level interpretation parameters when the new mean type is None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting parent Zeq_GUI to reflect the change in this box then replot the high level means plot @param: event -> the wx.COMBOBOXEVENT that triggered this function", "response": "def on_select_mean_fit_box(self, event):\n \"\"\"\n set parent Zeq_GUI to reflect the change in this box then replot the high level means plot\n @param: event -> the wx.COMBOBOXEVENT that triggered this function\n \"\"\"\n new_mean_fit = self.mean_fit_box.GetValue()\n self.parent.mean_fit_box.SetStringSelection(new_mean_fit)\n self.parent.onSelect_mean_fit_box(event)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a new interpretation to each specimen highlighted in logger", "response": "def add_highlighted_fits(self, event):\n \"\"\"\n adds a new interpretation to each specimen highlighted in logger if multiple interpretations are highlighted of the same specimen only one new interpretation is added\n @param: event -> the wx.ButtonEvent that triggered this function\n \"\"\"\n\n specimens = []\n next_i = self.logger.GetNextSelected(-1)\n if next_i == -1: return\n while next_i != -1:\n fit,specimen = self.fit_list[next_i]\n if specimen in specimens:\n next_i = self.logger.GetNextSelected(next_i)\n continue\n else: specimens.append(specimen)\n next_i = self.logger.GetNextSelected(next_i)\n\n for specimen in specimens:\n self.add_fit_to_specimen(specimen)\n\n self.update_editor()\n self.parent.update_selection()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_highlighted_fits(self, event):\n\n next_i = -1\n deleted_items = []\n while True:\n next_i = self.logger.GetNextSelected(next_i)\n if next_i == -1:\n 
break\n deleted_items.append(next_i)\n deleted_items.sort(reverse=True)\n for item in deleted_items:\n self.delete_entry(index=item)\n self.parent.update_selection()", "response": "Delete highlighted fits from the logger and the Zeq_GUI parent object"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_entry(self, fit = None, index = None):\n if type(index) == int and not fit:\n fit,specimen = self.fit_list[index]\n if fit and type(index) == int:\n for i, (f,s) in enumerate(self.fit_list):\n if fit == f:\n index,specimen = i,s\n break\n\n if index == self.current_fit_index: self.current_fit_index = None\n if fit not in self.parent.pmag_results_data['specimens'][specimen]:\n print((\"cannot remove item (entry #: \" + str(index) + \") as it doesn't exist, this is a dumb bug contact devs\"))\n self.logger.DeleteItem(index)\n return\n self.parent.pmag_results_data['specimens'][specimen].remove(fit)\n del self.fit_list[index]\n self.logger.DeleteItem(index)", "response": "Delete a single entry from the logger of this object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef apply_changes(self, event):\n\n new_name = self.name_box.GetLineText(0)\n new_color = self.color_box.GetValue()\n new_tmin = self.tmin_box.GetValue()\n new_tmax = self.tmax_box.GetValue()\n\n next_i = -1\n changed_i = []\n while True:\n next_i = self.logger.GetNextSelected(next_i)\n if next_i == -1:\n break\n specimen = self.fit_list[next_i][1]\n fit = self.fit_list[next_i][0]\n if new_name:\n if new_name not in [x.name for x in self.parent.pmag_results_data['specimens'][specimen]]: fit.name = new_name\n if new_color:\n fit.color = self.color_dict[new_color]\n #testing\n not_both = True\n if new_tmin and new_tmax:\n if fit == self.parent.current_fit:\n self.parent.tmin_box.SetStringSelection(new_tmin)\n self.parent.tmax_box.SetStringSelection(new_tmax)\n 
fit.put(specimen,self.parent.COORDINATE_SYSTEM, self.parent.get_PCA_parameters(specimen,fit,new_tmin,new_tmax,self.parent.COORDINATE_SYSTEM,fit.PCA_type))\n not_both = False\n if new_tmin and not_both:\n if fit == self.parent.current_fit:\n self.parent.tmin_box.SetStringSelection(new_tmin)\n fit.put(specimen,self.parent.COORDINATE_SYSTEM, self.parent.get_PCA_parameters(specimen,fit,new_tmin,fit.tmax,self.parent.COORDINATE_SYSTEM,fit.PCA_type))\n if new_tmax and not_both:\n if fit == self.parent.current_fit:\n self.parent.tmax_box.SetStringSelection(new_tmax)\n fit.put(specimen,self.parent.COORDINATE_SYSTEM, self.parent.get_PCA_parameters(specimen,fit,fit.tmin,new_tmax,self.parent.COORDINATE_SYSTEM,fit.PCA_type))\n changed_i.append(next_i)\n\n offset = 0\n for i in changed_i:\n i -= offset\n v = self.update_logger_entry(i)\n if v == \"s\":\n offset += 1\n\n self.parent.update_selection()", "response": "Applies the changes in the various attribute boxes of this object to all highlighted fit objects in the GUI."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npanning or zooming the high equal area plot", "response": "def pan_zoom_high_equalarea(self,event):\n \"\"\"\n Uses the toolbar for the canvas to change the function from zoom to pan or pan to zoom\n @param: event -> the wx.MouseEvent that triggered this function\n \"\"\"\n if event.LeftIsDown() or event.ButtonDClick():\n return\n elif self.high_EA_setting == \"Zoom\":\n self.high_EA_setting = \"Pan\"\n try: self.toolbar.pan('off')\n except TypeError: pass\n elif self.high_EA_setting == \"Pan\":\n self.high_EA_setting = \"Zoom\"\n try: self.toolbar.zoom()\n except TypeError: pass\n else:\n self.high_EA_setting = \"Zoom\"\n try: self.toolbar.zoom()\n except TypeError: pass"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef main():\n N, L, D, R = 100, 0., 0., 0\n G2, G3 = 0., 0.\n cnt = 1\n Imax = 0\n if len(sys.argv) != 0 and '-h' in sys.argv:\n 
print(main.__doc__)\n sys.exit()\n else:\n if '-n' in sys.argv:\n ind = sys.argv.index('-n')\n N = int(sys.argv[ind + 1])\n if '-d' in sys.argv:\n ind = sys.argv.index('-d')\n D = float(sys.argv[ind + 1])\n if '-lat' in sys.argv:\n ind = sys.argv.index('-lat')\n L = float(sys.argv[ind + 1])\n if '-t' in sys.argv:\n ind = sys.argv.index('-t')\n Imax = 1e3 * float(sys.argv[ind + 1])\n if '-rev' in sys.argv:\n R = 1\n if '-G2' in sys.argv:\n ind = sys.argv.index('-G2')\n G2 = float(sys.argv[ind + 1])\n if '-G3' in sys.argv:\n ind = sys.argv.index('-G3')\n G3 = float(sys.argv[ind + 1])\n for k in range(N):\n gh = pmag.mktk03(8, k, G2, G3) # terms and random seed\n # get a random longitude, between 0 and 359\n lon = random.randint(0, 360)\n vec = pmag.getvec(gh, L, lon) # send field model and lat to getvec\n if vec[2] >= Imax:\n vec[0] += D\n if k % 2 == 0 and R == 1:\n vec[0] += 180.\n vec[1] = -vec[1]\n if vec[0] >= 360.:\n vec[0] -= 360.\n print('%7.1f %7.1f %8.2f ' % (vec[0], vec[1], vec[2]))", "response": "NAME\n tk03.py\n\n DESCRIPTION\n generates set of vectors drawn from TK03.gad at given lat and\n rotated about vertical axis by given Dec\n\n INPUT (COMMAND LINE ENTRY)\n OUTPUT\n dec, inc, int\n\n SYNTAX\n tk03.py [command line options] [> OutputFileName]\n\n OPTIONS\n -n N specify N, default is 100\n -d D specify mean Dec, default is 0\n -lat LAT specify latitude, default is 0\n -rev include reversals\n -t INT truncates intensities to >INT uT\n -G2 FRAC specify average g_2^0 fraction (default is 0)\n -G3 FRAC specify average g_3^0 fraction (default is 0)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main():\n N,mean,sigma=100,0,1.\n outfile=\"\"\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n else:\n if '-s' in sys.argv:\n ind=sys.argv.index('-s')\n sigma=float(sys.argv[ind+1])\n if '-n' in sys.argv:\n ind=sys.argv.index('-n')\n N=int(sys.argv[ind+1])\n if '-m' in sys.argv:\n 
ind=sys.argv.index('-m')\n mean=float(sys.argv[ind+1])\n if '-F' in sys.argv:\n ind=sys.argv.index('-F')\n outfile=sys.argv[ind+1]\n out=open(outfile,'w')\n for k in range(N): \n x='%f'%(pmag.gaussdev(mean,sigma)) # draw a deviate from N(mean, sigma)\n if outfile==\"\":\n print(x)\n else:\n out.write(x+'\\n')", "response": "Draws N values from a Gaussian distribution with the given mean and standard deviation, printing them to stdout or, with -F, writing them to a file."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmakes a QQ plot of the input data against a uniform distribution (qqunf.py)?", "response": "def main():\n \"\"\"\n NAME\n qqunf.py\n\n DESCRIPTION\n makes qq plot from input data against uniform distribution\n\n SYNTAX\n qqunf.py [command line options]\n\n OPTIONS\n -h help message\n -f FILE, specify file on command line\n\n \"\"\"\n fmt,plot='svg',0\n if '-h' in sys.argv: # check if help is needed\n print(main.__doc__)\n sys.exit() # graceful quit\n elif '-f' in sys.argv: # ask for filename\n ind=sys.argv.index('-f')\n file=sys.argv[ind+1]\n f=open(file,'r')\n input=f.readlines()\n Data=[]\n for line in input:\n line=line.replace('\\n','')\n if '\\t' in line: # tab-delimited input\n rec=line.split('\\t') # split each line on tabs to get records\n else:\n rec=line.split() # split each line on whitespace to get records\n Data.append(float(rec[0]))\n \n#\n if len(Data) >=10: \n QQ={'unf1':1}\n pmagplotlib.plot_init(QQ['unf1'],5,5)\n pmagplotlib.plot_qq_unf(QQ['unf1'],Data,'QQ-Uniform') # make plot\n else:\n print('you need N >= 10')\n sys.exit()\n pmagplotlib.draw_figs(QQ) \n files={}\n for key in list(QQ.keys()):\n files[key]=key+'.'+fmt \n if pmagplotlib.isServer:\n black = '#000000'\n purple = '#800080'\n titles={}\n titles['unf1']='Q-Q Uniform Plot'\n QQ = pmagplotlib.add_borders(QQ,titles,black,purple)\n pmagplotlib.save_plots(QQ,files)\n elif plot==1:\n files['qq']=file+'.'+fmt \n pmagplotlib.save_plots(QQ,files)\n else:\n ans=input(\" S[a]ve to save plot, [q]uit 
without saving: \")\n if ans==\"a\": pmagplotlib.save_plots(QQ,files)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef InitUI(self):\n self.main_sizer = wx.BoxSizer(wx.VERTICAL)\n\n if self.grid_type in self.contribution.tables:\n dataframe = self.contribution.tables[self.grid_type]\n else:\n dataframe = None\n self.grid_builder = GridBuilder(self.contribution, self.grid_type,\n self.panel, parent_type=self.parent_type,\n reqd_headers=self.reqd_headers,\n exclude_cols=self.exclude_cols,\n huge=self.huge)\n\n self.grid = self.grid_builder.make_grid()\n self.grid.InitUI()\n\n\n ## Column management buttons\n self.add_cols_button = wx.Button(self.panel, label=\"Add additional columns\",\n name='add_cols_btn',\n size=(170, 20))\n self.Bind(wx.EVT_BUTTON, self.on_add_cols, self.add_cols_button)\n self.remove_cols_button = wx.Button(self.panel, label=\"Remove columns\",\n name='remove_cols_btn',\n size=(170, 20))\n\n self.Bind(wx.EVT_BUTTON, self.on_remove_cols, self.remove_cols_button)\n\n ## Row management buttons\n self.remove_row_button = wx.Button(self.panel, label=\"Remove last row\",\n name='remove_last_row_btn')\n self.Bind(wx.EVT_BUTTON, self.on_remove_row, self.remove_row_button)\n many_rows_box = wx.BoxSizer(wx.HORIZONTAL)\n self.add_many_rows_button = wx.Button(self.panel, label=\"Add row(s)\",\n name='add_many_rows_btn')\n self.rows_spin_ctrl = wx.SpinCtrl(self.panel, value='1', initial=1,\n name='rows_spin_ctrl')\n many_rows_box.Add(self.add_many_rows_button, flag=wx.ALIGN_CENTRE)\n many_rows_box.Add(self.rows_spin_ctrl)\n self.Bind(wx.EVT_BUTTON, self.on_add_rows, self.add_many_rows_button)\n\n self.deleteRowButton = wx.Button(self.panel, id=-1,\n label='Delete selected row(s)',\n name='delete_row_btn')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_remove_row(event, False), self.deleteRowButton)\n self.deleteRowButton.Disable()\n\n # measurements table should not be able to add new 
rows\n # that should be done elsewhere\n if self.huge:\n self.add_many_rows_button.Disable()\n self.rows_spin_ctrl.Disable()\n self.remove_row_button.Disable()\n # can't remove cols (seg fault), but can add them\n #self.add_cols_button.Disable()\n self.remove_cols_button.Disable()\n\n\n ## Data management buttons\n self.importButton = wx.Button(self.panel, id=-1,\n label='Import MagIC-format file',\n name='import_btn')\n self.Bind(wx.EVT_BUTTON, self.onImport, self.importButton)\n self.exitButton = wx.Button(self.panel, id=-1,\n label='Save and close grid',\n name='save_and_quit_btn')\n self.Bind(wx.EVT_BUTTON, self.onSave, self.exitButton)\n self.cancelButton = wx.Button(self.panel, id=-1, label='Cancel',\n name='cancel_btn')\n self.Bind(wx.EVT_BUTTON, self.onCancelButton, self.cancelButton)\n self.Bind(wx.EVT_CLOSE, self.onCancelButton)\n\n ## Input/output buttons\n self.copyButton = wx.Button(self.panel, id=-1,\n label=\"Start copy mode\",\n name=\"copy_mode_btn\")\n self.Bind(wx.EVT_BUTTON, self.onCopyMode, self.copyButton)\n self.selectAllButton = wx.Button(self.panel, id=-1,\n label=\"Copy all cells\",\n name=\"select_all_btn\")\n self.Bind(wx.EVT_BUTTON, self.onSelectAll, self.selectAllButton)\n self.copySelectionButton = wx.Button(self.panel, id=-1,\n label=\"Copy selected cells\",\n name=\"copy_selection_btn\")\n self.Bind(wx.EVT_BUTTON, self.onCopySelection, self.copySelectionButton)\n self.copySelectionButton.Disable()\n\n ## Help message and button\n # button\n self.toggle_help_btn = wx.Button(self.panel, id=-1, label=\"Show help\",\n name='toggle_help_btn')\n self.Bind(wx.EVT_BUTTON, self.toggle_help, self.toggle_help_btn)\n # message\n self.help_msg_boxsizer = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, name='help_msg_boxsizer'), wx.VERTICAL)\n\n if self.grid_type == 'measurements':\n self.default_msg_text = \"Edit measurements here.\\nIn general, measurements should be imported directly into Pmag GUI,\\nwhich has protocols for converting many lab 
formats into the MagIC format.\\nIf we are missing your particular lab format, please let us know: https://github.com/PmagPy/PmagPy/issues.\\nThis grid is just meant for looking at your measurements and doing small edits.\\nCurrently, you can't add/remove rows here. You can add columns and edit cell values.\"\n else:\n self.default_msg_text = 'Edit {} here.\\nYou can add or remove both rows and columns, however required columns may not be deleted.\\nControlled vocabularies are indicated by **, and will have drop-down-menus.\\nSuggested vocabularies are indicated by ^^, and also have drop-down-menus.\\nTo edit all values in a column, click the column header.\\nYou can cut and paste a block of cells from an Excel-like file.\\nJust click the top left cell and use command \"v\".'.format(self.grid_type)\n txt = ''\n if self.grid_type == 'locations':\n txt = '\\n\\nNote: you can fill in location start/end latitude/longitude here.\\nHowever, if you add sites in step 2, the program will calculate those values automatically,\\nbased on site latitudes/longitudes.\\nThese values will be written to your upload file.'\n if self.grid_type == 'samples':\n txt = \"\\n\\nNote: you can fill in lithology, class, and type for each sample here.\\nHowever, if the sample's class, lithology, and type are the same as its parent site,\\nthose values will propagate down, and will be written to your sample file automatically.\"\n if self.grid_type == 'specimens':\n txt = \"\\n\\nNote: you can fill in lithology, class, and type for each specimen here.\\nHowever, if the specimen's class, lithology, and type are the same as its parent sample,\\nthose values will propagate down, and will be written to your specimen file automatically.\"\n if self.grid_type == 'ages':\n txt = \"\\n\\nNote: only ages for which you provide data will be written to your upload file.\"\n self.default_msg_text += txt\n self.msg_text = wx.StaticText(self.panel, label=self.default_msg_text,\n style=wx.TE_CENTER, name='msg 
text')\n self.help_msg_boxsizer.Add(self.msg_text)\n self.help_msg_boxsizer.ShowItems(False)\n\n ## Code message and button\n # button\n self.toggle_codes_btn = wx.Button(self.panel, id=-1,\n label=\"Show method codes\",\n name='toggle_codes_btn')\n self.Bind(wx.EVT_BUTTON, self.toggle_codes, self.toggle_codes_btn)\n # message\n self.code_msg_boxsizer = pw.MethodCodeDemystifier(self.panel, self.contribution.vocab)\n self.code_msg_boxsizer.ShowItems(False)\n\n ## Add content to sizers\n self.hbox = wx.BoxSizer(wx.HORIZONTAL)\n col_btn_vbox = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, label='Columns',\n name='manage columns'), wx.VERTICAL)\n row_btn_vbox = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, label='Rows',\n name='manage rows'), wx.VERTICAL)\n self.main_btn_vbox = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, label='Manage data',\n name='manage data'), wx.VERTICAL)\n input_output_vbox = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, label='In/Out',\n name='manage in out'), wx.VERTICAL)\n col_btn_vbox.Add(self.add_cols_button, flag=wx.ALL, border=5)\n col_btn_vbox.Add(self.remove_cols_button, flag=wx.ALL, border=5)\n row_btn_vbox.Add(many_rows_box, flag=wx.ALL, border=5)\n row_btn_vbox.Add(self.remove_row_button, flag=wx.ALL, border=5)\n row_btn_vbox.Add(self.deleteRowButton, flag=wx.ALL, border=5)\n self.main_btn_vbox.Add(self.importButton, flag=wx.ALL, border=5)\n self.main_btn_vbox.Add(self.exitButton, flag=wx.ALL, border=5)\n self.main_btn_vbox.Add(self.cancelButton, flag=wx.ALL, border=5)\n input_output_vbox.Add(self.copyButton, flag=wx.ALL, border=5)\n input_output_vbox.Add(self.selectAllButton, flag=wx.ALL, border=5)\n input_output_vbox.Add(self.copySelectionButton, flag=wx.ALL, border=5)\n self.hbox.Add(col_btn_vbox)\n self.hbox.Add(row_btn_vbox)\n self.hbox.Add(self.main_btn_vbox)\n self.hbox.Add(input_output_vbox)\n\n #self.panel.Bind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK, self.onLeftClickLabel, self.grid)\n 
self.grid.Bind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK, self.onLeftClickLabel, self.grid)\n #\n self.Bind(wx.EVT_KEY_DOWN, self.on_key_down)\n self.panel.Bind(wx.EVT_TEXT_PASTE, self.do_fit)\n # add actual data!\n self.grid_builder.add_data_to_grid(self.grid, self.grid_type)\n # fill in some default values\n self.grid_builder.fill_defaults()\n # set scrollbars\n self.grid.set_scrollbars()\n\n ## this would be a way to prevent editing\n ## some cells in age grid.\n ## with multiple types of ages, though,\n ## this doesn't make much sense\n #if self.grid_type == 'ages':\n # attr = wx.grid.GridCellAttr()\n # attr.SetReadOnly(True)\n # self.grid.SetColAttr(1, attr)\n\n self.drop_down_menu = drop_down_menus.Menus(self.grid_type, self.contribution, self.grid)\n self.grid_box = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, name='grid container'), wx.VERTICAL)\n self.grid_box.Add(self.grid, 1, flag=wx.ALL|wx.EXPAND, border=5)\n\n # final layout, set size\n self.main_sizer.Add(self.hbox, flag=wx.ALL|wx.ALIGN_CENTER,#|wx.SHAPED,\n border=20)\n self.main_sizer.Add(self.toggle_help_btn, 0,\n flag=wx.BOTTOM|wx.ALIGN_CENTRE,#|wx.SHAPED,\n border=5)\n self.main_sizer.Add(self.help_msg_boxsizer, 0,\n flag=wx.BOTTOM|wx.ALIGN_CENTRE,\n border=10)\n self.main_sizer.Add(self.toggle_codes_btn, 0,\n flag=wx.BOTTOM|wx.ALIGN_CENTRE,#|wx.SHAPED,\n border=5)\n self.main_sizer.Add(self.code_msg_boxsizer, 0,\n flag=wx.BOTTOM|wx.ALIGN_CENTRE,#|wx.SHAPED,\n border=5)\n self.main_sizer.Add(self.grid_box, 2, flag=wx.ALL|wx.ALIGN_CENTER|wx.EXPAND, border=10)\n self.panel.SetSizer(self.main_sizer)\n panel_sizer = wx.BoxSizer(wx.VERTICAL)\n panel_sizer.Add(self.panel, 1, wx.EXPAND)\n self.SetSizer(panel_sizer)\n panel_sizer.Fit(self)\n ## this keeps sizing correct if the user resizes the window manually\n #self.Bind(wx.EVT_SIZE, self.do_fit)\n# self.Centre()\n self.Show()", "response": "Initialize the main UI: build the grid and its column/row/data-management buttons, help and method-code messages, and lay everything out with sizers."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script 
for\nhandling a key-down event, re-sizing the window if the user pasted content.", "response": "def on_key_down(self, event):\n \"\"\"\n If user does command v,\n re-size window in case pasting has changed the content size.\n \"\"\"\n keycode = event.GetKeyCode()\n meta_down = event.MetaDown() or event.GetCmdDown()\n if keycode == 86 and meta_down:\n # treat it as if it were a wx.EVT_TEXT_SIZE\n self.do_fit(event)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntoggles help message on help button click.", "response": "def toggle_help(self, event, mode=None):\n \"\"\"\n Show/hide help message on help button click.\n \"\"\"\n # if mode == 'open', show no matter what.\n # if mode == 'close', close. otherwise, change state\n btn = self.toggle_help_btn\n shown = self.help_msg_boxsizer.GetStaticBox().IsShown()\n # if mode is specified, do that mode\n if mode == 'open':\n self.help_msg_boxsizer.ShowItems(True)\n btn.SetLabel('Hide help')\n elif mode == 'close':\n self.help_msg_boxsizer.ShowItems(False)\n btn.SetLabel('Show help')\n # otherwise, simply toggle states\n else:\n if shown:\n self.help_msg_boxsizer.ShowItems(False)\n btn.SetLabel('Show help')\n else:\n self.help_msg_boxsizer.ShowItems(True)\n btn.SetLabel('Hide help')\n self.do_fit(None)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nshow or hide method codes explanation widget on button click", "response": "def toggle_codes(self, event):\n \"\"\"\n Show/hide method code explanation widget on button click\n \"\"\"\n btn = event.GetEventObject()\n if btn.Label == 'Show method codes':\n self.code_msg_boxsizer.ShowItems(True)\n btn.SetLabel('Hide method codes')\n else:\n self.code_msg_boxsizer.ShowItems(False)\n btn.SetLabel('Show method codes')\n self.do_fit(None)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remove_col_label(self, event=None, col=None):\n if event:\n col = event.GetCol()\n if not 
col:\n return\n label = self.grid.GetColLabelValue(col)\n if '**' in label:\n label = label.strip('**')\n elif '^^' in label:\n label = label.strip('^^')\n if label in self.reqd_headers:\n pw.simple_warning(\"That header is required, and cannot be removed\")\n return False\n else:\n print('That header is not required:', label)\n # remove column from wxPython grid\n self.grid.remove_col(col)\n # remove column from DataFrame if present\n if self.grid_type in self.contribution.tables:\n if label in self.contribution.tables[self.grid_type].df.columns:\n del self.contribution.tables[self.grid_type].df[label]\n # causes resize on each column header delete\n # can leave this out if we want.....\n self.main_sizer.Fit(self)", "response": "remove column from grid"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef on_add_cols(self, event):\n col_labels = self.grid.col_labels\n dia = pw.ChooseOne(self, yes=\"Add single columns\", no=\"Add groups\")\n result1 = dia.ShowModal()\n if result1 == wx.ID_CANCEL:\n return\n elif result1 == wx.ID_YES:\n items = sorted([col_name for col_name in self.dm.index if col_name not in col_labels])\n dia = pw.HeaderDialog(self, 'columns to add',\n items1=list(items), groups=[])\n dia.Centre()\n result2 = dia.ShowModal()\n else:\n groups = self.dm['group'].unique()\n dia = pw.HeaderDialog(self, 'groups to add',\n items1=list(groups), groups=True)\n dia.Centre()\n result2 = dia.ShowModal()\n new_headers = []\n if result2 == 5100:\n new_headers = dia.text_list\n # if there is nothing to add, quit\n if not new_headers:\n return\n if result1 == wx.ID_YES:\n # add individual headers\n errors = self.add_new_grid_headers(new_headers)\n else:\n # add header groups\n errors = self.add_new_header_groups(new_headers)\n if errors:\n errors_str = ', '.join(errors)\n pw.simple_warning('You are already using the following headers: {}\\nSo they will not be added'.format(errors_str))\n\n # problem: if 
widgets above the grid are too wide,\n # the grid does not re-size when adding columns\n # awkward solution (causes flashing):\n if self.grid.GetWindowStyle() != wx.DOUBLE_BORDER:\n self.grid.SetWindowStyle(wx.DOUBLE_BORDER)\n self.main_sizer.Fit(self)\n self.grid.SetWindowStyle(wx.NO_BORDER)\n self.Centre()\n self.main_sizer.Fit(self)\n #\n self.grid.changes = set(range(self.grid.GetNumberRows()))\n dia.Destroy()", "response": "Show simple dialog that allows user to add a new column name"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd new header groups to the compile list of all headers belonging to all specified groups return the list of errors that are already included", "response": "def add_new_header_groups(self, groups):\n \"\"\"\n compile list of all headers belonging to all specified groups\n eliminate all headers that are already included\n add any req'd drop-down menus\n return errors\n \"\"\"\n already_present = []\n for group in groups:\n col_names = self.dm[self.dm['group'] == group].index\n for col in col_names:\n if col not in self.grid.col_labels:\n col_number = self.grid.add_col(col)\n # add to appropriate headers list\n # add drop down menus for user-added column\n if col in self.contribution.vocab.vocabularies:\n self.drop_down_menu.add_drop_down(col_number, col)\n elif col in self.contribution.vocab.suggested:\n self.drop_down_menu.add_drop_down(col_number, col)\n elif col in ['specimen', 'sample', 'site', 'location',\n 'specimens', 'samples', 'sites']:\n self.drop_down_menu.add_drop_down(col_number, col)\n elif col == 'experiments':\n self.drop_down_menu.add_drop_down(col_number, col)\n if col == \"method_codes\":\n self.drop_down_menu.add_method_drop_down(col_number, col)\n else:\n already_present.append(col)\n return already_present"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_new_grid_headers(self, new_headers):\n 
already_present = []\n for name in new_headers:\n if name:\n if name not in self.grid.col_labels:\n col_number = self.grid.add_col(name)\n # add to appropriate headers list\n # add drop down menus for user-added column\n if name in self.contribution.vocab.vocabularies:\n self.drop_down_menu.add_drop_down(col_number, name)\n elif name in self.contribution.vocab.suggested:\n self.drop_down_menu.add_drop_down(col_number, name)\n elif name in ['specimen', 'sample', 'site',\n 'specimens', 'samples', 'sites']:\n self.drop_down_menu.add_drop_down(col_number, name)\n elif name == 'experiments':\n self.drop_down_menu.add_drop_down(col_number, name)\n if name == \"method_codes\":\n self.drop_down_menu.add_method_drop_down(col_number, name)\n else:\n already_present.append(name)\n #pw.simple_warning('You are already using column header: {}'.format(name))\n return already_present", "response": "Add in all user - added headers."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding rows to grid", "response": "def on_add_rows(self, event):\n \"\"\"\n add rows to grid\n \"\"\"\n num_rows = self.rows_spin_ctrl.GetValue()\n #last_row = self.grid.GetNumberRows()\n for row in range(num_rows):\n self.grid.add_row()\n #if not self.grid.changes:\n # self.grid.changes = set([])\n #self.grid.changes.add(last_row)\n #last_row += 1\n self.main_sizer.Fit(self)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef on_remove_row(self, event, row_num=-1):\n text = \"Are you sure? 
If you select delete you won't be able to retrieve these rows...\"\n dia = pw.ChooseOne(self, \"Yes, delete rows\", \"Leave rows for now\", text)\n dia.Centre()\n result = dia.ShowModal()\n if result == wx.ID_NO:\n return\n default = (255, 255, 255, 255)\n if row_num == -1:\n # unhighlight any selected rows:\n for row in self.selected_rows:\n attr = wx.grid.GridCellAttr()\n attr.SetBackgroundColour(default)\n self.grid.SetRowAttr(row, attr)\n row_num = self.grid.GetNumberRows() - 1\n self.deleteRowButton.Disable()\n self.selected_rows = {row_num}\n # remove row(s) from the contribution\n df = self.contribution.tables[self.grid_type].df\n row_nums = list(range(len(df)))\n df = df.iloc[[i for i in row_nums if i not in self.selected_rows]]\n self.contribution.tables[self.grid_type].df = df\n # now remove row(s) from grid\n # delete rows, adjusting the row # appropriately as you delete\n for num, row in enumerate(self.selected_rows):\n row -= num\n if row < 0:\n row = 0\n self.grid.remove_row(row)\n attr = wx.grid.GridCellAttr()\n attr.SetBackgroundColour(default)\n self.grid.SetRowAttr(row, attr)\n # reset the grid\n self.selected_rows = set()\n self.deleteRowButton.Disable()\n self.grid.Refresh()\n self.main_sizer.Fit(self)", "response": "Remove specified grid row from the contribution."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef onLeftClickLabel(self, event):\n if event.Col == -1 and event.Row == -1:\n pass\n if event.Row < 0:\n if self.remove_cols_mode:\n self.remove_col_label(event)\n else:\n self.drop_down_menu.on_label_click(event)\n else:\n if event.Col < 0 and self.grid_type != 'age':\n self.onSelectRow(event)", "response": "Called when the user clicks on a label."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nimport a MagIC - format file and save it to disk", "response": "def onImport(self, event):\n \"\"\"\n Import a MagIC-format file\n \"\"\"\n if 
self.grid.changes:\n print(\"-W- Your changes will be overwritten...\")\n wind = pw.ChooseOne(self, \"Import file anyway\", \"Save grid first\",\n \"-W- Your grid has unsaved changes which will be overwritten if you import a file now...\")\n wind.Centre()\n res = wind.ShowModal()\n # save grid first:\n if res == wx.ID_NO:\n self.onSave(None, alert=True, destroy=False)\n # reset self.changes\n self.grid.changes = set()\n\n openFileDialog = wx.FileDialog(self, \"Open MagIC-format file\", self.WD, \"\",\n \"MagIC file|*.*\", wx.FD_OPEN | wx.FD_FILE_MUST_EXIST)\n result = openFileDialog.ShowModal()\n if result == wx.ID_OK:\n # get filename\n filename = openFileDialog.GetPath()\n # make sure the dtype is correct\n f = open(filename)\n line = f.readline()\n if line.startswith(\"tab\"):\n delim, dtype = line.split(\"\\t\")\n else:\n delim, dtype = line.split(\" \")  # assume a space-delimited header if not tab-delimited\n f.close()\n dtype = dtype.strip()\n if (dtype != self.grid_type) and (dtype + \"s\" != self.grid_type):\n text = \"You are currently editing the {} grid, but you are trying to import a {} file.\\nPlease open the {} grid and then re-try this import.\".format(self.grid_type, dtype, dtype)\n pw.simple_warning(text)\n return\n # grab old data for concatenation\n if self.grid_type in self.contribution.tables:\n old_df_container = self.contribution.tables[self.grid_type]\n else:\n old_df_container = None\n old_col_names = self.grid.col_labels\n # read in new file and update contribution\n df_container = cb.MagicDataFrame(filename, dmodel=self.dm,\n columns=old_col_names)\n # concatenate if possible\n if not isinstance(old_df_container, type(None)):\n df_container.df = pd.concat([old_df_container.df, df_container.df],\n axis=0, sort=True)\n self.contribution.tables[df_container.dtype] = df_container\n self.grid_builder = GridBuilder(self.contribution, self.grid_type,\n self.panel, parent_type=self.parent_type,\n reqd_headers=self.reqd_headers)\n # delete old grid\n self.grid_box.Hide(0)\n self.grid_box.Remove(0)\n # 
create new, updated grid\n self.grid = self.grid_builder.make_grid()\n self.grid.InitUI()\n # add data to new grid\n self.grid_builder.add_data_to_grid(self.grid, self.grid_type)\n # add new grid to sizer and fit everything\n self.grid_box.Add(self.grid, flag=wx.ALL, border=5)\n self.main_sizer.Fit(self)\n self.Centre()\n # add any needed drop-down-menus\n self.drop_down_menu = drop_down_menus.Menus(self.grid_type,\n self.contribution,\n self.grid)\n # done!\n return"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef onCancelButton(self, event):\n if self.grid.changes:\n dlg1 = wx.MessageDialog(self, caption=\"Message:\",\n message=\"Are you sure you want to exit this grid?\\nYour changes will not be saved.\\n \",\n style=wx.OK|wx.CANCEL)\n result = dlg1.ShowModal()\n if result == wx.ID_OK:\n dlg1.Destroy()\n self.Destroy()\n else:\n self.Destroy()\n if self.main_frame:\n self.main_frame.Show()\n self.main_frame.Raise()", "response": "Quit grid with warning if unsaved changes present"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsaves grid data and alert user", "response": "def onSave(self, event, alert=False, destroy=True):\n \"\"\"\n Save grid data\n \"\"\"\n # tidy up drop_down menu\n if self.drop_down_menu:\n self.drop_down_menu.clean_up()\n # then save actual data\n self.grid_builder.save_grid_data()\n if not event and not alert:\n return\n # then alert user\n wx.MessageBox('Saved!', 'Info',\n style=wx.OK | wx.ICON_INFORMATION)\n if destroy:\n self.Destroy()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset self. 
df_slice based on user's selection", "response": "def onDragSelection(self, event):\n \"\"\"\n Set self.df_slice based on user's selection\n \"\"\"\n if self.grid.GetSelectionBlockTopLeft():\n #top_left = self.grid.GetSelectionBlockTopLeft()\n #bottom_right = self.grid.GetSelectionBlockBottomRight()\n # awkward hack to fix wxPhoenix memory problem, (Github issue #221)\n bottom_right = eval(repr(self.grid.GetSelectionBlockBottomRight()).replace(\"GridCellCoordsArray: \", \"\").replace(\"GridCellCoords\", \"\"))\n top_left = eval(repr(self.grid.GetSelectionBlockTopLeft()).replace(\"GridCellCoordsArray: \", \"\").replace(\"GridCellCoords\", \"\"))\n #\n top_left = top_left[0]\n bottom_right = bottom_right[0]\n else:\n return\n # GetSelectionBlock returns (row, col)\n min_col = top_left[1]\n max_col = bottom_right[1]\n min_row = top_left[0]\n max_row = bottom_right[0]\n self.df_slice = self.contribution.tables[self.grid_type].df.iloc[min_row:max_row+1, min_col:max_col+1]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef onKey(self, event):\n if event.CmdDown() or event.ControlDown():\n if event.GetKeyCode() == 67:\n self.onCopySelection(None)", "response": "Copy the selection if Ctrl/Cmd is down and the key code is 67 ('C')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef onSelectAll(self, event):\n # do clean up here!!!\n if self.drop_down_menu:\n self.drop_down_menu.clean_up()\n # save all grid data\n self.grid_builder.save_grid_data()\n df = self.contribution.tables[self.grid_type].df\n # write df to clipboard for pasting\n # header arg determines whether columns are taken\n # index arg determines whether index is taken\n pd.DataFrame.to_clipboard(df, header=False, index=False)\n print('-I- You have copied all cells! 
You may paste them into a text document or spreadsheet using Command v.')", "response": "Selects the entire grid and copies it to the clipboard"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncopy selection to the Clipboard", "response": "def onCopySelection(self, event):\n \"\"\"\n Copies self.df_slice to the Clipboard if slice exists\n \"\"\"\n if self.df_slice is not None:\n pd.DataFrame.to_clipboard(self.df_slice, header=False, index=False)\n self.grid.ClearSelection()\n self.df_slice = None\n print('-I- You have copied the selected cells. You may paste them into a text document or spreadsheet using Command v.')\n else:\n print('-W- No cells were copied! You must highlight a selection of cells before hitting the copy button. You can do this by clicking and dragging, or by using the Shift key and click.')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_grid(self):\n changes = None\n # if there is a MagicDataFrame, extract data from it\n if isinstance(self.magic_dataframe, cb.MagicDataFrame):\n # get columns and reorder slightly\n col_labels = list(self.magic_dataframe.df.columns)\n for ex_col in self.exclude_cols:\n col_labels.pop(ex_col)\n if self.grid_type == 'ages':\n levels = ['specimen', 'sample', 'site', 'location']\n for label in levels[:]:\n if label in col_labels:\n col_labels.remove(label)\n else:\n levels.remove(label)\n col_labels[:0] = levels\n else:\n if self.parent_type:\n if self.parent_type[:-1] in col_labels:\n col_labels.remove(self.parent_type[:-1])\n col_labels[:0] = [self.parent_type[:-1]]\n if self.grid_type[:-1] in col_labels:\n col_labels.remove(self.grid_type[:-1])\n col_labels[:0] = (self.grid_type[:-1],)\n for col in col_labels:\n if col not in self.magic_dataframe.df.columns:\n self.magic_dataframe.df[col] = None\n self.magic_dataframe.df = self.magic_dataframe.df[col_labels]\n self.magic_dataframe.sort_dataframe_cols()\n col_labels = 
list(self.magic_dataframe.df.columns)\n row_labels = self.magic_dataframe.df.index\n # make sure minimum defaults are present\n for header in self.reqd_headers:\n if header not in col_labels:\n changes = set([1])\n col_labels.append(header)\n # if there is no pre-existing MagicDataFrame,\n # make a blank grid with do some defaults:\n else:\n # default headers\n #col_labels = list(self.data_model.get_headers(self.grid_type, 'Names'))\n #col_labels[:0] = self.reqd_headers\n col_labels = list(self.reqd_headers)\n if self.grid_type in ['specimens', 'samples', 'sites']:\n col_labels.extend(['age', 'age_sigma'])\n ## use the following line if you want sorted column labels:\n #col_labels = sorted(set(col_labels))\n # defaults are different for ages\n if self.grid_type == 'ages':\n levels = ['specimen', 'sample', 'site', 'location']\n for label in levels:\n if label in col_labels:\n col_labels.remove(label)\n col_labels[:0] = levels\n else:\n if self.parent_type:\n col_labels.remove(self.parent_type[:-1])\n col_labels[:0] = [self.parent_type[:-1]]\n col_labels.remove(self.grid_type[:-1])\n col_labels[:0] = [self.grid_type[:-1]]\n # make sure all reqd cols are in magic_dataframe\n for col in col_labels:\n if col not in self.magic_dataframe.df.columns:\n self.magic_dataframe.df[col] = None\n # make the grid\n if not self.huge:\n grid = magic_grid.MagicGrid(parent=self.panel, name=self.grid_type,\n row_labels=[], col_labels=col_labels)\n # make the huge grid\n else:\n row_labels = self.magic_dataframe.df.index\n grid = magic_grid.HugeMagicGrid(parent=self.panel, name=self.grid_type,\n row_labels=row_labels, col_labels=col_labels)\n grid.do_event_bindings()\n grid.changes = changes\n\n self.grid = grid\n return grid", "response": "make a grid of the magic data"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds columns as needed for age and age_unit.", "response": "def add_age_defaults(self):\n \"\"\"\n Add columns as 
needed:\n age, age_unit, specimen, sample, site, location.\n \"\"\"\n if isinstance(self.magic_dataframe, cb.MagicDataFrame):\n for col in ['age', 'age_unit']:\n if col not in self.grid.col_labels:\n self.grid.add_col(col)\n for level in ['locations', 'sites', 'samples', 'specimens']:\n if level in self.contribution.tables:\n if level[:-1] not in self.grid.col_labels:\n self.grid.add_col(level[:-1])"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks to see if the current grid is empty except for default values", "response": "def current_grid_empty(self):\n \"\"\"\n Check to see if grid is empty except for default values\n \"\"\"\n empty = True\n # df IS empty if there are no rows\n if not any(self.magic_dataframe.df.index):\n empty = True\n # df is NOT empty if there are at least two rows\n elif len(self.grid.row_labels) > 1:\n empty = False\n # if there is one row, df MIGHT be empty\n else:\n # check all the non-null values\n non_null_vals = [val for val in self.magic_dataframe.df.values[0] if cb.not_null(val, False)]\n for val in non_null_vals:\n if not isinstance(val, str):\n empty = False\n break\n # if there are any non-default values, grid is not empty\n if val.lower() not in ['this study', 'g', 'i']:\n empty = False\n break\n return empty"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsave the grid data in the object self. 
grid.", "response": "def save_grid_data(self):\n \"\"\"\n Save grid data in the data object\n \"\"\"\n if not self.grid.changes:\n print('-I- No changes to save')\n return\n\n starred_cols = self.grid.remove_starred_labels()\n # locks in value in cell currently edited\n self.grid.SaveEditControlValue()\n # changes is a dict with key values == row number\n if self.grid.changes:\n new_data = self.grid.save_items()\n # HugeMagicGrid will return a pandas dataframe\n if self.huge:\n self.magic_dataframe.df = new_data\n # MagicGrid will return a dictionary with\n # new/updated data that must be incorporated\n # into the dataframe\n else:\n for key in new_data:\n data = new_data[key]\n # update the row if it exists already,\n # otherwise create a new row\n\n updated = self.magic_dataframe.update_row(key, data)\n if not isinstance(updated, pd.DataFrame):\n if self.grid_type == 'ages':\n label = key\n else:\n label = self.grid_type[:-1]\n self.magic_dataframe.add_row(label, data,\n self.grid.col_labels)\n # update the contribution with the new dataframe\n self.contribution.tables[self.grid_type] = self.magic_dataframe\n # *** probably don't actually want to write to file, here (but maybe)\n\n self.contribution.write_table_to_file(self.grid_type)\n #self.magic_dataframe.write_magic_file(\"{}.txt\".format(self.grid_type),\n # self.contribution.directory)\n # propagate age info if age table was edited\n if self.grid_type == 'ages':\n self.contribution.propagate_ages()\n return"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfill in self. 
grid with default values in certain columns.", "response": "def fill_defaults(self):\n \"\"\"\n Fill in self.grid with default values in certain columns.\n Only fill in new values if grid is missing those values.\n \"\"\"\n defaults = {'result_quality': 'g',\n 'result_type': 'i',\n 'orientation_quality': 'g',\n 'citations': 'This study'}\n for col_name in defaults:\n if col_name in self.grid.col_labels:\n # try to grab existing values from contribution\n if self.grid_type in self.contribution.tables:\n if col_name in self.contribution.tables[self.grid_type].df.columns:\n old_vals = self.contribution.tables[self.grid_type].df[col_name]\n # if column is completely filled in, skip\n if all([cb.not_null(val, False) for val in old_vals]):\n continue\n new_val = defaults[col_name]\n vals = list(np.where((old_vals.notnull()) & (old_vals != ''), old_vals, new_val))\n else:\n vals = [defaults[col_name]] * self.grid.GetNumberRows()\n # if values not available in contribution, use defaults\n else:\n vals = [defaults[col_name]] * self.grid.GetNumberRows()\n # if col_name not present in grid, skip\n else:\n vals = None\n #\n if vals:\n print('-I- Updating column \"{}\" with default values'.format(col_name))\n if self.huge:\n self.grid.SetColumnValues(col_name, vals)\n else:\n col_ind = self.grid.col_labels.index(col_name)\n for row, val in enumerate(vals):\n self.grid.SetCellValue(row, col_ind, val)\n self.grid.changes = set(range(self.grid.GetNumberRows()))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a list of the children of the result_data", "response": "def get_result_children(self, result_data):\n \"\"\"\n takes in dict in form of {'er_specimen_names': 'name1:name2:name3'}\n and so forth.\n returns lists of specimens, samples, sites, and locations\n \"\"\"\n specimens, samples, sites, locations = \"\", \"\", \"\", \"\"\n children = {'specimen': specimens, 'sample': samples,\n 'site': sites, 'location': locations}\n for 
dtype in children:\n header_name = 'er_' + dtype + '_names'\n if result_data[header_name]:\n children[dtype] = result_data[header_name].split(\":\")\n # make sure there are no extra spaces in names\n children[dtype] = [child.strip() for child in children[dtype]]\n\n return children['specimen'], children['sample'], children['site'], children['location']"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n# set defaults\n Comps=[] # list of components\n version_num=pmag.get_version()\n args=sys.argv\n DefaultAge=[\"none\"]\n skipdirs,coord,excrit,custom,vgps,average,Iaverage,plotsites,opt=1,0,0,0,0,0,0,0,0\n get_model_lat=0 # this skips VADM calculation altogether, when get_model_lat=1, uses present day\n fmt='svg'\n dir_path=\".\"\n model_lat_file=\"\"\n Caverage=0\n infile='pmag_specimens.txt'\n measfile=\"magic_measurements.txt\"\n sampfile=\"er_samples.txt\"\n sitefile=\"er_sites.txt\"\n agefile=\"er_ages.txt\"\n specout=\"er_specimens.txt\"\n sampout=\"pmag_samples.txt\"\n siteout=\"pmag_sites.txt\"\n resout=\"pmag_results.txt\"\n critout=\"pmag_criteria.txt\"\n instout=\"magic_instruments.txt\"\n sigcutoff,OBJ=\"\",\"\"\n noDir,noInt=0,0\n polarity=0\n coords=['0']\n Dcrit,Icrit,nocrit=0,0,0\n corrections=[]\n nocorrection=['DA-NL','DA-AC','DA-CR']\n priorities=['DA-AC-ARM','DA-AC-TRM'] # priorities for anisotropy correction\n# get command line stuff\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if '-WD' in args:\n ind=args.index(\"-WD\")\n dir_path=args[ind+1]\n if '-cor' in args:\n ind=args.index('-cor')\n cors=args[ind+1].split(':') # list of required data adjustments\n for cor in cors:\n nocorrection.remove('DA-'+cor)\n corrections.append('DA-'+cor)\n if '-pri' in args:\n ind=args.index('-pri')\n priorities=args[ind+1].split(':') # list of required data adjustments\n for p in priorities:\n p='DA-AC-'+p\n if '-f' in args:\n ind=args.index(\"-f\")\n measfile=args[ind+1]\n if '-fsp' 
in args:\n ind=args.index(\"-fsp\")\n infile=args[ind+1]\n if '-fsi' in args:\n ind=args.index(\"-fsi\")\n sitefile=args[ind+1]\n if \"-crd\" in args:\n ind=args.index(\"-crd\")\n coord=args[ind+1]\n if coord=='s':coords=['-1']\n if coord=='g':coords=['0']\n if coord=='t':coords=['100']\n if coord=='b':coords=['0','100']\n if \"-usr\" in args:\n ind=args.index(\"-usr\")\n user=sys.argv[ind+1]\n else: user=\"\"\n if \"-C\" in args: Dcrit,Icrit,nocrit=1,1,1 # no selection criteria\n if \"-sam\" in args: vgps=1 # save sample level VGPS/VADMs\n if \"-xSi\" in args:\n nositeints=1 # skip site level intensity\n else:\n nositeints=0\n if \"-age\" in args:\n ind=args.index(\"-age\")\n DefaultAge[0]=args[ind+1]\n DefaultAge.append(args[ind+2])\n DefaultAge.append(args[ind+3])\n Daverage,Iaverage,Caverage=0,0,0\n if \"-aD\" in args: Daverage=1 # average by sample directions\n if \"-aI\" in args: Iaverage=1 # average by sample intensities\n if \"-aC\" in args: Caverage=1 # average all components together ??? 
why???\n if \"-pol\" in args: polarity=1 # calculate averages by polarity\n if '-xD' in args:noDir=1\n if '-xI' in args:\n noInt=1\n elif \"-fla\" in args:\n if '-lat' in args:\n print(\"you should set a paleolatitude file OR use present day lat - not both\")\n sys.exit()\n ind=args.index(\"-fla\")\n model_lat_file=dir_path+'/'+args[ind+1]\n get_model_lat=2\n mlat=open(model_lat_file,'r')\n ModelLats=[]\n for line in mlat.readlines():\n ModelLat={}\n tmp=line.split()\n ModelLat[\"er_site_name\"]=tmp[0]\n ModelLat[\"site_model_lat\"]=tmp[1]\n ModelLat[\"er_sample_name\"]=tmp[0]\n ModelLat[\"sample_lat\"]=tmp[1]\n ModelLats.append(ModelLat)\n get_model_lat=2\n elif '-lat' in args:\n get_model_lat=1\n if \"-p\" in args:\n plotsites=1\n if \"-fmt\" in args:\n ind=args.index(\"-fmt\")\n fmt=args[ind+1]\n if noDir==0: # plot by site - set up plot window\n import pmagplotlib\n EQ={}\n EQ['eqarea']=1\n pmagplotlib.plot_init(EQ['eqarea'],5,5) # define figure 1 as equal area projection\n pmagplotlib.plot_net(EQ['eqarea']) # I don't know why this has to be here, but otherwise the first plot never plots...\n pmagplotlib.draw_figs(EQ)\n if '-WD' in args:\n infile=dir_path+'/'+infile\n measfile=dir_path+'/'+measfile\n instout=dir_path+'/'+instout\n sampfile=dir_path+'/'+sampfile\n sitefile=dir_path+'/'+sitefile\n agefile=dir_path+'/'+agefile\n specout=dir_path+'/'+specout\n sampout=dir_path+'/'+sampout\n siteout=dir_path+'/'+siteout\n resout=dir_path+'/'+resout\n critout=dir_path+'/'+critout\n if \"-exc\" in args: # use existing pmag_criteria file\n if \"-C\" in args:\n print('you can not use both existing and no criteria - choose either -exc OR -C OR neither (for default)')\n sys.exit()\n crit_data,file_type=pmag.magic_read(critout)\n print(\"Acceptance criteria read in from \", critout)\n else : # use default criteria (if nocrit set, then get really loose criteria as default)\n crit_data=pmag.default_criteria(nocrit)\n if nocrit==0:\n print(\"Acceptance criteria are 
defaults\")\n else:\n print(\"No acceptance criteria used \")\n accept={}\n for critrec in crit_data:\n for key in list(critrec.keys()):\n# need to migrate specimen_dang to specimen_int_dang for intensity data using old format\n if 'IE-SPEC' in list(critrec.keys()) and 'specimen_dang' in list(critrec.keys()) and 'specimen_int_dang' not in list(critrec.keys()):\n critrec['specimen_int_dang']=critrec['specimen_dang']\n del critrec['specimen_dang']\n# need to get rid of ron shaars sample_int_sigma_uT\n if 'sample_int_sigma_uT' in list(critrec.keys()):\n critrec['sample_int_sigma']='%10.3e'%(eval(critrec['sample_int_sigma_uT'])*1e-6)\n if key not in list(accept.keys()) and critrec[key]!='':\n accept[key]=critrec[key]\n #\n #\n if \"-exc\" not in args and \"-C\" not in args:\n print(\"args\",args)\n pmag.magic_write(critout,[accept],'pmag_criteria')\n print(\"\\n Pmag Criteria stored in \",critout,'\\n')\n#\n# now we're done slow dancing\n#\n SiteNFO,file_type=pmag.magic_read(sitefile) # read in site data - has the lats and lons\n SampNFO,file_type=pmag.magic_read(sampfile) # read in site data - has the lats and lons\n height_nfo=pmag.get_dictitem(SiteNFO,'site_height','','F') # find all the sites with height info.\n if agefile !=\"\":AgeNFO,file_type=pmag.magic_read(agefile) # read in the age information\n Data,file_type=pmag.magic_read(infile) # read in specimen interpretations\n IntData=pmag.get_dictitem(Data,'specimen_int','','F') # retrieve specimens with intensity data\n comment,orient=\"\",[]\n samples,sites=[],[]\n for rec in Data: # run through the data filling in missing keys and finding all components, coordinates available\n# fill in missing fields, collect unique sample and site names\n if 'er_sample_name' not in list(rec.keys()):\n rec['er_sample_name']=\"\"\n elif rec['er_sample_name'] not in samples:\n samples.append(rec['er_sample_name'])\n if 'er_site_name' not in list(rec.keys()):\n rec['er_site_name']=\"\"\n elif rec['er_site_name'] not in sites:\n 
sites.append(rec['er_site_name'])\n if 'specimen_int' not in list(rec.keys()):rec['specimen_int']=''\n if 'specimen_comp_name' not in list(rec.keys()) or rec['specimen_comp_name']==\"\":rec['specimen_comp_name']='A'\n if rec['specimen_comp_name'] not in Comps:Comps.append(rec['specimen_comp_name'])\n rec['specimen_tilt_correction']=rec['specimen_tilt_correction'].strip('\\n')\n if \"specimen_tilt_correction\" not in list(rec.keys()): rec[\"specimen_tilt_correction\"]=\"-1\" # assume sample coordinates\n if rec[\"specimen_tilt_correction\"] not in orient: orient.append(rec[\"specimen_tilt_correction\"]) # collect available coordinate systems\n if \"specimen_direction_type\" not in list(rec.keys()): rec[\"specimen_direction_type\"]='l' # assume direction is line - not plane\n if \"specimen_dec\" not in list(rec.keys()): rec[\"specimen_direction_type\"]='' # if no declination, set direction type to blank\n if \"specimen_n\" not in list(rec.keys()): rec[\"specimen_n\"]='' # put in n\n if \"specimen_alpha95\" not in list(rec.keys()): rec[\"specimen_alpha95\"]='' # put in alpha95\n if \"magic_method_codes\" not in list(rec.keys()): rec[\"magic_method_codes\"]=''\n #\n # start parsing data into SpecDirs, SpecPlanes, SpecInts\n SpecInts,SpecDirs,SpecPlanes=[],[],[]\n samples.sort() # get sorted list of samples and sites\n sites.sort()\n if noInt==0: # don't skip intensities\n IntData=pmag.get_dictitem(Data,'specimen_int','','F') # retrieve specimens with intensity data\n if nocrit==0: # use selection criteria\n for rec in IntData: # do selection criteria\n kill=pmag.grade(rec,accept,'specimen_int')\n if len(kill)==0: SpecInts.append(rec) # intensity record to be included in sample, site calculations\n else:\n SpecInts=IntData[:] # take everything - no selection criteria\n # check for required data adjustments\n if len(corrections)>0 and len(SpecInts)>0:\n for cor in corrections:\n SpecInts=pmag.get_dictitem(SpecInts,'magic_method_codes',cor,'has') # only take specimens 
with the required corrections\n if len(nocorrection)>0 and len(SpecInts)>0:\n for cor in nocorrection:\n SpecInts=pmag.get_dictitem(SpecInts,'magic_method_codes',cor,'not') # exclude the corrections not specified for inclusion\n# take top priority specimen of its name in remaining specimens (only one per customer)\n PrioritySpecInts=[]\n specimens=pmag.get_specs(SpecInts) # get list of uniq specimen names\n for spec in specimens:\n ThisSpecRecs=pmag.get_dictitem(SpecInts,'er_specimen_name',spec,'T') # all the records for this specimen\n if len(ThisSpecRecs)==1:\n PrioritySpecInts.append(ThisSpecRecs[0])\n elif len(ThisSpecRecs)>1: # more than one\n prec=[]\n for p in priorities:\n ThisSpecRecs=pmag.get_dictitem(SpecInts,'magic_method_codes',p,'has') # all the records for this specimen\n if len(ThisSpecRecs)>0:prec.append(ThisSpecRecs[0])\n PrioritySpecInts.append(prec[0]) # take the best one\n SpecInts=PrioritySpecInts # this has the first specimen record\n if noDir==0: # don't skip directions\n AllDirs=pmag.get_dictitem(Data,'specimen_direction_type','','F') # retrieve specimens with directed lines and planes\n Ns=pmag.get_dictitem(AllDirs,'specimen_n','','F') # get all specimens with specimen_n information\n if nocrit!=1: # use selection criteria\n for rec in Ns: # look through everything with specimen_n for \"good\" data\n kill=pmag.grade(rec,accept,'specimen_dir')\n if len(kill)==0: # nothing killed it\n SpecDirs.append(rec)\n else: # no criteria\n SpecDirs=AllDirs[:] # take them all\n# SpecDirs is now the list of all specimen directions (lines and planes) that pass muster\n#\n PmagSamps,SampDirs=[],[] # list of all sample data and list of those that pass the DE-SAMP criteria\n PmagSites,PmagResults=[],[] # list of all site data and selected results\n SampInts=[]\n for samp in samples: # run through the sample names\n if Daverage==1: # average by sample if desired\n SampDir=pmag.get_dictitem(SpecDirs,'er_sample_name',samp,'T') # get all the directional data for 
this sample\n if len(SampDir)>0: # there are some directions\n for coord in coords: # step through desired coordinate systems\n CoordDir=pmag.get_dictitem(SampDir,'specimen_tilt_correction',coord,'T') # get all the directions for this sample\n if len(CoordDir)>0: # there are some with this coordinate system\n if Caverage==0: # look component by component\n for comp in Comps:\n CompDir=pmag.get_dictitem(CoordDir,'specimen_comp_name',comp,'T') # get all directions from this component\n if len(CompDir)>0: # there are some\n PmagSampRec=pmag.lnpbykey(CompDir,'sample','specimen') # get a sample average from all specimens\n PmagSampRec[\"er_location_name\"]=CompDir[0]['er_location_name'] # decorate the sample record\n PmagSampRec[\"er_site_name\"]=CompDir[0]['er_site_name']\n PmagSampRec[\"er_sample_name\"]=samp\n PmagSampRec[\"er_citation_names\"]=\"This study\"\n PmagSampRec[\"er_analyst_mail_names\"]=user\n PmagSampRec['magic_software_packages']=version_num\n if nocrit!=1:PmagSampRec['pmag_criteria_codes']=\"ACCEPT\"\n if agefile != \"\": PmagSampRec= pmag.get_age(PmagSampRec,\"er_site_name\",\"sample_inferred_\",AgeNFO,DefaultAge)\n site_height=pmag.get_dictitem(height_nfo,'er_site_name',PmagSampRec['er_site_name'],'T')\n if len(site_height)>0:PmagSampRec[\"sample_height\"]=site_height[0]['site_height'] # add in height if available\n PmagSampRec['sample_comp_name']=comp\n PmagSampRec['sample_tilt_correction']=coord\n PmagSampRec['er_specimen_names']= pmag.get_list(CompDir,'er_specimen_name') # get a list of the specimen names used\n PmagSampRec['magic_method_codes']= pmag.get_list(CompDir,'magic_method_codes') # get a list of the methods used\n if nocrit!=1: # apply selection criteria\n kill=pmag.grade(PmagSampRec,accept,'sample_dir')\n else:\n kill=[]\n if len(kill)==0:\n SampDirs.append(PmagSampRec)\n if vgps==1: # if sample level VGP info desired, do that now\n PmagResRec=pmag.getsampVGP(PmagSampRec,SiteNFO)\n if PmagResRec!=\"\":PmagResults.append(PmagResRec)\n 
PmagSamps.append(PmagSampRec)\n if Caverage==1: # average all components together basically same as above\n PmagSampRec=pmag.lnpbykey(CoordDir,'sample','specimen')\n PmagSampRec[\"er_location_name\"]=CoordDir[0]['er_location_name']\n PmagSampRec[\"er_site_name\"]=CoordDir[0]['er_site_name']\n PmagSampRec[\"er_sample_name\"]=samp\n PmagSampRec[\"er_citation_names\"]=\"This study\"\n PmagSampRec[\"er_analyst_mail_names\"]=user\n PmagSampRec['magic_software_packages']=version_num\n if nocrit!=1:PmagSampRec['pmag_criteria_codes']=\"\"\n if agefile != \"\": PmagSampRec= pmag.get_age(PmagSampRec,\"er_site_name\",\"sample_inferred_\",AgeNFO,DefaultAge)\n site_height=pmag.get_dictitem(height_nfo,'er_site_name',site,'T')\n if len(site_height)>0:PmagSampRec[\"sample_height\"]=site_height[0]['site_height'] # add in height if available\n PmagSampRec['sample_tilt_correction']=coord\n PmagSampRec['sample_comp_name']= pmag.get_list(CoordDir,'specimen_comp_name') # get components used\n PmagSampRec['er_specimen_names']= pmag.get_list(CoordDir,'er_specimen_name') # get specimne names averaged\n PmagSampRec['magic_method_codes']= pmag.get_list(CoordDir,'magic_method_codes') # assemble method codes\n if nocrit!=1: # apply selection criteria\n kill=pmag.grade(PmagSampRec,accept,'sample_dir')\n if len(kill)==0: # passes the mustard\n SampDirs.append(PmagSampRec)\n if vgps==1:\n PmagResRec=pmag.getsampVGP(PmagSampRec,SiteNFO)\n if PmagResRec!=\"\":PmagResults.append(PmagResRec)\n else: # take everything\n SampDirs.append(PmagSampRec)\n if vgps==1:\n PmagResRec=pmag.getsampVGP(PmagSampRec,SiteNFO)\n if PmagResRec!=\"\":PmagResults.append(PmagResRec)\n PmagSamps.append(PmagSampRec)\n if Iaverage==1: # average by sample if desired\n SampI=pmag.get_dictitem(SpecInts,'er_sample_name',samp,'T') # get all the intensity data for this sample\n if len(SampI)>0: # there are some\n PmagSampRec=pmag.average_int(SampI,'specimen','sample') # get average intensity stuff\n 
PmagSampRec[\"sample_description\"]=\"sample intensity\" # decorate sample record\n PmagSampRec[\"sample_direction_type\"]=\"\"\n PmagSampRec['er_site_name']=SampI[0][\"er_site_name\"]\n PmagSampRec['er_sample_name']=samp\n PmagSampRec['er_location_name']=SampI[0][\"er_location_name\"]\n PmagSampRec[\"er_citation_names\"]=\"This study\"\n PmagSampRec[\"er_analyst_mail_names\"]=user\n if agefile != \"\": PmagSampRec=pmag.get_age(PmagSampRec,\"er_site_name\",\"sample_inferred_\", AgeNFO,DefaultAge)\n site_height=pmag.get_dictitem(height_nfo,'er_site_name',PmagSampRec['er_site_name'],'T')\n if len(site_height)>0:PmagSampRec[\"sample_height\"]=site_height[0]['site_height'] # add in height if available\n PmagSampRec['er_specimen_names']= pmag.get_list(SampI,'er_specimen_name')\n PmagSampRec['magic_method_codes']= pmag.get_list(SampI,'magic_method_codes')\n if nocrit!=1: # apply criteria!\n kill=pmag.grade(PmagSampRec,accept,'sample_int')\n if len(kill)==0:\n PmagSampRec['pmag_criteria_codes']=\"ACCEPT\"\n SampInts.append(PmagSampRec)\n PmagSamps.append(PmagSampRec)\n else:PmagSampRec={} # sample rejected\n else: # no criteria\n SampInts.append(PmagSampRec)\n PmagSamps.append(PmagSampRec)\n PmagSampRec['pmag_criteria_codes']=\"\"\n if vgps==1 and get_model_lat!=0 and PmagSampRec!={}: #\n if get_model_lat==1: # use sample latitude\n PmagResRec=pmag.getsampVDM(PmagSampRec,SampNFO)\n del(PmagResRec['model_lat']) # get rid of the model lat key\n elif get_model_lat==2: # use model latitude\n PmagResRec=pmag.getsampVDM(PmagSampRec,ModelLats)\n if PmagResRec!={}:PmagResRec['magic_method_codes']=PmagResRec['magic_method_codes']+\":IE-MLAT\"\n if PmagResRec!={}:\n PmagResRec['er_specimen_names']=PmagSampRec['er_specimen_names']\n PmagResRec['er_sample_names']=PmagSampRec['er_sample_name']\n PmagResRec['pmag_criteria_codes']='ACCEPT'\n PmagResRec['average_int_sigma_perc']=PmagSampRec['sample_int_sigma_perc']\n PmagResRec['average_int_sigma']=PmagSampRec['sample_int_sigma']\n 
PmagResRec['average_int_n']=PmagSampRec['sample_int_n']\n PmagResRec['vadm_n']=PmagSampRec['sample_int_n']\n PmagResRec['data_type']='i'\n PmagResults.append(PmagResRec)\n if len(PmagSamps)>0:\n TmpSamps,keylist=pmag.fillkeys(PmagSamps) # fill in missing keys from different types of records\n pmag.magic_write(sampout,TmpSamps,'pmag_samples') # save in sample output file\n print(' sample averages written to ',sampout)\n\n#\n#create site averages from specimens or samples as specified\n#\n for site in sites:\n if Daverage==0: key,dirlist='specimen',SpecDirs # if specimen averages at site level desired\n if Daverage==1: key,dirlist='sample',SampDirs # if sample averages at site level desired\n tmp=pmag.get_dictitem(dirlist,'er_site_name',site,'T') # get all the sites with directions\n tmp1=pmag.get_dictitem(tmp,key+'_tilt_correction',coords[-1],'T') # use only the last coordinate if Caverage==0\n sd=pmag.get_dictitem(SiteNFO,'er_site_name',site,'T') # fish out site information (lat/lon, etc.)\n if len(sd)>0:\n sitedat=sd[0]\n if Caverage==0: # do component wise averaging\n for comp in Comps:\n siteD=pmag.get_dictitem(tmp1,key+'_comp_name',comp,'T') # get all components comp\n if len(siteD)>0: # there are some for this site and component name\n PmagSiteRec=pmag.lnpbykey(siteD,'site',key) # get an average for this site\n PmagSiteRec['site_comp_name']=comp # decorate the site record\n PmagSiteRec[\"er_location_name\"]=siteD[0]['er_location_name']\n PmagSiteRec[\"er_site_name\"]=siteD[0]['er_site_name']\n PmagSiteRec['site_tilt_correction']=coords[-1]\n PmagSiteRec['site_comp_name']= pmag.get_list(siteD,key+'_comp_name')\n if Daverage==1:\n PmagSiteRec['er_sample_names']= pmag.get_list(siteD,'er_sample_name')\n else:\n PmagSiteRec['er_specimen_names']= pmag.get_list(siteD,'er_specimen_name')\n # determine the demagnetization code (DC3,4 or 5) for this site\n AFnum=len(pmag.get_dictitem(siteD,'magic_method_codes','LP-DIR-AF','has'))\n 
Tnum=len(pmag.get_dictitem(siteD,'magic_method_codes','LP-DIR-T','has'))\n            DC=3\n            if AFnum>0:DC+=1\n            if Tnum>0:DC+=1\n            PmagSiteRec['magic_method_codes']= pmag.get_list(siteD,'magic_method_codes')+':'+ 'LP-DC'+str(DC)\n            PmagSiteRec['magic_method_codes']=PmagSiteRec['magic_method_codes'].strip(\":\") # strip returns a new string, so assign it back\n            if plotsites==1:\n                print(PmagSiteRec['er_site_name'])\n                pmagplotlib.plot_site(EQ['eqarea'],PmagSiteRec,siteD,key) # plot and list the data\n                pmagplotlib.draw_figs(EQ)\n            PmagSites.append(PmagSiteRec)\n        else: # last component only\n            siteD=tmp1[:] # get the last orientation system specified\n            if len(siteD)>0: # there are some\n                PmagSiteRec=pmag.lnpbykey(siteD,'site',key) # get the average for this site\n                PmagSiteRec[\"er_location_name\"]=siteD[0]['er_location_name'] # decorate the record\n                PmagSiteRec[\"er_site_name\"]=siteD[0]['er_site_name']\n                PmagSiteRec['site_comp_name']=comp\n                PmagSiteRec['site_tilt_correction']=coords[-1]\n                PmagSiteRec['site_comp_name']= pmag.get_list(siteD,key+'_comp_name')\n                PmagSiteRec['er_specimen_names']= pmag.get_list(siteD,'er_specimen_name')\n                PmagSiteRec['er_sample_names']= pmag.get_list(siteD,'er_sample_name')\n                AFnum=len(pmag.get_dictitem(siteD,'magic_method_codes','LP-DIR-AF','has'))\n                Tnum=len(pmag.get_dictitem(siteD,'magic_method_codes','LP-DIR-T','has'))\n                DC=3\n                if AFnum>0:DC+=1\n                if Tnum>0:DC+=1\n                PmagSiteRec['magic_method_codes']= pmag.get_list(siteD,'magic_method_codes')+':'+ 'LP-DC'+str(DC)\n                PmagSiteRec['magic_method_codes']=PmagSiteRec['magic_method_codes'].strip(\":\") # assign the stripped result back\n                if Daverage==0:PmagSiteRec['site_comp_name']= pmag.get_list(siteD,key+'_comp_name')\n                if plotsites==1:\n                    pmagplotlib.plot_site(EQ['eqarea'],PmagSiteRec,siteD,key)\n                    pmagplotlib.draw_figs(EQ)\n                PmagSites.append(PmagSiteRec)\n        else:\n            print('site information not found in er_sites for site, ',site,' site will be skipped')\n    for PmagSiteRec in PmagSites: # now decorate each dictionary some more, and calculate VGPs etc. 
for results table\n PmagSiteRec[\"er_citation_names\"]=\"This study\"\n PmagSiteRec[\"er_analyst_mail_names\"]=user\n PmagSiteRec['magic_software_packages']=version_num\n if agefile != \"\": PmagSiteRec= pmag.get_age(PmagSiteRec,\"er_site_name\",\"site_inferred_\",AgeNFO,DefaultAge)\n PmagSiteRec['pmag_criteria_codes']='ACCEPT'\n if 'site_n_lines' in list(PmagSiteRec.keys()) and 'site_n_planes' in list(PmagSiteRec.keys()) and PmagSiteRec['site_n_lines']!=\"\" and PmagSiteRec['site_n_planes']!=\"\":\n if int(PmagSiteRec[\"site_n_planes\"])>0:\n PmagSiteRec[\"magic_method_codes\"]=PmagSiteRec['magic_method_codes']+\":DE-FM-LP\"\n elif int(PmagSiteRec[\"site_n_lines\"])>2:\n PmagSiteRec[\"magic_method_codes\"]=PmagSiteRec['magic_method_codes']+\":DE-FM\"\n kill=pmag.grade(PmagSiteRec,accept,'site_dir')\n if len(kill)==0:\n PmagResRec={} # set up dictionary for the pmag_results table entry\n PmagResRec['data_type']='i' # decorate it a bit\n PmagResRec['magic_software_packages']=version_num\n PmagSiteRec['site_description']='Site direction included in results table'\n PmagResRec['pmag_criteria_codes']='ACCEPT'\n dec=float(PmagSiteRec[\"site_dec\"])\n inc=float(PmagSiteRec[\"site_inc\"])\n if 'site_alpha95' in list(PmagSiteRec.keys()) and PmagSiteRec['site_alpha95']!=\"\":\n a95=float(PmagSiteRec[\"site_alpha95\"])\n else:a95=180.\n sitedat=pmag.get_dictitem(SiteNFO,'er_site_name',PmagSiteRec['er_site_name'],'T')[0] # fish out site information (lat/lon, etc.)\n lat=float(sitedat['site_lat'])\n lon=float(sitedat['site_lon'])\n plong,plat,dp,dm=pmag.dia_vgp(dec,inc,a95,lat,lon) # get the VGP for this site\n if PmagSiteRec['site_tilt_correction']=='-1':C=' (spec coord) '\n if PmagSiteRec['site_tilt_correction']=='0':C=' (geog. coord) '\n if PmagSiteRec['site_tilt_correction']=='100':C=' (strat. 
coord) '\n PmagResRec[\"pmag_result_name\"]=\"VGP Site: \"+PmagSiteRec[\"er_site_name\"] # decorate some more\n PmagResRec[\"result_description\"]=\"Site VGP, coord system = \"+str(coord)+' component: '+comp\n PmagResRec['er_site_names']=PmagSiteRec['er_site_name']\n PmagResRec['pmag_criteria_codes']='ACCEPT'\n PmagResRec['er_citation_names']='This study'\n PmagResRec['er_analyst_mail_names']=user\n PmagResRec[\"er_location_names\"]=PmagSiteRec[\"er_location_name\"]\n if Daverage==1:\n PmagResRec[\"er_sample_names\"]=PmagSiteRec[\"er_sample_names\"]\n else:\n PmagResRec[\"er_specimen_names\"]=PmagSiteRec[\"er_specimen_names\"]\n PmagResRec[\"tilt_correction\"]=PmagSiteRec['site_tilt_correction']\n PmagResRec[\"pole_comp_name\"]=PmagSiteRec['site_comp_name']\n PmagResRec[\"average_dec\"]=PmagSiteRec[\"site_dec\"]\n PmagResRec[\"average_inc\"]=PmagSiteRec[\"site_inc\"]\n PmagResRec[\"average_alpha95\"]=PmagSiteRec[\"site_alpha95\"]\n PmagResRec[\"average_n\"]=PmagSiteRec[\"site_n\"]\n PmagResRec[\"average_n_lines\"]=PmagSiteRec[\"site_n_lines\"]\n PmagResRec[\"average_n_planes\"]=PmagSiteRec[\"site_n_planes\"]\n PmagResRec[\"vgp_n\"]=PmagSiteRec[\"site_n\"]\n PmagResRec[\"average_k\"]=PmagSiteRec[\"site_k\"]\n PmagResRec[\"average_r\"]=PmagSiteRec[\"site_r\"]\n PmagResRec[\"average_lat\"]='%10.4f ' %(lat)\n PmagResRec[\"average_lon\"]='%10.4f ' %(lon)\n if agefile != \"\": PmagResRec= pmag.get_age(PmagResRec,\"er_site_names\",\"average_\",AgeNFO,DefaultAge)\n site_height=pmag.get_dictitem(height_nfo,'er_site_name',site,'T')\n if len(site_height)>0:PmagResRec[\"average_height\"]=site_height[0]['site_height']\n PmagResRec[\"vgp_lat\"]='%7.1f ' % (plat)\n PmagResRec[\"vgp_lon\"]='%7.1f ' % (plong)\n PmagResRec[\"vgp_dp\"]='%7.1f ' % (dp)\n PmagResRec[\"vgp_dm\"]='%7.1f ' % (dm)\n PmagResRec[\"magic_method_codes\"]= PmagSiteRec[\"magic_method_codes\"]\n if 
PmagSiteRec['site_tilt_correction']=='0':PmagSiteRec['magic_method_codes']=PmagSiteRec['magic_method_codes']+\":DA-DIR-GEO\"\n if PmagSiteRec['site_tilt_correction']=='100':PmagSiteRec['magic_method_codes']=PmagSiteRec['magic_method_codes']+\":DA-DIR-TILT\"\n PmagSiteRec['site_polarity']=\"\"\n if polarity==1: # assign polarity based on angle of pole lat to spin axis - may want to re-think this sometime\n angle=pmag.angle([0,0],[0,(90-plat)])\n if angle <= 55.: PmagSiteRec[\"site_polarity\"]='n'\n if angle > 55. and angle < 125.: PmagSiteRec[\"site_polarity\"]='t'\n if angle >= 125.: PmagSiteRec[\"site_polarity\"]='r'\n PmagResults.append(PmagResRec)\n if polarity==1:\n crecs=pmag.get_dictitem(PmagSites,'site_tilt_correction','100','T') # find the tilt corrected data\n if len(crecs)<2:crecs=pmag.get_dictitem(PmagSites,'site_tilt_correction','0','T') # if there aren't any, find the geographic corrected data\n if len(crecs)>2: # if there are some,\n comp=pmag.get_list(crecs,'site_comp_name').split(':')[0] # find the first component\n crecs=pmag.get_dictitem(crecs,'site_comp_name',comp,'T') # fish out all of the first component\n precs=[]\n for rec in crecs:\n precs.append({'dec':rec['site_dec'],'inc':rec['site_inc'],'name':rec['er_site_name'],'loc':rec['er_location_name']})\n polpars=pmag.fisher_by_pol(precs) # calculate average by polarity\n for mode in list(polpars.keys()): # hunt through all the modes (normal=A, reverse=B, all=ALL)\n PolRes={}\n PolRes['er_citation_names']='This study'\n PolRes[\"pmag_result_name\"]=\"Polarity Average: Polarity \"+mode #\n PolRes[\"data_type\"]=\"a\"\n PolRes[\"average_dec\"]='%7.1f'%(polpars[mode]['dec'])\n PolRes[\"average_inc\"]='%7.1f'%(polpars[mode]['inc'])\n PolRes[\"average_n\"]='%i'%(polpars[mode]['n'])\n PolRes[\"average_r\"]='%5.4f'%(polpars[mode]['r'])\n PolRes[\"average_k\"]='%6.0f'%(polpars[mode]['k'])\n PolRes[\"average_alpha95\"]='%7.1f'%(polpars[mode]['alpha95'])\n PolRes['er_site_names']= polpars[mode]['sites']\n 
PolRes['er_location_names']= polpars[mode]['locs']\n PolRes['magic_software_packages']=version_num\n PmagResults.append(PolRes)\n\n if noInt!=1 and nositeints!=1:\n for site in sites: # now do intensities for each site\n if plotsites==1:print(site)\n if Iaverage==0: key,intlist='specimen',SpecInts # if using specimen level data\n if Iaverage==1: key,intlist='sample',PmagSamps # if using sample level data\n Ints=pmag.get_dictitem(intlist,'er_site_name',site,'T') # get all the intensities for this site\n if len(Ints)>0: # there are some\n PmagSiteRec=pmag.average_int(Ints,key,'site') # get average intensity stuff for site table\n PmagResRec=pmag.average_int(Ints,key,'average') # get average intensity stuff for results table\n if plotsites==1: # if site by site examination requested - print this site out to the screen\n for rec in Ints:print(rec['er_'+key+'_name'],' %7.1f'%(1e6*float(rec[key+'_int'])))\n if len(Ints)>1:\n print('Average: ','%7.1f'%(1e6*float(PmagResRec['average_int'])),'N: ',len(Ints))\n print('Sigma: ','%7.1f'%(1e6*float(PmagResRec['average_int_sigma'])),'Sigma %: ',PmagResRec['average_int_sigma_perc'])\n input('Press any key to continue\\n')\n er_location_name=Ints[0][\"er_location_name\"]\n PmagSiteRec[\"er_location_name\"]=er_location_name # decorate the records\n PmagSiteRec[\"er_citation_names\"]=\"This study\"\n PmagResRec[\"er_location_names\"]=er_location_name\n PmagResRec[\"er_citation_names\"]=\"This study\"\n PmagSiteRec[\"er_analyst_mail_names\"]=user\n PmagResRec[\"er_analyst_mail_names\"]=user\n PmagResRec[\"data_type\"]='i'\n if Iaverage==0:\n PmagSiteRec['er_specimen_names']= pmag.get_list(Ints,'er_specimen_name') # list of all specimens used\n PmagResRec['er_specimen_names']= pmag.get_list(Ints,'er_specimen_name')\n PmagSiteRec['er_sample_names']= pmag.get_list(Ints,'er_sample_name') # list of all samples used\n PmagResRec['er_sample_names']= pmag.get_list(Ints,'er_sample_name')\n PmagSiteRec['er_site_name']= site\n 
PmagResRec['er_site_names']= site\n                PmagSiteRec['magic_method_codes']= pmag.get_list(Ints,'magic_method_codes')\n                PmagResRec['magic_method_codes']= pmag.get_list(Ints,'magic_method_codes')\n                kill=pmag.grade(PmagSiteRec,accept,'site_int')\n                if nocrit==1 or len(kill)==0:\n                    b,sig=float(PmagResRec['average_int']),\"\"\n                    if(PmagResRec['average_int_sigma'])!=\"\":sig=float(PmagResRec['average_int_sigma'])\n                    sdir=pmag.get_dictitem(PmagResults,'er_site_names',site,'T') # fish out site direction\n                    if len(sdir)>0 and sdir[-1]['average_inc']!=\"\": # get the VDM for this record using last average inclination (hope it is the right one!)\n                        inc=float(sdir[-1]['average_inc']) # use the last direction record, matching the check above\n                        mlat=pmag.magnetic_lat(inc) # get magnetic latitude using dipole formula\n                        PmagResRec[\"vdm\"]='%8.3e '% (pmag.b_vdm(b,mlat)) # get VDM with magnetic latitude\n                        PmagResRec[\"vdm_n\"]=PmagResRec['average_int_n']\n                        if 'average_int_sigma' in list(PmagResRec.keys()) and PmagResRec['average_int_sigma']!=\"\":\n                            vdm_sig=pmag.b_vdm(float(PmagResRec['average_int_sigma']),mlat)\n                            PmagResRec[\"vdm_sigma\"]='%8.3e '% (vdm_sig)\n                        else:\n                            PmagResRec[\"vdm_sigma\"]=\"\"\n                    mlat=\"\" # define a model latitude\n                    if get_model_lat==1: # use present site latitude\n                        mlats=pmag.get_dictitem(SiteNFO,'er_site_name',site,'T')\n                        if len(mlats)>0: mlat=mlats[0]['site_lat']\n                    elif get_model_lat==2: # use a model latitude from some plate reconstruction model (or something)\n                        mlats=pmag.get_dictitem(ModelLats,'er_site_name',site,'T')\n                        if len(mlats)>0: PmagResRec['model_lat']=mlats[0]['site_model_lat']\n                        mlat=PmagResRec['model_lat']\n                    if mlat!=\"\":\n                        PmagResRec[\"vadm\"]='%8.3e '% (pmag.b_vdm(b,float(mlat))) # get the VADM using the desired latitude\n                        if sig!=\"\":\n                            vdm_sig=pmag.b_vdm(float(PmagResRec['average_int_sigma']),float(mlat))\n                            PmagResRec[\"vadm_sigma\"]='%8.3e '% (vdm_sig)\n                            PmagResRec[\"vadm_n\"]=PmagResRec['average_int_n']\n                        else:\n                            PmagResRec[\"vadm_sigma\"]=\"\"\n                    
sitedat=pmag.get_dictitem(SiteNFO,'er_site_name',PmagSiteRec['er_site_name'],'T') # fish out site information (lat/lon, etc.)\n                    if len(sitedat)>0:\n                        sitedat=sitedat[0]\n                        PmagResRec['average_lat']=sitedat['site_lat']\n                        PmagResRec['average_lon']=sitedat['site_lon']\n                    else:\n                        PmagResRec['average_lat']='UNKNOWN'\n                        PmagResRec['average_lon']='UNKNOWN'\n                    PmagResRec['magic_software_packages']=version_num\n                    PmagResRec[\"pmag_result_name\"]=\"V[A]DM: Site \"+site\n                    PmagResRec[\"result_description\"]=\"V[A]DM of site\"\n                    PmagResRec[\"pmag_criteria_codes\"]=\"ACCEPT\"\n                    if agefile != \"\": PmagResRec= pmag.get_age(PmagResRec,\"er_site_names\",\"average_\",AgeNFO,DefaultAge)\n                    site_height=pmag.get_dictitem(height_nfo,'er_site_name',site,'T')\n                    if len(site_height)>0:PmagResRec[\"average_height\"]=site_height[0]['site_height']\n                    PmagSites.append(PmagSiteRec)\n                    PmagResults.append(PmagResRec)\n    if len(PmagSites)>0:\n        Tmp,keylist=pmag.fillkeys(PmagSites)\n        pmag.magic_write(siteout,Tmp,'pmag_sites')\n        print(' sites written to ',siteout)\n    else: print(\"No Site level table\")\n    if len(PmagResults)>0:\n        TmpRes,keylist=pmag.fillkeys(PmagResults)\n        pmag.magic_write(resout,TmpRes,'pmag_results')\n        print(' results written to ',resout)\n    else: print(\"No Results level table\")", "response": "main function for pmag_specimens_results_magic. 
py"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nnames eqarea_magic.py DESCRIPTION makes equal area projections from declination/inclination data SYNTAX eqarea_magic.py [command line options] INPUT takes magic formatted sites, samples, specimens, or measurements OPTIONS -h prints help message and quits -f FILE: specify input magic format file from magic, default='sites.txt' supported types=[measurements, specimens, samples, sites] -fsp FILE: specify specimen file name, (required if you want to plot measurements by sample) default='specimens.txt' -fsa FILE: specify sample file name, (required if you want to plot specimens by site) default='samples.txt' -fsi FILE: specify site file name, default='sites.txt' -flo FILE: specify location file name, default='locations.txt' -obj OBJ: specify level of plot [all, sit, sam, spc], default is all -crd [s,g,t]: specify coordinate system, [s]pecimen, [g]eographic, [t]ilt adjusted default is geographic, unspecified assumed geographic -fmt [svg,png,jpg] format for output plots -ell [F,K,B,Be,Bv] plot Fisher, Kent, Bingham, Bootstrap ellipses or Boostrap eigenvectors -c plot as colour contour -cm CM use color map CM [default is coolwarm] -sav save plot and quit quietly -no-tilt data are unoriented, allows plotting of measurement dec/inc NOTE all: entire file; sit: site; sam: sample; spc: specimen", "response": "def main():\n \"\"\"\n NAME\n eqarea_magic.py\n\n DESCRIPTION\n makes equal area projections from declination/inclination data\n\n SYNTAX\n eqarea_magic.py [command line options]\n\n INPUT\n takes magic formatted sites, samples, specimens, or measurements\n\n OPTIONS\n -h prints help message and quits\n -f FILE: specify input magic format file from magic, default='sites.txt'\n supported types=[measurements, specimens, samples, sites]\n -fsp FILE: specify specimen file name, (required if you want to plot measurements by sample)\n default='specimens.txt'\n -fsa FILE: specify 
sample file name, (required if you want to plot specimens by site)\n default='samples.txt'\n -fsi FILE: specify site file name, default='sites.txt'\n -flo FILE: specify location file name, default='locations.txt'\n\n -obj OBJ: specify level of plot [all, sit, sam, spc], default is all\n -crd [s,g,t]: specify coordinate system, [s]pecimen, [g]eographic, [t]ilt adjusted\n default is geographic, unspecified assumed geographic\n -fmt [svg,png,jpg] format for output plots\n -ell [F,K,B,Be,Bv] plot Fisher, Kent, Bingham, Bootstrap ellipses or Boostrap eigenvectors\n -c plot as colour contour\n -cm CM use color map CM [default is coolwarm]\n -sav save plot and quit quietly\n -no-tilt data are unoriented, allows plotting of measurement dec/inc\n NOTE\n all: entire file; sit: site; sam: sample; spc: specimen\n \"\"\"\n # extract arguments from sys.argv\n if '-h' in sys.argv:\n print(main.__doc__)\n sys.exit()\n dir_path = pmag.get_named_arg(\"-WD\", default_val=\".\")\n input_dir_path = pmag.get_named_arg('-ID', '')\n if not input_dir_path:\n input_dir_path = dir_path\n in_file = pmag.get_named_arg(\"-f\", default_val=\"sites.txt\")\n in_file = pmag.resolve_file_name(in_file, input_dir_path)\n if \"-ID\" not in sys.argv:\n input_dir_path = os.path.split(in_file)[0]\n\n plot_by = pmag.get_named_arg(\"-obj\", default_val=\"all\").lower()\n spec_file = pmag.get_named_arg(\"-fsp\", default_val=\"specimens.txt\")\n samp_file = pmag.get_named_arg(\"-fsa\", default_val=\"samples.txt\")\n site_file = pmag.get_named_arg(\"-fsi\", default_val=\"sites.txt\")\n loc_file = pmag.get_named_arg(\"-flo\", default_val=\"locations.txt\")\n ignore_tilt = False\n if '-no-tilt' in sys.argv:\n ignore_tilt = True\n color_map = \"coolwarm\"\n if '-c' in sys.argv:\n contour = True\n if '-cm' in sys.argv:\n ind = sys.argv.index('-cm')\n color_map = sys.argv[ind+1]\n else:\n color_map = 'coolwarm'\n else:\n contour = False\n interactive = True\n save_plots = False\n if '-sav' in sys.argv:\n save_plots 
= True\n interactive = False\n plot_ell = False\n if '-ell' in sys.argv:\n plot_ell = pmag.get_named_arg(\"-ell\", \"F\")\n crd = pmag.get_named_arg(\"-crd\", default_val=\"g\")\n fmt = pmag.get_named_arg(\"-fmt\", \"svg\")\n ipmag.eqarea_magic(in_file, dir_path, input_dir_path, spec_file, samp_file, site_file, loc_file,\n plot_by, crd, ignore_tilt, save_plots, fmt, contour, color_map,\n plot_ell, \"all\", interactive)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ninitializing the main UI for the current resource class", "response": "def InitUI(self):\n \"\"\"\n initialize window\n \"\"\"\n self.main_sizer = wx.BoxSizer(wx.VERTICAL)\n self.init_grid_headers()\n self.grid_builder = GridBuilder(self.er_magic, self.grid_type, self.grid_headers,\n self.panel, self.parent_type)\n self.grid = self.grid_builder.make_grid()\n self.grid.InitUI()\n\n ## Column management buttons\n self.add_cols_button = wx.Button(self.panel, label=\"Add additional columns\",\n name='add_cols_btn')\n self.Bind(wx.EVT_BUTTON, self.on_add_cols, self.add_cols_button)\n self.remove_cols_button = wx.Button(self.panel, label=\"Remove columns\",\n name='remove_cols_btn')\n self.Bind(wx.EVT_BUTTON, self.on_remove_cols, self.remove_cols_button)\n\n ## Row management buttons\n self.remove_row_button = wx.Button(self.panel, label=\"Remove last row\",\n name='remove_last_row_btn')\n self.Bind(wx.EVT_BUTTON, self.on_remove_row, self.remove_row_button)\n many_rows_box = wx.BoxSizer(wx.HORIZONTAL)\n self.add_many_rows_button = wx.Button(self.panel, label=\"Add row(s)\",\n name='add_many_rows_btn')\n self.rows_spin_ctrl = wx.SpinCtrl(self.panel, value='1', initial=1,\n name='rows_spin_ctrl')\n many_rows_box.Add(self.add_many_rows_button, flag=wx.ALIGN_CENTRE)\n many_rows_box.Add(self.rows_spin_ctrl)\n self.Bind(wx.EVT_BUTTON, self.on_add_rows, self.add_many_rows_button)\n\n self.deleteRowButton = wx.Button(self.panel, id=-1, label='Delete selected row(s)', 
name='delete_row_btn')\n self.Bind(wx.EVT_BUTTON, lambda event: self.on_remove_row(event, False), self.deleteRowButton)\n self.deleteRowButton.Disable()\n\n ## Data management buttons\n self.importButton = wx.Button(self.panel, id=-1,\n label='Import MagIC-format file', name='import_btn')\n self.Bind(wx.EVT_BUTTON, self.onImport, self.importButton)\n self.exitButton = wx.Button(self.panel, id=-1,\n label='Save and close grid', name='save_and_quit_btn')\n self.Bind(wx.EVT_BUTTON, self.onSave, self.exitButton)\n self.cancelButton = wx.Button(self.panel, id=-1, label='Cancel', name='cancel_btn')\n self.Bind(wx.EVT_BUTTON, self.onCancelButton, self.cancelButton)\n\n ## Help message and button\n # button\n self.toggle_help_btn = wx.Button(self.panel, id=-1, label=\"Show help\",\n name='toggle_help_btn')\n self.Bind(wx.EVT_BUTTON, self.toggle_help, self.toggle_help_btn)\n # message\n self.help_msg_boxsizer = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, name='help_msg_boxsizer'), wx.VERTICAL)\n self.default_msg_text = 'Edit {} here.\\nYou can add or remove both rows and columns, however required columns may not be deleted.\\nControlled vocabularies are indicated by **, and will have drop-down-menus.\\nTo edit all values in a column, click the column header.\\nYou can cut and paste a block of cells from an Excel-like file.\\nJust click the top left cell and use command \"v\".\\nColumns that pertain to interpretations will be marked with \"++\".'.format(self.grid_type + 's')\n txt = ''\n if self.grid_type == 'location':\n txt = '\\n\\nNote: you can fill in location start/end latitude/longitude here.\\nHowever, if you add sites in step 2, the program will calculate those values automatically,\\nbased on site latitudes/logitudes.\\nThese values will be written to your upload file.'\n if self.grid_type == 'sample':\n txt = \"\\n\\nNote: you can fill in lithology, class, and type for each sample here.\\nHowever, if the sample's class, lithology, and type are the same as its 
parent site,\\nthose values will propagate down, and will be written to your sample file automatically.\"\n if self.grid_type == 'specimen':\n txt = \"\\n\\nNote: you can fill in lithology, class, and type for each specimen here.\\nHowever, if the specimen's class, lithology, and type are the same as its parent sample,\\nthose values will propagate down, and will be written to your specimen file automatically.\"\n if self.grid_type == 'age':\n txt = \"\\n\\nNote: only ages for which you provide data will be written to your upload file.\"\n self.default_msg_text += txt\n self.msg_text = wx.StaticText(self.panel, label=self.default_msg_text,\n style=wx.TE_CENTER, name='msg text')\n self.help_msg_boxsizer.Add(self.msg_text)\n self.help_msg_boxsizer.ShowItems(False)\n\n ## Code message and button\n # button\n self.toggle_codes_btn = wx.Button(self.panel, id=-1, label=\"Show method codes\",\n name='toggle_codes_btn')\n self.Bind(wx.EVT_BUTTON, self.toggle_codes, self.toggle_codes_btn)\n # message\n self.code_msg_boxsizer = pw.MethodCodeDemystifier(self.panel, vocab)\n self.code_msg_boxsizer.ShowItems(False)\n\n ## Add content to sizers\n self.hbox = wx.BoxSizer(wx.HORIZONTAL)\n col_btn_vbox = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, label='Columns',\n name='manage columns'), wx.VERTICAL)\n row_btn_vbox = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, label='Rows',\n name='manage rows'), wx.VERTICAL)\n main_btn_vbox = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, label='Manage data',\n name='manage data'), wx.VERTICAL)\n col_btn_vbox.Add(self.add_cols_button, 1, flag=wx.ALL, border=5)\n col_btn_vbox.Add(self.remove_cols_button, 1, flag=wx.ALL, border=5)\n row_btn_vbox.Add(many_rows_box, 1, flag=wx.ALL, border=5)\n row_btn_vbox.Add(self.remove_row_button, 1, flag=wx.ALL, border=5)\n row_btn_vbox.Add(self.deleteRowButton, 1, flag=wx.ALL, border=5)\n main_btn_vbox.Add(self.importButton, 1, flag=wx.ALL, border=5)\n main_btn_vbox.Add(self.exitButton, 1, flag=wx.ALL, 
border=5)\n main_btn_vbox.Add(self.cancelButton, 1, flag=wx.ALL, border=5)\n self.hbox.Add(col_btn_vbox, 1)\n self.hbox.Add(row_btn_vbox, 1)\n self.hbox.Add(main_btn_vbox, 1)\n\n self.panel.Bind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK, self.onLeftClickLabel, self.grid)\n self.Bind(wx.EVT_KEY_DOWN, self.on_key_down)\n self.panel.Bind(wx.EVT_TEXT_PASTE, self.do_fit)\n\n # add actual data!\n self.grid_builder.add_data_to_grid(self.grid, self.grid_type)\n if self.grid_type == 'age':\n self.grid_builder.add_age_data_to_grid()\n\n # add drop_down menus\n if self.parent_type:\n belongs_to = sorted(self.er_magic.data_lists[self.parent_type][0], key=lambda item: item.name)\n else:\n belongs_to = ''\n self.drop_down_menu = drop_down_menus.Menus(self.grid_type, self, self.grid, belongs_to)\n\n self.grid_box = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, name='grid container'), wx.VERTICAL)\n self.grid_box.Add(self.grid, 1, flag=wx.ALL|wx.EXPAND, border=5)\n\n # a few special touches if it is a location grid\n if self.grid_type == 'location':\n lat_lon_dict = self.er_magic.get_min_max_lat_lon(self.er_magic.locations)\n for loc in self.er_magic.locations:\n # try to fill in min/max latitudes/longitudes from sites\n d = lat_lon_dict[loc.name]\n col_labels = [self.grid.GetColLabelValue(col) for col in range(self.grid.GetNumberCols())]\n row_labels = [self.grid.GetCellValue(row, 0) for row in range(self.grid.GetNumberRows())]\n for key, value in list(d.items()):\n if value:\n if str(loc.er_data[key]) == str(value):\n # no need to update\n pass\n else:\n # update\n loc.er_data[key] = value\n col_ind = col_labels.index(key)\n row_ind = row_labels.index(loc.name)\n self.grid.SetCellValue(row_ind, col_ind, str(value))\n if not self.grid.changes:\n self.grid.changes = set([row_ind])\n else:\n self.grid.changes.add(row_ind)\n\n # a few special touches if it is an age grid\n if self.grid_type == 'age':\n self.remove_row_button.Disable()\n self.add_many_rows_button.Disable()\n 
self.grid.SetColLabelValue(0, 'er_site_name')\n toggle_box = wx.StaticBoxSizer(wx.StaticBox(self.panel, -1, label='Ages level', name='Ages level'), wx.VERTICAL)\n\n levels = ['specimen', 'sample', 'site', 'location']\n age_level = pw.radio_buttons(self.panel, levels, 'Choose level to assign ages')\n level_ind = levels.index(self.er_magic.age_type)\n age_level.radio_buttons[level_ind].SetValue(True)\n\n toggle_box.Add(age_level)\n\n self.Bind(wx.EVT_RADIOBUTTON, self.toggle_ages)\n self.hbox.Add(toggle_box)\n\n # a few special touches if it is a result grid\n if self.grid_type == 'result':\n # populate specimen_names, sample_names, etc.\n self.drop_down_menu.choices[2] = [sorted([spec.name for spec in self.er_magic.specimens if spec]), False]\n self.drop_down_menu.choices[3] = [sorted([samp.name for samp in self.er_magic.samples if samp]), False]\n self.drop_down_menu.choices[4] = [sorted([site.name for site in self.er_magic.sites if site]), False]\n self.drop_down_menu.choices[5] = [sorted([loc.name for loc in self.er_magic.locations if loc]), False]\n for row in range(self.grid.GetNumberRows()):\n result_name = self.grid.GetCellValue(row, 0)\n result = self.er_magic.find_by_name(result_name, self.er_magic.results)\n if result:\n if result.specimens:\n self.grid.SetCellValue(row, 2, ' : '.join([pmag.get_attr(spec) for spec in result.specimens]))\n if result.samples:\n self.grid.SetCellValue(row, 3, ' : '.join([pmag.get_attr(samp) for samp in result.samples]))\n if result.sites:\n self.grid.SetCellValue(row, 4, ' : '.join([pmag.get_attr(site) for site in result.sites]))\n if result.locations:\n self.grid.SetCellValue(row, 5, ' : '.join([pmag.get_attr(loc) for loc in result.locations]))\n self.drop_down_menu.choices[5] = [sorted([loc.name for loc in self.er_magic.locations if loc]), False]\n\n # final layout, set size\n self.main_sizer.Add(self.hbox, flag=wx.ALL|wx.ALIGN_CENTER|wx.SHAPED, border=20)\n self.main_sizer.Add(self.toggle_help_btn, .5, 
flag=wx.BOTTOM|wx.ALIGN_CENTRE|wx.SHAPED, border=5)\n self.main_sizer.Add(self.help_msg_boxsizer, .5, flag=wx.BOTTOM|wx.ALIGN_CENTRE|wx.SHAPED, border=10)\n self.main_sizer.Add(self.toggle_codes_btn, .5, flag=wx.BOTTOM|wx.ALIGN_CENTRE|wx.SHAPED, border=5)\n self.main_sizer.Add(self.code_msg_boxsizer, .5, flag=wx.BOTTOM|wx.ALIGN_CENTRE|wx.SHAPED, border=5)\n self.main_sizer.Add(self.grid_box, 2, flag=wx.ALL|wx.EXPAND, border=10)\n self.panel.SetSizer(self.main_sizer)\n self.main_sizer.Fit(self)\n ## this keeps sizing correct if the user resizes the window manually\n #self.Bind(wx.EVT_SIZE, self.do_fit)\n self.Centre()\n self.Show()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef do_fit(self, event):\n #self.grid.ShowScrollbars(wx.SHOW_SB_NEVER, wx.SHOW_SB_NEVER)\n if event:\n event.Skip()\n self.main_sizer.Fit(self)\n disp_size = wx.GetDisplaySize()\n actual_size = self.GetSize()\n rows = self.grid.GetNumberRows()\n # if there isn't enough room to display new content\n # resize the frame\n if disp_size[1] - 75 < actual_size[1]:\n self.SetSize((actual_size[0], disp_size[1] * .95))\n self.Centre()", "response": "Re - fit the window to the size of the content."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef toggle_ages(self, event):\n if self.grid.changes:\n self.onSave(None)\n\n label = event.GetEventObject().Label\n\n self.er_magic.age_type = label\n self.grid.Destroy()\n\n # normally grid_frame is reset to None when grid is destroyed\n # in this case we are simply replacing the grid, so we need to\n # reset grid_frame\n self.parent.Parent.grid_frame = self\n self.parent.Parent.Hide()\n self.grid = self.grid_builder.make_grid()\n self.grid.InitUI()\n self.panel.Bind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK, self.onLeftClickLabel, self.grid)\n self.grid_builder.add_data_to_grid(self.grid, self.grid_type)\n self.grid_builder.add_age_data_to_grid()\n 
self.drop_down_menu = drop_down_menus.Menus(self.grid_type, self, self.grid, None)\n self.grid.SetColLabelValue(0, 'er_' + label + '_name')\n self.grid.size_grid()\n self.grid_box.Add(self.grid, flag=wx.ALL, border=5)\n self.main_sizer.Fit(self)\n if self.parent.Parent.validation_mode:\n if 'age' in self.parent.Parent.validation_mode:\n self.grid.paint_invalid_cells(self.parent.Parent.warn_dict['age'])\n self.grid.ForceRefresh()\n # the grid show up if it's the same size as the previous grid\n # awkward solution (causes flashing):\n if self.grid.Size[0] < 100:\n if self.grid.GetWindowStyle() != wx.DOUBLE_BORDER:\n self.grid.SetWindowStyle(wx.DOUBLE_BORDER)\n self.main_sizer.Fit(self)\n self.grid.SetWindowStyle(wx.NO_BORDER)\n self.main_sizer.Fit(self)", "response": "Switch the type of grid between site and sample ages"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nremoving a column from the grid", "response": "def remove_col_label(self, event):#, include_pmag=True):\n \"\"\"\n check to see if column is required\n if it is not, delete it from grid\n \"\"\"\n er_possible_headers = self.grid_headers[self.grid_type]['er'][2]\n pmag_possible_headers = self.grid_headers[self.grid_type]['pmag'][2]\n er_actual_headers = self.grid_headers[self.grid_type]['er'][0]\n pmag_actual_headers = self.grid_headers[self.grid_type]['pmag'][0]\n col = event.GetCol()\n label = self.grid.GetColLabelValue(col)\n if '**' in label:\n label = label.strip('**')\n if label in self.grid_headers[self.grid_type]['er'][1]:\n pw.simple_warning(\"That header is required, and cannot be removed\")\n return False\n #elif include_pmag and label in self.grid_headers[self.grid_type]['pmag'][1]:\n # pw.simple_warning(\"That header is required, and cannot be removed\")\n # return False\n else:\n print('That header is not required:', label)\n self.grid.remove_col(col)\n #if label in er_possible_headers:\n try:\n print('removing {} from er_actual_headers'.format(label))\n 
er_actual_headers.remove(label)\n except ValueError:\n pass\n #if label in pmag_possible_headers:\n try:\n pmag_actual_headers.remove(label)\n except ValueError:\n pass\n # causes resize on each column header delete\n # can leave this out if we want.....\n self.main_sizer.Fit(self)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nshows simple dialog that allows user to add a new column name", "response": "def on_add_cols(self, event):\n \"\"\"\n Show simple dialog that allows user to add a new column name\n \"\"\"\n col_labels = self.grid.col_labels\n # do not list headers that are already column labels in the grid\n er_items = [head for head in self.grid_headers[self.grid_type]['er'][2] if head not in col_labels]\n # remove unneeded headers\n er_items = builder.remove_list_headers(er_items)\n pmag_headers = sorted(list(set(self.grid_headers[self.grid_type]['pmag'][2]).union(self.grid_headers[self.grid_type]['pmag'][1])))\n # do not list headers that are already column labels in the grid\n # make sure that pmag_specific columns are marked with '++'\n to_add = [i + '++' for i in self.er_magic.double if i in pmag_headers and i + '++' not in col_labels]\n pmag_headers.extend(to_add)\n pmag_items = [head for head in pmag_headers if head not in er_items and head not in col_labels]\n # remove unneeded headers\n pmag_items = sorted(builder.remove_list_headers(pmag_items))\n dia = pw.HeaderDialog(self, 'columns to add', items1=er_items, items2=pmag_items)\n dia.Centre()\n result = dia.ShowModal()\n new_headers = []\n if result == 5100:\n new_headers = dia.text_list\n if not new_headers:\n return\n errors = self.add_new_grid_headers(new_headers, er_items, pmag_items)\n if errors:\n errors_str = ', '.join(errors)\n pw.simple_warning('You are already using the following headers: {}\\nSo they will not be added'.format(errors_str))\n\n # problem: if widgets above the grid are too wide,\n # the grid does not re-size when adding 
columns\n # awkward solution (causes flashing):\n if self.grid.GetWindowStyle() != wx.DOUBLE_BORDER:\n self.grid.SetWindowStyle(wx.DOUBLE_BORDER)\n self.main_sizer.Fit(self)\n self.grid.SetWindowStyle(wx.NO_BORDER)\n self.Centre()\n self.main_sizer.Fit(self)\n #\n self.grid.changes = set(range(self.grid.GetNumberRows()))\n dia.Destroy()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding in all user - added headers.", "response": "def add_new_grid_headers(self, new_headers, er_items, pmag_items):\n \"\"\"\n Add in all user-added headers.\n If those new headers depend on other headers, add the other headers too.\n \"\"\"\n\n def add_pmag_reqd_headers():\n if self.grid_type == 'result':\n return []\n add_in = []\n col_labels = self.grid.col_labels\n for reqd_head in self.grid_headers[self.grid_type]['pmag'][1]:\n if reqd_head in self.er_magic.double:\n if reqd_head + \"++\" not in col_labels:\n add_in.append(reqd_head + \"++\")\n else:\n if reqd_head not in col_labels:\n add_in.append(reqd_head)\n add_in = builder.remove_list_headers(add_in)\n return add_in\n #\n already_present = []\n for name in new_headers:\n if name:\n if name not in self.grid.col_labels:\n col_number = self.grid.add_col(name)\n # add to appropriate headers list\n if name in er_items:\n self.grid_headers[self.grid_type]['er'][0].append(str(name))\n if name in pmag_items:\n name = name.strip('++')\n if name not in self.grid_headers[self.grid_type]['pmag'][0]:\n self.grid_headers[self.grid_type]['pmag'][0].append(str(name))\n # add any required pmag headers that are not in the grid already\n for header in add_pmag_reqd_headers():\n col_number = self.grid.add_col(header)\n # add drop_down_menus for added reqd columns\n if header in vocab.possible_vocabularies:\n self.drop_down_menu.add_drop_down(col_number, name)\n if header in ['magic_method_codes++']:\n self.drop_down_menu.add_method_drop_down(col_number, header)\n # add drop down menus for user-added column\n 
if name in vocab.possible_vocabularies:\n self.drop_down_menu.add_drop_down(col_number, name)\n if name in ['magic_method_codes', 'magic_method_codes++']:\n self.drop_down_menu.add_method_drop_down(col_number, name)\n else:\n already_present.append(name)\n #pw.simple_warning('You are already using column header: {}'.format(name))\n return already_present"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef on_remove_cols(self, event):\n # open the help message\n self.toggle_help(event=None, mode='open')\n # first unselect any selected cols/cells\n self.remove_cols_mode = True\n self.grid.ClearSelection()\n self.remove_cols_button.SetLabel(\"end delete column mode\")\n # change button to exit the delete columns mode\n self.Unbind(wx.EVT_BUTTON, self.remove_cols_button)\n self.Bind(wx.EVT_BUTTON, self.exit_col_remove_mode, self.remove_cols_button)\n # then disable all other buttons\n for btn in [self.add_cols_button, self.remove_row_button, self.add_many_rows_button]:\n btn.Disable()\n # then make some visual changes\n self.msg_text.SetLabel(\"Remove grid columns: click on a column header to delete it. 
Required headers for {}s may not be deleted.\".format(self.grid_type))\n self.help_msg_boxsizer.Fit(self.help_msg_boxsizer.GetStaticBox())\n self.main_sizer.Fit(self)\n self.grid.SetWindowStyle(wx.DOUBLE_BORDER)\n self.grid_box.GetStaticBox().SetWindowStyle(wx.DOUBLE_BORDER)\n self.grid.Refresh()\n self.main_sizer.Fit(self) # might not need this one\n self.grid.changes = set(range(self.grid.GetNumberRows()))", "response": "enter remove columns mode"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef on_remove_row(self, event, row_num=-1):\n if row_num == -1:\n default = (255, 255, 255, 255)\n # unhighlight any selected rows:\n for row in self.selected_rows:\n attr = wx.grid.GridCellAttr()\n attr.SetBackgroundColour(default)\n self.grid.SetRowAttr(row, attr)\n row_num = self.grid.GetNumberRows() - 1\n self.deleteRowButton.Disable()\n self.selected_rows = {row_num}\n\n function_mapping = {'specimen': self.er_magic.delete_specimen,\n 'sample': self.er_magic.delete_sample,\n 'site': self.er_magic.delete_site,\n 'location': self.er_magic.delete_location,\n 'result': self.er_magic.delete_result}\n\n names = [self.grid.GetCellValue(row, 0) for row in self.selected_rows]\n orphans = []\n for name in names:\n if name:\n try:\n row = self.grid.row_labels.index(name)\n function_mapping[self.grid_type](name)\n orphans.extend([name])\n # if user entered a name, then deletes the row before saving,\n # there will be a ValueError\n except ValueError:\n pass\n self.grid.remove_row(row)\n self.selected_rows = set()\n self.deleteRowButton.Disable()\n self.grid.Refresh()\n self.main_sizer.Fit(self)", "response": "Remove specified grid row."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nexits col remove mode", "response": "def exit_col_remove_mode(self, event):\n \"\"\"\n go back from 'remove cols' mode to normal\n \"\"\"\n # close help messge\n 
self.toggle_help(event=None, mode='close')\n # update mode\n self.remove_cols_mode = False\n # re-enable all buttons\n for btn in [self.add_cols_button, self.remove_row_button, self.add_many_rows_button]:\n btn.Enable()\n\n # unbind grid click for deletion\n self.Unbind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK)\n # undo visual cues\n self.grid.SetWindowStyle(wx.DEFAULT)\n self.grid_box.GetStaticBox().SetWindowStyle(wx.DEFAULT)\n self.msg_text.SetLabel(self.default_msg_text)\n self.help_msg_boxsizer.Fit(self.help_msg_boxsizer.GetStaticBox())\n self.main_sizer.Fit(self)\n # re-bind self.remove_cols_button\n self.Bind(wx.EVT_BUTTON, self.on_remove_cols, self.remove_cols_button)\n self.remove_cols_button.SetLabel(\"Remove columns\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_grid(self, incl_pmag=True):\n if incl_pmag and self.grid_type in self.er_magic.incl_pmag_data:\n incl_pmag = True\n else:\n incl_pmag = False\n er_header = self.grid_headers[self.grid_type]['er'][0]\n if incl_pmag:\n pmag_header = self.grid_headers[self.grid_type]['pmag'][0]\n else:\n pmag_header = []\n # if we need to use '++' to distinguish pmag magic_method_codes from er\n if incl_pmag and self.grid_type in ('specimen', 'sample', 'site'):\n for double_header in self.er_magic.double:\n try:\n pmag_header.remove(double_header)\n pmag_header.append(double_header + '++')\n except ValueError:\n pass\n header = sorted(list(set(er_header).union(pmag_header)))\n\n first_headers = []\n for string in ['citation', '{}_class'.format(self.grid_type),\n '{}_lithology'.format(self.grid_type), '{}_type'.format(self.grid_type),\n 'site_definition']:\n for head in header[:]:\n if string in head:\n header.remove(head)\n first_headers.append(head)\n\n # the way we work it, each specimen is assigned to a sample\n # each sample is assigned to a site\n # specimens can not be added en masse to a site object, for example\n # this data will be written in\n for string in 
['er_specimen_names', 'er_sample_names', 'er_site_names']:\n for head in header[:]:\n if string in head:\n header.remove(head)\n\n # do headers for results type grid\n if self.grid_type == 'result':\n #header.remove('pmag_result_name')\n header[:0] = ['pmag_result_name', 'er_citation_names', 'er_specimen_names',\n 'er_sample_names', 'er_site_names', 'er_location_names']\n\n elif self.grid_type == 'age':\n for header_type in self.er_magic.first_age_headers:\n if header_type in header:\n header.remove(header_type)\n lst = ['er_' + self.grid_type + '_name']\n lst.extend(self.er_magic.first_age_headers)\n header[:0] = lst\n\n # do headers for all other data types without parents\n elif not self.parent_type:\n lst = ['er_' + self.grid_type + '_name']\n lst.extend(first_headers)\n header[:0] = lst\n # do headers for all data types with parents\n else:\n lst = ['er_' + self.grid_type + '_name', 'er_' + self.parent_type + '_name']\n lst.extend(first_headers)\n header[:0] = lst\n\n grid = magic_grid.MagicGrid(parent=self.panel, name=self.grid_type,\n row_labels=[], col_labels=header,\n double=self.er_magic.double)\n grid.do_event_bindings()\n\n self.grid = grid\n return grid", "response": "make a new grid of pmag_magic_data"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsaves the grid data in the data object", "response": "def save_grid_data(self):\n \"\"\"\n Save grid data in the data object\n \"\"\"\n if not self.grid.changes:\n print('-I- No changes to save')\n return\n\n if self.grid_type == 'age':\n age_data_type = self.er_magic.age_type\n self.er_magic.write_ages = True\n\n starred_cols = self.grid.remove_starred_labels()\n\n self.grid.SaveEditControlValue() # locks in value in cell currently edited\n\n if self.grid.changes:\n num_cols = self.grid.GetNumberCols()\n\n for change in self.grid.changes:\n if change == -1:\n continue\n else:\n old_item = self.grid.row_items[change]\n new_item_name = 
self.grid.GetCellValue(change, 0)\n new_er_data = {}\n new_pmag_data = {}\n er_header = self.grid_headers[self.grid_type]['er'][0]\n pmag_header = self.grid_headers[self.grid_type]['pmag'][0]\n start_num = 2 if self.parent_type else 1\n result_data = {}\n\n for col in range(start_num, num_cols):\n col_label = str(self.grid.GetColLabelValue(col))\n value = str(self.grid.GetCellValue(change, col))\n #new_data[col_label] = value\n if value == '\\t':\n value = ''\n\n if '++' in col_label:\n col_name = col_label[:-2]\n new_pmag_data[col_name] = value\n continue\n\n # pmag_* files are new interpretations, so should only have \"This study\"\n # er_* files can have multiple citations\n if col_label == 'er_citation_names':\n new_pmag_data[col_label] = 'This study'\n new_er_data[col_label] = value\n continue\n\n if er_header and (col_label in er_header):\n new_er_data[col_label] = value\n\n if self.grid_type in ('specimen', 'sample', 'site'):\n if pmag_header and (col_label in pmag_header) and (col_label not in self.er_magic.double):\n new_pmag_data[col_label] = value\n else:\n if pmag_header and (col_label in pmag_header):\n new_pmag_data[col_label] = value\n\n if col_label in ('er_specimen_names', 'er_sample_names',\n 'er_site_names', 'er_location_names'):\n result_data[col_label] = value\n\n # if there is an item in the data, get its name\n if isinstance(old_item, str):\n old_item_name = None\n else:\n old_item_name = self.grid.row_items[change].name\n\n if self.parent_type:\n new_parent_name = self.grid.GetCellValue(change, 1)\n else:\n new_parent_name = ''\n\n # create a new item\n if new_item_name and not old_item_name:\n print('-I- make new item named', new_item_name)\n if self.grid_type == 'result':\n specs, samps, sites, locs = self.get_result_children(result_data)\n item = self.er_magic.add_result(new_item_name, specs, samps, sites,\n locs, new_pmag_data)\n else:\n item = self.er_magic.add_methods[self.grid_type](new_item_name, new_parent_name,\n new_er_data, 
new_pmag_data)\n\n # update an existing item\n elif new_item_name and old_item_name:\n print('-I- update existing {} formerly named {} to {}'.format(self.grid_type,\n old_item_name,\n new_item_name))\n if self.grid_type == 'result':\n specs, samps, sites, locs = self.get_result_children(result_data)\n item = self.er_magic.update_methods['result'](old_item_name, new_item_name,\n new_er_data=None,\n new_pmag_data=new_pmag_data,\n spec_names=specs,\n samp_names=samps,\n site_names=sites,\n loc_names=locs,\n replace_data=True)\n elif self.grid_type == 'age':\n item_type = age_data_type\n item = self.er_magic.update_methods['age'](old_item_name, new_er_data,\n item_type, replace_data=True)\n\n else:\n item = self.er_magic.update_methods[self.grid_type](old_item_name, new_item_name,\n new_parent_name, new_er_data,\n new_pmag_data, replace_data=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nnames hysteresis_magic.py DESCRIPTION calculates hystereis parameters and saves them in rmag_hystereis format file makes plots if option selected SYNTAX hysteresis_magic.py [command line options] OPTIONS -h prints help message and quits -usr USER: identify user, default is \"\" -f: specify input file, default is agm_measurements.txt -fh: specify rmag_hysteresis.txt input file -F: specify output file, default is rmag_hysteresis.txt -P: do not make the plots -spc SPEC: specify specimen name to plot and quit -sav save all plots and quit -fmt [png,svg,eps,jpg]", "response": "def main():\n \"\"\"\n NAME\n hysteresis_magic.py\n\n DESCRIPTION\n calculates hystereis parameters and saves them in rmag_hystereis format file\n makes plots if option selected\n\n SYNTAX\n hysteresis_magic.py [command line options]\n\n OPTIONS\n -h prints help message and quits\n -usr USER: identify user, default is \"\"\n -f: specify input file, default is agm_measurements.txt\n -fh: specify rmag_hysteresis.txt input file\n -F: specify output file, default is 
rmag_hysteresis.txt\n -P: do not make the plots\n -spc SPEC: specify specimen name to plot and quit\n -sav save all plots and quit\n -fmt [png,svg,eps,jpg]\n \"\"\"\n args = sys.argv\n PLT = 1\n plots = 0\n user, meas_file, rmag_out, rmag_file = \"\", \"agm_measurements.txt\", \"rmag_hysteresis.txt\", \"\"\n pltspec = \"\"\n dir_path = '.'\n fmt = 'svg'\n verbose = pmagplotlib.verbose\n version_num = pmag.get_version()\n if '-WD' in args:\n ind = args.index('-WD')\n dir_path = args[ind+1]\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n if \"-usr\" in args:\n ind = args.index(\"-usr\")\n user = args[ind+1]\n if '-f' in args:\n ind = args.index(\"-f\")\n meas_file = args[ind+1]\n if '-F' in args:\n ind = args.index(\"-F\")\n rmag_out = args[ind+1]\n if '-fh' in args:\n ind = args.index(\"-fh\")\n rmag_file = args[ind+1]\n rmag_file = dir_path+'/'+rmag_file\n if '-P' in args:\n PLT = 0\n irm_init, imag_init = -1, -1\n if '-sav' in args:\n verbose = 0\n plots = 1\n if '-spc' in args:\n ind = args.index(\"-spc\")\n pltspec = args[ind+1]\n verbose = 0\n plots = 1\n if '-fmt' in args:\n ind = args.index(\"-fmt\")\n fmt = args[ind+1]\n rmag_out = dir_path+'/'+rmag_out\n meas_file = dir_path+'/'+meas_file\n rmag_rem = dir_path+\"/rmag_remanence.txt\"\n #\n #\n meas_data, file_type = pmag.magic_read(meas_file)\n if file_type != 'magic_measurements':\n print(main.__doc__)\n print('bad file')\n sys.exit()\n #\n # initialize some variables\n # define figure numbers for hyst,deltaM,DdeltaM curves\n HystRecs, RemRecs = [], []\n HDD = {}\n if verbose:\n if verbose and PLT:\n print(\"Plots may be on top of each other - use mouse to place \")\n if PLT:\n HDD['hyst'], HDD['deltaM'], HDD['DdeltaM'] = 1, 2, 3\n pmagplotlib.plot_init(HDD['DdeltaM'], 5, 5)\n pmagplotlib.plot_init(HDD['deltaM'], 5, 5)\n pmagplotlib.plot_init(HDD['hyst'], 5, 5)\n imag_init = 0\n irm_init = 0\n else:\n HDD['hyst'], HDD['deltaM'], HDD['DdeltaM'], HDD['irm'], HDD['imag'] = 0, 0, 0, 0, 0\n #\n if 
rmag_file != \"\":\n hyst_data, file_type = pmag.magic_read(rmag_file)\n #\n # get list of unique experiment names and specimen names\n #\n experiment_names, sids = [], []\n for rec in meas_data:\n meths = rec['magic_method_codes'].split(':')\n methods = []\n for meth in meths:\n methods.append(meth.strip())\n if 'LP-HYS' in methods:\n if 'er_synthetic_name' in list(rec.keys()) and rec['er_synthetic_name'] != \"\":\n rec['er_specimen_name'] = rec['er_synthetic_name']\n if rec['magic_experiment_name'] not in experiment_names:\n experiment_names.append(rec['magic_experiment_name'])\n if rec['er_specimen_name'] not in sids:\n sids.append(rec['er_specimen_name'])\n #\n k = 0\n locname = ''\n if pltspec != \"\":\n k = sids.index(pltspec)\n print(sids[k])\n while k < len(sids):\n s = sids[k]\n if verbose and PLT:\n print(s, k+1, 'out of ', len(sids))\n #\n #\n # B,M for hysteresis, Bdcd,Mdcd for irm-dcd data\n B, M, Bdcd, Mdcd = [], [], [], []\n Bimag, Mimag = [], [] # Bimag,Mimag for initial magnetization curves\n first_dcd_rec, first_rec, first_imag_rec = 1, 1, 1\n for rec in meas_data:\n methcodes = rec['magic_method_codes'].split(':')\n meths = []\n for meth in methcodes:\n meths.append(meth.strip())\n if rec['er_specimen_name'] == s and \"LP-HYS\" in meths:\n B.append(float(rec['measurement_lab_field_dc']))\n M.append(float(rec['measurement_magn_moment']))\n if first_rec == 1:\n e = rec['magic_experiment_name']\n HystRec = {}\n first_rec = 0\n if \"er_location_name\" in list(rec.keys()):\n HystRec[\"er_location_name\"] = rec[\"er_location_name\"]\n locname = rec['er_location_name'].replace('/', '-')\n if \"er_sample_name\" in list(rec.keys()):\n HystRec[\"er_sample_name\"] = rec[\"er_sample_name\"]\n if \"er_site_name\" in list(rec.keys()):\n HystRec[\"er_site_name\"] = rec[\"er_site_name\"]\n if \"er_synthetic_name\" in list(rec.keys()) and rec['er_synthetic_name'] != \"\":\n HystRec[\"er_synthetic_name\"] = rec[\"er_synthetic_name\"]\n else:\n 
HystRec[\"er_specimen_name\"] = rec[\"er_specimen_name\"]\n if rec['er_specimen_name'] == s and \"LP-IRM-DCD\" in meths:\n Bdcd.append(float(rec['treatment_dc_field']))\n Mdcd.append(float(rec['measurement_magn_moment']))\n if first_dcd_rec == 1:\n RemRec = {}\n irm_exp = rec['magic_experiment_name']\n first_dcd_rec = 0\n if \"er_location_name\" in list(rec.keys()):\n RemRec[\"er_location_name\"] = rec[\"er_location_name\"]\n if \"er_sample_name\" in list(rec.keys()):\n RemRec[\"er_sample_name\"] = rec[\"er_sample_name\"]\n if \"er_site_name\" in list(rec.keys()):\n RemRec[\"er_site_name\"] = rec[\"er_site_name\"]\n if \"er_synthetic_name\" in list(rec.keys()) and rec['er_synthetic_name'] != \"\":\n RemRec[\"er_synthetic_name\"] = rec[\"er_synthetic_name\"]\n else:\n RemRec[\"er_specimen_name\"] = rec[\"er_specimen_name\"]\n if rec['er_specimen_name'] == s and \"LP-IMAG\" in meths:\n if first_imag_rec == 1:\n imag_exp = rec['magic_experiment_name']\n first_imag_rec = 0\n Bimag.append(float(rec['measurement_lab_field_dc']))\n Mimag.append(float(rec['measurement_magn_moment']))\n #\n # now plot the hysteresis curve\n #\n if len(B) > 0:\n hmeths = []\n for meth in meths:\n hmeths.append(meth)\n hpars = pmagplotlib.plot_hdd(HDD, B, M, e)\n if verbose and PLT:\n pmagplotlib.draw_figs(HDD)\n #\n # get prior interpretations from hyst_data\n if rmag_file != \"\":\n hpars_prior = {}\n for rec in hyst_data:\n if rec['magic_experiment_names'] == e:\n if rec['hysteresis_bcr'] != \"\" and rec['hysteresis_mr_moment'] != \"\":\n hpars_prior['hysteresis_mr_moment'] = rec['hysteresis_mr_moment']\n hpars_prior['hysteresis_ms_moment'] = rec['hysteresis_ms_moment']\n hpars_prior['hysteresis_bc'] = rec['hysteresis_bc']\n hpars_prior['hysteresis_bcr'] = rec['hysteresis_bcr']\n break\n if verbose:\n pmagplotlib.plot_hpars(HDD, hpars_prior, 'ro')\n else:\n if verbose:\n pmagplotlib.plot_hpars(HDD, hpars, 'bs')\n HystRec['hysteresis_mr_moment'] = hpars['hysteresis_mr_moment']\n 
HystRec['hysteresis_ms_moment'] = hpars['hysteresis_ms_moment']\n HystRec['hysteresis_bc'] = hpars['hysteresis_bc']\n HystRec['hysteresis_bcr'] = hpars['hysteresis_bcr']\n HystRec['hysteresis_xhf'] = hpars['hysteresis_xhf']\n HystRec['magic_experiment_names'] = e\n HystRec['magic_software_packages'] = version_num\n if hpars[\"magic_method_codes\"] not in hmeths:\n hmeths.append(hpars[\"magic_method_codes\"])\n methods = \"\"\n for meth in hmeths:\n methods = methods+meth.strip()+\":\"\n HystRec[\"magic_method_codes\"] = methods[:-1]\n HystRec[\"er_citation_names\"] = \"This study\"\n HystRecs.append(HystRec)\n #\n if len(Bdcd) > 0:\n rmeths = []\n for meth in meths:\n rmeths.append(meth)\n if verbose and PLT:\n print('plotting IRM')\n if irm_init == 0:\n HDD['irm'] = 5\n pmagplotlib.plot_init(HDD['irm'], 5, 5)\n irm_init = 1\n rpars = pmagplotlib.plot_irm(HDD['irm'], Bdcd, Mdcd, irm_exp)\n RemRec['remanence_mr_moment'] = rpars['remanence_mr_moment']\n RemRec['remanence_bcr'] = rpars['remanence_bcr']\n RemRec['magic_experiment_names'] = irm_exp\n if rpars[\"magic_method_codes\"] not in meths:\n meths.append(rpars[\"magic_method_codes\"])\n methods = \"\"\n for meth in rmeths:\n methods = methods+meth.strip()+\":\"\n RemRec[\"magic_method_codes\"] = methods[:-1]\n RemRec[\"er_citation_names\"] = \"This study\"\n RemRecs.append(RemRec)\n else:\n if irm_init:\n pmagplotlib.clearFIG(HDD['irm'])\n if len(Bimag) > 0:\n if verbose:\n print('plotting initial magnetization curve')\n# first normalize by Ms\n Mnorm = []\n for m in Mimag:\n Mnorm.append(m / float(hpars['hysteresis_ms_moment']))\n if imag_init == 0:\n HDD['imag'] = 4\n pmagplotlib.plot_init(HDD['imag'], 5, 5)\n imag_init = 1\n pmagplotlib.plot_imag(HDD['imag'], Bimag, Mnorm, imag_exp)\n else:\n if imag_init:\n pmagplotlib.clearFIG(HDD['imag'])\n #\n files = {}\n if plots:\n if pltspec != \"\":\n s = pltspec\n files = {}\n for key in list(HDD.keys()):\n files[key] = locname+'_'+s+'_'+key+'.'+fmt\n 
pmagplotlib.save_plots(HDD, files)\n if pltspec != \"\":\n sys.exit()\n if verbose and PLT:\n pmagplotlib.draw_figs(HDD)\n ans = input(\n \"S[a]ve plots, [s]pecimen name, [q]uit, to continue\\n \")\n if ans == \"a\":\n files = {}\n for key in list(HDD.keys()):\n files[key] = locname+'_'+s+'_'+key+'.'+fmt\n pmagplotlib.save_plots(HDD, files)\n if ans == '':\n k += 1\n if ans == \"p\":\n del HystRecs[-1]\n k -= 1\n if ans == 'q':\n print(\"Good bye\")\n sys.exit()\n if ans == 's':\n keepon = 1\n specimen = input(\n 'Enter desired specimen name (or first part there of): ')\n while keepon == 1:\n try:\n k = sids.index(specimen)\n keepon = 0\n except:\n tmplist = []\n for qq in range(len(sids)):\n if specimen in sids[qq]:\n tmplist.append(sids[qq])\n print(specimen, \" not found, but this was: \")\n print(tmplist)\n specimen = input('Select one or try again\\n ')\n k = sids.index(specimen)\n else:\n k += 1\n if len(B) == 0 and len(Bdcd) == 0:\n if verbose:\n print('skipping this one - no hysteresis data')\n k += 1\n if rmag_out == \"\" and ans == 's' and verbose:\n really = input(\n \" Do you want to overwrite the existing rmag_hystersis.txt file? 
1/[0] \")\n if really == \"\":\n print('i thought not - goodbye')\n sys.exit()\n rmag_out = \"rmag_hysteresis.txt\"\n if len(HystRecs) > 0:\n pmag.magic_write(rmag_out, HystRecs, \"rmag_hysteresis\")\n if verbose:\n print(\"hysteresis parameters saved in \", rmag_out)\n if len(RemRecs) > 0:\n pmag.magic_write(rmag_rem, RemRecs, \"rmag_remanence\")\n if verbose:\n print(\"remanence parameters saved in \", rmag_rem)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main():\n#\n# get command line arguments\n#\n\n args = sys.argv\n if \"-h\" in args:\n print(main.__doc__)\n sys.exit()\n dir_path = pmag.get_named_arg(\"-WD\", \".\")\n user = pmag.get_named_arg(\"-usr\", \"\")\n labfield = pmag.get_named_arg(\"-dc\", '0.5')\n meas_file = pmag.get_named_arg(\"-F\", \"measurements.txt\")\n samp_file = pmag.get_named_arg(\"-fsa\", \"samples.txt\")\n try:\n infile = pmag.get_named_arg(\"-f\", reqd=True)\n except pmag.MissingCommandLineArgException:\n print(main.__doc__)\n print(\"-f is required option\")\n sys.exit()\n specnum = int(pmag.get_named_arg(\"-spc\", 0))\n location = pmag.get_named_arg(\"-loc\", \"\")\n specimen_name = pmag.get_named_arg(\"-spn\", reqd=True)\n syn = 0\n if \"-syn\" in args:\n syn = 1\n samp_con = pmag.get_named_arg(\"-ncn\", \"1\")\n if \"-ncn\" in args:\n ind = args.index(\"-ncn\")\n samp_con = sys.argv[ind+1]\n data_model_num = int(pmag.get_named_arg(\"-DM\", 3))\n convert.mst(infile, specimen_name, dir_path, \"\", meas_file, samp_file,\n user, specnum, samp_con, labfield, location, syn, data_model_num)", "response": "This function is the main entry point for the magic command line tool. 
It converts MsT data to measurements format files; with the -h option it prints a help message and quits."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nnaming parse_measurements.py DESCRIPTION takes a measurements file and creates specimen and instrument files SYNTAX parse_measurements.py [command line options] OPTIONS -h prints help message and quits -f FILE magic_measurements input file, default is \"magic_measurements.txt\" -fsi FILE er_sites input file, default is none -Fsp FILE specimen output er_specimens format file, default is \"er_specimens.txt\" -Fin FILE instrument output magic_instruments format file, default is \"magic_instruments.txt\" OUTPUT writes er_specimens and magic_instruments formatted files", "response": "def main():\n \"\"\"\n NAME\n parse_measurements.py\n\n DESCRIPTION\n takes a measurements file and creates specimen and instrument files\n\n SYNTAX\n parse_measurements.py [command line options]\n\n OPTIONS\n -h prints help message and quits\n -f FILE magic_measurements input file, default is \"magic_measurements.txt\"\n -fsi FILE er_sites input file, default is none\n -Fsp FILE specimen output er_specimens format file, default is \"er_specimens.txt\"\n -Fin FILE instrument output magic_instruments format file, default is \"magic_instruments.txt\"\n OUTPUT\n writes er_specimens and magic_instruments formatted files\n\n \"\"\"\n infile = 'magic_measurements.txt'\n sitefile = \"\"\n specout = \"er_specimens.txt\"\n instout = \"magic_instruments.txt\"\n# get command line stuff\n if \"-h\" in sys.argv:\n print(main.__doc__)\n sys.exit()\n if '-f' in sys.argv:\n ind = sys.argv.index(\"-f\")\n infile = sys.argv[ind + 1]\n if '-fsi' in sys.argv:\n ind = sys.argv.index(\"-fsi\")\n sitefile = sys.argv[ind + 1]\n if '-Fsp' in sys.argv:\n ind = sys.argv.index(\"-Fsp\")\n specout = sys.argv[ind + 1]\n if '-Fin' in sys.argv:\n ind = sys.argv.index(\"-Fin\")\n instout = sys.argv[ind + 1]\n if '-WD' in sys.argv:\n ind = 
sys.argv.index(\"-WD\")\n dir_path = sys.argv[ind + 1]\n infile = dir_path + '/' + infile\n if sitefile != \"\":\n sitefile = dir_path + '/' + sitefile\n specout = dir_path + '/' + specout\n instout = dir_path + '/' + instout\n# now do re-ordering\n pmag.ParseMeasFile(infile, sitefile, instout, specout)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nname utrecht_magic.py DESCRIPTION converts Utrecht magnetometer data files to magic_measurements files SYNTAX utrecht_magic.py [command line options] OPTIONS -h: prints the help message and quits. -f FILE: specify input file, or -F FILE: specify output file, default is magic_measurements.txt -Fsa: specify er_samples format file for appending, default is new er_samples.txt (Not working yet) -WD: output directory for MagIC files -ncn: Site Naming Convention Site to Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2: default] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY -spc: number of characters to remove to generate sample names from specimen names -dmy: European date format -loc LOCNAME : specify location/study name -lat latitude of samples -lon longitude of samples -A: don't average replicate measurements -mcd: [SO-MAG,SO-SUN,SO-SIGHT...] supply how these samples were oriented -dc: B PHI THETA: dc lab field (in microTesla), phi, and theta must be input as a tuple \"(DC,PHI,THETA)\". 
If not input, the user will be asked for values; this is advantageous if there are differing dc fields between steps or specimens. Note: this currently only works with the decimal IZZI naming convention (XXX.0,1,2,3 where XXX is the treatment temperature and 0 is a zero field step, 1 is in field, and 2 is a pTRM check, 3 is a tail check). All other steps are hardcoded dc_field = 0. INPUT Utrecht magnetometer data file", "response": "def main(command_line=True, **kwargs):\n \"\"\"\n NAME\n utrecht_magic.py\n\n DESCRIPTION\n converts Utrecht magnetometer data files to magic_measurements files\n\n SYNTAX\n utrecht_magic.py [command line options]\n\n OPTIONS\n -h: prints the help message and quits.\n -f FILE: specify input file, or\n -F FILE: specify output file, default is magic_measurements.txt\n -Fsa: specify er_samples format file for appending, default is new er_samples.txt (Not working yet)\n -WD: output directory for MagIC files\n -ncn: Site Naming Convention\n Site to Sample naming convention:\n [1] XXXXY: where XXXX is an arbitrary length site designation and Y\n is the single character sample designation. e.g., TG001a is the\n first sample from site TG001. [default]\n [2: default] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)\n [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX\n [5] site name = sample name\n [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED\n [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY\n -spc: number of characters to remove to generate sample names from specimen names\n -dmy: European date format\n -loc LOCNAME : specify location/study name\n -lat latitude of samples\n -lon longitude of samples\n -A: don't average replicate measurements\n -mcd: [SO-MAG,SO-SUN,SO-SIGHT...] 
supply how these samples were oriented\n -dc: B PHI THETA: dc lab field (in microTesla), phi, and theta must be input as a tuple \"(DC,PHI,THETA)\". If not input, the user will be asked for values; this is advantageous if there are differing dc fields between steps or specimens. Note: this currently only works with the decimal IZZI naming convention (XXX.0,1,2,3 where XXX is the treatment temperature and 0 is a zero field step, 1 is in field, and 2 is a pTRM check, 3 is a tail check). All other steps are hardcoded dc_field = 0.\n\n INPUT\n Utrecht magnetometer data file\n \"\"\"\n# initialize some stuff\n sample_lat = 0.0\n sample_lon = 0.0\n noave = 0\n er_location_name = \"unknown\"\n args = sys.argv\n meth_code = \"LP-NO\"\n version_num = pmag.get_version()\n site_num = 1\n\n mag_file = \"\"\n dir_path = '.'\n MagRecs = []\n SpecOuts = []\n SampOuts = []\n SiteOuts = []\n\n meas_file='magic_measurements.txt'\n spec_file='er_specimens.txt'\n samp_file='er_samples.txt'\n site_file='er_sites.txt'\n meth_code = \"\"\n #\n # get command line arguments\n #\n if command_line:\n if '-WD' in sys.argv:\n ind = sys.argv.index('-WD')\n dir_path = sys.argv[ind+1]\n if '-ID' in sys.argv:\n ind = sys.argv.index('-ID')\n input_dir_path = sys.argv[ind+1]\n else:\n input_dir_path = dir_path\n output_dir_path = dir_path\n if \"-h\" in args:\n print(main.__doc__)\n return False\n if '-F' in args:\n ind = args.index(\"-F\")\n meas_file = args[ind+1]\n if '-Fsp' in args:\n ind=args.index(\"-Fsp\")\n spec_file=args[ind+1]\n if '-Fsa' in args:\n ind = args.index(\"-Fsa\")\n samp_file = args[ind+1]\n if '-Fsi' in args: # LORI addition\n ind=args.index(\"-Fsi\")\n site_file=args[ind+1]\n #try:\n # open(samp_file,'r')\n # ErSamps,file_type=pmag.magic_read(samp_file)\n # print 'sample information will be appended to ', samp_file\n #except:\n # print samp_file,' not found: sample information will be stored in new er_samples.txt file'\n # samp_file = output_dir_path+'/er_samples.txt'\n if '-f' in 
args:\n ind = args.index(\"-f\")\n mag_file = args[ind+1]\n if \"-loc\" in args:\n ind = args.index(\"-loc\")\n er_location_name = args[ind+1]\n if \"-lat\" in args:\n ind = args.index(\"-lat\")\n site_lat = args[ind+1]\n if \"-lon\" in args:\n ind = args.index(\"-lon\")\n site_lon = args[ind+1]\n if \"-A\" in args:\n noave = 1\n if \"-mcd\" in args:\n ind = args.index(\"-mcd\")\n meth_code = args[ind+1]\n #samp_con='5'\n if \"-ncn\" in args:\n ind=args.index(\"-ncn\")\n samp_con=sys.argv[ind+1]\n if \"4\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [4] must be in form 4-Z where Z is an integer\")\n return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n else:\n site_num=samp_con.split(\"-\")[1]\n samp_con=\"4\"\n elif \"7\" in samp_con:\n if \"-\" not in samp_con:\n print(\"option [7] must be in form 7-Z where Z is an integer\")\n return False, \"naming convention option [7] must be in form 7-Z where Z is an integer\"\n else:\n site_num=samp_con.split(\"-\")[1]\n samp_con=\"7\"\n else: samp_con=\"1\"\n if '-dc' in args:\n ind=args.index('-dc')\n DC_FIELD,DC_PHI,DC_THETA=list(map(float,args[ind+1].strip('( ) [ ]').split(',')))\n DC_FIELD *= 1e-6\n yn=''\n GET_DC_PARAMS=False\n else: DC_FIELD,DC_PHI,DC_THETA=0,0,-90\n if '-spc' in args:\n ind=args.index(\"-spc\")\n specnum=-int(args[ind+1])\n else: specnum = 0\n if '-dmy' in args:\n ind=args.index(\"-dmy\")\n dmy_flag=True\n else: dmy_flag=False\n\n\n if not command_line:\n dir_path = kwargs.get('dir_path', '.')\n input_dir_path = kwargs.get('input_dir_path', dir_path)\n output_dir_path = dir_path\n meas_file = kwargs.get('meas_file', 'magic_measurements.txt')\n mag_file = kwargs.get('mag_file')\n spec_file = kwargs.get('spec_file', 'er_specimens.txt') # specimen outfile\n samp_file = kwargs.get('samp_file', 'er_samples.txt')\n site_file = kwargs.get('site_file', 'er_sites.txt') # site outfile\n er_location_name = kwargs.get('location_name', '')\n site_lat = 
kwargs.get('site_lat', '')\n        site_lon = kwargs.get('site_lon', '')\n        #noave = kwargs.get('noave', 0) # default (0) means DO average\n        meth_code = kwargs.get('meth_code', \"LP-NO\")\n        specnum = -int(kwargs.get('specnum', 0))\n        samp_con = kwargs.get('samp_con', '2')\n        if \"4\" in samp_con:\n            if \"-\" not in samp_con:\n                print(\"option [4] must be in form 4-Z where Z is an integer\")\n                return False, \"naming convention option [4] must be in form 4-Z where Z is an integer\"\n            else:\n                site_num=samp_con.split(\"-\")[1]\n                samp_con=\"4\"\n        elif \"7\" in samp_con:\n            if \"-\" not in samp_con:\n                print(\"option [7] must be in form 7-Z where Z is an integer\")\n                return False, \"naming convention option [7] must be in form 7-Z where Z is an integer\"\n            else:\n                site_num=samp_con.split(\"-\")[1]\n                samp_con=\"7\"\n        DC_FIELD,DC_PHI,DC_THETA = list(map(float, kwargs.get('dc_params', (0,0,-90))))\n        DC_FIELD *= 1e-6\n        noave = kwargs.get('avg', True)\n        dmy_flag = kwargs.get('dmy_flag', False)\n\n    # format variables\n    if not mag_file:\n        return False, 'You must provide a Utrecht formatted file'\n    mag_file = os.path.join(input_dir_path, mag_file)\n    meas_file = os.path.join(output_dir_path, meas_file)\n    spec_file = os.path.join(output_dir_path, spec_file)\n    samp_file = os.path.join(output_dir_path, samp_file)\n    site_file = os.path.join(output_dir_path, site_file)\n\n    # parse data\n\n    # Open up the Utrecht file and read the header information\n    print('mag_file in utrecht_file', mag_file)\n    AF_or_T = mag_file.split('.')[-1]\n    data = open(mag_file, 'r')\n    line = data.readline()\n    line_items = line.split(',')\n    operator=line_items[0]\n    operator=operator.replace(\"\\\"\",\"\")\n    machine=line_items[1]\n    machine=machine.replace(\"\\\"\",\"\")\n    machine=machine.rstrip('\\n')\n    print(\"operator=\", operator)\n    print(\"machine=\", machine)\n\n    #read in measurement data\n    line = data.readline()\n    while line != \"END\" and line != '\"END\"':\n        ErSpecRec,ErSampRec,ErSiteRec = {},{},{}\n        line_items = 
line.split(',')\n spec_name=line_items[0]\n spec_name=spec_name.replace(\"\\\"\",\"\")\n print(\"spec_name=\", spec_name)\n free_string=line_items[1]\n free_string=free_string.replace(\"\\\"\",\"\")\n print(\"free_string=\", free_string)\n dec=line_items[2]\n print(\"dec=\", dec)\n inc=line_items[3]\n print(\"inc=\", inc)\n volume=float(line_items[4])\n volume=volume * 1e-6 # enter volume in cm^3, convert to m^3\n print(\"volume=\", volume)\n bed_plane=line_items[5]\n print(\"bed_plane=\", bed_plane)\n bed_tilt=line_items[6]\n print(\"bed_tilt=\", bed_tilt)\n\n # Configure et er_ tables\n ErSpecRec['er_specimen_name'] = spec_name\n if specnum==0: sample_name = spec_name\n else: sample_name = spec_name[:specnum]\n ErSampRec['er_sample_name'] = sample_name\n ErSpecRec['er_sample_name'] = sample_name\n er_site_name = pmag.parse_site(sample_name,samp_con,site_num)\n ErSpecRec['er_site_name']=er_site_name\n ErSpecRec['er_location_name']=er_location_name\n ErSampRec['sample_azimuth'] = dec\n ErSampRec['sample_dip'] = str(float(inc)-90)\n ErSampRec['sample_bed_dip_direction'] = bed_plane\n ErSampRec['sample_bed_tilt'] = bed_tilt\n ErSiteRec['site_lat'] = site_lat\n ErSiteRec['site_lon'] = site_lon\n ErSpecRec['magic_method_codes'] = meth_code\n ErSampRec['er_location_name'] = er_location_name\n ErSiteRec['er_location_name'] = er_location_name\n ErSiteRec['er_site_name'] = er_site_name\n ErSampRec['er_site_name'] = er_site_name\n ErSampRec['er_citation_names'] = 'This study'\n SpecOuts.append(ErSpecRec)\n SampOuts.append(ErSampRec)\n SiteOuts.append(ErSiteRec)\n\n #measurement data\n line = data.readline()\n line = line.rstrip(\"\\n\")\n items = line.split(\",\")\n while line != '9999':\n print(line)\n step=items[0]\n step=step.split('.')\n step_value=step[0]\n step_type = \"\"\n if len(step) == 2:\n step_type=step[1]\n if step_type=='5':\n step_value = items[0]\n A=float(items[1])\n B=float(items[2])\n C=float(items[3])\n# convert to MagIC coordinates\n Z=-A\n X=-B\n 
Y=C\n cart = np.array([X, Y, Z]).transpose()\n direction = pmag.cart2dir(cart).transpose()\n measurement_dec = direction[0]\n measurement_inc = direction[1]\n measurement_magn_moment = direction[2] * 1.0e-12 # the data are in pico-Am^2 - this converts to Am^2\n measurement_magn_volume = direction[2] * 1.0e-12 / volume # data volume normalized - converted to A/m\n print(\"measurement_magn_moment=\", measurement_magn_moment)\n print(\"measurement_magn_volume=\", measurement_magn_volume)\n error = items[4]\n date=items[5]\n date=date.strip('\"')\n if date.count(\"-\") > 0:\n date=date.split(\"-\")\n elif date.count(\"/\") > 0:\n date=date.split(\"/\")\n else: print(\"date format seperator cannot be identified\")\n print(date)\n time=items[6]\n time=time.strip('\"')\n time=time.split(\":\")\n print(time)\n if dmy_flag:\n date_time = date[1] + \":\" + date[0] + \":\" + date[2] + \":\" + time[0] + \":\" + time[1] + \":\" + \"0.0\"\n else:\n date_time = date[0] + \":\" + date[1] + \":\" + date[2] + \":\" + time[0] + \":\" + time[1] + \":\" + \"0.0\"\n print(date_time)\n\n MagRec = {}\n MagRec[\"er_analyst_mail_names\"] = operator\n MagRec[\"magic_instrument_codes\"] = \"Utrecht_\" + machine\n MagRec[\"measurement_description\"] = \"free string = \" + free_string\n MagRec[\"measurement_date\"] = date_time \n MagRec[\"er_citation_names\"] = \"This study\"\n MagRec['er_location_name'] = er_location_name\n MagRec['er_site_name'] = er_site_name\n MagRec['er_sample_name'] = sample_name\n MagRec['magic_software_packages'] = version_num\n MagRec[\"measurement_temp\"] = '%8.3e' % (273) # room temp in kelvin\n MagRec[\"measurement_flag\"] = 'g'\n MagRec[\"measurement_standard\"] = 'u'\n MagRec[\"magic_experiment_name\"] = er_location_name + er_site_name + spec_name\n MagRec[\"measurement_number\"] = er_location_name + er_site_name + spec_name + items[0]\n MagRec[\"er_specimen_name\"] = spec_name\n # MagRec[\"treatment_ac_field\"] = '0'\n if AF_or_T.lower() == \"th\":\n 
MagRec[\"treatment_temp\"] = '%8.3e' % (float(step_value)+273.) # temp in kelvin\n MagRec['treatment_ac_field']='0'\n meas_type = \"LP-DIR-T:LT-T-Z\"\n else:\n MagRec['treatment_temp']='273'\n MagRec['treatment_ac_field']='%10.3e'%(float(step_value)*1e-3)\n meas_type = \"LP-DIR-AF:LT-AF-Z\"\n MagRec['treatment_dc_field']='0'\n if step_value == '0':\n meas_type = \"LT-NO\"\n print(\"step_type=\", step_type)\n if step_type == '0' and AF_or_T.lower() == 'th':\n if meas_type == \"\":\n meas_type = \"LT-T-Z\"\n else:\n meas_type = meas_type + \":\" + \"LT-T-Z\"\n elif step_type == '1':\n if meas_type == \"\":\n meas_type = \"LT-T-I\"\n else:\n meas_type = meas_type + \":\" + \"LT-T-I\"\n MagRec['treatment_dc_field']='%1.2e'%DC_FIELD\n elif step_type == '2':\n if meas_type == \"\":\n meas_type = \"LT-PTRM-I\"\n else:\n meas_type = meas_type + \":\" + \"LT-PTRM-I\"\n MagRec['treatment_dc_field']='%1.2e'%DC_FIELD\n elif step_type == '3':\n if meas_type == \"\" :\n meas_type = \"LT-PTRM-Z\"\n else:\n meas_type = meas_type + \":\" + \"LT-PTRM-Z\"\n print(\"meas_type=\", meas_type)\n MagRec['treatment_dc_field_phi'] = '%1.2f'%DC_PHI\n MagRec['treatment_dc_field_theta'] = '%1.2f'%DC_THETA\n MagRec['magic_method_codes'] = meas_type\n MagRec[\"measurement_magn_moment\"] = measurement_magn_moment\n MagRec[\"measurement_magn_volume\"] = measurement_magn_volume\n MagRec[\"measurement_dec\"] = measurement_dec\n MagRec[\"measurement_inc\"] = measurement_inc\n MagRec['measurement_csd'] = error \n# MagRec['measurement_positions'] = '1'\n MagRecs.append(MagRec)\n\n line = data.readline()\n line = line.rstrip(\"\\n\")\n items = line.split(\",\")\n line = data.readline()\n line = line.rstrip(\"\\n\")\n items = line.split(\",\")\n\n# write out the data to MagIC data files\n pmag.magic_write(spec_file, SpecOuts, 'er_specimens')\n pmag.magic_write(samp_file, SampOuts, 'er_samples')\n pmag.magic_write(site_file, SiteOuts, 'er_sites')\n# MagOuts = pmag.measurements_methods(MagRecs, noave)\n# 
pmag.magic_write(meas_file, MagOuts, 'magic_measurements')\n pmag.magic_write(meas_file, MagRecs, 'magic_measurements')\n print(\"results put in \", meas_file)\n print(\"exit!\")\n return True, meas_file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef on_okButton(self, event):\n os.chdir(self.WD)\n options = {}\n HUJI_file = self.bSizer0.return_value()\n if not HUJI_file:\n pw.simple_warning(\"You must select a HUJI format file\")\n return False\n options['magfile'] = HUJI_file\n magicoutfile=os.path.split(HUJI_file)[1]+\".magic\"\n outfile=os.path.join(self.WD, magicoutfile)\n options['meas_file'] = outfile\n user = self.bSizer1.return_value()\n options['user'] = user\n if user:\n user = '-usr ' + user\n experiment_type = self.bSizer2.return_value()\n options['codelist'] = experiment_type\n if not experiment_type:\n pw.simple_warning(\"You must select an experiment type\")\n return False\n cooling_rate = self.cooling_rate.GetValue() or 0\n if cooling_rate:\n experiment_type = experiment_type + \" \" + cooling_rate\n lab_field = self.bSizer3.return_value()\n if not lab_field:\n lab_field = \"0 0 0\"\n lab_field_list = lab_field.split()\n options['labfield'] = lab_field_list[0]\n options['phi'] = lab_field_list[1]\n options['theta'] = lab_field_list[2]\n lab_field = '-dc ' + lab_field\n spc = self.bSizer4.return_value()\n options['specnum'] = spc or 0\n if not spc:\n spc = '-spc 0'\n else:\n spc = '-spc ' + spc\n ncn = self.bSizer5.return_value()\n options['samp_con'] = ncn\n loc_name = self.bSizer6.return_value()\n options['er_location_name'] = loc_name\n if loc_name:\n loc_name = '-loc ' + loc_name\n peak_AF = self.bSizer7.return_value()\n options['peakfield'] = peak_AF\n\n replicate = self.bSizer8.return_value()\n if replicate:\n options['noave'] = 0\n replicate = ''\n else:\n options['noave'] = 1\n replicate = '-A'\n\n old_format= self.bSizer0a.return_value()\n\n if old_format:\n COMMAND = 
\"huji_magic.py -f {} -F {} {} -LP {} {} -ncn {} {} {} {} {}\".format(HUJI_file, outfile, user, experiment_type, loc_name, ncn, lab_field, spc, peak_AF, replicate)\n            program_ran, error_message = huji_magic.main(False, **options)\n            if program_ran:\n                pw.close_window(self, COMMAND, outfile)\n            else:\n                pw.simple_warning(error_message)\n\n        else: # new format\n            COMMAND = \"huji_magic_new.py -f {} -F {} {} -LP {} {} -ncn {} {} {} {}\".format(HUJI_file, outfile, user, experiment_type, loc_name, ncn, lab_field, spc, peak_AF)\n            program_ran, error_message = huji_magic_new.main(False, **options)\n            if program_ran:\n                pw.close_window(self, COMMAND, outfile)\n            else:\n                pw.simple_warning(error_message)", "response": "OK-button handler for the HUJI import dialog: it collects the dialog options, builds the equivalent huji_magic.py (or huji_magic_new.py, for the new format) command line, and runs the converter, warning the user on failure."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef on_okButton(self, event):\n        os.chdir(self.WD)\n        options_dict = {}\n        wd = self.WD\n        options_dict['dir_path'] = wd\n        full_file = self.bSizer0.return_value()\n        if not full_file:\n            pw.simple_warning('You must provide a Utrecht format file')\n            return False\n        input_directory, Utrecht_file = os.path.split(full_file)\n        options_dict['mag_file'] = Utrecht_file\n        options_dict['input_dir_path'] = input_directory\n        if input_directory:\n            ID = \"-ID \" + input_directory\n        else:\n            ID = ''\n        outfile = Utrecht_file + \".magic\"\n        options_dict['meas_file'] = outfile\n        samp_outfile = Utrecht_file[:Utrecht_file.find('.')] + \"_er_samples.txt\"\n        options_dict['samp_file'] = samp_outfile\n        spec_outfile = Utrecht_file[:Utrecht_file.find('.')] + \"_er_specimens.txt\"\n        options_dict['spec_file'] = spec_outfile\n        site_outfile = Utrecht_file[:Utrecht_file.find('.')] + \"_er_sites.txt\"\n        options_dict['site_file'] = site_outfile\n        dc_flag,dc_params = '',''\n        if self.bSizer6.return_value() != '':\n            dc_params = list(map(float,self.bSizer6.return_value().split()))\n 
options_dict['dc_params'] = dc_params\n            dc_flag = '-dc'\n        spec_num = self.bSizer3.return_value()\n        options_dict['specnum'] = spec_num\n        if spec_num:\n            spec_num = \"-spc \" + str(spec_num)\n        else:\n            spec_num = \"-spc 0\" # defaults to 0 if user doesn't choose number\n        loc_name = self.bSizer4.return_value()\n        options_dict['location_name'] = loc_name\n        if loc_name:\n            loc_name = \"-loc \" + loc_name\n        ncn = self.bSizer2.return_value()\n        options_dict['samp_con'] = ncn\n        particulars = self.bSizer1.return_value()\n        options_dict['meth_code'] = particulars\n        if particulars:\n            particulars = \"-mcd \" + particulars\n        euro_date = self.bSizer7.return_value()\n        if euro_date: options_dict['dmy_flag'] = True; dmy_flag='-dmy'\n        else: options_dict['dmy_flag'] = False; dmy_flag=''\n        try: site_lat,site_lon = self.bSizer8.return_value().split()\n        except ValueError: site_lat,site_lon = '',''\n        options_dict['site_lat'] = site_lat\n        options_dict['site_lon'] = site_lon\n        replicate = self.bSizer5.return_value()\n        if replicate:\n            options_dict['avg'] = False\n            replicate = ''\n        else:\n            options_dict['avg'] = True\n            replicate = '-A'\n\n        COMMAND = \"utrecht_magic.py -WD {} -f {} -F {} {} {} {} -ncn {} {} -Fsp {} -Fsi {} -Fsa {} {} {} {} {} -lat {} -lon {}\".format(wd, Utrecht_file, outfile, particulars, spec_num, loc_name, ncn, ID, spec_outfile, site_outfile, samp_outfile, replicate, dc_flag, dc_params, dmy_flag, site_lat, site_lon)\n        # to run as module:\n        program_ran, error_message = utrecht_magic.main(command_line=False, **options_dict)\n        if program_ran:\n            pw.close_window(self, COMMAND, outfile)\n        else:\n            pw.simple_warning(error_message)", "response": "OK-button handler for the Utrecht import dialog: it assembles the converter options from the dialog widgets, records the equivalent utrecht_magic.py command line, and runs utrecht_magic as a module, warning the user on failure."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef on_m_calc_orient(self,event):\n        '''\n        This function does exactly what the 'import orientation' function does in MagIC.py\n 
after some dialog boxes the function calls orientation_magic.py\n '''\n # first see if demag_orient.txt\n self.on_m_save_file(None)\n orient_convention_dia = orient_convention(None)\n orient_convention_dia.Center()\n #orient_convention_dia.ShowModal()\n if orient_convention_dia.ShowModal() == wx.ID_OK:\n ocn_flag = orient_convention_dia.ocn_flag\n dcn_flag = orient_convention_dia.dcn_flag\n gmt_flags = orient_convention_dia.gmt_flags\n orient_convention_dia.Destroy()\n else:\n return\n\n or_con = orient_convention_dia.ocn\n dec_correction_con = int(orient_convention_dia.dcn)\n try:\n hours_from_gmt = float(orient_convention_dia.gmt)\n except:\n hours_from_gmt = 0\n try:\n dec_correction = float(orient_convention_dia.correct_dec)\n except:\n dec_correction = 0\n\n method_code_dia=method_code_dialog(None)\n method_code_dia.Center()\n if method_code_dia.ShowModal() == wx.ID_OK:\n bedding_codes_flags=method_code_dia.bedding_codes_flags\n methodcodes_flags=method_code_dia.methodcodes_flags\n method_code_dia.Destroy()\n else:\n print(\"-I- Canceling calculation\")\n return\n\n method_codes = method_code_dia.methodcodes\n average_bedding = method_code_dia.average_bedding\n bed_correction = method_code_dia.bed_correction\n\n command_args=['orientation_magic.py']\n command_args.append(\"-WD %s\"%self.WD)\n command_args.append(\"-Fsa er_samples_orient.txt\")\n command_args.append(\"-Fsi er_sites_orient.txt \")\n command_args.append(\"-f %s\"%\"demag_orient.txt\")\n command_args.append(ocn_flag)\n command_args.append(dcn_flag)\n command_args.append(gmt_flags)\n command_args.append(bedding_codes_flags)\n command_args.append(methodcodes_flags)\n commandline = \" \".join(command_args)\n\n print(\"-I- executing command: %s\" %commandline)\n os.chdir(self.WD)\n if os.path.exists(os.path.join(self.WD, 'er_samples.txt')) or os.path.exists(os.path.join(self.WD, 'er_sites.txt')):\n append = True\n else:\n append = False\n samp_file = \"er_samples.txt\"\n site_file = \"er_sites.txt\"\n 
success, error_message = ipmag.orientation_magic(or_con, dec_correction_con, dec_correction,\n                                                         bed_correction, hours_from_gmt=hours_from_gmt,\n                                                         method_codes=method_codes, average_bedding=average_bedding,\n                                                         orient_file='demag_orient.txt', samp_file=samp_file,\n                                                         site_file=site_file, input_dir_path=self.WD,\n                                                         output_dir_path=self.WD, append=append, data_model=3)\n\n        if not success:\n            dlg1 = wx.MessageDialog(None,caption=\"Message:\", message=\"-E- ERROR: Error in running orientation_magic.py\\n{}\".format(error_message) ,style=wx.OK|wx.ICON_INFORMATION)\n            dlg1.ShowModal()\n            dlg1.Destroy()\n\n            print(\"-E- ERROR: Error in running orientation_magic.py\")\n            return\n        else:\n            dlg2 = wx.MessageDialog(None,caption=\"Message:\", message=\"-I- Successfully ran orientation_magic\", style=wx.OK|wx.ICON_INFORMATION)\n            dlg2.ShowModal()\n            dlg2.Destroy()\n            self.Parent.Show()\n            self.Parent.Raise()\n            self.Destroy()\n            self.contribution.add_magic_table('samples')\n            return", "response": "Handler for the calculate-orientation menu item: after dialogs for the orientation and method-code conventions, it runs ipmag.orientation_magic on demag_orient.
txt file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main():\n    file=\"\"\n    if '-h' in sys.argv:\n        print(main.__doc__)\n        sys.exit()\n    elif '-i' in sys.argv:\n        file=input(\"Enter eigenparameters data file name: \")\n        \n    elif '-f' in sys.argv:\n        ind=sys.argv.index('-f')\n        file=sys.argv[ind+1]\n    if file!=\"\":\n        f=open(file,'r')\n        data=f.readlines()\n        f.close()\n    else:\n        data=sys.stdin.readlines()\n    ofile=\"\"\n    if '-F' in sys.argv:\n        ind = sys.argv.index('-F')\n        ofile= sys.argv[ind+1]\n        out = open(ofile, 'w') # note: 'w + a' is not a valid mode string; open for writing\n    file_outstring = \"\"\n    for line in data:\n        tau,Vdirs=[],[]\n        rec=line.split()\n        for k in range(0,9,3):\n            tau.append(float(rec[k]))\n            Vdirs.append((float(rec[k+1]),float(rec[k+2])))\n        srot=pmag.doeigs_s(tau,Vdirs) \n        outstring=\"\"\n        for s in srot:outstring+='%10.8f '%(s)\n        if ofile==\"\":\n            print(outstring)\n        else:\n            out.write(outstring+'\\n')", "response": "Reads eigenparameter lines (three eigenvalues, each followed by an eigenvector dec and inc) from a file or stdin, converts each line to the six tensor elements (s format) with pmag.doeigs_s, and prints the result or writes it to the -F output file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main():\n\n\n    args = sys.argv\n    if '-h' in args:\n        print(main.__doc__)\n        sys.exit()\n    dataframe = extractor.command_line_dataframe([['f', False, 'rmag_anisotropy.txt'],\n                                                  ['fb', False, 'magic_measurements.txt'],\n                                                  ['fsa', False, 'er_samples.txt'],\n                                                  ['fa', False, None], ['fsum', False, None],\n                                                  ['fmt', False, 'svg'], ['ds', False, 'mbsf'],\n                                                  ['d', False, '-1 -1'], ['sav', False, False],\n                                                  ['WD', False, '.'
], ['DM', False, 3],\n ['fsp', False, 'specimens.txt']])\n #args = sys.argv\n checked_args = extractor.extract_and_check_args(args, dataframe)\n ani_file, meas_file, samp_file, age_file, sum_file, fmt, depth_scale, depth, save_quietly, dir_path, data_model, spec_file = extractor.get_vars(['f', 'fb', 'fsa', 'fa', 'fsum', 'fmt', 'ds', 'd', 'sav', 'WD', 'DM', 'fsp'], checked_args)\n\n # format min/max depth\n try:\n dmin, dmax = depth.split()\n dmin, dmax = float(dmin), float(dmax)\n except:\n print('you must provide depth in this format: -d dmin dmax')\n print('could not parse \"{}\", defaulting to plotting all depths'.format('-d ' + str(depth)))\n dmin, dmax = -1, -1\n\n if depth_scale:\n if depth_scale not in ['age', 'mbsf', 'mcd']:\n print('-W- Unrecognized option \"{}\" provided for depth scale.\\n Options for depth scale are mbsf (meters below sea floor) or mcd (meters composite depth).\\n Alternatively, if you provide an age file the depth scale will be automatically set to plot by age instead.\\n Using default \"mbsf\"'.format(depth_scale))\n depth_scale = 'sample_core_depth'\n if age_file:\n depth_scale = 'age'\n elif 'mbsf' in depth_scale:\n depth_scale = 'sample_core_depth'\n elif 'mcd' in depth_scale:\n depth_scale = 'sample_composite_depth'\n\n data_model = int(float(data_model))\n # MagIC 2\n if data_model == 2:\n fig, figname = ipmag.ani_depthplot2(ani_file, meas_file, samp_file, age_file, sum_file, fmt, dmin, dmax, depth_scale, dir_path)\n # MagIC 3\n else:\n if meas_file == \"magic_measurements.txt\":\n meas_file = 'measurements.txt'\n if samp_file in ['er_samples.txt', 'pmag_samples.txt']:\n samp_file = \"samples.txt\"\n site_file = 'sites.txt'\n fig, fignames = ipmag.ani_depthplot(spec_file, samp_file, meas_file, site_file, age_file, sum_file, fmt, dmin, dmax, depth_scale, dir_path)\n figname = fignames[0]\n if save_quietly:\n if dir_path == '.':\n dir_path = os.getcwd()\n plt.savefig(figname)\n plt.clf()\n print('Saved file: {}'.format(figname))\n 
return False\n\n    app = wx.App(redirect=False)\n    if not fig:\n        pw.simple_warning('No plot was able to be created with the data you provided.\\nMake sure you have given all the required information and try again')\n        return False\n\n    dpi = fig.get_dpi()\n    pixel_width = dpi * fig.get_figwidth()\n    pixel_height = dpi * fig.get_figheight()\n    figname = os.path.join(dir_path, figname)\n    plot_frame = pmag_menu_dialogs.PlotFrame((int(pixel_width), int(pixel_height + 50)),\n                                             fig, figname, standalone=True)\n\n    app.MainLoop()", "response": "Main entry point for ani_depthplot: parses the command-line options, builds an anisotropy-versus-depth (or age) plot via ipmag, and either saves the figure quietly or displays it in a wx window."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn object address as a string", "response": "def address(obj):\n    \"\"\"Return object address as a string: '<classname @ address>'\"\"\"\n    return \"<%s @ %s>\" % (obj.__class__.__name__,\n                          hex(id(obj)).upper().replace('X', 'x'))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the size of an arbitrary type of item", "response": "def get_size(item):\n    \"\"\"Return size of an item of arbitrary type\"\"\"\n    if isinstance(item, (list, set, tuple, dict)):\n        return len(item)\n    elif isinstance(item, (ndarray, MaskedArray)):\n        return item.shape\n    elif isinstance(item, Image):\n        return item.size\n    elif isinstance(item, (DataFrame, Index, Series)):\n        return item.shape\n    else:\n        return 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the attributes of an object using dir.", "response": "def get_object_attrs(obj):\n    \"\"\"\n    Get the attributes of an object using dir.\n\n    This filters protected attributes\n    \"\"\"\n    attrs = [k for k in dir(obj) if not k.startswith('__')]\n    if not attrs:\n        attrs = dir(obj)\n    return attrs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a string to a datetime.
timedelta value.", "response": "def str_to_timedelta(value):\n    \"\"\"Convert a string to a datetime.timedelta value.\n\n    The following strings are accepted:\n\n        - 'datetime.timedelta(1, 5, 12345)'\n        - 'timedelta(1, 5, 12345)'\n        - '(1, 5, 12345)'\n        - '1, 5, 12345'\n        - '1'\n\n    If there are fewer than three parameters, the missing parameters are\n    assumed to be 0. Variations in the spacing of the parameters are allowed.\n\n    Raises:\n        ValueError for strings not matching the above criterion.\n\n    \"\"\"\n    m = re.match(r'^(?:(?:datetime\\.)?timedelta)?'\n                 r'\\(?'\n                 r'([^)]*)'\n                 r'\\)?$', value)\n    if not m:\n        raise ValueError('Invalid string for datetime.timedelta')\n    args = [int(a.strip()) for a in m.group(1).split(',')]\n    return datetime.timedelta(*args)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_color_name(value):\n    if not is_known_type(value):\n        return CUSTOM_TYPE_COLOR\n    for typ, name in list(COLORS.items()):\n        if isinstance(value, typ):\n            return name\n    else:\n        np_dtype = get_numpy_dtype(value)\n        if np_dtype is None or not hasattr(value, 'size'):\n            return UNSUPPORTED_COLOR\n        elif value.size == 1:\n            return SCALAR_COLOR\n        else:\n            return ARRAY_COLOR", "response": "Return color name depending on value type"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef sort_against(list1, list2, reverse=False):\n    try:\n        return [item for _, item in \n                sorted(zip(list2, list1), key=lambda x: x[0], reverse=reverse)]\n    except:\n        return list1", "response": "Sorts list1 against list2, i.e. returns list1 reordered by the permutation that sorts list2."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndefault display for unknown objects.", "response": "def default_display(value, with_module=True):\n    \"\"\"Default display for unknown objects.\"\"\"\n    object_type = type(value)\n    try:\n        name = object_type.__name__\n        module = 
object_type.__module__\n if with_module:\n return name + ' object of ' + module + ' module'\n else:\n return name\n except:\n type_str = to_text_string(object_type)\n return type_str[1:-1]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef collections_display(value, level):\n is_dict = isinstance(value, dict)\n is_set = isinstance(value, set)\n\n # Get elements\n if is_dict:\n elements = iteritems(value)\n else:\n elements = value\n\n # Truncate values\n truncate = False\n if level == 1 and len(value) > 10:\n elements = islice(elements, 10) if is_dict or is_set else value[:10]\n truncate = True\n elif level == 2 and len(value) > 5:\n elements = islice(elements, 5) if is_dict or is_set else value[:5]\n truncate = True\n\n # Get display of each element\n if level <= 2:\n if is_dict:\n displays = [value_to_display(k, level=level) + ':' +\n value_to_display(v, level=level)\n for (k, v) in list(elements)]\n else:\n displays = [value_to_display(e, level=level)\n for e in elements]\n if truncate:\n displays.append('...')\n display = ', '.join(displays)\n else:\n display = '...'\n\n # Return display\n if is_dict:\n display = '{' + display + '}'\n elif isinstance(value, list):\n display = '[' + display + ']'\n elif isinstance(value, set):\n display = '{' + display + '}'\n else:\n display = '(' + display + ')'\n\n return display", "response": "Display for collections i. e. 
list set tuple and dict."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert a value to a string for display purpose", "response": "def value_to_display(value, minmax=False, level=0):\n \"\"\"Convert value for display purpose\"\"\"\n # To save current Numpy printoptions\n np_printoptions = FakeObject\n\n try:\n numeric_numpy_types = (int64, int32, int16, int8,\n uint64, uint32, uint16, uint8,\n float64, float32, float16,\n complex128, complex64, bool_)\n if ndarray is not FakeObject:\n # Save printoptions\n np_printoptions = get_printoptions()\n # Set max number of elements to show for Numpy arrays\n # in our display\n set_printoptions(threshold=10)\n if isinstance(value, recarray):\n if level == 0:\n fields = value.names\n display = 'Field names: ' + ', '.join(fields)\n else:\n display = 'Recarray'\n elif isinstance(value, MaskedArray):\n display = 'Masked array'\n elif isinstance(value, ndarray):\n if level == 0:\n if minmax:\n try:\n display = 'Min: %r\\nMax: %r' % (value.min(), value.max())\n except (TypeError, ValueError):\n if value.dtype.type in numeric_numpy_types:\n display = str(value)\n else:\n display = default_display(value)\n elif value.dtype.type in numeric_numpy_types:\n display = str(value)\n else:\n display = default_display(value)\n else:\n display = 'Numpy array'\n elif any([type(value) == t for t in [list, set, tuple, dict]]):\n display = collections_display(value, level+1)\n elif isinstance(value, Image):\n if level == 0:\n display = '%s Mode: %s' % (address(value), value.mode)\n else:\n display = 'Image'\n elif isinstance(value, DataFrame):\n if level == 0:\n cols = value.columns\n if PY2 and len(cols) > 0:\n # Get rid of possible BOM utf-8 data present at the\n # beginning of a file, which gets attached to the first\n # column header when headers are present in the first\n # row.\n # Fixes Issue 2514\n try:\n ini_col = to_text_string(cols[0], encoding='utf-8-sig')\n except:\n ini_col = 
to_text_string(cols[0])\n cols = [ini_col] + [to_text_string(c) for c in cols[1:]]\n else:\n cols = [to_text_string(c) for c in cols]\n display = 'Column names: ' + ', '.join(list(cols))\n else:\n display = 'Dataframe'\n elif isinstance(value, NavigableString):\n # Fixes Issue 2448\n display = to_text_string(value)\n if level > 0:\n display = u\"'\" + display + u\"'\"\n elif isinstance(value, Index):\n if level == 0:\n try:\n display = value._summary()\n except AttributeError:\n display = value.summary()\n else:\n display = 'Index'\n elif is_binary_string(value):\n # We don't apply this to classes that extend string types\n # See issue 5636\n if is_type_text_string(value):\n try:\n display = to_text_string(value, 'utf8')\n if level > 0:\n display = u\"'\" + display + u\"'\"\n except:\n display = value\n if level > 0:\n display = b\"'\" + display + b\"'\"\n else:\n display = default_display(value)\n elif is_text_string(value):\n # We don't apply this to classes that extend string types\n # See issue 5636\n if is_type_text_string(value):\n display = value\n if level > 0:\n display = u\"'\" + display + u\"'\"\n else:\n display = default_display(value)\n elif (isinstance(value, datetime.date) or\n isinstance(value, datetime.timedelta)):\n display = str(value)\n elif (isinstance(value, NUMERIC_TYPES) or\n isinstance(value, bool) or\n isinstance(value, numeric_numpy_types)):\n display = repr(value)\n else:\n if level == 0:\n display = default_display(value)\n else:\n display = default_display(value, with_module=False)\n except:\n display = default_display(value)\n\n # Truncate display at 70 chars to avoid freezing Spyder\n # because of large displays\n if len(display) > 70:\n if is_binary_string(display):\n ellipses = b' ...'\n else:\n ellipses = u' ...'\n display = display[:70].rstrip() + ellipses\n\n # Restore Numpy printoptions\n if np_printoptions is not FakeObject:\n set_printoptions(**np_printoptions)\n\n return display"} {"SOURCE": "codesearchnet", "instruction": 
"How would you implement a function in Python 3 that\nconverts value to value", "response": "def display_to_value(value, default_value, ignore_errors=True):\n \"\"\"Convert back to value\"\"\"\n from qtpy.compat import from_qvariant\n value = from_qvariant(value, to_text_string)\n try:\n np_dtype = get_numpy_dtype(default_value)\n if isinstance(default_value, bool):\n # We must test for boolean before NumPy data types\n # because `bool` class derives from `int` class\n try:\n value = bool(float(value))\n except ValueError:\n value = value.lower() == \"true\"\n elif np_dtype is not None:\n if 'complex' in str(type(default_value)):\n value = np_dtype(complex(value))\n else:\n value = np_dtype(value)\n elif is_binary_string(default_value):\n value = to_binary_string(value, 'utf8')\n elif is_text_string(default_value):\n value = to_text_string(value)\n elif isinstance(default_value, complex):\n value = complex(value)\n elif isinstance(default_value, float):\n value = float(value)\n elif isinstance(default_value, int):\n try:\n value = int(value)\n except ValueError:\n value = float(value)\n elif isinstance(default_value, datetime.datetime):\n value = datestr_to_datetime(value)\n elif isinstance(default_value, datetime.date):\n value = datestr_to_datetime(value).date()\n elif isinstance(default_value, datetime.timedelta):\n value = str_to_timedelta(value)\n elif ignore_errors:\n value = try_to_eval(value)\n else:\n value = eval(value)\n except (ValueError, SyntaxError):\n if ignore_errors:\n value = try_to_eval(value)\n else:\n return default_value\n return value"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn type string of an object.", "response": "def get_type_string(item):\n \"\"\"Return type string of an object.\"\"\"\n if isinstance(item, DataFrame):\n return \"DataFrame\"\n if isinstance(item, Index):\n return type(item).__name__\n if isinstance(item, Series):\n return \"Series\"\n found = re.findall(r\"<(?:type|class) 
'(\\S*)'>\",\n to_text_string(type(item)))\n if found:\n return found[0]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning human - readable type string of an item", "response": "def get_human_readable_type(item):\n \"\"\"Return human-readable type string of an item\"\"\"\n if isinstance(item, (ndarray, MaskedArray)):\n return item.dtype.name\n elif isinstance(item, Image):\n return \"Image\"\n else:\n text = get_type_string(item)\n if text is None:\n text = to_text_string('unknown')\n else:\n return text[text.find('.')+1:]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning True if the value is supported False otherwise", "response": "def is_supported(value, check_all=False, filters=None, iterate=False):\n \"\"\"Return True if the value is supported, False otherwise\"\"\"\n assert filters is not None\n if value is None:\n return True\n if not is_editable_type(value):\n return False\n elif not isinstance(value, filters):\n return False\n elif iterate:\n if isinstance(value, (list, tuple, set)):\n valid_count = 0\n for val in value:\n if is_supported(val, filters=filters, iterate=check_all):\n valid_count += 1\n if not check_all:\n break\n return valid_count > 0\n elif isinstance(value, dict):\n for key, val in list(value.items()):\n if not is_supported(key, filters=filters, iterate=check_all) \\\n or not is_supported(val, filters=filters,\n iterate=check_all):\n return False\n if not check_all:\n break\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nkeeping only objects that can be pickled", "response": "def globalsfilter(input_dict, check_all=False, filters=None,\n exclude_private=None, exclude_capitalized=None,\n exclude_uppercase=None, exclude_unsupported=None,\n excluded_names=None):\n \"\"\"Keep only objects that can be pickled\"\"\"\n output_dict = {}\n for key, value in list(input_dict.items()):\n excluded = (exclude_private and 
key.startswith('_')) or \\\n (exclude_capitalized and key[0].isupper()) or \\\n (exclude_uppercase and key.isupper()\n and len(key) > 1 and not key[1:].isdigit()) or \\\n (key in excluded_names) or \\\n (exclude_unsupported and \\\n not is_supported(value, check_all=check_all,\n filters=filters))\n if not excluded:\n output_dict[key] = value\n return output_dict"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_supported_types():\n from datetime import date, timedelta\n editable_types = [int, float, complex, list, set, dict, tuple, date,\n timedelta] + list(TEXT_TYPES) + list(INT_TYPES)\n try:\n from numpy import ndarray, matrix, generic\n editable_types += [ndarray, matrix, generic]\n except:\n pass\n try:\n from pandas import DataFrame, Series, DatetimeIndex\n editable_types += [DataFrame, Series, Index]\n except:\n pass\n picklable_types = editable_types[:]\n try:\n from spyder.pil_patch import Image\n editable_types.append(Image.Image)\n except:\n pass\n return dict(picklable=picklable_types, editable=editable_types)", "response": "Returns a dictionary containing types lists supported by the\n namespace browser."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the data from the remote data file.", "response": "def get_remote_data(data, settings, mode, more_excluded_names=None):\n \"\"\"\n Return globals according to filter described in *settings*:\n * data: data to be filtered (dictionary)\n * settings: variable explorer settings (dictionary)\n * mode (string): 'editable' or 'picklable'\n * more_excluded_names: additional excluded names (list)\n \"\"\"\n supported_types = get_supported_types()\n assert mode in list(supported_types.keys())\n excluded_names = settings['excluded_names']\n if more_excluded_names is not None:\n excluded_names += more_excluded_names\n return globalsfilter(data, check_all=settings['check_all'],\n 
filters=tuple(supported_types[mode]),\n exclude_private=settings['exclude_private'],\n exclude_uppercase=settings['exclude_uppercase'],\n exclude_capitalized=settings['exclude_capitalized'],\n exclude_unsupported=settings['exclude_unsupported'],\n excluded_names=excluded_names)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmake a remote view of dictionary data", "response": "def make_remote_view(data, settings, more_excluded_names=None):\n \"\"\"\n Make a remote view of dictionary *data*\n -> globals explorer\n \"\"\"\n data = get_remote_data(data, settings, mode='editable',\n more_excluded_names=more_excluded_names)\n remote = {}\n for key, value in list(data.items()):\n view = value_to_display(value, minmax=settings['minmax'])\n remote[key] = {'type': get_human_readable_type(value),\n 'size': get_size(value),\n 'color': get_color_name(value),\n 'view': view}\n return remote"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _pdb_frame(self):\n if self._pdb_obj is not None and self._pdb_obj.curframe is not None:\n return self._pdb_obj.curframe", "response": "Return the current Pdb frame if there is any"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the namespace view", "response": "def get_namespace_view(self):\n \"\"\"\n Return the namespace view\n\n This is a dictionary with the following structure\n\n {'a': {'color': '#800000', 'size': 1, 'type': 'str', 'view': '1'}}\n\n Here:\n * 'a' is the variable name\n * 'color' is the color used to show it\n * 'size' and 'type' are self-evident\n * and 'view' is its value or the text shown in the last column\n \"\"\"\n from spyder_kernels.utils.nsview import make_remote_view\n\n settings = self.namespace_view_settings\n if settings:\n ns = self._get_current_namespace()\n view = repr(make_remote_view(ns, settings, EXCLUDED_NAMES))\n return view\n else:\n return 
repr(None)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_var_properties(self):\n from spyder_kernels.utils.nsview import get_remote_data\n\n settings = self.namespace_view_settings\n if settings:\n ns = self._get_current_namespace()\n data = get_remote_data(ns, settings, mode='editable',\n more_excluded_names=EXCLUDED_NAMES)\n\n properties = {}\n for name, value in list(data.items()):\n properties[name] = {\n 'is_list': isinstance(value, (tuple, list)),\n 'is_dict': isinstance(value, dict),\n 'is_set': isinstance(value, set),\n 'len': self._get_len(value),\n 'is_array': self._is_array(value),\n 'is_image': self._is_image(value),\n 'is_data_frame': self._is_data_frame(value),\n 'is_series': self._is_series(value),\n 'array_shape': self._get_array_shape(value),\n 'array_ndim': self._get_array_ndim(value)\n }\n\n return repr(properties)\n else:\n return repr(None)", "response": "Get some properties of the variables in the current namespace"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef send_spyder_msg(self, spyder_msg_type, content=None, data=None):\n import cloudpickle\n\n if content is None:\n content = {}\n content['spyder_msg_type'] = spyder_msg_type\n msg = self.session.send(\n self.iopub_socket,\n 'spyder_msg',\n content=content,\n buffers=[cloudpickle.dumps(data, protocol=PICKLE_PROTOCOL)],\n parent=self._parent_header,\n )\n self.log.debug(msg)", "response": "Send a custom message to the Spyder frontend."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the value of a variable", "response": "def get_value(self, name):\n \"\"\"Get the value of a variable\"\"\"\n ns = self._get_current_namespace()\n value = ns[name]\n try:\n self.send_spyder_msg('data', data=value)\n except:\n # * There is no need to inform users about\n # these errors.\n # * value = None makes Spyder to ignore\n # petitions to 
display a value\n self.send_spyder_msg('data', data=None)\n self._do_publish_pdb_state = False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the value of a variable in the namespace", "response": "def set_value(self, name, value, PY2_frontend):\n \"\"\"Set the value of a variable\"\"\"\n import cloudpickle\n ns = self._get_reference_namespace(name)\n\n # We send serialized values in a list of one element\n # from Spyder to the kernel, to be able to send them\n # at all in Python 2\n svalue = value[0]\n\n # We need to convert svalue to bytes if the frontend\n # runs in Python 2 and the kernel runs in Python 3\n if PY2_frontend and not PY2:\n svalue = bytes(svalue, 'latin-1')\n\n # Deserialize and set value in namespace\n dvalue = cloudpickle.loads(svalue)\n ns[name] = dvalue\n self.log.debug(ns)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef remove_value(self, name):\n ns = self._get_reference_namespace(name)\n ns.pop(name)", "response": "Remove a variable from the reference namespace"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef copy_value(self, orig_name, new_name):\n ns = self._get_reference_namespace(orig_name)\n ns[new_name] = ns[orig_name]", "response": "Copy a variable from the reference namespace to the new name."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_data(self, filename, ext):\n from spyder_kernels.utils.iofuncs import iofunctions\n from spyder_kernels.utils.misc import fix_reference_name\n\n glbs = self._mglobals()\n\n load_func = iofunctions.load_funcs[ext]\n data, error_message = load_func(filename)\n\n if error_message:\n return error_message\n\n for key in list(data.keys()):\n new_key = fix_reference_name(key, blacklist=list(glbs.keys()))\n if new_key != key:\n data[new_key] = data.pop(key)\n\n try:\n 
glbs.update(data)\n except Exception as error:\n return str(error)\n\n return None", "response": "Load data from filename and update the local cache"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsaving the current namespace into filename", "response": "def save_namespace(self, filename):\n \"\"\"Save namespace into filename\"\"\"\n from spyder_kernels.utils.nsview import get_remote_data\n from spyder_kernels.utils.iofuncs import iofunctions\n\n ns = self._get_current_namespace()\n settings = self.namespace_view_settings\n data = get_remote_data(ns, settings, mode='picklable',\n more_excluded_names=EXCLUDED_NAMES).copy()\n return iofunctions.save(data, filename)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npublishing Variable Explorer state and Pdb step through send_spyder_msg.", "response": "def publish_pdb_state(self):\n \"\"\"\n Publish Variable Explorer state and Pdb step through\n send_spyder_msg.\n \"\"\"\n if self._pdb_obj and self._do_publish_pdb_state:\n state = dict(namespace_view = self.get_namespace_view(),\n var_properties = self.get_var_properties(),\n step = self._pdb_step)\n self.send_spyder_msg('pdb_state', content={'pdb_state': state})\n self._do_publish_pdb_state = True"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef is_defined(self, obj, force_import=False):\n from spyder_kernels.utils.dochelpers import isdefined\n\n ns = self._get_current_namespace(with_magics=True)\n return isdefined(obj, force_import=force_import, namespace=ns)", "response": "Return True if object is defined in current namespace"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_doc(self, objtxt):\n try:\n import matplotlib\n matplotlib.rcParams['docstring.hardcopy'] = True\n except:\n pass\n from spyder_kernels.utils.dochelpers import getdoc\n\n obj, valid = self._eval(objtxt)\n if 
valid:\n return getdoc(obj)", "response": "Get object documentation dictionary"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets object source from object text", "response": "def get_source(self, objtxt):\n \"\"\"Get object source\"\"\"\n from spyder_kernels.utils.dochelpers import getsource\n\n obj, valid = self._eval(objtxt)\n if valid:\n return getsource(obj)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the current namespace", "response": "def _get_current_namespace(self, with_magics=False):\n \"\"\"\n Return current namespace\n\n This is globals() if not debugging, or a dictionary containing\n both locals() and globals() for current frame when debugging\n \"\"\"\n ns = {}\n glbs = self._mglobals()\n\n if self._pdb_frame is None:\n ns.update(glbs)\n else:\n ns.update(glbs)\n ns.update(self._pdb_locals)\n\n # Add magics to ns so we can show help about them on the Help\n # plugin\n if with_magics:\n line_magics = self.shell.magics_manager.magics['line']\n cell_magics = self.shell.magics_manager.magics['cell']\n ns.update(line_magics)\n ns.update(cell_magics)\n\n return ns"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the namespace where the reference name is defined.", "response": "def _get_reference_namespace(self, name):\n \"\"\"\n Return namespace where reference name is defined\n\n It returns the globals() if reference has not yet been defined\n \"\"\"\n glbs = self._mglobals()\n if self._pdb_frame is None:\n return glbs\n else:\n lcls = self._pdb_locals\n if name in lcls:\n return lcls\n else:\n return glbs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn current globals -- handles Pdb frames", "response": "def _mglobals(self):\n \"\"\"Return current globals -- handles Pdb frames\"\"\"\n if self._pdb_frame is not None:\n return self._pdb_frame.f_globals\n else:\n return 
self.shell.user_ns"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _is_image(self, var):\n try:\n from PIL import Image\n return isinstance(var, Image.Image)\n except:\n return False", "response": "Return True if variable is a PIL. Image image"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets all Spyder breakpoints in an active pdb session", "response": "def _set_spyder_breakpoints(self, breakpoints):\n \"\"\"Set all Spyder breakpoints in an active pdb session\"\"\"\n if not self._pdb_obj:\n return\n\n # Breakpoints come serialized from Spyder. We send them\n # in a list of one element to be able to send them at all\n # in Python 2\n serialized_breakpoints = breakpoints[0]\n breakpoints = pickle.loads(serialized_breakpoints)\n\n self._pdb_obj.set_spyder_breakpoints(breakpoints)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _eval(self, text):\n from spyder_kernels.py3compat import is_text_string\n\n assert is_text_string(text)\n ns = self._get_current_namespace(with_magics=True)\n try:\n return eval(text, ns), True\n except:\n return None, False", "response": "Evaluate text and return obj valid"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _set_mpl_backend(self, backend, pylab=False):\n import traceback\n from IPython.core.getipython import get_ipython\n\n generic_error = (\n \"\\n\" + \"=\"*73 + \"\\n\"\n \"NOTE: The following error appeared when setting \"\n \"your Matplotlib backend!!\\n\" + \"=\"*73 + \"\\n\\n\"\n \"{0}\"\n )\n\n magic = 'pylab' if pylab else 'matplotlib'\n\n error = None\n try:\n get_ipython().run_line_magic(magic, backend)\n except RuntimeError as err:\n # This catches errors generated by ipykernel when\n # trying to set a backend. 
See issue 5541\n if \"GUI eventloops\" in str(err):\n import matplotlib\n previous_backend = matplotlib.get_backend()\n if not backend in previous_backend.lower():\n # Only inform about an error if the user selected backend\n # and the one set by Matplotlib are different. Else this\n # message is very confusing.\n error = (\n \"\\n\"\n \"NOTE: Spyder *can't* set your selected Matplotlib \"\n \"backend because there is a previous backend already \"\n \"in use.\\n\\n\"\n \"Your backend will be {0}\".format(previous_backend)\n )\n del matplotlib\n # This covers other RuntimeError's\n else:\n error = generic_error.format(traceback.format_exc())\n except Exception:\n error = generic_error.format(traceback.format_exc())\n\n self._mpl_backend_error = error", "response": "Set a Matplotlib backend for Matplotlib."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nimporting our customizations into the kernel.", "response": "def import_spydercustomize():\n \"\"\"Import our customizations into the kernel.\"\"\"\n here = osp.dirname(__file__)\n parent = osp.dirname(here)\n customize_dir = osp.join(parent, 'customize')\n\n # Remove current directory from sys.path to prevent kernel\n # crashes when people name Python files or modules with\n # the same name as standard library modules.\n # See spyder-ide/spyder#8007\n while '' in sys.path:\n sys.path.remove('')\n\n # Import our customizations\n site.addsitedir(customize_dir)\n import spydercustomize\n\n # Remove our customize path from sys.path\n try:\n sys.path.remove(customize_dir)\n except ValueError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef kernel_config():\n import ipykernel\n from IPython.core.application import get_ipython_dir\n from traitlets.config.loader import Config, load_pyconfig_files\n\n # ---- IPython config ----\n try:\n profile_path = osp.join(get_ipython_dir(), 'profile_default')\n cfg = 
load_pyconfig_files(['ipython_config.py',\n 'ipython_kernel_config.py'],\n profile_path)\n except:\n cfg = Config()\n\n # ---- Spyder config ----\n spy_cfg = Config()\n\n # Enable/disable certain features for testing\n testing = os.environ.get('SPY_TESTING') == 'True'\n if testing:\n # Don't load nor save history in our IPython consoles.\n spy_cfg.HistoryAccessor.enabled = False\n\n # Until we implement Issue 1052\n spy_cfg.InteractiveShell.xmode = 'Plain'\n\n # Jedi completer. It's only available in Python 3\n jedi_o = os.environ.get('SPY_JEDI_O') == 'True'\n if not PY2:\n spy_cfg.IPCompleter.use_jedi = jedi_o\n\n # Run lines of code at startup\n run_lines_o = os.environ.get('SPY_RUN_LINES_O')\n if run_lines_o is not None:\n spy_cfg.IPKernelApp.exec_lines = [x.strip() for x in run_lines_o.split(';')]\n else:\n spy_cfg.IPKernelApp.exec_lines = []\n\n # Clean terminal arguments input\n clear_argv = \"import sys;sys.argv = [''];del sys\"\n spy_cfg.IPKernelApp.exec_lines.append(clear_argv)\n\n # Load %autoreload magic\n spy_cfg.IPKernelApp.exec_lines.append(\n \"get_ipython().kernel._load_autoreload_magic()\")\n\n # Load wurlitzer extension\n spy_cfg.IPKernelApp.exec_lines.append(\n \"get_ipython().kernel._load_wurlitzer()\")\n\n # Default inline backend configuration\n # This is useful to have when people don't\n # use our config system to configure the\n # inline backend but want to use\n # '%matplotlib inline' at runtime\n if LooseVersion(ipykernel.__version__) < LooseVersion('4.5'):\n dpi_option = 'savefig.dpi'\n else:\n dpi_option = 'figure.dpi'\n\n spy_cfg.InlineBackend.rc = {'figure.figsize': (6.0, 4.0),\n dpi_option: 72,\n 'font.size': 10,\n 'figure.subplot.bottom': .125,\n 'figure.facecolor': 'white',\n 'figure.edgecolor': 'white'}\n\n # Pylab configuration\n mpl_backend = None\n if is_module_installed('matplotlib'):\n # Set Matplotlib backend with Spyder options\n pylab_o = os.environ.get('SPY_PYLAB_O')\n backend_o = os.environ.get('SPY_BACKEND_O')\n if 
pylab_o == 'True' and backend_o is not None:\n # Select the automatic backend\n if backend_o == '1':\n if is_module_installed('PyQt5'):\n auto_backend = 'qt5'\n elif is_module_installed('PyQt4'):\n auto_backend = 'qt4'\n elif is_module_installed('_tkinter'):\n auto_backend = 'tk'\n else:\n auto_backend = 'inline'\n else:\n auto_backend = ''\n\n # Mapping of Spyder options to backends\n backends = {'0': 'inline',\n '1': auto_backend,\n '2': 'qt5',\n '3': 'qt4',\n '4': 'osx',\n '5': 'gtk3',\n '6': 'gtk',\n '7': 'wx',\n '8': 'tk'}\n\n # Select backend\n mpl_backend = backends[backend_o]\n\n # Inline backend configuration\n if mpl_backend == 'inline':\n # Figure format\n format_o = os.environ.get('SPY_FORMAT_O')\n formats = {'0': 'png',\n '1': 'svg'}\n if format_o is not None:\n spy_cfg.InlineBackend.figure_format = formats[format_o]\n\n # Resolution\n resolution_o = os.environ.get('SPY_RESOLUTION_O')\n if resolution_o is not None:\n spy_cfg.InlineBackend.rc[dpi_option] = float(resolution_o)\n\n # Figure size\n width_o = float(os.environ.get('SPY_WIDTH_O'))\n height_o = float(os.environ.get('SPY_HEIGHT_O'))\n if width_o is not None and height_o is not None:\n spy_cfg.InlineBackend.rc['figure.figsize'] = (width_o,\n height_o)\n\n # Print figure kwargs\n bbox_inches_o = os.environ.get('SPY_BBOX_INCHES_O')\n bbox_inches = 'tight' if bbox_inches_o == 'True' else None\n spy_cfg.InlineBackend.print_figure_kwargs.update(\n {'bbox_inches': bbox_inches})\n else:\n # Set Matplotlib backend to inline for external kernels.\n # Fixes issue 108\n mpl_backend = 'inline'\n\n # Automatically load Pylab and Numpy, or only set Matplotlib\n # backend\n autoload_pylab_o = os.environ.get('SPY_AUTOLOAD_PYLAB_O') == 'True'\n command = \"get_ipython().kernel._set_mpl_backend('{0}', {1})\"\n spy_cfg.IPKernelApp.exec_lines.append(\n command.format(mpl_backend, autoload_pylab_o))\n\n # Enable Cython magic\n run_cython = os.environ.get('SPY_RUN_CYTHON') == 'True'\n if run_cython and 
is_module_installed('Cython'):\n spy_cfg.IPKernelApp.exec_lines.append('%reload_ext Cython')\n\n # Run a file at startup\n use_file_o = os.environ.get('SPY_USE_FILE_O')\n run_file_o = os.environ.get('SPY_RUN_FILE_O')\n if use_file_o == 'True' and run_file_o is not None:\n spy_cfg.IPKernelApp.file_to_run = run_file_o\n\n # Autocall\n autocall_o = os.environ.get('SPY_AUTOCALL_O')\n if autocall_o is not None:\n spy_cfg.ZMQInteractiveShell.autocall = int(autocall_o)\n\n # To handle the banner by ourselves in IPython 3+\n spy_cfg.ZMQInteractiveShell.banner1 = ''\n\n # Greedy completer\n greedy_o = os.environ.get('SPY_GREEDY_O') == 'True'\n spy_cfg.IPCompleter.greedy = greedy_o\n\n # Sympy loading\n sympy_o = os.environ.get('SPY_SYMPY_O') == 'True'\n if sympy_o and is_module_installed('sympy'):\n lines = sympy_config(mpl_backend)\n spy_cfg.IPKernelApp.exec_lines.append(lines)\n\n # Merge IPython and Spyder configs. Spyder prefs will have prevalence\n # over IPython ones\n cfg._merge(spy_cfg)\n return cfg", "response": "Create a config object with IPython kernel options."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fix_reference_name(name, blacklist=None):\n name = \"\".join(re.split(r'[^0-9a-zA-Z_]', name))\n while name and not re.match(r'([a-zA-Z]+[0-9a-zA-Z_]*)$', name):\n if not re.match(r'[a-zA-Z]', name[0]):\n name = name[1:]\n continue\n name = str(name)\n if not name:\n name = \"data\"\n if blacklist is not None and name in blacklist:\n get_new_name = lambda index: name+('%03d' % index)\n index = 0\n while get_new_name(index) in blacklist:\n index += 1\n name = get_new_name(index)\n return name", "response": "Return a syntax - valid Python reference name from an arbitrary name"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef user_return(self, frame, return_value):\n # This is useful when debugging in an active interpreter 
(otherwise,\n # the debugger will stop before reaching the target file)\n if self._wait_for_mainpyfile:\n if (self.mainpyfile != self.canonic(frame.f_code.co_filename)\n or frame.f_lineno<= 0):\n return\n self._wait_for_mainpyfile = 0\n self._old_Pdb_user_return(frame, return_value)", "response": "This function is called when a user return trap is set here."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef set_post_mortem():\n def ipython_post_mortem_debug(shell, etype, evalue, tb,\n tb_offset=None):\n post_mortem_excepthook(etype, evalue, tb)\n ipython_shell = get_ipython()\n ipython_shell.set_custom_exc((Exception,), ipython_post_mortem_debug)", "response": "Enable the post mortem debugging excepthook."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef runfile(filename, args=None, wdir=None, namespace=None, post_mortem=False):\n try:\n filename = filename.decode('utf-8')\n except (UnicodeError, TypeError, AttributeError):\n # UnicodeError, TypeError --> eventually raised in Python 2\n # AttributeError --> systematically raised in Python 3\n pass\n if __umr__.enabled:\n __umr__.run()\n if args is not None and not isinstance(args, basestring):\n raise TypeError(\"expected a character buffer object\")\n if namespace is None:\n namespace = _get_globals()\n namespace['__file__'] = filename\n sys.argv = [filename]\n if args is not None:\n for arg in shlex.split(args):\n sys.argv.append(arg)\n if wdir is not None:\n try:\n wdir = wdir.decode('utf-8')\n except (UnicodeError, TypeError, AttributeError):\n # UnicodeError, TypeError --> eventually raised in Python 2\n # AttributeError --> systematically raised in Python 3\n pass\n os.chdir(wdir)\n if post_mortem:\n set_post_mortem()\n if __umr__.has_cython:\n # Cython files\n with io.open(filename, encoding='utf-8') as f:\n ipython_shell = get_ipython()\n ipython_shell.run_cell_magic('cython', '', 
f.read())\n else:\n execfile(filename, namespace)\n\n clear_post_mortem()\n sys.argv = ['']\n\n # Avoid error when running `%reset -f` programmatically\n # See issue spyder-ide/spyder-kernels#91\n try:\n namespace.pop('__file__')\n except KeyError:\n pass", "response": "Run a Python file in the given namespace."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef runcell(cellname, filename):\n try:\n filename = filename.decode('utf-8')\n except (UnicodeError, TypeError, AttributeError):\n # UnicodeError, TypeError --> eventually raised in Python 2\n # AttributeError --> systematically raised in Python 3\n pass\n ipython_shell = get_ipython()\n namespace = _get_globals()\n namespace['__file__'] = filename\n try:\n cell_code = ipython_shell.cell_code\n except AttributeError:\n _print(\"--Run Cell Error--\\n\"\n \"Please use only through Spyder's Editor; \"\n \"shouldn't be called manually from the console\")\n return\n\n # Trigger `post_execute` to exit the additional pre-execution.\n # See Spyder PR #7310.\n ipython_shell.events.trigger('post_execute')\n\n ipython_shell.run_cell(cell_code)\n namespace.pop('__file__')\n del ipython_shell.cell_code", "response": "Runs a code cell from an editor as a file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndebugging filename args: command line arguments (string) wdir: working directory post_mortem: boolean, included for compatibility with runfile", "response": "def debugfile(filename, args=None, wdir=None, post_mortem=False):\n \"\"\"\n Debug filename\n args: command line arguments (string)\n wdir: working directory\n post_mortem: boolean, included for compatibility with runfile\n \"\"\"\n debugger = pdb.Pdb()\n filename = debugger.canonic(filename)\n debugger._wait_for_mainpyfile = 1\n debugger.mainpyfile = filename\n debugger._user_requested_quit = 0\n if os.name == 'nt':\n filename = filename.replace('\\\\', '/')\n 
debugger.run(\"runfile(%r, args=%r, wdir=%r)\" % (filename, args, wdir))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a list of paths to be skipped from module AttributeNames reloading.", "response": "def create_pathlist(self, initial_pathlist):\n \"\"\"\n Add to pathlist Python library paths to be skipped from module\n reloading.\n \"\"\"\n # Get standard installation paths\n try:\n paths = sysconfig.get_paths()\n standard_paths = [paths['stdlib'],\n paths['purelib'],\n paths['scripts'],\n paths['data']]\n except Exception:\n standard_paths = []\n\n # Get user installation path\n # See Spyder issue 8776\n try:\n import site\n if getattr(site, 'getusersitepackages', False):\n # Virtualenvs don't have this function but\n # conda envs do\n user_path = [site.getusersitepackages()]\n elif getattr(site, 'USER_SITE', False):\n # However, it seems virtualenvs have this\n # constant\n user_path = [site.USER_SITE]\n else:\n user_path = []\n except Exception:\n user_path = []\n\n return initial_pathlist + standard_paths + user_path"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndecides if a module is reloadable or not.", "response": "def is_module_reloadable(self, module, modname):\n \"\"\"Decide if a module is reloadable or not.\"\"\"\n if self.has_cython:\n # Don't return cached inline compiled .PYX files\n return False\n else:\n if (self.is_module_in_pathlist(module) or\n self.is_module_in_namelist(modname)):\n return False\n else:\n return True"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndeciding if a module can be reloaded or not according to its pathlist.", "response": "def is_module_in_pathlist(self, module):\n \"\"\"Decide if a module can be reloaded or not according to its path.\"\"\"\n modpath = getattr(module, '__file__', None)\n\n # Skip module according to different criteria\n if modpath is None:\n # *module* is a C module that is 
statically linked into the\n interpreter. There is no way to know its path, so we\n choose to ignore it.\n return True\n elif any([p in modpath for p in self.pathlist]):\n # We don't want to reload modules that belong to the\n # standard library or installed to site-packages,\n # just modules created by the user.\n return True\n elif not os.name == 'nt':\n # Module paths containing the strings below can be inherited\n # from the default Linux installation, Homebrew or the user\n # site-packages in a virtualenv.\n patterns = [r'^/usr/lib.*',\n r'^/usr/local/lib.*',\n r'^/usr/.*/dist-packages/.*',\n r'^/home/.*/.local/lib.*',\n r'^/Library/.*',\n r'^/Users/.*/Library/.*',\n r'^/Users/.*/.local/.*',\n ]\n\n if [p for p in patterns if re.search(p, modpath)]:\n return True\n else:\n return False\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nactivating Cython support. We need to run this here because if the support is active, we don't want to run the UMR at all.", "response": "def activate_cython(self):\n \"\"\"\n Activate Cython support.\n\n We need to run this here because if the support is\n active, we don't want to run the UMR at all.\n \"\"\"\n run_cython = os.environ.get(\"SPY_RUN_CYTHON\") == \"True\"\n\n if run_cython:\n try:\n __import__('Cython')\n self.has_cython = True\n except Exception:\n pass\n\n if self.has_cython:\n # Import pyximport to enable Cython files support for\n # import statement\n import pyximport\n pyx_setup_args = {}\n\n # Add Numpy include dir to pyximport/distutils\n try:\n import numpy\n pyx_setup_args['include_dirs'] = numpy.get_include()\n except Exception:\n pass\n\n # Setup pyximport and enable Cython files reload\n pyximport.install(setup_args=pyx_setup_args,\n reload_support=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeletes user modules which are not considered as system modules and which are not in previous_modules and if they 
are reloadable then reload them.", "response": "def run(self):\n \"\"\"\n Delete user modules to force Python to deeply reload them\n\n Do not del modules which are considered as system modules, i.e.\n modules installed in subdirectories of Python interpreter's binary\n Do not del C modules\n \"\"\"\n self.modnames_to_reload = []\n for modname, module in list(sys.modules.items()):\n if modname not in self.previous_modules:\n # Decide if a module can be reloaded or not\n if self.is_module_reloadable(module, modname):\n self.modnames_to_reload.append(modname)\n del sys.modules[modname]\n else:\n continue\n\n # Report reloaded modules\n if self.verbose and self.modnames_to_reload:\n modnames = self.modnames_to_reload\n _print(\"\\x1b[4;33m%s\\x1b[24m%s\\x1b[0m\"\\\n % (\"Reloaded modules\", \": \"+\", \".join(modnames)))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nextract a value from a Matlab file and return it as a list of dicts.", "response": "def get_matlab_value(val):\n \"\"\"\n Extract a value from a Matlab file\n\n From the oct2py project, see\n https://pythonhosted.org/oct2py/conversions.html\n \"\"\"\n import numpy as np\n\n # Extract each item of a list.\n if isinstance(val, list):\n return [get_matlab_value(v) for v in val]\n\n # Ignore leaf objects.\n if not isinstance(val, np.ndarray):\n return val\n\n # Convert user defined classes.\n if hasattr(val, 'classname'):\n out = dict()\n for name in val.dtype.names:\n out[name] = get_matlab_value(val[name].squeeze().tolist())\n cls = type(val.classname, (object,), out)\n return cls()\n\n # Extract struct data.\n elif val.dtype.names:\n out = MatlabStruct()\n for name in val.dtype.names:\n out[name] = get_matlab_value(val[name].squeeze().tolist())\n val = out\n\n # Extract cells.\n elif val.dtype.kind == 'O':\n val = val.squeeze().tolist()\n if not isinstance(val, list):\n val = [val]\n val = get_matlab_value(val)\n\n # Compress singleton values.\n elif val.size == 1:\n val 
= val.item()\n\n # Compress empty values.\n elif val.size == 0:\n if val.dtype.kind in 'US':\n val = ''\n else:\n val = []\n\n return val"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nload a pickle file as a dictionary", "response": "def load_pickle(filename):\n \"\"\"Load a pickle file as a dictionary\"\"\"\n try:\n if pd:\n return pd.read_pickle(filename), None\n else:\n with open(filename, 'rb') as fid:\n data = pickle.load(fid)\n return data, None\n except Exception as err:\n return None, str(err)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nloading a json file as a dictionary", "response": "def load_json(filename):\n \"\"\"Load a json file as a dictionary\"\"\"\n try:\n if PY2:\n args = 'rb'\n else:\n args = 'r'\n with open(filename, args) as fid:\n data = json.load(fid)\n return data, None\n except Exception as err:\n return None, str(err)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef save_dictionary(data, filename):\n filename = osp.abspath(filename)\n old_cwd = getcwd()\n os.chdir(osp.dirname(filename))\n error_message = None\n skipped_keys = []\n data_copy = {}\n\n try:\n # Copy dictionary before modifying it to fix #6689\n for obj_name, obj_value in data.items():\n # Skip modules, since they can't be pickled, users virtually never\n # would want them to be and so they don't show up in the skip list.\n # Skip callables, since they are only pickled by reference and thus\n # must already be present in the user's environment anyway.\n if not (callable(obj_value) or isinstance(obj_value,\n types.ModuleType)):\n # If an object cannot be deepcopied, then it cannot be pickled.\n # Ergo, we skip it and list it later.\n try:\n data_copy[obj_name] = copy.deepcopy(obj_value)\n except Exception:\n skipped_keys.append(obj_name)\n data = data_copy\n if not data:\n raise RuntimeError('No supported objects to save')\n\n saved_arrays = {}\n if 
load_array is not None:\n # Saving numpy arrays with np.save\n arr_fname = osp.splitext(filename)[0]\n for name in list(data.keys()):\n try:\n if isinstance(data[name],\n np.ndarray) and data[name].size > 0:\n # Save arrays at data root\n fname = __save_array(data[name], arr_fname,\n len(saved_arrays))\n saved_arrays[(name, None)] = osp.basename(fname)\n data.pop(name)\n elif isinstance(data[name], (list, dict)):\n # Save arrays nested in lists or dictionaries\n if isinstance(data[name], list):\n iterator = enumerate(data[name])\n else:\n iterator = iter(list(data[name].items()))\n to_remove = []\n for index, value in iterator:\n if isinstance(value,\n np.ndarray) and value.size > 0:\n fname = __save_array(value, arr_fname,\n len(saved_arrays))\n saved_arrays[(name, index)] = (\n osp.basename(fname))\n to_remove.append(index)\n for index in sorted(to_remove, reverse=True):\n data[name].pop(index)\n except (RuntimeError, pickle.PicklingError, TypeError,\n AttributeError, IndexError):\n # If an array can't be saved with numpy for some reason,\n # leave the object intact and try to save it normally.\n pass\n if saved_arrays:\n data['__saved_arrays__'] = saved_arrays\n\n pickle_filename = osp.splitext(filename)[0] + '.pickle'\n # Attempt to pickle everything.\n # If pickling fails, iterate through to eliminate problem objs & retry.\n with open(pickle_filename, 'w+b') as fdesc:\n try:\n pickle.dump(data, fdesc, protocol=2)\n except (pickle.PicklingError, AttributeError, TypeError,\n ImportError, IndexError, RuntimeError):\n data_filtered = {}\n for obj_name, obj_value in data.items():\n try:\n pickle.dumps(obj_value, protocol=2)\n except Exception:\n skipped_keys.append(obj_name)\n else:\n data_filtered[obj_name] = obj_value\n if not data_filtered:\n raise RuntimeError('No supported objects to save')\n pickle.dump(data_filtered, fdesc, protocol=2)\n\n # Use PAX (POSIX.1-2001) format instead of default GNU.\n # This improves interoperability and UTF-8/long variable name 
support.\n with tarfile.open(filename, \"w\", format=tarfile.PAX_FORMAT) as tar:\n for fname in ([pickle_filename]\n + [fn for fn in list(saved_arrays.values())]):\n tar.add(osp.basename(fname))\n os.remove(fname)\n except (RuntimeError, pickle.PicklingError, TypeError) as error:\n error_message = to_text_string(error)\n else:\n if skipped_keys:\n skipped_keys.sort()\n error_message = ('Some objects could not be saved: '\n + ', '.join(skipped_keys))\n finally:\n os.chdir(old_cwd)\n return error_message", "response": "Save dictionary to a single file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef load_dictionary(filename):\n filename = osp.abspath(filename)\n old_cwd = getcwd()\n tmp_folder = tempfile.mkdtemp()\n os.chdir(tmp_folder)\n data = None\n error_message = None\n try:\n with tarfile.open(filename, \"r\") as tar:\n tar.extractall()\n pickle_filename = glob.glob('*.pickle')[0]\n # 'New' format (Spyder >=2.2 for Python 2 and Python 3)\n with open(pickle_filename, 'rb') as fdesc:\n data = pickle.loads(fdesc.read())\n saved_arrays = {}\n if load_array is not None:\n # Loading numpy arrays saved with np.save\n try:\n saved_arrays = data.pop('__saved_arrays__')\n for (name, index), fname in list(saved_arrays.items()):\n arr = np.load( osp.join(tmp_folder, fname) )\n if index is None:\n data[name] = arr\n elif isinstance(data[name], dict):\n data[name][index] = arr\n else:\n data[name].insert(index, arr)\n except KeyError:\n pass\n # Except AttributeError from e.g. trying to load function no longer present\n except (AttributeError, EOFError, ValueError) as error:\n error_message = to_text_string(error)\n # To ensure working dir gets changed back and temp dir wiped no matter what\n finally:\n os.chdir(old_cwd)\n try:\n shutil.rmtree(tmp_folder)\n except OSError as error:\n error_message = to_text_string(error)\n return data, error_message", "response": "Load a dictionary from a .spydata file"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getobj(txt, last=False):\n txt_end = \"\"\n for startchar, endchar in [\"[]\", \"()\"]:\n if txt.endswith(endchar):\n pos = txt.rfind(startchar)\n if pos:\n txt_end = txt[pos:]\n txt = txt[:pos]\n tokens = re.split(SYMBOLS, txt)\n token = None\n try:\n while token is None or re.match(SYMBOLS, token):\n token = tokens.pop()\n if token.endswith('.'):\n token = token[:-1]\n if token.startswith('.'):\n # Invalid object name\n return None\n if last:\n #XXX: remove this statement as well as the \"last\" argument\n token += txt[ txt.rfind(token) + len(token) ]\n token += txt_end\n if token:\n return token\n except IndexError:\n return None", "response": "Return the last valid object name in string"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the text documentation from an object.", "response": "def getdoc(obj):\n \"\"\"\n Return text documentation from an object. This comes in a form of\n dictionary with four keys:\n\n name:\n The name of the inspected object\n argspec:\n Its argspec\n note:\n A phrase describing the type of object (function or method) we are\n inspecting, and the module it belongs to.\n docstring:\n Its docstring\n \"\"\"\n \n docstring = inspect.getdoc(obj) or inspect.getcomments(obj) or ''\n \n # Most of the time doc will only contain ascii characters, but there are\n # some docstrings that contain non-ascii characters. Not all source files\n # declare their encoding in the first line, so querying for that might not\n # yield anything, either. So assume the most commonly used\n # multi-byte file encoding (which also covers ascii). 
\n try:\n docstring = to_text_string(docstring)\n except:\n pass\n \n # Doc dict keys\n doc = {'name': '',\n 'argspec': '',\n 'note': '',\n 'docstring': docstring}\n \n if callable(obj):\n try:\n name = obj.__name__\n except AttributeError:\n doc['docstring'] = docstring\n return doc\n if inspect.ismethod(obj):\n imclass = get_meth_class(obj)\n if get_meth_class_inst(obj) is not None:\n doc['note'] = 'Method of %s instance' \\\n % get_meth_class_inst(obj).__class__.__name__\n else:\n doc['note'] = 'Unbound %s method' % imclass.__name__\n obj = get_meth_func(obj)\n elif hasattr(obj, '__module__'):\n doc['note'] = 'Function of %s module' % obj.__module__\n else:\n doc['note'] = 'Function'\n doc['name'] = obj.__name__\n if inspect.isfunction(obj):\n if PY2:\n args, varargs, varkw, defaults = inspect.getargspec(obj)\n doc['argspec'] = inspect.formatargspec(\n args, varargs, varkw, defaults,\n formatvalue=lambda o:'='+repr(o))\n else:\n (args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults,\n annotations) = inspect.getfullargspec(obj)\n doc['argspec'] = inspect.formatargspec(\n args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults,\n annotations, formatvalue=lambda o:'='+repr(o))\n if name == '<lambda>':\n doc['name'] = name + ' lambda '\n doc['argspec'] = doc['argspec'][1:-1] # remove parentheses\n else:\n argspec = getargspecfromtext(doc['docstring'])\n if argspec:\n doc['argspec'] = argspec\n # Many scipy and numpy docstrings begin with a function\n # signature on the first line. This ends up being redundant\n # when we are using title and argspec to create the\n # rich text \"Definition:\" field. 
We'll carefully remove this\n # redundancy but only under a strict set of conditions:\n # Remove the starting characters of the 'doc' portion *iff*\n # the non-whitespace characters on the first line \n # match *exactly* the combined function title \n # and argspec we determined above.\n signature = doc['name'] + doc['argspec']\n docstring_blocks = doc['docstring'].split(\"\\n\\n\")\n first_block = docstring_blocks[0].strip()\n if first_block == signature:\n doc['docstring'] = doc['docstring'].replace(\n signature, '', 1).lstrip()\n else:\n doc['argspec'] = '(...)'\n \n # Remove self from argspec\n argspec = doc['argspec']\n doc['argspec'] = argspec.replace('(self)', '()').replace('(self, ', '(')\n \n return doc"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getsource(obj):\n try:\n try:\n src = to_text_string(inspect.getsource(obj))\n except TypeError:\n if hasattr(obj, '__class__'):\n src = to_text_string(inspect.getsource(obj.__class__))\n else:\n # Bindings like VTK or ITK require this case\n src = getdoc(obj)\n return src\n except (TypeError, IOError):\n return", "response": "Wrapper around inspect.getsource"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getsignaturefromtext(text, objname):\n if isinstance(text, dict):\n text = text.get('docstring', '')\n # Regexps\n oneline_re = objname + r'\\([^\\)].+?(?<=[\\w\\]\\}\\'\"])\\)(?!,)'\n multiline_re = objname + r'\\([^\\)]+(?<=[\\w\\]\\}\\'\"])\\)(?!,)'\n multiline_end_parenleft_re = r'(%s\\([^\\)]+(\\),\\n.+)+(?<=[\\w\\]\\}\\'\"])\\))'\n # Grabbing signatures\n if not text:\n text = ''\n sigs_1 = re.findall(oneline_re + '|' + multiline_re, text)\n sigs_2 = [g[0] for g in re.findall(multiline_end_parenleft_re % objname, text)]\n all_sigs = sigs_1 + sigs_2\n # The most relevant signature is usually the first one. 
There could be\n # others in doctests but those are not so important\n if all_sigs:\n return all_sigs[0]\n else:\n return ''", "response": "Get object signatures from text"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntry to get the formatted argspec of a callable from the first block of the text.", "response": "def getargspecfromtext(text):\n \"\"\"\n Try to get the formatted argspec of a callable from the first block of its\n docstring\n \n This will return something like\n '(foo, bar, k=1)'\n \"\"\"\n blocks = text.split(\"\\n\\n\")\n first_block = blocks[0].strip()\n return getsignaturefromtext(first_block, '')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getargsfromtext(text, objname):\n signature = getsignaturefromtext(text, objname)\n if signature:\n argtxt = signature[signature.find('(')+1:-1]\n return argtxt.split(',')", "response": "Get arguments from text."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getargs(obj):\n if inspect.isfunction(obj) or inspect.isbuiltin(obj):\n func_obj = obj\n elif inspect.ismethod(obj):\n func_obj = get_meth_func(obj)\n elif inspect.isclass(obj) and hasattr(obj, '__init__'):\n func_obj = getattr(obj, '__init__')\n else:\n return []\n if not hasattr(func_obj, 'func_code'):\n # Builtin: try to extract info from doc\n args = getargsfromdoc(func_obj)\n if args is not None:\n return args\n else:\n # Example: PyQt5\n return getargsfromdoc(obj)\n args, _, _ = inspect.getargs(func_obj.func_code)\n if not args:\n return getargsfromdoc(obj)\n \n # Supporting tuple arguments in def statement:\n for i_arg, arg in enumerate(args):\n if isinstance(arg, list):\n args[i_arg] = \"(%s)\" % \", \".join(arg)\n \n defaults = get_func_defaults(func_obj)\n if defaults is not None:\n for index, default in enumerate(defaults):\n args[index+len(args)-len(defaults)] += 
'='+repr(default)\n if inspect.isclass(obj) or inspect.ismethod(obj):\n if len(args) == 1:\n return None\n if 'self' in args:\n args.remove('self')\n return args", "response": "Get the names and default values of a function's arguments"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getargtxt(obj, one_arg_per_line=True):\n args = getargs(obj)\n if args:\n sep = ', '\n textlist = None\n for i_arg, arg in enumerate(args):\n if textlist is None:\n textlist = ['']\n textlist[-1] += arg\n if i_arg < len(args)-1:\n textlist[-1] += sep\n if len(textlist[-1]) >= 32 or one_arg_per_line:\n textlist.append('')\n if inspect.isclass(obj) or inspect.ismethod(obj):\n if len(textlist) == 1:\n return None\n if 'self'+sep in textlist:\n textlist.remove('self'+sep)\n return textlist", "response": "Get the names and default values of a function's arguments. Return a list with separators (', ') formatted for calltips.\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning True if object is defined in namespace", "response": "def isdefined(obj, force_import=False, namespace=None):\n \"\"\"Return True if object is defined in namespace\n If namespace is None --> namespace = locals()\"\"\"\n if namespace is None:\n namespace = locals()\n attr_list = obj.split('.')\n base = attr_list.pop(0)\n if len(base) == 0:\n return False\n if base not in builtins.__dict__ and base not in namespace:\n if force_import:\n try:\n module = __import__(base, globals(), namespace)\n if base not in globals():\n globals()[base] = module\n namespace[base] = module\n except Exception:\n return False\n else:\n return False\n for attr in attr_list:\n try:\n attr_not_found = not hasattr(eval(base, namespace), attr)\n except (SyntaxError, AttributeError):\n return False\n if attr_not_found:\n if force_import:\n try:\n __import__(base+'.'+attr, globals(), namespace)\n except (ImportError, SyntaxError):\n return 
False\n else:\n return False\n base += '.'+attr\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read(self, size):\n return_val = self.handle.read(size)\n if not return_val:\n raise IOError\n else: \n return return_val", "response": "Read size bytes from the file and return them, raising IOError if nothing was read."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef escape_tex(value):\n newval = value\n for pattern, replacement in LATEX_SUBS:\n newval = pattern.sub(replacement, newval)\n return newval", "response": "Escape a text value so that it is TeX-safe."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef packagenameify(s):\n return ''.join(w if w in ACRONYMS else w.title() for w in s.split('.')[-1:])", "response": "Makes a package name from a string."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nhandles naming and sizing of message fields.", "response": "def handle_fields(definitions, fields, prefix, offset, multiplier):\n \"\"\"\n Helper for handling naming and sizing of fields. 
It's terrible.\n \"\"\"\n items = []\n for f in fields:\n if f.type_id == \"array\" and f.options['fill'].value in CONSTRUCT_CODE:\n prefix_name = '.'.join([prefix, f.identifier]) if prefix else f.identifier\n n_with_values = f.options['n_with_values'].value\n bitfields = f.options['fields'].value if n_with_values > 0 else None\n if 'size' in f.options:\n name = \"%s[%s]\" % (f.options['fill'].value, str(f.options['size'].value))\n size = field_sizes[f.options['fill'].value] * f.options['size'].value\n item = FieldItem(prefix_name, name, offset, size,\n str(f.units), f.desc, n_with_values, bitfields)\n items.append(item)\n offset += size\n else:\n name = \"%s[%s]\" % (f.options['fill'].value, \"N\")\n multiplier = field_sizes[f.options['fill'].value]\n size = field_sizes[f.options['fill'].value] * 1\n item = FieldItem(prefix_name, name, offset, \"N\",\n str(f.units), f.desc, n_with_values, bitfields)\n items.append(item)\n offset += size\n elif f.type_id == \"string\":\n prefix_name = '.'.join([prefix, f.identifier]) if prefix else f.identifier\n n_with_values = f.options['n_with_values'].value\n bitfields = f.options['fields'].value if n_with_values > 0 else None\n if 'size' in f.options:\n name = \"string\"\n size = field_sizes['u8'] * f.options['size'].value\n item = FieldItem(prefix_name, name, offset, size,\n str(f.units), f.desc, n_with_values, bitfields)\n items.append(item)\n offset += size\n else:\n name = \"string\"\n size = field_sizes['u8']\n multiplier = 1\n item = FieldItem(prefix_name, name, offset, \"N\",\n str(f.units), f.desc, n_with_values, bitfields)\n items.append(item)\n offset += size\n elif f.type_id == \"array\":\n name = f.options['fill'].value\n definition = next(d for d in definitions if name == d.identifier)\n prefix_name = '.'.join([prefix, f.identifier]) if prefix else f.identifier\n (new_items, new_offset, new_multiplier) \\\n = handle_fields(definitions,\n definition.fields,\n prefix_name + \"[N]\",\n offset,\n multiplier)\n 
multiplier = new_offset - offset\n (newer_items, newer_offset, newer_multiplier) \\\n = handle_fields(definitions,\n definition.fields,\n prefix_name + \"[N]\", offset,\n multiplier)\n items += newer_items\n offset = newer_offset\n elif f.type_id not in CONSTRUCT_CODE:\n name = f.type_id\n definition = next(d for d in definitions if name == d.identifier)\n prefix_name = '.'.join([prefix, f.identifier]) if prefix else f.identifier\n (new_items, new_offset, new_multiplier) \\\n = handle_fields(definitions,\n definition.fields,\n prefix_name,\n offset,\n multiplier)\n items += new_items\n offset = new_offset\n multiplier = new_multiplier\n else:\n size = field_sizes[f.type_id]\n name = f.type_id\n adj_offset = \"%dN+%d\" % (multiplier, offset) if multiplier else offset\n prefix_name = '.'.join([prefix, f.identifier]) if prefix else f.identifier\n n_with_values = f.options['n_with_values'].value\n bitfields = f.options['fields'].value if n_with_values > 0 else None\n item = FieldItem(prefix_name, name, adj_offset, size, str(f.units), f.desc, n_with_values, bitfields)\n items.append(item)\n offset += size\n return (items, offset, multiplier)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef render_source(output_dir, package_specs, version):\n destination_filename = \"%s/sbp_out.tex\" % output_dir\n py_template = JENV.get_template(TEMPLATE_NAME)\n stable_msgs = []\n unstable_msgs = []\n prims = []\n for p in sorted(package_specs, key=attrgetter('identifier')):\n pkg_name = p.identifier\n stable = p.stable\n\n # build list of required definitions (this package plus includes)\n # TODO: recursively include files\n definitions = p.definitions\n for inc in p.includes:\n inc_basename = inc.split(\".\")[0]\n for pkg in package_specs:\n if pkg.identifier.endswith(inc_basename):\n definitions += pkg.definitions\n\n if pkg_name == \"swiftnav.sbp.types\":\n prims = p.definitions\n for d in p.definitions:\n if 
d.public and d.static and d.sbp_id:\n items, size, multiplier \\\n = handle_fields(definitions, d.fields, \"\", 0, None)\n adj_size = \"\"\n if multiplier == 1:\n adj_size = \"N+%d\" % (size - 1) if size > 1 else \"N\"\n elif multiplier:\n if multiplier == size:\n adj_size = \"%dN\" % multiplier\n else:\n adj_size = \"%dN+%d\" % (multiplier, size - multiplier)\n else:\n adj_size = \"%d\" % size\n ti = TableItem(pkg_name, d.identifier, d.sbp_id,\n d.short_desc, d.desc, adj_size, items, p.stable,\n p.description)\n pkg_name = \"\"\n if stable:\n stable_msgs.append(ti)\n else:\n unstable_msgs.append(ti)\n with open(destination_filename, 'w') as f:\n f.write(py_template.render(msgs=stable_msgs,\n umsgs=unstable_msgs,\n prims=prims,\n version=version))\n import subprocess\n import os\n os.chdir(output_dir)\n subprocess.call([\"pdflatex\", \"--enable-write18\",\n \"-shell-escape\", \"sbp_out.tex\"])\n subprocess.call([\"mv\", \"sbp_out.pdf\", \"../docs/sbp.pdf\"])", "response": "Render and output the source file for the given package_specs and version."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets and parse arguments.", "response": "def get_args():\n \"\"\"\n Get and parse arguments.\n \"\"\"\n import argparse\n parser = argparse.ArgumentParser(\n description=\"Swift Navigation SBP Example.\")\n parser.add_argument(\n \"-s\",\n \"--serial-port\",\n default=[DEFAULT_SERIAL_PORT],\n nargs=1,\n help=\"specify the serial port to use.\")\n parser.add_argument(\n \"-b\",\n \"--baud\",\n default=[DEFAULT_SERIAL_BAUD],\n nargs=1,\n help=\"specify the baud rate to use.\")\n parser.add_argument(\n \"-a\",\n \"--address\",\n default=[DEFAULT_UDP_ADDRESS],\n nargs=1,\n help=\"specify the serial port to use.\")\n parser.add_argument(\n \"-p\",\n \"--udp-port\",\n default=[DEFAULT_UDP_PORT],\n nargs=1,\n help=\"specify the baud rate to use.\")\n return parser.parse_args()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 
function for\nconverting a value to a C language appropriate identifier format.", "response": "def convert(value):\n \"\"\"Converts to a C language appropriate identifier format.\n\n \"\"\"\n s0 = \"Sbp\" + value if value in COLLISIONS else value\n s1 = re.sub('(.)([A-Z][a-z]+)', r'\\1_\\2', s0)\n return re.sub('([a-z0-9])([A-Z])', r'\\1_\\2', s1).lower() + \"_t\""} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nbuild an identifier from a field.", "response": "def mk_id(field):\n \"\"\"Builds an identifier from a field.\n \"\"\"\n name = field.type_id\n if name == \"string\":\n return \"%s\" % (\"char\")\n elif name == \"array\" and field.size:\n if field.options['fill'].value not in CONSTRUCT_CODE:\n return \"%s\" % convert(field.options['fill'].value)\n else:\n return \"%s\" % field.options['fill'].value\n elif name == \"array\":\n return \"%s\" % convert(field.options['fill'].value)\n elif name not in CONSTRUCT_CODE:\n return convert(name)\n else:\n return name"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mk_size(field):\n name = field.type_id\n if name == \"string\" and field.options.get('size', None):\n return \"%s[%d];\" % (field.identifier, field.options.get('size').value)\n elif name == \"string\":\n return \"%s[0];\" % field.identifier\n elif name == \"array\" and field.options.get('size', None):\n return \"%s[%d];\" % (field.identifier, field.options.get('size').value)\n elif name == \"array\":\n return \"%s[0];\" % field.identifier\n else:\n return '%s;' % field.identifier", "response": "Builds an identifier for a container type."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef render_source(output_dir, package_spec):\n path, name = package_spec.filepath\n destination_filename = \"%s/%s.h\" % (output_dir, name)\n py_template = JENV.get_template(MESSAGES_TEMPLATE_NAME)\n with 
open(destination_filename, 'w') as f:\n f.write(py_template.render(msgs=package_spec.definitions,\n pkg_name=name,\n filepath=\"/\".join(package_spec.filepath) + \".yaml\",\n max_msgid_len=package_spec.max_msgid_len,\n description=package_spec.description,\n timestamp=package_spec.creation_timestamp,\n include=extensions(package_spec.includes)))", "response": "Render and output a source file given a package specification."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef crc16jit(buf, offset, crc, length):\n for index in range(offset, offset + length):\n data = buf[index]\n lookup = crc16_tab[((nb.u2(crc) >> 8) & nb.u2(0xFF)) ^ (data & nb.u2(0xFF))]\n crc = ((nb.u2(crc) << nb.u2(8)) & nb.u2(0xFFFF)) ^ lookup\n crc = nb.u2(crc) & nb.u2(0xFFFF)\n return crc", "response": "CRC16 implementation according to CCITT standards."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef crc16_nojit(s, crc=0):\n for ch in bytearray(s): # bytearray's elements are integers in both python 2 and 3\n crc = ((crc << 8) & 0xFFFF) ^ _crc16_tab[((crc >> 8) & 0xFF) ^ (ch & 0xFF)]\n crc &= 0xFFFF\n return crc", "response": "CRC16 implementation according to CCITT standards."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_framed(self, buf, offset, insert_payload):\n header_offset = offset + self._header_len\n self.length = insert_payload(buf, header_offset, self.payload)\n struct.pack_into(self._header_fmt,\n buf,\n offset,\n self.preamble,\n self.msg_type,\n self.sender,\n self.length)\n crc_offset = header_offset + self.length\n preamble_bytes = 1\n crc_over_len = self._header_len + self.length - preamble_bytes\n self.crc = crc16jit(buf, offset+1, 0, crc_over_len)\n struct.pack_into(self._crc_fmt, buf, crc_offset, self.crc)\n length = preamble_bytes + crc_over_len + self._crc_len\n return length", "response": 
"Returns the framed message and updates the CRC."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pack(self):\n buf = np.zeros(512, dtype=np.uint8)\n packed_len = self._get_framed(buf, 0, self._copy_payload)\n d = buf[:packed_len]\n return d.tobytes()", "response": "Pack to framed binary message."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef pack_into(self, buf, offset, write_payload):\n return self._get_framed(buf, offset, write_payload)", "response": "Pack the binary message into the buffer."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nunpack and return a framed binary message.", "response": "def unpack(d):\n \"\"\"Unpack and return a framed binary message.\n\n \"\"\"\n p = SBP._parser.parse(d)\n assert p.preamble == SBP_PREAMBLE, \"Invalid preamble 0x%x.\" % p.preamble\n return SBP(p.msg_type, p.sender, p.length, p.payload, p.crc)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nproduce a JSON-encoded SBP message.", "response": "def to_json(self, sort_keys=False):\n \"\"\"Produce a JSON-encoded SBP message.\n\n \"\"\"\n d = self.to_json_dict()\n return json.dumps(d, sort_keys=sort_keys)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nbuilding an object from a JSON-encoded message.", "response": "def from_json(s):\n \"\"\"Given a JSON-encoded message, build an object.\n\n \"\"\"\n d = json.loads(s)\n sbp = SBP.from_json_dict(d)\n return sbp"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreading a number of bytes from the underlying file and returning them.", "response": "def read(self, size):\n \"\"\"\n Read wrapper.\n\n Parameters\n ----------\n size : int\n Number of bytes to read.\n \"\"\"\n try:\n return self.handle.read(size)\n except (OSError, serial.SerialException):\n print()\n 
print(\"Piksi disconnected\")\n print()\n self.handle.close()\n raise IOError"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrap serial.write().", "response": "def write(self, s):\n \"\"\"\n Write wrapper.\n\n Parameters\n ----------\n s : bytes\n Bytes to write\n \"\"\"\n try:\n return self.handle.write(s)\n except (OSError, serial.SerialException,\n serial.writeTimeoutError) as e:\n if e == serial.writeTimeoutError:\n print(\"sbp pyserial_driver: writeTimeoutError\")\n return 0\n else:\n print()\n print(\"Piksi disconnected\")\n print()\n self.handle.close()\n raise IOError"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse_type(field):\n if field.type_id == 'string':\n if 'size' in field.options:\n return \"parser.getString(%d)\" % field.options['size'].value\n else:\n return \"parser.getString()\"\n elif field.type_id in JAVA_TYPE_MAP:\n # Primitive java types have extractor methods in SBPMessage.Parser\n return \"parser.get\" + field.type_id.capitalize() + \"()\"\n if field.type_id == 'array':\n # Call function to build array\n t = field.options['fill'].value\n if t in JAVA_TYPE_MAP:\n if 'size' in field.options:\n return \"parser.getArrayof%s(%d)\" % (t.capitalize(), field.options['size'].value)\n else:\n return \"parser.getArrayof%s()\" % t.capitalize()\n else:\n if 'size' in field.options:\n return \"parser.getArray(%s.class, %d)\" % (t, field.options['size'].value)\n else:\n return \"parser.getArray(%s.class)\" % t\n else:\n # This is an inner class, call default constructor\n return \"new %s().parse(parser)\" % field.type_id", "response": "Function to pull a type from the binary payload."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef build_type(field):\n if field.type_id == 'string':\n if 'size' in field.options:\n return \"builder.putString(%s, %d)\" % (field.identifier, field.options['size'].value)\n 
else:\n return \"builder.putString(%s)\" % field.identifier\n elif field.type_id in JAVA_TYPE_MAP:\n # Primitive java types have extractor methods in SBPMessage.Builder\n return \"builder.put%s(%s)\" % (field.type_id.capitalize(), field.identifier)\n if field.type_id == 'array':\n # Call function to build array\n t = field.options['fill'].value\n if t in JAVA_TYPE_MAP:\n if 'size' in field.options:\n return \"builder.putArrayof%s(%s, %d)\" % (t.capitalize(),\n field.identifier,\n field.options['size'].value)\n else:\n return \"builder.putArrayof%s(%s)\" % (t.capitalize(), field.identifier)\n else:\n if 'size' in field.options:\n return \"builder.putArray(%s, %d)\" % (field.identifier, field.options['size'].value)\n else:\n return \"builder.putArray(%s)\" % field.identifier\n else:\n return \"%s.build(builder)\" % field.identifier", "response": "Function to build a type into the binary payload."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef render_source(output_dir, package_spec, jenv=JENV):\n path, module_name = package_spec.filepath\n java_template = jenv.get_template(TEMPLATE_NAME)\n module_path = \"com.\" + package_spec.identifier\n yaml_filepath = \"/\".join(package_spec.filepath) + \".yaml\"\n includes = [\".\".join(i.split(\".\")[:-1]) for i in package_spec.includes]\n includes = [i for i in includes if i != \"types\"]\n for msg in package_spec.definitions:\n msg_name = classnameify(msg.identifier) if msg.sbp_id else msg.identifier\n l = \"/\".join(package_spec.filepath)\n destination_filename = \"%s/com/%s/%s.java\" % (output_dir, l , msg_name)\n\n # Create the output directory if it doesn't exist\n if not os.path.exists(os.path.dirname(destination_filename)):\n os.mkdir(os.path.dirname(destination_filename))\n\n with open(destination_filename, 'w+') as f:\n print(destination_filename)\n f.write(java_template.render(m=msg,\n filepath=yaml_filepath,\n module_path=module_path,\n 
include=includes,\n description=package_spec.description))", "response": "Render and output the source code for the package_spec."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nrendering and output dispatch table", "response": "def render_table(output_dir, packages, jenv=JENV):\n \"\"\"\n Render and output dispatch table\n \"\"\"\n destination_filename = output_dir + \"/com/swiftnav/sbp/client/MessageTable.java\"\n with open(destination_filename, 'w+') as f:\n print(destination_filename)\n f.write(jenv.get_template(TEMPLATE_TABLE_NAME).render(packages=packages))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef construct_format(f, type_map=CONSTRUCT_CODE):\n formatted = \"\"\n if type_map.get(f.type_id, None):\n return \"%s('%s')\" % (type_map.get(f.type_id), f.identifier)\n elif f.type_id == 'string' and f.options.get('size', None):\n return \"string('%s', { length: %d })\" % (f.identifier, f.options['size'].value)\n elif f.type_id == 'string':\n return \"string('%s', { greedy: true })\" % (f.identifier)\n elif f.type_id == 'array':\n fill = f.options['fill'].value\n f_ = copy.copy(f)\n f_.type_id = fill\n size = f.options.get('size', None)\n size_fn = f.options.get('size_fn', None)\n field_type = type_map.get(f_.type_id, None)\n if field_type is None:\n field_type = \"%s.prototype.parser\" % f_.type_id\n else:\n field_type = \"'%s'\" % field_type\n if size is not None:\n d = { \"'uint16'\" : \"'uint16le'\", \"'uint32'\" : \"'uint32le'\", \"'uint64'\" : \"'uint16le'\",\n \"'int16'\" : \"'int16le'\", \"'int32'\" : \"'int32le'\", \"'int64'\" : \"'int16le'\" }\n field_type_arr = d.get(field_type, field_type)\n return \"array('%s', { length: %d, type: %s })\" % (f.identifier, size.value, field_type_arr)\n elif f.options.get('size_fn') is not None:\n return \"array('%s', { type: %s, length: '%s' })\" % (f_.identifier, field_type, size_fn.value)\n else:\n return \"array('%s', { 
type: %s, readUntil: 'eof' })\" % (f_.identifier, field_type)\n else:\n return \"nest('%s', { type: %s.prototype.parser })\" % (f.identifier, f.type_id)\n return formatted", "response": "Constructs a format string for the binary-parser library."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntake a binary payload d and update the appropriate payload fields of the message.", "response": "def from_binary(self, d):\n \"\"\"Given a binary payload d, update the appropriate payload fields of\n the message.\n\n \"\"\"\n p = MsgEphemerisGPSDepF._parser.parse(d)\n for n in self.__class__.__slots__:\n setattr(self, n, getattr(p, n))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_binary(self):\n c = containerize(exclude_fields(self))\n self.payload = MsgEphemerisGPSDepF._parser.build(c)\n return self.pack()", "response": "Produce a framed or packed SBP message."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nproducing a framed SBP message into the provided buffer and offset.", "response": "def into_buffer(self, buf, offset):\n \"\"\"Produce a framed/packed SBP message into the provided buffer and offset.\n\n \"\"\"\n self.payload = containerize(exclude_fields(self))\n self.parser = MsgEphemerisGPSDepF._parser\n self.stream_payload.reset(buf, offset)\n return self.pack_into(buf, offset, self._build_payload)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _readall(self, size):\n data = b\"\"\n while len(data) < size:\n d = self._read(size - len(data))\n if self._broken:\n raise StopIteration\n if not d:\n # NOTE (Buro/jgross): Force a yield here to another thread. 
In\n # case the stream fails midstream, the spinning here causes\n # the UI thread to lock up without yielding.\n time.sleep(0)\n continue\n data += d\n return data", "response": "Read exactly size bytes from the stream, blocking until they have all been received."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads and builds a new SBP message.", "response": "def _receive(self):\n \"\"\"\n Read and build SBP message.\n \"\"\"\n # preamble - not readall(1) to allow breaking before messages,\n # empty input\n preamble = self._read(1)\n if not preamble:\n return None\n elif ord(preamble) != SBP_PREAMBLE:\n if self._verbose:\n print(\"Host Side Unhandled byte: 0x%02x\" % ord(preamble))\n return None\n # hdr\n hdr = self._readall(5)\n msg_crc = crc16(hdr)\n msg_type, sender, msg_len = struct.unpack(\"\" % (obj.__class__.__name__, pprint.pformat(items, width=1))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef capture_setting(self, sbp_msg, **metadata):\n if sbp_msg.msg_type == SBP_MSG_SETTINGS_READ_RESP:\n section, setting, value = sbp_msg.payload.split(b'\\0')[:3]\n self.settings.append((section, setting, value))", "response": "Callback to extract and store setting values from SBP_MSG_SETTINGS_READ_RESP and store them in self. 
settings."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef wait_for_setting_value(self, section, setting, value, wait_time=5.0):\n expire = time.time() + wait_time\n ok = False\n while not ok and time.time() < expire:\n settings = [x for x in self.settings if (x[0], x[1]) == (section, setting)]\n # Check to see if the last setting has the value we want\n if len(settings) > 0:\n ok = settings[-1][2] == value\n\n time.sleep(0.1)\n return ok", "response": "Function to wait until a user-specified value is set in the specified section."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef clear(self, section=None, setting=None, value=None):\n match = [all((section is None or x_y_z[0] == section,\n setting is None or x_y_z[1] == setting,\n value is None or x_y_z[2] == value)) for x_y_z in self.settings]\n\n keep = [setting_remove for setting_remove in zip(self.settings,match) if not setting_remove[1]]\n\n self.settings[:] = [x[0] for x in keep]", "response": "Clear the settings of the current session."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrender and output a source file from a package specification.", "response": "def render_source(output_dir, package_spec, jenv=JENV):\n \"\"\"\n Render and output\n \"\"\"\n path, name = package_spec.filepath\n directory = output_dir\n destination_filename = \"%s/%s.py\" % (directory, name)\n py_template = jenv.get_template(TEMPLATE_NAME)\n module_path = \".\".join(package_spec.identifier.split(\".\")[1:-1])\n includes = [\".\".join(i.split(\".\")[:-1]) for i in package_spec.includes]\n includes = [i for i in includes if i != \"types\"]\n print(destination_filename, includes)\n with open(destination_filename, 'w') as f:\n f.write(py_template.render(msgs=package_spec.definitions,\n filepath=\"/\".join(package_spec.filepath) + \".yaml\",\n 
module_path=module_path,\n include=includes,\n timestamp=package_spec.creation_timestamp,\n description=package_spec.description))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read(self, size):\n data = None\n while True:\n try:\n data = self.handle.recv(size)\n except socket.timeout as socket_error:\n self._reconnect(socket_error)\n except socket.error as socket_error:\n # this is fine, just retry\n if socket_error.errno == errno.EINTR:\n continue\n self._reconnect(IOError)\n if not data:\n self._reconnect(IOError)\n break\n return data", "response": "Reads from the socket and returns the data."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nwrite a bytes to the log.", "response": "def write(self, s):\n \"\"\"\n Write wrapper.\n\n Parameters\n ----------\n s : bytes\n Bytes to write\n \"\"\"\n try:\n self._write_lock.acquire()\n self.handle.sendall(s)\n except socket.timeout:\n self._connect()\n except socket.error:\n raise IOError\n finally:\n self._write_lock.release()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef exclude_fields(obj, exclude=EXCLUDE):\n return dict([(k, getattr(obj, k)) for k in obj.__slots__ if k not in exclude])", "response": "Return dict of object without parent attrs."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef walk_json_dict(coll):\n if isinstance(coll, dict):\n return dict((k, walk_json_dict(v)) for (k, v) in iter(coll.items()))\n elif isinstance(coll, bytes):\n return coll.decode('ascii')\n elif hasattr(coll, '__iter__') and not isinstance(coll, str):\n return [walk_json_dict(seq) for seq in coll]\n else:\n return coll", "response": "Flatten a parsed SBP object into dicts and lists which are\n compatible for JSON output."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function 
does\ndef containerize(coll):\n if isinstance(coll, Container):\n [setattr(coll, k, containerize(v)) for (k, v) in coll.items()]\n return coll\n elif isinstance(coll, dict):\n return containerize(Container(**coll))\n elif isinstance(coll, list):\n for j, i in enumerate(coll):\n if isinstance(i, dict):\n coll[j] = containerize(Container(**i))\n return coll\n else:\n return coll", "response": "Walk attribute fields passed from an SBP message and convert them to a Container where appropriate. Used for proper Construct\n serialization."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nprinting an orphaned string representation of an object without the clutter of its parent object.", "response": "def fmt_repr(obj):\n \"\"\"Print an orphaned string representation of an object without the\n clutter of its parent object.\n\n \"\"\"\n items = [\"%s = %r\" % (k, v) for k, v in list(exclude_fields(obj).items())]\n return \"<%s: {%s}>\" % (obj.__class__.__name__, ', '.join(items))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreads an SBP specification file.", "response": "def read_spec(filename, verbose=False):\n \"\"\"\n Read an SBP specification.\n\n Parameters\n ----------\n filename : str\n Local filename for specification.\n verbose : bool\n Print out some debugging info\n\n Returns\n ----------\n\n Raises\n ----------\n Exception\n On empty file.\n yaml.YAMLError\n On Yaml parsing error\n voluptuous.Invalid\n On invalid SBP schema\n \"\"\"\n contents = None\n with open(filename, 'r') as f:\n contents = yaml.load(f)\n if contents is None:\n raise Exception(\"Empty yaml file: %s.\" % filename)\n try:\n s.package_schema(contents)\n except Exception as e:\n sys.stderr.write(\"Invalid SBP YAML specification: %s.\\n\" % filename)\n raise e\n return contents"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the base directory and index of files to generate.", 
"response": "def get_files(input_file):\n \"\"\"\n Initializes an index of files to generate, returns the base\n directory and index.\n\n \"\"\"\n file_index = {}\n base_dir = None\n if os.path.isfile(input_file):\n file_index[input_file] = None\n base_dir = os.path.dirname(input_file)\n elif os.path.isdir(input_file):\n base_dir = input_file\n for inf in glob.glob(input_file + s.SBP_EXTENSION):\n file_index[os.path.abspath(inf)] = None\n for inf in glob.glob(input_file + '/*'):\n base, index = get_files(os.path.abspath(inf))\n z = file_index.copy()\n z.update(index)\n file_index = z\n return (base_dir, file_index)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngives a base directory and an initial set of files retrieves dependencies and adds them to the file_index.", "response": "def resolve_deps(base_dir, file_index):\n \"\"\"\n Given a base directory and an initial set of files, retrieves\n dependencies and adds them to the file_index.\n\n \"\"\"\n def flatten(tree, index = {}):\n for include in tree.get('include', []):\n fname = base_dir + \"/\" + include\n assert os.path.exists(fname), \"File %s does not exist.\" % fname\n if fname not in index:\n index[fname] = read_spec(fname)\n index.update(flatten(index[fname], file_index))\n return index\n for fname, contents in file_index.items():\n file_index[fname] = read_spec(fname)\n file_index.update(flatten(file_index[fname], file_index))\n return file_index"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef mk_package(contents):\n package = contents.get('package', None)\n description = contents.get('description', None)\n include = contents.get('include', [])\n definitions = contents.get('definitions', [])\n resolved = [mk_definition(defn) for defn in definitions]\n return sbp.PackageSpecification(identifier=package,\n description=description,\n includes=include,\n definitions=resolved,\n 
render_source=contents.get('render_source', True),\n stable=contents.get('stable', False),\n public=contents.get('public', True))", "response": "Instantiates a package specification from a parsed AST of a\n package."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mk_field(field):\n assert len(field) == 1\n identifier, contents = next(iter(field.items()))\n contents = dict(list({'units': '', 'n_with_values': 0}.items()) + list(contents.items()))\n return sbp.resolve_type(sbp.Field(identifier=identifier,\n type_id=contents.pop('type'),\n options=contents))", "response": "Instantiates a field specification from a parsed AST of a\n field."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nformats a global variable name.", "response": "def to_global(s):\n \"\"\"\n Format a global variable name.\n \"\"\"\n if s.startswith('GPSTime'):\n s = 'Gps' + s[3:]\n\n if '_' in s:\n s = \"\".join([i.capitalize() for i in s.split(\"_\")])\n return s[0].lower() + s[1:]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_data(s):\n if s.startswith('GPSTime'):\n s = 'Gps' + s[3:]\n\n if '_' in s:\n return \"\".join([i.capitalize() for i in s.split(\"_\")])\n return s", "response": "Format a data variable name."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef to_type(f, type_map=CONSTRUCT_CODE):\n name = f.type_id\n if name.startswith('GPSTime'):\n name = 'Gps' + name[3:]\n if type_map.get(name, None):\n return type_map.get(name, None)\n elif name == 'array':\n fill = f.options['fill'].value\n f_ = copy.copy(f)\n f_.type_id = fill\n return \"[%s]\" % to_type(f_)\n return name", "response": "Format a proper type."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef render_source(output_dir, package_spec):\n path, 
name = package_spec.filepath\n module_prefix = \"SwiftNav.SBP\"\n module_name = camel_case(name)\n full_module_name = \".\".join([module_prefix, module_name])\n destination_filename = \"%s/src/SwiftNav/SBP/%s.hs\" % (output_dir, module_name)\n py_template = JENV.get_template(MESSAGES_TEMPLATE_NAME)\n module_includes = [\".\".join([module_prefix] + [camel_case(j) for j in i.split(\".\")[:-1]])\n for i in package_spec.includes]\n\n with open(destination_filename, 'w') as f:\n f.write(py_template.render(msgs=package_spec.definitions,\n description=package_spec.description,\n module_name=full_module_name,\n module_includes=module_includes))", "response": "Render and output a file given a package specification."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts a string to a comment.", "response": "def to_comment(value):\n \"\"\"\n Builds a comment.\n \"\"\"\n if value is None:\n return\n if len(value.split('\\n')) == 1:\n return \"* \" + value\n else:\n return '\\n'.join([' * ' + l for l in value.split('\\n')[:-1]])"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef to_identifier(s):\n if s.startswith('GPS'):\n s = 'Gps' + s[3:]\n return ''.join([i.capitalize() for i in s.split('_')]) if '_' in s else s", "response": "Convert snake_case to camel_case."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrender and output a single source file given a package specification.", "response": "def render_source(output_dir, package_spec):\n \"\"\"\n Render and output to a directory given a package specification.\n \"\"\"\n path, name = package_spec.filepath\n destination_filename = '%s/%s.proto' % (output_dir, name)\n pb_template = JENV.get_template(MESSAGES_TEMPLATE_NAME)\n includes = [include[:-5] if include.endswith('.yaml') else include for include in package_spec.includes]\n if 'types' in includes:\n includes.remove('types')\n with 
open(destination_filename, 'w') as f:\n f.write(pb_template.render(\n name=name,\n package=package_spec.identifier,\n messages=package_spec.definitions,\n includes=includes,\n description=package_spec.description,\n ))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads an app resource into a new container.", "response": "def load_app_resource(**kwargs):\n '''\n :param kwargs: keyword args for :func:`~dxpy.bindings.search.find_one_data_object`, with the exception of \"project\"\n :raises: :exc:`~dxpy.exceptions.DXError` if \"project\" is given, if this is called with dxpy.JOB_ID not set, or if \"DX_RESOURCES_ID\" or \"DX_PROJECT_CONTEXT_ID\" is not found in the environment variables\n :returns: None if no matching object is found; otherwise returns a dxpy object handler for that class of object\n\n Searches for a data object in the app resources container matching the given keyword arguments. If found, the\n object will be cloned into the running job's workspace container, and the handler for it will be returned. 
If the\n app resources container ID is not found in DX_RESOURCES_ID, falls back to looking in the current project.\n\n Example::\n\n @dxpy.entry_point('main')\n def main(*args, **kwargs):\n x = load_app_resource(name=\"Indexed genome\", classname='file')\n dxpy.download_dxfile(x)\n '''\n\n if 'project' in kwargs:\n raise DXError('Unexpected kwarg: \"project\"')\n if dxpy.JOB_ID is None:\n raise DXError('Not called by a job')\n if 'DX_RESOURCES_ID' not in os.environ and 'DX_PROJECT_CONTEXT_ID' not in os.environ:\n raise DXError('App resources container ID could not be found')\n\n kwargs['project'] = os.environ.get('DX_RESOURCES_ID', os.environ.get('DX_PROJECT_CONTEXT_ID'))\n kwargs['return_handler'] = True\n\n return find_one_data_object(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_from_cache(**kwargs):\n '''\n :param kwargs: keyword args for :func:`~dxpy.bindings.search.find_one_data_object`, with the exception of \"project\"\n :raises: :exc:`~dxpy.exceptions.DXError` if \"project\" is given, if this is called with dxpy.JOB_ID not set, or if \"DX_PROJECT_CACHE_ID\" is not found in the environment variables\n :returns: None if no matching object is found; otherwise returns a dxpy object handler for that class of object\n\n Searches for a data object in the project cache container matching\n the given keyword arguments. 
If found, the object will be cloned\n into the running job's workspace container, and the handler for it\n will be returned.\n\n Example::\n\n @dxpy.entry_point('main')\n def main(*args, **kwargs):\n x = load_from_cache(name=\"Indexed genome\", classname='file')\n if x is None:\n x = compute_result(*args)\n save_to_cache(x)\n '''\n\n if 'project' in kwargs:\n raise DXError('Unexpected kwarg: \"project\"')\n if dxpy.JOB_ID is None:\n raise DXError('Not called by a job')\n if 'DX_PROJECT_CACHE_ID' not in os.environ:\n raise DXError('Project cache ID could not be found in the environment variable DX_PROJECT_CACHE_ID')\n\n kwargs['project'] = os.environ.get('DX_PROJECT_CACHE_ID')\n kwargs['return_handler'] = True\n\n cached_object = find_one_data_object(**kwargs)\n\n if cached_object is None:\n return None\n\n return cached_object.clone(dxpy.WORKSPACE_ID)", "response": "Loads a data object from the project cache container."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsaving the given object to the project cache.", "response": "def save_to_cache(dxobject):\n '''\n :param dxobject: a dxpy object handler for an object to save to the cache\n :raises: :exc:`~dxpy.exceptions.DXError` if this is called with dxpy.JOB_ID not set, or if \"DX_PROJECT_CACHE_ID\" is not found in the environment variables\n\n Clones the given object to the project cache.\n\n Example::\n\n @dxpy.entry_point('main')\n def main(*args, **kwargs):\n x = load_from_cache(name=\"Indexed genome\", classname='file')\n if x is None:\n x = compute_result(*args)\n save_to_cache(x)\n '''\n\n if dxpy.JOB_ID is None:\n raise DXError('Not called by a job')\n if 'DX_PROJECT_CACHE_ID' not in os.environ:\n raise DXError('Project cache ID could not be found in the environment variable DX_PROJECT_CACHE_ID')\n\n dxobject.clone(os.environ.get('DX_PROJECT_CACHE_ID'))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef 
_get_run_input_common_fields(executable_input, **kwargs):\n '''\n Takes the same arguments as the run method. Creates an input hash for the /executable-xxxx/run method,\n translating ONLY the fields that can be handled uniformly across all executables: project, folder, name, tags,\n properties, details, depends_on, allow_ssh, debug, delay_workspace_destruction, ignore_reuse, and extra_args.\n '''\n project = kwargs.get('project') or dxpy.WORKSPACE_ID\n\n run_input = {\"input\": executable_input}\n for arg in ['folder', 'name', 'tags', 'properties', 'details']:\n if kwargs.get(arg) is not None:\n run_input[arg] = kwargs[arg]\n\n if kwargs.get('instance_type') is not None or kwargs.get('cluster_spec') is not None:\n instance_type_srd = SystemRequirementsDict.from_instance_type(kwargs.get('instance_type'))\n cluster_spec_srd = SystemRequirementsDict(kwargs.get('cluster_spec'))\n run_input[\"systemRequirements\"] = (instance_type_srd + cluster_spec_srd).as_dict()\n\n if kwargs.get('depends_on') is not None:\n run_input[\"dependsOn\"] = []\n if isinstance(kwargs['depends_on'], list):\n for item in kwargs['depends_on']:\n if isinstance(item, DXJob) or isinstance(item, DXDataObject):\n if item.get_id() is None:\n raise DXError('A dxpy handler given in depends_on does not have an ID set')\n run_input[\"dependsOn\"].append(item.get_id())\n elif isinstance(item, basestring):\n run_input['dependsOn'].append(item)\n else:\n raise DXError('Expected elements of depends_on to only be either instances of DXJob or DXDataObject, or strings')\n else:\n raise DXError('Expected depends_on field to be a list')\n\n if kwargs.get('delay_workspace_destruction') is not None:\n run_input[\"delayWorkspaceDestruction\"] = kwargs['delay_workspace_destruction']\n\n if kwargs.get('allow_ssh') is not None:\n run_input[\"allowSSH\"] = kwargs['allow_ssh']\n\n if kwargs.get('debug') is not None:\n run_input[\"debug\"] = kwargs['debug']\n\n if kwargs.get('priority') is not None:\n 
run_input[\"priority\"] = kwargs['priority']\n\n if kwargs.get('ignore_reuse') is not None:\n run_input[\"ignoreReuse\"] = kwargs['ignore_reuse']\n\n if dxpy.JOB_ID is None:\n run_input[\"project\"] = project\n\n if kwargs.get('extra_args') is not None:\n merge(run_input, kwargs['extra_args'])\n\n return run_input", "response": "Returns the input hash for the run method."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_run_input_fields_for_applet(executable_input, **kwargs):\n '''\n Takes the same arguments as the run method. Creates an input\n hash for the /applet-xxxx/run method.\n '''\n # Although it says \"for_applet\", this is factored out of\n # DXApplet because apps currently use the same mechanism\n for unsupported_arg in ['stage_instance_types', 'stage_folders', 'rerun_stages', 'ignore_reuse_stages']:\n if kwargs.get(unsupported_arg):\n raise DXError(unsupported_arg + ' is not supported for applets (only workflows)')\n\n return DXExecutable._get_run_input_common_fields(executable_input, **kwargs)", "response": "Returns the input fields needed to run an applet."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nruns a new job in the execution tree.", "response": "def run(self, executable_input, project=None, folder=None, name=None, tags=None, properties=None, details=None,\n instance_type=None, stage_instance_types=None, stage_folders=None, rerun_stages=None, cluster_spec=None,\n depends_on=None, allow_ssh=None, debug=None, delay_workspace_destruction=None, priority=None,\n ignore_reuse=None, ignore_reuse_stages=None, extra_args=None, **kwargs):\n '''\n :param executable_input: Hash of the executable's input arguments\n :type executable_input: dict\n :param project: Project ID of the project context\n :type project: string\n :param folder: Folder in which executable's outputs will be placed in *project*\n :type folder: string\n :param 
name: Name for the new job (default is \"\")\n :type name: string\n :param tags: Tags to associate with the job\n :type tags: list of strings\n :param properties: Properties to associate with the job\n :type properties: dict with string values\n :param details: Details to set for the job\n :type details: dict or list\n :param instance_type: Instance type on which the jobs will be run, or a dict mapping function names to instance type requests\n :type instance_type: string or dict\n :param depends_on: List of data objects or jobs to wait that need to enter the \"closed\" or \"done\" states, respectively, before the new job will be run; each element in the list can either be a dxpy handler or a string ID\n :type depends_on: list\n :param allow_ssh: List of hostname or IP masks to allow SSH connections from\n :type allow_ssh: list\n :param debug: Configuration options for job debugging\n :type debug: dict\n :param delay_workspace_destruction: Whether to keep the job's temporary workspace around for debugging purposes for 3 days after it succeeds or fails\n :type delay_workspace_destruction: boolean\n :param priority: Priority level to request for all jobs created in the execution tree, either \"normal\" or \"high\"\n :type priority: string\n :param ignore_reuse: Disable job reuse for this execution\n :type ignore_reuse: boolean\n :param ignore_reuse_stages: Stages of a workflow (IDs, names, or indices) or \"*\" for which job reuse should be disabled\n :type ignore_reuse_stages: list\n :param extra_args: If provided, a hash of options that will be merged into the underlying JSON given for the API call\n :type extra_args: dict\n :returns: Object handler of the newly created job\n :rtype: :class:`~dxpy.bindings.dxjob.DXJob`\n\n Creates a new job that executes the function \"main\" of this executable with\n the given input *executable_input*.\n\n '''\n # stage_instance_types, stage_folders, and rerun_stages are\n # only supported for workflows, but we include them\n # 
here. Applet-based executables should detect when they\n # receive a truthy workflow-specific value and raise an error.\n run_input = self._get_run_input(executable_input,\n project=project,\n folder=folder,\n name=name,\n tags=tags,\n properties=properties,\n details=details,\n instance_type=instance_type,\n stage_instance_types=stage_instance_types,\n stage_folders=stage_folders,\n rerun_stages=rerun_stages,\n cluster_spec=cluster_spec,\n depends_on=depends_on,\n allow_ssh=allow_ssh,\n ignore_reuse=ignore_reuse,\n ignore_reuse_stages=ignore_reuse_stages,\n debug=debug,\n delay_workspace_destruction=delay_workspace_destruction,\n priority=priority,\n extra_args=extra_args)\n\n return self._run_impl(run_input, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new applet with the given hash and input specification.", "response": "def _new(self, dx_hash, **kwargs):\n '''\n :param dx_hash: Standard hash populated in :func:`dxpy.bindings.DXDataObject.new()` containing attributes common to all data object classes.\n :type dx_hash: dict\n :param runSpec: Run specification\n :type runSpec: dict\n :param dxapi: API version string\n :type dxapi: string\n :param inputSpec: Input specification (optional)\n :type inputSpec: dict\n :param outputSpec: Output specification (optional)\n :type outputSpec: dict\n :param access: Access specification (optional)\n :type access: dict\n :param title: Title string (optional)\n :type title: string\n :param summary: Summary string (optional)\n :type summary: string\n :param description: Description string (optional)\n :type description: string\n\n .. note:: It is highly recommended that the higher-level module\n :mod:`dxpy.app_builder` or (preferably) its frontend `dx build\n `_\n be used instead for applet creation.\n\n Creates an applet with the given parameters. See the API\n documentation for the `/applet/new\n `_\n method for more info. 
The applet is not run until :meth:`run()`\n is called.\n\n '''\n for field in 'runSpec', 'dxapi':\n if field not in kwargs:\n raise DXError(\"%s: Keyword argument %s is required\" % (self.__class__.__name__, field))\n dx_hash[field] = kwargs[field]\n del kwargs[field]\n for field in 'inputSpec', 'outputSpec', 'access', 'title', 'summary', 'description':\n if field in kwargs:\n dx_hash[field] = kwargs[field]\n del kwargs[field]\n\n resp = dxpy.api.applet_new(dx_hash, **kwargs)\n self.set_ids(resp[\"id\"], dx_hash[\"project\"])"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run(self, applet_input, *args, **kwargs):\n # Rename applet_input arg to preserve API compatibility when calling\n # DXApplet.run(applet_input=...)\n return super(DXApplet, self).run(applet_input, *args, **kwargs)", "response": "Creates a new applet with the given input applet_input."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_env_from_args(args):\n ''' Sets the environment variables for this process from arguments (argparse.Namespace)\n and calls dxpy._initialize() to reset any values that it has already set.\n '''\n args = vars(args)\n\n if args.get('apiserver_host') is not None:\n config['DX_APISERVER_HOST'] = args['apiserver_host']\n if args.get('apiserver_port') is not None:\n config['DX_APISERVER_PORT'] = args['apiserver_port']\n if args.get('apiserver_protocol') is not None:\n config['DX_APISERVER_PROTOCOL'] = args['apiserver_protocol']\n if args.get('project_context_id') is not None:\n config['DX_PROJECT_CONTEXT_ID'] = args['project_context_id']\n if args.get('workspace_id') is not None:\n config['DX_WORKSPACE_ID'] = args['workspace_id']\n if args.get('cli_wd') is not None:\n config['DX_CLI_WD'] = args['cli_wd']\n if args.get('security_context') is not None:\n config['DX_SECURITY_CONTEXT'] = args['security_context']\n if args.get('auth_token') is not None:\n 
config['DX_SECURITY_CONTEXT'] = json.dumps({\"auth_token\": args['auth_token'],\n \"auth_token_type\": \"Bearer\"})", "response": "Sets the environment variables for this process from the given arguments."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate_conf(asset_conf):\n if 'name' not in asset_conf:\n raise AssetBuilderException('The asset configuration does not contain the required field \"name\".')\n # TODO: this default is not a good idea, and we will have to remove it once we ask customers to always provide release\n if 'release' not in asset_conf:\n asset_conf['release'] = \"12.04\"\n elif asset_conf['release'] not in ['16.04', '14.04', '12.04']:\n raise AssetBuilderException('The \"release\" field value should be either \"12.04\" (DEPRECATED), \"14.04\", \"16.04\".')\n if 'version' not in asset_conf:\n raise AssetBuilderException('The asset configuration does not contain the required field \"version\". ')\n if 'title' not in asset_conf:\n raise AssetBuilderException('The asset configuration does not contain the required field \"title\". 
')\n if 'description' not in asset_conf:\n raise AssetBuilderException('The asset configuration does not contain the required field \"description\".')\n if 'distribution' in asset_conf:\n if asset_conf['distribution'] != 'Ubuntu':\n raise AssetBuilderException('The distribution may only take the value \"Ubuntu\".')\n else:\n asset_conf['distribution'] = \"Ubuntu\"", "response": "Validates the contents of the conf file and makes sure that the required information is provided."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nuploading an asset to the platform", "response": "def get_asset_tarball(asset_name, src_dir, dest_project, dest_folder, json_out):\n \"\"\"\n If the src_dir contains a \"resources\" directory its contents are archived and\n the archived file is uploaded to the platform\n \"\"\"\n if os.path.isdir(os.path.join(src_dir, \"resources\")):\n temp_dir = tempfile.mkdtemp()\n try:\n resource_file = os.path.join(temp_dir, asset_name + \"_resources.tar.gz\")\n cmd = [\"tar\", \"-czf\", resource_file, \"-C\", os.path.join(src_dir, \"resources\"), \".\"]\n subprocess.check_call(cmd)\n file_id = dx_upload(resource_file, dest_project, dest_folder, json_out)\n return file_id\n finally:\n shutil.rmtree(temp_dir)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef pick(choices, default=None, str_choices=None, prompt=None, allow_mult=False, more_choices=False):\n '''\n :param choices: Strings between which the user will make a choice\n :type choices: list of strings\n :param default: Number the index to be used as the default\n :type default: int or None\n :param str_choices: Strings to be used as aliases for the choices; must be of the same length as choices and each string must be unique\n :type str_choices: list of strings\n :param prompt: A custom prompt to be used\n :type prompt: string\n :param allow_mult: Whether \"*\" is a valid option to select all choices\n :type allow_mult: 
boolean\n :param more_choices: Whether \"m\" is a valid option to ask for more options\n :type more_choices: boolean\n :returns: The user's choice, i.e. one of a numbered index of choices (e.g. 0 for the first item), \"*\" (only if allow_mult is True), or \"m\" (only if more_results is True)\n :rtype: int or string\n :raises: :exc:`EOFError` to signify quitting the process\n\n At most one of allow_mult and more_choices should be set to True.\n '''\n for i in range(len(choices)):\n prefix = str(i) + ') '\n lines = choices[i].split(\"\\n\")\n joiner = \"\\n\" + \" \" * len(prefix)\n print(prefix + joiner.join(lines))\n if more_choices:\n print('m) More options not shown...')\n print('')\n if prompt is None:\n prompt = 'Pick a numbered choice'\n if allow_mult:\n prompt += ' or \"*\" for all'\n elif more_choices:\n prompt += ' or \"m\" for more options'\n if default is not None:\n prompt += ' [' + str(default) + ']'\n prompt += ': '\n while True:\n try:\n value = input(prompt)\n except KeyboardInterrupt:\n print('')\n raise\n except EOFError:\n print('')\n raise\n if default is not None and value == '':\n return default\n if allow_mult and value == '*':\n return value\n if more_choices and value == 'm':\n return value\n try:\n choice = str_choices.index(value)\n return choice\n except:\n pass\n try:\n choice = int(value)\n if choice not in range(len(choices)):\n raise IndexError()\n return choice\n except Exception:\n print('Not a valid selection')", "response": "This function asks the user for a numbered choice from a list of strings."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef object_exists_in_project(obj_id, proj_id):\n '''\n :param obj_id: object ID\n :type obj_id: str\n :param proj_id: project ID\n :type proj_id: str\n\n Returns True if the specified data object can be found in the specified\n project.\n '''\n if obj_id is None:\n raise ValueError(\"Expected obj_id to be a string\")\n if 
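The input-handling loop in `pick` above maps a raw string to a numbered choice, a string alias, or `"*"`. A minimal non-interactive sketch of that mapping (the helper name `parse_choice` is illustrative, not part of dxpy; `None` stands in for the "Not a valid selection" retry path):

```python
def parse_choice(value, n_choices, str_choices=None, allow_mult=False):
    """Map raw user input to a choice index, '*', or None (invalid input)."""
    if allow_mult and value == '*':
        return '*'
    # String aliases are tried before numeric parsing, as in pick().
    if str_choices is not None and value in str_choices:
        return str_choices.index(value)
    try:
        choice = int(value)
    except ValueError:
        return None
    return choice if 0 <= choice < n_choices else None
```

In the real function an invalid value re-prompts instead of returning; the sketch only shows the classification step.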
proj_id is None:\n raise ValueError(\"Expected proj_id to be a string\")\n if not is_container_id(proj_id):\n raise ValueError('Expected %r to be a container ID' % (proj_id,))\n return try_call(dxpy.DXHTTPRequest, '/' + obj_id + '/describe', {'project': proj_id})['project'] == proj_id", "response": "Checks if the specified object exists in the specified project."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the index of the last occurrence of a character in a string where the character is not an escaped character.", "response": "def get_last_pos_of_char(char, string):\n '''\n :param char: The character to find\n :type char: string\n :param string: The string in which to search for *char*\n :type string: string\n :returns: Index in *string* where *char* last appears (unescaped by a preceding \"\\\\\"), -1 if not found\n :rtype: int\n\n Finds the last occurrence of *char* in *string* in which *char* is\n not present as an escaped character.\n\n '''\n pos = len(string)\n while pos > 0:\n pos = string[:pos].rfind(char)\n if pos == -1:\n return -1\n num_backslashes = 0\n test_index = pos - 1\n while test_index >= 0 and string[test_index] == '\\\\':\n num_backslashes += 1\n test_index -= 1\n if num_backslashes % 2 == 0:\n return pos\n return -1"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_first_pos_of_char(char, string):\n '''\n :param char: The character to find\n :type char: string\n :param string: The string in which to search for *char*\n :type string: string\n :returns: Index in *string* where *char* first appears (unescaped by a preceding \"\\\\\"), -1 if not found\n :rtype: int\n\n Finds the first occurrence of *char* in *string* in which *char* is\n not present as an escaped character.\n\n '''\n first_pos = -1\n pos = len(string)\n while pos > 0:\n pos = string[:pos].rfind(char)\n if pos == -1:\n return first_pos\n num_backslashes = 0\n 
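The backslash-parity rule that `get_last_pos_of_char` relies on (an even number of immediately preceding backslashes means the character itself is unescaped) can be exercised standalone; this condensed restatement mirrors the logic shown above:

```python
def last_unescaped(char, string):
    """Index of the last occurrence of char not escaped by a backslash, else -1."""
    pos = len(string)
    while pos > 0:
        pos = string[:pos].rfind(char)
        if pos == -1:
            return -1
        # Count the run of backslashes immediately preceding the match.
        num_backslashes = 0
        i = pos - 1
        while i >= 0 and string[i] == '\\':
            num_backslashes += 1
            i -= 1
        if num_backslashes % 2 == 0:  # even run => the char itself is unescaped
            return pos
    return -1
```

Note that an odd run does not end the search: the loop keeps scanning leftward past the escaped occurrence.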
test_index = pos - 1\n while test_index >= 0 and string[test_index] == '\\\\':\n num_backslashes += 1\n test_index -= 1\n if num_backslashes % 2 == 0:\n first_pos = pos\n return first_pos", "response": "Returns the index of the first occurrence of a character in a string where the character is not an escaped character."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsplit the string at the given character and returns a list of substrings of the string that are not empty.", "response": "def split_unescaped(char, string, include_empty_strings=False):\n '''\n :param char: The character on which to split the string\n :type char: string\n :param string: The string to split\n :type string: string\n :returns: List of substrings of *string*\n :rtype: list of strings\n\n Splits *string* whenever *char* appears without an odd number of\n backslashes ('\\\\') preceding it, discarding any empty string\n elements.\n\n '''\n words = []\n pos = len(string)\n lastpos = pos\n while pos >= 0:\n pos = get_last_pos_of_char(char, string[:lastpos])\n if pos >= 0:\n if pos + 1 != lastpos or include_empty_strings:\n words.append(string[pos + 1: lastpos])\n lastpos = pos\n if lastpos != 0 or include_empty_strings:\n words.append(string[:lastpos])\n words.reverse()\n return words"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncleans a folder path into a folder path and name and entity name.", "response": "def clean_folder_path(path, expected=None):\n '''\n :param path: A folder path to sanitize and parse\n :type path: string\n :param expected: Whether a folder (\"folder\"), a data object (\"entity\"), or either (None) is expected\n :type expected: string or None\n :returns: *folderpath*, *name*\n\n Unescape and parse *path* as a folder path to possibly an entity\n name. Consecutive unescaped forward slashes \"/\" are collapsed to\n a single forward slash. 
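`split_unescaped` walks the string right to left using the last-unescaped-position search, emitting the slice between each pair of unescaped separators. A self-contained sketch of the same walk (helper names are illustrative, not part of dxpy):

```python
def last_unescaped(char, s):
    """Index of the last occurrence of char not escaped by a backslash, else -1."""
    pos = len(s)
    while pos > 0:
        pos = s[:pos].rfind(char)
        if pos == -1:
            return -1
        n, i = 0, pos - 1
        while i >= 0 and s[i] == '\\':
            n, i = n + 1, i - 1
        if n % 2 == 0:
            return pos
    return -1

def split_on_unescaped(char, s, include_empty=False):
    """Split s on unescaped occurrences of char, right to left."""
    words, lastpos = [], len(s)
    pos = lastpos
    while pos >= 0:
        pos = last_unescaped(char, s[:lastpos])
        if pos >= 0:
            if pos + 1 != lastpos or include_empty:
                words.append(s[pos + 1:lastpos])
            lastpos = pos
    # The remainder to the left of the first unescaped separator.
    if lastpos != 0 or include_empty:
        words.append(s[:lastpos])
    words.reverse()
    return words
```

Escaped separators stay inside their segment, and empty segments are dropped unless requested, matching the docstring above.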
If *expected* is \"folder\", *name* is\n always returned as None. Otherwise, the string to the right of\n the last unescaped \"/\" is considered a possible data object name\n and returned as such.\n\n '''\n folders = split_unescaped('/', path)\n\n if len(folders) == 0:\n return '/', None\n\n if expected == 'folder' or folders[-1] == '.' or folders[-1] == '..' or get_last_pos_of_char('/', path) == len(path) - 1:\n entity_name = None\n else:\n entity_name = unescape_name_str(folders.pop())\n\n sanitized_folders = []\n\n for folder in folders:\n if folder == '.':\n pass\n elif folder == '..':\n if len(sanitized_folders) > 0:\n sanitized_folders.pop()\n else:\n sanitized_folders.append(unescape_folder_str(folder))\n\n return ('/' + '/'.join(sanitized_folders)), entity_name"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nresolve a container ID or name from a potential project or container ID.", "response": "def resolve_container_id_or_name(raw_string, is_error=False, multi=False):\n '''\n :param raw_string: A potential project or container ID or name\n :type raw_string: string\n :param is_error: Whether to raise an exception if the project or\n container ID cannot be resolved\n :type is_error: boolean\n :returns: Project or container ID if found or else None\n :rtype: string or None\n :raises: :exc:`ResolutionError` if *is_error* is True and the\n project or container could not be resolved\n\n Unescapes and attempts to resolve *raw_string* to a project or\n container ID.\n\n '''\n string = unescape_name_str(raw_string)\n if is_container_id(string):\n return ([string] if multi else string)\n\n if string in cached_project_names:\n return ([cached_project_names[string]] if multi else cached_project_names[string])\n\n try:\n results = list(dxpy.find_projects(name=string, describe=True, level='VIEW'))\n except Exception as details:\n raise ResolutionError(str(details))\n\n if len(results) == 1:\n cached_project_names[string] = results[0]['id']\n 
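The `.`/`..` handling inside `clean_folder_path` above reduces to a stack walk over already-split path components; a minimal sketch of just that step (the function name is illustrative):

```python
def normalize_components(folders):
    """Collapse '.' and '..' components the way clean_folder_path does."""
    sanitized = []
    for folder in folders:
        if folder == '.':
            continue          # current directory: no-op
        elif folder == '..':
            if sanitized:     # parent directory: pop, but never rise above root
                sanitized.pop()
        else:
            sanitized.append(folder)
    return '/' + '/'.join(sanitized)
```

`..` at the root is silently ignored rather than raising, which is the behavior of the original loop.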
return ([results[0]['id']] if multi else results[0]['id'])\n elif len(results) == 0:\n if is_error:\n raise ResolutionError('Could not find a project named \"' + string + '\"')\n return ([] if multi else None)\n elif not multi:\n if INTERACTIVE_CLI:\n print('Found multiple projects with name \"' + string + '\"')\n choice = pick(['{id} ({level})'.format(id=result['id'], level=result['level'])\n for result in results])\n return results[choice]['id']\n else:\n raise ResolutionError('Found multiple projects with name \"' + string + '\"; please use a project ID to specify the desired project')\n else:\n # len(results) > 1 and multi\n return [result['id'] for result in results]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef resolve_path(path, expected=None, multi_projects=False, allow_empty_string=True):\n '''\n :param path: A path to a data object to attempt to resolve\n :type path: string\n :param expected: one of the following: \"folder\", \"entity\", or None\n to indicate whether the expected path is a folder, a data\n object, or either\n :type expected: string or None\n :returns: A tuple of 3 values: container_ID, folderpath, entity_name\n :rtype: string, string, string\n :raises: exc:`ResolutionError` if the project cannot be resolved by\n name or the path is malformed\n :param allow_empty_string: If false, a ResolutionError will be\n raised if *path* is an empty string. Use this when resolving\n the empty string could result in unexpected behavior.\n :type allow_empty_string: boolean\n\n Attempts to resolve *path* to a project or container ID, a folder\n path, and a data object or folder name. This method will NOT\n raise an exception if the specified folder or object does not\n exist. 
This method is primarily for parsing purposes.\n\n Returns one of the following:\n\n (project, folder, maybe_name)\n where\n project is a container ID (non-null)\n folder is a folder path\n maybe_name is a string if the path could represent a folder or an object, or\n maybe_name is None if the path could only represent a folder\n\n OR\n\n (maybe_project, None, object_id)\n where\n maybe_project is a container ID or None\n object_id is a dataobject, app, or execution (specified by ID, not name)\n\n OR\n\n (job_id, None, output_name)\n where\n job_id and output_name are both non-null\n\n '''\n # TODO: callers that intend to obtain a data object probably won't be happy\n # with an app or execution ID. Callers should probably have to specify\n # whether they are okay with getting an execution ID or not.\n\n # TODO: callers that are looking for a place to write data, rather than\n # read it, probably won't be happy with receiving an object ID, or a\n # JBOR. Callers should probably specify whether they are looking for an\n # \"LHS\" expression or not.\n\n if '_DX_FUSE' in os.environ:\n from xattr import xattr\n path = xattr(path)['project'] + \":\" + xattr(path)['id']\n\n if path == '' and not allow_empty_string:\n raise ResolutionError('Cannot parse \"\"; expected the path to be a non-empty string')\n path = _maybe_convert_stringified_dxlink(path)\n\n # Easy case: \":\"\n if path == ':':\n if dxpy.WORKSPACE_ID is None:\n raise ResolutionError(\"Cannot resolve \\\":\\\": expected a project name or ID \"\n \"to the left of the colon, or for a current project to be set\")\n return ([dxpy.WORKSPACE_ID] if multi_projects else dxpy.WORKSPACE_ID), '/', None\n # Second easy case: empty string\n if path == '':\n if dxpy.WORKSPACE_ID is None:\n raise ResolutionError('Expected a project name or ID to the left of a colon, '\n 'or for a current project to be set')\n return ([dxpy.WORKSPACE_ID] if multi_projects else dxpy.WORKSPACE_ID), dxpy.config.get('DX_CLI_WD', '/'), None\n # 
Third easy case: hash ID\n if is_container_id(path):\n return ([path] if multi_projects else path), '/', None\n elif is_hashid(path):\n return ([dxpy.WORKSPACE_ID] if multi_projects else dxpy.WORKSPACE_ID), None, path\n\n # using a numerical sentinel value to indicate that it hasn't been\n # set in case dxpy.WORKSPACE_ID is actually None\n project = 0\n folderpath = None\n entity_name = None\n wd = dxpy.config.get('DX_CLI_WD', u'/')\n\n # Test for multiple colons\n last_colon = get_last_pos_of_char(':', path)\n if last_colon >= 0:\n last_last_colon = get_last_pos_of_char(':', path[:last_colon])\n if last_last_colon >= 0:\n raise ResolutionError('Cannot parse \"' + path + '\" as a path; at most one unescaped colon can be present')\n\n substrings = split_unescaped(':', path)\n\n if len(substrings) == 2:\n # One of the following:\n # 1) job-id:fieldname\n # 2) project-name-or-id:folderpath/to/possible/entity\n if is_job_id(substrings[0]):\n return ([substrings[0]] if multi_projects else substrings[0]), None, substrings[1]\n\n if multi_projects:\n project_ids = resolve_container_id_or_name(substrings[0], is_error=True, multi=True)\n else:\n project = resolve_container_id_or_name(substrings[0], is_error=True)\n wd = '/'\n elif get_last_pos_of_char(':', path) >= 0:\n # :folderpath/to/possible/entity OR project-name-or-id:\n # Colon is either at the beginning or at the end\n wd = '/'\n if path.startswith(':'):\n if dxpy.WORKSPACE_ID is None:\n raise ResolutionError('Cannot resolve \"%s\": expected a project name or ID to the left of the '\n 'colon, or for a current project to be set' % (path,))\n project = dxpy.WORKSPACE_ID\n else:\n # One nonempty string to the left of a colon\n project = resolve_container_id_or_name(substrings[0], is_error=True)\n folderpath = '/'\n else:\n # One nonempty string, no colon present, do NOT interpret as\n # project\n project = dxpy.WORKSPACE_ID\n if project is None:\n raise ResolutionError('Cannot resolve \"%s\": expected the path to be 
qualified with a project name or ID, '\n 'and a colon; or for a current project to be set' % (path,))\n\n # Determine folderpath and entity_name if necessary\n if folderpath is None:\n folderpath = substrings[-1]\n folderpath, entity_name = clean_folder_path(('' if folderpath.startswith('/') else wd + '/') + folderpath, expected)\n\n if multi_projects:\n return (project_ids if project == 0 else [project]), folderpath, entity_name\n else:\n return project, folderpath, entity_name", "response": "Resolves a path to a container or a data object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _check_resolution_needed(path, project, folderpath, entity_name, expected_classes=None, describe=True,\n enclose_in_list=False):\n \"\"\"\n :param path: Path to the object that required resolution; propagated from\n command-line\n :type path: string\n :param project: The potential project the entity belongs to\n :type project: string\n :param folderpath: Path to the entity\n :type folderpath: string\n :param entity_name: The name of the entity\n :type entity_name: string\n :param expected_classes: A list of expected classes the entity is allowed\n to belong to if it is an ID (e.g. 
\"record\",\n \"file\", \"job\"); if None, then entity_name may be\n any data object class\n :type expected_classes: list or None\n :param describe: Dictionary of inputs to the describe API call; if\n no describe input is provided (default value True), then\n an empty mapping is passed to the describe API method\n :type describe: dict or True\n :param enclose_in_list: Whether the describe output is to be in the form\n of a list (if False, the last return value is a\n dictionary; if True, the last return value is a\n list of one dictionary); it will only have an\n effect if entity_name is a DX ID and is described\n :type enclose_in_list: boolean\n :returns: Whether or not the entity needs to be resolved with a more\n general resolution method, the project, the folderpath, and the\n entity name\n :rtype: tuple of 4 elements\n :raises: ResolutionError if the entity fails to be described\n\n Attempts to resolve the entity to a folder or an object, and describes\n the entity iff it is a DX ID of an expected class in the list\n expected_classes.\n Otherwise, determines whether or not more general resolution may be able\n to resolve the entity.\n\n If a more general resolution method is needed, then the return values will\n look like:\n (True, <project>, <folderpath>, <entity_name>)\n\n If the entity is a DX ID, but is not one of the supplied expected\n classes, then the return values will look like:\n (False, None, None, None)\n\n If the entity can be successfully described, then the return values will\n look like:\n <describe_output> ::= {\"id\": entity_name, \"describe\": {...}}\n <entity_result> ::= <describe_output> || [<describe_output>]\n (False, <project>, <folderpath>, <entity_result>)\n\n If the entity may be a folder, then the return values will look like:\n (False, <project>, <folderpath>, None)\n\n TODO: Allow arbitrary flags for the describe mapping.\n \"\"\"\n if entity_name is None:\n # Definitely a folder (or project)\n # TODO: find a good way to check if folder exists and expected=folder\n return False, project, folderpath, None\n elif is_hashid(entity_name):\n\n found_valid_class = True\n if expected_classes 
is not None:\n found_valid_class = False\n for klass in expected_classes:\n if entity_name.startswith(klass):\n found_valid_class = True\n if not found_valid_class:\n return False, None, None, None\n\n if describe is True:\n describe = {}\n\n # entity is an ID of a valid class, try to describe it\n if 'project' not in describe:\n if project != dxpy.WORKSPACE_ID:\n describe['project'] = project\n elif dxpy.WORKSPACE_ID is not None:\n describe['project'] = dxpy.WORKSPACE_ID\n try:\n desc = dxpy.DXHTTPRequest('/' + entity_name + '/describe', describe)\n desc = dxpy.append_underlying_workflow_describe(desc)\n except Exception as details:\n if 'project' in describe:\n # Now try it without the hint\n del describe['project']\n try:\n desc = dxpy.DXHTTPRequest('/' + entity_name + '/describe', describe)\n except Exception as details2:\n raise ResolutionError(str(details2))\n else:\n raise ResolutionError(str(details))\n result = {\"id\": entity_name, \"describe\": desc}\n if enclose_in_list:\n return False, project, folderpath, [result]\n else:\n return False, project, folderpath, result\n\n else:\n # Need to resolve later\n return True, project, folderpath, entity_name", "response": "Checks if a more general resolution method is needed for the given entity."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nresolve a folder_name to a folder_name in the specified project.", "response": "def _resolve_folder(project, parent_folder, folder_name):\n \"\"\"\n :param project: The project that the folder belongs to\n :type project: string\n :param parent_folder: Full path to the parent folder that contains\n folder_name\n :type parent_folder: string\n :param folder_name: Name of the folder\n :type folder_name: string\n :returns: The path to folder_name, if it exists, in the form of\n \"<parent_folder>/<folder_name>\"\n :rtype: string\n :raises: ResolutionError if folder_name is not a folder, or if\n folder_name points to a folder that does not exist\n\n Attempts to resolve folder_name 
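Because DNAnexus IDs carry their class as a prefix (`file-`, `record-`, `job-`, ...), the expected-class filter in `_check_resolution_needed` is a plain prefix test. A sketch of that check in isolation (the function name is illustrative, not part of dxpy):

```python
def matches_expected_class(entity_id, expected_classes):
    """True if entity_id's class prefix is acceptable; None means any class."""
    if expected_classes is None:
        return True
    return any(entity_id.startswith(klass) for klass in expected_classes)
```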
at location parent_folder in project.\n \"\"\"\n if '/' in folder_name:\n # Then there's no way it's supposed to be a folder\n raise ResolutionError('Object of name ' + str(folder_name) + ' could not be resolved in folder ' +\n str(parent_folder) + ' of project ID ' + str(project))\n possible_folder, _skip = clean_folder_path(parent_folder + '/' + folder_name, 'folder')\n\n if not check_folder_exists(project, parent_folder, folder_name):\n raise ResolutionError('Unable to resolve \"' + folder_name +\n '\" to a data object or folder name in \\'' + parent_folder + \"'\")\n return possible_folder"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _validate_resolution_output_length(path, entity_name, results, allow_mult=False, all_mult=False,\n ask_to_resolve=True):\n \"\"\"\n :param path: Path to the object that required resolution; propagated from\n command-line\n :type path: string\n :param entity_name: Name of the object\n :type entity_name: string\n :param results: Result of resolution; non-empty list of object\n specifications (each specification is a dictionary with\n keys \"project\" and \"id\")\n :type results: list of dictionaries\n :param allow_mult: If True, it is okay to choose from multiple results\n of a single resolved object, or return all results\n found; if False, raise an error if multiple results\n are found\n :type allow_mult: boolean\n :param all_mult: If True, return all results if multiple results are\n found for a single resolved object; if False, user needs\n to choose a single result if multiple are found; the value\n of all_mult only has an effect if allow_mult is True)\n :type all_mult: boolean\n :param ask_to_resolve: Whether picking may be necessary (if True, a\n list is returned; if False, only one result\n is returned); if specified as True, then all\n results will be returned, regardless of the\n values of allow_mult and all_mult\n :type ask_to_resolve: boolean\n :returns: The 
results of resolving entity_name, expected to be of the\n following form:\n <resolved_object> # If only one result is present or the user\n # is able to select from multiple\n OR\n [<resolved_object>, ...] # If multiple results are present and\n # it is allowed\n where <resolved_object> is of the following form:\n {\"project\": <project_id>, \"id\": <object_id>}\n :rtype: dict or list of dicts\n :raises: ValueError if results is empty\n :raises: ResolutionError if too many results are found and the user is\n not in interactive mode and cannot select one\n\n Precondition: results must be a nonempty list\n\n Validates length of results.\n\n If there are multiple results found and the user is in interactive mode,\n then the user will be prompted to select a single result to be returned.\n \"\"\"\n if len(results) == 0:\n raise ValueError(\"'results' must be nonempty.\")\n\n # Caller wants ALL results, so return the entire results list\n # At this point, do not care about the values of allow_mult or all_mult\n if not ask_to_resolve:\n return results\n\n if len(results) > 1:\n # The other way the caller can specify it wants all results is by setting\n # allow_mult to be True and allowing all_mult to be True (or if the object name is a glob pattern)\n if allow_mult and (all_mult or is_glob_pattern(entity_name)):\n return results\n if INTERACTIVE_CLI:\n print('The given path \"' + path + '\" resolves to the following data objects:')\n if any(['describe' not in result for result in results]):\n # findDataObject API call must be made to get 'describe' mappings\n project, folderpath, entity_name = resolve_path(path, expected='entity')\n results = _resolve_global_entity(project, folderpath, entity_name)\n choice = pick([get_ls_l_desc(result['describe']) for result in results],\n allow_mult=allow_mult)\n if allow_mult and choice == '*':\n return results\n else:\n return [results[choice]] if allow_mult else results[choice]\n else:\n raise ResolutionError('The given path \"' + path + '\" resolves to ' +\n str(len(results)) + ' data objects')\n else:\n 
return [results[0]] if allow_mult else results[0]", "response": "Validate the resolution output length of the object that is required by the user."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _resolve_global_entity(project_or_job_id, folderpath, entity_name, describe=True, visibility=\"either\"):\n if is_job_id(project_or_job_id):\n if describe is True:\n describe = {}\n # The following function call will raise a ResolutionError if no results\n # could be found.\n # If the call is successful, then the project will be incorporated into the\n # \"describe\" mapping of the returned dictionaries.\n return resolve_job_ref(project_or_job_id, entity_name, describe=describe)\n else:\n try:\n return list(dxpy.find_data_objects(project=project_or_job_id,\n folder=folderpath,\n name=entity_name,\n name_mode='glob',\n recurse=False,\n describe=describe,\n visibility=visibility))\n except Exception as details:\n raise ResolutionError(str(details))", "response": "Resolves a global entity."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _format_resolution_output(path, project, folderpath, entity_name, result):\n try:\n if len(result) == 0:\n folder = _resolve_folder(project, folderpath, entity_name)\n return {\"project\": project, \"folder\": folder, \"name\": None}\n else:\n validated_results = _validate_resolution_output_length(path, entity_name, result)\n return {\"project\": None if is_job_id(project) else project,\n \"folder\": None, \"name\": validated_results}\n except ResolutionError:\n return {\"project\": None, \"folder\": None, \"name\": None}", "response": "Formats the resolution output of a single entity in the order they were resolved."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef resolve_multiple_existing_paths(paths):\n done_objects = {} # Return value\n to_resolve_in_batch_paths = [] 
# Paths to resolve\n to_resolve_in_batch_inputs = [] # Project, folderpath, and entity name\n for path in paths:\n project, folderpath, entity_name = resolve_path(path, expected='entity')\n try:\n must_resolve, project, folderpath, entity_name = _check_resolution_needed(\n path, project, folderpath, entity_name)\n except:\n must_resolve = False\n\n if must_resolve:\n if is_glob_pattern(entity_name):\n # TODO: Must call findDataObjects because resolveDataObjects does not support glob patterns\n try:\n find_results = _resolve_global_entity(project, folderpath, entity_name)\n done_objects[path] = _format_resolution_output(path, project, folderpath, entity_name,\n find_results)\n except ResolutionError:\n # Catches any ResolutionError thrown by _resolve_global_entity\n done_objects[path] = {\"project\": None, \"folder\": None, \"name\": None}\n else:\n # Prepare batch call for resolveDataObjects\n to_resolve_in_batch_paths.append(path)\n to_resolve_in_batch_inputs.append({\"project\": project, \"folder\": folderpath, \"name\": entity_name})\n else:\n # No need to resolve\n done_objects[path] = {\"project\": project, \"folder\": folderpath, \"name\": entity_name}\n\n # Call resolveDataObjects\n resolution_results = dxpy.resolve_data_objects(to_resolve_in_batch_inputs)\n for path, inputs, result in zip(to_resolve_in_batch_paths, to_resolve_in_batch_inputs,\n resolution_results):\n done_objects[path] = _format_resolution_output(path, inputs[\"project\"], inputs[\"folder\"], inputs[\"name\"],\n result)\n return done_objects", "response": "Resolves multiple paths in a single object or Nones."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nresolves an existing path to a list of results.", "response": "def resolve_existing_path(path, expected=None, ask_to_resolve=True, expected_classes=None, allow_mult=False,\n describe=True, all_mult=False, allow_empty_string=True, visibility=\"either\"):\n '''\n :param expected: one of the 
following: \"folder\", \"entity\", or None to indicate\n whether the expected path is a folder, a data object, or either\n :type expected: string or None\n :param ask_to_resolve: Whether picking may be necessary (if true, a list is returned; if false, only one result is returned)\n :type ask_to_resolve: boolean\n :param expected_classes: A list of expected classes the entity is allowed\n to belong to if it is an ID (e.g. \"record\",\n \"file\", \"job\"); if None, then entity_name may be\n any data object class\n :type expected_classes: list or None\n :param allow_mult: Whether to allow the user to select multiple results from the same path\n :type allow_mult: boolean\n :param describe: Input hash to describe call for the results, or True if no describe input\n is to be provided\n :type describe: dict or True\n :param all_mult: Whether to return all matching results without prompting (only applicable if allow_mult == True)\n :type all_mult: boolean\n :returns: A LIST of results when ask_to_resolve is False or allow_mult is True\n :raises: :exc:`ResolutionError` if the request path was invalid, or a single result was requested and input is not a TTY\n :param allow_empty_string: If false, a ResolutionError will be raised if *path* is an empty string. 
Use this when resolving the empty string could result in unexpected behavior.\n :type allow_empty_string: boolean\n :param visibility: The visibility expected (\"either\", \"hidden\", or \"visible\")\n :type visibility: string\n\n Returns either a list of results or a single result (depending on\n how many is expected; if only one, then an interactive picking of\n a choice will be initiated if input is a tty, or else throw an error).\n\n TODO: Always treats the path as a glob pattern.\n\n Output is of the form {\"id\": id, \"describe\": describe hash} a list\n of those\n\n TODO: Callers should specify exactly what fields they want, and then\n hopefully we can avoid having a default set of fields that may be very\n expensive\n\n NOTE: if expected_classes is provided and conflicts with the class\n of the hash ID, it will return None for all fields.\n '''\n project, folderpath, entity_name = resolve_path(path, expected=expected, allow_empty_string=allow_empty_string)\n must_resolve, project, folderpath, entity_name = _check_resolution_needed(path,\n project,\n folderpath,\n entity_name,\n expected_classes=expected_classes,\n describe=describe,\n enclose_in_list=(not ask_to_resolve or\n allow_mult))\n\n if must_resolve:\n results = _resolve_global_entity(project, folderpath, entity_name, describe=describe, visibility=visibility)\n if len(results) == 0:\n # Could not resolve entity, so it is probably a folder\n folder = _resolve_folder(project, folderpath, entity_name)\n return project, folder, None\n else:\n validated_results = _validate_resolution_output_length(path,\n entity_name,\n results,\n allow_mult=allow_mult,\n all_mult=all_mult,\n ask_to_resolve=ask_to_resolve)\n if is_job_id(project):\n return None, None, validated_results\n return project, None, validated_results\n return project, folderpath, entity_name"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking if a folder exists at the specified path.", "response": "def 
check_folder_exists(project, path, folder_name):\n '''\n :param project: project id\n :type project: string\n :param path: path to where we should look for the folder in question\n :type path: string\n :param folder_name: name of the folder in question\n :type folder_name: string\n :returns: A boolean True or False whether the folder exists at the specified path\n :type: boolean\n :raises: :exc:'ResolutionError' if dxpy.api.container_list_folder raises an exception\n\n This function returns a boolean value that indicates whether a folder of the\n specified name exists at the specified path\n\n Note: this function will NOT work on the root folder case, i.e. '/'\n '''\n if folder_name is None or path is None:\n return False\n try:\n folder_list = dxpy.api.container_list_folder(project, {\"folder\": path, \"only\": \"folders\"})\n except dxpy.exceptions.DXAPIError as e:\n if e.name == 'ResourceNotFound':\n raise ResolutionError(str(e.msg))\n else:\n raise e\n target_folder = path + '/' + folder_name\n # sanitize input if necessary\n target_folder, _skip = clean_folder_path(target_folder, 'folder')\n\n # Check that folder name exists in return from list folder API call\n return target_folder in folder_list['folders']"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntrying to resolve to an app object and returns the describe hash of the app object if found otherwise None.", "response": "def get_app_from_path(path):\n '''\n :param path: A string to attempt to resolve to an app object\n :type path: string\n :returns: The describe hash of the app object if found, or None otherwise\n :rtype: dict or None\n\n This method parses a string that is expected to perhaps refer to\n an app object. If found, its describe hash will be returned. For\n more information on the contents of this hash, see the API\n documentation. 
[TODO: external link here]\n\n '''\n alias = None\n if not path.startswith('app-'):\n path = 'app-' + path\n if '/' in path:\n alias = path[path.find('/') + 1:]\n path = path[:path.find('/')]\n try:\n return dxpy.api.app_describe(path, alias=alias)\n except dxpy.DXAPIError:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntrying to resolve a global workflow object from a path.", "response": "def get_global_workflow_from_path(path):\n '''\n :param path: A string to attempt to resolve to a global workflow object\n :type path: string\n :returns: The describe hash of the global workflow object if found, or None otherwise\n :rtype: dict or None\n\n This method parses a string that is expected to perhaps refer to\n a global workflow object. If found, its describe hash will be returned.\n For more information on the contents of this hash, see the API\n documentation. [TODO: external link here]\n\n '''\n alias = None\n if not path.startswith('globalworkflow-'):\n path = 'globalworkflow-' + path\n if '/' in path:\n alias = path[path.find('/') + 1:]\n path = path[:path.find('/')]\n\n try:\n return dxpy.api.global_workflow_describe(path, alias=alias)\n except dxpy.DXAPIError:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nresolves a global executable.", "response": "def resolve_global_executable(path, is_version_required=False):\n \"\"\"\n :param path: A string which is supposed to identify a global executable (app or workflow)\n :type path: string\n :param is_version_required: If set to True, the path has to specify a specific version/alias, e.g. \"myapp/1.0.0\"\n :type is_version_required: boolean\n :returns: The describe hash of the global executable object (app or workflow)\n :raises: :exc:`ResolutionError` if it cannot be found\n\n *path* is expected to have one of the following forms:\n\n - hash ID, e.g. 
\"globalworkflow-F85Z6bQ0xku1PKY6FjGQ011J\", \"app-FBZ3f200yfzkKYyp9JkFVQ97\"\n - named ID, e.g. \"app-myapp\", \"globalworkflow-myworkflow\"\n - named ID with alias (version or tag), e.g. \"myapp/1.2.0\", \"myworkflow/1.2.0\"\n - named ID with prefix and with alias (version or tag), e.g. \"app-myapp/1.2.0\", \"globalworkflow-myworkflow/1.2.0\"\n \"\"\"\n if not is_hashid(path) and is_version_required and \"/\" not in path:\n raise ResolutionError('Version is required, e.g. \"myexec/1.0.0\"'.format())\n\n # First, check if the prefix is provided, then we don't have to resolve the name\n if path.startswith('app-'):\n return resolve_app(path)\n elif path.startswith('globalworkflow-'):\n return resolve_global_workflow(path)\n\n # If the path doesn't include a prefix, we must try describing\n # as an app and, if that fails, as a global workflow\n desc = get_app_from_path(path)\n if not desc:\n desc = get_global_workflow_from_path(path)\n if desc is None:\n raise ResolutionError(\n 'The given path \"' + path + '\" could not be resolved to an accessible global executable (app or workflow)')\n return desc"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef resolve_to_objects_or_project(path, all_matching_results=False):\n '''\n :param path: Path to resolve\n :type path: string\n :param all_matching_results: Whether to return a list of all matching results\n :type all_matching_results: boolean\n\n A thin wrapper over :meth:`resolve_existing_path` which throws an\n error if the path does not look like a project and doesn't match a\n data object path.\n\n Returns either a list of results or a single result (depending on\n how many is expected; if only one, then an interactive picking of\n a choice will be initiated if input is a tty, or else throw an error).\n '''\n # Attempt to resolve name\n project, folderpath, entity_results = resolve_existing_path(path,\n expected='entity',\n allow_mult=True,\n 
all_mult=all_matching_results)\n if entity_results is None and not is_container_id(path):\n if folderpath != None and folderpath != '/':\n raise ResolutionError('Could not resolve \"' + path + \\\n '''\" to an existing data object or to only a project;\n if you were attempting to refer to a project by name,\n please append a colon \":\" to indicate that it is a project.''')\n return project, folderpath, entity_results", "response": "Resolves a path to a data object or a project."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a new job that will execute a particular function on the current applet.", "response": "def new_dxjob(fn_input, fn_name, name=None, tags=None, properties=None, details=None,\n instance_type=None, depends_on=None,\n **kwargs):\n '''\n :param fn_input: Function input\n :type fn_input: dict\n :param fn_name: Name of the function to be called\n :type fn_name: string\n :param name: Name for the new job (default is \":\")\n :type name: string\n :param tags: Tags to associate with the job\n :type tags: list of strings\n :param properties: Properties to associate with the job\n :type properties: dict with string values\n :param details: Details to set for the job\n :type details: dict or list\n :param instance_type: Instance type on which the job will be run, or a dict mapping function names to instance type requests\n :type instance_type: string or dict\n :param depends_on: List of data objects or jobs to wait that need to enter the \"closed\" or \"done\" states, respectively, before the new job will be run; each element in the list can either be a dxpy handler or a string ID\n :type depends_on: list\n :rtype: :class:`~dxpy.bindings.dxjob.DXJob`\n\n Creates and enqueues a new job that will execute a particular\n function (from the same app or applet as the one the current job is\n running). 
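The `depends_on` contract stated in the parameter list above — handler objects contribute their IDs, plain string IDs pass through, anything else is an error — can be sketched as a standalone helper (`normalize_depends_on` is illustrative, not a dxpy function):

```python
# Illustrative sketch (not part of dxpy) of the depends_on normalization
# described above: handler objects contribute their IDs, strings pass
# through unchanged, and anything else is rejected.
def normalize_depends_on(depends_on):
    if depends_on is None:
        return []
    if not isinstance(depends_on, list):
        raise ValueError('depends_on must be a list')
    ids = []
    for item in depends_on:
        if isinstance(item, str):
            ids.append(item)
        elif hasattr(item, 'get_id'):
            dxid = item.get_id()
            if dxid is None:
                raise ValueError('handler in depends_on has no ID set')
            ids.append(dxid)
        else:
            raise ValueError('depends_on elements must be handlers or strings')
    return ids
```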
Returns the :class:`~dxpy.bindings.dxjob.DXJob` handle for\n the job.\n\n Note that this function is shorthand for::\n\n dxjob = DXJob()\n dxjob.new(fn_input, fn_name, **kwargs)\n\n .. note:: This method is intended for calls made from within\n already-executing jobs or apps. If it is called from outside of\n an Execution Environment, an exception will be thrown. To create\n new jobs from outside the Execution Environment, use\n :func:`dxpy.bindings.dxapplet.DXApplet.run` or\n :func:`dxpy.bindings.dxapp.DXApp.run`.\n\n .. note:: If the environment variable ``DX_JOB_ID`` is not set, this method assumes that it is running within the debug harness, executes the job in place, and provides a debug job handler object that does not have a corresponding remote API job object.\n\n '''\n dxjob = DXJob()\n dxjob.new(fn_input, fn_name, name=name, tags=tags, properties=properties,\n details=details, instance_type=instance_type, depends_on=depends_on, **kwargs)\n return dxjob"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef new(self, fn_input, fn_name, name=None, tags=None, properties=None, details=None,\n instance_type=None, depends_on=None,\n **kwargs):\n '''\n :param fn_input: Function input\n :type fn_input: dict\n :param fn_name: Name of the function to be called\n :type fn_name: string\n :param name: Name for the new job (default is \":\")\n :type name: string\n :param tags: Tags to associate with the job\n :type tags: list of strings\n :param properties: Properties to associate with the job\n :type properties: dict with string values\n :param details: Details to set for the job\n :type details: dict or list\n :param instance_type: Instance type on which the job will be run, or a dict mapping function names to instance type requests\n :type instance_type: string or dict\n :param depends_on: List of data objects or jobs to wait that need to enter the \"closed\" or \"done\" states, respectively, before the new
job will be run; each element in the list can either be a dxpy handler or a string ID\n :type depends_on: list\n\n Creates and enqueues a new job that will execute a particular\n function (from the same app or applet as the one the current job\n is running).\n\n .. note:: This method is intended for calls made from within\n already-executing jobs or apps. If it is called from outside\n of an Execution Environment, an exception will be thrown. To\n create new jobs from outside the Execution Environment, use\n :func:`dxpy.bindings.dxapplet.DXApplet.run` or\n :func:`dxpy.bindings.dxapp.DXApp.run`.\n\n '''\n final_depends_on = []\n if depends_on is not None:\n if isinstance(depends_on, list):\n for item in depends_on:\n if isinstance(item, DXJob) or isinstance(item, DXDataObject):\n if item.get_id() is None:\n raise DXError('A dxpy handler given in depends_on does not have an ID set')\n final_depends_on.append(item.get_id())\n elif isinstance(item, basestring):\n final_depends_on.append(item)\n else:\n raise DXError('Expected elements of depends_on to only be either instances of DXJob or DXDataObject, or strings')\n else:\n raise DXError('Expected depends_on field to be a list')\n\n if 'DX_JOB_ID' in os.environ:\n req_input = {}\n req_input[\"input\"] = fn_input\n req_input[\"function\"] = fn_name\n if name is not None:\n req_input[\"name\"] = name\n if tags is not None:\n req_input[\"tags\"] = tags\n if properties is not None:\n req_input[\"properties\"] = properties\n if instance_type is not None:\n req_input[\"systemRequirements\"] = SystemRequirementsDict.from_instance_type(instance_type, fn_name).as_dict()\n if depends_on is not None:\n req_input[\"dependsOn\"] = final_depends_on\n if details is not None:\n req_input[\"details\"] = details\n resp = dxpy.api.job_new(req_input, **kwargs)\n self.set_id(resp[\"id\"])\n else:\n self.set_id(queue_entry_point(function=fn_name, input_hash=fn_input,\n depends_on=final_depends_on,\n name=name))", "response": "Creates a new 
job with the given parameters."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_id(self, dxid):\n '''\n :param dxid: New job ID to be associated with the handler (localjob IDs also accepted for local runs)\n :type dxid: string\n\n Discards the currently stored ID and associates the handler with *dxid*\n '''\n if dxid is not None:\n if not (isinstance(dxid, basestring) and dxid.startswith('localjob-')):\n # localjob IDs (which do not follow the usual ID\n # syntax) should be allowed; otherwise, follow the\n # usual syntax checking\n verify_string_dxid(dxid, self._class)\n self._dxid = dxid", "response": "Discards the currently stored ID and associates the handler with the given job ID."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef describe(self, fields=None, io=None, **kwargs):\n if fields is not None and io is not None:\n raise DXError('DXJob.describe: cannot provide non-None values for both fields and io')\n describe_input = {}\n if fields is not None:\n describe_input['fields'] = fields\n if io is not None:\n describe_input['io'] = io\n self._desc = dxpy.api.job_describe(self._dxid, describe_input, **kwargs)\n return self._desc", "response": "Returns a dictionary containing information about the job and its related entries."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding each of the specified tags to the job.", "response": "def add_tags(self, tags, **kwargs):\n \"\"\"\n :param tags: Tags to add to the job\n :type tags: list of strings\n\n Adds each of the specified tags to the job. 
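The tag semantics of these job methods can be modeled locally as set operations (a sketch for illustration; the real methods issue API calls such as `job_add_tags`):

```python
# Local model (illustration only) of the tag semantics described here:
# adding is idempotent, and removing an absent tag is a no-op.
def apply_tag_ops(current_tags, add=(), remove=()):
    tags = set(current_tags)
    tags |= set(add)      # add_tags: duplicates have no effect
    tags -= set(remove)   # remove_tags: missing tags are ignored
    return sorted(tags)
```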
Takes no\n action for tags that are already listed for the job.\n\n \"\"\"\n\n dxpy.api.job_add_tags(self._dxid, {\"tags\": tags}, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remove_tags(self, tags, **kwargs):\n\n dxpy.api.job_remove_tags(self._dxid, {\"tags\": tags}, **kwargs)", "response": "Removes the specified tags from the job."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_properties(self, properties, **kwargs):\n\n dxpy.api.job_set_properties(self._dxid, {\"properties\": properties}, **kwargs)", "response": "Sets the properties of the job for the given key - value pairs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef wait_on_done(self, interval=2, timeout=3600*24*7, **kwargs):\n '''\n :param interval: Number of seconds between queries to the job's state\n :type interval: integer\n :param timeout: Maximum amount of time to wait, in seconds, until the job is done running\n :type timeout: integer\n :raises: :exc:`~dxpy.exceptions.DXError` if the timeout is reached before the job has finished running, or :exc:`dxpy.exceptions.DXJobFailureError` if the job fails\n\n Waits until the job has finished running.\n '''\n\n elapsed = 0\n while True:\n state = self._get_state(**kwargs)\n if state == \"done\":\n break\n if state == \"failed\":\n desc = self.describe(**kwargs)\n err_msg = \"Job has failed because of {failureReason}: {failureMessage}\".format(**desc)\n if desc.get(\"failureFrom\") != None and desc[\"failureFrom\"][\"id\"] != desc[\"id\"]:\n err_msg += \" (failure from {id})\".format(id=desc['failureFrom']['id'])\n raise DXJobFailureError(err_msg)\n if state == \"terminated\":\n raise DXJobFailureError(\"Job was terminated.\")\n\n if elapsed >= timeout or elapsed < 0:\n raise DXJobFailureError(\"Reached timeout while waiting for the job to finish\")\n\n 
time.sleep(interval)\n elapsed += interval", "response": "Waits until the job is done."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_state(self, **kwargs):\n '''\n :returns: State of the remote object\n :rtype: string\n\n Queries the API server for the job's state.\n\n Note that this function is shorthand for:\n\n dxjob.describe(io=False, **kwargs)[\"state\"]\n\n '''\n\n return self.describe(fields=dict(state=True), **kwargs)[\"state\"]", "response": "Returns the state of the remote object in the format that is expected by the API server."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_user_id(user_id_or_username):\n user_id_or_username = user_id_or_username.lower()\n if not user_id_or_username.startswith(\"user-\"):\n user_id = \"user-\" + user_id_or_username.lower()\n else:\n user_id = user_id_or_username\n return user_id", "response": "Gets the user ID based on the value specified on the command - line."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_org_invite_args(user_id, args):\n org_invite_args = {\"invitee\": user_id}\n org_invite_args[\"level\"] = args.level\n if \"set_bill_to\" in args and args.set_bill_to is True:\n # /org-x/invite is called in conjunction with /user/new.\n org_invite_args[\"allowBillableActivities\"] = True\n else:\n org_invite_args[\"allowBillableActivities\"] = args.allow_billable_activities\n org_invite_args[\"appAccess\"] = args.app_access\n org_invite_args[\"projectAccess\"] = args.project_access\n org_invite_args[\"suppressEmailNotification\"] = args.no_email\n return org_invite_args", "response": "Returns the arguments to pass to the org - x invite command."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the user new arguments.", "response": "def _get_user_new_args(args):\n \"\"\"\n 
PRECONDITION: `_validate_new_user_input()` has been called on `args`.\n \"\"\"\n user_new_args = {\"username\": args.username,\n \"email\": args.email}\n if args.first is not None:\n user_new_args[\"first\"] = args.first\n if args.last is not None:\n user_new_args[\"last\"] = args.last\n if args.middle is not None:\n user_new_args[\"middle\"] = args.middle\n if args.token_duration is not None:\n token_duration_ms = normalize_timedelta(args.token_duration)\n if token_duration_ms > 30 * 24 * 60 * 60 * 1000:\n raise ValueError(\"--token-duration must be 30 days or less\")\n else:\n user_new_args[\"tokenDuration\"] = token_duration_ms\n if args.occupation is not None:\n user_new_args[\"occupation\"] = args.occupation\n if args.set_bill_to is True:\n user_new_args[\"billTo\"] = args.org\n return user_new_args"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_input_for_run(args, executable, preset_inputs=None, input_name_prefix=None):\n # The following may throw if the executable is a workflow with no\n # input spec available (because a stage is inaccessible)\n exec_inputs = try_call(ExecutableInputs,\n executable,\n input_name_prefix=input_name_prefix,\n active_region=args.region)\n\n # Use input and system requirements from a cloned execution\n if args.input_json is None and args.filename is None:\n # --input-json and --input-json-file completely override input\n # from the cloned job\n exec_inputs.update(args.input_from_clone, strip_prefix=False)\n\n # Update with inputs passed to the this function\n if preset_inputs is not None:\n exec_inputs.update(preset_inputs, strip_prefix=False)\n\n # Update with inputs passed with -i, --input_json, --input_json_file, etc.\n # If batch_tsv is set, do not prompt for missing arguments\n require_all_inputs = (args.batch_tsv is None)\n try_call(exec_inputs.update_from_args, args, require_all_inputs)\n\n return exec_inputs.inputs", "response": "Returns an input 
dictionary that can be passed to the executable. run method."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef register_parser(parser, subparsers_action=None, categories=('other', ), add_help=True):\n name = re.sub('^dx ', '', parser.prog)\n if subparsers_action is None:\n subparsers_action = subparsers\n if isinstance(categories, basestring):\n categories = (categories, )\n\n parser_map[name] = parser\n if add_help:\n _help = subparsers_action._choices_actions[-1].help\n parser_categories['all']['cmds'].append((name, _help))\n for category in categories:\n parser_categories[category]['cmds'].append((name, _help))", "response": "Register a parser with the global parser_map."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning True if the exception is always safe to retry.", "response": "def _is_retryable_exception(e):\n \"\"\"Returns True if the exception is always safe to retry.\n\n This is True if the client was never able to establish a connection\n to the server (for example, name resolution failed or the connection\n could otherwise not be initialized).\n\n Conservatively, if we can't tell whether a network connection could\n have been established, we return False.\n\n \"\"\"\n if isinstance(e, urllib3.exceptions.ProtocolError):\n e = e.args[1]\n if isinstance(e, (socket.gaierror, socket.herror)):\n return True\n if isinstance(e, socket.error) and e.errno in _RETRYABLE_SOCKET_ERRORS:\n return True\n if isinstance(e, urllib3.exceptions.NewConnectionError):\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _extract_msg_from_last_exception():\n ''' Extract a useful error message from the last thrown exception '''\n last_exc_type, last_error, last_traceback = sys.exc_info()\n if isinstance(last_error, exceptions.DXAPIError):\n # Using the same code path as below would not\n # produce 
a useful message when the error contains a\n # 'details' hash (which would have a last line of\n # '}')\n return last_error.error_message()\n else:\n return traceback.format_exc().splitlines()[-1].strip()", "response": "Extract a useful error message from the last thrown exception."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncalculating the delay that we should wait for a resource to be retried.", "response": "def _calculate_retry_delay(response, num_attempts):\n '''\n Returns the time in seconds that we should wait.\n\n :param num_attempts: number of attempts that have been made to the\n resource, including the most recent failed one\n :type num_attempts: int\n '''\n if response is not None and response.status == 503 and 'retry-after' in response.headers:\n try:\n return int(response.headers['retry-after'])\n except ValueError:\n # In RFC 2616, retry-after can be formatted as absolute time\n # instead of seconds to wait. We don't bother to parse that,\n # but the apiserver doesn't generate such responses anyway.\n pass\n if num_attempts <= 1:\n return 1\n num_attempts = min(num_attempts, 7)\n return randint(2 ** (num_attempts - 2), 2 ** (num_attempts - 1))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef DXHTTPRequest(resource, data, method='POST', headers=None, auth=True,\n timeout=DEFAULT_TIMEOUT,\n use_compression=None, jsonify_data=True, want_full_response=False,\n decode_response_body=True, prepend_srv=True, session_handler=None,\n max_retries=DEFAULT_RETRIES, always_retry=False,\n **kwargs):\n '''\n :param resource: API server route, e.g. \"/record/new\". If *prepend_srv* is False, a fully qualified URL is expected. If this argument is a callable, it will be called just before each request attempt, and expected to return a tuple (URL, headers). 
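The no-`retry-after` branch of `_calculate_retry_delay` shown above amounts to capped, randomized exponential backoff; a minimal restatement:

```python
import random

# Minimal restatement of _calculate_retry_delay's fallback branch (shown
# above): 1 second for the first attempt, then a random delay drawn from
# [2**(n-2), 2**(n-1)] with n capped at 7, i.e. at most 64 seconds.
def retry_delay(num_attempts):
    if num_attempts <= 1:
        return 1
    n = min(num_attempts, 7)
    return random.randint(2 ** (n - 2), 2 ** (n - 1))
```

The randomization spreads retries from many clients over a window instead of having them all hit the server at the same instant.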
Headers returned by the callback are updated with *headers* (including headers set by this method).\n :type resource: string\n :param data: Content of the request body\n :type data: list or dict, if *jsonify_data* is True; or string or file-like object, otherwise\n :param headers: Names and values of HTTP headers to submit with the request (in addition to those needed for authentication, compression, or other options specified with the call).\n :type headers: dict\n :param auth:\n Controls the ``Authentication`` header or other means of authentication supplied with the request. If ``True``\n (default), a token is obtained from the ``DX_SECURITY_CONTEXT``. If the value evaluates to false, no action is\n taken to prepare authentication for the request. Otherwise, the value is assumed to be callable, and called with\n three arguments (method, url, headers) and expected to prepare the authentication headers by reference.\n :type auth: tuple, object, True (default), or None\n :param timeout: HTTP request timeout, in seconds\n :type timeout: float\n :param config: *config* value to pass through to :meth:`requests.request`\n :type config: dict\n :param use_compression: Deprecated\n :type use_compression: string or None\n :param jsonify_data: If True, *data* is converted from a Python list or dict to a JSON string\n :type jsonify_data: boolean\n :param want_full_response: If True, the full :class:`requests.Response` object is returned (otherwise, only the content of the response body is returned)\n :type want_full_response: boolean\n :param decode_response_body: If True (and *want_full_response* is False), the response body is decoded and, if it is a JSON string, deserialized. 
Otherwise, the response body is uncompressed if transport compression is on, and returned raw.\n :type decode_response_body: boolean\n :param prepend_srv: If True, prepends the API server location to the URL\n :type prepend_srv: boolean\n :param session_handler: Deprecated.\n :param max_retries: Maximum number of retries to perform for a request. A \"failed\" request is retried if any of the following is true:\n\n - A response is received from the server, and the content length received does not match the \"Content-Length\" header.\n - A response is received from the server, and the response has an HTTP status code in 5xx range.\n - A response is received from the server, the \"Content-Length\" header is not set, and the response JSON cannot be parsed.\n - No response is received from the server, and either *always_retry* is True or the request *method* is \"GET\".\n\n :type max_retries: int\n :param always_retry: If True, indicates that it is safe to retry a request on failure\n\n - Note: It is not guaranteed that the request will *always* be retried on failure; rather, this is an indication to the function that it would be safe to do so.\n\n :type always_retry: boolean\n :returns: Response from API server in the format indicated by *want_full_response* and *decode_response_body*.\n :raises: :exc:`exceptions.DXAPIError` or a subclass if the server returned a non-200 status code; :exc:`requests.exceptions.HTTPError` if an invalid response was received from the server; or :exc:`requests.exceptions.ConnectionError` if a connection cannot be established.\n\n Wrapper around :meth:`requests.request()` that makes an HTTP\n request, inserting authentication headers and (by default)\n converting *data* to JSON.\n\n .. 
note:: Bindings methods that make API calls make the underlying\n HTTP request(s) using :func:`DXHTTPRequest`, and most of them\n will pass any unrecognized keyword arguments you have supplied\n through to :func:`DXHTTPRequest`.\n\n '''\n if headers is None:\n headers = {}\n\n global _UPGRADE_NOTIFY\n\n seq_num = _get_sequence_number()\n\n url = APISERVER + resource if prepend_srv else resource\n method = method.upper() # Convert method name to uppercase, to ease string comparisons later\n\n if auth is True:\n auth = AUTH_HELPER\n\n if auth:\n auth(_RequestForAuth(method, url, headers))\n\n pool_args = {arg: kwargs.pop(arg, None) for arg in (\"verify\", \"cert_file\", \"key_file\")}\n test_retry = kwargs.pop(\"_test_retry_http_request\", False)\n\n # data is a sequence/buffer or a dict\n # serialized_data is a sequence/buffer\n\n if jsonify_data:\n serialized_data = json.dumps(data)\n if 'Content-Type' not in headers and method == 'POST':\n headers['Content-Type'] = 'application/json'\n else:\n serialized_data = data\n\n # If the input is a buffer, its data gets consumed by\n # requests.request (moving the read position). Record the initial\n # buffer position so that we can return to it if the request fails\n # and needs to be retried.\n rewind_input_buffer_offset = None\n if hasattr(data, 'seek') and hasattr(data, 'tell'):\n rewind_input_buffer_offset = data.tell()\n\n # Maintain two separate counters for the number of tries...\n\n try_index = 0 # excluding 503 errors. The number of tries as given here\n # cannot exceed (max_retries + 1).\n try_index_including_503 = 0 # including 503 errors. 
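The buffer-rewind bookkeeping just above (recording `data.tell()` so a retried request can replay the same bytes) can be demonstrated with an in-memory stream; `read_body_with_retries` is a toy stand-in for the request loop, not dxpy code:

```python
import io

# Toy stand-in for the retry loop's buffer handling described above:
# remember the stream position before the first attempt, and seek back
# to it after each read so every attempt sends identical bytes.
def read_body_with_retries(body, attempts):
    offset = body.tell() if hasattr(body, 'seek') and hasattr(body, 'tell') else None
    data = None
    for _ in range(attempts):
        data = body.read()      # what one request attempt would send
        if offset is not None:
            body.seek(offset)   # rewind so a retry replays the payload
    return data
```

Without the rewind, the second attempt would read from the already-consumed position and send an empty body.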
This number is used to\n # do exponential backoff.\n\n retried_responses = []\n _url = None\n while True:\n success, time_started = True, None\n response = None\n req_id = None\n try:\n time_started = time.time()\n _method, _url, _headers = _process_method_url_headers(method, url, headers)\n\n _debug_print_request(_DEBUG, seq_num, time_started, _method, _url, _headers, jsonify_data, data)\n\n body = _maybe_truncate_request(_url, serialized_data)\n\n # throws BadStatusLine if the server returns nothing\n try:\n pool_manager = _get_pool_manager(**pool_args)\n\n _headers['User-Agent'] = USER_AGENT\n _headers['DNAnexus-API'] = API_VERSION\n\n # Converted Unicode headers to ASCII and throw an error if not possible\n def ensure_ascii(i):\n if not isinstance(i, bytes):\n i = i.encode('ascii')\n return i\n\n _headers = {ensure_ascii(k): ensure_ascii(v) for k, v in _headers.items()}\n if USING_PYTHON2:\n encoded_url = _url\n else:\n # This is needed for python 3 urllib\n _headers.pop(b'host', None)\n _headers.pop(b'content-length', None)\n\n # The libraries downstream (http client) require elimination of non-ascii\n # chars from URL.\n # We check if the URL contains non-ascii characters to see if we need to\n # quote it. It is important not to always quote the path (here: parts[2])\n # since it might contain elements (e.g. 
HMAC for api proxy) containing\n # special characters that should not be quoted.\n try:\n ensure_ascii(_url)\n encoded_url = _url\n except UnicodeEncodeError:\n import urllib.parse\n parts = list(urllib.parse.urlparse(_url))\n parts[2] = urllib.parse.quote(parts[2])\n encoded_url = urllib.parse.urlunparse(parts)\n\n response = pool_manager.request(_method, encoded_url, headers=_headers, body=body,\n timeout=timeout, retries=False, **kwargs)\n\n except urllib3.exceptions.ClosedPoolError:\n # If another thread closed the pool before the request was\n # started, will throw ClosedPoolError\n raise exceptions.UrllibInternalError(\"ClosedPoolError\")\n\n _raise_error_for_testing(try_index, method)\n req_id = response.headers.get(\"x-request-id\", \"unavailable\")\n\n if (_UPGRADE_NOTIFY\n and response.headers.get('x-upgrade-info', '').startswith('A recommended update is available')\n and '_ARGCOMPLETE' not in os.environ):\n logger.info(response.headers['x-upgrade-info'])\n try:\n with file(_UPGRADE_NOTIFY, 'a'):\n os.utime(_UPGRADE_NOTIFY, None)\n except:\n pass\n _UPGRADE_NOTIFY = False\n\n # If an HTTP code that is not in the 200 series is received and the content is JSON, parse it and throw the\n # appropriate error. 
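The non-ASCII URL handling above (quote only the path component, leaving the rest of the URL untouched) can be sketched with the standard library:

```python
from urllib.parse import urlparse, urlunparse, quote

# Sketch of the encoding step described above: if the URL is already
# ASCII it passes through unchanged; otherwise only the path component
# (parts[2]) is percent-quoted.
def encode_url(url):
    try:
        url.encode('ascii')
        return url
    except UnicodeEncodeError:
        parts = list(urlparse(url))
        parts[2] = quote(parts[2])  # quote the path only
        return urlunparse(parts)
```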
Otherwise, raise the usual exception.\n if response.status // 100 != 2:\n # response.headers key lookup is case-insensitive\n if response.headers.get('content-type', '').startswith('application/json'):\n try:\n content = response.data.decode('utf-8')\n except AttributeError:\n raise exceptions.UrllibInternalError(\"Content is none\", response.status)\n try:\n content = json.loads(content)\n except ValueError:\n # The JSON is not parsable, but we should be able to retry.\n raise exceptions.BadJSONInReply(\"Invalid JSON received from server\", response.status)\n try:\n error_class = getattr(exceptions, content[\"error\"][\"type\"], exceptions.DXAPIError)\n except (KeyError, AttributeError, TypeError):\n raise exceptions.HTTPError(response.status, content)\n raise error_class(content, response.status, time_started, req_id)\n else:\n try:\n content = response.data.decode('utf-8')\n except AttributeError:\n raise exceptions.UrllibInternalError(\"Content is none\", response.status)\n raise exceptions.HTTPError(\"{} {} [Time={} RequestID={}]\\n{}\".format(response.status,\n response.reason,\n time_started,\n req_id,\n content))\n\n if want_full_response:\n return response\n else:\n if 'content-length' in response.headers:\n if int(response.headers['content-length']) != len(response.data):\n range_str = (' (%s)' % (headers['Range'],)) if 'Range' in headers else ''\n raise exceptions.ContentLengthError(\n \"Received response with content-length header set to %s but content length is %d%s. 
\" +\n \"[Time=%f RequestID=%s]\" %\n (response.headers['content-length'], len(response.data), range_str, time_started, req_id)\n )\n\n content = response.data\n\n response_was_json = False\n\n if decode_response_body:\n content = content.decode('utf-8')\n if response.headers.get('content-type', '').startswith('application/json'):\n try:\n content = json.loads(content)\n except ValueError:\n # The JSON is not parsable, but we should be able to retry.\n raise exceptions.BadJSONInReply(\"Invalid JSON received from server\", response.status)\n else:\n response_was_json = True\n\n req_id = response.headers.get('x-request-id') or \"--\"\n\n _debug_print_response(_DEBUG, seq_num, time_started, req_id, response.status, response_was_json,\n _method, _url, content)\n\n if test_retry:\n retried_responses.append(content)\n if len(retried_responses) == 1:\n continue\n else:\n _set_retry_response(retried_responses[0])\n return retried_responses[1]\n\n return content\n raise AssertionError('Should never reach this line: expected a result to have been returned by now')\n except Exception as e:\n # Avoid reusing connections in the pool, since they may be\n # in an inconsistent state (observed as \"ResponseNotReady\"\n # errors).\n _get_pool_manager(**pool_args).clear()\n success = False\n exception_msg = _extract_msg_from_last_exception()\n if isinstance(e, _expected_exceptions):\n # Total number of allowed tries is the initial try PLUS\n # up to (max_retries) subsequent retries.\n total_allowed_tries = max_retries + 1\n ok_to_retry = False\n is_retryable = always_retry or (method == 'GET') or _is_retryable_exception(e)\n # Because try_index is not incremented until we escape\n # this iteration of the loop, try_index is equal to the\n # number of tries that have failed so far, minus one.\n if try_index + 1 < total_allowed_tries:\n # BadStatusLine --- server did not return anything\n # BadJSONInReply --- server returned JSON that didn't parse properly\n if (response is None\n or 
isinstance(e, (exceptions.ContentLengthError, BadStatusLine, exceptions.BadJSONInReply,\n urllib3.exceptions.ProtocolError, exceptions.UrllibInternalError))):\n ok_to_retry = is_retryable\n else:\n ok_to_retry = 500 <= response.status < 600\n\n # The server has closed the connection prematurely\n if (response is not None\n and response.status == 400 and is_retryable and method == 'PUT'\n and isinstance(e, requests.exceptions.HTTPError)):\n if 'RequestTimeout' in exception_msg:\n logger.info(\"Retrying 400 HTTP error, due to slow data transfer. \" +\n \"Request Time=%f Request ID=%s\", time_started, req_id)\n else:\n logger.info(\"400 HTTP error, of unknown origin, exception_msg=[%s]. \" +\n \"Request Time=%f Request ID=%s\", exception_msg, time_started, req_id)\n ok_to_retry = True\n\n # Unprocessable entity, request has semantical errors\n if response is not None and response.status == 422:\n ok_to_retry = False\n\n if ok_to_retry:\n if rewind_input_buffer_offset is not None:\n data.seek(rewind_input_buffer_offset)\n\n delay = _calculate_retry_delay(response, try_index_including_503 + 1)\n\n range_str = (' (range=%s)' % (headers['Range'],)) if 'Range' in headers else ''\n if response is not None and response.status == 503:\n waiting_msg = 'Waiting %d seconds before retry...' % (delay,)\n else:\n waiting_msg = 'Waiting %d seconds before retry %d of %d...' % (\n delay, try_index + 1, max_retries)\n\n logger.warning(\"[%s] %s %s: %s. %s %s\",\n time.ctime(), method, _url, exception_msg, waiting_msg, range_str)\n time.sleep(delay)\n try_index_including_503 += 1\n if response is None or response.status != 503:\n try_index += 1\n continue\n\n # All retries have been exhausted OR the error is deemed not\n # retryable. 
Print the latest error and propagate it back to the caller.\n if not isinstance(e, exceptions.DXAPIError):\n logger.error(\"[%s] %s %s: %s.\", time.ctime(), method, _url, exception_msg)\n\n if isinstance(e, urllib3.exceptions.ProtocolError) and \\\n 'Connection reset by peer' in exception_msg:\n # If the protocol error is 'connection reset by peer', most likely it is an\n # error in the ssl handshake due to unsupported TLS protocol.\n _test_tls_version()\n\n # Retries have been exhausted, and we are unable to get a full\n # buffer from the data source. Raise a special exception.\n if isinstance(e, urllib3.exceptions.ProtocolError) and \\\n 'Connection broken: IncompleteRead' in exception_msg:\n raise exceptions.DXIncompleteReadsError(exception_msg)\n raise\n finally:\n if success and try_index > 0:\n logger.info(\"[%s] %s %s: Recovered after %d retries\", time.ctime(), method, _url, try_index)\n\n raise AssertionError('Should never reach this line: should have attempted a retry or reraised by now')\n raise AssertionError('Should never reach this line: should never break out of loop')", "response": "Issues an HTTP request to the API server, retrying retryable failures with a backoff delay; returns the (optionally JSON-decoded) response content, or raises the appropriate exception once retries are exhausted or the error is not retryable."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the API server info for the current node.", "response": "def set_api_server_info(host=None, port=None, protocol=None):\n '''\n :param host: API server hostname\n :type host: string\n :param port: API server port. If not specified, *port* is guessed based on *protocol*.\n :type port: string\n :param protocol: Either \"http\" or \"https\"\n :type protocol: string\n\n Overrides the current settings for which API server to communicate\n with. 
Any parameters that are not explicitly specified are not\n overridden.\n '''\n global APISERVER_PROTOCOL, APISERVER_HOST, APISERVER_PORT, APISERVER\n if host is not None:\n APISERVER_HOST = host\n if port is not None:\n APISERVER_PORT = port\n if protocol is not None:\n APISERVER_PROTOCOL = protocol\n if port is None or port == '':\n APISERVER = APISERVER_PROTOCOL + \"://\" + APISERVER_HOST\n else:\n APISERVER = APISERVER_PROTOCOL + \"://\" + APISERVER_HOST + \":\" + str(APISERVER_PORT)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the auth server name from the currently configured API server name.", "response": "def get_auth_server_name(host_override=None, port_override=None, protocol='https'):\n \"\"\"\n Chooses the auth server name from the currently configured API server name.\n\n Raises DXError if the auth server name cannot be guessed and the overrides\n are not provided (or improperly provided).\n \"\"\"\n if host_override is not None or port_override is not None:\n if host_override is None or port_override is None:\n raise exceptions.DXError(\"Both host and port must be specified if either is specified\")\n return protocol + '://' + host_override + ':' + str(port_override)\n elif APISERVER_HOST == 'stagingapi.dnanexus.com':\n return 'https://stagingauth.dnanexus.com'\n elif APISERVER_HOST == 'api.dnanexus.com':\n return 'https://auth.dnanexus.com'\n elif APISERVER_HOST == 'stagingapi.cn.dnanexus.com':\n return 'https://stagingauth.cn.dnanexus.com:7001'\n elif APISERVER_HOST == 'api.cn.dnanexus.com':\n return 'https://auth.cn.dnanexus.com:8001'\n elif APISERVER_HOST == \"localhost\" or APISERVER_HOST == \"127.0.0.1\":\n if \"DX_AUTHSERVER_HOST\" not in os.environ or \"DX_AUTHSERVER_PORT\" not in os.environ:\n err_msg = \"Must set authserver env vars (DX_AUTHSERVER_HOST, DX_AUTHSERVER_PORT) if apiserver is {apiserver}.\"\n raise exceptions.DXError(err_msg.format(apiserver=APISERVER_HOST))\n else:\n return 
os.environ[\"DX_AUTHSERVER_HOST\"] + \":\" + os.environ[\"DX_AUTHSERVER_PORT\"]\n else:\n err_msg = \"Could not determine which auth server is associated with {apiserver}.\"\n raise exceptions.DXError(err_msg.format(apiserver=APISERVER_HOST))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef append_underlying_workflow_describe(globalworkflow_desc):\n if not globalworkflow_desc or \\\n globalworkflow_desc['class'] != 'globalworkflow' or \\\n not 'regionalOptions' in globalworkflow_desc:\n return globalworkflow_desc\n\n for region, config in globalworkflow_desc['regionalOptions'].items():\n workflow_id = config['workflow']\n workflow_desc = dxpy.api.workflow_describe(workflow_id)\n globalworkflow_desc['regionalOptions'][region]['workflowDescribe'] = workflow_desc\n return globalworkflow_desc", "response": "Adds the workflowDescribe field to the config for each region of the globalworkflow."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _construct_jbor(job_id, field_name_and_maybe_index):\n '''\n :param job_id: Job ID\n :type job_id: string\n :param field_name_and_maybe_index: Field name, plus possibly \".N\" where N is an array index\n :type field_name_and_maybe_index: string\n :returns: dict of JBOR\n '''\n link = {\"$dnanexus_link\": {\"job\": job_id}}\n if '.' 
in field_name_and_maybe_index:\n split_by_dot = field_name_and_maybe_index.rsplit('.', 1)\n link[\"$dnanexus_link\"][\"field\"] = split_by_dot[0]\n link[\"$dnanexus_link\"][\"index\"] = int(split_by_dot[1])\n else:\n link[\"$dnanexus_link\"][\"field\"] = field_name_and_maybe_index\n return link", "response": "Constructs a JBOR object from the job ID and field name."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate the internal inputs dictionary with the key value pairs from new_inputs overwriting existing keys.", "response": "def update(self, new_inputs, strip_prefix=True):\n \"\"\"\n Updates the inputs dictionary with the key/value pairs from new_inputs, overwriting existing keys.\n \"\"\"\n if strip_prefix and self.input_name_prefix is not None:\n for i in new_inputs:\n if i.startswith(self.input_name_prefix):\n self.inputs[i[len(self.input_name_prefix):]] = new_inputs[i]\n else:\n self.inputs.update(new_inputs)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _update_requires_resolution_inputs(self):\n input_paths = [quad[1] for quad in self.requires_resolution]\n results = resolve_multiple_existing_paths(input_paths)\n for input_name, input_value, input_class, input_index in self.requires_resolution:\n project = results[input_value]['project']\n folderpath = results[input_value]['folder']\n entity_result = results[input_value]['name']\n if input_class is None:\n if entity_result is not None:\n if isinstance(entity_result, basestring):\n # Case: -ifoo=job-012301230123012301230123\n # Case: -ifoo=analysis-012301230123012301230123\n assert(is_job_id(entity_result) or\n (is_analysis_id(entity_result)))\n input_value = entity_result\n elif is_hashid(input_value):\n input_value = {'$dnanexus_link': entity_result['id']}\n elif 'describe' in entity_result:\n # Then findDataObjects was called (returned describe hash)\n input_value = {\"$dnanexus_link\": {\"project\": 
entity_result['describe']['project'],\n \"id\": entity_result['id']}}\n else:\n # Then resolveDataObjects was called in a batch (no describe hash)\n input_value = {\"$dnanexus_link\": {\"project\": entity_result['project'],\n \"id\": entity_result['id']}}\n if input_index >= 0:\n if self.inputs[input_name][input_index] is not None:\n raise AssertionError(\"Expected 'self.inputs' to have saved a spot for 'input_value'.\")\n self.inputs[input_name][input_index] = input_value\n else:\n if self.inputs[input_name] is not None:\n raise AssertionError(\"Expected 'self.inputs' to have saved a spot for 'input_value'.\")\n self.inputs[input_name] = input_value\n else:\n msg = 'Value provided for input field \"' + input_name + '\" could not be parsed as ' + \\\n input_class + ': '\n if input_value == '':\n raise DXCLIError(msg + 'empty string cannot be resolved')\n if entity_result is None:\n raise DXCLIError(msg + 'could not resolve \\\"' + input_value + '\\\" to a name or ID')\n try:\n dxpy.bindings.verify_string_dxid(entity_result['id'], input_class)\n except DXError as details:\n raise DXCLIError(msg + str(details))\n if is_hashid(input_value):\n input_value = {'$dnanexus_link': entity_result['id']}\n elif 'describe' in entity_result:\n # Then findDataObjects was called (returned describe hash)\n input_value = {'$dnanexus_link': {\"project\": entity_result['describe']['project'],\n \"id\": entity_result['id']}}\n else:\n # Then resolveDataObjects was called in a batch (no describe hash)\n input_value = {\"$dnanexus_link\": {\"project\": entity_result['project'],\n \"id\": entity_result['id']}}\n if input_index != -1:\n # The class is an array, so append the resolved value\n self.inputs[input_name].append(input_value)\n else:\n self.inputs[input_name] = input_value", "response": "Updates self. 
inputs with resolved input values."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nescape the given unicode string.", "response": "def escape_unicode_string(u):\n \"\"\"\n Escapes the nonprintable chars 0-31 and 127, and backslash;\n preferably with a friendly equivalent such as '\\n' if available, but\n otherwise with a Python-style backslashed hex escape.\n \"\"\"\n def replacer(matchobj):\n if ord(matchobj.group(1)) == 127:\n return \"\\\\x7f\"\n if ord(matchobj.group(1)) == 92: # backslash\n return \"\\\\\\\\\"\n return REPLACEMENT_TABLE[ord(matchobj.group(1))]\n return re.sub(\"([\\\\000-\\\\037\\\\134\\\\177])\", replacer, u)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef format_tree(tree, root=None):\n ''' Tree pretty printer.\n Expects trees to be given as mappings (dictionaries). Keys will be printed; values will be traversed if they are\n mappings. To preserve order, use collections.OrderedDict.\n\n Example:\n\n print format_tree(collections.OrderedDict({'foo': 0, 'bar': {'xyz': 0}}))\n\n '''\n formatted_tree = [root] if root is not None else []\n def _format(tree, prefix=' '):\n nodes = list(tree.keys())\n for i in range(len(nodes)):\n node = nodes[i]\n if i == len(nodes)-1 and len(prefix) > 1:\n my_prefix = prefix[:-4] + '\u2514\u2500\u2500 '\n my_multiline_prefix = prefix[:-4] + ' '\n else:\n my_prefix = prefix[:-4] + '\u251c\u2500\u2500 '\n my_multiline_prefix = prefix[:-4] + '\u2502 '\n n = 0\n for line in node.splitlines():\n if n == 0:\n formatted_tree.append(my_prefix + line)\n else:\n formatted_tree.append(my_multiline_prefix + line)\n n += 1\n\n if isinstance(tree[node], collections.Mapping):\n subprefix = prefix\n if i < len(nodes)-1 and len(prefix) > 1 and prefix[-4:] == ' ':\n subprefix = prefix[:-4] + '\u2502 '\n _format(tree[node], subprefix + ' ')\n _format(tree)\n return '\\n'.join(formatted_tree)", "response": "Pretty printer for the tree."} {"SOURCE": 
"codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef format_table(table, column_names=None, column_specs=None, max_col_width=32,\n report_dimensions=False):\n ''' Table pretty printer.\n Expects tables to be given as arrays of arrays.\n\n Example:\n\n print format_table([[1, \"2\"], [3, \"456\"]], column_names=['A', 'B'])\n\n '''\n if len(table) > 0:\n col_widths = [0] * len(list(table)[0])\n elif column_specs is not None:\n col_widths = [0] * (len(column_specs) + 1)\n elif column_names is not None:\n col_widths = [0] * len(column_names)\n my_column_names = []\n if column_specs is not None:\n column_names = ['Row']\n column_names.extend([col['name'] for col in column_specs])\n column_specs = [{'name': 'Row', 'type': 'float'}] + column_specs\n if column_names is not None:\n for i in range(len(column_names)):\n my_col = str(column_names[i])\n if len(my_col) > max_col_width:\n my_col = my_col[:max_col_width-1] + '\u2026'\n my_column_names.append(my_col)\n col_widths[i] = max(col_widths[i], len(my_col))\n my_table = []\n for row in table:\n my_row = []\n for i in range(len(row)):\n my_item = escape_unicode_string(str(row[i]))\n if len(my_item) > max_col_width:\n my_item = my_item[:max_col_width-1] + '\u2026'\n my_row.append(my_item)\n col_widths[i] = max(col_widths[i], len(my_item))\n my_table.append(my_row)\n\n def border(i):\n return WHITE() + i + ENDC()\n\n type_colormap = {'boolean': BLUE(),\n 'integer': YELLOW(),\n 'float': WHITE(),\n 'string': GREEN()}\n for i in 'uint8', 'int16', 'uint16', 'int32', 'uint32', 'int64':\n type_colormap[i] = type_colormap['integer']\n type_colormap['double'] = type_colormap['float']\n\n def col_head(i):\n if column_specs is not None:\n return BOLD() + type_colormap[column_specs[i]['type']] + column_names[i] + ENDC()\n else:\n return BOLD() + WHITE() + column_names[i] + ENDC()\n\n formatted_table = [border('\u250c') + border('\u252c').join(border('\u2500')*i for i in col_widths) + border('\u2510')]\n if 
len(my_column_names) > 0:\n padded_column_names = [col_head(i) + ' '*(col_widths[i]-len(my_column_names[i])) for i in range(len(my_column_names))]\n formatted_table.append(border('\u2502') + border('\u2502').join(padded_column_names) + border('\u2502'))\n formatted_table.append(border('\u251c') + border('\u253c').join(border('\u2500')*i for i in col_widths) + border('\u2524'))\n\n for row in my_table:\n padded_row = [row[i] + ' '*(col_widths[i]-len(row[i])) for i in range(len(row))]\n formatted_table.append(border('\u2502') + border('\u2502').join(padded_row) + border('\u2502'))\n formatted_table.append(border('\u2514') + border('\u2534').join(border('\u2500')*i for i in col_widths) + border('\u2518'))\n\n if report_dimensions:\n return '\\n'.join(formatted_table), len(formatted_table), sum(col_widths) + len(col_widths) + 1\n else:\n return '\\n'.join(formatted_table)", "response": "Table pretty printer.\n Expects tables to be given as arrays of arrays.\n\n Example:\n\n print format_table([[1, \"2\"], [3, \"456\"]], column_names=['A', 'B'])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef flatten_json_array(json_string, array_name):\n\n result = re.sub('\"{}\": \\\\[\\r?\\n\\\\s*'.format(array_name), '\"{}\": ['.format(array_name), json_string, flags=re.MULTILINE)\n flatten_regexp = re.compile('\"{}\": \\\\[(.*)(?<=,)\\r?\\n\\\\s*'.format(array_name), flags=re.MULTILINE)\n while flatten_regexp.search(result):\n result = flatten_regexp.sub('\"{}\": [\\\\1 '.format(array_name), result)\n result = re.sub('\"{}\": \\\\[(.*)\\r?\\n\\\\s*\\\\]'.format(array_name), '\"{}\": [\\\\1]'.format(array_name), result, flags=re.MULTILINE)\n return result", "response": "Flattens all arrays with the same name in the JSON string"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a new remote workflow object with the specified title summary and description.", "response": 
"def new_dxworkflow(title=None, summary=None, description=None, output_folder=None, init_from=None, **kwargs):\n '''\n :param title: Workflow title (optional)\n :type title: string\n :param summary: Workflow summary (optional)\n :type summary: string\n :param description: Workflow description (optional)\n :type description: string\n :param output_folder: Default output folder of the workflow (optional)\n :type output_folder: string\n :param init_from: Another analysis workflow object handler or and analysis (string or handler) from which to initialize the metadata (optional)\n :type init_from: :class:`~dxpy.bindings.dxworkflow.DXWorkflow`, :class:`~dxpy.bindings.dxanalysis.DXAnalysis`, or string (for analysis IDs only)\n :rtype: :class:`DXWorkflow`\n\n Additional optional parameters not listed: all those under\n :func:`dxpy.bindings.DXDataObject.new`, except `details`.\n\n Creates a new remote workflow object with project set to *project*\n and returns the appropriate handler.\n\n Example:\n\n r = dxpy.new_dxworkflow(title=\"My Workflow\", description=\"This workflow contains...\")\n\n Note that this function is shorthand for::\n\n dxworkflow = DXWorkflow()\n dxworkflow.new(**kwargs)\n '''\n dxworkflow = DXWorkflow()\n dxworkflow.new(title=title, summary=summary, description=description, output_folder=output_folder, init_from=init_from, **kwargs)\n return dxworkflow"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _new(self, dx_hash, **kwargs):\n\n def _set_dx_hash(kwargs, dxhash, key, new_key=None):\n new_key = key if new_key is None else new_key\n if key in kwargs:\n if kwargs[key] is not None:\n dxhash[new_key] = kwargs[key]\n del kwargs[key]\n\n if \"init_from\" in kwargs:\n if kwargs[\"init_from\"] is not None:\n if not (isinstance(kwargs[\"init_from\"], (DXWorkflow, DXAnalysis)) or \\\n (isinstance(kwargs[\"init_from\"], basestring) and \\\n 
re.compile('^analysis-[0-9A-Za-z]{24}$').match(kwargs[\"init_from\"]))):\n raise DXError(\"Expected init_from to be an instance of DXWorkflow or DXAnalysis, or to be a string analysis ID.\")\n if isinstance(kwargs[\"init_from\"], basestring):\n dx_hash[\"initializeFrom\"] = {\"id\": kwargs[\"init_from\"]}\n else:\n dx_hash[\"initializeFrom\"] = {\"id\": kwargs[\"init_from\"].get_id()}\n if isinstance(kwargs[\"init_from\"], DXWorkflow):\n dx_hash[\"initializeFrom\"][\"project\"] = kwargs[\"init_from\"].get_proj_id()\n del kwargs[\"init_from\"]\n\n _set_dx_hash(kwargs, dx_hash, \"title\")\n _set_dx_hash(kwargs, dx_hash, \"summary\")\n _set_dx_hash(kwargs, dx_hash, \"description\")\n _set_dx_hash(kwargs, dx_hash, \"output_folder\", \"outputFolder\")\n _set_dx_hash(kwargs, dx_hash, \"stages\")\n _set_dx_hash(kwargs, dx_hash, \"workflow_inputs\", \"inputs\")\n _set_dx_hash(kwargs, dx_hash, \"workflow_outputs\", \"outputs\")\n\n resp = dxpy.api.workflow_new(dx_hash, **kwargs)\n self.set_ids(resp[\"id\"], dx_hash[\"project\"])", "response": "Create a new remote workflow object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_stage_id(self, stage):\n '''\n :param stage: A stage ID, name, or index (stage index is the number n for the nth stage, starting from 0; can be provided as an int or a string)\n :type stage: int or string\n :returns: The stage ID (this is a no-op if it was already a stage ID)\n :raises: :class:`~dxpy.exceptions.DXError` if *stage* could not be parsed, resolved to a stage ID, or it could not be found in the workflow\n '''\n # first, if it is a string, see if it is an integer\n if isinstance(stage, basestring):\n try:\n stage = int(stage)\n except:\n # we'll try parsing it as a string later\n pass\n\n if not isinstance(stage, basestring):\n # Try to parse as stage index; ensure that if it's not a\n # string that it is an integer at this point.\n try:\n stage_index = 
int(stage)\n except:\n raise DXError('DXWorkflow: the given stage identifier was neither a string stage ID nor an integer index')\n if stage_index < 0 or stage_index >= len(self.stages):\n raise DXError('DXWorkflow: the workflow contains ' + str(len(self.stages)) + \\\n ' stage(s), and the numerical value of the given stage identifier is out of range')\n return self.stages[stage_index].get(\"id\")\n\n if re.compile('^([a-zA-Z_]|stage-)[0-9a-zA-Z_]*$').match(stage) is not None:\n # Check if there exists a stage with this stage id\n stage_id_exists = any([stg['id'] for stg in self.stages if stg.get('id') == stage])\n if stage_id_exists:\n return stage\n\n # A stage with the provided ID can't be found in the workflow, so look for it as a name\n stage_ids_matching_name = [stg['id'] for stg in self.stages if stg.get('name') == stage]\n if len(stage_ids_matching_name) == 0:\n raise DXError('DXWorkflow: the given stage identifier ' + stage + ' could not be found as a stage ID nor as a stage name')\n elif len(stage_ids_matching_name) > 1:\n raise DXError('DXWorkflow: more than one workflow stage was found to have the name \"' + stage + '\"')\n else:\n return stage_ids_matching_name[0]", "response": "Get the stage ID from the given stage."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd a new stage to the workflow.", "response": "def add_stage(self, executable, stage_id=None, name=None, folder=None, stage_input=None, instance_type=None,\n edit_version=None, **kwargs):\n '''\n :param executable: string or a handler for an app or applet\n :type executable: string, DXApplet, or DXApp\n :param stage_id: id for the stage (optional)\n :type stage_id: string\n :param name: name for the stage (optional)\n :type name: string\n :param folder: default output folder for the stage; either a relative or absolute path (optional)\n :type folder: string\n :param stage_input: input fields to bind as default inputs for the executable (optional)\n :type 
stage_input: dict\n :param instance_type: Default instance type on which all jobs will be run for this stage, or a dict mapping function names to instance type requests\n :type instance_type: string or dict\n :param edit_version: if provided, the edit version of the workflow that should be modified; if not provided, the current edit version will be used (optional)\n :type edit_version: int\n :returns: ID of the added stage\n :rtype: string\n :raises: :class:`~dxpy.exceptions.DXError` if *executable* is not an expected type :class:`~dxpy.exceptions.DXAPIError` for errors thrown from the API call\n\n Adds the specified executable as a new stage in the workflow.\n '''\n if isinstance(executable, basestring):\n exec_id = executable\n elif isinstance(executable, DXExecutable):\n exec_id = executable.get_id()\n else:\n raise DXError(\"dxpy.DXWorkflow.add_stage: executable must be a string or an instance of DXApplet or DXApp\")\n add_stage_input = {\"executable\": exec_id}\n if stage_id is not None:\n add_stage_input[\"id\"] = stage_id\n if name is not None:\n add_stage_input[\"name\"] = name\n if folder is not None:\n add_stage_input[\"folder\"] = folder\n if stage_input is not None:\n add_stage_input[\"input\"] = stage_input\n if instance_type is not None:\n add_stage_input[\"systemRequirements\"] = SystemRequirementsDict.from_instance_type(instance_type).as_dict()\n self._add_edit_version_to_request(add_stage_input, edit_version)\n try:\n result = dxpy.api.workflow_add_stage(self._dxid, add_stage_input, **kwargs)\n finally:\n self.describe() # update cached describe\n return result['stage']"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_stage(self, stage, **kwargs):\n '''\n :param stage: A number for the stage index (for the nth stage, starting from 0), or a string of the stage index, name, or ID\n :type stage: int or string\n :returns: Hash of stage descriptor in workflow\n '''\n stage_id = 
self._get_stage_id(stage)\n result = next((stage for stage in self.stages if stage['id'] == stage_id), None)\n if result is None:\n raise DXError('The stage ID ' + stage_id + ' could not be found')\n return result", "response": "Get the stage descriptor in the workflow."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef remove_stage(self, stage, edit_version=None, **kwargs):\n '''\n :param stage: A number for the stage index (for the nth stage, starting from 0), or a string of the stage index, name, or ID\n :type stage: int or string\n :param edit_version: if provided, the edit version of the workflow that should be modified; if not provided, the current edit version will be used (optional)\n :type edit_version: int\n :returns: Stage ID that was removed\n :rtype: string\n\n Removes the specified stage from the workflow\n '''\n stage_id = self._get_stage_id(stage)\n remove_stage_input = {\"stage\": stage_id}\n self._add_edit_version_to_request(remove_stage_input, edit_version)\n try:\n dxpy.api.workflow_remove_stage(self._dxid, remove_stage_input, **kwargs)\n finally:\n self.describe() # update cached describe\n return stage_id", "response": "Removes the specified stage from the workflow."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmoving the specified stage to a new index.", "response": "def move_stage(self, stage, new_index, edit_version=None, **kwargs):\n '''\n :param stage: A number for the stage index (for the nth stage, starting from 0), or a string of the stage index, name, or ID\n :type stage: int or string\n :param new_index: The new position in the order of stages that the specified stage should have (where 0 indicates the first stage)\n :type new_index: int\n :param edit_version: if provided, the edit version of the workflow that should be modified; if not provided, the current edit version will be used (optional)\n :type edit_version: int\n\n Removes the 
specified stage from its current position and re-inserts it at the given new index\n '''\n stage_id = self._get_stage_id(stage)\n move_stage_input = {\"stage\": stage_id,\n \"newIndex\": new_index}\n self._add_edit_version_to_request(move_stage_input, edit_version)\n try:\n dxpy.api.workflow_move_stage(self._dxid, move_stage_input, **kwargs)\n finally:\n self.describe()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update(self, title=None, unset_title=False, summary=None, description=None,\n output_folder=None, unset_output_folder=False,\n workflow_inputs=None, unset_workflow_inputs=False,\n workflow_outputs=None, unset_workflow_outputs=False,\n stages=None, edit_version=None, **kwargs):\n '''\n :param title: workflow title to set; cannot be provided with *unset_title* set to True\n :type title: string\n :param unset_title: whether to unset the title; cannot be provided with string value for *title*\n :type unset_title: boolean\n :param summary: workflow summary to set\n :type summary: string\n :param description: workflow description to set\n :type description: string\n :param output_folder: new default output folder for the workflow\n :type output_folder: string\n :param unset_folder: whether to unset the default output folder; cannot be True with string value for *output_folder*\n :type unset_folder: boolean\n :param stages: updates to the stages to make; see API documentation for /workflow-xxxx/update for syntax of this field; use :meth:`update_stage()` to update a single stage\n :type stages: dict\n :param workflow_inputs: updates to the workflow input to make; see API documentation for /workflow-xxxx/update for syntax of this field\n :type workflow_inputs: dict\n :param workflow_outputs: updates to the workflow output to make; see API documentation for /workflow-xxxx/update for syntax of this field\n :type workflow_outputs: dict\n :param edit_version: if provided, the edit version of the workflow that should be modified; if not 
provided, the current edit version will be used (optional)\n :type edit_version: int\n\n Make general metadata updates to the workflow\n '''\n update_input = {}\n if title is not None and unset_title:\n raise DXError('dxpy.DXWorkflow.update: cannot provide both \"title\" and set \"unset_title\"')\n if output_folder is not None and unset_output_folder:\n raise DXError('dxpy.DXWorkflow.update: cannot provide both \"output_folder\" and set \"unset_output_folder\"')\n if workflow_inputs is not None and unset_workflow_inputs:\n raise DXError('dxpy.DXWorkflow.update: cannot provide both \"workflow_inputs\" and set \"unset_workflow_inputs\"')\n if workflow_outputs is not None and unset_workflow_outputs:\n raise DXError('dxpy.DXWorkflow.update: cannot provide both \"workflow_outputs\" and set \"unset_workflow_outputs\"')\n\n if title is not None:\n update_input[\"title\"] = title\n elif unset_title:\n update_input[\"title\"] = None\n if summary is not None:\n update_input[\"summary\"] = summary\n if description is not None:\n update_input[\"description\"] = description\n if output_folder is not None:\n update_input[\"outputFolder\"] = output_folder\n elif unset_output_folder:\n update_input[\"outputFolder\"] = None\n if stages is not None:\n update_input[\"stages\"] = stages\n if workflow_inputs is not None:\n update_input[\"inputs\"] = workflow_inputs\n elif unset_workflow_inputs:\n update_input[\"inputs\"] = None\n if workflow_outputs is not None:\n update_input[\"outputs\"] = workflow_outputs\n elif unset_workflow_outputs:\n update_input[\"outputs\"] = None\n\n # only perform update if there are changes to make\n if update_input:\n self._add_edit_version_to_request(update_input, edit_version)\n try:\n dxpy.api.workflow_update(self._dxid, update_input, **kwargs)\n finally:\n self.describe()", "response": "Updates the metadata of a specific workflow entry."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the workflow entry for a specific 
stage.", "response": "def update_stage(self, stage, executable=None, force=False,\n name=None, unset_name=False, folder=None, unset_folder=False, stage_input=None,\n instance_type=None, edit_version=None, **kwargs):\n '''\n :param stage: A number for the stage index (for the nth stage, starting from 0), or a string stage index, name, or ID\n :type stage: int or string\n :param executable: string or a handler for an app or applet\n :type executable: string, DXApplet, or DXApp\n :param force: whether to use *executable* even if it is incompatible with the previous executable's spec\n :type force: boolean\n :param name: new name for the stage; cannot be provided with *unset_name* set to True\n :type name: string\n :param unset_name: whether to unset the stage name; cannot be True with string value for *name*\n :type unset_name: boolean\n :param folder: new default output folder for the stage; either a relative or absolute path (optional)\n :type folder: string\n :param unset_folder: whether to unset the stage folder; cannot be True with string value for *folder*\n :type unset_folder: boolean\n :param stage_input: input fields to bind as default inputs for the executable (optional)\n :type stage_input: dict\n :param instance_type: Default instance type on which all jobs will be run for this stage, or a dict mapping function names to instance type requests\n :type instance_type: string or dict\n :param edit_version: if provided, the edit version of the workflow that should be modified; if not provided, the current edit version will be used (optional)\n :type edit_version: int\n\n Updates the specified stage of the workflow\n '''\n stage_id = self._get_stage_id(stage)\n\n if name is not None and unset_name:\n raise DXError('dxpy.DXWorkflow.update_stage: cannot provide both \"name\" and set \"unset_name\"')\n if folder is not None and unset_folder:\n raise DXError('dxpy.DXWorkflow.update_stage: cannot provide both \"folder\" and set \"unset_folder\"')\n\n if executable 
is not None:\n if isinstance(executable, basestring):\n exec_id = executable\n elif isinstance(executable, DXExecutable):\n exec_id = executable.get_id()\n else:\n raise DXError(\"dxpy.DXWorkflow.update_stage: executable (if provided) must be a string or an instance of DXApplet or DXApp\")\n update_stage_exec_input = {\"stage\": stage_id,\n \"executable\": exec_id,\n \"force\": force}\n self._add_edit_version_to_request(update_stage_exec_input, edit_version)\n try:\n dxpy.api.workflow_update_stage_executable(self._dxid, update_stage_exec_input, **kwargs)\n finally:\n self.describe() # update cached describe\n\n # Construct hash and update the workflow's stage if necessary\n update_stage_input = {}\n if name is not None:\n update_stage_input[\"name\"] = name\n elif unset_name:\n update_stage_input[\"name\"] = None\n if folder:\n update_stage_input[\"folder\"] = folder\n elif unset_folder:\n update_stage_input[\"folder\"] = None\n if stage_input:\n update_stage_input[\"input\"] = stage_input\n if instance_type is not None:\n update_stage_input[\"systemRequirements\"] = SystemRequirementsDict.from_instance_type(instance_type).as_dict()\n if update_stage_input:\n update_input = {\"stages\": {stage_id: update_stage_input}}\n self._add_edit_version_to_request(update_input, edit_version)\n try:\n dxpy.api.workflow_update(self._dxid, update_input, **kwargs)\n finally:\n self.describe()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_input_name(self, input_str, region=None, describe_output=None):\n '''\n :param input_str: A string of one of the forms: \"<input name>\", \"<exported input name>\", \"<stage ID>.<input name>\", \"<stage index>.<input name>\", \"<stage name>.<input name>\"\n :type input_str: string\n :returns: If the given form was one of those which uses the stage index or stage name, it is translated to the stage ID for use in the API call (stage name takes precedence)\n '''\n if '.'
in input_str:\n stage_identifier, input_name = input_str.split('.', 1)\n # Try to parse as a stage ID or name\n return self._get_stage_id(stage_identifier) + '.' + input_name\n\n return input_str", "response": "Returns the input name for the given input string."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run(self, workflow_input, *args, **kwargs):\n '''\n :param workflow_input: Dictionary of the workflow's input arguments; see below for more details\n :type workflow_input: dict\n :param instance_type: Instance type on which all stages' jobs will be run, or a dict mapping function names to instance types. These may be overridden on a per-stage basis if stage_instance_types is specified.\n :type instance_type: string or dict\n :param stage_instance_types: A dict mapping stage IDs, names, or indices to either a string (representing an instance type to be used for all functions in that stage), or a dict mapping function names to instance types.\n :type stage_instance_types: dict\n :param stage_folders: A dict mapping stage IDs, names, indices, and/or the string \"*\" to folder values to be used for the stages' output folders (use \"*\" as the default for all unnamed stages)\n :type stage_folders: dict\n :param rerun_stages: A list of stage IDs, names, indices, and/or the string \"*\" to indicate which stages should be run even if there are cached executions available\n :type rerun_stages: list of strings\n :param ignore_reuse_stages: Stages of a workflow (IDs, names, or indices) or \"*\" for which job reuse should be disabled\n :type ignore_reuse_stages: list\n :returns: Object handler of the newly created analysis\n :rtype: :class:`~dxpy.bindings.dxanalysis.DXAnalysis`\n\n Run the associated workflow. 
See :meth:`dxpy.bindings.dxapplet.DXExecutable.run` for additional args.\n\n When providing input for the workflow, keys should be of one of the following forms:\n\n * \"N.name\" where *N* is the stage number, and *name* is the\n name of the input, e.g. \"0.reads\" if the first stage takes\n in an input called \"reads\"\n\n * \"stagename.name\" where *stagename* is the stage name, and\n *name* is the name of the input within the stage\n\n * \"stageID.name\" where *stageID* is the stage ID, and *name*\n is the name of the input within the stage\n\n * \"name\" where *name* is the name of a workflow level input\n (defined in inputs) or the name that has been\n exported for the workflow (this name will appear as a key\n in the \"inputSpec\" of this workflow's description if it has\n been exported for this purpose)\n\n '''\n return super(DXWorkflow, self).run(workflow_input, *args, **kwargs)", "response": "Runs the associated workflow."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_folder_matches(text, delim_pos, dxproj, folderpath):\n '''\n :param text: String to be tab-completed; still in escaped form\n :type text: string\n :param delim_pos: index of last unescaped \"/\" in text\n :type delim_pos: int\n :param dxproj: DXProject handler to use\n :type dxproj: DXProject\n :param folderpath: Unescaped path in which to search for folder matches\n :type folderpath: string\n :returns: List of matches\n :rtype: list of strings\n\n Members of the returned list are guaranteed to start with *text*\n and be in escaped form for consumption by the command-line.\n '''\n try:\n folders = dxproj.list_folder(folder=folderpath, only='folders')['folders']\n folder_names = [name[name.rfind('/') + 1:] for name in folders]\n if text != '' and delim_pos != len(text) - 1:\n folder_names += ['.', '..']\n prefix = text[:delim_pos + 1]\n return [prefix + f + '/' for f in folder_names if (prefix + f + '/').startswith(text)]\n 
except:\n return []", "response": "Returns a list of strings that are part of the given text in the specified folder."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a list of data objects that match the given text.", "response": "def get_data_matches(text, delim_pos, dxproj, folderpath, classname=None,\n typespec=None, visibility=None):\n '''\n :param text: String to be tab-completed; still in escaped form\n :type text: string\n :param delim_pos: index of last unescaped \"/\" or \":\" in text\n :type delim_pos: int\n :param dxproj: DXProject handler to use\n :type dxproj: DXProject\n :param folderpath: Unescaped path in which to search for data object matches\n :type folderpath: string\n :param classname: Data object class by which to restrict the search (None for no restriction on class)\n :type classname: string\n :param visibility: Visibility to constrain the results to; default is \"visible\" for empty strings, \"either\" for nonempty\n :type visibility: string\n :returns: List of matches\n :rtype: list of strings\n\n Members of the returned list are guaranteed to start with *text*\n and be in escaped form for consumption by the command-line.\n '''\n #unescaped_text = unescape_completion_name_str(text[delim_pos + 1:])\n unescaped_text = text[delim_pos + 1:]\n\n if visibility is None:\n if text != '' and delim_pos != len(text) - 1:\n visibility = \"either\"\n else:\n visibility = \"visible\"\n\n try:\n results = list(dxpy.find_data_objects(project=dxproj.get_id(),\n folder=folderpath,\n name=unescaped_text + \"*\",\n name_mode=\"glob\",\n recurse=False,\n visibility=visibility,\n classname=classname,\n limit=100,\n describe=dict(fields=dict(name=True)),\n typename=typespec))\n prefix = '' if text == '' else text[:delim_pos + 1]\n return [prefix + escape_name(result['describe']['name']) for result in results]\n except:\n return []"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following 
Python 3 function doing\ndef verify_string_dxid(dxid, expected_classes):\n '''\n :param dxid: Value to verify as a DNAnexus ID of class *expected_class*\n :param expected_classes: Single string or list of strings of allowed classes of the ID, e.g. \"file\" or [\"project\", \"container\"]\n :type expected_classes: string or list of strings\n :raises: :exc:`~dxpy.exceptions.DXError` if *dxid* is not a string or is not a valid DNAnexus ID of the expected class\n '''\n if isinstance(expected_classes, basestring):\n expected_classes = [expected_classes]\n if not isinstance(expected_classes, list) or len(expected_classes) == 0:\n raise DXError('verify_string_dxid: expected_classes should be a string or list of strings')\n if not (isinstance(dxid, basestring) and\n re.match('^(' + '|'.join(expected_classes) + ')-[0-9a-zA-Z]{24}$', dxid)):\n if len(expected_classes) == 1:\n str_expected_classes = expected_classes[0]\n elif len(expected_classes) == 2:\n str_expected_classes = ' or '.join(expected_classes)\n else:\n str_expected_classes = ', '.join(expected_classes[:-1]) + ', or ' + expected_classes[-1]\n\n raise DXError('Invalid ID of class %s: %r' % (str_expected_classes, dxid))", "response": "Verifies that the given string is a valid DNAnexus ID of the class expected_class."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_id(self, dxid):\n '''\n :param dxid: New ID to be associated with the handler\n :type dxid: string\n\n Discards the currently stored ID and associates the handler with *dxid*\n '''\n if dxid is not None:\n verify_string_dxid(dxid, self._class)\n\n self._dxid = dxid", "response": "Sets the ID of the currently stored handler."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef new(self, **kwargs):\n '''\n :param project: Project ID in which to create the new remote object\n :type project: string\n :param name: Name for the object\n :type name: 
string\n :param tags: Tags to add for the object\n :type tags: list of strings\n :param types: Types to add to the object\n :type types: list of strings\n :param hidden: Whether the object is to be hidden\n :type hidden: boolean\n :param properties: Properties given as key-value pairs of strings\n :type properties: dict\n :param details: Details to set for the object\n :type details: dict or list\n :param folder: Full path to the destination folder\n :type folder: string\n :param parents: If True, recursively create all parent folders if they are missing\n :type parents: boolean\n\n :rtype: :class:`DXDataObject`\n\n Creates a data object with the given fields. Only *project* is\n required, and only if no default project or workspace is set;\n the remaining arguments are optional and have default behavior\n as specified in the API documentation for the ``/new`` method of\n each data object class.\n\n '''\n if not hasattr(self, '_class'):\n raise NotImplementedError(\n \"DXDataObject is an abstract class; a subclass should \" + \\\n \"be initialized instead.\")\n\n dx_hash, remaining_kwargs = self._get_creation_params(kwargs)\n self._new(dx_hash, **remaining_kwargs)", "response": "Creates a new object in the specified project."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the ID and project of the currently stored object.", "response": "def set_ids(self, dxid, project=None):\n '''\n :param dxid: Object ID or a DNAnexus link (a dict with key \"$dnanexus_link\"); if a project ID is provided in the DNAnexus link, it will be used as *project* unless *project* has been explicitly provided\n :type dxid: string or dict\n :param project: Project ID\n :type project: string\n\n Discards the currently stored ID and associates the handler with\n *dxid*.
Associates the handler with the copy of the object in\n *project* (if no project is explicitly specified, the default\n data container is used).\n\n '''\n if is_dxlink(dxid):\n dxid, project_from_link = get_dxlink_ids(dxid)\n if project is None:\n project = project_from_link\n\n if dxid is not None:\n verify_string_dxid(dxid, self._class)\n self._dxid = dxid\n\n if project is None:\n self._proj = dxpy.WORKSPACE_ID\n elif project is not None:\n verify_string_dxid(project, ['project', 'container'])\n self._proj = project"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a dict containing the description of the object.", "response": "def describe(self, incl_properties=False, incl_details=False, fields=None, default_fields=None, **kwargs):\n \"\"\"\n :param fields: set of fields to include in the output, for\n example ``{'name', 'modified'}``. The field ``id`` is always\n implicitly included. If ``fields`` is specified, the default\n fields are not included (that is, only the fields specified\n here, and ``id``, are included) unless ``default_fields`` is\n additionally set to True.\n :type fields: set or sequence of str\n :param default_fields: if True, include the default fields in\n addition to fields requested in ``fields``, if any; if\n False, only the fields specified in ``fields``, if any, are\n returned (defaults to False if ``fields`` is specified, True\n otherwise)\n :type default_fields: bool\n :param incl_properties: if true, includes the properties of the\n object in the output (deprecated; use\n ``fields={'properties'}, default_fields=True`` instead)\n :type incl_properties: bool\n :param incl_details: if true, includes the details of the object\n in the output (deprecated; use ``fields={'details'},\n default_fields=True`` instead)\n :type incl_details: bool\n :returns: Description of the remote object\n :rtype: dict\n\n Return a dict with a description of the remote data object.\n\n The result includes the key-value 
pairs as specified in the API\n documentation for the ``/describe`` method of each data object\n class. The API defines some default set of fields that will be\n included (at a minimum, \"id\", \"class\", etc. should be available,\n and there may be additional fields that vary based on the\n class); the set of fields may be customized using ``fields`` and\n ``default_fields``.\n\n Any project-specific metadata fields (name, properties, and\n tags) are obtained from the copy of the object in the project\n associated with the handler, if possible.\n\n \"\"\"\n\n if self._dxid is None:\n raise DXError('This {handler} handler has not been initialized with a {_class} ID and cannot be described'.format(\n handler=self.__class__.__name__,\n _class=self._class)\n )\n\n if (incl_properties or incl_details) and (fields is not None or default_fields is not None):\n raise ValueError('Cannot specify properties or details in conjunction with fields or default_fields')\n\n if incl_properties or incl_details:\n describe_input = dict(properties=incl_properties, details=incl_details)\n else:\n describe_input = {}\n if default_fields is not None:\n describe_input['defaultFields'] = default_fields\n if fields is not None:\n describe_input['fields'] = {field_name: True for field_name in fields}\n\n if self._proj is not None:\n describe_input[\"project\"] = self._proj\n\n self._desc = self._describe(self._dxid, describe_input, **kwargs)\n\n return self._desc"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_types(self, types, **kwargs):\n\n self._add_types(self._dxid, {\"types\": types}, **kwargs)", "response": "Adds each of the specified types to the object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef remove_types(self, types, **kwargs):\n\n self._remove_types(self._dxid, {\"types\": types}, **kwargs)", "response": "Removes the specified types from the 
object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the details for the object with the specified value.", "response": "def set_details(self, details, **kwargs):\n \"\"\"\n :param details: Details to set for the object\n :type details: dict or list\n :raises: :class:`~dxpy.exceptions.DXAPIError` if the object is not in the \"open\" state\n\n Sets the details for the remote object with the specified value.\n If the input contains the string ``\"$dnanexus_link\"`` as a key\n in a hash, it must be the only key in the hash, and its value\n must be a valid ID of an existing object.\n\n \"\"\"\n\n return self._set_details(self._dxid, details, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrename the object in the specified project.", "response": "def rename(self, name, **kwargs):\n \"\"\"\n :param name: New name for the object\n :type name: string\n\n Renames the remote object.\n\n The name is changed on the copy of the object in the project\n associated with the handler.\n\n \"\"\"\n\n return self._rename(self._dxid, {\"project\": self._proj,\n \"name\": name}, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the properties of the object for the given key - value pairs.", "response": "def set_properties(self, properties, **kwargs):\n \"\"\"\n :param properties: Property names and values given as key-value pairs of strings\n :type properties: dict\n\n Given key-value pairs in *properties* for property names and\n values, the properties are set on the object for the given\n property names. Any property with a value of :const:`None`\n indicates the property will be deleted.\n\n .. 
note:: Any existing properties not mentioned in *properties*\n are not modified by this method.\n\n The properties are written to the copy of the object in the\n project associated with the handler.\n\n The following example sets the properties for \"name\" and\n \"project\" for a remote file::\n\n dxfile.set_properties({\"name\": \"George\", \"project\": \"cancer\"})\n\n Subsequently, the following would delete the property \"project\"::\n\n dxfile.set_properties({\"project\": None})\n\n \"\"\"\n\n self._set_properties(self._dxid, {\"project\": self._proj,\n \"properties\": properties},\n **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds each of the specified tags to the remote object.", "response": "def add_tags(self, tags, **kwargs):\n \"\"\"\n :param tags: Tags to add to the object\n :type tags: list of strings\n\n Adds each of the specified tags to the remote object. Takes no\n action for tags that are already listed for the object.\n\n The tags are added to the copy of the object in the project\n associated with the handler.\n\n \"\"\"\n\n self._add_tags(self._dxid, {\"project\": self._proj, \"tags\": tags},\n **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nremoves the specified tags from the object.", "response": "def remove_tags(self, tags, **kwargs):\n \"\"\"\n :param tags: Tags to remove from the object\n :type tags: list of strings\n\n Removes each of the specified tags from the remote object. 
Takes\n no action for tags that the object does not currently have.\n\n The tags are removed from the copy of the object in the project\n associated with the handler.\n\n \"\"\"\n\n self._remove_tags(self._dxid, {\"project\": self._proj, \"tags\": tags},\n **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nremove the object from the project.", "response": "def remove(self, **kwargs):\n '''\n :raises: :exc:`~dxpy.exceptions.DXError` if no project is associated with the object\n\n Permanently removes the associated remote object from the\n associated project.\n '''\n\n if self._proj is None:\n raise DXError(\"Remove called when a project ID was not associated with this object handler\")\n\n dxpy.api.project_remove_objects(self._proj, {\"objects\": [self._dxid]},\n **kwargs)\n\n # Reset internal state\n self._dxid = None\n self._proj = None\n self._desc = {}"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef move(self, folder, **kwargs):\n '''\n :param folder: Folder route to which to move the object\n :type folder: string\n :raises: :exc:`~dxpy.exceptions.DXError` if no project is associated with the object\n\n Moves the associated remote object to *folder*.\n\n '''\n\n if self._proj is None:\n raise DXError(\"Move called when a project ID was not associated with this object handler\")\n\n dxpy.api.project_move(self._proj, {\"objects\": [self._dxid],\n \"destination\": folder},\n **kwargs)", "response": "Moves the associated remote object to the specified folder."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nclone the object to the destination project and returns a new object handler for the new object.", "response": "def clone(self, project, folder=\"/\", **kwargs):\n '''\n :param project: Destination project ID\n :type project: string\n :param folder: Folder route to which to move the object\n :type folder: string\n :raises: 
:exc:`~dxpy.exceptions.DXError` if no project is associated with the object\n :returns: An object handler for the new cloned object\n :rtype: :class:`DXDataObject`\n\n Clones the associated remote object to *folder* in *project* and\n returns an object handler for the new object in the destination\n project.\n\n '''\n\n if self._proj is None:\n raise DXError(\"Clone called when a project ID was not associated with this object handler\")\n\n dxpy.api.project_clone(self._proj,\n {\"objects\": [self._dxid],\n \"project\": project,\n \"destination\": folder},\n **kwargs)\n cloned_copy = copy.copy(self)\n cloned_copy.set_ids(cloned_copy.get_id(), project)\n return cloned_copy"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_session_conf_dir(self, cleanup=False):\n sessions_dir = os.path.join(self._user_conf_dir, \"sessions\")\n try:\n from psutil import Process, pid_exists\n\n if cleanup:\n try:\n session_dirs = os.listdir(sessions_dir)\n except OSError as e:\n # Silently skip cleanup and continue if we are unable to\n # enumerate the session directories for any reason\n # (including, most commonly, because the sessions dir\n # doesn't exist)\n session_dirs = []\n for session_dir in session_dirs:\n try:\n session_pid = int(session_dir)\n except ValueError:\n # If dir name doesn't look like an int, leave it\n # alone\n continue\n if not pid_exists(session_pid):\n rmtree(os.path.join(sessions_dir, session_dir), ignore_errors=True)\n\n parent_process = Process(os.getpid()).parent()\n default_session_dir = os.path.join(sessions_dir, str(parent_process.pid))\n while parent_process is not None and parent_process.pid != 0:\n session_dir = os.path.join(sessions_dir, str(parent_process.pid))\n if os.path.exists(session_dir):\n return session_dir\n parent_process = parent_process.parent()\n return default_session_dir\n except (ImportError, IOError, AttributeError) as e:\n # We don't bundle psutil with Windows, so failure to 
import\n # psutil would be expected.\n if platform.system() != 'Windows':\n warn(fill(\"Error while retrieving session configuration: \" + format_exception(e)))\n except Exception as e:\n warn(fill(\"Unexpected error while retrieving session configuration: \" + format_exception(e)))\n return self._get_ppid_session_conf_dir(sessions_dir)", "response": "Tries to find the session configuration directory under ~/.dnanexus_config/sessions, looking first for a directory named after the PID of this process's immediate parent and then walking up through each ancestor process."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dxlink(object_id, project_id=None, field=None):\n '''\n :param object_id: Object ID or the object handler itself\n :type object_id: string or :class:`~dxpy.bindings.DXDataObject`\n :param project_id: A project ID, if creating a cross-project DXLink\n :type project_id: string\n :param field: A field name, if creating a job-based object reference\n :type field: string\n :returns: A dict formatted as a symbolic DNAnexus object reference\n :rtype: dict\n\n Creates a DXLink to the specified object.\n\n If `object_id` is already a link, it is returned without modification.\n\n If `object_id` is a `~dxpy.bindings.DXDataObject`, the object ID is\n retrieved via its `get_id()` method.\n\n If `field` is not `None`, `object_id` is expected to be of class 'job'\n and the link created is a Job Based Object Reference (JBOR), which is\n of the form::\n\n {'$dnanexus_link': {'job': object_id, 'field': field}}\n\n If `field` is `None` and `project_id` is not `None`, the link created\n is a project-specific link of the form::\n\n {'$dnanexus_link': {'project': project_id, 'id': object_id}}\n '''\n if is_dxlink(object_id):\n return object_id\n if isinstance(object_id, DXDataObject):\n object_id = object_id.get_id()\n if not any((project_id, field)):\n return {'$dnanexus_link': object_id}\n elif field:\n dxpy.verify_string_dxid(object_id, \"job\")\n
return {'$dnanexus_link': {'job': object_id, 'field': field}}\n else:\n return {'$dnanexus_link': {'project': project_id, 'id': object_id}}", "response": "Returns a dict formatted as a symbolic DNAnexus object reference."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns True if x is a DNAnexus link.", "response": "def is_dxlink(x):\n '''\n :param x: A potential DNAnexus link\n\n Returns whether *x* appears to be a DNAnexus link (is a dict with\n key ``\"$dnanexus_link\"``) with a referenced data object.\n '''\n if not isinstance(x, dict):\n return False\n if '$dnanexus_link' not in x:\n return False\n link = x['$dnanexus_link']\n if isinstance(link, basestring):\n return True\n elif isinstance(link, dict):\n return any(key in link for key in ('id', 'job'))\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_dxlink_ids(link):\n '''\n :param link: A DNAnexus link\n :type link: dict\n :returns: (Object ID, Project ID) if the link is to a data object (or :const:`None`\n if no project specified in the link), or (Job ID, Field) if the link is\n a job-based object reference (JBOR).\n :rtype: tuple\n\n Get the object ID and detail from a link. 
There are three types of links:\n\n * Simple link of the form ``{\"$dnanexus_link\": \"file-XXXX\"}`` returns\n ``(\"file-XXXX\", None)``.\n * Data object link of the form ``{\"$dnanexus_link\": {\"id\": \"file-XXXX\",\n \"project\": \"project-XXXX\"}}`` returns ``(\"file-XXXX\", \"project-XXXX\")``.\n * Job-based object reference (JBOR) of the form ``{\"$dnanexus_link\":\n {\"job\": \"job-XXXX\", \"field\": \"foo\"}}`` returns ``(\"job-XXXX\", \"foo\")``.\n '''\n if not is_dxlink(link):\n raise DXError('Invalid link: %r' % link)\n if isinstance(link['$dnanexus_link'], basestring):\n return link['$dnanexus_link'], None\n elif 'id' in link['$dnanexus_link']:\n return link['$dnanexus_link']['id'], link['$dnanexus_link'].get('project')\n else:\n return link['$dnanexus_link']['job'], link['$dnanexus_link']['field']", "response": "Get the object ID and project ID from a DXLink object."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning an object handler for the object identified by id_or_link.", "response": "def get_handler(id_or_link, project=None):\n '''\n :param id_or_link: String containing an object ID or dict containing a DXLink\n :type id_or_link: string or dict\n :param project: String project ID to use as the context if the object is a data object\n :type project: string\n :rtype: :class:`~dxpy.bindings.DXObject`, :class:`~dxpy.bindings.DXApp`, or :class:`~dxpy.bindings.DXGlobalWorkflow`\n\n Parses a string or DXLink dict.
Creates and returns an object handler for it.\n\n Example::\n\n get_handler(\"file-1234\")\n '''\n try:\n cls = _guess_link_target_type(id_or_link)\n except Exception as e:\n raise DXError(\"Could not parse link {}: {}\".format(id_or_link, e))\n\n if cls in [dxpy.DXApp, dxpy.DXGlobalWorkflow]:\n # This special case should translate identifiers of the form\n # \"app-name\" or \"app-name/version_or_tag\" to the appropriate\n # arguments\n if dxpy.utils.resolver.is_hashid(id_or_link):\n return cls(id_or_link)\n else:\n slash_pos = id_or_link.find('/')\n dash_pos = id_or_link.find('-')\n if slash_pos == -1:\n return cls(name=id_or_link[dash_pos+1:])\n else:\n return cls(name=id_or_link[dash_pos+1:slash_pos],\n alias=id_or_link[slash_pos + 1:])\n elif project is None or cls in [dxpy.DXJob, dxpy.DXAnalysis, dxpy.DXProject, dxpy.DXContainer]:\n # This case is important for the handlers which do not\n # take a project field\n return cls(id_or_link)\n else:\n return cls(id_or_link, project=project)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a string containing the description of the object.", "response": "def describe(id_or_link, **kwargs):\n '''\n :param id_or_link: String containing an object ID or dict containing a DXLink,\n or a list of object IDs or dicts containing a DXLink.\n\n Given an object ID, calls :meth:`~dxpy.bindings.DXDataObject.describe` on the object.\n\n Example::\n\n describe(\"file-1234\")\n\n Given a list of object IDs, calls :meth:`~dxpy.api.system_describe_data_objects`.\n\n Example::\n\n describe([\"file-1234\", \"workflow-5678\"])\n\n Note: If id_or_link is a list and **kwargs contains a \"fields\" parameter, these\n fields will be returned in the response for each data object in addition to the\n fields included by default. Additionally, describe options can be provided for\n each data object class in the \"classDescribeOptions\" kwargs argument. 
See\n https://wiki.dnanexus.com/API-Specification-v1.0.0/System-Methods#API-method:-/system/describeDataObjects\n for input parameters used with the multiple object describe method.\n '''\n # If this is a list, extract the ids.\n # TODO: modify the procedure to use project ID when possible\n if isinstance(id_or_link, basestring) or is_dxlink(id_or_link):\n handler = get_handler(id_or_link)\n return handler.describe(**kwargs)\n else:\n links = []\n for link in id_or_link:\n # If this entry is a dxlink, then get the id.\n if is_dxlink(link):\n # Guaranteed by is_dxlink that one of the following will work\n if isinstance(link['$dnanexus_link'], basestring):\n link = link['$dnanexus_link']\n else:\n link = link['$dnanexus_link']['id']\n links.append(link)\n\n # Prepare input to system_describe_data_objects, the same fields will be passed\n # for all data object classes; if a class doesn't include a field in its describe\n # output, it will be ignored\n describe_input = \\\n dict([(field, True) for field in kwargs['fields']]) if kwargs.get('fields', []) else True\n describe_links_input = [{'id': link, 'describe': describe_input} for link in links]\n bulk_describe_input = {'objects': describe_links_input}\n\n if 'classDescribeOptions' in kwargs:\n bulk_describe_input['classDescribeOptions'] = kwargs['classDescribeOptions']\n\n data_object_descriptions = dxpy.api.system_describe_data_objects(bulk_describe_input)\n return [desc['describe'] for desc in data_object_descriptions['results']]"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a dictionary containing the attributes of the project or container.", "response": "def describe(self, **kwargs):\n \"\"\"\n :returns: A hash containing attributes of the project or container.\n :rtype: dict\n\n Returns a hash with key-value pairs as specified by the API\n specification for the `/project-xxxx/describe\n `_\n method. 
This will usually include keys such as \"id\", \"name\",\n \"class\", \"billTo\", \"created\", \"modified\", and \"dataUsage\".\n\n \"\"\"\n # TODO: link to /container-xxxx/describe\n api_method = dxpy.api.container_describe\n if isinstance(self, DXProject):\n api_method = dxpy.api.project_describe\n self._desc = api_method(self._dxid, **kwargs)\n return self._desc"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef new_folder(self, folder, parents=False, **kwargs):\n api_method = dxpy.api.container_new_folder\n if isinstance(self, DXProject):\n api_method = dxpy.api.project_new_folder\n\n api_method(self._dxid, {\"folder\": folder,\n \"parents\": parents},\n **kwargs)", "response": "Creates a new folder in the specified folder."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nlist the contents of the specified folder.", "response": "def list_folder(self, folder=\"/\", describe=False, only=\"all\", includeHidden=False, **kwargs):\n \"\"\"\n :param folder: Full path to the folder to list\n :type folder: string\n :param describe: If True, returns the output of ``/describe`` on each object (see below for notes)\n :type describe: bool or dict\n :param only: Indicate \"objects\" for only objects, \"folders\" for only folders, or \"all\" for both\n :type only: string\n :param includeHidden: Indicate whether hidden objects should be returned\n :type includeHidden: bool\n :returns: A hash with key \"objects\" for the list of object IDs and key \"folders\" for the list of folder routes\n :rtype: dict\n\n Returns a hash containing a list of objects that reside directly\n inside the specified folder, and a list of strings representing\n the full paths to folders that reside directly inside the\n specified folder.\n\n By default, the list of objects is provided as a list containing\n one hash ``{\"id\": \"class-XXXX\"}`` with the ID of each matching\n object. 
If *describe* is not False, the output of ``/describe``\n is also included in an additional field \"describe\" for each\n object. If *describe* is True, ``/describe`` is called with the\n default arguments. *describe* may also be a hash, indicating the\n input hash to be supplied to each ``/describe`` call.\n\n \"\"\"\n # TODO: it would be nice if we could supply describe\n # fields/defaultFields in a similar way to what we pass to the\n # high-level describe method, rather than having to construct\n # the literal API input\n\n api_method = dxpy.api.container_list_folder\n if isinstance(self, DXProject):\n api_method = dxpy.api.project_list_folder\n\n return api_method(self._dxid, {\"folder\": folder,\n \"describe\": describe,\n \"only\": only,\n \"includeHidden\": includeHidden},\n **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef move(self, destination, objects=[], folders=[], **kwargs):\n api_method = dxpy.api.container_move\n if isinstance(self, DXProject):\n api_method = dxpy.api.project_move\n\n api_method(self._dxid, {\"objects\": objects,\n \"folders\": folders,\n \"destination\": destination},\n **kwargs)", "response": "Moves the specified objects and folders into the specified destination folder."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmove the contents of the specified folder to the specified destination folder.", "response": "def move_folder(self, folder, destination, **kwargs):\n \"\"\"\n :param folder: Full path to the folder to move\n :type folder: string\n :param destination: Full path to the destination folder that will contain *folder*\n :type destination: string\n\n Moves *folder* to reside in *destination* in the same project or\n container. 
All objects and subfolders inside *folder* are also\n moved.\n\n \"\"\"\n api_method = dxpy.api.container_move\n if isinstance(self, DXProject):\n api_method = dxpy.api.project_move\n\n api_method(self._dxid, {\"folders\": [folder],\n \"destination\": destination},\n **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef remove_folder(self, folder, recurse=False, force=False, **kwargs):\n api_method = dxpy.api.container_remove_folder\n if isinstance(self, DXProject):\n api_method = dxpy.api.project_remove_folder\n\n completed = False\n while not completed:\n resp = api_method(self._dxid,\n {\"folder\": folder, \"recurse\": recurse, \"force\": force, \"partial\": True},\n always_retry=force, # api call is idempotent under 'force' semantics\n **kwargs)\n if 'completed' not in resp:\n raise DXError('Error removing folder')\n completed = resp['completed']", "response": "Removes the specified folder from the project or container."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nremove the specified objects from the project or container.", "response": "def remove_objects(self, objects, force=False, **kwargs):\n \"\"\"\n :param objects: List of object IDs to remove from the project or container\n :type objects: list of strings\n :param force: If True, will suppress errors for objects that do not exist\n :type force: bool\n\n Removes the specified objects from the project or container.\n\n Removal propagates to any hidden objects that become unreachable\n from any visible object in the same project or container as a\n result of this operation.\n\n \"\"\"\n api_method = dxpy.api.container_remove_objects\n if isinstance(self, DXProject):\n api_method = dxpy.api.project_remove_objects\n\n api_method(self._dxid,\n {\"objects\": objects, \"force\": force},\n always_retry=force, # api call is idempotent under 'force' semantics\n **kwargs)"} {"SOURCE": "codesearchnet", "instruction": 
"Here you have a function in Python 3, explain what it does\ndef clone(self, container, destination=\"/\", objects=[], folders=[], parents=False, **kwargs):\n api_method = dxpy.api.container_clone\n if isinstance(self, DXProject):\n api_method = dxpy.api.project_clone\n\n return api_method(self._dxid,\n {\"objects\": objects,\n \"folders\": folders,\n \"project\": container,\n \"destination\": destination,\n \"parents\": parents},\n **kwargs)", "response": "Clones the specified objects and folders in the specified container into the specified destination folder."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new project with the given parameters.", "response": "def new(self, name, summary=None, description=None, protected=None,\n restricted=None, download_restricted=None, contains_phi=None, tags=None,\n properties=None, bill_to=None, **kwargs):\n \"\"\"\n :param name: The name of the project\n :type name: string\n :param summary: If provided, a short summary of what the project contains\n :type summary: string\n :param description: If provided, the new project description\n :type name: string\n :param protected: If provided, whether the project should be protected\n :type protected: boolean\n :param restricted: If provided, whether the project should be restricted\n :type restricted: boolean\n :param download_restricted: If provided, whether external downloads should be restricted\n :type download_restricted: boolean\n :param contains_phi: If provided, whether the project should be marked as containing protected health information (PHI)\n :type contains_phi: boolean\n :param tags: If provided, tags to associate with the project\n :type tags: list of strings\n :param properties: If provided, properties to associate with the project\n :type properties: dict\n :param bill_to: If provided, ID of the entity to which any costs associated with this project will be billed; must be the ID of the requesting user or an org of which the 
requesting user is a member with allowBillableActivities permission\n :type bill_to: string\n\n Creates a new project. Initially only the user performing this action\n will be in the permissions/member list, with ADMINISTER access.\n See the API documentation for the `/project/new\n `_\n method for more info.\n\n \"\"\"\n input_hash = {}\n input_hash[\"name\"] = name\n if summary is not None:\n input_hash[\"summary\"] = summary\n if description is not None:\n input_hash[\"description\"] = description\n if protected is not None:\n input_hash[\"protected\"] = protected\n if restricted is not None:\n input_hash[\"restricted\"] = restricted\n if download_restricted is not None:\n input_hash[\"downloadRestricted\"] = download_restricted\n if contains_phi is not None:\n input_hash[\"containsPHI\"] = contains_phi\n if bill_to is not None:\n input_hash[\"billTo\"] = bill_to\n if tags is not None:\n input_hash[\"tags\"] = tags\n if properties is not None:\n input_hash[\"properties\"] = properties\n\n self.set_id(dxpy.api.project_new(input_hash, **kwargs)[\"id\"])\n self._desc = {}\n return self._dxid"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update(self, name=None, summary=None, description=None, protected=None,\n restricted=None, download_restricted=None, version=None, **kwargs):\n \"\"\"\n :param name: If provided, the new project name\n :type name: string\n :param summary: If provided, the new project summary\n :type summary: string\n :param description: If provided, the new project description\n :type name: string\n :param protected: If provided, whether the project should be protected\n :type protected: boolean\n :param restricted: If provided, whether the project should be restricted\n :type restricted: boolean\n :param download_restricted: If provided, whether external downloads should be restricted\n :type download_restricted: boolean\n :param version: If provided, the update will only occur if 
the value matches the current project's version number\n :type version: int\n\n Updates the project with the new fields. All fields are\n optional. Fields that are not provided are not changed.\n See the API documentation for the `/project-xxxx/update\n `_\n method for more info.\n\n \"\"\"\n update_hash = {}\n if name is not None:\n update_hash[\"name\"] = name\n if summary is not None:\n update_hash[\"summary\"] = summary\n if description is not None:\n update_hash[\"description\"] = description\n if protected is not None:\n update_hash[\"protected\"] = protected\n if restricted is not None:\n update_hash[\"restricted\"] = restricted\n if download_restricted is not None:\n update_hash[\"downloadRestricted\"] = download_restricted\n if version is not None:\n update_hash[\"version\"] = version\n dxpy.api.project_update(self._dxid, update_hash, **kwargs)", "response": "Updates the current project's attributes with the values provided."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef invite(self, invitee, level, send_email=True, **kwargs):\n\n return dxpy.api.project_invite(self._dxid,\n {\"invitee\": invitee, \"level\": level,\n \"suppressEmailNotification\": not send_email},\n **kwargs)", "response": "Invites the specified person to the project."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndecreases the permissions of a project member.", "response": "def decrease_perms(self, member, level, **kwargs):\n \"\"\"\n :param member: Username (of the form \"user-USERNAME\") of the project member whose permissions will be decreased.\n :type member: string\n :param level: Permissions level that the member will have after this operation (None, \"VIEW\", \"UPLOAD\", or \"CONTRIBUTE\")\n :type level: string or None\n\n Decreases the permissions that the specified user has in the project.\n\n \"\"\"\n\n input_hash = {}\n input_hash[member] = level\n\n return 
dxpy.api.project_decrease_permissions(self._dxid,\n input_hash,\n **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_properties(self, properties, **kwargs):\n\n return dxpy.api.project_set_properties(self._dxid, {\"properties\": properties}, **kwargs)", "response": "Sets the given key-value pairs as properties of the project."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef format_exception(e):\n from .utils.printing import fill\n return '\\n'.join(fill(line) for line in traceback.format_exception_only(type(e), e))", "response": "Returns a string containing the type and text of the exception."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nexiting the program with exception information.", "response": "def exit_with_exc_info(code=1, message='', print_tb=False, exception=None):\n '''Exits the program, printing information about the last exception (if\n any) and an optional error message. 
Uses *exception* instead if provided.\n\n :param code: Exit code.\n :type code: integer (valid exit code, 0-255)\n :param message: Message to be printed after the exception information.\n :type message: string\n :param print_tb: If set to True, prints the exception traceback; otherwise, suppresses it.\n :type print_tb: boolean\n :type exception: an exception to use in place of the last exception raised\n '''\n exc_type, exc_value = (exception.__class__, exception) \\\n if exception is not None else sys.exc_info()[:2]\n\n if exc_type is not None:\n if print_tb:\n traceback.print_exc()\n elif isinstance(exc_value, KeyboardInterrupt):\n sys.stderr.write('^C\\n')\n else:\n for line in traceback.format_exception_only(exc_type, exc_value):\n sys.stderr.write(line)\n\n sys.stderr.write(message)\n if message != '' and not message.endswith('\\n'):\n sys.stderr.write('\\n')\n sys.exit(code)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef err_exit(message='', code=None, expected_exceptions=default_expected_exceptions, arg_parser=None,\n ignore_sigpipe=True, exception=None):\n '''Exits the program, printing information about the last exception (if\n any) and an optional error message. Uses *exception* instead if provided.\n\n Uses **expected_exceptions** to set the error code decide whether to\n suppress the error traceback.\n\n :param message: Message to be printed after the exception information.\n :type message: string\n :param code: Exit code.\n :type code: integer (valid exit code, 0-255)\n :param expected_exceptions: Exceptions for which to exit with error code 3 (expected error condition) and suppress the stack trace (unless the _DX_DEBUG environment variable is set).\n :type expected_exceptions: iterable\n :param arg_parser: argparse.ArgumentParser object used in the program (optional)\n :param ignore_sigpipe: Whether to exit silently with code 3 when IOError with code EPIPE is raised. 
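The exit-code policy described in the `err_exit` docstring above can be summarized as a small pure function. `classify_exit` is an illustrative name, and the literal `3` stands in for `EXPECTED_ERR_EXIT_STATUS`; this sketches the decision only, not the traceback printing:

```python
import errno

def classify_exit(exc, expected_exceptions=(ValueError,), ignore_sigpipe=True):
    # returns the exit code err_exit would use, or 'reraise' for SystemExit
    if isinstance(exc, SystemExit):
        return 'reraise'
    if isinstance(exc, expected_exceptions):
        return 3  # expected error condition; stack trace suppressed
    if ignore_sigpipe and isinstance(exc, IOError) and getattr(exc, 'errno', None) == errno.EPIPE:
        return 3  # broken pipe; exit silently
    return 1  # unexpected error; traceback printed
```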
Default true.\n :type ignore_sigpipe: boolean\n :param exception: an exception to use in place of the last exception raised\n '''\n if arg_parser is not None:\n message = arg_parser.prog + \": \" + message\n\n exc = exception if exception is not None else sys.exc_info()[1]\n if isinstance(exc, SystemExit):\n raise exc\n elif isinstance(exc, expected_exceptions):\n exit_with_exc_info(EXPECTED_ERR_EXIT_STATUS, message, print_tb=dxpy._DEBUG > 0, exception=exception)\n elif ignore_sigpipe and isinstance(exc, IOError) and getattr(exc, 'errno', None) == errno.EPIPE:\n if dxpy._DEBUG > 0:\n print(\"Broken pipe\", file=sys.stderr)\n sys.exit(3)\n else:\n if code is None:\n code = 1\n exit_with_exc_info(code, message, print_tb=True, exception=exception)", "response": "Exits the program with an error code and an optional error message."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a one - line description of the error.", "response": "def error_message(self):\n \"Returns a one-line description of the error.\"\n output = self.msg + \", code \" + str(self.code)\n output += \". 
Request Time={}, Request ID={}\".format(self.timestamp, self.req_id)\n if self.name != self.__class__.__name__:\n output = self.name + \": \" + output\n return output"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npublishing the global workflow.", "response": "def publish(self, **kwargs):\n \"\"\"\n Publishes the global workflow, so all users can find it and use it on the platform.\n\n The current user must be a developer of the workflow.\n \"\"\"\n if self._dxid is not None:\n return dxpy.api.global_workflow_publish(self._dxid, **kwargs)\n else:\n return dxpy.api.global_workflow_publish('globalworkflow-' + self._name, alias=self._alias, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef describe_underlying_workflow(self, region, describe_output=None):\n assert(describe_output is None or describe_output.get('class', '') == 'globalworkflow')\n\n if region is None:\n raise DXError(\n 'DXGlobalWorkflow: region must be provided to get an underlying workflow')\n\n # Perhaps we have cached it already\n if region in self._workflow_desc_by_region:\n return self._workflow_desc_by_region[region]\n\n if not describe_output:\n describe_output = self.describe()\n\n if region not in describe_output['regionalOptions'].keys():\n raise DXError('DXGlobalWorkflow: the global workflow {} is not enabled in region {}'.format(\n self.get_id(), region))\n\n underlying_workflow_id = describe_output['regionalOptions'][region]['workflow']\n dxworkflow = dxpy.DXWorkflow(underlying_workflow_id)\n dxworkflow_desc = dxworkflow.describe()\n # cache per region so the lookup above can hit on the next call\n self._workflow_desc_by_region[region] = dxworkflow_desc\n return dxworkflow_desc", "response": "Gets the underlying workflow description for a given region."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_underlying_workflow(self, region, describe_output=None):\n assert(describe_output is None or 
describe_output.get('class') == 'globalworkflow')\n\n if region is None:\n raise DXError(\n 'DXGlobalWorkflow: region must be provided to get an underlying workflow')\n\n # Perhaps we have cached it already\n if region in self._workflows_by_region:\n return self._workflows_by_region[region]\n\n if not describe_output:\n describe_output = self.describe()\n\n if region not in describe_output['regionalOptions'].keys():\n raise DXError('DXGlobalWorkflow: the global workflow {} is not enabled in region {}'.format(\n self.get_id(), region))\n\n underlying_workflow_id = describe_output['regionalOptions'][region]['workflow']\n # cache the handler in the per-region handler cache checked above\n self._workflows_by_region[region] = dxpy.DXWorkflow(underlying_workflow_id)\n return self._workflows_by_region[region]", "response": "Returns an object handler of a global workflow in a given region."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef append_underlying_workflow_desc(self, describe_output, region):\n assert(describe_output is None or describe_output.get('class') == 'globalworkflow')\n\n underlying_workflow_desc = self.describe_underlying_workflow(region,\n describe_output=describe_output)\n for field in ['inputs', 'outputs', 'inputSpec', 'outputSpec', 'stages']:\n describe_output[field] = underlying_workflow_desc[field]\n return describe_output", "response": "Adds the underlying workflow description to the given dict describe_output."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the input associated with the global workflow.", "response": "def _get_run_input(self, workflow_input, project=None, **kwargs):\n \"\"\"\n Checks the region in which the global workflow is run\n and returns the input associated with the underlying workflow\n from that region.\n \"\"\"\n region = dxpy.api.project_describe(project,\n input_params={\"fields\": {\"region\": True}})[\"region\"]\n dxworkflow = self.get_underlying_workflow(region)\n return 
dxworkflow._get_run_input(workflow_input, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef run(self, workflow_input, *args, **kwargs):\n '''\n :param workflow_input: Dictionary of the workflow's input arguments; see below for more details\n :type workflow_input: dict\n :param instance_type: Instance type on which all stages' jobs will be run, or a dict mapping function names to instance types. These may be overridden on a per-stage basis if stage_instance_types is specified.\n :type instance_type: string or dict\n :param stage_instance_types: A dict mapping stage IDs, names, or indices to either a string (representing an instance type to be used for all functions in that stage), or a dict mapping function names to instance types.\n :type stage_instance_types: dict\n :param stage_folders: A dict mapping stage IDs, names, indices, and/or the string \"*\" to folder values to be used for the stages' output folders (use \"*\" as the default for all unnamed stages)\n :type stage_folders: dict\n :param rerun_stages: A list of stage IDs, names, indices, and/or the string \"*\" to indicate which stages should be run even if there are cached executions available\n :type rerun_stages: list of strings\n :returns: Object handler of the newly created analysis\n :rtype: :class:`~dxpy.bindings.dxanalysis.DXAnalysis`\n\n Run the workflow. See :meth:`dxpy.bindings.dxapplet.DXExecutable.run` for additional args.\n\n When providing input for the workflow, keys should be of one of the following forms:\n\n * \"N.name\" where *N* is the stage number, and *name* is the\n name of the input, e.g. 
\"0.reads\" if the first stage takes\n in an input called \"reads\"\n\n * \"stagename.name\" where *stagename* is the stage name, and\n *name* is the name of the input within the stage\n\n * \"stageID.name\" where *stageID* is the stage ID, and *name*\n is the name of the input within the stage\n\n * \"name\" where *name* is the name of a workflow level input\n (defined in inputs) or the name that has been\n exported for the workflow (this name will appear as a key\n in the \"inputSpec\" of this workflow's description if it has\n been exported for this purpose)\n\n '''\n return super(DXGlobalWorkflow, self).run(workflow_input, *args, **kwargs)", "response": "Runs the analysis on the given stage and returns the object handler of the newly created analysis."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a remote file - like object that is a Python file - like object.", "response": "def open_dxfile(dxid, project=None, mode=None, read_buffer_size=dxfile.DEFAULT_BUFFER_SIZE):\n '''\n :param dxid: file ID\n :type dxid: string\n :rtype: :class:`~dxpy.bindings.dxfile.DXFile`\n\n Given the object ID of an uploaded file, returns a remote file\n handler that is a Python file-like object.\n\n Example::\n\n with open_dxfile(\"file-xxxx\") as fd:\n for line in fd:\n ...\n\n Note that this is shorthand for::\n\n DXFile(dxid)\n\n '''\n return DXFile(dxid, project=project, mode=mode, read_buffer_size=read_buffer_size)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef new_dxfile(mode=None, write_buffer_size=dxfile.DEFAULT_BUFFER_SIZE, expected_file_size=None, file_is_mmapd=False,\n **kwargs):\n '''\n :param mode: One of \"w\" or \"a\" for write and append modes, respectively\n :type mode: string\n :rtype: :class:`~dxpy.bindings.dxfile.DXFile`\n\n Additional optional parameters not listed: all those under\n :func:`dxpy.bindings.DXDataObject.new`.\n\n Creates a new remote file object 
that is ready to be written to;\n returns a :class:`~dxpy.bindings.dxfile.DXFile` object that is a\n writable file-like object.\n\n Example::\n\n with new_dxfile(media_type=\"application/json\") as fd:\n fd.write(\"foo\\\\n\")\n\n Note that this is shorthand for::\n\n dxFile = DXFile()\n dxFile.new(**kwargs)\n\n '''\n dx_file = DXFile(mode=mode, write_buffer_size=write_buffer_size, expected_file_size=expected_file_size,\n file_is_mmapd=file_is_mmapd)\n dx_file.new(**kwargs)\n return dx_file", "response": "Creates a new remote file object that is ready to be written to."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndownload a single file from the specified file ID or file handler.", "response": "def download_dxfile(dxid, filename, chunksize=dxfile.DEFAULT_BUFFER_SIZE, append=False, show_progress=False,\n project=None, describe_output=None, **kwargs):\n '''\n :param dxid: DNAnexus file ID or DXFile (file handler) object\n :type dxid: string or DXFile\n :param filename: Local filename\n :type filename: string\n :param append: If True, appends to the local file (default is to truncate local file if it exists)\n :type append: boolean\n :param project: project to use as context for this download (may affect\n which billing account is billed for this download). If None or\n DXFile.NO_PROJECT_HINT, no project hint is supplied to the API server.\n :type project: str or None\n :param describe_output: (experimental) output of the file-xxxx/describe API call,\n if available. 
It will make it possible to skip another describe API call.\n It should contain the default fields of the describe API call output and\n the \"parts\" field, not included in the output by default.\n :type describe_output: dict or None\n\n Downloads the remote file referenced by *dxid* and saves it to *filename*.\n\n Example::\n\n download_dxfile(\"file-xxxx\", \"localfilename.fastq\")\n\n '''\n # retry the inner loop while there are retriable errors\n part_retry_counter = defaultdict(lambda: 3)\n success = False\n while not success:\n success = _download_dxfile(dxid,\n filename,\n part_retry_counter,\n chunksize=chunksize,\n append=append,\n show_progress=show_progress,\n project=project,\n describe_output=describe_output,\n **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndownload a file from the local file system and store it in the local file system.", "response": "def _download_dxfile(dxid, filename, part_retry_counter,\n chunksize=dxfile.DEFAULT_BUFFER_SIZE, append=False, show_progress=False,\n project=None, describe_output=None, **kwargs):\n '''\n Core of download logic. 
Download file-id *dxid* and store it in\n a local file *filename*.\n\n The return value is as follows:\n - True means the download was successfully completed\n - False means the download was stopped because of a retryable error\n - Exception raised for other errors\n '''\n def print_progress(bytes_downloaded, file_size, action=\"Downloaded\"):\n num_ticks = 60\n\n effective_file_size = file_size or 1\n if bytes_downloaded > effective_file_size:\n effective_file_size = bytes_downloaded\n\n ticks = int(round((bytes_downloaded / float(effective_file_size)) * num_ticks))\n percent = int(math.floor((bytes_downloaded / float(effective_file_size)) * 100))\n\n fmt = \"[{done}{pending}] {action} {done_bytes:,}{remaining} bytes ({percent}%) {name}\"\n # Erase the line and return the cursor to the start of the line.\n # The following VT100 escape sequence will erase the current line.\n sys.stderr.write(\"\\33[2K\")\n sys.stderr.write(fmt.format(action=action,\n done=(\"=\" * (ticks - 1) + \">\") if ticks > 0 else \"\",\n pending=\" \" * (num_ticks - ticks),\n done_bytes=bytes_downloaded,\n remaining=\" of {size:,}\".format(size=file_size) if file_size else \"\",\n percent=percent,\n name=filename))\n sys.stderr.flush()\n sys.stderr.write(\"\\r\")\n sys.stderr.flush()\n\n _bytes = 0\n\n if isinstance(dxid, DXFile):\n dxfile = dxid\n else:\n dxfile = DXFile(dxid, mode=\"r\", project=(project if project != DXFile.NO_PROJECT_HINT else None))\n\n if describe_output and describe_output.get(\"parts\") is not None:\n dxfile_desc = describe_output\n else:\n dxfile_desc = dxfile.describe(fields={\"parts\"}, default_fields=True, **kwargs)\n\n if 'drive' in dxfile_desc:\n # A symbolic link. 
Get the MD5 checksum, if we have it\n if 'md5' in dxfile_desc:\n md5 = dxfile_desc['md5']\n else:\n md5 = None\n _download_symbolic_link(dxid, md5, project, filename)\n return True\n\n parts = dxfile_desc[\"parts\"]\n parts_to_get = sorted(parts, key=int)\n file_size = dxfile_desc.get(\"size\")\n\n offset = 0\n for part_id in parts_to_get:\n parts[part_id][\"start\"] = offset\n offset += parts[part_id][\"size\"]\n\n if append:\n fh = open(filename, \"ab\")\n else:\n try:\n fh = open(filename, \"rb+\")\n except IOError:\n fh = open(filename, \"wb\")\n\n if show_progress:\n print_progress(0, None)\n\n def get_chunk(part_id_to_get, start, end):\n url, headers = dxfile.get_download_url(project=project, **kwargs)\n # If we're fetching the whole object in one shot, avoid setting the Range header to take advantage of gzip\n # transfer compression\n sub_range = False\n if len(parts) > 1 or (start > 0) or (end - start + 1 < parts[part_id_to_get][\"size\"]):\n sub_range = True\n data = dxpy._dxhttp_read_range(url, headers, start, end, FILE_REQUEST_TIMEOUT, sub_range)\n return part_id_to_get, data\n\n def chunk_requests():\n for part_id_to_chunk in parts_to_get:\n part_info = parts[part_id_to_chunk]\n for chunk_start in range(part_info[\"start\"], part_info[\"start\"] + part_info[\"size\"], chunksize):\n chunk_end = min(chunk_start + chunksize, part_info[\"start\"] + part_info[\"size\"]) - 1\n yield get_chunk, [part_id_to_chunk, chunk_start, chunk_end], {}\n\n def verify_part(_part_id, got_bytes, hasher):\n if got_bytes is not None and got_bytes != parts[_part_id][\"size\"]:\n msg = \"Unexpected part data size in {} part {} (expected {}, got {})\"\n msg = msg.format(dxfile.get_id(), _part_id, parts[_part_id][\"size\"], got_bytes)\n raise DXPartLengthMismatchError(msg)\n if hasher is not None and \"md5\" not in parts[_part_id]:\n warnings.warn(\"Download of file {} is not being checked for integrity\".format(dxfile.get_id()))\n elif hasher is not None and hasher.hexdigest() != 
parts[_part_id][\"md5\"]:\n msg = \"Checksum mismatch in {} part {} (expected {}, got {})\"\n msg = msg.format(dxfile.get_id(), _part_id, parts[_part_id][\"md5\"], hasher.hexdigest())\n raise DXChecksumMismatchError(msg)\n\n with fh:\n last_verified_pos = 0\n\n if fh.mode == \"rb+\":\n # We already downloaded the beginning of the file, verify that the\n # chunk checksums match the metadata.\n last_verified_part, max_verify_chunk_size = None, 1024*1024\n try:\n for part_id in parts_to_get:\n part_info = parts[part_id]\n if \"md5\" not in part_info:\n raise DXFileError(\"File {} does not contain part md5 checksums\".format(dxfile.get_id()))\n bytes_to_read = part_info[\"size\"]\n hasher = hashlib.md5()\n while bytes_to_read > 0:\n chunk = fh.read(min(max_verify_chunk_size, bytes_to_read))\n if len(chunk) < min(max_verify_chunk_size, bytes_to_read):\n raise DXFileError(\"Local data for part {} is truncated\".format(part_id))\n hasher.update(chunk)\n bytes_to_read -= max_verify_chunk_size\n if hasher.hexdigest() != part_info[\"md5\"]:\n raise DXFileError(\"Checksum mismatch when verifying downloaded part {}\".format(part_id))\n else:\n last_verified_part = part_id\n last_verified_pos = fh.tell()\n if show_progress:\n _bytes += part_info[\"size\"]\n print_progress(_bytes, file_size, action=\"Verified\")\n except (IOError, DXFileError) as e:\n logger.debug(e)\n fh.seek(last_verified_pos)\n fh.truncate()\n if last_verified_part is not None:\n del parts_to_get[:parts_to_get.index(last_verified_part)+1]\n if show_progress and len(parts_to_get) < len(parts):\n print_progress(last_verified_pos, file_size, action=\"Resuming at\")\n logger.debug(\"Verified %s/%d downloaded parts\", last_verified_part, len(parts_to_get))\n\n try:\n # Main loop. 
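The per-part chunking that `chunk_requests` performs — fixed-size chunks within each part, with an inclusive end byte for the Range request — can be sketched independently of the download machinery (`chunk_ranges` is my name for this sketch, not a dxpy function):

```python
def chunk_ranges(part_sizes, chunksize):
    # yields inclusive (start, end) byte ranges, chunksize bytes at a
    # time within each part, never crossing a part boundary
    offset = 0
    for size in part_sizes:
        for start in range(offset, offset + size, chunksize):
            end = min(start + chunksize, offset + size) - 1
            yield (start, end)
        offset += size
```

Keeping chunks within part boundaries is what lets each part's MD5 be verified as soon as all of its chunks arrive.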
In parallel: download chunks, verify them, and write them to disk.\n get_first_chunk_sequentially = (file_size > 128 * 1024 and last_verified_pos == 0 and dxpy.JOB_ID)\n cur_part, got_bytes, hasher = None, None, None\n for chunk_part, chunk_data in response_iterator(chunk_requests(),\n dxfile._http_threadpool,\n do_first_task_sequentially=get_first_chunk_sequentially):\n if chunk_part != cur_part:\n verify_part(cur_part, got_bytes, hasher)\n cur_part, got_bytes, hasher = chunk_part, 0, hashlib.md5()\n got_bytes += len(chunk_data)\n hasher.update(chunk_data)\n fh.write(chunk_data)\n if show_progress:\n _bytes += len(chunk_data)\n print_progress(_bytes, file_size)\n verify_part(cur_part, got_bytes, hasher)\n if show_progress:\n print_progress(_bytes, file_size, action=\"Completed\")\n except DXFileError:\n print(traceback.format_exc(), file=sys.stderr)\n part_retry_counter[cur_part] -= 1\n if part_retry_counter[cur_part] > 0:\n print(\"Retrying {} ({} tries remain for part {})\".format(dxfile.get_id(), part_retry_counter[cur_part], cur_part),\n file=sys.stderr)\n return False\n raise\n\n if show_progress:\n sys.stderr.write(\"\\n\")\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef upload_local_file(filename=None, file=None, media_type=None, keep_open=False,\n wait_on_close=False, use_existing_dxfile=None, show_progress=False,\n write_buffer_size=None, multithread=True, **kwargs):\n '''\n :param filename: Local filename\n :type filename: string\n :param file: File-like object\n :type file: File-like object\n :param media_type: Internet Media Type\n :type media_type: string\n :param keep_open: If False, closes the file after uploading\n :type keep_open: boolean\n :param write_buffer_size: Buffer size to use for upload\n :type write_buffer_size: int\n :param wait_on_close: If True, waits for the file to close\n :type wait_on_close: boolean\n :param use_existing_dxfile: Instead of 
creating a new file object, upload to the specified file\n :type use_existing_dxfile: :class:`~dxpy.bindings.dxfile.DXFile`\n :param multithread: If True, sends multiple write requests asynchronously\n :type multithread: boolean\n :returns: Remote file handler\n :rtype: :class:`~dxpy.bindings.dxfile.DXFile`\n\n Additional optional parameters not listed: all those under\n :func:`dxpy.bindings.DXDataObject.new`.\n\n Exactly one of *filename* or *file* is required.\n\n Uploads *filename* or reads from *file* into a new file object (with\n media type *media_type* if given) and returns the associated remote\n file handler. The \"name\" property of the newly created remote file\n is set to the basename of *filename* or to *file.name* (if it\n exists).\n\n Examples::\n\n # Upload from a path\n dxpy.upload_local_file(\"/home/ubuntu/reads.fastq.gz\")\n # Upload from a file-like object\n with open(\"reads.fastq\") as fh:\n dxpy.upload_local_file(file=fh)\n\n '''\n fd = file if filename is None else open(filename, 'rb')\n\n try:\n file_size = os.fstat(fd.fileno()).st_size\n except:\n file_size = 0\n\n file_is_mmapd = hasattr(fd, \"fileno\")\n\n if write_buffer_size is None:\n write_buffer_size=dxfile.DEFAULT_BUFFER_SIZE\n\n if use_existing_dxfile:\n handler = use_existing_dxfile\n else:\n # Set a reasonable name for the file if none has been set\n # already\n creation_kwargs = kwargs.copy()\n if 'name' not in kwargs:\n if filename is not None:\n creation_kwargs['name'] = os.path.basename(filename)\n else:\n # Try to get filename from file-like object\n try:\n local_file_name = file.name\n except AttributeError:\n pass\n else:\n creation_kwargs['name'] = os.path.basename(local_file_name)\n\n # Use 'a' mode because we will be responsible for closing the file\n # ourselves later (if requested).\n handler = new_dxfile(mode='a', media_type=media_type, write_buffer_size=write_buffer_size,\n expected_file_size=file_size, file_is_mmapd=file_is_mmapd, **creation_kwargs)\n\n # For 
subsequent API calls, don't supply the dataobject metadata\n # parameters that are only needed at creation time.\n _, remaining_kwargs = dxpy.DXDataObject._get_creation_params(kwargs)\n\n num_ticks = 60\n offset = 0\n\n handler._ensure_write_bufsize(**remaining_kwargs)\n\n def can_be_mmapd(fd):\n if not hasattr(fd, \"fileno\"):\n return False\n mode = os.fstat(fd.fileno()).st_mode\n return not (stat.S_ISCHR(mode) or stat.S_ISFIFO(mode))\n\n def read(num_bytes):\n \"\"\"\n Returns a string or mmap'd data containing the next num_bytes of\n the file, or up to the end if there are fewer than num_bytes\n left.\n \"\"\"\n # If file cannot be mmap'd (e.g. is stdin, or a fifo), fall back\n # to doing an actual read from the file.\n if not can_be_mmapd(fd):\n return fd.read(handler._write_bufsize)\n\n bytes_available = max(file_size - offset, 0)\n if bytes_available == 0:\n return b\"\"\n\n return mmap.mmap(fd.fileno(), min(handler._write_bufsize, bytes_available), offset=offset, access=mmap.ACCESS_READ)\n\n handler._num_bytes_transmitted = 0\n\n def report_progress(handler, num_bytes):\n handler._num_bytes_transmitted += num_bytes\n if file_size > 0:\n ticks = int(round((handler._num_bytes_transmitted / float(file_size)) * num_ticks))\n percent = int(round((handler._num_bytes_transmitted / float(file_size)) * 100))\n\n fmt = \"[{done}{pending}] Uploaded {done_bytes:,} of {total:,} bytes ({percent}%) {name}\"\n sys.stderr.write(fmt.format(done='=' * (ticks - 1) + '>' if ticks > 0 else '',\n pending=' ' * (num_ticks - ticks),\n done_bytes=handler._num_bytes_transmitted,\n total=file_size,\n percent=percent,\n name=filename if filename is not None else ''))\n sys.stderr.flush()\n sys.stderr.write(\"\\r\")\n sys.stderr.flush()\n\n if show_progress:\n report_progress(handler, 0)\n\n while True:\n buf = read(handler._write_bufsize)\n offset += len(buf)\n\n if len(buf) == 0:\n break\n\n handler.write(buf,\n report_progress_fn=report_progress if show_progress else None,\n 
multithread=multithread,\n **kwargs)\n\n if filename is not None:\n fd.close()\n\n handler.flush(report_progress_fn=report_progress if show_progress else None, **remaining_kwargs)\n\n if show_progress:\n sys.stderr.write(\"\\n\")\n sys.stderr.flush()\n\n if not keep_open:\n handler.close(block=wait_on_close, report_progress_fn=report_progress if show_progress else None, **remaining_kwargs)\n\n return handler", "response": "Uploads a local file or file-like object to a new remote file object and returns the associated remote file handler."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef upload_string(to_upload, media_type=None, keep_open=False, wait_on_close=False, **kwargs):\n\n # Use 'a' mode because we will be responsible for closing the file\n # ourselves later (if requested).\n handler = new_dxfile(media_type=media_type, mode='a', **kwargs)\n\n # For subsequent API calls, don't supply the dataobject metadata\n # parameters that are only needed at creation time.\n _, remaining_kwargs = dxpy.DXDataObject._get_creation_params(kwargs)\n handler.write(to_upload, **remaining_kwargs)\n\n if not keep_open:\n handler.close(block=wait_on_close, **remaining_kwargs)\n\n return handler", "response": "Uploads a string to a new remote file object and returns the associated remote file handler."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef list_subfolders(project, path, recurse=True):\n '''\n :param project: Project ID to use as context for the listing\n :type project: string\n :param path: Subtree root path\n :type path: string\n :param recurse: Return a complete subfolders tree\n :type recurse: boolean\n\n Returns a list of subfolders for the remote *path* (included in the result) of the *project*.\n\n Example::\n\n list_subfolders(\"project-xxxx\", folder=\"/input\")\n\n '''\n project_folders = dxpy.get_handler(project).describe(input_params={'folders': 
True})['folders']\n # TODO: support shell-style path globbing (i.e. /a*/c matches /ab/c but not /a/b/c)\n # return pathmatch.filter(project_folders, os.path.join(path, '*'))\n if recurse:\n return (f for f in project_folders if f.startswith(path))\n else:\n return (f for f in project_folders if f.startswith(path) and '/' not in f[len(path)+1:])", "response": "Returns a list of subfolders for the remote path."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef download_folder(project, destdir, folder=\"/\", overwrite=False, chunksize=dxfile.DEFAULT_BUFFER_SIZE,\n show_progress=False, **kwargs):\n '''\n :param project: Project ID to use as context for this download.\n :type project: string\n :param destdir: Local destination location\n :type destdir: string\n :param folder: Path to the remote folder to download\n :type folder: string\n :param overwrite: Overwrite existing files\n :type overwrite: boolean\n\n Downloads the contents of the remote *folder* of the *project* into the local directory specified by *destdir*.\n\n Example::\n\n download_folder(\"project-xxxx\", \"/home/jsmith/input\", folder=\"/input\")\n\n '''\n\n def ensure_local_dir(d):\n if not os.path.isdir(d):\n if os.path.exists(d):\n raise DXFileError(\"Destination location '{}' already exists and is not a directory\".format(d))\n logger.debug(\"Creating destination directory: '%s'\", d)\n os.makedirs(d)\n\n def compose_local_dir(d, remote_folder, remote_subfolder):\n suffix = remote_subfolder[1:] if remote_folder == \"/\" else remote_subfolder[len(remote_folder) + 1:]\n if os.sep != '/':\n suffix = suffix.replace('/', os.sep)\n return os.path.join(d, suffix) if suffix != \"\" else d\n\n normalized_folder = folder.strip()\n if normalized_folder != \"/\" and normalized_folder.endswith(\"/\"):\n normalized_folder = normalized_folder[:-1]\n if normalized_folder == \"\":\n raise DXFileError(\"Invalid remote folder name: 
'{}'\".format(folder))\n normalized_dest_dir = os.path.normpath(destdir).strip()\n if normalized_dest_dir == \"\":\n raise DXFileError(\"Invalid destination directory name: '{}'\".format(destdir))\n # Creating target directory tree\n remote_folders = list(list_subfolders(project, normalized_folder, recurse=True))\n if len(remote_folders) <= 0:\n raise DXFileError(\"Remote folder '{}' not found\".format(normalized_folder))\n remote_folders.sort()\n for remote_subfolder in remote_folders:\n ensure_local_dir(compose_local_dir(normalized_dest_dir, normalized_folder, remote_subfolder))\n\n # Downloading files\n describe_input = dict(fields=dict(folder=True,\n name=True,\n id=True,\n parts=True,\n size=True,\n drive=True,\n md5=True))\n\n # A generator that returns the files one by one. We don't want to materialize it, because\n # there could be many files here.\n files_gen = dxpy.search.find_data_objects(classname='file', state='closed', project=project,\n folder=normalized_folder, recurse=True, describe=describe_input)\n if files_gen is None:\n # In python 3, the generator can be None, and iterating on it\n # will cause an error.\n return\n\n # Now it is safe, in both python 2 and 3, to iterate on the generator\n for remote_file in files_gen:\n local_filename = os.path.join(compose_local_dir(normalized_dest_dir,\n normalized_folder,\n remote_file['describe']['folder']),\n remote_file['describe']['name'])\n if os.path.exists(local_filename) and not overwrite:\n raise DXFileError(\n \"Destination file '{}' already exists but no overwrite option is provided\".format(local_filename)\n )\n logger.debug(\"Downloading '%s/%s' remote file to '%s' location\",\n (\"\" if remote_file['describe']['folder'] == \"/\" else remote_file['describe']['folder']),\n remote_file['describe']['name'],\n local_filename)\n download_dxfile(remote_file['describe']['id'],\n local_filename,\n chunksize=chunksize,\n project=project,\n show_progress=show_progress,\n 
describe_output=remote_file['describe'],\n **kwargs)", "response": "Downloads the contents of a folder into a local directory."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build(src_dir, parallel_build=True):\n # TODO: use Gentoo or deb buildsystem\n config_script = os.path.join(src_dir, \"configure\")\n if os.path.isfile(config_script) and os.access(config_script, os.X_OK):\n logger.debug(\"Running ./configure in {cwd}\".format(cwd=os.path.abspath(src_dir)))\n try:\n subprocess.check_call([config_script])\n except subprocess.CalledProcessError as e:\n raise AppBuilderException(\"./configure in target directory failed with exit code %d\" % (e.returncode,))\n if os.path.isfile(os.path.join(src_dir, \"Makefile\")) \\\n or os.path.isfile(os.path.join(src_dir, \"makefile\")) \\\n or os.path.isfile(os.path.join(src_dir, \"GNUmakefile\")):\n if parallel_build:\n make_shortcmd = \"make -j%d\" % (NUM_CORES,)\n else:\n make_shortcmd = \"make\"\n logger.debug(\"Building with {make} in {cwd}\".format(make=make_shortcmd, cwd=os.path.abspath(src_dir)))\n try:\n make_cmd = [\"make\", \"-C\", src_dir]\n if parallel_build:\n make_cmd.append(\"-j\" + str(NUM_CORES))\n subprocess.check_call(make_cmd)\n except subprocess.CalledProcessError as e:\n raise AppBuilderException(\"%s in target directory failed with exit code %d\" % (make_shortcmd, e.returncode))", "response": "Builds the app by running ./configure (if present and executable) and make in the source directory."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_link_local(link_target):\n is_local = (not os.path.isabs(link_target))\n\n if is_local:\n # make sure that the path NEVER extends outside the resources directory!\n d, l = os.path.split(link_target)\n link_parts = []\n while l:\n link_parts.append(l)\n d, l = os.path.split(d)\n curr_path = os.sep\n\n for p in reversed(link_parts):\n is_local = (is_local and not (curr_path == os.sep and p == os.pardir))\n 
curr_path = os.path.abspath(os.path.join(curr_path, p))\n\n return is_local", "response": "Returns a boolean indicating whether a relative symlink target stays within the resources directory."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfixes permissions (as if by chmod a+rX) on a given stat mode.", "response": "def _fix_perms(perm_obj):\n \"\"\"\n :param perm_obj: A permissions object, as given by os.stat()\n :type perm_obj: integer\n :returns: A permissions object that is the result of \"chmod a+rX\" on the\n given permission object. This is defined to be the permission object\n bitwise or-ed with all stat.S_IR*, and if the stat.S_IXUSR bit is\n set, then the permission object should also be returned bitwise or-ed\n with stat.S_IX* (stat.S_IXUSR not included because it would be redundant).\n :rtype: integer\n \"\"\"\n ret_perm = perm_obj | stat.S_IROTH | stat.S_IRGRP | stat.S_IRUSR\n if ret_perm & stat.S_IXUSR:\n ret_perm = ret_perm | stat.S_IXGRP | stat.S_IXOTH\n\n return ret_perm"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\narchiving and uploading the contents of the resources/ subdirectory of src_dir to a new remote file object.", "response": "def upload_resources(src_dir, project=None, folder='/', ensure_upload=False, force_symlinks=False):\n \"\"\"\n :param ensure_upload: If True, will bypass checksum of resources directory\n and upload resources bundle unconditionally;\n will NOT be able to reuse this bundle in future builds.\n Else if False, will compute checksum and upload bundle\n if checksum is different from a previously uploaded\n bundle's checksum.\n :type ensure_upload: boolean\n :param force_symlinks: If true, will bypass the attempt to dereference any\n non-local symlinks and will unconditionally include\n the link as-is. 
Note that this will almost certainly\n result in a broken link within the resource directory\n unless you really know what you're doing.\n :type force_symlinks: boolean\n :returns: A list (possibly empty) of references to the generated archive(s)\n :rtype: list\n\n If it exists, archives and uploads the contents of the\n ``resources/`` subdirectory of *src_dir* to a new remote file\n object, and returns a list describing a single bundled dependency in\n the form expected by the ``bundledDepends`` field of a run\n specification. Returns an empty list, if no archive was created.\n \"\"\"\n applet_spec = _get_applet_spec(src_dir)\n\n if project is None:\n dest_project = applet_spec['project']\n else:\n dest_project = project\n applet_spec['project'] = project\n\n resources_dir = os.path.join(src_dir, \"resources\")\n if os.path.exists(resources_dir) and len(os.listdir(resources_dir)) > 0:\n target_folder = applet_spec['folder'] if 'folder' in applet_spec else folder\n\n # While creating the resource bundle, optimistically look for a\n # resource bundle with the same contents, and reuse it if possible.\n # The resource bundle carries a property 'resource_bundle_checksum'\n # that indicates the checksum; the way in which the checksum is\n # computed is given below. 
If the checksum matches (and\n # ensure_upload is False), then we will use the existing file,\n # otherwise, we will compress and upload the tarball.\n\n\n # The input to the SHA1 contains entries of the form (whitespace\n # only included here for readability):\n #\n # / \\0 MODE \\0 MTIME \\0\n # /foo \\0 MODE \\0 MTIME \\0\n # ...\n #\n # where there is one entry for each directory or file (order is\n # specified below), followed by a numeric representation of the\n # mode, and the mtime in milliseconds since the epoch.\n #\n # Note when looking at a link, if the link is to be dereferenced,\n # the mtime and mode used are that of the target (using os.stat())\n # If the link is to be kept as a link, the mtime and mode are those\n # of the link itself (using os.lstat())\n\n with tempfile.NamedTemporaryFile(suffix=\".tar\") as tar_tmp_fh:\n\n output_sha1 = hashlib.sha1()\n tar_fh = tarfile.open(fileobj=tar_tmp_fh, mode='w')\n\n for dirname, subdirs, files in os.walk(resources_dir):\n if not dirname.startswith(resources_dir):\n raise AssertionError('Expected %r to start with root directory %r' % (dirname, resources_dir))\n\n # Add an entry for the directory itself\n relative_dirname = dirname[len(resources_dir):]\n dir_stat = os.lstat(dirname)\n if not relative_dirname.startswith('/'):\n relative_dirname = '/' + relative_dirname\n\n fields = [relative_dirname, str(_fix_perms(dir_stat.st_mode)), str(int(dir_stat.st_mtime * 1000))]\n output_sha1.update(b''.join(s.encode('utf-8') + b'\\0' for s in fields))\n\n # add an entry in the tar file for the current directory, but\n # do not recurse!\n tar_fh.add(dirname, arcname='.' + relative_dirname, recursive=False, filter=_fix_perm_filter)\n\n # Canonicalize the order of subdirectories; this is the order in\n # which they will be visited by os.walk\n subdirs.sort()\n\n # check the subdirectories for symlinks. 
We should throw an error\n # if there are any links that point outside of the directory (unless\n # --force-symlinks is given). If a link is pointing internal to\n # the directory (or --force-symlinks is given), we should add it\n # as a file.\n for subdir_name in subdirs:\n dir_path = os.path.join(dirname, subdir_name)\n\n # If we do have a symlink,\n if os.path.islink(dir_path):\n # Let's get the pointed-to path to ensure that it is\n # still in the directory\n link_target = os.readlink(dir_path)\n\n # If this is a local link, add it to the list of files (case 1)\n # else raise an error\n if force_symlinks or is_link_local(link_target):\n files.append(subdir_name)\n else:\n raise AppBuilderException(\"Cannot include symlinks to directories outside of the resource directory. '%s' points to directory '%s'\" % (dir_path, os.path.realpath(dir_path)))\n\n\n # Canonicalize the order of files so that we compute the\n # checksum in a consistent order\n for filename in sorted(files):\n deref_link = False\n\n relative_filename = os.path.join(relative_dirname, filename)\n true_filename = os.path.join(dirname, filename)\n\n file_stat = os.lstat(true_filename)\n # check for a link here, please!\n if os.path.islink(true_filename):\n\n # Get the pointed-to path\n link_target = os.readlink(true_filename)\n\n if not (force_symlinks or is_link_local(link_target)):\n # if we are pointing outside of the directory, then:\n # try to get the true stat of the file and make sure\n # to dereference the link!\n try:\n file_stat = os.stat(os.path.join(dirname, link_target))\n deref_link = True\n except OSError:\n # uh-oh! 
looks like we have a broken link!\n # since this is guaranteed to cause problems (and\n # we know we're not forcing symlinks here), we\n # should throw an error\n raise AppBuilderException(\"Broken symlink: Link '%s' points to '%s', which does not exist\" % (true_filename, os.path.realpath(true_filename)) )\n\n\n fields = [relative_filename, str(_fix_perms(file_stat.st_mode)), str(int(file_stat.st_mtime * 1000))]\n output_sha1.update(b''.join(s.encode('utf-8') + b'\\0' for s in fields))\n\n # If we are to dereference, use the target fn\n if deref_link:\n true_filename = os.path.realpath(true_filename)\n\n tar_fh.add(true_filename, arcname='.' + relative_filename, filter=_fix_perm_filter)\n\n # end for filename in sorted(files)\n\n # end for dirname, subdirs, files in os.walk(resources_dir):\n\n # at this point, the tar is complete, so close the tar_fh\n tar_fh.close()\n\n # Optimistically look for a resource bundle with the same\n # contents, and reuse it if possible. The resource bundle\n # carries a property 'resource_bundle_checksum' that indicates\n # the checksum; the way in which the checksum is computed is\n # given in the documentation of _directory_checksum.\n\n if ensure_upload:\n properties_dict = {}\n existing_resources = False\n else:\n directory_checksum = output_sha1.hexdigest()\n properties_dict = dict(resource_bundle_checksum=directory_checksum)\n existing_resources = dxpy.find_one_data_object(\n project=dest_project,\n folder=target_folder,\n properties=dict(resource_bundle_checksum=directory_checksum),\n visibility='either',\n zero_ok=True,\n state='closed',\n return_handler=True\n )\n\n if existing_resources:\n logger.info(\"Found existing resource bundle that matches local resources directory: \" +\n existing_resources.get_id())\n\n dx_resource_archive = existing_resources\n else:\n\n logger.debug(\"Uploading in \" + src_dir)\n # We need to compress the tar that we've created\n\n\n targz_fh = tempfile.NamedTemporaryFile(suffix=\".tar.gz\", 
delete=False)\n\n # compress the file by reading the tar file and passing\n # it through a GzipFile object, writing the given\n # block size (by default 8192 bytes) at a time\n targz_gzf = gzip.GzipFile(fileobj=targz_fh, mode='wb')\n tar_tmp_fh.seek(0)\n dat = tar_tmp_fh.read(io.DEFAULT_BUFFER_SIZE)\n while dat:\n targz_gzf.write(dat)\n dat = tar_tmp_fh.read(io.DEFAULT_BUFFER_SIZE)\n\n targz_gzf.flush()\n targz_gzf.close()\n targz_fh.close()\n\n if 'folder' in applet_spec:\n try:\n dxpy.get_handler(dest_project).new_folder(applet_spec['folder'], parents=True)\n except dxpy.exceptions.DXAPIError:\n pass # TODO: make this better\n\n dx_resource_archive = dxpy.upload_local_file(\n targz_fh.name,\n wait_on_close=True,\n project=dest_project,\n folder=target_folder,\n hidden=True,\n properties=properties_dict\n )\n\n os.unlink(targz_fh.name)\n\n # end compressed file creation and upload\n\n archive_link = dxpy.dxlink(dx_resource_archive.get_id())\n\n # end tempfile.NamedTemporaryFile(suffix=\".tar\") as tar_fh\n\n return [{'name': 'resources.tar.gz', 'id': archive_link}]\n else:\n return []"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nuploading an applet to the platform.", "response": "def upload_applet(src_dir, uploaded_resources, check_name_collisions=True, overwrite=False, archive=False, project=None, override_folder=None, override_name=None, dx_toolkit_autodep=\"stable\", dry_run=False, **kwargs):\n \"\"\"\n Creates a new applet object.\n\n :param project: ID of container in which to create the applet.\n :type project: str, or None to use whatever is specified in dxapp.json\n :param override_folder: folder name for the resulting applet which, if specified, overrides that given in dxapp.json\n :type override_folder: str\n :param override_name: name for the resulting applet which, if specified, overrides that given in dxapp.json\n :type override_name: str\n :param dx_toolkit_autodep: What type of dx-toolkit dependency to\n inject if none 
is present. \"stable\" for the APT package; \"git\"\n for HEAD of dx-toolkit master branch; or False for no\n dependency.\n :type dx_toolkit_autodep: boolean or string\n\n \"\"\"\n applet_spec = _get_applet_spec(src_dir)\n\n if project is None:\n dest_project = applet_spec['project']\n else:\n dest_project = project\n applet_spec['project'] = project\n\n if 'name' not in applet_spec:\n try:\n applet_spec['name'] = os.path.basename(os.path.abspath(src_dir))\n except:\n raise AppBuilderException(\"Could not determine applet name from the specification (dxapp.json) or from the name of the working directory (%r)\" % (src_dir,))\n\n if override_folder:\n applet_spec['folder'] = override_folder\n if 'folder' not in applet_spec:\n applet_spec['folder'] = '/'\n\n if override_name:\n applet_spec['name'] = override_name\n\n if 'dxapi' not in applet_spec:\n applet_spec['dxapi'] = dxpy.API_VERSION\n\n applets_to_overwrite = []\n archived_applet = None\n if check_name_collisions and not dry_run:\n destination_path = applet_spec['folder'] + ('/' if not applet_spec['folder'].endswith('/') else '') + applet_spec['name']\n logger.debug(\"Checking for existing applet at \" + destination_path)\n for result in dxpy.find_data_objects(classname=\"applet\", name=applet_spec[\"name\"], folder=applet_spec['folder'], project=dest_project, recurse=False):\n if overwrite:\n # Don't remove the old applet until after the new one\n # has been created. 
This avoids a race condition where\n # we remove the old applet, but that causes garbage\n # collection of the bundled resources that will be\n # shared with the new applet\n applets_to_overwrite.append(result['id'])\n elif archive:\n logger.debug(\"Archiving applet %s\" % (result['id']))\n proj = dxpy.DXProject(dest_project)\n archive_folder = '/.Applet_archive'\n try:\n proj.list_folder(archive_folder)\n except dxpy.DXAPIError:\n proj.new_folder(archive_folder)\n\n proj.move(objects=[result['id']], destination=archive_folder)\n archived_applet = dxpy.DXApplet(result['id'], project=dest_project)\n now = datetime.datetime.fromtimestamp(archived_applet.created/1000).ctime()\n new_name = archived_applet.name + \" ({d})\".format(d=now)\n archived_applet.rename(new_name)\n logger.info(\"Archived applet %s to %s:\\\"%s/%s\\\"\" % (result['id'], dest_project, archive_folder, new_name))\n else:\n raise AppBuilderException(\"An applet already exists at %s (id %s) and the --overwrite (-f) or --archive (-a) options were not given\" % (destination_path, result['id']))\n\n # -----\n # Override various fields from the pristine dxapp.json\n\n # Carry region-specific values from regionalOptions into the main\n # runSpec\n applet_spec[\"runSpec\"].setdefault(\"bundledDepends\", [])\n applet_spec[\"runSpec\"].setdefault(\"assetDepends\", [])\n if not dry_run:\n region = dxpy.api.project_describe(dest_project, input_params={\"fields\": {\"region\": True}})[\"region\"]\n\n # if regionalOptions contain at least one region, they must include\n # the region of the target project\n if len(applet_spec.get('regionalOptions', {})) != 0 and region not in applet_spec.get('regionalOptions', {}):\n err_mesg = \"destination project is in region {} but \\\"regionalOptions\\\" do not contain this region. 
\".format(region)\n err_mesg += \"Please, update your \\\"regionalOptions\\\" specification\"\n raise AppBuilderException(err_mesg)\n\n regional_options = applet_spec.get('regionalOptions', {}).get(region, {})\n\n # We checked earlier that if region-specific values for the\n # fields below are given, the same fields are not also specified\n # in the top-level runSpec. So the operations below should not\n # result in any user-supplied settings being clobbered.\n\n if 'systemRequirements' in regional_options:\n applet_spec[\"runSpec\"][\"systemRequirements\"] = regional_options['systemRequirements']\n\n if 'bundledDepends' in regional_options:\n applet_spec[\"runSpec\"][\"bundledDepends\"].extend(regional_options[\"bundledDepends\"])\n if 'assetDepends' in regional_options:\n applet_spec[\"runSpec\"][\"assetDepends\"].extend(regional_options[\"assetDepends\"])\n\n # Inline Readme.md and Readme.developer.md\n dxpy.executable_builder.inline_documentation_files(applet_spec, src_dir)\n\n # Inline the code of the program\n if \"file\" in applet_spec[\"runSpec\"]:\n # Put it into runSpec.code instead\n with open(os.path.join(src_dir, applet_spec[\"runSpec\"][\"file\"])) as code_fh:\n applet_spec[\"runSpec\"][\"code\"] = code_fh.read()\n del applet_spec[\"runSpec\"][\"file\"]\n\n # If this is applet requires a cluster, inline any bootstrapScript code that may be provided.\n # bootstrapScript is an *optional* clusterSpec parameter.\n # NOTE: assumes bootstrapScript is always provided as a filename\n if \"systemRequirements\" in applet_spec[\"runSpec\"]:\n sys_reqs = applet_spec[\"runSpec\"][\"systemRequirements\"]\n for entry_point in sys_reqs:\n try:\n bootstrap_script = os.path.join(src_dir, sys_reqs[entry_point][\"clusterSpec\"][\"bootstrapScript\"])\n with open(bootstrap_script) as code_fh:\n sys_reqs[entry_point][\"clusterSpec\"][\"bootstrapScript\"] = code_fh.read()\n except KeyError:\n # either no \"clusterSpec\" or no \"bootstrapScript\" within \"clusterSpec\"\n 
continue\n except IOError:\n raise AppBuilderException(\"The clusterSpec \\\"bootstrapScript\\\" could not be read.\")\n\n # Attach bundled resources to the app\n if uploaded_resources is not None:\n applet_spec[\"runSpec\"][\"bundledDepends\"].extend(uploaded_resources)\n\n # Validate and process assetDepends\n asset_depends = applet_spec[\"runSpec\"][\"assetDepends\"]\n if type(asset_depends) is not list or any(type(dep) is not dict for dep in asset_depends):\n raise AppBuilderException(\"Expected runSpec.assetDepends to be an array of objects\")\n for asset in asset_depends:\n asset_project = asset.get(\"project\", None)\n asset_folder = asset.get(\"folder\", '/')\n asset_stages = asset.get(\"stages\", None)\n if \"id\" in asset:\n asset_record = dxpy.DXRecord(asset[\"id\"]).describe(fields={'details'}, default_fields=True)\n elif \"name\" in asset and asset_project is not None and \"version\" in asset:\n try:\n asset_record = dxpy.find_one_data_object(zero_ok=True, classname=\"record\", typename=\"AssetBundle\",\n name=asset[\"name\"], properties=dict(version=asset[\"version\"]),\n project=asset_project, folder=asset_folder, recurse=False,\n describe={\"defaultFields\": True, \"fields\": {\"details\": True}},\n state=\"closed\", more_ok=False)\n except dxpy.exceptions.DXSearchError:\n msg = \"Found more than one asset record that matches: name={0}, folder={1} in project={2}.\"\n raise AppBuilderException(msg.format(asset[\"name\"], asset_folder, asset_project))\n else:\n raise AppBuilderException(\"Each runSpec.assetDepends element must have either {'id'} or \"\n \"{'name', 'project' and 'version'} field(s).\")\n\n if asset_record:\n if \"id\" in asset:\n asset_details = asset_record[\"details\"]\n else:\n asset_details = asset_record[\"describe\"][\"details\"]\n if \"archiveFileId\" in asset_details:\n archive_file_id = asset_details[\"archiveFileId\"]\n else:\n raise AppBuilderException(\"The required field 'archiveFileId' was not found in \"\n \"the details 
of the asset bundle %s \" % asset_record[\"id\"])\n archive_file_name = dxpy.DXFile(archive_file_id).describe()[\"name\"]\n bundle_depends = {\n \"name\": archive_file_name,\n \"id\": archive_file_id\n }\n if asset_stages:\n bundle_depends[\"stages\"] = asset_stages\n applet_spec[\"runSpec\"][\"bundledDepends\"].append(bundle_depends)\n # If the file is not found in the applet destination project, clone it from the asset project\n if (not dry_run and\n dxpy.DXRecord(dxid=asset_record[\"id\"], project=dest_project).describe()[\"project\"] != dest_project):\n dxpy.DXRecord(asset_record[\"id\"], project=asset_record[\"project\"]).clone(dest_project)\n else:\n raise AppBuilderException(\"No asset bundle was found that matched the specification %s\"\n % (json.dumps(asset)))\n\n # Include the DNAnexus client libraries as an execution dependency, if they are not already\n # there\n if dx_toolkit_autodep == \"git\":\n dx_toolkit_dep = {\"name\": \"dx-toolkit\",\n \"package_manager\": \"git\",\n \"url\": \"git://github.com/dnanexus/dx-toolkit.git\",\n \"tag\": \"master\",\n \"build_commands\": \"make install DESTDIR=/ PREFIX=/opt/dnanexus\"}\n elif dx_toolkit_autodep == \"stable\":\n dx_toolkit_dep = {\"name\": \"dx-toolkit\", \"package_manager\": \"apt\"}\n elif dx_toolkit_autodep:\n raise AppBuilderException(\"dx_toolkit_autodep must be one of 'stable', 'git', or False; got %r instead\" % (dx_toolkit_autodep,))\n\n if dx_toolkit_autodep:\n applet_spec[\"runSpec\"].setdefault(\"execDepends\", [])\n exec_depends = applet_spec[\"runSpec\"][\"execDepends\"]\n if type(exec_depends) is not list or any(type(dep) is not dict for dep in exec_depends):\n raise AppBuilderException(\"Expected runSpec.execDepends to be an array of objects\")\n dx_toolkit_dep_found = any(dep.get('name') in DX_TOOLKIT_PKGS or dep.get('url') in DX_TOOLKIT_GIT_URLS for dep in exec_depends)\n if not dx_toolkit_dep_found:\n exec_depends.append(dx_toolkit_dep)\n if dx_toolkit_autodep == \"git\":\n 
applet_spec.setdefault(\"access\", {})\n applet_spec[\"access\"].setdefault(\"network\", [])\n # Note: this can be set to \"github.com\" instead of \"*\" if the build doesn't download any deps\n if \"*\" not in applet_spec[\"access\"][\"network\"]:\n applet_spec[\"access\"][\"network\"].append(\"*\")\n\n merge(applet_spec, kwargs)\n\n # -----\n # Now actually create the applet\n\n if dry_run:\n print(\"Would create the following applet:\")\n print(json.dumps(applet_spec, indent=2))\n print(\"*** DRY-RUN-- no applet was created ***\")\n return None, None\n\n if applet_spec.get(\"categories\", []):\n if \"tags\" not in applet_spec:\n applet_spec[\"tags\"] = []\n applet_spec[\"tags\"] = list(set(applet_spec[\"tags\"]) | set(applet_spec[\"categories\"]))\n\n applet_id = dxpy.api.applet_new(applet_spec)[\"id\"]\n\n if archived_applet:\n archived_applet.set_properties({'replacedWith': applet_id})\n\n # Now it is permissible to delete the old applet(s), if any\n if applets_to_overwrite:\n logger.info(\"Deleting applet(s) %s\" % (','.join(applets_to_overwrite)))\n dxpy.DXProject(dest_project).remove_objects(applets_to_overwrite)\n\n return applet_id, applet_spec"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _create_or_update_version(app_name, version, app_spec, try_update=True):\n # This has a race condition since the app could have been created or\n # published since we last looked.\n try:\n app_id = dxpy.api.app_new(app_spec)[\"id\"]\n return app_id\n except dxpy.exceptions.DXAPIError as e:\n # TODO: detect this error more reliably\n if e.name == 'InvalidInput' and e.msg == 'Specified name and version conflict with an existing alias':\n print('App %s/%s already exists' % (app_spec[\"name\"], version), file=sys.stderr)\n # The version number was already taken, so app/new doesn't work.\n # However, maybe it hasn't been published yet, so we might be able\n # to app-xxxx/update it.\n app_describe = 
dxpy.api.app_describe(\"app-\" + app_name, alias=version)\n if app_describe.get(\"published\", 0) > 0:\n return None\n return _update_version(app_name, version, app_spec, try_update=try_update)\n raise e", "response": "Creates a new version of the app; if that version already exists but is unpublished, tries to update it in place. Returns an app_id or None."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _update_version(app_name, version, app_spec, try_update=True):\n if not try_update:\n return None\n try:\n app_id = dxpy.api.app_update(\"app-\" + app_name, version, app_spec)[\"id\"]\n return app_id\n except dxpy.exceptions.DXAPIError as e:\n if e.name == 'InvalidState':\n print('App %s/%s has already been published' % (app_spec[\"name\"], version), file=sys.stderr)\n return None\n raise e", "response": "Updates an app version in place. Returns an app_id or None."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate an app object from the specified applet(s).", "response": "def create_app_multi_region(regional_options, app_name, src_dir, publish=False, set_default=False, billTo=None,\n try_versions=None, try_update=True, confirm=True):\n \"\"\"\n Creates a new app object from the specified applet(s).\n\n :param regional_options: Region-specific options for the app. 
See\n https://wiki.dnanexus.com/API-Specification-v1.0.0/Apps#API-method:-/app/new\n for details; this should contain keys for each region the app is\n to be enabled in, and for the values, a dict containing (at\n minimum) a key \"applet\" whose value is an applet ID for that\n region.\n :type regional_options: dict\n \"\"\"\n return _create_app(dict(regionalOptions=regional_options), app_name, src_dir, publish=publish,\n set_default=set_default, billTo=billTo, try_versions=try_versions, try_update=try_update,\n confirm=confirm)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_app(applet_id, applet_name, src_dir, publish=False, set_default=False, billTo=None, try_versions=None,\n try_update=True, confirm=True, regional_options=None):\n \"\"\"\n Creates a new app object from the specified applet.\n\n .. deprecated:: 0.204.0\n Use :func:`create_app_multi_region()` instead.\n\n \"\"\"\n # In this case we don't know the region of the applet, so we use the\n # legacy API {\"applet\": applet_id} without specifying a region\n # specifically.\n return _create_app(dict(applet=applet_id), applet_name, src_dir, publish=publish, set_default=set_default,\n billTo=billTo, try_versions=try_versions, try_update=try_update, confirm=confirm)", "response": "Create a new app object from the applet."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_enabled_regions(app_spec, from_command_line):\n enabled_regions = dxpy.executable_builder.get_enabled_regions('app', app_spec, from_command_line, AppBuilderException)\n\n if enabled_regions is not None and len(enabled_regions) == 0:\n raise AssertionError(\"This app should be enabled in at least one region\")\n return enabled_regions", "response": "Returns a list of the regions in which the app should be enabled."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function 
does\ndef exit_with_error(msg):\n '''\n :param msg: string message to print before exiting\n\n Print the error message, as well as a blurb on where to find the\n job workspaces\n '''\n msg += '\\n'\n msg += 'Local job workspaces can be found in: ' + str(environ.get('DX_TEST_JOB_HOMEDIRS'))\n sys.exit(msg)", "response": "Exit with error message"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef has_local_job_refs(io_hash):\n '''\n :param io_hash: input/output hash\n :type io_hash: dict\n :returns: boolean indicating whether any job-based object references are found in *io_hash*\n '''\n q = []\n\n for field in io_hash:\n if is_job_ref(io_hash[field]):\n if get_job_from_jbor(io_hash[field]).startswith('localjob'):\n return True\n elif isinstance(io_hash[field], list) or isinstance(io_hash[field], dict):\n q.append(io_hash[field])\n\n while len(q) > 0:\n thing = q.pop()\n if isinstance(thing, list):\n for i in range(len(thing)):\n if is_job_ref(thing[i]):\n if get_job_from_jbor(thing[i]).startswith('localjob'):\n return True\n elif isinstance(thing[i], list) or isinstance(thing[i], dict):\n q.append(thing[i])\n else:\n for field in thing:\n if is_job_ref(thing[field]):\n if get_job_from_jbor(thing[field]).startswith('localjob'):\n return True\n elif isinstance(thing[field], list) or isinstance(thing[field], dict):\n q.append(thing[field])\n\n return False", "response": "Returns True if any local job-based object references are found in the input or output hash."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef resolve_job_ref(jbor, job_outputs={}, should_resolve=True):\n '''\n :param jbor: a dict that is a valid job-based object reference\n :type jbor: dict\n :param job_outputs: a dict of finished local jobs to their output hashes\n :type job_outputs: :class:`collections.OrderedDict`\n :returns: the referenced value if present\n :raises:
:exc:`Exception` if the job-based object reference cannot be resolved\n\n TODO: Support metadata references\n '''\n ref_job_id = get_job_from_jbor(jbor)\n ref_job_field = get_field_from_jbor(jbor)\n ref_job_index = get_index_from_jbor(jbor)\n def resolve_from_hash(output_hash):\n if ref_job_index is None:\n return output_hash[ref_job_field]\n else:\n return output_hash[ref_job_field][ref_job_index]\n if is_localjob_id(ref_job_id):\n if job_outputs.get(ref_job_id) is None:\n if should_resolve:\n raise Exception('Job ' + ref_job_id + ' not found in local finished jobs')\n else:\n return jbor\n if ref_job_field not in job_outputs[ref_job_id]:\n raise Exception('Cannot resolve a JBOR with job ID ' + ref_job_id + ' because field \"' + ref_job_field + '\" was not found in its output')\n return resolve_from_hash(job_outputs[ref_job_id])\n else:\n dxjob = dxpy.DXJob(ref_job_id)\n try:\n dxjob.wait_on_done()\n except Exception as e:\n raise Exception('Could not wait for ' + ref_job_id + ' to finish: ' + str(e))\n job_desc = dxjob.describe()\n if ref_job_field not in job_desc['output']:\n raise Exception('Cannot resolve a JBOR with job ID ' + ref_job_id + ' because field \"' + ref_job_field + '\" was not found in its output')\n return resolve_from_hash(job_desc['output'])", "response": "Resolve a JBOR object to a unique version of the object that is referenced by the JBOR."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef resolve_job_references(io_hash, job_outputs, should_resolve=True):\n '''\n :param io_hash: an input or output hash in which to resolve any job-based object references possible\n :type io_hash: dict\n :param job_outputs: a mapping of finished local jobs to their output hashes\n :type job_outputs: dict\n :param should_resolve: whether it is an error if a job-based object reference in *io_hash* cannot be resolved yet\n :type should_resolve: boolean\n\n Modifies *io_hash* in-place.\n '''\n q = []\n\n for field in 
io_hash:\n if is_job_ref(io_hash[field]):\n io_hash[field] = resolve_job_ref(io_hash[field], job_outputs, should_resolve)\n elif isinstance(io_hash[field], list) or isinstance(io_hash[field], dict):\n q.append(io_hash[field])\n\n while len(q) > 0:\n thing = q.pop()\n if isinstance(thing, list):\n for i in range(len(thing)):\n if is_job_ref(thing[i]):\n thing[i] = resolve_job_ref(thing[i], job_outputs, should_resolve)\n elif isinstance(thing[i], list) or isinstance(thing[i], dict):\n q.append(thing[i])\n else:\n for field in thing:\n if is_job_ref(thing[field]):\n thing[field] = resolve_job_ref(thing[field], job_outputs, should_resolve)\n elif isinstance(thing[field], list) or isinstance(thing[field], dict):\n q.append(thing[field])", "response": "Resolves any job-based object references in the given input or output hashes."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_implicit_depends_on(input_hash, depends_on):\n '''\n Add DNAnexus links to non-closed data objects in input_hash to depends_on\n '''\n q = []\n\n for field in input_hash:\n possible_dep = get_nonclosed_data_obj_link(input_hash[field])\n if possible_dep is not None:\n depends_on.append(possible_dep)\n elif isinstance(input_hash[field], list) or isinstance(input_hash[field], dict):\n q.append(input_hash[field])\n\n while len(q) > 0:\n thing = q.pop()\n if isinstance(thing, list):\n for i in range(len(thing)):\n possible_dep = get_nonclosed_data_obj_link(thing[i])\n if possible_dep is not None:\n depends_on.append(possible_dep)\n elif isinstance(thing[i], list) or isinstance(thing[i], dict):\n q.append(thing[i])\n else:\n for field in thing:\n possible_dep = get_nonclosed_data_obj_link(thing[field])\n if possible_dep is not None:\n depends_on.append(possible_dep)\n elif isinstance(thing[field], list) or isinstance(thing[field], dict):\n q.append(thing[field])", "response": "Get implicit dependencies of data objects in input_hash."} {"SOURCE":
"codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef queue_entry_point(function, input_hash, depends_on=[], name=None):\n '''\n :param function: function to run\n :param input_hash: input to new job\n :param depends_on: list of data object IDs and/or job IDs (local or remote) to wait for before the job can be run\n :type depends_on: list of strings\n :param name: job name (optional)\n :returns: new local job ID\n\n This function should only be called by a locally running job, so\n all relevant DX_TEST_* environment variables should be set.\n\n This function will set up the home directory for the job, add an\n entry in job_outputs.json, and append the job information to the\n job_queue.json file. (Both files found in\n $DX_TEST_JOB_HOMEDIRS.)\n '''\n ensure_env_vars()\n\n all_job_outputs_path = os.path.join(environ['DX_TEST_JOB_HOMEDIRS'], 'job_outputs.json')\n\n with open(all_job_outputs_path, 'r') as fd:\n all_job_outputs = json.load(fd, object_pairs_hook=collections.OrderedDict)\n job_id = 'localjob-' + str(len(all_job_outputs))\n\n with open(all_job_outputs_path, write_mode) as fd:\n all_job_outputs[job_id] = None\n json.dump(all_job_outputs, fd, indent=4)\n fd.write(eol)\n\n job_homedir = os.path.join(environ['DX_TEST_JOB_HOMEDIRS'], job_id)\n os.mkdir(job_homedir)\n\n job_queue_path = os.path.join(environ['DX_TEST_JOB_HOMEDIRS'], 'job_queue.json')\n with open(job_queue_path, 'r') as fd:\n job_queue = json.load(fd)\n job_entry = {\"id\": job_id,\n \"function\": function,\n \"input_hash\": input_hash,\n \"depends_on\": depends_on}\n if name is not None:\n job_entry['name'] = name\n job_queue.append(job_entry)\n with open(job_queue_path, write_mode) as fd:\n json.dump(job_queue, fd, indent=4)\n fd.write(eol)\n\n return job_id", "response": "This function creates a new local job ID\n that runs the function with input_hash and adds the job information to the job_queue. 
json file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run_one_entry_point(job_id, function, input_hash, run_spec, depends_on, name=None):\n '''\n :param job_id: job ID of the local job to run\n :type job_id: string\n :param function: function to run\n :type function: string\n :param input_hash: input for the job (may include job-based object references)\n :type input_hash: dict\n :param run_spec: run specification from the dxapp.json of the app\n :type run_spec: dict\n\n Runs the specified entry point and retrieves the job's output,\n updating job_outputs.json (in $DX_TEST_JOB_HOMEDIRS) appropriately.\n '''\n print('======')\n\n job_homedir = os.path.join(environ['DX_TEST_JOB_HOMEDIRS'], job_id)\n\n job_env = environ.copy()\n job_env['HOME'] = os.path.join(environ['DX_TEST_JOB_HOMEDIRS'], job_id)\n\n all_job_outputs_path = os.path.join(environ['DX_TEST_JOB_HOMEDIRS'], 'job_outputs.json')\n\n with open(all_job_outputs_path, 'r') as fd:\n all_job_outputs = json.load(fd, object_pairs_hook=collections.OrderedDict)\n\n if isinstance(name, basestring):\n name += ' (' + job_id + ':' + function + ')'\n else:\n name = job_id + ':' + function\n job_name = BLUE() + BOLD() + name + ENDC()\n print(job_name)\n\n # Resolve local job-based object references\n try:\n resolve_job_references(input_hash, all_job_outputs)\n except Exception as e:\n exit_with_error(job_name + ' ' + JOB_STATES('failed') + ' when resolving input:\\n' + fill(str(e)))\n\n # Get list of non-closed data objects in the input that appear as\n # DNAnexus links; append to depends_on\n if depends_on is None:\n depends_on = []\n get_implicit_depends_on(input_hash, depends_on)\n\n try:\n wait_for_depends_on(depends_on, all_job_outputs)\n except Exception as e:\n exit_with_error(job_name + ' ' + JOB_STATES('failed') + ' when processing depends_on:\\n' + fill(str(e)))\n\n # Save job input to job_input.json\n with open(os.path.join(job_homedir, 
'job_input.json'), 'w') as fd:\n json.dump(input_hash, fd, indent=4)\n fd.write('\\n')\n\n print(job_output_to_str(input_hash, title=(BOLD() + 'Input: ' + ENDC()),\n title_len=len(\"Input: \")).lstrip())\n\n if run_spec['interpreter'] == 'bash':\n # Save job input to env vars\n env_path = os.path.join(job_homedir, 'environment')\n with open(env_path, 'w') as fd:\n job_input_file = os.path.join(job_homedir, 'job_input.json')\n var_defs_hash = file_load_utils.gen_bash_vars(job_input_file, job_homedir=job_homedir)\n for key, val in list(var_defs_hash.items()):\n fd.write(\"{}={}\\n\".format(key, val))\n\n print(BOLD() + 'Logs:' + ENDC())\n start_time = datetime.datetime.now()\n if run_spec['interpreter'] == 'bash':\n script = '''\n cd {homedir};\n . {env_path};\n . {code_path};\n if [[ $(type -t {function}) == \"function\" ]];\n then {function};\n else echo \"$0: Global scope execution complete. Not invoking entry point function {function} because it was not found\" 1>&2;\n fi'''.format(homedir=pipes.quote(job_homedir),\n env_path=pipes.quote(os.path.join(job_env['HOME'], 'environment')),\n code_path=pipes.quote(environ['DX_TEST_CODE_PATH']),\n function=function)\n invocation_args = ['bash', '-c', '-e'] + (['-x'] if environ.get('DX_TEST_X_FLAG') else []) + [script]\n elif run_spec['interpreter'] == 'python2.7':\n script = '''#!/usr/bin/env python\nimport os\nos.chdir({homedir})\n\n{code}\n\nimport dxpy, json\nif dxpy.utils.exec_utils.RUN_COUNT == 0:\n dxpy.run()\n'''.format(homedir=repr(job_homedir),\n code=run_spec['code'])\n\n job_env['DX_TEST_FUNCTION'] = function\n invocation_args = ['python', '-c', script]\n\n if USING_PYTHON2:\n invocation_args = [arg.encode(sys.stdout.encoding) for arg in invocation_args]\n env = {k: v.encode(sys.stdout.encoding) for k, v in job_env.items()}\n else:\n env = job_env\n\n fn_process = subprocess.Popen(invocation_args, env=env)\n\n fn_process.communicate()\n end_time = datetime.datetime.now()\n\n if fn_process.returncode != 0:\n 
exit_with_error(job_name + ' ' + JOB_STATES('failed') + ', exited with error code ' + str(fn_process.returncode) + ' after ' + str(end_time - start_time))\n\n # Now updating job output aggregation file with job's output\n job_output_path = os.path.join(job_env['HOME'], 'job_output.json')\n if os.path.exists(job_output_path):\n try:\n with open(job_output_path, 'r') as fd:\n job_output = json.load(fd, object_pairs_hook=collections.OrderedDict)\n except Exception as e:\n exit_with_error('Error: Could not load output of ' + job_name + ':\\n' + fill(str(e.__class__) + ': ' + str(e)))\n else:\n job_output = {}\n\n print(job_name + ' -> ' + GREEN() + 'finished running' + ENDC() + ' after ' + str(end_time - start_time))\n print(job_output_to_str(job_output, title=(BOLD() + \"Output: \" + ENDC()),\n title_len=len(\"Output: \")).lstrip())\n\n with open(os.path.join(environ['DX_TEST_JOB_HOMEDIRS'], 'job_outputs.json'), 'r') as fd:\n all_job_outputs = json.load(fd, object_pairs_hook=collections.OrderedDict)\n all_job_outputs[job_id] = job_output\n\n # Before dumping, see if any new jbors should be resolved now\n for other_job_id in all_job_outputs:\n if all_job_outputs[other_job_id] is None:\n # Skip if job is not done yet (true for ancestor jobs)\n continue\n resolve_job_references(all_job_outputs[other_job_id], all_job_outputs, should_resolve=False)\n\n with open(os.path.join(environ['DX_TEST_JOB_HOMEDIRS'], 'job_outputs.json'), write_mode) as fd:\n json.dump(all_job_outputs, fd, indent=4)\n fd.write(eol)", "response": "Runs a single entry point on the local job."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef run_entry_points(run_spec):\n '''\n :param run_spec: run specification from the dxapp.json of the app\n :type run_spec: dict\n\n Runs all job entry points found in\n $DX_TEST_JOB_HOMEDIRS/job_queue.json in a first-in, first-out\n manner until it is an empty array (or an error occurs).\n '''\n 
job_queue_path = os.path.join(environ['DX_TEST_JOB_HOMEDIRS'], 'job_queue.json')\n all_job_outputs_path = os.path.join(environ['DX_TEST_JOB_HOMEDIRS'], 'job_outputs.json')\n\n while True:\n with open(job_queue_path, 'r') as fd:\n job_queue = json.load(fd)\n if len(job_queue) == 0:\n return\n\n with open(all_job_outputs_path, 'r') as fd:\n all_job_outputs = json.load(fd)\n\n entry_point_to_run = None\n for i, entry_point in enumerate(job_queue):\n runnable = True\n # See if its inputs are ready\n while has_local_job_refs(entry_point['input_hash']):\n try:\n resolve_job_references(entry_point['input_hash'], all_job_outputs)\n except:\n runnable = False\n break\n if runnable:\n entry_point_to_run = job_queue.pop(i)\n break\n\n if entry_point_to_run is None:\n # Just run the first entry point and let the runner throw\n # the appropriate error\n entry_point_to_run = job_queue.pop(0)\n\n with open(job_queue_path, write_mode) as fd:\n # Update job queue with updated inputs and after having\n # popped the entry point to be run\n json.dump(job_queue, fd)\n fd.write(eol)\n\n run_one_entry_point(job_id=entry_point_to_run['id'],\n function=entry_point_to_run['function'],\n input_hash=entry_point_to_run['input_hash'],\n run_spec=run_spec,\n depends_on=entry_point_to_run.get('depends_on', []),\n name=entry_point_to_run.get('name'))", "response": "Runs all queued job entry points in first-in, first-out order."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef analysis_add_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/addTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Adds tags to the specified analysis object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef analysis_describe(object_id, input_params={}, always_retry=True, **kwargs):\n return
DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Invokes the /analysis-xxxx/describe API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Workflows-and-Analyses#API-method%3A-%2Fanalysis-xxxx%2Fdescribe"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef analysis_remove_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/removeTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Removes tags from the specified analysis object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets properties of the object in the current node.", "response": "def analysis_set_properties(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /analysis-xxxx/setProperties API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Workflows-and-Analyses#API-method%3A-%2Fanalysis-xxxx%2FsetProperties\n \"\"\"\n return DXHTTPRequest('/%s/setProperties' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef analysis_terminate(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/terminate' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Terminates the specified analysis."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a new developer to the current version of the app.", "response": "def app_add_developers(app_name_or_id, alias=None, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /app-xxxx/addDevelopers API method.\n\n For more info, see:
https://wiki.dnanexus.com/API-Specification-v1.0.0/Apps#API-method:-/app-xxxx%5B/yyyy%5D/addDevelopers\n \"\"\"\n fully_qualified_version = app_name_or_id + (('/' + alias) if alias else '')\n return DXHTTPRequest('/%s/addDevelopers' % fully_qualified_version, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ninvoke the /app-xxxx/run API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Apps#API-method:-/app-xxxx%5B/yyyy%5D/run", "response": "def app_run(app_name_or_id, alias=None, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /app-xxxx/run API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Apps#API-method:-/app-xxxx%5B/yyyy%5D/run\n \"\"\"\n input_params_cp = Nonce.update_nonce(input_params)\n fully_qualified_version = app_name_or_id + (('/' + alias) if alias else '')\n return DXHTTPRequest('/%s/run' % fully_qualified_version, input_params_cp, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef applet_add_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/addTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Adds tags to applet -xxxx."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef applet_describe(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Get the details of a applet."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef applet_get(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/get' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Get 
the applet's content."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef applet_get_details(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/getDetails' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Get details of applet."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef applet_list_projects(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/listProjects' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "List the applet projects of a given object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef applet_remove_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/removeTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Remove tags from applet."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef applet_rename(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/rename' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Invokes the /applet-xxxx/rename API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Name#API-method%3A-%2Fclass-xxxx%2Frename"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ninvoke the /applet-xxxx/validateBatch API method.
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Applets-and-Entry-Points#API-method%3A-%2Fapplet-xxxx%2FvalidateBatch", "response": "def applet_validate_batch(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /applet-xxxx/validateBatch API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Applets-and-Entry-Points#API-method%3A-%2Fapplet-xxxx%2FvalidateBatch\n \"\"\"\n return DXHTTPRequest('/%s/validateBatch' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninvoke the /applet-xxxx/run API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Applets-and-Entry-Points#API-method%3A-%2Fapplet-xxxx%2Frun", "response": "def applet_run(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /applet-xxxx/run API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Applets-and-Entry-Points#API-method%3A-%2Fapplet-xxxx%2Frun\n \"\"\"\n input_params_cp = Nonce.update_nonce(input_params)\n return DXHTTPRequest('/%s/run' % object_id, input_params_cp, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninvoke the /applet-xxxx/setProperties API method. 
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Properties#API-method%3A-%2Fclass-xxxx%2FsetProperties", "response": "def applet_set_properties(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /applet-xxxx/setProperties API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Properties#API-method%3A-%2Fclass-xxxx%2FsetProperties\n \"\"\"\n return DXHTTPRequest('/%s/setProperties' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ninvoke the /container-xxxx/clone API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Cloning#API-method%3A-%2Fclass-xxxx%2Fclone", "response": "def container_clone(object_id, input_params={}, always_retry=False, **kwargs):\n \"\"\"\n Invokes the /container-xxxx/clone API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Cloning#API-method%3A-%2Fclass-xxxx%2Fclone\n \"\"\"\n return DXHTTPRequest('/%s/clone' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nretrieves the details of a container.", "response": "def container_describe(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /container-xxxx/describe API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Containers-for-Execution#API-method%3A-%2Fcontainer-xxxx%2Fdescribe\n \"\"\"\n return DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef container_destroy(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/destroy' % object_id, input_params, always_retry=always_retry, **kwargs)", 
"response": "Invokes the /container-xxxx/destroy API method."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlists the contents of a container.", "response": "def container_list_folder(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /container-xxxx/listFolder API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Folders-and-Deletion#API-method%3A-%2Fclass-xxxx%2FlistFolder\n \"\"\"\n return DXHTTPRequest('/%s/listFolder' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmoves the container to the next container.", "response": "def container_move(object_id, input_params={}, always_retry=False, **kwargs):\n \"\"\"\n Invokes the /container-xxxx/move API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Folders-and-Deletion#API-method%3A-%2Fclass-xxxx%2Fmove\n \"\"\"\n return DXHTTPRequest('/%s/move' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new folder on the container.", "response": "def container_new_folder(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /container-xxxx/newFolder API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Folders-and-Deletion#API-method%3A-%2Fclass-xxxx%2FnewFolder\n \"\"\"\n return DXHTTPRequest('/%s/newFolder' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove a folder from a container.", "response": "def container_remove_folder(object_id, input_params={}, always_retry=False, **kwargs):\n \"\"\"\n Invokes the /container-xxxx/removeFolder API method.\n\n For more info, see: 
https://wiki.dnanexus.com/API-Specification-v1.0.0/Folders-and-Deletion#API-method%3A-%2Fclass-xxxx%2FremoveFolder\n \"\"\"\n return DXHTTPRequest('/%s/removeFolder' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef container_remove_objects(object_id, input_params={}, always_retry=False, **kwargs):\n return DXHTTPRequest('/%s/removeObjects' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Remove objects from a container."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef container_rename_folder(object_id, input_params={}, always_retry=False, **kwargs):\n return DXHTTPRequest('/%s/renameFolder' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Invokes the /container-xxxx/renameFolder API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Folders-and-Deletion#API-method%3A-%2Fclass-xxxx%2FrenameFolder"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef database_add_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/addTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Adds tags to a node in the database."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef database_add_types(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/addTypes' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Adds types to a database."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef database_close(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/close' % 
object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Closes the specified object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef database_describe(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Get the details of a single database."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef database_get_details(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/getDetails' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Get details of a single object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nlists all projects in the database.", "response": "def database_list_projects(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /database-xxxx/listProjects API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Cloning#API-method%3A-%2Fclass-xxxx%2FlistProjects\n \"\"\"\n return DXHTTPRequest('/%s/listProjects' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef database_relocate(object_id, input_params={}, always_retry=False, **kwargs):\n return DXHTTPRequest('/%s/relocate' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Relocate a node in a cluster."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef database_remove_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/removeTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Remove tags 
from a database."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove types from a database.", "response": "def database_remove_types(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /database-xxxx/removeTypes API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Types#API-method%3A-%2Fclass-xxxx%2FremoveTypes\n \"\"\"\n return DXHTTPRequest('/%s/removeTypes' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef database_rename(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/rename' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Rename a database."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef database_set_details(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/setDetails' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Sets the details of a database."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef database_set_properties(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/setProperties' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Sets the properties of a database object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef database_set_visibility(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/setVisibility' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Sets the visibility of a database."} {"SOURCE": "codesearchnet", 
"instruction": "How would you explain what the following Python 3 function does\ndef database_list_folder(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/listFolder' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "List the contents of a folder in a database."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef file_add_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/addTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Add tags to a file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd types to a file.", "response": "def file_add_types(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /file-xxxx/addTypes API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Types#API-method%3A-%2Fclass-xxxx%2FaddTypes\n \"\"\"\n return DXHTTPRequest('/%s/addTypes' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninvokes the /file-xxxx/close API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Files#API-method%3A-%2Ffile-xxxx%2Fclose", "response": "def file_close(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /file-xxxx/close API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Files#API-method%3A-%2Ffile-xxxx%2Fclose\n \"\"\"\n return DXHTTPRequest('/%s/close' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninvokes the /file-xxxx/describe API method. 
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Files#API-method%3A-%2Ffile-xxxx%2Fdescribe", "response": "def file_describe(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /file-xxxx/describe API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Files#API-method%3A-%2Ffile-xxxx%2Fdescribe\n \"\"\"\n return DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef file_download(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/download' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Download the specified file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget details of a file.", "response": "def file_get_details(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /file-xxxx/getDetails API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Details-and-Links#API-method%3A-%2Fclass-xxxx%2FgetDetails\n \"\"\"\n return DXHTTPRequest('/%s/getDetails' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef file_list_projects(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/listProjects' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "List the projects that contain the file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef file_remove_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/removeTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Remove tags 
from a file."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves types from a file.", "response": "def file_remove_types(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /file-xxxx/removeTypes API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Types#API-method%3A-%2Fclass-xxxx%2FremoveTypes\n \"\"\"\n return DXHTTPRequest('/%s/removeTypes' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef file_rename(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/rename' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Rename a file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef file_set_details(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/setDetails' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Sets the details of a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef file_set_properties(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/setProperties' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Sets the properties of a file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the visibility of a file.", "response": "def file_set_visibility(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /file-xxxx/setVisibility API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Visibility#API-method%3A-%2Fclass-xxxx%2FsetVisibility\n \"\"\"\n return DXHTTPRequest('/%s/setVisibility' % object_id, 
input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef file_upload(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/upload' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Upload data to the file identified by object_id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef file_new(input_params={}, always_retry=True, **kwargs):\n input_params_cp = Nonce.update_nonce(input_params)\n return DXHTTPRequest('/file/new', input_params_cp, always_retry=always_retry, **kwargs)", "response": "Invokes the /file/new API method to create a new file object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef global_workflow_add_authorized_users(name_or_id, alias=None, input_params={}, always_retry=True, **kwargs):\n fully_qualified_version = name_or_id + (('/' + alias) if alias else '')\n return DXHTTPRequest('/%s/addAuthorizedUsers' % fully_qualified_version, input_params, always_retry=always_retry, **kwargs)", "response": "Add authorized users to a global workflow."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ninvoking the /globalworkflow-xxxx/run API method. 
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Global-Workflows#API-method:-/globalworkflow-xxxx%5B/yyyy%5D/run", "response": "def global_workflow_run(name_or_id, alias=None, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /globalworkflow-xxxx/run API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Global-Workflows#API-method:-/globalworkflow-xxxx%5B/yyyy%5D/run\n \"\"\"\n input_params_cp = Nonce.update_nonce(input_params)\n fully_qualified_version = name_or_id + (('/' + alias) if alias else '')\n return DXHTTPRequest('/%s/run' % fully_qualified_version, input_params_cp, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd tags to a node in a cluster.", "response": "def job_add_tags(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /job-xxxx/addTags API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Applets-and-Entry-Points#API-method%3A-%2Fjob-xxxx%2FaddTags\n \"\"\"\n return DXHTTPRequest('/%s/addTags' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndescribing a single job.", "response": "def job_describe(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /job-xxxx/describe API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Applets-and-Entry-Points#API-method%3A-%2Fjob-xxxx%2Fdescribe\n \"\"\"\n return DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef job_get_log(object_id, input_params={}, always_retry=False, **kwargs):\n return DXHTTPRequest('/%s/getLog' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": 
"Get the log of a job."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef job_remove_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/removeTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Remove tags from a job."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef job_set_properties(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/setProperties' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Set properties of a job."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef job_terminate(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/terminate' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Terminate a job."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a single object representing the current notifications.", "response": "def notifications_get(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /notifications/get API method.\n \"\"\"\n return DXHTTPRequest('/notifications/get', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef notifications_mark_read(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/notifications/markRead', input_params, always_retry=always_retry, **kwargs)", "response": "Mark notifications as read."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninvoke the /org-xxxx/describe API method. 
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Organizations#API-method%3A-%2Forg-xxxx%2Fdescribe", "response": "def org_describe(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /org-xxxx/describe API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Organizations#API-method%3A-%2Forg-xxxx%2Fdescribe\n \"\"\"\n return DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the list of members of a node in the order they were assigned.", "response": "def org_find_members(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /org-xxxx/findMembers API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Organizations#API-method%3A-%2Forg-xxxx%2FfindMembers\n \"\"\"\n return DXHTTPRequest('/%s/findMembers' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a list of projects in the organization.", "response": "def org_find_projects(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /org-xxxx/findProjects API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Organizations#API-method%3A-%2Forg-xxxx%2FfindProjects\n \"\"\"\n return DXHTTPRequest('/%s/findProjects' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef org_find_apps(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/findApps' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Get a list of apps in the organization."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following 
Python 3 function\ndef org_invite(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/invite' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Invokes the /org-xxxx/invite API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Organizations#API-method%3A-%2Forg-xxxx%2Finvite"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef org_remove_member(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/removeMember' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Remove a member from the org."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets the member access of the object.", "response": "def org_set_member_access(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /org-xxxx/setMemberAccess API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Organizations#API-method%3A-%2Forg-xxxx%2FsetMemberAccess\n \"\"\"\n return DXHTTPRequest('/%s/setMemberAccess' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef org_update(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/update' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Update the org with the given object_id."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds tags to a node in a node s tree.", "response": "def project_add_tags(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /project-xxxx/addTags API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Projects#API-method%3A-%2Fproject-xxxx%2FaddTags\n 
\"\"\"\n return DXHTTPRequest('/%s/addTags' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef project_clone(object_id, input_params={}, always_retry=False, **kwargs):\n return DXHTTPRequest('/%s/clone' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Clone objects from the project into another project."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef project_decrease_permissions(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/decreasePermissions' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Decrease permissions in a project."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef project_describe(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Describe a single project."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef project_destroy(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/destroy' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Invokes the /project-xxxx/destroy API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Projects#API-method%3A-%2Fproject-xxxx%2Fdestroy"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninvoking the /project-xxxx/invite API method. 
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Project-Permissions-and-Sharing#API-method%3A-%2Fproject-xxxx%2Finvite", "response": "def project_invite(object_id, input_params={}, always_retry=False, **kwargs):\n \"\"\"\n Invokes the /project-xxxx/invite API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Project-Permissions-and-Sharing#API-method%3A-%2Fproject-xxxx%2Finvite\n \"\"\"\n return DXHTTPRequest('/%s/invite' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef project_leave(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/leave' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Leave the project."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef project_list_folder(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/listFolder' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "List the contents of a folder in the project."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef project_move(object_id, input_params={}, always_retry=False, **kwargs):\n return DXHTTPRequest('/%s/move' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Move objects and folders within the project."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a new folder on the project.", "response": "def project_new_folder(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /project-xxxx/newFolder API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Folders-and-Deletion#API-method%3A-%2Fclass-xxxx%2FnewFolder\n \"\"\"\n 
return DXHTTPRequest('/%s/newFolder' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef project_remove_folder(object_id, input_params={}, always_retry=False, **kwargs):\n return DXHTTPRequest('/%s/removeFolder' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Remove a folder from the project."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef project_remove_objects(object_id, input_params={}, always_retry=False, **kwargs):\n return DXHTTPRequest('/%s/removeObjects' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Remove objects from the project."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nremoves tags from a node in a cluster.", "response": "def project_remove_tags(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /project-xxxx/removeTags API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Projects#API-method%3A-%2Fproject-xxxx%2FremoveTags\n \"\"\"\n return DXHTTPRequest('/%s/removeTags' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninvokes the /project-xxxx/renameFolder API method. 
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Folders-and-Deletion#API-method%3A-%2Fclass-xxxx%2FrenameFolder", "response": "def project_rename_folder(object_id, input_params={}, always_retry=False, **kwargs):\n \"\"\"\n Invokes the /project-xxxx/renameFolder API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Folders-and-Deletion#API-method%3A-%2Fclass-xxxx%2FrenameFolder\n \"\"\"\n return DXHTTPRequest('/%s/renameFolder' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting properties of a project.", "response": "def project_set_properties(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /project-xxxx/setProperties API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Projects#API-method%3A-%2Fproject-xxxx%2FsetProperties\n \"\"\"\n return DXHTTPRequest('/%s/setProperties' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a single object from the DNA Transfer API.", "response": "def project_transfer(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /project-xxxx/transfer API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Project-Permissions-and-Sharing#API-method%3A-%2Fproject-xxxx%2Ftransfer\n \"\"\"\n return DXHTTPRequest('/%s/transfer' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef project_update(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/update' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Update the project with the given object_id."} {"SOURCE": 
"codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate the sponsorship of a project.", "response": "def project_update_sponsorship(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /project-xxxx/updateSponsorship API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Projects#API-method%3A-%2Fproject-xxxx%2FupdateSponsorship\n \"\"\"\n return DXHTTPRequest('/%s/updateSponsorship' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a new project.", "response": "def project_new(input_params={}, always_retry=False, **kwargs):\n \"\"\"\n Invokes the /project/new API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Projects#API-method%3A-%2Fproject%2Fnew\n \"\"\"\n return DXHTTPRequest('/project/new', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding tags to a node.", "response": "def record_add_tags(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /record-xxxx/addTags API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Tags#API-method%3A-%2Fclass-xxxx%2FaddTags\n \"\"\"\n return DXHTTPRequest('/%s/addTags' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd types to a node.", "response": "def record_add_types(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /record-xxxx/addTypes API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Types#API-method%3A-%2Fclass-xxxx%2FaddTypes\n \"\"\"\n return DXHTTPRequest('/%s/addTypes' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate a brief explanation for the following Python 3 code\ndef record_close(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/close' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Close the record with the given object_id."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ninvoking the /record-xxxx/describe API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Records#API-method%3A-%2Frecord-xxxx%2Fdescribe", "response": "def record_describe(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /record-xxxx/describe API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Records#API-method%3A-%2Frecord-xxxx%2Fdescribe\n \"\"\"\n return DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef record_get_details(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/getDetails' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Get details for a record."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef record_list_projects(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/listProjects' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "List the projects that contain the record."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef record_remove_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/removeTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Remove tags from a record."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in 
Python 3 to\nremove types from a node in the cluster.", "response": "def record_remove_types(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /record-xxxx/removeTypes API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Types#API-method%3A-%2Fclass-xxxx%2FremoveTypes\n \"\"\"\n return DXHTTPRequest('/%s/removeTypes' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef record_rename(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/rename' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Rename a record."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef record_set_details(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/setDetails' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Sets the details of a record."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef record_set_properties(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/setProperties' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Sets properties of a record."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef record_set_visibility(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/setVisibility' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Sets the visibility of a record."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef system_describe_data_objects(input_params={}, always_retry=True, **kwargs):\n 
return DXHTTPRequest('/system/describeDataObjects', input_params, always_retry=always_retry, **kwargs)", "response": "Describe a list of data objects."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the list of executions for the current node.", "response": "def system_describe_executions(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /system/describeExecutions API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/System-Methods#API-method:-/system/describeExecutions\n \"\"\"\n return DXHTTPRequest('/system/describeExecutions', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef system_describe_projects(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/system/describeProjects', input_params, always_retry=always_retry, **kwargs)", "response": "Describe a list of projects."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef system_find_affiliates(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/system/findAffiliates', input_params, always_retry=always_retry, **kwargs)", "response": "Retrieves the list of affiliates."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef system_find_apps(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/system/findApps', input_params, always_retry=always_retry, **kwargs)", "response": "Retrieves the list of apps matching the search criteria."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nretrieving the data objects in the current node.", "response": "def system_find_data_objects(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /system/findDataObjects API 
method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Search#API-method%3A-%2Fsystem%2FfindDataObjects\n \"\"\"\n return DXHTTPRequest('/system/findDataObjects', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef system_find_global_workflows(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/system/findGlobalWorkflows', input_params, always_retry=always_retry, **kwargs)", "response": "Retrieves the global workflows for a single virtual machine."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef system_resolve_data_objects(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/system/resolveDataObjects', input_params, always_retry=always_retry, **kwargs)", "response": "Invoke the system resolve data objects API method."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nretrieve the list of executions for the current node in the cluster.", "response": "def system_find_executions(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /system/findExecutions API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Search#API-method%3A-%2Fsystem%2FfindExecutions\n \"\"\"\n return DXHTTPRequest('/system/findExecutions', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef system_find_analyses(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/system/findAnalyses', input_params, always_retry=always_retry, **kwargs)", "response": "Retrieves the list of analyses in the current node."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef system_find_databases(input_params={}, 
always_retry=True, **kwargs):\n return DXHTTPRequest('/system/findDatabases', input_params, always_retry=always_retry, **kwargs)", "response": "Retrieves the list of databases in the current node."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ninvoking the /system/findJobs API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Search#API-method%3A-%2Fsystem%2FfindJobs", "response": "def system_find_jobs(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /system/findJobs API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Search#API-method%3A-%2Fsystem%2FfindJobs\n \"\"\"\n return DXHTTPRequest('/system/findJobs', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve the list of projects in the current node.", "response": "def system_find_projects(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /system/findProjects API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Search#API-method%3A-%2Fsystem%2FfindProjects\n \"\"\"\n return DXHTTPRequest('/system/findProjects', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninvoke the /system/findUsers API method. 
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Search#API-method%3A-%2Fsystem%2FfindUsers", "response": "def system_find_users(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /system/findUsers API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Search#API-method%3A-%2Fsystem%2FfindUsers\n \"\"\"\n return DXHTTPRequest('/system/findUsers', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef system_find_project_members(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/system/findProjectMembers', input_params, always_retry=always_retry, **kwargs)", "response": "Retrieves the project members of the current node."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef system_find_orgs(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/system/findOrgs', input_params, always_retry=always_retry, **kwargs)", "response": "Get the list of ORGS available in the current user s account."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a request to generate batch inputs for the current node.", "response": "def system_generate_batch_inputs(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /system/generateBatchInputs API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Search#API-method:-/system/generateBatchInputs\n \"\"\"\n return DXHTTPRequest('/system/generateBatchInputs', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a request object that will execute the system global search API.", "response": "def system_global_search(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /system/globalSearch 
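The `/system/...` records above all follow one thin-wrapper pattern: build a route string and delegate to `DXHTTPRequest`. A minimal stand-alone sketch of that pattern, where `make_system_method` and `fake_http_request` are hypothetical names (the fake request just echoes its arguments instead of hitting the DNAnexus API):

```python
def fake_http_request(route, input_params, always_retry=True, **kwargs):
    # Hypothetical stand-in for dxpy's DXHTTPRequest: echo what would be sent.
    return {"route": route, "input": input_params, "retry": always_retry}

def make_system_method(name, always_retry=True):
    # Factory reproducing the shape of the wrappers above:
    # system_greet(...) posts to /system/greet, and so on.
    route = "/system/" + name
    def method(input_params={}, always_retry=always_retry, **kwargs):
        return fake_http_request(route, input_params,
                                 always_retry=always_retry, **kwargs)
    method.__name__ = "system_" + name
    return method

system_greet = make_system_method("greet")
print(system_greet({})["route"])  # /system/greet
```

The real bindings write each wrapper out by hand so that every function carries its own docstring and API link, but the call shape is identical.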
API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Search#API-method:-/system/globalSearch\n \"\"\"\n return DXHTTPRequest('/system/globalSearch', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngreet a single node from the system.", "response": "def system_greet(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /system/greet API method.\n \"\"\"\n return DXHTTPRequest('/system/greet', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a request object that can be used to retrieve the headers for a single node.", "response": "def system_headers(input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /system/headers API method.\n \"\"\"\n return DXHTTPRequest('/system/headers', input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef system_shorten_url(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/system/shortenURL', input_params, always_retry=always_retry, **kwargs)", "response": "shorten the URL of a node in a tree."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef system_whoami(input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/system/whoami', input_params, always_retry=always_retry, **kwargs)", "response": "This method is used to invoke the system whoami API method."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve the user s detailed information.", "response": "def user_describe(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /user-xxxx/describe API method.\n\n For more info, see: 
https://wiki.dnanexus.com/API-Specification-v1.0.0/Users#API-method%3A-%2Fuser-xxxx%2Fdescribe\n \"\"\"\n return DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the user s current state.", "response": "def user_update(object_id, input_params={}, always_retry=False, **kwargs):\n \"\"\"\n Invokes the /user-xxxx/update API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Users#API-method%3A-%2Fuser-xxxx%2Fupdate\n \"\"\"\n return DXHTTPRequest('/%s/update' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef workflow_add_stage(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/addStage' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Adds a stage to the current object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds tags to a node in the current node.", "response": "def workflow_add_tags(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/addTags API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Tags#API-method%3A-%2Fclass-xxxx%2FaddTags\n \"\"\"\n return DXHTTPRequest('/%s/addTags' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef workflow_add_types(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/addTypes' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Adds types to a node in the order they appear in the object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
workflow_close(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/close' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Closes a single object in the cluster."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef workflow_describe(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/describe' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Describe the current state of a single object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndry run a single workflow.", "response": "def workflow_dry_run(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/dryRun API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Workflows-and-Analyses#API-method%3A-%2Fworkflow-xxxx%2FdryRun\n \"\"\"\n return DXHTTPRequest('/%s/dryRun' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget details of a specific object.", "response": "def workflow_get_details(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/getDetails API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Details-and-Links#API-method%3A-%2Fclass-xxxx%2FgetDetails\n \"\"\"\n return DXHTTPRequest('/%s/getDetails' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a request to check if a workflow is stage compatible with the current state of the current stage.", "response": "def workflow_is_stage_compatible(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/isStageCompatible API method.\n\n For more 
info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Workflows-and-Analyses#API-method%3A-%2Fworkflow-xxxx%2FisStageCompatible\n \"\"\"\n return DXHTTPRequest('/%s/isStageCompatible' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef workflow_list_projects(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/listProjects' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "List the projects that contain the object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef workflow_move_stage(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/moveStage' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Moves a stage of the workflow to a new position."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\noverwrite the specified workflow with the supplied fields.", "response": "def workflow_overwrite(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/overwrite API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Workflows-and-Analyses#API-method%3A-%2Fworkflow-xxxx%2Foverwrite\n \"\"\"\n return DXHTTPRequest('/%s/overwrite' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nremoves a stage from the current object.", "response": "def workflow_remove_stage(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/removeStage API method.\n\n For more info, see: 
https://wiki.dnanexus.com/API-Specification-v1.0.0/Workflows-and-Analyses#API-method%3A-%2Fworkflow-xxxx%2FremoveStage\n \"\"\"\n return DXHTTPRequest('/%s/removeStage' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef workflow_remove_tags(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/removeTags' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Removes tags from a node in a node s sequence."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef workflow_remove_types(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/removeTypes' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Remove types from a node in the cluster."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef workflow_rename(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/rename' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Rename an object in the current object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nvalidating a batch of objects.", "response": "def workflow_validate_batch(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/validateBatch API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Workflows-and-Analyses#API-method%3A-%2Fworkflow-xxxx%2FvalidateBatch\n \"\"\"\n return DXHTTPRequest('/%s/validateBatch' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ninvokes the /workflow-xxxx/setDetails API method. 
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Details-and-Links#API-method%3A-%2Fclass-xxxx%2FsetDetails", "response": "def workflow_set_details(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/setDetails API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Details-and-Links#API-method%3A-%2Fclass-xxxx%2FsetDetails\n \"\"\"\n return DXHTTPRequest('/%s/setDetails' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets properties of a node in a node s hierarchy.", "response": "def workflow_set_properties(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/setProperties API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Properties#API-method%3A-%2Fclass-xxxx%2FsetProperties\n \"\"\"\n return DXHTTPRequest('/%s/setProperties' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the stage inputs for the object.", "response": "def workflow_set_stage_inputs(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/setStageInputs API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Workflows-and-Analyses#API-method%3A-%2Fworkflow-xxxx%2FsetStageInputs\n \"\"\"\n return DXHTTPRequest('/%s/setStageInputs' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef workflow_set_visibility(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/setVisibility' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Sets the visibility of the 
object in the cluster."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate the object with the given object_id.", "response": "def workflow_update(object_id, input_params={}, always_retry=True, **kwargs):\n \"\"\"\n Invokes the /workflow-xxxx/update API method.\n\n For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Workflows-and-Analyses#API-method%3A-%2Fworkflow-xxxx%2Fupdate\n \"\"\"\n return DXHTTPRequest('/%s/update' % object_id, input_params, always_retry=always_retry, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef workflow_update_stage_executable(object_id, input_params={}, always_retry=True, **kwargs):\n return DXHTTPRequest('/%s/updateStageExecutable' % object_id, input_params, always_retry=always_retry, **kwargs)", "response": "Invokes the /workflow-xxxx/updateStageExecutable API method."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _image_to_data(img):\n # If the image is already encoded in Base64, we have nothing to do here\n if \"src\" not in img.attrs or img[\"src\"].startswith(\"data:\"):\n return\n elif re.match(\"https?://\", img[\"src\"]):\n img_data = _load_url(img[\"src\"]).read()\n else:\n img_data = _load_file(img[\"src\"]).read()\n img_type = imghdr.what(\"\", img_data)\n img_b64 = base64.b64encode(img_data).decode(\"ascii\")\n src_data = \"data:image/none;base64,\"\n if img_type:\n src_data = \"data:image/{};base64,{}\".format(img_type, img_b64)\n img[\"src\"] = src_data", "response": "Does the work of encoding an image into Base64"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _bake_css(link):\n if \"href\" in link.attrs and (re.search(\"\\.css$\", link[\"href\"]) or (\"rel\" in link.attrs and link[\"rel\"] == \"stylesheet\") or (\"type\" in
link.attrs and link[\"type\"] == \"text/css\")):\n if re.match(\"https?://\", link[\"href\"]):\n css_data = _load_url(link[\"href\"]).read()\n else:\n css_data = _load_file(link[\"href\"]).read()\n link.clear()\n if USING_PYTHON2:\n link.string = css_data\n else:\n link.string = css_data.decode()\n link.name = \"style\"\n del link[\"rel\"]\n del link[\"href\"]", "response": "Takes a link element and turns it into an inline style element if applicable"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbake a script element in, but only if it contains a remote resource", "response": "def _bake_script(script):\n \"\"\"\n Takes a script element and bakes it in only if it contains a remote resource\n \"\"\"\n if \"src\" in script.attrs:\n if re.match(\"https?://\", script[\"src\"]):\n script_data = _load_url(script[\"src\"]).read()\n else:\n script_data = _load_file(script[\"src\"]).read()\n script.clear()\n if USING_PYTHON2:\n script.string = \"\\n\" + script_data + \"\\n\"\n else:\n script.string = \"\\n\" + script_data.decode() + \"\\n\"\n del script[\"src\"]\n del script[\"type\"]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads a file from the local filesystem", "response": "def _load_file(path):\n \"\"\"\n Loads a file from the local filesystem\n \"\"\"\n if not os.path.exists(path):\n parser.error(\"{} was not found!\".format(path))\n if USING_PYTHON2:\n mode = \"r\"\n else:\n mode = \"rb\"\n try:\n f = open(path, mode)\n return f\n except IOError as ex:\n parser.error(\"{path} could not be read due to an I/O error! 
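The Base64 step at the heart of `_image_to_data` above can be shown on its own. This sketch decodes the encoded bytes to text before formatting, which the Python 3 path needs; `to_data_uri` is a name invented for this example, not part of the original module:

```python
import base64

def to_data_uri(img_bytes, img_type=None):
    # Base64-encode raw image bytes and build the data: URI that
    # _image_to_data writes back into the img tag's src attribute.
    # "none" is the fallback subtype when the image type is undetected.
    b64 = base64.b64encode(img_bytes).decode("ascii")
    return "data:image/{};base64,{}".format(img_type or "none", b64)

print(to_data_uri(b"abc", "png"))  # data:image/png;base64,YWJj
```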
({ex})\".format(path=path, ex=ex))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nloading a URL resource from a remote server", "response": "def _load_url(url):\n \"\"\"\n Loads a URL resource from a remote server\n \"\"\"\n try:\n response = requests.get(url)\n return BytesIO(response.content)\n except IOError as ex:\n parser.error(\"{url} could not be loaded remotely! ({ex})\".format(url=url, ex=ex))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef bake(src):\n src = os.path.realpath(src)\n path = os.path.dirname(src)\n filename = os.path.basename(src)\n html = _load_file(src).read()\n if imghdr.what(\"\", html):\n html = \"\".format(cgi.escape(filename))\n\n # Change to the file's directory so image files with relative paths can be loaded correctly\n cwd = os.getcwd()\n os.chdir(path)\n bs_html = bs4.BeautifulSoup(html, \"html.parser\")\n images = bs_html.find_all(\"img\")\n for image in images:\n _image_to_data(image)\n for link in bs_html.find_all(\"link\"):\n _bake_css(link)\n for script in bs_html.find_all(\"script\"):\n _bake_script(script)\n os.chdir(cwd)\n return bs_html", "response": "Runs the encoder on the given source file"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nuploads the HTML to the server and returns the ID of the uploaded file.", "response": "def upload_html(destination, html, name=None):\n \"\"\"\n Uploads the HTML to a file on the server\n \"\"\"\n [project, path, n] = parse_destination(destination)\n try:\n dxfile = dxpy.upload_string(html, media_type=\"text/html\", project=project, folder=path, hidden=True, name=name or None)\n return dxfile.get_id()\n except dxpy.DXAPIError as ex:\n parser.error(\"Could not upload HTML report to DNAnexus server! 
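A Python-3-only reading of `_load_file` above drops the `USING_PYTHON2` branch and raises an exception instead of routing errors through the module's argument parser (a sketch, not the shipped behavior):

```python
import os

def load_file(path):
    # Python 3 variant of _load_file: binary mode only, errors raised
    # rather than reported via parser.error.
    if not os.path.exists(path):
        raise FileNotFoundError("{} was not found!".format(path))
    return open(path, "rb")
```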
({ex})\".format(ex=ex))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_record(destination, file_ids, width=None, height=None):\n [project, path, name] = parse_destination(destination)\n files = [dxpy.dxlink(file_id) for file_id in file_ids]\n details = {\"files\": files}\n if width:\n details[\"width\"] = width\n if height:\n details[\"height\"] = height\n try:\n dxrecord = dxpy.new_dxrecord(project=project, folder=path, types=[\"Report\", \"HTMLReport\"], details=details, name=name)\n dxrecord.close()\n return dxrecord.get_id()\n except dxpy.DXAPIError as ex:\n parser.error(\"Could not create an HTML report record on DNAnexus servers! ({ex})\".format(ex=ex))", "response": "Creates a master record for the HTML report."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef save(filename, html):\n try:\n out_file = open(filename, \"w\")\n out_file.write(html)\n out_file.close()\n except IOError as ex:\n parser.error(\"Could not write baked HTML to local file {name}. 
({ex})\".format(name=filename, ex=ex))", "response": "Saves the HTML to a local file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nformatting a byte size as a string.", "response": "def get_size_str(size):\n \"\"\"\n Formats a byte size as a string.\n\n The returned string is no more than 9 characters long.\n \"\"\"\n if size == 0:\n magnitude = 0\n level = 0\n else:\n magnitude = math.floor(math.log(size, 10))\n level = int(min(math.floor(magnitude // 3), 4))\n return ('%d' if level == 0 else '%.2f') % (float(size) / 2**(level*10)) + ' ' + SIZE_LEVEL[level]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck if a thing is a job - based object reference", "response": "def is_job_ref(thing, reftype=dict):\n '''\n :param thing: something that might be a job-based object reference hash\n :param reftype: type that a job-based object reference would be (default is dict)\n '''\n return isinstance(thing, reftype) and \\\n ((len(thing) == 2 and \\\n isinstance(thing.get('field'), basestring) and \\\n isinstance(thing.get('job'), basestring)) or \\\n (len(thing) == 1 and \\\n isinstance(thing.get('$dnanexus_link'), reftype) and \\\n isinstance(thing['$dnanexus_link'].get('field'), basestring) and \\\n isinstance(thing['$dnanexus_link'].get('job'), basestring)))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprinting the description of a DNAnexus entity.", "response": "def print_desc(desc, verbose=False):\n '''\n :param desc: The describe hash of a DNAnexus entity\n :type desc: dict\n\n Depending on the class of the entity, this method will print a\n formatted and human-readable string containing the data in *desc*.\n '''\n if desc['class'] in ['project', 'workspace', 'container']:\n print_project_desc(desc, verbose=verbose)\n elif desc['class'] == 'app':\n print_app_desc(desc, verbose=verbose)\n elif desc['class'] == 'globalworkflow':\n print_globalworkflow_desc(desc, 
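Note that `get_size_str` above picks its unit level with a base-10 logarithm but divides by a power of 1024. A consistently binary variant (an alternative sketch, not the shipped behavior; the `SIZE_LEVEL` unit names are assumed here) looks like:

```python
import math

SIZE_LEVEL = ['bytes', 'KiB', 'MiB', 'GiB', 'TiB']  # assumed unit names

def get_size_str_binary(size):
    # Pick the level and the divisor from the same base (1024),
    # unlike the decimal/binary mix in the original.
    if size == 0:
        level = 0
    else:
        level = min(int(math.log(size, 1024)), len(SIZE_LEVEL) - 1)
    value = float(size) / (1024 ** level)
    return ('%d' if level == 0 else '%.2f') % value + ' ' + SIZE_LEVEL[level]

print(get_size_str_binary(1536))  # 1.50 KiB
```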
verbose=verbose)\n elif desc['class'] in ['job', 'analysis']:\n print_execution_desc(desc)\n elif desc['class'] == 'user':\n print_user_desc(desc)\n elif desc['class'] in ['org', 'team']:\n print_generic_desc(desc)\n else:\n print_data_obj_desc(desc, verbose=verbose)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a string representation of the given ls - l - desc.", "response": "def get_ls_l_desc(desc, include_folder=False, include_project=False):\n \"\"\"\n desc must have at least all the fields given by get_ls_l_desc_fields.\n \"\"\"\n # If you make this method consume an additional field, you must add it to\n # get_ls_l_desc_fields above.\n if 'state' in desc:\n state_len = len(desc['state'])\n if desc['state'] != 'closed':\n state_str = YELLOW() + desc['state'] + ENDC()\n else:\n state_str = GREEN() + desc['state'] + ENDC()\n else:\n state_str = ''\n state_len = 0\n\n name_str = ''\n if include_folder:\n name_str += desc['folder'] + ('/' if desc['folder'] != '/' else '')\n\n name_str += desc['name']\n\n if desc['class'] in ['applet', 'workflow']:\n name_str = BOLD() + GREEN() + name_str + ENDC()\n\n size_str = ''\n if 'size' in desc and desc['class'] == 'file':\n size_str = get_size_str(desc['size'])\n elif 'length' in desc:\n size_str = str(desc['length']) + ' rows'\n size_padding = ' ' * max(0, 9 - len(size_str))\n\n return (state_str +\n DELIMITER(' '*(8 - state_len)) + render_short_timestamp(desc['modified']) +\n DELIMITER(' ') + size_str +\n DELIMITER(size_padding + ' ') + name_str +\n DELIMITER(' (') + ((desc['project'] + DELIMITER(':')) if include_project else '') + desc['id'] +\n DELIMITER(')'))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_find_executions_string(desc, has_children, single_result=False, show_outputs=True,\n is_cached_result=False):\n '''\n :param desc: hash of execution's describe output\n :param has_children: whether the execution has children 
to be printed\n :param single_result: whether the execution is displayed as a single result or as part of an execution tree\n :param is_cached_result: whether the execution should be formatted as a cached result\n '''\n is_not_subjob = desc['parentJob'] is None or desc['class'] == 'analysis' or single_result\n result = (\"* \" if is_not_subjob and get_delimiter() is None else \"\")\n canonical_execution_name = desc['executableName']\n if desc['class'] == 'job':\n canonical_execution_name += \":\" + desc['function']\n execution_name = desc.get('name', '')\n\n # Format the name of the execution\n if is_cached_result:\n result += BOLD() + \"[\" + ENDC()\n result += BOLD() + BLUE()\n if desc['class'] == 'analysis':\n result += UNDERLINE()\n result += execution_name + ENDC()\n\n if execution_name != canonical_execution_name and execution_name+\":main\" != canonical_execution_name:\n result += ' (' + canonical_execution_name + ')'\n\n if is_cached_result:\n result += BOLD() + \"]\" + ENDC()\n\n # Format state\n result += DELIMITER(' (') + JOB_STATES(desc['state']) + DELIMITER(') ') + desc['id']\n\n # Add unicode pipe to child if necessary\n result += DELIMITER('\\n' + (u'\u2502 ' if is_not_subjob and has_children else (\" \" if is_not_subjob else \"\")))\n result += desc['launchedBy'][5:] + DELIMITER(' ')\n result += render_short_timestamp(desc['created'])\n\n cached_and_runtime_strs = []\n\n if is_cached_result:\n cached_and_runtime_strs.append(YELLOW() + \"cached\" + ENDC())\n\n if desc['class'] == 'job':\n # Only print runtime if it ever started running\n if desc.get('startedRunning'):\n if desc['state'] in ['done', 'failed', 'terminated', 'waiting_on_output']:\n runtime = datetime.timedelta(seconds=int(desc['stoppedRunning']-desc['startedRunning'])//1000)\n cached_and_runtime_strs.append(\"runtime \" + str(runtime))\n elif desc['state'] == 'running':\n seconds_running = max(int(time.time()-desc['startedRunning']//1000), 0)\n msg = \"running for 
{rt}\".format(rt=datetime.timedelta(seconds=seconds_running))\n cached_and_runtime_strs.append(msg)\n\n if cached_and_runtime_strs:\n result += \" (\" + \", \".join(cached_and_runtime_strs) + \")\"\n\n if show_outputs:\n prefix = DELIMITER('\\n' + (u'\u2502 ' if is_not_subjob and has_children else (\" \" if is_not_subjob else \"\")))\n if desc.get(\"output\") is not None:\n result += job_output_to_str(desc['output'], prefix=prefix)\n elif desc['state'] == 'failed' and 'failureReason' in desc:\n result += prefix + BOLD() + desc['failureReason'] + ENDC() + \": \" + fill(desc.get('failureMessage', ''),\n subsequent_indent=prefix.lstrip('\\n'))\n\n return result", "response": "Return a formatted, human-readable description of the execution for display in find-executions output."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_instance_count(cls, instance_count_arg, entrypoint=\"*\"):\n try:\n if instance_count_arg is None:\n return cls(None)\n if isinstance(instance_count_arg, basestring) or isinstance(instance_count_arg, int):\n return cls({entrypoint: {\"clusterSpec\": {\"initialInstanceCount\": int(instance_count_arg)}}})\n if isinstance(instance_count_arg, dict):\n return cls({k: {\"clusterSpec\": {\"initialInstanceCount\": int(v)}} for k, v in instance_count_arg.items()})\n raise ValueError\n except ValueError:\n raise DXError('Expected instance_count field to be either an int, string or a dict')", "response": "Returns a SystemRequirementsDict that can be passed as an\n input to job/new or job/run API calls."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_instance_type(cls, instance_type_arg, entrypoint=\"*\"):\n if instance_type_arg is None:\n return cls(None)\n if isinstance(instance_type_arg, basestring):\n # By default, all entry points (\"*\") should use this instance type\n return cls({entrypoint: {\"instanceType\": instance_type_arg}})\n if 
isinstance(instance_type_arg, dict):\n # instance_type is a map of entry point to instance type\n return cls({fn: {\"instanceType\": fn_inst} for fn, fn_inst in instance_type_arg.items()})\n raise DXError('Expected instance_type field to be either a string or a dict')", "response": "Returns a SystemRequirementsDict that can be passed as a\n input to job or new or run API calls."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a SystemRequirementsDict encapsulating system requirements.", "response": "def from_sys_requirements(cls, system_requirements, _type='all'):\n \"\"\"\n Returns SystemRequirementsDict encapsulating system requirements.\n It can extract only entrypoints with specific fields ('clusterSpec',\n 'instanceType', etc), depending on the value of _type.\n \"\"\"\n if _type not in ('all', 'clusterSpec', 'instanceType'):\n raise DXError(\"Expected '_type' to be either 'all', 'clusterSpec', or 'instanceType'\")\n\n if _type == 'all':\n return cls(system_requirements)\n\n extracted = defaultdict(dict)\n for entrypoint, req in system_requirements.items():\n if _type in req:\n extracted[entrypoint][_type] = req[_type]\n return cls(dict(extracted))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef override_cluster_spec(self, srd):\n\n merged_cluster_spec = copy.deepcopy(self.entrypoints)\n\n # Remove entrypoints without \"clusterSpec\"\n merged_cluster_spec = dict([(k, v) for k, v in merged_cluster_spec.items() if v.get(\"clusterSpec\") is not None])\n\n # Remove entrypoints not provided in requested instance counts\n merged_cluster_spec = dict([(k, v) for k, v in merged_cluster_spec.items() if \\\n k in srd.entrypoints or \"*\" in srd.entrypoints])\n\n # Overwrite values of self.entrypoints.clusterSpec with the ones from srd\n # Named entrypoint takes precedence over the wildcard\n for entry_pt, req in merged_cluster_spec.items():\n 
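The dict shape built by `from_instance_type` above can be demonstrated with a tiny stand-alone function (Python 3, so `str` replaces `basestring`; `instance_type_to_entrypoints` is a name invented for this sketch):

```python
def instance_type_to_entrypoints(arg, entrypoint="*"):
    # A plain string applies to the wildcard entrypoint; a dict maps each
    # entrypoint to its own instance type, as in from_instance_type above.
    if arg is None:
        return None
    if isinstance(arg, str):
        return {entrypoint: {"instanceType": arg}}
    if isinstance(arg, dict):
        return {fn: {"instanceType": t} for fn, t in arg.items()}
    raise ValueError("Expected instance_type to be a string or a dict")

print(instance_type_to_entrypoints("mem1_ssd1_x4"))
```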
merged_cluster_spec[entry_pt][\"clusterSpec\"].update(\n srd.entrypoints.get(entry_pt, srd.entrypoints.get(\"*\"))[\"clusterSpec\"])\n\n # Check if all entrypoints in srd are included in merged_cluster_spec\n # (if a named entrypoint was used in srd and such an entrypoint doesn't exist\n # in app sys req, we need to take the cluster spec from the app's \"*\", if it exists)\n for entry_pt, req in srd.entrypoints.items():\n if entry_pt not in merged_cluster_spec and \"*\" in self.entrypoints and \"clusterSpec\" in self.entrypoints[\"*\"]:\n merged_cluster_spec[entry_pt] = {\"clusterSpec\": copy.deepcopy(self.entrypoints[\"*\"][\"clusterSpec\"])}\n merged_cluster_spec[entry_pt][\"clusterSpec\"].update(req[\"clusterSpec\"])\n\n return SystemRequirementsDict(merged_cluster_spec)", "response": "Override the clusterSpec fields of this object's entrypoints with the clusterSpec values from the given SystemRequirementsDict."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _add_dict_values(self, d1, d2):\n\n if d1 is None and d2 is None:\n return None\n\n d1 = d1 or {}\n d2 = d2 or {}\n\n added = {}\n for key in set(list(d1.keys()) + list(d2.keys())):\n added[key] = dict(d1.get(key, {}), **(d2.get(key, {})))\n return added", "response": "Merge two dicts of dicts key by key, combining the inner dictionaries of d1 and d2 (values from d2 take precedence)."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a new remote record object with the specified details.", "response": "def new_dxrecord(details=None, **kwargs):\n '''\n :rtype: :class:`DXRecord`\n\n :param details: The contents of the record to be created.\n :type details: dict\n\n Additional optional parameters not listed: all those under\n :func:`dxpy.bindings.DXDataObject.new`, except `details`.\n\n Creates a new remote record object with project set to *project*\n and returns the appropriate handler.\n\n Example:\n\n r = dxpy.new_dxrecord({\"x\": 1, \"y\": 2})\n\n Note that this function is shorthand for::\n\n dxrecord = 
DXRecord()\n dxrecord.new(**kwargs)\n\n '''\n dxrecord = DXRecord()\n dxrecord.new(details=details, **kwargs)\n return dxrecord"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a new record object.", "response": "def _new(self, dx_hash, close=False, **kwargs):\n \"\"\"\n :param dx_hash: Standard hash populated in :func:`dxpy.bindings.DXDataObject.new()` containing attributes common to all data object classes.\n :type dx_hash: dict\n :param init_from: Record from which to initialize the metadata\n :type init_from: :class:`DXRecord`\n :param close: Whether or not to close the record immediately after creating it\n :type close: boolean\n\n Create a new remote record object.\n\n \"\"\"\n\n if \"init_from\" in kwargs:\n if kwargs[\"init_from\"] is not None:\n if not isinstance(kwargs[\"init_from\"], DXRecord):\n raise DXError(\"Expected instance of DXRecord to init_from\")\n dx_hash[\"initializeFrom\"] = \\\n {\"id\": kwargs[\"init_from\"].get_id(),\n \"project\": kwargs[\"init_from\"].get_proj_id()}\n del kwargs[\"init_from\"]\n\n if close:\n dx_hash[\"close\"] = True\n\n resp = dxpy.api.record_new(dx_hash, **kwargs)\n self.set_ids(resp[\"id\"], dx_hash[\"project\"])"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _recursive_cleanup(foo):\n if isinstance(foo, dict):\n for (key, val) in list(foo.items()):\n if isinstance(val, dict):\n _recursive_cleanup(val)\n if val == \"\" or val == [] or val == {}:\n del foo[key]", "response": "Recursively cleans up things that look empty."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndump an executable into a directory that can be used to build a resource container.", "response": "def dump_executable(executable, destination_directory, omit_resources=False, describe_output={}):\n \"\"\"\n Reconstitutes an app, applet, or a workflow into a directory that would\n create a functionally identical executable if 
\"dx build\" were run on it.\n destination_directory will be the root source directory for the\n executable.\n\n :param executable: executable, i.e. app, applet, or workflow, to be dumped\n :type executable: DXExecutable (either of: DXApp, DXApplet, DXWorkflow, DXGlobalWorkflow)\n :param destination_directory: an existing, empty, and writable directory\n :type destination_directory: str\n :param omit_resources: if True, executable's resources will not be downloaded\n :type omit_resources: boolean\n :param describe_output: output of a describe API call on the executable\n :type describe_output: dictionary\n \"\"\"\n try:\n old_cwd = os.getcwd()\n os.chdir(destination_directory)\n if isinstance(executable, dxpy.DXWorkflow):\n _dump_workflow(executable, describe_output)\n elif isinstance(executable, dxpy.DXGlobalWorkflow):\n # Add inputs, outputs, stages. These fields contain region-specific values\n # e.g. files or applets, that's why:\n # * if the workflow is global, we will unpack the underlying workflow\n # from the region of the current project context\n # * if this is a regular, project-based workflow, we will just use\n # its description (the describe_output that we already have)\n # Underlying workflows are workflows stored in resource containers\n # of the global workflow (one per each region the global workflow is\n # enabled in). #TODO: add a link to documentation.\n current_project = dxpy.WORKSPACE_ID\n if not current_project:\n raise DXError(\n 'A project needs to be selected to \"dx get\" a global workflow. 
You can use \"dx select\" to select a project')\n region = dxpy.api.project_describe(current_project,\n input_params={\"fields\": {\"region\": True}})[\"region\"]\n describe_output = executable.append_underlying_workflow_desc(describe_output, region)\n _dump_workflow(executable, describe_output)\n else:\n _dump_app_or_applet(executable, omit_resources, describe_output)\n except:\n err_exit()\n finally:\n os.chdir(old_cwd)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef resolve_data_objects(objects, project=None, folder=None, batchsize=1000):\n if not isinstance(batchsize, int) or batchsize <= 0 or batchsize > 1000:\n raise ValueError(\"batchsize for resolve_data_objects must be a positive integer not exceeding 1000\")\n args = {}\n if project:\n args.update({'project': project})\n if folder:\n args.update({'folder': folder})\n\n results = []\n\n # Call API method /system/resolveDataObjects in groups of size batchsize\n for i in range(0, len(objects), batchsize):\n args.update({'objects': objects[i:(i+batchsize)]})\n results.extend(dxpy.api.system_resolve_data_objects(args)['results'])\n return results", "response": "Resolves the given list of data objects, in batches, against an optional project and folder."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a generator that yields the objects found in the specified order.", "response": "def _find(api_method, query, limit, return_handler, first_page_size, **kwargs):\n ''' Takes an API method handler (dxpy.api.find*) and calls it with *query*,\n and then wraps a generator around its output. 
Used by the methods below.\n\n Note that this function may only be used for /system/find* methods.\n '''\n num_results = 0\n\n if \"limit\" not in query:\n query[\"limit\"] = first_page_size\n\n while True:\n resp = api_method(query, **kwargs)\n\n by_parent = resp.get('byParent')\n descriptions = resp.get('describe')\n def format_result(result):\n if return_handler:\n result = dxpy.get_handler(result['id'], project=result.get('project'))\n if by_parent is not None:\n return result, by_parent, descriptions\n else:\n return result\n\n for i in resp[\"results\"]:\n if num_results == limit:\n return\n num_results += 1\n yield format_result(i)\n\n # set up next query\n if resp[\"next\"] is not None:\n query[\"starting\"] = resp[\"next\"]\n query[\"limit\"] = min(query[\"limit\"]*2, 1000)\n else:\n return"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsearching for data objects in the base object hierarchy.", "response": "def find_data_objects(classname=None, state=None, visibility=None,\n name=None, name_mode='exact', properties=None,\n typename=None, tag=None, tags=None,\n link=None, project=None, folder=None, recurse=None,\n modified_after=None, modified_before=None,\n created_after=None, created_before=None,\n describe=False, limit=None, level=None, region=None,\n return_handler=False, first_page_size=100,\n **kwargs):\n \"\"\"\n :param classname:\n Class with which to restrict the search, i.e. 
one of \"record\",\n \"file\", \"applet\", \"workflow\"\n :type classname: string\n :param state: State of the object (\"open\", \"closing\", \"closed\", \"any\")\n :type state: string\n :param visibility: Visibility of the object (\"hidden\", \"visible\", \"either\")\n :type visibility: string\n :param name: Name of the object (also see *name_mode*)\n :type name: string\n :param name_mode: Method by which to interpret the *name* field (\"exact\": exact match, \"glob\": use \"*\" and \"?\" as wildcards, \"regexp\": interpret as a regular expression)\n :type name_mode: string\n :param properties: Properties (key-value pairs) that each result must have (use value True to require the property key and allow any value)\n :type properties: dict\n :param typename: Type constraint that each result must conform to\n :type typename: string or dict\n :param tag: Tag that each result must be tagged with (deprecated in favor of *tags*)\n :type tag: string\n :param tags: List of tags that each result must have ALL of\n :type tags: list of strings\n :param link: ID of an object that each result must link to\n :type link: string\n :param project: ID of a project in which each result must appear\n :type project: string\n :param folder: If *project* is given, full path to a folder in which each result must belong (default is the root folder)\n :type folder: string\n :param recurse: If *project* is given, whether to look in subfolders of *folder* as well (default is True)\n :type recurse: boolean\n :param modified_after: Timestamp after which each result was last modified (see note below for interpretation)\n :type modified_after: int or string\n :param modified_before: Timestamp before which each result was last modified (see note below for interpretation)\n :type modified_before: int or string\n :param created_after: Timestamp after which each result was last created (see note below for interpretation)\n :type created_after: int or string\n :param created_before: Timestamp before 
which each result was last created (see note below for interpretation)\n :type created_before: int or string\n :param describe: Controls whether to also return the output of\n calling describe() on each object. Supply False to omit describe\n output, True to obtain the default describe output, or a dict to\n be supplied as the describe call input (which may, among other\n things, be used to customize the set of fields that is returned)\n :type describe: bool or dict\n :param level: The minimum permissions level for which results should be returned (one of \"VIEW\", \"UPLOAD\", \"CONTRIBUTE\", or \"ADMINISTER\")\n :param region: Filter on result set by the given region(s).\n :type region: string or list of strings\n :type level: string\n :param limit: The maximum number of results to be returned (if not specified, the number of results is unlimited)\n :type limit: int\n :param first_page_size: The number of results that the initial API call will return. Subsequent calls will raise this by multiplying by 2 up to a maximum of 1000.\n :type first_page_size: int\n :param return_handler: If True, yields results as dxpy object handlers (otherwise, yields each result as a dict with keys \"id\" and \"project\")\n :type return_handler: boolean\n :rtype: generator\n\n Returns a generator that yields all data objects matching the query,\n up to *limit* objects. It transparently handles paging through the\n result set if necessary. For all parameters that are omitted, the\n search is not restricted by the corresponding field.\n\n .. 
note:: All timestamps must be supplied as one of the following:\n\n * A nonnegative integer, interpreted as milliseconds since the Epoch\n * A negative integer, interpreted as an offset in milliseconds relative\n to the current time\n * A string containing a negative integer with one of the suffixes\n \"s\", \"m\", \"d\", \"w\", or \"y\" (for seconds, minutes, days, weeks,\n or years), interpreted as an offset from the current time.\n\n The following examples both find all items that were created more\n than 1 week ago::\n\n items1 = list(find_data_objects(created_before=\"-1w\"))\n items2 = list(find_data_objects(created_before=-7*24*60*60*1000))\n\n This example iterates through all files with property\n \"project\" set to \"cancer project\" and prints their object IDs:\n\n for result in find_data_objects(classname=\"file\", properties={\"project\": \"cancer project\"}):\n print(\"Found a file with object id \" + result[\"id\"])\n\n \"\"\"\n\n query = {}\n if classname is not None:\n query[\"class\"] = classname\n if state is not None:\n query[\"state\"] = state\n if visibility is not None:\n query[\"visibility\"] = visibility\n if name is not None:\n if name_mode == 'exact':\n query[\"name\"] = name\n elif name_mode == 'glob':\n query['name'] = {'glob': name}\n elif name_mode == 'regexp':\n query['name'] = {'regexp': name}\n else:\n raise DXError('find_data_objects: Unexpected value found for argument name_mode')\n if properties is not None:\n query[\"properties\"] = properties\n if typename is not None:\n query[\"type\"] = typename\n if tag is not None and tags is not None:\n raise DXError('find_data_objects: Arguments \"tag\" and \"tags\" cannot both be provided')\n if tag is not None:\n query[\"tag\"] = tag\n if tags is not None:\n query[\"tags\"] = {\"$and\": tags}\n if link is not None:\n query[\"link\"] = link\n if project is not None:\n query[\"scope\"] = {\"project\": project}\n if folder is not None:\n query[\"scope\"][\"folder\"] = folder\n if recurse 
is not None:\n query[\"scope\"][\"recurse\"] = recurse\n elif folder is not None or recurse is not None:\n if dxpy.WORKSPACE_ID is not None:\n query['scope'] = {'project': dxpy.WORKSPACE_ID}\n if folder is not None:\n query['scope']['folder'] = folder\n if recurse is not None:\n query['scope']['recurse'] = recurse\n else:\n raise DXError(\"Cannot search within a folder or recurse if a project is not specified\")\n if modified_after is not None or modified_before is not None:\n query[\"modified\"] = {}\n if modified_after is not None:\n query[\"modified\"][\"after\"] = dxpy.utils.normalize_time_input(modified_after)\n if modified_before is not None:\n query[\"modified\"][\"before\"] = dxpy.utils.normalize_time_input(modified_before)\n if created_after is not None or created_before is not None:\n query[\"created\"] = {}\n if created_after is not None:\n query[\"created\"][\"after\"] = dxpy.utils.normalize_time_input(created_after)\n if created_before is not None:\n query[\"created\"][\"before\"] = dxpy.utils.normalize_time_input(created_before)\n if describe is not None and describe is not False:\n query[\"describe\"] = describe\n if level is not None:\n query['level'] = level\n if region is not None:\n query['region'] = region\n if limit is not None:\n query[\"limit\"] = limit\n\n return _find(dxpy.api.system_find_data_objects, query, limit, return_handler, first_page_size, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsearch for all executions in the applet or app.", "response": "def find_executions(classname=None, launched_by=None, executable=None, project=None,\n state=None, origin_job=None, parent_job=None, no_parent_job=False,\n parent_analysis=None, no_parent_analysis=False, root_execution=None,\n created_after=None, created_before=None, describe=False,\n name=None, name_mode=\"exact\", tags=None, properties=None, limit=None,\n first_page_size=100, return_handler=False, include_subjobs=True,\n **kwargs):\n '''\n :param 
classname:\n Class with which to restrict the search, i.e. one of \"job\",\n \"analysis\"\n :type classname: string\n :param launched_by: User ID of the user who launched the execution's origin execution\n :type launched_by: string\n :param executable: ID of the applet or app that spawned this execution, or a corresponding remote object handler\n :type executable: string or a DXApp/DXApplet/DXWorkflow instance\n :param project: ID of the project context for the execution\n :type project: string\n :param state: State of the execution (e.g. \"failed\", \"done\")\n :type state: string\n :param origin_job: ID of the original job that eventually spawned this execution (possibly by way of other executions)\n :type origin_job: string\n :param parent_job: ID of the parent job (deprecated: use the string 'none' to indicate it should have no parent job; use *no_parent_job* instead)\n :type parent_job: string\n :param no_parent_job: Indicate results should have no parent job; cannot be set to True with *parent_job* set to a string\n :type no_parent_job: boolean\n :param parent_analysis: ID of the parent analysis (deprecated: use the string 'none' to indicate it should have no parent analysis; use *no_parent_analysis* instead)\n :type parent_analysis: string\n :param no_parent_analysis: Indicate results should have no parent analysis; cannot be set to True with *parent_analysis* set to a string\n :type no_parent_analysis: boolean\n :param root_execution: ID of the top-level (user-initiated) execution (job or analysis) that eventually spawned this execution (possibly by way of other executions)\n :type root_execution: string\n :param created_after: Timestamp after which each result was last created (see note accompanying :meth:`find_data_objects()` for interpretation)\n :type created_after: int or string\n :param created_before: Timestamp before which each result was last created (see note accompanying :meth:`find_data_objects()` for interpretation)\n :type created_before: int or 
string\n :param describe: Controls whether to also return the output of\n calling describe() on each execution. Supply False to omit\n describe output, True to obtain the default describe output, or\n a dict to be supplied as the describe call input (which may be\n used to customize the set of fields that is to be returned; for\n example, you can supply {\"io\": False} to suppress detailed\n information about the execution's inputs and outputs)\n :type describe: bool or dict\n :param name: Name of the job or analysis to search by (also see *name_mode*)\n :type name: string\n :param name_mode: Method by which to interpret the *name* field (\"exact\": exact match, \"glob\": use \"*\" and \"?\" as wildcards, \"regexp\": interpret as a regular expression)\n :type name_mode: string\n :param tags: List of tags that each result must have ALL of\n :type tags: list of strings\n :param properties: Properties (key-value pairs) that each result must have (use value True to require the property key and allow any value)\n :type properties: dict\n :param limit: The maximum number of results to be returned (if not specified, the number of results is unlimited)\n :type limit: int\n :param first_page_size: The number of results that the initial API call will return. Subsequent calls will raise this by multiplying by 2 up to a maximum of 1000.\n :type first_page_size: int\n :param return_handler: If True, yields results as dxpy object handlers (otherwise, yields each result as a dict with keys \"id\" and \"project\")\n :type return_handler: boolean\n :param include_subjobs: If False, no subjobs will be returned by the API\n :type include_subjobs: boolean\n :rtype: generator\n\n Returns a generator that yields all executions (jobs or analyses) that match the query. It transparently handles\n paging through the result set if necessary. 
For all parameters that are omitted, the search is not restricted by\n the corresponding field.\n\n The following example iterates through all finished jobs and analyses in a\n particular project that were launched in the last two days::\n\n for result in find_executions(state=\"done\", project=proj_id, created_after=\"-2d\"):\n print(\"Found job or analysis with object id \" + result[\"id\"])\n\n '''\n\n query = {}\n if classname is not None:\n query[\"class\"] = classname\n if launched_by is not None:\n query[\"launchedBy\"] = launched_by\n if executable is not None:\n if isinstance(executable, (DXApplet, DXApp, DXWorkflow)):\n query[\"executable\"] = executable.get_id()\n else:\n query[\"executable\"] = executable\n if project is not None:\n if isinstance(project, DXProject):\n query[\"project\"] = project.get_id()\n else:\n query[\"project\"] = project\n if state is not None:\n query[\"state\"] = state\n if origin_job is not None:\n if isinstance(origin_job, DXJob):\n query[\"originJob\"] = origin_job.get_id()\n else:\n query[\"originJob\"] = origin_job\n if parent_job is not None:\n if no_parent_job:\n raise DXError('find_executions: Cannot provide parent_job and set no_parent_job to True')\n if parent_job == \"none\": # to be deprecated\n query[\"parentJob\"] = None\n elif isinstance(parent_job, DXJob):\n query[\"parentJob\"] = parent_job.get_id()\n else:\n query[\"parentJob\"] = parent_job\n elif no_parent_job:\n query[\"parentJob\"] = None\n if parent_analysis is not None:\n if no_parent_analysis:\n raise DXError('find_executions: Cannot provide parent_analysis and set no_parent_analysis to True')\n if parent_analysis == \"none\": # to be deprecated\n query[\"parentAnalysis\"] = None\n elif isinstance(parent_analysis, DXAnalysis):\n query[\"parentAnalysis\"] = parent_analysis.get_id()\n else:\n query[\"parentAnalysis\"] = parent_analysis\n elif no_parent_analysis:\n query[\"parentAnalysis\"] = None\n if root_execution is not None:\n if 
isinstance(root_execution, (DXJob, DXAnalysis)):\n query[\"rootExecution\"] = root_execution.get_id()\n else:\n query[\"rootExecution\"] = root_execution\n if created_after is not None or created_before is not None:\n query[\"created\"] = {}\n if created_after is not None:\n query[\"created\"][\"after\"] = dxpy.utils.normalize_time_input(created_after)\n if created_before is not None:\n query[\"created\"][\"before\"] = dxpy.utils.normalize_time_input(created_before)\n if describe is not None and describe is not False:\n query[\"describe\"] = describe\n if name is not None:\n if name_mode == 'exact':\n query['name'] = name\n elif name_mode == 'glob':\n query['name'] = {'glob': name}\n elif name_mode == 'regexp':\n query['name'] = {'regexp': name}\n else:\n raise DXError('find_executions: Unexpected value found for argument name_mode')\n if tags is not None:\n query['tags'] = {'$and': tags}\n if properties is not None:\n query['properties'] = properties\n if include_subjobs is not True:\n query[\"includeSubjobs\"] = include_subjobs\n if limit is not None:\n query[\"limit\"] = limit\n\n return _find(dxpy.api.system_find_executions, query, limit, return_handler, first_page_size, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find_projects(name=None, name_mode='exact', properties=None, tags=None,\n level=None, describe=False, explicit_perms=None, region=None,\n public=None, created_after=None, created_before=None, billed_to=None,\n limit=None, return_handler=False, first_page_size=100, containsPHI=None, **kwargs):\n \"\"\"\n :param name: Name of the project (also see *name_mode*)\n :type name: string\n :param name_mode: Method by which to interpret the *name* field (\"exact\": exact match, \"glob\": use \"*\" and \"?\" as wildcards, \"regexp\": interpret as a regular expression)\n :type name_mode: string\n :param properties: Properties (key-value pairs) that each result must have (use value True to 
require the property key and allow any value)\n :type properties: dict\n :param tags: Tags that each result must have\n :type tags: list of strings\n :param level: One of \"VIEW\", \"UPLOAD\", \"CONTRIBUTE\", or \"ADMINISTER\". If specified, only returns projects where the current user has at least the specified permission level.\n :type level: string\n :param describe: Controls whether to also return the output of\n calling describe() on each project. Supply False to omit\n describe output, True to obtain the default describe output, or\n a dict to be supplied as the describe call input (which may be\n used to customize the set of fields that is returned)\n :type describe: bool or dict\n :param explicit_perms: Filter on presence of an explicit permission. If True, matching projects must have an explicit permission (any permission granted directly to the user or an organization to which the user belongs). If False, matching projects must not have any explicit permissions for the user. (default is None, for no filter)\n :type explicit_perms: boolean or None\n :param region: If specified, only returns projects where the project is in the given region.\n :type region: string\n :param public: Filter on the project being public. If True, matching projects must be public. If False, matching projects must not be public. 
(default is None, for no filter)\n :type public: boolean or None\n :param created_after: Timestamp after which each result was created\n (see note accompanying :meth:`find_data_objects()` for interpretation)\n :type created_after: int or string\n :param created_before: Timestamp before which each result was created\n (see note accompanying :meth:`find_data_objects()` for interpretation)\n :type created_before: int or string\n :param billed_to: Entity ID (user or organization) that pays for the project's storage costs\n :type billed_to: string\n :param limit: The maximum number of results to be returned (if not specified, the number of results is unlimited)\n :type limit: int\n :param first_page_size: The number of results that the initial API call will return. Subsequent calls will raise this by multiplying by 2 up to a maximum of 1000.\n :type first_page_size: int\n :param return_handler: If True, yields results as dxpy object handlers (otherwise, yields each result as a dict with keys \"id\" and \"project\")\n :type return_handler: boolean\n :param containsPHI: If set to true, only returns projects that contain PHI.\n If set to false, only returns projects that do not contain PHI.\n :type containsPHI: boolean\n :rtype: generator\n\n Returns a generator that yields all projects that match the query.\n It transparently handles paging through the result set if necessary.\n For all parameters that are omitted, the search is not restricted by\n the corresponding field.\n\n You can use the *level* parameter to find projects that the user has\n at least a specific level of access to (e.g. 
\"CONTRIBUTE\").\n\n \"\"\"\n query = {}\n if name is not None:\n if name_mode == 'exact':\n query['name'] = name\n elif name_mode == 'glob':\n query['name'] = {'glob': name}\n elif name_mode == 'regexp':\n query['name'] = {'regexp': name}\n else:\n raise DXError('find_projects: Unexpected value found for argument name_mode')\n if properties is not None:\n query[\"properties\"] = properties\n if tags is not None:\n query[\"tags\"] = {\"$and\": tags}\n if level is not None:\n query[\"level\"] = level\n if describe is not None and describe is not False:\n query[\"describe\"] = describe\n if explicit_perms is not None:\n query['explicitPermission'] = explicit_perms\n if public is not None:\n query['public'] = public\n if created_after is not None or created_before is not None:\n query[\"created\"] = {}\n if created_after is not None:\n query[\"created\"][\"after\"] = dxpy.utils.normalize_time_input(created_after)\n if created_before is not None:\n query[\"created\"][\"before\"] = dxpy.utils.normalize_time_input(created_before)\n if region is not None:\n query['region'] = region\n if billed_to is not None:\n query['billTo'] = billed_to\n if limit is not None:\n query[\"limit\"] = limit\n if containsPHI is not None:\n query[\"containsPHI\"] = containsPHI\n\n return _find(dxpy.api.system_find_projects, query, limit, return_handler, first_page_size, **kwargs)", "response": "Find projects in a resource tree."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef find_apps(name=None, name_mode='exact', category=None,\n all_versions=None, published=None,\n billed_to=None, created_by=None, developer=None,\n created_after=None, created_before=None,\n modified_after=None, modified_before=None,\n describe=False, limit=None, return_handler=False, first_page_size=100, **kwargs):\n \"\"\"\n This method is identical to :meth:`find_global_executables()` with the API method\n used: :meth:`system_find_apps()`.\n \"\"\"\n return 
find_global_executables(dxpy.api.system_find_apps,\n name=name, name_mode=name_mode, category=category,\n all_versions=all_versions, published=published,\n billed_to=billed_to, created_by=created_by, developer=developer,\n created_after=created_after, created_before=created_before,\n modified_after=modified_after, modified_before=modified_before,\n describe=describe, limit=limit, return_handler=return_handler,\n first_page_size=first_page_size, **kwargs)", "response": "Find all apps that match the given query."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a single data object that satisfies the supplied constraints.", "response": "def find_one_data_object(zero_ok=False, more_ok=True, **kwargs):\n \"\"\"\n :param zero_ok:\n If False (default), :class:`~dxpy.exceptions.DXSearchError` is\n raised if the search has 0 results; if True, returns None if the\n search has 0 results\n :type zero_ok: bool\n :param more_ok:\n If False, :class:`~dxpy.exceptions.DXSearchError` is raised if\n the search has 2 or more results\n :type more_ok: bool\n\n Returns one data object that satisfies the supplied constraints, or\n None if none exist (provided *zero_ok* is True). 
Supports all search\n constraint arguments supported by :meth:`find_data_objects()`.\n\n \"\"\"\n return _find_one(find_data_objects, zero_ok=zero_ok, more_ok=more_ok, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_one_project(zero_ok=False, more_ok=True, **kwargs):\n return _find_one(find_projects, zero_ok=zero_ok, more_ok=more_ok, **kwargs)", "response": "Returns a single project that satisfies the supplied constraints."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_one_app(zero_ok=False, more_ok=True, **kwargs):\n return _find_one(find_apps, zero_ok=zero_ok, more_ok=more_ok, **kwargs)", "response": "Returns a single app that satisfies the supplied constraints."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a generator that yields the objects from the API method with org_id and query.", "response": "def _org_find(api_method, org_id, query, first_page_size=100):\n \"\"\"\n Takes an API method handler ``dxpy.api.org_find...`` and calls it with\n *org_id* and *query*, then wraps a generator around its output. 
Used by\n :meth:`org_find_members` and :meth:`org_find_projects` below.\n\n :param first_page_size: The number of results that the initial API call will return.\n :type first_page_size: int\n\n \"\"\"\n if \"limit\" not in query:\n query[\"limit\"] = min(first_page_size, 1000)\n\n while True:\n resp = api_method(org_id, query)\n for result in resp[\"results\"]:\n yield result\n\n # set up next query\n if resp[\"next\"] is not None:\n query[\"starting\"] = resp[\"next\"]\n query[\"limit\"] = min(query[\"limit\"] * 2, 1000)\n else:\n break"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef org_find_members(org_id=None, level=None, describe=False):\n query = {}\n if level is not None:\n query[\"level\"] = level\n query[\"describe\"] = describe\n\n return _org_find(dxpy.api.org_find_members, org_id, query)", "response": "Returns a generator that yields all org members that match the specified criteria."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef org_find_projects(org_id=None, name=None, name_mode='exact', ids=None, properties=None, tags=None, describe=False,\n public=None, created_after=None, created_before=None, region=None, containsPHI=None):\n \"\"\"\n :param org_id: ID of the organization\n :type org_id: string\n :param name: Name that each result must have (also see *name_mode* param)\n :type name: string\n :param name_mode: Method by which to interpret the *name* param (\"exact\": exact match,\n \"glob\": use \"*\" and \"?\" as wildcards, \"regexp\": interpret as a regular expression)\n :type name_mode: string\n :param ids: List of project IDs. 
Each result must have a project ID that was specified in this list.\n :type ids: array of strings\n :param properties: Properties (key-value pairs) that each result must have\n (use value True to require the property key and allow any value)\n :type properties: dict\n :param tags: Tags that each result must have\n :type tags: list of strings\n :param describe: Whether or not to return the response of ``dxpy.api.project_describe`` for each result. False\n omits the describe response; True includes it; a dict will be used as the input to\n ``dxpy.api.project_describe`` (to customize the desired set of fields in the describe response).\n :type describe: bool or dict\n :param public: True indicates that each result must be public; False indicates that each result must be private;\n None indicates that both public and private projects will be returned in the result set.\n :type public: boolean or None\n :param created_after: Timestamp after which each result was created\n (see note accompanying :meth:`find_data_objects()` for interpretation)\n :type created_after: int or string\n :param created_before: Timestamp before which each result was created\n (see note accompanying :meth:`find_data_objects()` for interpretation)\n :type created_before: int or string\n :param region: If specified, only returns projects where the project is in the given region.\n :type region: string\n :param containsPHI: If set to true, only returns projects that contain PHI.\n If set to false, only returns projects that do not contain PHI.\n :type containsPHI: boolean\n :rtype: generator\n\n Returns a generator that yields all projects that match the query formed by intersecting all specified\n constraints. 
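The constraint intersection that `org_find_projects` performs — each optional argument contributes one clause, and multi-valued tags or properties are joined under `"$and"` — can be sketched independently of dxpy. `build_query` below is a simplified hypothetical helper, not the library's API:

```python
def build_query(name=None, ids=None, tags=None, properties=None):
    """Build a find-projects style query dict from optional constraints."""
    query = {}
    if name is not None:
        query["name"] = name
    if ids is not None:
        query["id"] = ids
    if tags is not None:
        # A single tag is passed bare; several tags are intersected with $and.
        query["tags"] = tags[0] if len(tags) == 1 else {"$and": tags}
    if properties is not None:
        if len(properties) == 1:
            query["properties"] = properties
        else:
            query["properties"] = {"$and": [{k: v} for k, v in properties.items()]}
    return query

q = build_query(name="analysis", tags=["qc", "prod"])
```

Omitted arguments simply contribute no clause, which is why the search "is not restricted by any parameters that were unspecified."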
The search is not restricted by any parameters that were unspecified.\n\n \"\"\"\n query = {}\n if name is not None:\n if name_mode == 'exact':\n query['name'] = name\n elif name_mode == 'glob':\n query['name'] = {'glob': name}\n elif name_mode == 'regexp':\n query['name'] = {'regexp': name}\n else:\n raise DXError('org_find_projects: Unexpected value found for argument name_mode')\n if ids is not None:\n query[\"id\"] = ids\n if properties is not None:\n if len(properties.keys()) == 1:\n query[\"properties\"] = properties\n else:\n query[\"properties\"] = {\"$and\": [{k: v} for (k, v) in properties.items()]}\n if tags is not None:\n if len(tags) == 1:\n query[\"tags\"] = tags[0]\n else:\n query[\"tags\"] = {\"$and\": tags}\n query[\"describe\"] = describe\n if public is not None:\n query['public'] = public\n if created_after is not None or created_before is not None:\n query[\"created\"] = {}\n if created_after is not None:\n query[\"created\"][\"after\"] = dxpy.utils.normalize_time_input(created_after)\n if created_before is not None:\n query[\"created\"][\"before\"] = dxpy.utils.normalize_time_input(created_before)\n if region is not None:\n query['region'] = region\n if containsPHI is not None:\n query[\"containsPHI\"] = containsPHI\n\n return _org_find(dxpy.api.org_find_projects, org_id, query)", "response": "Returns a generator that yields all projects in the org that match the specified criteria."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef org_find_apps(org_id,\n name=None,\n name_mode='exact',\n category=None,\n all_versions=None,\n published=None,\n created_by=None,\n developer=None,\n authorized_user=None,\n created_after=None,\n created_before=None,\n modified_after=None,\n modified_before=None,\n describe=False,\n limit=None,\n return_handler=False,\n first_page_size=100,\n **kwargs):\n \"\"\"\n :param name: Name of the app (also see *name_mode*)\n :type name: string\n :param name_mode: Method by which to interpret the *name* field\n
\"exact\": exact match,\n \"glob\": use \"*\" and \"?\" as wildcards,\n \"regexp\": interpret as a regular expression\n :type name_mode: string\n :param category: If specified, only returns apps that are in the specified category\n :type category: string\n :param all_versions: Whether to return all versions of each app or just the version tagged \"default\"\n :type all_versions: boolean\n :param published: If specified, only returns results that have the specified publish status\n True for published apps,\n False for unpublished apps\n :type published: boolean\n :param created_by: If specified, only returns app versions that were created by the specified user\n (of the form \"user-USERNAME\")\n :type created_by: string\n :param developer: If specified, only returns apps for which the specified user (of the form \"user-USERNAME\")\n is a developer\n :type developer: string\n :param authorized_user: If specified, only returns apps for which the specified user (either a user ID, org ID,\n or the string \"PUBLIC\") appears in the app's list of authorized users\n :type authorized_user: string\n :param created_after: Timestamp after which each result was last created (see note accompanying\n :meth:`find_data_objects()` for interpretation)\n :type created_after: int or string\n :param created_before: Timestamp before which each result was last created (see note accompanying\n :meth:`find_data_objects()` for interpretation)\n :type created_before: int or string\n :param modified_after: Timestamp after which each result was last modified (see note accompanying\n :meth:`find_data_objects()` for interpretation)\n :type modified_after: int or string\n :param modified_before: Timestamp before which each result was last modified (see note accompanying\n :meth:`find_data_objects()` for interpretation)\n :type modified_before: int or string\n :param describe: Controls whether to also return the output of\n calling describe() on each app. 
Supply False to omit describe\n output, True to obtain the default describe output, or a dict to\n be supplied as the describe call input (which may be used to\n customize the set of fields that is returned)\n :type describe: bool or dict\n :param limit: The maximum number of results to be returned (if not specified, the number of results is unlimited)\n :type limit: int\n :param first_page_size: The number of results that the initial API call will return. Subsequent calls will raise\n this by multiplying by 2 up to a maximum of 1000.\n :type first_page_size: int\n :param return_handler: If True, yields results as dxpy object handlers (otherwise, yields each result as a dict\n with keys \"id\" and \"project\")\n :type return_handler: boolean\n :rtype: generator\n\n Returns a generator that yields all apps that match the query. It\n transparently handles paging through the result set if necessary.\n For all parameters that are omitted, the search is not restricted by\n the corresponding field.\n\n \"\"\"\n\n query = {}\n if name is not None:\n if name_mode == 'exact':\n query['name'] = name\n elif name_mode == 'glob':\n query['name'] = {'glob': name}\n elif name_mode == 'regexp':\n query['name'] = {'regexp': name}\n else:\n raise DXError('find_apps: Unexpected value found for argument name_mode')\n if category is not None:\n query[\"category\"] = category\n if all_versions is not None:\n query[\"allVersions\"] = all_versions\n if published is not None:\n query[\"published\"] = published\n if created_by is not None:\n query[\"createdBy\"] = created_by\n if developer is not None:\n query[\"developer\"] = developer\n if authorized_user is not None:\n query[\"authorizedUser\"] = authorized_user\n if modified_after is not None or modified_before is not None:\n query[\"modified\"] = {}\n if modified_after is not None:\n query[\"modified\"][\"after\"] = dxpy.utils.normalize_time_input(modified_after)\n if modified_before is not None:\n query[\"modified\"][\"before\"] = 
dxpy.utils.normalize_time_input(modified_before)\n if created_after is not None or created_before is not None:\n query[\"created\"] = {}\n if created_after is not None:\n query[\"created\"][\"after\"] = dxpy.utils.normalize_time_input(created_after)\n if created_before is not None:\n query[\"created\"][\"before\"] = dxpy.utils.normalize_time_input(created_before)\n if describe is not None and describe is not False:\n query[\"describe\"] = describe\n if limit is not None:\n query[\"limit\"] = limit\n\n return _org_find(dxpy.api.org_find_apps, org_id, query)", "response": "Returns a generator that yields all apps in the org that match the query."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a generator that yields all orgs matching the query.", "response": "def find_orgs(query, first_page_size=10):\n \"\"\"\n :param query: The input to the /system/findOrgs API method.\n :type query: dict\n\n :param first_page_size: The number of results that the initial\n /system/findOrgs API call will return; default 10, max 1000. Subsequent\n calls will raise the number of returned results exponentially up to a\n max of 1000.\n :type first_page_size: int\n\n :rtype: generator\n\n Returns a generator that yields all orgs matching the specified query.
Will\n transparently handle pagination as necessary.\n \"\"\"\n return _find(dxpy.api.system_find_orgs, query, limit=None,\n return_handler=False, first_page_size=first_page_size)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _create_dirs(idir, dirs):\n '''\n Create a set of directories, so we could store the input files.\n For example, seq1 could be stored under:\n /in/seq1/NC_001122.fasta\n\n TODO: this call could fail, we need to report a reasonable error code\n\n Note that we create a directory for every file array, even if\n it has zero inputs.\n '''\n # create the <idir> itself\n file_load_utils.ensure_dir(idir)\n # create each subdir\n for d in dirs:\n file_load_utils.ensure_dir(os.path.join(idir, d))", "response": "Create a set of directories that can be used to store the input files."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngenerating a dict of values for the downloaded files.", "response": "def _gen_helper_dict(filtered_inputs):\n '''\n Create a dict of values for the downloaded files.
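The `_create_dirs` helper above reduces to a root directory plus one subdirectory per input name. A standard-library-only sketch, with `os.makedirs(..., exist_ok=True)` standing in for `file_load_utils.ensure_dir`:

```python
import os
import tempfile

def create_dirs(idir, dirs):
    """Create the input root, then one subdirectory per input name."""
    os.makedirs(idir, exist_ok=True)
    for d in dirs:
        os.makedirs(os.path.join(idir, d), exist_ok=True)

# Exercise it in a throwaway temp directory.
root = tempfile.mkdtemp()
idir = os.path.join(root, "in")
create_dirs(idir, ["seq1", "seq2"])
```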
This is similar to the variables created\n when running a bash app.\n '''\n\n file_key_descs, _ignore = file_load_utils.analyze_bash_vars(\n file_load_utils.get_input_json_file(), None)\n\n flattened_dict = {}\n\n def add_if_no_collision(key, value, dict_):\n if key not in dict_:\n dict_[key] = value\n\n for input_ in filtered_inputs:\n if input_ not in file_key_descs:\n continue\n input_var_dict = file_key_descs[input_]\n add_if_no_collision(input_ + '_path', input_var_dict[\"path\"], flattened_dict)\n add_if_no_collision(input_ + '_name', input_var_dict[\"basename\"], flattened_dict)\n add_if_no_collision(input_ + '_prefix', input_var_dict[\"prefix\"], flattened_dict)\n\n return flattened_dict"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the number of threads that can be used in parallel.", "response": "def _get_num_parallel_threads(max_threads, num_cores, mem_available_mb):\n '''\n Ensure at least ~1.2 GB memory per thread, see PTFM-18767\n '''\n return min(max_threads, num_cores, max(int(mem_available_mb/1200), 1))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef refill_paragraphs(string, ignored_prefix=' '):\n paragraphs = string.split('\\n\\n')\n refilled_paragraphs = [fill(paragraph) if not paragraph.startswith(ignored_prefix) else paragraph for paragraph in paragraphs]\n return '\\n\\n'.join(refilled_paragraphs).strip('\\n')", "response": "Refills the given text where the text is composed of paragraphs\n separated by blank lines."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef format_find_results(args, results):\n if args.json:\n print(json.dumps(list(results), indent=4))\n elif args.brief:\n for result in results:\n print(result['id'])\n else:\n if args.func.__name__ in (\"find_projects\", \"org_find_projects\"):\n _format_find_projects_results(results)\n elif args.func.__name__ in 
(\"org_find_members\"):\n _format_find_org_members_results(results)\n elif args.func.__name__ in (\"org_find_apps\"): # should have \"find_apps\" here one day\n _format_find_apps_results(results, verbose=args.verbose)", "response": "Formats the output of dx find... commands for args. json and args. brief arguments."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main(**kwargs):\n draw_lines()\n draw_histogram()\n draw_bar_chart()\n destination = \"-r /report\"\n if use_html:\n generate_html()\n command = \"dx-build-report-html {h} {d}\".format(h=html_filename, d=destination)\n else:\n command = \"dx-build-report-html {l} {b} {h} {d}\".format(l=lines_filename, b=bars_filename, h=histogram_filename, d=destination)\n sub_output = json.loads(subprocess.check_output(command, shell=True))\n output = {}\n output[\"report\"] = dxpy.dxlink(sub_output[\"recordId\"])\n return output", "response": "Generates a couple of simple graphs and optionally generates an HTML file to upload them"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndraw a histogram of random numbers.", "response": "def draw_histogram():\n \"\"\"\n Uses sample code from http://matplotlib.org/1.2.1/examples/api/histogram_demo.html\n \"\"\"\n\n mu, sigma = 100, 15\n x = mu + sigma * numpy.random.randn(10000)\n fig = pyplot.figure()\n ax = fig.add_subplot(111)\n n, bins, patches = ax.hist(x, 50, normed=1, facecolor='green', alpha=0.75)\n\n ax.set_xlabel('Random number generated')\n ax.set_ylabel('Probability')\n ax.set_xlim(40, 160)\n ax.set_ylim(0, 0.03)\n ax.grid(True)\n\n pyplot.savefig(histogram_filename)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndraw a bar chart for the next N elements.", "response": "def draw_bar_chart():\n \"\"\"\n Uses sample code from http://matplotlib.org/1.2.1/examples/api/barchart_demo.html\n \"\"\"\n N = 5\n menMeans = (20, 35, 
30, 35, 27)\n menStd = (2, 3, 4, 1, 2)\n\n ind = numpy.arange(N)\n width = 0.35\n\n fig = pyplot.figure()\n ax = fig.add_subplot(111)\n rects1 = ax.bar(ind, menMeans, width, color='r', yerr=menStd)\n\n womenMeans = (25, 32, 34, 20, 25)\n womenStd = (3, 5, 2, 3, 3)\n rects2 = ax.bar(ind+width, womenMeans, width, color='y', yerr=womenStd)\n\n ax.set_ylabel('Scores')\n ax.set_title('Scores by group and color')\n ax.set_xticks(ind+width)\n ax.set_xticklabels(('G1', 'G2', 'G3', 'G4', 'G5'))\n\n ax.legend((rects1[0], rects2[0]), ('Red', 'Yellow'))\n\n def autolabel(rects):\n for rect in rects:\n height = rect.get_height()\n ax.text(rect.get_x()+rect.get_width()/2., 1.05*height, '%d' % int(height),\n ha='center', va='bottom')\n\n autolabel(rects1)\n autolabel(rects2)\n pyplot.savefig(bars_filename)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef draw_lines():\n r = numpy.random.randn(200)\n fig = pyplot.figure()\n ax = fig.add_subplot(111)\n ax.plot(r)\n\n ax.grid(True)\n\n pyplot.savefig(lines_filename)", "response": "Draw a line between a set of random values\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef generate_html():\n html_file = open(html_filename, \"w\")\n html_file.write(\"\")\n html_file.write(\"

<h1>Here are some graphs for you!</h1>\")\n for image in [lines_filename, bars_filename, histogram_filename]:\n html_file.write(\"<img src='{0}'><br>{0}<br>
\".format(image))\n html_file.write(\"\")\n html_file.close()", "response": "Generate an HTML file incorporating the images produced by this script\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _safe_unicode(o):\n def clean(s):\n return u''.join([c if c in ASCII_PRINTABLE else '?' for c in s])\n if USING_PYTHON2:\n try:\n return unicode(o)\n except:\n try:\n s = str(o)\n try:\n return s.decode(\"utf-8\")\n except:\n return clean(s[:2048]) + u\" [Raw error message: \" + unicode(s.encode(\"hex\"), 'utf-8') + u\"]\"\n except:\n return u\"(Unable to decode Python exception message)\"\n else:\n return str(o)", "response": "Returns an equivalent unicode object for the given object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nformat the specified exception into a string that can be used in the log file.", "response": "def _format_exception_message(e):\n \"\"\"\n Formats the specified exception.\n \"\"\"\n # Prevent duplication of \"AppError\" in places that print \"AppError\"\n # and then this formatted string\n if isinstance(e, dxpy.AppError):\n return _safe_unicode(e)\n if USING_PYTHON2:\n return unicode(e.__class__.__name__, 'utf-8') + \": \" + _safe_unicode(e)\n else:\n return e.__class__.__name__ + \": \" + _safe_unicode(e)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ntriggering the execution environment entry point processor. Use this function in the program entry point code: .. code-block:: python import dxpy @dxpy.entry_point('main') def hello(i): pass dxpy.run() This method may be used to invoke the program either in a production environment (inside the execution environment) or for local debugging (in the debug harness), as follows: If the environment variable *DX_JOB_ID* is set, the processor retrieves the job with that ID from the API server. The job's *function* field indicates the function name to be invoked. 
That function name is looked up in the table of all methods decorated with *@dxpy.entry_point('name')* in the module from which :func:`run()` was called, and the matching method is invoked (with the job's input supplied as parameters). This is the mode of operation used in the DNAnexus execution environment. .. warning:: The parameters *function_name* and *function_input* are disregarded in this mode of operation. If the environment variable *DX_JOB_ID* is not set, the function name may be given in *function_name*; if not set, it is set by the environment variable *DX_TEST_FUNCTION*. The function input may be given in *function_input*; if not set, it is set by the local file *job_input.json* which is expected to be present. The absence of *DX_JOB_ID* signals to :func:`run()` that execution is happening in the debug harness. In this mode of operation, all calls to :func:`dxpy.bindings.dxjob.new_dxjob()` (and higher level handler methods which use it) are intercepted, and :func:`run()` is invoked instead with appropriate inputs.", "response": "def run(function_name=None, function_input=None):\n \"\"\"Triggers the execution environment entry point processor.\n\n Use this function in the program entry point code:\n\n .. code-block:: python\n\n import dxpy\n\n @dxpy.entry_point('main')\n def hello(i):\n pass\n\n dxpy.run()\n\n This method may be used to invoke the program either in a production\n environment (inside the execution environment) or for local\n debugging (in the debug harness), as follows:\n\n If the environment variable *DX_JOB_ID* is set, the processor\n retrieves the job with that ID from the API server. The job's\n *function* field indicates the function name to be invoked. That\n function name is looked up in the table of all methods decorated\n with *@dxpy.entry_point('name')* in the module from which\n :func:`run()` was called, and the matching method is invoked (with\n the job's input supplied as parameters). 
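The entry-point lookup described above — a module-level table mapping names to decorated functions, consulted by the dispatcher — can be sketched in a few lines. This is a simplified model of the mechanism, not the actual dxpy internals (the real `run()` also handles `DX_JOB_ID`, logging, and error files):

```python
from functools import wraps

ENTRY_POINT_TABLE = {}

def entry_point(name):
    """Register a function under *name* so a dispatcher can invoke it later."""
    def wrap(f):
        ENTRY_POINT_TABLE[name] = f
        @wraps(f)
        def wrapped_f(*args, **kwargs):
            return f(*args, **kwargs)
        return wrapped_f
    return wrap

@entry_point("main")
def hello(name):
    return "hello, " + name

def run(job):
    # Look up the job's function in the table and call it with the job input.
    return ENTRY_POINT_TABLE[job["function"]](**job["input"])

result = run({"function": "main", "input": {"name": "world"}})
```

The job description's `function` field is the only coupling between the platform and the user's module, which is what makes the local debug harness and the cloud execution path interchangeable.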
This is the mode of\n operation used in the DNAnexus execution environment.\n\n .. warning::\n\n The parameters *function_name* and *function_input* are\n disregarded in this mode of operation.\n\n If the environment variable *DX_JOB_ID* is not set, the function\n name may be given in *function_name*; if not set, it is set by the\n environment variable *DX_TEST_FUNCTION*. The function input may be\n given in *function_input*; if not set, it is set by the local file\n *job_input.json* which is expected to be present.\n\n The absence of *DX_JOB_ID* signals to :func:`run()` that execution\n is happening in the debug harness. In this mode of operation, all\n calls to :func:`dxpy.bindings.dxjob.new_dxjob()` (and higher level\n handler methods which use it) are intercepted, and :func:`run()` is\n invoked instead with appropriate inputs.\n\n \"\"\"\n\n global RUN_COUNT\n RUN_COUNT += 1\n\n dx_working_dir = os.getcwd()\n\n if dxpy.JOB_ID is not None:\n logging.basicConfig()\n\n try:\n logging.getLogger().addHandler(dxpy.DXLogHandler())\n except dxpy.exceptions.DXError:\n print(\"TODO: FIXME: the EE client should die if logging is not available\")\n\n job = dxpy.describe(dxpy.JOB_ID)\n else:\n if function_name is None:\n function_name = os.environ.get('DX_TEST_FUNCTION', 'main')\n if function_input is None:\n with open(\"job_input.json\", \"r\") as fh:\n function_input = json.load(fh)\n\n job = {'function': function_name, 'input': function_input}\n\n with open(\"job_error_reserved_space\", \"w\") as fh:\n fh.write(\"This file contains reserved space for writing job errors in case the filesystem becomes full.\\n\" + \" \"*1024*64)\n\n print(\"Invoking\", job.get('function'), \"with\", job.get('input'))\n\n try:\n result = ENTRY_POINT_TABLE[job['function']](**job['input'])\n except dxpy.AppError as e:\n save_error(e, dx_working_dir, error_type=\"AppError\")\n raise\n except Exception as e:\n save_error(e, dx_working_dir)\n raise\n\n if result is not None:\n # TODO: protect 
against client removing its original working directory\n os.chdir(dx_working_dir)\n if USING_PYTHON2:\n # On python-2 we need to use binary mode\n with open(\"job_output.json\", \"wb\") as fh:\n json.dump(result, fh, indent=2, cls=DXJSONEncoder)\n fh.write(b\"\\n\")\n else:\n with open(\"job_output.json\", \"w\") as fh:\n json.dump(result, fh, indent=2, cls=DXJSONEncoder)\n fh.write(\"\\n\")\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef entry_point(entry_point_name):\n def wrap(f):\n ENTRY_POINT_TABLE[entry_point_name] = f\n @wraps(f)\n def wrapped_f(*args, **kwargs):\n return f(*args, **kwargs)\n return wrapped_f\n return wrap", "response": "Use this to decorate a DNAnexus execution environment entry point."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the workflow ID and project ID from a given path.", "response": "def get_workflow_id_and_project(path):\n '''\n :param path: a path or ID to a workflow object\n :type path: string\n :returns: tuple of (workflow ID, project ID)\n\n Returns the workflow and project IDs from the given path if\n available; otherwise, exits with an appropriate error message.\n '''\n project, _folderpath, entity_result = try_call(resolve_existing_path, path, expected='entity')\n try:\n if entity_result is None or not entity_result['id'].startswith('workflow-'):\n raise DXCLIError('Could not resolve \"' + path + '\" to a workflow object')\n except:\n err_exit()\n return entity_result['id'], project"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef user_info(authserver_host=None, authserver_port=None):\n authserver = get_auth_server_name(authserver_host, authserver_port)\n return DXHTTPRequest(authserver + \"/system/getUserInfo\", {}, prepend_srv=False)", "response": "Returns the result of the user_info call against the specified auth\n 
server."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the input directory for the current job.", "response": "def get_input_dir(job_homedir=None):\n '''\n :param job_homedir: explicit value for home directory, used for testing purposes\n :rtype: string\n :returns: path to input directory\n\n Returns the input directory, where all inputs are downloaded\n '''\n if job_homedir is not None:\n home_dir = job_homedir\n else:\n home_dir = environ.get('HOME')\n idir = os.path.join(home_dir, 'in')\n return idir"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the output directory, where all outputs are created and uploaded from.", "response": "def get_output_dir(job_homedir=None):\n '''\n :param job_homedir: explicit value for home directory, used for testing purposes\n :rtype: string\n :returns: path to output directory\n\n Returns the output directory, where all outputs are created, and\n uploaded from\n '''\n if job_homedir is not None:\n home_dir = job_homedir\n else:\n home_dir = environ.get('HOME')\n odir = os.path.join(home_dir, 'out')\n return odir"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef rm_output_json_file():\n path = get_output_json_file()\n try:\n os.remove(path)\n except OSError as e:\n if e.errno == errno.ENOENT:\n pass\n else:\n raise", "response": "Remove the output json file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nensure that the given path exists as a directory (and is not a file).", "response": "def ensure_dir(path):\n \"\"\"\n :param path: path to directory to be created\n\n Create a directory if it does not already exist.\n \"\"\"\n if not os.path.exists(path):\n # path does not exist, create the directory\n os.mkdir(path)\n else:\n # The path exists, check that it is not a file\n if os.path.isfile(path):\n raise Exception(\"Path %s already exists, and it is a
file, not a directory\" % path)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmakes a valid Unix filename from a file basename.", "response": "def make_unix_filename(fname):\n \"\"\"\n :param fname: the basename of a file (e.g., xxx in /zzz/yyy/xxx).\n :returns: a valid unix filename\n :rtype: string\n :raises: DXError if the filename is invalid on a Unix system\n\n The problem being solved here is that *fname* is a python string; it\n may contain characters that are invalid for a file name. We replace all the slashes with %2F.\n Another issue is that the user may choose an invalid name. Since we focus\n on Unix systems, the only possibilities are \".\" and \"..\".\n \"\"\"\n # sanity check for filenames\n bad_filenames = [\".\", \"..\"]\n if fname in bad_filenames:\n raise DXError(\"Invalid filename {}\".format(fname))\n return fname.replace('/', '%2F')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nextracting the list of files from the job input and building the set of directories to create and the set of files with sources and destinations.", "response": "def get_job_input_filenames(job_input_file):\n \"\"\"Extract list of files, returns a set of directories to create, and\n a set of files, with sources and destinations. The paths created are\n relative to the input directory.\n\n Note: we go through file names inside arrays, and create a\n separate subdirectory for each.
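The sanitization rules from `make_unix_filename` above (reject "." and "..", percent-encode slashes) are easy to restate in a dependency-free sketch that raises `ValueError` instead of dxpy's `DXError`:

```python
def make_unix_filename(fname):
    """Return a safe Unix basename: '.' and '..' are rejected, '/' becomes %2F."""
    if fname in (".", ".."):
        raise ValueError("Invalid filename {}".format(fname))
    return fname.replace("/", "%2F")

safe = make_unix_filename("reads/1.fastq")  # slash would otherwise split the path
```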
This avoids clobbering files when\n duplicate filenames appear in an array.\n \"\"\"\n def get_input_hash():\n with open(job_input_file) as fh:\n job_input = json.load(fh)\n return job_input\n job_input = get_input_hash()\n\n files = collections.defaultdict(list) # dictionary, with empty lists as default elements\n dirs = [] # directories to create under <idir>\n\n # Local function for adding a file to the list of files to be created\n # for example:\n # iname == \"seq1\"\n # subdir == \"015\"\n # value == { \"$dnanexus_link\": {\n # \"project\": \"project-BKJfY1j0b06Z4y8PX8bQ094f\",\n # \"id\": \"file-BKQGkgQ0b06xG5560GGQ001B\"\n # }\n # will create a record describing that the file should\n # be downloaded into <idir>/seq1/015/\n def add_file(iname, subdir, value):\n if not dxpy.is_dxlink(value):\n return\n handler = dxpy.get_handler(value)\n if not isinstance(handler, dxpy.DXFile):\n return\n filename = make_unix_filename(handler.name)\n trg_dir = iname\n if subdir is not None:\n trg_dir = os.path.join(trg_dir, subdir)\n files[iname].append({'trg_fname': os.path.join(trg_dir, filename),\n 'handler': handler,\n 'src_file_id': handler.id})\n dirs.append(trg_dir)\n\n # An array of inputs, for a single key. A directory\n # will be created per array entry.
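The zero-padded per-entry subdirectories described above come down to one width calculation. `array_subdirs` is a hypothetical helper that isolates just the naming scheme used by `add_file_array`:

```python
def array_subdirs(num_files):
    """Zero-padded subdirectory names, wide enough for the largest index."""
    if num_files == 0:
        return []
    num_digits = len(str(num_files - 1))
    return [str(i).zfill(num_digits) for i in range(num_files)]

dirs11 = array_subdirs(11)  # two digits are needed once indices reach 10
```

Padding to the width of the largest index keeps the subdirectories in lexicographic order, so shell globs and `ls` list entries in array order.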
For example, if the input key is\n # FOO, and the inputs are {A, B, C}.vcf then, the directory structure\n # will be:\n # /FOO/00/A.vcf\n # /FOO/01/B.vcf\n # /FOO/02/C.vcf\n def add_file_array(input_name, links):\n num_files = len(links)\n if num_files == 0:\n return\n num_digits = len(str(num_files - 1))\n dirs.append(input_name)\n for i, link in enumerate(links):\n subdir = str(i).zfill(num_digits)\n add_file(input_name, subdir, link)\n\n for input_name, value in list(job_input.items()):\n if isinstance(value, list):\n # This is a file array\n add_file_array(input_name, value)\n else:\n add_file(input_name, None, value)\n\n ## create a dictionary of the all non-file elements\n rest_hash = {key: val for key, val in list(job_input.items()) if key not in files}\n return dirs, files, rest_hash"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_input_spec_patterns():\n ''' Extract the inputSpec patterns, if they exist -- modifed from dx-upload-all-outputs\n\n Returns a dict of all patterns, with keys equal to the respective\n input parameter names.\n '''\n input_spec = None\n if 'DX_JOB_ID' in environ:\n # works in the cloud, not locally\n job_desc = dxpy.describe(dxpy.JOB_ID)\n if job_desc[\"function\"] == \"main\":\n # The input spec does not apply for subjobs\n desc = dxpy.describe(job_desc.get(\"app\", job_desc.get(\"applet\")))\n if \"inputSpec\" in desc:\n input_spec = desc[\"inputSpec\"]\n elif 'DX_TEST_DXAPP_JSON' in environ:\n # works only locally\n path_to_dxapp_json = environ['DX_TEST_DXAPP_JSON']\n with open(path_to_dxapp_json) as fd:\n dxapp_json = json.load(fd)\n input_spec = dxapp_json.get('inputSpec')\n\n # convert to a dictionary. 
Each entry in the input spec\n # has {name, class} attributes.\n if input_spec is None:\n return {}\n\n # For each field name, return its patterns.\n # Make sure a pattern is legal, ignore illegal patterns.\n def is_legal_pattern(pattern):\n return \"*\" in pattern\n patterns_dict = {}\n for spec in input_spec:\n name = spec['name']\n if 'patterns' in spec:\n patterns_dict[name] = []\n for p in spec['patterns']:\n if is_legal_pattern(p):\n patterns_dict[name].append(p)\n return patterns_dict", "response": "Extract the inputSpec patterns if they exist and return a dictionary of all patterns."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef gen_bash_vars(job_input_file, job_homedir=None, check_name_collision=True):\n file_key_descs, rest_hash = analyze_bash_vars(job_input_file, job_homedir)\n\n def string_of_elem(elem):\n result = None\n if isinstance(elem, basestring):\n result = elem\n elif isinstance(elem, dxpy.DXFile):\n result = json.dumps(dxpy.dxlink(elem))\n else:\n result = json.dumps(elem)\n return pipes.quote(result)\n\n def string_of_value(val):\n if isinstance(val, list):\n string = \" \".join([string_of_elem(vitem) for vitem in val])\n return \"( {} )\".format(string)\n else:\n return string_of_elem(val)\n\n var_defs_hash = {}\n\n def gen_text_line_and_name_collision(key, val):\n ''' In the absence of a name collision, create a line describing a bash variable.\n '''\n if check_name_collision:\n if key not in environ and key not in var_defs_hash:\n var_defs_hash[key] = val\n else:\n sys.stderr.write(dxpy.utils.printing.fill(\n \"Creating environment variable ({}) would cause a name collision\".format(key)\n ) + \"\\n\")\n else:\n var_defs_hash[key] = val\n\n # Processing non-file variables before the file variables. 
This prioritizes them,\n # so that in case of name collisions, the file-variables will be dropped.\n for key, desc in list(rest_hash.items()):\n gen_text_line_and_name_collision(key, string_of_value(desc))\n for file_key, desc in list(file_key_descs.items()):\n gen_text_line_and_name_collision(file_key, string_of_value(desc['handler']))\n gen_text_line_and_name_collision(file_key + \"_name\", string_of_value(desc['basename']))\n gen_text_line_and_name_collision(file_key + \"_prefix\", string_of_value(desc['prefix']))\n gen_text_line_and_name_collision(file_key + \"_path\", string_of_value(desc['path']))\n\n return var_defs_hash", "response": "Generates a dict of bash variable definitions from the job input file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwait for a future that completes.", "response": "def wait_for_a_future(futures, print_traceback=False):\n \"\"\"\n Return the next future that completes. If a KeyboardInterrupt is\n received, then the entire process is exited immediately.
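The shell quoting performed by `gen_bash_vars` above — a scalar becomes one quoted token, a list becomes a bash array literal — can be sketched with `shlex.quote`, the Python 3 home of the `pipes.quote` used in the original:

```python
import json
import shlex

def string_of_value(val):
    """Render a value as a bash-safe token; a list becomes a bash array literal."""
    def one(elem):
        # Strings pass through as-is; everything else is JSON-serialized first.
        s = elem if isinstance(elem, str) else json.dumps(elem)
        return shlex.quote(s)
    if isinstance(val, list):
        return "( {} )".format(" ".join(one(v) for v in val))
    return one(val)

scalar = string_of_value("two words")
array = string_of_value(["a", "b c"])
```

`shlex.quote` only adds quotes when a token contains shell-special characters, so simple values stay readable in the generated script.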
See\n wait_for_all_futures for more notes.\n \"\"\"\n while True:\n try:\n future = next(concurrent.futures.as_completed(futures, timeout=THREAD_TIMEOUT_MAX))\n break\n except concurrent.futures.TimeoutError:\n pass\n except KeyboardInterrupt:\n if print_traceback:\n traceback.print_stack()\n else:\n print('')\n os._exit(os.EX_IOERR)\n\n return future"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef wait_for_all_futures(futures, print_traceback=False):\n try:\n while True:\n waited_futures = concurrent.futures.wait(futures, timeout=60)\n if len(waited_futures.not_done) == 0:\n break\n except KeyboardInterrupt:\n if print_traceback:\n traceback.print_stack()\n else:\n print('')\n os._exit(os.EX_IOERR)", "response": "Wait indefinitely for all futures in the input iterable to complete."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef response_iterator(request_iterator, thread_pool, max_active_tasks=None, do_first_task_sequentially=True):\n tasks_in_progress = collections.deque()\n if max_active_tasks is None:\n max_active_tasks = cpu_count()\n\n # The following two functions facilitate GC by not adding extra variables to the enclosing scope.\n def submit_task(task_iterator, executor, futures_queue):\n retval = next(task_iterator, None)\n if retval is None:\n return False\n task_callable, task_args, task_kwargs = retval\n task_future = executor.submit(task_callable, *task_args, **task_kwargs)\n futures_queue.append(task_future)\n return True\n\n def next_result(tasks_in_progress):\n future = tasks_in_progress.popleft()\n try:\n result = future.result(timeout=THREAD_TIMEOUT_MAX)\n except KeyboardInterrupt:\n print('')\n os._exit(os.EX_IOERR)\n return result\n\n if do_first_task_sequentially:\n task_callable, task_args, task_kwargs = next(request_iterator)\n yield task_callable(*task_args, **task_kwargs)\n\n for _i in 
range(max_active_tasks):\n retval = submit_task(request_iterator, thread_pool, tasks_in_progress)\n if not retval:\n break\n\n while len(tasks_in_progress) > 0:\n result = next_result(tasks_in_progress)\n submit_task(request_iterator, thread_pool, tasks_in_progress)\n yield result\n del result", "response": "Returns an iterator that submits tasks from the request iterator to the thread pool and yields their results in order."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef normalize_time_input(t, future=False, default_unit='ms'):\n error_msg = 'Error: Expected an int timestamp, a date format (e.g. YYYY-MM-DD), or an int with a single-letter suffix (s=seconds, m=minutes, h=hours, d=days, w=weeks, M=months, y=years; e.g. \"-10d\" indicates 10 days ago); but got {t}'\n if isinstance(t, basestring) and t.isdigit():\n t = int(t)\n\n if isinstance(t, basestring):\n try:\n t = normalize_timedelta(t)\n except ValueError:\n try:\n t = int(time.mktime(dateutil.parser.parse(t).timetuple())*1000)\n assert t > 0\n except (ValueError, OverflowError, AssertionError):\n raise ValueError(error_msg.format(t=t))\n elif isinstance(t, numbers.Integral):\n units_multipliers = {'ms': 1, 's': 1000}\n if default_unit not in units_multipliers:\n raise ValueError(\"Expected default_unit to be one of 's' or 'ms'\")\n t = t * units_multipliers[default_unit]\n else:\n raise ValueError(error_msg.format(t=t))\n now = int(time.time()*1000)\n if t < 0 or (future and t < now):\n t += now\n return t", "response": "Normalizes a time input to a number of milliseconds since epoch."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef normalize_timedelta(timedelta):\n try:\n return int(timedelta) * 1000\n except ValueError as e:\n t, suffix = timedelta[:-1], timedelta[-1:]\n suffix_multipliers = {'s': 1000, 'm': 1000*60, 'h': 1000*60*60, 'd': 1000*60*60*24, 'w': 1000*60*60*24*7,\n 'M': 1000*60*60*24*30, 'y': 
1000*60*60*24*365}\n if suffix not in suffix_multipliers:\n raise ValueError()\n return int(t) * suffix_multipliers[suffix]", "response": "Normalizes a timedelta string to an integer number of milliseconds."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _dict_raise_on_duplicates(ordered_pairs):\n d = {}\n for k, v in ordered_pairs:\n if k in d:\n raise ValueError(\"duplicate key: %r\" % (k,))\n else:\n d[k] = v\n return d", "response": "Builds a dictionary from ordered_pairs, raising ValueError if any key appears more than once."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_nonce(input_params):\n '''\n Static method to return a copy of the input dictionary with an\n additional unique nonce\n :param input: an input dictionary that may be empty\n :type input: dict\n :returns an extended copy of the input with an additional nonce field\n\n The input dictionary is updated with a nonce only if does not already\n have a non empty nonce\n '''\n input_cp = input_params.copy()\n if len(input_cp.get('nonce', '')) == 0:\n input_cp['nonce'] = str(Nonce())\n return input_cp", "response": "Static method to return a copy of the input dictionary, adding a unique nonce if one is not already present."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef inline_documentation_files(json_spec, src_dir):\n # Inline description from a readme file\n if 'description' not in json_spec:\n readme_filename = None\n for filename in 'README.md', 'Readme.md', 'readme.md':\n if os.path.exists(os.path.join(src_dir, filename)):\n readme_filename = filename\n break\n if readme_filename is not None:\n with open(os.path.join(src_dir, readme_filename)) as fh:\n json_spec['description'] = fh.read()\n\n # Inline developerNotes from Readme.developer.md\n if 'developerNotes' not in json_spec:\n for filename in 'README.developer.md', 'Readme.developer.md', 'readme.developer.md':\n 
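A helper like _dict_raise_on_duplicates above is typically passed to json.loads as object_pairs_hook, which receives the raw key/value pairs (duplicates included) before dict construction collapses them; a self-contained sketch under that assumption, with illustrative names:

```python
import json

def dict_raise_on_duplicates(ordered_pairs):
    # Reject duplicate keys instead of silently keeping the last value.
    d = {}
    for k, v in ordered_pairs:
        if k in d:
            raise ValueError("duplicate key: %r" % (k,))
        d[k] = v
    return d

def json_load_strict(text):
    # object_pairs_hook sees every (key, value) pair in document order,
    # so duplicates are still visible at this point.
    return json.loads(text, object_pairs_hook=dict_raise_on_duplicates)
```

`json_load_strict('{"a": 1, "a": 2}')` raises ValueError, where plain `json.loads` would quietly return `{'a': 2}`.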
if os.path.exists(os.path.join(src_dir, filename)):\n with open(os.path.join(src_dir, filename)) as fh:\n json_spec['developerNotes'] = fh.read()\n break", "response": "Inlines the contents of README.md (as the description) and README.developer.md (as developerNotes) into the json_spec dictionary."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef verify_developer_rights(prefixed_name):\n assert(prefixed_name.startswith('app-') or prefixed_name.startswith('globalworkflow-'))\n\n if prefixed_name.partition('-')[0] == 'app':\n exception_type = dxpy.app_builder.AppBuilderException\n describe_method = dxpy.api.app_describe\n exception_msg = \\\n 'An app with the given name already exists and you are not a developer of that app'\n else:\n exception_type = dxpy.workflow_builder.WorkflowBuilderException\n describe_method = dxpy.api.global_workflow_describe\n exception_msg = \\\n 'A global workflow with the given name already exists and you are not a developer of that workflow'\n\n name_already_exists = True\n is_developer = False\n version = None\n executable_id = None\n FoundExecutable = collections.namedtuple('FoundExecutable', ['name', 'version', 'id'])\n try:\n describe_output = describe_method(prefixed_name,\n input_params={\"fields\": {\"isDeveloperFor\": True,\n \"version\": True,\n \"id\": True}})\n is_developer = describe_output['isDeveloperFor']\n version = describe_output['version']\n executable_id = describe_output['id']\n except dxpy.exceptions.DXAPIError as e:\n if e.name == 'ResourceNotFound':\n name_already_exists = False\n elif e.name == 'PermissionDenied':\n raise exception_type(exception_msg)\n else:\n raise e\n\n if not name_already_exists:\n # This app/workflow doesn't exist yet so its creation will succeed\n # (or at least, not fail on the basis of the ACL).\n return FoundExecutable(name=None, version=None, id=None)\n\n name_without_prefix = prefixed_name.partition('-')[2]\n if not is_developer:\n raise exception_type('You are not a developer for 
{n}'.format(n=name_without_prefix))\n\n return FoundExecutable(name=name_without_prefix, version=version, id=executable_id)", "response": "Verifies that the current user has the given developer rights to the app or global workflow with the given name."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef assert_consistent_regions(from_spec, from_command_line, builder_exception):\n if from_spec is None or from_command_line is None:\n return\n if set(from_spec) != set(from_command_line):\n raise builder_exception(\"--region and the 'regionalOptions' key in the JSON file do not agree\")", "response": "Ensures that the regions passed in from_spec and from_command_line are the same."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef assert_consistent_reg_options(exec_type, json_spec, executable_builder_exeception):\n reg_options_spec = json_spec.get('regionalOptions')\n json_fn = 'dxapp.json' if exec_type == 'app' else 'dxworkflow.json'\n\n if not isinstance(reg_options_spec, dict):\n raise executable_builder_exeception(\"The field 'regionalOptions' in must be a mapping\")\n if not reg_options_spec:\n raise executable_builder_exeception(\n \"The field 'regionalOptions' in \" + json_fn + \" must be a non-empty mapping\")\n regional_options_list = list(reg_options_spec.items())\n for region, opts_for_region in regional_options_list:\n if not isinstance(opts_for_region, dict):\n raise executable_builder_exeception(\"The field 'regionalOptions['\" + region +\n \"']' in \" + json_fn + \" must be a mapping\")\n if set(opts_for_region.keys()) != set(regional_options_list[0][1].keys()):\n if set(opts_for_region.keys()) - set(regional_options_list[0][1].keys()):\n with_key, without_key = region, regional_options_list[0][0]\n key_name = next(iter(set(opts_for_region.keys()) - set(regional_options_list[0][1].keys())))\n else:\n with_key, without_key = 
regional_options_list[0][0], region\n key_name = next(iter(set(regional_options_list[0][1].keys()) - set(opts_for_region.keys())))\n raise executable_builder_exeception(\n \"All regions in regionalOptions must specify the same options; \" +\n \"%s was given for %s but not for %s\" % (key_name, with_key, without_key)\n )\n\n if exec_type == 'app':\n for key in opts_for_region:\n if key in json_spec.get('runSpec', {}):\n raise executable_builder_exeception(\n key + \" cannot be given in both runSpec and in regional options for \" + region)", "response": "Validates the regionalOptions field and verifies all the regions used in the current workflow are in the same options."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_enabled_regions(exec_type, json_spec, from_command_line, executable_builder_exeception):\n\n from_spec = json_spec.get('regionalOptions')\n\n if from_spec is not None:\n assert_consistent_reg_options(exec_type, json_spec, executable_builder_exeception)\n\n assert_consistent_regions(from_spec, from_command_line, executable_builder_exeception)\n\n enabled_regions = None\n if from_spec is not None:\n enabled_regions = from_spec.keys()\n elif from_command_line is not None:\n enabled_regions = from_command_line\n\n return enabled_regions", "response": "Returns a list of regions that are enabled for the global executable."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef describe(self, fields=None, **kwargs):\n describe_input = {}\n if fields is not None:\n describe_input['fields'] = fields\n self._desc = dxpy.api.analysis_describe(self._dxid, describe_input, **kwargs)\n return self._desc", "response": "Returns the description of the current object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_tags(self, tags, **kwargs):\n\n 
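get_enabled_regions above resolves the enabled-region set by giving the spec's regionalOptions precedence over the command-line regions after checking that the two agree; the precedence logic can be sketched independently of dxpy (function name and example region strings are illustrative):

```python
def resolve_enabled_regions(from_spec, from_command_line):
    # Both sources must name the same regions when both are given.
    if from_spec is not None and from_command_line is not None:
        if set(from_spec) != set(from_command_line):
            raise ValueError(
                "--region and the 'regionalOptions' key in the JSON file do not agree")
    if from_spec is not None:
        return sorted(from_spec)          # keys of the regionalOptions mapping
    if from_command_line is not None:
        return sorted(from_command_line)
    return None                           # let the platform pick a default
```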
dxpy.api.analysis_add_tags(self._dxid, {\"tags\": tags}, **kwargs)", "response": "Adds each of the specified tags to the analysis."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving the specified tags from the analysis.", "response": "def remove_tags(self, tags, **kwargs):\n \"\"\"\n :param tags: Tags to remove from the analysis\n :type tags: list of strings\n\n Removes each of the specified tags from the analysis. Takes\n no action for tags that the analysis does not currently have.\n\n \"\"\"\n\n dxpy.api.analysis_remove_tags(self._dxid, {\"tags\": tags}, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the properties of the current analysis for the given key - value pairs.", "response": "def set_properties(self, properties, **kwargs):\n \"\"\"\n :param properties: Property names and values given as key-value pairs of strings\n :type properties: dict\n\n Given key-value pairs in *properties* for property names and\n values, the properties are set on the analysis for the given\n property names. Any property with a value of :const:`None`\n indicates the property will be deleted.\n\n .. note:: Any existing properties not mentioned in *properties*\n are not modified by this method.\n\n \"\"\"\n\n dxpy.api.analysis_set_properties(self._dxid, {\"properties\": properties}, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a dictionary containing a valid reference to an output of this analysis.", "response": "def get_output_ref(self, field, index=None, metadata=None):\n '''\n :param field: Output field name of this analysis\n :type field: string\n :param index: If the referenced field is an array, optionally specify an index (starting from 0) to indicate a particular member of the array\n :type index: int\n :param metadata: If the referenced field is of a data object class, a string indicating the metadata that should be read, e.g. 
\"name\", \"properties.propkey\", \"details.refgenome\"\n :type metadata: string\n\n Returns a dict containing a valid reference to an output of this analysis.\n '''\n\n link = {\"$dnanexus_link\": {\"analysis\": self._dxid, \"field\": field}}\n if index is not None:\n link[\"$dnanexus_link\"][\"index\"] = index\n if metadata is not None:\n link[\"$dnanexus_link\"][\"metadata\"] = metadata\n return link"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking the suggestions guidelines for any data objects that can be found in the specified app. json file and raises an exception if any of the violations are not found.", "response": "def _check_suggestions(app_json, publish=False):\n \"\"\"\n Examines the specified dxapp.json file and warns about any\n violations of suggestions guidelines.\n\n :raises: AppBuilderException for data objects that could not be found\n \"\"\"\n for input_field in app_json.get('inputSpec', []):\n for suggestion in input_field.get('suggestions', []):\n if 'project' in suggestion:\n try:\n project = dxpy.api.project_describe(suggestion['project'], {\"permissions\": True})\n if 'PUBLIC' not in project['permissions'] and publish:\n logger.warn('Project {name} NOT PUBLIC!'.format(name=project['name']))\n except dxpy.exceptions.DXAPIError as e:\n if e.code == 404:\n logger.warn('Suggested project {name} does not exist, or not accessible by user'.format(\n name=suggestion['project']))\n if 'path' in suggestion:\n try:\n check_folder_exists(suggestion['project'], suggestion['path'], '')\n except ResolutionError as e:\n logger.warn('Folder {path} could not be found in project {project}'.format(\n path=suggestion['path'], project=suggestion['project']))\n if '$dnanexus_link' in suggestion:\n if suggestion['$dnanexus_link'].startswith(('file-', 'record-')):\n try:\n dnanexus_link = dxpy.describe(suggestion['$dnanexus_link'])\n except dxpy.exceptions.DXAPIError as e:\n if e.code == 404:\n raise 
dxpy.app_builder.AppBuilderException(\n 'Suggested object {name} could not be found'.format(\n name=suggestion['$dnanexus_link']))\n except Exception as e:\n raise dxpy.app_builder.AppBuilderException(str(e))\n if 'value' in suggestion and isinstance(suggestion[\"value\"], dict):\n if '$dnanexus_link' in suggestion['value']:\n # Check if we have JSON or string\n if isinstance(suggestion['value']['$dnanexus_link'], dict):\n if 'project' in suggestion['value']['$dnanexus_link']:\n try:\n dxpy.api.project_describe(suggestion['value']['$dnanexus_link']['project'])\n except dxpy.exceptions.DXAPIError as e:\n if e.code == 404:\n logger.warn('Suggested project {name} does not exist, or not accessible by user'.format(\n name=suggestion['value']['$dnanexus_link']['project']))\n elif isinstance(suggestion['value']['$dnanexus_link'], basestring):\n if suggestion['value']['$dnanexus_link'].startswith(('file-', 'record-')):\n try:\n dnanexus_link = dxpy.describe(suggestion['value']['$dnanexus_link'])\n except dxpy.exceptions.DXAPIError as e:\n if e.code == 404:\n raise dxpy.app_builder.AppBuilderException(\n 'Suggested object {name} could not be found'.format(\n name=suggestion['value']['$dnanexus_link']))\n except Exception as e:\n raise dxpy.app_builder.AppBuilderException(str(e))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _lint(dxapp_json_filename, mode):\n\n def _find_readme(dirname):\n for basename in ['README.md', 'Readme.md', 'readme.md']:\n if os.path.exists(os.path.join(dirname, basename)):\n return os.path.join(dirname, basename)\n return None\n\n app_spec = json.load(open(dxapp_json_filename))\n\n dirname = os.path.basename(os.path.dirname(os.path.abspath(dxapp_json_filename)))\n\n if mode == \"app\":\n if 'title' not in app_spec:\n logger.warn('app is missing a title, please add one in the \"title\" field of dxapp.json')\n\n if 'summary' in app_spec:\n if app_spec['summary'].endswith('.'):\n logger.warn('summary 
\"%s\" should be a short phrase not ending in a period' % (app_spec['summary'],))\n else:\n logger.warn('app is missing a summary, please add one in the \"summary\" field of dxapp.json')\n\n readme_filename = _find_readme(os.path.dirname(dxapp_json_filename))\n if 'description' in app_spec:\n if readme_filename:\n raise dxpy.app_builder.AppBuilderException('Description was provided both in Readme.md '\n 'and in the \"description\" field of {file}. Please consolidate content in Readme.md '\n 'and remove the \"description\" field.'.format(file=dxapp_json_filename))\n if not app_spec['description'].strip().endswith('.'):\n logger.warn('\"description\" field should be written in complete sentences and end with a period')\n else:\n if readme_filename is None:\n logger.warn(\"app is missing a description, please supply one in README.md\")\n if 'categories' in app_spec:\n for category in app_spec['categories']:\n if category not in APP_CATEGORIES:\n logger.warn('app has unrecognized category \"%s\"' % (category,))\n if category == 'Import':\n if 'title' in app_spec and not app_spec['title'].endswith('Importer'):\n logger.warn('title \"%s\" should end in \"Importer\"' % (app_spec['title'],))\n if category == 'Export':\n if 'title' in app_spec and not app_spec['title'].endswith('Exporter'):\n logger.warn('title \"%s\" should end in \"Exporter\"' % (app_spec['title'],))\n\n if 'name' in app_spec:\n if app_spec['name'] != app_spec['name'].lower():\n logger.warn('name \"%s\" should be all lowercase' % (app_spec['name'],))\n if dirname != app_spec['name']:\n logger.warn('app name \"%s\" does not match containing directory \"%s\"' % (app_spec['name'], dirname))\n else:\n logger.warn('app is missing a name, please add one in the \"name\" field of dxapp.json')\n\n if 'version' in app_spec:\n if not dxpy.executable_builder.GLOBAL_EXEC_VERSION_RE.match(app_spec['version']):\n logger.warn('\"version\" %s should be semver compliant (e.g. 
of the form X.Y.Z)' % (app_spec['version'],))\n\n # Note that identical checks are performed on the server side (and\n # will cause the app build to fail), but the checks here are printed\n # sooner and multiple naming problems can be detected in a single\n # pass.\n if 'inputSpec' in app_spec:\n for i, input_field in enumerate(app_spec['inputSpec']):\n if not re.match(\"^[a-zA-Z_][0-9a-zA-Z_]*$\", input_field['name']):\n logger.error('input %d has illegal name \"%s\" (must match ^[a-zA-Z_][0-9a-zA-Z_]*$)' % (i, input_field['name']))\n else:\n logger.warn(\"dxapp.json contains no input specification (inputSpec). Your applet will not be usable as an \" +\n \"app, runnable from the GUI, or composable using workflows.\")\n if 'outputSpec' in app_spec:\n for i, output_field in enumerate(app_spec['outputSpec']):\n if not re.match(\"^[a-zA-Z_][0-9a-zA-Z_]*$\", output_field['name']):\n logger.error('output %d has illegal name \"%s\" (must match ^[a-zA-Z_][0-9a-zA-Z_]*$)' % (i, output_field['name']))\n else:\n logger.warn(\"dxapp.json contains no output specification (outputSpec). Your applet will not be usable as an \" +\n \"app, runnable from the GUI, or composable using workflows.\")", "response": "Check the dxapp. 
json file for common problems, logging a warning or error for each one found."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck that the code in CODE parses as LANG.", "response": "def _check_syntax(code, lang, temp_dir, enforce=True):\n \"\"\"\n Checks that the code whose text is in CODE parses as LANG.\n\n Raises DXSyntaxError if there is a problem and \"enforce\" is True.\n \"\"\"\n # This function needs the language to be explicitly set, so we can\n # generate an appropriate temp filename.\n if lang == 'python2.7':\n temp_basename = 'inlined_code_from_dxapp_json.py'\n elif lang == 'bash':\n temp_basename = 'inlined_code_from_dxapp_json.sh'\n else:\n raise ValueError('lang must be one of \"python2.7\" or \"bash\"')\n # Dump the contents out to a temporary file, then call _check_file_syntax.\n with open(os.path.join(temp_dir, temp_basename), 'w') as ofile:\n ofile.write(code)\n _check_file_syntax(os.path.join(temp_dir, temp_basename), temp_dir, override_lang=lang, enforce=enforce)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking that the code in FILENAME parses, autodetecting the language if necessary.", "response": "def _check_file_syntax(filename, temp_dir, override_lang=None, enforce=True):\n \"\"\"\n Checks that the code in FILENAME parses, attempting to autodetect\n the language if necessary.\n\n Raises IOError if the file cannot be read.\n\n Raises DXSyntaxError if there is a problem and \"enforce\" is True.\n \"\"\"\n def check_python(filename):\n # Generate a semi-recognizable name to write the pyc to. 
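The Python branch of the syntax check above writes the source to a temporary file and compiles it with py_compile, where doraise=True turns a syntax error into a PyCompileError; a standalone, simplified (Python 3 only) version of the same idea:

```python
import os
import py_compile
import tempfile

def python_syntax_ok(source):
    # Compile to a throwaway .pyc; doraise=True raises
    # py_compile.PyCompileError on bad syntax instead of printing.
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "snippet.py")
        with open(path, "w") as fh:
            fh.write(source)
        try:
            py_compile.compile(path, cfile=path + "c", doraise=True)
            return True
        except py_compile.PyCompileError:
            return False
```

The temporary directory plays the role of temp_dir in the excerpt, keeping the generated .pyc out of the source tree.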
Of\n # course it's possible that different files being scanned could\n # have the same basename, so this path won't be unique, but the\n # checks don't run concurrently so this shouldn't cause any\n # problems.\n pyc_path = os.path.join(temp_dir, os.path.basename(filename) + \".pyc\")\n try:\n if USING_PYTHON2:\n filename = filename.encode(sys.getfilesystemencoding())\n py_compile.compile(filename, cfile=pyc_path, doraise=True)\n finally:\n try:\n os.unlink(pyc_path)\n except OSError:\n pass\n def check_bash(filename):\n if platform.system() == 'Windows':\n logging.warn(\n 'Skipping bash syntax check due to unavailability of bash on Windows.')\n else:\n subprocess.check_output([\"/bin/bash\", \"-n\", filename], stderr=subprocess.STDOUT)\n\n if override_lang == 'python2.7':\n checker_fn = check_python\n elif override_lang == 'bash':\n checker_fn = check_bash\n elif filename.endswith('.py'):\n checker_fn = check_python\n elif filename.endswith('.sh'):\n checker_fn = check_bash\n else:\n # Ignore other kinds of files.\n return\n\n # Do a test read of the file to catch errors like the file not\n # existing or not being readable.\n open(filename)\n\n try:\n checker_fn(filename)\n except subprocess.CalledProcessError as e:\n print(filename + \" has a syntax error! Interpreter output:\", file=sys.stderr)\n for line in e.output.strip(\"\\n\").split(\"\\n\"):\n print(\" \" + line.rstrip(\"\\n\"), file=sys.stderr)\n if enforce:\n raise DXSyntaxError(filename + \" has a syntax error\")\n except py_compile.PyCompileError as e:\n print(filename + \" has a syntax error! Interpreter output:\", file=sys.stderr)\n print(\" \" + e.msg.strip(), file=sys.stderr)\n if enforce:\n raise DXSyntaxError(e.msg.strip())"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks that the dxapp. 
json file exists and parses as the type it is going to be parsed.", "response": "def _verify_app_source_dir_impl(src_dir, temp_dir, mode, enforce=True):\n \"\"\"Performs syntax and lint checks on the app source.\n\n Precondition: the dxapp.json file exists and can be parsed.\n \"\"\"\n _lint(os.path.join(src_dir, \"dxapp.json\"), mode)\n\n # Check that the entry point file parses as the type it is going to\n # be interpreted as. The extension is irrelevant.\n manifest = json.load(open(os.path.join(src_dir, \"dxapp.json\")))\n if \"runSpec\" in manifest:\n if \"interpreter\" not in manifest['runSpec']:\n raise dxpy.app_builder.AppBuilderException('runSpec.interpreter field was not present')\n\n if \"release\" not in manifest['runSpec'] or \"distribution\" not in manifest['runSpec']:\n warn_message = 'runSpec.distribution or runSpec.release was not present. These fields '\n warn_message += 'will be required in a future version of the API. Recommended value '\n warn_message += 'for distribution is \\\"Ubuntu\\\" and release - \\\"14.04\\\".'\n logger.warn(warn_message)\n\n if manifest['runSpec']['interpreter'] in [\"python2.7\", \"bash\"]:\n if \"file\" in manifest['runSpec']:\n entry_point_file = os.path.abspath(os.path.join(src_dir, manifest['runSpec']['file']))\n try:\n _check_file_syntax(entry_point_file, temp_dir, override_lang=manifest['runSpec']['interpreter'], enforce=enforce)\n except IOError as e:\n raise dxpy.app_builder.AppBuilderException(\n 'Could not open runSpec.file=%r. The problem was: %s' % (entry_point_file, e))\n except DXSyntaxError:\n raise dxpy.app_builder.AppBuilderException('Entry point file %s has syntax errors, see above for details. Rerun with --no-check-syntax to proceed anyway.' 
% (entry_point_file,))\n elif \"code\" in manifest['runSpec']:\n try:\n _check_syntax(manifest['runSpec']['code'], manifest['runSpec']['interpreter'], temp_dir, enforce=enforce)\n except DXSyntaxError:\n raise dxpy.app_builder.AppBuilderException('Code in runSpec.code has syntax errors, see above for details. Rerun with --no-check-syntax to proceed anyway.')\n\n cluster_bootstrap_scripts = _get_all_cluster_bootstrap_script_names(manifest)\n for filename in cluster_bootstrap_scripts:\n abs_filename = os.path.abspath(os.path.join(src_dir, filename))\n try:\n _check_file_syntax(abs_filename,\n temp_dir,\n override_lang=manifest['runSpec']['interpreter'],\n enforce=enforce)\n except IOError as e:\n raise dxpy.app_builder.AppBuilderException(\n 'Could not open cluster bootstrap script %r. The problem was: %s' % (abs_filename, e))\n except DXSyntaxError:\n raise dxpy.app_builder.AppBuilderException('Code in cluster bootstrapScript %r has syntax errors, see above for details. Rerun with --no-check-syntax to proceed anyway.' % filename)\n\n if 'execDepends' in manifest['runSpec']:\n if not isinstance(manifest['runSpec']['execDepends'], list):\n raise dxpy.app_builder.AppBuilderException('Expected runSpec.execDepends to be an array. Rerun with --no-check-syntax to proceed anyway.')\n if not all(isinstance(dep, dict) for dep in manifest['runSpec']['execDepends']):\n raise dxpy.app_builder.AppBuilderException('Expected runSpec.execDepends to be an array of hashes. 
Rerun with --no-check-syntax to proceed anyway.')\n if any(dep.get('package_manager', 'apt') != 'apt' for dep in manifest['runSpec']['execDepends']):\n if not isinstance(manifest.get('access'), dict) or 'network' not in manifest['access']:\n msg = '\\n'.join(['runSpec.execDepends specifies non-APT dependencies, but no network access spec is given.',\n 'Add {\"access\": {\"network\": [\"*\"]}} to allow dependencies to install.',\n 'See https://wiki.dnanexus.com/Developer-Tutorials/Request-Additional-App-Resources#Network-Access.',\n 'Rerun with --no-check-syntax to proceed anyway.'])\n raise dxpy.app_builder.AppBuilderException(msg)\n\n if 'authorizedUsers' in manifest:\n if not isinstance(manifest['authorizedUsers'], list) or isinstance(manifest['authorizedUsers'], basestring):\n raise dxpy.app_builder.AppBuilderException('Expected authorizedUsers to be a list of strings')\n for thing in manifest['authorizedUsers']:\n if thing != 'PUBLIC' and (not isinstance(thing, basestring) or not re.match(\"^(org-|user-)\", thing)):\n raise dxpy.app_builder.AppBuilderException('authorizedUsers field contains an entry which is not either the string \"PUBLIC\" or a user or org ID')\n\n if \"pricingPolicy\" in manifest:\n error_message = \"\\\"pricingPolicy\\\" at the top level is not accepted. It must be specified \"\n error_message += \"under the \\\"regionalOptions\\\" field in all enabled regions of the app\"\n raise dxpy.app_builder.AppBuilderException(error_message)\n\n # Check all other files that are going to be in the resources tree.\n # For these we detect the language based on the filename extension.\n # Obviously this check can have false positives, since the app can\n # execute (or not execute!) all these files in whatever way it\n # wishes, e.g. 
it could use Python != 2.7 or some non-bash shell.\n # Consequently errors here are non-fatal.\n files_with_problems = []\n for dirpath, dirnames, filenames in os.walk(os.path.abspath(os.path.join(src_dir, \"resources\"))):\n for filename in filenames:\n # On Mac OS, the resource fork for \"FILE.EXT\" gets tarred up\n # as a file named \"._FILE.EXT\". To a naive check this\n # appears to be a file of the same extension. Therefore, we\n # exclude these from syntax checking since they are likely\n # to not parse as whatever language they appear to be.\n if not filename.startswith(\"._\"):\n try:\n _check_file_syntax(os.path.join(dirpath, filename), temp_dir, enforce=True)\n except IOError as e:\n raise dxpy.app_builder.AppBuilderException(\n 'Could not open file in resources directory %r. The problem was: %s' %\n (os.path.join(dirpath, filename), e)\n )\n except DXSyntaxError:\n # Suppresses errors from _check_file_syntax so we\n # only print a nice error message\n files_with_problems.append(os.path.join(dirpath, filename))\n\n if files_with_problems:\n # Make a message of the form:\n # \"/path/to/my/app.py\"\n # OR \"/path/to/my/app.py and 3 other files\"\n files_str = files_with_problems[0] if len(files_with_problems) == 1 else (files_with_problems[0] + \" and \" + str(len(files_with_problems) - 1) + \" other file\" + (\"s\" if len(files_with_problems) > 2 else \"\"))\n logging.warn('%s contained syntax errors, see above for details' % (files_str,))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _verify_app_source_dir(src_dir, mode, enforce=True):\n temp_dir = tempfile.mkdtemp(prefix='dx-build_tmp')\n try:\n _verify_app_source_dir_impl(src_dir, temp_dir, mode, enforce=enforce)\n finally:\n shutil.rmtree(temp_dir)", "response": "Performs syntax and lint checks on the app source directory."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses the contents of dxapp. 
json and returns a dict of the parsed contents. Raises AppBuilderException if this cannot be done.", "response": "def _parse_app_spec(src_dir):\n \"\"\"Returns the parsed contents of dxapp.json.\n\n Raises either AppBuilderException or a parser error (exit codes 3 or\n 2 respectively) if this cannot be done.\n \"\"\"\n if not os.path.isdir(src_dir):\n parser.error(\"%s is not a directory\" % src_dir)\n if not os.path.exists(os.path.join(src_dir, \"dxapp.json\")):\n raise dxpy.app_builder.AppBuilderException(\"Directory %s does not contain dxapp.json: not a valid DNAnexus app source directory\" % src_dir)\n with open(os.path.join(src_dir, \"dxapp.json\")) as app_desc:\n try:\n return json_load_raise_on_duplicates(app_desc)\n except Exception as e:\n raise dxpy.app_builder.AppBuilderException(\"Could not parse dxapp.json file as JSON: \" + str(e.args))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _build_app(args, extra_args):\n\n if not args.remote:\n # LOCAL BUILD\n\n try:\n output = build_and_upload_locally(\n args.src_dir,\n args.mode,\n overwrite=args.overwrite,\n archive=args.archive,\n publish=args.publish,\n destination_override=args.destination,\n version_override=args.version_override,\n bill_to_override=args.bill_to,\n use_temp_build_project=args.use_temp_build_project,\n do_parallel_build=args.parallel_build,\n do_version_autonumbering=args.version_autonumbering,\n do_try_update=args.update,\n dx_toolkit_autodep=args.dx_toolkit_autodep,\n do_check_syntax=args.check_syntax,\n ensure_upload=args.ensure_upload,\n force_symlinks=args.force_symlinks,\n dry_run=args.dry_run,\n confirm=args.confirm,\n return_object_dump=args.json,\n region=args.region,\n **extra_args\n )\n\n if output is not None and args.run is None:\n print(json.dumps(output))\n except dxpy.app_builder.AppBuilderException as e:\n # AppBuilderException represents errors during app or applet building\n # that could reasonably have 
been anticipated by the user.\n print(\"Error: %s\" % (e.args,), file=sys.stderr)\n sys.exit(3)\n except dxpy.exceptions.DXAPIError as e:\n print(\"Error: %s\" % (e,), file=sys.stderr)\n sys.exit(3)\n\n if args.dry_run:\n return None\n\n return output['id']\n\n else:\n # REMOTE BUILD\n\n try:\n app_json = _parse_app_spec(args.src_dir)\n _check_suggestions(app_json, publish=args.publish)\n _verify_app_source_dir(args.src_dir, args.mode)\n if args.mode == \"app\" and not args.dry_run:\n dxpy.executable_builder.verify_developer_rights('app-' + app_json['name'])\n except dxpy.app_builder.AppBuilderException as e:\n print(\"Error: %s\" % (e.args,), file=sys.stderr)\n sys.exit(3)\n\n # The following flags might be useful in conjunction with\n # --remote. To enable these, we need to learn how to pass these\n # options through to the interior call of dx_build_app(let).\n if args.dry_run:\n parser.error('--remote cannot be combined with --dry-run')\n if args.overwrite:\n parser.error('--remote cannot be combined with --overwrite/-f')\n if args.archive:\n parser.error('--remote cannot be combined with --archive/-a')\n\n # The following flags are probably not useful in conjunction\n # with --remote.\n if args.json:\n parser.error('--remote cannot be combined with --json')\n if not args.use_temp_build_project:\n parser.error('--remote cannot be combined with --no-temp-build-project')\n\n if isinstance(args.region, list) and len(args.region) > 1:\n parser.error('--region can only be specified once for remote builds')\n region = args.region[0] if args.region is not None else None\n\n more_kwargs = {}\n if args.version_override:\n more_kwargs['version_override'] = args.version_override\n if args.bill_to:\n more_kwargs['bill_to_override'] = args.bill_to\n if not args.version_autonumbering:\n more_kwargs['do_version_autonumbering'] = False\n if not args.update:\n more_kwargs['do_try_update'] = False\n if not args.parallel_build:\n more_kwargs['do_parallel_build'] = False\n if not 
args.check_syntax:\n more_kwargs['do_check_syntax'] = False\n\n return _build_app_remote(args.mode, args.src_dir, destination_override=args.destination,\n publish=args.publish, dx_toolkit_autodep=args.dx_toolkit_autodep,\n region=region, watch=args.watch, **more_kwargs)", "response": "Builds an app or applet and returns the resulting executable ID\n ."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_id(self, dxid=None, name=None, alias=None):\n '''\n :param dxid: App ID\n :type dxid: string\n :param name: App name\n :type name: string\n :param alias: App version or tag\n :type alias: string\n :raises: :exc:`~dxpy.exceptions.DXError` if *dxid* and some other input are both given or if neither *dxid* nor *name* are given\n\n Discards the currently stored ID and associates the handler\n with the requested parameters. Note that if *dxid* is given,\n the other fields should not be given, and if *name* is given,\n *alias* has default value \"default\".\n\n '''\n self._dxid = None\n self._name = None\n self._alias = None\n if dxid is not None:\n if name is not None or alias is not None:\n raise DXError(\"Did not expect name or alias to be given if dxid is given\")\n verify_string_dxid(dxid, self._class)\n self._dxid = dxid\n elif name is not None:\n self._name = name\n if not isinstance(name, basestring):\n raise DXError(\"App name needs to be a string: %r\" % (name,))\n if alias is not None:\n if not isinstance(alias, basestring):\n raise DXError(\"App alias needs to be a string: %r\" % (alias,))\n self._alias = alias\n else:\n self._alias = 'default'", "response": "Sets the ID of the currently stored handler."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the object ID of the currently associated app.", "response": "def get_id(self):\n '''\n :returns: Object ID of associated app\n :rtype: string\n\n Returns the object ID of the app that the handler 
is currently\n associated with.\n '''\n if self._dxid is not None:\n return self._dxid\n else:\n return 'app-' + self._name + '/' + self._alias"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef new(self, **kwargs):\n '''\n :param initializeFrom: ID of an existing app object from which to initialize the app\n :type initializeFrom: string\n :param applet: ID of the applet that the app will be created from\n :type applet: string\n :param name: Name of the app (inherits from *initializeFrom* if possible)\n :type name: string\n :param title: Title or brand name of the app (optional)\n :type title: string\n :param summary: A short description of the app (optional)\n :type summary: string\n :param description: An extended description of the app (optional)\n :type description: string\n :param details: Arbitrary JSON to be associated with the app (optional)\n :type details: dict or list\n :param version: Version number\n :type version: string\n :param bill_to: ID of the user or organization who will own the app and be billed for its space usage (optional if an app with this name already exists)\n :type bill_to: string\n :param access: Access specification (optional)\n :type access: dict\n :param resources: Specifies what is to be put into the app's resources container. Must be a string containing a project ID, or a list containing object IDs. (optional)\n :type resources: string or list\n\n .. note:: It is highly recommended that the higher-level module\n :mod:`dxpy.app_builder` or (preferably) its frontend `dx build --create-app\n `_\n be used instead for app creation.\n\n Creates an app with the given parameters by using the specified\n applet or app as a base and overriding its attributes. 
See the\n API documentation for the `/app/new\n `_\n method for more info.\n\n Exactly one of *initializeFrom* and *applet* must be provided.\n\n The app is only available to its developers until\n :meth:`publish()` is called, and is not run until :meth:`run()`\n is called.\n\n '''\n #TODO: add support for regionalOptions (and deprecate top-level applet and resources)\n dx_hash = {}\n if 'applet' not in kwargs and 'initializeFrom' not in kwargs:\n raise DXError(\"%s: One of the keyword arguments %s and %s is required\" % (self.__class__.__name__, 'applet', 'initializeFrom'))\n\n for field in ['version']:\n if field not in kwargs:\n raise DXError(\"%s: Keyword argument %s is required\" % (self.__class__.__name__, field))\n dx_hash[field] = kwargs[field]\n del kwargs[field]\n\n for field in 'initializeFrom', 'applet', 'name', 'title', 'summary', 'description', 'billing', 'access', 'resources':\n if field in kwargs:\n dx_hash[field] = kwargs[field]\n del kwargs[field]\n\n if \"bill_to\" in kwargs:\n dx_hash['billTo'] = kwargs['bill_to']\n del kwargs[\"bill_to\"]\n\n resp = dxpy.api.app_new(dx_hash, **kwargs)\n self.set_id(dxid=resp[\"id\"])", "response": "Creates a new app with the given parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a dict with the description of the app.", "response": "def describe(self, fields=None, **kwargs):\n '''\n :param fields: Hash where the keys are field names that should be returned, and values should be set to True (default is that all fields are returned)\n :type fields: dict\n :returns: Description of the remote app object\n :rtype: dict\n\n Returns a dict with a description of the app. 
The result\n includes the key-value pairs as specified in the API\n documentation for the `/app-xxxx/describe\n `_\n method.\n\n '''\n describe_input = {}\n if fields:\n describe_input['fields'] = fields\n if self._dxid is not None:\n self._desc = dxpy.api.app_describe(self._dxid, input_params=describe_input, **kwargs)\n else:\n self._desc = dxpy.api.app_describe('app-' + self._name, alias=self._alias,\n input_params=describe_input, **kwargs)\n\n return self._desc"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate an existing app's applet, details, access, and resources.", "response": "def update(self, **kwargs):\n '''\n :param applet: ID of the applet to replace the app's contents with\n :type applet: string\n :param details: Metadata to store with the app (optional)\n :type details: dict or list\n :param access: Access specification (optional)\n :type access: dict\n :param resources: Specifies what is to be put into the app's resources container. Must be a string containing a project ID, or a list containing object IDs. (optional)\n :type resources: string or list\n\n Updates the parameters of an existing app.
See the API\n documentation for the `/app/update\n `_\n method for more info.\n\n The current user must be a developer of the app.\n\n '''\n updates = {}\n for field in 'applet', 'billing', 'access', 'resources', 'details':\n if field in kwargs:\n updates[field] = kwargs[field]\n del kwargs[field]\n\n if self._dxid is not None:\n resp = dxpy.api.app_update(self._dxid, input_params=updates, **kwargs)\n else:\n resp = dxpy.api.app_update('app-' + self._name, alias=self._alias,\n input_params=updates, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_tags(self, tags, **kwargs):\n if self._dxid is not None:\n return dxpy.api.app_add_tags(self._dxid, input_params={\"tags\": tags}, **kwargs)\n else:\n return dxpy.api.app_add_tags('app-' + self._name, alias=self._alias,\n input_params={\"tags\": tags}, **kwargs)", "response": "Adds the specified tags to the app."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves the specified tags from the app.", "response": "def remove_tags(self, tags, **kwargs):\n \"\"\"\n :param tags: Tags to remove from the app\n :type tags: array\n\n Removes the specified application name tags (aliases) from this\n app, so that it is no longer addressable by those aliases.\n\n The current user must be a developer of the app.\n\n \"\"\"\n if self._dxid is not None:\n return dxpy.api.app_remove_tags(self._dxid, input_params={\"tags\": tags}, **kwargs)\n else:\n return dxpy.api.app_remove_tags('app-' + self._name, alias=self._alias,\n input_params={\"tags\": tags}, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninstall the app in the current user s account.", "response": "def install(self, **kwargs):\n \"\"\"\n Installs the app in the current user's account.\n \"\"\"\n if self._dxid is not None:\n return dxpy.api.app_install(self._dxid, **kwargs)\n else:\n return dxpy.api.app_install('app-' + 
self._name, alias=self._alias, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get(self, **kwargs):\n if self._dxid is not None:\n return dxpy.api.app_get(self._dxid, **kwargs)\n else:\n return dxpy.api.app_get('app-' + self._name, alias=self._alias, **kwargs)", "response": "Returns the full specification of the remote app object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete(self, **kwargs):\n if self._dxid is not None:\n return dxpy.api.app_delete(self._dxid, **kwargs)\n else:\n return dxpy.api.app_delete('app-' + self._name, alias=self._alias, **kwargs)", "response": "Removes this app object from the platform."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new job that executes the function main of this app with the given input *app_input*.", "response": "def run(self, app_input, *args, **kwargs):\n \"\"\"\n Creates a new job that executes the function \"main\" of this app with\n the given input *app_input*.\n\n See :meth:`dxpy.bindings.dxapplet.DXExecutable.run` for the available\n args.\n \"\"\"\n # Rename app_input to preserve API compatibility when calling\n # DXApp.run(app_input=...).\n return super(DXApp, self).run(app_input, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses a dxworkflow. 
json file and returns its parsed contents as a dictionary.", "response": "def _parse_executable_spec(src_dir, json_file_name, parser):\n \"\"\"\n Returns the parsed contents of a json specification.\n Raises WorkflowBuilderException (exit code 3) if this cannot be done.\n \"\"\"\n if not os.path.isdir(src_dir):\n parser.error(\"{} is not a directory\".format(src_dir))\n\n if not os.path.exists(os.path.join(src_dir, json_file_name)):\n raise WorkflowBuilderException(\n \"Directory {} does not contain dxworkflow.json: not a valid DNAnexus workflow source directory\"\n .format(src_dir))\n\n with open(os.path.join(src_dir, json_file_name)) as desc:\n try:\n return json_load_raise_on_duplicates(desc)\n except Exception as e:\n raise WorkflowBuilderException(\"Could not parse {} file as JSON: {}\".format(json_file_name, e.args))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_destination_project(json_spec, args, build_project_id=None):\n if build_project_id:\n return build_project_id\n if 'project' in json_spec:\n return json_spec['project']\n if dxpy.WORKSPACE_ID:\n return dxpy.WORKSPACE_ID\n error_msg = \"Can't create a workflow without specifying a destination project; \"\n error_msg += \"please use the -d/--destination flag to explicitly specify a project\"\n raise WorkflowBuilderException(error_msg)", "response": "Returns the project in which the workflow should be created."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_destination_folder(json_spec, folder_name=None):\n dest_folder = folder_name or json_spec.get('folder') or '/'\n if not dest_folder.endswith('/'):\n dest_folder = dest_folder + '/'\n return dest_folder", "response": "Returns the destination folder path (with a trailing slash) where the workflow should be created."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset the categories on the given global workflow.",
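The `_get_destination_folder` helper above resolves the destination folder from an explicit override, the dxworkflow.json `folder` field, or the project root, and normalizes it with a trailing slash. A standalone sketch of that precedence chain (the function name here is illustrative, not part of dxpy's public API):

```python
def get_destination_folder(json_spec, folder_name=None):
    """Mirror of _get_destination_folder: CLI override wins, then the
    'folder' field from dxworkflow.json, then the project root '/'."""
    dest_folder = folder_name or json_spec.get('folder') or '/'
    # Normalize so downstream path joins can always append a name directly.
    if not dest_folder.endswith('/'):
        dest_folder += '/'
    return dest_folder

print(get_destination_folder({'folder': '/analyses/v1'}))  # /analyses/v1/
print(get_destination_folder({}, folder_name='/runs'))     # /runs/
print(get_destination_folder({}))                          # /
```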
"response": "def _set_categories_on_workflow(global_workflow_id, categories_to_set):\n \"\"\"\n Note: Categories are set on the workflow series level,\n i.e. the same set applies to all versions.\n \"\"\"\n assert(isinstance(categories_to_set, list))\n\n existing_categories = dxpy.api.global_workflow_list_categories(global_workflow_id)['categories']\n categories_to_add = set(categories_to_set).difference(set(existing_categories))\n categories_to_remove = set(existing_categories).difference(set(categories_to_set))\n if categories_to_add:\n dxpy.api.global_workflow_add_categories(global_workflow_id,\n input_params={'categories': list(categories_to_add)})\n if categories_to_remove:\n dxpy.api.global_workflow_remove_categories(global_workflow_id,\n input_params={'categories': list(categories_to_remove)})"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns True if a global workflow with the given name and version already exists in the platform.", "response": "def _version_exists(json_spec, name=None, version=None):\n \"\"\"\n Returns True if a global workflow with the given name and version\n already exists in the platform and the user has developer rights\n to the workflow. 
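The category-sync logic in `_set_categories_on_workflow` above reduces to two set differences: desired-minus-existing is added, existing-minus-desired is removed. A minimal sketch of just that arithmetic (the helper name is hypothetical; the real function then issues the add/remove API calls):

```python
def diff_categories(existing, desired):
    """Compute the category changes needed to make `existing` equal `desired`,
    using the same set arithmetic as _set_categories_on_workflow."""
    to_add = set(desired) - set(existing)      # requested but not yet set
    to_remove = set(existing) - set(desired)   # set but no longer requested
    return sorted(to_add), sorted(to_remove)   # sorted for stable output

print(diff_categories(['Annotation', 'Debugging'], ['Annotation', 'Import']))
# (['Import'], ['Debugging'])
```

Categories already present in both lists fall out of both differences, so no redundant API call is made for them.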
\"name\" and \"version\" can be passed if we already\n made a \"describe\" API call on the global workflow and so know the\n requested name and version already exists.\n \"\"\"\n requested_name = json_spec['name']\n requested_version = json_spec['version']\n\n if requested_name == name and requested_version == version:\n return True\n else:\n try:\n desc_output = dxpy.api.global_workflow_describe('globalworkflow-' + json_spec['name'],\n alias=json_spec['version'],\n input_params={\"fields\": {\"name\": True,\n \"version\": True}})\n return desc_output['name'] == json_spec['name'] and desc_output['version'] == json_spec['version']\n except dxpy.exceptions.DXAPIError:\n return False\n except:\n raise"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validate_ignore_reuse(stages, ignore_reuse_stages):\n if not isinstance(ignore_reuse_stages, list):\n raise WorkflowBuilderException('ignoreReuse must be a list of strings - stage IDs or \"*\"')\n\n ignore_reuse_set = set(ignore_reuse_stages)\n if '*' in ignore_reuse_set and len(ignore_reuse_set) == 1:\n return\n\n stage_ids = set([stage.get('id') for stage in stages])\n for ignored in ignore_reuse_set:\n if ignored not in stage_ids:\n raise WorkflowBuilderException(\n 'Stage with ID {} not found.
Add a matching \"id\" for the stage you wish to set ignoreReuse for'.format(ignored))", "response": "Validates that every entry in ignoreReuse is either the wildcard \"*\" or the ID of one of the given stages."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_validated_stages(stages):\n if not isinstance(stages, list):\n raise WorkflowBuilderException(\"Stages must be specified as a list of dictionaries\")\n validated_stages = []\n for index, stage in enumerate(stages):\n validated_stages.append(_get_validated_stage(stage, index))\n return validated_stages", "response": "Validates the given stages and returns a list of dictionaries."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _validate_json_for_regular_workflow(json_spec, args):\n validated = {}\n override_project_id, override_folder, override_workflow_name = \\\n dxpy.executable_builder.get_parsed_destination(args.destination)\n validated['project'] = _get_destination_project(json_spec, args, override_project_id)\n validated['folder'] = _get_destination_folder(json_spec, override_folder)\n\n workflow_name = _get_workflow_name(json_spec, override_workflow_name)\n if not workflow_name:\n print('Warning: workflow name is not specified')\n else:\n validated['name'] = workflow_name\n return validated", "response": "Validates the JSON spec for a regular project-based workflow."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nvalidate the JSON spec for a global workflow.", "response": "def _validate_json_for_global_workflow(json_spec, args):\n \"\"\"\n Validates fields used for building a global workflow.\n Since building a global workflow is done after all the underlying workflows\n are built, which may be time-consuming, we validate as much as possible here.\n \"\"\"\n # TODO: verify the billTo can build the workflow\n # TODO: if the global workflow
build fails add an option to interactively change billto\n # TODO: (or other simple fields) instead of failing altogether\n # TODO: get a confirmation before building a workflow that may be costly\n if 'name' not in json_spec:\n raise WorkflowBuilderException(\n \"dxworkflow.json contains no 'name' field, but it is required to build a global workflow\")\n if not dxpy.executable_builder.GLOBAL_EXEC_NAME_RE.match(json_spec['name']):\n raise WorkflowBuilderException(\n \"The name of your workflow must match /^[a-zA-Z0-9._-]+$/\")\n if json_spec['name'] != json_spec['name'].lower():\n logger.warn('workflow name \"{}\" should be all lowercase'.format(json_spec['name']))\n\n if 'version' not in json_spec:\n raise WorkflowBuilderException(\n \"dxworkflow.json contains no 'version' field, but it is required to build a global workflow\")\n if not dxpy.executable_builder.GLOBAL_EXEC_VERSION_RE.match(json_spec['version']):\n logger.warn('\"version\" {} should be semver compliant (e.g. of the form X.Y.Z)'.format(json_spec['version']))\n\n if 'details' in json_spec:\n if not isinstance(json_spec['details'], dict):\n raise WorkflowBuilderException(\n 'The field \"details\" must be a dictionary')\n\n if 'regionalOptions' in json_spec:\n if not (isinstance(json_spec['regionalOptions'], dict)\n and json_spec['regionalOptions']\n and all([isinstance(i, dict) for i in json_spec['regionalOptions'].values()])):\n raise WorkflowBuilderException(\n 'The field \"regionalOptions\" must be a non-empty dictionary whose values are dictionaries')\n\n if args.bill_to:\n json_spec[\"billTo\"] = args.bill_to"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nvalidating dxworkflow.json and returning the json that can be sent with the /workflow/new or /globalworkflow/new API call.", "response": "def _get_validated_json(json_spec, args):\n \"\"\"\n Validates dxworkflow.json and returns the json that can be sent with the\n /workflow/new API or /globalworkflow/new request.\n \"\"\"\n if not json_spec:\n return\n if not args:\n return\n\n validated_spec = copy.deepcopy(json_spec)\n\n # print ignored keys if present in json_spec\n unsupported_keys = _get_unsupported_keys(validated_spec.keys(), SUPPORTED_KEYS)\n if len(unsupported_keys) > 0:\n logger.warn(\n \"Warning: the following root level fields are not supported and will be ignored: {}\"\n .format(\", \".join(unsupported_keys)))\n\n\n if 'stages' in validated_spec:\n validated_spec['stages'] = _get_validated_stages(validated_spec['stages'])\n\n if 'name' in validated_spec:\n if args.src_dir != validated_spec['name']:\n logger.warn(\n 'workflow name \"%s\" does not match containing directory \"%s\"' % (validated_spec['name'], args.src_dir))\n\n if 'ignoreReuse' in validated_spec:\n validate_ignore_reuse(validated_spec['stages'], validated_spec['ignoreReuse'])\n\n validated_documentation_fields = _get_validated_json_for_build_or_update(validated_spec, args)\n validated_spec.update(validated_documentation_fields)\n\n # Project-based workflow specific validation\n if args.mode == 'workflow':\n validated = _validate_json_for_regular_workflow(json_spec, args)\n validated_spec.update(validated)\n\n # Global workflow specific validation\n if args.mode == 'globalworkflow':\n _validate_json_for_global_workflow(validated_spec, args)\n\n return validated_spec"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nvalidating the fields that can be used when building a new version of a project-based workflow or updating an existing version of a global workflow.", "response": "def _get_validated_json_for_build_or_update(json_spec, args):\n \"\"\"\n Validates those fields that can be used when either building\n a new version (of a local, project-based workflow) or updating\n an
existing version (of a global workflow).\n \"\"\"\n validated = copy.deepcopy(json_spec)\n\n dxpy.executable_builder.inline_documentation_files(validated, args.src_dir)\n\n if 'title' not in json_spec:\n logger.warn(\"dxworkflow.json is missing a title, please add one in the 'title' field\")\n\n if 'summary' not in json_spec:\n logger.warn(\"dxworkflow.json is missing a summary, please add one in the 'summary' field\")\n else:\n if json_spec['summary'].endswith('.'):\n logger.warn(\"summary {} should be a short phrase not ending in a period\".format(json_spec['summary'],))\n\n return validated"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck if the global workflow regions and the regions of the executables of the workflow are all matching.", "response": "def _assert_executable_regions_match(workflow_enabled_regions, workflow_spec):\n \"\"\"\n Check if the global workflow regions and the regions of stages (apps) match.\n If the workflow contains any applets, the workflow can be currently enabled\n in only one region - the region in which the applets are stored.\n \"\"\"\n executables = [i.get(\"executable\") for i in workflow_spec.get(\"stages\")]\n\n for exect in executables:\n\n if exect.startswith(\"applet-\") and len(workflow_enabled_regions) > 1:\n raise WorkflowBuilderException(\"Building a global workflow with applets in more than one region is not yet supported.\")\n\n elif exect.startswith(\"app-\"):\n app_regional_options = dxpy.api.app_describe(exect, input_params={\"fields\": {\"regionalOptions\": True}})\n app_regions = set(app_regional_options['regionalOptions'].keys())\n if not workflow_enabled_regions.issubset(app_regions):\n additional_workflow_regions = workflow_enabled_regions - app_regions\n mesg = \"The app {} is enabled in regions {} while the global workflow in {}.\".format(\n exect, \", \".join(app_regions), \", \".join(workflow_enabled_regions))\n mesg += \" The workflow will not be able to run in 
{}.\".format(\", \".join(additional_workflow_regions))\n mesg += \" If you are a developer of the app, you can enable the app in {} to run the workflow in that region(s).\".format(\n \", \".join(additional_workflow_regions))\n logger.warn(mesg)\n\n elif exect.startswith(\"workflow-\"):\n # We recurse to check the regions of the executables of the inner workflow\n inner_workflow_spec = dxpy.api.workflow_describe(exect)\n _assert_executable_regions_match(workflow_enabled_regions, inner_workflow_spec)\n\n elif exect.startswith(\"globalworkflow-\"):\n raise WorkflowBuilderException(\"Building a global workflow with nested global workflows is not yet supported\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds a regular workflow.", "response": "def _build_regular_workflow(json_spec):\n \"\"\"\n Precondition: json_spec must be validated\n \"\"\"\n workflow_id = dxpy.api.workflow_new(json_spec)[\"id\"]\n dxpy.api.workflow_close(workflow_id)\n return workflow_id"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a set of regions in which the global workflow should be enabled.", "response": "def _get_validated_enabled_regions(json_spec, from_command_line):\n \"\"\"\n Returns a set of regions (region names) in which the global workflow\n should be enabled. Also validates and synchronizes the regions\n passed via CLI argument and in the regionalOptions field.\n \"\"\"\n enabled_regions = dxpy.executable_builder.get_enabled_regions('globalworkflow',\n json_spec,\n from_command_line,\n WorkflowBuilderException)\n if not enabled_regions:\n enabled_regions = []\n if not dxpy.WORKSPACE_ID:\n msg = \"A context project must be selected to enable a workflow in the project's region.\"\n msg += \" You can use 'dx select' to select a project. 
Otherwise you can use --region option\"\n msg += \" to select a region in which the workflow should be enabled\"\n raise(WorkflowBuilderException(msg))\n region = dxpy.api.project_describe(dxpy.WORKSPACE_ID,\n input_params={\"fields\": {\"region\": True}})[\"region\"]\n\n enabled_regions.append(region)\n\n if not enabled_regions:\n raise AssertionError(\"This workflow should be enabled in at least one region\")\n\n return set(enabled_regions)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a temporary build project in each region in enabled_regions and returns a dict mapping region names to project IDs.", "response": "def _create_temporary_projects(enabled_regions, args):\n \"\"\"\n Creates a temporary project needed to build an underlying workflow\n for a global workflow. Returns a dictionary with region names as keys\n and project IDs as values\n\n The regions in which projects will be created can be:\n i. regions specified in dxworkflow.json \"regionalOptions\"\n ii. regions specified as an argument to \"dx build\"\n iii.
the current context project, if none of the above are set.\n If both args and dxworkflow.json specify regions, they must match.\n \"\"\"\n # Create one temp project in each region\n projects_by_region = {} # Project IDs by region\n for region in enabled_regions:\n try:\n project_input = {\"name\": \"Temporary build project for dx build global workflow\",\n \"region\": region}\n if args.bill_to:\n project_input[\"billTo\"] = args.bill_to\n temp_project = dxpy.api.project_new(project_input)[\"id\"]\n projects_by_region[region] = temp_project\n logger.debug(\"Created temporary project {} to build in\".format(temp_project))\n except:\n # Clean up any temp projects that might have been created\n if projects_by_region:\n dxpy.executable_builder.delete_temporary_projects(projects_by_region.values())\n err_exit()\n return projects_by_region"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _build_underlying_workflows(enabled_regions, json_spec, args):\n projects_by_region = _create_temporary_projects(enabled_regions, args)\n workflows_by_region = {}\n\n try:\n for region, project in projects_by_region.items():\n json_spec['project'] = project\n workflow_id = _build_regular_workflow(json_spec)\n logger.debug(\"Created workflow \" + workflow_id + \" successfully\")\n workflows_by_region[region] = workflow_id\n except:\n # Clean up\n if projects_by_region:\n dxpy.executable_builder.delete_temporary_projects(projects_by_region.values())\n raise\n\n return workflows_by_region, projects_by_region", "response": "Creates a workflow in a temporary project for each enabled region."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbuilding an underlying workflow in each enabled region and creating a global workflow on the platform based on these workflows.", "response": "def _build_global_workflow(json_spec, args):\n \"\"\"\n Creates a workflow in a temporary project for each enabled region\n and
builds a global workflow on the platform based on these workflows.\n \"\"\"\n # First determine in which regions the global workflow needs to be available\n enabled_regions = _get_validated_enabled_regions(json_spec, args.region)\n\n # Verify all the stages are also enabled in these regions\n # TODO: Add support for dx building multi-region global workflows with applets\n _assert_executable_regions_match(enabled_regions, json_spec)\n\n workflows_by_region, projects_by_region = {}, {} # IDs by region\n try:\n # prepare \"regionalOptions\" field for the globalworkflow/new input\n workflows_by_region, projects_by_region = \\\n _build_underlying_workflows(enabled_regions, json_spec, args)\n regional_options = {}\n for region, workflow_id in workflows_by_region.items():\n regional_options[region] = {'workflow': workflow_id}\n json_spec.update({'regionalOptions': regional_options})\n\n # leave only fields that are actually used to build the workflow\n gwf_provided_keys = GLOBALWF_SUPPORTED_KEYS.intersection(set(json_spec.keys()))\n gwf_final_json = dict((k, v) for k, v in json_spec.items() if k in gwf_provided_keys)\n\n # we don't want to print the whole documentation to the screen so we'll remove these fields\n print_spec = copy.deepcopy(gwf_final_json)\n if \"description\" in gwf_final_json:\n del print_spec[\"description\"]\n if \"developerNotes\" in gwf_final_json:\n del print_spec[\"developerNotes\"]\n logger.info(\"Will create global workflow with spec: {}\".format(json.dumps(print_spec)))\n\n # Create a new global workflow version on the platform\n global_workflow_id = dxpy.api.global_workflow_new(gwf_final_json)[\"id\"]\n\n logger.info(\"Uploaded global workflow {n}/{v} successfully\".format(n=gwf_final_json[\"name\"],\n v=gwf_final_json[\"version\"]))\n logger.info(\"You can publish this workflow with:\")\n logger.info(\" dx publish {n}/{v}\".format(n=gwf_final_json[\"name\"],\n v=gwf_final_json[\"version\"]))\n finally:\n # Clean up\n if projects_by_region:\n 
dxpy.executable_builder.delete_temporary_projects(projects_by_region.values())\n\n # Set any additional fields on the created workflow\n try:\n _set_categories_on_workflow(global_workflow_id, gwf_final_json.get(\"categories\", []))\n except:\n logger.warn(\n \"The workflow {n}/{v} was created but setting categories failed\".format(n=gwf_final_json['name'],\n v=gwf_final_json['version']))\n raise\n\n return global_workflow_id"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbuild or updates a workflow on the platform.", "response": "def _build_or_update_workflow(json_spec, args):\n \"\"\"\n Creates or updates a workflow on the platform.\n Returns the workflow ID, or None if the workflow cannot be created.\n \"\"\"\n try:\n if args.mode == 'workflow':\n json_spec = _get_validated_json(json_spec, args)\n workflow_id = _build_regular_workflow(json_spec)\n elif args.mode == 'globalworkflow':\n # Verify if the global workflow already exists and if the user has developer rights to it\n # If the global workflow name doesn't exist, the user is free to build it\n # If the name does exist two things can be done:\n # * either update the requested version, if this version already exists\n # * or create the version if it doesn't exist\n existing_workflow = dxpy.executable_builder.verify_developer_rights('globalworkflow-' + json_spec['name'])\n if existing_workflow and _version_exists(json_spec,\n existing_workflow.name,\n existing_workflow.version):\n workflow_id = _update_global_workflow(json_spec, args, existing_workflow.id)\n else:\n json_spec = _get_validated_json(json_spec, args)\n workflow_id = _build_global_workflow(json_spec, args)\n else:\n raise WorkflowBuilderException(\"Unrecognized workflow type: {}\".format(args.mode))\n except dxpy.exceptions.DXAPIError as e:\n raise e\n return workflow_id"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef build(args, parser):\n\n if args is 
None:\n raise Exception(\"arguments not provided\")\n\n try:\n json_spec = _parse_executable_spec(args.src_dir, \"dxworkflow.json\", parser)\n workflow_id = _build_or_update_workflow(json_spec, args)\n _print_output(workflow_id, args)\n except WorkflowBuilderException as e:\n print(\"Error: %s\" % (e.args,), file=sys.stderr)\n sys.exit(3)", "response": "Builds a new global workflow based on the given arguments."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cp_to_noexistent_destination(args, dest_path, dx_dest, dest_proj):\n ''' Copy the source to a destination that does not currently\n exist. This involves creating the target file/folder.\n '''\n # Destination folder path is new => renaming\n if len(args.sources) != 1:\n # Can't copy and rename more than one object\n raise DXCLIError('The destination folder does not exist')\n last_slash_pos = get_last_pos_of_char('/', dest_path)\n if last_slash_pos == 0:\n dest_folder = '/'\n else:\n dest_folder = dest_path[:last_slash_pos]\n dest_name = dest_path[last_slash_pos + 1:].replace('\\/', '/')\n try:\n dx_dest.list_folder(folder=dest_folder, only='folders')\n except dxpy.DXAPIError as details:\n if details.code == requests.codes['not_found']:\n raise DXCLIError('The destination folder does not exist')\n else:\n raise\n except:\n err_exit()\n\n # Clone and rename either the data object or the folder.\n # src_result is None if it could not be resolved to an object.\n src_proj, src_path, src_results = try_call(resolve_existing_path,\n args.sources[0],\n allow_mult=True, all_mult=args.all)\n\n if src_proj == dest_proj:\n if is_hashid(args.sources[0]):\n # This is the only case in which the source project is\n # purely assumed, so give a better error message.\n raise DXCLIError(fill('Error: You must specify a source project for ' + args.sources[0]))\n else:\n raise DXCLIError(fill('A source path and the destination path resolved to the ' +\n 'same project or 
container. Please specify different source ' +\n 'and destination containers, e.g.') +\n '\\n dx cp source-project:source-id-or-path dest-project:dest-path')\n\n if src_results is None:\n try:\n contents = dxpy.api.project_list_folder(src_proj,\n {\"folder\": src_path, \"includeHidden\": True})\n dxpy.api.project_new_folder(dest_proj, {\"folder\": dest_path})\n exists = dxpy.api.project_clone(src_proj,\n {\"folders\": contents['folders'],\n \"objects\": [result['id'] for result in contents['objects']],\n \"project\": dest_proj,\n \"destination\": dest_path})['exists']\n if len(exists) > 0:\n print(fill('The following objects already existed in the destination ' +\n 'container and were not copied:') + '\\n ' + '\\n '.join(exists))\n return\n except:\n err_exit()\n else:\n try:\n exists = dxpy.api.project_clone(src_proj,\n {\"objects\": [result['id'] for result in src_results],\n \"project\": dest_proj,\n \"destination\": dest_folder})['exists']\n if len(exists) > 0:\n print(fill('The following objects already existed in the destination ' +\n 'container and were not copied:') + '\\n ' + '\\n '.join(exists))\n for result in src_results:\n if result['id'] not in exists:\n dxpy.DXHTTPRequest('/' + result['id'] + '/rename',\n {\"project\": dest_proj,\n \"name\": dest_name})\n return\n except:\n err_exit()", "response": "Copy the source to a destination that does not currently have a corresponding object."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the file size in readable form.", "response": "def _readable_part_size(num_bytes):\n \"Returns the file size in readable form.\"\n B = num_bytes\n KB = float(1024)\n MB = float(KB * 1024)\n GB = float(MB * 1024)\n TB = float(GB * 1024)\n\n if B < KB:\n return '{0} {1}'.format(B, 'bytes' if B != 1 else 'byte')\n elif KB <= B < MB:\n return '{0:.2f} KiB'.format(B/KB)\n elif MB <= B < GB:\n return '{0:.2f} MiB'.format(B/MB)\n elif GB <= B < TB:\n return '{0:.2f} 
GiB'.format(B/GB)\n elif TB <= B:\n return '{0:.2f} TiB'.format(B/TB)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _new(self, dx_hash, media_type=None, **kwargs):\n\n if media_type is not None:\n dx_hash[\"media\"] = media_type\n\n resp = dxpy.api.file_new(dx_hash, **kwargs)\n self.set_ids(resp[\"id\"], dx_hash[\"project\"])", "response": "Create a new remote file with the given hash and media type."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the IDs of the currently stored object.", "response": "def set_ids(self, dxid, project=None):\n '''\n :param dxid: Object ID\n :type dxid: string\n :param project: Project ID\n :type project: string\n\n Discards the currently stored ID and associates the handler with\n *dxid*. As a side effect, it also flushes the buffer for the\n previous file object if the buffer is nonempty.\n '''\n if self._dxid is not None:\n self.flush()\n\n DXDataObject.set_ids(self, dxid, project)\n\n # Reset state\n self._pos = 0\n self._file_length = None\n self._cur_part = 1\n self._num_uploaded_parts = 0"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nseeks to a specific offset in the file.", "response": "def seek(self, offset, from_what=os.SEEK_SET):\n '''\n :param offset: Position in the file to seek to\n :type offset: integer\n\n Seeks to *offset* bytes from the beginning of the file. This is a no-op if the file is open for writing.\n\n The position is computed from adding *offset* to a reference point; the reference point is selected by the\n *from_what* argument. A *from_what* value of 0 measures from the beginning of the file, 1 uses the current file\n position, and 2 uses the end of the file as the reference point. 
*from_what* can be omitted and defaults to 0,\n using the beginning of the file as the reference point.\n '''\n if from_what == os.SEEK_SET:\n reference_pos = 0\n elif from_what == os.SEEK_CUR:\n reference_pos = self._pos\n elif from_what == os.SEEK_END:\n if self._file_length == None:\n desc = self.describe()\n self._file_length = int(desc[\"size\"])\n reference_pos = self._file_length\n else:\n raise DXFileError(\"Invalid value supplied for from_what\")\n\n orig_pos = self._pos\n self._pos = reference_pos + offset\n\n in_buf = False\n orig_buf_pos = self._read_buf.tell()\n if offset < orig_pos:\n if orig_buf_pos > orig_pos - offset:\n # offset is less than original position but within the buffer\n in_buf = True\n else:\n buf_len = dxpy.utils.string_buffer_length(self._read_buf)\n if buf_len - orig_buf_pos > offset - orig_pos:\n # offset is greater than original position but within the buffer\n in_buf = True\n\n if in_buf:\n # offset is within the buffer (at least one byte following\n # the offset can be read directly out of the buffer)\n self._read_buf.seek(orig_buf_pos - orig_pos + offset)\n elif offset == orig_pos:\n # This seek is a no-op (the cursor is just past the end of\n # the read buffer and coincides with the desired seek\n # position). 
We don't have the data ready, but the request\n # for the data starting here is already in flight.\n #\n # Detecting this case helps to optimize for sequential read\n # access patterns.\n pass\n else:\n # offset is outside the buffer-- reset buffer and queues.\n # This is the failsafe behavior\n self._read_buf = BytesIO()\n # TODO: if the offset is within the next response(s), don't throw out the queues\n self._request_iterator, self._response_iterator = None, None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef flush(self, multithread=True, **kwargs):\n '''\n Flushes the internal write buffer.\n '''\n if self._write_buf.tell() > 0:\n data = self._write_buf.getvalue()\n self._write_buf = BytesIO()\n\n if multithread:\n self._async_upload_part_request(data, index=self._cur_part, **kwargs)\n else:\n self.upload_part(data, self._cur_part, **kwargs)\n\n self._cur_part += 1\n\n if len(self._http_threadpool_futures) > 0:\n dxpy.utils.wait_for_all_futures(self._http_threadpool_futures)\n try:\n for future in self._http_threadpool_futures:\n if future.exception() != None:\n raise future.exception()\n finally:\n self._http_threadpool_futures = set()", "response": "Flushes the internal write buffer."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _write2(self, data, multithread=True, **kwargs):\n '''\n :param data: Data to be written\n :type data: str or mmap object\n :param multithread: If True, sends multiple write requests asynchronously\n :type multithread: boolean\n\n Writes the data *data* to the file.\n\n .. note::\n\n Writing to remote files is append-only. 
Using :meth:`seek`\n does not affect where the next :meth:`write` will occur.\n\n '''\n if not USING_PYTHON2:\n assert(isinstance(data, bytes))\n\n self._ensure_write_bufsize(**kwargs)\n\n def write_request(data_for_write_req):\n if multithread:\n self._async_upload_part_request(data_for_write_req, index=self._cur_part, **kwargs)\n else:\n self.upload_part(data_for_write_req, self._cur_part, **kwargs)\n self._cur_part += 1\n\n if self._write_buf.tell() == 0 and self._write_bufsize == len(data):\n # In the special case of a write that is the same size as\n # our write buffer size, and no unflushed data in the\n # buffer, just directly dispatch the write and bypass the\n # write buffer.\n #\n # This saves a buffer copy, which is especially helpful if\n # 'data' is actually mmap'd from a file.\n #\n # TODO: an additional optimization could be made to allow\n # the last request from an mmap'd upload to take this path\n # too (in general it won't because it's not of length\n # _write_bufsize). This is probably inconsequential though.\n write_request(data)\n return\n\n remaining_space = self._write_bufsize - self._write_buf.tell()\n\n if len(data) <= remaining_space:\n self._write_buf.write(data)\n else:\n self._write_buf.write(data[:remaining_space])\n\n temp_data = self._write_buf.getvalue()\n self._write_buf = BytesIO()\n write_request(temp_data)\n\n # TODO: check if repeat string splitting is bad for\n # performance when len(data) >> _write_bufsize\n self.write(data[remaining_space:], **kwargs)", "response": "Write to the file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef write(self, data, multithread=True, **kwargs):\n '''\n :param data: Data to be written\n :type data: str or mmap object\n :param multithread: If True, sends multiple write requests asynchronously\n :type multithread: boolean\n\n Writes the data *data* to the file.\n\n .. note::\n\n Writing to remote files is append-only. 
Using :meth:`seek`\n does not affect where the next :meth:`write` will occur.\n\n '''\n if USING_PYTHON2:\n self._write2(data, multithread=multithread, **kwargs)\n else:\n # In python3, the underlying system methods use the 'bytes' type, not 'string'\n #\n # This is, hopefully, a temporary hack. It is not a good idea for two reasons:\n # 1) Performance, we need to make a pass on the data, and need to allocate\n # another buffer of similar size\n # 2) The types are wrong. The \"bytes\" type should be visible to the caller\n # of the write method, instead of being hidden.\n\n # Should we throw an exception if the file is opened in binary mode,\n # and the data is unicode/text?\n if isinstance(data, str):\n bt = data.encode(\"utf-8\")\n elif isinstance(data, bytearray):\n bt = bytes(data)\n elif isinstance(data, bytes):\n bt = data\n elif isinstance(data, mmap.mmap):\n bt = bytes(data)\n else:\n raise DXFileError(\"Invalid type {} for write data argument\".format(type(data)))\n assert(isinstance(bt, bytes))\n self._write2(bt, multithread=multithread, **kwargs)", "response": "Writes the data to the file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef close(self, block=False, **kwargs):\n '''\n :param block: If True, this function blocks until the remote file has closed.\n :type block: boolean\n\n Attempts to close the file.\n\n .. note:: The remote file cannot be closed until all parts have\n been fully uploaded. 
An exception will be thrown if this is\n not the case.\n '''\n self.flush(**kwargs)\n\n # Also populates emptyLastPartAllowed\n self._ensure_write_bufsize(**kwargs)\n\n if self._num_uploaded_parts == 0 and self._empty_last_part_allowed:\n # We haven't uploaded any parts in this session.\n # In case no parts have been uploaded at all and region\n # settings allow last empty part upload, try to upload\n # an empty part (otherwise files with 0 parts cannot be closed).\n try:\n if USING_PYTHON2:\n self.upload_part('', 1, **kwargs)\n else:\n self.upload_part(b'', 1, **kwargs)\n except dxpy.exceptions.InvalidState:\n pass\n\n if 'report_progress_fn' in kwargs:\n del kwargs['report_progress_fn']\n\n dxpy.api.file_close(self._dxid, **kwargs)\n\n if block:\n self._wait_on_close(**kwargs)", "response": "Closes the remote file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef upload_part(self, data, index=None, display_progress=False, report_progress_fn=None, **kwargs):\n if not USING_PYTHON2:\n # In python3, the underlying system methods use the 'bytes' type, not 'string'\n assert(isinstance(data, bytes))\n\n req_input = {}\n if index is not None:\n req_input[\"index\"] = int(index)\n\n md5 = hashlib.md5()\n if hasattr(data, 'seek') and hasattr(data, 'tell'):\n # data is a buffer; record initial position (so we can rewind back)\n rewind_input_buffer_offset = data.tell()\n while True:\n bytes_read = data.read(MD5_READ_CHUNK_SIZE)\n if bytes_read:\n md5.update(bytes_read)\n else:\n break\n # rewind the buffer to original position\n data.seek(rewind_input_buffer_offset)\n else:\n md5.update(data)\n\n req_input[\"md5\"] = md5.hexdigest()\n req_input[\"size\"] = len(data)\n\n def get_upload_url_and_headers():\n # This function is called from within a retry loop, so to avoid amplifying the number of retries\n # geometrically, we decrease the allowed number of retries for the nested API call every time.\n if 'max_retries' not in 
kwargs:\n kwargs['max_retries'] = dxpy.DEFAULT_RETRIES\n elif kwargs['max_retries'] > 0:\n kwargs['max_retries'] -= 1\n\n if \"timeout\" not in kwargs:\n kwargs[\"timeout\"] = FILE_REQUEST_TIMEOUT\n\n resp = dxpy.api.file_upload(self._dxid, req_input, **kwargs)\n url = resp[\"url\"]\n return url, _validate_headers(resp.get(\"headers\", {}))\n\n # The file upload API requires us to get a pre-authenticated upload URL (and headers for it) every time we\n # attempt an upload. Because DXHTTPRequest will retry requests under retryable conditions, we give it a callback\n # to ask us for a new upload URL every time it attempts a request (instead of giving them directly).\n dxpy.DXHTTPRequest(get_upload_url_and_headers,\n data,\n jsonify_data=False,\n prepend_srv=False,\n always_retry=True,\n timeout=FILE_REQUEST_TIMEOUT,\n auth=None,\n method='PUT')\n\n self._num_uploaded_parts += 1\n\n if display_progress:\n warn(\".\")\n\n if report_progress_fn is not None:\n report_progress_fn(self, len(data))", "response": "Uploads a part of data to the local file system."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating a download URL for the specified file and optional duration.", "response": "def get_download_url(self, duration=None, preauthenticated=False, filename=None, project=None, **kwargs):\n \"\"\"\n :param duration: number of seconds for which the generated URL will be\n valid, should only be specified when preauthenticated is True\n :type duration: int\n :param preauthenticated: if True, generates a 'preauthenticated'\n download URL, which embeds authentication info in the URL and does\n not require additional headers\n :type preauthenticated: bool\n :param filename: desired filename of the downloaded file\n :type filename: str\n :param project: ID of a project containing the file (the download URL\n will be associated with this project, and this may affect which\n billing account is billed for this download).\n If no project is 
specified, an attempt will be made to verify if the file is\n in the project from the DXFile handler (as specified by the user or\n the current project stored in dxpy.WORKSPACE_ID). Otherwise, no hint is supplied.\n This fall back behavior does not happen inside a job environment.\n A non preauthenticated URL is only valid as long as the user has\n access to that project and the project contains that file.\n :type project: str\n :returns: download URL and dict containing HTTP headers to be supplied\n with the request\n :rtype: tuple (str, dict)\n :raises: :exc:`~dxpy.exceptions.ResourceNotFound` if a project context was\n given and the file was not found in that project context.\n :raises: :exc:`~dxpy.exceptions.ResourceNotFound` if no project context was\n given and the file was not found in any projects.\n\n Obtains a URL that can be used to directly download the associated\n file.\n\n \"\"\"\n args = {\"preauthenticated\": preauthenticated}\n\n if duration is not None:\n args[\"duration\"] = duration\n if filename is not None:\n args[\"filename\"] = filename\n\n # If project=None, we fall back to the project attached to this handler\n # (if any). If this is supplied, it's treated as a hint: if it's a\n # project in which this file exists, it's passed on to the\n # apiserver. Otherwise, NO hint is supplied. In principle supplying a\n # project in the handler that doesn't contain this file ought to be an\n # error, but it's this way for backwards compatibility. We don't know\n # who might be doing downloads and creating handlers without being\n # careful that the project encoded in the handler contains the file\n # being downloaded. 
They may now rely on such behavior.\n if project is None and 'DX_JOB_ID' not in os.environ:\n project_from_handler = self.get_proj_id()\n if project_from_handler and object_exists_in_project(self.get_id(), project_from_handler):\n project = project_from_handler\n\n if project is not None and project is not DXFile.NO_PROJECT_HINT:\n args[\"project\"] = project\n\n # Test hook to write 'project' argument passed to API call to a\n # local file\n if '_DX_DUMP_BILLED_PROJECT' in os.environ:\n with open(os.environ['_DX_DUMP_BILLED_PROJECT'], \"w\") as fd:\n if project is not None and project != DXFile.NO_PROJECT_HINT:\n fd.write(project)\n\n with self._url_download_mutex:\n if self._download_url is None or self._download_url_expires < time.time():\n # The idea here is to cache a download URL for the entire file, that will\n # be good for a few minutes. This avoids each thread having to ask the\n # server for a URL, increasing server load.\n #\n # To avoid thread race conditions, this check/update procedure is protected\n # with a lock.\n\n # logging.debug(\"Download URL unset or expired, requesting a new one\")\n if \"timeout\" not in kwargs:\n kwargs[\"timeout\"] = FILE_REQUEST_TIMEOUT\n resp = dxpy.api.file_download(self._dxid, args, **kwargs)\n self._download_url = resp[\"url\"]\n self._download_url_headers = _validate_headers(resp.get(\"headers\", {}))\n if preauthenticated:\n self._download_url_expires = resp[\"expires\"]/1000 - 60 # Try to account for drift\n else:\n self._download_url_expires = 32503680000 # doesn't expire (year 3000)\n\n # Make a copy, ensuring each thread has its own mutable\n # version of the headers. 
Note: python strings are\n # immutable, so we can safely give a reference to the\n # download url.\n retval_download_url = self._download_url\n retval_download_url_headers = copy.copy(self._download_url_headers)\n\n return retval_download_url, retval_download_url_headers"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _read2(self, length=None, use_compression=None, project=None, **kwargs):\n '''\n :param length: Maximum number of bytes to be read\n :type length: integer\n :param project: project to use as context for this download (may affect\n which billing account is billed for this download). If specified,\n must be a project in which this file exists. If not specified, the\n project ID specified in the handler is used for the download, IF it\n contains this file. If set to DXFile.NO_PROJECT_HINT, no project ID\n is supplied for the download, even if the handler specifies a\n project ID.\n :type project: str or None\n :rtype: string\n :raises: :exc:`~dxpy.exceptions.ResourceNotFound` if *project* is supplied\n and it does not contain this file\n\n Returns the next *length* bytes, or all the bytes until the end of file\n (if no *length* is given or there are fewer than *length* bytes left in\n the file).\n\n .. note:: After the first call to read(), the project arg and\n passthrough kwargs are not respected while using the same response\n iterator (i.e. until next seek).\n\n '''\n if self._file_length == None:\n desc = self.describe(**kwargs)\n if desc[\"state\"] != \"closed\":\n raise DXFileError(\"Cannot read from file until it is in the closed state\")\n self._file_length = int(desc[\"size\"])\n\n # If running on a worker, wait for the first file download chunk\n # to come back before issuing any more requests. 
This ensures\n # that all subsequent requests can take advantage of caching,\n # rather than having all of the first DXFILE_HTTP_THREADS\n # requests simultaneously hit a cold cache. Enforce a minimum\n # size for this heuristic so we don't incur the overhead for\n # tiny files (which wouldn't contribute as much to the load\n # anyway).\n get_first_chunk_sequentially = (self._file_length > 128 * 1024 and self._pos == 0 and dxpy.JOB_ID)\n\n if self._pos == self._file_length:\n return b\"\"\n\n if length == None or length > self._file_length - self._pos:\n length = self._file_length - self._pos\n\n buf = self._read_buf\n buf_remaining_bytes = dxpy.utils.string_buffer_length(buf) - buf.tell()\n if length <= buf_remaining_bytes:\n self._pos += length\n return buf.read(length)\n else:\n orig_buf_pos = buf.tell()\n orig_file_pos = self._pos\n buf.seek(0, os.SEEK_END)\n self._pos += buf_remaining_bytes\n while self._pos < orig_file_pos + length:\n remaining_len = orig_file_pos + length - self._pos\n\n if self._response_iterator is None:\n self._request_iterator = self._generate_read_requests(\n start_pos=self._pos, project=project, **kwargs)\n\n content = self._next_response_content(get_first_chunk_sequentially=get_first_chunk_sequentially)\n\n if len(content) < remaining_len:\n buf.write(content)\n self._pos += len(content)\n else: # response goes beyond requested length\n buf.write(content[:remaining_len])\n self._pos += remaining_len\n self._read_buf = BytesIO()\n self._read_buf.write(content[remaining_len:])\n self._read_buf.seek(0)\n buf.seek(orig_buf_pos)\n return buf.read()", "response": "Reads a file from the file and returns the next length bytes."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setup_ssh_tunnel(job_id, local_port, remote_port):\n cmd = ['dx', 'ssh', '--suppress-running-check', job_id, '-o', 'StrictHostKeyChecking no']\n cmd += ['-f', '-L', '{0}:localhost:{1}'.format(local_port, 
remote_port), '-N']\n    subprocess.check_call(cmd)", "response": "Set up an SSH tunnel to the given job ID."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npolling for the server to start running and post the SERVER_READY_TAG.", "response": "def poll_for_server_running(job_id):\n    \"\"\"\n    Poll for the job to start running and post the SERVER_READY_TAG.\n    \"\"\"\n    sys.stdout.write('Waiting for server in {0} to initialize ...'.format(job_id))\n    sys.stdout.flush()\n    desc = dxpy.describe(job_id)\n    # Keep checking until the server has begun or it has failed.\n    while(SERVER_READY_TAG not in desc['tags'] and desc['state'] != 'failed'):\n        time.sleep(SLEEP_PERIOD)\n        sys.stdout.write('.')\n        sys.stdout.flush()\n        desc = dxpy.describe(job_id)\n\n    # If the server job failed, provide friendly advice.\n    if desc['state'] == 'failed':\n        msg = RED('Error:') + ' Server failed to run.\\n'\n        msg += 'You may want to check the job logs by running:'\n        msg += BOLD('dx watch {0}'.format(job_id))\n        err_exit(msg)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nopen the given command and use the OS to automatically open the appropriate resource.", "response": "def multi_platform_open(cmd):\n    \"\"\"\n    Take the given command and use the OS to automatically open the appropriate\n    resource.
For instance, if a URL is provided, this will have the OS automatically\n open the URL in the default web browser.\n \"\"\"\n if platform == \"linux\" or platform == \"linux2\":\n cmd = ['xdg-open', cmd]\n elif platform == \"darwin\":\n cmd = ['open', cmd]\n elif platform == \"win32\":\n cmd = ['start', cmd]\n subprocess.check_call(cmd)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_notebook_app_versions():\n notebook_apps = dxpy.find_apps(name=NOTEBOOK_APP, all_versions=True)\n versions = [str(dxpy.describe(app['id'])['version']) for app in notebook_apps]\n return versions", "response": "Get the valid version numbers of the notebook app."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef run_notebook(args, ssh_config_check):\n # Check that ssh is setup. Currently notebooks require ssh for tunelling.\n ssh_config_check()\n if args.only_check_config:\n return\n\n # If the user requested a specific version of the notebook server,\n # get the executable id.\n if args.version is not None:\n executable = get_app_from_path('app-{0}/{1}'.format(NOTEBOOK_APP, args.version))\n if executable is not None and 'id' in executable:\n executable = executable['id']\n else:\n msg = RED('Warning:') + ' Invalid notebook version: {0}\\nValid versions are: '.format(args.version)\n msg += BOLD('{0}'.format(str(get_notebook_app_versions())))\n err_exit(msg)\n else:\n executable = 'app-{0}'.format(NOTEBOOK_APP)\n\n # Compose the command to launch the notebook\n cmd = ['dx', 'run', executable, '-inotebook_type={0}'.format(args.notebook_type)]\n cmd += ['-iinput_files={0}'.format(f) for f in args.notebook_files]\n cmd += ['-itimeout={0}'.format(args.timeout), '-y', '--brief', '--allow-ssh', '--instance-type', args.instance_type]\n if args.spark:\n cmd += ['-iinstall_spark=true']\n if args.snapshot:\n cmd += ['-isnapshot={0}'.format(args.snapshot)]\n job_id = 
subprocess.check_output(cmd).strip()\n\n poll_for_server_running(job_id)\n\n if args.notebook_type in {'jupyter', 'jupyter_lab', 'jupyter_notebook'}:\n remote_port = 8888\n\n setup_ssh_tunnel(job_id, args.port, remote_port)\n\n if args.open_server:\n multi_platform_open('http://localhost:{0}'.format(args.port))\n print('A web browser should have opened to connect you to your notebook.')\n print('If no browser appears, or if you need to reopen a browser at any point, you should be able to point your browser to http://localhost:{0}'.format(args.port))", "response": "Launch the notebook server."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the best fit of the given word in the given list of known words.", "response": "def correct(word, known_words):\n \"\"\"\n :param word: Word to correct\n :type word: string\n :param known_words: List of known words\n :type known_words: iterable of strings\n\n Given **word**, suggests a correction from **known_words**. If no reasonably close correction is found, returns\n **word**.\n \"\"\"\n NWORDS = _train(known_words)\n candidates = _known([word], NWORDS) or _known(_edits1(word), NWORDS) or _known_edits2(word, NWORDS) or [word]\n return max(candidates, key=NWORDS.get)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nvalidate user supplied filters.", "response": "def _validate_filters(cls, filters, model_class):\n \"\"\"Validate user supplied filters.\n\n Validate filters are on existing properties and filter values\n have valid semantics.\n\n Args:\n filters: user supplied filters. Each filter should be a list or tuple of\n format (, ,\n ). 
Value type is up to the property's type.\n      model_class: the db.Model class for the entity type to apply filters on.\n\n    Raises:\n      BadReaderParamsError: if any filter is invalid in any way.\n    \"\"\"\n    if not filters:\n      return\n\n    properties = model_class.properties()\n\n    for f in filters:\n      prop, _, val = f\n      if prop not in properties:\n        raise errors.BadReaderParamsError(\n            \"Property %s is not defined for entity type %s\",\n            prop, model_class.kind())\n\n      # Validate the value of each filter. We need to know filters have\n      # valid value to carry out splits.\n      try:\n        properties[prop].validate(val)\n      except db.BadValueError as e:\n        raise errors.BadReaderParamsError(e)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nvalidate ndb.Model filters.", "response": "def _validate_filters_ndb(cls, filters, model_class):\n    \"\"\"Validate ndb.Model filters.\"\"\"\n    if not filters:\n      return\n\n    properties = model_class._properties\n\n    for f in filters:\n      prop, _, val = f\n      if prop not in properties:\n        raise errors.BadReaderParamsError(\n            \"Property %s is not defined for entity type %s\",\n            prop, model_class._get_kind())\n\n      # Validate the value of each filter.
We need to know filters have\n      # valid value to carry out splits.\n      try:\n        properties[prop]._do_validate(val)\n      except db.BadValueError as e:\n        raise errors.BadReaderParamsError(e)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _normalize_entity(value):\n  if ndb is not None and isinstance(value, ndb.Model):\n    return None\n  if getattr(value, \"_populate_internal_entity\", None):\n    return value._populate_internal_entity()\n  return value", "response": "Return an entity from an entity or model instance."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a key from an entity, model instance, key, or key string.", "response": "def _normalize_key(value):\n  \"\"\"Return a key from an entity, model instance, key, or key string.\"\"\"\n  if ndb is not None and isinstance(value, (ndb.Model, ndb.Key)):\n    return None\n  if getattr(value, \"key\", None):\n    return value.key()\n  elif isinstance(value, basestring):\n    return datastore.Key(value)\n  else:\n    return value"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef append(self, item):\n    if self.should_flush():\n      self.flush()\n    self.items.append(item)", "response": "Adds a new item to the list.
If needed flush existing items and clear existing items."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef put(self, entity):\n actual_entity = _normalize_entity(entity)\n if actual_entity is None:\n return self.ndb_put(entity)\n self.puts.append(actual_entity)", "response": "Registers entity to datastore."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlike put but for NDB entities.", "response": "def ndb_put(self, entity):\n \"\"\"Like put(), but for NDB entities.\"\"\"\n assert ndb is not None and isinstance(entity, ndb.Model)\n self.ndb_puts.append(entity)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nregisters entity to be deleted from datastore.", "response": "def delete(self, entity):\n \"\"\"Registers entity to delete from datastore.\n\n Args:\n entity: an entity, model instance, or key to delete.\n \"\"\"\n key = _normalize_key(entity)\n if key is None:\n return self.ndb_delete(entity)\n self.deletes.append(key)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ndb_delete(self, entity_or_key):\n if ndb is not None and isinstance(entity_or_key, ndb.Model):\n key = entity_or_key.key\n else:\n key = entity_or_key\n self.ndb_deletes.append(key)", "response": "Like delete but for NDB entities and keys."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef flush(self):\n self.puts.flush()\n self.deletes.flush()\n self.ndb_puts.flush()\n self.ndb_deletes.flush()", "response": "Flush all changed to datastore."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _flush_puts(self, items, options):\n datastore.Put(items, config=self._create_config(options))", "response": "Flushes all puts to datastore."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following 
Python 3 code\ndef _flush_deletes(self, items, options):\n datastore.Delete(items, config=self._create_config(options))", "response": "Flush all deletes to datastore."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _flush_ndb_puts(self, items, options):\n assert ndb is not None\n ndb.put_multi(items, config=self._create_config(options))", "response": "Flush all NDB puts to datastore."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nflush all deletes to datastore.", "response": "def _flush_ndb_deletes(self, items, options):\n \"\"\"Flush all deletes to datastore.\"\"\"\n assert ndb is not None\n ndb.delete_multi(items, config=self._create_config(options))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a datastore Config.", "response": "def _create_config(self, options):\n \"\"\"Creates datastore Config.\n\n Returns:\n A datastore_rpc.Configuration instance.\n \"\"\"\n return datastore.CreateConfig(deadline=options[\"deadline\"],\n force_writes=self.force_writes)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nincrement the value of a counter.", "response": "def increment(self, counter_name, delta=1):\n \"\"\"Increment counter value.\n\n Args:\n counter_name: name of the counter as string.\n delta: increment delta as int.\n \"\"\"\n self._shard_state.counters_map.increment(counter_name, delta)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nserializing all states into json form.", "response": "def to_json(self):\n \"\"\"Serializes all states into json form.\n\n Returns:\n all states in json-compatible map.\n \"\"\"\n cursor = self._get_cursor()\n cursor_object = False\n if cursor and isinstance(cursor, datastore_query.Cursor):\n cursor = cursor.to_websafe_string()\n cursor_object = True\n return {\"key_range\": self._key_range.to_json(),\n 
\"query_spec\": self._query_spec.to_json(),\n \"cursor\": cursor,\n \"cursor_object\": cursor_object}"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_mapreduce_yaml(status_file=__file__):\n checked = set()\n yaml = _find_mapreduce_yaml(os.path.dirname(status_file), checked)\n if not yaml:\n yaml = _find_mapreduce_yaml(os.getcwd(), checked)\n return yaml", "response": "Traverse the directory trees to find the mapreduce.yaml file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _find_mapreduce_yaml(start, checked):\n dir = start\n while dir not in checked:\n checked.add(dir)\n for mr_yaml_name in MR_YAML_NAMES:\n yaml_path = os.path.join(dir, mr_yaml_name)\n if os.path.exists(yaml_path):\n return yaml_path\n dir = os.path.dirname(dir)\n return None", "response": "Traverse the directory tree identified by start until a directory already\n in checked is encountered."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse_mapreduce_yaml(contents):\n try:\n builder = yaml_object.ObjectBuilder(MapReduceYaml)\n handler = yaml_builder.BuilderHandler(builder)\n listener = yaml_listener.EventListener(handler)\n listener.Parse(contents)\n\n mr_info = handler.GetResults()\n except (ValueError, yaml_errors.EventError) as e:\n raise errors.BadYamlError(e)\n\n if len(mr_info) < 1:\n raise errors.BadYamlError(\"No configs found in mapreduce.yaml\")\n if len(mr_info) > 1:\n raise errors.MultipleDocumentsInMrYaml(\"Found %d YAML documents\" %\n len(mr_info))\n\n jobs = mr_info[0]\n job_names = set(j.name for j in jobs.mapreduce)\n if len(jobs.mapreduce) != len(job_names):\n raise errors.BadYamlError(\n \"Overlapping mapreduce names; names must be unique\")\n\n return jobs", "response": "Parses mapreduce.yaml file contents.\n Returns a new object with all the data from the original file."} 
{"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nlocates mapreduce.yaml, loads and parses its info.", "response": "def get_mapreduce_yaml(parse=parse_mapreduce_yaml):\n \"\"\"Locates mapreduce.yaml, loads and parses its info.\n\n Args:\n parse: Used for testing.\n\n Returns:\n MapReduceYaml object.\n\n Raises:\n errors.BadYamlError: when contents is not a valid mapreduce.yaml file or the\n file is missing.\n \"\"\"\n mr_yaml_path = find_mapreduce_yaml()\n if not mr_yaml_path:\n raise errors.MissingYamlError()\n mr_yaml_file = open(mr_yaml_path)\n try:\n return parse(mr_yaml_file.read())\n finally:\n mr_yaml_file.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_dict(mapreduce_yaml):\n all_configs = []\n for config in mapreduce_yaml.mapreduce:\n out = {\n \"name\": config.name,\n \"mapper_input_reader\": config.mapper.input_reader,\n \"mapper_handler\": config.mapper.handler,\n }\n if config.mapper.params_validator:\n out[\"mapper_params_validator\"] = config.mapper.params_validator\n if config.mapper.params:\n param_defaults = {}\n for param in config.mapper.params:\n param_defaults[param.name] = param.default or param.value\n out[\"mapper_params\"] = param_defaults\n if config.params:\n param_defaults = {}\n for param in config.params:\n param_defaults[param.name] = param.default or param.value\n out[\"params\"] = param_defaults\n if config.mapper.output_writer:\n out[\"mapper_output_writer\"] = config.mapper.output_writer\n all_configs.append(out)\n\n return all_configs", "response": "Converts a MapReduceYaml file into a JSON-encodable dictionary."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate(cls, job_config):\n if job_config.input_reader_cls != cls:\n raise errors.BadReaderParamsError(\n \"Expect input reader 
class %r, got %r.\" %\n (cls, job_config.input_reader_cls))", "response": "Validates relevant parameters.\n\n This method can validate fields which it deems relevant.\n\n Args:\n job_config: an instance of map_job.JobConfig.\n\n Raises:\n errors.BadReaderParamsError: required parameters are missing or invalid."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _sort_records_map(records):\n ctx = context.get()\n l = len(records)\n key_records = [None] * l\n\n logging.debug(\"Parsing\")\n for i in range(l):\n proto = kv_pb.KeyValue()\n proto.ParseFromString(records[i])\n key_records[i] = (proto.key(), records[i])\n\n logging.debug(\"Sorting\")\n key_records.sort(cmp=_compare_keys)\n\n logging.debug(\"Writing\")\n mapper_spec = ctx.mapreduce_spec.mapper\n params = input_readers._get_params(mapper_spec)\n bucket_name = params.get(\"bucket_name\")\n filename = (ctx.mapreduce_spec.name + \"/\" + ctx.mapreduce_id + \"/output-\" +\n ctx.shard_id + \"-\" + str(int(time.time())))\n full_filename = \"/%s/%s\" % (bucket_name, filename)\n filehandle = cloudstorage.open(full_filename, mode=\"w\")\n with output_writers.GCSRecordsPool(filehandle, ctx=ctx) as pool:\n for key_record in key_records:\n pool.append(key_record[1])\n\n logging.debug(\"Finalizing\")\n filehandle.close()\n\n entity = _OutputFile(key_name=full_filename,\n parent=_OutputFile.get_root_key(ctx.mapreduce_id))\n entity.put()", "response": "Map function sorting records."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsplitting input into multiple shards.", "response": "def split_input(cls, mapper_spec):\n \"\"\"Split input into multiple shards.\"\"\"\n filelists = mapper_spec.params[cls.FILES_PARAM]\n max_values_count = mapper_spec.params.get(cls.MAX_VALUES_COUNT_PARAM, -1)\n max_values_size = mapper_spec.params.get(cls.MAX_VALUES_SIZE_PARAM, -1)\n return [cls([0] * len(files), max_values_count, max_values_size)\n for files in 
filelists]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef validate(cls, mapper_spec):\n if mapper_spec.input_reader_class() != cls:\n raise errors.BadReaderParamsError(\"Input reader class mismatch\")\n params = mapper_spec.params\n if cls.FILES_PARAM not in params:\n raise errors.BadReaderParamsError(\"Missing files parameter.\")", "response": "Validate reader parameters in mapper_spec."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef validate(cls, mapper_spec):\n if mapper_spec.output_writer_class() != cls:\n raise errors.BadWriterParamsError(\"Output writer class mismatch\")\n params = output_writers._get_params(mapper_spec)\n # Bucket Name is required\n if cls.BUCKET_NAME_PARAM not in params:\n raise errors.BadWriterParamsError(\n \"%s is required for the _HashingGCSOutputWriter\" %\n cls.BUCKET_NAME_PARAM)", "response": "Validates the mapper specification."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_json(self):\n # Use the member variable (since we don't have access to the context) to\n # flush each pool to minimize the size of each filehandle before we\n # serialize it.\n for pool in self._pools:\n if pool is not None:\n pool.flush(True)\n return {\"filehandles\": pickle.dumps(self._filehandles)}", "response": "Returns the JSON-serializable version of the OutputWriter state."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the list of filenames for the shard.", "response": "def get_filenames(cls, mapreduce_state):\n \"\"\"See parent class.\"\"\"\n shards = mapreduce_state.mapreduce_spec.mapper.shard_count\n filenames = []\n for _ in range(shards):\n filenames.append([None] * shards)\n shard_states = model.ShardState.find_all_by_mapreduce_state(mapreduce_state)\n for x, shard_state in enumerate(shard_states):\n 
shard_filenames = shard_state.writer_state[\"shard_filenames\"]\n for y in range(shards):\n filenames[y][x] = shard_filenames[y]\n return filenames"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwrites data to a single file.", "response": "def write(self, data):\n \"\"\"Write data.\n\n Args:\n data: actual data yielded from handler. Type is writer-specific.\n \"\"\"\n ctx = context.get()\n if len(data) != 2:\n logging.error(\"Got bad tuple of length %d (2-tuple expected): %s\",\n len(data), data)\n\n try:\n key = str(data[0])\n value = str(data[1])\n except TypeError:\n logging.error(\"Expecting a tuple, but got %s: %s\",\n data.__class__.__name__, data)\n\n file_index = key.__hash__() % len(self._filehandles)\n\n # Work-around: Since we don't have access to the context in the to_json()\n # function, but we need to flush each pool before we serialize the\n # filehandle, we rely on a member variable instead of using context for\n # pool management.\n pool = self._pools[file_index]\n if pool is None:\n filehandle = self._filehandles[file_index]\n pool = output_writers.GCSRecordsPool(filehandle=filehandle, ctx=ctx)\n self._pools[file_index] = pool\n\n proto = kv_pb.KeyValue()\n proto.set_key(key)\n proto.set_value(value)\n pool.append(proto.Encode())"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_mapper_params(self):\n reader_params = self.input_reader_cls.params_to_json(\n self.input_reader_params)\n # TODO(user): Do the same for writer params.\n return {\"input_reader\": reader_params,\n \"output_writer\": self.output_writer_params}", "response": "Converts self to model.MapperSpec.params."} 
{"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_mapper_spec(self):\n # pylint: disable=g-import-not-at-top\n from mapreduce import model\n\n return model.MapperSpec(\n handler_spec=util._obj_to_path(self.mapper),\n input_reader_spec=util._obj_to_path(self.input_reader_cls),\n params=self._get_mapper_params(),\n shard_count=self.shard_count,\n output_writer_spec=util._obj_to_path(self.output_writer_cls))", "response": "Converts self to model.MapperSpec."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert self to model.MapreduceSpec.params.", "response": "def _get_mr_params(self):\n \"\"\"Converts self to model.MapreduceSpec.params.\"\"\"\n return {\"force_writes\": self._force_writes,\n \"done_callback\": self.done_callback_url,\n \"user_params\": self.user_params,\n \"shard_max_attempts\": self.shard_max_attempts,\n \"task_max_attempts\": self._task_max_attempts,\n \"task_max_data_processing_attempts\":\n self._task_max_data_processing_attempts,\n \"queue_name\": self.queue_name,\n \"base_path\": self._base_path,\n \"app_id\": self._app,\n \"api_version\": self._api_version}"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_default_mr_params(cls):\n cfg = cls(_lenient=True)\n mr_params = cfg._get_mr_params()\n mr_params[\"api_version\"] = 0\n return mr_params", "response": "Gets default values for old API."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert MapreduceSpec to JobConfig.", "response": "def _to_map_job_config(cls,\n mr_spec,\n # TODO(user): Remove this parameter after it can be\n # read from mr_spec.\n queue_name):\n \"\"\"Converts model.MapreduceSpec back to JobConfig.\n\n This method allows our internal methods to use JobConfig directly.\n This method also allows us to expose JobConfig as an API during execution,\n despite 
that it is not saved into datastore.\n\n Args:\n mr_spec: model.MapreduceSpec.\n queue_name: queue name.\n\n Returns:\n The JobConfig object for this job.\n \"\"\"\n mapper_spec = mr_spec.mapper\n # 0 means all the old APIs before api_version is introduced.\n api_version = mr_spec.params.get(\"api_version\", 0)\n old_api = api_version == 0\n\n # Deserialize params from json if input_reader/output_writer are new API.\n input_reader_cls = mapper_spec.input_reader_class()\n input_reader_params = input_readers._get_params(mapper_spec)\n if issubclass(input_reader_cls, input_reader.InputReader):\n input_reader_params = input_reader_cls.params_from_json(\n input_reader_params)\n\n output_writer_cls = mapper_spec.output_writer_class()\n output_writer_params = output_writers._get_params(mapper_spec)\n # TODO(user): Call json (de)serialization for writer.\n # if (output_writer_cls and\n # issubclass(output_writer_cls, output_writer.OutputWriter)):\n # output_writer_params = output_writer_cls.params_from_json(\n # output_writer_params)\n\n # We can not always convert MapreduceSpec generated by older API\n # to JobConfig. 
Thus, mr framework should use/expose the returned JobConfig\n # object with caution when a job is started with an old API.\n # In this case, this method only tries not to blow up and assemble a\n # JobConfig object as accurate as possible.\n return cls(_lenient=old_api,\n job_name=mr_spec.name,\n job_id=mr_spec.mapreduce_id,\n # handler_spec from older API may not have map_job.Mapper type.\n mapper=util.for_name(mapper_spec.handler_spec),\n input_reader_cls=input_reader_cls,\n input_reader_params=input_reader_params,\n output_writer_cls=output_writer_cls,\n output_writer_params=output_writer_params,\n shard_count=mapper_spec.shard_count,\n queue_name=queue_name,\n user_params=mr_spec.params.get(\"user_params\"),\n shard_max_attempts=mr_spec.params.get(\"shard_max_attempts\"),\n done_callback_url=mr_spec.params.get(\"done_callback\"),\n _force_writes=mr_spec.params.get(\"force_writes\"),\n _base_path=mr_spec.params[\"base_path\"],\n _task_max_attempts=mr_spec.params.get(\"task_max_attempts\"),\n _task_max_data_processing_attempts=(\n mr_spec.params.get(\"task_max_data_processing_attempts\")),\n _hooks_cls=util.for_name(mr_spec.hooks_class_name),\n _app=mr_spec.params.get(\"app_id\"),\n _api_version=api_version)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef __write_record(self, record_type, data):\n length = len(data)\n\n crc = crc32c.crc_update(crc32c.CRC_INIT, [record_type])\n crc = crc32c.crc_update(crc, data)\n crc = crc32c.crc_finalize(crc)\n\n self.__writer.write(\n struct.pack(_HEADER_FORMAT, _mask_crc(crc), length, record_type))\n self.__writer.write(data)\n self.__position += _HEADER_LENGTH + length", "response": "Write a single physical record."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef write(self, data):\n block_remaining = _BLOCK_SIZE - self.__position % _BLOCK_SIZE\n\n if block_remaining < _HEADER_LENGTH:\n # Header won't fit 
into remainder\n self.__writer.write('\\x00' * block_remaining)\n self.__position += block_remaining\n block_remaining = _BLOCK_SIZE\n\n if block_remaining < len(data) + _HEADER_LENGTH:\n first_chunk = data[:block_remaining - _HEADER_LENGTH]\n self.__write_record(_RECORD_TYPE_FIRST, first_chunk)\n data = data[len(first_chunk):]\n\n while True:\n block_remaining = _BLOCK_SIZE - self.__position % _BLOCK_SIZE\n if block_remaining >= len(data) + _HEADER_LENGTH:\n self.__write_record(_RECORD_TYPE_LAST, data)\n break\n else:\n chunk = data[:block_remaining - _HEADER_LENGTH]\n self.__write_record(_RECORD_TYPE_MIDDLE, chunk)\n data = data[len(chunk):]\n else:\n self.__write_record(_RECORD_TYPE_FULL, data)", "response": "Writes a single record."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npadding current block with 0.", "response": "def _pad_block(self):\n \"\"\"Pad block with 0.\n\n Pad current block with 0. Reader will simply treat these as corrupted\n record and skip the block.\n\n This method is idempotent.\n \"\"\"\n pad_length = _BLOCK_SIZE - self.__position % _BLOCK_SIZE\n if pad_length and pad_length != _BLOCK_SIZE:\n self.__writer.write('\\x00' * pad_length)\n self.__position += pad_length"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef __try_read_record(self):\n block_remaining = _BLOCK_SIZE - self.__reader.tell() % _BLOCK_SIZE\n if block_remaining < _HEADER_LENGTH:\n return ('', _RECORD_TYPE_NONE)\n\n header = self.__reader.read(_HEADER_LENGTH)\n if len(header) != _HEADER_LENGTH:\n raise EOFError('Read %s bytes instead of %s' %\n (len(header), _HEADER_LENGTH))\n\n (masked_crc, length, record_type) = struct.unpack(_HEADER_FORMAT, header)\n crc = _unmask_crc(masked_crc)\n\n if length + _HEADER_LENGTH > block_remaining:\n # A record can't be bigger than one block.\n raise errors.InvalidRecordError('Length is too big')\n\n data = self.__reader.read(length)\n if len(data) != length:\n 
raise EOFError('Not enough data read. Expected: %s but got %s' %\n (length, len(data)))\n\n if record_type == _RECORD_TYPE_NONE:\n return ('', record_type)\n\n actual_crc = crc32c.crc_update(crc32c.CRC_INIT, [record_type])\n actual_crc = crc32c.crc_update(actual_crc, data)\n actual_crc = crc32c.crc_finalize(actual_crc)\n\n if actual_crc != crc:\n raise errors.InvalidRecordError('Data crc does not match')\n return (data, record_type)", "response": "Try reading a record."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef __sync(self):\n pad_length = _BLOCK_SIZE - self.__reader.tell() % _BLOCK_SIZE\n if pad_length and pad_length != _BLOCK_SIZE:\n data = self.__reader.read(pad_length)\n if len(data) != pad_length:\n raise EOFError('Read %d bytes instead of %d' %\n (len(data), pad_length))", "response": "Skip reader to the block boundary."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads a single record from the current position in reader.", "response": "def read(self):\n \"\"\"Reads record from current position in reader.\n\n Returns:\n original bytes stored in a single record.\n \"\"\"\n data = None\n while True:\n last_offset = self.tell()\n try:\n (chunk, record_type) = self.__try_read_record()\n if record_type == _RECORD_TYPE_NONE:\n self.__sync()\n elif record_type == _RECORD_TYPE_FULL:\n if data is not None:\n logging.warning(\n \"Ordering corruption: Got FULL record while already \"\n \"in a chunk at offset %d\", last_offset)\n return chunk\n elif record_type == _RECORD_TYPE_FIRST:\n if data is not None:\n logging.warning(\n \"Ordering corruption: Got FIRST record while already \"\n \"in a chunk at offset %d\", last_offset)\n data = chunk\n elif record_type == _RECORD_TYPE_MIDDLE:\n if data is None:\n logging.warning(\n \"Ordering corruption: Got MIDDLE record before FIRST \"\n \"record at offset %d\", last_offset)\n else:\n data += 
chunk\n elif record_type == _RECORD_TYPE_LAST:\n if data is None:\n logging.warning(\n \"Ordering corruption: Got LAST record but no chunk is in \"\n \"progress at offset %d\", last_offset)\n else:\n result = data + chunk\n data = None\n return result\n else:\n raise errors.InvalidRecordError(\n \"Unsupported record type: %s\" % record_type)\n\n except errors.InvalidRecordError as e:\n logging.warning(\"Invalid record encountered at %s (%s). Syncing to \"\n \"the next block\", last_offset, e)\n data = None\n self.__sync()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nstarting a mapreduce job. Args: job_name: mapreduce name. Only for display purpose. handler_spec: fully qualified name to your map function/class. input_reader_spec: fully qualified name to input reader class. output_writer_spec: fully qualified name to output writer class. params: a dictionary of parameters for input reader and output writer initialization. shards: number of shards. This provides a guide to mapreduce. The real number of shards is determined by how inputs are split.", "response": "def run(self,\n job_name,\n handler_spec,\n input_reader_spec,\n output_writer_spec=None,\n params=None,\n shards=None,\n base_path=None):\n \"\"\"Start a mapreduce job.\n\n Args:\n job_name: mapreduce name. Only for display purpose.\n handler_spec: fully qualified name to your map function/class.\n input_reader_spec: fully qualified name to input reader class.\n output_writer_spec: fully qualified name to output writer class.\n params: a dictionary of parameters for input reader and output writer\n initialization.\n shards: number of shards. This provides a guide to mapreduce. 
The real\n number of shards is determined by how inputs are split.\n \"\"\"\n if shards is None:\n shards = parameters.config.SHARD_COUNT\n if base_path is None:\n base_path = parameters.config.BASE_PATH\n mapreduce_id = control.start_map(\n job_name,\n handler_spec,\n input_reader_spec,\n params or {},\n mapreduce_parameters={\n \"done_callback\": self.get_callback_url(),\n \"done_callback_method\": \"GET\",\n \"pipeline_id\": self.pipeline_id,\n \"base_path\": base_path,\n },\n shard_count=shards,\n output_writer_spec=output_writer_spec,\n queue_name=self.queue_name,\n )\n self.fill(self.outputs.job_id, mapreduce_id)\n self.set_status(console_url=\"%s/detail?mapreduce_id=%s\" % (\n (base_path, mapreduce_id)))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef start_map(name,\n handler_spec,\n reader_spec,\n mapper_parameters,\n shard_count=None,\n output_writer_spec=None,\n mapreduce_parameters=None,\n base_path=None,\n queue_name=None,\n eta=None,\n countdown=None,\n hooks_class_name=None,\n _app=None,\n in_xg_transaction=False):\n \"\"\"Start a new, mapper-only mapreduce.\n\n Deprecated! Use map_job.start instead.\n\n If a value can be specified both from an explicit argument and from\n a dictionary, the value from the explicit argument wins.\n\n Args:\n name: mapreduce name. Used only for display purposes.\n handler_spec: fully qualified name of mapper handler function/class to call.\n reader_spec: fully qualified name of mapper reader to use\n mapper_parameters: dictionary of parameters to pass to mapper. These are\n mapper-specific and also used for reader/writer initialization.\n Should have format {\"input_reader\": {}, \"output_writer\":{}}. 
Old\n deprecated style does not have sub dictionaries.\n shard_count: number of shards to create.\n mapreduce_parameters: dictionary of mapreduce parameters relevant to the\n whole job.\n base_path: base path of mapreduce library handler specified in app.yaml.\n \"/mapreduce\" by default.\n queue_name: taskqueue queue name to be used for mapreduce tasks.\n see util.get_queue_name.\n eta: absolute time when the MR should execute. May not be specified\n if 'countdown' is also supplied. This may be timezone-aware or\n timezone-naive.\n countdown: time in seconds into the future that this MR should execute.\n Defaults to zero.\n hooks_class_name: fully qualified name of a hooks.Hooks subclass.\n in_xg_transaction: controls what transaction scope to use to start this MR\n job. If True, there has to be an already opened cross-group transaction\n scope. MR will use one entity group from it.\n If False, MR will create an independent transaction to start the job\n regardless of any existing transaction scopes.\n\n Returns:\n mapreduce id as string.\n \"\"\"\n if shard_count is None:\n shard_count = parameters.config.SHARD_COUNT\n\n if mapper_parameters:\n mapper_parameters = dict(mapper_parameters)\n\n # Make sure this old API fill all parameters with default values.\n mr_params = map_job.JobConfig._get_default_mr_params()\n if mapreduce_parameters:\n mr_params.update(mapreduce_parameters)\n\n # Override default values if user specified them as arguments.\n if base_path:\n mr_params[\"base_path\"] = base_path\n mr_params[\"queue_name\"] = util.get_queue_name(queue_name)\n\n mapper_spec = model.MapperSpec(handler_spec,\n reader_spec,\n mapper_parameters,\n shard_count,\n output_writer_spec=output_writer_spec)\n\n if in_xg_transaction and not db.is_in_transaction():\n logging.warning(\"Expects an opened xg transaction to start mapreduce \"\n \"when transactional is True.\")\n\n return handlers.StartJobHandler._start_map(\n name,\n mapper_spec,\n mr_params,\n # TODO(user): Now 
that \"queue_name\" is part of mr_params.\n # Remove all the other ways to get queue_name after one release.\n queue_name=mr_params[\"queue_name\"],\n eta=eta,\n countdown=countdown,\n hooks_class_name=hooks_class_name,\n _app=_app,\n in_xg_transaction=in_xg_transaction)", "response": "Start a new mapreduce job."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets status enum. Returns: One of the status enum.", "response": "def get_status(self):\n \"\"\"Get status enum.\n\n Returns:\n One of the status enum.\n \"\"\"\n self.__update_state()\n if self._state.active:\n return self.RUNNING\n else:\n return self._state.result_status"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the value of the named counter from this job.", "response": "def get_counter(self, counter_name, default=0):\n \"\"\"Get the value of the named counter from this job.\n\n When a job is running, counter values won't be very accurate.\n\n Args:\n counter_name: name of the counter in string.\n default: default value if the counter doesn't exist.\n\n Returns:\n Value in int of the named counter.\n \"\"\"\n self.__update_state()\n return self._state.counters_map.get(counter_name, default)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_outputs(self):\n assert self.SUCCESS == self.get_status()\n ss = model.ShardState.find_all_by_mapreduce_state(self._state)\n for s in ss:\n yield iter(s.writer_state.get(\"outs\", []))", "response": "Get outputs of this job."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef submit(cls, job_config, in_xg_transaction=False):\n cls.__validate_job_config(job_config)\n mapper_spec = job_config._get_mapper_spec()\n\n # Create mr spec.\n mapreduce_params = job_config._get_mr_params()\n mapreduce_spec = model.MapreduceSpec(\n job_config.job_name,\n job_config.job_id,\n 
mapper_spec.to_json(),\n mapreduce_params,\n util._obj_to_path(job_config._hooks_cls))\n\n # Save states and enqueue task.\n if in_xg_transaction:\n propagation = db.MANDATORY\n else:\n propagation = db.INDEPENDENT\n\n state = None\n @db.transactional(propagation=propagation)\n def _txn():\n state = cls.__create_and_save_state(job_config, mapreduce_spec)\n cls.__add_kickoff_task(job_config, mapreduce_spec)\n return state\n\n state = _txn()\n return cls(state)", "response": "Submit the job to run."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef __update_state(self):\n # Only if the job was not in a terminal state.\n if self._state.active:\n self._state = self.__get_state_by_id(self.job_config.job_id)", "response": "Updates the state of the job."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef __get_state_by_id(cls, job_id):\n state = model.MapreduceState.get_by_job_id(job_id)\n if state is None:\n raise ValueError(\"Job state for job %s is missing.\" % job_id)\n return state", "response": "Get the job state by id."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef __create_and_save_state(cls, job_config, mapreduce_spec):\n state = model.MapreduceState.create_new(job_config.job_id)\n state.mapreduce_spec = mapreduce_spec\n state.active = True\n state.active_shards = 0\n state.app_id = job_config._app\n config = datastore_rpc.Configuration(force_writes=job_config._force_writes)\n state.put(config=config)\n return state", "response": "Create and save the map job state to datastore."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd kickoff task to taskqueue.", "response": "def __add_kickoff_task(cls, job_config, mapreduce_spec):\n \"\"\"Add kickoff task to taskqueue.\n\n Args:\n job_config: map_job.JobConfig.\n mapreduce_spec: 
model.MapreduceSpec,\n \"\"\"\n params = {\"mapreduce_id\": job_config.job_id}\n # Task is not named so that it can be added within a transaction.\n kickoff_task = taskqueue.Task(\n # TODO(user): Perhaps make this url a computed field of job_config.\n url=job_config._base_path + \"/kickoffjob_callback/\" + job_config.job_id,\n headers=util._get_task_headers(job_config.job_id),\n params=params)\n if job_config._hooks_cls:\n hooks = job_config._hooks_cls(mapreduce_spec)\n try:\n hooks.enqueue_kickoff_task(kickoff_task, job_config.queue_name)\n return\n except NotImplementedError:\n pass\n kickoff_task.add(job_config.queue_name, transactional=True)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef to_json_str(self):\n _json = self.to_json()\n try:\n return json.dumps(_json, sort_keys=True, cls=JsonEncoder)\n except:\n logging.exception(\"Could not serialize JSON: %r\", _json)\n raise", "response": "Convert data to json string representation."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts json string representation into object.", "response": "def from_json_str(cls, json_str):\n \"\"\"Convert json string representation into class instance.\n\n Args:\n json_str: json representation as string.\n\n Returns:\n New instance of the class with data loaded from json string.\n \"\"\"\n return cls.from_json(json.loads(json_str, cls=JsonDecoder))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets value for the datastore.", "response": "def get_value_for_datastore(self, model_instance):\n \"\"\"Gets value for datastore.\n\n Args:\n model_instance: instance of the model class.\n\n Returns:\n datastore-compatible value.\n \"\"\"\n value = super(JsonProperty, self).get_value_for_datastore(model_instance)\n if not value:\n return None\n json_value = value\n if not isinstance(value, dict):\n json_value = value.to_json()\n if not 
json_value:\n return None\n return datastore_types.Text(json.dumps(\n json_value, sort_keys=True, cls=JsonEncoder))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_value_from_datastore(self, value):\n\n if value is None:\n return None\n _json = json.loads(value, cls=JsonDecoder)\n if self.data_type == dict:\n return _json\n return self.data_type.from_json(_json)", "response": "Convert value from datastore representation."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nvalidate the specified value.", "response": "def validate(self, value):\n \"\"\"Validate value.\n\n Args:\n value: model value.\n\n Returns:\n Whether the specified value is valid data type value.\n\n Raises:\n BadValueError: when value is not of self.data_type type.\n \"\"\"\n if value is not None and not isinstance(value, self.data_type):\n raise datastore_errors.BadValueError(\n \"Property %s must be convertible to a %s instance (%s)\" %\n (self.name, self.data_type, value))\n return super(JsonProperty, self).validate(value)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add(self, queue_name, transactional=False):\n task = self.to_task()\n task.add(queue_name, transactional)", "response": "Add a task to the queue."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert to a taskqueue task.", "response": "def to_task(self):\n \"\"\"Convert to a taskqueue task.\"\"\"\n # Never pass params to taskqueue.Task. Use payload instead. Otherwise,\n # it's up to a particular taskqueue implementation to generate\n # payload from params. 
It could blow up payload size over limit.\n return taskqueue.Task(\n url=self.url,\n payload=self._payload,\n name=self.name,\n eta=self.eta,\n countdown=self.countdown,\n headers=self._headers)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef decode_payload(cls, request):\n # TODO(user): Pass mr_id into headers. Otherwise when payload decoding\n # failed, we can't abort a mr.\n if request.headers.get(cls.PAYLOAD_VERSION_HEADER) != cls.PAYLOAD_VERSION:\n raise DeprecationWarning(\n \"Task is generated by an older incompatible version of mapreduce. \"\n \"Please kill this job manually\")\n return cls._decode_payload(request.body)", "response": "Decode the payload of a Huge task."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nincrements the value of a counter.", "response": "def increment(self, counter_name, delta):\n \"\"\"Increment counter value.\n\n Args:\n counter_name: counter name as String.\n delta: increment delta as Integer.\n\n Returns:\n new counter value.\n \"\"\"\n current_value = self.counters.get(counter_name, 0)\n new_value = current_value + delta\n self.counters[counter_name] = new_value\n return new_value"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_map(self, counters_map):\n for counter_name in counters_map.counters:\n self.increment(counter_name, counters_map.counters[counter_name])", "response": "Adds all counters from the passed map to the current object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef sub_map(self, counters_map):\n for counter_name in counters_map.counters:\n self.increment(counter_name, -counters_map.counters[counter_name])", "response": "Subtracts all counters from the passed map."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nserializing this 
MapperSpec into a json-izable object.", "response": "def to_json(self):\n    \"\"\"Serializes this MapperSpec into a json-izable object.\"\"\"\n    result = {\n        \"mapper_handler_spec\": self.handler_spec,\n        \"mapper_input_reader\": self.input_reader_spec,\n        \"mapper_params\": self.params,\n        \"mapper_shard_count\": self.shard_count\n    }\n    if self.output_writer_spec:\n      result[\"mapper_output_writer\"] = self.output_writer_spec\n    return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_hooks(self):\n    if self.__hooks is None and self.hooks_class_name is not None:\n      hooks_class = util.for_name(self.hooks_class_name)\n      if not isinstance(hooks_class, type):\n        raise ValueError(\"hooks_class_name must refer to a class, got %s\" %\n                         type(hooks_class).__name__)\n      if not issubclass(hooks_class, hooks.Hooks):\n        raise ValueError(\n            \"hooks_class_name must refer to a hooks.Hooks subclass\")\n      self.__hooks = hooks_class(self)\n\n    return self.__hooks", "response": "Returns a hooks. 
Hooks class or None if no hooks class has been set."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nserializes all data in this mapreduce spec into json form.", "response": "def to_json(self):\n \"\"\"Serializes all data in this mapreduce spec into json form.\n\n Returns:\n data in json format.\n \"\"\"\n mapper_spec = self.mapper.to_json()\n return {\n \"name\": self.name,\n \"mapreduce_id\": self.mapreduce_id,\n \"mapper_spec\": mapper_spec,\n \"params\": self.params,\n \"hooks_class_name\": self.hooks_class_name,\n }"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a MapreduceSpec from the json representation of a MapreduceSpec.", "response": "def from_json(cls, json):\n \"\"\"Create new MapreduceSpec from the json, encoded by to_json.\n\n Args:\n json: json representation of MapreduceSpec.\n\n Returns:\n an instance of MapreduceSpec with all data deserialized from json.\n \"\"\"\n mapreduce_spec = cls(json[\"name\"],\n json[\"mapreduce_id\"],\n json[\"mapper_spec\"],\n json.get(\"params\"),\n json.get(\"hooks_class_name\"))\n return mapreduce_spec"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_mapreduce_spec(cls, mr_id):\n key = 'GAE-MR-spec: %s' % mr_id\n spec_json = memcache.get(key)\n if spec_json:\n return cls.from_json(spec_json)\n state = MapreduceState.get_by_job_id(mr_id)\n spec = state.mapreduce_spec\n spec_json = spec.to_json()\n memcache.set(key, spec_json)\n return spec", "response": "Get Mapreduce spec from mr id."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nretrieving the Key for a Job.", "response": "def get_key_by_job_id(cls, mapreduce_id):\n \"\"\"Retrieves the Key for a Job.\n\n Args:\n mapreduce_id: The job to retrieve.\n\n Returns:\n Datastore Key that can be used to fetch the MapreduceState.\n \"\"\"\n return db.Key.from_path(cls.kind(), str(mapreduce_id))"} 
{"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_processed_counts(self, shards_processed, shards_status):\n chart = google_chart_api.BarChart()\n\n def filter_status(status_to_filter):\n return [count if status == status_to_filter else 0\n for count, status in zip(shards_processed, shards_status)]\n\n if shards_status:\n # Each index will have only one non-zero count, so stack them to color-\n # code the bars by status\n # These status values are computed in _update_state_from_shard_states,\n # in mapreduce/handlers.py.\n chart.stacked = True\n chart.AddBars(filter_status(\"unknown\"), color=\"404040\")\n chart.AddBars(filter_status(\"success\"), color=\"00ac42\")\n chart.AddBars(filter_status(\"running\"), color=\"3636a9\")\n chart.AddBars(filter_status(\"aborted\"), color=\"e29e24\")\n chart.AddBars(filter_status(\"failed\"), color=\"f6350f\")\n else:\n chart.AddBars(shards_processed)\n\n shard_count = len(shards_processed)\n\n if shard_count > 95:\n # Auto-spacing does not work for large numbers of shards.\n pixels_per_shard = 700.0 / shard_count\n bar_thickness = int(pixels_per_shard * .9)\n\n chart.style = bar_chart.BarChartStyle(bar_thickness=bar_thickness,\n bar_gap=0.1, use_fractional_gap_spacing=True)\n\n if shards_processed and shard_count <= 95:\n # Adding labels puts us in danger of exceeding the URL length, only\n # do it when we have a small amount of data to plot.\n # Only 16 labels on the whole chart.\n stride_length = max(1, shard_count / 16)\n chart.bottom.labels = []\n for x in xrange(shard_count):\n if (x % stride_length == 0 or\n x == shard_count - 1):\n chart.bottom.labels.append(x)\n else:\n chart.bottom.labels.append(\"\")\n chart.left.labels = [\"0\", str(max(shards_processed))]\n chart.left.min = 0\n\n self.chart_width = min(700, max(300, shard_count * 20))\n self.chart_url = chart.display.Url(self.chart_width, 200)", "response": "Updates a chart url to display 
processed count for each shard."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new MapreduceState object.", "response": "def create_new(mapreduce_id=None,\n                 gettime=datetime.datetime.now):\n    \"\"\"Create a new MapreduceState.\n\n    Args:\n      mapreduce_id: Mapreduce id as string.\n      gettime: Used for testing.\n    \"\"\"\n    if not mapreduce_id:\n      mapreduce_id = MapreduceState.new_mapreduce_id()\n    state = MapreduceState(key_name=mapreduce_id,\n                           last_poll_time=gettime())\n    state.set_processed_counts([], [])\n    return state"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reset_for_retry(self, output_writer):\n    self.input_reader = self.initial_input_reader\n    self.slice_id = 0\n    self.retries += 1\n    self.output_writer = output_writer\n    self.handler = self.mapreduce_spec.mapper.handler", "response": "Reset self for shard retry."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadvances relevant states for next slice.", "response": "def advance_for_next_slice(self, recovery_slice=False):\n    \"\"\"Advance relevant states for next slice.\n\n    Args:\n      recovery_slice: True if this slice is running recovery logic.\n        See handlers.MapperWorkerCallbackHandler._attempt_slice_recovery\n        for more info.\n    \"\"\"\n    if recovery_slice:\n      self.slice_id += 2\n      # Restore input reader to the beginning of the slice.\n      self.input_reader = self.input_reader.from_json(self._input_reader_json)\n    else:\n      self.slice_id += 1"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_dict(self):\n    result = {\"mapreduce_spec\": self.mapreduce_spec.to_json_str(),\n              \"shard_id\": self.shard_id,\n              \"slice_id\": str(self.slice_id),\n              \"input_reader_state\": self.input_reader.to_json_str(),\n              \"initial_input_reader_state\":\n              self.initial_input_reader.to_json_str(),\n              \"retries\": str(self.retries)}\n    if 
self.output_writer:\n result[\"output_writer_state\"] = self.output_writer.to_json_str()\n serialized_handler = util.try_serialize_handler(self.handler)\n if serialized_handler:\n result[\"serialized_handler\"] = serialized_handler\n return result", "response": "Convert state to dictionary to save in task payload."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef from_request(cls, request):\n mapreduce_spec = MapreduceSpec.from_json_str(request.get(\"mapreduce_spec\"))\n mapper_spec = mapreduce_spec.mapper\n input_reader_spec_dict = json.loads(request.get(\"input_reader_state\"),\n cls=json_util.JsonDecoder)\n input_reader = mapper_spec.input_reader_class().from_json(\n input_reader_spec_dict)\n initial_input_reader_spec_dict = json.loads(\n request.get(\"initial_input_reader_state\"), cls=json_util.JsonDecoder)\n initial_input_reader = mapper_spec.input_reader_class().from_json(\n initial_input_reader_spec_dict)\n\n output_writer = None\n if mapper_spec.output_writer_class():\n output_writer = mapper_spec.output_writer_class().from_json(\n json.loads(request.get(\"output_writer_state\", \"{}\"),\n cls=json_util.JsonDecoder))\n assert isinstance(output_writer, mapper_spec.output_writer_class()), (\n \"%s.from_json returned an instance of wrong class: %s\" % (\n mapper_spec.output_writer_class(),\n output_writer.__class__))\n\n handler = util.try_deserialize_handler(request.get(\"serialized_handler\"))\n if not handler:\n handler = mapreduce_spec.mapper.handler\n\n return cls(mapreduce_spec.params[\"base_path\"],\n mapreduce_spec,\n str(request.get(\"shard_id\")),\n int(request.get(\"slice_id\")),\n input_reader,\n initial_input_reader,\n output_writer=output_writer,\n retries=int(request.get(\"retries\")),\n handler=handler)", "response": "Create TransientShardState from webapp request."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
reset_for_retry(self):\n    self.retries += 1\n    self.last_work_item = \"\"\n    self.active = True\n    self.result_status = None\n    self.input_finished = False\n    self.counters_map = CountersMap()\n    self.slice_id = 0\n    self.slice_start_time = None\n    self.slice_request_id = None\n    self.slice_retries = 0\n    self.acquired_once = False", "response": "Reset self for shard retry."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef advance_for_next_slice(self, recovery_slice=False):\n    self.slice_start_time = None\n    self.slice_request_id = None\n    self.slice_retries = 0\n    self.acquired_once = False\n    if recovery_slice:\n      self.slice_id += 2\n    else:\n      self.slice_id += 1", "response": "Advance self for next slice."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef copy_from(self, other_state):\n    for prop in self.properties().values():\n      setattr(self, prop.name, getattr(other_state, prop.name))", "response": "Copy data from another shard state entity to self."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfinds all shard states for given mapreduce state.", "response": "def find_all_by_mapreduce_state(cls, mapreduce_state):\n    \"\"\"Find all shard states for given mapreduce.\n\n    Args:\n      mapreduce_state: MapreduceState instance\n\n    Yields:\n      shard states sorted by shard id.\n    \"\"\"\n    keys = cls.calculate_keys_by_mapreduce_state(mapreduce_state)\n    i = 0\n    while i < len(keys):\n      @db.non_transactional\n      def no_tx_get(i):\n        return db.get(keys[i:i+cls._MAX_STATES_IN_MEMORY])\n      # We need a separate function so that we can mix in non-transactional\n      # calls and still be a generator\n      states = no_tx_get(i)\n      for s in states:\n        i += 1\n        if s is not None:\n          yield s"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef calculate_keys_by_mapreduce_state(cls, mapreduce_state):\n    if mapreduce_state is 
None:\n return []\n\n keys = []\n for i in range(mapreduce_state.mapreduce_spec.mapper.shard_count):\n shard_id = cls.shard_id_from_number(mapreduce_state.key().name(), i)\n keys.append(cls.get_key_by_shard_id(shard_id))\n return keys", "response": "Calculate all shard states keys for given mapreduce."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new shard state.", "response": "def create_new(cls, mapreduce_id, shard_number):\n \"\"\"Create new shard state.\n\n Args:\n mapreduce_id: unique mapreduce id as string.\n shard_number: shard number for which to create shard state.\n\n Returns:\n new instance of ShardState ready to put into datastore.\n \"\"\"\n shard_id = cls.shard_id_from_number(mapreduce_id, shard_number)\n state = cls(key_name=shard_id,\n mapreduce_id=mapreduce_id)\n return state"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nretrieves the Key for a given mapreduce ID.", "response": "def get_key_by_job_id(cls, mapreduce_id):\n \"\"\"Retrieves the Key for a mapreduce ID.\n\n Args:\n mapreduce_id: The job to fetch.\n\n Returns:\n Datastore Key for the command for the given job ID.\n \"\"\"\n return db.Key.from_path(cls.kind(), \"%s:%s\" % (mapreduce_id, cls._KEY_NAME))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nabort a job in a sequence.", "response": "def abort(cls, mapreduce_id, **kwargs):\n \"\"\"Causes a job to abort.\n\n Args:\n mapreduce_id: The job to abort. 
Not verified as a valid job.\n \"\"\"\n cls(key_name=\"%s:%s\" % (mapreduce_id, cls._KEY_NAME),\n command=cls.ABORT).put(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef initialize(self, request, response):\n super(TaskQueueHandler, self).initialize(request, response)\n\n # Check request is from taskqueue.\n if \"X-AppEngine-QueueName\" not in self.request.headers:\n logging.error(self.request.headers)\n logging.error(\"Task queue handler received non-task queue request\")\n self.response.set_status(\n 403, message=\"Task queue handler received non-task queue request\")\n return\n\n # Check task has not been retried too many times.\n if self.task_retry_count() + 1 > parameters.config.TASK_MAX_ATTEMPTS:\n logging.error(\n \"Task %s has been attempted %s times. Dropping it permanently.\",\n self.request.headers[\"X-AppEngine-TaskName\"],\n self.task_retry_count() + 1)\n self._drop_gracefully()\n return\n\n try:\n self._preprocess()\n self._preprocess_success = True\n # pylint: disable=bare-except\n except:\n self._preprocess_success = False\n logging.error(\n \"Preprocess task %s failed. 
Dropping it permanently.\",\n          self.request.headers[\"X-AppEngine-TaskName\"])\n      self._drop_gracefully()", "response": "Initialize the task queue."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef retry_task(self):\n    self.response.set_status(httplib.SERVICE_UNAVAILABLE, \"Retry task\")\n    self.response.clear()", "response": "Ask taskqueue to retry this task."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the base path for all mapreduce-related urls.", "response": "def base_path(self):\n    \"\"\"Base path for all mapreduce-related urls.\n\n    JSON handlers are mapped to /base_path/command/command_name thus they\n    require special treatment.\n\n    Raises:\n      BadRequestPathError: if the path does not end with \"/command\".\n\n    Returns:\n      The base path.\n    \"\"\"\n    path = self.request.path\n    base_path = path[:path.rfind(\"/\")]\n    if not base_path.endswith(\"/command\"):\n      raise BadRequestPathError(\n          \"Json handlers should have /command path prefix\")\n    return base_path[:base_path.rfind(\"/\")]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read(self, n):\n    if self._EOF:\n      return \"\"\n\n    while self._seg_index <= self._last_seg_index:\n      result = self._read_from_seg(n)\n      if result != \"\":\n        return result\n      else:\n        self._next_seg()\n\n    self._EOF = True\n    return \"\"", "response": "Reads n bytes from the segments in the file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _read_from_seg(self, n):\n    result = self._seg.read(size=n)\n    if result == \"\":\n      return result\n    offset = self._seg.tell()\n    if offset > self._seg_valid_length:\n      extra = offset - self._seg_valid_length\n      result = result[:-1*extra]\n    self._offset += len(result)\n    return result", "response": "Read n bytes from the current seg."} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 script for\nreturning a key name lexically ordered by time descending.", "response": "def _get_descending_key(gettime=time.time):\n \"\"\"Returns a key name lexically ordered by time descending.\n\n This lets us have a key name for use with Datastore entities which returns\n rows in time descending order when it is scanned in lexically ascending order,\n allowing us to bypass index building for descending indexes.\n\n Args:\n gettime: Used for testing.\n\n Returns:\n A string with a time descending key.\n \"\"\"\n now_descending = int((_FUTURE_TIME - gettime()) * 100)\n request_id_hash = os.environ.get(\"REQUEST_ID_HASH\")\n if not request_id_hash:\n request_id_hash = str(random.getrandbits(32))\n return \"%d%s\" % (now_descending, request_id_hash)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the Host header value for all mr tasks.", "response": "def _get_task_host():\n \"\"\"Get the Host header value for all mr tasks.\n\n Task Host header determines which instance this task would be routed to.\n\n Current version id format is: v7.368834058928280579\n Current module id is just the module's name. It could be \"default\"\n Default version hostname is app_id.appspot.com\n\n Returns:\n A complete host name is of format version.module.app_id.appspot.com\n If module is the default module, just version.app_id.appspot.com. The reason\n is if an app doesn't have modules enabled and the url is\n \"version.default.app_id\", \"version\" is ignored and \"default\" is used as\n version. 
If \"default\" version doesn't exist, the url is routed to the\n default version.\n \"\"\"\n version = os.environ[\"CURRENT_VERSION_ID\"].split(\".\")[0]\n default_host = os.environ[\"DEFAULT_VERSION_HOSTNAME\"]\n module = os.environ[\"CURRENT_MODULE_ID\"]\n if os.environ[\"CURRENT_MODULE_ID\"] == \"default\":\n return \"%s.%s\" % (version, default_host)\n return \"%s.%s.%s\" % (version, module, default_host)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_queue_name(queue_name):\n if queue_name:\n return queue_name\n queue_name = os.environ.get(\"HTTP_X_APPENGINE_QUEUENAME\",\n parameters.config.QUEUE_NAME)\n if len(queue_name) > 1 and queue_name[0:2] == \"__\":\n # We are currently in some special queue. E.g. __cron.\n return parameters.config.QUEUE_NAME\n else:\n return queue_name", "response": "Determine which queue MR should run on."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef total_seconds(td):\n secs = td.seconds + td.days * 24 * 3600\n if td.microseconds:\n secs += 1\n return secs", "response": "convert a timedelta to seconds."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nresolves and instantiates a handler by fully qualified name.", "response": "def handler_for_name(fq_name):\n \"\"\"Resolves and instantiates handler by fully qualified name.\n\n First resolves the name using for_name call. 
Then if it resolves to a class,\n instantiates a class, if it resolves to a method - instantiates the class and\n binds method to the instance.\n\n Args:\n fq_name: fully qualified name of something to find.\n\n Returns:\n handler instance which is ready to be called.\n \"\"\"\n resolved_name = for_name(fq_name)\n if isinstance(resolved_name, (type, types.ClassType)):\n # create new instance if this is type\n return resolved_name()\n elif isinstance(resolved_name, types.MethodType):\n # bind the method\n return getattr(resolved_name.im_class(), resolved_name.__name__)\n else:\n return resolved_name"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntrying to serialize a function or an object.", "response": "def try_serialize_handler(handler):\n \"\"\"Try to serialize map/reduce handler.\n\n Args:\n handler: handler function/instance. Handler can be a function or an\n instance of a callable class. In the latter case, the handler will\n be serialized across slices to allow users to save states.\n\n Returns:\n serialized handler string or None.\n \"\"\"\n if (isinstance(handler, types.InstanceType) or # old style class\n (isinstance(handler, object) and # new style class\n not inspect.isfunction(handler) and\n not inspect.ismethod(handler)) and\n hasattr(handler, \"__call__\")):\n return pickle.dumps(handler)\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn true if the object is generator or generator function.", "response": "def is_generator(obj):\n \"\"\"Return true if the object is generator or generator function.\n\n Generator function objects provides same attributes as functions.\n See isfunction.__doc__ for attributes listing.\n\n Adapted from Python 2.6.\n\n Args:\n obj: an object to test.\n\n Returns:\n true if the object is generator function.\n \"\"\"\n if isinstance(obj, types.GeneratorType):\n return True\n\n CO_GENERATOR = 0x20\n return bool(((inspect.isfunction(obj) or 
inspect.ismethod(obj)) and\n               obj.func_code.co_flags & CO_GENERATOR))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef parse_bool(obj):\n  if type(obj) is str:\n    TRUTH_VALUE_SET = [\"true\", \"1\", \"yes\", \"t\", \"on\"]\n    return obj.lower() in TRUTH_VALUE_SET\n  else:\n    return bool(obj)", "response": "Returns true if the object represents a truth value, false otherwise."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_datastore_write_config(mapreduce_spec):\n  force_writes = parse_bool(mapreduce_spec.params.get(\"force_writes\", \"false\"))\n  if force_writes:\n    return datastore_rpc.Configuration(force_writes=force_writes)\n  else:\n    # dev server doesn't support force_writes.\n    return datastore_rpc.Configuration()", "response": "Creates datastore config to use in write operations."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntell NDB to never cache anything in memcache or in-process.", "response": "def _set_ndb_cache_policy():\n  \"\"\"Tell NDB to never cache anything in memcache or in-process.\n\n  This ensures that entities fetched from Datastore input_readers via NDB\n  will not bloat up the request memory size and Datastore Puts will avoid\n  doing calls to memcache. 
Without this you get soft memory limit exits,\n which hurts overall throughput.\n \"\"\"\n ndb_ctx = ndb.get_context()\n ndb_ctx.set_cache_policy(lambda key: False)\n ndb_ctx.set_memcache_policy(lambda key: False)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _obj_to_path(obj):\n if obj is None:\n return obj\n\n if inspect.isclass(obj) or inspect.isfunction(obj):\n fetched = getattr(sys.modules[obj.__module__], obj.__name__, None)\n if fetched is None:\n raise ValueError(\n \"Object %r must be defined on the top level of a module.\" % obj)\n return \"%s.%s\" % (obj.__module__, obj.__name__)\n raise TypeError(\"Unexpected type %s.\" % type(obj))", "response": "Returns the fully qualified path to the object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef strip_prefix_from_items(prefix, items):\n items_no_prefix = []\n for item in items:\n if item.startswith(prefix):\n items_no_prefix.append(item[len(prefix):])\n else:\n items_no_prefix.append(item)\n return items_no_prefix", "response": "Strips out the prefix from each of the items if it is present."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninvoke the method on the taskqueue. 
Task object task.", "response": "def _run_task_hook(hooks, method, task, queue_name):\n \"\"\"Invokes hooks.method(task, queue_name).\n\n Args:\n hooks: A hooks.Hooks instance or None.\n method: The name of the method to invoke on the hooks class e.g.\n \"enqueue_kickoff_task\".\n task: The taskqueue.Task to pass to the hook method.\n queue_name: The name of the queue to pass to the hook method.\n\n Returns:\n True if the hooks.Hooks instance handled the method, False otherwise.\n \"\"\"\n if hooks is not None:\n try:\n getattr(hooks, method)(task, queue_name)\n except NotImplementedError:\n # Use the default task addition implementation.\n return False\n\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndropping worker task gracefully.", "response": "def _drop_gracefully(self):\n \"\"\"Drop worker task gracefully.\n\n Set current shard_state to failed. Controller logic will take care of\n other shards and the entire MR.\n \"\"\"\n shard_id = self.request.headers[util._MR_SHARD_ID_TASK_HEADER]\n mr_id = self.request.headers[util._MR_ID_TASK_HEADER]\n shard_state, mr_state = db.get([\n model.ShardState.get_key_by_shard_id(shard_id),\n model.MapreduceState.get_key_by_job_id(mr_id)])\n\n if shard_state and shard_state.active:\n shard_state.set_for_failure()\n config = util.create_datastore_write_config(mr_state.mapreduce_spec)\n shard_state.put(config=config)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _try_acquire_lease(self, shard_state, tstate):\n # Controller will tally shard_states and properly handle the situation.\n if not shard_state:\n logging.warning(\"State not found for shard %s; Possible spurious task \"\n \"execution. Dropping this task.\",\n tstate.shard_id)\n return self._TASK_DIRECTIVE.DROP_TASK\n\n if not shard_state.active:\n logging.warning(\"Shard %s is not active. Possible spurious task \"\n \"execution. 
Dropping this task.\", tstate.shard_id)\n logging.warning(str(shard_state))\n return self._TASK_DIRECTIVE.DROP_TASK\n\n # Validate shard retry count.\n if shard_state.retries > tstate.retries:\n logging.warning(\n \"Got shard %s from previous shard retry %s. Possible spurious \"\n \"task execution. Dropping this task.\",\n tstate.shard_id,\n tstate.retries)\n logging.warning(str(shard_state))\n return self._TASK_DIRECTIVE.DROP_TASK\n elif shard_state.retries < tstate.retries:\n # By the end of last slice, task enqueue succeeded but datastore commit\n # failed. That transaction will be retried and adding the same task\n # will pass.\n logging.warning(\n \"ShardState for %s is behind slice. Waiting for it to catch up\",\n shard_state.shard_id)\n return self._TASK_DIRECTIVE.RETRY_TASK\n\n # Validate slice id.\n # Taskqueue executes old successful tasks.\n if shard_state.slice_id > tstate.slice_id:\n logging.warning(\n \"Task %s-%s is behind ShardState %s. Dropping task.\"\"\",\n tstate.shard_id, tstate.slice_id, shard_state.slice_id)\n return self._TASK_DIRECTIVE.DROP_TASK\n # By the end of last slice, task enqueue succeeded but datastore commit\n # failed. That transaction will be retried and adding the same task\n # will pass. User data is duplicated in this case.\n elif shard_state.slice_id < tstate.slice_id:\n logging.warning(\n \"Task %s-%s is ahead of ShardState %s. Waiting for it to catch up.\",\n tstate.shard_id, tstate.slice_id, shard_state.slice_id)\n return self._TASK_DIRECTIVE.RETRY_TASK\n\n # Check potential duplicated tasks for the same slice.\n # See model.ShardState doc.\n if shard_state.slice_start_time:\n countdown = self._wait_time(shard_state,\n parameters._LEASE_DURATION_SEC)\n if countdown > 0:\n logging.warning(\n \"Last retry of slice %s-%s may be still running.\"\n \"Will try again in %s seconds\", tstate.shard_id, tstate.slice_id,\n countdown)\n # TODO(user): There might be a better way. 
Taskqueue's countdown\n # only applies to add new tasks, not retry of tasks.\n # Reduce contention.\n time.sleep(countdown)\n return self._TASK_DIRECTIVE.RETRY_TASK\n # lease could have expired. Verify with logs API.\n else:\n if self._wait_time(shard_state,\n parameters._MAX_LEASE_DURATION_SEC):\n if not self._has_old_request_ended(shard_state):\n logging.warning(\n \"Last retry of slice %s-%s is still in flight with request_id \"\n \"%s. Will try again later.\", tstate.shard_id, tstate.slice_id,\n shard_state.slice_request_id)\n return self._TASK_DIRECTIVE.RETRY_TASK\n else:\n logging.warning(\n \"Last retry of slice %s-%s has no log entry and has\"\n \"timed out after %s seconds\",\n tstate.shard_id, tstate.slice_id,\n parameters._MAX_LEASE_DURATION_SEC)\n\n # Lease expired or slice_start_time not set.\n config = util.create_datastore_write_config(tstate.mapreduce_spec)\n @db.transactional(retries=5)\n def _tx():\n \"\"\"Use datastore to set slice_start_time to now.\n\n If failed for any reason, raise error to retry the task (hence all\n the previous validation code). The task would die naturally eventually.\n\n Raises:\n Rollback: If the shard state is missing.\n\n Returns:\n A _TASK_DIRECTIVE enum.\n \"\"\"\n fresh_state = model.ShardState.get_by_shard_id(tstate.shard_id)\n if not fresh_state:\n logging.warning(\"ShardState missing.\")\n raise db.Rollback()\n if (fresh_state.active and\n fresh_state.slice_id == shard_state.slice_id and\n fresh_state.slice_start_time == shard_state.slice_start_time):\n shard_state.slice_start_time = datetime.datetime.now()\n shard_state.slice_request_id = os.environ.get(\"REQUEST_LOG_ID\")\n shard_state.acquired_once = True\n shard_state.put(config=config)\n return self._TASK_DIRECTIVE.PROCEED_TASK\n else:\n logging.warning(\n \"Contention on slice %s-%s execution. Will retry again.\",\n tstate.shard_id, tstate.slice_id)\n # One proposer should win. 
In case all lost, back off arbitrarily.\n time.sleep(random.randrange(1, 5))\n return self._TASK_DIRECTIVE.RETRY_TASK\n\n return _tx()", "response": "Checks if the lease is available and if so attempts to get it."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _has_old_request_ended(self, shard_state):\n assert shard_state.slice_start_time is not None\n assert shard_state.slice_request_id is not None\n request_ids = [shard_state.slice_request_id]\n logs = None\n try:\n logs = list(logservice.fetch(request_ids=request_ids))\n except (apiproxy_errors.FeatureNotEnabledError,\n apiproxy_errors.CapabilityDisabledError) as e:\n # Managed VMs do not have access to the logservice API\n # See https://groups.google.com/forum/#!topic/app-engine-managed-vms/r8i65uiFW0w\n logging.warning(\"Ignoring exception: %s\", e)\n\n if not logs or not logs[0].finished:\n return False\n return True", "response": "Returns True if the previous slice retry has ended according to Logs API."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _wait_time(self, shard_state, secs, now=datetime.datetime.now):\n assert shard_state.slice_start_time is not None\n delta = now() - shard_state.slice_start_time\n duration = datetime.timedelta(seconds=secs)\n if delta < duration:\n return util.total_seconds(duration - delta)\n else:\n return 0", "response": "Time to wait until slice_start_time is secs ago from now."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntry to free a lease.", "response": "def _try_free_lease(self, shard_state, slice_retry=False):\n \"\"\"Try to free lease.\n\n A lightweight transaction to update shard_state and unset\n slice_start_time to allow the next retry to happen without blocking.\n We don't care if this fails or not because the lease will expire\n anyway.\n\n Under normal execution, _save_state_and_schedule_next is the exit point.\n It 
updates/saves shard state and schedules the next slice or returns.\n Other exit points are:\n 1. _are_states_consistent: at the beginning of handle, checks\n if datastore states and the task are in sync.\n If not, raise or return.\n 2. _attempt_slice_retry: may raise exception to taskqueue.\n 3. _save_state_and_schedule_next: may raise exception when taskqueue/db\n unreachable.\n\n This handler should try to free the lease on every exceptional exit point.\n\n Args:\n shard_state: model.ShardState.\n slice_retry: whether to count this as a failed slice execution.\n \"\"\"\n @db.transactional\n def _tx():\n fresh_state = model.ShardState.get_by_shard_id(shard_state.shard_id)\n if fresh_state and fresh_state.active:\n # Free lease.\n fresh_state.slice_start_time = None\n fresh_state.slice_request_id = None\n if slice_retry:\n fresh_state.slice_retries += 1\n fresh_state.put()\n try:\n _tx()\n # pylint: disable=broad-except\n except Exception, e:\n logging.warning(e)\n logging.warning(\n \"Release lock for shard %s failed. Wait for lease to expire.\",\n shard_state.shard_id)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmake sure the shard life cycle interface is respected.", "response": "def _maintain_LC(self, obj, slice_id, last_slice=False, begin_slice=True,\n shard_ctx=None, slice_ctx=None):\n \"\"\"Makes sure the shard life cycle interface is respected.\n\n Args:\n obj: the obj that may have implemented _ShardLifeCycle.\n slice_id: current slice_id\n last_slice: whether this is the last slice.\n begin_slice: whether this is the beginning or the end of a slice.\n shard_ctx: shard ctx for dependency injection. If None, it will be read\n from self.\n slice_ctx: slice ctx for dependency injection. 
If None, it will be read\n from self.\n \"\"\"\n if obj is None or not isinstance(obj, shard_life_cycle._ShardLifeCycle):\n return\n\n shard_context = shard_ctx or self.shard_context\n slice_context = slice_ctx or self.slice_context\n if begin_slice:\n if slice_id == 0:\n obj.begin_shard(shard_context)\n obj.begin_slice(slice_context)\n else:\n obj.end_slice(slice_context)\n if last_slice:\n obj.end_shard(shard_context)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nhandle request. This method has to be careful to pass the same ShardState instance to its subroutines calls if the calls mutate or read from ShardState. Note especially that Context instance caches and updates the ShardState instance. Returns: Set HTTP status code and always returns None.", "response": "def handle(self):\n \"\"\"Handle request.\n\n This method has to be careful to pass the same ShardState instance to\n its subroutines calls if the calls mutate or read from ShardState.\n Note especially that Context instance caches and updates the ShardState\n instance.\n\n Returns:\n Set HTTP status code and always returns None.\n \"\"\"\n # Reconstruct basic states.\n self._start_time = self._time()\n shard_id = self.request.headers[util._MR_SHARD_ID_TASK_HEADER]\n mr_id = self.request.headers[util._MR_ID_TASK_HEADER]\n spec = model.MapreduceSpec._get_mapreduce_spec(mr_id)\n shard_state, control = db.get([\n model.ShardState.get_key_by_shard_id(shard_id),\n model.MapreduceControl.get_key_by_job_id(mr_id),\n ])\n\n # Set context before any IO code is called.\n ctx = context.Context(spec, shard_state,\n task_retry_count=self.task_retry_count())\n context.Context._set(ctx)\n\n # Unmarshall input reader, output writer, and other transient states.\n tstate = model.TransientShardState.from_request(self.request)\n\n # Try acquire a lease on the shard.\n if shard_state:\n is_this_a_retry = shard_state.acquired_once\n task_directive = self._try_acquire_lease(shard_state, tstate)\n 
if task_directive in (self._TASK_DIRECTIVE.RETRY_TASK,\n self._TASK_DIRECTIVE.DROP_TASK):\n return self.__return(shard_state, tstate, task_directive)\n assert task_directive == self._TASK_DIRECTIVE.PROCEED_TASK\n\n # Abort shard if received signal.\n if control and control.command == model.MapreduceControl.ABORT:\n task_directive = self._TASK_DIRECTIVE.ABORT_SHARD\n return self.__return(shard_state, tstate, task_directive)\n\n # Retry shard if user disabled slice retry.\n if (is_this_a_retry and\n parameters.config.TASK_MAX_DATA_PROCESSING_ATTEMPTS <= 1):\n task_directive = self._TASK_DIRECTIVE.RETRY_SHARD\n return self.__return(shard_state, tstate, task_directive)\n\n # TODO(user): Find a better way to set these per thread configs.\n # E.g. what if user change it?\n util._set_ndb_cache_policy()\n\n job_config = map_job.JobConfig._to_map_job_config(\n spec,\n os.environ.get(\"HTTP_X_APPENGINE_QUEUENAME\"))\n job_context = map_job_context.JobContext(job_config)\n self.shard_context = map_job_context.ShardContext(job_context, shard_state)\n self.slice_context = map_job_context.SliceContext(self.shard_context,\n shard_state,\n tstate)\n try:\n slice_id = tstate.slice_id\n self._lc_start_slice(tstate, slice_id)\n\n if shard_state.is_input_finished():\n self._lc_end_slice(tstate, slice_id, last_slice=True)\n # Finalize the stream and set status if there's no more input.\n if (tstate.output_writer and\n isinstance(tstate.output_writer, output_writers.OutputWriter)):\n # It's possible that finalization is successful but\n # saving state failed. 
In this case this shard will retry upon\n # finalization error.\n # TODO(user): make finalize method idempotent!\n tstate.output_writer.finalize(ctx, shard_state)\n shard_state.set_for_success()\n return self.__return(shard_state, tstate, task_directive)\n\n if is_this_a_retry:\n task_directive = self._attempt_slice_recovery(shard_state, tstate)\n if task_directive != self._TASK_DIRECTIVE.PROCEED_TASK:\n return self.__return(shard_state, tstate, task_directive)\n\n last_slice = self._process_inputs(\n tstate.input_reader, shard_state, tstate, ctx)\n\n self._lc_end_slice(tstate, slice_id)\n\n ctx.flush()\n\n if last_slice:\n # We're done processing data but we still need to finalize the output\n # stream. We save this condition in datastore and force a new slice.\n # That way if finalize fails no input data will be retried.\n shard_state.set_input_finished()\n # pylint: disable=broad-except\n except Exception, e:\n logging.warning(\"Shard %s got error.\", shard_state.shard_id)\n logging.error(traceback.format_exc())\n\n # Fail fast.\n if type(e) is errors.FailJobError:\n logging.error(\"Got FailJobError.\")\n task_directive = self._TASK_DIRECTIVE.FAIL_TASK\n else:\n task_directive = self._TASK_DIRECTIVE.RETRY_SLICE\n\n self.__return(shard_state, tstate, task_directive)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef __return(self, shard_state, tstate, task_directive):\n task_directive = self._set_state(shard_state, tstate, task_directive)\n self._save_state_and_schedule_next(shard_state, tstate, task_directive)\n context.Context._set(None)", "response": "Handler should always call this as the last statement."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _process_inputs(self,\n input_reader,\n shard_state,\n tstate,\n ctx):\n \"\"\"Read inputs, process them, and write out outputs.\n\n This is the core logic of MapReduce. 
It reads inputs from input reader,\n invokes user specified mapper function, and writes output with\n output writer. It also updates shard_state accordingly.\n e.g. if shard processing is done, set shard_state.active to False.\n\n If errors.FailJobError is caught, it will fail this MR job.\n All other exceptions will be logged and raised to taskqueue for retry\n until the number of retries exceeds a limit.\n\n Args:\n input_reader: input reader.\n shard_state: shard state.\n tstate: transient shard state.\n ctx: mapreduce context.\n\n Returns:\n Whether this shard has finished processing all its input split.\n \"\"\"\n processing_limit = self._processing_limit(tstate.mapreduce_spec)\n if processing_limit == 0:\n return\n\n finished_shard = True\n # Input reader may not be an iterator. It is only a container.\n iterator = iter(input_reader)\n\n while True:\n try:\n entity = iterator.next()\n except StopIteration:\n break\n # Reading input raised an exception. If we assume\n # 1. The input reader has done enough retries.\n # 2. The input reader can still serialize correctly after this exception.\n # 3. The input reader, upon resume, will try to re-read this failed\n # record.\n # 4. This exception doesn't imply the input reader is permanently stuck.\n # we can serialize current slice immediately to avoid duplicated\n # outputs.\n # TODO(user): Validate these assumptions on all readers. 
MR should\n # also have a way to detect fake forward progress.\n\n if isinstance(entity, db.Model):\n shard_state.last_work_item = repr(entity.key())\n elif isinstance(entity, ndb.Model):\n shard_state.last_work_item = repr(entity.key)\n else:\n shard_state.last_work_item = repr(entity)[:100]\n\n processing_limit -= 1\n\n if not self._process_datum(\n entity, input_reader, ctx, tstate):\n finished_shard = False\n break\n elif processing_limit == 0:\n finished_shard = False\n break\n\n # Flush context and its pools.\n self.slice_context.incr(\n context.COUNTER_MAPPER_WALLTIME_MS,\n int((self._time() - self._start_time)*1000))\n\n return finished_shard", "response": "Read inputs process them and write out outputs."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _process_datum(self, data, input_reader, ctx, transient_shard_state):\n if data is not input_readers.ALLOW_CHECKPOINT:\n self.slice_context.incr(context.COUNTER_MAPPER_CALLS)\n\n handler = transient_shard_state.handler\n\n if isinstance(handler, map_job.Mapper):\n handler(self.slice_context, data)\n else:\n if input_reader.expand_parameters:\n result = handler(*data)\n else:\n result = handler(data)\n\n if util.is_generator(result):\n for output in result:\n if isinstance(output, operation.Operation):\n output(ctx)\n else:\n output_writer = transient_shard_state.output_writer\n if not output_writer:\n logging.warning(\n \"Handler yielded %s, but no output writer is set.\", output)\n else:\n output_writer.write(output)\n\n if self._time() - self._start_time >= parameters.config._SLICE_DURATION_SEC:\n return False\n return True", "response": "Process a single data piece."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the shard_state and tstate based on task_directive.", "response": "def _set_state(self, shard_state, tstate, task_directive):\n \"\"\"Set shard_state and tstate based on task_directive.\n\n 
Args:\n shard_state: model.ShardState for current shard.\n tstate: model.TransientShardState for current shard.\n task_directive: self._TASK_DIRECTIVE for current shard.\n\n Returns:\n A _TASK_DIRECTIVE enum.\n PROCEED_TASK if task should proceed normally.\n RETRY_SHARD if shard should be retried.\n RETRY_SLICE if slice should be retried.\n FAIL_TASK if shard should fail.\n RECOVER_SLICE if slice should be recovered.\n ABORT_SHARD if shard should be aborted.\n RETRY_TASK if task should be retried.\n DROP_TASK if task should be dropped.\n \"\"\"\n if task_directive in (self._TASK_DIRECTIVE.RETRY_TASK,\n self._TASK_DIRECTIVE.DROP_TASK):\n return task_directive\n\n if task_directive == self._TASK_DIRECTIVE.ABORT_SHARD:\n shard_state.set_for_abort()\n return task_directive\n\n if task_directive == self._TASK_DIRECTIVE.PROCEED_TASK:\n shard_state.advance_for_next_slice()\n tstate.advance_for_next_slice()\n return task_directive\n\n if task_directive == self._TASK_DIRECTIVE.RECOVER_SLICE:\n tstate.advance_for_next_slice(recovery_slice=True)\n shard_state.advance_for_next_slice(recovery_slice=True)\n return task_directive\n\n if task_directive == self._TASK_DIRECTIVE.RETRY_SLICE:\n task_directive = self._attempt_slice_retry(shard_state, tstate)\n if task_directive == self._TASK_DIRECTIVE.RETRY_SHARD:\n task_directive = self._attempt_shard_retry(shard_state, tstate)\n if task_directive == self._TASK_DIRECTIVE.FAIL_TASK:\n shard_state.set_for_failure()\n\n return task_directive"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _save_state_and_schedule_next(self, shard_state, tstate, task_directive):\n spec = tstate.mapreduce_spec\n\n if task_directive == self._TASK_DIRECTIVE.DROP_TASK:\n return\n if task_directive in (self._TASK_DIRECTIVE.RETRY_SLICE,\n self._TASK_DIRECTIVE.RETRY_TASK):\n # Set HTTP code to 500.\n return self.retry_task()\n elif task_directive == self._TASK_DIRECTIVE.ABORT_SHARD:\n logging.info(\"Aborting shard 
%d of job '%s'\",\n shard_state.shard_number, shard_state.mapreduce_id)\n task = None\n elif task_directive == self._TASK_DIRECTIVE.FAIL_TASK:\n logging.critical(\"Shard %s failed permanently.\", shard_state.shard_id)\n task = None\n elif task_directive == self._TASK_DIRECTIVE.RETRY_SHARD:\n logging.warning(\"Shard %s is going to be attempted for the %s time.\",\n shard_state.shard_id,\n shard_state.retries + 1)\n task = self._state_to_task(tstate, shard_state)\n elif task_directive == self._TASK_DIRECTIVE.RECOVER_SLICE:\n logging.warning(\"Shard %s slice %s is being recovered.\",\n shard_state.shard_id,\n shard_state.slice_id)\n task = self._state_to_task(tstate, shard_state)\n else:\n assert task_directive == self._TASK_DIRECTIVE.PROCEED_TASK\n countdown = self._get_countdown_for_next_slice(spec)\n task = self._state_to_task(tstate, shard_state, countdown=countdown)\n\n # Prepare parameters for db transaction and taskqueue.\n queue_name = os.environ.get(\"HTTP_X_APPENGINE_QUEUENAME\",\n # For test only.\n # TODO(user): Remove this.\n \"default\")\n config = util.create_datastore_write_config(spec)\n\n @db.transactional(retries=5)\n def _tx():\n \"\"\"The Transaction helper.\"\"\"\n fresh_shard_state = model.ShardState.get_by_shard_id(tstate.shard_id)\n if not fresh_shard_state:\n raise db.Rollback()\n if (not fresh_shard_state.active or\n \"worker_active_state_collision\" in _TEST_INJECTED_FAULTS):\n logging.warning(\"Shard %s is not active. Possible spurious task \"\n \"execution. 
Dropping this task.\", tstate.shard_id)\n logging.warning(\"Datastore's %s\", str(fresh_shard_state))\n logging.warning(\"Slice's %s\", str(shard_state))\n return\n fresh_shard_state.copy_from(shard_state)\n fresh_shard_state.put(config=config)\n # Add task in the same datastore transaction.\n # This way we guarantee taskqueue is never behind datastore states.\n # Old tasks will be dropped.\n # Future task won't run until datastore states catches up.\n if fresh_shard_state.active:\n # Not adding task transactionally.\n # transactional enqueue requires tasks with no name.\n self._add_task(task, spec, queue_name)\n\n try:\n _tx()\n except (datastore_errors.Error,\n taskqueue.Error,\n runtime.DeadlineExceededError,\n apiproxy_errors.Error), e:\n logging.warning(\n \"Can't transactionally continue shard. \"\n \"Will retry slice %s %s for the %s time.\",\n tstate.shard_id,\n tstate.slice_id,\n self.task_retry_count() + 1)\n self._try_free_lease(shard_state)\n raise e", "response": "Save state and schedule next slice if needed."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrecovering a slice. This is run when a slice had been previously attempted and output may have been written. If an output writer requires slice recovery, we run that logic to remove output duplicates. Otherwise we just retry the slice. If recovery is needed, then the entire slice will be dedicated to recovery logic. No data processing will take place. Thus we call the slice \"recovery slice\". This is needed for correctness: An output writer instance can be out of sync from its physical medium only when the slice dies after acquiring the shard lock but before committing shard state to db. The worst failure case is when shard state failed to commit after the NAMED task for the next slice was added. Thus, recovery slice has a special logic to increment current slice_id n to n+2. If the task for n+1 had been added, it will be dropped because it is behind shard state. 
Args: shard_state: an instance of Model.ShardState. tstate: an instance of Model.TransientShardState. Returns: _TASK_DIRECTIVE.PROCEED_TASK to continue with this retry. _TASK_DIRECTIVE.RECOVER_SLICE to recover this slice. The next slice will start at the same input as this slice but output to a new instance of output writer. Combining outputs from all writer instances is up to implementation.", "response": "def _attempt_slice_recovery(self, shard_state, tstate):\n \"\"\"Recover a slice.\n\n This is run when a slice had been previously attempted and output\n may have been written. If an output writer requires slice recovery,\n we run that logic to remove output duplicates. Otherwise we just retry\n the slice.\n\n If recovery is needed, then the entire slice will be dedicated\n to recovery logic. No data processing will take place. Thus we call\n the slice \"recovery slice\". This is needed for correctness:\n An output writer instance can be out of sync from its physical\n medium only when the slice dies after acquiring the shard lock but before\n committing shard state to db. The worst failure case is when\n shard state failed to commit after the NAMED task for the next slice was\n added. Thus, recovery slice has a special logic to increment current\n slice_id n to n+2. 
If the task for n+1 had been added, it will be dropped\n because it is behind shard state.\n\n Args:\n shard_state: an instance of Model.ShardState.\n tstate: an instance of Model.TransientShardState.\n\n Returns:\n _TASK_DIRECTIVE.PROCEED_TASK to continue with this retry.\n _TASK_DIRECTIVE.RECOVER_SLICE to recover this slice.\n The next slice will start at the same input as\n this slice but output to a new instance of output writer.\n Combining outputs from all writer instances is up to implementation.\n \"\"\"\n mapper_spec = tstate.mapreduce_spec.mapper\n if not (tstate.output_writer and\n tstate.output_writer._supports_slice_recovery(mapper_spec)):\n return self._TASK_DIRECTIVE.PROCEED_TASK\n\n tstate.output_writer = tstate.output_writer._recover(\n tstate.mapreduce_spec, shard_state.shard_number,\n shard_state.retries + 1)\n return self._TASK_DIRECTIVE.RECOVER_SLICE"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _attempt_slice_retry(self, shard_state, tstate):\n if (shard_state.slice_retries + 1 <\n parameters.config.TASK_MAX_DATA_PROCESSING_ATTEMPTS):\n logging.warning(\n \"Slice %s %s failed for the %s of up to %s attempts \"\n \"(%s of %s taskqueue execution attempts). \"\n \"Will retry now.\",\n tstate.shard_id,\n tstate.slice_id,\n shard_state.slice_retries + 1,\n parameters.config.TASK_MAX_DATA_PROCESSING_ATTEMPTS,\n self.task_retry_count() + 1,\n parameters.config.TASK_MAX_ATTEMPTS)\n # Clear info related to current exception. 
Otherwise, the real\n # callstack that includes a frame for this method will show up\n # in log.\n sys.exc_clear()\n self._try_free_lease(shard_state, slice_retry=True)\n return self._TASK_DIRECTIVE.RETRY_SLICE\n\n if parameters.config.TASK_MAX_DATA_PROCESSING_ATTEMPTS > 0:\n logging.warning(\"Slice attempt %s exceeded %s max attempts.\",\n self.task_retry_count() + 1,\n parameters.config.TASK_MAX_DATA_PROCESSING_ATTEMPTS)\n return self._TASK_DIRECTIVE.RETRY_SHARD", "response": "Attempts to retry this slice."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_countdown_for_next_slice(self, spec):\n countdown = 0\n if self._processing_limit(spec) != -1:\n countdown = max(\n int(parameters.config._SLICE_DURATION_SEC -\n (self._time() - self._start_time)), 0)\n return countdown", "response": "Get countdown for next slice s task."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating a HugeTask instance for the given state.", "response": "def _state_to_task(cls,\n tstate,\n shard_state,\n eta=None,\n countdown=None):\n \"\"\"Generate task for slice according to current states.\n\n Args:\n tstate: An instance of TransientShardState.\n shard_state: An instance of ShardState.\n eta: Absolute time when the MR should execute. May not be specified\n if 'countdown' is also supplied. 
This may be timezone-aware or\n timezone-naive.\n countdown: Time in seconds into the future that this MR should execute.\n Defaults to zero.\n\n Returns:\n A model.HugeTask instance for the slice specified by current states.\n \"\"\"\n base_path = tstate.base_path\n\n task_name = MapperWorkerCallbackHandler.get_task_name(\n tstate.shard_id,\n tstate.slice_id,\n tstate.retries)\n\n headers = util._get_task_headers(tstate.mapreduce_spec.mapreduce_id)\n headers[util._MR_SHARD_ID_TASK_HEADER] = tstate.shard_id\n\n worker_task = model.HugeTask(\n url=base_path + \"/worker_callback/\" + tstate.shard_id,\n params=tstate.to_dict(),\n name=task_name,\n eta=eta,\n countdown=countdown,\n parent=shard_state,\n headers=headers)\n return worker_task"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds a task to the task queue.", "response": "def _add_task(cls,\n worker_task,\n mapreduce_spec,\n queue_name):\n \"\"\"Schedule slice scanning by adding it to the task queue.\n\n Args:\n worker_task: a model.HugeTask task for slice. This is NOT a taskqueue\n task.\n mapreduce_spec: an instance of model.MapreduceSpec.\n queue_name: Optional queue to run on; uses the current queue of\n execution or the default queue if unspecified.\n \"\"\"\n if not _run_task_hook(mapreduce_spec.get_hooks(),\n \"enqueue_worker_task\",\n worker_task,\n queue_name):\n try:\n # Not adding transactionally because worker_task has name.\n # Named task is not allowed for transactional add.\n worker_task.add(queue_name)\n except (taskqueue.TombstonedTaskError,\n taskqueue.TaskAlreadyExistsError), e:\n logging.warning(\"Task %r already exists. 
%s: %s\",\n worker_task.name,\n e.__class__,\n e)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _processing_limit(self, spec):\n processing_rate = float(spec.mapper.params.get(\"processing_rate\", 0))\n slice_processing_limit = -1\n if processing_rate > 0:\n slice_processing_limit = int(math.ceil(\n parameters.config._SLICE_DURATION_SEC*processing_rate/\n int(spec.mapper.shard_count)))\n return slice_processing_limit", "response": "Get the limit on the number of map calls allowed by this slice."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nschedule a slice of the MR.", "response": "def _schedule_slice(cls,\n shard_state,\n tstate,\n queue_name=None,\n eta=None,\n countdown=None):\n \"\"\"Schedule slice scanning by adding it to the task queue.\n\n Args:\n shard_state: An instance of ShardState.\n tstate: An instance of TransientShardState.\n queue_name: Optional queue to run on; uses the current queue of\n execution or the default queue if unspecified.\n eta: Absolute time when the MR should execute. May not be specified\n if 'countdown' is also supplied. 
This may be timezone-aware or\n timezone-naive.\n countdown: Time in seconds into the future that this MR should execute.\n Defaults to zero.\n \"\"\"\n queue_name = queue_name or os.environ.get(\"HTTP_X_APPENGINE_QUEUENAME\",\n \"default\")\n task = cls._state_to_task(tstate, shard_state, eta, countdown)\n cls._add_task(task, tstate.mapreduce_spec, queue_name)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the state by examining shard states.", "response": "def _update_state_from_shard_states(self, state, shard_states, control):\n \"\"\"Update mr state by examining shard states.\n\n Args:\n state: current mapreduce state as MapreduceState.\n shard_states: an iterator over shard states.\n control: model.MapreduceControl entity.\n \"\"\"\n # Initialize vars.\n state.active_shards, state.aborted_shards, state.failed_shards = 0, 0, 0\n total_shards = 0\n processed_counts = []\n processed_status = []\n state.counters_map.clear()\n\n # Tally across shard states once.\n for s in shard_states:\n total_shards += 1\n status = 'unknown'\n if s.active:\n state.active_shards += 1\n status = 'running'\n if s.result_status == model.ShardState.RESULT_SUCCESS:\n status = 'success'\n elif s.result_status == model.ShardState.RESULT_ABORTED:\n state.aborted_shards += 1\n status = 'aborted'\n elif s.result_status == model.ShardState.RESULT_FAILED:\n state.failed_shards += 1\n status = 'failed'\n\n # Update stats in mapreduce state by aggregating stats from shard states.\n state.counters_map.add_map(s.counters_map)\n processed_counts.append(s.counters_map.get(context.COUNTER_MAPPER_CALLS))\n processed_status.append(status)\n\n state.set_processed_counts(processed_counts, processed_status)\n state.last_poll_time = datetime.datetime.utcfromtimestamp(self._time())\n\n spec = state.mapreduce_spec\n\n if total_shards != spec.mapper.shard_count:\n logging.error(\"Found %d shard states. Expect %d. 
\"\n \"Issuing abort command to job '%s'\",\n total_shards, spec.mapper.shard_count,\n spec.mapreduce_id)\n # We issue abort command to allow shards to stop themselves.\n model.MapreduceControl.abort(spec.mapreduce_id)\n\n # If any shard is active then the mr is active.\n # This way, controller won't prematurely stop before all the shards have.\n state.active = bool(state.active_shards)\n if not control and (state.failed_shards or state.aborted_shards):\n # Issue abort command if there are failed shards.\n model.MapreduceControl.abort(spec.mapreduce_id)\n\n if not state.active:\n # Set final result status derived from shard states.\n if state.failed_shards or not total_shards:\n state.result_status = model.MapreduceState.RESULT_FAILED\n # It's important failed shards is checked before aborted shards\n # because failed shards will trigger other shards to abort.\n elif state.aborted_shards:\n state.result_status = model.MapreduceState.RESULT_ABORTED\n else:\n state.result_status = model.MapreduceState.RESULT_SUCCESS\n self._finalize_outputs(spec, state)\n self._finalize_job(spec, state)\n else:\n @db.transactional(retries=5)\n def _put_state():\n \"\"\"The helper for storing the state.\"\"\"\n fresh_state = model.MapreduceState.get_by_job_id(spec.mapreduce_id)\n # We don't check anything other than active because we are only\n # updating stats. It's OK if they are briefly inconsistent.\n if not fresh_state.active:\n logging.warning(\n \"Job %s is not active. Looks like spurious task execution. 
\"\n \"Dropping controller task.\", spec.mapreduce_id)\n return\n config = util.create_datastore_write_config(spec)\n state.put(config=config)\n\n _put_state()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _finalize_outputs(cls, mapreduce_spec, mapreduce_state):\n # Only finalize the output writers if the job is successful.\n if (mapreduce_spec.mapper.output_writer_class() and\n mapreduce_state.result_status == model.MapreduceState.RESULT_SUCCESS):\n mapreduce_spec.mapper.output_writer_class().finalize_job(mapreduce_state)", "response": "Finalize outputs.\n\n Args:\n mapreduce_spec: an instance of MapreduceSpec.\n mapreduce_state: an instance of MapreduceState."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfinalize a job execution.", "response": "def _finalize_job(cls, mapreduce_spec, mapreduce_state):\n \"\"\"Finalize job execution.\n\n Invokes done callback and save mapreduce state in a transaction,\n and schedule necessary clean ups. This method is idempotent.\n\n Args:\n mapreduce_spec: an instance of MapreduceSpec\n mapreduce_state: an instance of MapreduceState\n \"\"\"\n config = util.create_datastore_write_config(mapreduce_spec)\n queue_name = util.get_queue_name(mapreduce_spec.params.get(\n model.MapreduceSpec.PARAM_DONE_CALLBACK_QUEUE))\n done_callback = mapreduce_spec.params.get(\n model.MapreduceSpec.PARAM_DONE_CALLBACK)\n done_task = None\n if done_callback:\n done_task = taskqueue.Task(\n url=done_callback,\n headers=util._get_task_headers(mapreduce_spec.mapreduce_id,\n util.CALLBACK_MR_ID_TASK_HEADER),\n method=mapreduce_spec.params.get(\"done_callback_method\", \"POST\"))\n\n @db.transactional(retries=5)\n def _put_state():\n \"\"\"Helper to store state.\"\"\"\n fresh_state = model.MapreduceState.get_by_job_id(\n mapreduce_spec.mapreduce_id)\n if not fresh_state.active:\n logging.warning(\n \"Job %s is not active. Looks like spurious task execution. 
\"\n \"Dropping task.\", mapreduce_spec.mapreduce_id)\n return\n mapreduce_state.put(config=config)\n # Enqueue done_callback if needed.\n if done_task and not _run_task_hook(\n mapreduce_spec.get_hooks(),\n \"enqueue_done_task\",\n done_task,\n queue_name):\n done_task.add(queue_name, transactional=True)\n\n _put_state()\n logging.info(\"Final result for job '%s' is '%s'\",\n mapreduce_spec.mapreduce_id, mapreduce_state.result_status)\n cls._clean_up_mr(mapreduce_spec)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef reschedule(cls,\n mapreduce_state,\n mapreduce_spec,\n serial_id,\n queue_name=None):\n \"\"\"Schedule new update status callback task.\n\n Args:\n mapreduce_state: mapreduce state as model.MapreduceState\n mapreduce_spec: mapreduce specification as MapreduceSpec.\n serial_id: id of the invocation as int.\n queue_name: The queue to schedule this task on. Will use the current\n queue of execution if not supplied.\n \"\"\"\n task_name = ControllerCallbackHandler.get_task_name(\n mapreduce_spec, serial_id)\n task_params = ControllerCallbackHandler.controller_parameters(\n mapreduce_spec, serial_id)\n if not queue_name:\n queue_name = os.environ.get(\"HTTP_X_APPENGINE_QUEUENAME\", \"default\")\n\n controller_callback_task = model.HugeTask(\n url=(mapreduce_spec.params[\"base_path\"] + \"/controller_callback/\" +\n mapreduce_spec.mapreduce_id),\n name=task_name, params=task_params,\n countdown=parameters.config._CONTROLLER_PERIOD_SEC,\n parent=mapreduce_state,\n headers=util._get_task_headers(mapreduce_spec.mapreduce_id))\n\n if not _run_task_hook(mapreduce_spec.get_hooks(),\n \"enqueue_controller_task\",\n controller_callback_task,\n queue_name):\n try:\n controller_callback_task.add(queue_name)\n except (taskqueue.TombstonedTaskError,\n taskqueue.TaskAlreadyExistsError), e:\n logging.warning(\"Task %r with params %r already exists. 
%s: %s\",\n task_name, task_params, e.__class__, e)", "response": "Reschedule a new update status callback task."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef handle(self):\n # Get and verify mr state.\n mr_id = self.request.get(\"mapreduce_id\")\n # Log the mr_id since this is started in an unnamed task\n logging.info(\"Processing kickoff for job %s\", mr_id)\n state = model.MapreduceState.get_by_job_id(mr_id)\n if not self._check_mr_state(state, mr_id):\n return\n\n # Create input readers.\n readers, serialized_readers_entity = self._get_input_readers(state)\n if readers is None:\n # We don't have any data. Finish map.\n logging.warning(\"Found no mapper input data to process.\")\n state.active = False\n state.result_status = model.MapreduceState.RESULT_SUCCESS\n ControllerCallbackHandler._finalize_job(\n state.mapreduce_spec, state)\n return False\n\n # Create output writers.\n self._setup_output_writer(state)\n\n # Save states and make sure we use the saved input readers for\n # subsequent operations.\n result = self._save_states(state, serialized_readers_entity)\n if result is None:\n readers, _ = self._get_input_readers(state)\n elif not result:\n return\n\n queue_name = self.request.headers.get(\"X-AppEngine-QueueName\")\n KickOffJobHandler._schedule_shards(state.mapreduce_spec, readers,\n queue_name,\n state.mapreduce_spec.params[\"base_path\"],\n state)\n\n ControllerCallbackHandler.reschedule(\n state, state.mapreduce_spec, serial_id=0, queue_name=queue_name)", "response": "Handles kick off request."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndrops the current job and abort all tasks.", "response": "def _drop_gracefully(self):\n \"\"\"See parent.\"\"\"\n mr_id = self.request.get(\"mapreduce_id\")\n logging.error(\"Failed to kick off job %s\", mr_id)\n\n state = model.MapreduceState.get_by_job_id(mr_id)\n if not self._check_mr_state(state, mr_id):\n 
return\n\n # Issue abort command just in case there are running tasks.\n config = util.create_datastore_write_config(state.mapreduce_spec)\n model.MapreduceControl.abort(mr_id, config=config)\n\n # Finalize job and invoke callback.\n state.active = False\n state.result_status = model.MapreduceState.RESULT_FAILED\n ControllerCallbackHandler._finalize_job(state.mapreduce_spec, state)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting input readers. Args: state: a MapreduceState model. Returns: A tuple: (a list of input readers, a model._HugeTaskPayload entity). The payload entity contains the json serialized input readers. (None, None) when input reader splitting returned no data to process.", "response": "def _get_input_readers(self, state):\n \"\"\"Get input readers.\n\n Args:\n state: a MapreduceState model.\n\n Returns:\n A tuple: (a list of input readers, a model._HugeTaskPayload entity).\n The payload entity contains the json serialized input readers.\n (None, None) when input reader splitting returned no data to process.\n \"\"\"\n serialized_input_readers_key = (self._SERIALIZED_INPUT_READERS_KEY %\n state.key().id_or_name())\n serialized_input_readers = model._HugeTaskPayload.get_by_key_name(\n serialized_input_readers_key, parent=state)\n\n # Initialize input readers.\n input_reader_class = state.mapreduce_spec.mapper.input_reader_class()\n split_param = state.mapreduce_spec.mapper\n if issubclass(input_reader_class, map_job.InputReader):\n split_param = map_job.JobConfig._to_map_job_config(\n state.mapreduce_spec,\n os.environ.get(\"HTTP_X_APPENGINE_QUEUENAME\"))\n if serialized_input_readers is None:\n readers = input_reader_class.split_input(split_param)\n else:\n readers = [input_reader_class.from_json_str(_json) for _json in\n json.loads(zlib.decompress(\n serialized_input_readers.payload))]\n\n if not readers:\n return None, None\n\n # Update state and spec with actual shard count.\n 
state.mapreduce_spec.mapper.shard_count = len(readers)\n state.active_shards = len(readers)\n\n # Prepare to save serialized input readers.\n if serialized_input_readers is None:\n # Use mr_state as parent so it can be easily cleaned up later.\n serialized_input_readers = model._HugeTaskPayload(\n key_name=serialized_input_readers_key, parent=state)\n readers_json_str = [i.to_json_str() for i in readers]\n serialized_input_readers.payload = zlib.compress(json.dumps(\n readers_json_str))\n return readers, serialized_input_readers"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _save_states(self, state, serialized_readers_entity):\n mr_id = state.key().id_or_name()\n fresh_state = model.MapreduceState.get_by_job_id(mr_id)\n if not self._check_mr_state(fresh_state, mr_id):\n return False\n if fresh_state.active_shards != 0:\n logging.warning(\n \"Mapreduce %s already has active shards. Looks like spurious task \"\n \"execution.\", mr_id)\n return None\n config = util.create_datastore_write_config(state.mapreduce_spec)\n db.put([state, serialized_readers_entity], config=config)\n return True", "response": "Run transaction to save state."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nprepare shard states and schedules their execution.", "response": "def _schedule_shards(cls,\n spec,\n readers,\n queue_name,\n base_path,\n mr_state):\n \"\"\"Prepares shard states and schedules their execution.\n\n Even though this method does not schedule shard task and save shard state\n transactionally, it's safe for taskqueue to retry this logic because\n the initial shard_state for each shard is the same from any retry.\n This is an important yet reasonable assumption on model.ShardState.\n\n Args:\n spec: mapreduce specification as MapreduceSpec.\n readers: list of InputReaders describing shard splits.\n queue_name: The queue to run this job on.\n base_path: The base url path of 
mapreduce callbacks.\n mr_state: The MapReduceState of current job.\n \"\"\"\n # Create shard states.\n shard_states = []\n for shard_number, input_reader in enumerate(readers):\n shard_state = model.ShardState.create_new(spec.mapreduce_id, shard_number)\n shard_state.shard_description = str(input_reader)\n shard_states.append(shard_state)\n\n # Retrieves already existing shard states.\n existing_shard_states = db.get(shard.key() for shard in shard_states)\n existing_shard_keys = set(shard.key() for shard in existing_shard_states\n if shard is not None)\n\n # Save non existent shard states.\n # Note: we could do this transactionally if necessary.\n db.put((shard for shard in shard_states\n if shard.key() not in existing_shard_keys),\n config=util.create_datastore_write_config(spec))\n\n # Create output writers.\n writer_class = spec.mapper.output_writer_class()\n writers = [None] * len(readers)\n if writer_class:\n for shard_number, shard_state in enumerate(shard_states):\n writers[shard_number] = writer_class.create(\n mr_state.mapreduce_spec,\n shard_state.shard_number, shard_state.retries + 1,\n mr_state.writer_state)\n\n # Schedule ALL shard tasks.\n # Since each task is named, _add_task will fall back gracefully if a\n # task already exists.\n for shard_number, (input_reader, output_writer) in enumerate(\n zip(readers, writers)):\n shard_id = model.ShardState.shard_id_from_number(\n spec.mapreduce_id, shard_number)\n task = MapperWorkerCallbackHandler._state_to_task(\n model.TransientShardState(\n base_path, spec, shard_id, 0, input_reader, input_reader,\n output_writer=output_writer,\n handler=spec.mapper.handler),\n shard_states[shard_number])\n MapperWorkerCallbackHandler._add_task(task,\n spec,\n queue_name)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _check_mr_state(cls, state, mr_id):\n if state is None:\n logging.warning(\n \"Mapreduce State for job %s is missing. 
Dropping Task.\",\n mr_id)\n return False\n if not state.active:\n logging.warning(\n \"Mapreduce %s is not active. Looks like spurious task \"\n \"execution. Dropping Task.\", mr_id)\n return False\n return True", "response": "Checks if the given MapreduceState instance is valid."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nretrieves additional user - supplied params for the job and validates them.", "response": "def _get_params(self, validator_parameter, name_prefix):\n \"\"\"Retrieves additional user-supplied params for the job and validates them.\n\n Args:\n validator_parameter: name of the request parameter which supplies\n validator for this parameter set.\n name_prefix: common prefix for all parameter names in the request.\n\n Raises:\n Any exception raised by the 'params_validator' request parameter if\n the params fail to validate.\n\n Returns:\n The user parameters.\n \"\"\"\n params_validator = self.request.get(validator_parameter)\n\n user_params = {}\n for key in self.request.arguments():\n if key.startswith(name_prefix):\n values = self.request.get_all(key)\n adjusted_key = key[len(name_prefix):]\n if len(values) == 1:\n user_params[adjusted_key] = values[0]\n else:\n user_params[adjusted_key] = values\n\n if params_validator:\n resolved_validator = util.for_name(params_validator)\n resolved_validator(user_params)\n\n return user_params"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_required_param(self, param_name):\n value = self.request.get(param_name)\n if not value:\n raise errors.NotEnoughArgumentsError(param_name + \" not specified\")\n return value", "response": "Get a required request parameter."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nstarts a new mapreduce job.", "response": "def _start_map(cls,\n name,\n mapper_spec,\n mapreduce_params,\n queue_name,\n eta=None,\n countdown=None,\n 
hooks_class_name=None,\n _app=None,\n in_xg_transaction=False):\n # pylint: disable=g-doc-args\n # pylint: disable=g-doc-return-or-yield\n \"\"\"See control.start_map.\n\n Requirements for this method:\n 1. The request that invokes this method can either be regular or\n from taskqueue. So taskqueue specific headers can not be used.\n 2. Each invocation transactionally starts an isolated mapreduce job with\n a unique id. MapreduceState should be immediately available after\n returning. See control.start_map's doc on transactional.\n 3. Method should be lightweight.\n \"\"\"\n # Validate input reader.\n mapper_input_reader_class = mapper_spec.input_reader_class()\n mapper_input_reader_class.validate(mapper_spec)\n\n # Validate output writer.\n mapper_output_writer_class = mapper_spec.output_writer_class()\n if mapper_output_writer_class:\n mapper_output_writer_class.validate(mapper_spec)\n\n # Create a new id and mr spec.\n mapreduce_id = model.MapreduceState.new_mapreduce_id()\n mapreduce_spec = model.MapreduceSpec(\n name,\n mapreduce_id,\n mapper_spec.to_json(),\n mapreduce_params,\n hooks_class_name)\n\n # Validate mapper handler.\n ctx = context.Context(mapreduce_spec, None)\n context.Context._set(ctx)\n try:\n # pylint: disable=pointless-statement\n mapper_spec.handler\n finally:\n context.Context._set(None)\n\n # Save states and enqueue task.\n if in_xg_transaction:\n propagation = db.MANDATORY\n else:\n propagation = db.INDEPENDENT\n\n @db.transactional(propagation=propagation)\n def _txn():\n cls._create_and_save_state(mapreduce_spec, _app)\n cls._add_kickoff_task(mapreduce_params[\"base_path\"], mapreduce_spec, eta,\n countdown, queue_name)\n _txn()\n\n return mapreduce_id"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates and save Mapreduce state to datastore.", "response": "def _create_and_save_state(cls, mapreduce_spec, _app):\n \"\"\"Save mapreduce state to datastore.\n\n Save state to datastore so that 
UI can see it immediately.\n\n Args:\n mapreduce_spec: model.MapreduceSpec,\n _app: app id if specified. None otherwise.\n\n Returns:\n The saved Mapreduce state.\n \"\"\"\n state = model.MapreduceState.create_new(mapreduce_spec.mapreduce_id)\n state.mapreduce_spec = mapreduce_spec\n state.active = True\n state.active_shards = 0\n if _app:\n state.app_id = _app\n config = util.create_datastore_write_config(mapreduce_spec)\n state.put(config=config)\n return state"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _add_kickoff_task(cls,\n base_path,\n mapreduce_spec,\n eta,\n countdown,\n queue_name):\n \"\"\"Enqueues a new kickoff task.\"\"\"\n params = {\"mapreduce_id\": mapreduce_spec.mapreduce_id}\n # Task is not named so that it can be added within a transaction.\n kickoff_task = taskqueue.Task(\n url=base_path + \"/kickoffjob_callback/\" + mapreduce_spec.mapreduce_id,\n headers=util._get_task_headers(mapreduce_spec.mapreduce_id),\n params=params,\n eta=eta,\n countdown=countdown)\n hooks = mapreduce_spec.get_hooks()\n if hooks is not None:\n try:\n hooks.enqueue_kickoff_task(kickoff_task, queue_name)\n return\n except NotImplementedError:\n pass\n kickoff_task.add(queue_name, transactional=True)", "response": "Enqueues a new kickoff task."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nscheduling the finalization of the mapreduce task.", "response": "def schedule(cls, mapreduce_spec):\n \"\"\"Schedule finalize task.\n\n Args:\n mapreduce_spec: mapreduce specification as MapreduceSpec.\n \"\"\"\n task_name = mapreduce_spec.mapreduce_id + \"-finalize\"\n finalize_task = taskqueue.Task(\n name=task_name,\n url=(mapreduce_spec.params[\"base_path\"] + \"/finalizejob_callback/\" +\n mapreduce_spec.mapreduce_id),\n params={\"mapreduce_id\": mapreduce_spec.mapreduce_id},\n headers=util._get_task_headers(mapreduce_spec.mapreduce_id))\n queue_name = util.get_queue_name(None)\n if not 
_run_task_hook(mapreduce_spec.get_hooks(),\n "enqueue_controller_task",\n finalize_task,\n queue_name):\n try:\n finalize_task.add(queue_name)\n except (taskqueue.TombstonedTaskError,\n taskqueue.TaskAlreadyExistsError), e:\n logging.warning(\"Task %r already exists. %s: %s\",\n task_name, e.__class__, e)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nobtains input reader parameters. Utility function for input readers implementation. Fetches parameters from mapreduce specification giving appropriate usage warnings. Args: mapper_spec: The MapperSpec for the job allowed_keys: set of all allowed keys in parameters as strings. If it is not None, then parameters are expected to be in a separate \"input_reader\" subdictionary of mapper_spec parameters. allow_old: Allow parameters to exist outside of the input_reader subdictionary for compatibility. Returns: mapper parameters as dict Raises: BadReaderParamsError: if parameters are invalid/missing or not allowed.", "response": "def _get_params(mapper_spec, allowed_keys=None, allow_old=True):\n \"\"\"Obtain input reader parameters.\n\n Utility function for input readers implementation. Fetches parameters\n from mapreduce specification giving appropriate usage warnings.\n\n Args:\n mapper_spec: The MapperSpec for the job\n allowed_keys: set of all allowed keys in parameters as strings. 
If it is not\n None, then parameters are expected to be in a separate \"input_reader\"\n subdictionary of mapper_spec parameters.\n allow_old: Allow parameters to exist outside of the input_reader\n subdictionary for compatibility.\n\n Returns:\n mapper parameters as dict\n\n Raises:\n BadReaderParamsError: if parameters are invalid/missing or not allowed.\n \"\"\"\n if \"input_reader\" not in mapper_spec.params:\n message = (\"Input reader's parameters should be specified in \"\n \"input_reader subdictionary.\")\n if not allow_old or allowed_keys:\n raise errors.BadReaderParamsError(message)\n params = mapper_spec.params\n params = dict((str(n), v) for n, v in params.iteritems())\n else:\n if not isinstance(mapper_spec.params.get(\"input_reader\"), dict):\n raise errors.BadReaderParamsError(\n \"Input reader parameters should be a dictionary\")\n params = mapper_spec.params.get(\"input_reader\")\n params = dict((str(n), v) for n, v in params.iteritems())\n if allowed_keys:\n params_diff = set(params.keys()) - allowed_keys\n if params_diff:\n raise errors.BadReaderParamsError(\n \"Invalid input_reader parameters: %s\" % \",\".join(params_diff))\n return params"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsplits a namespace by scatter index.", "response": "def _split_ns_by_scatter(cls,\n shard_count,\n namespace,\n raw_entity_kind,\n filters,\n app):\n \"\"\"Split a namespace by scatter index into key_range.KeyRange.\n\n TODO(user): Power this with key_range.KeyRange.compute_split_points.\n\n Args:\n shard_count: number of shards.\n namespace: namespace name to split. str.\n raw_entity_kind: low level datastore API entity kind.\n app: app id in str.\n\n Returns:\n A list of key_range.KeyRange objects. 
If there are not enough entities to\n splits into requested shards, the returned list will contain KeyRanges\n ordered lexicographically with any Nones appearing at the end.\n \"\"\"\n if shard_count == 1:\n # With one shard we don't need to calculate any split points at all.\n return [key_range.KeyRange(namespace=namespace, _app=app)]\n\n ds_query = datastore.Query(kind=raw_entity_kind,\n namespace=namespace,\n _app=app,\n keys_only=True)\n ds_query.Order(\"__scatter__\")\n oversampling_factor = 32\n random_keys = None\n if filters:\n ds_query_with_filters = copy.copy(ds_query)\n for (key, op, value) in filters:\n ds_query_with_filters.update({'%s %s' % (key, op): value})\n try:\n random_keys = ds_query_with_filters.Get(shard_count *\n oversampling_factor)\n except db.NeedIndexError, why:\n logging.warning('Need to add an index for optimal mapreduce-input'\n ' splitting:\\n%s' % why)\n # We'll try again without the filter. We hope the filter\n # will filter keys uniformly across the key-name space!\n\n if not random_keys:\n random_keys = ds_query.Get(shard_count * oversampling_factor)\n\n if not random_keys:\n # There are no entities with scatter property. We have no idea\n # how to split.\n return ([key_range.KeyRange(namespace=namespace, _app=app)] +\n [None] * (shard_count - 1))\n\n random_keys.sort()\n\n if len(random_keys) >= shard_count:\n # We've got a lot of scatter values. 
Sample them down.\n random_keys = cls._choose_split_points(random_keys, shard_count)\n\n k_ranges = []\n\n k_ranges.append(key_range.KeyRange(\n key_start=None,\n key_end=random_keys[0],\n direction=key_range.KeyRange.ASC,\n include_start=False,\n include_end=False,\n namespace=namespace,\n _app=app))\n\n for i in range(0, len(random_keys) - 1):\n k_ranges.append(key_range.KeyRange(\n key_start=random_keys[i],\n key_end=random_keys[i+1],\n direction=key_range.KeyRange.ASC,\n include_start=True,\n include_end=False,\n namespace=namespace,\n _app=app))\n\n k_ranges.append(key_range.KeyRange(\n key_start=random_keys[-1],\n key_end=None,\n direction=key_range.KeyRange.ASC,\n include_start=True,\n include_end=False,\n namespace=namespace,\n _app=app))\n\n if len(k_ranges) < shard_count:\n # We need to have as many shards as it was requested. Add some Nones.\n k_ranges += [None] * (shard_count - len(k_ranges))\n return k_ranges"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _choose_split_points(cls, sorted_keys, shard_count):\n assert len(sorted_keys) >= shard_count\n index_stride = len(sorted_keys) / float(shard_count)\n return [sorted_keys[int(round(index_stride * i))]\n for i in range(1, shard_count)]", "response": "Returns the best split points given a random set of datastore. 
Keys."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates the filters for the entity type.", "response": "def _validate_filters_ndb(cls, filters, model_class):\n \"\"\"Validate ndb.Model filters.\"\"\"\n if not filters:\n return\n\n properties = model_class._properties\n\n\n for idx, f in enumerate(filters):\n prop, ineq, val = f\n if prop not in properties:\n raise errors.BadReaderParamsError(\n \"Property %s is not defined for entity type %s\",\n prop, model_class._get_kind())\n\n # Attempt to cast the value to a KeyProperty if appropriate.\n # This enables filtering against keys.\n try:\n if (isinstance(val, basestring) and\n isinstance(properties[prop],\n (ndb.KeyProperty, ndb.ComputedProperty))):\n val = ndb.Key(urlsafe=val)\n filters[idx] = [prop, ineq, val]\n except:\n pass\n\n # Validate the value of each filter. We need to know filters have\n # valid value to carry out splits.\n try:\n properties[prop]._do_validate(val)\n except db.BadValueError, e:\n raise errors.BadReaderParamsError(e)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _iter_key_ranges(self):\n while True:\n if self._current_key_range is None:\n if self._key_ranges:\n self._current_key_range = self._key_ranges.pop()\n # The most recently popped key_range may be None, so continue here\n # to find the next keyrange that's valid.\n continue\n else:\n break\n\n for key, o in self._iter_key_range(\n copy.deepcopy(self._current_key_range)):\n # The caller must consume yielded values so advancing the KeyRange\n # before yielding is safe.\n self._current_key_range.advance(key)\n yield o\n self._current_key_range = None", "response": "Iterates over self. 
_key_ranges and yields KeyRange objects."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _iter_ns_range(self):\n while True:\n if self._current_key_range is None:\n query = self._ns_range.make_datastore_query()\n namespace_result = query.Get(1)\n if not namespace_result:\n break\n\n namespace = namespace_result[0].name() or \"\"\n self._current_key_range = key_range.KeyRange(\n namespace=namespace, _app=self._ns_range.app)\n yield ALLOW_CHECKPOINT\n\n for key, o in self._iter_key_range(\n copy.deepcopy(self._current_key_range)):\n # The caller must consume yielded values so advancing the KeyRange\n # before yielding is safe.\n self._current_key_range.advance(key)\n yield o\n\n if (self._ns_range.is_single_namespace or\n self._current_key_range.namespace == self._ns_range.namespace_end):\n break\n self._ns_range = self._ns_range.with_start_after(\n self._current_key_range.namespace)\n self._current_key_range = None", "response": "Iterates over self._ns_range and yields the objects in self._current_key_range."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _split_input_from_namespace(cls, app, namespace, entity_kind,\n shard_count):\n \"\"\"Helper for _split_input_from_params.\n\n If there are not enough Entities to make all of the given shards, the\n returned list of KeyRanges will include Nones. 
The returned list will\n contain KeyRanges ordered lexicographically with any Nones appearing at the\n end.\n\n Args:\n app: the app.\n namespace: the namespace.\n entity_kind: entity kind as string.\n shard_count: the number of shards.\n\n Returns:\n KeyRange objects.\n \"\"\"\n\n raw_entity_kind = cls._get_raw_entity_kind(entity_kind)\n if shard_count == 1:\n # With one shard we don't need to calculate any split points at all.\n return [key_range.KeyRange(namespace=namespace, _app=app)]\n\n ds_query = datastore.Query(kind=raw_entity_kind,\n namespace=namespace,\n _app=app,\n keys_only=True)\n ds_query.Order(\"__scatter__\")\n random_keys = ds_query.Get(shard_count * cls._OVERSAMPLING_FACTOR)\n\n if not random_keys:\n # There are no entities with scatter property. We have no idea\n # how to split.\n return ([key_range.KeyRange(namespace=namespace, _app=app)] +\n [None] * (shard_count - 1))\n\n random_keys.sort()\n\n if len(random_keys) >= shard_count:\n # We've got a lot of scatter values. Sample them down.\n random_keys = cls._choose_split_points(random_keys, shard_count)\n\n # pylint: disable=redefined-outer-name\n key_ranges = []\n\n key_ranges.append(key_range.KeyRange(\n key_start=None,\n key_end=random_keys[0],\n direction=key_range.KeyRange.ASC,\n include_start=False,\n include_end=False,\n namespace=namespace,\n _app=app))\n\n for i in range(0, len(random_keys) - 1):\n key_ranges.append(key_range.KeyRange(\n key_start=random_keys[i],\n key_end=random_keys[i+1],\n direction=key_range.KeyRange.ASC,\n include_start=True,\n include_end=False,\n namespace=namespace,\n _app=app))\n\n key_ranges.append(key_range.KeyRange(\n key_start=random_keys[-1],\n key_end=None,\n direction=key_range.KeyRange.ASC,\n include_start=True,\n include_end=False,\n namespace=namespace,\n _app=app))\n\n if len(key_ranges) < shard_count:\n # We need to have as many shards as it was requested. 
Add some Nones.\n key_ranges += [None] * (shard_count - len(key_ranges))\n\n return key_ranges", "response": "Helper method for _split_input_from_params."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns input reader objects. Helper for split_input.", "response": "def _split_input_from_params(cls, app, namespaces, entity_kind_name,\n params, shard_count):\n \"\"\"Return input reader objects. Helper for split_input.\"\"\"\n # pylint: disable=redefined-outer-name\n key_ranges = [] # KeyRanges for all namespaces\n for namespace in namespaces:\n key_ranges.extend(\n cls._split_input_from_namespace(app,\n namespace,\n entity_kind_name,\n shard_count))\n\n # Divide the KeyRanges into shard_count shards. The KeyRanges for different\n # namespaces might be very different in size so the assignment of KeyRanges\n # to shards is done round-robin.\n shared_ranges = [[] for _ in range(shard_count)]\n for i, k_range in enumerate(key_ranges):\n shared_ranges[i % shard_count].append(k_range)\n batch_size = int(params.get(cls.BATCH_SIZE_PARAM, cls._BATCH_SIZE))\n\n return [cls(entity_kind_name,\n key_ranges=key_ranges,\n ns_range=None,\n batch_size=batch_size)\n for key_ranges in shared_ranges if key_ranges]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nvalidate the mapper spec and all required parameters.", "response": "def validate(cls, mapper_spec):\n \"\"\"Validates mapper spec and all mapper parameters.\n\n Args:\n mapper_spec: The MapperSpec for this InputReader.\n\n Raises:\n BadReaderParamsError: required parameters are missing or invalid.\n \"\"\"\n if mapper_spec.input_reader_class() != cls:\n raise BadReaderParamsError(\"Input reader class mismatch\")\n params = _get_params(mapper_spec)\n if cls.ENTITY_KIND_PARAM not in params:\n raise BadReaderParamsError(\"Missing mapper parameter 'entity_kind'\")\n if cls.BATCH_SIZE_PARAM in params:\n try:\n batch_size = 
int(params[cls.BATCH_SIZE_PARAM])\n if batch_size < 1:\n raise BadReaderParamsError(\"Bad batch size: %s\" % batch_size)\n except ValueError, e:\n raise BadReaderParamsError(\"Bad batch size: %s\" % e)\n if cls.NAMESPACE_PARAM in params:\n if not isinstance(params[cls.NAMESPACE_PARAM],\n (str, unicode, type(None))):\n raise BadReaderParamsError(\n \"Expected a single namespace string\")\n if cls.NAMESPACES_PARAM in params:\n raise BadReaderParamsError(\"Multiple namespaces are no longer supported\")\n if cls.FILTERS_PARAM in params:\n filters = params[cls.FILTERS_PARAM]\n if not isinstance(filters, list):\n raise BadReaderParamsError(\"Expected list for filters parameter\")\n for f in filters:\n if not isinstance(f, (tuple, list)):\n raise BadReaderParamsError(\"Filter should be a tuple or list: %s\", f)\n if len(f) != 3:\n raise BadReaderParamsError(\"Filter should be a 3-tuple: %s\", f)\n if not isinstance(f[0], basestring):\n raise BadReaderParamsError(\"First element should be string: %s\", f)\n if f[1] != \"=\":\n raise BadReaderParamsError(\n \"Only equality filters are supported: %s\", f)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef split_input(cls, mapper_spec):\n params = _get_params(mapper_spec)\n entity_kind_name = params[cls.ENTITY_KIND_PARAM]\n batch_size = int(params.get(cls.BATCH_SIZE_PARAM, cls._BATCH_SIZE))\n shard_count = mapper_spec.shard_count\n namespace = params.get(cls.NAMESPACE_PARAM)\n app = params.get(cls._APP_PARAM)\n filters = params.get(cls.FILTERS_PARAM)\n\n if namespace is None:\n # It is difficult to efficiently shard large numbers of namespaces because\n # there can be an arbitrary number of them. So the strategy is:\n # 1. if there are a small number of namespaces in the datastore then\n # generate one KeyRange per namespace per shard and assign each shard a\n # KeyRange for every namespace. This should lead to nearly perfect\n # sharding.\n # 2. 
if there are a large number of namespaces in the datastore then\n # generate one NamespaceRange per worker. This can lead to very bad\n # sharding because namespaces can contain very different numbers of\n # entities and each NamespaceRange may contain very different numbers\n # of namespaces.\n namespace_query = datastore.Query(\"__namespace__\",\n keys_only=True,\n _app=app)\n namespace_keys = namespace_query.Get(\n limit=cls.MAX_NAMESPACES_FOR_KEY_SHARD+1)\n\n if len(namespace_keys) > cls.MAX_NAMESPACES_FOR_KEY_SHARD:\n ns_ranges = namespace_range.NamespaceRange.split(n=shard_count,\n contiguous=True,\n _app=app)\n return [cls(entity_kind_name,\n key_ranges=None,\n ns_range=ns_range,\n batch_size=batch_size,\n filters=filters)\n for ns_range in ns_ranges]\n elif not namespace_keys:\n return [cls(entity_kind_name,\n key_ranges=None,\n ns_range=namespace_range.NamespaceRange(_app=app),\n batch_size=shard_count,\n filters=filters)]\n else:\n namespaces = [namespace_key.name() or \"\"\n for namespace_key in namespace_keys]\n else:\n namespaces = [namespace]\n\n readers = cls._split_input_from_params(\n app, namespaces, entity_kind_name, params, shard_count)\n if filters:\n for reader in readers:\n reader._filters = filters\n return readers", "response": "Splits the input into shards without fetching query results."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_json(self):\n if self._key_ranges is None:\n key_ranges_json = None\n else:\n key_ranges_json = []\n for k in self._key_ranges:\n if k:\n key_ranges_json.append(k.to_json())\n else:\n key_ranges_json.append(None)\n\n if self._ns_range is None:\n namespace_range_json = None\n else:\n namespace_range_json = self._ns_range.to_json_object()\n\n if self._current_key_range is None:\n current_key_range_json = None\n else:\n current_key_range_json = self._current_key_range.to_json()\n\n json_dict = {self.KEY_RANGE_PARAM: key_ranges_json,\n 
self.NAMESPACE_RANGE_PARAM: namespace_range_json,\n self.CURRENT_KEY_RANGE_PARAM: current_key_range_json,\n self.ENTITY_KIND_PARAM: self._entity_kind,\n self.BATCH_SIZE_PARAM: self._batch_size,\n self.FILTERS_PARAM: self._filters}\n return json_dict", "response": "Serializes all the data in this query range into a json-compatible map."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new DatastoreInputReader from the json encoded by to_json.", "response": "def from_json(cls, json):\n \"\"\"Create new DatastoreInputReader from the json, encoded by to_json.\n\n Args:\n json: json map representation of DatastoreInputReader.\n\n Returns:\n an instance of DatastoreInputReader with all data deserialized from json.\n \"\"\"\n if json[cls.KEY_RANGE_PARAM] is None:\n # pylint: disable=redefined-outer-name\n key_ranges = None\n else:\n key_ranges = []\n for k in json[cls.KEY_RANGE_PARAM]:\n if k:\n key_ranges.append(key_range.KeyRange.from_json(k))\n else:\n key_ranges.append(None)\n\n if json[cls.NAMESPACE_RANGE_PARAM] is None:\n ns_range = None\n else:\n ns_range = namespace_range.NamespaceRange.from_json_object(\n json[cls.NAMESPACE_RANGE_PARAM])\n\n if json[cls.CURRENT_KEY_RANGE_PARAM] is None:\n current_key_range = None\n else:\n current_key_range = key_range.KeyRange.from_json(\n json[cls.CURRENT_KEY_RANGE_PARAM])\n\n return cls(\n json[cls.ENTITY_KIND_PARAM],\n key_ranges,\n ns_range,\n json[cls.BATCH_SIZE_PARAM],\n current_key_range,\n filters=json.get(cls.FILTERS_PARAM))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef next(self):\n self._has_iterated = True\n\n if self._read_before_start:\n self._blob_reader.readline()\n self._read_before_start = False\n start_position = self._blob_reader.tell()\n\n if start_position > self._end_position:\n raise StopIteration()\n\n line = self._blob_reader.readline()\n\n if not line:\n raise StopIteration()\n\n return start_position, 
line.rstrip(\"\\n\")", "response": "Returns the next input from this input reader as an (offset, line) tuple."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a json-compatible input shard spec for remaining inputs.", "response": "def to_json(self):\n \"\"\"Returns a json-compatible input shard spec for remaining inputs.\"\"\"\n new_pos = self._blob_reader.tell()\n if self._has_iterated:\n new_pos -= 1\n return {self.BLOB_KEY_PARAM: self._blob_key,\n self.INITIAL_POSITION_PARAM: new_pos,\n self.END_POSITION_PARAM: self._end_position}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_json(cls, json):\n return cls(json[cls.BLOB_KEY_PARAM],\n json[cls.INITIAL_POSITION_PARAM],\n json[cls.END_POSITION_PARAM])", "response": "Instantiates an instance of this InputReader for the given shard spec."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate(cls, mapper_spec):\n if mapper_spec.input_reader_class() != cls:\n raise BadReaderParamsError(\"Mapper input reader class mismatch\")\n params = _get_params(mapper_spec)\n if cls.BLOB_KEYS_PARAM not in params:\n raise BadReaderParamsError(\"Must specify 'blob_keys' for mapper input\")\n blob_keys = params[cls.BLOB_KEYS_PARAM]\n if isinstance(blob_keys, basestring):\n # This is a mechanism to allow multiple blob keys (which do not contain\n # commas) in a single string. 
It may go away.\n blob_keys = blob_keys.split(\",\")\n if len(blob_keys) > cls._MAX_BLOB_KEYS_COUNT:\n raise BadReaderParamsError(\"Too many 'blob_keys' for mapper input\")\n if not blob_keys:\n raise BadReaderParamsError(\"No 'blob_keys' specified for mapper input\")\n for blob_key in blob_keys:\n blob_info = blobstore.BlobInfo.get(blobstore.BlobKey(blob_key))\n if not blob_info:\n raise BadReaderParamsError(\"Could not find blobinfo for key %s\" %\n blob_key)", "response": "Validates the mapper spec and all required parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a list of shard_count input_spec_shards for the specified shard_count.", "response": "def split_input(cls, mapper_spec):\n \"\"\"Returns a list of shard_count input_spec_shards for input_spec.\n\n Args:\n mapper_spec: The mapper specification to split from. Must contain\n 'blob_keys' parameter with one or more blob keys.\n\n Returns:\n A list of BlobstoreInputReaders corresponding to the specified shards.\n \"\"\"\n params = _get_params(mapper_spec)\n blob_keys = params[cls.BLOB_KEYS_PARAM]\n if isinstance(blob_keys, basestring):\n # This is a mechanism to allow multiple blob keys (which do not contain\n # commas) in a single string. 
It may go away.\n blob_keys = blob_keys.split(\",\")\n\n blob_sizes = {}\n for blob_key in blob_keys:\n blob_info = blobstore.BlobInfo.get(blobstore.BlobKey(blob_key))\n blob_sizes[blob_key] = blob_info.size\n\n shard_count = min(cls._MAX_SHARD_COUNT, mapper_spec.shard_count)\n shards_per_blob = shard_count // len(blob_keys)\n if shards_per_blob == 0:\n shards_per_blob = 1\n\n chunks = []\n for blob_key, blob_size in blob_sizes.items():\n blob_chunk_size = blob_size // shards_per_blob\n for i in xrange(shards_per_blob - 1):\n chunks.append(BlobstoreLineInputReader.from_json(\n {cls.BLOB_KEY_PARAM: blob_key,\n cls.INITIAL_POSITION_PARAM: blob_chunk_size * i,\n cls.END_POSITION_PARAM: blob_chunk_size * (i + 1)}))\n chunks.append(BlobstoreLineInputReader.from_json(\n {cls.BLOB_KEY_PARAM: blob_key,\n cls.INITIAL_POSITION_PARAM: blob_chunk_size * (shards_per_blob - 1),\n cls.END_POSITION_PARAM: blob_size}))\n return chunks"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef next(self):\n if not self._zip:\n self._zip = zipfile.ZipFile(self._reader(self._blob_key))\n # Get a list of entries, reversed so we can pop entries off in order\n self._entries = self._zip.infolist()[self._start_index:self._end_index]\n self._entries.reverse()\n if not self._entries:\n raise StopIteration()\n entry = self._entries.pop()\n self._start_index += 1\n return (entry, lambda: self._read(entry))", "response": "Returns the next input from this input reader as a 2 - tuple."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread entry content. Args: entry: zip file entry as zipfile.ZipInfo. 
Returns: Entry content as string.", "response": "def _read(self, entry):\n \"\"\"Read entry content.\n\n Args:\n entry: zip file entry as zipfile.ZipInfo.\n Returns:\n Entry content as string.\n \"\"\"\n start_time = time.time()\n content = self._zip.read(entry.filename)\n\n ctx = context.get()\n if ctx:\n operation.counters.Increment(COUNTER_IO_READ_BYTES, len(content))(ctx)\n operation.counters.Increment(\n COUNTER_IO_READ_MSEC, int((time.time() - start_time) * 1000))(ctx)\n\n return content"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef from_json(cls, json):\n return cls(json[cls.BLOB_KEY_PARAM],\n json[cls.START_INDEX_PARAM],\n json[cls.END_INDEX_PARAM])", "response": "Creates an InputReader instance from the given json."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns an input shard state for the remaining inputs.", "response": "def to_json(self):\n \"\"\"Returns an input shard state for the remaining inputs.\n\n Returns:\n A json-izable version of the remaining InputReader.\n \"\"\"\n return {self.BLOB_KEY_PARAM: self._blob_key,\n self.START_INDEX_PARAM: self._start_index,\n self.END_INDEX_PARAM: self._end_index}"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nvalidates mapper spec and all required parameters.", "response": "def validate(cls, mapper_spec):\n \"\"\"Validates mapper spec and all mapper parameters.\n\n Args:\n mapper_spec: The MapperSpec for this InputReader.\n\n Raises:\n BadReaderParamsError: required parameters are missing or invalid.\n \"\"\"\n if mapper_spec.input_reader_class() != cls:\n raise BadReaderParamsError(\"Mapper input reader class mismatch\")\n params = _get_params(mapper_spec)\n if cls.BLOB_KEY_PARAM not in params:\n raise BadReaderParamsError(\"Must specify 'blob_key' for mapper input\")\n blob_key = params[cls.BLOB_KEY_PARAM]\n blob_info = 
blobstore.BlobInfo.get(blobstore.BlobKey(blob_key))\n if not blob_info:\n raise BadReaderParamsError(\"Could not find blobinfo for key %s\" %\n blob_key)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef split_input(cls, mapper_spec, _reader=blobstore.BlobReader):\n params = _get_params(mapper_spec)\n blob_key = params[cls.BLOB_KEY_PARAM]\n zip_input = zipfile.ZipFile(_reader(blob_key))\n zfiles = zip_input.infolist()\n total_size = sum(x.file_size for x in zfiles)\n num_shards = min(mapper_spec.shard_count, cls._MAX_SHARD_COUNT)\n size_per_shard = total_size // num_shards\n\n # Break the list of files into sublists, each of approximately\n # size_per_shard bytes.\n shard_start_indexes = [0]\n current_shard_size = 0\n for i, fileinfo in enumerate(zfiles):\n current_shard_size += fileinfo.file_size\n if current_shard_size >= size_per_shard:\n shard_start_indexes.append(i + 1)\n current_shard_size = 0\n\n if shard_start_indexes[-1] != len(zfiles):\n shard_start_indexes.append(len(zfiles))\n\n return [cls(blob_key, start_index, end_index, _reader)\n for start_index, end_index\n in zip(shard_start_indexes, shard_start_indexes[1:])]", "response": "Splits the input file into several input shards."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef split_input(cls, mapper_spec, _reader=blobstore.BlobReader):\n params = _get_params(mapper_spec)\n blob_keys = params[cls.BLOB_KEYS_PARAM]\n if isinstance(blob_keys, basestring):\n # This is a mechanism to allow multiple blob keys (which do not contain\n # commas) in a single string. 
It may go away.\n blob_keys = blob_keys.split(\",\")\n\n blob_files = {}\n total_size = 0\n for blob_key in blob_keys:\n zip_input = zipfile.ZipFile(_reader(blob_key))\n blob_files[blob_key] = zip_input.infolist()\n total_size += sum(x.file_size for x in blob_files[blob_key])\n\n shard_count = min(cls._MAX_SHARD_COUNT, mapper_spec.shard_count)\n\n # We can break on both blob key and file-within-zip boundaries.\n # A shard will span at minimum a single blob key, but may only\n # handle a few files within a blob.\n\n size_per_shard = total_size // shard_count\n\n readers = []\n for blob_key in blob_keys:\n bfiles = blob_files[blob_key]\n current_shard_size = 0\n start_file_index = 0\n next_file_index = 0\n for fileinfo in bfiles:\n next_file_index += 1\n current_shard_size += fileinfo.file_size\n if current_shard_size >= size_per_shard:\n readers.append(cls(blob_key, start_file_index, next_file_index, 0,\n _reader))\n current_shard_size = 0\n start_file_index = next_file_index\n if current_shard_size != 0:\n readers.append(cls(blob_key, start_file_index, next_file_index, 0,\n _reader))\n\n return readers", "response": "Returns a list of InputReaders that can be used to read blobs within the input spec."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the next line from this input reader as a 2 - tuple.", "response": "def next(self):\n \"\"\"Returns the next line from this input reader as (lineinfo, line) tuple.\n\n Returns:\n The next input from this input reader, in the form of a 2-tuple.\n The first element of the tuple describes the source, it is itself\n a tuple (blobkey, filenumber, byteoffset).\n The second element of the tuple is the line found at that offset.\n \"\"\"\n if not self._filestream:\n if not self._zip:\n self._zip = zipfile.ZipFile(self._reader(self._blob_key))\n # Get a list of entries, reversed so we can pop entries off in order\n self._entries = 
self._zip.infolist()[self._start_file_index:\n self._end_file_index]\n self._entries.reverse()\n if not self._entries:\n raise StopIteration()\n entry = self._entries.pop()\n value = self._zip.read(entry.filename)\n self._filestream = StringIO.StringIO(value)\n if self._initial_offset:\n self._filestream.seek(self._initial_offset)\n self._filestream.readline()\n\n start_position = self._filestream.tell()\n line = self._filestream.readline()\n\n if not line:\n # Done with this file in the zip. Move on to the next file.\n self._filestream.close()\n self._filestream = None\n self._start_file_index += 1\n self._initial_offset = 0\n return self.next()\n\n return ((self._blob_key, self._start_file_index, start_position),\n line.rstrip(\"\\n\"))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _next_offset(self):\n if self._filestream:\n offset = self._filestream.tell()\n if offset:\n offset -= 1\n else:\n offset = self._initial_offset\n\n return offset", "response": "Return the offset of the next line to read."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_json(self):\n\n return {self.BLOB_KEY_PARAM: self._blob_key,\n self.START_FILE_INDEX_PARAM: self._start_file_index,\n self.END_FILE_INDEX_PARAM: self._end_file_index,\n self.OFFSET_PARAM: self._next_offset()}", "response": "Returns an input shard state for the remaining inputs."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_json(cls, json, _reader=blobstore.BlobReader):\n return cls(json[cls.BLOB_KEY_PARAM],\n json[cls.START_FILE_INDEX_PARAM],\n json[cls.END_FILE_INDEX_PARAM],\n json[cls.OFFSET_PARAM],\n _reader)", "response": "Creates an InputReader instance from the given json."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_json(self):\n return {self.NAMESPACE_RANGE_PARAM: 
self.ns_range.to_json_object(),\n self.BATCH_SIZE_PARAM: self._batch_size}", "response": "Serializes all the data in this query range into a json-compatible map."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new DatastoreInputReader from the json encoded by to_json.", "response": "def from_json(cls, json):\n \"\"\"Create new DatastoreInputReader from the json, encoded by to_json.\n\n Args:\n json: json map representation of DatastoreInputReader.\n\n Returns:\n an instance of DatastoreInputReader with all data deserialized from json.\n \"\"\"\n return cls(\n namespace_range.NamespaceRange.from_json_object(\n json[cls.NAMESPACE_RANGE_PARAM]),\n json[cls.BATCH_SIZE_PARAM])"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nvalidate the mapper spec.", "response": "def validate(cls, mapper_spec):\n \"\"\"Validates mapper spec.\n\n Args:\n mapper_spec: The MapperSpec for this InputReader.\n\n Raises:\n BadReaderParamsError: required parameters are missing or invalid.\n \"\"\"\n if mapper_spec.input_reader_class() != cls:\n raise BadReaderParamsError(\"Input reader class mismatch\")\n params = _get_params(mapper_spec)\n if cls.BATCH_SIZE_PARAM in params:\n try:\n batch_size = int(params[cls.BATCH_SIZE_PARAM])\n if batch_size < 1:\n raise BadReaderParamsError(\"Bad batch size: %s\" % batch_size)\n except ValueError, e:\n raise BadReaderParamsError(\"Bad batch size: %s\" % e)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsplits the input reader into a list of input readers for the input spec.", "response": "def split_input(cls, mapper_spec):\n \"\"\"Returns a list of input readers for the input spec.\n\n Args:\n mapper_spec: The MapperSpec for this InputReader.\n\n Returns:\n A list of InputReaders.\n \"\"\"\n batch_size = int(_get_params(mapper_spec).get(\n cls.BATCH_SIZE_PARAM, cls._BATCH_SIZE))\n shard_count = 
mapper_spec.shard_count\n namespace_ranges = namespace_range.NamespaceRange.split(shard_count,\n contiguous=True)\n return [NamespaceInputReader(ns_range, batch_size)\n for ns_range in namespace_ranges]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_json(cls, json):\n # Strip out unrecognized parameters, as introduced by b/5960884.\n params = dict((str(k), v) for k, v in json.iteritems()\n if k in cls._PARAMS)\n\n # This is not symmetric with to_json() wrt. PROTOTYPE_REQUEST_PARAM because\n # the constructor parameters need to be JSON-encodable, so the decoding\n # needs to happen there anyways.\n if cls._OFFSET_PARAM in params:\n params[cls._OFFSET_PARAM] = base64.b64decode(params[cls._OFFSET_PARAM])\n return cls(**params)", "response": "Creates an InputReader from a given JSON object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef to_json(self):\n\n params = dict(self.__params) # Shallow copy.\n if self._PROTOTYPE_REQUEST_PARAM in params:\n prototype_request = params[self._PROTOTYPE_REQUEST_PARAM]\n params[self._PROTOTYPE_REQUEST_PARAM] = prototype_request.Encode()\n if self._OFFSET_PARAM in params:\n params[self._OFFSET_PARAM] = base64.b64encode(params[self._OFFSET_PARAM])\n return params", "response": "Returns an input shard state for the remaining inputs."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsplit the input into shards.", "response": "def split_input(cls, mapper_spec):\n \"\"\"Returns a list of input readers for the given input specification.\n\n Args:\n mapper_spec: The MapperSpec for this InputReader.\n\n Returns:\n A list of InputReaders.\n \"\"\"\n params = _get_params(mapper_spec)\n shard_count = mapper_spec.shard_count\n\n # Pick out the overall start and end times and time step per shard.\n start_time = params[cls.START_TIME_PARAM]\n end_time = params[cls.END_TIME_PARAM]\n 
seconds_per_shard = (end_time - start_time) / shard_count\n\n # Create a LogInputReader for each shard, modulating the params as we go.\n shards = []\n for _ in xrange(shard_count - 1):\n params[cls.END_TIME_PARAM] = (params[cls.START_TIME_PARAM] +\n seconds_per_shard)\n shards.append(LogInputReader(**params))\n params[cls.START_TIME_PARAM] = params[cls.END_TIME_PARAM]\n\n # Create a final shard to complete the time range.\n params[cls.END_TIME_PARAM] = end_time\n return shards + [LogInputReader(**params)]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate(cls, mapper_spec):\n if mapper_spec.input_reader_class() != cls:\n raise errors.BadReaderParamsError(\"Input reader class mismatch\")\n\n params = _get_params(mapper_spec, allowed_keys=cls._PARAMS)\n if (cls.VERSION_IDS_PARAM not in params and\n cls.MODULE_VERSIONS_PARAM not in params):\n raise errors.BadReaderParamsError(\"Must specify a list of version ids or \"\n \"module/version ids for mapper input\")\n if (cls.VERSION_IDS_PARAM in params and\n cls.MODULE_VERSIONS_PARAM in params):\n raise errors.BadReaderParamsError(\"Can not supply both version ids or \"\n \"module/version ids. 
Use only one.\")\n if (cls.START_TIME_PARAM not in params or\n params[cls.START_TIME_PARAM] is None):\n raise errors.BadReaderParamsError(\"Must specify a starting time for \"\n \"mapper input\")\n if cls.END_TIME_PARAM not in params or params[cls.END_TIME_PARAM] is None:\n params[cls.END_TIME_PARAM] = time.time()\n\n if params[cls.START_TIME_PARAM] >= params[cls.END_TIME_PARAM]:\n raise errors.BadReaderParamsError(\"The starting time cannot be later \"\n \"than or the same as the ending time.\")\n\n if cls._PROTOTYPE_REQUEST_PARAM in params:\n try:\n params[cls._PROTOTYPE_REQUEST_PARAM] = log_service_pb.LogReadRequest(\n params[cls._PROTOTYPE_REQUEST_PARAM])\n except (TypeError, ProtocolBuffer.ProtocolBufferDecodeError):\n raise errors.BadReaderParamsError(\"The prototype request must be \"\n \"parseable as a LogReadRequest.\")\n\n # Pass the parameters to logservice.fetch() to verify any underlying\n # constraints on types or values. This only constructs an iterator, it\n # doesn't trigger any requests for actual log records.\n try:\n logservice.fetch(**params)\n except logservice.InvalidArgumentError, e:\n raise errors.BadReaderParamsError(\"One or more parameters are not valid \"\n \"inputs to logservice.fetch(): %s\" % e)", "response": "Validates the input reader s specification and all necessary parameters."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _next_file(self):\n while True:\n if self._bucket_iter:\n try:\n return self._bucket_iter.next().filename\n except StopIteration:\n self._bucket_iter = None\n self._bucket = None\n if self._index >= len(self._filenames):\n return\n filename = self._filenames[self._index]\n self._index += 1\n if self._delimiter is None or not filename.endswith(self._delimiter):\n return filename\n self._bucket = cloudstorage.listbucket(filename,\n delimiter=self._delimiter)\n self._bucket_iter = iter(self._bucket)", "response": "Find next file in the list."} 
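The LogInputReader.split_input record above divides the overall [start_time, end_time) window into shard_count contiguous sub-ranges, then creates one final shard pinned to the exact end time. A minimal standalone sketch of just that arithmetic (`split_time_range` is a hypothetical helper name, not part of the mapreduce library, which instead mutates a shared params dict and builds LogInputReader instances):

```python
def split_time_range(start_time, end_time, shard_count):
    """Split [start_time, end_time) into shard_count contiguous sub-ranges.

    Mirrors the sharding logic in LogInputReader.split_input: every shard
    gets an equal-width slice, and the last shard's end is pinned to
    end_time so truncation in the division never drops the tail.
    """
    seconds_per_shard = (end_time - start_time) / shard_count
    shards = []
    cursor = start_time
    for _ in range(shard_count - 1):
        shards.append((cursor, cursor + seconds_per_shard))
        cursor += seconds_per_shard
    # Final shard completes the time range exactly.
    shards.append((cursor, end_time))
    return shards
```

Creating the last shard separately, rather than in the loop, is what guarantees the union of the shards covers the full window even when the range does not divide evenly.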
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef validate(cls, mapper_spec):\n reader_spec = cls.get_params(mapper_spec, allow_old=False)\n\n # Bucket Name is required\n if cls.BUCKET_NAME_PARAM not in reader_spec:\n raise errors.BadReaderParamsError(\n \"%s is required for Google Cloud Storage\" %\n cls.BUCKET_NAME_PARAM)\n try:\n cloudstorage.validate_bucket_name(\n reader_spec[cls.BUCKET_NAME_PARAM])\n except ValueError, error:\n raise errors.BadReaderParamsError(\"Bad bucket name, %s\" % (error))\n\n # Object Name(s) are required\n if cls.OBJECT_NAMES_PARAM not in reader_spec:\n raise errors.BadReaderParamsError(\n \"%s is required for Google Cloud Storage\" %\n cls.OBJECT_NAMES_PARAM)\n filenames = reader_spec[cls.OBJECT_NAMES_PARAM]\n if not isinstance(filenames, list):\n raise errors.BadReaderParamsError(\n \"Object name list is not a list but a %s\" %\n filenames.__class__.__name__)\n for filename in filenames:\n if not isinstance(filename, basestring):\n raise errors.BadReaderParamsError(\n \"Object name is not a string but a %s\" %\n filename.__class__.__name__)\n if cls.DELIMITER_PARAM in reader_spec:\n delimiter = reader_spec[cls.DELIMITER_PARAM]\n if not isinstance(delimiter, basestring):\n raise errors.BadReaderParamsError(\n \"%s is not a string but a %s\" %\n (cls.DELIMITER_PARAM, type(delimiter)))", "response": "Validate the mapper specification."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef split_input(cls, mapper_spec):\n reader_spec = cls.get_params(mapper_spec, allow_old=False)\n bucket = reader_spec[cls.BUCKET_NAME_PARAM]\n filenames = reader_spec[cls.OBJECT_NAMES_PARAM]\n delimiter = reader_spec.get(cls.DELIMITER_PARAM)\n account_id = reader_spec.get(cls._ACCOUNT_ID_PARAM)\n buffer_size = reader_spec.get(cls.BUFFER_SIZE_PARAM)\n fail_on_missing_input = reader_spec.get(cls.FAIL_ON_MISSING_INPUT)\n\n # Gather the 
complete list of files (expanding wildcards)\n all_filenames = []\n for filename in filenames:\n if filename.endswith(\"*\"):\n all_filenames.extend(\n [file_stat.filename for file_stat in cloudstorage.listbucket(\n \"/\" + bucket + \"/\" + filename[:-1], delimiter=delimiter,\n _account_id=account_id)])\n else:\n all_filenames.append(\"/%s/%s\" % (bucket, filename))\n\n # Split into shards\n readers = []\n for shard in range(0, mapper_spec.shard_count):\n shard_filenames = all_filenames[shard::mapper_spec.shard_count]\n if shard_filenames:\n reader = cls(\n shard_filenames, buffer_size=buffer_size, _account_id=account_id,\n delimiter=delimiter)\n reader._fail_on_missing_input = fail_on_missing_input\n readers.append(reader)\n return readers", "response": "Returns a list of InputReaders."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the next input from this input reader a block of bytes.", "response": "def next(self):\n \"\"\"Returns the next input from this input reader, a block of bytes.\n\n Non existent files will be logged and skipped. The file might have been\n removed after input splitting.\n\n Returns:\n The next input from this input reader in the form of a cloudstorage\n ReadBuffer that supports a File-like interface (read, readline, seek,\n tell, and close). 
An error may be raised if the file can not be opened.\n\n Raises:\n StopIteration: The list of files has been exhausted.\n \"\"\"\n options = {}\n if self._buffer_size:\n options[\"read_buffer_size\"] = self._buffer_size\n if self._account_id:\n options[\"_account_id\"] = self._account_id\n while True:\n filename = self._next_file()\n if filename is None:\n raise StopIteration()\n try:\n start_time = time.time()\n handle = cloudstorage.open(filename, **options)\n\n ctx = context.get()\n if ctx:\n operation.counters.Increment(\n COUNTER_IO_READ_MSEC, int((time.time() - start_time) * 1000))(ctx)\n\n return handle\n except cloudstorage.NotFoundError:\n # Fail the job if we're strict on missing input.\n if getattr(self, \"_fail_on_missing_input\", False):\n raise errors.FailJobError(\n \"File missing in GCS, aborting: %s\" % filename)\n # Move on otherwise.\n logging.warning(\"File %s may have been removed. Skipping file.\",\n filename)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef next(self):\n while True:\n if not hasattr(self, \"_cur_handle\") or self._cur_handle is None:\n # If there are no more files, StopIteration is raised here\n self._cur_handle = super(_GoogleCloudStorageRecordInputReader,\n self).next()\n if not hasattr(self, \"_record_reader\") or self._record_reader is None:\n self._record_reader = records.RecordsReader(self._cur_handle)\n\n try:\n start_time = time.time()\n content = self._record_reader.read()\n\n ctx = context.get()\n if ctx:\n operation.counters.Increment(COUNTER_IO_READ_BYTES, len(content))(ctx)\n operation.counters.Increment(\n COUNTER_IO_READ_MSEC, int((time.time() - start_time) * 1000))(ctx)\n return content\n\n except EOFError:\n self._cur_handle = None\n self._record_reader = None", "response": "Returns the next input from this input reader a record."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn an input shard state for the 
remaining inputs.", "response": "def to_json(self):\n \"\"\"Returns an input shard state for the remaining inputs.\n\n Returns:\n A json-izable version of the remaining InputReader.\n \"\"\"\n result = super(_ReducerReader, self).to_json()\n result[\"current_key\"] = self.encode_data(self.current_key)\n result[\"current_values\"] = self.encode_data(self.current_values)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_json(cls, json):\n result = super(_ReducerReader, cls).from_json(json)\n result.current_key = _ReducerReader.decode_data(json[\"current_key\"])\n result.current_values = _ReducerReader.decode_data(json[\"current_values\"])\n return result", "response": "Creates an instance of the InputReader configured using the values of json."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef incr(self, counter_name, delta=1):\n self._state.counters_map.increment(counter_name, delta)", "response": "Changes counter by delta."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the current value of a counter.", "response": "def counter(self, counter_name, default=0):\n \"\"\"Get the current counter value.\n\n Args:\n counter_name: name of the counter in string.\n default: default value in int if one doesn't exist.\n\n Returns:\n Current value of the counter.\n \"\"\"\n return self._state.counters_map.get(counter_name, default)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef emit(self, value):\n if not self._tstate.output_writer:\n logging.error(\"emit is called, but no output writer is set.\")\n return\n self._tstate.output_writer.write(value)", "response": "Emits a value to the output writer."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef 
validate(cls, job_config):\n reader_params = job_config.input_reader_params\n\n # Bucket Name is required\n if cls.BUCKET_NAME_PARAM not in reader_params:\n raise errors.BadReaderParamsError(\n \"%s is required for Google Cloud Storage\" %\n cls.BUCKET_NAME_PARAM)\n try:\n cloudstorage.validate_bucket_name(\n reader_params[cls.BUCKET_NAME_PARAM])\n except ValueError, error:\n raise errors.BadReaderParamsError(\"Bad bucket name, %s\" % (error))\n\n # Object Name(s) are required\n if cls.OBJECT_NAMES_PARAM not in reader_params:\n raise errors.BadReaderParamsError(\n \"%s is required for Google Cloud Storage\" %\n cls.OBJECT_NAMES_PARAM)\n filenames = reader_params[cls.OBJECT_NAMES_PARAM]\n if not isinstance(filenames, list):\n raise errors.BadReaderParamsError(\n \"Object name list is not a list but a %s\" %\n filenames.__class__.__name__)\n for filename in filenames:\n if not isinstance(filename, basestring):\n raise errors.BadReaderParamsError(\n \"Object name is not a string but a %s\" %\n filename.__class__.__name__)\n\n # Delimiter.\n if cls.DELIMITER_PARAM in reader_params:\n delimiter = reader_params[cls.DELIMITER_PARAM]\n if not isinstance(delimiter, basestring):\n raise errors.BadReaderParamsError(\n \"%s is not a string but a %s\" %\n (cls.DELIMITER_PARAM, type(delimiter)))\n\n # Buffer size.\n if cls.BUFFER_SIZE_PARAM in reader_params:\n buffer_size = reader_params[cls.BUFFER_SIZE_PARAM]\n if not isinstance(buffer_size, int):\n raise errors.BadReaderParamsError(\n \"%s is not an int but a %s\" %\n (cls.BUFFER_SIZE_PARAM, type(buffer_size)))\n\n # Path filter.\n if cls.PATH_FILTER_PARAM in reader_params:\n path_filter = reader_params[cls.PATH_FILTER_PARAM]\n if not isinstance(path_filter, PathFilter):\n raise errors.BadReaderParamsError(\n \"%s is not an instance of PathFilter but %s.\" %\n (cls.PATH_FILTER_PARAM, type(path_filter)))", "response": "Validate the mapper specification."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief 
explanation for the following Python 3 code\ndef split_input(cls, job_config):\n reader_params = job_config.input_reader_params\n bucket = reader_params[cls.BUCKET_NAME_PARAM]\n filenames = reader_params[cls.OBJECT_NAMES_PARAM]\n delimiter = reader_params.get(cls.DELIMITER_PARAM)\n account_id = reader_params.get(cls._ACCOUNT_ID_PARAM)\n buffer_size = reader_params.get(cls.BUFFER_SIZE_PARAM)\n path_filter = reader_params.get(cls.PATH_FILTER_PARAM)\n\n # Gather the complete list of files (expanding wildcards)\n all_filenames = []\n for filename in filenames:\n if filename.endswith(\"*\"):\n all_filenames.extend(\n [file_stat.filename for file_stat in cloudstorage.listbucket(\n \"/\" + bucket + \"/\" + filename[:-1], delimiter=delimiter,\n _account_id=account_id)])\n else:\n all_filenames.append(\"/%s/%s\" % (bucket, filename))\n\n # Split into shards\n readers = []\n for shard in range(0, job_config.shard_count):\n shard_filenames = all_filenames[shard::job_config.shard_count]\n if shard_filenames:\n readers.append(cls(\n shard_filenames, buffer_size=buffer_size, _account_id=account_id,\n delimiter=delimiter, path_filter=path_filter))\n return readers", "response": "Splits the input files into shards and returns a list of InputReaders."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a handler to the next file in the list.", "response": "def next(self):\n \"\"\"Returns a handler to the next file.\n\n Non existent files will be logged and skipped. The file might have been\n removed after input splitting.\n\n Returns:\n The next input from this input reader in the form of a cloudstorage\n ReadBuffer that supports a File-like interface (read, readline, seek,\n tell, and close). 
An error may be raised if the file can not be opened.\n\n Raises:\n StopIteration: The list of files has been exhausted.\n \"\"\"\n options = {}\n if self._buffer_size:\n options[\"read_buffer_size\"] = self._buffer_size\n if self._account_id:\n options[\"_account_id\"] = self._account_id\n while True:\n filename = self._next_file()\n if filename is None:\n raise StopIteration()\n if (self._path_filter and\n not self._path_filter.accept(self._slice_ctx, filename)):\n continue\n try:\n start_time = time.time()\n handle = cloudstorage.open(filename, **options)\n self._slice_ctx.incr(self.COUNTER_IO_READ_MSEC,\n int(time.time() - start_time) * 1000)\n self._slice_ctx.incr(self.COUNTER_FILE_READ)\n return handle\n except cloudstorage.NotFoundError:\n logging.warning(\"File %s may have been removed. Skipping file.\",\n filename)\n self._slice_ctx.incr(self.COUNTER_FILE_MISSING)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the next input from this input reader a record.", "response": "def next(self):\n \"\"\"Returns the next input from this input reader, a record.\n\n Returns:\n The next input from this input reader in the form of a record read from\n an LevelDB file.\n\n Raises:\n StopIteration: The ordered set records has been exhausted.\n \"\"\"\n while True:\n if not hasattr(self, \"_cur_handle\") or self._cur_handle is None:\n # If there are no more files, StopIteration is raised here\n self._cur_handle = super(GCSRecordInputReader, self).next()\n if not hasattr(self, \"_record_reader\") or self._record_reader is None:\n self._record_reader = records.RecordsReader(self._cur_handle)\n\n try:\n start_time = time.time()\n content = self._record_reader.read()\n self._slice_ctx.incr(self.COUNTER_IO_READ_BYTE, len(content))\n self._slice_ctx.incr(self.COUNTER_IO_READ_MSEC,\n int(time.time() - start_time) * 1000)\n return content\n except EOFError:\n self._cur_handle = None\n self._record_reader = None"} {"SOURCE": 
"codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_query_spec(cls, params):\n entity_kind = params[cls.ENTITY_KIND_PARAM]\n filters = params.get(cls.FILTERS_PARAM)\n app = params.get(cls._APP_PARAM)\n ns = params.get(cls.NAMESPACE_PARAM)\n\n return model.QuerySpec(\n entity_kind=cls._get_raw_entity_kind(entity_kind),\n keys_only=bool(params.get(cls.KEYS_ONLY_PARAM, False)),\n filters=filters,\n batch_size=int(params.get(cls.BATCH_SIZE_PARAM, cls._BATCH_SIZE)),\n model_class_path=entity_kind,\n app=app,\n ns=ns)", "response": "Construct a model. QuerySpec from model. MapperSpec."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a list of key_ranges.KeyRanges objects, one for each shard. This method uses scatter index to split each namespace into pieces and assign those pieces to shards. Args: app: app_id in str. namespaces: a list of namespaces in str. shard_count: number of shards to split. query_spec: model.QuerySpec. Returns: a list of key_ranges.KeyRanges objects.", "response": "def _to_key_ranges_by_shard(cls, app, namespaces, shard_count, query_spec):\n \"\"\"Get a list of key_ranges.KeyRanges objects, one for each shard.\n\n This method uses scatter index to split each namespace into pieces\n and assign those pieces to shards.\n\n Args:\n app: app_id in str.\n namespaces: a list of namespaces in str.\n shard_count: number of shards to split.\n query_spec: model.QuerySpec.\n\n Returns:\n a list of key_ranges.KeyRanges objects.\n \"\"\"\n key_ranges_by_ns = []\n # Split each ns into n splits. 
If a ns doesn't have enough scatter to\n # split into n, the last few splits are None.\n for namespace in namespaces:\n ranges = cls._split_ns_by_scatter(\n shard_count,\n namespace,\n query_spec.entity_kind,\n app)\n # The nth split of each ns will be assigned to the nth shard.\n # Shuffle so that None are not all by the end.\n random.shuffle(ranges)\n key_ranges_by_ns.append(ranges)\n\n # KeyRanges from different namespaces might be very different in size.\n # Use round robin to make sure each shard can have at most one split\n # or a None from a ns.\n ranges_by_shard = [[] for _ in range(shard_count)]\n for ranges in key_ranges_by_ns:\n for i, k_range in enumerate(ranges):\n if k_range:\n ranges_by_shard[i].append(k_range)\n\n key_ranges_by_shard = []\n for ranges in ranges_by_shard:\n if ranges:\n key_ranges_by_shard.append(key_ranges.KeyRangesFactory.create_from_list(\n ranges))\n return key_ranges_by_shard"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsplit a namespace by a number of shards.", "response": "def _split_ns_by_scatter(cls,\n shard_count,\n namespace,\n raw_entity_kind,\n app):\n \"\"\"Split a namespace by scatter index into key_range.KeyRange.\n\n TODO(user): Power this with key_range.KeyRange.compute_split_points.\n\n Args:\n shard_count: number of shards.\n namespace: namespace name to split. str.\n raw_entity_kind: low level datastore API entity kind.\n app: app id in str.\n\n Returns:\n A list of key_range.KeyRange objects. 
If there are not enough entities to\n splits into requested shards, the returned list will contain KeyRanges\n ordered lexicographically with any Nones appearing at the end.\n \"\"\"\n if shard_count == 1:\n # With one shard we don't need to calculate any split points at all.\n return [key_range.KeyRange(namespace=namespace, _app=app)]\n\n ds_query = datastore.Query(kind=raw_entity_kind,\n namespace=namespace,\n _app=app,\n keys_only=True)\n ds_query.Order(\"__scatter__\")\n oversampling_factor = 32\n random_keys = ds_query.Get(shard_count * oversampling_factor)\n\n if not random_keys:\n # There are no entities with scatter property. We have no idea\n # how to split.\n return ([key_range.KeyRange(namespace=namespace, _app=app)] +\n [None] * (shard_count - 1))\n\n random_keys.sort()\n\n if len(random_keys) >= shard_count:\n # We've got a lot of scatter values. Sample them down.\n random_keys = cls._choose_split_points(random_keys, shard_count)\n\n k_ranges = []\n\n k_ranges.append(key_range.KeyRange(\n key_start=None,\n key_end=random_keys[0],\n direction=key_range.KeyRange.ASC,\n include_start=False,\n include_end=False,\n namespace=namespace,\n _app=app))\n\n for i in range(0, len(random_keys) - 1):\n k_ranges.append(key_range.KeyRange(\n key_start=random_keys[i],\n key_end=random_keys[i+1],\n direction=key_range.KeyRange.ASC,\n include_start=True,\n include_end=False,\n namespace=namespace,\n _app=app))\n\n k_ranges.append(key_range.KeyRange(\n key_start=random_keys[-1],\n key_end=None,\n direction=key_range.KeyRange.ASC,\n include_start=True,\n include_end=False,\n namespace=namespace,\n _app=app))\n\n if len(k_ranges) < shard_count:\n # We need to have as many shards as it was requested. 
Add some Nones.\n k_ranges += [None] * (shard_count - len(k_ranges))\n return k_ranges"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _setup_constants(alphabet=NAMESPACE_CHARACTERS,\n max_length=MAX_NAMESPACE_LENGTH,\n batch_size=NAMESPACE_BATCH_SIZE):\n \"\"\"Calculate derived constant values. Only useful for testing.\"\"\"\n\n global NAMESPACE_CHARACTERS\n global MAX_NAMESPACE_LENGTH\n # pylint: disable=global-variable-undefined\n global MAX_NAMESPACE\n global _LEX_DISTANCE\n global NAMESPACE_BATCH_SIZE\n\n NAMESPACE_CHARACTERS = alphabet\n MAX_NAMESPACE_LENGTH = max_length\n MAX_NAMESPACE = NAMESPACE_CHARACTERS[-1] * MAX_NAMESPACE_LENGTH\n NAMESPACE_BATCH_SIZE = batch_size\n\n # _LEX_DISTANCE will contain the lexical distance between two adjacent\n # characters in NAMESPACE_CHARACTERS at each character index. This is used\n # to calculate the ordinal for each string. Example:\n # NAMESPACE_CHARACTERS = 'ab'\n # MAX_NAMESPACE_LENGTH = 3\n # _LEX_DISTANCE = [1, 3, 7]\n # '' => 0\n # 'a' => 1\n # 'aa' => 2\n # 'aaa' => 3\n # 'aab' => 4 - Distance between 'aaa' and 'aab' is 1.\n # 'ab' => 5 - Distance between 'aa' and 'ab' is 3.\n # 'aba' => 6\n # 'abb' => 7\n # 'b' => 8 - Distance between 'a' and 'b' is 7.\n # 'ba' => 9\n # 'baa' => 10\n # 'bab' => 11\n # ...\n # _namespace_to_ord('bab') = (1 * 7 + 1) + (0 * 3 + 1) + (1 * 1 + 1) = 11\n _LEX_DISTANCE = [1]\n for i in range(1, MAX_NAMESPACE_LENGTH):\n _LEX_DISTANCE.append(\n _LEX_DISTANCE[i-1] * len(NAMESPACE_CHARACTERS) + 1)\n # pylint: disable=undefined-loop-variable\n del i", "response": "Calculate derived constant values. 
Only useful for testing."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _ord_to_namespace(n, _max_length=None):\n if _max_length is None:\n _max_length = MAX_NAMESPACE_LENGTH\n\n length = _LEX_DISTANCE[_max_length - 1]\n if n == 0:\n return ''\n n -= 1\n return (NAMESPACE_CHARACTERS[n // length] +\n _ord_to_namespace(n % length, _max_length - 1))", "response": "Convert a namespace ordinal to a namespace string."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _namespace_to_ord(namespace):\n n = 0\n for i, c in enumerate(namespace):\n n += (_LEX_DISTANCE[MAX_NAMESPACE_LENGTH - i - 1] *\n NAMESPACE_CHARACTERS.index(c)\n + 1)\n return n", "response": "Converts a namespace string into an int representing its lexicographic order."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the key for a namespace.", "response": "def _key_for_namespace(namespace, app):\n \"\"\"Return the __namespace__ key for a namespace.\n\n Args:\n namespace: The namespace whose key is requested.\n app: The id of the application that the key belongs to.\n\n Returns:\n A db.Key representing the namespace.\n \"\"\"\n if namespace:\n return db.Key.from_path(metadata.Namespace.KIND_NAME,\n namespace,\n _app=app)\n else:\n return db.Key.from_path(metadata.Namespace.KIND_NAME,\n metadata.Namespace.EMPTY_NAMESPACE_ID,\n _app=app)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef split_range(self):\n if self.is_single_namespace:\n return [self]\n\n mid_point = (_namespace_to_ord(self.namespace_start) +\n _namespace_to_ord(self.namespace_end)) // 2\n\n return [NamespaceRange(self.namespace_start,\n _ord_to_namespace(mid_point),\n _app=self.app),\n NamespaceRange(_ord_to_namespace(mid_point+1),\n self.namespace_end,\n _app=self.app)]", "response": "Splits the NamespaceRange 
into two nearly equal-sized ranges."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a copy of this NamespaceRange with a new namespace_start.", "response": "def with_start_after(self, after_namespace):\n \"\"\"Returns a copy of this NamespaceRange with a new namespace_start.\n\n Args:\n after_namespace: A namespace string.\n\n Returns:\n A NamespaceRange object whose namespace_start is the lexicographically next\n namespace after the given namespace string.\n\n Raises:\n ValueError: if the NamespaceRange includes only a single namespace.\n \"\"\"\n namespace_start = _ord_to_namespace(_namespace_to_ord(after_namespace) + 1)\n return NamespaceRange(namespace_start, self.namespace_end, _app=self.app)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a datastore.Query that generates all namespaces in the range.", "response": "def make_datastore_query(self, cursor=None):\n \"\"\"Returns a datastore.Query that generates all namespaces in the range.\n\n Args:\n cursor: start cursor for the query.\n\n Returns:\n A datastore.Query instance that generates db.Keys for each namespace in\n the NamespaceRange.\n \"\"\"\n filters = {}\n filters['__key__ >= '] = _key_for_namespace(\n self.namespace_start, self.app)\n filters['__key__ <= '] = _key_for_namespace(\n self.namespace_end, self.app)\n\n return datastore.Query('__namespace__',\n filters=filters,\n keys_only=True,\n cursor=cursor,\n _app=self.app)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a new NamespaceRange with leading non-existent namespaces removed.", "response": "def normalized_start(self):\n \"\"\"Returns a NamespaceRange with leading non-existent namespaces removed.\n\n Returns:\n A copy of this NamespaceRange whose namespace_start is adjusted to exclude\n the portion of the range that contains no actual namespaces in the\n datastore. 
None is returned if the NamespaceRange contains no actual\n namespaces in the datastore.\n \"\"\"\n namespaces_after_key = list(self.make_datastore_query().Run(limit=1))\n\n if not namespaces_after_key:\n return None\n\n namespace_after_key = namespaces_after_key[0].name() or ''\n return NamespaceRange(namespace_after_key,\n self.namespace_end,\n _app=self.app)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_json_object(self):\n obj_dict = dict(namespace_start=self.namespace_start,\n namespace_end=self.namespace_end)\n if self.app is not None:\n obj_dict['app'] = self.app\n return obj_dict", "response": "Returns a dict representation that can be serialized to JSON."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsplitting the complete NamespaceRange into n equally-sized NamespaceRanges.", "response": "def split(cls,\n n,\n contiguous,\n can_query=itertools.chain(itertools.repeat(True, 50),\n itertools.repeat(False)).next,\n _app=None):\n # pylint: disable=g-doc-args\n \"\"\"Splits the complete NamespaceRange into n equally-sized NamespaceRanges.\n\n Args:\n n: The maximum number of NamespaceRanges to return. Fewer than n\n namespaces may be returned.\n contiguous: If True then the returned NamespaceRanges will cover the\n entire space of possible namespaces (i.e. from MIN_NAMESPACE to\n MAX_NAMESPACE) without gaps. If False then the returned\n NamespaceRanges may exclude namespaces that don't appear in the\n datastore.\n can_query: A function that returns True if split() can query the datastore\n to generate more fair namespace range splits, and False otherwise.\n If not set then split() is allowed to make 50 datastore queries.\n\n Returns:\n A list of at most n NamespaceRanges representing a near-equal distribution\n of actual existent datastore namespaces. 
The returned list will be sorted\n lexographically.\n\n Raises:\n ValueError: if n is < 1.\n \"\"\"\n if n < 1:\n raise ValueError('n must be >= 1')\n\n ranges = None\n if can_query():\n if not contiguous:\n ns_keys = get_namespace_keys(_app, n + 1)\n if not ns_keys:\n return []\n else:\n if len(ns_keys) <= n:\n # If you have less actual namespaces than number of NamespaceRanges\n # to return, then just return the list of those namespaces.\n ns_range = []\n for ns_key in ns_keys:\n ns_range.append(NamespaceRange(ns_key.name() or '',\n ns_key.name() or '',\n _app=_app))\n return sorted(ns_range,\n key=lambda ns_range: ns_range.namespace_start)\n # Use the first key and save the initial normalized_start() call.\n ranges = [NamespaceRange(ns_keys[0].name() or '', _app=_app)]\n else:\n ns_range = NamespaceRange(_app=_app).normalized_start()\n if ns_range is None:\n return [NamespaceRange(_app=_app)]\n ranges = [ns_range]\n else:\n ranges = [NamespaceRange(_app=_app)]\n\n singles = []\n while ranges and (len(ranges) + len(singles)) < n:\n namespace_range = ranges.pop(0)\n if namespace_range.is_single_namespace:\n singles.append(namespace_range)\n else:\n left, right = namespace_range.split_range()\n if can_query():\n right = right.normalized_start()\n if right is not None:\n ranges.append(right)\n ranges.append(left)\n ns_ranges = sorted(singles + ranges,\n key=lambda ns_range: ns_range.namespace_start)\n\n if contiguous:\n if not ns_ranges:\n # This condition is possible if every namespace was deleted after the\n # first call to ns_range.normalized_start().\n return [NamespaceRange(_app=_app)]\n\n continuous_ns_ranges = []\n for i in range(len(ns_ranges)):\n if i == 0:\n namespace_start = MIN_NAMESPACE\n else:\n namespace_start = ns_ranges[i].namespace_start\n\n if i == len(ns_ranges) - 1:\n namespace_end = MAX_NAMESPACE\n else:\n namespace_end = _ord_to_namespace(\n _namespace_to_ord(ns_ranges[i+1].namespace_start) - 1)\n\n 
continuous_ns_ranges.append(NamespaceRange(namespace_start,\n namespace_end,\n _app=_app))\n return continuous_ns_ranges\n else:\n return ns_ranges"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef append(self, data):\n data_length = len(data)\n if self._size + data_length > self._flush_size:\n self.flush()\n\n if not self._exclusive and data_length > _FILE_POOL_MAX_SIZE:\n raise errors.Error(\n \"Too big input %s (%s).\" % (data_length, _FILE_POOL_MAX_SIZE))\n else:\n self._buffer.append(data)\n self._size += data_length\n\n if self._size > self._flush_size:\n self.flush()", "response": "Append data to a file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef flush(self):\n # Write data to in-memory buffer first.\n buf = cStringIO.StringIO()\n with records.RecordsWriter(buf) as w:\n for record in self._buffer:\n w.write(record)\n w._pad_block()\n str_buf = buf.getvalue()\n buf.close()\n\n if not self._exclusive and len(str_buf) > _FILE_POOL_MAX_SIZE:\n # Shouldn't really happen because of flush size.\n raise errors.Error(\n \"Buffer too big. Can't write more than %s bytes in one request: \"\n \"risk of writes interleaving. 
Got: %s\" %\n (_FILE_POOL_MAX_SIZE, len(str_buf)))\n\n # Write data to file.\n start_time = time.time()\n self._write(str_buf)\n if self._ctx:\n operation.counters.Increment(\n COUNTER_IO_WRITE_BYTES, len(str_buf))(self._ctx)\n operation.counters.Increment(\n COUNTER_IO_WRITE_MSEC,\n int((time.time() - start_time) * 1000))(self._ctx)\n\n # reset buffer\n self._buffer = []\n self._size = 0\n gc.collect()", "response": "Flushes the contents of the pool to the file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _write(self, str_buf):\n self._filehandle.write(str_buf)\n self._buf_size += len(str_buf)", "response": "Writes a string to GCS."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nflushes the contents of the GCS record pool.", "response": "def flush(self, force=False):\n \"\"\"Flush pool contents.\n\n Args:\n force: Inserts additional padding to achieve the minimum block size\n required for GCS.\n \"\"\"\n super(GCSRecordsPool, self).flush()\n if force:\n extra_padding = self._buf_size % self._GCS_BLOCK_SIZE\n if extra_padding > 0:\n self._write(\"\\x00\" * (self._GCS_BLOCK_SIZE - extra_padding))\n self._filehandle.flush()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns bucket used for writing tmp files.", "response": "def _get_tmp_gcs_bucket(cls, writer_spec):\n \"\"\"Returns bucket used for writing tmp files.\"\"\"\n if cls.TMP_BUCKET_NAME_PARAM in writer_spec:\n return writer_spec[cls.TMP_BUCKET_NAME_PARAM]\n return cls._get_gcs_bucket(writer_spec)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_tmp_account_id(cls, writer_spec):\n # pick tmp id iff tmp bucket is set explicitly\n if cls.TMP_BUCKET_NAME_PARAM in writer_spec:\n return writer_spec.get(cls._TMP_ACCOUNT_ID_PARAM, None)\n return 
cls._get_account_id(writer_spec)", "response": "Returns the account id to use with tmp bucket."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngenerates a filename for a particular output.", "response": "def _generate_filename(cls, writer_spec, name, job_id, num,\n attempt=None, seg_index=None):\n \"\"\"Generates a filename for a particular output.\n\n Args:\n writer_spec: specification dictionary for the output writer.\n name: name of the job.\n job_id: the ID number assigned to the job.\n num: shard number.\n attempt: the shard attempt number.\n seg_index: index of the seg. None means the final output.\n\n Returns:\n a string containing the filename.\n\n Raises:\n BadWriterParamsError: if the template contains any errors such as invalid\n syntax or contains unknown substitution placeholders.\n \"\"\"\n naming_format = cls._TMP_FILE_NAMING_FORMAT\n if seg_index is None:\n naming_format = writer_spec.get(cls.NAMING_FORMAT_PARAM,\n cls._DEFAULT_NAMING_FORMAT)\n\n template = string.Template(naming_format)\n try:\n # Check that template doesn't use undefined mappings and is formatted well\n if seg_index is None:\n return template.substitute(name=name, id=job_id, num=num)\n else:\n return template.substitute(name=name, id=job_id, num=num,\n attempt=attempt,\n seg=seg_index)\n except ValueError, error:\n raise errors.BadWriterParamsError(\"Naming template is bad, %s\" % (error))\n except KeyError, error:\n raise errors.BadWriterParamsError(\"Naming template '%s' has extra \"\n \"mappings, %s\" % (naming_format, error))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef validate(cls, mapper_spec):\n writer_spec = cls.get_params(mapper_spec, allow_old=False)\n\n # Bucket Name is required\n if cls.BUCKET_NAME_PARAM not in writer_spec:\n raise errors.BadWriterParamsError(\n \"%s is required for Google Cloud Storage\" %\n cls.BUCKET_NAME_PARAM)\n try:\n 
cloudstorage.validate_bucket_name(\n writer_spec[cls.BUCKET_NAME_PARAM])\n except ValueError, error:\n raise errors.BadWriterParamsError(\"Bad bucket name, %s\" % (error))\n\n # Validate the naming format does not throw any errors using dummy values\n cls._generate_filename(writer_spec, \"name\", \"id\", 0)\n cls._generate_filename(writer_spec, \"name\", \"id\", 0, 1, 0)", "response": "Validate the mapper specification."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _open_file(cls, writer_spec, filename_suffix, use_tmp_bucket=False):\n if use_tmp_bucket:\n bucket = cls._get_tmp_gcs_bucket(writer_spec)\n account_id = cls._get_tmp_account_id(writer_spec)\n else:\n bucket = cls._get_gcs_bucket(writer_spec)\n account_id = cls._get_account_id(writer_spec)\n\n # GoogleCloudStorage format for filenames, Initial slash is required\n filename = \"/%s/%s\" % (bucket, filename_suffix)\n\n content_type = writer_spec.get(cls.CONTENT_TYPE_PARAM, None)\n\n options = {}\n if cls.ACL_PARAM in writer_spec:\n options[\"x-goog-acl\"] = writer_spec.get(cls.ACL_PARAM)\n\n return cloudstorage.open(filename, mode=\"w\", content_type=content_type,\n options=options, _account_id=account_id)", "response": "Opens a gcs file for writing."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwrites data to the Google Cloud Storage file.", "response": "def write(self, data):\n \"\"\"Write data to the GoogleCloudStorage file.\n\n Args:\n data: string containing the data to be written.\n \"\"\"\n start_time = time.time()\n self._get_write_buffer().write(data)\n ctx = context.get()\n operation.counters.Increment(COUNTER_IO_WRITE_BYTES, len(data))(ctx)\n operation.counters.Increment(\n COUNTER_IO_WRITE_MSEC, int((time.time() - start_time) * 1000))(ctx)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _create(cls, writer_spec, 
filename_suffix):\n writer = cls._open_file(writer_spec, filename_suffix)\n return cls(writer, writer_spec=writer_spec)", "response": "Helper method that actually creates the file in cloud storage."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _rewrite_tmpfile(self, mainfile, tmpfile, writer_spec):\n if mainfile.closed:\n # can happen when finalize fails\n return\n\n account_id = self._get_tmp_account_id(writer_spec)\n f = cloudstorage_api.open(tmpfile, _account_id=account_id)\n # both reads and writes are buffered - the number here doesn't matter\n data = f.read(self._REWRITE_BLOCK_SIZE)\n while data:\n mainfile.write(data)\n data = f.read(self._REWRITE_BLOCK_SIZE)\n f.close()\n mainfile.flush()", "response": "Copies contents of tmpfile to mainfile."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a new random - named tmpfile.", "response": "def _create_tmpfile(cls, status):\n \"\"\"Creates a new random-named tmpfile.\"\"\"\n\n # We can't put the tmpfile in the same directory as the output. There are\n # rare circumstances when we leave trash behind and we don't want this trash\n # to be loaded into bigquery and/or used for restore.\n #\n # We used mapreduce id, shard number and attempt and 128 random bits to make\n # collisions virtually impossible.\n tmpl = string.Template(cls._TMPFILE_PATTERN)\n filename = tmpl.substitute(\n id=status.mapreduce_id, shard=status.shard,\n random=random.getrandbits(cls._RAND_BITS))\n\n return cls._open_file(status.writer_spec, filename, use_tmp_bucket=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _try_to_clean_garbage(self, writer_spec, exclude_list=()):\n # Try to remove garbage (if any). 
Note that listbucket is not strongly\n # consistent so something might survive.\n tmpl = string.Template(self._TMPFILE_PREFIX)\n prefix = tmpl.substitute(\n id=self.status.mapreduce_id, shard=self.status.shard)\n bucket = self._get_tmp_gcs_bucket(writer_spec)\n account_id = self._get_tmp_account_id(writer_spec)\n for f in cloudstorage.listbucket(\"/%s/%s\" % (bucket, prefix),\n _account_id=account_id):\n if f.filename not in exclude_list:\n self._remove_tmpfile(f.filename, self.status.writer_spec)", "response": "Try to remove any files created by this shard that aren't needed."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget weights for each offset in str of certain max length.", "response": "def _get_weights(max_length):\n \"\"\"Get weights for each offset in str of certain max length.\n\n Args:\n max_length: max length of the strings.\n\n Returns:\n A list of ints as weights.\n\n Example:\n If max_length is 2 and alphabet is \"ab\", then we have order \"\", \"a\", \"aa\",\n \"ab\", \"b\", \"ba\", \"bb\". So the weight for the first char is 3.\n \"\"\"\n weights = [1]\n for i in range(1, max_length):\n weights.append(weights[i-1] * len(_ALPHABET) + 1)\n weights.reverse()\n return weights"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting a string to its lexicographical order.", "response": "def _str_to_ord(content, weights):\n \"\"\"Converts a string to its lexicographical order.\n\n Args:\n content: the string to convert. Of type str.\n weights: weights from _get_weights.\n\n Returns:\n an int or long that represents the order of this string. 
\"\" has order 0.\n \"\"\"\n ordinal = 0\n for i, c in enumerate(content):\n ordinal += weights[i] * _ALPHABET.index(c) + 1\n return ordinal"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreverse function of _str_to_ord.", "response": "def _ord_to_str(ordinal, weights):\n \"\"\"Reverse function of _str_to_ord.\"\"\"\n chars = []\n for weight in weights:\n if ordinal == 0:\n return \"\".join(chars)\n ordinal -= 1\n index, ordinal = divmod(ordinal, weight)\n chars.append(_ALPHABET[index])\n return \"\".join(chars)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_range_from_filters(cls, filters, model_class):\n if not filters:\n return None, None, None\n\n range_property = None\n start_val = None\n end_val = None\n start_filter = None\n end_filter = None\n for f in filters:\n prop, op, val = f\n\n if op in [\">\", \">=\", \"<\", \"<=\"]:\n if range_property and range_property != prop:\n raise errors.BadReaderParamsError(\n \"Range on only one property is supported.\")\n range_property = prop\n\n if val is None:\n raise errors.BadReaderParamsError(\n \"Range can't be None in filter %s\", f)\n\n if op in [\">\", \">=\"]:\n if start_val is not None:\n raise errors.BadReaderParamsError(\n \"Operation %s is specified more than once.\", op)\n start_val = val\n start_filter = f\n else:\n if end_val is not None:\n raise errors.BadReaderParamsError(\n \"Operation %s is specified more than once.\", op)\n end_val = val\n end_filter = f\n elif op != \"=\":\n raise errors.BadReaderParamsError(\n \"Only < <= > >= = are supported as operation. 
Got %s\", op)\n\n if not range_property:\n return None, None, None\n\n if start_val is None or end_val is None:\n raise errors.BadReaderParamsError(\n \"Filter should contains a complete range on property %s\",\n range_property)\n if issubclass(model_class, db.Model):\n property_obj = model_class.properties()[range_property]\n else:\n property_obj = (\n model_class._properties[ # pylint: disable=protected-access\n range_property])\n supported_properties = (\n _DISCRETE_PROPERTY_SPLIT_FUNCTIONS.keys() +\n _CONTINUOUS_PROPERTY_SPLIT_FUNCTIONS.keys())\n if not isinstance(property_obj, tuple(supported_properties)):\n raise errors.BadReaderParamsError(\n \"Filtered property %s is not supported by sharding.\", range_property)\n if not start_val < end_val:\n raise errors.BadReaderParamsError(\n \"Start value %s should be smaller than end value %s\",\n start_val, end_val)\n\n return property_obj, start_filter, end_filter", "response": "This method returns the property range from the user supplied filters."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef split(self, n):\n new_range_filters = []\n name = self.start[0]\n prop_cls = self.prop.__class__\n if prop_cls in _DISCRETE_PROPERTY_SPLIT_FUNCTIONS:\n splitpoints = _DISCRETE_PROPERTY_SPLIT_FUNCTIONS[prop_cls](\n self.start[2], self.end[2], n,\n self.start[1] == \">=\", self.end[1] == \"<=\")\n start_filter = (name, \">=\", splitpoints[0])\n for p in splitpoints[1:]:\n end_filter = (name, \"<\", p)\n new_range_filters.append([start_filter, end_filter])\n start_filter = (name, \">=\", p)\n else:\n splitpoints = _CONTINUOUS_PROPERTY_SPLIT_FUNCTIONS[prop_cls](\n self.start[2], self.end[2], n)\n start_filter = self.start\n for p in splitpoints:\n end_filter = (name, \"<\", p)\n new_range_filters.append([start_filter, end_filter])\n start_filter = (name, \">=\", p)\n new_range_filters.append([start_filter, self.end])\n\n for f in new_range_filters:\n 
f.extend(self._equality_filters)\n\n return [self.__class__(f, self.model_class_path) for f in new_range_filters]", "response": "Evenly split this range into contiguous non overlapping subranges."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmake a query of entities within this range.", "response": "def make_query(self, ns):\n \"\"\"Make a query of entities within this range.\n\n Query options are not supported. They should be specified when the query\n is run.\n\n Args:\n ns: namespace of this query.\n\n Returns:\n a db.Query or ndb.Query, depends on the model class's type.\n \"\"\"\n if issubclass(self.model_class, db.Model):\n query = db.Query(self.model_class, namespace=ns)\n for f in self.filters:\n query.filter(\"%s %s\" % (f[0], f[1]), f[2])\n else:\n query = self.model_class.query(namespace=ns)\n for f in self.filters:\n query = query.filter(ndb.FilterNode(*f))\n return query"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate(cls, job_config):\n if job_config.output_writer_cls != cls:\n raise errors.BadWriterParamsError(\n \"Expect output writer class %r, got %r.\" %\n (cls, job_config.output_writer_cls))", "response": "Validates relevant parameters.\n\n This method can validate fields which it deems relevant.\n\n Args:\n job_config: an instance of map_job.JobConfig.\n\n Raises:\n errors.BadWriterParamsError: required parameters are missing or invalid."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncommitting the output references when a shard finishes.", "response": "def commit_output(cls, shard_ctx, iterator):\n \"\"\"Saves output references when a shard finishes.\n\n Inside end_shard(), an output writer can optionally use this method\n to persist some references to the outputs from this shard\n (e.g a list of filenames)\n\n Args:\n shard_ctx: map_job_context.ShardContext for this shard.\n iterator: an iterator 
that yields json serializable\n references to the outputs from this shard.\n Contents from the iterator can be accessible later via\n map_job.Job.get_outputs.\n \"\"\"\n # We accept an iterator just in case output references get too big.\n outs = tuple(iterator)\n shard_ctx._state.writer_state[\"outs\"] = outs"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new handlers map.", "response": "def create_handlers_map():\n \"\"\"Create new handlers map.\n\n Returns:\n list of (regexp, handler) pairs for WSGIApplication constructor.\n \"\"\"\n pipeline_handlers_map = []\n\n if pipeline:\n pipeline_handlers_map = pipeline.create_handlers_map(prefix=\".*/pipeline\")\n\n return pipeline_handlers_map + [\n # Task queue handlers.\n # Always suffix by mapreduce_id or shard_id for log analysis purposes.\n # mapreduce_id or shard_id also presents in headers or payload.\n (r\".*/worker_callback.*\", handlers.MapperWorkerCallbackHandler),\n (r\".*/controller_callback.*\", handlers.ControllerCallbackHandler),\n (r\".*/kickoffjob_callback.*\", handlers.KickOffJobHandler),\n (r\".*/finalizejob_callback.*\", handlers.FinalizeJobHandler),\n\n # RPC requests with JSON responses\n # All JSON handlers should have /command/ prefix.\n (r\".*/command/start_job\", handlers.StartJobHandler),\n (r\".*/command/cleanup_job\", handlers.CleanUpJobHandler),\n (r\".*/command/abort_job\", handlers.AbortJobHandler),\n (r\".*/command/list_configs\", status.ListConfigsHandler),\n (r\".*/command/list_jobs\", status.ListJobsHandler),\n (r\".*/command/get_job_detail\", status.GetJobDetailHandler),\n\n # UI static files\n (STATIC_RE, status.ResourceHandler),\n\n # Redirect non-file URLs that do not end in status/detail to status page.\n (r\".*\", RedirectHandler),\n ]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_json(cls, json):\n if json[\"name\"] in _KEYRANGES_CLASSES:\n return 
_KEYRANGES_CLASSES[json[\"name\"]].from_json(json)\n raise ValueError(\"Invalid json %s\" % json)", "response": "Deserialize from json.\n\n Args:\n json: a dict of json compatible fields.\n\n Returns:\n a KeyRanges object.\n\n Raises:\n ValueError: if the json is invalid."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsplitting text into list of sentences.", "response": "def split_into_sentences(s):\n \"\"\"Split text into list of sentences.\"\"\"\n s = re.sub(r\"\\s+\", \" \", s)\n s = re.sub(r\"[\\\\.\\\\?\\\\!]\", \"\\n\", s)\n return s.split(\"\\n\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsplits a sentence into list of words.", "response": "def split_into_words(s):\n \"\"\"Split a sentence into list of words.\"\"\"\n s = re.sub(r\"\\W+\", \" \", s)\n s = re.sub(r\"[_0-9]+\", \" \", s)\n return s.split()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nindexing demo map function.", "response": "def index_map(data):\n \"\"\"Index demo map function.\"\"\"\n (entry, text_fn) = data\n text = text_fn()\n\n logging.debug(\"Got %s\", entry.filename)\n for s in split_into_sentences(text):\n for w in split_into_words(s.lower()):\n yield (w, entry.filename)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nphrasing demo map function.", "response": "def phrases_map(data):\n \"\"\"Phrases demo map function.\"\"\"\n (entry, text_fn) = data\n text = text_fn()\n filename = entry.filename\n\n logging.debug(\"Got %s\", filename)\n for s in split_into_sentences(text):\n words = split_into_words(s.lower())\n if len(words) < PHRASE_LENGTH:\n yield (\":\".join(words), filename)\n continue\n for i in range(0, len(words) - PHRASE_LENGTH):\n yield (\":\".join(words[i:i+PHRASE_LENGTH]), filename)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef phrases_reduce(key, values):\n if len(values) < 
10:\n return\n counts = {}\n for filename in values:\n counts[filename] = counts.get(filename, 0) + 1\n\n words = re.sub(r\":\", \" \", key)\n threshold = len(values) / 2\n for filename, count in counts.items():\n if count > threshold:\n yield \"%s:%s\\n\" % (words, filename)", "response": "Phrases demo reduce function."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the internal key for a particular item in the database.", "response": "def getKeyName(username, date, blob_key):\n \"\"\"Returns the internal key for a particular item in the database.\n\n Our items are stored with keys of the form 'user/date/blob_key' ('/' is\n not the real separator, but __SEP is).\n\n Args:\n username: The given user's e-mail address.\n date: A datetime object representing the date and time that an input\n file was uploaded to this app.\n blob_key: The blob key corresponding to the location of the input file\n in the Blobstore.\n Returns:\n The internal key for the item specified by (username, date, blob_key).\n \"\"\"\n\n sep = FileMetadata.__SEP\n return str(username + sep + str(date) + sep + blob_key)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nloads a Custodian instance from a dict containing a spec.", "response": "def from_spec(cls, spec):\n \"\"\"\n Load a Custodian instance where the jobs are specified from a\n structure and a spec dict. This allows simple\n custom job sequences to be constructed quickly via a YAML file.\n\n Args:\n spec (dict): A dict specifying job. 
A sample of the dict in\n YAML format for the usual MP workflow is given as follows\n\n ```\n jobs:\n - jb: custodian.vasp.jobs.VaspJob\n params:\n final: False\n suffix: .relax1\n - jb: custodian.vasp.jobs.VaspJob\n params:\n final: True\n suffix: .relax2\n settings_override: {\"file\": \"CONTCAR\", \"action\": {\"_file_copy\": {\"dest\": \"POSCAR\"}}\n jobs_common_params:\n vasp_cmd: /opt/vasp\n handlers:\n - hdlr: custodian.vasp.handlers.VaspErrorHandler\n - hdlr: custodian.vasp.handlers.AliasingErrorHandler\n - hdlr: custodian.vasp.handlers.MeshSymmetryHandler\n validators:\n - vldr: custodian.vasp.validators.VasprunXMLValidator\n custodian_params:\n scratch_dir: /tmp\n ```\n\n The `jobs` key is a list of jobs. Each job is\n specified via \"job\": , and all parameters are\n specified via `params` which is a dict.\n\n `common_params` specify a common set of parameters that are\n passed to all jobs, e.g., vasp_cmd.\n\n Returns:\n Custodian instance.\n \"\"\"\n\n dec = MontyDecoder()\n\n def load_class(dotpath):\n modname, classname = dotpath.rsplit(\".\", 1)\n mod = __import__(modname, globals(), locals(), [classname], 0)\n return getattr(mod, classname)\n\n def process_params(d):\n decoded = {}\n for k, v in d.items():\n if k.startswith(\"$\"):\n if isinstance(v, list):\n v = [os.path.expandvars(i) for i in v]\n elif isinstance(v, dict):\n v = {k2: os.path.expandvars(v2) for k2, v2 in v.items()}\n else:\n v = os.path.expandvars(v)\n decoded[k.strip(\"$\")] = dec.process_decoded(v)\n return decoded\n\n jobs = []\n common_params = process_params(spec.get(\"jobs_common_params\", {}))\n\n for d in spec[\"jobs\"]:\n cls_ = load_class(d[\"jb\"])\n params = process_params(d.get(\"params\", {}))\n params.update(common_params)\n jobs.append(cls_(**params))\n\n handlers = []\n for d in spec.get(\"handlers\", []):\n cls_ = load_class(d[\"hdlr\"])\n params = process_params(d.get(\"params\", {}))\n handlers.append(cls_(**params))\n\n validators = []\n for d in 
spec.get(\"validators\", []):\n cls_ = load_class(d[\"vldr\"])\n params = process_params(d.get(\"params\", {}))\n validators.append(cls_(**params))\n\n custodian_params = process_params(spec.get(\"custodian_params\", {}))\n\n return cls(jobs=jobs, handlers=handlers, validators=validators,\n **custodian_params)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run(self):\n cwd = os.getcwd()\n\n with ScratchDir(self.scratch_dir, create_symbolic_link=True,\n copy_to_current_on_exit=True,\n copy_from_current_on_enter=True) as temp_dir:\n self.total_errors = 0\n start = datetime.datetime.now()\n logger.info(\"Run started at {} in {}.\".format(\n start, temp_dir))\n v = sys.version.replace(\"\\n\", \" \")\n logger.info(\"Custodian running on Python version {}\".format(v))\n logger.info(\"Hostname: {}, Cluster: {}\".format(\n *get_execution_host_info()))\n\n try:\n # skip jobs until the restart\n for job_n, job in islice(enumerate(self.jobs, 1),\n self.restart, None):\n self._run_job(job_n, job)\n # We do a dump of the run log after each job.\n dumpfn(self.run_log, Custodian.LOG_FILE, cls=MontyEncoder,\n indent=4)\n # Checkpoint after each job so that we can recover from last\n # point and remove old checkpoints\n if self.checkpoint:\n self.restart = job_n\n Custodian._save_checkpoint(cwd, job_n)\n except CustodianError as ex:\n logger.error(ex.message)\n if ex.raises:\n raise\n finally:\n # Log the corrections to a json file.\n logger.info(\"Logging to {}...\".format(Custodian.LOG_FILE))\n dumpfn(self.run_log, Custodian.LOG_FILE, cls=MontyEncoder,\n indent=4)\n end = datetime.datetime.now()\n logger.info(\"Run ended at {}.\".format(end))\n run_time = end - start\n logger.info(\"Run completed. 
Total time taken = {}.\"\n .format(run_time))\n if self.gzipped_output:\n gzip_dir(\".\")\n\n # Cleanup checkpoint files (if any) if run is successful.\n Custodian._delete_checkpoints(cwd)\n\n return self.run_log", "response": "Runs all jobs in the cluster and returns a list of lists of error dictionaries."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _run_job(self, job_n, job):\n self.run_log.append({\"job\": job.as_dict(), \"corrections\": [],\n \"handler\": None, \"validator\": None,\n \"max_errors\": False, \"max_errors_per_job\": False,\n \"max_errors_per_handler\": False,\n \"nonzero_return_code\": False})\n self.errors_current_job = 0\n # reset the counters of the number of times a correction has been\n # applied for each handler\n for h in self.handlers:\n h.n_applied_corrections = 0\n\n job.setup()\n\n attempt = 0\n while (self.total_errors < self.max_errors and\n self.errors_current_job < self.max_errors_per_job):\n attempt += 1\n logger.info(\n \"Starting job no. {} ({}) attempt no. {}. 
Total errors and \"\n \"errors in job thus far = {}, {}.\".format(\n job_n, job.name, attempt, self.total_errors,\n self.errors_current_job))\n\n p = job.run()\n # Check for errors using the error handlers and perform\n # corrections.\n has_error = False\n zero_return_code = True\n\n # While the job is running, we use the handlers that are\n # monitors to monitor the job.\n if isinstance(p, subprocess.Popen):\n if self.monitors:\n n = 0\n while True:\n n += 1\n time.sleep(self.polling_time_step)\n if p.poll() is not None:\n break\n terminate = self.terminate_func or p.terminate\n if n % self.monitor_freq == 0:\n has_error = self._do_check(self.monitors,\n terminate)\n if terminate is not None and terminate != p.terminate:\n time.sleep(self.polling_time_step)\n else:\n p.wait()\n if self.terminate_func is not None and \\\n self.terminate_func != p.terminate:\n self.terminate_func()\n time.sleep(self.polling_time_step)\n\n zero_return_code = p.returncode == 0\n\n logger.info(\"{}.run has completed. \"\n \"Checking remaining handlers\".format(job.name))\n # Check for errors again, since in some cases non-monitor\n # handlers fix the problems detected by monitors\n # if an error has been found, not all handlers need to run\n if has_error:\n self._do_check([h for h in self.handlers\n if not h.is_monitor])\n else:\n has_error = self._do_check(self.handlers)\n\n if has_error:\n # This makes sure the job is killed cleanly for certain systems.\n job.terminate()\n\n # If there are no errors detected, perform\n # postprocessing and exit.\n if not has_error:\n for v in self.validators:\n if v.check():\n self.run_log[-1][\"validator\"] = v\n s = \"Validation failed: {}\".format(v)\n raise ValidationError(s, True, v)\n if not zero_return_code:\n if self.terminate_on_nonzero_returncode:\n self.run_log[-1][\"nonzero_return_code\"] = True\n s = \"Job return code is %d. 
Terminating...\" % \\\n p.returncode\n logger.info(s)\n raise ReturnCodeError(s, True)\n else:\n warnings.warn(\"subprocess returned a non-zero return \"\n \"code. Check outputs carefully...\")\n job.postprocess()\n return\n\n # Check that all errors could be handled\n for x in self.run_log[-1][\"corrections\"]:\n if not x[\"actions\"] and x[\"handler\"].raises_runtime_error:\n self.run_log[-1][\"handler\"] = x[\"handler\"]\n s = \"Unrecoverable error for handler: {}\".format(x[\"handler\"])\n raise NonRecoverableError(s, True, x[\"handler\"])\n for x in self.run_log[-1][\"corrections\"]:\n if not x[\"actions\"]:\n self.run_log[-1][\"handler\"] = x[\"handler\"]\n s = \"Unrecoverable error for handler: %s\" % x[\"handler\"]\n raise NonRecoverableError(s, False, x[\"handler\"])\n\n if self.errors_current_job >= self.max_errors_per_job:\n self.run_log[-1][\"max_errors_per_job\"] = True\n msg = \"Max errors per job reached: {}.\".format(self.max_errors_per_job)\n logger.info(msg)\n raise MaxCorrectionsPerJobError(msg, True, self.max_errors_per_job, job)\n else:\n self.run_log[-1][\"max_errors\"] = True\n msg = \"Max errors reached: {}.\".format(self.max_errors)\n logger.info(msg)\n raise MaxCorrectionsError(msg, True, self.max_errors)", "response": "Runs a single job."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrun custodian in interrupted mode.", "response": "def run_interrupted(self):\n \"\"\"\n Runs custodian in an interrupted mode, which sets up and\n validates jobs but doesn't run the executable\n\n Returns:\n number of remaining jobs\n\n Raises:\n ValidationError: if a job fails validation\n ReturnCodeError: if the process has a return code different from 0\n NonRecoverableError: if an unrecoverable error occurs\n MaxCorrectionsPerJobError: if max_errors_per_job is reached\n MaxCorrectionsError: if max_errors is reached\n MaxCorrectionsPerHandlerError: if max_errors_per_handler is reached\n \"\"\"\n start = 
datetime.datetime.now()\n try:\n cwd = os.getcwd()\n v = sys.version.replace(\"\\n\", \" \")\n logger.info(\"Custodian started in singleshot mode at {} in {}.\"\n .format(start, cwd))\n logger.info(\"Custodian running on Python version {}\".format(v))\n\n # load run log\n if os.path.exists(Custodian.LOG_FILE):\n self.run_log = loadfn(Custodian.LOG_FILE, cls=MontyDecoder)\n\n if len(self.run_log) == 0:\n # starting up an initial job - setup input and quit\n job_n = 0\n job = self.jobs[job_n]\n logger.info(\"Setting up job no. 1 ({}) \".format(job.name))\n job.setup()\n self.run_log.append({\"job\": job.as_dict(), \"corrections\": [],\n 'job_n': job_n})\n return len(self.jobs)\n else:\n # Continuing after running calculation\n job_n = self.run_log[-1]['job_n']\n job = self.jobs[job_n]\n\n # If we had to fix errors from a previous run, insert clean log\n # dict\n if len(self.run_log[-1]['corrections']) > 0:\n logger.info(\"Reran {}.run due to fixable errors\".format(\n job.name))\n\n # check error handlers\n logger.info(\"Checking error handlers for {}.run\".format(\n job.name))\n if self._do_check(self.handlers):\n logger.info(\"Failed validation based on error handlers\")\n # raise an error for an unrecoverable error\n for x in self.run_log[-1][\"corrections\"]:\n if not x[\"actions\"] and x[\"handler\"].raises_runtime_error:\n self.run_log[-1][\"handler\"] = x[\"handler\"]\n s = \"Unrecoverable error for handler: {}. 
\" \\\n \"Raising RuntimeError\".format(x[\"handler\"])\n raise NonRecoverableError(s, True, x[\"handler\"])\n logger.info(\"Corrected input based on error handlers\")\n # Return with more jobs to run if recoverable error caught\n # and corrected for\n return len(self.jobs) - job_n\n\n # check validators\n logger.info(\"Checking validator for {}.run\".format(job.name))\n for v in self.validators:\n if v.check():\n self.run_log[-1][\"validator\"] = v\n logger.info(\"Failed validation based on validator\")\n s = \"Validation failed: {}\".format(v)\n raise ValidationError(s, True, v)\n\n logger.info(\"Postprocessing for {}.run\".format(job.name))\n job.postprocess()\n\n # IF DONE WITH ALL JOBS - DELETE ALL CHECKPOINTS AND RETURN\n # VALIDATED\n if len(self.jobs) == (job_n + 1):\n self.finished = True\n return 0\n\n # Setup next job_n\n job_n += 1\n job = self.jobs[job_n]\n self.run_log.append({\"job\": job.as_dict(), \"corrections\": [],\n 'job_n': job_n})\n job.setup()\n return len(self.jobs) - job_n\n\n except CustodianError as ex:\n logger.error(ex.message)\n if ex.raises:\n raise\n\n finally:\n # Log the corrections to a json file.\n logger.info(\"Logging to {}...\".format(Custodian.LOG_FILE))\n dumpfn(self.run_log, Custodian.LOG_FILE, cls=MontyEncoder,\n indent=4)\n end = datetime.datetime.now()\n logger.info(\"Run ended at {}.\".format(end))\n run_time = end - start\n logger.info(\"Run completed. 
Total time taken = {}.\"\n .format(run_time))\n if self.finished and self.gzipped_output:\n gzip_dir(\".\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _do_check(self, handlers, terminate_func=None):\n corrections = []\n for h in handlers:\n try:\n if h.check():\n if h.max_num_corrections is not None \\\n and h.n_applied_corrections >= h.max_num_corrections:\n msg = \"Maximum number of corrections {} reached \" \\\n \"for handler {}\".format(h.max_num_corrections, h)\n if h.raise_on_max:\n self.run_log[-1][\"handler\"] = h\n self.run_log[-1][\"max_errors_per_handler\"] = True\n raise MaxCorrectionsPerHandlerError(msg, True, h.max_num_corrections, h)\n else:\n logger.warning(msg+\" Correction not applied.\")\n continue\n if terminate_func is not None and h.is_terminating:\n logger.info(\"Terminating job\")\n terminate_func()\n # make sure we don't terminate twice\n terminate_func = None\n d = h.correct()\n d[\"handler\"] = h\n logger.error(\"\\n\" + pformat(d, indent=2, width=-1))\n corrections.append(d)\n h.n_applied_corrections += 1\n except Exception:\n if not self.skip_over_errors:\n raise\n else:\n import traceback\n logger.error(\"Bad handler %s \" % h)\n logger.error(traceback.format_exc())\n corrections.append(\n {\"errors\": [\"Bad handler %s \" % h],\n \"actions\": []})\n self.total_errors += len(corrections)\n self.errors_current_job += len(corrections)\n self.run_log[-1][\"corrections\"].extend(corrections)\n # We do a dump of the run log after each check.\n dumpfn(self.run_log, Custodian.LOG_FILE, cls=MontyEncoder,\n indent=4)\n return len(corrections) > 0", "response": "Checks the specified handlers and returns True iff errors caught."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef apply_actions(self, actions):\n modified = []\n for a in actions:\n if \"dict\" in a:\n k = a[\"dict\"]\n modified.append(k)\n self.feffinp[k] = 
self.modify_object(a[\"action\"], self.feffinp[k])\n elif \"file\" in a:\n self.modify(a[\"action\"], a[\"file\"])\n else:\n raise ValueError(\"Unrecognized format: {}\".format(a))\n if modified:\n feff = self.feffinp\n feff_input = \"\\n\\n\".join(str(feff[k]) for k in\n [\"HEADER\", \"PARAMETERS\", \"POTENTIALS\", \"ATOMS\"]\n if k in feff)\n for k, v in six.iteritems(feff):\n with open(os.path.join('.', k), \"w\") as f:\n f.write(str(v))\n\n with open(os.path.join('.', \"feff.inp\"), \"w\") as f:\n f.write(feff_input)", "response": "Applies a list of actions to the FEFF Input Set and rewrites modified\n files."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a file. Args: filename (str): Filename. settings (dict): Must be {\"content\": actual_content}", "response": "def file_create(filename, settings):\n \"\"\"\n Creates a file.\n\n Args:\n filename (str): Filename.\n settings (dict): Must be {\"content\": actual_content}\n \"\"\"\n if len(settings) != 1:\n raise ValueError(\"Settings must only contain one item with key \"\n \"'content'.\")\n for k, v in settings.items():\n if k == \"content\":\n with open(filename, 'w') as f:\n f.write(v)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmoves a file. {'_file_move': {'dest': 'new_file_name'}} Args: filename (str): Filename. settings (dict): Must be {\"dest\": path of new file}", "response": "def file_move(filename, settings):\n \"\"\"\n Moves a file. {'_file_move': {'dest': 'new_file_name'}}\n\n Args:\n filename (str): Filename.\n settings (dict): Must be {\"dest\": path of new file}\n \"\"\"\n if len(settings) != 1:\n raise ValueError(\"Settings must only contain one item with key \"\n \"'dest'.\")\n for k, v in settings.items():\n if k == \"dest\":\n shutil.move(filename, v)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete a file. 
{'_file_delete': {'mode': \"actual\"}} Args: filename (str): Filename. settings (dict): Must be {\"mode\": actual/simulated}. Simulated mode only prints the action without performing it.", "response": "def file_delete(filename, settings):\n \"\"\"\n Deletes a file. {'_file_delete': {'mode': \"actual\"}}\n\n Args:\n filename (str): Filename.\n settings (dict): Must be {\"mode\": actual/simulated}. Simulated\n mode only prints the action without performing it.\n \"\"\"\n if len(settings) != 1:\n raise ValueError(\"Settings must only contain one item with key \"\n \"'mode'.\")\n for k, v in settings.items():\n if k == \"mode\" and v == \"actual\":\n try:\n os.remove(filename)\n except OSError:\n #Skip file not found error.\n pass\n elif k == \"mode\" and v == \"simulated\":\n print(\"Simulated removal of {}\".format(filename))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef file_copy(filename, settings):\n for k, v in settings.items():\n if k.startswith(\"dest\"):\n shutil.copyfile(filename, v)", "response": "Copies a file. 
{'_file_copy': {'dest': 'new_file_name'}}\n\n Args:\n filename (str): Filename.\n settings (dict): Must be {\"dest\": path of new file}"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmodify file with settings", "response": "def file_modify(filename, settings):\n \"\"\"\n Modifies file access\n\n Args:\n filename (str): Filename.\n settings (dict): Can be \"mode\" or \"owners\"\n \"\"\"\n for k, v in settings.items():\n if k == \"mode\":\n os.chmod(filename,v)\n if k == \"owners\":\n os.chown(filename,v)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setup(self):\n decompress_dir('.')\n\n if self.backup:\n for f in FEFF_INPUT_FILES:\n shutil.copy(f, \"{}.orig\".format(f))\n\n for f in FEFF_BACKUP_FILES:\n if os.path.isfile(f):\n shutil.copy(f, \"{}.orig\".format(f))", "response": "Performs initial setup for FeffJob, including backing up."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef run(self):\n\n \"\"\"\n Performs the actual FEFF run\n Returns:\n (subprocess.Popen) Used for monitoring.\n \"\"\"\n with open(self.output_file, \"w\") as f_std, \\\n open(self.stderr_file, \"w\", buffering=1) as f_err:\n # Use line buffering for stderr\n # On TSCC, need to run shell command\n p = subprocess.Popen(self.feff_cmd, stdout=f_std, stderr=f_err, shell=True)\n\n return p", "response": "Performs the actual FEFF run\n Returns:\n (subprocess.Popen) Used for monitoring."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef modify(self, modification, obj):\n for action, settings in modification.items():\n if action in self.supported_actions:\n self.supported_actions[action].__call__(obj, settings)\n elif self.strict:\n raise ValueError(\"{} is not a supported action!\"\n .format(action))", "response": "Modify the object in-place."} {"SOURCE": "codesearchnet", 
"instruction": "Implement a Python 3 function for\nmodifying an object that supports pymatgen's as_dict and from_dict API.", "response": "def modify_object(self, modification, obj):\n \"\"\"\n Modify an object that supports pymatgen's as_dict() and from_dict API.\n\n Args:\n modification (dict): Modification must be {action_keyword :\n settings}. E.g., {'_set': {'Hello':'Universe', 'Bye': 'World'}}\n obj (object): Object to modify\n \"\"\"\n d = obj.as_dict()\n self.modify(modification, d)\n return obj.from_dict(d)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef apply_actions(self, actions):\n modified = []\n for a in actions:\n if \"dict\" in a:\n k = a[\"dict\"]\n modified.append(k)\n self.vi[k] = self.modify_object(a[\"action\"], self.vi[k])\n elif \"file\" in a:\n self.modify(a[\"action\"], a[\"file\"])\n else:\n raise ValueError(\"Unrecognized format: {}\".format(a))\n for f in modified:\n self.vi[f].write_file(f)", "response": "Applies a list of actions to the Vasp Input Set and rewrites modified\n files."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef backup(filenames, prefix=\"error\"):\n num = max([0] + [int(f.split(\".\")[1])\n for f in glob(\"{}.*.tar.gz\".format(prefix))])\n filename = \"{}.{}.tar.gz\".format(prefix, num + 1)\n logging.info(\"Backing up run to {}.\".format(filename))\n with tarfile.open(filename, \"w:gz\") as tar:\n for fname in filenames:\n for f in glob(fname):\n tar.add(f)", "response": "Backup a list of files to a tar.gz file."}
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_execution_host_info():\n host = os.environ.get('HOSTNAME', None)\n cluster = os.environ.get('SGE_O_HOST', None)\n if host is None:\n try:\n import socket\n host = host or socket.gethostname()\n except:\n pass\n return host or 'unknown', cluster or 'unknown'", "response": "Tries to return a tuple describing the execution host."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nperforming the actual QChem run.", "response": "def run(self):\n \"\"\"\n Perform the actual QChem run.\n\n Returns:\n (subprocess.Popen) Used for monitoring.\n \"\"\"\n qclog = open(self.qclog_file, 'w')\n p = subprocess.Popen(self.current_command, stdout=qclog)\n return p"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef opt_with_frequency_flattener(cls,\n qchem_command,\n multimode=\"openmp\",\n input_file=\"mol.qin\",\n output_file=\"mol.qout\",\n qclog_file=\"mol.qclog\",\n max_iterations=10,\n max_molecule_perturb_scale=0.3,\n check_connectivity=True,\n **QCJob_kwargs):\n \"\"\"\n Optimize a structure and calculate vibrational frequencies to check if the\n structure is in a true minimum. If a frequency is negative, iteratively\n perturb the geometry, optimize, and recalculate frequencies until all are\n positive, i.e. a true minimum has been found.\n\n Args:\n qchem_command (str): Command to run QChem.\n multimode (str): Parallelization scheme, either openmp or mpi.\n input_file (str): Name of the QChem input file.\n output_file (str): Name of the QChem output file.\n max_iterations (int): Number of perturbation -> optimization -> frequency\n iterations to perform. Defaults to 10.\n max_molecule_perturb_scale (float): The maximum scaled perturbation that\n can be applied to the molecule. 
Defaults to 0.3.\n check_connectivity (bool): Whether to check differences in connectivity\n introduced by structural perturbation. Defaults to True.\n **QCJob_kwargs: Passthrough kwargs to QCJob. See\n :class:`custodian.qchem.jobs.QCJob`.\n \"\"\"\n\n min_molecule_perturb_scale = 0.1\n scale_grid = 10\n perturb_scale_grid = (\n max_molecule_perturb_scale - min_molecule_perturb_scale\n ) / scale_grid\n\n if not os.path.exists(input_file):\n raise AssertionError('Input file must be present!')\n orig_opt_input = QCInput.from_file(input_file)\n orig_opt_rem = copy.deepcopy(orig_opt_input.rem)\n orig_freq_rem = copy.deepcopy(orig_opt_input.rem)\n orig_freq_rem[\"job_type\"] = \"freq\"\n first = True\n reversed_direction = False\n num_neg_freqs = []\n\n for ii in range(max_iterations):\n yield (QCJob(\n qchem_command=qchem_command,\n multimode=multimode,\n input_file=input_file,\n output_file=output_file,\n qclog_file=qclog_file,\n suffix=\".opt_\" + str(ii),\n backup=first,\n **QCJob_kwargs))\n first = False\n opt_outdata = QCOutput(output_file + \".opt_\" + str(ii)).data\n if opt_outdata[\"structure_change\"] == \"unconnected_fragments\" and not opt_outdata[\"completion\"]:\n print(\"Unstable molecule broke into unconnected fragments which failed to optimize! 
Exiting...\")\n break\n else:\n freq_QCInput = QCInput(\n molecule=opt_outdata.get(\"molecule_from_optimized_geometry\"),\n rem=orig_freq_rem,\n opt=orig_opt_input.opt,\n pcm=orig_opt_input.pcm,\n solvent=orig_opt_input.solvent)\n freq_QCInput.write_file(input_file)\n yield (QCJob(\n qchem_command=qchem_command,\n multimode=multimode,\n input_file=input_file,\n output_file=output_file,\n qclog_file=qclog_file,\n suffix=\".freq_\" + str(ii),\n backup=first,\n **QCJob_kwargs))\n outdata = QCOutput(output_file + \".freq_\" + str(ii)).data\n errors = outdata.get(\"errors\")\n if len(errors) != 0:\n raise AssertionError('No errors should be encountered while flattening frequencies!')\n if outdata.get('frequencies')[0] > 0.0:\n print(\"All frequencies positive!\")\n break\n else:\n num_neg_freqs += [sum(1 for freq in outdata.get('frequencies') if freq < 0)]\n if len(num_neg_freqs) > 1:\n if num_neg_freqs[-1] == num_neg_freqs[-2] and not reversed_direction:\n reversed_direction = True\n elif num_neg_freqs[-1] == num_neg_freqs[-2] and reversed_direction:\n if len(num_neg_freqs) < 3:\n raise AssertionError(\"ERROR: This should only be possible after at least three frequency flattening iterations! Exiting...\")\n else:\n raise Exception(\"ERROR: Reversing the perturbation direction still could not flatten any frequencies. 
Exiting...\")\n elif num_neg_freqs[-1] != num_neg_freqs[-2] and reversed_direction:\n reversed_direction = False\n\n negative_freq_vecs = outdata.get(\"frequency_mode_vectors\")[0]\n structure_successfully_perturbed = False\n\n for molecule_perturb_scale in np.arange(\n max_molecule_perturb_scale, min_molecule_perturb_scale,\n -perturb_scale_grid):\n new_coords = perturb_coordinates(\n old_coords=outdata.get(\"initial_geometry\"),\n negative_freq_vecs=negative_freq_vecs,\n molecule_perturb_scale=molecule_perturb_scale,\n reversed_direction=reversed_direction)\n new_molecule = Molecule(\n species=outdata.get('species'),\n coords=new_coords,\n charge=outdata.get('charge'),\n spin_multiplicity=outdata.get('multiplicity'))\n if check_connectivity:\n old_molgraph = MoleculeGraph.with_local_env_strategy(outdata.get(\"initial_molecule\"),\n OpenBabelNN(),\n reorder=False,\n extend_structure=False)\n new_molgraph = MoleculeGraph.with_local_env_strategy(new_molecule,\n OpenBabelNN(),\n reorder=False,\n extend_structure=False)\n if old_molgraph.isomorphic_to(new_molgraph):\n structure_successfully_perturbed = True\n break\n if not structure_successfully_perturbed:\n raise Exception(\n \"ERROR: Unable to perturb coordinates to remove negative frequency without changing the connectivity! 
Exiting...\"\n )\n\n new_opt_QCInput = QCInput(\n molecule=new_molecule,\n rem=orig_opt_rem,\n opt=orig_opt_input.opt,\n pcm=orig_opt_input.pcm,\n solvent=orig_opt_input.solvent)\n new_opt_QCInput.write_file(input_file)", "response": "Optimize a structure and calculate vibrational frequencies."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef setup(self):\n decompress_dir('.')\n\n if self.backup:\n for f in VASP_INPUT_FILES:\n shutil.copy(f, \"{}.orig\".format(f))\n\n if self.auto_npar:\n try:\n incar = Incar.from_file(\"INCAR\")\n # Only optimized NPAR for non-HF and non-RPA calculations.\n if not (incar.get(\"LHFCALC\") or incar.get(\"LRPA\") or\n incar.get(\"LEPSILON\")):\n if incar.get(\"IBRION\") in [5, 6, 7, 8]:\n # NPAR should not be set for Hessian matrix\n # calculations, whether in DFPT or otherwise.\n del incar[\"NPAR\"]\n else:\n import multiprocessing\n # try sge environment variable first\n # (since multiprocessing counts cores on the current\n # machine only)\n ncores = os.environ.get('NSLOTS') or \\\n multiprocessing.cpu_count()\n ncores = int(ncores)\n for npar in range(int(math.sqrt(ncores)),\n ncores):\n if ncores % npar == 0:\n incar[\"NPAR\"] = npar\n break\n incar.write_file(\"INCAR\")\n except:\n pass\n\n if self.auto_continue:\n if os.path.exists(\"continue.json\"):\n actions = loadfn(\"continue.json\").get(\"actions\")\n logger.info(\"Continuing previous VaspJob. 
Actions: {}\".format(actions))\n backup(VASP_BACKUP_FILES, prefix=\"prev_run\")\n VaspModder().apply_actions(actions)\n\n else:\n # Default functionality is to copy CONTCAR to POSCAR and set\n # ISTART to 1 in the INCAR, but other actions can be specified\n if self.auto_continue is True:\n actions = [{\"file\": \"CONTCAR\",\n \"action\": {\"_file_copy\": {\"dest\": \"POSCAR\"}}},\n {\"dict\": \"INCAR\",\n \"action\": {\"_set\": {\"ISTART\": 1}}}]\n else:\n actions = self.auto_continue\n dumpfn({\"actions\": actions}, \"continue.json\")\n\n if self.settings_override is not None:\n VaspModder().apply_actions(self.settings_override)", "response": "Sets up the current state of the VASP job."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nperforms the actual VASP run.", "response": "def run(self):\n \"\"\"\n Perform the actual VASP run.\n\n Returns:\n (subprocess.Popen) Used for monitoring.\n \"\"\"\n cmd = list(self.vasp_cmd)\n if self.auto_gamma:\n vi = VaspInput.from_directory(\".\")\n kpts = vi[\"KPOINTS\"]\n if kpts.style == Kpoints.supported_modes.Gamma \\\n and tuple(kpts.kpts[0]) == (1, 1, 1):\n if self.gamma_vasp_cmd is not None and which(\n self.gamma_vasp_cmd[-1]):\n cmd = self.gamma_vasp_cmd\n elif which(cmd[-1] + \".gamma\"):\n cmd[-1] += \".gamma\"\n logger.info(\"Running {}\".format(\" \".join(cmd)))\n with open(self.output_file, 'w') as f_std, \\\n open(self.stderr_file, \"w\", buffering=1) as f_err:\n # use line buffering for stderr\n p = subprocess.Popen(cmd, stdout=f_std, stderr=f_err)\n return p"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef postprocess(self):\n for f in VASP_OUTPUT_FILES + [self.output_file]:\n if os.path.exists(f):\n if self.final and self.suffix != \"\":\n shutil.move(f, \"{}{}\".format(f, self.suffix))\n elif self.suffix != \"\":\n shutil.copy(f, \"{}{}\".format(f, self.suffix))\n\n if self.copy_magmom and not 
self.final:\n try:\n outcar = Outcar(\"OUTCAR\")\n magmom = [m['tot'] for m in outcar.magnetization]\n incar = Incar.from_file(\"INCAR\")\n incar['MAGMOM'] = magmom\n incar.write_file(\"INCAR\")\n except:\n logger.error('MAGMOM copy from OUTCAR to INCAR failed')\n\n # Remove continuation so if a subsequent job is run in\n # the same directory, it will not restart this job.\n if os.path.exists(\"continue.json\"):\n os.remove(\"continue.json\")", "response": "Postprocessing includes renaming and gzipping where necessary."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef double_relaxation_run(cls, vasp_cmd, auto_npar=True, ediffg=-0.05,\n half_kpts_first_relax=False, auto_continue=False):\n \"\"\"\n Returns a list of two jobs corresponding to an AFLOW style double\n relaxation run.\n\n Args:\n vasp_cmd (str): Command to run vasp as a list of args. For example,\n if you are using mpirun, it can be something like\n [\"mpirun\", \"pvasp.5.2.11\"]\n auto_npar (bool): Whether to automatically tune NPAR to be sqrt(\n number of cores) as recommended by VASP for DFT calculations.\n Generally, this results in significant speedups. Defaults to\n True. Set to False for HF, GW and RPA calculations.\n ediffg (float): Force convergence criteria for subsequent runs (\n ignored for the initial run.)\n half_kpts_first_relax (bool): Whether to halve the kpoint grid\n for the first relaxation. Speeds up difficult convergence\n considerably. 
Defaults to False.\n\n Returns:\n List of two jobs corresponding to an AFLOW style run.\n \"\"\"\n incar_update = {\"ISTART\": 1}\n if ediffg:\n incar_update[\"EDIFFG\"] = ediffg\n settings_override_1 = None\n settings_override_2 = [\n {\"dict\": \"INCAR\",\n \"action\": {\"_set\": incar_update}},\n {\"file\": \"CONTCAR\",\n \"action\": {\"_file_copy\": {\"dest\": \"POSCAR\"}}}]\n if half_kpts_first_relax and os.path.exists(\"KPOINTS\") and \\\n os.path.exists(\"POSCAR\"):\n kpts = Kpoints.from_file(\"KPOINTS\")\n orig_kpts_dict = kpts.as_dict()\n # lattice vectors with length < 8 will get >1 KPOINT\n kpts.kpts = np.round(np.maximum(np.array(kpts.kpts) / 2,\n 1)).astype(int).tolist()\n low_kpts_dict = kpts.as_dict()\n settings_override_1 = [\n {\"dict\": \"KPOINTS\",\n \"action\": {\"_set\": low_kpts_dict}}\n ]\n settings_override_2.append(\n {\"dict\": \"KPOINTS\",\n \"action\": {\"_set\": orig_kpts_dict}}\n )\n\n return [VaspJob(vasp_cmd, final=False, suffix=\".relax1\",\n auto_npar=auto_npar, auto_continue=auto_continue,\n settings_override=settings_override_1),\n VaspJob(vasp_cmd, final=True, backup=False, suffix=\".relax2\",\n auto_npar=auto_npar, auto_continue=auto_continue,\n settings_override=settings_override_2)]", "response": "Returns a list of two jobs corresponding to a double relaxation run."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef metagga_opt_run(cls, vasp_cmd, auto_npar=True, ediffg=-0.05,\n half_kpts_first_relax=False, auto_continue=False):\n \"\"\"\n Returns a list of three jobs to perform an optimization for any\n metaGGA functional. There is an initial calculation of the\n GGA wavefunction which is fed into the initial metaGGA optimization\n to precondition the electronic structure optimizer. 
The metaGGA\n optimization is performed using the double relaxation scheme\n \"\"\"\n\n incar = Incar.from_file(\"INCAR\")\n # Defaults to using the SCAN metaGGA\n metaGGA = incar.get(\"METAGGA\", \"SCAN\")\n\n # Pre-optimize WAVECAR and structure using regular GGA\n pre_opt_settings = [{\"dict\": \"INCAR\",\n \"action\": {\"_set\": {\"METAGGA\": None,\n \"LWAVE\": True,\n \"NSW\": 0}}}]\n jobs = [VaspJob(vasp_cmd, auto_npar=auto_npar,\n final=False, suffix=\".precondition\",\n settings_override=pre_opt_settings)]\n\n # Finish with regular double relaxation style run using SCAN\n jobs.extend(VaspJob.double_relaxation_run(vasp_cmd, auto_npar=auto_npar,\n ediffg=ediffg,\n half_kpts_first_relax=half_kpts_first_relax))\n\n # Ensure the first relaxation doesn't overwrite the original inputs\n jobs[1].backup = False\n\n # Update double_relaxation job to start from pre-optimized run\n post_opt_settings = [{\"dict\": \"INCAR\",\n \"action\": {\"_set\": {\"METAGGA\": metaGGA, \"ISTART\": 1,\n \"NSW\": incar.get(\"NSW\", 99),\n \"LWAVE\": incar.get(\"LWAVE\", False)}}},\n {\"file\": \"CONTCAR\",\n \"action\": {\"_file_copy\": {\"dest\": \"POSCAR\"}}}]\n if jobs[1].settings_override:\n post_opt_settings = jobs[1].settings_override + post_opt_settings\n jobs[1].settings_override = post_opt_settings\n\n return jobs", "response": "Performs an optimization for any specific metaGGA functional."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef full_opt_run(cls, vasp_cmd, vol_change_tol=0.02,\n max_steps=10, ediffg=-0.05, half_kpts_first_relax=False,\n **vasp_job_kwargs):\n \"\"\"\n Returns a generator of jobs for a full optimization run. Basically,\n this runs an infinite series of geometry optimization jobs until the\n % vol change in a particular optimization is less than vol_change_tol.\n\n Args:\n vasp_cmd (str): Command to run vasp as a list of args. 
For example,\n if you are using mpirun, it can be something like\n [\"mpirun\", \"pvasp.5.2.11\"]\n vol_change_tol (float): The tolerance at which to stop a run.\n Defaults to 0.02, i.e., 2%.\n max_steps (int): The maximum number of runs. Defaults to 10 (\n highly unlikely that this limit is ever reached).\n ediffg (float): Force convergence criteria for subsequent runs (\n ignored for the initial run.)\n half_kpts_first_relax (bool): Whether to halve the kpoint grid\n for the first relaxation. Speeds up difficult convergence\n considerably. Defaults to False.\n \\*\\*vasp_job_kwargs: Passthrough kwargs to VaspJob. See\n :class:`custodian.vasp.jobs.VaspJob`.\n\n Returns:\n Generator of jobs.\n \"\"\"\n for i in range(max_steps):\n if i == 0:\n settings = None\n backup = True\n if half_kpts_first_relax and os.path.exists(\"KPOINTS\") and \\\n os.path.exists(\"POSCAR\"):\n kpts = Kpoints.from_file(\"KPOINTS\")\n orig_kpts_dict = kpts.as_dict()\n kpts.kpts = np.maximum(np.array(kpts.kpts) / 2, 1).tolist()\n low_kpts_dict = kpts.as_dict()\n settings = [\n {\"dict\": \"KPOINTS\",\n \"action\": {\"_set\": low_kpts_dict}}\n ]\n else:\n backup = False\n initial = Poscar.from_file(\"POSCAR\").structure\n final = Poscar.from_file(\"CONTCAR\").structure\n vol_change = (final.volume - initial.volume) / initial.volume\n\n logger.info(\"Vol change = %.1f %%!\" % (vol_change * 100))\n if abs(vol_change) < vol_change_tol:\n logger.info(\"Stopping optimization!\")\n break\n else:\n incar_update = {\"ISTART\": 1}\n if ediffg:\n incar_update[\"EDIFFG\"] = ediffg\n settings = [\n {\"dict\": \"INCAR\",\n \"action\": {\"_set\": incar_update}},\n {\"file\": \"CONTCAR\",\n \"action\": {\"_file_copy\": {\"dest\": \"POSCAR\"}}}]\n if i == 1 and half_kpts_first_relax:\n settings.append({\"dict\": \"KPOINTS\",\n \"action\": {\"_set\": orig_kpts_dict}})\n logger.info(\"Generating job = %d!\" % (i+1))\n yield VaspJob(vasp_cmd, final=False, backup=backup,\n suffix=\".relax%d\" % (i+1), 
settings_override=settings,\n **vasp_job_kwargs)", "response": "Returns a generator of jobs for a full optimization run."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a generator of jobs to run a constrained optimization run on a set of species.", "response": "def constrained_opt_run(cls, vasp_cmd, lattice_direction, initial_strain,\n atom_relax=True, max_steps=20, algo=\"bfgs\",\n **vasp_job_kwargs):\n \"\"\"\n Returns a generator of jobs for a constrained optimization run. Typical\n use case is when you want to approximate a biaxial strain situation,\n e.g., you apply a defined strain to the a and b directions of the lattice,\n but allow the c-direction to relax.\n\n Some guidelines on the use of this method:\n i. It is recommended you do not use the Auto kpoint generation. The\n grid generated via Auto may fluctuate with changes in lattice\n param, resulting in numerical noise.\n ii. Make sure your EDIFF/EDIFFG is properly set in your INCAR. The\n optimization relies on these values to determine convergence.\n\n Args:\n vasp_cmd (str): Command to run vasp as a list of args. For example,\n if you are using mpirun, it can be something like\n [\"mpirun\", \"pvasp.5.2.11\"]\n lattice_direction (str): Which direction to relax. Valid values are\n \"a\", \"b\" or \"c\".\n initial_strain (float): An initial strain to be applied to the\n lattice_direction. This can usually be estimated as the\n negative of the strain applied in the other two directions.\n E.g., if you apply a tensile strain of 0.05 to the a and b\n directions, you can use -0.05 as a reasonable first guess for\n initial strain.\n atom_relax (bool): Whether to relax atomic positions.\n max_steps (int): The maximum number of runs. Defaults to 20 (\n highly unlikely that this limit is ever reached).\n algo (str): Algorithm to use to find minimum. 
Default is \"bfgs\",\n which is fast, but can be sensitive to numerical noise\n in energy calculations. The alternative is \"bisection\",\n which is more robust but can be a bit slow. The code does fall\n back on the bisection when bfgs gives a non-sensical result,\n e.g., negative lattice params.\n \\*\\*vasp_job_kwargs: Passthrough kwargs to VaspJob. See\n :class:`custodian.vasp.jobs.VaspJob`.\n\n Returns:\n Generator of jobs. At the end of the run, an \"EOS.txt\" is written\n which provides a quick look at the E vs lattice parameter.\n \"\"\"\n nsw = 99 if atom_relax else 0\n\n incar = Incar.from_file(\"INCAR\")\n\n # Set the energy convergence criteria as the EDIFFG (if present) or\n # 10 x EDIFF (which itself defaults to 1e-4 if not present).\n if incar.get(\"EDIFFG\") and incar.get(\"EDIFFG\") > 0:\n etol = incar[\"EDIFFG\"]\n else:\n etol = incar.get(\"EDIFF\", 1e-4) * 10\n\n if lattice_direction == \"a\":\n lattice_index = 0\n elif lattice_direction == \"b\":\n lattice_index = 1\n else:\n lattice_index = 2\n\n energies = {}\n\n for i in range(max_steps):\n if i == 0:\n settings = [\n {\"dict\": \"INCAR\",\n \"action\": {\"_set\": {\"ISIF\": 2, \"NSW\": nsw}}}]\n structure = Poscar.from_file(\"POSCAR\").structure\n x = structure.lattice.abc[lattice_index]\n backup = True\n else:\n backup = False\n v = Vasprun(\"vasprun.xml\")\n structure = v.final_structure\n energy = v.final_energy\n lattice = structure.lattice\n\n x = lattice.abc[lattice_index]\n\n energies[x] = energy\n\n if i == 1:\n x *= (1 + initial_strain)\n else:\n # Sort the lattice parameter by energies.\n min_x = min(energies.keys(), key=lambda e: energies[e])\n sorted_x = sorted(energies.keys())\n ind = sorted_x.index(min_x)\n if ind == 0:\n other = ind + 1\n elif ind == len(sorted_x) - 1:\n other = ind - 1\n else:\n other = ind + 1 \\\n if energies[sorted_x[ind + 1]] \\\n < energies[sorted_x[ind - 1]] \\\n else ind - 1\n if abs(energies[min_x]\n - energies[sorted_x[other]]) < etol:\n 
logger.info(\"Stopping optimization! Final %s = %f\"\n % (lattice_direction, min_x))\n break\n\n if ind == 0 and len(sorted_x) > 2:\n # Lowest energy lies outside of range of lowest value.\n # we decrease the lattice parameter in the next\n # iteration to find a minimum. This applies only when\n # there are at least 3 values.\n x = sorted_x[0] - abs(sorted_x[1] - sorted_x[0])\n logger.info(\"Lowest energy lies below bounds. \"\n \"Setting %s = %f.\" % (lattice_direction, x))\n elif ind == len(sorted_x) - 1 and len(sorted_x) > 2:\n # Lowest energy lies outside of range of highest value.\n # we increase the lattice parameter in the next\n # iteration to find a minimum. This applies only when\n # there are at least 3 values.\n x = sorted_x[-1] + abs(sorted_x[-1] - sorted_x[-2])\n logger.info(\"Lowest energy lies above bounds. \"\n \"Setting %s = %f.\" % (lattice_direction, x))\n else:\n if algo.lower() == \"bfgs\" and len(sorted_x) >= 4:\n try:\n # If there are more than 4 data points, we will\n # do a quadratic fit to accelerate convergence.\n x1 = list(energies.keys())\n y1 = [energies[j] for j in x1]\n z1 = np.polyfit(x1, y1, 2)\n pp = np.poly1d(z1)\n from scipy.optimize import minimize\n result = minimize(\n pp, min_x,\n bounds=[(sorted_x[0], sorted_x[-1])])\n if (not result.success) or result.x[0] < 0:\n raise ValueError(\n \"Negative lattice constant!\")\n x = result.x[0]\n logger.info(\"BFGS minimized %s = %f.\"\n % (lattice_direction, x))\n except ValueError as ex:\n # Fall back on bisection algo if the bfgs fails.\n logger.info(str(ex))\n x = (min_x + sorted_x[other]) / 2\n logger.info(\"Falling back on bisection %s = %f.\"\n % (lattice_direction, x))\n else:\n x = (min_x + sorted_x[other]) / 2\n logger.info(\"Bisection %s = %f.\"\n % (lattice_direction, x))\n\n lattice = lattice.matrix\n lattice[lattice_index] = lattice[lattice_index] / \\\n np.linalg.norm(lattice[lattice_index]) * x\n\n s = Structure(lattice, structure.species, structure.frac_coords)\n 
fname = \"POSCAR.%f\" % x\n s.to(filename=fname)\n\n incar_update = {\"ISTART\": 1, \"NSW\": nsw, \"ISIF\": 2}\n\n settings = [\n {\"dict\": \"INCAR\",\n \"action\": {\"_set\": incar_update}},\n {\"file\": fname,\n \"action\": {\"_file_copy\": {\"dest\": \"POSCAR\"}}}]\n\n logger.info(\"Generating job = %d with parameter %f!\" % (i + 1, x))\n yield VaspJob(vasp_cmd, final=False, backup=backup,\n suffix=\".static.%f\" % x,\n settings_override=settings, **vasp_job_kwargs)\n\n with open(\"EOS.txt\", \"wt\") as f:\n f.write(\"# %s energy\\n\" % lattice_direction)\n for k in sorted(energies.keys()):\n f.write(\"%f %f\\n\" % (k, energies[k]))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset up the necessary data structures for the VASPNEBJob.", "response": "def setup(self):\n \"\"\"\n Performs initial setup for VaspNEBJob, including overriding any settings\n and backing up.\n \"\"\"\n neb_dirs = self.neb_dirs\n\n if self.backup:\n # Back up KPOINTS, INCAR, POTCAR\n for f in VASP_NEB_INPUT_FILES:\n shutil.copy(f, \"{}.orig\".format(f))\n # Back up POSCARs\n for path in neb_dirs:\n poscar = os.path.join(path, \"POSCAR\")\n shutil.copy(poscar, \"{}.orig\".format(poscar))\n\n if self.half_kpts and os.path.exists(\"KPOINTS\"):\n kpts = Kpoints.from_file(\"KPOINTS\")\n kpts.kpts = np.maximum(np.array(kpts.kpts) / 2, 1)\n kpts.kpts = kpts.kpts.astype(int).tolist()\n if tuple(kpts.kpts[0]) == (1, 1, 1):\n kpt_dic = kpts.as_dict()\n kpt_dic[\"generation_style\"] = 'Gamma'\n kpts = Kpoints.from_dict(kpt_dic)\n kpts.write_file(\"KPOINTS\")\n\n if self.auto_npar:\n try:\n incar = Incar.from_file(\"INCAR\")\n import multiprocessing\n # Try sge environment variable first\n # (since multiprocessing counts cores on the current\n # machine only)\n ncores = os.environ.get('NSLOTS') or multiprocessing.cpu_count()\n ncores = int(ncores)\n for npar in range(int(math.sqrt(ncores)),\n ncores):\n if ncores % npar == 0:\n incar[\"NPAR\"] = npar\n break\n 
incar.write_file(\"INCAR\")\n except:\n pass\n\n if self.auto_continue and \\\n os.path.exists(\"STOPCAR\") and \\\n not os.access(\"STOPCAR\", os.W_OK):\n # Remove STOPCAR\n os.chmod(\"STOPCAR\", 0o644)\n os.remove(\"STOPCAR\")\n\n # Copy CONTCAR to POSCAR\n for path in self.neb_sub:\n contcar = os.path.join(path, \"CONTCAR\")\n poscar = os.path.join(path, \"POSCAR\")\n shutil.copy(contcar, poscar)\n\n if self.settings_override is not None:\n VaspModder().apply_actions(self.settings_override)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset up the backup file if necessary.", "response": "def setup(self):\n \"\"\"\n Performs backup if necessary.\n \"\"\"\n if self.backup:\n shutil.copy(self.input_file, \"{}.orig\".format(self.input_file))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run(self):\n with zopen(self.output_file, 'w') as fout:\n return subprocess.Popen(self.nwchem_cmd + [self.input_file],\n stdout=fout)", "response": "Performs actual nwchem run."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef valid_GC(x):\n x = float(x)\n if x < 0.0 or x > 1.0:\n raise ArgumentTypeError(\"{} not in range [0.0, 1.0]\".format(x))\n return x", "response": "type function for argparse to check GC values."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef filter_stream(fq, args):\n if args.quality:\n quality_check = ave_qual\n else:\n quality_check = silent_quality_check\n minlen = args.length + int(args.headcrop or 0) - (int(args.tailcrop or 0))\n for rec in SeqIO.parse(fq, \"fastq\"):\n if args.GC_filter:\n gc = (rec.seq.upper().count(\"C\") + rec.seq.upper().count(\"G\")) / len(rec)\n else:\n gc = 0.50 # dummy variable\n if quality_check(rec.letter_annotations[\"phred_quality\"]) > args.quality \\\n and minlen <= len(rec) <= args.maxlength \\\n and 
args.minGC <= gc <= args.maxGC:\n print(rec[args.headcrop:args.tailcrop].format(\"fastq\"), end=\"\")", "response": "Filter a fastq file on stdin and print out the filtered fastq records."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef filter_using_summary(fq, args):\n data = {entry[0]: entry[1] for entry in process_summary(\n summaryfile=args.summary,\n threads=\"NA\",\n readtype=args.readtype,\n barcoded=False)[\n [\"readIDs\", \"quals\"]].itertuples(index=False)}\n try:\n for record in SeqIO.parse(fq, \"fastq\"):\n if data[record.id] > args.quality \\\n and args.length <= len(record) <= args.maxlength:\n print(record[args.headcrop:args.tailcrop].format(\"fastq\"), end=\"\")\n except KeyError:\n logging.error(\"mismatch between summary and fastq: \\\n {} was not found in the summary file.\".format(record.id))\n sys.exit('\\nERROR: mismatch between sequencing_summary and fastq file: \\\n {} was not found in the summary file.\\nQuitting.'.format(record.id))", "response": "Filter the sequence using the albacore summary file for more accurate quality estimate\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef master_key_required(func):\n '''decorator describing methods that require the master key'''\n def ret(obj, *args, **kw):\n conn = ACCESS_KEYS\n if not (conn and conn.get('master_key')):\n message = '%s requires the master key' % func.__name__\n raise core.ParseError(message)\n return func(obj, *args, **kw)\n return ret", "response": "decorator describing methods that require the master key"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef execute(cls, uri, http_verb, extra_headers=None, batch=False, _body=None, **kw):\n if batch:\n urlsplitter = urlparse(API_ROOT).netloc\n ret = {\"method\": http_verb, \"path\": uri.split(urlsplitter, 1)[1]}\n if kw:\n ret[\"body\"] = kw\n return 
ret\n\n if not ('app_id' in ACCESS_KEYS and 'rest_key' in ACCESS_KEYS):\n raise core.ParseError('Missing connection credentials')\n\n app_id = ACCESS_KEYS.get('app_id')\n rest_key = ACCESS_KEYS.get('rest_key')\n master_key = ACCESS_KEYS.get('master_key')\n\n url = uri if uri.startswith(API_ROOT) else cls.ENDPOINT_ROOT + uri\n if _body is None:\n data = kw and json.dumps(kw, default=date_handler) or \"{}\"\n else:\n data = _body\n if http_verb == 'GET' and data:\n url += '?%s' % urlencode(kw)\n data = None\n else:\n if cls.__name__ == 'File':\n data = data\n else:\n data = data.encode('utf-8')\n\n headers = {\n 'Content-type': 'application/json',\n 'X-Parse-Application-Id': app_id,\n 'X-Parse-REST-API-Key': rest_key\n }\n headers.update(extra_headers or {})\n\n if cls.__name__ == 'File':\n request = Request(url.encode('utf-8'), data, headers)\n else:\n request = Request(url, data, headers)\n\n if ACCESS_KEYS.get('session_token'):\n request.add_header('X-Parse-Session-Token', ACCESS_KEYS.get('session_token'))\n elif master_key:\n request.add_header('X-Parse-Master-Key', master_key)\n\n request.get_method = lambda: http_verb\n\n try:\n response = urlopen(request, timeout=CONNECTION_TIMEOUT)\n except HTTPError as e:\n exc = {\n 400: core.ResourceRequestBadRequest,\n 401: core.ResourceRequestLoginRequired,\n 403: core.ResourceRequestForbidden,\n 404: core.ResourceRequestNotFound\n }.get(e.code, core.ParseError)\n raise exc(e.read())\n\n return json.loads(response.read().decode('utf-8'))", "response": "Execute a command with the given parameters and return the response JSON."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef batch(self, methods):\n methods = list(methods) # methods can be iterator\n if not methods:\n #accepts also empty list (or generator) - it allows call batch directly with query result (eventually empty)\n return\n queries, callbacks = list(zip(*[m(batch=True) for m in methods]))\n # perform 
all the operations in one batch\n responses = self.execute(\"\", \"POST\", requests=queries)\n # perform the callbacks with the response data (updating the existing\n # objects, etc)\n\n batched_errors = []\n for callback, response in zip(callbacks, responses):\n if \"success\" in response:\n callback(response[\"success\"])\n else:\n batched_errors.append(response[\"error\"])\n\n if batched_errors:\n raise core.ParseBatchError(batched_errors)", "response": "Calls a list of create update or delete methods in a single batch operation."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_channels(cls, installation_id, channels_to_add=set(),\n channels_to_remove=set(), **kw):\n \"\"\"\n Allow an application to manually subscribe or unsubscribe an\n installation to a certain push channel in a unified operation.\n\n this is based on:\n https://www.parse.com/docs/rest#installations-updating\n\n installation_id: the installation id you'd like to add a channel to\n channels_to_add: the name of the channel you'd like to subscribe the user to\n channels_to_remove: the name of the channel you'd like to unsubscribe the user from\n\n \"\"\"\n installation_url = cls._get_installation_url(installation_id)\n current_config = cls.GET(installation_url)\n\n new_channels = list(set(current_config['channels']).union(channels_to_add).difference(channels_to_remove))\n\n cls.PUT(installation_url, channels=new_channels)", "response": "Update the push channels of an installation."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfetch the objects matching the query.", "response": "def _fetch(self, count=False):\n \"\"\"\n Return a list of objects matching query, or if count == True return\n only the number of objects matching.\n \"\"\"\n if self._result_cache is not None:\n return len(self._result_cache) if count else self._result_cache\n options = dict(self._options) # make a local copy\n if 
self._where:\n # JSON encode WHERE values\n options['where'] = json.dumps(self._where)\n if self._select_related:\n options['include'] = ','.join(self._select_related)\n if count:\n return self._manager._count(**options)\n\n self._result_cache = self._manager._fetch(**options)\n return self._result_cache"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a proper object subclass matching class_name", "response": "def factory(cls, class_name):\n \"\"\"find proper Object subclass matching class_name\n system types like _User are mapped to types without underscore (parse_rest.user.User)\n If the user doesn't declare a matching type, the class is created on the fly\n \"\"\"\n class_name = str(class_name.lstrip('_'))\n types = ParseResource.__subclasses__()\n while types:\n t = types.pop()\n if t.__name__ == class_name:\n return t\n types.extend(t.__subclasses__())\n else:\n return type(class_name, (Object,), {})"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef schema(cls):\n root = '/'.join([API_ROOT, 'schemas', cls.__name__])\n schema = cls.GET(root)\n return schema", "response": "Retrieves the class's schema."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nincrementing one value in the object.", "response": "def increment(self, key, amount=1):\n \"\"\"\n Increment one value in the object. 
Note that this happens immediately:\n it does not wait for save() to be called\n \"\"\"\n payload = {\n key: {\n '__op': 'Increment',\n 'amount': amount\n }\n }\n self.__class__.PUT(self._absolute_url, **payload)\n self.__dict__[key] += amount"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef remove(self, key):\n payload = {\n key: {\n '__op': 'Delete'\n }\n }\n self.__class__.PUT(self._absolute_url, **payload)\n del self.__dict__[key]", "response": "Remove a column value in the object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef login_required(func):\n '''decorator describing User methods that need to be logged in'''\n def ret(obj, *args, **kw):\n if not hasattr(obj, 'sessionToken'):\n message = '%s requires a logged-in session' % func.__name__\n raise ResourceRequestLoginRequired(message)\n return func(obj, *args, **kw)\n return ret", "response": "decorator describing User methods that need to be logged in"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntriggers Parse s Password Process. Return True / False indicate success on the request", "response": "def request_password_reset(email):\n '''Trigger Parse\\'s Password Process. 
Return True/False\n indicate success/failure on the request'''\n\n url = '/'.join([API_ROOT, 'requestPasswordReset'])\n try:\n User.POST(url, email=email)\n return True\n except ParseError:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse(d):\n res = {}\n for k, v in iteritems(d):\n if isinstance(v, string_types) and DATETIME_ISO_FORMAT.match(v):\n v = dateutil.parser.parse(v)\n res[k] = v\n return res", "response": "Convert iso formatted timestamps found as values in the dict d to datetime objects."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dump(obj, path, **kw):\n open_kw = {'mode': 'w'}\n if PY3: # pragma: no cover\n open_kw['encoding'] = 'utf-8'\n\n # avoid indented lines ending with \", \" on PY2\n if kw.get('indent') and kw.get('separators') is None:\n kw['separators'] = (',', ': ')\n\n with open(str(path), **open_kw) as fp:\n return json.dump(obj, fp, **kw)", "response": "Python 2 + 3 compatible version of json. dump."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load(path, **kw):\n _kw = {}\n if PY3: # pragma: no cover\n _kw['encoding'] = 'utf-8'\n with open(str(path), **_kw) as fp:\n return json.load(fp, **kw)", "response": "Python 2 + 3 compatible version of json. load."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstrip brackets and what is inside brackets from text.", "response": "def strip_brackets(text, brackets=None):\n \"\"\"Strip brackets and what is inside brackets from text.\n\n .. note::\n If the text contains only one opening bracket, the rest of the text\n will be ignored. 
This is a feature, not a bug, as we want to avoid that\n this function raises errors too easily.\n \"\"\"\n res = []\n for c, type_ in _tokens(text, brackets=brackets):\n if type_ == TextType.text:\n res.append(c)\n return ''.join(res).strip()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsplitting text at separators outside of brackets.", "response": "def split_text_with_context(text, separators=WHITESPACE, brackets=None):\n \"\"\"Splits text at separators outside of brackets.\n\n :param text:\n :param separators: An iterable of single character tokens.\n :param brackets:\n :return: A `list` of non-empty chunks.\n\n .. note:: This function leaves content in brackets in the chunks.\n \"\"\"\n res, chunk = [], []\n for c, type_ in _tokens(text, brackets=brackets):\n if type_ == TextType.text and c in separators:\n res.append(''.join(chunk).strip())\n chunk = []\n else:\n chunk.append(c)\n res.append(''.join(chunk).strip())\n return nfilter(res)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsplits text along the separators unless they appear within brackets.", "response": "def split_text(text, separators=re.compile('\\s'), brackets=None, strip=False):\n \"\"\"Split text along the separators unless they appear within brackets.\n\n :param separators: An iterable single characters or a compiled regex pattern.\n :param brackets: `dict` mapping start tokens to end tokens of what is to be \\\n recognized as brackets.\n\n .. 
note:: This function will also strip content within brackets.\n \"\"\"\n if not isinstance(separators, PATTERN_TYPE):\n separators = re.compile(\n '[{0}]'.format(''.join('\\{0}'.format(c) for c in separators)))\n\n return nfilter(\n s.strip() if strip else s for s in\n separators.split(strip_brackets(text, brackets=brackets)))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nyielding marker value pairs from a text block.", "response": "def marker_split(block):\n \"\"\"Yield marker, value pairs from a text block (i.e. a list of lines).\n\n :param block: text block consisting of \\n separated lines as it will be the case for \\\n files read using \"rU\" mode.\n :return: generator of (marker, value) pairs.\n \"\"\"\n marker = None\n value = []\n\n for line in block.split('\\n'):\n line = line.strip()\n if line.startswith('\\\\_'):\n continue # we simply ignore SFM header fields\n match = MARKER_PATTERN.match(line)\n if match:\n if marker:\n yield marker, '\\n'.join(value)\n marker = match.group('marker')\n value = [line[match.end():]]\n else:\n value.append(line)\n if marker:\n yield marker, ('\\n'.join(value)).strip()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nretrieve the first value for a marker or None.", "response": "def get(self, key, default=None):\n \"\"\"Retrieve the first value for a marker or None.\"\"\"\n for k, v in self:\n if k == key:\n return v\n return default"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read(self,\n filename,\n encoding='utf-8',\n marker_map=None,\n entry_impl=Entry,\n entry_sep='\\n\\n',\n entry_prefix=None,\n keep_empty=False):\n \"\"\"Extend the list by parsing new entries from a file.\n\n :param filename:\n :param encoding:\n :param marker_map: A dict used to map marker names.\n :param entry_impl: Subclass of Entry or None\n :param entry_sep:\n :param entry_prefix:\n \"\"\"\n marker_map = marker_map or 
{}\n for entry in parse(\n filename,\n encoding,\n entry_sep,\n entry_prefix or entry_sep,\n keep_empty=keep_empty):\n if entry:\n self.append(entry_impl([(marker_map.get(k, k), v) for k, v in entry]))", "response": "Extend the list by parsing new entries from a file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef write(self, filename, encoding='utf-8'):\n with io.open(str(filename), 'w', encoding=encoding) as fp:\n for entry in self:\n fp.write(entry.__unicode__())\n fp.write('\\n\\n')", "response": "Write the list of entries to a file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nasking a yes or no question interactively.", "response": "def confirm(question, default=True):\n \"\"\"Ask a yes/no question interactively.\n\n :param question: The text of the question to ask.\n :returns: True if the answer was \"yes\", False otherwise.\n \"\"\"\n valid = {\"\": default, \"yes\": True, \"y\": True, \"no\": False, \"n\": False}\n while 1:\n choice = input(question + (\" [Y/n] \" if default else \" [y/N] \")).lower()\n if choice in valid:\n return valid[choice]\n print(\"Please respond with 'y' or 'n' \")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef gettext(self, section, option, whitespace_preserving_prefix='.'):\n lines = []\n for line in self.get(section, option, fallback='').splitlines():\n if re.match(re.escape(whitespace_preserving_prefix) + '\\s+', line):\n line = line[len(whitespace_preserving_prefix):]\n lines.append(line)\n return '\\n'.join(lines)", "response": "Return the gettext string for the specified option in section."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef data_url(content, mimetype=None):\n if isinstance(content, pathlib.Path):\n if not mimetype:\n mimetype = guess_type(content.name)[0]\n with content.open('rb') 
as fp:\n content = fp.read()\n else:\n if isinstance(content, text_type):\n content = content.encode('utf8')\n return \"data:{0};base64,{1}\".format(\n mimetype or 'application/octet-stream', b64encode(content).decode())", "response": "Returns content encoded as base64 Data URI."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef dict_merged(d, _filter=None, **kw):\n def f(s):\n if _filter:\n return _filter(s)\n return s is not None\n d = d or {}\n for k, v in iteritems(kw):\n if f(v):\n d[k] = v\n return d", "response": "Update dictionary d with the items passed as kw if the value passes _filter."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving illegal characters from text.", "response": "def xmlchars(text):\n \"\"\"Not all of UTF-8 is considered valid character data in XML ...\n\n Thus, this function can be used to remove illegal characters from ``text``.\n \"\"\"\n invalid = list(range(0x9))\n invalid.extend([0xb, 0xc])\n invalid.extend(range(0xe, 0x20))\n return re.sub('|'.join('\\\\x%0.2X' % i for i in invalid), '', text)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef slug(s, remove_whitespace=True, lowercase=True):\n res = ''.join(c for c in unicodedata.normalize('NFD', s)\n if unicodedata.category(c) != 'Mn')\n if lowercase:\n res = res.lower()\n for c in string.punctuation:\n res = res.replace(c, '')\n res = re.sub('\\s+', '' if remove_whitespace else ' ', res)\n res = res.encode('ascii', 'ignore').decode('ascii')\n assert re.match('[ A-Za-z0-9]*$', res)\n return res", "response": "Condensed version of s containing only lowercase alphanumeric characters."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncasts string to binary_type.", "response": "def encoded(string, encoding='utf-8'):\n \"\"\"Cast string to binary_type.\n\n :param string: six.binary_type or 
six.text_type\n :param encoding: encoding which the object is forced to\n :return: six.binary_type\n \"\"\"\n assert isinstance(string, string_types) or isinstance(string, binary_type)\n if isinstance(string, text_type):\n return string.encode(encoding)\n try:\n # make sure the string can be decoded in the specified encoding ...\n string.decode(encoding)\n return string\n except UnicodeDecodeError:\n # ... if not use latin1 as best guess to decode the string before encoding as\n # specified.\n return string.decode('latin1').encode(encoding)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef readlines(p,\n encoding=None,\n strip=False,\n comment=None,\n normalize=None,\n linenumbers=False):\n \"\"\"\n Read a `list` of lines from a text file.\n\n :param p: File path (or `list` or `tuple` of text)\n :param encoding: Registered codec.\n :param strip: If `True`, strip leading and trailing whitespace.\n :param comment: String used as syntax to mark comment lines. When not `None`, \\\n commented lines will be stripped. This implies `strip=True`.\n :param normalize: 'NFC', 'NFKC', 'NFD', 'NFKD'\n :param linenumbers: return also line numbers.\n :return: `list` of text lines or pairs (`int`, text or `None`).\n \"\"\"\n if comment:\n strip = True\n if isinstance(p, (list, tuple)):\n res = [l.decode(encoding) if encoding else l for l in p]\n else:\n with Path(p).open(encoding=encoding or 'utf-8') as fp:\n res = fp.readlines()\n if strip:\n res = [l.strip() or None for l in res]\n if comment:\n res = [None if l and l.startswith(comment) else l for l in res]\n if normalize:\n res = [unicodedata.normalize(normalize, l) if l else l for l in res]\n if linenumbers:\n return [(n, l) for n, l in enumerate(res, 1)]\n return [l for l in res if l is not None]", "response": "Read a list of lines from a text file."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwraps for os. 
walk yielding Path objects.", "response": "def walk(p, mode='all', **kw):\n \"\"\"Wrapper for `os.walk`, yielding `Path` objects.\n\n :param p: root of the directory tree to walk.\n :param mode: 'all|dirs|files', defaulting to 'all'.\n :param kw: Keyword arguments are passed to `os.walk`.\n :return: Generator for the requested Path objects.\n \"\"\"\n for dirpath, dirnames, filenames in os.walk(as_posix(p), **kw):\n if mode in ('all', 'dirs'):\n for dirname in dirnames:\n yield Path(dirpath).joinpath(dirname)\n if mode in ('all', 'files'):\n for fname in filenames:\n yield Path(dirpath).joinpath(fname)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef bibtex(self):\n m = max(itertools.chain(map(len, self), [0]))\n fields = (\" %s = {%s}\" % (k.ljust(m), self[k]) for k in self)\n return \"@%s{%s,\\n%s\\n}\" % (\n getattr(self.genre, 'value', self.genre), self.id, \",\\n\".join(fields))", "response": "Represent the source in BibTeX format."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef text(self):\n genre = getattr(self.genre, 'value', self.genre)\n pages_at_end = genre in (\n 'book',\n 'phdthesis',\n 'mastersthesis',\n 'misc',\n 'techreport')\n thesis = genre in ('phdthesis', 'mastersthesis')\n\n if self.get('editor'):\n editors = self['editor']\n affix = 'eds' if ' and ' in editors or '&' in editors else 'ed'\n editors = \" %s (%s.)\" % (editors, affix)\n else:\n editors = None\n\n res = [self.get('author', editors), self.get('year', 'n.d')]\n if genre == 'book':\n res.append(self.get_with_translation('booktitle') or\n self.get_with_translation('title'))\n series = ', '.join(filter(None, [self.get('series'), self.get('volume')]))\n if series:\n res.append('(%s.)' % series)\n elif genre == 'misc':\n # in case of misc records, we use the note field in case a title is missing.\n res.append(self.get_with_translation('title') 
or self.get('note'))\n else:\n res.append(self.get_with_translation('title'))\n\n if genre == 'article':\n atom = ' '.join(filter(None, [self.get('journal'), self.get('volume')]))\n if self.get('issue'):\n atom += '(%s)' % self['issue']\n res.append(atom)\n res.append(self.get('pages'))\n elif genre == 'incollection' or genre == 'inproceedings':\n prefix = 'In'\n atom = ''\n if editors:\n atom += editors\n if self.get('booktitle'):\n if atom:\n atom += ','\n atom += \" %s\" % self.get_with_translation('booktitle')\n if self.get('pages'):\n atom += \", %s\" % self['pages']\n res.append(prefix + atom)\n else:\n # check for author to make sure we haven't included the editors yet.\n if editors and self.get('author'):\n res.append(\"In %s\" % editors)\n\n for attr in [\n 'journal',\n 'volume' if genre != 'book' else None,\n ]:\n if attr and self.get(attr):\n res.append(self.get(attr))\n\n if self.get('issue'):\n res.append(\"(%s)\" % self['issue'])\n\n if not pages_at_end and self.get('pages'):\n res.append(self['pages'])\n\n if self.get('publisher'):\n res.append(\": \".join(filter(None, [self.get('address'), self['publisher']])))\n else:\n if genre == 'misc' and self.get('howpublished'):\n res.append(self.get('howpublished'))\n\n if not thesis and pages_at_end and self.get('pages'):\n res.append(self['pages'] + 'pp')\n\n note = self.get('note') or self._genre_note.get(genre)\n if note and note not in res:\n if thesis:\n joiner = ','\n if self.get('school'):\n note += '{0} {1}'.format(joiner, self.get('school'))\n joiner = ';'\n if self.get('pages'):\n note += '{0} {1}pp.'.format(joiner, self.get('pages'))\n res.append('(%s)' % note)\n\n return ' '.join(\n x if x.endswith(('.', '.)')) else '%s.' 
% x for x in res if x)", "response": "Linearize the bib source according to the rules of the unified style."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef request(self, path, method='GET', params=None, type=REST_TYPE):\n response_text = self._get_http_client(type).request(path, method, params)\n\n if not response_text:\n return response_text\n\n response_json = json.loads(response_text)\n\n if 'errors' in response_json:\n raise (ErrorException([Error().load(e) for e in response_json['errors']]))\n\n return response_json", "response": "Builds a request gets a response and decodes it."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef hlr_create(self, msisdn, reference):\n return HLR().load(self.request('hlr', 'POST', {'msisdn': msisdn, 'reference': reference}))", "response": "Perform a new HLR lookup."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef message_create(self, originator, recipients, body, params=None):\n if params is None: params = {}\n if type(recipients) == list:\n recipients = ','.join(recipients)\n\n params.update({'originator': originator, 'body': body, 'recipients': recipients})\n return Message().load(self.request('messages', 'POST', params))", "response": "Create a new message."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new voice message.", "response": "def voice_message_create(self, recipients, body, params=None):\n \"\"\"Create a new voice message.\"\"\"\n if params is None: params = {}\n if type(recipients) == list:\n recipients = ','.join(recipients)\n\n params.update({'recipients': recipients, 'body': body})\n return VoiceMessage().load(self.request('voicemessages', 'POST', params))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef lookup(self, 
phonenumber, params=None):\n if params is None: params = {}\n return Lookup().load(self.request('lookup/' + str(phonenumber), 'GET', params))", "response": "Do a new lookup."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef lookup_hlr(self, phonenumber, params=None):\n if params is None: params = {}\n return HLR().load(self.request('lookup/' + str(phonenumber) + '/hlr', 'GET', params))", "response": "Retrieve the information of a specific HLR lookup."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nperform a new HLR lookup.", "response": "def lookup_hlr_create(self, phonenumber, params=None):\n \"\"\"Perform a new HLR lookup.\"\"\"\n if params is None: params = {}\n return HLR().load(self.request('lookup/' + str(phonenumber) + '/hlr', 'POST', params))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef verify_create(self, recipient, params=None):\n if params is None: params = {}\n params.update({'recipient': recipient})\n return Verify().load(self.request('verify', 'POST', params))", "response": "Create a new verification."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nverifying the token of a specific verification.", "response": "def verify_verify(self, id, token):\n \"\"\"Verify the token of a specific verification.\"\"\"\n return Verify().load(self.request('verify/' + str(id), params={'token': token}))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate typed objects from the dicts.", "response": "def items(self, value):\n \"\"\"Create typed objects from the dicts.\"\"\"\n items = []\n for item in value:\n items.append(self.itemType().load(item))\n\n self._items = items"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds a request and gets a response.", "response": "def request(self, path, method='GET', 
params=None):\n \"\"\"Builds a request and gets a response.\"\"\"\n if params is None: params = {}\n url = urljoin(self.endpoint, path)\n\n headers = {\n 'Accept': 'application/json',\n 'Authorization': 'AccessKey ' + self.access_key,\n 'User-Agent': self.user_agent,\n 'Content-Type': 'application/json'\n }\n\n if method == 'DELETE':\n response = requests.delete(url, verify=True, headers=headers, data=json.dumps(params))\n elif method == 'GET':\n response = requests.get(url, verify=True, headers=headers, params=params)\n elif method == 'PATCH':\n response = requests.patch(url, verify=True, headers=headers, data=json.dumps(params))\n elif method == 'POST':\n response = requests.post(url, verify=True, headers=headers, data=json.dumps(params))\n elif method == 'PUT':\n response = requests.put(url, verify=True, headers=headers, data=json.dumps(params))\n else:\n raise ValueError(str(method) + ' is not a supported HTTP method')\n\n if response.status_code in self.__supported_status_codes:\n response_text = response.text\n else:\n response.raise_for_status()\n\n return response_text"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef run(self):\n config_h = opj(\"src\", \"cysignals\", \"cysignals_config.h\")\n if not os.path.isfile(config_h):\n import subprocess\n subprocess.check_call([\"make\", \"configure\"])\n subprocess.check_call([\"sh\", \"configure\"])\n\n dist = self.distribution\n ext_modules = dist.ext_modules\n if ext_modules:\n dist.ext_modules[:] = self.cythonize(ext_modules)\n\n _build.run(self)", "response": "Run ``./configure`` and Cython first."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a list of all cython extra debug information files.", "response": "def cython_debug_files():\n \"\"\"\n Cython extra debug information files\n \"\"\"\n # Search all subdirectories of sys.path directories for a\n # \"cython_debug\" directory. 
Note that sys_path is a variable set by\n # cysignals-CSI. It may differ from sys.path if GDB is run with a\n # different Python interpreter.\n files = []\n for path in sys_path: # noqa\n pattern = os.path.join(path, '*', 'cython_debug', 'cython_debug_info_*')\n files.extend(glob.glob(pattern))\n return files"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nretrieves a new frame from the PhoXi and convert it to a ColorImage DepthImage and IrImage.", "response": "def frames(self):\n \"\"\"Retrieve a new frame from the PhoXi and convert it to a ColorImage,\n a DepthImage, and an IrImage.\n\n Returns\n -------\n :obj:`tuple` of :obj:`ColorImage`, :obj:`DepthImage`, :obj:`IrImage`, :obj:`numpy.ndarray`\n The ColorImage, DepthImage, and IrImage of the current frame.\n \"\"\"\n _, phoxi_depth_im, _ = self._phoxi.frames()\n webcam_color_im, _, _ = self._webcam.frames(most_recent=True)\n\n # Colorize PhoXi Image\n phoxi_color_im = self._colorize(phoxi_depth_im, webcam_color_im)\n return phoxi_color_im, phoxi_depth_im, None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncollecting a series of depth images and return the median of the set.", "response": "def median_depth_img(self, num_img=1, fill_depth=0.0):\n \"\"\"Collect a series of depth images and return the median of the set.\n\n Parameters\n ----------\n num_img : int\n The number of consecutive frames to process.\n\n Returns\n -------\n DepthImage\n The median DepthImage collected from the frames.\n \"\"\"\n depths = []\n\n for _ in range(num_img):\n _, depth, _ = self.frames()\n depths.append(depth)\n\n median_depth = Image.median_images(depths)\n median_depth.data[median_depth.data == 0.0] = fill_depth\n return median_depth"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef sensor(sensor_type, cfg):\n sensor_type = sensor_type.lower()\n if sensor_type == 'kinect2':\n s 
= Kinect2Sensor(packet_pipeline_mode=cfg['pipeline_mode'],\n device_num=cfg['device_num'],\n frame=cfg['frame'])\n elif sensor_type == 'bridged_kinect2':\n s = KinectSensorBridged(quality=cfg['quality'], frame=cfg['frame'])\n elif sensor_type == 'primesense':\n flip_images = True\n if 'flip_images' in cfg.keys():\n flip_images = cfg['flip_images']\n s = PrimesenseSensor(auto_white_balance=cfg['auto_white_balance'],\n flip_images=flip_images,\n frame=cfg['frame'])\n elif sensor_type == 'virtual':\n s = VirtualSensor(cfg['image_dir'],\n frame=cfg['frame'])\n elif sensor_type == 'tensor_dataset':\n s = TensorDatasetVirtualSensor(cfg['dataset_dir'],\n frame=cfg['frame'])\n elif sensor_type == 'primesense_ros':\n s = PrimesenseSensor_ROS(frame=cfg['frame'])\n elif sensor_type == 'ensenso':\n s = EnsensoSensor(frame=cfg['frame'])\n elif sensor_type == 'phoxi':\n s = PhoXiSensor(frame=cfg['frame'],\n device_name=cfg['device_name'],\n size=cfg['size'])\n elif sensor_type == 'webcam':\n s = WebcamSensor(frame=cfg['frame'],\n device_id=cfg['device_id'])\n elif sensor_type == 'colorized_phoxi':\n s = ColorizedPhoXiSensor(frame=cfg['frame'], phoxi_config=cfg['phoxi_config'],\n webcam_config=cfg['webcam_config'], calib_dir=cfg['calib_dir'])\n elif sensor_type == 'realsense':\n s = RealSenseSensor(\n cam_id=cfg['cam_id'],\n filter_depth=cfg['filter_depth'],\n frame=cfg['frame'],\n )\n else:\n raise ValueError('RGBD sensor type %s not supported' %(sensor_type))\n return s", "response": "Creates a new sensor object of the specified type."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the index of a point in an array", "response": "def get_point_index(point, all_points, eps = 1e-4):\n \"\"\" Get the index of a point in an array \"\"\"\n inds = np.where(np.linalg.norm(point - all_points, axis=1) < eps)\n if inds[0].shape[0] == 0:\n return -1\n return inds[0][0]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmatching 
features between two graspable objects based on a full distance matrix.", "response": "def match(self, source_obj_features, target_obj_features):\n \"\"\"\n Matches features between two graspable objects based on a full distance matrix.\n\n Parameters\n ----------\n source_obj_features : :obj:`BagOfFeatures`\n bag of the source objects features\n target_obj_features : :obj:`BagOfFeatures`\n bag of the target objects features\n\n Returns\n -------\n corrs : :obj:`Correspondences`\n the correspondences between source and target\n \"\"\"\n if not isinstance(source_obj_features, f.BagOfFeatures):\n raise ValueError('Must supply source bag of object features')\n if not isinstance(target_obj_features, f.BagOfFeatures):\n raise ValueError('Must supply target bag of object features')\n\n # source feature descriptors and keypoints\n source_descriptors = source_obj_features.descriptors\n target_descriptors = target_obj_features.descriptors\n source_keypoints = source_obj_features.keypoints\n target_keypoints = target_obj_features.keypoints\n\n #calculate distance between this model's descriptors and each of the other_model's descriptors\n dists = spatial.distance.cdist(source_descriptors, target_descriptors)\n\n #calculate the indices of the target_model that minimize the distance to the descriptors in this model\n source_closest_descriptors = dists.argmin(axis=1)\n target_closest_descriptors = dists.argmin(axis=0)\n match_indices = []\n source_matched_points = np.zeros((0,3))\n target_matched_points = np.zeros((0,3))\n\n #calculate which points/indices the closest descriptors correspond to\n for i, j in enumerate(source_closest_descriptors):\n # for now, only keep correspondences that are a 2-way match\n if target_closest_descriptors[j] == i:\n match_indices.append(j)\n source_matched_points = np.r_[source_matched_points, source_keypoints[i:i+1, :]]\n target_matched_points = np.r_[target_matched_points, target_keypoints[j:j+1, :]]\n else:\n match_indices.append(-1)\n\n 
return Correspondences(match_indices, source_matched_points, target_matched_points)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef match(self, source_points, target_points, source_normals, target_normals):\n # compute the distances and inner products between the point sets\n dists = ssd.cdist(source_points, target_points, 'euclidean')\n ip = source_normals.dot(target_normals.T) # abs because we don't have correct orientations\n source_ip = source_points.dot(target_normals.T)\n target_ip = target_points.dot(target_normals.T)\n target_ip = np.diag(target_ip)\n target_ip = np.tile(target_ip, [source_points.shape[0], 1])\n abs_diff = np.abs(source_ip - target_ip) # difference in inner products\n\n # mark invalid correspondences\n invalid_dists = np.where(dists > self.dist_thresh_)\n abs_diff[invalid_dists[0], invalid_dists[1]] = np.inf\n invalid_norms = np.where(ip < self.norm_thresh_)\n abs_diff[invalid_norms[0], invalid_norms[1]] = np.inf\n\n # choose the closest matches\n match_indices = np.argmin(abs_diff, axis=1)\n match_vals = np.min(abs_diff, axis=1)\n invalid_matches = np.where(match_vals == np.inf)\n match_indices[invalid_matches[0]] = -1\n\n return NormalCorrespondences(match_indices, source_points, target_points, source_normals, target_normals)", "response": "Matches points between two point-normal sets.
Uses the closest ip to choose matches with thresholding only."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _config_pipe(self):\n self._cfg.enable_device(self.id)\n\n # configure the color stream\n self._cfg.enable_stream(\n rs.stream.color,\n RealSenseSensor.COLOR_IM_WIDTH,\n RealSenseSensor.COLOR_IM_HEIGHT,\n rs.format.bgr8,\n RealSenseSensor.FPS\n )\n\n # configure the depth stream\n self._cfg.enable_stream(\n rs.stream.depth,\n RealSenseSensor.DEPTH_IM_WIDTH,\n 360 if self._depth_align else RealSenseSensor.DEPTH_IM_HEIGHT,\n rs.format.z16,\n RealSenseSensor.FPS\n )", "response": "Configures the pipeline to stream color and depth."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _set_depth_scale(self):\n sensor = self._profile.get_device().first_depth_sensor()\n self._depth_scale = sensor.get_depth_scale()", "response": "Retrieve the scale of the depth sensor."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _set_intrinsics(self):\n strm = self._profile.get_stream(rs.stream.color)\n obj = strm.as_video_stream_profile().get_intrinsics()\n self._intrinsics[0, 0] = obj.fx\n self._intrinsics[1, 1] = obj.fy\n self._intrinsics[0, 2] = obj.ppx\n self._intrinsics[1, 2] = obj.ppy", "response": "Read the intrinsics matrix from the stream."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef color_intrinsics(self):\n return CameraIntrinsics(\n self._frame,\n self._intrinsics[0, 0],\n self._intrinsics[1, 1],\n self._intrinsics[0, 2],\n self._intrinsics[1, 2],\n height=RealSenseSensor.COLOR_IM_HEIGHT,\n width=RealSenseSensor.COLOR_IM_WIDTH,\n )", "response": "Return a new CameraIntrinsics object for the RealSense color camera."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef 
_read_color_and_depth_image(self):\n frames = self._pipe.wait_for_frames()\n if self._depth_align:\n frames = self._align.process(frames)\n\n depth_frame = frames.get_depth_frame()\n color_frame = frames.get_color_frame()\n\n if not depth_frame or not color_frame:\n logging.warning('Could not retrieve frames.')\n return None, None\n\n if self._filter_depth:\n depth_frame = self._filter_depth_frame(depth_frame)\n\n # convert to numpy arrays\n depth_image = self._to_numpy(depth_frame, np.float32)\n color_image = self._to_numpy(color_frame, np.uint8)\n\n # convert depth to meters\n depth_image *= self._depth_scale\n\n # bgr to rgb\n color_image = color_image[..., ::-1]\n\n depth = DepthImage(depth_image, frame=self._frame)\n color = ColorImage(color_image, frame=self._frame)\n return color, depth", "response": "Read a color and depth image from the device."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nstart the webcam sensor.", "response": "def start(self):\n \"\"\"Start the sensor.\n \"\"\"\n self._cap = cv2.VideoCapture(self._device_id + cv2.CAP_V4L2)\n if not self._cap.isOpened():\n self._running = False\n self._cap.release()\n self._cap = None\n return False\n\n self._cap.set(cv2.CAP_PROP_FRAME_WIDTH, self._camera_intr.width)\n self._cap.set(cv2.CAP_PROP_FRAME_HEIGHT, self._camera_intr.height)\n self._running = True\n\n # Capture 5 frames to flush webcam sensor\n for _ in range(5):\n _ = self.frames()\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nretrieve a new frame from the PhoXi and convert it to a ColorImage DepthImage and IrImage.", "response": "def frames(self, most_recent=False):\n \"\"\"Retrieve a new frame from the PhoXi and convert it to a ColorImage,\n a DepthImage, and an IrImage.\n\n Parameters\n ----------\n most_recent: bool\n If true, the OpenCV buffer is emptied for the webcam before reading the most recent frame.\n\n Returns\n -------\n :obj:`tuple` of 
:obj:`ColorImage`, :obj:`DepthImage`, :obj:`IrImage`, :obj:`numpy.ndarray`\n            The ColorImage, DepthImage, and IrImage of the current frame.\n        \"\"\"\n        if most_recent:\n            for i in range(4):\n                self._cap.grab()\n        for i in range(1):\n            if self._adjust_exposure:\n                try:\n                    command = 'v4l2-ctl -d /dev/video{} -c exposure_auto=1 -c exposure_auto_priority=0 -c exposure_absolute=100 -c saturation=60 -c gain=140'.format(self._device_id)\n                    FNULL = open(os.devnull, 'w')\n                    subprocess.call(shlex.split(command), stdout=FNULL, stderr=subprocess.STDOUT)\n                except:\n                    pass\n            ret, frame = self._cap.read()\n        rgb_data = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n\n        return ColorImage(rgb_data, frame=self._frame), None, None"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the buffer formatting.", "response": "def _set_format(self, msg):\n        \"\"\" Set the buffer formatting. \"\"\"\n        num_points = msg.height * msg.width\n        self._format = '<' + num_points * 'ffff'"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the camera intrinsics from an info msg.", "response": "def _set_camera_properties(self, msg):\n        \"\"\" Set the camera intrinsics from an info msg.
\"\"\"\n focal_x = msg.K[0]\n focal_y = msg.K[4]\n center_x = msg.K[2]\n center_y = msg.K[5]\n im_height = msg.height\n im_width = msg.width\n self._camera_intr = CameraIntrinsics(self._frame, focal_x, focal_y,\n center_x, center_y,\n height=im_height,\n width=im_width)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _depth_im_from_pointcloud(self, msg):\n # set format\n if self._format is None:\n self._set_format(msg)\n\n # rescale camera intr in case binning is turned on\n if msg.height != self._camera_intr.height:\n rescale_factor = float(msg.height) / self._camera_intr.height\n self._camera_intr = self._camera_intr.resize(rescale_factor)\n \n # read num points\n num_points = msg.height * msg.width\n \n # read buffer\n raw_tup = struct.Struct(self._format).unpack_from(msg.data, 0)\n raw_arr = np.array(raw_tup)\n\n # subsample depth values and reshape\n depth_ind = 2 + 4 * np.arange(num_points)\n depth_buf = raw_arr[depth_ind]\n depth_arr = depth_buf.reshape(msg.height, msg.width)\n depth_im = DepthImage(depth_arr, frame=self._frame)\n\n return depth_im", "response": "Convert a pointcloud2 message to a depth image."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef frames(self):\n # wait for a new image\n while self._cur_depth_im is None:\n time.sleep(0.01)\n \n # read next image\n depth_im = self._cur_depth_im\n color_im = ColorImage(np.zeros([depth_im.height,\n depth_im.width,\n 3]).astype(np.uint8), frame=self._frame)\n self._cur_depth_im = None\n return color_im, depth_im, None", "response": "Retrieve a new frame from the Ensenso and convert it to a ColorImage DepthImage and IrImage."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef register(self, source_point_cloud, target_point_cloud,\n source_normal_cloud, target_normal_cloud, matcher,\n num_iterations=1, 
compute_total_cost=True, match_centroids=False,\n vis=False):\n \"\"\"\n Iteratively register objects to one another using a modified version of point to plane ICP.\n The cost func is PointToPlane_COST + gamma * PointToPoint_COST.\n Uses a `stochastic Gauss-Newton step` where on each iteration a smaller number of points is sampled.\n\n Parameters\n ----------\n source_point_cloud : :obj:`autolab_core.PointCloud`\n source object points\n target_point_cloud : :obj`autolab_core.PointCloud`\n target object points\n source_normal_cloud : :obj:`autolab_core.NormalCloud`\n source object outward-pointing normals\n target_normal_cloud : :obj:`autolab_core.NormalCloud`\n target object outward-pointing normals\n matcher : :obj:`PointToPlaneFeatureMatcher`\n object to match the point sets\n num_iterations : int\n the number of iterations to run\n compute_total_cost : bool\n whether or not to compute the total cost upon termination.\n match_centroids : bool\n whether or not to match the centroids of the point clouds\n \n Returns\n -------\n :obj`RegistrationResult`\n results containing source to target transformation and cost\n \"\"\"\n # check valid data\n if not isinstance(source_point_cloud, PointCloud) or not isinstance(target_point_cloud, PointCloud):\n raise ValueError('Source and target point clouds must be PointCloud objects')\n if not isinstance(source_normal_cloud, NormalCloud) or not isinstance(target_normal_cloud, NormalCloud):\n raise ValueError('Source and target normal clouds must be NormalCloud objects')\n if not isinstance(matcher, PointToPlaneFeatureMatcher):\n raise ValueError('Feature matcher must be a PointToPlaneFeatureMatcher object')\n if source_point_cloud.num_points != source_normal_cloud.num_points or target_point_cloud.num_points != target_normal_cloud.num_points:\n raise ValueError('Input point clouds must have the same number of points as corresponding normal cloud')\n\n # extract source and target point and normal data arrays\n orig_source_points 
= source_point_cloud.data.T\n orig_target_points = target_point_cloud.data.T\n orig_source_normals = source_normal_cloud.data.T\n orig_target_normals = target_normal_cloud.data.T\n\n # setup the problem\n normal_norms = np.linalg.norm(orig_target_normals, axis=1)\n valid_inds = np.nonzero(normal_norms)\n orig_target_points = orig_target_points[valid_inds[0],:]\n orig_target_normals = orig_target_normals[valid_inds[0],:]\n\n normal_norms = np.linalg.norm(orig_source_normals, axis=1)\n valid_inds = np.nonzero(normal_norms)\n orig_source_points = orig_source_points[valid_inds[0],:]\n orig_source_normals = orig_source_normals[valid_inds[0],:]\n\n # alloc buffers for solutions\n source_mean_point = np.mean(orig_source_points, axis=0)\n target_mean_point = np.mean(orig_target_points, axis=0)\n R_sol = np.eye(3)\n t_sol = np.zeros([3, 1]) #init with diff between means\n if match_centroids:\n t_sol[:,0] = target_mean_point - source_mean_point\n\n # iterate through\n for i in range(num_iterations):\n logging.info('Point to plane ICP iteration %d' %(i))\n\n # subsample points\n source_subsample_inds = np.random.choice(orig_source_points.shape[0], size=self.sample_size_)\n source_points = orig_source_points[source_subsample_inds,:]\n source_normals = orig_source_normals[source_subsample_inds,:]\n target_subsample_inds = np.random.choice(orig_target_points.shape[0], size=self.sample_size_)\n target_points = orig_target_points[target_subsample_inds,:]\n target_normals = orig_target_normals[target_subsample_inds,:]\n\n # transform source points\n source_points = (R_sol.dot(source_points.T) + np.tile(t_sol, [1, source_points.shape[0]])).T\n source_normals = (R_sol.dot(source_normals.T)).T\n \n # closest points\n corrs = matcher.match(source_points, target_points, source_normals, target_normals)\n\n # solve optimal rotation + translation\n valid_corrs = np.where(corrs.index_map != -1)[0]\n source_corr_points = corrs.source_points[valid_corrs,:]\n target_corr_points = 
corrs.target_points[corrs.index_map[valid_corrs], :]\n target_corr_normals = corrs.target_normals[corrs.index_map[valid_corrs], :]\n\n num_corrs = valid_corrs.shape[0]\n if num_corrs == 0:\n logging.warning('No correspondences found')\n break\n\n # create A and b matrices for Gauss-Newton step on joint cost function\n A = np.zeros([6,6])\n b = np.zeros([6,1])\n Ap = np.zeros([6,6])\n bp = np.zeros([6,1])\n G = np.zeros([3,6])\n G[:,3:] = np.eye(3)\n\n for i in range(num_corrs):\n s = source_corr_points[i:i+1,:].T\n t = target_corr_points[i:i+1,:].T\n n = target_corr_normals[i:i+1,:].T\n G[:,:3] = skew(s).T\n A += G.T.dot(n).dot(n.T).dot(G)\n b += G.T.dot(n).dot(n.T).dot(t - s)\n\n Ap += G.T.dot(G)\n bp += G.T.dot(t - s)\n v = np.linalg.solve(A + self.gamma_*Ap + self.mu_*np.eye(6),\n b + self.gamma_*bp)\n\n # create pose values from the solution\n R = np.eye(3)\n R = R + skew(v[:3])\n U, S, V = np.linalg.svd(R)\n R = U.dot(V)\n t = v[3:]\n\n # incrementally update the final transform\n R_sol = R.dot(R_sol)\n t_sol = R.dot(t_sol) + t\n\n T_source_target = RigidTransform(R_sol, t_sol, from_frame=source_point_cloud.frame, to_frame=target_point_cloud.frame)\n\n total_cost = 0\n source_points = (R_sol.dot(orig_source_points.T) + np.tile(t_sol, [1, orig_source_points.shape[0]])).T\n source_normals = (R_sol.dot(orig_source_normals.T)).T\n\n if compute_total_cost:\n # rematch all points to get the final cost\n corrs = matcher.match(source_points, orig_target_points, source_normals, orig_target_normals)\n valid_corrs = np.where(corrs.index_map != -1)[0]\n num_corrs = valid_corrs.shape[0]\n if num_corrs == 0:\n return RegistrationResult(T_source_target, np.inf)\n\n # get the corresponding points\n source_corr_points = corrs.source_points[valid_corrs,:]\n target_corr_points = corrs.target_points[corrs.index_map[valid_corrs], :]\n target_corr_normals = corrs.target_normals[corrs.index_map[valid_corrs], :]\n\n # determine total cost\n source_target_alignment = 
np.diag((source_corr_points - target_corr_points).dot(target_corr_normals.T))\n point_plane_cost = (1.0 / num_corrs) * np.sum(source_target_alignment * source_target_alignment)\n point_dist_cost = (1.0 / num_corrs) * np.sum(np.linalg.norm(source_corr_points - target_corr_points, axis=1)**2)\n total_cost = point_plane_cost + self.gamma_ * point_dist_cost\n\n return RegistrationResult(T_source_target, total_cost)", "response": "Iteratively registers the source point cloud to the target point cloud with stochastic point-to-plane ICP, returning the source-to-target transform and the registration cost."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nretrieves the next frame from the image directory and converts it to a ColorImage, a DepthImage, and an IrImage.", "response": "def frames(self):\n \"\"\"Retrieve the next frame from the image directory and convert it to a ColorImage,\n a DepthImage, and an IrImage.\n\n Parameters\n ----------\n skip_registration : bool\n If True, the registration step is skipped.\n\n Returns\n -------\n :obj:`tuple` of :obj:`ColorImage`, :obj:`DepthImage`, :obj:`IrImage`, :obj:`numpy.ndarray`\n The ColorImage, DepthImage, and IrImage of the current frame.\n\n Raises\n ------\n RuntimeError\n If the stream is not running or if all images in the\n directory have been used.\n \"\"\"\n if not self._running:\n raise RuntimeError('Device pointing to %s not running. 
Cannot read frames' %(self._path_to_images))\n\n if self._im_index >= self._num_images:\n raise RuntimeError('Device is out of images')\n\n # read images\n color_filename = os.path.join(self._path_to_images, 'color_%d%s' %(self._im_index, self._color_ext))\n color_im = ColorImage.open(color_filename, frame=self._frame)\n depth_filename = os.path.join(self._path_to_images, 'depth_%d.npy' %(self._im_index))\n depth_im = DepthImage.open(depth_filename, frame=self._frame)\n self._im_index = (self._im_index + 1) % self._num_images\n return color_im, depth_im, None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncollect a series of depth images and return the median of the set.", "response": "def median_depth_img(self, num_img=1):\n \"\"\"Collect a series of depth images and return the median of the set.\n\n Parameters\n ----------\n num_img : int\n The number of consecutive frames to process.\n\n Returns\n -------\n :obj:`DepthImage`\n The median DepthImage collected from the frames.\n \"\"\"\n depths = []\n\n for _ in range(num_img):\n _, depth, _ = self.frames()\n depths.append(depth)\n\n return Image.median_images(depths)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nretrieves the next frame from the tensor dataset and converts it to a ColorImage, a DepthImage, and an IrImage.", "response": "def frames(self):\n \"\"\"Retrieve the next frame from the tensor dataset and convert it to a ColorImage,\n a DepthImage, and an IrImage.\n\n Parameters\n ----------\n skip_registration : bool\n If True, the registration step is skipped.\n\n Returns\n -------\n :obj:`tuple` of :obj:`ColorImage`, :obj:`DepthImage`, :obj:`IrImage`, :obj:`numpy.ndarray`\n The ColorImage, DepthImage, and IrImage of the current frame.\n\n Raises\n ------\n RuntimeError\n If the stream is not running or if all images in the\n directory have been used.\n \"\"\"\n if not self._running:\n raise RuntimeError('Device pointing to %s not 
running. Cannot read frames' %(self._path_to_images))\n\n if self._im_index >= self._num_images:\n raise RuntimeError('Device is out of images')\n\n # read images\n datapoint = self._dataset.datapoint(self._im_index,\n TensorDatasetVirtualSensor.IMAGE_FIELDS)\n color_im = ColorImage(datapoint[TensorDatasetVirtualSensor.COLOR_IM_FIELD],\n frame=self._frame)\n depth_im = DepthImage(datapoint[TensorDatasetVirtualSensor.DEPTH_IM_FIELD],\n frame=self._frame)\n if self._image_rescale_factor != 1.0:\n color_im = color_im.resize(self._image_rescale_factor)\n depth_im = depth_im.resize(self._image_rescale_factor, interp='nearest')\n self._im_index = (self._im_index + 1) % self._num_images\n return color_im, depth_im, None"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a new CameraIntrinsics object for the primesense color camera.", "response": "def color_intrinsics(self):\n \"\"\":obj:`CameraIntrinsics` : The camera intrinsics for the primesense color camera.\n \"\"\"\n return CameraIntrinsics(self._ir_frame, PrimesenseSensor.FOCAL_X, PrimesenseSensor.FOCAL_Y,\n PrimesenseSensor.CENTER_X, PrimesenseSensor.CENTER_Y,\n height=PrimesenseSensor.DEPTH_IM_HEIGHT,\n width=PrimesenseSensor.DEPTH_IM_WIDTH)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreads a depth image from the device", "response": "def _read_depth_image(self):\n \"\"\" Reads a depth image from the device \"\"\"\n # read raw uint16 buffer\n im_arr = self._depth_stream.read_frame()\n raw_buf = im_arr.get_buffer_as_uint16()\n buf_array = np.array([raw_buf[i] for i in range(PrimesenseSensor.DEPTH_IM_WIDTH * PrimesenseSensor.DEPTH_IM_HEIGHT)])\n\n # convert to image in meters\n depth_image = buf_array.reshape(PrimesenseSensor.DEPTH_IM_HEIGHT,\n PrimesenseSensor.DEPTH_IM_WIDTH)\n depth_image = depth_image * MM_TO_METERS # convert to meters\n if self._flip_images:\n depth_image = np.flipud(depth_image)\n else:\n depth_image = 
np.fliplr(depth_image)\n return DepthImage(depth_image, frame=self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _read_color_image(self):\n # read raw buffer\n im_arr = self._color_stream.read_frame()\n raw_buf = im_arr.get_buffer_as_triplet()\n r_array = np.array([raw_buf[i][0] for i in range(PrimesenseSensor.COLOR_IM_WIDTH * PrimesenseSensor.COLOR_IM_HEIGHT)]) \n g_array = np.array([raw_buf[i][1] for i in range(PrimesenseSensor.COLOR_IM_WIDTH * PrimesenseSensor.COLOR_IM_HEIGHT)]) \n b_array = np.array([raw_buf[i][2] for i in range(PrimesenseSensor.COLOR_IM_WIDTH * PrimesenseSensor.COLOR_IM_HEIGHT)]) \n\n # convert to uint8 image\n color_image = np.zeros([PrimesenseSensor.COLOR_IM_HEIGHT, PrimesenseSensor.COLOR_IM_WIDTH, 3])\n color_image[:,:,0] = r_array.reshape(PrimesenseSensor.COLOR_IM_HEIGHT,\n PrimesenseSensor.COLOR_IM_WIDTH)\n color_image[:,:,1] = g_array.reshape(PrimesenseSensor.COLOR_IM_HEIGHT,\n PrimesenseSensor.COLOR_IM_WIDTH)\n color_image[:,:,2] = b_array.reshape(PrimesenseSensor.COLOR_IM_HEIGHT,\n PrimesenseSensor.COLOR_IM_WIDTH)\n if self._flip_images:\n color_image = np.flipud(color_image.astype(np.uint8))\n else:\n color_image = np.fliplr(color_image.astype(np.uint8))\n return ColorImage(color_image, frame=self._frame)", "response": "Reads a color image from the device"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef frames(self):\n color_im = self._read_color_image()\n depth_im = self._read_depth_image()\n return color_im, depth_im, None", "response": "Retrieve a new frame from the Kinect and convert it to a ColorImage DepthImage and IrImage."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncollects a series of depth images and return the min of the set.", "response": "def min_depth_img(self, num_img=1):\n \"\"\"Collect a series of depth images and return the min of the set.\n\n Parameters\n 
----------\n num_img : int\n The number of consecutive frames to process.\n\n Returns\n -------\n :obj:`DepthImage`\n The min DepthImage collected from the frames.\n \"\"\"\n depths = []\n\n for _ in range(num_img):\n _, depth, _ = self.frames()\n depths.append(depth)\n\n return Image.min_images(depths)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _ros_read_images(self, stream_buffer, number, staleness_limit = 10.):\n \n rospy.wait_for_service(stream_buffer, timeout = self.timeout)\n ros_image_buffer = rospy.ServiceProxy(stream_buffer, ImageBuffer)\n ret = ros_image_buffer(number, 1)\n if not staleness_limit == None:\n if ret.timestamps[-1] > staleness_limit:\n raise RuntimeError(\"Got data {0} seconds old, more than allowed {1} seconds\"\n .format(ret.timestamps[-1], staleness_limit))\n \n data = ret.data.reshape(ret.data_dim1, ret.data_dim2, ret.data_dim3).astype(ret.dtype)\n \n # Special handling for 1 element, since dstack's behavior is different\n if number == 1:\n return [data]\n return np.dsplit(data, number)", "response": "Reads images from a stream buffer."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread the depth images from the device", "response": "def _read_depth_images(self, num_images):\n \"\"\" Reads depth images from the device \"\"\"\n depth_images = self._ros_read_images(self._depth_image_buffer, num_images, self.staleness_limit)\n for i in range(0, num_images):\n depth_images[i] = depth_images[i] * MM_TO_METERS # convert to meters\n if self._flip_images:\n depth_images[i] = np.flipud(depth_images[i])\n depth_images[i] = np.fliplr(depth_images[i])\n depth_images[i] = DepthImage(depth_images[i], frame=self._frame) \n return depth_images"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _read_color_images(self, num_images):\n color_images = 
self._ros_read_images(self._color_image_buffer, num_images, self.staleness_limit)\n for i in range(0, num_images):\n if self._flip_images:\n color_images[i] = np.flipud(color_images[i].astype(np.uint8))\n color_images[i] = np.fliplr(color_images[i].astype(np.uint8))\n color_images[i] = ColorImage(color_images[i], frame=self._frame) \n return color_images", "response": "Reads color images from the device"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef median_depth_img(self, num_img=1, fill_depth=0.0):\n depths = self._read_depth_images(num_img)\n\n median_depth = Image.median_images(depths)\n median_depth.data[median_depth.data == 0.0] = fill_depth\n return median_depth", "response": "Collect a series of depth images and return the median of the set."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncollects a series of depth images and return the min of the set.", "response": "def min_depth_img(self, num_img=1):\n \"\"\"Collect a series of depth images and return the min of the set.\n\n Parameters\n ----------\n num_img : int\n The number of consecutive frames to process.\n\n Returns\n -------\n :obj:`DepthImage`\n The min DepthImage collected from the frames.\n \"\"\"\n depths = self._read_depth_images(num_img)\n\n return Image.min_images(depths)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef T_obj_camera(self):\n if self.stable_pose is None:\n T_obj_world = RigidTransform(from_frame='obj', to_frame='world')\n else:\n T_obj_world = self.stable_pose.T_obj_table.as_frames('obj', 'world')\n T_camera_obj = T_obj_world.inverse() * self.T_camera_world\n return T_camera_obj", "response": "Returns the transformation from camera to object when the object is in the given stable pose."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
image(self, render_mode):\n if render_mode == RenderMode.COLOR:\n return self.color_im\n elif render_mode == RenderMode.DEPTH:\n return self.depth_im\n elif render_mode == RenderMode.SEGMASK:\n return self.binary_im\n else:\n return None", "response": "Returns an image generated with a particular render mode."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nregisters a camera to a chessboard.", "response": "def register(sensor, config):\n \"\"\"\n Registers a camera to a chessboard.\n\n Parameters\n ----------\n sensor : :obj:`perception.RgbdSensor`\n the sensor to register\n config : :obj:`autolab_core.YamlConfig` or :obj:`dict`\n configuration file for registration\n\n Returns\n -------\n :obj:`ChessboardRegistrationResult`\n the result of registration\n\n Notes\n -----\n The config must have the parameters specified in the Other Parameters section.\n\n Other Parameters\n ----------------\n num_transform_avg : int\n the number of independent registrations to average together\n num_images : int\n the number of images to read for each independent registration\n corners_x : int\n the number of chessboard corners in the x-direction\n corners_y : int\n the number of chessboard corners in the y-direction\n color_image_rescale_factor : float\n amount to rescale the color image for detection (numbers around 4-8 are useful)\n vis : bool\n whether or not to visualize the registration\n \"\"\"\n # read config\n num_transform_avg = config['num_transform_avg']\n num_images = config['num_images']\n sx = config['corners_x']\n sy = config['corners_y']\n point_order = config['point_order']\n color_image_rescale_factor = config['color_image_rescale_factor']\n flip_normal = config['flip_normal']\n y_points_left = False\n if 'y_points_left' in config.keys() and sx == sy:\n y_points_left = config['y_points_left']\n num_images = 1\n vis = config['vis']\n\n # read params from sensor\n logging.info('Registering camera %s' %(sensor.frame))\n 
ir_intrinsics = sensor.ir_intrinsics\n\n # repeat registration multiple times and average results\n R = np.zeros([3,3])\n t = np.zeros([3,1])\n points_3d_plane = PointCloud(np.zeros([3, sx*sy]), frame=sensor.ir_frame)\n\n k = 0\n while k < num_transform_avg:\n # average a bunch of depth images together\n depth_ims = None\n for i in range(num_images):\n start = time.time()\n small_color_im, new_depth_im, _ = sensor.frames()\n end = time.time()\n logging.info('Frames Runtime: %.3f' %(end-start))\n if depth_ims is None:\n depth_ims = np.zeros([new_depth_im.height,\n new_depth_im.width,\n num_images])\n depth_ims[:,:,i] = new_depth_im.data\n\n med_depth_im = np.median(depth_ims, axis=2)\n depth_im = DepthImage(med_depth_im, sensor.ir_frame)\n\n # find the corner pixels in an upsampled version of the color image\n big_color_im = small_color_im.resize(color_image_rescale_factor)\n corner_px = big_color_im.find_chessboard(sx=sx, sy=sy)\n\n if vis:\n plt.figure()\n plt.imshow(big_color_im.data)\n for i in range(sx):\n plt.scatter(corner_px[i,0], corner_px[i,1], s=25, c='b')\n plt.show()\n\n if corner_px is None:\n logging.error('No chessboard detected! Check camera exposure settings')\n continue\n\n # convert back to original image\n small_corner_px = corner_px / color_image_rescale_factor\n \n if vis:\n plt.figure()\n plt.imshow(small_color_im.data)\n for i in range(sx):\n plt.scatter(small_corner_px[i,0], small_corner_px[i,1], s=25, c='b')\n plt.axis('off')\n plt.show()\n\n # project points into 3D\n camera_intr = sensor.ir_intrinsics\n points_3d = camera_intr.deproject(depth_im)\n\n # get round chessboard ind\n corner_px_round = np.round(small_corner_px).astype(np.uint16)\n corner_ind = depth_im.ij_to_linear(corner_px_round[:,0], corner_px_round[:,1])\n if corner_ind.shape[0] != sx*sy:\n logging.warning('Did not find all corners. 
Discarding...')\n continue\n\n # average 3d points\n points_3d_plane = (k * points_3d_plane + points_3d[corner_ind]) / (k + 1)\n logging.info('Registration iteration %d of %d' %(k+1, config['num_transform_avg']))\n k += 1\n\n # fit a plane to the chessboard corners\n X = np.c_[points_3d_plane.x_coords, points_3d_plane.y_coords, np.ones(points_3d_plane.num_points)]\n y = points_3d_plane.z_coords\n A = X.T.dot(X)\n b = X.T.dot(y)\n w = np.linalg.inv(A).dot(b)\n n = np.array([w[0], w[1], -1])\n n = n / np.linalg.norm(n)\n if flip_normal:\n n = -n\n mean_point_plane = points_3d_plane.mean()\n \n # find x-axis of the chessboard coordinates on the fitted plane\n T_camera_table = RigidTransform(translation = -points_3d_plane.mean().data,\n from_frame=points_3d_plane.frame,\n to_frame='table')\n points_3d_centered = T_camera_table * points_3d_plane\n\n # get points along y\n if point_order == 'row_major':\n coord_pos_x = int(math.floor(sx*sy/2.0))\n coord_neg_x = int(math.ceil(sx*sy/2.0))\n \n points_pos_x = points_3d_centered[coord_pos_x:]\n points_neg_x = points_3d_centered[:coord_neg_x]\n x_axis = np.mean(points_pos_x.data, axis=1) - np.mean(points_neg_x.data, axis=1)\n x_axis = x_axis - np.vdot(x_axis, n)*n\n x_axis = x_axis / np.linalg.norm(x_axis)\n y_axis = np.cross(n, x_axis)\n else:\n coord_pos_y = int(math.floor(sx*(sy-1)/2.0))\n coord_neg_y = int(math.ceil(sx*(sy+1)/2.0))\n points_pos_y = points_3d_centered[:coord_pos_y]\n points_neg_y = points_3d_centered[coord_neg_y:]\n y_axis = np.mean(points_pos_y.data, axis=1) - np.mean(points_neg_y.data, axis=1)\n y_axis = y_axis - np.vdot(y_axis, n)*n\n y_axis = y_axis / np.linalg.norm(y_axis)\n x_axis = np.cross(-n, y_axis)\n\n # produce translation and rotation from plane center and chessboard basis\n rotation_cb_camera = RigidTransform.rotation_from_axes(x_axis, y_axis, n)\n translation_cb_camera = mean_point_plane.data\n T_cb_camera = RigidTransform(rotation=rotation_cb_camera,\n translation=translation_cb_camera,\n 
from_frame='cb',\n to_frame=sensor.frame)\n \n if y_points_left and np.abs(T_cb_camera.y_axis[1]) > 0.1:\n if T_cb_camera.x_axis[0] > 0:\n T_cb_camera.rotation = T_cb_camera.rotation.dot(RigidTransform.z_axis_rotation(-np.pi/2).T)\n else:\n T_cb_camera.rotation = T_cb_camera.rotation.dot(RigidTransform.z_axis_rotation(np.pi/2).T)\n T_camera_cb = T_cb_camera.inverse()\n \n # optionally display cb corners with detected pose in 3d space\n if config['debug']:\n # display image with axes overlayed\n cb_center_im = camera_intr.project(Point(T_cb_camera.translation, frame=sensor.ir_frame))\n cb_x_im = camera_intr.project(Point(T_cb_camera.translation + T_cb_camera.x_axis * config['scale_amt'], frame=sensor.ir_frame))\n cb_y_im = camera_intr.project(Point(T_cb_camera.translation + T_cb_camera.y_axis * config['scale_amt'], frame=sensor.ir_frame))\n cb_z_im = camera_intr.project(Point(T_cb_camera.translation + T_cb_camera.z_axis * config['scale_amt'], frame=sensor.ir_frame))\n x_line = np.array([cb_center_im.data, cb_x_im.data])\n y_line = np.array([cb_center_im.data, cb_y_im.data])\n z_line = np.array([cb_center_im.data, cb_z_im.data])\n\n plt.figure()\n plt.imshow(small_color_im.data)\n plt.scatter(cb_center_im.data[0], cb_center_im.data[1])\n plt.plot(x_line[:,0], x_line[:,1], c='r', linewidth=3)\n plt.plot(y_line[:,0], y_line[:,1], c='g', linewidth=3)\n plt.plot(z_line[:,0], z_line[:,1], c='b', linewidth=3)\n plt.axis('off')\n plt.title('Chessboard frame in camera %s' %(sensor.frame))\n plt.show()\n\n return ChessboardRegistrationResult(T_camera_cb, points_3d_plane)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a new feature to the bag.", "response": "def add(self, feature):\n \"\"\" Add a new feature to the bag.\n\n Parameters\n ----------\n feature : :obj:`Feature`\n feature to add\n \"\"\"\n self.features_.append(feature)\n self.num_features_ = len(self.features_)"} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 function that can\nadd a list of features to the bag.", "response": "def extend(self, features):\n \"\"\" Add a list of features to the bag.\n\n Parameters\n ----------\n feature : :obj:`list` of :obj:`Feature`\n features to add\n \"\"\"\n self.features_.extend(features)\n self.num_features_ = len(self.features_)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a feature in the list at the specified index.", "response": "def feature(self, index):\n \"\"\" Returns a feature.\n\n Parameters\n ----------\n index : int\n index of feature in list\n\n Returns\n -------\n :obj:`Feature`\n \"\"\"\n if index < 0 or index >= self.num_features_:\n raise ValueError('Index %d out of range' %(index))\n return self.features_[index]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn some subset of the features.", "response": "def feature_subset(self, indices):\n \"\"\" Returns some subset of the features.\n \n Parameters\n ----------\n indices : :obj:`list` of :obj:`int`\n indices of the features in the list\n\n Returns\n -------\n :obj:`list` of :obj:`Feature`\n \"\"\"\n if isinstance(indices, np.ndarray):\n indices = indices.tolist()\n if not isinstance(indices, list):\n raise ValueError('Can only index with lists')\n return [self.features_[i] for i in indices]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef total_weight(self):\n weights = self._raw_weights()\n if weights.shape[1] == 0:\n return 0.0\n elif weights.shape[1] < self._ntaps:\n return np.sum(np.mean(weights, axis=1))\n else:\n return self._filter_coeffs.dot(np.sum(weights, axis=0))", "response": "Read a weight from the sensor in grams."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreads individual weights from the load cells in grams.", "response": "def individual_weights(self):\n \"\"\"Read individual weights from the 
load cells in grams.\n\n Returns\n -------\n weight : float\n The sensor weight in grams.\n \"\"\"\n weights = self._raw_weights()\n if weights.shape[1] == 0:\n return np.zeros(weights.shape[0])\n elif weights.shape[1] < self._ntaps:\n return np.mean(weights, axis=1)\n else:\n return weights.dot(self._filter_coeffs)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a numpy array containing the raw sensor weights.", "response": "def _raw_weights(self):\n \"\"\"Create a numpy array containing the raw sensor weights.\n \"\"\"\n if self._debug:\n return np.array([[],[],[],[]])\n\n if not self._running:\n raise ValueError('Weight sensor is not running!')\n if len(self._weight_buffers) == 0:\n time.sleep(0.3)\n if len(self._weight_buffers) == 0:\n raise ValueError('Weight sensor is not retrieving data!')\n weights = np.array(self._weight_buffers)\n return weights"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _weights_callback(self, msg):\n # Read weights\n weights = np.array(msg.data)\n\n # If needed, initialize indiv_weight_buffers\n if len(self._weight_buffers) == 0:\n self._weight_buffers = [[] for i in range(len(weights))]\n\n # Record individual weights\n for i, w in enumerate(weights):\n if len(self._weight_buffers[i]) == self._ntaps:\n self._weight_buffers[i].pop(0)\n self._weight_buffers[i].append(w)", "response": "Callback for recording weights from sensor."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef vec(self):\n return np.r_[self.fx, self.fy, self.cx, self.cy, self.skew, self.height, self.width]", "response": "Vector representation of this camera."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the ROS CameraInfo message for the current camera intrinsics.", "response": "def rosmsg(self):\n \"\"\":obj:`sensor_msgs.CameraInfo` : Returns ROS CameraInfo msg 
\n \"\"\"\n from sensor_msgs.msg import CameraInfo, RegionOfInterest\n from std_msgs.msg import Header\n\n msg_header = Header()\n msg_header.frame_id = self._frame\n\n msg_roi = RegionOfInterest()\n msg_roi.x_offset = 0\n msg_roi.y_offset = 0\n msg_roi.height = 0\n msg_roi.width = 0\n msg_roi.do_rectify = 0\n\n msg = CameraInfo()\n msg.header = msg_header\n msg.height = self._height\n msg.width = self._width\n msg.distortion_model = 'plumb_bob'\n msg.D = [0.0, 0.0, 0.0, 0.0, 0.0]\n msg.K = [self._fx, 0.0, self._cx, 0.0, self._fy, self._cy, 0.0, 0.0, 1.0]\n msg.R = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]\n msg.P = [self._fx, 0.0, self._cx, 0.0, 0.0, self._fx, self._cy, 0.0, 0.0, 0.0, 1.0, 0.0]\n msg.binning_x = 0\n msg.binning_y = 0\n msg.roi = msg_roi\n\n return msg"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef crop(self, height, width, crop_ci, crop_cj):\n cx = self.cx + float(width-1)/2 - crop_cj\n cy = self.cy + float(height-1)/2 - crop_ci\n cropped_intrinsics = CameraIntrinsics(frame=self.frame,\n fx=self.fx,\n fy=self.fy,\n skew=self.skew,\n cx=cx, cy=cy,\n height=height,\n width=width)\n return cropped_intrinsics", "response": "Convert to new camera intrinsics for crop of image from original camera."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert to new camera intrinsics with parameters for resized image.", "response": "def resize(self, scale):\n \"\"\" Convert to new camera intrinsics with parameters for resized image.\n \n Parameters\n ----------\n scale : float\n the amount to rescale the intrinsics\n \n Returns\n -------\n :obj:`CameraIntrinsics`\n camera intrinsics for resized image \n \"\"\"\n center_x = float(self.width-1) / 2\n center_y = float(self.height-1) / 2\n orig_cx_diff = self.cx - center_x\n orig_cy_diff = self.cy - center_y\n height = scale * self.height\n width = scale * self.width\n scaled_center_x = float(width-1) / 2\n 
scaled_center_y = float(height-1) / 2\n fx = scale * self.fx\n fy = scale * self.fy\n skew = scale * self.skew\n cx = scaled_center_x + scale * orig_cx_diff\n cy = scaled_center_y + scale * orig_cy_diff\n scaled_intrinsics = CameraIntrinsics(frame=self.frame,\n fx=fx, fy=fy, skew=skew, cx=cx, cy=cy,\n height=height, width=width)\n return scaled_intrinsics"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef project(self, point_cloud, round_px=True):\n if not isinstance(point_cloud, PointCloud) and not (isinstance(point_cloud, Point) and point_cloud.dim == 3):\n raise ValueError('Must provide PointCloud or 3D Point object for projection')\n if point_cloud.frame != self._frame:\n raise ValueError('Cannot project points in frame %s into camera with frame %s' %(point_cloud.frame, self._frame))\n\n points_proj = self._K.dot(point_cloud.data)\n if len(points_proj.shape) == 1:\n points_proj = points_proj[:, np.newaxis]\n point_depths = np.tile(points_proj[2,:], [3, 1])\n points_proj = np.divide(points_proj, point_depths)\n if round_px:\n points_proj = np.round(points_proj)\n\n if isinstance(point_cloud, Point):\n return Point(data=points_proj[:2,:].astype(np.int16), frame=self._frame)\n return ImageCoords(data=points_proj[:2,:].astype(np.int16), frame=self._frame)", "response": "Projects a point cloud onto the camera image plane."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef deproject_to_image(self, depth_image):\n point_cloud = self.deproject(depth_image)\n point_cloud_im_data = point_cloud.data.T.reshape(depth_image.height, depth_image.width, 3)\n return PointCloudImage(data=point_cloud_im_data,\n frame=self._frame)", "response": "Deprojects a DepthImage into a PointCloudImage."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nloading a CameraIntrinsics object from a file.", "response": "def load(filename):\n \"\"\"Load a 
CameraIntrinsics object from a file.\n\n Parameters\n ----------\n filename : :obj:`str`\n The .intr file to load the object from.\n\n Returns\n -------\n :obj:`CameraIntrinsics`\n The CameraIntrinsics object loaded from the file.\n\n Raises\n ------\n ValueError\n If filename does not have the .intr extension.\n \"\"\"\n file_root, file_ext = os.path.splitext(filename)\n if file_ext.lower() != INTR_EXTENSION:\n raise ValueError('Extension %s not supported for CameraIntrinsics. Must be stored with extension %s' %(file_ext, INTR_EXTENSION))\n\n f = open(filename, 'r')\n ci = json.load(f)\n f.close()\n return CameraIntrinsics(frame=ci['_frame'],\n fx=ci['_fx'],\n fy=ci['_fy'],\n cx=ci['_cx'],\n cy=ci['_cy'],\n skew=ci['_skew'],\n height=ci['_height'],\n width=ci['_width'])"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef finetune_classification_cnn(config):\n # read params\n dataset = config['dataset']\n x_names = config['x_names']\n y_name = config['y_name']\n model_dir = config['model_dir']\n debug = config['debug']\n\n num_classes = None\n if 'num_classes' in config.keys():\n num_classes = config['num_classes']\n\n batch_size = config['training']['batch_size']\n train_pct = config['training']['train_pct']\n model_save_period = config['training']['model_save_period']\n\n data_aug_config = config['data_augmentation']\n preproc_config = config['preprocessing']\n iterator_config = config['data_iteration']\n model_config = config['model']\n base_model_config = model_config['base']\n optimization_config = config['optimization']\n train_config = config['training']\n\n generator_image_shape = None\n if 'image_shape' in data_aug_config.keys():\n generator_image_shape = data_aug_config['image_shape']\n optimizer_name = optimization_config['optimizer']\n\n model_params = {}\n if 'params' in model_config.keys():\n model_params = model_config['params']\n \n base_model_params = {}\n if 'params' in base_model_config.keys():\n 
base_model_params = base_model_config['params']\n\n if debug:\n seed = 108\n random.seed(seed)\n np.random.seed(seed)\n\n # generate model dir\n if not os.path.exists(model_dir):\n os.mkdir(model_dir)\n model_id = utils.gen_experiment_id()\n model_dir = os.path.join(model_dir, 'model_%s' %(model_id))\n if not os.path.exists(model_dir):\n os.mkdir(model_dir)\n logging.info('Saving model to %s' %(model_dir))\n latest_model_filename = os.path.join(model_dir, 'weights_{epoch:05d}.h5')\n best_model_filename = os.path.join(model_dir, 'weights.h5')\n\n # save config\n training_config_filename = os.path.join(model_dir, 'training_config.yaml')\n config.save(training_config_filename)\n\n # open dataset\n dataset = TensorDataset.open(dataset)\n \n # split dataset\n indices_filename = os.path.join(model_dir, 'splits.npz')\n if os.path.exists(indices_filename):\n indices = np.load(indices_filename)['arr_0'].tolist()\n train_indices = indices['train']\n val_indices = indices['val']\n else:\n train_indices, val_indices = dataset.split(train_pct)\n indices = np.array({'train':train_indices,\n 'val':val_indices})\n np.savez_compressed(indices_filename, indices)\n num_train = train_indices.shape[0]\n num_val = val_indices.shape[0]\n val_steps = int(np.ceil(float(num_val) / batch_size))\n\n # init generator\n train_generator_filename = os.path.join(model_dir, 'train_preprocessor.pkl')\n val_generator_filename = os.path.join(model_dir, 'val_preprocessor.pkl')\n if os.path.exists(train_generator_filename):\n logging.info('Loading generators')\n train_generator = pkl.load(open(train_generator_filename, 'rb'))\n val_generator = pkl.load(open(val_generator_filename, 'rb'))\n else:\n logging.info('Fitting generator')\n train_generator = TensorDataGenerator(num_classes=num_classes,\n **data_aug_config)\n val_generator = TensorDataGenerator(featurewise_center=data_aug_config['featurewise_center'],\n featurewise_std_normalization=data_aug_config['featurewise_std_normalization'],\n 
image_shape=generator_image_shape,\n num_classes=num_classes)\n fit_start = time.time()\n train_generator.fit(dataset, x_names, y_name, indices=train_indices, **preproc_config)\n val_generator.mean = train_generator.mean\n val_generator.std = train_generator.std\n val_generator.min_output = train_generator.min_output\n val_generator.max_output = train_generator.max_output\n val_generator.num_classes = train_generator.num_classes\n fit_stop = time.time()\n logging.info('Generator fit took %.3f sec' %(fit_stop - fit_start))\n pkl.dump(train_generator, open(train_generator_filename, 'wb'))\n pkl.dump(val_generator, open(val_generator_filename, 'wb'))\n \n if num_classes is None:\n num_classes = int(train_generator.num_classes)\n\n # init iterator\n train_iterator = train_generator.flow_from_dataset(dataset, x_names, y_name,\n indices=train_indices,\n batch_size=batch_size,\n **iterator_config)\n val_iterator = val_generator.flow_from_dataset(dataset, x_names, y_name,\n indices=val_indices,\n batch_size=batch_size,\n **iterator_config)\n\n # setup model\n base_cnn = ClassificationCNN.open(base_model_config['model'],\n base_model_config['type'],\n input_name=x_names[0],\n **base_model_params)\n cnn = FinetunedClassificationCNN(base_cnn=base_cnn,\n name='dexresnet',\n num_classes=num_classes,\n output_name=y_name,\n im_preprocessor=val_generator,\n **model_params)\n\n # setup training\n cnn.freeze_base_cnn()\n if optimizer_name == 'sgd':\n optimizer = SGD(lr=optimization_config['lr'],\n momentum=optimization_config['momentum'])\n elif optimizer_name == 'adam':\n optimizer = Adam(lr=optimization_config['lr'])\n else:\n raise ValueError('Optimizer %s not supported!' 
%(optimizer_name))\n model = cnn.model\n model.compile(optimizer=optimizer,\n loss=optimization_config['loss'],\n metrics=optimization_config['metrics'])\n\n # train\n steps_per_epoch = int(np.ceil(float(num_train) / batch_size))\n latest_model_ckpt = ModelCheckpoint(latest_model_filename, period=model_save_period)\n best_model_ckpt = ModelCheckpoint(best_model_filename,\n save_best_only=True,\n period=model_save_period)\n train_history_cb = TrainHistory(model_dir)\n callbacks = [latest_model_ckpt, best_model_ckpt, train_history_cb]\n history = model.fit_generator(train_iterator,\n steps_per_epoch=steps_per_epoch,\n epochs=train_config['epochs'],\n callbacks=callbacks,\n validation_data=val_iterator,\n validation_steps=val_steps,\n class_weight=train_config['class_weight'],\n use_multiprocessing=train_config['use_multiprocessing'])\n\n # save model\n cnn.save(model_dir)\n \n # save history\n history_filename = os.path.join(model_dir, 'history.pkl')\n pkl.dump(history.history, open(history_filename, 'wb'))", "response": "Finetunes a classification CNN on a dataset according to the given config."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nloads a set of color images depth images and camera intrinsics from a Kinect sensor.", "response": "def load_images(cfg):\n \"\"\"Helper function for loading a set of color images, depth images, and IR\n camera intrinsics.\n\n The config dictionary must have these keys:\n - prestored_data -- If 1, use the virtual sensor, else use a real sensor.\n - prestored_data_dir -- A path to the prestored data dir for a virtual sensor.\n - sensor/frame -- The frame of reference for the sensor.\n - sensor/device_num -- The device number for the real Kinect.\n - sensor/pipeline_mode -- The mode for the real Kinect's packet pipeline.\n - num_images -- The number of images to generate.\n\n Parameters\n ----------\n cfg : :obj:`dict`\n A config dictionary.\n\n Returns\n -------\n :obj:`tuple` of :obj:`list` of :obj:`ColorImage`, 
:obj:`list` of :obj:`DepthImage`, :obj:`CameraIntrinsics`\n A set of ColorImages and DepthImages, and the Kinect's CameraIntrinsics\n for its IR sensor.\n \"\"\"\n if 'prestored_data' in cfg.keys() and cfg['prestored_data'] == 1:\n sensor = VirtualKinect2Sensor(path_to_images=cfg['prestored_data_dir'], frame=cfg['sensor']['frame'])\n else:\n sensor = Kinect2Sensor(device_num=cfg['sensor']['device_num'], frame=cfg['sensor']['frame'],\n packet_pipeline_mode=cfg['sensor']['pipeline_mode'])\n sensor.start()\n ir_intrinsics = sensor.ir_intrinsics\n\n # get raw images\n colors = []\n depths = []\n\n for _ in range(cfg['num_images']):\n color, depth, _ = sensor.frames()\n colors.append(color)\n depths.append(depth)\n\n sensor.stop()\n\n return colors, depths, ir_intrinsics"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef color_intrinsics(self):\n if self._device is None:\n raise RuntimeError('Kinect2 device %s not running. Cannot return color intrinsics' %(self._device_num))\n camera_params = self._device.getColorCameraParams()\n return CameraIntrinsics(self._color_frame, camera_params.fx, camera_params.fy,\n camera_params.cx, camera_params.cy)", "response": "Return the camera intrinsics for the Kinect's color camera."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef ir_intrinsics(self):\n if self._device is None:\n raise RuntimeError('Kinect2 device %s not running. 
Cannot return IR intrinsics' %(self._device_num))\n camera_params = self._device.getIrCameraParams()\n return CameraIntrinsics(self._ir_frame, camera_params.fx, camera_params.fy,\n camera_params.cx, camera_params.cy,\n height=Kinect2Sensor.DEPTH_IM_HEIGHT,\n width=Kinect2Sensor.DEPTH_IM_WIDTH)", "response": "Return the camera intrinsics for the Kinect's IR camera."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef start(self):\n # open packet pipeline\n if self._packet_pipeline_mode == Kinect2PacketPipelineMode.OPENGL:\n self._pipeline = lf2.OpenGLPacketPipeline()\n elif self._packet_pipeline_mode == Kinect2PacketPipelineMode.CPU:\n self._pipeline = lf2.CpuPacketPipeline()\n\n # setup logger\n self._logger = lf2.createConsoleLogger(lf2.LoggerLevel.Warning)\n lf2.setGlobalLogger(self._logger)\n\n # check devices\n self._fn_handle = lf2.Freenect2()\n self._num_devices = self._fn_handle.enumerateDevices()\n if self._num_devices == 0:\n raise IOError('Failed to start stream. No Kinect2 devices available!')\n if self._num_devices <= self._device_num:\n raise IOError('Failed to start stream. Device num %d unavailable!' 
%(self._device_num))\n\n # open device\n self._serial = self._fn_handle.getDeviceSerialNumber(self._device_num)\n self._device = self._fn_handle.openDevice(self._serial, pipeline=self._pipeline)\n\n # add device sync modes\n self._listener = lf2.SyncMultiFrameListener(\n lf2.FrameType.Color | lf2.FrameType.Ir | lf2.FrameType.Depth)\n self._device.setColorFrameListener(self._listener)\n self._device.setIrAndDepthFrameListener(self._listener)\n\n # start device\n self._device.start()\n\n # open registration\n self._registration = None\n if self._registration_mode == Kinect2RegistrationMode.COLOR_TO_DEPTH:\n logging.debug('Using color to depth registration')\n self._registration = lf2.Registration(self._device.getIrCameraParams(),\n self._device.getColorCameraParams())\n self._running = True", "response": "Starts the Kinect v2 sensor stream."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef stop(self):\n # check that everything is running\n if not self._running or self._device is None:\n logging.warning('Kinect2 device %d not running. Aborting stop' %(self._device_num))\n return False\n\n # stop the device\n self._device.stop()\n self._device.close()\n self._device = None\n self._running = False\n return True", "response": "Stops the Kinect2 sensor stream. 
Returns True if the device was stopped successfully, False if it was not running."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving a new frame from the Kinect and converting it to a ColorImage, a DepthImage, and an IrImage.", "response": "def frames(self, skip_registration=False):\n \"\"\"Retrieve a new frame from the Kinect and convert it to a ColorImage,\n a DepthImage, and an IrImage.\n\n Parameters\n ----------\n skip_registration : bool\n If True, the registration step is skipped.\n\n Returns\n -------\n :obj:`tuple` of :obj:`ColorImage`, :obj:`DepthImage`, :obj:`IrImage`, :obj:`numpy.ndarray`\n The ColorImage, DepthImage, and IrImage of the current frame.\n\n Raises\n ------\n RuntimeError\n If the Kinect stream is not running.\n \"\"\"\n color_im, depth_im, ir_im, _ = self._frames_and_index_map(skip_registration=skip_registration)\n return color_im, depth_im, ir_im"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _frames_and_index_map(self, skip_registration=False):\n if not self._running:\n raise RuntimeError('Kinect2 device %s not running. 
Cannot read frames' %(self._device_num))\n\n # read frames\n frames = self._listener.waitForNewFrame()\n unregistered_color = frames['color']\n distorted_depth = frames['depth']\n ir = frames['ir']\n\n # apply color to depth registration\n color_frame = self._color_frame\n color = unregistered_color\n depth = distorted_depth\n color_depth_map = np.zeros([depth.height, depth.width]).astype(np.int32).ravel()\n if not skip_registration and self._registration_mode == Kinect2RegistrationMode.COLOR_TO_DEPTH:\n color_frame = self._ir_frame\n depth = lf2.Frame(depth.width, depth.height, 4, lf2.FrameType.Depth)\n color = lf2.Frame(depth.width, depth.height, 4, lf2.FrameType.Color)\n self._registration.apply(unregistered_color, distorted_depth, depth, color, color_depth_map=color_depth_map)\n\n # convert to array (copy needed to prevent reference to deleted data)\n color_arr = copy.copy(color.asarray())\n color_arr[:,:,[0,2]] = color_arr[:,:,[2,0]] # convert BGR to RGB\n color_arr[:,:,0] = np.fliplr(color_arr[:,:,0])\n color_arr[:,:,1] = np.fliplr(color_arr[:,:,1])\n color_arr[:,:,2] = np.fliplr(color_arr[:,:,2])\n color_arr[:,:,3] = np.fliplr(color_arr[:,:,3])\n depth_arr = np.fliplr(copy.copy(depth.asarray()))\n ir_arr = np.fliplr(copy.copy(ir.asarray()))\n\n # convert to meters\n if self._depth_mode == Kinect2DepthMode.METERS:\n depth_arr = depth_arr * MM_TO_METERS\n\n # Release and return\n self._listener.release(frames)\n return (ColorImage(color_arr[:,:,:3], color_frame),\n DepthImage(depth_arr, self._ir_frame),\n IrImage(ir_arr.astype(np.uint16), self._ir_frame),\n color_depth_map)", "response": "Retrieve a new frame from the Kinect and return a ColorImage, DepthImage, IrImage, and a map from depth pixels to color pixel indices."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _process_image_msg(self, msg):\n encoding = msg.encoding\n try:\n image = self._bridge.imgmsg_to_cv2(msg, encoding)\n except 
CvBridgeError as e:\n rospy.logerr(e)\n return image", "response": "Process an image message and return a numpy array with the image data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _color_image_callback(self, image_msg):\n color_arr = self._process_image_msg(image_msg)\n self._cur_color_im = ColorImage(color_arr[:,:,::-1], self._frame)", "response": "Process a color image message and store it in self._cur_color_im"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsubscribe to depth image topic and keep it up to date", "response": "def _depth_image_callback(self, image_msg):\n \"\"\" subscribe to depth image topic and keep it up to date\n \"\"\"\n encoding = image_msg.encoding\n try:\n depth_arr = self._bridge.imgmsg_to_cv2(image_msg, encoding)\n except CvBridgeError as e:\n rospy.logerr(e)\n depth = np.array(depth_arr*MM_TO_METERS, np.float32)\n self._cur_depth_im = DepthImage(depth, self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef stop(self):\n # check that everything is running\n if not self._running:\n logging.warning('Kinect not running. 
Aborting stop')\n return False\n\n # stop subs\n self._image_sub.unregister()\n self._depth_sub.unregister()\n self._camera_info_sub.unregister()\n\n self._running = False\n return True", "response": "Stop the Kinect sensor"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nretrieves a new frame from the Ensenso and convert it to a ColorImage DepthImage IrImage", "response": "def frames(self):\n \"\"\"Retrieve a new frame from the Ensenso and convert it to a ColorImage\n and a DepthImage; the IrImage is always None for this sensor type.\n\n Returns\n -------\n :obj:`tuple` of :obj:`ColorImage`, :obj:`DepthImage`, :obj:`IrImage`, :obj:`numpy.ndarray`\n The ColorImage, DepthImage, and IrImage of the current frame.\n\n Raises\n ------\n RuntimeError\n If the Kinect stream is not running.\n \"\"\"\n # wait for a new image\n while self._cur_depth_im is None or self._cur_color_im is None:\n time.sleep(0.01)\n \n # read next image\n depth_im = self._cur_depth_im\n color_im = self._cur_color_im\n\n self._cur_color_im = None\n self._cur_depth_im = None\n\n #TODO add ir image\n return color_im, depth_im, None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef frames(self):\n if not self._running:\n raise RuntimeError('VirtualKinect2 device pointing to %s not running. 
Cannot read frames' %(self._path_to_images))\n\n if self._im_index > self._num_images:\n raise RuntimeError('VirtualKinect2 device is out of images')\n\n # read images\n color_filename = os.path.join(self._path_to_images, 'color_%d.png' %(self._im_index))\n color_im = ColorImage.open(color_filename, frame=self._frame)\n depth_filename = os.path.join(self._path_to_images, 'depth_%d.npy' %(self._im_index))\n depth_im = DepthImage.open(depth_filename, frame=self._frame)\n ir_filename = os.path.join(self._path_to_images, 'ir_%d.npy' %(self._im_index))\n ir_im = None\n if os.path.exists(ir_filename):\n ir_im = IrImage.open(ir_filename, frame=self._frame)\n self._im_index += 1\n return color_im, depth_im, ir_im", "response": "Retrieve the next frame from the image directory and convert it to a ColorImage DepthImage and IrImage."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a Kinect2 sensor of the specified type.", "response": "def sensor(sensor_type, cfg):\n \"\"\" Creates a Kinect2 sensor of the specified type.\n\n Parameters\n ----------\n sensor_type : :obj:`str`\n the type of the sensor (real or virtual)\n cfg : :obj:`YamlConfig`\n dictionary of parameters for sensor initialization\n \"\"\"\n sensor_type = sensor_type.lower()\n if sensor_type == 'real':\n s = Kinect2Sensor(packet_pipeline_mode=cfg['pipeline_mode'],\n device_num=cfg['device_num'],\n frame=cfg['frame'])\n elif sensor_type == 'virtual':\n s = VirtualKinect2Sensor(cfg['image_dir'],\n frame=cfg['frame'])\n elif sensor_type == 'bridged':\n s = KinectSensorBridged(quality=cfg['quality'], frame=cfg['frame'])\n else:\n raise ValueError('Kinect2 sensor type %s not supported' %(sensor_type))\n return s"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _connect(self, id_mask):\n # Get all devices attached as USB serial\n all_devices = glob.glob('/dev/ttyUSB*')\n\n # Identify which of the devices are 
LoadStar Serial Sensors\n sensors = []\n for device in all_devices:\n try:\n ser = serial.Serial(port=device,\n timeout=0.5,\n exclusive=True)\n ser.write('ID\\r')\n ser.flush()\n time.sleep(0.05)\n resp = ser.read(13)\n ser.close()\n\n if len(resp) >= 10 and resp[:len(id_mask)] == id_mask:\n sensors.append((device, resp.rstrip('\\r\\n')))\n except:\n continue\n sensors = sorted(sensors, key=lambda x : x[1])\n\n # Connect to each of the serial devices\n serials = []\n for device, key in sensors:\n ser = serial.Serial(port=device, timeout=0.5)\n serials.append(ser)\n rospy.loginfo('Connected to load cell {} at {}'.format(key, device))\n return serials", "response": "Connects to all of the load cells serially."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _flush(self):\n for ser in self._serials:\n ser.flush()\n ser.flushInput()\n ser.flushOutput()\n time.sleep(0.02)", "response": "Flushes all of the serial ports."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nzeroing out all load cells.", "response": "def _tare(self):\n \"\"\"Zeros out (tare) all of the load cells.\n \"\"\"\n for ser in self._serials:\n ser.write('TARE\\r')\n ser.flush()\n ser.flushInput()\n ser.flushOutput()\n time.sleep(0.02)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _read_weights(self):\n weights = []\n\n grams_per_pound = 453.592\n\n # Read from each of the sensors\n for ser in self._serials:\n ser.write('W\\r')\n ser.flush()\n time.sleep(0.02)\n for ser in self._serials:\n try:\n output_str = ser.readline()\n weight = float(output_str) * grams_per_pound\n weights.append(weight)\n except:\n weights.append(0.0)\n\n # Log the output\n log_output = ''\n for w in weights:\n log_output +='{:.2f} '.format(w)\n rospy.loginfo(log_output)\n\n return weights", "response": "Reads the weights from each of the load cells."} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef run(self):\n if not self.camera.is_running:\n self.camera.start()\n while True:\n if not self.cmd_q.empty():\n cmd = self.cmd_q.get()\n if cmd[0] == 'stop':\n self.out.close()\n self.recording = False\n elif cmd[0] == 'start':\n filename = cmd[1]\n self.out = si.FFmpegWriter(filename)\n self.recording = True\n self.count = 0\n\n if self.recording:\n if self.count == 0:\n image, _, _ = self.camera.frames()\n \n if self.data_buf is None:\n self.data_buf = np.zeros([1, image.height, image.width, image.channels])\n self.data_buf[0,...] = image.raw_data\n self.out.writeFrame(self.data_buf)\n\n self.count += 1\n if self.count == self.rate:\n self.count = 0", "response": "Continually write images to the filename specified by a command queue."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef start(self):\n self._started = True\n self._camera = _Camera(self._actual_camera, self._cmd_q, self._res, self._codec, self._fps, self._rate)\n self._camera.start()", "response": "Starts the camera recording process."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nstarting recording to a given output video file.", "response": "def start_recording(self, output_file):\n \"\"\" Starts recording to a given output video file.\n\n Parameters\n ----------\n output_file : :obj:`str`\n filename to write video to\n \"\"\"\n if not self._started:\n raise Exception(\"Must start the video recorder first by calling .start()!\")\n if self._recording:\n raise Exception(\"Cannot record a video while one is already recording!\")\n self._recording = True\n self._cmd_q.put(('start', output_file))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstops writing video to file.", "response": "def stop_recording(self):\n \"\"\" Stops writing video to file. 
\"\"\"\n if not self._recording:\n raise Exception(\"Cannot stop a video recording when it's not recording!\")\n self._cmd_q.put(('stop',))\n self._recording = False"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstop the camera process.", "response": "def stop(self):\n \"\"\" Stop the camera process. \"\"\"\n if not self._started:\n raise Exception(\"Cannot stop a video recorder before starting it!\")\n self._started = False\n if self._actual_camera.is_running:\n self._actual_camera.stop()\n if self._camera is not None:\n try:\n self._camera.terminate()\n except:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _preprocess_data(self, data):\n original_type = data.dtype\n if len(data.shape) == 1:\n data = data[:, np.newaxis, np.newaxis]\n elif len(data.shape) == 2:\n data = data[:, :, np.newaxis]\n elif len(data.shape) == 0 or len(data.shape) > 3:\n raise ValueError(\n 'Illegal data array passed to image. 
Must be 1, 2, or 3 dimensional numpy array')\n return data.astype(original_type)", "response": "Converts a data array to the preferred 3D structure."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rosmsg(self):\n from cv_bridge import CvBridge, CvBridgeError\n cv_bridge = CvBridge()\n try:\n return cv_bridge.cv2_to_imgmsg(self._data, encoding=self._encoding)\n except CvBridgeError as cv_bridge_exception:\n logging.error('%s' % (str(cv_bridge_exception)))", "response": "Return the ROS Image object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef can_convert(x):\n if len(x.shape) < 2 or len(x.shape) > 3:\n return False\n dtype = x.dtype\n height = x.shape[0]\n width = x.shape[1]\n channels = 1\n if len(x.shape) == 3:\n channels = x.shape[2]\n if channels > 4:\n return False\n return True", "response": "Returns True if x can be converted to an image False otherwise."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting an array of data to an Image based on the values in the array and the data format.", "response": "def from_array(x, frame='unspecified'):\n \"\"\" Converts an array of data to an Image based on the values in the array and the data format. 
\"\"\"\n\n if not Image.can_convert(x):\n raise ValueError('Cannot convert array to an Image!')\n\n dtype = x.dtype\n height = x.shape[0]\n width = x.shape[1]\n channels = 1\n if len(x.shape) == 3:\n channels = x.shape[2]\n if dtype == np.uint8:\n if channels == 1:\n if np.any((x % BINARY_IM_MAX_VAL) > 0):\n return GrayscaleImage(x, frame)\n return BinaryImage(x, frame)\n elif channels == 3:\n return ColorImage(x, frame)\n else:\n raise ValueError(\n 'No available image conversion for uint8 array with 2 channels')\n elif dtype == np.uint16:\n if channels != 1:\n raise ValueError(\n 'No available image conversion for uint16 array with 2 or 3 channels')\n return GrayscaleImage(x, frame)\n elif dtype == np.float32 or dtype == np.float64:\n if channels == 1:\n return DepthImage(x, frame)\n elif channels == 2:\n return GdImage(x, frame)\n elif channels == 3:\n logging.warning('Converting float array to uint8')\n return ColorImage(x.astype(np.uint8), frame)\n return RgbdImage(x, frame)\n else:\n raise ValueError(\n 'Conversion for dtype %s not supported!' 
%\n (str(dtype)))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef transform(self, translation, theta, method='opencv'):\n theta = np.rad2deg(theta)\n trans_map = np.float32(\n [[1, 0, translation[1]], [0, 1, translation[0]]])\n rot_map = cv2.getRotationMatrix2D(\n (self.center[1], self.center[0]), theta, 1)\n trans_map_aff = np.r_[trans_map, [[0, 0, 1]]]\n rot_map_aff = np.r_[rot_map, [[0, 0, 1]]]\n full_map = rot_map_aff.dot(trans_map_aff)\n full_map = full_map[:2, :]\n if method == 'opencv':\n im_data_tf = cv2.warpAffine(\n self.data, full_map, (self.width, self.height), flags=cv2.INTER_NEAREST)\n else:\n im_data_tf = sni.affine_transform(self.data,\n matrix=full_map[:, :2],\n offset=full_map[:, 2],\n order=0)\n return type(self)(\n im_data_tf.astype(\n self.data.dtype),\n frame=self._frame)", "response": "Creates a new image by translating and rotating the current image."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a new image that is aligned to the given height and width.", "response": "def align(self, scale, center, angle, height, width):\n \"\"\" Create a thumbnail from the original image that\n is scaled by the given factor, centered on the center pixel, oriented along the grasp angle, and cropped to the desired height and width.\n\n Parameters\n ----------\n scale : float\n scale factor to apply\n center : 2D array\n array containing the row and column index of the pixel to center on\n angle : float\n angle to align the image to\n height : int\n height of the final image\n width : int\n width of the final image\n \"\"\"\n # rescale\n scaled_im = self.resize(scale)\n\n # transform\n cx = scaled_im.center[1]\n cy = scaled_im.center[0]\n dx = cx - center[0] * scale\n dy = cy - center[1] * scale\n translation = np.array([dy, dx])\n tf_im = scaled_im.transform(translation, angle)\n\n # crop\n aligned_im = tf_im.crop(height, width)\n return aligned_im"} 
{"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef gradients(self):\n g = np.gradient(self.data.astype(np.float32))\n return g", "response": "Return the gradient as a pair of numpy arrays."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef linear_to_ij(self, linear_inds):\n return np.c_[linear_inds // self.width, linear_inds % self.width]", "response": "Converts linear indices to row and column coordinates."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef is_same_shape(self, other_im, check_channels=False):\n if self.height == other_im.height and self.width == other_im.width:\n if check_channels and self.channels != other_im.channels:\n return False\n return True\n return False", "response": "Checks if two images have the same height and width and optionally channels."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mask_by_ind(self, inds):\n new_data = np.zeros(self.shape)\n for ind in inds:\n new_data[ind[0], ind[1]] = self.data[ind[0], ind[1]]\n return type(self)(new_data.astype(self.data.dtype), self.frame)", "response": "Create a new image by zeroing out data at locations not in the\n given indices."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mask_by_linear_ind(self, linear_inds):\n inds = self.linear_to_ij(linear_inds)\n return self.mask_by_ind(inds)", "response": "Create a new image by zeroing out data at locations not in the\n given indices."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a median Image from a list of Images.", "response": "def median_images(images):\n \"\"\"Create a median Image from a list of Images.\n\n Parameters\n ----------\n :obj:`list` of :obj:`Image`\n A list of Image objects.\n\n 
Returns\n -------\n :obj:`Image`\n A new Image of the same type whose data is the median of all of\n the images' data.\n \"\"\"\n images_data = np.array([image.data for image in images])\n median_image_data = np.median(images_data, axis=0)\n\n an_image = images[0]\n return type(an_image)(\n median_image_data.astype(\n an_image.data.dtype),\n an_image.frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a min Image from a list of Images.", "response": "def min_images(images):\n \"\"\"Create a min Image from a list of Images.\n\n Parameters\n ----------\n :obj:`list` of :obj:`Image`\n A list of Image objects.\n\n Returns\n -------\n :obj:`Image`\n A new Image of the same type whose data is the min of all of\n the images' data.\n \"\"\"\n images_data = np.array([image.data for image in images])\n images_data[images_data == 0] = np.inf\n min_image_data = np.min(images_data, axis=0)\n min_image_data[min_image_data == np.inf] = 0.0\n\n an_image = images[0]\n return type(an_image)(\n min_image_data.astype(\n an_image.data.dtype),\n an_image.frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef apply(self, method, *args, **kwargs):\n data = method(self.data, *args, **kwargs)\n return type(self)(data.astype(self.type), self.frame)", "response": "Create a new image by applying a function to the current image s data."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef crop(self, height, width, center_i=None, center_j=None):\n # compute crop center px\n height = int(np.round(height))\n width = int(np.round(width))\n if center_i is None:\n center_i = float(self.height) / 2\n if center_j is None:\n center_j = float(self.width) / 2\n\n # crop using PIL\n desired_start_row = int(np.floor(center_i - float(height) / 2))\n desired_end_row = int(np.floor(center_i + float(height) / 2))\n desired_start_col = 
int(np.floor(center_j - float(width) / 2))\n desired_end_col = int(np.floor(center_j + float(width) / 2))\n\n pil_im = PImage.fromarray(self.data)\n cropped_pil_im = pil_im.crop((desired_start_col,\n desired_start_row,\n desired_end_col,\n desired_end_row))\n crop_data = np.array(cropped_pil_im)\n\n if crop_data.shape[0] != height or crop_data.shape[1] != width:\n raise ValueError('Crop dims are incorrect')\n\n return type(self)(crop_data.astype(self.data.dtype), self._frame)", "response": "Crop the image centered around center_i and center_j."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nzero out all of the image outside of a crop box.", "response": "def focus(self, height, width, center_i=None, center_j=None):\n \"\"\"Zero out all of the image outside of a crop box.\n\n Parameters\n ----------\n height : int\n The height of the desired crop box.\n\n width : int\n The width of the desired crop box.\n\n center_i : int\n The center height point of the crop box. If not specified, the center\n of the image is used.\n\n center_j : int\n The center width point of the crop box. 
If not specified, the center\n of the image is used.\n\n Returns\n -------\n :obj:`Image`\n A new Image of the same type and size that is zeroed out except\n within the crop box.\n \"\"\"\n if center_i is None:\n center_i = self.height / 2\n if center_j is None:\n center_j = self.width / 2\n\n start_row = int(max(0, center_i - height / 2))\n end_row = int(min(self.height - 1, center_i + height / 2))\n start_col = int(max(0, center_j - width / 2))\n end_col = int(min(self.width - 1, center_j + width / 2))\n\n focus_data = np.zeros(self._data.shape)\n focus_data[start_row:end_row + 1, start_col:end_col + \\\n 1] = self._data[start_row:end_row + 1, start_col:end_col + 1]\n return type(self)(focus_data.astype(self._data.dtype), self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a new image of the same type and size that is re-centered at the mean location of the nonzero pixels.", "response": "def center_nonzero(self):\n \"\"\"Recenters the image on the mean of the coordinates of nonzero pixels.\n\n Returns\n -------\n :obj:`Image`\n A new Image of the same type and size that is re-centered\n at the mean location of the non-zero pixels.\n \"\"\"\n # get the center of the nonzero pixels\n nonzero_px = np.where(self._data != 0.0)\n nonzero_px = np.c_[nonzero_px[0], nonzero_px[1]]\n mean_px = np.mean(nonzero_px, axis=0)\n center_px = (np.array(self.shape) / 2.0)[:2]\n diff_px = center_px - mean_px\n\n # transform image\n nonzero_px_tf = nonzero_px + diff_px\n nonzero_px_tf[:, 0] = np.max(\n np.c_[np.zeros(nonzero_px_tf[:, 0].shape), nonzero_px_tf[:, 0]], axis=1)\n nonzero_px_tf[:, 0] = np.min(np.c_[(\n self.height - 1) * np.ones(nonzero_px_tf[:, 0].shape), nonzero_px_tf[:, 0]], axis=1)\n nonzero_px_tf[:, 1] = np.max(\n np.c_[np.zeros(nonzero_px_tf[:, 1].shape), nonzero_px_tf[:, 1]], axis=1)\n nonzero_px_tf[:, 1] = np.min(np.c_[(\n self.width - 1) * np.ones(nonzero_px_tf[:, 1].shape), nonzero_px_tf[:, 1]], 
axis=1)\n nonzero_px = nonzero_px.astype(np.uint16)\n nonzero_px_tf = nonzero_px_tf.astype(np.uint16)\n shifted_data = np.zeros(self.shape)\n shifted_data[nonzero_px_tf[:, 0], nonzero_px_tf[:, 1],\n :] = self.data[nonzero_px[:, 0], nonzero_px[:, 1]].reshape(-1, self.channels)\n\n return type(self)(\n shifted_data.astype(\n self.data.dtype), frame=self._frame), diff_px"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef nonzero_pixels(self):\n nonzero_px = np.where(np.sum(self.raw_data, axis=2) > 0)\n nonzero_px = np.c_[nonzero_px[0], nonzero_px[1]]\n return nonzero_px", "response": "Return an array of the nonzero pixels."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning an array of the zero pixels.", "response": "def zero_pixels(self):\n \"\"\" Return an array of the zero pixels.\n\n Returns\n -------\n :obj:`numpy.ndarray`\n Nx2 array of the zero pixels\n \"\"\"\n zero_px = np.where(np.sum(self.raw_data, axis=2) == 0)\n zero_px = np.c_[zero_px[0], zero_px[1]]\n return zero_px"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn an array of the NaN pixels.", "response": "def nan_pixels(self):\n \"\"\" Return an array of the NaN pixels.\n\n Returns\n -------\n :obj:`numpy.ndarray`\n Nx2 array of the NaN pixels\n \"\"\"\n nan_px = np.where(np.isnan(np.sum(self.raw_data, axis=2)))\n nan_px = np.c_[nan_px[0], nan_px[1]]\n return nan_px"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef finite_pixels(self):\n finite_px = np.where(np.isfinite(self.data))\n finite_px = np.c_[finite_px[0], finite_px[1]]\n return finite_px", "response": "Return an array of the finite pixels."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreplaces all zeros in the image with a specified value.", "response": "def replace_zeros(self, val, 
zero_thresh=0.0):\n \"\"\" Replaces all zeros in the image with a specified value.\n\n Parameters\n ----------\n val : image dtype\n value to replace zeros with\n zero_thresh : float\n threshold at or below which a pixel is treated as zero\n\n Returns\n -------\n :obj:`Image`\n A new Image of the same type with the zeros replaced\n \"\"\"\n new_data = self.data.copy()\n new_data[new_data <= zero_thresh] = val\n return type(self)(new_data.astype(self.data.dtype), frame=self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef save(self, filename):\n filename = str(filename)\n file_root, file_ext = os.path.splitext(filename)\n if file_ext in COLOR_IMAGE_EXTS:\n im_data = self._image_data()\n if im_data.dtype.type == np.uint8:\n pil_image = PImage.fromarray(im_data.squeeze())\n pil_image.save(filename)\n else:\n try:\n import png\n except ImportError:\n raise ValueError('PyPNG not installed! Cannot save 16-bit images')\n png.fromarray(im_data, 'L').save(filename)\n elif file_ext == '.npy':\n np.save(filename, self._data)\n elif file_ext == '.npz':\n np.savez_compressed(filename, self._data)\n else:\n raise ValueError('Extension %s not supported' % (file_ext))", "response": "Writes the image to a file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef savefig(self, output_path, title, dpi=400, format='png', cmap=None):\n plt.figure()\n plt.imshow(self.data, cmap=cmap)\n plt.title(title)\n plt.axis('off')\n title_underscore = title.replace(' ', '_')\n plt.savefig(\n os.path.join(\n output_path,\n '{0}.{1}'.format(\n title_underscore,\n format)),\n dpi=dpi,\n format=format)", "response": "Writes the image to a file using matplotlib."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload a data matrix from a given file.", "response": "def load_data(filename):\n \"\"\"Loads a data matrix from a given file.\n\n Parameters\n ----------\n filename : :obj:`str`\n The file to load the data from. 
Must be one of .png, .jpg,\n .npy, or .npz.\n\n Returns\n -------\n :obj:`numpy.ndarray`\n The data array read from the file.\n \"\"\"\n file_root, file_ext = os.path.splitext(filename)\n data = None\n if file_ext.lower() in COLOR_IMAGE_EXTS:\n data = cv2.cvtColor(cv2.imread(filename), cv2.COLOR_BGR2RGB)\n elif file_ext == '.npy':\n data = np.load(filename)\n elif file_ext == '.npz':\n data = np.load(filename)['arr_0']\n else:\n raise ValueError('Extension %s not supported' % (file_ext))\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks that the given data is a uint8 array with one or three channels.", "response": "def _check_valid_data(self, data):\n \"\"\"Checks that the given data is a uint8 array with one or three\n channels.\n\n Parameters\n ----------\n data : :obj:`numpy.ndarray`\n The data to check.\n\n Raises\n ------\n ValueError\n If the data is invalid.\n \"\"\"\n if data.dtype.type is not np.uint8:\n raise ValueError(\n 'Illegal data type. Color images only support uint8 arrays')\n\n if len(data.shape) != 3 or data.shape[2] != 3:\n raise ValueError(\n 'Illegal data type. 
Color images only support three channels')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef bgr2rgb(self):\n new_data = cv2.cvtColor(self.raw_data, cv2.COLOR_BGR2RGB)\n return ColorImage(new_data, frame=self.frame, encoding='rgb8')", "response": "Converts BGR color data to RGB using cv2."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rgb2bgr(self):\n new_data = cv2.cvtColor(self.raw_data, cv2.COLOR_RGB2BGR)\n return ColorImage(new_data, frame=self.frame, encoding='bgr8')", "response": "Converts RGB color data to BGR using cv2."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nswap the two channels specified in the tuple.", "response": "def swap_channels(self, channel_swap):\n \"\"\" Swaps the two channels specified in the tuple.\n\n Parameters\n ----------\n channel_swap : :obj:`tuple` of int\n the two channels to swap\n\n Returns\n -------\n :obj:`ColorImage`\n color image with channels swapped\n \"\"\"\n if len(channel_swap) != 2:\n raise ValueError('Illegal value for channel swap')\n ci = channel_swap[0]\n cj = channel_swap[1]\n if ci < 0 or ci > 2 or cj < 0 or cj > 2:\n raise ValueError('Channels must be between 0 and 2')\n new_data = self.data.copy()\n new_data[:, :, ci] = self.data[:, :, cj]\n new_data[:, :, cj] = self.data[:, :, ci]\n return ColorImage(new_data, frame=self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding the corners of an sx X sy chessboard in the image.", "response": "def find_chessboard(self, sx=6, sy=9):\n \"\"\"Finds the corners of an sx X sy chessboard in the image.\n\n Parameters\n ----------\n sx : int\n Number of chessboard corners in x-direction.\n sy : int\n Number of chessboard corners in y-direction.\n\n Returns\n -------\n :obj:`list` of :obj:`numpy.ndarray`\n A list containing the 2D points of the corners of the detected\n chessboard, or None if no
chessboard found.\n \"\"\"\n # termination criteria\n criteria = (cv2.TERM_CRITERIA_EPS +\n cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)\n\n # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)\n objp = np.zeros((sx * sy, 3), np.float32)\n objp[:, :2] = np.mgrid[0:sx, 0:sy].T.reshape(-1, 2)\n\n # Arrays to store object points and image points from all the images.\n objpoints = [] # 3d point in real world space\n imgpoints = [] # 2d points in image plane.\n\n # create images\n img = self.data.astype(np.uint8)\n gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n # Find the chess board corners\n ret, corners = cv2.findChessboardCorners(gray, (sx, sy), None)\n\n # If found, add object points, image points (after refining them)\n if ret:\n objpoints.append(objp)\n cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)\n imgpoints.append(corners)\n\n if corners is not None:\n return corners.squeeze()\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef foreground_mask(\n self,\n tolerance,\n ignore_black=True,\n use_hsv=False,\n scale=8,\n bgmodel=None):\n \"\"\"Creates a binary image mask for the foreground of an image against\n a uniformly colored background. The background is assumed to be the mode value of the histogram\n for each of the color channels.\n\n Parameters\n ----------\n tolerance : int\n A +/- level from the detected mean background color. 
Pixels within\n this range will be classified as background pixels and masked out.\n\n ignore_black : bool\n If True, the zero pixels will be ignored\n when computing the background model.\n\n use_hsv : bool\n If True, image will be converted to HSV for background model\n generation.\n\n scale : int\n Size of background histogram bins -- there will be BINARY_IM_MAX_VAL/scale bins\n in the color histogram for each channel.\n\n bgmodel : :obj:`list` of int\n A list containing the red, green, and blue channel modes of the\n background. If this is None, a background model will be generated\n using the other parameters.\n\n Returns\n -------\n :obj:`BinaryImage`\n A binary image that masks out the background from the current\n ColorImage.\n \"\"\"\n # get a background model\n if bgmodel is None:\n bgmodel = self.background_model(ignore_black=ignore_black,\n use_hsv=use_hsv,\n scale=scale)\n\n # get the bounds\n lower_bound = np.array(\n [bgmodel[i] - tolerance for i in range(self.channels)])\n upper_bound = np.array(\n [bgmodel[i] + tolerance for i in range(self.channels)])\n orig_zero_indices = np.where(np.sum(self._data, axis=2) == 0)\n\n # threshold\n binary_data = cv2.inRange(self.data, lower_bound, upper_bound)\n binary_data[:, :, ] = (BINARY_IM_MAX_VAL - binary_data[:, :, ])\n binary_data[orig_zero_indices[0], orig_zero_indices[1], ] = 0.0\n binary_im = BinaryImage(binary_data.astype(np.uint8), frame=self.frame)\n return binary_im", "response": "Creates a binary image mask for the foreground of the current ColorImage against a uniformly colored background."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a background model for the given image.", "response": "def background_model(self, ignore_black=True, use_hsv=False, scale=8):\n \"\"\"Creates a background model for the given image. 
The background\n color is given by the modes of each channel's histogram.\n\n Parameters\n ----------\n ignore_black : bool\n If True, the zero pixels will be ignored\n when computing the background model.\n\n use_hsv : bool\n If True, image will be converted to HSV for background model\n generation.\n\n scale : int\n Size of background histogram bins -- there will be BINARY_IM_MAX_VAL/scale bins\n in the color histogram for each channel.\n\n Returns\n -------\n :obj:`list` of int\n A list containing the red, green, and blue channel modes of the\n background.\n \"\"\"\n # hsv color\n data = self.data\n if use_hsv:\n pil_im = PImage.fromarray(self._data)\n pil_im = pil_im.convert('HSV')\n data = np.asarray(pil_im)\n\n # find the black pixels\n nonblack_pixels = np.where(np.sum(self.data, axis=2) > 0)\n r_data = self.r_data\n g_data = self.g_data\n b_data = self.b_data\n if ignore_black:\n r_data = r_data[nonblack_pixels[0], nonblack_pixels[1]]\n g_data = g_data[nonblack_pixels[0], nonblack_pixels[1]]\n b_data = b_data[nonblack_pixels[0], nonblack_pixels[1]]\n\n # generate histograms for each channel\n bounds = (0, np.iinfo(np.uint8).max + 1)\n num_bins = bounds[1] // scale # np.histogram needs an integer bin count\n r_hist, _ = np.histogram(r_data, bins=num_bins, range=bounds)\n g_hist, _ = np.histogram(g_data, bins=num_bins, range=bounds)\n b_hist, _ = np.histogram(b_data, bins=num_bins, range=bounds)\n hists = (r_hist, g_hist, b_hist)\n\n # find the thresholds as the modes of the image\n modes = [0 for i in range(self.channels)]\n for i in range(self.channels):\n modes[i] = scale * np.argmax(hists[i])\n\n return modes"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndraw a white box on the image.", "response": "def draw_box(self, box):\n \"\"\"Draw a white box on the image.\n\n Parameters\n ----------\n box : :obj:`autolab_core.Box`\n A 2D box to draw in the image.\n\n Returns\n -------\n :obj:`ColorImage`\n A new image that is the same as the current one, but with\n the white box drawn in.\n \"\"\"\n 
box_data = self._data.copy()\n min_i = box.min_pt[1]\n min_j = box.min_pt[0]\n max_i = box.max_pt[1]\n max_j = box.max_pt[0]\n\n # draw the vertical lines\n for j in range(min_j, max_j):\n box_data[min_i, j, :] = BINARY_IM_MAX_VAL * np.ones(self.channels)\n box_data[max_i, j, :] = BINARY_IM_MAX_VAL * np.ones(self.channels)\n\n # draw the horizontal lines\n for i in range(min_i, max_i):\n box_data[i, min_j, :] = BINARY_IM_MAX_VAL * np.ones(self.channels)\n box_data[i, max_j, :] = BINARY_IM_MAX_VAL * np.ones(self.channels)\n\n return ColorImage(box_data, self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef nonzero_hsv_data(self):\n hsv_data = cv2.cvtColor(self.data, cv2.COLOR_BGR2HSV)\n nonzero_px = self.nonzero_pixels()\n return hsv_data[nonzero_px[:, 0], nonzero_px[:, 1], ...]", "response": "Computes non zero hsv values for the image."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsegment a color image using KMeans based on spatial and color distances.", "response": "def segment_kmeans(self, rgb_weight, num_clusters, hue_weight=0.0):\n \"\"\"\n Segment a color image using KMeans based on spatial and color distances.\n Black pixels will automatically be assigned to their own 'background' cluster.\n\n Parameters\n ----------\n rgb_weight : float\n weighting of RGB distance relative to spatial and hue distance\n num_clusters : int\n number of clusters to use\n hue_weight : float\n weighting of hue from hsv relative to spatial and RGB distance\n\n Returns\n -------\n :obj:`SegmentationImage`\n image containing the segment labels\n \"\"\"\n # form features array\n label_offset = 1\n nonzero_px = np.where(self.data != 0.0)\n nonzero_px = np.c_[nonzero_px[0], nonzero_px[1]]\n\n # get hsv data if specified\n color_vals = rgb_weight * \\\n self._data[nonzero_px[:, 0], nonzero_px[:, 1], :]\n if hue_weight > 0.0:\n hsv_data = cv2.cvtColor(self.data, cv2.COLOR_BGR2HSV)\n 
color_vals = np.c_[color_vals, hue_weight *\n hsv_data[nonzero_px[:, 0], nonzero_px[:, 1], :1]]\n features = np.c_[nonzero_px, color_vals.astype(np.float32)]\n\n # perform KMeans clustering\n kmeans = sc.KMeans(n_clusters=num_clusters)\n labels = kmeans.fit_predict(features)\n\n # create output label array\n label_im = np.zeros([self.height, self.width]).astype(np.uint8)\n label_im[nonzero_px[:, 0], nonzero_px[:, 1]] = labels + label_offset\n return SegmentationImage(label_im, frame=self.frame)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef inpaint(self, win_size=3, rescale_factor=1.0):\n # get original shape\n orig_shape = (self.height, self.width)\n \n # resize the image\n resized_data = self.resize(rescale_factor, interp='nearest').data\n\n # inpaint smaller image\n mask = 1 * (np.sum(resized_data, axis=2) == 0)\n inpainted_data = cv2.inpaint(resized_data, mask.astype(np.uint8),\n win_size, cv2.INPAINT_TELEA)\n inpainted_im = ColorImage(inpainted_data, frame=self.frame)\n\n # fill in zero pixels with inpainted and resized image\n filled_data = inpainted_im.resize(\n orig_shape, interp='bilinear').data\n new_data = self.data\n new_data[self.data == 0] = filled_data[self.data == 0]\n return ColorImage(new_data, frame=self.frame)", "response": "Fills in the zero pixels in the image with the inpainted image."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert the color image to grayscale using OpenCV.", "response": "def to_grayscale(self):\n \"\"\"Converts the color image to grayscale using OpenCV.\n\n Returns\n -------\n :obj:`GrayscaleImage`\n Grayscale image corresponding to original color image.\n \"\"\"\n gray_data = cv2.cvtColor(self.data, cv2.COLOR_RGB2GRAY)\n return GrayscaleImage(gray_data, frame=self.frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef open(filename, 
frame='unspecified'):\n data = Image.load_data(filename).astype(np.uint8)\n return ColorImage(data, frame)", "response": "Creates a ColorImage from a file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _check_valid_data(self, data):\n if data.dtype.type is not np.float32 and \\\n data.dtype.type is not np.float64:\n raise ValueError(\n 'Illegal data type. Depth images only support float arrays')\n\n if len(data.shape) == 3 and data.shape[2] != 1:\n raise ValueError(\n 'Illegal data type. Depth images only support single channel')", "response": "Checks that the given data is a float array with one channel."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the data in image format with scaling and conversion to uint8 types.", "response": "def _image_data(self, normalize=False,\n min_depth=MIN_DEPTH,\n max_depth=MAX_DEPTH,\n twobyte=False):\n \"\"\"Returns the data in image format, with scaling and conversion to uint8 types.\n\n Parameters\n ----------\n normalize : bool\n whether or not to normalize by the min and max depth of the image\n min_depth : float\n minimum depth value for the normalization\n max_depth : float\n maximum depth value for the normalization\n twobyte: bool\n whether or not to use 16-bit encoding\n\n Returns\n -------\n :obj:`numpy.ndarray` of uint8\n A 3D matrix representing the image. 
The first dimension is rows, the\n second is columns, and the third is a set of 3 RGB values, each of\n which is simply the depth entry scaled to between 0 and BINARY_IM_MAX_VAL.\n \"\"\"\n max_val = BINARY_IM_MAX_VAL\n if twobyte:\n max_val = np.iinfo(np.uint16).max\n \n if normalize:\n min_depth = np.min(self._data)\n max_depth = np.max(self._data)\n depth_data = (self._data - min_depth) / (max_depth - min_depth)\n depth_data = float(max_val) * depth_data.squeeze()\n else:\n zero_px = np.where(self._data == 0)\n zero_px = np.c_[zero_px[0], zero_px[1], zero_px[2]]\n depth_data = ((self._data - min_depth) * \\\n (float(max_val) / (max_depth - min_depth))).squeeze()\n depth_data[zero_px[:,0], zero_px[:,1]] = 0\n im_data = np.zeros([self.height, self.width, 3])\n im_data[:, :, 0] = depth_data\n im_data[:, :, 1] = depth_data\n im_data[:, :, 2] = depth_data\n if twobyte:\n return im_data.astype(np.uint16)\n return im_data.astype(np.uint8)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef threshold(self, front_thresh=0.0, rear_thresh=100.0):\n data = np.copy(self._data)\n data[data < front_thresh] = 0.0\n data[data > rear_thresh] = 0.0\n return DepthImage(data, self._frame)", "response": "Creates a new DepthImage by setting all depths less than front_thresh and greater than rear_thresh to 0. 
"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef threshold_gradients(self, grad_thresh):\n data = np.copy(self._data)\n gx, gy = self.gradients()\n gradients = np.zeros([gx.shape[0], gx.shape[1], 2])\n gradients[:, :, 0] = gx\n gradients[:, :, 1] = gy\n gradient_mags = np.linalg.norm(gradients, axis=2)\n ind = np.where(gradient_mags > grad_thresh)\n data[ind[0], ind[1]] = 0.0\n return DepthImage(data, self._frame)", "response": "Creates a new DepthImage by zeroing out all depths whose gradient magnitude is greater than grad_thresh."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef threshold_gradients_pctile(self, thresh_pctile, min_mag=0.0):\n data = np.copy(self._data)\n gx, gy = self.gradients()\n gradients = np.zeros([gx.shape[0], gx.shape[1], 2])\n gradients[:, :, 0] = gx\n gradients[:, :, 1] = gy\n gradient_mags = np.linalg.norm(gradients, axis=2)\n grad_thresh = np.percentile(gradient_mags, thresh_pctile)\n ind = np.where(\n (gradient_mags > grad_thresh) & (\n gradient_mags > min_mag))\n data[ind[0], ind[1]] = 0.0\n return DepthImage(data, self._frame)", "response": "Creates a new DepthImage by zeroing out all depths whose gradient magnitude exceeds a given percentile of all gradient magnitudes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef inpaint(self, rescale_factor=1.0):\n # get original shape\n orig_shape = (self.height, self.width)\n\n # form inpaint kernel\n inpaint_kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])\n\n # resize the image\n resized_data = self.resize(rescale_factor, interp='nearest').data\n\n # inpaint the smaller image\n cur_data = resized_data.copy()\n zeros = (cur_data == 0)\n while np.any(zeros):\n neighbors = ssg.convolve2d((cur_data != 0), inpaint_kernel,\n mode='same', boundary='symm')\n avg_depth = ssg.convolve2d(cur_data, inpaint_kernel,\n mode='same', 
boundary='symm')\n avg_depth[neighbors > 0] = avg_depth[neighbors > 0] / \\\n neighbors[neighbors > 0]\n avg_depth[neighbors == 0] = 0\n avg_depth[resized_data > 0] = resized_data[resized_data > 0]\n cur_data = avg_depth\n zeros = (cur_data == 0)\n\n # fill in zero pixels with inpainted and resized image\n inpainted_im = DepthImage(cur_data, frame=self.frame)\n filled_data = inpainted_im.resize(\n orig_shape, interp='bilinear').data\n new_data = np.copy(self.data)\n new_data[self.data == 0] = filled_data[self.data == 0]\n return DepthImage(new_data, frame=self.frame)", "response": "Fills in the zero pixels of the image by iterative neighbor averaging and returns a new inpainted DepthImage."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef invalid_pixel_mask(self):\n # init mask buffer\n mask = np.zeros([self.height, self.width, 1]).astype(np.uint8)\n\n # update invalid pixels\n zero_pixels = self.zero_pixels()\n nan_pixels = self.nan_pixels()\n mask[zero_pixels[:, 0], zero_pixels[:, 1]] = BINARY_IM_MAX_VAL\n mask[nan_pixels[:, 0], nan_pixels[:, 1]] = BINARY_IM_MAX_VAL\n return BinaryImage(mask, frame=self.frame)", "response": "Returns a binary mask for the NaN- and zero-valued pixels."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef pixels_farther_than(self, depth_im, filter_equal_depth=False):\n # take closest pixel\n if filter_equal_depth:\n farther_px = np.where((self.data > depth_im.data) & (np.isfinite(depth_im.data)))\n else:\n farther_px = np.where((self.data >= depth_im.data) & (np.isfinite(depth_im.data)))\n farther_px = np.c_[farther_px[0], farther_px[1]]\n return farther_px", "response": "Returns the pixels of this image that are farther away than those in another depth image."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef combine_with(self, depth_im):\n new_data = self.data.copy()\n # replace zero pixels\n new_data[new_data == 0] = depth_im.data[new_data 
== 0]\n # take closest pixel\n new_data[(new_data > depth_im.data) & (depth_im.data > 0)] = depth_im.data[(\n new_data > depth_im.data) & (depth_im.data > 0)]\n return DepthImage(new_data, frame=self.frame)", "response": "Returns a new DepthImage that fills the zero pixels of this image from another depth image and keeps the closer nonzero depth at each pixel."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a BinaryImage from the depth image.", "response": "def to_binary(self, threshold=0.0):\n \"\"\"Creates a BinaryImage from the depth image. Points where the depth\n is greater than threshold are converted to ones, and all other points\n are zeros.\n\n Parameters\n ----------\n threshold : float\n The depth threshold.\n\n Returns\n -------\n :obj:`BinaryImage`\n A BinaryImage where all 1 points had a depth greater than threshold\n in the DepthImage.\n \"\"\"\n data = BINARY_IM_MAX_VAL * (self._data > threshold)\n return BinaryImage(data.astype(np.uint8), self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts the image to a color image.", "response": "def to_color(self, normalize=False):\n \"\"\" Convert to a color image.\n\n Parameters\n ----------\n normalize : bool\n whether or not to normalize by the maximum depth\n\n Returns\n -------\n :obj:`ColorImage`\n color image corresponding to the depth image\n \"\"\"\n im_data = self._image_data(normalize=normalize)\n return ColorImage(im_data, frame=self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_float(self):\n return DepthImage(self.data.astype(np.float32), frame=self.frame)", "response": "Converts to 32-bit float data."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncompute a PointNormalCloud from the depth image.", "response": "def point_normal_cloud(self, camera_intr):\n \"\"\"Computes a PointNormalCloud from the depth image.\n\n 
Parameters\n ----------\n camera_intr : :obj:`CameraIntrinsics`\n The camera parameters on which this depth image was taken.\n\n Returns\n -------\n :obj:`autolab_core.PointNormalCloud`\n A PointNormalCloud created from the depth image.\n \"\"\"\n point_cloud_im = camera_intr.deproject_to_image(self)\n normal_cloud_im = point_cloud_im.normal_cloud_im()\n point_cloud = point_cloud_im.to_point_cloud()\n normal_cloud = normal_cloud_im.to_normal_cloud()\n return PointNormalCloud(\n point_cloud.data,\n normal_cloud.data,\n frame=self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef open(filename, frame='unspecified'):\n file_root, file_ext = os.path.splitext(filename)\n data = Image.load_data(filename)\n if file_ext.lower() in COLOR_IMAGE_EXTS:\n data = (data * (MAX_DEPTH / BINARY_IM_MAX_VAL)).astype(np.float32)\n return DepthImage(data, frame)", "response": "Creates a DepthImage from a file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the data in image format with scaling and conversion to uint8 types.", "response": "def _image_data(self):\n \"\"\"Returns the data in image format, with scaling and conversion to uint8 types.\n\n Returns\n -------\n :obj:`numpy.ndarray` of uint8\n A 3D matrix representing the image. 
The first dimension is rows, the\n second is columns, and the third is simply the IR entry scaled to between 0 and BINARY_IM_MAX_VAL.\n \"\"\"\n return (self._data * (float(BINARY_IM_MAX_VAL) / MAX_IR)).astype(np.uint8)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef resize(self, size, interp='bilinear'):\n resized_data = sm.imresize(self._data, size, interp=interp)\n return IrImage(resized_data, self._frame)", "response": "Resize the image.\n\n Parameters\n ----------\n size : int, float, or tuple\n * int - Percentage of current size.\n * float - Fraction of current size.\n * tuple - Size of the output image.\n\n interp : :obj:`str`, optional\n Interpolation to use for re-sizing ('nearest', 'lanczos', 'bilinear',\n 'bicubic', or 'cubic')\n\n Returns\n -------\n :obj:`IrImage`\n The resized image."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates an IrImage from a file.", "response": "def open(filename, frame='unspecified'):\n \"\"\"Creates an IrImage from a file.\n\n Parameters\n ----------\n filename : :obj:`str`\n The file to load the data from. 
Must be one of .png, .jpg,\n .npy, or .npz.\n\n frame : :obj:`str`\n A string representing the frame of reference in which the new image\n lies.\n\n Returns\n -------\n :obj:`IrImage`\n The new IR image.\n \"\"\"\n data = Image.load_data(filename)\n data = (data * (MAX_IR / BINARY_IM_MAX_VAL)).astype(np.uint16)\n return IrImage(data, frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef resize(self, size, interp='bilinear'):\n resized_data = sm.imresize(self.data, size, interp=interp)\n return GrayscaleImage(resized_data, self._frame)", "response": "Resize the image.\n\n Parameters\n ----------\n size : int, float, or tuple\n * int - Percentage of current size.\n * float - Fraction of current size.\n * tuple - Size of the output image.\n\n interp : :obj:`str`, optional\n Interpolation to use for re-sizing ('nearest', 'lanczos', 'bilinear',\n 'bicubic', or 'cubic')\n\n Returns\n -------\n :obj:`GrayscaleImage`\n The resized image."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_color(self):\n color_data = np.repeat(self.data[:,:,np.newaxis], 3, axis=2)\n return ColorImage(color_data, self._frame)", "response": "Convert the grayscale image to a ColorImage."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef open(filename, frame='unspecified'):\n data = Image.load_data(filename)\n return GrayscaleImage(data, frame)", "response": "Creates a GrayscaleImage from a file."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntakes the OR of this binary image and another binary image.", "response": "def pixelwise_or(self, binary_im):\n \"\"\" Takes the OR with another binary image.\n\n Parameters\n ----------\n binary_im : :obj:`BinaryImage`\n binary image for the OR operation\n\n Returns\n -------\n 
:obj:`BinaryImage`\n OR of this binary image and other image\n \"\"\"\n data = np.copy(self._data)\n ind = np.where(binary_im.data > 0)\n data[ind[0], ind[1], ...] = BINARY_IM_MAX_VAL\n return BinaryImage(data, self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ninverts an image (all nonzeros become zeros and vice versa) and returns a new binary image that is the inverse of the original.", "response": "def inverse(self):\n \"\"\" Inverts image (all nonzeros become zeros and vice versa)\n Returns\n -------\n :obj:`BinaryImage`\n inverse of this binary image\n \"\"\"\n data = np.zeros(self.shape).astype(np.uint8)\n ind = np.where(self.data == 0)\n data[ind[0], ind[1], ...] = BINARY_IM_MAX_VAL\n return BinaryImage(data, self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove all small white connected components from the binary image.", "response": "def prune_contours(self, area_thresh=1000.0, dist_thresh=20,\n preserve_topology=True):\n \"\"\"Removes all white connected components with area less than area_thresh.\n Parameters\n ----------\n area_thresh : float\n The minimum area for which a white connected component will not be\n zeroed out.\n dist_thresh : int\n If a connected component is within dist_thresh of the top of the\n image, it will not be pruned out, regardless of its area.\n preserve_topology : bool\n If True, pixels that are zero in the original image remain zero\n in the pruned image.\n Returns\n -------\n :obj:`BinaryImage`\n The new pruned binary image.\n \"\"\"\n # get all contours (connected components) from the binary image\n contours, hierarchy = cv2.findContours(\n self.data.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)\n num_contours = len(contours)\n middle_pixel = np.array(self.shape)[:2] / 2\n middle_pixel = middle_pixel.reshape(1, 2)\n center_contour = None\n pruned_contours = []\n\n # find which contours need to be pruned\n for i in range(num_contours):\n area = cv2.contourArea(contours[i])\n if area > area_thresh:\n # check close to origin\n fill = np.zeros([self.height, 
self.width, 3])\n cv2.fillPoly(\n fill,\n pts=[\n contours[i]],\n color=(\n BINARY_IM_MAX_VAL,\n BINARY_IM_MAX_VAL,\n BINARY_IM_MAX_VAL))\n nonzero_px = np.where(fill > 0)\n nonzero_px = np.c_[nonzero_px[0], nonzero_px[1]]\n dists = ssd.cdist(middle_pixel, nonzero_px)\n min_dist = np.min(dists)\n pruned_contours.append((contours[i], min_dist))\n\n if len(pruned_contours) == 0:\n return None\n\n pruned_contours.sort(key=lambda x: x[1])\n\n # keep all contours within some distance of the top\n num_contours = len(pruned_contours)\n keep_indices = [0]\n source_coords = pruned_contours[0][0].squeeze().astype(np.float32)\n for i in range(1, num_contours):\n target_coords = pruned_contours[i][0].squeeze().astype(np.float32)\n dists = ssd.cdist(source_coords, target_coords)\n min_dist = np.min(dists)\n if min_dist < dist_thresh:\n keep_indices.append(i)\n\n # keep the top num_areas pruned contours\n keep_indices = np.unique(keep_indices)\n pruned_contours = [pruned_contours[i][0] for i in keep_indices]\n\n # mask out bad areas in the image\n pruned_data = np.zeros([self.height, self.width, 3])\n for contour in pruned_contours:\n cv2.fillPoly(\n pruned_data,\n pts=[contour],\n color=(\n BINARY_IM_MAX_VAL,\n BINARY_IM_MAX_VAL,\n BINARY_IM_MAX_VAL))\n pruned_data = pruned_data[:, :, 0] # convert back to one channel\n\n # preserve topology of original image\n if preserve_topology:\n orig_zeros = np.where(self.data == 0)\n pruned_data[orig_zeros[0], orig_zeros[1]] = 0\n return BinaryImage(pruned_data.astype(np.uint8), self._frame)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a list of Contour objects that are connected to the binary image containing the contour at the specified area.", "response": "def find_contours(self, min_area=0.0, max_area=np.inf):\n \"\"\"Returns a list of connected components with an area between\n min_area and max_area.\n Parameters\n ----------\n min_area : float\n The minimum area for a 
contour\n max_area : float\n The maximum area for a contour\n Returns\n -------\n :obj:`list` of :obj:`Contour`\n A list of resulting contours\n \"\"\"\n # get all contours (connected components) from the binary image\n _, contours, hierarchy = cv2.findContours(\n self.data.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)\n num_contours = len(contours)\n kept_contours = []\n\n # keep contours with areas within the given bounds\n for i in range(num_contours):\n area = cv2.contourArea(contours[i])\n logging.debug('Contour %d area: %.3f' % (len(kept_contours), area))\n if area > min_area and area < max_area:\n boundary_px = contours[i].squeeze()\n boundary_px_ij_swapped = np.zeros(boundary_px.shape)\n boundary_px_ij_swapped[:, 0] = boundary_px[:, 1]\n boundary_px_ij_swapped[:, 1] = boundary_px[:, 0]\n kept_contours.append(\n Contour(\n boundary_px_ij_swapped,\n area=area,\n frame=self._frame))\n\n return kept_contours"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates a binary image with only the given contour filled in.", "response": "def contour_mask(self, contour):\n \"\"\" Generates a binary image with only the given contour filled in. 
\"\"\"\n # fill in new data\n new_data = np.zeros(self.data.shape)\n num_boundary = contour.boundary_pixels.shape[0]\n boundary_px_ij_swapped = np.zeros([num_boundary, 1, 2])\n boundary_px_ij_swapped[:, 0, 0] = contour.boundary_pixels[:, 1]\n boundary_px_ij_swapped[:, 0, 1] = contour.boundary_pixels[:, 0]\n cv2.fillPoly(\n new_data, pts=[\n boundary_px_ij_swapped.astype(\n np.int32)], color=(\n BINARY_IM_MAX_VAL, BINARY_IM_MAX_VAL, BINARY_IM_MAX_VAL))\n orig_zeros = np.where(self.data == 0)\n new_data[orig_zeros[0], orig_zeros[1]] = 0\n return BinaryImage(new_data.astype(np.uint8), frame=self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef boundary_map(self):\n # compute contours\n contours = self.find_contours()\n\n # fill in nonzero pixels\n new_data = np.zeros(self.data.shape)\n for contour in contours:\n new_data[contour.boundary_pixels[:, 0].astype(np.uint8),\n contour.boundary_pixels[:, 1].astype(np.uint8)] = np.iinfo(np.uint8).max\n return BinaryImage(new_data.astype(np.uint8), frame=self.frame)", "response": "Computes the boundary pixels in the image and sets them to nonzero values."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds the closest set of pixels from start to pixel_set.", "response": "def closest_pixel_to_set(self, start, pixel_set, direction, w=13, t=0.5):\n \"\"\"Starting at pixel, moves start by direction * t until there is a\n pixel from pixel_set within a radius w of start. 
Then, returns start.\n\n Parameters\n ----------\n start : :obj:`numpy.ndarray` of float\n The initial pixel location at which to start.\n\n pixel_set : set of 2-tuples of float\n The set of pixels to check set intersection with\n\n direction : :obj:`numpy.ndarray` of float\n The 2D direction vector in which to move pixel.\n\n w : int\n A circular diameter in which to check for pixels.\n As soon as the current pixel has some non-zero pixel with a diameter\n w of it, this function returns the current pixel location.\n\n t : float\n The step size with which to move pixel along direction.\n\n Returns\n -------\n :obj:`numpy.ndarray` of float\n The first pixel location along the direction vector at which there\n exists some intersection with pixel_set within a radius w.\n \"\"\"\n\n # create circular structure for checking clearance\n y, x = np.meshgrid(np.arange(w) - w / 2, np.arange(w) - w / 2)\n cur_px_y = np.ravel(y + start[0]).astype(np.uint16)\n cur_px_x = np.ravel(x + start[1]).astype(np.uint16)\n \n # create comparison set, check set overlap\n cur_px = set(zip(cur_px_y, cur_px_x))\n includes = True\n if np.all(\n cur_px_y >= 0) and np.all(\n cur_px_y < self.height) and np.all(\n cur_px_x >= 0) and np.all(\n cur_px_x < self.width):\n includes = not cur_px.isdisjoint(pixel_set)\n else:\n return None\n\n # Continue until out of bounds or sets overlap\n while not includes:\n start = start + t * direction\n cur_px_y = np.ravel(y + start[0]).astype(np.uint16)\n cur_px_x = np.ravel(x + start[1]).astype(np.uint16)\n cur_px = set(zip(cur_px_y, cur_px_x))\n if np.all(\n cur_px_y >= 0) and np.all(\n cur_px_y < self.height) and np.all(\n cur_px_x >= 0) and np.all(\n cur_px_x < self.width):\n includes = not cur_px.isdisjoint(pixel_set)\n else:\n return None\n \n return start"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfinds the closest all zero pixels in a 2D direction vector.", "response": "def closest_allzero_pixel(self, 
pixel, direction, w=13, t=0.5):\n \"\"\"Starting at pixel, moves pixel by direction * t until all\n zero pixels within a radius w of pixel. Then, returns pixel.\n\n Parameters\n ----------\n pixel : :obj:`numpy.ndarray` of float\n The initial pixel location at which to start.\n\n direction : :obj:`numpy.ndarray` of float\n The 2D direction vector in which to move pixel.\n\n w : int\n A circular diameter in which to check for zero pixels.\n As soon as the current pixel has all zero pixels with a diameter\n w of it, this function returns the current pixel location.\n\n t : float\n The step size with which to move pixel along direction.\n\n Returns\n -------\n :obj:`numpy.ndarray` of float\n The first pixel location along the direction vector at which there\n exists all zero pixels within a radius w.\n \"\"\"\n # create circular structure for checking clearance\n y, x = np.meshgrid(np.arange(w) - w / 2, np.arange(w) - w / 2)\n\n cur_px_y = np.ravel(y + pixel[0]).astype(np.uint16)\n cur_px_x = np.ravel(x + pixel[1]).astype(np.uint16)\n \n # Check if all pixels in radius are in bounds and zero-valued\n empty = False\n if np.all(\n cur_px_y >= 0) and np.all(\n cur_px_y < self.height) and np.all(\n cur_px_x >= 0) and np.all(\n cur_px_x < self.width):\n empty = np.all(self[cur_px_y, cur_px_x] <= self._threshold)\n else:\n return None\n \n # If some nonzero pixels, continue incrementing along direction\n # and checking for empty space\n while not empty:\n pixel = pixel + t * direction\n cur_px_y = np.ravel(y + pixel[0]).astype(np.uint16)\n cur_px_x = np.ravel(x + pixel[1]).astype(np.uint16)\n if np.all(\n cur_px_y >= 0) and np.all(\n cur_px_y < self.height) and np.all(\n cur_px_x >= 0) and np.all(\n cur_px_x < self.width):\n empty = np.all(self[cur_px_y, cur_px_x] <= self._threshold)\n else:\n return None\n \n return pixel"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a frame to the image.", "response": "def add_frame(\n self,\n 
left_boundary,\n right_boundary,\n upper_boundary,\n lower_boundary):\n \"\"\" Adds a frame to the image, e.g. turns the boundaries white\n\n Parameters\n ----------\n left_boundary : int\n the leftmost boundary of the frame\n right_boundary : int\n the rightmost boundary of the frame (must be greater than left_boundary)\n upper_boundary : int\n the upper boundary of the frame\n lower_boundary : int\n the lower boundary of the frame (must be greater than upper_boundary)\n\n Returns\n -------\n :obj:`BinaryImage`\n binary image with white (BINARY_IM_MAX_VAL) on the boundaries\n \"\"\"\n # check valid boundary pixels\n left_boundary = max(0, left_boundary)\n right_boundary = min(self.width - 1, right_boundary)\n upper_boundary = max(0, upper_boundary)\n lower_boundary = min(self.height - 1, lower_boundary)\n\n if right_boundary < left_boundary:\n raise ValueError(\n 'Left boundary must be smaller than the right boundary')\n if upper_boundary > lower_boundary:\n raise ValueError(\n 'Upper boundary must be smaller than the lower boundary')\n\n # fill in border pixels\n bordered_data = self.data.copy()\n bordered_data[:upper_boundary, :] = BINARY_IM_MAX_VAL\n bordered_data[lower_boundary:, :] = BINARY_IM_MAX_VAL\n bordered_data[:, :left_boundary] = BINARY_IM_MAX_VAL\n bordered_data[:, right_boundary:] = BINARY_IM_MAX_VAL\n return BinaryImage(bordered_data, frame=self._frame)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds the black pixel with the largest distance from the white pixels.", "response": "def most_free_pixel(self):\n \"\"\" Find the black pixel with the largest distance from the white pixels.\n\n Returns\n -------\n :obj:`numpy.ndarray`\n 2-vector containing the most free pixel\n \"\"\"\n dist_tf = self.to_distance_im()\n max_px = np.where(dist_tf == np.max(dist_tf))\n free_pixel = np.array([max_px[0][0], max_px[1][0]])\n return free_pixel"} {"SOURCE": "codesearchnet", "instruction": "Here you have a 
function in Python 3, explain what it does\ndef diff_with_target(self, binary_im):\n red = np.array([BINARY_IM_MAX_VAL, 0, 0])\n yellow = np.array([BINARY_IM_MAX_VAL, BINARY_IM_MAX_VAL, 0])\n green = np.array([0, BINARY_IM_MAX_VAL, 0])\n overlap_data = np.zeros([self.height, self.width, 3])\n unfilled_px = np.where((self.data == 0) & (binary_im.data > 0))\n overlap_data[unfilled_px[0], unfilled_px[1], :] = red\n filled_px = np.where((self.data > 0) & (binary_im.data > 0))\n overlap_data[filled_px[0], filled_px[1], :] = green\n spurious_px = np.where((self.data > 0) & (binary_im.data == 0))\n overlap_data[spurious_px[0], spurious_px[1], :] = yellow\n return ColorImage(overlap_data.astype(np.uint8), frame=self.frame)", "response": "Creates a color image that visualizes the overlap between two images."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef num_adjacent(self, i, j):\n # check values\n if i < 1 or i > self.height - 2 or j < 1 or j > self.width - 2:\n raise ValueError('Pixels out of bounds')\n\n # count the number of nonzero neighbors\n count = 0\n diffs = [[-1, 0], [1, 0], [0, -1], [0, 1]]\n for d in diffs:\n if self.data[i + d[0]][j + d[1]] > self._threshold:\n count += 1\n return count", "response": "Counts the number of adjacent nonzero pixels to a given pixel."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef to_sdf(self):\n # compute medial axis transform\n skel, sdf_in = morph.medial_axis(self.data, return_distance=True)\n useless_skel, sdf_out = morph.medial_axis(\n np.iinfo(np.uint8).max - self.data, return_distance=True)\n\n # convert to true sdf\n sdf = sdf_out - sdf_in\n return sdf", "response": "Converts the 2D image to a 2D signed distance field."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef to_color(self):\n color_data = np.zeros([self.height, self.width, 3])\n color_data[:, :, 0] = self.data\n 
color_data[:, :, 1] = self.data\n color_data[:, :, 2] = self.data\n return ColorImage(color_data.astype(np.uint8), self._frame)", "response": "Creates a ColorImage from the binary image."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef open(filename, frame='unspecified'):\n data = Image.load_data(filename)\n if len(data.shape) > 2 and data.shape[2] > 1:\n data = data[:, :, 0]\n return BinaryImage(data, frame)", "response": "Creates a BinaryImage from a file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _check_valid_data(self, data):\n if data.dtype.type is not np.float32 and \\\n data.dtype.type is not np.float64:\n raise ValueError(\n 'Illegal data type. RGB-D images only support float arrays')\n\n if len(data.shape) != 3 or data.shape[2] != 4:\n raise ValueError(\n 'Illegal data type. RGB-D images only support four channels')\n\n color_data = data[:, :, :3]\n if np.any((color_data < 0) | (color_data > BINARY_IM_MAX_VAL)):\n raise ValueError(\n 'Color channels must be in the range (0, BINARY_IM_MAX_VAL)')", "response": "Checks that the given data is a float array with four channels."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating an RGB-D image from a separate color and depth image.", "response": "def from_color_and_depth(color_im, depth_im):\n \"\"\" Creates an RGB-D image from a separate color and depth image. 
\"\"\"\n # check shape\n if color_im.height != depth_im.height or color_im.width != depth_im.width:\n raise ValueError('Color and depth images must have the same shape')\n\n # check frame\n if color_im.frame != depth_im.frame:\n raise ValueError('Color and depth images must have the same frame')\n\n # form composite data\n rgbd_data = np.zeros([color_im.height, color_im.width, 4])\n rgbd_data[:, :, :3] = color_im.data.astype(np.float64)\n rgbd_data[:, :, 3] = depth_im.data\n return RgbdImage(rgbd_data, frame=color_im.frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef color(self):\n return ColorImage(self.raw_data[:, :, :3].astype(\n np.uint8), frame=self.frame)", "response": "Returns the color image."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mask_binary(self, binary_im):\n color = self.color.mask_binary(binary_im)\n depth = self.depth.mask_binary(binary_im)\n return RgbdImage.from_color_and_depth(color, depth)", "response": "Create a new image by zeroing out the pixels of the given binary image."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncrops the image centered around center_i and center_j.", "response": "def crop(self, height, width, center_i=None, center_j=None):\n \"\"\"Crop the image centered around center_i, center_j.\n\n Parameters\n ----------\n height : int\n The height of the desired image.\n\n width : int\n The width of the desired image.\n\n center_i : int\n The center height point at which to crop. If not specified, the center\n of the image is used.\n\n center_j : int\n The center width point at which to crop. 
If not specified, the center\n of the image is used.\n\n Returns\n -------\n :obj:`Image`\n A cropped Image of the same type.\n \"\"\"\n # crop channels separately\n color_im_cropped = self.color.crop(height, width,\n center_i=center_i,\n center_j=center_j)\n depth_im_cropped = self.depth.crop(height, width,\n center_i=center_i,\n center_j=center_j)\n\n # return combination of cropped data\n return RgbdImage.from_color_and_depth(\n color_im_cropped, depth_im_cropped)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef transform(self, translation, theta, method='opencv'):\n # transform channels separately\n color_im_tf = self.color.transform(translation, theta, method=method)\n depth_im_tf = self.depth.transform(translation, theta, method=method)\n\n # return combination of cropped data\n return RgbdImage.from_color_and_depth(color_im_tf, depth_im_tf)", "response": "Create a new image by translating and rotating the current image."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts to a grayscale and depth image.", "response": "def to_grayscale_depth(self):\n \"\"\" Converts to a grayscale and depth (G-D) image. 
\"\"\"\n gray = self.color.to_grayscale()\n return GdImage.from_grayscale_and_depth(gray, self.depth)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a new rgbd image with the values of a different rgbd image.", "response": "def combine_with(self, rgbd_im):\n \"\"\"\n Replaces all zeros in the source rgbd image with the values of a different rgbd image\n\n Parameters\n ----------\n rgbd_im : :obj:`RgbdImage`\n rgbd image to combine with\n\n Returns\n -------\n :obj:`RgbdImage`\n the combined rgbd image\n \"\"\"\n new_data = self.data.copy()\n depth_data = self.depth.data\n other_depth_data = rgbd_im.depth.data\n depth_zero_px = self.depth.zero_pixels()\n depth_replace_px = np.where(\n (other_depth_data != 0) & (\n other_depth_data < depth_data))\n depth_replace_px = np.c_[depth_replace_px[0], depth_replace_px[1]]\n\n # replace zero pixels\n new_data[depth_zero_px[:, 0], depth_zero_px[:, 1],\n :] = rgbd_im.data[depth_zero_px[:, 0], depth_zero_px[:, 1], :]\n\n # take closest pixel\n new_data[depth_replace_px[:, 0], depth_replace_px[:, 1],\n :] = rgbd_im.data[depth_replace_px[:, 0], depth_replace_px[:, 1], :]\n\n return RgbdImage(new_data, frame=self.frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncrops the image centered around center_i center_j.", "response": "def crop(self, height, width, center_i=None, center_j=None):\n \"\"\"Crop the image centered around center_i, center_j.\n\n Parameters\n ----------\n height : int\n The height of the desired image.\n\n width : int\n The width of the desired image.\n\n center_i : int\n The center height point at which to crop. If not specified, the center\n of the image is used.\n\n center_j : int\n The center width point at which to crop. 
If not specified, the center\n of the image is used.\n\n Returns\n -------\n :obj:`Image`\n A cropped Image of the same type.\n \"\"\"\n color_im_crop = self.color.crop(height, width, center_i, center_j)\n depth_im_crop = self.depth.crop(height, width, center_i, center_j)\n return RgbdImage.from_color_and_depth(color_im_crop, depth_im_crop)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef from_grayscale_and_depth(gray_im, depth_im):\n # check shape\n if gray_im.height != depth_im.height or gray_im.width != depth_im.width:\n raise ValueError(\n 'Grayscale and depth images must have the same shape')\n\n # check frame\n if gray_im.frame != depth_im.frame:\n raise ValueError(\n 'Grayscale and depth images must have the same frame')\n\n # form composite data\n gd_data = np.zeros([gray_im.height, gray_im.width, 2])\n gd_data[:, :, 0] = gray_im.data.astype(np.float64)\n gd_data[:, :, 1] = depth_im.data\n return GdImage(gd_data, frame=gray_im.frame)", "response": "Creates an G - D image from a separate grayscale and depth image."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef gray(self):\n return GrayscaleImage(\n self.raw_data[:, :, 0].astype(np.uint8), frame=self.frame)", "response": "Returns the grayscale image."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef resize(self, size, interp='bilinear'):\n # resize channels separately\n gray_im_resized = self.gray.resize(size, interp)\n depth_im_resized = self.depth.resize(size, interp)\n\n # return combination of resized data\n return GdImage.from_grayscale_and_depth(\n gray_im_resized, depth_im_resized)", "response": "Resize the image.\n\n Parameters\n ----------\n size : int, float, or tuple\n * int - Percentage of current size.\n * float - Fraction of current size.\n * tuple - Size of the output image.\n\n interp : :obj:`str`, optional\n 
Interpolation to use for re-sizing ('nearest', 'lanczos', 'bilinear',\n 'bicubic', or 'cubic')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncropping the image centered around center_i center_j.", "response": "def crop(self, height, width, center_i=None, center_j=None):\n \"\"\"Crop the image centered around center_i, center_j.\n\n Parameters\n ----------\n height : int\n The height of the desired image.\n\n width : int\n The width of the desired image.\n\n center_i : int\n The center height point at which to crop. If not specified, the center\n of the image is used.\n\n center_j : int\n The center width point at which to crop. If not specified, the center\n of the image is used.\n\n Returns\n -------\n :obj:`Image`\n A cropped Image of the same type.\n \"\"\"\n gray_im_crop = self.gray.crop(height, width, center_i, center_j)\n depth_im_crop = self.depth.crop(height, width, center_i, center_j)\n return GdImage.from_grayscale_and_depth(gray_im_crop, depth_im_crop)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef border_pixels(\n self,\n grad_sigma=0.5,\n grad_lower_thresh=0.1,\n grad_upper_thresh=1.0):\n \"\"\"\n Returns the pixels on the boundary between all segments, excluding the zero segment.\n\n Parameters\n ----------\n grad_sigma : float\n standard deviation used for gaussian gradient filter\n grad_lower_thresh : float\n lower threshold on gradient threshold used to determine the boundary pixels\n grad_upper_thresh : float\n upper threshold on gradient threshold used to determine the boundary pixels\n\n Returns\n -------\n :obj:`numpy.ndarray`\n Nx2 array of pixels on the boundary\n \"\"\"\n # boundary pixels\n boundary_im = np.ones(self.shape)\n for i in range(1, self.num_segments):\n label_border_im = self.data.copy()\n label_border_im[self.data == 0] = i\n grad_mag = sf.gaussian_gradient_magnitude(\n label_border_im.astype(np.float32), 
sigma=grad_sigma)\n\n nonborder_px = np.where(\n (grad_mag < grad_lower_thresh) | (\n grad_mag > grad_upper_thresh))\n boundary_im[nonborder_px[0], nonborder_px[1]] = 0\n\n # return boundary pixels\n border_px = np.where(boundary_im > 0)\n border_px = np.c_[border_px[0], border_px[1]]\n return border_px", "response": "Returns the pixels on the boundary between all segments excluding the zero segment."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef segment_mask(self, segnum):\n binary_data = np.zeros(self.shape)\n binary_data[self.data == segnum] = BINARY_IM_MAX_VAL\n return BinaryImage(binary_data.astype(np.uint8), frame=self.frame)", "response": "Returns a binary image of just the segment corresponding to the given number."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef mask_binary(self, binary_im):\n data = np.copy(self._data)\n ind = np.where(binary_im.data == 0)\n data[ind[0], ind[1], :] = 0\n return SegmentationImage(data, self._frame)", "response": "Create a new image by zeroing out the data at locations\n where binary_im == 0. 
0."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef open(filename, frame='unspecified'):\n data = Image.load_data(filename)\n return SegmentationImage(data, frame)", "response": "Opens a segmentation image from a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef resize(self, size, interp='nearest'):\n resized_data_0 = sm.imresize(self._data[:,:,0], size, interp=interp, mode='F')\n resized_data_1 = sm.imresize(self._data[:,:,1], size, interp=interp, mode='F')\n resized_data_2 = sm.imresize(self._data[:,:,2], size, interp=interp, mode='F')\n resized_data = np.zeros([resized_data_0.shape[0],\n resized_data_0.shape[1],\n self.channels])\n resized_data[:,:,0] = resized_data_0\n resized_data[:,:,1] = resized_data_1\n resized_data[:,:,2] = resized_data_2\n return PointCloudImage(resized_data, self._frame)", "response": "Resize the image.\n\n Parameters\n ----------\n size : int, float, or tuple\n * int - Percentage of current size.\n * float - Fraction of current size.\n * tuple - Size of the output image.\n\n interp : :obj:`str`, optional\n Interpolation to use for re-sizing ('nearest', 'lanczos', 'bilinear',\n 'bicubic', or 'cubic')\n\n Returns\n -------\n :obj:`PointCloudImage`\n The resized image."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting the point cloud to a mesh.", "response": "def to_mesh(self, dist_thresh=0.01):\n \"\"\" Convert the point cloud to a mesh.\n\n Returns\n -------\n :obj:`trimesh.Trimesh`\n mesh of the point cloud\n \"\"\"\n # init vertex and triangle buffers\n vertices = []\n triangles = []\n vertex_indices = -1 * np.ones([self.height, self.width]).astype(np.int32)\n \n for i in range(self.height-1):\n for j in range(self.width-1):\n # read corners of square\n v0 = self.data[i,j,:]\n v1 = self.data[i,j+1,:]\n v2 = self.data[i+1,j,:]\n v3 = self.data[i+1,j+1,:]\n\n # 
check distances\n d01 = np.abs(v0[2] - v1[2])\n d02 = np.abs(v0[2] - v2[2])\n d03 = np.abs(v0[2] - v3[2])\n d13 = np.abs(v1[2] - v3[2])\n d23 = np.abs(v2[2] - v3[2])\n\n # add tri 1\n if max(d01, d03, d13) < dist_thresh:\n # add vertices\n if vertex_indices[i,j] == -1:\n vertices.append(v0)\n vertex_indices[i,j] = len(vertices)-1\n if vertex_indices[i,j+1] == -1:\n vertices.append(v1)\n vertex_indices[i,j+1] = len(vertices)-1\n if vertex_indices[i+1,j+1] == -1:\n vertices.append(v3)\n vertex_indices[i+1,j+1] = len(vertices)-1\n \n # add tri\n i0 = vertex_indices[i,j]\n i1 = vertex_indices[i,j+1]\n i3 = vertex_indices[i+1,j+1]\n triangles.append([i0, i1, i3])\n\n # add tri 2\n if max(d01, d03, d23) < dist_thresh:\n # add vertices\n if vertex_indices[i,j] == -1:\n vertices.append(v0)\n vertex_indices[i,j] = len(vertices)-1\n if vertex_indices[i+1,j] == -1:\n vertices.append(v2)\n vertex_indices[i+1,j] = len(vertices)-1\n if vertex_indices[i+1,j+1] == -1:\n vertices.append(v3)\n vertex_indices[i+1,j+1] = len(vertices)-1\n \n # add tri\n i0 = vertex_indices[i,j]\n i2 = vertex_indices[i+1,j]\n i3 = vertex_indices[i+1,j+1]\n triangles.append([i0, i3, i2])\n\n # return trimesh\n import trimesh\n mesh = trimesh.Trimesh(vertices, triangles)\n return mesh"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts the image to a PointCloud object.", "response": "def to_point_cloud(self):\n \"\"\"Convert the image to a PointCloud object.\n\n Returns\n -------\n :obj:`autolab_core.PointCloud`\n The corresponding PointCloud.\n \"\"\"\n return PointCloud(\n data=self._data.reshape(\n self.height *\n self.width,\n 3).T,\n frame=self._frame)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef normal_cloud_im(self, ksize=3):\n # compute direction via cross product of derivatives\n gy = cv2.Sobel(self.data, cv2.CV_64F, 1, 0, ksize=ksize)\n gx = cv2.Sobel(self.data, cv2.CV_64F, 0, 1, ksize=ksize)\n 
gx_data = gx.reshape(self.height * self.width, 3)\n gy_data = gy.reshape(self.height * self.width, 3)\n pc_grads = np.cross(gx_data, gy_data) # default to point toward camera\n\n # normalize\n pc_grad_norms = np.linalg.norm(pc_grads, axis=1)\n pc_grads[pc_grad_norms > 0] = pc_grads[pc_grad_norms > 0] / np.tile(pc_grad_norms[pc_grad_norms > 0, np.newaxis], [1, 3])\n pc_grads[pc_grad_norms == 0.0] = np.array([0,0,-1.0]) # zero norm means pointing toward camera\n\n # reshape\n normal_im_data = pc_grads.reshape(self.height, self.width, 3)\n\n # preserve zeros\n zero_px = self.zero_pixels()\n normal_im_data[zero_px[:,0], zero_px[:,1], :] = np.zeros(3)\n \n return NormalCloudImage(normal_im_data, frame=self.frame)", "response": "Generate a NormalCloudImage from the PointCloudImage using Sobel filtering."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a PointCloudImage from a file.", "response": "def open(filename, frame='unspecified'):\n \"\"\"Creates a PointCloudImage from a file.\n\n Parameters\n ----------\n filename : :obj:`str`\n The file to load the data from. 
Must be one of .png, .jpg,\n .npy, or .npz.\n\n frame : :obj:`str`\n A string representing the frame of reference in which the new image\n lies.\n\n Returns\n -------\n :obj:`PointCloudImage`\n The new PointCloudImage.\n \"\"\"\n data = Image.load_data(filename)\n return PointCloudImage(data, frame)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef to_normal_cloud(self):\n return NormalCloud(\n data=self._data.reshape(\n self.height *\n self.width,\n 3).T,\n frame=self._frame)", "response": "Convert the image to a NormalCloud object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef open(filename, frame='unspecified'):\n data = Image.load_data(filename)\n return NormalCloudImage(data, frame)", "response": "Creates a NormalCloudImage from a file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nprojecting a point cloud onto the camera image plane and creates a DepthImage.", "response": "def project_to_image(self, point_cloud, round_px=True):\n \"\"\"Projects a point cloud onto the camera image plane and creates\n a depth image. Zero depth means no point projected into the camera\n at that pixel location (i.e. 
infinite depth).\n\n Parameters\n ----------\n point_cloud : :obj:`autolab_core.PointCloud` or :obj:`autolab_core.Point`\n A PointCloud or Point to project onto the camera image plane.\n\n round_px : bool\n If True, projections are rounded to the nearest pixel.\n\n Returns\n -------\n :obj:`DepthImage`\n A DepthImage generated from projecting the point cloud into the\n camera.\n\n Raises\n ------\n ValueError\n If the input is not a PointCloud or Point in the same reference\n frame as the camera.\n \"\"\"\n if not isinstance(point_cloud, PointCloud) and not (isinstance(point_cloud, Point) and point_cloud.dim == 3):\n raise ValueError('Must provide PointCloud or 3D Point object for projection')\n if point_cloud.frame != self._frame:\n raise ValueError('Cannot project points in frame %s into camera with frame %s' %(point_cloud.frame, self._frame))\n\n points_proj = self.S.dot(point_cloud.data) + self.t\n if len(points_proj.shape) == 1:\n points_proj = points_proj[:, np.newaxis]\n point_depths = points_proj[2,:]\n point_z = np.tile(point_depths, [3, 1])\n points_proj = np.divide(points_proj, point_z)\n if round_px:\n points_proj = np.round(points_proj)\n points_proj = points_proj[:2,:].astype(np.int16)\n\n valid_ind = np.where((points_proj[0,:] >= 0) & \\\n (points_proj[1,:] >= 0) & \\\n (points_proj[0,:] < self.width) & \\\n (points_proj[1,:] < self.height))[0]\n\n depth_data = np.zeros([self.height, self.width])\n depth_data[points_proj[1,valid_ind], points_proj[0,valid_ind]] = point_depths[valid_ind]\n return DepthImage(depth_data, frame=self.frame)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef deproject(self, depth_image):\n # check valid input\n if not isinstance(depth_image, DepthImage):\n raise ValueError('Must provide DepthImage object for projection')\n if depth_image.frame != self._frame:\n raise ValueError('Cannot deproject points in frame %s from camera with frame %s' 
%(depth_image.frame, self._frame))\n\n # create homogeneous pixels\n row_indices = np.arange(depth_image.height)\n col_indices = np.arange(depth_image.width)\n pixel_grid = np.meshgrid(col_indices, row_indices)\n pixels = np.c_[pixel_grid[0].flatten(), pixel_grid[1].flatten()].T\n depth_data = depth_image.data.flatten()\n pixels_homog = np.r_[pixels, depth_data.reshape(1, depth_data.shape[0])]\n\n # deproject\n points_3d = np.linalg.inv(self.S).dot(pixels_homog - np.tile(self.t.reshape(3,1), [1, pixels_homog.shape[1]]))\n return PointCloud(data=points_3d, frame=self._frame)", "response": "Deprojects a DepthImage into a PointCloud."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef deproject_pixel(self, depth, pixel):\n if not isinstance(pixel, Point) or not pixel.dim == 2:\n raise ValueError('Must provide 2D Point object for pixel projection')\n if pixel.frame != self._frame:\n raise ValueError('Cannot deproject pixel in frame %s from camera with frame %s' %(pixel.frame, self._frame))\n\n point = np.r_[pixel.data, depth]\n point_3d = np.linalg.inv(self.S).dot(point - self.t)\n return Point(data=point_3d, frame=self._frame)", "response": "Deprojects a single pixel with a given depth into a 3D point."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef save(self, filename):\n file_root, file_ext = os.path.splitext(filename)\n if file_ext.lower() != INTR_EXTENSION:\n raise ValueError('Extension %s not supported for OrthographicIntrinsics. Must be stored with extension %s' %(file_ext, INTR_EXTENSION))\n\n camera_intr_dict = copy.deepcopy(self.__dict__)\n f = open(filename, 'w')\n json.dump(camera_intr_dict, f)\n f.close()", "response": "Save the camera intrinsics object to a .intr file."} 
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load(filename):\n file_root, file_ext = os.path.splitext(filename)\n if file_ext.lower() != INTR_EXTENSION:\n raise ValueError('Extension %s not supported for CameraIntrinsics. Must be stored with extension %s' %(file_ext, INTR_EXTENSION))\n\n f = open(filename, 'r')\n ci = json.load(f)\n f.close()\n return OrthographicIntrinsics(frame=ci['_frame'],\n vol_height=ci['_vol_height'],\n vol_width=ci['_vol_width'],\n vol_depth=ci['_vol_depth'],\n plane_height=ci['_plane_height'],\n plane_width=ci['_plane_width'],\n depth_scale=ci['_depth_scale'])", "response": "Loads a CameraIntrinsics object from a file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nretrieve a new frame from the PhoXi and convert it to a ColorImage DepthImage and IrImage.", "response": "def frames(self):\n \"\"\"Retrieve a new frame from the PhoXi and convert it to a ColorImage,\n a DepthImage, and an IrImage.\n\n Returns\n -------\n :obj:`tuple` of :obj:`ColorImage`, :obj:`DepthImage`, :obj:`IrImage`, :obj:`numpy.ndarray`\n The ColorImage, DepthImage, and IrImage of the current frame.\n \"\"\"\n # Run a software trigger\n times = []\n\n rospy.ServiceProxy('phoxi_camera/start_acquisition', Empty)()\n rospy.ServiceProxy('phoxi_camera/trigger_image', TriggerImage)()\n\n self._cur_color_im = None\n self._cur_depth_im = None\n self._cur_normal_map = None\n\n rospy.ServiceProxy('phoxi_camera/get_frame', GetFrame)(-1)\n\n max_time = 5.0\n time_waiting = 0.0\n while self._cur_color_im is None or self._cur_depth_im is None or self._cur_normal_map is None:\n time.sleep(0.05)\n time_waiting += 0.05\n if time_waiting > max_time:\n raise SensorUnresponsiveException('PhoXi sensor seems to be non-responsive')\n return self._cur_color_im, self._cur_depth_im, None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following 
Python 3 function does\ndef _connect_to_sensor(self):\n name = self._device_name\n try:\n # Check if device is actively in list\n rospy.wait_for_service('phoxi_camera/get_device_list')\n device_list = rospy.ServiceProxy('phoxi_camera/get_device_list', GetDeviceList)().out\n if not str(name) in device_list:\n logging.error('PhoXi sensor {} not in list of active devices'.format(name))\n return False\n\n success = rospy.ServiceProxy('phoxi_camera/connect_camera', ConnectCamera)(name).success\n if not success:\n logging.error('Could not connect to PhoXi sensor {}'.format(name))\n return False\n\n logging.debug('Connected to PhoXi Sensor {}'.format(name))\n return True\n\n except rospy.ServiceException as e:\n logging.error('Service call failed: {}'.format(e))\n return False", "response": "Connect to the sensor."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nstarting the OpenCVCameraSensor Stream", "response": "def start(self):\n \"\"\" Starts the OpenCVCameraSensor Stream\n Raises:\n Exception if unable to open stream\n \"\"\"\n self._sensor = cv2.VideoCapture(self._device_id)\n if not self._sensor.isOpened():\n raise Exception(\"Unable to open OpenCVCameraSensor for id {0}\".format(self._device_id))\n self.flush()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef frames(self, flush=True):\n self.flush()\n ret_val, frame = self._sensor.read()\n if not ret_val:\n raise Exception(\"Unable to retrieve frame from OpenCVCameraSensor for id {0}\".format(self._device_id))\n frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n if self._upside_down:\n frame = np.flipud(frame).astype(np.uint8)\n frame = np.fliplr(frame).astype(np.uint8)\n return ColorImage(frame)", "response": "Returns the latest color image from the stream"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the image associated with a particular render mode", "response": "def image(self, render_mode):\n 
\"\"\" Get the image associated with a particular render mode \"\"\"\n if render_mode == RenderMode.SEGMASK:\n return self.query_im\n elif render_mode == RenderMode.COLOR:\n return self.color_im\n elif render_mode == RenderMode.DEPTH:\n return self.depth_im\n else:\n raise ValueError('Render mode %s not supported' %(render_mode))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetecting all relevant objects in an rgbd image pair using foreground masking.", "response": "def detect(self, color_im, depth_im, cfg, camera_intr=None,\n T_camera_world=None, segmask=None):\n \"\"\"\n Detects all relevant objects in an rgbd image pair using foreground masking.\n\n Parameters\n ----------\n color_im : :obj:`ColorImage`\n color image for detection\n depth_im : :obj:`DepthImage`\n depth image for detection (corresponds to color image)\n cfg : :obj:`YamlConfig`\n parameters of detection function\n camera_intr : :obj:`CameraIntrinsics`\n intrinsics of the camera\n T_camera_world : :obj:`autolab_core.RigidTransform`\n registration of the camera to world frame\n segmask : :obj:`BinaryImage`\n optional segmask of invalid pixels\n\n Returns\n ------\n :obj:`list` of :obj:`RgbdDetection`\n all detections in the image\n \"\"\"\n # read params\n foreground_mask_tolerance = cfg['foreground_mask_tolerance']\n min_contour_area = cfg['min_contour_area']\n max_contour_area = cfg['max_contour_area']\n w = cfg['filter_dim']\n\n # mask image using background detection\n bgmodel = color_im.background_model()\n binary_im = color_im.foreground_mask(foreground_mask_tolerance, bgmodel=bgmodel)\n\n # filter the image\n y, x = np.ogrid[-w/2+1:w/2+1, -w/2+1:w/2+1]\n mask = x*x + y*y <= w/2*w/2\n filter_struct = np.zeros([w,w]).astype(np.uint8)\n filter_struct[mask] = 1\n binary_im_filtered = binary_im.apply(snm.grey_closing, structure=filter_struct)\n\n visualize = False\n if visualize:\n plt.figure()\n plt.imshow(binary_im_filtered.data, cmap=plt.cm.gray)\n 
plt.axis('off')\n plt.show()\n\n # find all contours\n contours = binary_im_filtered.find_contours(min_area=min_contour_area, max_area=max_contour_area)\n\n # convert contours to detections\n detections = []\n for contour in contours:\n box = contour.bounding_box\n color_thumbnail = color_im.crop(box.height, box.width, box.ci, box.cj)\n depth_thumbnail = depth_im.crop(box.height, box.width, box.ci, box.cj)\n binary_thumbnail = binary_im_filtered.crop(box.height, box.width, box.ci, box.cj)\n thumbnail_intr = camera_intr\n if camera_intr is not None:\n thumbnail_intr = camera_intr.crop(box.height, box.width, box.ci, box.cj)\n detections.append(RgbdDetection(color_thumbnail,\n depth_thumbnail,\n box,\n binary_thumbnail=binary_thumbnail,\n contour=contour,\n camera_intr=thumbnail_intr))\n return detections"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef detect(self, color_im, depth_im, cfg, camera_intr=None,\n T_camera_world=None,\n vis_foreground=False, vis_segmentation=False, segmask=None):\n \"\"\"\n Detects all relevant objects in an rgbd image pair using foreground masking.\n\n Parameters\n ----------\n color_im : :obj:`ColorImage`\n color image for detection\n depth_im : :obj:`DepthImage`\n depth image for detection (corresponds to color image)\n cfg : :obj:`YamlConfig`\n parameters of detection function\n camera_intr : :obj:`CameraIntrinsics`\n intrinsics of the camera\n T_camera_world : :obj:`autolab_core.RigidTransform`\n registration of the camera to world frame\n segmask : :obj:`BinaryImage`\n optional segmask of invalid pixels\n\n Returns\n ------\n :obj:`list` of :obj:`RgbdDetection`\n all detections in the image\n \"\"\"\n # read params\n foreground_mask_tolerance = cfg['foreground_mask_tolerance']\n min_contour_area = cfg['min_contour_area']\n max_contour_area = cfg['max_contour_area']\n min_box_area = cfg['min_box_area']\n max_box_area = cfg['max_box_area']\n box_padding_px = 
cfg['box_padding_px']\n crop_height = cfg['image_height']\n crop_width = cfg['image_width']\n depth_grad_thresh = cfg['depth_grad_thresh']\n\n w = cfg['filter_dim']\n\n half_crop_height = float(crop_height) / 2\n half_crop_width = float(crop_width) / 2\n half_crop_dims = np.array([half_crop_height, half_crop_width])\n\n fill_depth = np.max(depth_im.data[depth_im.data > 0])\n\n kinect2_denoising = False\n if 'kinect2_denoising' in cfg.keys() and cfg['kinect2_denoising']:\n kinect2_denoising = True\n depth_offset = cfg['kinect2_noise_offset']\n max_depth = cfg['kinect2_noise_max_depth']\n\n # mask image using background detection\n bgmodel = color_im.background_model()\n binary_im = color_im.foreground_mask(foreground_mask_tolerance, bgmodel=bgmodel)\n\n # filter the image\n y, x = np.ogrid[-w/2+1:w/2+1, -w/2+1:w/2+1]\n mask = x*x + y*y <= w/2*w/2\n filter_struct = np.zeros([w,w]).astype(np.uint8)\n filter_struct[mask] = 1\n binary_im_filtered = binary_im.apply(snm.grey_closing, structure=filter_struct)\n\n # find all contours\n contours = binary_im_filtered.find_contours(min_area=min_contour_area, max_area=max_contour_area)\n\n if vis_foreground:\n plt.figure()\n plt.subplot(1,2,1)\n plt.imshow(color_im.data)\n plt.axis('off')\n plt.subplot(1,2,2)\n plt.imshow(binary_im_filtered.data, cmap=plt.cm.gray)\n plt.axis('off')\n plt.show()\n\n # threshold gradients of depth\n depth_im = depth_im.threshold_gradients(depth_grad_thresh)\n\n # convert contours to detections\n detections = []\n for contour in contours:\n orig_box = contour.bounding_box\n if orig_box.area > min_box_area and orig_box.area < max_box_area:\n # convert orig bounding box to query bounding box\n min_pt = orig_box.center - half_crop_dims\n max_pt = orig_box.center + half_crop_dims\n query_box = Box(min_pt, max_pt, frame=orig_box.frame)\n\n # segment color to get refined detection\n color_thumbnail = color_im.crop(query_box.height, query_box.width, query_box.ci, query_box.cj)\n binary_thumbnail, 
segment_thumbnail, query_box = self._segment_color(color_thumbnail, query_box, bgmodel, cfg, vis_segmentation=vis_segmentation)\n if binary_thumbnail is None:\n continue\n else:\n # otherwise take original bounding box\n query_box = Box(contour.bounding_box.min_pt - box_padding_px,\n contour.bounding_box.max_pt + box_padding_px,\n frame = contour.bounding_box.frame)\n\n binary_thumbnail = binary_im_filtered.crop(query_box.height, query_box.width, query_box.ci, query_box.cj)\n\n # crop to get thumbnails\n color_thumbnail = color_im.crop(query_box.height, query_box.width, query_box.ci, query_box.cj)\n depth_thumbnail = depth_im.crop(query_box.height, query_box.width, query_box.ci, query_box.cj)\n thumbnail_intr = camera_intr\n if camera_intr is not None:\n thumbnail_intr = camera_intr.crop(query_box.height, query_box.width, query_box.ci, query_box.cj)\n\n # fix depth thumbnail\n depth_thumbnail = depth_thumbnail.replace_zeros(fill_depth)\n if kinect2_denoising:\n depth_data = depth_thumbnail.data\n min_depth = np.min(depth_data)\n binary_mask_data = binary_thumbnail.data\n depth_mask_data = depth_thumbnail.mask_binary(binary_thumbnail).data\n depth_mask_data += depth_offset\n depth_data[binary_mask_data > 0] = depth_mask_data[binary_mask_data > 0]\n depth_thumbnail = DepthImage(depth_data, depth_thumbnail.frame)\n\n # append to detections\n detections.append(RgbdDetection(color_thumbnail,\n depth_thumbnail,\n query_box,\n binary_thumbnail=binary_thumbnail,\n contour=contour,\n camera_intr=thumbnail_intr))\n\n return detections", "response": "Detects all relevant objects in an rgbd image pair using foreground masking."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef detect(self, color_im, depth_im, cfg, camera_intr,\n T_camera_world,\n vis_foreground=False, vis_segmentation=False, segmask=None):\n \"\"\"Detects all relevant objects in an rgbd image pair using foreground masking.\n\n Parameters\n ----------\n color_im : 
:obj:`ColorImage`\n color image for detection\n depth_im : :obj:`DepthImage`\n depth image for detection (corresponds to color image)\n cfg : :obj:`YamlConfig`\n parameters of detection function\n camera_intr : :obj:`CameraIntrinsics`\n intrinsics of the camera\n T_camera_world : :obj:`autolab_core.RigidTransform`\n registration of the camera to world frame\n segmask : :obj:`BinaryImage`\n optional segmask of invalid pixels\n\n Returns\n -------\n :obj:`list` of :obj:`RgbdDetection`\n all detections in the image\n \"\"\"\n # read params\n min_pt_box = np.array(cfg['min_pt'])\n max_pt_box = np.array(cfg['max_pt'])\n min_contour_area = cfg['min_contour_area']\n max_contour_area = cfg['max_contour_area']\n min_box_area = cfg['min_box_area']\n max_box_area = cfg['max_box_area']\n box_padding_px = cfg['box_padding_px']\n crop_height = cfg['image_height']\n crop_width = cfg['image_width']\n depth_grad_thresh = cfg['depth_grad_thresh']\n point_cloud_mask_only = cfg['point_cloud_mask_only']\n\n w = cfg['filter_dim']\n\n half_crop_height = float(crop_height) / 2\n half_crop_width = float(crop_width) / 2\n half_crop_dims = np.array([half_crop_height, half_crop_width])\n\n fill_depth = np.max(depth_im.data[depth_im.data > 0])\n\n kinect2_denoising = False\n if 'kinect2_denoising' in cfg.keys() and cfg['kinect2_denoising']:\n kinect2_denoising = True\n depth_offset = cfg['kinect2_noise_offset']\n max_depth = cfg['kinect2_noise_max_depth']\n\n box = Box(min_pt_box, max_pt_box, 'world')\n\n # project into 3D\n point_cloud_cam = camera_intr.deproject(depth_im)\n point_cloud_world = T_camera_world * point_cloud_cam\n seg_point_cloud_world, _ = point_cloud_world.box_mask(box)\n seg_point_cloud_cam = T_camera_world.inverse() * seg_point_cloud_world\n depth_im_seg = camera_intr.project_to_image(seg_point_cloud_cam)\n\n # mask image using background detection\n bgmodel = color_im.background_model()\n binary_im = depth_im_seg.to_binary()\n if segmask is not None:\n binary_im = 
binary_im.mask_binary(segmask.inverse())\n\n # filter the image\n y, x = np.ogrid[-w/2+1:w/2+1, -w/2+1:w/2+1]\n mask = x*x + y*y <= w/2*w/2\n filter_struct = np.zeros([w,w]).astype(np.uint8)\n filter_struct[mask] = 1\n binary_im_filtered_data = snm.binary_dilation(binary_im.data, structure=filter_struct)\n binary_im_filtered = BinaryImage(binary_im_filtered_data.astype(np.uint8),\n frame=binary_im.frame,\n threshold=0)\n\n # find all contours\n contours = binary_im_filtered.find_contours(min_area=min_contour_area, max_area=max_contour_area)\n\n if vis_foreground:\n plt.figure()\n plt.subplot(1,3,1)\n plt.imshow(color_im.data)\n plt.imshow(segmask.data, cmap=plt.cm.gray)\n plt.axis('off')\n plt.subplot(1,3,2)\n plt.imshow(binary_im.data, cmap=plt.cm.gray)\n plt.axis('off')\n plt.subplot(1,3,3)\n plt.imshow(binary_im_filtered.data, cmap=plt.cm.gray)\n plt.axis('off')\n plt.show()\n\n # switch to just return the mean of nonzero_px\n if point_cloud_mask_only == 1:\n center_px = np.mean(binary_im_filtered.nonzero_pixels(), axis=0)\n ci = center_px[0]\n cj = center_px[1]\n binary_thumbnail = binary_im_filtered.crop(crop_height, crop_width, ci, cj)\n color_thumbnail = color_im.crop(crop_height, crop_width, ci, cj)\n depth_thumbnail = depth_im.crop(crop_height, crop_width, ci, cj)\n thumbnail_intr = camera_intr\n if camera_intr is not None:\n thumbnail_intr = camera_intr.crop(crop_height, crop_width, ci, cj)\n \n \n query_box = Box(center_px - half_crop_dims, center_px + half_crop_dims)\n return [RgbdDetection(color_thumbnail,\n depth_thumbnail,\n query_box,\n binary_thumbnail=binary_thumbnail,\n contour=None,\n camera_intr=thumbnail_intr)]\n\n # convert contours to detections\n detections = []\n for i, contour in enumerate(contours):\n orig_box = contour.bounding_box\n logging.debug('Orig box %d area: %.3f' %(i, orig_box.area))\n if orig_box.area > min_box_area and orig_box.area < max_box_area:\n # convert orig bounding box to query bounding box\n min_pt = orig_box.center 
- half_crop_dims\n max_pt = orig_box.center + half_crop_dims\n query_box = Box(min_pt, max_pt, frame=orig_box.frame)\n\n # segment color to get refined detection\n contour_mask = binary_im_filtered.contour_mask(contour)\n binary_thumbnail = contour_mask.crop(query_box.height, query_box.width, query_box.ci, query_box.cj)\n\n else:\n # otherwise take original bounding box\n query_box = Box(contour.bounding_box.min_pt - box_padding_px,\n contour.bounding_box.max_pt + box_padding_px,\n frame = contour.bounding_box.frame)\n\n binary_thumbnail = binary_im_filtered.crop(query_box.height, query_box.width, query_box.ci, query_box.cj)\n\n # crop to get thumbnails\n color_thumbnail = color_im.crop(query_box.height, query_box.width, query_box.ci, query_box.cj)\n depth_thumbnail = depth_im.crop(query_box.height, query_box.width, query_box.ci, query_box.cj)\n thumbnail_intr = camera_intr\n if camera_intr is not None:\n thumbnail_intr = camera_intr.crop(query_box.height, query_box.width, query_box.ci, query_box.cj)\n\n # fix depth thumbnail\n depth_thumbnail = depth_thumbnail.replace_zeros(fill_depth)\n if kinect2_denoising:\n depth_data = depth_thumbnail.data\n min_depth = np.min(depth_data)\n binary_mask_data = binary_thumbnail.data\n depth_mask_data = depth_thumbnail.mask_binary(binary_thumbnail).data\n depth_mask_data += depth_offset\n depth_data[binary_mask_data > 0] = depth_mask_data[binary_mask_data > 0]\n depth_thumbnail = DepthImage(depth_data, depth_thumbnail.frame)\n\n # append to detections\n detections.append(RgbdDetection(color_thumbnail,\n depth_thumbnail,\n query_box,\n binary_thumbnail=binary_thumbnail,\n contour=contour,\n camera_intr=thumbnail_intr))\n\n return detections", "response": "Detects all relevant objects in an rgbd image pair using foreground masking."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a detector of the specified type.", "response": "def detector(detector_type):\n \"\"\" Returns a 
detector of the specified type. \"\"\"\n if detector_type == 'point_cloud_box':\n return PointCloudBoxDetector()\n elif detector_type == 'rgbd_foreground_mask_query':\n return RgbdForegroundMaskQueryImageDetector()\n elif detector_type == 'rgbd_foreground_mask':\n return RgbdForegroundMaskDetector()\n raise ValueError('Detector type %s not understood' %(detector_type))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef preprocess_images(raw_color_im,\n raw_depth_im,\n camera_intr,\n T_camera_world,\n workspace_box,\n workspace_im,\n image_proc_config):\n \"\"\" Preprocess a set of color and depth images. \"\"\"\n # read params\n inpaint_rescale_factor = image_proc_config['inpaint_rescale_factor']\n cluster = image_proc_config['cluster']\n cluster_tolerance = image_proc_config['cluster_tolerance']\n min_cluster_size = image_proc_config['min_cluster_size']\n max_cluster_size = image_proc_config['max_cluster_size']\n\n # deproject into 3D world coordinates\n point_cloud_cam = camera_intr.deproject(raw_depth_im)\n point_cloud_cam.remove_zero_points()\n point_cloud_world = T_camera_world * point_cloud_cam\n\n # compute the segmask for points above the box\n seg_point_cloud_world, _ = point_cloud_world.box_mask(workspace_box)\n seg_point_cloud_cam = T_camera_world.inverse() * seg_point_cloud_world\n depth_im_seg = camera_intr.project_to_image(seg_point_cloud_cam)\n\n # mask out objects in the known workspace\n env_pixels = depth_im_seg.pixels_farther_than(workspace_im)\n depth_im_seg._data[env_pixels[:,0], env_pixels[:,1]] = 0\n\n # REMOVE NOISE\n # clip low points\n low_indices = np.where(point_cloud_world.data[2,:] < workspace_box.min_pt[2])[0] \n point_cloud_world.data[2,low_indices] = workspace_box.min_pt[2]\n \n # clip high points\n high_indices = np.where(point_cloud_world.data[2,:] > workspace_box.max_pt[2])[0] \n point_cloud_world.data[2,high_indices] = workspace_box.max_pt[2]\n\n # segment out the 
region in the workspace (including the table)\n workspace_point_cloud_world, valid_indices = point_cloud_world.box_mask(workspace_box)\n invalid_indices = np.setdiff1d(np.arange(point_cloud_world.num_points),\n valid_indices)\n\n if cluster:\n # create new cloud\n pcl_cloud = pcl.PointCloud(workspace_point_cloud_world.data.T.astype(np.float32))\n tree = pcl_cloud.make_kdtree()\n \n # find large clusters (likely to be real objects instead of noise)\n ec = pcl_cloud.make_EuclideanClusterExtraction()\n ec.set_ClusterTolerance(cluster_tolerance)\n ec.set_MinClusterSize(min_cluster_size)\n ec.set_MaxClusterSize(max_cluster_size)\n ec.set_SearchMethod(tree)\n cluster_indices = ec.Extract()\n num_clusters = len(cluster_indices)\n\n # read out all points in large clusters\n filtered_points = np.zeros([3,workspace_point_cloud_world.num_points])\n cur_i = 0\n for j, indices in enumerate(cluster_indices):\n num_points = len(indices)\n points = np.zeros([3,num_points])\n \n for i, index in enumerate(indices):\n points[0,i] = pcl_cloud[index][0]\n points[1,i] = pcl_cloud[index][1]\n points[2,i] = pcl_cloud[index][2]\n\n filtered_points[:,cur_i:cur_i+num_points] = points.copy()\n cur_i = cur_i + num_points\n\n # reconstruct the point cloud\n all_points = np.c_[filtered_points[:,:cur_i], point_cloud_world.data[:,invalid_indices]]\n else:\n all_points = point_cloud_world.data\n filtered_point_cloud_world = PointCloud(all_points,\n frame='world') \n\n # compute the filtered depth image\n filtered_point_cloud_cam = T_camera_world.inverse() * filtered_point_cloud_world\n depth_im = camera_intr.project_to_image(filtered_point_cloud_cam) \n\n # form segmask\n segmask = depth_im_seg.to_binary()\n valid_px_segmask = depth_im.invalid_pixel_mask().inverse()\n segmask = segmask.mask_binary(valid_px_segmask)\n \n # inpaint\n color_im = raw_color_im.inpaint(rescale_factor=inpaint_rescale_factor)\n depth_im = depth_im.inpaint(rescale_factor=inpaint_rescale_factor) \n return color_im, depth_im, 
segmask", "response": "Preprocess a set of color and depth images."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef conv(input, kernel, biases, k_h, k_w, c_o, s_h, s_w, padding=\"VALID\", group=1):\n c_i = input.get_shape()[-1]\n assert c_i%group==0\n assert c_o%group==0\n convolve = lambda i, k: tf.nn.conv2d(i, k, [1, s_h, s_w, 1], padding=padding)\n \n if group==1:\n conv = convolve(input, kernel)\n else:\n input_groups = tf.split(input, group, axis=3)\n kernel_groups = tf.split(kernel, group, axis=3)\n output_groups = [convolve(i, k) for i,k in zip(input_groups, kernel_groups)]\n conv = tf.concat(output_groups, axis=3)\n return tf.reshape(tf.nn.bias_add(conv, biases), [-1]+conv.get_shape().as_list()[1:])", "response": "Convolution layer helper function"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _parse_config(self, config):\n self._batch_size = config['batch_size']\n self._im_height = config['im_height']\n self._im_width = config['im_width']\n self._num_channels = config['channels']\n self._output_layer = config['out_layer']\n self._feature_layer = config['feature_layer']\n self._out_size = None\n if 'out_size' in config.keys():\n self._out_size = config['out_size']\n self._input_arr = np.zeros([self._batch_size, self._im_height,\n self._im_width, self._num_channels])\n\n if self._model_dir is None:\n self._net_data = np.load(config['caffe_weights']).item()\n self._mean = np.load(config['mean_file'])\n self._model_filename = None\n else:\n self._net_data = None\n self._mean = np.load(os.path.join(self._model_dir, 'mean.npy'))\n self._model_filename = os.path.join(self._model_dir, 'model.ckpt')", "response": "Parses a tensorflow configuration and populates the internal state of the object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload a model into the graph and populates the internal 
variables.", "response": "def _load(self):\n \"\"\" Loads a model into weights \"\"\"\n if self._model_filename is None:\n raise ValueError('Model filename not specified')\n\n # read the input image\n self._graph = tf.Graph()\n with self._graph.as_default():\n # read in filenames\n reader = tf.train.NewCheckpointReader(self._model_filename)\n\n # load AlexNet weights\n weights = AlexNetWeights()\n weights.conv1W = tf.Variable(reader.get_tensor(\"Variable\")) \n weights.conv1b = tf.Variable(reader.get_tensor(\"Variable_1\"))\n weights.conv2W = tf.Variable(reader.get_tensor(\"Variable_2\"))\n weights.conv2b = tf.Variable(reader.get_tensor(\"Variable_3\"))\n weights.conv3W = tf.Variable(reader.get_tensor(\"Variable_4\"))\n weights.conv3b = tf.Variable(reader.get_tensor(\"Variable_5\"))\n weights.conv4W = tf.Variable(reader.get_tensor(\"Variable_6\"))\n weights.conv4b = tf.Variable(reader.get_tensor(\"Variable_7\"))\n weights.conv5W = tf.Variable(reader.get_tensor(\"Variable_8\"))\n weights.conv5b = tf.Variable(reader.get_tensor(\"Variable_9\"))\n weights.fc6W = tf.Variable(reader.get_tensor(\"Variable_10\"))\n weights.fc6b = tf.Variable(reader.get_tensor(\"Variable_11\"))\n weights.fc7W = tf.Variable(reader.get_tensor(\"Variable_12\"))\n weights.fc7b = tf.Variable(reader.get_tensor(\"Variable_13\"))\n weights.fc8W = tf.Variable(reader.get_tensor(\"Variable_14\"))\n weights.fc8b = tf.Variable(reader.get_tensor(\"Variable_15\"))\n\n # form network\n self._input_node = tf.placeholder(tf.float32, (self._batch_size, self._im_height, self._im_width, self._num_channels))\n self._output_tensor = self.build_alexnet(weights)\n self._feature_tensor = self.build_alexnet(weights, output_layer=self._feature_layer)\n self._initialized = True"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nopening from caffe weights", "response": "def _initialize(self):\n \"\"\" Open from caffe weights \"\"\"\n self._graph = tf.Graph()\n with self._graph.as_default():\n 
self._input_node = tf.placeholder(tf.float32, (self._batch_size, self._im_height, self._im_width, self._num_channels))\n weights = self.build_alexnet_weights()\n self._output_tensor = self.build_alexnet(weights)\n self._feature_tensor = self.build_alexnet(weights, output_layer=self._feature_layer)\n self._initialized = True"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef open_session(self):\n with self._graph.as_default():\n init = tf.initialize_all_variables()\n self._sess = tf.Session()\n self._sess.run(init)", "response": "Open tensorflow session. Exposed for memory management."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncloses tensorflow session. Exposed for memory management.", "response": "def close_session(self):\n \"\"\" Close tensorflow session. Exposed for memory management. \"\"\"\n with self._graph.as_default():\n self._sess.close()\n self._sess = None"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npredicts a set of images in batches.", "response": "def predict(self, image_arr, featurize=False):\n \"\"\" Predict a set of images in batches.\n\n Parameters\n ----------\n image_arr : NxHxWxC :obj:`numpy.ndarray`\n input set of images in a num_images x image height x image width x image channels array (must match parameters of network)\n featurize : bool\n whether or not to use the featurization layer or classification output layer\n\n Returns\n -------\n :obj:`numpy.ndarray`\n num_images x feature_dim containing the output values for each input image\n \"\"\"\n # setup prediction\n num_images = image_arr.shape[0]\n output_arr = None\n\n # predict by filling in image array in batches\n close_sess = False\n if not self._initialized and self._dynamic_load:\n self._load()\n with self._graph.as_default():\n if self._sess is None: \n close_sess = True \n self.open_session()\n\n i = 0\n while i < num_images:\n dim = 
min(self._batch_size, num_images-i)\n cur_ind = i\n end_ind = cur_ind + dim\n self._input_arr[:dim,:,:,:] = image_arr[cur_ind:end_ind,:,:,:] - self._mean\n if featurize:\n output = self._sess.run(self._feature_tensor,\n feed_dict={self._input_node: self._input_arr})\n else:\n output = self._sess.run(self._output_tensor,\n feed_dict={self._input_node: self._input_arr})\n if output_arr is None:\n output_arr = output\n else:\n output_arr = np.r_[output_arr, output]\n\n i = end_ind\n if close_sess:\n self.close_session()\n return output_arr[:num_images,...]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef build_alexnet_weights(self):\n net_data = self._net_data\n #conv1\n #conv(11, 11, 96, 4, 4, padding='VALID', name='conv1')\n k_h = 11; k_w = 11; c_o = 96; s_h = 4; s_w = 4\n conv1W = tf.Variable(net_data[\"conv1\"][0])\n conv1b = tf.Variable(net_data[\"conv1\"][1])\n \n #conv2\n #conv(5, 5, 256, 1, 1, group=2, name='conv2')\n k_h = 5; k_w = 5; c_o = 256; s_h = 1; s_w = 1; group = 2\n conv2W = tf.Variable(net_data[\"conv2\"][0])\n conv2b = tf.Variable(net_data[\"conv2\"][1])\n \n #conv3\n #conv(3, 3, 384, 1, 1, name='conv3')\n k_h = 3; k_w = 3; c_o = 384; s_h = 1; s_w = 1; group = 1\n conv3W = tf.Variable(net_data[\"conv3\"][0])\n conv3b = tf.Variable(net_data[\"conv3\"][1])\n \n #conv4\n #conv(3, 3, 384, 1, 1, group=2, name='conv4')\n k_h = 3; k_w = 3; c_o = 384; s_h = 1; s_w = 1; group = 2\n conv4W = tf.Variable(net_data[\"conv4\"][0])\n conv4b = tf.Variable(net_data[\"conv4\"][1])\n \n #conv5\n #conv(3, 3, 256, 1, 1, group=2, name='conv5')\n k_h = 3; k_w = 3; c_o = 256; s_h = 1; s_w = 1; group = 2\n conv5W = tf.Variable(net_data[\"conv5\"][0])\n conv5b = tf.Variable(net_data[\"conv5\"][1])\n \n #fc6\n #fc(4096, name='fc6')\n fc6_in_size = net_data[\"fc6\"][0].shape[0]\n fc6_out_size = net_data[\"fc6\"][0].shape[1]\n fc6W = tf.Variable(net_data[\"fc6\"][0])\n fc6b = 
tf.Variable(net_data[\"fc6\"][1])\n\n #fc7\n #fc(4096, name='fc7') \n fc7_in_size = fc6_out_size\n fc7_out_size = net_data[\"fc7\"][0].shape[1]\n fc7W = tf.Variable(net_data[\"fc7\"][0])\n fc7b = tf.Variable(net_data[\"fc7\"][1])\n \n #fc8\n #fc(num_cats, relu=False, name='fc8') \n fc8_in_size = fc7_out_size\n fc8_out_size = self._out_size\n fc8W = tf.Variable(tf.truncated_normal([fc8_in_size, fc8_out_size],\n stddev=0.01,\n seed=None))\n fc8b = tf.Variable(tf.constant(0.0, shape=[fc8_out_size]))\n \n # make return object\n weights = AlexNetWeights()\n weights.conv1W = conv1W\n weights.conv1b = conv1b\n weights.conv2W = conv2W\n weights.conv2b = conv2b\n weights.conv3W = conv3W\n weights.conv3b = conv3b \n weights.conv4W = conv4W\n weights.conv4b = conv4b \n weights.conv5W = conv5W\n weights.conv5b = conv5b \n weights.fc6W = fc6W\n weights.fc6b = fc6b\n weights.fc7W = fc7W\n weights.fc7b = fc7b\n weights.fc8W = fc8W\n weights.fc8b = fc8b\n return weights", "response": "Builds a set of convnet weights for AlexNet"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef build_alexnet(self, weights, output_layer=None):\n if output_layer is None:\n output_layer = self._output_layer\n\n #conv1\n #conv(11, 11, 96, 4, 4, padding='VALID', name='conv1')\n k_h = 11; k_w = 11; c_o = 96; s_h = 4; s_w = 4\n conv1_in = conv(self._input_node, weights.conv1W, weights.conv1b, k_h, k_w, c_o, s_h, s_w, padding=\"SAME\", group=1)\n conv1 = tf.nn.relu(conv1_in)\n if output_layer == 'conv1':\n return tf.contrib.layers.flatten(conv1)\n \n #lrn1\n #lrn(2, 2e-05, 0.75, name='norm1')\n radius = 2; alpha = 2e-05; beta = 0.75; bias = 1.0\n lrn1 = tf.nn.local_response_normalization(conv1,\n depth_radius=radius,\n alpha=alpha,\n beta=beta,\n bias=bias)\n \n #maxpool1\n #max_pool(3, 3, 2, 2, padding='VALID', name='pool1')\n k_h = 3; k_w = 3; s_h = 2; s_w = 2; padding = 'VALID'\n maxpool1 = tf.nn.max_pool(lrn1, ksize=[1, k_h, k_w, 1], 
strides=[1, s_h, s_w, 1], padding=padding)\n \n \n #conv2\n #conv(5, 5, 256, 1, 1, group=2, name='conv2')\n k_h = 5; k_w = 5; c_o = 256; s_h = 1; s_w = 1; group = 2\n conv2_in = conv(maxpool1, weights.conv2W, weights.conv2b, k_h, k_w, c_o, s_h, s_w, padding=\"SAME\", group=group)\n conv2 = tf.nn.relu(conv2_in)\n if output_layer == 'conv2':\n return tf.contrib.layers.flatten(conv2)\n \n #lrn2\n #lrn(2, 2e-05, 0.75, name='norm2')\n radius = 2; alpha = 2e-05; beta = 0.75; bias = 1.0\n lrn2 = tf.nn.local_response_normalization(conv2,\n depth_radius=radius,\n alpha=alpha,\n beta=beta,\n bias=bias)\n \n #maxpool2\n #max_pool(3, 3, 2, 2, padding='VALID', name='pool2') \n k_h = 3; k_w = 3; s_h = 2; s_w = 2; padding = 'VALID'\n maxpool2 = tf.nn.max_pool(lrn2, ksize=[1, k_h, k_w, 1], strides=[1, s_h, s_w, 1], padding=padding)\n \n #conv3\n #conv(3, 3, 384, 1, 1, name='conv3')\n k_h = 3; k_w = 3; c_o = 384; s_h = 1; s_w = 1; group = 1\n conv3_in = conv(maxpool2, weights.conv3W, weights.conv3b, k_h, k_w, c_o, s_h, s_w, padding=\"SAME\", group=group)\n conv3 = tf.nn.relu(conv3_in)\n if output_layer == 'conv3':\n return tf.contrib.layers.flatten(conv3)\n \n #conv4\n #conv(3, 3, 384, 1, 1, group=2, name='conv4')\n k_h = 3; k_w = 3; c_o = 384; s_h = 1; s_w = 1; group = 2\n conv4_in = conv(conv3, weights.conv4W, weights.conv4b, k_h, k_w, c_o, s_h, s_w, padding=\"SAME\", group=group)\n conv4 = tf.nn.relu(conv4_in)\n if output_layer == 'conv4':\n return tf.contrib.layers.flatten(conv4)\n \n #conv5\n #conv(3, 3, 256, 1, 1, group=2, name='conv5')\n k_h = 3; k_w = 3; c_o = 256; s_h = 1; s_w = 1; group = 2\n conv5_in = conv(conv4, weights.conv5W, weights.conv5b, k_h, k_w, c_o, s_h, s_w, padding=\"SAME\", group=group)\n conv5 = tf.nn.relu(conv5_in)\n if output_layer == 'conv5':\n return tf.contrib.layers.flatten(conv5)\n \n #maxpool5\n #max_pool(3, 3, 2, 2, padding='VALID', name='pool5')\n k_h = 3; k_w = 3; s_h = 2; s_w = 2; padding = 'VALID'\n maxpool5 = tf.nn.max_pool(conv5, ksize=[1, 
k_h, k_w, 1], strides=[1, s_h, s_w, 1], padding=padding)\n\n #fc6\n #fc(4096, name='fc6')\n fc6 = tf.nn.relu_layer(tf.reshape(maxpool5, [-1, int(np.prod(maxpool5.get_shape()[1:]))]), weights.fc6W, weights.fc6b)\n if output_layer == 'fc6':\n return fc6\n \n #fc7\n #fc(4096, name='fc7') \n fc7 = tf.nn.relu_layer(fc6, weights.fc7W, weights.fc7b)\n if output_layer == 'fc7':\n return fc7\n \n #fc8\n #fc(num_cats, relu=False, name='fc8')\n fc8 = tf.nn.xw_plus_b(fc7, weights.fc8W, weights.fc8b)\n if output_layer == 'fc8':\n return fc8 \n\n #softmax\n sm = tf.nn.softmax(fc8)\n return sm", "response": "Builds the Alexnet graph from the given weights."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _forward_pass(self, images):\n # form image array\n num_images = len(images)\n if num_images == 0:\n return None\n for image in images:\n if not isinstance(image, Image):\n new_images = []\n for image in images:\n if len(image.shape) > 2:\n new_images.append(ColorImage(image, frame='unspecified'))\n elif image.dtype == np.float32 or image.dtype == np.float64:\n new_images.append(DepthImage(image, frame='unspecified'))\n else:\n raise ValueError('Image type not understood')\n images = new_images\n break\n\n im_height = images[0].height\n im_width = images[0].width\n channels = images[0].channels\n tensor_channels = 3\n image_arr = np.zeros([num_images, im_height, im_width, tensor_channels])\n for j, image in enumerate(images):\n if channels == 3:\n image_arr[j,:,:,:] = image.raw_data\n else:\n image_arr[j,:,:,:] = np.tile(image.raw_data, [1,1,1,3])\n\n # predict\n fp_start = time.time()\n final_blobs = self.cnn_.featurize(image_arr)\n fp_stop = time.time()\n logging.debug('Featurization took %f sec per image' %((fp_stop - fp_start) / len(images)))\n return final_blobs.reshape(final_blobs.shape[0], -1)", "response": "Forward pass a list of images through the CNN"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script 
to\nreturn a DataFrame with the data about the repositories found at the specified repositories path in the form of siva files.", "response": "def repositories(self):\n \"\"\"\n Returns a DataFrame with the data about the repositories found at\n the specified repositories path in the form of siva files.\n\n >>> repos_df = engine.repositories\n\n :rtype: RepositoriesDataFrame\n \"\"\"\n return RepositoriesDataFrame(self.__engine.getRepositories(),\n self.session, self.__implicits)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef blobs(self, repository_ids=[], reference_names=[], commit_hashes=[]):\n if not isinstance(repository_ids, list):\n raise Exception(\"repository_ids must be a list\")\n\n if not isinstance(reference_names, list):\n raise Exception(\"reference_names must be a list\")\n\n if not isinstance(commit_hashes, list):\n raise Exception(\"commit_hashes must be a list\")\n\n return BlobsDataFrame(self.__engine.getBlobs(repository_ids,\n reference_names,\n commit_hashes),\n self.session,\n self.__implicits)", "response": "Returns the blobs of a list of repositories reference names and commit hashes."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef from_metadata(self, db_path, db_name='engine_metadata.db'):\n\n self.__engine.fromMetadata(db_path, db_name)\n return self", "response": "Loads the metadata from the database at the specified path."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __generate_method(name):\n try:\n func = getattr(DataFrame, name)\n except AttributeError as e:\n # PySpark version is too old\n def func(self, *args, **kwargs):\n raise e\n return func\n wraps = getattr(functools, \"wraps\", lambda _: lambda f: f) # py3.4+\n\n @wraps(func)\n def _wrapper(self, *args, **kwargs):\n dataframe = func(self, *args, **kwargs)\n if self.__class__ 
!= SourcedDataFrame \\\n and isinstance(self, SourcedDataFrame) \\\n and isinstance(dataframe, DataFrame):\n return self.__class__(dataframe._jdf, self._session, self._implicits)\n return dataframe\n\n return _wrapper", "response": "A decorator that returns the original method by name to return the derived class instance."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef references(self):\n return ReferencesDataFrame(self._engine_dataframe.getReferences(),\n self._session, self._implicits)", "response": "Returns the joined DataFrame of references and repositories."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a new DataFrame with only the remote references of the current repositories.", "response": "def remote_references(self):\n \"\"\"\n Returns a new DataFrame with only the remote references of the\n current repositories.\n\n >>> remote_refs_df = repos_df.remote_references\n\n :rtype: ReferencesDataFrame\n \"\"\"\n return ReferencesDataFrame(self._engine_dataframe.getRemoteReferences(),\n self._session, self._implicits)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef master_ref(self):\n return ReferencesDataFrame(self._engine_dataframe.getReferences().getHEAD(),\n self._session, self._implicits)", "response": "Filters the current DataFrame references to only contain those rows whose reference is master."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef head_ref(self):\n return ReferencesDataFrame(self._engine_dataframe.getHEAD(),\n self._session, self._implicits)", "response": "Returns a new DataFrame containing only those rows whose reference is HEAD."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef master_ref(self):\n return 
ReferencesDataFrame(self._engine_dataframe.getMaster(),\n self._session, self._implicits)\n return self.ref('refs/heads/master')", "response": "Returns a new DataFrame with only those rows whose reference is master."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef ref(self, ref):\n return ReferencesDataFrame(self.filter(self.name == ref)._jdf,\n self._session, self._implicits)", "response": "Returns a new DataFrame containing only those rows whose reference is the given reference name."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the current DataFrame joined with the commits DataFrame with all of the tree entries in all references.", "response": "def all_reference_commits(self):\n \"\"\"\n Returns the current DataFrame joined with the commits DataFrame, with all of the commits\n in all references.\n\n >>> commits_df = refs_df.all_reference_commits\n\n Take into account that getting all the commits will lead to a lot of repeated tree\n entries and blobs, thus making your query very slow.\n\n Most of the time, you just want the HEAD commit of each reference:\n\n >>> commits_df = refs_df.commits\n\n :rtype: CommitsDataFrame\n \"\"\"\n return CommitsDataFrame(self._engine_dataframe.getAllReferenceCommits(), self._session, self._implicits)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the current DataFrame joined with the commits DataFrame.", "response": "def commits(self):\n \"\"\"\n Returns the current DataFrame joined with the commits DataFrame. 
It just returns\n the last commit in a reference (aka the current state).\n\n >>> commits_df = refs_df.commits\n\n If you want all commits from the references, use the `all_reference_commits` method,\n but take into account that getting all the commits will lead to a lot of repeated tree\n entries and blobs, thus making your query very slow.\n\n >>> commits_df = refs_df.all_reference_commits\n\n :rtype: CommitsDataFrame\n \"\"\"\n return CommitsDataFrame(self._engine_dataframe.getCommits(), self._session, self._implicits)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn this DataFrame joined with the blobs DataSource.", "response": "def blobs(self):\n \"\"\"\n Returns this DataFrame joined with the blobs DataSource.\n\n >>> blobs_df = refs_df.blobs\n\n :rtype: BlobsDataFrame\n \"\"\"\n return BlobsDataFrame(self._engine_dataframe.getBlobs(), self._session, self._implicits)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn this DataFrame joined with the tree entries DataSource.", "response": "def tree_entries(self):\n \"\"\"\n Returns this DataFrame joined with the tree entries DataSource.\n\n >>> entries_df = commits_df.tree_entries\n\n :rtype: TreeEntriesDataFrame\n \"\"\"\n return TreeEntriesDataFrame(self._engine_dataframe.getTreeEntries(), self._session, self._implicits)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a new DataFrame with the language data of any blob added to its row.", "response": "def classify_languages(self):\n \"\"\"\n Returns a new DataFrame with the language data of any blob added to\n its row.\n\n >>> blobs_lang_df = blobs_df.classify_languages\n\n :rtype: BlobsWithLanguageDataFrame\n \"\"\"\n return BlobsWithLanguageDataFrame(self._engine_dataframe.classifyLanguages(),\n self._session, self._implicits)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef 
extract_uasts(self):\n return UASTsDataFrame(self._engine_dataframe.extractUASTs(),\n self._session, self._implicits)", "response": "Returns a new DataFrame with the parsed UAST data of any blob added to\n its row."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef query_uast(self, query, query_col='uast', output_col='result'):\n return UASTsDataFrame(self._engine_dataframe.queryUAST(query,\n query_col,\n output_col),\n self._session, self._implicits)", "response": "Queries the UAST of a file with the given query to get specific nodes."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef extract_tokens(self, input_col='result', output_col='tokens'):\n return UASTsDataFrame(self._engine_dataframe.extractTokens(input_col, output_col),\n self._session, self._implicits)", "response": "Extracts the tokens from UAST nodes."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef list(\n self,\n bucket: str,\n prefix: str=None,\n delimiter: str=None,\n ) -> typing.Iterator[str]:\n \"\"\"\n Returns an iterator of all blob entries in a bucket that match a given prefix. 
Do not return any keys that\n contain the delimiter past the\n prefix.\n \"\"\"\n kwargs = dict()\n if prefix is not None:\n kwargs['prefix'] = prefix\n if delimiter is not None:\n kwargs['delimiter'] = delimiter\n bucket_obj = self._ensure_bucket_loaded(bucket)\n for blob_obj in bucket_obj.list_blobs(**kwargs):\n yield blob_obj.name", "response": "Returns an iterator of all blob entries in a given bucket that match a given prefix and optionally a given delimiter."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete(self, bucket: str, key: str):\n bucket_obj = self._ensure_bucket_loaded(bucket)\n try:\n bucket_obj.delete_blob(key)\n except NotFound:\n return False", "response": "Deletes an object in a bucket."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nretrieve the data for a given object in a given bucket.", "response": "def get(self, bucket: str, key: str) -> bytes:\n \"\"\"\n Retrieves the data for a given object in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which metadata is being\n retrieved.\n :return: the data\n \"\"\"\n bucket_obj = self._ensure_bucket_loaded(bucket)\n blob_obj = bucket_obj.blob(key)\n\n try:\n return blob_obj.download_as_string()\n except NotFound:\n raise BlobNotFoundError(f\"Could not find gs://{bucket}/{key}\")"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve the cloud - provided checksum for a given object in a given bucket and key.", "response": "def get_cloud_checksum(\n self,\n bucket: str,\n key: str\n ) -> str:\n \"\"\"\n Retrieves the cloud-provided checksum for a given object in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which checksum is being retrieved.\n :return: the cloud-provided checksum\n \"\"\"\n blob_obj = self._get_blob_obj(bucket, key)\n return 
self.compute_cloud_checksum(blob_obj)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nretrieve the content - type for a given object in a given bucket.", "response": "def get_content_type(\n self,\n bucket: str,\n key: str\n ) -> str:\n \"\"\"\n Retrieves the content-type for a given object in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which content-type is being retrieved.\n :return: the content-type\n \"\"\"\n blob_obj = self._get_blob_obj(bucket, key)\n return blob_obj.content_type"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_copy_token(\n self,\n bucket: str,\n key: str,\n cloud_checksum: str,\n ) -> typing.Any:\n \"\"\"\n Given a bucket, key, and the expected cloud-provided checksum, retrieve a token that can be passed into\n :func:`~cloud_blobstore.BlobStore.copy` that guarantees the copy refers to the same version of the blob\n identified by the checksum.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which checksum is being retrieved.\n :param cloud_checksum: the expected cloud-provided checksum.\n :return: an opaque copy token\n \"\"\"\n blob_obj = self._get_blob_obj(bucket, key)\n assert binascii.hexlify(base64.b64decode(blob_obj.crc32c)).decode(\"utf-8\").lower() == cloud_checksum\n return blob_obj.generation", "response": "Returns an opaque copy token that can be passed into the :func:`~cloud_blobstore.BlobStore.copy` method."} {"SOURCE": "codesearchnet",
"instruction": "Write a Python 3 script for\nretrieving the creation date for a given object in a given bucket.", "response": "def get_creation_date(\n self,\n bucket: str,\n key: str,\n ) -> datetime.datetime:\n \"\"\"\n Retrieves the creation date for a given key in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which the creation date is being retrieved.\n :return: the creation date\n \"\"\"\n blob_obj = self._get_blob_obj(bucket, key)\n return blob_obj.time_created"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving the last modified date for a given object in a given bucket.", "response": "def get_last_modified_date(\n self,\n bucket: str,\n key: str,\n ) -> datetime.datetime:\n \"\"\"\n Retrieves last modified date for a given key in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which the last modified date is being retrieved.\n :return: the last modified date\n \"\"\"\n blob_obj = self._get_blob_obj(bucket, key)\n return blob_obj.updated"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_user_metadata(\n self,\n bucket: str,\n key: str\n ) -> typing.Dict[str, str]:\n \"\"\"\n Retrieves the user metadata for a given object in a given bucket.
If the platform has any mandatory prefixes or\n suffixes for the metadata keys, they should be stripped before being returned.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which metadata is being\n retrieved.\n :return: a dictionary mapping metadata keys to metadata values.\n \"\"\"\n blob_obj = self._get_blob_obj(bucket, key)\n return blob_obj.metadata", "response": "Retrieves the user metadata for a given object in a given bucket."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nretrieve the filesize of the object in the specified bucket and key.", "response": "def get_size(\n self,\n bucket: str,\n key: str\n ) -> int:\n \"\"\"\n Retrieves the filesize\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which size is being retrieved.\n :return: integer equal to filesize in bytes\n \"\"\"\n blob_obj = self._get_blob_obj(bucket, key)\n return blob_obj.size"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_bucket_exists(self, bucket: str) -> bool:\n bucket_obj = self.gcp_client.bucket(bucket) # type: Bucket\n return bucket_obj.exists()", "response": "Checks if bucket with specified name exists."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_shard(self, shard):\n @wraps(API_WRAPPER._get_shard)\n def get_shard(*arg, **kwargs):\n \"\"\"Gets the shard '{}'\"\"\".format(shard)\n return self.get_shards(Shard(shard, *arg, **kwargs))\n return get_shard", "response": "Dynamically builds methods to query the shard with proper with arg and kwargs support"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrequest the API This method is wrapped by similar functions", "response": "def request(self, shards, full_response, return_status_tuple=False):\n \"\"\"Request the API\n\n This method is wrapped by 
similar functions\n \"\"\"\n try:\n resp = self._request(shards)\n if return_status_tuple:\n return (self._parser(resp, full_response), True)\n else:\n return self._parser(resp, full_response)\n except (ConflictError, CloudflareServerError, InternalServerError) as exc:\n # The Retry system\n if return_status_tuple:\n return (None, False)\n elif self.api_mother.do_retry:\n # TODO\n # request_limit = 0\n sleep(self.api_mother.retry_sleep)\n resp = self.request(shards, full_response, True)\n while not resp[1]:\n sleep(self.api_mother.retry_sleep)\n resp = self.request(shards, full_response, True)\n return resp[0]\n else:\n raise exc"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef send_telegram(telegram=None, client_key=None, tgid=None, key=None): # pragma: no cover\n if telegram:\n pass\n else:\n telegram = self.api_mother.telegram(client_key, tgid, key)\n telegram.send_telegram(self.nation_name)", "response": "Sends Telegram to the API."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwraps around the verify API", "response": "def verify(self, checksum=None, token=None, full_response=False):\n \"\"\"Wraps around the verify API\"\"\"\n payload = {\"checksum\":checksum, \"a\":\"verify\"}\n if token:\n payload.update({\"token\":token})\n return self.get_shards(Shard(**payload), full_response=True)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nretrieving metadata objects from blobstore response.", "response": "def get_listing_from_response(self, resp) -> typing.Iterable[typing.Tuple[str, dict]]:\n \"\"\"\n Retrieve blob metadata objects from blobstore response.\n Metadata objects represented as tuples in the form of:\n (key, {BlobMetadataField: val, ...})\n \"\"\"\n raise NotImplementedError()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef list(\n self,\n bucket: str,\n prefix: 
str=None,\n delimiter: str=None,\n ) -> typing.Iterator[str]:\n \"\"\"\n Returns an iterator of all blob entries in a bucket that match a given prefix. Do not return any keys that\n contain the delimiter past the prefix.\n \"\"\"\n raise NotImplementedError()", "response": "Returns an iterator of all blob entries in a given bucket that contain a given prefix and optionally a given delimiter."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef list_v2(\n self,\n bucket: str,\n prefix: str=None,\n delimiter: str=None,\n start_after_key: str=None,\n token: str=None,\n k_page_max: int=None,\n ) -> typing.Iterable[typing.Tuple[str, dict]]:\n \"\"\"\n Returns an iterator of all blob entries in a bucket that match a given prefix. Do not return any keys that\n contain the delimiter past the prefix.\n \"\"\"\n raise NotImplementedError()", "response": "Returns an iterator over all blob entries in a given bucket that match a given prefix."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef upload_file_handle(\n self,\n bucket: str,\n key: str,\n src_file_handle: typing.BinaryIO,\n content_type: str=None,\n metadata: dict=None):\n \"\"\"\n Saves the contents of a file handle as the contents of an object in a bucket.\n \"\"\"\n raise NotImplementedError()", "response": "Uploads a file handle to a specific object in a bucket."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_copy_token(\n self,\n bucket: str,\n key: str,\n cloud_checksum: str,\n ) -> typing.Any:\n \"\"\"\n Given a bucket, key, and the expected cloud-provided checksum, retrieve a token that can be passed into\n :func:`~cloud_blobstore.BlobStore.copy` that guarantees the copy refers to the same version of the blob\n identified by the checksum.\n :param bucket: the bucket the object resides in.\n :param key: the key of the 
object for which checksum is being retrieved.\n :param cloud_checksum: the expected cloud-provided checksum.\n :return: an opaque copy token\n \"\"\"\n raise NotImplementedError()", "response": "Returns an opaque copy token that can be passed into the :func:`~cloud_blobstore.BlobStore.copy` method that guarantees the copy refers to the same version of the object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_user_metadata(\n self,\n bucket: str,\n key: str\n ) -> typing.Dict[str, str]:\n \"\"\"\n Retrieves the user metadata for a given object in a given bucket. If the platform has any mandatory prefixes or\n suffixes for the metadata keys, they should be stripped before being returned.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which metadata is being\n retrieved.\n :return: a dictionary mapping metadata keys to metadata values.\n \"\"\"\n raise NotImplementedError()", "response": "Returns the user metadata for a given object in a given bucket and key."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning an iterator of all blob entries in a given bucket that match a given prefix and optionally a given delimiter.", "response": "def list(\n self,\n bucket: str,\n prefix: str=None,\n delimiter: str=None,\n ) -> typing.Iterator[str]:\n \"\"\"\n Returns an iterator of all blob entries in a bucket that match a given prefix.
Do not return any keys that\n contain the delimiter past the prefix.\n \"\"\"\n kwargs = dict()\n if prefix is not None:\n kwargs['Prefix'] = prefix\n if delimiter is not None:\n kwargs['Delimiter'] = delimiter\n for item in (\n boto3.resource(\"s3\").Bucket(bucket).\n objects.\n filter(**kwargs)):\n yield item.key"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve the data for a given object in a given bucket.", "response": "def get(self, bucket: str, key: str) -> bytes:\n \"\"\"\n Retrieves the data for a given object in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which metadata is being\n retrieved.\n :return: the data\n \"\"\"\n try:\n response = self.s3_client.get_object(\n Bucket=bucket,\n Key=key\n )\n return response['Body'].read()\n except botocore.exceptions.ClientError as ex:\n if ex.response['Error']['Code'] == \"NoSuchKey\":\n raise BlobNotFoundError(f\"Could not find s3://{bucket}/{key}\") from ex\n raise BlobStoreUnknownError(ex)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nretrieves all the metadata for a given object in a given bucket.", "response": "def get_all_metadata(\n self,\n bucket: str,\n key: str\n ) -> dict:\n \"\"\"\n Retrieves all the metadata for a given object in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which metadata is being retrieved.\n :return: the metadata\n \"\"\"\n try:\n return self.s3_client.head_object(\n Bucket=bucket,\n Key=key\n )\n except botocore.exceptions.ClientError as ex:\n if str(ex.response['Error']['Code']) == \\\n str(requests.codes.not_found):\n raise BlobNotFoundError(f\"Could not find s3://{bucket}/{key}\") from ex\n raise BlobStoreUnknownError(ex)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving the content - type for a given object in a given 
bucket and key.", "response": "def get_content_type(\n self,\n bucket: str,\n key: str\n ) -> str:\n \"\"\"\n Retrieves the content-type for a given object in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which content-type is being retrieved.\n :return: the content-type\n \"\"\"\n response = self.get_all_metadata(bucket, key)\n # hilariously, the ETag is quoted. Unclear why.\n return response['ContentType']"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving the cloud - provided checksum for a given object in a given bucket and key.", "response": "def get_cloud_checksum(\n self,\n bucket: str,\n key: str\n ) -> str:\n \"\"\"\n Retrieves the cloud-provided checksum for a given object in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which checksum is being retrieved.\n :return: the cloud-provided checksum\n \"\"\"\n response = self.get_all_metadata(bucket, key)\n return self.compute_cloud_checksum(response)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_creation_date(\n self,\n bucket: str,\n key: str,\n ) -> datetime:\n \"\"\"\n Retrieves the creation date for a given key in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which the creation date is being retrieved.\n :return: the creation date\n \"\"\"\n # An S3 object's creation date is stored in its LastModified field which stores the\n # most recent value between the two.\n return self.get_last_modified_date(bucket, key)", "response": "Retrieves the creation date for a given object in a given bucket."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_last_modified_date(\n self,\n bucket: str,\n key: str,\n ) -> datetime:\n \"\"\"\n Retrieves last modified date for a given key 
in a given bucket.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which the last modified date is being retrieved.\n :return: the last modified date\n \"\"\"\n response = self.get_all_metadata(bucket, key)\n return response['LastModified']", "response": "Retrieves the last modified date for a given object in a given bucket."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_user_metadata(\n self,\n bucket: str,\n key: str\n ) -> typing.Dict[str, str]:\n \"\"\"\n Retrieves the user metadata for a given object in a given bucket. If the platform has any mandatory prefixes or\n suffixes for the metadata keys, they should be stripped before being returned.\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which metadata is being\n retrieved.\n :return: a dictionary mapping metadata keys to metadata values.\n \"\"\"\n try:\n response = self.get_all_metadata(bucket, key)\n metadata = response['Metadata'].copy()\n\n response = self.s3_client.get_object_tagging(\n Bucket=bucket,\n Key=key,\n )\n for tag in response['TagSet']:\n key, value = tag['Key'], tag['Value']\n metadata[key] = value\n\n return metadata\n except botocore.exceptions.ClientError as ex:\n if str(ex.response['Error']['Code']) == \\\n str(requests.codes.not_found):\n raise BlobNotFoundError(f\"Could not find s3://{bucket}/{key}\") from ex\n raise BlobStoreUnknownError(ex)", "response": "Retrieves the user metadata for a given object in a given bucket."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving the size of the object in the specified bucket and key.", "response": "def get_size(\n self,\n bucket: str,\n key: str\n ) -> int:\n \"\"\"\n Retrieves the filesize\n :param bucket: the bucket the object resides in.\n :param key: the key of the object for which size is being retrieved.\n :return: integer equal to 
filesize in bytes\n \"\"\"\n try:\n response = self.get_all_metadata(bucket, key)\n size = response['ContentLength']\n return size\n except botocore.exceptions.ClientError as ex:\n if str(ex.response['Error']['Code']) == str(requests.codes.not_found):\n raise BlobNotFoundError(f\"Could not find s3://{bucket}/{key}\") from ex\n raise BlobStoreUnknownError(ex)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngives a bucket key and upload_id find the next N missing parts.", "response": "def find_next_missing_parts(\n self,\n bucket: str,\n key: str,\n upload_id: str,\n part_count: int,\n search_start: int=1,\n return_count: int=1) -> typing.Sequence[int]:\n \"\"\"\n Given a `bucket`, `key`, and `upload_id`, find the next N missing parts of a multipart upload, where\n N=`return_count`. If `search_start` is provided, start the search at part M, where M=`search_start`.\n `part_count` is the number of parts expected for the upload.\n\n Note that the return value may contain fewer than N parts.\n \"\"\"\n if part_count < search_start:\n raise ValueError(\"\")\n result = list()\n while True:\n kwargs = dict(Bucket=bucket, Key=key, UploadId=upload_id) # type: dict\n if search_start > 1:\n kwargs['PartNumberMarker'] = search_start - 1\n\n # retrieve all the parts after the one we *think* we need to start from.\n parts_resp = self.s3_client.list_parts(**kwargs)\n\n # build a set of all the parts known to be uploaded, detailed in this request.\n parts_map = set() # type: typing.Set[int]\n for part_detail in parts_resp.get('Parts', []):\n parts_map.add(part_detail['PartNumber'])\n\n while True:\n if search_start not in parts_map:\n # not found, add it to the list of parts we still need.\n result.append(search_start)\n\n # have we met our requirements?\n if len(result) == return_count or search_start == part_count:\n return result\n\n search_start += 1\n\n if parts_resp['IsTruncated'] and search_start == 
parts_resp['NextPartNumberMarker']:\n # finished examining the results of this batch, move onto the next one\n break"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_bucket_exists(self, bucket: str) -> bool:\n exists = True\n try:\n self.s3_client.head_bucket(Bucket=bucket)\n except botocore.exceptions.ClientError as e:\n # If a client error is thrown, then check that it was a 404 error.\n # If it was a 404 error, then the bucket does not exist.\n error_code = int(e.response['Error']['Code'])\n if error_code == 404:\n exists = False\n return exists", "response": "Checks if the specified bucket exists."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_bucket_region(self, bucket) -> str:\n region = self.s3_client.get_bucket_location(Bucket=bucket)[\"LocationConstraint\"]\n return 'us-east-1' if region is None else region", "response": "Get region associated with a specified bucket name."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nextracts version information from source code", "response": "def get_version(filename='scanf.py'):\r\n \"\"\" Extract version information from source code \"\"\"\r\n version = '' \r\n with open(filename, 'r') as fp:\r\n for line in fp:\r\n m = re.search('__version__ .* ''(.*)''', line)\r\n if m is not None:\r\n version = (m.group(1)).strip('\\'')\r\n break\r\n return version"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntranslating the format into a regular expression and a list of casts.", "response": "def scanf_compile(format, collapseWhitespace=True):\n \"\"\"\n Translate the format into a regular expression\n\n For example:\n >>> format_re, casts = scanf_compile('%s - %d errors, %d warnings')\n >>> print format_re.pattern\n (\\S+) \\- ([+-]?\\d+) errors, ([+-]?\\d+) warnings\n\n Translated formats are cached for faster reuse\n \"\"\"\n\n format_pat = 
\"\"\n cast_list = []\n i = 0\n length = len(format)\n while i < length:\n found = None\n for token, pattern, cast in scanf_translate:\n found = token.match(format, i)\n if found:\n if cast: # cast != None\n cast_list.append(cast)\n groups = found.groupdict() or found.groups()\n if groups:\n pattern = pattern % groups\n format_pat += pattern\n i = found.end()\n break\n if not found:\n char = format[i]\n # escape special characters\n if char in \"|^$()[]-.+*?{}<>\\\\\":\n format_pat += \"\\\\\"\n format_pat += char\n i += 1\n if DEBUG:\n print(\"DEBUG: %r -> %s\" % (format, format_pat))\n if collapseWhitespace:\n format_pat = re.sub(r'\\s+', r'\\\\s+', format_pat)\n\n format_re = re.compile(format_pat)\n return format_re, cast_list"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef scanf(format, s=None, collapseWhitespace=True):\n\n if s is None:\n s = sys.stdin\n\n if hasattr(s, \"readline\"):\n s = s.readline()\n\n format_re, casts = scanf_compile(format, collapseWhitespace)\n\n found = format_re.search(s)\n if found:\n groups = found.groups()\n return tuple([casts[i](groups[i]) for i in range(len(groups))])", "response": "Scan a log file for the given format and return a tuple of the found log files."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef extractdata(pattern, text=None, filepath=None):\n y = []\n if text is None:\n textsource = open(filepath, 'r')\n else:\n textsource = text.splitlines()\n\n for line in textsource:\n match = scanf(pattern, line)\n if match:\n if len(y) == 0:\n y = [[s] for s in match]\n else:\n for i, ydata in enumerate(y):\n ydata.append(match[i])\n\n if text is None:\n textsource.close()\n\n return y", "response": "Read through a file or body of text and return a list of the data that matches the supplied pattern string."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the 
documentation\ndef nation(self, nation_name, password=None, autologin=None):\n return Nation(nation_name, self, password=password, autologin=autologin)", "response": "Setup access to the Nation API with the Nation object based off of the nation name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef wa(self, chamber):\n if isinstance(chamber, int):\n chamber = str(chamber)\n return WorldAssembly(chamber, self)", "response": "Setup access to the World Assembly API with the World Assembly object based off region_name\n ."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate Telegram Templates which can be used to send telegrams", "response": "def telegram(self, client_key=None, tgid=None, key=None):\n \"\"\"Create Telegram Templates which can be used to send telegrams\n :param client_key: Client Key Nationstates Gave you\n :param tgid: TGID from api template\n :param key: Key from api Template\n \"\"\"\n return Telegram(self, client_key, tgid, key)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\napply a patch to the current node s entry set.", "response": "def apply_patch(diffs):\n \"\"\" Not ready for use yet \"\"\"\n pass\n\n if isinstance(diffs, patch.diff):\n diffs = [diffs]\n\n for diff in diffs:\n if diff.header.old_path == '/dev/null':\n text = []\n else:\n with open(diff.header.old_path) as f:\n text = f.read()\n\n new_text = apply_diff(diff, text)\n with open(diff.header.new_path, 'w') as f:\n f.write(new_text)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef deserialize_compact(jwt):\n parts = jwt.split('.')\n\n if len(parts) == 3:\n token_type = JWS\n elif len(parts) == 5:\n token_type = JWE\n else:\n raise Error('Malformed JWT')\n\n return token_type(*parts)", "response": "Deserializes a compact representation of a JWT."} {"SOURCE": 
"codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef encrypt(claims, jwk, adata='', add_header=None, alg='RSA-OAEP',\n enc='A128CBC-HS256', rng=get_random_bytes, compression=None):\n \"\"\" Encrypts the given claims and produces a :class:`~jose.JWE`\n\n :param claims: A `dict` representing the claims for this\n :class:`~jose.JWE`.\n :param jwk: A `dict` representing the JWK to be used for encryption of\n the CEK. This parameter is algorithm-specific.\n :param adata: Arbitrary string data to add to the authentication\n (i.e. HMAC). The same data must be provided during\n decryption.\n :param add_header: Additional items to be added to the header. Additional\n headers *will* be authenticated.\n :param alg: The algorithm to use for CEK encryption\n :param enc: The algorithm to use for claims encryption\n :param rng: Random number generator. A string of random bytes is expected\n as output.\n :param compression: The compression algorithm to use. Currently supports\n `'DEF'`.\n :rtype: :class:`~jose.JWE`\n :raises: :class:`~jose.Error` if there is an error producing the JWE\n \"\"\"\n # copy so the injected claim doesn't mutate the input claims\n # this is a temporary hack to allow for graceful deprecation of tokens,\n # ensuring that the library can still handle decrypting tokens issued\n # before the implementation of the fix\n claims = deepcopy(claims)\n assert _TEMP_VER_KEY not in claims\n claims[_TEMP_VER_KEY] = _TEMP_VER\n\n header = dict((add_header or {}).items() + [\n (HEADER_ENC, enc), (HEADER_ALG, alg)])\n\n # promote the temp key to the header\n assert _TEMP_VER_KEY not in header\n header[_TEMP_VER_KEY] = claims[_TEMP_VER_KEY]\n\n plaintext = json_encode(claims)\n\n # compress (if required)\n if compression is not None:\n header[HEADER_ZIP] = compression\n try:\n (compress, _) = COMPRESSION[compression]\n except KeyError:\n raise Error(\n 'Unsupported compression algorithm: {}'.format(compression))\n plaintext = 
compress(plaintext)\n\n # body encryption/hash\n ((cipher, _), key_size), ((hash_fn, _), hash_mod) = JWA[enc]\n iv = rng(AES.block_size)\n encryption_key = rng(hash_mod.digest_size)\n\n ciphertext = cipher(plaintext, encryption_key[-hash_mod.digest_size/2:], iv)\n hash = hash_fn(_jwe_hash_str(ciphertext, iv, adata),\n encryption_key[:-hash_mod.digest_size/2], hash_mod)\n\n # cek encryption\n (cipher, _), _ = JWA[alg]\n encryption_key_ciphertext = cipher(encryption_key, jwk)\n\n return JWE(*map(b64encode_url,\n (json_encode(header),\n encryption_key_ciphertext,\n iv,\n ciphertext,\n auth_tag(hash))))", "response": "Encrypts the given claims and produces a JWE with the given key and the given data."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nencrypts the given claims and produces a JWE token that is compatible with the CEK.", "response": "def spec_compliant_encrypt(claims, jwk, add_header=None, alg='RSA-OAEP',\n enc='A128CBC-HS256', rng=get_random_bytes):\n \"\"\" Encrypts the given claims and produces a :class:`~jose.JWE`\n\n :param claims: A `dict` representing the claims for this\n :class:`~jose.JWE`.\n :param jwk: A `dict` representing the JWK to be used for encryption of\n the CEK. This parameter is algorithm-specific.\n :param add_header: Additional items to be added to the header. Additional\n headers *will* be authenticated.\n :param alg: The algorithm to use for CEK encryption\n :param enc: The algorithm to use for claims encryption\n :param rng: Random number generator. A string of random bytes is expected\n as output.\n : param compression: The compression algorithm to use. Currently supports\n `'DEF'`.\n :rtype: :class:`~jose.JWE`\n :raises: :class:`~jose.Error` if there is an error producing the JWE\n \"\"\"\n # We need 5 components for JWE token\n # 1. Generate header\n header = dict((add_header or {}).items() + [(HEADER_ENC, enc),\n (HEADER_ALG, alg)])\n protected_header = json_encode(header)\n\n # 2. 
Generate CEK\n mac_key, enc_key = _generate_encryption_keys(enc, rng)\n encrypted_key = _encrypt_key(mac_key + enc_key, jwk, alg)\n\n # 3. Generate Initialization Vector\n iv = _generate_iv(enc, rng)\n\n # 4. Generate payload\n plaintext = json_encode(claims)\n # Compress if needed\n if HEADER_ZIP in header:\n try:\n (compression_func, _) = COMPRESSION[header[HEADER_ZIP]]\n except KeyError:\n raise Error(\n 'Unsupported compression algorithm: {}'.format(header[HEADER_ZIP]))\n M = compression_func(plaintext)\n else:\n M = plaintext\n\n # Encrypt payload\n ((cipher, _), key_len), _ = JWA[enc]\n ciphertext = cipher(M, enc_key, iv)\n\n # 5. Generate authentication tag\n authentication_tag = _generate_authentication_tag(\n mac_key, protected_header, ciphertext, iv, enc\n )\n\n return JWE(\n *map(\n b64encode_url,\n (protected_header, encrypted_key, iv, ciphertext,\n authentication_tag)\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sign(claims, jwk, add_header=None, alg='HS256'):\n (hash_fn, _), mod = JWA[alg]\n\n header = dict((add_header or {}).items() + [(HEADER_ALG, alg)])\n header, payload = map(b64encode_url, map(json_encode, (header, claims)))\n\n sig = b64encode_url(hash_fn(_jws_hash_str(header, payload), jwk['k'],\n mod=mod))\n\n return JWS(header, payload, sig)", "response": "Signs the given claims and produces a JWS."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nverify the given JWS.", "response": "def verify(jws, jwk, alg, validate_claims=True, expiry_seconds=None):\n \"\"\" Verifies the given :class:`~jose.JWS`\n\n :param jws: The :class:`~jose.JWS` to be verified.\n :param jwk: A `dict` representing the JWK to use for verification. This\n parameter is algorithm-specific.\n :param alg: The algorithm to verify the signature with.\n :param validate_claims: A `bool` indicating whether or not the `exp`, `iat`\n and `nbf` claims should be validated. 
Defaults to\n `True`.\n :param expiry_seconds: An `int` containing the JWT expiry in seconds, used\n when evaluating the `iat` claim. Defaults to `None`,\n which disables `iat` claim validation.\n :rtype: :class:`~jose.JWT`\n :raises: :class:`~jose.Expired` if the JWT has expired\n :raises: :class:`~jose.NotYetValid` if the JWT is not yet valid\n :raises: :class:`~jose.Error` if there is an error decrypting the JWE\n \"\"\"\n header, payload, sig = map(b64decode_url, jws)\n header = json_decode(header)\n if alg != header[HEADER_ALG]:\n raise Error('Invalid algorithm')\n\n (_, verify_fn), mod = JWA[header[HEADER_ALG]]\n\n if not verify_fn(_jws_hash_str(jws.header, jws.payload),\n jwk['k'], sig, mod=mod):\n raise Error('Mismatched signatures')\n\n claims = json_decode(b64decode_url(jws.payload))\n _validate(claims, validate_claims, expiry_seconds)\n\n return JWT(header, claims)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndecoding base64 encoded JWT tokens.", "response": "def b64decode_url(istr):\n \"\"\" JWT Tokens may be truncated without the usual trailing padding '='\n symbols. 
Compensate by padding to the nearest 4 bytes.\n \"\"\"\n istr = encode_safe(istr)\n try:\n return urlsafe_b64decode(istr + '=' * (4 - (len(istr) % 4)))\n except TypeError as e:\n raise Error('Unable to decode base64: %s' % (e))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nvalidating the claims and return the base base", "response": "def _validate(claims, validate_claims, expiry_seconds):\n \"\"\" Validate expiry related claims.\n\n If validate_claims is False, do nothing.\n\n Otherwise, validate the exp and nbf claims if they are present, and\n validate the iat claim if expiry_seconds is provided.\n \"\"\"\n if not validate_claims:\n return\n\n now = time()\n\n # TODO: implement support for clock skew\n\n # The exp (expiration time) claim identifies the expiration time on or\n # after which the JWT MUST NOT be accepted for processing. The\n # processing of the exp claim requires that the current date/time MUST\n # be before the expiration date/time listed in the exp claim.\n try:\n expiration_time = claims[CLAIM_EXPIRATION_TIME]\n except KeyError:\n pass\n else:\n _check_expiration_time(now, expiration_time)\n\n # The iat (issued at) claim identifies the time at which the JWT was\n # issued. This claim can be used to determine the age of the JWT.\n # If expiry_seconds is provided, and the iat claims is present,\n # determine the age of the token and check if it has expired.\n try:\n issued_at = claims[CLAIM_ISSUED_AT]\n except KeyError:\n pass\n else:\n if expiry_seconds is not None:\n _check_expiration_time(now, issued_at + expiry_seconds)\n\n # The nbf (not before) claim identifies the time before which the JWT\n # MUST NOT be accepted for processing. 
The processing of the nbf claim\n # requires that the current date/time MUST be after or equal to the\n # not-before date/time listed in the nbf claim.\n try:\n not_before = claims[CLAIM_NOT_BEFORE]\n except KeyError:\n pass\n else:\n _check_not_before(now, not_before)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef gauge(key, gauge=None, default=float(\"nan\"), **dims):\n return global_registry().gauge(key, gauge=gauge, default=default, **dims)", "response": "Adds a gauge to the global pyformance registry"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef count_calls_with_dims(**dims):\n def counter_wrapper(fn):\n @functools.wraps(fn)\n def fn_wrapper(*args, **kwargs):\n counter(\"%s_calls\" %\n pyformance.registry.get_qualname(fn), **dims).inc()\n return fn(*args, **kwargs)\n return fn_wrapper\n return counter_wrapper", "response": "Decorator to track the number of times a function is called\n with dimensions."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef hist_calls(fn):\n @functools.wraps(fn)\n def wrapper(*args, **kwargs):\n _histogram = histogram(\n \"%s_calls\" % pyformance.registry.get_qualname(fn))\n rtn = fn(*args, **kwargs)\n if type(rtn) in (int, float):\n _histogram.add(rtn)\n return rtn\n return wrapper", "response": "Decorator to check the distribution of return values of a function."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef hist_calls_with_dims(**dims):\n def hist_wrapper(fn):\n @functools.wraps(fn)\n def fn_wrapper(*args, **kwargs):\n _histogram = histogram(\n \"%s_calls\" % pyformance.registry.get_qualname(fn), **dims)\n rtn = fn(*args, **kwargs)\n if type(rtn) in (int, float):\n _histogram.add(rtn)\n return rtn\n return fn_wrapper\n return hist_wrapper", "response": "Decorator to check the distribution of return values of a\n function with 
dimensions."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds custom metric instances to the registry with dimensions", "response": "def add(self, key, metric, **dims):\n \"\"\"Adds custom metric instances to the registry with dimensions\n which are not created with their constructors default arguments\n \"\"\"\n return super(MetricsRegistry, self).add(\n self.metadata.register(key, **dims), metric)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef counter(self, key, **dims):\n return super(MetricsRegistry, self).counter(\n self.metadata.register(key, **dims))", "response": "Adds a counter with dimensions to the registry"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef histogram(self, key, **dims):\n return super(MetricsRegistry, self).histogram(\n self.metadata.register(key, **dims))", "response": "Adds histogram with dimensions to the registry"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a gauge to the registry", "response": "def gauge(self, key, gauge=None, default=float(\"nan\"), **dims):\n \"\"\"Adds gauge with dimensions to the registry\"\"\"\n return super(MetricsRegistry, self).gauge(\n self.metadata.register(key, **dims), gauge=gauge, default=default)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds meter with dimensions to the registry", "response": "def meter(self, key, **dims):\n \"\"\"Adds meter with dimensions to the registry\"\"\"\n return super(MetricsRegistry, self).meter(\n self.metadata.register(key, **dims))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef timer(self, key, **dims):\n return super(MetricsRegistry, self).timer(\n self.metadata.register(key, **dims))", "response": "Adds a timer with dimensions to the registry"} 
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef timer(self, key, **dims):\n return super(RegexRegistry, self).timer(self._get_key(key), **dims)", "response": "Adds a timer with dimensions to the registry"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds histogram with dimensions to the registry", "response": "def histogram(self, key, **dims):\n \"\"\"Adds histogram with dimensions to the registry\"\"\"\n return super(RegexRegistry, self).histogram(self._get_key(key), **dims)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef counter(self, key, **dims):\n return super(RegexRegistry, self).counter(self._get_key(key), **dims)", "response": "Adds a counter with dimensions to the registry"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding a gauge to the registry", "response": "def gauge(self, key, gauge=None, default=float(\"nan\"), **dims):\n \"\"\"Adds gauge with dimensions to the registry\"\"\"\n return super(RegexRegistry, self).gauge(\n self._get_key(key), gauge=gauge, default=default, **dims)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef meter(self, key, **dims):\n return super(RegexRegistry, self).meter(self._get_key(key), **dims)", "response": "Adds meter with dimensions to the registry"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nremoves extra dimensions added by the add_dimensions function.", "response": "def remove_dimensions(self, dimension_names):\n \"\"\"Removes extra dimensions added by the add_dimensions() function.\n Ignores dimension names that don't exist.\n\n Args:\n dimension_names (list): List of dimension names to remove.\n \"\"\"\n with self._lock:\n for dimension in dimension_names:\n if dimension in self._extra_dimensions:\n 
del self._extra_dimensions[dimension]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef send(self, cumulative_counters=None, gauges=None, counters=None):\n if not gauges and not cumulative_counters and not counters:\n return\n\n data = {\n 'cumulative_counter': cumulative_counters,\n 'gauge': gauges,\n 'counter': counters,\n }\n _logger.debug('Sending datapoints to SignalFx: %s', data)\n\n for metric_type, datapoints in data.items():\n if not datapoints:\n continue\n if not isinstance(datapoints, list):\n raise TypeError('Datapoints not of type list %s', datapoints)\n for datapoint in datapoints:\n self._add_extra_dimensions(datapoint)\n self._add_to_queue(metric_type, datapoint)\n\n # Ensure the sending thread is running.\n self._start_thread()", "response": "Sends the given metrics to the SignalFx."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsends an event to SignalFx.", "response": "def send_event(self, event_type, category=None, dimensions=None,\n properties=None, timestamp=None):\n \"\"\"Send an event to SignalFx.\n\n Args:\n event_type (string): the event type (name of the event time\n series).\n category (string): the category of the event.\n dimensions (dict): a map of event dimensions.\n properties (dict): a map of extra properties on that event.\n timestamp (float): timestamp when the event has occured\n \"\"\"\n if category and category not in SUPPORTED_EVENT_CATEGORIES:\n raise ValueError('Event category is not one of the supported' +\n 'types: {' +\n ', '.join(SUPPORTED_EVENT_CATEGORIES) + '}')\n\n data = {\n 'eventType': event_type,\n 'category': category,\n 'dimensions': dimensions or {},\n 'properties': properties or {},\n 'timestamp': int(timestamp) if timestamp else None,\n }\n\n _logger.debug('Sending event to SignalFx: %s', data)\n self._add_extra_dimensions(data)\n return self._send_event(event_data=data, url='{0}/{1}'.format(\n 
self._endpoint, self._INGEST_ENDPOINT_EVENT_SUFFIX),\n session=self._session)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstopping send thread and flush points for a safe exit.", "response": "def stop(self, msg='Thread stopped'):\n \"\"\"Stop send thread and flush points for a safe exit.\"\"\"\n with self._lock:\n if not self._thread_running:\n return\n self._thread_running = False\n self._queue.put(_BaseSignalFxIngestClient._QUEUE_STOP)\n self._send_thread.join()\n _logger.debug(msg)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nassigning the supplied value to the appropriate protobuf value type.", "response": "def _assign_value_by_type(self, pbuf_obj, value, _bool=True, _float=True,\n _integer=True, _string=True, error_prefix=''):\n \"\"\"Assigns the supplied value to the appropriate protobuf value type\"\"\"\n # bool inherits int, so bool instance check must be executed prior to\n # checking for integer types\n if isinstance(value, bool) and _bool is True:\n pbuf_obj.value.boolValue = value\n elif isinstance(value, six.integer_types) and \\\n not isinstance(value, bool) and _integer is True:\n if value < INTEGER_MIN or value > INTEGER_MAX:\n raise ValueError(\n ('{}: {} exceeds signed 64 bit integer range '\n 'as defined by ProtocolBuffers ({} to {})')\n .format(error_prefix, str(value),\n str(INTEGER_MIN), str(INTEGER_MAX)))\n pbuf_obj.value.intValue = value\n elif isinstance(value, float) and _float is True:\n pbuf_obj.value.doubleValue = value\n elif isinstance(value, six.string_types) and _string is True:\n pbuf_obj.value.strValue = value\n else:\n raise ValueError(\n '{}: {} is of invalid type {}'\n .format(error_prefix, str(value), str(type(value))))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _assign_value(self, pbuf_dp, value):\n self._assign_value_by_type(pbuf_dp, value, _bool=False,\n error_prefix='Invalid value')", 
"response": "Assigns a value to the protobuf obj"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\niterate over the messages from the computation s output and yield them as they are processed.", "response": "def stream(self):\n \"\"\"Iterate over the messages from the computation's output.\n\n Control and metadata messages are intercepted and interpreted to\n enhance this Computation's object knowledge of the computation's\n context. Data and event messages are yielded back to the caller as a\n generator.\n \"\"\"\n\n iterator = iter(self._stream)\n while self._state < Computation.STATE_COMPLETED:\n try:\n message = next(iterator)\n except StopIteration:\n if self._state < Computation.STATE_COMPLETED:\n self._stream = self._execute()\n iterator = iter(self._stream)\n continue\n\n if isinstance(message, messages.StreamStartMessage):\n self._state = Computation.STATE_STREAM_STARTED\n continue\n\n if isinstance(message, messages.JobStartMessage):\n self._state = Computation.STATE_COMPUTATION_STARTED\n self._id = message.handle\n yield message\n continue\n\n if isinstance(message, messages.JobProgressMessage):\n yield message\n continue\n\n if isinstance(message, messages.ChannelAbortMessage):\n self._state = Computation.STATE_ABORTED\n raise errors.ComputationAborted(message.abort_info)\n\n if isinstance(message, messages.EndOfChannelMessage):\n self._state = Computation.STATE_COMPLETED\n continue\n\n # Intercept metadata messages to accumulate received metadata...\n if isinstance(message, messages.MetadataMessage):\n self._metadata[message.tsid] = message.properties\n yield message\n continue\n\n # ...as well as expired-tsid messages to clean it up.\n if isinstance(message, messages.ExpiredTsIdMessage):\n if message.tsid in self._metadata:\n del self._metadata[message.tsid]\n yield message\n continue\n\n if isinstance(message, messages.InfoMessage):\n self._process_info_message(message.message)\n self._batch_count_detected = True\n if 
self._current_batch_message:\n yield self._get_batch_to_yield()\n continue\n\n # Accumulate data messages and release them when we have received\n # all batches for the same logical timestamp.\n if isinstance(message, messages.DataMessage):\n self._state = Computation.STATE_DATA_RECEIVED\n\n if not self._batch_count_detected:\n self._expected_batches += 1\n\n if not self._current_batch_message:\n self._current_batch_message = message\n self._current_batch_count = 1\n elif (message.logical_timestamp_ms ==\n self._current_batch_message.logical_timestamp_ms):\n self._current_batch_message.add_data(message.data)\n self._current_batch_count += 1\n else:\n self._batch_count_detected = True\n\n if (self._batch_count_detected and\n self._current_batch_count == self._expected_batches):\n yield self._get_batch_to_yield()\n continue\n\n if isinstance(message, messages.EventMessage):\n yield message\n continue\n\n if isinstance(message, messages.ErrorMessage):\n raise errors.ComputationFailed(message.errors)\n\n # Yield last batch, even if potentially incomplete.\n if self._current_batch_message:\n yield self._get_batch_to_yield()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nprocessing an information message received from the computation.", "response": "def _process_info_message(self, message):\n \"\"\"Process an information message received from the computation.\"\"\"\n # Extract the output resolution from the appropriate message, if\n # it's present.\n if message['messageCode'] == 'JOB_RUNNING_RESOLUTION':\n self._resolution = message['contents']['resolutionMs']\n elif message['messageCode'] == 'FETCH_NUM_TIMESERIES':\n self._num_input_timeseries += int(message['numInputTimeSeries'])"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nexecutes the given SignalFlow program and stream the output back.", "response": "def execute(self, program, start=None, stop=None, resolution=None,\n 
max_delay=None, persistent=False, immediate=False,\n disable_all_metric_publishes=None):\n \"\"\"Execute the given SignalFlow program and stream the output back.\"\"\"\n params = self._get_params(start=start, stop=stop,\n resolution=resolution,\n maxDelay=max_delay,\n persistent=persistent,\n immediate=immediate,\n disableAllMetricPublishes=disable_all_metric_publishes)\n\n def exec_fn(since=None):\n if since:\n params['start'] = since\n return self._transport.execute(program, params)\n\n c = computation.Computation(exec_fn)\n self._computations.add(c)\n return c"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef preflight(self, program, start, stop, resolution=None,\n max_delay=None):\n \"\"\"Preflight the given SignalFlow program and stream the output\n back.\"\"\"\n params = self._get_params(start=start, stop=stop,\n resolution=resolution,\n maxDelay=max_delay)\n\n def exec_fn(since=None):\n if since:\n params['start'] = since\n return self._transport.preflight(program, params)\n\n c = computation.Computation(exec_fn)\n self._computations.add(c)\n return c", "response": "Preflight the given SignalFlow program and stream the output\n back."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nstarting executing the given SignalFlow program without being attached to the output of the computation.", "response": "def start(self, program, start=None, stop=None, resolution=None,\n max_delay=None):\n \"\"\"Start executing the given SignalFlow program without being attached\n to the output of the computation.\"\"\"\n params = self._get_params(start=start, stop=stop,\n resolution=resolution,\n maxDelay=max_delay)\n self._transport.start(program, params)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nattaching to an existing SignalFlow computation.", "response": "def attach(self, handle, filters=None, resolution=None):\n \"\"\"Attach to an existing 
SignalFlow computation.\"\"\"\n params = self._get_params(filters=filters, resolution=resolution)\n c = computation.Computation(\n lambda since: self._transport.attach(handle, params))\n self._computations.add(c)\n return c"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef stop(self, handle, reason=None):\n params = self._get_params(reason=reason)\n self._transport.stop(handle, params)", "response": "Stop a SignalFlow computation."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_object_by_name(self, object_endpoint, object_name, timeout=None):\n timeout = timeout or self._timeout\n resp = self._get(self._u(object_endpoint, object_name),\n session=self._session, timeout=timeout)\n resp.raise_for_status()\n return resp.json()", "response": "Generic function to get object by name from SignalFx.\n API endpoint suffix is v2 / tag."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef search_metrics(self, *args, **kwargs):\n return self._search_metrics_and_metadata(\n self._METRIC_ENDPOINT_SUFFIX, *args, **kwargs)", "response": "Search for metrics on the base url of the base class."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_metric_by_name(self, metric_name, **kwargs):\n return self._get_object_by_name(self._METRIC_ENDPOINT_SUFFIX,\n metric_name,\n **kwargs)", "response": "get a metric by name"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating or update a metric object by name", "response": "def update_metric_by_name(self, metric_name, metric_type, description=None,\n custom_properties=None, tags=None, **kwargs):\n \"\"\"\n Create or update a metric object\n\n Args:\n metric_name (string): name of metric\n type (string): metric type, must be one of 'gauge', 'counter',\n 
'cumulative_counter'\n description (optional[string]): a description\n custom_properties (optional[dict]): dictionary of custom properties\n tags (optional[list of strings]): list of tags associated with\n metric\n \"\"\"\n data = {'type': metric_type.upper(),\n 'description': description or '',\n 'customProperties': custom_properties or {},\n 'tags': tags or []}\n resp = self._put(self._u(self._METRIC_ENDPOINT_SUFFIX,\n str(metric_name)),\n data=data, **kwargs)\n resp.raise_for_status()\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef search_dimensions(self, *args, **kwargs):\n return self._search_metrics_and_metadata(\n self._DIMENSION_ENDPOINT_SUFFIX, *args, **kwargs)", "response": "Search for the dimensions of the current node"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_dimension(self, key, value, **kwargs):\n return self._get_object_by_name(self._DIMENSION_ENDPOINT_SUFFIX,\n '{0}/{1}'.format(key, value),\n **kwargs)", "response": "get a dimension by key and value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_dimension(self, key, value, description=None,\n custom_properties=None, tags=None, **kwargs):\n \"\"\"\n update a dimension\n Args:\n key (string): key of the dimension\n value (string): value of the dimension\n description (optional[string]): a description\n custom_properties (optional[dict]): dictionary of custom properties\n tags (optional[list of strings]): list of tags associated with\n metric\n \"\"\"\n data = {'description': description or '',\n 'customProperties': custom_properties or {},\n 'tags': tags or [],\n 'key': key,\n 'value': value}\n resp = self._put(self._u(self._DIMENSION_ENDPOINT_SUFFIX, key, value),\n data=data, **kwargs)\n resp.raise_for_status()\n return resp.json()", "response": 
"Update a dimension of a resource."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsearching the metric time series for the specified resource.", "response": "def search_metric_time_series(self, *args, **kwargs):\n \"\"\"\n Args:\n query (string): elasticsearch string query\n order_by (optional[string]): property by which to order results\n offset (optional[int]): number of results to skip for pagination\n (default=0)\n limit (optional[int]): how many results to return (default=50)\n timeout (optional[int]): how long to wait for response (in seconds)\n\n Returns:\n result of query search on metric time series\n\n \"\"\"\n return self._search_metrics_and_metadata(self._MTS_ENDPOINT_SUFFIX,\n *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a metric time series by id", "response": "def get_metric_time_series(self, mts_id, **kwargs):\n \"\"\"get a metric time series by id\"\"\"\n return self._get_object_by_name(self._MTS_ENDPOINT_SUFFIX,\n mts_id,\n **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef search_tags(self, *args, **kwargs):\n return self._search_metrics_and_metadata(self._TAG_ENDPOINT_SUFFIX,\n *args, **kwargs)", "response": "Search for tags on a set of tags"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_tag(self, tag_name, **kwargs):\n return self._get_object_by_name(self._TAG_ENDPOINT_SUFFIX,\n tag_name,\n **kwargs)", "response": "get a tag by name"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate a tag by name", "response": "def update_tag(self, tag_name, description=None,\n custom_properties=None, **kwargs):\n \"\"\"update a tag by name\n\n Args:\n tag_name (string): name of tag to update\n description (optional[string]): a description\n custom_properties (optional[dict]): dictionary of custom 
properties\n \"\"\"\n data = {'description': description or '',\n 'customProperties': custom_properties or {}}\n resp = self._put(self._u(self._TAG_ENDPOINT_SUFFIX, tag_name),\n data=data, **kwargs)\n resp.raise_for_status()\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_tag(self, tag_name, **kwargs):\n resp = self._delete(self._u(self._TAG_ENDPOINT_SUFFIX, tag_name),\n **kwargs)\n resp.raise_for_status()\n # successful delete returns 204, which has no associated json\n return resp", "response": "delete a tag by name"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the organization to which the user belongs", "response": "def get_organization(self, **kwargs):\n \"\"\"Get the organization to which the user belongs\n\n Returns:\n dictionary of the response\n \"\"\"\n resp = self._get(self._u(self._ORGANIZATION_ENDPOINT_SUFFIX),\n **kwargs)\n resp.raise_for_status()\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_chart(self, id, **kwargs):\n resp = self._get_object_by_name(self._CHART_ENDPOINT_SUFFIX, id,\n **kwargs)\n return resp", "response": "Retrieve a ( v2 ) chart by id."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nretrieving a ( v2 ) dashboard by id.", "response": "def get_dashboard(self, id, **kwargs):\n \"\"\"\"Retrieve a (v2) dashboard by id.\n \"\"\"\n resp = self._get_object_by_name(self._DASHBOARD_ENDPOINT_SUFFIX, id,\n **kwargs)\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_detector(self, id, **kwargs):\n resp = self._get_object_by_name(self._DETECTOR_ENDPOINT_SUFFIX, id,\n **kwargs)\n return resp", "response": "Retrieve a ( v2 ) detector by id."} {"SOURCE": "codesearchnet", "instruction": 
"Write a Python 3 script for\nvalidating a detector. Validates the given detector; throws a 400 Bad Request HTTP error if the detector is invalid; otherwise doesn't return or throw anything. Args: detector (object): the detector model object. Will be serialized as JSON.", "response": "def validate_detector(self, detector):\n \"\"\"Validate a detector.\n\n Validates the given detector; throws a 400 Bad Request HTTP error if\n the detector is invalid; otherwise doesn't return or throw anything.\n\n Args:\n detector (object): the detector model object. Will be serialized as\n JSON.\n \"\"\"\n resp = self._post(self._u(self._DETECTOR_ENDPOINT_SUFFIX, 'validate'),\n data=detector)\n resp.raise_for_status()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a new detector.", "response": "def create_detector(self, detector):\n \"\"\"Creates a new detector.\n\n Args:\n detector (object): the detector model object. Will be serialized as\n JSON.\n Returns:\n dictionary of the response (created detector model).\n \"\"\"\n resp = self._post(self._u(self._DETECTOR_ENDPOINT_SUFFIX),\n data=detector)\n resp.raise_for_status()\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating an existing detector model.", "response": "def update_detector(self, detector_id, detector):\n \"\"\"Update an existing detector.\n\n Args:\n detector_id (string): the ID of the detector.\n detector (object): the detector model object. Will be serialized as\n JSON.\n Returns:\n dictionary of the response (updated detector model).\n \"\"\"\n resp = self._put(self._u(self._DETECTOR_ENDPOINT_SUFFIX, detector_id),\n data=detector)\n resp.raise_for_status()\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nremove a detector. 
Args: detector_id (string): the ID of the detector.", "response": "def delete_detector(self, detector_id, **kwargs):\n \"\"\"Remove a detector.\n\n Args:\n detector_id (string): the ID of the detector.\n \"\"\"\n resp = self._delete(self._u(self._DETECTOR_ENDPOINT_SUFFIX,\n detector_id),\n **kwargs)\n resp.raise_for_status()\n # successful delete returns 204, which has no response json\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_detector_incidents(self, id, **kwargs):\n resp = self._get(\n self._u(self._DETECTOR_ENDPOINT_SUFFIX, id, 'incidents'),\n None,\n **kwargs\n )\n resp.raise_for_status()\n return resp.json()", "response": "Gets all incidents for a detector"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_incident(self, id, **kwargs):\n resp = self._get_object_by_name(self._INCIDENT_ENDPOINT_SUFFIX, id,\n **kwargs)\n return resp", "response": "Retrieve a ( v2 ) incident by id."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving all (v2) incidents.", "response": "def get_incidents(self, offset=0, limit=None, include_resolved=False, **kwargs):\n \"\"\"Retrieve all (v2) incidents.\n \"\"\"\n resp = self._get(\n self._u(self._INCIDENT_ENDPOINT_SUFFIX),\n params={\n 'offset': offset,\n 'limit': limit,\n 'include_resolved': str(include_resolved).lower(),\n },\n **kwargs)\n\n resp.raise_for_status()\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nexecute a program across the given time range and returns a Pandas DataFrame containing the results indexed by output timestamp.", "response": "def get_data_frame(client, program, start, stop, resolution=None):\n \"\"\"Executes the given program across the given time range (expressed in\n millisecond timestamps since Epoch), and returns a Pandas DataFrame\n containing the results, indexed 
by output timestamp.\n\n If the program contains multiple publish() calls, their outputs are merged\n into the returned DataFrame.\"\"\"\n data = {}\n metadata = {}\n\n c = client.execute(program, start=start, stop=stop, resolution=resolution)\n for msg in c.stream():\n if isinstance(msg, messages.DataMessage):\n if msg.logical_timestamp_ms in data:\n data[msg.logical_timestamp_ms].update(msg.data)\n else:\n data[msg.logical_timestamp_ms] = msg.data\n elif isinstance(msg, messages.MetadataMessage):\n metadata[msg.tsid] = msg.properties\n\n df = pandas.DataFrame.from_dict(data, orient='index')\n df.metadata = metadata\n return df"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef login(self, email, password):\n r = requests.post('{0}/v2/session'.format(self._api_endpoint),\n json={'email': email, 'password': password})\n r.raise_for_status()\n return r.json()['accessToken']", "response": "Authenticate a user with SignalFx to acquire a session token."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nobtaining a metadata REST API client.", "response": "def rest(self, token, endpoint=None, timeout=None):\n \"\"\"Obtain a metadata REST API client.\"\"\"\n from . import rest\n return rest.SignalFxRestClient(\n token=token,\n endpoint=endpoint or self._api_endpoint,\n timeout=timeout or self._timeout)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nobtain a datapoint and event ingest client.", "response": "def ingest(self, token, endpoint=None, timeout=None, compress=None):\n \"\"\"Obtain a datapoint and event ingest client.\"\"\"\n from . 
import ingest\n if ingest.sf_pbuf:\n client = ingest.ProtoBufSignalFxIngestClient\n else:\n _logger.warn('Protocol Buffers not installed properly; '\n 'falling back to JSON.')\n client = ingest.JsonSignalFxIngestClient\n compress = compress if compress is not None else self._compress\n return client(\n token=token,\n endpoint=endpoint or self._ingest_endpoint,\n timeout=timeout or self._timeout,\n compress=compress)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef signalflow(self, token, endpoint=None, timeout=None, compress=None):\n from . import signalflow\n compress = compress if compress is not None else self._compress\n return signalflow.SignalFlowClient(\n token=token,\n endpoint=endpoint or self._stream_endpoint,\n timeout=timeout or self._timeout,\n compress=compress)", "response": "Obtain a SignalFlow API client."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef register(self, key, **kwargs):\n dimensions = dict((k, str(v)) for k, v in kwargs.items())\n composite_key = self._composite_name(key, dimensions)\n self._metadata[composite_key] = {\n 'metric': key,\n 'dimensions': dimensions\n }\n return composite_key", "response": "Registers metadata for a metric and returns a composite key"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef closed(self, code, reason=None):\n if code != 1000:\n self._error = errors.SignalFlowException(code, reason)\n _logger.info('Lost WebSocket connection with %s (%s: %s).',\n self, code, reason)\n for c in self._channels.values():\n c.offer(WebSocketComputationChannel.END_SENTINEL)\n self._channels.clear()\n with self._connection_cv:\n self._connected = False\n self._connection_cv.notify()", "response": "Handler called when the WebSocket is closed. 
Status code 1000 denotes a normal close; all others are errors."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining the current AWS unique ID for the current object.", "response": "def get_aws_unique_id(timeout=DEFAULT_AWS_TIMEOUT):\n \"\"\"Determine the current AWS unique ID\n\n Args:\n timeout (int): How long to wait for a response from AWS metadata IP\n \"\"\"\n try:\n resp = requests.get(AWS_ID_URL, timeout=timeout).json()\n except requests.exceptions.ConnectTimeout:\n _logger.warning('Connection timeout when determining AWS unique '\n 'ID. Not using AWS unique ID.')\n return None\n else:\n aws_id = \"{0}_{1}_{2}\".format(resp['instanceId'], resp['region'],\n resp['accountId'])\n _logger.debug('Using AWS unique ID %s.', aws_id)\n return aws_id"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncomputes the one - dimensional discrete Fourier Transform.", "response": "def fft(a, n=None, axis=-1, norm=None):\n \"\"\"\n Compute the one-dimensional discrete Fourier Transform.\n\n This function computes the one-dimensional *n*-point discrete Fourier\n Transform (DFT) with the efficient Fast Fourier Transform (FFT)\n algorithm [CT].\n\n Parameters\n ----------\n a : array_like\n Input array, can be complex.\n n : int, optional\n Length of the transformed axis of the output.\n If `n` is smaller than the length of the input, the input is cropped.\n If it is larger, the input is padded with zeros. If `n` is not given,\n the length of the input along the axis specified by `axis` is used.\n axis : int, optional\n Axis over which to compute the FFT. If not given, the last axis is\n used.\n norm : {None, \"ortho\"}, optional\n .. versionadded:: 1.10.0\n Normalization mode (see `numpy.fft`).
Default is None.\n\n Returns\n -------\n out : complex ndarray\n The truncated or zero-padded input, transformed along the axis\n indicated by `axis`, or the last one if `axis` is not specified.\n\n Raises\n ------\n IndexError\n if `axes` is larger than the last axis of `a`.\n\n See Also\n --------\n numpy.fft : for definition of the DFT and conventions used.\n ifft : The inverse of `fft`.\n fft2 : The two-dimensional FFT.\n fftn : The *n*-dimensional FFT.\n rfftn : The *n*-dimensional FFT of real input.\n fftfreq : Frequency bins for given FFT parameters.\n\n Notes\n -----\n FFT (Fast Fourier Transform) refers to a way the discrete Fourier\n Transform (DFT) can be calculated efficiently, by using symmetries in the\n calculated terms. The symmetry is highest when `n` is a power of 2, and\n the transform is therefore most efficient for these sizes.\n\n The DFT is defined, with the conventions used in this implementation, in\n the documentation for the `numpy.fft` module.\n\n References\n ----------\n .. [CT] Cooley, James W., and John W. Tukey, 1965, \"An algorithm for the\n machine calculation of complex Fourier series,\" *Math. 
Comput.*\n 19: 297-301.\n\n Examples\n --------\n >>> np.fft.fft(np.exp(2j * np.pi * np.arange(8) / 8))\n array([ -3.44505240e-16 +1.14383329e-17j,\n 8.00000000e+00 -5.71092652e-15j,\n 2.33482938e-16 +1.22460635e-16j,\n 1.64863782e-15 +1.77635684e-15j,\n 9.95839695e-17 +2.33482938e-16j,\n 0.00000000e+00 +1.66837030e-15j,\n 1.14383329e-17 +1.22460635e-16j,\n -1.64863782e-15 +1.77635684e-15j])\n\n >>> import matplotlib.pyplot as plt\n >>> t = np.arange(256)\n >>> sp = np.fft.fft(np.sin(t))\n >>> freq = np.fft.fftfreq(t.shape[-1])\n >>> plt.plot(freq, sp.real, freq, sp.imag)\n [<matplotlib.lines.Line2D object at 0x...>, <matplotlib.lines.Line2D object at 0x...>]\n >>> plt.show()\n\n In this example, real input has an FFT which is Hermitian, i.e., symmetric\n in the real part and anti-symmetric in the imaginary part, as described in\n the `numpy.fft` documentation.\n\n \"\"\"\n output = mkl_fft.fft(a, n, axis)\n if _unitary(norm):\n output *= 1 / sqrt(output.shape[axis])\n return output"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ifft(a, n=None, axis=-1, norm=None):\n unitary = _unitary(norm)\n output = mkl_fft.ifft(a, n, axis)\n if unitary:\n output *= sqrt(output.shape[axis])\n return output", "response": "Compute the inverse discrete Fourier transform of a vector."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncomputing the one - dimensional discrete Fourier Transform for real input.", "response": "def rfft(a, n=None, axis=-1, norm=None):\n \"\"\"\n Compute the one-dimensional discrete Fourier Transform for real input.\n\n This function computes the one-dimensional *n*-point discrete Fourier\n Transform (DFT) of a real-valued array by means of an efficient algorithm\n called the Fast Fourier Transform (FFT).\n\n Parameters\n ----------\n a : array_like\n Input array\n n : int, optional\n Number of points along transformation axis in the input to use.\n If `n` is smaller than the length of the input, the input is cropped.\n If it is larger,
the input is padded with zeros. If `n` is not given,\n the length of the input along the axis specified by `axis` is used.\n axis : int, optional\n Axis over which to compute the FFT. If not given, the last axis is\n used.\n norm : {None, \"ortho\"}, optional\n .. versionadded:: 1.10.0\n Normalization mode (see `numpy.fft`). Default is None.\n\n Returns\n -------\n out : complex ndarray\n The truncated or zero-padded input, transformed along the axis\n indicated by `axis`, or the last one if `axis` is not specified.\n If `n` is even, the length of the transformed axis is ``(n/2)+1``.\n If `n` is odd, the length is ``(n+1)/2``.\n\n Raises\n ------\n IndexError\n If `axis` is larger than the last axis of `a`.\n\n See Also\n --------\n numpy.fft : For definition of the DFT and conventions used.\n irfft : The inverse of `rfft`.\n fft : The one-dimensional FFT of general (complex) input.\n fftn : The *n*-dimensional FFT.\n rfftn : The *n*-dimensional FFT of real input.\n\n Notes\n -----\n When the DFT is computed for purely real input, the output is\n Hermitian-symmetric, i.e. the negative frequency terms are just the complex\n conjugates of the corresponding positive-frequency terms, and the\n negative-frequency terms are therefore redundant. This function does not\n compute the negative frequency terms, and the length of the transformed\n axis of the output is therefore ``n//2 + 1``.\n\n When ``A = rfft(a)`` and fs is the sampling frequency, ``A[0]`` contains\n the zero-frequency term 0*fs, which is real due to Hermitian symmetry.\n\n If `n` is even, ``A[-1]`` contains the term representing both positive\n and negative Nyquist frequency (+fs/2 and -fs/2), and must also be purely\n real. 
If `n` is odd, there is no term at fs/2; ``A[-1]`` contains\n the largest positive frequency (fs/2*(n-1)/n), and is complex in the\n general case.\n\n If the input `a` contains an imaginary part, it is silently discarded.\n\n Examples\n --------\n >>> np.fft.fft([0, 1, 0, 0])\n array([ 1.+0.j, 0.-1.j, -1.+0.j, 0.+1.j])\n >>> np.fft.rfft([0, 1, 0, 0])\n array([ 1.+0.j, 0.-1.j, -1.+0.j])\n\n Notice how the final element of the `fft` output is the complex conjugate\n of the second element, for real input. For `rfft`, this symmetry is\n exploited to compute only the non-negative frequency terms.\n\n \"\"\"\n unitary = _unitary(norm)\n if unitary and n is None:\n a = asarray(a)\n n = a.shape[axis]\n output = mkl_fft.rfft_numpy(a, n=n, axis=axis)\n if unitary:\n output *= 1 / sqrt(n)\n return output"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef irfft(a, n=None, axis=-1, norm=None):\n output = mkl_fft.irfft_numpy(a, n=n, axis=axis)\n if _unitary(norm):\n output *= sqrt(output.shape[axis])\n return output", "response": "Compute the inverse of the n - point DFT of real input."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes the FFT of a signal along a given axis.", "response": "def hfft(a, n=None, axis=-1, norm=None):\n \"\"\"\n Compute the FFT of a signal which has Hermitian symmetry (real spectrum).\n\n Parameters\n ----------\n a : array_like\n The input array.\n n : int, optional\n Length of the transformed axis of the output.\n For `n` output points, ``n//2+1`` input points are necessary. If the\n input is longer than this, it is cropped. If it is shorter than this,\n it is padded with zeros. If `n` is not given, it is determined from\n the length of the input along the axis specified by `axis`.\n axis : int, optional\n Axis over which to compute the FFT. If not given, the last\n axis is used.\n norm : {None, \"ortho\"}, optional\n .. 
versionadded:: 1.10.0\n Normalization mode (see `numpy.fft`). Default is None.\n\n Returns\n -------\n out : ndarray\n The truncated or zero-padded input, transformed along the axis\n indicated by `axis`, or the last one if `axis` is not specified.\n The length of the transformed axis is `n`, or, if `n` is not given,\n ``2*(m-1)`` where ``m`` is the length of the transformed axis of the\n input. To get an odd number of output points, `n` must be specified.\n\n Raises\n ------\n IndexError\n If `axis` is larger than the last axis of `a`.\n\n See also\n --------\n rfft : Compute the one-dimensional FFT for real input.\n ihfft : The inverse of `hfft`.\n\n Notes\n -----\n `hfft`/`ihfft` are a pair analogous to `rfft`/`irfft`, but for the\n opposite case: here the signal has Hermitian symmetry in the time domain\n and is real in the frequency domain. So here it's `hfft` for which\n you must supply the length of the result if it is to be odd:\n ``ihfft(hfft(a), len(a)) == a``, within numerical accuracy.\n\n Examples\n --------\n >>> signal = np.array([1, 2, 3, 4, 3, 2])\n >>> np.fft.fft(signal)\n array([ 15.+0.j, -4.+0.j, 0.+0.j, -1.-0.j, 0.+0.j, -4.+0.j])\n >>> np.fft.hfft(signal[:4]) # Input first half of signal\n array([ 15., -4., 0., -1., 0., -4.])\n >>> np.fft.hfft(signal, 6) # Input entire signal and truncate\n array([ 15., -4., 0., -1., 0., -4.])\n\n\n >>> signal = np.array([[1, 1.j], [-1.j, 2]])\n >>> np.conj(signal.T) - signal # check Hermitian symmetry\n array([[ 0.-0.j, 0.+0.j],\n [ 0.+0.j, 0.-0.j]])\n >>> freq_spectrum = np.fft.hfft(signal)\n >>> freq_spectrum\n array([[ 1., 1.],\n [ 2., -2.]])\n\n \"\"\"\n # The copy may be required for multithreading.\n a = array(a, copy=True, dtype=complex)\n if n is None:\n n = (a.shape[axis] - 1) * 2\n unitary = _unitary(norm)\n return irfft(conjugate(a), n, axis) * (sqrt(n) if unitary else n)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncomputing the inverse FFT of a signal.", "response": 
"def ihfft(a, n=None, axis=-1, norm=None):\n \"\"\"\n Compute the inverse FFT of a signal which has Hermitian symmetry.\n\n Parameters\n ----------\n a : array_like\n Input array.\n n : int, optional\n Length of the inverse FFT.\n Number of points along transformation axis in the input to use.\n If `n` is smaller than the length of the input, the input is cropped.\n If it is larger, the input is padded with zeros. If `n` is not given,\n the length of the input along the axis specified by `axis` is used.\n axis : int, optional\n Axis over which to compute the inverse FFT. If not given, the last\n axis is used.\n norm : {None, \"ortho\"}, optional\n .. versionadded:: 1.10.0\n Normalization mode (see `numpy.fft`). Default is None.\n\n Returns\n -------\n out : complex ndarray\n The truncated or zero-padded input, transformed along the axis\n indicated by `axis`, or the last one if `axis` is not specified.\n If `n` is even, the length of the transformed axis is ``(n/2)+1``.\n If `n` is odd, the length is ``(n+1)/2``.\n\n See also\n --------\n hfft, irfft\n\n Notes\n -----\n `hfft`/`ihfft` are a pair analogous to `rfft`/`irfft`, but for the\n opposite case: here the signal has Hermitian symmetry in the time domain\n and is real in the frequency domain. 
So here it's `hfft` for which\n you must supply the length of the result if it is to be odd:\n ``ihfft(hfft(a), len(a)) == a``, within numerical accuracy.\n\n Examples\n --------\n >>> spectrum = np.array([ 15, -4, 0, -1, 0, -4])\n >>> np.fft.ifft(spectrum)\n array([ 1.+0.j, 2.-0.j, 3.+0.j, 4.+0.j, 3.+0.j, 2.-0.j])\n >>> np.fft.ihfft(spectrum)\n array([ 1.-0.j, 2.-0.j, 3.-0.j, 4.-0.j])\n\n \"\"\"\n # The copy may be required for multithreading.\n a = array(a, copy=True, dtype=float)\n if n is None:\n n = a.shape[axis]\n unitary = _unitary(norm)\n output = conjugate(rfft(a, n, axis))\n return output * (1 / (sqrt(n) if unitary else n))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fftn(a, s=None, axes=None, norm=None):\n output = mkl_fft.fftn(a, s, axes)\n if _unitary(norm):\n output *= 1 / sqrt(_tot_size(output, axes))\n return output", "response": "Compute the N - dimensional discrete Fourier transform over a sequence of integers."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef ifftn(a, s=None, axes=None, norm=None):\n unitary = _unitary(norm)\n output = mkl_fft.ifftn(a, s, axes)\n if unitary:\n output *= sqrt(_tot_size(output, axes))\n return output", "response": "Compute the inverse discrete Fourier Transform of an array."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef fft2(a, s=None, axes=(-2, -1), norm=None):\n\n return fftn(a, s=s, axes=axes, norm=norm)", "response": "Compute the 2 - dimensional discrete Fourier transform of a sequence of integers."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ifft2(a, s=None, axes=(-2, -1), norm=None):\n\n return ifftn(a, s=s, axes=axes, norm=norm)", "response": "Compute the 2 - dimensional inverse discrete Fourier Transform of an array."} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function to\ncompute the N - dimensional discrete Fourier Transform for real input.", "response": "def rfftn(a, s=None, axes=None, norm=None):\n \"\"\"\n Compute the N-dimensional discrete Fourier Transform for real input.\n\n This function computes the N-dimensional discrete Fourier Transform over\n any number of axes in an M-dimensional real array by means of the Fast\n Fourier Transform (FFT). By default, all axes are transformed, with the\n real transform performed over the last axis, while the remaining\n transforms are complex.\n\n Parameters\n ----------\n a : array_like\n Input array, taken to be real.\n s : sequence of ints, optional\n Shape (length along each transformed axis) to use from the input.\n (``s[0]`` refers to axis 0, ``s[1]`` to axis 1, etc.).\n The final element of `s` corresponds to `n` for ``rfft(x, n)``, while\n for the remaining axes, it corresponds to `n` for ``fft(x, n)``.\n Along any axis, if the given shape is smaller than that of the input,\n the input is cropped. If it is larger, the input is padded with zeros.\n if `s` is not given, the shape of the input along the axes specified\n by `axes` is used.\n axes : sequence of ints, optional\n Axes over which to compute the FFT. If not given, the last ``len(s)``\n axes are used, or all axes if `s` is also not specified.\n norm : {None, \"ortho\"}, optional\n .. versionadded:: 1.10.0\n Normalization mode (see `numpy.fft`). 
Default is None.\n\n Returns\n -------\n out : complex ndarray\n The truncated or zero-padded input, transformed along the axes\n indicated by `axes`, or by a combination of `s` and `a`,\n as explained in the parameters section above.\n The length of the last axis transformed will be ``s[-1]//2+1``,\n while the remaining transformed axes will have lengths according to\n `s`, or unchanged from the input.\n\n Raises\n ------\n ValueError\n If `s` and `axes` have different length.\n IndexError\n If an element of `axes` is larger than the number of axes of `a`.\n\n See Also\n --------\n irfftn : The inverse of `rfftn`, i.e. the inverse of the n-dimensional FFT\n of real input.\n fft : The one-dimensional FFT, with definitions and conventions used.\n rfft : The one-dimensional FFT of real input.\n fftn : The n-dimensional FFT.\n rfft2 : The two-dimensional FFT of real input.\n\n Notes\n -----\n The transform for real input is performed over the last transformation\n axis, as by `rfft`, then the transform over the remaining axes is\n performed as by `fftn`.
The order of the output is as for `rfft` for the\n final transformation axis, and as for `fftn` for the remaining\n transformation axes.\n\n See `fft` for details, definitions and conventions used.\n\n Examples\n --------\n >>> a = np.ones((2, 2, 2))\n >>> np.fft.rfftn(a)\n array([[[ 8.+0.j, 0.+0.j],\n [ 0.+0.j, 0.+0.j]],\n [[ 0.+0.j, 0.+0.j],\n [ 0.+0.j, 0.+0.j]]])\n\n >>> np.fft.rfftn(a, axes=(2, 0))\n array([[[ 4.+0.j, 0.+0.j],\n [ 4.+0.j, 0.+0.j]],\n [[ 0.+0.j, 0.+0.j],\n [ 0.+0.j, 0.+0.j]]])\n\n \"\"\"\n unitary = _unitary(norm)\n if unitary:\n a = asarray(a)\n s, axes = _cook_nd_args(a, s, axes)\n\n output = mkl_fft.rfftn_numpy(a, s, axes)\n if unitary:\n n_tot = prod(asarray(s, dtype=output.dtype))\n output *= 1 / sqrt(n_tot)\n return output"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncomputes the 2 - dimensional discrete Fourier Transform for a real array.", "response": "def rfft2(a, s=None, axes=(-2, -1), norm=None):\n \"\"\"\n Compute the 2-dimensional FFT of a real array.\n\n Parameters\n ----------\n a : array\n Input array, taken to be real.\n s : sequence of ints, optional\n Shape of the FFT.\n axes : sequence of ints, optional\n Axes over which to compute the FFT.\n norm : {None, \"ortho\"}, optional\n .. versionadded:: 1.10.0\n Normalization mode (see `numpy.fft`). 
Default is None.\n\n Returns\n -------\n out : ndarray\n The result of the real 2-D FFT.\n\n See Also\n --------\n rfftn : Compute the N-dimensional discrete Fourier Transform for real\n input.\n\n Notes\n -----\n This is really just `rfftn` with different default behavior.\n For more details see `rfftn`.\n\n \"\"\"\n\n return rfftn(a, s, axes, norm)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef irfftn(a, s=None, axes=None, norm=None):\n output = mkl_fft.irfftn_numpy(a, s, axes)\n if _unitary(norm):\n output *= sqrt(_tot_size(output, axes))\n return output", "response": "Compute the inverse of the N - dimensional FFT of real input over any number of axes in an M - dimensional array."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef irfft2(a, s=None, axes=(-2, -1), norm=None):\n\n return irfftn(a, s, axes, norm)", "response": "Compute the 2 - dimensional inverse FFT of a real array."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsuppressing alerts for specified duration based on alert attributes.", "response": "def cli(obj, environment, service, resource, event, group, tags, customer, start, duration, text, delete):\n \"\"\"Suppress alerts for specified duration based on alert attributes.\"\"\"\n client = obj['client']\n if delete:\n client.delete_blackout(delete)\n else:\n if not environment:\n raise click.UsageError('Missing option \"--environment\" / \"-E\".')\n try:\n blackout = client.create_blackout(\n environment=environment,\n service=service,\n resource=resource,\n event=event,\n group=group,\n tags=tags,\n customer=customer,\n start=start,\n duration=duration,\n text=text\n )\n except Exception as e:\n click.echo('ERROR: {}'.format(e))\n sys.exit(1)\n click.echo(blackout.id)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cli(obj, 
username, scopes, duration, text, customer, delete):\n client = obj['client']\n if delete:\n client.delete_key(delete)\n else:\n try:\n expires = datetime.utcnow() + timedelta(seconds=duration) if duration else None\n key = client.create_key(username, scopes, expires, text, customer)\n except Exception as e:\n click.echo('ERROR: {}'.format(e))\n sys.exit(1)\n click.echo(key.key)", "response": "Create or delete an API key."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds group / org / domain / role - to - customer or delete lookup entry.", "response": "def cli(obj, customer, match, delete):\n \"\"\"Add group/org/domain/role-to-customer or delete lookup entry.\"\"\"\n client = obj['client']\n if delete:\n client.delete_customer(delete)\n else:\n if not customer:\n raise click.UsageError('Missing option \"--customer\".')\n if not match:\n raise click.UsageError('Missing option \"--org\" / \"--group\" / \"--domain\" / \"--role\".')\n try:\n customer = client.create_customer(customer, match)\n except Exception as e:\n click.echo('ERROR: {}'.format(e))\n sys.exit(1)\n click.echo(customer.id)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating user or update user details including password reset.", "response": "def cli(obj, id, name, email, password, status, roles, text, email_verified, delete):\n \"\"\"Create user or update user details, including password reset.\"\"\"\n client = obj['client']\n if delete:\n client.delete_user(delete)\n elif id:\n if not any([name, email, password, status, roles, text, email_verified]):\n click.echo('Nothing to update.')\n sys.exit(1)\n try:\n r = client.update_user(\n id, name=name, email=email, password=password, status=status,\n roles=roles, attributes=None, text=text, email_verified=email_verified\n )\n except Exception as e:\n click.echo('ERROR: {}'.format(e))\n sys.exit(1)\n if r['status'] == 'ok':\n click.echo('Updated.')\n else:\n 
click.echo(r['message'])\n else:\n if not email:\n raise click.UsageError('Need \"--email\" to create user.')\n if not password:\n password = click.prompt('Password', hide_input=True)\n try:\n user = client.create_user(\n name=name, email=email, password=password, status=status,\n roles=roles, attributes=None, text=text, email_verified=email_verified\n )\n except Exception as e:\n click.echo('ERROR: {}'.format(e))\n sys.exit(1)\n click.echo(user.id)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef cli(ctx, obj):\n client = obj['client']\n click.echo('alerta {}'.format(client.mgmt_status()['version']))\n click.echo('alerta client {}'.format(client_version))\n click.echo('requests {}'.format(requests_version))\n click.echo('click {}'.format(click.__version__))\n ctx.exit()", "response": "Show Alerta server and client versions."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndisplaying logged in user or full userinfo.", "response": "def cli(obj, show_userinfo):\n \"\"\"Display logged in user or full userinfo.\"\"\"\n client = obj['client']\n userinfo = client.userinfo()\n if show_userinfo:\n for k, v in userinfo.items():\n if isinstance(v, list):\n v = ', '.join(v)\n click.echo('{:20}: {}'.format(k, v))\n else:\n click.echo(userinfo['preferred_username'])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef cli(obj):\n for k, v in obj.items():\n if isinstance(v, list):\n v = ', '.join(v)\n click.echo('{:20}: {}'.format(k, v))", "response": "Display client config downloaded from API server."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cli(obj):\n client = obj['client']\n status = client.mgmt_status()\n\n now = datetime.fromtimestamp(int(status['time']) / 1000.0)\n uptime = datetime(1, 1, 1) + timedelta(seconds=int(status['uptime']) 
/ 1000.0)\n\n click.echo('{} up {} days {:02d}:{:02d}'.format(\n now.strftime('%H:%M'),\n uptime.day - 1, uptime.hour, uptime.minute\n ))", "response": "Display API server uptime in days hours."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nquery for alerts based on search filter criteria.", "response": "def cli(obj, ids, query, filters, display, from_date=None):\n \"\"\"Query for alerts based on search filter criteria.\"\"\"\n client = obj['client']\n timezone = obj['timezone']\n if ids:\n query = [('id', x) for x in ids]\n elif query:\n query = [('q', query)]\n else:\n query = build_query(filters)\n if from_date:\n query.append(('from-date', from_date))\n\n r = client.http.get('/alerts', query, page=1, page_size=1000)\n\n if obj['output'] == 'json':\n click.echo(json.dumps(r['alerts'], sort_keys=True, indent=4, ensure_ascii=False))\n elif obj['output'] in ['json_lines', 'jsonl', 'ndjson']:\n for alert in r['alerts']:\n click.echo(json.dumps(alert, ensure_ascii=False))\n else:\n alerts = [Alert.parse(a) for a in r['alerts']]\n last_time = r['lastTime']\n auto_refresh = r['autoRefresh']\n\n if display == 'tabular':\n headers = {'id': 'ID', 'lastReceiveTime': 'LAST RECEIVED', 'severity': 'SEVERITY', 'status': 'STATUS',\n 'duplicateCount': 'DUPL', 'customer': 'CUSTOMER', 'environment': 'ENVIRONMENT', 'service': 'SERVICE',\n 'resource': 'RESOURCE', 'group': 'GROUP', 'event': 'EVENT', 'value': 'VALUE', 'text': 'TEXT'}\n click.echo(tabulate([a.tabular('summary', timezone)\n for a in alerts], headers=headers, tablefmt=obj['output']))\n elif display in ['compact', 'details']:\n for alert in reversed(alerts):\n color = COLOR_MAP.get(alert.severity, {'fg': 'white'})\n click.secho('{}|{}|{}|{:5d}|{}|{:<5s}|{:<10s}|{:<18s}|{:12s}|{:16s}|{:12s}'.format(\n alert.id[0:8],\n DateTime.localtime(alert.last_receive_time, timezone),\n alert.severity,\n alert.duplicate_count,\n alert.customer or '-',\n alert.environment,\n ','.join(alert.service),\n 
alert.resource,\n alert.group,\n alert.event,\n alert.value or 'n/a'), fg=color['fg'])\n click.secho(' |{}'.format(alert.text), fg=color['fg'])\n\n if display == 'details':\n click.secho(' severity | {} -> {}'.format(alert.previous_severity,\n alert.severity), fg=color['fg'])\n click.secho(' trend | {}'.format(alert.trend_indication), fg=color['fg'])\n click.secho(' status | {}'.format(alert.status), fg=color['fg'])\n click.secho(' resource | {}'.format(alert.resource), fg=color['fg'])\n click.secho(' group | {}'.format(alert.group), fg=color['fg'])\n click.secho(' event | {}'.format(alert.event), fg=color['fg'])\n click.secho(' value | {}'.format(alert.value), fg=color['fg'])\n click.secho(' tags | {}'.format(' '.join(alert.tags)), fg=color['fg'])\n\n for key, value in alert.attributes.items():\n click.secho(' {} | {}'.format(key.ljust(10), value), fg=color['fg'])\n\n latency = alert.receive_time - alert.create_time\n\n click.secho(' time created | {}'.format(\n DateTime.localtime(alert.create_time, timezone)), fg=color['fg'])\n click.secho(' time received | {}'.format(\n DateTime.localtime(alert.receive_time, timezone)), fg=color['fg'])\n click.secho(' last received | {}'.format(\n DateTime.localtime(alert.last_receive_time, timezone)), fg=color['fg'])\n click.secho(' latency | {}ms'.format((latency.microseconds / 1000)), fg=color['fg'])\n click.secho(' timeout | {}s'.format(alert.timeout), fg=color['fg'])\n\n click.secho(' alert id | {}'.format(alert.id), fg=color['fg'])\n click.secho(' last recv id | {}'.format(alert.last_receive_id), fg=color['fg'])\n click.secho(' customer | {}'.format(alert.customer), fg=color['fg'])\n click.secho(' environment | {}'.format(alert.environment), fg=color['fg'])\n click.secho(' service | {}'.format(','.join(alert.service)), fg=color['fg'])\n click.secho(' resource | {}'.format(alert.resource), fg=color['fg'])\n click.secho(' type | {}'.format(alert.event_type), fg=color['fg'])\n click.secho(' repeat | {}'.format(alert.repeat), 
fg=color['fg'])\n click.secho(' origin | {}'.format(alert.origin), fg=color['fg'])\n click.secho(' correlate | {}'.format(','.join(alert.correlate)), fg=color['fg'])\n\n return auto_refresh, last_time"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving tags from alerts.", "response": "def cli(obj, ids, query, filters, tags):\n \"\"\"Remove tags from alerts.\"\"\"\n client = obj['client']\n if ids:\n total = len(ids)\n else:\n if query:\n query = [('q', query)]\n else:\n query = build_query(filters)\n total, _, _ = client.get_count(query)\n ids = [a.id for a in client.get_alerts(query)]\n\n with click.progressbar(ids, label='Untagging {} alerts'.format(total)) as bar:\n for id in bar:\n client.untag_alert(id, tags)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwatching for new alerts.", "response": "def cli(ctx, ids, query, filters, details, interval):\n \"\"\"Watch for new alerts.\"\"\"\n if details:\n display = 'details'\n else:\n display = 'compact'\n from_date = None\n\n auto_refresh = True\n while auto_refresh:\n try:\n auto_refresh, from_date = ctx.invoke(query_cmd, ids=ids, query=query,\n filters=filters, display=display, from_date=from_date)\n time.sleep(interval)\n except (KeyboardInterrupt, SystemExit) as e:\n sys.exit(e)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndisplays API server switch status and usage metrics.", "response": "def cli(obj):\n \"\"\"Display API server switch status and usage metrics.\"\"\"\n client = obj['client']\n metrics = client.mgmt_status()['metrics']\n headers = {'title': 'METRIC', 'type': 'TYPE', 'name': 'NAME', 'value': 'VALUE', 'average': 'AVERAGE'}\n click.echo(tabulate([{\n 'title': m['title'],\n 'type': m['type'],\n 'name': '{}.{}'.format(m['group'], m['name']),\n 'value': m.get('value', None) or m.get('count', 0),\n 'average': int(m['totalTime']) * 1.0 / int(m['count']) if m['type'] == 'timer' else 
None\n } for m in metrics], headers=headers, tablefmt=obj['output']))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nauthenticates using Azure Github Google OAuth2 Basic Auth username and password instead of using an API key.", "response": "def cli(obj, username):\n \"\"\"Authenticate using Azure, Github, Gitlab, Google OAuth2, OpenID or\n Basic Auth username/password instead of using an API key.\"\"\"\n client = obj['client']\n provider = obj['provider']\n client_id = obj['client_id']\n\n try:\n if provider == 'azure':\n token = azure.login(client, obj['azure_tenant'], client_id)['token']\n elif provider == 'github':\n token = github.login(client, obj['github_url'], client_id)['token']\n elif provider == 'gitlab':\n token = gitlab.login(client, obj['gitlab_url'], client_id)['token']\n elif provider == 'google':\n if not username:\n username = click.prompt('Email')\n token = google.login(client, username, client_id)['token']\n elif provider == 'openid':\n token = oidc.login(client, obj['oidc_auth_url'], client_id)['token']\n elif provider == 'basic':\n if not username:\n username = click.prompt('Email')\n password = click.prompt('Password', hide_input=True)\n token = client.login(username, password)['token']\n else:\n click.echo('ERROR: unknown provider {provider}'.format(provider=provider))\n sys.exit(1)\n except Exception as e:\n raise AuthError(e)\n\n jwt = Jwt()\n preferred_username = jwt.parse(token)['preferred_username']\n if preferred_username:\n save_token(client.endpoint, preferred_username, token)\n click.echo('Logged in as {}'.format(preferred_username))\n else:\n click.echo('Failed to login.')\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndumps the class data in the format of a. 
netrc file.", "response": "def dump_netrc(self):\n \"\"\"Dump the class data in the format of a .netrc file.\"\"\"\n rep = ''\n for host in self.hosts.keys():\n attrs = self.hosts[host]\n rep = rep + 'machine ' + host + '\\n\\tlogin ' + str(attrs[0]) + '\\n'\n if attrs[1]:\n rep = rep + 'account ' + str(attrs[1])\n rep = rep + '\\tpassword ' + str(attrs[2]) + '\\n'\n for macro in self.macros.keys():\n rep = rep + 'macdef ' + macro + '\\n'\n for line in self.macros[macro]:\n rep = rep + line\n rep = rep + '\\n'\n return rep"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef cli(obj, origin, tags, timeout, customer, delete):\n client = obj['client']\n if delete:\n client.delete_heartbeat(delete)\n else:\n try:\n heartbeat = client.heartbeat(origin=origin, tags=tags, timeout=timeout, customer=customer)\n except Exception as e:\n click.echo('ERROR: {}'.format(e))\n sys.exit(1)\n click.echo(heartbeat.id)", "response": "Send or delete a heartbeat."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef cli(obj, role, scopes, delete):\n client = obj['client']\n if delete:\n client.delete_perm(delete)\n else:\n if not role:\n raise click.UsageError('Missing option \"--role\".')\n if not scopes:\n raise click.UsageError('Missing option \"--scope\".')\n try:\n perm = client.create_perm(role, scopes)\n except Exception as e:\n click.echo('ERROR: {}'.format(e))\n sys.exit(1)\n click.echo(perm.id)", "response": "Add or delete role - to - permission lookup entry."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cli(obj):\n client = obj['client']\n timezone = obj['timezone']\n\n screen = Screen(client, timezone)\n screen.run()", "response": "Display alerts like unix top command."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef cli(obj, expired=None, info=None):\n 
client = obj['client']\n client.housekeeping(expired_delete_hours=expired, info_delete_hours=info)", "response": "Trigger the expiration and deletion of alerts."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cli(obj, name, email, password, status, text):\n client = obj['client']\n if not email:\n raise click.UsageError('Need \"--email\" to sign-up new user.')\n if not password:\n raise click.UsageError('Need \"--password\" to sign-up new user.')\n try:\n r = client.signup(name=name, email=email, password=password, status=status, attributes=None, text=text)\n except Exception as e:\n click.echo('ERROR: {}'.format(e))\n sys.exit(1)\n if 'token' in r:\n click.echo('Signed Up.')\n else:\n raise AuthError", "response": "Create new Basic Auth user."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nshowing raw data for alerts.", "response": "def cli(obj, ids, query, filters):\n \"\"\"Show raw data for alerts.\"\"\"\n client = obj['client']\n if ids:\n query = [('id', x) for x in ids]\n elif query:\n query = [('q', query)]\n else:\n query = build_query(filters)\n alerts = client.search(query)\n\n headers = {'id': 'ID', 'rawData': 'RAW DATA'}\n click.echo(\n tabulate([{'id': a.id, 'rawData': a.raw_data} for a in alerts], headers=headers, tablefmt=obj['output']))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cli(obj, name, email, password, status, text):\n if not any([name, email, password, status, text]):\n click.echo('Nothing to update.')\n sys.exit(1)\n\n client = obj['client']\n try:\n r = client.update_me(name=name, email=email, password=password, status=status, attributes=None, text=text)\n except Exception as e:\n click.echo('ERROR: {}'.format(e))\n sys.exit(1)\n if r['status'] == 'ok':\n click.echo('Updated.')\n else:\n click.echo(r['message'])", "response": "Update current user details including password 
reset."}\n else:\n if not email:\n raise click.UsageError('Need \"--email\" to create user.')\n if not password:\n password = click.prompt('Password', hide_input=True)\n try:\n user = client.create_user(\n name=name, email=email, password=password, status=status,\n roles=roles, attributes=None, text=text, email_verified=email_verified\n )\n except Exception as e:\n click.echo('ERROR: {}'.format(e))\n sys.exit(1)\n click.echo(user.id)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nshowing status and severity changes for alerts.", "response": "def cli(obj, ids, query, filters):\n \"\"\"Show status and severity changes for alerts.\"\"\"\n client = obj['client']\n\n if obj['output'] == 'json':\n r = client.http.get('/alerts/history')\n click.echo(json.dumps(r['history'], sort_keys=True, indent=4, ensure_ascii=False))\n else:\n timezone = obj['timezone']\n if ids:\n query = [('id', x) for x in ids]\n elif query:\n query = [('q', query)]\n else:\n query = build_query(filters)\n alerts = client.get_history(query)\n\n headers = {'id': 'ID', 'updateTime': 'LAST UPDATED', 'severity': 'SEVERITY', 'status': 'STATUS',\n 'type': 'TYPE', 'customer': 'CUSTOMER', 'environment': 'ENVIRONMENT', 'service': 'SERVICE',\n 'resource': 'RESOURCE', 'group': 'GROUP', 'event': 'EVENT', 'value': 'VALUE', 'text': 'TEXT'}\n click.echo(\n tabulate([a.tabular(timezone) for a in alerts], headers=headers, tablefmt=obj['output']))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef convert_to_wav(files):\n '''Converts files to a format that pocketsphinx can deal with (16khz mono 16bit wav)'''\n converted = []\n for f in files:\n new_name = f + '.temp.wav'\n print(new_name)\n if (os.path.exists(f + '.transcription.txt') is False) and (os.path.exists(new_name) is False):\n subprocess.call(['ffmpeg', '-y', '-i', f, '-acodec', 'pcm_s16le', '-ac', '1', '-ar', '16000', new_name])\n converted.append(new_name)\n return converted", "response": "Converts a list of files to a format that pocketsphinx can deal with ( 16khz mono 16bit wav"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nuse pocketsphinx to transcribe audio files", "response": "def transcribe(files=[], pre=10, post=50):\n '''Uses pocketsphinx to transcribe audio files'''\n\n total = len(files)\n\n for i, f in enumerate(files):\n filename = 
f.replace('.temp.wav', '') + '.transcription.txt'\n\n if os.path.exists(filename) is False:\n print(str(i+1) + '/' + str(total) + ' Transcribing ' + f)\n transcript = subprocess.check_output(['pocketsphinx_continuous', '-infile', f, '-time', 'yes', '-logfn', '/dev/null', '-vad_prespeech', str(pre), '-vad_postspeech', str(post)])\n\n with open(filename, 'w') as outfile:\n outfile.write(transcript.decode('utf8'))\n\n os.remove(f)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef convert_timestamps(files):\n '''Converts pocketsphinx transcriptions to usable timestamps'''\n\n sentences = []\n\n for f in files:\n\n if not f.endswith('.transcription.txt'):\n f = f + '.transcription.txt'\n\n if os.path.exists(f) is False:\n continue\n\n with open(f, 'r') as infile:\n lines = infile.readlines()\n\n lines = [re.sub(r'\\(.*?\\)', '', l).strip().split(' ') for l in lines]\n lines = [l for l in lines if len(l) == 4]\n\n seg_start = -1\n seg_end = -1\n\n for index, line in enumerate(lines):\n word, start, end, conf = line\n if word == '<s>' or word == '</s>' or word == '<sil>':\n if seg_start == -1:\n seg_start = index\n seg_end = -1\n else:\n seg_end = index\n\n if seg_start > -1 and seg_end > -1:\n words = lines[seg_start+1:seg_end]\n start = float(lines[seg_start][1])\n end = float(lines[seg_end][1])\n if words:\n sentences.append({'start': start, 'end': end, 'words': words, 'file': f})\n if word == '</s>':\n seg_start = -1\n else:\n seg_start = seg_end\n seg_end = -1\n\n return sentences", "response": "Converts pocketsphinx transcriptions to usable timestamps"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef text(files):\n '''Returns the whole transcribed text'''\n sentences = convert_timestamps(files)\n out = []\n for s in sentences:\n out.append(' '.join([w[0] for w in s['words']]))\n return '\\n'.join(out)", "response": "Returns the whole transcribed text"} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 function that can\nsearch for words or sentences containing a search phrase", "response": "def search(query, files, mode='sentence', regex=False):\n '''Searches for words or sentences containing a search phrase'''\n out = []\n sentences = convert_timestamps(files)\n\n if mode == 'fragment':\n out = fragment_search(query, sentences, regex)\n elif mode == 'word':\n out = word_search(query, sentences, regex)\n elif mode == 'franken':\n out = franken_sentence(query, files)\n else:\n out = sentence_search(query, sentences, regex)\n\n\n return out"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef extract_words(files):\n ''' Extracts individual words from files and exports them to individual files. '''\n output_directory = 'extracted_words'\n if not os.path.exists(output_directory):\n os.makedirs(output_directory)\n\n for f in files:\n file_format = None\n source_segment = None\n if f.lower().endswith('.mp3'):\n file_format = 'mp3'\n source_segment = AudioSegment.from_mp3(f)\n elif f.lower().endswith('.wav'):\n file_format = 'wav'\n source_segment = AudioSegment.from_wav(f)\n if not file_format or source_segment is None:\n print('Unsupported audio format for ' + f)\n sentences = convert_timestamps(files)\n for s in sentences:\n for word in s['words']:\n start = float(word[1]) * 1000\n end = float(word[2]) * 1000\n word = word[0]\n total_time = end - start\n audio = AudioSegment.silent(duration=total_time)\n audio = audio.overlay(source_segment[start:end])\n number = 0\n output_path = None\n while True:\n output_filename = word\n if number:\n output_filename += \"_\" + str(number)\n output_filename = output_filename + '.' 
+ file_format\n output_path = os.path.join(output_directory, output_filename)\n if not os.path.exists(output_path):\n # this file doesn't exist, so we can continue\n break\n # file already exists, increment name and try again\n number += 1\n print('Exporting to: ' + output_path)\n audio.export(output_path, format=file_format)", "response": "Extracts individual words from files and exports them to individual files."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncomposes a list of audio segments into one audiotrack.", "response": "def compose(segments, out='out.mp3', padding=0, crossfade=0, layer=False):\n '''Stitches together a new audiotrack'''\n\n files = {}\n\n working_segments = []\n\n audio = AudioSegment.empty()\n\n if layer:\n total_time = max([s['end'] - s['start'] for s in segments]) * 1000\n audio = AudioSegment.silent(duration=total_time)\n\n for i, s in enumerate(segments):\n try:\n start = s['start'] * 1000\n end = s['end'] * 1000\n f = s['file'].replace('.transcription.txt', '')\n if f not in files:\n if f.endswith('.wav'):\n files[f] = AudioSegment.from_wav(f)\n elif f.endswith('.mp3'):\n files[f] = AudioSegment.from_mp3(f)\n\n segment = files[f][start:end]\n\n print(start, end, f)\n\n if layer:\n audio = audio.overlay(segment, times=1)\n else:\n if i > 0:\n audio = audio.append(segment, crossfade=crossfade)\n else:\n audio = audio + segment\n\n if padding > 0:\n audio = audio + AudioSegment.silent(duration=padding)\n\n s['duration'] = len(segment)\n working_segments.append(s)\n except:\n continue\n\n audio.export(out, format=os.path.splitext(out)[1].replace('.', ''))\n return working_segments"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nencoding and sign a pickle - able object. Return a ( byte ) string.", "response": "def cookie_encode(data, key):\n ''' Encode and sign a pickle-able object. 
Return a (byte) string '''\n msg = base64.b64encode(pickle.dumps(data, -1))\n sig = base64.b64encode(hmac.new(tob(key), msg).digest())\n return tob('!') + sig + tob('?') + msg"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef cookie_decode(data, key):\n ''' Verify and decode an encoded string. Return an object or None.'''\n data = tob(data)\n if cookie_is_encoded(data):\n sig, msg = data.split(tob('?'), 1)\n if _lscmp(sig[1:], base64.b64encode(hmac.new(tob(key), msg).digest())):\n return pickle.loads(base64.b64decode(msg))\n return None", "response": "Verify and decode an encoded string. Return an object or None."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nimports a module or fetches an object from a module.", "response": "def load(target, **namespace):\n \"\"\" Import a module or fetch an object from a module.\n\n * ``package.module`` returns `module` as a module object.\n * ``pack.mod:name`` returns the module variable `name` from `pack.mod`.\n * ``pack.mod:func()`` calls `pack.mod.func()` and returns the result.\n\n The last form accepts not only function calls, but any type of\n expression. Keyword arguments passed to this function are available as\n local variables. Example: ``import_string('re:compile(x)', x='[a-z]')``\n \"\"\"\n module, target = target.split(\":\", 1) if ':' in target else (target, None)\n if module not in sys.modules: __import__(module)\n if not target: return sys.modules[module]\n if target.isalnum(): return getattr(sys.modules[module], target)\n package_name = module.split('.')[0]\n namespace[package_name] = sys.modules[package_name]\n return eval('%s.%s' % (module, target), namespace)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add(self, rule, method, target, name=None):\n ''' Add a new route or replace the target for an existing route. 
'''\n if rule in self.rules:\n self.rules[rule][method] = target\n if name: self.builder[name] = self.builder[rule]\n return\n\n target = self.rules[rule] = {method: target}\n\n # Build pattern and other structures for dynamic routes\n anons = 0 # Number of anonymous wildcards\n pattern = '' # Regular expression pattern\n filters = [] # Lists of wildcard input filters\n builder = [] # Data structure for the URL builder\n is_static = True\n for key, mode, conf in self.parse_rule(rule):\n if mode:\n is_static = False\n mask, in_filter, out_filter = self.filters[mode](conf)\n if key:\n pattern += '(?P<%s>%s)' % (key, mask)\n else:\n pattern += '(?:%s)' % mask\n key = 'anon%d' % anons; anons += 1\n if in_filter: filters.append((key, in_filter))\n builder.append((key, out_filter or str))\n elif key:\n pattern += re.escape(key)\n builder.append((None, key))\n self.builder[rule] = builder\n if name: self.builder[name] = builder\n\n if is_static and not self.strict_order:\n self.static[self.build(rule)] = target\n return\n\n def fpat_sub(m):\n return m.group(0) if len(m.group(1)) % 2 else m.group(1) + '(?:'\n flat_pattern = re.sub(r'(\\\\*)(\\(\\?P<[^>]*>|\\((?!\\?))', fpat_sub, pattern)\n\n try:\n re_match = re.compile('^(%s)$' % pattern).match\n except re.error as e:\n raise RouteSyntaxError(\"Could not add Route: %s (%s)\" % (rule, e))\n\n def match(path):\n \"\"\" Return an url-argument dictionary. 
\"\"\"\n url_args = re_match(path).groupdict()\n for name, wildcard_filter in filters:\n try:\n url_args[name] = wildcard_filter(url_args[name])\n except ValueError:\n raise HTTPError(400, 'Path has wrong format.')\n return url_args\n\n try:\n combined = '%s|(^%s$)' % (self.dynamic[-1][0].pattern, flat_pattern)\n self.dynamic[-1] = (re.compile(combined), self.dynamic[-1][1])\n self.dynamic[-1][1].append((match, target))\n except (AssertionError, IndexError) as e: # AssertionError: Too many groups\n self.dynamic.append((re.compile('(^%s$)' % flat_pattern),\n [(match, target)]))\n return match", "response": "Add a new route or replace the target for an existing route."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a tuple of target and url_args.", "response": "def match(self, environ):\n ''' Return a (target, url_args) tuple or raise HTTPError(400/404/405). '''\n path, targets, urlargs = environ['PATH_INFO'] or '/', None, {}\n if path in self.static:\n targets = self.static[path]\n else:\n for combined, rules in self.dynamic:\n match = combined.match(path)\n if not match: continue\n getargs, targets = rules[match.lastindex - 1]\n urlargs = getargs(path) if getargs else {}\n break\n\n if not targets:\n raise HTTPError(404, \"Not found: \" + repr(environ['PATH_INFO']))\n method = environ['REQUEST_METHOD'].upper()\n if method in targets:\n return targets[method], urlargs\n if method == 'HEAD' and 'GET' in targets:\n return targets['GET'], urlargs\n if 'ANY' in targets:\n return targets['ANY'], urlargs\n allowed = [verb for verb in targets if verb != 'ANY']\n if 'GET' in allowed and 'HEAD' not in allowed:\n allowed.append('HEAD')\n raise HTTPError(405, \"Method not allowed.\",\n header=[('Allow',\",\".join(allowed))])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nyield all Plugins affecting this route.", "response": "def all_plugins(self):\n ''' Yield all Plugins affecting this route. 
'''\n unique = set()\n for p in reversed(self.app.plugins + self.plugins):\n if True in self.skiplist: break\n name = getattr(p, 'name', False)\n if name and (name in self.skiplist or name in unique): continue\n if p in self.skiplist or type(p) in self.skiplist: continue\n if name: unique.add(name)\n yield p"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreset all routes and clear all all caches.", "response": "def reset(self, route=None):\n ''' Reset all routes (force plugins to be re-applied) and clear all\n caches. If an ID or route object is given, only that specific route\n is affected. '''\n if route is None: routes = self.routes\n elif isinstance(route, Route): routes = [route]\n else: routes = [self.routes[route]]\n for route in routes: route.reset()\n if DEBUG:\n for route in routes: route.prepare()\n self.hooks.trigger('app_reset')"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting the output of a WSGI request into a WSGI compatible response.", "response": "def _cast(self, out, request, response, peek=None):\n \"\"\" Try to convert the parameter into something WSGI compatible and set\n correct HTTP headers when possible.\n Support: False, str, unicode, dict, HTTPResponse, HTTPError, file-like,\n iterable of strings and iterable of unicodes\n \"\"\"\n\n # Empty output is done here\n if not out:\n response['Content-Length'] = 0\n return []\n # Join lists of byte or unicode strings. 
Mixed lists are NOT supported\n if isinstance(out, (tuple, list))\\\n and isinstance(out[0], (bytes, unicode)):\n out = out[0][0:0].join(out) # b'abc'[0:0] -> b''\n # Encode unicode strings\n if isinstance(out, unicode):\n out = out.encode(response.charset)\n # Byte Strings are just returned\n if isinstance(out, bytes):\n response['Content-Length'] = len(out)\n return [out]\n # HTTPError or HTTPException (recursive, because they may wrap anything)\n # TODO: Handle these explicitly in handle() or make them iterable.\n if isinstance(out, HTTPError):\n out.apply(response)\n out = self.error_handler.get(out.status, repr)(out)\n if isinstance(out, HTTPResponse):\n depr('Error handlers must not return :exc:`HTTPResponse`.') #0.9\n return self._cast(out, request, response)\n if isinstance(out, HTTPResponse):\n out.apply(response)\n return self._cast(out.output, request, response)\n\n # File-like objects.\n if hasattr(out, 'read'):\n if 'wsgi.file_wrapper' in request.environ:\n return request.environ['wsgi.file_wrapper'](out)\n elif hasattr(out, 'close') or not hasattr(out, '__iter__'):\n return WSGIFileWrapper(out)\n\n # Handle Iterables. 
We peek into them to detect their inner type.\n try:\n out = iter(out)\n first = next(out)\n while not first:\n first = next(out)\n except StopIteration:\n return self._cast('', request, response)\n except HTTPResponse as e:\n first = e\n except Exception as e:\n first = HTTPError(500, 'Unhandled exception', e, format_exc(10))\n if isinstance(e, (KeyboardInterrupt, SystemExit, MemoryError))\\\n or not self.catchall:\n raise\n # These are the inner types allowed in iterator or generator objects.\n if isinstance(first, HTTPResponse):\n return self._cast(first, request, response)\n if isinstance(first, bytes):\n return itertools.chain([first], out)\n if isinstance(first, unicode):\n return map(lambda x: x.encode(response.charset),\n itertools.chain([first], out))\n return self._cast(HTTPError(500, 'Unsupported response type: %s'\\\n % type(first)), request, response)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the content of a cookie.", "response": "def get_cookie(self, key, default=None, secret=None):\n \"\"\" Return the content of a cookie. To read a `Signed Cookie`, the\n `secret` must match the one used to create the cookie (see\n :meth:`BaseResponse.set_cookie`). If anything goes wrong (missing\n cookie or wrong signature), return a default value. 
\"\"\"\n value = self.cookies.get(key)\n if secret and value:\n dec = cookie_decode(value, secret) # (key, value) tuple or None\n return dec[1] if dec and dec[0] == key else default\n return value or default"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef forms(self):\n forms = FormsDict()\n for name, item in self.POST.iterallitems():\n if not hasattr(item, 'filename'):\n forms[name] = item\n return forms", "response": "Returns a FormsDict containing all the form values parsed from an url - encoded POST or PUT request body."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef params(self):\n params = FormsDict()\n for key, value in self.query.iterallitems():\n params[key] = value\n for key, value in self.forms.iterallitems():\n params[key] = value\n return params", "response": "A FormsDict with the combined values of query and forms. File uploads are stored in files."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef files(self):\n files = FormsDict()\n for name, item in self.POST.iterallitems():\n if hasattr(item, 'filename'):\n files[name] = item\n return files", "response": "Returns a FormsDict containing the file uploads parsed from a URL encoded POST or PUT request body."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the JSON content of the request.", "response": "def json(self):\n ''' If the ``Content-Type`` header is ``application/json``, this\n property holds the parsed content of the request body. Only requests\n smaller than :attr:`MEMFILE_MAX` are processed to avoid memory\n exhaustion. 
'''\n if 'application/json' in self.environ.get('CONTENT_TYPE', '') \\\n and 0 < self.content_length < self.MEMFILE_MAX:\n return json_loads(self.body.read(self.MEMFILE_MAX))\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remote_route(self):\n proxy = self.environ.get('HTTP_X_FORWARDED_FOR')\n if proxy: return [ip.strip() for ip in proxy.split(',')]\n remote = self.environ.get('REMOTE_ADDR')\n return [remote] if remote else []", "response": "A list of all IPs that were involved in this request and followed by zero or more proxies."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_header(self, name, default=None):\n ''' Return the value of a previously defined header. If there is no\n header with that name, return a default value. '''\n return self._headers.get(_hkey(name), [default])[-1]", "response": "Return the value of a previously defined header."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a new response header with the given name and value.", "response": "def set_header(self, name, value, append=False):\n ''' Create a new response header, replacing any previously defined\n headers with the same name. '''\n if append:\n self.add_header(name, value)\n else:\n self._headers[_hkey(name)] = [str(value)]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef iter_headers(self):\n ''' Yield (header, value) tuples, skipping headers that are not\n allowed with the current response status code. 
'''\n headers = self._headers.items()\n bad_headers = self.bad_headers.get(self.status_code)\n if bad_headers:\n headers = [h for h in headers if h[0] not in bad_headers]\n for name, values in headers:\n for value in values:\n yield name, value\n if self._cookies:\n for c in self._cookies.values():\n yield 'Set-Cookie', c.OutputString()", "response": "Yields (header, value) tuples, skipping headers that are not\n allowed with the current response status code."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nremoving a callback from a hook.", "response": "def remove(self, name, func):\n ''' Remove a callback from a hook. '''\n was_empty = self._empty()\n if name in self.hooks and func in self.hooks[name]:\n self.hooks[name].remove(func)\n if self.app and not was_empty and self._empty(): self.app.reset()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ntriggering a hook and return a list of results.", "response": "def trigger(self, name, *a, **ka):\n ''' Trigger a hook and return a list of results. '''\n hooks = self.hooks[name]\n if ka.pop('reversed', False): hooks = hooks[::-1]\n return [hook(*a, **ka) for hook in hooks]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get(self, key, default=None, index=-1, type=None):\n ''' Return the most recent value for a key.\n\n :param default: The default value to be returned if the key is not\n present or the type conversion fails.\n :param index: An index for the list of available values.\n :param type: If defined, this callable is used to cast the value\n into a specific type. 
Exceptions are suppressed and result in\n the default value being returned.\n '''\n try:\n val = self.dict[key][index]\n return type(val) if type else val\n except Exception:\n pass\n return default", "response": "Return the most recent value for a key."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nSearch name in all directories specified in lookup. First without, then with common extensions. Return first hit.", "response": "def search(cls, name, lookup=[]):\n \"\"\" Search name in all directories specified in lookup.\n First without, then with common extensions. Return first hit. \"\"\"\n if os.path.isfile(name): return name\n for spath in lookup:\n fname = os.path.join(spath, name)\n if os.path.isfile(fname):\n return fname\n for ext in cls.extensions:\n if os.path.isfile('%s.%s' % (fname, ext)):\n return '%s.%s' % (fname, ext)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef authorize_url(self):\n\n url = \"%s?client_id=%s&response_type=code&redirect_uri=%s\" % (\n self.AUTHORIZE_URL, self.CLIENT_ID, quote_plus(self.REDIRECT_URI)\n )\n\n if getattr(self, 'SCOPE', None) is not None:\n url = '%s&scope=%s' % (url, '+'.join(self.SCOPE))\n\n return url", "response": "Override this property method if more arguments\n need to be attached to the URL. 
Like bellow."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_access_token(self, code, method='POST', parse=True):\n\n data = {\n 'client_id': self.CLIENT_ID,\n 'client_secret': self.CLIENT_SECRET,\n 'redirect_uri': self.REDIRECT_URI,\n 'code': code,\n 'grant_type': 'authorization_code'\n }\n\n\n if method == 'POST':\n res = self.http_post(self.ACCESS_TOKEN_URL, data, parse=parse)\n else:\n res = self.http_get(self.ACCESS_TOKEN_URL, data, parse=parse)\n\n self.parse_token_response(res)", "response": "Get an access token from the API."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntransforms a string into JavaScript code.", "response": "def transform_string(self, jsx, harmony=False, strip_types=False):\n \"\"\" Transform ``jsx`` JSX string into javascript\n\n :param jsx: JSX source code\n :type jsx: basestring\n :keyword harmony: Transform ES6 code into ES3 (default: False)\n :type harmony: bool\n :keyword strip_types: Strip type declarations (default: False)\n :type harmony: bool\n :return: compiled JS code\n :rtype: str\n \"\"\"\n opts = {'harmony': harmony, 'stripTypes': strip_types}\n try:\n result = self.context.call(\n '%s.transform' % self.JSX_TRANSFORMER_JS_EXPR, jsx, opts)\n except execjs.ProgramError as e:\n raise TransformError(str(e))\n js = result['code']\n return js"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfetch a redis conenction pool for the unique combination of host and port. Will create a new one if it doesn t exist.", "response": "def pool(self, host, port, db, pools={}, **options):\n '''\n Fetch a redis conenction pool for the unique combination of host\n and port. 
Will create a new one if there isn't one already.\n '''\n key = (host, port, db)\n rval = pools.get(key)\n if not isinstance(rval, ConnectionPool):\n rval = ConnectionPool(host=host, port=port, db=db, **options)\n pools[key] = rval\n\n return rval"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrank a member in the leaderboard.", "response": "def rank_member(self, member, score, member_data=None):\n '''\n Rank a member in the leaderboard.\n\n @param member [String] Member name.\n @param score [float] Member score.\n @param member_data [String] Optional member data.\n '''\n self.rank_member_in(self.leaderboard_name, member, score, member_data)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rank_member_in(\n self, leaderboard_name, member, score, member_data=None):\n '''\n Rank a member in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @param score [float] Member score.\n @param member_data [String] Optional member data.\n '''\n pipeline = self.redis_connection.pipeline()\n if isinstance(self.redis_connection, Redis):\n pipeline.zadd(leaderboard_name, member, score)\n else:\n pipeline.zadd(leaderboard_name, score, member)\n if member_data:\n pipeline.hset(\n self._member_data_key(leaderboard_name),\n member,\n member_data)\n pipeline.execute()", "response": "Rank a member in the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nranking a member in the leaderboard based on the + rank_conditional +.", "response": "def rank_member_if(\n self, rank_conditional, member, score, member_data=None):\n '''\n Rank a member in the leaderboard based on execution of the +rank_conditional+.\n\n The +rank_conditional+ is passed the following parameters:\n member: Member name.\n current_score: Current score for the member in the leaderboard.\n score: Member 
score.\n member_data: Optional member data.\n leaderboard_options: Leaderboard options, e.g. 'reverse': Value of reverse option\n\n @param rank_conditional [function] Function which must return +True+ or +False+ that controls whether or not the member is ranked in the leaderboard.\n @param member [String] Member name.\n @param score [float] Member score.\n @param member_data [String] Optional member_data.\n '''\n self.rank_member_if_in(\n self.leaderboard_name,\n rank_conditional,\n member,\n score,\n member_data)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef rank_member_if_in(\n self,\n leaderboard_name,\n rank_conditional,\n member,\n score,\n member_data=None):\n '''\n Rank a member in the named leaderboard based on execution of the +rank_conditional+.\n\n The +rank_conditional+ is passed the following parameters:\n member: Member name.\n current_score: Current score for the member in the leaderboard.\n score: Member score.\n member_data: Optional member data.\n leaderboard_options: Leaderboard options, e.g. 
'reverse': Value of reverse option\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param rank_conditional [function] Function which must return +True+ or +False+ that controls whether or not the member is ranked in the leaderboard.\n @param member [String] Member name.\n @param score [float] Member score.\n @param member_data [String] Optional member_data.\n '''\n current_score = self.redis_connection.zscore(leaderboard_name, member)\n if current_score is not None:\n current_score = float(current_score)\n\n if rank_conditional(self, member, current_score, score, member_data, {'reverse': self.order}):\n self.rank_member_in(leaderboard_name, member, score, member_data)", "response": "Rank a member in the named leaderboard based on the condition that is true or false."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rank_members_in(self, leaderboard_name, members_and_scores):\n '''\n Rank an array of members in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param members_and_scores [Array] Variable list of members and scores.\n '''\n pipeline = self.redis_connection.pipeline()\n for member, score in grouper(2, members_and_scores):\n if isinstance(self.redis_connection, Redis):\n pipeline.zadd(leaderboard_name, member, score)\n else:\n pipeline.zadd(leaderboard_name, score, member)\n pipeline.execute()", "response": "Rank an array of members in the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nretrieving the optional member data for a given member in the named leaderboard.", "response": "def member_data_for_in(self, leaderboard_name, member):\n '''\n Retrieve the optional member data for a given member in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @return String of optional member data.\n '''\n return 
self.redis_connection.hget(\n self._member_data_key(leaderboard_name), member)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nretrieve the optional member data for a given list of members in the named leaderboard.", "response": "def members_data_for_in(self, leaderboard_name, members):\n '''\n Retrieve the optional member data for a given list of members in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param members [Array] Member names.\n @return Array of strings of optional member data.\n '''\n return self.redis_connection.hmget(\n self._member_data_key(leaderboard_name), members)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_member_data(self, member, member_data):\n '''\n Update the optional member data for a given member in the leaderboard.\n\n @param member [String] Member name.\n @param member_data [String] Optional member data.\n '''\n self.update_member_data_in(self.leaderboard_name, member, member_data)", "response": "Update the optional member data for a given member in the leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_member_data_in(self, leaderboard_name, member, member_data):\n '''\n Update the optional member data for a given member in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @param member_data [String] Optional member data.\n '''\n self.redis_connection.hset(\n self._member_data_key(leaderboard_name),\n member,\n member_data)", "response": "Update the optional member data for a given member in the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef remove_member_data_in(self, leaderboard_name, member):\n '''\n Remove the optional member data for a given member in the named leaderboard.\n\n 
@param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n '''\n self.redis_connection.hdel(\n self._member_data_key(leaderboard_name),\n member)", "response": "Removes the optional member data for a given member in the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef remove_member_from(self, leaderboard_name, member):\n '''\n Remove a member from the named leaderboard, along with its optional member data.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n '''\n pipeline = self.redis_connection.pipeline()\n pipeline.zrem(leaderboard_name, member)\n pipeline.hdel(self._member_data_key(leaderboard_name), member)\n pipeline.execute()", "response": "Remove a member from the named leaderboard, along with its optional member data."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nretrieves the total number of pages in the named leaderboard.", "response": "def total_pages_in(self, leaderboard_name, page_size=None):\n '''\n Retrieve the total number of pages in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param page_size [int, nil] Page size to be used when calculating the total number of pages.\n @return the total number of pages in the named leaderboard.\n '''\n if page_size is None:\n page_size = self.page_size\n\n return int(\n math.ceil(\n self.total_members_in(leaderboard_name) /\n float(page_size)))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef total_members_in_score_range(self, min_score, max_score):\n '''\n Retrieve the total members in a given score range from the leaderboard.\n\n @param min_score [float] Minimum score.\n @param max_score [float] Maximum score.\n @return the total members in a given score range from the leaderboard.\n 
'''\n return self.total_members_in_score_range_in(\n self.leaderboard_name, min_score, max_score)", "response": "Retrieve the total members in a given score range from the leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef total_members_in_score_range_in(\n self, leaderboard_name, min_score, max_score):\n '''\n Retrieve the total members in a given score range from the named leaderboard.\n\n @param leaderboard_name Name of the leaderboard.\n @param min_score [float] Minimum score.\n @param max_score [float] Maximum score.\n @return the total members in a given score range from the named leaderboard.\n '''\n return self.redis_connection.zcount(\n leaderboard_name, min_score, max_score)", "response": "Retrieve the total members in a given score range from the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef total_scores_in(self, leaderboard_name):\n '''\n Sum of scores for all members in the named leaderboard.\n\n @param leaderboard_name Name of the leaderboard.\n @return Sum of scores for all members in the named leaderboard.\n '''\n return sum([leader[self.SCORE_KEY] for leader in self.all_leaders_from(self.leaderboard_name)])", "response": "Return the total score for all members in the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef rank_for_in(self, leaderboard_name, member):\n '''\n Retrieve the rank for a member in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @return the rank for a member in the leaderboard.\n '''\n if self.order == self.ASC:\n try:\n return self.redis_connection.zrank(\n leaderboard_name, member) + 1\n except:\n return None\n else:\n try:\n return self.redis_connection.zrevrank(\n leaderboard_name, member) + 1\n 
except:\n return None", "response": "Retrieve the rank for a member in the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nretrieve the score for a member in the named leaderboard.", "response": "def score_for_in(self, leaderboard_name, member):\n '''\n Retrieve the score for a member in the named leaderboard.\n\n @param leaderboard_name Name of the leaderboard.\n @param member [String] Member name.\n @return the score for a member in the leaderboard or +None+ if the member is not in the leaderboard.\n '''\n score = self.redis_connection.zscore(leaderboard_name, member)\n if score is not None:\n score = float(score)\n\n return score"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef score_and_rank_for_in(self, leaderboard_name, member):\n '''\n Retrieve the score and rank for a member in the named leaderboard.\n\n @param leaderboard_name [String]Name of the leaderboard.\n @param member [String] Member name.\n @return the score and rank for a member in the named leaderboard as a Hash.\n '''\n return {\n self.MEMBER_KEY: member,\n self.SCORE_KEY: self.score_for_in(leaderboard_name, member),\n self.RANK_KEY: self.rank_for_in(leaderboard_name, member)\n }", "response": "Retrieve the score and rank for a member in the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchanging the score for a member in the leaderboard by a score delta which can be positive or negative.", "response": "def change_score_for(self, member, delta, member_data=None):\n '''\n Change the score for a member in the leaderboard by a score delta which can be positive or negative.\n\n @param member [String] Member name.\n @param delta [float] Score change.\n @param member_data [String] Optional member data.\n '''\n self.change_score_for_member_in(self.leaderboard_name, member, delta, member_data)"} {"SOURCE": "codesearchnet", "instruction": "Can you 
implement a function in Python 3 that\nchanges the score for a member in the named leaderboard by a delta.", "response": "def change_score_for_member_in(self, leaderboard_name, member, delta, member_data=None):\n '''\n Change the score for a member in the named leaderboard by a delta which can be positive or negative.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @param delta [float] Score change.\n @param member_data [String] Optional member data.\n '''\n pipeline = self.redis_connection.pipeline()\n pipeline.zincrby(leaderboard_name, member, delta)\n if member_data:\n pipeline.hset(\n self._member_data_key(leaderboard_name),\n member,\n member_data)\n pipeline.execute()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving members from the leaderboard in a given score range.", "response": "def remove_members_in_score_range(self, min_score, max_score):\n '''\n Remove members from the leaderboard in a given score range.\n\n @param min_score [float] Minimum score.\n @param max_score [float] Maximum score.\n '''\n self.remove_members_in_score_range_in(\n self.leaderboard_name,\n min_score,\n max_score)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving members from the named leaderboard in a given score range.", "response": "def remove_members_in_score_range_in(\n self, leaderboard_name, min_score, max_score):\n '''\n Remove members from the named leaderboard in a given score range.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param min_score [float] Minimum score.\n @param max_score [float] Maximum score.\n '''\n self.redis_connection.zremrangebyscore(\n leaderboard_name,\n min_score,\n max_score)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves members from the named leaderboard in a given rank range.", "response": "def remove_members_outside_rank_in(self, leaderboard_name, 
rank):\n '''\n Remove members from the named leaderboard outside a given rank.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param rank [int] the rank (inclusive) which we should keep.\n @return the total member count which was removed.\n '''\n if self.order == self.DESC:\n rank = -(rank) - 1\n return self.redis_connection.zremrangebyrank(\n leaderboard_name, 0, rank)\n else:\n return self.redis_connection.zremrangebyrank(\n leaderboard_name, rank, -1)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the page where a member falls in the leaderboard.", "response": "def page_for(self, member, page_size=DEFAULT_PAGE_SIZE):\n '''\n Determine the page where a member falls in the leaderboard.\n\n @param member [String] Member name.\n @param page_size [int] Page size to be used in determining page location.\n @return the page where a member falls in the leaderboard.\n '''\n return self.page_for_in(self.leaderboard_name, member, page_size)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef page_for_in(self, leaderboard_name, member,\n page_size=DEFAULT_PAGE_SIZE):\n '''\n Determine the page where a member falls in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @param page_size [int] Page size to be used in determining page location.\n @return the page where a member falls in the leaderboard.\n '''\n rank_for_member = None\n if self.order == self.ASC:\n rank_for_member = self.redis_connection.zrank(\n leaderboard_name,\n member)\n else:\n rank_for_member = self.redis_connection.zrevrank(\n leaderboard_name,\n member)\n\n if rank_for_member is None:\n rank_for_member = 0\n else:\n rank_for_member += 1\n\n return int(math.ceil(float(rank_for_member) / float(page_size)))", "response": "Determine the page where a member falls in the named leaderboard."} {"SOURCE": "codesearchnet", 
"instruction": "Can you create a Python 3 function that\nretrieves the percentile for a member in the named leaderboard.", "response": "def percentile_for_in(self, leaderboard_name, member):\n '''\n Retrieve the percentile for a member in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @return the percentile for a member in the named leaderboard.\n '''\n if not self.check_member_in(leaderboard_name, member):\n return None\n\n responses = self.redis_connection.pipeline().zcard(\n leaderboard_name).zrevrank(leaderboard_name, member).execute()\n\n percentile = math.ceil(\n (float(\n (responses[0] -\n responses[1] -\n 1)) /\n float(\n responses[0]) *\n 100))\n\n if self.order == self.ASC:\n return 100 - percentile\n else:\n return percentile"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalculating the score for a given percentile value in the leaderboard.", "response": "def score_for_percentile_in(self, leaderboard_name, percentile):\n '''\n Calculate the score for a given percentile value in the leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param percentile [float] Percentile value (0.0 to 100.0 inclusive).\n @return the score corresponding to the percentile argument. 
Return +None+ for arguments outside 0-100 inclusive and for leaderboards with no members.\n '''\n if not 0 <= percentile <= 100:\n return None\n\n total_members = self.total_members_in(leaderboard_name)\n if total_members < 1:\n return None\n\n if self.order == self.ASC:\n percentile = 100 - percentile\n\n index = (total_members - 1) * (percentile / 100.0)\n\n scores = [\n pair[1] for pair in self.redis_connection.zrange(\n leaderboard_name, int(\n math.floor(index)), int(\n math.ceil(index)), withscores=True)]\n\n if index == math.floor(index):\n return scores[0]\n else:\n interpolate_fraction = index - math.floor(index)\n return scores[0] + interpolate_fraction * (scores[1] - scores[0])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nexpiring the given leaderboard in a set of seconds.", "response": "def expire_leaderboard_for(self, leaderboard_name, seconds):\n '''\n Expire the given leaderboard in a set number of seconds. Do not use this with\n leaderboards that utilize member data as there is no facility to cascade the\n expiration out to the keys for the member data.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param seconds [int] Number of seconds after which the leaderboard will be expired.\n '''\n pipeline = self.redis_connection.pipeline()\n pipeline.expire(leaderboard_name, seconds)\n pipeline.expire(self._member_data_key(leaderboard_name), seconds)\n pipeline.execute()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve a page of leaders from the leaderboard.", "response": "def leaders(self, current_page, **options):\n '''\n Retrieve a page of leaders from the leaderboard.\n\n @param current_page [int] Page to retrieve from the leaderboard.\n @param options [Hash] Options to be used when retrieving the page from the leaderboard.\n @return a page of leaders from the leaderboard.\n '''\n return self.leaders_in(self.leaderboard_name, current_page, **options)"} 
{"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nretrieves a page of leaders from the named leaderboard.", "response": "def leaders_in(self, leaderboard_name, current_page, **options):\n '''\n Retrieve a page of leaders from the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param current_page [int] Page to retrieve from the named leaderboard.\n @param options [Hash] Options to be used when retrieving the page from the named leaderboard.\n @return a page of leaders from the named leaderboard.\n '''\n if current_page < 1:\n current_page = 1\n\n page_size = options.get('page_size', self.page_size)\n\n index_for_redis = current_page - 1\n\n starting_offset = (index_for_redis * page_size)\n if starting_offset < 0:\n starting_offset = 0\n\n ending_offset = (starting_offset + page_size) - 1\n\n raw_leader_data = self._range_method(\n self.redis_connection,\n leaderboard_name,\n int(starting_offset),\n int(ending_offset),\n withscores=False)\n return self._parse_raw_members(\n leaderboard_name, raw_leader_data, **options)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving all leaders from the named leaderboard.", "response": "def all_leaders_from(self, leaderboard_name, **options):\n '''\n Retrieves all leaders from the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param options [Hash] Options to be used when retrieving the leaders from the named leaderboard.\n @return the named leaderboard.\n '''\n raw_leader_data = self._range_method(\n self.redis_connection, leaderboard_name, 0, -1, withscores=False)\n return self._parse_raw_members(\n leaderboard_name, raw_leader_data, **options)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nretrieves members from the leaderboard within a given score range.", "response": "def members_from_score_range(\n self, minimum_score, maximum_score, 
**options):\n '''\n Retrieve members from the leaderboard within a given score range.\n\n @param minimum_score [float] Minimum score (inclusive).\n @param maximum_score [float] Maximum score (inclusive).\n @param options [Hash] Options to be used when retrieving the data from the leaderboard.\n @return members from the leaderboard that fall within the given score range.\n '''\n return self.members_from_score_range_in(\n self.leaderboard_name, minimum_score, maximum_score, **options)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef members_from_score_range_in(\n self, leaderboard_name, minimum_score, maximum_score, **options):\n '''\n Retrieve members from the named leaderboard within a given score range.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param minimum_score [float] Minimum score (inclusive).\n @param maximum_score [float] Maximum score (inclusive).\n @param options [Hash] Options to be used when retrieving the data from the leaderboard.\n @return members from the leaderboard that fall within the given score range.\n '''\n raw_leader_data = []\n if self.order == self.DESC:\n raw_leader_data = self.redis_connection.zrevrangebyscore(\n leaderboard_name,\n maximum_score,\n minimum_score)\n else:\n raw_leader_data = self.redis_connection.zrangebyscore(\n leaderboard_name,\n minimum_score,\n maximum_score)\n return self._parse_raw_members(\n leaderboard_name, raw_leader_data, **options)", "response": "Retrieve members from the named leaderboard within a given score range."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef members_from_rank_range(self, starting_rank, ending_rank, **options):\n '''\n Retrieve members from the leaderboard within a given rank range.\n\n @param starting_rank [int] Starting rank (inclusive).\n @param ending_rank [int] Ending rank (inclusive).\n @param options [Hash] Options to be used when 
retrieving the data from the leaderboard.\n @return members from the leaderboard that fall within the given rank range.\n '''\n return self.members_from_rank_range_in(\n self.leaderboard_name, starting_rank, ending_rank, **options)", "response": "Retrieve members from the leaderboard within a given rank range."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nretrieves members from the named leaderboard within a given rank range.", "response": "def members_from_rank_range_in(\n self, leaderboard_name, starting_rank, ending_rank, **options):\n '''\n Retrieve members from the named leaderboard within a given rank range.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param starting_rank [int] Starting rank (inclusive).\n @param ending_rank [int] Ending rank (inclusive).\n @param options [Hash] Options to be used when retrieving the data from the leaderboard.\n @return members from the leaderboard that fall within the given rank range.\n '''\n starting_rank -= 1\n if starting_rank < 0:\n starting_rank = 0\n\n ending_rank -= 1\n if ending_rank > self.total_members_in(leaderboard_name):\n ending_rank = self.total_members_in(leaderboard_name) - 1\n\n raw_leader_data = []\n if self.order == self.DESC:\n raw_leader_data = self.redis_connection.zrevrange(\n leaderboard_name,\n starting_rank,\n ending_rank,\n withscores=False)\n else:\n raw_leader_data = self.redis_connection.zrange(\n leaderboard_name,\n starting_rank,\n ending_rank,\n withscores=False)\n\n return self._parse_raw_members(\n leaderboard_name, raw_leader_data, **options)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef top(self, number, **options):\n '''\n Retrieve members from the leaderboard within a range from 1 to the number given.\n\n @param ending_rank [int] Ending rank (inclusive).\n @param options [Hash] Options to be used when retrieving the data from the 
leaderboard.\n\n @return members from the leaderboard that fall within the given rank range.\n '''\n return self.top_in(self.leaderboard_name, number, **options)", "response": "Retrieve members from the leaderboard within a range from 1 to the number given."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nretrieve members from the named leaderboard within a range from 1 to the number given.", "response": "def top_in(self, leaderboard_name, number, **options):\n '''\n Retrieve members from the named leaderboard within a range from 1 to the number given.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param number [int] Number of members to retrieve, starting from rank 1.\n @param options [Hash] Options to be used when retrieving the data from the leaderboard.\n\n @return members from the leaderboard that fall within the given rank range.\n '''\n return self.members_from_rank_range_in(leaderboard_name, 1, number, **options)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef member_at(self, position, **options):\n '''\n Retrieve a member at the specified index from the leaderboard.\n\n @param position [int] Position in leaderboard.\n @param options [Hash] Options to be used when retrieving the member from the leaderboard.\n @return a member from the leaderboard.\n '''\n return self.member_at_in(self.leaderboard_name, position, **options)", "response": "Retrieve a member from the leaderboard at the specified position."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nretrieves a member at the specified index from the named leaderboard.", "response": "def member_at_in(self, leaderboard_name, position, **options):\n '''\n Retrieve a member at the specified index from the leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param position 
[int] Position in named leaderboard.\n @param options [Hash] Options to be used when retrieving the member from the named leaderboard.\n @return a member from the named leaderboard.\n '''\n if position > 0 and position <= self.total_members_in(leaderboard_name):\n page_size = options.get('page_size', self.page_size)\n current_page = math.ceil(float(position) / float(page_size))\n offset = (position - 1) % page_size\n\n leaders = self.leaders_in(\n leaderboard_name,\n current_page,\n **options)\n if leaders:\n return leaders[offset]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving a page of leaders from the leaderboard around a given member.", "response": "def around_me(self, member, **options):\n '''\n Retrieve a page of leaders from the leaderboard around a given member.\n\n @param member [String] Member name.\n @param options [Hash] Options to be used when retrieving the page from the leaderboard.\n @return a page of leaders from the leaderboard around a given member.\n '''\n return self.around_me_in(self.leaderboard_name, member, **options)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving a page of leaders from the named leaderboard around a given member.", "response": "def around_me_in(self, leaderboard_name, member, **options):\n '''\n Retrieve a page of leaders from the named leaderboard around a given member.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @param options [Hash] Options to be used when retrieving the page from the named leaderboard.\n @return a page of leaders from the named leaderboard around a given member. 
Returns an empty array for a non-existent member.\n '''\n reverse_rank_for_member = None\n if self.order == self.DESC:\n reverse_rank_for_member = self.redis_connection.zrevrank(\n leaderboard_name,\n member)\n else:\n reverse_rank_for_member = self.redis_connection.zrank(\n leaderboard_name,\n member)\n\n if reverse_rank_for_member is None:\n return []\n\n page_size = options.get('page_size', self.page_size)\n\n starting_offset = reverse_rank_for_member - (page_size // 2)\n if starting_offset < 0:\n starting_offset = 0\n\n ending_offset = (starting_offset + page_size) - 1\n\n raw_leader_data = self._range_method(\n self.redis_connection,\n leaderboard_name,\n int(starting_offset),\n int(ending_offset),\n withscores=False)\n return self._parse_raw_members(\n leaderboard_name, raw_leader_data, **options)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nretrieves a page of leaders from the leaderboard for a given list of members.", "response": "def ranked_in_list(self, members, **options):\n '''\n Retrieve a page of leaders from the leaderboard for a given list of members.\n\n @param members [Array] Member names.\n @param options [Hash] Options to be used when retrieving the page from the leaderboard.\n @return a page of leaders from the leaderboard for a given list of members.\n '''\n return self.ranked_in_list_in(\n self.leaderboard_name, members, **options)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ranked_in_list_in(self, leaderboard_name, members, **options):\n '''\n Retrieve a page of leaders from the named leaderboard for a given list of members.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param members [Array] Member names.\n @param options [Hash] Options to be used when retrieving the page from the named leaderboard.\n @return a page of leaders from the named leaderboard for a given list of members.\n '''\n ranks_for_members = []\n\n pipeline = 
self.redis_connection.pipeline()\n\n for member in members:\n if self.order == self.ASC:\n pipeline.zrank(leaderboard_name, member)\n else:\n pipeline.zrevrank(leaderboard_name, member)\n\n pipeline.zscore(leaderboard_name, member)\n\n responses = pipeline.execute()\n\n for index, member in enumerate(members):\n data = {}\n data[self.MEMBER_KEY] = member\n rank = responses[index * 2]\n if rank is not None:\n rank += 1\n else:\n if not options.get('include_missing', True):\n continue\n data[self.RANK_KEY] = rank\n score = responses[index * 2 + 1]\n if score is not None:\n score = float(score)\n data[self.SCORE_KEY] = score\n\n ranks_for_members.append(data)\n\n if ('with_member_data' in options) and (True == options['with_member_data']):\n for index, member_data in enumerate(self.members_data_for_in(leaderboard_name, members)):\n try:\n ranks_for_members[index][self.MEMBER_DATA_KEY] = member_data\n except:\n pass\n\n if 'sort_by' in options:\n sort_value_if_none = float('-inf') if self.order == self.ASC else float('+inf')\n if self.RANK_KEY == options['sort_by']:\n ranks_for_members = sorted(\n ranks_for_members,\n key=lambda member: member.get(self.RANK_KEY) if member.get(self.RANK_KEY) is not None else sort_value_if_none\n )\n elif self.SCORE_KEY == options['sort_by']:\n ranks_for_members = sorted(\n ranks_for_members,\n key=lambda member: member.get(self.SCORE_KEY) if member.get(self.SCORE_KEY) is not None else sort_value_if_none\n )\n\n return ranks_for_members", "response": "Retrieve a page of leaders from the named leaderboard for a given list of members."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef merge_leaderboards(self, destination, keys, aggregate='SUM'):\n '''\n Merge leaderboards given by keys with this leaderboard into a named destination leaderboard.\n\n @param destination [String] Destination leaderboard name.\n @param keys [Array] Leaderboards to be merged with the current 
leaderboard.\n @param options [Hash] Options for merging the leaderboards.\n '''\n keys.insert(0, self.leaderboard_name)\n self.redis_connection.zunionstore(destination, keys, aggregate)", "response": "Merge leaderboards given by keys with this leaderboard into a named destination leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nintersects leaderboards given by keys with this leaderboard into a named destination leaderboard.", "response": "def intersect_leaderboards(self, destination, keys, aggregate='SUM'):\n '''\n Intersect leaderboards given by keys with this leaderboard into a named destination leaderboard.\n\n @param destination [String] Destination leaderboard name.\n @param keys [Array] Leaderboards to be merged with the current leaderboard.\n @param options [Hash] Options for intersecting the leaderboards.\n '''\n keys.insert(0, self.leaderboard_name)\n self.redis_connection.zinterstore(destination, keys, aggregate)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _member_data_key(self, leaderboard_name):\n '''\n Key for retrieving optional member data.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @return a key in the form of +leaderboard_name:member_data+\n '''\n if self.global_member_data is False:\n return '%s:%s' % (leaderboard_name, self.member_data_namespace)\n else:\n return self.member_data_namespace", "response": "Return the key for retrieving optional member data."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses the raw leaders data returned from a given leaderboard query. Do associative lookups with the member to rank score and potentially sort the results.", "response": "def _parse_raw_members(\n self, leaderboard_name, members, members_only=False, **options):\n '''\n Parse the raw leaders data as returned from a given leader board query. 
Do associative\n lookups with the member to rank, score and potentially sort the results.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param members [List] A list of members as returned from a sorted set range query\n @param members_only [bool] Set True to return the members as is, Default is False.\n @param options [Hash] Options to be used when retrieving the page from the named leaderboard.\n @return a list of members.\n '''\n if members_only:\n return [{self.MEMBER_KEY: m} for m in members]\n\n if members:\n return self.ranked_in_list_in(leaderboard_name, members, **options)\n else:\n return []"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_leaderboard_named(self, leaderboard_name):\n '''\n Delete the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n '''\n pipeline = self.redis_connection.pipeline()\n pipeline.delete(leaderboard_name)\n pipeline.delete(self._member_data_key(leaderboard_name))\n pipeline.delete(self._ties_leaderboard_key(leaderboard_name))\n pipeline.execute()", "response": "Delete the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef change_score_for_member_in(self, leaderboard_name, member, delta, member_data=None):\n '''\n Change the score for a member in the named leaderboard by a delta which can be positive or negative.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @param delta [float] Score change.\n @param member_data [String] Optional member data.\n '''\n previous_score = self.score_for_in(leaderboard_name, member)\n new_score = (previous_score or 0) + delta\n\n total_members_at_previous_score = []\n if previous_score is not None:\n total_members_at_previous_score = self.redis_connection.zrevrangebyscore(leaderboard_name, previous_score, previous_score)\n\n pipeline = 
self.redis_connection.pipeline()\n if isinstance(self.redis_connection, Redis):\n pipeline.zadd(leaderboard_name, member, new_score)\n pipeline.zadd(self._ties_leaderboard_key(leaderboard_name), str(float(new_score)), new_score)\n else:\n pipeline.zadd(leaderboard_name, new_score, member)\n pipeline.zadd(self._ties_leaderboard_key(leaderboard_name), new_score, str(float(new_score)))\n if member_data:\n pipeline.hset(\n self._member_data_key(leaderboard_name),\n member,\n member_data)\n pipeline.execute()\n\n if len(total_members_at_previous_score) == 1:\n self.redis_connection.zrem(self._ties_leaderboard_key(leaderboard_name), str(float(previous_score)))", "response": "Change the score for a member in the named leaderboard by a delta which can be positive or negative."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef rank_member_in(\n self, leaderboard_name, member, score, member_data=None):\n '''\n Rank a member in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @param score [float] Member score.\n @param member_data [String] Optional member data.\n '''\n member_score = None or self.redis_connection.zscore(leaderboard_name, member)\n can_delete_score = member_score is not None and\\\n (len(self.members_from_score_range_in(leaderboard_name, member_score, member_score)) == 1) and\\\n member_score != score\n\n pipeline = self.redis_connection.pipeline()\n if isinstance(self.redis_connection, Redis):\n pipeline.zadd(leaderboard_name, member, score)\n pipeline.zadd(self._ties_leaderboard_key(leaderboard_name),\n str(float(score)), score)\n else:\n pipeline.zadd(leaderboard_name, score, member)\n pipeline.zadd(self._ties_leaderboard_key(leaderboard_name),\n score, str(float(score)))\n if can_delete_score:\n pipeline.zrem(self._ties_leaderboard_key(leaderboard_name),\n str(float(member_score)))\n if member_data:\n pipeline.hset(\n 
self._member_data_key(leaderboard_name),\n member,\n member_data)\n pipeline.execute()", "response": "Rank a member in the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef rank_member_across(\n self, leaderboards, member, score, member_data=None):\n '''\n Rank a member across multiple leaderboards.\n\n @param leaderboards [Array] Leaderboard names.\n @param member [String] Member name.\n @param score [float] Member score.\n @param member_data [String] Optional member data.\n '''\n for leaderboard_name in leaderboards:\n self.rank_member_in(leaderboard_name, member, score, member_data)", "response": "Rank a member across multiple leaderboards."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrank an array of members in the named leaderboard.", "response": "def rank_members_in(self, leaderboard_name, members_and_scores):\n '''\n Rank an array of members in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param members_and_scores [Array] Variable list of members and scores.\n '''\n for member, score in grouper(2, members_and_scores):\n self.rank_member_in(leaderboard_name, member, score)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef remove_member_from(self, leaderboard_name, member):\n '''\n Remove the given member and any associated member data from the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n '''\n member_score = None or self.redis_connection.zscore(\n leaderboard_name, member)\n can_delete_score = member_score and len(\n self.members_from_score_range_in(leaderboard_name, member_score, member_score)) == 1\n\n pipeline = self.redis_connection.pipeline()\n pipeline.zrem(leaderboard_name, member)\n if can_delete_score:\n pipeline.zrem(self._ties_leaderboard_key(leaderboard_name),\n
str(float(member_score)))\n pipeline.hdel(self._member_data_key(leaderboard_name), member)\n pipeline.execute()", "response": "Remove the given member and any associated member data from the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving the rank for a member in the named leaderboard.", "response": "def rank_for_in(self, leaderboard_name, member):\n '''\n Retrieve the rank for a member in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @return the rank for a member in the leaderboard.\n '''\n member_score = self.score_for_in(leaderboard_name, member)\n if self.order == self.ASC:\n try:\n return self.redis_connection.zrank(\n self._ties_leaderboard_key(leaderboard_name), str(float(member_score))) + 1\n except:\n return None\n else:\n try:\n return self.redis_connection.zrevrank(\n self._ties_leaderboard_key(leaderboard_name), str(float(member_score))) + 1\n except:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nremove members from the named leaderboard in a given score range.", "response": "def remove_members_in_score_range_in(\n self, leaderboard_name, min_score, max_score):\n '''\n Remove members from the named leaderboard in a given score range.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param min_score [float] Minimum score.\n @param max_score [float] Maximum score.\n '''\n pipeline = self.redis_connection.pipeline()\n pipeline.zremrangebyscore(\n leaderboard_name,\n min_score,\n max_score)\n pipeline.zremrangebyscore(\n self._ties_leaderboard_key(leaderboard_name),\n min_score,\n max_score)\n pipeline.execute()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nexpires the given leaderboard at a specific UNIX timestamp.", "response": "def expire_leaderboard_at_for(self, leaderboard_name, timestamp):\n '''\n Expire the given 
leaderboard at a specific UNIX timestamp. Do not use this with\n leaderboards that utilize member data as there is no facility to cascade the\n expiration out to the keys for the member data.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param timestamp [int] UNIX timestamp at which the leaderboard will be expired.\n '''\n pipeline = self.redis_connection.pipeline()\n pipeline.expireat(leaderboard_name, timestamp)\n pipeline.expireat(\n self._ties_leaderboard_key(leaderboard_name), timestamp)\n pipeline.expireat(self._member_data_key(leaderboard_name), timestamp)\n pipeline.execute()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef rank_for_in(self, leaderboard_name, member):\n '''\n Retrieve the rank for a member in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @return the rank for a member in the leaderboard.\n '''\n member_score = str(float(self.score_for_in(leaderboard_name, member)))\n if self.order == self.ASC:\n try:\n return self.redis_connection.zcount(\n leaderboard_name, '-inf', '(%s' % member_score) + 1\n except:\n return None\n else:\n try:\n return self.redis_connection.zcount(\n leaderboard_name, '(%s' % member_score, '+inf') + 1\n except:\n return None", "response": "Retrieve the rank for a member in the named leaderboard."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nretrieve the score and rank for a member in the named leaderboard.", "response": "def score_and_rank_for_in(self, leaderboard_name, member):\n '''\n Retrieve the score and rank for a member in the named leaderboard.\n\n @param leaderboard_name [String] Name of the leaderboard.\n @param member [String] Member name.\n @return the score and rank for a member in the named leaderboard as a Hash.\n '''\n pipeline = self.redis_connection.pipeline()\n pipeline.zscore(leaderboard_name, member)\n if self.order == self.ASC:\n 
pipeline.zrank(leaderboard_name, member)\n else:\n pipeline.zrevrank(leaderboard_name, member)\n responses = pipeline.execute()\n\n if responses[0] is not None:\n responses[0] = float(responses[0])\n\n if self.order == self.ASC:\n try:\n responses[1] = self.redis_connection.zcount(\n leaderboard_name, '-inf', \"(%s\" % str(float(responses[0]))) + 1\n except:\n responses[1] = None\n else:\n try:\n responses[1] = self.redis_connection.zcount(\n leaderboard_name, \"(%s\" % str(float(responses[0])), '+inf') + 1\n except:\n responses[1] = None\n\n return {\n self.MEMBER_KEY: member,\n self.SCORE_KEY: responses[0],\n self.RANK_KEY: responses[1]\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates that the specified key is allowed according to the provided list of patterns.", "response": "def check_key(key, allowed):\n \"\"\"\n Validate that the specified key is allowed according the provided\n list of patterns.\n \"\"\"\n\n if key in allowed:\n return True\n\n for pattern in allowed:\n if fnmatch(key, pattern):\n return True\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cs_encode(s):\n if PY2 and isinstance(s, text_type):\n s = s.encode(\"utf-8\")\n return quote(s, safe=\"*\")", "response": "Encode a URI component like CloudStack would do before signing."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntransforming an heterogeneous map of params into a CloudStack ready mapping of parameter to values.", "response": "def transform(params):\n \"\"\"\n Transforms an heterogeneous map of params into a CloudStack\n ready mapping of parameter to values.\n\n It handles lists and dicts.\n\n >>> p = {\"a\": 1, \"b\": \"foo\", \"c\": [\"eggs\", \"spam\"], \"d\": {\"key\": \"value\"}}\n >>> transform(p)\n >>> print(p)\n {'a': '1', 'b': 'foo', 'c': 'eggs,spam', 'd[0].key': 'value'}\n \"\"\"\n for key, value in list(params.items()):\n 
if value is None:\n params.pop(key)\n continue\n\n if isinstance(value, (string_type, binary_type)):\n continue\n\n if isinstance(value, integer_types):\n params[key] = text_type(value)\n elif isinstance(value, (list, tuple, set, dict)):\n if not value:\n params.pop(key)\n else:\n if isinstance(value, dict):\n value = [value]\n if isinstance(value, set):\n value = list(value)\n if not isinstance(value[0], dict):\n params[key] = \",\".join(value)\n else:\n params.pop(key)\n for index, val in enumerate(value):\n for name, v in val.items():\n k = \"%s[%d].%s\" % (key, index, name)\n params[k] = text_type(v)\n else:\n raise ValueError(type(value))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading the configuration from the environment or config file.", "response": "def read_config(ini_group=None):\n \"\"\"\n Read the configuration from the environment, or config.\n\n First it try to go for the environment, then it overrides\n those with the cloudstack.ini file.\n \"\"\"\n env_conf = dict(DEFAULT_CONFIG)\n for key in REQUIRED_CONFIG_KEYS.union(ALLOWED_CONFIG_KEYS):\n env_key = \"CLOUDSTACK_{0}\".format(key.upper())\n value = os.getenv(env_key)\n if value:\n env_conf[key] = value\n\n # overrides means we have a .ini to read\n overrides = os.getenv('CLOUDSTACK_OVERRIDES', '').strip()\n\n if not overrides and set(env_conf).issuperset(REQUIRED_CONFIG_KEYS):\n return env_conf\n\n ini_conf = read_config_from_ini(ini_group)\n\n overrides = {s.lower() for s in re.split(r'\\W+', overrides)}\n config = dict(dict(env_conf, **ini_conf),\n **{k: v for k, v in env_conf.items() if k in overrides})\n\n missings = REQUIRED_CONFIG_KEYS.difference(config)\n if missings:\n raise ValueError(\"the configuration is missing the following keys: \" +\n \", \".join(missings))\n\n # convert booleans values.\n bool_keys = ('dangerous_no_tls_verify',)\n for bool_key in bool_keys:\n if isinstance(config[bool_key], string_type):\n try:\n config[bool_key] = 
strtobool(config[bool_key])\n except ValueError:\n pass\n\n return config"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _response_value(self, response, json=True):\n if json:\n contentType = response.headers.get(\"Content-Type\", \"\")\n if not contentType.startswith((\"application/json\",\n \"text/javascript\")):\n if response.status_code == 200:\n raise CloudStackException(\n \"JSON (application/json) was expected, got {!r}\"\n .format(contentType),\n response=response)\n\n raise CloudStackException(\n \"HTTP {0.status_code} {0.reason}\"\n .format(response),\n \"Make sure endpoint URL {!r} is correct.\"\n .format(self.endpoint),\n response=response)\n\n try:\n data = response.json()\n except ValueError as e:\n raise CloudStackException(\n \"HTTP {0.status_code} {0.reason}\"\n .format(response),\n \"{0!s}. Malformed JSON document\".format(e),\n response=response)\n\n [key] = data.keys()\n data = data[key]\n else:\n data = response.text\n\n if response.status_code != 200:\n raise CloudStackException(\n \"HTTP {0} response from CloudStack\".format(\n response.status_code),\n data,\n response=response)\n\n return data", "response": "Parses the HTTP response as the CloudStack value. 
It throws an exception if the server didn't answer with a 200."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\npoll the async job result.", "response": "def _jobresult(self, jobid, json=True, headers=None):\n \"\"\"Poll the async job result.\n\n To be run in a Thread, the result is put within\n the result list which is a hack.\n \"\"\"\n failures = 0\n\n total_time = self.job_timeout or 2**30\n remaining = timedelta(seconds=total_time)\n endtime = datetime.now() + remaining\n\n while remaining.total_seconds() > 0:\n timeout = max(min(self.timeout, remaining.total_seconds()), 1)\n try:\n kind, params = self._prepare_request('queryAsyncJobResult',\n jobid=jobid)\n\n transform(params)\n params['signature'] = self._sign(params)\n\n req = requests.Request(self.method,\n self.endpoint,\n headers=headers,\n **{kind: params})\n prepped = req.prepare()\n if self.trace:\n print(prepped.method, prepped.url, file=sys.stderr)\n if prepped.headers:\n print(prepped.headers, \"\\n\", file=sys.stderr)\n if prepped.body:\n print(prepped.body, file=sys.stderr)\n else:\n print(file=sys.stderr)\n\n with requests.Session() as session:\n response = session.send(prepped,\n timeout=timeout,\n verify=self.verify,\n cert=self.cert)\n\n j = self._response_value(response, json)\n\n if self.trace:\n print(response.status_code, response.reason,\n file=sys.stderr)\n headersTrace = \"\\n\".join(\n \"{}: {}\".format(k, v)\n for k, v in response.headers.items())\n print(headersTrace, \"\\n\", file=sys.stderr)\n print(response.text, \"\\n\", file=sys.stderr)\n\n failures = 0\n if j['jobstatus'] != PENDING:\n if j['jobresultcode'] or j['jobstatus'] != SUCCESS:\n raise CloudStackException(\"Job failure\",\n response=response)\n\n if 'jobresult' not in j:\n raise CloudStackException(\"Unknown job result\",\n response=response)\n\n return j['jobresult']\n\n except CloudStackException:\n raise\n\n except Exception as e:\n failures += 1\n if failures > 10:\n raise e\n\n 
time.sleep(self.poll_interval)\n remaining = endtime - datetime.now()\n\n if response:\n response.status_code = 408\n\n raise CloudStackException(\"Timeout waiting for async job result\",\n jobid,\n response=response)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncompute a signature string according to the CloudStack signature method.", "response": "def _sign(self, data):\n \"\"\"\n Compute a signature string according to the CloudStack\n signature method (hmac/sha1).\n \"\"\"\n\n # Python2/3 urlencode aren't good enough for this task.\n params = \"&\".join(\n \"=\".join((key, cs_encode(value)))\n for key, value in sorted(data.items())\n )\n\n digest = hmac.new(\n self.secret.encode('utf-8'),\n msg=params.lower().encode('utf-8'),\n digestmod=hashlib.sha1).digest()\n\n return base64.b64encode(digest).decode('utf-8').strip()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef parse(self, data=b''):\n self.buf.extend(data)\n while self.buf:\n if self.state == AWAITING_CONTROL_LINE:\n msg = MSG_RE.match(self.buf)\n if msg:\n try:\n subject, sid, _, reply, needed_bytes = msg.groups()\n self.msg_arg[\"subject\"] = subject\n self.msg_arg[\"sid\"] = int(sid)\n if reply:\n self.msg_arg[\"reply\"] = reply\n else:\n self.msg_arg[\"reply\"] = b''\n self.needed = int(needed_bytes)\n del self.buf[:msg.end()]\n self.state = AWAITING_MSG_PAYLOAD\n continue\n except:\n raise ErrProtocol(\"nats: malformed MSG\")\n\n ok = OK_RE.match(self.buf)\n if ok:\n # Do nothing and just skip.\n del self.buf[:ok.end()]\n continue\n\n err = ERR_RE.match(self.buf)\n if err:\n err_msg = err.groups()\n yield self.nc._process_err(err_msg)\n del self.buf[:err.end()]\n continue\n\n ping = PING_RE.match(self.buf)\n if ping:\n del self.buf[:ping.end()]\n yield self.nc._process_ping()\n continue\n\n pong = PONG_RE.match(self.buf)\n if pong:\n del self.buf[:pong.end()]\n yield 
self.nc._process_pong()\n continue\n\n info = INFO_RE.match(self.buf)\n if info:\n info_line = info.groups()[0]\n self.nc._process_info(info_line)\n del self.buf[:info.end()]\n continue\n\n # If nothing matched at this point, then probably\n # a split buffer and need to gather more bytes,\n # otherwise it would mean that there is an issue\n # and we're getting malformed control lines.\n if len(self.buf\n ) < MAX_CONTROL_LINE_SIZE and _CRLF_ not in self.buf:\n break\n else:\n raise ErrProtocol(\"nats: unknown protocol\")\n\n elif self.state == AWAITING_MSG_PAYLOAD:\n if len(self.buf) >= self.needed + CRLF_SIZE:\n subject = self.msg_arg[\"subject\"]\n sid = self.msg_arg[\"sid\"]\n reply = self.msg_arg[\"reply\"]\n\n # Consume msg payload from buffer and set next parser state.\n payload = bytes(self.buf[:self.needed])\n del self.buf[:self.needed + CRLF_SIZE]\n self.state = AWAITING_CONTROL_LINE\n yield self.nc._process_msg(sid, subject, reply, payload)\n else:\n # Wait until we have enough bytes in buffer.\n break", "response": "Parses the wire protocol from NATS and dispatches the subscription callbacks."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef connect(self,\n servers=[\"nats://127.0.0.1:4222\"],\n loop=None,\n # 'io_loop' and 'loop' are the same, but we have\n # both params to be consistent with asyncio client.\n io_loop=None,\n\n # Event Callbacks\n error_cb=None,\n disconnected_cb=None,\n reconnected_cb=None,\n closed_cb=None,\n # 'close_cb' is the same as 'closed_cb' but we have\n # both params to be consistent with the asyncio client.\n close_cb=None,\n\n # CONNECT options\n name=None,\n pedantic=False,\n verbose=False,\n no_echo=False,\n user=None,\n password=None,\n token=None,\n\n # Reconnect logic\n allow_reconnect=True,\n connect_timeout=DEFAULT_CONNECT_TIMEOUT,\n reconnect_time_wait=RECONNECT_TIME_WAIT,\n max_reconnect_attempts=MAX_RECONNECT_ATTEMPTS,\n 
dont_randomize=False,\n\n # Ping Interval\n ping_interval=DEFAULT_PING_INTERVAL,\n max_outstanding_pings=MAX_OUTSTANDING_PINGS,\n tls=None,\n max_read_buffer_size=DEFAULT_READ_BUFFER_SIZE,\n max_write_buffer_size=DEFAULT_WRITE_BUFFER_SIZE,\n read_chunk_size=DEFAULT_READ_CHUNK_SIZE,\n tcp_nodelay=False,\n drain_timeout=DEFAULT_DRAIN_TIMEOUT,\n ):\n \"\"\"\n Establishes a connection to a NATS server.\n\n Examples:\n\n # Configure pool of NATS servers.\n nc = nats.io.client.Client()\n yield nc.connect({ 'servers': ['nats://192.168.1.10:4222', 'nats://192.168.2.10:4222'] })\n\n # User and pass are to be passed on the uri to authenticate.\n yield nc.connect({ 'servers': ['nats://hello:world@192.168.1.10:4222'] })\n\n # Simple URL can be used as well.\n yield nc.connect('demo.nats.io:4222')\n\n \"\"\"\n self._setup_server_pool(servers)\n self._loop = io_loop or loop or tornado.ioloop.IOLoop.current()\n self._error_cb = error_cb\n self._closed_cb = closed_cb or close_cb\n self._reconnected_cb = reconnected_cb\n self._disconnected_cb = disconnected_cb\n self._max_read_buffer_size = max_read_buffer_size\n self._max_write_buffer_size = max_write_buffer_size\n self._read_chunk_size = read_chunk_size\n\n self.options[\"verbose\"] = verbose\n self.options[\"pedantic\"] = pedantic\n self.options[\"name\"] = name\n self.options[\"no_echo\"] = no_echo\n self.options[\"user\"] = user\n self.options[\"password\"] = password\n self.options[\"token\"] = token\n self.options[\"max_outstanding_pings\"] = max_outstanding_pings\n self.options[\"max_reconnect_attempts\"] = max_reconnect_attempts\n self.options[\"reconnect_time_wait\"] = reconnect_time_wait\n self.options[\"dont_randomize\"] = dont_randomize\n self.options[\"allow_reconnect\"] = allow_reconnect\n self.options[\"tcp_nodelay\"] = tcp_nodelay\n\n # In seconds\n self.options[\"connect_timeout\"] = connect_timeout\n self.options[\"ping_interval\"] = ping_interval\n self.options[\"drain_timeout\"] = drain_timeout\n\n # TLS 
customizations\n if tls is not None:\n self.options[\"tls\"] = tls\n\n while True:\n try:\n s = self._next_server()\n if s is None:\n raise ErrNoServers\n\n # Check when was the last attempt and back off before reconnecting\n if s.last_attempt is not None:\n now = time.time()\n if (now - s.last_attempt\n ) < self.options[\"reconnect_time_wait\"]:\n yield tornado.gen.sleep(\n self.options[\"reconnect_time_wait\"])\n\n # Mark that we have attempted to connect\n s.reconnects += 1\n s.last_attempt = time.time()\n yield self._server_connect(s)\n self._current_server = s\n\n # Mark that TCP connect worked against this server.\n s.did_connect = True\n\n # Established TCP connection at least and about\n # to send connect command, which might not succeed\n # in case TLS required and handshake failed.\n self._status = Client.CONNECTING\n yield self._process_connect_init()\n break\n except ErrNoServers:\n raise\n except Exception as e:\n self._status = Client.DISCONNECTED\n self._err = e\n if self._error_cb is not None:\n self._error_cb(ErrServerConnect(e))\n if not self.options[\"allow_reconnect\"]:\n raise ErrNoServers\n\n # Flush pending data before continuing in connected status.\n # FIXME: Could use future here and wait for an error result\n # to bail earlier in case there are errors in the connection.\n yield self._flush_pending()\n\n # First time connecting to NATS so if there were no errors,\n # we can consider to be connected at this point.\n self._status = Client.CONNECTED\n\n # Prepare the ping pong interval.\n self._ping_timer = tornado.ioloop.PeriodicCallback(\n self._ping_interval, self.options[\"ping_interval\"] * 1000)\n self._ping_timer.start()", "response": "Connect to a NATS server."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _server_connect(self, s):\n self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n self._socket.setblocking(0)\n self._socket.settimeout(1.0)\n\n if 
self.options[\"tcp_nodelay\"]:\n self._socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)\n\n self.io = tornado.iostream.IOStream(self._socket,\n max_buffer_size=self._max_read_buffer_size,\n max_write_buffer_size=self._max_write_buffer_size,\n read_chunk_size=self._read_chunk_size)\n\n # Connect to server with a deadline\n future = self.io.connect((s.uri.hostname, s.uri.port))\n yield tornado.gen.with_timeout(\n timedelta(seconds=self.options[\"connect_timeout\"]), future)\n\n # Called whenever disconnected from the server.\n self.io.set_close_callback(self._process_op_err)", "response": "Establishes the TCP connection to the given server within the connect timeout, wrapping the socket in a Tornado IOStream and registering a close callback to detect disconnects."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates a JSON string with the params to be used when sending a CONNECT to the server.", "response": "def connect_command(self):\n '''\n Generates a JSON string with the params to be used\n when sending CONNECT to the server.\n\n ->> CONNECT {\"verbose\": false, \"pedantic\": false, \"lang\": \"python2\" }\n\n '''\n options = {\n \"verbose\": self.options[\"verbose\"],\n \"pedantic\": self.options[\"pedantic\"],\n \"lang\": __lang__,\n \"version\": __version__,\n \"protocol\": PROTOCOL\n }\n if \"auth_required\" in self._server_info:\n if self._server_info[\"auth_required\"] == True:\n # In case there is no password, then handle\n # sending a token instead.\n if self.options[\"user\"] is not None and self.options[\"password\"] is not None:\n options[\"user\"] = self.options[\"user\"]\n options[\"pass\"] = self.options[\"password\"]\n elif self.options[\"token\"] is not None:\n options[\"auth_token\"] = self.options[\"token\"]\n elif self._current_server.uri.password is None:\n options[\"auth_token\"] = self._current_server.uri.username\n else:\n options[\"user\"] = self._current_server.uri.username\n options[\"pass\"] = 
self._current_server.uri.password\n if self.options[\"name\"] is not None:\n options[\"name\"] = self.options[\"name\"]\n if self.options[\"no_echo\"] is not None:\n options[\"echo\"] = not self.options[\"no_echo\"]\n\n args = json.dumps(options, sort_keys=True)\n return CONNECT_PROTO.format(CONNECT_OP, args, _CRLF_)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsends a command to the server.", "response": "def send_command(self, cmd, priority=False):\n \"\"\"\n Flushes a command to the server as a bytes payload.\n \"\"\"\n if priority:\n self._pending.insert(0, cmd)\n else:\n self._pending.append(cmd)\n self._pending_size += len(cmd)\n\n if self._pending_size > DEFAULT_PENDING_SIZE:\n yield self._flush_pending()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef publish_request(self, subject, reply, payload):\n payload_size = len(payload)\n if payload_size > self._max_payload_size:\n raise ErrMaxPayload\n if self.is_closed:\n raise ErrConnectionClosed\n yield self._publish(subject, reply, payload, payload_size)\n if self._flush_queue.empty():\n yield self._flush_pending()", "response": "Publishes a message with a reply subscription and a payload."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _flush_timeout(self, timeout):\n future = tornado.concurrent.Future()\n yield self._send_ping(future)\n try:\n result = yield tornado.gen.with_timeout(\n timedelta(seconds=timeout), future)\n except tornado.gen.TimeoutError:\n # Set the future to False so it can be ignored in _process_pong,\n # and try to remove from the list of pending pongs.\n future.set_result(False)\n for i, pong_future in enumerate(self._pongs):\n if pong_future == future:\n del self._pongs[i]\n break\n raise\n raise tornado.gen.Return(result)", "response": "Sends a ping to the server and returns True if the server responds 
back otherwise raise a TimeoutError."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef request(self, subject, payload, timeout=0.5, expected=1, cb=None):\n if self.is_draining:\n raise ErrConnectionDraining\n\n # If callback given then continue to use old style.\n if cb is not None:\n next_inbox = INBOX_PREFIX[:]\n next_inbox.extend(self._nuid.next())\n inbox = str(next_inbox)\n sid = yield self.subscribe(\n inbox,\n queue=_EMPTY_,\n cb=cb,\n max_msgs=expected,\n )\n yield self.auto_unsubscribe(sid, expected)\n yield self.publish_request(subject, inbox, payload)\n raise tornado.gen.Return(sid)\n\n if self._resp_sub_prefix is None:\n self._resp_map = {}\n\n # Create a prefix and single wildcard subscription once.\n self._resp_sub_prefix = str(INBOX_PREFIX[:])\n self._resp_sub_prefix += self._nuid.next()\n self._resp_sub_prefix += b'.'\n resp_mux_subject = str(self._resp_sub_prefix[:])\n resp_mux_subject += b'*'\n sub = Subscription(subject=str(resp_mux_subject))\n\n # FIXME: Allow setting pending limits for responses mux subscription.\n sub.pending_msgs_limit = DEFAULT_SUB_PENDING_MSGS_LIMIT\n sub.pending_bytes_limit = DEFAULT_SUB_PENDING_BYTES_LIMIT\n sub.pending_queue = tornado.queues.Queue(\n maxsize=sub.pending_msgs_limit)\n\n # Single task for handling the requests\n @tornado.gen.coroutine\n def wait_for_msgs():\n while True:\n sub = wait_for_msgs.sub\n if sub.closed:\n break\n\n msg = yield sub.pending_queue.get()\n if msg is None:\n break\n\n token = msg.subject[INBOX_PREFIX_LEN:]\n try:\n fut = self._resp_map[token]\n fut.set_result(msg)\n del self._resp_map[token]\n except KeyError:\n # Future already handled so drop any extra\n # responses which may have made it.\n continue\n\n wait_for_msgs.sub = sub\n self._loop.spawn_callback(wait_for_msgs)\n\n # Store the subscription in the subscriptions map,\n # then send the protocol commands to the server.\n self._ssid += 1\n sid = self._ssid\n sub.sid = 
sid\n self._subs[sid] = sub\n\n # Send SUB command...\n sub_cmd = b''.join([\n SUB_OP, _SPC_,\n sub.subject.encode(), _SPC_, (\"%d\" % sid).encode(), _CRLF_\n ])\n yield self.send_command(sub_cmd)\n yield self._flush_pending()\n\n # Use a new NUID for the token inbox and then use the future.\n token = self._nuid.next()\n inbox = self._resp_sub_prefix[:]\n inbox.extend(token)\n future = tornado.concurrent.Future()\n self._resp_map[token.decode()] = future\n yield self.publish_request(subject, str(inbox), payload)\n msg = yield tornado.gen.with_timeout(\n timedelta(seconds=timeout), future)\n raise tornado.gen.Return(msg)", "response": "Send a request to the server."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nimplements the request/response pattern via pub/sub using an ephemeral subscription which will be published with a limited interest of 1 reply returning the response or raising a Timeout error. ->> SUB _INBOX.E9jM2HTirMXDMXPROSQmSd 90 ->> UNSUB 90 1 ->> PUB hello _INBOX.E9jM2HTirMXDMXPROSQmSd 5 ->> MSG_PAYLOAD: world <<- MSG hello 2 _INBOX.E9jM2HTirMXDMXPROSQmSd 5", "response": "def timed_request(self, subject, payload, timeout=0.5):\n \"\"\"\n Implements the request/response pattern via pub/sub\n using an ephemeral subscription which will be published\n with a limited interest of 1 reply returning the response\n or raising a Timeout error.\n\n ->> SUB _INBOX.E9jM2HTirMXDMXPROSQmSd 90\n ->> UNSUB 90 1\n ->> PUB hello _INBOX.E9jM2HTirMXDMXPROSQmSd 5\n ->> MSG_PAYLOAD: world\n <<- MSG hello 2 _INBOX.E9jM2HTirMXDMXPROSQmSd 5\n\n \"\"\"\n next_inbox = INBOX_PREFIX[:]\n next_inbox.extend(self._nuid.next())\n inbox = str(next_inbox)\n future = tornado.concurrent.Future()\n sid = yield self.subscribe(\n subject=inbox, queue=_EMPTY_, cb=None, future=future, max_msgs=1)\n yield self.auto_unsubscribe(sid, 1)\n yield self.publish_request(subject, inbox, payload)\n msg = yield tornado.gen.with_timeout(\n 
timedelta(seconds=timeout), future)\n raise tornado.gen.Return(msg)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef subscribe(\n self,\n subject=\"\",\n queue=\"\",\n cb=None,\n future=None,\n max_msgs=0,\n is_async=False,\n pending_msgs_limit=DEFAULT_SUB_PENDING_MSGS_LIMIT,\n pending_bytes_limit=DEFAULT_SUB_PENDING_BYTES_LIMIT,\n ):\n \"\"\"\n Sends a SUB command to the server. Takes a queue parameter\n which can be used in case of distributed queues or left empty\n if it is not the case, and a callback that will be dispatched\n message for processing them.\n \"\"\"\n if self.is_closed:\n raise ErrConnectionClosed\n\n if self.is_draining:\n raise ErrConnectionDraining\n\n self._ssid += 1\n sid = self._ssid\n sub = Subscription(\n subject=subject,\n queue=queue,\n cb=cb,\n future=future,\n max_msgs=max_msgs,\n is_async=is_async,\n sid=sid,\n )\n self._subs[sid] = sub\n\n if cb is not None:\n sub.pending_msgs_limit = pending_msgs_limit\n sub.pending_bytes_limit = pending_bytes_limit\n sub.pending_queue = tornado.queues.Queue(\n maxsize=pending_msgs_limit)\n\n @tornado.gen.coroutine\n def wait_for_msgs():\n while True:\n sub = wait_for_msgs.sub\n err_cb = wait_for_msgs.err_cb\n\n try:\n sub = wait_for_msgs.sub\n if sub.closed:\n break\n\n msg = yield sub.pending_queue.get()\n if msg is None:\n break\n\n sub.received += 1\n sub.pending_size -= len(msg.data)\n if sub.max_msgs > 0 and sub.received >= sub.max_msgs:\n # If we have hit the max for delivered msgs, remove sub.\n self._subs.pop(sub.sid, None)\n self._remove_subscription(sub)\n\n # Invoke depending of type of handler.\n if sub.is_async:\n # NOTE: Deprecate this usage in a next release,\n # the handler implementation ought to decide\n # the concurrency level at which the messages\n # should be processed.\n self._loop.spawn_callback(sub.cb, msg)\n else:\n yield sub.cb(msg)\n except Exception as e:\n # All errors from calling an async subscriber\n 
# handler are async errors.\n if err_cb is not None:\n yield err_cb(e)\n\n # Bind the subscription and error cb if present\n wait_for_msgs.sub = sub\n wait_for_msgs.err_cb = self._error_cb\n self._loop.spawn_callback(wait_for_msgs)\n\n elif future is not None:\n # Used to handle the single response from a request\n # based on auto unsubscribe.\n sub.future = future\n\n # Send SUB command...\n sub_cmd = b''.join([\n SUB_OP, _SPC_,\n sub.subject.encode(), _SPC_,\n sub.queue.encode(), _SPC_, (\"%d\" % sid).encode(), _CRLF_\n ])\n yield self.send_command(sub_cmd)\n yield self._flush_pending()\n raise tornado.gen.Return(sid)", "response": "Sends a SUB command to the server and returns the subscription's sid, dispatching received messages to the given callback or future."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef subscribe_async(self, subject, **kwargs):\n kwargs[\"is_async\"] = True\n sid = yield self.subscribe(subject, **kwargs)\n raise tornado.gen.Return(sid)", "response": "Subscribes to a subject, dispatching the callback asynchronously."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntaking a subscription sequence id and removes the subscription from the client, optionally after receiving more than max_msgs, and unsubscribes immediately.", "response": "def unsubscribe(self, ssid, max_msgs=0):\n \"\"\"\n Takes a subscription sequence id and removes the subscription\n from the client, optionally after receiving more than max_msgs,\n and unsubscribes immediately.\n \"\"\"\n if self.is_closed:\n raise ErrConnectionClosed\n\n sub = None\n try:\n sub = self._subs[ssid]\n except KeyError:\n # Already unsubscribed.\n return\n\n # In case subscription has already received enough messages\n # then announce to the server that we are unsubscribing and\n # remove the callback locally too.\n if max_msgs == 0 or sub.received >= max_msgs:\n self._subs.pop(ssid, None)\n self._remove_subscription(sub)\n\n # We will send these for all subs when we reconnect anyway,\n # so that we can suppress 
here.\n if not self.is_reconnecting:\n yield self.auto_unsubscribe(ssid, max_msgs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef auto_unsubscribe(self, sid, limit=1):\n if self.is_draining:\n raise ErrConnectionDraining\n yield self._unsubscribe(sid, limit)", "response": "Sends an UNSUB command so the server stops delivering messages for the subscription once the given limit is reached."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _process_ping(self):\n yield self.send_command(PONG_PROTO)\n if self._flush_queue.empty():\n yield self._flush_pending()", "response": "Process a PING command."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _process_msg(self, sid, subject, reply, data):\n payload_size = len(data)\n self.stats['in_msgs'] += 1\n self.stats['in_bytes'] += payload_size\n\n msg = Msg(subject=subject.decode(), reply=reply.decode(), data=data)\n\n # Don't process the message if the subscription has been removed\n sub = self._subs.get(sid)\n if sub is None:\n raise tornado.gen.Return()\n\n # Check if it is an old style request.\n if sub.future is not None:\n sub.future.set_result(msg)\n\n # Discard subscription since done\n self._subs.pop(sid, None)\n self._remove_subscription(sub)\n raise tornado.gen.Return()\n\n # Let subscription wait_for_msgs coroutine process the messages,\n # but in case sending to the subscription task would block,\n # then consider it to be a slow consumer and drop the message.\n try:\n sub.pending_size += payload_size\n if sub.pending_size >= sub.pending_bytes_limit:\n # Subtract the bytes again since the message is\n # dropped and should not count as pending data.\n sub.pending_size -= payload_size\n\n if self._error_cb is not None:\n yield self._error_cb(ErrSlowConsumer())\n raise tornado.gen.Return()\n sub.pending_queue.put_nowait(msg)\n except tornado.queues.QueueFull:\n if self._error_cb is not None:\n yield 
self._error_cb(ErrSlowConsumer())", "response": "Dispatches a received message to its subscription's pending queue, updating stats and reporting a slow consumer when the pending byte or message limits would be exceeded."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _process_connect_init(self):\n # INFO {...}\n line = yield self.io.read_until(_CRLF_, max_bytes=None)\n _, args = line.split(INFO_OP + _SPC_, 1)\n self._server_info = tornado.escape.json_decode(args)\n\n if 'max_payload' in self._server_info:\n self._max_payload_size = self._server_info[\"max_payload\"]\n\n # Check whether we need to upgrade to TLS first of all\n if 'tls_required' in self._server_info and self._server_info['tls_required']:\n # Detach and prepare for upgrading the TLS connection.\n self._loop.remove_handler(self._socket.fileno())\n\n tls_opts = {}\n if \"tls\" in self.options:\n # Allow customizing the TLS version though default\n # to one that the server supports at least.\n tls_opts = self.options[\"tls\"]\n\n # Rewrap using a TLS connection, can't do handshake on connect\n # as the socket is non blocking.\n self._socket = ssl.wrap_socket(\n self._socket, do_handshake_on_connect=False, **tls_opts)\n\n # Use the TLS stream instead from now\n self.io = tornado.iostream.SSLIOStream(self._socket)\n self.io.set_close_callback(self._process_op_err)\n self.io._do_ssl_handshake()\n\n # Refresh state of the parser upon reconnect.\n if self.is_reconnecting:\n self._ps.reset()\n\n # CONNECT then send a PING expecting a PONG to make a\n # roundtrip to the server and assert that commands sent\n # this far have been processed already.\n cmd = self.connect_command()\n yield self.io.write(cmd)\n yield self.io.write(PING_PROTO)\n\n # FIXME: Add readline timeout for these.\n next_op = yield self.io.read_until(\n _CRLF_, max_bytes=MAX_CONTROL_LINE_SIZE)\n if self.options[\"verbose\"] and OK_OP in next_op:\n next_op = yield self.io.read_until(\n _CRLF_, max_bytes=MAX_CONTROL_LINE_SIZE)\n if ERR_OP in next_op:\n err_line = next_op.decode()\n _, err_msg = 
err_line.split(_SPC_, 1)\n # FIXME: Maybe handling could be more special here,\n # checking for ErrAuthorization for example.\n # yield from self._process_err(err_msg)\n raise NatsError(\"nats: \" + err_msg.rstrip('\\r\\n'))\n\n if PONG_PROTO in next_op:\n self._status = Client.CONNECTED\n\n self._loop.spawn_callback(self._read_loop)\n self._pongs = []\n self._pings_outstanding = 0\n self._ping_timer = tornado.ioloop.PeriodicCallback(\n self._ping_interval, self.options[\"ping_interval\"] * 1000)\n self._ping_timer.start()\n\n # Queue and flusher for coalescing writes to the server.\n self._flush_queue = tornado.queues.Queue(maxsize=1024)\n self._loop.spawn_callback(self._flusher_loop)", "response": "Handles the initial part of the NATS protocol connecting to the server."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _process_info(self, info_line):\n info = tornado.escape.json_decode(info_line.decode())\n\n if 'connect_urls' in info:\n if info['connect_urls']:\n connect_urls = []\n for connect_url in info['connect_urls']:\n uri = urlparse(\"nats://%s\" % connect_url)\n srv = Srv(uri)\n srv.discovered = True\n\n # Filter for any similar server in the server pool already.\n should_add = True\n for s in self._server_pool:\n if uri.netloc == s.uri.netloc:\n should_add = False\n if should_add:\n connect_urls.append(srv)\n\n if self.options[\"dont_randomize\"] is not True:\n shuffle(connect_urls)\n for srv in connect_urls:\n self._server_pool.append(srv)", "response": "Process INFO lines sent by the server to reconfigure client\n with latest updates from cluster."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the next available server.", "response": "def _next_server(self):\n \"\"\"\n Chooses next available server to connect.\n \"\"\"\n if self.options[\"dont_randomize\"]:\n server = self._server_pool.pop(0)\n self._server_pool.append(server)\n else:\n 
shuffle(self._server_pool)\n\n s = None\n for server in self._server_pool:\n if self.options[\"max_reconnect_attempts\"] > 0 and (\n server.reconnects >\n self.options[\"max_reconnect_attempts\"]):\n continue\n else:\n s = server\n return s"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _process_op_err(self, err=None):\n if self.is_connecting or self.is_closed or self.is_reconnecting:\n return\n\n if self.options[\"allow_reconnect\"] and self.is_connected:\n self._status = Client.RECONNECTING\n yield self._attempt_reconnect()\n else:\n # Transition into CLOSED state\n self._status = Client.DISCONNECTED\n self._err = err\n yield self._close(Client.CLOSED)", "response": "Process errors which occured while reading the protocol."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _select_next_server(self):\n\n # Continue trying to connect until there is an available server\n # or bail in case there are no more available servers.\n while True:\n if len(self._server_pool) == 0:\n self._current_server = None\n raise ErrNoServers\n\n now = time.time()\n s = self._server_pool.pop(0)\n if self.options[\"max_reconnect_attempts\"] > 0:\n if s.reconnects > self.options[\"max_reconnect_attempts\"]:\n # Discard server since already tried to reconnect too many times.\n continue\n\n # Not yet exceeded max_reconnect_attempts so can still use\n # this server in the future.\n self._server_pool.append(s)\n if s.last_attempt is not None and now < s.last_attempt + self.options[\"reconnect_time_wait\"]:\n # Backoff connecting to server if we attempted recently.\n yield tornado.gen.sleep(self.options[\"reconnect_time_wait\"])\n\n try:\n yield self._server_connect(s)\n self._current_server = s\n break\n except Exception as e:\n s.last_attempt = time.time()\n s.reconnects += 1\n\n self._err = e\n if self._error_cb is not None:\n self._error_cb(e)\n 
self._status = Client.RECONNECTING\n continue", "response": "Iterates over the server pool and attempts to connect to the next available server."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncloses the connection, stopping background tasks and optionally dispatching the disconnected and closed callbacks.", "response": "def _close(self, status, do_callbacks=True):\n \"\"\"\n Takes the status on which it should leave the connection\n and an optional boolean parameter to dispatch the disconnected\n and close callbacks if there are any.\n \"\"\"\n if self.is_closed:\n self._status = status\n return\n self._status = Client.CLOSED\n\n # Stop background tasks\n yield self._end_flusher_loop()\n\n if self._ping_timer is not None and self._ping_timer.is_running():\n self._ping_timer.stop()\n\n if not self.io.closed():\n self.io.close()\n\n # Cleanup subscriptions since not reconnecting so no need\n # to replay the subscriptions anymore.\n # NOTE: iterate over a copy since entries are removed while looping.\n for ssid, sub in list(self._subs.items()):\n self._subs.pop(ssid, None)\n self._remove_subscription(sub)\n\n if do_callbacks:\n if self._disconnected_cb is not None:\n self._disconnected_cb()\n if self._closed_cb is not None:\n self._closed_cb()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef drain(self, sid=None):\n if self.is_draining:\n return\n\n if self.is_closed:\n raise ErrConnectionClosed\n\n if self.is_connecting or self.is_reconnecting:\n raise ErrConnectionReconnecting\n\n # Drain a single subscription\n if sid is not None:\n raise tornado.gen.Return(self._drain_sub(sid))\n\n # Start draining the subscriptions\n self._status = Client.DRAINING_SUBS\n\n drain_tasks = []\n for ssid, sub in self._subs.items():\n task = self._drain_sub(ssid)\n drain_tasks.append(task)\n\n # Wait for subscriptions to stop handling messages.\n drain_is_done = tornado.gen.multi(drain_tasks)\n try:\n yield tornado.gen.with_timeout(\n timedelta(seconds=self.options[\"drain_timeout\"]),\n drain_is_done,\n )\n except 
tornado.gen.TimeoutError:\n if self._error_cb is not None:\n yield self._error_cb(ErrDrainTimeout())\n finally:\n self._status = Client.DRAINING_PUBS\n yield self.flush()\n yield self.close()", "response": "Drain a single subscription or publishers."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nprocesses an error from the server and dispatches the error callback.", "response": "def _process_err(self, err=None):\n \"\"\"\n Stores the last received error from the server and dispatches the error callback.\n \"\"\"\n self.stats['errors_received'] += 1\n\n if err == \"'Authorization Violation'\":\n self._err = ErrAuthorization\n elif err == \"'Slow Consumer'\":\n self._err = ErrSlowConsumer\n elif err == \"'Stale Connection'\":\n self._err = ErrStaleConnection\n else:\n self._err = Exception(err)\n\n if self._error_cb is not None:\n self._error_cb(err)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads loop for gathering bytes from the server in a buffer containing MAX_CONTROL_LINE_SIZE and parsing them.", "response": "def _read_loop(self, data=''):\n \"\"\"\n Read loop for gathering bytes from the server in a buffer\n of maximum MAX_CONTROL_LINE_SIZE, then received bytes are streamed\n to the parsing callback for processing.\n \"\"\"\n while True:\n if not self.is_connected or self.is_connecting or self.io.closed():\n break\n\n try:\n yield self.io.read_bytes(\n DEFAULT_READ_CHUNK_SIZE,\n streaming_callback=self._ps.parse,\n partial=True)\n except tornado.iostream.StreamClosedError as e:\n self._err = e\n if self._error_cb is not None and not self.is_reconnecting and not self.is_closed:\n self._error_cb(e)\n break"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _flusher_loop(self):\n while True:\n pending = []\n pending_size = 0\n try:\n # Block and wait for the flusher to be kicked\n yield 
self._flush_queue.get()\n\n # Check whether we should bail first\n if not self.is_connected or self.is_connecting or self.io.closed():\n break\n\n # Flush only when we actually have something in buffer...\n if self._pending_size > 0:\n cmds = b''.join(self._pending)\n\n # Reset pending queue and store tmp in case write fails\n self._pending, pending = [], self._pending\n self._pending_size, pending_size = 0, self._pending_size\n yield self.io.write(cmds)\n except tornado.iostream.StreamBufferFullError:\n # Acumulate as pending data size and flush when possible.\n self._pending = pending + self._pending\n self._pending_size += pending_size\n except tornado.iostream.StreamClosedError as e:\n self._pending = pending + self._pending\n self._pending_size += pending_size\n yield self._process_op_err(e)", "response": "Coroutine which continuously flushes pending commands to the socket."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _end_flusher_loop(self):\n if not self.is_connected or self.is_connecting or self.io.closed():\n if self._flush_queue is not None and self._flush_queue.empty():\n self._flush_pending(check_connected=False)\n yield tornado.gen.moment", "response": "End the flusher loop."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload the YAML files found in source into a dict.", "response": "def yaml_load(source, defaultdata=NO_DEFAULT):\n \"\"\"merge YAML data from files found in source\n\n Always returns a dict. The YAML files are expected to contain some kind of\n key:value structures, possibly deeply nested. When merging, lists are\n appended and dict keys are replaced. 
The YAML files are read with the\n yaml.safe_load function.\n\n source can be a file, a dir, a list/tuple of files or a string containing\n a glob expression (with ?*[]).\n\n For a directory, all *.yaml files will be read in alphabetical order.\n\n defaultdata can be used to initialize the data.\n \"\"\"\n logger = logging.getLogger(__name__)\n logger.debug(\"initialized with source=%s, defaultdata=%s\", source, defaultdata)\n if defaultdata is NO_DEFAULT:\n data = None\n else:\n data = defaultdata\n files = []\n if type(source) is not str and len(source) == 1:\n # when called from __main source is always a list, even if it contains only one item.\n # turn into a string if it contains only one item to support our different call modes\n source = source[0]\n if type(source) is list or type(source) is tuple:\n # got a list, assume to be files\n files = source\n elif os.path.isdir(source):\n # got a dir, read all *.yaml files\n files = sorted(glob.glob(os.path.join(source, \"*.yaml\")))\n elif os.path.isfile(source):\n # got a single file, turn it into list to use the same code\n files = [source]\n else:\n # try to use the source as a glob\n files = sorted(glob.glob(source))\n if files:\n logger.debug(\"Reading %s\\n\", \", \".join(files))\n for yaml_file in files:\n try:\n with open(yaml_file) as f:\n new_data = safe_load(f)\n logger.debug(\"YAML LOAD: %s\", new_data)\n except MarkedYAMLError as e:\n logger.error(\"YAML Error: %s\", e)\n raise YamlReaderError(\"YAML Error: %s\" % str(e))\n if new_data is not None:\n data = data_merge(data, new_data)\n else:\n if defaultdata is NO_DEFAULT:\n logger.error(\"No YAML data found in %s and no default data given\", source)\n raise YamlReaderError(\"No YAML data found in %s\" % source)\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngenerate a widget visualization using the widget", "response": "def widget(self, which_viz='viz'):\n '''\n Generate a widget visualization using the 
widget. The export_viz_to_widget\n method passes the visualization JSON to the instantiated widget, which is\n returned and visualized on the front-end.\n '''\n if hasattr(self, 'widget_class') == True:\n self.widget_instance = self.widget_class(network = self.export_viz_to_widget(which_viz))\n\n return self.widget_instance\n else:\n print('Can not make widget because Network has no attribute widget_class')\n print('Please instantiate Network with clustergrammer_widget using: Network(clustergrammer_widget)')"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndate iterator returning dates in d1 < x < d2", "response": "def iterdays(self, d1, d2):\n \"\"\"\n Date iterator returning dates in d1 <= x < d2\n \"\"\"\n curr = datetime.datetime.combine(d1, datetime.time())\n end = datetime.datetime.combine(d2, datetime.time())\n if d1.date() == d2.date():\n yield curr\n return\n while curr < end:\n yield curr\n curr = curr + datetime.timedelta(days=1)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning an iterator over the weekdays between d1 and d2 excluding weekends.", "response": "def iterweekdays(self, d1, d2):\n \"\"\"\n Date iterator returning dates in d1 <= x < d2, excluding weekends\n \"\"\"\n for dt in self.iterdays(d1, d2):\n if not self.isweekend(dt):\n yield dt"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef iterbusinessdays(self, d1, d2):\n assert d2 >= d1\n if d1.date() == d2.date() and d2.time() < self.business_hours[0]:\n return\n first = True\n for dt in self.iterdays(d1, d2):\n if first and d1.time() > self.business_hours[1]:\n first = False\n continue\n first = False\n if not self.isweekend(dt) and not self.isholiday(dt):\n yield dt", "response": "Date iterator returning dates in d1 < x < d2 excluding weekends and holidays"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that 
can\nreturn a datetime. timedelta with the number of full business days between d1 and d2 and the business time between d1 and d2.", "response": "def businesstimedelta(self, d1, d2):\n \"\"\"\n Returns a datetime.timedelta with the number of full business days\n and business time between d1 and d2\n \"\"\"\n\n if d1 > d2:\n d1, d2, timedelta_direction = d2, d1, -1\n else:\n timedelta_direction = 1\n businessdays = self._build_spanning_datetimes(d1, d2)\n time = datetime.timedelta()\n\n if len(businessdays) == 0:\n # HACK: manually handle the case when d1 is after business hours while d2 is during\n if self.isduringbusinesshours(d2):\n time += d2 - datetime.datetime.combine(d2,\n self.business_hours[0])\n\n # HACK: manually handle the case where d1 is on an earlier non-business day and d2 is after hours on a business day\n elif not self.isbusinessday(d1) and self.isbusinessday(d2):\n if d2.time() > self.business_hours[1]:\n time += datetime.datetime.combine(\n d2,\n self.business_hours[1]) - datetime.datetime.combine(\n d2, self.business_hours[0])\n elif d2.time() > self.business_hours[0]:\n time += d2 - datetime.datetime.combine(\n d2, self.business_hours[0])\n\n else:\n prev = None\n current = None\n count = 0\n for d in businessdays:\n if current is None:\n current = d\n current = datetime.datetime.combine(d, current.time())\n if prev is not None:\n if prev.date() != current.date():\n time += datetime.timedelta(days=1)\n if count == len(businessdays) - 1:\n if current > d:\n # We went too far\n time -= datetime.timedelta(days=1)\n time += self.open_hours - (current - d)\n else:\n time += d - current\n count += 1\n prev = current\n\n return time * timedelta_direction"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef businesstime_hours(self, d1, d2):\n open_hours = self.open_hours.seconds / 3600\n btd = self.businesstimedelta(d1, d2)\n btd_hours = btd.seconds / 3600\n return 
datetime.timedelta(hours=(btd.days * open_hours + btd_hours))", "response": "Returns a datetime. timedelta of business hours between d1 and d2 based on the length of the businessday \n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _day_rule_matches(self, rule, dt):\n if dt.weekday() == 4:\n sat = dt + datetime.timedelta(days=1)\n if super(USFederalHolidays, self)._day_rule_matches(rule, sat):\n return True\n elif dt.weekday() == 0:\n sun = dt - datetime.timedelta(days=1)\n if super(USFederalHolidays, self)._day_rule_matches(rule, sun):\n return True\n return super(USFederalHolidays, self)._day_rule_matches(rule, dt)", "response": "Return True if the rule matches the given datetime."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndoes something with a cloned repo.", "response": "def act_on_cloned_repo(self, path: Union[str, pathlib.Path],\n api) -> Optional[HookResult]:\n \"\"\"Do something with a cloned repo.\n\n Args:\n path: Path to the repo.\n api: An instance of :py:class:`repobee.github_api.GitHubAPI`.\n\n Returns:\n optionally returns a HookResult namedtuple for reporting the\n outcome of the hook. May also return None, in which case no\n reporting will be performed for the hook.\n \"\"\""} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngenerates a mapping of each student repository to a list of reviewers.", "response": "def generate_review_allocations(\n self, master_repo_name: str, students: Iterable[str],\n num_reviews: int,\n review_team_name_function: Callable[[str, str], str]\n ) -> Mapping[str, List[str]]:\n \"\"\"Generate a (peer_review_team -> reviewers) mapping for each student\n repository (i.e. -), where len(reviewers) =\n num_reviews.\n\n review_team_name_function should be used to generate review team names.\n It should be called like:\n\n .. 
code-block:: python\n \n review_team_name_function(master_repo_name, student)\n\n .. important::\n \n There must be strictly more students than reviewers per repo\n (`num_reviews`). Otherwise, allocation is impossible.\n\n Args:\n master_repo_name: Name of a master repository.\n students: Students for which to generate peer review allocations.\n num_reviews: Amount of reviews each student should perform (and\n consequently amount of reviewers per repo)\n review_team_name_function: A function that takes a master repo name\n as its first argument, and a student username as its second, and\n returns a review team name.\n Returns:\n a (peer_review_team -> reviewers) mapping for each student repository.\n \"\"\""} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef hset(self, hashroot, key, value):\n hroot = self.root / hashroot\n if not hroot.is_dir():\n hroot.mkdir()\n hfile = hroot / gethashfile(key)\n d = self.get(hfile, {})\n d.update( {key : value})\n self[hfile] = d", "response": "set a hash entry"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncompressing category hashroot so that fast_only is True for compressed items.", "response": "def hcompress(self, hashroot):\n \"\"\" Compress category 'hashroot', so hset is fast again\n\n hget will fail if fast_only is True for compressed items (that were\n hset before hcompress).\n\n \"\"\"\n hfiles = self.keys(hashroot + \"/*\")\n all = {}\n for f in hfiles:\n # print \"using\",f\n all.update(self[f])\n self.uncache(f)\n\n self[hashroot + '/xx'] = all\n for f in hfiles:\n p = self.root / f\n if p.name == 'xx':\n continue\n p.unlink()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of all keys in DB or all keys matching a globpat", "response": "def keys(self, globpat = None):\n \"\"\" All keys in DB, or all keys matching a glob\"\"\"\n\n if globpat is None:\n files = self.root.rglob('*')\n 
else:\n files = self.root.glob(globpat)\n return [self._normalized(p) for p in files if p.is_file()]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef uncache(self,*items):\n if not items:\n self.cache = {}\n for it in items:\n self.cache.pop(it,None)", "response": "Removes all or specified items from the cache."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwaiting for a key to get a value from the database.", "response": "def waitget(self,key, maxwaittime = 60 ):\n \"\"\" Wait (poll) for a key to get a value\n\n Will wait for `maxwaittime` seconds before raising a KeyError.\n The call exits normally if the `key` field in db gets a value\n within the timeout period.\n\n Use this for synchronizing different processes or for ensuring\n that an unfortunately timed \"db['key'] = newvalue\" operation\n in another process (which causes all 'get' operation to cause a\n KeyError for the duration of pickling) won't screw up your program\n logic.\n \"\"\"\n\n wtimes = [0.2] * 3 + [0.5] * 2 + [1]\n tries = 0\n waited = 0\n while 1:\n try:\n val = self[key]\n return val\n except KeyError:\n pass\n\n if waited > maxwaittime:\n raise KeyError(key)\n\n time.sleep(wtimes[tries])\n waited+=wtimes[tries]\n if tries < len(wtimes) -1:\n tries+=1"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef hacking_todo_format(physical_line, tokens):\n # TODO(jogo): make the following doctests pass:\n # H101: #TODO(jogo fail\n # H101: #TODO(jogo\n # TODO(jogo): make this check docstrings as well (don't have to be at top\n # of function)\n for token_type, text, start_index, _, _ in tokens:\n if token_type == tokenize.COMMENT:\n pos = text.find('TODO')\n pos1 = text.find('TODO(')\n if (pos != pos1):\n return pos + start_index[1], \"H101: Use TODO(NAME)\"", "response": "Check for the todo section in the file."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a 
function in Python 3 that\nchecks for Apache 2.0 license.", "response": "def hacking_has_license(physical_line, filename, lines, line_number):\n \"\"\"Check for Apache 2.0 license.\n\n H102 license header not found\n \"\"\"\n # don't worry about init files for now\n # TODO(sdague): enforce license in init file if it's not empty of content\n license_found = False\n\n # skip files that are < 10 lines, which isn't enough for a license to fit\n # this allows us to handle empty files, as well as not fail on the Okay\n # doctests.\n if line_number == 1 and len(lines) > 10 and _project_is_apache():\n for idx, line in enumerate(lines):\n # if it's more than 10 characters in, it's probably not in the\n # header\n if 0 <= line.find('Licensed under the Apache License') < 10:\n license_found = True\n if 0 <= line.find('SPDX-License-Identifier:') < 10:\n license_found = True\n if not license_found:\n return (0, \"H102: Apache 2.0 license header not found\")"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck for Apache 2. 
0 license.", "response": "def hacking_has_correct_license(physical_line, filename, lines, line_number):\n \"\"\"Check for Apache 2.0 license.\n\n H103 header does not match Apache 2.0 License notice\n \"\"\"\n # don't worry about init files for now\n # TODO(sdague): enforce license in init file if it's not empty of content\n\n # skip files that are < 10 lines, which isn't enough for a license to fit\n # this allows us to handle empty files, as well as not fail on the Okay\n # doctests.\n if line_number == 1 and len(lines) > 10 and _project_is_apache():\n for idx, line in enumerate(lines):\n column = line.find('Licensed under the Apache License')\n if (0 < column < 10 and not\n _check_for_exact_apache(idx, lines)):\n if (line.find('SPDX-License-Identifier: Apache-2.0') <= 0):\n return (column, \"H103: Header does not match Apache 2.0 \"\n \"License notice\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking for empty files with only comments H104 empty file with only comments", "response": "def hacking_has_only_comments(physical_line, filename, lines, line_number):\n \"\"\"Check for empty files with only comments\n\n H104 empty file with only comments\n \"\"\"\n if line_number == 1 and all(map(EMPTY_LINE_RE.match, lines)):\n return (0, \"H104: File contains nothing but comments\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _project_is_apache():\n\n global _is_apache_cache\n if _is_apache_cache is not None:\n return _is_apache_cache\n license_files = [\"LICENSE\"]\n for filename in license_files:\n try:\n with open(filename, \"r\") as file:\n for line in file:\n if re.search('Apache License', line):\n _is_apache_cache = True\n return True\n except IOError:\n pass\n _is_apache_cache = False\n return False", "response": "Determine if a project is Apache."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking if the Apache 2. 
0 license header matches the contents of the source tree.", "response": "def _check_for_exact_apache(start, lines):\n \"\"\"Check for the Apache 2.0 license header.\n\n We strip all the newlines and extra spaces so this license string\n should work regardless of indentation in the file.\n \"\"\"\n APACHE2 = \"\"\"\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may\nnot use this file except in compliance with the License. You may obtain\na copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\nWARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\nLicense for the specific language governing permissions and limitations\nunder the License.\"\"\"\n\n # out of all the formatting I've seen, a 12 line version seems to be the\n # longest in the source tree. So just take the 12 lines starting with where\n # the Apache starting words were found, strip all the '#' and collapse the\n # spaces.\n content = ''.join(lines[start:(start + 12)])\n content = re.sub('\\#', '', content)\n content = re.sub('\\s+', ' ', content).strip()\n stripped_apache2 = re.sub('\\s+', ' ', APACHE2).strip()\n\n if stripped_apache2 in content:\n return True\n else:\n print(\"!=:\\n'%s' !=\\n'%s'\" %\n (content, stripped_apache2))\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef hacking_no_author_tags(physical_line):\n for regex in AUTHOR_TAG_RE:\n if regex.match(physical_line):\n physical_line = physical_line.lower()\n pos = physical_line.find('moduleauthor')\n if pos < 0:\n pos = physical_line.find('author')\n return (pos, \"H105: Don't use author tags\")", "response": "Check that no author tags are used. 
H105 don't use author tags"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef hacking_python3x_except_compatible(logical_line, noqa):\n if noqa:\n return\n\n def is_old_style_except(logical_line):\n return (',' in logical_line and\n ')' not in logical_line.rpartition(',')[2])\n\n if (logical_line.startswith(\"except \") and\n logical_line.endswith(':') and\n is_old_style_except(logical_line)):\n yield 0, \"H231: Python 3.x incompatible 'except x,y:' construct\"", "response": "Check for Python 3.x compatible except statements."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef hacking_python3x_octal_literals(logical_line, tokens, noqa):\n if noqa:\n return\n\n for token_type, text, _, _, _ in tokens:\n if token_type == tokenize.NUMBER:\n match = RE_OCTAL.match(text)\n if match:\n yield 0, (\"H232: Python 3.x incompatible octal %s should be \"\n \"written as 0o%s \" %\n (match.group(0)[1:], match.group(1)))", "response": "Check for octal literals in Python 3. 
x compatible form."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef hacking_no_assert_equals(logical_line, tokens, noqa):\n\n if noqa:\n return\n for token_type, text, start_index, _, _ in tokens:\n\n if token_type == tokenize.NAME:\n if text == \"assertEquals\" or text == \"assertNotEquals\":\n yield (start_index[1],\n \"H234: %s is deprecated, use %s\" % (text, text[:-1]))", "response": "r Check for deprecated assertEquals and assertNotEquals."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks for removed modules in Python 3.", "response": "def hacking_no_removed_module(logical_line, noqa):\n r\"\"\"Check for removed modules in Python 3.\n\n Examples:\n Okay: from os import path\n Okay: from os import path as p\n Okay: from os import (path as p)\n Okay: import os.path\n H237: import thread\n Okay: import thread # noqa\n H237: import commands\n H237: import md5 as std_md5\n \"\"\"\n if noqa:\n return\n line = core.import_normalize(logical_line.strip())\n if line and line.split()[0] == 'import':\n module_name = line.split()[1].split('.')[0]\n if module_name in removed_modules:\n yield 0, (\"H237: module %s is \"\n \"removed in Python 3\" % module_name)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef hacking_no_old_style_class(logical_line, noqa):\n if noqa:\n return\n line = core.import_normalize(logical_line.strip())\n if line.startswith(\"class \") and not RE_NEW_STYLE_CLASS.match(line):\n yield (0, \"H238: old style class declaration, \"\n \"use new style (inherit from `object`)\")", "response": "Check for old style classes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking for vim editor configuration in source files.", "response": "def no_vim_headers(physical_line, line_number, lines):\n r\"\"\"Check for vim editor configuration in source 
files.\n\n By default vim modelines can only appear in the first or\n last 5 lines of a source file.\n\n Examples:\n H106: # vim: set tabstop=4 shiftwidth=4\\n#\\n#\\n#\\n#\\n#\n H106: # Lic\\n# vim: set tabstop=4 shiftwidth=4\\n#\\n#\\n#\\n#\\n#\n H106: # Lic\\n#\\n#\\n#\\n#\\n#\\n#\\n#\\n#\\n# vim: set tabstop=4 shiftwidth=4\n Okay: # Lic\\n#\\n#\\n#\\n#\\n#\\n#\\n#\n Okay: # viminal hill is located in Rome\n Okay: # vim, ze nemluvis cesky\n \"\"\"\n if ((line_number <= 5 or line_number > len(lines) - 5) and\n vim_header_re.match(physical_line)):\n return 0, \"H106: Don't put vim configuration in source files\""} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef hacking_import_rules(logical_line, filename, noqa):\n # TODO(jogo): make the following doctests pass:\n # H301: import os, sys\n # TODO(mordred: We need to split this into different checks so that they\n # can be disabled by command line switches properly\n\n if noqa:\n return\n\n split_line = logical_line.split()\n split_line_len = len(split_line)\n if (split_line_len > 1 and split_line[0] in ('import', 'from') and\n not core.is_import_exception(split_line[1])):\n pos = logical_line.find(',')\n if pos != -1:\n if split_line[0] == 'from':\n yield pos, \"H301: one import per line\"\n pos = logical_line.find('*')\n if pos != -1:\n yield pos, \"H303: No wildcard (*) import.\"\n return\n\n if split_line_len in (2, 4, 6) and split_line[1] != \"__future__\":\n if 'from' == split_line[0] and split_line_len > 3:\n mod = '.'.join((split_line[1], split_line[3]))\n if core.is_import_exception(mod):\n return\n if RE_RELATIVE_IMPORT.search(logical_line):\n yield logical_line.find('.'), (\n \"H304: No relative imports. 
'%s' is a relative import\"\n % logical_line)\n return", "response": "r Check for imports."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_import_exception(mod):\n return (mod in IMPORT_EXCEPTIONS or\n any(mod.startswith(m + '.') for m in IMPORT_EXCEPTIONS))", "response": "Check if import has been whitelisted."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run(self):\n if self.name and self.name not in self.__class__._has_run:\n self.__class__._has_run.add(self.name)\n ret = self.run_once()\n if ret is not None:\n yield ret", "response": "A context manager that runs the check once."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef hacking_no_locals(logical_line, tokens, noqa):\n if noqa:\n return\n for_formatting = False\n for token_type, text, start, _, _ in tokens:\n if text == \"%\" and token_type == tokenize.OP:\n for_formatting = True\n if for_formatting and token_type == tokenize.NAME:\n for k, v in LOCALS_TEXT_MAP.items():\n if text == k and v in logical_line:\n yield (start[1],\n \"H501: Do not use %s for string formatting\" % v)", "response": "Do not use locals or self. 
__dict__ for string formatting."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks for docstring not starting with space.", "response": "def hacking_docstring_start_space(physical_line, previous_logical, tokens):\n r\"\"\"Check for docstring not starting with space.\n\n OpenStack HACKING guide recommendation for docstring:\n Docstring should not start with space\n\n Okay: def foo():\\n '''This is good.'''\n Okay: def foo():\\n r'''This is good.'''\n Okay: def foo():\\n a = ''' This is not a docstring.'''\n Okay: def foo():\\n pass\\n ''' This is not.'''\n H401: def foo():\\n ''' This is not.'''\n H401: def foo():\\n r''' This is not.'''\n \"\"\"\n docstring = is_docstring(tokens, previous_logical)\n if docstring:\n start, start_triple = _find_first_of(docstring, START_DOCSTRING_TRIPLE)\n if docstring[len(start_triple)] == ' ':\n # docstrings get tokenized on the last line of the docstring, so\n # we don't know the exact position.\n return (0, \"H401: docstring should not start with\"\n \" a space\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef hacking_docstring_multiline_end(physical_line, previous_logical, tokens):\n docstring = is_docstring(tokens, previous_logical)\n if docstring:\n if '\\n' not in docstring:\n # not a multi line\n return\n else:\n last_line = docstring.split('\\n')[-1]\n pos = max(last_line.rfind(i) for i in END_DOCSTRING_TRIPLE)\n if len(last_line[:pos].strip()) > 0:\n # Something before the end docstring triple\n return (pos,\n \"H403: multi line docstrings should end on a new line\")", "response": "Check multi line docstring end."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking multi line docstring starts immediately with summary.", "response": "def hacking_docstring_multiline_start(physical_line, previous_logical, tokens):\n r\"\"\"Check multi line docstring starts immediately with 
summary.\n\n OpenStack HACKING guide recommendation for docstring:\n Docstring should start with a one-line summary, less than 80 characters.\n\n Okay: '''foobar\\n\\nfoo\\nbar\\n'''\n Okay: def foo():\\n a = '''\\nnot\\na docstring\\n'''\n H404: def foo():\\n '''\\nfoo\\nbar\\n'''\\n\\n\n H404: def foo():\\n r'''\\nfoo\\nbar\\n'''\\n\\n\n \"\"\"\n docstring = is_docstring(tokens, previous_logical)\n if docstring:\n if '\\n' not in docstring:\n # single line docstring\n return\n start, start_triple = _find_first_of(docstring, START_DOCSTRING_TRIPLE)\n lines = docstring.split('\\n')\n if lines[0].strip() == start_triple:\n # docstrings get tokenized on the last line of the docstring, so\n # we don't know the exact position.\n return (0, \"H404: multi line docstring \"\n \"should start without a leading new line\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks multi line docstring summary is separated with empty line.", "response": "def hacking_docstring_summary(physical_line, previous_logical, tokens):\n r\"\"\"Check multi line docstring summary is separated with empty line.\n\n OpenStack HACKING guide recommendation for docstring:\n Docstring should start with a one-line summary, less than 80 characters.\n\n Okay: def foo():\\n a = '''\\nnot\\na docstring\\n'''\n Okay: '''foobar\\n\\nfoo\\nbar\\n'''\n H405: def foo():\\n '''foobar\\nfoo\\nbar\\n'''\n H405: def foo():\\n r'''foobar\\nfoo\\nbar\\n'''\n H405: def foo():\\n '''foobar\\n'''\n \"\"\"\n docstring = is_docstring(tokens, previous_logical)\n if docstring:\n if '\\n' not in docstring:\n # not a multi line docstring\n return\n lines = docstring.split('\\n')\n if len(lines) > 1 and len(lines[1].strip()) is not 0:\n # docstrings get tokenized on the last line of the docstring, so\n # we don't know the exact position.\n return (0, \"H405: multi line docstring \"\n \"summary not separated with an empty line\")"} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 script to\nreturn found docstring 'A docstring is a string literal that occurs as the first statement in a module, function, class,' http://www.python.org/dev/peps/pep-0257/#what-is-a-docstring", "response": "def is_docstring(tokens, previous_logical):\n \"\"\"Return found docstring\n\n 'A docstring is a string literal that occurs as the first statement in a\n module, function, class,'\n http://www.python.org/dev/peps/pep-0257/#what-is-a-docstring\n \"\"\"\n for token_type, text, start, _, _ in tokens:\n if token_type == tokenize.STRING:\n break\n elif token_type != tokenize.INDENT:\n return False\n else:\n return False\n line = text.lstrip()\n start, start_triple = _find_first_of(line, START_DOCSTRING_TRIPLE)\n if (previous_logical.startswith(\"def \") or\n previous_logical.startswith(\"class \")):\n if start == 0:\n return text"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _find_first_of(line, substrings):\n starts = ((line.find(i), i) for i in substrings)\n found = [(i, sub) for i, sub in starts if i != -1]\n if found:\n return min(found)\n else:\n return -1, None", "response": "Find the first occurrence of one of substrings in line."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef hacking_localization_strings(logical_line, tokens, noqa):\n if noqa:\n return\n gen = check_i18n()\n next(gen)\n try:\n list(map(gen.send, tokens))\n gen.close()\n except LocalizationError as e:\n yield e.args", "response": "r Check localization in line."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_none(node):\n '''Check whether an AST node corresponds to None.\n\n In Python 2 None uses the same ast.Name class that variables etc. 
use,\n but in Python 3 there is a new ast.NameConstant class.\n '''\n if PY2:\n return isinstance(node, ast.Name) and node.id == 'None'\n return isinstance(node, ast.NameConstant) and node.value is None", "response": "Check whether an AST node corresponds to None."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef hacking_assert_is_none(logical_line, noqa):\n if noqa:\n return\n for func_name in ('assertEqual', 'assertIs', 'assertNotEqual',\n 'assertIsNot'):\n try:\n start = logical_line.index('.%s(' % func_name) + 1\n except ValueError:\n continue\n checker = NoneArgChecker(func_name)\n checker.visit(ast.parse(logical_line))\n if checker.none_found:\n yield start, \"H203: Use assertIs(Not)None to check for None\"", "response": "Hacking assertIsNone for None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef hacking_assert_greater_less(logical_line, noqa):\n if noqa:\n return\n\n methods = ['assertTrue', 'assertFalse']\n for method in methods:\n start = logical_line.find('.%s' % method) + 1\n if start != 0:\n break\n else:\n return\n comparisons = [ast.Gt, ast.GtE, ast.Lt, ast.LtE]\n checker = AssertTrueFalseChecker(methods, comparisons)\n checker.visit(ast.parse(logical_line))\n if checker.error:\n yield start, 'H205: Use assert{Greater,Less}[Equal]'", "response": "r Check that self. assertGreaterEqual and self. 
assertLessEqual are used."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef hacking_no_cr(physical_line):\n pos = physical_line.find('\\r')\n if pos != -1 and pos == (len(physical_line) - 2):\n return (pos, \"H903: Windows style line endings not allowed in code\")", "response": "Check that we only use newlines not carriage returns."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndownload a dataset by owner and id.", "response": "def download_dataset(self, owner, id, **kwargs):\n \"\"\"\n Download dataset\n This endpoint will return a .zip containing all files within the dataset as originally uploaded. If you are interested retrieving clean data extracted from those files by data.world, check out `GET:/sql` and `GET:/sparql`.\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.download_dataset(owner, id, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)\n :param str id: Dataset unique identifier. For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. 
(required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.download_dataset_with_http_info(owner, id, **kwargs)\n else:\n (data) = self.download_dataset_with_http_info(owner, id, **kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef download_file(self, owner, id, file, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.download_file_with_http_info(owner, id, file, **kwargs)\n else:\n (data) = self.download_file_with_http_info(owner, id, file, **kwargs)\n return data", "response": "Download a file within the dataset."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef count(self, count):\n if count is None:\n raise ValueError(\"Invalid value for `count`, must not be `None`\")\n if count is not None and count < 0:\n raise ValueError(\"Invalid value for `count`, must be a value greater than or equal to `0`\")\n\n self._count = count", "response": "Sets the count of this PaginatedDatasetResults."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets the type of this WebAuthorization.", "response": "def type(self, type):\n \"\"\"\n Sets the type of this WebAuthorization.\n The authorization scheme. 
Usually this is \\\"Bearer\\\" but it could be other values like \\\"Token\\\" or \\\"Basic\\\" etc.\n\n :param type: The type of this WebAuthorization.\n :type: str\n \"\"\"\n if type is None:\n raise ValueError(\"Invalid value for `type`, must not be `None`\")\n if type is not None and len(type) > 50:\n raise ValueError(\"Invalid value for `type`, length must be less than or equal to `50`\")\n if type is not None and not re.search('[\\\\x21-\\\\x7E \\\\t]*', type):\n raise ValueError(\"Invalid value for `type`, must be a follow pattern or equal to `/[\\\\x21-\\\\x7E \\\\t]*/`\")\n\n self._type = type"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the credentials of this WebAuthorization.", "response": "def credentials(self, credentials):\n \"\"\"\n Sets the credentials of this WebAuthorization.\n The confidential portion of the `Authorization` header that follows the `type` field. This field is write-only. It is omitted by read operations. If authorization is required, the `credentials` value must be provided whenever a File Source is created or modified. 
An update to a dataset that does not change the File Source may omit the `credentials` field--the update will preserve the previous value.\n\n :param credentials: The credentials of this WebAuthorization.\n :type: str\n \"\"\"\n if credentials is not None and len(credentials) > 1024:\n raise ValueError(\"Invalid value for `credentials`, length must be less than or equal to `1024`\")\n if credentials is not None and len(credentials) < 1:\n raise ValueError(\"Invalid value for `credentials`, length must be greater than or equal to `1`\")\n if credentials is not None and not re.search('[\\\\x21-\\\\x7E \\\\t]*', credentials):\n raise ValueError(\"Invalid value for `credentials`, must be a follow pattern or equal to `/[\\\\x21-\\\\x7E \\\\t]*/`\")\n\n self._credentials = credentials"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef describe(self, resource=None):\n if resource is None:\n # Show simpler descriptor, omitting schema definitions\n simple_descriptor = copy.deepcopy(self._datapackage.descriptor)\n for resource in simple_descriptor['resources']:\n resource.pop('schema', None)\n return simple_descriptor\n else:\n return self.__resources[resource].descriptor", "response": "Describe dataset or resource within dataset."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nextracting raw data from resource", "response": "def _load_raw_data(self, resource_name):\n \"\"\"Extract raw data from resource\n\n :param resource_name:\n \"\"\"\n # Instantiating the resource again as a simple `Resource` ensures that\n # ``data`` will be returned as bytes.\n upcast_resource = datapackage.Resource(\n self.__resources[resource_name].descriptor,\n default_base_path=self.__base_path)\n return upcast_resource.data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nbuilding table structure from resource data", "response": "def _load_table(self, resource_name):\n 
\"\"\"Build table structure from resource data\n\n :param resource_name:\n \"\"\"\n tabular_resource = self.__tabular_resources[resource_name]\n\n try:\n # Sorting fields in the same order as they appear in the schema\n # is necessary for tables to be converted into pandas.DataFrame\n fields = []\n if 'schema' in tabular_resource.descriptor:\n fields = [f['name'] for f in\n tabular_resource.descriptor['schema']['fields']]\n elif len(tabular_resource.data) > 0:\n fields = tabular_resource.data[0].keys()\n\n return [order_columns_in_row(fields, row) for row in\n tabular_resource.data]\n except (SchemaValidationError, ValueError, TypeError) as e:\n warnings.warn(\n 'Unable to set column types automatically using {} schema. '\n 'Data types may need to be adjusted manually. '\n 'Error: {}'.format(resource_name, e))\n self.__invalid_schemas.append(resource_name)\n file_format = tabular_resource.descriptor['format']\n with Stream(io.BytesIO(self.raw_data[resource_name]),\n format=file_format, headers=1,\n scheme='stream', encoding='utf-8') as stream:\n return [OrderedDict(zip(stream.headers, row))\n for row in stream.iter()]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _load_dataframe(self, resource_name):\n try:\n import pandas\n except ImportError:\n raise RuntimeError('To enable dataframe support, '\n 'run \\'pip install datadotworld[pandas]\\'')\n\n tabular_resource = self.__tabular_resources[resource_name]\n field_dtypes = fields_to_dtypes(tabular_resource.descriptor['schema'])\n\n try:\n return pandas.read_csv(\n path.join(\n self.__base_path,\n tabular_resource.descriptor['path']),\n dtype=field_dtypes['other'],\n parse_dates=list(field_dtypes['dates'].keys()),\n infer_datetime_format=True)\n except ValueError as e:\n warnings.warn(\n 'Unable to set data frame dtypes automatically using {} '\n 'schema. Data types may need to be adjusted manually. 
'\n 'Error: {}'.format(resource_name, e))\n return pandas.read_csv(\n path.join(\n self.__base_path,\n tabular_resource.descriptor['path']))", "response": "Build pandas. DataFrame from resource data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nappending records to a stream.", "response": "def append_records(self, owner, id, stream_id, body, **kwargs):\n \"\"\"\n Append records to a stream.\n This endpoint appends JSON data to a stream associated with a dataset. Streams don't need to be created before you can append data to it. They will be created on-demand, the first time they are used. Multiple records can be appended at once by using JSON-L (`application/json-l`) as the request content type. Currently, data uploaded to a dataset via a stream is not immediatelly processed. Instead, it is processed automatically at least once a day or as a result of a \\\"Sync files\\\" endpoint invocation. Once processed, the contents of a stream will appear as part of the respective dataset as a `.jsonl` file (e.g. `my-stream` will produce a file named `my-stream.jsonl`).\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.append_records(owner, id, stream_id, body, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)\n :param str id: Dataset unique identifier. 
For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. (required)\n :param str stream_id: Stream unique identifier as defined by the user the first time the stream was used. Only lower case letters, numbers and dashes are allowed. Maximum length of 95 characters. (required)\n :param object body: (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.append_records_with_http_info(owner, id, stream_id, body, **kwargs)\n else:\n (data) = self.append_records_with_http_info(owner, id, stream_id, body, **kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_insight(self, project_owner, project_id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.create_insight_with_http_info(project_owner, project_id, **kwargs)\n else:\n (data) = self.create_insight_with_http_info(project_owner, project_id, **kwargs)\n return data", "response": "Create an insight\n Create a new insight.\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.create_insight(project_owner, project_id, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str project_owner: User name and unique identifier of the creator of a project. 
For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), `government` is the unique identifier of the owner. (required)\n :param str project_id: User name and unique identifier of the project. For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), `how-to-add-depth-to-your-data-with-the-us-census-acs` is the unique identifier of the project. (required)\n :param InsightCreateRequest body:\n :return: CreateInsightResponse\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_insight(self, project_owner, project_id, id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.delete_insight_with_http_info(project_owner, project_id, id, **kwargs)\n else:\n (data) = self.delete_insight_with_http_info(project_owner, project_id, id, **kwargs)\n return data", "response": "Delete an existing insight."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_insight(self, project_owner, project_id, id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.get_insight_with_http_info(project_owner, project_id, id, **kwargs)\n else:\n (data) = self.get_insight_with_http_info(project_owner, project_id, id, **kwargs)\n return data", "response": "Retrieve an insight\n Retrieve an insight.\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.get_insight(project_owner, project_id, id, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str project_owner: User name and unique identifier of the creator of a project. For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), `government` is the unique identifier of the owner. (required)\n :param str project_id: User name and unique identifier of the project. For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), `how-to-add-depth-to-your-data-with-the-us-census-acs` is the unique identifier of the project. (required)\n :param str id: Insight unique identifier. 
(required)\n :return: InsightSummaryResponse\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_insights_for_project(self, project_owner, project_id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.get_insights_for_project_with_http_info(project_owner, project_id, **kwargs)\n else:\n (data) = self.get_insights_for_project_with_http_info(project_owner, project_id, **kwargs)\n return data", "response": "Get the list of insights for a project."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef replace_insight(self, project_owner, project_id, id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.replace_insight_with_http_info(project_owner, project_id, id, **kwargs)\n else:\n (data) = self.replace_insight_with_http_info(project_owner, project_id, id, **kwargs)\n return data", "response": "Replace an insight\n Replace an insight.\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.replace_insight(project_owner, project_id, id, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str project_owner: User name and unique identifier of the creator of a project. For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), `government` is the unique identifier of the owner. (required)\n :param str project_id: User name and unique identifier of the project. 
For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), `how-to-add-depth-to-your-data-with-the-us-census-acs` is the unique identifier of the project. (required)\n :param str id: Insight unique identifier. (required)\n :param InsightPutRequest body:\n :return: SuccessMessage\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates an insight entry.", "response": "def update_insight(self, project_owner, project_id, id, **kwargs):\n \"\"\"\n Update an insight\n Update an insight. Note that only elements included in the request will be updated. All omitted elements will remain untouched.\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.update_insight(project_owner, project_id, id, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str project_owner: User name and unique identifier of the creator of a project. For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), `government` is the unique identifier of the owner. (required)\n :param str project_id: User name and unique identifier of the project. For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), `how-to-add-depth-to-your-data-with-the-us-census-acs` is the unique identifier of the project. 
(required)\n :param str id: Insight unique identifier. (required)\n :param InsightPatchRequest body:\n :return: SuccessMessage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.update_insight_with_http_info(project_owner, project_id, id, **kwargs)\n else:\n (data) = self.update_insight_with_http_info(project_owner, project_id, id, **kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the summary of this DatasetPatchRequest.", "response": "def summary(self, summary):\n \"\"\"\n Sets the summary of this DatasetPatchRequest.\n Long-form dataset summary (Markdown supported).\n\n :param summary: The summary of this DatasetPatchRequest.\n :type: str\n \"\"\"\n if summary is not None and len(summary) > 25000:\n raise ValueError(\"Invalid value for `summary`, length must be less than or equal to `25000`\")\n if summary is not None and len(summary) < 0:\n raise ValueError(\"Invalid value for `summary`, length must be greater than or equal to `0`\")\n\n self._summary = summary"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef license(self, license):\n allowed_values = [\"Public Domain\", \"PDDL\", \"CC-0\", \"CC-BY\", \"ODC-BY\", \"CC-BY-SA\", \"ODC-ODbL\", \"CC BY-NC\", \"CC BY-NC-SA\", \"Other\"]\n if license not in allowed_values:\n raise ValueError(\n \"Invalid value for `license` ({0}), must be one of {1}\"\n .format(license, allowed_values)\n )\n\n self._license = license", "response": "Sets the license of this DatasetPatchRequest."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef visibility(self, visibility):\n allowed_values = [\"OPEN\", \"PRIVATE\"]\n if visibility not in allowed_values:\n raise ValueError(\n \"Invalid value for `visibility` ({0}), must be one of {1}\"\n 
.format(visibility, allowed_values)\n )\n\n self._visibility = visibility", "response": "Sets the visibility of this DatasetPatchRequest."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef objective(self, objective):\n if objective is not None and len(objective) > 120:\n raise ValueError(\"Invalid value for `objective`, length must be less than or equal to `120`\")\n if objective is not None and len(objective) < 0:\n raise ValueError(\"Invalid value for `objective`, length must be greater than or equal to `0`\")\n\n self._objective = objective", "response": "Sets the objective of this ProjectPutRequest."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_linked_dataset(self, owner, id, linked_dataset_owner, linked_dataset_id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.add_linked_dataset_with_http_info(owner, id, linked_dataset_owner, linked_dataset_id, **kwargs)\n else:\n (data) = self.add_linked_dataset_with_http_info(owner, id, linked_dataset_owner, linked_dataset_id, **kwargs)\n return data", "response": "Add a linked dataset to a project."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_project(self, owner, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.create_project_with_http_info(owner, **kwargs)\n else:\n (data) = self.create_project_with_http_info(owner, **kwargs)\n return data", "response": "Create a project. This method creates a new project."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes a project and all data associated with it.", "response": "def delete_project(self, owner, id, **kwargs):\n \"\"\"\n Delete a project\n Permanently deletes a project and all data associated with it. 
This operation cannot be undone, although a new project may be created with the same id.\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.delete_project(owner, id, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str owner: User name and unique identifier of the creator of a project. For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), government is the unique identifier of the owner. (required)\n :param str id: Project unique identifier. For example, in the URL:[https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), how-to-add-depth-to-your-data-with-the-us-census-acs is the unique identifier of the project. 
(required)\n :return: SuccessMessage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.delete_project_with_http_info(owner, id, **kwargs)\n else:\n (data) = self.delete_project_with_http_info(owner, id, **kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_project(self, owner, id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.get_project_with_http_info(owner, id, **kwargs)\n else:\n (data) = self.get_project_with_http_info(owner, id, **kwargs)\n return data", "response": "Retrieve a project by its unique identifier."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef patch_project(self, owner, id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.patch_project_with_http_info(owner, id, **kwargs)\n else:\n (data) = self.patch_project_with_http_info(owner, id, **kwargs)\n return data", "response": "Update a project with the specified attributes."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remove_linked_dataset(self, owner, id, linked_dataset_owner, linked_dataset_id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.remove_linked_dataset_with_http_info(owner, id, linked_dataset_owner, linked_dataset_id, **kwargs)\n else:\n (data) = self.remove_linked_dataset_with_http_info(owner, id, linked_dataset_owner, linked_dataset_id, **kwargs)\n return data", "response": "Remove a linked dataset from a project."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef replace_project(self, owner, id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if 
kwargs.get('callback'):\n return self.replace_project_with_http_info(owner, id, **kwargs)\n else:\n (data) = self.replace_project_with_http_info(owner, id, **kwargs)\n return data", "response": "Replace a project with a given id, overwriting it completely."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nloading a dataset from the local filesystem if necessary.", "response": "def load_dataset(dataset_key, force_update=False, auto_update=False,\n profile='default', **kwargs):\n \"\"\"Load a dataset from the local filesystem, downloading it from data.world\n first, if necessary.\n\n This function returns an object of type `LocalDataset`. The object\n allows access to metadata via its `describe()` method and to all the data\n via three properties `raw_data`, `tables` and `dataframes`, all of which\n are mappings (dict-like structures).\n\n :param dataset_key: Dataset identifier, in the form of owner/id or of a url\n :type dataset_key: str\n :param force_update: Flag, indicating if a new copy of the dataset should\n be downloaded replacing any previously downloaded copy\n (Default value = False)\n :type force_update: bool\n :param auto_update: Flag, indicating that the dataset should be updated to\n the latest version\n :type auto_update: bool\n :param profile: Configuration profile (account) to use.\n (Default value = 'default')\n :type profile: str, optional\n :returns: The object representing the dataset\n :rtype: LocalDataset\n :raises RestApiError: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> dataset = dw.load_dataset('jonloyens/an-intro-to-dataworld-dataset')\n >>> list(dataset.dataframes)\n ['changelog', 'datadotworldbballstats', 'datadotworldbballteam']\n \"\"\"\n return _get_instance(profile, **kwargs). 
\\\n load_dataset(dataset_key,\n force_update=force_update,\n auto_update=auto_update)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef query(dataset_key, query, query_type='sql', profile='default',\n parameters=None, **kwargs):\n \"\"\"Query an existing dataset\n\n :param dataset_key: Dataset identifier, in the form of owner/id or of a url\n :type dataset_key: str\n :param query: SQL or SPARQL query\n :type query: str\n :param query_type: The type of the query. Must be either 'sql' or 'sparql'.\n (Default value = 'sql')\n :type query_type: {'sql', 'sparql'}, optional\n :param parameters: parameters to the query - if SPARQL query, this should\n be a dict containing named parameters, if SQL query, then this should\n be a list containing positional parameters. Boolean values will be\n converted to xsd:boolean, Integer values to xsd:integer, and other\n Numeric values to xsd:decimal. anything else is treated as a String\n literal (Default value = None)\n :type parameters: query parameters, optional\n :param profile: Configuration profile (account) to use.\n (Default value = 'default')\n :type profile: str, optional\n :returns: Object containing the results of the query\n :rtype: Results\n :raises RuntimeError: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> results = dw.query(\n ... 'jonloyens/an-intro-to-dataworld-dataset',\n ... 'SELECT * FROM `DataDotWorldBBallStats`, `DataDotWorldBBallTeam` '\n ... 
'WHERE DataDotWorldBBallTeam.Name = DataDotWorldBBallStats.Name')\n >>> df = results.dataframe\n >>> df.shape\n (8, 6)\n \"\"\"\n return _get_instance(profile, **kwargs).query(dataset_key, query,\n query_type=query_type,\n parameters=parameters,\n **kwargs)", "response": "Query an existing dataset with the specified query"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef open_remote_file(dataset_key, file_name, profile='default',\n mode='w', **kwargs):\n \"\"\"Open a remote file object that can be used to write to or read from\n a file in a data.world dataset\n\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :param file_name: The name of the file to open\n :type file_name: str\n :param mode: the mode for the file - must be 'w', 'wb', 'r', or 'rb' -\n indicating read/write ('r'/'w') and optionally \"binary\"\n handling of the file data. (Default value = 'w')\n :type mode: str, optional\n :param chunk_size: size of chunked bytes to return when reading streamed\n bytes in 'rb' mode\n :type chunk_size: int, optional\n :param decode_unicode: whether to decode textual responses as unicode when\n returning streamed lines in 'r' mode\n :type decode_unicode: bool, optional\n :param profile: (Default value = 'default')\n :param **kwargs:\n\n Examples\n --------\n >>> import datadotworld as dw\n >>>\n >>> # write a text file\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.txt') as w:\n ... w.write(\"this is a test.\")\n >>>\n >>> # write a jsonlines file\n >>> import json\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.jsonl') as w:\n ... json.dump({'foo':42, 'bar':\"A\"}, w)\n ... w.write(\"\\\\n\")\n ... json.dump({'foo':13, 'bar':\"B\"}, w)\n ... w.write(\"\\\\n\")\n >>>\n >>> # write a csv file\n >>> import csv\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.csv') as w:\n ... 
csvw = csv.DictWriter(w, fieldnames=['foo', 'bar'])\n ... csvw.writeheader()\n ... csvw.writerow({'foo':42, 'bar':\"A\"})\n ... csvw.writerow({'foo':13, 'bar':\"B\"})\n >>>\n >>> # write a pandas dataframe as a csv file\n >>> import pandas as pd\n >>> df = pd.DataFrame({'foo':[1,2,3,4],'bar':['a','b','c','d']})\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'dataframe.csv') as w:\n ... df.to_csv(w, index=False)\n >>>\n >>> # write a binary file\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.txt', mode='wb') as w:\n ... w.write(bytes([100,97,116,97,46,119,111,114,108,100]))\n >>>\n >>> # read a text file\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.txt', mode='r') as r:\n ... print(r.read())\n >>>\n >>> # read a csv file\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.csv', mode='r') as r:\n ... csvr = csv.DictReader(r)\n ... for row in csvr:\n ... print(row['column a'], row['column b'])\n >>>\n >>> # read a binary file\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test', mode='rb') as r:\n ... bytes = r.read()\n \"\"\"\n return _get_instance(profile, **kwargs).open_remote_file(\n dataset_key, file_name,\n mode=mode, **kwargs)", "response": "Open a remote file object that can be used to write to or read from a file in a data. 
world dataset."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef table(self):\n if self._table is None:\n self._table = list(self._iter_rows())\n\n return self._table", "response": "Build and cache a table from query results"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dataframe(self):\n\n if self._dataframe is None:\n try:\n import pandas as pd\n except ImportError:\n raise RuntimeError('To enable dataframe support, '\n 'run \\'pip install datadotworld[pandas]\\'')\n\n self._dataframe = pd.DataFrame.from_records(self._iter_rows(),\n coerce_float=True)\n\n return self._dataframe", "response": "Build and cache a dataframe from query results"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fields_to_dtypes(schema):\n datetime_types = ['date', 'datetime']\n datetime_fields = {\n f['name']: _TABLE_SCHEMA_DTYPE_MAPPING.get(f['type'], 'object')\n for f in schema['fields']\n if f['type'] in datetime_types}\n\n other_fields = {\n f['name']: _TABLE_SCHEMA_DTYPE_MAPPING.get(f['type'], 'object')\n for f in schema['fields']\n if f['type'] not in datetime_types}\n\n return {'dates': datetime_fields, 'other': other_fields}", "response": "Maps table schema field types to dtypes, separating date fields\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sanitize_resource_schema(r):\n if 'schema' in r.descriptor:\n r.descriptor['schema'] = _sanitize_schema(r.descriptor['schema'])\n\n return r", "response": "Sanitize resource schema for increased compatibility"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ninfers Table Schema from SPARQL results JSON", "response": "def infer_table_schema(sparql_results_json):\n \"\"\"Infer Table Schema from SPARQL results 
JSON\n\n SPARQL JSON Results Spec:\n https://www.w3.org/TR/2013/REC-sparql11-results-json-20130321\n\n :param sparql_results_json: SPARQL JSON results of a query\n :returns: A schema descriptor for the inferred schema\n :rtype: dict (json)\n \"\"\"\n if ('results' in sparql_results_json and\n 'bindings' in sparql_results_json['results'] and\n len(sparql_results_json['results']['bindings']) > 0):\n\n # SQL results include metadata, SPARQL results don't\n result_metadata = sparql_results_json.get('metadata', [])\n metadata_names = [item['name'] for item in result_metadata]\n result_vars = sparql_results_json['head']['vars']\n\n _verify_unique_names(result_vars, metadata_names)\n\n # SQL results require var name mapping, SPARQL results vars don't\n result_vars_mapping = dict(zip(\n result_vars, (metadata_names\n if metadata_names != []\n else result_vars)))\n\n homogeneous_types = _get_types_from_sample(\n result_vars, sparql_results_json)\n\n fields = []\n if homogeneous_types is None:\n for result_var in result_vars:\n fields.append({\n 'name': result_vars_mapping.get(result_var),\n 'type': 'string'\n })\n else:\n for index, var in enumerate(result_vars):\n field = {\n 'name': result_vars_mapping.get(var),\n 'type': infer_table_schema_type_from_rdf_term(\n homogeneous_types[var].get('type'),\n homogeneous_types[var].get('datatype')\n )}\n\n if 'datatype' in homogeneous_types.get(var):\n field['rdfType'] = homogeneous_types[var].get('datatype')\n\n term_metadata = (result_metadata[index]\n if result_metadata != [] else {})\n if 'description' in term_metadata:\n field['description'] = term_metadata['description']\n\n fields.append(field)\n\n return _sanitize_schema({'fields': fields})\n elif 'boolean' in sparql_results_json:\n # ASK query results\n return {'fields': [{'name': 'boolean', 'type': 'boolean'}]}\n else:\n warn('Unable to infer table schema from empty query results')\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function 
in Python 3 to\nensure columns appear in the same order for every row in table", "response": "def order_columns_in_row(fields, unordered_row):\n \"\"\"Ensure columns appear in the same order for every row in table\n\n :param fields:\n :param unordered_row:\n \"\"\"\n fields_idx = {f: pos for pos, f in enumerate(fields)}\n return OrderedDict(sorted(unordered_row.items(),\n key=lambda i: fields_idx[i[0]]))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_types_from_sample(result_vars, sparql_results_json):\n total_bindings = len(sparql_results_json['results']['bindings'])\n homogeneous_types = {}\n for result_var in result_vars:\n var_types = set()\n var_datatypes = set()\n for i in range(0, min(total_bindings, 10)):\n binding = sparql_results_json['results']['bindings'][i]\n rdf_term = binding.get(result_var)\n if rdf_term is not None: # skip missing values\n var_types.add(rdf_term.get('type'))\n var_datatypes.add(rdf_term.get('datatype'))\n if len(var_types) > 1 or len(var_datatypes) > 1:\n return None # Heterogeneous types\n else:\n homogeneous_types[result_var] = {\n 'type': var_types.pop() if var_types else None,\n 'datatype': var_datatypes.pop() if var_datatypes else None\n }\n\n return homogeneous_types", "response": "Return types if homogenous within sample"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\napplies supplier_func to each element in seq returns 1st not None", "response": "def _first_not_none(seq, supplier_func):\n \"\"\"Applies supplier_func to each element in seq, returns 1st not None\n\n :param seq: Sequence of object\n :type seq: iterable\n :param supplier_func: Function that extracts the desired value from\n elements in seq\n :type supplier_func: function\n \"\"\"\n for i in seq:\n obj = supplier_func(i)\n if obj is not None:\n return obj\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function 
in Python 3, explain what it does\ndef cli(ctx, profile):\n if ctx.obj is None:\n ctx.obj = {}\n ctx.obj['profile'] = profile\n pass", "response": "dw commands support working with multiple data.world accounts\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nusing this command to configure API tokens", "response": "def configure(obj, token):\n \"\"\"Use this command to configure API tokens\n \"\"\"\n config = obj.get('config') or FileConfig(obj['profile'])\n config.auth_token = token\n config.save()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef request_entity(self, request_entity):\n if request_entity is not None and len(request_entity) > 10000:\n raise ValueError(\"Invalid value for `request_entity`, length must be less than or equal to `10000`\")\n\n self._request_entity = request_entity", "response": "Sets the request_entity of this FileSourceCreateOrUpdateRequest."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef title(self, title):\n if title is None:\n raise ValueError(\"Invalid value for `title`, must not be `None`\")\n if title is not None and len(title) > 128:\n raise ValueError(\"Invalid value for `title`, length must be less than or equal to `128`\")\n if title is not None and len(title) < 1:\n raise ValueError(\"Invalid value for `title`, length must be greater than or equal to `1`\")\n\n self._title = title", "response": "Sets the title of this InsightPutRequest."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef sparql_get(self, owner, id, query, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.sparql_get_with_http_info(owner, id, query, **kwargs)\n else:\n (data) = self.sparql_get_with_http_info(owner, id, query, **kwargs)\n return data", "response": "SPARQL query for a specific 
resource."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nretrieve an existing dataset definition This method retrieves metadata about an existing dataset. :param dataset_key: Dataset identifier, in the form of owner/id :type dataset_key: str :returns: Dataset definition, with all attributes :rtype: dict :raises RestApiException: If a server error occurs Examples -------- >>> import datadotworld as dw >>> api_client = dw.api_client() >>> intro_dataset = api_client.get_dataset( ... 'jonloyens/an-intro-to-dataworld-dataset') # doctest: +SKIP >>> intro_dataset['title'] # doctest: +SKIP 'An Intro to data.world Dataset'", "response": "def get_dataset(self, dataset_key):\n \"\"\"Retrieve an existing dataset definition\n\n This method retrieves metadata about an existing dataset.\n\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :returns: Dataset definition, with all attributes\n :rtype: dict\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> intro_dataset = api_client.get_dataset(\n ... 
'jonloyens/an-intro-to-dataworld-dataset') # doctest: +SKIP\n >>> intro_dataset['title'] # doctest: +SKIP\n 'An Intro to data.world Dataset'\n \"\"\"\n try:\n return self._datasets_api.get_dataset(\n *(parse_dataset_key(dataset_key))).to_dict()\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a new dataset with the specified properties", "response": "def create_dataset(self, owner_id, **kwargs):\n \"\"\"Create a new dataset\n\n :param owner_id: Username of the owner of the new dataset\n :type owner_id: str\n :param title: Dataset title (will be used to generate dataset id on\n creation)\n :type title: str\n :param description: Dataset description\n :type description: str, optional\n :param summary: Dataset summary markdown\n :type summary: str, optional\n :param tags: Dataset tags\n :type tags: list, optional\n :param license: Dataset license\n :type license: {'Public Domain', 'PDDL', 'CC-0', 'CC-BY', 'ODC-BY',\n 'CC-BY-SA', 'ODC-ODbL', 'CC BY-NC', 'CC BY-NC-SA', 'Other'}\n :param visibility: Dataset visibility\n :type visibility: {'OPEN', 'PRIVATE'}\n :param files: File name as dict, source URLs, description and labels()\n as properties\n :type files: dict, optional\n *Description and labels are optional*\n :returns: Newly created dataset key\n :rtype: str\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> url = 'http://www.acme.inc/example.csv'\n >>> api_client.create_dataset(\n ... 'username', title='Test dataset', visibility='PRIVATE',\n ... license='Public Domain',\n ... 
files={'dataset.csv':{'url': url}}) # doctest: +SKIP\n \"\"\"\n request = self.__build_dataset_obj(\n lambda: _swagger.DatasetCreateRequest(\n title=kwargs.get('title'),\n visibility=kwargs.get('visibility')),\n lambda name, url, expand_archive, description, labels:\n _swagger.FileCreateRequest(\n name=name,\n source=_swagger.FileSourceCreateRequest(\n url=url,\n expand_archive=expand_archive),\n description=description,\n labels=labels),\n kwargs)\n try:\n (_, _, headers) = self._datasets_api.create_dataset_with_http_info(\n owner_id, request, _return_http_data_only=False)\n if 'Location' in headers:\n return headers['Location']\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate an existing dataset", "response": "def update_dataset(self, dataset_key, **kwargs):\n \"\"\"Update an existing dataset\n\n :param description: Dataset description\n :type description: str, optional\n :param summary: Dataset summary markdown\n :type summary: str, optional\n :param tags: Dataset tags\n :type tags: list, optional\n :param license: Dataset license\n :type license: {'Public Domain', 'PDDL', 'CC-0', 'CC-BY', 'ODC-BY',\n 'CC-BY-SA', 'ODC-ODbL', 'CC BY-NC', 'CC BY-NC-SA', 'Other'}\n :param visibility: Dataset visibility\n :type visibility: {'OPEN', 'PRIVATE'}, optional\n :param files: File names and source URLs to add or update\n :type files: dict, optional\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> api_client.update_dataset(\n ... 'username/test-dataset',\n ... 
tags=['demo', 'datadotworld']) # doctest: +SKIP\n \"\"\"\n request = self.__build_dataset_obj(\n lambda: _swagger.DatasetPatchRequest(),\n lambda name, url, expand_archive, description, labels:\n _swagger.FileCreateOrUpdateRequest(\n name=name,\n source=_swagger.FileSourceCreateOrUpdateRequest(\n url=url,\n expand_archive=expand_archive)\n if url is not None else None,\n description=description,\n labels=labels),\n kwargs)\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n try:\n self._datasets_api.patch_dataset(owner_id, dataset_id, request)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef replace_dataset(self, dataset_key, **kwargs):\n request = self.__build_dataset_obj(\n lambda: _swagger.DatasetPutRequest(\n title=kwargs.get('title'),\n visibility=kwargs.get('visibility')\n ),\n lambda name, url, expand_archive, description, labels:\n _swagger.FileCreateRequest(\n name=name,\n source=_swagger.FileSourceCreateRequest(\n url=url,\n expand_archive=expand_archive),\n description=description,\n labels=labels),\n kwargs)\n\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n try:\n self._datasets_api.replace_dataset(owner_id, dataset_id, request)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "This method replaces an existing dataset in datadotworld."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_dataset(self, dataset_key):\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n try:\n self._datasets_api.delete_dataset(owner_id, dataset_id)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Deletes a dataset and all associated data\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds or update dataset files linked to source URLs", "response": "def 
add_files_via_url(self, dataset_key, files={}):\n \"\"\"Add or update dataset files linked to source URLs\n\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :param files: Dict containing the name of files and metadata\n Uses file name as a dict containing File description, labels and\n source URLs to add or update (Default value = {})\n *description and labels are optional.*\n :type files: dict\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> url = 'http://www.acme.inc/example.csv'\n >>> api_client = dw.api_client()\n >>> api_client.add_files_via_url(\n ... 'username/test-dataset',\n ... {'example.csv': {\n ... 'url': url,\n ... 'labels': ['raw data'],\n ... 'description': 'file description'}}) # doctest: +SKIP\n \"\"\"\n file_requests = [_swagger.FileCreateOrUpdateRequest(\n name=file_name,\n source=_swagger.FileSourceCreateOrUpdateRequest(\n url=file_info['url'],\n expand_archive=file_info.get('expand_archive',\n False)),\n description=file_info.get('description'),\n labels=file_info.get('labels'),\n ) for file_name, file_info in files.items()]\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n try:\n self._datasets_api.add_files_by_source(\n owner_id, dataset_id,\n _swagger.FileBatchUpdateRequest(files=file_requests))\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntrigger synchronization process to update all dataset files linked to a specific dataset.", "response": "def sync_files(self, dataset_key):\n \"\"\"Trigger synchronization process to update all dataset files linked to\n source URLs.\n\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> 
api_client.sync_files('username/test-dataset') # doctest: +SKIP\n \"\"\"\n try:\n self._datasets_api.sync(*(parse_dataset_key(dataset_key)))\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupload files from the local filesystem to a dataset", "response": "def upload_files(self, dataset_key, files, files_metadata={}, **kwargs):\n \"\"\"Upload dataset files\n\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :param files: The list of names/paths for files stored in the\n local filesystem\n :type files: list of str\n :param expand_archives: Boolean value to indicate files should be\n expanded upon upload\n :type expand_archives: bool, optional\n :param files_metadata: Dict containing the name of files and metadata\n Uses file name as a dict containing File description, labels and\n source URLs to add or update\n :type files_metadata: dict, optional\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> api_client.upload_files(\n ... 'username/test-dataset',\n ... 
['/my/local/example.csv']) # doctest: +SKIP\n \"\"\"\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n try:\n self._uploads_api.upload_files(owner_id, dataset_id, files,\n **kwargs)\n if files_metadata:\n self.update_dataset(dataset_key, files=files_metadata)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef upload_file(self, dataset_key, name, file_metadata={}, **kwargs):\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n try:\n self._uploads_api.upload_file(owner_id, dataset_id, name, **kwargs)\n if file_metadata:\n self.update_dataset(dataset_key, files=file_metadata)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Uploads one file to a dataset"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete files in a dataset", "response": "def delete_files(self, dataset_key, names):\n \"\"\"Delete dataset file(s)\n\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :param names: The list of names for files to be deleted\n :type names: list of str\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> api_client.delete_files(\n ... 
'username/test-dataset', ['example.csv']) # doctest: +SKIP\n \"\"\"\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n try:\n self._datasets_api.delete_files_and_sync_sources(\n owner_id, dataset_id, names)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef download_datapackage(self, dataset_key, dest_dir):\n if path.isdir(dest_dir):\n raise ValueError('dest_dir must be a new directory, '\n 'but {} already exists'.format(dest_dir))\n\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n url = \"{0}://{1}/datapackage/{2}/{3}\".format(\n self._protocol, self._download_host, owner_id, dataset_id)\n headers = {\n 'User-Agent': _user_agent(),\n 'Authorization': 'Bearer {0}'.format(self._config.auth_token)\n }\n\n try:\n response = requests.get(url, headers=headers, stream=True)\n response.raise_for_status()\n except requests.RequestException as e:\n raise RestApiError(cause=e)\n\n unzip_dir = path.join(self._config.tmp_dir, str(uuid.uuid4()))\n os.makedirs(unzip_dir)\n\n zip_file = path.join(unzip_dir, 'dataset.zip')\n\n with open(zip_file, 'wb') as f:\n for data in response.iter_content(chunk_size=4096):\n f.write(data)\n\n zip_obj = zipfile.ZipFile(zip_file)\n zip_obj.extractall(path=unzip_dir)\n\n # Find where datapackage.json is within expanded files\n unzipped_descriptor = glob.glob(\n '{}/**/datapackage.json'.format(unzip_dir))\n if not unzipped_descriptor:\n raise RuntimeError(\n 'Zip file did not contain a datapackage manifest.')\n\n unzipped_dir = path.dirname(unzipped_descriptor[0])\n\n shutil.move(unzipped_dir, dest_dir)\n shutil.rmtree(unzip_dir, ignore_errors=True)\n\n return path.join(dest_dir, 'datapackage.json')", "response": "Download and unzip a dataset's datapackage."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nretrieves data for authenticated user", 
"response": "def get_user_data(self):\n \"\"\"Retrieve data for authenticated user\n\n :returns: User data, with all attributes\n :rtype: dict\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> user_data = api_client.get_user_data() # doctest: +SKIP\n >>> user_data['display_name'] # doctest: +SKIP\n 'Name User'\n \"\"\"\n try:\n return self._user_api.get_user_data().to_dict()\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fetch_contributing_projects(self, **kwargs):\n try:\n return self._user_api.fetch_contributing_projects(\n **kwargs).to_dict()\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Fetch projects that the currently authenticated user contributes to."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nexecute SQL queries against a dataset.", "response": "def sql(self, dataset_key, query, desired_mimetype='application/json',\n **kwargs):\n \"\"\"Executes SQL queries against a dataset via POST\n\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :param query: SQL query\n :type query: str\n :param include_table_schema: Flag indicating whether to include table\n schema in the response\n :type include_table_schema: bool\n :returns: file object that can be used in file parsers and\n data handling modules.\n :rtype: file-like object\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> api_client.sql('username/test-dataset', 'query') # doctest: +SKIP\n \"\"\"\n api_client = self._build_api_client(\n default_mimetype_header_accept=desired_mimetype)\n sql_api = kwargs.get('sql_api_mock', _swagger.SqlApi(api_client))\n owner_id, dataset_id = 
parse_dataset_key(dataset_key)\n try:\n response = sql_api.sql_post(\n owner_id, dataset_id, query, _preload_content=False, **kwargs)\n return six.BytesIO(response.data)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nexecutes SPARQL queries against a dataset.", "response": "def sparql(self, dataset_key, query,\n desired_mimetype='application/sparql-results+json', **kwargs):\n \"\"\"Executes SPARQL queries against a dataset via POST\n\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :param query: SPARQL query\n :type query: str\n :returns: file object that can be used in file parsers and\n data handling modules.\n :rtype: file object\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> api_client.sparql('username/test-dataset',\n ... 
query) # doctest: +SKIP\n \"\"\"\n api_client = self._build_api_client(\n default_mimetype_header_accept=desired_mimetype)\n sparql_api = kwargs.get('sparql_api_mock',\n _swagger.SparqlApi(api_client))\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n try:\n response = sparql_api.sparql_post(\n owner_id, dataset_id, query, _preload_content=False, **kwargs)\n return six.BytesIO(response.data)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef download_dataset(self, dataset_key):\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n try:\n return self._download_api.download_dataset(owner_id, dataset_id)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Download all files within the dataset."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef append_records(self, dataset_key, stream_id, body):\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n try:\n return self._streams_api.append_records(owner_id, dataset_id,\n stream_id, body)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Append records to a stream."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nretrieves an existing project This method retrieves metadata about an existing project :param project_key: Project identifier, in the form of owner/id :type project_key: str :returns: Project definition, with all attributes :rtype: dict :raises RestApiException: If a server error occurs Examples -------- >>> import datadotworld as dw >>> api_client = dw.api_client() >>> intro_project = api_client.get_project( ... 'jonloyens/' ... 'an-example-project-that-shows-what-to-put-in-data-world' ... 
) # doctest: +SKIP >>> intro_project['title'] # doctest: +SKIP 'An Example Project that Shows What To Put in data.world'", "response": "def get_project(self, project_key):\n \"\"\"Retrieve an existing project\n\n This method retrieves metadata about an existing project\n\n :param project_key: Project identifier, in the form of owner/id\n :type project_key: str\n :returns: Project definition, with all attributes\n :rtype: dict\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> intro_project = api_client.get_project(\n ... 'jonloyens/'\n ... 'an-example-project-that-shows-what-to-put-in-data-world'\n ... ) # doctest: +SKIP\n >>> intro_project['title'] # doctest: +SKIP\n 'An Example Project that Shows What To Put in data.world'\n \"\"\"\n try:\n owner_id, project_id = parse_dataset_key(project_key)\n return self._projects_api.get_project(owner_id,\n project_id).to_dict()\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_project(self, owner_id, **kwargs):\n request = self.__build_project_obj(\n lambda: _swagger.ProjectCreateRequest(\n title=kwargs.get('title'),\n visibility=kwargs.get('visibility')\n ),\n lambda name, url, description, labels:\n _swagger.FileCreateRequest(\n name=name,\n source=_swagger.FileSourceCreateRequest(url=url),\n description=description,\n labels=labels), kwargs)\n try:\n (_, _, headers) = self._projects_api.create_project_with_http_info(\n owner_id, body=request, _return_http_data_only=False)\n if 'Location' in headers:\n return headers['Location']\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Creates a new project with the given properties"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update_project(self, 
project_key, **kwargs):\n request = self.__build_project_obj(\n lambda: _swagger.ProjectPatchRequest(),\n lambda name, url, description, labels:\n _swagger.FileCreateOrUpdateRequest(\n name=name,\n source=_swagger.FileSourceCreateOrUpdateRequest(url=url),\n description=description,\n labels=labels),\n kwargs)\n owner_id, project_id = parse_dataset_key(project_key)\n try:\n return self._projects_api.patch_project(owner_id,\n project_id,\n body=request)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Update an existing project with the given key."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef replace_project(self, project_key, **kwargs):\n request = self.__build_project_obj(\n lambda: _swagger.ProjectCreateRequest(\n title=kwargs.get('title'),\n visibility=kwargs.get('visibility')\n ),\n lambda name, url, description, labels:\n _swagger.FileCreateRequest(\n name=name,\n source=_swagger.FileSourceCreateRequest(url=url),\n description=description,\n labels=labels),\n kwargs)\n try:\n project_owner_id, project_id = parse_dataset_key(project_key)\n self._projects_api.replace_project(project_owner_id,\n project_id,\n body=request)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Replaces an existing project, completely overwriting its attributes with the given values."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlink a project to an existing dataset", "response": "def add_linked_dataset(self, project_key, dataset_key):\n \"\"\"Link project to an existing dataset\n\n This method links a dataset to a project\n\n :param project_key: Project 
identifier, in the form of owner/id\n :type project_key: str\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> linked_dataset = api_client.add_linked_dataset(\n ... 'username/test-project',\n ... 'username/test-dataset') # doctest: +SKIP\n \"\"\"\n try:\n project_owner_id, project_id = parse_dataset_key(project_key)\n dataset_owner_id, dataset_id = parse_dataset_key(dataset_key)\n self._projects_api.add_linked_dataset(project_owner_id,\n project_id,\n dataset_owner_id,\n dataset_id)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_insight(self, project_key, insight_id, **kwargs):\n try:\n project_owner, project_id = parse_dataset_key(project_key)\n return self._insights_api.get_insight(project_owner,\n project_id,\n insight_id,\n **kwargs).to_dict()\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Retrieve an insight entry"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_insights_for_project(self, project_key, **kwargs):\n try:\n project_owner, project_id = parse_dataset_key(project_key)\n return self._insights_api.get_insights_for_project(project_owner,\n project_id,\n **kwargs)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Get the list of insights for a project."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a new Insight object.", "response": "def create_insight(self, project_key, **kwargs):\n \"\"\"Create a new insight\n\n :param project_key: Project identifier, in the form of\n projectOwner/projectid\n :type project_key: str\n :param title: Insight 
title\n :type title: str\n :param description: Insight description.\n :type description: str, optional\n :param image_url: If image-based, the URL of the image\n :type image_url: str\n :param embed_url: If embed-based, the embeddable URL\n :type embed_url: str\n :param source_link: Permalink to source code or platform this insight\n was generated with. Allows others to replicate the steps originally\n used to produce the insight.\n :type source_link: str, optional\n :param data_source_links: One or more permalinks to the data sources\n used to generate this insight. Allows others to access the data\n originally used to produce the insight.\n :type data_source_links: array\n :returns: Insight with message and uri object\n :rtype: object\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> api_client.create_insight(\n ... 'projectOwner/projectid', title='Test insight',\n ... image_url='url') # doctest: +SKIP\n \"\"\"\n request = self.__build_insight_obj(\n lambda: _swagger.InsightCreateRequest(\n title=kwargs.get('title'),\n body=_swagger.InsightBody(\n image_url=kwargs.get('image_url'),\n embed_url=kwargs.get('embed_url'),\n markdown_body=kwargs.get('markdown_body')\n )\n ), kwargs)\n project_owner, project_id = parse_dataset_key(project_key)\n try:\n (_, _, headers) = self._insights_api.create_insight_with_http_info(\n project_owner,\n project_id,\n body=request,\n _return_http_data_only=False)\n if 'Location' in headers:\n return headers['Location']\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreplace an insight. :param project_key: Project identifier, in the form of projectOwner/projectid :type project_key: str :param insight_id: Insight unique identifier. :type insight_id: str :param title: Insight title :type title: str :param description: Insight description. 
:type description: str, optional :param image_url: If image-based, the URL of the image :type image_url: str :param embed_url: If embed-based, the embeddable URL :type embed_url: str :param source_link: Permalink to source code or platform this insight was generated with. Allows others to replicate the steps originally used to produce the insight. :type source_link: str, optional :param data_source_links: One or more permalinks to the data sources used to generate this insight. Allows others to access the data originally used to produce the insight. :type data_source_links: array :returns: message object :rtype: object :raises RestApiException: If a server error occurs Examples -------- >>> import datadotworld as dw >>> api_client = dw.api_client() >>> api_client.replace_insight( ... 'projectOwner/projectid', ... '1230-9324-3424242442', ... embed_url='url', ... title='Test insight') # doctest: +SKIP", "response": "def replace_insight(self, project_key, insight_id, **kwargs):\n \"\"\"Replace an insight.\n\n :param project_key: Project identifier, in the form of\n projectOwner/projectid\n :type project_key: str\n :param insight_id: Insight unique identifier.\n :type insight_id: str\n :param title: Insight title\n :type title: str\n :param description: Insight description.\n :type description: str, optional\n :param image_url: If image-based, the URL of the image\n :type image_url: str\n :param embed_url: If embed-based, the embeddable URL\n :type embed_url: str\n :param source_link: Permalink to source code or platform this insight\n was generated with. Allows others to replicate the steps originally\n used to produce the insight.\n :type source_link: str, optional\n :param data_source_links: One or more permalinks to the data sources\n used to generate this insight. 
Allows others to access the data\n originally used to produce the insight.\n :type data_source_links: array\n :returns: message object\n :rtype: object\n :raises RestApiException: If a server error occurs\n\n Examples\n --------\n >>> import datadotworld as dw\n >>> api_client = dw.api_client()\n >>> api_client.replace_insight(\n ... 'projectOwner/projectid',\n ... '1230-9324-3424242442',\n ... embed_url='url',\n ... title='Test insight') # doctest: +SKIP\n \"\"\"\n request = self.__build_insight_obj(\n lambda: _swagger.InsightPutRequest(\n title=kwargs.get('title'),\n body=_swagger.InsightBody(\n image_url=kwargs.get('image_url'),\n embed_url=kwargs.get('embed_url'),\n markdown_body=kwargs.get('markdown_body')\n )\n ), kwargs)\n project_owner, project_id = parse_dataset_key(project_key)\n try:\n self._insights_api.replace_insight(project_owner,\n project_id,\n insight_id,\n body=request)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update_insight(self, project_key, insight_id, **kwargs):\n request = self.__build_insight_obj(\n lambda: _swagger.InsightPatchRequest(), kwargs)\n project_owner, project_id = parse_dataset_key(project_key)\n try:\n self._insights_api.update_insight(project_owner,\n project_id,\n insight_id, body=request)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Update an insight entry."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_insight(self, project_key, insight_id):\n projectOwner, projectId = parse_dataset_key(project_key)\n try:\n self._insights_api.delete_insight(projectOwner,\n projectId,\n insight_id)\n except _swagger.rest.ApiException as e:\n raise RestApiError(cause=e)", "response": "Delete an existing insight."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following 
Python 3 code\ndef parse_dataset_key(dataset_key):\n match = re.match(DATASET_KEY_PATTERN, dataset_key)\n if not match:\n raise ValueError('Invalid dataset key. Key must include user and '\n 'dataset names, separated by a slash (i.e. user/dataset).')\n return match.groups()", "response": "Parses a dataset key and returns the owner and dataset id."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_keys(cls, keys, loader_func, type_hint=None):\n return cls({k: LazyLoadedValue(\n lambda k=k: loader_func(k), type_hint=type_hint) for k in keys})", "response": "Factory method for LazyLoadedDict with a list of keys."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef name(self, name):\n if name is None:\n raise ValueError(\"Invalid value for `name`, must not be `None`\")\n if name is not None and len(name) > 128:\n raise ValueError(\"Invalid value for `name`, length must be less than or equal to `128`\")\n if name is not None and len(name) < 1:\n raise ValueError(\"Invalid value for `name`, length must be greater than or equal to `1`\")\n if name is not None and not re.search('^[^\\/]+$', name):\n raise ValueError(\"Invalid value for `name`, must be a follow pattern or equal to `/^[^\\/]+$/`\")\n\n self._name = name", "response": "Sets the name of this FileCreateOrUpdateRequest."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef labels(self, labels):\n allowed_values = [\"raw data\", \"documentation\", \"visualization\", \"clean data\", \"script\", \"report\"]\n if not set(labels).issubset(set(allowed_values)):\n raise ValueError(\n \"Invalid values for `labels` [{0}], must be a subset of [{1}]\"\n .format(\", \".join(map(str, set(labels)-set(allowed_values))),\n \", \".join(map(str, allowed_values)))\n )\n\n self._labels = labels", "response": "Sets the labels of this FileCreateOrUpdateRequest."} 
{"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the user of this WebCredentials.", "response": "def user(self, user):\n \"\"\"\n Sets the user of this WebCredentials.\n The name of the account to login to.\n\n :param user: The user of this WebCredentials.\n :type: str\n \"\"\"\n if user is None:\n raise ValueError(\"Invalid value for `user`, must not be `None`\")\n if user is not None and len(user) > 1024:\n raise ValueError(\"Invalid value for `user`, length must be less than or equal to `1024`\")\n\n self._user = user"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef password(self, password):\n if password is not None and len(password) > 1024:\n raise ValueError(\"Invalid value for `password`, length must be less than or equal to `1024`\")\n\n self._password = password", "response": "Sets the password of this WebCredentials."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the owner of this OauthTokenReference.", "response": "def owner(self, owner):\n \"\"\"\n Sets the owner of this OauthTokenReference.\n User name of the owner of the OAuth token within data.world.\n\n :param owner: The owner of this OauthTokenReference.\n :type: str\n \"\"\"\n if owner is None:\n raise ValueError(\"Invalid value for `owner`, must not be `None`\")\n if owner is not None and len(owner) > 31:\n raise ValueError(\"Invalid value for `owner`, length must be less than or equal to `31`\")\n if owner is not None and len(owner) < 3:\n raise ValueError(\"Invalid value for `owner`, length must be greater than or equal to `3`\")\n if owner is not None and not re.search('[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]', owner):\n raise ValueError(\"Invalid value for `owner`, must be a follow pattern or equal to `/[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]/`\")\n\n self._owner = owner"} {"SOURCE": "codesearchnet", "instruction": "How would you code 
a function in Python 3 to\nset the site of this OauthTokenReference.", "response": "def site(self, site):\n \"\"\"\n Sets the site of this OauthTokenReference.\n\n :param site: The site of this OauthTokenReference.\n :type: str\n \"\"\"\n if site is None:\n raise ValueError(\"Invalid value for `site`, must not be `None`\")\n if site is not None and len(site) > 255:\n raise ValueError(\"Invalid value for `site`, length must be less than or equal to `255`\")\n if site is not None and len(site) < 3:\n raise ValueError(\"Invalid value for `site`, length must be greater than or equal to `3`\")\n if site is not None and not re.search('(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?\\\\.)+[a-z]{2,}(?:@[a-z0-9](?:[-.](?=[a-z0-9])|[a-z0-9]){0,29})?', site):\n raise ValueError(\"Invalid value for `site`, must be a follow pattern or equal to `/(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?\\\\.)+[a-z]{2,}(?:@[a-z0-9](?:[-.](?=[a-z0-9])|[a-z0-9]){0,29})?/`\")\n\n self._site = site"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef sql_get(self, owner, id, query, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.sql_get_with_http_info(owner, id, query, **kwargs)\n else:\n (data) = self.sql_get_with_http_info(owner, id, query, **kwargs)\n return data", "response": "This endpoint executes a SQL query against a dataset."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the url of this FileSourceCreateRequest.", "response": "def url(self, url):\n \"\"\"\n Sets the url of this FileSourceCreateRequest.\n Source URL of file. 
Must be an http or https URL.\n\n :param url: The url of this FileSourceCreateRequest.\n :type: str\n \"\"\"\n if url is None:\n raise ValueError(\"Invalid value for `url`, must not be `None`\")\n if url is not None and len(url) > 4096:\n raise ValueError(\"Invalid value for `url`, length must be less than or equal to `4096`\")\n if url is not None and len(url) < 1:\n raise ValueError(\"Invalid value for `url`, length must be greater than or equal to `1`\")\n if url is not None and not re.search('^https?:.*', url):\n raise ValueError(\"Invalid value for `url`, must be a follow pattern or equal to `/^https?:.*/`\")\n\n self._url = url"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nuploading a file to a dataset.", "response": "def upload_file(self, owner, id, name, **kwargs):\n \"\"\"\n Upload file\n Upload one file at a time to a dataset. This endpoint expects requests of type `application/octet-stream`. For example, assuming that you want to upload a local file named `file1.csv` to a hypothetical dataset `https://data.world/awesome-user/awesome-dataset` and choose its name on data.world to be `better-name.csv`, this is what the cURL command would look like. ```bash curl \\\\ -H \\\"Authorization: Bearer \\\" \\\\ -X PUT -H \\\"Content-Type: application/octet-stream\\\" \\\\ --data-binary @file1.csv \\\\ https://api.data.world/v0/uploads/awesome-user/awesome-dataset/files/better-name.csv ``` This method of upload is typically not supported by Swagger clients. Other HTTP clients can be used to supply the contents of the file directly in the body of the request.\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.upload_file(owner, id, name, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)\n :param str id: Dataset unique identifier. For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. (required)\n :param str name: File name and unique identifier within dataset. (required)\n :param bool expand_archive: Indicates whether a compressed file should be expanded upon upload.\n :return: SuccessMessage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.upload_file_with_http_info(owner, id, name, **kwargs)\n else:\n (data) = self.upload_file_with_http_info(owner, id, name, **kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef upload_files(self, owner, id, file, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.upload_files_with_http_info(owner, id, file, **kwargs)\n else:\n (data) = self.upload_files_with_http_info(owner, id, file, **kwargs)\n return data", "response": "Upload multiple files at once to a dataset."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
query(self, dataset_key, query, query_type=\"sql\", parameters=None):\n # TODO Move network request to RestApiClient\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n params = {\n \"query\": query\n }\n if parameters and query_type == \"sparql\":\n # if SPARQL, then the parameters should be a Mapping containing\n # named parameters\n params[\"parameters\"] = \",\".join(\n [\"{}={}\".format(k, convert_to_sparql_literal(parameters[k]))\n for k in parameters.keys()])\n elif parameters and query_type == \"sql\":\n # if SQL, then the parameters should be an array with positional\n # parameters, need to unwind them to $data_world_paramN for each\n # 0-indexed position N\n parameters = {\"$data_world_param{}\".format(i): x\n for i, x in enumerate(parameters)}\n params[\"parameters\"] = \",\".join([\"{}={}\".format(\n k, convert_to_sparql_literal(parameters[k]))\n for k in parameters.keys()])\n url = \"{0}://{1}/{2}/{3}/{4}\".format(self._protocol, self._query_host,\n query_type, owner_id, dataset_id)\n headers = {\n 'User-Agent': _user_agent(),\n 'Accept': 'application/sparql-results+json',\n 'Authorization': 'Bearer {0}'.format(self._config.auth_token)\n }\n response = requests.get(url, params=params, headers=headers)\n if response.status_code == 200:\n return QueryResults(response.json())\n raise RuntimeError(\n 'Error executing query: {}'.format(response.content))", "response": "Query an existing dataset with the specified query"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nload a dataset from the local filesystem.", "response": "def load_dataset(self, dataset_key, force_update=False, auto_update=False):\n \"\"\"Load a dataset from the local filesystem, downloading it from\n data.world first, if necessary.\n\n This function returns an object of type `LocalDataset`. 
The object\n allows access to metadata via its `describe()` method and to all the\n data via three properties `raw_data`, `tables` and `dataframes`, all\n of which are mappings (dict-like structures).\n\n :param dataset_key: Dataset identifier, in the form of owner/id or of\n a url\n :type dataset_key: str\n :param force_update: Flag, indicating if a new copy of the dataset\n should be downloaded replacing any previously downloaded copy\n (Default value = False)\n :type force_update: bool\n :param auto_update: Flag, indicating that the dataset should be updated to the\n latest version\n :type auto_update: bool\n :returns: The object representing the dataset\n :rtype: LocalDataset\n :raises RestApiError: If a server error occurs\n \"\"\"\n owner_id, dataset_id = parse_dataset_key(dataset_key)\n cache_dir = path.join(self._config.cache_dir, owner_id, dataset_id,\n 'latest')\n backup_dir = None\n if path.isdir(cache_dir) and force_update:\n backup_dir = path.join(self._config.cache_dir, owner_id,\n dataset_id, 'backup')\n move_cache_dir_to_backup_dir(backup_dir, cache_dir)\n\n descriptor_file = path.join(cache_dir, 'datapackage.json')\n if not path.isfile(descriptor_file):\n try:\n descriptor_file = self.api_client.download_datapackage(\n dataset_key, cache_dir)\n except RestApiError as e:\n if backup_dir is not None:\n shutil.move(backup_dir, cache_dir)\n warn('Unable to download datapackage ({}). 
'\n 'Loading previously saved version.'.format(e.reason))\n else:\n raise\n else:\n try:\n dataset_info = self.api_client.get_dataset(dataset_key)\n except RestApiError as e:\n return LocalDataset(descriptor_file)\n\n last_modified = datetime.strptime(dataset_info['updated'],\n '%Y-%m-%dT%H:%M:%S.%fZ')\n if (last_modified > datetime.utcfromtimestamp(\n path.getmtime(str(descriptor_file)))):\n if auto_update:\n try:\n backup_dir = path.join(self._config.cache_dir,\n owner_id, dataset_id,\n 'backup')\n move_cache_dir_to_backup_dir(backup_dir,\n cache_dir)\n descriptor_file = self.api_client. \\\n download_datapackage(dataset_key, cache_dir)\n except RestApiError as e:\n if backup_dir is not None:\n shutil.move(backup_dir, cache_dir)\n warn('Unable to auto update datapackage ({}). '\n 'Loading previously saved version.'\n .format(e.reason))\n else:\n raise\n else:\n filterwarnings('always',\n message='You are using an outdated copy')\n warn('You are using an outdated copy of {}. '\n 'If you wish to use the latest version, call this '\n 'function with the argument '\n 'auto_update=True or '\n 'force_update=True'.format(dataset_key))\n\n if backup_dir is not None:\n shutil.rmtree(backup_dir, ignore_errors=True)\n\n return LocalDataset(descriptor_file)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef open_remote_file(self, dataset_key, file_name,\n mode='w', **kwargs):\n \"\"\"Open a remote file object that can be used to write to or read from\n a file in a data.world dataset\n\n :param dataset_key: Dataset identifier, in the form of owner/id\n :type dataset_key: str\n :param file_name: The name of the file to open\n :type file_name: str\n :param mode: the mode for the file - must be 'w', 'wb', 'r', or 'rb' -\n indicating read/write ('r'/'w') and optionally \"binary\"\n handling of the file data. 
(Default value = 'w')\n :type mode: str, optional\n :param chunk_size: size of chunked bytes to return when reading\n streamed bytes in 'rb' mode\n :type chunk_size: int, optional\n :param decode_unicode: whether to decode textual responses as unicode\n when returning streamed lines in 'r' mode\n :type decode_unicode: bool, optional\n :param **kwargs:\n\n Examples\n --------\n >>> import datadotworld as dw\n >>>\n >>> # write a text file\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.txt') as w:\n ... w.write(\"this is a test.\")\n >>>\n >>> # write a jsonlines file\n >>> import json\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.jsonl') as w:\n ... json.dump({'foo':42, 'bar':\"A\"}, w)\n ... w.write(\"\\\\n\")\n ... json.dump({'foo':13, 'bar':\"B\"}, w)\n ... w.write(\"\\\\n\")\n >>>\n >>> # write a csv file\n >>> import csv\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.csv') as w:\n ... csvw = csv.DictWriter(w, fieldnames=['foo', 'bar'])\n ... csvw.writeheader()\n ... csvw.writerow({'foo':42, 'bar':\"A\"})\n ... csvw.writerow({'foo':13, 'bar':\"B\"})\n >>>\n >>> # write a pandas dataframe as a csv file\n >>> import pandas as pd\n >>> df = pd.DataFrame({'foo':[1,2,3,4],'bar':['a','b','c','d']})\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'dataframe.csv') as w:\n ... df.to_csv(w, index=False)\n >>>\n >>> # write a binary file\n >>> with dw.open_remote_file('username/test-dataset',\n >>> 'test.txt', mode='wb') as w:\n ... w.write(bytes([100,97,116,97,46,119,111,114,108,100]))\n >>>\n >>> # read a text file\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.txt', mode='r') as r:\n ... print(r.read())\n >>>\n >>> # read a csv file\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test.csv', mode='r') as r:\n ... csvr = csv.DictReader(r)\n ... for row in csvr:\n ... 
print(row['column a'], row['column b'])\n >>>\n >>> # read a binary file\n >>> with dw.open_remote_file('username/test-dataset',\n ... 'test', mode='rb') as r:\n ... bytes = r.read()\n \"\"\"\n try:\n return RemoteFile(self._config, dataset_key, file_name,\n mode=mode, **kwargs)\n except Exception as e:\n raise RestApiError(cause=e)", "response": "Open a remote file object that can be used to read from or write to a file in a data.world dataset."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_files_by_source(self, owner, id, body, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.add_files_by_source_with_http_info(owner, id, body, **kwargs)\n else:\n (data) = self.add_files_by_source_with_http_info(owner, id, body, **kwargs)\n return data", "response": "Add files to a dataset by source."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new dataset. This method creates a new dataset and returns the response.", "response": "def create_dataset(self, owner, body, **kwargs):\n \"\"\"\n Create a dataset\n Create a new dataset.\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.create_dataset(owner, body, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. 
(required)\n :param DatasetCreateRequest body: (required)\n :return: CreateDatasetResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.create_dataset_with_http_info(owner, body, **kwargs)\n else:\n (data) = self.create_dataset_with_http_info(owner, body, **kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeleting a dataset and all data associated with it.", "response": "def delete_dataset(self, owner, id, **kwargs):\n \"\"\"\n Delete a dataset\n Permanently deletes a dataset and all data associated with it. This operation cannot be undone, although a new dataset may be created with the same id.\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.delete_dataset(owner, id, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)\n :param str id: Dataset unique identifier. For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. 
(required)\n :return: SuccessMessage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.delete_dataset_with_http_info(owner, id, **kwargs)\n else:\n (data) = self.delete_dataset_with_http_info(owner, id, **kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_file_and_sync_source(self, owner, id, name, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.delete_file_and_sync_source_with_http_info(owner, id, name, **kwargs)\n else:\n (data) = self.delete_file_and_sync_source_with_http_info(owner, id, name, **kwargs)\n return data", "response": "Delete a single file from a dataset and sync source."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes one or more files from a dataset and sync sources.", "response": "def delete_files_and_sync_sources(self, owner, id, name, **kwargs):\n \"\"\"\n Delete files\n Delete one or more files from a dataset by their name, including files added via URL. **Batching** Note that the `name` parameter can be included multiple times in the query string, once for each file that is to be deleted together in a single request.\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.delete_files_and_sync_sources(owner, id, name, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str owner: User name and unique identifier of the creator of a dataset or project. 
For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)\n :param str id: Dataset unique identifier. For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. (required)\n :param list[str] name: Names of files to be deleted. Multiple can be provided in a single request by repeating the query string parameter name as many times as necessary. (required)\n :return: SuccessMessage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.delete_files_and_sync_sources_with_http_info(owner, id, name, **kwargs)\n else:\n (data) = self.delete_files_and_sync_sources_with_http_info(owner, id, name, **kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_dataset(self, owner, id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.get_dataset_with_http_info(owner, id, **kwargs)\n else:\n (data) = self.get_dataset_with_http_info(owner, id, **kwargs)\n return data", "response": "Retrieve a dataset by user name and unique identifier."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates an existing dataset.", "response": "def patch_dataset(self, owner, id, body, **kwargs):\n \"\"\"\n Update a dataset\n Update an existing dataset. Note that only elements or files included in the request will be updated. All omitted elements or files will remain untouched.\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.patch_dataset(owner, id, body, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)\n :param str id: Dataset unique identifier. For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. (required)\n :param DatasetPatchRequest body: (required)\n :return: SuccessMessage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.patch_dataset_with_http_info(owner, id, body, **kwargs)\n else:\n (data) = self.patch_dataset_with_http_info(owner, id, body, **kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a dataset with a given id or completely rewrite it.", "response": "def replace_dataset(self, owner, id, body, **kwargs):\n \"\"\"\n Create / Replace a dataset\n Create a dataset with a given id or completely rewrite the dataset, including any previously added files, if one already exists with the given id.\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.replace_dataset(owner, id, body, callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. (optional)\n :param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)\n :param str id: Dataset unique identifier. For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. (required)\n :param DatasetPutRequest body: (required)\n :return: SuccessMessage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.replace_dataset_with_http_info(owner, id, body, **kwargs)\n else:\n (data) = self.replace_dataset_with_http_info(owner, id, body, **kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef sync_via_get(self, owner, id, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.sync_via_get_with_http_info(owner, id, **kwargs)\n else:\n (data) = self.sync_via_get_with_http_info(owner, id, **kwargs)\n return data", "response": "Syncs all files within a dataset with the given id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef fetch_contributing_datasets(self, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n 
return self.fetch_contributing_datasets_with_http_info(**kwargs)\n else:\n (data) = self.fetch_contributing_datasets_with_http_info(**kwargs)\n return data", "response": "Fetch all the datasets that the currently authenticated user has access to."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef fetch_contributing_projects(self, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.fetch_contributing_projects_with_http_info(**kwargs)\n else:\n (data) = self.fetch_contributing_projects_with_http_info(**kwargs)\n return data", "response": "List projects that the currently authenticated user has access to."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfetch datasets that the currently authenticated user has access to.", "response": "def fetch_datasets(self, **kwargs):\n \"\"\"\n List datasets as owner\n Fetch datasets that the currently authenticated user has access to because he or she is the owner of the dataset.\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.fetch_datasets(callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. 
(optional)\n :param str limit: Maximum number of items to include in a page of results.\n :param str next: Token from previous result page to be used when requesting a subsequent page.\n :return: PaginatedDatasetResults\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.fetch_datasets_with_http_info(**kwargs)\n else:\n (data) = self.fetch_datasets_with_http_info(**kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef fetch_liked_datasets(self, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.fetch_liked_datasets_with_http_info(**kwargs)\n else:\n (data) = self.fetch_liked_datasets_with_http_info(**kwargs)\n return data", "response": "List datasets liked by the currently authenticated user."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nlisting projects liked by the currently authenticated user.", "response": "def fetch_liked_projects(self, **kwargs):\n \"\"\"\n List liked projects\n Fetch projects that the currently authenticated user likes.\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please define a `callback` function\n to be invoked when receiving the response.\n >>> def callback_function(response):\n >>> pprint(response)\n >>>\n >>> thread = api.fetch_liked_projects(callback=callback_function)\n\n :param callback function: The callback function\n for asynchronous request. 
(optional)\n :return: PaginatedProjectResults\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.fetch_liked_projects_with_http_info(**kwargs)\n else:\n (data) = self.fetch_liked_projects_with_http_info(**kwargs)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fetch_projects(self, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.fetch_projects_with_http_info(**kwargs)\n else:\n (data) = self.fetch_projects_with_http_info(**kwargs)\n return data", "response": "List projects owned by the currently authenticated user."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_user_data(self, **kwargs):\n kwargs['_return_http_data_only'] = True\n if kwargs.get('callback'):\n return self.get_user_data_with_http_info(**kwargs)\n else:\n (data) = self.get_user_data_with_http_info(**kwargs)\n return data", "response": "Get user data for the currently authenticated user."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef owner(self, owner):\n if owner is None:\n raise ValueError(\"Invalid value for `owner`, must not be `None`\")\n if owner is not None and not re.search('[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]', owner):\n raise ValueError(\"Invalid value for `owner`, must be a follow pattern or equal to `/[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]/`\")\n\n self._owner = owner", "response": "Sets the owner of this LinkedDatasetCreateOrUpdateRequest."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwriting the given value to the stream", "response": "def write(self, value):\n \"\"\"write the given value to the stream - if the object is a bytearray,\n write it as-is - otherwise, convert the object to a 
string with\n `str()` and write the UTF-8 bytes\n\n :param value: the value to write\n :type value: str or bytearray\n :raises TypeError: if the type of the value provided does not match\n the mode in which the file was opened.\n :raises NotImplementedError: if the mode of the file is not one of the\n supported values (currently only \"writing\" modes for files are\n supported - leaving the option to implement \"read\" modes open for\n future work)\n \"\"\"\n if 'w' == self._mode and isinstance(value, str):\n self._queue.put(value.encode('utf-8'))\n elif self._mode in {'w', 'wb'}:\n if isinstance(value, (bytes, bytearray)):\n self._queue.put(value)\n else:\n raise TypeError(\n \"a string or bytes object is required, not {}\".format(\n type(value)))\n else:\n raise IOError(\"File not opened in write mode.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read(self):\n if 'r' == self._mode:\n return self._read_response.text\n elif 'rb' == self._mode:\n return self._read_response.content\n else:\n raise IOError(\"File not opened in read mode.\")", "response": "read the contents of the file that has been opened in read mode"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nopens the file in read mode", "response": "def _open_for_read(self):\n \"\"\"open the file in read mode\"\"\"\n ownerid, datasetid = parse_dataset_key(self._dataset_key)\n response = requests.get(\n '{}/file_download/{}/{}/{}'.format(\n self._query_host, ownerid, datasetid, self._file_name),\n headers={\n 'User-Agent': self._user_agent,\n 'Authorization': 'Bearer {}'.format(\n self._config.auth_token)\n }, stream=True)\n try:\n response.raise_for_status()\n except Exception as e:\n raise RestApiError(cause=e)\n self._read_response = response"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nopens the file in write mode", "response": "def _open_for_write(self):\n 
\"\"\"open the file in write mode\"\"\"\n def put_request(body):\n \"\"\"\n\n :param body:\n \"\"\"\n ownerid, datasetid = parse_dataset_key(self._dataset_key)\n response = requests.put(\n \"{}/uploads/{}/{}/files/{}\".format(\n self._api_host, ownerid, datasetid, self._file_name),\n data=body,\n headers={\n 'User-Agent': self._user_agent,\n 'Authorization': 'Bearer {}'.format(\n self._config.auth_token)\n })\n self._response_queue.put(response)\n\n body = iter(self._queue.get, self._sentinel)\n self._thread = Thread(target=put_request, args=(body,))\n self._thread.start()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nserialize the context dictionary as JSON and return it as a HTTP Repsonse object.", "response": "def render_json_response(self, context, status=200):\n \"\"\"\n Serialize the context dictionary as JSON and return it\n as a HTTP Repsonse object. This method only allows\n serialization of simple objects (i.e. no model instances)\n \"\"\"\n json_context = json.dumps(context, cls=self.json_encoder_class)\n\n return HttpResponse(json_context,\n content_type=self.get_content_type(),\n status=status)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get(self, request, *args, **kwargs):\n context = self.chart_instance.chartjs_configuration(*args, **kwargs)\n return self.render_json_response(context)", "response": "This View only responds to GET requests. This View only responds to GET requests."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse(self, filename: str, language: Optional[str]=None,\n contents: Optional[str]=None, mode: Optional[ModeType]=None,\n timeout: Optional[int]=None) -> ResultContext:\n \"\"\"\n Queries the Babelfish server and receives the UAST response for the specified\n file.\n\n :param filename: The path to the file. 
Can be arbitrary if contents \\\n is not None.\n :param language: The programming language of the file. Refer to \\\n https://doc.bblf.sh/languages.html for the list of \\\n currently supported languages. None means autodetect.\n :param contents: The contents of the file. If None, it is read from \\\n filename.\n :param mode: UAST transformation mode.\n :param timeout: The request timeout in seconds.\n :type filename: str\n :type language: str\n :type contents: str\n :type timeout: float\n :return: UAST object.\n \"\"\"\n\n # TODO: handle syntax errors\n contents = self._get_contents(contents, filename)\n request = ParseRequest(filename=os.path.basename(filename),\n content=contents, mode=mode,\n language=self._scramble_language(language))\n response = self._stub_v2.Parse(request, timeout=timeout)\n return ResultContext(response)", "response": "Queries the Babelfish server and receives the UAST response for the specified file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nclose the gRPC channel and free the acquired resources.", "response": "def close(self) -> None:\n \"\"\"\n Close the gRPC channel and free the acquired resources. 
Using a closed client is\n not supported.\n \"\"\"\n self._channel.close()\n self._channel = self._stub_v1 = self._stub_v2 = None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef iterator(n: Union[Node, CompatNodeIterator, dict],\n order: TreeOrder = TreeOrder.PRE_ORDER) -> CompatNodeIterator:\n \"\"\"\n This function has the same signature as the pre-v3 iterator()\n call returning a compatibility CompatNodeIterator.\n \"\"\"\n\n if isinstance(n, CompatNodeIterator):\n return CompatNodeIterator(n._nodeit.iterate(order), only_nodes=True)\n elif isinstance(n, Node):\n nat_it = native_iterator(n.internal_node, order)\n return CompatNodeIterator(NodeIterator(nat_it), only_nodes=True)\n elif isinstance(n, dict):\n nat_it = native_iterator(n, order)\n return CompatNodeIterator(NodeIterator(nat_it, uast()), only_nodes=True)\n else:\n raise WrongTypeException(\n \"iterator on non node or iterator type (%s)\" % str(type(n))\n )", "response": "Returns an iterator over the nodes in the tree."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a CompatNodeIterator that iterates over nodes that match the query.", "response": "def filter(n: Node, query: str) -> CompatNodeIterator:\n \"\"\"\n This function has the same signature as the pre-v3 filter() returning a\n compatibility CompatNodeIterator.\n \"\"\"\n ctx = uast()\n return CompatNodeIterator(NodeIterator(ctx.filter(query, n.internal_node), ctx))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef filter_string(n: Node, query: str) -> str:\n return _scalariter2item(n, query, str)", "response": "Filter and ensure that the returned value is of string type."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef filter_bool(n: Node, query: str) -> bool:\n return _scalariter2item(n, query, bool)", "response": "Filter and ensure 
that the returned value is of type bool."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfilter and ensure that the returned value is of type int.", "response": "def filter_int(n: Node, query: str) -> int:\n \"\"\"\n Filter and ensure that the returned value is of type int.\n \"\"\"\n return _scalariter2item(n, query, int)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfilters and ensures that the returned value is of type float.", "response": "def filter_float(n: Node, query: str) -> float:\n \"\"\"\n Filter and ensure that the returned value is of type float.\n \"\"\"\n return _scalariter2item(n, query, float)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse the specified filename or contents and return a CompatParseResponse object.", "response": "def parse(self, filename: str, language: str = None, contents: str = None,\n timeout: float = None) -> CompatParseResponse:\n\n \"\"\"\n Parse the specified filename or contents and return a CompatParseResponse.\n \"\"\"\n\n return self._parse(filename, language, contents, timeout,\n Mode.Value('ANNOTATED'))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef filter(self, query: str) -> Optional['CompatNodeIterator']:\n if not self._last_node:\n return None\n\n return filter(self._last_node, query)", "response": "Filter the results using this iterator as base."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the properties of the current node in the iteration.", "response": "def properties(self) -> dict:\n \"\"\"\n Returns the properties of the current node in the iteration.\n \"\"\"\n if isinstance(self._last_node, dict):\n return self._last_node.keys()\n else:\n return {}"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite a list of tuples to a file", "response": "def 
write_xspf(f, tuples):\n \"\"\"send me a list of (artist,title,mp3_url)\"\"\"\n xml = XmlWriter(f, indentAmount=' ')\n xml.prolog()\n xml.start('playlist', { 'xmlns': 'http://xspf.org/ns/0/', 'version': '1' })\n xml.start('trackList')\n for tupe in tuples:\n xml.start('track')\n xml.elem('creator',tupe[0])\n xml.elem('title',tupe[1])\n xml.elem('location', tupe[2])\n xml.end()\n xml.end()\n xml.end()\n f.close()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef callm(method, param_dict, POST=False, socket_timeout=None, data=None):\n try:\n param_dict['api_key'] = config.ECHO_NEST_API_KEY\n param_list = []\n if not socket_timeout:\n socket_timeout = config.CALL_TIMEOUT\n\n for key,val in param_dict.iteritems():\n if isinstance(val, list):\n param_list.extend( [(key,subval) for subval in val] )\n elif val is not None:\n if isinstance(val, unicode):\n val = val.encode('utf-8')\n param_list.append( (key,val) )\n\n params = urllib.urlencode(param_list)\n\n orig_timeout = socket.getdefaulttimeout()\n socket.setdefaulttimeout(socket_timeout)\n\n if(POST):\n if (not method == 'track/upload') or ((method == 'track/upload') and 'url' in param_dict):\n \"\"\"\n this is a normal POST call\n \"\"\"\n url = 'http://%s/%s/%s/%s' % (config.API_HOST, config.API_SELECTOR,\n config.API_VERSION, method)\n\n if data is None:\n data = ''\n data = urllib.urlencode(data)\n data = \"&\".join([data, params])\n\n f = opener.open(url, data=data)\n else:\n \"\"\"\n upload with a local file is special, as the body of the request is the content of the file,\n and the other parameters stay on the URL\n \"\"\"\n url = '/%s/%s/%s?%s' % (config.API_SELECTOR, config.API_VERSION,\n method, params)\n\n if ':' in config.API_HOST:\n host, port = config.API_HOST.split(':')\n else:\n host = config.API_HOST\n port = 80\n\n if config.TRACE_API_CALLS:\n logger.info(\"%s/%s\" % (host+':'+str(port), url,))\n conn = httplib.HTTPConnection(host, port = 
port)\n conn.request('POST', url, body = data, headers = dict([('Content-Type', 'application/octet-stream')]+headers))\n f = conn.getresponse()\n\n else:\n \"\"\"\n just a normal GET call\n \"\"\"\n url = 'http://%s/%s/%s/%s?%s' % (config.API_HOST, config.API_SELECTOR, config.API_VERSION,\n method, params)\n\n f = opener.open(url)\n\n socket.setdefaulttimeout(orig_timeout)\n\n # try/except\n response_dict = get_successful_response(f)\n return response_dict\n\n except IOError, e:\n if hasattr(e, 'reason'):\n raise EchoNestIOError(error=e.reason)\n elif hasattr(e, 'code'):\n raise EchoNestIOError(code=e.code)\n else:\n raise", "response": "Call the api! \n Param_dict is a *regular* *python* *dictionary* so if you want to have multi-valued params\n put them in a list.\n \n ** note, if we require 2.6, we can get rid of this timeout munging."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef oauthgetm(method, param_dict, socket_timeout=None):\n try:\n import oauth2 # lazy import this so oauth2 is not a hard dep\n except ImportError:\n raise Exception(\"You must install the python-oauth2 library to use this method.\")\n\n \"\"\"\n Call the api! With Oauth! 
\n Param_dict is a *regular* *python* *dictionary* so if you want to have multi-valued params\n put them in a list.\n \n ** note, if we require 2.6, we can get rid of this timeout munging.\n \"\"\"\n def build_request(url):\n params = {\n 'oauth_version': \"1.0\",\n 'oauth_nonce': oauth2.generate_nonce(),\n 'oauth_timestamp': int(time.time())\n }\n consumer = oauth2.Consumer(key=config.ECHO_NEST_CONSUMER_KEY, secret=config.ECHO_NEST_SHARED_SECRET)\n params['oauth_consumer_key'] = config.ECHO_NEST_CONSUMER_KEY\n \n req = oauth2.Request(method='GET', url=url, parameters=params)\n signature_method = oauth2.SignatureMethod_HMAC_SHA1()\n req.sign_request(signature_method, consumer, None)\n return req\n \n param_dict['api_key'] = config.ECHO_NEST_API_KEY\n param_list = []\n if not socket_timeout:\n socket_timeout = config.CALL_TIMEOUT\n \n for key,val in param_dict.iteritems():\n if isinstance(val, list):\n param_list.extend( [(key,subval) for subval in val] )\n elif val is not None:\n if isinstance(val, unicode):\n val = val.encode('utf-8')\n param_list.append( (key,val) )\n\n params = urllib.urlencode(param_list)\n \n orig_timeout = socket.getdefaulttimeout()\n socket.setdefaulttimeout(socket_timeout)\n \"\"\"\n just a normal GET call\n \"\"\"\n url = 'http://%s/%s/%s/%s?%s' % (config.API_HOST, config.API_SELECTOR, config.API_VERSION, \n method, params)\n req = build_request(url)\n f = opener.open(req.to_url())\n \n socket.setdefaulttimeout(orig_timeout)\n \n # try/except\n response_dict = get_successful_response(f)\n return response_dict", "response": "This function is used to make a GET request to the API."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nattempts to replace postMultipart() with nearly-identical interface. (The files tuple no longer requires the filename, and we only return the response body.) 
Uses the urllib2_file.py originally from http://fabien.seisen.org which was also drawn heavily from http://code.activestate.com/recipes/146306/ . This urllib2_file.py is more desirable because of the chunked uploading from a file pointer (no need to read entire file into memory) and the ability to work from behind a proxy (due to its basis on urllib2).", "response": "def postChunked(host, selector, fields, files):\n \"\"\"\n Attempt to replace postMultipart() with nearly-identical interface.\n (The files tuple no longer requires the filename, and we only return\n the response body.) \n Uses the urllib2_file.py originally from \n http://fabien.seisen.org which was also drawn heavily from \n http://code.activestate.com/recipes/146306/ .\n \n This urllib2_file.py is more desirable because of the chunked \n uploading from a file pointer (no need to read entire file into \n memory) and the ability to work from behind a proxy (due to its \n basis on urllib2).\n \"\"\"\n params = urllib.urlencode(fields)\n url = 'http://%s%s?%s' % (host, selector, params)\n u = urllib2.urlopen(url, files)\n result = u.read()\n [fp.close() for (key, fp) in files]\n return result"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a catalog object with a given name. Does not check to see if the catalog already exists.", "response": "def create_catalog_by_name(name, T=\"general\"):\n \"\"\"\n Creates a catalog object, with a given name. 
Does not check to see if the catalog already exists.\n\n Create a catalog object like\n \"\"\"\n result = util.callm(\"catalog/create\", {}, POST=True, \n data={\"name\":name, \"type\":T})\n result = result['response']\n return Catalog(result['id'], **dict( (k,result[k]) for k in ('name', 'type')))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngrab a catalog by name", "response": "def get_catalog_by_name(name):\n \"\"\"\n Grabs a catalog by name, if its there on the api key.\n Otherwise, an error is thrown (mirroring the API)\n \"\"\"\n kwargs = {\n 'name' : name,\n }\n result = util.callm(\"%s/%s\" % ('catalog', 'profile'), kwargs)\n return Catalog(**util.fix(result['response']['catalog']))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of all catalogs created on this API key", "response": "def list_catalogs(results=30, start=0):\n \"\"\"\n Returns list of all catalogs created on this API key\n\n Args:\n\n Kwargs:\n results (int): An integer number of results to return\n\n start (int): An integer starting value for the result set\n\n Returns:\n A list of catalog objects\n\n Example:\n\n >>> catalog.list_catalogs()\n [, , ]\n >>>\n\n \"\"\"\n result = util.callm(\"%s/%s\" % ('catalog', 'list'), {'results': results, 'start': start})\n cats = [Catalog(**util.fix(d)) for d in result['response']['catalogs']]\n start = result['response']['start']\n total = result['response']['total']\n return ResultList(cats, start, total)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates a list of items in a single object", "response": "def update(self, items):\n \"\"\"\n Update a catalog object\n\n Args:\n items (list): A list of dicts describing update data and action codes (see api docs)\n\n Kwargs:\n\n Returns:\n A ticket id\n\n Example:\n\n >>> c = catalog.Catalog('my_songs', type='song')\n >>> items\n [{'action': 'update',\n 'item': 
{'artist_name': 'dAn ThE aUtOmAtOr',\n 'disc_number': 1,\n 'genre': 'Instrumental',\n 'item_id': '38937DDF04BC7FC4',\n 'play_count': 5,\n 'release': 'Bombay the Hard Way: Guns, Cars & Sitars',\n 'song_name': 'Inspector Jay From Dehli',\n 'track_number': 9,\n 'url': 'file://localhost/Users/tylerw/Music/iTunes/iTunes%20Media/Music/Dan%20the%20Automator/Bombay%20the%20Hard%20Way_%20Guns,%20Cars%20&%20Sitars/09%20Inspector%20Jay%20From%20Dehli.m4a'}}]\n >>> ticket = c.update(items)\n >>> ticket\n u'7dcad583f2a38e6689d48a792b2e4c96'\n >>> c.status(ticket)\n {u'ticket_status': u'complete', u'update_info': []}\n >>>\n\n \"\"\"\n post_data = {}\n items_json = json.dumps(items, default=dthandler)\n post_data['data'] = items_json\n\n response = self.post_attribute(\"update\", data=post_data)\n\n return response['ticket']"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread items from the catalog and returns them as ResultList objects.", "response": "def read_items(self, buckets=None, results=15, start=0,item_ids=None):\n \"\"\"\n Returns data from the catalog; also expanded for the requested buckets.\n This method is provided for backwards-compatibility\n\n Args:\n\n Kwargs:\n buckets (list): A list of strings specifying which buckets to retrieve\n\n results (int): An integer number of results to return\n\n start (int): An integer starting value for the result set\n\n Returns:\n A list of objects in the catalog; list contains additional attributes 'start' and 'total'\n\n Example:\n\n >>> c\n \n >>> c.read_items(results=1)\n []\n >>>\n \"\"\"\n warnings.warn(\"catalog.read_items() is depreciated. 
Please use catalog.get_item_dicts() instead.\")\n kwargs = {}\n kwargs['bucket'] = buckets or []\n kwargs['item_id'] = item_ids or []\n response = self.get_attribute(\"read\", results=results, start=start, **kwargs)\n rval = ResultList([])\n if item_ids:\n rval.start=0;\n rval.total=len(response['catalog']['items'])\n else:\n rval.start = response['catalog']['start']\n rval.total = response['catalog']['total']\n for item in response['catalog']['items']:\n new_item = None\n # song items\n if 'song_id' in item:\n item['id'] = item.pop('song_id')\n item['title'] = item.pop('song_name')\n request = item['request']\n new_item = song.Song(**util.fix(item))\n new_item.request = request\n # artist item\n elif 'artist_id' in item:\n item['id'] = item.pop('artist_id')\n item['name'] = item.pop('artist_name')\n request = item['request']\n new_item = artist.Artist(**util.fix(item))\n new_item.request = request\n # unresolved item\n else:\n new_item = item\n rval.append(new_item)\n return rval"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_item_dicts(self, buckets=None, results=15, start=0,item_ids=None):\n kwargs = {}\n kwargs['bucket'] = buckets or []\n kwargs['item_id'] = item_ids or []\n response = self.get_attribute(\"read\", results=results, start=start, **kwargs)\n rval = ResultList(response['catalog']['items'])\n if item_ids:\n rval.start=0;\n rval.total=len(response['catalog']['items'])\n else:\n rval.start = response['catalog']['start']\n rval.total = response['catalog']['total']\n return rval", "response": "Returns data from the catalog ; also expanded for the requested buckets"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_feed(self, buckets=None, since=None, results=15, start=0):\n kwargs = {}\n kwargs['bucket'] = buckets or []\n\tif since:\n\t\tkwargs['since']=since\n response = self.get_attribute(\"feed\", 
results=results, start=start, **kwargs)\n rval = ResultList(response['feed'])\n return rval", "response": "Get the feed for the given buckets."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngiving an audio file print out the artist title and some audio attributes of the song", "response": "def _show_one(audio_file):\n \"given an audio file, print out the artist, title and some audio attributes of the song\"\n print 'File: ', audio_file\n pytrack = track.track_from_filename(audio_file)\n print 'Artist: ', pytrack.artist if hasattr(pytrack, 'artist') else 'Unknown'\n print 'Title: ', pytrack.title if hasattr(pytrack, 'title') else 'Unknown'\n print 'Track ID: ', pytrack.id\n print 'Tempo: ', pytrack.tempo\n print 'Energy: %1.3f %s' % (pytrack.energy, _bar(pytrack.energy))\n if not pytrack.valence:\n # Track hasn't had latest attributes computed. Force an upload.\n pytrack = track.track_from_filename(audio_file, force_upload=True)\n print 'Valence: %1.3f %s' % (pytrack.valence, _bar(pytrack.valence)) \n print 'Acousticness: %1.3f %s' % (pytrack.acousticness, _bar(pytrack.acousticness))\n print"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nprint out the tempo for each audio file in the given directory", "response": "def show_attrs(directory):\n \"print out the tempo for each audio file in the given directory\"\n for f in os.listdir(directory):\n if _is_audio(f):\n path = os.path.join(directory, f)\n _show_one(path)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsearch for songs by name description style mood and results.", "response": "def search(title=None, artist=None, artist_id=None, combined=None, description=None, style=None, mood=None,\n results=None, start=None, max_tempo=None, min_tempo=None,\n max_duration=None, min_duration=None, max_loudness=None, min_loudness=None,\n artist_max_familiarity=None, artist_min_familiarity=None, artist_max_hotttnesss=None,\n 
artist_min_hotttnesss=None, song_max_hotttnesss=None, song_min_hotttnesss=None, mode=None,\n min_energy=None, max_energy=None, min_danceability=None, max_danceability=None,\n key=None, max_latitude=None, min_latitude=None, max_longitude=None, min_longitude=None,\n sort=None, buckets=None, limit=False, test_new_things=None, rank_type=None,\n artist_start_year_after=None, artist_start_year_before=None, artist_end_year_after=None,\n artist_end_year_before=None,song_type=None,min_song_currency=None,max_song_currency=None,\n min_song_discovery=None, max_song_discovery=None, max_acousticness=None, min_acousticness=None,\n max_liveness=None, min_liveness=None, max_speechiness=None, min_speechiness=None,\n max_valence=None, min_valence=None):\n \"\"\"Search for songs by name, description, or constraint.\n\n Args:\n\n Kwargs:\n title (str): the name of a song\n artist (str): the name of an artist\n artist_id (str): the artist_id\n combined (str): the artist name and song title\n description (str): A string describing the artist and song\n style (str): A string describing the style/genre of the artist and song\n mood (str): A string describing the mood of the artist and song\n results (int): An integer number of results to return\n max_acousticness (float): The max acousticness of song results\n min_acousticness (float): The min acousticness of song results\n max_tempo (float): The max tempo of song results\n min_tempo (float): The min tempo of song results\n max_duration (float): The max duration of song results\n min_duration (float): The min duration of song results\n max_liveness (float): The max liveness of song results\n min_liveness (float): The min liveness of song results\n max_loudness (float): The max loudness of song results\n min_loudness (float): The min loudness of song results\n max_speechiness (float): The max speechiness of song results \n min_speechiess (float): The min speechiness of song results\n max_valence (float): The max valence of song results\n 
min_valence (float): The min valence of song results\n artist_max_familiarity (float): A float specifying the max familiarity of artists to search for\n artist_min_familiarity (float): A float specifying the min familiarity of artists to search for\n artist_max_hotttnesss (float): A float specifying the max hotttnesss of artists to search for\n artist_min_hotttnesss (float): A float specifying the min hotttnesss of artists to search for\n song_max_hotttnesss (float): A float specifying the max hotttnesss of songs to search for\n song_min_hotttnesss (float): A float specifying the min hotttnesss of songs to search for\n max_energy (float): The max energy of song results\n min_energy (float): The min energy of song results\n max_danceability (float): The max danceability of song results\n min_danceability (float): The min danceability of song results\n mode (int): 0 or 1 (minor or major)\n key (int): 0-11 (c, c-sharp, d, e-flat, e, f, f-sharp, g, a-flat, a, b-flat, b)\n max_latitude (float): A float specifying the max latitude of artists to search for\n min_latitude (float): A float specifying the min latitude of artists to search for\n max_longitude (float): A float specifying the max longitude of artists to search for\n min_longitude (float): A float specifying the min longitude of artists to search for \n sort (str): A string indicating an attribute and order for sorting the results\n buckets (list): A list of strings specifying which buckets to retrieve\n limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets\n rank_type (str): A string denoting the desired ranking for description searches, either 'relevance' or 'familiarity'\n artist_start_year_before (int): Returned songs' artists will have started recording music before this year.\n artist_start_year_after (int): Returned songs' artists will have started recording music after this year.\n artist_end_year_before (int): Returned songs' artists will have 
stopped recording music before this year.\n artist_end_year_after (int): Returned songs's artists will have stopped recording music after this year.\n song_type (string): A string or list of strings specifiying the type of song to search for.\n\n Returns:\n A list of Song objects\n\n Example:\n\n >>> results = song.search(artist='shakira', title='she wolf', buckets=['id:7digital', 'tracks'], limit=True, results=1)\n >>> results\n []\n >>> results[0].get_tracks('7digital')[0]\n {u'catalog': u'7digital',\n u'foreign_id': u'7digital:track:7854109',\n u'id': u'TRTOBSE12903CACEC4',\n u'preview_url': u'http://previews.7digital.com/clips/34/7854109.clip.mp3',\n u'release_image': u'http://cdn.7static.com/static/img/sleeveart/00/007/081/0000708184_200.jpg'}\n >>> \n \"\"\"\n \n limit = str(limit).lower()\n kwargs = locals()\n kwargs['bucket'] = buckets\n del kwargs['buckets']\n \n result = util.callm(\"%s/%s\" % ('song', 'search'), kwargs)\n return [Song(**util.fix(s_dict)) for s_dict in result['response']['songs']]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef profile(ids=None, track_ids=None, buckets=None, limit=False):\n kwargs = {}\n\n if ids:\n if not isinstance(ids, list):\n ids = [ids]\n kwargs['id'] = ids\n\n if track_ids:\n if not isinstance(track_ids, list):\n track_ids = [track_ids]\n kwargs['track_id'] = track_ids\n\n buckets = buckets or []\n if buckets:\n kwargs['bucket'] = buckets\n\n if limit:\n kwargs['limit'] = 'true'\n \n result = util.callm(\"%s/%s\" % ('song', 'profile'), kwargs)\n return [Song(**util.fix(s_dict)) for s_dict in result['response']['songs']]", "response": "get the profiles for multiple songs at once"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_audio_summary(self, cache=True):\n if not (cache and ('audio_summary' in self.cache)):\n response = self.get_attribute('profile', bucket='audio_summary')\n if response['songs'] and 
'audio_summary' in response['songs'][0]:\n self.cache['audio_summary'] = response['songs'][0]['audio_summary']\n else:\n self.cache['audio_summary'] = {}\n return self.cache['audio_summary']", "response": "Get an audio summary of a song."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_song_hotttnesss(self, cache=True):\n if not (cache and ('song_hotttnesss' in self.cache)):\n response = self.get_attribute('profile', bucket='song_hotttnesss')\n self.cache['song_hotttnesss'] = response['songs'][0]['song_hotttnesss']\n return self.cache['song_hotttnesss']", "response": "Get our numerical description of how hottt a song currently is."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the types of a song.", "response": "def get_song_type(self, cache=True):\n \"\"\"Get the types of a song.\n \n Args:\n cache (boolean): A boolean indicating whether or not the cached value should be used\n (if available). 
Defaults to True.\n \n Returns:\n A list of strings, each representing a song type: 'christmas', for example.\n \n Example:\n >>> s = song.Song('SOQKVPH12A58A7AF4D')\n >>> s.song_type\n [u'christmas']\n >>> \n\n \"\"\" \n if not (cache and ('song_type' in self.cache)):\n response = self.get_attribute('profile', bucket='song_type')\n if 'song_type' in response['songs'][0]:\n self.cache['song_type'] = response['songs'][0]['song_type']\n else:\n self.cache['song_type'] = []\n return self.cache['song_type']"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_artist_hotttnesss(self, cache=True):\n if not (cache and ('artist_hotttnesss' in self.cache)):\n response = self.get_attribute('profile', bucket='artist_hotttnesss')\n self.cache['artist_hotttnesss'] = response['songs'][0]['artist_hotttnesss']\n return self.cache['artist_hotttnesss']", "response": "Get our numerical description of how hottt a song's artist currently is."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_artist_familiarity(self, cache=True):\n if not (cache and ('artist_familiarity' in self.cache)):\n response = self.get_attribute('profile', bucket='artist_familiarity')\n self.cache['artist_familiarity'] = response['songs'][0]['artist_familiarity']\n return self.cache['artist_familiarity']", "response": "Get our numerical estimation of how familiar a song currently is to the world."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the location of a song's artist.", "response": "def get_artist_location(self, cache=True):\n \"\"\"Get the location of a song's artist.\n \n Args:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). 
Defaults to True.\n \n Returns:\n An artist location object.\n \n Example:\n >>> s = song.Song('SOQKVPH12A58A7AF4D')\n >>> s.artist_location\n {u'latitude': 34.053489999999996, u'location': u'Los Angeles, CA', u'longitude': -118.24532000000001}\n >>> \n\n \"\"\"\n if not (cache and ('artist_location' in self.cache)):\n response = self.get_attribute('profile', bucket='artist_location')\n self.cache['artist_location'] = response['songs'][0]['artist_location']\n return self.cache['artist_location']"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the number of discovery times for a specific song.", "response": "def get_song_discovery(self, cache=True):\n \"\"\"\n Args:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). Defaults to True.\n\n Returns:\n A float representing a song's discovery rank.\n\n Example:\n >>> s = song.Song('SOQKVPH12A58A7AF4D')\n >>> s.get_song_discovery()\n 0.639626025843539\n >>> s.song_discovery\n 0.639626025843539\n >>>\n \"\"\"\n if not (cache and ('song_discovery' in self.cache)):\n response = self.get_attribute('profile', bucket='song_discovery')\n self.cache['song_discovery'] = response['songs'][0]['song_discovery']\n return self.cache['song_discovery']"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the songs s currency rank.", "response": "def get_song_currency(self, cache=True):\n \"\"\"\n Args:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). 
Defaults to True.\n \n Returns:\n A float representing a song's currency rank.\n \n Example:\n >>> s = song.Song('SOQKVPH12A58A7AF4D')\n >>> s.get_song_currency()\n 0.639626025843539\n >>> s.song_currency\n 0.639626025843539\n >>>\n \"\"\"\n if not (cache and ('song_currency' in self.cache)):\n response = self.get_attribute('profile', bucket='song_currency')\n self.cache['song_currency'] = response['songs'][0]['song_currency']\n return self.cache['song_currency']"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_tracks(self, catalog, cache=True):\n if not (cache and ('tracks' in self.cache) and (catalog in [td['catalog'] for td in self.cache['tracks']])):\n kwargs = {\n 'bucket':['tracks', 'id:%s' % catalog],\n }\n \n response = self.get_attribute('profile', **kwargs)\n if not 'tracks' in self.cache:\n self.cache['tracks'] = []\n # don't blow away the cache for other catalogs\n potential_tracks = response['songs'][0].get('tracks', [])\n existing_track_ids = [tr['foreign_id'] for tr in self.cache['tracks']]\n new_tds = filter(lambda tr: tr['foreign_id'] not in existing_track_ids, potential_tracks)\n self.cache['tracks'].extend(new_tds)\n return filter(lambda tr: tr['catalog']==util.map_idspace(catalog), self.cache['tracks'])", "response": "Get the tracks for a song given a catalog."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef basic(type='artist-radio', artist_id=None, artist=None, song_id=None, song=None, track_id=None, dmca=False,\n results=15, buckets=None, limit=False,genres=None,):\n \"\"\"Get a basic playlist\n \n Args:\n \n Kwargs:\n type (str): a string representing the playlist type ('artist-radio' or 'song-radio')\n \n artist_id (str): the artist_id to seed the playlist\n \n artist (str): the name of an artist to seed the playlist\n \n song_id (str): a song_id to seed the playlist\n \n song (str): the name of a song to seed the playlist\n 
\n track_id (str): the name of a track to seed the playlist\n \n dmca (bool): make the playlist dmca-compliant\n \n results (int): desired length of the playlist\n \n buckets (list): A list of strings specifying which buckets to retrieve\n \n limit (bool): Whether results should be restricted to any idspaces given in the buckets parameter\n \"\"\"\n\n limit = str(limit).lower()\n dmca = str(dmca).lower()\n\n kwargs = locals()\n kwargs['bucket'] = kwargs['buckets']\n del kwargs['buckets']\n kwargs['genre'] = kwargs['genres']\n del kwargs['genres']\n\n result = util.callm(\"%s/%s\" % ('playlist', 'basic'), kwargs)\n return [Song(**util.fix(s_dict)) for s_dict in result['response']['songs']]", "response": "Get a basic playlist with optional parameters"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef static(type='artist', artist_pick='song_hotttnesss-desc', variety=.5, artist_id=None, artist=None, song_id=None,\n track_id=None, description=None, style=None, mood=None, results=15, max_tempo=None, min_tempo=None,\n max_duration=None, min_duration=None, max_loudness=None, min_loudness=None, max_danceability=None,\n min_danceability=None, max_energy=None, min_energy=None, artist_max_familiarity=None,\n artist_min_familiarity=None, artist_max_hotttnesss=None, artist_min_hotttnesss=None,\n song_max_hotttnesss=None, song_min_hotttnesss=None, min_longitude=None, max_longitude=None,\n min_latitude=None, max_latitude=None, adventurousness=0.2, mode=None, key=None, buckets=None, sort=None,\n limit=False, seed_catalog=None, source_catalog=None, rank_type=None, test_new_things=None,\n artist_start_year_after=None, artist_start_year_before=None, artist_end_year_after=None,\n artist_end_year_before=None, dmca=False, distribution=None, song_type=None, genres=None):\n \"\"\"Get a static playlist\n \n Args:\n \n Kwargs:\n type (str): a string representing the playlist type ('artist', 'artist-radio', ...)\n \n 
artist_pick (str): How songs should be chosen for each artist\n \n variety (float): A number between 0 and 1 specifying the variety of the playlist\n \n artist_id (str): the artist_id\n \n artist (str): the name of an artist\n \n song_id (str): the song_id\n \n track_id (str): the track id\n \n description (str): A string describing the artist and song\n \n style (str): A string describing the style/genre of the artist and song\n \n mood (str): A string describing the mood of the artist and song\n \n results (int): An integer number of results to return\n \n max_tempo (float): The max tempo of song results\n \n min_tempo (float): The min tempo of song results\n \n max_duration (float): The max duration of song results\n \n min_duration (float): The min duration of song results\n \n max_loudness (float): The max loudness of song results\n \n min_loudness (float): The min loudness of song results\n \n artist_max_familiarity (float): A float specifying the max familiarity of artists to search for\n \n artist_min_familiarity (float): A float specifying the min familiarity of artists to search for\n \n artist_max_hotttnesss (float): A float specifying the max hotttnesss of artists to search for\n \n artist_min_hotttnesss (float): A float specifying the min hotttnesss of artists to search for\n \n song_max_hotttnesss (float): A float specifying the max hotttnesss of songs to search for\n \n song_min_hotttnesss (float): A float specifying the min hotttnesss of songs to search for\n \n max_energy (float): The max energy of song results\n \n min_energy (float): The min energy of song results\n \n max_danceability (float): The max danceability of song results\n \n min_danceability (float): The min danceability of song results\n \n mode (int): 0 or 1 (minor or major)\n \n key (int): 0-11 (c, c-sharp, d, e-flat, e, f, f-sharp, g, a-flat, a, b-flat, b)\n \n max_latitude (float): A float specifying the max latitude of artists to search for\n \n min_latitude (float): A float 
specifying the min latitude of artists to search for\n \n max_longitude (float): A float specifying the max longitude of artists to search for\n \n min_longitude (float): A float specifying the min longitude of artists to search for \n \n adventurousness (float): A float ranging from 0 for old favorites to 1.0 for unheard music according to a seed_catalog\n \n sort (str): A string indicating an attribute and order for sorting the results\n \n buckets (list): A list of strings specifying which buckets to retrieve\n \n limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets\n \n seed_catalog (str or Catalog): An Artist Catalog object or Artist Catalog id to use as a seed\n \n source_catalog (str or Catalog): A Catalog object or catalog id\n\n rank_type (str): A string denoting the desired ranking for description searches, either 'relevance' or 'familiarity' \n \n artist_start_year_before (int): Returned song's artists will have started recording music before this year.\n \n artist_start_year_after (int): Returned song's artists will have started recording music after this year.\n \n artist_end_year_before (int): Returned song's artists will have stopped recording music before this year.\n \n artist_end_year_after (int): Returned song's artists will have stopped recording music after this year.\n\n distribution (str): Affects the range of artists returned and how many songs each artist will have in the playlist relative to how similar they are to the seed. (wandering, focused)\n\n song_type (str): A string or list of strings of the type of songs allowed. 
The only valid song type at the moment is 'christmas'.\n Valid formats are 'song_type', 'song_type:true', 'song_type:false', or 'song_type:any'.\n\n Returns:\n A list of Song objects\n \n Example:\n \n >>> p = playlist.static(type='artist-radio', artist=['ida maria', 'florence + the machine'])\n >>> p\n [,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ]\n >>> \n\n \"\"\"\n limit = str(limit).lower()\n\n if seed_catalog and isinstance(seed_catalog, catalog.Catalog):\n seed_catalog = seed_catalog.id\n\n if source_catalog and isinstance(source_catalog, catalog.Catalog):\n source_catalog = source_catalog.id\n dmca = str(dmca).lower()\n kwargs = locals()\n kwargs['bucket'] = kwargs['buckets'] or []\n del kwargs['buckets']\n kwargs['genre'] = kwargs['genres']\n del kwargs['genres']\n\n result = util.callm(\"%s/%s\" % ('playlist', 'static'), kwargs)\n return [Song(**util.fix(s_dict)) for s_dict in result['response']['songs']]", "response": "Returns a new playlist with the specified type and optional parameters."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _track_from_response(result, timeout):\n response = result['response']\n status = response['track']['status'].lower()\n\n if status == 'pending':\n # Need to wait for async upload or analyze call to finish.\n result = _wait_for_pending_track(response['track']['id'], timeout)\n response = result['response']\n status = response['track']['status'].lower()\n\n if not status == 'complete':\n track_id = response['track']['id']\n if status == 'pending':\n raise Exception('%s: the operation didn\\'t complete before the timeout (%d secs)' %\n (track_id, timeout))\n else:\n raise Exception('%s: there was an error analyzing the track, status: %s' % (track_id, status))\n else:\n # track_properties starts as the response dictionary.\n track_properties = response['track']\n # 'id' and 'md5' are separated to construct the Track object.\n identifier = 
track_properties.pop('id')\n md5 = track_properties.pop('md5', None) # tracks from song api calls will not have an md5\n # Pop off the audio_summary dict and make those keys attributes\n # of the Track. This includes things like tempo, energy, and loudness.\n track_properties.update(track_properties.pop('audio_summary'))\n return Track(identifier, md5, track_properties)", "response": "Creates a Track object from a track API response, waiting for any pending upload or analyze call to complete."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nuploads an audio file or URL and returns a track object.", "response": "def _upload(param_dict, timeout, data):\n \"\"\"\n Calls upload either with a local audio file,\n or a url. Returns a track object.\n \"\"\"\n param_dict['format'] = 'json'\n param_dict['wait'] = 'true'\n param_dict['bucket'] = 'audio_summary'\n result = util.callm('track/upload', param_dict, POST = True, socket_timeout = 300, data = data)\n return _track_from_response(result, timeout)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a track object from a file-like object.", "response": "def track_from_file(file_object, filetype, timeout=DEFAULT_ASYNC_TIMEOUT, force_upload=False):\n \"\"\"\n Create a track object from a file-like object.\n\n NOTE: Does not create the detailed analysis for the Track. Call\n Track.get_analysis() for that.\n\n Args:\n file_object: a file-like Python object\n filetype: the file type. 
Supported types include mp3, ogg, wav, m4a, mp4, au\n force_upload: skip the MD5 shortcut path, force an upload+analysis\n Example:\n >>> f = open(\"Miaow-01-Tempered-song.mp3\", 'rb')\n >>> t = track.track_from_file(f, 'mp3')\n >>> t\n < Track >\n >>>\n \"\"\"\n if not force_upload:\n try:\n # Check if this file has already been uploaded.\n # This is much faster than uploading.\n md5 = hashlib.md5(file_object.read()).hexdigest()\n return track_from_md5(md5)\n except util.EchoNestAPIError:\n # Fall through to do a fresh upload.\n pass\n\n file_object.seek(0)\n return _track_from_data(file_object.read(), filetype, timeout)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef track_from_filename(filename, filetype = None, timeout=DEFAULT_ASYNC_TIMEOUT, force_upload=False):\n filetype = filetype or filename.split('.')[-1]\n file_object = open(filename, 'rb')\n result = track_from_file(file_object, filetype, timeout, force_upload)\n file_object.close()\n return result", "response": "Create a track object from a filename."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a track object from a public URL.", "response": "def track_from_url(url, timeout=DEFAULT_ASYNC_TIMEOUT):\n \"\"\"\n Create a track object from a public http URL.\n\n NOTE: Does not create the detailed analysis for the Track. Call\n Track.get_analysis() for that.\n\n Args:\n url: A string giving the URL to read from. 
This must be on a public machine accessible by HTTP.\n\n Example:\n >>> t = track.track_from_url(\"http://www.miaowmusic.com/mp3/Miaow-01-Tempered-song.mp3\")\n >>> t\n < Track >\n >>>\n\n \"\"\"\n param_dict = dict(url = url)\n return _upload(param_dict, timeout, data=None)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef track_from_id(identifier, timeout=DEFAULT_ASYNC_TIMEOUT):\n param_dict = dict(id = identifier)\n return _profile(param_dict, timeout)", "response": "Create a track object from an Echo Nest track ID."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef track_from_md5(md5, timeout=DEFAULT_ASYNC_TIMEOUT):\n param_dict = dict(md5 = md5)\n return _profile(param_dict, timeout)", "response": "Create a track object from an md5 hash."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_analysis(self):\n if self.analysis_url:\n try:\n # Try the existing analysis_url first. This expires shortly\n # after creation.\n try:\n json_string = urllib2.urlopen(self.analysis_url).read()\n except urllib2.HTTPError:\n # Probably the analysis_url link has expired. Refresh it.\n param_dict = dict(id = self.id)\n new_track = _profile(param_dict, DEFAULT_ASYNC_TIMEOUT)\n if new_track and new_track.analysis_url:\n self.analysis_url = new_track.analysis_url\n json_string = urllib2.urlopen(self.analysis_url).read()\n else:\n raise Exception(\"Failed to create track analysis.\")\n\n analysis = json.loads(json_string)\n analysis_track = analysis.pop('track', {})\n self.__dict__.update(analysis)\n self.__dict__.update(analysis_track)\n except Exception: #pylint: disable=W0702\n # No detailed analysis found.\n raise Exception(\"Failed to create track analysis.\")\n else:\n raise Exception(\"Failed to create track analysis.\")", "response": "Retrieve the detailed analysis for the track. 
Raises Exception if unable to create the detailed analysis."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the tempo for a song", "response": "def get_tempo(artist, title):\n \"gets the tempo for a song\"\n results = song.search(artist=artist, title=title, results=1, buckets=['audio_summary'])\n if len(results) > 0:\n return results[0].audio_summary['tempo']\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list(sandbox_name, results=15, start=0):\n result = util.callm(\"%s/%s\" % ('sandbox', 'list'), {'sandbox':sandbox_name, 'results': results, 'start': start})\n assets = result['response']['assets']\n start = result['response']['start']\n total = result['response']['total']\n\n return ResultList(assets, start, total)", "response": "Returns a list of all assets available in this sandbox"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsearching for artists by name description style mood and start.", "response": "def search(name=None, description=None, style=None, mood=None, start=0, \\\n results=15, buckets=None, limit=False, \\\n fuzzy_match=False, sort=None, max_familiarity=None, min_familiarity=None, \\\n max_hotttnesss=None, min_hotttnesss=None, test_new_things=None, rank_type=None, \\\n artist_start_year_after=None, artist_start_year_before=None,artist_end_year_after=None,artist_end_year_before=None):\n \"\"\"Search for artists by name, description, or constraint.\n \n Args:\n \n Kwargs:\n name (str): the name of an artist\n \n description (str): A string describing the artist\n \n style (str): A string describing the style/genre of the artist\n \n mood (str): A string describing the mood of the artist\n \n start (int): An integer starting value for the result set\n\n results (int): An integer number of results to return\n \n buckets (list): A list of strings specifying which buckets to retrieve\n \n 
limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets\n \n fuzzy_match (bool): A boolean indicating whether or not to search for similar sounding matches (only works with name)\n \n max_familiarity (float): A float specifying the max familiarity of artists to search for\n \n min_familiarity (float): A float specifying the min familiarity of artists to search for\n \n max_hotttnesss (float): A float specifying the max hotttnesss of artists to search for\n \n min_hotttnesss (float): A float specifying the min hotttnesss of artists to search for\n \n artist_start_year_before (int): Returned artists will have started recording music before this year.\n \n artist_start_year_after (int): Returned artists will have started recording music after this year.\n \n artist_end_year_before (int): Returned artists will have stopped recording music before this year.\n \n artist_end_year_after (int): Returned artists will have stopped recording music after this year.\n \n rank_type (str): A string denoting the desired ranking for description searches, either 'relevance' or 'familiarity'\n\n Returns:\n A list of Artist objects\n \n Example:\n \n >>> results = artist.search(name='t-pain')\n >>> results\n [, , , , , , ]\n >>> \n\n \"\"\"\n limit = str(limit).lower()\n fuzzy_match = str(fuzzy_match).lower()\n kwargs = locals()\n kwargs['bucket'] = buckets or []\n del kwargs['buckets']\n \"\"\"Search for artists\"\"\"\n result = util.callm(\"%s/%s\" % ('artist', 'search'), kwargs)\n return [Artist(**util.fix(a_dict)) for a_dict in result['response']['artists']]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef top_hottt(start=0, results=15, buckets = None, limit=False):\n buckets = buckets or []\n kwargs = {}\n if start:\n kwargs['start'] = start\n if results:\n kwargs['results'] = results\n if buckets:\n kwargs['bucket'] = buckets\n if limit:\n kwargs['limit'] 
= 'true'\n \n \"\"\"Get top hottt artists\"\"\"\n result = util.callm(\"%s/%s\" % ('artist', 'top_hottt'), kwargs)\n return [Artist(**util.fix(a_dict)) for a_dict in result['response']['artists']]", "response": "Return the current top hottt artists, optionally attaching the requested data buckets."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef top_terms(results=15):\n \n kwargs = {}\n if results:\n kwargs['results'] = results\n \n \"\"\"Get top terms\"\"\"\n result = util.callm(\"%s/%s\" % ('artist', 'top_terms'), kwargs)\n return result['response']['terms']", "response": "Get a list of the top overall terms."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef similar(names=None, ids=None, start=0, results=15, buckets=None, limit=False, max_familiarity=None, min_familiarity=None,\n max_hotttnesss=None, min_hotttnesss=None, seed_catalog=None,artist_start_year_before=None, \\\n artist_start_year_after=None,artist_end_year_before=None,artist_end_year_after=None):\n \"\"\"Return similar artists to this one\n \n Args:\n \n Kwargs:\n ids (str/list): An artist id or list of ids\n \n names (str/list): An artist name or list of names\n \n results (int): An integer number of results to return\n \n buckets (list): A list of strings specifying which buckets to retrieve\n \n limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets\n \n start (int): An integer starting value for the result set\n \n max_familiarity (float): A float specifying the max familiarity of artists to search for\n \n min_familiarity (float): A float specifying the min familiarity of artists to search for\n \n max_hotttnesss (float): A float specifying the max hotttnesss of artists to search for\n \n min_hotttnesss (float): A float specifying the min hotttnesss of artists to search for\n \n seed_catalog (str): A string specifying the catalog similar 
artists are restricted to\n \n Returns:\n A list of similar Artist objects\n \n Example:\n\n >>> some_dudes = [artist.Artist('weezer'), artist.Artist('radiohead')]\n >>> some_dudes\n [, ]\n >>> sims = artist.similar(ids=[art.id for art in some_dudes], results=5)\n >>> sims\n [, , , , ]\n >>> \n\n \"\"\"\n \n buckets = buckets or []\n kwargs = {}\n\n if ids:\n if not isinstance(ids, list):\n ids = [ids]\n kwargs['id'] = ids\n if names:\n if not isinstance(names, list):\n names = [names]\n kwargs['name'] = names\n if max_familiarity is not None:\n kwargs['max_familiarity'] = max_familiarity\n if min_familiarity is not None:\n kwargs['min_familiarity'] = min_familiarity\n if max_hotttnesss is not None:\n kwargs['max_hotttnesss'] = max_hotttnesss\n if min_hotttnesss is not None:\n kwargs['min_hotttnesss'] = min_hotttnesss\n if seed_catalog is not None:\n kwargs['seed_catalog'] = seed_catalog\n if start:\n kwargs['start'] = start\n if results:\n kwargs['results'] = results\n if buckets:\n kwargs['bucket'] = buckets\n if limit:\n kwargs['limit'] = 'true'\n if artist_start_year_before:\n kwargs['artist_start_year_before'] = artist_start_year_before\n if artist_start_year_after:\n kwargs['artist_start_year_after'] = artist_start_year_after\n if artist_end_year_before:\n kwargs['artist_end_year_before'] = artist_end_year_before\n if artist_end_year_after:\n kwargs['artist_end_year_after'] = artist_end_year_after\n\n\n result = util.callm(\"%s/%s\" % ('artist', 'similar'), kwargs)\n return [Artist(**util.fix(a_dict)) for a_dict in result['response']['artists']]", "response": "Returns a list of similar artists to this one."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nextract artists from a block of text.", "response": "def extract(text='', start=0, results=15, buckets=None, limit=False, max_familiarity=None, min_familiarity=None,\n max_hotttnesss=None, min_hotttnesss=None):\n \"\"\"Extract artist names from a block of text.\n \n 
Args:\n \n Kwargs:\n text (str): The text to extract artists from\n \n start (int): An integer starting value for the result set\n \n results (int): An integer number of results to return\n \n buckets (list): A list of strings specifying which buckets to retrieve\n \n limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets\n \n max_familiarity (float): A float specifying the max familiarity of artists to search for\n \n min_familiarity (float): A float specifying the min familiarity of artists to search for\n \n max_hotttnesss (float): A float specifying the max hotttnesss of artists to search for\n \n min_hotttnesss (float): A float specifying the max hotttnesss of artists to search for\n \n Returns:\n A list of Artist objects\n \n Example:\n \n >>> results = artist.extract(text='i saw beyonce at burger king, she was eatin, she was eatin')\n >>> results\n\n >>> \n \n \"\"\"\n\n buckets = buckets or []\n kwargs = {}\n \n kwargs['text'] = text\n \n if max_familiarity is not None:\n kwargs['max_familiarity'] = max_familiarity\n if min_familiarity is not None:\n kwargs['min_familiarity'] = min_familiarity\n if max_hotttnesss is not None:\n kwargs['max_hotttnesss'] = max_hotttnesss\n if min_hotttnesss is not None:\n kwargs['min_hotttnesss'] = min_hotttnesss\n if start:\n kwargs['start'] = start\n if results:\n kwargs['results'] = results\n if buckets:\n kwargs['bucket'] = buckets\n if limit:\n kwargs['limit'] = 'true'\n \n result = util.callm(\"%s/%s\" % ('artist', 'extract'), kwargs)\n \n return [Artist(**util.fix(a_dict)) for a_dict in result['response']['artists']]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsuggesting artists based upon partial names.", "response": "def suggest(q='', results=15, buckets=None, limit=False, max_familiarity=None, min_familiarity=None,\n max_hotttnesss=None, min_hotttnesss=None):\n \"\"\"Suggest artists based upon partial names.\n\n Args:\n\n 
Kwargs:\n q (str): The text to suggest artists from\n\n results (int): An integer number of results to return\n\n buckets (list): A list of strings specifying which buckets to retrieve\n\n limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets\n\n max_familiarity (float): A float specifying the max familiarity of artists to search for\n\n min_familiarity (float): A float specifying the min familiarity of artists to search for\n\n max_hotttnesss (float): A float specifying the max hotttnesss of artists to search for\n\n min_hotttnesss (float): A float specifying the max hotttnesss of artists to search for\n\n Returns:\n A list of Artist objects\n\n Example:\n\n >>> results = artist.suggest(text='rad')\n >>> results\n\n >>> \n\n \"\"\"\n\n buckets = buckets or []\n kwargs = {}\n\n kwargs['q'] = q\n\n if max_familiarity is not None:\n kwargs['max_familiarity'] = max_familiarity\n if min_familiarity is not None:\n kwargs['min_familiarity'] = min_familiarity\n if max_hotttnesss is not None:\n kwargs['max_hotttnesss'] = max_hotttnesss\n if min_hotttnesss is not None:\n kwargs['min_hotttnesss'] = min_hotttnesss\n if results:\n kwargs['results'] = results\n if buckets:\n kwargs['bucket'] = buckets\n if limit:\n kwargs['limit'] = 'true'\n\n result = util.callm(\"%s/%s\" % ('artist', 'suggest'), kwargs)\n\n return [Artist(**util.fix(a_dict)) for a_dict in result['response']['artists']]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_biographies(self, results=15, start=0, license=None, cache=True):\n if cache and ('biographies' in self.cache) and results==15 and start==0 and license==None:\n return self.cache['biographies']\n else:\n response = self.get_attribute('biographies', results=results, start=start, license=license)\n if results==15 and start==0 and license==None:\n self.cache['biographies'] = ResultList(response['biographies'], 0, response['total'])\n 
return ResultList(response['biographies'], start, response['total'])", "response": "Get a list of artist biographies"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_familiarity(self, cache=True):\n if not (cache and ('familiarity' in self.cache)):\n response = self.get_attribute('familiarity')\n self.cache['familiarity'] = response['artist']['familiarity']\n return self.cache['familiarity']", "response": "Get our numerical estimation of how familiar an artist currently is to the world."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the foreign id for this artist for a specific id space.", "response": "def get_foreign_id(self, idspace='musicbrainz', cache=True):\n \"\"\"Get the foreign id for this artist for a specific id space\n \n Args:\n \n Kwargs:\n idspace (str): A string indicating the idspace to fetch a foreign id for.\n \n Returns:\n A foreign ID string\n \n Example:\n \n >>> a = artist.Artist('fabulous')\n >>> a.get_foreign_id('7digital')\n u'7digital:artist:186042'\n >>> \n \"\"\"\n if not (cache and ('foreign_ids' in self.cache) and filter(lambda d: d.get('catalog') == idspace, self.cache['foreign_ids'])):\n response = self.get_attribute('profile', bucket=['id:'+idspace])\n foreign_ids = response['artist'].get(\"foreign_ids\", [])\n self.cache['foreign_ids'] = self.cache.get('foreign_ids', []) + foreign_ids\n cval = filter(lambda d: d.get('catalog') == util.map_idspace(idspace),\n self.cache.get('foreign_ids'))\n return cval[0].get('foreign_id') if cval else None"} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_hotttnesss(self, cache=True):\n if not (cache and ('hotttnesss' in self.cache)):\n response = self.get_attribute('hotttnesss')\n self.cache['hotttnesss'] = response['artist']['hotttnesss']\n return self.cache['hotttnesss']", "response": "Get our numerical description of how hottt an artist currently is."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget a list of artist images in a specific language.", "response": "def get_images(self, results=15, start=0, license=None, cache=True):\n \"\"\"Get a list of artist images\n \n Args:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). Defaults to True.\n \n results (int): An integer number of results to return\n \n start (int): An integer starting value for the result set\n \n license (str): A string specifying the desired license type\n \n Returns:\n A list of image document dicts; list contains additional attributes 'start' and 'total'\n \n Example:\n \n >>> a = artist.Artist('Captain Beefheart')\n >>> images = a.get_images(results=1)\n >>> images.total\n 49\n >>> images[0]['url']\n u'http://c4.ac-images.myspacecdn.com/images01/5/l_e1a329cdfdb16a848288edc6d578730f.jpg'\n >>> \n \"\"\"\n \n if cache and ('images' in self.cache) and results==15 and start==0 and license==None:\n return self.cache['images']\n else:\n response = self.get_attribute('images', results=results, start=start, license=license)\n total = response.get('total') or 0\n if results==15 and start==0 and license==None:\n self.cache['images'] = ResultList(response['images'], 0, total)\n return ResultList(response['images'], start, total)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a list of news articles on the web related to an artist.", "response": "def get_news(self, results=15, start=0, cache=True, 
high_relevance=False):\n \"\"\"Get a list of news articles found on the web related to an artist\n \n Args:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). Defaults to True.\n \n results (int): An integer number of results to return\n \n start (int): An integer starting value for the result set\n\n Returns:\n A list of news document dicts; list contains additional attributes 'start' and 'total'\n \n Example:\n \n >>> a = artist.Artist('Henry Threadgill')\n >>> news = a.news\n >>> news.total\n 41\n >>> news[0]['name']\n u'Jazz Journalists Association Announces 2010 Jazz Award Winners'\n >>> \n \"\"\"\n if cache and ('news' in self.cache) and results==15 and start==0 and not high_relevance:\n return self.cache['news']\n else:\n high_relevance = 'true' if high_relevance else 'false'\n response = self.get_attribute('news', results=results, start=start, high_relevance=high_relevance)\n if results==15 and start==0:\n self.cache['news'] = ResultList(response['news'], 0, response['total'])\n return ResultList(response['news'], start, response['total'])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a list of similar Artist objects for this one.", "response": "def get_similar(self, results=15, start=0, buckets=None, limit=False, cache=True, max_familiarity=None, min_familiarity=None, \\\n max_hotttnesss=None, min_hotttnesss=None, min_results=None, reverse=False, artist_start_year_before=None, \\\n artist_start_year_after=None,artist_end_year_before=None,artist_end_year_after=None):\n \"\"\"Return similar artists to this one\n \n Args:\n \n Kwargs:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). 
Defaults to True.\n \n results (int): An integer number of results to return\n \n start (int): An integer starting value for the result set\n \n max_familiarity (float): A float specifying the max familiarity of artists to search for\n \n min_familiarity (float): A float specifying the min familiarity of artists to search for\n \n max_hotttnesss (float): A float specifying the max hotttnesss of artists to search for\n \n min_hotttnesss (float): A float specifying the max hotttnesss of artists to search for\n \n reverse (bool): A boolean indicating whether or not to return dissimilar artists (wrecommender). Defaults to False.\n \n Returns:\n A list of similar Artist objects\n \n Example:\n \n >>> a = artist.Artist('Sleater Kinney')\n >>> similars = a.similar[:5]\n >>> similars\n [, , , , ]\n >>> \n \"\"\"\n buckets = buckets or []\n kwargs = {}\n if max_familiarity:\n kwargs['max_familiarity'] = max_familiarity\n if min_familiarity:\n kwargs['min_familiarity'] = min_familiarity\n if max_hotttnesss:\n kwargs['max_hotttnesss'] = max_hotttnesss\n if min_hotttnesss:\n kwargs['min_hotttnesss'] = min_hotttnesss\n if min_results:\n kwargs['min_results'] = min_results\n if buckets:\n kwargs['bucket'] = buckets\n if limit:\n kwargs['limit'] = 'true'\n if reverse:\n kwargs['reverse'] = 'true'\n if artist_start_year_before:\n kwargs['artist_start_year_before'] = artist_start_year_before\n if artist_start_year_after:\n kwargs['artist_start_year_after'] = artist_start_year_after\n if artist_end_year_before:\n kwargs['artist_end_year_before'] = artist_end_year_before\n if artist_end_year_after:\n kwargs['artist_end_year_after'] = artist_end_year_after\n \n \n if cache and ('similar' in self.cache) and results==15 and start==0 and (not kwargs):\n return [Artist(**util.fix(a)) for a in self.cache['similar']]\n else:\n response = self.get_attribute('similar', results=results, start=start, **kwargs)\n if results==15 and start==0 and (not kwargs):\n self.cache['similar'] = 
response['artists']\n return [Artist(**util.fix(a)) for a in response['artists']]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the songs associated with an artist.", "response": "def get_songs(self, cache=True, results=15, start=0):\n \"\"\"Get the songs associated with an artist\n \n Args:\n \n Kwargs:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). Defaults to True.\n \n results (int): An integer number of results to return\n \n start (int): An integer starting value for the result set\n \n Results:\n A list of Song objects; list contains additional attributes 'start' and 'total'\n \n Example:\n\n >>> a = artist.Artist('Strokes')\n >>> a.get_songs(results=5)\n [, , , , ]\n >>> \n \"\"\"\n \n if cache and ('songs' in self.cache) and results==15 and start==0:\n if not isinstance(self.cache['songs'][0], Song):\n song_objects = []\n for s in self.cache[\"songs\"]:\n song_objects.append(Song(id=s['id'], \n title=s['title'],\n artist_name=self.name,\n artist_id=self.id))\n self.cache['songs'] = song_objects\n return self.cache['songs']\n else:\n response = self.get_attribute('songs', results=results, start=start)\n for s in response['songs']:\n s.update({'artist_id':self.id, 'artist_name':self.name})\n songs = [Song(**util.fix(s)) for s in response['songs']]\n if results==15 and start==0:\n self.cache['songs'] = ResultList(songs, 0, response['total'])\n return ResultList(songs, start, response['total'])"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a list of term documents associated with an artist.", "response": "def get_terms(self, sort='weight', cache=True):\n \"\"\"Get the terms associated with an artist\n \n Args:\n \n Kwargs:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). 
Defaults to True.\n \n sort (str): A string specifying the desired sorting type (weight or frequency)\n \n Results:\n A list of term document dicts\n \n Example:\n\n >>> a = artist.Artist('tom petty')\n >>> a.terms\n [{u'frequency': 1.0, u'name': u'heartland rock', u'weight': 1.0},\n {u'frequency': 0.88569401860168606,\n u'name': u'jam band',\n u'weight': 0.9116501862732439},\n {u'frequency': 0.9656145118557401,\n u'name': u'pop rock',\n u'weight': 0.89777934440040685},\n {u'frequency': 0.8414744288140491,\n u'name': u'southern rock',\n u'weight': 0.8698567153186606},\n {u'frequency': 0.9656145118557401,\n u'name': u'hard rock',\n u'weight': 0.85738022655218893},\n {u'frequency': 0.88569401860168606,\n u'name': u'singer-songwriter',\n u'weight': 0.77427243392312772},\n {u'frequency': 0.88569401860168606,\n u'name': u'rock',\n u'weight': 0.71158718989399083},\n {u'frequency': 0.60874110500110956,\n u'name': u'album rock',\n u'weight': 0.69758668733499629},\n {u'frequency': 0.74350792060935744,\n u'name': u'psychedelic',\n u'weight': 0.68457367494207944},\n {u'frequency': 0.77213698386292873,\n u'name': u'pop',\n u'weight': 0.65039556639337293},\n {u'frequency': 0.41747136183050298,\n u'name': u'bar band',\n u'weight': 0.54974975024767025}]\n >>> \n\n \"\"\"\n if cache and ('terms' in self.cache) and sort=='weight':\n return self.cache['terms']\n else:\n response = self.get_attribute('terms', sort=sort)\n if sort=='weight':\n self.cache['terms'] = response['terms']\n return response['terms']"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the urls for an artist and return the dict containing the url", "response": "def get_urls(self, cache=True):\n \"\"\"Get the urls for an artist\n \n Args:\n \n Kwargs:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). 
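The get_terms docstring above documents a sort kwarg that is either 'weight' or 'frequency'. A minimal stand-alone sketch of that sorting behavior (the helper name and sample data are hypothetical, not part of the library):

```python
def sort_terms(terms, sort="weight"):
    """Order term documents by the given key, highest first.

    Mirrors the two sort modes ('weight' or 'frequency') that
    get_terms exposes; rejects anything else.
    """
    if sort not in ("weight", "frequency"):
        raise ValueError("sort must be 'weight' or 'frequency'")
    return sorted(terms, key=lambda t: t[sort], reverse=True)

# Sample term documents shaped like the docstring example above
terms = [
    {"name": "pop rock", "weight": 0.89, "frequency": 0.96},
    {"name": "heartland rock", "weight": 1.0, "frequency": 1.0},
    {"name": "jam band", "weight": 0.91, "frequency": 0.88},
]
print(sort_terms(terms)[0]["name"])               # heartland rock
print(sort_terms(terms, "frequency")[1]["name"])  # pop rock
```

Sorting descending matches the example output in the docstring, where the heaviest term comes first.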
Defaults to True.\n \n Results:\n A url document dict\n \n Example:\n\n >>> a = artist.Artist('the unicorns')\n >>> a.get_urls()\n {u'amazon_url': u'http://www.amazon.com/gp/search?ie=UTF8&keywords=The Unicorns&tag=httpechonecom-20&index=music',\n u'aolmusic_url': u'http://music.aol.com/artist/the-unicorns',\n u'itunes_url': u'http://itunes.com/TheUnicorns',\n u'lastfm_url': u'http://www.last.fm/music/The+Unicorns',\n u'mb_url': u'http://musicbrainz.org/artist/603c5f9f-492a-4f21-9d6f-1642a5dbea2d.html',\n u'myspace_url': u'http://www.myspace.com/iwasbornunicorn'}\n >>> \n\n \"\"\"\n if not (cache and ('urls' in self.cache)):\n response = self.get_attribute('urls')\n self.cache['urls'] = response['urls']\n return self.cache['urls']"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a list of video documents on the web related to an artist.", "response": "def get_video(self, results=15, start=0, cache=True):\n \"\"\"Get a list of video documents found on the web related to an artist\n \n Args:\n \n Kwargs:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). 
Defaults to True.\n \n results (int): An integer number of results to return\n \n start (int): An integer starting value for the result set\n \n Returns:\n A list of video document dicts; list contains additional attributes 'start' and 'total'\n \n Example:\n\n >>> a = artist.Artist('the vapors')\n >>> a.get_video(results=1, start=2)\n [{u'date_found': u'2009-12-28T08:27:48',\n u'id': u'd02f9e6dc7904f70402d4676516286b9',\n u'image_url': u'http://i1.ytimg.com/vi/p6c0wOFL3Us/default.jpg',\n u'site': u'youtube',\n u'title': u'The Vapors-Turning Japanese (rectangular white vinyl promo)',\n u'url': u'http://youtube.com/watch?v=p6c0wOFL3Us'}]\n >>> \n\n \"\"\"\n if cache and ('video' in self.cache) and results==15 and start==0:\n return self.cache['video']\n else:\n response = self.get_attribute('video', results=results, start=start)\n if results==15 and start==0:\n self.cache['video'] = ResultList(response['video'], 0, response['total'])\n return ResultList(response['video'], start, response['total'])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a list of years active dictionaries for an artist.", "response": "def get_years_active(self, cache=True):\n \"\"\"Get a list of years active dictionaries for an artist\n \n Args:\n \n Kwargs:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available). 
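Several methods above (get_songs, get_video) return lists that also carry 'start' and 'total' attributes via a ResultList. A plausible minimal implementation of such a class (the real one lives elsewhere in the package; this is an illustrative stand-in):

```python
class ResultList(list):
    """A list that also records paging metadata, so a caller can
    read result.start and result.total alongside the items.

    Stand-in for the package's own ResultList used by get_songs
    and get_video above.
    """
    def __init__(self, items, start=0, total=0):
        super().__init__(items)
        self.start = start
        self.total = total

# One page of a 340-item result set, starting at offset 15
page = ResultList(["video1", "video2"], start=15, total=340)
print(len(page), page.start, page.total)  # 2 15 340
```

Because it subclasses list, existing code that slices or iterates results keeps working unchanged.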
Defaults to True.\n \n Returns:\n A list of years active dictionaries; list contains additional attributes 'start' and 'total'\n \n Example:\n\n >>> a = artist.Artist('yelle')\n >>> a.get_years_active()\n [{ start: 2005 }]\n >>> \n\n \"\"\"\n if cache and ('years_active' in self.cache):\n return self.cache['years_active']\n else:\n response = self.get_attribute('profile', bucket=['years_active'])\n self.cache['years_active'] = response['artist']['years_active']\n return response['artist']['years_active']"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the number of related documents of various types for the artist.", "response": "def get_doc_counts(self, cache=True):\n \"\"\"\n Get the number of related documents of various types for the artist.\n The types include audio, biographies, blogs, images, news, reviews, songs, videos.\n \n Note that these documents can be retrieved by calling artist., for example,\n artist.biographies.\n \n Args:\n \n Kwargs:\n cache (bool): A boolean indicating whether or not the cached value should be used (if available).\n Defaults to True.\n \n Returns:\n A dictionary with one key for each document type, mapped to an integer count of documents.\n \n Example:\n \n >>> a = artist.Artist(\"The Kinks\")\n\n >>> a.get_doc_counts()\n {u'audio': 194,\n u'biographies': 9,\n u'blogs': 379,\n u'images': 177,\n u'news': 84,\n u'reviews': 110,\n u'songs': 499,\n u'videos': 340}\n >>>\n \"\"\"\n if not cache or not ('doc_counts' in self.cache):\n response = self.get_attribute(\"profile\", bucket='doc_counts')\n self.cache['doc_counts'] = response['artist']['doc_counts']\n return self.cache['doc_counts']"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwraps for the _run method of the base class", "response": "def run(self, bundle,\n container_id=None,\n log_path=None,\n pid_file=None,\n log_format=\"kubernetes\"):\n\n ''' run is a wrapper to create, start, 
attach, and delete a container.\n\n Equivalent command line example: \n singularity oci run -b ~/bundle mycontainer\n\n Parameters\n ==========\n bundle: the full path to the bundle folder\n container_id: an optional container_id. If not provided, use same\n container_id used to generate OciImage instance\n log_path: the path to store the log.\n pid_file: specify the pid file path to use\n log_format: defaults to kubernetes. Can also be \"basic\" or \"json\"\n '''\n return self._run(bundle,\n container_id=container_id,\n log_path=log_path,\n pid_file=pid_file,\n command=\"run\",\n log_format=log_format)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a container from a bundle", "response": "def create(self, bundle,\n container_id=None,\n empty_process=False,\n log_path=None,\n pid_file=None,\n sync_socket=None,\n log_format=\"kubernetes\"): \n\n ''' use the client to create a container from a bundle directory. The bundle\n directory should have a config.json. You must be the root user to\n create a runtime.\n\n Equivalent command line example: \n singularity oci create [create options...] \n\n Parameters\n ==========\n bundle: the full path to the bundle folder\n container_id: an optional container_id. If not provided, use same\n container_id used to generate OciImage instance\n empty_process: run container without executing container process (for\n example, for a pod container waiting for signals). This\n is a specific use case for tools like Kubernetes\n log_path: the path to store the log.\n pid_file: specify the pid file path to use\n sync_socket: the path to the unix socket for state synchronization.\n log_format: defaults to kubernetes. 
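run above simply forwards to a shared _run with command="run", and create does the same with command="create". That thin-wrapper pattern can be sketched with functools.partial (the _run stub below is hypothetical and only records its arguments):

```python
import functools

def _run(bundle, container_id=None, command=None, log_format="kubernetes"):
    # Stand-in for the shared implementation: just report what
    # would be executed instead of shelling out.
    return {"command": command, "bundle": bundle,
            "container_id": container_id, "log_format": log_format}

# 'run' and 'create' become thin wrappers that pin the subcommand
run = functools.partial(_run, command="run")
create = functools.partial(_run, command="create")

print(run("~/bundle", container_id="mycontainer")["command"])  # run
```

Pinning only the varying argument keeps every wrapper's signature consistent with the shared implementation.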
Can also be \"basic\" or \"json\"\n '''\n return self._run(bundle,\n container_id=container_id,\n empty_process=empty_process,\n log_path=log_path,\n pid_file=pid_file,\n sync_socket=sync_socket,\n command=\"create\",\n log_format=log_format)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete an instance based on container_id", "response": "def delete(self, container_id=None, sudo=None):\n '''delete an instance based on container_id.\n\n Parameters\n ==========\n container_id: the container_id to delete\n sudo: whether to issue the command with sudo (or not)\n a container started with sudo will belong to the root user\n If started by a user, the user needs to control deleting it\n if the user doesn't set to True/False, we use client self.sudo\n\n Returns\n =======\n return_code: the return code from the delete command. 0 indicates a\n successful delete, 255 indicates not.\n '''\n sudo = self._get_sudo(sudo)\n container_id = self.get_container_id(container_id)\n\n # singularity oci delete\n cmd = self._init_command('delete')\n\n # Add the container_id\n cmd.append(container_id)\n\n # Delete the container, return code goes to user (message to screen)\n return self._run_and_return(cmd, sudo=sudo)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef attach(self, container_id=None, sudo=False):\n '''attach to a container instance based on container_id\n\n Parameters\n ==========\n container_id: the container_id to attach to\n sudo: whether to issue the command with sudo (or not)\n a container started with sudo will belong to the root user\n If started by a user, the user needs to control deleting it\n\n Returns\n =======\n return_code: the return code from the delete command. 
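The OCI methods above all follow the same shape: initialize a subcommand, append any optional flags, then append the container_id last. A self-contained sketch of that assembly (the helper name is hypothetical; it only builds the argument list, it does not execute anything):

```python
def build_oci_command(subcommand, container_id, from_file=None):
    """Assemble a 'singularity oci <subcommand>' invocation as an
    argument list, appending optional flags only when they are set,
    in the style of the _init_command/append pattern above.
    """
    cmd = ["singularity", "oci", subcommand]
    if from_file is not None:
        cmd += ["--from-file", from_file]
    cmd.append(container_id)
    return cmd

print(build_oci_command("update", "mycontainer",
                        from_file="cgroups-update.json"))
# ['singularity', 'oci', 'update', '--from-file', 'cgroups-update.json', 'mycontainer']
```

Keeping the command as a list (rather than a shell string) avoids quoting problems when the arguments contain spaces.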
0 indicates a\n successful delete, 255 indicates not.\n '''\n sudo = self._get_sudo(sudo)\n container_id = self.get_container_id(container_id)\n\n # singularity oci attach\n cmd = self._init_command('attach')\n\n # Add the container_id\n cmd.append(container_id)\n\n # Attach to the container, return code goes to user (message to screen)\n return self._run_and_return(cmd, sudo)", "response": "attach to a container instance based on container_id"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef execute(self, command=None, container_id=None, sudo=False, stream=False):\n '''execute a command to a container instance based on container_id\n\n Parameters\n ==========\n container_id: the container_id to execute the command in\n command: the command to execute to the container\n sudo: whether to issue the command with sudo (or not)\n a container started with sudo will belong to the root user\n If started by a user, the user needs to control deleting it\n stream: if True, return an iterator to iterate over results of exec.\n default is False, will return full output as string.\n\n Returns\n =======\n return_code: the return code from the delete command. 
0 indicates a\n successful delete, 255 indicates not.\n '''\n sudo = self._get_sudo(sudo)\n container_id = self.get_container_id(container_id)\n\n # singularity oci exec\n cmd = self._init_command('exec')\n\n # Add the container_id\n cmd.append(container_id)\n\n if command != None:\n if not isinstance(command, list):\n command = [command]\n\n cmd = cmd + command\n\n # Execute the command, return response to user\n if stream:\n return stream_command(cmd, sudo=sudo)\n return self._run_command(cmd, sudo=sudo, quiet=True)", "response": "execute a command to a container instance based on container_id"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update(self, container_id, from_file=None, sudo=None):\n '''update container cgroup resources for a specific container_id,\n The container must have state \"running\" or \"created.\"\n\n Singularity Example:\n singularity oci update [update options...] \n singularity oci update --from-file cgroups-update.json mycontainer\n\n Parameters\n ==========\n container_id: the container_id to update cgroups for\n from_file: a path to an OCI JSON resource file to update from.\n sudo: whether to issue the command with sudo (defaults to the client setting)\n '''\n sudo = self._get_sudo(sudo)\n container_id = self.get_container_id(container_id)\n\n # singularity oci update\n cmd = self._init_command('update')\n\n if from_file != None:\n cmd = cmd + ['--from-file', from_file]\n\n # Add the container_id\n cmd.append(container_id)\n\n # Update the container, return code goes to user (message to screen)\n return self._run_and_return(cmd, sudo)", "response": "update container cgroup resources for a specific container_id"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_client(quiet=False, debug=False):\n '''\n get the client and perform imports not on init, in case there are any\n initialization or import errors. 
\n\n Parameters\n ==========\n quiet: if True, suppress most output about the client\n debug: turn on debugging mode\n\n '''\n from spython.utils import get_singularity_version\n from .base import Client\n\n Client.quiet = quiet\n Client.debug = debug\n\n # Do imports here, can be customized\n from .apps import apps\n from .build import build\n from .execute import execute \n from .help import help\n from .inspect import inspect\n from .instances import ( instances, stopall ) # global instance commands\n from .run import run\n from .pull import pull\n\n # Actions\n Client.apps = apps\n Client.build = build\n Client.execute = execute\n Client.help = help\n Client.inspect = inspect\n Client.instances = instances\n Client.run = run\n Client.pull = pull\n\n # Command Groups, Images\n from spython.image.cmd import generate_image_commands # deprecated\n Client.image = generate_image_commands()\n\n # Commands Groups, Instances\n from spython.instance.cmd import generate_instance_commands # instance level commands\n Client.instance = generate_instance_commands()\n Client.instance_stopall = stopall\n Client.instance.version = Client.version\n\n # Commands Groups, OCI (Singularity version 3 and up)\n if \"version 3\" in get_singularity_version():\n from spython.oci.cmd import generate_oci_commands\n Client.oci = generate_oci_commands()() # first () runs function, second\n # initializes OciImage class\n Client.oci.debug = Client.debug\n Client.oci.quiet = Client.quiet\n Client.oci.OciImage.quiet = Client.quiet\n Client.oci.OciImage.debug = Client.debug\n\n\n # Initialize\n cli = Client()\n\n # Pass on verbosity\n cli.image.debug = cli.debug\n cli.image.quiet = cli.quiet\n cli.instance.debug = cli.debug\n cli.instance.quiet = cli.quiet \n\n return cli", "response": "get the client and perform imports not on init"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef build(self, recipe=None, \n image=None, \n 
isolated=False,\n sandbox=False,\n writable=False,\n build_folder=None,\n robot_name=False,\n ext='simg',\n sudo=True,\n stream=False):\n\n '''build a singularity image, optionally for an isolated build\n (requires sudo). If you specify to stream, expect the image name\n and an iterator to be returned.\n \n image, builder = Client.build(...)\n\n Parameters\n ==========\n\n recipe: the path to the recipe file (or source to build from). If not\n defined, we look for \"Singularity\" file in $PWD\n image: the image to build (if None, will use an arbitrary name)\n isolated: if True, run build with --isolated flag\n sandbox: if True, create a writable sandbox\n writable: if True, use writable ext3 (sandbox takes preference)\n build_folder: where the container should be built.\n ext: the image extension to use.\n robot_name: boolean, default False. if you don't give your image a \n name (with \"image\") then a fun robot name will be generated\n instead. Highly recommended :) \n sudo: give sudo to the command (or not) default is True for build\n \n '''\n from spython.utils import check_install\n check_install()\n\n cmd = self._init_command('build')\n\n if 'version 3' in self.version():\n ext = 'sif'\n\n # No image provided, default to use the client's loaded image\n if recipe is None:\n recipe = self._get_uri()\n\n # If it's still None, try default build recipe\n if recipe is None:\n recipe = 'Singularity'\n\n if not os.path.exists(recipe):\n bot.exit('Cannot find %s, exiting.' %recipe)\n\n if image is None:\n if re.search('(docker|shub)://', recipe) and robot_name is False:\n image = self._get_filename(recipe, ext)\n else:\n image = \"%s.%s\" %(self.RobotNamer.generate(), ext)\n\n # Does the user want a custom build folder?\n if build_folder is not None:\n if not os.path.exists(build_folder):\n bot.exit('%s does not exist!' 
% build_folder)\n image = os.path.join(build_folder, image)\n \n\n # The user wants to run an isolated build\n if isolated is True:\n cmd.append('--isolated')\n\n if sandbox is True:\n cmd.append('--sandbox')\n elif writable is True:\n cmd.append('--writable')\n\n cmd = cmd + [image, recipe]\n\n if stream is False:\n output = self._run_command(cmd, sudo=sudo, capture=False)\n else:\n # Here we return the expected image, and an iterator! \n # The caller must iterate over the stream\n return image, stream_command(cmd, sudo=sudo)\n\n if os.path.exists(image):\n return image", "response": "build a singularity image"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nstarts an instance. This is done by default when an instance is created.", "response": "def start(self, image=None, name=None, args=None, sudo=False, options=[], capture=False):\n '''start an instance. This is done by default when an instance is created.\n\n Parameters\n ==========\n image: optionally, an image uri (if called as a command from Client)\n name: a name for the instance\n sudo: if the user wants to run the command with sudo\n capture: capture output, default is False. With True likely to hang.\n args: arguments to provide to the instance (supported Singularity 3.1+)\n options: a list of tuples, each an option to give to the start command\n [(\"--bind\", \"/tmp\"),...]\n\n USAGE: \n singularity [...] instance.start [...] 
\n\n ''' \n from spython.utils import ( run_command, \n check_install )\n check_install()\n\n # If name provided, over write robot (default)\n if name != None:\n self.name = name\n\n # If an image isn't provided, we have an initialized instance\n if image is None:\n\n # Not having this means it was called as a command, without an image\n if not hasattr(self, \"_image\"):\n bot.exit('Please provide an image, or create an Instance first.')\n\n image = self._image\n\n # Derive subgroup command based on singularity version\n subgroup = 'instance.start'\n if 'version 3' in self.version():\n subgroup = [\"instance\", \"start\"]\n\n cmd = self._init_command(subgroup)\n\n # Add options, if they are provided\n if not isinstance(options, list):\n options = options.split(' ')\n\n # Assemble the command!\n cmd = cmd + options + [image, self.name]\n\n # If arguments are provided\n if args != None:\n if not isinstance(args, list):\n args = [args]\n cmd = cmd + args\n\n # Save the options and cmd, if the user wants to see them later\n self.options = options\n self.args = args\n self.cmd = cmd\n\n output = run_command(cmd, sudo=sudo, quiet=True, capture=capture)\n\n if output['return_code'] == 0:\n self._update_metadata()\n\n else:\n message = '%s : return code %s' %(output['message'], \n output['return_code'])\n bot.error(message)\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nimports will import the image to the image_path", "response": "def importcmd(self, image_path, input_source):\n '''import will import (stdin) to the image\n\n Parameters\n ==========\n image_path: path to image to import to. 
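start above documents its options kwarg as a list of (flag, value) tuples such as [("--bind", "/tmp")], and splits a plain string on spaces. A hedged sketch of normalizing both forms into one flat argument list (this is an illustrative helper, not the exact library code, which concatenates the list as given):

```python
def flatten_options(options):
    """Normalize an options value into a flat argument list:
    a string is split on spaces, tuples in a list are unpacked,
    bare strings in a list pass through.
    """
    if not isinstance(options, list):
        return options.split(" ")
    flat = []
    for opt in options:
        # A tuple like ("--bind", "/tmp") contributes both parts
        flat.extend(opt if isinstance(opt, tuple) else [opt])
    return flat

print(flatten_options([("--bind", "/tmp"), ("--contain",)]))
# ['--bind', '/tmp', '--contain']
print(flatten_options("--bind /tmp"))  # ['--bind', '/tmp']
```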
\n input_source: input source or file\n import_type: if not specified, imports whatever function is given\n \n '''\n from spython.utils import check_install\n check_install()\n\n cmd = ['singularity', 'image.import', image_path, input_source]\n output = self.run_command(cmd, sudo=False)\n self.println(output) \n return image_path"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef inspect(self, image=None, json=True, app=None, quiet=True):\n '''inspect will show labels, defile, runscript, and tests for an image\n \n Parameters\n ==========\n image: path of image to inspect\n json: print json instead of raw text (default True)\n quiet: Don't print result to the screen (default True)\n app: if defined, return help in context of an app\n\n '''\n check_install()\n\n # No image provided, default to use the client's loaded image\n if image is None:\n image = self._get_uri()\n\n cmd = self._init_command('inspect')\n if app is not None:\n cmd = cmd + ['--app', app]\n\n options = ['e','d','l','r','hf','t']\n\n # After Singularity 3.0, helpfile was changed to H from\n\n if \"version 3\" in self.version():\n options = ['e','d','l','r','H','t']\n\n [cmd.append('-%s' % x) for x in options]\n\n if json is True:\n cmd.append('--json')\n\n cmd.append(image)\n result = run_command(cmd, quiet=False)\n\n if result['return_code'] == 0:\n result = jsonp.loads(result['message'][0])\n\n # Fix up labels\n labels = parse_labels(result)\n\n if not quiet:\n print(jsonp.dumps(result, indent=4))\n\n return result", "response": "inspect will show labels defile runscript and tests for an image"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_labels(result):\n '''fix up the labels, meaning parse to json if needed, and return\n original updated object\n\n Parameters\n ==========\n result: the json object to parse from inspect\n '''\n\n if \"data\" in result:\n labels = 
result['data']['attributes'].get('labels') or {}\n\n elif 'attributes' in result:\n labels = result['attributes'].get('labels') or {}\n\n # If labels included, try parsing to json\n\n try:\n labels = jsonp.loads(labels)\n except:\n pass\n\n if \"data\" in result:\n result['data']['attributes']['labels'] = labels\n else:\n result['attributes']['labels'] = labels\n\n return result", "response": "fix up the labels meaning parse to json if needed and return the original updated object\n Parameters ==========\n result - the json object to parse from inspect\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ngive the user an ipython shell", "response": "def ipython(image):\n '''give the user an ipython shell\n '''\n\n # The client will announce itself (backend/database) unless it's get\n from spython.main import get_client\n from spython.main.parse import ( DockerRecipe, SingularityRecipe )\n\n client = get_client()\n client.load(image)\n\n # Add recipe parsers\n client.DockerRecipe = DockerRecipe\n client.SingularityRecipe = SingularityRecipe\n\n from IPython import embed\n embed()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _setup(self, lines):\n '''setup required adding content from the host to the rootfs,\n so we try to capture with ADD.\n '''\n bot.warning('SETUP is error prone, please check output.')\n\n for line in lines:\n\n # For all lines, replace rootfs with actual root /\n line = re.sub('[$]{?SINGULARITY_ROOTFS}?', '', line)\n\n # If we have nothing left, don't continue\n if line in ['', None]:\n continue\n\n # If the line starts with copy or move, assume is file from host\n if re.search('(^cp|^mv)', line):\n line = re.sub('(^cp|^mv)', '', line)\n self.files.append(line)\n\n # If it's a general command, add to install routine\n else:\n self.install.append(line)", "response": "setup required adding content from the host to the rootfs so we try to capture 
with ADD."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _from(self, line):\n ''' get the FROM container image name from a FROM line!\n\n Parameters\n ==========\n line: the line from the recipe file to parse for FROM\n\n '''\n self.fromHeader = line\n bot.debug('FROM %s' %self.fromHeader)", "response": "get the FROM container image name from a FROM line!"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _env(self, lines):\n '''env will parse a list of environment lines and simply remove any\n blank lines, or those with export. Dockerfiles don't usually\n have exports.\n \n Parameters\n ==========\n lines: A list of environment pair lines.\n\n '''\n environ = [x for x in lines if not x.startswith('export')] \n self.environ += environ", "response": "This method parses a list of environment lines, removing any that start with export."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _comments(self, lines):\n ''' comments is a wrapper for comment, intended to be given a list\n of comments.\n\n Parameters\n ==========\n lines: the list of lines to parse\n\n ''' \n for line in lines:\n comment = self._comment(line)\n self.comments.append(comment)", "response": "comments is a wrapper for comment"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _run(self, lines):\n '''_parse the runscript to be the Docker CMD. If we have one line,\n call it directly. If not, write the entrypoint into a script. 
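_env above keeps only lines that don't start with export; its docstring also promises to drop blank lines. A small stand-alone sketch doing both, as documented (the helper name is hypothetical):

```python
def filter_env_lines(lines):
    """Drop blank lines and any 'export' statements from a list of
    environment lines, in the spirit of _env when converting a
    Singularity %environment section for a Dockerfile ENV block.
    Note the library code itself only filters the exports.
    """
    return [x for x in lines if x and not x.startswith("export")]

lines = ["PATH=/opt/bin:$PATH", "", "export LC_ALL=C",
         "DEBIAN_FRONTEND=noninteractive"]
print(filter_env_lines(lines))
# ['PATH=/opt/bin:$PATH', 'DEBIAN_FRONTEND=noninteractive']
```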
\n\n Parameters\n ==========\n lines: the line from the recipe file to parse for CMD\n\n '''\n lines = [x for x in lines if x not in ['', None]]\n\n # Default runscript is first index\n runscript = lines[0]\n\n # Multiple line runscript needs multiple lines written to script\n if len(lines) > 1:\n\n bot.warning('More than one line detected for runscript!')\n bot.warning('These will be echoed into a single script to call.')\n self._write_script('/entrypoint.sh', lines)\n runscript = \"/bin/bash /entrypoint.sh\"\n\n self.cmd = runscript", "response": "_parse the runscript to be the Docker CMD."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmapping will take the section name from a Singularity recipe and return a map function to add it to the appropriate place. Any lines that don't cleanly map are assumed to be comments. Parameters ========== section: the name of the Singularity recipe section Returns ======= function: to map a line to its command group (e.g., install)", "response": "def _get_mapping(self, section):\n '''mapping will take the section name from a Singularity recipe \n and return a map function to add it to the appropriate place. 
\n Any lines that don't cleanly map are assumed to be comments.\n\n Parameters\n ==========\n section: the name of the Singularity recipe section\n \n Returns\n =======\n function: to map a line to its command group (e.g., install)\n\n '''\n\n # Ensure section is lowercase\n section = section.lower()\n\n mapping = {\"environment\": self._env,\n \"comments\": self._comments,\n \"runscript\": self._run,\n \"labels\": self._labels,\n \"setup\": self._setup,\n \"files\": self._files,\n \"from\": self._from,\n \"post\": self._post,\n \"test\": self._test,\n \"help\": self._comments}\n\n if section in mapping:\n return mapping[section]\n return self._comments"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses is the base function for parsing the recipe and extracting the correct data structures.", "response": "def _parse(self):\n '''parse is the base function for parsing the recipe, and extracting\n elements into the correct data structures. Everything is parsed into\n lists or dictionaries that can be assembled again on demand. \n \n Singularity: we parse files/labels first, then install. 
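_get_mapping above dispatches a section name to its handler and falls back to the comments parser for anything unrecognized. The same shape with plain functions (illustrative only, not the library's API; the handlers here just append to lists):

```python
def get_mapping(section, handlers, default):
    """Return the handler registered for a section name
    (case-insensitive), or the default when it is unknown,
    mirroring _get_mapping's fallback to comments.
    """
    return handlers.get(section.lower(), default)

collected = {"install": [], "comments": []}
handlers = {"post": collected["install"].append}
default = collected["comments"].append

# Known section routes to its parser; unknown ones become comments
get_mapping("POST", handlers, default)("apt-get update")
get_mapping("weird-section", handlers, default)("# unknown line")
print(collected)
# {'install': ['apt-get update'], 'comments': ['# unknown line']}
```

Using dict.get with a default keeps the dispatch total: every section name maps somewhere, so the parser never raises on an unexpected heading.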
\n cd first in a line is parsed as WORKDIR\n\n '''\n # If the recipe isn't loaded, load it\n if not hasattr(self, 'config'):\n self.load_recipe()\n\n # Parse each section\n for section, lines in self.config.items():\n bot.debug(section)\n\n # Get the correct parsing function\n parser = self._get_mapping(section)\n \n # Parse it, if appropriate\n if parser:\n parser(lines)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload the From section of the Dockerfile.", "response": "def _load_from(self, line):\n '''load the From section of the recipe for the Dockerfile.\n '''\n # Remove any comments\n line = line.split('#',1)[0] \n line = re.sub('(F|f)(R|r)(O|o)(M|m):','', line).strip()\n bot.info('FROM %s' %line)\n self.config['from'] = line"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads in a section to a list and stop when we hit the next section", "response": "def _load_section(self, lines, section):\n '''read in a section to a list, and stop when we hit the next section\n '''\n members = []\n\n while True:\n\n if len(lines) == 0:\n break\n next_line = lines[0] \n\n # The end of a section\n if next_line.strip().startswith(\"%\"):\n break\n\n # Still in current section!\n else:\n new_member = lines.pop(0).strip()\n if new_member not in ['', None]:\n members.append(new_member)\n\n # Add the list to the config\n if len(members) > 0:\n if section is not None:\n self.config[section] += members"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef load_recipe(self):\n '''load will return a loaded in singularity recipe. 
The idea\n is that these sections can then be parsed into a Dockerfile,\n or printed back into their original form.\n\n Returns\n =======\n config: a parsed recipe Singularity recipe\n '''\n\n # Comments between sections, add to top of file\n lines = self.lines.copy()\n comments = []\n\n # Start with a fresh config!\n self.config = dict()\n \n section = None\n name = None\n\n while len(lines) > 0:\n\n # Clean up white trailing/leading space\n line = lines.pop(0)\n stripped = line.strip()\n\n # Bootstrap Line\n if re.search('(b|B)(o|O){2}(t|T)(s|S)(t|T)(r|R)(a|A)(p|P)', line):\n self._load_bootstrap(stripped)\n\n # From Line\n if re.search('(f|F)(r|R)(O|o)(m|M)', stripped):\n self._load_from(stripped)\n\n # Comment\n if stripped.startswith(\"#\"):\n comments.append(stripped)\n continue\n\n # Section\n elif stripped.startswith('%'):\n section = self._add_section(stripped)\n bot.debug(\"Adding section title %s\" %section)\n\n # If we have a section, and are adding it\n elif section is not None:\n lines = [line] + lines\n self._load_section(lines=lines,\n section=section)\n\n self.config['comments'] = comments", "response": "load will return a loaded in singularity recipe"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _add_section(self, line, section=None):\n '''parse a line for a section, and return the parsed section (if not\n None)\n\n Parameters\n ==========\n line: the line to parse\n section: the current (or previous) section\n\n Resulting data structure is:\n config['post'] (in lowercase)\n\n '''\n # Remove any comments\n line = line.split('#',1)[0].strip()\n\n # Is there a section name?\n parts = line.split(' ')\n if len(parts) > 1:\n name = ' '.join(parts[1:]) \n section = re.sub('[%]|(\\s+)','',parts[0]).lower()\n\n if section not in self.config: \n self.config[section] = []\n bot.debug(\"Adding section %s\" %section)\n\n return section", "response": "parse a line for a section and 
return the parsed section"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_container_id(self, container_id=None):\n ''' a helper function shared between functions that will return a \n container_id. First preference goes to a container_id provided by\n the user at runtime. Second preference goes to the container_id\n instantiated with the client.\n\n Parameters\n ==========\n container_id: the container_id to use (optional; falls back to the client's)\n '''\n\n # The user must provide a container_id, or have one with the client\n if container_id == None and self.container_id == None:\n bot.exit('You must provide a container_id.')\n\n # Choose whichever is not None, with preference for function provided\n container_id = container_id or self.container_id\n return container_id", "response": "a helper function shared between functions that will return a \n container_id"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nruns a command and return the return code.", "response": "def _run_and_return(self, cmd, sudo=None):\n ''' Run a command, show the message to the user if quiet isn't set,\n and return the return code. This is a wrapper for the OCI client\n to run a command and easily return the return code value (what\n the user is ultimately interested in).\n\n Parameters\n ==========\n cmd: the command (list) to run.\n sudo: whether to add sudo or not. 
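get_container_id above prefers the id passed at call time over the one stored on the client, and errors out when neither is set. The same rule as a stand-alone function (hypothetical name; raises ValueError instead of calling bot.exit):

```python
def resolve_container_id(provided, instance_default):
    """Prefer a container_id given at call time over the one the
    client was instantiated with, and fail loudly when neither
    is set, matching the rule get_container_id applies.
    """
    if provided is None and instance_default is None:
        raise ValueError("You must provide a container_id.")
    # 'or' picks the call-time value whenever it is truthy
    return provided or instance_default

print(resolve_container_id("job-2", "mycontainer"))  # job-2
print(resolve_container_id(None, "mycontainer"))     # mycontainer
```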
\n\n '''\n sudo = self._get_sudo(sudo)\n result = self._run_command(cmd, \n sudo=sudo, \n quiet=True, \n return_result=True)\n\n # Successful return with no output\n if len(result) == 0:\n return \n\n # Show the response to the user, only if not quiet.\n elif not self.quiet:\n bot.println(result['message'])\n\n # Return the state object to the user\n return result['return_code']"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nexport will export an image", "response": "def export(self, image_path, tmptar=None):\n '''export will export an image, sudo must be used.\n\n Parameters\n ==========\n \n image_path: full path to image\n tmptar: if defined, use custom temporary path for tar export\n\n '''\n from spython.utils import check_install\n check_install()\n\n if tmptar is None:\n tmptar = \"/%s/tmptar.tar\" %(tempfile.mkdtemp())\n cmd = ['singularity', 'image.export', '-f', tmptar, image_path]\n\n output = self.run_command(cmd, sudo=False)\n return tmptar"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write_json(json_obj, filename, mode=\"w\", print_pretty=True):\n '''write_json will (optionally,pretty print) a json object to file\n :param json_obj: the dict to print to json\n :param filename: the output file to write to\n :param pretty_print: if True, will use nicer formatting\n '''\n with open(filename, mode) as filey:\n if print_pretty:\n filey.writelines(\n json.dumps(\n json_obj,\n indent=4,\n separators=(\n ',',\n ': ')))\n else:\n filey.writelines(json.dumps(json_obj))\n return filename", "response": "write_json will print a json object to a file"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nexecuting a command to a singularity container", "response": "def execute(self, \n image = None, \n command = None,\n app = None,\n writable = False,\n contain = False,\n bind = None,\n stream = False,\n nv = False,\n 
return_result=False):\n\n ''' execute: send a command to a container\n \n Parameters\n ==========\n\n image: full path to singularity image\n command: command to send to container\n app: if not None, execute a command in context of an app\n writable: This option makes the file system accessible as read/write\n contain: This option disables the automatic sharing of writable\n filesystems on your host\n bind: list or single string of bind paths.\n This option allows you to map directories on your host system to\n directories within your container using bind mounts\n nv: if True, load Nvidia Drivers in runtime (default False)\n return_result: if True, return entire json object with return code\n and message result (default is False)\n '''\n from spython.utils import check_install\n check_install()\n\n cmd = self._init_command('exec')\n\n # nv option leverages any GPU cards\n if nv is True:\n cmd += ['--nv']\n \n # If the image is given as a list, it's probably the command\n if isinstance(image, list):\n command = image\n image = None\n\n if command is not None:\n \n # No image provided, default to use the client's loaded image\n if image is None:\n image = self._get_uri()\n self.quiet = True\n\n # If an instance is provided, grab it's name\n if isinstance(image, self.instance):\n image = image.get_uri()\n\n # Does the user want to use bind paths option?\n if bind is not None:\n cmd += self._generate_bind_list(bind)\n\n # Does the user want to run an app?\n if app is not None:\n cmd = cmd + ['--app', app]\n\n sudo = False\n if writable is True:\n sudo = True\n\n if not isinstance(command, list):\n command = command.split(' ')\n\n cmd = cmd + [image] + command\n \n if stream is False:\n return self._run_command(cmd,\n sudo=sudo,\n return_result=return_result)\n return stream_command(cmd, sudo=sudo)\n\n bot.error('Please include a command (list) to execute.')"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses a table 
to json from a string where a header is expected by default.", "response": "def parse_table(table_string, header, remove_rows=1):\n '''parse a table to json from a string, where a header is expected by default.\n Return a jsonified table.\n\n Parameters\n ==========\n table_string: the string table, ideally with a header\n header: header of expected table, must match dimension (number columns)\n remove_rows: an integer to indicate a number of rows to remove from top\n the default is 1 assuming we don't want the header\n '''\n rows = [x for x in table_string.split('\\n') if x]\n rows = rows[0+remove_rows:]\n\n # Parse into json dictionary\n parsed = []\n\n for row in rows:\n item = {}\n # This assumes no white spaces in each entry, which should be the case\n row = [x for x in row.split(' ') if x]\n for e in range(len(row)):\n item[header[e]] = row[e]\n parsed.append(item)\n return parsed"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting is a list for a single instance", "response": "def get(self, name, return_json=False, quiet=False):\n '''get is a list for a single instance. 
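The `parse_table` function above splits each row on runs of spaces and pairs fields with a caller-supplied header. A behaviorally equivalent, standalone copy (using `dict(zip(...))` in place of the index loop):

```python
def parse_table(table_string, header, remove_rows=1):
    """Parse a whitespace-delimited table string into a list of dicts."""
    # Keep non-empty lines, then drop the header row(s) from the top
    rows = [x for x in table_string.split('\n') if x]
    rows = rows[remove_rows:]

    parsed = []
    for row in rows:
        # Split on runs of spaces; assumes entries contain no spaces
        fields = [x for x in row.split(' ') if x]
        parsed.append(dict(zip(header, fields)))
    return parsed
```

Note the stated assumption: entries themselves must contain no spaces, or fields shift against the header.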
It is assumed to be running,\n and we need to look up the PID, etc.\n '''\n from spython.utils import check_install\n check_install()\n\n # Ensure compatible for singularity prior to 3.0, and after 3.0\n subgroup = \"instance.list\"\n\n if 'version 3' in self.version():\n subgroup = [\"instance\", \"list\"]\n\n cmd = self._init_command(subgroup)\n\n cmd.append(name)\n output = run_command(cmd, quiet=True)\n\n # Success, we have instances\n\n if output['return_code'] == 0:\n\n # Only print the table if we are returning json\n if quiet is False:\n print(''.join(output['message']))\n\n # Prepare json result from table\n\n header = ['daemon_name','pid','container_image']\n instances = parse_table(output['message'][0], header)\n\n # Does the user want instance objects instead?\n listing = []\n if return_json is False:\n for i in instances:\n new_instance = Instance(pid=i['pid'],\n name=i['daemon_name'],\n image=i['container_image'],\n start=False)\n\n listing.append(new_instance)\n instances = listing\n\n # Couldn't get UID\n\n elif output['return_code'] == 255:\n bot.error(\"Couldn't get UID\")\n \n # Return code of 0\n else:\n bot.info('No instances found.')\n\n # If we are given a name, return just one\n if name is not None and len(instances) == 1:\n instances = instances[0]\n\n return instances"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the message level in the environment to set based on args.", "response": "def set_verbosity(args):\n '''determine the message level in the environment to set based on args.\n '''\n level = \"INFO\"\n\n if args.debug is True:\n level = \"DEBUG\"\n elif args.quiet is True:\n level = \"QUIET\"\n\n os.environ['MESSAGELEVEL'] = level\n os.putenv('MESSAGELEVEL', level)\n os.environ['SINGULARITY_MESSAGELEVEL'] = level\n os.putenv('SINGULARITY_MESSAGELEVEL', level)\n \n # Import logger to set\n from spython.logger import bot\n bot.debug('Logging level %s' %level)\n import 
spython\n\n bot.debug(\"Singularity Python Version: %s\" % spython.__version__)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating the list of commands that can be used to create an image.", "response": "def generate_image_commands():\n ''' The Image client holds the Singularity image command group, mainly\n deprecated commands (image.import) and additional command helpers\n that are commonly use but not provided by Singularity\n\n The levels of verbosity (debug and quiet) are passed from the main\n client via the environment variable MESSAGELEVEL.\n\n These commands are added to Client.image under main/__init__.py to \n expose subcommands:\n\n Client.image.export\n Client.image.imprt\n Client.image.decompress\n Client.image.create\n\n '''\n\n class ImageClient(object):\n group = \"image\"\n\n from spython.main.base.logger import println\n from spython.main.base.command import ( init_command, run_command )\n from .utils import ( compress, decompress )\n from .create import create\n from .importcmd import importcmd\n from .export import export\n\n ImageClient.create = create\n ImageClient.imprt = importcmd\n ImageClient.export = export\n ImageClient.decompress = decompress\n ImageClient.compress = compress\n ImageClient.println = println\n ImageClient.init_command = init_command\n ImageClient.run_command = run_command\n \n cli = ImageClient()\n return cli"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef init_command(self, action, flags=None):\n '''return the initial Singularity command with any added flags.\n \n Parameters\n ==========\n action: the main action to perform (e.g., build)\n flags: one or more additional flags (e.g, volumes) \n not implemented yet.\n '''\n\n if not isinstance(action, list):\n action = [action] \n cmd = ['singularity'] + action\n\n if self.quiet is True:\n cmd.insert(1, '--quiet')\n if self.debug is True:\n cmd.insert(1, '--debug')\n\n return cmd", 
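The `init_command` shown in this record builds the base argv and inserts global flags at index 1, right after the executable name. A standalone sketch with plain parameters in place of the client's `self.quiet`/`self.debug` state:

```python
def init_command(action, quiet=False, debug=False):
    """Build the initial singularity argv with optional global flags."""
    # Accept a single action string or a list (e.g., ["instance", "list"])
    if not isinstance(action, list):
        action = [action]
    cmd = ['singularity'] + action

    # Global flags go between the executable and the subcommand
    if quiet:
        cmd.insert(1, '--quiet')
    if debug:
        cmd.insert(1, '--debug')
    return cmd
```

Because each flag is inserted at index 1, the flag added last ends up first in the argv.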
"response": "return the initial Singularity command with any added flags."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ngenerate bind string will take a single string or list of binds and return a list of binds that can be added to an exec or run command.", "response": "def generate_bind_list(self, bindlist=None):\n '''generate bind string will take a single string or list of binds, and\n return a list that can be added to an exec or run command. For example,\n the following map as follows:\n\n ['/host:/container', '/both'] --> [\"--bind\", \"/host:/container\",\"--bind\",\"/both\" ]\n ['/both'] --> [\"--bind\", \"/both\"]\n '/host:container' --> [\"--bind\", \"/host:container\"]\n None --> []\n \n An empty bind or otherwise value of None should return an empty list.\n The binds are also checked on the host.\n\n Parameters\n ==========\n bindlist: a string or list of bind mounts\n\n '''\n binds = []\n \n # Case 1: No binds provided\n if not bindlist:\n return binds\n\n # Case 2: provides a long string or non list, and must be split\n if not isinstance(bindlist, list):\n bindlist = bindlist.split(' ')\n\n for bind in bindlist:\n\n # Still cannot be None\n if bind:\n bot.debug('Adding bind %s' %bind)\n binds += ['--bind', bind]\n\n # Check that exists on host\n host = bind.split(':')[0]\n if not os.path.exists(host):\n bot.error('%s does not exist on host.' %bind)\n sys.exit(1)\n\n return binds"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsending command to the terminal", "response": "def send_command(self, cmd, sudo=False, stderr=None, stdout=None):\n '''send command is a non interactive version of run_command, meaning\n that we execute the command and return the return value, but don't\n attempt to stream any content (text from the screen) back to the\n user. 
This is useful for commands interacting with OCI bundles.\n\n Parameters\n ==========\n cmd: the list of commands to send to the terminal\n sudo: use sudo (or not)\n '''\n \n if sudo is True:\n cmd = ['sudo'] + cmd\n\n process = subprocess.Popen(cmd, stderr=stderr, stdout=stdout)\n result = process.communicate()\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrap for the global run_command function.", "response": "def run_command(self, cmd, \n sudo=False,\n capture=True,\n quiet=None,\n return_result=False):\n\n '''run_command is a wrapper for the global run_command, checking first\n for sudo and exiting on error if needed. The message is returned as\n a list of lines for the calling function to parse, and stdout uses\n the parent process so it appears for the user.\n\n Parameters\n ==========\n cmd: the command to run\n sudo: does the command require sudo?\n quiet: if quiet set by function, overrides client setting.\n return_result: return the result, if not successful (default False).\n On success, returns result.\n\n '''\n # First preference to function, then to client setting\n if quiet == None:\n quiet = self.quiet\n\n result = run_cmd(cmd, sudo=sudo, capture=capture, quiet=quiet)\n\n # If one line is returned, squash dimension\n if len(result['message']) == 1:\n result['message'] = result['message'][0]\n\n # If the user wants to return the result, just return it\n if return_result is True:\n return result\n\n # On success, return result\n if result['return_code'] == 0:\n return result['message']\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load(self, recipe):\n '''load a recipe file into the client, first performing checks, and\n then parsing the file.\n\n Parameters\n ==========\n recipe: the original recipe file, parsed by the subclass either\n DockerRecipe or SingularityRecipe\n \n '''\n self.recipe = recipe # the recipe 
file\n self._run_checks() # does the recipe file exist?\n self.parse()", "response": "load a recipe file into the client"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses is the base function for parsing the recipe, whether it be a Dockerfile or Singularity recipe. The recipe is read in as lines, and saved to a list if needed for the future. If the client has it, the recipe type specific _parse function is called. Instructions for making a client subparser: It should have a main function _parse that parses a list of lines from some recipe text file into the appropriate sections, e.g., self.fromHeader self.environ self.labels self.install self.files self.test self.entrypoint", "response": "def parse(self):\n '''parse is the base function for parsing the recipe, whether it be\n a Dockerfile or Singularity recipe. The recipe is read in as lines,\n and saved to a list if needed for the future. If the client has\n it, the recipe type specific _parse function is called.\n\n Instructions for making a client subparser:\n\n It should have a main function _parse that parses a list of lines\n from some recipe text file into the appropriate sections, e.g.,\n \n self.fromHeader\n self.environ\n self.labels\n self.install\n self.files\n self.test\n self.entrypoint\n\n '''\n\n self.cmd = None\n self.comments = []\n self.entrypoint = None\n self.environ = []\n self.files = []\n self.install = []\n self.labels = []\n self.ports = []\n self.test = None\n self.volumes = []\n\n if self.recipe:\n\n # Read in the raw lines of the file\n self.lines = read_file(self.recipe)\n\n # If properly instantiated by Docker or Singularity Recipe, parse\n if hasattr(self, '_parse'):\n self._parse()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef save(self, output_file=None, \n convert_to=None, \n runscript=\"/bin/bash\",\n force=False):\n\n '''save will convert a recipe to a specified format 
(defaults to the\n opposite of the recipe type originally loaded, (e.g., docker-->\n singularity and singularity-->docker) and write to an output file,\n if specified. If not specified, a temporary file is used.\n\n Parameters\n ==========\n output_file: the file to save to, not required (estimates default)\n convert_to: can be manually forced (docker or singularity)\n runscript: default runscript (entrypoint) to use\n force: if True, override discovery from Dockerfile\n\n '''\n\n converted = self.convert(convert_to, runscript, force)\n if output_file is None:\n output_file = self._get_conversion_outfile(convert_to=None)\n bot.info('Saving to %s' %output_file)\n write_file(output_file, converted)", "response": "save will convert a recipe to a specified format and write to an output file"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsee convert and save. This is a helper function that returns the proper conversion function that does not call it.", "response": "def _get_converter(self, convert_to=None):\n '''see convert and save. This is a helper function that returns \n the proper conversion function, but doesn't call it. We do this\n so that in the case of convert, we do the conversion and return\n a string. 
In the case of save, we save the recipe to file for the \n user.\n\n Parameters\n ==========\n convert_to: a string either docker or singularity, if a different\n\n Returns\n =======\n converter: the function to do the conversion\n\n '''\n conversion = self._get_conversion_type(convert_to)\n\n # Perform conversion\n if conversion == \"singularity\":\n return self.docker2singularity\n return self.singularity2docker"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_conversion_outfile(self, convert_to=None):\n '''a helper function to return a conversion temporary output file\n based on kind of conversion\n\n Parameters\n ==========\n convert_to: a string either docker or singularity, if a different\n\n '''\n conversion = self._get_conversion_type(convert_to)\n prefix = \"Singularity\"\n if conversion == \"docker\":\n prefix = \"Dockerfile\"\n suffix = next(tempfile._get_candidate_names())\n return \"%s.%s\" %(prefix, suffix)", "response": "a helper function to return a conversion temporary output file based on kind of conversion"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_conversion_type(self, convert_to=None):\n '''a helper function to return the conversion type based on user \n preference and input recipe.\n\n Parameters\n ==========\n convert_to: a string either docker or singularity (default None)\n\n '''\n acceptable = ['singularity', 'docker']\n\n # Default is to convert to opposite kind\n conversion = \"singularity\"\n if self.name == \"singularity\":\n conversion = \"docker\"\n\n # Unless the user asks for a specific type\n if convert_to is not None and convert_to in acceptable:\n conversion = convert_to\n return conversion", "response": "a helper function to return the conversion type based on user \n preference and input recipe."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function 
does\ndef _write_script(self, path, lines, chmod=True):\n '''write a script with some lines content to path in the image. This\n is done by way of adding echo statements to the install section.\n\n Parameters\n ==========\n path: the path to the file to write\n lines: the lines to echo to the file\n chmod: If true, change permission to make u+x\n\n '''\n if len(lines) > 0:\n lastline = lines.pop()\n for line in lines:\n self.install.append('echo \"%s\" >> %s' % (line, path))\n self.install.append(lastline) \n\n if chmod is True:\n self.install.append('chmod u+x %s' %path)", "response": "write a script with some lines content to path in the image"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the state of an OciImage if it exists", "response": "def state(self, container_id=None, sudo=None, sync_socket=None):\n\n ''' get the state of an OciImage, if it exists. The optional states that\n can be returned are created, running, stopped or (not existing).\n\n Equivalent command line example: \n singularity oci state \n \n Parameters\n ==========\n container_id: the id to get the state of.\n sudo: Add sudo to the command. If the container was created by root,\n you need sudo to interact and get its state.\n sync_socket: the path to the unix socket for state synchronization\n\n Returns\n =======\n state: a parsed json of the container state, if exists. 
If the\n container is not found, None is returned.\n '''\n sudo = self._get_sudo(sudo)\n container_id = self.get_container_id(container_id)\n\n # singularity oci state\n cmd = self._init_command('state')\n\n if sync_socket != None:\n cmd = cmd + ['--sync-socket', sync_socket]\n\n # Finally, add the container_id\n cmd.append(container_id)\n\n # Get the instance state\n result = self._run_command(cmd, sudo=sudo, quiet=True)\n\n if result != None:\n\n # If successful, a string is returned to parse\n if isinstance(result, str):\n return json.loads(result)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _state_command(self, container_id=None, command='start', sudo=None):\n\n ''' A generic state command to wrap pause, resume, kill, etc., where the\n only difference is the command. This function will be unwrapped if the\n child functions get more complicated (with additional arguments).\n\n Equivalent command line example: \n singularity oci \n \n Parameters\n ==========\n container_id: the id to start.\n command: one of start, resume, pause, kill, defaults to start.\n sudo: Add sudo to the command. 
If the container was created by root,\n you need sudo to interact and get its state.\n\n Returns\n =======\n return_code: the return code to indicate if the container was started.\n '''\n sudo = self._get_sudo(sudo)\n container_id = self.get_container_id(container_id)\n\n # singularity oci state\n cmd = self._init_command(command)\n\n # Finally, add the container_id\n cmd.append(container_id)\n\n # Run the command, return return code\n return self._run_and_return(cmd, sudo)", "response": "A generic state command to wrap pause resume kill etc."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstops a started OciImage container", "response": "def kill(self, container_id=None, sudo=None, signal=None):\n\n ''' stop (kill) a started OciImage container, if it exists\n\n Equivalent command line example: \n singularity oci kill \n \n Parameters\n ==========\n container_id: the id to stop.\n signal: signal sent to the container (default SIGTERM)\n sudo: Add sudo to the command. If the container was created by root,\n you need sudo to interact and get its state.\n\n Returns\n =======\n return_code: the return code to indicate if the container was killed.\n '''\n sudo = self._get_sudo(sudo)\n container_id = self.get_container_id(container_id)\n\n # singularity oci state\n cmd = self._init_command('kill')\n\n # Finally, add the container_id\n cmd.append(container_id)\n\n # Add the signal, if defined\n if signal != None:\n cmd = cmd + ['--signal', signal]\n\n # Run the command, return return code\n return self._run_and_return(cmd, sudo)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nresumes a stopped OciImage container", "response": "def resume(self, container_id=None, sudo=None):\n ''' resume a stopped OciImage container, if it exists\n\n Equivalent command line example: \n singularity oci resume \n \n Parameters\n ==========\n container_id: the id to stop.\n sudo: Add sudo to the command. 
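The `kill` method above appends the container id first and the optional `--signal` flag after it. A hypothetical helper mirroring just that argv assembly (the name `build_kill_command` is ours, not spython's):

```python
def build_kill_command(container_id, signal=None):
    """Assemble the 'singularity oci kill' argument list."""
    cmd = ['singularity', 'oci', 'kill', container_id]
    # The signal flag is appended only when the caller supplies one;
    # otherwise the CLI falls back to its default (SIGTERM)
    if signal is not None:
        cmd += ['--signal', signal]
    return cmd
```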
If the container was created by root,\n you need sudo to interact and get its state.\n\n Returns\n =======\n return_code: the return code to indicate if the container was resumed.\n '''\n return self._state_command(container_id, command='resume', sudo=sudo)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pause(self, container_id=None, sudo=None):\n ''' pause a running OciImage container, if it exists\n\n Equivalent command line example: \n singularity oci pause \n \n Parameters\n ==========\n container_id: the id to stop.\n sudo: Add sudo to the command. If the container was created by root,\n you need sudo to interact and get its state.\n\n Returns\n =======\n return_code: the return code to indicate if the container was paused.\n '''\n return self._state_command(container_id, command='pause', sudo=sudo)", "response": "Pause a running OciImage container"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nstops ALL instances in the current user s group", "response": "def stopall(self, sudo=False, quiet=True):\n '''stop ALL instances. 
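`pause` and `resume` are thin wrappers that pass only a different command name to the generic `_state_command`. A toy illustration of that delegation, where `_state_command` returns the argv instead of running it:

```python
class OciStateMixin:
    """Toy mixin showing pause/resume delegating to one generic command."""

    def _state_command(self, container_id, command='start'):
        # In spython this runs the CLI; here we just return the argv
        return ['singularity', 'oci', command, container_id]

    def pause(self, container_id):
        return self._state_command(container_id, command='pause')

    def resume(self, container_id):
        return self._state_command(container_id, command='resume')
```

The wrapper-per-command shape keeps the public API readable while the shared routine owns sudo handling and id resolution.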
This command is only added to the command group\n as it doesn't make sense to call from a single instance\n\n Parameters\n ==========\n sudo: if the command should be done with sudo (exposes different set of\n instances)\n\n '''\n from spython.utils import run_command, check_install\n check_install()\n\n subgroup = 'instance.stop'\n\n if 'version 3' in self.version():\n subgroup = [\"instance\", \"stop\"]\n\n cmd = self._init_command(subgroup)\n cmd = cmd + ['--all']\n output = run_command(cmd, sudo=sudo, quiet=quiet)\n\n if output['return_code'] != 0:\n message = '%s : return code %s' %(output['message'], \n output['return_code'])\n bot.error(message)\n return output['return_code']\n\n return output['return_code']"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the logging level based on the environment", "response": "def init_level(self, quiet=False):\n '''set the logging level based on the environment\n \n Parameters\n ==========\n quiet: boolean if True, set to quiet. Gets overriden by environment\n setting, and only exists to define default\n\n '''\n \n if os.environ.get('MESSAGELEVEL') == \"QUIET\":\n quiet = True\n\n self.quiet = quiet"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprints will print the output, given that quiet is not True. This function also serves to convert output in bytes to utf-8 Parameters ========== output: the string to print quiet: a runtime variable to over-ride the default.", "response": "def println(self, output, quiet=False):\n '''print will print the output, given that quiet is not True. 
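`init_level` lets the `MESSAGELEVEL` environment variable override the quiet default passed by the caller. A standalone sketch of that precedence (returning the value instead of setting `self.quiet`):

```python
import os

def init_level(quiet=False):
    """Environment setting overrides the default supplied by the caller."""
    if os.environ.get('MESSAGELEVEL') == 'QUIET':
        quiet = True
    return quiet
```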
This\n function also serves to convert output in bytes to utf-8\n\n Parameters\n ==========\n output: the string to print\n quiet: a runtime variable to over-ride the default.\n\n '''\n if isinstance(output,bytes):\n output = output.decode('utf-8')\n if self.quiet is False and quiet is False:\n print(output)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef help(self, command=None):\n '''help prints the general function help, or help for a specific command\n\n Parameters\n ========== \n command: the command to get help for, if none, prints general help\n\n '''\n from spython.utils import check_install\n check_install()\n\n cmd = ['singularity','--help']\n if command != None:\n cmd.append(command)\n help = self._run_command(cmd)\n return help", "response": "prints the general function help or help for a specific command"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef compress(self, image_path):\n '''compress will (properly) compress an image'''\n if os.path.exists(image_path):\n compressed_image = \"%s.gz\" %image_path\n os.system('gzip -c -6 %s > %s' %(image_path, compressed_image))\n return compressed_image\n\n bot.exit(\"Cannot find image %s\" %image_path)", "response": "compress will ( properly ) compress an image"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndecompress will ( properly ) decompress an image", "response": "def decompress(self, image_path, quiet=True):\n '''decompress will (properly) decompress an image'''\n\n if not os.path.exists(image_path):\n bot.exit(\"Cannot find image %s\" %image_path)\n \n extracted_file = image_path.replace('.gz','')\n cmd = ['gzip','-d','-f', image_path]\n result = self.run_command(cmd, quiet=quiet) # exits if return code != 0\n return extracted_file"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
generate_name(self, name=None):\n '''generate a Robot Name for the instance to use, if the user doesn't\n supply one.\n '''\n # If no name provided, use robot name\n if name == None:\n name = self.RobotNamer.generate()\n self.name = name.replace('-','_')", "response": "generate a Robot Name for the instance to use"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _update_metadata(self, kwargs=None):\n '''Extract any additional attributes to hold with the instance\n from kwargs\n '''\n\n # If not given metadata, use instance.list to get it for container\n if kwargs == None and hasattr(self, 'name'):\n kwargs = self._list(self.name, quiet=True, return_json=True)\n\n # Add acceptable arguments\n for arg in ['pid', 'name']:\n\n # Skip over non-iterables:\n if arg in kwargs:\n setattr(self, arg, kwargs[arg])\n \n if \"image\" in kwargs:\n self._image = kwargs['image']\n elif \"container_image\" in kwargs:\n self._image = kwargs['container_image']", "response": "Extract any additional attributes to hold with the instance\n from kwargs\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run(self, \n image = None,\n args = None,\n app = None,\n sudo = False,\n writable = False,\n contain = False,\n bind = None,\n stream = False,\n nv = False,\n return_result=False):\n\n '''\n run will run the container, with or withour arguments (which\n should be provided in a list)\n \n Parameters\n ==========\n image: full path to singularity image\n args: args to include with the run \n app: if not None, execute a command in context of an app\n writable: This option makes the file system accessible as read/write\n contain: This option disables the automatic sharing of writable\n filesystems on your host\n bind: list or single string of bind paths.\n This option allows you to map directories on your host system to\n directories within your container using bind mounts\n stream: if True, return 
for the user to run\n nv: if True, load Nvidia Drivers in runtime (default False)\n return_result: if True, return entire json object with return code\n and message result (default is False)\n\n '''\n from spython.utils import check_install\n check_install()\n\n cmd = self._init_command('run')\n \n # nv option leverages any GPU cards\n if nv is True:\n cmd += ['--nv']\n\n # No image provided, default to use the client's loaded image\n if image is None:\n image = self._get_uri()\n\n # If an instance is provided, grab it's name\n if isinstance(image, self.instance):\n image = image.get_uri()\n\n # Does the user want to use bind paths option?\n if bind is not None:\n cmd += self._generate_bind_list(bind)\n\n # Does the user want to run an app?\n if app is not None:\n cmd = cmd + ['--app', app]\n\n cmd = cmd + [image]\n\n # Conditions for needing sudo\n if writable is True:\n sudo = True\n \n if args is not None: \n if not isinstance(args, list):\n args = args.split(' ')\n cmd = cmd + args\n\n if stream is False:\n result = self._run_command(cmd, \n sudo=sudo, \n return_result=return_result)\n else:\n return stream_command(cmd, sudo=sudo)\n\n # If the user wants the raw result object\n if return_result:\n return result\n\n # Otherwise, we parse the result if it was successful\n if result:\n result = result.strip('\\n')\n\n try:\n result = json.loads(result)\n except:\n pass\n return result", "response": "This function runs the command in the current container"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmount an OCI bundle from the specified image", "response": "def mount(self, image, sudo=None):\n '''create an OCI bundle from SIF image\n\n Parameters\n ==========\n image: the container (sif) to mount\n '''\n return self._state_command(image, command=\"mount\", sudo=sudo)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _setup(self, action, line):\n ''' replace the command name from 
the group, alert the user of content,\n and clean up empty spaces\n '''\n bot.debug('[in] %s' % line)\n\n # Replace ACTION at beginning\n line = re.sub('^%s' % action, '', line)\n\n # Handle continuation lines without ACTION by padding with leading space\n line = \" \" + line\n\n # Split into components\n return [x for x in self._split_line(line) if x not in ['', None]]", "response": "replace the command name from the group alert the user of content and clean up empty spaces"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _from(self, line):\n ''' get the FROM container image name from a FROM line!\n\n Parameters\n ==========\n line: the line from the recipe file to parse for FROM\n\n '''\n fromHeader = self._setup('FROM', line)\n\n # Singularity does not support AS level\n self.fromHeader = re.sub(\"AS .+\", \"\", fromHeader[0], flags=re.I)\n\n if \"scratch\" in self.fromHeader:\n bot.warning('scratch is no longer available on Docker Hub.')\n bot.debug('FROM %s' %self.fromHeader)", "response": "get the FROM container image name from a FROM line!"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse the ARG line", "response": "def _arg(self, line):\n '''singularity doesn't have support for ARG, so instead will issue\n a warning to the console for the user to export the variable\n with SINGULARITY prefixed at build.\n \n Parameters\n ==========\n line: the line from the recipe file to parse for ARG\n \n '''\n line = self._setup('ARG', line)\n bot.warning(\"ARG is not supported for Singularity! 
To get %s\" %line[0])\n bot.warning(\"in the container, on host export SINGULARITY_%s\" %line[0])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing a line that beings with ENV indicative of one or more environment variables.", "response": "def _env(self, line):\n '''env will parse a line that beings with ENV, indicative of one or\n more environment variables.\n \n Parameters\n ==========\n line: the line from the recipe file to parse for ADD\n \n '''\n line = self._setup('ENV', line)\n\n # Extract environment (list) from the line\n environ = parse_env(line)\n\n # Add to global environment, run during install\n self.install += environ\n\n # Also define for global environment\n self.environ += environ"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _copy(self, lines):\n '''parse_add will copy multiple files from one location to another. \n This likely will need tweaking, as the files might need to be \n mounted from some location before adding to the image. \n The add command is done for an entire directory. 
It is also\n possible to have more than one file copied to a destination:\n https://docs.docker.com/engine/reference/builder/#copy\n e.g.: /\n '''\n lines = self._setup('COPY', lines)\n\n for line in lines:\n values = line.split(\" \")\n topath = values.pop()\n for frompath in values:\n self._add_files(frompath, topath)", "response": "parse_add will copy multiple files from one location to another"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _add(self, lines):\n '''Add can also handle https, and compressed files.\n\n Parameters\n ==========\n line: the line from the recipe file to parse for ADD\n\n '''\n lines = self._setup('ADD', lines)\n\n for line in lines:\n values = line.split(\" \")\n frompath = values.pop(0)\n\n # Custom parsing for frompath\n\n # If it's a web address, add to install routine to get\n if frompath.startswith('http'):\n for topath in values:\n self._parse_http(frompath, topath)\n\n # Add the file, and decompress in install\n elif re.search(\"[.](gz|gzip|bz2|xz)$\", frompath.strip()):\n for topath in values:\n self._parse_archive(frompath, topath)\n\n # Just add the files\n else:\n for topath in values:\n self._add_files(frompath, topath)", "response": "Add can also handle http and compressed files."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _add_files(self, source, dest):\n '''add files is the underlying function called to add files to the\n list, whether originally called from the functions to parse archives,\n or https. 
We make sure that any local references are changed to\n actual file locations before adding to the files list.\n \n Parameters\n ==========\n source: the source\n dest: the destination\n '''\n\n # Create data structure to iterate over\n\n paths = {'source': source,\n 'dest': dest}\n\n for pathtype, path in paths.items():\n if path == \".\":\n paths[pathtype] = os.getcwd()\n \n # Warning if doesn't exist\n if not os.path.exists(path):\n bot.warning(\"%s doesn't exist, ensure exists for build\" %path)\n\n # The pair is added to the files as a list\n self.files.append([paths['source'], paths['dest']])", "response": "add files is the underlying function called to add source and destination file pairs to the files list"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse the archive and add it to the files list", "response": "def _parse_archive(self, targz, dest):\n '''parse_targz will add a line to the install script to extract a \n targz to a location, and also add it to the files.\n\n Parameters\n ==========\n targz: the targz to extract\n dest: the location to extract it to\n\n '''\n\n # Add command to extract it\n self.install.append(\"tar -xzf %s -C %s\" %(targz, dest))\n\n # Ensure added to container files\n return self._add_files(targz, dest)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds a comment to the install and add the metadata value for the object we have", "response": "def _volume(self, line):\n '''We don't have logic for volume for Singularity, so we add as\n a comment in the install, and a metadata value for the recipe \n object\n \n Parameters\n ==========\n line: the line from the recipe file to parse to INSTALL\n\n '''\n volumes = self._setup('VOLUME', line)\n if len(volumes) > 0:\n self.volumes += volumes\n return self._comment(\"# %s\" %line)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _expose(self, 
line):\n '''Again, just add to metadata, and comment in install.\n \n Parameters\n ==========\n line: the line from the recipe file to parse to INSTALL\n\n '''\n ports = self._setup('EXPOSE', line)\n if len(ports) > 0:\n self.ports += ports\n return self._comment(\"# %s\" %line)", "response": "Add a new entry to the list of ports that are available for the current recipe."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _workdir(self, line):\n '''A Docker WORKDIR command simply implies to cd to that location\n\n Parameters\n ==========\n line: the line from the recipe file to parse for WORKDIR\n\n '''\n workdir = self._setup('WORKDIR', line)\n line = \"cd %s\" %(''.join(workdir))\n self.install.append(line)", "response": "A Docker WORKDIR command simply implies to cd to that location"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _label(self, line):\n '''_label will parse a Dockerfile label\n \n Parameters\n ==========\n line: the line from the recipe file to parse for CMD\n\n '''\n label = self._setup('LABEL', line)\n self.labels += [ label ]", "response": "parse a Dockerfile label\n \n "} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmaps will take the command from a Dockerfile and return a map function to add it to the appropriate place. Any lines that don't cleanly map are assumed to be comments. Parameters ========== line: the list that has been parsed into parts with _split_line parser: the previously used parser, for context Returns ======= function: to map a line to its command group", "response": "def _get_mapping(self, line, parser=None, previous=None):\n '''mapping will take the command from a Dockerfile and return a map\n function to add it to the appropriate place. 
Any lines that don't\n cleanly map are assumed to be comments.\n\n Parameters\n ==========\n line: the list that has been parsed into parts with _split_line\n parser: the previously used parser, for context \n\n Returns\n =======\n function: to map a line to its command group\n\n '''\n\n # Split the command into cleanly the command and rest\n if not isinstance(line, list):\n line = self._split_line(line)\n\n # No line we will give function to handle empty line\n if len(line) == 0:\n return None\n\n cmd = line[0].upper()\n\n mapping = {\"ADD\": self._add,\n \"ARG\": self._arg,\n \"COPY\": self._copy,\n \"CMD\": self._cmd,\n \"ENTRYPOINT\": self._entry,\n \"ENV\": self._env,\n \"EXPOSE\": self._expose,\n \"FROM\": self._from,\n \"HEALTHCHECK\": self._test,\n \"RUN\": self._run,\n \"WORKDIR\": self._workdir,\n \"MAINTAINER\": self._label,\n \"VOLUME\": self._volume,\n \"LABEL\": self._label}\n\n # If it's a command line, return correct function\n if cmd in mapping:\n return mapping[cmd]\n\n # If it's a continued line, return previous\n cleaned = self._clean_line(line[-1])\n previous = self._clean_line(previous)\n\n # if we are continuing from last\n if cleaned.endswith('\\\\') and parser or previous.endswith('\\\\'):\n return parser\n\n return self._default"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _parse(self):\n '''parse is the base function for parsing the Dockerfile, and extracting\n elements into the correct data structures. Everything is parsed into\n lists or dictionaries that can be assembled again on demand. 
\n\n Environment: Since Docker also exports environment as we go, \n we add environment to the environment section and \n install\n\n Labels: include anything that is a LABEL, ARG, or (deprecated)\n maintainer.\n\n Add/Copy: are treated the same\n\n '''\n parser = None\n previous = None\n\n for line in self.lines:\n\n parser = self._get_mapping(line, parser, previous)\n\n # Parse it, if appropriate\n if parser:\n parser(line)\n\n previous = line", "response": "Parse the Dockerfile and extract the necessary information into the correct data structures."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_uri(self, image):\n '''get the uri of an image, or the string (optional) that appears before\n ://. Optional. If none found, returns ''\n '''\n image = image or ''\n uri = ''\n\n match = re.match(\"^(?P<uri>.+)://\", image)\n if match:\n uri = match.group('uri')\n\n return uri", "response": "get the uri of an image, or the string that appears before ://. Returns an empty string if none is found."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse_image_name(self, image):\n '''\n simply split the uri from the image. Singularity handles\n parsing of registry, namespace, image.\n \n Parameters\n ==========\n image: the complete image uri to load (e.g., docker://ubuntu) \n\n '''\n self._image = image\n self.uri = self.get_uri(image)\n self.image = self.remove_uri(image)", "response": "Parses the image name and sets the internal _image attribute."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning an md5 hash of the file based on a criteria level.", "response": "def get_hash(self, image=None):\n '''return an md5 hash of the file based on a criteria level. This\n is intended to give the file a reasonable version. 
This only is\n useful for actual image files.\n \n Parameters\n ==========\n image: the image path to get hash for (first priority). Second\n priority is image path saved with image object, if exists.\n\n '''\n hasher = hashlib.md5()\n image = image or self.image\n \n if os.path.exists(image):\n with open(image, \"rb\") as f:\n for chunk in iter(lambda: f.read(4096), b\"\"):\n hasher.update(chunk)\n return hasher.hexdigest()\n\n bot.warning('%s does not exist.' %image)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a list of SCIF apps in image.", "response": "def apps(self, image=None, full_path=False, root=''):\n '''\n return list of SCIF apps in image. The Singularity software serves\n a scientific filesystem integration that will install apps to\n /scif/apps and associated data to /scif/data. For more information \n about SCIF, see https://sci-f.github.io\n\n Parameters\n ==========\n full_path: if True, return relative to scif base folder\n image_path: full path to the image\n\n '''\n from spython.utils import check_install\n check_install()\n\n # No image provided, default to use the client's loaded image\n if image is None:\n image = self._get_uri()\n\n cmd = self._init_command('apps') + [ image ]\n output = self._run_command(cmd)\n\n if full_path is True:\n root = '/scif/apps/'\n\n if len(output) > 0:\n output = ''.join(output).split('\\n')\n output = ['%s%s' %(root,x) for x in output if x]\n\n return output"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate will create a new image", "response": "def create(self,image_path, size=1024, sudo=False):\n '''create will create a a new image\n\n Parameters\n ==========\n image_path: full path to image\n size: image sizein MiB, default is 1024MiB\n filesystem: supported file systems ext3/ext4 (ext[2/3]: default ext3\n\n ''' \n from spython.utils import check_install\n check_install()\n\n\n cmd = 
self.init_command('image.create')\n cmd = cmd + ['--size', str(size), image_path ]\n\n output = self.run_command(cmd,sudo=sudo)\n self.println(output)\n\n if not os.path.exists(image_path):\n bot.exit(\"Could not create image %s\" %image_path)\n\n return image_path"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_install(software='singularity', quiet=True):\n '''check_install will attempt to run the singularity command, and\n return True if installed. The command line utils will not run \n without this check.\n '''\n\n cmd = [software, '--version']\n found = False\n\n try:\n version = run_command(cmd, quiet=True)\n except: # FileNotFoundError\n return found\n\n if version is not None:\n if version['return_code'] == 0:\n found = True\n\n if quiet is False:\n version = version['message']\n bot.info(\"Found %s version %s\" % (software.upper(), version))\n\n return found", "response": "check_install will attempt to run the singularity command and return True if installed."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_singularity_version():\n '''get the singularity client version. Useful in the case that functionality\n has changed, etc. Can be \"hacked\" if needed by exporting \n SPYTHON_SINGULARITY_VERSION, which is checked before checking on the\n command line.\n '''\n version = os.environ.get('SPYTHON_SINGULARITY_VERSION', \"\")\n if version == \"\":\n try:\n version = run_command([\"singularity\", '--version'], quiet=True)\n except: # FileNotFoundError\n return version\n\n if version['return_code'] == 0:\n if len(version['message']) > 0:\n version = version['message'][0].strip('\\n')\n\n return version", "response": "get the singularity client version. 
Useful in the case that functionality\n has changed etc."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef stream_command(cmd, no_newline_regexp=\"Progress\", sudo=False):\n '''stream a command (yield) back to the user, as each line is available.\n\n # Example usage:\n results = []\n for line in stream_command(cmd):\n print(line, end=\"\")\n results.append(line)\n\n Parameters\n ==========\n cmd: the command to send, should be a list for subprocess\n no_newline_regexp: the regular expression to determine skipping a\n newline. Defaults to finding Progress\n\n '''\n if sudo is True:\n cmd = ['sudo'] + cmd\n\n process = subprocess.Popen(cmd, \n stdout = subprocess.PIPE, \n universal_newlines = True)\n for line in iter(process.stdout.readline, \"\"):\n if not re.search(no_newline_regexp, line):\n yield line\n process.stdout.close()\n return_code = process.wait()\n if return_code:\n raise subprocess.CalledProcessError(return_code, cmd)", "response": "stream a command to the user"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run_command(cmd, \n sudo=False,\n capture=True,\n no_newline_regexp=\"Progress\",\n quiet=False):\n\n '''run_command uses subprocess to send a command to the terminal. If\n capture is True, we use the parent stdout, so the progress bar (and\n other commands of interest) are piped to the user. This means we \n don't return the output to parse.\n\n Parameters\n ==========\n cmd: the command to send, should be a list for subprocess\n sudo: if needed, add to start of command\n no_newline_regexp: the regular expression to determine skipping a\n newline. Defaults to finding Progress\n capture: if True, don't set stdout and have it go to console. 
This\n option can print a progress bar, but won't return the lines\n as output.\n '''\n\n if sudo is True:\n cmd = ['sudo'] + cmd\n\n stdout = None\n if capture is True:\n stdout = subprocess.PIPE\n\n # Use the parent stdout and stderr\n process = subprocess.Popen(cmd, \n stderr = subprocess.PIPE, \n stdout = stdout)\n lines = ()\n found_match = False\n\n for line in process.communicate():\n if line:\n if isinstance(line, bytes):\n line = line.decode('utf-8')\n lines = lines + (line,)\n if re.search(no_newline_regexp, line) and found_match is True:\n if quiet is False:\n sys.stdout.write(line)\n found_match = True\n else:\n if quiet is False:\n sys.stdout.write(line)\n print(line.rstrip())\n found_match = False\n\n output = {'message': lines,\n 'return_code': process.returncode }\n\n return output", "response": "Runs a command and returns the output of the command."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nloads an image either an actual path on the filesystem or a uri.", "response": "def load(self, image=None):\n '''load an image, either an actual path on the filesystem or a uri.\n\n Parameters\n ==========\n image: the image path or uri to load (e.g., docker://ubuntu \n\n '''\n from spython.image import Image\n from spython.instance import Instance\n\n self.simage = Image(image)\n\n if image is not None:\n if image.startswith('instance://'):\n self.simage = Instance(image)\n bot.info(self.simage)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting an environment variable for Singularity", "response": "def setenv(self, variable, value):\n '''set an environment variable for Singularity\n \n Parameters\n ==========\n variable: the variable to set\n value: the value to set\n '''\n os.environ[variable] = value\n os.putenv(variable, value)\n bot.debug('%s set to %s' % (variable, value))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the URI of 
the loaded image object", "response": "def get_uri(self):\n ''' check if the loaded image object (self.simage) has an associated uri\n return if yes, None if not.\n '''\n if hasattr(self, 'simage'):\n if self.simage is not None:\n if self.simage.image not in ['', None]:\n # Concatenates the ://\n return str(self.simage)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_verbosity(self, args):\n '''parse_verbosity will take an argument object, and return the args\n passed (from a dictionary) to a list\n\n Parameters\n ==========\n args: the argparse argument objects\n\n '''\n\n flags = []\n\n if args.silent is True:\n flags.append('--silent')\n elif args.quiet is True:\n flags.append('--quiet')\n elif args.debug is True:\n flags.append('--debug')\n elif args.verbose is True:\n flags.append('-' + 'v' * args.verbose)\n\n return flags", "response": "parse_verbosity will take an argument object and return the args passed (from a dictionary) as a list of flags"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef generate_oci_commands():\n ''' The oci command group will allow interaction with an image using\n OCI commands.\n '''\n from spython.oci import OciImage\n\n from spython.main.base.logger import println\n\n # run_command uses run_cmd, but wraps to catch error\n from spython.main.base.command import ( run_command, send_command )\n from spython.main.base.generate import RobotNamer\n\n # Oci Command Groups\n from .mounts import ( mount, umount )\n from .states import ( kill, state, start, pause, resume, _state_command )\n from .actions import ( attach, create, delete, execute, run, _run, update )\n\n # Oci Commands\n OciImage.start = start\n OciImage.mount = mount\n OciImage.umount = umount\n OciImage.state = state\n OciImage.resume = resume\n OciImage.pause = pause\n OciImage.attach = attach\n OciImage.create = create\n OciImage.delete = delete\n 
OciImage.execute = execute\n OciImage.update = update\n OciImage.kill = kill\n OciImage.run = run\n OciImage._run = _run\n OciImage._state_command = _state_command\n\n OciImage.RobotNamer = RobotNamer()\n OciImage._send_command = send_command # send and disregard stderr, stdout\n OciImage._run_command = run_command\n OciImage._println = println\n OciImage.OciImage = OciImage\n\n return OciImage", "response": "Generate the oci command group for an image using the oci command group."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting a Singularity recipe to a Dockerfile", "response": "def singularity2docker(self, runscript=\"/bin/bash\", force=False):\n '''convert a Singularity recipe to a (best estimated) Dockerfile'''\n\n recipe = [ \"FROM %s\" %self.fromHeader ]\n\n # Comments go up front!\n recipe += self.comments \n\n # First add files, labels\n recipe += write_lines('ADD', self.files)\n recipe += write_lines('LABEL', self.labels)\n recipe += write_lines('ENV', self.environ)\n\n # Install routine is added as RUN commands\n recipe += write_lines('RUN', self.install)\n\n # Take preference for user, entrypoint, command, then default\n runscript = self._create_runscript(runscript, force)\n recipe.append('CMD %s' %runscript)\n\n if self.test is not None:\n recipe.append(write_lines('HEALTHCHECK', self.test))\n\n # Clean up extra white spaces\n return '\\n'.join(recipe).replace('\\n\\n','\\n')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef write_lines(label, lines):\n '''write a list of lines with a header for a section.\n \n Parameters\n ==========\n lines: one or more lines to write, with header appended\n\n '''\n result = []\n continued = False\n for line in lines:\n if continued:\n result.append(line)\n else:\n result.append('%s %s' %(label, line))\n continued = False\n if line.endswith('\\\\'):\n continued = True\n \n return result", "response": "write a list of lines with a 
header for a section."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a section based on key value recipe pairs", "response": "def create_section(self, attribute, name=None):\n '''create a section based on key, value recipe pairs, \n This is used for files or label\n\n Parameters\n ==========\n attribute: the name of the data section, either labels or files\n name: the name to write to the recipe file (e.g., %name).\n if not defined, the attribute name is used.\n\n '''\n\n # Default section name is the same as attribute\n if name is None:\n name = attribute \n\n # Put a space between sections\n section = ['\\n']\n\n # Only continue if we have the section and it's not empty\n try:\n section = getattr(self, attribute)\n except AttributeError:\n bot.debug('Recipe does not have section for %s' %attribute)\n return section\n\n # if the section is empty, don't print it\n if len(section) == 0:\n return section\n\n # Files or Labels\n if attribute in ['labels', 'files']:\n return create_keyval_section(section, name)\n\n # An environment section needs exports\n if attribute in ['environ']:\n return create_env_section(section, name)\n\n # Post, Setup\n return finish_section(section, name)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a section based on key value recipe pairs", "response": "def create_keyval_section(pairs, name):\n '''create a section based on key, value recipe pairs, \n This is used for files or label\n\n Parameters\n ==========\n section: the list of values to return as a parsed list of lines\n name: the name of the section to write (e.g., files)\n\n '''\n section = ['%' + name ]\n for pair in pairs:\n section.append(' '.join(pair).strip().strip('\\\\'))\n return section"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a section that contains the environment key value pairs.", "response": "def create_env_section(pairs, name):\n 
'''environment key value pairs need to be joined by an equal, and \n exported at the end.\n\n Parameters\n ==========\n section: the list of values to return as a parsed list of lines\n name: the name of the section to write (e.g., files)\n\n '''\n section = ['%' + name ]\n for pair in pairs:\n section.append(\"export %s\" %pair)\n return section"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a Singularity build recipe based on a Singularity build recipe object.", "response": "def docker2singularity(self, runscript=\"/bin/bash\", force=False):\n '''docker2singularity will return a Singularity build recipe based on\n a the loaded recipe object. It doesn't take any arguments as the\n recipe object contains the sections, and the calling function \n determines saving / output logic.\n '''\n\n recipe = ['Bootstrap: docker']\n recipe += [ \"From: %s\" %self.fromHeader ]\n \n # Sections with key value pairs\n recipe += self._create_section('files')\n recipe += self._create_section('labels')\n recipe += self._create_section('install', 'post')\n recipe += self._create_section('environ', 'environment') \n\n # Take preference for user, entrypoint, command, then default\n runscript = self._create_runscript(runscript, force)\n recipe += finish_section(runscript, 'runscript')\n\n if self.test is not None:\n recipe += finish_section(self.test, 'test')\n\n # Clean up extra white spaces\n return '\\n'.join(recipe).replace('\\n\\n','\\n')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npulling a singularity hub or Docker image", "response": "def pull(self, \n image=None,\n name=None,\n pull_folder='',\n ext=\"simg\",\n force=False,\n capture=False,\n name_by_commit=False,\n name_by_hash=False,\n stream=False):\n\n '''pull will pull a singularity hub or Docker image\n \n Parameters\n ==========\n image: the complete image uri. 
If not provided, the client loaded is used\n pull_folder: if not defined, pulls to $PWD (''). If defined, pulls to\n user specified location instead.\n\n Docker and Singularity Hub Naming\n ---------------------------------\n name: a custom name to use, to override default\n ext: if no name specified, the default extension to use.\n\n ''' \n from spython.utils import check_install\n check_install()\n\n cmd = self._init_command('pull')\n\n # If Singularity version > 3.0, we have sif format\n if 'version 3' in self.version():\n ext = 'sif'\n\n # No image provided, default to use the client's loaded image\n if image is None:\n image = self._get_uri()\n\n # If it's still None, no go!\n if image is None:\n bot.exit('You must provide an image uri, or use client.load() first.')\n\n # Singularity Only supports shub and Docker pull\n if not re.search('^(shub|docker)://', image):\n bot.exit(\"pull only valid for docker and shub. Use sregistry client.\")\n\n # Did the user ask for a custom pull folder?\n if pull_folder:\n self.setenv('SINGULARITY_PULLFOLDER', pull_folder)\n\n # If we still don't have a custom name, base off of image uri.\n # Determine how to tell client to name the image, preference to hash\n\n if name_by_hash is True:\n cmd.append('--hash')\n\n elif name_by_commit is True:\n cmd.append('--commit')\n\n elif name is None:\n name = self._get_filename(image, ext)\n \n # Only add name if we aren't naming by hash or commit\n if not name_by_commit and not name_by_hash:\n\n # Regression Singularity 3.* onward, PULLFOLDER not honored\n # https://github.com/sylabs/singularity/issues/2788\n if pull_folder and 'version 3' in self.version():\n pull_folder_name = os.path.join(pull_folder, os.path.basename(name))\n cmd = cmd + [\"--name\", pull_folder_name] \n else:\n cmd = cmd + [\"--name\", name]\n\n if force is True:\n cmd = cmd + [\"--force\"]\n \n cmd.append(image)\n bot.info(' '.join(cmd))\n\n # If name is still None, make empty string\n if name is None:\n name = 
''\n\n final_image = os.path.join(pull_folder, name)\n\n # Option 1: For hash or commit, need return value to get final_image\n if name_by_commit or name_by_hash:\n\n # Set pull to temporary location\n tmp_folder = tempfile.mkdtemp()\n self.setenv('SINGULARITY_PULLFOLDER', tmp_folder)\n self._run_command(cmd, capture=capture)\n\n try:\n tmp_image = os.path.join(tmp_folder, os.listdir(tmp_folder)[0])\n final_image = os.path.join(pull_folder, os.path.basename(tmp_image))\n shutil.move(tmp_image, final_image)\n shutil.rmtree(tmp_folder)\n\n except:\n bot.error('Issue pulling image with commit or hash, try without?')\n\n # Option 2: Streaming we just run to show user\n elif stream is False:\n self._run_command(cmd, capture=capture)\n\n # Option 3: A custom name we can predict (not commit/hash) and can also show\n else:\n return final_image, stream_command(cmd, sudo=False)\n\n if os.path.exists(final_image):\n bot.info(final_image)\n return final_image"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a unique id that is unique with the current time and the given number of bits of .", "response": "def new_timestamped_uid(bits=32):\n \"\"\"\n A unique id that begins with an ISO timestamp followed by fractions of seconds and bits of\n randomness. 
The advantage of this is it sorts nicely by time, while still being unique.\n Example: 20150912T084555Z-378465-43vtwbx\n \"\"\"\n return \"%s-%s\" % (re.sub('[^\\w.]', '', datetime.now().isoformat()).replace(\".\", \"Z-\"), new_uid(bits))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nabbreviate a string, adding an indicator like an ellipsis if required.", "response": "def abbreviate_str(string, max_len=80, indicator=\"...\"):\n \"\"\"\n Abbreviate a string, adding an indicator like an ellipsis if required.\n \"\"\"\n if not string or not max_len or len(string) <= max_len:\n return string\n elif max_len <= len(indicator):\n return string[0:max_len]\n else:\n return string[0:max_len - len(indicator)] + indicator"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nabbreviates a list with a maximum length of max_items.", "response": "def abbreviate_list(items, max_items=10, item_max_len=40, joiner=\", \", indicator=\"...\"):\n \"\"\"\n Abbreviate a list, truncating each element and adding an indicator at the end if the\n whole list was truncated. 
Set item_max_len to None or 0 not to truncate items.\n \"\"\"\n if not items:\n return items\n else:\n shortened = [abbreviate_str(\"%s\" % item, max_len=item_max_len) for item in items[0:max_items]]\n if len(items) > max_items:\n shortened.append(indicator)\n return joiner.join(shortened)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nexpands a template string like \"blah blah $FOO blah\"", "response": "def expand_variables(template_str, value_map, transformer=None):\n \"\"\"\n Expand a template string like \"blah blah $FOO blah\" using given value mapping.\n \"\"\"\n if template_str is None:\n return None\n else:\n if transformer is None:\n transformer = lambda v: v\n try:\n # Don't bother iterating items for Python 2+3 compatibility.\n transformed_value_map = {k: transformer(value_map[k]) for k in value_map}\n return Template(template_str).substitute(transformed_value_map)\n except Exception as e:\n raise ValueError(\"could not expand variable names in command '%s': %s\" % (template_str, e))", "response": "Expand a template string like \"blah blah $FOO blah\" using given value mapping."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef shell_expand_to_popen(template, values):\n return [expand_variables(item, values) for item in shlex.split(template)]", "response": "Expand a template like cp source blah into a list of popen arguments."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmoves the given file or directory to a backup file.", "response": "def move_to_backup(path, backup_suffix=BACKUP_SUFFIX):\n \"\"\"\n Move the given file or directory to the same name, with a backup suffix.\n If backup_suffix not supplied, move it to the extension \".bak\".\n NB: If backup_suffix is supplied and is None, don't do anything.\n \"\"\"\n if backup_suffix and os.path.exists(path):\n backup_path = path + backup_suffix\n # Some messy corner cases need to be handled for existing backups.\n # TODO: Note if this is a directory, and 
we do this twice at once, there is a potential race\n # that could leave one backup inside the other.\n if os.path.islink(backup_path):\n os.unlink(backup_path)\n elif os.path.isdir(backup_path):\n shutil.rmtree(backup_path)\n shutil.move(path, backup_path)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_all_dirs(path, mode=0o777):\n # Avoid races inherent to doing this in two steps (check then create).\n # Python 3 has exist_ok but the approach below works for Python 2+3.\n # https://stackoverflow.com/questions/600268/mkdir-p-functionality-in-python\n try:\n os.makedirs(path, mode=mode)\n except OSError as e:\n if e.errno == errno.EEXIST and os.path.isdir(path):\n pass\n else:\n raise\n return path", "response": "Ensure local dir with all its parent dirs are created."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_parent_dirs(path, mode=0o777):\n parent = os.path.dirname(path)\n if parent:\n make_all_dirs(parent, mode)\n return path", "response": "Ensure parent directories of a file are created as needed."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef atomic_output_file(dest_path, make_parents=False, backup_suffix=None, suffix=\".partial.%s\"):\n if dest_path == os.devnull:\n # Handle the (probably rare) case of writing to /dev/null.\n yield dest_path\n else:\n tmp_path = (\"%s\" + suffix) % (dest_path, new_uid())\n if make_parents:\n make_parent_dirs(tmp_path)\n\n yield tmp_path\n\n # Note this is not in a finally block, so that result won't be renamed to final location\n # in case of abnormal exit.\n if not os.path.exists(tmp_path):\n raise IOError(\"failure in writing file '%s': target file '%s' missing\" % (dest_path, tmp_path))\n if backup_suffix:\n move_to_backup(dest_path, backup_suffix=backup_suffix)\n # If the target already exists, and is a directory, it has to be removed.\n if 
os.path.isdir(dest_path):\n shutil.rmtree(dest_path)\n shutil.move(tmp_path, dest_path)", "response": "A context manager for writing a file or directory in an atomic way."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading a string from a file.", "response": "def read_string_from_file(path, encoding=\"utf8\"):\n \"\"\"\n Read entire contents of file into a string.\n \"\"\"\n with codecs.open(path, \"rb\", encoding=encoding) as f:\n value = f.read()\n return value"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite a string to a file atomically.", "response": "def write_string_to_file(path, string, make_parents=False, backup_suffix=BACKUP_SUFFIX, encoding=\"utf8\"):\n \"\"\"\n Write entire file with given string contents, atomically. Keeps backup by default.\n \"\"\"\n with atomic_output_file(path, make_parents=make_parents, backup_suffix=backup_suffix) as tmp_path:\n with codecs.open(tmp_path, \"wb\", encoding=encoding) as f:\n f.write(string)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_file_mtime(path, mtime, atime=None):\n if not atime:\n atime = mtime\n f = open(path, 'a')\n try:\n os.utime(path, (atime, mtime))\n finally:\n f.close()", "response": "Set access and modification times on a file."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncopies file on local filesystem in an atomic way.", "response": "def copyfile_atomic(source_path, dest_path, make_parents=False, backup_suffix=None):\n \"\"\"\n Copy file on local filesystem in an atomic way, so partial copies never exist. 
Preserves timestamps.\n \"\"\"\n with atomic_output_file(dest_path, make_parents=make_parents, backup_suffix=backup_suffix) as tmp_path:\n shutil.copyfile(source_path, tmp_path)\n set_file_mtime(tmp_path, os.path.getmtime(source_path))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncopying a file or directory recursively and atomically, renaming the file or top-level dir when done.", "response": "def copytree_atomic(source_path, dest_path, make_parents=False, backup_suffix=None, symlinks=False):\n \"\"\"\n Copy a file or directory recursively, and atomically, renaming file or top-level dir when done.\n Unlike shutil.copytree, this will not fail on a file.\n \"\"\"\n if os.path.isdir(source_path):\n with atomic_output_file(dest_path, make_parents=make_parents, backup_suffix=backup_suffix) as tmp_path:\n shutil.copytree(source_path, tmp_path, symlinks=symlinks)\n else:\n copyfile_atomic(source_path, dest_path, make_parents=make_parents, backup_suffix=backup_suffix)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef movefile(source_path, dest_path, make_parents=False, backup_suffix=None):\n if make_parents:\n make_parent_dirs(dest_path)\n move_to_backup(dest_path, backup_suffix=backup_suffix)\n shutil.move(source_path, dest_path)", "response": "Move file. With a few extra options."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nremoves the given path if it is a directory or a file.", "response": "def rmtree_or_file(path, ignore_errors=False, onerror=None):\n \"\"\"\n rmtree fails on files or symlinks. 
This removes the target, whatever it is.\n \"\"\"\n # TODO: Could add an rsync-based delete, as in\n # https://github.com/vivlabs/instaclone/blob/master/instaclone/instaclone.py#L127-L143\n if ignore_errors and not os.path.exists(path):\n return\n if os.path.isdir(path) and not os.path.islink(path):\n shutil.rmtree(path, ignore_errors=ignore_errors, onerror=onerror)\n else:\n os.unlink(path)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef chmod_native(path, mode_expression, recursive=False):\n popenargs = [\"chmod\"]\n if recursive:\n popenargs.append(\"-R\")\n popenargs.append(mode_expression)\n popenargs.append(path)\n subprocess.check_call(popenargs)", "response": "Wrapper that invokes the native chmod command via subprocess, supporting symbolic mode expressions that os.chmod does not."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncompute SHA1 hash of a file.", "response": "def file_sha1(path):\n \"\"\"\n Compute SHA1 hash of a file.\n \"\"\"\n sha1 = hashlib.sha1()\n with open(path, \"rb\") as f:\n while True:\n block = f.read(2 ** 10)\n if not block:\n break\n sha1.update(block)\n return sha1.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\nasync def get_image(\n self,\n input_source: str,\n output_format: str = IMAGE_JPEG,\n extra_cmd: Optional[str] = None,\n timeout: int = 15,\n ) -> Optional[bytes]:\n \"\"\"Open FFmpeg process as capture 1 frame.\"\"\"\n command = [\"-an\", \"-frames:v\", \"1\", \"-c:v\", output_format]\n\n # open input for capture 1 frame\n is_open = await self.open(\n cmd=command,\n input_source=input_source,\n output=\"-f image2pipe -\",\n extra_cmd=extra_cmd,\n )\n\n # error after open?\n if not is_open:\n _LOGGER.warning(\"Error starting FFmpeg.\")\n return None\n\n # read image\n try:\n proc_func = functools.partial(self._proc.communicate, timeout=timeout)\n image, _ = await 
self._loop.run_in_executor(None, proc_func)\n return image\n\n except (subprocess.TimeoutExpired, ValueError):\n _LOGGER.warning(\"Timeout reading image.\")\n self.kill()\n return None", "response": "Read image from the specified image file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\nasync def get_version(self, timeout: int = 15) -> Optional[str]:\n command = [\"-version\"]\n # open input for capture 1 frame\n\n is_open = await self.open(cmd=command, input_source=None, output=\"\")\n\n # error after open?\n if not is_open:\n _LOGGER.warning(\"Error starting FFmpeg.\")\n return\n\n # read output\n try:\n proc_func = functools.partial(self._proc.communicate, timeout=timeout)\n output, _ = await self._loop.run_in_executor(None, proc_func)\n\n result = re.search(r\"ffmpeg version (\\S*)\", output.decode())\n if result is not None:\n return result.group(1)\n\n except (subprocess.TimeoutExpired, ValueError):\n _LOGGER.warning(\"Timeout reading stdout.\")\n self.kill()\n\n return None", "response": "Execute the FFmpeg process and parse the version information."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nopens the camera process as mjpeg video stream.", "response": "def open_camera(\n self, input_source: str, extra_cmd: Optional[str] = None\n ) -> Coroutine:\n \"\"\"Open FFmpeg process as mjpeg video stream.\n\n Return A coroutine.\n \"\"\"\n command = [\"-an\", \"-c:v\", \"mjpeg\"]\n\n return self.open(\n cmd=command,\n input_source=input_source,\n output=\"-f mpjpeg -\",\n extra_cmd=extra_cmd,\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nloads the NVRTC shared library with an optional search path in lib_path.", "response": "def _load_nvrtc_lib(self, lib_path):\n \"\"\"\n Loads the NVRTC shared library, with an optional search path in\n lib_path.\n \"\"\"\n if sizeof(c_void_p) == 8:\n if system() == 
'Windows':\n def_lib_name = 'nvrtc64_92.dll'\n elif system() == 'Darwin':\n def_lib_name = 'libnvrtc.dylib'\n else:\n def_lib_name = 'libnvrtc.so'\n else:\n raise NVRTCException('NVRTC is not supported on 32-bit platforms.')\n\n if len(lib_path) == 0:\n name = def_lib_name\n else:\n name = lib_path\n\n self._lib = cdll.LoadLibrary(name)\n\n self._lib.nvrtcCreateProgram.argtypes = [\n POINTER(c_void_p), # prog\n c_char_p, # src\n c_char_p, # name\n c_int, # numHeaders\n POINTER(c_char_p), # headers\n POINTER(c_char_p) # include_names\n ]\n self._lib.nvrtcCreateProgram.restype = c_int\n\n self._lib.nvrtcDestroyProgram.argtypes = [\n POINTER(c_void_p) # prog\n ]\n self._lib.nvrtcDestroyProgram.restype = c_int\n\n self._lib.nvrtcCompileProgram.argtypes = [\n c_void_p, # prog\n c_int, # numOptions\n POINTER(c_char_p) # options\n ]\n self._lib.nvrtcCompileProgram.restype = c_int\n\n self._lib.nvrtcGetPTXSize.argtypes = [\n c_void_p, # prog\n POINTER(c_size_t) # ptxSizeRet\n ]\n self._lib.nvrtcGetPTXSize.restype = c_int\n\n self._lib.nvrtcGetPTX.argtypes = [\n c_void_p, # prog\n c_char_p # ptx\n ]\n self._lib.nvrtcGetPTX.restype = c_int\n\n self._lib.nvrtcGetProgramLogSize.argtypes = [\n c_void_p, # prog\n POINTER(c_size_t) # logSizeRet\n ]\n self._lib.nvrtcGetProgramLogSize.restype = c_int\n\n self._lib.nvrtcGetProgramLog.argtypes = [\n c_void_p, # prog\n c_char_p # log\n ]\n self._lib.nvrtcGetProgramLog.restype = c_int\n\n self._lib.nvrtcAddNameExpression.argtypes = [\n c_void_p, # prog\n c_char_p # nameExpression\n ]\n self._lib.nvrtcAddNameExpression.restype = c_int\n\n self._lib.nvrtcGetLoweredName.argtypes = [\n c_void_p, # prog\n c_char_p, # nameExpression\n POINTER(c_char_p) # loweredName\n ]\n self._lib.nvrtcGetLoweredName.restype = c_int\n\n self._lib.nvrtcGetErrorString.argtypes = [\n c_int # result\n ]\n self._lib.nvrtcGetErrorString.restype = c_char_p\n\n self._lib.nvrtcVersion.argtypes = [\n POINTER(c_int), # major\n POINTER(c_int) # minor\n ]\n 
self._lib.nvrtcVersion.restype = c_int"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef nvrtcCreateProgram(self, src, name, headers, include_names):\n res = c_void_p()\n headers_array = (c_char_p * len(headers))()\n headers_array[:] = encode_str_list(headers)\n include_names_array = (c_char_p * len(include_names))()\n include_names_array[:] = encode_str_list(include_names)\n code = self._lib.nvrtcCreateProgram(byref(res),\n c_char_p(encode_str(src)), c_char_p(encode_str(name)),\n len(headers),\n headers_array, include_names_array)\n self._throw_on_error(code)\n return res", "response": "Creates and returns a new NVRTC program object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndestroying the given NVRTC program object.", "response": "def nvrtcDestroyProgram(self, prog):\n \"\"\"\n Destroys the given NVRTC program object.\n \"\"\"\n code = self._lib.nvrtcDestroyProgram(byref(prog))\n self._throw_on_error(code)\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncompile the NVRTC program into PTX using the provided options.", "response": "def nvrtcCompileProgram(self, prog, options):\n \"\"\"\n Compiles the NVRTC program object into PTX, using the provided options\n array. 
See the NVRTC API documentation for accepted options.\n \"\"\"\n options_array = (c_char_p * len(options))()\n options_array[:] = encode_str_list(options)\n code = self._lib.nvrtcCompileProgram(prog, len(options), options_array)\n self._throw_on_error(code)\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the compiled PTX for the NVRTC program object.", "response": "def nvrtcGetPTX(self, prog):\n \"\"\"\n Returns the compiled PTX for the NVRTC program object.\n \"\"\"\n size = c_size_t()\n code = self._lib.nvrtcGetPTXSize(prog, byref(size))\n self._throw_on_error(code)\n\n buf = create_string_buffer(size.value)\n code = self._lib.nvrtcGetPTX(prog, buf)\n self._throw_on_error(code)\n\n return buf.value.decode('utf-8')"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the log for the NVRTC program object.", "response": "def nvrtcGetProgramLog(self, prog):\n \"\"\"\n Returns the log for the NVRTC program object.\n\n Only useful after calls to nvrtcCompileProgram or nvrtcVerifyProgram.\n \"\"\"\n size = c_size_t()\n code = self._lib.nvrtcGetProgramLogSize(prog, byref(size))\n self._throw_on_error(code)\n\n buf = create_string_buffer(size.value)\n code = self._lib.nvrtcGetProgramLog(prog, buf)\n self._throw_on_error(code)\n\n return buf.value.decode('utf-8')"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef nvrtcAddNameExpression(self, prog, name_expression):\n code = self._lib.nvrtcAddNameExpression(prog,\n c_char_p(encode_str(name_expression)))\n self._throw_on_error(code)\n return", "response": "Add a name expression to the given NVRTC program."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the lowered (mangled) name for the given name expression.", "response": "def nvrtcGetLoweredName(self, prog, name_expression):\n \"\"\"\n Returns the lowered (mangled) name for the given name expression denoting a 
__global__ function or\n function template instantiation.\n \"\"\"\n lowered_name = c_char_p()\n code = self._lib.nvrtcGetLoweredName(prog,\n c_char_p(encode_str(name_expression)),\n byref(lowered_name))\n self._throw_on_error(code)\n return lowered_name.value.decode('utf-8')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef nvrtcGetErrorString(self, code):\n code_int = c_int(code)\n res = self._lib.nvrtcGetErrorString(code_int)\n return res.decode('utf-8')", "response": "Returns a string identifier for the given NVRTC status code."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the loaded NVRTC library version as a tuple.", "response": "def nvrtcVersion(self):\n \"\"\"\n Returns the loaded NVRTC library version as a (major, minor) tuple.\n \"\"\"\n major = c_int()\n minor = c_int()\n code = self._lib.nvrtcVersion(byref(major), byref(minor))\n self._throw_on_error(code)\n return (major.value, minor.value)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef is_running(self) -> bool:\n if self._proc is None or self._proc.returncode is not None:\n return False\n return True", "response": "Return True if ffmpeg is running."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates ffmpeg command line.", "response": "def _generate_ffmpeg_cmd(\n self,\n cmd: List[str],\n input_source: Optional[str],\n output: Optional[str],\n extra_cmd: Optional[str] = None,\n ) -> None:\n \"\"\"Generate ffmpeg command line.\"\"\"\n self._argv = [self._ffmpeg]\n\n # start command init\n if input_source is not None:\n self._put_input(input_source)\n self._argv.extend(cmd)\n\n # exists a extra cmd from customer\n if extra_cmd is not None:\n self._argv.extend(shlex.split(extra_cmd))\n\n self._merge_filters()\n self._put_output(output)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the 
documentation for the following Python 3 function\ndef _put_input(self, input_source: str) -> None:\n input_cmd = shlex.split(str(input_source))\n if len(input_cmd) > 1:\n self._argv.extend(input_cmd)\n else:\n self._argv.extend([\"-i\", input_source])", "response": "Put input string to ffmpeg command."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _put_output(self, output: Optional[str]) -> None:\n if output is None:\n self._argv.extend([\"-f\", \"null\", \"-\"])\n return\n\n output_cmd = shlex.split(str(output))\n if len(output_cmd) > 1:\n self._argv.extend(output_cmd)\n else:\n self._argv.append(output)", "response": "Put output string to ffmpeg command."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmerges all filter config in command line.", "response": "def _merge_filters(self) -> None:\n \"\"\"Merge all filter config in command line.\"\"\"\n for opts in ([\"-filter:a\", \"-af\"], [\"-filter:v\", \"-vf\"]):\n filter_list = []\n new_argv = []\n cmd_iter = iter(self._argv)\n for element in cmd_iter:\n if element in opts:\n filter_list.insert(0, next(cmd_iter))\n else:\n new_argv.append(element)\n\n # update argv if changes\n if filter_list:\n new_argv.extend([opts[0], \",\".join(filter_list)])\n self._argv = new_argv.copy()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\nasync def open(\n self,\n cmd: List[str],\n input_source: Optional[str],\n output: Optional[str] = \"-\",\n extra_cmd: Optional[str] = None,\n stdout_pipe: bool = True,\n stderr_pipe: bool = False,\n ) -> bool:\n \"\"\"Start a ffmpeg instance and pipe output.\"\"\"\n stdout = subprocess.PIPE if stdout_pipe else subprocess.DEVNULL\n stderr = subprocess.PIPE if stderr_pipe else subprocess.DEVNULL\n\n if self.is_running:\n _LOGGER.warning(\"FFmpeg is already running!\")\n return True\n\n # set command line\n self._generate_ffmpeg_cmd(cmd, 
input_source, output, extra_cmd)\n\n # start ffmpeg\n _LOGGER.debug(\"Start FFmpeg with %s\", str(self._argv))\n try:\n proc_func = functools.partial(\n subprocess.Popen,\n self._argv,\n bufsize=0,\n stdin=subprocess.PIPE,\n stdout=stdout,\n stderr=stderr,\n )\n self._proc = await self._loop.run_in_executor(None, proc_func)\n except Exception as err: # pylint: disable=broad-except\n _LOGGER.exception(\"FFmpeg fails %s\", err)\n self._clear()\n return False\n\n return self._proc is not None", "response": "Start a ffmpeg instance and pipe output."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\nasync def close(self, timeout=5) -> None:\n if not self.is_running:\n _LOGGER.warning(\"FFmpeg isn't running!\")\n return\n\n # Can't use communicate because we attach the output to a streamreader\n def _close():\n \"\"\"Close ffmpeg.\"\"\"\n self._proc.stdin.write(b\"q\")\n self._proc.wait(timeout=timeout)\n\n # send stop to ffmpeg\n try:\n await self._loop.run_in_executor(None, _close)\n _LOGGER.debug(\"Close FFmpeg process\")\n\n except (subprocess.TimeoutExpired, ValueError):\n _LOGGER.warning(\"Timeout while waiting of FFmpeg\")\n self.kill()\n\n finally:\n self._clear()", "response": "Stop a running ffmpeg instance."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate and return a streamreader.", "response": "async def get_reader(self, source=FFMPEG_STDOUT) -> asyncio.StreamReader:\n \"\"\"Create and return streamreader.\"\"\"\n reader = asyncio.StreamReader(loop=self._loop)\n reader_protocol = asyncio.StreamReaderProtocol(reader)\n\n # Attach stream\n if source == FFMPEG_STDOUT:\n await self._loop.connect_read_pipe(\n lambda: reader_protocol, self._proc.stdout\n )\n else:\n await self._loop.connect_read_pipe(\n lambda: reader_protocol, self._proc.stderr\n )\n\n # Start reader\n return reader"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function 
for\nstopping a ffmpeg instance.", "response": "def close(self, timeout: int = 5) -> None:\n \"\"\"Stop a ffmpeg instance.\n\n Return a coroutine\n \"\"\"\n if self._read_task is not None and not self._read_task.cancelled():\n self._read_task.cancel()\n\n return super().close(timeout)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads lines from a pipe and processes them.", "response": "async def _process_lines(self, pattern: Optional[str] = None) -> None:\n \"\"\"Read lines from the pipe that match the pattern.\"\"\"\n if pattern is not None:\n cmp = re.compile(pattern)\n\n _LOGGER.debug(\"Start working with pattern '%s'.\", pattern)\n\n # read lines\n while self.is_running:\n try:\n line = await self._input.readline()\n if not line:\n break\n line = line.decode()\n except Exception: # pylint: disable=broad-except\n break\n\n match = True if pattern is None else cmp.search(line)\n if match:\n _LOGGER.debug(\"Process: %s\", line)\n await self._que.put(line)\n\n try:\n await self._loop.run_in_executor(None, self._proc.wait)\n finally:\n await self._que.put(None)\n _LOGGER.debug(\"Close read ffmpeg output.\")"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstart ffmpeg processing data from output.", "response": "async def start_worker(\n self,\n cmd: List[str],\n input_source: str,\n output: Optional[str] = None,\n extra_cmd: Optional[str] = None,\n pattern: Optional[str] = None,\n reading: str = FFMPEG_STDERR,\n ) -> None:\n \"\"\"Start ffmpeg to process data from output.\"\"\"\n if self.is_running:\n _LOGGER.warning(\"Can't start worker. 
It is already running!\")\n return\n\n if reading == FFMPEG_STDERR:\n stdout = False\n stderr = True\n else:\n stdout = True\n stderr = False\n\n # start ffmpeg and reading to queue\n await self.open(\n cmd=cmd,\n input_source=input_source,\n output=output,\n extra_cmd=extra_cmd,\n stdout_pipe=stdout,\n stderr_pipe=stderr,\n )\n\n self._input = await self.get_reader(reading)\n\n # start background processing\n self._read_task = self._loop.create_task(self._process_lines(pattern))\n self._loop.create_task(self._worker_process())"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompile the program object to PTX using the compiler specified in options.", "response": "def compile(self, options=[]):\n \"\"\"\n Compiles the program object to PTX using the compiler options\n specified in `options`.\n \"\"\"\n try:\n self._interface.nvrtcCompileProgram(self._program, options)\n ptx = self._interface.nvrtcGetPTX(self._program)\n return ptx\n except NVRTCException as e:\n log = self._interface.nvrtcGetProgramLog(self._program)\n raise ProgramException(log)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_options(\n self, time_duration: int = 1, time_reset: int = 2, peak: int = -30\n ) -> None:\n \"\"\"Set option parameter for noise sensor.\"\"\"\n self._time_duration = time_duration\n self._time_reset = time_reset\n self._peak = peak", "response": "Set option parameter for noise sensor."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef open_sensor(\n self,\n input_source: str,\n output_dest: Optional[str] = None,\n extra_cmd: Optional[str] = None,\n ) -> Coroutine:\n \"\"\"Open FFmpeg process for reading an audio stream.\n\n Return a coroutine.\n \"\"\"\n command = [\"-vn\", \"-filter:a\", \"silencedetect=n={}dB:d=1\".format(self._peak)]\n\n # run ffmpeg, read output\n return self.start_worker(\n cmd=command,\n 
input_source=input_source,\n output=output_dest,\n extra_cmd=extra_cmd,\n pattern=\"silence\",\n )", "response": "Open an FFmpeg process for silence detection on an audio stream."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef set_options(\n self,\n time_reset: int = 60,\n time_repeat: int = 0,\n repeat: int = 0,\n changes: int = 10,\n ) -> None:\n \"\"\"Set option parameter for noise sensor.\"\"\"\n self._time_reset = time_reset\n self._time_repeat = time_repeat\n self._repeat = repeat\n self._changes = changes", "response": "Set option parameter for noise sensor."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef open_sensor(\n self, input_source: str, extra_cmd: Optional[str] = None\n ) -> Coroutine:\n \"\"\"Open FFmpeg process on a video stream for motion detection.\n\n Return a coroutine.\n \"\"\"\n command = [\n \"-an\",\n \"-filter:v\",\n \"select=gt(scene\\\\,{0})\".format(self._changes / 100),\n ]\n\n # run ffmpeg, read output\n return self.start_worker(\n cmd=command,\n input_source=input_source,\n output=\"-f framemd5 -\",\n extra_cmd=extra_cmd,\n pattern=self.MATCH,\n reading=FFMPEG_STDOUT,\n )", "response": "Open a video stream for motion detection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\nasync def _worker_process(self) -> None:\n state = self.STATE_NONE\n timeout = None\n\n self._loop.call_soon(self._callback, False)\n\n # for repeat feature\n re_frame = 0\n re_time = 0\n\n re_data = re.compile(self.MATCH)\n\n # process queue data\n while True:\n try:\n _LOGGER.debug(\"Reading State: %d, timeout: %s\", state, timeout)\n with async_timeout.timeout(timeout, loop=self._loop):\n data = await self._que.get()\n if data is None:\n self._loop.call_soon(self._callback, None)\n return\n except asyncio.TimeoutError:\n _LOGGER.debug(\"Blocking 
timeout\")\n # reset motion detection\n if state == self.STATE_MOTION:\n state = self.STATE_NONE\n self._loop.call_soon(self._callback, False)\n timeout = None\n # reset repeate state\n if state == self.STATE_REPEAT:\n state = self.STATE_NONE\n timeout = None\n continue\n\n frames = re_data.search(data)\n if frames:\n # repeat not used\n if self._repeat == 0 and state == self.STATE_NONE:\n state = self.STATE_MOTION\n self._loop.call_soon(self._callback, True)\n timeout = self._time_reset\n\n # repeat feature is on / first motion\n if state == self.STATE_NONE:\n state = self.STATE_REPEAT\n timeout = self._time_repeat\n re_frame = 0\n re_time = time()\n\n elif state == self.STATE_REPEAT:\n re_frame += 1\n\n # REPEAT ready?\n if re_frame >= self._repeat:\n state = self.STATE_MOTION\n self._loop.call_soon(self._callback, True)\n timeout = self._time_reset\n else:\n past = time() - re_time\n timeout -= past\n\n # REPEAT time down\n if timeout <= 0:\n _LOGGER.debug(\"Reset repeat to none\")\n state = self.STATE_NONE\n timeout = None\n\n continue\n\n _LOGGER.warning(\"Unknown data from queue!\")", "response": "This function processes the data in the queue."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nopens a connection to a JDBC driver and return a Connection instance.", "response": "def connect(jclassname, driver_args, jars=None, libs=None):\n \"\"\"Open a connection to a database using a JDBC driver and return\n a Connection instance.\n\n jclassname: Full qualified Java class name of the JDBC driver.\n driver_args: Argument or sequence of arguments to be passed to the\n Java DriverManager.getConnection method. Usually the\n database URL. 
See\n http://docs.oracle.com/javase/6/docs/api/java/sql/DriverManager.html\n for more details\n jars: Jar filename or sequence of filenames for the JDBC driver\n libs: Dll/so filenames or sequence of dlls/sos used as shared\n library by the JDBC driver\n \"\"\"\n\n if _gateway_is_running():\n gateway = java_gateway.JavaGateway()\n else:\n driver_args = [driver_args] if isinstance(driver_args, str) else driver_args\n\n if jars:\n classpath = os.pathsep.join(jars) if isinstance(jars, list) else jars\n else:\n classpath = None\n\n if libs:\n javaopts = libs if isinstance(libs, list) else [libs]\n else:\n javaopts = []\n\n gateway = java_gateway.JavaGateway.launch_gateway(\n port=25333, classpath=classpath, javaopts=javaopts, die_on_exit=True)\n\n java_gateway.java_import(gateway.jvm, 'java.sql.DriverManager')\n gateway.jvm.Class.forName(jclassname)\n\n connection = gateway.jvm.DriverManager.getConnection(*driver_args)\n if _converters is None:\n types_map = {}\n for type in gateway.jvm.Class.forName(\"java.sql.Types\").getFields():\n types_map[type.getName()] = type.getInt(None)\n _init_converters(types_map)\n\n return Connection(connection, _converters)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsubscribe to a resource path", "response": "def add_resource_subscription(self, device_id, _resource_path, **kwargs): # noqa: E501\n \"\"\"Subscribe to a resource path # noqa: E501\n\n The Device Management Connect eventing model consists of observable resources. This means that endpoints can deliver updated resource content, periodically or with a more sophisticated solution-dependent logic. The OMA LwM2M resource model including objects, object instances, resources and resource instances is also supported. Applications can subscribe to objects, object instances or individual resources to make the device to provide value change notifications to Device Management Connect service. 
An application needs to call a `/notification/callback` method to get Device Management Connect to push notifications of the resource changes. **Notification rules** A web application can place dynamic observation rules for individual Object Instances and Resources to define when the device sends observations. More information in [Notification rules](/docs/current/connecting/resource-change-webapp.html). All manual subscriptions are removed during a full device registration and applications need to re-subscribe at that point. To avoid this, you can use `/subscriptions` to set a pre-subscription. **Example usage:** curl -X PUT \\\\ https://api.us-east-1.mbedcloud.com/v2/subscriptions/{device-id}/{resourcePath} \\\\ -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.add_resource_subscription(device_id, _resource_path, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_id: A unique Device Management device ID for the endpoint. Note that the ID must be an exact match. You cannot use wildcards here. (required)\n :param str _resource_path: The URL of the resource. 
(required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.add_resource_subscription_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n else:\n (data) = self.add_resource_subscription_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_resource_subscription(self, device_id, _resource_path, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.check_resource_subscription_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n else:\n (data) = self.check_resource_subscription_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n return data", "response": "Read subscription status # noqa: E501\n\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.check_resource_subscription(device_id, _resource_path, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_id: A unique Device Management device ID for the endpoint. Note that the ID must be an exact match. You cannot use wildcards here. (required)\n :param str _resource_path: The URL of the resource. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete all resource subscriptions from an endpoint.", "response": "def delete_endpoint_subscriptions(self, device_id, **kwargs): # noqa: E501\n \"\"\"Delete subscriptions from an endpoint # noqa: E501\n\n Deletes all resource subscriptions in a single endpoint. 
**Example usage:** curl -X DELETE \\\\ https://api.us-east-1.mbedcloud.com/v2/subscriptions/{device-id} \\\\ -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_endpoint_subscriptions(device_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_id: A unique Device Management device ID for the endpoint. Note that the ID must be an exact match. You cannot use wildcards here. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_endpoint_subscriptions_with_http_info(device_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_endpoint_subscriptions_with_http_info(device_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_pre_subscriptions(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_pre_subscriptions_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.delete_pre_subscriptions_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Remove pre - subscriptions."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_resource_subscription(self, device_id, _resource_path, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_resource_subscription_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n else:\n (data) = self.delete_resource_subscription_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n return data", "response": "Remove a subscription from a resource path."} {"SOURCE": 
"codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_endpoint_subscriptions(self, device_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_endpoint_subscriptions_with_http_info(device_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_endpoint_subscriptions_with_http_info(device_id, **kwargs) # noqa: E501\n return data", "response": "Get all subscribed resources from a single endpoint."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting pre - subscriptions for a specific resource.", "response": "def get_pre_subscriptions(self, **kwargs): # noqa: E501\n \"\"\"Get pre-subscriptions # noqa: E501\n\n You can retrieve the pre-subscription data with the GET operation. The server returns with the same JSON structure as described above. If there are no pre-subscribed resources, it returns with an empty array. **Example usage:** curl -X GET https://api.us-east-1.mbedcloud.com/v2/subscriptions -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_pre_subscriptions(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :return: PresubscriptionArray\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_pre_subscriptions_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_pre_subscriptions_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate the pre - subscriptions for a specific resource type and endpoint ID.", "response": "def update_pre_subscriptions(self, presubsription, **kwargs): # noqa: E501\n \"\"\"Set pre-subscriptions # noqa: E501\n\n Pre-subscription is a set of rules and patterns put by the application. When an endpoint registers and its ID, type and registered resources match the pre-subscription data, Device Management Connect sends subscription requests to the device automatically. The pattern may include the endpoint ID (optionally having an `*` character at the end), endpoint type, a list of resources or expressions with an `*` character at the end. Subscriptions based on pre-subscriptions are done when device registers or does register update. To remove the pre-subscription data, put an empty array as a rule. **Notification rules** A web application can place dynamic observation rules for individual Object Instances and Resources to define when the device sends observations. More information in [Notification rules](/docs/current/connecting/resource-change-webapp.html). **Limits**: - The maximum length of the endpoint name and endpoint type is 64 characters. - The maximum length of the resource path is 128 characters. - You can listen to 256 separate resource paths. - The maximum number of pre-subscription entries is 1024. 
**Example request:** ``` curl -X PUT \\\\ https://api.us-east-1.mbedcloud.com/v2/subscriptions \\\\ -H 'authorization: Bearer {api-key}' \\\\ -H 'content-type: application/json' \\\\ -d '[ { \\\"endpoint-name\\\": \\\"node-001\\\", \\\"resource-path\\\": [\\\"/dev\\\"] }, { \\\"endpoint-type\\\": \\\"Light\\\", \\\"resource-path\\\": [\\\"/sen/*\\\"] }, { \\\"endpoint-name\\\": \\\"node*\\\" }, { \\\"endpoint-type\\\": \\\"Sensor\\\" }, { \\\"resource-path\\\": [\\\"/dev/temp\\\",\\\"/dev/hum\\\"] } ]' ``` - Subscribe to `/dev` resource of endpoint named `node-001`. - Subscribe to `Light` type of endpoints and their resources prefixed with `/sen/`. - Subscribe to all observable resources of endpoint names prefixed with `node`. - Subscribe to all observable resources of `Sensor` type endpoints. - Subscribe to `/dev/temp` and `/dev/hum` resources of all endpoints. **Note**: For efficiency reasons, you should use resource path patterns in the pre-subscription data. This prevents the notification flow from unwanted resources. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_pre_subscriptions(presubsription, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param PresubscriptionArray presubsription: Array of pre-subscriptions. 
(required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_pre_subscriptions_with_http_info(presubsription, **kwargs) # noqa: E501\n else:\n (data) = self.update_pre_subscriptions_with_http_info(presubsription, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_psk(self, **kwargs):\n api = self._get_api(bootstrap.PreSharedKeysApi)\n item = PreSharedKey._create_request_map(kwargs)\n item = models.PreSharedKey(**item)\n api.upload_pre_shared_key(item)\n return PreSharedKey(item)", "response": "Add a new pre - shared key to the current user s registry."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_psk(self, endpoint_name, **kwargs):\n api = self._get_api(bootstrap.PreSharedKeysApi)\n return PreSharedKey(api.get_pre_shared_key(endpoint_name=endpoint_name))", "response": "Get a pre - shared key for the given endpoint."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nlisting all pre shared keys.", "response": "def list_psks(self, **kwargs):\n \"\"\"List\"\"\"\n api = self._get_api(bootstrap.PreSharedKeysApi)\n return PaginatedResponse(api.list_pre_shared_keys, lwrap_type=PreSharedKey, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeletes a pre shared key.", "response": "def delete_psk(self, endpoint_name, **kwargs):\n \"\"\"Delete\"\"\"\n api = self._get_api(bootstrap.PreSharedKeysApi)\n return api.delete_pre_shared_key(endpoint_name=endpoint_name)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the ca_id of this DeviceData.", "response": "def ca_id(self, ca_id):\n \"\"\"\n Sets the ca_id of this DeviceData.\n The certificate issuer's ID.\n\n 
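The pre-subscription rules documented above (a trailing `*` acting as a prefix wildcard on endpoint names and resource paths, with omitted keys matching anything) can be illustrated with a small matcher. This is a hypothetical client-side helper for reasoning about the rules, not the service's implementation:

```python
def presub_matches(rule, endpoint_name, endpoint_type, resource_path):
    """Check whether one pre-subscription rule matches a device/resource.

    `rule` uses the documented JSON keys ('endpoint-name',
    'endpoint-type', 'resource-path'); a trailing '*' is treated as a
    prefix wildcard, and any omitted key matches everything.
    """
    def matches(pattern, value):
        if pattern.endswith("*"):
            return value.startswith(pattern[:-1])
        return value == pattern

    name_pat = rule.get("endpoint-name")
    if name_pat is not None and not matches(name_pat, endpoint_name):
        return False
    if "endpoint-type" in rule and rule["endpoint-type"] != endpoint_type:
        return False
    paths = rule.get("resource-path")
    if paths is not None and not any(matches(p, resource_path) for p in paths):
        return False
    return True
```

For example, the documented rule `{"endpoint-name": "node*"}` matches an endpoint named `node-001` regardless of its type or resources, while `{"endpoint-type": "Light", "resource-path": ["/sen/*"]}` matches only `Light` endpoints on resources prefixed with `/sen/`.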
:param ca_id: The ca_id of this DeviceData.\n :type: str\n \"\"\"\n if ca_id is not None and len(ca_id) > 500:\n raise ValueError(\"Invalid value for `ca_id`, length must be less than or equal to `500`\")\n\n self._ca_id = ca_id"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef deployed_state(self, deployed_state):\n allowed_values = [\"development\", \"production\"]\n if deployed_state not in allowed_values:\n raise ValueError(\n \"Invalid value for `deployed_state` ({0}), must be one of {1}\"\n .format(deployed_state, allowed_values)\n )\n\n self._deployed_state = deployed_state", "response": "Sets the deployed_state of this DeviceData."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef device_class(self, device_class):\n if device_class is not None and len(device_class) > 32:\n raise ValueError(\"Invalid value for `device_class`, length must be less than or equal to `32`\")\n\n self._device_class = device_class", "response": "Sets the device_class of this DeviceData."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef device_key(self, device_key):\n if device_key is not None and len(device_key) > 512:\n raise ValueError(\"Invalid value for `device_key`, length must be less than or equal to `512`\")\n\n self._device_key = device_key", "response": "Sets the device_key of this DeviceData."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the endpoint_type of this DeviceData.", "response": "def endpoint_type(self, endpoint_type):\n \"\"\"\n Sets the endpoint_type of this DeviceData.\n The endpoint type of the device. 
For example, the device is a gateway.\n\n :param endpoint_type: The endpoint_type of this DeviceData.\n :type: str\n \"\"\"\n if endpoint_type is not None and len(endpoint_type) > 64:\n raise ValueError(\"Invalid value for `endpoint_type`, length must be less than or equal to `64`\")\n\n self._endpoint_type = endpoint_type"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef mechanism(self, mechanism):\n allowed_values = [\"connector\", \"direct\"]\n if mechanism not in allowed_values:\n raise ValueError(\n \"Invalid value for `mechanism` ({0}), must be one of {1}\"\n .format(mechanism, allowed_values)\n )\n\n self._mechanism = mechanism", "response": "Sets the mechanism of this DeviceData."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _update_api_client(self, api_parent_class=None):\n clients = ([self.api_clients[api_parent_class]]\n if api_parent_class else self.api_clients.values())\n\n for api_client in clients:\n api_client.configuration.host = (self.config.get('host') or\n api_client.configuration.host)\n api_client.configuration.api_key['Authorization'] = self.config['api_key']", "response": "Updates the ApiClient object of specified parent api class or all of them"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nverify that the filters are valid.", "response": "def _verify_filters(self, kwargs, obj, encode=False):\n \"\"\"Legacy entrypoint with 'encode' flag\"\"\"\n return (filters.legacy_filter_formatter if encode else filters.filter_formatter)(\n kwargs,\n obj._get_attributes_map()\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_last_api_metadata(self):\n last_metadata = None\n for key, api in iteritems(self.apis):\n api_client = api.api_client\n if api_client is not None:\n metadata = api_client.get_last_metadata()\n if metadata is not None 
and metadata.get('timestamp', None) is not None:\n if last_metadata is None:\n last_metadata = metadata\n elif metadata[\"timestamp\"] >= last_metadata[\"timestamp\"]:\n last_metadata = metadata\n if last_metadata is not None:\n last_metadata = ApiMetadata(last_metadata.get(\"url\"),\n last_metadata.get(\"method\"),\n last_metadata.get(\"response\", None),\n last_metadata.get(\"return_data\", None),\n last_metadata.get(\"exception\", None))\n return last_metadata", "response": "Get meta data for the last Mbed Cloud API call."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn all arguments received in init and this method call", "response": "def success(self, **kwargs):\n \"\"\"Returns all arguments received in init and this method call\"\"\"\n response = {'success': True}\n # check dates can be manipulated\n response.update(kwargs)\n response.update(self.kwargs)\n response['test_argument3'] = datetime.timedelta(days=1) + response['test_argument3']\n return response"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate attributes of the object.", "response": "def update_attributes(self, updates):\n \"\"\"Update attributes.\"\"\"\n if not isinstance(updates, dict):\n updates = updates.to_dict()\n for sdk_key, spec_key in self._get_attributes_map().items():\n attr = '_%s' % sdk_key\n if spec_key in updates and not hasattr(self, attr):\n setattr(self, attr, updates[spec_key])"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef to_dict(self):\n dictionary = {}\n for key, value in iteritems(self.__dict__):\n property_name = key[1:]\n if hasattr(self, property_name):\n dictionary.update({property_name: getattr(self, property_name, None)})\n return dictionary", "response": "Return dictionary of object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef etag(self, etag):\n if etag is 
None:\n raise ValueError(\"Invalid value for `etag`, must not be `None`\")\n if etag is not None and not re.search('[A-Za-z0-9]{0,256}', etag):\n raise ValueError(\"Invalid value for `etag`, must be a follow pattern or equal to `/[A-Za-z0-9]{0,256}/`\")\n\n self._etag = etag", "response": "Sets the etag of this BulkResponse."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the id of this BulkResponse.", "response": "def id(self, id):\n \"\"\"\n Sets the id of this BulkResponse.\n Bulk ID\n\n :param id: The id of this BulkResponse.\n :type: str\n \"\"\"\n if id is None:\n raise ValueError(\"Invalid value for `id`, must not be `None`\")\n if id is not None and not re.search('^[A-Za-z0-9]{32}', id):\n raise ValueError(\"Invalid value for `id`, must be a follow pattern or equal to `/^[A-Za-z0-9]{32}/`\")\n\n self._id = id"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the mfa_status of this AccountInfo.", "response": "def mfa_status(self, mfa_status):\n \"\"\"\n Sets the mfa_status of this AccountInfo.\n The enforcement status of the multi-factor authentication, either 'enforced' or 'optional'.\n\n :param mfa_status: The mfa_status of this AccountInfo.\n :type: str\n \"\"\"\n allowed_values = [\"enforced\", \"optional\"]\n if mfa_status not in allowed_values:\n raise ValueError(\n \"Invalid value for `mfa_status` ({0}), must be one of {1}\"\n .format(mfa_status, allowed_values)\n )\n\n self._mfa_status = mfa_status"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef device_create(self, device, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_create_with_http_info(device, **kwargs) # noqa: E501\n else:\n (data) = self.device_create_with_http_info(device, **kwargs) # noqa: E501\n return data", "response": "Create a new device."} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate a brief explanation for the following Python 3 code\ndef device_destroy(self, id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_destroy_with_http_info(id, **kwargs) # noqa: E501\n else:\n (data) = self.device_destroy_with_http_info(id, **kwargs) # noqa: E501\n return data", "response": "Delete a device. Only available for devices with a developer certificate."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef device_event_list(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_event_list_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.device_event_list_with_http_info(**kwargs) # noqa: E501\n return data", "response": "List all device events for an account."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef device_event_retrieve(self, device_event_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_event_retrieve_with_http_info(device_event_id, **kwargs) # noqa: E501\n else:\n (data) = self.device_event_retrieve_with_http_info(device_event_id, **kwargs) # noqa: E501\n return data", "response": "Retrieve a specific device event."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef device_list(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_list_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.device_list_with_http_info(**kwargs) # noqa: E501\n return data", "response": "List all devices. # noqa: E501\n\n List all devices. # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.device_list(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param int limit: How many objects to retrieve in the page.\n :param str order: The order of the records based on creation time, `ASC` or `DESC`; by default `ASC`.\n :param str after: The ID of the item after which to retrieve the next page.\n :param str include: Comma-separated list of data fields to return. Currently supported: `total_count`.\n :param str filter: URL encoded query string parameter to filter returned data. ##### Filtering ```?filter={URL encoded query string}``` The query string is made up of key/value pairs separated by ampersands. So for a query of ```key1=value1&key2=value2&key3=value3``` this would be encoded as follows: ```?filter=key1%3Dvalue1%26key2%3Dvalue2%26key3%3Dvalue3``` ###### Filterable fields: The table below lists all the fields that can be filtered on with certain filters:
| Field | = / __eq / __neq | __in / __nin | __lte / __gte |
| --- | --- | --- | --- |
| account_id | \u2713 | \u2713 | |
| auto_update | \u2713 | \u2713 | |
| bootstrap_expiration_date | \u2713 | \u2713 | \u2713 |
| bootstrapped_timestamp | \u2713 | \u2713 | \u2713 |
| ca_id | \u2713 | \u2713 | |
| connector_expiration_date | \u2713 | \u2713 | \u2713 |
| created_at | \u2713 | \u2713 | \u2713 |
| custom_attributes | \u2713 | | |
| deployed_state | \u2713 | \u2713 | |
| deployment | \u2713 | \u2713 | |
| description | \u2713 | \u2713 | |
| device_class | \u2713 | \u2713 | |
| device_execution_mode | \u2713 | \u2713 | |
| device_key | \u2713 | \u2713 | |
| endpoint_name | \u2713 | \u2713 | |
| endpoint_type | \u2713 | \u2713 | |
| enrolment_list_timestamp | \u2713 | \u2713 | \u2713 |
| etag | \u2713 | \u2713 | \u2713 |
| firmware_checksum | \u2713 | \u2713 | |
| host_gateway | \u2713 | \u2713 | |
| id | \u2713 | \u2713 | |
| manifest | \u2713 | \u2713 | |
| manifest_timestamp | \u2713 | \u2713 | \u2713 |
| mechanism | \u2713 | \u2713 | |
| mechanism_url | \u2713 | \u2713 | |
| name | \u2713 | \u2713 | |
| serial_number | \u2713 | \u2713 | |
| state | \u2713 | \u2713 | |
| updated_at | \u2713 | \u2713 | \u2713 |
| vendor_id | \u2713 | \u2713 | |
  The examples below show the queries in *unencoded* form. ###### By device properties (all properties are filterable): ```state=[unenrolled|cloud_enrolling|bootstrapped|registered]``` ```device_class={value}``` ###### On date-time fields: Date-time fields should be specified in UTC RFC3339 format ```YYYY-MM-DDThh:mm:ss.msZ```. There are three permitted variations: * UTC RFC3339 with milliseconds e.g. 2016-11-30T16:25:12.1234Z * UTC RFC3339 without milliseconds e.g. 2016-11-30T16:25:12Z * UTC RFC3339 shortened - without milliseconds and punctuation e.g. 20161130T162512Z Date-time filtering supports three operators: * equality * greater than or equal to – field name suffixed with ```__gte``` * less than or equal to – field name suffixed with ```__lte``` Lower and upper limits to a date-time range may be specified by including both the ```__gte``` and ```__lte``` forms in the filter. ```{field name}[|__lte|__gte]={UTC RFC3339 date-time}``` ###### On device custom attributes: ```custom_attributes__{param}={value}``` ```custom_attributes__tag=TAG1``` ##### Multi-field example ```state=bootstrapped&created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z``` Encoded: ```?filter=state%3Dbootstrapped%26created_at__gte%3D2016-11-30T16%3A25%3A12.1234Z%26created_at__lte%3D2016-12-30T00%3A00%3A00Z``` ##### Filtering with filter operators String field filtering supports the following operators: * equality: `__eq` * non-equality: `__neq` * in: `__in` * not in: `__nin` For `__in` and `__nin` filters, the list of parameters must be comma-separated: `state__nin=unenrolled,deregistered`\n :return: DevicePage\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlists all device events.", "response": "def device_log_list(self, **kwargs): # noqa: E501\n \"\"\"DEPRECATED: List all device events. # noqa: E501\n\n DEPRECATED: List all device events.
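The `filter` encoding described above — key/value pairs joined with ampersands, then URL-encoded as a whole — can be sketched with a small helper (hypothetical, not an SDK API):

```python
from urllib.parse import quote

def build_filter(**fields):
    """Join key/value pairs with '&' and percent-encode the whole
    string for use as the `filter` query parameter, so that
    key1=value1&key2=value2 becomes key1%3Dvalue1%26key2%3Dvalue2.
    """
    raw = "&".join("{}={}".format(k, v) for k, v in fields.items())
    # safe="" forces '=' and '&' to be encoded as %3D and %26.
    return quote(raw, safe="")
```

For instance, `build_filter(state="bootstrapped")` yields `state%3Dbootstrapped`, matching the encoded examples in the docstrings above.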
Use `/v3/device-events/` instead. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.device_log_list(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param int limit: How many objects to retrieve in the page.\n :param str order: The order of the records based on creation time, `ASC` or `DESC`; by default `ASC`.\n :param str after: The ID of the item after which to retrieve the next page.\n :param str include: Comma-separated list of data fields to return. Currently supported: `total_count`.\n :param str filter: URL encoded query string parameter to filter returned data. ##### Filtering ```?filter={URL encoded query string}``` The query string is made up of key/value pairs separated by ampersands. So for a query of ```key1=value1&key2=value2&key3=value3``` this would be encoded as follows: ```?filter=key1%3Dvalue1%26key2%3Dvalue2%26key3%3Dvalue3``` ###### Filterable fields: The table below lists all the fields that can be filtered on with certain filters:
| Field | = / __eq / __neq | __in / __nin | __lte / __gte |
| --- | --- | --- | --- |
| date_time | \u2713 | \u2713 | \u2713 |
| description | \u2713 | \u2713 | |
| id | \u2713 | \u2713 | |
| device_id | \u2713 | \u2713 | |
| event_type | \u2713 | \u2713 | |
| state_change | \u2713 | \u2713 | |
  The examples below show the queries in *unencoded* form. ###### By id: ```id={id}``` ###### By state change: ```state_change=[True|False]``` ###### By event type: ```event_type={value}``` ###### On date-time fields: Date-time fields should be specified in UTC RFC3339 format ```YYYY-MM-DDThh:mm:ss.msZ```. There are three permitted variations: * UTC RFC3339 with milliseconds e.g. 2016-11-30T16:25:12.1234Z * UTC RFC3339 without milliseconds e.g. 2016-11-30T16:25:12Z * UTC RFC3339 shortened - without milliseconds and punctuation e.g. 20161130T162512Z Date-time filtering supports three operators: * equality * greater than or equal to – field name suffixed with ```__gte``` * less than or equal to – field name suffixed with ```__lte``` Lower and upper limits to a date-time range may be specified by including both the ```__gte``` and ```__lte``` forms in the filter. ```{field name}[|__lte|__gte]={UTC RFC3339 date-time}``` ##### Multi-field example ```id=0158d38771f70000000000010010038c&state_change=True&date_time__gte=2016-11-30T16:25:12.1234Z``` Encoded: ```?filter=id%3D0158d38771f70000000000010010038c%26state_change%3DTrue%26date_time__gte%3D2016-11-30T16%3A25%3A12.1234Z``` ##### Filtering with filter operators String field filtering supports the following operators: * equality: `__eq` * non-equality: `__neq` * in: `__in` * not in: `__nin` For `__in` and `__nin` filters, the list of parameters must be comma-separated: `event_type__in=update.device.device-created,update.device.device-updated`\n :return: DeviceEventPage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_log_list_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.device_log_list_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nretrieving a specific device event.", "response": "def
device_log_retrieve(self, device_event_id, **kwargs): # noqa: E501\n \"\"\"DEPRECATED: Retrieve a device event. # noqa: E501\n\n Retrieve device event (deprecated, use /v3/device-events/{device_event_id}/ instead) # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.device_log_retrieve(device_event_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_event_id: (required)\n :return: DeviceEventData\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_log_retrieve_with_http_info(device_event_id, **kwargs) # noqa: E501\n else:\n (data) = self.device_log_retrieve_with_http_info(device_event_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a new device query.", "response": "def device_query_create(self, device, **kwargs): # noqa: E501\n \"\"\"Create a device query # noqa: E501\n\n Create a new device query. # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.device_query_create(device, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param DeviceQueryPostPutRequest device: (required)\n :return: DeviceQuery\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_query_create_with_http_info(device, **kwargs) # noqa: E501\n else:\n (data) = self.device_query_create_with_http_info(device, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeleting a device query.", "response": "def device_query_destroy(self, query_id, **kwargs): # noqa: E501\n \"\"\"Delete a device query # noqa: E501\n\n Delete a device query. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.device_query_destroy(query_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str query_id: (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_query_destroy_with_http_info(query_id, **kwargs) # noqa: E501\n else:\n (data) = self.device_query_destroy_with_http_info(query_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef device_query_list(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_query_list_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.device_query_list_with_http_info(**kwargs) # noqa: E501\n return data", "response": "List all device queries."} {"SOURCE": "codesearchnet", "instruction": 
"Implement a Python 3 function for\nretrieving a specific device query.", "response": "def device_query_retrieve(self, query_id, **kwargs): # noqa: E501\n \"\"\"Retrieve a device query. # noqa: E501\n\n Retrieve a specific device query. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.device_query_retrieve(query_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str query_id: (required)\n :return: DeviceQuery\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_query_retrieve_with_http_info(query_id, **kwargs) # noqa: E501\n else:\n (data) = self.device_query_retrieve_with_http_info(query_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates a device query.", "response": "def device_query_update(self, query_id, body, **kwargs): # noqa: E501\n \"\"\"Update a device query # noqa: E501\n\n Update a specific device query. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.device_query_update(query_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str query_id: (required)\n :param DeviceQueryPostPutRequest body: Device query update object.
(required)\n :return: DeviceQuery\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_query_update_with_http_info(query_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.device_query_update_with_http_info(query_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef device_retrieve(self, id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_retrieve_with_http_info(id, **kwargs) # noqa: E501\n else:\n (data) = self.device_retrieve_with_http_info(id, **kwargs) # noqa: E501\n return data", "response": "Get a devices # noqa: E501\n\n Retrieve information about a specific device. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.device_retrieve(id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str id: (required)\n :return: DeviceData\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef device_update(self, id, device, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.device_update_with_http_info(id, device, **kwargs) # noqa: E501\n else:\n (data) = self.device_update_with_http_info(id, device, **kwargs) # noqa: E501\n return data", "response": "Update a specific device."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef group_create(self, group, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.group_create_with_http_info(group, **kwargs) 
# noqa: E501\n else:\n (data) = self.group_create_with_http_info(group, **kwargs) # noqa: E501\n return data", "response": "Create a group # noqa: E501\n\n Create a group # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.group_create(group, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param Group group: Group (required)\n :return: DeviceGroup\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef group_delete(self, device_group_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.group_delete_with_http_info(device_group_id, **kwargs) # noqa: E501\n else:\n (data) = self.group_delete_with_http_info(device_group_id, **kwargs) # noqa: E501\n return data", "response": "Delete a group # noqa: E501\n\n Delete a group. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.group_delete(device_group_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_group_id: The ID of the group (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nlisting all groups. # noqa: E501 List all groups. # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass asynchronous=True >>> thread = api.group_list(asynchronous=True) >>> result = thread.get() :param asynchronous bool :param int limit: How many objects to retrieve in the page. :param str order: The order of the records based on creation time, `ASC` or `DESC`; by default `ASC`. 
:param str after: The ID of the item after which to retrieve the next page. :param str include: Comma-separated list of data fields to return. Currently supported: `total_count`. :param str filter: URL-encoded query string parameter to filter returned data `?filter={URL-encoded query string}` ###### Filterable fields: The table lists all the fields that can be filtered on with certain filters:
| Field | = / __eq / __neq | __in / __nin | __lte / __gte |
|---|---|---|---|
| id | \u2713 | \u2713 | |
| devices_count | \u2713 | \u2713 | \u2713 |
| name | \u2713 | \u2713 | |
| description | \u2713 | \u2713 | |
| custom_attributes | \u2713 | | |
| created_at | \u2713 | \u2713 | \u2713 |
| updated_at | \u2713 | \u2713 | \u2713 |
| etag | \u2713 | \u2713 | \u2713 |
  The query string is made up of key-value pairs separated by ampersands. For example, this query: `key1=value1&key2=value2&key3=value3` would be URL-encoded as: `?filter=key1__eq%3Dvalue1%26key2__eq%3Dvalue2%26key3__eq%3Dvalue3` **Filtering by properties** `name__eq=mygroup` **Filtering on date-time fields** Date-time fields should be specified in UTC RFC3339 format, `YYYY-MM-DDThh:mm:ss.msZ`. There are three permitted variations: * UTC RFC3339 with milliseconds. Example: `2016-11-30T16:25:12.1234Z` * UTC RFC3339 without milliseconds. Example: `2016-11-30T16:25:12Z` * UTC RFC3339 shortened without milliseconds and punctuation. Example: `20161130T162512Z` Date-time filtering supports three operators: * equality by appending `__eq` to the field name * greater than or equal to by appending `__gte` to the field name * less than or equal to by appending `__lte` to the field name `{field name}[|__eq|__lte|__gte]={UTC RFC3339 date-time}` Time ranges may be specified by including both the `__gte` and `__lte` forms in the filter. For example: `created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering on multiple fields** `name__eq=mygroup&created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering with filter operators** String field filtering supports the following operators: * equality: `__eq` * non-equality: `__neq` * in : `__in` * not in: `__nin` For `__in` and `__nin` filters list of parameters must be comma-separated: `name__in=group1,group2` :return: DeviceGroupPage If the method is called asynchronously, returns the request thread.", "response": "def group_list(self, **kwargs): # noqa: E501\n \"\"\"List all groups. # noqa: E501\n\n List all groups. # noqa: E501\n This method makes a synchronous HTTP request by default. 
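The encoding rules described above can be sketched in plain Python; `encode_filter` is an illustrative helper name, not part of the SDK:

```python
from urllib.parse import quote

def encode_filter(constraints):
    """Join django-style ``key__op=value`` pairs with '&' and
    percent-encode the result as the ``filter`` parameter value."""
    raw = '&'.join('%s=%s' % (key, value) for key, value in constraints)
    return '?filter=' + quote(raw, safe='')

# Reproduces the URL-encoded example from the docstring above.
print(encode_filter([('key1__eq', 'value1'),
                     ('key2__eq', 'value2'),
                     ('key3__eq', 'value3')]))
# -> ?filter=key1__eq%3Dvalue1%26key2__eq%3Dvalue2%26key3__eq%3Dvalue3
```

`quote(..., safe='')` leaves the unreserved characters (letters, digits, `_`, `-`, `.`, `~`) intact, which is why the `__eq` suffixes survive while `=` and `&` become `%3D` and `%26`.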
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.group_list(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param int limit: How many objects to retrieve in the page.\n :param str order: The order of the records based on creation time, `ASC` or `DESC`; by default `ASC`.\n :param str after: The ID of the item after which to retrieve the next page.\n :param str include: Comma-separated list of data fields to return. Currently supported: `total_count`.\n :param str filter: URL-encoded query string parameter to filter returned data `?filter={URL-encoded query string}` ###### Filterable fields: The table lists all the fields that can be filtered on with certain filters:
| Field | = / __eq / __neq | __in / __nin | __lte / __gte |
|---|---|---|---|
| id | \u2713 | \u2713 | |
| devices_count | \u2713 | \u2713 | \u2713 |
| name | \u2713 | \u2713 | |
| description | \u2713 | \u2713 | |
| custom_attributes | \u2713 | | |
| created_at | \u2713 | \u2713 | \u2713 |
| updated_at | \u2713 | \u2713 | \u2713 |
| etag | \u2713 | \u2713 | \u2713 |
  The query string is made up of key-value pairs separated by ampersands. For example, this query: `key1=value1&key2=value2&key3=value3` would be URL-encoded as: `?filter=key1__eq%3Dvalue1%26key2__eq%3Dvalue2%26key3__eq%3Dvalue3` **Filtering by properties** `name__eq=mygroup` **Filtering on date-time fields** Date-time fields should be specified in UTC RFC3339 format, `YYYY-MM-DDThh:mm:ss.msZ`. There are three permitted variations: * UTC RFC3339 with milliseconds. Example: `2016-11-30T16:25:12.1234Z` * UTC RFC3339 without milliseconds. Example: `2016-11-30T16:25:12Z` * UTC RFC3339 shortened without milliseconds and punctuation. Example: `20161130T162512Z` Date-time filtering supports three operators: * equality by appending `__eq` to the field name * greater than or equal to by appending `__gte` to the field name * less than or equal to by appending `__lte` to the field name `{field name}[|__eq|__lte|__gte]={UTC RFC3339 date-time}` Time ranges may be specified by including both the `__gte` and `__lte` forms in the filter. 
For example: `created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering on multiple fields** `name__eq=mygroup&created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering with filter operators** String field filtering supports the following operators: * equality: `__eq` * non-equality: `__neq` * in : `__in` * not in: `__nin` For `__in` and `__nin` filters list of parameters must be comma-separated: `name__in=group1,group2`\n :return: DeviceGroupPage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.group_list_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.group_list_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef group_members_add(self, device_group_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.group_members_add_with_http_info(device_group_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.group_members_add_with_http_info(device_group_id, body, **kwargs) # noqa: E501\n return data", "response": "Add a device to a group."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving one device from a group.", "response": "def group_members_remove(self, device_group_id, body, **kwargs): # noqa: E501\n \"\"\"Remove a device from a group # noqa: E501\n\n Remove one device from a group # noqa: E501\n This method makes a synchronous HTTP request by default. 
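The three permitted UTC RFC3339 variants listed above can be recognised with `datetime.strptime`; the helper below is an illustrative sketch, not SDK code:

```python
from datetime import datetime, timezone

# One strptime pattern per permitted variant: with milliseconds,
# without milliseconds, and the shortened form without punctuation.
_RFC3339_FORMATS = (
    '%Y-%m-%dT%H:%M:%S.%fZ',
    '%Y-%m-%dT%H:%M:%SZ',
    '%Y%m%dT%H%M%SZ',
)

def parse_rfc3339(value):
    """Try each permitted variant in turn and return an aware datetime."""
    for fmt in _RFC3339_FORMATS:
        try:
            return datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    raise ValueError('not a permitted RFC3339 variant: %r' % value)

parse_rfc3339('2016-11-30T16:25:12.1234Z')  # the docs' milliseconds example
```

Note that `%f` accepts one to six fractional-second digits, so the `.1234Z` example from the documentation parses without padding.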
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.group_members_remove(device_group_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_group_id: The ID of the group (required)\n :param DeviceGroupManipulation body: Body of the request (required)\n :return: DevicePage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.group_members_remove_with_http_info(device_group_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.group_members_remove_with_http_info(device_group_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef group_members_retrieve(self, device_group_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.group_members_retrieve_with_http_info(device_group_id, **kwargs) # noqa: E501\n else:\n (data) = self.group_members_retrieve_with_http_info(device_group_id, **kwargs) # noqa: E501\n return data", "response": "Get a page of devices"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef group_retrieve(self, device_group_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.group_retrieve_with_http_info(device_group_id, **kwargs) # noqa: E501\n else:\n (data) = self.group_retrieve_with_http_info(device_group_id, **kwargs) # noqa: E501\n return data", "response": "Get a group. # noqa: E501\n\n Get a group. # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.group_retrieve(device_group_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_group_id: The group ID (required)\n :return: DeviceGroup\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate the attributes of a group.", "response": "def group_update(self, device_group_id, group, **kwargs): # noqa: E501\n \"\"\"Modify the attributes of a group. # noqa: E501\n\n Modify the attributes of a group. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.group_update(device_group_id, group, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_group_id: (required)\n :param Group1 group: Group (required)\n :return: DeviceGroup\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.group_update_with_http_info(device_group_id, group, **kwargs) # noqa: E501\n else:\n (data) = self.group_update_with_http_info(device_group_id, group, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _notify(self, data):\n payload = tlv.decode(\n payload=data.get('payload'),\n content_type=data.get('ct'),\n decode_b64=True\n )\n super(CurrentResourceValue, self)._notify(payload)\n # after one response, close the channel\n self.ensure_stopped()", "response": "Notify this channel of inbound data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef account_id(self, account_id):\n if account_id is None:\n raise ValueError(\"Invalid value for `account_id`, must not 
be `None`\")\n if account_id is not None and len(account_id) > 250:\n raise ValueError(\"Invalid value for `account_id`, length must be less than or equal to `250`\")\n if account_id is not None and len(account_id) < 1:\n raise ValueError(\"Invalid value for `account_id`, length must be greater than or equal to `1`\")\n\n self._account_id = account_id", "response": "Sets the account_id of this ServicePackageQuotaHistoryReservation. The account_id of this ServicePackageQuotaHistoryReservation."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the campaign_name of this ServicePackageQuotaHistoryReservation.", "response": "def campaign_name(self, campaign_name):\n \"\"\"\n Sets the campaign_name of this ServicePackageQuotaHistoryReservation.\n Textual campaign name for this reservation.\n\n :param campaign_name: The campaign_name of this ServicePackageQuotaHistoryReservation.\n :type: str\n \"\"\"\n if campaign_name is None:\n raise ValueError(\"Invalid value for `campaign_name`, must not be `None`\")\n if campaign_name is not None and len(campaign_name) > 250:\n raise ValueError(\"Invalid value for `campaign_name`, length must be less than or equal to `250`\")\n if campaign_name is not None and len(campaign_name) < 1:\n raise ValueError(\"Invalid value for `campaign_name`, length must be greater than or equal to `1`\")\n\n self._campaign_name = campaign_name"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef id(self, id):\n if id is None:\n raise ValueError(\"Invalid value for `id`, must not be `None`\")\n if id is not None and len(id) > 250:\n raise ValueError(\"Invalid value for `id`, length must be less than or equal to `250`\")\n if id is not None and len(id) < 1:\n raise ValueError(\"Invalid value for `id`, length must be greater than or equal to `1`\")\n\n self._id = id", "response": "Sets the id of this ServicePackageQuotaHistoryReservation. 
id"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nnotifies this channel of inbound data", "response": "def notify(self, data):\n \"\"\"Notify this channel of inbound data\"\"\"\n string_channels = {\n ChannelIdentifiers.de_registrations,\n ChannelIdentifiers.registrations_expired\n }\n\n if data['channel'] in string_channels:\n message = {'device_id': data[\"value\"], 'channel': data[\"channel\"]}\n else:\n message = DeviceStateChanges._map_endpoint_data(data)\n\n return super(DeviceStateChanges, self).notify(message)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_connected_endpoints(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_connected_endpoints_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_connected_endpoints_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get all connected endpoints for a given resource type."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_endpoint_resources(self, device_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_endpoint_resources_with_http_info(device_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_endpoint_resources_with_http_info(device_id, **kwargs) # noqa: E501\n return data", "response": "Get the resources on an endpoint."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef metadata_wrapper(fn):\n @functools.wraps(fn)\n def wrapped_f(self, *args, **kwargs):\n self.last_metadata = {}\n self.last_metadata[\"url\"] = self.configuration.host + args[0]\n self.last_metadata[\"method\"] = args[1]\n self.last_metadata[\"timestamp\"] = time.time()\n try:\n return fn(self, *args, **kwargs)\n except Exception as 
e:\n self.last_metadata[\"exception\"] = e\n raise\n return wrapped_f", "response": "Save metadata of last api call."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmaking the HTTP request (synchronous) and return the deserialized data. To make an async request, set the asynchronous parameter. :param resource_path: Path to method endpoint. :param method: Method to call. :param path_params: Path parameters in the url. :param query_params: Query parameters in the url. :param header_params: Header parameters to be placed in the request header. :param body: Request body. :param post_params dict: Request post form parameters, for `application/x-www-form-urlencoded`, `multipart/form-data`. :param auth_settings list: Auth Settings names for the request. :param response: Response data type. :param files dict: key -> filename, value -> filepath, for `multipart/form-data`. :param asynchronous bool: execute request asynchronously :param _return_http_data_only: response data without head status code and headers :param collection_formats: dict of collection formats for path, query, header, and post parameters. :param _preload_content: if False, the urllib3.HTTPResponse object will be returned without reading/decoding response data. Default is True. :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of (connection, read) timeouts. :return: If asynchronous parameter is True, the request will be called asynchronously. The method will return the request thread. 
If parameter asynchronous is False or missing, then the method will return the response directly.", "response": "def call_api(self, resource_path, method,\n path_params=None, query_params=None, header_params=None,\n body=None, post_params=None, files=None,\n response_type=None, auth_settings=None, asynchronous=None,\n _return_http_data_only=None, collection_formats=None, _preload_content=True,\n _request_timeout=None):\n \"\"\"\n Makes the HTTP request (synchronous) and return the deserialized data.\n To make an async request, set the asynchronous parameter.\n\n :param resource_path: Path to method endpoint.\n :param method: Method to call.\n :param path_params: Path parameters in the url.\n :param query_params: Query parameters in the url.\n :param header_params: Header parameters to be\n placed in the request header.\n :param body: Request body.\n :param post_params dict: Request post form parameters,\n for `application/x-www-form-urlencoded`, `multipart/form-data`.\n :param auth_settings list: Auth Settings names for the request.\n :param response: Response data type.\n :param files dict: key -> filename, value -> filepath,\n for `multipart/form-data`.\n :param asynchronous bool: execute request asynchronously\n :param _return_http_data_only: response data without head status code and headers\n :param collection_formats: dict of collection formats for path, query,\n header, and post parameters.\n :param _preload_content: if False, the urllib3.HTTPResponse object will be returned without\n reading/decoding response data. Default is True.\n :param _request_timeout: timeout setting for this request. If one number provided, it will be total request\n timeout. 
It can also be a pair (tuple) of (connection, read) timeouts.\n :return:\n If asynchronous parameter is True,\n the request will be called asynchronously.\n The method will return the request thread.\n If parameter asynchronous is False or missing,\n then the method will return the response directly.\n \"\"\"\n if not asynchronous:\n return self.__call_api(resource_path, method,\n path_params, query_params, header_params,\n body, post_params, files,\n response_type, auth_settings,\n _return_http_data_only, collection_formats, _preload_content, _request_timeout)\n else:\n thread = self.pool.apply_async(self.__call_api, (resource_path, method,\n path_params, query_params,\n header_params, body,\n post_params, files,\n response_type, auth_settings,\n _return_http_data_only,\n collection_formats, _preload_content, _request_timeout))\n return thread"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprepare form parameters for post processing.", "response": "def prepare_post_parameters(self, post_params=None, files=None):\n \"\"\"\n Builds form parameters.\n\n :param post_params: Normal form parameters.\n :param files: File parameters.\n :return: Form parameters with files.\n \"\"\"\n params = post_params or []\n for key, values in (files or {}).items():\n for maybe_file_or_path in values if isinstance(values, list) else [values]:\n try:\n # use the parameter as if it was an open file object\n data = maybe_file_or_path.read()\n maybe_file_or_path = maybe_file_or_path.name\n except AttributeError:\n # then it is presumably a file path\n with open(maybe_file_or_path, 'rb') as fh:\n data = fh.read()\n basepath = os.path.basename(maybe_file_or_path)\n mimetype = mimetypes.guess_type(basepath)[0] or 'application/octet-stream'\n params.append((key, (basepath, data, mimetype)))\n return params"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert nested dictionary filters into django - style key value pairs", "response": "def 
_normalise_key_values(filter_obj, attr_map=None):\n \"\"\"Converts nested dictionary filters into django-style key value pairs\n\n Map filter operators and aliases to operator-land\n Additionally, perform replacements according to attribute map\n Automatically assumes __eq if not explicitly defined\n \"\"\"\n new_filter = {}\n for key, constraints in filter_obj.items():\n aliased_key = key\n if attr_map is not None:\n aliased_key = attr_map.get(key)\n if aliased_key is None:\n raise CloudValueError(\n 'Invalid key %r for filter attribute; must be one of:\\n%s' % (\n key,\n attr_map.keys()\n )\n )\n if not isinstance(constraints, dict):\n constraints = {'eq': constraints}\n for operator, value in constraints.items():\n # FIXME: deprecate this $ nonsense\n canonical_operator = FILTER_OPERATOR_ALIASES.get(operator.lstrip('$'))\n if canonical_operator is None:\n raise CloudValueError(\n 'Invalid operator %r for filter key %s; must be one of:\\n%s' % (\n operator,\n key,\n FILTER_OPERATOR_ALIASES.keys()\n )\n )\n canonical_key = str('%s__%s' % (aliased_key, canonical_operator))\n new_filter[canonical_key] = _normalise_value(value)\n return new_filter"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef legacy_filter_formatter(kwargs, attr_map):\n params = _depluralise_filters_key(copy.copy(kwargs))\n new_filter = _get_filter(sdk_filter=params.pop('filter', {}), attr_map=attr_map)\n if new_filter:\n new_filter = sorted([(k.rsplit('__eq')[0], v) for k, v in new_filter.items()])\n params['filter'] = urllib.parse.urlencode(new_filter)\n return params", "response": "Build a filter for update and device apis\n "} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nbuild a filter according to the cross - api specification", "response": "def filter_formatter(kwargs, attr_map):\n \"\"\"Builds a filter according to the cross-api specification\n\n :param kwargs: expected to contain {'filter': {filter 
dict}}\n :returns: {validated filter dict}\n \"\"\"\n params = _depluralise_filters_key(copy.copy(kwargs))\n params.update(_get_filter(sdk_filter=params.pop('filter', {}), attr_map=attr_map))\n return params"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\napproximates number of results according to the API", "response": "def count(self):\n \"\"\"Approximate number of results, according to the API\"\"\"\n if self._total_count is None:\n self._total_count = self._get_total_count()\n return self._total_count"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the first item from the query or None if there are no results", "response": "def first(self):\n \"\"\"Returns the first item from the query, or None if there are no results\"\"\"\n if self._results_cache:\n return self._results_cache[0]\n\n query = PaginatedResponse(func=self._func, lwrap_type=self._lwrap_type, **self._kwargs)\n try:\n return next(query)\n except StopIteration:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef to_dict(self):\n return dict(\n after=self._kwargs.get('after'),\n data=list(self),\n has_more=self._has_more,\n limit=self._max_results,\n order=self._kwargs.get('order', 'ASC'),\n page_size=self._page_size,\n total_count=self.count(),\n )", "response": "Returns a dictionary representation of the object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef data(self):\n import warnings\n warnings.warn(\n '`data` attribute is deprecated and will be removed in a future release, '\n 'use %s as an iterable instead' % (PaginatedResponse,),\n category=DeprecationWarning,\n stacklevel=2 # log wherever '.data' is referenced, rather than this line\n )\n return list(self)", "response": "Deprecated. 
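The `first()` implementation above reduces to a small iterator idiom that is worth isolating; `first_or_none` is a hypothetical standalone version of it:

```python
def first_or_none(iterable):
    # Same idiom as PaginatedResponse.first(): advance the iterator once
    # and translate exhaustion (StopIteration) into None.
    try:
        return next(iter(iterable))
    except StopIteration:
        return None

first_or_none([3, 2, 1])  # -> 3
first_or_none([])         # -> None
```

Catching `StopIteration` rather than pre-checking emptiness works for lazy paginated queries too, since the first page is only fetched when the iterator is advanced.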
Returns the data as a list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsending release notifications to interested parties", "response": "def main():\n \"\"\"Sends release notifications to interested parties\n\n Currently this is an arm-internal slack channel.\n\n 1. assumes you've set a token for an authorised slack user/bot.\n 2. assumes said user/bot is already in channel.\n otherwise: https://github.com/slackapi/python-slackclient#joining-a-channel\n \"\"\"\n slack_token = os.environ.get('SLACK_API_TOKEN')\n channel_id = os.environ.get('SLACK_CHANNEL', '#mbed-cloud-sdk')\n message = os.environ.get('SLACK_MESSAGE', (\n ':checkered_flag: New version of :snake: Python SDK released: *{version}* '\n '()'\n ))\n\n if not slack_token:\n print('no slack token')\n return\n\n version = os.environ.get('SLACK_NOTIFY_VERSION')\n if not version:\n try:\n import mbed_cloud\n except ImportError:\n pass\n else:\n version = mbed_cloud.__version__\n\n payload = message.format(version=version) if version else message\n\n print('notifying slack channel %s with payload:\\n%s' % (channel_id, payload))\n\n sc = SlackClient(slack_token)\n sc.api_call(\n 'chat.postMessage',\n channel=channel_id,\n text=payload,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef service(self, service):\n if service is None:\n raise ValueError(\"Invalid value for `service`, must not be `None`\")\n allowed_values = [\"lwm2m\", \"bootstrap\"]\n if service not in allowed_values:\n raise ValueError(\n \"Invalid value for `service` ({0}), must be one of {1}\"\n .format(service, allowed_values)\n )\n\n self._service = service", "response": "Sets the service of this TrustedCertificateInternalResp."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a new developer certificate.", "response": "def create_developer_certificate(self, authorization, body, **kwargs): # noqa: 
E501\n \"\"\"Create a new developer certificate to connect to the bootstrap server. # noqa: E501\n\n This REST API is intended to be used by customers to get a developer certificate (a certificate that can be flashed into multiple devices to connect to bootstrap server). **Note:** The number of developer certificates allowed per account is limited. Please see [Using your own certificate authority](/docs/current/mbed-cloud-deploy/instructions-for-factory-setup-and-device-provision.html#using-your-own-certificate-authority-with-mbed-cloud). **Example usage:** curl -X POST \\\"http://api.us-east-1.mbedcloud.com/v3/developer-certificates\\\" -H \\\"accept: application/json\\\" -H \\\"Authorization: Bearer THE_ACCESS_TOKEN\\\" -H \\\"content-type: application/json\\\" -d \\\"{ \\\\\\\"name\\\\\\\": \\\\\\\"THE_CERTIFICATE_NAME\\\\\\\", \\\\\\\"description\\\\\\\": \\\\\\\"THE_CERTIFICATE_DESCRIPTION\\\\\\\"}\\\" # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.create_developer_certificate(authorization, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str authorization: Bearer {Access Token}. 
(required)\n :param DeveloperCertificateRequestData body: (required)\n :return: DeveloperCertificateResponseData\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_developer_certificate_with_http_info(authorization, body, **kwargs) # noqa: E501\n else:\n (data) = self.create_developer_certificate_with_http_info(authorization, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_developer_certificate(self, developer_certificate_id, authorization, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_developer_certificate_with_http_info(developer_certificate_id, authorization, **kwargs) # noqa: E501\n else:\n (data) = self.get_developer_certificate_with_http_info(developer_certificate_id, authorization, **kwargs) # noqa: E501\n return data", "response": "Get an existing developer certificate."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the month of this ReportResponse.", "response": "def month(self, month):\n \"\"\"\n Sets the month of this ReportResponse.\n Month of requested billing report\n\n :param month: The month of this ReportResponse.\n :type: str\n \"\"\"\n if month is None:\n raise ValueError(\"Invalid value for `month`, must not be `None`\")\n if month is not None and not re.search('^\\\\d{4}-\\\\d{2}$', month):\n raise ValueError(\"Invalid value for `month`, must be a follow pattern or equal to `/^\\\\d{4}-\\\\d{2}$/`\")\n\n self._month = month"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_certificate_issuer_config(self, create_certificate_issuer_config, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n 
return self.create_certificate_issuer_config_with_http_info(create_certificate_issuer_config, **kwargs) # noqa: E501\n else:\n (data) = self.create_certificate_issuer_config_with_http_info(create_certificate_issuer_config, **kwargs) # noqa: E501\n return data", "response": "Create a certificate issuer configuration."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndelete a certificate issuer configuration.", "response": "def delete_certificate_issuer_config_by_id(self, certificate_issuer_configuration_id, **kwargs): # noqa: E501\n \"\"\"Delete certificate issuer configuration. # noqa: E501\n\n Delete the configured certificate issuer configuration. You can only delete the configurations of custom certificates. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_certificate_issuer_config_by_id(certificate_issuer_configuration_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str certificate_issuer_configuration_id: The ID of the certificate issuer configuration. 
(required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_certificate_issuer_config_by_id_with_http_info(certificate_issuer_configuration_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_certificate_issuer_config_by_id_with_http_info(certificate_issuer_configuration_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_certificate_issuer_config(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_certificate_issuer_config_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_certificate_issuer_config_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get certificate issuer configuration."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_certificate_issuer_config_by_id(self, certificate_issuer_configuration_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_certificate_issuer_config_by_id_with_http_info(certificate_issuer_configuration_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_certificate_issuer_config_by_id_with_http_info(certificate_issuer_configuration_id, **kwargs) # noqa: E501\n return data", "response": "Get a certificate issuer configuration by ID."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_certificate_issuer_configs(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_certificate_issuer_configs_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_certificate_issuer_configs_with_http_info(**kwargs) # noqa: 
E501\n return data", "response": "Get certificate issuer configurations."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_certificate_issuer_config(self, certificate_issuer_config_request, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_certificate_issuer_config_with_http_info(certificate_issuer_config_request, **kwargs) # noqa: E501\n else:\n (data) = self.update_certificate_issuer_config_with_http_info(certificate_issuer_config_request, **kwargs) # noqa: E501\n return data", "response": "Update the certificate issuer configuration."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_certificate_issuer_config_by_id(self, certificate_issuer_configuration_id, certificate_issuer_config_request, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_certificate_issuer_config_by_id_with_http_info(certificate_issuer_configuration_id, certificate_issuer_config_request, **kwargs) # noqa: E501\n else:\n (data) = self.update_certificate_issuer_config_by_id_with_http_info(certificate_issuer_configuration_id, certificate_issuer_config_request, **kwargs) # noqa: E501\n return data", "response": "Update the configured certificate issuer configuration."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the active_devices of this ReportBillingData.", "response": "def active_devices(self, active_devices):\n \"\"\"\n Sets the active_devices of this ReportBillingData.\n\n :param active_devices: The active_devices of this ReportBillingData.\n :type: int\n \"\"\"\n if active_devices is None:\n raise ValueError(\"Invalid value for `active_devices`, must not be `None`\")\n if active_devices is not None and active_devices < 0:\n raise ValueError(\"Invalid value for 
`active_devices`, must be a value greater than or equal to `0`\")\n\n self._active_devices = active_devices"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the firmware_updates of this ReportBillingData.", "response": "def firmware_updates(self, firmware_updates):\n \"\"\"\n Sets the firmware_updates of this ReportBillingData.\n\n :param firmware_updates: The firmware_updates of this ReportBillingData.\n :type: int\n \"\"\"\n if firmware_updates is None:\n raise ValueError(\"Invalid value for `firmware_updates`, must not be `None`\")\n if firmware_updates is not None and firmware_updates < 0:\n raise ValueError(\"Invalid value for `firmware_updates`, must be a value greater than or equal to `0`\")\n\n self._firmware_updates = firmware_updates"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main():\n last_known = '0'\n if os.path.isfile(metafile):\n with open(metafile) as fh:\n last_known = fh.read()\n\n import mbed_cloud\n current = mbed_cloud.__version__\n\n # how significant a change in version scheme should trigger a new changelog entry\n # (api major, api minor, sdk major, sdk minor, sdk patch)\n sigfigs = 4\n current_version = LooseVersion(current).version\n last_known_version = LooseVersion(last_known).version\n\n should_towncrier = current_version[:sigfigs] != last_known_version[:sigfigs]\n\n print('%s -- %s :: current vs previous changelog build' % (current, last_known))\n if should_towncrier:\n print('%s >> %s :: running changelog build' % (current, last_known))\n subprocess.check_call(\n ['towncrier', '--yes'],\n cwd=os.path.join(PROJECT_ROOT, 'docs', 'changelog')\n )\n with open(metafile, 'w') as fh:\n fh.write(current)", "response": "Writes out newsfile if significant version bump is needed"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef paths(self):\n filename = '.mbed_cloud_config.json'\n return [\n # Global 
config in /etc for *nix users\n \"/etc/%s\" % filename,\n\n # Config file in home directory\n os.path.join(os.path.expanduser(\"~\"), filename),\n\n # Config file in current directory\n os.path.join(os.getcwd(), filename),\n\n # Config file specified using environment variable\n os.environ.get(self.path_from_env_key)\n ]", "response": "Get list of paths to look in for configuration data"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load(self, updates):\n # Go through in order and override the config (`.mbed_cloud_config.json` loader)\n for path in self.paths():\n if not path:\n continue\n abs_path = os.path.abspath(os.path.expanduser(path))\n if not os.path.isfile(abs_path):\n self._using_paths.append('missing: %s' % abs_path)\n continue\n self._using_paths.append(' exists: %s' % abs_path)\n with open(abs_path) as fh:\n self.update(json.load(fh))\n\n # New dotenv loader - requires explicit instructions to use current working directory\n load_dotenv(find_dotenv(usecwd=True))\n\n # Pluck config values out of the environment\n for env_var, key in {ENVVAR_API_HOST: 'host', ENVVAR_API_KEY: 'api_key'}.items():\n env_value = os.getenv(env_var)\n if env_value is not None:\n self[key] = env_value\n if updates:\n self.update(updates)\n self.validate()", "response": "Load configuration data from file or environment variables."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nvalidate the current config", "response": "def validate(self):\n \"\"\"Validate / fix up the current config\"\"\"\n if not self.get('api_key'):\n raise ValueError(\"api_key not found in config. 
Please see documentation.\")\n host = self.get('host') or DEFAULT_CLOUD_HOST\n if host:\n # remove extraneous slashes and force to byte string\n # otherwise msg += message_body in httplib will fail in python2\n # when message_body contains binary data, and url is unicode\n\n # remaining failure modes include at least:\n # passing bytes in python3 will fail as we try to strip unicode '/' characters\n # passing unicode code points in python2 will fail due to httplib host.encode('ascii')\n host = host.strip('/')\n if not isinstance(host, str):\n host = host.encode('utf-8')\n self['host'] = host\n\n self.setdefault('autostart_notification_thread', True)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef expand_dict_as_keys(d):\n to_product = []\n for key, values in sorted(d.items()):\n # if we sort the inputs here, itertools.product will keep a stable sort order for us later\n key_values = sorted([(key, v) for v in utils.ensure_listable(values) if v is not None])\n if key_values:\n to_product.append(key_values)\n return list(itertools.product(*to_product))", "response": "Expands a dictionary into a list of immutables with cartesian product of the list parts\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstores a new item in routing map", "response": "def create_route(self, item, routes):\n \"\"\"Stores a new item in routing map\"\"\"\n for route in routes:\n self._routes.setdefault(route, set()).add(item)\n return item"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves item from matching routes", "response": "def remove_routes(self, item, routes):\n \"\"\"Removes item from matching routes\"\"\"\n for route in routes:\n items = self._routes.get(route)\n try:\n items.remove(item)\n LOG.debug('removed item from route %s', route)\n except ValueError:\n pass\n if not items:\n self._routes.pop(route)\n LOG.debug('removed route %s', 
route)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a list of all items in the routing table.", "response": "def list_all(self):\n \"\"\"All items\"\"\"\n return list(set(\n item for items in self._routes.values() for item in items\n ))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets or start the requested channel", "response": "def get_channel(self, subscription_channel, **observer_params):\n \"\"\"Get or start the requested channel\"\"\"\n keys = subscription_channel.get_routing_keys()\n # watch keys are unique sets of keys that we will attempt to extract from inbound items\n self.watch_keys.add(frozenset({k_v[0] for key in keys for k_v in key}))\n self.create_route(subscription_channel, keys)\n subscription_channel._configure(self, self.connect_api, observer_params)\n return subscription_channel"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef subscribe(self, subscription_channel, **observer_params):\n return self.get_channel(subscription_channel, **observer_params).ensure_started().observer", "response": "Subscribe to a channel and returns its observer"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nroute inbound items to individual channels", "response": "def _notify_single_item(self, item):\n \"\"\"Route inbound items to individual channels\"\"\"\n # channels that this individual item has already triggered\n # (dont want to trigger them again)\n triggered_channels = set()\n for key_set in self.watch_keys:\n # only pluck keys if they exist\n plucked = {\n key_name: item[key_name]\n for key_name in key_set if key_name in item\n }\n route_keys = expand_dict_as_keys(plucked)\n for route in route_keys:\n channels = self.get_route_items(route) or {}\n LOG.debug('route table match: %s -> %s', route, channels)\n if not channels:\n LOG.debug(\n 'no subscribers for 
message.\\nkey %s\\nroutes: %s',\n route,\n self._routes\n )\n for channel in channels:\n if channel in triggered_channels:\n LOG.debug('skipping dispatch to %s', channel)\n continue\n LOG.debug('routing dispatch to %s: %s', channel, item)\n try:\n channel.notify(item) and triggered_channels.add(channel)\n except Exception: # noqa\n LOG.exception('Channel notification failed')\n return triggered_channels"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nnotifies subscribers that data was received", "response": "def notify(self, data):\n \"\"\"Notify subscribers that data was received\"\"\"\n triggered_channels = []\n for channel_name, items in data.items():\n for item in items or []:\n LOG.debug('notify received: %s', item)\n try:\n # some channels return strings rather than objects (e.g. de-registrations),\n # normalize them here\n item = {'value': item} if isinstance(item, six.string_types) else dict(item)\n # inject the channel name to the data (so channels can filter on it)\n item['channel'] = channel_name\n triggered_channels.extend(list(self._notify_single_item(item)))\n except Exception: # noqa\n LOG.exception('Subscription notification failed')\n return triggered_channels"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef type(self, type):\n if type is None:\n raise ValueError(\"Invalid value for `type`, must not be `None`\")\n allowed_values = [\"reservation\", \"reservation_release\", \"reservation_termination\", \"package_renewal\", \"package_creation\", \"package_termination\"]\n if type not in allowed_values:\n raise ValueError(\n \"Invalid value for `type` ({0}), must be one of {1}\"\n .format(type, allowed_values)\n )\n\n self._type = type", "response": "Sets the type of this AggregatedQuotaUsageReport."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef list_campaigns(self, **kwargs):\n kwargs 
= self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, Campaign, True)\n api = self._get_api(update_service.DefaultApi)\n return PaginatedResponse(api.update_campaign_list, lwrap_type=Campaign, **kwargs)", "response": "List all update campaigns."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting existing update campaign.", "response": "def get_campaign(self, campaign_id):\n \"\"\"Get existing update campaign.\n\n :param str campaign_id: Campaign ID to retrieve (Required)\n :return: Update campaign object matching provided ID\n :rtype: Campaign\n \"\"\"\n api = self._get_api(update_service.DefaultApi)\n return Campaign(api.update_campaign_retrieve(campaign_id))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_campaign(self, name, device_filter, **kwargs):\n device_filter = filters.legacy_filter_formatter(\n dict(filter=device_filter),\n Device._get_attributes_map()\n )\n campaign = Campaign._create_request_map(kwargs)\n if 'when' in campaign:\n # FIXME: randomly validating an input here is a sure route to nasty surprises elsewhere\n campaign['when'] = force_utc(campaign['when'])\n body = UpdateCampaignPostRequest(\n name=name,\n device_filter=device_filter['filter'],\n **campaign)\n api = self._get_api(update_service.DefaultApi)\n return Campaign(api.update_campaign_create(body))", "response": "Add a new update campaign with a name and device filtering."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nupdating an update campaign.", "response": "def update_campaign(self, campaign_object=None, campaign_id=None, **kwargs):\n \"\"\"Update an update campaign.\n\n :param :class:`Campaign` campaign_object: Campaign object to update (Required)\n :return: updated campaign object\n :rtype: Campaign\n \"\"\"\n api = self._get_api(update_service.DefaultApi)\n if campaign_object:\n campaign_id = campaign_object.id\n campaign_object 
= campaign_object._create_patch_request()\n else:\n campaign_object = Campaign._create_request_map(kwargs)\n if 'device_filter' in campaign_object:\n campaign_object[\"device_filter\"] = filters.legacy_filter_formatter(\n dict(filter=campaign_object[\"device_filter\"]),\n Device._get_attributes_map()\n )['filter']\n if 'when' in campaign_object:\n # FIXME: randomly validating an input here is a sure route to nasty surprises elsewhere\n campaign_object['when'] = force_utc(campaign_object['when'])\n return Campaign(api.update_campaign_update(campaign_id=campaign_id,\n campaign=campaign_object))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete an update campaign.", "response": "def delete_campaign(self, campaign_id):\n \"\"\"Delete an update campaign.\n\n :param str campaign_id: Campaign ID to delete (Required)\n :return: void\n \"\"\"\n api = self._get_api(update_service.DefaultApi)\n api.update_campaign_destroy(campaign_id)\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlists the state of all devices in the given campaign.", "response": "def list_campaign_device_states(self, campaign_id, **kwargs):\n \"\"\"List campaign devices status.\n\n :param str campaign_id: Id of the update campaign (Required)\n :param int limit: number of devices state to retrieve\n :param str order: sort direction of device state when ordered by creation time (desc|asc)\n :param str after: get devices state after given id\n :return: List of :py:class:`CampaignDeviceState` objects\n :rtype: PaginatedResponse\n \"\"\"\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, CampaignDeviceState, True)\n kwargs[\"campaign_id\"] = campaign_id\n api = self._get_api(update_service.DefaultApi)\n return PaginatedResponse(api.update_campaign_metadata_list,\n lwrap_type=CampaignDeviceState, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following 
Python 3 function doing\ndef get_firmware_image(self, image_id):\n api = self._get_api(update_service.DefaultApi)\n return FirmwareImage(api.firmware_image_retrieve(image_id))", "response": "Get a firmware image with provided image_id."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlist all firmware images in the cache.", "response": "def list_firmware_images(self, **kwargs):\n \"\"\"List all firmware images.\n\n :param int limit: number of firmware images to retrieve\n :param str order: ordering of images when ordered by time. 'desc' or 'asc'\n :param str after: get firmware images after given `image_id`\n :param dict filters: Dictionary of filters to apply\n :return: list of :py:class:`FirmwareImage` objects\n :rtype: PaginatedResponse\n \"\"\"\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, FirmwareImage, True)\n api = self._get_api(update_service.DefaultApi)\n return PaginatedResponse(api.firmware_image_list, lwrap_type=FirmwareImage, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a new firmware image.", "response": "def add_firmware_image(self, name, datafile, **kwargs):\n \"\"\"Add a new firmware reference.\n\n :param str name: Firmware file short name (Required)\n :param str datafile: The file object or *path* to the firmware image file (Required)\n :param str description: Firmware file description\n :return: the newly created firmware file object\n :rtype: FirmwareImage\n \"\"\"\n kwargs.update({'name': name})\n firmware_image = FirmwareImage._create_request_map(kwargs)\n firmware_image.update({'datafile': datafile})\n api = self._get_api(update_service.DefaultApi)\n return FirmwareImage(\n api.firmware_image_create(**firmware_image)\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndelete a firmware image.", "response": "def delete_firmware_image(self, image_id):\n \"\"\"Delete a firmware image.\n\n 
:param str image_id: image ID for the firmware to remove/delete (Required)\n :return: void\n \"\"\"\n api = self._get_api(update_service.DefaultApi)\n api.firmware_image_destroy(image_id=image_id)\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_firmware_manifest(self, manifest_id):\n api = self._get_api(update_service.DefaultApi)\n return FirmwareManifest(api.firmware_manifest_retrieve(manifest_id=manifest_id))", "response": "Get a specific manifest with provided manifest_id."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef list_firmware_manifests(self, **kwargs):\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, FirmwareManifest, True)\n api = self._get_api(update_service.DefaultApi)\n return PaginatedResponse(api.firmware_manifest_list, lwrap_type=FirmwareManifest, **kwargs)", "response": "List all manifests.\n\n :param int limit: number of manifests to retrieve\n :param str order: sort direction of manifests when ordered by time. 
'desc' or 'asc'\n :param str after: get manifests after given `image_id`\n :param dict filters: Dictionary of filters to apply\n :return: list of :py:class:`FirmwareManifest` objects\n :rtype: PaginatedResponse"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a new manifest reference.", "response": "def add_firmware_manifest(self, name, datafile, key_table_file=None, **kwargs):\n \"\"\"Add a new manifest reference.\n\n :param str name: Manifest file short name (Required)\n :param str datafile: The file object or path to the manifest file (Required)\n :param str key_table_file: The file object or path to the key_table file (Optional)\n :param str description: Manifest file description\n :return: the newly created manifest file object\n :rtype: FirmwareManifest\n \"\"\"\n kwargs.update({\n 'name': name,\n 'url': datafile, # really it's the datafile\n })\n if key_table_file is not None:\n kwargs.update({'key_table_url': key_table_file}) # really it's the key_table\n firmware_manifest = FirmwareManifest._create_request_map(kwargs)\n api = self._get_api(update_service.DefaultApi)\n return FirmwareManifest(\n api.firmware_manifest_create(**firmware_manifest)\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes an existing manifest.", "response": "def delete_firmware_manifest(self, manifest_id):\n \"\"\"Delete an existing manifest.\n\n :param str manifest_id: Manifest file ID to delete (Required)\n :return: void\n \"\"\"\n api = self._get_api(update_service.DefaultApi)\n return api.firmware_manifest_destroy(manifest_id)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef device_filter(self):\n if isinstance(self._device_filter, str):\n return self._decode_query(self._device_filter)\n return self._device_filter", "response": "The device filter to use."} {"SOURCE": "codesearchnet", "instruction": "Can you 
implement a function in Python 3 that\nprovides account - specific statistics for other cloud services.", "response": "def get_metrics(self, include, interval, **kwargs): # noqa: E501\n \"\"\"Provides account-specific statistics for other cloud services. # noqa: E501\n\n This REST API is used to get account-specific statistics. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_metrics(include, interval, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str include: A comma-separated list of requested metrics and total_count (if included, the response will contain total_count to specify the total number of records available). Supported values are: - `transactions` - `full_registrations` - `registration_updates` - `deleted_registrations` - `expired_registrations` - `bootstraps_successful` - `bootstraps_failed` - `bootstraps_pending` - `handshakes_successful` - `connect_rest_api_success` - `connect_rest_api_error` - `device_proxy_request_success` - `device_proxy_request_error` - `device_subscription_request_success` - `device_subscription_request_error` - `device_observations` - `total_count` **Note:** The metrics device_proxy_request_success, device_proxy_request_error, device_subscription_request_success, device_subscription_request_error and device_observations monitor only the response from the device to Mbed Cloud Connect and they do not confirm that the response is delivered to client callback URLs used when you try to access device resources using [Connect API](/docs/current/service-api-references/mbed-cloud-connect.html) endpoints. New metrics will be added to monitor the response delivery to client callback URLs later. 
**Example usage:** ``` curl -X GET \\\\ -H \\\"Authorization : Bearer \\\" 'https://api.us-east-1.mbedcloud.com/v3/metrics?include=transactions,total_count&start=20170207&end=20170407&interval=1d' { \\\"object\\\": \\\"list\\\", \\\"limit\\\": 20, \\\"total_count\\\": 54, \\\"after\\\": \\\"2017-07-26T00:00:00Z\\\", \\\"has_more\\\": true, \\\"data\\\": [ { \\\"id\\\": \\\"015d8157c800015e306fffff005374617473000\\\", \\\"timestamp\\\": \\\"2017-07-27T00:00:00Z\\\", \\\"transactions\\\": 27366 }, { \\\"id\\\": \\\"015d867e2400015e306fffff005374617473000\\\", \\\"timestamp\\\": \\\"2017-07-28T00:00:00Z\\\", \\\"transactions\\\": 27480 } ] } ``` (required)\n :param str interval: Group the data by this interval in minutes, hours, days or weeks. Sample values: 5m, 2h, 3d, 4w. The maximum interval cannot exceed one year (365 days). The allowed ranges are 5m-525600m/1h-8760h/1d-365d/1w-53w. (required)\n :param date start: UTC time/year/date in RFC3339 format. Fetch the data with timestamp greater than or equal to this value. Sample values: 20170207T092056990Z / 2017-02-07T09:20:56.990Z / 2017 / 20170207. The maximum time between start and end parameters cannot exceed more than one year (365 days). The parameter is not mandatory, if the period is specified. \n :param date end: UTC time/year/date in RFC3339 format. Fetch the data with timestamp less than this value. Sample values: 20170207T092056990Z / 2017-02-07T09:20:56.990Z / 2017 / 20170207. The maximum time between start and end parameters cannot exceed more than one year (365 days). The parameter is not mandatory, if the period is specified. \n :param str period: Period. Fetch the data for the period in minutes, hours, days or weeks. Sample values: 5m, 2h, 3d, 4w. The parameter is not mandatory, if the start and end time are specified. The maximum period cannot exceed one year (365 days). The allowed ranges are 5m-525600m/1h-8760h/1d-365d/1w-53w. \n :param int limit: The number of results to return. 
The default value is 50, minimum 2 and maximum 1000. \n :param str after: The metric ID after which to start fetching. This also can be used for pagination as follows. **Example usage:** ``` curl -X GET \\\\ -H \\\"Authorization : Bearer \\\" 'https://api.us-east-1.mbedcloud.com/v3/metrics?include=transactions,total_count&start=20170707&end=20170829&interval=1d&limit=20' { \\\"object\\\": \\\"list\\\", \\\"limit\\\": 20, \\\"total_count\\\": 54, \\\"has_more\\\": true, \\\"data\\\": [ { \\\"id\\\": \\\"015d1a589800015e306fffff005374617473000\\\", \\\"timestamp\\\": \\\"2017-07-07T00:00:00Z\\\", \\\"transactions\\\": 26381 }, . . . { \\\"id\\\": \\\"015d7c316c00015e306fffff005374617473000\\\", \\\"timestamp\\\": \\\"2017-07-26T00:00:00Z\\\", \\\"transactions\\\": 25569 } ] } ``` If the parameter \u201chas more\u201d is true, it indicates that the list is not complete and more values are available. You can give the last ID of the list as the value of the \u201cafter\u201d query parameter, and you get the next page of values. You can keep doing this until \u201chas more\u201d is false. ``` curl -X GET \\\\ -H \\\"Authorization : Bearer \\\" 'https://api.us-east-1.mbedcloud.com/v3/metrics?include=transactions,total_count&start=20170707&end=20170829&interval=1d&limit=20&after=015d7c316c00015e306fffff005374617473000' { \\\"object\\\": \\\"list\\\", \\\"limit\\\": 20, \\\"total_count\\\": 54, \\\"after\\\": \\\"2017-07-26T00:00:00Z\\\", \\\"has_more\\\": true, \\\"data\\\": [ { \\\"id\\\": \\\"015d8157c800015e306fffff005374617473000\\\", \\\"timestamp\\\": \\\"2017-07-27T00:00:00Z\\\", \\\"transactions\\\": 27366 }, . . . { \\\"id\\\": \\\"015de3309c00015e306fffff005374617473000\\\", \\\"timestamp\\\": \\\"2017-08-15T00:00:00Z\\\", \\\"transactions\\\": 24707 } ] } ``` \n :param str order: The order of the records to return. Available values are ASC and DESC. The default value is ASC. 
\n :return: SuccessfulResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_metrics_with_http_info(include, interval, **kwargs) # noqa: E501\n else:\n (data) = self.get_metrics_with_http_info(include, interval, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reason(self, reason):\n if reason is None:\n raise ValueError(\"Invalid value for `reason`, must not be `None`\")\n allowed_values = [\"reservation\", \"reservation_release\", \"reservation_termination\", \"package_creation\", \"package_renewal\", \"package_termination\"]\n if reason not in allowed_values:\n raise ValueError(\n \"Invalid value for `reason` ({0}), must be one of {1}\"\n .format(reason, allowed_values)\n )\n\n self._reason = reason", "response": "Sets the reason of this ServicePackageQuotaHistoryItem."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the endpoint_name of this PreSharedKey.", "response": "def endpoint_name(self, endpoint_name):\n \"\"\"\n Sets the endpoint_name of this PreSharedKey.\n The unique endpoint identifier that this pre-shared key applies to. 
16-64 [printable](https://en.wikipedia.org/wiki/ASCII#Printable_characters) (non-control) ASCII characters.\n\n :param endpoint_name: The endpoint_name of this PreSharedKey.\n :type: str\n \"\"\"\n if endpoint_name is None:\n raise ValueError(\"Invalid value for `endpoint_name`, must not be `None`\")\n if endpoint_name is not None and not re.search('^[ -~]{16,64}$', endpoint_name):\n raise ValueError(\"Invalid value for `endpoint_name`, must be a follow pattern or equal to `/^[ -~]{16,64}$/`\")\n\n self._endpoint_name = endpoint_name"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets the secret_hex of this PreSharedKey.", "response": "def secret_hex(self, secret_hex):\n \"\"\"\n Sets the secret_hex of this PreSharedKey.\n The secret of the pre-shared key in hexadecimal. It is not case sensitive; 4a is same as 4A, and it is allowed with or without 0x in the beginning. The minimum length of the secret is 128 bits and maximum 256 bits.\n\n :param secret_hex: The secret_hex of this PreSharedKey.\n :type: str\n \"\"\"\n if secret_hex is None:\n raise ValueError(\"Invalid value for `secret_hex`, must not be `None`\")\n if secret_hex is not None and not re.search('^(0[xX])?[0-9a-fA-F]{32,64}$', secret_hex):\n raise ValueError(\"Invalid value for `secret_hex`, must be a follow pattern or equal to `/^(0[xX])?[0-9a-fA-F]{32,64}$/`\")\n\n self._secret_hex = secret_hex"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nremove a pre - shared key.", "response": "def delete_pre_shared_key(self, endpoint_name, **kwargs): # noqa: E501\n \"\"\"Remove a pre-shared key. # noqa: E501\n\n Remove a pre-shared key. **Example usage:** ``` curl -H \\\"authorization: Bearer ${API_TOKEN}\\\" -X DELETE https://api.us-east-1.mbedcloud.com/v2/device-shared-keys/my-endpoint-0001 ``` # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_pre_shared_key(endpoint_name, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str endpoint_name: The unique endpoint identifier that this pre-shared key applies to. [Reserved characters](https://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_reserved_characters) must be percent-encoded. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_pre_shared_key_with_http_info(endpoint_name, **kwargs) # noqa: E501\n else:\n (data) = self.delete_pre_shared_key_with_http_info(endpoint_name, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a pre - shared key for an endpoint.", "response": "def get_pre_shared_key(self, endpoint_name, **kwargs): # noqa: E501\n \"\"\"Get a pre-shared key. # noqa: E501\n\n Check if a pre-shared key for an endpoint exists or not. The response does not contain the secret itself. **Example usage:** ``` curl -H \\\"authorization: Bearer ${API_TOKEN}\\\" https://api.us-east-1.mbedcloud.com/v2/device-shared-keys/my-endpoint-0001 ``` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_pre_shared_key(endpoint_name, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str endpoint_name: The unique endpoint identifier that this pre-shared key applies to. [Reserved characters](https://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_reserved_characters) must be percent-encoded. 
(required)\n :return: PreSharedKeyWithoutSecret\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_pre_shared_key_with_http_info(endpoint_name, **kwargs) # noqa: E501\n else:\n (data) = self.get_pre_shared_key_with_http_info(endpoint_name, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nlist pre - shared keys.", "response": "def list_pre_shared_keys(self, **kwargs): # noqa: E501\n \"\"\"List pre-shared keys. # noqa: E501\n\n List pre-shared keys with pagination and default page size of 50 entries. **Example usage:** ``` curl -H \\\"authorization: Bearer ${API_TOKEN}\\\" https://api.us-east-1.mbedcloud.com/v2/device-shared-keys ``` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.list_pre_shared_keys(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param int limit: The number of entries per page\n :param str after: An offset token for fetching a specific page. Provided by the server.\n :return: ListOfPreSharedKeysWithoutSecret\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.list_pre_shared_keys_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.list_pre_shared_keys_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupload a pre - shared key to Pelion Device Management.", "response": "def upload_pre_shared_key(self, body, **kwargs): # noqa: E501\n \"\"\"Upload a pre-shared key to Pelion Device Management. # noqa: E501\n\n Upload a pre-shared key (PSK) for an endpoint to allow it to bootstrap. 
The existing key will not be overwritten but needs to be deleted first in case of re-setting PSK for an endpoint. **Note**: The PSK APIs are available only to accounts that have this feature enabled. **Example usage:** ``` curl -H \\\"authorization: Bearer ${API_TOKEN}\\\" -H \\\"content-type: application/json\\\" -X POST https://api.us-east-1.mbedcloud.com/v2/device-shared-keys \\\\ -d '{\\\"endpoint_name\\\": \\\"my-endpoint-0001\\\", \\\"secret_hex\\\": \\\"4a4a4a4a4a4a4a4a4a4a4a4a4a4a4a4a\\\" }' ``` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.upload_pre_shared_key(body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param PreSharedKey body: Pre-shared key to be uploaded. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.upload_pre_shared_key_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.upload_pre_shared_key_with_http_info(body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_value_length(byte):\n length_value = byte & length_type_mask\n return LengthTypes.to_ints.get(length_value, byte & length_mask)", "response": "Get the length of the value in bytes."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngives some bytes combine them together to make one long binary tree item", "response": "def combine_bytes(bytearr):\n \"\"\"Given some bytes, join them together to make one long binary\n\n (e.g. 
00001000 00000000 -> 0000100000000000)\n\n :param bytearr:\n :return:\n \"\"\"\n bytes_count = len(bytearr)\n result = 0b0\n for index, byt in enumerate(bytearr):\n offset_bytes = bytes_count - index - 1\n result += byt << (8 * offset_bytes)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef binary_tlv_to_python(binary_string, result=None):\n result = {} if result is None else result\n\n if not binary_string:\n return result\n\n byte = binary_string[0]\n kind = byte & type_mask\n id_length = get_id_length(byte)\n payload_length = get_value_length(byte)\n\n # start after the type indicator\n offset = 1\n item_id = str(combine_bytes(binary_string[offset:offset + id_length]))\n offset += id_length\n\n # get length of payload from specifier\n value_length = payload_length\n if byte & length_type_mask != LengthTypes.SET_BYTE:\n value_length = combine_bytes(binary_string[offset:offset + payload_length])\n offset += payload_length\n\n if kind == Types.MULTI:\n binary_tlv_to_python(\n binary_string[offset:offset + value_length],\n result.setdefault(item_id, {})\n )\n else:\n value_binary = binary_string[offset: offset + value_length]\n result[item_id] = (\n combine_bytes(value_binary) if not all(value_binary) else value_binary.decode('utf8')\n )\n\n offset += value_length\n binary_tlv_to_python(binary_string[offset:], result)\n return result", "response": "Recursively decode a bytearray object and store output in result object\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef maybe_decode_payload(payload, content_type='application/nanoservice-tlv', decode_b64=True):\n if not payload:\n return None\n\n binary = b64decoder(payload) if decode_b64 else payload\n if content_type and 'tlv' in content_type.lower():\n return binary_tlv_to_python(bytearray(binary))\n return binary", "response": "If the payload is tlv decode it otherwise passthrough\n "} 
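The `combine_bytes` record above can be sketched as a standalone, runnable snippet. The function body is taken from the record itself; the `int.from_bytes` comparison is an illustrative equivalence check, not part of the original SDK code.

```python
def combine_bytes(bytearr):
    """Join bytes into one long binary, e.g. 00001000 00000000 -> 0000100000000000."""
    bytes_count = len(bytearr)
    result = 0
    for index, byt in enumerate(bytearr):
        # How many whole bytes remain to the right of this one.
        offset_bytes = bytes_count - index - 1
        # Shift the byte into position and accumulate.
        result += byt << (8 * offset_bytes)
    return result

# The docstring's own example: 0b0000100000000000 == 2048.
print(combine_bytes(bytearray([0b00001000, 0b00000000])))  # 2048
```

This is equivalent to `int.from_bytes(bytearr, 'big')` on Python 3, which is a handy cross-check when decoding TLV id and length fields.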
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the after of this ServicePackageQuotaHistoryResponse.", "response": "def after(self, after):\n \"\"\"\n Sets the after of this ServicePackageQuotaHistoryResponse.\n After which quota history ID this paged response is fetched.\n\n :param after: The after of this ServicePackageQuotaHistoryResponse.\n :type: str\n \"\"\"\n if after is not None and len(after) > 32:\n raise ValueError(\"Invalid value for `after`, length must be less than or equal to `32`\")\n if after is not None and len(after) < 32:\n raise ValueError(\"Invalid value for `after`, length must be greater than or equal to `32`\")\n\n self._after = after"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the object of this ServicePackageQuotaHistoryResponse.", "response": "def object(self, object):\n \"\"\"\n Sets the object of this ServicePackageQuotaHistoryResponse.\n Always set to 'service-package-quota-history'.\n\n :param object: The object of this ServicePackageQuotaHistoryResponse.\n :type: str\n \"\"\"\n if object is None:\n raise ValueError(\"Invalid value for `object`, must not be `None`\")\n allowed_values = [\"service-package-quota-history\"]\n if object not in allowed_values:\n raise ValueError(\n \"Invalid value for `object` ({0}), must be one of {1}\"\n .format(object, allowed_values)\n )\n\n self._object = object"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef total_count(self, total_count):\n if total_count is None:\n raise ValueError(\"Invalid value for `total_count`, must not be `None`\")\n if total_count is not None and total_count < 0:\n raise ValueError(\"Invalid value for `total_count`, must be a value greater than or equal to `0`\")\n\n self._total_count = total_count", "response": "Sets the total_count of this ServicePackageQuotaHistoryResponse."} {"SOURCE": "codesearchnet", "instruction": "Can you write a 
function in Python 3 where it\nsets the issuer_type of this CertificateIssuerInfo.", "response": "def issuer_type(self, issuer_type):\n \"\"\"\n Sets the issuer_type of this CertificateIssuerInfo.\n The type of the certificate issuer. - GLOBAL_SIGN: Certificates are issued by GlobalSign service. The users must provide their own GlobalSign account credentials. - CFSSL_AUTH: Certificates are issued by CFSSL authenticated signing service. The users must provide their own CFSSL host_url and credentials. \n\n :param issuer_type: The issuer_type of this CertificateIssuerInfo.\n :type: str\n \"\"\"\n if issuer_type is None:\n raise ValueError(\"Invalid value for `issuer_type`, must not be `None`\")\n allowed_values = [\"GLOBAL_SIGN\", \"CFSSL_AUTH\"]\n if issuer_type not in allowed_values:\n raise ValueError(\n \"Invalid value for `issuer_type` ({0}), must be one of {1}\"\n .format(issuer_type, allowed_values)\n )\n\n self._issuer_type = issuer_type"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef code(self, code):\n allowed_values = [400, 401, 404]\n if code not in allowed_values:\n raise ValueError(\n \"Invalid value for `code` ({0}), must be one of {1}\"\n .format(code, allowed_values)\n )\n\n self._code = code", "response": "Sets the code of this ErrorResponse."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the request_id of this ErrorResponse.", "response": "def request_id(self, request_id):\n \"\"\"\n Sets the request_id of this ErrorResponse.\n Request ID.\n\n :param request_id: The request_id of this ErrorResponse.\n :type: str\n \"\"\"\n if request_id is not None and not re.search('^[A-Za-z0-9]{32}', request_id):\n raise ValueError(\"Invalid value for `request_id`, must be a follow pattern or equal to `/^[A-Za-z0-9]{32}/`\")\n\n self._request_id = request_id"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstarting the 
notifications thread.", "response": "def start_notifications(self):\n \"\"\"Start the notifications thread.\n\n If an external callback is not set up (using `update_webhook`) then\n calling this function is mandatory to get or set resource.\n\n .. code-block:: python\n\n >>> api.start_notifications()\n >>> print(api.get_resource_value(device, path))\n Some value\n >>> api.stop_notifications()\n\n :returns: void\n \"\"\"\n with self._notifications_lock:\n if self.has_active_notification_thread:\n return\n api = self._get_api(mds.NotificationsApi)\n self._notifications_thread = NotificationsThread(\n self._db,\n self._queues,\n b64decode=self.b64decode,\n notifications_api=api,\n subscription_manager=self.subscribe,\n )\n self._notifications_thread.daemon = True\n self._notifications_thread.start()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstop the notifications thread.", "response": "def stop_notifications(self):\n \"\"\"Stop the notifications thread.\n\n :returns:\n \"\"\"\n with self._notifications_lock:\n if not self.has_active_notification_thread:\n return\n thread = self._notifications_thread\n self._notifications_thread = None\n stopping = thread.stop()\n api = self._get_api(mds.NotificationsApi)\n api.delete_long_poll_channel()\n return stopping.wait()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef list_connected_devices(self, **kwargs):\n # TODO(pick one of these)\n filter_or_filters = 'filter' if 'filter' in kwargs else 'filters'\n kwargs.setdefault(filter_or_filters, {}).setdefault('state', {'$eq': 'registered'})\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, Device, True)\n api = self._get_api(device_directory.DefaultApi)\n return PaginatedResponse(api.device_list, lwrap_type=Device, **kwargs)", "response": "List connected devices.\n\n Example usage, listing all registered devices in the catalog:\n\n .. 
code-block:: python\n\n filters = {\n 'created_at': {'$gte': datetime.datetime(2017,01,01),\n '$lte': datetime.datetime(2017,12,31)\n }\n }\n devices = api.list_connected_devices(order='asc', filters=filters)\n for idx, device in enumerate(devices):\n print(device)\n\n ## Other example filters\n\n # Directly connected devices (not via gateways):\n filters = {\n 'host_gateway': {'$eq': ''},\n 'device_type': {'$eq': ''}\n }\n\n # Devices connected via gateways:\n filters = {\n 'host_gateway': {'$neq': ''}\n }\n\n # Gateway devices:\n filters = {\n 'device_type': {'$eq': 'MBED_GW'}\n }\n\n\n :param int limit: The number of devices to retrieve.\n :param str order: The ordering direction, ascending (asc) or\n descending (desc)\n :param str after: Get devices after/starting at given `device_id`\n :param filters: Dictionary of filters to apply.\n :returns: a list of connected :py:class:`Device` objects.\n :rtype: PaginatedResponse"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list_resources(self, device_id):\n api = self._get_api(mds.EndpointsApi)\n return [Resource(r) for r in api.get_endpoint_resources(device_id)]", "response": "List all resources registered to a connected device."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets a resource. 
:param str device_id: ID of the device (Required) :param str path: Path of the resource to get (Required) :returns: Device resource :rtype Resource", "response": "def get_resource(self, device_id, resource_path):\n \"\"\"Get a resource.\n\n :param str device_id: ID of the device (Required)\n :param str path: Path of the resource to get (Required)\n :returns: Device resource\n\n :rtype Resource\n \"\"\"\n resources = self.list_resources(device_id)\n for r in resources:\n if r.path == resource_path:\n return r\n raise CloudApiException(\"Resource not found\")"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a resource value for a given device and resource path. Will not block but instead return an AsyncConsumer.", "response": "def get_resource_value_async(self, device_id, resource_path, fix_path=True):\n \"\"\"Get a resource value for a given device and resource path.\n\n Will not block, but instead return an AsyncConsumer. Example usage:\n\n .. code-block:: python\n\n a = api.get_resource_value_async(device, path)\n while not a.is_done:\n time.sleep(0.1)\n if a.error:\n print(\"Error\", a.error)\n print(\"Current value\", a.value)\n\n :param str device_id: The name/id of the device (Required)\n :param str resource_path: The resource path to get (Required)\n :param bool fix_path: strip leading / of path if present\n :returns: Consumer object to control asynchronous request\n :rtype: AsyncConsumer\n \"\"\"\n return self._mds_rpc_post(device_id=device_id, method='GET', uri=resource_path)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a resource value for a given device and resource path by blocking thread.", "response": "def get_resource_value(self, device_id, resource_path, fix_path=True, timeout=None):\n \"\"\"Get a resource value for a given device and resource path by blocking thread.\n\n Example usage:\n\n .. 
code-block:: python\n\n try:\n v = api.get_resource_value(device_id, path)\n print(\"Current value\", v)\n except CloudAsyncError as e:\n print(\"Error\", e)\n\n :param str device_id: The name/id of the device (Required)\n :param str resource_path: The resource path to get (Required)\n :param fix_path: if True then the leading /, if found, will be stripped before\n doing request to backend. This is a requirement for the API to work properly\n :param timeout: Seconds to request value for before timeout. If not provided, the\n program might hang indefinitely.\n :raises: CloudAsyncError, CloudTimeoutError\n :returns: The resource value for the requested resource path\n :rtype: str\n \"\"\"\n return self.get_resource_value_async(device_id, resource_path, fix_path).wait(timeout)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_resource_value(self, device_id, resource_path, resource_value,\n fix_path=True, timeout=None):\n \"\"\"Set resource value for given resource path, on device.\n\n Will block and wait for response to come through. Usage:\n\n .. 
code-block:: python\n\n try:\n v = api.set_resource_value(device, path, value)\n print(\"Success, new value:\", v)\n except AsyncError as e:\n print(\"Error\", e)\n\n :param str device_id: The name/id of the device (Required)\n :param str resource_path: The resource path to update (Required)\n :param str resource_value: The new value to set for given path\n :param fix_path: Unused\n :param timeout: Timeout in seconds\n :raises: AsyncError\n :returns: The value of the new resource\n :rtype: str\n \"\"\"\n self.ensure_notifications_thread()\n return self.set_resource_value_async(\n device_id, resource_path, resource_value\n ).wait(timeout)", "response": "Set the value of a resource in a given path on a device."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_resource_value_async(self, device_id, resource_path,\n resource_value=None, fix_path=True):\n \"\"\"Set resource value for given resource path, on device.\n\n Will not block. Returns immediately. Usage:\n\n .. code-block:: python\n\n a = api.set_resource_value_async(device, path, value)\n while not a.is_done:\n time.sleep(0.1)\n if a.error:\n print(\"Error\", a.error)\n print(\"Success, new value:\", a.value)\n\n :param str device_id: The name/id of the device (Required)\n :param str resource_path: The resource path to update (Required)\n :param str resource_value: The new value to set for given path\n :param fix_path: Unused\n :returns: An async consumer object holding reference to request\n :rtype: AsyncConsumer\n \"\"\"\n payload_b64 = self._base64_encode(resource_value)\n\n if not resource_path.startswith(\"/\"):\n resource_path = \"/\" + resource_path\n\n return self._mds_rpc_post(\n device_id,\n method='PUT',\n uri=resource_path,\n content_type=\"text/plain\",\n payload_b64=payload_b64\n )", "response": "Set resource value for given resource path on given device. Will not block. 
Returns immediately."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef execute_resource(self, device_id, resource_path, fix_path=True, timeout=None):\n self.ensure_notifications_thread()\n return self.execute_resource_async(device_id, resource_path).wait(timeout)", "response": "Execute a function on a resource."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nexecuting a function on a resource.", "response": "def execute_resource_async(self, device_id, resource_path, fix_path=True):\n \"\"\"Execute a function on a resource.\n\n Will not block. Returns immediately. Usage:\n\n .. code-block:: python\n\n a = api.execute_resource_async(device, path)\n while not a.is_done:\n time.sleep(0.1)\n if a.error:\n print(\"Error\", a.error)\n print(\"Success, returned value:\", a.value)\n\n :param str device_id: The name/id of the device (Required)\n :param str resource_path: The resource path to update (Required)\n :param fix_path: Unused\n :returns: An async consumer object holding reference to request\n :rtype: AsyncConsumer\n \"\"\"\n if not resource_path.startswith(\"/\"):\n resource_path = \"/\" + resource_path\n\n return self._mds_rpc_post(device_id=device_id, method='POST', uri=resource_path)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_resource_subscription(self, device_id, resource_path, fix_path=True, queue_size=5):\n # When path starts with / we remove the slash, as the API can't handle //.\n # Keep the original path around however, as we use that for queue registration.\n fixed_path = resource_path\n if fix_path and resource_path.startswith(\"/\"):\n fixed_path = resource_path[1:]\n\n # Create the queue and register it with the dict holding all queues\n q = queue.Queue(queue_size) if queue_size > 0 else None\n\n # FIXME: explicit behaviour on replacing an existing queue\n 
self._queues[device_id][resource_path] = q\n\n # Send subscription request\n self._add_subscription(device_id, fixed_path)\n\n # Return the Queue object to the user\n return q", "response": "Subscribe to resource updates."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_resource_subscription_async(self, device_id, resource_path, callback_fn,\n fix_path=True, queue_size=5):\n \"\"\"Subscribe to resource updates with callback function.\n\n When called on a valid device and resource path a subscription is setup so that\n any update on the resource path value triggers an update on the callback function.\n\n :param device_id: Name of device to set the subscription on (Required)\n :param resource_path: The resource path on device to observe (Required)\n :param callback_fn: Callback function to be executed on update to subscribed resource\n :param fix_path: Removes leading / on resource_path if found\n :param queue_size: Sets the Queue size. If set to 0, no queue object will be created\n :returns: void\n \"\"\"\n queue = self.add_resource_subscription(device_id, resource_path, fix_path, queue_size)\n\n # Setup daemon thread for callback function\n t = threading.Thread(target=self._subscription_handler,\n args=[queue, device_id, resource_path, callback_fn])\n t.daemon = True\n t.start()", "response": "Subscribe to resource updates with callback function."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_resource_subscription(self, device_id, resource_path, fix_path=True):\n # When path starts with / we remove the slash, as the API can't handle //.\n # Keep the original path around however, as we use that for queue registration.\n fixed_path = resource_path\n if fix_path and resource_path.startswith(\"/\"):\n fixed_path = resource_path[1:]\n\n api = self._get_api(mds.SubscriptionsApi)\n try:\n api.check_resource_subscription(device_id, fixed_path)\n except 
Exception as e:\n if e.status == 404:\n return False\n raise\n return True", "response": "Check if a resource subscription is available on a device."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_presubscriptions(self, presubscriptions):\n api = self._get_api(mds.SubscriptionsApi)\n presubscriptions_list = []\n for presubscription in presubscriptions:\n if not isinstance(presubscription, dict):\n presubscription = presubscription.to_dict()\n presubscription = {\n \"endpoint_name\": presubscription.get(\"device_id\", None),\n \"endpoint_type\": presubscription.get(\"device_type\", None),\n \"_resource_path\": presubscription.get(\"resource_paths\", None)\n }\n presubscriptions_list.append(PresubscriptionData(**presubscription))\n return api.update_pre_subscriptions(presubscriptions_list)", "response": "Update pre - subscription data."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_subscriptions(self):\n warnings.warn('This could be slow for large numbers of connected devices.'\n 'If possible, explicitly delete subscriptions known to have been created.')\n for device in self.list_connected_devices():\n try:\n self.delete_device_subscriptions(device_id=device.id)\n except CloudApiException as e:\n LOG.warning('failed to remove subscription for %s: %s', device.id, e)\n continue", "response": "Remove all subscriptions.\n\n Warning: This could be slow for large numbers of connected devices.\n If possible, explicitly delete subscriptions known to have been created.\n\n :returns: None"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a list of pre - subscription data for a specific resource.", "response": "def list_presubscriptions(self, **kwargs):\n \"\"\"Get a list of pre-subscription data\n\n :returns: a list of `Presubscription` objects\n :rtype: list of mbed_cloud.presubscription.Presubscription\n \"\"\"\n api = 
self._get_api(mds.SubscriptionsApi)\n resp = api.get_pre_subscriptions(**kwargs)\n return [Presubscription(p) for p in resp]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nlisting all subscribed resources from a single device", "response": "def list_device_subscriptions(self, device_id, **kwargs):\n \"\"\"Lists all subscribed resources from a single device\n\n :param device_id: ID of the device (Required)\n :returns: a list of subscribed resources\n :rtype: list of str\n \"\"\"\n api = self._get_api(mds.SubscriptionsApi)\n resp = api.get_endpoint_subscriptions(device_id, **kwargs)\n return resp.split(\"\\n\")"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_device_subscriptions(self, device_id):\n api = self._get_api(mds.SubscriptionsApi)\n return api.delete_endpoint_subscriptions(device_id)", "response": "Removes a device s subscriptions\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_resource_subscription(self, device_id=None, resource_path=None, fix_path=True):\n devices = [_f for _f in [device_id] if _f]\n if not device_id:\n devices = list(self._queues.keys())\n resource_paths = [resource_path]\n if not resource_path:\n resource_paths = []\n for e in devices:\n resource_paths.extend(list(self._queues[e].keys()))\n\n # Delete the subscriptions\n for e in devices:\n for r in resource_paths:\n # Fix the path, if required.\n fixed_path = r\n if fix_path and r.startswith(\"/\"):\n fixed_path = r[1:]\n\n # Make request to API, ignoring result\n self._delete_subscription(device_id, fixed_path)\n\n # Remove Queue from dictionary\n del self._queues[e][r]\n return", "response": "Unsubscribe from device and or resource_path updates."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef notify_webhook_received(self, payload):\n class 
PayloadContainer: # noqa\n # bodge to give attribute lookup\n data = payload\n\n notification = self._get_api(mds.NotificationsApi).api_client.deserialize(\n PayloadContainer, mds.NotificationMessage.__name__\n )\n handle_channel_message(\n db=self._db,\n queues=self._queues,\n b64decode=self.b64decode,\n notification_object=notification\n )", "response": "Callback function for triggering the notification channel handlers."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the currently set webhook URL if it exists.", "response": "def get_webhook(self):\n \"\"\"Get the current callback URL if it exists.\n\n :return: The currently set webhook\n \"\"\"\n api = self._get_api(mds.NotificationsApi)\n return Webhook(api.get_webhook())"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_webhook(self, url, headers=None):\n headers = headers or {}\n api = self._get_api(mds.NotificationsApi)\n\n # Delete notifications channel\n api.delete_long_poll_channel()\n\n # Send the request to register the webhook\n webhook_obj = WebhookData(url=url, headers=headers)\n api.register_webhook(webhook_obj)\n return", "response": "Register new webhook for incoming subscriptions."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_webhook(self):\n api = self._get_api(mds.NotificationsApi)\n api.deregister_webhook()\n\n # Every subscription will be deleted, so we can clear the queues too.\n self._queues.clear()\n return", "response": "Delete the registered webhook."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef list_metrics(self, include=None, interval=\"1d\", **kwargs):\n self._verify_arguments(interval, kwargs)\n include = Metric._map_includes(include)\n kwargs.update(dict(include=include, interval=interval))\n api = self._get_api(statistics.StatisticsApi)\n return 
PaginatedResponse(api.get_metrics, lwrap_type=Metric, **kwargs)", "response": "Get statistics.\n\n :param list[str] include: List of fields included in response.\n None, or an empty list will return all fields.\n Fields: transactions, successful_api_calls, failed_api_calls, successful_handshakes,\n pending_bootstraps, successful_bootstraps, failed_bootstraps, registrations,\n updated_registrations, expired_registrations, deleted_registrations\n :param str interval: Group data by this interval in days, weeks or hours.\n Sample values: 2h, 3w, 4d.\n :param datetime start: Fetch the data with timestamp greater than or equal to this value.\n The parameter is not mandatory, if the period is specified.\n :param datetime end: Fetch the data with timestamp less than this value.\n The parameter is not mandatory, if the period is specified.\n :param str period: Period. Fetch the data for the period in days, weeks or hours.\n Sample values: 2h, 3w, 4d.\n The parameter is not mandatory, if the start and end time are specified\n :param int limit: The number of devices to retrieve\n :param str order: The ordering direction, ascending (asc) or descending (desc)\n :param str after: Get metrics after/starting at given metric ID\n :returns: a list of :py:class:`Metric` objects\n :rtype: PaginatedResponse"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef enrollment_identity(self, enrollment_identity):\n if enrollment_identity is None:\n raise ValueError(\"Invalid value for `enrollment_identity`, must not be `None`\")\n if enrollment_identity is not None and not re.search('^A-[A-Za-z0-9:]{95}$', enrollment_identity):\n raise ValueError(\"Invalid value for `enrollment_identity`, must be a follow pattern or equal to `/^A-[A-Za-z0-9:]{95}$/`\")\n\n self._enrollment_identity = enrollment_identity", "response": "Sets the enrollment_identity of this EnrollmentId."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in 
Python 3 that\nfilters for this channel based on key - value matching", "response": "def _filter_optional_keys(self, data):\n \"\"\"Filtering for this channel, based on key-value matching\"\"\"\n for filter_key, filter_value in (self._optional_filters or {}).items():\n data_value = data.get(filter_key)\n LOG.debug(\n 'optional keys filter %s: %s (%s)',\n filter_key, filter_value, data_value\n )\n if data_value is None or data_value != filter_value:\n LOG.debug(\n 'optional keys filter rejecting %s: %s (%s)',\n filter_key, filter_value, data_value\n )\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef notify(self, data):\n for filter_function in self._filters:\n if not filter_function(data):\n return False\n self._notify(data)\n return True", "response": "Notify this channel of inbound data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _configure(self, manager, connect_api_instance, observer_params):\n self._manager = manager\n self._api = connect_api_instance\n self._observer_params = self._observer_params or {}\n self._observer_params.update(observer_params)", "response": "Configure behind - the - scenes settings for the channel."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nstop the channel and remove all routes.", "response": "def ensure_stopped(self):\n \"\"\"Idempotent channel stop\"\"\"\n if not self.active:\n return self\n self.stop()\n self.observer.cancel()\n self._manager.remove_routes(self, self.get_routing_keys())\n self._active = False\n return self"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete a resource path from a device.", "response": "def delete_resource_path(self, device_id, _resource_path, **kwargs): # noqa: E501\n \"\"\"Delete a resource path # noqa: E501\n\n A request to delete a resource path must be handled by both 
Device Management Client and Device Management Connect. All resource APIs are asynchronous. These APIs respond only if the device is turned on and connected to Device Management Connect and there is an active notification channel. **Example usage:** curl -X DELETE \\\\ https://api.us-east-1.mbedcloud.com/v2/endpoints/{device-id}/{resourcePath} \\\\ -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_resource_path(device_id, _resource_path, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_id: A unique Device Management device ID for the endpoint. Note that the ID must be an exact match. You cannot use wildcards here. (required)\n :param str _resource_path: The URL of the resource. (required)\n :param bool no_resp:
**Non-confirmable requests**
All resource APIs have the parameter noResp. If you make a request with `noResp=true`, Device Management Connect makes a CoAP non-confirmable request to the device. Such requests are not guaranteed to arrive in the device, and you do not get back an async-response-id. If calls with this parameter enabled succeed, they return with the status code `204 No Content`. If the underlying protocol does not support non-confirmable requests, or if the endpoint is registered in queue mode, the response is status code `409 Conflict`. \n :return: AsyncID\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_resource_path_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n else:\n (data) = self.delete_resource_path_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef execute_or_create_resource(self, device_id, _resource_path, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.execute_or_create_resource_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n else:\n (data) = self.execute_or_create_resource_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n return data", "response": "Execute a function on a resource or create a new Resource instance."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread from a resource", "response": "def get_resource_value(self, device_id, _resource_path, **kwargs): # noqa: E501\n \"\"\"Read from a resource # noqa: E501\n\n Requests the resource value and when the response is available, an `AsyncIDResponse` json object is received in the notification channel. The preferred way to get resource values is to use the **subscribe** and **callback** methods. 
All resource APIs are asynchronous. These APIs only respond if the device is turned on and connected to Device Management. Please refer to [Lightweight Machine to Machine Technical specification](http://www.openmobilealliance.org/release/LightweightM2M/V1_0-20170208-A/OMA-TS-LightweightM2M-V1_0-20170208-A.pdf) for more information. **Example usage:** curl -X GET \\\\ https://api.us-east-1.mbedcloud.com/v2/endpoints/{device-id}/{resourcePath} \\\\ -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_resource_value(device_id, _resource_path, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_id: Unique Device Management device ID for the endpoint. Note that the ID needs to be an exact match. You cannot use wildcards here. (required)\n :param str _resource_path: The URL of the resource. (required)\n :param bool cache_only: If true, the response comes only from the cache. Default: false. Device Management Connect caches the received resource values for the time of [max_age](/docs/current/connecting/working-with-the-resources.html) defined in the client side. \n :param bool no_resp:

Non-confirmable requests
All resource APIs have the parameter `noResp`. If a request is made with `noResp=true`, Device Management Connect makes a CoAP non-confirmable request to the device. Such requests are not guaranteed to arrive in the device, and you do not get back an async-response-id. If calls with this parameter enabled succeed, they return with the status code `204 No Content`. If the underlying protocol does not support non-confirmable requests, or if the endpoint is registered in queue mode, the response is status code `409 Conflict`. \n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_resource_value_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n else:\n (data) = self.get_resource_value_with_http_info(device_id, _resource_path, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_resource_value(self, device_id, _resource_path, resource_value, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_resource_value_with_http_info(device_id, _resource_path, resource_value, **kwargs) # noqa: E501\n else:\n (data) = self.update_resource_value_with_http_info(device_id, _resource_path, resource_value, **kwargs) # noqa: E501\n return data", "response": "Update the value of a resource on a device."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the remaining value", "response": "def get_quota_remaining(self):\n \"\"\"Get the remaining value\"\"\"\n api = self._get_api(billing.DefaultApi)\n quota = api.get_service_package_quota()\n return None if quota is None else int(quota.quota)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget quota usage history", "response": "def get_quota_history(self, 
**kwargs):\n \"\"\"Get quota usage history\"\"\"\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, ServicePackage)\n api = self._get_api(billing.DefaultApi)\n return PaginatedResponse(\n api.get_service_package_quota_history,\n lwrap_type=QuotaHistory,\n **kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting all service packages", "response": "def get_service_packages(self):\n \"\"\"Get all service packages\"\"\"\n api = self._get_api(billing.DefaultApi)\n package_response = api.get_service_packages()\n packages = []\n for state in PACKAGE_STATES:\n # iterate states in order\n items = getattr(package_response, state) or []\n for item in ensure_listable(items):\n params = item.to_dict()\n params['state'] = state\n packages.append(ServicePackage(params))\n return packages"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _month_converter(self, date_time):\n if not date_time:\n date_time = datetime.datetime.utcnow()\n if isinstance(date_time, datetime.datetime):\n date_time = '%s-%02d' % (date_time.year, date_time.month)\n return date_time", "response": "Returns the Billing API month format YYYY-MM"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndownloads a report overview", "response": "def get_report_overview(self, month, file_path):\n \"\"\"Downloads a report overview\n\n :param month: month as datetime instance, or string in YYYY-MM format\n :type month: str or datetime\n :param str file_path: location to store output file\n :return: outcome\n :rtype: True or None\n \"\"\"\n api = self._get_api(billing.DefaultApi)\n month = self._month_converter(month)\n response = api.get_billing_report(month=month)\n if file_path and response:\n content = api.api_client.sanitize_for_serialization(response.to_dict())\n with open(file_path, 'w') as fh:\n fh.write(\n json.dumps(\n content,\n sort_keys=True,\n 
indent=2,\n )\n )\n return response"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_report_firmware_updates(self, month=None, file_path=None):\n api = self._get_api(billing.DefaultApi)\n month = self._month_converter(month)\n\n response = api.get_billing_report_firmware_updates(month=month)\n download_url = response.url\n file_path = self._filepath_converter(file_path, response.filename)\n if file_path:\n urllib.request.urlretrieve(download_url, file_path)\n return response", "response": "Downloads a report of the firmware updates for a given month."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndeleting a long poll channel.", "response": "def delete_long_poll_channel(self, **kwargs): # noqa: E501\n \"\"\"Delete notification Long Poll channel # noqa: E501\n\n To delete a notification Long Poll channel. This is required to change the channel from Long Poll to a callback. You should not make a GET `/v2/notification/pull` call for 2 minutes after the channel was deleted, because it can implicitly recreate the pull channel. You may also receive responses with a payload, 410 GONE with \\\"CHANNEL_DELETED\\\" as the payload, or 200/204, until the old channel is purged. **Example usage:** curl -X DELETE https://api.us-east-1.mbedcloud.com/v2/notification/pull -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. 
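The billing report endpoints above pass the month through `_month_converter`. A standalone sketch of that conversion, reimplemented here for illustration under a hypothetical name (`to_billing_month`); as in the code above, strings are assumed to pass through unchanged:

```python
import datetime


def to_billing_month(date_time=None):
    """Normalize a datetime (or None, meaning 'now') to the YYYY-MM
    string the billing report endpoints expect; strings pass through."""
    if not date_time:
        date_time = datetime.datetime.utcnow()
    if isinstance(date_time, datetime.datetime):
        # zero-pad the month so March becomes '03', not '3'
        date_time = '%s-%02d' % (date_time.year, date_time.month)
    return date_time
```

For example, `to_billing_month(datetime.datetime(2019, 3, 5))` returns `'2019-03'`.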
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_long_poll_channel(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_long_poll_channel_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.delete_long_poll_channel_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndeleting callback URL # noqa: E501 Deletes the callback URL. **Example usage:** curl -X DELETE https://api.us-east-1.mbedcloud.com/v2/notification/callback -H 'authorization: Bearer {api-key}' # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass asynchronous=True >>> thread = api.deregister_webhook(asynchronous=True) >>> result = thread.get() :param asynchronous bool :return: None If the method is called asynchronously, returns the request thread.", "response": "def deregister_webhook(self, **kwargs): # noqa: E501\n \"\"\"Delete callback URL # noqa: E501\n\n Deletes the callback URL. **Example usage:** curl -X DELETE https://api.us-east-1.mbedcloud.com/v2/notification/callback -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.deregister_webhook(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.deregister_webhook_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.deregister_webhook_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the current webhook URL", "response": "def get_webhook(self, **kwargs): # noqa: E501\n \"\"\"Check callback URL # noqa: E501\n\n Shows the current callback URL if it exists. **Example usage:** curl -X GET https://api.us-east-1.mbedcloud.com/v2/notification/callback -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_webhook(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :return: Webhook\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_webhook_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_webhook_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget notifications using Long Poll", "response": "def long_poll_notifications(self, **kwargs): # noqa: E501\n \"\"\"Get notifications using Long Poll # noqa: E501\n\n In this case, notifications are delivered through HTTP long poll requests. The HTTP request is kept open until an event notification or a batch of event notifications are delivered to the client or the request times out (response code 204). 
In both cases, the client should open a new polling connection after the previous one closes. Only a single long polling connection per API key can be ongoing at any given time. You must have a persistent connection (Connection keep-alive header in the request) to avoid excess TLS handshakes. The pull channel is implicitly created by the first GET call to `/v2/notification/pull`. It is refreshed on each GET call. If the channel is not polled for a long time (10 minutes), it expires and is deleted. This means that no notifications will stay in the queue between polls. A channel can also be deleted explicitly with a DELETE call. **Note:** If you cannot have a public-facing callback URL, for example when developing on your local machine, you can use long polling to check for new messages. However, **long polling is deprecated** and will likely be replaced in the future. It is meant only for experimentation and not for commercial usage. The proper method to receive notifications is a **notification callback**. There can only be one notification channel per API key at a time in Device Management Connect. If a callback notification channel already exists, you need to delete it before creating a long poll notification channel, and vice-versa. **Example usage:** curl -X GET https://api.us-east-1.mbedcloud.com/v2/notification/pull -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. 
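The long-poll lifecycle described above (re-poll after a 204 timeout, process notifications then re-poll on 200, stop once a deleted channel reports 410 GONE) can be sketched as a dispatch helper. `next_poll_action` is a hypothetical name, not an SDK method:

```python
def next_poll_action(status_code, payload=None):
    """Decide what a long-poll client should do next, following the
    channel semantics described above."""
    if status_code == 204:            # request timed out with no events
        return 'repoll'
    if status_code == 200 and payload:
        return 'process_then_repoll'  # handle notifications, then reopen
    if status_code == 410:            # channel deleted ("CHANNEL_DELETED")
        return 'stop'
    return 'repoll'                   # default: keep the channel alive
```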
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.long_poll_notifications(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :return: NotificationMessage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.long_poll_notifications_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.long_poll_notifications_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nregister a webhook with Device Management Connect.", "response": "def register_webhook(self, webhook, **kwargs): # noqa: E501\n \"\"\"Register a callback URL # noqa: E501\n\n Register a URL to which the server should deliver notifications of the subscribed resource changes. To get notifications pushed, you also need to place the subscriptions. The maximum length of the URL, header keys and values, all combined, is 400 characters. Notifications are delivered as PUT messages to the HTTP server defined by the client with a subscription server message. The given URL should be accessible and respond to the PUT request with response code of 200 or 204. Device Management Connect tests the callback URL with an empty payload when the URL is registered. For more information on callback notification, see [NotificationMessage](/docs/current/service-api-references/mbed-cloud-connect.html#models). **Optional headers in a callback message:** You can set optional headers for a callback in a **Webhook** object. Device Management Connect includes the header key-value pairs in the notification messages it sends to the callback URL. As the callback URLs are API-key specific, so are the headers. One possible use for the additional headers is to check the origin of a PUT request and to distinguish the application (API key) to which the notification belongs. 
**Note**: Only one callback URL per API key can be active. If you register a new URL while another one is already active, it replaces the active one. There can be only one notification channel at a time. If the Long Poll notification is already present, you need to delete it before setting the callback URL. **Expiration of a callback URL:** A callback can expire when Device Management cannot deliver a notification due to a connection timeout or an error response (4xx or 5xx). After each delivery failure, Device Management sets an exponential back-off time and makes a retry attempt after that. The first retry delay is 1 second, then 2s, 4s, 8s, ..., 2min, 2min. The maximum retry delay is 2 minutes. The callback URL will be removed if all retries fail within 24 hours. More about [notification sending logic](/docs/current/integrate-web-app/event-notification.html#notification-sending-logic). **Supported callback URL protocols:** Currently, only HTTP and HTTPS protocols are supported. **HTTPS callback URLs:** When delivering a notification to an HTTPS based callback URL, Device Management Connect will present a valid client certificate to identify itself. The certificate is signed by a trusted certificate authority (GlobalSign) with a Common Name (CN) set to notifications.mbedcloud.com. **Example usage:** This example command shows how to set your callback URL and API key. It also sets an optional header authorization. When Device Management Connect calls your callback URL, the call contains the authorization header with the defined value. curl -X PUT \\\\ https://api.us-east-1.mbedcloud.com/v2/notification/callback \\\\ -H 'authorization: Bearer {api-key}' \\\\ -H 'content-type: application/json' \\\\ -d '{ \\\"url\\\": \\\"{callback-url}\\\", \\\"headers\\\": {\\\"authorization\\\" : \\\"f4b93d6e-4652-4874-82e4-41a3ced0cd56\\\"} }' # noqa: E501\n This method makes a synchronous HTTP request by default. 
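The retry schedule quoted above (1 s, 2 s, 4 s, ... capped at 2 minutes) is plain exponential back-off. A sketch, with `retry_delays` as a hypothetical helper name:

```python
def retry_delays(attempts, cap=120):
    """Back-off delays in seconds for successive callback delivery
    retries: 1, 2, 4, 8, ... capped at `cap` (2 minutes above)."""
    return [min(2 ** n, cap) for n in range(attempts)]
```

For example, `retry_delays(9)` yields `[1, 2, 4, 8, 16, 32, 64, 120, 120]`, matching the sequence in the docstring.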
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.register_webhook(webhook, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param Webhook webhook: A json object that contains the optional headers and the URL to which the notifications need to be sent. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.register_webhook_with_http_info(webhook, **kwargs) # noqa: E501\n else:\n (data) = self.register_webhook_with_http_info(webhook, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef event_type(self, event_type):\n if event_type is not None and len(event_type) > 100:\n raise ValueError(\"Invalid value for `event_type`, length must be less than or equal to `100`\")\n\n self._event_type = event_type", "response": "Sets the event_type of this DeviceEventData."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_my_api_key_to_groups(self, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.add_my_api_key_to_groups_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.add_my_api_key_to_groups_with_http_info(body, **kwargs) # noqa: E501\n return data", "response": "Add API key to a list of groups."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new API key.", "response": "def create_api_key(self, body, **kwargs): # noqa: E501\n \"\"\"Create a new API key. # noqa: E501\n\n An endpoint for creating a new API key. 
**Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/api-keys -d '{\\\"name\\\": \\\"MyKey1\\\"}' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.create_api_key(body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param ApiKeyInfoReq body: The details of the API key to be created. (required)\n :return: ApiKeyInfoResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_api_key_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.create_api_key_with_http_info(body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_api_key(self, api_key, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_api_key_with_http_info(api_key, **kwargs) # noqa: E501\n else:\n (data) = self.delete_api_key_with_http_info(api_key, **kwargs) # noqa: E501\n return data", "response": "Delete an API key."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeletes a trusted certificate by ID.", "response": "def delete_certificate(self, cert_id, **kwargs): # noqa: E501\n \"\"\"Delete a trusted certificate by ID. # noqa: E501\n\n An endpoint for deleting a trusted certificate. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/trusted-certificates/{cert-id} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_certificate(cert_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str cert_id: The ID of the trusted certificate to be deleted. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_certificate_with_http_info(cert_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_certificate_with_http_info(cert_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_all_api_keys(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_api_keys_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_all_api_keys_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get all API keys in an array optionally filtered by the owner name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_all_certificates(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_certificates_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_all_certificates_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get all trusted certificates in an array."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_all_groups(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_groups_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_all_groups_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get all group 
information."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget API key details.", "response": "def get_api_key(self, api_key, **kwargs): # noqa: E501\n \"\"\"Get API key details. # noqa: E501\n\n An endpoint for retrieving API key details. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/api-keys/{apikey-id} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_api_key(api_key, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str api_key: The ID of the API key to be retrieved. (required)\n :return: ApiKeyInfoResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_api_key_with_http_info(api_key, **kwargs) # noqa: E501\n else:\n (data) = self.get_api_key_with_http_info(api_key, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the API keys of a group.", "response": "def get_api_keys_of_group(self, group_id, **kwargs): # noqa: E501\n \"\"\"Get the API keys of a group. # noqa: E501\n\n An endpoint for listing the API keys of the group with details. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/policy-groups/{group-id}/api-keys -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_api_keys_of_group(group_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str group_id: The ID of the group whose API keys are retrieved. 
(required)\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :param str include: Comma separated additional data to return. Currently supported: total_count\n :return: ApiKeyInfoRespList\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_api_keys_of_group_with_http_info(group_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_api_keys_of_group_with_http_info(group_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a trusted certificate by ID.", "response": "def get_certificate(self, cert_id, **kwargs): # noqa: E501\n \"\"\"Get trusted certificate by ID. # noqa: E501\n\n An endpoint for retrieving a trusted certificate by ID. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/trusted-certificates/{cert-id} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_certificate(cert_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str cert_id: The ID of the trusted certificate to be retrieved. 
(required)\n :return: TrustedCertificateResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_certificate_with_http_info(cert_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_certificate_with_http_info(cert_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_group_summary(self, group_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_group_summary_with_http_info(group_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_group_summary_with_http_info(group_id, **kwargs) # noqa: E501\n return data", "response": "Get the group summary."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the groups of the API key.", "response": "def get_groups_of_my_api_key(self, **kwargs): # noqa: E501\n \"\"\"Get groups of the API key. # noqa: E501\n\n An endpoint for retrieving groups of the API key. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/api-keys/me/groups -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_groups_of_my_api_key(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :param str include: Comma separated additional data to return. 
Currently supported: total_count\n :return: GroupSummaryList\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_groups_of_my_api_key_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_groups_of_my_api_key_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_my_account_info(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_my_account_info_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_my_account_info_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get account info. # noqa: E501\n\n Returns detailed information about the account. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/me?include=policies -H 'Authorization: Bearer API_KEY'`. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_my_account_info(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str include: Comma separated additional data to return. Currently supported: limits, policies, sub_accounts.\n :param str properties: Property name to be returned from account specific properties.\n :return: AccountInfo\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget API key details.", "response": "def get_my_api_key(self, **kwargs): # noqa: E501\n \"\"\"Get API key details. # noqa: E501\n\n An endpoint for retrieving API key details. 
**Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/api-keys/me -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_my_api_key(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :return: ApiKeyInfoResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_my_api_key_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_my_api_key_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving API keys from a group.", "response": "def remove_api_keys_from_group(self, group_id, body, **kwargs): # noqa: E501\n \"\"\"Remove API keys from a group. # noqa: E501\n\n An endpoint for removing API keys from groups. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/policy-groups/{group-id}/api-keys -d '[0162056a9a1586f30242590700000000,0117056a9a1586f30242590700000000]' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.remove_api_keys_from_group(group_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str group_id: The ID of the group whose API keys are removed. (required)\n :param SubjectList body: A list of API keys to be removed from the group. 
(required)\n :return: UpdatedResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.remove_api_keys_from_group_with_http_info(group_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.remove_api_keys_from_group_with_http_info(group_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef remove_my_api_key_from_groups(self, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.remove_my_api_key_from_groups_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.remove_my_api_key_from_groups_with_http_info(body, **kwargs) # noqa: E501\n return data", "response": "Remove API key from groups."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update_api_key(self, api_key, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_api_key_with_http_info(api_key, body, **kwargs) # noqa: E501\n else:\n (data) = self.update_api_key_with_http_info(api_key, body, **kwargs) # noqa: E501\n return data", "response": "Update the attributes of an API key."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating a trusted certificate.", "response": "def update_certificate(self, cert_id, body, **kwargs): # noqa: E501\n \"\"\"Update trusted certificate. # noqa: E501\n\n An endpoint for updating existing trusted certificates. **Example usage:** `curl -X PUT https://api.us-east-1.mbedcloud.com/v3/trusted-certificates/{cert-id} -d {\\\"description\\\": \\\"very important cert\\\"} -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_certificate(cert_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str cert_id: The ID of the trusted certificate to be updated. (required)\n :param TrustedCertificateUpdateReq body: A trusted certificate object with attributes. (required)\n :return: TrustedCertificateResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_certificate_with_http_info(cert_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.update_certificate_with_http_info(cert_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates API key details.", "response": "def update_my_api_key(self, body, **kwargs): # noqa: E501\n \"\"\"Update API key details. # noqa: E501\n\n An endpoint for updating API key details. **Example usage:** `curl -X PUT https://api.us-east-1.mbedcloud.com/v3/api-keys/me -d '{\\\"name\\\": \\\"TestApiKey25\\\"}' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_my_api_key(body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param ApiKeyUpdateReq body: New API key attributes to be stored. 
(required)\n :return: ApiKeyInfoResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_my_api_key_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.update_my_api_key_with_http_info(body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _pattern_match(self, item, pattern):\n if pattern.endswith('*'):\n return item.startswith(pattern[:-1])\n else:\n return item == pattern", "response": "Determine whether the item supplied is matched by the pattern."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add(self, items):\n with _presub_lock:\n self.load_from_cloud()\n for entry in utils.ensure_listable(items):\n # add new entries, but only if they're unique\n if entry not in self.current:\n self.current.append(entry)\n self.save_to_cloud()", "response": "Add new entries to global presubs list"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remove(self, items):\n with _presub_lock:\n self.load_from_cloud()\n for entry in utils.ensure_listable(items):\n # remove all matching entries\n while entry in self.current:\n self.current.remove(entry)\n self.save_to_cloud()", "response": "Remove entry from global presubs list"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef force_utc(time, name='field', precision=6):\n if not isinstance(time, datetime.datetime):\n raise CloudValueError(\"%s should be of type datetime\" % (name,))\n clip = 6 - precision\n timestring = time.isoformat()\n if clip:\n timestring = timestring[:-clip]\n return timestring + \"Z\"", "response": "Force a time to be UTC - safe"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in 
Python 3 where it\ngets billing report for a given month.", "response": "def get_billing_report(self, month, **kwargs): # noqa: E501\n \"\"\"Get billing report. # noqa: E501\n\n Fetch the billing report generated for the currently authenticated commercial non-subtenant account. Billing reports for subtenant accounts are included in their aggregator's billing report response. **Example usage:** curl -X GET https://api.us-east-1.mbedcloud.com/v3/billing-report?month=2018-07 -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_billing_report(month, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str month: Queried year and month of billing report. (required)\n :return: ReportResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_billing_report_with_http_info(month, **kwargs) # noqa: E501\n else:\n (data) = self.get_billing_report_with_http_info(month, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the raw billing data of the active devices for the given month.", "response": "def get_billing_report_active_devices(self, month, **kwargs): # noqa: E501\n \"\"\"Get raw billing data of the active devices for the month. # noqa: E501\n\n Fetch the raw billing data of the active devices for the currently authenticated commercial non-subtenant account. This is supplementary data for the billing report. The raw billing data of the active devices for subtenant accounts are included in their aggregator's raw billing data of the active devices. The endpoint returns the URL to download the gzipped CSV file. The first line is the header providing information on the active devices. 
For example, the ID of an active device. **Example usage:** curl -X GET https://api.us-east-1.mbedcloud.com/v3/billing-report-active-devices?month=2018-07 -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_billing_report_active_devices(month, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str month: Queried year and month of billing report. (required)\n :return: BillingReportRawDataResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_billing_report_active_devices_with_http_info(month, **kwargs) # noqa: E501\n else:\n (data) = self.get_billing_report_active_devices_with_http_info(month, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_billing_report_firmware_updates(self, month, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_billing_report_firmware_updates_with_http_info(month, **kwargs) # noqa: E501\n else:\n (data) = self.get_billing_report_firmware_updates_with_http_info(month, **kwargs) # noqa: E501\n return data", "response": "Get raw billing data for the given month."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the available firmware update quota for the currently authenticated commercial account.", "response": "def get_service_package_quota(self, **kwargs): # noqa: E501\n \"\"\"Service package quota. # noqa: E501\n\n Get the available firmware update quota for the currently authenticated commercial account. 
**Example usage:** curl -X GET https://api.us-east-1.mbedcloud.com/v3/service-packages-quota -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_service_package_quota(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :return: ServicePackageQuota\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_service_package_quota_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_service_package_quota_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets your service package quota history. This API is available for commercial accounts.", "response": "def get_service_package_quota_history(self, **kwargs): # noqa: E501\n \"\"\"Service package quota history. # noqa: E501\n\n Get your quota usage history. This API is available for commercial accounts. Aggregator accounts can see own and subtenant quota usage data. History data is ordered in ascending order based on the added timestamp. **Example usage:** curl -X GET https://api.us-east-1.mbedcloud.com/v3/service-packages-quota-history -H 'authorization: Bearer {api-key}' # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_service_package_quota_history(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param int limit: Maximum amount of quota history entries contained in one paged response.\n :param str after: To fetch after which quota history ID. 
The results will contain entries after specified entry.\n :return: ServicePackageQuotaHistoryResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_service_package_quota_history_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_service_package_quota_history_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_service_packages(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_service_packages_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_service_packages_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get all service packages for the currently authenticated commercial account."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef catch_exceptions(*exceptions):\n def wrap(fn):\n @functools.wraps(fn)\n def wrapped_f(*args, **kwargs):\n try:\n return fn(*args, **kwargs)\n except exceptions:\n t, value, traceback = sys.exc_info()\n # If any resource does not exist, return None instead of raising\n if str(value.status) == '404':\n return None\n e = CloudApiException(str(value), value.reason, value.status)\n raise_(CloudApiException, e, traceback)\n return wrapped_f\n return wrap", "response": "Catch all exceptions provided as arguments and raise CloudApiException instead."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef endpoint_name(self, endpoint_name):\n if endpoint_name is not None and len(endpoint_name) > 64:\n raise ValueError(\"Invalid value for `endpoint_name`, length must be less than or equal to `64`\")\n\n self._endpoint_name = endpoint_name", "response": "Sets the endpoint_name of this 
DeviceDataPostRequest."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef firmware_checksum(self, firmware_checksum):\n if firmware_checksum is not None and len(firmware_checksum) > 64:\n raise ValueError(\"Invalid value for `firmware_checksum`, length must be less than or equal to `64`\")\n\n self._firmware_checksum = firmware_checksum", "response": "Sets the firmware_checksum of this DeviceDataPostRequest."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets the serial_number of this DeviceDataPostRequest.", "response": "def serial_number(self, serial_number):\n \"\"\"\n Sets the serial_number of this DeviceDataPostRequest.\n The serial number of the device.\n\n :param serial_number: The serial_number of this DeviceDataPostRequest.\n :type: str\n \"\"\"\n if serial_number is not None and len(serial_number) > 64:\n raise ValueError(\"Invalid value for `serial_number`, length must be less than or equal to `64`\")\n\n self._serial_number = serial_number"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the vendor_id of this DeviceDataPostRequest.", "response": "def vendor_id(self, vendor_id):\n \"\"\"\n Sets the vendor_id of this DeviceDataPostRequest.\n The device vendor ID.\n\n :param vendor_id: The vendor_id of this DeviceDataPostRequest.\n :type: str\n \"\"\"\n if vendor_id is not None and len(vendor_id) > 255:\n raise ValueError(\"Invalid value for `vendor_id`, length must be less than or equal to `255`\")\n\n self._vendor_id = vendor_id"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main():\n source_files = (\n ('integration', r'results/results.xml'),\n ('unittests', r'results/unittests.xml'),\n ('coverage', r'results/coverage.xml')\n )\n\n parsed = {k: ElementTree.parse(v).getroot().attrib for k, v in source_files}\n\n with 
open(r'results/summary.json', 'w') as fh:\n json.dump(parsed, fh)", "response": "Collects results from CI run"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nlisting all devices in the device catalog.", "response": "def list_devices(self, **kwargs):\n \"\"\"List devices in the device catalog.\n\n Example usage, listing all registered devices in the catalog:\n\n .. code-block:: python\n\n filters = { 'state': {'$eq': 'registered' } }\n devices = api.list_devices(order='asc', filters=filters)\n for idx, d in enumerate(devices):\n print(idx, d.id)\n\n :param int limit: The number of devices to retrieve.\n :param str order: The ordering direction, ascending (asc) or\n descending (desc)\n :param str after: Get devices after/starting at given `device_id`\n :param filters: Dictionary of filters to apply.\n :returns: a list of :py:class:`Device` objects registered in the catalog.\n :rtype: PaginatedResponse\n \"\"\"\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, Device, True)\n api = self._get_api(device_directory.DefaultApi)\n return PaginatedResponse(api.device_list, lwrap_type=Device, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets details about a specific device.", "response": "def get_device(self, device_id):\n \"\"\"Get device details from catalog.\n\n :param str device_id: the ID of the device to retrieve (Required)\n :returns: device object matching the `device_id`.\n :rtype: Device\n \"\"\"\n api = self._get_api(device_directory.DefaultApi)\n return Device(api.device_retrieve(device_id))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate existing device in catalog.", "response": "def update_device(self, device_id, **kwargs):\n \"\"\"Update existing device in catalog.\n\n .. 
code-block:: python\n\n existing_device = api.get_device(...)\n updated_device = api.update_device(\n existing_device.id,\n certificate_fingerprint = \"something new\"\n )\n\n :param str device_id: The ID of the device to update (Required)\n :param obj custom_attributes: Up to 5 custom JSON attributes\n :param str description: The description of the device\n :param str name: The name of the device\n :param str alias: The alias of the device\n :param str device_type: The endpoint type of the device - e.g. if the device is a gateway\n :param str host_gateway: The endpoint_name of the host gateway, if appropriate\n :param str certificate_fingerprint: Fingerprint of the device certificate\n :param str certificate_issuer_id: ID of the issuer of the certificate\n :returns: the updated device object\n :rtype: Device\n \"\"\"\n api = self._get_api(device_directory.DefaultApi)\n device = Device._create_request_map(kwargs)\n body = DeviceDataPostRequest(**device)\n return Device(api.device_update(device_id, body))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_device(self, **kwargs):\n api = self._get_api(device_directory.DefaultApi)\n device = Device._create_request_map(kwargs)\n device = DeviceData(**device)\n return Device(api.device_create(device))", "response": "Add a new device to the catalog."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_device(self, device_id):\n api = self._get_api(device_directory.DefaultApi)\n return api.device_destroy(id=device_id)", "response": "Delete device from catalog."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef list_queries(self, **kwargs):\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, Query, True)\n api = self._get_api(device_directory.DefaultApi)\n return PaginatedResponse(api.device_query_list, 
lwrap_type=Query, **kwargs)", "response": "List queries in device query service."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds a new query to the device query service.", "response": "def add_query(self, name, filter, **kwargs):\n \"\"\"Add a new query to device query service.\n\n .. code-block:: python\n\n f = api.add_query(\n name = \"Query name\",\n filter = {\n \"device_id\": {\"$eq\": \"01234\"},\n \"custom_attributes\": {\n \"foo\": {\"$eq\": \"bar\"}\n }\n }\n )\n print(f.created_at)\n\n :param str name: Name of query (Required)\n :param dict filter: Filter properties to apply (Required)\n :return: the newly created query object\n :rtype: Query\n \"\"\"\n # Ensure we have the correct types and get the new query object\n filter_obj = filters.legacy_filter_formatter(\n dict(filter=filter),\n Device._get_attributes_map()\n ) if filter else None\n query_map = Query._create_request_map(kwargs)\n # Create the DeviceQuery object\n f = DeviceQuery(name=name, query=filter_obj['filter'], **query_map)\n api = self._get_api(device_directory.DefaultApi)\n return Query(api.device_query_create(f))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating existing query in device query service.", "response": "def update_query(self, query_id, name=None, filter=None, **kwargs):\n \"\"\"Update existing query in device query service.\n\n ..
code-block:: python\n\n q = api.get_query(...)\n q.filter[\"custom_attributes\"][\"foo\"] = {\n \"$eq\": \"bar\"\n }\n new_q = api.update_query(\n query_id = q.id,\n name = \"new name\",\n filter = q.filter\n )\n\n :param str query_id: Existing query ID to update (Required)\n :param str name: name of query\n :param dict filter: query properties to apply\n :return: the newly updated query object.\n :rtype: Query\n \"\"\"\n # Get urlencoded query attribute\n filter_obj = filters.legacy_filter_formatter(\n dict(filter=filter),\n Device._get_attributes_map()\n ) if filter else None\n\n query_map = Query._create_request_map(kwargs)\n\n # The filter option is optional on update but DeviceQueryPostPutRequest sets it None, which is invalid.\n # Manually create a resource body without the filter parameter if only the name is provided.\n if filter is None:\n body = {\"name\": name}\n else:\n body = DeviceQueryPostPutRequest(name=name, query=filter_obj['filter'], **query_map)\n\n api = self._get_api(device_directory.DefaultApi)\n return Query(api.device_query_update(query_id, body))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_query(self, query_id):\n api = self._get_api(device_directory.DefaultApi)\n api.device_query_destroy(query_id)\n return", "response": "Delete query in device query service."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting query in device query service.", "response": "def get_query(self, query_id):\n \"\"\"Get query in device query service.\n\n :param int query_id: ID of the query to get (Required)\n :returns: device query object\n :rtype: Query\n \"\"\"\n api = self._get_api(device_directory.DefaultApi)\n return Query(api.device_query_retrieve(query_id))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nlist all device events.", "response": "def list_device_events(self, **kwargs):\n \"\"\"List all device logs.\n\n :param 
int limit: The number of logs to retrieve.\n :param str order: The ordering direction, ascending (asc) or\n descending (desc)\n :param str after: Get logs after/starting at given `device_event_id`\n :param dict filters: Dictionary of filters to apply.\n :return: list of :py:class:`DeviceEvent` objects\n :rtype: PaginatedResponse\n \"\"\"\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, DeviceEvent, True)\n\n api = self._get_api(device_directory.DefaultApi)\n return PaginatedResponse(api.device_log_list, lwrap_type=DeviceEvent, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_device_event(self, device_event_id):\n api = self._get_api(device_directory.DefaultApi)\n return DeviceEvent(api.device_log_retrieve(device_event_id))", "response": "Get a specific device event with provided ID."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef filter(self):\n if isinstance(self._filter, str):\n return self._decode_query(self._filter)\n return self._filter", "response": "Get the query of this Query."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_metadata(item):\n for metadata_key in ('METADATA', 'PKG-INFO'):\n try:\n metadata_lines = item.get_metadata_lines(metadata_key)\n break\n except (KeyError, IOError):\n # The package will be shown in the report without any license information\n # if a metadata key is not found.\n metadata_lines = []\n\n return metadata_lines", "response": "Get metadata information from the distribution."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntidying up a license string e. g. :: OSI :: mit software license -> MIT", "response": "def license_cleanup(text):\n \"\"\"Tidy up a license string\n\n e.g. 
\"::OSI:: mit software license\" -> \"MIT\"\n \"\"\"\n if not text:\n return None\n text = text.rsplit(':', 1)[-1]\n replacements = [\n 'licenses',\n 'license',\n 'licences',\n 'licence',\n 'software',\n ',',\n ]\n for replacement in replacements:\n text = text.replace(replacement, '')\n text = text.strip().upper()\n text = text.replace(' ', '_')\n text = text.replace('-', '_')\n if any(trigger.upper() in text for trigger in BAD_LICENSES):\n return None\n return text"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_package_info_from_line(tpip_pkg, line):\n lower_line = line.lower()\n\n try:\n metadata_key, metadata_value = lower_line.split(':', 1)\n except ValueError:\n return\n\n metadata_key = metadata_key.strip()\n metadata_value = metadata_value.strip()\n\n if metadata_value == 'unknown':\n return\n\n # extract exact matches\n if metadata_key in TPIP_FIELD_MAPPINGS:\n tpip_pkg[TPIP_FIELD_MAPPINGS[metadata_key]] = metadata_value\n return\n\n if metadata_key.startswith('version') and not tpip_pkg.get('PkgVersion'):\n # ... 
but if not, we'll use whatever we find\n tpip_pkg['PkgVersion'] = metadata_value\n return\n\n # Handle british and american spelling of licence/license\n if 'licen' in lower_line:\n if metadata_key.startswith('classifier') or '::' in metadata_value:\n license = lower_line.rsplit(':')[-1].strip().lower()\n license = license_cleanup(license)\n if license:\n tpip_pkg.setdefault('PkgLicenses', []).append(license)", "response": "Given a line of text from metadata extract semantic info"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef process_metadata(pkg_name, metadata_lines):\n # Initialise a dictionary with all the fields to report on.\n tpip_pkg = dict(\n PkgName=pkg_name,\n PkgType='python package',\n PkgMgrURL='https://pypi.org/project/%s/' % pkg_name,\n )\n\n # Extract the metadata into a list for each field as there may be multiple\n # entries for each one.\n for line in metadata_lines:\n get_package_info_from_line(tpip_pkg, line)\n\n # condense PkgAuthorEmail into the Originator field\n if 'PkgAuthorEmail' in tpip_pkg:\n tpip_pkg['PkgOriginator'] = '%s <%s>' % (\n tpip_pkg['PkgOriginator'],\n tpip_pkg.pop('PkgAuthorEmail')\n )\n\n explicit_license = license_cleanup(tpip_pkg.get('PkgLicense'))\n license_candidates = tpip_pkg.pop('PkgLicenses', [])\n\n if explicit_license:\n tpip_pkg['PkgLicense'] = explicit_license\n else:\n tpip_pkg['PkgLicense'] = ' '.join(set(license_candidates))\n\n return tpip_pkg", "response": "Process the metadata lines into a dictionary containing the relevant fields."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write_csv_file(output_filename, tpip_pkgs):\n dirname = os.path.dirname(os.path.abspath(os.path.expanduser(output_filename)))\n\n if dirname and not os.path.exists(dirname):\n os.makedirs(dirname)\n\n with open(output_filename, 'w', newline='') as csvfile:\n writer = csv.DictWriter(csvfile, 
fieldnames=FIELDNAMES)\n writer.writeheader()\n writer.writerows(tpip_pkgs)", "response": "Write the TPIP report out to a CSV file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nensuring each value is ascii - only", "response": "def force_ascii_values(data):\n \"\"\"Ensures each value is ascii-only\"\"\"\n return {\n k: v.encode('utf8').decode('ascii', 'backslashreplace')\n for k, v in data.items()\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main():\n parser = argparse.ArgumentParser(description='Generate a TPIP report as a CSV file.')\n parser.add_argument('output_filename', type=str, metavar='output-file',\n help='the output path and filename', nargs='?')\n parser.add_argument('--only', type=str, help='only parse this package')\n args = parser.parse_args()\n\n output_path = os.path.abspath(args.output_filename) if args.output_filename else None\n\n skips = []\n tpip_pkgs = []\n for pkg_name, pkg_item in sorted(pkg_resources.working_set.by_key.items()):\n if args.only and args.only not in pkg_name.lower():\n continue\n if pkg_name in EXCLUDED_PACKAGES:\n skips.append(pkg_name)\n continue\n metadata_lines = get_metadata(pkg_item)\n tpip_pkg = process_metadata(pkg_name, metadata_lines)\n tpip_pkgs.append(force_ascii_values(tpip_pkg))\n\n print(json.dumps(tpip_pkgs, indent=2, sort_keys=True))\n print('Parsed %s packages\\nOutput to CSV: `%s`\\nIgnored packages: %s' % (\n len(tpip_pkgs),\n output_path,\n ', '.join(skips),\n ))\n output_path and write_csv_file(output_path, tpip_pkgs)", "response": "Generate a TPIP report."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the remaining_quota of this ServicePackageMetadata.", "response": "def remaining_quota(self, remaining_quota):\n \"\"\"\n Sets the remaining_quota of this ServicePackageMetadata.\n Current available service package quota.\n\n :param remaining_quota: The 
remaining_quota of this ServicePackageMetadata.\n :type: int\n \"\"\"\n if remaining_quota is None:\n raise ValueError(\"Invalid value for `remaining_quota`, must not be `None`\")\n if remaining_quota is not None and remaining_quota < 0:\n raise ValueError(\"Invalid value for `remaining_quota`, must be a value greater than or equal to `0`\")\n\n self._remaining_quota = remaining_quota"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the reserved_quota of this ServicePackageMetadata.", "response": "def reserved_quota(self, reserved_quota):\n \"\"\"\n Sets the reserved_quota of this ServicePackageMetadata.\n Sum of all open reservations for this account.\n\n :param reserved_quota: The reserved_quota of this ServicePackageMetadata.\n :type: int\n \"\"\"\n if reserved_quota is None:\n raise ValueError(\"Invalid value for `reserved_quota`, must not be `None`\")\n if reserved_quota is not None and reserved_quota < 0:\n raise ValueError(\"Invalid value for `reserved_quota`, must be a value greater than or equal to `0`\")\n\n self._reserved_quota = reserved_quota"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_bulk_device_enrollment(self, enrollment_identities, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_bulk_device_enrollment_with_http_info(enrollment_identities, **kwargs) # noqa: E501\n else:\n (data) = self.create_bulk_device_enrollment_with_http_info(enrollment_identities, **kwargs) # noqa: E501\n return data", "response": "Bulk upload a file containing a number of enrollments."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_device_enrollment(self, enrollment_identity, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return 
self.create_device_enrollment_with_http_info(enrollment_identity, **kwargs) # noqa: E501\n else:\n (data) = self.create_device_enrollment_with_http_info(enrollment_identity, **kwargs) # noqa: E501\n return data", "response": "Create an enrollment claim for one or several devices."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_bulk_device_enrollment(self, enrollment_identities, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_bulk_device_enrollment_with_http_info(enrollment_identities, **kwargs) # noqa: E501\n else:\n (data) = self.delete_bulk_device_enrollment_with_http_info(enrollment_identities, **kwargs) # noqa: E501\n return data", "response": "Bulk delete device enrollments listed in an uploaded file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete an enrollment by ID.", "response": "def delete_device_enrollment(self, id, **kwargs): # noqa: E501\n \"\"\"Delete an enrollment by ID. # noqa: E501\n\n To free a device from your account you can delete the enrollment claim. To bypass the device ownership, you need to delete the enrollment and do a factory reset for the device. For more information, see [Transferring the ownership using First-to-Claim](/docs/current/connecting/device-ownership.html).
**Example usage:** ``` curl -X DELETE \\\\ -H 'Authorization: Bearer ' \\\\ https://api.us-east-1.mbedcloud.com/v3/device-enrollments/{id} ``` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_device_enrollment(id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str id: Enrollment identity. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_device_enrollment_with_http_info(id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_device_enrollment_with_http_info(id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_bulk_device_enrollment(self, id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_bulk_device_enrollment_with_http_info(id, **kwargs) # noqa: E501\n else:\n (data) = self.get_bulk_device_enrollment_with_http_info(id, **kwargs) # noqa: E501\n return data", "response": "Get bulk upload entity"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget bulk delete entity", "response": "def get_bulk_device_enrollment_delete(self, id, **kwargs): # noqa: E501\n \"\"\"Get bulk delete entity # noqa: E501\n\n Provides information on bulk delete for the given ID. For example, the bulk status and the number of processed enrollment identities. Also links to the bulk delete reports are provided. **Example usage:** ``` curl -X GET \\\\ -H 'Authorization: Bearer ' \\\\ https://api.us-east-1.mbedcloud.com/v3/device-enrollments-bulk-deletes/{id} ``` # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_bulk_device_enrollment_delete(id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str id: Bulk delete task entity ID (required)\n :return: BulkResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_bulk_device_enrollment_delete_with_http_info(id, **kwargs) # noqa: E501\n else:\n (data) = self.get_bulk_device_enrollment_delete_with_http_info(id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets details of an enrollment by ID.", "response": "def get_device_enrollment(self, id, **kwargs): # noqa: E501\n \"\"\"Get details of an enrollment by ID. # noqa: E501\n\n To check the enrollment info in detail, for example date of claim and expiration date. **Example usage:** ``` curl -X GET \\\\ -H 'Authorization: Bearer ' \\\\ https://api.us-east-1.mbedcloud.com/v3/device-enrollments/{id} ``` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_device_enrollment(id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str id: Enrollment identity. 
(required)\n :return: EnrollmentIdentity\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_device_enrollment_with_http_info(id, **kwargs) # noqa: E501\n else:\n (data) = self.get_device_enrollment_with_http_info(id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_device_enrollments(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_device_enrollments_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_device_enrollments_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get a list of pending and claimed device enrollments."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef root_manifest_id(self, root_manifest_id):\n if root_manifest_id is not None and len(root_manifest_id) > 32:\n raise ValueError(\"Invalid value for `root_manifest_id`, length must be less than or equal to `32`\")\n\n self._root_manifest_id = root_manifest_id", "response": "Sets the root_manifest_id of this UpdateCampaignPutRequest."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the state of this UpdateCampaignPutRequest.", "response": "def state(self, state):\n \"\"\"\n Sets the state of this UpdateCampaignPutRequest.\n The state of the campaign\n\n :param state: The state of this UpdateCampaignPutRequest.\n :type: str\n \"\"\"\n allowed_values = [\"draft\", \"scheduled\", \"allocatingquota\", \"allocatedquota\", \"quotaallocationfailed\", \"checkingmanifest\", \"checkedmanifest\", \"devicefetch\", \"devicecopy\", \"devicecheck\", \"publishing\", \"deploying\", \"deployed\", \"manifestremoved\", \"expired\", \"stopping\", \"autostopped\", \"userstopped\", 
\"conflict\"]\n if state not in allowed_values:\n raise ValueError(\n \"Invalid value for `state` ({0}), must be one of {1}\"\n .format(state, allowed_values)\n )\n\n self._state = state"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef notify(self, data):\n LOG.debug('notify received: %s', data)\n self._notify_count += 1\n if self._cancelled:\n LOG.debug('notify skipping due to `cancelled`')\n return self\n if self._once_done and self._once:\n LOG.debug('notify skipping due to `once`')\n return self\n with self._lock:\n try:\n # notify next consumer immediately\n self._waitables.get_nowait().put_nowait(data)\n LOG.debug('found a consumer, notifying')\n except queue.Empty:\n # store the notification\n try:\n self._notifications.put_nowait(data)\n LOG.debug('no consumers, queueing data')\n except queue.Full:\n LOG.warning('notification queue full - discarding new data')\n\n # callbacks are sent straight away\n # bombproofing should be handled by individual callbacks\n for callback in self._callbacks:\n LOG.debug('callback: %s', callback)\n callback(data)\n self._once_done = True\n return self", "response": "Notify this observer that data has arrived."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncancel the observer .", "response": "def cancel(self):\n \"\"\"Cancels the observer\n\n No more notifications will be passed on\n \"\"\"\n LOG.debug('cancelling %s', self)\n self._cancelled = True\n self.clear_callbacks() # not strictly necessary, but may release references\n while True:\n try:\n self._waitables.get_nowait().put_nowait(self.sentinel)\n except queue.Empty:\n break"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef hmac_hex_key(self, hmac_hex_key):\n if hmac_hex_key is None:\n raise ValueError(\"Invalid value for `hmac_hex_key`, must not be `None`\")\n if hmac_hex_key is not None and len(hmac_hex_key) > 64:\n 
raise ValueError(\"Invalid value for `hmac_hex_key`, length must be less than or equal to `64`\")\n if hmac_hex_key is not None and not re.search('^([a-fA-F0-9][a-fA-F0-9]){1,32}$', hmac_hex_key):\n raise ValueError(\"Invalid value for `hmac_hex_key`, must be a follow pattern or equal to `/^([a-fA-F0-9][a-fA-F0-9]){1,32}$/`\")\n\n self._hmac_hex_key = hmac_hex_key", "response": "Sets the hmac_hex_key of this CfsslAuthCredentials."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef name(self, name):\n if name is None:\n raise ValueError(\"Invalid value for `name`, must not be `None`\")\n if name is not None and len(name) > 200:\n raise ValueError(\"Invalid value for `name`, length must be less than or equal to `200`\")\n\n self._name = name", "response": "Sets the name of this DeviceQueryPostPutRequest."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the query of this DeviceQueryPostPutRequest.", "response": "def query(self, query):\n \"\"\"\n Sets the query of this DeviceQueryPostPutRequest.\n The device query.\n\n :param query: The query of this DeviceQueryPostPutRequest.\n :type: str\n \"\"\"\n if query is None:\n raise ValueError(\"Invalid value for `query`, must not be `None`\")\n if query is not None and len(query) > 1000:\n raise ValueError(\"Invalid value for `query`, length must be less than or equal to `1000`\")\n\n self._query = query"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlists the API keys registered in the organisation.", "response": "def list_api_keys(self, **kwargs):\n \"\"\"List the API keys registered in the organisation.\n\n List api keys Example:\n\n .. 
code-block:: python\n\n account_management_api = AccountManagementAPI()\n\n # List api keys\n api_keys_paginated_response = account_management_api.list_api_keys()\n # get single api key\n api_keys_paginated_response.data[0]\n\n :param int limit: Number of API keys to get\n :param str after: Entity ID after which to start fetching\n :param str order: Order of the records to return (asc|desc)\n :param dict filters: Dictionary of filters to apply: str owner (eq)\n :returns: a list of :class:`ApiKey` objects\n :rtype: PaginatedResponse\n :raises: ApiException\n \"\"\"\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, ApiKey)\n\n api = self._get_api(iam.DeveloperApi)\n\n # Return the data array\n return PaginatedResponse(api.get_all_api_keys, lwrap_type=ApiKey, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget API key details for the given API key ID", "response": "def get_api_key(self, api_key_id):\n \"\"\"Get API key details for key registered in organisation.\n\n :param str api_key_id: The ID of the API key to be updated (Required)\n :returns: API key object\n :rtype: ApiKey\n \"\"\"\n api = self._get_api(iam.DeveloperApi)\n return ApiKey(api.get_api_key(api_key_id))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_api_key(self, api_key_id):\n api = self._get_api(iam.DeveloperApi)\n api.delete_api_key(api_key_id)\n return", "response": "Delete an API key registered in the organisation."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd an API key to the organisation.", "response": "def add_api_key(self, name, **kwargs):\n \"\"\"Create new API key registered to organisation.\n\n :param str name: The name of the API key (Required)\n :param list groups: List of group IDs (`str`)\n :param str owner: User ID owning the API key\n :param str status: The status of the API key. 
Values: ACTIVE, INACTIVE\n :returns: Newly created API key object\n :rtype: ApiKey\n \"\"\"\n api = self._get_api(iam.DeveloperApi)\n kwargs.update({'name': name})\n api_key = ApiKey._create_request_map(kwargs)\n body = iam.ApiKeyInfoReq(**api_key)\n return ApiKey(api.create_api_key(body))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update_api_key(self, api_key_id, **kwargs):\n api = self._get_api(iam.DeveloperApi)\n apikey = ApiKey._create_request_map(kwargs)\n body = iam.ApiKeyUpdateReq(**apikey)\n return ApiKey(api.update_api_key(api_key_id, body))", "response": "Update the API key."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef list_users(self, **kwargs):\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, User)\n api = self._get_api(iam.AccountAdminApi)\n return PaginatedResponse(api.get_all_users, lwrap_type=User, **kwargs)", "response": "List all users in organisation."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget details about a user.", "response": "def get_user(self, user_id):\n \"\"\"Get user details of specified user.\n\n :param str user_id: the ID of the user to get (Required)\n :returns: the user object with details about the user.\n :rtype: User\n \"\"\"\n api = self._get_api(iam.AccountAdminApi)\n return User(api.get_user(user_id))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_user(self, user_id, **kwargs):\n api = self._get_api(iam.AccountAdminApi)\n user = User._create_request_map(kwargs)\n body = iam.UserUpdateReq(**user)\n return User(api.update_user(user_id, body))", "response": "Update user properties of specified user."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef delete_user(self, user_id):\n api = 
self._get_api(iam.AccountAdminApi)\n api.delete_user(user_id)\n return", "response": "Delete the specified user."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a new user with provided details.", "response": "def add_user(self, username, email, **kwargs):\n \"\"\"Create a new user with provided details.\n\n Add user example:\n\n .. code-block:: python\n\n account_management_api = AccountManagementAPI()\n # Add user\n user = {\n \"username\": \"test_user\",\n \"email\": \"test@gmail.com\",\n \"phone_number\": \"0123456789\"\n }\n new_user = account_management_api.add_user(**user)\n\n :param str username: The unique username of the user (Required)\n :param str email: The unique email of the user (Required)\n :param str full_name: The full name of the user\n :param list groups: List of group IDs (`str`) which this user belongs to\n :param str password: The password string of the user\n :param str phone_number: Phone number of the user\n :param bool terms_accepted: 'General Terms & Conditions' have been accepted\n :param bool marketing_accepted: Marketing Information opt-in\n :returns: the new user object\n :rtype: User\n \"\"\"\n api = self._get_api(iam.AccountAdminApi)\n kwargs.update({'username': username, 'email': email})\n user = User._create_request_map(kwargs)\n body = iam.UserUpdateReq(**user)\n return User(api.create_user(body))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_account(self):\n api = self._get_api(iam.DeveloperApi)\n return Account(api.get_my_account_info(include=\"limits, policies\"))", "response": "Get details of the current account."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate details of an account associated with current API key.", "response": "def update_account(self, **kwargs):\n \"\"\"Update details of account associated with current API key.\n\n :param str address_line1: Postal address line 1.\n
:param str address_line2: Postal address line 2.\n :param str city: The city part of the postal address.\n :param str display_name: The display name for the account.\n :param str country: The country part of the postal address.\n :param str company: The name of the company.\n :param str state: The state part of the postal address.\n :param str contact: The name of the contact person for this account.\n :param str postal_code: The postal code part of the postal address.\n :param str parent_id: The ID of the parent account.\n :param str phone_number: The phone number of the company.\n :param str email: Email address for this account.\n :param list[str] aliases: List of aliases\n :returns: an account object.\n :rtype: Account\n \"\"\"\n api = self._get_api(iam.AccountAdminApi)\n account = Account._create_request_map(kwargs)\n body = AccountUpdateReq(**account)\n return Account(api.update_my_account(body))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlist all groups in organisation.", "response": "def list_groups(self, **kwargs):\n \"\"\"List all groups in organisation.\n\n :param int limit: The number of groups to retrieve\n :param str order: The ordering direction, ascending (asc) or descending (desc)\n :param str after: Get groups after/starting at given group ID\n :returns: a list of :py:class:`Group` objects.\n :rtype: PaginatedResponse\n \"\"\"\n kwargs = self._verify_sort_options(kwargs)\n api = self._get_api(iam.DeveloperApi)\n return PaginatedResponse(api.get_all_groups, lwrap_type=Group, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget details of the group.", "response": "def get_group(self, group_id):\n \"\"\"Get details of the group.\n\n :param str group_id: The group ID (Required)\n :returns: :py:class:`Group` object.\n :rtype: Group\n \"\"\"\n api = self._get_api(iam.DeveloperApi)\n return Group(api.get_group_summary(group_id))"} {"SOURCE": "codesearchnet", "instruction": "Here you 
have a function in Python 3, explain what it does\ndef list_group_users(self, group_id, **kwargs):\n kwargs[\"group_id\"] = group_id\n kwargs = self._verify_sort_options(kwargs)\n api = self._get_api(iam.AccountAdminApi)\n return PaginatedResponse(api.get_users_of_group, lwrap_type=User, **kwargs)", "response": "List users of a group."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef list_group_api_keys(self, group_id, **kwargs):\n kwargs[\"group_id\"] = group_id\n kwargs = self._verify_sort_options(kwargs)\n api = self._get_api(iam.DeveloperApi)\n return PaginatedResponse(api.get_api_keys_of_group, lwrap_type=ApiKey, **kwargs)", "response": "List API keys of a group."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef defer(self, *args, **kwargs):\n LOG.debug(\n '%s on %s (awaitable %s async %s provider %s)',\n 'deferring',\n self._func,\n self._is_awaitable,\n self._is_asyncio_provider,\n self._concurrency_provider\n )\n\n if self._blocked:\n raise RuntimeError('Already activated this deferred call by blocking on it')\n with self._lock:\n if not self._deferable:\n func_partial = functools.partial(self._func, *args, **kwargs)\n\n # we are either:\n # - pure asyncio\n # - asyncio but with blocking function\n # - not asyncio, use threadpool\n self._deferable = (\n # pure asyncio\n asyncio.ensure_future(func_partial(), loop=self._concurrency_provider)\n if self._is_awaitable else (\n # asyncio blocked\n self._concurrency_provider.run_in_executor(\n func=func_partial,\n executor=None\n )\n if self._is_asyncio_provider else (\n # not asyncio\n self._concurrency_provider.apply_async(func_partial)\n )\n )\n )\n return self._deferable", "response": "Call the function and immediately return an asynchronous object."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncall the wrapped function and wait for the result of the function 
call.", "response": "def block(self, *args, **kwargs):\n \"\"\"Call the wrapped function, and wait for the result in a blocking fashion\n\n Returns the result of the function call.\n\n :param args:\n :param kwargs:\n :return: result of function call\n \"\"\"\n LOG.debug(\n '%s on %s (awaitable %s async %s provider %s)',\n 'blocking',\n self._func,\n self._is_awaitable,\n self._is_asyncio_provider,\n self._concurrency_provider\n )\n\n if self._deferable:\n raise RuntimeError('Already activated this call by deferring it')\n with self._lock:\n if not hasattr(self, '_result'):\n try:\n self._result = (\n self._concurrency_provider.run_until_complete(self.defer(*args, **kwargs))\n if self._is_asyncio_provider else self._func(*args, **kwargs)\n )\n except queue.Empty:\n raise CloudTimeoutError(\"No data received after %.1f seconds.\" % kwargs.get(\"timeout\", 0))\n self._blocked = True\n return self._result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef after(self, after):\n if after is None:\n raise ValueError(\"Invalid value for `after`, must not be `None`\")\n if after is not None and not re.search('^[A-Za-z0-9]{32}', after):\n raise ValueError(\"Invalid value for `after`, must be a follow pattern or equal to `/^[A-Za-z0-9]{32}/`\")\n\n self._after = after", "response": "Sets the after of this EnrollmentIdentities."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef order(self, order):\n if order is None:\n raise ValueError(\"Invalid value for `order`, must not be `None`\")\n allowed_values = [\"ASC\", \"DESC\"]\n if order not in allowed_values:\n raise ValueError(\n \"Invalid value for `order` ({0}), must be one of {1}\"\n .format(order, allowed_values)\n )\n\n self._order = order", "response": "Sets the order of this EnrollmentIdentities."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, 
explain what it does\ndef health_indicator(self, health_indicator):\n allowed_values = [\"ok\", \"warning\", \"error\"]\n if health_indicator not in allowed_values:\n raise ValueError(\n \"Invalid value for `health_indicator` ({0}), must be one of {1}\"\n .format(health_indicator, allowed_values)\n )\n\n self._health_indicator = health_indicator", "response": "Sets the health_indicator of this UpdateCampaign."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef quota(self, quota):\n if quota is None:\n raise ValueError(\"Invalid value for `quota`, must not be `None`\")\n if quota is not None and quota < 0:\n raise ValueError(\"Invalid value for `quota`, must be a value greater than or equal to `0`\")\n\n self._quota = quota", "response": "Sets the quota of this ServicePackageQuota."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef enrolled_device_id(self, enrolled_device_id):\n if enrolled_device_id is None:\n raise ValueError(\"Invalid value for `enrolled_device_id`, must not be `None`\")\n if enrolled_device_id is not None and not re.search('^[A-Za-z0-9]{32}', enrolled_device_id):\n raise ValueError(\"Invalid value for `enrolled_device_id`, must be a follow pattern or equal to `/^[A-Za-z0-9]{32}/`\")\n\n self._enrolled_device_id = enrolled_device_id", "response": "Sets the enrolled_device_id of this EnrollmentIdentity."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck for existence of a new newsfile", "response": "def main(news_dir=None):\n \"\"\"Checks for existence of a new newsfile\"\"\"\n if news_dir is None:\n from generate_news import news_dir\n news_dir = os.path.abspath(news_dir)\n\n # assume the name of the remote alias is just 'origin'\n remote_alias = 'origin'\n\n # figure out what the 'default branch' for the origin is\n origin_stats = subprocess.check_output(\n ['git', 'remote', 
'show', remote_alias],\n cwd=news_dir\n ).decode()\n\n # the output of the git command looks like:\n # ' HEAD branch: master'\n origin_branch = 'master' # unless we prove otherwise\n for line in origin_stats.splitlines():\n if 'head branch:' in line.lower():\n origin_branch = line.split(':', 1)[-1].strip()\n break\n\n origin = '%s/%s' % (remote_alias, origin_branch)\n\n # figure out the current branch\n current_branch = subprocess.check_output(\n ['git', 'rev-parse', '--abbrev-ref', 'HEAD'],\n cwd=news_dir\n ).decode().strip()\n\n print(':: Finding news in `%s` to add to remote `%s`' % (current_branch, origin))\n diff_command = ['git', 'diff', '%s...%s' % (origin, current_branch), '--name-status', news_dir]\n file_diff = subprocess.check_output(\n diff_command,\n cwd=news_dir\n ).decode()\n\n # the output of the git command looks like:\n # 'A docs/news/789.feature'\n\n # [optional] ensure we have an addition, rather than just modify/delete\n added_news = [line for line in file_diff.splitlines() if line.lower().strip().startswith('a')]\n\n # pass or fail\n if not added_news:\n print(\n '! Error: Uh-oh, did not find any news files!\\n'\n '! 
Please add a news file to `%s`\n'\n '+ File diff:\n%s\n'\n '+ Git diff command:\n`%s`\n' % (\n news_dir,\n file_diff.strip(),\n subprocess.list2cmdline(diff_command)\n )\n )\n exit(1) # exit with an error status, no need for a traceback\n print(':: %s new files in `%s`' % (len(added_news), news_dir))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_account_api_key_to_groups(self, account_id, api_key, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.add_account_api_key_to_groups_with_http_info(account_id, api_key, body, **kwargs) # noqa: E501\n else:\n (data) = self.add_account_api_key_to_groups_with_http_info(account_id, api_key, body, **kwargs) # noqa: E501\n return data", "response": "Add an API key to a list of groups."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_account_certificate(self, account_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.add_account_certificate_with_http_info(account_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.add_account_certificate_with_http_info(account_id, body, **kwargs) # noqa: E501\n return data", "response": "Add a new trusted certificate."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_account_user_to_groups(self, account_id, user_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.add_account_user_to_groups_with_http_info(account_id, user_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.add_account_user_to_groups_with_http_info(account_id, user_id, body, **kwargs) # noqa: E501\n return data", "response": "Add a user to a list of groups."} {"SOURCE": "codesearchnet", "instruction": "Here you have a 
function in Python 3, explain what it does\ndef add_subjects_to_account_group(self, account_id, group_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.add_subjects_to_account_group_with_http_info(account_id, group_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.add_subjects_to_account_group_with_http_info(account_id, group_id, body, **kwargs) # noqa: E501\n return data", "response": "Add members to a group."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking the API key for an existing account.", "response": "def check_account_api_key(self, account_id, api_key, **kwargs): # noqa: E501\n \"\"\"Check the API key. # noqa: E501\n\n An endpoint for checking API key. **Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/api-keys/{apiKey} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.check_account_api_key(account_id, api_key, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param str api_key: The API key to be checked. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.check_account_api_key_with_http_info(account_id, api_key, **kwargs) # noqa: E501\n else:\n (data) = self.check_account_api_key_with_http_info(account_id, api_key, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a new account.", "response": "def create_account(self, body, **kwargs): # noqa: E501\n \"\"\"Create a new account. # noqa: E501\n\n An endpoint for creating a new account. 
**Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/accounts -d '{\\\"display_name\\\": \\\"MyAccount1\\\", \\\"admin_name\\\": \\\"accountAdmin1\\\", \\\"email\\\": \\\"example_admin@myaccount.info\\\"}' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.create_account(body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param AccountCreationReq body: Details of the account to be created. (required)\n :param str action: Action, either 'create' or 'enroll'.
  • 'create' creates the account where its admin user has ACTIVE status if admin_password was defined in the request, or RESET status if no admin_password was defined. If the user already exists, its status is not modified.
  • 'enroll' creates the account where its admin user has ENROLLING status. If the user already exists, its status is not modified. Email to finish the enrollment or to notify the existing user about the new account is sent to the admin_email defined in the request.
\n :return: AccountCreationResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_account_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.create_account_with_http_info(body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_account_api_key(self, account_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_account_api_key_with_http_info(account_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.create_account_api_key_with_http_info(account_id, body, **kwargs) # noqa: E501\n return data", "response": "Create a new API key."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a new group.", "response": "def create_account_group(self, account_id, body, **kwargs): # noqa: E501\n \"\"\"Create a new group. # noqa: E501\n\n An endpoint for creating a new group. **Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/policy-groups -d '{\\\"name\\\": \\\"MyGroup1\\\"}' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.create_account_group(account_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param GroupCreationInfo body: Details of the group to be created. 
(required)\n :return: GroupSummary\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_account_group_with_http_info(account_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.create_account_group_with_http_info(account_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_account_invitation(self, account_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_account_invitation_with_http_info(account_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.create_account_invitation_with_http_info(account_id, body, **kwargs) # noqa: E501\n return data", "response": "Create a user invitation."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a new user in an account.", "response": "def create_account_user(self, account_id, body, **kwargs): # noqa: E501\n \"\"\"Create a new user. # noqa: E501\n\n An endpoint for creating or inviting a new user to the account. In case of invitation email address is used only, other attributes are set in the 2nd step. **Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/users -d {\\\"email\\\": \\\"myemail@company.com\\\"} -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.create_account_user(account_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param UserInfoReq body: A user object with attributes. 
(required)\n :param str action: Create or invite user.\n :return: UserInfoResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_account_user_with_http_info(account_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.create_account_user_with_http_info(account_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete an API key.", "response": "def delete_account_api_key(self, account_id, api_key, **kwargs): # noqa: E501\n \"\"\"Delete the API key. # noqa: E501\n\n An endpoint for deleting an API key. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/api-keys/{apikey} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_account_api_key(account_id, api_key, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param str api_key: The ID of the API key to be deleted. 
(required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_account_api_key_with_http_info(account_id, api_key, **kwargs) # noqa: E501\n else:\n (data) = self.delete_account_api_key_with_http_info(account_id, api_key, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_account_certificate(self, account_id, cert_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_account_certificate_with_http_info(account_id, cert_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_account_certificate_with_http_info(account_id, cert_id, **kwargs) # noqa: E501\n return data", "response": "Delete a trusted certificate by ID."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_account_group(self, account_id, group_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_account_group_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_account_group_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n return data", "response": "Delete a group. # noqa: E501\n\n An endpoint for deleting a group. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/policy-groups/{groupID} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_account_group(account_id, group_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. 
(required)\n :param str group_id: The ID of the group to be deleted. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete an existing user invitation.", "response": "def delete_account_invitation(self, account_id, invitation_id, **kwargs): # noqa: E501\n \"\"\"Delete a user invitation. # noqa: E501\n\n An endpoint for deleting an active user invitation which has been sent for a new or an existing user to join the account. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/accounts/{account-id}/user-invitations/{invitation-id} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_account_invitation(account_id, invitation_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param str invitation_id: The ID of the invitation to be deleted. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_account_invitation_with_http_info(account_id, invitation_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_account_invitation_with_http_info(account_id, invitation_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeletes a user. # noqa: E501 An endpoint for deleting a user. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/users/{user-id} -H 'Authorization: Bearer API_KEY'` # noqa: E501 This method makes a synchronous HTTP request by default. 
To make an asynchronous HTTP request, please pass asynchronous=True >>> thread = api.delete_account_user(account_id, user_id, asynchronous=True) >>> result = thread.get() :param asynchronous bool :param str account_id: Account ID. (required) :param str user_id: The ID of the user to be deleted. (required) :return: None If the method is called asynchronously, returns the request thread.", "response": "def delete_account_user(self, account_id, user_id, **kwargs): # noqa: E501\n \"\"\"Delete a user. # noqa: E501\n\n An endpoint for deleting a user. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/users/{user-id} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_account_user(account_id, user_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param str user_id: The ID of the user to be deleted. 
(required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_account_user_with_http_info(account_id, user_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_account_user_with_http_info(account_id, user_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_account_api_key(self, account_id, api_key, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_account_api_key_with_http_info(account_id, api_key, **kwargs) # noqa: E501\n else:\n (data) = self.get_account_api_key_with_http_info(account_id, api_key, **kwargs) # noqa: E501\n return data", "response": "Get API key details."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_account_certificate(self, account_id, cert_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_account_certificate_with_http_info(account_id, cert_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_account_certificate_with_http_info(account_id, cert_id, **kwargs) # noqa: E501\n return data", "response": "Get a trusted certificate by ID."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the account group summary.", "response": "def get_account_group_summary(self, account_id, group_id, **kwargs): # noqa: E501\n \"\"\"Get group information. # noqa: E501\n\n An endpoint for getting general information about the group. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/policy-groups/{groupID} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_account_group_summary(account_id, group_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param str group_id: The ID of the group to be retrieved. (required)\n :return: GroupSummary\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_account_group_summary_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_account_group_summary_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_account_info(self, account_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_account_info_with_http_info(account_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_account_info_with_http_info(account_id, **kwargs) # noqa: E501\n return data", "response": "Get information about the account."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_account_invitation(self, account_id, invitation_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_account_invitation_with_http_info(account_id, invitation_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_account_invitation_with_http_info(account_id, invitation_id, **kwargs) # noqa: E501\n return data", "response": "Get an account invitation."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_account_user(self, account_id, user_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if 
kwargs.get('asynchronous'):\n return self.get_account_user_with_http_info(account_id, user_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_account_user_with_http_info(account_id, user_id, **kwargs) # noqa: E501\n return data", "response": "Get the details of a user."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets all API keys for an account.", "response": "def get_all_account_api_keys(self, account_id, **kwargs): # noqa: E501\n \"\"\"Get all API keys. # noqa: E501\n\n An endpoint for retrieving the API keys in an array, optionally filtered by the owner. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/api-keys -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_all_account_api_keys(account_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :param str include: Comma separated additional data to return. 
Currently supported: total_count\n :param str key__eq: API key filter.\n :param str owner__eq: Owner name filter.\n :return: ApiKeyInfoRespList\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_account_api_keys_with_http_info(account_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_all_account_api_keys_with_http_info(account_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting all trusted certificates for an account.", "response": "def get_all_account_certificates(self, account_id, **kwargs): # noqa: E501\n \"\"\"Get all trusted certificates. # noqa: E501\n\n An endpoint for retrieving trusted certificates in an array. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/trusted-certificates -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_all_account_certificates(account_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :param str include: Comma separated additional data to return. 
Currently supported: total_count\n :param str name__eq: Filter for certificate name\n :param str service__eq: Filter for service\n :param int expire__eq: Filter for expire\n :param int device_execution_mode__eq: Filter for developer certificates\n :param int device_execution_mode__neq: Filter for not developer certificates\n :param str owner__eq: Owner name filter\n :param bool enrollment_mode__eq: Enrollment mode filter\n :param str issuer__like: Filter for issuer. Finds all matches where the filter value is a case insensitive substring of the result. Example: issuer__like=cn=iss matches CN=issuer.\n :param str subject__like: Filter for subject. Finds all matches where the filter value is a case insensitive substring of the result. Example: subject__like=cn=su matches CN=subject.\n :return: TrustedCertificateInternalRespList\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_account_certificates_with_http_info(account_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_all_account_certificates_with_http_info(account_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget all policy groups for an account.", "response": "def get_all_account_groups(self, account_id, **kwargs): # noqa: E501\n \"\"\"Get all group information. # noqa: E501\n\n An endpoint for retrieving all group information. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/policy-groups -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_all_account_groups(account_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. 
(required)\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :param str include: Comma separated additional data to return. Currently supported: total_count\n :param str name__eq: Filter for group name\n :return: GroupSummaryList\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_account_groups_with_http_info(account_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_all_account_groups_with_http_info(account_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting all the user invitations for the given account.", "response": "def get_all_account_invitations(self, account_id, **kwargs): # noqa: E501\n \"\"\"Get the details of all the user invitations. # noqa: E501\n\n An endpoint for retrieving the details of all the active user invitations sent for new or existing users to join the account. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/{account-id}/user-invitations -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_all_account_invitations(account_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. 
(required)\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :return: UserInvitationRespList\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_account_invitations_with_http_info(account_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_all_account_invitations_with_http_info(account_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_all_account_users(self, account_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_account_users_with_http_info(account_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_all_account_users_with_http_info(account_id, **kwargs) # noqa: E501\n return data", "response": "Get all users in an account."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_all_accounts(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_accounts_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_all_accounts_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get all accounts. # noqa: E501\n\n Returns an array of account objects, optionally filtered by status and tier level. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts -H 'Authorization: Bearer API_KEY'`. # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_all_accounts(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str status__eq: An optional filter for account status, ENROLLING, ACTIVE, RESTRICTED or SUSPENDED.\n :param str status__in: An optional filter for getting accounts with a specified set of statuses.\n :param str status__nin: An optional filter for excluding accounts with a specified set of statuses.\n :param str tier__eq: An optional filter for tier level, must be 0, 1, 2, 98, 99 or omitted.\n :param str parent__eq: An optional filter for parent account ID.\n :param str end_market__eq: An optional filter for account end market.\n :param str country__like: An optional filter for account country. Finds all matches where the filter value is a case insensitive substring of the result. Example: country__like=LAND matches Ireland.\n :param int limit: The number of results to return (2-1000), default is 1000.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC. Default value is ASC\n :param str include: Comma separated additional data to return. Currently supported: limits, policies, sub_accounts\n :param str format: Format information for the response to the query, supported: format=breakdown.\n :param str properties: Property name to be returned from account specific properties.\n :return: AccountInfoList\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting API keys of a group.", "response": "def get_api_keys_of_account_group(self, account_id, group_id, **kwargs): # noqa: E501\n \"\"\"Get API keys of a group. # noqa: E501\n\n An endpoint for listing the API keys of the group with details. 
**Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/policy-groups/{groupID}/api-keys -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_api_keys_of_account_group(account_id, group_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param str group_id: The ID of the group whose API keys are retrieved. (required)\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :param str include: Comma separated additional data to return. Currently supported: total_count\n :return: ApiKeyInfoRespList\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_api_keys_of_account_group_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_api_keys_of_account_group_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_groups_of_account_apikey(self, account_id, api_key, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_groups_of_account_apikey_with_http_info(account_id, api_key, **kwargs) # noqa: E501\n else:\n (data) = self.get_groups_of_account_apikey_with_http_info(account_id, api_key, **kwargs) # noqa: E501\n return data", "response": "Get the groups of the API key."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
get_groups_of_account_user(self, account_id, user_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_groups_of_account_user_with_http_info(account_id, user_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_groups_of_account_user_with_http_info(account_id, user_id, **kwargs) # noqa: E501\n return data", "response": "Get the groups of the user."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_users_of_account_group(self, account_id, group_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_users_of_account_group_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_users_of_account_group_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n return data", "response": "Get users of a group."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef remove_account_api_key_from_groups(self, account_id, api_key, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.remove_account_api_key_from_groups_with_http_info(account_id, api_key, body, **kwargs) # noqa: E501\n else:\n (data) = self.remove_account_api_key_from_groups_with_http_info(account_id, api_key, body, **kwargs) # noqa: E501\n return data", "response": "Remove an API key from a group."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remove_account_user_from_groups(self, account_id, user_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.remove_account_user_from_groups_with_http_info(account_id, user_id, body, **kwargs) # noqa: E501\n else:\n (data) = 
self.remove_account_user_from_groups_with_http_info(account_id, user_id, body, **kwargs) # noqa: E501\n return data", "response": "Remove a user from a group."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef remove_api_keys_from_account_group(self, account_id, group_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.remove_api_keys_from_account_group_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n else:\n (data) = self.remove_api_keys_from_account_group_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n return data", "response": "Remove API keys from an account group."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef remove_users_from_account_group(self, account_id, group_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.remove_users_from_account_group_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n else:\n (data) = self.remove_users_from_account_group_with_http_info(account_id, group_id, **kwargs) # noqa: E501\n return data", "response": "Remove users from a group."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nresets the secret key of the API key.", "response": "def reset_account_api_key_secret(self, account_id, api_key, **kwargs): # noqa: E501\n \"\"\"Reset the secret key. # noqa: E501\n\n An endpoint for resetting the secret key of the API key. **Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/api-keys/{apiKey}/reset-secret -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.reset_account_api_key_secret(account_id, api_key, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param str api_key: The ID of the API key to be reset. (required)\n :return: ApiKeyInfoResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.reset_account_api_key_secret_with_http_info(account_id, api_key, **kwargs) # noqa: E501\n else:\n (data) = self.reset_account_api_key_secret_with_http_info(account_id, api_key, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_account(self, account_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_account_with_http_info(account_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.update_account_with_http_info(account_id, body, **kwargs) # noqa: E501\n return data", "response": "Update attributes of an existing account."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_account_api_key(self, account_id, api_key, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_account_api_key_with_http_info(account_id, api_key, body, **kwargs) # noqa: E501\n else:\n (data) = self.update_account_api_key_with_http_info(account_id, api_key, body, **kwargs) # noqa: E501\n return data", "response": "Update API key details."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_account_certificate(self, account_id, cert_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if 
kwargs.get('asynchronous'):\n return self.update_account_certificate_with_http_info(account_id, cert_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.update_account_certificate_with_http_info(account_id, cert_id, body, **kwargs) # noqa: E501\n return data", "response": "Update a trusted certificate."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_account_group_name(self, account_id, group_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_account_group_name_with_http_info(account_id, group_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.update_account_group_name_with_http_info(account_id, group_id, body, **kwargs) # noqa: E501\n return data", "response": "Update the name of a group."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating user details. # noqa: E501 An endpoint for updating user details. **Example usage:** `curl -X PUT https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/users/{user-id} -d '{\\\"username\\\": \\\"myusername\\\"}' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass asynchronous=True >>> thread = api.update_account_user(account_id, user_id, body, asynchronous=True) >>> result = thread.get() :param asynchronous bool :param str account_id: Account ID. (required) :param str user_id: The ID of the user to be updated. (required) :param UserUpdateReq body: A user object with attributes. (required) :return: UserInfoResp If the method is called asynchronously, returns the request thread.", "response": "def update_account_user(self, account_id, user_id, body, **kwargs): # noqa: E501\n \"\"\"Update user details. # noqa: E501\n\n An endpoint for updating user details. 
**Example usage:** `curl -X PUT https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/users/{user-id} -d '{\\\"username\\\": \\\"myusername\\\"}' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_account_user(account_id, user_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param str user_id: The ID of the user to be updated. (required)\n :param UserUpdateReq body: A user object with attributes. (required)\n :return: UserInfoResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_account_user_with_http_info(account_id, user_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.update_account_user_with_http_info(account_id, user_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nvalidate the user email.", "response": "def validate_account_user_email(self, account_id, user_id, **kwargs): # noqa: E501\n \"\"\"Validate the user email. # noqa: E501\n\n An endpoint for validating the user email. **Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/users/{user-id}/validate-email -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.validate_account_user_email(account_id, user_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str account_id: Account ID. (required)\n :param str user_id: The ID of the user whose email is validated. 
(required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.validate_account_user_email_with_http_info(account_id, user_id, **kwargs) # noqa: E501\n else:\n (data) = self.validate_account_user_email_with_http_info(account_id, user_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_certificate_issuer(self, certificate_issuer_request, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_certificate_issuer_with_http_info(certificate_issuer_request, **kwargs) # noqa: E501\n else:\n (data) = self.create_certificate_issuer_with_http_info(certificate_issuer_request, **kwargs) # noqa: E501\n return data", "response": "Create a certificate issuer."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_certificate_issuer(self, certificate_issuer_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_certificate_issuer_with_http_info(certificate_issuer_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_certificate_issuer_with_http_info(certificate_issuer_id, **kwargs) # noqa: E501\n return data", "response": "Delete a certificate issuer by ID."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_certificate_issuer(self, certificate_issuer_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_certificate_issuer_with_http_info(certificate_issuer_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_certificate_issuer_with_http_info(certificate_issuer_id, **kwargs) # noqa: E501\n return data", "response": 
"Get a certificate issuer by ID."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_certificate_issuers(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_certificate_issuers_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_certificate_issuers_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get certificate issuers list."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_certificate_issuer(self, certificate_issuer_id, certificate_issuer_update_request, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_certificate_issuer_with_http_info(certificate_issuer_id, certificate_issuer_update_request, **kwargs) # noqa: E501\n else:\n (data) = self.update_certificate_issuer_with_http_info(certificate_issuer_id, certificate_issuer_update_request, **kwargs) # noqa: E501\n return data", "response": "Update a certificate issuer."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nverifies a certificate issuer.", "response": "def verify_certificate_issuer(self, certificate_issuer_id, **kwargs): # noqa: E501\n \"\"\"Verify certificate issuer. # noqa: E501\n\n A utility API that can be used to validate the user configuration before activating a certificate issuer. Verifies that the certificate issuer is accessible and can be used to generate certificates by Device Management.
**Note:** The API requests the 3rd party CA to sign a test certificate. For some 3rd party CAs, this operation may make use of the account quota.
**Example usage:** ``` curl -X POST \\\\ -H 'authorization: ' \\\\ -H 'content-type: application/json;charset=UTF-8' \\\\ https://api.us-east-1.mbedcloud.com/v3/certificate-issuers/01621a36719d507b9d48a91b00000000/verify ``` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.verify_certificate_issuer(certificate_issuer_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str certificate_issuer_id: Certificate issuer ID.
The ID of the certificate issuer. (required)\n :return: CertificateIssuerVerifyResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.verify_certificate_issuer_with_http_info(certificate_issuer_id, **kwargs) # noqa: E501\n else:\n (data) = self.verify_certificate_issuer_with_http_info(certificate_issuer_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef deployment_state(self, deployment_state):\n allowed_values = [\"pending\", \"updated_connector_channel\", \"failed_connector_channel_update\", \"deployed\", \"manifestremoved\", \"deregistered\"]\n if deployment_state not in allowed_values:\n raise ValueError(\n \"Invalid value for `deployment_state` ({0}), must be one of {1}\"\n .format(deployment_state, allowed_values)\n )\n\n self._deployment_state = deployment_state", "response": "Sets the deployment_state of this CampaignDeviceMetadata."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nlist all certificates registered to organisation.", "response": "def list_certificates(self, **kwargs):\n \"\"\"List certificates registered to organisation.\n\n Currently returns partially populated certificates. 
To obtain the full certificate object:\n `[get_certificate(certificate_id=cert['id']) for cert in list_certificates()]`\n\n :param int limit: The number of certificates to retrieve.\n :param str order: The ordering direction, ascending (asc) or\n descending (desc).\n :param str after: Get certificates after/starting at given `certificate_id`.\n :param dict filters: Dictionary of filters to apply: type (eq), expire (eq), owner (eq)\n :return: list of :py:class:`Certificate` objects\n :rtype: Certificate\n \"\"\"\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, Certificate)\n if \"service__eq\" in kwargs:\n if kwargs[\"service__eq\"] == CertificateType.bootstrap:\n pass\n elif kwargs[\"service__eq\"] == CertificateType.developer:\n kwargs[\"device_execution_mode__eq\"] = 1\n kwargs.pop(\"service__eq\")\n elif kwargs[\"service__eq\"] == CertificateType.lwm2m:\n pass\n else:\n raise CloudValueError(\n \"Incorrect value for CertificateType filter: %s\" % (kwargs[\"service__eq\"])\n )\n owner = kwargs.pop('owner_id__eq', None)\n if owner is not None:\n kwargs['owner__eq'] = owner\n api = self._get_api(iam.DeveloperApi)\n return PaginatedResponse(api.get_all_certificates, lwrap_type=Certificate, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a certificate by id.", "response": "def get_certificate(self, certificate_id):\n \"\"\"Get certificate by id.\n\n :param str certificate_id: The certificate id (Required)\n :returns: Certificate object\n :rtype: Certificate\n \"\"\"\n api = self._get_api(iam.DeveloperApi)\n certificate = Certificate(api.get_certificate(certificate_id))\n self._extend_certificate(certificate)\n return certificate"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_certificate(self, certificate_id):\n api = self._get_api(iam.DeveloperApi)\n api.delete_certificate(certificate_id)\n return", "response": 
"Delete a certificate.\n\n :param str certificate_id: The certificate id (Required)\n :returns: void"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd a new BYOC certificate.", "response": "def add_certificate(self, name, type, certificate_data, signature=None, **kwargs):\n \"\"\"Add a new BYOC certificate.\n\n :param str name: name of the certificate (Required)\n :param str type: type of the certificate. Values: lwm2m or bootstrap (Required)\n :param str certificate_data: X509.v3 trusted certificate in PEM format. (Required)\n :param str signature: This parameter has been DEPRECATED in the API and does not need to\n be provided.\n :param str status: Status of the certificate.\n Allowed values: \"ACTIVE\" | \"INACTIVE\".\n :param str description: Human readable description of this certificate,\n not longer than 500 characters.\n :returns: Certificate object\n :rtype: Certificate\n \"\"\"\n kwargs.update({'name': name})\n kwargs.update({'type': type})\n api = self._get_api(iam.AccountAdminApi)\n\n kwargs.update({'certificate_data': certificate_data})\n certificate = Certificate._create_request_map(kwargs)\n if not certificate.get('enrollment_mode') and signature:\n certificate.update({'signature': signature})\n body = iam.TrustedCertificateReq(**certificate)\n prod_cert = api.add_certificate(body)\n return self.get_certificate(prod_cert.id)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a new developer certificate.", "response": "def add_developer_certificate(self, name, **kwargs):\n \"\"\"Add a new developer certificate.\n\n :param str name: name of the certificate (Required)\n :param str description: Human readable description of this certificate,\n not longer than 500 characters.\n :returns: Certificate object\n :rtype: Certificate\n \"\"\"\n kwargs['name'] = name\n api = self._get_api(cert.DeveloperCertificateApi)\n certificate = Certificate._create_request_map(kwargs)\n\n # just pull the fields we 
care about\n subset = cert.DeveloperCertificateRequestData.attribute_map\n certificate = {k: v for k, v in certificate.items() if k in subset}\n\n body = cert.DeveloperCertificateRequestData(**certificate)\n dev_cert = api.create_developer_certificate(self.auth, body)\n return self.get_certificate(dev_cert.id)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate a certificate. :param str certificate_id: The certificate id (Required) :param str certificate_data: X509.v3 trusted certificate in PEM format. :param str signature: This parameter has been DEPRECATED in the API and does not need to be provided. :param str type: type of the certificate. Values: lwm2m or bootstrap. :param str status: Status of the certificate. Allowed values: \"ACTIVE\" | \"INACTIVE\". :param str description: Human readable description of this certificate, not longer than 500 characters. :returns: Certificate object :rtype: Certificate", "response": "def update_certificate(self, certificate_id, **kwargs):\n \"\"\"Update a certificate.\n\n :param str certificate_id: The certificate id (Required)\n :param str certificate_data: X509.v3 trusted certificate in PEM format.\n :param str signature: This parameter has been DEPRECATED in the API and does not need to\n be provided.\n :param str type: type of the certificate. 
Values: lwm2m or bootstrap.\n :param str status: Status of the certificate.\n Allowed values: \"ACTIVE\" | \"INACTIVE\".\n :param str description: Human readable description of this certificate,\n not longer than 500 characters.\n :returns: Certificate object\n :rtype: Certificate\n \"\"\"\n api = self._get_api(iam.DeveloperApi)\n cert = Certificate._create_request_map(kwargs)\n body = iam.TrustedCertificateReq(**cert)\n certificate = Certificate(api.update_certificate(certificate_id, body))\n return self.get_certificate(certificate.id)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef type(self):\n if self._device_mode == 1 or self._type == CertificateType.developer:\n return CertificateType.developer\n elif self._type == CertificateType.bootstrap:\n return CertificateType.bootstrap\n else:\n return CertificateType.lwm2m", "response": "Returns the type of the certificate."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_all_server_credentials(self, authorization, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_server_credentials_with_http_info(authorization, **kwargs) # noqa: E501\n else:\n (data) = self.get_all_server_credentials_with_http_info(authorization, **kwargs) # noqa: E501\n return data", "response": "Fetch all server credentials."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_bootstrap_server_credentials(self, authorization, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_bootstrap_server_credentials_with_http_info(authorization, **kwargs) # noqa: E501\n else:\n (data) = self.get_bootstrap_server_credentials_with_http_info(authorization, **kwargs) # noqa: E501\n return data", "response": "Fetch bootstrap server 
credentials."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_l2_m2_m_server_credentials(self, authorization, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_l2_m2_m_server_credentials_with_http_info(authorization, **kwargs) # noqa: E501\n else:\n (data) = self.get_l2_m2_m_server_credentials_with_http_info(authorization, **kwargs) # noqa: E501\n return data", "response": "Fetch LwM2M server credentials."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd an enrollment claim to the device.", "response": "def add_enrollment_claim(self, **kwargs):\n \"\"\"Add\"\"\"\n api = self._get_api(enrollment.PublicAPIApi)\n item = EnrollmentClaim._create_request_map(kwargs)\n item = models.EnrollmentIdentity(**item)\n return EnrollmentClaim(api.create_device_enrollment(item))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting an enrollment claim", "response": "def get_enrollment_claim(self, id, **kwargs):\n \"\"\"Get\"\"\"\n api = self._get_api(enrollment.PublicAPIApi)\n return EnrollmentClaim(api.get_device_enrollment(id=id))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlists all available enrollment claims.", "response": "def list_enrollment_claims(self, **kwargs):\n \"\"\"List\"\"\"\n kwargs = self._verify_sort_options(kwargs)\n kwargs = self._verify_filters(kwargs, EnrollmentClaim)\n api = self._get_api(enrollment.PublicAPIApi)\n return PaginatedResponse(\n api.get_device_enrollments,\n lwrap_type=EnrollmentClaim,\n **kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes an enrollment claim.", "response": "def delete_enrollment_claim(self, id, **kwargs):\n \"\"\"Delete\"\"\"\n api = self._get_api(enrollment.PublicAPIApi)\n return api.delete_device_enrollment(id=id)"} {"SOURCE": 
"codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef handle_channel_message(db, queues, b64decode, notification_object):\n for notification in getattr(notification_object, 'notifications') or []:\n # Ensure we have subscribed for the path we received a notification for\n subscriber_queue = queues[notification.ep].get(notification.path)\n if subscriber_queue is None:\n LOG.debug(\n \"Ignoring notification on %s (%s) as no subscription is registered\",\n notification.ep,\n notification.path\n )\n break\n\n payload = tlv.decode(\n payload=notification.payload,\n content_type=notification.ct,\n decode_b64=b64decode\n )\n subscriber_queue.put(payload)\n\n for response in getattr(notification_object, 'async_responses') or []:\n payload = tlv.decode(\n payload=response.payload,\n content_type=response.ct,\n decode_b64=b64decode\n )\n db.update({response.id: dict(\n payload=payload,\n error=response.error,\n status_code=response.status\n )})", "response": "Handle a NotificationMessage object from a channel."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef wait(self, timeout=0):\n start_time = time.time()\n\n # We return synchronously, so we block in a busy loop waiting for the\n # request to be done.\n while not self.is_done:\n duration = time.time() - start_time\n if timeout and duration > timeout:\n raise CloudTimeoutError(\n \"Timeout getting async value. 
Timeout: %d seconds\" % timeout\n )\n time.sleep(0.1)\n\n # If we get an any status code other than a 2xx we raise an exception to the user, which can then be handled\n # accordingly.\n status_code, error_msg, payload = self.check_error()\n if not self._status_ok(status_code):\n raise CloudAsyncError(\"Async response for '%s' returned an error.\" % self.async_id,\n reason=error_msg,\n status=status_code)\n\n value = self.value\n if isinstance(value, six.binary_type):\n value = value.decode('utf-8')\n return value", "response": "Blocks until the async response is received or forever."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_error(self):\n if not self.is_done:\n raise CloudUnhandledError(\"Need to check if request is done, before checking for error\")\n response = self.db[self.async_id]\n error_msg = response[\"error\"]\n status_code = int(response[\"status_code\"])\n payload = response[\"payload\"]\n return status_code, error_msg, payload", "response": "Check if the async response is an error."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef error(self):\n status_code, error_msg, payload = self.check_error()\n\n if status_code != 200 and not error_msg and not payload:\n return \"Async error (%s). 
Status code: %r\" % (self.async_id, status_code)\n return error_msg", "response": "Check if the async response is an error."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef value(self):\n status_code, error_msg, payload = self.check_error()\n\n if not self._status_ok(status_code) and not payload:\n raise CloudUnhandledError(\"Attempted to decode async request which returned an error.\",\n reason=error_msg,\n status=status_code)\n return self.db[self.async_id][\"payload\"]", "response": "Get the value of the finished async request if it is available."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_api_key_to_groups(self, api_key, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.add_api_key_to_groups_with_http_info(api_key, body, **kwargs) # noqa: E501\n else:\n (data) = self.add_api_key_to_groups_with_http_info(api_key, body, **kwargs) # noqa: E501\n return data", "response": "Add API key to a list of groups."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a new trusted certificate.", "response": "def add_certificate(self, body, **kwargs): # noqa: E501\n \"\"\"Upload a new trusted certificate. # noqa: E501\n\n An endpoint for uploading new trusted certificates. **Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/trusted-certificates -d {\\\"name\\\": \\\"myCert1\\\", \\\"description\\\": \\\"very important cert\\\", \\\"certificate\\\": \\\"certificate_data\\\", \\\"service\\\": \\\"lwm2m\\\"} -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.add_certificate(body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param TrustedCertificateReq body: A trusted certificate object with attributes. (required)\n :return: TrustedCertificateResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.add_certificate_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.add_certificate_with_http_info(body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_subjects_to_group(self, group_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.add_subjects_to_group_with_http_info(group_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.add_subjects_to_group_with_http_info(group_id, body, **kwargs) # noqa: E501\n return data", "response": "Add members to a group."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a user to a list of groups.", "response": "def add_user_to_groups(self, user_id, body, **kwargs): # noqa: E501\n \"\"\"Add user to a list of groups. # noqa: E501\n\n An endpoint for adding user to groups. **Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/users/{user-id}/groups -d '[0162056a9a1586f30242590700000000,0117056a9a1586f30242590700000000]' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.add_user_to_groups(user_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str user_id: The ID of the user to be added to the group. (required)\n :param list[str] body: A list of IDs of the groups to be updated. (required)\n :return: UpdatedResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.add_user_to_groups_with_http_info(user_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.add_user_to_groups_with_http_info(user_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a new group.", "response": "def create_group(self, body, **kwargs): # noqa: E501\n \"\"\"Create a new group. # noqa: E501\n\n An endpoint for creating a new group. **Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/policy-groups -d '{\\\"name\\\": \\\"MyGroup1\\\"}' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.create_group(body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param GroupCreationInfo body: Details of the group to be created. 
(required)\n :return: GroupSummary\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_group_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.create_group_with_http_info(body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_invitation(self, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_invitation_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.create_invitation_with_http_info(body, **kwargs) # noqa: E501\n return data", "response": "Create a user invitation."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new user.", "response": "def create_user(self, body, **kwargs): # noqa: E501\n \"\"\"Create a new user. # noqa: E501\n\n An endpoint for creating or inviting a new user to the account. In case of invitation email address is used only, other attributes are set in the 2nd step. **Example usage:** `curl -X POST https://api.us-east-1.mbedcloud.com/v3/users?action=invite -d {\\\"email\\\": \\\"myemail@company.com\\\"} -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.create_user(body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param UserInfoReq body: A user object with attributes. 
(required)\n :param str action: Action, either 'create' or 'invite'.\n :return: UserInfoResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_user_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.create_user_with_http_info(body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_group(self, group_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_group_with_http_info(group_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_group_with_http_info(group_id, **kwargs) # noqa: E501\n return data", "response": "Delete a group. # noqa: E501\n\n An endpoint for deleting a group. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/policy-groups/{group-id} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_group(group_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str group_id: The ID of the group to be deleted. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndeleting an existing user invitation.", "response": "def delete_invitation(self, invitation_id, **kwargs): # noqa: E501\n \"\"\"Delete a user invitation. # noqa: E501\n\n An endpoint for deleting an active user invitation which has been sent for a new or an existing user to join the account. 
**Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/user-invitations/{invitation-id} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_invitation(invitation_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str invitation_id: The ID of the invitation to be deleted. (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_invitation_with_http_info(invitation_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_invitation_with_http_info(invitation_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_user(self, user_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.delete_user_with_http_info(user_id, **kwargs) # noqa: E501\n else:\n (data) = self.delete_user_with_http_info(user_id, **kwargs) # noqa: E501\n return data", "response": "Delete a user. # noqa: E501\n\n An endpoint for deleting a user. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/users/{user-id} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.delete_user(user_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str user_id: The ID of the user to be deleted. 
(required)\n :return: None\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets all the user invitations.", "response": "def get_all_invitations(self, **kwargs): # noqa: E501\n \"\"\"Get the details of all the user invitations. # noqa: E501\n\n An endpoint for retrieving the details of all the active user invitations sent for new or existing users to join the account. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/user-invitations -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_all_invitations(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :return: UserInvitationRespList\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_invitations_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_all_invitations_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_all_users(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_all_users_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.get_all_users_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get the details of all users. # noqa: E501\n\n An endpoint for retrieving the details of all users. 
**Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/users -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_all_users(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :param str include: Comma separated additional data to return. Currently supported: total_count\n :param str email__eq: Filter for email address\n :param str status__eq: Filter for status, for example active or reset\n :param str status__in: An optional filter for getting users with a specified set of statuses.\n :param str status__nin: An optional filter for excluding users with a specified set of statuses.\n :return: UserInfoRespList\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the groups of the API key.", "response": "def get_groups_of_apikey(self, api_key, **kwargs): # noqa: E501\n \"\"\"Get groups of the API key. # noqa: E501\n\n An endpoint for retrieving groups of the API key. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/api-keys/{apikey-id}/groups -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_groups_of_apikey(api_key, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str api_key: The ID of the API key whose details are retrieved. 
(required)\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :param str include: Comma separated additional data to return. Currently supported: total_count\n :return: GroupSummaryList\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_groups_of_apikey_with_http_info(api_key, **kwargs) # noqa: E501\n else:\n (data) = self.get_groups_of_apikey_with_http_info(api_key, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_groups_of_user(self, user_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_groups_of_user_with_http_info(user_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_groups_of_user_with_http_info(user_id, **kwargs) # noqa: E501\n return data", "response": "Get the groups of a user."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_invitation(self, invitation_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_invitation_with_http_info(invitation_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_invitation_with_http_info(invitation_id, **kwargs) # noqa: E501\n return data", "response": "Details of a user invitation. # noqa: E501\n\n An endpoint for retrieving the details of an active user invitation sent for a new or an existing user to join the account. 
**Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/user-invitations/{invitation-id} -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_invitation(invitation_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str invitation_id: The ID of the invitation to be retrieved. (required)\n :return: UserInvitationResp\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting users of a group.", "response": "def get_users_of_group(self, group_id, **kwargs): # noqa: E501\n \"\"\"Get users of a group. # noqa: E501\n\n An endpoint for listing the users of a group with details. **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/policy-groups/{group-id}/users -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.get_users_of_group(group_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str group_id: The ID of the group whose users are retrieved. (required)\n :param int limit: The number of results to return (2-1000), default is 50.\n :param str after: The entity ID to fetch after the given one.\n :param str order: The order of the records based on creation time, ASC or DESC; by default ASC\n :param str include: Comma separated additional data to return. 
Currently supported: total_count\n :param str status__eq: An optional filter for getting users by status.\n :param str status__in: An optional filter for getting users with a specified set of statuses.\n :param str status__nin: An optional filter for excluding users with a specified set of statuses.\n :return: UserInfoRespList\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.get_users_of_group_with_http_info(group_id, **kwargs) # noqa: E501\n else:\n (data) = self.get_users_of_group_with_http_info(group_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves an API key from a list of groups.", "response": "def remove_api_key_from_groups(self, api_key, body, **kwargs): # noqa: E501\n \"\"\"Remove API key from groups. # noqa: E501\n\n An endpoint for removing API key from groups. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/api-keys/{apikey-id}/groups -d '[0162056a9a1586f30242590700000000,0117056a9a1586f30242590700000000]' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.remove_api_key_from_groups(api_key, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str api_key: The ID of the API key to be removed from the group. (required)\n :param list[str] body: A list of IDs of the groups to be updated. 
(required)\n :return: UpdatedResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.remove_api_key_from_groups_with_http_info(api_key, body, **kwargs) # noqa: E501\n else:\n (data) = self.remove_api_key_from_groups_with_http_info(api_key, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef remove_user_from_groups(self, user_id, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.remove_user_from_groups_with_http_info(user_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.remove_user_from_groups_with_http_info(user_id, body, **kwargs) # noqa: E501\n return data", "response": "Remove a user from a group."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nremoves users from a group.", "response": "def remove_users_from_group(self, group_id, body, **kwargs): # noqa: E501\n \"\"\"Remove users from a group. # noqa: E501\n\n An endpoint for removing users from groups. **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/policy-groups/{group-id}/users -d '[0162056a9a1586f30242590700000000,0117056a9a1586f30242590700000000]' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.remove_users_from_group(group_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str group_id: The ID of the group whose users are removed. (required)\n :param SubjectList body: A list of users to be removed from the group. 
(required)\n :return: UpdatedResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.remove_users_from_group_with_http_info(group_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.remove_users_from_group_with_http_info(group_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate the group name.", "response": "def update_group_name(self, group_id, body, **kwargs): # noqa: E501\n \"\"\"Update the group name. # noqa: E501\n\n An endpoint for updating a group name. **Example usage:** `curl -X PUT https://api.us-east-1.mbedcloud.com/v3/policy-groups/{group-id} -d '{\"name\": \"TestGroup2\"}' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_group_name(group_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str group_id: The ID of the group to be updated. (required)\n :param GroupUpdateInfo body: Details of the group to be updated. 
(required)\n :return: UpdatedResponse\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_group_name_with_http_info(group_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.update_group_name_with_http_info(group_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_my_account(self, body, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_my_account_with_http_info(body, **kwargs) # noqa: E501\n else:\n (data) = self.update_my_account_with_http_info(body, **kwargs) # noqa: E501\n return data", "response": "Update the attributes of the account."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates user details. # noqa: E501 An endpoint for updating user details. **Example usage:** `curl -X PUT https://api.us-east-1.mbedcloud.com/v3/users/{user-id} -d '{\\\"username\\\": \\\"myusername\\\"}' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass asynchronous=True >>> thread = api.update_user(user_id, body, asynchronous=True) >>> result = thread.get() :param asynchronous bool :param str user_id: The ID of the user whose details are updated. (required) :param UserUpdateReq body: A user object with attributes. (required) :return: UserInfoResp If the method is called asynchronously, returns the request thread.", "response": "def update_user(self, user_id, body, **kwargs): # noqa: E501\n \"\"\"Update user details. # noqa: E501\n\n An endpoint for updating user details. 
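All the generated methods above share one synchronous/asynchronous dispatch convention: call directly for the response data, or pass `asynchronous=True` to receive a thread-like handle and fetch the result with `.get()`. A minimal stdlib sketch of that pattern — names like `_call_api` and the returned payload are illustrative, not the SDK's internals:

```python
from multiprocessing.pool import ThreadPool

_pool = ThreadPool(processes=1)

def _call_api(user_id):
    # stand-in for the *_with_http_info request; hypothetical payload
    return {"id": user_id, "username": "example"}

def update_user(user_id, **kwargs):
    """Return data directly, or an async handle when asynchronous=True."""
    if kwargs.get("asynchronous"):
        # apply_async returns an AsyncResult; callers block via .get()
        return _pool.apply_async(_call_api, (user_id,))
    return _call_api(user_id)

data = update_user("user-1")                       # synchronous call
handle = update_user("user-1", asynchronous=True)  # returns a handle
result = handle.get()                              # block for the response
```

The generated code routes both branches through the same `*_with_http_info` helper, which is why the only difference visible in each wrapper is the `kwargs.get('asynchronous')` check.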
**Example usage:** `curl -X PUT https://api.us-east-1.mbedcloud.com/v3/users/{user-id} -d '{\\\"username\\\": \\\"myusername\\\"}' -H 'content-type: application/json' -H 'Authorization: Bearer API_KEY'` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_user(user_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str user_id: The ID of the user whose details are updated. (required)\n :param UserUpdateReq body: A user object with attributes. (required)\n :return: UserInfoResp\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_user_with_http_info(user_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.update_user_with_http_info(user_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef to_dict(self):\n return {\n 'observable': self.observable,\n 'path': self.path,\n 'type': self.type,\n 'content_type': self.content_type\n }", "response": "Return dictionary of object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the certificate_issuer_id of this CreateCertificateIssuerConfig.", "response": "def certificate_issuer_id(self, certificate_issuer_id):\n \"\"\"\n Sets the certificate_issuer_id of this CreateCertificateIssuerConfig.\n The ID of the certificate issuer. 
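The property setters in this area all follow one validation pattern: reject `None`, enforce a length cap, optionally apply a regex check, and only then assign to the backing attribute. A condensed sketch — the class name and the exact limits are illustrative; note the sketch escapes the hyphen in the character class (`[\w\-.]`), since a bare `-` directly after `\w` can be parsed as a character range by newer versions of `re`:

```python
import re

class IssuerConfig:
    # illustrative pattern, modelled on the reserved-prefix check above
    _PATTERN = re.compile(r'(?!mbed\.)[\w\-.]{1,50}')

    @property
    def reference(self):
        return self._reference

    @reference.setter
    def reference(self, reference):
        if reference is None:
            raise ValueError("Invalid value for `reference`, must not be `None`")
        if len(reference) > 50:
            raise ValueError("Invalid value for `reference`, length must be <= 50")
        if not self._PATTERN.search(reference):
            raise ValueError("Invalid value for `reference`, must match the pattern")
        self._reference = reference  # assign only after all checks pass

cfg = IssuerConfig()
cfg.reference = "my-cert.1"
print(cfg.reference)  # my-cert.1
```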
\n\n :param certificate_issuer_id: The certificate_issuer_id of this CreateCertificateIssuerConfig.\n :type: str\n \"\"\"\n if certificate_issuer_id is None:\n raise ValueError(\"Invalid value for `certificate_issuer_id`, must not be `None`\")\n if certificate_issuer_id is not None and len(certificate_issuer_id) > 32:\n raise ValueError(\"Invalid value for `certificate_issuer_id`, length must be less than or equal to `32`\")\n\n self._certificate_issuer_id = certificate_issuer_id"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset the reference of this CreateCertificateIssuerConfig.", "response": "def reference(self, reference):\n \"\"\"\n Sets the reference of this CreateCertificateIssuerConfig.\n The certificate name, as created in the factory, to which the certificate issuer configuration applies. The following names are reserved and cannot be configured: LwM2M, BOOTSTRAP. \n\n :param reference: The reference of this CreateCertificateIssuerConfig.\n :type: str\n \"\"\"\n if reference is None:\n raise ValueError(\"Invalid value for `reference`, must not be `None`\")\n if reference is not None and len(reference) > 50:\n raise ValueError(\"Invalid value for `reference`, length must be less than or equal to `50`\")\n if reference is not None and not re.search('(?!mbed\\\\.)[\\\\w-_.]{1,50}', reference):\n raise ValueError(\"Invalid value for `reference`, must match the pattern `/(?!mbed\\\\.)[\\\\w-_.]{1,50}/`\")\n\n self._reference = reference"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef webhook_handler(request):\n body = request.stream.read().decode('utf8')\n print('webhook handler saw:', body)\n api.notify_webhook_received(payload=body)\n\n # nb. 
protected references are not part of the API.\n # this is just to demonstrate that the asyncid is stored\n print('key store contains:', api._db.keys())", "response": "Receives the webhook from mbed cloud services\n Passes the raw http body directly to mbed sdk to notify that a webhook was received"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstarts the demo sequence", "response": "def start_sequence():\n \"\"\"Start the demo sequence\n\n We must start this thread in the same process as the webserver to be certain\n we are sharing the api instance in memory.\n\n (ideally in future the async id database will be capable of being more than\n just a dictionary)\n \"\"\"\n print('getting started!...')\n t = threading.Thread(target=my_application, kwargs=dict(api=api))\n t.daemon = True\n t.start()\n return 'ok, starting webhook to: %s' % (ngrok_url,)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsends an async HTTP request to a device.", "response": "def create_async_request(self, device_id, async_id, body, **kwargs): # noqa: E501\n \"\"\"Send an async request to device # noqa: E501\n\n This API provides an interface to asynchronously call methods on a device. The `async-id` is provided by the client, enabling the client to track the end-to-end flow with an identifier that is relevant to the end application. For example, a web application's session ID along with the device ID and the resource path could be used as the `async-id`. This also avoids any race conditions with [the notification channel](/docs/current/integrate-web-app/event-notification.html). All responses are sent through the currently configured notification channel as an **AsyncIDResponse**. For `GET` methods, values may be fetched from an internal cache, instead of contacting the device. See also /v2/endpoints/{device-id}/{resourcePath}. 
``` Example URI: POST /v2/device-requests/015f2fa34d310000000000010030036c?async-id=123e4567-e89b-12d3-a456-426655440000 Example payloads: { \\\"method\\\": \\\"GET\\\", \\\"uri\\\": \\\"/5/0/1\\\" } { \\\"method\\\": \\\"PUT\\\", \\\"uri\\\": \\\"/5/0/1%20?k1=v1&k2=v2%22\\\", \\\"accept\\\": \\\"text/plain\\\", \\\"content-type\\\": \\\"text/plain\\\", \\\"payload-b64\\\": \\\"dmFsdWUxCg==\\\" } Immediate response: 202 Accepted Example AsyncIDResponse, delivered via the notification channel: { \\\"async-responses\\\": [ { \\\"id\\\": \\\"123e4567-e89b-12d3-a456-426655440000\\\", \\\"status\\\": 200, \\\"payload\\\": \\\"dmFsdWUxCg==\\\", \\\"ct\\\": \\\"text/plain\\\", \\\"max-age\\\": 600 } ] } ``` # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.create_async_request(device_id, async_id, body, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str device_id: The device ID generated by Device Management. (required)\n :param str async_id: The client-generated ID for matching the correct response delivered via a notification. (required)\n :param DeviceRequest body: Device request to send. 
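The example payloads above carry resource values base64-encoded in `payload-b64`; the `dmFsdWUxCg==` shown decodes to the bytes `value1` plus a trailing newline. Round-tripping such a payload with the stdlib:

```python
import base64

# encode a plain-text resource value for the payload-b64 field
encoded = base64.b64encode(b"value1\n").decode("ascii")
print(encoded)  # dmFsdWUxCg==

# decode a payload received back in an AsyncIDResponse
decoded = base64.b64decode(encoded)
print(decoded)  # b'value1\n'
```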
(required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.create_async_request_with_http_info(device_id, async_id, body, **kwargs) # noqa: E501\n else:\n (data) = self.create_async_request_with_http_info(device_id, async_id, body, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef new_preload():\n testrunner_image = get_testrunner_image()\n local_image = testrunner_image.rsplit(':')[-2].rsplit('/')[-1]\n version_file = 'testrunner_version.txt'\n template = yaml.safe_load(f\"\"\"\n machine:\n image: 'circleci/classic:201710-02'\n steps:\n - run:\n name: AWS login\n command: |-\n login=\"$(aws ecr get-login --no-include-email)\"\n ${{login}}\n - run:\n name: Pull TestRunner Image\n command: docker pull {testrunner_image}\n - run:\n name: Make cache directory\n command: mkdir -p {cache_dir}\n - run:\n name: Obtain Testrunner Version\n command: echo $(docker run {testrunner_image} python -m trunner --version) > {cache_dir}/{version_file}\n - run:\n name: Export docker image layer cache\n command: docker save -o {cache_dir}/{testrunner_cache} {testrunner_image}\n - persist_to_workspace: # workspace is used later in this same build\n root: {cache_dir}\n paths: '{testrunner_cache}'\n - persist_to_workspace: # workspace is used later in this same build\n root: {cache_dir}\n paths: '{version_file}'\n - store_artifacts:\n path: {version_file}\n \"\"\")\n return 'preload', template", "response": "This function is used to create a new build with the same image as the current build"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef new_build(py_ver: PyVer):\n cache_file = f'app_{py_ver.name}.tar'\n cache_path = f'{cache_dir}/{cache_file}'\n cache_key = f'v3-{py_ver.name}-{{{{ 
.Branch }}}}'\n template = yaml.safe_load(f\"\"\"\n machine:\n image: 'circleci/classic:201710-02'\n docker_layer_caching: true\n steps:\n - checkout\n - restore_cache:\n keys: ['{cache_key}']\n paths: ['{cache_path}']\n - attach_workspace:\n at: {cache_dir}\n - run:\n name: Load docker image for TestRunner\n command: docker load -i {cache_dir}/{testrunner_cache}\n - run:\n name: Load docker image layer cache\n command: docker load -i {cache_path} || true # silent failure if missing cache\n - run:\n name: Build application docker image\n command: >-\n docker build --cache-from={py_ver.tag}\n -t {py_ver.tag}\n --build-arg TESTRUNNER_VERSION=$(cat {cache_dir}/testrunner_version.txt)\n -f container/{py_ver.docker_file} .\n - run:\n name: Make cache directory\n command: mkdir -p {cache_dir}\n - run:\n name: Export docker image layer cache\n command: docker save -o {cache_path} {py_ver.tag}\n - save_cache: # cache is used between builds\n key: '{cache_key}'\n paths: ['{cache_path}']\n - persist_to_workspace: # workspace is used later in this same build\n root: {cache_dir}\n paths: '{cache_file}'\n \n # extracting documentation for review:\n - run:\n name: Start a named container\n command: docker run --name=SDK {py_ver.tag}\n - run:\n name: Extract the documentation\n command: 'docker cp SDK:/build/built_docs ./built_docs'\n - store_artifacts:\n path: built_docs\n \"\"\")\n return build_name(py_ver), template", "response": "Build a new build"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\njobbing for deploying package to pypi", "response": "def new_deploy(py_ver: PyVer, release_target: ReleaseTarget):\n \"\"\"Job for deploying package to pypi\"\"\"\n cache_file = f'app_{py_ver.name}.tar'\n template = yaml.safe_load(f\"\"\"\n machine:\n image: circleci/classic:201710-02\n steps:\n - attach_workspace:\n at: {cache_dir}\n - checkout\n - run:\n name: Install prerequisites\n command: sudo pip install awscli\n - run:\n name: Load docker image 
layer cache\n command: docker load -i {cache_dir}/{cache_file}\n - run:\n name: Start a named container\n command: docker run --name=SDK {py_ver.tag}\n - run:\n name: Extract the documentation\n command: 'docker cp SDK:/build/built_docs ./built_docs'\n - run:\n name: Upload the documentation\n command: >-\n aws s3 sync --delete --cache-control\n max-age=3600 built_docs s3://mbed-cloud-sdk-python\n - run:\n name: Tag and release\n command: >-\n docker run --env-file=scripts/templates/envvars.env\n -e TWINE_REPOSITORY={release_target.twine_repo}\n {py_ver.tag}\n sh -c \"source .venv/bin/activate && python scripts/tag_and_release.py --mode={release_target.mode}\"\n - run:\n name: Start the release party!\n command: >-\n docker run --env-file=scripts/templates/envvars.env\n {py_ver.tag}\n sh -c \"source .venv/bin/activate && python scripts/notify.py\"\n \"\"\")\n return deploy_name(py_ver, release_target), template"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates the output of the Circle CI 2. 0 config. 
yml file.", "response": "def generate_circle_output():\n \"\"\"Build sequence for Circle CI 2.0 config.yml\n\n builds the circleci structure\n also links individual jobs into a workflow graph\n \"\"\"\n base = new_base()\n workflow = networkx.DiGraph()\n LOG.info('%s python versions', len(python_versions))\n LOG.info('%s mbed cloud hosts', len(mbed_cloud_hosts))\n\n job, content = new_tpip()\n base['jobs'].update({job: content})\n workflow.add_node(job)\n\n job, content = new_newscheck()\n base['jobs'].update({job: content})\n workflow.add_node(\n job,\n workflow=dict(\n filters=dict(\n branches=dict(\n # we ignore this check for builds directly on the base branches\n ignore=['master', 'integration']\n )\n )\n )\n )\n\n job, content = new_build_documentation()\n base['jobs'].update({job: content})\n workflow.add_node(\n job,\n workflow=dict(\n requires=['build_py2', 'build_py3'],\n filters=dict(\n branches=dict(\n # Only update the documentation on release branches\n only=['master', 'beta']\n )\n )\n )\n )\n\n preload_job, content = new_preload()\n base['jobs'].update({preload_job: content})\n workflow.add_node(preload_job)\n\n for py_ver in python_versions.values():\n build_job, build_content = new_build(py_ver=py_ver)\n base['jobs'].update({build_job: build_content})\n workflow.add_edge(preload_job, build_job)\n\n for cloud_host in mbed_cloud_hosts.values():\n test_job, test_content = new_test(py_ver=py_ver, cloud_host=cloud_host)\n base['jobs'].update({test_job: test_content})\n workflow.add_edge(build_job, test_job)\n\n for release_target in release_target_map.values():\n deploy_job, deploy_content = new_deploy(\n py_ver=python_versions['three'],\n release_target=release_target\n )\n base['jobs'].update({deploy_job: deploy_content})\n\n # wire up the release gates (clicky buttons)\n workflow.add_edge(\n test_name(python_versions['three'], mbed_cloud_hosts['osii']),\n release_name(release_target_map['beta']),\n type='approval',\n )\n workflow.add_edge(\n 
test_name(python_versions['three'], mbed_cloud_hosts['production']),\n release_name(release_target_map['prod']),\n type='approval',\n filters=dict(branches=dict(only='master')),\n )\n workflow.add_edge(\n test_name(python_versions['two'], mbed_cloud_hosts['production']),\n release_name(release_target_map['prod']),\n )\n\n # we only want to deploy in certain conditions\n workflow.add_edge(\n release_name(release_target_map['beta']),\n deploy_name(python_versions['three'], release_target_map['beta'])\n )\n\n workflow.add_edge(\n release_name(release_target_map['prod']),\n deploy_name(python_versions['three'], release_target_map['prod'])\n )\n\n workflow_jobs = base['workflows']['python_sdk_workflow']['jobs']\n\n # build the workflow graph\n for job_name in networkx.topological_sort(workflow):\n job_config = {}\n per_node_config = workflow.nodes[job_name].get('workflow')\n if per_node_config:\n job_config.update(per_node_config)\n workflow_jobs.append({job_name: job_config})\n for edge in workflow.in_edges(job_name):\n job_config.update(workflow.get_edge_data(*edge))\n job_config.setdefault('requires', []).append(edge[0])\n\n LOG.info('%s circle jobs', len(base['jobs']))\n return dict(base)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngenerates the Dockerfile for the current version of the project.", "response": "def generate_docker_file(py_ver: PyVer):\n \"\"\"Templated docker files\"\"\"\n with open(os.path.join(script_templates_root, 'Dockerfile')) as fh:\n return fh.read().format(py_ver=py_ver, author=author_file)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef generate_docker_targets():\n output = {}\n for py_ver in python_versions.values():\n filepath = os.path.join(container_config_root, py_ver.docker_file)\n output[filepath] = generate_docker_file(py_ver)\n filepath = os.path.join(container_config_root, py_ver.compose_file)\n output[filepath] = 
generate_compose_file(py_ver)\n return output", "response": "Write all templated container engine files"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main(output_path=None):\n config_output_file = output_path or os.path.join(PROJECT_ROOT, '.circleci', 'config.yml')\n yaml_structure = generate_circle_output()\n with open(config_output_file, 'w') as fh:\n yaml_content = yaml.safe_dump(data=yaml_structure, default_flow_style=False)\n fh.write(\n f'#\\n'\n f'# This file is autogenerated, do not modify manually. '\n f'See {author_file} for instructions.\\n'\n f'#\\n'\n f'{yaml_content}'\n )\n\n for path, content in generate_docker_targets().items():\n LOG.info('writing %s', path)\n with open(path, 'w') as fh:\n fh.write(content)", "response": "This function is used to generate a new build system and write out the output of the build system."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating an image. This method creates an image and returns the FirmwareImage object.", "response": "def firmware_image_create(self, datafile, name, **kwargs): # noqa: E501\n \"\"\"Create an image # noqa: E501\n\n Create a firmware image. # noqa: E501\n This method makes a synchronous HTTP request by default. 
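`generate_circle_output` above wires jobs into a workflow by topologically sorting a `networkx.DiGraph` and turning each node's in-edges into a CircleCI-style `requires` list. The same idea can be sketched with only the stdlib (`graphlib`, Python 3.9+); the job names here are illustrative, not the script's real job set:

```python
from graphlib import TopologicalSorter

# node -> set of jobs it requires (its predecessors in the workflow graph)
graph = {
    "preload": set(),
    "build_py3": {"preload"},
    "test_py3": {"build_py3"},
    "deploy_py3": {"test_py3"},
}

workflow_jobs = []
for job_name in TopologicalSorter(graph).static_order():
    job_config = {}
    if graph[job_name]:
        # each in-edge becomes a `requires` entry, as in the networkx version
        job_config["requires"] = sorted(graph[job_name])
    workflow_jobs.append({job_name: job_config})

print(workflow_jobs[0])   # {'preload': {}}
print(workflow_jobs[-1])  # {'deploy_py3': {'requires': ['test_py3']}}
```

The topological order guarantees every job appears after everything it requires, which is exactly the property the generated `config.yml` workflow list needs.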
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.firmware_image_create(datafile, name, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param file datafile: The firmware image file to upload (required)\n :param str name: The name of the firmware image (required)\n :param str description: The description of the firmware image\n :return: FirmwareImage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.firmware_image_create_with_http_info(datafile, name, **kwargs) # noqa: E501\n else:\n (data) = self.firmware_image_create_with_http_info(datafile, name, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef firmware_image_destroy(self, image_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.firmware_image_destroy_with_http_info(image_id, **kwargs) # noqa: E501\n else:\n (data) = self.firmware_image_destroy_with_http_info(image_id, **kwargs) # noqa: E501\n return data", "response": "Delete an image # noqa: E501\n\n Delete a firmware image. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.firmware_image_destroy(image_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str image_id: The firmware image ID (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlist all firmware images.", "response": "def firmware_image_list(self, **kwargs): # noqa: E501\n \"\"\"List all images # noqa: E501\n\n List all firmware images. 
# noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.firmware_image_list(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param int limit: How many firmware images to retrieve\n :param str order: ASC or DESC\n :param str after: The ID of the item after which to retrieve the next page\n :param str include: A comma-separated list of data fields to return. Currently supported: total_count\n :param str filter: URL-encoded query string parameter to filter returned data `?filter={URL-encoded query string}` ###### Filterable fields: The table lists all the fields that can be filtered on with certain filters:
| Field | = / __eq / __neq | __in / __nin | __lte / __gte |
|---|---|---|---|
| created_at | \u2713 | \u2713 | \u2713 |
| datafile | \u2713 | \u2713 | |
| datafile_checksum | \u2713 | \u2713 | |
| datafile_size | \u2713 | \u2713 | |
| description | \u2713 | \u2713 | |
| etag | \u2713 | \u2713 | \u2713 |
| id | \u2713 | \u2713 | |
| name | \u2713 | \u2713 | |
| timestamp | \u2713 | \u2713 | \u2713 |
| updated_at | \u2713 | \u2713 | \u2713 |
  The query string is made up of key-value pairs separated by ampersands. For example, this query: `key1=value1&key2=value2&key3=value3` would be URL-encoded as: `?filter=key1__eq%3Dvalue1%26key2__eq%3Dvalue2%26key3__eq%3Dvalue3` **Filtering by properties** `name__eq=myimage` **Filtering on date-time fields** Date-time fields should be specified in UTC RFC3339 format, `YYYY-MM-DDThh:mm:ss.msZ`. There are three permitted variations: * UTC RFC3339 with milliseconds. Example: `2016-11-30T16:25:12.1234Z` * UTC RFC3339 without milliseconds. Example: `2016-11-30T16:25:12Z` * UTC RFC3339 shortened without milliseconds and punctuation. Example: `20161130T162512Z` Date-time filtering supports three operators: * equality by appending `__eq` to the field name * greater than or equal to by appending `__gte` to the field name * less than or equal to by appending `__lte` to the field name `{field name}[|__eq|__lte|__gte]={UTC RFC3339 date-time}` Time ranges may be specified by including both the `__gte` and `__lte` forms in the filter. 
For example: `created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering on multiple fields** `name__eq=myimage&created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering with filter operators** String field filtering supports the following operators: * equality: `__eq` * non-equality: `__neq` * in : `__in` * not in: `__nin` For `__in` and `__nin` filters list of parameters must be comma-separated: `name__in=fw-image1,fw-image2`\n :return: FirmwareImagePage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.firmware_image_list_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.firmware_image_list_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget an image # noqa: E501 Retrieve a firmware image. # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass asynchronous=True >>> thread = api.firmware_image_retrieve(image_id, asynchronous=True) >>> result = thread.get() :param asynchronous bool :param str image_id: The firmware image ID (required) :return: FirmwareImage If the method is called asynchronously, returns the request thread.", "response": "def firmware_image_retrieve(self, image_id, **kwargs): # noqa: E501\n \"\"\"Get an image # noqa: E501\n\n Retrieve a firmware image. # noqa: E501\n This method makes a synchronous HTTP request by default. 
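The `?filter=` parameter described above is a URL-encoded query string of `field__op=value` pairs joined by ampersands. A small sketch of building one with the stdlib — the `build_filter` helper is ours for illustration, not part of the SDK:

```python
from urllib.parse import quote

def build_filter(**conditions):
    """Join field__op=value pairs with '&' and URL-encode the result."""
    raw = "&".join(f"{field}={value}" for field, value in conditions.items())
    return "?filter=" + quote(raw, safe="")

# reproduces the documented example encoding (= -> %3D, & -> %26)
print(build_filter(key1__eq="value1", key2__eq="value2"))
# ?filter=key1__eq%3Dvalue1%26key2__eq%3Dvalue2

# date-time bounds combine the same way
print(build_filter(name__eq="myimage", created_at__gte="2016-11-30T16:25:12Z"))
```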
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.firmware_image_retrieve(image_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str image_id: The firmware image ID (required)\n :return: FirmwareImage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.firmware_image_retrieve_with_http_info(image_id, **kwargs) # noqa: E501\n else:\n (data) = self.firmware_image_retrieve_with_http_info(image_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef firmware_manifest_create(self, datafile, name, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.firmware_manifest_create_with_http_info(datafile, name, **kwargs) # noqa: E501\n else:\n (data) = self.firmware_manifest_create_with_http_info(datafile, name, **kwargs) # noqa: E501\n return data", "response": "Create a manifest # noqa: E501\n\n Create a firmware manifest. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.firmware_manifest_create(datafile, name, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param file datafile: The manifest file to create. The API gateway enforces the account-specific file size. 
(required)\n :param str name: The name of the firmware manifest (required)\n :param str description: The description of the firmware manifest\n :param file key_table: The key table of pre-shared keys for devices\n :return: FirmwareManifest\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef firmware_manifest_destroy(self, manifest_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.firmware_manifest_destroy_with_http_info(manifest_id, **kwargs) # noqa: E501\n else:\n (data) = self.firmware_manifest_destroy_with_http_info(manifest_id, **kwargs) # noqa: E501\n return data", "response": "Delete a firmware manifest."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef firmware_manifest_list(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.firmware_manifest_list_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.firmware_manifest_list_with_http_info(**kwargs) # noqa: E501\n return data", "response": "List all the manifests in the current page."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef firmware_manifest_retrieve(self, manifest_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.firmware_manifest_retrieve_with_http_info(manifest_id, **kwargs) # noqa: E501\n else:\n (data) = self.firmware_manifest_retrieve_with_http_info(manifest_id, **kwargs) # noqa: E501\n return data", "response": "Get a manifest # noqa: E501\n\n Retrieve a firmware manifest. # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.firmware_manifest_retrieve(manifest_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str manifest_id: The firmware manifest ID (required)\n :return: FirmwareManifest\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_campaign_archive(self, campaign_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_archive_with_http_info(campaign_id, **kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_archive_with_http_info(campaign_id, **kwargs) # noqa: E501\n return data", "response": "Archive a campaign. # noqa: E501\n\n This command will archive a campaign. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_campaign_archive(campaign_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str campaign_id: The campaign ID (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_campaign_create(self, campaign, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_create_with_http_info(campaign, **kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_create_with_http_info(campaign, **kwargs) # noqa: E501\n return data", "response": "Create a new update campaign."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeleting an update campaign.", "response": "def update_campaign_destroy(self, campaign_id, **kwargs): # noqa: 
E501\n \"\"\"Delete a campaign # noqa: E501\n\n Delete an update campaign. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_campaign_destroy(campaign_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str campaign_id: The ID of the update campaign (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_destroy_with_http_info(campaign_id, **kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_destroy_with_http_info(campaign_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nlisting all update campaigns for devices specified by a filter.", "response": "def update_campaign_list(self, **kwargs): # noqa: E501\n \"\"\"List all campaigns # noqa: E501\n\n Get update campaigns for devices specified by a filter. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_campaign_list(asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param int limit: How many update campaigns to retrieve\n :param str order: The order of the records. Acceptable values: ASC, DESC. Default: ASC\n :param str after: The ID of the item after which to retrieve the next page\n :param str include: A comma-separated list of data fields to return. Currently supported: total_count\n :param str filter: URL-encoded query string parameter to filter returned data `?filter={URL-encoded query string}` ###### Filterable fields: The below table lists all the fields that can be filtered on with certain filters:
Field | = / __eq / __neq | __in / __nin | __lte / __gte
created_at | \u2713 | \u2713 | \u2713
description | \u2713 | \u2713 |
device_filter | \u2713 | \u2713 |
etag | \u2713 | \u2713 | \u2713
finished | \u2713 | \u2713 | \u2713
id | \u2713 | \u2713 |
name | \u2713 | \u2713 |
root_manifest_id | \u2713 | \u2713 |
started_at | \u2713 | \u2713 | \u2713
state | \u2713 | \u2713 |
updated_at | \u2713 | \u2713 | \u2713
when | \u2713 | \u2713 | \u2713
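The filter values for the fields above ride inside a single URL-encoded `?filter=` query parameter. A minimal sketch of that encoding with the standard library (`urllib.parse.quote`; the key names are the generic ones from the docs' own example):

```python
from urllib.parse import quote

# Key-value pairs joined by ampersands, before encoding
raw = "key1__eq=value1&key2__eq=value2&key3__eq=value3"

# Percent-encode the whole string so '=' becomes %3D and '&' becomes %26,
# then pass it as the single ?filter= query parameter
encoded = "?filter=" + quote(raw, safe="")
print(encoded)  # ?filter=key1__eq%3Dvalue1%26key2__eq%3Dvalue2%26key3__eq%3Dvalue3
```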
  The query string is made up of key-value pairs separated by ampersands. For example, this query: `key1__eq=value1&key2__eq=value2&key3__eq=value3` would be URL-encoded as: `?filter=key1__eq%3Dvalue1%26key2__eq%3Dvalue2%26key3__eq%3Dvalue3` **Filtering by campaign properties** `state__eq=[draft|scheduled|devicefectch|devicecopy|publishing|deploying|deployed|manifestremoved|expired]` `root_manifest_id__eq=43217771234242e594ddb433816c498a` **Filtering on date-time fields** Date-time fields should be specified in UTC RFC3339 format, `YYYY-MM-DDThh:mm:ss.msZ`. There are three permitted variations: * UTC RFC3339 with milliseconds. Example: `2016-11-30T16:25:12.1234Z` * UTC RFC3339 without milliseconds. Example: `2016-11-30T16:25:12Z` * UTC RFC3339 shortened without milliseconds and punctuation. Example: `20161130T162512Z` Date-time filtering supports three operators: * equality by appending `__eq` to the field name * greater than or equal to by appending `__gte` to the field name * less than or equal to by appending `__lte` to the field name `{field name}[|__eq|__lte|__gte]={UTC RFC3339 date-time}` Time ranges may be specified by including both the `__gte` and `__lte` forms in the filter. 
For example: `created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering on multiple fields** `state__eq=deployed&created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering with filter operators** String field filtering supports the following operators: * equality: `__eq` * non-equality: `__neq` * in : `__in` * not in: `__nin` For `__in` and `__nin` filters list of parameters must be comma-separated: `name__in=fw-image1,fw-image2`\n :return: UpdateCampaignPage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_list_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_list_with_http_info(**kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_campaign_metadata_list(self, campaign_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_metadata_list_with_http_info(campaign_id, **kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_metadata_list_with_http_info(campaign_id, **kwargs) # noqa: E501\n return data", "response": "List all campaign device metadata."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a campaign device metadata.", "response": "def update_campaign_metadata_retrieve(self, campaign_id, campaign_device_metadata_id, **kwargs): # noqa: E501\n \"\"\"Get a campaign device metadata # noqa: E501\n\n Get update campaign metadata. # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_campaign_metadata_retrieve(campaign_id, campaign_device_metadata_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str campaign_id: The update campaign ID (required)\n :param str campaign_device_metadata_id: The campaign device metadata ID (required)\n :return: CampaignDeviceMetadata\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_metadata_retrieve_with_http_info(campaign_id, campaign_device_metadata_id, **kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_metadata_retrieve_with_http_info(campaign_id, campaign_device_metadata_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate the campaign metrics.", "response": "def update_campaign_metrics(self, campaign_id, **kwargs): # noqa: E501\n \"\"\"Get campaign metrics # noqa: E501\n\n Get detailed statistics of a campaign. # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_campaign_metrics(campaign_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str campaign_id: The campaign ID (required)\n :return: CampaignMetrics\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_metrics_with_http_info(campaign_id, **kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_metrics_with_http_info(campaign_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_campaign_retrieve(self, campaign_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_retrieve_with_http_info(campaign_id, **kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_retrieve_with_http_info(campaign_id, **kwargs) # noqa: E501\n return data", "response": "Get an update campaign."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update_campaign_start(self, campaign_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_start_with_http_info(campaign_id, **kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_start_with_http_info(campaign_id, **kwargs) # noqa: E501\n return data", "response": "Start a campaign. # noqa: E501\n\n This command will begin the process of starting a campaign. # noqa: E501\n This method makes a synchronous HTTP request by default.
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_campaign_start(campaign_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str campaign_id: The campaign ID (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstop a campaign. # noqa: E501 This command will begin the process of stopping a campaign. # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass asynchronous=True >>> thread = api.update_campaign_stop(campaign_id, asynchronous=True) >>> result = thread.get() :param asynchronous bool :param str campaign_id: The campaign ID (required) :return: None If the method is called asynchronously, returns the request thread.", "response": "def update_campaign_stop(self, campaign_id, **kwargs): # noqa: E501\n \"\"\"Stop a campaign. # noqa: E501\n\n This command will begin the process of stopping a campaign. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_campaign_stop(campaign_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str campaign_id: The campaign ID (required)\n :return: None\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_stop_with_http_info(campaign_id, **kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_stop_with_http_info(campaign_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmodifies a campaign # noqa: E501 Modify an update campaign. # noqa: E501 This method makes a synchronous HTTP request by default. 
To make an asynchronous HTTP request, please pass asynchronous=True >>> thread = api.update_campaign_update(campaign_id, campaign, asynchronous=True) >>> result = thread.get() :param asynchronous bool :param str campaign_id: (required) :param UpdateCampaignPutRequest campaign: Update campaign (required) :return: UpdateCampaign If the method is called asynchronously, returns the request thread.", "response": "def update_campaign_update(self, campaign_id, campaign, **kwargs): # noqa: E501\n \"\"\"Modify a campaign # noqa: E501\n\n Modify an update campaign. # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.update_campaign_update(campaign_id, campaign, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str campaign_id: (required)\n :param UpdateCampaignPutRequest campaign: Update campaign (required)\n :return: UpdateCampaign\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.update_campaign_update_with_http_info(campaign_id, campaign, **kwargs) # noqa: E501\n else:\n (data) = self.update_campaign_update_with_http_info(campaign_id, campaign, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef upload_job_chunk_create(self, content_length, upload_job_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.upload_job_chunk_create_with_http_info(content_length, upload_job_id, **kwargs) # noqa: E501\n else:\n (data) = self.upload_job_chunk_create_with_http_info(content_length, upload_job_id, **kwargs) # noqa: E501\n return data", "response": "Append a chunk to an upload job."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nlisting 
all metadata for uploaded chunks # noqa: E501 List all metadata for uploaded chunks # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass asynchronous=True >>> thread = api.upload_job_chunk_list(upload_job_id, asynchronous=True) >>> result = thread.get() :param asynchronous bool :param str upload_job_id: Upload job (required) :param int limit: How many metadata items for uploaded chunks to retrieve :param str order: ASC or DESC :param str after: The ID of the item after which to retrieve the next page :param str include: A comma-separated list of data fields to return. Currently supported: total_count :param str filter: URL-encoded query string parameter to filter returned data `?filter={URL-encoded query string}` ###### Filterable fields: The table lists all the fields that can be filtered on with certain filters:
Field | = / __eq / __neq | __in / __nin | __lte / __gte
created_at | \u2713 | \u2713 | \u2713
etag | \u2713 | \u2713 | \u2713
id | \u2713 | \u2713 |
updated_at | \u2713 | \u2713 | \u2713
status | \u2713 | \u2713 |
hash | \u2713 | \u2713 |
length | \u2713 | \u2713 | \u2713
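String fields in the table above, such as `status`, also accept `__in`/`__nin` filters whose value is a comma-separated list. A tiny sketch of composing one (values borrowed from this docstring's own examples):

```python
# Compose a status__in filter from a list of acceptable states
statuses = ["in_progress", "success"]
filter_str = "status__in=" + ",".join(statuses)
print(filter_str)  # status__in=in_progress,success
```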
  The query string is made up of key-value pairs separated by ampersands. For example, this query: `key1__eq=value1&key2__eq=value2&key3__eq=value3` would be URL-encoded as: `?filter=key1__eq%3Dvalue1%26key2__eq%3Dvalue2%26key3__eq%3Dvalue3` **Filtering by properties** `status__eq=in_progress` **Filtering on date-time fields** Date-time fields should be specified in UTC RFC3339 format, `YYYY-MM-DDThh:mm:ss.msZ`. There are three permitted variations: * UTC RFC3339 with milliseconds. Example: `2016-11-30T16:25:12.1234Z` * UTC RFC3339 without milliseconds. Example: `2016-11-30T16:25:12Z` * UTC RFC3339 shortened without milliseconds and punctuation. Example: `20161130T162512Z` Date-time filtering supports three operators: * equality by appending `__eq` to the field name * greater than or equal to by appending `__gte` to the field name * less than or equal to by appending `__lte` to the field name `{field name}[|__eq|__lte|__gte]={UTC RFC3339 date-time}` Time ranges may be specified by including both the `__gte` and `__lte` forms in the filter. For example: `created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering on multiple fields** `status__eq=in_progress&created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering with filter operators** String field filtering supports the following operators: * equality: `__eq` * non-equality: `__neq` * in : `__in` * not in: `__nin` For `__in` and `__nin` filters list of parameters must be comma-separated: `status__in=in_progress,success` :return: UploadChunkInfoPage If the method is called asynchronously, returns the request thread.", "response": "def upload_job_chunk_list(self, upload_job_id, **kwargs): # noqa: E501\n \"\"\"List all metadata for uploaded chunks # noqa: E501\n\n List all metadata for uploaded chunks # noqa: E501\n This method makes a synchronous HTTP request by default.
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.upload_job_chunk_list(upload_job_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str upload_job_id: Upload job (required)\n :param int limit: How many metadata items for uploaded chunks to retrieve\n :param str order: ASC or DESC\n :param str after: The ID of the item after which to retrieve the next page\n :param str include: A comma-separated list of data fields to return. Currently supported: total_count\n :param str filter: URL-encoded query string parameter to filter returned data `?filter={URL-encoded query string}` ###### Filterable fields: The table lists all the fields that can be filtered on with certain filters:
Field | = / __eq / __neq | __in / __nin | __lte / __gte
created_at | \u2713 | \u2713 | \u2713
etag | \u2713 | \u2713 | \u2713
id | \u2713 | \u2713 |
updated_at | \u2713 | \u2713 | \u2713
status | \u2713 | \u2713 |
hash | \u2713 | \u2713 |
length | \u2713 | \u2713 | \u2713
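Fields like `created_at` and `updated_at` above filter on UTC RFC3339 date-times, in the variants described in this docstring. A sketch of producing those forms and a `__gte`/`__lte` time-range filter with the standard library:

```python
from datetime import datetime

start = datetime(2016, 11, 30, 16, 25, 12)
end = datetime(2016, 12, 30)

# UTC RFC3339 without milliseconds: 2016-11-30T16:25:12Z
start_rfc = start.isoformat() + "Z"
# Shortened variant without punctuation: 20161130T162512Z
start_short = start.strftime("%Y%m%dT%H%M%SZ")

# A time range combines the __gte and __lte forms of the same field
filter_str = "created_at__gte=%s&created_at__lte=%sZ" % (start_rfc, end.isoformat())
print(filter_str)  # created_at__gte=2016-11-30T16:25:12Z&created_at__lte=2016-12-30T00:00:00Z
```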
  The query string is made up of key-value pairs separated by ampersands. For example, this query: `key1__eq=value1&key2__eq=value2&key3__eq=value3` would be URL-encoded as: `?filter=key1__eq%3Dvalue1%26key2__eq%3Dvalue2%26key3__eq%3Dvalue3` **Filtering by properties** `status__eq=in_progress` **Filtering on date-time fields** Date-time fields should be specified in UTC RFC3339 format, `YYYY-MM-DDThh:mm:ss.msZ`. There are three permitted variations: * UTC RFC3339 with milliseconds. Example: `2016-11-30T16:25:12.1234Z` * UTC RFC3339 without milliseconds. Example: `2016-11-30T16:25:12Z` * UTC RFC3339 shortened without milliseconds and punctuation. Example: `20161130T162512Z` Date-time filtering supports three operators: * equality by appending `__eq` to the field name * greater than or equal to by appending `__gte` to the field name * less than or equal to by appending `__lte` to the field name `{field name}[|__eq|__lte|__gte]={UTC RFC3339 date-time}` Time ranges may be specified by including both the `__gte` and `__lte` forms in the filter.
For example: `created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering on multiple fields** `status__eq=in_progress&created_at__gte=2016-11-30T16:25:12.1234Z&created_at__lte=2016-12-30T00:00:00Z` **Filtering with filter operators** String field filtering supports the following operators: * equality: `__eq` * non-equality: `__neq` * in : `__in` * not in: `__nin` For `__in` and `__nin` filters list of parameters must be comma-separated: `status__in=in_progress,success`\n :return: UploadChunkInfoPage\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.upload_job_chunk_list_with_http_info(upload_job_id, **kwargs) # noqa: E501\n else:\n (data) = self.upload_job_chunk_list_with_http_info(upload_job_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets metadata about a chunk # noqa: E501 Get metadata about a chunk # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass asynchronous=True >>> thread = api.upload_job_chunk_retreive(upload_job_id, chunk_id, asynchronous=True) >>> result = thread.get() :param asynchronous bool :param str upload_job_id: Upload job (required) :param str chunk_id: Chunk (required) :return: UploadChunkInfo If the method is called asynchronously, returns the request thread.", "response": "def upload_job_chunk_retreive(self, upload_job_id, chunk_id, **kwargs): # noqa: E501\n \"\"\"Get metadata about a chunk # noqa: E501\n\n Get metadata about a chunk # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.upload_job_chunk_retreive(upload_job_id, chunk_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str upload_job_id: Upload job (required)\n :param str chunk_id: Chunk (required)\n :return: UploadChunkInfo\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.upload_job_chunk_retreive_with_http_info(upload_job_id, chunk_id, **kwargs) # noqa: E501\n else:\n (data) = self.upload_job_chunk_retreive_with_http_info(upload_job_id, chunk_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new upload job.", "response": "def upload_job_create(self, upload_job, **kwargs): # noqa: E501\n \"\"\"Create a new upload job # noqa: E501\n\n Create a new upload job # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.upload_job_create(upload_job, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param UploadJob upload_job: Upload job (required)\n :return: UploadJob\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.upload_job_create_with_http_info(upload_job, **kwargs) # noqa: E501\n else:\n (data) = self.upload_job_create_with_http_info(upload_job, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef upload_job_delete(self, upload_job_id, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.upload_job_delete_with_http_info(upload_job_id, **kwargs) # noqa: E501\n else:\n (data) = self.upload_job_delete_with_http_info(upload_job_id, **kwargs) # noqa: E501\n return data", "response": "Delete an upload job."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef upload_job_list(self, **kwargs): # noqa: E501\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.upload_job_list_with_http_info(**kwargs) # noqa: E501\n else:\n (data) = self.upload_job_list_with_http_info(**kwargs) # noqa: E501\n return data", "response": "Get all upload jobs"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets an upload job.", "response": "def upload_job_retrieve(self, upload_job_id, **kwargs): # noqa: E501\n \"\"\"Get an upload job # noqa: E501\n\n Get an upload job # noqa: E501\n This method makes a synchronous HTTP request by default. 
To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.upload_job_retrieve(upload_job_id, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str upload_job_id: Upload job (required)\n :return: UploadJob\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.upload_job_retrieve_with_http_info(upload_job_id, **kwargs) # noqa: E501\n else:\n (data) = self.upload_job_retrieve_with_http_info(upload_job_id, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates an existing upload job.", "response": "def upload_job_update(self, upload_job_id, upload_job, **kwargs): # noqa: E501\n \"\"\"Update an upload job # noqa: E501\n\n Update an upload job # noqa: E501\n This method makes a synchronous HTTP request by default. To make an\n asynchronous HTTP request, please pass asynchronous=True\n >>> thread = api.upload_job_update(upload_job_id, upload_job, asynchronous=True)\n >>> result = thread.get()\n\n :param asynchronous bool\n :param str upload_job_id: Upload job id (required)\n :param UploadJob1 upload_job: Upload job (required)\n :return: UploadJob\n If the method is called asynchronously,\n returns the request thread.\n \"\"\"\n kwargs['_return_http_data_only'] = True\n if kwargs.get('asynchronous'):\n return self.upload_job_update_with_http_info(upload_job_id, upload_job, **kwargs) # noqa: E501\n else:\n (data) = self.upload_job_update_with_http_info(upload_job_id, upload_job, **kwargs) # noqa: E501\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the api_key of this GlobalSignCredentials.", "response": "def api_key(self, api_key):\n \"\"\"\n Sets the api_key of this GlobalSignCredentials.\n Unique ID for API client (provided by GlobalSign). 
\n\n :param api_key: The api_key of this GlobalSignCredentials.\n :type: str\n \"\"\"\n if api_key is None:\n raise ValueError(\"Invalid value for `api_key`, must not be `None`\")\n if api_key is not None and len(api_key) > 1000:\n raise ValueError(\"Invalid value for `api_key`, length must be less than or equal to `1000`\")\n\n self._api_key = api_key"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the api_secret of this GlobalSignCredentials.", "response": "def api_secret(self, api_secret):\n \"\"\"\n Sets the api_secret of this GlobalSignCredentials.\n API Secret matching the API key (provided by GlobalSign). \n\n :param api_secret: The api_secret of this GlobalSignCredentials.\n :type: str\n \"\"\"\n if api_secret is None:\n raise ValueError(\"Invalid value for `api_secret`, must not be `None`\")\n if api_secret is not None and len(api_secret) > 250:\n raise ValueError(\"Invalid value for `api_secret`, length must be less than or equal to `250`\")\n\n self._api_secret = api_secret"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef client_certificate(self, client_certificate):\n if client_certificate is None:\n raise ValueError(\"Invalid value for `client_certificate`, must not be `None`\")\n if client_certificate is not None and len(client_certificate) > 3000:\n raise ValueError(\"Invalid value for `client_certificate`, length must be less than or equal to `3000`\")\n\n self._client_certificate = client_certificate", "response": "Sets the client_certificate of this GlobalSignCredentials."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the passphrase of this GlobalSignCredentials.", "response": "def passphrase(self, passphrase):\n \"\"\"\n Sets the passphrase of this GlobalSignCredentials.\n The passphrase to decrypt the private key in case it is encrypted. Empty if the private key is not encrypted. 
\n\n :param passphrase: The passphrase of this GlobalSignCredentials.\n :type: str\n \"\"\"\n if passphrase is not None and len(passphrase) > 1000:\n raise ValueError(\"Invalid value for `passphrase`, length must be less than or equal to `1000`\")\n\n self._passphrase = passphrase"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets the private_key of this GlobalSignCredentials.", "response": "def private_key(self, private_key):\n \"\"\"\n Sets the private_key of this GlobalSignCredentials.\n The private key that matches the client certificate to allow HTTPS connection over TLS/SSL. The private key may be encrypted using a symmetric encryption key derived from a passphrase. The private key must be in PEM format. \n\n :param private_key: The private_key of this GlobalSignCredentials.\n :type: str\n \"\"\"\n if private_key is None:\n raise ValueError(\"Invalid value for `private_key`, must not be `None`\")\n if private_key is not None and len(private_key) > 3000:\n raise ValueError(\"Invalid value for `private_key`, length must be less than or equal to `3000`\")\n\n self._private_key = private_key"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef git_url_ssh_to_https(url):\n path = url.split('github.com', 1)[1][1:].strip()\n new = 'https://{GITHUB_TOKEN}@github.com/%s' % path\n print('rewriting git url to: %s' % new)\n return new.format(GITHUB_TOKEN=os.getenv('GITHUB_TOKEN'))", "response": "Convert a GitHub SSH remote URL to a token-authenticated HTTPS URL."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ntag the current repository and commit changes to news files", "response": "def main(mode):\n \"\"\"Tags the current repository\n\n and commits changes to news files\n\n :param mode: ReleaseTarget mode (i.e.
beta or prod)\n :type mode: ReleaseTarget\n \"\"\"\n # see:\n # https://packaging.python.org/tutorials/distributing-packages/#uploading-your-project-to-pypi\n version = subprocess.check_output(['python', 'setup.py', '--version']).decode().strip()\n\n twine_repo = os.getenv('TWINE_REPOSITORY_URL') or os.getenv('TWINE_REPOSITORY')\n print('tagging and releasing %r as a %s release to %s as Twine user %s' % (\n version,\n mode.name,\n twine_repo,\n os.getenv('TWINE_USERNAME')\n ))\n\n print('python - preparing environment')\n subprocess.check_call(['apk', 'update'])\n subprocess.check_call(['apk', 'add', 'git'])\n # Pin twine to 1.11 due to https://github.com/pypa/twine/issues/404\n subprocess.check_call(['pip', 'install', 'twine<1.12'])\n url = subprocess.check_output(['git', 'remote', 'get-url', 'origin'])\n new_url = git_url_ssh_to_https(url.decode())\n subprocess.check_call(['git', 'remote', 'set-url', 'origin', new_url])\n branch_spec = 'origin/%s' % os.getenv('CIRCLE_BRANCH')\n subprocess.check_call(['git', 'branch', '--set-upstream-to', branch_spec])\n subprocess.check_call(['git', 'fetch', '--tags', '--force'])\n\n # tags\n subprocess.check_call(['git', 'tag', '-a', version, '-m', 'release %s' % version])\n subprocess.check_call(['git', 'tag', '-f', 'latest'])\n if mode == release_target_map['prod']:\n print('git - pushing %s tags' % mode.name)\n subprocess.check_call(['git', 'push', '-f', 'origin', '--tags'])\n\n print('git - add changes')\n subprocess.check_call(['git', 'add', 'src/mbed_cloud/_version.py'])\n subprocess.check_call(['git', 'add', 'CHANGELOG.rst'])\n subprocess.check_call(['git', 'add', 'docs/news/*'])\n message = ':checkered_flag: :newspaper: releasing version %s\\n[skip ci]' % version\n subprocess.check_call(['git', 'commit', '-m', message])\n\n if mode == release_target_map['prod']:\n print('git - pushing %s changelog commit' % mode.name)\n subprocess.check_call(['git', 'push', 'origin'])\n\n print('pypi - uploading')\n 
subprocess.check_call(['python', '-m', 'twine', 'upload', mode.bundle])\n print('pypi - uploading successful')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef zip_unicode(output, version):\n\n zipper = zipfile.ZipFile(os.path.join(output, 'unicodedata', '%s.zip' % version), 'w', zipfile.ZIP_DEFLATED)\n target = os.path.join(output, 'unicodedata', version)\n\n print('Zipping %s.zip...' % version)\n\n for root, dirs, files in os.walk(target):\n for file in files:\n if file.endswith('.txt'):\n zipper.write(os.path.join(root, file), arcname=file)", "response": "Zip the Unicode files."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unzip_unicode(output, version):\n\n unzipper = zipfile.ZipFile(os.path.join(output, 'unicodedata', '%s.zip' % version))\n target = os.path.join(output, 'unicodedata', version)\n\n print('Unzipping %s.zip...' % version)\n\n os.makedirs(target)\n\n for f in unzipper.namelist():\n # Do I need backslash on windows? 
Or is it forward as well?\n unzipper.extract(f, target)", "response": "Unzip the Unicode files."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndownloading Unicode data scripts and blocks.", "response": "def download_unicodedata(version, output=HOME, no_zip=False):\n \"\"\"Download Unicode data scripts and blocks.\"\"\"\n files = [\n 'UnicodeData.txt',\n 'Scripts.txt',\n 'Blocks.txt',\n 'PropList.txt',\n 'DerivedCoreProperties.txt',\n 'DerivedNormalizationProps.txt',\n 'CompositionExclusions.txt',\n 'PropertyValueAliases.txt',\n 'PropertyAliases.txt',\n 'EastAsianWidth.txt',\n 'LineBreak.txt',\n 'HangulSyllableType.txt',\n 'DerivedAge.txt',\n 'auxiliary/WordBreakProperty.txt',\n 'auxiliary/SentenceBreakProperty.txt',\n 'auxiliary/GraphemeBreakProperty.txt',\n 'extracted/DerivedDecompositionType.txt',\n 'extracted/DerivedNumericType.txt',\n 'extracted/DerivedNumericValues.txt',\n 'extracted/DerivedJoiningType.txt',\n 'extracted/DerivedJoiningGroup.txt',\n 'extracted/DerivedCombiningClass.txt'\n ]\n\n files.append('ScriptExtensions.txt')\n if PY35:\n files.append('IndicPositionalCategory.txt')\n else:\n files.append('IndicMatraCategory.txt')\n files.append('IndicSyllabicCategory.txt')\n\n if PY34:\n files.append('BidiBrackets.txt')\n\n if PY37:\n files.append('VerticalOrientation.txt')\n\n http_url = 'http://www.unicode.org/Public/%s/ucd/' % version\n ftp_url = 'ftp://ftp.unicode.org/Public/%s/ucd/' % version\n\n destination = os.path.join(output, 'unicodedata', version)\n if not os.path.exists(destination):\n os.makedirs(destination)\n\n zip_data = not no_zip\n\n for f in files:\n file_location = os.path.join(destination, os.path.basename(f))\n retrieved = False\n if not os.path.exists(file_location):\n for url in (ftp_url, http_url):\n furl = url + f\n try:\n print('Downloading: %s --> %s' % (furl, file_location))\n response = urlopen(furl, timeout=30)\n data = response.read()\n except Exception:\n print('Failed: %s' % url)\n continue\n 
with open(file_location, 'w') as uf:\n uf.write(data.decode('utf-8'))\n retrieved = True\n break\n if not retrieved:\n print('Failed to acquire all needed Unicode files!')\n break\n else:\n retrieved = True\n print('Skipping: found %s' % file_location)\n\n if not retrieved:\n zip_data = False\n break\n\n if zip_data and not os.path.exists(os.path.join(output, 'unicodedata', '%s.zip' % version)):\n zip_unicode(output, version)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_unicodedata(version, output=HOME, no_zip=False):\n\n target = os.path.join(output, 'unicodedata', version)\n zip_target = os.path.join(output, 'unicodedata', '%s.zip' % version)\n\n if not os.path.exists(target) and os.path.exists(zip_target):\n unzip_unicode(output, version)\n\n # Download missing files if any. Zip if required.\n download_unicodedata(version, output, no_zip)", "response": "Ensure Unicode data for a single version of the language."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\npickle a ReplaceTemplate object.", "response": "def _pickle(r):\n \"\"\"Pickle.\"\"\"\n\n return ReplaceTemplate, (r.groups, r.group_slots, r.literals, r.pattern_hash, r.use_format, r._bytes)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef posix_props(self, prop, in_group=False):\n\n try:\n if self.is_bytes or not self.unicode:\n pattern = _uniprops.get_posix_property(\n prop, (_uniprops.POSIX_BYTES if self.is_bytes else _uniprops.POSIX)\n )\n else:\n pattern = _uniprops.get_posix_property(prop, _uniprops.POSIX_UNICODE)\n except Exception:\n raise ValueError('Invalid POSIX property!')\n if not in_group and not pattern: # pragma: no cover\n pattern = '^%s' % ('\\x00-\\xff' if self.is_bytes else _uniprops.UNICODE_RANGE)\n\n return [pattern]", "response": "Return a list of pattern patterns that match the POSIX properties of the current object."} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 function for\ninserting Unicode value by its name.", "response": "def unicode_name(self, name, in_group=False):\n \"\"\"Insert Unicode value by its name.\"\"\"\n\n value = ord(_unicodedata.lookup(name))\n if (self.is_bytes and value > 0xFF):\n value = \"\"\n if not in_group and value == \"\":\n return '[^%s]' % ('\\x00-\\xff' if self.is_bytes else _uniprops.UNICODE_RANGE)\n elif value == \"\":\n return value\n else:\n return ['\\\\%03o' % value if value <= 0xFF else chr(value)]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unicode_props(self, props, value, in_group=False, negate=False):\n\n # `'GC = Some_Unpredictable-Category Name' -> 'gc=someunpredictablecategoryname'`\n category = None\n\n # `\\p{^negated}` Strip off the caret after evaluation.\n if props.startswith(\"^\"):\n negate = not negate\n if props.startswith(\"^\"):\n props = props[1:]\n\n # Get the property and value.\n # If a property is present and not block,\n # we can assume `GC` as that is all we support.\n # If we are wrong it will fail.\n if value:\n if _uniprops.is_enum(props):\n category = props\n props = value\n elif value in ('y', 'yes', 't', 'true'):\n category = 'binary'\n elif value in ('n', 'no', 'f', 'false'):\n category = 'binary'\n negate = not negate\n else:\n raise ValueError('Invalid Unicode property!')\n\n v = _uniprops.get_unicode_property((\"^\" if negate else \"\") + props, category, self.is_bytes)\n if not in_group:\n if not v:\n v = '^%s' % ('\\x00-\\xff' if self.is_bytes else _uniprops.UNICODE_RANGE)\n v = \"[%s]\" % v\n properties = [v]\n\n return properties", "response": "Return a list of Unicode properties."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ninsert letter ( ASCII or Unicode upper or lower case properties.", "response": "def letter_case_props(self, case, in_group, negate=False):\n \"\"\"Insert letter (ASCII or Unicode) case 
properties.\"\"\"\n\n # Use traditional ASCII upper/lower case unless:\n # 1. The strings fed in are not bytes\n # 2. And the the Unicode flag was used\n if not in_group:\n v = self.posix_props((\"^\" if negate else \"\") + (\"upper\" if case == _UPPER else \"lower\"), in_group=in_group)\n v[0] = \"[%s]\" % v[0]\n else:\n v = self.posix_props((\"^\" if negate else \"\") + (\"upper\" if case == _UPPER else \"lower\"), in_group=in_group)\n return v"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main_group(self, i):\n\n current = []\n while True:\n try:\n t = next(i)\n current.extend(self.normal(t, i))\n except StopIteration:\n break\n return current", "response": "The main group of the ith element."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_bytes(self, i):\n\n value = int(self.get_byte(i), 16)\n single = self.get_single_stack()\n if self.span_stack:\n text = self.convert_case(chr(value), self.span_stack[-1])\n value = ord(self.convert_case(text, single)) if single is not None else ord(text)\n elif single:\n value = ord(self.convert_case(chr(value), single))\n if self.use_format and value in _CURLY_BRACKETS_ORD:\n self.handle_format(chr(value), i)\n else:\n self.result.append('\\\\%03o' % value)", "response": "Parse a single byte."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the named group of the user.", "response": "def get_named_group(self, t, i):\n \"\"\"Get group number.\"\"\"\n\n index = i.index\n value = [t]\n try:\n c = next(i)\n if c != \"<\":\n raise SyntaxError(\"Group missing '<' at %d!\" % (i.index - 1))\n value.append(c)\n c = next(i)\n if c in _DIGIT:\n value.append(c)\n c = next(i)\n while c != '>':\n if c in _DIGIT:\n value.append(c)\n c = next(i)\n value.append(c)\n elif c in _LETTERS_UNDERSCORE:\n value.append(c)\n c = next(i)\n while c != '>':\n if c in _WORD:\n 
value.append(c)\n c = next(i)\n value.append(c)\n else:\n raise SyntaxError(\"Invalid group character at %d!\" % (i.index - 1))\n except StopIteration:\n raise SyntaxError(\"Unmatched '<' at %d!\" % index)\n\n return ''.join(value)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting next format char.", "response": "def format_next(self, i):\n \"\"\"Get next format char.\"\"\"\n\n c = next(i)\n return self.format_references(next(i), i) if c == '\\\\' else c"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef span_case(self, i, case):\n\n # A new \\L, \\C or \\E should pop the last in the stack.\n if self.span_stack:\n self.span_stack.pop()\n if self.single_stack:\n self.single_stack.pop()\n self.span_stack.append(case)\n count = len(self.span_stack)\n self.end_found = False\n try:\n while not self.end_found:\n t = next(i)\n if self.use_format and t in _CURLY_BRACKETS:\n self.handle_format(t, i)\n elif t == '\\\\':\n try:\n t = next(i)\n self.reference(t, i)\n except StopIteration:\n self.result.append(t)\n raise\n else:\n self.result.append(self.convert_case(t, case))\n if self.end_found or count > len(self.span_stack):\n self.end_found = False\n break\n except StopIteration:\n pass\n if count == len(self.span_stack):\n self.span_stack.pop()", "response": "Uppercase or lowercase the next range of characters until end marker is found."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the correct single stack item to use.", "response": "def get_single_stack(self):\n \"\"\"Get the correct single stack item to use.\"\"\"\n\n single = None\n while self.single_stack:\n single = self.single_stack.pop()\n return single"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses a pattern and return a new replace template.", "response": "def parse(self, pattern, template, use_format=False):\n \"\"\"Parse template.\"\"\"\n\n if 
isinstance(template, bytes):\n self.is_bytes = True\n else:\n self.is_bytes = False\n if isinstance(pattern.pattern, bytes) != self.is_bytes:\n raise TypeError('Pattern string type must match replace template string type!')\n self._original = template\n self.use_format = use_format\n self.parse_template(pattern)\n\n return ReplaceTemplate(\n tuple(self.groups),\n tuple(self.group_slots),\n tuple(self.literals),\n hash(pattern),\n self.use_format,\n self.is_bytes\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfinding and return the appropriate group index.", "response": "def _get_group_index(self, index):\n \"\"\"Find and return the appropriate group index.\"\"\"\n\n g_index = None\n for group in self.groups:\n if group[0] == index:\n g_index = group[1]\n break\n return g_index"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_group_attributes(self, index):\n\n g_case = (None, None, -1)\n for group in self.group_slots:\n if group[0] == index:\n g_case = group[1]\n break\n return g_case", "response": "Find and return the appropriate group case."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _to_bstr(l):\n\n if isinstance(l, str):\n l = l.encode('ascii', 'backslashreplace')\n elif not isinstance(l, bytes):\n l = str(l).encode('ascii', 'backslashreplace')\n return l", "response": "Convert to byte string."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef format_string(m, l, capture, is_bytes):\n\n for fmt_type, value in capture[1:]:\n if fmt_type == FMT_ATTR:\n # Attribute\n l = getattr(l, value)\n elif fmt_type == FMT_INDEX:\n # Index\n l = l[value]\n elif fmt_type == FMT_CONV:\n if is_bytes:\n # Conversion\n if value in ('r', 'a'):\n l = repr(l).encode('ascii', 'backslashreplace')\n elif value == 's':\n # If the object is not string or byte string already\n l = 
_to_bstr(l)\n else:\n # Conversion\n if value == 'a':\n l = ascii(l)\n elif value == 'r':\n l = repr(l)\n elif value == 's':\n # If the object is not string or byte string already\n l = str(l)\n elif fmt_type == FMT_SPEC:\n # Integers and floats don't have an explicit 's' format type.\n if value[3] and value[3] == 's':\n if isinstance(l, int): # pragma: no cover\n raise ValueError(\"Unknown format code 's' for object of type 'int'\")\n if isinstance(l, float): # pragma: no cover\n raise ValueError(\"Unknown format code 's' for object of type 'float'\")\n\n # Ensure object is a byte string\n l = _to_bstr(l) if is_bytes else str(l)\n\n spec_type = value[1]\n if spec_type == '^':\n l = l.center(value[2], value[0])\n elif spec_type == \">\":\n l = l.rjust(value[2], value[0])\n else:\n l = l.ljust(value[2], value[0])\n\n # Make sure the final object is a byte string\n return _to_bstr(l) if is_bytes else str(l)", "response": "Perform a string format."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_version():\n\n path = os.path.join(os.path.dirname(__file__), 'backrefs')\n fp, pathname, desc = imp.find_module('__meta__', [path])\n try:\n vi = imp.load_module('__meta__', fp, pathname, desc).__version_info__\n return vi._get_canonical(), vi._get_dev_status()\n except Exception:\n print(traceback.format_exc())\n finally:\n fp.close()", "response": "Get version and version_info without importing the entire module."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nload list of dependencies.", "response": "def get_requirements():\n \"\"\"Load list of dependencies.\"\"\"\n\n install_requires = []\n with open(\"requirements/project.txt\") as f:\n for line in f:\n if not line.startswith(\"#\"):\n install_requires.append(line.strip())\n return install_requires"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_unicodedata():\n\n import 
unicodedata\n\n fail = False\n uver = unicodedata.unidata_version\n path = os.path.join(os.path.dirname(__file__), 'tools')\n fp, pathname, desc = imp.find_module('unidatadownload', [path])\n try:\n unidatadownload = imp.load_module('unidatadownload', fp, pathname, desc)\n unidatadownload.get_unicodedata(uver, no_zip=True)\n except Exception:\n print(traceback.format_exc())\n fail = True\n finally:\n fp.close()\n\n assert not fail, \"Failed to obtain unicodedata!\"\n return uver", "response": "Download the unicodedata version for the given Python version."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef generate_unicode_table():\n\n uver = get_unicodedata()\n fail = False\n path = os.path.join(os.path.dirname(__file__), 'tools')\n fp, pathname, desc = imp.find_module('unipropgen', [path])\n try:\n unipropgen = imp.load_module('unipropgen', fp, pathname, desc)\n unipropgen.build_tables(\n os.path.join(\n os.path.dirname(__file__),\n 'backrefs', 'uniprops', 'unidata'\n ),\n uver\n )\n except Exception:\n print(traceback.format_exc())\n fail = True\n finally:\n fp.close()\n\n assert not fail, \"Failed uniprops.py generation!\"", "response": "Generate the Unicode table for the given Python version."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting size of cache.", "response": "def _get_cache_size(replace=False):\n \"\"\"Get size of cache.\"\"\"\n\n if not replace:\n size = _cached_search_compile.cache_info().currsize\n else:\n size = _cached_replace_compile.cache_info().currsize\n return size"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _apply_replace_backrefs(m, repl=None, flags=0):\n\n if m is None:\n raise ValueError(\"Match is None!\")\n else:\n if isinstance(repl, ReplaceTemplate):\n return repl.expand(m)\n elif isinstance(repl, (str, bytes)):\n return 
_bregex_parse._ReplaceParser().parse(m.re, repl, bool(flags & FORMAT)).expand(m)", "response": "Expand with either the ReplaceTemplate or compile on the fly or return None."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _apply_search_backrefs(pattern, flags=0):\n\n if isinstance(pattern, (str, bytes)):\n re_verbose = VERBOSE & flags\n if flags & V0:\n re_version = V0\n elif flags & V1:\n re_version = V1\n else:\n re_version = 0\n if not (flags & DEBUG):\n pattern = _cached_search_compile(pattern, re_verbose, re_version, type(pattern))\n else: # pragma: no cover\n pattern = _bregex_parse._SearchParser(pattern, re_verbose, re_version).parse()\n elif isinstance(pattern, Bregex):\n if flags:\n raise ValueError(\"Cannot process flags argument with a compiled pattern\")\n pattern = pattern._pattern\n elif isinstance(pattern, _REGEX_TYPE):\n if flags:\n raise ValueError(\"Cannot process flags argument with a compiled pattern!\")\n else:\n raise TypeError(\"Not a string or compiled pattern!\")\n return pattern", "response": "Apply the search backrefs to the search pattern."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _assert_expandable(repl, use_format=False):\n\n if isinstance(repl, ReplaceTemplate):\n if repl.use_format != use_format:\n if use_format:\n raise ValueError(\"Replace not compiled as a format replace\")\n else:\n raise ValueError(\"Replace should not be compiled as a format replace!\")\n elif not isinstance(repl, (str, bytes)):\n raise TypeError(\"Expected string, buffer, or compiled replace!\")", "response": "Check if replace template is expandable."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncompiles both the search or search and replace into one object.", "response": "def compile(pattern, flags=0, auto_compile=None, **kwargs): # noqa A001\n \"\"\"Compile both the search or 
search and replace into one object.\"\"\"\n\n if isinstance(pattern, Bregex):\n if auto_compile is not None:\n raise ValueError(\"Cannot compile Bregex with a different auto_compile!\")\n elif flags != 0:\n raise ValueError(\"Cannot process flags argument with a compiled pattern\")\n return pattern\n else:\n if auto_compile is None:\n auto_compile = True\n\n return Bregex(compile_search(pattern, flags, **kwargs), auto_compile)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef compile_search(pattern, flags=0, **kwargs):\n\n return _regex.compile(_apply_search_backrefs(pattern, flags), flags, **kwargs)", "response": "Compile a pattern with extended search references."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef expandf(m, format): # noqa A002\n\n _assert_expandable(format, True)\n return _apply_replace_backrefs(m, format, flags=FORMAT)", "response": "Expand the string using the format replace pattern or function."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\napply search to the base class.", "response": "def search(self, string, *args, **kwargs):\n \"\"\"Apply `search`.\"\"\"\n\n return self._pattern.search(string, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\napplies match to the current object.", "response": "def match(self, string, *args, **kwargs):\n \"\"\"Apply `match`.\"\"\"\n\n return self._pattern.match(string, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\napply fullmatch to the current set of patterns.", "response": "def fullmatch(self, string, *args, **kwargs):\n \"\"\"Apply `fullmatch`.\"\"\"\n\n return self._pattern.fullmatch(string, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\napply split to 
obtain a list of strings.", "response": "def split(self, string, *args, **kwargs):\n \"\"\"Apply `split`.\"\"\"\n\n return self._pattern.split(string, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\napplying findall to the current object.", "response": "def findall(self, string, *args, **kwargs):\n \"\"\"Apply `findall`.\"\"\"\n\n return self._pattern.findall(string, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\napply sub to the current object.", "response": "def sub(self, repl, string, *args, **kwargs):\n \"\"\"Apply `sub`.\"\"\"\n\n return self._pattern.sub(self._auto_compile(repl), string, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse a regex template for the regex module.", "response": "def regex_parse_template(self, template, pattern):\n \"\"\"\n Parse template for the regex module.\n\n Do NOT edit the literal list returned by\n _compile_replacement_helper as you will edit\n the original cached value. 
Copy the values\n instead.\n \"\"\"\n\n groups = []\n literals = []\n replacements = _compile_replacement_helper(pattern, template)\n count = 0\n for part in replacements:\n if isinstance(part, int):\n literals.append(None)\n groups.append((count, part))\n else:\n literals.append(part)\n count += 1\n return groups, literals"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nparse a template pattern.", "response": "def parse_template(self, pattern):\n \"\"\"Parse template.\"\"\"\n\n i = _util.StringIter((self._original.decode('latin-1') if self.is_bytes else self._original))\n iter(i)\n self.result = [\"\"]\n\n while True:\n try:\n t = next(i)\n if self.use_format and t in _CURLY_BRACKETS:\n self.handle_format(t, i)\n elif t == '\\\\':\n try:\n t = next(i)\n self.reference(t, i)\n except StopIteration:\n self.result.append(t)\n raise\n else:\n self.result.append(t)\n\n except StopIteration:\n break\n\n if len(self.result) > 1:\n self.literal_slots.append(\"\".join(self.result))\n del self.result[:]\n self.result.append(\"\")\n self.slot += 1\n\n if self.is_bytes:\n self._template = \"\".join(self.literal_slots).encode('latin-1')\n else:\n self._template = \"\".join(self.literal_slots)\n self.groups, self.literals = self.regex_parse_template(self._template, pattern)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nuses the template expand the string.", "response": "def expand(self, m):\n \"\"\"Using the template, expand the string.\"\"\"\n\n if m is None:\n raise ValueError(\"Match is None!\")\n\n sep = m.string[:0]\n if isinstance(sep, bytes) != self._bytes:\n raise TypeError('Match string type does not match expander string type!')\n text = []\n # Expand string\n for x in range(0, len(self.literals)):\n index = x\n l = self.literals[x]\n if l is None:\n g_index = self._get_group_index(index)\n span_case, single_case, capture = self._get_group_attributes(index)\n if not self.use_format:\n # Non 
format replace\n try:\n l = m.group(g_index)\n except IndexError: # pragma: no cover\n raise IndexError(\"'%d' is out of range!\" % capture)\n else:\n # String format replace\n try:\n obj = m.captures(g_index)\n except IndexError: # pragma: no cover\n raise IndexError(\"'%d' is out of range!\" % g_index)\n l = _util.format_string(m, obj, capture, self._bytes)\n if span_case is not None:\n if span_case == _LOWER:\n l = l.lower()\n else:\n l = l.upper()\n if single_case is not None:\n if single_case == _LOWER:\n l = l[0:1].lower() + l[1:]\n else:\n l = l[0:1].upper() + l[1:]\n text.append(l)\n\n return sep.join(text)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a Unicode char to a uniform Unicode character.", "response": "def uniformat(value):\n \"\"\"Convert a Unicode char.\"\"\"\n\n if value in GROUP_ESCAPES:\n # Escape characters that are (or will be in the future) problematic\n c = \"\\\\x%02x\\\\x%02x\" % (0x5c, value)\n elif value <= 0xFF:\n c = \"\\\\x%02x\" % value\n elif value <= 0xFFFF:\n c = \"\\\\u%04x\" % value\n else:\n c = \"\\\\U%08x\" % value\n return c"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nclamps the Unicode range.", "response": "def create_span(unirange, is_bytes=False):\n \"\"\"Clamp the Unicode range.\"\"\"\n\n if len(unirange) < 2:\n unirange.append(unirange[0])\n if is_bytes:\n if unirange[0] > MAXASCII:\n return None\n if unirange[1] > MAXASCII:\n unirange[1] = MAXASCII\n return [x for x in range(unirange[0], unirange[1] + 1)]"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomposes a table with the specified entry name of values not explicitly defined.", "response": "def not_explicitly_defined(table, name, is_bytes=False):\n \"\"\"Compose a table with the specified entry name of values not explicitly defined.\"\"\"\n\n all_chars = ALL_ASCII if is_bytes else ALL_CHARS\n s = set()\n for k, v in 
table.items():\n s.update(v)\n if name in table:\n table[name] = list(set(table[name]) | (all_chars - s))\n else:\n table[name] = list(all_chars - s)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef char2range(d, is_bytes=False, invert=True):\n\n fmt = bytesformat if is_bytes else uniformat\n maxrange = MAXASCII if is_bytes else MAXUNICODE\n\n for k1 in sorted(d.keys()):\n v1 = d[k1]\n if not isinstance(v1, list):\n char2range(v1, is_bytes=is_bytes, invert=invert)\n else:\n inverted = k1.startswith('^')\n v1.sort()\n last = None\n first = None\n ilast = None\n ifirst = None\n v2 = []\n iv2 = []\n if v1 and v1[0] != 0:\n ifirst = 0\n for i in v1:\n if first is None:\n first = i\n last = i\n elif i == last + 1:\n last = i\n elif first is not None:\n if first == last:\n v2.append(fmt(first))\n else:\n v2.append(\"%s-%s\" % (fmt(first), fmt(last)))\n if invert and ifirst is not None:\n ilast = first - 1\n if ifirst == ilast:\n iv2.append(fmt(ifirst))\n else:\n iv2.append(\"%s-%s\" % (fmt(ifirst), fmt(ilast)))\n ifirst = last + 1\n first = i\n last = i\n\n if not v1:\n iv2 = [\"%s-%s\" % (fmt(0), fmt(maxrange))]\n elif first is not None:\n if first == last:\n v2.append(fmt(first))\n else:\n v2.append(\"%s-%s\" % (fmt(first), fmt(last)))\n if invert and ifirst is not None:\n ilast = first - 1\n if ifirst == ilast:\n iv2.append(fmt(ifirst))\n else:\n iv2.append(\"%s-%s\" % (fmt(ifirst), fmt(ilast)))\n ifirst = last + 1\n if invert and ifirst <= maxrange:\n ilast = maxrange\n if ifirst == ilast:\n iv2.append(fmt(ifirst))\n else:\n iv2.append(\"%s-%s\" % (fmt(ifirst), fmt(ilast)))\n d[k1] = ''.join(v2)\n if invert:\n d[k1[1:] if inverted else '^' + k1] = ''.join(iv2)", "response": "Convert the characters in the dict to a range in string form."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating canonical combining class property.", "response": "def gen_ccc(output, 
ascii_props=False, append=False, prefix=\"\"):\n \"\"\"Generate `canonical combining class` property.\"\"\"\n\n obj = {}\n with codecs.open(os.path.join(HOME, 'unicodedata', UNIVERSION, 'DerivedCombiningClass.txt'), 'r', 'utf-8') as uf:\n for line in uf:\n if not line.startswith('#'):\n data = line.split('#')[0].split(';')\n if len(data) < 2:\n continue\n span = create_span([int(i, 16) for i in data[0].strip().split('..')], is_bytes=ascii_props)\n if span is None:\n continue\n name = format_name(data[1])\n\n if name not in obj:\n obj[name] = []\n obj[name].extend(span)\n\n for x in range(0, 256):\n key = str(x)\n if key not in obj:\n obj[key] = []\n\n for name in list(obj.keys()):\n s = set(obj[name])\n obj[name] = sorted(s)\n\n not_explicitly_defined(obj, '0', is_bytes=ascii_props)\n\n # Convert characters values to ranges\n char2range(obj, is_bytes=ascii_props)\n\n with codecs.open(output, 'a' if append else 'w', 'utf-8') as f:\n if not append:\n f.write(HEADER)\n # Write out the Unicode properties\n f.write('%s_canonical_combining_class = {\\n' % prefix)\n count = len(obj) - 1\n i = 0\n for k1, v1 in sorted(obj.items()):\n f.write(' \"%s\": \"%s\"' % (k1, v1))\n if i == count:\n f.write('\\n}\\n')\n else:\n f.write(',\\n')\n i += 1"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef gen_nf_quick_check(output, ascii_props=False, append=False, prefix=\"\"):\n\n categories = []\n nf = {}\n all_chars = ALL_ASCII if ascii_props else ALL_CHARS\n file_name = os.path.join(HOME, 'unicodedata', UNIVERSION, 'DerivedNormalizationProps.txt')\n with codecs.open(file_name, 'r', 'utf-8') as uf:\n for line in uf:\n if not line.startswith('#'):\n data = line.split('#')[0].split(';')\n if len(data) < 2:\n continue\n if not data[1].strip().lower().endswith('_qc'):\n continue\n span = create_span([int(i, 16) for i in data[0].strip().split('..')], is_bytes=ascii_props)\n if span is None:\n continue\n name = format_name(data[1][:-3] + 
'quickcheck')\n subvalue = format_name(data[2])\n\n if name not in nf:\n nf[name] = {}\n categories.append(name)\n if subvalue not in nf[name]:\n nf[name][subvalue] = []\n nf[name][subvalue].extend(span)\n\n for k1, v1 in nf.items():\n temp = set()\n for k2 in list(v1.keys()):\n temp |= set(v1[k2])\n v1['y'] = list(all_chars - temp)\n\n for k1, v1 in nf.items():\n for name in list(v1.keys()):\n s = set(nf[k1][name])\n nf[k1][name] = sorted(s)\n\n # Convert characters values to ranges\n char2range(nf, is_bytes=ascii_props)\n\n with codecs.open(output, 'a' if append else 'w', 'utf-8') as f:\n if not append:\n f.write(HEADER)\n for key, value in sorted(nf.items()):\n # Write out the Unicode properties\n f.write('%s_%s = {\\n' % (prefix, key.replace('quickcheck', '_quick_check')))\n count = len(value) - 1\n i = 0\n for k1, v1 in sorted(value.items()):\n f.write(' \"%s\": \"%s\"' % (k1, v1))\n if i == count:\n f.write('\\n}\\n')\n else:\n f.write(',\\n')\n i += 1\n\n return categories", "response": "Generate quick check properties."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating bidi class property.", "response": "def gen_bidi(output, ascii_props=False, append=False, prefix=\"\"):\n \"\"\"Generate `bidi class` property.\"\"\"\n\n bidi_class = {}\n max_range = MAXASCII if ascii_props else MAXUNICODE\n with codecs.open(os.path.join(HOME, 'unicodedata', UNIVERSION, 'UnicodeData.txt'), 'r', 'utf-8') as uf:\n for line in uf:\n data = line.strip().split(';')\n if data:\n bidi = data[4].strip().lower()\n if not bidi:\n continue\n value = int(data[0].strip(), 16)\n\n if bidi not in bidi_class:\n bidi_class[bidi] = []\n\n if value > max_range:\n continue\n\n bidi_class[bidi].append(value)\n\n for name in list(bidi_class.keys()):\n s = set(bidi_class[name])\n bidi_class[name] = sorted(s)\n\n # Convert characters values to ranges\n char2range(bidi_class, is_bytes=ascii_props)\n\n with codecs.open(output, 'a' if append else 'w', 'utf-8') as 
f:\n if not append:\n f.write(HEADER)\n f.write('%s_bidi_classes = {\\n' % prefix)\n count = len(bidi_class) - 1\n i = 0\n for k1, v1 in sorted(bidi_class.items()):\n f.write(' \"%s\": \"%s\"' % (k1, v1))\n if i == count:\n f.write('\\n}\\n')\n else:\n f.write(',\\n')\n i += 1"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngenerate the bytes posix table and write out to file.", "response": "def gen_posix(output, is_bytes=False, append=False, prefix=\"\"):\n \"\"\"Generate the bytes posix table and write out to file.\"\"\"\n\n posix_table = {}\n\n # `Alnum: [a-zA-Z0-9]`\n s = set([x for x in range(0x30, 0x39 + 1)])\n s |= set([x for x in range(0x41, 0x5a + 1)])\n s |= set([x for x in range(0x61, 0x7a + 1)])\n posix_table[\"alnum\"] = list(s)\n\n # `Alpha: [a-zA-Z]`\n s = set([x for x in range(0x41, 0x5a)])\n s |= set([x for x in range(0x61, 0x7a)])\n posix_table[\"alpha\"] = list(s)\n\n # `ASCII: [\\x00-\\x7F]`\n s = set([x for x in range(0, 0x7F + 1)])\n posix_table[\"ascii\"] = list(s)\n\n # `Blank: [ \\t]`\n s = set([0x20, 0x09])\n posix_table[\"blank\"] = list(s)\n\n # `Cntrl: [\\x00-\\x1F\\x7F]`\n s = set([x for x in range(0, 0x1F + 1)] + [0x7F])\n posix_table[\"cntrl\"] = list(s)\n\n # `Digit: [0-9]`\n s = set([x for x in range(0x30, 0x39 + 1)])\n posix_table[\"digit\"] = list(s)\n\n # `Graph: [\\x21-\\x7E]`\n s = set([x for x in range(0x21, 0x7E + 1)])\n posix_table[\"graph\"] = list(s)\n\n # `Lower: [a-z]`\n s = set([x for x in range(0x61, 0x7a + 1)])\n posix_table[\"lower\"] = list(s)\n\n # `Print: [\\x20-\\x7E]`\n s = set([x for x in range(0x20, 0x7E + 1)])\n posix_table[\"print\"] = list(s)\n\n # `Punct: [!\\\"\\#$%&'()*+,\\-./:;<=>?@\\[\\\\\\]^_`{|}~]`\n s = set([x for x in range(0x21, 0x2f + 1)])\n s |= set([x for x in range(0x3a, 0x40 + 1)])\n s |= set([x for x in range(0x5b, 0x60 + 1)])\n s |= set([x for x in range(0x7b, 0x7e + 1)])\n posix_table[\"punct\"] = list(s)\n\n # `Space: [ \\t\\r\\n\\v\\f]`\n s = set([x 
for x in range(0x09, 0x0d + 1)] + [0x20])\n posix_table[\"space\"] = list(s)\n\n # `Upper: [A-Z]`\n s = set([x for x in range(0x41, 0x5a + 1)])\n posix_table[\"upper\"] = list(s)\n\n # `XDigit: [A-Fa-f0-9]`\n s = set([x for x in range(0x30, 0x39 + 1)])\n s |= set([x for x in range(0x41, 0x46 + 1)])\n s |= set([x for x in range(0x61, 0x66 + 1)])\n posix_table[\"xdigit\"] = list(s)\n\n # Convert characters values to ranges\n char2range(posix_table, is_bytes=is_bytes)\n\n with codecs.open(output, 'a' if append else 'w', 'utf-8') as f:\n if not append:\n f.write(HEADER)\n # Write out the Unicode properties\n f.write('%s_posix_properties = {\\n' % prefix)\n count = len(posix_table) - 1\n i = 0\n for k1, v1 in sorted(posix_table.items()):\n f.write(' \"%s\": \"%s\"' % (k1, v1))\n if i == count:\n f.write('\\n}\\n')\n else:\n f.write(',\\n')\n i += 1"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef gen_uposix(table, posix_table):\n\n # `Alnum: [\\p{L&}\\p{Nd}]`\n s = set(table['l']['c'] + table['n']['d'])\n posix_table[\"posixalnum\"] = list(s)\n\n # `Alpha: [\\p{L&}]`\n s = set(table['l']['c'])\n posix_table[\"posixalpha\"] = list(s)\n\n # `ASCII: [\\x00-\\x7F]`\n s = set([x for x in range(0, 0x7F + 1)])\n posix_table[\"posixascii\"] = list(s)\n\n # `Blank: [\\p{Zs}\\t]`\n s = set(table['z']['s'] + [0x09])\n posix_table[\"posixblank\"] = list(s)\n\n # `Cntrl: [\\p{Cc}]`\n s = set(table['c']['c'])\n posix_table[\"posixcntrl\"] = list(s)\n\n # `Digit: [\\p{Nd}]`\n s = set(table['n']['d'])\n posix_table[\"posixdigit\"] = list(s)\n\n # `Graph: [^\\p{Z}\\p{C}]`\n s = set()\n for table_name in ('z', 'c'):\n for sub_table_name in table[table_name]:\n if not sub_table_name.startswith('^'):\n s |= set(table[table_name][sub_table_name])\n posix_table[\"^posixgraph\"] = list(s)\n\n # `Lower: [\\p{Ll}]`\n s = set(table['l']['l'])\n posix_table[\"posixlower\"] = list(s)\n\n # `Print: [\\P{C}]`\n s = set()\n for table_name in ('c',):\n 
for sub_table_name in table[table_name]:\n if not sub_table_name.startswith('^'):\n s |= set(table[table_name][sub_table_name])\n posix_table[\"^posixprint\"] = list(s)\n\n # `Punct: [\\p{P}\\p{S}]`\n s = set()\n for table_name in ('p', 's'):\n for sub_table_name in table[table_name]:\n if not sub_table_name.startswith('^'):\n s |= set(table[table_name][sub_table_name])\n posix_table[\"posixpunct\"] = list(s)\n\n # `Space: [\\p{Z}\\t\\r\\n\\v\\f]`\n s = set()\n for table_name in ('z',):\n for sub_table_name in table[table_name]:\n if not sub_table_name.startswith('^'):\n s |= set(table[table_name][sub_table_name])\n s |= set([x for x in range(0x09, 0x0e)])\n posix_table[\"posixspace\"] = list(s)\n\n # `Upper: [\\p{Lu}]`\n s = set(table['l']['u'])\n posix_table[\"posixupper\"] = list(s)\n\n # `XDigit: [A-Fa-f0-9]`\n s = set([x for x in range(0x30, 0x39 + 1)])\n s |= set([x for x in range(0x41, 0x46 + 1)])\n s |= set([x for x in range(0x61, 0x66 + 1)])\n posix_table[\"posixxdigit\"] = list(s)", "response": "Generate the posix table and write out to file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngenerating the property table and dump it to the provided file.", "response": "def gen_properties(output, ascii_props=False, append=False):\n \"\"\"Generate the property table and dump it to the provided file.\"\"\"\n\n files = {\n 'gc': os.path.join(output, 'generalcategory.py'),\n 'blk': os.path.join(output, 'block.py'),\n 'sc': os.path.join(output, 'script.py'),\n 'bc': os.path.join(output, 'bidiclass.py'),\n 'binary': os.path.join(output, 'binary.py'),\n 'posix': os.path.join(output, 'posix.py'),\n 'age': os.path.join(output, 'age.py'),\n 'ea': os.path.join(output, 'eastasianwidth.py'),\n 'gcb': os.path.join(output, 'graphemeclusterbreak.py'),\n 'lb': os.path.join(output, 'linebreak.py'),\n 'sb': os.path.join(output, 'sentencebreak.py'),\n 'wb': os.path.join(output, 'wordbreak.py'),\n 'hst': os.path.join(output, 
'hangulsyllabletype.py'),\n 'dt': os.path.join(output, 'decompositiontype.py'),\n 'jt': os.path.join(output, 'joiningtype.py'),\n 'jg': os.path.join(output, 'joininggroup.py'),\n 'nt': os.path.join(output, 'numerictype.py'),\n 'nv': os.path.join(output, 'numericvalue.py'),\n 'ccc': os.path.join(output, 'canonicalcombiningclass.py'),\n 'qc': os.path.join(output, 'quickcheck.py'),\n 'alias': os.path.join(output, 'alias.py')\n }\n\n files['scx'] = os.path.join(output, 'scriptextensions.py')\n files['insc'] = os.path.join(output, 'indicsyllabiccategory.py')\n if PY34:\n files['bpt'] = os.path.join(output, 'bidipairedbrackettype.py')\n if PY35:\n files['inpc'] = os.path.join(output, 'indicpositionalcategory.py')\n else:\n files['inmc'] = os.path.join(output, 'indicmatracategory.py')\n if PY37:\n files['vo'] = os.path.join(output, 'verticalorientation.py')\n\n prefix = \"ascii\" if ascii_props else 'unicode'\n\n # `L&` or `Lc` won't be found in the table,\n # so initialize 'c' at the start. `&` will have to be converted to 'c'\n # before sending it through.\n categories = [\n 'generalcategory', 'script', 'block',\n 'bidiclass', 'eastasianwidth', 'linebreak',\n 'hangulsyllabletype', 'wordbreak', 'sentencebreak',\n 'graphemeclusterbreak', 'decompositiontype', 'joiningtype',\n 'joininggroup', 'numerictype', 'numericvalue',\n 'canonicalcombiningclass', 'age'\n ]\n categories.append('scriptextensions')\n categories.append('indicsyllabiccategory')\n if PY34:\n categories.append('bidipairedbrackettype')\n if PY35:\n categories.append('indicpositionalcategory')\n else:\n categories.append('indicmatracategory')\n if PY37:\n categories.append('verticalorientation')\n if ascii_props:\n print('=========Ascii Tables=========')\n else:\n print('========Unicode Tables========')\n print('Building: General Category')\n max_range = ASCII_RANGE if ascii_props else UNICODE_RANGE\n all_chars = ALL_ASCII if ascii_props else ALL_CHARS\n table = {'l': {'c': []}}\n itable = {'l': {}}\n with 
codecs.open(os.path.join(HOME, 'unicodedata', UNIVERSION, 'UnicodeData.txt'), 'r', 'utf-8') as uf:\n for line in uf:\n data = line.strip().split(';')\n if data:\n i = int(data[0], 16)\n if i > max_range[1]:\n continue\n p = data[2].lower()\n if p[0] not in table:\n table[p[0]] = {}\n itable[p[0]] = {}\n if p[1] not in table[p[0]]:\n table[p[0]][p[1]] = []\n table[p[0]][p[1]].append(i)\n # Add LC which is a combo of Ll, Lu, and Lt\n if p[0] == 'l' and p[1] in ('l', 'u', 't'):\n table['l']['c'].append(i)\n\n # Create inverse of each category\n for k1, v1 in table.items():\n inverse_category = set()\n for k2, v2 in v1.items():\n s = set(v2)\n inverse_category |= s\n itable[k1]['^'] = list(all_chars - inverse_category)\n\n # Generate Unicode blocks\n print('Building: Blocks')\n gen_blocks(files['blk'], ascii_props, append, prefix)\n\n # Generate Unicode scripts\n print('Building: Scripts & Script Extensions')\n gen_scripts(\n 'Scripts.txt', 'ScriptExtensions.txt', 'scripts', 'script_extensions', files['sc'], files['scx'],\n notexplicit='zzzz', ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n # Generate Unicode bidi classes\n print('Building: Bidi Classes')\n gen_bidi(files['bc'], ascii_props, append, prefix)\n\n if PY34:\n print('Building: Bidi Paired Bracket Type')\n gen_enum(\n 'BidiBrackets.txt', 'bidi_paired_bracket_type', files['bpt'], notexplicit='n',\n field=2, ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n # Generate Unicode binary\n print('Building: Binary')\n binary = gen_binary(table, files['binary'], ascii_props, append, prefix)\n\n # Generate posix table and write out to file.\n print('Building: Posix')\n gen_posix(files['posix'], is_bytes=ascii_props, append=append, prefix=prefix)\n\n print('Building: Age')\n gen_age(files['age'], ascii_props, append, prefix)\n\n print('Building: East Asian Width')\n gen_enum(\n 'EastAsianWidth.txt', 'east_asian_width', files['ea'],\n notexplicit='n', ascii_props=ascii_props, append=append, 
prefix=prefix\n )\n\n print('Building: Grapheme Cluster Break')\n gen_enum(\n 'GraphemeBreakProperty.txt', 'grapheme_cluster_break', files['gcb'], notexplicit='other',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n print('Building: Line Break')\n gen_enum(\n 'LineBreak.txt', 'line_break', files['lb'], notexplicit='unknown',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n print('Building: Sentence Break')\n gen_enum(\n 'SentenceBreakProperty.txt', 'sentence_break', files['sb'], notexplicit='other',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n print('Building: Word Break')\n gen_enum(\n 'WordBreakProperty.txt', 'word_break', files['wb'], notexplicit='other',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n if PY35:\n print('Building: Indic Positional Category')\n gen_enum(\n 'IndicPositionalCategory.txt', 'indic_positional_category', files['inpc'], notexplicit='na',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n else:\n print('Building: Indic Matra Category')\n gen_enum(\n 'IndicMatraCategory.txt', 'indic_matra_category', files['inmc'], notexplicit='na',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n print('Building: Indic Syllabic Category')\n gen_enum(\n 'IndicSyllabicCategory.txt', 'indic_syllabic_category', files['insc'], notexplicit='other',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n print('Building: Hangul Syllable Type')\n gen_enum(\n 'HangulSyllableType.txt', 'hangul_syllable_type', files['hst'], notexplicit='na',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n print('Building: Decomposition Type')\n gen_enum(\n 'DerivedDecompositionType.txt', 'decomposition_type', files['dt'], notexplicit='none',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n print('Building: Joining Type')\n gen_enum(\n 'DerivedJoiningType.txt', 'joining_type', files['jt'], notexplicit='u',\n ascii_props=ascii_props, append=append, 
prefix=prefix\n )\n\n print('Building: Joining Group')\n gen_enum(\n 'DerivedJoiningGroup.txt', 'joining_group', files['jg'], notexplicit='nonjoining',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n print('Building: Numeric Type')\n gen_enum(\n 'DerivedNumericType.txt', 'numeric_type', files['nt'], notexplicit='none',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n print('Building: Numeric Value')\n gen_enum(\n 'DerivedNumericValues.txt', 'numeric_values', files['nv'], field=3, notexplicit='nan',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n print('Building: Canonical Combining Class')\n gen_ccc(files['ccc'], ascii_props, append, prefix)\n\n print('Building: NF* Quick Check')\n categories.extend(gen_nf_quick_check(files['qc'], ascii_props, append, prefix))\n\n if PY37:\n print('Building: Vertical Orientation')\n gen_enum(\n 'VerticalOrientation.txt', 'vertical_orientation', files['vo'], notexplicit='R',\n ascii_props=ascii_props, append=append, prefix=prefix\n )\n\n if not ascii_props:\n print('Building: Aliases')\n gen_alias(categories, binary, files['alias'], ascii_props, append, prefix)\n\n # Convert char values to string ranges.\n char2range(table, is_bytes=ascii_props)\n char2range(itable, is_bytes=ascii_props, invert=False)\n for k1, v1 in itable.items():\n table[k1]['^'] = v1['^']\n\n with codecs.open(files['gc'], 'a' if append else 'w', 'utf-8') as f:\n if not append:\n f.write(HEADER)\n # Write out the Unicode properties\n f.write('%s_properties = {\\n' % prefix)\n count = len(table) - 1\n i = 0\n for k1, v1 in sorted(table.items()):\n f.write(' \"%s\": {\\n' % k1)\n count2 = len(v1) - 1\n j = 0\n for k2, v2 in sorted(v1.items()):\n f.write(' \"%s\": \"%s\"' % (k2, v2))\n if j == count2:\n f.write('\\n }')\n else:\n f.write(',\\n')\n j += 1\n if i == count:\n f.write('\\n}\\n')\n else:\n f.write(',\\n')\n i += 1\n\n if not append:\n with codecs.open(os.path.join(output, '__init__.py'), 'w') as f:\n 
f.write(HEADER)\n for x in sorted(files):\n f.write('from .%s import * # noqa\\n' % os.path.basename(files[x])[:-3])"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build_unicode_property_table(output):\n\n if not os.path.exists(output):\n os.mkdir(output)\n gen_properties(output)", "response": "Build and write out Unicode property table."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds and write out Unicode property table.", "response": "def build_ascii_property_table(output):\n \"\"\"Build and write out Unicode property table.\"\"\"\n\n if not os.path.exists(output):\n os.mkdir(output)\n gen_properties(output, ascii_props=True, append=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_posix_property(value, mode=POSIX):\n\n if mode == POSIX_BYTES:\n return unidata.ascii_posix_properties[value]\n elif mode == POSIX_UNICODE:\n return unidata.unicode_binary[\n ('^posix' + value[1:]) if value.startswith('^') else ('posix' + value)\n ]\n else:\n return unidata.unicode_posix_properties[value]", "response": "Retrieve the posix category."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget CANONICAL COMBINING CLASS property.", "response": "def get_canonical_combining_class_property(value, is_bytes=False):\n \"\"\"Get `CANONICAL COMBINING CLASS` property.\"\"\"\n\n obj = unidata.ascii_canonical_combining_class if is_bytes else unidata.unicode_canonical_combining_class\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['canonicalcombiningclass'].get(negated, negated)\n else:\n value = unidata.unicode_alias['canonicalcombiningclass'].get(value, value)\n\n return obj[value]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget EAST ASIAN WIDTH property.", "response": "def 
get_east_asian_width_property(value, is_bytes=False):\n \"\"\"Get `EAST ASIAN WIDTH` property.\"\"\"\n\n obj = unidata.ascii_east_asian_width if is_bytes else unidata.unicode_east_asian_width\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['eastasianwidth'].get(negated, negated)\n else:\n value = unidata.unicode_alias['eastasianwidth'].get(value, value)\n\n return obj[value]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_grapheme_cluster_break_property(value, is_bytes=False):\n\n obj = unidata.ascii_grapheme_cluster_break if is_bytes else unidata.unicode_grapheme_cluster_break\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['graphemeclusterbreak'].get(negated, negated)\n else:\n value = unidata.unicode_alias['graphemeclusterbreak'].get(value, value)\n\n return obj[value]", "response": "Get GRAPHEME CLUSTER BREAK property."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_line_break_property(value, is_bytes=False):\n\n obj = unidata.ascii_line_break if is_bytes else unidata.unicode_line_break\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['linebreak'].get(negated, negated)\n else:\n value = unidata.unicode_alias['linebreak'].get(value, value)\n\n return obj[value]", "response": "Get LINE BREAK property."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_sentence_break_property(value, is_bytes=False):\n\n obj = unidata.ascii_sentence_break if is_bytes else unidata.unicode_sentence_break\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['sentencebreak'].get(negated, negated)\n else:\n value = unidata.unicode_alias['sentencebreak'].get(value, value)\n\n return obj[value]", "response": "Get SENTENCE 
BREAK property."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting WORD BREAK property.", "response": "def get_word_break_property(value, is_bytes=False):\n \"\"\"Get `WORD BREAK` property.\"\"\"\n\n obj = unidata.ascii_word_break if is_bytes else unidata.unicode_word_break\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['wordbreak'].get(negated, negated)\n else:\n value = unidata.unicode_alias['wordbreak'].get(value, value)\n\n return obj[value]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_hangul_syllable_type_property(value, is_bytes=False):\n\n obj = unidata.ascii_hangul_syllable_type if is_bytes else unidata.unicode_hangul_syllable_type\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['hangulsyllabletype'].get(negated, negated)\n else:\n value = unidata.unicode_alias['hangulsyllabletype'].get(value, value)\n\n return obj[value]", "response": "Get HANGUL SYLLABLE TYPE property."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets INDIC POSITIONAL / MATRA CATEGORY property.", "response": "def get_indic_positional_category_property(value, is_bytes=False):\n \"\"\"Get `INDIC POSITIONAL/MATRA CATEGORY` property.\"\"\"\n\n if PY35:\n obj = unidata.ascii_indic_positional_category if is_bytes else unidata.unicode_indic_positional_category\n alias_key = 'indicpositionalcategory'\n else:\n obj = unidata.ascii_indic_matra_category if is_bytes else unidata.unicode_indic_matra_category\n alias_key = 'indicmatracategory'\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias[alias_key].get(negated, negated)\n else:\n value = unidata.unicode_alias[alias_key].get(value, value)\n\n return obj[value]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget INDIC SYLLABIC CATEGORY property.", 
"response": "def get_indic_syllabic_category_property(value, is_bytes=False):\n \"\"\"Get `INDIC SYLLABIC CATEGORY` property.\"\"\"\n\n obj = unidata.ascii_indic_syllabic_category if is_bytes else unidata.unicode_indic_syllabic_category\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['indicsyllabiccategory'].get(negated, negated)\n else:\n value = unidata.unicode_alias['indicsyllabiccategory'].get(value, value)\n\n return obj[value]"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets DECOMPOSITION TYPE property.", "response": "def get_decomposition_type_property(value, is_bytes=False):\n \"\"\"Get `DECOMPOSITION TYPE` property.\"\"\"\n\n obj = unidata.ascii_decomposition_type if is_bytes else unidata.unicode_decomposition_type\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['decompositiontype'].get(negated, negated)\n else:\n value = unidata.unicode_alias['decompositiontype'].get(value, value)\n\n return obj[value]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_nfc_quick_check_property(value, is_bytes=False):\n\n obj = unidata.ascii_nfc_quick_check if is_bytes else unidata.unicode_nfc_quick_check\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['nfcquickcheck'].get(negated, negated)\n else:\n value = unidata.unicode_alias['nfcquickcheck'].get(value, value)\n\n return obj[value]", "response": "Get NFC QUICK CHECK property."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget NFD QUICK CHECK property.", "response": "def get_nfd_quick_check_property(value, is_bytes=False):\n \"\"\"Get `NFD QUICK CHECK` property.\"\"\"\n\n obj = unidata.ascii_nfd_quick_check if is_bytes else unidata.unicode_nfd_quick_check\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + 
unidata.unicode_alias['nfdquickcheck'].get(negated, negated)\n else:\n value = unidata.unicode_alias['nfdquickcheck'].get(value, value)\n\n return obj[value]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_nfkc_quick_check_property(value, is_bytes=False):\n\n obj = unidata.ascii_nfkc_quick_check if is_bytes else unidata.unicode_nfkc_quick_check\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['nfkcquickcheck'].get(negated, negated)\n else:\n value = unidata.unicode_alias['nfkcquickcheck'].get(value, value)\n\n return obj[value]", "response": "Get NFKC QUICK CHECK property."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget NFKD QUICK CHECK property.", "response": "def get_nfkd_quick_check_property(value, is_bytes=False):\n \"\"\"Get `NFKD QUICK CHECK` property.\"\"\"\n\n obj = unidata.ascii_nfkd_quick_check if is_bytes else unidata.unicode_nfkd_quick_check\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['nfkdquickcheck'].get(negated, negated)\n else:\n value = unidata.unicode_alias['nfkdquickcheck'].get(value, value)\n\n return obj[value]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_numeric_type_property(value, is_bytes=False):\n\n obj = unidata.ascii_numeric_type if is_bytes else unidata.unicode_numeric_type\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['numerictype'].get(negated, negated)\n else:\n value = unidata.unicode_alias['numerictype'].get(value, value)\n\n return obj[value]", "response": "Get NUMERIC TYPE property."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget NUMERIC VALUE property.", "response": "def get_numeric_value_property(value, is_bytes=False):\n \"\"\"Get `NUMERIC VALUE` property.\"\"\"\n\n obj = 
unidata.ascii_numeric_values if is_bytes else unidata.unicode_numeric_values\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['numericvalue'].get(negated, negated)\n else:\n value = unidata.unicode_alias['numericvalue'].get(value, value)\n\n return obj[value]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_joining_type_property(value, is_bytes=False):\n\n obj = unidata.ascii_joining_type if is_bytes else unidata.unicode_joining_type\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['joiningtype'].get(negated, negated)\n else:\n value = unidata.unicode_alias['joiningtype'].get(value, value)\n\n return obj[value]", "response": "Get JOINING TYPE property."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets JOINING GROUP property.", "response": "def get_joining_group_property(value, is_bytes=False):\n \"\"\"Get `JOINING GROUP` property.\"\"\"\n\n obj = unidata.ascii_joining_group if is_bytes else unidata.unicode_joining_group\n\n if value.startswith('^'):\n negated = value[1:]\n value = '^' + unidata.unicode_alias['joininggroup'].get(negated, negated)\n else:\n value = unidata.unicode_alias['joininggroup'].get(value, value)\n\n return obj[value]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_is_property(value, is_bytes=False):\n\n if value.startswith('^'):\n prefix = value[1:3]\n temp = value[3:]\n negate = '^'\n else:\n prefix = value[:2]\n temp = value[2:]\n negate = ''\n\n if prefix != 'is':\n raise ValueError(\"Does not start with 'is'!\")\n\n script_obj = unidata.ascii_script_extensions if is_bytes else unidata.unicode_script_extensions\n bin_obj = unidata.ascii_binary if is_bytes else unidata.unicode_binary\n\n value = negate + unidata.unicode_alias['script'].get(temp, temp)\n\n if value not in 
script_obj:\n value = negate + unidata.unicode_alias['binary'].get(temp, temp)\n obj = bin_obj\n else:\n obj = script_obj\n\n return obj[value]", "response": "Get shortcut for SC or Binary property."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_in_property(value, is_bytes=False):\n\n if value.startswith('^'):\n prefix = value[1:3]\n temp = value[3:]\n negate = '^'\n else:\n prefix = value[:2]\n temp = value[2:]\n negate = ''\n\n if prefix != 'in':\n raise ValueError(\"Does not start with 'in'!\")\n\n value = negate + unidata.unicode_alias['block'].get(temp, temp)\n obj = unidata.ascii_blocks if is_bytes else unidata.unicode_blocks\n\n return obj[value]", "response": "Get shortcut for Block property."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_unicode_property(value, prop=None, is_bytes=False):\n\n if prop is not None:\n prop = unidata.unicode_alias['_'].get(prop, prop)\n try:\n if prop == 'generalcategory':\n return get_gc_property(value, is_bytes)\n elif prop == 'script':\n return get_script_property(value, is_bytes)\n elif prop == 'scriptextensions':\n return get_script_extension_property(value, is_bytes)\n elif prop == 'block':\n return get_block_property(value, is_bytes)\n elif prop == 'binary':\n return get_binary_property(value, is_bytes)\n elif prop == 'bidiclass':\n return get_bidi_property(value, is_bytes)\n elif prop == 'bidipairedbrackettype':\n return get_bidi_paired_bracket_type_property(value, is_bytes)\n elif prop == 'age':\n return get_age_property(value, is_bytes)\n elif prop == 'eastasianwidth':\n return get_east_asian_width_property(value, is_bytes)\n elif PY35 and prop == 'indicpositionalcategory':\n return get_indic_positional_category_property(value, is_bytes)\n elif not PY35 and prop == 'indicmatracategory':\n return get_indic_positional_category_property(value, is_bytes)\n elif prop == 'indicsyllabiccategory':\n return 
get_indic_syllabic_category_property(value, is_bytes)\n elif prop == 'hangulsyllabletype':\n return get_hangul_syllable_type_property(value, is_bytes)\n elif prop == 'decompositiontype':\n return get_decomposition_type_property(value, is_bytes)\n elif prop == 'canonicalcombiningclass':\n return get_canonical_combining_class_property(value, is_bytes)\n elif prop == 'numerictype':\n return get_numeric_type_property(value, is_bytes)\n elif prop == 'numericvalue':\n return get_numeric_value_property(value, is_bytes)\n elif prop == 'joiningtype':\n return get_joining_type_property(value, is_bytes)\n elif prop == 'joininggroup':\n return get_joining_group_property(value, is_bytes)\n elif prop == 'graphemeclusterbreak':\n return get_grapheme_cluster_break_property(value, is_bytes)\n elif prop == 'linebreak':\n return get_line_break_property(value, is_bytes)\n elif prop == 'sentencebreak':\n return get_sentence_break_property(value, is_bytes)\n elif prop == 'wordbreak':\n return get_word_break_property(value, is_bytes)\n elif prop == 'nfcquickcheck':\n return get_nfc_quick_check_property(value, is_bytes)\n elif prop == 'nfdquickcheck':\n return get_nfd_quick_check_property(value, is_bytes)\n elif prop == 'nfkcquickcheck':\n return get_nfkc_quick_check_property(value, is_bytes)\n elif prop == 'nfkdquickcheck':\n return get_nfkd_quick_check_property(value, is_bytes)\n elif PY37 and prop == 'verticalorientation':\n return get_vertical_orientation_property(value, is_bytes)\n else:\n raise ValueError('Invalid Unicode property!')\n except Exception:\n raise ValueError('Invalid Unicode property!')\n\n try:\n return get_gc_property(value, is_bytes)\n except Exception:\n pass\n\n try:\n return get_script_extension_property(value, is_bytes)\n except Exception:\n pass\n\n try:\n return get_block_property(value, is_bytes)\n except Exception:\n pass\n\n try:\n return get_binary_property(value, is_bytes)\n except Exception:\n pass\n\n try:\n return get_is_property(value, is_bytes)\n 
except Exception:\n pass\n\n try:\n return get_in_property(value, is_bytes)\n except Exception:\n pass\n\n raise ValueError('Invalid Unicode property!')", "response": "Retrieve the Unicode category from the table."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _apply_search_backrefs(pattern, flags=0):\n\n if isinstance(pattern, (str, bytes)):\n re_verbose = bool(VERBOSE & flags)\n re_unicode = None\n if bool((ASCII | LOCALE) & flags):\n re_unicode = False\n elif bool(UNICODE & flags):\n re_unicode = True\n if not (flags & DEBUG):\n pattern = _cached_search_compile(pattern, re_verbose, re_unicode, type(pattern))\n else: # pragma: no cover\n pattern = _bre_parse._SearchParser(pattern, re_verbose, re_unicode).parse()\n elif isinstance(pattern, Bre):\n if flags:\n raise ValueError(\"Cannot process flags argument with a compiled pattern\")\n pattern = pattern._pattern\n elif isinstance(pattern, (_RE_TYPE, Bre)):\n if flags:\n raise ValueError(\"Cannot process flags argument with a compiled pattern!\")\n else:\n raise TypeError(\"Not a string or compiled pattern!\")\n return pattern", "response": "Apply the search backrefs to the search pattern."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncompiles both the search or search and replace into one object.", "response": "def compile(pattern, flags=0, auto_compile=None): # noqa A001\n \"\"\"Compile both the search or search and replace into one object.\"\"\"\n\n if isinstance(pattern, Bre):\n if auto_compile is not None:\n raise ValueError(\"Cannot compile Bre with a different auto_compile!\")\n elif flags != 0:\n raise ValueError(\"Cannot process flags argument with a compiled pattern\")\n return pattern\n else:\n if auto_compile is None:\n auto_compile = True\n\n return Bre(compile_search(pattern, flags), auto_compile)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, 
write the documentation\ndef compile_replace(pattern, repl, flags=0):\n\n call = None\n if pattern is not None and isinstance(pattern, _RE_TYPE):\n if isinstance(repl, (str, bytes)):\n if not (pattern.flags & DEBUG):\n call = _cached_replace_compile(pattern, repl, flags, type(repl))\n else: # pragma: no cover\n call = _bre_parse._ReplaceParser().parse(pattern, repl, bool(flags & FORMAT))\n elif isinstance(repl, ReplaceTemplate):\n if flags:\n raise ValueError(\"Cannot process flags argument with a ReplaceTemplate!\")\n if repl.pattern_hash != hash(pattern):\n raise ValueError(\"Pattern hash doesn't match hash in compiled replace!\")\n call = repl\n else:\n raise TypeError(\"Not a valid type!\")\n else:\n raise TypeError(\"Pattern must be a compiled regular expression!\")\n return call", "response": "Construct a method that can be used as a replace method for `sub`, `subn`, etc."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef findall(pattern, string, *args, **kwargs):\n\n flags = args[2] if len(args) > 2 else kwargs.get('flags', 0)\n return _re.findall(_apply_search_backrefs(pattern, flags), string, *args, **kwargs)", "response": "Apply findall after applying backrefs."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\napply sub after applying backrefs.", "response": "def sub(pattern, repl, string, *args, **kwargs):\n \"\"\"Apply `sub` after applying backrefs.\"\"\"\n\n flags = args[4] if len(args) > 4 else kwargs.get('flags', 0)\n is_replace = _is_replace(repl)\n is_string = isinstance(repl, (str, bytes))\n if is_replace and repl.use_format:\n raise ValueError(\"Compiled replace cannot be a format object!\")\n\n pattern = compile_search(pattern, flags)\n return _re.sub(\n pattern, (compile_replace(pattern, repl) if is_replace or is_string else repl), string, *args, **kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef 
add(request, kind, method, *args):\n request.session.setdefault(_key_name(kind), []).append({\n \"method\": method,\n \"args\": args\n })", "response": "Add a new record to the session."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nextracting the version number from the code.", "response": "def get_version():\n \"\"\" Extract the version number from the code. \"\"\"\n here = os.path.abspath(os.path.dirname(__file__))\n jbxapi_file = os.path.join(here, \"jbxapi.py\")\n\n with open(jbxapi_file) as f:\n content = f.read()\n match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", content, re.M)\n\n if not match:\n raise RuntimeError(\"Unable to find version string.\")\n return match.group(1)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfetching a list of all analyses.", "response": "def analysis_list(self):\n \"\"\"\n Fetch a list of all analyses.\n \"\"\"\n response = self._post(self.apiurl + '/v2/analysis/list', data={'apikey': self.apikey})\n\n return self._raise_or_extract(response)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsubmits a sample and returns the submission id.", "response": "def submit_sample(self, sample, cookbook=None, params={}, _extra_params={}):\n \"\"\"\n Submit a sample and returns the submission id.\n\n Parameters:\n sample: The sample to submit. Needs to be a file-like object or a tuple in\n the shape (filename, file-like object).\n cookbook: Uploads a cookbook together with the sample. Needs to be a file-like object or a\n tuple in the shape (filename, file-like object)\n params: Customize the sandbox parameters. 
They are described in more detail\n in the default submission parameters.\n\n Example:\n\n import jbxapi\n\n joe = jbxapi.JoeSandbox()\n with open(\"sample.exe\", \"rb\") as f:\n joe.submit_sample(f, params={\"systems\": [\"w7\"]})\n\n Example:\n\n import io, jbxapi\n\n joe = jbxapi.JoeSandbox()\n\n cookbook = io.BytesIO(b\"cookbook content\")\n with open(\"sample.exe\", \"rb\") as f:\n joe.submit_sample(f, cookbook=cookbook)\n \"\"\"\n self._check_user_parameters(params)\n\n files = {'sample': sample}\n if cookbook:\n files['cookbook'] = cookbook\n\n return self._submit(params, files, _extra_params=_extra_params)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsubmit a sample at a given URL for analysis.", "response": "def submit_sample_url(self, url, params={}, _extra_params={}):\n \"\"\"\n Submit a sample at a given URL for analysis.\n \"\"\"\n self._check_user_parameters(params)\n params = copy.copy(params)\n params['sample-url'] = url\n return self._submit(params, _extra_params=_extra_params)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsubmits a website for analysis.", "response": "def submit_url(self, url, params={}, _extra_params={}):\n \"\"\"\n Submit a website for analysis.\n \"\"\"\n self._check_user_parameters(params)\n params = copy.copy(params)\n params['url'] = url\n return self._submit(params, _extra_params=_extra_params)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef server_online(self):\n response = self._post(self.apiurl + '/v2/server/online', data={'apikey': self.apikey})\n\n return self._raise_or_extract(response)", "response": "Returns True if the Joe Sandbox servers are running False otherwise."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef analysis_info(self, webid):\n response = self._post(self.apiurl + 
\"/v2/analysis/info\", data={'apikey': self.apikey, 'webid': webid})\n\n        return self._raise_or_extract(response)", "response": "Show the status and most important attributes of an analysis."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndownload a resource for an analysis. E.g. the full report, binaries, screenshots.", "response": "def analysis_download(self, webid, type, run=None, file=None):\n        \"\"\"\n        Download a resource for an analysis. E.g. the full report, binaries, screenshots.\n        The full list of resources can be found in our API documentation.\n\n        When `file` is given, the return value is the filename specified by the server,\n        otherwise it's a tuple of (filename, bytes).\n\n        Parameters:\n            webid: the webid of the analysis\n            type: the report type, e.g. 'html', 'bins'\n            run: specify the run. If it is None, let Joe Sandbox pick one\n            file: a writeable file-like object (When omitted, the method returns\n                  the data as a bytes object.)\n\n        Example:\n\n            json_report, name = joe.analysis_download(123456, 'jsonfixed')\n\n        Example:\n\n            with open(\"full_report.html\", \"wb\") as f:\n                name = joe.analysis_download(123456, \"html\", file=f)\n        \"\"\"\n\n        # when no file is specified, we create our own\n        if file is None:\n            _file = io.BytesIO()\n        else:\n            _file = file\n\n        data = {\n            'apikey': self.apikey,\n            'webid': webid,\n            'type': type,\n            'run': run,\n        }\n\n        response = self._post(self.apiurl + \"/v2/analysis/download\", data=data, stream=True)\n\n        try:\n            filename = response.headers[\"Content-Disposition\"].split(\"filename=\")[1][1:-2]\n        except Exception as e:\n            filename = type\n\n        # do standard error handling when encountering an error (i.e.
throw an exception)\n if not response.ok:\n self._raise_or_extract(response)\n raise RuntimeError(\"Unreachable because statement above should raise.\")\n\n try:\n for chunk in response.iter_content(1024):\n _file.write(chunk)\n except requests.exceptions.RequestException as e:\n raise ConnectionError(e)\n\n # no user file means we return the content\n if file is None:\n return (filename, _file.getvalue())\n else:\n return filename"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsearch for analyses that match the given query.", "response": "def analysis_search(self, query):\n \"\"\"\n Lists the webids of the analyses that match the given query.\n\n Searches in MD5, SHA1, SHA256, filename, cookbook name, comment, url and report id.\n \"\"\"\n response = self._post(self.apiurl + \"/v2/analysis/search\", data={'apikey': self.apikey, 'q': query})\n\n return self._raise_or_extract(response)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef server_systems(self):\n response = self._post(self.apiurl + \"/v2/server/systems\", data={'apikey': self.apikey})\n\n return self._raise_or_extract(response)", "response": "Retrieve a list of available systems."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nshowing information about the account.", "response": "def account_info(self):\n \"\"\"\n Only available on Joe Sandbox Cloud\n\n Show information about the account.\n \"\"\"\n response = self._post(self.apiurl + \"/v2/account/info\", data={'apikey': self.apikey})\n\n return self._raise_or_extract(response)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nquerying information about the server.", "response": "def server_info(self):\n \"\"\"\n Query information about the server.\n \"\"\"\n response = self._post(self.apiurl + \"/v2/server/info\", data={'apikey': self.apikey})\n\n return self._raise_or_extract(response)"} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef server_lia_countries(self):\n        response = self._post(self.apiurl + \"/v2/server/lia_countries\", data={'apikey': self.apikey})\n\n        return self._raise_or_extract(response)", "response": "Show the available localized internet anonymization countries."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef server_languages_and_locales(self):\n        response = self._post(self.apiurl + \"/v2/server/languages_and_locales\", data={'apikey': self.apikey})\n\n        return self._raise_or_extract(response)", "response": "Show the available languages and locales"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwrapping around requests.post which converts errors to ConnectionError and retries a few times", "response": "def _post(self, url, data=None, **kwargs):\n        \"\"\"\n        Wrapper around requests.post which\n\n        (a) always inserts a timeout\n        (b) converts errors to ConnectionError\n        (c) re-tries a few times\n        (d) converts file names to ASCII\n        \"\"\"\n\n        # Remove non-ASCII characters from filenames due to a limitation of the combination of\n        # urllib3 (via python-requests) and our server\n        # https://github.com/requests/requests/issues/2117\n        # Internal Ticket #3090\n        if \"files\" in kwargs and kwargs[\"files\"] is not None:\n            acceptable_chars = \"0123456789\" + \"abcdefghijklmnopqrstuvwxyz\" + \\\n                               \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\" + \" _-.,()[]{}\"\n            for param_name, fp in kwargs[\"files\"].items():\n                if isinstance(fp, (tuple, list)):\n                    filename, fp = fp\n                else:\n                    filename = requests.utils.guess_filename(fp) or param_name\n\n                def encode(char):\n                    try:\n                        if char in acceptable_chars:\n                            return char\n                    except UnicodeDecodeError:\n                        pass\n                    return \"x{:02x}\".format(ord(char))\n                filename = \"\".join(encode(x) for x in filename)\n\n                kwargs[\"files\"][param_name] = (filename, fp)\n\n        for i in itertools.count(1):\n            try:\n                return
self.session.post(url, data=data, timeout=self.timeout, **kwargs)\n            except requests.exceptions.Timeout as e:\n                # exhausted all retries\n                if i >= self.retries:\n                    raise ConnectionError(e)\n            except requests.exceptions.RequestException as e:\n                raise ConnectionError(e)\n\n            # exponential backoff\n            max_backoff = 4 ** i / 10  # .4, 1.6, 6.4, 25.6, ...\n            time.sleep(random.uniform(0, max_backoff))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _check_user_parameters(self, user_parameters):\n        if not user_parameters:\n            return\n\n        # sanity check against typos\n        for key in user_parameters:\n            if key not in submission_defaults:\n                raise ValueError(\"Unknown parameter {0}\".format(key))", "response": "Checks that the user_parameters dict contains only known keys and raises a ValueError on any unknown key."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nraise an exception if the response indicates an API error.", "response": "def _raise_or_extract(self, response):\n        \"\"\"\n        Raises an exception if the response indicates an API error.\n\n        Otherwise returns the object at the 'data' key of the API response.\n        \"\"\"\n\n        try:\n            data = response.json()\n        except ValueError:\n            raise JoeException(\"The server responded with an unexpected format ({}). Is the API url correct?\".format(response.status_code))\n\n        try:\n            if response.ok:\n                return data['data']\n            else:\n                error = data['errors'][0]\n                raise ApiError(error)\n        except (KeyError, TypeError):\n            raise JoeException(\"Unexpected data ({}). Is the API url correct?\".
format(response.status_code))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _parse_os_release_content(lines):\n        props = {}\n        lexer = shlex.shlex(lines, posix=True)\n        lexer.whitespace_split = True\n\n        # The shlex module defines its `wordchars` variable using literals,\n        # making it dependent on the encoding of the Python source file.\n        # In Python 2.6 and 2.7, the shlex source file is encoded in\n        # 'iso-8859-1', and the `wordchars` variable is defined as a byte\n        # string. This causes a UnicodeDecodeError to be raised when the\n        # parsed content is a unicode object. The following fix resolves that\n        # (... but it should be fixed in shlex...):\n        if sys.version_info[0] == 2 and isinstance(lexer.wordchars, bytes):\n            lexer.wordchars = lexer.wordchars.decode('iso-8859-1')\n\n        tokens = list(lexer)\n        for token in tokens:\n            # At this point, all shell-like parsing has been done (i.e.\n            # comments processed, quotes and backslash escape sequences\n            # processed, multi-line values assembled, trailing newlines\n            # stripped, etc.), so the tokens are now either:\n            # * variable assignments: var=value\n            # * commands or their arguments (not allowed in os-release)\n            if '=' in token:\n                k, v = token.split('=', 1)\n                if isinstance(v, bytes):\n                    v = v.decode('utf-8')\n                props[k.lower()] = v\n            else:\n                # Ignore any tokens that are not variable assignments\n                pass\n\n        if 'version_codename' in props:\n            # os-release added a version_codename field. Use that in\n            # preference to anything else. Note that some distros purposefully\n            # do not have code names.
They should be setting\n            # version_codename=\"\"\n            props['codename'] = props['version_codename']\n        elif 'ubuntu_codename' in props:\n            # Same as above but a non-standard field name used on older Ubuntus\n            props['codename'] = props['ubuntu_codename']\n        elif 'version' in props:\n            # If there is no version_codename, parse it from the version\n            codename = re.search(r'(\\(\\D+\\))|,(\\s+)?\\D+', props['version'])\n            if codename:\n                codename = codename.group()\n                codename = codename.strip('()')\n                codename = codename.strip(',')\n                codename = codename.strip()\n                # codename appears within parentheses.\n                props['codename'] = codename\n\n        return props", "response": "Parse the lines of an os-release file and return a dictionary containing all information items."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_a10_device_instance(self, context, a10_device_instance):\n        LOG.debug(\"A10DeviceInstancePlugin.create(): a10_device_instance=%s\", a10_device_instance)\n\n        config = a10_config.A10Config()\n        vthunder_defaults = config.get_vthunder_config()\n\n        imgr = instance_manager.InstanceManager.from_config(config, context)\n\n        dev_instance = common_resources.remove_attributes_not_specified(\n            a10_device_instance.get(resources.RESOURCE))\n\n        # Create the instance with specified defaults.\n        vthunder_config = vthunder_defaults.copy()\n        vthunder_config.update(_convert(dev_instance, _API, _VTHUNDER_CONFIG))\n        instance = imgr.create_device_instance(vthunder_config, dev_instance.get(\"name\"))\n\n        db_record = {}\n        db_record.update(_convert(vthunder_config, _VTHUNDER_CONFIG, _DB))\n        db_record.update(_convert(dev_instance, _API, _DB))\n        db_record.update(_convert(instance, _INSTANCE, _DB))\n\n        # If success, return the created DB record\n        # Else, raise an exception because that's what we would do anyway\n        db_instance = super(A10DeviceInstancePlugin, self).create_a10_device_instance(\n            context, {resources.RESOURCE: db_record})\n\n
return _make_api_dict(db_instance)", "response": "Attempt to create an instance using the neutron context"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef vport_meta(self, vip):\n        vport_meta = self.meta(vip, 'vport', None)\n        if vport_meta is None:\n            vport_meta = self.meta(vip, 'port', {})\n        return vport_meta", "response": "Get the vport meta no matter which name was used"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef apply_template(template, *args, **kw):\n    if six.callable(template):\n        return template(*args, **kw)\n    if isinstance(template, six.string_types):\n        return template\n    if isinstance(template, collections.Mapping):\n        return template.__class__((k, apply_template(v, *args, **kw)) for k, v in template.items())\n    if isinstance(template, collections.Iterable):\n        return template.__class__(apply_template(v, *args, **kw) for v in template)\n    return template", "response": "Applies every callable in any Mapping or Iterable"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nperforms initialization of system-wide settings", "response": "def initialize_vthunder(a10_cfg, device_cfg, client):\n    \"\"\"Perform initialization of system-wide settings\"\"\"\n\n    vth = a10_cfg.get_vthunder_config()\n    initialize_interfaces(vth, device_cfg, client)\n    initialize_dns(vth, device_cfg, client)\n    initialize_licensing(vth, device_cfg, client)\n    initialize_sflow(vth, device_cfg, client)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _plumb_port(self, server, network_id, wrong_ips):\n\n        for attached_interface in server.interface_list():\n            if attached_interface.net_id == network_id:\n                if any(map(lambda x: x['ip_address'] in wrong_ips, attached_interface.fixed_ips)):\n                    continue\n                return attached_interface\n\n        return server.interface_attach(None, network_id, None)",
"response": "Look for an existing port on the network_id. Add one if it doesn't exist."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nallocating an IP from the specified subnet and creating a port", "response": "def allocate_ip_for_subnet(self, subnet_id, mac, port_id):\n        \"\"\"Allocates an IP from the specified subnet and creates a port\"\"\"\n        # Get an available IP and mark it as used before someone else does\n        # If there's no IP, log it and return an error\n        # If we successfully get an IP, create a port with the specified MAC and device data\n        # If port creation fails, deallocate the IP\n        subnet = self.get_subnet(subnet_id)\n        ip, mask, port_id = self.a10_allocate_ip_from_dhcp_range(subnet, \"vlan\", mac, port_id)\n        return ip, mask, port_id"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsearching for an available IP.addr from unallocated nmodels.IPAllocationPool range. Returns the address, the subnet cidr, and the port id.", "response": "def a10_allocate_ip_from_dhcp_range(self, subnet, interface_id, mac, port_id):\n        \"\"\"Search for an available IP.addr from unallocated nmodels.IPAllocationPool range.\n        If no addresses are available then an error is raised.
Returns the address as a string.\n        This search is conducted by a difference of the nmodels.IPAllocationPool set_a\n        and the current IP allocations.\n        \"\"\"\n        subnet_id = subnet[\"id\"]\n        network_id = subnet[\"network_id\"]\n\n        iprange_result = self.get_ipallocationpool_by_subnet_id(subnet_id)\n        ip_in_use_list = [x.ip_address for x in self.get_ipallocations_by_subnet_id(subnet_id)]\n\n        range_begin, range_end = iprange_result.first_ip, iprange_result.last_ip\n        ip_address = IPHelpers.find_unused_ip(range_begin, range_end, ip_in_use_list)\n\n        if not ip_address:\n            msg = \"Cannot allocate from subnet {0}\".format(subnet)\n            LOG.error(msg)\n            # TODO(mdurrant) - Raise neutron exception\n            raise Exception\n\n        mark_in_use = {\n            \"ip_address\": ip_address,\n            \"network_id\": network_id,\n            \"port_id\": port_id,\n            \"subnet_id\": subnet[\"id\"]\n        }\n\n        self.create_ipallocation(mark_in_use)\n\n        return ip_address, subnet[\"cidr\"], mark_in_use[\"port_id\"]"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves a pool association from a context", "response": "def _dissociate(self, c, context, hm, pool_id):\n        \"\"\"Remove a pool association\"\"\"\n        pool_name = self._pool_name(context, pool_id)\n        c.client.slb.service_group.update(pool_name, health_monitor=\"\",\n                                          health_check_disable=True)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef dissociate(self, c, context, hm, pool_id):\n\n        self._dissociate(c, context, hm, pool_id)\n        pools = hm.get(\"pools\", [])\n        if not any(p for p in pools if p.get(\"pool_id\") != pool_id):\n            self._delete_unused(c, context, hm)", "response": "Remove a pool association and the healthmonitor if it's the last one"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndeleting a healthmonitor and ALL its pool associations", "response": "def _delete(self, c, context, hm):\n        \"\"\"Delete a healthmonitor and ALL its pool associations\"\"\"
pools = hm.get(\"pools\", [])\n\n        for pool in pools:\n            pool_id = pool.get(\"pool_id\")\n            self._dissociate(c, context, hm, pool_id)\n\n        self._delete_unused(c, context, hm)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalculating the fitness of the current agent.", "response": "def calculate_fitness(self):\n        \"\"\"Calculate your fitness.\"\"\"\n        if self.fitness is not None:\n            raise Exception(\n                \"You are calculating the fitness of agent {}, \".format(self.id)\n                + \"but they already have a fitness\"\n            )\n\n        said_blue = self.infos(type=Meme)[0].contents == \"blue\"\n\n        proportion = float(\n            max(\n                self.network.nodes(type=RogersEnvironment)[0].infos(),\n                key=attrgetter(\"id\"),\n            ).contents\n        )\n        self.proportion = proportion\n        is_blue = proportion > 0.5\n\n        if said_blue is is_blue:\n            self.score = 1\n        else:\n            self.score = 0\n\n        is_asocial = self.infos(type=LearningGene)[0].contents == \"asocial\"\n\n        e = 2\n        b = 1\n        c = 0.3 * b\n        baseline = c + 0.0001\n\n        self.fitness = (baseline + self.score * b - is_asocial * c) ** e"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setup(self):\n        if not self.networks():\n            super(Bartlett1932, self).setup()\n            for net in self.networks():\n                self.models.WarOfTheGhostsSource(network=net)", "response": "Setup the networks.\n\n        Setup only does stuff if there are no networks, this is so it only\n        runs once at the start of the experiment. It first calls the same\n        function in the super (see experiments.py in dallinger).
Then it adds a\n source to each network."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a node to the chain and receive transmissions.", "response": "def add_node_to_network(self, node, network):\n \"\"\"Add node to the chain and receive transmissions.\"\"\"\n network.add_node(node)\n parents = node.neighbors(direction=\"from\")\n if len(parents):\n parent = parents[0]\n parent.transmit()\n node.receive()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef recruit(self):\n if self.networks(full=False):\n self.recruiter.recruit(n=1)\n else:\n self.recruiter.close_recruitment()", "response": "Recruit one participant at a time until all networks are full."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef participate(self):\n try:\n logger.info(\"Entering participate method\")\n ready = WebDriverWait(self.driver, 10).until(\n EC.element_to_be_clickable((By.ID, \"finish-reading\"))\n )\n stimulus = self.driver.find_element_by_id(\"stimulus\")\n story = stimulus.find_element_by_id(\"story\")\n story_text = story.text\n logger.info(\"Stimulus text:\")\n logger.info(story_text)\n ready.click()\n submit = WebDriverWait(self.driver, 10).until(\n EC.element_to_be_clickable((By.ID, \"submit-response\"))\n )\n textarea = WebDriverWait(self.driver, 10).until(\n EC.element_to_be_clickable((By.ID, \"reproduction\"))\n )\n textarea.clear()\n text = self.transform_text(story_text)\n logger.info(\"Transformed text:\")\n logger.info(text)\n textarea.send_keys(text)\n submit.click()\n return True\n except TimeoutException:\n return False", "response": "Finish reading and send text"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate new infos on demand.", "response": "def create_information(self):\n \"\"\"Create new infos on demand.\"\"\"\n info = self._info_type()(origin=self, 
contents=self._contents())\n return info"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef setup(self):\n if not self.networks():\n super(IteratedDrawing, self).setup()\n for net in self.networks():\n self.models.DrawingSource(network=net)", "response": "Setup the networks.\n\n Setup only does stuff if there are no networks, this is so it only\n runs once at the start of the experiment. It first calls the same\n function in the super (see experiments.py in dallinger). Then it adds a\n source to each network."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setup(self):\n super(RogersExperiment, self).setup()\n\n for net in random.sample(self.networks(role=\"experiment\"), self.catch_repeats):\n net.role = \"catch\"\n\n for net in self.networks():\n source = self.models.RogersSource(network=net)\n source.create_information()\n net.max_size = net.max_size + 1 # make room for environment node.\n env = self.models.RogersEnvironment(network=net)\n env.proportion = self.color_proportion_for_network(net)\n env.create_information()", "response": "Create the structure of the Rogers sources and environments."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new network.", "response": "def create_network(self):\n \"\"\"Create a new network.\"\"\"\n return DiscreteGenerational(\n generations=self.generations,\n generation_size=self.generation_size,\n initial_source=True,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmakes a new node for participants.", "response": "def create_node(self, network, participant):\n \"\"\"Make a new node for participants.\"\"\"\n return self.models.RogersAgent(network=network, participant=participant)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef submission_successful(self, participant):\n 
num_approved = len(Participant.query.filter_by(status=\"approved\").all())\n        current_generation = participant.nodes()[0].generation\n        if (\n            num_approved % self.generation_size == 0\n            and (current_generation % 10 + 1) == 0\n        ):\n            for e in self.models.RogersEnvironment.query.all():\n                e.step()", "response": "Run when a participant submits successfully."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrecruiting participants if necessary.", "response": "def recruit(self):\n        \"\"\"Recruit participants if necessary.\"\"\"\n        num_approved = len(Participant.query.filter_by(status=\"approved\").all())\n        end_of_generation = num_approved % self.generation_size == 0\n        complete = num_approved >= (self.generations * self.generation_size)\n        if complete:\n            self.log(\"All networks full: closing recruitment\", \"-----\")\n            self.recruiter.close_recruitment()\n        elif end_of_generation:\n            self.log(\"generation finished, recruiting another\")\n            self.recruiter.recruit(n=self.generation_size)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncalculates a participant's bonus.", "response": "def bonus(self, participant):\n        \"\"\"Calculate a participant's bonus.\"\"\"\n        scores = [\n            n.score for n in participant.nodes() if n.network.role == \"experiment\"\n        ]\n        average = float(sum(scores)) / float(len(scores))\n\n        bonus = round(max(0.0, ((average - 0.5) * 2)) * self.bonus_payment, 2)\n        return bonus"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef attention_check(self, participant=None):\n        if self.catch_repeats == 0:\n            return True\n\n        scores = [n.score for n in participant.nodes() if n.network.role == \"catch\"]\n        avg = float(sum(scores)) / float(len(scores))\n        return avg >= self.min_acceptable_performance", "response": "Check a participant paid attention."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef
size_on_copy(root=\".\"):\n    total_size = 0\n    exclusions = exclusion_policy()\n    for dirpath, dirnames, filenames in os.walk(root, topdown=True):\n        current_exclusions = exclusions(dirpath, os.listdir(dirpath))\n        # Modifying dirnames in-place will prune the subsequent files and\n        # directories visited by os.walk. This is only possible when\n        # topdown = True\n        dirnames[:] = [d for d in dirnames if d not in current_exclusions]\n        legit_files = [f for f in filenames if f not in current_exclusions]\n        for f in legit_files:\n            fp = os.path.join(dirpath, f)\n            total_size += os.path.getsize(fp)\n    return total_size", "response": "Return the size of the experiment directory in bytes excluding any\n    files and directories which would be excluded on copy."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nassemble the experiment_id.txt and experiment_root.js files and templates from Dallinger and return the absolute path of the new directory.", "response": "def assemble_experiment_temp_dir(config):\n    \"\"\"Create a temp directory from which to run an experiment.\n    The new directory will include:\n    - Copies of custom experiment files which don't match the exclusion policy\n    - Templates and static resources from Dallinger\n    - An export of the loaded configuration\n    - Heroku-specific files (Procfile, runtime.txt) from Dallinger\n\n    Assumes the experiment root directory is the current working directory.\n\n    Returns the absolute path of the new directory.\n    \"\"\"\n    app_id = config.get(\"id\")\n    dst = os.path.join(tempfile.mkdtemp(), app_id)\n\n    # Copy local experiment files, minus some\n    shutil.copytree(os.getcwd(), dst, ignore=exclusion_policy())\n\n    # Export the loaded configuration\n    config.write(filter_sensitive=True, directory=dst)\n\n    # Save the experiment id\n    with open(os.path.join(dst, \"experiment_id.txt\"), \"w\") as file:\n        file.write(app_id)\n\n    # Copy Dallinger files\n    dallinger_root = dallinger_package_path()\n    ensure_directory(os.path.join(dst,
\"static\", \"scripts\"))\n ensure_directory(os.path.join(dst, \"static\", \"css\"))\n frontend_files = [\n os.path.join(\"static\", \"css\", \"dallinger.css\"),\n os.path.join(\"static\", \"scripts\", \"dallinger2.js\"),\n os.path.join(\"static\", \"scripts\", \"reqwest.min.js\"),\n os.path.join(\"static\", \"scripts\", \"require.js\"),\n os.path.join(\"static\", \"scripts\", \"reconnecting-websocket.js\"),\n os.path.join(\"static\", \"scripts\", \"spin.min.js\"),\n os.path.join(\"static\", \"scripts\", \"tracker.js\"),\n os.path.join(\"static\", \"scripts\", \"store+json2.min.js\"),\n os.path.join(\"templates\", \"error.html\"),\n os.path.join(\"templates\", \"error-complete.html\"),\n os.path.join(\"templates\", \"launch.html\"),\n os.path.join(\"templates\", \"complete.html\"),\n os.path.join(\"templates\", \"questionnaire.html\"),\n os.path.join(\"templates\", \"thanks.html\"),\n os.path.join(\"templates\", \"waiting.html\"),\n os.path.join(\"static\", \"robots.txt\"),\n ]\n frontend_dirs = [os.path.join(\"templates\", \"base\")]\n for filename in frontend_files:\n src = os.path.join(dallinger_root, \"frontend\", filename)\n dst_filepath = os.path.join(dst, filename)\n if not os.path.exists(dst_filepath):\n shutil.copy(src, dst_filepath)\n for filename in frontend_dirs:\n src = os.path.join(dallinger_root, \"frontend\", filename)\n dst_filepath = os.path.join(dst, filename)\n if not os.path.exists(dst_filepath):\n shutil.copytree(src, dst_filepath)\n\n # Copy Heroku files\n heroku_files = [\"Procfile\", \"runtime.txt\"]\n for filename in heroku_files:\n src = os.path.join(dallinger_root, \"heroku\", filename)\n shutil.copy(src, os.path.join(dst, filename))\n\n if not config.get(\"clock_on\"):\n # If the clock process has been disabled, overwrite the Procfile:\n src = os.path.join(dallinger_root, \"heroku\", \"Procfile_no_clock\")\n shutil.copy(src, os.path.join(dst, \"Procfile\"))\n\n return dst"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 
script for\nchecking the experiment's python dependencies, then prepares a temp directory with files merged from the custom experiment and Dallinger. The resulting directory includes all the files necessary to deploy to Heroku.", "response": "def setup_experiment(log, debug=True, verbose=False, app=None, exp_config=None):\n \"\"\"Checks the experiment's python dependencies, then prepares a temp directory\n with files merged from the custom experiment and Dallinger.\n\n The resulting directory includes all the files necessary to deploy to\n Heroku.\n \"\"\"\n\n # Verify that the Postgres server is running.\n try:\n db.check_connection()\n except Exception:\n log(\"There was a problem connecting to the Postgres database!\")\n raise\n\n # Check that the demo-specific requirements are satisfied.\n try:\n with open(\"requirements.txt\", \"r\") as f:\n dependencies = [r for r in f.readlines() if r[:3] != \"-e \"]\n except (OSError, IOError):\n dependencies = []\n pkg_resources.require(dependencies)\n\n # Generate a unique id for this experiment.\n from dallinger.experiment import Experiment\n\n generated_uid = public_id = Experiment.make_uuid(app)\n\n # If the user provided an app name, use it everywhere that's user-facing.\n if app:\n public_id = str(app)\n\n log(\"Experiment id is \" + public_id + \"\")\n\n # Load and update the config\n config = get_config()\n if not config.ready:\n config.load() #\n if exp_config:\n config.extend(exp_config)\n config.extend({\"id\": six.text_type(generated_uid)})\n\n temp_dir = assemble_experiment_temp_dir(config)\n log(\"Deployment temp directory: {}\".format(temp_dir), chevrons=False)\n\n # Zip up the temporary directory and place it in the cwd.\n if not debug:\n log(\"Freezing the experiment package...\")\n shutil.make_archive(\n os.path.join(os.getcwd(), \"snapshots\", public_id + \"-code\"), \"zip\", temp_dir\n )\n\n return (public_id, temp_dir)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, 
explain what it does\ndef deploy_sandbox_shared_setup(log, verbose=True, app=None, exp_config=None):\n if verbose:\n out = None\n else:\n out = open(os.devnull, \"w\")\n\n config = get_config()\n if not config.ready:\n config.load()\n heroku.sanity_check(config)\n\n (id, tmp) = setup_experiment(log, debug=False, app=app, exp_config=exp_config)\n\n # Register the experiment using all configured registration services.\n if config.get(\"mode\") == \"live\":\n log(\"Registering the experiment on configured services...\")\n registration.register(id, snapshot=None)\n\n # Log in to Heroku if we aren't already.\n log(\"Making sure that you are logged in to Heroku.\")\n heroku.log_in()\n config.set(\"heroku_auth_token\", heroku.auth_token())\n log(\"\", chevrons=False)\n\n # Change to temporary directory.\n cwd = os.getcwd()\n os.chdir(tmp)\n\n # Commit Heroku-specific files to tmp folder's git repo.\n git = GitClient(output=out)\n git.init()\n git.add(\"--all\")\n git.commit('\"Experiment {}\"'.format(id))\n\n # Initialize the app on Heroku.\n log(\"Initializing app on Heroku...\")\n team = config.get(\"heroku_team\", None)\n heroku_app = HerokuApp(dallinger_uid=id, output=out, team=team)\n heroku_app.bootstrap()\n heroku_app.buildpack(\"https://github.com/stomita/heroku-buildpack-phantomjs\")\n\n # Set up add-ons and AWS environment variables.\n database_size = config.get(\"database_size\")\n redis_size = config.get(\"redis_size\")\n addons = [\n \"heroku-postgresql:{}\".format(quote(database_size)),\n \"heroku-redis:{}\".format(quote(redis_size)),\n \"papertrail\",\n ]\n if config.get(\"sentry\"):\n addons.append(\"sentry\")\n\n for name in addons:\n heroku_app.addon(name)\n\n heroku_config = {\n \"aws_access_key_id\": config[\"aws_access_key_id\"],\n \"aws_secret_access_key\": config[\"aws_secret_access_key\"],\n \"aws_region\": config[\"aws_region\"],\n \"auto_recruit\": config[\"auto_recruit\"],\n \"smtp_username\": config[\"smtp_username\"],\n \"smtp_password\": 
config[\"smtp_password\"],\n \"whimsical\": config[\"whimsical\"],\n }\n\n heroku_app.set_multiple(**heroku_config)\n\n # Wait for Redis database to be ready.\n log(\"Waiting for Redis...\")\n ready = False\n while not ready:\n try:\n r = redis.from_url(heroku_app.redis_url)\n r.set(\"foo\", \"bar\")\n ready = True\n except (ValueError, redis.exceptions.ConnectionError):\n time.sleep(2)\n\n log(\"Saving the URL of the postgres database...\")\n # Set the notification URL and database URL in the config file.\n config.extend(\n {\n \"notification_url\": heroku_app.url + \"/notifications\",\n \"database_url\": heroku_app.db_url,\n }\n )\n config.write()\n git.add(\"config.txt\")\n time.sleep(0.25)\n git.commit(\"Save URLs for database and notifications\")\n time.sleep(0.25)\n\n # Launch the Heroku app.\n log(\"Pushing code to Heroku...\")\n git.push(remote=\"heroku\", branch=\"HEAD:master\")\n\n log(\"Scaling up the dynos...\")\n size = config.get(\"dyno_type\")\n for process in [\"web\", \"worker\"]:\n qty = config.get(\"num_dynos_\" + process)\n heroku_app.scale_up_dyno(process, qty, size)\n if config.get(\"clock_on\"):\n heroku_app.scale_up_dyno(\"clock\", 1, size)\n\n time.sleep(8)\n\n # Launch the experiment.\n log(\"Launching the experiment on the remote server and starting recruitment...\")\n launch_data = _handle_launch_data(\"{}/launch\".format(heroku_app.url), error=log)\n result = {\n \"app_name\": heroku_app.name,\n \"app_home\": heroku_app.url,\n \"recruitment_msg\": launch_data.get(\"recruitment_msg\", None),\n }\n log(\"Experiment details:\")\n log(\"App home: {}\".format(result[\"app_home\"]), chevrons=False)\n log(\"Recruiter info:\")\n log(result[\"recruitment_msg\"], chevrons=False)\n\n # Return to the branch whence we came.\n os.chdir(cwd)\n\n log(\"Completed deployment of experiment \" + id + \".\")\n return result", "response": "Deploy the shared setup to Heroku and launch the app."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of 
the following Python 3 code\ndef run(self):\n self.configure()\n self.setup()\n self.update_dir()\n db.init_db(drop_all=True)\n self.out.log(\"Starting up the server...\")\n config = get_config()\n with HerokuLocalWrapper(config, self.out, verbose=self.verbose) as wrapper:\n try:\n self.execute(wrapper)\n except KeyboardInterrupt:\n pass\n finally:\n os.chdir(self.original_dir)\n self.cleanup()", "response": "Run the Heroku server."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nbe dispatched to by notify().", "response": "def new_recruit(self, match):\n \"\"\"Dispatched to by notify(). If a recruitment request has been issued,\n open a browser window for a new participant (in this case the\n person doing local debugging).\n \"\"\"\n self.out.log(\"new recruitment request!\")\n url = match.group(1)\n if self.proxy_port is not None:\n self.out.log(\"Using proxy port {}\".format(self.proxy_port))\n url = url.replace(str(get_config().get(\"base_port\")), self.proxy_port)\n new_webbrowser_profile().open(url, new=1, autoraise=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef recruitment_closed(self, match):\n if self.status_thread is None:\n self.status_thread = threading.Thread(target=self.check_status)\n self.status_thread.start()", "response": "Called when recruitment is closed."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_status(self):\n self.out.log(\"Recruitment is complete. 
Waiting for experiment completion...\")\n base_url = get_base_url()\n status_url = base_url + \"/summary\"\n while not self.complete:\n time.sleep(10)\n try:\n resp = requests.get(status_url)\n exp_data = resp.json()\n except (ValueError, requests.exceptions.RequestException):\n self.out.error(\"Error fetching experiment status.\")\n else:\n self.out.log(\"Experiment summary: {}\".format(exp_data))\n if exp_data.get(\"completed\", False):\n self.out.log(\"Experiment completed, all nodes filled.\")\n self.complete = True\n self.heroku.stop()", "response": "Check the output of the summary route until the experiment is complete."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmonitor output from heroku process.", "response": "def notify(self, message):\n \"\"\"Monitor output from heroku process.\n\n This overrides the base class's `notify`\n to make sure that we stop if the status-monitoring thread\n has determined that the experiment is complete.\n \"\"\"\n if self.complete:\n return HerokuLocalWrapper.MONITOR_STOP\n return super(DebugDeployment, self).notify(message)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef execute(self, heroku):\n db.init_db(drop_all=True)\n self.out.log(\n \"Ingesting dataset from {}...\".format(os.path.basename(self.zip_path))\n )\n data.ingest_zip(self.zip_path)\n base_url = get_base_url()\n self.out.log(\"Server is running on {}. 
Press Ctrl+C to exit.\".format(base_url))\n\n if self.exp_config.get(\"replay\"):\n self.out.log(\"Launching the experiment...\")\n time.sleep(4)\n _handle_launch_data(\"{}/launch\".format(base_url), error=self.out.error)\n heroku.monitor(listener=self.notify)\n\n # Just run until interrupted:\n while self.keep_running():\n time.sleep(1)", "response": "Start the server and loop until interrupted."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef start_replay(self, match):\n self.out.log(\"replay ready!\")\n url = match.group(1)\n new_webbrowser_profile().open(url, new=1, autoraise=True)", "response": "Open a browser window for the replay once it is ready."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load(self):\n app = util.import_app(\"dallinger.experiment_server.sockets:app\")\n if self.options.get(\"mode\") == \"debug\":\n app.debug = True\n return app", "response": "Return our application to be run."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef perturbed_contents(self):\n animal = json.loads(self.contents)\n\n for prop, prop_range in self.properties.items():\n range = prop_range[1] - prop_range[0]\n jittered = animal[prop] + random.gauss(0, 0.1 * range)\n animal[prop] = max(min(jittered, prop_range[1]), prop_range[0])\n\n return json.dumps(animal)", "response": "Perturb each property of the animal with Gaussian noise, clamped to its allowed range."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding an agent, connecting it to the previous node.", "response": "def add_node(self, node):\n \"\"\"Add an agent, connecting it to the previous node.\"\"\"\n other_nodes = [n for n in self.nodes() if n.id != node.id]\n if len(self.nodes()) > 11:\n parents = [max(other_nodes, key=attrgetter(\"creation_time\"))]\n else:\n parents = [n for n in other_nodes if isinstance(n, Source)]\n\n for parent in parents:\n parent.connect(whom=node)"} 
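The chain-network `add_node` record above selects parents in two regimes: while the network is small (11 nodes or fewer, counting the newcomer), new nodes attach to the `Source` nodes; once it grows past that, each newcomer attaches to the most recently created other node. A minimal self-contained sketch of that selection rule follows — the `Node` and `Source` classes here are hypothetical stand-ins, not Dallinger's real SQLAlchemy models:

```python
from operator import attrgetter


class Node:
    """Hypothetical stand-in for a Dallinger node."""

    def __init__(self, id, creation_time):
        self.id = id
        self.creation_time = creation_time
        self.children = []

    def connect(self, whom):
        self.children.append(whom)


class Source(Node):
    """Hypothetical stand-in for a Dallinger source."""


def pick_parents(all_nodes, node):
    # Mirrors the chain add_node() above: small networks attach the
    # newcomer to the sources, larger ones to the most recent node.
    other_nodes = [n for n in all_nodes if n.id != node.id]
    if len(all_nodes) > 11:
        return [max(other_nodes, key=attrgetter("creation_time"))]
    return [n for n in other_nodes if isinstance(n, Source)]


# Demo: with 13 nodes the "> 11" branch applies, so the newest
# other node (id 11) is chosen as the single parent.
source = Source(id=0, creation_time=0)
nodes = [source] + [Node(id=i, creation_time=i) for i in range(1, 13)]
newcomer = nodes[-1]
for parent in pick_parents(nodes, newcomer):
    parent.connect(whom=newcomer)
print([p.id for p in pick_parents(nodes, newcomer)])  # → [11]
```

In a network of three nodes, by contrast, `pick_parents` returns the sources, reproducing the record's behavior of chaining early agents off the source before the sequential chain begins.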
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_node(self, node):\n other_nodes = [n for n in self.nodes() if n.id != node.id]\n\n for n in other_nodes:\n if isinstance(n, Source):\n node.connect(direction=\"from\", whom=n)\n else:\n node.connect(direction=\"both\", whom=n)", "response": "Add a node connecting it to everyone and back."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconnecting the source to all existing other nodes.", "response": "def add_source(self, source):\n \"\"\"Connect the source to all existing other nodes.\"\"\"\n nodes = [n for n in self.nodes() if not isinstance(n, Source)]\n source.connect(whom=nodes)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a node and connect it to the center.", "response": "def add_node(self, node):\n \"\"\"Add a node and connect it to the center.\"\"\"\n nodes = self.nodes()\n\n if len(nodes) > 1:\n first_node = min(nodes, key=attrgetter(\"creation_time\"))\n first_node.connect(direction=\"both\", whom=node)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nlinking to the agent from a parent based on the parent s fitness", "response": "def add_node(self, node):\n \"\"\"Link to the agent from a parent based on the parent's fitness\"\"\"\n num_agents = len(self.nodes(type=Agent))\n curr_generation = int((num_agents - 1) / float(self.generation_size))\n node.generation = curr_generation\n\n if curr_generation == 0 and self.initial_source:\n parent = self._select_oldest_source()\n else:\n parent = self._select_fit_node_from_generation(\n node_type=type(node), generation=curr_generation - 1\n )\n\n if parent is not None:\n parent.connect(whom=node)\n parent.transmit(to_whom=node)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_node(self, node):\n for predecessor in 
self._most_recent_predecessors_to(node):\n predecessor.connect(whom=node)", "response": "Add a node connecting it to all the active nodes."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a new network by reading the configuration file.", "response": "def create_network(self):\n \"\"\"Create a new network by reading the configuration file.\"\"\"\n class_ = getattr(networks, self.network_class)\n return class_(max_size=self.quorum)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrun when a request to create an info is complete.", "response": "def info_post_request(self, node, info):\n \"\"\"Run when a request to create an info is complete.\"\"\"\n for agent in node.neighbors():\n node.transmit(what=info, to_whom=agent)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstop caching for pages wrapped in nocache decorator.", "response": "def nocache(func):\n \"\"\"Stop caching for pages wrapped in nocache decorator.\"\"\"\n\n def new_func(*args, **kwargs):\n \"\"\"No cache Wrapper.\"\"\"\n resp = make_response(func(*args, **kwargs))\n resp.cache_control.no_cache = True\n return resp\n\n return update_wrapper(new_func, func)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a list of browser exclusion rules defined in the Configuration.", "response": "def exclusions(self):\n \"\"\"Return list of browser exclusion rules defined in the Configuration.\n \"\"\"\n exclusion_rules = [\n r.strip()\n for r in self.config.get(\"browser_exclude_rule\", \"\").split(\",\")\n if r.strip()\n ]\n return exclusion_rules"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck if a user agent is supported by this browser.", "response": "def is_supported(self, user_agent_string):\n \"\"\"Check user agent against configured exclusions.\n \"\"\"\n user_agent_obj = user_agents.parse(user_agent_string)\n 
browser_ok = True\n for rule in self.exclusions:\n if rule in [\"mobile\", \"tablet\", \"touchcapable\", \"pc\", \"bot\"]:\n if (\n (rule == \"mobile\" and user_agent_obj.is_mobile)\n or (rule == \"tablet\" and user_agent_obj.is_tablet)\n or (rule == \"touchcapable\" and user_agent_obj.is_touch_capable)\n or (rule == \"pc\" and user_agent_obj.is_pc)\n or (rule == \"bot\" and user_agent_obj.is_bot)\n ):\n browser_ok = False\n elif rule in user_agent_string:\n browser_ok = False\n\n return browser_ok"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef setup(self):\n if not self.networks():\n super(FunctionLearning, self).setup()\n for net in self.networks():\n self.models.SinusoidalFunctionSource(network=net)", "response": "This is a method that is called by the experiment once at the start of the experiment. It creates the source for each network."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a node for a participant.", "response": "def create_node(self, network, participant):\n \"\"\"Create a node for a participant.\"\"\"\n return self.models.MCMCPAgent(network=network, participant=participant)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef data_check(self, participant):\n infos = participant.infos()\n return len([info for info in infos if info.chosen]) * 2 == len(infos)", "response": "Make sure each trial contains exactly one chosen info."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinishing reading and send text", "response": "def participate(self):\n \"\"\"Finish reading and send text\"\"\"\n try:\n while True:\n left = WebDriverWait(self.driver, 10).until(\n EC.element_to_be_clickable((By.ID, \"left_button\"))\n )\n right = WebDriverWait(self.driver, 10).until(\n EC.element_to_be_clickable((By.ID, \"right_button\"))\n )\n\n random.choice((left, 
right)).click()\n time.sleep(1.0)\n except TimeoutException:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef find_experiment_export(app_id):\n\n # Check locally first\n cwd = os.getcwd()\n data_filename = \"{}-data.zip\".format(app_id)\n path_to_data = os.path.join(cwd, \"data\", data_filename)\n if os.path.exists(path_to_data):\n try:\n Data(path_to_data)\n except IOError:\n from dallinger import logger\n\n logger.exception(\n \"Error reading local data file {}, checking remote.\".format(\n path_to_data\n )\n )\n else:\n return path_to_data\n\n # Get remote file instead\n path_to_data = os.path.join(tempfile.mkdtemp(), data_filename)\n\n buckets = [user_s3_bucket(), dallinger_s3_bucket()]\n\n for bucket in buckets:\n try:\n bucket.download_file(data_filename, path_to_data)\n except botocore.exceptions.ClientError:\n pass\n else:\n return path_to_data", "response": "Attempt to find a zipped export of an experiment with the ID provided\n and return its path. 
Returns None if not found."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload the data from wherever it is found.", "response": "def load(app_id):\n \"\"\"Load the data from wherever it is found.\"\"\"\n path_to_data = find_experiment_export(app_id)\n if path_to_data is None:\n raise IOError(\"Dataset {} could not be found.\".format(app_id))\n\n return Data(path_to_data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndumping the database to a temporary directory.", "response": "def dump_database(id):\n \"\"\"Dump the database to a temporary directory.\"\"\"\n\n tmp_dir = tempfile.mkdtemp()\n current_dir = os.getcwd()\n os.chdir(tmp_dir)\n\n FNULL = open(os.devnull, \"w\")\n heroku_app = HerokuApp(dallinger_uid=id, output=FNULL)\n heroku_app.backup_capture()\n heroku_app.backup_download()\n\n for filename in os.listdir(tmp_dir):\n if filename.startswith(\"latest.dump\"):\n os.rename(filename, \"database.dump\")\n\n os.chdir(current_dir)\n\n return os.path.join(tmp_dir, \"database.dump\")"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nregistering a UUID key in the global S3 bucket.", "response": "def register(id, url=None):\n \"\"\"Register a UUID key in the global S3 bucket.\"\"\"\n bucket = registration_s3_bucket()\n key = registration_key(id)\n obj = bucket.Object(key)\n obj.put(Body=url or \"missing\")\n return _generate_s3_url(bucket, key)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks if a UUID is already registered", "response": "def is_registered(id):\n \"\"\"Check if a UUID is already registered\"\"\"\n bucket = registration_s3_bucket()\n key = registration_key(id)\n found_keys = set(obj.key for obj in bucket.objects.filter(Prefix=key))\n return key in found_keys"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef copy_heroku_to_local(id):\n 
heroku_app = HerokuApp(dallinger_uid=id)\n try:\n subprocess.call([\"dropdb\", heroku_app.name])\n except Exception:\n pass\n\n heroku_app.pg_pull()", "response": "Copy a Heroku database locally."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncopy a local database to a set of CSV files.", "response": "def copy_db_to_csv(dsn, path, scrub_pii=False):\n \"\"\"Copy a local database to a set of CSV files.\"\"\"\n if \"postgresql://\" in dsn or \"postgres://\" in dsn:\n conn = psycopg2.connect(dsn=dsn)\n else:\n conn = psycopg2.connect(database=dsn, user=\"dallinger\")\n cur = conn.cursor()\n for table in table_names:\n csv_path = os.path.join(path, \"{}.csv\".format(table))\n with open(csv_path, \"w\") as f:\n sql = \"COPY {} TO STDOUT WITH CSV HEADER\".format(table)\n cur.copy_expert(sql, f)\n conn.close()\n if scrub_pii:\n _scrub_participant_table(path)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nscrubs PII from the given participant table.", "response": "def _scrub_participant_table(path_to_data):\n \"\"\"Scrub PII from the given participant table.\"\"\"\n path = os.path.join(path_to_data, \"participant.csv\")\n with open_for_csv(path, \"r\") as input, open(\"{}.0\".format(path), \"w\") as output:\n reader = csv.reader(input)\n writer = csv.writer(output)\n headers = next(reader)\n writer.writerow(headers)\n for i, row in enumerate(reader):\n row[headers.index(\"worker_id\")] = row[headers.index(\"id\")]\n row[headers.index(\"unique_id\")] = \"{}:{}\".format(\n row[headers.index(\"id\")], row[headers.index(\"assignment_id\")]\n )\n writer.writerow(row)\n\n os.rename(\"{}.0\".format(path), path)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef export(id, local=False, scrub_pii=False):\n\n print(\"Preparing to export the data...\")\n\n if local:\n db_uri = db.db_url\n else:\n db_uri = HerokuApp(id).db_uri\n\n # 
Create the data package if it doesn't already exist.\n subdata_path = os.path.join(\"data\", id, \"data\")\n try:\n os.makedirs(subdata_path)\n\n except OSError as e:\n if e.errno != errno.EEXIST or not os.path.isdir(subdata_path):\n raise\n\n # Copy in the data.\n copy_db_to_csv(db_uri, subdata_path, scrub_pii=scrub_pii)\n\n # Copy the experiment code into a code/ subdirectory.\n try:\n shutil.copyfile(\n os.path.join(\"snapshots\", id + \"-code.zip\"),\n os.path.join(\"data\", id, id + \"-code.zip\"),\n )\n\n except Exception:\n pass\n\n # Copy in the DATA readme.\n # open(os.path.join(id, \"README.txt\"), \"a\").close()\n\n # Save the experiment id.\n with open(os.path.join(\"data\", id, \"experiment_id.md\"), \"a+\") as file:\n file.write(id)\n\n # Zip data\n src = os.path.join(\"data\", id)\n dst = os.path.join(\"data\", id + \"-data.zip\")\n archive_data(id, src, dst)\n\n cwd = os.getcwd()\n data_filename = \"{}-data.zip\".format(id)\n path_to_data = os.path.join(cwd, \"data\", data_filename)\n\n # Backup data on S3 unless run locally\n if not local:\n bucket = user_s3_bucket()\n bucket.upload_file(path_to_data, data_filename)\n url = _generate_s3_url(bucket, data_filename)\n\n # Register experiment UUID with dallinger\n register(id, url)\n\n return path_to_data", "response": "Export data from an experiment."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngives a path to a zip file created with export() recreate the database with the data stored in the included. 
csv files.", "response": "def ingest_zip(path, engine=None):\n \"\"\"Given a path to a zip file created with `export()`, recreate the\n database with the data stored in the included .csv files.\n \"\"\"\n import_order = [\n \"network\",\n \"participant\",\n \"node\",\n \"info\",\n \"notification\",\n \"question\",\n \"transformation\",\n \"vector\",\n \"transmission\",\n ]\n with ZipFile(path, \"r\") as archive:\n filenames = archive.namelist()\n for name in import_order:\n filename = [f for f in filenames if name in f][0]\n model_name = name.capitalize()\n model = getattr(models, model_name)\n file = archive.open(filename)\n if six.PY3:\n file = io.TextIOWrapper(file, encoding=\"utf8\", newline=\"\")\n ingest_to_model(file, model, engine)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ingest_to_model(file, model, engine=None):\n if engine is None:\n engine = db.engine\n reader = csv.reader(file)\n columns = tuple('\"{}\"'.format(n) for n in next(reader))\n postgres_copy.copy_from(\n file, model, engine, columns=columns, format=\"csv\", HEADER=False\n )\n fix_autoincrement(model.__table__.name)", "response": "Load data from a CSV file into a SQLAlchemy model class."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_or_create_s3_bucket(s3, name):\n exists = True\n try:\n s3.meta.client.head_bucket(Bucket=name)\n except botocore.exceptions.ClientError as e:\n error_code = int(e.response[\"Error\"][\"Code\"])\n if error_code == 404:\n exists = False\n else:\n raise\n\n if not exists:\n s3.create_bucket(Bucket=name)\n\n return s3.Bucket(name)", "response": "Get an S3 bucket resource after making sure it exists"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the user s S3 bucket.", "response": "def user_s3_bucket(canonical_user_id=None):\n \"\"\"Get the user's S3 bucket.\"\"\"\n s3 = 
_s3_resource()\n if not canonical_user_id:\n canonical_user_id = _get_canonical_aws_user_id(s3)\n\n s3_bucket_name = \"dallinger-{}\".format(\n hashlib.sha256(canonical_user_id.encode(\"utf8\")).hexdigest()[0:8]\n )\n\n return _get_or_create_s3_bucket(s3, s3_bucket_name)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _contents(self):\n images = [\"owl.png\"]\n\n # We're selecting from a list of only one item here, but it's a useful\n # technique to demonstrate:\n image = random.choice(images)\n\n image_path = os.path.join(\"static\", \"stimuli\", image)\n\n uri_encoded_image = \"data:image/png;base64,\" + base64.b64encode(\n open(image_path, \"rb\").read()\n )\n\n return json.dumps({\"image\": uri_encoded_image, \"sketch\": \"\"})", "response": "Define the contents of new Infos."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef transmit_by_fitness(from_whom, to_whom=None, what=None):\n parents = from_whom\n parent_fs = [p.fitness for p in parents]\n parent_probs = [(f / (1.0 * sum(parent_fs))) for f in parent_fs]\n\n rnd = random.random()\n temp = 0.0\n for i, probability in enumerate(parent_probs):\n temp += probability\n if temp > rnd:\n parent = parents[i]\n break\n\n parent.transmit(what=what, to_whom=to_whom)", "response": "Choose a parent with probability proportional to their fitness."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_messenger(config):\n email_settings = EmailConfig(config)\n if config.get(\"mode\") == \"debug\":\n return DebugMessenger(email_settings)\n problems = email_settings.validate()\n if problems:\n logger.info(problems + \" Will log errors instead of emailing them.\")\n return DebugMessenger(email_settings)\n return EmailingMessenger(email_settings)", "response": "Return an appropriate Messenger."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in 
Python 3 to\nvalidate that the config is valid.", "response": "def validate(self):\n \"\"\"Could this config be used to send a real email?\"\"\"\n missing = []\n for k, v in self._map.items():\n attr = getattr(self, k, False)\n if not attr or attr == CONFIG_PLACEHOLDER:\n missing.append(v)\n if missing:\n return \"Missing or invalid config values: {}\".format(\n \", \".join(sorted(missing))\n )"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef sessions_scope(local_session, commit=False):\n try:\n yield local_session\n if commit:\n local_session.commit()\n logger.debug(\"DB session auto-committed as requested\")\n except Exception as e:\n # We log the exception before re-raising it, in case the rollback also\n # fails\n logger.exception(\"Exception during scoped worker transaction, \" \"rolling back.\")\n # This rollback is potentially redundant with the remove call below,\n # depending on how the scoped session is configured, but we'll be\n # explicit here.\n local_session.rollback()\n raise e\n finally:\n local_session.remove()\n logger.debug(\"Session complete, db session closed\")", "response": "Provide a transactional scope around a series of operations."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmanaging contexts and add debugging to db sessions.", "response": "def scoped_session_decorator(func):\n \"\"\"Manage contexts and add debugging to db sessions.\"\"\"\n\n @wraps(func)\n def wrapper(*args, **kwargs):\n with sessions_scope(session):\n # The session used in func comes from the funcs globals, but\n # it will be a proxied thread local var from the session\n # registry, and will therefore be identical to the one returned\n # by the context manager above.\n logger.debug(\"Running worker %s in scoped DB session\", func.__name__)\n return func(*args, **kwargs)\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef 
init_db(drop_all=False, bind=engine):\n try:\n if drop_all:\n Base.metadata.drop_all(bind=bind)\n Base.metadata.create_all(bind=bind)\n except OperationalError as err:\n msg = 'password authentication failed for user \"dallinger\"'\n if msg in str(err):\n sys.stderr.write(db_user_warning)\n raise\n\n return session", "response": "Initialize the database, optionally dropping existing tables before creating new ones."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef serialized(func):\n\n @wraps(func)\n def wrapper(*args, **kw):\n attempts = 100\n session.remove()\n while attempts > 0:\n try:\n session.connection(\n execution_options={\"isolation_level\": \"SERIALIZABLE\"}\n )\n result = func(*args, **kw)\n session.commit()\n return result\n except OperationalError as exc:\n session.rollback()\n if isinstance(exc.orig, TransactionRollbackError):\n if attempts > 0:\n attempts -= 1\n else:\n raise Exception(\n \"Could not commit serialized transaction \"\n \"after 100 attempts.\"\n )\n else:\n raise\n finally:\n session.remove()\n time.sleep(random.expovariate(0.5))\n\n return wrapper", "response": "A decorator that runs the function in a SERIALIZABLE transaction, retrying on serialization failures."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef chat(ws):\n lag_tolerance_secs = float(request.args.get(\"tolerance\", 0.1))\n client = Client(ws, lag_tolerance_secs=lag_tolerance_secs)\n client.subscribe(request.args.get(\"channel\"))\n gevent.spawn(client.heartbeat)\n client.publish()", "response": "Relay chat messages to and from clients."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef subscribe(self, client):\n self.clients.append(client)\n log(\"Subscribed client {} to channel {}\".format(client, self.name))", "response": "Subscribe a client to the channel."} {"SOURCE": "codesearchnet", "instruction": "Given the 
following Python 3 function, write the documentation\ndef listen(self):\n pubsub = redis_conn.pubsub()\n name = self.name\n if isinstance(name, six.text_type):\n name = name.encode(\"utf-8\")\n try:\n pubsub.subscribe([name])\n except ConnectionError:\n app.logger.exception(\"Could not connect to redis.\")\n log(\"Listening on channel {}\".format(self.name))\n for message in pubsub.listen():\n data = message.get(\"data\")\n if message[\"type\"] == \"message\" and data != \"None\":\n channel = message[\"channel\"]\n payload = \"{}:{}\".format(channel.decode(\"utf-8\"), data.decode(\"utf-8\"))\n for client in self.clients:\n gevent.spawn(client.send, payload)\n gevent.sleep(0.001)", "response": "Listen to all subscribed clients."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef subscribe(self, client, channel_name):\n if channel_name not in self.channels:\n self.channels[channel_name] = channel = Channel(channel_name)\n channel.start()\n\n self.channels[channel_name].subscribe(client)", "response": "Register a new client to receive messages on a channel."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef unsubscribe(self, client):\n for channel in self.channels.values():\n channel.unsubscribe(client)", "response": "Unsubscribe a client from all channels."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef send(self, message):\n if isinstance(message, bytes):\n message = message.decode(\"utf8\")\n\n with self.send_lock:\n try:\n self.ws.send(message)\n except socket.error:\n chat_backend.unsubscribe(self)", "response": "Send a single message to the websocket."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef heartbeat(self):\n while not self.ws.closed:\n gevent.sleep(HEARTBEAT_DELAY)\n gevent.spawn(self.send, \"ping\")", 
"response": "Send a ping to the websocket periodically."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef publish(self):\n while not self.ws.closed:\n # Sleep to prevent *constant* context-switches.\n gevent.sleep(self.lag_tolerance_secs)\n message = self.ws.receive()\n if message is not None:\n channel_name, data = message.split(\":\", 1)\n redis_conn.publish(channel_name, data)", "response": "Relay messages from client to redis."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a Selenium WebDriver instance of the type requested in the configuration.", "response": "def driver(self):\n \"\"\"Returns a Selenium WebDriver instance of the type requested in the\n configuration.\"\"\"\n from dallinger.config import get_config\n\n config = get_config()\n if not config.ready:\n config.load()\n driver_url = config.get(\"webdriver_url\", None)\n driver_type = config.get(\"webdriver_type\")\n driver = None\n\n if driver_url:\n capabilities = CAPABILITY_MAP.get(driver_type.lower())\n if capabilities is None:\n raise ValueError(\n \"Unsupported remote webdriver_type: {}\".format(driver_type)\n )\n driver = webdriver.Remote(\n desired_capabilities=capabilities, command_executor=driver_url\n )\n else:\n driver_class = DRIVER_MAP.get(driver_type.lower())\n if driver_class is not None:\n driver = driver_class()\n\n if driver is None:\n raise ValueError(\"Unsupported webdriver_type: {}\".format(driver_type))\n\n driver.set_window_size(1024, 768)\n logger.info(\"Created {} webdriver.\".format(driver_type))\n return driver"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef sign_up(self):\n try:\n self.driver.get(self.URL)\n logger.info(\"Loaded ad page.\")\n begin = WebDriverWait(self.driver, 10).until(\n EC.element_to_be_clickable((By.CLASS_NAME, \"btn-primary\"))\n )\n begin.click()\n logger.info(\"Clicked begin experiment 
button.\")\n WebDriverWait(self.driver, 10).until(lambda d: len(d.window_handles) == 2)\n self.driver.switch_to_window(self.driver.window_handles[-1])\n self.driver.set_window_size(1024, 768)\n logger.info(\"Switched to experiment popup.\")\n consent = WebDriverWait(self.driver, 10).until(\n EC.element_to_be_clickable((By.ID, \"consent\"))\n )\n consent.click()\n logger.info(\"Clicked consent button.\")\n participate = WebDriverWait(self.driver, 10).until(\n EC.element_to_be_clickable((By.CLASS_NAME, \"btn-success\"))\n )\n participate.click()\n logger.info(\"Clicked start button.\")\n return True\n except TimeoutException:\n logger.error(\"Error during experiment sign up.\")\n return False", "response": "Accept HIT, give consent, and start experiment."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef complete_questionnaire(self):\n logger.info(\"Complete questionnaire.\")\n difficulty = self.driver.find_element_by_id(\"difficulty\")\n difficulty.value = \"4\"\n engagement = self.driver.find_element_by_id(\"engagement\")\n engagement.value = \"3\"", "response": "Complete the standard debriefing form."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsigning off the current experiment.", "response": "def sign_off(self):\n \"\"\"Submit questionnaire and finish.\n\n This uses Selenium to click the submit button on the questionnaire\n and return to the original window.\n \"\"\"\n try:\n logger.info(\"Bot player signing off.\")\n feedback = WebDriverWait(self.driver, 20).until(\n EC.presence_of_element_located((By.ID, \"submit-questionnaire\"))\n )\n self.complete_questionnaire()\n feedback.click()\n logger.info(\"Clicked submit questionnaire button.\")\n self.driver.switch_to_window(self.driver.window_handles[0])\n self.driver.set_window_size(1024, 768)\n logger.info(\"Switched back to initial window.\")\n return True\n except TimeoutException:\n logger.error(\"Error during 
experiment sign off.\")\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsending worker status ('worker_complete' or 'worker_failed') to the experiment server.", "response": "def complete_experiment(self, status):\n \"\"\"Sends worker status ('worker_complete' or 'worker_failed')\n to the experiment server.\n \"\"\"\n url = self.driver.current_url\n p = urllib.parse.urlparse(url)\n complete_url = \"%s://%s/%s?participant_id=%s\"\n complete_url = complete_url % (p.scheme, p.netloc, status, self.participant_id)\n self.driver.get(complete_url)\n logger.info(\"Forced call to %s: %s\" % (status, complete_url))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsigning up, running the participate method, then signing off and closing the driver.", "response": "def run_experiment(self):\n \"\"\"Sign up, run the ``participate`` method, then sign off and close\n the driver.\"\"\"\n try:\n self.sign_up()\n self.participate()\n if self.sign_off():\n self.complete_experiment(\"worker_complete\")\n else:\n self.complete_experiment(\"worker_failed\")\n finally:\n self.driver.quit()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrunning the phases of interacting with the experiment.", "response": "def run_experiment(self):\n \"\"\"Runs the phases of interacting with the experiment\n including signup, participation, signoff, and recording completion.\n \"\"\"\n self.sign_up()\n self.participate()\n if self.sign_off():\n self.complete_experiment(\"worker_complete\")\n else:\n self.complete_experiment(\"worker_failed\")"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsigns up a participant for the experiment.", "response": "def sign_up(self):\n \"\"\"Signs up a participant for the experiment.\n\n This is done using a POST request to the /participant/ endpoint.\n \"\"\"\n self.log(\"Bot player signing up.\")\n self.subscribe_to_quorum_channel()\n while True:\n url = (\n 
\"{host}/participant/{self.worker_id}/\"\n \"{self.hit_id}/{self.assignment_id}/\"\n \"debug?fingerprint_hash={hash}&recruiter=bots:{bot_name}\".format(\n host=self.host,\n self=self,\n hash=uuid.uuid4().hex,\n bot_name=self.__class__.__name__,\n )\n )\n try:\n result = requests.post(url)\n result.raise_for_status()\n except RequestException:\n self.stochastic_sleep()\n continue\n\n if result.json()[\"status\"] == \"error\":\n self.stochastic_sleep()\n continue\n\n self.on_signup(result.json())\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef complete_experiment(self, status):\n self.log(\"Bot player completing experiment. Status: {}\".format(status))\n while True:\n url = \"{host}/{status}?participant_id={participant_id}\".format(\n host=self.host, participant_id=self.participant_id, status=status\n )\n try:\n result = requests.get(url)\n result.raise_for_status()\n except RequestException:\n self.stochastic_sleep()\n continue\n return result", "response": "Record worker completion status to the experiment server."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef subscribe_to_quorum_channel(self):\n from dallinger.experiment_server.sockets import chat_backend\n\n self.log(\"Bot subscribing to quorum channel.\")\n chat_backend.subscribe(self, \"quorum\")", "response": "Subscribe to the quorum channel."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef complete_questionnaire(self):\n while True:\n data = {\n \"question\": \"questionnaire\",\n \"number\": 1,\n \"response\": json.dumps(self.question_responses),\n }\n url = \"{host}/question/{self.participant_id}\".format(\n host=self.host, self=self\n )\n try:\n result = requests.post(url, data=data)\n result.raise_for_status()\n except RequestException:\n self.stochastic_sleep()\n continue\n return True", "response": 
"Complete the standard debriefing form."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_credentials(self):\n try:\n return bool(self.mturk.get_account_balance())\n except NoCredentialsError:\n raise MTurkServiceException(\"No AWS credentials set!\")\n except ClientError:\n raise MTurkServiceException(\"Invalid AWS credentials!\")\n except Exception as ex:\n raise MTurkServiceException(\n \"Error checking credentials: {}\".format(str(ex))\n )", "response": "Verifies the credentials for the key and secret combination by making a balance inquiry"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting a REST endpoint to receive notifications about the HIT.", "response": "def set_rest_notification(self, url, hit_type_id):\n \"\"\"Set a REST endpoint to receive notifications about the HIT\n The newer AWS MTurk API does not support this feature, which means we\n cannot use boto3 here. Instead, we make the call manually after\n assembling a properly signed request.\n \"\"\"\n ISO8601 = \"%Y-%m-%dT%H:%M:%SZ\"\n notification_version = \"2006-05-05\"\n API_version = \"2014-08-15\"\n data = {\n \"AWSAccessKeyId\": self.aws_key,\n \"HITTypeId\": hit_type_id,\n \"Notification.1.Active\": \"True\",\n \"Notification.1.Destination\": url,\n \"Notification.1.EventType.1\": \"AssignmentAccepted\",\n \"Notification.1.EventType.2\": \"AssignmentAbandoned\",\n \"Notification.1.EventType.3\": \"AssignmentReturned\",\n \"Notification.1.EventType.4\": \"AssignmentSubmitted\",\n \"Notification.1.EventType.5\": \"HITReviewable\",\n \"Notification.1.EventType.6\": \"HITExpired\",\n \"Notification.1.Transport\": \"REST\",\n \"Notification.1.Version\": notification_version,\n \"Operation\": \"SetHITTypeNotification\",\n \"SignatureVersion\": \"1\",\n \"Timestamp\": time.strftime(ISO8601, time.gmtime()),\n \"Version\": API_version,\n }\n query_string, signature = self._calc_old_api_signature(data)\n 
body = query_string + \"&Signature=\" + urllib.parse.quote_plus(signature)\n data[\"Signature\"] = signature\n headers = {\n \"Content-Type\": \"application/x-www-form-urlencoded; charset=UTF-8\",\n \"Content-Length\": str(len(body)),\n \"Host\": self.legacy_host,\n }\n resp = requests.post(\"https://\" + self.legacy_host, headers=headers, data=body)\n return \"True\" in resp.text"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nregistering HIT Type for this HIT and return the HIT Type's ID.", "response": "def register_hit_type(\n self, title, description, reward, duration_hours, keywords, qualifications\n ):\n \"\"\"Register HIT Type for this HIT and return the type's ID, which\n is required for creating a HIT.\n \"\"\"\n reward = str(reward)\n duration_secs = int(datetime.timedelta(hours=duration_hours).total_seconds())\n hit_type = self.mturk.create_hit_type(\n Title=title,\n Description=description,\n Reward=reward,\n AssignmentDurationInSeconds=duration_secs,\n Keywords=\",\".join(keywords),\n AutoApprovalDelayInSeconds=0,\n QualificationRequirements=qualifications,\n )\n\n return hit_type[\"HITTypeId\"]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntranslating restrictions and qualifications to boto Qualifications objects", "response": "def build_hit_qualifications(self, approve_requirement, restrict_to_usa, blacklist):\n \"\"\"Translate restrictions/qualifications to boto Qualifications objects\n\n @blacklist is a list of names for Qualifications workers must\n not already hold in order to see and accept the HIT.\n \"\"\"\n quals = [\n {\n \"QualificationTypeId\": PERCENTAGE_APPROVED_REQUIREMENT_ID,\n \"Comparator\": \"GreaterThanOrEqualTo\",\n \"IntegerValues\": [approve_requirement],\n \"RequiredToPreview\": True,\n }\n ]\n if restrict_to_usa:\n quals.append(\n {\n \"QualificationTypeId\": LOCALE_REQUIREMENT_ID,\n \"Comparator\": \"EqualTo\",\n \"LocaleValues\": [{\"Country\": \"US\"}],\n 
\"RequiredToPreview\": True,\n }\n )\n if blacklist is not None:\n for item in blacklist:\n qtype = self.get_qualification_type_by_name(item)\n if qtype:\n quals.append(\n {\n \"QualificationTypeId\": qtype[\"id\"],\n \"Comparator\": \"DoesNotExist\",\n \"RequiredToPreview\": True,\n }\n )\n return quals"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_qualification_type(self, name, description, status=\"Active\"):\n try:\n response = self.mturk.create_qualification_type(\n Name=name, Description=description, QualificationTypeStatus=status\n )\n except Exception as ex:\n if \"already created a QualificationType with this name\" in str(ex):\n raise DuplicateQualificationNameError(str(ex))\n\n return self._translate_qtype(response[\"QualificationType\"])", "response": "Create a new qualification type."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_qualification_type_by_name(self, name):\n max_fuzzy_matches_to_check = 100\n query = name.upper()\n start = time.time()\n args = {\n \"Query\": query,\n \"MustBeRequestable\": False,\n \"MustBeOwnedByCaller\": True,\n \"MaxResults\": max_fuzzy_matches_to_check,\n }\n results = self.mturk.list_qualification_types(**args)[\"QualificationTypes\"]\n # This loop is largely for tests, because there's some indexing that\n # needs to happen on MTurk for search to work:\n while not results and time.time() - start < self.max_wait_secs:\n time.sleep(1)\n results = self.mturk.list_qualification_types(**args)[\"QualificationTypes\"]\n\n if not results:\n return None\n\n qualifications = [self._translate_qtype(r) for r in results]\n if len(qualifications) > 1:\n for qualification in qualifications:\n if qualification[\"name\"].upper() == query:\n return qualification\n\n raise MTurkServiceException(\"{} was not a unique name\".format(query))\n\n return qualifications[0]", "response": "Return a 
Qualification Type by name."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef assign_qualification(self, qualification_id, worker_id, score, notify=False):\n return self._is_ok(\n self.mturk.associate_qualification_with_worker(\n QualificationTypeId=qualification_id,\n WorkerId=worker_id,\n IntegerValue=score,\n SendNotification=notify,\n )\n )", "response": "Assign a worker to a specific qualification"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef increment_qualification_score(self, name, worker_id, notify=False):\n result = self.get_current_qualification_score(name, worker_id)\n current_score = result[\"score\"] or 0\n new_score = current_score + 1\n qtype_id = result[\"qtype\"][\"id\"]\n self.assign_qualification(qtype_id, worker_id, new_score, notify)\n\n return {\"qtype\": result[\"qtype\"], \"score\": new_score}", "response": "Increment the current qualification score for a worker on a\n qualification with the provided name."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a worker's qualification score as an integer.", "response": "def get_qualification_score(self, qualification_id, worker_id):\n \"\"\"Return a worker's qualification score as an integer.\n \"\"\"\n try:\n response = self.mturk.get_qualification_score(\n QualificationTypeId=qualification_id, WorkerId=worker_id\n )\n except ClientError as ex:\n error = str(ex)\n if \"does not exist\" in error:\n raise WorkerLacksQualification(\n \"Worker {} does not have qualification {}.\".format(\n worker_id, qualification_id\n )\n )\n if \"operation can be called with a status of: Granted\" in error:\n raise RevokedQualification(\n \"Worker {} has had qualification {} revoked.\".format(\n worker_id, qualification_id\n )\n )\n\n raise MTurkServiceException(error)\n return response[\"Qualification\"][\"IntegerValue\"]"} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function to\nreturn the current score for a worker on a qualification with the provided name.", "response": "def get_current_qualification_score(self, name, worker_id):\n \"\"\"Return the current score for a worker, on a qualification with the\n provided name.\n \"\"\"\n qtype = self.get_qualification_type_by_name(name)\n if qtype is None:\n raise QualificationNotFoundException(\n 'No Qualification exists with name \"{}\"'.format(name)\n )\n try:\n score = self.get_qualification_score(qtype[\"id\"], worker_id)\n except (WorkerLacksQualification, RevokedQualification):\n score = None\n\n return {\"qtype\": qtype, \"score\": score}"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves a qualification type from the database", "response": "def dispose_qualification_type(self, qualification_id):\n \"\"\"Remove a qualification type we created\"\"\"\n return self._is_ok(\n self.mturk.delete_qualification_type(QualificationTypeId=qualification_id)\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_workers_with_qualification(self, qualification_id):\n done = False\n next_token = None\n while not done:\n if next_token is not None:\n response = self.mturk.list_workers_with_qualification_type(\n QualificationTypeId=qualification_id,\n MaxResults=MAX_SUPPORTED_BATCH_SIZE,\n Status=\"Granted\",\n NextToken=next_token,\n )\n else:\n response = self.mturk.list_workers_with_qualification_type(\n QualificationTypeId=qualification_id,\n MaxResults=MAX_SUPPORTED_BATCH_SIZE,\n Status=\"Granted\",\n )\n if response:\n for r in response[\"Qualifications\"]:\n yield {\"id\": r[\"WorkerId\"], \"score\": r[\"IntegerValue\"]}\n if \"NextToken\" in response:\n next_token = response[\"NextToken\"]\n else:\n done = True", "response": "Get workers with the given qualification."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the 
following Python 3 function doing\ndef create_hit(\n self,\n title,\n description,\n keywords,\n reward,\n duration_hours,\n lifetime_days,\n ad_url,\n notification_url,\n approve_requirement,\n max_assignments,\n us_only,\n blacklist=None,\n annotation=None,\n ):\n \"\"\"Create the actual HIT and return a dict with its useful properties.\"\"\"\n frame_height = 600\n mturk_question = self._external_question(ad_url, frame_height)\n qualifications = self.build_hit_qualifications(\n approve_requirement, us_only, blacklist\n )\n # We need a HIT_Type in order to register for REST notifications\n hit_type_id = self.register_hit_type(\n title, description, reward, duration_hours, keywords, qualifications\n )\n self.set_rest_notification(notification_url, hit_type_id)\n\n params = {\n \"HITTypeId\": hit_type_id,\n \"Question\": mturk_question,\n \"LifetimeInSeconds\": int(\n datetime.timedelta(days=lifetime_days).total_seconds()\n ),\n \"MaxAssignments\": max_assignments,\n \"UniqueRequestToken\": self._request_token(),\n }\n if annotation:\n params[\"RequesterAnnotation\"] = annotation\n\n response = self.mturk.create_hit_with_hit_type(**params)\n if \"HIT\" not in response:\n raise MTurkServiceException(\"HIT request was invalid for unknown reason.\")\n return self._translate_hit(response[\"HIT\"])", "response": "Create the HIT and return a dict with its useful properties."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef extend_hit(self, hit_id, number, duration_hours=None):\n self.create_additional_assignments_for_hit(hit_id, number)\n\n if duration_hours is not None:\n self.update_expiration_for_hit(hit_id, duration_hours)\n\n return self.get_hit(hit_id)", "response": "Extend an existing HIT and return an updated description"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef expire_hit(self, hit_id):\n try:\n 
self.mturk.update_expiration_for_hit(HITId=hit_id, ExpireAt=0)\n except Exception as ex:\n raise MTurkServiceException(\n \"Failed to expire HIT {}: {}\".format(hit_id, str(ex))\n )\n return True", "response": "Expire a HIT, which will change its status to \"Reviewable\",\n allowing it to be deleted."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngrants a bonus to the Worker.", "response": "def grant_bonus(self, assignment_id, amount, reason):\n \"\"\"Grant a bonus to the MTurk Worker.\n Issues a payment of money from your account to a Worker. To\n be eligible for a bonus, the Worker must have submitted\n results for one of your HITs, and have had those results\n approved or rejected. This payment happens separately from the\n reward you pay to the Worker when you approve the Worker's\n assignment.\n \"\"\"\n assignment = self.get_assignment(assignment_id)\n worker_id = assignment[\"worker_id\"]\n amount_str = \"{:.2f}\".format(amount)\n try:\n return self._is_ok(\n self.mturk.send_bonus(\n WorkerId=worker_id,\n BonusAmount=amount_str,\n AssignmentId=assignment_id,\n Reason=reason,\n UniqueRequestToken=self._request_token(),\n )\n )\n except ClientError as ex:\n error = \"Failed to pay assignment {} bonus of {}: {}\".format(\n assignment_id, amount_str, str(ex)\n )\n raise MTurkServiceException(error)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_assignment(self, assignment_id):\n try:\n response = self.mturk.get_assignment(AssignmentId=assignment_id)\n except ClientError as ex:\n if \"does not exist\" in str(ex):\n return None\n raise\n return self._translate_assignment(response[\"Assignment\"])", "response": "Get an assignment by ID and reformat the response."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\napproving an assignment initiates two payments from the Requester's Amazon.com account: 1. 
The Worker who submitted the results is paid the reward specified in the HIT. 2. Amazon Mechanical Turk fees are debited.", "response": "def approve_assignment(self, assignment_id):\n \"\"\"Approving an assignment initiates two payments from the\n Requester's Amazon.com account:\n 1. The Worker who submitted the results is paid\n the reward specified in the HIT.\n 2. Amazon Mechanical Turk fees are debited.\n \"\"\"\n try:\n return self._is_ok(\n self.mturk.approve_assignment(AssignmentId=assignment_id)\n )\n except ClientError as ex:\n assignment = self.get_assignment(assignment_id)\n raise MTurkServiceException(\n \"Failed to approve assignment {}, {}: {}\".format(\n assignment_id, str(assignment), str(ex)\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmake the specified directory importable as the dallinger_experiment package.", "response": "def initialize_experiment_package(path):\n \"\"\"Make the specified directory importable as the `dallinger_experiment` package.\"\"\"\n # Create __init__.py if it doesn't exist (needed for Python 2)\n init_py = os.path.join(path, \"__init__.py\")\n if not os.path.exists(init_py):\n open(init_py, \"a\").close()\n\n # Retain already set experiment module\n if sys.modules.get(\"dallinger_experiment\") is not None:\n return\n dirname = os.path.dirname(path)\n basename = os.path.basename(path)\n sys.path.insert(0, dirname)\n package = __import__(basename)\n if path not in package.__path__:\n raise Exception(\"Package was not imported from the requested path!\")\n sys.modules[\"dallinger_experiment\"] = package\n package.__package__ = \"dallinger_experiment\"\n sys.path.pop(0)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning json description of a participant.", "response": "def json_data(self):\n \"\"\"Return json description of a participant.\"\"\"\n return {\n \"type\": self.type,\n \"recruiter\": self.recruiter_id,\n \"assignment_id\": 
self.assignment_id,\n \"hit_id\": self.hit_id,\n \"mode\": self.mode,\n \"end_time\": self.end_time,\n \"base_pay\": self.base_pay,\n \"bonus\": self.bonus,\n \"status\": self.status,\n }"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a list of questions associated with this participant. If specified, type filters the questions by class.", "response": "def questions(self, type=None):\n \"\"\"Get questions associated with this participant.\n\n Return a list of questions associated with the participant. If\n specified, ``type`` filters by class.\n\n \"\"\"\n if type is None:\n type = Question\n\n if not issubclass(type, Question):\n raise TypeError(\"{} is not a valid question type.\".format(type))\n\n return type.query.filter_by(participant_id=self.id).all()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nGet all infos created by the participant's nodes.", "response": "def infos(self, type=None, failed=False):\n \"\"\"Get all infos created by the participant's nodes.\n\n Return a list of infos produced by nodes associated with the\n participant. If specified, ``type`` filters by class. By default, failed\n infos are excluded, to include only failed nodes use ``failed=True``,\n for all nodes use ``failed=all``. 
Note that failed filters the infos,\n not the nodes - infos from all nodes (whether failed or not) can be\n returned.\n\n \"\"\"\n nodes = self.nodes(failed=\"all\")\n infos = []\n for n in nodes:\n infos.extend(n.infos(type=type, failed=failed))\n return infos"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef json_data(self):\n return {\n \"number\": self.number,\n \"type\": self.type,\n \"participant_id\": self.participant_id,\n \"question\": self.question,\n \"response\": self.response,\n }", "response": "Return json description of a question."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef json_data(self):\n return {\n \"type\": self.type,\n \"max_size\": self.max_size,\n \"full\": self.full,\n \"role\": self.role,\n }", "response": "Return json description of a network."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef nodes(self, type=None, failed=False, participant_id=None):\n if type is None:\n type = Node\n\n if not issubclass(type, Node):\n raise TypeError(\"{} is not a valid node type.\".format(type))\n\n if failed not in [\"all\", False, True]:\n raise ValueError(\"{} is not a valid node failed\".format(failed))\n\n if participant_id is not None:\n if failed == \"all\":\n return type.query.filter_by(\n network_id=self.id, participant_id=participant_id\n ).all()\n else:\n return type.query.filter_by(\n network_id=self.id, participant_id=participant_id, failed=failed\n ).all()\n else:\n if failed == \"all\":\n return type.query.filter_by(network_id=self.id).all()\n else:\n return type.query.filter_by(failed=failed, network_id=self.id).all()", "response": "Get nodes in the network."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef infos(self, type=None, failed=False):\n if type is None:\n type = Info\n 
if failed not in [\"all\", False, True]:\n raise ValueError(\"{} is not a valid failed\".format(failed))\n\n if failed == \"all\":\n return type.query.filter_by(network_id=self.id).all()\n else:\n return type.query.filter_by(network_id=self.id, failed=failed).all()", "response": "Get infos in the network."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget all transmissions in the network.", "response": "def transmissions(self, status=\"all\", failed=False):\n \"\"\"Get transmissions in the network.\n\n status { \"all\", \"received\", \"pending\" }\n failed { False, True, \"all\" }\n To get transmissions from a specific vector, see the\n transmissions() method in class Vector.\n \"\"\"\n if status not in [\"all\", \"pending\", \"received\"]:\n raise ValueError(\n \"You cannot get transmission of status {}.\".format(status)\n + \"Status can only be pending, received or all\"\n )\n if failed not in [\"all\", False, True]:\n raise ValueError(\"{} is not a valid failed\".format(failed))\n\n if status == \"all\":\n if failed == \"all\":\n return Transmission.query.filter_by(network_id=self.id).all()\n else:\n return Transmission.query.filter_by(\n network_id=self.id, failed=failed\n ).all()\n else:\n if failed == \"all\":\n return Transmission.query.filter_by(\n network_id=self.id, status=status\n ).all()\n else:\n return Transmission.query.filter_by(\n network_id=self.id, status=status, failed=failed\n ).all()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the vectors in the network.", "response": "def vectors(self, failed=False):\n \"\"\"\n Get vectors in the network.\n\n failed = { False, True, \"all\" }\n To get the vectors to/from to a specific node, see Node.vectors().\n \"\"\"\n if failed not in [\"all\", False, True]:\n raise ValueError(\"{} is not a valid vector failed\".format(failed))\n\n if failed == \"all\":\n return 
Vector.query.filter_by(network_id=self.id).all()\n else:\n return Vector.query.filter_by(network_id=self.id, failed=failed).all()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nGet a node's neighbors - nodes that are directly connected to it.", "response": "def neighbors(self, type=None, direction=\"to\", failed=None):\n \"\"\"Get a node's neighbors - nodes that are directly connected to it.\n\n Type specifies the class of neighbour and must be a subclass of\n Node (default is Node).\n Connection is the direction of the connections and can be \"to\"\n (default), \"from\", \"either\", or \"both\".\n \"\"\"\n # get type\n if type is None:\n type = Node\n if not issubclass(type, Node):\n raise ValueError(\n \"{} is not a valid neighbor type,\"\n \"needs to be a subclass of Node.\".format(type)\n )\n\n # get direction\n if direction not in [\"both\", \"either\", \"from\", \"to\"]:\n raise ValueError(\n \"{} not a valid neighbor connection.\"\n \"Should be both, either, to or from.\".format(direction)\n )\n\n if failed is not None:\n raise ValueError(\n \"You should not pass a failed argument to neighbors(). \"\n \"Neighbors is \"\n \"unusual in that a failed argument cannot be passed. This is \"\n \"because there is inherent uncertainty in what it means for a \"\n \"neighbor to be failed. The neighbors function will only ever \"\n \"return not-failed nodes connected to you via not-failed \"\n \"vectors. 
If you want to do more elaborate queries, for \"\n \"example, getting not-failed nodes connected to you via failed\"\n \" vectors, you should do so via sql queries.\"\n )\n\n neighbors = []\n # get the neighbours\n if direction == \"to\":\n outgoing_vectors = (\n Vector.query.with_entities(Vector.destination_id)\n .filter_by(origin_id=self.id, failed=False)\n .all()\n )\n\n neighbor_ids = [v.destination_id for v in outgoing_vectors]\n if neighbor_ids:\n neighbors = Node.query.filter(Node.id.in_(neighbor_ids)).all()\n neighbors = [n for n in neighbors if isinstance(n, type)]\n\n if direction == \"from\":\n incoming_vectors = (\n Vector.query.with_entities(Vector.origin_id)\n .filter_by(destination_id=self.id, failed=False)\n .all()\n )\n\n neighbor_ids = [v.origin_id for v in incoming_vectors]\n if neighbor_ids:\n neighbors = Node.query.filter(Node.id.in_(neighbor_ids)).all()\n neighbors = [n for n in neighbors if isinstance(n, type)]\n\n if direction == \"either\":\n neighbors = list(\n set(\n self.neighbors(type=type, direction=\"to\")\n + self.neighbors(type=type, direction=\"from\")\n )\n )\n\n if direction == \"both\":\n neighbors = list(\n set(self.neighbors(type=type, direction=\"to\"))\n & set(self.neighbors(type=type, direction=\"from\"))\n )\n\n return neighbors"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks whether this node is connected [to/from] whom. whom can be a list of nodes or a single node. direction can be \"to\" (default), \"from\", \"both\" or \"either\". 
If whom is a single node this method returns a boolean, otherwise it returns a list of booleans", "response": "def is_connected(self, whom, direction=\"to\", failed=None):\n \"\"\"Check whether this node is connected [to/from] whom.\n\n whom can be a list of nodes or a single node.\n direction can be \"to\" (default), \"from\", \"both\" or \"either\".\n\n If whom is a single node this method returns a boolean,\n otherwise it returns a list of booleans\n \"\"\"\n if failed is not None:\n raise ValueError(\n \"You should not pass a failed argument to is_connected.\"\n \"is_connected is \"\n \"unusual in that a failed argument cannot be passed. This is \"\n \"because there is inherent uncertainty in what it means for a \"\n \"connection to be failed. The is_connected function will only \"\n \"ever check along not-failed vectors. \"\n \"If you want to check along failed vectors \"\n \"you should do so via sql queries.\"\n )\n\n # make whom a list\n if isinstance(whom, list):\n is_list = True\n else:\n whom = [whom]\n is_list = False\n\n whom_ids = [n.id for n in whom]\n\n # check whom contains only Nodes\n for node in whom:\n if not isinstance(node, Node):\n raise TypeError(\n \"is_connected cannot parse objects of type {}.\".format(type(node))\n )\n\n # check direction\n if direction not in [\"to\", \"from\", \"either\", \"both\"]:\n raise ValueError(\n \"{} is not a valid direction for is_connected\".format(direction)\n )\n\n # get is_connected\n connected = []\n if direction == \"to\":\n vectors = (\n Vector.query.with_entities(Vector.destination_id)\n .filter_by(origin_id=self.id, failed=False)\n .all()\n )\n destinations = set([v.destination_id for v in vectors])\n for w in whom_ids:\n connected.append(w in destinations)\n\n elif direction == \"from\":\n vectors = (\n Vector.query.with_entities(Vector.origin_id)\n .filter_by(destination_id=self.id, failed=False)\n .all()\n )\n origins = set([v.origin_id for v in vectors])\n for w in whom_ids:\n connected.append(w 
in origins)\n\n elif direction in [\"either\", \"both\"]:\n\n vectors = (\n Vector.query.with_entities(Vector.origin_id, Vector.destination_id)\n .filter(\n and_(\n Vector.failed == false(),\n or_(\n Vector.destination_id == self.id,\n Vector.origin_id == self.id,\n ),\n )\n )\n .all()\n )\n\n destinations = set([v.destination_id for v in vectors])\n origins = set([v.origin_id for v in vectors])\n\n if direction == \"either\":\n origins_destinations = destinations.union(origins)\n\n elif direction == \"both\":\n origins_destinations = destinations.intersection(origins)\n\n for w in whom_ids:\n connected.append(w in origins_destinations)\n\n if is_list:\n return connected\n else:\n return connected[0]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget infos that originate from this node.", "response": "def infos(self, type=None, failed=False):\n \"\"\"Get infos that originate from this node.\n\n Type must be a subclass of :class:`~dallinger.models.Info`, the default is\n ``Info``. Failed can be True, False or \"all\".\n\n \"\"\"\n if type is None:\n type = Info\n\n if not issubclass(type, Info):\n raise TypeError(\n \"Cannot get infos of type {} \" \"as it is not a valid type.\".format(type)\n )\n\n if failed not in [\"all\", False, True]:\n raise ValueError(\"{} is not a valid vector failed\".format(failed))\n\n if failed == \"all\":\n return type.query.filter_by(origin_id=self.id).all()\n else:\n return type.query.filter_by(origin_id=self.id, failed=failed).all()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting all infos that have been sent to this node.", "response": "def received_infos(self, type=None, failed=None):\n \"\"\"Get infos that have been sent to this node.\n\n Type must be a subclass of info, the default is Info.\n \"\"\"\n if failed is not None:\n raise ValueError(\n \"You should not pass a failed argument to received_infos. 
\"\n \"received_infos is \"\n \"unusual in that a failed argument cannot be passed. This is \"\n \"because there is inherent uncertainty in what it means for a \"\n \"received info to be failed. The received_infos function will \"\n \"only ever check not-failed transmissions. \"\n \"If you want to check failed transmissions \"\n \"you should do so via sql queries.\"\n )\n\n if type is None:\n type = Info\n\n if not issubclass(type, Info):\n raise TypeError(\n \"Cannot get infos of type {} \" \"as it is not a valid type.\".format(type)\n )\n\n transmissions = (\n Transmission.query.with_entities(Transmission.info_id)\n .filter_by(destination_id=self.id, status=\"received\", failed=False)\n .all()\n )\n\n info_ids = [t.info_id for t in transmissions]\n if info_ids:\n return type.query.filter(type.id.in_(info_ids)).all()\n else:\n return []"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting all the transformations done by this Node.", "response": "def transformations(self, type=None, failed=False):\n \"\"\"\n Get Transformations done by this Node.\n\n type must be a type of Transformation (defaults to Transformation)\n Failed can be True, False or \"all\"\n \"\"\"\n if failed not in [\"all\", False, True]:\n raise ValueError(\"{} is not a valid transmission failed\".format(failed))\n\n if type is None:\n type = Transformation\n\n if failed == \"all\":\n return type.query.filter_by(node_id=self.id).all()\n else:\n return type.query.filter_by(node_id=self.id, failed=failed).all()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfail a node, setting its status to \"failed\". Also fails all vectors that connect to or from the node. You cannot fail a node that has already failed, but you can fail a dead node. Set node.failed to True and :attr:`~dallinger.models.Node.time_of_death` to now. 
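The two-step lookup `received_infos` performs above (collect info ids from received, not-failed transmissions, then fetch the matching infos) can be sketched without a database; dicts stand in for the ORM rows, and all ids and contents below are invented for illustration:

```python
# Sketch of the received_infos lookup: filter transmissions first,
# then resolve the surviving info ids. Rows are plain dicts here.
node_id = 9
transmissions = [
    {"info_id": 1, "destination_id": 9, "status": "received", "failed": False},
    {"info_id": 2, "destination_id": 9, "status": "pending", "failed": False},
    {"info_id": 3, "destination_id": 9, "status": "received", "failed": True},
]
infos = {1: "first info", 2: "second info", 3: "third info"}

# Step 1: ids delivered via received, not-failed transmissions
info_ids = [
    t["info_id"]
    for t in transmissions
    if t["destination_id"] == node_id
    and t["status"] == "received"
    and not t["failed"]
]
# Step 2: an empty id list short-circuits to [] instead of querying
received = [infos[i] for i in info_ids] if info_ids else []
print(received)  # ['first info']
```

The pending and failed transmissions are excluded, so only the first info is returned.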
Instruct all not-failed vectors connected to this node, infos made by this node, transmissions to or from this node and transformations made by this node to fail.", "response": "def fail(self):\n \"\"\"\n Fail a node, setting its status to \"failed\".\n\n Also fails all vectors that connect to or from the node.\n You cannot fail a node that has already failed, but you\n can fail a dead node.\n\n Set node.failed to True and :attr:`~dallinger.models.Node.time_of_death`\n to now. Instruct all not-failed vectors connected to this node, infos\n made by this node, transmissions to or from this node and\n transformations made by this node to fail.\n\n \"\"\"\n if self.failed is True:\n raise AttributeError(\"Cannot fail {} - it has already failed.\".format(self))\n else:\n self.failed = True\n self.time_of_death = timenow()\n self.network.calculate_full()\n\n for v in self.vectors():\n v.fail()\n for i in self.infos():\n i.fail()\n for t in self.transmissions(direction=\"all\"):\n t.fail()\n for t in self.transformations():\n t.fail()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a vector from self to whom and return a list of vectors between the node and whom.", "response": "def connect(self, whom, direction=\"to\"):\n \"\"\"Create a vector from self to/from whom.\n\n Return a list of newly created vector between the node and whom.\n ``whom`` can be a specific node or a (nested) list of nodes. Nodes can\n only connect with nodes in the same network. In addition nodes cannot\n connect with themselves or with Sources. ``direction`` specifies the\n direction of the connection it can be \"to\" (node -> whom), \"from\" (whom\n -> node) or both (node <-> whom). The default is \"to\".\n\n Whom may be a (nested) list of nodes.\n\n Will raise an error if:\n 1. whom is not a node or list of nodes\n 2. whom is/contains a source if direction is to or both\n 3. whom is/contains self\n 4. 
whom is/contains a node in a different network\n\n If self is already connected to/from whom a Warning\n is raised and nothing happens.\n\n This method returns a list of the vectors created\n (even if there is only one).\n\n \"\"\"\n # check direction\n if direction not in [\"to\", \"from\", \"both\"]:\n raise ValueError(\n \"{} is not a valid direction for connect()\".format(direction)\n )\n\n # make whom a list\n whom = self.flatten([whom])\n\n # make the connections\n new_vectors = []\n if direction in [\"to\", \"both\"]:\n already_connected_to = self.flatten(\n [self.is_connected(direction=\"to\", whom=whom)]\n )\n for node, connected in zip(whom, already_connected_to):\n if connected:\n print(\n \"Warning! {} already connected to {}, \"\n \"instruction to connect will be ignored.\".format(self, node)\n )\n else:\n new_vectors.append(Vector(origin=self, destination=node))\n if direction in [\"from\", \"both\"]:\n already_connected_from = self.flatten(\n [self.is_connected(direction=\"from\", whom=whom)]\n )\n for node, connected in zip(whom, already_connected_from):\n if connected:\n print(\n \"Warning! 
{} already connected from {}, \"\n \"instruction to connect will be ignored.\".format(self, node)\n )\n else:\n new_vectors.append(Vector(origin=node, destination=self))\n return new_vectors"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nturn a list of lists into a list.", "response": "def flatten(self, lst):\n \"\"\"Turn a list of lists into a list.\"\"\"\n if lst == []:\n return lst\n if isinstance(lst[0], list):\n return self.flatten(lst[0]) + self.flatten(lst[1:])\n return lst[:1] + self.flatten(lst[1:])"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntransmits one or more infos from one node to another.", "response": "def transmit(self, what=None, to_whom=None):\n \"\"\"Transmit one or more infos from one node to another.\n\n \"what\" dictates which infos are sent, it can be:\n (1) None (in which case the node's _what method is called).\n (2) an Info (in which case the node transmits the info)\n (3) a subclass of Info (in which case the node transmits all\n its infos of that type)\n (4) a list of any combination of the above\n \"to_whom\" dictates which node(s) the infos are sent to, it can be:\n (1) None (in which case the node's _to_whom method is called)\n (2) a Node (in which case the node transmits to that node)\n (3) a subclass of Node (in which case the node transmits to all\n nodes of that type it is connected to)\n (4) a list of any combination of the above\n Will additionally raise an error if:\n (1) _what() or _to_whom() returns None or a list containing None.\n (2) what is/contains an info that does not originate from the\n transmitting node\n (3) to_whom is/contains a node that the transmitting node does not\n have a not-failed connection with.\n \"\"\"\n whats = set()\n for what in self.flatten([what]):\n if what is None:\n what = self._what()\n if inspect.isclass(what) and issubclass(what, Info):\n whats.update(self.infos(type=what))\n else:\n 
whats.add(what)\n\n to_whoms = set()\n for to_whom in self.flatten([to_whom]):\n if to_whom is None:\n to_whom = self._to_whom()\n if inspect.isclass(to_whom) and issubclass(to_whom, Node):\n to_whoms.update(self.neighbors(direction=\"to\", type=to_whom))\n else:\n to_whoms.add(to_whom)\n\n transmissions = []\n vectors = self.vectors(direction=\"outgoing\")\n for what in whats:\n for to_whom in to_whoms:\n try:\n vector = [v for v in vectors if v.destination_id == to_whom.id][0]\n except IndexError:\n raise ValueError(\n \"{} cannot transmit to {} as it does not have \"\n \"a connection to them\".format(self, to_whom)\n )\n t = Transmission(info=what, vector=vector)\n transmissions.append(t)\n\n return transmissions"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fail(self):\n if self.failed is True:\n raise AttributeError(\"Cannot fail {} - it has already failed.\".format(self))\n else:\n self.failed = True\n self.time_of_death = timenow()\n\n for t in self.transmissions():\n t.fail()", "response": "Fail a single instance of the class."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef json_data(self):\n return {\n \"type\": self.type,\n \"origin_id\": self.origin_id,\n \"network_id\": self.network_id,\n \"contents\": self.contents,\n }", "response": "The json representation of an info."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef transformations(self, relationship=\"all\"):\n if relationship not in [\"all\", \"parent\", \"child\"]:\n raise ValueError(\n \"You cannot get transformations of relationship {}\".format(relationship)\n + \"Relationship can only be parent, child or all.\"\n )\n\n if relationship == \"all\":\n return Transformation.query.filter(\n and_(\n Transformation.failed == false(),\n or_(\n Transformation.info_in == self, Transformation.info_out == self\n ),\n )\n 
).all()\n\n if relationship == \"parent\":\n return Transformation.query.filter_by(\n info_in_id=self.id, failed=False\n ).all()\n\n if relationship == \"child\":\n return Transformation.query.filter_by(\n info_out_id=self.id, failed=False\n ).all()", "response": "Get all the transformations involving this info."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef json_data(self):\n return {\n \"vector_id\": self.vector_id,\n \"origin_id\": self.origin_id,\n \"destination_id\": self.destination_id,\n \"info_id\": self.info_id,\n \"network_id\": self.network_id,\n \"receive_time\": self.receive_time,\n \"status\": self.status,\n }", "response": "The json representation of a transmissions."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef json_data(self):\n return {\n \"info_in_id\": self.info_in_id,\n \"info_out_id\": self.info_out_id,\n \"node_id\": self.node_id,\n \"network_id\": self.network_id,\n }", "response": "The json representation of a transformation."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the absolute path of the root directory of the installed Dallinger package", "response": "def dallinger_package_path():\n \"\"\"Return the absolute path of the root directory of the installed\n Dallinger package:\n\n >>> utils.dallinger_package_location()\n '/Users/janedoe/projects/Dallinger3/dallinger'\n \"\"\"\n dist = get_distribution(\"dallinger\")\n src_base = os.path.join(dist.location, dist.project_name)\n\n return src_base"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngenerate random id numbers.", "response": "def generate_random_id(size=6, chars=string.ascii_uppercase + string.digits):\n \"\"\"Generate random id numbers.\"\"\"\n return \"\".join(random.choice(chars) for x in range(size))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function 
for\nrunning a command and returns the return code.", "response": "def run_command(cmd, out, ignore_errors=False):\n \"\"\"We want to both send subprocess output to stdout or another file\n descriptor as the subprocess runs, *and* capture the actual exception\n message on errors. CalledProcessErrors do not reliably contain the\n underlying exception in either the 'message' or 'out' attributes, so\n we tee the stderr to a temporary file and if a CalledProcessError is\n raised we read its contents to recover stderr\n \"\"\"\n tempdir = tempfile.mkdtemp()\n output_file = os.path.join(tempdir, \"stderr\")\n original_cmd = \" \".join(cmd)\n p = subprocess.Popen(cmd, stdout=out, stderr=subprocess.PIPE)\n t = subprocess.Popen([\"tee\", output_file], stdin=p.stderr, stdout=out)\n t.wait()\n p.communicate()\n p.stderr.close()\n if p.returncode != 0 and not ignore_errors:\n with open(output_file, \"r\") as output:\n error = output.read()\n message = 'Command: \"{}\": Error: \"{}\"'.format(\n original_cmd, error.replace(\"\\n\", \"\")\n )\n shutil.rmtree(tempdir, ignore_errors=True)\n raise CommandError(message)\n\n shutil.rmtree(tempdir, ignore_errors=True)\n return p.returncode"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlog a message to stdout.", "response": "def log(msg, delay=0.5, chevrons=True, verbose=True):\n \"\"\"Log a message to stdout.\"\"\"\n if verbose:\n if chevrons:\n click.echo(\"\\n\u276f\u276f \" + msg)\n else:\n click.echo(msg)\n time.sleep(delay)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nlogging a message to stderr.", "response": "def error(msg, delay=0.5, chevrons=True, verbose=True):\n \"\"\"Log a message to stdout.\"\"\"\n if verbose:\n if chevrons:\n click.secho(\"\\n\u276f\u276f \" + msg, err=True, fg=\"red\")\n else:\n click.secho(msg, err=True, fg=\"red\")\n time.sleep(delay)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nverifies 
the experiment id.", "response": "def verify_id(ctx, param, app):\n \"\"\"Verify the experiment id.\"\"\"\n if app is None:\n raise TypeError(\"Select an experiment using the --app parameter.\")\n elif app[0:5] == \"dlgr-\":\n raise ValueError(\n \"The --app parameter requires the full \"\n \"UUID beginning with {}-...\".format(app[5:13])\n )\n return app"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef verify_directory(verbose=True, max_size_mb=50):\n # Check required files\n ok = True\n mb_to_bytes = 1000 * 1000\n expected_files = [\"config.txt\", \"experiment.py\"]\n\n for f in expected_files:\n if os.path.exists(f):\n log(\"\u2713 {} is PRESENT\".format(f), chevrons=False, verbose=verbose)\n else:\n log(\"\u2717 {} is MISSING\".format(f), chevrons=False, verbose=verbose)\n ok = False\n\n # Check size\n max_size = max_size_mb * mb_to_bytes\n size = size_on_copy()\n if size > max_size:\n size_in_mb = round(size / mb_to_bytes)\n log(\n \"\u2717 {}MB is TOO BIG (greater than {}MB)\".format(size_in_mb, max_size_mb),\n chevrons=False,\n verbose=verbose,\n )\n ok = False\n\n return ok", "response": "Verify that the current directory looks like a Dallinger experiment and that it does not have unintended contents that will be copied on the current deployment."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nperform basic sanity checks on experiment. 
py.", "response": "def verify_experiment_module(verbose):\n \"\"\"Perform basic sanity checks on experiment.py.\n \"\"\"\n ok = True\n if not os.path.exists(\"experiment.py\"):\n return False\n\n # Bootstrap a package in a temp directory and make it importable:\n temp_package_name = \"TEMP_VERIFICATION_PACKAGE\"\n tmp = tempfile.mkdtemp()\n clone_dir = os.path.join(tmp, temp_package_name)\n to_ignore = shutil.ignore_patterns(\n os.path.join(\".git\", \"*\"), \"*.db\", \"snapshots\", \"data\", \"server.log\"\n )\n shutil.copytree(os.getcwd(), clone_dir, ignore=to_ignore)\n initialize_experiment_package(clone_dir)\n from dallinger_experiment import experiment\n\n if clone_dir not in experiment.__file__:\n raise ImportError(\"Checking the wrong experiment.py... aborting.\")\n classes = inspect.getmembers(experiment, inspect.isclass)\n exps = [c for c in classes if (c[1].__bases__[0].__name__ in \"Experiment\")]\n\n # Clean up:\n for entry in [k for k in sys.modules if temp_package_name in k]:\n del sys.modules[entry]\n\n # Run checks:\n if len(exps) == 0:\n log(\n \"\u2717 experiment.py does not define an experiment class.\",\n delay=0,\n chevrons=False,\n verbose=verbose,\n )\n ok = False\n elif len(exps) == 1:\n log(\n \"\u2713 experiment.py defines 1 experiment\",\n delay=0,\n chevrons=False,\n verbose=verbose,\n )\n else:\n log(\n \"\u2717 experiment.py defines more than one experiment class.\",\n delay=0,\n chevrons=False,\n verbose=verbose,\n )\n ok = False\n\n return ok"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking for common or costly errors in experiment configuration.", "response": "def verify_config(verbose=True):\n \"\"\"Check for common or costly errors in experiment configuration.\n \"\"\"\n ok = True\n config = get_config()\n if not config.ready:\n config.load()\n # Check base_payment is correct\n try:\n base_pay = config.get(\"base_payment\")\n except KeyError:\n log(\"\u2717 No value for base_pay.\", delay=0, 
chevrons=False, verbose=verbose)\n else:\n dollarFormat = \"{:.2f}\".format(base_pay)\n\n if base_pay <= 0:\n log(\n \"\u2717 base_payment must be positive value in config.txt.\",\n delay=0,\n chevrons=False,\n verbose=verbose,\n )\n ok = False\n\n if float(dollarFormat) != float(base_pay):\n log(\n \"\u2717 base_payment must be in [dollars].[cents] format in config.txt. Try changing \"\n \"{0} to {1}.\".format(base_pay, dollarFormat),\n delay=0,\n chevrons=False,\n verbose=verbose,\n )\n ok = False\n\n return ok"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nwarn if there are files which conflict with these deployed by Dallinger but always returns True.", "response": "def verify_no_conflicts(verbose=True):\n \"\"\"Warn if there are filenames which conflict with those deployed by\n Dallinger, but always returns True (meaning \"OK\").\n \"\"\"\n conflicts = False\n\n reserved_files = [\n os.path.join(\"templates\", \"complete.html\"),\n os.path.join(\"templates\", \"error.html\"),\n os.path.join(\"templates\", \"error-complete.html\"),\n os.path.join(\"templates\", \"launch.html\"),\n os.path.join(\"templates\", \"thanks.html\"),\n os.path.join(\"static\", \"css\", \"dallinger.css\"),\n os.path.join(\"static\", \"scripts\", \"dallinger2.js\"),\n os.path.join(\"static\", \"scripts\", \"reqwest.min.js\"),\n os.path.join(\"static\", \"scripts\", \"store+json2.min.js\"),\n os.path.join(\"static\", \"scripts\", \"tracker.js\"),\n os.path.join(\"static\", \"robots.txt\"),\n ]\n\n for f in reserved_files:\n if os.path.exists(f):\n log(\n \"\u2717 {} OVERWRITES shared frontend files inserted at run-time\".format(f),\n delay=0,\n chevrons=False,\n verbose=verbose,\n )\n conflicts = True\n\n if not conflicts:\n log(\"\u2713 no file conflicts\", delay=0, chevrons=False, verbose=verbose)\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nperforming a series of checks on the current 
directory to verify that that it s a valid Dallinger experiment.", "response": "def verify_package(verbose=True):\n \"\"\"Perform a series of checks on the current directory to verify that\n it's a valid Dallinger experiment.\n \"\"\"\n results = (\n verify_directory(verbose),\n verify_experiment_module(verbose),\n verify_config(verbose),\n verify_no_conflicts(verbose),\n )\n\n ok = all(results)\n\n return ok"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef require_exp_directory(f):\n error = \"The current directory is not a valid Dallinger experiment.\"\n\n @wraps(f)\n def wrapper(**kwargs):\n if not verify_directory(kwargs.get(\"verbose\")):\n raise click.UsageError(error)\n return f(**kwargs)\n\n return wrapper", "response": "Decorator to verify that a command is run inside a valid Dallinger experiment directory."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrunning the experiment locally.", "response": "def debug(verbose, bot, proxy, exp_config=None):\n \"\"\"Run the experiment locally.\"\"\"\n debugger = DebugDeployment(Output(), verbose, bot, proxy, exp_config)\n log(header, chevrons=False)\n debugger.run()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeploys app using Heroku to the MTurk Sandbox.", "response": "def sandbox(verbose, app):\n \"\"\"Deploy app using Heroku to the MTurk Sandbox.\"\"\"\n if app:\n verify_id(None, None, app)\n log(header, chevrons=False)\n _deploy_in_mode(\"sandbox\", app=app, verbose=verbose, log=log)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef qualify(workers, qualification, value, by_name, notify, sandbox):\n if not (workers and qualification and value):\n raise click.BadParameter(\n \"Must specify a qualification ID, value/score, and at least one worker ID\"\n )\n mturk = _mturk_service_from_config(sandbox)\n if by_name:\n 
result = mturk.get_qualification_type_by_name(qualification)\n if result is None:\n raise click.BadParameter(\n 'No qualification with name \"{}\" exists.'.format(qualification)\n )\n\n qid = result[\"id\"]\n else:\n qid = qualification\n\n click.echo(\n \"Assigning qualification {} with value {} to {} worker{}...\".format(\n qid, value, len(workers), \"s\" if len(workers) > 1 else \"\"\n )\n )\n for worker in workers:\n if mturk.set_qualification_score(qid, worker, int(value), notify=notify):\n click.echo(\"{} OK\".format(worker))\n\n # print out the current set of workers with the qualification\n results = list(mturk.get_workers_with_qualification(qid))\n\n click.echo(\"{} workers with qualification {}:\".format(len(results), qid))\n\n for score, count in Counter([r[\"score\"] for r in results]).items():\n click.echo(\"{} with value {}\".format(count, score))", "response": "Assign a qualification to 1 or more workers"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef revoke(workers, qualification, by_name, reason, sandbox):\n if not (workers and qualification):\n raise click.BadParameter(\n \"Must specify a qualification ID or name, and at least one worker ID\"\n )\n\n mturk = _mturk_service_from_config(sandbox)\n if by_name:\n result = mturk.get_qualification_type_by_name(qualification)\n if result is None:\n raise click.BadParameter(\n 'No qualification with name \"{}\" exists.'.format(qualification)\n )\n\n qid = result[\"id\"]\n else:\n qid = qualification\n\n if not click.confirm(\n '\\n\\nYou are about to revoke qualification \"{}\" '\n \"for these workers:\\n\\t{}\\n\\n\"\n \"This will send an email to each of them from Amazon MTurk. 
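The score summary the `qualify` command prints at the end uses a `collections.Counter` over the workers' scores; a standalone sketch of that tally, with invented worker records in place of the MTurk results:

```python
from collections import Counter

# Sketch of the qualify command's closing summary: group workers
# holding a qualification by score. Records are invented.
results = [
    {"id": "w1", "score": 100},
    {"id": "w2", "score": 100},
    {"id": "w3", "score": 85},
]
tally = Counter(r["score"] for r in results)
for score, count in tally.items():
    print("{} with value {}".format(count, score))
```

Here two workers hold the qualification at score 100 and one at score 85.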
\"\n \"Continue?\".format(qid, \"\\n\\t\".join(workers))\n ):\n click.echo(\"Aborting...\")\n return\n\n for worker in workers:\n if mturk.revoke_qualification(qid, worker, reason):\n click.echo(\n 'Revoked qualification \"{}\" from worker \"{}\"'.format(qid, worker)\n )\n\n # print out the current set of workers with the qualification\n results = list(mturk.get_workers_with_qualification(qid))\n click.echo(\n 'There are now {} workers with qualification \"{}\"'.format(len(results), qid)\n )", "response": "Revoke a qualification from 1 or more workers"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef hibernate(app):\n log(\"The database backup URL is...\")\n backup_url = data.backup(app)\n log(backup_url)\n\n log(\"Scaling down the web servers...\")\n heroku_app = HerokuApp(app)\n heroku_app.scale_down_dynos()\n\n log(\"Removing addons...\")\n\n addons = [\n \"heroku-postgresql\",\n # \"papertrail\",\n \"heroku-redis\",\n ]\n for addon in addons:\n heroku_app.addon_destroy(addon)", "response": "Pause an experiment and remove costly resources."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlist hits for an experiment id.", "response": "def hits(app, sandbox):\n \"\"\"List hits for an experiment id.\"\"\"\n hit_list = list(_current_hits(_mturk_service_from_config(sandbox), app))\n out = Output()\n out.log(\n \"Found {} hits for this experiment id: {}\".format(\n len(hit_list), \", \".join(h[\"id\"] for h in hit_list)\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef expire(app, sandbox, exit=True):\n success = []\n failures = []\n service = _mturk_service_from_config(sandbox)\n hits = _current_hits(service, app)\n for hit in hits:\n hit_id = hit[\"id\"]\n try:\n service.expire_hit(hit_id)\n success.append(hit_id)\n except MTurkServiceException:\n failures.append(hit_id)\n out = 
Output()\n if success:\n out.log(\"Expired {} hits: {}\".format(len(success), \", \".join(success)))\n if failures:\n out.log(\n \"Could not expire {} hits: {}\".format(len(failures), \", \".join(failures))\n )\n if not success and not failures:\n out.log(\"No hits found for this application.\")\n if not sandbox:\n out.log(\n \"If this experiment was run in the MTurk sandbox, use: \"\n \"`dallinger expire --sandbox --app {}`\".format(app)\n )\n if exit and not success:\n sys.exit(1)", "response": "Expire all hits for an experiment id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef destroy(ctx, app, expire_hit, sandbox):\n if expire_hit:\n ctx.invoke(expire, app=app, sandbox=sandbox, exit=False)\n HerokuApp(app).destroy()", "response": "Tear down an experiment server."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef awaken(app, databaseurl):\n id = app\n config = get_config()\n config.load()\n\n bucket = data.user_s3_bucket()\n key = bucket.lookup(\"{}.dump\".format(id))\n url = key.generate_url(expires_in=300)\n\n heroku_app = HerokuApp(id, output=None, team=None)\n heroku_app.addon(\"heroku-postgresql:{}\".format(config.get(\"database_size\")))\n time.sleep(60)\n\n heroku_app.pg_wait()\n time.sleep(10)\n\n heroku_app.addon(\"heroku-redis:{}\".format(config.get(\"redis_size\")))\n heroku_app.restore(url)\n\n # Scale up the dynos.\n log(\"Scaling up the dynos...\")\n size = config.get(\"dyno_type\")\n for process in [\"web\", \"worker\"]:\n qty = config.get(\"num_dynos_\" + process)\n heroku_app.scale_up_dyno(process, qty, size)\n if config.get(\"clock_on\"):\n heroku_app.scale_up_dyno(\"clock\", 1, size)", "response": "Restores the database from a given url."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload the database state from an exported zip file and leave the server running until stopping the process with - 
c.", "response": "def load(app, verbose, replay, exp_config=None):\n \"\"\"Import database state from an exported zip file and leave the server\n running until stopping the process with -c.\n \"\"\"\n if replay:\n exp_config = exp_config or {}\n exp_config[\"replay\"] = True\n log(header, chevrons=False)\n loader = LoaderDeployment(app, Output(), verbose, exp_config)\n loader.run()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef monitor(app):\n heroku_app = HerokuApp(dallinger_uid=app)\n webbrowser.open(heroku_app.dashboard_url)\n webbrowser.open(\"https://requester.mturk.com/mturk/manageHITs\")\n heroku_app.open_logs()\n check_call([\"open\", heroku_app.db_uri])\n while _keep_running():\n summary = get_summary(app)\n click.clear()\n click.echo(header)\n click.echo(\"\\nExperiment {}\\n\".format(app))\n click.echo(summary)\n time.sleep(10)", "response": "Set up application monitoring."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef bot(app, debug):\n if debug is None:\n verify_id(None, None, app)\n\n (id, tmp) = setup_experiment(log)\n\n if debug:\n url = debug\n else:\n heroku_app = HerokuApp(dallinger_uid=app)\n worker = generate_random_id()\n hit = generate_random_id()\n assignment = generate_random_id()\n ad_url = \"{}/ad\".format(heroku_app.url)\n ad_parameters = \"assignmentId={}&hitId={}&workerId={}&mode=sandbox\"\n ad_parameters = ad_parameters.format(assignment, hit, worker)\n url = \"{}?{}\".format(ad_url, ad_parameters)\n bot = bot_factory(url)\n bot.run_experiment()", "response": "Run the experiment bot."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nverify that the current directory is compatible with Dallinger.", "response": "def verify():\n \"\"\"Verify that app is compatible with Dallinger.\"\"\"\n verbose = True\n log(\n \"Verifying current directory as a Dallinger 
experiment...\",\n delay=0,\n verbose=verbose,\n )\n ok = verify_package(verbose=verbose)\n if ok:\n log(\"\u2713 Everything looks good!\", delay=0, verbose=verbose)\n else:\n log(\"\u2639 Some problems were found.\", delay=0, verbose=verbose)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets all the drawings.", "response": "def getdrawings():\n \"\"\"Get all the drawings.\"\"\"\n infos = Info.query.all()\n sketches = [json.loads(info.contents) for info in infos]\n return jsonify(drawings=sketches)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef index():\n config = _config()\n html = \"
<html><head></head><body><h1>Dallinger Experiment in progress</h1><table>\"\n for item in sorted(config.as_dict().items()):\n html += '<tr><td>{}</td><td>{}</td></tr>'.format(\n *item\n )\n html += \"</table></body></html>
\"\n return html", "response": "Index route for the current Dallinger Experiment"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a generic success response.", "response": "def success_response(**data):\n \"\"\"Return a generic success response.\"\"\"\n data_out = {}\n data_out[\"status\"] = \"success\"\n data_out.update(data)\n js = dumps(data_out, default=date_handler)\n return Response(js, status=200, mimetype=\"application/json\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef error_response(\n error_type=\"Internal server error\",\n error_text=\"\",\n status=400,\n participant=None,\n simple=False,\n request_data=\"\",\n):\n \"\"\"Return a generic server error response.\"\"\"\n last_exception = sys.exc_info()\n if last_exception[0]:\n db.logger.error(\n \"Failure for request: {!r}\".format(dict(request.args)),\n exc_info=last_exception,\n )\n\n data = {\"status\": \"error\"}\n\n if simple:\n data[\"message\"] = error_text\n else:\n data[\"html\"] = (\n error_page(\n error_text=error_text,\n error_type=error_type,\n participant=participant,\n request_data=request_data,\n )\n .get_data()\n .decode(\"utf-8\")\n )\n return Response(dumps(data), status=status, mimetype=\"application/json\")", "response": "Return a generic server error response."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef error_page(\n participant=None,\n error_text=None,\n compensate=True,\n error_type=\"default\",\n request_data=\"\",\n):\n \"\"\"Render HTML for error page.\"\"\"\n config = _config()\n\n if error_text is None:\n error_text = \"\"\"There has been an error and so you are unable to\n continue, sorry!\"\"\"\n\n if participant is not None:\n hit_id = (participant.hit_id,)\n assignment_id = (participant.assignment_id,)\n worker_id = participant.worker_id\n participant_id = 
participant.id\n else:\n hit_id = request.form.get(\"hit_id\", \"\")\n assignment_id = request.form.get(\"assignment_id\", \"\")\n worker_id = request.form.get(\"worker_id\", \"\")\n participant_id = request.form.get(\"participant_id\", None)\n\n if participant_id:\n try:\n participant_id = int(participant_id)\n except (ValueError, TypeError):\n participant_id = None\n\n return make_response(\n render_template(\n \"error.html\",\n error_text=error_text,\n compensate=compensate,\n contact_address=config.get(\"contact_email_on_error\"),\n error_type=error_type,\n hit_id=hit_id,\n assignment_id=assignment_id,\n worker_id=worker_id,\n request_data=request_data,\n participant_id=participant_id,\n ),\n 500,\n )", "response": "Render HTML for error page."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef inject_experiment():\n exp = Experiment(session)\n return dict(experiment=exp, env=os.environ)", "response": "Inject experiment and enviroment variables into the template context."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef should_show_thanks_page_to(participant):\n if participant is None:\n return False\n status = participant.status\n marked_done = participant.end_time is not None\n ready_for_external_submission = (\n status in (\"overrecruited\", \"working\") and marked_done\n )\n assignment_complete = status in (\"submitted\", \"approved\")\n\n return assignment_complete or ready_for_external_submission", "response": "Returns True if the participant is shown in the ad. 
html page."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef advertisement():\n if not (\"hitId\" in request.args and \"assignmentId\" in request.args):\n raise ExperimentError(\"hit_assign_worker_id_not_set_in_mturk\")\n config = _config()\n\n # Browser rule validation, if configured:\n browser = ValidatesBrowser(config)\n if not browser.is_supported(request.user_agent.string):\n raise ExperimentError(\"browser_type_not_allowed\")\n\n hit_id = request.args[\"hitId\"]\n assignment_id = request.args[\"assignmentId\"]\n app_id = config.get(\"id\", \"unknown\")\n mode = config.get(\"mode\")\n debug_mode = mode == \"debug\"\n worker_id = request.args.get(\"workerId\")\n participant = None\n\n if worker_id is not None:\n # First check if this workerId has completed the task before\n # under a different assignment (v1):\n already_participated = bool(\n models.Participant.query.filter(\n models.Participant.assignment_id != assignment_id\n )\n .filter(models.Participant.worker_id == worker_id)\n .count()\n )\n\n if already_participated and not debug_mode:\n raise ExperimentError(\"already_did_exp_hit\")\n\n # Next, check for participants already associated with this very\n # assignment, and retain their status, if found:\n try:\n participant = (\n models.Participant.query.filter(models.Participant.hit_id == hit_id)\n .filter(models.Participant.assignment_id == assignment_id)\n .filter(models.Participant.worker_id == worker_id)\n .one()\n )\n except exc.SQLAlchemyError:\n pass\n\n recruiter_name = request.args.get(\"recruiter\")\n if recruiter_name:\n recruiter = recruiters.by_name(recruiter_name)\n else:\n recruiter = recruiters.from_config(config)\n recruiter_name = recruiter.nickname\n\n if should_show_thanks_page_to(participant):\n # They've either done, or they're from a recruiter that requires\n # submission of an external form to complete their participation.\n return render_template(\n 
\"thanks.html\",\n hitid=hit_id,\n assignmentid=assignment_id,\n workerid=worker_id,\n external_submit_url=recruiter.external_submission_url,\n mode=config.get(\"mode\"),\n app_id=app_id,\n )\n if participant and participant.status == \"working\":\n # Once participants have finished the instructions, we do not allow\n # them to start the task again.\n raise ExperimentError(\"already_started_exp_mturk\")\n\n # Participant has not yet agreed to the consent. They might not\n # even have accepted the HIT.\n return render_template(\n \"ad.html\",\n recruiter=recruiter_name,\n hitid=hit_id,\n assignmentid=assignment_id,\n workerid=worker_id,\n mode=config.get(\"mode\"),\n app_id=app_id,\n )", "response": "Render the ad page for the given HIT and assignment, or the thanks page if the participant has already finished."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef summary():\n exp = Experiment(session)\n state = {\n \"status\": \"success\",\n \"summary\": exp.log_summary(),\n \"completed\": exp.is_complete(),\n }\n unfilled_nets = (\n models.Network.query.filter(models.Network.full != true())\n .with_entities(models.Network.id, models.Network.max_size)\n .all()\n )\n working = (\n models.Participant.query.filter_by(status=\"working\")\n .with_entities(func.count(models.Participant.id))\n .scalar()\n )\n state[\"unfilled_networks\"] = len(unfilled_nets)\n nodes_remaining = 0\n required_nodes = 0\n if state[\"unfilled_networks\"] == 0:\n if working == 0 and state[\"completed\"] is None:\n state[\"completed\"] = True\n else:\n for net in unfilled_nets:\n node_count = (\n models.Node.query.filter_by(network_id=net.id, failed=False)\n .with_entities(func.count(models.Node.id))\n .scalar()\n )\n net_size = net.max_size\n required_nodes += net_size\n nodes_remaining += net_size - node_count\n state[\"nodes_remaining\"] = nodes_remaining\n state[\"required_nodes\"] = required_nodes\n\n if state[\"completed\"] is None:\n state[\"completed\"] = False\n\n # 
Regenerate a waiting room message when checking status\n # to counter missed messages at the end of the waiting room\n nonfailed_count = models.Participant.query.filter(\n (models.Participant.status == \"working\")\n | (models.Participant.status == \"overrecruited\")\n | (models.Participant.status == \"submitted\")\n | (models.Participant.status == \"approved\")\n ).count()\n exp = Experiment(session)\n overrecruited = exp.is_overrecruited(nonfailed_count)\n if exp.quorum:\n quorum = {\"q\": exp.quorum, \"n\": nonfailed_count, \"overrecruited\": overrecruited}\n db.queue_message(WAITING_ROOM_CHANNEL, dumps(quorum))\n\n return Response(dumps(state), status=200, mimetype=\"application/json\")", "response": "Summarize the participants' status codes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef experiment_property(prop):\n exp = Experiment(session)\n try:\n value = exp.public_properties[prop]\n except KeyError:\n abort(404)\n return success_response(**{prop: value})", "response": "Get a property of the experiment by name."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef consent():\n config = _config()\n return render_template(\n \"consent.html\",\n hit_id=request.args[\"hit_id\"],\n assignment_id=request.args[\"assignment_id\"],\n worker_id=request.args[\"worker_id\"],\n mode=config.get(\"mode\"),\n )", "response": "Return the consent form. Here for backwards-compatibility with 2.
x."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget a parameter from a request.", "response": "def request_parameter(parameter, parameter_type=None, default=None, optional=False):\n \"\"\"Get a parameter from a request.\n\n parameter is the name of the parameter you are looking for\n parameter_type is the type the parameter should have\n default is the value the parameter takes if it has not been passed\n\n If the parameter is not found and no default is specified,\n or if the parameter is found but is of the wrong type\n then a Response object is returned\n \"\"\"\n exp = Experiment(session)\n\n # get the parameter\n try:\n value = request.values[parameter]\n except KeyError:\n # if it isnt found use the default, or return an error Response\n if default is not None:\n return default\n elif optional:\n return None\n else:\n msg = \"{} {} request, {} not specified\".format(\n request.url, request.method, parameter\n )\n return error_response(error_type=msg)\n\n # check the parameter type\n if parameter_type is None:\n # if no parameter_type is required, return the parameter as is\n return value\n elif parameter_type == \"int\":\n # if int is required, convert to an int\n try:\n value = int(value)\n return value\n except ValueError:\n msg = \"{} {} request, non-numeric {}: {}\".format(\n request.url, request.method, parameter, value\n )\n return error_response(error_type=msg)\n elif parameter_type == \"known_class\":\n # if its a known class check against the known classes\n try:\n value = exp.known_classes[value]\n return value\n except KeyError:\n msg = \"{} {} request, unknown_class: {} for parameter {}\".format(\n request.url, request.method, value, parameter\n )\n return error_response(error_type=msg)\n elif parameter_type == \"bool\":\n # if its a boolean, convert to a boolean\n if value in [\"True\", \"False\"]:\n return value == \"True\"\n else:\n msg = \"{} {} request, non-boolean {}: {}\".format(\n request.url, 
request.method, parameter, value\n )\n return error_response(error_type=msg)\n else:\n msg = \"/{} {} request, unknown parameter type: {} for parameter {}\".format(\n request.url, request.method, parameter_type, parameter\n )\n return error_response(error_type=msg)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nassigning properties to an object.", "response": "def assign_properties(thing):\n \"\"\"Assign properties to an object.\n\n When creating something via a post request (e.g. a node), you can pass the\n properties of the object in the request. This function gets those values\n from the request and fills in the relevant columns of the table.\n \"\"\"\n details = request_parameter(parameter=\"details\", optional=True)\n if details:\n setattr(thing, \"details\", loads(details))\n\n for p in range(5):\n property_name = \"property\" + str(p + 1)\n property = request_parameter(parameter=property_name, optional=True)\n if property:\n setattr(thing, property_name, property)\n\n session.commit()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_participant(worker_id, hit_id, assignment_id, mode):\n # Lock the table, triggering multiple simultaneous accesses to fail\n try:\n session.connection().execute(\"LOCK TABLE participant IN EXCLUSIVE MODE NOWAIT\")\n except exc.OperationalError as e:\n e.orig = TransactionRollbackError()\n raise e\n\n missing = [p for p in (worker_id, hit_id, assignment_id) if p == \"undefined\"]\n if missing:\n msg = \"/participant POST: required values were 'undefined'\"\n return error_response(error_type=msg, status=403)\n\n fingerprint_hash = request.args.get(\"fingerprint_hash\")\n try:\n fingerprint_found = models.Participant.query.filter_by(\n fingerprint_hash=fingerprint_hash\n ).one_or_none()\n except MultipleResultsFound:\n fingerprint_found = True\n\n if fingerprint_hash and fingerprint_found:\n db.logger.warning(\"Same browser 
fingerprint detected.\")\n\n if mode == \"live\":\n return error_response(\n error_type=\"/participant POST: Same participant detected.\", status=403\n )\n\n already_participated = models.Participant.query.filter_by(\n worker_id=worker_id\n ).one_or_none()\n\n if already_participated:\n db.logger.warning(\"Worker has already participated.\")\n return error_response(\n error_type=\"/participant POST: worker has already participated.\", status=403\n )\n\n duplicate = models.Participant.query.filter_by(\n assignment_id=assignment_id, status=\"working\"\n ).one_or_none()\n\n if duplicate:\n msg = \"\"\"\n AWS has reused assignment_id while existing participant is\n working. Replacing older participant {}.\n \"\"\"\n app.logger.warning(msg.format(duplicate.id))\n q.enqueue(worker_function, \"AssignmentReassigned\", None, duplicate.id)\n\n # Count working or beyond participants.\n nonfailed_count = (\n models.Participant.query.filter(\n (models.Participant.status == \"working\")\n | (models.Participant.status == \"overrecruited\")\n | (models.Participant.status == \"submitted\")\n | (models.Participant.status == \"approved\")\n ).count()\n + 1\n )\n\n recruiter_name = request.args.get(\"recruiter\", \"undefined\")\n if not recruiter_name or recruiter_name == \"undefined\":\n recruiter = recruiters.from_config(_config())\n if recruiter:\n recruiter_name = recruiter.nickname\n\n # Create the new participant.\n participant = models.Participant(\n recruiter_id=recruiter_name,\n worker_id=worker_id,\n assignment_id=assignment_id,\n hit_id=hit_id,\n mode=mode,\n fingerprint_hash=fingerprint_hash,\n )\n\n exp = Experiment(session)\n\n overrecruited = exp.is_overrecruited(nonfailed_count)\n if overrecruited:\n participant.status = \"overrecruited\"\n\n session.add(participant)\n session.flush() # Make sure we know the id for the new row\n result = {\"participant\": participant.__json__()}\n\n # Queue notification to others in waiting room\n if exp.quorum:\n quorum = {\"q\": 
exp.quorum, \"n\": nonfailed_count, \"overrecruited\": overrecruited}\n db.queue_message(WAITING_ROOM_CHANNEL, dumps(quorum))\n result[\"quorum\"] = quorum\n\n # return the data\n return success_response(**result)", "response": "Create a participant.\n\n This route is hit early on. Any nodes the participant creates will be\n defined in reference to the participant object. You must specify the\n worker_id, hit_id, assignment_id, and mode in the url."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the participant with the given id.", "response": "def get_participant(participant_id):\n \"\"\"Get the participant with the given id.\"\"\"\n try:\n ppt = models.Participant.query.filter_by(id=participant_id).one()\n except NoResultFound:\n return error_response(\n error_type=\"/participant GET: no participant found\", status=403\n )\n\n # return the data\n return success_response(participant=ppt.__json__())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_network(network_id):\n try:\n net = models.Network.query.filter_by(id=network_id).one()\n except NoResultFound:\n return error_response(error_type=\"/network GET: no network found\", status=403)\n\n # return the data\n return success_response(network=net.__json__())", "response": "Get the network with the given id."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a new question for a participant.", "response": "def create_question(participant_id):\n \"\"\"Send a POST request to the question table.\n\n Questions store information at the participant level, not the node\n level.\n You should pass the question (string) number (int) and response\n (string) as arguments.\n \"\"\"\n # Get the participant.\n try:\n ppt = models.Participant.query.filter_by(id=participant_id).one()\n except NoResultFound:\n return error_response(\n error_type=\"/question POST no participant found\", 
status=403\n )\n\n question = request_parameter(parameter=\"question\")\n response = request_parameter(parameter=\"response\")\n number = request_parameter(parameter=\"number\", parameter_type=\"int\")\n for x in [question, response, number]:\n if isinstance(x, Response):\n return x\n\n # Consult the recruiter regarding whether to accept a questionnaire\n # from the participant:\n rejection = ppt.recruiter.rejects_questionnaire_from(ppt)\n if rejection:\n return error_response(\n error_type=\"/question POST, status = {}, reason: {}\".format(\n ppt.status, rejection\n ),\n participant=ppt,\n )\n\n try:\n # execute the request\n models.Question(\n participant=ppt, question=question, response=response, number=number\n )\n session.commit()\n except Exception:\n return error_response(error_type=\"/question POST server error\", status=403)\n\n # return the data\n return success_response()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the node's neighbors", "response": "def node_neighbors(node_id):\n \"\"\"Send a GET request to the node table.\n\n This calls the neighbours method of the node\n making the request and returns a list of descriptions of\n the nodes (even if there is only one).\n Required arguments: participant_id, node_id\n Optional arguments: type, connection\n\n After getting the neighbours it also calls\n exp.node_get_request()\n \"\"\"\n exp = Experiment(session)\n\n # get the parameters\n node_type = request_parameter(\n parameter=\"node_type\", parameter_type=\"known_class\", default=models.Node\n )\n connection = request_parameter(parameter=\"connection\", default=\"to\")\n failed = request_parameter(parameter=\"failed\", parameter_type=\"bool\", optional=True)\n for x in [node_type, connection]:\n if type(x) == Response:\n return x\n\n # make sure the node exists\n node = models.Node.query.get(node_id)\n if node is None:\n return error_response(\n error_type=\"/node/neighbors, node does not exist\",\n 
error_text=\"/node/{0}/neighbors, node {0} does not exist\".format(node_id),\n )\n\n # get its neighbors\n if failed is not None:\n # This will always raise because \"failed\" is not a supported parameter.\n # We just want to pass the exception message back in the response:\n try:\n node.neighbors(type=node_type, direction=connection, failed=failed)\n except Exception as e:\n return error_response(error_type=\"node.neighbors\", error_text=str(e))\n\n else:\n nodes = node.neighbors(type=node_type, direction=connection)\n try:\n # ping the experiment\n exp.node_get_request(node=node, nodes=nodes)\n session.commit()\n except Exception:\n return error_response(error_type=\"exp.node_get_request\")\n\n return success_response(nodes=[n.__json__() for n in nodes])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_node(participant_id):\n exp = Experiment(session)\n\n # Get the participant.\n try:\n participant = models.Participant.query.filter_by(id=participant_id).one()\n except NoResultFound:\n return error_response(error_type=\"/node POST no participant found\", status=403)\n\n # Make sure the participant status is working\n if participant.status != \"working\":\n error_type = \"/node POST, status = {}\".format(participant.status)\n return error_response(error_type=error_type, participant=participant)\n\n # execute the request\n network = exp.get_network_for_participant(participant=participant)\n if network is None:\n return Response(dumps({\"status\": \"error\"}), status=403)\n\n node = exp.create_node(participant=participant, network=network)\n assign_properties(node)\n exp.add_node_to_network(node=node, network=network)\n\n # ping the experiment\n exp.node_post_request(participant=participant, node=node)\n\n # return the data\n return success_response(node=node.__json__())", "response": "Create a new node for the given participant."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 
3 function for\ngetting the vectors of a node.", "response": "def node_vectors(node_id):\n \"\"\"Get the vectors of a node.\n\n You must specify the node id in the url.\n You can pass direction (incoming/outgoing/all) and failed\n (True/False/all).\n \"\"\"\n exp = Experiment(session)\n # get the parameters\n direction = request_parameter(parameter=\"direction\", default=\"all\")\n failed = request_parameter(parameter=\"failed\", parameter_type=\"bool\", default=False)\n for x in [direction, failed]:\n if type(x) == Response:\n return x\n\n # execute the request\n node = models.Node.query.get(node_id)\n if node is None:\n return error_response(error_type=\"/node/vectors, node does not exist\")\n\n try:\n vectors = node.vectors(direction=direction, failed=failed)\n exp.vector_get_request(node=node, vectors=vectors)\n session.commit()\n except Exception:\n return error_response(\n error_type=\"/node/vectors GET server error\",\n status=403,\n participant=node.participant,\n )\n\n # return the data\n return success_response(vectors=[v.__json__() for v in vectors])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef node_received_infos(node_id):\n exp = Experiment(session)\n\n # get the parameters\n info_type = request_parameter(\n parameter=\"info_type\", parameter_type=\"known_class\", default=models.Info\n )\n if type(info_type) == Response:\n return info_type\n\n # check the node exists\n node = models.Node.query.get(node_id)\n if node is None:\n return error_response(\n error_type=\"/node/infos, node {} does not exist\".format(node_id)\n )\n\n # execute the request:\n infos = node.received_infos(type=info_type)\n\n try:\n # ping the experiment\n exp.info_get_request(node=node, infos=infos)\n\n session.commit()\n except Exception:\n return error_response(\n error_type=\"info_get_request error\",\n status=403,\n participant=node.participant,\n )\n\n return success_response(infos=[i.__json__() for i 
in infos])", "response": "Get all the infos a node has been sent and has received."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate an info. The node id must be specified in the url. You must pass contents as an argument. info_type is an additional optional argument. If info_type is a custom subclass of Info it must be added to the known_classes of the experiment class.", "response": "def info_post(node_id):\n \"\"\"Create an info.\n\n The node id must be specified in the url.\n\n You must pass contents as an argument.\n info_type is an additional optional argument.\n If info_type is a custom subclass of Info it must be\n added to the known_classes of the experiment class.\n \"\"\"\n # get the parameters and validate them\n contents = request_parameter(parameter=\"contents\")\n info_type = request_parameter(\n parameter=\"info_type\", parameter_type=\"known_class\", default=models.Info\n )\n for x in [contents, info_type]:\n if type(x) == Response:\n return x\n # check the node exists\n node = models.Node.query.get(node_id)\n if node is None:\n return error_response(error_type=\"/info POST, node does not exist\")\n\n exp = Experiment(session)\n try:\n # execute the request\n info = info_type(origin=node, contents=contents)\n assign_properties(info)\n\n # ping the experiment\n exp.info_post_request(node=node, info=info)\n\n session.commit()\n except Exception:\n return error_response(\n error_type=\"/info POST server error\",\n status=403,\n participant=node.participant,\n )\n\n # return the data\n return success_response(info=info.__json__())"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets all the transmissions of a node.", "response": "def node_transmissions(node_id):\n \"\"\"Get all the transmissions of a node.\n\n The node id must be specified in the url.\n You can also pass direction (to/from/all) or status (all/pending/received)\n as arguments.\n \"\"\"\n exp = 
Experiment(session)\n\n # get the parameters\n direction = request_parameter(parameter=\"direction\", default=\"incoming\")\n status = request_parameter(parameter=\"status\", default=\"all\")\n for x in [direction, status]:\n if type(x) == Response:\n return x\n\n # check the node exists\n node = models.Node.query.get(node_id)\n if node is None:\n return error_response(error_type=\"/node/transmissions, node does not exist\")\n\n # execute the request\n transmissions = node.transmissions(direction=direction, status=status)\n\n try:\n if direction in [\"incoming\", \"all\"] and status in [\"pending\", \"all\"]:\n node.receive()\n session.commit()\n # ping the experiment\n exp.transmission_get_request(node=node, transmissions=transmissions)\n session.commit()\n except Exception:\n return error_response(\n error_type=\"/node/transmissions GET server error\",\n status=403,\n participant=node.participant,\n )\n\n # return the data\n return success_response(transmissions=[t.__json__() for t in transmissions])"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef node_transmit(node_id):\n exp = Experiment(session)\n what = request_parameter(parameter=\"what\", optional=True)\n to_whom = request_parameter(parameter=\"to_whom\", optional=True)\n\n # check the node exists\n node = models.Node.query.get(node_id)\n if node is None:\n return error_response(error_type=\"/node/transmit, node does not exist\")\n # create what\n if what is not None:\n try:\n what = int(what)\n what = models.Info.query.get(what)\n if what is None:\n return error_response(\n error_type=\"/node/transmit POST, info does not exist\",\n participant=node.participant,\n )\n except Exception:\n try:\n what = exp.known_classes[what]\n except KeyError:\n msg = \"/node/transmit POST, {} not in experiment.known_classes\"\n return error_response(\n error_type=msg.format(what), participant=node.participant\n )\n\n # create to_whom\n if to_whom is not None:\n 
try:\n to_whom = int(to_whom)\n to_whom = models.Node.query.get(to_whom)\n if to_whom is None:\n return error_response(\n error_type=\"/node/transmit POST, recipient Node does not exist\",\n participant=node.participant,\n )\n except Exception:\n try:\n to_whom = exp.known_classes[to_whom]\n except KeyError:\n msg = \"/node/transmit POST, {} not in experiment.known_classes\"\n return error_response(\n error_type=msg.format(to_whom), participant=node.participant\n )\n\n # execute the request\n try:\n transmissions = node.transmit(what=what, to_whom=to_whom)\n for t in transmissions:\n assign_properties(t)\n session.commit()\n # ping the experiment\n exp.transmission_post_request(node=node, transmissions=transmissions)\n session.commit()\n except Exception:\n return error_response(\n error_type=\"/node/transmit POST, server error\", participant=node.participant\n )\n\n # return the data\n return success_response(transmissions=[t.__json__() for t in transmissions])", "response": "Transmit an info from one node to another node."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting all the transformations of a node.", "response": "def transformation_get(node_id):\n \"\"\"Get all the transformations of a node.\n\n The node id must be specified in the url.\n\n You can also pass transformation_type.\n \"\"\"\n exp = Experiment(session)\n\n # get the parameters\n transformation_type = request_parameter(\n parameter=\"transformation_type\",\n parameter_type=\"known_class\",\n default=models.Transformation,\n )\n if type(transformation_type) == Response:\n return transformation_type\n\n # check the node exists\n node = models.Node.query.get(node_id)\n if node is None:\n return error_response(\n error_type=\"/node/transformations, \"\n \"node {} does not exist\".format(node_id)\n )\n\n # execute the request\n transformations = node.transformations(type=transformation_type)\n try:\n # ping the experiment\n exp.transformation_get_request(node=node, 
transformations=transformations)\n session.commit()\n except Exception:\n return error_response(\n error_type=\"/node/transformations GET failed\", participant=node.participant\n )\n\n # return the data\n return success_response(transformations=[t.__json__() for t in transformations])"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck that the assignment_id of the participant is unique.", "response": "def check_for_duplicate_assignments(participant):\n \"\"\"Check that the assignment_id of the participant is unique.\n\n If it isn't, the older participants will be failed.\n \"\"\"\n participants = models.Participant.query.filter_by(\n assignment_id=participant.assignment_id\n ).all()\n duplicates = [\n p for p in participants if (p.id != participant.id and p.status == \"working\")\n ]\n for d in duplicates:\n q.enqueue(worker_function, \"AssignmentAbandoned\", None, d.id)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef worker_failed():\n participant_id = request.args.get(\"participant_id\")\n if not participant_id:\n return error_response(\n error_type=\"bad request\", error_text=\"participantId parameter is required\"\n )\n\n try:\n _worker_failed(participant_id)\n except KeyError:\n return error_response(\n error_type=\"ParticipantId not found: {}\".format(participant_id)\n )\n\n return success_response(\n field=\"status\", data=\"success\", request_type=\"worker failed\"\n )", "response": "Fail worker. 
Used by bots only for now."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef worker_function(\n event_type, assignment_id, participant_id, node_id=None, details=None\n):\n \"\"\"Process the notification.\"\"\"\n _config()\n try:\n db.logger.debug(\n \"rq: worker_function working on job id: %s\", get_current_job().id\n )\n db.logger.debug(\n \"rq: Received Queue Length: %d (%s)\", len(q), \", \".join(q.job_ids)\n )\n except AttributeError:\n db.logger.debug(\"Debug worker_function called synchronously\")\n\n exp = Experiment(session)\n key = \"-----\"\n\n exp.log(\n \"Received an {} notification for assignment {}, participant {}\".format(\n event_type, assignment_id, participant_id\n ),\n key,\n )\n\n if event_type == \"TrackingEvent\":\n node = None\n if node_id:\n node = models.Node.query.get(node_id)\n if not node:\n participant = None\n if participant_id:\n # Lookup assignment_id to create notifications\n participant = models.Participant.query.get(participant_id)\n elif assignment_id:\n participants = models.Participant.query.filter_by(\n assignment_id=assignment_id\n ).all()\n # if there are one or more participants select the most recent\n if participants:\n participant = max(participants, key=attrgetter(\"creation_time\"))\n participant_id = participant.id\n if not participant:\n exp.log(\n \"Warning: No participant associated with this \"\n \"TrackingEvent notification.\",\n key,\n )\n return\n nodes = participant.nodes()\n if not nodes:\n exp.log(\n \"Warning: No node associated with this \"\n \"TrackingEvent notification.\",\n key,\n )\n return\n node = max(nodes, key=attrgetter(\"creation_time\"))\n\n if not details:\n details = {}\n info = information.TrackingEvent(origin=node, details=details)\n session.add(info)\n session.commit()\n return\n\n runner_cls = WorkerEvent.for_name(event_type)\n if not runner_cls:\n exp.log(\"Event type {} is not supported... 
ignoring.\".format(event_type))\n return\n\n if assignment_id is not None:\n # save the notification to the notification table\n notif = models.Notification(assignment_id=assignment_id, event_type=event_type)\n session.add(notif)\n session.commit()\n\n # try to identify the participant\n participants = models.Participant.query.filter_by(\n assignment_id=assignment_id\n ).all()\n\n # if there are one or more participants select the most recent\n if participants:\n participant = max(participants, key=attrgetter(\"creation_time\"))\n\n # if there are none print an error\n else:\n exp.log(\n \"Warning: No participants associated with this \"\n \"assignment_id. Notification will not be processed.\",\n key,\n )\n return None\n\n elif participant_id is not None:\n participant = models.Participant.query.filter_by(id=participant_id).all()[0]\n else:\n raise ValueError(\n \"Error: worker_function needs either an assignment_id or a \"\n \"participant_id, they cannot both be None\"\n )\n\n participant_id = participant.id\n\n runner = runner_cls(\n participant, assignment_id, exp, session, _config(), datetime.now()\n )\n runner()\n session.commit()", "response": "Process a queued notification event and dispatch it to the matching WorkerEvent runner."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlist all Heroku apps as JSON.", "response": "def all_apps(self):\n \"\"\"List all Heroku apps as JSON.\"\"\"\n cmd = [\"heroku\", \"apps\", \"--json\"]\n if self.team:\n cmd.extend([\"--team\", self.team])\n return json.loads(self._result(cmd))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates the heroku app and local git remote. Call this once you're in the local repo you're going to use.", "response": "def bootstrap(self):\n \"\"\"Creates the heroku app and local git remote. 
Call this once you're\n in the local repo you're going to use.\n \"\"\"\n cmd = [\"heroku\", \"apps:create\", self.name, \"--buildpack\", \"heroku/python\"]\n\n # If a team is specified, assign the app to the team.\n if self.team:\n cmd.extend([\"--team\", self.team])\n\n self._run(cmd)\n # Set HOST value\n self.set_multiple(\n HOST=self.url, CREATOR=self.login_name(), DALLINGER_UID=self.dallinger_uid\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets up an addon", "response": "def addon(self, name):\n \"\"\"Set up an addon\"\"\"\n cmd = [\"heroku\", \"addons:create\", name, \"--app\", self.name]\n self._run(cmd)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef buildpack(self, url):\n cmd = [\"heroku\", \"buildpacks:add\", url, \"--app\", self.name]\n self._run(cmd)", "response": "Add a buildpack by URL."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef db_uri(self):\n output = self.get(\"DATABASE\", subcommand=\"pg:credentials:url\")\n match = re.search(\"(postgres://.*)$\", output)\n if match is None:\n raise NameError(\n \"Could not retrieve the DB URI. Check for error output from \"\n \"heroku above the stack trace.\"\n )\n return match.group(1)", "response": "The connection URL for the remote database, e.g. postgres://some-long-uid@ec2-52-7-232-59.compute-1.amazonaws.com:5336"}
 {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndestroying an app and all its add-ons", "response": "def destroy(self):\n \"\"\"Destroy an app and all its add-ons\"\"\"\n result = self._result(\n [\"heroku\", \"apps:destroy\", \"--app\", self.name, \"--confirm\", self.name]\n )\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets an app config value by name", "response": "def get(self, key, subcommand=\"config:get\"):\n \"\"\"Get an app config value by name\"\"\"\n cmd = [\"heroku\", subcommand, key, \"--app\", self.name]\n return self._result(cmd)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pg_wait(self):\n retries = 10\n while retries:\n retries = retries - 1\n try:\n self._run([\"heroku\", \"pg:wait\", \"--app\", self.name])\n except subprocess.CalledProcessError:\n time.sleep(5)\n if not retries:\n raise\n else:\n break", "response": "Wait for the DB to be fired up."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef restore(self, url):\n self._run(\n [\n \"heroku\",\n \"pg:backups:restore\",\n \"{}\".format(url),\n \"DATABASE_URL\",\n \"--app\",\n self.name,\n \"--confirm\",\n self.name,\n ]\n )", "response": "Restore the remote database from the URL of a backup."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef scale_up_dyno(self, process, quantity, size):\n self._run(\n [\n \"heroku\",\n \"ps:scale\",\n \"{}={}:{}\".format(process, quantity, size),\n \"--app\",\n self.name,\n ]\n )", "response": "Scale up a dyno."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nturns off web and worker dynos plus clock process.", "response": "def scale_down_dynos(self):\n \"\"\"Turn off web and worker dynos, plus clock process if\n there is one and it's 
active.\n \"\"\"\n processes = [\"web\", \"worker\"]\n if self.clock_is_on:\n processes.append(\"clock\")\n for process in processes:\n self.scale_down_dyno(process)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconfigures an app key value pair", "response": "def set(self, key, value):\n \"\"\"Configure an app key/value pair\"\"\"\n cmd = [\n \"heroku\",\n \"config:set\",\n \"{}={}\".format(key, quote(str(value))),\n \"--app\",\n self.name,\n ]\n if self._is_sensitive_key(key):\n self._run_quiet(cmd)\n else:\n self._run(cmd)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_multiple(self, **kwargs):\n quiet = False\n if not kwargs:\n return\n cmd = [\"heroku\", \"config:set\"]\n for k in sorted(kwargs):\n cmd.append(\"{}={}\".format(k, quote(str(kwargs[k]))))\n if self._is_sensitive_key(k):\n quiet = True\n cmd.extend([\"--app\", self.name])\n if quiet:\n self._run_quiet(cmd)\n else:\n self._run(cmd)", "response": "Configure multiple app key value pairs"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef start(self, timeout_secs=60):\n\n def _handle_timeout(signum, frame):\n raise HerokuTimeoutError(\n \"Failed to start after {} seconds.\".format(timeout_secs, self._record)\n )\n\n if self.is_running:\n self.out.log(\"Local Heroku is already running.\")\n return\n\n signal.signal(signal.SIGALRM, _handle_timeout)\n signal.alarm(timeout_secs)\n self._boot()\n try:\n success = self._verify_startup()\n finally:\n signal.alarm(0)\n\n if not success:\n self.stop(signal.SIGKILL)\n raise HerokuStartupError(\n \"Failed to start for unknown reason: {}\".format(self._record)\n )\n return True", "response": "Start the heroku local subprocess group and verify that it has started successfully."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstopping the heroku local subprocess and all of its children.", 
"response": "def stop(self, signal=None):\n \"\"\"Stop the heroku local subprocess and all of its children.\n \"\"\"\n signal = signal or self.int_signal\n self.out.log(\"Cleaning up local Heroku process...\")\n if self._process is None:\n self.out.log(\"No local Heroku process was running.\")\n return\n\n try:\n os.killpg(os.getpgid(self._process.pid), signal)\n self.out.log(\"Local Heroku process terminated.\")\n except OSError:\n self.out.log(\"Local Heroku was already terminated.\")\n self.out.log(traceback.format_exc())\n finally:\n self._process = None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef monitor(self, listener):\n for line in self._stream():\n self._record.append(line)\n if self.verbose:\n self.out.blather(line)\n if listener(line) is self.MONITOR_STOP:\n return", "response": "Relay the stream to the listener until it signals stop."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load():\n initialize_experiment_package(os.getcwd())\n try:\n try:\n from dallinger_experiment import experiment\n except ImportError:\n from dallinger_experiment import dallinger_experiment as experiment\n\n classes = inspect.getmembers(experiment, inspect.isclass)\n for name, c in classes:\n if \"Experiment\" in c.__bases__[0].__name__:\n return c\n else:\n raise ImportError\n except ImportError:\n logger.error(\"Could not import experiment.\")\n raise", "response": "Load the active experiment."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate the networks if they don't already exist.", "response": "def setup(self):\n \"\"\"Create the networks if they don't already exist.\"\"\"\n if not self.networks():\n for _ in range(self.practice_repeats):\n network = self.create_network()\n network.role = \"practice\"\n self.session.add(network)\n for _ in range(self.experiment_repeats):\n network = 
self.create_network()\n network.role = \"experiment\"\n self.session.add(network)\n self.session.commit()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef networks(self, role=\"all\", full=\"all\"):\n if full not in [\"all\", True, False]:\n raise ValueError(\n \"full must be boolean or all, it cannot be {}\".format(full)\n )\n\n if full == \"all\":\n if role == \"all\":\n return Network.query.all()\n else:\n return Network.query.filter_by(role=role).all()\n else:\n if role == \"all\":\n return Network.query.filter_by(full=full).all()\n else:\n return Network.query.filter(\n and_(Network.role == role, Network.full == full)\n ).all()", "response": "All the networks in the experiment."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_network_for_participant(self, participant):\n key = participant.id\n networks_with_space = (\n Network.query.filter_by(full=False).order_by(Network.id).all()\n )\n networks_participated_in = [\n node.network_id\n for node in Node.query.with_entities(Node.network_id)\n .filter_by(participant_id=participant.id)\n .all()\n ]\n\n legal_networks = [\n net for net in networks_with_space if net.id not in networks_participated_in\n ]\n\n if not legal_networks:\n self.log(\"No networks available, returning None\", key)\n return None\n\n self.log(\n \"{} networks out of {} available\".format(\n len(legal_networks), (self.practice_repeats + self.experiment_repeats)\n ),\n key,\n )\n\n legal_practice_networks = [\n net for net in legal_networks if net.role == \"practice\"\n ]\n if legal_practice_networks:\n chosen_network = legal_practice_networks[0]\n self.log(\n \"Practice networks available. \"\n \"Assigning participant to practice network {}.\".format(\n chosen_network.id\n ),\n key,\n )\n else:\n chosen_network = self.choose_network(legal_networks, participant)\n self.log(\n \"No practice networks available. \"\n \"Assigning 
participant to experiment network {}\".format(\n chosen_network.id\n ),\n key,\n )\n return chosen_network", "response": "Find a network for a participant."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef recruit(self):\n if not self.networks(full=False):\n self.log(\"All networks full: closing recruitment\", \"-----\")\n self.recruiter.close_recruitment()", "response": "Recruit participants to the experiment as needed."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef log_summary(self):\n participants = Participant.query.with_entities(Participant.status).all()\n counts = Counter([p.status for p in participants])\n sorted_counts = sorted(counts.items(), key=itemgetter(0))\n self.log(\"Status summary: {}\".format(str(sorted_counts)))\n return sorted_counts", "response": "Log a summary of all the participants' status codes."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save(self, *objects):\n if len(objects) > 0:\n self.session.add_all(objects)\n self.session.commit()", "response": "Add all the objects to the session and commit them."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef fail_participant(self, participant):\n participant_nodes = Node.query.filter_by(\n participant_id=participant.id, failed=False\n ).all()\n\n for node in participant_nodes:\n node.fail()", "response": "Fail all the nodes of a participant."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeploying and running an experiment.", "response": "def run(self, exp_config=None, app_id=None, bot=False, **kwargs):\n \"\"\"Deploy and run an experiment.\n\n The exp_config object is either a dictionary or a\n ``localconfig.LocalConfig`` object with parameters\n specific to the experiment run grouped by section.\n \"\"\"\n import dallinger as 
dlgr\n\n app_id = self.make_uuid(app_id)\n\n if bot:\n kwargs[\"recruiter\"] = \"bots\"\n\n self.app_id = app_id\n self.exp_config = exp_config or kwargs\n self.update_status(\"Starting\")\n try:\n if self.exp_config.get(\"mode\") == \"debug\":\n dlgr.command_line.debug.callback(\n verbose=True, bot=bot, proxy=None, exp_config=self.exp_config\n )\n else:\n dlgr.deployment.deploy_sandbox_shared_setup(\n dlgr.command_line.log,\n app=app_id,\n verbose=self.verbose,\n exp_config=self.exp_config,\n )\n except Exception:\n self.update_status(\"Errored\")\n raise\n else:\n self.update_status(\"Running\")\n self._await_completion()\n self.update_status(\"Retrieving data\")\n data = self.retrieve_data()\n self.update_status(\"Completed\")\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncollecting data for the provided experiment id.", "response": "def collect(self, app_id, exp_config=None, bot=False, **kwargs):\n \"\"\"Collect data for the provided experiment id.\n\n The ``app_id`` parameter must be a valid UUID.\n If an existing data file is found for the UUID it will\n be returned, otherwise - if the UUID is not already registered -\n the experiment will be run and data collected.\n\n See :meth:`~Experiment.run` method for other parameters.\n \"\"\"\n try:\n results = data_load(app_id)\n self.log(\n \"Data found for experiment {}, retrieving.\".format(app_id),\n key=\"Retrieve:\",\n )\n return results\n except IOError:\n self.log(\n \"Could not fetch data for id: {}, checking registry\".format(app_id),\n key=\"Retrieve:\",\n )\n\n exp_config = exp_config or {}\n if is_registered(app_id):\n raise RuntimeError(\n \"The id {} is registered, \".format(app_id)\n + \"but you do not have permission to access the data\"\n )\n elif kwargs.get(\"mode\") == \"debug\" or exp_config.get(\"mode\") == \"debug\":\n raise RuntimeError(\"No remote or local data found for id {}\".format(app_id))\n\n try:\n assert isinstance(uuid.UUID(app_id, 
version=4), uuid.UUID)\n except (ValueError, AssertionError):\n raise ValueError(\"Invalid UUID supplied {}\".format(app_id))\n\n self.log(\n \"{} appears to be a new experiment id, running experiment.\".format(app_id),\n key=\"Retrieve:\",\n )\n return self.run(exp_config, app_id, bot, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngenerate a new UUID.", "response": "def make_uuid(cls, app_id=None):\n \"\"\"Generates a new UUID.\n This is a class method and can be called as `Experiment.make_uuid()`.\n Takes an optional `app_id` which is converted to a string and, if it\n is a valid UUID, returned.\n \"\"\"\n try:\n if app_id and isinstance(uuid.UUID(str(app_id), version=4), uuid.UUID):\n return str(app_id)\n except (ValueError, AssertionError):\n pass\n return str(uuid.UUID(int=random.getrandbits(128)))"}
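The `make_uuid` pattern in the last record — pass a candidate id through unchanged if it parses as a UUID, otherwise mint a fresh one from random bits — can be exercised on its own. The sketch below is a minimal standalone version under stated assumptions: it is a free function rather than the classmethod on `Experiment`, and the demo id is an arbitrary illustrative value, not anything from Dallinger.

```python
import random
import uuid

def make_uuid(app_id=None):
    """Return app_id if it parses as a UUID; otherwise generate a new UUID string."""
    try:
        # Mirrors the source: a valid UUID string round-trips and is returned as-is.
        if app_id and isinstance(uuid.UUID(str(app_id), version=4), uuid.UUID):
            return str(app_id)
    except (ValueError, AssertionError):
        # Unparseable input falls through to fresh-UUID generation.
        pass
    return str(uuid.UUID(int=random.getrandbits(128)))

existing = "f47ac10b-58cc-4372-a567-0e02b2c3d479"  # hypothetical id for the demo
print(make_uuid(existing))      # a valid id passes through unchanged
print(make_uuid("not-a-uuid"))  # an invalid id is replaced with a fresh UUID
```

Note that `uuid.UUID(..., version=4)` never rejects a well-formed UUID string of another version; the `version` argument only overrides the version and variant bits, so the check is really "is this parseable", not "is this v4".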