\section{\module{code} ---
         Interpreter base classes}

\declaremodule{standard}{code}
\modulesynopsis{Base classes for interactive Python interpreters.}

The \code{code} module provides facilities to implement
read-eval-print loops in Python.  Two classes and convenience
functions are included which can be used to build applications which
provide an interactive interpreter prompt.

\begin{classdesc}{InteractiveInterpreter}{\optional{locals}}
This class deals with parsing and interpreter state (the user's
namespace); it does not deal with input buffering or prompting or
input file naming (the filename is always passed in explicitly).  The
optional \var{locals} argument specifies the dictionary in which code
will be executed; it defaults to a newly created dictionary with key
\code{'__name__'} set to \code{'__console__'} and key
\code{'__doc__'} set to \code{None}.
\end{classdesc}

\begin{classdesc}{InteractiveConsole}{\optional{locals\optional{, filename}}}
Closely emulate the behavior of the interactive Python interpreter.
This class builds on \class{InteractiveInterpreter} and adds
prompting using the familiar \code{sys.ps1} and \code{sys.ps2}, and
input buffering.
\end{classdesc}

\begin{funcdesc}{interact}{\optional{banner\optional{, readfunc\optional{, local}}}}
Convenience function to run a read-eval-print loop.  This creates a
new instance of \class{InteractiveConsole} and sets \var{readfunc} to
be used as the \method{raw_input()} method, if provided.  If
\var{local} is provided, it is passed to the
\class{InteractiveConsole} constructor for use as the default
namespace for the interpreter loop.  The \method{interact()} method
of the instance is then run with \var{banner} passed as the banner to
use, if provided.  The console object is discarded after use.
\end{funcdesc}

\begin{funcdesc}{compile_command}{source\optional{, filename\optional{, symbol}}}
This function is useful for programs that want to emulate Python's
interpreter main loop (a.k.a. the read-eval-print loop).  The tricky
part is to determine when the user has entered an incomplete command
that can be completed by entering more text (as opposed to a complete
command or a syntax error).  This function \emph{almost} always makes
the same decision as the real interpreter main loop.

\var{source} is the source string; \var{filename} is the optional
filename from which source was read, defaulting to \code{'<input>'};
and \var{symbol} is the optional grammar start symbol, which should
be either \code{'single'} (the default) or \code{'eval'}.

Returns a code object (the same as \code{compile(\var{source},
\var{filename}, \var{symbol})}) if the command is complete and valid;
\code{None} if the command is incomplete; raises
\exception{SyntaxError} if the command is complete and contains a
syntax error, or raises \exception{OverflowError} if the command
includes a numeric constant which exceeds the range of the
appropriate numeric type.
\end{funcdesc}


\subsection{Interactive Interpreter Objects
            \label{interpreter-objects}}

\begin{methoddesc}{runsource}{source\optional{, filename\optional{, symbol}}}
Compile and run some source in the interpreter.  Arguments are the
same as for \function{compile_command()}; the default for
\var{filename} is \code{'<input>'}, and for \var{symbol} is
\code{'single'}.  One of several things can happen:

\begin{itemize}
\item
The input is incorrect; \function{compile_command()} raised an
exception (\exception{SyntaxError} or \exception{OverflowError}).  A
syntax traceback will be printed by calling the
\method{showsyntaxerror()} method.  \method{runsource()} returns
\code{0}.

\item
The input is incomplete, and more input is required;
\function{compile_command()} returned \code{None}.
\method{runsource()} returns \code{1}.

\item
The input is complete; \function{compile_command()} returned a code
object.  The code is executed by calling the \method{runcode()}
method (which also handles run-time exceptions, except for
\exception{SystemExit}).  \method{runsource()} returns \code{0}.
\end{itemize}

The return value can be used to decide whether to use \code{sys.ps1}
or \code{sys.ps2} to prompt the next line.
\end{methoddesc}

\begin{methoddesc}{runcode}{code}
Execute a code object.  When an exception occurs,
\method{showtraceback()} is called to display a traceback.  All
exceptions are caught except \exception{SystemExit}, which is allowed
to propagate.

A note about \exception{KeyboardInterrupt}: this exception may occur
elsewhere in this code, and may not always be caught.  The caller
should be prepared to deal with it.
\end{methoddesc}

\begin{methoddesc}{showsyntaxerror}{\optional{filename}}
Display the syntax error that just occurred.  This does not display a
stack trace because there isn't one for syntax errors.  If
\var{filename} is given, it is stuffed into the exception instead of
the default filename provided by Python's parser, because the parser
always uses \code{'<string>'} when reading from a string.  The output
is written by the \method{write()} method.
\end{methoddesc}

\begin{methoddesc}{showtraceback}{}
Display the exception that just occurred.  We remove the first stack
item because it is within the interpreter object implementation.  The
output is written by the \method{write()} method.
\end{methoddesc}

\begin{methoddesc}{write}{data}
Write a string to the standard error stream (\code{sys.stderr}).
Derived classes should override this to provide the appropriate
output handling as needed.
\end{methoddesc}


\subsection{Interactive Console Objects
            \label{console-objects}}

The \class{InteractiveConsole} class is a subclass of
\class{InteractiveInterpreter}, and so offers all the methods of the
interpreter objects as well as the following additions.

\begin{methoddesc}{interact}{\optional{banner}}
Closely emulate the interactive Python console.  The optional
\var{banner} argument specifies the banner to print before the first
interaction; by default it prints a banner similar to the one printed
by the standard Python interpreter, followed by the class name of the
console object in parentheses (so as not to confuse this with the
real interpreter -- since it's so close!).
\end{methoddesc}

\begin{methoddesc}{push}{line}
Push a line of source text to the interpreter.  The line should not
have a trailing newline; it may have internal newlines.  The line is
appended to a buffer and the interpreter's \method{runsource()}
method is called with the concatenated contents of the buffer as
source.  If this indicates that the command was executed or invalid,
the buffer is reset; otherwise, the command is incomplete, and the
buffer is left as it was after the line was appended.  The return
value is \code{1} if more input is required, \code{0} if the line was
dealt with in some way (this is the same as \method{runsource()}).
\end{methoddesc}

\begin{methoddesc}{resetbuffer}{}
Remove any unhandled source text from the input buffer.
\end{methoddesc}

\begin{methoddesc}{raw_input}{\optional{prompt}}
Write a prompt and read a line.  The returned line does not include
the trailing newline.  When the user enters the \EOF{} key sequence,
\exception{EOFError} is raised.  The base implementation uses the
built-in function \function{raw_input()}; a subclass may replace this
with a different implementation.
\end{methoddesc}
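As an illustration of how these pieces fit together, the following
loop uses \function{compile_command()} to collect input until a
complete command has been entered (a minimal sketch; prompting with
\code{sys.ps1}/\code{sys.ps2} and error handling are omitted):

\begin{verbatim}
import code

source = ''
while 1:
    source = source + raw_input('... ') + '\n'
    co = code.compile_command(source)
    if co is not None:
        # The command is complete; run it and start a fresh buffer.
        exec co
        source = ''
\end{verbatim}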
{"hexsha": "36410b28867d6dd983c97ec0c4b4f5d311002e22", "size": 7406, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Doc/lib/libcode.tex", "max_stars_repo_name": "marcosptf/cpython-2.0.1", "max_stars_repo_head_hexsha": "73c739a764e8b1dc84640e73b880bc66e1916bca", "max_stars_repo_licenses": ["PSF-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2022-03-26T21:53:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T21:47:20.000Z", "max_issues_repo_path": "Doc/lib/libcode.tex", "max_issues_repo_name": "marcosptf/cpython-2.0.1", "max_issues_repo_head_hexsha": "73c739a764e8b1dc84640e73b880bc66e1916bca", "max_issues_repo_licenses": ["PSF-2.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-11-18T15:48:14.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-03T21:20:50.000Z", "max_forks_repo_path": "Doc/lib/libcode.tex", "max_forks_repo_name": "marcosptf/cpython-2.0.1", "max_forks_repo_head_hexsha": "73c739a764e8b1dc84640e73b880bc66e1916bca", "max_forks_repo_licenses": ["PSF-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2015-07-16T08:14:13.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-27T01:55:17.000Z", "avg_line_length": 42.32, "max_line_length": 77, "alphanum_fraction": 0.7712665406, "num_tokens": 1769}
import gym, xlwt
import numpy as np
from itertools import count


def initial_excel():
    global worksheet, workbook
    # Use the xlwt library to export data to Excel; the default character
    # encoding is ascii.
    workbook = xlwt.Workbook(encoding='ascii')
    # Add a worksheet; the argument is the sheet name.
    worksheet = workbook.add_sheet('resources usage')
    # Cell styling: set the column widths (3 columns of width 12; 256 is a
    # fixed scale factor used by xlwt).
    for i in range(3):
        worksheet.col(i).width = 256 * 12
    # Set the row height (height 25; 20 is a fixed scale factor).
    worksheet.row(1).height_mismatch = True
    worksheet.row(1).height = 20 * 25
    worksheet.write(0, 0, 'time')
    worksheet.write(0, 1, 'CPU usage(%)')
    worksheet.write(0, 2, 'Memory usage(%)')
    for i in range(3):
        worksheet.write(1, i, 0)
    # Save the Excel file.
    workbook.save('data/tetrisres_monitor.xls')


def check_res(state):
    # Mark jobs whose CPU or memory demand exceeds the remaining resources
    # as invalid (-1), so they cannot be selected.
    job_cpu_demand = state[33:63]
    job_memory_demand = state[63:93]
    cpu_res = state[1]
    memory_res = state[2]
    for i in range(len(job_cpu_demand)):
        if (job_cpu_demand[i] == -1.0) and (job_memory_demand[i] == -1.0):
            continue
        if (job_cpu_demand[i] > cpu_res) or (job_memory_demand[i] > memory_res):
            job_cpu_demand[i] = -1.0
            job_memory_demand[i] = -1.0
    state[33:63] = job_cpu_demand
    state[63:93] = job_memory_demand
    return np.array(state, dtype=np.float32)


def alignment_score(state):
    # Tetris-style heuristic: pick the job whose demand vector is best
    # aligned with the free resource vector; return -1 if no job fits.
    job_cpu_demand = state[33:63]
    job_memory_demand = state[63:93]
    cpu_res = state[1]
    memory_res = state[2]
    alignment_score = cpu_res * job_cpu_demand + memory_res * job_memory_demand
    if all(map(lambda x: x < 0, alignment_score)):
        return -1
    return np.where(alignment_score == np.max(alignment_score))[0][0]


initial_excel()
env = gym.make("clusterEnv-v0").unwrapped
print("Tetris")
line = 2
state = env.reset()
sum_reward = 0  # accumulated reward of the episode
for i in count():
    valid_state = check_res(state)
    action = alignment_score(valid_state)
    if action == -1:
        time, cpu_usage, memory_usage = env.return_res_usage()
        worksheet.write(line, 1, str(100 - cpu_usage) + '%')
        worksheet.write(line, 2, str(100 - memory_usage) + '%')
        line += 1
    next_state, reward, done, info = env.step(action)
    if action == -1:
        time, cpu_usage, memory_usage = env.return_res_usage()
        worksheet.write(line - 1, 0, time)
    sum_reward += reward
    # record resource usage
    state = next_state
    if done:
        print("makespan : ", time)
        break
workbook.save('data/tetrisres_monitor.xls')
env.close()
{"hexsha": "374da368e4d7b8503ff6c3acc8614890d1be3962", "size": 2574, "ext": "py", "lang": "Python", "max_stars_repo_path": "resources_monitor_tetris.py", "max_stars_repo_name": "Livioni/Cloud-Workflow-Scheduling-base-on-Deep-Reinforcement-Learning", "max_stars_repo_head_hexsha": "eb246ebba160567277c9c1aa226e359f48629dac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2022-03-03T08:52:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-11T02:27:57.000Z", "max_issues_repo_path": "resources_monitor_tetris.py", "max_issues_repo_name": "Livioni/Cloud-Workflow-Scheduling-base-on-Deep-Reinforcement-Learning", "max_issues_repo_head_hexsha": "eb246ebba160567277c9c1aa226e359f48629dac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-03-11T02:51:06.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-11T05:02:34.000Z", "max_forks_repo_path": "resources_monitor_tetris.py", "max_forks_repo_name": "Livioni/Cloud-Workflow-Scheduling-base-on-Deep-Reinforcement-Learning", "max_forks_repo_head_hexsha": "eb246ebba160567277c9c1aa226e359f48629dac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.5862068966, "max_line_length": 82, "alphanum_fraction": 0.6317016317, "include": true, "reason": "import numpy", "num_tokens": 778}
r""" The modified gamma distribution PSD =================================== The form of the modified gamma distribution (MGD) used here is as follows: .. math:: \frac{N(X)}{dX} = N \frac{\nu}{\Gamma(1 + \alpha)}\lambda^{\nu(1 + \alpha)} D^{\nu(1 + \alpha) - 1} \cdot \exp \{-(\lambda D)^\nu\}. The distribution is described by four parameters: 1. The intercept parameter :math:`N` 2. The slope parameter :math:`\lambda` 3. The shape parameter :math:`\alpha` 4. The parameter :math:`\nu` """ import numpy as np import scipy as sp from scipy.special import gamma from artssat import dimensions as dim from artssat.scattering.psd.arts.arts_psd import ArtsPSD from artssat.scattering.psd.data.psd_data import PSDData, D_eq class ModifiedGamma(ArtsPSD): r""" The :class:`ModifiedGamma` class describes the size distribution of scattering particles in an atmosphere using the four parameters of the particle size distribution. """ properties = [("intercept_parameter", (dim.p, dim.lat, dim.lon), np.ndarray), ("alpha", (dim.p, dim.lat, dim.lon), np.ndarray), ("lmbd", (dim.p, dim.lat, dim.lon), np.ndarray), ("nu", (dim.p, dim.lat, dim.lon), np.ndarray)] def __init__(self, size_parameter, intercept_parameter = None, alpha = None, lmbd = None, nu = None): r""" Create instance of the modified gamma distribution with given parameters. If any of the parameters is neither provided and nor explicitly set afterwards, it will be requested from the data provider. However, most operations on PSDs will require the values to be set and can thus first be performed when the object has access to the data. Parameters: size_parameter(SizeParameter): The SizeParameter instance describing the size parameter that should be used fo PSD. intercept_parameter(numpy.float or ndarray): The intercept parameter :math:`N` alpha(numpy.float or ndarray): The shape parameter :math:`\alpha`. Must be broadcastable into the shape of N. lmbd(numpy.float or ndarray): The slope parameter :math:`\lambda`. Must be broadcastable into the shape of N. nu(numpy.float or ndarray): The :math:`\nu` parameter. Must be broadcastable into the shape of N. """ if not intercept_parameter is None: self.intercept_parameter = intercept_parameter shape = self.intercept_parameter.shape if not alpha is None: try: self.alpha = np.broadcast_to(alpha, shape) except: raise Exception("Could not broadcast alpha parameter to shape " "of intercept parameter.") if not lmbd is None: try: self.lmbd = np.broadcast_to(lmbd, shape) except: raise Exception("Could not broadcast lambda parameter to shape " " of intercept parameter.") if not nu is None: try: self.nu = np.broadcast_to(nu, shape) except: raise Exception("Could not broadcast nu parameter to shape " " of N parameter.") super().__init__(size_parameter) def _get_parameters(self): """ Checks if parameters of the PSD are available and tries to broadcast them to the shape of the intercept parameter. Returns: :code:`tuple(n, alpha, lmbd, nu)` containing the four parameters of the PSD. Raises: An exception if any of the MGD parameters is not set or cannot be broadcasted. 
""" n = self.intercept_parameter if n is None: raise Exception("The intercept parameter needs to be set to use" " this function.") shape = n.shape # Lambda parameter lmbd = self.lmbd if lmbd is None: raise Exception("The lambda parameter needs to be set to use " "this function") try: lmbd = np.broadcast_to(lmbd, shape) except: raise Exception("Could not broadcast lambda paramter to the shape" "of the provided intercept parameter N.") # Alpha parameter alpha = self.alpha if alpha is None: raise Exception("The alpha parameter needs to be set to use " "this function.") try: alpha = np.broadcast_to(alpha, shape) except: raise Exception("Could not broadcast alpha paramter to the shape" "of the provided intercept parameter N.") # Nu parameter nu = self.nu if nu is None: raise Exception("The nu parameter needs to be set to use this" "function.") try: nu = np.broadcast_to(nu, shape) except: raise Exception("Could not broadcast nu paramter to the shape" "of the provided intercept parameter N.") return n, lmbd, alpha, nu @property def moment_names(self): r""" The free parameters of the PSD. """ return [] def get_moment(self, p, reference_size_parameter = None): r""" Computes the :math:`p` th moment :math:`M(p)` of the PSD using .. math:: M(p) = \frac{N}{\lambda} \frac{\Gamma (1 + \alpha + p / \nu )} {\Gamma({1 + \alpha})}. Parameters: p(np.float): Which moment of the PSD to compute. Raises: Exception: If any of the parameters of the PSD is not set. """ if not reference_size_parameter is None: a1 = self.size_parameter.a b1 = self.size_parameter.b a2 = reference_size_parameter.a b2 = reference_size_parameter.b c = (a1 / a2) ** (p / b2) p = p * b1 / b2 else: c = 1.0 n, lmbd, alpha, nu = self._get_parameters() m = n / lmbd ** p m *= gamma(1 + alpha + p / nu) m /= gamma(1 + alpha) return c * m def get_mass_density(self): r""" Computes the mass density :math: `\rho_m` for the given bulk elements using .. math:: \rho_m = a \cdot M(b). where :math:`a` and :math:`b` are the coefficients of the mass-size relation of the size parameter. Returns: :code:`numpy.ndarray` containing the mass density corresponding to each volume element described by the PSD. """ a = self.size_parameter.a b = self.size_parameter.b return a * self.get_moment(b) @property def pnd_call_agenda(self): r""" ARTS agenda that contains the call to the WSM that computed this PSD. """ n0 = np.nan if not self.intercept_parameter is None \ and self.intercept_parameter.size == 1: n0 = self.intercept_parameter[0] mu = np.nan if not self.mu is None \ and self.mu.size == 1: mu = self.mu[0] lmbd = np.nan if not self.lmbd is None \ and self.lmbd.size == 1: lmbd = self.lmbd[0] nu = np.nan if not self.nu is None \ and self.nu.size == 1: nu = self.nu[0] @arts_agenda def pnd_call(ws): ws.psdMgd(n0 = n0, mu = mu, la = lambd, gam = nu, t_min = self.t_min, t_max = self.t_max) return pnd_call def evaluate(self, x): r""" Computes the values of this modified gamma distribution evaluated at the given size grid :code:`x`. Parameters: x(numpy.array): Array containing the values of the size parameter at which to evaluate the PSD. Returns: :class:`PSDData` object containing the numeric PSD data obtained by evaluating this PSD at the given values of the size parameter. 
""" n, lmbd, alpha, nu = self._get_parameters() shape = n.shape n = n.reshape(shape + (1,)) lmbd = lmbd.reshape(shape + (1,)) alpha = alpha.reshape(shape + (1,)) nu = nu.reshape(shape + (1,)) print(n, lmbd, alpha, nu) y = n * nu / gamma(1 + alpha) y *= lmbd ** (nu * (1.0 + alpha)) y = y * x ** (nu * (1.0 + alpha) - 1) \ * np.exp(- (lmbd * x) ** nu) return PSDData(x, y, self.size_parameter)
{"hexsha": "b79955c038659628794a14f055fbcb460917aea5", "size": 9049, "ext": "py", "lang": "Python", "max_stars_repo_path": "artssat/scattering/psd/modified_gamma.py", "max_stars_repo_name": "simonpf/pARTS", "max_stars_repo_head_hexsha": "b4d9f4c2ceac594273c5589e44fe6a3a4f8d7028", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-09-02T08:20:42.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-18T17:19:38.000Z", "max_issues_repo_path": "artssat/scattering/psd/modified_gamma.py", "max_issues_repo_name": "simonpf/pARTS", "max_issues_repo_head_hexsha": "b4d9f4c2ceac594273c5589e44fe6a3a4f8d7028", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "artssat/scattering/psd/modified_gamma.py", "max_forks_repo_name": "simonpf/pARTS", "max_forks_repo_head_hexsha": "b4d9f4c2ceac594273c5589e44fe6a3a4f8d7028", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.4201388889, "max_line_length": 81, "alphanum_fraction": 0.5399491657, "include": true, "reason": "import numpy,import scipy,from scipy", "num_tokens": 2034}
function F = exclude(X,Y)
%EXCLUDE Excludes a binary solution
%
% F = exclude(X,value)
%
%EXCLUDE is used to avoid a particular binary solution. This can be used
%to repeatedly solve MILP problems while excluding all past solutions
%
% A = randn(30,15);
% b = 25*rand(30,1);
% c = randn(15,1);
% x = binvar(15,1);
% Model = A*x <= b;
% sol = solvesdp(Model,c'*x);
% while sol.problem == 0
%   Model = [Model, exclude(x,double(x))];
%   sol = solvesdp(Model,c'*x);
% end

if isa(X,'sdpvar') & is(X,'binary') & isnumeric(Y) & ismember(Y,[0 1])

    if ~isequal(size(X),size(Y))
        error('Dimension mismatch in EXCLUDE')
    end

    zv = find((Y == 0));
    ov = find((Y == 1));

    lhs = 0;
    if ~isempty(zv)
        lhs = lhs + sum(extsubsref(X,zv));
    end
    if ~isempty(ov)
        lhs = lhs + sum(1-extsubsref(X,ov));
    end

    F = [lhs >= 1];
else
    error('EXCLUDE only applicable to binary variables and data');
end
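% Worked illustration (comments only, assuming a 3x1 binary x): for
% Y = [1;0;1] the generated cut is
%
%   x(2) + (1-x(1)) + (1-x(3)) >= 1
%
% whose left-hand side is zero exactly when x equals Y, so the constraint
% excludes only that single binary assignment from the feasible set.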
{"author": "yalmip", "repo": "YALMIP", "sha": "f6d5a6d4222a4d722de30bffb43cae4b3e13b860", "save_path": "github-repos/MATLAB/yalmip-YALMIP", "path": "github-repos/MATLAB/yalmip-YALMIP/YALMIP-f6d5a6d4222a4d722de30bffb43cae4b3e13b860/@sdpvar/exclude.m"}
[STATEMENT]
lemma list_induct_2_rev[consumes 1, case_names Nil Cons]:
  assumes "length x = length y"
  assumes "P [] []"
  assumes "\<And>x xs y ys. length xs = length ys \<Longrightarrow> P xs ys \<Longrightarrow> P (xs@[x]) (ys@[y])"
  shows "P x y"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. P x y
[PROOF STEP]
using assms(1)
[PROOF STATE]
proof (prove)
using this:
length x = length y
goal (1 subgoal):
 1. P x y
[PROOF STEP]
proof (induct "length x" arbitrary: x y)
[PROOF STATE]
proof (state)
goal (2 subgoals):
 1. \<And>x y. \<lbrakk>0 = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
 2. \<And>xa x y. \<lbrakk>\<And>x y. \<lbrakk>xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y; Suc xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
[PROOF STEP]
case 0
[PROOF STATE]
proof (state)
this:
0 = length x
length x = length y
goal (2 subgoals):
 1. \<And>x y. \<lbrakk>0 = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
 2. \<And>xa x y. \<lbrakk>\<And>x y. \<lbrakk>xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y; Suc xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
0 = length x
length x = length y
[PROOF STEP]
show ?case
[PROOF STATE]
proof (prove)
using this:
0 = length x
length x = length y
goal (1 subgoal):
 1. P x y
[PROOF STEP]
using assms(2)
[PROOF STATE]
proof (prove)
using this:
0 = length x
length x = length y
P [] []
goal (1 subgoal):
 1. P x y
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
P x y
goal (1 subgoal):
 1. \<And>xa x y. \<lbrakk>\<And>x y. \<lbrakk>xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y; Suc xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (1 subgoal):
 1. \<And>xa x y. \<lbrakk>\<And>x y. \<lbrakk>xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y; Suc xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
[PROOF STEP]
case (Suc n)
[PROOF STATE]
proof (state)
this:
\<lbrakk>n = length ?x; length ?x = length ?y\<rbrakk> \<Longrightarrow> P ?x ?y
Suc n = length x
length x = length y
goal (1 subgoal):
 1. \<And>xa x y. \<lbrakk>\<And>x y. \<lbrakk>xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y; Suc xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
[PROOF STEP]
obtain x1 x2 where a:"x = x1@[x2]" and c:"length x1 = n"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. (\<And>x1 x2. \<lbrakk>x = x1 @ [x2]; length x1 = n\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by (metis Suc(2) append_butlast_last_id length_append_singleton length_greater_0_conv nat.inject zero_less_Suc)
[PROOF STATE]
proof (state)
this:
x = x1 @ [x2]
length x1 = n
goal (1 subgoal):
 1. \<And>xa x y. \<lbrakk>\<And>x y. \<lbrakk>xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y; Suc xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
[PROOF STEP]
obtain y1 y2 where b:"y = y1@[y2]" and d:"length y1 = n"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. (\<And>y1 y2. \<lbrakk>y = y1 @ [y2]; length y1 = n\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by (metis Suc(2,3) append_butlast_last_id length_append_singleton length_greater_0_conv nat.inject zero_less_Suc)
[PROOF STATE]
proof (state)
this:
y = y1 @ [y2]
length y1 = n
goal (1 subgoal):
 1. \<And>xa x y. \<lbrakk>\<And>x y. \<lbrakk>xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y; Suc xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
[PROOF STEP]
have "P x1 y1"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. P x1 y1
[PROOF STEP]
using c d Suc
[PROOF STATE]
proof (prove)
using this:
length x1 = n
length y1 = n
\<lbrakk>n = length ?x; length ?x = length ?y\<rbrakk> \<Longrightarrow> P ?x ?y
Suc n = length x
length x = length y
goal (1 subgoal):
 1. P x1 y1
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
P x1 y1
goal (1 subgoal):
 1. \<And>xa x y. \<lbrakk>\<And>x y. \<lbrakk>xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y; Suc xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
[PROOF STEP]
hence "P (x1@[x2]) (y1@[y2])"
[PROOF STATE]
proof (prove)
using this:
P x1 y1
goal (1 subgoal):
 1. P (x1 @ [x2]) (y1 @ [y2])
[PROOF STEP]
using assms(3) c d
[PROOF STATE]
proof (prove)
using this:
P x1 y1
\<lbrakk>length ?xs = length ?ys; P ?xs ?ys\<rbrakk> \<Longrightarrow> P (?xs @ [?x]) (?ys @ [?y])
length x1 = n
length y1 = n
goal (1 subgoal):
 1. P (x1 @ [x2]) (y1 @ [y2])
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
P (x1 @ [x2]) (y1 @ [y2])
goal (1 subgoal):
 1. \<And>xa x y. \<lbrakk>\<And>x y. \<lbrakk>xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y; Suc xa = length x; length x = length y\<rbrakk> \<Longrightarrow> P x y
[PROOF STEP]
thus ?case
[PROOF STATE]
proof (prove)
using this:
P (x1 @ [x2]) (y1 @ [y2])
goal (1 subgoal):
 1. P x y
[PROOF STEP]
using a b
[PROOF STATE]
proof (prove)
using this:
P (x1 @ [x2]) (y1 @ [y2])
x = x1 @ [x2]
y = y1 @ [y2]
goal (1 subgoal):
 1. P x y
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
P x y
goal:
No subgoals!
[PROOF STEP]
qed
{"llama_tokens": 2324, "file": "Equivalence_Relation_Enumeration_Equivalence_Relation_Enumeration", "length": 23}
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Functional tests for AdagradDA operations."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np

from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import test_util
from tensorflow.python.ops import embedding_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import resource_variable_ops
from tensorflow.python.ops import variables
from tensorflow.python.platform import test
from tensorflow.python.training import adagrad_da


class AdagradDAOptimizerTest(test.TestCase):

  def doTestAdagradDAwithoutRegularizationBasic1(self, use_resource=False):
    for dtype in [dtypes.float64, dtypes.float32]:
      with self.cached_session() as sess:
        global_step = variables.Variable(0, dtype=dtypes.int64)
        if use_resource:
          var0 = resource_variable_ops.ResourceVariable([0.0, 0.0],
                                                        dtype=dtype)
          var1 = resource_variable_ops.ResourceVariable([0.0, 0.0],
                                                        dtype=dtype)
        else:
          var0 = variables.Variable([0.0, 0.0], dtype=dtype)
          var1 = variables.Variable([0.0, 0.0], dtype=dtype)
        grads0 = constant_op.constant([0.1, 0.2], dtype=dtype)
        grads1 = constant_op.constant([0.01, 0.02], dtype=dtype)
        opt = adagrad_da.AdagradDAOptimizer(
            3.0,
            global_step,
            initial_gradient_squared_accumulator_value=0.1,
            l1_regularization_strength=0.0,
            l2_regularization_strength=0.0)
        update = opt.apply_gradients(
            zip([grads0, grads1], [var0, var1]), global_step=global_step)
        variables.global_variables_initializer().run()

        v0_val, v1_val = self.evaluate([var0, var1])
        self.assertAllClose([0.0, 0.0], v0_val)
        self.assertAllClose([0.0, 0.0], v1_val)

        # Run a step of AdagradDA
        update.run()

        v0_val, v1_val = self.evaluate([var0, var1])
        # Let g be the gradient accumulator, gg be the gradient squared
        # accumulator, T be the global step, lr be the learning rate,
        # and k the initial gradient squared accumulator value.
        # w = \dfrac{sign(-g)*lr*|g - l1*T|_{+}}{l2*T*lr + \sqrt{k+gg})}
        # For -0.1*3.0*(0.1 - 0)/(0 + sqrt(0.1 + 0.1*0.1)) = -0.904534
        # similarly for others.
        self.assertAllCloseAccordingToType(
            np.array([-0.904534, -1.603567]), v0_val)
        self.assertAllCloseAccordingToType(
            np.array([-0.094821, -0.189358]), v1_val)

  @test_util.run_deprecated_v1
  def testAdagradDAWithoutRegularizationBasic1(self):
    self.doTestAdagradDAwithoutRegularizationBasic1()

  @test_util.run_deprecated_v1
  def testResourceAdagradDAWithoutRegularizationBasic1(self):
    self.doTestAdagradDAwithoutRegularizationBasic1(use_resource=True)

  @test_util.run_deprecated_v1
  def testMinimizeSparseResourceVariable(self):
    for dtype in [dtypes.float32, dtypes.float64]:
      with self.cached_session():
        var0 = resource_variable_ops.ResourceVariable([[1.0, 2.0]],
                                                      dtype=dtype)
        global_step = resource_variable_ops.ResourceVariable(
            0, dtype=dtypes.int64)
        x = constant_op.constant([[4.0], [5.0]], dtype=dtype)
        pred = math_ops.matmul(embedding_ops.embedding_lookup([var0], [0]), x)
        loss = pred * pred
        sgd_op = adagrad_da.AdagradDAOptimizer(
            1.0, global_step).minimize(loss)
        variables.global_variables_initializer().run()
        # Fetch params to validate initial values
        self.assertAllCloseAccordingToType([[1.0, 2.0]], self.evaluate(var0))
        # Run 1 step of sgd
        sgd_op.run()
        # Validate updated params
        self.assertAllCloseAccordingToType([[-1, -1]],
                                           self.evaluate(var0),
                                           rtol=0.01)

  @test_util.run_deprecated_v1
  def testAdagradDAwithoutRegularizationBasic2(self):
    for dtype in [dtypes.float64, dtypes.float32]:
      with self.cached_session() as sess:
        global_step = variables.Variable(0, dtype=dtypes.int64)
        var0 = variables.Variable([1.0, 2.0], dtype=dtype)
        var1 = variables.Variable([4.0, 3.0], dtype=dtype)
        grads0 = constant_op.constant([0.1, 0.2], dtype=dtype)
        grads1 = constant_op.constant([0.01, 0.02], dtype=dtype)
        opt = adagrad_da.AdagradDAOptimizer(
            3.0,
            global_step,
            initial_gradient_squared_accumulator_value=0.1,
            l1_regularization_strength=0.0,
            l2_regularization_strength=0.0)
        update = opt.apply_gradients(
            zip([grads0, grads1], [var0, var1]), global_step=global_step)
        variables.global_variables_initializer().run()

        v0_val, v1_val = self.evaluate([var0, var1])
        self.assertAllCloseAccordingToType([1.0, 2.0], v0_val)
        self.assertAllCloseAccordingToType([4.0, 3.0], v1_val)

        # Run a step of AdagradDA
        update.run()

        v0_val, v1_val = self.evaluate([var0, var1])
        self.assertAllCloseAccordingToType(
            np.array([-0.904534, -1.603567]), v0_val)
        self.assertAllCloseAccordingToType(
            np.array([-0.094821, -0.189358]), v1_val)

  @test_util.run_deprecated_v1
  def testAdagradDAWithL1(self):
    for dtype in [dtypes.float64, dtypes.float32]:
      with self.cached_session() as sess:
        global_step = variables.Variable(0, dtype=dtypes.int64)
        var0 = variables.Variable([1.0, 2.0], dtype=dtype)
        var1 = variables.Variable([4.0, 3.0], dtype=dtype)
        grads0 = constant_op.constant([0.1, 0.2], dtype=dtype)
        grads1 = constant_op.constant([0.01, 0.02], dtype=dtype)
        opt = adagrad_da.AdagradDAOptimizer(
            3.0,
            global_step,
            initial_gradient_squared_accumulator_value=0.1,
            l1_regularization_strength=0.001,
            l2_regularization_strength=0.0)
        update = opt.apply_gradients(
            zip([grads0, grads1], [var0, var1]), global_step=global_step)
        variables.global_variables_initializer().run()

        v0_val, v1_val = self.evaluate([var0, var1])
        self.assertAllCloseAccordingToType([1.0, 2.0], v0_val)
        self.assertAllCloseAccordingToType([4.0, 3.0], v1_val)

        # Run a step of AdagradDA
        update.run()

        v0_val, v1_val = self.evaluate([var0, var1])
        self.assertAllCloseAccordingToType(
            np.array([-0.895489, -1.59555]), v0_val)
        self.assertAllCloseAccordingToType(
            np.array([-0.085339, -0.17989]), v1_val)

  @test_util.run_deprecated_v1
  def testAdagradDAWithL1_L2(self):
    for dtype in [dtypes.float64, dtypes.float32]:
      with self.cached_session() as sess:
        global_step = variables.Variable(0, dtype=dtypes.int64)
        var0 = variables.Variable([1.0, 2.0], dtype=dtype)
        var1 = variables.Variable([4.0, 3.0], dtype=dtype)
        grads0 = constant_op.constant([0.1, 0.2], dtype=dtype)
        grads1 = constant_op.constant([0.01, 0.02], dtype=dtype)
        opt = adagrad_da.AdagradDAOptimizer(
            3.0,
            global_step,
            initial_gradient_squared_accumulator_value=0.1,
            l1_regularization_strength=0.001,
            l2_regularization_strength=2.0)
        update = opt.apply_gradients(
            zip([grads0, grads1], [var0, var1]), global_step=global_step)
        variables.global_variables_initializer().run()

        v0_val, v1_val = self.evaluate([var0, var1])
        self.assertAllCloseAccordingToType([1.0, 2.0], v0_val)
        self.assertAllCloseAccordingToType([4.0, 3.0], v1_val)

        # Run a step of AdagradDA
        update.run()

        v0_val, v1_val = self.evaluate([var0, var1])
        self.assertAllCloseAccordingToType(
            np.array([-0.046907, -0.093659]), v0_val)
        self.assertAllCloseAccordingToType(
            np.array([-0.004275, -0.009023]), v1_val)


if __name__ == "__main__":
  test.main()
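
# Standalone arithmetic check of the closed form quoted above (illustrative,
# not part of the test suite): after one step with lr=3.0, g=0.1, k=0.1 and
# no regularization (l1=l2=0, T=1),
#
#   w = sign(-g) * lr * |g| / sqrt(k + g**2)
#     = -3.0 * 0.1 / sqrt(0.1 + 0.01)
#     = -0.904534...
#
# which matches the first expected entry of var0 in the basic tests.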
{"hexsha": "0730618e31f272cc87e06256c0482f9f9a80db9e", "size": 8763, "ext": "py", "lang": "Python", "max_stars_repo_path": "tensorflow/python/training/adagrad_da_test.py", "max_stars_repo_name": "abhaikollara/tensorflow", "max_stars_repo_head_hexsha": "4f96df3659696990cb34d0ad07dc67843c4225a9", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 848, "max_stars_repo_stars_event_min_datetime": "2019-12-03T00:16:17.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T22:53:17.000Z", "max_issues_repo_path": "tensorflow/python/training/adagrad_da_test.py", "max_issues_repo_name": "abhaikollara/tensorflow", "max_issues_repo_head_hexsha": "4f96df3659696990cb34d0ad07dc67843c4225a9", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 656, "max_issues_repo_issues_event_min_datetime": "2019-12-03T00:48:46.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T18:41:54.000Z", "max_forks_repo_path": "tensorflow/python/training/adagrad_da_test.py", "max_forks_repo_name": "abhaikollara/tensorflow", "max_forks_repo_head_hexsha": "4f96df3659696990cb34d0ad07dc67843c4225a9", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 506, "max_forks_repo_forks_event_min_datetime": "2019-12-03T00:46:26.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T10:34:56.000Z", "avg_line_length": 41.9282296651, "max_line_length": 80, "alphanum_fraction": 0.656852676, "include": true, "reason": "import numpy", "num_tokens": 2348}
theory Post_Visibility_Traceback
  imports Traceback_Intro
begin

consts PID :: postID
consts VIS :: vis

subsection \<open>Tracing Back Post Visibility Status\<close>

text \<open>We prove the following traceback property:

If, at some point \<open>t\<close> on a system trace, the visibility of a post
\<open>PID\<close> has a value \<open>VIS\<close>, then one of the following holds:
\begin{itemize}
\item Either \<open>VIS\<close> is \<open>FriendV\<close> (i.e., friends-only) which is
the default at post creation
\item Or the post's owner had issued a successful ``update visibility'' action
setting the visibility to \<open>VIS\<close>, and no other successful update actions
to \<open>PID\<close>'s visibility occur between the time of that action and \<open>t\<close>.
\end{itemize}

This will be captured in the predicate \<open>proper\<close>, and the main theorem
states that \<open>proper tr\<close> holds for any trace \<open>tr\<close> that leads to
post \<open>PID\<close> acquiring visibility \<open>VIS\<close>.
\<close>

text \<open>\<open>SNC uidd trn\<close> means
``The transaction \<open>trn\<close> is a successful post creation by user \<open>uidd\<close>''
\<close>

fun SNC :: "userID \<Rightarrow> (state,act,out) trans \<Rightarrow> bool" where
  "SNC uidd (Trans s (Cact (cPost uid p pid tit)) ou s') =
     (ou = outOK \<and> (uid,pid) = (uidd,PID))"
| "SNC uidd _ = False"

text \<open>\<open>SNVU uidd vvs trn\<close> means
``The transaction \<open>trn\<close> is a successful post visibility update for
\<open>PID\<close>, by user \<open>uidd\<close>, to value \<open>vvs\<close>''
\<close>

fun SNVU :: "userID \<Rightarrow> vis \<Rightarrow> (state,act,out) trans \<Rightarrow> bool" where
  "SNVU uidd vvs (Trans s (Uact (uVisPost uid p pid vs)) ou s') =
     (ou = outOK \<and> (uid,pid) = (uidd,PID) \<and> vs = vvs)"
| "SNVU uidd vvis _ = False"

definition proper :: "(state,act,out) trans trace \<Rightarrow> bool" where
  "proper tr \<equiv>
     VIS = FriendV
     \<or>
     (\<exists> uid tr1 trn tr2 trnn tr3.
        tr = tr1 @ trn # tr2 @ trnn # tr3 \<and>
        SNC uid trn \<and> SNVU uid VIS trnn \<and>
        (\<forall> vis. never (SNVU uid vis) tr3))"

(* *)

definition proper1 :: "(state,act,out) trans trace \<Rightarrow> bool" where
  "proper1 tr \<equiv>
     \<exists> tr2 trnn tr3.
       tr = tr2 @ trnn # tr3 \<and> SNVU (owner (srcOf trnn) PID) VIS trnn"

lemma not_never_ex:
  assumes "\<not> never P xs"
  shows "\<exists> xs1 x xs2. xs = xs1 @ x # xs2 \<and> P x \<and> never P xs2"
using assms proof(induct xs rule: rev_induct)
  case (Nil)
  thus ?case unfolding list_all_iff empty_iff by auto
next
  case (snoc y ys)
  show ?case
  proof(cases "P y")
    case True
    thus ?thesis using snoc
      apply(intro exI[of _ ys])
      apply(intro exI[of _ y] exI[of _ "[]"]) by auto
  next
    case False
    then obtain xs1 x xs2 where "ys = xs1 @ x # xs2 \<and> P x \<and> never P xs2"
      using snoc by auto
    thus ?thesis using snoc False
      apply(intro exI[of _ xs1])
      apply(intro exI[of _ x] exI[of _ "xs2 ## y"]) by auto
  qed
qed

lemma SNVU_postIDs:
  assumes "validTrans trn" and "SNVU uid vs trn"
  shows "PID \<in>\<in> postIDs (srcOf trn)"
proof(cases trn)
  case (Trans s a ou s')
  then show ?thesis using assms
    by (cases "a") (auto simp: all_defs elim: step_elims)
qed

lemma SNVU_visib:
  assumes "validTrans trn" and "SNVU uid vs trn"
  shows "vis (tgtOf trn) PID = vs"
proof(cases trn)
  case (Trans s a ou s')
  then show ?thesis using assms
    by (cases "a") (auto simp: all_defs elim: step_elims)
qed

lemma owner_validTrans:
  assumes "validTrans trn" and "PID \<in>\<in> postIDs (srcOf trn)"
  shows "owner (srcOf trn) PID = owner (tgtOf trn) PID"
proof(cases trn)
  case (Trans s a ou s')
  then show ?thesis using assms
    by (cases "a") (auto simp: all_defs elim: step_elims)
qed

lemma owner_valid:
  assumes "valid tr" and "PID \<in>\<in> postIDs (srcOf (hd tr))"
  shows "owner (srcOf (hd tr)) PID = owner (tgtOf (last tr)) PID"
using assms using owner_validTrans IDs_mono validTrans by induct auto

lemma SNVU_vis_validTrans:
  assumes "validTrans trn" and "PID \<in>\<in> postIDs (srcOf trn)"
  and "\<forall> vs. \<not> SNVU (owner (srcOf trn) PID) vs trn"
  shows "vis (srcOf trn) PID = vis (tgtOf trn) PID"
proof(cases trn)
  case (Trans s a ou s')
  then show ?thesis using assms
    by (cases "a") (auto simp: all_defs elim: step_elims)
qed

lemma SNVU_vis_valid:
  assumes "valid tr" and "PID \<in>\<in> postIDs (srcOf (hd tr))"
  and "\<forall> vis. never (SNVU (owner (srcOf (hd tr)) PID) vis) tr"
  shows "vis (srcOf (hd tr)) PID = vis (tgtOf (last tr)) PID"
using assms proof induct
  case (Singl)
  thus ?case using SNVU_vis_validTrans by auto
next
  case (Cons trn tr)
  have n: "PID \<in>\<in> postIDs (srcOf (hd tr))"
    using Cons by (simp add: IDs_mono(2) validTrans)
  have v: "\<forall> vis. never (SNVU (owner (srcOf (hd tr)) PID) vis) tr"
    using Cons by (simp add: owner_validTrans)
  have "vis (srcOf trn) PID = vis (srcOf (hd tr)) PID"
    using Cons SNVU_vis_validTrans by auto
  also have "... = vis (tgtOf (last tr)) PID" using n v Cons(4) by auto
  finally show ?case using Cons by auto
qed

lemma proper1_never:
  assumes vtr: "valid tr" and PID: "PID \<in>\<in> postIDs (srcOf (hd tr))"
  and tr: "proper1 tr" and v: "vis (tgtOf (last tr)) PID = VIS"
  shows "\<exists> tr2 trnn tr3.
           tr = tr2 @ trnn # tr3 \<and>
           SNVU (owner (srcOf trnn) PID) VIS trnn \<and>
           (\<forall> vis. never (SNVU (owner (srcOf trnn) PID) vis) tr3)"
proof-
  obtain tr2 trnn tr3 where tr: "tr = tr2 @ trnn # tr3"
  and SNVU: "SNVU (owner (srcOf trnn) PID) VIS trnn"
    using tr unfolding proper1_def by auto
  define uid where "uid \<equiv> owner (srcOf trnn) PID"
  show ?thesis
  proof(cases "never (\<lambda> trn. \<exists> vis. SNVU uid vis trn) tr3")
    case True
    thus ?thesis using tr SNVU unfolding uid_def list_all_iff by blast
  next
    case False
    from not_never_ex[OF this]
    obtain tr3a tr3n tr3b vs where tr3: "tr3 = tr3a @ tr3n # tr3b"
    and SNVUtr3n: "SNVU uid vs tr3n" and n: "\<forall> vs. never (SNVU uid vs) tr3b"
      unfolding list_all_iff by blast
    have trnn: "validTrans trnn" and tr3n: "validTrans tr3n" and vtr3: "valid tr3"
      using tr unfolding tr tr3
      by (metis Nil_is_append_conv append_self_conv2 list.distinct(1)
                tr tr3 valid_ConsE valid_append vtr)+
    hence PID_trnn: "PID \<in>\<in> postIDs (srcOf trnn)"
    and PID_tr3n: "PID \<in>\<in> postIDs (srcOf tr3n)"
      using SNVU_postIDs SNVU SNVUtr3n by auto
    have vvv: "valid (trnn # tr3a @ [tr3n])"
      using vtr unfolding tr tr3
      by (smt Nil_is_append_conv append_self_conv2 hd_append2 list.distinct(1)
              list.sel(1) valid_Cons_iff valid_append)
    hence PID_tr3n': "PID \<in>\<in> postIDs (tgtOf tr3n)"
      using tr3n SNVUtr3n by (simp add: IDs_mono(2) PID_tr3n validTrans)
    from owner_valid[OF vvv] PID_trnn
    have 000: "owner (tgtOf tr3n) PID = uid" unfolding uid_def by simp
    hence 0: "owner (srcOf tr3n) PID = uid"
      using PID_tr3n owner_validTrans tr3n by blast
    have 00: "vs = vis (tgtOf tr3n) PID" using SNVUtr3n tr3n SNVU_visib by auto
    have vis: "vs = VIS"
    proof(cases "tr3b = []")
      case True
      thus ?thesis using v 00 unfolding tr tr3 by simp
    next
      case False
      hence tgt: "tgtOf tr3n = srcOf (hd tr3b)" and tr3b: "valid tr3b"
        using vtr3 unfolding tr3
        apply (metis valid_append list.distinct(2) self_append_conv2 valid_ConsE)
        by (metis False append_self_conv2 list.distinct(1) tr3
                  valid_Cons_iff valid_append vtr3)
      show ?thesis unfolding 00 tgt using v False PID_tr3n'
        using SNVU_vis_valid[OF tr3b _ n[unfolded 000[symmetric] tgt]]
        unfolding tr tr3 tgt by simp
    qed
    show ?thesis
      apply(intro exI[of _ "tr2 @ trnn # tr3a"])
      apply(intro exI[of _ tr3n] exI[of _ tr3b])
      using SNVUtr3n n unfolding tr tr3 0 vis by simp
  qed
qed

(* *)

lemma SNVU_validTrans:
  assumes "validTrans trn" and "PID \<in>\<in> postIDs (srcOf trn)"
  and "vis (srcOf trn) PID \<noteq> VIS" and "vis (tgtOf trn) PID = VIS"
  shows "SNVU (owner (srcOf trn) PID) VIS trn"
proof(cases trn)
  case (Trans s a ou s')
  then show ?thesis using assms
    by (cases "a") (auto simp: all_defs elim: step_elims)
qed

lemma valid_mono_postID:
  assumes "valid tr" and "PID \<in>\<in> postIDs (srcOf (hd tr))"
  shows "PID \<in>\<in> postIDs (tgtOf (last tr))"
using assms proof induct
  case (Singl trn)
  then show ?case using IDs_mono(2) by (cases trn) auto
next
  case (Cons trn tr)
  then show ?case using IDs_mono(2) by (cases trn) auto
qed

lemma proper1_valid:
  assumes V: "VIS \<noteq> FriendV"
  and a: "valid tr" "PID \<in>\<in> postIDs (srcOf (hd tr))"
         "vis (srcOf (hd tr)) PID \<noteq> VIS" "vis (tgtOf (last tr)) PID = VIS"
  shows "proper1 tr"
using a unfolding valid_valid2 proof induct
  case (Singl trn)
  then show ?case unfolding proper1_def using SNVU_validTrans
    by (intro exI[of _ "owner (srcOf trn) PID"] exI[of _ "[]"] exI[of _ trn]) auto
next
  case (Rcons tr trn)
  hence "PID \<in>\<in> postIDs (srcOf (hd tr))" using Rcons by simp
  from valid_mono_postID[OF \<open>valid2 tr\<close>[unfolded valid2_valid] this]
  have "PID \<in>\<in> postIDs (tgtOf (last tr))" by simp
  hence 0: "PID \<in>\<in> postIDs (srcOf trn)" using Rcons by simp
  show ?case
  proof(cases "vis (srcOf trn) PID = VIS")
    case False
    hence "SNVU (owner (srcOf trn) PID) VIS trn"
      apply (intro SNVU_validTrans) using 0 Rcons by auto
    thus ?thesis unfolding proper1_def
      by (intro exI[of _ tr] exI[of _ trn] exI[of _ "[]"]) auto
  next
    case True
    hence "proper1 tr" using Rcons by auto
    then obtain trr trnn tr3 where tr: "tr = trr @ trnn # tr3"
    and SNVU: "SNVU (owner (srcOf trnn) PID) VIS trnn"
      unfolding proper1_def using V by auto
    have "vis (tgtOf trn) PID = VIS" using Rcons.prems by auto
    thus ?thesis using SNVU V unfolding proper1_def tr
      by(intro exI[of _ trr] exI[of _ trnn] exI[of _ "tr3 ## trn"]) auto
  qed
qed

lemma istate_postIDs: "\<not> PID \<in>\<in> postIDs istate"
unfolding istate_def by simp

(* *)

definition proper2 :: "(state,act,out) trans trace \<Rightarrow> bool" where
  "proper2 tr \<equiv> \<exists> uid tr1 trn tr2. tr = tr1 @ trn # tr2 \<and> SNC uid trn"

lemma SNC_validTrans:
  assumes "VIS \<noteq> FriendV" and "validTrans trn"
  and "\<not> PID \<in>\<in> postIDs (srcOf trn)" and "PID \<in>\<in> postIDs (tgtOf trn)"
  shows "\<exists> uid. SNC uid trn"
proof(cases trn)
  case (Trans s a ou s')
  then show ?thesis using assms
    by (cases "a") (auto simp: all_defs elim: step_elims)
qed

lemma proper2_valid:
  assumes V: "VIS \<noteq> FriendV"
  and a: "valid tr" "\<not> PID \<in>\<in> postIDs (srcOf (hd tr))"
         "PID \<in>\<in> postIDs (tgtOf (last tr))"
  shows "proper2 tr"
using a unfolding valid_valid2 proof induct
  case (Singl trn)
  then obtain uid where "SNC uid trn" using SNC_validTrans V by auto
  thus ?case unfolding proper2_def using SNC_validTrans
    by (intro exI[of _ uid] exI[of _ "[]"] exI[of _ trn]) auto
next
  case (Rcons tr trn)
  show ?case
  proof(cases "PID \<in>\<in> postIDs (srcOf trn)")
    case False
    then obtain uid where "SNC uid trn" using Rcons SNC_validTrans V by auto
    thus ?thesis unfolding proper2_def apply -
      apply (intro exI[of _ uid] exI[of _ tr])
      by (intro exI[of _ trn] exI[of _ "[]"]) auto
  next
    case True
    hence "proper2 tr" using Rcons by auto
    then obtain uid tr1 trnn tr2 where tr: "tr = tr1 @ trnn # tr2"
    and SFRC: "SNC uid trnn" unfolding proper2_def by auto
    have "PID \<in>\<in> postIDs (tgtOf trn)" using V Rcons.prems by auto
    show ?thesis using SFRC unfolding proper2_def tr apply -
      apply (intro exI[of _ uid] exI[of _ tr1])
      by (intro exI[of _ trnn] exI[of _ "tr2 ## trn"]) simp
  qed
qed

lemma proper2_valid_istate:
  assumes V: "VIS \<noteq> FriendV"
  and a: "valid tr" "srcOf (hd tr) = istate" "PID \<in>\<in> postIDs (tgtOf (last tr))"
  shows "proper2 tr"
using proper2_valid assms istate_postIDs by auto

(* *)

lemma SNC_visPost:
  assumes "VIS \<noteq> FriendV" and "validTrans trn" "SNC uid trn"
  and "reach (srcOf trn)"
  shows "vis (tgtOf trn) PID \<noteq> VIS"
proof(cases trn)
  case (Trans s a ou s')
  then show ?thesis using assms apply (cases "a")
    apply (auto simp: all_defs elim: step_elims)
    subgoal for x2 apply(cases x2)
      using reach_not_postIDs_vis_FriendV
      by (auto simp: all_defs elim: step_elims) .
qed

lemma SNC_postIDs:
  assumes "validTrans trn" and "SNC uid trn"
  shows "PID \<in>\<in> postIDs (tgtOf trn)"
proof(cases trn)
  case (Trans s a ou s')
  then show ?thesis using assms
    by (cases "a") (auto simp: all_defs elim: step_elims)
qed

lemma SNC_owner:
  assumes "validTrans trn" and "SNC uid trn"
  shows "uid = owner (tgtOf trn) PID"
proof(cases trn)
  case (Trans s a ou s')
  then show ?thesis using assms
    by (cases "a") (auto simp: all_defs elim: step_elims)
qed

theorem post_accountability:
  assumes v: "valid tr" and i: "srcOf (hd tr) = istate"
  and PIDin: "PID \<in>\<in> postIDs (tgtOf (last tr))"
  and PID: "vis (tgtOf (last tr)) PID = VIS"
  shows "proper tr"
proof(cases "VIS = FriendV")
  case True
  thus ?thesis unfolding proper_def by auto
next
  case False
  have "proper2 tr" using proper2_valid_istate[OF False v i PIDin] .
  then obtain uid tr1 trn trr where tr: "tr = tr1 @ trn # trr"
  and SNC: "SNC uid trn" unfolding proper2_def by auto
  hence trn: "validTrans trn" and r: "reach (srcOf trn)"
    using v unfolding tr
    apply (metis list.distinct(2) self_append_conv2 valid_ConsE valid_append)
    by (metis (mono_tags, lifting) append_Cons hd_append i list.sel(1)
              reach.simps tr v valid_append valid_init_reach)
  hence N: "PID \<in>\<in> postIDs (tgtOf trn)" "vis (tgtOf trn) PID \<noteq> VIS"
    using SNC_postIDs SNC_visPost False SNC by auto
  hence trrNE: "trr \<noteq> []" and 1: "last tr = last trr"
    using PID unfolding tr by auto
  hence trr_v: "valid trr" using v unfolding tr
    by (metis valid_Cons_iff append_self_conv2 list.distinct(1) valid_append)
  have 0: "tgtOf trn = srcOf (hd trr)" using v trrNE unfolding tr
    by (metis valid_append list.distinct(2) self_append_conv2 valid_ConsE)
  have "proper1 trr"
    using proper1_valid[OF False trr_v N[unfolded 0] PID[unfolded 1]] .
  from proper1_never[OF trr_v N(1)[unfolded 0] this PID[unfolded 1]]
  obtain tr2 trnn tr3 where trr: "trr = tr2 @ trnn # tr3"
  and SNVU: "SNVU (owner (srcOf trnn) PID) VIS trnn"
  and vis: "\<forall> vis. never (SNVU (owner (srcOf trnn) PID) vis) tr3" by auto
  have 00: "srcOf (hd (tr2 @ [trnn])) = tgtOf trn" using v unfolding tr trr
    by (metis "0" append_self_conv2 hd_append2 list.sel(1) trr)
  have trnn: "validTrans trnn" using trr_v unfolding trr
    by (metis valid_Cons_iff append_self_conv2 list.distinct(1) valid_append)
  have vv: "valid (tr2 @ [trnn])" using v unfolding tr trr
    by (smt Nil_is_append_conv append_self_conv2 hd_append2 list.distinct(1)
            list.sel(1) valid_Cons_iff valid_append)
  have "uid = owner (tgtOf trn) PID" using SNC trn SNC_owner by auto
  also have "... = owner (tgtOf trnn) PID"
    using owner_valid[OF vv] N(1) unfolding 00 by simp
  also have "... = owner (srcOf trnn) PID"
    using SNVU trnn SNVU_postIDs owner_validTrans by auto
  finally have uid: "uid = owner (srcOf trnn) PID" .
  show ?thesis unfolding proper_def
    apply(rule disjI2)
    apply(intro exI[of _ uid] exI[of _ tr1])
    apply(rule exI[of _ trn], rule exI[of _ tr2])
    apply(intro exI[of _ trnn] exI[of _ tr3])
    using SNC SNVU vis unfolding tr trr uid by auto
qed

end
{"author": "isabelle-prover", "repo": "mirror-afp-devel", "sha": "c84055551f07621736c3eb6a1ef4fb7e8cc57dd1", "save_path": "github-repos/isabelle/isabelle-prover-mirror-afp-devel", "path": "github-repos/isabelle/isabelle-prover-mirror-afp-devel/mirror-afp-devel-c84055551f07621736c3eb6a1ef4fb7e8cc57dd1/thys/CoSMed/Traceback_Properties/Post_Visibility_Traceback.thy"}
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Sep 25 19:24:13 2019

@author: george
"""

import numpy as np
import scipy as sc


def generate_Coin(A=None, init=None, p_Coin=None, c_type="Standard", N=5000):
    """
    Coin experiment for HMM testing.
    """
    if c_type == "Standard":
        A = np.zeros(shape=[2, 2])
        A[0, 0] = 0.5
        A[0, 1] = 0.5
        A[1, 0] = 0.05
        A[1, 1] = 0.95

        init = np.zeros(2)
        init[0] = 0.5
        init[1] = 0.5

        p_Coin = np.zeros(shape=[2, 2])
        p_Coin[0, 0] = 0.99
        p_Coin[0, 1] = 0.01
        p_Coin[1, 0] = 0.01
        p_Coin[1, 1] = 0.99

    states = np.zeros(shape=[N, 1])
    coins = np.zeros(shape=[N, 1])
    si = -1
    for i in range(N):
        if i == 0:
            si = np.random.choice([0, 1], size=1, p=init)[0]
        else:
            si = np.random.choice([0, 1], size=1, p=A[si])[0]
        states[i] = si
        ci = np.random.choice([0, 1], size=1, p=p_Coin[si])[0]
        coins[i] = ci

    data = np.expand_dims(np.concatenate((states.T, coins.T), axis=0), axis=2)
    return data, states, coins


def gauss_seq1d(T=24, N=500, A=None, m_0=0, m_1=1, std_0=1, std_1=1):
    """
    Generate a training set of N sequences of length T, to train a
    Hidden Markov Model.
    """
    if A is None:
        print(" NO A MATRIX INPUTED ")
        return

    data = np.zeros(shape=[N, T])    # dataset shape
    states = np.zeros(shape=[N, T])
    for i in range(N):
        data[i], states[i] = generate_gauss(T, A, m_0, m_1, std_0, std_1)
    return np.expand_dims(data, axis=2), states


def generate_gauss(T, A, m_0, m_1, std_0, std_1):
    """
    Generates a sequence of data points from an HMM with state transition
    matrix A and an observation model with one Gaussian per state, with
    means m_0, m_1 and standard deviations std_0, std_1.
    """
    sequence = np.zeros(T)
    states = np.zeros(T)
    m = [m_0, m_1]
    std = [std_0, std_1]

    # start by picking a state at random
    s_t = np.random.choice([0, 1], size=1)[0].astype(int)
    x_t = np.random.normal(loc=m[s_t], scale=std[s_t], size=1)[0]
    states[0] = s_t
    sequence[0] = x_t

    for t in range(1, T):
        s_t = np.random.choice([0, 1], size=1, p=A[s_t])[0].astype(int)
        x_t = np.random.normal(loc=m[s_t], scale=std[s_t], size=1)[0]
        states[t] = s_t
        sequence[t] = x_t

    return sequence, states
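
# Minimal usage sketch (illustrative only; the transition matrix below is an
# assumed example, not a value used elsewhere in this module).
if __name__ == "__main__":
    # Sticky two-state chain: each state persists with probability 0.9.
    A = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
    # Generate 10 Gaussian-emission sequences of length 24 and check shapes.
    data, states = gauss_seq1d(T=24, N=10, A=A, m_0=0.0, m_1=3.0)
    print(data.shape, states.shape)  # -> (10, 24, 1) (10, 24)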
{"hexsha": "e2d5cc39927a2380b868ec5bc57b18b98925f27f", "size": 2801, "ext": "py", "lang": "Python", "max_stars_repo_path": "MHMM/Tests/_experiments.py", "max_stars_repo_name": "jorje1908/MHMM", "max_stars_repo_head_hexsha": "e77f6d6dfa65444d7e7bbe4b3c469119306c429c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-04-16T00:17:40.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T14:05:43.000Z", "max_issues_repo_path": "MHMM/Tests/_experiments.py", "max_issues_repo_name": "jorje1908/MHMM", "max_issues_repo_head_hexsha": "e77f6d6dfa65444d7e7bbe4b3c469119306c429c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MHMM/Tests/_experiments.py", "max_forks_repo_name": "jorje1908/MHMM", "max_forks_repo_head_hexsha": "e77f6d6dfa65444d7e7bbe4b3c469119306c429c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.5887096774, "max_line_length": 84, "alphanum_fraction": 0.4784005712, "include": true, "reason": "import numpy,import scipy", "num_tokens": 908}
//---------------------------------------------------------------------------//
//!
//! \file   MonteCarlo_AnalogElasticElectronScatteringDistribution.hpp
//! \author Luke Kersting
//! \brief  The electron analog elastic scattering distribution base class
//!
//---------------------------------------------------------------------------//

#ifndef MONTE_CARLO_ANALOG_ELASTIC_ELECTRON_SCATTERING_DISTRIBUTION_HPP
#define MONTE_CARLO_ANALOG_ELASTIC_ELECTRON_SCATTERING_DISTRIBUTION_HPP

// Std Lib Includes
#include <limits>

// Boost Includes
#include <boost/function.hpp>
#include <boost/bind.hpp>

// Trilinos Includes
#include <Teuchos_Array.hpp>
#include <Teuchos_RCP.hpp>

// FRENSIE Includes
#include "MonteCarlo_ElectronState.hpp"
#include "MonteCarlo_ParticleBank.hpp"
#include "MonteCarlo_ElectronScatteringDistribution.hpp"
#include "MonteCarlo_AdjointElectronScatteringDistribution.hpp"
#include "Utility_TabularOneDDistribution.hpp"

namespace MonteCarlo{

//! The scattering distribution base class
class AnalogElasticElectronScatteringDistribution
  : public ElectronScatteringDistribution,
    public AdjointElectronScatteringDistribution
{

public:

  //! Typedef for the elastic distribution
  typedef Teuchos::Array<Utility::Pair< double,
    Teuchos::RCP<const Utility::TabularOneDDistribution> > > ElasticDistribution;

  //! Constructor
  AnalogElasticElectronScatteringDistribution(
      const ElasticDistribution& elastic_scattering_distribution,
      const double lower_cutoff_angle = 1.0e-6,
      const bool angle_is_used_as_independent_variable = true );

  //! Destructor
  virtual ~AnalogElasticElectronScatteringDistribution()
  { /* ... */ }

  //! Evaluate the PDF
  double evaluatePDF( const double incoming_energy,
                      const double scattering_angle ) const;

  //! Evaluate the distribution
  double evaluate( const unsigned incoming_energy_bin,
                   const double scattering_angle ) const;

  //! Evaluate the distribution
  double evaluate( const double incoming_energy,
                   const double scattering_angle ) const;

  //! Evaluate the PDF
  double evaluatePDF( const unsigned incoming_energy_bin,
                      const double scattering_angle ) const;

  //! Evaluate the CDF
  double evaluateCDF( const double incoming_energy,
                      const double scattering_angle ) const;

  //! Evaluate the cross section ratio for the cutoff angle
  double evaluateCutoffCrossSectionRatio( const double incoming_energy ) const;

  //! Return the energy at a given energy bin
  double getEnergy( const unsigned energy_bin ) const;

  //! Sample an outgoing energy and direction from the distribution
  void sample( const double incoming_energy,
               double& outgoing_energy,
               double& scattering_angle_cosine ) const;

  //! Sample an outgoing energy and direction and record the number of trials
  void sampleAndRecordTrials( const double incoming_energy,
                              double& outgoing_energy,
                              double& scattering_angle_cosine,
                              unsigned& trials ) const;

  //! Randomly scatter the electron
  void scatterElectron( ElectronState& electron,
                        ParticleBank& bank,
                        SubshellType& shell_of_interaction ) const;

  //! Randomly scatter the adjoint electron
  void scatterAdjointElectron( AdjointElectronState& adjoint_electron,
                               ParticleBank& bank,
                               SubshellType& shell_of_interaction ) const;

//protected:

  //! Sample an outgoing direction from the distribution
  void sampleAndRecordTrialsImpl( const double incoming_energy,
                                  double& scattering_angle_cosine,
                                  unsigned& trials ) const;

private:

  // The scattering angle above which the analog distribution is used
  double d_lower_cutoff_angle;

  // Independent parameter flag: false = angle cosine, true = angle (in units of pi)
  bool d_angle_is_used_as_independent_variable;

  // elastic scattering distribution without forward screening data
  ElasticDistribution d_elastic_scattering_distribution;
};

} // end MonteCarlo namespace

#endif // end MONTE_CARLO_ANALOG_ELASTIC_ELECTRON_SCATTERING_DISTRIBUTION_HPP

//---------------------------------------------------------------------------//
// end MonteCarlo_AnalogElasticElectronScatteringDistribution.hpp
//---------------------------------------------------------------------------//
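A hypothetical usage sketch (not part of the FRENSIE sources): it exercises only the interface declared above and assumes a populated `dist_data` table and MeV energy units.

// Hypothetical usage sketch: assumes the header above is on the include path
// and that `dist_data` has been filled from tabulated elastic scattering data.
#include "MonteCarlo_AnalogElasticElectronScatteringDistribution.hpp"

void exampleSample(
    const MonteCarlo::AnalogElasticElectronScatteringDistribution::ElasticDistribution&
        dist_data )
{
  // Tabulated in angle (units of pi) rather than angle cosine; 1.0e-6 is the
  // constructor's default lower cutoff angle.
  MonteCarlo::AnalogElasticElectronScatteringDistribution distribution(
      dist_data, 1.0e-6, true );

  double outgoing_energy, scattering_angle_cosine;
  unsigned trials = 0u;

  // Sample an outgoing state at 1.0e-3 (assumed MeV) and track the sampling
  // efficiency through the trial counter.
  distribution.sampleAndRecordTrials(
      1.0e-3, outgoing_energy, scattering_angle_cosine, trials );
}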
{"hexsha": "6dc348212470749ac4c02d0590e6d8e135c2a908", "size": 4631, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "packages/monte_carlo/collision/native/src/MonteCarlo_AnalogElasticElectronScatteringDistribution.hpp", "max_stars_repo_name": "lkersting/SCR-2123", "max_stars_repo_head_hexsha": "06ae3d92998664a520dc6a271809a5aeffe18f72", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "packages/monte_carlo/collision/native/src/MonteCarlo_AnalogElasticElectronScatteringDistribution.hpp", "max_issues_repo_name": "lkersting/SCR-2123", "max_issues_repo_head_hexsha": "06ae3d92998664a520dc6a271809a5aeffe18f72", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "packages/monte_carlo/collision/native/src/MonteCarlo_AnalogElasticElectronScatteringDistribution.hpp", "max_forks_repo_name": "lkersting/SCR-2123", "max_forks_repo_head_hexsha": "06ae3d92998664a520dc6a271809a5aeffe18f72", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.8992248062, "max_line_length": 90, "alphanum_fraction": 0.6644353271, "num_tokens": 869}
/*
 *          Copyright Andrey Semashev 2007 - 2014.
 * Distributed under the Boost Software License, Version 1.0.
 *    (See accompanying file LICENSE_1_0.txt or copy at
 *          http://www.boost.org/LICENSE_1_0.txt)
 */
/*!
 * \file   sources/features.hpp
 * \author Andrey Semashev
 * \date   17.07.2009
 *
 * The header contains definition of a features list class template.
 */

#ifndef BOOST_LOG_SOURCES_FEATURES_HPP_INCLUDED_
#define BOOST_LOG_SOURCES_FEATURES_HPP_INCLUDED_

#include <boost/mpl/lambda.hpp>
#include <boost/log/detail/config.hpp>

#ifdef BOOST_HAS_PRAGMA_ONCE
#pragma once
#endif

#if defined(BOOST_NO_CXX11_VARIADIC_TEMPLATES)

#include <boost/preprocessor/repetition/enum_params.hpp>
#include <boost/preprocessor/repetition/enum_binary_params.hpp>
#include <boost/preprocessor/repetition/enum_shifted_params.hpp>
#include <boost/preprocessor/facilities/intercept.hpp>

//! The macro defines the maximum number of features that can be specified for a logger
#ifndef BOOST_LOG_FEATURES_LIMIT
#define BOOST_LOG_FEATURES_LIMIT 10
#endif // BOOST_LOG_FEATURES_LIMIT

#endif

#include <boost/log/detail/header.hpp>

namespace boost {

BOOST_LOG_OPEN_NAMESPACE

namespace sources {

#if defined(BOOST_LOG_DOXYGEN_PASS) || !defined(BOOST_NO_CXX11_VARIADIC_TEMPLATES)

/*!
 * \brief A type sequence of logger features
 *
 * This class template can be used to specify logger features in a \c basic_composite_logger instantiation.
 */
template< typename... FeaturesT >
struct features
{
};

namespace aux {

//! The metafunction produces the inherited features hierarchy with \c RootT as the ultimate base type
template< typename RootT, typename FeaturesT >
struct inherit_features;

template< typename RootT, typename FeatureT0, typename... FeaturesT >
struct inherit_features< RootT, features< FeatureT0, FeaturesT... > >
{
    typedef typename mpl::lambda<
        FeatureT0
    >::type::BOOST_NESTED_TEMPLATE apply<
        typename inherit_features< RootT, features< FeaturesT... > >::type
    >::type type;
};

template< typename RootT, typename FeatureT0 >
struct inherit_features< RootT, features< FeatureT0 > >
{
    typedef typename mpl::lambda<
        FeatureT0
    >::type::BOOST_NESTED_TEMPLATE apply<
        RootT
    >::type type;
};

template< typename RootT >
struct inherit_features< RootT, features< > >
{
    typedef RootT type;
};

} // namespace aux

#else

//! A type sequence of logger features
template< BOOST_PP_ENUM_BINARY_PARAMS(BOOST_LOG_FEATURES_LIMIT, typename FeatureT, = void BOOST_PP_INTERCEPT) >
struct features
{
};

namespace aux {

template< typename RootT, typename FeaturesT >
struct inherit_features;

template< typename RootT, BOOST_PP_ENUM_PARAMS(BOOST_LOG_FEATURES_LIMIT, typename FeatureT) >
struct inherit_features< RootT, features< BOOST_PP_ENUM_PARAMS(BOOST_LOG_FEATURES_LIMIT, FeatureT) > >
{
    typedef typename mpl::lambda<
        FeatureT0
    >::type::BOOST_NESTED_TEMPLATE apply<
        typename inherit_features< RootT, features< BOOST_PP_ENUM_SHIFTED_PARAMS(BOOST_LOG_FEATURES_LIMIT, FeatureT) > >::type
    >::type type;
};

template< typename RootT, typename FeatureT0 >
struct inherit_features< RootT, features< FeatureT0, BOOST_PP_ENUM_SHIFTED_PARAMS(BOOST_LOG_FEATURES_LIMIT, void BOOST_PP_INTERCEPT) > >
{
    typedef typename mpl::lambda<
        FeatureT0
    >::type::BOOST_NESTED_TEMPLATE apply<
        RootT
    >::type type;
};

template< typename RootT >
struct inherit_features< RootT, features< BOOST_PP_ENUM_PARAMS(BOOST_LOG_FEATURES_LIMIT, void BOOST_PP_INTERCEPT) > >
{
    typedef RootT type;
};

} // namespace aux

#endif

} // namespace sources

BOOST_LOG_CLOSE_NAMESPACE // namespace log

} // namespace boost

#include <boost/log/detail/footer.hpp>

#endif // BOOST_LOG_SOURCES_FEATURES_HPP_INCLUDED_
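For illustration, the same inheritance fold can be written with plain C++11 variadic templates and no MPL; the sketch below is an analog of `inherit_features`, not Boost code.

// Illustrative analog (not Boost code): each "feature" is a class template
// deriving from its base parameter, and the fold threads a root type through
// all of them, producing feature_a< feature_b< root > > for <a, b>.
#include <type_traits>

template< typename BaseT > struct feature_a : BaseT {};
template< typename BaseT > struct feature_b : BaseT {};

struct root {};

template< template< typename > class... FeaturesT >
struct inherit;

template< template< typename > class F0, template< typename > class... Rest >
struct inherit< F0, Rest... >
{
    // Apply the first feature to the result of folding the remaining ones.
    using type = F0< typename inherit< Rest... >::type >;
};

template<>
struct inherit<>
{
    using type = root;  // empty list: the root is the whole hierarchy
};

static_assert(
    std::is_same< inherit< feature_a, feature_b >::type,
                  feature_a< feature_b< root > > >::value, "fold order" );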
{"hexsha": "295cc0bb4f9a86ff2618f843251928ebcb7801e5", "size": 3870, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "3party/boost/boost/log/sources/features.hpp", "max_stars_repo_name": "bowlofstew/omim", "max_stars_repo_head_hexsha": "8045157c95244aa8f862d47324df42a19b87e335", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 133.0, "max_stars_repo_stars_event_min_datetime": "2018-04-20T14:09:40.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-15T11:51:25.000Z", "max_issues_repo_path": "3party/boost/boost/log/sources/features.hpp", "max_issues_repo_name": "bowlofstew/omim", "max_issues_repo_head_hexsha": "8045157c95244aa8f862d47324df42a19b87e335", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 61.0, "max_issues_repo_issues_event_min_datetime": "2015-05-27T11:20:11.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-20T15:06:21.000Z", "max_forks_repo_path": "3party/boost/boost/log/sources/features.hpp", "max_forks_repo_name": "bowlofstew/omim", "max_forks_repo_head_hexsha": "8045157c95244aa8f862d47324df42a19b87e335", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 83.0, "max_forks_repo_forks_event_min_datetime": "2018-04-27T03:58:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-11T09:23:40.000Z", "avg_line_length": 25.6291390728, "max_line_length": 136, "alphanum_fraction": 0.7452196382, "num_tokens": 901}
// Boost.Range library
//
//  Copyright Neil Groves 2010. Use, modification and
//  distribution is subject to the Boost Software License, Version
//  1.0. (See accompanying file LICENSE_1_0.txt or copy at
//  http://www.boost.org/LICENSE_1_0.txt)
//
// For more information, see http://www.boost.org/libs/range/
//
#include <boost/detail/workaround.hpp>

#if BOOST_WORKAROUND(__BORLANDC__, BOOST_TESTED_AT(0x564))
#  pragma warn -8091 // suppress warning in Boost.Test
#  pragma warn -8057 // unused argument argc/argv in Boost.Test
#endif

#include <boost/range/begin.hpp>
#include <boost/test/unit_test.hpp>
#include <boost/test/test_tools.hpp>
#include <boost/test/included/unit_test.hpp>

namespace mock_std
{
    template<class SinglePassRange>
    inline BOOST_DEDUCED_TYPENAME boost::range_iterator<SinglePassRange>::type
    begin(SinglePassRange& rng)
    {
        return rng.begin();
    }

    template<class SinglePassRange>
    inline BOOST_DEDUCED_TYPENAME boost::range_iterator<const SinglePassRange>::type
    begin(const SinglePassRange& rng)
    {
        return rng.begin();
    }

    template<class SinglePassRange>
    void mock_algorithm_using_begin(const SinglePassRange& rng)
    {
        BOOST_CHECK( begin(rng) == rng.begin() );
    }

    template<class SinglePassRange>
    void mock_algorithm_using_begin(SinglePassRange& rng)
    {
        BOOST_CHECK( begin(rng) == rng.begin() );
    }
}

namespace boost
{
#ifdef BOOST_RANGE_SIMULATE_BEGIN_WITHOUT_ADL_NAMESPACE_BARRIER
    template<class SinglePassRange>
    inline BOOST_DEDUCED_TYPENAME range_iterator<SinglePassRange>::type
    begin(SinglePassRange& rng)
    {
        return rng.begin();
    }

    template<class SinglePassRange>
    inline BOOST_DEDUCED_TYPENAME range_iterator<const SinglePassRange>::type
    begin(const SinglePassRange& rng)
    {
        return rng.begin();
    }
#endif

    class MockTestBeginCollection
    {
    public:
        typedef char value_type;
        typedef const char* const_pointer;
        typedef char* pointer;
        typedef const_pointer const_iterator;
        typedef pointer iterator;

        MockTestBeginCollection()
            : m_first()
            , m_last()
        {
        }

        const_iterator begin() const { return m_first; }
        iterator begin() { return m_first; }
        const_iterator end() const { return m_last; }
        iterator end() { return m_last; }

    private:
        iterator m_first;
        iterator m_last;
    };
}

namespace
{
    void test_range_begin()
    {
        boost::MockTestBeginCollection c;
        const boost::MockTestBeginCollection& const_c = c;
        mock_std::mock_algorithm_using_begin(const_c);
        mock_std::mock_algorithm_using_begin(c);
    }
}

using boost::unit_test::test_suite;

boost::unit_test::test_suite* init_unit_test_suite( int argc, char* argv[] )
{
    boost::unit_test::test_suite* test =
        BOOST_TEST_SUITE( "Range Test Suite - begin() ADL namespace barrier" );

    test->add( BOOST_TEST_CASE( &test_range_begin ) );

    return test;
}
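A minimal standalone illustration of the ADL mechanism this test exercises: an unqualified `begin(c)` call is looked up in the namespace of its argument's type.

// Minimal ADL illustration (not part of the test above): the unqualified call
// considers namespace lib because the argument's type lives there.
namespace lib
{
    struct container { int data[3]; };

    int* begin(container& c) { return c.data; }  // found via ADL
}

int main()
{
    lib::container c = {{1, 2, 3}};
    int* first = begin(c);  // unqualified: argument-dependent lookup kicks in
    return (*first == 1) ? 0 : 1;
}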
{"hexsha": "dc91d52f2ac8a1eb77d9288dc83b2276bcfa673a", "size": 3069, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "deps/src/boost_1_65_1/libs/range/test/begin.cpp", "max_stars_repo_name": "shreyasvj25/turicreate", "max_stars_repo_head_hexsha": "32e84ca16aef8d04aff3d49ae9984bd49326bffd", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11356.0, "max_stars_repo_stars_event_min_datetime": "2017-12-08T19:42:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T16:55:25.000Z", "max_issues_repo_path": "deps/src/boost_1_65_1/libs/range/test/begin.cpp", "max_issues_repo_name": "shreyasvj25/turicreate", "max_issues_repo_head_hexsha": "32e84ca16aef8d04aff3d49ae9984bd49326bffd", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2402.0, "max_issues_repo_issues_event_min_datetime": "2017-12-08T22:31:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-28T19:25:52.000Z", "max_forks_repo_path": "deps/src/boost_1_65_1/libs/range/test/begin.cpp", "max_forks_repo_name": "shreyasvj25/turicreate", "max_forks_repo_head_hexsha": "32e84ca16aef8d04aff3d49ae9984bd49326bffd", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1343.0, "max_forks_repo_forks_event_min_datetime": "2017-12-08T19:47:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T11:31:36.000Z", "avg_line_length": 26.0084745763, "max_line_length": 112, "alphanum_fraction": 0.6855653307, "num_tokens": 716}
export hausdorff_distance

"""
    hausdorff_distance(X::LazySet{N}, Y::LazySet{N}; [p]::N=N(Inf), [ε]=N(1e-3)) where {N}

Compute the Hausdorff distance between two convex sets up to a given threshold.

### Input

- `X` -- convex set
- `Y` -- convex set
- `p` -- (optional, default: `Inf`) norm parameter of the Hausdorff distance
- `ε` -- (optional, default: `1e-3`) precision threshold; the true Hausdorff
         distance may diverge from the result by at most this value

### Output

A value from the ``ε``-neighborhood of the Hausdorff distance between ``X`` and
``Y``.

### Notes

Given a ``p``-norm, the Hausdorff distance ``d_H^p(X, Y)`` between sets ``X``
and ``Y`` is defined as follows:

```math
d_H^p(X, Y) = \\inf\\{δ ≥ 0 \\mid Y ⊆ X ⊕ δ 𝐵_p^n \\text{ and } X ⊆ Y ⊕ δ 𝐵_p^n\\}
```

Here ``𝐵_p^n`` is the ``n``-dimensional unit ball in the ``p``-norm.

The implementation may internally rely on the support function of ``X`` and
``Y``; hence any imprecision in the implementation of the support function may
affect the result. At the time of writing, the only set type with imprecise
support function is the lazy [`Intersection`](@ref).

### Algorithm

We perform binary search for bounding the Hausdorff distance in an interval
``[l, u]``, where initially ``l`` is ``0`` and ``u`` is described below. The
binary search terminates when ``u - l ≤ ε``, i.e., the interval becomes
sufficiently small.

To find an upper bound ``u``, we start with the heuristics of taking the
biggest distance in the axis-parallel directions. As long as this bound does
not work, we increase the bound by ``2``.

Given a value ``δ``, to check whether the sets are within Hausdorff distance
``δ``, we simply check the inclusions given above, where on the right-hand side
we use a lazy `MinkowskiSum` with a `Ballp` centered in the origin.
"""
function hausdorff_distance(X::LazySet{N}, Y::LazySet{N}; p::N=N(Inf),
                            ε=N(1e-3)) where {N}
    @assert ε > zero(N) "the value ε must be positive"
    @assert isbounded(X) && isbounded(Y) "the Hausdorff distance is only " *
        "defined for compact sets"
    n = dim(X)
    @assert dim(Y) == n "the Hausdorff distance is only defined between sets " *
                        "of the same dimension, but they had dimensions $n " *
                        "resp. $(dim(Y))"

    # phase 1: find a finite upper bound
    δ_upper = max(maximum(d -> abs(ρ(d, X) - ρ(d, Y)), BoxDirections{N}(n)),
                  N(1e-3))  # this initial bound should be strictly positive
    # verify that this is an upper bound
    while !_mutual_issubset_in_δ_bloating(X, Y, δ_upper, n, p)
        δ_upper *= N(2)
    end

    # phase 2: perform binary search between lower bound (initially 0) and
    # upper bound until convergence
    δ_lower = N(0)
    while δ_upper - δ_lower > ε
        δ = (δ_upper + δ_lower) / N(2)
        if _mutual_issubset_in_δ_bloating(X, Y, δ, n, p)
            δ_upper = δ
        else
            δ_lower = δ
        end
    end
    return δ_upper
end

function _mutual_issubset_in_δ_bloating(X, Y, δ, n, p)
    return _issubset_in_δ_bloating(X, Y, δ, n, p) &&
           _issubset_in_δ_bloating(Y, X, δ, n, p)
end

function _issubset_in_δ_bloating(X::LazySet{N}, Y, δ, n, p) where {N}
    return X ⊆ Y + Ballp(p, zeros(N, n), δ)
end

# for polytopes the default implementation of `⊆` requires membership in the
# rhs set, which will be a MinkowskiSum and hence not available; we use the
# alternative based on constraints_list on the right instead
function _issubset_in_δ_bloating(X::AbstractPolytope{N}, Y, δ, n, p) where {N}
    return LazySets._issubset_constraints_list(X, Y + Ballp(p, zeros(N, n), δ))
end
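A hypothetical usage sketch: `BallInf` is the standard LazySets box constructor, while the qualified module path below is an assumption about where this file lives in the package.

# Hypothetical usage sketch for the function defined above.
using LazySets
using LazySets.Approximations: hausdorff_distance  # assumed module path

X = BallInf(zeros(2), 1.0)   # unit box centered at the origin
Y = BallInf(ones(2), 1.0)    # the same box shifted by (1, 1)

# In the infinity norm the true Hausdorff distance of these boxes is 1.0;
# the returned value lies within ε of it.
d = hausdorff_distance(X, Y; p=Inf, ε=1e-6)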
{"hexsha": "9ce23edce8833322a4dc3d82685d810394d89e5f", "size": 3724, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/Approximations/hausdorff_distance.jl", "max_stars_repo_name": "goretkin/LazySets.jl", "max_stars_repo_head_hexsha": "6e829d9179bc25b8d7f6afb190a015e53760c601", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/Approximations/hausdorff_distance.jl", "max_issues_repo_name": "goretkin/LazySets.jl", "max_issues_repo_head_hexsha": "6e829d9179bc25b8d7f6afb190a015e53760c601", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/Approximations/hausdorff_distance.jl", "max_forks_repo_name": "goretkin/LazySets.jl", "max_forks_repo_head_hexsha": "6e829d9179bc25b8d7f6afb190a015e53760c601", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.8076923077, "max_line_length": 86, "alphanum_fraction": 0.6490332975, "num_tokens": 1121}
from __future__ import absolute_import, division
from io import StringIO
import os.path as op

import numpy as np
import pandas as pd
from _common import cooler_cmp
from click.testing import CliRunner
import cooler

# import pytest

### EXPORT ###
from cooler.cli.info import info
from cooler.cli.dump import dump
from cooler.cli.show import show

testdir = op.realpath(op.dirname(__file__))
datadir = op.join(testdir, "data")


def test_info():
    runner = CliRunner()
    with runner.isolated_filesystem():
        f_in = op.join(datadir, "toy.symm.upper.2.cool")
        result = runner.invoke(info, [f_in])
        assert result.exit_code == 0

        result = runner.invoke(info, [f_in, "--field", "bin-type"])
        assert result.exit_code == 0

        result = runner.invoke(info, [f_in, "--field", "doesnotexist"])
        assert result.exit_code > 0

        result = runner.invoke(info, [f_in, "--metadata"])
        assert result.exit_code == 0


def test_dump():
    runner = CliRunner()
    with runner.isolated_filesystem():
        f_in = op.join(datadir, "toy.symm.upper.2.cool")
        result = runner.invoke(dump, [f_in])
        assert result.exit_code == 0

        result = runner.invoke(dump, [f_in, "-t", "chroms", "--columns", "length"])
        assert result.exit_code == 0

        result = runner.invoke(dump, [f_in, "-t", "bins", "--columns", "chrom,start"])
        assert result.exit_code == 0

        result = runner.invoke(dump, [f_in, "-r", "chr1"])
        assert result.exit_code == 0

        result = runner.invoke(dump, [f_in, "-r", "chr1:0-16", "-r2", "chr1:10-25"])
        assert result.exit_code == 0

        result = runner.invoke(dump, [f_in, "-r", "chr1:10-25", "-r2", "chr1:0-5"])
        assert result.exit_code == 0

        result = runner.invoke(dump, [f_in, "--join"])
        assert result.exit_code == 0

        result = runner.invoke(dump, [f_in, "--join", "--one-based-ids"])
        assert result.exit_code == 0

        result = runner.invoke(dump, [f_in, "--join", "--one-based-starts"])
        assert result.exit_code == 0

        result = runner.invoke(dump, [f_in, "--annotate", "chrom", "--one-based-starts"])
        assert result.exit_code == 0

        # unbalanced file
        result = runner.invoke(dump, [f_in, "-b"])
        assert result.exit_code == 1

        # roundtrip symm-upper data
        result = runner.invoke(dump, [f_in, "-H", "-t", "bins"])
        bins = pd.read_csv(StringIO(result.output), sep="\t")
        result = runner.invoke(dump, [f_in, "-H"])
        pixels = pd.read_csv(StringIO(result.output), sep="\t")
        cooler.create_cooler("out.cool", bins, pixels, symmetric_upper=True)
        cooler_cmp(f_in, "out.cool")

        # duplexed output
        result = runner.invoke(dump, [f_in, "--matrix", "-H"])
        pixels2 = pd.read_csv(StringIO(result.output), sep="\t")
        assert len(pixels2) > len(pixels)
        upper = pixels2[pixels2["bin1_id"] <= pixels2["bin2_id"]].reset_index(drop=True)
        assert np.allclose(pixels, upper)

        # lower triangle
        result = runner.invoke(dump, [f_in, "-H", "-r", "chr2", "-r2", "chr1"])
        trans_lower = pd.read_csv(StringIO(result.output), sep="\t")
        assert len(trans_lower) == 0

        result = runner.invoke(dump, [f_in, "-m", "-H", "-r", "chr2", "-r2", "chr1"])
        trans_lower = pd.read_csv(StringIO(result.output), sep="\t")
        assert len(trans_lower) > 0

        # roundtrip square data
        f_in = op.join(datadir, "toy.asymm.2.cool")
        result = runner.invoke(dump, [f_in, "-H", "-t", "bins"])
        bins = pd.read_csv(StringIO(result.output), sep="\t")
        result = runner.invoke(dump, [f_in, "-H"])
        pixels = pd.read_csv(StringIO(result.output), sep="\t")
        cooler.create_cooler("out.cool", bins, pixels, symmetric_upper=False)
        cooler_cmp(f_in, "out.cool")

        result = runner.invoke(dump, [f_in, "--matrix", "-H"])
        pixels2 = pd.read_csv(StringIO(result.output), sep="\t")
        assert np.allclose(pixels, pixels2)

        # for square data, -m is a no-op
        result = runner.invoke(dump, [f_in, "-H", "-r", "chr2", "-r2", "chr1"])
        lower1 = pd.read_csv(StringIO(result.output), sep="\t")
        result = runner.invoke(dump, [f_in, "-m", "-H", "-r", "chr2", "-r2", "chr1"])
        lower2 = pd.read_csv(StringIO(result.output), sep="\t")
        assert np.allclose(lower1, lower2)


def test_show():
    runner = CliRunner()
    with runner.isolated_filesystem():
        f_in = op.join(datadir, "toy.symm.upper.2.cool")
        result = runner.invoke(show, [f_in, 'chr1', '-o', 'bla'])
        assert result.exit_code == 0
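A hedged sketch of the same round trip using the cooler Python API directly; the file name and toy values below are hypothetical, with column names following the cooler schema.

# Hypothetical sketch (not part of the test suite): build a tiny cooler file
# and read its pixels back, mirroring the create/dump round trip tested above.
import pandas as pd
import cooler

bins = pd.DataFrame({
    "chrom": ["chr1", "chr1", "chr1"],
    "start": [0, 10, 20],
    "end":   [10, 20, 30],
})
pixels = pd.DataFrame({
    "bin1_id": [0, 0, 1],   # upper-triangle pixel records
    "bin2_id": [0, 1, 2],
    "count":   [5, 2, 7],
})

cooler.create_cooler("toy.cool", bins, pixels, symmetric_upper=True)
clr = cooler.Cooler("toy.cool")
print(clr.pixels()[:])  # dump back the stored upper-triangle pixels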
{"hexsha": "bcaa1e4083db8d8901ef13d743080e61383029f3", "size": 4686, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/test_cli_export.py", "max_stars_repo_name": "mimakaev/cooler", "max_stars_repo_head_hexsha": "84b0d510dc3baf0b9ef3592f9d27ba795e1802ee", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 106, "max_stars_repo_stars_event_min_datetime": "2016-01-15T21:24:41.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-24T12:15:13.000Z", "max_issues_repo_path": "tests/test_cli_export.py", "max_issues_repo_name": "mimakaev/cooler", "max_issues_repo_head_hexsha": "84b0d510dc3baf0b9ef3592f9d27ba795e1802ee", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 190, "max_issues_repo_issues_event_min_datetime": "2016-02-16T03:35:30.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-25T19:16:28.000Z", "max_forks_repo_path": "tests/test_cli_export.py", "max_forks_repo_name": "mimakaev/cooler", "max_forks_repo_head_hexsha": "84b0d510dc3baf0b9ef3592f9d27ba795e1802ee", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 43, "max_forks_repo_forks_event_min_datetime": "2016-08-26T18:51:21.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-08T13:38:50.000Z", "avg_line_length": 40.0512820513, "max_line_length": 89, "alphanum_fraction": 0.5994451558, "include": true, "reason": "import numpy", "num_tokens": 1287}
/*
 Copyright (c) 2020, Ford Motor Company
 All rights reserved.

 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are met:

 Redistributions of source code must retain the above copyright notice, this
 list of conditions and the following disclaimer.

 Redistributions in binary form must reproduce the above copyright notice,
 this list of conditions and the following disclaimer in the documentation
 and/or other materials provided with the distribution.

 Neither the name of the Ford Motor Company nor the names of its contributors
 may be used to endorse or promote products derived from this software
 without specific prior written permission.

 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
 LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 POSSIBILITY OF SUCH DAMAGE.
 */

#include "vehicle_info_plugin/vehicle_info_pending_resumption_handler.h"

#include <boost/range/algorithm/set_algorithm.hpp>
#include <functional>

#include "application_manager/event_engine/event_observer.h"
#include "application_manager/message_helper.h"
#include "application_manager/resumption/resumption_data_processor.h"
#include "utils/helpers.h"
#include "vehicle_info_plugin/custom_vehicle_data_manager.h"
#include "vehicle_info_plugin/vehicle_info_plugin.h"

namespace vehicle_info_plugin {

SDL_CREATE_LOG_VARIABLE("VehicleInfoPlugin")

using hmi_apis::FunctionID::VehicleInfo_SubscribeVehicleData;

uint32_t get_corr_id_from_message(const smart_objects::SmartObject& message) {
  using namespace application_manager;
  return message[strings::params][strings::correlation_id].asInt();
}

template <class T>
std::string Stringify(const T& container) {
  std::stringstream ss;
  for (const auto& val : container) {
    ss << val << " ";
  }
  return ss.str();
}

VehicleInfoPendingResumptionHandler::VehicleDataList SubscriptionsFromResponse(
    const smart_objects::SmartObject& response,
    std::function<bool(const smart_objects::SmartObject& vehicle_data)>
        vehicle_data_check) {
  namespace strings = application_manager::strings;
  VehicleInfoPendingResumptionHandler::VehicleDataList result;
  const auto response_params = response[strings::msg_params];
  const auto response_keys = response_params.enumerate();
  for (auto key : response_keys) {
    if (vehicle_data_check(response_params[key])) {
      result.insert(key);
    }
  }
  return result;
}

void FillResponseWithMissedVD(
    const VehicleInfoPendingResumptionHandler::VehicleDataList& vehicle_data,
    smart_objects::SmartObject* response) {
  DCHECK(response)
  namespace strings = application_manager::strings;
  auto& msg_params = (*response)[strings::msg_params];
  for (const auto& vd : vehicle_data) {
    smart_objects::SmartObject vd_result(smart_objects::SmartType_Map);
    vd_result[strings::result_code] =
        hmi_apis::Common_VehicleDataResultCode::VDRC_SUCCESS;
    msg_params[vd] = vd_result;
  }
}

VehicleInfoPendingResumptionHandler::VehicleDataList
SuccessfulSubscriptionsFromResponse(
    const smart_objects::SmartObject& response) {
  SDL_LOG_AUTO_TRACE();
  using namespace application_manager;
  VehicleInfoPendingResumptionHandler::VehicleDataList result;
  if (!resumption::IsResponseSuccessful(response)) {
    return result;
  }
  auto successful_vehicle_data =
      [](const smart_objects::SmartObject& vehicle_data) {
        constexpr auto kSuccess =
            hmi_apis::Common_VehicleDataResultCode::VDRC_SUCCESS;
        const auto vd_result_code = vehicle_data[strings::result_code].asInt();
        return kSuccess == vd_result_code;
      };
  return SubscriptionsFromResponse(response, successful_vehicle_data);
}

VehicleInfoPendingResumptionHandler::VehicleInfoPendingResumptionHandler(
    application_manager::ApplicationManager& application_manager,
    CustomVehicleDataManager& custom_vehicle_data_manager)
    : PendingResumptionHandler(application_manager)
    , custom_vehicle_data_manager_(custom_vehicle_data_manager) {}

void VehicleInfoPendingResumptionHandler::OnResumptionRevert() {
  SDL_LOG_AUTO_TRACE();
  sync_primitives::AutoLock lock(pending_resumption_lock_);
  TriggerPendingResumption();
}

void VehicleInfoPendingResumptionHandler::RaiseFinishedPendingResumption(
    const PendingSubscriptionsResumption& pending_resumption) {
  SDL_LOG_AUTO_TRACE();
  using namespace application_manager;
  auto app = application_manager_.application(pending_resumption.app_id_);
  if (!app) {
    SDL_LOG_DEBUG("Application not found " << pending_resumption.app_id_);
    return;
  }
  auto& ext = VehicleInfoAppExtension::ExtractVIExtension(*app);
  ext.RemovePendingSubscriptions();
  for (const auto& subscription : pending_resumption.restored_vehicle_data_) {
    SDL_LOG_DEBUG("Subscribe " << app->app_id() << " to " << subscription);
    ext.subscribeToVehicleInfo(subscription);
  }

  unsubscribe_from_event(VehicleInfo_SubscribeVehicleData);

  auto fake_response =
      CreateFakeResponseFromHMI(pending_resumption.subscription_results_,
                                pending_resumption.fake_corr_id_);
  event_engine::Event event(VehicleInfo_SubscribeVehicleData);
  event.set_smart_object(fake_response);
  SDL_LOG_DEBUG("Raise fake response for resumption data processor");
  event.raise(application_manager_.event_dispatcher());
}

void VehicleInfoPendingResumptionHandler::SendHMIRequestForNotSubscribed(
    const PendingSubscriptionsResumption& pending_resumption) {
  SDL_LOG_AUTO_TRACE();
  const auto remaining_subscriptions = pending_resumption.NotSubscribedData();
  auto request = CreateSubscribeRequestToHMI(remaining_subscriptions);
  const auto corr_id = get_corr_id_from_message(*request);
  subscribe_on_event(VehicleInfo_SubscribeVehicleData, corr_id);
  application_manager_.GetRPCService().ManageHMICommand(request);
}

void VehicleInfoPendingResumptionHandler::ProcessNextPendingResumption(
    const smart_objects::SmartObject& response_message) {
  SDL_LOG_AUTO_TRACE();
  if (pending_requests_.empty()) {
    SDL_LOG_DEBUG("No more pending resumptions");
    return;
  }
  auto& pending = pending_requests_.front();
  if (pending.waiting_for_hmi_response_) {
    SDL_LOG_DEBUG("Request was already sent to HMI for " << pending.app_id_);
    return;
  }
  const auto successful_subscriptions =
      SuccessfulSubscriptionsFromResponse(response_message);
  pending.FillRestoredData(successful_subscriptions);
  if (!pending.IsSuccessfullyDone()) {
    SendHMIRequestForNotSubscribed(pending);
    pending.waiting_for_hmi_response_ = true;
    return;
  }
  auto pending_copy = pending;
  pending_requests_.pop_front();
  RaiseFinishedPendingResumption(pending_copy);
  ProcessNextPendingResumption(response_message);
}

void VehicleInfoPendingResumptionHandler::TriggerPendingResumption() {
  SDL_LOG_AUTO_TRACE();
  if (pending_requests_.empty()) {
    SDL_LOG_DEBUG("No pending resumptions");
    return;
  }
  auto& pending_resumption = pending_requests_.front();
  if (pending_resumption.waiting_for_hmi_response_) {
    SDL_LOG_DEBUG("Pending resumption for "
                  << pending_resumption.app_id_
                  << " is already waiting for HMI response");
    return;
  }
  if (!pending_resumption.IsSuccessfullyDone()) {
    SendHMIRequestForNotSubscribed(pending_resumption);
    pending_resumption.waiting_for_hmi_response_ = true;
  }
}

void VehicleInfoPendingResumptionHandler::HandleOnEvent(
    const application_manager::event_engine::Event& event) {
  SDL_LOG_AUTO_TRACE();
  sync_primitives::AutoLock lock(pending_resumption_lock_);
  using namespace application_manager;
  if (pending_requests_.empty()) {
    SDL_LOG_DEBUG("Not waiting for any response");
    return;
  }

  auto response_message = event.smart_object();
  smart_objects::SmartObject converted_msg_params(
      response_message[strings::msg_params]);
  custom_vehicle_data_manager_.CreateMobileMessageParams(converted_msg_params);
  response_message[strings::msg_params] = converted_msg_params;

  const auto vs_count_in_response =
      response_message[application_manager::strings::msg_params].length();
  if (resumption::IsResponseSuccessful(response_message) &&
      vs_count_in_response == 0) {
    const auto& requested_vd =
        pending_requests_.front().requested_vehicle_data_;
    FillResponseWithMissedVD(requested_vd, &response_message);
  }

  for (auto& pending : pending_requests_) {
    pending.FillSubscriptionResults(response_message);
  }

  auto current_pending = pending_requests_.front();
  pending_requests_.pop_front();

  RaiseFinishedPendingResumption(current_pending);
  ProcessNextPendingResumption(response_message);
}

VehicleInfoPendingResumptionHandler::PendingSubscriptionsResumption
VehicleInfoPendingResumptionHandler::SubscribeToFakeRequest(
    const uint32_t app_id, const VehicleDataList& subscriptions) {
  SDL_LOG_AUTO_TRACE();
  const auto fake_request = CreateSubscribeRequestToHMI(subscriptions);
  const auto fake_corr_id = get_corr_id_from_message(*fake_request);
  auto resumption_request = MakeResumptionRequest(
      fake_corr_id,
      hmi_apis::FunctionID::VehicleInfo_SubscribeVehicleData,
      *fake_request);
  SDL_LOG_DEBUG("Subscribe subscriber "
                << app_id
                << " to fake request with corr id = " << fake_corr_id);
  resumption_data_processor().SubscribeToResponse(app_id, resumption_request);
  PendingSubscriptionsResumption pending_request(
      app_id, fake_corr_id, subscriptions);
  return pending_request;
}

void VehicleInfoPendingResumptionHandler::HandleResumptionSubscriptionRequest(
    application_manager::AppExtension& extension,
    application_manager::Application& app) {
  SDL_LOG_AUTO_TRACE();
  sync_primitives::AutoLock lock(pending_resumption_lock_);
  SDL_LOG_TRACE("app id " << app.app_id());
  auto& ext = dynamic_cast<VehicleInfoAppExtension&>(extension);
  auto subscriptions = ext.PendingSubscriptions().GetData();

  for (auto ivi = subscriptions.begin(); ivi != subscriptions.end();) {
    if (IsSubscribedAppExist(*ivi, application_manager_)) {
      ext.RemovePendingSubscription(*ivi);
      ext.subscribeToVehicleInfo(*ivi);
      subscriptions.erase(ivi++);
    } else {
      ++ivi;
    }
  }

  if (subscriptions.empty()) {
    SDL_LOG_DEBUG("Subscriptions are empty");
    return;
  }

  SDL_LOG_TRACE("resume subscriptions to : " << Stringify(subscriptions));

  auto pending_request = SubscribeToFakeRequest(app.app_id(), subscriptions);
  pending_requests_.push_back(pending_request);
  SDL_LOG_DEBUG("Add to pending resumptions corr_id = "
                << pending_request.fake_corr_id_);
  if (pending_requests_.size() == 1) {
    TriggerPendingResumption();
  }
  // If there was a pending resumption before, it will be triggered on the HMI
  // response
}

smart_objects::SmartObjectSPtr
VehicleInfoPendingResumptionHandler::CreateSubscribeRequestToHMI(
    const VehicleDataList& subscriptions) {
  sync_primitives::AutoLock lock(pending_resumption_lock_);
  using namespace application_manager;
  smart_objects::SmartObject msg_params =
      smart_objects::SmartObject(smart_objects::SmartType_Map);
  for (const auto& ivi_data : subscriptions) {
    msg_params[ivi_data] = true;
  }
  smart_objects::SmartObjectSPtr request =
      application_manager::MessageHelper::CreateModuleInfoSO(
          VehicleInfo_SubscribeVehicleData, application_manager_);
  (*request)[strings::msg_params] = msg_params;
  return request;
}

smart_objects::SmartObject
VehicleInfoPendingResumptionHandler::CreateFakeResponseFromHMI(
    const std::map<std::string, smart_objects::SmartObject>& subscriptions,
    const uint32_t fake_correlation_id) {
  SDL_LOG_AUTO_TRACE();
  namespace strings = application_manager::strings;

  auto response =
      application_manager::MessageHelper::CreateResponseMessageFromHmi(
          VehicleInfo_SubscribeVehicleData,
          fake_correlation_id,
          hmi_apis::Common_Result::SUCCESS);
  auto& message = *response;

  smart_objects::SmartObject msg_params(smart_objects::SmartType_Map);
  for (const auto& subscription : subscriptions) {
    msg_params[subscription.first] = subscription.second;
    SDL_LOG_DEBUG("fake response data : "
                  << subscription.first << " result = "
                  << subscription.second[strings::result_code].asInt());
  }
  message[strings::msg_params] = msg_params;
  return *response;
}

bool VehicleInfoPendingResumptionHandler::PendingSubscriptionsResumption::
    IsSuccessfullyDone() const {
  return requested_vehicle_data_.size() == restored_vehicle_data_.size();
}

bool VehicleInfoPendingResumptionHandler::PendingSubscriptionsResumption::
    DataWasRequested(const std::string& vd) const {
  bool result =
      (requested_vehicle_data_.end() != requested_vehicle_data_.find(vd));
  return result;
}

VehicleInfoPendingResumptionHandler::VehicleDataList
VehicleInfoPendingResumptionHandler::PendingSubscriptionsResumption::
    NotSubscribedData() const {
  VehicleDataList not_subscribed;
  boost::set_difference(requested_vehicle_data_,
                        restored_vehicle_data_,
                        std::inserter(not_subscribed, not_subscribed.end()));
  return not_subscribed;
}

void VehicleInfoPendingResumptionHandler::PendingSubscriptionsResumption::
    FillSubscriptionResults() {
  namespace strings = application_manager::strings;
  for (const auto& key : restored_vehicle_data_) {
    smart_objects::SmartObject vd_result(smart_objects::SmartType_Map);
    vd_result[strings::result_code] =
        hmi_apis::Common_VehicleDataResultCode::VDRC_SUCCESS;
    subscription_results_[key] = vd_result;
  }
  const auto not_subscribed = NotSubscribedData();
  for (const auto& key : not_subscribed) {
    smart_objects::SmartObject vd_result(smart_objects::SmartType_Map);
    vd_result[strings::result_code] =
        hmi_apis::Common_VehicleDataResultCode::VDRC_DATA_NOT_SUBSCRIBED;
    subscription_results_[key] = vd_result;
  }
}

void VehicleInfoPendingResumptionHandler::PendingSubscriptionsResumption::
    FillRestoredData(const VehicleDataList& successful_subscriptions) {
  for (auto& subscribed : successful_subscriptions) {
    if (DataWasRequested(subscribed)) {
      restored_vehicle_data_.insert(subscribed);
    }
  }
}

void VehicleInfoPendingResumptionHandler::PendingSubscriptionsResumption::
    FillSubscriptionResults(const smart_objects::SmartObject& response) {
  SDL_LOG_AUTO_TRACE();
  using namespace application_manager;
  auto successful_subscriptions =
      SuccessfulSubscriptionsFromResponse(response);

  SDL_LOG_DEBUG("Requested data : " << Stringify(requested_vehicle_data_));
  SDL_LOG_DEBUG("Successful subscription in response : "
                << Stringify(successful_subscriptions));
  FillRestoredData(successful_subscriptions);
  SDL_LOG_DEBUG("Restored data : " << Stringify(restored_vehicle_data_));
  FillSubscriptionResults();

  auto msg_params = response[strings::msg_params];
  auto keys = msg_params.enumerate();
  for (auto key : keys) {
    if (DataWasRequested(key)) {
      subscription_results_[key] = msg_params[key];
    }
  }
}

}  // namespace vehicle_info_plugin
{"hexsha": "c9affa5760a71897c0aa34d527d9261ad7dda2ff", "size": 15873, "ext": "cc", "lang": "C++", "max_stars_repo_path": "src/components/application_manager/rpc_plugins/vehicle_info_plugin/src/vehicle_info_pending_resumption_handler.cc", "max_stars_repo_name": "Sohei-Suzuki-Nexty/sdl_core", "max_stars_repo_head_hexsha": "68f082169e0a40fccd9eb0db3c83911c28870f07", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 5.0, "max_stars_repo_stars_event_min_datetime": "2015-02-26T07:47:26.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-03T15:08:53.000Z", "max_issues_repo_path": "src/components/application_manager/rpc_plugins/vehicle_info_plugin/src/vehicle_info_pending_resumption_handler.cc", "max_issues_repo_name": "Sohei-Suzuki-Nexty/sdl_core", "max_issues_repo_head_hexsha": "68f082169e0a40fccd9eb0db3c83911c28870f07", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 73.0, "max_issues_repo_issues_event_min_datetime": "2015-11-12T15:36:48.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-11T08:09:52.000Z", "max_forks_repo_path": "src/components/application_manager/rpc_plugins/vehicle_info_plugin/src/vehicle_info_pending_resumption_handler.cc", "max_forks_repo_name": "Sohei-Suzuki-Nexty/sdl_core", "max_forks_repo_head_hexsha": "68f082169e0a40fccd9eb0db3c83911c28870f07", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 8.0, "max_forks_repo_forks_event_min_datetime": "2015-09-11T08:37:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-13T19:34:02.000Z", "avg_line_length": 37.3482352941, "max_line_length": 80, "alphanum_fraction": 0.7725697726, "num_tokens": 3398}
# [Super SloMo]
## High Quality Estimation of Multiple Intermediate Frames for Video Interpolation

from comet_ml import Experiment, ExistingExperiment
import argparse
import torch
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
import model
import dataloader_c as dataloader
from math import log10
import datetime
import numpy as np
import warnings
from pytorch_msssim import ssim, ms_ssim, SSIM, MS_SSIM  # pip install pytorch-msssim
from PIL import Image
import torchvision.utils as vutils
from utils import init_net, upload_images

warnings.simplefilter("ignore", UserWarning)
# from tensorboardX import SummaryWriter

# For parsing commandline arguments
parser = argparse.ArgumentParser()
parser.add_argument(
    "--dataset_root",
    type=str,
    required=True,
    help="path to dataset folder containing train-test-validation folders",
)
parser.add_argument(
    "--checkpoint_dir",
    type=str,
    required=True,
    help="path to folder for saving checkpoints",
)
parser.add_argument(
    "--checkpoint", type=str, help="path of checkpoint for pretrained model"
)
parser.add_argument(
    "--train_continue", action="store_true", help="resuming from checkpoint."
)
parser.add_argument(
    "-it",
    "--init_type",
    default="",
    type=str,
    help="the name of an initialization method: normal | xavier | kaiming | orthogonal",
)
parser.add_argument(
    "--epochs", type=int, default=200, help="number of epochs to train. Default: 200."
)
parser.add_argument(
    "-tbs",
    "--train_batch_size",
    type=int,
    default=384,
    help="batch size for training. Default: 384.",
)
parser.add_argument(
    "-nw", "--num_workers", default=4, type=int, help="number of CPU workers to use"
)
parser.add_argument(
    "-vbs",
    "--validation_batch_size",
    type=int,
    default=384,
    help="batch size for validation. Default: 384.",
)
parser.add_argument(
    "-ilr",
    "--init_learning_rate",
    type=float,
    default=0.0001,
    help="set initial learning rate. Default: 0.0001.",
)
parser.add_argument(
    "--milestones",
    type=int,
    nargs="+",
    default=[100, 150],
    help="Set to epoch values where you want to decrease learning rate by a factor of 0.1. Default: [100, 150]",
)
parser.add_argument(
    "--progress_iter",
    type=int,
    default=100,
    help="frequency of reporting progress and validation. N: after every N iterations. Default: 100.",
)
parser.add_argument(
    "--logimagefreq",
    type=int,
    default=1,
    help="frequency of logging image.",
)
parser.add_argument(
    "--checkpoint_epoch",
    type=int,
    default=5,
    help="checkpoint saving frequency. N: after every N epochs. Each checkpoint is roughly of size 151 MB. Default: 5.",
)
parser.add_argument(
    "-wp", "--workspace", default="tianyu-z", type=str, help="comet-ml workspace"
)
parser.add_argument(
    "-dh", "--data_h", default=128, type=int, help="H of the data shape"
)
parser.add_argument(
    "-dw", "--data_w", default=128, type=int, help="W of the data shape"
)
parser.add_argument(
    "-pn",
    "--projectname",
    default="super-slomo",
    type=str,
    help="comet-ml project name",
)
parser.add_argument(
    "--nocomet", action="store_true", help="not using comet_ml logging."
)
parser.add_argument(
    "--cometid",
    type=str,
    default="",
    help="the comet id to resume exps",
)
parser.add_argument(
    "-rs",
    "--randomseed",
    type=int,
    default=2021,
    help="random seed. Default: 2021.",
)
args = parser.parse_args()

random_seed = args.randomseed
np.random.seed(random_seed)
torch.manual_seed(random_seed)
if torch.cuda.device_count() > 1:
    torch.cuda.manual_seed_all(random_seed)
else:
    torch.cuda.manual_seed(random_seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

## [TensorboardX](https://github.com/lanpa/tensorboardX)
### For visualizing loss and interpolated frames
# writer = SummaryWriter("log")

### Initialize flow computation and arbitrary-time flow interpolation CNNs.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
flowComp = model.UNet(6, 4)
flowComp.to(device)
if args.init_type != "":
    init_net(flowComp, args.init_type)
    print(args.init_type + " initializing flowComp done")
ArbTimeFlowIntrp = model.UNet(20, 5)
ArbTimeFlowIntrp.to(device)
if args.init_type != "":
    init_net(ArbTimeFlowIntrp, args.init_type)
    print(args.init_type + " initializing ArbTimeFlowIntrp done")

### Initialization
if args.train_continue:
    if not args.nocomet and args.cometid != "":
        comet_exp = ExistingExperiment(previous_experiment=args.cometid)
    elif not args.nocomet and args.cometid == "":
        comet_exp = Experiment(workspace=args.workspace, project_name=args.projectname)
    else:
        comet_exp = None
    dict1 = torch.load(args.checkpoint)
    ArbTimeFlowIntrp.load_state_dict(dict1["state_dictAT"])
    flowComp.load_state_dict(dict1["state_dictFC"])
    print("Pretrained model loaded!")
else:
    # start logging info in comet-ml
    if not args.nocomet:
        comet_exp = Experiment(workspace=args.workspace, project_name=args.projectname)
        # comet_exp.log_parameters(flatten_opts(args))
    else:
        comet_exp = None
    dict1 = {"loss": [], "valLoss": [], "valPSNR": [], "valSSIM": [], "epoch": -1}

### Initialize backward warpers for train and validation datasets
trainFlowBackWarp = model.backWarp(128, 128, device)
trainFlowBackWarp = trainFlowBackWarp.to(device)
validationFlowBackWarp = model.backWarp(128, 128, device)
validationFlowBackWarp = validationFlowBackWarp.to(device)

### Load Datasets
# Channel wise mean calculated on adobe240-fps training dataset
mean = [0.5, 0.5, 0.5]
std = [1, 1, 1]
normalize = transforms.Normalize(mean=mean, std=std)
transform = transforms.Compose([transforms.ToTensor(), normalize])

trainset = dataloader.SuperSloMo(
    root=args.dataset_root + "/train", transform=transform, train=True
)
trainloader = torch.utils.data.DataLoader(
    trainset,
    batch_size=args.train_batch_size,
    num_workers=args.num_workers,
    shuffle=True,
)

validationset = dataloader.SuperSloMo(
    root=args.dataset_root + "/validation",
    transform=transform,
    randomCropSize=(128, 128),
    train=False,
)
validationloader = torch.utils.data.DataLoader(
    validationset,
    batch_size=args.validation_batch_size,
    num_workers=args.num_workers,
    shuffle=False,
)

print(trainset, validationset)

### Create transform to display image from tensor
negmean = [x * -1 for x in mean]
revNormalize = transforms.Normalize(mean=negmean, std=std)
TP = transforms.Compose([revNormalize, transforms.ToPILImage()])


### Utils
def get_lr(optimizer):
    for param_group in optimizer.param_groups:
        return param_group["lr"]


### Loss and Optimizer
L1_lossFn = nn.L1Loss()
MSE_LossFn = nn.MSELoss()

params = list(ArbTimeFlowIntrp.parameters()) + list(flowComp.parameters())

optimizer = optim.Adam(params, lr=args.init_learning_rate)

# scheduler to decrease learning rate by a factor of 10 at milestones.
scheduler = optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=args.milestones, gamma=0.1
)

### Initializing VGG16 model for perceptual loss
vgg16 = torchvision.models.vgg16(pretrained=True)
vgg16_conv_4_3 = nn.Sequential(*list(vgg16.children())[0][:22])
vgg16_conv_4_3.to(device)
if args.init_type != "":
    init_net(vgg16_conv_4_3, args.init_type)
for param in vgg16_conv_4_3.parameters():
    param.requires_grad = False


### Validation function
def validate(epoch, logimage=False):
    # For details see training.
    psnr = 0
    ssim_val = 0
    tloss = 0
    flag = 1
    valid_images = []
    with torch.no_grad():
        for validationIndex, (validationData, validationFrameIndex) in enumerate(
            validationloader, 0
        ):
            frame0, frameT, frame1 = validationData

            I0 = frame0.to(device)
            I1 = frame1.to(device)
            IFrame = frameT.to(device)

            flowOut = flowComp(torch.cat((I0, I1), dim=1))
            F_0_1 = flowOut[:, :2, :, :]
            F_1_0 = flowOut[:, 2:, :, :]

            fCoeff = model.getFlowCoeff(validationFrameIndex, device)

            F_t_0 = fCoeff[0] * F_0_1 + fCoeff[1] * F_1_0
            F_t_1 = fCoeff[2] * F_0_1 + fCoeff[3] * F_1_0

            g_I0_F_t_0 = validationFlowBackWarp(I0, F_t_0)
            g_I1_F_t_1 = validationFlowBackWarp(I1, F_t_1)

            intrpOut = ArbTimeFlowIntrp(
                torch.cat(
                    (I0, I1, F_0_1, F_1_0, F_t_1, F_t_0, g_I1_F_t_1, g_I0_F_t_0), dim=1
                )
            )

            F_t_0_f = intrpOut[:, :2, :, :] + F_t_0
            F_t_1_f = intrpOut[:, 2:4, :, :] + F_t_1
            V_t_0 = torch.sigmoid(intrpOut[:, 4:5, :, :])
            V_t_1 = 1 - V_t_0

            g_I0_F_t_0_f = validationFlowBackWarp(I0, F_t_0_f)
            g_I1_F_t_1_f = validationFlowBackWarp(I1, F_t_1_f)

            wCoeff = model.getWarpCoeff(validationFrameIndex, device)

            Ft_p = (
                wCoeff[0] * V_t_0 * g_I0_F_t_0_f + wCoeff[1] * V_t_1 * g_I1_F_t_1_f
            ) / (wCoeff[0] * V_t_0 + wCoeff[1] * V_t_1)

            # For tensorboard
            if flag:
                retImg = torchvision.utils.make_grid(
                    [
                        revNormalize(frame0[0]),
                        revNormalize(frameT[0]),
                        revNormalize(Ft_p.cpu()[0]),
                        revNormalize(frame1[0]),
                    ],
                    padding=10,
                )
                flag = 0
            if logimage:
                if validationIndex % args.logimagefreq == 0:
                    valid_images.append(
                        255.0
                        * frame0[0]
                        .resize_(1, 1, args.data_h, args.data_w)
                        .repeat(1, 3, 1, 1)
                    )
                    valid_images.append(
                        255.0
                        * frameT[0]
                        .resize_(1, 1, args.data_h, args.data_w)
                        .repeat(1, 3, 1, 1)
                    )
                    valid_images.append(
                        255.0
                        * frame1[0]
                        .resize_(1, 1, args.data_h, args.data_w)
                        .repeat(1, 3, 1, 1)
                    )
                    valid_images.append(
                        255.0
                        * Ft_p.cpu()[0]
                        .resize_(1, 1, args.data_h, args.data_w)
                        .repeat(1, 3, 1, 1)
                    )

            # loss
            recnLoss = L1_lossFn(Ft_p, IFrame)
            prcpLoss = MSE_LossFn(vgg16_conv_4_3(Ft_p), vgg16_conv_4_3(IFrame))
            warpLoss = (
                L1_lossFn(g_I0_F_t_0, IFrame)
                + L1_lossFn(g_I1_F_t_1, IFrame)
                + L1_lossFn(validationFlowBackWarp(I0, F_1_0), I1)
                + L1_lossFn(validationFlowBackWarp(I1, F_0_1), I0)
            )

            loss_smooth_1_0 = torch.mean(
                torch.abs(F_1_0[:, :, :, :-1] - F_1_0[:, :, :, 1:])
            ) + torch.mean(torch.abs(F_1_0[:, :, :-1, :] - F_1_0[:, :, 1:, :]))
            loss_smooth_0_1 = torch.mean(
                torch.abs(F_0_1[:, :, :, :-1] - F_0_1[:, :, :, 1:])
            ) + torch.mean(torch.abs(F_0_1[:, :, :-1, :] - F_0_1[:, :, 1:, :]))
            loss_smooth = loss_smooth_1_0 + loss_smooth_0_1

            loss = 204 * recnLoss + 102 * warpLoss + 0.005 * prcpLoss + loss_smooth
            tloss += loss.item()

            # psnr
            MSE_val = MSE_LossFn(Ft_p, IFrame)
            psnr += 10 * log10(1 / MSE_val.item())
            ssim_val += ssim(Ft_p, IFrame, data_range=1, size_average=True).item()

    if logimage:
        upload_images(
            valid_images,
            epoch,
            exp=comet_exp,
            im_per_row=4,
            rows_per_log=int(len(valid_images) / 4),
        )

    return (
        (psnr / len(validationloader)),
        (ssim_val / len(validationloader)),
        (tloss / len(validationloader)),
        retImg,
    )


### Training
import time

best_psnr = -1
best_ssim = -1
best_valloss = 9999
start = time.time()

cLoss = dict1["loss"]
valLoss = dict1["valLoss"]
valPSNR = dict1["valPSNR"]
valSSIM = dict1["valSSIM"]
checkpoint_counter = int((dict1["epoch"] + 1) / args.checkpoint_epoch)

### Main training loop
for epoch in range(dict1["epoch"] + 1, args.epochs):
    print("Epoch: ", epoch)

    # Append and reset
    cLoss.append([])
    valLoss.append([])
    valPSNR.append([])
    valSSIM.append([])
    iLoss = 0

    if epoch > dict1["epoch"] + 1:
        # Increment scheduler count
        scheduler.step()

    # if epoch == dict1["epoch"] + 1:
    #     # test if validate works
    #     validate(epoch, True)

    for trainIndex, (trainData, trainFrameIndex) in enumerate(trainloader, 0):
        ## Getting the input and the target from the training set
        frame0, frameT, frame1 = trainData

        I0 = frame0.to(device)
        I1 = frame1.to(device)
        IFrame = frameT.to(device)

        optimizer.zero_grad()

        # Calculate flow between reference frames I0 and I1
        flowOut = flowComp(torch.cat((I0, I1), dim=1))

        # Extracting flows between I0 and I1 - F_0_1 and F_1_0
        F_0_1 = flowOut[:, :2, :, :]
        F_1_0 = flowOut[:, 2:, :, :]

        fCoeff = model.getFlowCoeff(trainFrameIndex, device)

        # Calculate intermediate flows
        F_t_0 = fCoeff[0] * F_0_1 + fCoeff[1] * F_1_0
        F_t_1 = fCoeff[2] * F_0_1 + fCoeff[3] * F_1_0

        # Get intermediate frames from the intermediate flows
        g_I0_F_t_0 = trainFlowBackWarp(I0, F_t_0)
        g_I1_F_t_1 = trainFlowBackWarp(I1, F_t_1)

        # Calculate optical flow residuals and visibility maps
        intrpOut = ArbTimeFlowIntrp(
            torch.cat(
                (I0, I1, F_0_1, F_1_0, F_t_1, F_t_0, g_I1_F_t_1, g_I0_F_t_0), dim=1
            )
        )

        # Extract optical flow residuals and visibility maps
        F_t_0_f = intrpOut[:, :2, :, :] + F_t_0
        F_t_1_f = intrpOut[:, 2:4, :, :] + F_t_1
        V_t_0 = torch.sigmoid(intrpOut[:, 4:5, :, :])
        V_t_1 = 1 - V_t_0

        # Get intermediate frames from the intermediate flows
        g_I0_F_t_0_f = trainFlowBackWarp(I0, F_t_0_f)
        g_I1_F_t_1_f = trainFlowBackWarp(I1, F_t_1_f)

        wCoeff = model.getWarpCoeff(trainFrameIndex, device)

        # Calculate final intermediate frame
        Ft_p = (wCoeff[0] * V_t_0 * g_I0_F_t_0_f + wCoeff[1] * V_t_1 * g_I1_F_t_1_f) / (
            wCoeff[0] * V_t_0 + wCoeff[1] * V_t_1
        )

        # Loss
        recnLoss = L1_lossFn(Ft_p, IFrame)
        prcpLoss = MSE_LossFn(vgg16_conv_4_3(Ft_p), vgg16_conv_4_3(IFrame))
        warpLoss = (
            L1_lossFn(g_I0_F_t_0, IFrame)
            + L1_lossFn(g_I1_F_t_1, IFrame)
            + L1_lossFn(trainFlowBackWarp(I0, F_1_0), I1)
            + L1_lossFn(trainFlowBackWarp(I1, F_0_1), I0)
        )

        loss_smooth_1_0 = torch.mean(
            torch.abs(F_1_0[:, :, :, :-1] - F_1_0[:, :, :, 1:])
        ) + torch.mean(torch.abs(F_1_0[:, :, :-1, :] - F_1_0[:, :, 1:, :]))
        loss_smooth_0_1 = torch.mean(
            torch.abs(F_0_1[:, :, :, :-1] - F_0_1[:, :, :, 1:])
        ) + torch.mean(torch.abs(F_0_1[:, :, :-1, :] - F_0_1[:, :, 1:, :]))
        loss_smooth = loss_smooth_1_0 + loss_smooth_0_1

        # Total Loss - Coefficients 204 and 102 are used instead of 0.8 and 0.4
        # since the loss in paper is calculated for input pixels in range 0-255
        # and the input to our network is in range 0-1
        loss = 204 * recnLoss + 102 * warpLoss + 0.005 * prcpLoss + loss_smooth

        # Backpropagate
        loss.backward()
        optimizer.step()
        iLoss += loss.item()

    # Validation and progress every `args.progress_iter` iterations
    # if (trainIndex % args.progress_iter) == args.progress_iter - 1:
    end = time.time()

    psnr, ssim_val, vLoss, valImg = validate(epoch, logimage=True)

    valPSNR[epoch].append(psnr)
    valSSIM[epoch].append(ssim_val)
    valLoss[epoch].append(vLoss)

    # Tensorboard
    itr = int(trainIndex + epoch * (len(trainloader)))
    # writer.add_scalars(
    #     "Loss",
    #     {"trainLoss": iLoss / args.progress_iter, "validationLoss": vLoss},
    #     itr,
    # )
    # writer.add_scalar("PSNR", psnr, itr)
    # writer.add_image("Validation", valImg, itr)

    if comet_exp is not None:
        comet_exp.log_metrics(
            {"trainLoss": iLoss / args.progress_iter, "validationLoss": vLoss},
            step=itr,
            epoch=epoch,
        )
        comet_exp.log_metric("PSNR", psnr, step=itr, epoch=epoch)
        comet_exp.log_metric("SSIM", ssim_val, step=itr, epoch=epoch)
    # valImage = torch.movedim(valImg, 0, -1)
    # print(type(valImage))
    # print(valImage.shape)
    # print(valImage.max())
    # print(valImage.min())
    # comet_exp.log_image(
    #     valImage,
    #     name="iter: " + str(iter) + ";epoch: " + str(epoch),
    #     image_format="jpg",
    #     step=itr,
    # )
    #####
    endVal = time.time()

    print(
        " Loss: %0.6f Iterations: %4d/%4d TrainExecTime: %0.1f ValLoss:%0.6f ValPSNR: %0.4f ValSSIM: %0.4f ValEvalTime: %0.2f LearningRate: %f"
        % (
            iLoss / args.progress_iter,
            trainIndex,
            len(trainloader),
            end - start,
            vLoss,
            psnr,
            ssim_val,
            endVal - end,
            get_lr(optimizer),
        )
    )
    cLoss[epoch].append(iLoss / args.progress_iter)
    iLoss = 0
    start = time.time()

    # Create checkpoint after every `args.checkpoint_epoch` epochs
    if (epoch % args.checkpoint_epoch) == args.checkpoint_epoch - 1:
        dict1 = {
            "Detail": "End to end Super SloMo.",
            "epoch": epoch,
            "timestamp": datetime.datetime.now(),
            "trainBatchSz": args.train_batch_size,
            "validationBatchSz": args.validation_batch_size,
            "learningRate": get_lr(optimizer),
            "loss": cLoss,
            "valLoss": valLoss,
            "valPSNR": valPSNR,
            "valSSIM": valSSIM,
            "state_dictFC": flowComp.state_dict(),
            "state_dictAT": ArbTimeFlowIntrp.state_dict(),
        }
        torch.save(
            dict1,
            args.checkpoint_dir + "/SuperSloMo" + str(checkpoint_counter) + ".ckpt",
        )
        checkpoint_counter += 1

    if psnr > best_psnr:
        best_psnr = psnr
        dict1 = {
            "Detail": "End to end Super SloMo.",
            "epoch": epoch,
            "timestamp": datetime.datetime.now(),
            "trainBatchSz": args.train_batch_size,
            "validationBatchSz": args.validation_batch_size,
            "learningRate": get_lr(optimizer),
            "loss": cLoss,
            "valLoss": valLoss,
            "valPSNR": valPSNR,
            "valSSIM": valSSIM,
            "state_dictFC": flowComp.state_dict(),
            "state_dictAT": ArbTimeFlowIntrp.state_dict(),
        }
        torch.save(
            dict1,
            args.checkpoint_dir + "/SuperSloMo" + "bestpsnr_epoch" + ".ckpt",
        )
        print("New Best PSNR found and saved at " + str(epoch))

    if vLoss < best_valloss:
        best_valloss = vLoss
        dict1 = {
            "Detail": "End to end Super SloMo.",
            "epoch": epoch,
            "timestamp": datetime.datetime.now(),
            "trainBatchSz": args.train_batch_size,
            "validationBatchSz": args.validation_batch_size,
            "learningRate": get_lr(optimizer),
            "loss": cLoss,
            "valLoss": valLoss,
            "valPSNR": valPSNR,
            "valSSIM": valSSIM,
            "state_dictFC": flowComp.state_dict(),
            "state_dictAT": ArbTimeFlowIntrp.state_dict(),
        }
        torch.save(
            dict1,
            args.checkpoint_dir + "/SuperSloMo" + "bestvalloss_epoch" + ".ckpt",
        )
        print("New Best valloss found and saved at " + str(epoch))

    if ssim_val > best_ssim:
        best_ssim = ssim_val
        dict1 = {
            "Detail": "End to end Super SloMo.",
            "epoch": epoch,
            "timestamp": datetime.datetime.now(),
            "trainBatchSz": args.train_batch_size,
            "validationBatchSz": args.validation_batch_size,
            "learningRate": get_lr(optimizer),
            "loss": cLoss,
            "valLoss": valLoss,
            "valPSNR": valPSNR,
            "valSSIM": valSSIM,
            "state_dictFC": flowComp.state_dict(),
            "state_dictAT": ArbTimeFlowIntrp.state_dict(),
        }
        torch.save(
            dict1,
            args.checkpoint_dir + "/SuperSloMo" + "bestssim_epoch" + ".ckpt",
        )
        print("New Best SSIM found and saved at " + str(epoch))
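The two numeric conventions used above, as a standalone illustration: the paper's 0.8/0.4 loss weights rescaled for 0-1 inputs, and PSNR computed from MSE for unit-range images (the tensors here are random stand-ins).

# Standalone sketch of the numeric conventions in the training script above.
import torch
from math import log10

mse = torch.nn.MSELoss()
a, b = torch.rand(1, 3, 8, 8), torch.rand(1, 3, 8, 8)

# The paper states 0.8 and 0.4 for 255-scaled pixels; for inputs in [0, 1]
# they become 0.8 * 255 = 204 and 0.4 * 255 = 102, the coefficients used above.
print(0.8 * 255, 0.4 * 255)  # 204.0 102.0

# PSNR for images in [0, 1]: the peak signal is 1, so PSNR = 10 * log10(1 / MSE).
mse_val = mse(a, b).item()
print(10 * log10(1 / mse_val))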
{"hexsha": "0ea7768fd185301fdacd4b292ea08a02f037179e", "size": 21031, "ext": "py", "lang": "Python", "max_stars_repo_path": "train_cloudcast.py", "max_stars_repo_name": "tianyu-z/Super-SloMo", "max_stars_repo_head_hexsha": "55a278cc46b6edb731895548b5a5c26e9b3439ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "train_cloudcast.py", "max_issues_repo_name": "tianyu-z/Super-SloMo", "max_issues_repo_head_hexsha": "55a278cc46b6edb731895548b5a5c26e9b3439ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "train_cloudcast.py", "max_forks_repo_name": "tianyu-z/Super-SloMo", "max_forks_repo_head_hexsha": "55a278cc46b6edb731895548b5a5c26e9b3439ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.5780780781, "max_line_length": 149, "alphanum_fraction": 0.5957871713, "include": true, "reason": "import numpy", "num_tokens": 5764}
classdef SettingsLevelSetSmoothRectangleInclusion < SettingsLevelSetCreator

    properties (Access = public)
        widthH
        widthV
        pnorm
    end

    methods (Access = public)

        function obj = SettingsLevelSetSmoothRectangleInclusion(varargin)
            obj.loadParams('paramsLevelSetCreator_SmoothRectangle.json')
            if nargin == 1
                obj.loadParams(varargin{1})
            end
        end

    end

end
{"author": "SwanLab", "repo": "Swan", "sha": "f8355f3561bb1a1603f56b3676873147d22a511e", "save_path": "github-repos/MATLAB/SwanLab-Swan", "path": "github-repos/MATLAB/SwanLab-Swan/Swan-f8355f3561bb1a1603f56b3676873147d22a511e/Topology Optimization/DesignVaribleInitializer/LevelSetInitializer/Settings/SettingsLevelSetSmoothRectangleInclusion.m"}
!|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||

 module hmix_del2

!BOP
! !MODULE: hmix_del2
!
! !DESCRIPTION:
!  This module contains routines for computing Laplacian horizontal
!  diffusion of momentum and tracers.
!
! !REVISION HISTORY:
!  CVS:$Id: hmix_del2.F90,v 1.20 2003/02/24 20:43:04 pwjones Exp $
!  CVS:$Name: POP_2_0_1 $
!
! !USES:

   use kinds_mod
   use blocks
   use communicate
   use distribution
   use domain_size
   use domain
   use broadcast
   use boundary
   use constants
   use topostress
   use diagnostics
   use io
   use global_reductions
   use grid
   use exit_mod

   implicit none
   private
   save

! !PUBLIC MEMBER FUNCTIONS:

   public :: init_del2u,  &
             init_del2t,  &
             hdiffu_del2, &
             hdifft_del2

!EOP
!BOC
!-----------------------------------------------------------------------
!
!  private module variables
!
!  operator coefficients:
!
!  DT{N,S,E,W} = {N,S,E,W} coefficients of 5-point stencil for the
!                Del**2 operator before b.c.s have been applied
!  DU{C,N,S,E,W} = central and {N,S,E,W} coefficients of 5-point
!                  stencil for the Del**2 operator acting at U points
!                  (without metric terms that mix U,V)
!  DM{C,N,S,E,W} = central and {N,S,E,W} coefficients of 5-point
!                  stencil for the metric terms that mix U,V
!  DUM = central coefficient for metric terms that do not mix U,V
!
!-----------------------------------------------------------------------

   real (r8), dimension (:,:,:), allocatable :: &
      DTN,DTS,DTE,DTW,     &
      DUC,DUN,DUS,DUE,DUW, &
      DMC,DMN,DMS,DME,DMW, &
      DUM,                 &
      AHF,                 &! variable mixing factor for tracer mixing
      AMF                   ! variable mixing factor for momentum mixing

   real (r8) :: &
      ah,       &! horizontal tracer mixing coefficient
      am         ! horizontal momentum mixing coefficient

   logical (log_kind) :: &
      lauto_hmixu,     &! automatically computing mixing coeffs
      lauto_hmixt,     &! automatically computing mixing coeffs
      lvariable_hmixu, &! spatially varying mixing coeffs
      lvariable_hmixt   ! spatially varying mixing coeffs

!EOC
!***********************************************************************

 contains

!***********************************************************************
!BOP
! !IROUTINE: init_del2u
! !INTERFACE:

 subroutine init_del2u

! !DESCRIPTION:
!  This routine calculates the coefficients of the 5-point stencils for
!  the $\nabla^2$ operator acting on momentum fields and also
!  calculates coefficients for all diffusive metric terms.  See the
!  description under hdiffu for the form of the operator.
!
! !REVISION HISTORY:
!  same as module

!EOP
!BOC
!-----------------------------------------------------------------------
!
!  local variables
!
!-----------------------------------------------------------------------

   integer (int_kind) :: &
      i,j,       &! dummy loop indices
      iblock,    &! block index
      nml_error   ! error flag for namelist

   real (r8), dimension (:,:), allocatable :: &
      KXU,KYU,             &! metric factors
      DXKX,DYKY,DXKY,DYKX, &! d{x,y}k{x,y}
      WORK1,WORK2           ! temporary work space

   logical (log_kind) :: &
      lauto_hmix,     &! flag to internally compute mixing coeff
      lvariable_hmix   ! flag to enable spatially varying mixing

   namelist /hmix_del2u_nml/ lauto_hmix, lvariable_hmix, am

   real (r8) :: &
      amfmin, amfmax   ! min max mixing for variable mixing

!-----------------------------------------------------------------------
!
!  read input namelist to set options
!
!-----------------------------------------------------------------------

   lauto_hmix     = .false.
   lvariable_hmix = .false.
   am = c0

   if (my_task == master_task) then
      open (nml_in, file=nml_filename, status='old',iostat=nml_error)
      if (nml_error /= 0) then
         nml_error = -1
      else
         nml_error =  1
      endif
      do while (nml_error > 0)
         read(nml_in, nml=hmix_del2u_nml,iostat=nml_error)
      end do
      if (nml_error == 0) close(nml_in)
   endif

   call broadcast_scalar(nml_error, master_task)
   if (nml_error /= 0) then
      call exit_POP(sigAbort,'ERROR reading hmix_del2u_nml')
   endif

   if (my_task == master_task) then
      write(stdout,blank_fmt)
      write(stdout,'(a33)') 'Laplacian momentum mixing options'
      write(stdout,blank_fmt)

      lauto_hmixu = lauto_hmix
      if (.not. lauto_hmixu) then
         write(stdout,'(a33)') 'Using input horizontal viscosity:'
         write(stdout,'(a7,2x,1pe12.5)') '  am =',am
      endif

      lvariable_hmixu = lvariable_hmix
      if (.not. lvariable_hmixu) then
         write(stdout,'(a44)') &
            'Variable horizontal momentum mixing disabled'
      endif
   endif

   call broadcast_scalar(lauto_hmixu,     master_task)
   call broadcast_scalar(lvariable_hmixu, master_task)
   call broadcast_scalar(am,              master_task)

!-----------------------------------------------------------------------
!
!  automatically set horizontal mixing coefficients if requested
!
!-----------------------------------------------------------------------

   if (lauto_hmixu) then
      am = 1.0e7_r8*(720.0_r8/float(nx_global))   ! scale to 1e7 at 1/2 degree
      if (my_task == master_task) then
         write(stdout,'(a44)') &
            'Horizontal viscosity computed automatically:'
         write(stdout,'(a7,2x,1pe12.5)') '  am =',am
      endif
   endif

!-----------------------------------------------------------------------
!
!  set spatially variable mixing arrays if requested
!
!  this applies only to momentum or tracer mixing
!  with del2 or del4 options
!
!  functions describing spatial dependence of horizontal mixing
!  coefficients:  am -> am*AMF
!
!  for standard laplacian mixing they scale like (cell area)**0.5
!  (note: these forms assume a global mesh with a grid line along
!  the equator, and the mixing coefficients are then set relative
!  to the average grid spacing at the equator - for cells of
!  this size AMF = AHF = 1.0).
!
!-----------------------------------------------------------------------

   if (lvariable_hmixu) then

      amfmin = c1
      amfmax = c1

      allocate(AMF(nx_block,ny_block,nblocks_clinic))

      do iblock=1,nblocks_clinic
         AMF(:,:,iblock) = sqrt(UAREA(:,:,iblock)/ &
                                (c2*pi*radius/nx_global)**2)
      end do

      !RSx3 *** set variable viscosity for x3-prime run
      !AMF = c4*am*(DXUR**2+DYUR**2)*dtu
      !where(AMF.gt.p5)
      !   AMF = p5/AMF
      !elsewhere
      !   AMF = c1
      !endwhere
      !RSx3

      amfmin = global_minval(AMF, distrb_clinic, field_loc_NEcorner, CALCU)
      amfmax = global_maxval(AMF, distrb_clinic, field_loc_NEcorner, CALCU)

      if (my_task == master_task) then
         write(stdout,'(a37)') 'Variable horizontal viscosity enabled'
         write(stdout,'(a12,1x,1pe12.5,3x,a9,1x,1pe12.5)') &
            '  Min AMF =',amfmin,'Max AMF =',amfmax
      endif

      call update_ghost_cells(AMF, bndy_clinic, field_loc_NEcorner, &
                              field_type_scalar)

   else

      !*** allocate AMF temporarily to simplify setup
      allocate(AMF(nx_block,ny_block,nblocks_clinic))
      AMF = c1

   endif

!-----------------------------------------------------------------------
!
!  calculate operator weights
!
!-----------------------------------------------------------------------

   allocate(DUC(nx_block,ny_block,nblocks_clinic), &
            DUN(nx_block,ny_block,nblocks_clinic), &
            DUS(nx_block,ny_block,nblocks_clinic), &
            DUE(nx_block,ny_block,nblocks_clinic), &
            DUW(nx_block,ny_block,nblocks_clinic), &
            DMC(nx_block,ny_block,nblocks_clinic), &
            DMN(nx_block,ny_block,nblocks_clinic), &
            DMS(nx_block,ny_block,nblocks_clinic), &
            DME(nx_block,ny_block,nblocks_clinic), &
            DMW(nx_block,ny_block,nblocks_clinic), &
            DUM(nx_block,ny_block,nblocks_clinic))

   allocate(KXU  (nx_block,ny_block), &
            KYU  (nx_block,ny_block), &
            DXKX (nx_block,ny_block), &
            DYKY (nx_block,ny_block), &
            DXKY (nx_block,ny_block), &
            DYKX (nx_block,ny_block), &
            WORK1(nx_block,ny_block), &
            WORK2(nx_block,ny_block))

   do iblock=1,nblocks_clinic

!-----------------------------------------------------------------------
!
!     calculate central and {N,S,E,W} coefficients for
!     Del**2 (without metric terms) acting on momentum.
!
!-----------------------------------------------------------------------

      WORK1 = (HUS(:,:,iblock)/HTE(:,:,iblock))*p5*(AMF(:,:,iblock) + &
              eoshift(AMF(:,:,iblock),dim=2,shift=-1))

      DUS(:,:,iblock) = WORK1*UAREA_R(:,:,iblock)
      DUN(:,:,iblock) = eoshift(WORK1,dim=2,shift=1)*UAREA_R(:,:,iblock)

      WORK1 = (HUW(:,:,iblock)/HTN(:,:,iblock))*p5*(AMF(:,:,iblock) + &
              eoshift(AMF(:,:,iblock),dim=1,shift=-1))

      DUW(:,:,iblock) = WORK1*UAREA_R(:,:,iblock)
      DUE(:,:,iblock) = eoshift(WORK1,dim=1,shift=1)*UAREA_R(:,:,iblock)

!-----------------------------------------------------------------------
!
!     coefficients for metric terms in Del**2(U)
!     and for metric advection terms (KXU,KYU)
!
!-----------------------------------------------------------------------

      KXU = (eoshift(HUW(:,:,iblock),dim=1,shift=1) - HUW(:,:,iblock))* &
            UAREA_R(:,:,iblock)
      KYU = (eoshift(HUS(:,:,iblock),dim=2,shift=1) - HUS(:,:,iblock))* &
            UAREA_R(:,:,iblock)

      WORK1 = (HTE(:,:,iblock) - &
               eoshift(HTE(:,:,iblock),dim=1,shift=-1))* &
              TAREA_R(:,:,iblock)   ! KXT

      WORK2 = p5*(WORK1 + eoshift(WORK1,dim=2,shift=1))* &
              p5*(eoshift(AMF(:,:,iblock),dim=1,shift=-1) + &
                  AMF(:,:,iblock))

      DXKX = (eoshift(WORK2,dim=1,shift=1) - WORK2)*DXUR(:,:,iblock)

      WORK2 = p5*(WORK1 + eoshift(WORK1,dim=1,shift=1))* &
              p5*(eoshift(AMF(:,:,iblock),dim=2,shift=-1) + &
                  AMF(:,:,iblock))

      DYKX = (eoshift(WORK2,dim=2,shift=1) - WORK2)*DYUR(:,:,iblock)

      WORK1 = (HTN(:,:,iblock) - &
               eoshift(HTN(:,:,iblock),dim=2,shift=-1))* &
              TAREA_R(:,:,iblock)   ! KYT

      WORK2 = p5*(WORK1 + eoshift(WORK1,dim=1,shift=1))* &
              p5*(eoshift(AMF(:,:,iblock),dim=2,shift=-1) + &
                  AMF(:,:,iblock))

      DYKY = (eoshift(WORK2,dim=2,shift=1) - WORK2)*DYUR(:,:,iblock)

      WORK2 = p5*(WORK1 + eoshift(WORK1,dim=2,shift=1))* &
              p5*(eoshift(AMF(:,:,iblock),dim=1,shift=-1) + &
                  AMF(:,:,iblock))

      DXKY = (eoshift(WORK2,dim=1,shift=1) - WORK2)*DXUR(:,:,iblock)

      DUM(:,:,iblock) = -(DXKX + DYKY + &
                          c2*AMF(:,:,iblock)*(KXU**2 + KYU**2))
      DMC(:,:,iblock) = DXKY - DYKX

!-----------------------------------------------------------------------
!
!     calculate central and {N,S,E,W} coefficients for
!     metric mixing terms which mix U,V.
!
!-----------------------------------------------------------------------

      WORK1 = (eoshift(AMF(:,:,iblock),dim=2,shift= 1) - &
               eoshift(AMF(:,:,iblock),dim=2,shift=-1))/ &
              (HTE(:,:,iblock) + eoshift(HTE(:,:,iblock),dim=2,shift=1))

      DME(:,:,iblock) =  (c2*AMF(:,:,iblock)*KYU + WORK1)/ &
                         (HTN(:,:,iblock) + &
                          eoshift(HTN(:,:,iblock),dim=1,shift=1))

      WORK1 = (eoshift(AMF(:,:,iblock),dim=1,shift= 1) - &
               eoshift(AMF(:,:,iblock),dim=1,shift=-1))/ &
              (HTN(:,:,iblock) + eoshift(HTN(:,:,iblock),dim=1,shift=1))

      DMN(:,:,iblock) = -(c2*AMF(:,:,iblock)*KXU + WORK1)/ &
                         (HTE(:,:,iblock) + &
                          eoshift(HTE(:,:,iblock),dim=2,shift=1))

   end do

   !*** these operator coefficients only needed in physical
   !*** domain and should be valid there assuming grid quantities
   !*** and AMF defined correctly in ghost cells
   !*** if so, no boundary update needed here

   !call update_ghost_cells(DUN, bndy_clinic, field_loc_u, &
   !                             field_type_scalar)
   !call update_ghost_cells(DUS, bndy_clinic, field_loc_u, &
   !                             field_type_scalar)
   !call update_ghost_cells(DUE, bndy_clinic, field_loc_u, &
   !                             field_type_scalar)
   !call update_ghost_cells(DUW, bndy_clinic, field_loc_u, &
   !                             field_type_scalar)
   !call update_ghost_cells(DUM, bndy_clinic, field_loc_u, &
   !                             field_type_scalar)
   !call update_ghost_cells(DMC, bndy_clinic, field_loc_u, &
   !                             field_type_scalar)
   !call update_ghost_cells(DME, bndy_clinic, field_loc_u, &
   !                             field_type_scalar)
   !call update_ghost_cells(DMN, bndy_clinic, field_loc_u, &
   !                             field_type_scalar)

   DUC = -(DUN + DUS + DUE + DUW)   ! scalar laplacian
   DMW = -DME
   DMS = -DMN

!-----------------------------------------------------------------------
!
!  free up memory
!
!-----------------------------------------------------------------------

   deallocate(KXU, KYU, &
              DXKX, DYKY, DXKY, DYKX, &
              WORK1, WORK2)

   if (.not. lvariable_hmixu) deallocate(AMF)

!-----------------------------------------------------------------------
!EOC

 end subroutine init_del2u

!***********************************************************************
!BOP
! !IROUTINE: init_del2t
! !INTERFACE:

 subroutine init_del2t

! !DESCRIPTION:
!  This routine reads parameters for Laplacian tracer mixing and
!  calculates the coefficients of the 5-point stencils for
!  the $\nabla^2$ operator acting on tracer fields.  See the hdifft
!  routine for a description of the operator.
!
! !REVISION HISTORY:
!  same as module

!EOP
!BOC
!-----------------------------------------------------------------------
!
!  local variables
!
!-----------------------------------------------------------------------

   integer (int_kind) :: &
      i,j,       &! dummy loop indices
      iblock,    &! block index
      nml_error   ! error flag for namelist

   real (r8), dimension (:,:), allocatable :: &
      WORK1,WORK2   ! temporary work space

   logical (log_kind) :: &
      lauto_hmix,     &! true to automatically determine mixing coeff
      lvariable_hmix   ! true for spatially varying mixing

   namelist /hmix_del2t_nml/ lauto_hmix, lvariable_hmix, ah

   real (r8) :: &
      ahfmin, ahfmax   ! min max mixing for variable mixing

!-----------------------------------------------------------------------
!
!  read input namelist to set options
!
!-----------------------------------------------------------------------

   lauto_hmix     = .false.
   lvariable_hmix = .false.
   ah = c0

   if (my_task == master_task) then
      open (nml_in, file=nml_filename, status='old',iostat=nml_error)
      if (nml_error /= 0) then
         nml_error = -1
      else
         nml_error =  1
      endif
      do while (nml_error > 0)
         read(nml_in, nml=hmix_del2t_nml,iostat=nml_error)
      end do
      if (nml_error == 0) close(nml_in)
   endif

   call broadcast_scalar(nml_error, master_task)
   if (nml_error /= 0) then
      call exit_POP(sigAbort,'ERROR reading hmix_del2t_nml')
   endif

   if (my_task == master_task) then
      write(stdout,blank_fmt)
      write(stdout,'(a31)') 'Laplacian tracer mixing options'
      write(stdout,blank_fmt)

      lauto_hmixt = lauto_hmix
      if (.not. lauto_hmixt) then
         write(stdout,'(a35)') 'Using input horizontal diffusivity:'
         write(stdout,'(a7,2x,1pe12.5)') '  ah =',ah
      endif

      lvariable_hmixt = lvariable_hmix
      if (.not. lvariable_hmixt) then
         write(stdout,'(a43)') &
            'Variable horizontal tracer mixing disabled'
      endif
   endif

   call broadcast_scalar(lauto_hmixt,     master_task)
   call broadcast_scalar(lvariable_hmixt, master_task)
   call broadcast_scalar(ah,              master_task)

!-----------------------------------------------------------------------
!
!  automatically set horizontal mixing coefficients if requested
!
!-----------------------------------------------------------------------

   if (lauto_hmixt) then
      ah = 1.0e7_r8*(720.0_r8/float(nx_global))   ! scale to 1e7 at 1/2 degree
      if (my_task == master_task) then
         write(stdout,'(a46)') &
            'Horizontal diffusivity computed automatically:'
         write(stdout,'(a7,2x,1pe12.5)') '  ah =',ah
      endif
   endif

!-----------------------------------------------------------------------
!
!  set spatially variable mixing arrays if requested
!
!  this applies only to momentum or tracer mixing
!  with del2 or del4 options
!
!  functions describing spatial dependence of horizontal mixing
!  coefficients:  ah -> ah*AHF
!
!  for standard laplacian mixing they scale like (cell area)**0.5
!  (note: these forms assume a global mesh with a grid line along
!  the equator, and the mixing coefficients are then set relative
!  to the average grid spacing at the equator - for cells of
!  this size AMF = AHF = 1.0).
!
!-----------------------------------------------------------------------

   if (lvariable_hmixt) then

      ahfmin = c1
      ahfmax = c1

      allocate(AHF(nx_block,ny_block,nblocks_clinic))

      do iblock=1,nblocks_clinic
         AHF(:,:,iblock) = sqrt(TAREA(:,:,iblock)/ &
                                (c2*pi*radius/nx_global)**2)
      end do

      ahfmin = global_minval(AHF, distrb_clinic, field_loc_center, CALCT)
      ahfmax = global_maxval(AHF, distrb_clinic, field_loc_center, CALCT)

      if (my_task == master_task) then
         write(stdout,'(a39)') &
            'Variable horizontal diffusivity enabled'
         write(stdout,'(a12,1x,1pe12.5,3x,a9,1x,1pe12.5)') &
            '  Min AHF =',ahfmin,'Max AHF =',ahfmax
      endif

      call update_ghost_cells(AHF, bndy_clinic, field_loc_center, &
                              field_type_scalar)

   else

      !*** allocate AHF temporarily to simplify setup
      allocate(AHF(nx_block,ny_block,nblocks_clinic))
      AHF = c1

   endif

!-----------------------------------------------------------------------
!
!  calculate {N,S,E,W} coefficients for Del**2 acting on tracer
!  fields (for tracers, the central coefficient is calculated as
!  minus the sum of these after boundary conditions are applied).
!
!-----------------------------------------------------------------------

   allocate(DTN(nx_block,ny_block,nblocks_clinic), &
            DTS(nx_block,ny_block,nblocks_clinic), &
            DTE(nx_block,ny_block,nblocks_clinic), &
            DTW(nx_block,ny_block,nblocks_clinic))

   allocate(WORK1(nx_block,ny_block), &
            WORK2(nx_block,ny_block))

   do iblock=1,nblocks_clinic

      WORK1 = (HTN(:,:,iblock)/HUW(:,:,iblock))*p5*(AHF(:,:,iblock) + &
              eoshift(AHF(:,:,iblock),dim=2,shift=1))

      DTN(:,:,iblock) = WORK1*TAREA_R(:,:,iblock)
      DTS(:,:,iblock) = eoshift(WORK1,dim=2,shift=-1)* &
                        TAREA_R(:,:,iblock)

      WORK1 = (HTE(:,:,iblock)/HUS(:,:,iblock))*p5*(AHF(:,:,iblock) + &
              eoshift(AHF(:,:,iblock),dim=1,shift=1))

      DTE(:,:,iblock) = WORK1*TAREA_R(:,:,iblock)
      DTW(:,:,iblock) = eoshift(WORK1,dim=1,shift=-1)* &
                        TAREA_R(:,:,iblock)

   end do

   !*** these coeffs only required in physical domain and
   !*** should be defined correctly there as long as grid
   !*** arrays have been correctly defined in ghost cells
   !*** if so, no ghost cell update required

   !call update_ghost_cells(DTN, bndy_clinic, field_loc_t, &
   !                             field_type_scalar)
   !call update_ghost_cells(DTS, bndy_clinic, field_loc_t, &
   !                             field_type_scalar)
   !call update_ghost_cells(DTE, bndy_clinic, field_loc_t, &
   !                             field_type_scalar)
   !call update_ghost_cells(DTW, bndy_clinic, field_loc_t, &
   !                             field_type_scalar)

!-----------------------------------------------------------------------
!
!  free up memory
!
!-----------------------------------------------------------------------

   deallocate(WORK1, WORK2)

   if (.not. lvariable_hmixt) deallocate(AHF)

!-----------------------------------------------------------------------
!EOC

 end subroutine init_del2t

!***********************************************************************
!BOP
! !IROUTINE: hdiffu_del2
! !INTERFACE:

 subroutine hdiffu_del2(k, HDUK, HDVK, UMIXK, VMIXK, this_block)

! !DESCRIPTION:
!  This routine computes the horizontal diffusion of momentum
!  using the Laplacian diffusion operator given by:
!  \begin{eqnarray}
!  \nabla\cdot A_M \nabla u & = &
!     {1\over{\Delta_y}}\delta_x
!     \left(\overline{A_M}^x \Delta_y \delta_x u \right)
!     + {1\over{\Delta_x}}\delta_y
!     \left(\overline{A_M}^y \Delta_x \delta_y u \right)
!     \nonumber \\
!  & &- u\left(\delta_x k_x + \delta_y k_y +
!     2(k_x^2 + k_y^2)\right) \nonumber \\
!  & &+ 2k_y \delta_x v - 2k_x \delta_y v \\
!  \nabla\cdot A_M \nabla v &=&
!     {1\over{\Delta_y}}\delta_x
!     \left(\overline{A_M}^x \Delta_y \delta_x v \right)
!     + {1\over{\Delta_x}}\delta_y
!     \left(\overline{A_M}^y \Delta_x \delta_y v \right)
!     \nonumber \\
!  & &- v\left(\delta_x k_x + \delta_y k_y +
!     2(k_x^2 + k_y^2)\right) \nonumber \\
!  & &+ 2k_y \delta_x u - 2k_x \delta_y u
!  \end{eqnarray}
!  where
!  \begin{equation}
!     k_x = {1\over{\Delta_y}}\delta_x\Delta_y
!  \end{equation}
!  and
!  \begin{equation}
!     k_y = {1\over{\Delta_x}}\delta_y\Delta_x
!  \end{equation}
!
!  Note that boundary conditions are not explicitly imposed, since
!  $u = v = 0$ on the boundaries.
!
! !REVISION HISTORY:
!  same as module

! !INPUT PARAMETERS:

   integer (int_kind), intent(in) :: k   ! depth level index

   real (r8), dimension(nx_block,ny_block), intent(in) :: &
      UMIXK, &! U at level k and mixing time level
      VMIXK   ! V at level k and mixing time level

   type (block), intent(in) :: &
      this_block   ! block information for this subblock

! !OUTPUT PARAMETERS:

   real (r8), dimension(nx_block,ny_block), intent(out) :: &
      HDUK, &! Hdiff(Ub) at level k
      HDVK   ! Hdiff(Vb) at level k

!EOP
!BOC
!-----------------------------------------------------------------------
!
!  local variables
!
!-----------------------------------------------------------------------

   integer (int_kind) :: &
      i,j, &! loop indices
      bid   ! local block address

   real (r8) :: &
      cc,             &! center fivept weight
      cn, cs, ce, cw   ! other weights for partial bottom cells

   real (r8), dimension(nx_block,ny_block) :: &
      UTMP, VTMP, &! modified velocities to use with topostress
      HDIFFCFL     ! for cfl number diagnostics

!-----------------------------------------------------------------------
!
!  laplacian mixing
!
!  calculate Del**2(U,V) without metric terms that mix U,V
!  add metric terms that mix U,V
!
!-----------------------------------------------------------------------

   bid = this_block%local_id

   HDUK = c0
   HDVK = c0

!-----------------------------------------------------------------------
!
!  handle four cases individually to avoid unnecessary copies
!  these are all basic five point stencil operators - the topostress
!  option requires operating on a modified velocity while the
!  partial bottom cell case modifies the weights.
!
!-----------------------------------------------------------------------

   if (ltopostress) then

      UTMP = merge(UMIXK(:,:) - TSU(:,:,bid), UMIXK(:,:), &
                   k <= KMU(:,:,bid))
      VTMP = merge(VMIXK(:,:) - TSV(:,:,bid), VMIXK(:,:), &
                   k <= KMU(:,:,bid))

      if (partial_bottom_cells) then

         do j=this_block%jb,this_block%je
         do i=this_block%ib,this_block%ie

            !*** add metric contrib to central coeff
            cc = DUC(i,j,bid) + DUM(i,j,bid)
            cn = DUN(i,j,bid)*min(DZU(i,j+1,k,bid), &
                                  DZU(i,j  ,k,bid))/DZU(i,j  ,k,bid)
            cs = DUS(i,j,bid)*min(DZU(i,j-1,k,bid), &
                                  DZU(i,j  ,k,bid))/DZU(i,j  ,k,bid)
            ce = DUE(i,j,bid)*min(DZU(i+1,j,k,bid), &
                                  DZU(i  ,j,k,bid))/DZU(i  ,j,k,bid)
            cw = DUW(i,j,bid)*min(DZU(i-1,j,k,bid), &
                                  DZU(i  ,j,k,bid))/DZU(i  ,j,k,bid)

            HDUK(i,j) = am*((cc*UTMP(i  ,j  ) + &
                             cn*UTMP(i  ,j+1) + cs*UTMP(i  ,j-1) + &
                             ce*UTMP(i+1,j  ) + cw*UTMP(i-1,j  )) + &
                            (DMC(i,j,bid)*VTMP(i  ,j  ) + &
                             DMN(i,j,bid)*VTMP(i  ,j+1) + &
                             DMS(i,j,bid)*VTMP(i  ,j-1) + &
                             DME(i,j,bid)*VTMP(i+1,j  ) + &
                             DMW(i,j,bid)*VTMP(i-1,j  )))

            HDVK(i,j) = am*((cc*VTMP(i  ,j  ) + &
                             cn*VTMP(i  ,j+1) + cs*VTMP(i  ,j-1) + &
                             ce*VTMP(i+1,j  ) + cw*VTMP(i-1,j  )) - &
                            (DMC(i,j,bid)*UTMP(i  ,j  ) + &
                             DMN(i,j,bid)*UTMP(i  ,j+1) + &
                             DMS(i,j,bid)*UTMP(i  ,j-1) + &
                             DME(i,j,bid)*UTMP(i+1,j  ) + &
                             DMW(i,j,bid)*UTMP(i-1,j  )))

         end do
         end do

      else   ! no partial bottom cells

         do j=this_block%jb,this_block%je
         do i=this_block%ib,this_block%ie

            !*** add metric contrib to central coeff
            cc = DUC(i,j,bid) + DUM(i,j,bid)

            HDUK(i,j) = am*((cc          *UTMP(i  ,j  ) + &
                             DUN(i,j,bid)*UTMP(i  ,j+1) + &
                             DUS(i,j,bid)*UTMP(i  ,j-1) + &
                             DUE(i,j,bid)*UTMP(i+1,j  ) + &
                             DUW(i,j,bid)*UTMP(i-1,j  ))+ &
                            (DMC(i,j,bid)*VTMP(i  ,j  ) + &
                             DMN(i,j,bid)*VTMP(i  ,j+1) + &
                             DMS(i,j,bid)*VTMP(i  ,j-1) + &
                             DME(i,j,bid)*VTMP(i+1,j  ) + &
                             DMW(i,j,bid)*VTMP(i-1,j  )))

            HDVK(i,j) = am*((cc          *VTMP(i  ,j  ) + &
                             DUN(i,j,bid)*VTMP(i  ,j+1) + &
                             DUS(i,j,bid)*VTMP(i  ,j-1) + &
                             DUE(i,j,bid)*VTMP(i+1,j  ) + &
                             DUW(i,j,bid)*VTMP(i-1,j  ))- &
                            (DMC(i,j,bid)*UTMP(i  ,j  ) + &
                             DMN(i,j,bid)*UTMP(i  ,j+1) + &
                             DMS(i,j,bid)*UTMP(i  ,j-1) + &
                             DME(i,j,bid)*UTMP(i+1,j  ) + &
                             DMW(i,j,bid)*UTMP(i-1,j  )))

         end do
         end do

      endif   ! partial bottom cells

   else   ! no topostress

      if (partial_bottom_cells) then

         do j=this_block%jb,this_block%je
         do i=this_block%ib,this_block%ie

            !*** add metric contrib to central coeff
            cc = DUC(i,j,bid) + DUM(i,j,bid)
            cn = DUN(i,j,bid)*min(DZU(i,j+1,k,bid), &
                                  DZU(i,j  ,k,bid))/DZU(i,j  ,k,bid)
            cs = DUS(i,j,bid)*min(DZU(i,j-1,k,bid), &
                                  DZU(i,j  ,k,bid))/DZU(i,j  ,k,bid)
            ce = DUE(i,j,bid)*min(DZU(i+1,j,k,bid), &
                                  DZU(i  ,j,k,bid))/DZU(i  ,j,k,bid)
            cw = DUW(i,j,bid)*min(DZU(i-1,j,k,bid), &
                                  DZU(i  ,j,k,bid))/DZU(i  ,j,k,bid)

            HDUK(i,j) = am*((cc*UMIXK(i  ,j  ) + &
                             cn*UMIXK(i  ,j+1) + cs*UMIXK(i  ,j-1) + &
                             ce*UMIXK(i+1,j  ) + cw*UMIXK(i-1,j  )) + &
                            (DMC(i,j,bid)*VMIXK(i  ,j  ) + &
                             DMN(i,j,bid)*VMIXK(i  ,j+1) + &
                             DMS(i,j,bid)*VMIXK(i  ,j-1) + &
                             DME(i,j,bid)*VMIXK(i+1,j  ) + &
                             DMW(i,j,bid)*VMIXK(i-1,j  )))

            HDVK(i,j) = am*((cc*VMIXK(i  ,j  ) + &
                             cn*VMIXK(i  ,j+1) + cs*VMIXK(i  ,j-1) + &
                             ce*VMIXK(i+1,j  ) + cw*VMIXK(i-1,j  )) - &
                            (DMC(i,j,bid)*UMIXK(i  ,j  ) + &
                             DMN(i,j,bid)*UMIXK(i  ,j+1) + &
                             DMS(i,j,bid)*UMIXK(i  ,j-1) + &
                             DME(i,j,bid)*UMIXK(i+1,j  ) + &
                             DMW(i,j,bid)*UMIXK(i-1,j  )))

         end do
         end do

      else   ! no partial bottom cells

         do j=this_block%jb,this_block%je
         do i=this_block%ib,this_block%ie

            !*** add metric contrib to central coeff
            cc = DUC(i,j,bid) + DUM(i,j,bid)

            HDUK(i,j) = am*((cc          *UMIXK(i  ,j  ) + &
                             DUN(i,j,bid)*UMIXK(i  ,j+1) + &
                             DUS(i,j,bid)*UMIXK(i  ,j-1) + &
                             DUE(i,j,bid)*UMIXK(i+1,j  ) + &
                             DUW(i,j,bid)*UMIXK(i-1,j  ))+ &
                            (DMC(i,j,bid)*VMIXK(i  ,j  ) + &
                             DMN(i,j,bid)*VMIXK(i  ,j+1) + &
                             DMS(i,j,bid)*VMIXK(i  ,j-1) + &
                             DME(i,j,bid)*VMIXK(i+1,j  ) + &
                             DMW(i,j,bid)*VMIXK(i-1,j  )))

            HDVK(i,j) = am*((cc          *VMIXK(i  ,j  ) + &
                             DUN(i,j,bid)*VMIXK(i  ,j+1) + &
                             DUS(i,j,bid)*VMIXK(i  ,j-1) + &
                             DUE(i,j,bid)*VMIXK(i+1,j  ) + &
                             DUW(i,j,bid)*VMIXK(i-1,j  ))- &
                            (DMC(i,j,bid)*UMIXK(i  ,j  ) + &
                             DMN(i,j,bid)*UMIXK(i  ,j+1) + &
                             DMS(i,j,bid)*UMIXK(i  ,j-1) + &
                             DME(i,j,bid)*UMIXK(i+1,j  ) + &
                             DMW(i,j,bid)*UMIXK(i-1,j  )))

         end do
         end do

      endif   ! partial bottom cells

   endif   ! topostress

!-----------------------------------------------------------------------
!
!  zero fields at land points
!
!-----------------------------------------------------------------------

   where (k > KMU(:,:,bid))
      HDUK = c0
      HDVK = c0
   endwhere

!-----------------------------------------------------------------------
!
!  compute horiz diffusion cfl diagnostics if required
!
!-----------------------------------------------------------------------

   if (ldiag_cfl) then

      if (lvariable_hmixu) then
         HDIFFCFL = merge(c4*am*AMF(:,:,bid)* &
                          (DXUR(:,:,bid)**2 + DYUR(:,:,bid)**2), &
                          c0, KMU(:,:,bid) > k)
      else
         HDIFFCFL = merge(c4*am* &
                          (DXUR(:,:,bid)**2 + DYUR(:,:,bid)**2), &
                          c0, KMU(:,:,bid) > k)
      endif
      HDIFFCFL = abs(HDIFFCFL)
      call cfl_hdiff(k,bid,HDIFFCFL,2,this_block)

   endif

!-----------------------------------------------------------------------
!EOC

 end subroutine hdiffu_del2

!***********************************************************************
!BOP
! !IROUTINE: hdifft_del2
! !INTERFACE:

 subroutine hdifft_del2(k,HDTK,TMIX,this_block)

! !DESCRIPTION:
!  This routine computes the horizontal diffusion of tracers
!  using the Laplacian operator given by:
!  \begin{equation}
!  \nabla\cdot A_M \nabla \phi =
!     {1\over{\Delta_y}}\delta_x
!     \left(\overline{A_H}^x \Delta_y \delta_x \phi \right)
!     + {1\over{\Delta_x}}\delta_y
!     \left(\overline{A_H}^y \Delta_x \delta_y \phi \right)
!  \end{equation}
!  with the boundary conditions of zero gradients of tracers.
!
! !REVISION HISTORY:
!  same as module

! !INPUT PARAMETERS:

   integer (int_kind), intent(in) :: k   ! depth level index

   real (r8), dimension(nx_block,ny_block,km,nt), intent(in) :: &
      TMIX   ! tracers at mix time level

   type (block), intent(in) :: &
      this_block   ! block information for this subblock

! !OUTPUT PARAMETERS:

   real (r8), dimension(nx_block,ny_block,nt), intent(out) :: &
      HDTK   ! HDIFF(T) for tracer n at level k

!EOP
!BOC
!-----------------------------------------------------------------------
!
!  local variables
!
!-----------------------------------------------------------------------

   integer (int_kind) :: &
      i,j,n, &! dummy tracer index
      bid     ! local block address

   real (r8), dimension(nx_block,ny_block) :: &
      CC,CN,CS,CE,CW, &! coeff of 5pt stencil for Del**2
      HDIFFCFL         ! for cfl number diagnostics

!-----------------------------------------------------------------------
!
!  laplacian mixing
!
!  implement boundary conditions by setting
!  stencil coefficients to zero at land points.
!
!-----------------------------------------------------------------------

   bid = this_block%local_id

   if (partial_bottom_cells) then

      CN = c0
      CS = c0
      CE = c0
      CW = c0

      do j=this_block%jb-1,this_block%je+1
      do i=this_block%ib-1,this_block%ie+1
         CN(i,j) = DTN(i,j,bid)*min(DZT(i,j  ,k,bid), &
                                    DZT(i,j+1,k,bid))/DZT(i,j,k,bid)
         CS(i,j) = DTS(i,j,bid)*min(DZT(i,j  ,k,bid), &
                                    DZT(i,j-1,k,bid))/DZT(i,j,k,bid)
         CE(i,j) = DTE(i,j,bid)*min(DZT(i  ,j,k,bid), &
                                    DZT(i+1,j,k,bid))/DZT(i,j,k,bid)
         CW(i,j) = DTW(i,j,bid)*min(DZT(i  ,j,k,bid), &
                                    DZT(i-1,j,k,bid))/DZT(i,j,k,bid)
      end do
      end do

      CN = merge(CN, c0, (k <= KMTN(:,:,bid)) .and. &
                         (k <= KMT (:,:,bid)))
      CS = merge(CS, c0, (k <= KMTS(:,:,bid)) .and. &
                         (k <= KMT (:,:,bid)))
      CE = merge(CE, c0, (k <= KMTE(:,:,bid)) .and. &
                         (k <= KMT (:,:,bid)))
      CW = merge(CW, c0, (k <= KMTW(:,:,bid)) .and. &
                         (k <= KMT (:,:,bid)))

   else

      CN = merge(DTN(:,:,bid), c0, (k <= KMTN(:,:,bid)) .and. &
                                   (k <= KMT (:,:,bid)))
      CS = merge(DTS(:,:,bid), c0, (k <= KMTS(:,:,bid)) .and. &
                                   (k <= KMT (:,:,bid)))
      CE = merge(DTE(:,:,bid), c0, (k <= KMTE(:,:,bid)) .and. &
                                   (k <= KMT (:,:,bid)))
      CW = merge(DTW(:,:,bid), c0, (k <= KMTW(:,:,bid)) .and. &
                                   (k <= KMT (:,:,bid)))

   endif

   CC = -(CN + CS + CE + CW)   ! central coefficient

!-----------------------------------------------------------------------
!
!  calculate Del**2(T) for each tracer n
!
!-----------------------------------------------------------------------

   HDTK = c0

   do n = 1,nt
      do j=this_block%jb,this_block%je
      do i=this_block%ib,this_block%ie
         HDTK(i,j,n) = ah*(CC(i,j)*TMIX(i  ,j  ,k,n) + &
                           CN(i,j)*TMIX(i  ,j+1,k,n) + &
                           CS(i,j)*TMIX(i  ,j-1,k,n) + &
                           CE(i,j)*TMIX(i+1,j  ,k,n) + &
                           CW(i,j)*TMIX(i-1,j  ,k,n))
      enddo
      enddo
   enddo

!-----------------------------------------------------------------------
!
!  compute horiz diffusion cfl diagnostics if required
!
!-----------------------------------------------------------------------

   if (ldiag_cfl) then

      if (lvariable_hmixt) then
         HDIFFCFL = merge(c4*ah*AHF(:,:,bid)* &
                          (DXTR(:,:,bid)**2 + DYTR(:,:,bid)**2), &
                          c0, KMT(:,:,bid) > k)
      else
         HDIFFCFL = merge(c4*ah* &
                          (DXTR(:,:,bid)**2 + DYTR(:,:,bid)**2), &
                          c0, KMT(:,:,bid) > k)
      endif
      HDIFFCFL = abs(HDIFFCFL)
      call cfl_hdiff(k,bid,HDIFFCFL,1,this_block)

   endif

!-----------------------------------------------------------------------
!EOC

 end subroutine hdifft_del2

!***********************************************************************

 end module hmix_del2

!|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
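
The del2 tracer operator above reduces, on a uniform grid with constant diffusivity and away from land points, to the classic five-point Laplacian stencil with central weight CC = -(CN + CS + CE + CW). A minimal NumPy sketch of that limiting case (an illustration only; laplacian_5pt, ah, and dx are hypothetical names, not part of the POP module):

import numpy as np

def laplacian_5pt(T, ah=1.0, dx=1.0):
    # Constant-coefficient analogue of hdifft_del2:
    # CN = CS = CE = CW = 1/dx**2 and CC = -(CN + CS + CE + CW).
    # Boundary cells are left untouched, mimicking the zeroed stencil
    # weights at land points.
    c = ah / dx**2
    HDT = np.zeros_like(T)
    HDT[1:-1, 1:-1] = c * (T[1:-1, 2:] + T[1:-1, :-2] +
                           T[2:, 1:-1] + T[:-2, 1:-1] -
                           4.0 * T[1:-1, 1:-1])
    return HDT

# e.g. a point anomaly diffuses outward toward its four neighbors:
T = np.zeros((5, 5))
T[2, 2] = 1.0
print(laplacian_5pt(T))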
{"hexsha": "403b054c5cb65c35ebec57d34d7db34256893daf", "size": 35901, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "bench01opt_cpu_n1t1p1/compile/hmix_del2.f90", "max_stars_repo_name": "app-on-mic/POP-2.0.1-opt", "max_stars_repo_head_hexsha": "c23e290333d50293386f3004f26a355db9da4bcb", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bench01opt_cpu_n1t1p1/compile/hmix_del2.f90", "max_issues_repo_name": "app-on-mic/POP-2.0.1-opt", "max_issues_repo_head_hexsha": "c23e290333d50293386f3004f26a355db9da4bcb", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bench01opt_cpu_n1t1p1/compile/hmix_del2.f90", "max_forks_repo_name": "app-on-mic/POP-2.0.1-opt", "max_forks_repo_head_hexsha": "c23e290333d50293386f3004f26a355db9da4bcb", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.3432432432, "max_line_length": 109, "alphanum_fraction": 0.4720759867, "num_tokens": 9973}
import torch
from torch import nn
import pandas as pd
import os
from tqdm import tqdm
import torchaudio
import librosa
import numpy as np
import gc


def sample2melspectrogram(samples, sample_rate):
    # Keyword arguments keep this compatible with librosa >= 0.10,
    # where the positional y/sr signature was removed.
    melspectrogram = librosa.feature.melspectrogram(y=samples, sr=sample_rate, center=False)
    melspectrogram = librosa.power_to_db(melspectrogram, ref=np.max)
    # min-max normalise to [0, 1]
    melspectrogram = (melspectrogram - melspectrogram.min()) / (melspectrogram.max() - melspectrogram.min())
    melspectrogram = melspectrogram[:80, :]  # keep the first 80 mel bands
    return melspectrogram


# load model
model = torch.load('model.pt').cuda()
model.eval()  # put dropout/batch-norm layers in inference mode
print('use model is:', model)

# test_data_dir
test_data_dir = 'public_test/public_test/'

# inference for loop
files = os.listdir(test_data_dir)
n = 10000
sample_submit = pd.read_csv('sample_submission.csv')
i = 0
for f in tqdm(files[:n]):
    # load audio
    samples, sample_rate = librosa.load(test_data_dir + f)
    mel_spectrogram = sample2melspectrogram(samples, sample_rate)
    shape = mel_spectrogram.shape
    mel_spectrogram = np.reshape(mel_spectrogram, (-1, shape[0], shape[1]))
    X = torch.from_numpy(mel_spectrogram)
    X = torch.unsqueeze(X, 0).cuda()
    with torch.no_grad():  # no autograd bookkeeping needed at inference time
        y_hat = model(X).cpu().numpy()
    sample_submit.iloc[[i], 1:] = y_hat
    i += 1
    gc.collect()

# save
sample_submit.to_csv('submit.csv', index=False)
print('done')
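
As a quick sanity check of sample2melspectrogram (a sketch; the synthetic tone is illustrative, and the frame count assumes librosa's default 2048/512 STFT settings with center=False):

import numpy as np

# One second of a 440 Hz tone at librosa's default 22050 Hz sample rate.
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t).astype(np.float32)

# Reuses sample2melspectrogram as defined in the script above.
mel = sample2melspectrogram(tone, sr)
print(mel.shape, mel.min(), mel.max())  # (80, 40) with values normalised to [0, 1]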
{"hexsha": "4c3679630c6bdf2543e2fe2cbed0a609028dec8a", "size": 1373, "ext": "py", "lang": "Python", "max_stars_repo_path": "predict.py", "max_stars_repo_name": "Cuda-Chen/Tomofun-Dog-Voice-Recognition-AI-Million-Challenge", "max_stars_repo_head_hexsha": "cdc7f7cf9b1c29e8d1b1d6d19301154a7616d8f4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-07-02T23:10:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-06T07:26:01.000Z", "max_issues_repo_path": "predict.py", "max_issues_repo_name": "Cuda-Chen/Tomofun-Dog-Voice-Recognition-AI-Million-Challenge", "max_issues_repo_head_hexsha": "cdc7f7cf9b1c29e8d1b1d6d19301154a7616d8f4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-05-28T05:54:26.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-28T05:54:26.000Z", "max_forks_repo_path": "predict.py", "max_forks_repo_name": "Cuda-Chen/Tomofun-Dog-Voice-Recognition-AI-Million-Challenge", "max_forks_repo_head_hexsha": "cdc7f7cf9b1c29e8d1b1d6d19301154a7616d8f4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-05-28T01:53:30.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-02T18:56:53.000Z", "avg_line_length": 29.847826087, "max_line_length": 109, "alphanum_fraction": 0.7108521486, "include": true, "reason": "import numpy", "num_tokens": 373}
import random
import csv
import math


def genPastDayInfectNum(totalVisited):
    # Called once per building per day with that day's total visitor count.
    infectedNum = totalVisited * (random.randint(0, 2000) / 10000)
    infectedNum = math.floor(infectedNum)
    return infectedNum


def genPast6DaysTotalVisitedNum():
    # Assume each building sees about 2000 visits per day, fluctuating +-20%.
    totalVisitedNum = 2000 * (random.randint(8000, 12000) / 10000)
    # totalVisitedNum = math.floor(totalVisitedNum)
    infectedNum = genPastDayInfectNum(totalVisitedNum)
    percentage = infectedNum / totalVisitedNum * 100
    return [math.floor(totalVisitedNum), math.floor(infectedNum), round(percentage, 2)]


# def genOtherColumn():


def composeArr(buildingNameList):
    covidAllRow = []
    for i in range(len(buildingNameList)):
        for date in range(1, 7):  # days 1..6
            covidSingleRow = []
            covidSingleRow.append(buildingNameList[i])
            covidSingleRow.append(date)
            num = genPast6DaysTotalVisitedNum()
            covidSingleRow.append(num[0])
            covidSingleRow.append(num[1])
            covidSingleRow.append(num[2])
            covidAllRow.append(covidSingleRow)
    return covidAllRow


def createCSV(buidingRow):
    # Header order matches composeArr: total visits first, then infections.
    header = ['BuildingName', 'Date', 'TotalVisitedNum', 'InfectedNum', 'Percentage']
    # open the file in write mode ('w' rather than 'a+' so a re-run does not
    # append a duplicate header and rows)
    with open('randomCovidData.csv', 'w', encoding='UTF8', newline='') as f:
        # create the csv writer
        writer = csv.writer(f)
        # write the header, then one row per building per day
        writer.writerow(header)
        for i in buidingRow:
            writer.writerow(i)


buildingNameList = ["ARC","DEN","GRB","GWN","JHN","KNE","MGH","MNY","OUG","PAR","RAI","SAV","SMI","SUZ"]

# composeArr returns one row per building per day,
# e.g. [['ARC', 1, ...], ['ARC', 2, ...], ..., ['SUZ', 6, ...]]
buidingRow = composeArr(buildingNameList)
createCSV(buidingRow)  # generates a CSV named "randomCovidData.csv"
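
A short read-back sketch for the generated file (pandas is an assumption here, not something the script itself requires; the column names are the ones written by createCSV above):

import pandas as pd

df = pd.read_csv('randomCovidData.csv')
# e.g. mean infection percentage per building over the six generated days
print(df.groupby('BuildingName')['Percentage'].mean())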
{"hexsha": "e527d7e769fea02f03c976ccceb475f0fc3290fd", "size": 1837, "ext": "py", "lang": "Python", "max_stars_repo_path": "files/genRandomCovidData.py", "max_stars_repo_name": "YuudachiXMMY/ProSeed-Hackthon-2022", "max_stars_repo_head_hexsha": "662973f7f6f338281aed36aa77e0e49d737de31e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "files/genRandomCovidData.py", "max_issues_repo_name": "YuudachiXMMY/ProSeed-Hackthon-2022", "max_issues_repo_head_hexsha": "662973f7f6f338281aed36aa77e0e49d737de31e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "files/genRandomCovidData.py", "max_forks_repo_name": "YuudachiXMMY/ProSeed-Hackthon-2022", "max_forks_repo_head_hexsha": "662973f7f6f338281aed36aa77e0e49d737de31e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.6603773585, "max_line_length": 105, "alphanum_fraction": 0.7114861187, "include": true, "reason": "import numpy", "num_tokens": 567}
import sys, os, glob, string import numpy as np import matplotlib.pyplot as plt from pyraf import iraf from tqdm import tqdm import odi_config as odi import pandas as pd from astropy.wcs import WCS from astropy.table import Table from astropy.io import fits from collections import OrderedDict def get_sdss_coords(img, ota, inst,output='test.sdss'): formats = ['csv','xml','html'] astro_url='http://skyserver.sdss3.org/public/en/tools/search/x_sql.aspx' public_url='http://skyserver.sdss.org/dr12/SkyserverWS/ImagingQuery/Cone?' default_url=public_url default_fmt='csv' hdulist = odi.fits.open(img.f) hdu = odi.tan_header_fix(hdulist[ota]) xdim = hdu.header['NAXIS1'] ydim = hdu.header['NAXIS2'] if not os.path.isfile(output): # and find the image center xc = xdim/2.0 yc = ydim/2.0 # get the CD matrix keywords cd11 = hdu.header['CD1_1'] cd22 = hdu.header['CD2_2'] # try to load cd12 and cd21, if they don't exist, set them to zero try : cd12 = hdu.header['CD1_2'] except: cd12 = 0.0 try : cd21 = hdu.header['CD2_1'] except: cd21 = 0.0 # print xdim, ydim, cd12, cd21 # Parse the WCS keywords in the primary HDU w = odi.WCS(hdu.header) # Some pixel coordinates of interest. pixcrd = np.array([[xc,yc]], np.float_) # Convert pixel coordinates to world coordinates # The second argument is "origin" -- in this case we're declaring we # have 1-based (Fortran-like) coordinates. world = w.wcs_pix2world(pixcrd, 1) # print(world) rac = world[0][0] decc = world[0][1] # print xc, yc, rac, decc # get the biggest radius of the image in arcminutes pixscal1 = 3600*abs(cd11) pixscal2 = 3600*abs(cd22) xas = pixscal1 * xdim yas = pixscal2 * ydim xam = xas/60 yam = yas/60 #print(xam,yam) #radius for query: sqrt2 = 1.414 sizeam = 1.414*(xam+yam)/4 print(sizeam) #qry = "limit=5000&format=csv&imgparams=ra,dec,u,err_u,g,err_g,r,err_r,i,err_i,z,err_z,probPSF&specparams=none&ra="+repr(rac)+"&dec="+repr(decc)+"&radius="+repr(sizeam)+"&magType=psf" qry = "limit=5000&format=csv&imgparams=ra,dec,psfMag_u,psfMagErr_u,psfMag_g,psfMagErr_g,psfMag_r,psfMagErr_r,psfMag_i,psfMagErr_i,psfMag_z,psfMagErr_z,probPSF&specparams=none&ra="+repr(rac)+"&dec="+repr(decc)+"&radius="+repr(sizeam)+"&magType=psf" #print 'with query\n-->', qry print('fetching SDSS sources around',rac,decc,'with radius',sizeam,'arcmin') url = default_url fmt = default_fmt writefirst = 1 verbose = 0 ofp = open(output,'w+') if verbose: odi.write_header(ofp,'#',url,qry) file_ = odi.httpquery(qry,url,fmt) # Output line by line (in case it's big) line = file_.readline() if line.startswith("ERROR"): # SQL Statement Error -> stderr ofp = sys.stderr if writefirst: ofp.write(string.rstrip(line)+os.linesep) line = file_.readline() while line: ofp.write(string.rstrip(line)+os.linesep) line = file_.readline() ofp.close() else: print('SDSS sources already fetched!') return xdim, ydim hdulist.close() def refetch_sdss_coords(img, ota, gapmask, inst,gmaglim=19.,offline = False,source='sdss'): image = odi.reprojpath+'reproj_'+ota+'.'+img.stem() outcoords = odi.coordspath+'reproj_'+ota+'.'+img.base()+'.sdss' hdulist = odi.fits.open(image) hdu = odi.tan_header_fix(hdulist[0]) xdim = hdu.header['NAXIS1'] ydim = hdu.header['NAXIS2'] if offline == False: formats = ['csv','xml','html'] astro_url='http://skyserver.sdss3.org/public/en/tools/search/x_sql.aspx' public_url='http://skyserver.sdss.org/dr12/SkyserverWS/ImagingQuery/Cone?' 
default_url=public_url default_fmt='csv' if not os.path.isfile(outcoords): # and find the image center xc = xdim/2.0 yc = ydim/2.0 # get the CD matrix keywords cd11 = hdu.header['CD1_1'] cd22 = hdu.header['CD2_2'] # try to load cd12 and cd21, if they don't exist, set them to zero try : cd12 = hdu.header['CD1_2'] except: cd12 = 0.0 try : cd21 = hdu.header['CD2_1'] except: cd21 = 0.0 w = odi.WCS(hdu.header) # Some pixel coordinates of interest. pixcrd = np.array([[xc,yc]], np.float_) # Convert pixel coordinates to world coordinates # The second argument is "origin" -- in this case we're declaring we # have 1-based (Fortran-like) coordinates. world = w.wcs_pix2world(pixcrd, 1) # print(world) rac = world[0][0] decc = world[0][1] # print xc, yc, rac, decc # get the biggest radius of the image in arcminutes pixscal1 = 3600*abs(cd11) pixscal2 = 3600*abs(cd22) xas = pixscal1 * xdim yas = pixscal2 * ydim xam = xas/60 yam = yas/60 #print(xam,yam) #radius for query: sqrt2 = 1.414 sizeam = 1.414*(xam+yam)/4 print(sizeam) #qry = "limit=5000&format=csv&imgparams=ra,dec,u,err_u,g,err_g,r,err_r,i,err_i,z,err_z,probPSF&specparams=none&ra="+repr(rac)+"&dec="+repr(decc)+"&radius="+repr(sizeam)+"&magType=psf" qry = "limit=5000&format=csv&imgparams=ra,dec,psfMag_u,psfMagErr_u,psfMag_g,psfMagErr_g,psfMag_r,psfMagErr_r,psfMag_i,psfMagErr_i,psfMag_z,psfMagErr_z,probPSF&specparams=none&ra="+repr(rac)+"&dec="+repr(decc)+"&radius="+repr(sizeam)+"&magType=psf" #print 'with query\n-->', qry print('fetching SDSS sources around',rac,decc,'with radius',sizeam,'arcmin') url = default_url fmt = default_fmt writefirst = 1 verbose = 0 ofp = open(outcoords,'w+') if verbose: odi.write_header(ofp,'#',url,qry) file_ = odi.httpquery(qry,url,fmt) # Output line by line (in case it's big) line = file_.readline() if line.startswith("ERROR"): # SQL Statement Error -> stderr ofp = sys.stderr if writefirst: ofp.write(string.rstrip(line)+os.linesep) line = file_.readline() while line: ofp.write(string.rstrip(line)+os.linesep) line = file_.readline() ofp.close() else: print('SDSS sources already fetched!') ras,decs,psfMag_u,psfMagErr_u,psfMag_g,psfMagErr_g,psfMag_r,psfMagErr_r,psfMag_i,psfMagErr_i,psfMag_z,psfMagErr_z = np.loadtxt(outcoords,usecols=(0,1,2,3,4,5,6,7,8,9,10,11), unpack=True, delimiter=',', skiprows=2) probPSF = np.loadtxt(outcoords, usecols=(12,), dtype=int, unpack=True, delimiter=',', skiprows=2) w = odi.WCS(hdu.header) with open(odi.coordspath+'reproj_'+ota+'.'+img.base()+'.sdssxy', 'w+') as fxy: j=0 # k=0 for i,c in enumerate(ras): coords2 = [[ras[i],decs[i]]] pixcrd2 = w.wcs_world2pix(coords2, 1) if psfMag_g[i]<gmaglim and probPSF[i]==1: if 100.0 <= pixcrd2[0][0] < xdim-100.0 and 100.0 <= pixcrd2[0][1] < ydim-100.0: # make an image cutout of the gap mask x, y = int(round(pixcrd2[0][0])), int(round(pixcrd2[0][1])) cutout = gapmask[y-30:y+30,x-30:x+30] # print cutout.flatten() # k+=1 # print k,cutout.astype(bool).any() if not (cutout.astype(bool)).any(): print(pixcrd2[0][0], pixcrd2[0][1], ras[i],decs[i],psfMag_u[i],psfMagErr_u[i],psfMag_g[i],psfMagErr_g[i],psfMag_r[i],psfMagErr_r[i],psfMag_i[i],psfMagErr_i[i],psfMag_z[i],psfMagErr_z[i], file=fxy) if offline == True: if source == 'sdss': outcoords = odi.sdsspath+'offline_'+ota+'.'+img.base()+'.sdss' ras,decs,psfMag_u,psfMagErr_u,psfMag_g,psfMagErr_g,psfMag_r,psfMagErr_r,psfMag_i,psfMagErr_i,psfMag_z,psfMagErr_z = np.loadtxt(outcoords,usecols=(0,1,2,3,4,5,6,7,8,9,10,11), unpack=True, delimiter=',', skiprows=1) if source == 'twomass': outcoords = 
odi.twomasspath+'offline_'+ota+'.'+img.base()+'.mass' ras,decs = np.loadtxt(outcoords,usecols=(2,3), unpack=True, delimiter=',', skiprows=1) # Just creating dummy variables so that the file formats remain the same for other functions psfMag_u = np.ones(len(ras)) psfMagErr_u = np.ones(len(ras)) psfMag_g = np.ones(len(ras)) psfMagErr_g = np.ones(len(ras)) psfMag_r = np.ones(len(ras)) psfMagErr_r = np.ones(len(ras)) psfMag_i = np.ones(len(ras)) psfMagErr_i = np.ones(len(ras)) psfMag_z = np.ones(len(ras)) psfMagErr_z = np.ones(len(ras)) if source == 'gaia': outcoords = odi.gaiapath+'offline_'+ota+'.'+img.base()+'.gaia' ras,decs = np.loadtxt(outcoords,usecols=(0,1), unpack=True, delimiter=',', skiprows=1) # Just creating dummy variables so that the file formats remain the same # for other functions psfMag_u = np.ones(len(ras)) psfMagErr_u = np.ones(len(ras)) psfMag_g = np.ones(len(ras)) psfMagErr_g = np.ones(len(ras)) psfMag_r = np.ones(len(ras)) psfMagErr_r = np.ones(len(ras)) psfMag_i = np.ones(len(ras)) psfMagErr_i = np.ones(len(ras)) psfMag_z = np.ones(len(ras)) psfMagErr_z = np.ones(len(ras)) tqdm.write('Using Ra and Dec from {:s} for reproject'.format(outcoords)) w = odi.WCS(hdu.header) with open(odi.coordspath+'reproj_'+ota+'.'+img.base()+'.sdssxy', 'w+') as fxy: for i,c in enumerate(ras): coords2 = [[ras[i],decs[i]]] pixcrd2 = w.wcs_world2pix(coords2, 1) if psfMag_g[i]<gmaglim: if 100.0 <= pixcrd2[0][0] < xdim-100.0 and 100.0 <= pixcrd2[0][1] < ydim-100.0: # make an image cutout of the gap mask x, y = int(round(pixcrd2[0][0])), int(round(pixcrd2[0][1])) cutout = gapmask[y-30:y+30,x-30:x+30] if not (cutout.astype(bool)).any(): print(pixcrd2[0][0], pixcrd2[0][1], ras[i],decs[i],psfMag_u[i],psfMagErr_u[i],psfMag_g[i],psfMagErr_g[i],psfMag_r[i],psfMagErr_r[i],psfMag_i[i],psfMagErr_i[i],psfMag_z[i],psfMagErr_z[i], file=fxy) hdulist.close() def repoxy_offline(img, ota, gapmask, inst,gmaglim=19.,source='sdss'): image = odi.reprojpath+'reproj_'+ota+'.'+img.stem() hdulist = odi.fits.open(image) hdu = odi.tan_header_fix(hdulist[0]) xdim = hdu.header['NAXIS1'] ydim = hdu.header['NAXIS2'] if source == 'sdss': outcoords = odi.sdsspath+'offline_'+ota+'.'+img.base()+'.sdss' ras,decs,psfMag_u,psfMagErr_u,psfMag_g,psfMagErr_g,psfMag_r,psfMagErr_r,psfMag_i,psfMagErr_i,psfMag_z,psfMagErr_z = np.loadtxt(outcoords,usecols=(0,1,2,3,4,5,6,7,8,9,10,11), unpack=True, delimiter=',', skiprows=1) outputxy = odi.coordspath+'reproj_'+ota+'.'+img.base()+'.sdssxy' if source == 'twomass': outcoords = odi.twomasspath+'offline_'+ota+'.'+img.base()+'.mass' outputxy = odi.coordspath+'reproj_'+ota+'.'+img.base()+'.massxy' ras,decs = np.loadtxt(outcoords,usecols=(2,3), unpack=True, delimiter=',', skiprows=1) # Just creating dummy variables so that the file formats remain the same for other functions psfMag_u = np.ones(len(ras)) psfMagErr_u = np.ones(len(ras)) psfMag_g = np.ones(len(ras)) psfMagErr_g = np.ones(len(ras)) psfMag_r = np.ones(len(ras)) psfMagErr_r = np.ones(len(ras)) psfMag_i = np.ones(len(ras)) psfMagErr_i = np.ones(len(ras)) psfMag_z = np.ones(len(ras)) psfMagErr_z = np.ones(len(ras)) if source == 'gaia': outcoords = odi.gaiapath+'offline_'+ota+'.'+img.base()+'.gaia' outputxy = odi.coordspath+'reproj_'+ota+'.'+img.base()+'.gaiaxy' ras,decs = np.loadtxt(outcoords,usecols=(0,1), unpack=True, delimiter=',', skiprows=1) # Just creating dummy variables so that the file formats remain the same # for other functions psfMag_u = np.ones(len(ras)) psfMagErr_u = np.ones(len(ras)) psfMag_g = np.ones(len(ras)) psfMagErr_g = 
np.ones(len(ras)) psfMag_r = np.ones(len(ras)) psfMagErr_r = np.ones(len(ras)) psfMag_i = np.ones(len(ras)) psfMagErr_i = np.ones(len(ras)) psfMag_z = np.ones(len(ras)) psfMagErr_z = np.ones(len(ras)) tqdm.write('Using Ra and Dec from {:s} for reproject'.format(outcoords)) w = odi.WCS(hdu.header) fxy = open(outputxy, 'w+') for i,c in enumerate(ras): coords2 = [[ras[i],decs[i]]] pixcrd2 = w.wcs_world2pix(coords2, 1) if psfMag_g[i]<gmaglim: if 100.0 <= pixcrd2[0][0] < xdim-100.0 and 100.0 <= pixcrd2[0][1] < ydim-100.0: # make an image cutout of the gap mask x, y = int(round(pixcrd2[0][0])), int(round(pixcrd2[0][1])) cutout = gapmask[y-30:y+30,x-30:x+30] if not (cutout.astype(bool)).any(): print(pixcrd2[0][0], pixcrd2[0][1], ras[i],decs[i],psfMag_u[i],psfMagErr_u[i],psfMag_g[i],psfMagErr_g[i],psfMag_r[i],psfMagErr_r[i],psfMag_i[i],psfMagErr_i[i],psfMag_z[i],psfMagErr_z[i], file=fxy) fxy.close() hdulist.close() def sdss_coords_full(img, inst,gmaglim=19.): formats = ['csv','xml','html'] astro_url='http://skyserver.sdss3.org/public/en/tools/search/x_sql.aspx' #public_url='http://skyserver.sdss.org/SkyserverWS/dr12/ImagingQuery/Cone?' public_url='http://skyserver.sdss.org/dr12/SkyserverWS/ImagingQuery/Cone?' default_url=public_url default_fmt='csv' image = img outcoords = img.nofits()+'.sdss' hdulist = odi.fits.open(image) hdu = odi.tan_header_fix(hdulist[0]) data = hdu.data # hdu = hdulist[ota] xdim = hdu.header['NAXIS1'] ydim = hdu.header['NAXIS2'] if not os.path.isfile(outcoords): # and find the image center xc = xdim/2.0 yc = ydim/2.0 # get the CD matrix keywords cd11 = hdu.header['CD1_1'] cd22 = hdu.header['CD2_2'] # try to load cd12 and cd21, if they don't exist, set them to zero try : cd12 = hdu.header['CD1_2'] except: cd12 = 0.0 try : cd21 = hdu.header['CD2_1'] except: cd21 = 0.0 # print xdim, ydim, cd12, cd21 # Parse the WCS keywords in the primary HDU w = odi.WCS(hdu.header) # Some pixel coordinates of interest. pixcrd = np.array([[xc,yc]], np.float_) # Convert pixel coordinates to world coordinates # The second argument is "origin" -- in this case we're declaring we # have 1-based (Fortran-like) coordinates. 
world = w.wcs_pix2world(pixcrd, 1) # print(world) rac = world[0][0] decc = world[0][1] # print xc, yc, rac, decc # get the biggest radius of the image in arcminutes pixscal1 = 3600*abs(cd11) pixscal2 = 3600*abs(cd22) xas = pixscal1 * xdim yas = pixscal2 * ydim xam = xas/60 yam = yas/60 #print(xam,yam) #radius for query: sqrt2 = 1.414 sizeam = 1.414*(xam+yam)/4 print(sizeam) #qry = "limit=5000&format=csv&imgparams=ra,dec,u,err_u,g,err_g,r,err_r,i,err_i,z,err_z,probPSF&specparams=none&ra="+repr(rac)+"&dec="+repr(decc)+"&radius="+repr(sizeam)+"&magType=psf" qry = "limit=10000&format=csv&imgparams=ra,dec,psfMag_u,psfMagErr_u,psfMag_g,psfMagErr_g,psfMag_r,psfMagErr_r,psfMag_i,psfMagErr_i,psfMag_z,psfMagErr_z,probPSF&specparams=none&ra="+repr(rac)+"&dec="+repr(decc)+"&radius="+repr(sizeam)+"&magType=psf" #print 'with query\n-->', qry print('fetching SDSS sources around',rac,decc,'with radius',sizeam,'arcmin') url = default_url fmt = default_fmt writefirst = 1 verbose = 0 ofp = open(outcoords,'w+') if verbose: odi.write_header(ofp,'#',url,qry) file_ = odi.httpquery(qry,url,fmt) # Output line by line (in case it's big) line = file_.readline() if line.startswith("ERROR"): # SQL Statement Error -> stderr ofp = sys.stderr if writefirst: ofp.write(string.rstrip(line)+os.linesep) line = file_.readline() while line: ofp.write(string.rstrip(line)+os.linesep) line = file_.readline() ofp.close() else: print('SDSS sources already fetched!') ras,decs,psfMag_u,psfMagErr_u,psfMag_g,psfMagErr_g,psfMag_r,psfMagErr_r,psfMag_i,psfMagErr_i,psfMag_z,psfMagErr_z = np.loadtxt(outcoords,usecols=(0,1,2,3,4,5,6,7,8,9,10,11), unpack=True, delimiter=',', skiprows=2) probPSF = np.loadtxt(outcoords, usecols=(12,), dtype=int, unpack=True, delimiter=',', skiprows=2) # print ras, decs w = odi.WCS(hdu.header) with open(img.nofits()+'.wcs.coo','w+') as f: with open(img.nofits()+'.sdssxy', 'w+') as fxy: for i,c in enumerate(ras): coords2 = [[ras[i],decs[i]]] pixcrd2 = w.wcs_world2pix(coords2, 1) if psfMag_g[i]<gmaglim and probPSF[i]==1 and 100.0 <= pixcrd2[0][0] < xdim-100.0 and 100.0 <= pixcrd2[0][1] < ydim-100.0: r, d = odi.deg_to_sex(ras[i], decs[i]) x, y = int(round(pixcrd2[0][0])), int(round(pixcrd2[0][1])) cutout = data[y-30:y+30,x-30:x+30] # print cutout.flatten() # k+=1 # print i,(cutout<-900).any() if not (cutout<-900).any(): print(r, d, psfMag_g[i], file=f) print(pixcrd2[0][0], pixcrd2[0][1], ras[i],decs[i],psfMag_u[i],psfMagErr_u[i],psfMag_g[i],psfMagErr_g[i],psfMag_r[i],psfMagErr_r[i],psfMag_i[i],psfMagErr_i[i],psfMag_z[i],psfMagErr_z[i], file=fxy) hdulist.close() # def get_sdss_coords_offline(img, ota, inst,output='test.sdss'): # hdulist = odi.fits.open(img.f) # # hdu = odi.tan_header_fix(hdulist[ota]) # xdim = hdu.header['NAXIS1'] # ydim = hdu.header['NAXIS2'] # # sdss_cat_img = hdulist['CAT.PHOTCALIB'] # sdss_cat_img_df = pd.DataFrame.from_dict(sdss_cat_img.data) # hdulist.close() # # ota = float(ota.strip('OTA.SCI')) # ota_matches_df = sdss_cat_img_df.iloc[np.where(sdss_cat_img_df['ODI_OTA'] == ota)] # # needed_columns = ['SDSS_RA','SDSS_DEC','SDSS_MAG_U', # 'SDSS_ERR_U', u'SDSS_MAG_G', u'SDSS_ERR_G', u'SDSS_MAG_R', # 'SDSS_ERR_R', u'SDSS_MAG_I', u'SDSS_ERR_I', u'SDSS_MAG_Z', # 'SDSS_ERR_Z','ODI_OTA','ODI_OTA'] # # output_df = ota_matches_df[needed_columns] # output_df.to_csv(output,index=False) # return xdim, ydim def refetch_sdss_coords_offline(img, ota, gapmask, inst,gmaglim=19.): image = odi.reprojpath+'reproj_'+ota+'.'+img.stem() outcoords = odi.coordspath+'reproj_'+ota+'.'+img.base()+'.sdss' hdulist = 
odi.fits.open(image)
    hdu = odi.tan_header_fix(hdulist[0])
    xdim = hdu.header['NAXIS1']
    ydim = hdu.header['NAXIS2']
    if not os.path.isfile(outcoords):
        xc = xdim/2.0
        yc = ydim/2.0
        # get the CD matrix keywords
        cd11 = hdu.header['CD1_1']
        cd22 = hdu.header['CD2_2']
        # try to load cd12 and cd21; if they don't exist, set them to zero
        try:
            cd12 = hdu.header['CD1_2']
        except KeyError:
            cd12 = 0.0
        try:
            cd21 = hdu.header['CD2_1']
        except KeyError:
            cd21 = 0.0
        # print xdim, ydim, cd12, cd21
        # Parse the WCS keywords in the primary HDU
        w = odi.WCS(hdu.header)
        # Some pixel coordinates of interest.
        pixcrd = np.array([[xc, yc]], np.float_)
        # Convert pixel coordinates to world coordinates.
        # The second argument is "origin" -- in this case we're declaring we
        # have 1-based (Fortran-like) coordinates.
        world = w.wcs_pix2world(pixcrd, 1)
        # print(world)
        rac = world[0][0]
        decc = world[0][1]
        # print xc, yc, rac, decc
        # get the biggest radius of the image in arcminutes
        pixscal1 = 3600*abs(cd11)
        pixscal2 = 3600*abs(cd22)
        xas = pixscal1 * xdim
        yas = pixscal2 * ydim
        xam = xas/60
        yam = yas/60
        # print(xam, yam)
        # radius for query:
        sqrt2 = 1.414
        sizeam = 1.414*(xam+yam)/4
        print(sizeam)
    ras, decs, psfMag_u, psfMagErr_u, psfMag_g, psfMagErr_g, psfMag_r, psfMagErr_r, psfMag_i, psfMagErr_i, psfMag_z, psfMagErr_z = np.loadtxt(
        outcoords, usecols=(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11),
        unpack=True, delimiter=',', skiprows=2)
    probPSF = np.loadtxt(outcoords, usecols=(12,), dtype=int, unpack=True,
                         delimiter=',', skiprows=2)
    # print ras, decs
    w = odi.WCS(hdu.header)
    with open(odi.coordspath+'reproj_'+ota+'.'+img.base()+'.sdssxy', 'w+') as fxy:
        j = 0
        # k = 0
        for i, c in enumerate(ras):
            coords2 = [[ras[i], decs[i]]]
            pixcrd2 = w.wcs_world2pix(coords2, 1)
            if psfMag_g[i] < gmaglim and probPSF[i] == 1:
                if 100.0 <= pixcrd2[0][0] < xdim-100.0 and 100.0 <= pixcrd2[0][1] < ydim-100.0:
                    # make an image cutout of the gap mask
                    x, y = int(round(pixcrd2[0][0])), int(round(pixcrd2[0][1]))
                    cutout = gapmask[y-30:y+30, x-30:x+30]
                    # print cutout.flatten()
                    # k += 1
                    # print k, cutout.astype(bool).any()
                    if not (cutout.astype(bool)).any():
                        # j += 1
                        # print pixcrd2[0][0], pixcrd2[0][1], x-30, x+30, y-30, y+30
                        # plt.imshow(cutout)
                        # plt.show()
                        print(pixcrd2[0][0], pixcrd2[0][1], ras[i], decs[i],
                              psfMag_u[i], psfMagErr_u[i], psfMag_g[i], psfMagErr_g[i],
                              psfMag_r[i], psfMagErr_r[i], psfMag_i[i], psfMagErr_i[i],
                              psfMag_z[i], psfMagErr_z[i], file=fxy)
    # print j
    hdulist.close()


def get_sdss_coords_offline(img, ota, inst, output='test.sdss'):
    """
    Pull out and parse the ``CAT.PHOTCALIB`` table from the ``img`` header.
    This function will separate the SDSS stars in ``CAT.PHOTCALIB`` based on
    which ``ota`` they fall on.

    Parameters
    ----------
    img : str
        Name of image
    ota : str
        Name of OTA
    inst : str
        Version of ODI used, ``podi`` or ``5odi``

    Returns
    -------
    xdim : int
        Size of OTA in the x direction ``NAXIS1``
    ydim : int
        Size of OTA in the y direction ``NAXIS2``

    Note
    ----
    If the images being processed do not fall in the SDSS footprint, the
    QuickReduce pipeline will use PanSTARRS. This function will still pull
    out these stars and treat them as SDSS stars. There will be no ``u``
    magnitudes available, however.
    """
    ota_id = ota
    hdulist = odi.fits.open(img.f)
    hdu = odi.tan_header_fix(hdulist[ota])
    xdim = hdu.header['NAXIS1']
    ydim = hdu.header['NAXIS2']
    try:
        sdss_cat_img = hdulist[u"CAT.PHOTCALIB"]
        cat_img_data = Table.read(sdss_cat_img, format='fits')
        # print(cat_img_data.colnames)
        # force little-endian byte order to make FITS play nice with pandas
        sdss_cat_img_df = cat_img_data.to_pandas()
        # sdss_cat_img_df = pd.DataFrame.from_dict(cat_img_dict)
        # print sdss_cat_img_df.keys()
        ota = float(ota.strip('OTA.SCI'))
        # print('catalog source:', hdulist[u"CAT.PHOTCALIB"].header['PHOTMCAT'])
        source = hdulist[u"CAT.PHOTCALIB"].header['PHOTMCAT']
        # if 'sdss_dr' in hdulist[0].header['PHOTMCAT']:
        #     try:
        #         # print sdss_cat_img_df.columns
        ota_matches_df = sdss_cat_img_df.iloc[np.where(sdss_cat_img_df['ODI_OTA'] == ota)]
        needed_columns = ['REF_RA', 'REF_DEC',
                          'REF_G', 'REF_ERR_G',
                          'REF_R', 'REF_ERR_R',
                          'REF_I', 'REF_ERR_I',
                          'REF_Z', 'REF_ERR_Z',
                          'ODI_OTA']
        output_df = ota_matches_df[needed_columns]
        output_df.to_csv(output, index=False)
        #     except KeyError:
        #         oditable = hdulist['CAT.ODI'].data
        #         oditalbe_df = pd.DataFrame.from_dict(oditable)
        #         ODI_RA = np.squeeze(np.array(oditalbe_df['RA']))
        #         ODI_DEC = np.squeeze(np.array(oditalbe_df['DEC']))
        #         ODI_OTA = np.squeeze(np.array(oditalbe_df['OTA']))
        #         junkdict = OrderedDict([('ODI_RA', ODI_RA),
        #                                 ('ODI_DEC', ODI_DEC),
        #                                 ('ODI_OTA', ODI_OTA.astype(float))])
        #         junk_df = pd.DataFrame.from_dict(junkdict)
        #         matched_df = pd.merge(sdss_cat_img_df, junk_df, on=['ODI_RA', 'ODI_DEC'], how='inner')
        #         # print matched_df.columns
        #         needed_columns = np.insert(sdss_cat_img_df.columns.values, 0, 'ODI_OTA')
        #         full_df = matched_df[needed_columns]
        #         ota_matches_df = full_df.iloc[np.where(full_df['ODI_OTA'] == ota)]
        #         needed_columns = ['SDSS_RA', 'SDSS_DEC',
        #                           'SDSS_MAG_U', 'SDSS_ERR_U',
        #                           'SDSS_MAG_G', 'SDSS_ERR_G',
        #                           'SDSS_MAG_R', 'SDSS_ERR_R',
        #                           'SDSS_MAG_I', 'SDSS_ERR_I',
        #                           'SDSS_MAG_Z', 'SDSS_ERR_Z',
        #                           'ODI_OTA']
        #         output_df = ota_matches_df[needed_columns]
        #         output_df.to_csv(output, index=False)
        # else:
        #     ota_matches_df = sdss_cat_img_df.iloc[np.where(sdss_cat_img_df['ODI_OTA'] == ota)]
        #     ota_matches_df = ota_matches_df.reset_index()
        #     junk_u = np.ones(len(ota_matches_df))
        #     junk_u_err = np.ones(len(ota_matches_df))
        #     ota_matches_df['IPP_MAG_U'] = junk_u
        #     ota_matches_df['IPP_ERR_U'] = junk_u_err
        #     needed_columns = ['IPP_RA', 'IPP_DEC',
        #                       'IPP_MAG_U', 'IPP_ERR_U',
        #                       'IPP_MAG_G', 'IPP_ERR_G',
        #                       'IPP_MAG_R', 'IPP_ERR_R',
        #                       'IPP_MAG_I', 'IPP_ERR_I',
        #                       'IPP_MAG_Z', 'IPP_ERR_Z',
        #                       'ODI_OTA']
        #     output_df = ota_matches_df[needed_columns]
        #     output_df.to_csv(output, index=False)
        # if 'SDSS' in hdulist[0].header['PHOTMCAT']:
        #     try:
        #         # print sdss_cat_img_df.columns
        #         ota_matches_df = sdss_cat_img_df.iloc[np.where(sdss_cat_img_df['ODI_OTA'] == ota)]
        #         needed_columns = ['SDSS_RA', 'SDSS_DEC', 'SDSS_MAG_U',
        #                           'SDSS_ERR_U', 'SDSS_MAG_G', 'SDSS_ERR_G', 'SDSS_MAG_R',
        #                           'SDSS_ERR_R', 'SDSS_MAG_I', 'SDSS_ERR_I', 'SDSS_MAG_Z',
        #                           'SDSS_ERR_Z', 'ODI_OTA']
        #         output_df = ota_matches_df[needed_columns]
        #         output_df.to_csv(output, index=False)
        #     except KeyError:
        #         oditable = hdulist['CAT.ODI'].data
        #         oditalbe_df = pd.DataFrame.from_dict(oditable)
        #         ODI_RA = np.squeeze(np.array(oditalbe_df['RA']))
        #         ODI_DEC = np.squeeze(np.array(oditalbe_df['DEC']))
        #         ODI_OTA = np.squeeze(np.array(oditalbe_df['OTA']))
        #         junkdict = OrderedDict([('ODI_RA', ODI_RA),
        #                                 ('ODI_DEC', ODI_DEC),
        #                                 ('ODI_OTA', ODI_OTA.astype(float))])
        #         junk_df = pd.DataFrame.from_dict(junkdict)
        #         matched_df = pd.merge(sdss_cat_img_df, junk_df, on=['ODI_RA', 'ODI_DEC'], how='inner')
        #         # print matched_df.columns
        #         needed_columns = np.insert(sdss_cat_img_df.columns.values, 0, 'ODI_OTA')
        #         full_df = matched_df[needed_columns]
        #         ota_matches_df = full_df.iloc[np.where(full_df['ODI_OTA'] == ota)]
        #         needed_columns = ['SDSS_RA', 'SDSS_DEC',
        #                           'SDSS_MAG_U', 'SDSS_ERR_U',
        #                           'SDSS_MAG_G', 'SDSS_ERR_G',
        #                           'SDSS_MAG_R', 'SDSS_ERR_R',
        #                           'SDSS_MAG_I', 'SDSS_ERR_I',
        #                           'SDSS_MAG_Z', 'SDSS_ERR_Z',
        #                           'ODI_OTA']
        #         output_df = ota_matches_df[needed_columns]
        #         output_df.to_csv(output, index=False)
    except KeyError:
        tqdm.write(img.f+'['+ota_id+']: missing PHOTCALIB table, skipping SDSS')
    hdulist.close()
    return xdim, ydim


def get_gaia_coords(img, ota, inst, output='test.gaia', cluster=False, **kwargs):
    """
    Query the online Gaia DR2 catalog based on the central coordinates of the
    current OTA. If the ``cluster`` flag is set to ``True``, the query will
    avoid a crowded region based on coordinates and a radius set by the user
    in the configuration files.

    Parameters
    ----------
    img : ODIImage or StackedImage object
        Name of image
    ota : str
        Name of OTA
    inst : str
        Version of ODI used, ``podi`` or ``5odi``
    """
    from astropy import units as u
    from astropy.coordinates import SkyCoord
    try:
        from astroquery.vizier import Vizier
        from astropy import __version__ as astropyversion
    except ImportError:
        print("astroquery not installed")
        print("try pip --user --no-deps install astroquery or contact admin")
    hdulist = fits.open(img.f)
    if ota == 'None':
        hdu_ota = hdulist[0]
    else:
        hdu_ota = odi.tan_header_fix(hdulist[ota])
    w = WCS(hdu_ota.header)
    naxis1 = hdu_ota.header['NAXIS1']
    naxis2 = hdu_ota.header['NAXIS2']
    ota_center_radec = w.wcs_pix2world([[naxis1/2., naxis2/2.]], 1)
    # print ota_center_radec
    corners = w.calc_footprint()
    center_skycoord = SkyCoord(ota_center_radec[0][0]*u.deg,
                               ota_center_radec[0][1]*u.deg, frame='icrs')
    corner_skycoord = SkyCoord(corners[0, 0]*u.deg,
                               corners[0, 1]*u.deg, frame='icrs')
    cone_radius = center_skycoord.separation(corner_skycoord).value
    # tqdm.write('{:4.0f} {:4.0f} {:6.4f}'.format(naxis1/2., naxis2/2., cone_radius))
    # print '{:4.0f} {:4.0f} {:6.4f}'.format(naxis1/2., naxis2/2., cone_radius)
    # Set up vizier query for Gaia DR2
    vquery = Vizier(columns=['RA_ICRS', 'DE_ICRS', 'e_RA_ICRS', 'e_DE_ICRS', 'Gmag'],
                    column_filters={"Gmag": "<21.0"},
                    row_limit=-1,
                    catalog='I/345/gaia2')
    # vquery = Vizier(columns=['ra', 'dec', 'ra_error', 'dec_error', 'phot_g_mean_mag'],
    #                 column_filters={"phot_g_mean_mag": "<21.0"},
    #                 row_limit=-1)
    # vquery.catalog = 'I/345/gaia2'
    # print vquery.catalog
    check = vquery.query_region_async(SkyCoord(ra=ota_center_radec[0][0],
                                               dec=ota_center_radec[0][1],
                                               unit=(u.deg, u.deg), frame='icrs'),
                                      radius=cone_radius*u.deg,
                                      catalog='I/345/gaia2')
    # print check
    try:
        gaia_table = vquery.query_region(SkyCoord(ra=ota_center_radec[0][0],
                                                  dec=ota_center_radec[0][1],
                                                  unit=(u.deg, u.deg), frame='icrs'),
                                         radius=cone_radius*u.deg,
                                         catalog='I/345/gaia2')[0]
    except:
        print(vquery.response.content)
    hdulist.close()
    if cluster == True:
        try:
            racenter = kwargs['racenter']
            deccenter = kwargs['deccenter']
            min_radius = kwargs['min_radius']
            G_lim = kwargs['G_lim']
        except KeyError:
            print('Must provide racenter, deccenter, min_radius, and G_lim')
        cluster_center = SkyCoord(racenter*u.degree, deccenter*u.degree, frame='icrs')
        gaia_coords = SkyCoord(gaia_table['RA_ICRS'], gaia_table['DE_ICRS'], frame='icrs')
        dist_from_center = cluster_center.separation(gaia_coords).arcmin
        gaia_table['dis'] = dist_from_center
        # ota_gaia_df = ota_gaia_df[(ota_gaia_df.dis >= min_radius) &
        #                           (ota_gaia_df.phot_g_mean_mag <= G_lim)]
        gaia_table = gaia_table[gaia_table['dis'] > min_radius]
    ra_min, ra_max = min(corners[:, 0]), max(corners[:, 0])
    dec_min, dec_max = min(corners[:, 1]), max(corners[:, 1])
    # print ra_min, ra_max, dec_min, dec_max
    gaia_table_cut = gaia_table[(gaia_table['RA_ICRS'] > ra_min) &
                                (gaia_table['RA_ICRS'] < ra_max) &
                                (gaia_table['DE_ICRS'] > dec_min) &
                                (gaia_table['DE_ICRS'] < dec_max)]
    gaia_table_cut['e_RA_ICRS'].convert_unit_to(u.deg)
    gaia_table_cut['e_DE_ICRS'].convert_unit_to(u.deg)
    ota_gaia_df = gaia_table_cut.to_pandas()
    cols_needed = ['RA_ICRS', 'DE_ICRS', 'Gmag', 'e_RA_ICRS', 'e_DE_ICRS']
    ota_gaia_df = ota_gaia_df[cols_needed]
    ota_gaia_df.columns = ['ra', 'dec', 'phot_g_mean_mag', 'e_ra', 'e_dec']
    gaia_catalog_out = output
    ota_gaia_df.to_csv(gaia_catalog_out,
                       columns=['ra', 'dec', 'phot_g_mean_mag', 'e_ra', 'e_dec'],
                       index=False)
    return ota_gaia_df


if __name__ == '__main__':
    from odi_config import ODIImage
    img = ODIImage("20161025T221120.1_HI0126+05_odi_g.7376.fits", 1, '5odi')
    get_sdss_coords_offline(img, 'OTA33.SCI', '5odi', output='test.sdss')
    # get_gaia_coords(img, list(img.otas.values())[0], img.inst, output='test.gaia', cluster=False)
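# A minimal usage sketch for the cluster-avoidance branch of get_gaia_coords()
# above (illustrative only: the coordinates, radius, and magnitude limit below
# are made-up placeholders, not values from this project's configuration files):
#
#     stars = get_gaia_coords(img, 'OTA33.SCI', '5odi', output='test.gaia',
#                             cluster=True,
#                             racenter=322.493, deccenter=12.167,  # degrees
#                             min_radius=5.0,                      # arcminutes
#                             G_lim=20.0)                          # Gaia G mag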
{"hexsha": "3cea37412ff4cf6940edc5b15a535901c1058b0b", "size": 35001, "ext": "py", "lang": "Python", "max_stars_repo_path": "odi_coords.py", "max_stars_repo_name": "bjanesh/odi-tools", "max_stars_repo_head_hexsha": "a9cf686762234f118c9a25c43a25c04462d30a80", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-04-28T19:33:23.000Z", "max_stars_repo_stars_event_max_datetime": "2017-02-21T17:07:56.000Z", "max_issues_repo_path": "odi_coords.py", "max_issues_repo_name": "bjanesh/odi-tools", "max_issues_repo_head_hexsha": "a9cf686762234f118c9a25c43a25c04462d30a80", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 18, "max_issues_repo_issues_event_min_datetime": "2016-04-27T13:52:10.000Z", "max_issues_repo_issues_event_max_datetime": "2017-07-19T15:57:49.000Z", "max_forks_repo_path": "odi_coords.py", "max_forks_repo_name": "bjanesh/odi-tools", "max_forks_repo_head_hexsha": "a9cf686762234f118c9a25c43a25c04462d30a80", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-07-13T02:13:16.000Z", "max_forks_repo_forks_event_max_datetime": "2018-01-18T03:33:59.000Z", "avg_line_length": 43.104679803, "max_line_length": 259, "alphanum_fraction": 0.5658981172, "include": true, "reason": "import numpy,from astropy", "num_tokens": 10651}
import numpy as np
import itertools
from .draw import _DrawingMixin
from collections import deque

__all__ = ['Graph']


class Edges():

    def __init__(self):
        self.parent_edge = None
        self.child_edges = []


class _PolytreeBaseMixin():

    def __init__(self):
        """ This is the base class for polytrees

        Args:
            None

        Returns:
            None
        """
        self.nodes = set()
        self.edge_children = list()
        self.edge_parents = list()
        self.modified = True
        self.evidence = {}
        self.bayesian_intervention = {}
        self.causal_intervention = {}
        self.guess_intervention = {}

    ######################################################################

    def addEdge(self, parents, children):
        """ Main method to build a graph.  Parents and children can be
            anything, but must be hashable.  It is important to note that
            the order of the parents used in any subsequent algorithm is
            the same as how the parents were passed in here!

        Args:
            None

        Returns:
            None
        """
        assert isinstance(parents, list) or isinstance(parents, tuple)
        assert isinstance(children, list) or isinstance(children, tuple)
        for node in parents + children:
            self.nodes.add(node)
        self.edge_children.append(children)
        self.edge_parents.append(parents)
        self.modified = True

    ######################################################################

    @property
    def roots(self):
        """ Convenient way to access the roots of the graph.
            Auto updated as needed """
        if(hasattr(self, '_roots') == False or self.modified):
            self._roots = self.nodes - set(list(itertools.chain(*self.edge_children)))
            self._leaves = self.nodes - set(list(itertools.chain(*self.edge_parents)))
            self.modified = False
        return self._roots

    @property
    def leaves(self):
        """ Convenient way to access the leaves of the graph.
            Auto updated as needed """
        if(hasattr(self, '_leaves') == False or self.modified):
            self._roots = self.nodes - set(list(itertools.chain(*self.edge_children)))
            self._leaves = self.nodes - set(list(itertools.chain(*self.edge_parents)))
            self.modified = False
        return self._leaves

    @property
    def tree(self):
        """ A tree of this data structure """
        if(hasattr(self, '_tree') == False or self.modified):
            self._tree = {}
            for e, (parents, children) in enumerate(zip(self.edge_parents, self.edge_children)):
                for parent in parents:
                    if(parent not in self._tree):
                        self._tree[parent] = Edges()
                    if(e not in self._tree[parent].child_edges):
                        self._tree[parent].child_edges.append(e)
                for child in children:
                    if(child not in self._tree):
                        self._tree[child] = Edges()
                    if(self._tree[child].parent_edge is not None):
                        assert self._tree[child].parent_edge == e
                    else:
                        self._tree[child].parent_edge = e
        return self._tree

    ######################################################################

    def getParents(self, node):
        """ Get the parents for node

        Args:
            node : Query node

        Returns:
            list : A list containing the parents of node
        """
        parent_edge = self.tree[node].parent_edge
        return self.edge_parents[parent_edge] if parent_edge is not None else []

    def getChildren(self, node):
        """ Get the children for node

        Args:
            node : Query node

        Returns:
            list : A nested list containing the children of node at each child edge
        """
        ans = []
        for child_edge in self.tree[node].child_edges:
            ans.append(self.edge_children[child_edge])
        return ans

    ######################################################################

    def forwardPass(self):
        """ Generator function that performs a breadth first search of the graph

        Args:
            None

        Returns:
            Each node, visited when its parents are visited
        """
        edge_semaphores = np.array([len(e) for e in self.edge_parents])
        # Get the first edges to start with
        for edge, parents in enumerate(self.edge_parents):
            edge_semaphores[edge] -= len(set.intersection(self.roots, set(parents)))
        for root in self.roots:
            yield root
        edges = np.arange(edge_semaphores.shape[0], dtype=int)
        done_edges = edge_semaphores == 0
        q = deque(edges[done_edges])
        while(len(q) > 0):
            edge = q.popleft()
            for child in self.edge_children[edge]:
                yield child
                for child_edge in self.tree[child].child_edges:
                    edge_semaphores[child_edge] -= 1
            now_done = (edge_semaphores == 0) & (~done_edges)
            q.extend(edges[now_done])
            done_edges |= now_done

    def backwardPass(self):
        """ Generator function that performs a reversed breadth first search of the graph

        Args:
            None

        Returns:
            Each node, visited when its children are visited
        """
        edge_semaphores = np.array([len(e) for e in self.edge_children])
        # Get the first edges to start with
        for edge, children in enumerate(self.edge_children):
            edge_semaphores[edge] -= len(set.intersection(self.leaves, set(children)))
        for leaf in self.leaves:
            yield leaf
        edges = np.arange(edge_semaphores.shape[0], dtype=int)
        done_edges = edge_semaphores == 0
        q = deque(edges[done_edges])
        while(len(q) > 0):
            edge = q.popleft()
            for parent in self.edge_parents[edge]:
                yield parent
                if(self.tree[parent].parent_edge is not None):
                    edge_semaphores[self.tree[parent].parent_edge] -= 1
            now_done = (edge_semaphores == 0) & (~done_edges)
            q.extend(edges[now_done])
            done_edges |= now_done

    ######################################################################

    def toSparse(self):
        """ Converts the graph into its sparse representation.  It uniquely
            identifies edges and nodes and then holds arrays of parents for
            edges and children for edges.  The return shape will be [ 2, : ]

        Args:
            None

        Returns:
            edge_parents_sparse  : Parents for edges
            edge_children_sparse : Children for edges
        """
        nodes = list(self.nodes)
        edge_parents_sparse, edge_children_sparse = [], []
        # Create the child edges
        for i, node_list in enumerate(self.edge_parents):
            for j, node in enumerate(node_list):
                edge_parents_sparse.append([i, nodes.index(node)])
        # Create the parent edges
        for i, node_list in enumerate(self.edge_children):
            for j, node in enumerate(node_list):
                edge_children_sparse.append([i, nodes.index(node)])
        return np.array(edge_parents_sparse).T, np.array(edge_children_sparse).T

    @staticmethod
    def combineSparse(sparse_graphs):
        """ Combines sparse graphs into one big, unconnected graph

        Args:
            sparse_graphs : A list of sparse graphs

        Returns:
            edge_parents_sparse  : Parents for edges
            edge_children_sparse : Children for edges
        """
        edge_parents, edge_children = [], []
        total_edges, total_nodes = 0, 0
        for ep, ec in sparse_graphs:
            # See how many nodes and edges are in this sparse graph
            n_edges = max(ep[0, -1], ec[0, -1]) + 1
            n_nodes = max(np.max(ep[1, :]), np.max(ec[1, :])) + 1
            # Adjust their indices
            ep[0, :] += total_edges
            ec[0, :] += total_edges
            ep[1, :] += total_nodes
            ec[1, :] += total_nodes
            # Increment the number of nodes and edges
            total_edges += n_edges
            total_nodes += n_nodes
            # Add them to the graph
            edge_parents.append(ep)
            edge_children.append(ec)
        # Concatenate the arrays
        edge_parents = np.hstack(edge_parents)
        edge_children = np.hstack(edge_children)
        return edge_parents, edge_children

    @staticmethod
    def fromSparse(edge_parents_sparse, edge_children_sparse):
        """ Turn sparse format into graph

        Args:
            edge_parents_sparse  : Parents for edges
            edge_children_sparse : Children for edges

        Returns:
            graph : The graph
        """
        edges = {}
        for e, parent in edge_parents_sparse.T:
            if(e not in edges):
                edges[e] = [[parent], []]
            else:
                edges[e][0].append(parent)
        for e, child in edge_children_sparse.T:
            edges[e][1].append(child)
        graph = Graph()
        for e in sorted(edges.keys()):
            graph.addEdge(edges[e][0], edges[e][1])
        return graph

##########################################################################


class Graph(_PolytreeBaseMixin, _DrawingMixin):
    pass
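# A minimal usage sketch (illustrative; run from within the package, since
# Graph mixes in _DrawingMixin from the relative import above):
#
#     g = Graph()
#     g.addEdge(parents=['a', 'b'], children=['c'])   # one hyper-edge
#     g.addEdge(parents=['c'], children=['d', 'e'])
#     list(g.forwardPass())    # parents are always yielded before children
#     list(g.backwardPass())   # children are always yielded before parents
#     ep, ec = g.toSparse()
#     g2 = Graph.fromSparse(ep, ec)  # round-trips the structure; node labels
#                                    # become integer indices in the copy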
{"hexsha": "f2211ee702483422b4d88510b914edf1cf6b730e", "size": 9876, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/graph.py", "max_stars_repo_name": "EddieCunningham/CausalInference", "max_stars_repo_head_hexsha": "5938787a41222ae1810d5c649a1f3b93285fbb1e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-21T08:44:05.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-21T08:44:05.000Z", "max_issues_repo_path": "src/graph.py", "max_issues_repo_name": "hebo910820/CausalInference", "max_issues_repo_head_hexsha": "5938787a41222ae1810d5c649a1f3b93285fbb1e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/graph.py", "max_forks_repo_name": "hebo910820/CausalInference", "max_forks_repo_head_hexsha": "5938787a41222ae1810d5c649a1f3b93285fbb1e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-09-17T02:34:31.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-17T02:34:31.000Z", "avg_line_length": 32.5940594059, "max_line_length": 102, "alphanum_fraction": 0.5290603483, "include": true, "reason": "import numpy", "num_tokens": 2078}
from PIL import Image
import numpy as np
import time

from dead_end_filler import DeadEndFiller


class Solver:
    def __init__(self, path):
        maze = Image.open(path)
        (self.width, self.height) = maze.size
        self.pixels = np.array(maze)

    def dead_end_filler(self, time_it=False):
        return DeadEndFiller(self.width, self.height, self.pixels).solve(time_it=time_it)


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("maze", nargs="?", type=str, default=None)
    parser.add_argument('--output', '-o', nargs='?', type=str, default='solved.png')
    args = parser.parse_args()

    solver = Solver(args.maze)
    maze = solver.dead_end_filler()
    maze.save(args.output)

    # import timeit
    # solver = Solver('mazes/1920_1080.png')
    # t = timeit.timeit(solver.solve, number=1)
    # print(t)
    # solved = solver.solve()
    # solved.show()
{"hexsha": "06114e2ec272a41b303285ff7b3b622dae101c42", "size": 944, "ext": "py", "lang": "Python", "max_stars_repo_path": "solver.py", "max_stars_repo_name": "SpyrosRoum/Maze-Generatori-and-Solver", "max_stars_repo_head_hexsha": "c6a65efbde12f0623ff2f1ca8d1ad0fbb02de3cc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "solver.py", "max_issues_repo_name": "SpyrosRoum/Maze-Generatori-and-Solver", "max_issues_repo_head_hexsha": "c6a65efbde12f0623ff2f1ca8d1ad0fbb02de3cc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "solver.py", "max_forks_repo_name": "SpyrosRoum/Maze-Generatori-and-Solver", "max_forks_repo_head_hexsha": "c6a65efbde12f0623ff2f1ca8d1ad0fbb02de3cc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.6, "max_line_length": 89, "alphanum_fraction": 0.6599576271, "include": true, "reason": "import numpy", "num_tokens": 240}
\chapter{Signal Processing}
\section*{Introduction}
\newpage
\cex
\inputminted[linenos=true,resetmargins=true]{c}{./c_examples/example18.c}
\newpage
\section*{Fourier Transforms}\addcontentsline{toc}{section}{Fourier Transforms}
\subsection*{Vector FFT}\addcontentsline{toc}{subsection}{Vector FFT}
\subsection*{Vector FFT by Row or Column}\addcontentsline{toc}{subsection}{Vector FFT by Row or Column}
\section*{Convolution, Correlation and FIR Filtering}\addcontentsline{toc}{section}{Convolution, Correlation and FIR Filtering}
\section*{Window Creation}\addcontentsline{toc}{section}{Window Creation}
VSIPL provides functions to create Blackman, Chebyshev, Hanning and Kaiser
windows. Unlike most functions in C VSIPL, the window creation routines do not
take an already created vector and fill it. Instead they create a block,
allocate data for the block, create a unit-stride, full-length vector on the
block, fill the vector with the window coefficients, and then return the
pointer to the vector view. The return value will be \ilCode{NULL} on an
allocation failure (see the sketch at the end of this chapter). For pyJvsip
the windows are defined as a method on a view, so the functionality, from a
user perspective, is to create a vector of a certain type and length and then
fill the vector with a window. Size information is taken from the calling
view. Under the covers the C VSIPL window functions are used, so a copy
actually takes place to meet the requirements of pyJvsip.
\section*{Miscellaneous}\addcontentsline{toc}{section}{Miscellaneous}
\subsection*{Histogram}\addcontentsline{toc}{subsection}{Histogram}
\subsection*{Data Reorganization}\addcontentsline{toc}{subsection}{Data Reorganization}
\subsubsection*{Frequency Swapping}\addcontentsline{toc}{subsubsection}{Frequency Swapping}
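A minimal sketch of the window-creation pattern described in the Window
Creation section above. It assumes the standard double-precision C VSIPL
names (\ilCode{vsip\_vcreate\_hanning\_d}, \ilCode{vsip\_vget\_d},
\ilCode{vsip\_valldestroy\_d}); check these against the implementation at
hand.
\begin{minted}{c}
#include <vsip.h>
#include <stdio.h>

int main(void)
{
    vsip_init((void *)0);
    /* Creates the block, allocates its data, builds a unit-stride vector
       view, and fills it with Hanning window coefficients. */
    vsip_vview_d *win = vsip_vcreate_hanning_d(16, VSIP_MEM_NONE);
    if (win == NULL) {          /* NULL signals an allocation failure */
        fprintf(stderr, "window allocation failed\n");
        return 1;
    }
    for (vsip_index i = 0; i < 16; i++)
        printf("%f\n", vsip_vget_d(win, i));
    vsip_valldestroy_d(win);    /* frees both the view and its block */
    vsip_finalize((void *)0);
    return 0;
}
\end{minted}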
{"hexsha": "a32d74d8bc107575897dcc3cc024931ebc85b50e", "size": 1785, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/jvsip_book/c6.tex", "max_stars_repo_name": "rrjudd/jvsip", "max_stars_repo_head_hexsha": "56a965fff595b027139ff151d27d434f2480b9e8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2016-01-16T04:10:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T02:17:44.000Z", "max_issues_repo_path": "doc/jvsip_book/c6.tex", "max_issues_repo_name": "rrjudd/jvsip", "max_issues_repo_head_hexsha": "56a965fff595b027139ff151d27d434f2480b9e8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2015-09-11T04:48:03.000Z", "max_issues_repo_issues_event_max_datetime": "2015-09-11T13:44:29.000Z", "max_forks_repo_path": "doc/jvsip_book/c6.tex", "max_forks_repo_name": "rrjudd/jvsip", "max_forks_repo_head_hexsha": "56a965fff595b027139ff151d27d434f2480b9e8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-06-13T21:48:23.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:07:44.000Z", "avg_line_length": 61.5517241379, "max_line_length": 127, "alphanum_fraction": 0.8056022409, "num_tokens": 432}
using QMTK
using QMTK.Consts.Pauli
using Compat.Test

@testset "local hamiltonian" begin
    mat = σ₁⊗σ₂
    h = LocalHamiltonian(mat)
    rhs = SubSites(Bit, 1, 0)
    rhs_idx = Int(rhs) + 1
    itr = LHIterator(h, rhs)

    for (val, lhs) in itr
        lhs_idx = Int(lhs) + 1
        @test val == mat[lhs_idx, rhs_idx]
    end
end # testset
{"hexsha": "c1f891f7641113d1ff4bff9b7e030cfaff8003a1", "size": 346, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "test/Hamiltonian/Core.jl", "max_stars_repo_name": "Roger-luo/QMTK.jl", "max_stars_repo_head_hexsha": "90987261588fc8a4aefa73df2b1fb5d0c5a3f9d5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2018-03-09T17:37:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-18T01:27:09.000Z", "max_issues_repo_path": "test/Hamiltonian/Core.jl", "max_issues_repo_name": "Roger-luo/QMTK.jl", "max_issues_repo_head_hexsha": "90987261588fc8a4aefa73df2b1fb5d0c5a3f9d5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 30, "max_issues_repo_issues_event_min_datetime": "2018-03-09T17:09:23.000Z", "max_issues_repo_issues_event_max_datetime": "2018-04-08T14:13:47.000Z", "max_forks_repo_path": "test/Hamiltonian/Core.jl", "max_forks_repo_name": "Roger-luo/QMTK.jl", "max_forks_repo_head_hexsha": "90987261588fc8a4aefa73df2b1fb5d0c5a3f9d5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.2105263158, "max_line_length": 42, "alphanum_fraction": 0.6098265896, "num_tokens": 124}
# VERSION >= v"0.4.0-dev+6521" && __precompile__(true)

module Script

export _nullFunction
export _debug
export compile
export invoke

global const _nullFunction = function() end
global _debug = true

function compile(file::String)
    result = nothing
    try
        result = evalfile(file)
        if _debug == true
            println("file compiled.")
        end
    catch err
        println("Cannot compile file. There was an error: ", err)
    end
    return result
end

function invoke(f::Function, args...)
    result = nothing
    try
        if f == _nullFunction
            error("function is null")
        end
        result = f(args...)
        if _debug == true
            println("function invoked. result: ", result)
        end
    catch err
        println("Cannot invoke function. There was an error: ", err)
    end
    return result
end

end # module
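# A minimal usage sketch (illustrative; "square.jl" is a made-up script name,
# assumed to evaluate to a function, e.g. a file containing `x -> x^2`):
#
#     f = Script.compile("square.jl")      # returns the function, or `nothing`
#     Script.invoke(f, 3)                  # -> 9, printed when _debug is true
#     Script.invoke(Script._nullFunction)  # reports that the function is null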
{"hexsha": "d33dd809100907438a1961fc9532f6c70cd01f77", "size": 761, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/Script.jl", "max_stars_repo_name": "Gilga/JuliaScriptLoader.jl", "max_stars_repo_head_hexsha": "beca946519b921006e90563e3aa33d7a2ff9edc1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/Script.jl", "max_issues_repo_name": "Gilga/JuliaScriptLoader.jl", "max_issues_repo_head_hexsha": "beca946519b921006e90563e3aa33d7a2ff9edc1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/Script.jl", "max_forks_repo_name": "Gilga/JuliaScriptLoader.jl", "max_forks_repo_head_hexsha": "beca946519b921006e90563e3aa33d7a2ff9edc1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.119047619, "max_line_length": 62, "alphanum_fraction": 0.7056504599, "num_tokens": 204}
# -*- coding: utf-8 -*-
"""
Created on Mon Oct 15 14:49:49 2018

@author: shams
"""
import numpy as np
import pandas as pd
import networkx as nx
#from keras.preprocessing import sequence
#from keras.models import load_model
#from keras.layers import Dense, Input, LSTM, GRU
#from keras.models import Model
#import h5py


def usr_top_chans(usr, netWindow, nchans=5):
    chanList = list(netWindow.loc[netWindow['user'] == usr]['channel'].unique())
    b = netWindow.groupby(['user', 'channel']).count().reset_index()
    b['weight'] = b['text']
    b = b.drop(['subtype', 'type', 'ts', 'time', 'date', 'text'], axis=1)
    G = nx.DiGraph()
    networkG = nx.from_pandas_edgelist(b, source='user', target='channel', create_using=G)
    networkG.add_weighted_edges_from(list(b.itertuples(index=False, name=None)))
    try:
        h, a = nx.hits(networkG)
        bib = dict((k, a[k]) for k in chanList if k in a)
        chScore = pd.DataFrame.from_dict(bib, orient='index')
        chScore.columns = ['hScore']
        chScore = chScore.sort_values(by='hScore', ascending=False)
    except:
        # fall back to a looser tolerance if HITS fails to converge
        h, a = nx.hits(networkG, tol=1e-01)
        bib = dict((k, a[k]) for k in chanList if k in a)
        chScore = pd.DataFrame.from_dict(bib, orient='index')
        chScore.columns = ['hScore']
        chScore = chScore.sort_values(by='hScore', ascending=False)
    return (chScore.iloc[0:nchans])


# loading data
user_file = pd.read_json('C:/Users/shams/OneDrive/Documents/Projects/Insight/datasets/users.json')
#channel_file = pd.read_json('/scratch/nshams/data/channels.json')
#allData = pd.read_json('/scratch/nshams/data/allData.json')
allData = pd.read_json('C:/Users/shams/OneDrive/Documents/Projects/Insight/datasets/allData.json')

# prepare training data
freq = 'D'
winSize = 10
#freq = 'W'
allData = allData[allData['user'].notnull()]  # drop events with no user
allData['time'] = pd.to_datetime(allData['ts'], unit='s')
networkLog = allData.sort_values(by=['ts'])
networkLog['date'] = networkLog['time'].apply(lambda x: x.date())
networkLog['date'] = pd.to_datetime(networkLog['date'])
#usr_list = [x for x in allData['user'].unique() if x is not None]
usr_list = user_file[user_file['is_admin'] == 1]['id'].tolist()

bigData = pd.DataFrame()
for usr in usr_list:
    # ---------------------- build user's time series
    startDate = networkLog.loc[networkLog['user'] == usr]['date'].min()
    endDate = networkLog.loc[networkLog['user'] == usr]['date'].max()
    usrWindow = networkLog.loc[(networkLog['date'] >= startDate) & (networkLog['date'] <= endDate)]
    usrLog = usrWindow.loc[usrWindow['user'] == usr]
    usr_daily = usrLog['date'].value_counts().sort_index()
    usr_daily = usr_daily.reindex(fill_value=0)
    usr_daily.index = pd.DatetimeIndex(usr_daily.index)
    #usr_weekly = usr_daily.resample('W').sum()
    usr_ts = usr_daily.resample(freq).sum()
    input_ts = pd.DataFrame(usr_ts, index=usr_ts.index)
    input_ts = input_ts.rename(columns={'date': 'user_ts'})
    input_ts.fillna(0, inplace=True)
    input_ts['usr_ma'] = input_ts['user_ts'].rolling(window=winSize).mean()
    # ---------------------- find corresponding high score channels
    topChans = usr_top_chans(usr, usrWindow, nchans=3)
    topChans_list = topChans.index.values.tolist()
    ch_counter = list(enumerate(topChans_list, 1))
    for counter, ch in ch_counter:
        channel_log = usrWindow[usrWindow['channel'] == ch].sort_values(by='ts')
        channel_log['date'] = channel_log['time'].apply(lambda x: x.date())
        channel_daily = channel_log['date'].value_counts().sort_index()
        channel_daily = channel_daily.reindex(fill_value=0)
        channel_daily.index = pd.DatetimeIndex(channel_daily.index)
        channel_ts = channel_daily.resample(freq).sum()
        input_ts['ch'+str(counter)] = channel_ts
        input_ts['ch'+str(counter)].fillna(0, inplace=True)
        input_ts['ch'+str(counter)+'_ma'] = input_ts['ch'+str(counter)].rolling(window=winSize).mean()
    input_ts = input_ts.iloc[winSize:, :]
    input_ts['usr_tag'] = [usr for x in range(len(input_ts))]
    bigData = bigData.append(input_ts)
    #input_ts.to_json('/scratch/nshams/data/byUser/'+usr+'.json')

bigData.to_json('C:/Users/shams/OneDrive/Documents/Projects/Insight/datasets/adminData.json')
# append to the training data
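# A self-contained toy illustration of the HITS scoring used in
# usr_top_chans() above (user and channel names are made up; in the
# user -> channel digraph, channels act as HITS "authorities"):
#
#     import networkx as nx
#     G = nx.DiGraph()
#     G.add_weighted_edges_from([('u1', 'general', 5),
#                                ('u1', 'random', 1),
#                                ('u2', 'general', 2)])
#     hubs, auths = nx.hits(G)
#     # auths['general'] > auths['random'], so 'general' would rank first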
{"hexsha": "d406908735e214707e994df7c082e67933fd8c3e", "size": 4362, "ext": "py", "lang": "Python", "max_stars_repo_path": "HPC_code/data_prep.py", "max_stars_repo_name": "nasim-shams/SlackTrack", "max_stars_repo_head_hexsha": "09d9d4522679ac2f95efc2d7653d7d1e432326b6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HPC_code/data_prep.py", "max_issues_repo_name": "nasim-shams/SlackTrack", "max_issues_repo_head_hexsha": "09d9d4522679ac2f95efc2d7653d7d1e432326b6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HPC_code/data_prep.py", "max_forks_repo_name": "nasim-shams/SlackTrack", "max_forks_repo_head_hexsha": "09d9d4522679ac2f95efc2d7653d7d1e432326b6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6034482759, "max_line_length": 102, "alphanum_fraction": 0.6648326456, "include": true, "reason": "import numpy,import networkx", "num_tokens": 1160}
from __future__ import annotations

import dataclasses as dcls
import functools
import logging
from dataclasses import dataclass
from numbers import Number
from typing import Any, Callable, Dict, Generic, Tuple, Type, TypeVar, final, overload

import torch
from numpy import ndarray
from rich.logging import RichHandler
from torch import Tensor
from torch import device as Device
from torch import dtype as DType

from .prepasses import PrePass, PrePassFunc
from .runnables import BatchInfo, Runnable, RunnableTensor
from .tensors import TensorLike

logger = logging.getLogger(__name__)
logger.addHandler(RichHandler())

# FIXME: Currently disregards RunnableTensor API

T = TypeVar("T", covariant=True)
V = TypeVar("V", contravariant=True)


@final
@dataclass(frozen=True)
class LazyFunction(Generic[V]):
    func: Callable[..., Tensor]
    prepass_func: PrePassFunc

    def __call__(self, *args: Any, **kwargs: Any) -> DelayedTensor:
        lazy_args = tuple(delayed(arg) for arg in args)
        lazy_kwargs = dict((k, delayed(v)) for (k, v) in kwargs.items())
        prepass = self.prepass_func(*args, **kwargs)
        return DelayedTensor(self.func, prepass, *lazy_args, **lazy_kwargs)

    def __get__(self, obj: V, objtype: Type[V]) -> Callable[..., DelayedTensor]:
        assert isinstance(obj, objtype), [type(obj), objtype]
        if obj is None:
            return self
        else:
            return functools.partial(self, obj)


@final
@dataclass(init=False)
class DelayedTensor(RunnableTensor):
    func: Callable[..., Tensor]
    prepass: PrePass
    args: Tuple[Runnable[Any], ...] = dcls.field(default_factory=tuple)
    kwargs: Dict[str, Runnable[Any]] = dcls.field(default_factory=dict)

    def __init__(
        self,
        func: Callable[..., Tensor],
        prepass: PrePass,
        *args: Runnable[Tensor] | Tensor | Number,
        **kwargs: Runnable[Tensor] | Tensor | Number,
    ) -> None:
        self.func = func
        self.prepass = prepass
        self.args = tuple(delayed(arg) for arg in args)
        self.kwargs = dict((k, delayed(v)) for (k, v) in kwargs.items())

    def run(self, partial: range | None = None) -> Tensor:
        del partial
        real_args = [arg.run() for arg in self.args]
        real_kwargs = {k: v.run() for (k, v) in self.kwargs.items()}
        result = self.func(*real_args, **real_kwargs)
        assert self.prepass.shape == result.shape
        return result

    def size(self, dim: int | None = None) -> int | Tuple[int, ...]:
        shape = self.prepass.shape
        if dim is not None:
            return shape[dim]
        else:
            return shape

    @property
    def dtype(self) -> DType:
        return self.prepass.dtype

    @property
    def device(self) -> str | Device:
        return self.prepass.device


class ImmediateTensor(Tensor, RunnableTensor, TensorLike):
    """
    Immediate tensor is a thin wrapper for the `Tensor` class.
    It's basically a tensor.
    """

    batch: BatchInfo | None = None

    def run(self, partial: range | None = None) -> Tensor:
        del partial
        return self


@dataclass
class ImmediateNumber(Runnable[Number]):
    data: Number

    def run(self, partial: range | None = None) -> Number:
        del partial
        return self.data


@overload
def delayed(input: Runnable[T]) -> Runnable[T]:
    ...


@overload
def delayed(input: Tensor | ndarray) -> RunnableTensor:
    ...


@overload
def delayed(input: Number) -> Runnable[Number]:
    ...


def delayed(input: Runnable[Any] | Tensor | ndarray | Number) -> Runnable[Any]:
    if isinstance(input, Runnable):
        return input

    if isinstance(input, ndarray):
        # Convert eagerly so the Tensor branch below also covers ndarrays.
        input = torch.from_numpy(input)

    if isinstance(input, Number):
        return ImmediateNumber(input)

    if isinstance(input, Tensor):
        return input.as_subclass(ImmediateTensor)  # type: ignore

    raise ValueError
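# A minimal usage sketch (illustrative; `matmul_prepass` stands in for a
# PrePassFunc from this package's prepasses module and is not defined here):
#
#     lazy_mm = LazyFunction(torch.matmul, matmul_prepass)
#     out = lazy_mm(a, b)   # builds a DelayedTensor; nothing is computed yet
#     out.size()            # shape comes from the prepass, still no compute
#     result = out.run()    # materializes the actual torch.Tensor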
{"hexsha": "91d86eebb467d3c0c19e53eba41ea5a771569ee3", "size": 3919, "ext": "py", "lang": "Python", "max_stars_repo_path": "koila/tensors/delayed.py", "max_stars_repo_name": "techthiyanes/koila", "max_stars_repo_head_hexsha": "b665482ff99a02bfeeceaa1323589fb89495a30c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "koila/tensors/delayed.py", "max_issues_repo_name": "techthiyanes/koila", "max_issues_repo_head_hexsha": "b665482ff99a02bfeeceaa1323589fb89495a30c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "koila/tensors/delayed.py", "max_forks_repo_name": "techthiyanes/koila", "max_forks_repo_head_hexsha": "b665482ff99a02bfeeceaa1323589fb89495a30c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.4797297297, "max_line_length": 87, "alphanum_fraction": 0.6547588671, "include": true, "reason": "from numpy", "num_tokens": 944}
""" Input/output relation task. Every input and output is explicitly defined. XOR is an example of this task. """ ### IMPORTS ### import random # Libraries import numpy as np # Local from ..networks.rnn import NeuralNetwork class MappingTask(object): # Default XOR input/output pairs INPUTS = [(-0.2, 0.1, 0.3, -0.4), (0.6, -0.1, 0.7, -0.5), (0.8, 0.1, -0.6, 0.0)] OUTPUTS = [(0.4, 0.6, 0.5, 0.7), (0.1, 0.3, 0.2, 0.9), (0.7, 0.1, 0.2, 0.1)] EPSILON = 1e-100 def __init__(self, do_all=True): self.do_all = do_all self.INPUTS = np.array(self.INPUTS, dtype=float) self.OUTPUTS = np.array(self.OUTPUTS, dtype=float) def evaluate(self, network, verbose=False): if not isinstance(network, NeuralNetwork): network = NeuralNetwork(network) network.make_feedforward() # if not network.node_types[-1](-1000) < -0.95: # print network.node_types[-1](-1000) # raise Exception("Network should be able to output value of -1, e.g. using a tanh node.") pairs = zip(self.INPUTS, self.OUTPUTS) random.shuffle(pairs) if not self.do_all: pairs = [random.choice(pairs)] rmse = 0.0 for (i, target) in pairs: # Feed with bias output = network.feed(i) # Grab the output output = output[-len(target):] err = (target - output) err[abs(err) < self.EPSILON] = 0; err = (err ** 2).mean() # Add error if verbose: print "%r -> %r (%.2f)" % (i, output, err) rmse += err score = 1/(1+np.sqrt(rmse / len(pairs))) return {'fitness': score} def solve(self, network): return int(self.evaluate(network) > 0.9)
{"hexsha": "1633353fb8f10d60daaab8ae39b6e5e5ac67446d", "size": 1876, "ext": "py", "lang": "Python", "max_stars_repo_path": "peas/tasks/mapping.py", "max_stars_repo_name": "promanev/PDSTEP_SNN_PEAS", "max_stars_repo_head_hexsha": "864cef4a86989b757f7b849b7d0486a83c6a85ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "peas/tasks/mapping.py", "max_issues_repo_name": "promanev/PDSTEP_SNN_PEAS", "max_issues_repo_head_hexsha": "864cef4a86989b757f7b849b7d0486a83c6a85ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "peas/tasks/mapping.py", "max_forks_repo_name": "promanev/PDSTEP_SNN_PEAS", "max_forks_repo_head_hexsha": "864cef4a86989b757f7b849b7d0486a83c6a85ca", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.7540983607, "max_line_length": 102, "alphanum_fraction": 0.5325159915, "include": true, "reason": "import numpy", "num_tokens": 547}
from flask import Flask, render_template, request, jsonify
from tensorflow.keras.models import load_model
import cv2
import numpy as np
import base64
from PIL import Image
import io
import re

img_size = 100

app = Flask(__name__)

model = load_model('model/model-015.model')

label_dict = {0: 'Covid19 Negative', 1: 'Covid19 Positive'}


def preprocess(img):
    img = np.array(img)
    if (img.ndim == 3):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    else:
        gray = img
    gray = gray/255
    resized = cv2.resize(gray, (img_size, img_size))
    reshaped = resized.reshape(1, img_size, img_size)
    return reshaped


@app.route("/")
def index():
    return (render_template("index.html"))


@app.route("/predict", methods=["POST"])
def predict():
    print('HERE')
    message = request.get_json(force=True)
    encoded = message['image']
    decoded = base64.b64decode(encoded)
    dataBytesIO = io.BytesIO(decoded)
    dataBytesIO.seek(0)
    image = Image.open(dataBytesIO)

    test_image = preprocess(image)

    prediction = model.predict(test_image)
    result = np.argmax(prediction, axis=1)[0]
    accuracy = float(np.max(prediction, axis=1)[0])

    label = label_dict[result]

    print(prediction, result, accuracy)

    response = {'prediction': {'result': label, 'accuracy': accuracy}}

    return jsonify(response)


app.run(debug=True)

# <img src="" id="img" crossorigin="anonymous" width="400" alt="Image preview...">
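# A hypothetical client-side sketch for the /predict endpoint above (the file
# name and host are placeholders; `requests` is not used by this app itself):
#
#     import base64, requests
#     with open('xray.png', 'rb') as f:
#         encoded = base64.b64encode(f.read()).decode()
#     r = requests.post('http://localhost:5000/predict', json={'image': encoded})
#     print(r.json())   # {'prediction': {'result': ..., 'accuracy': ...}}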
{"hexsha": "539aeaad4936607a7192bcd9657eef402b3f29fd", "size": 1391, "ext": "py", "lang": "Python", "max_stars_repo_path": "webapp/app.py", "max_stars_repo_name": "amitd307/Covid-19-prediction-using-X-Ray-images", "max_stars_repo_head_hexsha": "2a12f6975b3301466957d41e08899940ebd44840", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-13T09:27:01.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-13T09:27:01.000Z", "max_issues_repo_path": "webapp/app.py", "max_issues_repo_name": "amitd307/Covid-19-prediction-using-X-Ray-images", "max_issues_repo_head_hexsha": "2a12f6975b3301466957d41e08899940ebd44840", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "webapp/app.py", "max_forks_repo_name": "amitd307/Covid-19-prediction-using-X-Ray-images", "max_forks_repo_head_hexsha": "2a12f6975b3301466957d41e08899940ebd44840", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-21T17:50:07.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-21T17:50:07.000Z", "avg_line_length": 22.435483871, "max_line_length": 81, "alphanum_fraction": 0.7124370956, "include": true, "reason": "import numpy", "num_tokens": 334}
// Copyright (C) 2009-2012 Lorenzo Caminiti
// Distributed under the Boost Software License, Version 1.0
// (see accompanying file LICENSE_1_0.txt or a copy at
// http://www.boost.org/LICENSE_1_0.txt)
// Home at http://www.boost.org/libs/local_function

#ifndef BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_PARAMS_HPP_
#define BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_PARAMS_HPP_

#include <boost/local_function/aux_/preprocessor/traits/decl_/index.hpp>
#include <boost/local_function/aux_/preprocessor/traits/param.hpp>
#include <boost/local_function/detail/preprocessor/keyword/default.hpp>
#include <boost/preprocessor/tuple/elem.hpp>
#include <boost/preprocessor/tuple/eat.hpp>
#include <boost/preprocessor/tuple/rem.hpp>
#include <boost/preprocessor/arithmetic/inc.hpp>
#include <boost/preprocessor/control/iif.hpp>
#include <boost/preprocessor/logical/compl.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
#include <boost/preprocessor/list/adt.hpp>
#include <boost/preprocessor/list/fold_left.hpp>

// PRIVATE //

#define BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_PARAMS_DEFAULT_OP_(s, \
        default_count, param_traits) \
    BOOST_PP_IIF(BOOST_PP_IS_EMPTY( \
            BOOST_LOCAL_FUNCTION_AUX_PP_PARAM_TRAITS_DEFAULT(param_traits)), \
        BOOST_PP_TUPLE_REM(1) \
    , \
        BOOST_PP_INC \
    )(default_count)

// Precondition: params is a pp-list which is not nil.
#define BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_PARAMS_DEFAULT_COUNT_(params) \
    BOOST_PP_LIST_FOLD_LEFT( \
            BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_PARAMS_DEFAULT_OP_, \
            0 /* start with defaults_count to 0 */, params)

// PUBLIC //

// Expand: pp-list of param-traits (no bound variables).
#define BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_PARAMS(decl_traits) \
    BOOST_PP_TUPLE_ELEM(BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_INDEX_MAX, \
            BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_INDEX_PARAMS, decl_traits)

// Expand: number of parameters with default values (0 if no default).
#define BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_PARAMS_DEFAULT_COUNT( \
        decl_traits) \
    BOOST_PP_IIF(BOOST_PP_LIST_IS_CONS( \
        BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_PARAMS(decl_traits)), \
        BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_PARAMS_DEFAULT_COUNT_ \
    , \
        0 BOOST_PP_TUPLE_EAT(1) \
    )(BOOST_LOCAL_FUNCTION_AUX_PP_DECL_TRAITS_PARAMS(decl_traits))

#endif // #include guard
{"hexsha": "10dd60961bc01f4f77f6a867b4c256ffee7de299", "size": 2429, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "deps/src/boost_1_65_1/boost/local_function/aux_/preprocessor/traits/decl_params.hpp", "max_stars_repo_name": "shreyasvj25/turicreate", "max_stars_repo_head_hexsha": "32e84ca16aef8d04aff3d49ae9984bd49326bffd", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11356.0, "max_stars_repo_stars_event_min_datetime": "2017-12-08T19:42:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T16:55:25.000Z", "max_issues_repo_path": "deps/src/boost_1_65_1/boost/local_function/aux_/preprocessor/traits/decl_params.hpp", "max_issues_repo_name": "shreyasvj25/turicreate", "max_issues_repo_head_hexsha": "32e84ca16aef8d04aff3d49ae9984bd49326bffd", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2402.0, "max_issues_repo_issues_event_min_datetime": "2017-12-08T22:31:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-28T19:25:52.000Z", "max_forks_repo_path": "deps/src/boost_1_65_1/boost/local_function/aux_/preprocessor/traits/decl_params.hpp", "max_forks_repo_name": "shreyasvj25/turicreate", "max_forks_repo_head_hexsha": "32e84ca16aef8d04aff3d49ae9984bd49326bffd", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1343.0, "max_forks_repo_forks_event_min_datetime": "2017-12-08T19:47:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T11:31:36.000Z", "avg_line_length": 41.1694915254, "max_line_length": 79, "alphanum_fraction": 0.7809798271, "num_tokens": 563}
import collections

import numpy as np
from django.test import TestCase

from dptable.variance_reduce import VarianceReduce


class TestDPTable(TestCase):

    def setUp(self):
        self.domain = collections.OrderedDict([
            ('A', [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
                   36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
                   50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
                   64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,
                   78, 79, 80, 81, 82, 83, 84, 85]),
            ('B', [137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147,
                   148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158,
                   159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169,
                   170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180,
                   181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
                   192, 193, 194, 195, 196, 197, 198, 199, 200]),
            ('C', [45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58,
                   59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72,
                   73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86,
                   87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100,
                   101, 102, 103, 104, 105, 106, 107, 108]),
            ('D', [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
                   34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,
                   48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
                   62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75,
                   76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89,
                   90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102,
                   103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113,
                   114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124,
                   125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
                   136, 137, 138, 139, 140]),
            ('E', [0, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
                   17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
                   31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
                   45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58,
                   59, 60]),
            ('F', [0, 1]),
            ('G', [0, 1])])
        self.jtree = dict({
            'parents': [0, 1, 0, 0, 0],
            'separators': [[], ['F'], [], [], []],
            'cliques': [['F', 'B'], ['F', 'C'], ['D', 'E'], ['A'], ['G']]
        })
        self._lambda = 0.2
        self.var_reduce = VarianceReduce(self.domain, self.jtree['cliques'], self._lambda)

    def test_different_matrix_constructor_with_zero_result(self):
        diff_operator = self.var_reduce.construct_difference(4)
        ones_4 = [1, 1, 1, 1]
        result = np.dot(ones_4, diff_operator)
        self.assertEqual(np.sum(np.square(result)) == 0, True)

    def test_jtree_matrix_rep_with_chain_structure(self):
        print self.var_reduce.jt_rep()

    def test_extract_index_of_subset_of_node(self):
        nodes_subset = ['A', 'D', 'B']
        nodes_index = self.var_reduce.find_subset_index(nodes_subset)
        self.assertEqual(nodes_index == [0, 3, 1], True)

    def test_log_p_func_with_specific_log_sum(self):
        log_sum_by_clique = self.var_reduce.log_p_func()
        self.assertEqual(log_sum_by_clique[0] - 7.20340552108 < 1e-10, True)

    def test_main_func(self):
        opted_cluster = self.var_reduce.main()
        print opted_cluster
{"hexsha": "8e752eae7f38b1487d765b82a0d0c12b5d63589e", "size": 2978, "ext": "py", "lang": "Python", "max_stars_repo_path": "test/test_dptable.py", "max_stars_repo_name": "sylar233/de-identification", "max_stars_repo_head_hexsha": "44731e9c22de647063bd82a19936b4c5a144ea6e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2016-11-07T12:54:51.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-15T00:20:26.000Z", "max_issues_repo_path": "test/test_dptable.py", "max_issues_repo_name": "sylar233/de-identification", "max_issues_repo_head_hexsha": "44731e9c22de647063bd82a19936b4c5a144ea6e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2016-07-05T06:06:31.000Z", "max_issues_repo_issues_event_max_datetime": "2016-07-27T05:21:36.000Z", "max_forks_repo_path": "test/test_dptable.py", "max_forks_repo_name": "sylar233/de-identification", "max_forks_repo_head_hexsha": "44731e9c22de647063bd82a19936b4c5a144ea6e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-07-18T07:32:43.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-05T05:25:55.000Z", "avg_line_length": 62.0416666667, "max_line_length": 537, "alphanum_fraction": 0.596373405, "include": true, "reason": "import numpy", "num_tokens": 1537}
#include <boost/test/unit_test.hpp>

#include <test_block.hpp>
#include <timer.hpp>
#include <utils.hpp>

#include <crypto/hex.hpp>

BOOST_AUTO_TEST_CASE(hex_test)
{
    std::vector<uint8_t> bytes;
    auto from_hex_f = [&]() {
        bytes = std::move(crypto::from_hex<uint8_t>(test_block));
    };

    std::string hex;
    auto to_hex_f = [&]() {
        hex = std::move(crypto::to_hex(bytes));
    };

    [[maybe_unused]] auto from_hex_secs = timer::estimate(from_hex_f);
    [[maybe_unused]] auto to_hex_secs = timer::estimate(to_hex_f);

    write_result("from_hex", from_hex_secs);
    write_result("to_hex", to_hex_secs);

    BOOST_CHECK(test_block == hex);
}
{"hexsha": "d8005670f5d7e0eb0c6d1b40eaabbe9e9d65ce31", "size": 647, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "tests/crypto/hex_test.cpp", "max_stars_repo_name": "asuvalov/climb", "max_stars_repo_head_hexsha": "e1349d2deb1d2cfbd8ac01146cf9c1dedc7e51e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tests/crypto/hex_test.cpp", "max_issues_repo_name": "asuvalov/climb", "max_issues_repo_head_hexsha": "e1349d2deb1d2cfbd8ac01146cf9c1dedc7e51e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tests/crypto/hex_test.cpp", "max_forks_repo_name": "asuvalov/climb", "max_forks_repo_head_hexsha": "e1349d2deb1d2cfbd8ac01146cf9c1dedc7e51e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.88, "max_line_length": 90, "alphanum_fraction": 0.6800618238, "num_tokens": 169}
opt.table<-function(out.file,dirs,labels,first.y=2051,last.y=2051,stochastic=F,test=F){
  #####################
  for (dir in dirs) {
    if (file.access(file.path(data.path,dir,"op.dat"), mode = 0)!=0) stop(paste('Directory',dir,'does not exist'))
  }

  Init.function()  # get SMS.control object including sp.names
  ref<-Read.reference.points.OP(dir=data.path)
  Blim<-as.vector(c(rep(-1, first.VPA-1),ref[,'Blim']))
  Bpa<-as.vector(c(rep(-1, first.VPA-1),ref[,'Bpa']))
  Fpa<-as.vector(c(rep(-1, first.VPA-1),ref[,'Fpa']))

  b<-NULL
  for (dir in dirs) {
    a<-Read.OP.condensed(dir=file.path(data.path,dir))
    a<-droplevels(subset(a,Year>=first.y & Year<=last.y & Species !='Plaice' & Species !='Sole'))
    a<-data.frame(scen=dir,a)
    b<-rbind(b,a)
  }

  b$belowBlim<-ifelse(b$SSB<Blim[b$Species.n],1,0)
  b$belowBpa<-ifelse(b$SSB<Bpa[b$Species.n],1,0)
  b$aboveFpa<-ifelse(b$Fbar>Fpa[b$Species.n],1,0)

  minY<-min(b$Year)
  maxY<-max(b$Year)
  ny<-maxY-minY+1

  a<-aggregate(cbind(yield,value,SSB)~scen+Species.n+Species,mean,data=b)
  aa<-aggregate(cbind(yield,value,SSB)~scen,sum,data=a)

  risk<-aggregate(cbind(belowBlim,belowBpa,aboveFpa)~scen+Species.n+Species,sum,data=b)
  if (test) saverisk<<-risk
  risk$belowBlim<-risk$belowBlim*100/ny
  risk$belowBpa<-risk$belowBpa*100/ny
  risk$aboveFpa<-risk$aboveFpa*100/ny
  risk.average<-aggregate(list(risk.average.Blim=risk$belowBlim),list(scen=risk$scen),mean)
  if (test) saverisk2<<-risk

  risk$belowBlim<-ifelse(risk$belowBlim<5,0,1)
  risk$belowBlim10<-ifelse(risk$belowBlim<10,0,1)
  risk$belowBpa<-ifelse(risk$belowBpa<5,0,1)
  risk$aboveFpa<-ifelse(risk$aboveFpa>5,1,0)
  risk<-aggregate(cbind(belowBlim,belowBlim10,belowBpa,aboveFpa)~scen,sum,data=risk)
  if (test) saverisk3<<-risk

  aa<-merge(aa,risk)
  aa<-merge(aa,risk.average)

  label<-data.frame(scen=dirs,labels)
  aa<-merge(aa,label)

  a<-cbind(
    t(t(tapply(aa$yield,aa$labels,sum)/1000)),
    t(t(tapply(aa$value,aa$labels,sum)/1000)),
    t(t(tapply(aa$SSB,aa$labels,sum)/1000)),
    t(t(tapply(aa$belowBlim,aa$labels,sum))),
    t(t(tapply(aa$belowBpa,aa$labels,sum))),
    t(t(tapply(aa$aboveFpa,aa$labels,sum))))
  if (stochastic) a<-cbind(a,t(t(tapply(aa$risk.average,aa$labels,sum))))

  my.dimnames<-c('Yield','Value','SSB','no. of stocks','no. of stocks','no. of stocks')
  if (stochastic) my.dimnames<-c(my.dimnames,'Average (%)')
  dimnames(a)[[2]]<-my.dimnames
  print(a)

  if (!stochastic) {
    my.units<-c('(kt)','(m Euro)','(kt)','below Blim','below Bpa ','above Fpa ')
    my.dec<-c(0,0,0,0,0,0)
  } else {
    my.units<-c('(kt)','(m Euro)','(kt)','p(SSB < Blim) > 5%','p(SSB < Bpa) > 5%','above Fpa ','SSB < Blim')
    my.dec<-c(0,0,0,0,0,0,0)
  }

  xtab(a, caption='', cornername=' ',
       file=file.path(paste(out.file,'.html',sep='')),
       dec=my.dec, width='"100%"', units=my.units)
  cat('\nOutput file: ',file.path(paste(out.file,'.html',sep='')))
}

#dirs<-paste("D1f_HCR_1_1_Rec0_penBpa____CVadj_limF_110_Oth",1:10,sep='')
#dirs<-paste("D2f_HCR_2_0_Rec0_penBpa____CVadj_limF_110_Oth",1:10,sep='')
dirs<-paste("D1f_HCR_1_1_Rec0_penBpa____CVadj_limF_110_Oth",1:10,sep='')
#dirs<-paste("D1c_HCR_1_1_Rec0_penBpa_atAgeW___CVadj_limF_110_Oth",1:10,sep='')
#dirs<-paste("D1e_HCR_1_1_Rec0_penBlim____CVadj_limF_110_Oth",1:10,sep='')
#dirs<-paste("D1b_HCR_1_1_Rec0_penBlim_atAgeW___CVadj_limF_110_Oth",1:10,sep='')
labels<-c('01. Default','02. Birds','03. R. radiata','04. Gurnards','05. Mackerel',
          '06. Horse Mackerel','07. Grey seal','08. Porpoise','09. Hake','10. All')

opt.table(out.file=file.path(data.path,'opti_compare_determ_HCR1_all_other'),
          first.y=2034, last.y=2043, stochastic=F, test=T,
          dirs=dirs, labels=labels)

dirs<-c(
  "D1a_HCR_1_1_Rec0__atAgeW___CVadj_limF_110_",
  "D1d_HCR_1_1_Rec0_____CVadj_limF_110_",
  "D1b_HCR_1_1_Rec0_penBlim_atAgeW___CVadj_limF_110_",
  "D1e_HCR_1_1_Rec0_penBlim____CVadj_limF_110_",
  "D1c_HCR_1_1_Rec0_penBpa_atAgeW___CVadj_limF_110_",
  "D1f_HCR_1_1_Rec0_penBpa____CVadj_limF_110_"
)
labels<-c(
  "Value",
  "Yield",
  "Value, penalize SSB < Blim",
  "Yield, penalize SSB < Blim",
  "Value, penalize SSB < Bpa",
  "Yield, penalize SSB < Bpa"
)
opt.table(out.file=file.path(data.path,'opti_compare_determ_HCR1'),
          first.y=2034, last.y=2043, stochastic=F, test=T,
          dirs=dirs, labels=labels)

dirs<-c(
  "D2a_HCR_2_0_Rec0__atAgeW___CVadj_limF_110_",
  "D2d_HCR_2_0_Rec0_____CVadj_limF_110_",
  "D2b_HCR_2_0_Rec0_penBlim_atAgeW___CVadj_limF_110_",
  "D2e_HCR_2_0_Rec0_penBlim____CVadj_limF_110_",
  "D2c_HCR_2_0_Rec0_penBpa_atAgeW___CVadj_limF_110_",
  "D2f_HCR_2_0_Rec0_penBpa____CVadj_limF_110_"
)
labels<-c(
  "Value",
  "Yield",
  "Value, penalize SSB < Blim",
  "Yield, penalize SSB < Blim",
  "Value, penalize SSB < Bpa",
  "Yield, penalize SSB < Bpa"
)
opt.table(out.file=file.path(data.path,'opti_compare_determ_HCR2'),
          first.y=2034, last.y=2043, stochastic=F, test=T,
          dirs=dirs, labels=labels)
{"hexsha": "1266d0ac60d9966277d4e059241dcde5e2f6ee3a", "size": 5040, "ext": "r", "lang": "R", "max_stars_repo_path": "SMS_R_prog/hcr_op_batch_optimize_compare.r", "max_stars_repo_name": "ices-eg/wg_WGSAM", "max_stars_repo_head_hexsha": "d5f93c431d1ec6c2fb1f3929f63cd9e636fc258a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-09-28T11:13:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-28T08:40:03.000Z", "max_issues_repo_path": "SMS_R_prog/hcr_op_batch_optimize_compare.r", "max_issues_repo_name": "ices-eg/wg_WGSAM", "max_issues_repo_head_hexsha": "d5f93c431d1ec6c2fb1f3929f63cd9e636fc258a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SMS_R_prog/hcr_op_batch_optimize_compare.r", "max_forks_repo_name": "ices-eg/wg_WGSAM", "max_forks_repo_head_hexsha": "d5f93c431d1ec6c2fb1f3929f63cd9e636fc258a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.7446808511, "max_line_length": 156, "alphanum_fraction": 0.6779761905, "num_tokens": 1982}
__id__ = "$Id: GetData.py 635 2009-06-24 01:19:00Z jlconlin $" __author__ = "$Author: jlconlin $" __version__ = " $Revision: 635 $" __date__ = "$Date: 2009-06-23 19:19:00 -0600 (Tue, 23 Jun 2009) $" """ This module is used to extract the data from the output files used in this parameter study. """ import os import sys import numpy p=os.path.join(os.path.expanduser('~'), 'Code/Misc') try: sys.path.index(p) except ValueError: sys.path.append(p) import gnuFile def main(): WriteFile = "Relaxed.dat" lambda0 = 0.997520 # Get files in directory Files = os.listdir( os.getcwd() ) ScreenFiles = [] DatFiles = [] for f in Files: root, ext = os.path.splitext(f) # Get root and extension of file # print "file: %s, root = %s, ext = %s" %(f, root, ext) if ext == '.screen': ScreenFiles.append(f) elif ext == '.dat': DatFiles.append(f) # Remove any file we don't like try: DatFiles.pop( DatFiles.index(WriteFile) ) except ValueError: pass Eigen0 = {} Eigen1 = {} Eigen2 = {} FOM = {} for f in DatFiles: root, ext = os.path.splitext(f) # Remove extension from file parameter = root[7:] # root looks like '50mfp.r0.01' gF = gnuFile.gnuFile(f) Eigen0[parameter] = gF.Data['RAM eigenvalue-0-Real'][-1] Eigen1[parameter] = gF.Data['RAM eigenvalue-1-Real'][-1] Eigen2[parameter] = gF.Data['RAM eigenvalue-2-Real'][-1] FOM[parameter] = gF.Data['FOM-Arnoldi'][-1] # Sort data by parameter value keys = Eigen0.keys() # A key looks like '.r10' keys.sort() # Extract data Parameter = [] Histories = [ [], [], [] ] Eigenvalues = [ [], [], [] ] EigenvaluesSD = [ [], [], [] ] F = [ [], [], [] ] for key in keys: Parameter.append(float(key)) n,v,sd = Eigen0[key] Histories[0].append(n) Eigenvalues[0].append(v) EigenvaluesSD[0].append(sd) n,v,sd = Eigen1[key] Histories[1].append(n) Eigenvalues[1].append(v) EigenvaluesSD[1].append(sd) n,v,sd = Eigen2[key] Histories[2].append(n) Eigenvalues[2].append(v) EigenvaluesSD[2].append(sd) F[0].append(FOM[key][-1]) Data = {} Data['Eigenvalue-0'] = (Parameter, Eigenvalues[0], EigenvaluesSD[0]) Data['Eigenvalue-1'] = (Parameter, Eigenvalues[1], EigenvaluesSD[1]) Data['Eigenvalue-2'] = (Parameter, Eigenvalues[2], EigenvaluesSD[2]) Data['FOM-1'] = (Parameter, F[0]) gnuFile.Write_gnuFile(WriteFile, Data) if __name__ == "__main__": main()
{"hexsha": "6afad656003ef7d124ca56ba0f3b6c08924e029b", "size": 2674, "ext": "py", "lang": "Python", "max_stars_repo_path": "Code/trunk/cpp/Research/ParametricStudy/Relaxed/50mfp/Trial2/GetData.py", "max_stars_repo_name": "jlconlin/PhDThesis", "max_stars_repo_head_hexsha": "8e704613721a800ce1c59576e94f40fa6f7cd986", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Code/trunk/cpp/Research/ParametricStudy/Relaxed/50mfp/Trial2/GetData.py", "max_issues_repo_name": "jlconlin/PhDThesis", "max_issues_repo_head_hexsha": "8e704613721a800ce1c59576e94f40fa6f7cd986", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Code/trunk/cpp/Research/ParametricStudy/Relaxed/50mfp/Trial2/GetData.py", "max_forks_repo_name": "jlconlin/PhDThesis", "max_forks_repo_head_hexsha": "8e704613721a800ce1c59576e94f40fa6f7cd986", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.8541666667, "max_line_length": 74, "alphanum_fraction": 0.5736724009, "include": true, "reason": "import numpy", "num_tokens": 807}
# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#

import miscClin
import miscMath
import miscMatrix
import miscTCGA
import plotMatrix
import tsvIO

import numpy
import sys

# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#

NA_VALUE = -999999

# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#


def readGeneListFromFile(geneFile):
    fh = file(geneFile)
    geneList = []
    for aLine in fh:
        aLine = aLine.strip()
        aLine = aLine.upper()
        tokenList = aLine.split('\t')
        if (len(tokenList) > 0):
            geneList += [tokenList[0]]
    fh.close()
    return (geneList)

# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#


def gene_in_list(geneName, geneList):
    geneName = geneName.upper()

    # first, look for a perfect match
    if (geneName in geneList):
        return (1)
    else:
        if (1):
            # do not allow partial matches ...
            return (0)

    # second, settle for a partial match
    for aGene in geneList:
        if (geneName.find(aGene) >= 0):
            return (1)
        if (aGene.find(geneName) >= 0):
            return (1)

    return (0)

# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#

if __name__ == "__main__":

    if (1):
        if (len(sys.argv) < 4 or len(sys.argv) > 5):
            print " Usage : %s <input file> <output file> <gene-list file> [min-nonZero-count] " % sys.argv[0]
            print " ERROR -- bad command line arguments "
            sys.exit(-1)

        inFile = sys.argv[1]
        outFile = sys.argv[2]
        geneFile = sys.argv[3]
        if (len(sys.argv) > 4):
            minNZC = int(sys.argv[4])
        else:
            minNZC = 1

        print " "
        print " "
        print " ********************* "
        print " in filterByGeneList : "
        print " ***************************************************************** "

        print " calling readTSV ... ", inFile
        testD = tsvIO.readTSV(inFile)
        tsvIO.lookAtDataD(testD)
        print " "
        print " "

        print " reading gene list from ", geneFile
        geneList = readGeneListFromFile(geneFile)
        print " --> have %d genes in list " % len(geneList)
        print geneList[:5]
        print geneList[-5:]
        print " "
        print " "

        rowLabels = testD['rowLabels']
        colLabels = testD['colLabels']
        dataMatrix = testD['dataMatrix']

        numRow = len(rowLabels)
        numCol = len(colLabels)
        print numRow, rowLabels[:5]
        print numCol, colLabels[:5]
        print " "
        print " "

        rmRowList = []
        nzHist = [0] * (numCol + 3)
        for iRow in range(numRow):
            if (iRow % 10000 == 0):
                print iRow, numRow
            rowName = rowLabels[iRow]
            tokenList = rowName.split(':')
            # print rowName, tokenList
            geneName = tokenList[2]
            ii = geneName.find('_')
            if (ii > 0):
                geneName = geneName[:ii]
            # print " looking for gene <%s> " % geneName
            if (not gene_in_list(geneName, geneList)):
                rmRowList += [iRow]
            else:
                numNZ = 0
                for iCol in range(numCol):
                    if (dataMatrix[iRow][iCol] != 0):
                        numNZ += 1
                nzHist[numNZ] += 1
                if (numNZ < minNZC):
                    rmRowList += [iRow]

        print " --> number of rows to be skipped : %d out of %d " % (len(rmRowList), numRow)
        print "     number of rows remaining : %d " % (numRow - len(rmRowList))
        print " "
        print " histogram of NZ counts : "
        for ii in range(len(nzHist)):
            if (nzHist[ii] > 0):
                print " %4d %12d " % (ii, nzHist[ii])
        print " "
        print " "

        newD = tsvIO.filter_dataMatrix(testD, rmRowList, [])
        tsvIO.lookAtDataD(newD)

        if (newD['dataType'] == ""):
            newD['dataType'] = "B:GNAB"

        colLabels = newD['colLabels']
        for ii in range(len(colLabels)):
            aLabel = colLabels[ii]
            if (aLabel.find("TUMOR") > 0):
                print " ERROR ??? how did this get here ??? ", aLabel
                sys.exit(-1)

        print " "
        print " ready to write output file ... ", outFile
        tsvIO.writeTSV_dataMatrix(newD, 0, 0, outFile)

# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
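# Example invocation (file names are hypothetical; arguments follow the usage
# string printed above):
#
#   python filterByGeneList.py featureMatrix.tsv filtered.tsv geneList.txt 3
#
# This keeps only rows whose gene symbol (the third ':'-separated token of the
# row label) appears in geneList.txt and which have at least 3 non-zero values.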
{"hexsha": "fa7b660e2434ea068f3ca0d258dab9a199e645c5", "size": 4372, "ext": "py", "lang": "Python", "max_stars_repo_path": "commands/feature_matrix_construction/main/filterByGeneList.py", "max_stars_repo_name": "cancerregulome/gidget", "max_stars_repo_head_hexsha": "6c9e9a37f9992267c7505c7a396ff7e2638599ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-02-22T21:29:23.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-19T07:38:21.000Z", "max_issues_repo_path": "commands/feature_matrix_construction/main/filterByGeneList.py", "max_issues_repo_name": "cancerregulome/gidget", "max_issues_repo_head_hexsha": "6c9e9a37f9992267c7505c7a396ff7e2638599ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2015-01-16T02:33:59.000Z", "max_issues_repo_issues_event_max_datetime": "2015-01-16T02:33:59.000Z", "max_forks_repo_path": "commands/feature_matrix_construction/main/filterByGeneList.py", "max_forks_repo_name": "cancerregulome/gidget", "max_forks_repo_head_hexsha": "6c9e9a37f9992267c7505c7a396ff7e2638599ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2015-12-27T08:40:12.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-01T06:30:23.000Z", "avg_line_length": 25.8698224852, "max_line_length": 110, "alphanum_fraction": 0.4643183898, "include": true, "reason": "import numpy", "num_tokens": 1559}
# updates eddy viscosity (ev/rev)

# append to path so we can access Field class
import sys
sys.path.append("../../../")

# class dependencies
import numpy as np
from bin.Field import Field, max, abs, isfinite

# fortran module
from bin.model_funcs.fortran_versions import turb2_fort


def turb_BL(model, ws, w, ncyc=0):

    # grid parameters
    [nx, ny] = ws.field_size()
    [il, jl] = [nx+1, ny+1]
    [ie, je] = [nx+2, ny+2]
    [ib, jb] = [nx+3, ny+3]

    dims = ws.get_dims()
    itl = dims['itl']
    itu = dims['itu']

    # flow related variables
    def get(varName):
        return ws.get_field(varName, model.className)
    p = get('p')    # pressure
    ev = get('ev')  # eddy viscosity

    # mesh vars
    vol = get('vol')
    x = ws.get_field('x')

    # flow params
    gamma = model.params['gamma']
    mach = model.params['rm']
    Re = model.params['re']
    xtran = model.params['xtran']

    # call turb
    turb2_fort.turb2(ie, je, itl+1, itu+1, w, p, ev, x, vol,
                     gamma, mach, Re, xtran, ncyc, [il, jl, ib, jb])
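# Sizing note (a reading of the index arithmetic above, hedged): for a
# hypothetical 64x64 cell grid, field_size() returning [nx, ny] = [64, 64]
# gives [il, jl] = [65, 65], [ie, je] = [66, 66] and [ib, jb] = [67, 67],
# i.e. the fortran routine is passed progressively larger index bounds that
# presumably cover one, two and three layers of boundary points.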
{"hexsha": "a3fb15ec3465ea388356f86709a397af05d30061", "size": 1067, "ext": "py", "lang": "Python", "max_stars_repo_path": "bin/model_funcs/fortran_versions/turb2_wrap.py", "max_stars_repo_name": "AlexT-L/RANS", "max_stars_repo_head_hexsha": "f4f477b30429e5028f9a0a53d59787f9f3821a00", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bin/model_funcs/fortran_versions/turb2_wrap.py", "max_issues_repo_name": "AlexT-L/RANS", "max_issues_repo_head_hexsha": "f4f477b30429e5028f9a0a53d59787f9f3821a00", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bin/model_funcs/fortran_versions/turb2_wrap.py", "max_forks_repo_name": "AlexT-L/RANS", "max_forks_repo_head_hexsha": "f4f477b30429e5028f9a0a53d59787f9f3821a00", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.1956521739, "max_line_length": 62, "alphanum_fraction": 0.5820056232, "include": true, "reason": "import numpy", "num_tokens": 337}
import warnings
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.externals import joblib
from sklearn import metrics
from sklearn import linear_model
from CreateVector import WordVector
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification
import logging
import os

warnings.filterwarnings("ignore", category=DeprecationWarning)
# os.chdir(r'../')

logger = logging.getLogger(__name__)

WHEN_TYPE = 'when'
WHAT_TYPE = 'what'
WHO_TYPE = 'who'
WHICH_TYPE = 'which'
WHY_TYPE = 'why'
HOW_TYPE = 'how'
QUANTITY_TYPE = 'quantity'
AFFIRMATION_TYPE = 'affirmation'
UNKNOWN_TYPE = 'unknown'

ALL_TYPES = [WHEN_TYPE, WHAT_TYPE, WHO_TYPE, WHICH_TYPE, WHY_TYPE,
             HOW_TYPE, QUANTITY_TYPE, AFFIRMATION_TYPE, UNKNOWN_TYPE]

TRAINING_DATA_PATH = 'data/train.txt'
# print(os.getcwd())


class ModelCreation():

    def __del__(self):
        logger.info("deleted")

    @staticmethod
    def load_data(filename):
        try:
            res = []
            # open the train data
            with open(filename, 'r+') as file:
                for line in file:
                    # parse out the label following the '|' separator
                    question, label = line.split("|", 1)
                    res.append((question.strip(), label.strip()))
        except:
            logger.exception("Error while opening the file")
        return res

    @staticmethod
    def train_vectors(method):
        try:
            train_data = ModelCreation.load_data(TRAINING_DATA_PATH)
            print("train vectors")
            question_vectors = np.asarray([WordVector.create_vector(line[0]) for line in train_data])

            encoder = LabelEncoder()
            encoder.fit(ALL_TYPES)
            train_labels = encoder.transform([line[1] for line in train_data])

            # We use an optimized limited-memory approach called BFGS.
            # The lbfgs method might be used. -- https://stats.stackexchange.com/questions/284712/how-does-the-l-bfgs-work
            # The following methods will be tested and we show the result to the end user.
            if method == "LogisticRegregressionLBFGS":
                trainedModel = linear_model.LogisticRegression(multi_class='multinomial', solver='lbfgs')
            elif method == "LogisticRegressionCV":
                trainedModel = linear_model.LogisticRegressionCV(cv=7, random_state=0, multi_class='multinomial')
            elif method == "LogisticRegressionNewton":
                trainedModel = linear_model.LogisticRegressionCV(multi_class='multinomial', solver='newton-cg')
            elif method == "LinearSVC":
                trainedModel = LinearSVC(random_state=0, tol=1e-5)

            trainedModel.fit(question_vectors, train_labels)
            joblib.dump(trainedModel, 'model/LogisticRegressionNew.pkl')

            train_data_prediction = trainedModel.predict(
                [WordVector.create_vector(line[0].lower()) for line in train_data])

            from QuestionClassifier import QuestionAssigner
            assigner = QuestionAssigner()
            print("training error " + method + ":",
                  assigner.training_error(train_labels, train_data_prediction))
            return trainedModel, encoder
        except:
            logger.exception("Train vectors error")


# obj = ModelCreation()
# obj.train_vectors("LogisticRegressionCV")
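# Expected format of data/train.txt, inferred from load_data() (the example
# lines are hypothetical): one sample per line, question and label separated
# by '|', with labels drawn from ALL_TYPES since the LabelEncoder is fitted
# on that list:
#
#   When was the company founded ? | when
#   How do I reset my password ? | how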
{"hexsha": "764532cbb09b2ea69b1d656d5d1536c52011034e", "size": 3327, "ext": "py", "lang": "Python", "max_stars_repo_path": "AlgorithmQuestionAnswering/QuestionClassification/CreateModel.py", "max_stars_repo_name": "zointblackbriar/QuestionAnswering", "max_stars_repo_head_hexsha": "319c3623ced22254d75c2918929a875090bd2bf5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-03-04T19:44:10.000Z", "max_stars_repo_stars_event_max_datetime": "2019-03-04T19:44:10.000Z", "max_issues_repo_path": "AlgorithmQuestionAnswering/QuestionClassification/CreateModel.py", "max_issues_repo_name": "zointblackbriar/QuestionAnswering", "max_issues_repo_head_hexsha": "319c3623ced22254d75c2918929a875090bd2bf5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "AlgorithmQuestionAnswering/QuestionClassification/CreateModel.py", "max_forks_repo_name": "zointblackbriar/QuestionAnswering", "max_forks_repo_head_hexsha": "319c3623ced22254d75c2918929a875090bd2bf5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.1411764706, "max_line_length": 124, "alphanum_fraction": 0.6699729486, "include": true, "reason": "import numpy", "num_tokens": 714}
\documentclass[a4paper]{article}

%% Language and font encodings
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{caption}

%% Sets page size and margins
\usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}

%% Useful packages
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{tikz}
\usetikzlibrary{arrows.meta}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage[colorlinks=true, allcolors=blue]{hyperref}
\usepackage{color}
\usepackage{url}

%% display solutions or not
\newif\ifsol
\soltrue % comment out to hide solutions

%% todo tracker -- overleaf v2 has better one it uses but will default to this if compiled on something that doesn't have built-in todo
%\newcommand{\todo}[1]{\textbf{\textcolor{red}{#1}}}

\title{Section 6: Reinforcement Learning}
\author{CS 182 - Artificial Intelligence}
\date{}

\begin{document}
\maketitle

\noindent We use \textbf{Reinforcement Learning (RL)} when we have a Markov Decision Process (MDP) for which we know:
\begin{itemize}
\item $S$: set of states
\item $A$: set of actions (for any given state)
\item $s_0$: start state
\item $s_g$: goal state (or set of goal states)
\end{itemize}
but do not know:
\begin{itemize}
\item $T : S \times A \to S$: transition map (or model)
\item $R : S \times A \to \mathbb{R}$: reward function
\end{itemize}
The goal of reinforcement learning is to learn a \textbf{policy} $\pi: S \to A$ which returns the best action for any given state in the problem. In the process of looking for the policy, RL algorithms often calculate:
\begin{itemize}
\item $V : S \to \mathbb{R}$: \textbf{value function}, which stores the utility of every state
\item $Q : S \times A \to \mathbb{R}$: \textbf{Q-value function}, which stores the value of every (state, action) pair
\end{itemize}

\noindent In \textbf{Passive Reinforcement Learning}, an agent has a fixed policy, and learns about the environment while executing that policy. In lecture, we discussed two algorithms:
\begin{enumerate}
\item \textbf{Direct evaluation} runs many ``experiments'', or ``simulations'', resulting from following a given policy $\pi$, and keeps track of the total discounted rewards of the visited states. Afterwards, the reward instances for each state are averaged to determine the state's value. This could be written as
\[ V(s) = \frac{1}{n} \sum_i (R(s_i) + \gamma R(s_i') + \gamma^2 R(s_i'') + ...) \]
where $s_i$, $s_i'$, $s_i''$, $\dots$, are the states visited in the $i$th simulation. Direct evaluation can also be referred to as a \textbf{Monte Carlo} approach, since it depends on averaging over many randomized simulations or samples.
\item \textbf{Temporal difference learning} iteratively runs experiments following a given policy. With each transition, a sample-based Bellman update is made to the value of the visited state (both equations compute the same thing):
\[ V^{\pi}(s) \leftarrow V^{\pi}(s) + \alpha [R(s, \pi(s), s') + \gamma V^{\pi}(s') - V^{\pi}(s)] \]
\[ V^{\pi}(s) \leftarrow (1- \alpha) V^{\pi}(s) + \alpha [R(s, \pi(s), s') + \gamma V^{\pi}(s')] \]
This algorithm can be thought of as sample-based policy evaluation. Remember that policy evaluation was:
$$V^{\pi}(s) \leftarrow \sum_{s'} T(s, \pi_k(s), s') [R(s,\pi(s),s') + \gamma V^{\pi}(s')]$$
In TD learning we are still executing some policy $\pi$; we just don't know what the transition model is, and thus can't compute all of the possible transitions to sum over.
Instead, we are simply computing one of the values in the sum and making a weighted update of our current value to incorporate the new information.
\end{enumerate}

\noindent In \textbf{Active Reinforcement Learning}, an agent chooses actions and tries to find an optimal policy while learning about the environment. The primary algorithm used in active RL is \textbf{Q-learning}. Similarly to TD learning, Q-learning makes sample-based Bellman updates, but to Q-values instead of values:
\[ Q(s, a) \leftarrow Q(s, a) + \alpha [R(s, a, s') + \gamma \max_{a'} Q(s', a') - Q(s, a)] \]
\[ Q(s, a) \leftarrow (1 - \alpha) Q(s, a) + \alpha [R(s, a, s') + \gamma \max_{a'} Q(s', a')] \]
Since we are learning Q-values and not values, we can extract the current estimate of the optimal policy at any time by a one-step maximization over the Q-values:
\[ \pi(s) = arg\max_{a} Q(s,a) \]

\noindent Some algorithmic details and variations:
\begin{enumerate}
\item In active algorithms, we must choose actions to take from states. One approach is \textbf{$\epsilon$-greedy action selection}: with (small) probability $\epsilon$, actions are chosen randomly; with (large) probability $1 - \epsilon$, we choose the current optimal action. This encourages exploration of the state space and is generally helpful in practice. Note that this is called off-policy learning: we learn the exact Q-values but execute a different policy, the $\epsilon$-greedy policy.
\item How can we evaluate the choice of parameters or exploration functions in Q-learning? One approach is to measure \textbf{regret} -- the difference between expected rewards and final optimal rewards.
\item In linear \textbf{approximate} (or \textbf{feature-based}) \textbf{Q-learning} we approximate the Q-function with a set of functions which are intended to represent higher-level features of the state space:
\[ Q(s,a) = w_1 f_1(s,a) + w_2 f_2(s,a) + \dots + w_m f_m(s,a) \]
For example, for PACMAN they could be the number of ghosts, the distance to the nearest ghost, the distance to the nearest food, etc. The advantage here is that we simply need to learn the correct weights, and thus we have fewer parameters to learn than in standard Q-learning, where we have to learn a table of Q-values for every single state-action pair. The disadvantage is that the quality of the Q-values is highly dependent on the quality of the functions. The update of the weights based on new information is:
\[ w_i \leftarrow w_i + \alpha f_i(s,a) [R(s, a, s') + \gamma \max_{a'} Q(s', a') - Q(s, a)] \]
\end{enumerate}

\noindent Algorithmic parameters:
\begin{itemize}
\item $\alpha$: learning rate (determines the integration of new information into current estimates)
\item $\gamma$: discount rate (determines how quickly rewards decay with time)
\end{itemize}

\newpage

\section*{Practice Problems}

\begin{enumerate}
\item Suppose we are Q-learning with $\epsilon$-greedy action selection, starting with $\epsilon = 0.2$. When is it a good idea to...
\begin{itemize}
\item keep $\epsilon$ constant over time?
\item decrease $\epsilon$ to 0 over time?
\item increase $\epsilon$ over time?
\end{itemize}
\ifsol
\textcolor{blue}{It is generally good to decrease $\epsilon$ to 0 with time, especially when using on-policy learning methods. The reason is that as the agent learns the actual optimal policy for the world, it should switch from a mix of exploration and exploitation to mostly exploitation.
However, if the world is changing, we may leave $\epsilon$ high so that the agent continues to explore. It never makes sense to continuously increase $\epsilon$, as this counteracts convergence.}
\else
\vspace{7em}
\fi

\item Suppose we are Q-learning. When is it a good idea to...
\begin{itemize}
\item set $\alpha = 0$?
\item set $\alpha = 1$?
\item set $\alpha = 0.2$ and decrease it to 0 over time?
\end{itemize}
\ifsol
\textcolor{blue}{If the learning rate $\alpha$ is set to 0, we will not make any updates to the value function, so this is never a good idea. When $\alpha$ is set to 1, we will be fully overriding prior value estimates with the new update. While this is generally undesirable, it is actually optimal for a fully deterministic MDP. The third option -- setting $\alpha$ to a low value and decreasing it over time -- is the most reasonable approach.}
\else
\vspace{7em}
\fi

\item What would the returned value assignments be for Direct Evaluation over a deterministic MDP? Can you find a simple upper bound on the number of experiments that need to be run by the algorithm for the value function to converge? What if the MDP is not deterministic?
\ifsol
\textcolor{blue}{If the MDP is deterministic, Direct Evaluation would assign the exact value of every visited state under the given policy, so the number of necessary experiments would be no larger than the total number of start states. If the MDP is not deterministic, we do not have this guarantee, since separate experiments would have different returns.}
\else
\vspace{7em}
\fi

\item When using features to represent the $Q$-function, is it guaranteed that feature-based $Q$-learning finds the same optimal $Q^*$ as would be found when using a tabular representation for the $Q$-function?
\ifsol
\textcolor{blue}{No -- the optimal Q-function $Q^*$ would depend on the choice of features. Even if the choice of features was fully informed, it may not be possible to represent the optimal $Q^*$ with a (linear) weighted combination of features.}
\else
\vspace{7em}
\fi

\item Why is temporal difference (TD) learning of Q-values (Q-learning) superior to TD learning of values?
\ifsol
\textcolor{blue}{If only values are found, it is difficult to extract a policy -- this would require knowing or calculating the transition function. If Q-values are found, the policy can be easily computed by $\pi(s) = \text{argmax}_a \; Q(s, a)$.}
\else
\vspace{7em}
\fi

\newpage

\item Recall a problem from last week's section: Consider an MDP having three states, (1, 2, 3), where State 3 is a terminal state. In States 1 and 2, there are two possible actions: $A$ and $B$. Unlike last week, we no longer know the rewards or transition model ahead of time. \\
Suppose we decide to run Q-learning on this problem with $\alpha = 0.2$ and a 0.9 discount ($\gamma = 0.9$). All Q-values are initialized to 0. While running one episode/simulation within the algorithm, we get the following $(s, a, s', r)$ samples (in order): \\
(State 1, Action B, State 1, reward = -1) \\
(State 1, Action A, State 1, reward = -1) \\
(State 1, Action A, State 2, reward = -2) \\
(State 2, Action B, State 2, reward = -2) \\
(State 2, Action B, State 3, reward = 0) \\
Run Q-learning based on these samples. How are the Q-values updated?
\\
\ifsol
\textcolor{blue}{
Initialization:
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
 & Action A & Action B \\
State 1 & 0 & 0 \\
State 2 & 0 & 0 \\
State 3 & 0 & 0 \\
\hline
\end{tabular}
\end{center}
(State 1, Action B, State 1, reward = -1) $\rightarrow$ (0.8 * 0 + 0.2 (-1 + 0.9 max(0, 0)) = -0.2)
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
 & Action A & Action B \\
State 1 & 0 & -0.2 \\
State 2 & 0 & 0 \\
State 3 & 0 & 0 \\
\hline
\end{tabular}
\end{center}
(State 1, Action A, State 1, reward = -1) $\rightarrow$ (0.8 * 0 + 0.2 (-1 + 0.9 max(0, -0.2)) = -0.2)
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
 & Action A & Action B \\
State 1 & -0.2 & -0.2 \\
State 2 & 0 & 0 \\
State 3 & 0 & 0 \\
\hline
\end{tabular}
\end{center}
(State 1, Action A, State 2, reward = -2) $\rightarrow$ (0.8 * (-0.2) + 0.2 (-2 + 0.9 max(0, 0)) = -0.56)
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
 & Action A & Action B \\
State 1 & -0.56 & -0.2 \\
State 2 & 0 & 0 \\
State 3 & 0 & 0 \\
\hline
\end{tabular}
\end{center}
(State 2, Action B, State 2, reward = -2) $\rightarrow$ (0.8 * 0 + 0.2 (-2 + 0.9 max(0, 0)) = -0.4)
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
 & Action A & Action B \\
State 1 & -0.56 & -0.2 \\
State 2 & 0 & -0.4 \\
State 3 & 0 & 0 \\
\hline
\end{tabular}
\end{center}
(State 2, Action B, State 3, reward = 0) $\rightarrow$ (0.8 * (-0.4) + 0.2 (0 + 0.9 max(0, 0)) = -0.32)
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
 & Action A & Action B \\
State 1 & -0.56 & -0.2 \\
State 2 & 0 & -0.32 \\
State 3 & 0 & 0 \\
\hline
\end{tabular}
\end{center}
}
\else
\vspace{28em}
\fi

How would the solution be different if we chose all actions following the optimal policy? \\
\ifsol
\textcolor{blue}{At every step, the next chosen action (from a given state) would be the one corresponding to the largest current Q-value of that state. In the samples given above, all actions follow the current optimal policy except the last sample, where Action A would have been taken instead of Action B, since $Q(\text{State 2}, \text{Action A}) > Q(\text{State 2}, \text{Action B})$.}
\else
\vspace{7em}
\fi

\newpage

\item Consider the Pacman-inspired game board below. Suppose that every action has an 80\% chance of moving in the desired direction, and a 10\% chance of moving in each of the two perpendicular directions (though we only learn this transition function by executing actions in the world). The lower left-hand square is numbered (0, 0), the start square is (3, 0), and the goal square is (0, 6). The goal of this game is for Pacman to navigate to the exit (reward of 100) without touching the dangerous explosion square at (1, 4), which terminates the game with a reward of $-100$. Pacman cannot cross the thick wall in the middle of the board; an action causing a wall collision would make Pacman stay in place.

\begin{center}
\includegraphics[scale=0.25]{grid.png}
\end{center}

Let's say we choose to solve this problem with feature-based Q-learning, with the features: \\
$f_1 =$ the Manhattan distance to the goal \\
$f_2 =$ 0 or 1 indicating whether the explosion square is in a 1-square radius of Pacman.

\begin{enumerate}
\item Which gameboard squares will be equivalent to the state defined by ($f_1 = 4$, $f_2 = 1$)? \\
\ifsol
\textcolor{blue}{Squares (1, 3) and (2, 4) both have a Manhattan distance of $4$ to the goal, and are one square away from the square $(1, 4)$.}
\else
\vspace{5em}
\fi
\item What are the advantages and disadvantages of using this feature representation?
\\
\ifsol
\textcolor{blue}{The advantage of a feature representation is that it decreases the number of states needed for the problem, so each transition informs a greater number of (grid) state values. The downside is that features may not capture important information. In this problem, if the explosion square is nearby but behind a wall, it does not pose a threat; but the feature representation equates this situation to one where there is a dangerous square and no wall.}
\else
\vspace{5em}
\fi
\end{enumerate}
\end{enumerate}

\end{document}
{"hexsha": "48104dbbf208a0696b4dc630a5d823a124cc25b9", "size": 15461, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Section_06/tex/main.tex", "max_stars_repo_name": "Harvard-CS182-F18/courseware", "max_stars_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Section_06/tex/main.tex", "max_issues_repo_name": "Harvard-CS182-F18/courseware", "max_issues_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Section_06/tex/main.tex", "max_forks_repo_name": "Harvard-CS182-F18/courseware", "max_forks_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.8262411348, "max_line_length": 710, "alphanum_fraction": 0.67162538, "num_tokens": 4407}
# coding: utf-8

# In[ ]:

#### Modules for selecting features
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sys

# random forest
from sklearn.ensemble import RandomForestRegressor
# Logistic regression
from sklearn.linear_model import LogisticRegression
# Standardization
from sklearn.preprocessing import StandardScaler

from sklearn.feature_selection import SelectKBest, f_regression, f_classif, SelectPercentile, SelectFromModel
from sklearn.model_selection import StratifiedKFold, cross_validate, cross_val_score


# Get the features whose absolute correlation with Y exceeds the criteria (regression only)
# Based on correlation coefficient
def get_important_features(X, Y, criteria=0.5):
    # drop columns that contain nulls
    nulldata = []
    for c in X.columns:
        if X[c].isnull().any():
            nulldata.append(c)
    X = X.drop(nulldata, 1)

    corr_dict = {}
    for column in X.columns:
        # test = df[df[column] > 0]
        # test = df
        corr = np.corrcoef(X[column], Y)[0][1]
        if np.isnan(corr):
            continue
        if abs(corr) <= criteria:
            continue
        corr_dict[column] = corr

    for i, v in corr_dict.items():
        print(i, round(v, 2))

    X_selected = X[list(corr_dict.keys())]
    print("X.shape={}, X_selected.shape={}".format(X.shape, X_selected.shape))
    return X_selected


# Plot the classification score against Y for increasing numbers of features
# Based on LogisticRegression
def display_class_important_features(X, Y, criteria=2):
    # drop columns that contain nulls
    nulldata = []
    for c in X.columns:
        if X[c].isnull().any():
            nulldata.append(c)
    X = X.drop(nulldata, 1)

    score_list = []
    columns = []
    for n in range(1, criteria + 1):
        print('\n numbers of X :', n)
        ## select features
        select = SelectKBest(k=n)
        select.fit(X, Y)
        mask = select.get_support()
        # print(X.columns[mask])
        diff = set(np.ravel(columns)) ^ set(X.columns[mask])
        columns.append(list(diff))

        X_selected = X.iloc[:, mask]
        pr_auc = cross_val_score(LogisticRegression(solver='lbfgs'), X_selected, Y,
                                 scoring="average_precision", cv=10)
        # store the precision
        score_list.append(np.mean(pr_auc))
        print('PR_AUC: {:.5f}'.format(score_list[n - 1]))

    print('analyzed----------')
    for index, item in enumerate(score_list):
        print('{} is {:4f}'.format(np.ravel(columns)[index], item))

    plt.figure(figsize=(10, 10))
    plt.plot(np.ravel(columns), score_list)
    plt.xticks(np.ravel(columns), rotation=90)
    plt.show()
    return


### Based on SelectKBest
# Select the best K features
def select_best_k(X, Y, criteria=5, type=f_regression):
    # remove null columns
    nulldata = []
    for c in X.columns:
        if X[c].isnull().any():
            nulldata.append(c)
    X = X.drop(nulldata, 1)

    # classify or regression ?
    if type == 'class':
        type = f_classif

    selector = SelectKBest(score_func=type, k=criteria)
    selector.fit(X, Y)
    mask = selector.get_support()
    X_selected = X.iloc[:, mask]
    print("X.shape={}, X_selected.shape={}".format(X.shape, X_selected.shape))
    return X_selected


### Based on SelectPercentile
def select_best_percentile(X, Y, criteria=40, type=f_regression):
    # remove null columns
    nulldata = []
    for c in X.columns:
        if X[c].isnull().any():
            nulldata.append(c)
    X = X.drop(nulldata, 1)

    # classify or regression ?
    if type == 'class':
        type = f_classif

    # keep the top `criteria` percent of features
    selector = SelectPercentile(score_func=type, percentile=criteria)
    selector.fit(X, Y)
    mask = selector.get_support()
    X_selected = X.iloc[:, mask]
    print("X.shape={}, X_selected.shape={}".format(X.shape, X_selected.shape))
    return X_selected


### Based on a model. Default model is RandomForestRegressor
def select_best_model(X, Y, model=RandomForestRegressor(), p_threshold='median'):
    # remove null columns
    nulldata = []
    for c in X.columns:
        if X[c].isnull().any():
            nulldata.append(c)
    X = X.drop(nulldata, 1)

    # uses RandomForestRegressor.feature_importances_
    selector = SelectFromModel(model, threshold=p_threshold)
    selector.fit(X, Y)
    mask = selector.get_support()
    X_selected = X.iloc[:, mask]
    print("X.shape={}, X_selected.shape={}".format(X.shape, X_selected.shape))
    return X_selected
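# Minimal usage sketch (illustrative only; `df` and the 'target' column name
# are hypothetical -- any pandas DataFrame with a numeric target works):
#
#   X = df.drop('target', 1)
#   Y = df['target']
#   X_corr = get_important_features(X, Y, criteria=0.5)    # |corr| filter
#   X_k    = select_best_k(X, Y, criteria=5, type='class') # top-5, ANOVA F-test
#   X_rf   = select_best_model(X, Y)                       # model importances, median threshold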
{"hexsha": "98a510fc0e35680a31090b55a19aca414465ac38", "size": 4586, "ext": "py", "lang": "Python", "max_stars_repo_path": "MY_select_features.py", "max_stars_repo_name": "igor-yusupov/autorace", "max_stars_repo_head_hexsha": "0294873a62f3dbfdf3564bb2b63e97e917be6de6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MY_select_features.py", "max_issues_repo_name": "igor-yusupov/autorace", "max_issues_repo_head_hexsha": "0294873a62f3dbfdf3564bb2b63e97e917be6de6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MY_select_features.py", "max_forks_repo_name": "igor-yusupov/autorace", "max_forks_repo_head_hexsha": "0294873a62f3dbfdf3564bb2b63e97e917be6de6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.7640449438, "max_line_length": 119, "alphanum_fraction": 0.6096816398, "include": true, "reason": "import numpy", "num_tokens": 1199}
using SpecialFunctions
import Base.Broadcast

const linearity_known_1 = IdDict{Function,Bool}()
const linearity_known_2 = IdDict{Function,Bool}()

const linearity_map_1 = IdDict{Function, Bool}()
const linearity_map_2 = IdDict{Function, Tuple{Bool, Bool, Bool}}()

# 1-arg
const monadic_linear = [deg2rad, +, rad2deg, transpose, -, conj]

const monadic_nonlinear = [asind, log1p, acsch, erfc, digamma, acos, asec, acosh, airybiprime, acsc, cscd, log, tand, log10, csch, asinh, airyai, abs2, gamma, lgamma, erfcx, bessely0, cosh, sin, cos, atan, cospi, cbrt, acosd, bessely1, acoth, erfcinv, erf, dawson, inv, acotd, airyaiprime, erfinv, trigamma, asecd, besselj1, exp, acot, sqrt, sind, sinpi, asech, log2, tan, invdigamma, airybi, exp10, sech, erfi, coth, asin, cotd, cosd, sinh, abs, besselj0, csc, tanh, secd, atand, sec, acscd, cot, exp2, expm1, atanh]

# We store 3 bools even for 1-arg functions for type stability
const three_trues = (true, true, true)

for f in monadic_linear
    linearity_known_1[f] = true
    linearity_map_1[f] = true
end

for f in monadic_nonlinear
    linearity_known_1[f] = true
    linearity_map_1[f] = false
end

# 2-arg
for f in [+, rem2pi, -, >, isless, <, isequal, max, min, convert, <=, >=]
    linearity_known_2[f] = true
    linearity_map_2[f] = (true, true, true)
end

for f in [*]
    linearity_known_2[f] = true
    linearity_map_2[f] = (true, true, false)
end

for f in [/]
    linearity_known_2[f] = true
    linearity_map_2[f] = (true, false, false)
end

for f in [\]
    linearity_known_2[f] = true
    linearity_map_2[f] = (false, true, false)
end

for f in [hypot, atan, mod, rem, lbeta, ^, beta]
    linearity_known_2[f] = true
    linearity_map_2[f] = (false, false, false)
end

haslinearity_1(@nospecialize(f)) = get(linearity_known_1, f, false)
haslinearity_2(@nospecialize(f)) = get(linearity_known_2, f, false)

linearity_1(@nospecialize(f)) = linearity_map_1[f]
linearity_2(@nospecialize(f)) = linearity_map_2[f]

# TermCombination datastructure

struct TermCombination
    terms::Set{Dict{Int, Int}} # idx => pow
end

@eval Base.one(::Type{TermCombination}) = $(TermCombination(Set([Dict{Int,Int}()])))
@eval Base.zero(::Type{TermCombination}) = $(TermCombination(Set{Dict{Int,Int}}()))

#=
function Base.:(==)(comb1::TermCombination, comb2::TermCombination)
    comb1.terms == comb2.terms && return true
    n1 = reduce(max, (k for (k,_) in Iterators.flatten(comb1.terms)), init=0)
    n2 = reduce(max, (k for (k,_) in Iterators.flatten(comb2.terms)), init=0)
    n = max(n1, n2)
    _sparse(comb1, n) == _sparse(comb2, n)
end
=#

# to make Mul and Add work
Base.:*(::Number, comb::TermCombination) = comb

function Base.:^(comb::TermCombination, ::Number)
    isone(comb) && return comb
    iszero(comb) && return _scalar
    return comb * comb
end

function Base.:+(comb1::TermCombination, comb2::TermCombination)
    if isone(comb1) && !iszero(comb2)
        return comb2
    elseif isone(comb2) && !iszero(comb1)
        return comb1
    elseif comb1 === comb2
        return comb1
    end
    TermCombination(union(comb1.terms, comb2.terms))
end
Base.:+(comb1::TermCombination) = comb1

function _merge(dict1, dict2)
    d = copy(dict1)
    for (k, v) in dict2
        d[k] = min(2, get(dict1, k, 0) + v)
    end
    d
end

function Base.:*(comb1::TermCombination, comb2::TermCombination)
    if isone(comb1)
        return comb2
    elseif isone(comb2)
        return comb1
    elseif comb1 === comb2 # squaring optimization
        terms = comb1.terms
        # turns out it's enough to track
        # a^2*b^2
        # and a^2 + b^2 + ab
        # have the same hessian sparsity
        t = Dict(k=>2 for (k,_) in Iterators.flatten(terms))
        TermCombination(Set([t]))
        #=
        # square each term
        t1 = [Dict(k=>2 for (k,_) in dict)
              for dict in comb1.terms]
        # multiply each term
        t2 = Dict{Int,Int}[]
        for i in 1:length(terms)
            for j in i+1:length(terms)
                push!(t2, _merge(terms[i], terms[j]))
            end
        end
        TermCombination(union(t1, t2))
        =#
    else
        Set([_merge(dict1, dict2)
             for dict1 in comb1.terms,
                 dict2 in comb2.terms]) |> TermCombination
    end
end
Base.:*(comb1::TermCombination) = comb1

Base.iszero(c::TermCombination) = isempty(c.terms)
Base.isone(c::TermCombination) = all(isempty, c.terms)

function _sparse(t::TermCombination, n)
    I = Int[]
    J = Int[]
    for dict in t.terms
        kv = collect(pairs(dict))
        for i in 1:length(kv)
            k, v = kv[i]
            if v >= 2
                push!(I, k)
                push!(J, k)
            end
            for j in i+1:length(kv)
                if v >= 1 && kv[j][2] >= 1
                    push!(I, k)
                    push!(J, kv[j][1])
                end
            end
        end
    end
    s1 = sparse(I, J, fill!(BitVector(undef, length(I)), true), n, n)
    s1 .| s1'
end
_sparse(x::Number, n) = sparse(Int[], Int[], Int[], n, n)

# 1-arg functions
combine_terms_1(lin, term) = lin ? term : term * term

# 2-arg functions
function combine_terms_2(linearity, term1, term2)
    linear11, linear22, linear12 = linearity
    term = zero(TermCombination)
    if linear11
        if !linear12
            term += term1
        end
    else
        term += term1 * term1
    end
    if linear22
        if !linear12
            term += term2
        end
    else
        term += term2 * term2
    end

    if linear12
        term += term1 + term2
    else
        term += term1 * term2
    end
    term
end
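# A quick illustrative check (hypothetical REPL session, not part of the file):
# `linearity_2(*)` is `(true, true, false)`, i.e. `*` is linear in each argument
# but not jointly, so combining two single-variable terms keeps the cross term:
#
#   t1 = TermCombination(Set([Dict(1 => 1)]))  # variable 1, degree 1
#   t2 = TermCombination(Set([Dict(2 => 1)]))  # variable 2, degree 1
#   combine_terms_2(linearity_2(*), t1, t2)    # == t1 + t2 + t1*t2
#
# and `_sparse(t1 + t2 + t1*t2, 2)` yields a pattern with a nonzero at (1, 2).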
{"hexsha": "09dde1ec762dc0370d6d2a2244fea80023a97d75", "size": 5630, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/linearity.jl", "max_stars_repo_name": "sharanry/Symbolics.jl", "max_stars_repo_head_hexsha": "eeee4366850459b929b46c438a7d6f63e027b4ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 907, "max_stars_repo_stars_event_min_datetime": "2021-01-22T02:36:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T23:13:39.000Z", "max_issues_repo_path": "src/linearity.jl", "max_issues_repo_name": "sharanry/Symbolics.jl", "max_issues_repo_head_hexsha": "eeee4366850459b929b46c438a7d6f63e027b4ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 472, "max_issues_repo_issues_event_min_datetime": "2021-01-21T14:18:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T21:10:18.000Z", "max_forks_repo_path": "src/linearity.jl", "max_forks_repo_name": "sharanry/Symbolics.jl", "max_forks_repo_head_hexsha": "eeee4366850459b929b46c438a7d6f63e027b4ca", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 83, "max_forks_repo_forks_event_min_datetime": "2021-01-25T14:09:17.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-28T19:37:30.000Z", "avg_line_length": 27.7339901478, "max_line_length": 517, "alphanum_fraction": 0.6110124334, "num_tokens": 1834}
import csv
import serial
import time
import numpy

z1baudrate = 115200
z1port = 'COM3'  # set the correct port before running it
b = 0.00

z1serial = serial.Serial(port=z1port, baudrate=z1baudrate)
z1serial.timeout = 2  # set read timeout
# print z1serial  # debug serial.
print(z1serial.is_open)  # True for opened

if z1serial.is_open:
    while True:
        size = z1serial.inWaiting()
        if size:
            data = z1serial.read(size)
            a = data.decode()
            if a[7] == '-':
                s = (a[7] + a[8] + a[9] + a[10] + a[11])
                b = float(s)
                print(b)
                c = (a[20] + a[21] + a[22] + a[23] + a[24])
                d = float(c)
                g = 29.0
                if b / g <= -1:
                    print("PROBABLE ACCIDENT")
                    exit(0)
            else:
                s = (a[7] + a[8] + a[9] + a[10])
                b = float(s)
                print(b)
                c = (a[20] + a[21] + a[22] + a[23])
                d = float(c)
                h = 10
                if d / h >= 1:
                    print("PROBABLE ACCIDENT")
            print(data)
        else:
            print('no data')
        time.sleep(1)
else:
    print('z1serial not open')

# z1serial.close()  # close z1serial if z1serial is open.
{"hexsha": "79451dcde0249e930f89c984e48640fd17eba3af", "size": 1312, "ext": "py", "lang": "Python", "max_stars_repo_path": "Accelerometer_and_circular_store/seri.py", "max_stars_repo_name": "PremSuresh/Udaya-bon", "max_stars_repo_head_hexsha": "27298512e33815a08807896e8743b08ad4e09355", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2022-02-27T18:45:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T05:24:56.000Z", "max_issues_repo_path": "Accelerometer_and_circular_store/seri.py", "max_issues_repo_name": "PremSuresh/Udaya-bon", "max_issues_repo_head_hexsha": "27298512e33815a08807896e8743b08ad4e09355", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Accelerometer_and_circular_store/seri.py", "max_forks_repo_name": "PremSuresh/Udaya-bon", "max_forks_repo_head_hexsha": "27298512e33815a08807896e8743b08ad4e09355", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.7755102041, "max_line_length": 58, "alphanum_fraction": 0.4489329268, "include": true, "reason": "import numpy", "num_tokens": 378}
import pandas as pd
import numpy as np
from PIL import Image
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.grid_search import GridSearchCV
from skimage.transform import resize
from sklearn.svm import SVC
import pickle

no_of_training_data = 6500
train_data = pd.read_csv('labels.csv')

# getting the id's of the images so that we can loop over them
ids = []
ids = train_data['id'].values

# converting the names of the dog breeds into numbers
label_encoder = LabelEncoder().fit(train_data['breed'])
train_y = label_encoder.transform(train_data['breed'])
y_train = train_y[0:no_of_training_data]

X_train = []
i = 0
while i < no_of_training_data:
    img_name = 'train/' + ids[i] + '.jpg'
    im = Image.open(img_name)
    im_grey = im.convert('L')  # convert the image to *greyscale*
    img = im_grey.resize((200, 200), Image.ANTIALIAS)
    im_array = np.array(img).flatten()
    X_train.append(im_array)
    print('image: ', i)
    i = i + 1

'''
X_test = []
i = 100
while i <= 199:
    img_name = 'train/' + ids[i] + '.jpg'
    im = Image.open(img_name)
    im_grey = im.convert('L')  # convert the image to *greyscale*
    img = im_grey.resize((200, 200), Image.ANTIALIAS)
    im_array = np.array(img).flatten()
    X_test.append(im_array)
    print('image: ', i)
    i = i + 1
'''

# using support vector machines
print('using support vector machines : ')
Cs = [0.001]
gammas = [0.001]
param_grid = {'C': Cs, 'gamma': gammas}
model = GridSearchCV(SVC(kernel='rbf', probability=True), param_grid, cv=2)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.grid_scores_)

filename = 'model_4.pickle'
pickle.dump(model, open(filename, 'wb'))

# print(y_test)
{"hexsha": "6607311c39ba116f8b481cf24de4f3a6fbc4a723", "size": 1698, "ext": "py", "lang": "Python", "max_stars_repo_path": "prog.py", "max_stars_repo_name": "adibyte95/Dog-Breed-Identification-Kaggle", "max_stars_repo_head_hexsha": "1ac111237bd0c681b5a2127edf783061be601447", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "prog.py", "max_issues_repo_name": "adibyte95/Dog-Breed-Identification-Kaggle", "max_issues_repo_head_hexsha": "1ac111237bd0c681b5a2127edf783061be601447", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "prog.py", "max_forks_repo_name": "adibyte95/Dog-Breed-Identification-Kaggle", "max_forks_repo_head_hexsha": "1ac111237bd0c681b5a2127edf783061be601447", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.1230769231, "max_line_length": 77, "alphanum_fraction": 0.6949352179, "include": true, "reason": "import numpy", "num_tokens": 451}
############## CELL CYCLE DISTRIBUTION ESTIMATION FROM DAPI INTENSITIES ################
#
# licensed under the MIT License:
#
# Copyright (c) 2016 Andreas Stengl, David Hoerl, Heinrich Leonhardt and Jonas Helma
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
# TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
# IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
# OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
########################################################################################

library(GenSA)

################################## Function Defs ###############################

# sum of three Gaussian functions
threegauss = function(x, params) {
  res = (params[3]*exp(-1/2*(x-params[1])^2/params[2]^2)
         + params[6]*exp(-1/2*(x-params[4])^2/params[5]^2)
         + params[9]*exp(-1/2*(x-params[7])^2/params[8]^2))
  res
}

# single Gaussian
agauss = function(x, params) {
  res = (params[3]*exp(-1/2*(x-params[1])^2/params[2]^2))
  res
}

# plateau with Gaussian fade-out
gaussline = function(x, params){
  res = c()
  for(xi in x){
    if (xi < params[1]){
      y = (params[3]*exp(-1/2*(xi-params[1])^2/params[4]^2))
    } else if (xi > params[2]) {
      y = (params[3]*exp(-1/2*(xi-params[2])^2/params[4]^2))
    } else{
      y = (params[3])
    }
    res = c(res, y)
  }
  res
}

# sum of two Gaussian peaks plus a constant plateau with Gaussian fade-out
twogaussplusbroadendgauss = function(x, params) {
  res = (params[3]*exp(-1/2*(x-params[1])^2/params[2]^2)
         + params[6]*exp(-1/2*(x-params[4])^2/params[5]^2)
         + gaussline(x, params[7:10]))
  res
}

# guess plateau height as minimum of vals between indices x1 and x2
guessplateau = function(vals, x1, x2) {
  pg = vals[x1-1 + which.min(vals[c(x1):c(x2)])]
  dif = vals[x2] - pg
  # x2 is the minimum, take a slightly lower value as the guess
  if(dif <= 0){
    pg = vals[x2]*0.99999999
  }
  pg
}

# integrate peak area of G1, S, G2 and sum of them
integratepeaks = function(xs, params){
  G1 = integrate(function(x) {agauss(x, params[1:3])}, lower=min(xs), upper=max(xs))
  G2 = integrate(function(x) {agauss(x, params[4:6])}, lower=min(xs), upper=max(xs))
  S = integrate(function(x) {gaussline(x, params[7:10])}, lower=min(xs), upper=max(xs))
  Pop = G1$value + S$value + G2$value
  res = c(G1$value, S$value, G2$value, Pop)
  res
}

################ Fitting script #########################

# for saving the results of multiple runs
# do not re-initialize every time
curveareas = data.frame()

########### START ##################

## 1) the integrated DAPI intensities per cell should be assigned to DAPIIntensities
DAPIIntensities = c()

## in our case:
# platerow = 3
# platecolumn = 3
# DAPIIntensities = subset(SKBR3, Row %in% platerow & Column == platecolumn)$IDapiAbs

## 2) get xs and ys
# xs = histogram bin mids
# ys = density curve
h = hist(DAPIIntensities, breaks = 100, probability = T)
xs = h$mids
ys = h$density

# 3) Guess parameters

# Guesses for peak positions
xG1 = xs[which.max(smooth(ys))]
xp1g = which.max(ys)
pgxg = xp1g - 1 + which.min(ys[xp1g:(which.max(ys)*2.5)])
xp2g = pgxg - 1 + which.max(ys[pgxg:100])
xG2 = xs[xp2g]

# Guesses for peak and plateau heights
peak1guess = ys[which.max(smooth(ys))]
plateauguess <- guessplateau(ys, xp1g, xp2g)
peak2guess = ys[xp2g]

# Guess for all 10 params
guess = c(xG1, xG1/6, (peak1guess-plateauguess),
          xG2, xG1/6, (peak2guess-plateauguess),
          xG1, xG2, plateauguess, xG1/6)

# lower and upper bounds
guessupper = guess + guess*c(0.5, 0.9, 0.05, 0.1, 0.5, 0.5, 0.25, 0.15, 0.1, 0.9)
guesslower = guess - guess*c(0.5, 0.1, 0.3, 0.05, 0.5, 0.1, 0.5, 0.05, 0.9, 0.1)

## 4) do optimization
res = GenSA(NULL, function(guess) {sum((ys - twogaussplusbroadendgauss(xs, guess))^2)},
            upper = guessupper, lower = guesslower)

## 5) plot histogram + fitted curves
plot(xs, ys, type = "h")
lines(xs, agauss(xs, res$par[1:3]), col="red")
lines(xs, agauss(xs, res$par[4:6]), col="green")
lines(xs, gaussline(xs, res$par[7:10]), col="blue")
lines(xs, twogaussplusbroadendgauss(xs, res$par), col="cyan")

# plot guesses
abline(v = xG1, col='red')
abline(v = xG2, col='green')
abline(h = peak1guess, col='red')
abline(h = plateauguess, col='blue')
abline(h = peak2guess, col='green')

## 6) append resulting areas under curves to accumulated results

### name for the condition of the histogram
# e.g. condition = '16nM Trastuzumab'
curveareas = rbind.data.frame(curveareas,
                              c(condition, as.numeric(integratepeaks(xs, res$par))),
                              stringsAsFactors = FALSE)
colnames(curveareas) = c('condition', 'G1', 'S', 'G2', 'total')

################ Accumulated results plot/saving ###################

# normalize to area=1
normIntegrals = apply(curveareas[,2:5], 2, function(x){as.numeric(as.character(x))}) / as.numeric(as.character(curveareas[,5]))

# plot stacked bars
barplot(t(normIntegrals[,1:3]), names.arg = curveareas$condition)

# save cell cycle fractions
write.table(normIntegrals, "Z:/normIntegrals.txt", sep="\t")
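## Quick sanity check of the model functions with synthetic parameters
## (the values below are illustrative only, not fitted to real data):
# xs.demo = seq(0, 10, length.out = 200)
# ys.demo = twogaussplusbroadendgauss(xs.demo,
#             c(3, 0.3, 1.0,     # G1 peak: position, width, height
#               6, 0.3, 0.5,     # G2 peak: position, width, height
#               3, 6, 0.2, 0.3)) # S plateau: start, end, height, fade-out width
# plot(xs.demo, ys.demo, type = "l")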
{"hexsha": "be6155aa27463d1cbe2c160efa021791db7ebe92", "size": 6082, "ext": "r", "lang": "R", "max_stars_repo_path": "DAPI_CellCycle_Fit.r", "max_stars_repo_name": "hoerldavid/CellCycleFit", "max_stars_repo_head_hexsha": "17a55ded5f7aaade8a2fba8a619bf099ee3d03ac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "DAPI_CellCycle_Fit.r", "max_issues_repo_name": "hoerldavid/CellCycleFit", "max_issues_repo_head_hexsha": "17a55ded5f7aaade8a2fba8a619bf099ee3d03ac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DAPI_CellCycle_Fit.r", "max_forks_repo_name": "hoerldavid/CellCycleFit", "max_forks_repo_head_hexsha": "17a55ded5f7aaade8a2fba8a619bf099ee3d03ac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.5129533679, "max_line_length": 122, "alphanum_fraction": 0.6251233147, "num_tokens": 1876}
import torch
print("PyTorch Version: ", torch.__version__)

import torch.nn as nn
import torch.optim as optim
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os, glob, shutil
import copy
print("PyTorch Version: ", torch.__version__)
print("Torchvision Version: ", torchvision.__version__)
from PIL import Image

data_dir = "/home/ars/disk/chaoyuan/dataset/汽车分类/颜色/car_color_dataset/val"
classes = os.listdir(data_dir)
classes.sort()
imgs = [data_dir + '/' + i for i in os.listdir(data_dir)]
model_path = 'model.pkl'

###########################
model_name = 'resnet'
if model_name == 'resnet':
    input_size = 224

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(input_size),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(input_size),
        transforms.CenterCrop(input_size),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}
transform = data_transforms['val']

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

###########################
model = torch.load(model_path)
model.to(device)
model.eval()

###########################
os.system('killall display')


def iter_data():
    for cls in os.listdir(data_dir):
        cls_dir = data_dir + '/' + cls
        for f in glob.glob(cls_dir + '/*.jpg'):
            yield f, cls


correct = 0
total = 0
for i, (f, cls) in enumerate(iter_data()):
    im = Image.open(f)
    im2 = im
    im = transform(im).float()
    im = torch.tensor(im, requires_grad=False)
    im = im.unsqueeze(0)
    im = im.to(device)
    y = model(im)
    y = torch.argmax(y).cpu().int()
    y = int(y)
    # print(y, len(classes))
    pred = classes[y]
    if pred == cls:
        correct += 1
    total += 1
    # print(cls == pred)
    print(i, f, y, pred, cls)

accuracy = correct / total
print('total: %s , accuracy : %s' % (total, accuracy))
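# Expected dataset layout, inferred from iter_data() (the class names below
# are hypothetical examples):
#
#   car_color_dataset/val/
#       black/*.jpg
#       white/*.jpg
#       red/*.jpg
#
# `classes` is the sorted list of sub-directory names, and the model's output
# index is assumed to match that sorted order.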
{"hexsha": "7d01ac43d153e634b4a5c53ae42068aeb786d0ab", "size": 2103, "ext": "py", "lang": "Python", "max_stars_repo_path": "wpkit/cv/examples/torch/resnet/val.py", "max_stars_repo_name": "Peiiii/wpkit", "max_stars_repo_head_hexsha": "23a07548be766b559b80e3114ecc24e3f2f65ea5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "wpkit/cv/examples/torch/resnet/val.py", "max_issues_repo_name": "Peiiii/wpkit", "max_issues_repo_head_hexsha": "23a07548be766b559b80e3114ecc24e3f2f65ea5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "wpkit/cv/examples/torch/resnet/val.py", "max_forks_repo_name": "Peiiii/wpkit", "max_forks_repo_head_hexsha": "23a07548be766b559b80e3114ecc24e3f2f65ea5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.962962963, "max_line_length": 74, "alphanum_fraction": 0.645744175, "include": true, "reason": "import numpy", "num_tokens": 551}
from scipy.stats import pearsonr


def calculate_corr(seq_i, seq_j):
    if len(seq_i) >= len(seq_j):
        longer_signal = seq_i
        shorter_signal = seq_j
    else:
        longer_signal = seq_j
        shorter_signal = seq_i

    LD = len(longer_signal)
    LK = len(shorter_signal)

    corr = []

    # partial overlaps: tail of the shorter signal against the head of the longer
    for a in range(2, LK):
        x = shorter_signal[-a:]
        y = longer_signal[:a]
        corr.append(pearsonr(x, y)[0])
    # zero out the first 9 correlations (overlaps too short to be meaningful)
    corr[0:9] = [0]*9

    # full-overlap positions of the shorter signal inside the longer one
    if LD != LK:
        for b in range(abs(LD-LK)):
            x = shorter_signal
            y = longer_signal[b:b+LK]
            corr.append(pearsonr(x, y)[0])

    # partial overlaps: head of the shorter signal against the tail of the longer
    for c in range(2, LK):
        x = shorter_signal[0:len(shorter_signal)-c+1]
        y = longer_signal[-LK+c-1:]
        corr.append(pearsonr(x, y)[0])
    # zero out the last 9 correlations as well
    corr[-9:] = [0]*9

    return max(corr), corr.index(max(corr))
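# Minimal illustrative check (hypothetical data): a fragment cut out of a
# signal should correlate best at the offset it was cut from.
#
#   import numpy as np
#   sig = np.sin(np.linspace(0, 20, 300))
#   frag = sig[80:200]
#   best_corr, best_pos = calculate_corr(list(sig), list(frag))
#   # best_corr is expected to be ~1.0 at the matching alignment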
{"hexsha": "09f430e900755408792fc5d662270e50b9c6afad", "size": 856, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/helpers/calculate_corr.py", "max_stars_repo_name": "knalecz/tsp_assembly", "max_stars_repo_head_hexsha": "aa723c2ff6d2859e0aa77976487b8d19302021e9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/helpers/calculate_corr.py", "max_issues_repo_name": "knalecz/tsp_assembly", "max_issues_repo_head_hexsha": "aa723c2ff6d2859e0aa77976487b8d19302021e9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/helpers/calculate_corr.py", "max_forks_repo_name": "knalecz/tsp_assembly", "max_forks_repo_head_hexsha": "aa723c2ff6d2859e0aa77976487b8d19302021e9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.9487179487, "max_line_length": 51, "alphanum_fraction": 0.546728972, "include": true, "reason": "from scipy", "num_tokens": 259}
using PowerSystems

cost = VariableCost([(1.0, 1.0), (2.0, 1.1), (3.0, 1.2)])

slopes = get_slopes(cost)
res = [1.0, 10.0, 10.0]
for (ix, v) in enumerate(slopes)
    @test isapprox(v, res[ix])
end

bps = get_breakpoint_upperbounds(cost)
res = [1.0, 0.1, 0.1]
for (ix, v) in enumerate(bps)
    @test isapprox(v, res[ix])
end
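# How the expected values arise (arithmetic only; the (cost, breakpoint)
# reading of the tuples is inferred from the expected results, not from docs):
#   slopes: 1.0/1.0 = 1.0, (2.0-1.0)/(1.1-1.0) = 10.0, (3.0-2.0)/(1.2-1.1) = 10.0
#   breakpoint upper bounds: 1.0, then the increments 1.1-1.0 = 0.1 and 1.2-1.1 = 0.1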
{"hexsha": "6958491cff2b93b6c474975dc455a5d147482fe3", "size": 322, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "test/test_cost_functions.jl", "max_stars_repo_name": "Nongchao/PowerSystems.jl", "max_stars_repo_head_hexsha": "0d7e74e71dc8957e3bf5f27846ec22d22ece7172", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-08-29T04:22:46.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-29T04:22:46.000Z", "max_issues_repo_path": "test/test_cost_functions.jl", "max_issues_repo_name": "Nongchao/PowerSystems.jl", "max_issues_repo_head_hexsha": "0d7e74e71dc8957e3bf5f27846ec22d22ece7172", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "test/test_cost_functions.jl", "max_forks_repo_name": "Nongchao/PowerSystems.jl", "max_forks_repo_head_hexsha": "0d7e74e71dc8957e3bf5f27846ec22d22ece7172", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.0, "max_line_length": 57, "alphanum_fraction": 0.6304347826, "num_tokens": 138}
% !TeX root = FoodFile.tex % Content Begins \begin{menu}{January} \begin{recipelist} {\scriptsize[1-2]} Spiced Chicken\\ {\scriptsize[3-4]} Tagliatelle and Mushroom Sauce\\ {\scriptsize[5-6]} Chick Pea and Tomato Curry\\ {\scriptsize[7]} Curry and Couscous\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Pasta and Bacon in White Sauce\\ {\scriptsize[10-11]} Stir Fried Egg Rice\\ {\scriptsize[12-13]} Tofu and Peppers\\ {\scriptsize[14]} Bread and Mackerel\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 6 cloves garlic {\scriptsize[1-2, 5-6]}\\ 60 g ginger {\scriptsize[1-2]}\\ 2 green chilli {\scriptsize[5-6]}\\ 150 g green pepper {\scriptsize[1-2]}\\ 700 g mushrooms {\scriptsize[3-4]}\\ 1200 g onions {\scriptsize[1-2, 3-4, 5-6]}\\ 2 red chilli {\scriptsize[1-2]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 1 tbsp black bean sauce {\scriptsize[12-13]}\\ 400 g bread {\scriptsize[14]}\\ 400 g can chick peas {\scriptsize[5-6]}\\ 400 g can curry {\scriptsize[7]}\\ 400 g can tomatoes {\scriptsize[5-6]}\\ 600 g cous cous {\scriptsize[7, 12-13]}\\ 400 g frozen peas {\scriptsize[10-11]}\\ 400 g pasta shapes {\scriptsize[8-9]}\\ 2 tbsp plain flour {\scriptsize[8-9]}\\ 100 g raisins {\scriptsize[1-2, 5-6]}\\ 300 g tofu {\scriptsize[12-13]}\\ 1200 g white rice {\scriptsize[1-2, 5-6, 10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 200 g bacon {\scriptsize[8-9]}\\ 600 g chicken {\scriptsize[1-2]}\\ 400 g smoked mackerel {\scriptsize[14]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 20 g butter {\scriptsize[14]}\\ 200 g cheddar cheese {\scriptsize[3-4]}\\ 4 eggs {\scriptsize[10-11]}\\ 400 ml milk {\scriptsize[8-9]}\\ 50 g parmesan cheese {\scriptsize[3-4]}\\ 140 ml single cream {\scriptsize[3-4]}\\ 150 g yoghurt {\scriptsize[1-2]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 1 bay leaf \\ 1 tsp garam masala \\ 1 pinch ground cinnamon \\ 1 pinch ground cloves \\ 2 tsp ground coriander \\ 2 tsp ground cumin \\ 1 pinch ground ginger \\ 2 tsp ground nutmeg \\ 5 tsp ground turmeric \\ 18 tbsp oil \\ 1 tsp paprika \\ 1 pinch pepper \\ 2 tbsp sherry \\ soy sauce \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 200 g celery {\scriptsize[8-9]}\\ 1 chillis {\scriptsize[10-11]}\\ 1 green chilli {\scriptsize[12-13]}\\ 400 g leeks {\scriptsize[10-11]}\\ 200 g mushrooms {\scriptsize[8-9]}\\ 1050 g onions {\scriptsize[8-9, 10-11, 10-11]}\\ 1 red chilli {\scriptsize[12-13]}\\ 300 g red pepper {\scriptsize[8-9, 12-13]}\\ 50 g spring onions {\scriptsize[10-11]}\\ 400 g tomatoes {\scriptsize[12-13]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Spiced Chicken}% Calories: 2655 \begin{ingredients} 400 g white rice \\ 1 tsp ground turmeric \\ 50 g raisins \\ 600 g chicken (diced) \\ 2 cloves garlic (crushed) \\ 60 g ginger (chopped) \\ 2 tsp ground turmeric \\ 2 tbsp oil \\ 2 tbsp oil \\ 400 g onions (chopped) \\ 150 g green pepper (chopped) \\ 1 tsp ground cumin \\ 1 tsp ground coriander \\ 2 red chilli (chopped) \\ 1 bay leaf \\ 150 g yoghurt \\ \end{ingredients} \index{Meat!Spiced Chicken} \begin{instructions} \item In a sauce pan place 400~g white rice, 1~tsp ground turmeric and 50~g raisins pour on 800~ml cold water, bring to the boil then turn off, cover and stand for 25 minutes. 
\item In a casserole dish mix 600~g diced chicken, 2~cloves crushed garlic, 60~g chopped ginger, 2~tsp ground turmeric and 2~tbsp oil and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) minutes. \item In a large wok fry (in 2~tbsp oil) 400~g chopped onions and 150~g chopped green pepper until soft. \item Stir in 1~tsp ground cumin, 1~tsp ground coriander, 2 chopped red chilli and 1 bay leaf and fry for 1 minute. \item Take off heat and stir in 150~g yoghurt and 55~ml cold water. Pour over the chicken and bake, covered (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 15 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Tagliatelle and Mushroom Sauce}% Calories: 1742 \begin{ingredients} 400 g onions (chopped) \\ 3 tbsp oil \\ 700 g mushrooms (chopped) \\ 2 tbsp sherry \\ 2 tsp ground nutmeg \\ 140 ml single cream \\ 50 g parmesan cheese (grated) \\ 200 g cheddar cheese (grated) \\ \end{ingredients} \index{Vegetarian!Tagliatelle and Mushroom Sauce} \begin{instructions} \item Cook and drain the tagliatelle. \item In a large wok fry 400~g chopped onions in 3~tbsp oil until soft. \item Stir in 700~g chopped mushrooms, 2~tbsp sherry, 200~ml cold water and 2~tsp ground nutmeg, cover and simmer for 5 minutes. \item Stir in 140~ml single cream and the drained pasta and warm through. \item Sprinkle with 50~g grated parmesan cheese and 200~g grated cheddar cheese. \end{instructions} \end{recipe}% \begin{recipe}{2}{Chick Pea and Tomato Curry}% Calories: 1684 \begin{ingredients} 400 g onions (chopped) \\ 4 cloves garlic (crushed) \\ 2 tbsp oil \\ 2 green chilli (chopped) \\ 1 tsp ground turmeric \\ 1 tsp paprika \\ 1 tbsp ground cumin \\ 1 tbsp ground coriander \\ 1 tsp garam masala \\ 400 g can tomatoes (chopped) \\ 400 g can chick peas (undrained) \\ 400 g white rice \\ 1 tsp ground turmeric \\ 50 g raisins \\ \end{ingredients} \index{Vegan!Chick Pea and Tomato Curry} \index{Curry!Chick Pea and Tomato Curry} \begin{instructions} \item In a large wok fry 400~g chopped onions and 4~cloves crushed garlic in 2~tbsp oil until soft. \item Stir in 2 chopped green chilli, 1~tsp ground turmeric, 1~tsp paprika, 1~tbsp ground cumin, 1~tbsp ground coriander, 1~tsp garam masala and fry for 1 minute. \item Stir in 400~g chopped can tomatoes and 400~g undrained can chick peas and simmer for 20 minutes. \item In a sauce pan place 400~g white rice, 1~tsp ground turmeric and 50~g raisins pour on 800~ml cold water, bring to the boil then turn off, cover and stand for 25 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Curry and Couscous}% Calories: 712 \begin{ingredients} 400 g can curry \\ 200 g cous cous \\ \end{ingredients} \index{Quick!Curry and Couscous} \index{Curry!Curry and Couscous} \begin{instructions} \item In a sauce pan warm 400~g can curry. \item In a sauce pan place 200~g cous cous pour on 300~ml boiling water, and cover, leave to stand for 10 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Pasta and Bacon in White Sauce}% Calories: 2364 \begin{ingredients} 400 g pasta shapes \\ 350 g onions (chopped) \\ 200 g celery (chopped) \\ 150 g red pepper (chopped) \\ 200 g mushrooms (sliced) \\ 200 g bacon (chopped) \\ 2 tbsp oil \\ 2 tbsp plain flour \\ 400 ml milk \\ \end{ingredients} \index{Meat!Pasta and Bacon in White Sauce} \begin{instructions} \item In a sauce pan place 400~g pasta shapes pour on 800~ml boiling water, cover and simmer for 12 minutes, then drain. 
\item In a large wok fry 350~g chopped onions, 200~g chopped celery, 150~g chopped red pepper, 200~g sliced mushrooms and 200~g chopped bacon in 2~tbsp oil until onions soft. \item Stir in 2~tbsp plain flour until completely absorbed. \item Stir in 400~ml milk and stir until sauce thickens. \end{instructions} \end{recipe}% \begin{recipe}{2}{Stir Fried Egg Rice}% Calories: 1929 \begin{ingredients} 400 g white rice \\ 4 tbsp oil \\ 300 g onions (chopped) \\ 1 chillis (chopped) \\ 400 g onions (chopped) \\ 400 g leeks (chopped) \\ 400 g frozen peas \\ 1 pinch pepper \\ 1 pinch ground cinnamon \\ 1 pinch ground ginger \\ 1 pinch ground cloves \\ 4 eggs (beaten) \\ 50 g spring onions (chopped) \\ soy sauce \\ \end{ingredients} \index{Vegetarian!Stir Fried Egg Rice} \begin{instructions} \item In a sauce pan place 400~g white rice pour on 800~ml cold water, bring to the boil then turn off, cover and stand for 25 minutes. \item In a large wok fry (in 4~tbsp oil) 300~g chopped onions, and 1 chopped chillis until soft. \item Stir in 400~g chopped onions and 400~g chopped leeks and cook until the onions caramelize. \item Stir in the rice, 400~g frozen peas, 1~pinch pepper, 1~pinch ground cinnamon, 1~pinch ground ginger and 1~pinch ground cloves and warm through. \item Stir in 4 beaten eggs and cook until the egg sets. \item Sprinkle with 50~g chopped spring onions and serve with soy sauce. \end{instructions} \end{recipe}% \begin{recipe}{2}{Tofu and Peppers}% Calories: 1536 \begin{ingredients} 400 g cous cous \\ 1 red chilli (chopped) \\ 1 green chilli (chopped) \\ 150 g red pepper (sliced) \\ 400 g tomatoes (chopped) \\ 3 tbsp oil \\ 300 g tofu (cubed) \\ 1 tbsp black bean sauce \\ \end{ingredients} \index{Vegan!Tofu and Peppers} \begin{instructions} \item In a sauce pan place 400~g cous cous pour on 600~ml boiling water, and cover, leave to stand for 10 minutes. \item In a large wok fry 1 chopped red chilli, 1 chopped green chilli, 150~g sliced red pepper, 400~g chopped tomatoes in 3~tbsp oil until soft. \item Stir in 300~g cubed tofu, 1~tbsp black bean sauce and warm through. \end{instructions} \end{recipe}% \begin{recipe}{1}{Bread and Mackerel}% Calories: 2255 \begin{ingredients} 400 g smoked mackerel \\ 400 g bread (sliced) \\ 20 g butter \\ \end{ingredients} \index{Quick!Bread and Mackerel} \begin{instructions} \item Grill 400~g smoked mackerel and serve with 400~g sliced bread and 20~g butter. 
\end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{February} \begin{recipelist} {\scriptsize[1-2]} Chicken with Cabbage and Peanuts\\ {\scriptsize[3-4]} Pea Risotto\\ {\scriptsize[5-6]} Tofu and Tahini\\ {\scriptsize[7]} Pasta and Stir in Sauce\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Spaghetti and Pasta Sauce\\ {\scriptsize[10-11]} Vegetable Chilli Stew\\ {\scriptsize[12-13]} Ethiopian Peanut Butter Curry\\ {\scriptsize[14]} Puttanesca Spaghetti\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 400 g apples {\scriptsize[5-6]}\\ 1200 g baking potatoes {\scriptsize[5-6]}\\ 750 g cabbage {\scriptsize[1-2]}\\ 200 g celery {\scriptsize[5-6]}\\ 2 cloves garlic {\scriptsize[5-6]}\\ 150 g green pepper {\scriptsize[5-6]}\\ 750 g onions {\scriptsize[3-4, 5-6]}\\ 2 red chilli {\scriptsize[1-2]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 100 g anchovy fillets {\scriptsize[14]}\\ 1 tbsp black bean sauce {\scriptsize[1-2]}\\ 400 g brown rice {\scriptsize[12-13]}\\ 400 g can cannellini beans {\scriptsize[10-11]}\\ 400 g can kidney beans {\scriptsize[10-11]}\\ 100 g can pineapple {\scriptsize[5-6]}\\ 800 g can sweetcorn {\scriptsize[3-4]}\\ 800 g can tomatoes {\scriptsize[10-11, 14]}\\ 50 g capers {\scriptsize[14]}\\ 800 g frozen peas {\scriptsize[3-4]}\\ 400 g noodles {\scriptsize[1-2]}\\ 100 g olives {\scriptsize[14]}\\ 500 g passata {\scriptsize[10-11]}\\ 450 g pasta sauce {\scriptsize[8-9]}\\ 200 g pasta shapes {\scriptsize[7]}\\ 350 g peanut butter {\scriptsize[12-13]}\\ 70 g peanuts {\scriptsize[1-2]}\\ 600 g spaghetti {\scriptsize[8-9, 14]}\\ 150 g stir in pasta sauce {\scriptsize[7]}\\ 4 tbsp tahini {\scriptsize[5-6]}\\ 300 g tofu {\scriptsize[5-6]}\\ 450 g tomato puree {\scriptsize[10-11, 12-13, 14]}\\ 400 g white rice {\scriptsize[3-4]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 300 g chicken {\scriptsize[1-2]}\\ 400 g minced meat {\scriptsize[8-9]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 80 g butter {\scriptsize[3-4]}\\ 50 g parmesan cheese {\scriptsize[3-4]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 1 tbsp curry powder \\ 1 tsp ground nutmeg \\ 2 tsp miso \\ 15 tbsp oil \\ 150 ml sherry \\ 1 tbsp soy sauce \\ 5 cube stock \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 7 cloves garlic {\scriptsize[8-9, 10-11, 14]}\\ 1 green chilli {\scriptsize[10-11]}\\ 150 g green pepper {\scriptsize[10-11]}\\ 100 g mushrooms {\scriptsize[8-9]}\\ 800 g onions {\scriptsize[8-9, 10-11, 12-13]}\\ 1000 g potatoes {\scriptsize[10-11]}\\ 2 red chilli {\scriptsize[10-11, 14]}\\ 150 g red pepper {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Chicken with Cabbage and Peanuts}% Calories: 1757 \begin{ingredients} 400 g noodles \\ 70 g peanuts \\ 3 tbsp oil \\ 2 red chilli (chopped) \\ 300 g chicken (sliced) \\ 750 g cabbage (sliced) \\ 1 tbsp black bean sauce \\ \end{ingredients} \index{Meat!Chicken with Cabbage and Peanuts} \begin{instructions} \item In a sauce pan place 400~g noodles pour on 800~ml boiling water, cover and simmer for 5 minutes, then drain. \item In a large wok fry 70~g peanuts in 3~tbsp oil then remove. \item Fry 2 chopped red chilli and 300~g sliced chicken in the oil until meat is done. \item Stir in 750~g sliced cabbage and heat through. 
\item Stir in 1~tbsp black bean sauce and fry for 2 minutes. \item Sprinkle with the peanuts. \end{instructions} \end{recipe}% \begin{recipe}{2}{Pea Risotto}% Calories: 3055 \begin{ingredients} 350 g onions (chopped) \\ 80 g butter \\ 800 g frozen peas \\ 800 g can sweetcorn \\ 2 cube stock (crumbled) \\ 400 g white rice \\ 50 g parmesan cheese (grated) \\ \end{ingredients} \index{Vegetarian!Pea Risotto} \begin{instructions} \item In a large wok fry 350~g chopped onions in 80~g butter until soft. \item Stir in 800~g frozen peas, 800~g can sweetcorn, 2~cube crumbled stock, 400~g white rice and 800~ml cold water and simmer until rice done. \item Before serving stir in 50~g grated parmesan cheese. \end{instructions} \end{recipe}% \begin{recipe}{2}{Tofu and Tahini}% Calories: 2920 \begin{ingredients} 400 g onions (chopped) \\ 2 cloves garlic (crushed) \\ 150 g green pepper (chopped) \\ 200 g celery (chopped) \\ 4 tbsp oil \\ 1 cube stock (crumbled) \\ 1 tbsp soy sauce \\ 4 tbsp tahini \\ 1 tsp ground nutmeg \\ 2 tsp miso \\ 400 g apples (chopped) \\ 100 g can pineapple (chopped) \\ 300 g tofu (cubed) \\ 1200 g baking potatoes (washed) \\ \end{ingredients} \index{Vegan!Tofu and Tahini} \begin{instructions} \item In a large wok fry 400~g chopped onions, 2~cloves crushed garlic, 150~g chopped green pepper and 200~g chopped celery in 4~tbsp oil until soft. \item Stir in 1~cube crumbled stock, 1~tbsp soy sauce, 4~tbsp tahini, 1~tsp ground nutmeg, 2~tsp miso, 400~g chopped apples, 100~g chopped can pineapple and 250~ml cold water and simmer for 10 minutes. \item Put in a casserole dish and sprinkle on top 300~g cubed tofu and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 40 minutes. \item Bake 1200~g washed baking potatoes at (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) for 40 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Pasta and Stir in Sauce}% Calories: 455 \begin{ingredients} 200 g pasta shapes \\ 150 g stir in pasta sauce \\ \end{ingredients} \index{Quick!Pasta and Stir in Sauce} \begin{instructions} \item In a sauce pan place 200~g pasta shapes pour on 400~ml boiling water, cover and simmer for 12 minutes, then drain. \item Stir in 150~g stir in pasta sauce. \end{instructions} \end{recipe}% \begin{recipe}{2}{Spaghetti and Pasta Sauce}% Calories: 2022 \begin{ingredients} 400 g onions \\ 400 g minced meat \\ 3 cloves garlic (crushed) \\ 100 g mushrooms (sliced) \\ 450 g pasta sauce \\ 400 g spaghetti \\ \end{ingredients} \index{Meat!Spaghetti and Pasta Sauce} \index{Spaghetti!Spaghetti and Pasta Sauce} \begin{instructions} \item In a large wok fry 400~g onions, 400~g minced meat, 3~cloves crushed garlic and 100~g sliced mushrooms. \item Stir in 450~g pasta sauce, and warm through. \item In a sauce pan place 400~g spaghetti pour on 800~ml boiling water and continue boiling for 12 minutes then drain. \end{instructions} \end{recipe}% \begin{recipe}{2}{Vegetable Chilli Stew}% Calories: 2453 \begin{ingredients} 1000 g potatoes (chopped) \\ 200 g onions (chopped) \\ 2 cube stock (crumbled) \\ 3 cloves garlic \\ 1 red chilli (chopped) \\ 1 green chilli (chopped) \\ 4 tbsp oil \\ 150 g red pepper (chopped) \\ 150 g green pepper (chopped) \\ 400 g can kidney beans (drained) \\ 400 g can cannellini beans (drained) \\ 400 g can tomatoes (chopped) \\ 150 g tomato puree \\ 500 g passata \\ 150 ml sherry \\ \end{ingredients} \index{Vegetarian!Vegetable Chilli Stew} \begin{instructions} \item In a sauce pan boil 1000~g chopped potatoes until not quite done then drain and put in casserole dish. 
\item In a large wok fry 200~g chopped onions, 2~cube crumbled stock, 3~cloves garlic, 1 chopped red chilli and 1 chopped green chilli, in 4~tbsp oil for 5 minutes. \item Stir in 150~g chopped red pepper, 150~g chopped green pepper, 400~g drained can kidney beans, 400~g drained can cannellini beans and simmer 5 minutes. \item Stir in 400~g chopped can tomatoes, 150~g tomato puree, 500~g passata and 150~ml sherry and warm through. \item Pour on top of the potatoes and bake 30 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Ethiopian Peanut Butter Curry}% Calories: 2818 \begin{ingredients} 400 g brown rice \\ 2 tbsp oil \\ 200 g onions (chopped) \\ 1 tbsp curry powder \\ 150 g tomato puree \\ 350 g peanut butter \\ \end{ingredients} \index{Vegan!Ethiopian Peanut Butter Curry} \index{Curry!Ethiopian Peanut Butter Curry} \begin{instructions} \item In a sauce pan place 400~g brown rice pour on 800~ml cold water, bring to the boil then cover and simmer for 15 minutes, then leave to stand for 10 minutes. \item In a large wok, fry in 2~tbsp oil, 200~g chopped onions until soft. \item Stir in 1~tbsp curry powder and 150~g tomato puree and cook for 5 minutes. \item Stir in 350~g peanut butter and 350~ml cold water and simmer for 15 minutes uncovered. \end{instructions} \end{recipe}% \begin{recipe}{1}{Puttanesca Spaghetti}% Calories: 992 \begin{ingredients} 200 g spaghetti \\ 1 cloves garlic (crushed) \\ 1 red chilli (chopped) \\ 2 tbsp oil \\ 150 g tomato puree \\ 400 g can tomatoes (chopped) \\ 100 g anchovy fillets (chopped) \\ 100 g olives (chopped) \\ 50 g capers (drained) \\ \end{ingredients} \index{Quick!Puttanesca Spaghetti} \index{Spaghetti!Puttanesca Spaghetti} \begin{instructions} \item In a sauce pan place 200~g spaghetti pour on 400~ml boiling water and continue boiling for 12 minutes then drain. \item In a large wok fry 1~cloves crushed garlic and 1 chopped red chilli in 2~tbsp oil until brown. \item Stir in 150~g tomato puree, 400~g chopped can tomatoes, 100~g chopped anchovy fillets, 100~g chopped olives and 50~g drained capers and simmer for 10 minutes. 
\end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{March} \begin{recipelist} {\scriptsize[1-2]} Stew and Dumplings\\ {\scriptsize[3-4]} Leek Pasta Bake\\ {\scriptsize[5-6]} Carrot Curry\\ {\scriptsize[7]} Pasta and Pesto\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Norwegian Lamb and Cabbage\\ {\scriptsize[10-11]} Green Shakshuka\\ {\scriptsize[12-13]} Parsnip and Carrot Couscous\\ {\scriptsize[14]} Spaghetti Carbonara\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 1200 g baking potatoes {\scriptsize[1-2]}\\ 1600 g carrots {\scriptsize[1-2, 5-6]}\\ 400 g celery {\scriptsize[1-2]}\\ 800 g leeks {\scriptsize[3-4]}\\ 1 lemon {\scriptsize[5-6]}\\ 520 g onions {\scriptsize[1-2, 5-6]}\\ 400 g spring greens {\scriptsize[3-4]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 800 g bread {\scriptsize[10-11]}\\ 400 g brown rice {\scriptsize[5-6]}\\ 400 g can coconut milk {\scriptsize[5-6]}\\ 400 g can tomatoes {\scriptsize[10-11]}\\ 150 g can tuna fish {\scriptsize[7]}\\ 200 g cashew nuts {\scriptsize[5-6]}\\ 400 g cous cous {\scriptsize[12-13]}\\ 600 g frozen peas {\scriptsize[5-6, 10-11]}\\ 200 g olives {\scriptsize[12-13]}\\ 600 g pasta shapes {\scriptsize[3-4, 7]}\\ 100 g pesto {\scriptsize[7]}\\ 3 tbsp plain flour {\scriptsize[3-4, 8-9]}\\ 200 g raisins {\scriptsize[5-6]}\\ 100 g self raising flour {\scriptsize[1-2]}\\ 200 g spaghetti {\scriptsize[14]}\\ 225 g tomato puree {\scriptsize[1-2, 10-11]}\\ 2 tbsp whole grain mustard {\scriptsize[3-4]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 100 g ham {\scriptsize[14]}\\ 800 g lamb {\scriptsize[8-9]}\\ 400 g meat {\scriptsize[1-2]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 150 g blue cheese {\scriptsize[3-4]}\\ 60 g butter {\scriptsize[1-2, 3-4]}\\ 50 g cheddar cheese {\scriptsize[14]}\\ 6 eggs {\scriptsize[10-11, 14]}\\ 580 ml milk {\scriptsize[1-2, 3-4, 14]}\\ 500 g yoghurt {\scriptsize[12-13]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 1 tbsp black pepper corns \\ 1 tsp chilli powder \\ 1 tsp cumin seeds \\ 3 tbsp curry powder \\ 0.5 tsp ground cardamom \\ 1 tsp ground cinnamon \\ 0.5 tsp ground coriander \\ 0.5 tsp ground cumin \\ 0.5 tsp ground ginger \\ 1 tsp ground nutmeg \\ 0.5 tsp ground turmeric \\ 2 tbsp honey \\ 80 g lime pickle \\ 0.5 tsp mixed herbs \\ 1 tbsp mustard seeds \\ 14 tbsp oil \\ 4 tbsp olive oil \\ 1 tsp paprika \\ 1 cube stock \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 1200 g baking potatoes {\scriptsize[8-9]}\\ 1000 g cabbage {\scriptsize[8-9]}\\ 800 g carrots {\scriptsize[12-13]}\\ 400 g leeks {\scriptsize[10-11]}\\ 2 lemon {\scriptsize[10-11, 12-13]}\\ 100 g onions {\scriptsize[10-11]}\\ 400 g parsnips {\scriptsize[12-13]}\\ 1 red pepper {\scriptsize[10-11]}\\ 300 g spring greens {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Stew and Dumplings}% Calories: 4682 \begin{ingredients} 1200 g baking potatoes (washed) \\ 4 tbsp oil \\ 320 g onions (chopped) \\ 400 g meat (sliced) \\ 400 g carrots (chopped) \\ 400 g celery (chopped) \\ 75 g tomato puree \\ 0.5 tsp chilli powder \\ 1 cube stock (crumbled) \\ 100 g self raising flour \\ 30 g butter \\ 0.5 tsp mixed herbs \\ 80 ml milk \\ \end{ingredients} \index{Meat!Stew and Dumplings} \begin{instructions} \item Bake 1200~g washed baking potatoes at 
(240$^{\circ}$C, Gas 9, 475$^{\circ}$F) for 40 minutes. \item In a large wok fry (in 4~tbsp oil) 320~g chopped onions, 400~g sliced meat, 400~g chopped carrots and 400~g chopped celery until the meat is done. \item Stir in 75~g tomato puree, 0.5~tsp chilli powder, 1~cube crumbled stock, 500~ml cold water and simmer for 50 minutes. \item In a mixing bowl rub together 100~g self raising flour, 30~g butter and 0.5~tsp mixed herbs. Mix in 80~ml milk and knead into 8 balls. Put balls in stew, cover and simmer for 15 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Leek Pasta Bake}% Calories: 2452 \begin{ingredients} 400 g pasta shapes \\ 3 tbsp oil \\ 800 g leeks (sliced) \\ 400 g spring greens (hacked) \\ 1 tsp ground nutmeg \\ 30 g butter \\ 2 tbsp plain flour \\ 400 ml milk \\ 2 tbsp whole grain mustard \\ 150 g blue cheese (crumbled) \\ \end{ingredients} \index{Vegetarian!Leek Pasta Bake} \begin{instructions} \item In a sauce pan place 400~g pasta shapes pour on 800~ml boiling water, cover and simmer for 12 minutes, then drain. \item In a large wok (in 3~tbsp oil) fry 800~g sliced leeks. \item Stir in 400~g hacked spring greens and 1~tsp ground nutmeg. \item In a sauce pan melt 30~g butter then stir in 2~tbsp plain flour until completely absorbed. \item Stir in 400~ml milk and 2~tbsp whole grain mustard until sauce thickens. \item In a casserole dish layer the pasta, vegetables and sauce with 150~g crumbled blue cheese and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 20 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Carrot Curry}% Calories: 4079 \begin{ingredients} 400 g brown rice \\ 3 tbsp oil \\ 200 g onions (sliced) \\ 1200 g carrots (chopped) \\ 3 tbsp curry powder \\ 1 tbsp mustard seeds \\ 1 tsp cumin seeds \\ 400 g can coconut milk \\ 200 g frozen peas \\ 200 g raisins \\ 200 g cashew nuts (chopped) \\ 80 g lime pickle \\ 1 lemon (juiced) \\ \end{ingredients} \index{Vegan!Carrot Curry} \index{Curry!Carrot Curry} \begin{instructions} \item In a sauce pan place 400~g brown rice pour on 800~ml cold water, bring to the boil then cover and simmer for 15 minutes, then leave to stand for 10 minutes. \item In a large wok (in 3~tbsp oil) fry 200~g sliced onions and 1200~g chopped carrots until soft. \item Stir in 3~tbsp curry powder, 1~tbsp mustard seeds and 1~tsp cumin seeds and heat through. \item Stir in 400~g can coconut milk, 200~g frozen peas, 200~g raisins and 200~g chopped cashew nuts and heat through. \item Serve with 80~g lime pickle or 1 juiced lemon. \end{instructions} \end{recipe}% \begin{recipe}{1}{Pasta and Pesto}% Calories: 972 \begin{ingredients} 200 g pasta shapes \\ 100 g pesto \\ 150 g can tuna fish \\ \end{ingredients} \index{Quick!Pasta and Pesto} \begin{instructions} \item In a sauce pan place 200~g pasta shapes pour on 400~ml boiling water, cover and simmer for 12 minutes, then drain. \item Stir in 100~g pesto and 150~g can tuna fish. \end{instructions} \end{recipe}% \begin{recipe}{2}{Norwegian Lamb and Cabbage}% Calories: 4362 \begin{ingredients} 800 g lamb \\ 1 tbsp plain flour \\ 1000 g cabbage (quartered) \\ 1 tbsp black pepper corns \\ 1200 g baking potatoes (washed) \\ \end{ingredients} \index{Meat!Norwegian Lamb and Cabbage} \begin{instructions} \item In a casserole dish place 800~g lamb and sprinkle with 1~tbsp plain flour. Put 1000~g quartered cabbage on top and sprinkle with 1~tbsp black pepper corns. Pour in 800~ml cold water and bring to boil. Then simmer on extremely low heat for \textbf{5 hours}. 
\item Bake 1200~g washed baking potatoes at (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) for 40 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Green Shakshuka}% Calories: 3058 \begin{ingredients} 4 tbsp olive oil \\ 100 g onions (chopped) \\ 1 red pepper (chopped) \\ 400 g leeks (sliced) \\ 300 g spring greens (hacked) \\ 400 g frozen peas \\ 400 g can tomatoes (chopped) \\ 150 g tomato puree \\ 0.5 tsp ground coriander \\ 0.5 tsp ground cumin \\ 0.5 tsp chilli powder \\ 4 eggs \\ 1 lemon (juiced) \\ 800 g bread \\ \end{ingredients} \index{Vegetarian!Green Shakshuka} \begin{instructions} \item In a large wok fry (in 4~tbsp olive oil) 100~g chopped onions, 1 chopped red pepper and 400~g sliced leeks until soft. \item Stir in 300~g hacked spring greens and fry until wilted. \item Stir in 400~g frozen peas, 400~g chopped can tomatoes, 150~g tomato puree, 0.5~tsp ground coriander, 0.5~tsp ground cumin and 0.5~tsp chilli powder. Then pour into a casserole dish and bake covered (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 20 minutes. \item Break 4 eggs into depressions in the mixture and bake until the whites are done. \item Serve with 1 juiced lemon and 800~g bread. \end{instructions} \end{recipe}% \begin{recipe}{2}{Parsnip and Carrot Couscous}% Calories: 2502 \begin{ingredients} 800 g carrots (sliced) \\ 400 g parsnips (sliced) \\ 4 tbsp oil \\ 2 tbsp honey \\ 1 tsp ground cinnamon \\ 1 tsp paprika \\ 0.5 tsp ground ginger \\ 0.5 tsp ground turmeric \\ 0.5 tsp ground cardamom \\ 400 g cous cous \\ 200 g olives (chopped) \\ 1 lemon (juiced) \\ 500 g yoghurt \\ \end{ingredients} \index{Vegan!Parsnip and Carrot Couscous} \begin{instructions} \item In a casserole dish coat 800~g sliced carrots and 400~g sliced parsnips with 4~tbsp oil and 2~tbsp honey. Sprinkle with 1~tsp ground cinnamon, 1~tsp paprika, 0.5~tsp ground ginger, 0.5~tsp ground turmeric and 0.5~tsp ground cardamom. Then bake covered (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 60 minutes. \item In a sauce pan place 400~g cous cous pour on 600~ml boiling water, and cover, leave to stand for 10 minutes. \item Stir the couscous into the vegetables, together with 200~g chopped olives. \item Serve with 1 juiced lemon and 500~g yoghurt. \end{instructions} \end{recipe}% \begin{recipe}{1}{Spaghetti Carbonara}% Calories: 606 \begin{ingredients} 200 g spaghetti \\ 2 eggs \\ 100 ml milk \\ 100 g ham (sliced) \\ 50 g cheddar cheese (grated) \\ \end{ingredients} \index{Quick!Spaghetti Carbonara} \index{Spaghetti!Spaghetti Carbonara} \begin{instructions} \item In a sauce pan place 200~g spaghetti pour on 400~ml boiling water and continue boiling for 12 minutes then drain. \item In a bowl beat 2 eggs, 100~ml milk and 100~g sliced ham. \item Return the spaghetti to the sauce pan, pour on the eggs and milk. Warm until the eggs are cooked. \item Stir in 50~g grated cheddar cheese before serving. 
\end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{April} \begin{recipelist} {\scriptsize[1-2]} Kale and Bacon Pasta\\ {\scriptsize[3-4]} Vegetable Shepherds Pie\\ {\scriptsize[5-6]} Lentil and Vegetable Curry\\ {\scriptsize[7]} Cheese and Tagliatelle\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Chicken and Tomato Curry\\ {\scriptsize[10-11]} Parsnip and Barley Nut Roast\\ {\scriptsize[12-13]} Tomato Tofu\\ {\scriptsize[14]} Scrambled Egg and Beans\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 1000 g boiling potatoes {\scriptsize[3-4]}\\ 400 g carrots {\scriptsize[3-4, 5-6]}\\ 2 chillis {\scriptsize[1-2]}\\ 6 cloves garlic {\scriptsize[1-2, 5-6]}\\ 200 g kale {\scriptsize[1-2]}\\ 2 lemon {\scriptsize[1-2, 5-6]}\\ 400 g onions {\scriptsize[5-6]}\\ 300 g parsnips {\scriptsize[3-4]}\\ 150 g red pepper {\scriptsize[5-6]}\\ 300 g swedes {\scriptsize[3-4]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 400 g baked beans {\scriptsize[14]}\\ 400 g bread {\scriptsize[14]}\\ 400 g brown rice {\scriptsize[8-9]}\\ 400 g can borlotti beans {\scriptsize[3-4]}\\ 1600 g can tomatoes {\scriptsize[5-6, 8-9, 12-13]}\\ 150 g cashew nuts {\scriptsize[5-6]}\\ 400 g cous cous {\scriptsize[5-6]}\\ 200 g creamed coconut {\scriptsize[5-6]}\\ 150 g hazel nuts {\scriptsize[10-11]}\\ 160 g horseradish sauce {\scriptsize[3-4]}\\ 200 g mayonnaise {\scriptsize[3-4]}\\ 400 g noodles {\scriptsize[12-13]}\\ 400 g pasta shapes {\scriptsize[1-2]}\\ 200 g pearl barley {\scriptsize[10-11]}\\ 200 g raisins {\scriptsize[5-6]}\\ 225 g red lentils {\scriptsize[5-6]}\\ 400 ml red wine {\scriptsize[8-9]}\\ 450 g tofu {\scriptsize[12-13]}\\ 290 g tomato puree {\scriptsize[8-9, 12-13]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 250 g bacon {\scriptsize[1-2]}\\ 400 g chicken {\scriptsize[8-9]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 30 g blue cheese {\scriptsize[7]}\\ 100 g butter {\scriptsize[3-4, 10-11, 10-11]}\\ 180 g cream cheese {\scriptsize[1-2]}\\ 6 eggs {\scriptsize[10-11, 14]}\\ 30 g gruyere cheese {\scriptsize[7]}\\ 83 tbsp milk {\scriptsize[3-4, 10-11, 14]}\\ 80 g parmesan cheese {\scriptsize[7, 10-11]}\\ 100 ml single cream {\scriptsize[7]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 1 tsp cumin seeds \\ 3 tbsp curry powder \\ 1 tsp ground cinnamon \\ 3 tbsp ground turmeric \\ 1 tsp mixed herbs \\ 1 tsp mustard seeds \\ 16 tbsp oil \\ 2 tbsp olive oil \\ 1 pinch pepper \\ 2 tbsp soy sauce \\ 1 cube stock \\ 150 g sunflower seeds \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 600 g carrots {\scriptsize[8-9, 12-13]}\\ 9 cloves garlic {\scriptsize[8-9, 10-11, 12-13]}\\ 60 g ginger {\scriptsize[8-9]}\\ 500 g mushrooms {\scriptsize[10-11, 12-13]}\\ 1200 g onions {\scriptsize[8-9, 10-11, 12-13]}\\ 400 g parsnips {\scriptsize[10-11]}\\ 1200 g potatoes {\scriptsize[10-11]}\\ 300 g red pepper {\scriptsize[12-13, 14]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Kale and Bacon Pasta}% Calories: 3440 \begin{ingredients} 400 g pasta shapes \\ 250 g bacon (chopped) \\ 2 tbsp olive oil \\ 4 cloves garlic (crushed) \\ 2 chillis (chopped) \\ 100 g sunflower seeds \\ 200 g kale (chopped) \\ 180 g cream cheese \\ 1 lemon (juiced) \\ 1 pinch pepper \\ \end{ingredients} \index{Meat!Kale and Bacon Pasta} \begin{instructions} \item In a sauce pan 
place 400~g pasta shapes pour on 800~ml boiling water, cover and simmer for 12 minutes, then drain. \item In a large wok fry 250~g chopped bacon in 2~tbsp olive oil until done. \item Stir in 4~cloves crushed garlic, 2 chopped chillis and 100~g sunflower seeds, and cook until the sunflower seeds are just toasted. \item Stir in 200~g chopped kale and 250~ml cold water, simmer until the kale has wilted. \item Stir in the pasta, 180~g cream cheese and 1 juiced lemon. Season with 1~pinch pepper. \end{instructions} \end{recipe}% \begin{recipe}{2}{Vegetable Shepherds Pie}% Calories: 3648 \begin{ingredients} 1000 g boiling potatoes (cubed) \\ 50 g butter \\ 4 tbsp milk \\ 300 g carrots (sliced) \\ 300 g parsnips (sliced) \\ 300 g swedes (sliced) \\ 400 g can borlotti beans (undrained) \\ 200 g mayonnaise \\ 160 g horseradish sauce \\ 50 g sunflower seeds \\ \end{ingredients} \index{Vegetarian!Vegetable Shepherds Pie} \index{Shepherds Pie!Vegetable Shepherds Pie} \begin{instructions} \item In a sauce pan boil 1000~g cubed boiling potatoes, then mash with 50~g butter and 4~tbsp milk. \item In a casserole dish mix 300~g sliced carrots, 300~g sliced parsnips and 300~g sliced swedes. Bake for 20 minutes. \item Stir in 400~g undrained can borlotti beans, 200~g mayonnaise and 160~g horseradish sauce. Spread the mashed potatoes on top, sprinkle with 50~g sunflower seeds and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 20 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Lentil and Vegetable Curry}% Calories: 4516 \begin{ingredients} 225 g red lentils \\ 4 tbsp oil \\ 400 g onions (chopped) \\ 2 cloves garlic (crushed) \\ 100 g carrots (sliced) \\ 1 tsp cumin seeds \\ 1 tsp mustard seeds \\ 1 tsp ground cinnamon \\ 2 tbsp curry powder \\ 3 tbsp ground turmeric \\ 150 g red pepper (chopped) \\ 400 g can tomatoes (chopped) \\ 200 g creamed coconut (chopped) \\ 200 g raisins \\ 150 g cashew nuts (chopped) \\ 1 lemon (juiced) \\ 400 g cous cous \\ \end{ingredients} \index{Vegan!Lentil and Vegetable Curry} \index{Curry!Lentil and Vegetable Curry} \begin{instructions} \item In a sauce pan in 450~ml cold water cook 225~g red lentils until soft. \item In a large wok fry (in 4~tbsp oil) 400~g chopped onions, 2~cloves crushed garlic and 100~g sliced carrots until soft. \item Stir in and fry 1~tsp cumin seeds and 1~tsp mustard seeds until the seeds pop. Then stir in 1~tsp ground cinnamon, 2~tbsp curry powder and 3~tbsp ground turmeric. \item Stir in the lentils, 150~g chopped red pepper, 400~g chopped can tomatoes, 200~g chopped creamed coconut, 200~g raisins, 150~g chopped cashew nuts, 1 juiced lemon and 400~ml cold water and simmer for 25 minutes. \item In a sauce pan place 400~g cous cous pour on 600~ml boiling water, and cover, leave to stand for 10 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Cheese and Tagliatelle}% Calories: 359 \begin{ingredients} 100 ml single cream \\ 30 g parmesan cheese (grated) \\ 30 g gruyere cheese (chopped) \\ 30 g blue cheese (chopped) \\ \end{ingredients} \index{Quick!Cheese and Tagliatelle} \index{Vegetarian!Cheese and Tagliatelle} \begin{instructions} \item Cook and drain the tagliatelle. \item Stir in 100~ml single cream, 30~g grated parmesan cheese, 30~g chopped gruyere cheese, 30~g chopped blue cheese and gently warm until the cheese melts. 
\end{instructions} \end{recipe}% \begin{recipe}{2}{Chicken and Tomato Curry}% Calories: 2718 \begin{ingredients} 400 g onions (sliced) \\ 60 g ginger (grated) \\ 4 cloves garlic (crushed) \\ 400 g carrots (chopped) \\ 4 tbsp oil \\ 1 tbsp curry powder \\ 800 g can tomatoes (chopped) \\ 140 g tomato puree \\ 400 ml red wine \\ 400 g chicken (sliced) \\ 4 tbsp oil \\ 400 g brown rice \\ \end{ingredients} \index{Meat!Chicken and Tomato Curry} \index{Curry!Chicken and Tomato Curry} \begin{instructions} \item In a large wok fry 400~g sliced onions, 60~g grated ginger, 4~cloves crushed garlic and 400~g chopped carrots in 4~tbsp oil until soft. \item Stir in 1~tbsp curry powder, 800~g chopped can tomatoes, 140~g tomato puree and 400~ml red wine and simmer gently for 30 minutes. \item In a casserole dish place 400~g sliced chicken and 4~tbsp oil then bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 30 minutes. \item In a sauce pan place 400~g brown rice pour on 800~ml cold water, bring to the boil then cover and simmer for 15 minutes, then leave to stand for 10 minutes. \item Add meat to the tomatoes and serve with the rice. \end{instructions} \end{recipe}% \begin{recipe}{2}{Parsnip and Barley Nut Roast}% Calories: 3007 \begin{ingredients} 150 g hazel nuts (chopped) \\ 200 g pearl barley \\ 400 g parsnips (cubed) \\ 1 cube stock (crumbled) \\ 25 g butter \\ 400 g onions (chopped) \\ 3 cloves garlic (crushed) \\ 400 g mushrooms (sliced) \\ 1 tsp mixed herbs \\ 50 g parmesan cheese (grated) \\ 2 eggs (beaten) \\ 1200 g potatoes (washed) \\ 25 g butter \\ 75 g milk \\ \end{ingredients} \index{Vegetarian!Parsnip and Barley Nut Roast} \begin{instructions} \item Grill 150~g chopped hazel nuts until golden. \item In a sauce pan place 200~g pearl barley, 400~g cubed parsnips, 1~cube crumbled stock and 400~ml boiling water and simmer until soft. \item In a large wok fry (in 25~g butter) 400~g chopped onions and 3~cloves crushed garlic until golden. \item Stir in 400~g sliced mushrooms and 1~tsp mixed herbs and cook until the mushrooms are done. \item Stir in 50~g grated parmesan cheese and 2 beaten eggs. Put into a casserole dish and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 20 minutes. \item In a sauce pan place 1200~g washed potatoes bring to the boil and simmer for 20 minutes. Then mash with 25~g butter and 75~g milk. \end{instructions} \end{recipe}% \begin{recipe}{2}{Tomato Tofu}% Calories: 1592 \begin{ingredients} 400 g onions (chopped) \\ 200 g carrots (chopped) \\ 2 cloves garlic (crushed) \\ 150 g red pepper (chopped) \\ 100 g mushrooms (chopped) \\ 3 tbsp oil \\ 150 g tomato puree \\ 2 tbsp soy sauce \\ 400 g can tomatoes \\ 450 g tofu (cubed) \\ 400 g noodles \\ \end{ingredients} \index{Vegan!Tomato Tofu} \begin{instructions} \item In a large wok fry 400~g chopped onions, 200~g chopped carrots, 2~cloves crushed garlic, 150~g chopped red pepper and 100~g chopped mushrooms in 3~tbsp oil until the mushrooms are done. \item Stir in 150~g tomato puree, 2~tbsp soy sauce and 400~g can tomatoes and simmer for 10 minutes. \item Sprinkle on 450~g cubed tofu, cover and simmer for 10 minutes. \item In a sauce pan place 400~g noodles pour on 800~ml boiling water, cover and simmer for 5 minutes, then drain. 
\end{instructions} \end{recipe}% \begin{recipe}{1}{Scrambled Egg and Beans}% Calories: 1384 \begin{ingredients} 400 g baked beans \\ 150 g red pepper (sliced) \\ 1 tbsp oil \\ 4 eggs (beaten) \\ 4 tbsp milk \\ 400 g bread (sliced) \\ \end{ingredients} \index{Quick!Scrambled Egg and Beans} \begin{instructions} \item In a sauce pan warm 400~g baked beans. \item In a large wok fry 150~g sliced red pepper in 1~tbsp oil until soft. \item Stir in 4 beaten eggs, 4~tbsp milk and warm until firm. \item Serve with 400~g sliced bread \end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{May} \begin{recipelist} {\scriptsize[1-2]} Hungarian Goulash\\ {\scriptsize[3-4]} Aubergine Risotto\\ {\scriptsize[5-6]} Bean and Nut Roast\\ {\scriptsize[7]} Bacon and Mushroom Pasta\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Banana Curry\\ {\scriptsize[10-11]} Mackerell and Potato Hash\\ {\scriptsize[12-13]} Beans and Bulgur Wheat\\ {\scriptsize[14]} Egg and Chips\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 500 g aubergine {\scriptsize[3-4]}\\ 1200 g baking potatoes {\scriptsize[5-6]}\\ 3 cloves garlic {\scriptsize[1-2, 5-6]}\\ 200 g mushrooms {\scriptsize[7]}\\ 950 g onions {\scriptsize[1-2, 3-4, 5-6]}\\ 250 g parsnips {\scriptsize[3-4]}\\ 150 g red pepper {\scriptsize[1-2]}\\ 150 g tomatoes {\scriptsize[3-4]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 100 g anchovy fillets {\scriptsize[10-11]}\\ 2 tbsp branston pickle {\scriptsize[5-6]}\\ 4 tbsp bread {\scriptsize[5-6]}\\ 400 g brown rice {\scriptsize[8-9]}\\ 150 g bulgur wheat {\scriptsize[12-13]}\\ 1200 g can kidney beans {\scriptsize[5-6, 12-13]}\\ 800 g can tomatoes {\scriptsize[1-2, 12-13]}\\ 400 g cous cous {\scriptsize[1-2]}\\ 800 g oven ready chips {\scriptsize[14]}\\ 100 g peanuts {\scriptsize[5-6]}\\ 1 tbsp plain flour {\scriptsize[1-2]}\\ 225 g tomato puree {\scriptsize[5-6, 12-13]}\\ 400 g white rice {\scriptsize[3-4]}\\ 2 tbsp whole grain mustard {\scriptsize[5-6]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 100 g bacon {\scriptsize[7]}\\ 400 g meat {\scriptsize[1-2]}\\ 400 g smoked mackerel {\scriptsize[10-11]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 100 g cheddar cheese {\scriptsize[7]}\\ 4 eggs {\scriptsize[14]}\\ 2 tbsp parmesan cheese {\scriptsize[3-4]}\\ 150 ml soured cream {\scriptsize[1-2]}\\ 500 g yoghurt {\scriptsize[8-9]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 1 tsp chilli powder \\ 1 tbsp curry powder \\ 1 tsp garam masala \\ 1 tsp ground coriander \\ 1 tsp ground cumin \\ 1 tbsp ground turmeric \\ 3 tsp miso \\ 2 tsp mustard seeds \\ 21 tbsp oil \\ 1 tsp paprika \\ 75 ml sherry \\ 3 cube stock \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 400 g broccoli {\scriptsize[12-13]}\\ 400 g carrots {\scriptsize[12-13]}\\ 250 g celery {\scriptsize[12-13]}\\ 7 cloves garlic {\scriptsize[8-9, 12-13]}\\ 200 g ginger {\scriptsize[8-9]}\\ 1000 g green bananas {\scriptsize[8-9]}\\ 3 green chilli {\scriptsize[8-9]}\\ 150 g green pepper {\scriptsize[12-13]}\\ 300 g mushrooms {\scriptsize[10-11]}\\ 1000 g new potatoes {\scriptsize[10-11]}\\ 1000 g onions {\scriptsize[8-9, 10-11, 12-13]}\\ 300 g red pepper {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Hungarian Goulash}% Calories: 3452 \begin{ingredients} 400 g onions (chopped) \\ 2 
cloves garlic (crushed) \\ 400 g meat (sliced) \\ 4 tbsp oil \\ 1 tbsp plain flour \\ 1 tsp paprika \\ 400 g can tomatoes (chopped) \\ 150 g red pepper (sliced) \\ 400 g cous cous \\ 150 ml soured cream \\ \end{ingredients} \index{Meat!Hungarian Goulash} \begin{instructions} \item In a large wok fry 400~g chopped onions, 2~cloves crushed garlic and 400~g sliced meat in 4~tbsp oil until meat sealed. \item Stir in 1~tbsp plain flour, 1~tsp paprika, 400~g chopped can tomatoes and 150~g sliced red pepper. \item Put in casserole dish and bake (160$^{\circ}$C, Gas 3, 325$^{\circ}$F) 50 minutes. \item In a sauce pan place 400~g cous cous pour on 600~ml boiling water, and cover, leave to stand for 10 minutes. \item Serve with 150~ml soured cream. \end{instructions} \end{recipe}% \begin{recipe}{2}{Aubergine Risotto}% Calories: 1349 \begin{ingredients} 500 g aubergine (cubed) \\ 200 g onions (chopped) \\ 2 tbsp oil \\ 250 g parsnips (cubed) \\ 75 ml sherry \\ 150 g tomatoes (chopped) \\ 1 cube stock (crumbled) \\ 2 tbsp parmesan cheese (grated) \\ 400 g white rice \\ \end{ingredients} \index{Vegetarian!Aubergine Risotto} \begin{instructions} \item In a large wok fry 500~g cubed aubergine, 200~g chopped onions in 2~tbsp oil until onions soft. \item Stir in 250~g cubed parsnips, 75~ml sherry, 150~g chopped tomatoes, 1~cube crumbled stock, 2~tbsp grated parmesan cheese, 400~g white rice and 750~ml boiling water. Bring to a boil and simmer until rice done. \end{instructions} \end{recipe}% \begin{recipe}{2}{Bean and Nut Roast}% Calories: 2848 \begin{ingredients} 1200 g baking potatoes (washed) \\ 350 g onions (chopped) \\ 1 cloves garlic (crushed) \\ 2 tbsp oil \\ 100 g peanuts (chopped) \\ 4 tbsp bread (crumbled) \\ 75 g tomato puree \\ 3 tsp miso \\ 2 tbsp branston pickle \\ 2 tbsp whole grain mustard \\ 1 tsp ground cumin \\ 1 tsp ground coriander \\ 1 tsp garam masala \\ 400 g can kidney beans (mashed) \\ \end{ingredients} \index{Vegan!Bean and Nut Roast} \begin{instructions} \item Bake 1200~g washed baking potatoes at (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) for 40 minutes. \item In a large wok fry 350~g chopped onions, and 1~cloves crushed garlic in 2~tbsp oil until soft. \item Stir in 100~g chopped peanuts, 4~tbsp crumbled bread, 75~g tomato puree, 3~tsp miso, 2~tbsp branston pickle, 2~tbsp whole grain mustard, 1~tsp ground cumin, 1~tsp ground coriander and 1~tsp garam masala. \item Stir in 400~g mashed can kidney beans and mix well. \item Put in a casserole dish and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 45 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Bacon and Mushroom Pasta}% Calories: 1127 \begin{ingredients} 100 g bacon (chopped) \\ 200 g mushrooms (sliced) \\ 1 tbsp oil \\ 100 g cheddar cheese (grated) \\ \end{ingredients} \index{Quick!Bacon and Mushroom Pasta} \begin{instructions} \item In a large wok fry 100~g chopped bacon and 200~g sliced mushrooms in 1~tbsp oil for 5 minutes. \item Stir in the pasta (cooked and drained) and 100~g grated cheddar cheese and warm through. 
\end{instructions} \end{recipe}% \begin{recipe}{2}{Banana Curry}% Calories: 2663 \begin{ingredients} 400 g brown rice \\ 400 g onions (chopped) \\ 3 green chilli (chopped) \\ 3 cloves garlic (crushed) \\ 200 g ginger (chopped) \\ 6 tbsp oil \\ 1 tbsp curry powder \\ 1 tbsp ground turmeric \\ 1 tsp mustard seeds \\ 1000 g green bananas (chopped) \\ 500 g yoghurt \\ \end{ingredients} \index{Vegetarian!Banana Curry} \index{Curry!Banana Curry} \begin{instructions} \item In a sauce pan place 400~g brown rice pour on 800~ml cold water, bring to the boil then cover and simmer for 15 minutes, then leave to stand for 10 minutes. \item In a large wok fry 400~g chopped onions, 3 chopped green chilli, 3~cloves crushed garlic, 200~g chopped ginger in 6~tbsp oil until onions soft. \item Stir in 1~tbsp curry powder, 1~tbsp ground turmeric and 1~tsp mustard seeds until mustard starts to pop. \item Stir in 1000~g chopped green bananas and fry quickly until hot. \item Stir in 500~g yoghurt and warm through. \end{instructions} \end{recipe}% \begin{recipe}{2}{Mackerell and Potato Hash}% Calories: 2900 \begin{ingredients} 1000 g new potatoes (quartered) \\ 3 tbsp oil \\ 400 g onions (chopped) \\ 1 tbsp mustard seeds \\ 300 g red pepper (sliced) \\ 300 g mushrooms (sliced) \\ 100 g anchovy fillets (chopped) \\ 400 g smoked mackerel (chopped) \\ \end{ingredients} \index{Meat!Mackerell and Potato Hash} \begin{instructions} \item In a sauce pan boil 1000~g quartered new potatoes until done, then drain. \item In a large wok in 3~tbsp oil fry 400~g chopped onions and 1~tbsp mustard seeds until the onions are soft. \item Stir in 300~g sliced red pepper, 300~g sliced mushrooms, 100~g chopped anchovy fillets, and 400~g chopped smoked mackerel and fry until the mushrooms are done. \item Stir in the potatoes and warm through. \end{instructions} \end{recipe}% \begin{recipe}{2}{Beans and Bulgur Wheat}% Calories: 1777 \begin{ingredients} 200 g onions (chopped) \\ 150 g green pepper (chopped) \\ 400 g carrots (sliced) \\ 4 cloves garlic (crushed) \\ 250 g celery (chopped) \\ 2 tbsp oil \\ 150 g bulgur wheat \\ 400 g can tomatoes (chopped) \\ 150 g tomato puree \\ 1 tsp chilli powder \\ 800 g can kidney beans (drained) \\ 400 g broccoli \\ 2 cube stock (crumbled) \\ \end{ingredients} \index{Vegan!Beans and Bulgur Wheat} \begin{instructions} \item In a large wok fry 200~g chopped onions, 150~g chopped green pepper, 400~g sliced carrots, 4~cloves crushed garlic, 250~g chopped celery in 2~tbsp oil until soft. \item Stir in 150~g bulgur wheat, 400~g chopped can tomatoes, 150~g tomato puree, 1~tsp chilli powder, 800~g drained can kidney beans, 400~g broccoli, 2~cube crumbled stock and 400~ml cold water and warm through. \item Put it all in a casserole dish and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 30 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Egg and Chips}% Calories: 1739 \begin{ingredients} 800 g oven ready chips \\ 4 eggs \\ 1 tbsp oil \\ \end{ingredients} \index{Quick!Egg and Chips} \begin{instructions} \item Heat up 800~g oven ready chips. \item In a large wok fry 4 eggs in 1~tbsp oil. 
\end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{June} \begin{recipelist} {\scriptsize[1-2]} Chicken Tagine\\ {\scriptsize[3-4]} Aubergine and Beans\\ {\scriptsize[5-6]} Halloumi Couscous\\ {\scriptsize[7]} Strammer Max\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Tofu and Mince Noodles\\ {\scriptsize[10-11]} Vegetable Mousaka\\ {\scriptsize[12-13]} Tarka Dhal\\ {\scriptsize[14]} Sweet Corn and Bacon\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 800 g aubergine {\scriptsize[3-4, 5-6]}\\ 1400 g courgette {\scriptsize[3-4, 5-6]}\\ fresh coriander {\scriptsize[1-2]}\\ 6 cloves garlic {\scriptsize[1-2, 3-4]}\\ 150 g green pepper {\scriptsize[5-6]}\\ 1 lemon {\scriptsize[1-2]}\\ 1 lettuce {\scriptsize[5-6]}\\ 1 lime {\scriptsize[5-6]}\\ 600 g mushrooms {\scriptsize[3-4]}\\ 1200 g new potatoes {\scriptsize[3-4]}\\ 1050 g onions {\scriptsize[1-2, 3-4, 5-6]}\\ 300 g red pepper {\scriptsize[3-4, 5-6]}\\ 100 g tomatoes {\scriptsize[5-6]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 400 g bread {\scriptsize[7]}\\ 400 g brown rice {\scriptsize[12-13]}\\ 400 g can cannellini beans {\scriptsize[3-4]}\\ 800 g can sweetcorn {\scriptsize[14]}\\ 300 g can tomatoes {\scriptsize[3-4]}\\ 50 g capers {\scriptsize[5-6]}\\ 800 g cous cous {\scriptsize[1-2, 5-6]}\\ 25 g creamed coconut {\scriptsize[12-13]}\\ 150 g dried apricots {\scriptsize[1-2]}\\ 100 g flaked almonds {\scriptsize[1-2]}\\ 400 g noodles {\scriptsize[8-9]}\\ 100 g olives {\scriptsize[5-6]}\\ 4 tbsp plain flour {\scriptsize[8-9, 14]}\\ 225 g red lentils {\scriptsize[12-13]}\\ 450 g tofu {\scriptsize[8-9]}\\ 225 g tomato puree {\scriptsize[5-6, 10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 100 g bacon {\scriptsize[14]}\\ 400 g chicken {\scriptsize[1-2]}\\ 50 g ham {\scriptsize[7]}\\ 200 g minced meat {\scriptsize[8-9]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 200 g cheddar cheese {\scriptsize[7, 10-11]}\\ 6 eggs {\scriptsize[7, 10-11, 14]}\\ 200 g halloumi cheese {\scriptsize[5-6]}\\ 800 ml milk {\scriptsize[10-11, 14]}\\ 150 ml yoghurt {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 1 bay leaf \\ 1 tsp cumin seeds \\ 2 tbsp curry powder \\ 1 tsp ground cinnamon \\ 2 tsp ground cumin \\ 1 tsp ground ginger \\ 1 tsp ground turmeric \\ 2 tbsp honey \\ 1 tsp mixed herbs \\ 1 tsp mustard seeds \\ 16 tbsp oil \\ 10 tbsp olive oil \\ 2 tsp onion seeds \\ 1 tbsp sherry \\ 2 cube stock \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 1000 g aubergine {\scriptsize[10-11]}\\ 6 cloves garlic {\scriptsize[10-11, 12-13]}\\ 120 g ginger {\scriptsize[8-9, 12-13]}\\ 2 green chilli {\scriptsize[12-13]}\\ 100 g mushrooms {\scriptsize[10-11]}\\ 500 g new potatoes {\scriptsize[10-11]}\\ 600 g onions {\scriptsize[10-11, 12-13]}\\ 2 red chilli {\scriptsize[8-9]}\\ 400 g runner beans {\scriptsize[8-9]}\\ 500 g tomatoes {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Chicken Tagine}% Calories: 3153 \begin{ingredients} 400 g chicken (chopped) \\ 2 tbsp olive oil \\ 300 g onions (chopped) \\ 2 cloves garlic (crushed) \\ 2 tbsp olive oil \\ 1 tsp ground cinnamon \\ 1 tsp ground cumin \\ 1 tsp ground ginger \\ 150 g dried apricots (quartered) \\ 1 lemon (juiced) \\ 2 tbsp honey \\ 1 cube stock (crumbled) \\ 400 g cous cous \\ 100 g flaked 
almonds (toasted) \\ fresh coriander (chopped) \\ \end{ingredients} \index{Meat!Chicken Tagine} \begin{instructions} \item In a large wok fry 400~g chopped chicken in 2~tbsp olive oil until done, then remove. \item In a large wok fry 300~g chopped onions and 2~cloves crushed garlic in 2~tbsp olive oil until golden. \item Stir in the chicken, 1~tsp ground cinnamon, 1~tsp ground cumin, 1~tsp ground ginger, 150~g quartered dried apricots, 1 juiced lemon, 2~tbsp honey, 1~cube crumbled stock and 350~ml boiling water and simmer for 10 minutes. \item In a sauce pan place 400~g cous cous pour on 600~ml boiling water, and cover, leave to stand for 10 minutes. \item Serve with 100~g toasted flaked almonds and chopped fresh coriander. \end{instructions} \end{recipe}% \begin{recipe}{2}{Aubergine and Beans}% Calories: 2511 \begin{ingredients} 1200 g new potatoes (washed) \\ 4 cloves garlic (crushed) \\ 400 g onions (sliced) \\ 150 g red pepper (sliced) \\ 5 tbsp oil \\ 400 g aubergine (sliced) \\ 600 g mushrooms (sliced) \\ 1000 g courgette (sliced) \\ 300 g can tomatoes (chopped) \\ 400 g can cannellini beans \\ \end{ingredients} \index{Vegan!Aubergine and Beans} \begin{instructions} \item In a sauce pan place 1200~g washed new potatoes bring to the boil and simmer for 20 minutes. \item In a large wok fry 4~cloves crushed garlic, 400~g sliced onions and 150~g sliced red pepper in 5~tbsp oil until soft. \item Stir in 400~g sliced aubergine, 600~g sliced mushrooms, 1000~g sliced courgette, 300~g chopped can tomatoes and 400~g can cannellini beans and simmer for 20 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Halloumi Couscous}% Calories: 2682 \begin{ingredients} 400 g courgette (sliced) \\ 350 g onions (sliced) \\ 150 g red pepper (sliced) \\ 150 g green pepper (sliced) \\ 400 g aubergine (sliced) \\ 4 tbsp olive oil \\ 100 g tomatoes (chopped) \\ 50 g capers \\ 200 g halloumi cheese (chopped) \\ 100 g olives (chopped) \\ 400 g cous cous \\ 2 tbsp olive oil \\ 1 lime \\ 1 tbsp ground cumin \\ 75 g tomato puree \\ 2 tsp onion seeds \\ 1 lettuce (chopped) \\ \end{ingredients} \index{Vegetarian!Halloumi Couscous} \begin{instructions} \item In an open casserole dish bake 400~g sliced courgette, 350~g sliced onions, 150~g sliced red pepper, 150~g sliced green pepper, 400~g sliced aubergine in 4~tbsp olive oil for 60 minutes. \item Stir in 100~g chopped tomatoes, 50~g capers, 200~g chopped halloumi cheese, 100~g chopped olives and bake for 10 minutes. \item In a sauce pan place 400~g cous cous pour on 600~ml boiling water, and cover, leave to stand for 10 minutes. \item In a large wok mix 2~tbsp olive oil, juice of 1 lime, 1~tbsp ground cumin, 75~g tomato puree, 2~tsp onion seeds and warm through. \item Put the vegetables on to the cous cous. Then a layer of 1 chopped lettuce. Then the olive oil mixture. \end{instructions} \end{recipe}% \begin{recipe}{1}{Strammer Max}% Calories: 1151 \begin{ingredients} 400 g bread (sliced) \\ 50 g cheddar cheese \\ 2 eggs \\ 50 g ham \\ \end{ingredients} \index{Quick!Strammer Max} \begin{instructions} \item On 400~g sliced bread toast 50~g cheddar cheese. \item In a frying pan fry 2 eggs and serve with 50~g ham. 
\end{instructions} \end{recipe}% \begin{recipe}{2}{Tofu and Mince Noodles}% Calories: 1657 \begin{ingredients} 200 g minced meat \\ 2 red chilli (chopped) \\ 60 g ginger (chopped) \\ 2 tbsp oil \\ 2 tbsp plain flour \\ 1 cube stock (crumbled) \\ 1 tbsp sherry \\ 400 g runner beans (chopped) \\ 450 g tofu (cubed) \\ 400 g noodles \\ \end{ingredients} \index{Meat!Tofu and Mince Noodles} \begin{instructions} \item In a large wok fry 200~g minced meat, 2 chopped red chilli and 60~g chopped ginger in 2~tbsp oil until the meat is done. \item Stir in 2~tbsp plain flour, 1~cube crumbled stock, 1~tbsp sherry, 400~g chopped runner beans, 250~ml cold water, bring to boil and simmer for 10 minutes. \item Stir in 450~g cubed tofu, and simmer for 10 minutes. \item In a sauce pan place 400~g noodles pour on 800~ml boiling water, cover and simmer for 5 minutes, then drain. \end{instructions} \end{recipe}% \begin{recipe}{2}{Vegetable Mousaka}% Calories: 2189 \begin{ingredients} 500 g new potatoes (sliced) \\ 400 g onions (sliced) \\ 1000 g aubergine (sliced) \\ 4 tbsp oil \\ 4 cloves garlic (crushed) \\ 100 g mushrooms (sliced) \\ 500 g tomatoes (chopped) \\ 150 g tomato puree \\ 1 tsp mixed herbs \\ 2 eggs \\ 150 ml yoghurt \\ 200 ml milk \\ 150 g cheddar cheese (grated) \\ \end{ingredients} \index{Vegetarian!Vegetable Mousaka} \begin{instructions} \item In a sauce pan boil 500~g sliced new potatoes until soft, then drain. \item In a large wok fry 400~g sliced onions and 1000~g sliced aubergine in 4~tbsp oil until soft. \item Stir in 4~cloves crushed garlic, 100~g sliced mushrooms, 500~g chopped tomatoes, 150~g tomato puree, 3~tbsp boiling water and 1~tsp mixed herbs, and simmer for 10 minutes. \item Put everything in a casserole dish and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 30 minutes. \item In a mixing bowl beat together 2 eggs, 150~ml yoghurt and 200~ml milk. Pour over the casserole dish, sprinkle with 150~g grated cheddar cheese and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 20 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Tarka Dhal}% Calories: 1351 \begin{ingredients} 1 bay leaf \\ 2 tbsp curry powder \\ 1 tsp ground turmeric \\ 225 g red lentils \\ 3 tbsp oil \\ 1 tsp cumin seeds \\ 1 tsp mustard seeds \\ 200 g onions (sliced) \\ 2 cloves garlic (crushed) \\ 60 g ginger (chopped) \\ 2 green chilli (chopped) \\ 25 g creamed coconut \\ 400 g brown rice \\ \end{ingredients} \index{Vegan!Tarka Dhal} \index{Curry!Tarka Dhal} \begin{instructions} \item In a sauce pan cook 1 bay leaf, 2~tbsp curry powder, 1~tsp ground turmeric and 225~g red lentils, in 450~ml cold water until soft. \item In a large wok in 3~tbsp oil fry 1~tsp cumin seeds and 1~tsp mustard seeds until mustard seeds pop. \item Stir in 200~g sliced onions, 2~cloves crushed garlic, 60~g chopped ginger and 2 chopped green chilli and fry until onion browns. \item Stir in the lentils and 25~g creamed coconut and warm through. \item In a sauce pan place 400~g brown rice pour on 800~ml cold water, bring to the boil then cover and simmer for 15 minutes, then leave to stand for 10 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Sweet Corn and Bacon}% Calories: 2027 \begin{ingredients} 2 eggs \\ 100 g bacon (chopped) \\ 2 tbsp oil \\ 2 tbsp plain flour \\ 800 g can sweetcorn \\ 600 ml milk \\ \end{ingredients} \index{Quick!Sweet Corn and Bacon} \begin{instructions} \item In a sauce pan boil 2 eggs. \item In a large wok fry 100~g chopped bacon in 2~tbsp oil until done. \item Stir in 2~tbsp plain flour until absorbed. 
\item Stir in the liquid from 800~g can sweetcorn and 600~ml milk. \item Add the eggs (halved). \end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{July} \begin{recipelist} {\scriptsize[1-2]} Tuna Bake\\ {\scriptsize[3-4]} Onion and Tomato Lasagna\\ {\scriptsize[5-6]} Tofu and Runner Bean Stir Fry\\ {\scriptsize[7]} Quick Dummy\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Chicken Curry\\ {\scriptsize[10-11]} Indian Tomatoes\\ {\scriptsize[12-13]} Lentil Hotpot\\ {\scriptsize[14]} Quick Dummy\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 2 chillis {\scriptsize[1-2, 5-6]}\\ 4 cloves garlic {\scriptsize[5-6]}\\ 60 g ginger {\scriptsize[5-6]}\\ 200 g mushrooms {\scriptsize[5-6]}\\ 1250 g onions {\scriptsize[1-2, 3-4, 5-6]}\\ 150 g red pepper {\scriptsize[5-6]}\\ 400 g runner beans {\scriptsize[5-6]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 400 g brown rice {\scriptsize[1-2]}\\ 400 g can evaporated milk {\scriptsize[8-9]}\\ 400 g can mushroom soup {\scriptsize[1-2]}\\ 200 g can sweetcorn {\scriptsize[1-2]}\\ 400 g can tomatoes {\scriptsize[3-4]}\\ 150 g can tuna fish {\scriptsize[1-2]}\\ 1 tbsp corn flour {\scriptsize[5-6]}\\ 400 g cous cous {\scriptsize[7, 14]}\\ 200 g frozen peas {\scriptsize[1-2]}\\ 200 g lasagna {\scriptsize[3-4]}\\ 4 naan bread {\scriptsize[10-11]}\\ 400 g noodles {\scriptsize[5-6]}\\ 225 g red lentils {\scriptsize[12-13]}\\ 300 g tofu {\scriptsize[5-6]}\\ 400 g white rice {\scriptsize[8-9]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 400 g chicken {\scriptsize[8-9]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 200 g cheddar cheese {\scriptsize[1-2, 3-4]}\\ 100 ml milk {\scriptsize[3-4]}\\ 300 ml soured cream {\scriptsize[3-4]}\\ 500 g yoghurt {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 0.5 tsp asafoetida \\ 2 tsp cumin seeds \\ 2 tbsp curry powder \\ 0.5 tsp ground turmeric \\ 2 tsp miso \\ 3 tsp mixed herbs \\ 23 tbsp oil \\ 2 tbsp paprika \\ 5 tbsp sherry \\ 5 tbsp soy sauce \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 400 g celery {\scriptsize[12-13]}\\ 300 g fennel {\scriptsize[12-13]}\\ 4 cloves garlic {\scriptsize[8-9, 12-13]}\\ 100 g mushrooms {\scriptsize[12-13]}\\ 1050 g onions {\scriptsize[8-9, 10-11, 12-13]}\\ 500 g potatoes {\scriptsize[12-13]}\\ 1 red chilli {\scriptsize[10-11]}\\ 150 g red pepper {\scriptsize[12-13]}\\ 400 g runner beans {\scriptsize[10-11]}\\ 1000 g tomatoes {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Tuna Bake}% Calories: 2919 \begin{ingredients} 400 g brown rice \\ 350 g onions (sliced) \\ 1 chilli (chopped) \\ 4 tbsp oil \\ 150 g can tuna fish (crumbled) \\ 200 g can sweetcorn (drained) \\ 200 g frozen peas \\ 400 g can mushroom soup \\ 50 g cheddar cheese (grated) \\ \end{ingredients} \index{Meat!Tuna Bake} \begin{instructions} \item In a sauce pan place 400~g brown rice pour on 800~ml cold water, bring to the boil then cover and simmer for 15 minutes, then leave to stand for 10 minutes. \item In a large wok fry 350~g sliced onions and 1 chopped chilli in 4~tbsp oil until soft. \item Stir in 150~g crumbled can tuna fish, 200~g drained can sweetcorn, 200~g frozen peas, 400~g can mushroom soup and the cooked rice.
\item Place in a casserole dish and sprinkle with 50~g grated cheddar cheese and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 20 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Onion and Tomato Lasagna}% Calories: 2681 \begin{ingredients} 700 g onions (chopped) \\ 3 tbsp oil \\ 400 g can tomatoes (chopped) \\ 200 g lasagna \\ 150 g cheddar cheese (grated) \\ 300 ml soured cream \\ 100 ml milk \\ \end{ingredients} \index{Vegetarian!Onion and Tomato Lasagna} \index{Lasagna!Onion and Tomato Lasagna} \begin{instructions} \item In a large wok fry 700~g chopped onions in 3~tbsp oil until soft. \item Stir in 400~g chopped can tomatoes and warm through. \item In a casserole dish layer 200~g lasagna, the tomato mix and 150~g grated cheddar cheese 3 times. \item In a mixing bowl mix 300~ml soured cream with 100~ml milk and pour on top of the lasagna. \item Bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 50 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Tofu and Runner Bean Stir Fry}% Calories: 1903 \begin{ingredients} 300 g tofu (cubed) \\ 7 tbsp oil \\ 2 tbsp paprika \\ 200 g onions (sliced) \\ 60 g ginger (chopped) \\ 1 chilli \\ 4 cloves garlic (crushed) \\ 400 g runner beans (halved) \\ 150 g red pepper (sliced) \\ 200 g mushrooms (sliced) \\ 400 g noodles \\ 1 tbsp corn flour \\ 5 tbsp sherry \\ 5 tbsp soy sauce \\ \end{ingredients} \index{Vegan!Tofu and Runner Bean Stir Fry} \begin{instructions} \item Coat 300~g cubed tofu in 7~tbsp oil and 2~tbsp paprika then bake until browned. \item In a large wok fry 200~g sliced onions, 60~g chopped ginger, 1 chilli and 4~cloves crushed garlic until soft. \item Stir in 400~g halved runner beans, 150~g sliced red pepper and 200~g sliced mushrooms, and cook until warm. \item In a sauce pan place 400~g noodles pour on 800~ml boiling water, cover and simmer for 5 minutes, then drain. \item To the wok, stir in 1~tbsp corn flour, 5~tbsp sherry and 5~tbsp soy sauce. \item Add the cooked tofu to the wok. \end{instructions} \end{recipe}% \begin{recipe}{1}{Quick Dummy}% Calories: 352 \begin{ingredients} 200 g cous cous \\ \end{ingredients} \index{Quick!Quick Dummy} \begin{instructions} \item In a sauce pan place 200~g cous cous pour on 300~ml boiling water, and cover, leave to stand for 10 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Chicken Curry}% Calories: 2368 \begin{ingredients} 400 g white rice \\ 350 g onions (chopped) \\ 2 cloves garlic (crushed) \\ 3 tbsp oil \\ 2 tbsp curry powder \\ 400 g chicken (sliced) \\ 400 g can evaporated milk \\ \end{ingredients} \index{Meat!Chicken Curry} \index{Curry!Chicken Curry} \begin{instructions} \item In a sauce pan place 400~g white rice pour on 800~ml cold water, bring to the boil then turn off, cover and stand for 25 minutes. \item In a large wok fry 350~g chopped onions and 2~cloves crushed garlic in 3~tbsp oil until soft. \item Stir in 2~tbsp curry powder, 400~g sliced chicken and cook until done. \item Stir in 400~g can evaporated milk and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 25 minutes.
\end{instructions} \end{recipe}% \begin{recipe}{2}{Indian Tomatoes}% Calories: 1202 \begin{ingredients} 4 tbsp oil \\ 0.5 tsp asafoetida \\ 0.5 tsp ground turmeric \\ 2 tsp cumin seeds \\ 1 red chilli (chopped) \\ 350 g onions (chopped) \\ 1000 g tomatoes (chopped) \\ 400 g runner beans \\ 500 g yoghurt \\ 4 naan bread \\ \end{ingredients} \index{Vegetarian!Indian Tomatoes} \begin{instructions} \item In a large wok heat 4~tbsp oil, 0.5~tsp asafoetida, 0.5~tsp ground turmeric, and 2~tsp cumin seeds until the cumin sizzles. \item Stir in 1 chopped red chilli and 350~g chopped onions and fry until soft. \item Stir in 1000~g chopped tomatoes and 400~g runner beans and simmer for 15 minutes. \item Serve with 500~g yoghurt and 4 naan bread. \end{instructions} \end{recipe}% \begin{recipe}{2}{Lentil Hotpot}% Calories: 1159 \begin{ingredients} 350 g onions (chopped) \\ 150 g red pepper (chopped) \\ 400 g celery (chopped) \\ 300 g fennel (chopped) \\ 100 g mushrooms (chopped) \\ 2 cloves garlic (crushed) \\ 2 tbsp oil \\ 225 g red lentils \\ 3 tsp mixed herbs \\ 500 g potatoes (sliced) \\ 2 tsp miso \\ \end{ingredients} \index{Vegan!Lentil Hotpot} \begin{instructions} \item In a large wok fry 350~g chopped onions, 150~g chopped red pepper, 400~g chopped celery, 300~g chopped fennel, 100~g chopped mushrooms and 2~cloves crushed garlic in 2~tbsp oil until soft. \item In a casserole dish layer 225~g red lentils, 3~tsp mixed herbs and the vegetable mix 3 times. \item Layer 500~g sliced potatoes on top. \item Pour on 2~tsp miso dissolved in 500~ml boiling water. \item Bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 50 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Quick Dummy}% Calories: 352 \begin{ingredients} 200 g cous cous \\ \end{ingredients} \index{Quick!Quick Dummy} \begin{instructions} \item In a sauce pan place 200~g cous cous pour on 300~ml boiling water, and cover, leave to stand for 10 minutes.
\end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{August} \begin{recipelist} {\scriptsize[1-2]} Chicken Pilaf\\ {\scriptsize[3-4]} Chick Pea Risotto\\ {\scriptsize[5-6]} Aubergine and Peppers\\ {\scriptsize[7]} Beans and Fried Eggs\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Sausages in Cider\\ {\scriptsize[10-11]} Egg Curry\\ {\scriptsize[12-13]} Lentil Dhansak\\ {\scriptsize[14]} Tortellini and Cheese\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 400 g aubergine {\scriptsize[5-6]}\\ 1200 g baking potatoes {\scriptsize[5-6]}\\ 400 g courgette {\scriptsize[5-6]}\\ 9 cloves garlic {\scriptsize[1-2, 3-4, 5-6]}\\ 150 g green pepper {\scriptsize[5-6]}\\ 950 g onions {\scriptsize[1-2, 3-4, 5-6]}\\ 1 red chilli {\scriptsize[3-4]}\\ 150 g red pepper {\scriptsize[5-6]}\\ 1000 g sweet potatoes {\scriptsize[3-4]}\\ 600 g tomatoes {\scriptsize[5-6, 7]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 400 g baked beans {\scriptsize[7]}\\ 400 g can chick peas {\scriptsize[3-4]}\\ 800 g can tomatoes {\scriptsize[3-4, 12-13]}\\ 150 g cashew nuts {\scriptsize[12-13]}\\ 400 ml cider {\scriptsize[8-9]}\\ 400 g cous cous {\scriptsize[12-13]}\\ 300 g creamed coconut {\scriptsize[10-11, 12-13]}\\ 100 g flaked almonds {\scriptsize[1-2]}\\ 4 pitta bread {\scriptsize[7]}\\ 2 tbsp plain flour {\scriptsize[10-11]}\\ 75 g raisins {\scriptsize[3-4]}\\ 225 g red lentils {\scriptsize[12-13]}\\ 300 g tomato puree {\scriptsize[5-6, 10-11]}\\ 200 g tortellini {\scriptsize[14]}\\ 1200 g white rice {\scriptsize[1-2, 3-4, 10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 600 g chicken {\scriptsize[1-2]}\\ 500 g sausages {\scriptsize[8-9]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 6 eggs {\scriptsize[7, 10-11]}\\ 170 g parmesan cheese {\scriptsize[3-4, 14]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 1 tbsp chilli powder \\ 6 cloves \\ 1 tsp cumin seeds \\ 2 tbsp curry powder \\ 1.5 tsp ground cinnamon \\ 1 tsp ground cumin \\ 3 tsp mustard seeds \\ 16 tbsp oil \\ 300 ml sherry \\ 2 cube stock \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 1 apple {\scriptsize[8-9]}\\ 1200 g boiling potatoes {\scriptsize[8-9]}\\ 100 g carrots {\scriptsize[12-13]}\\ 4 cloves garlic {\scriptsize[8-9, 12-13]}\\ 1 lemon {\scriptsize[12-13]}\\ 800 g onions {\scriptsize[8-9, 10-11, 12-13]}\\ 150 g red pepper {\scriptsize[12-13]}\\ 400 g runner beans {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Chicken Pilaf}% Calories: 2897 \begin{ingredients} 600 g chicken \\ 200 g onions (chopped) \\ 5 cloves garlic (crushed) \\ 4 tbsp oil \\ 1 tsp ground cumin \\ 0.5 tsp ground cinnamon \\ 6 cloves \\ 400 g white rice \\ 1 cube stock (crumbled) \\ 100 g flaked almonds \\ \end{ingredients} \index{Meat!Chicken Pilaf} \begin{instructions} \item In a large wok fry 600~g chicken, 200~g chopped onions and 5~cloves crushed garlic in 4~tbsp oil until chicken done. \item Stir in 1~tsp ground cumin, 0.5~tsp ground cinnamon, and 6 cloves and fry for 2 minutes. \item Stir in 400~g white rice, 1~cube crumbled stock, and 800~ml cold water and bake, covered (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 30 minutes. \item Sprinkle on 100~g flaked almonds before serving.
\end{instructions} \end{recipe}% \begin{recipe}{2}{Chick Pea Risotto}% Calories: 3613 \begin{ingredients} 400 g onions (chopped) \\ 1 red chilli (chopped) \\ 2 cloves garlic (crushed) \\ 2 tbsp oil \\ 1000 g sweet potatoes (sliced) \\ 400 g can chick peas (drained) \\ 400 g can tomatoes (chopped) \\ 300 ml sherry \\ 75 g raisins \\ 1 cube stock (crumbled) \\ 400 g white rice \\ 150 g parmesan cheese (grated) \\ \end{ingredients} \index{Vegetarian!Chick Pea Risotto} \begin{instructions} \item In a large wok fry 400~g chopped onions, 1 chopped red chilli and 2~cloves crushed garlic in 2~tbsp oil until soft. \item Stir in 1000~g sliced sweet potatoes, 400~g drained can chick peas, 400~g chopped can tomatoes, 300~ml sherry, 75~g raisins, 1~cube crumbled stock, 400~g white rice and 500~ml cold water and bring to a boil. \item Put in a casserole dish, cover and bake (160$^{\circ}$C, Gas 3, 325$^{\circ}$F) 20 minutes. \item Serve with 150~g grated parmesan cheese. \end{instructions} \end{recipe}% \begin{recipe}{2}{Aubergine and Peppers}% Calories: 2683 \begin{ingredients} 1200 g baking potatoes (washed) \\ 350 g onions (sliced) \\ 2 cloves garlic (crushed) \\ 4 tbsp oil \\ 150 g red pepper (sliced) \\ 150 g green pepper (sliced) \\ 400 g aubergine (sliced) \\ 400 g courgette (sliced) \\ 150 g tomato puree \\ 400 g tomatoes (chopped) \\ \end{ingredients} \index{Vegan!Aubergine and Peppers} \begin{instructions} \item Bake 1200~g washed baking potatoes at (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) for 40 minutes. \item In a large wok fry 350~g sliced onions and 2~cloves crushed garlic in 4~tbsp oil until soft. \item Stir in 150~g sliced red pepper, 150~g sliced green pepper, 400~g sliced aubergine and 400~g sliced courgette until soft. \item Stir in 150~g tomato puree, 100~ml cold water and 400~g chopped tomatoes, and bring to the boil. \item Put in a casserole dish and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 40 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Beans and Fried Eggs}% Calories: 706 \begin{ingredients} 400 g baked beans \\ 2 eggs \\ 3 tbsp oil \\ 4 pitta bread \\ 200 g tomatoes (sliced) \\ \end{ingredients} \index{Quick!Beans and Fried Eggs} \begin{instructions} \item In a sauce pan warm 400~g baked beans. \item In a large wok fry 2 eggs in 3~tbsp oil. \item Grill 4 pitta bread. \item Serve with 200~g sliced tomatoes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Sausages in Cider}% Calories: 2519 \begin{ingredients} 1200 g boiling potatoes (chopped) \\ 500 g sausages \\ 250 g onions (quartered) \\ 2 cloves garlic (crushed) \\ 1 apple (sliced) \\ 400 ml cider \\ \end{ingredients} \index{Meat!Sausages in Cider} \begin{instructions} \item In a sauce pan boil 1200~g chopped boiling potatoes then mash. \item In a casserole dish put 500~g sausages, 250~g quartered onions and 2~cloves crushed garlic and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 15 minutes. \item Add 1 sliced apple and 400~ml cider and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 20 minutes.
\end{instructions} \end{recipe}% \begin{recipe}{2}{Egg Curry}% Calories: 2039 \begin{ingredients} 4 eggs \\ 400 g white rice \\ 350 g onions (chopped) \\ 3 tbsp oil \\ 1 tbsp chilli powder \\ 2 tsp mustard seeds \\ 2 tbsp plain flour \\ 1 tbsp curry powder \\ 150 g tomato puree \\ 100 g creamed coconut (crumbled) \\ 400 g runner beans (chopped) \\ \end{ingredients} \index{Vegetarian!Egg Curry} \index{Curry!Egg Curry} \begin{instructions} \item In a sauce pan boil 4 eggs. \item In a sauce pan place 400~g white rice pour on 800~ml cold water, bring to the boil then turn off, cover and stand for 25 minutes. \item In a large wok fry 350~g chopped onions in 3~tbsp oil until soft. \item Stir in 1~tbsp chilli powder and 2~tsp mustard seeds and fry until mustard seeds start to pop. \item Stir in 2~tbsp plain flour and 1~tbsp curry powder until it is all absorbed. \item Stir in 150~g tomato puree, 100~g crumbled creamed coconut and 400~ml cold water and warm through. \item Stir in 400~g chopped runner beans and the eggs and simmer for 10 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Lentil Dhansak}% Calories: 3366 \begin{ingredients} 225 g red lentils \\ 200 g onions (chopped) \\ 2 cloves garlic (crushed) \\ 100 g carrots (sliced) \\ 1 tbsp curry powder \\ 1 tsp cumin seeds \\ 1 tsp mustard seeds \\ 1 tsp ground cinnamon \\ 400 g can tomatoes (chopped) \\ 150 g cashew nuts \\ 200 g creamed coconut \\ 150 g red pepper (chopped) \\ 400 g cous cous \\ 1 lemon (juiced) \\ \end{ingredients} \index{Vegan!Lentil Dhansak} \index{Curry!Lentil Dhansak} \begin{instructions} \item In a sauce pan simmer 225~g red lentils with 500~ml cold water until soft. \item In a large wok fry 200~g chopped onions, 2~cloves crushed garlic and 100~g sliced carrots until soft. \item Stir in 1~tbsp curry powder, 1~tsp cumin seeds, 1~tsp mustard seeds and 1~tsp ground cinnamon and fry for 2 minutes. \item Stir in 400~g chopped can tomatoes, 150~g cashew nuts, 200~g creamed coconut and 150~g chopped red pepper and simmer for 20 minutes. \item In a sauce pan place 400~g cous cous pour on 600~ml boiling water, and cover, leave to stand for 10 minutes. \item Serve with 1 juiced lemon. \end{instructions} \end{recipe}% \begin{recipe}{1}{Tortellini and Cheese}% Calories: 700 \begin{ingredients} 200 g tortellini \\ 20 g parmesan cheese (grated) \\ \end{ingredients} \index{Quick!Tortellini and Cheese} \begin{instructions} \item In a sauce pan place 200~g tortellini pour on 400~ml boiling water, boil for 5 minutes, then drain. \item Sprinkle with 20~g grated parmesan cheese.
\end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{September} \begin{recipelist} {\scriptsize[1-2]} Lentils and Spicy Sausage\\ {\scriptsize[3-4]} Russian Potatoes\\ {\scriptsize[5-6]} Lentil Shepherds Pie\\ {\scriptsize[7]} Fish Fingers and Baked Beans\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Courgettes and Bacon\\ {\scriptsize[10-11]} Sweet Corn Pizza\\ {\scriptsize[12-13]} Caribbean Beans\\ {\scriptsize[14]} Tagliatelle and Olive Oil\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 3000 g boiling potatoes {\scriptsize[3-4, 5-6]}\\ 200 g carrots {\scriptsize[5-6]}\\ 400 g celery {\scriptsize[5-6]}\\ 10 cloves garlic {\scriptsize[1-2, 3-4, 5-6]}\\ 1050 g onions {\scriptsize[1-2, 5-6]}\\ 50 g spring onions {\scriptsize[3-4]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 100 g anchovy fillets {\scriptsize[10-11]}\\ 400 g brown rice {\scriptsize[1-2]}\\ 400 g can baked beans {\scriptsize[7]}\\ 400 g can chick peas {\scriptsize[12-13]}\\ 400 g can coconut milk {\scriptsize[12-13]}\\ 400 g can kidney beans {\scriptsize[12-13]}\\ 200 g can sweetcorn {\scriptsize[10-11]}\\ 800 g can tomatoes {\scriptsize[1-2]}\\ 50 g capers {\scriptsize[3-4]}\\ 280 g fish fingers {\scriptsize[7]}\\ 250 g green lentils {\scriptsize[5-6]}\\ 200 g olives {\scriptsize[14]}\\ 200 g pineapple chunks {\scriptsize[10-11]}\\ 2 pizza bases {\scriptsize[10-11]}\\ 250 g red lentils {\scriptsize[1-2]}\\ 375 g tomato puree {\scriptsize[5-6, 10-11]}\\ 400 g white rice {\scriptsize[12-13]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 340 g bacon {\scriptsize[1-2, 8-9]}\\ 4 burgers {\scriptsize[3-4]}\\ 250 g chorizo sausage {\scriptsize[1-2]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 450 g cheddar cheese {\scriptsize[3-4, 8-9, 10-11]}\\ 600 g cottage cheese {\scriptsize[3-4]}\\ 2 eggs {\scriptsize[8-9]}\\ 4 tbsp milk {\scriptsize[8-9]}\\ 50 g parmesan cheese {\scriptsize[14]}\\ 300 ml soured cream {\scriptsize[3-4]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 1 bay leaf \\ 1 tsp chilli powder \\ 1 tsp dried thyme \\ 2 tsp ground coriander \\ 1 tsp ground cumin \\ 1 tsp marjoram \\ 3 tsp miso \\ 10 tbsp oil \\ 4 tbsp olive oil \\ 2 tsp oregano \\ 2 cube stock \\ 50 g sunflower seeds \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 1200 g baking potatoes {\scriptsize[8-9]}\\ 600 g courgette {\scriptsize[8-9, 14]}\\ 9 cloves garlic {\scriptsize[8-9, 12-13, 14]}\\ 300 g green pepper {\scriptsize[10-11, 12-13]}\\ 750 g onions {\scriptsize[8-9, 12-13]}\\ 2 red chilli {\scriptsize[12-13, 14]}\\ 150 g red pepper {\scriptsize[14]}\\ 400 g tomatoes {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Lentils and Spicy Sausage}% Calories: 3031 \begin{ingredients} 400 g brown rice \\ 350 g onions (chopped) \\ 2 cloves garlic (crushed) \\ 100 g bacon (chopped) \\ 3 tbsp oil \\ 250 g red lentils \\ 800 g can tomatoes \\ 250 g chorizo sausage (sliced) \\ 1 tsp chilli powder \\ \end{ingredients} \index{Meat!Lentils and Spicy Sausage} \begin{instructions} \item In a sauce pan place 400~g brown rice pour on 800~ml cold water, bring to the boil then cover and simmer for 15 minutes, then leave to stand for 10 minutes. \item In a large wok fry 350~g chopped onions, 2~cloves crushed garlic and 100~g chopped bacon in 3~tbsp oil until bacon done. 
\item Stir in 250~g red lentils, 800~g can tomatoes, 250~g sliced chorizo sausage, 1~tsp chilli powder and 400~ml cold water and simmer for 25 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Russian Potatoes}% Calories: 3407 \begin{ingredients} 2000 g boiling potatoes (cubed) \\ 600 g cottage cheese \\ 300 ml soured cream \\ 150 g cheddar cheese (grated) \\ 50 g capers \\ 4 cloves garlic (crushed) \\ 4 burgers \\ 50 g spring onions (chopped) \\ \end{ingredients} \index{Vegetarian!Russian Potatoes} \begin{instructions} \item In a sauce pan boil 2000~g cubed boiling potatoes until not quite done, then drain. \item Stir in 600~g cottage cheese, 300~ml soured cream, 150~g grated cheddar cheese, 50~g capers, and 4~cloves crushed garlic and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 30 minutes. \item Grill 4 burgers and serve with the potatoes. \item Sprinkle with 50~g chopped spring onions to serve. \end{instructions} \end{recipe}% \begin{recipe}{2}{Lentil Shepherds Pie}% Calories: 1710 \begin{ingredients} 1000 g boiling potatoes (chopped) \\ 700 g onions (chopped) \\ 4 cloves garlic (crushed) \\ 400 g celery (chopped) \\ 200 g carrots (sliced) \\ 2 tbsp oil \\ 250 g green lentils \\ 3 tsp miso \\ 1 cube stock (crumbled) \\ 75 g tomato puree \\ 1 tsp marjoram \\ 1 tsp ground cumin \\ 2 tsp ground coriander \\ 50 g sunflower seeds \\ \end{ingredients} \index{Vegan!Lentil Shepherds Pie} \index{Shepherds Pie!Lentil Shepherds Pie} \begin{instructions} \item In a sauce pan boil 1000~g chopped boiling potatoes then mash. \item In a large wok fry 700~g chopped onions, 4~cloves crushed garlic, 400~g chopped celery, and 200~g sliced carrots in 2~tbsp oil until soft. \item Stir in 250~g green lentils, 3~tsp miso, 1~cube crumbled stock, 75~g tomato puree, 650~ml boiling water, 1~tsp marjoram, 1~tsp ground cumin and 2~tsp ground coriander then bring to the boil, then simmer for 20 minutes. \item Take off heat, put in a casserole dish, cover with the mashed potatoes and sprinkle with 50~g sunflower seeds, then bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 20 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Fish Fingers and Baked Beans}% Calories: 1317 \begin{ingredients} 400 g can baked beans \\ 280 g fish fingers \\ \end{ingredients} \index{Quick!Fish Fingers and Baked Beans} \begin{instructions} \item In a sauce pan heat up 400~g can baked beans. \item Grill 280~g fish fingers. \end{instructions} \end{recipe}% \begin{recipe}{2}{Courgettes and Bacon}% Calories: 3846 \begin{ingredients} 1200 g baking potatoes (washed) \\ 400 g courgette (sliced) \\ 400 g onions (sliced) \\ 2 cloves garlic (crushed) \\ 2 tbsp oil \\ 240 g bacon (chopped) \\ 100 g cheddar cheese (grated) \\ 2 eggs (beaten) \\ 4 tbsp milk \\ \end{ingredients} \index{Meat!Courgettes and Bacon} \begin{instructions} \item Bake 1200~g washed baking potatoes at (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) for 40 minutes, then slice. \item In a large wok fry 400~g sliced courgette, 400~g sliced onions and 2~cloves crushed garlic in 2~tbsp oil until soft. \item Stir in 240~g chopped bacon, 100~g grated cheddar cheese, 2 beaten eggs, and 4~tbsp milk. Layer in a casserole dish with the potatoes and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 35 minutes.
\end{instructions} \end{recipe}% \begin{recipe}{2}{Sweet Corn Pizza}% Calories: 1682 \begin{ingredients} 2 pizza bases \\ 300 g tomato puree \\ 400 g tomatoes (chopped) \\ 200 g can sweetcorn \\ 150 g green pepper (chopped) \\ 100 g anchovy fillets \\ 200 g pineapple chunks \\ 200 g cheddar cheese (grated) \\ 2 tsp oregano \\ \end{ingredients} \index{Vegetarian!Sweet Corn Pizza} \index{Pizza!Sweet Corn Pizza} \begin{instructions} \item On 2 pizza bases spread 300~g tomato puree, 400~g chopped tomatoes, 200~g can sweetcorn, 150~g chopped green pepper, 100~g anchovy fillets, 200~g pineapple chunks, 200~g grated cheddar cheese and 2~tsp oregano. \item Bake (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) 20 minutes or until golden. \end{instructions} \end{recipe}% \begin{recipe}{2}{Caribbean Beans}% Calories: 2586 \begin{ingredients} 400 g white rice \\ 350 g onions (chopped) \\ 150 g green pepper (chopped) \\ 1 red chilli (chopped) \\ 3 cloves garlic (crushed) \\ 3 tbsp oil \\ 400 g can kidney beans \\ 400 g can chick peas \\ 1 tsp dried thyme \\ 1 cube stock (crumbled) \\ 400 g can coconut milk \\ \end{ingredients} \index{Vegan!Caribbean Beans} \begin{instructions} \item In a sauce pan place 400~g white rice pour on 800~ml cold water, bring to the boil then turn off, cover and stand for 25 minutes. \item In a large wok fry 350~g chopped onions, 150~g chopped green pepper, 1 chopped red chilli and 3~cloves crushed garlic in 3~tbsp oil until soft. \item Stir in 400~g can kidney beans and 400~g can chick peas in their liquid, cover and simmer for 5 minutes. \item Stir in 1~tsp dried thyme, 1~cube crumbled stock, and 400~g can coconut milk. Cover and simmer for 10 minutes. \item Stir in the rice and serve. \end{instructions} \end{recipe}% \begin{recipe}{1}{Tagliatelle and Olive Oil}% Calories: 1097 \begin{ingredients} 4 cloves garlic (crushed) \\ 200 g courgette (sliced) \\ 150 g red pepper (sliced) \\ 1 bay leaf (crumbled) \\ 1 red chilli (chopped) \\ 4 tbsp olive oil \\ 200 g olives \\ 50 g parmesan cheese (grated) \\ \end{ingredients} \index{Quick!Tagliatelle and Olive Oil} \begin{instructions} \item Cook the tagliatelle, then drain, saving 150~ml of the water. \item In a large wok fry 4~cloves crushed garlic, 200~g sliced courgette, 150~g sliced red pepper, 1 crumbled bay leaf and 1 chopped red chilli in 4~tbsp olive oil until garlic browns. \item Stir in the tagliatelle, the water and 200~g olives and simmer for 3 minutes. \item Sprinkle with 50~g grated parmesan cheese.
\end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{October} \begin{recipelist} {\scriptsize[1-2]} Shepherds Pie\\ {\scriptsize[3-4]} Spinach and Olive Lasagna\\ {\scriptsize[5-6]} Chick Pea and Spinach Curry\\ {\scriptsize[7]} Quick Dummy\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Sausage and Beans\\ {\scriptsize[10-11]} Red Pepper Pasta\\ {\scriptsize[12-13]} Vegetable Cous Cous\\ {\scriptsize[14]} Quick Dummy\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 1000 g boiling potatoes {\scriptsize[1-2]}\\ 400 g broccoli {\scriptsize[3-4]}\\ 400 g carrots {\scriptsize[1-2]}\\ 400 g celery {\scriptsize[1-2]}\\ 3 chillis {\scriptsize[1-2]}\\ 8 cloves garlic {\scriptsize[3-4, 5-6]}\\ 700 g mushrooms {\scriptsize[3-4, 5-6]}\\ 1200 g onions {\scriptsize[1-2, 3-4, 5-6]}\\ 750 g spinach {\scriptsize[3-4, 5-6]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 400 g brown rice {\scriptsize[5-6]}\\ 400 g can borlotti beans {\scriptsize[8-9]}\\ 1200 g can chick peas {\scriptsize[5-6, 8-9, 12-13]}\\ 2000 g can tomatoes {\scriptsize[3-4, 8-9, 10-11, 12-13]}\\ 800 g cous cous {\scriptsize[7, 12-13, 14]}\\ 100 g flaked almonds {\scriptsize[12-13]}\\ 400 g frozen peas {\scriptsize[10-11]}\\ 4 tsp gravy granules {\scriptsize[1-2]}\\ 200 g lasagna {\scriptsize[3-4]}\\ 200 g olives {\scriptsize[3-4]}\\ 400 ml passata {\scriptsize[3-4]}\\ 400 g pasta shapes {\scriptsize[10-11]}\\ 1 tbsp plain flour {\scriptsize[1-2]}\\ 280 g tomato puree {\scriptsize[8-9, 10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 400 g minced meat {\scriptsize[1-2]}\\ 500 g sausages {\scriptsize[8-9]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 50 g butter {\scriptsize[1-2]}\\ 350 g cheddar cheese {\scriptsize[1-2, 3-4]}\\ 250 g cottage cheese {\scriptsize[3-4]}\\ 2 eggs {\scriptsize[3-4]}\\ 4 tbsp milk {\scriptsize[1-2]}\\ 1 tbsp parmesan cheese {\scriptsize[3-4]}\\ 500 g yoghurt {\scriptsize[12-13]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 2 bay leaf \\ 2 tsp cumin seeds \\ 2 tsp garam masala \\ 1 tsp ground cinnamon \\ 1 tsp ground turmeric \\ 2 tbsp honey \\ 24 tbsp oil \\ 2 tsp oregano \\ 2 tbsp soy sauce \\ 3 cube stock \\ 1 tsp thyme \\ 2 tbsp wine vinegar \\ 2 tbsp worcester sauce \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 1200 g baking potatoes {\scriptsize[8-9]}\\ 500 g cauliflower {\scriptsize[12-13]}\\ 1 chillis {\scriptsize[10-11]}\\ 225 g courgette {\scriptsize[12-13]}\\ 6 cloves garlic {\scriptsize[8-9, 12-13]}\\ 900 g onions {\scriptsize[8-9, 10-11, 12-13]}\\ 2 red chilli {\scriptsize[12-13]}\\ 3 red pepper {\scriptsize[10-11]}\\ 225 g runner beans {\scriptsize[12-13]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Shepherds Pie}% Calories: 3272 \begin{ingredients} 1000 g boiling potatoes (cubed) \\ 50 g butter \\ 4 tbsp milk \\ 4 tbsp oil \\ 400 g onions (chopped) \\ 400 g carrots (chopped) \\ 400 g celery (chopped) \\ 3 chillis (chopped) \\ 400 g minced meat \\ 2 bay leaf \\ 1 tsp thyme \\ 1 tbsp plain flour \\ 2 tbsp worcester sauce \\ 4 tsp gravy granules \\ 100 g cheddar cheese (grated) \\ \end{ingredients} \index{Meat!Shepherds Pie} \index{Shepherds Pie!Shepherds Pie} \begin{instructions} \item In a sauce pan boil 1000~g cubed boiling potatoes, then mash with 50~g butter and 4~tbsp milk. 
\item In a large wok in 4~tbsp oil fry 400~g chopped onions, 400~g chopped carrots, 400~g chopped celery and 3 chopped chillis until soft. \item Stir in 400~g minced meat. \item Stir in 2 bay leaf, 1~tsp thyme, 1~tbsp plain flour and 2~tbsp worcester sauce and cook until the meat is done. \item Stir in 4~tsp gravy granules and 250~ml boiling water. \item Put in a casserole dish, cover with the mashed potatoes and sprinkle with 100~g grated cheddar cheese, then bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 20 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Spinach and Olive Lasagna}% Calories: 3210 \begin{ingredients} 4 tbsp oil \\ 4 cloves garlic (crushed) \\ 400 g onions (chopped) \\ 400 g can tomatoes (chopped) \\ 400 ml passata \\ 400 g broccoli \\ 200 g olives \\ 200 g mushrooms \\ 1 tbsp parmesan cheese (grated) \\ 250 g cheddar cheese (grated) \\ 250 g cottage cheese \\ 2 eggs (beaten) \\ 250 g spinach (hacked) \\ 200 g lasagna \\ \end{ingredients} \index{Vegetarian!Spinach and Olive Lasagna} \index{Lasagna!Spinach and Olive Lasagna} \begin{instructions} \item In a large wok in 4~tbsp oil fry 4~cloves crushed garlic and 400~g chopped onions until soft. \item Stir in 400~g chopped can tomatoes, 400~ml passata, 400~g broccoli, 200~g olives and 200~g mushrooms and cook until mushrooms are done. \item In a mixing bowl mix 1~tbsp grated parmesan cheese, 250~g grated cheddar cheese, 250~g cottage cheese, 2 beaten eggs and 250~g hacked spinach. \item In a casserole dish layer 200~g lasagna with the tomatoes and cheese 3 times. Bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 50 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Chick Pea and Spinach Curry}% Calories: 1799 \begin{ingredients} 400 g brown rice \\ 2 tsp cumin seeds \\ 4 tbsp oil \\ 4 cloves garlic (crushed) \\ 400 g onions (chopped) \\ 500 g spinach (hacked) \\ 500 g mushrooms (sliced) \\ 400 g can chick peas (drained) \\ 2 tbsp soy sauce \\ 2 tsp garam masala \\ \end{ingredients} \index{Vegan!Chick Pea and Spinach Curry} \index{Curry!Chick Pea and Spinach Curry} \begin{instructions} \item In a sauce pan place 400~g brown rice pour on 800~ml cold water, bring to the boil then cover and simmer for 15 minutes, then leave to stand for 10 minutes. \item In a large wok fry 2~tsp cumin seeds in 4~tbsp oil for 2 minutes. \item Stir in 4~cloves crushed garlic and 400~g chopped onions and fry until soft. \item Stir in 500~g hacked spinach, 500~g sliced mushrooms, 400~g drained can chick peas, 2~tbsp soy sauce and 2~tsp garam masala and simmer until spinach wilts. \end{instructions} \end{recipe}% \begin{recipe}{1}{Quick Dummy}% Calories: 352 \begin{ingredients} 200 g cous cous \\ \end{ingredients} \index{Quick!Quick Dummy} \begin{instructions} \item In a sauce pan place 200~g cous cous pour on 300~ml boiling water, and cover, leave to stand for 10 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Sausage and Beans}% Calories: 4767 \begin{ingredients} 1200 g baking potatoes (washed) \\ 500 g sausages \\ 350 g onions (sliced) \\ 4 cloves garlic (crushed) \\ 4 tbsp oil \\ 400 g can chick peas (drained) \\ 400 g can borlotti beans (drained) \\ 140 g tomato puree \\ 400 g can tomatoes (chopped) \\ \end{ingredients} \index{Meat!Sausage and Beans} \begin{instructions} \item Bake 1200~g washed baking potatoes at (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) for 40 minutes. \item In a casserole dish put 500~g sausages and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 20 minutes.
\item In a large wok fry 350~g sliced onions and 4~cloves crushed garlic in 4~tbsp oil until soft. \item Stir in 400~g drained can chick peas, 400~g drained can borlotti beans, 140~g tomato puree and 400~g chopped can tomatoes and warm through. \item Pour beans over the sausages and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 30 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Red Pepper Pasta}% Calories: 1766 \begin{ingredients} 350 g onions (chopped) \\ 1 chilli (chopped) \\ 3 red pepper (chopped) \\ 4 tbsp oil \\ 400 g can tomatoes \\ 140 g tomato puree \\ 2 tbsp wine vinegar \\ 2 tbsp honey \\ 2 cube stock (crumbled) \\ 2 tsp oregano \\ 400 g frozen peas \\ 400 g pasta shapes \\ \end{ingredients} \index{Vegan!Red Pepper Pasta} \begin{instructions} \item In a sauce pan fry 350~g chopped onions, 1 chopped chilli and 3 chopped red pepper in 4~tbsp oil until soft. \item Add 400~g can tomatoes, 140~g tomato puree, 2~tbsp wine vinegar, 2~tbsp honey, 2~cube crumbled stock, 2~tsp oregano and 300~ml cold water and simmer for 20 minutes. \item Stir in 400~g frozen peas and warm through. \item In a sauce pan place 400~g pasta shapes pour on 800~ml boiling water, cover and simmer for 12 minutes, then drain. \end{instructions} \end{recipe}% \begin{recipe}{2}{Vegetable Cous Cous}% Calories: 3022 \begin{ingredients} 200 g onions (chopped) \\ 2 red chilli (chopped) \\ 225 g courgette (sliced) \\ 225 g runner beans (chopped) \\ 500 g cauliflower \\ 2 cloves garlic (crushed) \\ 4 tbsp oil \\ 1 tsp ground turmeric \\ 1 tsp ground cinnamon \\ 1 cube stock (crumbled) \\ 400 g can chick peas (drained) \\ 800 g can tomatoes (chopped) \\ 400 g cous cous \\ 100 g flaked almonds (toasted) \\ 500 g yoghurt \\ \end{ingredients} \index{Vegetarian!Vegetable Cous Cous} \begin{instructions} \item In a large wok fry 200~g chopped onions, 2 chopped red chilli, 225~g sliced courgette, 225~g chopped runner beans, 500~g cauliflower and 2~cloves crushed garlic in 4~tbsp oil until soft. \item Stir in 1~tsp ground turmeric, 1~tsp ground cinnamon, 1~cube crumbled stock, 400~g drained can chick peas, 800~g chopped can tomatoes and 300~ml cold water and bring to the boil. \item Make a space in the middle and pour in 400~g cous cous, cover, and leave to stand for 10 minutes. \item Serve with 100~g toasted flaked almonds and 500~g yoghurt. \end{instructions} \end{recipe}% \begin{recipe}{1}{Quick Dummy}% Calories: 352 \begin{ingredients} 200 g cous cous \\ \end{ingredients} \index{Quick!Quick Dummy} \begin{instructions} \item In a sauce pan place 200~g cous cous pour on 300~ml boiling water, and cover, leave to stand for 10 minutes.
\end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{November} \begin{recipelist} {\scriptsize[1-2]} German Sausages, Mash and Sauerkraut\\ {\scriptsize[3-4]} Vegetables and Pasta\\ {\scriptsize[5-6]} Sweet and Sour Celery\\ {\scriptsize[7]} Pizza and Swedish salad\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Chicken and Carrot Tagine\\ {\scriptsize[10-11]} Green Lentil Lasagne\\ {\scriptsize[12-13]} Coconut Dahl\\ {\scriptsize[14]} Spam and Mustard Pasta\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 250 g cabbage {\scriptsize[7]}\\ 50 g carrots {\scriptsize[7]}\\ 650 g celery {\scriptsize[3-4, 5-6]}\\ 400 g courgette {\scriptsize[3-4]}\\ 4 cloves garlic {\scriptsize[3-4]}\\ 100 g mushrooms {\scriptsize[3-4]}\\ 450 g onions {\scriptsize[3-4, 5-6, 7]}\\ 150 g red pepper {\scriptsize[3-4]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 400 g brown rice {\scriptsize[12-13]}\\ 200 g can sweetcorn {\scriptsize[3-4]}\\ 400 g can tomatoes {\scriptsize[5-6]}\\ 400 g cous cous {\scriptsize[8-9]}\\ 100 g desiccated coconut {\scriptsize[12-13]}\\ 200 g green lentils {\scriptsize[10-11]}\\ 450 g lasagna {\scriptsize[10-11]}\\ 200 g olives {\scriptsize[5-6]}\\ 600 g pasta shapes {\scriptsize[3-4, 14]}\\ 2 pizza {\scriptsize[7]}\\ 3 tbsp plain flour {\scriptsize[3-4, 10-11]}\\ 2 tbsp and 50 g raisins {\scriptsize[8-9, 12-13]}\\ 225 g red lentils {\scriptsize[12-13]}\\ 200 g spam {\scriptsize[14]}\\ 400 g white rice {\scriptsize[5-6]}\\ 100 g whole grain mustard {\scriptsize[14]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 400 g chicken {\scriptsize[8-9]}\\ 400 g sausages {\scriptsize[1-2]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 1000 ml milk {\scriptsize[3-4, 10-11]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 1 pinch basil \\ 1 bay leaf \\ 4 cloves \\ 1.5 tsp ground cinnamon \\ 7 tsp ground coriander \\ 2 tsp ground ginger \\ 1 tsp ground nutmeg \\ 1 tbsp ground turmeric \\ 1 tbsp honey \\ 2 tsp miso \\ 3 tsp mixed herbs \\ 18 tbsp oil \\ 1 pinch oregano \\ 1 pinch pepper \\ 1 pinch salt \\ 1 tbsp soy sauce \\ 2 cube stock \\ 2 tbsp sugar \\ 5 tbsp wine vinegar \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 1200 g baking potatoes {\scriptsize[10-11]}\\ 1000 g carrots {\scriptsize[8-9]}\\ 6 cloves garlic {\scriptsize[8-9, 10-11, 14]}\\ 60 g ginger {\scriptsize[12-13]}\\ 2 lemon {\scriptsize[8-9, 12-13]}\\ 600 g mushrooms {\scriptsize[10-11, 14]}\\ 800 g onions {\scriptsize[8-9, 10-11, 12-13]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{German Sausages, Mash and Sauerkraut}% Calories: 1204 \begin{ingredients} 400 g sausages \\ \end{ingredients} \index{Meat!German Sausages, Mash and Sauerkraut} \begin{instructions} \item Cook 400~g sausages and serve with mash and sauerkraut. \end{instructions} \end{recipe}% \begin{recipe}{2}{Vegetables and Pasta}% Calories: 1635 \begin{ingredients} 400 g pasta shapes \\ 200 g onions (chopped) \\ 250 g celery (chopped) \\ 3 tbsp oil \\ 150 g red pepper (chopped) \\ 400 g courgette (sliced) \\ 4 cloves garlic (crushed) \\ 100 g mushrooms (chopped) \\ 200 g can sweetcorn \\ 1 tbsp plain flour \\ 500 ml milk \\ 1 tbsp soy sauce \\ 2 tsp ground coriander \\ 2 tsp miso \\ \end{ingredients} \index{Vegetarian!Vegetables and Pasta} \begin{instructions} \item In a sauce pan place 400~g pasta shapes pour on 800~ml boiling water, cover
and simmer for 12 minutes, then drain. \item In a large wok fry 200~g chopped onions and 250~g chopped celery in 3~tbsp oil until soft. \item Stir in 150~g chopped red pepper, 400~g sliced courgette, 4~cloves crushed garlic, 100~g chopped mushrooms and 200~g can sweetcorn and warm through. \item Stir in 1~tbsp plain flour. \item Remove from heat and stir in 500~ml milk until a sauce is formed. \item Stir in the pasta, 1~tbsp soy sauce, 2~tsp ground coriander and 2~tsp miso and warm through. \end{instructions} \end{recipe}% \begin{recipe}{2}{Sweet and Sour Celery}% Calories: 1662 \begin{ingredients} 400 g white rice \\ 400 g celery (sliced) \\ 200 g onions (chopped) \\ 4 tbsp oil \\ 400 g can tomatoes (chopped) \\ 3 tbsp wine vinegar \\ 1 tbsp sugar \\ 4 cloves \\ 1 tsp ground cinnamon \\ 200 g olives \\ \end{ingredients} \index{Vegan!Sweet and Sour Celery} \begin{instructions} \item In a sauce pan place 400~g white rice pour on 800~ml cold water, bring to the boil then turn off, cover and stand for 25 minutes. \item In a large wok fry 400~g sliced celery and 200~g chopped onions in 4~tbsp oil until soft. \item Stir in 400~g chopped can tomatoes and simmer for 10 minutes. \item Stir in 3~tbsp wine vinegar, 1~tbsp sugar, 4 cloves, 1~tsp ground cinnamon and 200~g olives and simmer for 10 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Pizza and Swedish salad}% Calories: 445 \begin{ingredients} 2 pizza \\ 250 g cabbage (sliced) \\ 50 g carrots (sliced) \\ 50 g onions (sliced) \\ 1 tbsp sugar \\ 2 tbsp wine vinegar \\ 2 tbsp oil \\ 1 pinch basil \\ 1 pinch oregano \\ 1 pinch salt \\ 1 pinch pepper \\ \end{ingredients} \index{Quick!Pizza and Swedish salad} \index{Pizza!Pizza and Swedish salad} \begin{instructions} \item Bake 2 pizza (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) 15 minutes. \item In a mixing bowl mix 250~g sliced cabbage, 50~g sliced carrots and 50~g sliced onions. \item Stir in 1~tbsp sugar, 2~tbsp wine vinegar, 2~tbsp oil, 1~pinch basil, 1~pinch oregano, 1~pinch salt and 1~pinch pepper. \end{instructions} \end{recipe}% \begin{recipe}{2}{Chicken and Carrot Tagine}% Calories: 2365 \begin{ingredients} 400 g chicken (cubed) \\ 400 g onions (chopped) \\ 2 tsp ground ginger \\ 0.5 tsp ground cinnamon \\ 1 clove garlic (crushed) \\ 2 cube stock (crumbled) \\ 1000 g carrots (sliced) \\ 1 tbsp honey \\ 3 tsp mixed herbs \\ 3 tsp ground coriander \\ 3 tbsp oil \\ 2 tbsp raisins \\ 400 g cous cous \\ 1 lemon (juiced) \\ \end{ingredients} \index{Meat!Chicken and Carrot Tagine} \begin{instructions} \item In a large wok mix 400~g cubed chicken, 400~g chopped onions, 2~tsp ground ginger, 0.5~tsp ground cinnamon, 1~clove crushed garlic, 2~cube crumbled stock, 1000~g sliced carrots, 1~tbsp honey, 3~tsp mixed herbs, 3~tsp ground coriander, 3~tbsp oil, 2~tbsp raisins, and 800~ml cold water. Bring to the boil then simmer, uncovered, for 60 minutes. \item In a sauce pan place 400~g cous cous pour on 600~ml boiling water, and cover, leave to stand for 10 minutes. \item Stir 1 juiced lemon into the stew before serving.
\end{instructions} \end{recipe}% \begin{recipe}{2}{Green Lentil Lasagne}% Calories: 3981 \begin{ingredients} 1200 g baking potatoes (washed) \\ 200 g green lentils \\ 200 g onions (sliced) \\ 2 cloves garlic (crushed) \\ 200 g mushrooms (sliced) \\ 3 tbsp oil \\ 500 ml milk \\ 2 tbsp plain flour \\ 1 bay leaf (crumbled) \\ 1 tsp ground nutmeg \\ 2 tsp ground coriander \\ 450 g lasagna \\ \end{ingredients} \index{Vegetarian!Green Lentil Lasagne} \begin{instructions} \item Bake 1200~g washed baking potatoes at (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) for 40 minutes. \item In a sauce pan cook 200~g green lentils until soft. Save 200 ml of the water. \item In a large wok fry 200~g sliced onions, 2~cloves crushed garlic and 200~g sliced mushrooms in 3~tbsp oil until soft. \item Stir in 500~ml milk, 2~tbsp plain flour, 1 crumbled bay leaf, 1~tsp ground nutmeg and 2~tsp ground coriander until the sauce thickens. \item Stir in the lentils and the water. \item Layer with 450~g lasagna and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 50 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Coconut Dahl}% Calories: 1441 \begin{ingredients} 400 g brown rice \\ 225 g red lentils \\ 50 g raisins \\ 1 tbsp ground turmeric \\ 60 g ginger (chopped) \\ 200 g onions (chopped) \\ 100 g desiccated coconut \\ 1 tbsp oil \\ 1 lemon (sliced) \\ \end{ingredients} \index{Vegan!Coconut Dahl} \index{Curry!Coconut Dahl} \begin{instructions} \item In a sauce pan place 400~g brown rice pour on 800~ml cold water, bring to the boil then cover and simmer for 15 minutes, then leave to stand for 10 minutes. \item In a sauce pan cook 225~g red lentils, 50~g raisins, 1~tbsp ground turmeric, 60~g chopped ginger and 200~g chopped onions in 600~ml cold water until soft. \item In a large wok fry 100~g desiccated coconut in 1~tbsp oil until it goes brown. \item Sprinkle the coconut on the dahl before serving and serve with 1 sliced lemon. \end{instructions} \end{recipe}% \begin{recipe}{1}{Spam and Mustard Pasta}% Calories: 1327 \begin{ingredients} 200 g pasta shapes \\ 2 tbsp oil \\ 200 g spam (sliced) \\ 3 cloves garlic (crushed) \\ 400 g mushrooms (sliced) \\ 100 g whole grain mustard \\ \end{ingredients} \index{Quick!Spam and Mustard Pasta} \begin{instructions} \item In a sauce pan place 200~g pasta shapes pour on 400~ml boiling water, cover and simmer for 12 minutes, then drain. \item In a large wok in 2~tbsp oil fry 200~g sliced spam, 3~cloves crushed garlic and 400~g sliced mushrooms. \item Stir in 100~g whole grain mustard and the pasta, and warm through.
\end{instructions} \end{recipe}% \clearpage \end{menu} \begin{menu}{December} \begin{recipelist} {\scriptsize[1-2]} Lamb Hotpot\\ {\scriptsize[3-4]} Woolton Pie\\ {\scriptsize[5-6]} Mushroom Stew\\ {\scriptsize[7]} Quick Dummy\\% \end{recipelist}% \begin{recipelist} {\scriptsize[8-9]} Fletchs Chicken Curry\\ {\scriptsize[10-11]} Roasted Vegetable Pie\\ {\scriptsize[12-13]} Vegan Dummy\\ {\scriptsize[14]} Quick Dummy\\% \end{recipelist}\par% \subsection*{Shopping Lists} % \begin{shoppinglist}{Vegetables} 750 g carrots {\scriptsize[1-2, 3-4]}\\ 1 chillis {\scriptsize[3-4]}\\ 8 cloves garlic {\scriptsize[3-4, 5-6]}\\ 600 g mushrooms {\scriptsize[5-6]}\\ 1275 g onions {\scriptsize[1-2, 3-4, 5-6]}\\ 2500 g potatoes {\scriptsize[1-2, 3-4]}\\ 500 g swedes {\scriptsize[3-4]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Groceries} 2 tbsp branston pickle {\scriptsize[8-9]}\\ 400 g brown rice {\scriptsize[8-9]}\\ 400 g can evaporated milk {\scriptsize[10-11]}\\ 400 g can tomatoes {\scriptsize[8-9]}\\ 1200 g cous cous {\scriptsize[5-6, 7, 12-13, 14]}\\ 4 tsp gravy granules {\scriptsize[3-4]}\\ 150 g pearl barley {\scriptsize[5-6]}\\ 1 tbsp and 200 g plain flour {\scriptsize[1-2, 3-4]}\\ 250 g ready made pastry {\scriptsize[10-11]}\\ 150 g tomato puree {\scriptsize[8-9]}\\ 2 tbsp whole grain mustard {\scriptsize[3-4]}\\ % \end{shoppinglist}% \par\vfil % % % \begin{shoppinglist}{Meat} 500 g chicken {\scriptsize[8-9]}\\ 400 g lamb {\scriptsize[1-2]}\\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Dairy} 90 g butter {\scriptsize[3-4]}\\ 4 eggs {\scriptsize[3-4, 10-11]}\\ 100 g feta cheese {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % \vfil\clearpage % % \begin{shoppinglist}{Check} 1 bay leaf \\ 1 tbsp curry powder \\ 6 tbsp oil \\ 5 tbsp sherry \\ 4 tbsp soy sauce \\ 1 cube stock \\ 1 tsp thyme \\ 2 tbsp worcester sauce \\ % \end{shoppinglist}% % % % \begin{shoppinglist}{Extra Vegetables} 200 g apples {\scriptsize[8-9]}\\ 1200 g baking potatoes {\scriptsize[10-11]}\\ 200 g courgette {\scriptsize[10-11]}\\ 150 g green pepper {\scriptsize[10-11]}\\ 650 g onions {\scriptsize[8-9, 10-11]}\\ 150 g red pepper {\scriptsize[10-11]}\\ % \end{shoppinglist}% \par\vfil % % % \othershoppinglist{Other Shopping}% \othershoppinglist{Extra Other Shopping}% \vfil\clearpage \begin{recipe}{2}{Lamb Hotpot}% Calories: 2671 \begin{ingredients} 400 g lamb (sliced) \\ 400 g carrots (chopped) \\ 350 g onions (chopped) \\ 1 bay leaf \\ 1 tsp thyme \\ 1 tbsp plain flour \\ 2 tbsp worcester sauce \\ 1500 g potatoes (sliced) \\ \end{ingredients} \index{Meat!Lamb Hotpot} \begin{instructions} \item In a large wok fry 400~g sliced lamb in its own fat until done. \item Stir in 400~g chopped carrots and 350~g chopped onions and fry until soft. \item Stir in 1 bay leaf, 1~tsp thyme, 1~tbsp plain flour, 600~ml cold water and 2~tbsp worcester sauce. Put in a casserole dish and layer 1500~g sliced potatoes on top. Bake covered (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 30 minutes, then uncovered 30 minutes.
\end{instructions} \end{recipe}% \begin{recipe}{2}{Woolton Pie}% Calories: 3147 \begin{ingredients} 4 tbsp oil \\ 550 g onions (chopped) \\ 6 cloves garlic (crushed) \\ 1 chilli (chopped) \\ 1000 g potatoes (chopped) \\ 350 g carrots (chopped) \\ 500 g swedes (chopped) \\ 2 tbsp whole grain mustard \\ 200 g plain flour \\ 90 g butter \\ 1 egg (beaten) \\ 4 tsp gravy granules \\ \end{ingredients} \index{Vegetarian!Woolton Pie} \begin{instructions} \item In a large wok fry (in 4~tbsp oil) 550~g chopped onions, 6~cloves crushed garlic and 1 chopped chilli until soft. \item Stir in 1000~g chopped potatoes, 350~g chopped carrots, 500~g chopped swedes and 2~tbsp whole grain mustard. Then put in a casserole dish, cover and bake (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) for 45 minutes. \item In a mixing bowl rub together 200~g plain flour and 90~g butter until it looks like bread crumbs. Stir in 3~tbsp cold water and form into dough. \item Roll out the dough and cover the vegetables when they are done. Brush the dough with 1 beaten egg, then bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for a further 15 minutes. \item In a sauce pan mix 4~tsp gravy granules and 250~ml boiling water to serve with the pie. \end{instructions} \end{recipe}% \begin{recipe}{2}{Mushroom Stew}% Calories: 1339 \begin{ingredients} 375 g onions (chopped) \\ 2 tbsp oil \\ 600 g mushrooms \\ 2 cloves garlic (crushed) \\ 150 g pearl barley \\ 4 tbsp soy sauce \\ 5 tbsp sherry \\ 1 cube stock (crumbled) \\ 400 g cous cous \\ \end{ingredients} \index{Vegan!Mushroom Stew} \begin{instructions} \item In a large wok fry 375~g chopped onions in 2~tbsp oil until soft. \item Stir in 600~g mushrooms and 2~cloves crushed garlic and cook until mushrooms start to wilt. \item Stir in 150~g pearl barley, 4~tbsp soy sauce, 5~tbsp sherry, 1~cube crumbled stock, and 450~ml cold water and simmer for 25 minutes. \item In a sauce pan place 400~g cous cous pour on 600~ml boiling water, and cover, leave to stand for 10 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Quick Dummy}% Calories: 352 \begin{ingredients} 200 g cous cous \\ \end{ingredients} \index{Quick!Quick Dummy} \begin{instructions} \item In a sauce pan place 200~g cous cous pour on 300~ml boiling water, and cover, leave to stand for 10 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Fletchs Chicken Curry}% Calories: 1755 \begin{ingredients} 500 g chicken (chopped) \\ 350 g onions (chopped) \\ 200 g apples (chopped) \\ 150 g tomato puree \\ 2 tbsp branston pickle \\ 400 g can tomatoes (chopped) \\ 1 tbsp curry powder \\ 400 g brown rice \\ \end{ingredients} \index{Meat!Fletchs Chicken Curry} \index{Curry!Fletchs Chicken Curry} \begin{instructions} \item In a casserole dish mix 500~g chopped chicken, 350~g chopped onions, 200~g chopped apples, 150~g tomato puree, 2~tbsp branston pickle, 400~g chopped can tomatoes and 1~tbsp curry powder and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) for 30 minutes. \item In a sauce pan place 400~g brown rice pour on 800~ml cold water, bring to the boil then cover and simmer for 15 minutes, then leave to stand for 10 minutes.
\end{instructions} \end{recipe}% \begin{recipe}{2}{Roasted Vegetable Pie}% Calories: 3059 \begin{ingredients} 300 g onions (chopped) \\ 150 g red pepper \\ 150 g green pepper \\ 200 g courgette (chopped) \\ 3 eggs (beaten) \\ 100 g feta cheese \\ 400 g can evaporated milk \\ 250 g ready made pastry \\ 1200 g baking potatoes (washed) \\ \end{ingredients} \index{Vegetarian!Roasted Vegetable Pie} \begin{instructions} \item In a casserole dish mix 300~g chopped onions, 150~g red pepper, 150~g green pepper, and 200~g chopped courgette, and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 25 minutes. \item In a bowl mix 3 beaten eggs, 100~g feta cheese and 400~g can evaporated milk. Pour over the vegetables, cover with 250~g ready made pastry and bake (200$^{\circ}$C, Gas 5, 380$^{\circ}$F) 40 minutes. \item Bake 1200~g washed baking potatoes at (240$^{\circ}$C, Gas 9, 475$^{\circ}$F) for 40 minutes. \end{instructions} \end{recipe}% \begin{recipe}{2}{Vegan Dummy}% Calories: 704 \begin{ingredients} 400 g cous cous \\ \end{ingredients} \index{Vegan!Vegan Dummy} \begin{instructions} \item In a sauce pan place 400~g cous cous pour on 600~ml boiling water, and cover, leave to stand for 10 minutes. \end{instructions} \end{recipe}% \begin{recipe}{1}{Quick Dummy}% Calories: 352 \begin{ingredients} 200 g cous cous \\ \end{ingredients} \index{Quick!Quick Dummy} \begin{instructions} \item In a sauce pan place 200~g cous cous pour on 300~ml boiling water, and cover, leave to stand for 10 minutes. \end{instructions} \end{recipe}% \clearpage \end{menu} % Content Ends
{"hexsha": "7b153e94c3686d5f87d08343e89ae4cb7d7700ef", "size": 142561, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "data/processed/FoodFileContent.tex", "max_stars_repo_name": "joejcollins/FoodFile", "max_stars_repo_head_hexsha": "eb2369279147f51434a70c44b341560d7a92e9bc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "data/processed/FoodFileContent.tex", "max_issues_repo_name": "joejcollins/FoodFile", "max_issues_repo_head_hexsha": "eb2369279147f51434a70c44b341560d7a92e9bc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 18, "max_issues_repo_issues_event_min_datetime": "2018-02-25T21:57:11.000Z", "max_issues_repo_issues_event_max_datetime": "2018-10-16T13:34:04.000Z", "max_forks_repo_path": "data/processed/FoodFileContent.tex", "max_forks_repo_name": "joejcollins/FoodFile", "max_forks_repo_head_hexsha": "eb2369279147f51434a70c44b341560d7a92e9bc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.3402765921, "max_line_length": 97, "alphanum_fraction": 0.5652597835, "num_tokens": 45997}
using JuMP  # required for the modelling macros below; the solver is passed in as `optimizer`

# Stage-1 (master) investment problem of the Benders decomposition: chooses the
# candidate thermal capacities x_e and wind capacities x_w, while the auxiliary
# variable α approximates the expected stage-2 operating cost through the
# optimality cuts in constraint4.
# NOTE: α_down (a lower bound for α), Z and set_number_links are used below but
# do not appear in the argument list; they are assumed to be defined in the
# enclosing (global) scope, as in the original script.
function Investment_OPF_stage1(optimizer,set_opt_thermalgenerators,set_opt_winds,set_thermalgenerators,set_winds,set_demands,set_nodes,set_nodes_ref,set_nodes_noref,set_scenarios,set_times,P,V,max_demand,R,p_D,D,γ,Τ,wind,wind_opt,Ns_H,links,F_max_dict,B_dict,MapG,MapG_opt,MapD,MapW,MapW_opt,tech_thermal,tech_thermal_opt,tech_wind,tech_wind_opt,capacity_per_unit,varcost,invcost,maxBuilds,ownership,capacity_existingunits,fixedcost,EmissionsRate,HeatRate,fuelprice,life,CRF_thermal_opt,CRF_wind_opt,set_K,Set_Dual_constraint3_st2,Set_Dual_constraint4_st2,Set_Dual_constraint5_st2,Set_Dual_constraint6_st2,Set_Dual_constraint8_st2,Set_Dual_constraint10_st2)

m=Model(optimizer)

#= Stage-2 (operating) variables, kept for reference; in the decomposed version
   they belong to the subproblem, so they remain commented out here.
@variable(m, f[link in links,t in set_times,s in set_scenarios] )
@variable(m, θ[n in set_nodes,t in set_times,s in set_scenarios])
@variable(m, p_G_e[e in set_thermalgenerators,t in set_times,s in set_scenarios]>= 0)
@variable(m, p_G_e_opt[e in set_opt_thermalgenerators,t in set_times,s in set_scenarios]>= 0)
@variable(m, p_G_w[w in set_winds,t in set_times,s in set_scenarios]>= 0)
@variable(m, p_G_w_opt[w in set_opt_winds,t in set_times,s in set_scenarios]>= 0)
@variable(m, r_d[d in set_demands,t in set_times,s in set_scenarios]>= 0)
@variable(m, r_w[w in set_winds,t in set_times,s in set_scenarios]>= 0)
@variable(m, r_w_opt[w in set_opt_winds,t in set_times,s in set_scenarios]>= 0)
=#

# Investment decisions: capacity of candidate thermal and wind units
@variable(m, x_e[e in set_opt_thermalgenerators]>= 0)
@variable(m, x_w[w in set_opt_winds]>= 0)

#Auxiliary variable Benders
@variable(m, α >= 0)

#Auxiliary variable to determine the cost per scenario
#@variable(m, problem1_cost_scenario[s in set_scenarios])
@variable(m, investment_cost)

# Annualised investment cost + fixed costs + cost-to-go approximation α
@objective(m, Min,
sum(tech_thermal_opt[e,invcost]*CRF_thermal_opt[e]*x_e[e] for e in set_opt_thermalgenerators)
+sum(tech_wind_opt[w,invcost]*CRF_wind_opt[w]*x_w[w] for w in set_opt_winds)
+sum(tech_thermal[e,fixedcost]*(tech_thermal[e,capacity_existingunits]) for e in set_thermalgenerators)
+sum(tech_wind[w,fixedcost]*(tech_wind[w,capacity_existingunits]) for w in set_winds)
+sum(tech_thermal_opt[e,fixedcost]*(x_e[e]) for e in set_opt_thermalgenerators)
+sum(tech_wind_opt[w,fixedcost]*(x_w[w]) for w in set_opt_winds)
+α)

# Adequacy: total existing plus new capacity must cover demand D plus the reserve margin R
@constraint(m, constraint1,
sum(tech_thermal[e,capacity_existingunits] for e in set_thermalgenerators)
+sum(tech_wind[w,capacity_existingunits] for w in set_winds)
+sum(x_w[w] for w in set_opt_winds)
+sum(x_e[e] for e in set_opt_thermalgenerators) >= D*(1+R))

# Build limit for candidate wind units
@constraint(m, constraint2[w in set_opt_winds], x_w[w] <= tech_wind_opt[w,maxBuilds])

# Lower bound on the cost-to-go variable (α_down assumed supplied by the caller's scope)
@constraint(m, constraint3, α>=α_down)

# Benders optimality cuts: one per previous iteration K in set_K, built from the
# stage-2 duals evaluated at that iteration's investment plan
@constraint(m, constraint4[K in set_K],
α>=sum( (sum(sum(tech_thermal[e,capacity_existingunits]*Set_Dual_constraint3_st2[e,t,s,K] for e in set_thermalgenerators) for t in set_times)
+sum(sum(x_e[e]*Set_Dual_constraint4_st2[e,t,s,K] for e in set_opt_thermalgenerators) for t in set_times)
+sum(sum(tech_wind[w,capacity_existingunits]*wind[w,t]*Set_Dual_constraint5_st2[w,t,s,K] for w in set_winds) for t in set_times)
+sum(sum(x_w[w]*wind_opt[w,t]*Set_Dual_constraint6_st2[w,t,s,K] for w in set_opt_winds) for t in set_times)
+sum(sum(F_max_dict[Z[2][j]]*Set_Dual_constraint8_st2[Z[1][j],t,s,K] for j in set_number_links) for t in set_times)
+sum(sum(sum(p_D[d,t]*max_demand for d in set_demands if n == MapD[d][2])*Set_Dual_constraint10_st2[n,t,s,K] for n in set_nodes) for t in set_times)
)*γ[s] for s in set_scenarios ) )

#@constraint(m, constraint3[e in set_thermalgenerators,t in set_times,s in set_scenarios], p_G_e[e,t,s] <= tech_thermal[e,capacity_existingunits])
#@constraint(m, constraint4[e in set_opt_thermalgenerators,t in set_times,s in set_scenarios], p_G_e_opt[e,t,s] <= x_e[e]) #@constraint(m, constraint5[w in set_winds,t in set_times,s in set_scenarios], p_G_w[w,t,s] +r_w[w,t,s] == tech_wind[w,capacity_existingunits]*wind[w,t]) #@constraint(m, constraint6[w in set_opt_winds,t in set_times,s in set_scenarios], p_G_w_opt[w,t,s] +r_w_opt[w,t,s] == x_w[w]*wind_opt[w,t]) #@constraint(m, constraint7[j in links,t in set_times,s in set_scenarios], f[j,t,s] == B_dict[j]*(θ[j[1],t,s]-θ[j[2],t,s])) #@constraint(m, constraint8[j in links,t in set_times,s in set_scenarios], f[j,t,s] <= F_max_dict[j]) #@constraint(m, constraint9[t in set_times,s in set_scenarios], θ[set_nodes_ref,t,s] == 0 ) #= @constraint(m, constraint10[n in set_nodes, t in set_times,s in set_scenarios], +sum(p_G_e[e,t,s] for e in set_thermalgenerators if n == MapG[e][2]) +sum(p_G_e_opt[e,t,s] for e in set_opt_thermalgenerators if n == MapG_opt[e][2]) +sum(p_G_w[w,t,s] for w in set_winds if n == MapW[w][2]) +sum(p_G_w_opt[w,t,s] for w in set_opt_winds if n == MapW_opt[w][2]) +sum(r_d[d,t,s] for d in set_demands if n == MapD[d][2]) -sum(p_D[d,t]*max_demand for d in set_demands if n == MapD[d][2]) -sum(f[j,t,s] for j in links if n == j[1])== 0) =# #Auxiliary Constraints= Cost per scenario #= @constraint(m, constraint11[s in set_scenarios], problem1_cost_scenario[s] == sum(( +sum(tech_thermal[e,varcost]*p_G_e[e,t,s] for e in set_thermalgenerators) +sum(tech_thermal_opt[e,varcost]*p_G_e_opt[e,t,s] for e in set_opt_thermalgenerators) +sum(tech_wind[w,varcost]*p_G_w[w,t,s] for w in set_winds) +sum(tech_wind_opt[w,varcost]*p_G_w_opt[w,t,s] for w in set_opt_winds) +sum(V*r_d[d,t,s] for d in set_demands) +sum(P*r_w[w,t,s] for w in set_winds) +sum(P*r_w_opt[w,t,s] for w in set_opt_winds) +sum(Τ[s]*tech_thermal[e,EmissionsRate]*tech_thermal[e,HeatRate]*p_G_e[e,t,s] for e in set_thermalgenerators) +sum(Τ[s]*tech_thermal_opt[e,EmissionsRate]*tech_thermal_opt[e,HeatRate]*p_G_e_opt[e,t,s] for e in set_opt_thermalgenerators))*Ns_H[t] for t in set_times) +sum(tech_thermal_opt[e,invcost]*CRF_thermal_opt[e]*x_e[e] for e in set_opt_thermalgenerators) +sum(tech_wind_opt[w,invcost]*CRF_wind_opt[w]*x_w[w] for w in set_opt_winds) +sum(tech_thermal[e,fixedcost]*(tech_thermal[e,capacity_existingunits]) for e in set_thermalgenerators) +sum(tech_wind[w,fixedcost]*(tech_wind[w,capacity_existingunits]) for w in set_winds) +sum(tech_thermal_opt[e,fixedcost]*(x_e[e]) for e in set_opt_thermalgenerators) +sum(tech_wind_opt[w,fixedcost]*(x_w[w]) for w in set_opt_winds)) =# #Auxiliary Constraints= Investment Cost @constraint(m, constraint12, investment_cost == sum(tech_thermal_opt[e,invcost]*CRF_thermal_opt[e]*x_e[e] for e in set_opt_thermalgenerators) +sum(tech_wind_opt[w,invcost]*CRF_wind_opt[w]*x_w[w] for w in set_opt_winds) +sum(tech_thermal[e,fixedcost]*(tech_thermal[e,capacity_existingunits]) for e in set_thermalgenerators) +sum(tech_wind[w,fixedcost]*(tech_wind[w,capacity_existingunits]) for w in set_winds) +sum(tech_thermal_opt[e,fixedcost]*(x_e[e]) for e in set_opt_thermalgenerators) +sum(tech_wind_opt[w,fixedcost]*(x_w[w]) for w in set_opt_winds)) @time optimize!(m) #cd("C:\\Users\\braya\\Desktop\\Doctorado\\Doctorado EEUU\\Semesters\\Spring Semester 2021\\DTU Course\\Project\\Step 4\\Julia") #Print output println("Number of variables and Constraints:") display(m) status = termination_status(m) println("The solution status is: $status") syscost_det=objective_value(m) println("Total Cost:",syscost_det) 
investment_cost_value=JuMP.value.(investment_cost) x_w_value=zeros(length(set_opt_winds)) for w in set_opt_winds println("Investment decision for candidate wind unit $w: ",JuMP.value.(x_w[w])) x_w_value[w]=JuMP.value.(x_w[w]) end x_e_value=zeros(length(set_opt_thermalgenerators)) for e in set_opt_thermalgenerators println("Investment decision for candidate thermal unit $e: ",JuMP.value.(x_e[e])) x_e_value[e]=JuMP.value.(x_e[e]) end return (syscost_det,x_w_value,x_e_value,investment_cost_value) end #= p_G_e_value=zeros(length(set_thermalgenerators),length(set_times),length(set_scenarios)) for t in set_times, s in set_scenarios, e in set_thermalgenerators #println("Production level of existing thermal generator $e in time $t under scenario $s:",JuMP.value.(p_G_e[e,t,s])) p_G_e_value[e,t,s]=JuMP.value.(p_G_e[e,t,s]) end p_G_e_opt_value=zeros(length(set_opt_thermalgenerators),length(set_times),length(set_scenarios)) for t in set_times, s in set_scenarios, e in set_opt_thermalgenerators #println("Production level of candidate thermal generator $e in time $t under scenario $s:",JuMP.value.(p_G_e_opt[e,t,s])) p_G_e_opt_value[e,t,s]=JuMP.value.(p_G_e_opt[e,t,s]) end p_G_w_value=zeros(length(set_winds),length(set_times),length(set_scenarios)) for t in set_times, s in set_scenarios, w in set_winds #println("Production level of wind unit $w in time $t under scenario $s:",JuMP.value.(p_G_w[w,t,s])) p_G_w_value[w,t,s]=JuMP.value.(p_G_w[w,t,s]) end p_G_w_opt_value=zeros(length(set_opt_winds),length(set_times),length(set_scenarios)) for t in set_times, s in set_scenarios, w in set_opt_winds #println("Production level of candidare wind unit $w in time $t under scenario $s: ",JuMP.value.(p_G_w_opt[w,t,s])) p_G_w_opt_value[w,t,s]=JuMP.value.(p_G_w_opt[w,t,s]) end p_D_value=zeros(length(set_demands),length(set_scenarios)) for d in set_demands, s in set_scenarios p_D_value[d,s]=p_D[d,s]*max_demand end #println("Comsuption level of demand d: ",p_D_value) r_d_value=zeros(length(set_demands),length(set_times),length(set_scenarios)) for t in set_times, s in set_scenarios, d in set_demands #println("Unserved energy of demand $d in time $t under scenario $s:",JuMP.value.(r_d[d,t,s])) r_d_value[d,t,s]=JuMP.value.(r_d[d,t,s]) end r_w_value=zeros(length(set_winds),length(set_times),length(set_scenarios)) for t in set_times, s in set_scenarios, w in set_winds #println("Curtailment of wind unit $w in time $t under scenario $s: ",JuMP.value.(r_w[w,t,s])) r_w_value[w,t,s]=JuMP.value.(r_w[w,t,s]) end Dual_constraint10=zeros(length(set_nodes),length(set_times),length(set_scenarios)) for t in set_times, s in set_scenarios,n in set_nodes Dual_constraint10[n,t,s]= JuMP.dual(constraint10[n,t,s])/(Ns_H[t]*γ[s]) #println("Electricity Price in node $n in time $t under scenario $s: ", Dual_constraint10[n,t,s]) end df1 = DataFrame( Dual_constraint10) #CSV.write("Electrity_Price.csv", df1) #for s in set_scenarios, t in set_times, j in links #println("Power Flow lines $j in time $t under scenario $s: ", JuMP.value.(f[j,t,s])) #end f_value=zeros(n_link,length(set_times),length(set_scenarios)) global i=1 for j in links for t in set_times for s in set_scenarios global f_value[i,t,s]= JuMP.value.(f[j,t,s]) end end global i=i+1 end =# #= print("") for n in set_nodes, s in set_scenarios, t in set_times println("Electricity Price in node $n in time $t under scenario $s: ", JuMP.dual(constraint10[n,t,s])) end =#
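# --- Illustrative aside (not part of the original file) ----------------------
# Minimal sketch of the Benders master-problem pattern used above: investment
# variables, an auxiliary under-estimator α of the recourse cost, and one
# optimality cut per iteration built from subproblem duals. All names and
# numbers below are hypothetical toy data; HiGHS is assumed as the solver.
using JuMP, HiGHS

toy_invcost = [2.0, 3.0]     # per-unit investment cost of two candidate units
toy_duals   = [[1.5, 0.8]]   # dual prices from one solved subproblem (one cut)
toy_rhs     = [4.0]          # constant part of that cut
toy_α_down  = 0.0            # valid lower bound on the recourse cost

toy = Model(HiGHS.Optimizer)
@variable(toy, x[1:2] >= 0)                    # like x_e / x_w above
@variable(toy, α >= toy_α_down)                # recourse under-estimator
@objective(toy, Min, sum(toy_invcost[i]*x[i] for i in 1:2) + α)
@constraint(toy, cut[K in 1:1],                # mirrors constraint4 above
    α >= toy_rhs[K] + sum(toy_duals[K][i]*x[i] for i in 1:2))
optimize!(toy)
# ------------------------------------------------------------------------------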
{"hexsha": "294de0917e8a243c16a00552c7bdb576d46023ac", "size": 11379, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/Investment_OPF_stage1.jl", "max_stars_repo_name": "bdvalqui/DTU_BrayamValqui_SP2021.jl", "max_stars_repo_head_hexsha": "cde096a6d5f2cf03b567056ef0655908e68769e7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/Investment_OPF_stage1.jl", "max_issues_repo_name": "bdvalqui/DTU_BrayamValqui_SP2021.jl", "max_issues_repo_head_hexsha": "cde096a6d5f2cf03b567056ef0655908e68769e7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/Investment_OPF_stage1.jl", "max_forks_repo_name": "bdvalqui/DTU_BrayamValqui_SP2021.jl", "max_forks_repo_head_hexsha": "cde096a6d5f2cf03b567056ef0655908e68769e7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.6805555556, "max_line_length": 657, "alphanum_fraction": 0.7137709816, "num_tokens": 3401}
[STATEMENT]
lemma LIMSEQ_le_const2:
  "X \<longlonglongrightarrow> x \<Longrightarrow> \<exists>N. \<forall>n\<ge>N. X n \<le> a \<Longrightarrow> x \<le> a"
  for a x :: "'a::linorder_topology"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. \<lbrakk>X \<longlonglongrightarrow> x; \<exists>N. \<forall>n\<ge>N. X n \<le> a\<rbrakk> \<Longrightarrow> x \<le> a
[PROOF STEP]
by (rule LIMSEQ_le[of X x "\<lambda>n. a"]) auto
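(* Restated in conventional notation (an aside, not part of the original record):
   for a, x in a linearly ordered topological space,
     $(X_n \longrightarrow x) \;\wedge\; (\exists N.\ \forall n \ge N.\ X_n \le a) \;\Longrightarrow\; x \le a$
   i.e. a limit respects any eventual upper bound. *)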
{"llama_tokens": 169, "file": null, "length": 1}
module Neural

using Random
using Plots

include("functions.jl")

mutable struct Layer
    W::AbstractArray{Float64}   # weights
    b::Vector{Float64}          # bias
    afun::Function              # activation function
    dafun::Function             # derivative of the activation function
    z::Vector{Float64}          # W*x + b
    a::Vector{Float64}          # afun(z)
    dCda::Vector{Float64}
    dCdW::AbstractArray{Float64}
    dCdb::Vector{Float64}

    function Layer(inlen::Int64, outlen::Int64, afun::Function, dafun::Function)
        σ = 2/(inlen + outlen)
        new(rand(outlen, inlen) * σ, rand(outlen) * σ, afun, dafun,
            [zeros(outlen) for i in 1:3]...,
            zeros(outlen, inlen), zeros(outlen))
    end
end

mutable struct Net
    layers::Vector{Layer}
    C::Function
    dC::Function

    function Net(inlength::Int64, sizes::Vector{Int64}, activations::Vector;
                 # activations can be either a vector of symbols or a vector of
                 # Tuple{Function, Function} where the 2nd function is the derivative of the 1st
                 C::Function = softmaxLikelihood,
                 dC::Function = dSoftmaxLikelihood)
        layers = Vector{Layer}()
        ilen = inlength
        for (size, id) in zip(sizes, activations)
            push!(layers, Layer(ilen, size, getFunctions(id)...))
            ilen = size
        end
        new(layers, C, dC)
    end
end

@views function makechunks(X::AbstractArray, n::Integer)
    c = length(X) ÷ n
    return [X[1+c*k:(k == n-1 ? end : c*k+c)] for k = 0:n-1]
end

mean(x) = sum(x)/length(x)

getFunctions(id::Symbol) = activations[id]
getFunctions(f::Tuple{Function, Function}) = f

function applyLayer!(layer::Layer, x::AbstractArray)
    layer.z = layer.W*x + layer.b
    layer.a = layer.afun(layer.z)
    return layer.a
end

function forward!(net::Net, x::AbstractArray)
    input = x
    map(layer -> input = applyLayer!(layer, input), net.layers)
    return input
end

function backward!(net::Net, x::AbstractArray, t::AbstractArray)
    lastLayer = net.layers[end]
    lastLayer.dCda = lastLayer.dafun(lastLayer.z) * net.dC(lastLayer.a, t)
    lastLayer.dCdW += lastLayer.dCda * transpose(net.layers[end - 1].a)
    lastLayer.dCdb += lastLayer.dCda

    for i in (length(net.layers) - 1):-1:1
        layer = net.layers[i]
        next = net.layers[i + 1]
        prevout = i == 1 ? x : net.layers[i - 1].a

        # Evaluate the activation derivative at the pre-activation z, consistent
        # with the output layer above (the original passed layer.a here).
        layer.dCda = layer.dafun(layer.z) * transpose(next.W) * next.dCda
        layer.dCdW += layer.dCda * transpose(prevout)
        layer.dCdb += layer.dCda
    end
end

function updateLayer!(layer::Layer, α::Float64)
    layer.W -= α*layer.dCdW
    layer.b -= α*layer.dCdb

    layer.dCda = zeros(size(layer.dCda))
    layer.dCdb = zeros(size(layer.dCdb))
    layer.dCdW = zeros(size(layer.dCdW))
end

function identify(net::Net, x::AbstractArray)
    vec = forward!(net, x)
    id = argmax(vec)
    return id, vec
end

function accuracy(net::Net, x::AbstractArray, t::AbstractArray)
    identities = map(i -> identify(net, x[:, i])[1], 1:size(x, 2))
    cmp = [Int(identities[i] == argmax(t[:, i])) for i in 1:size(t, 2)]
    return sum(cmp)/size(t, 2)
end

function train!(net::Net, x::AbstractArray, t::AbstractArray, batchSize::Int64,
                epochs::Int64, α::Float64, αDecay::Float64 = 1.0)
    plotly()
    meanCosts = Vector{Float64}()
    accs = Vector{Float64}()

    testIndices = shuffle(1:size(x, 2))[1:2*batchSize]
    trainIndices = filter(i -> !(i in testIndices), 1:size(x, 2))

    for i in 1:epochs
        costs = Vector{Float64}()
        acc = 0
        indices = shuffle(trainIndices)

        # one column represents one data point
        for j in indices[1:batchSize]   # renamed from i to avoid shadowing the epoch counter
            forward!(net, x[:, j])
            backward!(net, x[:, j], t[:, j])
            updateLayer!.(net.layers, α)

            output = forward!(net, x[:, j])
            push!(costs, net.C(output, t[:, j]))
        end

        acc = accuracy(net, x[:, testIndices], t[:, testIndices])
        meanCost = mean(costs)
        α /= αDecay

        push!(meanCosts, meanCost)
        push!(accs, acc)

        println("Epoch $i/$epochs:")
        println("\tMean cost: $meanCost")
        println("\tAccuracy: $(acc * 100)%")
        println("\tLearning rate: $α")
    end

    plt = plot(1:epochs, meanCosts,
               xlabel = "Epoch", ylabel = "Mean cost", legend = false,
               line = [:scatter, :path], dpi = 600, size = (1440, 810), grid = false)
    plot!(plt, 1:epochs, accs, line = [:scatter, :path])
    display(plt)
    #savefig(plt, "plt.png")
end

end
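# --- Illustrative usage sketch (not part of the original file) ---------------
# Assumes functions.jl provides the `activations` table with a :sigmoid entry
# and the default softmax-likelihood cost; the data below is random toy data,
# shown only to illustrate the call shapes.
using .Neural, Random

net = Neural.Net(784, [32, 10], [:sigmoid, :sigmoid])   # 784 inputs, 10 classes
x = rand(784, 1000)                                     # one column per sample
t = zeros(10, 1000)
for j in 1:1000
    t[rand(1:10), j] = 1.0                              # random one-hot targets
end
Neural.train!(net, x, t, 64, 10, 0.5)                   # batch 64, 10 epochs, α = 0.5
id, scores = Neural.identify(net, x[:, 1])
# ------------------------------------------------------------------------------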
{"hexsha": "c183a2b6f2858fa32044b30f535f1e687a4988dd", "size": 4869, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/neural.jl", "max_stars_repo_name": "achjaj/shape-recognition", "max_stars_repo_head_hexsha": "ff83b69f65df3a74d28d5eada027420cac4e364f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/neural.jl", "max_issues_repo_name": "achjaj/shape-recognition", "max_issues_repo_head_hexsha": "ff83b69f65df3a74d28d5eada027420cac4e364f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/neural.jl", "max_forks_repo_name": "achjaj/shape-recognition", "max_forks_repo_head_hexsha": "ff83b69f65df3a74d28d5eada027420cac4e364f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.4129032258, "max_line_length": 162, "alphanum_fraction": 0.5596631752, "num_tokens": 1373}
      SUBROUTINE DT_DDATAR

      IMPLICIT NONE
      INTEGER i , ia , iaa , ib , ibb , ip , iv , j , l
      DOUBLE PRECISION ONE , TINY10 , ZERO
      SAVE

      INCLUDE 'inc/dtflka'

      PARAMETER (TINY10=1.0D-10,ONE=1.0D0,ZERO=0.0D0)

C quark-content to particle index conversion (DTUNUC 1.x)
      INCLUDE 'inc/dtq2id'

      DIMENSION iv(36) , ip(36) , ib(126) , ibb(126) , ia(126) ,
     &          iaa(126)

      DATA iv/33 , 34 , 38 , 123 , 0 , 0 , 32 , 33 , 39 , 124 , 0 , 0 ,
     &     36 , 37 , 96 , 127 , 0 , 0 , 126 , 125 , 128 , 129 , 14*0/
      DATA ip/23 , 14 , 16 , 116 , 0 , 0 , 13 , 23 , 25 , 117 , 0 , 0 ,
     &     15 , 24 , 31 , 120 , 0 , 0 , 119 , 118 , 121 , 122 , 14*0/
      DATA ib/0 , 1 , 21 , 140 , 0 , 0 , 8 , 22 , 137 , 0 , 0 , 97 ,
     &     138 , 0 , 0 , 146 , 0 , 0 , 0 , 0 , 0 , 1 , 8 , 22 , 137 ,
     &     0 , 0 , 0 , 20 , 142 , 0 , 0 , 98 , 139 , 0 , 0 , 147 , 0 ,
     &     0 , 0 , 0 , 0 , 21 , 22 , 97 , 138 , 0 , 0 , 20 , 98 , 139 ,
     &     0 , 0 , 0 , 145 , 0 , 0 , 148 , 0 , 0 , 0 , 0 , 0 , 140 ,
     &     137 , 138 , 146 , 0 , 0 , 142 , 139 , 147 , 0 , 0 , 145 ,
     &     148 , 50*0/
      DATA ibb/53 , 54 , 104 , 161 , 0 , 0 , 55 , 105 , 162 , 0 , 0 ,
     &     107 , 164 , 0 , 0 , 167 , 0 , 0 , 0 , 0 , 0 , 54 , 55 , 105 ,
     &     162 , 0 , 0 , 56 , 106 , 163 , 0 , 0 , 108 , 165 , 0 , 0 ,
     &     168 , 0 , 0 , 0 , 0 , 0 , 104 , 105 , 107 , 164 , 0 , 0 ,
     &     106 , 108 , 165 , 0 , 0 , 109 , 166 , 0 , 0 , 169 , 0 , 0 ,
     &     0 , 0 , 0 , 161 , 162 , 164 , 167 , 0 , 0 , 163 , 165 , 168 ,
     &     0 , 0 , 166 , 169 , 0 , 0 , 170 , 47*0/
      DATA ia/0 , 2 , 99 , 152 , 0 , 0 , 9 , 100 , 149 , 0 , 0 , 102 ,
     &     150 , 0 , 0 , 158 , 0 , 0 , 0 , 0 , 0 , 2 , 9 , 100 , 149 ,
     &     0 , 0 , 0 , 101 , 154 , 0 , 0 , 103 , 151 , 0 , 0 , 159 , 0 ,
     &     0 , 0 , 0 , 0 , 99 , 100 , 102 , 150 , 0 , 0 , 101 , 103 ,
     &     151 , 0 , 0 , 0 , 157 , 0 , 0 , 160 , 0 , 0 , 0 , 0 , 0 ,
     &     152 , 149 , 150 , 158 , 0 , 0 , 154 , 151 , 159 , 0 , 0 ,
     &     157 , 160 , 50*0/
      DATA iaa/67 , 68 , 110 , 171 , 0 , 0 , 69 , 111 , 172 , 0 , 0 ,
     &     113 , 174 , 0 , 0 , 177 , 0 , 0 , 0 , 0 , 0 , 68 , 69 , 111 ,
     &     172 , 0 , 0 , 70 , 112 , 173 , 0 , 0 , 114 , 175 , 0 , 0 ,
     &     178 , 0 , 0 , 0 , 0 , 0 , 110 , 111 , 113 , 174 , 0 , 0 ,
     &     112 , 114 , 175 , 0 , 0 , 115 , 176 , 0 , 0 , 179 , 0 , 0 ,
     &     0 , 0 , 0 , 171 , 172 , 174 , 177 , 0 , 0 , 173 , 175 , 178 ,
     &     0 , 0 , 176 , 179 , 0 , 0 , 180 , 47*0/

      l = 0
      DO i = 1 , 6
         DO j = 1 , 6
            l = l + 1
            IMPs(i,j) = ip(l)
            IMVe(i,j) = iv(l)
         END DO
      END DO
      l = 0
      DO i = 1 , 6
         DO j = 1 , 21
            l = l + 1
            IB08(i,j) = ib(l)
            IB10(i,j) = ibb(l)
            IA08(i,j) = ia(l)
            IA10(i,j) = iaa(l)
         END DO
      END DO

C     A1 = 0.88D0
C     B1 = 3.0D0
C     B2 = 3.0D0
C     B3 = 8.0D0
C     LT = 0
C     LB = 0
C     BET = 12.0D0
C     AS = 0.25D0
C     B8 = 0.33D0
C     AME = 0.95D0
C     DIQ = 0.375D0
C     ISU = 4

      END SUBROUTINE
{"hexsha": "a55cc9c8e239b2a0588e18b88411b162aa205076", "size": 3214, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "src/dpmjet/DT_DDATAR.f", "max_stars_repo_name": "pzhristov/DPMJET", "max_stars_repo_head_hexsha": "946e001290ca5ece608d7e5d1bfc7311cda7ebaa", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-06-15T01:59:00.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-01T08:39:13.000Z", "max_issues_repo_path": "src/dpmjet/DT_DDATAR.f", "max_issues_repo_name": "pzhristov/DPMJET", "max_issues_repo_head_hexsha": "946e001290ca5ece608d7e5d1bfc7311cda7ebaa", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-03-15T09:53:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T20:52:28.000Z", "max_forks_repo_path": "src/dpmjet/DT_DDATAR.f", "max_forks_repo_name": "pzhristov/DPMJET", "max_forks_repo_head_hexsha": "946e001290ca5ece608d7e5d1bfc7311cda7ebaa", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2020-07-05T02:44:05.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-20T20:49:05.000Z", "avg_line_length": 38.2619047619, "max_line_length": 73, "alphanum_fraction": 0.3621655258, "num_tokens": 1665}
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
import matplotlib.pyplot as plt
import math

tf.logging.set_verbosity(tf.logging.INFO)

#-----------------------------------------------
# variables
epoch = 2000
learningRate = 0.1
batch_size = 120
mnist_data = "C:/tmp/MNIST_data"
trainForRandomSet = True

#-----------------------------------------------
# data process and transformation
MNIST_DATASET = input_data.read_data_sets(mnist_data)

train_data = np.array(MNIST_DATASET.train.images, 'float32')
train_target = np.array(MNIST_DATASET.train.labels, 'int64')
print("training set consists of ", len(MNIST_DATASET.train.images), " instances")

test_data = np.array(MNIST_DATASET.test.images, 'float32')
test_target = np.array(MNIST_DATASET.test.labels, 'int64')
print("test set consists of ", len(MNIST_DATASET.test.images), " instances")

#-----------------------------------------------
# visualization
print("input layer consists of ", len(MNIST_DATASET.train.images[1]), " features")

#-----------------------------------------------
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=len(MNIST_DATASET.train.images[1]))]

classifier = tf.contrib.learn.DNNClassifier(
    feature_columns=feature_columns
    , n_classes=10  # 0 to 9 - 10 classes
    , hidden_units=[128, 64, 32, 16]  # 4 hidden layers consisting of 128, 64, 32, 16 units respectively
    #, optimizer=tf.train.ProximalAdagradOptimizer(learning_rate=learningRate)
    , optimizer=tf.train.GradientDescentOptimizer(learning_rate=learningRate)
    , activation_fn=tf.nn.sigmoid  # activate this to see vanishing gradient
    #, activation_fn=tf.nn.relu  # activate this to solve gradient vanishing problem
)

#----------------------------------------
# training
if trainForRandomSet == False:
    # train on the whole training set
    classifier.fit(train_data, train_target, steps=epoch)
else:
    def generate_input_fn(data, label):
        image_batch, label_batch = tf.train.shuffle_batch(
            [data, label]
            , batch_size=batch_size
            , capacity=8*batch_size
            , min_after_dequeue=4*batch_size
            , enqueue_many=True
        )
        return image_batch, label_batch

    def input_fn_for_train():
        return generate_input_fn(train_data, train_target)

    # train on a small, randomly selected subset
    classifier.fit(input_fn=input_fn_for_train, steps=epoch)

print("\n---training is over...")

#----------------------------------------
# calculating overall accuracy
accuracy_score = classifier.evaluate(test_data, test_target, steps=epoch)['accuracy']
print("\n---evaluation...")
print("accuracy: ", 100*accuracy_score, "%")
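# --- Illustrative aside (not part of the original script) --------------------
# Why sigmoid causes vanishing gradients: its derivative is at most 0.25, so a
# back-propagated gradient shrinks geometrically with depth. A quick check:
def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

_z = np.linspace(-5, 5, 11)
_dsig = _sigmoid(_z) * (1.0 - _sigmoid(_z))            # sigmoid'(z)
print("max sigmoid'(z):", _dsig.max())                  # <= 0.25
print("worst-case scale through 4 layers:", 0.25 ** 4)  # 0.00390625
# ReLU's derivative is 1 on its active region, which avoids this shrinkage.
# ------------------------------------------------------------------------------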
{"hexsha": "2c64b445a4ed72414e3064ab95e1fc38c9197f5c", "size": 2689, "ext": "py", "lang": "Python", "max_stars_repo_path": "python/gradient-vanishing.py", "max_stars_repo_name": "GangababuManam/tensorflow-101", "max_stars_repo_head_hexsha": "f5ba6b298ecdf0ca77ffe871c678f6699ab59a21", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 832, "max_stars_repo_stars_event_min_datetime": "2017-07-11T08:07:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T17:18:19.000Z", "max_issues_repo_path": "python/gradient-vanishing.py", "max_issues_repo_name": "saikat1506/tensorflow-101", "max_issues_repo_head_hexsha": "d0b8d055fc53ebb1189068243667cc8aca4eabcd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2018-04-20T09:30:04.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-25T13:14:33.000Z", "max_forks_repo_path": "python/gradient-vanishing.py", "max_forks_repo_name": "saikat1506/tensorflow-101", "max_forks_repo_head_hexsha": "d0b8d055fc53ebb1189068243667cc8aca4eabcd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 653, "max_forks_repo_forks_event_min_datetime": "2017-09-03T03:11:20.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-28T19:07:18.000Z", "avg_line_length": 33.1975308642, "max_line_length": 107, "alphanum_fraction": 0.6716251395, "include": true, "reason": "import numpy", "num_tokens": 624}
function compare(c1::Channel, c2::Channel; skip::Vector{Symbol}=Symbol[])
  TP = true
  TP = TP && c1.state == c2.state
  TP = TP && c1.sz_max == c2.sz_max
  TP = TP && c1.data |> length == c2.data |> length
  # exit early if tests already failed
  !TP && (return false)
  # now check contents of data
  for i in 1:length(c1.data)
    TP = TP && c1.data[i] == c2.data[i]
  end
  return TP
end

compare(a::Int, b::Int) = a == b
compare(a::Bool, b::Bool) = a == b
compare(a::Dict, b::Dict) = a == b
{"hexsha": "fb6f48a062bc72a9589012727b9bccb80e57a31a", "size": 475, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/NeedsResolution.jl", "max_stars_repo_name": "akhand9999/IncrementalInference.jl", "max_stars_repo_head_hexsha": "8f847d0e32c42d06f5cc6e4652afb1f5fb95ba63", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/NeedsResolution.jl", "max_issues_repo_name": "akhand9999/IncrementalInference.jl", "max_issues_repo_head_hexsha": "8f847d0e32c42d06f5cc6e4652afb1f5fb95ba63", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/NeedsResolution.jl", "max_forks_repo_name": "akhand9999/IncrementalInference.jl", "max_forks_repo_head_hexsha": "8f847d0e32c42d06f5cc6e4652afb1f5fb95ba63", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.5909090909, "max_line_length": 49, "alphanum_fraction": 0.6126315789, "num_tokens": 165}
# Copyright (c) 2020, Huawei Technologies. All rights reserved.
#
# Licensed under the BSD 3-Clause License (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://opensource.org/licenses/BSD-3-Clause
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import torch
import numpy as np
from common_utils import TestCase, run_tests
from common_device_type import dtypes, instantiate_device_type_tests
from util_test import create_common_tensor


class TestLogSpace(TestCase):
    def cpu_op_exec(self, start, end, steps, base):
        output = torch.logspace(start=start, end=end, steps=steps, base=base)
        output = output.numpy()
        return output

    def npu_op_exec(self, start, end, steps, base):
        output = torch.logspace(start=start, end=end, steps=steps, base=base, device="npu")
        output = output.to("cpu")
        output = output.numpy()
        return output

    def npu_op_exec_out(self, start, end, steps, base, dtype):
        output = torch.randn(steps)
        torch.logspace(start=start, end=end, steps=steps, base=base, dtype=dtype, out=output)
        output = output.to("cpu")
        output = output.numpy()
        return output

    def test_logspace_common_shape_format(self, device):
        shape_format = [
            [0.0, 1.0, 10, 0.2, torch.float32],
            [2.0, 3.0, 10, 0.05, torch.float32],
            [10.0, 10.5, 11, 0.2, torch.float32],
            [10.0, 10.5, 110, 0.2, torch.float32],
            [0.0, 0.1, 20, 1.2, torch.float32],
            [0.5, 1.0, 50, 8.0, torch.float32],
            [1.0, 2.0, 2, -0.5, torch.float32],
            [0.0, 0.0, 1, 0.0, torch.float32],
            [1.0, 1.0, 1, 0.0, torch.float32],
            [1.0, 1.0, 0, 0.0, torch.float32],
            [1.0, 2.0, 9, 0.0, torch.float32]
        ]
        for item in shape_format:
            cpu_output = self.cpu_op_exec(item[0], item[1], item[2], item[3])
            npu_output = self.npu_op_exec(item[0], item[1], item[2], item[3])
            self.assertRtolEqual(cpu_output, npu_output)
            npu_out_output = self.npu_op_exec_out(item[0], item[1], item[2], item[3], item[4])
            self.assertRtolEqual(cpu_output, npu_out_output)

    def test_logspace_float16_shape_format(self, device):
        def cpu_op_exec_fp16(start, end, steps, base, dtype):
            output = torch.logspace(start=start, end=end, steps=steps, base=base, dtype=torch.float32)
            output = output.numpy()
            output = output.astype(np.float16)
            return output

        def npu_op_exec(start, end, steps, base, dtype):
            output = torch.logspace(
                start=start, end=end, steps=steps, base=base, dtype=dtype, device="npu"
            )
            output = output.to("cpu")
            output = output.numpy()
            return output

        shape_format = [
            [-2.0, 2.0, 32, 32, torch.float16],
            [0.0, 1.0, 10, 0.2, torch.float16],
            [2.0, 3.0, 10, 0.05, torch.float16],
            [0.0, 0.1, 20, 1.2, torch.float16],
            [0.5, 1.0, 50, 8.0, torch.float16],
            [1.0, 2.0, 2, -0.5, torch.float16],
            [0.0, 0.0, 1, 0.0, torch.float16]
        ]
        for item in shape_format:
            cpu_output = cpu_op_exec_fp16(item[0], item[1], item[2], item[3], item[4])
            npu_output = npu_op_exec(item[0], item[1], item[2], item[3], item[4])
            self.assertRtolEqual(cpu_output, npu_output)


instantiate_device_type_tests(TestLogSpace, globals(), except_for='cpu')
if __name__ == "__main__":
    run_tests()
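# --- Illustrative aside (not part of the original test file) -----------------
# torch.logspace(start, end, steps, base) is base raised to a linspace; a quick
# NumPy cross-check of that identity on CPU (kept commented so importing this
# test module stays side-effect free):
#
#   import numpy as np
#   import torch
#   start, end, steps, base = 0.0, 1.0, 10, 0.2
#   expected = base ** np.linspace(start, end, steps)
#   actual = torch.logspace(start, end, steps=steps, base=base).numpy()
#   np.testing.assert_allclose(actual, expected, rtol=1e-5)
# ------------------------------------------------------------------------------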
{"hexsha": "d3fb5afc10a18dfd9fce1efdfe0a0a55e755ed85", "size": 3976, "ext": "py", "lang": "Python", "max_stars_repo_path": "test/test_npu/test_network_ops/test_logspace.py", "max_stars_repo_name": "Ascend/pytorch", "max_stars_repo_head_hexsha": "39849cf72dafe8d2fb68bd1679d8fd54ad60fcfc", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-02T03:07:35.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-02T03:07:35.000Z", "max_issues_repo_path": "test/test_npu/test_network_ops/test_logspace.py", "max_issues_repo_name": "Ascend/pytorch", "max_issues_repo_head_hexsha": "39849cf72dafe8d2fb68bd1679d8fd54ad60fcfc", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-11-12T07:23:03.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-12T08:28:13.000Z", "max_forks_repo_path": "test/test_npu/test_network_ops/test_logspace.py", "max_forks_repo_name": "Ascend/pytorch", "max_forks_repo_head_hexsha": "39849cf72dafe8d2fb68bd1679d8fd54ad60fcfc", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.752688172, "max_line_length": 110, "alphanum_fraction": 0.6001006036, "include": true, "reason": "import numpy", "num_tokens": 1149}
#!/usr/bin/env python3

import numpy as np

from computeCost import computeCost


def gradientDescent(X, y, theta, alpha, num_iters):
    # GRADIENTDESCENT Performs gradient descent to learn theta
    #   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
    #   taking num_iters gradient steps with learning rate alpha

    # Initialize some useful values
    m = y.shape[0]  # number of training examples
    J_history = np.reshape(np.zeros((num_iters, 1)), (num_iters, 1))

    for i in range(num_iters):
        # ====================== YOUR CODE HERE ======================
        # Instructions: Perform a single gradient step on the parameter vector
        #               theta.
        #
        # Hint: While debugging, it can be useful to print out the values
        #       of the cost function (computeCost) and gradient here.
        #
        theta = np.subtract(theta, (alpha / m) * np.dot(np.subtract(np.dot(X, theta), y).T, X).T)
        # ============================================================
        # Save the cost J in every iteration
        J_history[i, 0] = computeCost(X, y, theta)

    return (theta, J_history)
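# --- Illustrative usage (not part of the original exercise file) -------------
# Toy run on noiseless data y = 2 + 3x; assumes computeCost from this exercise
# is importable. theta should approach [2, 3].
if __name__ == "__main__":
    X = np.c_[np.ones(5), np.arange(5)]         # prepend the intercept column
    y = (2 + 3 * np.arange(5)).reshape(-1, 1)
    theta0 = np.zeros((2, 1))
    theta, J_history = gradientDescent(X, y, theta0, 0.1, 1500)
    print(theta.ravel())                        # ~ [2. 3.]
# ------------------------------------------------------------------------------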
{"hexsha": "b4fcd9d456b62a74db26b0d4cd0d03c0998bd200", "size": 1172, "ext": "py", "lang": "Python", "max_stars_repo_path": "machine-learning-ex1/ex1/gradientDescent.py", "max_stars_repo_name": "altermarkive/machine-learning-course", "max_stars_repo_head_hexsha": "2f0a2c2269dab2bd61d34d96a75ccdd8b87683c7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-05-11T20:58:03.000Z", "max_stars_repo_stars_event_max_datetime": "2018-05-11T20:58:03.000Z", "max_issues_repo_path": "machine-learning-ex1/ex1/gradientDescent.py", "max_issues_repo_name": "altermarkive/Machine-Learning-Course", "max_issues_repo_head_hexsha": "2f0a2c2269dab2bd61d34d96a75ccdd8b87683c7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "machine-learning-ex1/ex1/gradientDescent.py", "max_forks_repo_name": "altermarkive/Machine-Learning-Course", "max_forks_repo_head_hexsha": "2f0a2c2269dab2bd61d34d96a75ccdd8b87683c7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-11-04T13:40:31.000Z", "max_forks_repo_forks_event_max_datetime": "2018-05-11T20:58:05.000Z", "avg_line_length": 33.4857142857, "max_line_length": 97, "alphanum_fraction": 0.5793515358, "include": true, "reason": "import numpy", "num_tokens": 272}
(*
 * Copyright 2014, General Dynamics C4 Systems
 *
 * SPDX-License-Identifier: GPL-2.0-only
 *)

theory Bits_R
imports Corres
begin

crunch_ignore (add:
  bind return "when" get gets fail assert put modify unless select
  alternative assert_opt gets_the returnOk throwError lift bindE liftE
  whenE unlessE throw_opt assertE liftM liftME sequence_x zipWithM_x
  mapM_x sequence mapM sequenceE_x sequenceE mapME mapME_x catch select_f
  handleE' handleE handle_elseE forM forM_x forME_x zipWithM filterM
  forME_x withoutFailure throw catchFailure rethrowFailure
  capFaultOnFailure lookupErrorOnFailure nullCapOnFailure
  nothingOnFailure without_preemption withoutPreemption preemptionPoint
  cap_fault_on_failure lookup_error_on_failure const_on_failure
  ignore_failure ignoreFailure empty_on_failure emptyOnFailure
  unifyFailure unify_failure throw_on_false storeWordVM loadWord
  setRegister getRegister getRestartPC debugPrint setNextPC
  maskInterrupt clearMemory throw_on_false unifyFailure ignoreFailure
  empty_on_failure emptyOnFailure clearMemoryVM null_cap_on_failure
  setNextPC getRestartPC assertDerived throw_on_false getObject
  setObject updateObject loadObject)

context Arch begin (*FIXME: arch_split*)

crunch_ignore (add:
  invalidateLocalTLB_ASID invalidateLocalTLB_VAASID cleanByVA
  cleanByVA_PoU invalidateByVA invalidateByVA_I invalidate_I_PoU
  cleanInvalByVA branchFlush clean_D_PoU cleanInvalidate_D_PoC
  cleanInvalidateL2Range invalidateL2Range cleanL2Range flushBTAC
  writeContextID isb dsb dmb setHardwareASID setCurrentPD)

end

context begin interpretation Arch . (*FIXME: arch_split*)

lemma throwE_R: "\<lbrace>\<top>\<rbrace> throw f \<lbrace>P\<rbrace>,-"
  by (simp add: validE_R_def) wp

lemma withoutFailure_wp [wp]:
  "\<lbrace>P\<rbrace> f \<lbrace>Q\<rbrace> \<Longrightarrow> \<lbrace>P\<rbrace> withoutFailure f \<lbrace>Q\<rbrace>,\<lbrace>E\<rbrace>"
  "\<lbrace>P\<rbrace> f \<lbrace>Q\<rbrace> \<Longrightarrow> \<lbrace>P\<rbrace> withoutFailure f \<lbrace>Q\<rbrace>,-"
  "\<lbrace>\<top>\<rbrace> withoutFailure f -,\<lbrace>E\<rbrace>"
  by (auto simp: validE_R_def validE_E_def valid_def)

lemma no_fail_typeError [simp, wp]:
  "no_fail \<bottom> (typeError xs ko)"
  by (simp add: typeError_def)

lemma isCap_simps:
  "isZombie v = (\<exists>v0 v1 v2. v = Zombie v0 v1 v2)"
  "isArchObjectCap v = (\<exists>v0. v = ArchObjectCap v0)"
  "isThreadCap v = (\<exists>v0. v = ThreadCap v0)"
  "isCNodeCap v = (\<exists>v0 v1 v2 v3. v = CNodeCap v0 v1 v2 v3)"
  "isNotificationCap v = (\<exists>v0 v1 v2 v3. v = NotificationCap v0 v1 v2 v3)"
  "isEndpointCap v = (\<exists>v0 v1 v2 v3 v4 v5. v = EndpointCap v0 v1 v2 v3 v4 v5)"
  "isUntypedCap v = (\<exists>d v0 v1 f. v = UntypedCap d v0 v1 f)"
  "isReplyCap v = (\<exists>v0 v1 v2. v = ReplyCap v0 v1 v2)"
  "isIRQControlCap v = (v = IRQControlCap)"
  "isIRQHandlerCap v = (\<exists>v0. v = IRQHandlerCap v0)"
  "isNullCap v = (v = NullCap)"
  "isDomainCap v = (v = DomainCap)"
  "isPageCap w = (\<exists>d v0 v1 v2 v3. w = PageCap d v0 v1 v2 v3)"
  "isPageTableCap w = (\<exists>v0 v1. w = PageTableCap v0 v1)"
  "isPageDirectoryCap w = (\<exists>v0 v1. w = PageDirectoryCap v0 v1)"
  "isASIDControlCap w = (w = ASIDControlCap)"
  "isASIDPoolCap w = (\<exists>v0 v1. w = ASIDPoolCap v0 v1)"
  "isArchPageCap cap = (\<exists>d ref rghts sz data. cap = ArchObjectCap (PageCap d ref rghts sz data))"
  by (auto simp: isCap_defs split: capability.splits arch_capability.splits)

lemma untyped_not_null [simp]:
  "\<not> isUntypedCap NullCap"
  by (simp add: isCap_simps)

text \<open>Miscellaneous facts about low level constructs\<close>

lemma projectKO_tcb:
  "(projectKO_opt ko = Some t) = (ko = KOTCB t)"
  by (cases ko) (auto simp: projectKO_opts_defs)

lemma projectKO_cte:
  "(projectKO_opt ko = Some t) = (ko = KOCTE t)"
  by (cases ko) (auto simp: projectKO_opts_defs)

lemma projectKO_ep:
  "(projectKO_opt ko = Some t) = (ko = KOEndpoint t)"
  by (cases ko) (auto simp: projectKO_opts_defs)

lemma projectKO_ntfn:
  "(projectKO_opt ko = Some t) = (ko = KONotification t)"
  by (cases ko) (auto simp: projectKO_opts_defs)

lemma projectKO_ASID:
  "(projectKO_opt ko = Some t) = (ko = KOArch (KOASIDPool t))"
  by (cases ko) (auto simp: projectKO_opts_defs split: arch_kernel_object.splits)

lemma projectKO_PTE:
  "(projectKO_opt ko = Some t) = (ko = KOArch (KOPTE t))"
  by (cases ko) (auto simp: projectKO_opts_defs split: arch_kernel_object.splits)

lemma projectKO_PDE:
  "(projectKO_opt ko = Some t) = (ko = KOArch (KOPDE t))"
  by (cases ko) (auto simp: projectKO_opts_defs split: arch_kernel_object.splits)

lemma projectKO_user_data:
  "(projectKO_opt ko = Some (t :: user_data)) = (ko = KOUserData)"
  by (cases ko) (auto simp: projectKO_opts_defs split: arch_kernel_object.splits)

lemma projectKO_user_data_device:
  "(projectKO_opt ko = Some (t :: user_data_device)) = (ko = KOUserDataDevice)"
  by (cases ko) (auto simp: projectKO_opts_defs split: arch_kernel_object.splits)

lemmas projectKOs =
  projectKO_ntfn projectKO_ep projectKO_cte projectKO_tcb projectKO_ASID
  projectKO_PTE projectKO_PDE projectKO_user_data projectKO_user_data_device
  projectKO_eq projectKO_eq2

lemma capAligned_epI:
  "ep_at' p s \<Longrightarrow> capAligned (EndpointCap p a b c d e)"
  apply (clarsimp simp: obj_at'_real_def capAligned_def objBits_simps word_bits_def)
  apply (drule ko_wp_at_norm)
  apply clarsimp
  apply (drule ko_wp_at_aligned)
  apply (simp add: objBits_simps' projectKOs capUntypedPtr_def isCap_simps)
  done

lemma capAligned_ntfnI:
  "ntfn_at' p s \<Longrightarrow> capAligned (NotificationCap p a b c)"
  apply (clarsimp simp: obj_at'_real_def capAligned_def objBits_simps
                        word_bits_def capUntypedPtr_def isCap_simps)
  apply (fastforce dest: ko_wp_at_norm
                   dest!: ko_wp_at_aligned simp: objBits_simps' projectKOs)
  done

lemma capAligned_tcbI:
  "tcb_at' p s \<Longrightarrow> capAligned (ThreadCap p)"
  apply (clarsimp simp: obj_at'_real_def capAligned_def objBits_simps
                        word_bits_def capUntypedPtr_def isCap_simps)
  apply (fastforce dest: ko_wp_at_norm
                   dest!: ko_wp_at_aligned simp: objBits_simps' projectKOs)
  done

lemma capAligned_reply_tcbI:
  "tcb_at' p s \<Longrightarrow> capAligned (ReplyCap p m r)"
  apply (clarsimp simp: obj_at'_real_def capAligned_def objBits_simps
                        word_bits_def capUntypedPtr_def isCap_simps)
  apply (fastforce dest: ko_wp_at_norm
                   dest!: ko_wp_at_aligned simp: objBits_simps' projectKOs)
  done

lemma ko_at_valid_objs':
  assumes ko: "ko_at' k p s"
  assumes vo: "valid_objs' s"
  assumes k: "\<And>ko. projectKO_opt ko = Some k \<Longrightarrow> injectKO k = ko"
  shows "valid_obj' (injectKO k) s"
  using ko vo
  by (clarsimp simp: valid_objs'_def obj_at'_def projectKOs project_inject ranI)

lemma obj_at_valid_objs':
  "\<lbrakk> obj_at' P p s; valid_objs' s \<rbrakk> \<Longrightarrow>
   \<exists>k. P k \<and> ((\<forall>ko. projectKO_opt ko = Some k \<longrightarrow> injectKO k = ko)
                \<longrightarrow> valid_obj' (injectKO k) s)"
  apply (drule obj_at_ko_at')
  apply clarsimp
  apply (rule_tac x=ko in exI)
  apply clarsimp
  apply (erule (1) ko_at_valid_objs')
  apply simp
  done

lemma tcb_in_valid_state':
  "\<lbrakk> st_tcb_at' P t s; valid_objs' s \<rbrakk> \<Longrightarrow> \<exists>st. P st \<and> valid_tcb_state' st s"
  apply (clarsimp simp: pred_tcb_at'_def)
  apply (drule obj_at_valid_objs')
   apply fastforce
  apply (clarsimp simp: projectKOs)
  apply (fastforce simp add: valid_obj'_def valid_tcb'_def)
  done

lemma getCurThread_corres [corres]:
  "corres (=) \<top> \<top> (gets cur_thread) getCurThread"
  by (simp add: getCurThread_def curthread_relation)

lemma gct_wp [wp]: "\<lbrace>\<lambda>s. P (ksCurThread s) s\<rbrace> getCurThread \<lbrace>P\<rbrace>"
  by (unfold getCurThread_def, wp)

lemma getIdleThread_corres [corres]:
  "corres (=) \<top> \<top> (gets idle_thread) getIdleThread"
  by (simp add: getIdleThread_def state_relation_def)

lemma git_wp [wp]: "\<lbrace>\<lambda>s. P (ksIdleThread s) s\<rbrace> getIdleThread \<lbrace>P\<rbrace>"
  by (unfold getIdleThread_def, wp)

lemma gsa_wp [wp]: "\<lbrace>\<lambda>s. P (ksSchedulerAction s) s\<rbrace> getSchedulerAction \<lbrace>P\<rbrace>"
  by (unfold getSchedulerAction_def, wp)

text \<open>Shorthand names for the relations between faults, errors and failures\<close>

definition
  fr :: "ExceptionTypes_A.fault \<Rightarrow> Fault_H.fault \<Rightarrow> bool"
where
  fr_def[simp]:
 "fr x y \<equiv> (y = fault_map x)"

definition
  ser :: "ExceptionTypes_A.syscall_error \<Rightarrow> Fault_H.syscall_error \<Rightarrow> bool"
where
  ser_def[simp]:
 "ser x y \<equiv> (y = syscall_error_map x)"

definition
  lfr :: "ExceptionTypes_A.lookup_failure \<Rightarrow> Fault_H.lookup_failure \<Rightarrow> bool"
where
  lfr_def[simp]:
 "lfr x y \<equiv> (y = lookup_failure_map x)"

text \<open>Correspondence and weakest precondition
        rules for the "on failure" transformers\<close>

lemma corres_injection:
  assumes x: "t = injection_handler fn"
  assumes y: "t' = injection_handler fn'"
  assumes z: "\<And>ft ft'. f' ft ft' \<Longrightarrow> f (fn ft) (fn' ft')"
  shows      "corres (f' \<oplus> r) P P' m m'
     \<Longrightarrow> corres (f \<oplus> r) P P' (t m) (t' m')"
  apply (simp add: injection_handler_def handleE'_def x y)
  apply (rule corres_guard_imp)
    apply (rule corres_split)
       apply assumption
      apply (case_tac v, (clarsimp simp: z)+)
     apply (rule wp_post_taut)
    apply (rule wp_post_taut)
   apply simp
  apply simp
  done

lemma rethrowFailure_injection:
  "rethrowFailure = injection_handler"
  by (intro ext, simp add: rethrowFailure_def injection_handler_def o_def)

lemma capFault_injection:
 "capFaultOnFailure addr b = injection_handler (Fault_H.CapFault addr b)"
  apply (rule ext)
  apply (simp add: capFaultOnFailure_def rethrowFailure_injection)
  done

lemma lookupError_injection:
 "lookupErrorOnFailure b = injection_handler (Fault_H.FailedLookup b)"
  apply (rule ext)
  apply (simp add: lookupErrorOnFailure_def rethrowFailure_injection)
  done

lemma corres_cap_fault:
  "corres (lfr \<oplus> r) P P' f g \<Longrightarrow>
   corres (fr \<oplus> r) P P' (cap_fault_on_failure addr b f)
                        (capFaultOnFailure addr b g)"
  by (fastforce intro: corres_injection[where f'=lfr]
                simp: cap_fault_injection capFault_injection)

lemmas corresK_cap_fault =
  corres_cap_fault[atomized, THEN corresK_lift_rule, rule_format, corresK]

lemmas capFault_wp[wp] = injection_wp[OF capFault_injection]
lemmas capFault_wp_E[wp] = injection_wp_E[OF capFault_injection]

lemmas capFault_bindE =
  injection_bindE[OF capFault_injection capFault_injection]

lemmas capFault_liftE[simp] = injection_liftE[OF capFault_injection]

lemma corres_lookup_error:
  "\<lbrakk> corres (lfr \<oplus> r) P P' f g \<rbrakk>
     \<Longrightarrow> corres (ser \<oplus> r) P P' (lookup_error_on_failure b f)
                                (lookupErrorOnFailure b g)"
  by (fastforce intro: corres_injection[where f'=lfr]
                simp: lookup_error_injection lookupError_injection)

lemmas corresK_lookup_error =
  corres_lookup_error[atomized, THEN corresK_lift_rule, rule_format, corresK]

lemmas lookupError_wp[wp] = injection_wp[OF lookupError_injection]
lemmas lookupError_wp_E[wp] = injection_wp_E[OF lookupError_injection]

lemmas lookupError_bindE =
  injection_bindE[OF lookupError_injection lookupError_injection]

lemmas lookupError_liftE[simp] = injection_liftE[OF lookupError_injection]

lemma unifyFailure_injection:
  "unifyFailure = injection_handler (\<lambda>x. ())"
  by (rule ext,
      simp add: unifyFailure_def injection_handler_def
                rethrowFailure_def o_def)

lemmas unifyFailure_injection_corres =
  corres_injection [where f=dc, simplified, OF _ unifyFailure_injection]

lemmas unifyFailure_discard =
  unifyFailure_injection_corres [OF id_injection, simplified]

lemmas unifyFailure_wp = injection_wp [OF unifyFailure_injection]

lemmas unifyFailure_wp_E[wp] = injection_wp_E [OF unifyFailure_injection]

lemmas corres_unify_failure =
  corres_injection [OF unify_failure_injection unifyFailure_injection, rotated]

lemma ignoreFailure_wp[wp_split]:
  "\<lbrace>P\<rbrace> v \<lbrace>\<lambda>rv. Q ()\<rbrace>,\<lbrace>\<lambda>rv. Q ()\<rbrace>
      \<Longrightarrow> \<lbrace>P\<rbrace> ignoreFailure v \<lbrace>Q\<rbrace>"
  by (simp add: ignoreFailure_def const_def) wp

lemma ep'_cases_weak_wp:
  assumes "\<lbrace>P_A\<rbrace> a \<lbrace>Q\<rbrace>"
  assumes "\<And>q. \<lbrace>P_B\<rbrace> b q \<lbrace>Q\<rbrace>"
  assumes "\<And>q. \<lbrace>P_C\<rbrace> c q \<lbrace>Q\<rbrace>"
  shows
  "\<lbrace>P_A and P_B and P_C\<rbrace>
    case ts of
      IdleEP \<Rightarrow> a
    | SendEP q \<Rightarrow> b q
    | RecvEP q \<Rightarrow> c q
   \<lbrace>Q\<rbrace>"
  apply (cases ts)
  apply (simp, rule hoare_weaken_pre, rule assms, simp)+
  done

lemma ntfn'_cases_weak_wp:
  assumes "\<lbrace>P_A\<rbrace> a \<lbrace>Q\<rbrace>"
  assumes "\<And>q. \<lbrace>P_B\<rbrace> b q \<lbrace>Q\<rbrace>"
  assumes "\<And>bdg. \<lbrace>P_C\<rbrace> c bdg \<lbrace>Q\<rbrace>"
  shows
  "\<lbrace>P_A and P_B and P_C\<rbrace>
    case ts of
      IdleNtfn \<Rightarrow> a
    | WaitingNtfn q \<Rightarrow> b q
    | ActiveNtfn bdg \<Rightarrow> c bdg
   \<lbrace>Q\<rbrace>"
  apply (cases ts)
  apply (simp, rule hoare_weaken_pre, rule assms, simp)+
  done

lemma ko_at_imp_cte_wp_at':
  fixes x :: cte
  shows "\<lbrakk> ko_at' x ptr s \<rbrakk> \<Longrightarrow> cte_wp_at' (\<lambda>cte. cte = x) ptr s"
  apply (erule obj_atE')
  apply (clarsimp simp: projectKOs objBits_simps')
  apply (erule cte_wp_at_cteI')
    apply (simp add: cte_level_bits_def)+
  done

lemma modify_map_casesD:
  "modify_map m p f p' = Some cte \<Longrightarrow>
  (p \<noteq> p' \<longrightarrow> m p' = Some cte) \<and>
  (p = p' \<longrightarrow> (\<exists>cap node. m p = Some (CTE cap node) \<and> f (CTE cap node) = cte))"
  apply (simp add: modify_map_def split: if_split_asm)
  apply clarsimp
  apply (case_tac z)
  apply auto
  done

lemma modify_map_casesE:
  "\<lbrakk> modify_map m p f p' = Some cte;
     \<lbrakk> p \<noteq> p'; m p' = Some cte \<rbrakk> \<Longrightarrow> P;
     \<And>cap node. \<lbrakk> p = p'; m p = Some (CTE cap node); cte = f (CTE cap node) \<rbrakk> \<Longrightarrow> P
   \<rbrakk> \<Longrightarrow> P"
  by (auto dest: modify_map_casesD)

lemma modify_map_cases:
  "(modify_map m p f p' = Some cte) =
  ((p \<noteq> p' \<longrightarrow> m p' = Some cte) \<and>
   (p = p' \<longrightarrow> (\<exists>cap node. m p = Some (CTE cap node) \<and> f (CTE cap node) = cte)))"
  apply (rule iffI)
   apply (erule modify_map_casesD)
  apply (clarsimp simp: modify_map_def)
  done

lemma no_0_modify_map [simp]:
  "no_0 (modify_map m p f) = no_0 m"
  by (simp add: no_0_def modify_map_def)

lemma modify_map_0 [simp]:
  "no_0 m \<Longrightarrow> modify_map m 0 f = m"
  by (rule ext) (auto simp add: modify_map_def no_0_def)

lemma modify_map_exists:
  "\<exists>cap node. m p = Some (CTE cap node)
    \<Longrightarrow> \<exists>cap' node'. modify_map m q f p = Some (CTE cap' node')"
  apply clarsimp
  apply (case_tac "f (CTE cap node)")
  apply (cases "q=p")
   apply (auto simp add: modify_map_cases)
  done

lemma modify_map_exists_rev:
  "modify_map m q f p = Some (CTE cap node)
    \<Longrightarrow> \<exists>cap' node'. m p = Some (CTE cap' node')"
  apply (case_tac "f (CTE cap node)")
  apply (cases "q=p")
   apply (auto simp add: modify_map_cases)
  done

lemma modify_map_if:
  "(modify_map m p f p' = Some cte) =
   (if p = p'
    then \<exists>cap node. m p = Some (CTE cap node) \<and> f (CTE cap node) = cte
    else \<exists>cap node. m p' = Some (CTE cap node) \<and> cte = CTE cap node)"
  apply (cases cte)
  apply (rule iffI)
   apply (drule modify_map_casesD)
   apply auto[1]
  apply (auto simp: modify_map_def)
  done

lemma corres_empty_on_failure:
  "corres ((\<lambda>x y. r [] []) \<oplus> r) P P' m m' \<Longrightarrow>
   corres r P P' (empty_on_failure m) (emptyOnFailure m')"
  apply (simp add: empty_on_failure_def emptyOnFailure_def)
  apply (rule corres_guard_imp)
    apply (rule corres_split_catch)
       apply assumption
      apply (rule corres_trivial, simp)
     apply wp+
   apply simp+
  done

lemmas corresK_empty_on_failure =
  corres_empty_on_failure[atomized, THEN corresK_lift_rule, rule_format, corresK]

lemma emptyOnFailure_wp[wp]:
  "\<lbrace>P\<rbrace> m \<lbrace>Q\<rbrace>,\<lbrace>\<lambda>rv. Q []\<rbrace> \<Longrightarrow> \<lbrace>P\<rbrace> emptyOnFailure m \<lbrace>Q\<rbrace>"
  by (simp add: emptyOnFailure_def) wp

lemma withoutPreemption_lift:
  "\<lbrace>P\<rbrace> f \<lbrace>Q\<rbrace> \<Longrightarrow> \<lbrace>P\<rbrace> withoutPreemption f \<lbrace>Q\<rbrace>, \<lbrace>E\<rbrace>"
  by simp

lemma withoutPreemption_R:
  "\<lbrace>\<top>\<rbrace> withoutPreemption f -, \<lbrace>Q\<rbrace>"
  by (wp withoutPreemption_lift)

lemma ko_at_cte_ipcbuffer:
  "ko_at' tcb p s \<Longrightarrow> cte_wp_at' (\<lambda>x. x = tcbIPCBufferFrame tcb) (p + tcbIPCBufferSlot * 0x10) s"
  apply (clarsimp simp: obj_at'_def projectKOs objBits_simps)
  apply (erule (2) cte_wp_at_tcbI')
   apply (fastforce simp add: tcb_cte_cases_def tcbIPCBufferSlot_def)
  apply simp
  done

lemma set_ep_arch':
  "\<lbrace>\<lambda>s. P (ksArchState s)\<rbrace> setEndpoint ntfn p \<lbrace>\<lambda>_ s. P (ksArchState s)\<rbrace>"
  apply (simp add: setEndpoint_def setObject_def split_def)
  apply (wp updateObject_default_inv|simp)+
  done

lemma corres_const_on_failure:
  "corres ((\<lambda>_ _. r x y) \<oplus> r) P P' m m' \<Longrightarrow>
   corres r P P' (const_on_failure x m) (constOnFailure y m')"
  apply (simp add: const_on_failure_def constOnFailure_def)
  apply (rule corres_guard_imp)
    apply (rule corres_split_catch)
       apply assumption
      apply (rule corres_trivial, simp)
      apply (clarsimp simp: const_def)
     apply wp+
   apply simp+
  done

lemmas corresK_const_on_failure =
  corres_const_on_failure[atomized, THEN corresK_lift_rule, rule_format, corresK]

lemma constOnFailure_wp :
  "\<lbrace>P\<rbrace> m \<lbrace>Q\<rbrace>, \<lbrace>\<lambda>rv. Q n\<rbrace> \<Longrightarrow> \<lbrace>P\<rbrace> constOnFailure n m \<lbrace>Q\<rbrace>"
  apply (simp add: constOnFailure_def const_def)
  apply (wp|simp)+
  done

end
end
{"author": "seL4", "repo": "l4v", "sha": "9ba34e269008732d4f89fb7a7e32337ffdd09ff9", "save_path": "github-repos/isabelle/seL4-l4v", "path": "github-repos/isabelle/seL4-l4v/l4v-9ba34e269008732d4f89fb7a7e32337ffdd09ff9/proof/refine/ARM/Bits_R.thy"}
import numpy as np  # type: ignore
from typing import List, Optional


def _is_multi_dimensional(series) -> bool:
    try:
        series[0][0]
        return True
    except Exception:  # flat series raise TypeError/IndexError here
        return False


class MultiSeries:
    def __init__(self, ys, xs=None):
        # Init types
        self.xs: List[np.ndarray] = []
        self.ys: List[np.ndarray] = []

        # First check if the input is multi-dimensional
        we_have_input_of_multiple_series = _is_multi_dimensional(ys)

        # Initialize y series
        if we_have_input_of_multiple_series:
            self.ys = [np.array(ys_row) for ys_row in ys]
        else:
            self.ys = [np.array(ys)]

        # Initialize x series
        if xs is None:
            self.xs = [
                np.arange(1, len(ys_row) + 1, step=1, dtype=int) for ys_row in self.ys
            ]
        else:
            if we_have_input_of_multiple_series:
                self.xs = [np.array(xs_row) for xs_row in xs]
            else:
                self.xs = [np.array(xs)]

    def __len__(self) -> int:
        """Return the number of time series."""
        return len(self.ys)

    def shape(self) -> List[int]:
        """Return a list with the length of each time series."""
        return [len(ys_row) for ys_row in self.ys]

    def y_max(self) -> float:
        return max([ys_row.max() for ys_row in self.ys])

    def y_min(self) -> float:
        return min([ys_row.min() for ys_row in self.ys])

    def x_max(self) -> float:
        return max([xs_row.max() for xs_row in self.xs])

    def x_min(self) -> float:
        return min([xs_row.min() for xs_row in self.xs])
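# --- Illustrative usage (not part of the original module) --------------------
if __name__ == "__main__":
    single = MultiSeries(ys=[1.0, 4.0, 9.0])          # xs defaults to 1..n
    multi = MultiSeries(ys=[[1, 2, 3], [10, 20]],     # two series, ragged lengths
                        xs=[[0, 1, 2], [5, 6]])
    print(len(multi), multi.shape())     # 2 [3, 2]
    print(multi.y_min(), multi.y_max())  # 1 20
    print(single.x_max())                # 3
# ------------------------------------------------------------------------------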
{"hexsha": "84308143555048f5feaf49413079693ffd351777", "size": 1625, "ext": "py", "lang": "Python", "max_stars_repo_path": "uniplot/multi_series.py", "max_stars_repo_name": "olavolav/textplot", "max_stars_repo_head_hexsha": "f665a0d8cf1822b46db7c3ffe1766888ff1de3bf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 156, "max_stars_repo_stars_event_min_datetime": "2020-08-17T19:05:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T13:40:42.000Z", "max_issues_repo_path": "uniplot/multi_series.py", "max_issues_repo_name": "arita37/uniplot", "max_issues_repo_head_hexsha": "2e49cb01132a51029ffb8a5ab3b2704fdb7e2021", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2020-08-17T12:42:52.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-12T09:53:02.000Z", "max_forks_repo_path": "uniplot/multi_series.py", "max_forks_repo_name": "arita37/uniplot", "max_forks_repo_head_hexsha": "2e49cb01132a51029ffb8a5ab3b2704fdb7e2021", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2020-09-12T03:12:50.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-17T00:12:48.000Z", "avg_line_length": 28.0172413793, "max_line_length": 86, "alphanum_fraction": 0.5673846154, "include": true, "reason": "import numpy", "num_tokens": 423}
# coding: utf-8
# 2021/3/28 @ liujiayu

import random
import numpy as np
import pytest


@pytest.fixture(scope="package")
def conf():
    user_num = 5
    item_num = 2
    know_num = 3
    return user_num, item_num, know_num


@pytest.fixture(scope="package")
def data(conf):
    user_num, item_num, know_num = conf
    q_m = np.zeros(shape=(item_num, know_num))
    for i in range(item_num):
        for j in range(know_num):
            q_m[i, j] = random.randint(0, 1)

    R = -1 * np.ones(shape=(user_num, item_num))
    for i in range(user_num):
        for j in range(item_num):
            R[i, j] = random.randint(-1, 1)

    index = random.randint(1, item_num - 1)
    obj_prob_index = np.arange(0, index)
    sub_prob_index = np.arange(index - 1, item_num)

    new_data = [{'user_id': 1, 'item_id': 1, 'score': 1.0}]
    return user_num, item_num, know_num, R, q_m, obj_prob_index, sub_prob_index, new_data
{"hexsha": "f8ef61392a645baf343c7ea9a810160c30a748d3", "size": 920, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/fuzzycdf/conftest.py", "max_stars_repo_name": "zelo2/EduCDM", "max_stars_repo_head_hexsha": "d725dc50ec677dfe409d88a3ffea6dce8effad62", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 36, "max_stars_repo_stars_event_min_datetime": "2021-04-28T03:22:03.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T16:54:44.000Z", "max_issues_repo_path": "tests/fuzzycdf/conftest.py", "max_issues_repo_name": "zelo2/EduCDM", "max_issues_repo_head_hexsha": "d725dc50ec677dfe409d88a3ffea6dce8effad62", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 21, "max_issues_repo_issues_event_min_datetime": "2021-03-18T14:10:11.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-29T14:12:45.000Z", "max_forks_repo_path": "tests/fuzzycdf/conftest.py", "max_forks_repo_name": "zelo2/EduCDM", "max_forks_repo_head_hexsha": "d725dc50ec677dfe409d88a3ffea6dce8effad62", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2021-03-17T14:43:18.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T07:52:26.000Z", "avg_line_length": 24.8648648649, "max_line_length": 89, "alphanum_fraction": 0.6347826087, "include": true, "reason": "import numpy", "num_tokens": 285}
!
!                Parallel Sparse BLAS  version 3.5
!      (C) Copyright 2006-2018
!        Salvatore Filippone
!        Alfredo Buttari
!
!    Redistribution and use in source and binary forms, with or without
!    modification, are permitted provided that the following conditions
!    are met:
!      1. Redistributions of source code must retain the above copyright
!         notice, this list of conditions and the following disclaimer.
!      2. Redistributions in binary form must reproduce the above copyright
!         notice, this list of conditions, and the following disclaimer in the
!         documentation and/or other materials provided with the distribution.
!      3. The name of the PSBLAS group or the names of its contributors may
!         not be used to endorse or promote products derived from this
!         software without specific written permission.
!
!    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
!    ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
!    TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
!    PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE PSBLAS GROUP OR ITS CONTRIBUTORS
!    BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
!    CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
!    SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
!    INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
!    CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
!    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
!    POSSIBILITY OF SUCH DAMAGE.
!
Module psb_s_tools_mod
  use psb_desc_mod, only : psb_desc_type, psb_spk_, psb_ipk_, psb_lpk_
  use psb_s_vect_mod, only : psb_s_base_vect_type, psb_s_vect_type
  use psb_s_mat_mod, only : psb_sspmat_type, psb_lsspmat_type, psb_s_base_sparse_mat
  use psb_l_vect_mod, only : psb_l_vect_type
  use psb_s_multivect_mod, only : psb_s_base_multivect_type, psb_s_multivect_type

  interface psb_geall
    subroutine psb_salloc_vect(x, desc_a, info, n)
      import
      implicit none
      type(psb_s_vect_type), intent(out) :: x
      type(psb_desc_type), intent(in)    :: desc_a
      integer(psb_ipk_), intent(out)     :: info
      integer(psb_ipk_), optional, intent(in) :: n
    end subroutine psb_salloc_vect
    subroutine psb_salloc_vect_r2(x, desc_a, info, n, lb)
      import
      implicit none
      type(psb_s_vect_type), allocatable, intent(out) :: x(:)
      type(psb_desc_type), intent(in)    :: desc_a
      integer(psb_ipk_), intent(out)     :: info
      integer(psb_ipk_), optional, intent(in) :: n, lb
    end subroutine psb_salloc_vect_r2
    subroutine psb_salloc_multivect(x, desc_a, info, n)
      import
      implicit none
      type(psb_s_multivect_type), intent(out) :: x
      type(psb_desc_type), intent(in)    :: desc_a
      integer(psb_ipk_), intent(out)     :: info
      integer(psb_ipk_), optional, intent(in) :: n
    end subroutine psb_salloc_multivect
  end interface

  interface psb_geasb
    subroutine psb_sasb_vect(x, desc_a, info, mold, scratch)
      import
      implicit none
      type(psb_desc_type), intent(in)      :: desc_a
      type(psb_s_vect_type), intent(inout) :: x
      integer(psb_ipk_), intent(out)       :: info
      class(psb_s_base_vect_type), intent(in), optional :: mold
      logical, intent(in), optional        :: scratch
    end subroutine psb_sasb_vect
    subroutine psb_sasb_vect_r2(x, desc_a, info, mold, scratch)
      import
      implicit none
      type(psb_desc_type), intent(in)      :: desc_a
      type(psb_s_vect_type), intent(inout) :: x(:)
      integer(psb_ipk_), intent(out)       :: info
      class(psb_s_base_vect_type), intent(in), optional :: mold
      logical, intent(in), optional        :: scratch
    end subroutine psb_sasb_vect_r2
    subroutine psb_sasb_multivect(x, desc_a, info, mold, scratch, n)
      import
      implicit none
      type(psb_desc_type), intent(in)           :: desc_a
      type(psb_s_multivect_type), intent(inout) :: x
      integer(psb_ipk_), intent(out)            :: info
      class(psb_s_base_multivect_type), intent(in), optional :: mold
      logical, intent(in), optional             :: scratch
      integer(psb_ipk_), optional, intent(in)   :: n
    end subroutine psb_sasb_multivect
  end interface

  interface psb_gefree
    subroutine psb_sfree_vect(x, desc_a, info)
      import
      implicit none
      type(psb_desc_type), intent(in)      :: desc_a
      type(psb_s_vect_type), intent(inout) :: x
      integer(psb_ipk_), intent(out)       :: info
    end subroutine psb_sfree_vect
    subroutine psb_sfree_vect_r2(x, desc_a, info)
      import
      implicit none
      type(psb_desc_type), intent(in)      :: desc_a
      type(psb_s_vect_type), allocatable, intent(inout) :: x(:)
      integer(psb_ipk_), intent(out)       :: info
    end subroutine psb_sfree_vect_r2
    subroutine psb_sfree_multivect(x, desc_a, info)
      import
      implicit none
      type(psb_desc_type), intent(in)           :: desc_a
      type(psb_s_multivect_type), intent(inout) :: x
      integer(psb_ipk_), intent(out)            :: info
    end subroutine psb_sfree_multivect
  end interface

  interface psb_geins
    subroutine psb_sins_vect(m, irw, val, x, desc_a, info, dupl, local)
      import
      implicit none
      integer(psb_ipk_), intent(in)        :: m
      type(psb_desc_type), intent(in)      :: desc_a
      type(psb_s_vect_type), intent(inout) :: x
      integer(psb_lpk_), intent(in)        :: irw(:)
      real(psb_spk_), intent(in)           :: val(:)
      integer(psb_ipk_), intent(out)       :: info
      integer(psb_ipk_), optional, intent(in) :: dupl
      logical, intent(in), optional        :: local
    end subroutine psb_sins_vect
    subroutine psb_sins_vect_v(m, irw, val, x, desc_a, info, dupl, local)
      import
      implicit none
      integer(psb_ipk_), intent(in)        :: m
      type(psb_desc_type), intent(in)      :: desc_a
      type(psb_s_vect_type), intent(inout) :: x
      type(psb_l_vect_type), intent(inout) :: irw
      type(psb_s_vect_type), intent(inout) :: val
      integer(psb_ipk_), intent(out)       :: info
      integer(psb_ipk_), optional, intent(in) :: dupl
      logical, intent(in), optional        :: local
    end subroutine psb_sins_vect_v
    subroutine psb_sins_vect_r2(m, irw, val, x, desc_a, info, dupl, local)
      import
      implicit none
      integer(psb_ipk_), intent(in)        :: m
      type(psb_desc_type), intent(in)      :: desc_a
      type(psb_s_vect_type), intent(inout) :: x(:)
      integer(psb_lpk_), intent(in)        :: irw(:)
      real(psb_spk_), intent(in)           :: val(:,:)
      integer(psb_ipk_), intent(out)       :: info
      integer(psb_ipk_), optional, intent(in) :: dupl
      logical, intent(in), optional        :: local
    end subroutine psb_sins_vect_r2
    subroutine psb_sins_multivect(m, irw, val, x, desc_a, info, dupl, local)
      import
      implicit none
      integer(psb_ipk_), intent(in)             :: m
      type(psb_desc_type), intent(in)           :: desc_a
      type(psb_s_multivect_type), intent(inout) :: x
      integer(psb_lpk_), intent(in)             :: irw(:)
      real(psb_spk_), intent(in)                :: val(:,:)
      integer(psb_ipk_), intent(out)            :: info
      integer(psb_ipk_), optional, intent(in)   :: dupl
      logical, intent(in), optional             :: local
    end subroutine psb_sins_multivect
  end interface

  interface psb_cdbldext
    Subroutine psb_scdbldext(a, desc_a, novr, desc_ov, info, extype)
      import
      implicit none
      integer(psb_ipk_), intent(in)               :: novr
      Type(psb_sspmat_type), Intent(in)           :: a
      Type(psb_desc_type), Intent(inout), target  :: desc_a
      Type(psb_desc_type), Intent(out)            :: desc_ov
      integer(psb_ipk_), intent(out)              :: info
      integer(psb_ipk_), intent(in), optional     :: extype
    end Subroutine psb_scdbldext
  end interface

  interface psb_sphalo
    Subroutine psb_ssphalo(a, desc_a, blk, info, rowcnv, colcnv, &
         & rowscale, colscale, outfmt, data)
      import
      implicit none
      Type(psb_sspmat_type), Intent(in)        :: a
      Type(psb_sspmat_type), Intent(inout)     :: blk
      Type(psb_desc_type), Intent(in), target  :: desc_a
      integer(psb_ipk_), intent(out)           :: info
      logical, optional, intent(in)            :: rowcnv, colcnv, rowscale, colscale
      character(len=5), optional               :: outfmt
      integer(psb_ipk_), intent(in), optional  :: data
    end Subroutine psb_ssphalo
    Subroutine psb_lssphalo(a, desc_a, blk, info, rowcnv, colcnv, &
         & rowscale, colscale, outfmt, data)
      import
      implicit none
      Type(psb_lsspmat_type), Intent(in)       :: a
      Type(psb_lsspmat_type), Intent(inout)    :: blk
      Type(psb_desc_type), Intent(in), target  :: desc_a
      integer(psb_ipk_), intent(out)           :: info
      logical, optional, intent(in)            :: rowcnv, colcnv, rowscale, colscale
      character(len=5), optional               :: outfmt
      integer(psb_ipk_), intent(in), optional  :: data
    end Subroutine psb_lssphalo
  end interface

  interface psb_spall
    subroutine psb_sspalloc(a, desc_a, info, nnz)
      import
      implicit none
      type(psb_desc_type), intent(in)       :: desc_a
      type(psb_sspmat_type), intent(inout)  :: a
      integer(psb_ipk_), intent(out)        :: info
      integer(psb_ipk_), optional, intent(in) :: nnz
    end subroutine psb_sspalloc
  end interface

  interface psb_spasb
    subroutine psb_sspasb(a, desc_a, info, afmt, upd, dupl, mold)
      import
      implicit none
      type(psb_sspmat_type), intent(inout)  :: a
      type(psb_desc_type), intent(in)       :: desc_a
      integer(psb_ipk_), intent(out)        :: info
      integer(psb_ipk_), optional, intent(in) :: dupl, upd
      character(len=*), optional, intent(in)  :: afmt
      class(psb_s_base_sparse_mat), intent(in), optional :: mold
    end subroutine psb_sspasb
  end interface

  interface psb_spfree
    subroutine psb_sspfree(a, desc_a, info)
      import
      implicit none
      type(psb_desc_type), intent(in)       :: desc_a
      type(psb_sspmat_type), intent(inout)  :: a
      integer(psb_ipk_), intent(out)        :: info
    end subroutine psb_sspfree
  end interface

  interface psb_spins
    subroutine psb_sspins(nz, ia, ja, val, a, desc_a, info, rebuild, local)
      import
      implicit none
      type(psb_desc_type), intent(inout)    :: desc_a
      type(psb_sspmat_type), intent(inout)  :: a
      integer(psb_ipk_), intent(in)         :: nz
      integer(psb_lpk_), intent(in)         :: ia(:), ja(:)
      real(psb_spk_), intent(in)            :: val(:)
      integer(psb_ipk_), intent(out)        :: info
      logical, intent(in), optional         :: rebuild
      logical, intent(in), optional         :: local
    end subroutine psb_sspins
    subroutine psb_sspins_v(nz, ia, ja, val, a, desc_a, info, rebuild, local)
      use psb_i_vect_mod, only : psb_i_vect_type
      import
      implicit none
      type(psb_desc_type), intent(inout)    :: desc_a
      type(psb_sspmat_type), intent(inout)  :: a
      integer(psb_ipk_), intent(in)         :: nz
      type(psb_l_vect_type), intent(inout)  :: ia, ja
      type(psb_s_vect_type), intent(inout)  :: val
      integer(psb_ipk_), intent(out)        :: info
      logical, intent(in), optional         :: rebuild
      logical, intent(in), optional         :: local
    end subroutine psb_sspins_v
    subroutine psb_sspins_2desc(nz, ia, ja, val, a, desc_ar, desc_ac, info)
      import
      implicit none
      type(psb_desc_type), intent(in)       :: desc_ar
      type(psb_desc_type), intent(inout)    :: desc_ac
      type(psb_sspmat_type), intent(inout)  :: a
      integer(psb_ipk_), intent(in)         :: nz
      integer(psb_lpk_), intent(in)         :: ia(:), ja(:)
      real(psb_spk_), intent(in)            :: val(:)
      integer(psb_ipk_), intent(out)        :: info
    end subroutine psb_sspins_2desc
  end interface

  interface psb_sprn
    subroutine psb_ssprn(a, desc_a, info, clear)
      import
      implicit none
      type(psb_desc_type), intent(in)       :: desc_a
      type(psb_sspmat_type), intent(inout)  :: a
      integer(psb_ipk_), intent(out)        :: info
      logical, intent(in), optional         :: clear
    end subroutine psb_ssprn
  end interface

end module psb_s_tools_mod
{"hexsha": "aaed4bb18a680ef63f8fadc26e93b17dadfd2349", "size": 12688, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "base/modules/tools/psb_s_tools_mod.f90", "max_stars_repo_name": "fccf/psblas3", "max_stars_repo_head_hexsha": "b6cfcf93ac2f08e7b1a1970ee638af9890502291", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "base/modules/tools/psb_s_tools_mod.f90", "max_issues_repo_name": "fccf/psblas3", "max_issues_repo_head_hexsha": "b6cfcf93ac2f08e7b1a1970ee638af9890502291", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "base/modules/tools/psb_s_tools_mod.f90", "max_forks_repo_name": "fccf/psblas3", "max_forks_repo_head_hexsha": "b6cfcf93ac2f08e7b1a1970ee638af9890502291", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.0132450331, "max_line_length": 84, "alphanum_fraction": 0.6370586381, "num_tokens": 3517}
import pyaos
import cv2
import os
import unittest
import sys
import glm
import numpy as np
import numpy.testing
from pathlib import Path


class TestAOSRenderTwice(unittest.TestCase):

    _window = None
    _aos1 = None
    _aos2 = None
    _fovDegrees = 50

    def setUp(self):
        self._window = pyaos.PyGlfwWindow(512, 512, 'AOS')  # make sure there is an OpenGL context
        # print( 'initializing AOS ... ' )
        self._aos1 = pyaos.PyAOS(512, 512, self._fovDegrees)
        self._aos2 = pyaos.PyAOS(512, 512, self._fovDegrees)
        # loading DEM
        self._aos1.loadDEM("../data/plane.obj")
        self._aos2.loadDEM("../data/plane.obj")

    def tearDown(self):
        del self._aos1
        del self._aos2
        del self._window

    def test_render_twice(self):
        self.render_single_image(self._aos1)
        self.render_single_image(self._aos2)

    def test_clear_view(self):
        img = np.ones(shape=(512, 512, 1), dtype=np.float32)
        pose = np.eye(4)
        _aos = self._aos1
        # adding 1 view
        self.assertTrue(_aos.getSize() == 0)
        _aos.addView(img, pose, "01")
        self.assertTrue(_aos.getSize() == 1)
        _aos.clearViews()
        self.assertTrue(_aos.getSize() == 0)
        # adding N views
        for n in range(10):
            self.assertTrue(_aos.getSize() == n)
            _aos.addView(img, pose, str(n))
            self.assertTrue(_aos.getSize() == (n + 1))
        _aos.clearViews()
        self.assertTrue(_aos.getSize() == 0)

    def render_single_image(self, _aos):
        img = np.ones(shape=(512, 512, 1), dtype=np.float32)
        pose = np.eye(4)
        # adding a view
        self.assertTrue(_aos.getSize() == 0)
        _aos.addView(img, pose, "01")
        self.assertTrue(_aos.getSize() == 1)
        rimg = _aos.render(pose, self._fovDegrees)
        # check that the rendered image is like the initial one
        # print( img )
        # print( rimg )
        self.assertTrue(np.allclose(img[:, :, 0], rimg[:, :, 0]))

        # adding a second view:
        img2 = np.ones(shape=(512, 512, 1), dtype=np.float32) * 2.0
        self.assertTrue(_aos.getSize() == 1)
        _aos.addView(img2, pose, "02")
        self.assertTrue(_aos.getSize() == 2)
        rimg2 = _aos.render(pose, self._fovDegrees)
        rimg2 = rimg2[:, :, 0] / rimg2[:, :, 3]
        # check that the rendered image is an average of the first and the second one! (1 + 2)/2 = 1.5
        self.assertTrue(np.allclose(rimg2, np.ones(shape=(512, 512), dtype=np.float32) * 1.5))
        self.assertTrue(np.allclose(img[:, :, 0], rimg[:, :, 0]))  # check that rimg has not changed!

        # replacing the second view with a new one
        img3 = np.ones(shape=(512, 512), dtype=np.float32) * 3.0
        self.assertTrue(_aos.getSize() == 2)
        _aos.replaceView(1, img3, pose, "03")
        self.assertTrue(_aos.getSize() == 2)
        rimg = _aos.render(pose, self._fovDegrees)
        rimg = rimg[:, :, 0] / rimg[:, :, 3]
        # check that the rendered image is an average of the first and the third one! (1 + 3)/2 = 2.0
        self.assertTrue(np.allclose(rimg, np.ones(shape=(512, 512), dtype=np.float32) * 2.0))

        # render only the third one
        rimg = _aos.render(pose, self._fovDegrees, [1])
        rimg = rimg[:, :, 0] / rimg[:, :, 3]
        # check that the rendered image is like the third image
        self.assertTrue(np.allclose(rimg, np.ones(shape=(512, 512), dtype=np.float32) * 3.0))
        # cv2.imshow("Rendering", rimg)
        # cv2.waitKey(0)

        # get XYZ coordinates from plane.obj (z coordinate should be -100 everywhere)
        xyz = _aos.getXYZ()
        self.assertTrue(np.isclose(xyz[:, :, 2], -100.0).all())

        # translate the DEM up
        _aos.setDEMTransform([0, 0, 10])
        _aos.render(pose, self._fovDegrees, [1])
        xyz = _aos.getXYZ()
        print(xyz)
        self.assertTrue(np.isclose(xyz[:, :, 2], -90.0).all())

        # translate the DEM down
        _aos.setDEMTransform([0, 0, -20])
        _aos.render(pose, self._fovDegrees, [1])
        xyz = _aos.getXYZ()
        print(xyz[:, :, 2])
        self.assertTrue(np.isclose(xyz[:, :, 2], -120.0).all())

        _aos.removeView(0)
        self.assertTrue(_aos.getSize() == 1)
        _aos.removeView(0)
        self.assertTrue(_aos.getSize() == 0)


class TestAOSInit(unittest.TestCase):
    """ Test different scenarios for initialization """

    # standard initialization
    def test_init(self):
        window = pyaos.PyGlfwWindow(512, 512, 'AOS')  # make sure there is an OpenGL context
        # print( 'initializing AOS ... ' )
        aos = pyaos.PyAOS(512, 512, 50, 10)
        # print( 'aos created!' )
        del aos
        del window

    # initialization of two instances
    def test_init_two(self):
        window = pyaos.PyGlfwWindow(512, 512, 'AOS')  # make sure there is an OpenGL context
        # print( 'initializing AOS ... ' )
        aos1 = pyaos.PyAOS(512, 512, 50, 10)
        # print( 'aos created!' )
        aos2 = pyaos.PyAOS(512, 512, 55)
        del aos1
        del aos2
        del window

    # initialization twice
    def test_init_twice(self):
        window = pyaos.PyGlfwWindow(512, 512, 'AOS')  # make sure there is an OpenGL context
        # print( 'initializing AOS ... ' )
        aos1 = pyaos.PyAOS(512, 512, 50, 10)
        del aos1
        del window
        # print( 'aos created!' )
        window = pyaos.PyGlfwWindow(512, 512, 'AOS')  # make sure there is an OpenGL context
        aos2 = pyaos.PyAOS(512, 512, 55)
        del aos2
        del window

    def test_nocontext(self):
        # do it without a valid opengl context
        errorRaised = False
        try:
            aos = pyaos.PyAOS(512, 512, 50, 10)
        except RuntimeError as err:
            print("Runtime error: ", err)
            errorRaised = True
        self.assertTrue(errorRaised)

    def test_shaderloading(self):
        # wrong working directory, so shaders cannot be loaded!
        olddir = os.getcwd()
        os.chdir('..')
        window = pyaos.PyGlfwWindow(512, 512, 'AOS')  # make sure there is an OpenGL context
        errorRaised = False
        try:
            aos = pyaos.PyAOS(512, 512, 50, 10)
        except RuntimeError as err:
            print("Runtime error: ", err)
            errorRaised = True
        self.assertTrue(errorRaised)
        os.chdir(olddir)


if __name__ == '__main__':
    file = Path(__file__).resolve()
    parent, root = file.parent, file.parents[1]
    wd = os.getcwd()
    os.chdir(parent)  # change to AOS working dir for startup (required so the program finds dlls and the shaders)
    unittest.main()
    os.chdir(wd)  # change back to previous working directory
{"hexsha": "988827ed8553de150ecf4f22d9f82a60c22b0deb", "size": 6745, "ext": "py", "lang": "Python", "max_stars_repo_path": "LFR/python/pyaos_test.py", "max_stars_repo_name": "zhouheping239/AOS", "max_stars_repo_head_hexsha": "2346ab523dacffe7612da2e45173b98c4433fc5a", "max_stars_repo_licenses": ["Intel"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2021-06-24T07:21:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T17:16:26.000Z", "max_issues_repo_path": "LFR/python/pyaos_test.py", "max_issues_repo_name": "zhouheping239/AOS", "max_issues_repo_head_hexsha": "2346ab523dacffe7612da2e45173b98c4433fc5a", "max_issues_repo_licenses": ["Intel"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-08-06T18:56:37.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-24T15:13:13.000Z", "max_forks_repo_path": "LFR/python/pyaos_test.py", "max_forks_repo_name": "zhouheping239/AOS", "max_forks_repo_head_hexsha": "2346ab523dacffe7612da2e45173b98c4433fc5a", "max_forks_repo_licenses": ["Intel"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2021-07-13T13:29:53.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T02:25:30.000Z", "avg_line_length": 29.3260869565, "max_line_length": 125, "alphanum_fraction": 0.5885841364, "include": true, "reason": "import numpy", "num_tokens": 1834}
""" @author: DeepCaT_Z """ #%% PRE-PROCESSING (mandatory): # Resizing all frames to pre-defined pixel's resolution # OBS: augmentation operations will be carried out while training the model. #%% ############################################ ######### IMPORTS: DO NOT TOUCH ############## ################################################ import os, shutil import numpy as np from tqdm import tqdm from skimage.io import imread, imsave from skimage.transform import resize from config_preprocess_Classification_DeepCaT_Z import * #%% ############################################ ######### MAIN CODE: DO NOT TOUCH ############## ################################################ new_datadir = f'{directory_original_data}_{IMAGE_SIZE}' if os.path.exists(new_datadir): shutil.rmtree(new_datadir) os.mkdir(new_datadir) for root, dirs, files in os.walk(directory_original_data): new_root = os.path.join(new_datadir, '/'.join(root.split('\\')[1:])) for dirname in dirs: os.mkdir(os.path.join(new_root, dirname)) if len(files) == 0: continue print(root) for f in tqdm(files): if f.endswith('.png'): x = imread(os.path.join(root, f), CHANNELS) if root.endswith('//masks'): x = x >= 128 x = resize(x, (IMAGE_SIZE, IMAGE_SIZE), 0) x = (x * 255).astype(np.uint8) else: x = resize(imread(os.path.join(root, f), CHANNELS), (IMAGE_SIZE, IMAGE_SIZE), 5) x = (x * 255).astype(np.uint8) imsave(os.path.join(new_root, f), x, check_contrast=False) else: shutil.copyfile(os.path.join(root, f), os.path.join(new_root, f)) #%% Resize with 110% IMAGE_SIZE (augmentation purposes) IMAGE_SIZE = int(IMAGE_SIZE*1.10) new_datadir = f'{directory_original_data}_{IMAGE_SIZE}' if os.path.exists(new_datadir): shutil.rmtree(new_datadir) os.mkdir(new_datadir) for root, dirs, files in os.walk(directory_original_data): new_root = os.path.join(new_datadir, '/'.join(root.split('\\')[1:])) for dirname in dirs: os.mkdir(os.path.join(new_root, dirname)) if len(files) == 0: continue print(root) for f in tqdm(files): if f.endswith('.png'): x = imread(os.path.join(root, f), CHANNELS) if root.endswith('//masks'): x = x >= 128 x = resize(x, (IMAGE_SIZE, IMAGE_SIZE), 0) x = (x * 255).astype(np.uint8) else: x = resize(imread(os.path.join(root, f), CHANNELS), (IMAGE_SIZE, IMAGE_SIZE), 5) x = (x * 255).astype(np.uint8) imsave(os.path.join(new_root, f), x, check_contrast=False) else: shutil.copyfile(os.path.join(root, f), os.path.join(new_root, f))
{"hexsha": "ca2c753ca39dae435c46c8a9cb139577b8c6b7eb", "size": 2913, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/TRAIN_MODELS/CLASSIFICATION/preprocess_Classification.py", "max_stars_repo_name": "CaT-zTools/Deep-CaT-z-software", "max_stars_repo_head_hexsha": "9b4b48b62b6621f124fbce3e87160a7b2a2d626c", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/TRAIN_MODELS/CLASSIFICATION/preprocess_Classification.py", "max_issues_repo_name": "CaT-zTools/Deep-CaT-z-software", "max_issues_repo_head_hexsha": "9b4b48b62b6621f124fbce3e87160a7b2a2d626c", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/TRAIN_MODELS/CLASSIFICATION/preprocess_Classification.py", "max_forks_repo_name": "CaT-zTools/Deep-CaT-z-software", "max_forks_repo_head_hexsha": "9b4b48b62b6621f124fbce3e87160a7b2a2d626c", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.8734177215, "max_line_length": 97, "alphanum_fraction": 0.5417095778, "include": true, "reason": "import numpy", "num_tokens": 665}
# -*- coding: utf-8 -*-

import numpy as np
import tensorflow as tf

from yolo_v4 import _conv2d_fixed_padding, _fixed_padding, _get_size, \
    _detection_layer, _upsample

slim = tf.contrib.slim

_BATCH_NORM_DECAY = 0.9
_BATCH_NORM_EPSILON = 1e-05
_LEAKY_RELU = 0.1

_ANCHORSTINY = [(10, 14), (23, 27), (37, 58),
                (81, 82), (135, 169), (344, 319)]
_ANCHORS = [(12, 16), (19, 36), (40, 28),
            (36, 75), (76, 55), (72, 146),
            (142, 110), (192, 243), (459, 401)]


def _tiny_res_block(inputs, in_channels, channel1, channel2, channel3, data_format):
    net = _conv2d_fixed_padding(inputs, in_channels, kernel_size=3)
    route = net
    # _, split = tf.split(net, num_or_size_splits=2, axis=1 if data_format == "NCHW" else 3)
    split = net[:, in_channels // 2:, :, :] if data_format == "NCHW" else net[:, :, :, in_channels // 2:]
    net = _conv2d_fixed_padding(split, channel1, kernel_size=3)
    route1 = net
    net = _conv2d_fixed_padding(net, channel2, kernel_size=3)
    net = tf.concat([net, route1], axis=1 if data_format == 'NCHW' else 3)
    net = _conv2d_fixed_padding(net, channel3, kernel_size=1)
    feat = net
    net = tf.concat([route, net], axis=1 if data_format == 'NCHW' else 3)
    net = slim.max_pool2d(net, [2, 2], scope='pool2')
    return net, feat


def yolo_v4_tiny(inputs, num_classes, is_training=False, data_format='NCHW', reuse=False):
    """
    Creates YOLO v4 tiny model.

    :param inputs: a 4-D tensor of size [batch_size, height, width, channels].
        Dimension batch_size may be undefined. The channel order is RGB.
    :param num_classes: number of predicted classes.
    :param is_training: whether is training or not.
    :param data_format: data format NCHW or NHWC.
    :param reuse: whether or not the network and its variables should be reused.
    :return:
    """
    # it will be needed later on
    img_size = inputs.get_shape().as_list()[1:3]

    # transpose the inputs to NCHW
    if data_format == 'NCHW':
        inputs = tf.transpose(inputs, [0, 3, 1, 2])

    # normalize values to range [0..1]
    inputs = inputs / 255

    # set batch norm params
    batch_norm_params = {
        'decay': _BATCH_NORM_DECAY,
        'epsilon': _BATCH_NORM_EPSILON,
        'scale': True,
        'is_training': is_training,
        'fused': None,  # Use fused batch norm if possible.
    }

    # Set activation_fn and parameters for conv2d, batch_norm.
    with slim.arg_scope([slim.conv2d, slim.batch_norm, _fixed_padding, slim.max_pool2d],
                        data_format=data_format):
        with slim.arg_scope([slim.conv2d, slim.batch_norm, _fixed_padding], reuse=reuse):
            with slim.arg_scope([slim.conv2d],
                                normalizer_fn=slim.batch_norm,
                                normalizer_params=batch_norm_params,
                                biases_initializer=None,
                                activation_fn=lambda x: tf.nn.leaky_relu(x, alpha=_LEAKY_RELU)):
                with tf.variable_scope('yolo-v4-tiny'):
                    # paste output of parse_config.py here
                    # (the pasted code is expected to define `detections`)
                    return detections
{"hexsha": "312f4c266306e5f3aef0edb0b7c604a9d91129c5", "size": 3203, "ext": "py", "lang": "Python", "max_stars_repo_path": "yolov4tiny/yolo_v4_tiny.py", "max_stars_repo_name": "TNTWEN/OpenVINO-YOLO-Automatic-Generation", "max_stars_repo_head_hexsha": "bc052c9e6bc054a451ac28bbbab33a5088eb02de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 22, "max_stars_repo_stars_event_min_datetime": "2020-09-04T13:49:11.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-07T00:37:35.000Z", "max_issues_repo_path": "yolov4tiny/yolo_v4_tiny.py", "max_issues_repo_name": "TNTWEN/OpenVINO-YOLO-Automatic-generation", "max_issues_repo_head_hexsha": "bc052c9e6bc054a451ac28bbbab33a5088eb02de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-09-15T13:00:51.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T07:37:57.000Z", "max_forks_repo_path": "yolov4tiny/yolo_v4_tiny.py", "max_forks_repo_name": "TNTWEN/OpenVINO-YOLO-Automatic-generation", "max_forks_repo_head_hexsha": "bc052c9e6bc054a451ac28bbbab33a5088eb02de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-14T02:31:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-16T17:27:08.000Z", "avg_line_length": 39.0609756098, "max_line_length": 115, "alphanum_fraction": 0.616921636, "include": true, "reason": "import numpy", "num_tokens": 893}
r"""Note. when H, W \le 10^5 on grid problem and it's impossible to create an actual graph because it's O(HW) space, consider y-axis and x-axis seperately. """ import typing import sys import numpy as np import numba as nb @nb.njit((nb.i8, nb.i8, nb.i8[:, :]), cache=True) def solve(h: int, w: int, rca: np.ndarray) -> typing.NoReturn: n = len(rca) dist = np.zeros(n, np.int64) rmax = np.full(h, -1, np.int64) cmax = np.full(w, -1, np.int64) order = np.argsort(rca[:, 2], kind='mergesort')[::-1] s = 0 prev = -1 for i in range(n): if rca[order[i], 2] != prev: for j in range(s, i): r, c, a = rca[order[j]] rmax[r] = max(rmax[r], dist[order[j]]) cmax[c] = max(cmax[c], dist[order[j]]) s = i r, c, a = rca[order[i]] dist[order[i]] = max(rmax[r], cmax[c]) + 1 prev = a for d in dist: print(d) def main() -> typing.NoReturn: h, w, n = map(int, input().split()) rca = np.array( sys.stdin.read().split(), dtype=np.int64, ).reshape(n, 3) rca[:, :2] -= 1 solve(h, w, rca) main()
{"hexsha": "eab3cee079060daf72a87537943d65b3a2bdbd88", "size": 1174, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/atcoder/abc224/e/sol_0.py", "max_stars_repo_name": "kagemeka/competitive-programming", "max_stars_repo_head_hexsha": "c70fe481bcd518f507b885fc9234691d8ce63171", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-11T03:20:10.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-11T03:20:10.000Z", "max_issues_repo_path": "src/atcoder/abc224/e/sol_0.py", "max_issues_repo_name": "kagemeka/competitive-programming", "max_issues_repo_head_hexsha": "c70fe481bcd518f507b885fc9234691d8ce63171", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 39, "max_issues_repo_issues_event_min_datetime": "2021-07-10T05:21:09.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-15T06:10:12.000Z", "max_forks_repo_path": "src/atcoder/abc224/e/sol_0.py", "max_forks_repo_name": "kagemeka/competitive-programming", "max_forks_repo_head_hexsha": "c70fe481bcd518f507b885fc9234691d8ce63171", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.9591836735, "max_line_length": 62, "alphanum_fraction": 0.5136286201, "include": true, "reason": "import numpy,import numba", "num_tokens": 388}
""" Neural Network from scratch. A simple Neural Network calss. License MIT, all rights reserved jerry liu @twairball """ import numpy as np # sigmoid and sigmoid derivative functions def sigmoid(x): x = np.clip(x, -500, 500) # avoid overflow return 1 / ( 1 + np.exp(-x)) def sigmoid_deriv(x): return x * (1 - x) class NN: """ A simple Neural Network class. params: lr: learning rate, regularizer term for weights W_n update during gradient descent update. default=0.0001. methods: train: trains neural network for number of iterations predit: predict outputs and evaluates loss. """ def __init__(self, lr=0.0001): self.weights = self.init_weights() self.biases = self.init_biases() self.lr = lr def init_weights(self): # initialize weights with mean 0 # network is 3, 4, 1 W_1 = 2 * np.random.random((3, 4)) - 1 # shape: (3,4) W_2 = 2 * np.random.random((4, 1)) - 1 # shape: (4,1) return [W_1, W_2] def init_biases(self): # initialize bias as zeros # bias shape is same as dimension of W_1, W_2 b_1 = np.zeros(4) # shape: (4,1) b_2 = np.zeros(1) # shape: (1,1) return [b_1, b_2] def forward_pass(self, inputs): W_1, W_2 = self.weights b_1, b_2 = self.biases h0 = inputs h1 = sigmoid(h0.dot(W_1) + b_1) h2 = sigmoid(h1.dot(W_2) + b_2) return h0, h1, h2 def back_prop(self, targets, activations): h0, h1, h2 = activations W_1, W_2 = self.weights b_1, b_2 = self.biases h2_loss = targets - h2 # shape (4,1) h2_delta = h2_loss * sigmoid_deriv(h2) # shape (4,1) h1_loss = h2_delta.dot(W_2.T) h1_delta = h1_loss * sigmoid_deriv(h1) # shape (4,4) # dW calculated as dot(input, delta) # added regularization term dW_2 = h1.T.dot(h2_delta) + self.lr * W_2 dW_1 = h0.T.dot(h1_delta) + self.lr * W_1 # db calculated as sum of delta db_2 = np.sum(h2_delta, axis=0) db_1 = np.sum(h1_delta, axis=0) # gradient descent update self.weights = [W_1 + dW_1, W_2 + dW_2] self.biases = [b_1 + db_1, b_2 + db_2] def predict(self, inputs, targets): """ Returns predictions given inputs. Also computes """ activations = self.forward_pass(inputs) preds = activations[-1] loss = np.mean(np.abs(targets - preds)) return preds, loss def train(self, inputs, targets, n_iters = 60000, print_every=5000): """ Train NN to fit targets given inputs. Inputs and outputs should be a 2-D matrix. """ for i in range(int(n_iters)): activations = self.forward_pass(inputs) self.back_prop(targets, activations) if (i % print_every) == 0: preds = activations[-1] loss = np.mean(np.abs(targets - preds)) print("[%d] loss: %f" % (i, loss)) def main(): # input and outputs X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]]) # shape: (4, 3) Y = np.array([[0,1,1,0]]).T # shape: (4,1) # create a neural net and train it nn = NN() nn.train(X, Y) # evaluate predicted outputs preds, loss = nn.predict(X, Y) print("predicted output: %s" % preds) print("loss: %f" % loss) if __name__ == "__main__": main()
{"hexsha": "fbe7dcc35c33f5ae50882b176e6dbc514e3c8ad4", "size": 3518, "ext": "py", "lang": "Python", "max_stars_repo_path": "nn_model.py", "max_stars_repo_name": "twairball/nn_from_scratch", "max_stars_repo_head_hexsha": "8fcfa54e6041e59e917a789e537ee599733e5db5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-27T15:38:27.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-27T15:38:27.000Z", "max_issues_repo_path": "nn_model.py", "max_issues_repo_name": "twairball/nn_from_scratch", "max_issues_repo_head_hexsha": "8fcfa54e6041e59e917a789e537ee599733e5db5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nn_model.py", "max_forks_repo_name": "twairball/nn_from_scratch", "max_forks_repo_head_hexsha": "8fcfa54e6041e59e917a789e537ee599733e5db5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.813559322, "max_line_length": 72, "alphanum_fraction": 0.5613985219, "include": true, "reason": "import numpy", "num_tokens": 1053}
subsection\<open>Peirce\<close>

theory Peirce
  imports Types
begin

text\<open>As an example of our $\lambda\mu$ formalisation, we show a $\lambda\mu$-term inhabiting
Peirce's Law. The example is due to Parigot~\<^cite>\<open>"DBLP:conf/lpar/Parigot92"\<close>.\<close>

text\<open>Peirce's law: $((A \rightarrow B) \rightarrow A) \rightarrow A$.\<close>

lemma "\<Gamma>, \<Delta> \<turnstile>\<^sub>T \<lambda> (A\<rightarrow>B)\<rightarrow>A: (\<mu> A:(<0>((`0) \<degree> (\<lambda> A: (\<mu> B:(<1> (`0))))))) : ((A\<rightarrow>B)\<rightarrow>A)\<rightarrow>A"
  by fastforce

end
{"author": "isabelle-prover", "repo": "mirror-afp-devel", "sha": "c84055551f07621736c3eb6a1ef4fb7e8cc57dd1", "save_path": "github-repos/isabelle/isabelle-prover-mirror-afp-devel", "path": "github-repos/isabelle/isabelle-prover-mirror-afp-devel/mirror-afp-devel-c84055551f07621736c3eb6a1ef4fb7e8cc57dd1/thys/LambdaMu/Peirce.thy"}
import ..lectures.love03_forward_proofs_demo


/-! # LoVe Exercise 4: Functional Programming -/


set_option pp.beta true
set_option pp.generalized_field_notation false

namespace LoVe


/-! ## Question 1: Reverse of a List

We define a new accumulator-based version of `reverse`. The first argument,
`as`, serves as the accumulator. This definition is __tail-recursive__, meaning
that compilers and interpreters can easily optimize the recursion away,
resulting in more efficient code. -/

def accurev {α : Type} : list α → list α → list α
| as []        := as
| as (x :: xs) := accurev (x :: as) xs

/-! 1.1. Our intention is that `accurev [] xs` should be equal to `reverse xs`.
But if we start an induction, we quickly see that the induction hypothesis is
not strong enough. Start by proving the following generalization (using the
`induction'` tactic or pattern matching): -/

lemma accurev_eq_reverse_append {α : Type} :
  ∀as xs : list α, accurev as xs = reverse xs ++ as :=
sorry

/-! 1.2. Derive the desired equation. -/

lemma accurev_eq_reverse {α : Type} (xs : list α) :
  accurev [] xs = reverse xs :=
sorry

/-! 1.3. Prove the following property.

Hint: A one-line inductionless proof is possible. -/

lemma accurev_accurev {α : Type} (xs : list α) :
  accurev [] (accurev [] xs) = xs :=
sorry

/-! 1.4. Prove the following lemma by structural induction, as a "paper" proof.
This is a good exercise to develop a deeper understanding of how structural
induction works (and is good practice for the final exam).

    lemma accurev_eq_reverse_append {α : Type} :
      ∀as xs : list α, accurev as xs = reverse xs ++ as

Guidelines for paper proofs:

We expect detailed, rigorous, mathematical proofs. You are welcome to use
standard mathematical notation or Lean structured commands (e.g., `assume`,
`have`, `show`, `calc`). You can also use tactical proofs (e.g., `intro`,
`apply`), but then please indicate some of the intermediate goals, so that we
can follow the chain of reasoning.

Major proof steps, including applications of induction and invocation of the
induction hypothesis, must be stated explicitly. For each case of a proof by
induction, you must list the inductive hypotheses assumed (if any) and the goal
to be proved. Minor proof steps corresponding to `refl`, `simp`, or `cc` need
not be justified if you think they are obvious (to humans), but you should say
which key lemmas they depend on. You should be explicit whenever you use a
function definition or an introduction rule for an inductive predicate. -/

-- enter your paper proof here


/-! ## Question 2: Drop and Take

The `drop` function removes the first `n` elements from the front of a list. -/

def drop {α : Type} : ℕ → list α → list α
| 0       xs        := xs
| (_ + 1) []        := []
| (m + 1) (x :: xs) := drop m xs

/-! 2.1. Define the `take` function, which returns a list consisting of the
first `n` elements at the front of a list.

To avoid unpleasant surprises in the proofs, we recommend that you follow the
same recursion pattern as for `drop` above. -/

def take {α : Type} : ℕ → list α → list α :=
sorry

#eval take 0 [3, 7, 11]   -- expected: []
#eval take 1 [3, 7, 11]   -- expected: [3]
#eval take 2 [3, 7, 11]   -- expected: [3, 7]
#eval take 3 [3, 7, 11]   -- expected: [3, 7, 11]
#eval take 4 [3, 7, 11]   -- expected: [3, 7, 11]

#eval take 2 ["a", "b", "c"]   -- expected: ["a", "b"]

/-! 2.2. Prove the following lemmas, using `induction'` or pattern matching.
Notice that they are registered as simplification rules thanks to the `@[simp]`
attribute. -/

@[simp] lemma drop_nil {α : Type} :
  ∀n : ℕ, drop n ([] : list α) = [] :=
sorry

@[simp] lemma take_nil {α : Type} :
  ∀n : ℕ, take n ([] : list α) = [] :=
sorry

/-! 2.3. Follow the recursion pattern of `drop` and `take` to prove the
following lemmas. In other words, for each lemma, there should be three cases,
and the third case will need to invoke the induction hypothesis.

Hint: Note that there are three variables in the `drop_drop` lemma (but only
two arguments to `drop`). For the third case, `←add_assoc` might be useful. -/

lemma drop_drop {α : Type} :
  ∀(m n : ℕ) (xs : list α), drop n (drop m xs) = drop (n + m) xs
| 0       n xs := by refl
-- supply the two missing cases here

lemma take_take {α : Type} :
  ∀(m : ℕ) (xs : list α), take m (take m xs) = take m xs :=
sorry

lemma take_drop {α : Type} :
  ∀(n : ℕ) (xs : list α), take n xs ++ drop n xs = xs :=
sorry


/-! ## Question 3: A Type of λ-Terms

3.1. Define an inductive type corresponding to the untyped λ-terms, as given
by the following context-free grammar:

    term ::= 'var' string        -- variable (e.g., `x`)
           | 'lam' string term   -- λ-expression (e.g., `λx, t`)
           | 'app' term term     -- application (e.g., `t u`) -/

-- enter your definition here

/-! 3.2. Register a textual representation of the type `term` as an instance
of the `has_repr` type class. Make sure to supply enough parentheses to
guarantee that the output is unambiguous. -/

def term.repr : term → string
-- enter your answer here

@[instance] def term.has_repr : has_repr term :=
{ repr := term.repr }

/-! 3.3. Test your textual representation. The following command should print
something like `(λx, ((y x) x))`. -/

#eval (term.lam "x" (term.app (term.app (term.var "y") (term.var "x"))
    (term.var "x")))

end LoVe
{"author": "BrownCS1951x", "repo": "fpv2021", "sha": "10bdbd92e64fb34115b68794b8ff480468f4dcaa", "save_path": "github-repos/lean/BrownCS1951x-fpv2021", "path": "github-repos/lean/BrownCS1951x-fpv2021/fpv2021-10bdbd92e64fb34115b68794b8ff480468f4dcaa/src/exercises/love04_functional_programming_exercise_sheet.lean"}
# See "Writing benchmarks" in the asv docs for more information. # https://asv.readthedocs.io/en/latest/writing_benchmarks.html # or the napari documentation on benchmarking # https://napari.org/developers/benchmarks.html import numpy as np from napari.layers.utils.text_manager import TextManager class TextManagerSuite: """Benchmarks for creating and modifying a text manager.""" param_names = ['n', 'text'] params = [ [2 ** i for i in range(4, 18, 2)], [ None, 'constant', 'string_property', 'float_property', '{string_property}: {float_property:.2f}', ], ] def setup(self, n, text): np.random.seed(0) self.properties = { 'string_property': np.random.choice(('cat', 'car'), n), 'float_property': np.random.rand(n), } self.current_properties = { k: np.array([v[-1]]) for k, v in self.properties.items() } self.manager = TextManager( n_text=n, properties=self.properties, text=text ) self.indices_to_remove = list(range(0, n, 2)) def time_create(self, n, text): TextManager(n_text=n, properties=self.properties, text=text) def time_refresh(self, n, text): self.manager.refresh_text(self.properties) def time_add_iteratively(self, n, text): for _ in range(512): self.manager.add(self.current_properties, 1) def time_remove_as_batch(self, n, text): self.manager.remove(self.indices_to_remove)
{"hexsha": "0ff830ea5920f1855c509f4c85e93bd748f9ca79", "size": 1577, "ext": "py", "lang": "Python", "max_stars_repo_path": "napari/benchmarks/benchmark_text_manager.py", "max_stars_repo_name": "MaksHess/napari", "max_stars_repo_head_hexsha": "64a144607342c02177fc62fa83a3442ace0a98e7", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1345, "max_stars_repo_stars_event_min_datetime": "2019-03-03T21:14:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T19:46:39.000Z", "max_issues_repo_path": "napari/benchmarks/benchmark_text_manager.py", "max_issues_repo_name": "MaksHess/napari", "max_issues_repo_head_hexsha": "64a144607342c02177fc62fa83a3442ace0a98e7", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 3904, "max_issues_repo_issues_event_min_datetime": "2019-03-02T01:30:24.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T20:17:27.000Z", "max_forks_repo_path": "napari/benchmarks/benchmark_text_manager.py", "max_forks_repo_name": "MaksHess/napari", "max_forks_repo_head_hexsha": "64a144607342c02177fc62fa83a3442ace0a98e7", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 306, "max_forks_repo_forks_event_min_datetime": "2019-03-29T17:09:10.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T09:54:11.000Z", "avg_line_length": 30.9215686275, "max_line_length": 68, "alphanum_fraction": 0.6119213697, "include": true, "reason": "import numpy", "num_tokens": 364}
__author__ = 'IVMIT KFU: Gataullin Ravil & Veselovkiy Sergei'

import cv2
import numpy as np


def add_gaussian_noise(bounding_box, mean, sigma):
    if bounding_box is not None:
        return bounding_box + np.random.normal(mean, sigma, bounding_box.shape)
    else:
        return None


class LearningComponent:
    def __init__(self, init_patch):
        self.positives = []
        self.negatives = []
        self.new_positives = []
        self.new_negatives = []
        self.new_samples_count = 0
        winSize = (8, 8)
        blockSize = (4, 4)
        blockStride = (4, 4)
        cellSize = (4, 4)
        nbins = 9
        derivAperture = 1
        winSigma = 4.
        histogramNormType = 0
        L2HysThreshold = 2.0000000000000001e-01
        gammaCorrection = 1
        nlevels = 64
        # self.descriptor = cv2.HOGDescriptor(winSize, blockSize, blockStride, cellSize, nbins, derivAperture, winSigma, histogramNormType, L2HysThreshold, gammaCorrection, nlevels)
        self.descriptor = cv2.HOGDescriptor()
        self.update_positives(init_patch)
        self.init_patch = init_patch
        # self.generate_training_examples(init_position)

    # def generate_training_examples(self, gray_initial_frame, x, y, width, height, closest_count = 10, surround_count = 50, small_radius = None, big_radius = None, sigma = 5):
    #     if small_radius == None:
    #         small_radius = 0.1 * (width + height) / 2
    #     if big_radius == None:
    #         big_radius = (width + height) / 2
    #
    #     for i in xrange(closest_count):
    #         fi = i * np.pi / closest_count
    #         diff_x = x + int(np.cos(fi) * small_radius)
    #         diff_y = y + int(np.sin(fi) * small_radius)
    #
    #         bounding_box = get_bounding_box(gray_initial_frame, diff_x, diff_y, width, height)
    #         generalise = add_gaussian_noise(bounding_box, 0, sigma)
    #         self.update_negatives(generalise)
    #
    #         if bounding_box is not None:
    #             bounding_box = cv2.resize(bounding_box, None, fx=1.01, fy=1.01)
    #             generalise = add_gaussian_noise(bounding_box, 0, sigma)
    #             self.update_negatives(generalise)
    #
    #             bounding_box = cv2.resize(bounding_box, None, fx=0.99, fy=0.99)
    #             generalise = add_gaussian_noise(bounding_box, 0, sigma)
    #             self.update_negatives(generalise)
    #
    #             bounding_box = get_bounding_box(gray_initial_frame, diff_x-max(int(0.01*width), 1), diff_y, width, height)
    #             generalise = add_gaussian_noise(bounding_box, 0, sigma)
    #             self.update_negatives(generalise)
    #
    #             bounding_box = get_bounding_box(gray_initial_frame, diff_x+max(int(0.01*width), 1), diff_y, width, height)
    #             generalise = add_gaussian_noise(bounding_box, 0, sigma)
    #             self.update_negatives(generalise)
    #
    #             bounding_box = get_bounding_box(gray_initial_frame, diff_x, diff_y-max(int(0.01*height), 1), width, height)
    #             generalise = add_gaussian_noise(bounding_box, 0, sigma)
    #             self.update_negatives(generalise)
    #
    #             bounding_box = get_bounding_box(gray_initial_frame, diff_x, diff_y+max(int(0.01*height), 1), width, height)
    #             generalise = add_gaussian_noise(bounding_box, 0, sigma)
    #             self.update_negatives(generalise)
    #
    #     for i in xrange(surround_count):
    #         fi = i * np.pi / closest_count
    #         diff_x = x + int(np.cos(fi) * big_radius)
    #         diff_y = y + int(np.sin(fi) * big_radius)
    #
    #         bounding_box = get_bounding_box(gray_initial_frame, diff_x, diff_y, width, height)
    #         generalise = add_gaussian_noise(bounding_box, 0, sigma)
    #         self.update_negatives(generalise)

    def get_training_set(self, const_weight):
        if len(self.negatives) != 0 and len(self.positives) != 0:
            samples = []
            if const_weight:
                positive_weight = 1.0
                negative_weight = 1.0
            else:
                positive_weight = 1.0 * len(self.negatives) / (len(self.negatives) + len(self.positives))
                negative_weight = 1.0 * len(self.positives) / (len(self.negatives) + len(self.positives))
            weights = np.append(positive_weight * np.ones(len(self.positives)),
                                negative_weight * np.ones(len(self.negatives)))
            targets = np.append(np.ones(len(self.positives)), np.zeros(len(self.negatives)))
            for positive in self.positives:
                samples.append(positive.calculate_feature(self.descriptor))
            for negative in self.negatives:
                samples.append(negative.calculate_feature(self.descriptor))
            return samples, weights, targets
        else:
            return np.array([]), np.array([]), np.array([])

    def NCC(self, pi, pj):
        # pi, pj - patches
        CV_TM_CCOEFF_NORMED = 5
        try:
            x = cv2.matchTemplate(pi.small_content, pj.small_content, CV_TM_CCOEFF_NORMED)[0][0]
        except:
            x = 1
        return x

    def similarity(self, pi, pj):
        y = 0.5 * (self.NCC(pi, pj) + 1)
        return y

    def similarity_positive(self, p):
        if len(self.positives) > 0:
            similarity_list = [self.similarity(p, pi) for pi in self.positives]
            return max(similarity_list)
        else:
            return 1

    def similarity_negative(self, p):
        if len(self.negatives) > 0:
            similarity_list = [self.similarity(p, pi) for pi in self.negatives]
            return max(similarity_list)
        else:
            return 1

    def similarity_half_first_positive(self, p):
        if len(self.positives) // 2 + 1 > 0:
            similarity_list = [self.similarity(p, pi) for pi in self.positives[:len(self.positives) // 2 + 1]]
            return max(similarity_list)
        else:
            return 1

    def relative_similarity(self, p):
        divisor = self.similarity_positive(p) + self.similarity_negative(p)
        if divisor != 0:
            return self.similarity_positive(p) / divisor
        else:
            return 0

    def conservative_similarity(self, p):
        divisor = self.similarity_half_first_positive(p) + self.similarity_negative(p)
        if divisor != 0:
            return self.similarity_half_first_positive(p) / divisor
        else:
            return 0

    def update_positives(self, patch):
        if patch is not None:
            patch.calculate_feature(self.descriptor)
            self.positives.append(patch)
            self.new_samples_count += 1

    def update_negatives(self, patch):
        if patch is not None:
            patch.calculate_feature(self.descriptor)
            self.negatives.append(patch)
            self.new_samples_count += 1

    def add_new_positive(self, patch):
        if patch is not None:
            self.new_positives.append(patch)

    def add_new_negative(self, patch):
        if patch is not None:
            self.new_negatives.append(patch)

    def n_expert(self, n_threshold=0.2):
        for patch in self.new_positives:
            if self.conservative_similarity(patch) < n_threshold:
                self.update_negatives(patch)
            else:
                self.update_positives(patch)
        self.new_positives = []

    def p_expert(self, p_threshold=0.8):
        for patch in self.new_negatives:
            if self.conservative_similarity(patch) > p_threshold:
                self.update_positives(patch)
            else:
                self.update_negatives(patch)
        self.new_negatives = []
{"hexsha": "cc12d1c362957a7d15f903db401ec6cccb20c694", "size": 7618, "ext": "py", "lang": "Python", "max_stars_repo_path": "Tracking/learning.py", "max_stars_repo_name": "SAVeselovskiy/KFU_Visual_Tracking", "max_stars_repo_head_hexsha": "af45fd6a93d9f0369fc8bab97af4abecef444943", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Tracking/learning.py", "max_issues_repo_name": "SAVeselovskiy/KFU_Visual_Tracking", "max_issues_repo_head_hexsha": "af45fd6a93d9f0369fc8bab97af4abecef444943", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tracking/learning.py", "max_forks_repo_name": "SAVeselovskiy/KFU_Visual_Tracking", "max_forks_repo_head_hexsha": "af45fd6a93d9f0369fc8bab97af4abecef444943", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.5212765957, "max_line_length": 176, "alphanum_fraction": 0.6076398005, "include": true, "reason": "import numpy", "num_tokens": 1878}
import os
import unittest

import numpy as np

from gnes.indexer.chunk.bindexer import BIndexer


@unittest.SkipTest
class TestBIndexer(unittest.TestCase):
    def setUp(self):
        self.toy_data = np.array([[1, 2, 1, 2],
                                  [2, 1, 3, 4],
                                  [1, 2, 1, 2],
                                  [2, 1, 4, 3],
                                  [2, 1, 3, 4],
                                  [23, 32, 21, 33],
                                  [123, 132, 1, 1]]).astype(np.uint8)
        self.toy_label = [(234, 0), (432, 0), (123, 1), (321, 0), (1, 0), (2, 0), (6, 0)]

        self.toy_query = np.array([[1, 2, 1, 2],
                                   [2, 1, 3, 4],
                                   [3, 2, 1, 2]]).astype(np.uint8)
        self.toy_exp = [[(234, 0, 1., 1,), (123, 1, 1., 1)],
                        [(432, 0, 1., 1), (1, 0, 1., 1)],
                        [(234, 0, 1., 0.75), (123, 1, 1., 0.75)]]
        self.weights = [1.] * len(self.toy_label)

        dirname = os.path.dirname(__file__)
        self.dump_path = os.path.join(dirname, 'test-indexer.bin')

    def tearDown(self):
        if os.path.exists(self.dump_path):
            os.remove(self.dump_path)

    def test_nsw_search(self):
        fd = BIndexer(self.toy_data.shape[1], data_path=self.dump_path + '_1')
        fd.add(self.toy_label, self.toy_data, self.weights)
        self.assertEqual(fd.num_doc, 7)
        self.assertEqual(fd.num_chunks, 7)
        self.assertEqual(fd.num_chunks_avg, 1)

        rs = fd.query(self.toy_query, 2, method='nsw', normalized_score=False)
        for i in range(len(rs)):
            rs[i] = sorted(rs[i], key=lambda x: (x[3], x[0]))
        fd.close()
        self.assertEqual(rs, self.toy_exp)

    def test_force_search(self):
        fd = BIndexer(self.toy_data.shape[1], data_path=self.dump_path + '_2')
        fd.add(self.toy_label, self.toy_data, self.weights)

        rs = fd.query(self.toy_query, 2, method='force', normalized_score=False)
        for i in range(len(rs)):
            rs[i] = sorted(rs[i], key=lambda x: (x[3], x[0]))
        fd.close()
        self.assertEqual(rs, self.toy_exp)

    def test_dump_load(self):
        fd = BIndexer(self.toy_data.shape[1], data_path=self.dump_path + '_3')
        fd.add(self.toy_label, self.toy_data, self.weights)
        fd.dump()
        fd.close()
        # shutil.rmtree(self.data_path + "_3")

        fd2 = BIndexer.load(fd.dump_full_path)
        rs = fd2.query(self.toy_query, 2, normalized_score=False)
        for i in range(len(rs)):
            rs[i] = sorted(rs[i], key=lambda x: (x[3], x[0]))
        fd2.close()
        self.assertEqual(rs, self.toy_exp)

        os.remove(self.dump_path + '_3')
{"hexsha": "57d781f042d6a3b4167ea3a94e83ee543505fd34", "size": 2742, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/test_bindexer.py", "max_stars_repo_name": "micro-pixel/gnes", "max_stars_repo_head_hexsha": "388d1ba718ec04eedaaff3ce34da43689c197ee7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-07-05T03:51:44.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-18T05:56:37.000Z", "max_issues_repo_path": "tests/test_bindexer.py", "max_issues_repo_name": "cmy9068/gnes", "max_issues_repo_head_hexsha": "44a54be4c80108ac65b2450b4af8deded6da3339", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tests/test_bindexer.py", "max_forks_repo_name": "cmy9068/gnes", "max_forks_repo_head_hexsha": "44a54be4c80108ac65b2450b4af8deded6da3339", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-28T15:07:36.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-28T15:07:36.000Z", "avg_line_length": 37.5616438356, "max_line_length": 94, "alphanum_fraction": 0.5142231947, "include": true, "reason": "import numpy", "num_tokens": 798}
```python
import numpy as np
from scipy.integrate import simps
# my things
from FermatPrincipleCartesian import *
from Geometry import *
from Symbolic import *
from sympy import Matrix
from RealData import PrepareData
from ForwardEquation import *


def LMSolContinous(dataDict, mu=0.5):
    '''
    ``rays`` origin and dir are in ENU frame.
    data is d = dtec = int_i ne ds - int_i0 ne ds.
    neFunc = f(beta)
    g(beta) = int_i f(beta) + rho_i ds - int_i0 f(beta) + rho_i0 ds
    minimize (dobs - d)Cdinv(dobs - d)
             + mu (log(neFunc) - log(neprior))Cminv(log(neFunc) - log(neprior))
    Solve in continuous basis. Steps:
    1. propagate rays
    2. dd = d - g
    3. wdd = Cdinv.dd
    4. S = G^t.Cdinv.G + mu*lambda^t.Cminv.lambda
    5. T = Sinv
    6. dm = T.G^t.wdd
    '''
    # first fit just iri layers and global offsets
    Nsol = 0
    print("Constructing the model with {0} solitons".format(Nsol))
    model = ForwardModel(dataDict['numAntennas'], dataDict['numDirections'], dataDict['numTimes'],
                         pathlength=2000, filename='ww-background', numThreads=1,
                         numSolitons=Nsol, radioArray=None)
    # a priori
    params = model.getForwardKernelParams()
    g = model.doForward(dataDict['rays'], N=100, load=False)
    dd = dataDict['dtec'] - g
    Cd = np.eye(np.size(params)) * np.var(g) * 1.2
    Cdinv = np.linalg.pinv(Cd)
    wdd = Cdinv.dot(dd)
    rays = model.calcRays(dataDict['rays'], load=True)
    plotWavefront(lambda x, y, z: model.generateSolitonModel()(x, y, z, 0), rays, *getSolitonCube(model))
    g = model.doForward(dataDict['rays'], N=100, load=True)
    dd = dataDict['dtec'] - g
    print("Computing observation covariance.")
    Cd = np.eye(np.size(params)) * np.var(g) * 1.2
    Cdinv = np.linalg.pinv(Cd)
    J = self.doJkernel(inRays, N=100, load=True)
    S = J.transpose().dot(Cdinv).dot(J)
    T = np.linalg.pinv(S)
    wdd = J.transpose().dot(Cdinv).dot(dd)
    dbeta = T.dot(wdd)
    params += dbeta
    model.setModelParams(params)
    # monte carlo L.Cminv.L
    # neFunc = model.solitonModelSymbolic
    # paramDict = self.getModelParamDict()
    # L = []
    # for param in paramDict.keys():
    #     L.append(neFunc.diff(param))


def testForwardProblem():
    sol = SolitonModel(8)
    neFunc = sol.generateSolitonModel()
    theta = np.linspace(-np.pi / 8., np.pi / 8., 2)
    # phi = np.linspace(0, 2*np.pi, 6)
    rays = []
    origin = ac.ITRS(sol.enu.location).cartesian.xyz.to(au.km).value
    for t in theta:
        for p in theta:
            direction = ac.SkyCoord(np.sin(t), np.sin(p), 1.,
                                    frame=sol.enu).transform_to('itrs').cartesian.xyz.value
            rays.append(Ray(origin, direction))
    forwardProblem = ForwardProblem(sol)
    times = np.zeros(len(rays))
    d = forwardProblem.doForward(rays, times, N=1000)
    print(d)
    # plotWavefront(f.nFunc.subs({'t':0}), rays, *getSolitonCube(sol))
    # plotFuncCube(f.nFunc.subs({'t':0}), *getSolitonCube(sol), rays=rays)


if __name__ == '__main__':
    np.random.seed(1234)
    # testForwardProblem()
    dataDict = PrepareData(infoFile='SB120-129/WendysBootes.npz',
                           dataFolder='SB120-129/',
                           timeStart=0, timeEnd=0,
                           arrayFile='arrays/lofar.hba.antenna.cfg', load=True)
    LMSolContinous(dataDict, mu=0.5)
    # LMSolContinous(**dataDict)
```

```python
import pylab as plt
plt.hist(dataDict['dtec'])
plt.show()
```
{"hexsha": "36ace200b0d254d4f9651cf5c759a00a275f18c9", "size": 318316, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/ionotomo/notebooks/ContinuousInversion.ipynb", "max_stars_repo_name": "Joshuaalbert/IonoTomo", "max_stars_repo_head_hexsha": "9f50fbac698d43a824dd098d76dce93504c7b879", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-06-22T08:47:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-01T12:33:02.000Z", "max_issues_repo_path": "src/ionotomo/notebooks/ContinuousInversion.ipynb", "max_issues_repo_name": "Joshuaalbert/IonoTomo", "max_issues_repo_head_hexsha": "9f50fbac698d43a824dd098d76dce93504c7b879", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-04-03T15:21:19.000Z", "max_issues_repo_issues_event_max_datetime": "2019-04-03T15:48:31.000Z", "max_forks_repo_path": "src/ionotomo/notebooks/ContinuousInversion.ipynb", "max_forks_repo_name": "Joshuaalbert/IonoTomo", "max_forks_repo_head_hexsha": "9f50fbac698d43a824dd098d76dce93504c7b879", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-03-01T16:20:00.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-07T15:09:02.000Z", "avg_line_length": 136.4406343763, "max_line_length": 2325, "alphanum_fraction": 0.7010015205, "converted": true, "num_tokens": 1057}
from sklearn.manifold import TSNE
from sklearn.manifold import MDS
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import OPTICS
from sklearn.cluster import DBSCAN
from sklearn.cluster import AgglomerativeClustering
from sklearn.cluster import AffinityPropagation
from sklearn.cluster import SpectralClustering
from sklearn import metrics
import os, sys, subprocess

# Global variables (what you really need here is java_file, data_path, and source_path)
task = 1
# The name of the source file
java_file = ['BankUserConcurrentGet.java', 'BankUserConcurrentPut.java',
             'BankUserMultiThreaded.java', 'BankUserStrongConsistency.java'][task-1]
# See get_source_code function of ClusterPlotter class for data_path, source_path, and cmu_cs
# The path to the folder containing different students' source files
data_path = './S20_3.3_OPE_Grading_Anon/3.3_OPE_Submissions-anonymized/'
# The path to the source file folder for each student
source_path = '/src/main/java/Project_OMP/BankUserSystem/'
# You may not need this. This is useful when the names of folders for different students share the cmu_cs string.
cmu_cs = '@andrew.cmu.edu_data-consistency-ope_consistency-ope-task_'
# Choose tsne or mds for dimension reduction (lower-case)
embedding = 'mds'
'''
java_file = ['ProfileServlet.java', 'FollowerServlet.java', 'HomepageServlet.java', 'TimelineServlet.java'][task-1]
data_path = './F19_Project_3_2/task' + str(task) + '/'
cmu_cs = '@andrew.cmu.edu_social-network_p32-task' + str(task) + '_'
'''


class ClusterPlotter:
    def __init__(self, features, clusters, studentID, timestamp, algo_name):
        self.features = features
        self.clusters = clusters
        self.studentID = studentID
        self.timestamp = timestamp
        self.algo_name = algo_name
        self.k = np.unique(clusters).shape[0]

    def plot_all(self):
        x = self.features[:, 0]
        y = self.features[:, 1]
        fig = plt.figure()
        # fig.suptitle('All clusters together with ' + str(self.clusters.shape[0]) + ' points total')
        fig.suptitle('Task ' + str(task) + ' Solutions: ' + self.algo_name)
        ax = fig.add_subplot(1, 1, 1)
        # color_points = np.zeros((x.shape[0], 4))
        # for i in range(x.shape[0]):
        #     color_points[i,:] = self.colors[self.clusters[i]]
        ax.scatter(x, y, c=self.clusters, picker=True)
        # ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
        fig.canvas.mpl_connect('pick_event', lambda e: self.onpick(e, 0))

    def get_source_code(self, i, offset):
        file_path = data_path + str(self.studentID[i]) + cmu_cs + str(self.timestamp[i]) + source_path + java_file
        print('You opened a submission by', str(self.studentID[i]), 'at', str(self.timestamp[i]))
        if sys.platform == "win32":
            os.startfile(file_path)
        else:
            opener = "open" if sys.platform == "darwin" else "xdg-open"
            subprocess.call([opener, file_path])
        # with open(file_path.strip(), 'r') as submission_file:
        #     return submission_file.read()

    def onpick(self, event, offset):
        ind = event.ind
        print('-------------------')
        for i in ind:
            self.get_source_code(i, offset)

    def show(self):
        plt.savefig(data_path + '/clusters/task' + str(task) + '/' + self.algo_name + '.png')
        plt.show()


# Embed to 2D
inputCSV = pd.read_csv(data_path + 'input_task{}.csv'.format(task))
data = pd.read_csv(data_path + 'cluster_info_task{}.csv'.format(task))
data['ClusterID'] = data['ClusterID'].fillna(-1)
clusterID = data['ClusterID']
distanceMatrix = data.drop(columns=['StudentID', 'Timestamp', 'ClusterID'])
if embedding == 'tsne':
    reduced = TSNE(n_components=2, metric='precomputed', learning_rate=700, perplexity=40).fit_transform(distanceMatrix)
elif embedding == 'mds':
    reduced = MDS(n_components=2, dissimilarity='precomputed', metric=True).fit_transform(distanceMatrix)
else:
    raise ValueError("Embedding must be either mds or tsne (lower-case)")

# Cluster solutions
cluster_methods = ['optics_xi', 'optics_dbscan', 'dbscan', 'agglomerative_clustering',
                   'affinity_propagation', 'spectral_clustering']
clusterID_xi = OPTICS(metric='precomputed', max_eps=0.16, xi=0.05, algorithm='brute', min_samples=3).fit_predict(distanceMatrix)
clusterID_op = OPTICS(metric='precomputed', max_eps=0.16, cluster_method='dbscan', min_samples=7).fit_predict(distanceMatrix)
clusterID_db = DBSCAN(metric='precomputed', eps=0.1).fit_predict(distanceMatrix)
clusterID_ag = AgglomerativeClustering(affinity='precomputed', linkage='average', n_clusters=2).fit_predict(distanceMatrix)
clusterID_af = AffinityPropagation(affinity='precomputed', damping=0.7).fit_predict(1 - distanceMatrix)
clusterID_sp = SpectralClustering(affinity='precomputed', n_clusters=2).fit_predict(1 - distanceMatrix)
clusterIDs = [clusterID_xi, clusterID_op, clusterID_db, clusterID_ag, clusterID_af, clusterID_sp]

# Evaluation
for clusterID in clusterIDs:
    try:
        print(metrics.silhouette_score(distanceMatrix, clusterID, metric='precomputed'))
    except ValueError as identifier:
        print("Number of labels is 1. Valid values are 2 to n_samples - 1 (inclusive)")

# Visualize (you may want to change the suffix of the saved images)
for i in range(len(cluster_methods)):
    c = ClusterPlotter(reduced, clusterIDs[i], data['StudentID'], data['Timestamp'],
                       '{}_mds'.format(cluster_methods[i]))
    c.plot_all()
    c.show()

# Update clusters in csv
'''
data['ClusterID'] = clusterID_sp
merged = data[['StudentID', 'Timestamp', 'ClusterID']].rename(columns={"StudentID": "Source_file_id", "Timestamp": "Project_id", "ClusterID": "Cluster_Id"})
inputCSV = inputCSV.drop(columns=['Cluster_id'])
inputCSV = inputCSV.merge(merged, how='left', on=["Source_file_id", "Project_id"])
inputCSV.to_csv(data_path + 'input.csv')
'''
{"hexsha": "ae894773e1eeb322152c298d9f8b774e25f49c7c", "size": 5902, "ext": "py", "lang": "Python", "max_stars_repo_path": "visualize_clusters.py", "max_stars_repo_name": "akonoroshi/cluster_visualization", "max_stars_repo_head_hexsha": "92e38c34d2764afbc00b9b3d9a42f133e7d11a4c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "visualize_clusters.py", "max_issues_repo_name": "akonoroshi/cluster_visualization", "max_issues_repo_head_hexsha": "92e38c34d2764afbc00b9b3d9a42f133e7d11a4c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "visualize_clusters.py", "max_forks_repo_name": "akonoroshi/cluster_visualization", "max_forks_repo_head_hexsha": "92e38c34d2764afbc00b9b3d9a42f133e7d11a4c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-12T01:23:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-12T01:23:55.000Z", "avg_line_length": 46.4724409449, "max_line_length": 156, "alphanum_fraction": 0.712470349, "include": true, "reason": "import numpy", "num_tokens": 1512}
#!/usr/bin/env python3
# author: github.com/olehermanse

# import libraries used for plotting and mathematical operations:
import numpy as np
import matplotlib.pyplot as plt
import random


# Define a mathematical expression as a function:
def f(x):
    return -x**4 + 2 * x**3 + 2 * x**2 - x


def df(x):
    return -4 * x**3 + 6 * x**2 + 4 * x - 1


def gradient_ascent(function, derivative, gamma, x, precision):
    dx = gamma * derivative(x)
    while abs(dx) > precision:
        y = function(x)
        plt.plot(x, y, color="blue", marker="s", markersize=6)
        x += dx
        dx = gamma * derivative(x)
    return x, function(x)


def plot_gradient_ascent(function, derivative, start, stop, steps):
    x = np.linspace(start, stop, steps)  # Make an array of `steps` evenly spaced values
    fig = plt.figure("INF3490 - Gradient Ascent")
    fig.suptitle("Visualization of gradient ascent")
    plt.subplot(211)
    plt.plot(x, function(x))
    randx = random.uniform(start, stop)
    solution = gradient_ascent(function, derivative, 0.1, randx, 0.001)
    plt.plot(solution[0], solution[1], color="yellow", marker="*", markersize=16)
    plt.subplot(212)
    plt.plot(x, derivative(x))
    plt.savefig("gradient.pdf", format="pdf")
    plt.show()  # Show all subplots in a new window


if __name__ == "__main__":
    plot_gradient_ascent(f, df, -2, 3, 100)
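As a quick sanity check on the routine above, the returned optimum can be compared against a brute-force grid search. A minimal plot-free sketch reusing the same f, df, gamma and precision; the fixed start point 2.5 and the grid size are arbitrary choices made here for illustration:

import numpy as np

def f(x):
    return -x**4 + 2 * x**3 + 2 * x**2 - x

def df(x):
    return -4 * x**3 + 6 * x**2 + 4 * x - 1

# Plot-free re-run of the same update rule: x <- x + gamma * f'(x).
x, gamma, precision = 2.5, 0.1, 1e-3
while abs(gamma * df(x)) > precision:
    x += gamma * df(x)

# Brute-force comparison on a dense grid over the same interval.
grid = np.linspace(-2, 3, 100001)
x_grid = grid[np.argmax(f(grid))]
print(x, x_grid)  # both should sit near the global maximum around x ~ 1.95

Note that a random start (as in the script) can also converge to the local maximum near x ~ -0.75, which is why the deterministic start is used for the comparison.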
{"hexsha": "ef7f6cb46e73b5bf4a1487897827b61ededccdc4", "size": 1347, "ext": "py", "lang": "Python", "max_stars_repo_path": "group_lectures/02_search/02_gradient.py", "max_stars_repo_name": "mpambasange/MachineLearning", "max_stars_repo_head_hexsha": "8b813345264513a57934317b01e1311628dc5b01", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2016-09-01T08:50:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-15T20:56:07.000Z", "max_issues_repo_path": "group_lectures/02_search/02_gradient.py", "max_issues_repo_name": "olehermanse/INF3490-PythonAI", "max_issues_repo_head_hexsha": "8b813345264513a57934317b01e1311628dc5b01", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-10-20T09:36:19.000Z", "max_issues_repo_issues_event_max_datetime": "2017-08-29T00:28:54.000Z", "max_forks_repo_path": "group_lectures/02_search/02_gradient.py", "max_forks_repo_name": "olehermanse/INF3490-PythonAI", "max_forks_repo_head_hexsha": "8b813345264513a57934317b01e1311628dc5b01", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 15, "max_forks_repo_forks_event_min_datetime": "2016-10-31T12:30:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-15T12:12:50.000Z", "avg_line_length": 30.6136363636, "max_line_length": 80, "alphanum_fraction": 0.6533036377, "include": true, "reason": "import numpy", "num_tokens": 371}
import cv2
import numpy as np
import os

import pyk4a
from pyk4a import Config, PyK4A

# NFOV_2X2BINNED = 1
# NFOV_UNBINNED  = 2
# WFOV_2X2BINNED = 3
# WFOV_UNBINNED  = 4
# PASSIVE_IR     = 5


def main():
    config = Config(
        color_resolution=pyk4a.ColorResolution.RES_720P,
        depth_mode=pyk4a.DepthMode.PASSIVE_IR,
        synchronized_images_only=True,
    )
    k4a = PyK4A(config)
    #k4a.start()
    k4a.load_calibration_json('calibration_data')
    print(k4a.calibration_data)
    #k4a.stop()


if __name__ == "__main__":
    main()
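For context on how such a device object is normally used once configured (not part of the test above), here is a hedged capture-loop sketch using the stock pyk4a API: start(), get_capture(), and the capture's depth attribute. Whether load_calibration_json and calibration_data exist depends on the pyk4a fork in use; this sketch avoids them and assumes a connected Azure Kinect.

from pyk4a import PyK4A

# A minimal capture loop with the default device configuration.
k4a = PyK4A()
k4a.start()
try:
    for _ in range(10):
        capture = k4a.get_capture()
        if capture.depth is not None:
            print("depth frame:", capture.depth.shape)
finally:
    k4a.stop()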
{"hexsha": "e36d631037ceb27bbd245880d14231b1a899e485", "size": 592, "ext": "py", "lang": "Python", "max_stars_repo_path": "example/test.py", "max_stars_repo_name": "greeknerd1/stereo-rectify", "max_stars_repo_head_hexsha": "98a23c3ff96dd4344ecad13d4ff145060c8fb992", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "example/test.py", "max_issues_repo_name": "greeknerd1/stereo-rectify", "max_issues_repo_head_hexsha": "98a23c3ff96dd4344ecad13d4ff145060c8fb992", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "example/test.py", "max_forks_repo_name": "greeknerd1/stereo-rectify", "max_forks_repo_head_hexsha": "98a23c3ff96dd4344ecad13d4ff145060c8fb992", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 16.4444444444, "max_line_length": 60, "alphanum_fraction": 0.6385135135, "include": true, "reason": "import numpy", "num_tokens": 201}
[STATEMENT]
theorem improving_att_imp_det_opt:
  assumes "\<And>v. \<exists>d. \<nu>_improving v (mk_dec_det d)"
  shows "\<nu>\<^sub>b_opt s = (\<Squnion>d \<in> D\<^sub>D. \<nu>\<^sub>b (mk_stationary_det d) s)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. apply_bfun \<nu>\<^sub>b_opt s = (\<Squnion>d\<in>D\<^sub>D. apply_bfun (\<nu>\<^sub>b (mk_stationary_det d)) s)
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
 1. apply_bfun \<nu>\<^sub>b_opt s = (\<Squnion>d\<in>D\<^sub>D. apply_bfun (\<nu>\<^sub>b (mk_stationary_det d)) s)
[PROOF STEP]
obtain d where d: "\<nu>_conserving (mk_dec_det d)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. (\<And>d. \<nu>_conserving (mk_dec_det d) \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
using assms ex_improving_imp_conserving
[PROOF STATE]
proof (prove)
using this:
  \<exists>d. \<nu>_improving ?v3 (mk_dec_det d)
  (\<And>v. \<exists>d. \<nu>_improving v (mk_dec_det d)) \<Longrightarrow> \<exists>d. \<nu>_conserving (mk_dec_det d)
goal (1 subgoal):
 1. (\<And>d. \<nu>_conserving (mk_dec_det d) \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
  \<nu>_conserving (mk_dec_det d)
goal (1 subgoal):
 1. apply_bfun \<nu>\<^sub>b_opt s = (\<Squnion>d\<in>D\<^sub>D. apply_bfun (\<nu>\<^sub>b (mk_stationary_det d)) s)
[PROOF STEP]
hence "d \<in> D\<^sub>D"
[PROOF STATE]
proof (prove)
using this:
  \<nu>_conserving (mk_dec_det d)
goal (1 subgoal):
 1. d \<in> D\<^sub>D
[PROOF STEP]
using \<nu>_conserving_iff is_dec_mk_dec_det_iff
[PROOF STATE]
proof (prove)
using this:
  \<nu>_conserving (mk_dec_det d)
  \<nu>_conserving ?d = (?d \<in> D\<^sub>R \<and> (\<forall>d'\<in>D\<^sub>R. \<forall>s. apply_bfun (L d' \<nu>\<^sub>b_opt) s \<le> apply_bfun (L ?d \<nu>\<^sub>b_opt) s))
  is_dec (mk_dec_det ?d) = is_dec_det ?d
goal (1 subgoal):
 1. d \<in> D\<^sub>D
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
  d \<in> D\<^sub>D
goal (1 subgoal):
 1. apply_bfun \<nu>\<^sub>b_opt s = (\<Squnion>d\<in>D\<^sub>D. apply_bfun (\<nu>\<^sub>b (mk_stationary_det d)) s)
[PROOF STEP]
thus ?thesis
[PROOF STATE]
proof (prove)
using this:
  d \<in> D\<^sub>D
goal (1 subgoal):
 1. apply_bfun \<nu>\<^sub>b_opt s = (\<Squnion>d\<in>D\<^sub>D. apply_bfun (\<nu>\<^sub>b (mk_stationary_det d)) s)
[PROOF STEP]
using \<Pi>\<^sub>M\<^sub>R_imp_policies \<nu>\<^sub>b_le_opt
[PROOF STATE]
proof (prove)
using this:
  d \<in> D\<^sub>D
  ?p \<in> \<Pi>\<^sub>M\<^sub>R \<Longrightarrow> mk_markovian ?p \<in> \<Pi>\<^sub>H\<^sub>R
  ?p \<in> \<Pi>\<^sub>H\<^sub>R \<Longrightarrow> \<nu>\<^sub>b ?p \<le> \<nu>\<^sub>b_opt
goal (1 subgoal):
 1. apply_bfun \<nu>\<^sub>b_opt s = (\<Squnion>d\<in>D\<^sub>D. apply_bfun (\<nu>\<^sub>b (mk_stationary_det d)) s)
[PROOF STEP]
by (fastforce intro!: less_eq_bfunD cSup_eq_maximum[where z = "\<nu>\<^sub>b_opt s", symmetric]
    simp: conserving_imp_opt[OF d] image_iff simp del: \<nu>\<^sub>b.rep_eq \<nu>\<^sub>b_opt.rep_eq)
[PROOF STATE]
proof (state)
this:
  apply_bfun \<nu>\<^sub>b_opt s = (\<Squnion>d\<in>D\<^sub>D. apply_bfun (\<nu>\<^sub>b (mk_stationary_det d)) s)
goal:
No subgoals!
[PROOF STEP]
qed
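In words, the theorem says that if every value function admits an improving deterministic decision rule, then the optimal value at any state is attained as a supremum over deterministic stationary policies. A hedged LaTeX restatement of the goal, with notation paraphrased from the Isabelle symbols above:

% Assumption: for every v there exists a deterministic decision rule d
% such that mk_dec_det d is \nu-improving for v. Then:
\nu^{*}_{b}(s) \;=\; \sup_{d \in D_D} \nu_b\bigl(d^{\infty}\bigr)(s)
% where d^\infty denotes the stationary policy that applies d forever
% (mk_stationary_det d in the formalization).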
{"llama_tokens": 1448, "file": "MDP-Rewards_MDP_reward", "length": 11}
/*

Copyright (c) 2014, Project OSRM, Dennis Luxen, others
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this list
of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
OF THE POSSIBILITY OF SUCH DAMAGE.

*/

#ifndef PROCESSING_CHAIN_HPP
#define PROCESSING_CHAIN_HPP

#include "contractor.hpp"
#include "contractor_options.hpp"
#include "../data_structures/query_edge.hpp"
#include "../data_structures/static_graph.hpp"
#include "../data_structures/deallocating_vector.hpp"
#include "../data_structures/node_based_graph.hpp"

struct SpeedProfileProperties;
struct EdgeBasedNode;
struct lua_State;

#include <boost/filesystem.hpp>

#include <vector>

/** \brief class of 'prepare' utility. */
class Prepare
{
  public:
    using EdgeData = QueryEdge::EdgeData;

    explicit Prepare(ContractorConfig contractor_config) : config(std::move(contractor_config)) {}
    Prepare(const Prepare &) = delete;
    ~Prepare();

    int Run();

  protected:
    void ContractGraph(const unsigned max_edge_id,
                       DeallocatingVector<EdgeBasedEdge> &edge_based_edge_list,
                       DeallocatingVector<QueryEdge> &contracted_edge_list,
                       std::vector<bool> &is_core_node,
                       std::vector<float> &node_levels) const;
    void WriteCoreNodeMarker(std::vector<bool> &&is_core_node) const;
    void WriteNodeLevels(std::vector<float> &&node_levels) const;
    void ReadNodeLevels(std::vector<float> &contraction_order) const;
    std::size_t
    WriteContractedGraph(unsigned number_of_edge_based_nodes,
                         const DeallocatingVector<QueryEdge> &contracted_edge_list);
    void FindComponents(unsigned max_edge_id,
                        const DeallocatingVector<EdgeBasedEdge> &edges,
                        std::vector<EdgeBasedNode> &nodes) const;

  private:
    ContractorConfig config;

    std::size_t LoadEdgeExpandedGraph(const std::string &edge_based_graph_path,
                                      DeallocatingVector<EdgeBasedEdge> &edge_based_edge_list,
                                      const std::string &edge_segment_lookup_path,
                                      const std::string &edge_penalty_path,
                                      const std::string &segment_speed_path);
};

#endif // PROCESSING_CHAIN_HPP
{"hexsha": "0eb65553dfad69352b5fe105d112bee01e8414dd", "size": 3431, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "contractor/processing_chain.hpp", "max_stars_repo_name": "aaronbenz/osrm-backend", "max_stars_repo_head_hexsha": "758d4023050d1f49971f919cea872a2276dafe14", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 1.0, "max_stars_repo_stars_event_min_datetime": "2021-03-06T05:07:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-06T05:07:44.000Z", "max_issues_repo_path": "contractor/processing_chain.hpp", "max_issues_repo_name": "aaronbenz/osrm-backend", "max_issues_repo_head_hexsha": "758d4023050d1f49971f919cea872a2276dafe14", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "contractor/processing_chain.hpp", "max_forks_repo_name": "aaronbenz/osrm-backend", "max_forks_repo_head_hexsha": "758d4023050d1f49971f919cea872a2276dafe14", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.3647058824, "max_line_length": 98, "alphanum_fraction": 0.7201981929, "num_tokens": 679}
# -*- coding: utf-8 -*-
# Copyright 2020 The PsiZ Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Module for testing models.py."""

import numpy as np
import pytest
import tensorflow as tf

import psiz


def test_n_sample_propagation(rate_1g_vi):
    """Test propagation properties."""
    assert rate_1g_vi.n_sample == 1

    # Set n_sample at model level.
    rate_1g_vi.n_sample = 100
    assert rate_1g_vi.n_sample == 100


@pytest.mark.parametrize("is_eager", [True, False])
def test_call_2groups(rate_2g_mle, ds_rate_docket_2g, ds_rate_obs_2g, is_eager):
    """Test call with group-specific kernels."""
    tf.config.run_functions_eagerly(is_eager)

    model = rate_2g_mle
    n_trial = 4
    # n_submodule = len(model.submodules)

    # Compile
    compile_kwargs = {
        'loss': tf.keras.losses.MeanSquaredError(),
        'optimizer': tf.keras.optimizers.Adam(learning_rate=.001),
        'weighted_metrics': [
            tf.keras.metrics.MeanSquaredError(name='mse')
        ]
    }
    model.compile(**compile_kwargs)

    for data in ds_rate_docket_2g:
        x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data)
        output = model(x, training=False)


def test_save_load_rate_wtrace(rate_1g_mle, tmpdir, ds_rate_docket, ds_rate_obs_2g):
    """Test loading and saving of embedding model."""
    model = rate_1g_mle

    # Compile
    compile_kwargs = {
        'loss': tf.keras.losses.MeanSquaredError(),
        'optimizer': tf.keras.optimizers.Adam(learning_rate=.001),
        'weighted_metrics': [
            tf.keras.metrics.MeanSquaredError(name='mse')
        ]
    }
    model.compile(**compile_kwargs)

    # Fit one epoch.
    model.fit(ds_rate_obs_2g, epochs=1)

    # Predict using original model.
    output_0 = model.predict(ds_rate_docket)

    # Save the model.
    fn = tmpdir.join('embedding_test')
    model.save(fn, overwrite=True, save_traces=True)

    # Load the saved model.
    reconstructed_model = tf.keras.models.load_model(fn)

    # Predict using loaded model.
    output_1 = reconstructed_model.predict(ds_rate_docket)

    # Test for equality.
    np.testing.assert_allclose(output_0, output_1)
    assert reconstructed_model.n_stimuli == model.n_stimuli
    assert reconstructed_model.n_dim == model.n_dim

    # Continue training without recompiling.
    reconstructed_model.fit(ds_rate_obs_2g, epochs=1)


def test_save_load_rate_wotrace(rate_1g_mle, tmpdir, ds_rate_docket, ds_rate_obs_2g):
    """Test loading and saving of embedding model."""
    model = rate_1g_mle

    # Compile
    compile_kwargs = {
        'loss': tf.keras.losses.MeanSquaredError(),
        'optimizer': tf.keras.optimizers.Adam(learning_rate=.001),
        'weighted_metrics': [
            tf.keras.metrics.MeanSquaredError(name='mse')
        ]
    }
    model.compile(**compile_kwargs)

    # Fit one epoch.
    model.fit(ds_rate_obs_2g, epochs=1)

    # Predict using original model.
    output_0 = model.predict(ds_rate_docket)

    # Save the model.
    fn = tmpdir.join('embedding_test')
    model.save(fn, overwrite=True, save_traces=False)

    # Load the saved model.
    reconstructed_model = tf.keras.models.load_model(fn)

    # Predict using loaded model.
    output_1 = reconstructed_model.predict(ds_rate_docket)

    # Test for equality.
    np.testing.assert_allclose(output_0, output_1)
    assert reconstructed_model.n_stimuli == model.n_stimuli
    assert reconstructed_model.n_dim == model.n_dim

    # Continue training without recompiling.
    reconstructed_model.fit(ds_rate_obs_2g, epochs=1)
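The two save/load tests differ only in the save_traces flag. For readers unfamiliar with it: with save_traces=True, Keras stores traced tf.functions so the model can be reloaded without access to its class definitions; with save_traces=False, reloading relies on get_config()/from_config() of registered custom classes. A minimal self-contained sketch of the second path; the Scale layer and the /tmp path are hypothetical, not part of psiz:

import tensorflow as tf

@tf.keras.utils.register_keras_serializable(package="demo")
class Scale(tf.keras.layers.Layer):
    """Hypothetical layer: multiplies inputs by a fixed factor."""
    def __init__(self, factor=2.0, **kwargs):
        super().__init__(**kwargs)
        self.factor = factor

    def call(self, inputs):
        return inputs * self.factor

    def get_config(self):
        # Required for save_traces=False: the config must fully rebuild the layer.
        config = super().get_config()
        config.update({"factor": self.factor})
        return config

model = tf.keras.Sequential([tf.keras.Input(shape=(3,)), Scale(factor=3.0)])
model.save("/tmp/scale_model", save_traces=False)  # relies on get_config()
reloaded = tf.keras.models.load_model("/tmp/scale_model")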
{"hexsha": "4c07a997159c9c10ea0f900b9dd488771696ab55", "size": 4213, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/keras/models/test_rate.py", "max_stars_repo_name": "greenfieldvision/psiz", "max_stars_repo_head_hexsha": "37068530a78e08792e827ee55cf55e627add115e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2020-04-03T21:10:05.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-02T01:31:11.000Z", "max_issues_repo_path": "tests/keras/models/test_rate.py", "max_issues_repo_name": "greenfieldvision/psiz", "max_issues_repo_head_hexsha": "37068530a78e08792e827ee55cf55e627add115e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 14, "max_issues_repo_issues_event_min_datetime": "2020-04-10T00:48:02.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-25T18:06:55.000Z", "max_forks_repo_path": "tests/keras/models/test_rate.py", "max_forks_repo_name": "greenfieldvision/psiz", "max_forks_repo_head_hexsha": "37068530a78e08792e827ee55cf55e627add115e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-10-13T16:46:14.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-10T00:08:47.000Z", "avg_line_length": 30.7518248175, "max_line_length": 78, "alphanum_fraction": 0.6866840731, "include": true, "reason": "import numpy", "num_tokens": 1028}
# Copyright 2022 DeepMind Technologies Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Lewis Game."""

import numpy as np

from emergent_communication_at_scale import types
from emergent_communication_at_scale.game.game_interface import dispatch_per_device
from emergent_communication_at_scale.game.game_interface import Game


def iterator(num_games, max_steps, mode):
  """Iterator for dummy game."""
  obs = types.GamesInputs(
      speaker_inp=np.eye(num_games),
      labels=np.ones((num_games,)),
      misc=dict(),
  )
  if mode == 'train':
    obs = dispatch_per_device(obs)  # Dispatch only at training.
  for _ in range(max_steps):
    yield obs


class DummyGame(Game):
  """Dummy game for testing."""

  def __init__(self, train_batch_size, eval_batch_size, max_steps):
    super().__init__(train_batch_size, eval_batch_size)
    self._max_steps = max_steps

  def get_training_games(self, rng):
    del rng
    return iterator(self._train_batch_size, self._max_steps, mode='train')

  def get_evaluation_games(self, mode: str = 'test'):
    return iterator(self._eval_batch_size, self._max_steps, mode=mode)

  def evaluate(self, prediction, target):
    pass
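A hedged sketch of how such a game would be consumed, assuming the emergent_communication_at_scale package is importable and that types.GamesInputs exposes its fields as attributes. Evaluation mode is shown because it skips the per-device dispatch that training mode applies; the driver loop itself is hypothetical, not from the repo:

# Hypothetical driver loop for the DummyGame defined above.
game = DummyGame(train_batch_size=8, eval_batch_size=4, max_steps=3)

for step, batch in enumerate(game.get_evaluation_games(mode='test')):
    # Each batch is a types.GamesInputs with a one-hot speaker input.
    print(step, batch.speaker_inp.shape)  # expected: (4, 4) each step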
{"hexsha": "df18a8dd59c11be68a4b5597c2b95b2eb4d72868", "size": 1681, "ext": "py", "lang": "Python", "max_stars_repo_path": "game/dummy_game.py", "max_stars_repo_name": "deepmind/emergent_communication_at_scale", "max_stars_repo_head_hexsha": "1d17ca7ca021c0473f344f44c876decc84980f35", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2022-03-28T08:14:11.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T18:04:40.000Z", "max_issues_repo_path": "game/dummy_game.py", "max_issues_repo_name": "deepmind/emergent_communication_at_scale", "max_issues_repo_head_hexsha": "1d17ca7ca021c0473f344f44c876decc84980f35", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "game/dummy_game.py", "max_forks_repo_name": "deepmind/emergent_communication_at_scale", "max_forks_repo_head_hexsha": "1d17ca7ca021c0473f344f44c876decc84980f35", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.7169811321, "max_line_length": 83, "alphanum_fraction": 0.749553837, "include": true, "reason": "import numpy", "num_tokens": 385}
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import math
import time

import matplotlib.pyplot as plt
import numpy as np
import rospy
from std_msgs.msg import Float32MultiArray, Int32, String
from geometry_msgs.msg import Pose, PoseStamped
from vanttec_uuv.msg import GuidanceWaypoints
from usv_perception.msg import obj_detected, obj_detected_list
from nav_msgs.msg import Path


# Class Definition
class AutoNav:
    def __init__(self):
        self.ned_x = 0
        self.ned_y = 0
        self.yaw = 0
        self.objects_list = []
        self.activated = True
        self.state = -1
        self.distance = 0
        self.InitTime = rospy.Time.now().secs
        self.offset = .55  # camera to ins offset
        self.target_x = 0
        self.target_y = 0
        self.ned_alpha = 0
        self.choose_side = 'left'
        self.distance_away = 5
        self.waypoints = GuidanceWaypoints()
        self.uuv_path = Path()

        # ROS Subscribers
        rospy.Subscriber("/uuv_simulation/dynamic_model/pose", Pose, self.ins_pose_callback)
        '''
        rospy.Subscriber("/usv_perception/yolo_zed/objects_detected", obj_detected_list, self.objs_callback)
        '''

        # ROS Publishers
        self.uuv_waypoints = rospy.Publisher("/uuv_guidance/guidance_controller/waypoints", GuidanceWaypoints, queue_size=10)
        self.uuv_path_pub = rospy.Publisher("/uuv_planning/motion_planning/desired_path", Path, queue_size=10)
        self.status_pub = rospy.Publisher("/mission/status", Int32, queue_size=10)
        self.test = rospy.Publisher("/mission/state", Int32, queue_size=10)

        # Waypoint test instead of perception node
        self.objects_list = [
            {'X': 7, 'Y': -4, 'Z': 0},
            {'X': 7, 'Y': 0, 'Z': 0},
            {'X': 7, 'Y': 4, 'Z': 0}
        ]

    def ins_pose_callback(self, pose):
        self.ned_x = pose.position.x
        self.ned_y = pose.position.y
        self.ned_z = pose.position.z
        self.yaw = pose.orientation.z

    '''
    def objs_callback(self, data):
        self.objects_list = []
        for i in range(data.len):
            if str(data.objects[i].clase) == 'bouy':
                self.objects_list.append({'X': data.objects[i].X + self.offset,
                                          'Y': data.objects[i].Y,
                                          'color': data.objects[i].color,
                                          'class': data.objects[i].clase})
    '''

    def center_point(self):
        '''
        @name: center_point
        @brief: Returns two waypoints as desired positions. The first waypoint is
                between the middle of the gate and its right or left post, and the
                second is a distance to the front.
        @param: --
        @return: --
        '''
        x_list = []
        y_list = []
        distance_list = []
        for i in range(len(self.objects_list)):
            x_list.append(self.objects_list[i]['X'])
            y_list.append(self.objects_list[i]['Y'])
            distance_list.append(math.pow(x_list[i]**2 + y_list[i]**2, 0.5))

        ind_g1 = np.argsort(distance_list)[0]
        ind_g2 = np.argsort(distance_list)[1]
        ind_g3 = np.argsort(distance_list)[2]  # bug fix: previously overwrote ind_g2

        x1 = x_list[ind_g1]
        y1 = -1*y_list[ind_g1]
        x2 = x_list[ind_g2]
        y2 = -1*y_list[ind_g2]
        x3 = x_list[ind_g3]  # bug fix: use the third-nearest object, not ind_g2
        y3 = -1*y_list[ind_g3]

        if (self.choose_side == 'left'):
            xc = min([x1, x2]) + abs(x1 - x2)/2 - self.distance_away
            yc = min([y1, y2]) + abs(y1 - y2)/2
            if y1 < y2:
                yl = y1
                xl = x1
                yr = y2
                xr = x2
            else:
                yl = y2
                xl = x2
                yr = y1
                xr = x1
        else:
            xc = min([x2, x3]) + abs(x2 - x3)/2 - self.distance_away
            yc = min([y2, y3]) + abs(y2 - y3)/2
            if y2 < y3:
                yl = y2
                xl = x2
                yr = y3
                xr = x3
            else:
                yl = y3
                xl = x3
                yr = y2
                xr = x2

        yd = yl - yr
        xd = xl - xr
        alpha = math.atan2(yd, xd) + math.pi/2
        if (abs(alpha) > (math.pi)):
            alpha = (alpha/abs(alpha))*(abs(alpha) - 2*math.pi)

        self.ned_alpha = alpha + self.yaw
        if (abs(self.ned_alpha) > (math.pi)):
            self.ned_alpha = (self.ned_alpha/abs(self.ned_alpha))*(abs(self.ned_alpha) - 2*math.pi)

        xm, ym = self.gate_to_body(3, 0, alpha, xc, yc)
        self.target_x, self.target_y = self.body_to_ned(xm, ym)

        #path_array = Float32MultiArray()
        #path_array.layout.data_offset = 5
        #path_array.data = [xc, yc, xm, ym, 2]
        #self.desired(path_array)
        self.waypoints.guidance_law = 1
        self.waypoints.waypoint_list_length = 2
        self.waypoints.waypoint_list_x = [xc, xm]
        self.waypoints.waypoint_list_y = [yc, ym]
        self.waypoints.waypoint_list_z = [0, 0]
        self.desired(self.waypoints)

    def calculate_distance_to_sub(self):
        '''
        @name: calculate_distance_to_sub
        @brief: Returns the distance from the UUV to the next gate.
        @param: --
        @return: --
        '''
        x_list = []
        y_list = []
        distance_list = []
        for i in range(len(self.objects_list)):
            x_list.append(self.objects_list[i]['X'])
            y_list.append(self.objects_list[i]['Y'])
            distance_list.append(math.pow(x_list[i]**2 + y_list[i]**2, 0.5))

        ind_g1 = np.argsort(distance_list)[0]
        ind_g2 = np.argsort(distance_list)[1]
        ind_g3 = np.argsort(distance_list)[2]  # bug fix: third index was missing

        x1 = x_list[ind_g1]
        y1 = -1*y_list[ind_g1]
        x2 = x_list[ind_g2]
        y2 = -1*y_list[ind_g2]
        x3 = x_list[ind_g3]  # bug fix: use the third-nearest object, not ind_g2
        y3 = -1*y_list[ind_g3]

        if (self.choose_side == 'left'):
            xc = min([x1, x2]) + abs(x1 - x2)/2
            yc = min([y1, y2]) + abs(y1 - y2)/2
            if y1 < y2:
                yl = y1
                xl = x1
                yr = y2
                xr = x2
            else:
                yl = y2
                xl = x2
                yr = y1
                xr = x1
        else:
            xc = min([x2, x3]) + abs(x2 - x3)/2
            yc = min([y2, y3]) + abs(y2 - y3)/2
            if y2 < y3:
                yl = y2
                xl = x2
                yr = y3
                xr = x3
            else:
                yl = y3
                xl = x3
                yr = y2
                xr = x2

        self.distance = math.pow(xc*xc + yc*yc, 0.5)

    def farther(self):
        '''
        @name: farther
        @brief: Returns a waypoint farther to the front of the vehicle in the NED
                reference frame to avoid perturbations.
        @param: --
        @return: --
        '''
        self.target_x, self.target_y = self.gate_to_ned(10, 0, self.ned_alpha, self.target_x, self.target_y)
        #path_array = Float32MultiArray()
        #path_array.layout.data_offset = 3
        #path_array.data = [self.target_x, self.target_y, 0]
        #self.desired(path_array)
        self.waypoints.guidance_law = 1
        self.waypoints.waypoint_list_length = 1
        # bug fix: these were set literals ({...}); ROS message fields expect lists
        self.waypoints.waypoint_list_x = [self.target_x]
        self.waypoints.waypoint_list_y = [self.target_y]
        self.waypoints.waypoint_list_z = [0]
        self.desired(self.waypoints)

    def gate_to_body(self, gate_x2, gate_y2, alpha, body_x1, body_y1):
        '''
        @name: gate_to_body
        @brief: Coordinate transformation between gate and body reference frames.
        @param: gate_x2: target x coordinate in gate reference frame
                gate_y2: target y coordinate in gate reference frame
                alpha: angle between gate and body reference frames
                body_x1: gate x coordinate in body reference frame
                body_y1: gate y coordinate in body reference frame
        @return: body_x2: target x coordinate in body reference frame
                 body_y2: target y coordinate in body reference frame
        '''
        p = np.array([[gate_x2], [gate_y2]])
        J = self.rotation_matrix(alpha)
        n = J.dot(p)
        body_x2 = n[0] + body_x1
        body_y2 = n[1] + body_y1
        return (body_x2, body_y2)

    def body_to_ned(self, x2, y2):
        '''
        @name: body_to_ned
        @brief: Coordinate transformation between body and NED reference frames.
        @param: x2: target x coordinate in body reference frame
                y2: target y coordinate in body reference frame
        @return: ned_x2: target x coordinate in ned reference frame
                 ned_y2: target y coordinate in ned reference frame
        '''
        p = np.array([x2, y2])
        J = self.rotation_matrix(self.yaw)
        n = J.dot(p)
        ned_x2 = n[0] + self.ned_x
        ned_y2 = n[1] + self.ned_y
        return (ned_x2, ned_y2)

    def gate_to_ned(self, gate_x2, gate_y2, alpha, ned_x1, ned_y1):
        '''
        @name: gate_to_ned
        @brief: Coordinate transformation between gate and NED reference frames.
        @param: gate_x2: target x coordinate in gate reference frame
                gate_y2: target y coordinate in gate reference frame
                alpha: angle between gate and ned reference frames
                ned_x1: gate x coordinate in ned reference frame
                ned_y1: gate y coordinate in ned reference frame
        @return: ned_x2: target x coordinate in ned reference frame
                 ned_y2: target y coordinate in ned reference frame
        '''
        p = np.array([[gate_x2], [gate_y2]])
        J = self.rotation_matrix(alpha)
        n = J.dot(p)
        ned_x2 = n[0] + ned_x1
        ned_y2 = n[1] + ned_y1
        return (ned_x2, ned_y2)

    def rotation_matrix(self, angle):
        '''
        @name: rotation_matrix
        @brief: Transformation matrix template.
        @param: angle: angle of rotation
        @return: J: transformation matrix
        '''
        J = np.array([[math.cos(angle), -1*math.sin(angle)],
                      [math.sin(angle), math.cos(angle)]])
        return (J)

    def desired(self, path):
        self.uuv_waypoints.publish(path)
        self.uuv_path.header.stamp = rospy.Time.now()
        self.uuv_path.header.frame_id = "world"
        del self.uuv_path.poses[:]
        for index in range(path.waypoint_list_length):
            pose = PoseStamped()
            pose.header.stamp = rospy.Time.now()
            pose.header.frame_id = "world"
            pose.pose.position.x = path.waypoint_list_x[index]
            pose.pose.position.y = path.waypoint_list_y[index]
            pose.pose.position.z = path.waypoint_list_z[index]
            self.uuv_path.poses.append(pose)
        self.uuv_path_pub.publish(self.uuv_path)


def main():
    rospy.init_node("auto_nav_position", anonymous=False)
    rate = rospy.Rate(20)
    autoNav = AutoNav()
    autoNav.distance = 4
    last_detection = []
    while not rospy.is_shutdown() and autoNav.activated:
        rospy.loginfo("AutoNav is activated")
        #rospy.loginfo(autoNav.objects_list)
        rospy.loginfo(last_detection)
        if autoNav.objects_list != last_detection:
            rospy.loginfo("Last detection not activated")
            if autoNav.state == -1:
                rospy.loginfo("AutoNav.state == -1")
                while (not rospy.is_shutdown()) and (len(autoNav.objects_list) < 3):
                    autoNav.test.publish(autoNav.state)
                    rospy.loginfo("AutoNav.state in -1")
                    rate.sleep()
                autoNav.state = 0
                # last_detection = autoNav.objects_list
            if autoNav.state == 0:
                rospy.loginfo("AutoNav.state == 0")
                autoNav.test.publish(autoNav.state)
                if len(autoNav.objects_list) >= 3:
                    rospy.loginfo("AutoNav.objects_list >= 3")
                    autoNav.calculate_distance_to_sub()
                if (len(autoNav.objects_list) >= 3) and (autoNav.distance >= 2):
                    rospy.loginfo("AutoNav.objects_list >= 3 and autoNav.distance >= 2")
                    autoNav.center_point()
                else:
                    rospy.loginfo("No autoNav.objects_list")
                    initTime = rospy.Time.now().secs
                    while ((not rospy.is_shutdown()) and
                           (len(autoNav.objects_list) < 3 or autoNav.distance < 2)):
                        rospy.loginfo("len(autoNav.objects_list) < 3 or autoNav.distance < 2")
                        if rospy.Time.now().secs - initTime > 2:
                            rospy.loginfo("rospy.Time.now().secs - initTime > 2")
                            autoNav.state = 1
                            rate.sleep()
                            break
                #last_detection = autoNav.objects_list
            if autoNav.state == 1:
                rospy.loginfo("AutoNav.state == 1")
                autoNav.test.publish(autoNav.state)
                if len(autoNav.objects_list) >= 3:
                    autoNav.state = 2
                else:
                    initTime = rospy.Time.now().secs
                    while ((not rospy.is_shutdown()) and (len(autoNav.objects_list) < 3)):
                        if rospy.Time.now().secs - initTime > 1:
                            autoNav.farther()
                            rate.sleep()
                            break
                #last_detection = autoNav.objects_list
        if autoNav.objects_list != last_detection:
            rospy.loginfo("autoNav.objects_list != last_detection")
            if autoNav.state == 2:
                rospy.loginfo("AutoNav.state == 2")
                autoNav.test.publish(autoNav.state)
                if len(autoNav.objects_list) >= 3:
                    autoNav.calculate_distance_to_sub()
                if len(autoNav.objects_list) >= 3 and autoNav.distance >= 2:
                    autoNav.center_point()
                else:
                    initTime = rospy.Time.now().secs
                    while ((not rospy.is_shutdown()) and
                           (len(autoNav.objects_list) < 3 or autoNav.distance < 2)):
                        if rospy.Time.now().secs - initTime > 2:
                            autoNav.state = 3
                            rate.sleep()
                            break
                # last_detection = autoNav.objects_list
            elif autoNav.state == 3:
                autoNav.test.publish(autoNav.state)
                time.sleep(1)
                autoNav.status_pub.publish(1)
        rate.sleep()
    rospy.spin()


if __name__ == "__main__":
    try:
        main()
    except rospy.ROSInterruptException:
        pass
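The three *_to_* helpers above are all built from the same 2-D rotation matrix, so a quick property check is that transforming a point into a rotated frame and back recovers it. A minimal standalone sketch, plain NumPy, no ROS required; the yaw value and test point are arbitrary:

import math
import numpy as np

def rotation_matrix(angle):
    # Same template as AutoNav.rotation_matrix.
    return np.array([[math.cos(angle), -math.sin(angle)],
                     [math.sin(angle),  math.cos(angle)]])

yaw = 0.7
p_body = np.array([3.0, -2.0])

# body -> ned (no translation, for simplicity), then ned -> body.
p_ned = rotation_matrix(yaw).dot(p_body)
p_back = rotation_matrix(-yaw).dot(p_ned)

assert np.allclose(p_back, p_body)  # the inverse rotation undoes the transform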
{"hexsha": "890edd7be825562cf1cb899122b67926d75c695d", "size": 15344, "ext": "py", "lang": "Python", "max_stars_repo_path": "lib/choose_side/scripts/auto_nav_position.py", "max_stars_repo_name": "vanttec/vanttec_uuv", "max_stars_repo_head_hexsha": "95a0db636f7b99ac9ad9756e0d962fa1acc71e5e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lib/choose_side/scripts/auto_nav_position.py", "max_issues_repo_name": "vanttec/vanttec_uuv", "max_issues_repo_head_hexsha": "95a0db636f7b99ac9ad9756e0d962fa1acc71e5e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lib/choose_side/scripts/auto_nav_position.py", "max_forks_repo_name": "vanttec/vanttec_uuv", "max_forks_repo_head_hexsha": "95a0db636f7b99ac9ad9756e0d962fa1acc71e5e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-02-18T01:11:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-18T01:11:46.000Z", "avg_line_length": 37.1525423729, "max_line_length": 125, "alphanum_fraction": 0.5205943691, "include": true, "reason": "import numpy", "num_tokens": 3692}
r"""Module defining halo bias models. The halo bias is defined as the ratio of the power spectrum of halo (centres) for halos of a given mass, to the linear matter power spectrum. In particular, it is assumed for the models defined here that the power spectrum of halo centres is merely a scalar multiple of the linear matter power spectrum. That is, we implement first-order, local, deterministic bias. Bias models are defined as :class:`~hmf.Component` instances -- that is, they are flexible models that the user can subclass and use in the halo model framework. See :class:`Bias` for instructions on how to use ``Bias`` models. The following notes will mostly describe how to subclass :class:`Bias` to define your own model. Also provided are several models from the literature. In addition, it defines a factory function :func:`make_colossus_bias` which helps with integration with the ``colossus`` cosmology code. With this function, the user is able to easily create a ``halomod``-compatible ``Component`` model that transparently uses ``colossus`` the background to do the actual computation of the halo bias. This means it is easy to use any of the updated models from ``colossus`` in a native way. Most models are specified in terms of the peak-height parameter, though it is possible to specify them in terms of mass, and include cosmological parameters. To define your own bias model, subclass either :class:`Bias` or any of the in-built models provided here. The only method required to be implemented is ``bias()``, which takes no parameters. It must return the local first-order bias as an array of the same shape as ``m`` (you also have access to the peak-height ``nu`` as an instance variable). See documentation for :class:`Bias` for more information on the instance variables available in the definition. As with all ``Component`` subclasses, arbitrary user-specified variables can be received by defining them in the `_defaults` class-level dictionary. The module also defines a :class:`ScaleDependentBias`, which corrects the bias function on different length scales. Examples -------- Define your own bias model that is unity for all masses (in fact this one is already built-in):: >>> class UnityBias(Bias): >>> def bias(self): >>> return np.ones_like(self.m) Use this bias model in a halo model:: >>> from halomod import HaloModel >>> hm = HaloModel(bias_model=UnityBias) >>> assert np.all(hm.bias == 1) Constructing and using a colossus-based halo bias:: >>> from halomod import HaloModel >>> from halomod.bias import make_colossus_bias >>> comparat = bias.make_colossus_bias(model="comparat17") >>> hm = HaloModel(bias_model=comparat) """ from typing import Optional import numpy as np from hmf import Component from scipy.interpolate import InterpolatedUnivariateSpline as spline from hmf.cosmology.cosmo import astropy_to_colossus from colossus.lss.bias import haloBiasFromNu from astropy.cosmology import FLRW, Planck15 from hmf.halos.mass_definitions import SOMean from hmf._internals import pluggable @pluggable class Bias(Component): r""" The base Bias component. This class should not be instantiated directly! Use a subclass that implements a specific bias model. The parameters listed below are the input parameters for *all* bias models. Extra model-specific parameters can be given -- these are documented in their respective class docstring. Parameters ---------- nu : array-like Peak-height, ``delta^2_c/sigma^2``. delta_c : float, optional Critical over-density for collapse. 
Not all bias components require this parameter. m : array, optional Vector of halo masses corresponding to `nu`. Not all bias components require this parameter. mstar : float, optional Nonlinear mass, defined by the relation ``sigma(mstar) = delta_c``, with ``sigma`` the mass variance in spheres corresponding to virial radii of halos of mass ``mstar``. delta_halo : float, optional The over-density of halos with respect to the mean background matter density. n : float, optional The spectral index of the linear matter power spectrum.. Om0 : float, optional The matter density, as a fraction of critical density, in the current universe. sigma_8 : float, optional The square root of the mass in spheres of radius 8 Mpc/h in the present day (normalizes the power spectrum). h : float, optional Hubble parameter in units of 100 km/s/Mpc. """ _models = {} _defaults = {} def __init__( self, nu: np.ndarray, delta_c: float = 1.686, m: Optional[np.ndarray] = None, mstar: Optional[float] = None, delta_halo: Optional[float] = 200, n: Optional[float] = 1, sigma_8: Optional[float] = 0.8, cosmo: FLRW = Planck15, n_eff: [None, np.ndarray] = None, z: float = 0.0, **model_parameters ): self.nu = nu self.n = n self.delta_c = delta_c self.delta_halo = delta_halo self.m = m self.mstar = mstar self.z = z self.cosmo = cosmo self.h = cosmo.h self.Om0 = cosmo.Om0 self.sigma_8 = sigma_8 self.n_eff = n_eff super(Bias, self).__init__(**model_parameters) def bias(self) -> np.ndarray: """Calculate the first-order, linear, deterministic halo bias. Returns ------- b : array-like The bias as a function of mass, as an array of values corresponding to the instance attributes `m` and/or `nu`. Examples -------- >>> import matplotlib.pyplot as plt >>> import numpy as np >>> from halomod.bias import Mo96 >>> peak_height = np.linspace(0.1, 2, 100) >>> bias = Mo96(nu=peak_height) >>> plt.plot(peak_height, bias.bias()) """ return np.ones_like(self.nu) class UnityBias(Bias): """A toy bias model which is exactly unity for all mass. See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. """ def bias(self): return np.ones_like(self.nu) class Mo96(Bias): r""" Peak-background split bias corresponding to PS HMF. See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This bias form can be explicitly derived by assuming a Press-Schechter form for the HMF, as shown for example in [1]_. The form is .. math:: 1 + \frac{(\nu - 1)}{\delta_c} References ---------- .. [1] Mo, H. J. and White, S. D. M., "An analytic model for the spatial clustering of dark matter haloes", https://ui.adsabs.harvard.edu/abs/1996MNRAS.282..347M, 1996 """ def bias(self): return 1 + (self.nu - 1) / self.delta_c class Jing98(Bias): r""" Empirical bias of Jing (1998). See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This is an empirical form proposed in [1]_, with the formula .. math:: (a/\nu^4 + 1)^{b - c n} \left(1 + \frac{\nu^2 - 1}{\delta_c}\right) The parameters ``a``, ``b`` and ``c`` are free parameters, with values fitted in [1]_ of ``(0.5, 0.06, 0.02)``, which are the defaults here. Other Parameters ---------------- a,b,c : float The fitting parameters. References ---------- .. [1] Jing, Y. P., "Accurate Fitting Formula for the Two-Point Correlation Function of Dark Matter Halos", http://adsabs.harvard.edu/abs/1998ApJ...503L...9J, 1998. 
""" _defaults = {"a": 0.5, "b": 0.06, "c": 0.02} def bias(self): nu = self.nu a = self.params["a"] b = self.params["b"] c = self.params["c"] return (a / nu ** 2 + 1) ** (b - c * self.n_eff) * (1 + (nu - 1) / self.delta_c) class ST99(Bias): r""" Peak-background split bias corresponding to ST99 HMF. See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This bias form can be explicitly derived by assuming a Sheth-Tormen form for the HMF, as shown for example in [1]_. The form is .. math:: 1 + \frac{q\nu - 1}{\delta_c} + \frac{2p}{\delta_c ( 1 + q^p \nu^p)} with ``p`` and ``q`` having default values of ``(0.707, 0.3)``. They are free in this implementation for the user to modify. Other Parameters ---------------- p,q : float, optional The free parameters of the form. References ---------- .. [1] Sheth, R. K.. and Tormen, G., "Large-scale bias and the peak background split", https://ui.adsabs.harvard.edu/abs/1999MNRAS.308..119S, 1999 """ _defaults = {"q": 0.707, "p": 0.3} def bias(self): p = self.params["p"] q = self.params["q"] return ( 1 + (q * self.nu - 1) / self.delta_c + (2 * p / self.delta_c) / (1 + (q * self.nu) ** p) ) class SMT01(Bias): r""" Extended Press-Schechter-derived bias function corresponding to SMT01 HMF. See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This bias form can be explicitly derived by assuming a Sheth-Tormen form for the HMF and allowing for ellipsoidal collapse, as shown for example in [1]_. The form is .. math:: 1 + \frac{1}{\delta_c \sqrt{a}} \left(\sqrt{a} a \nu + \sqrt{a} b (a\nu)^{1-c} - \frac{(a\nu)^c}{(a\nu)^c + b(1-c)(1 - c/2)}\right) with ``a``, ``b`` and ``c`` having default values of ``(0.707, 0.5, 0.6)``. They are free in this implementation for the user to modify. Other Parameters ---------------- a,b,c : float, optional The free parameters of the form. References ---------- .. [1] Sheth, R. K. and Tormen G., "Ellipsoidal collapse and an improved model for the number and spatial distribution of dark matter haloes", https://ui.adsabs.harvard.edu/abs/2001MNRAS.323....1S, 2001 """ _defaults = {"a": 0.707, "b": 0.5, "c": 0.6} def bias(self): nu = self.nu a = self.params["a"] sa = np.sqrt(a) b = self.params["b"] c = self.params["c"] return 1 + ( sa * (a * nu) + sa * b * (a * nu) ** (1 - c) - (a * nu) ** c / ((a * nu) ** c + b * (1 - c) * (1 - c / 2)) ) / (self.delta_c * sa) class Seljak04(Bias): r""" Empirical bias relation from Seljak & Warren (2004), without cosmological dependence. See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This the form from [1]_ *without* cosmological dependence. The form is .. math:: a + bx^c + \frac{d}{ex+1} + fx^g with :math:`x = m/m_\star` (and :math:`m_star` the nonlinear mass -- see :class:`Bias` for details). The other parameters are all fitted, with values given [1]_ as ``(a,b,c,d,e,f,g) = (0.53, 0.39, 0.45, 0.13, 40, 5e-4, 1.5)``. Other Parameters ---------------- a,b,c,d,e,f,g : float, optional The fitted parameters. References ---------- .. [1] Seljak, U. and Warren M. S., "Large-scale bias and stochasticity of haloes and dark matter", https://ui.adsabs.harvard.edu/abs/2004MNRAS.355..129S, 2004. 
""" _defaults = { "a": 0.53, "b": 0.39, "c": 0.45, "d": 0.13, "e": 40, "f": 5e-4, "g": 1.5, } def bias(self): a = self.params["a"] b = self.params["b"] c = self.params["c"] d = self.params["d"] e = self.params["e"] f = self.params["f"] g = self.params["g"] x = self.m / self.mstar return a + b * x ** c + d / (e * x + 1) + f * x ** g class Seljak04Cosmo(Seljak04): r""" Empirical bias relation from Seljak & Warren (2004), with cosmological dependence. See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This the form from [1]_ *with* cosmological dependence -- except we do not include the running of the spectral index. The form is .. math:: b_{\rm no cosmo} + \log_10(x) \left[a_1 (\Omega_{m,0} - 0.3 + n - 1) + a_2(\sigma_8 - 0.9 + h-0.7)\right] with :math:`x = m/m_\star` (and :math:`m_{\star}` the nonlinear mass -- see :class:`Bias` for details). The non-cosmologically-dependent bias is that given by :class:`Seljak04`. ``a1`` and ``a2`` are fitted, with values given in [1]_ as ``(a1,a2) = (0.4, 0.3)``. Other Parameters ---------------- a,b,c,d,e,f,g : float, optional The fitted parameters for :class:`Seljak04`. a1,a2 : float, optional Fitted parameters for the cosmological dependence. References ---------- .. [1] Seljak, U. and Warren M. S., "Large-scale bias and stochasticity of haloes and dark matter", https://ui.adsabs.harvard.edu/abs/2004MNRAS.355..129S, 2004. """ _defaults = { "a": 0.53, "b": 0.39, "c": 0.45, "d": 0.13, "e": 40, "f": 5e-4, "g": 1.5, "a1": 0.4, "a2": 0.3, } def bias(self): b = super().bias() a1 = self.params["a1"] a2 = self.params["a2"] x = np.log10(self.m / self.mstar) x[x < -1] = -1 return b + x * ( a1 * (self.Om0 - 0.3 + self.n - 1) + a2 * (self.sigma_8 - 0.9 + self.h - 0.7) ) class Tinker05(SMT01): r""" Empirical bias from Tinker et al (2005). See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This form is the same as that of :class:`SMT01`, however the parameters were re-fit to simulations in [1]_. Here the default parameters are ``(a,b,c) = (0.707, 0.35, 0.8)``. References ---------- .. [1] Tinker J. et al., "On the Mass-to-Light Ratio of Large-Scale Structure", https://ui.adsabs.harvard.edu/abs/2005ApJ...631...41T, 2005 """ _defaults = {"a": 0.707, "b": 0.35, "c": 0.8} class Mandelbaum05(ST99): r""" Empirical bias of Mandelbaum (2005). See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This form is the same as that of :class:`SMT99`, however the parameters were re-fit to simulations from [1]_ in [2]_. Here the default parameters are ``(q,p) = (0.73, 0.15)``. References ---------- .. [1] Seljak, U. and Warren M. S., "Large-scale bias and stochasticity of haloes and dark matter", https://ui.adsabs.harvard.edu/abs/2004MNRAS.355..129S, 2004. .. [2] Mandelbaum, R. et al., "Galaxy-galaxy lensing: dissipationless simulations versus the halo model", https://ui.adsabs.harvard.edu/abs/2005MNRAS.362.1451M, 2005. """ _defaults = {"q": 0.73, "p": 0.15} class Pillepich10(Bias): r""" Empirical bias of Pillepich et al (2010). See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This is the fit from [1]_, but it is the Gaussian case. The form is .. math:: B_0 + B_1 \sqrt{\nu} + B_2 \nu with :math:`\nu` the peak-height parameter. 
The values of the parameters fitted to simulation are given as ``(B0, B1, B2) = (0.647, -0.32, 0.568)``. They are left free to the user. Other Parameters ---------------- B1, B2, B3 : float, optional The fitted parameters. References ---------- .. [1] Pillepich, A., Porciani, C. and Hahn, O., "Halo mass function and scale-dependent bias from N-body simulations with non-Gaussian initial conditions", https://ui.adsabs.harvard.edu/abs/2010MNRAS.402..191P, 2010 """ _defaults = {"B0": 0.647, "B1": -0.320, "B2": 0.568} def bias(self): nu = self.nu B0 = self.params["B0"] B1 = self.params["B1"] B2 = self.params["B2"] return B0 + B1 * np.sqrt(nu) + B2 * nu class Manera10(ST99): r""" Peak-background split bias from Manera et al. (2010) [1]_. See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Other Parameters ---------------- q, p : float, optional The fitted parameters. Notes ----- .. note:: This form from [1]_ has the same form as :class:`ST99`, but has refitted the parameters with ``(q, p) = (0.709, 0.2)``. References ---------- .. [1] Manera M., Sheth,R. K. and Scoccimarro R., "Large-scale bias and the inaccuracy of the peak-background split ", https://ui.adsabs.harvard.edu/abs/2010MNRAS.402..589M, 2010 """ _defaults = {"q": 0.709, "p": 0.248} class Tinker10(Bias): r""" Empirical bias of Tinker et al (2010). See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This is an empirical form that does not obey the peak-background split consistency formalism, but fits well to simulations. It is dependent on the spherical halo definition. The form from [1]_ is .. math:: 1 - A\frac{\nu^a}{\nu^a + \delta_c^a} + B \nu^b + C \nu^c with .. math:: A = 1 + 0.24 y e^{-(4/y)^4}, and .. math:: a = 0.44y - 0.88 and .. math:: C = 0.019 + 0.107y + 0.19 e^{-(4/y)^4} and :math:`y=\log_{10} \Delta_{\rm halo}`. The fitted parameters are ``(B,b,c) = (0.183, 1.5, 2.4)``. Other Parameters ---------------- B,b,c : float, optional The fitted parameters. References ---------- .. [1] Tinker, J. L. et al., "The Large-scale Bias of Dark Matter Halos: Numerical Calibration and Model Tests", https://ui.adsabs.harvard.edu/abs/2010ApJ...724..878T, 2010 See Also -------- :class:`Tinker10PBsplit` Bias from the same study but with the constraint of the peak-background split formalism. """ _defaults = {"B": 0.183, "b": 1.5, "c": 2.4} def bias(self): y = np.log10(self.delta_halo) A = 1.0 + 0.24 * y * np.exp(-((4 / y) ** 4)) a = 0.44 * y - 0.88 C = 0.019 + 0.107 * y + 0.19 * np.exp(-((4 / y) ** 4)) nu = np.sqrt(self.nu) B = self.params["B"] c = self.params["c"] b = self.params["b"] return ( 1 - A * nu ** a / (nu ** a + self.delta_c ** a) + B * nu ** b + C * nu ** c ) class Tinker10PBSplit(Bias): r""" Empirical bias of Tinker et al (2010). See documentation for :class:`Bias` for information on input parameters. This model has no free parameters. Notes ----- This is form from [1]_ obeys the peak-background split consistency formalism, which offers some advantages, but also fits well to simulations. It is dependent on the spherical halo definition. See the reference for details on the form. Other Parameters ---------------- alpha, beta,gamma,phi, eta : float, optional The fitted parameters. Each of these are available to specify at a certain overdensity. So for example ``alpha_200`` specifies the ``alpha`` parameter at a spherical halo overdensity of 200. All default values are taken from Tinker 2010. 
beta_exp, phi_exp, eta_exp, gamma_exp: float, optional The value of ``beta``, ``phi`` etc., are functions of redshift via the relation ``beta = beta0 (1 + z)^beta_exp`` (and likewise for the other parameters). max_z : float, optional The maximum redshift for which the redshift evolution holds. Above this redshift, the relation flattens. Default 3. References ---------- .. [1] Tinker, J. L. et al., "The Large-scale Bias of Dark Matter Halos: Numerical Calibration and Model Tests", https://ui.adsabs.harvard.edu/abs/2010ApJ...724..878T, 2010 See Also -------- :class:`Tinker10` Bias from the same study but without the constraint of the peak-background split formalism. """ _defaults = { # --- alpha "alpha_200": 0.368, "alpha_300": 0.363, "alpha_400": 0.385, "alpha_600": 0.389, "alpha_800": 0.393, "alpha_1200": 0.365, "alpha_1600": 0.379, "alpha_2400": 0.355, "alpha_3200": 0.327, # --- beta "beta_200": 0.589, "beta_300": 0.585, "beta_400": 0.544, "beta_600": 0.543, "beta_800": 0.564, "beta_1200": 0.623, "beta_1600": 0.637, "beta_2400": 0.673, "beta_3200": 0.702, # --- gamma "gamma_200": 0.864, "gamma_300": 0.922, "gamma_400": 0.987, "gamma_600": 1.09, "gamma_800": 1.2, "gamma_1200": 1.34, "gamma_1600": 1.5, "gamma_2400": 1.68, "gamma_3200": 1.81, # --- phi "phi_200": -0.729, "phi_300": -0.789, "phi_400": -0.910, "phi_600": -1.05, "phi_800": -1.2, "phi_1200": -1.26, "phi_1600": -1.45, "phi_2400": -1.5, "phi_3200": -1.49, # -- eta "eta_200": -0.243, "eta_300": -0.261, "eta_400": -0.261, "eta_600": -0.273, "eta_800": -0.278, "eta_1200": -0.301, "eta_1600": -0.301, "eta_2400": -0.319, "eta_3200": -0.336, # --others "beta_exp": 0.2, "phi_exp": -0.08, "eta_exp": 0.27, "gamma_exp": -0.01, "max_z": 3, } delta_virs = np.array([200, 300, 400, 600, 800, 1200, 1600, 2400, 3200]) def bias(self): if self.delta_halo not in self.delta_virs: beta_array = np.array([self.params["beta_%s" % d] for d in self.delta_virs]) gamma_array = np.array( [self.params["gamma_%s" % d] for d in self.delta_virs] ) phi_array = np.array([self.params["phi_%s" % d] for d in self.delta_virs]) eta_array = np.array([self.params["eta_%s" % d] for d in self.delta_virs]) beta_func = spline(self.delta_virs, beta_array) gamma_func = spline(self.delta_virs, gamma_array) phi_func = spline(self.delta_virs, phi_array) eta_func = spline(self.delta_virs, eta_array) beta_0 = beta_func(self.delta_halo) gamma_0 = gamma_func(self.delta_halo) phi_0 = phi_func(self.delta_halo) eta_0 = eta_func(self.delta_halo) else: beta_0 = self.params["beta_%s" % (int(self.delta_halo))] gamma_0 = self.params["gamma_%s" % (int(self.delta_halo))] phi_0 = self.params["phi_%s" % (int(self.delta_halo))] eta_0 = self.params["eta_%s" % (int(self.delta_halo))] beta = ( beta_0 * (1 + min(self.z, self.params["max_z"])) ** self.params["beta_exp"] ) phi = phi_0 * (1 + min(self.z, self.params["max_z"])) ** self.params["phi_exp"] eta = eta_0 * (1 + min(self.z, self.params["max_z"])) ** self.params["eta_exp"] gamma = ( gamma_0 * (1 + min(self.z, self.params["max_z"])) ** self.params["gamma_exp"] ) return ( 1 + (gamma * self.nu - (1 + 2 * eta)) / self.delta_c + 2 * phi / self.delta_c / (1 + (beta ** 2 * self.nu) ** phi) ) @pluggable class ScaleDepBias(Component): r"""Base class for scale-dependent bias models. Parameters ---------- xi_dm : np.ndarray The dark matter correlation function defined at some real-space scales, r. 
""" def __init__(self, xi_dm: np.ndarray, **model_parameters): self.xi_dm = xi_dm super(ScaleDepBias, self).__init__(**model_parameters) def bias_scale(self) -> np.ndarray: """Return the scale dependent bias as a function of r. The scale-dependent bias is a function of the dark matter correlation function, and the length of the returned array should be the same size as the instance :attr:`xi_dm`. """ pass class TinkerSD05(ScaleDepBias): r"""Scale-dependent bias from Tinker 2005. Notes ----- Defined in [1]_ as .. math:: \sqrt{\frac{(1 + a\xi)^b}{(1 + c\xi)^d}} with the fitted parameters ``(a,b,c,d)=(1.17, 1.49, 0.69, 2.09)``. References ---------- .. [1] Tinker J. et al., "On the Mass-to-Light Ratio of Large-Scale Structure", https://ui.adsabs.harvard.edu/abs/2005ApJ...631...41T, 2005 """ _defaults = {"a": 1.17, "b": 1.49, "c": 0.69, "d": 2.09} def bias_scale(self): a = self.params["a"] b = self.params["b"] c = self.params["c"] d = self.params["d"] return np.sqrt((1 + a * self.xi_dm) ** b / (1 + c * self.xi_dm) ** d) def make_colossus_bias(model="comparat17", mdef=SOMean(), **defaults): r""" A factory function which helps with integration with the ``colossus`` cosmology code. See :mod:`~halomod.bias` for an example of how to use it. Notice that it returns a *class* :class:`CustomColossusBias` not an instance. """ class CustomColossusBias(Bias): _model_name = model _defaults = defaults _mdef = mdef def __init__(self, *args, **kwargs): super(CustomColossusBias, self).__init__(*args, **kwargs) astropy_to_colossus(self.cosmo, sigma8=self.sigma_8, ns=self.n) def bias(self): return haloBiasFromNu( nu=np.sqrt(self.nu), z=self.z, mdef=self._mdef.colossus_name, model=self._model_name, **self.params ) CustomColossusBias.__name__ = model.capitalize() CustomColossusBias.__qualname__ = model.capitalize() return CustomColossusBias
{"hexsha": "5303f68c95b92d1d4626ba79b89fdf1fb269334a", "size": 26553, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/halomod/bias.py", "max_stars_repo_name": "sjforeman/halomod", "max_stars_repo_head_hexsha": "587db6bc71a77ea60a541b306fc3601eeb424bc9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/halomod/bias.py", "max_issues_repo_name": "sjforeman/halomod", "max_issues_repo_head_hexsha": "587db6bc71a77ea60a541b306fc3601eeb424bc9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/halomod/bias.py", "max_forks_repo_name": "sjforeman/halomod", "max_forks_repo_head_hexsha": "587db6bc71a77ea60a541b306fc3601eeb424bc9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.2245145631, "max_line_length": 146, "alphanum_fraction": 0.58844575, "include": true, "reason": "import numpy,from scipy,import astropy,from astropy", "num_tokens": 7689}
import numpy as np
import sys
import os
import pytest
from knnFeat import _get_feat
sys.path.append(os.getcwd())


# Case 1: class_index == 0 and k_index == 0
@pytest.mark.success
def test_get_feat_c0k0():
    data = np.array([0, 0])
    X_train = np.reshape(np.array([0, 1, 3, 4, 5, 6, 1, 1, 0, 3]), (5, 2))
    y_train = np.array([0, 0, 0, 1, 1])
    class_index = 0
    k_index = 0
    expected = _get_feat(data, X_train, y_train, class_index, k_index)
    # [0, 1] is the 1-nearest point
    actual = 1
    assert expected == actual


# Case 2: class_index == 0 and k_index == 1
@pytest.mark.success
def test_get_feat_c0k1():
    data = np.array([0, 0])
    X_train = np.reshape(np.array([0, 1, 3, 4, 5, 6, 1, 1, 0, 3]), (5, 2))
    y_train = np.array([0, 0, 0, 1, 1])
    class_index = 0
    k_index = 1
    expected = _get_feat(data, X_train, y_train, class_index, k_index)
    # [0, 1] and [3, 4] are the 2-nearest points
    actual = 1 + 5
    assert expected == actual


# Case 3: class_index == 1 and k_index == 0
@pytest.mark.success
def test_get_feat_c1k0():
    data = np.array([0, 0])
    X_train = np.reshape(np.array([0, 1, 3, 4, 5, 6, 0, 2, 0, 3]), (5, 2))
    y_train = np.array([0, 0, 0, 1, 1])
    class_index = 1
    k_index = 0
    expected = _get_feat(data, X_train, y_train, class_index, k_index)
    # [0, 2] is the 1-nearest point
    actual = 2
    assert expected == actual


# Case 4: class_index == 1 and k_index == 1
@pytest.mark.success
def test_get_feat_c1k1():
    data = np.array([0, 0])
    X_train = np.reshape(np.array([0, 1, 3, 4, 5, 6, 0, 2, 0, 3]), (5, 2))
    y_train = np.array([0, 0, 0, 1, 1])
    class_index = 1
    k_index = 1
    expected = _get_feat(data, X_train, y_train, class_index, k_index)
    # [0, 2] and [0, 3] are the 2-nearest points
    actual = 2 + 3
    assert expected == actual
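From the four cases one can read off the contract of _get_feat: it returns the summed Euclidean distance from ``data`` to the ``k_index + 1`` nearest training points of class ``class_index``. A hedged re-implementation sketch consistent with these tests; it is not the library's actual code, just a reference for the expected behavior:

import numpy as np

def get_feat_reference(data, X_train, y_train, class_index, k_index):
    """Sum of distances to the (k_index + 1) nearest points of one class."""
    X_cls = X_train[y_train == class_index]
    dists = np.sqrt(((X_cls - data) ** 2).sum(axis=1))
    return np.sort(dists)[:k_index + 1].sum()

# Reproduces Case 2: the nearest class-0 points to the origin are [0, 1] and [3, 4].
X = np.array([[0, 1], [3, 4], [5, 6], [1, 1], [0, 3]])
y = np.array([0, 0, 0, 1, 1])
assert get_feat_reference(np.array([0, 0]), X, y, 0, 1) == 1 + 5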
{"hexsha": "d6d8163ab93cc76a1e04dbb8ecc53d2d2466c00f", "size": 1852, "ext": "py", "lang": "Python", "max_stars_repo_path": "test/test_get_feat.py", "max_stars_repo_name": "krishna-patel98/knnFeat", "max_stars_repo_head_hexsha": "257cd43c28ed4c933ef28b41492d263e19cc27db", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 65, "max_stars_repo_stars_event_min_datetime": "2018-06-23T07:36:49.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-13T17:30:19.000Z", "max_issues_repo_path": "test/test_get_feat.py", "max_issues_repo_name": "krishna-patel98/knnFeat", "max_issues_repo_head_hexsha": "257cd43c28ed4c933ef28b41492d263e19cc27db", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-06-24T14:55:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-04T12:28:32.000Z", "max_forks_repo_path": "test/test_get_feat.py", "max_forks_repo_name": "krishna-patel98/knnFeat", "max_forks_repo_head_hexsha": "257cd43c28ed4c933ef28b41492d263e19cc27db", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2018-06-23T07:37:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-22T16:28:23.000Z", "avg_line_length": 24.6933333333, "max_line_length": 74, "alphanum_fraction": 0.6031317495, "include": true, "reason": "import numpy", "num_tokens": 721}
      program omi

      USE m3utilio
      USE ENV_VARS
      USE utilities_module

      implicit none

      character(18)  :: rowheader
      character(256), allocatable :: OMI_filename( : )
      character(256) :: file_name
      character(256) :: file_line
      character(16)  :: OMI_FILE_NCF   = 'OMI_FULL_NCF'
      character(16)  :: EXTEN_FILE_NCF = 'OMI_EXPAND_NCF'
      CHARACTER(80)  :: XMSG = ' '

      integer nlatitude
      integer nlongitude
      integer year, month, day, julday
      integer icount, jcount
      integer i, j, k
      integer i_, j_
      integer ip1, kp1
      integer i_max, j_max
      integer nfiles
      integer unit_expand
      integer ipass
      integer io_files
      integer io_file_init
      integer io_full_dat
      integer jdate_init
      integer jdate_prev
      integer jdate_next
      integer ldate
      integer stat_allocate
!     integer delta_julian
      integer delta_date

      integer, allocatable :: jdate( : )
      integer, allocatable :: idate( : )
      integer, allocatable :: oz_toms( : )
      Character(3), allocatable :: oz_string( : )

      real,    parameter :: pi    = 3.14159
      real,    parameter :: pi180 = pi / 180.0
      real(8), parameter :: fill_limit = 3.14159d0 / 36.0d0

      real(8), allocatable :: oz ( :,: )
      real(8), allocatable :: oz_( :,: )
      real(8), allocatable :: oz_mean( :,: )
      real(8), allocatable :: oz_prev( :,: )
      real(8), allocatable :: oz_expand( :,: )
      real(8), allocatable :: gregdate( : ), yrfrac_( : )
      real,    allocatable :: lat_omi( : ), lon_omi( : ), lat_ioapi( :,: )
      real(8), allocatable :: phi_omi( : ), theta_omi( : )   ! lat, lon [radians]
      real(8), allocatable :: lat_( : ), lon_( : )
      real(8), allocatable :: oz_extend( :,: )
      real,    allocatable :: ioapi_buff( :,: )
      real,    allocatable :: ioapi_prev( :,: )
      real,    allocatable :: oz_adjust( :,: )
      real,    allocatable :: oz_ioapi( :,: )
      real,    allocatable :: cloud_fraction( :,: )
      real,    allocatable :: o3_missing( :,: )
      real,    allocatable :: lat_expand( : ), lon_expand( : )

      real(8) :: yrfrac
      real    :: latstepsize, lonstepsize
      real    :: lat, lon
      real    :: init_lat, init_lon
      real(8) :: w(2), v(4)

      logical :: eflag = .False.
      logical, save :: First_time = .True.
      logical, parameter :: near_neighbor = .False.   ! replace missing using fill subroutine
      logical :: read_clouds = .False.
      logical :: TOMS_FORMAT = .False.

      interface
         SUBROUTINE CREATE_IOAPI_OMI ( FILE_NAME, JDATE, NLAT, NLON )
            CHARACTER( 16 ), INTENT( IN ) :: FILE_NAME   ! name of file
            INTEGER, INTENT( IN ) :: JDATE   ! Start date of file, YYYYDDD
            INTEGER, INTENT( IN ) :: NLAT    ! # of latitude points
            INTEGER, INTENT( IN ) :: NLON    ! # of longitude points
         END SUBROUTINE CREATE_IOAPI_OMI
         SUBROUTINE CREATE_EXTEND_OMI ( FILE_NAME, JDATE )
            CHARACTER( 16 ), INTENT( IN ) :: FILE_NAME   ! name of file
            INTEGER, INTENT( IN ) :: JDATE   ! Start date of file, YYYYDDD
         END SUBROUTINE CREATE_EXTEND_OMI
      end interface

      Call GET_ENVS()

      nlatitude  = 180
      nlongitude = 360

      if( CREATE_FULL_FILES )open(file=OMI_FULL_DAT,newunit=io_full_dat)

      file_name = 'a'
      call get_OMI_listsize(OMI_FILE_LIST,nfiles)

      Allocate( OMI_filename( nfiles ), stat=stat_allocate )
      If ( stat_allocate .ne. 0 ) Then
         xmsg = 'error allocating OMI_filename'
         write(6,'(a)')xmsg
         Stop
      End If
      Allocate( jdate( nfiles ), idate(nfiles), stat=stat_allocate )
      If ( stat_allocate .ne. 0 ) Then
         xmsg = 'error allocating jdate,idate'
         write(6,'(a)')xmsg
         Stop
      End If
      Allocate( gregdate( nfiles ), yrfrac_(nfiles), stat=stat_allocate )
      If ( stat_allocate .ne. 0 ) Then
         xmsg = 'error allocating gregdate,yrfrac'
         write(6,'(a)')xmsg
         Stop
      End If

      Open(file=OMI_FILE_LIST,status='old',newunit=io_files)
      Do j = 1, nfiles
         read(io_files,'(a)')file_name
!        get starting position of date in file name
         k = index(file_name,'OMI.ozone.', back=.true. ) + 10
         If( k .eq. 10 )Then
            k = index(file_name,'OMI.full.', back=.true. ) + 9
         End If
         If( k .eq. 9 )Then
            k = index(file_name,'L3e_ozone_omi_', back=.true. ) + 14
            TOMS_FORMAT = .True.
         End If
         read(file_name(k:k+7),*)idate(j)
      End Do

      If( TOMS_FORMAT )Then   ! reset nlatitude and nlongitude
         Open(file=file_name,status='old',newunit=io_file_init)
         Read(io_file_init,'(a)')file_line
         Read(io_file_init,'(12x,i6)')nlongitude
         Read(io_file_init,'(12x,i6)')nlatitude
         Allocate( oz_string( nlongitude ), stat=stat_allocate )
         Allocate( oz_toms( nlongitude ), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating oz_string,oz_toms'
            write(6,'(a)')xmsg
            Stop
         End If
         Close( io_file_init )
      End If

      Call Init_Arrays()

      do j = 1, nfiles
         gregdate(j) = real(idate(j),8)
         year  = int( gregdate(j) / 10000.d0 )
         month = int((gregdate(j) - (real(year,8))*10000.d0) / 100.d0)
         day   = int((gregdate(j) - (real(year,8))*10000.d0
     &         - (real(month,8))*100.d0 ) / 1.d0)
         call julian_date (year,month,day,julday,yrfrac)
         jdate( j ) = 1000 * year + julday
         If( j .Gt. 1 )Then   ! check for continuous and ascending dates
            delta_date = Delta_Julian( jdate( j ), jdate( j-1 ) )
            If( delta_date .gt. 1 )Then
               Print*,'Data gap from ',jdate( j-1 ),' to ', jdate( j )
            Else If( delta_date .eq. 0 )Then
               write(6,'(a,2(i1000,1x))')
     &           'Input file list has files with equal dates between lines:', j-1, j
               eflag = .true.
            Else If( delta_date .lt. 0 )Then
               write(6,'(a,2(i1000,1x))')
     &           'Input file list has files with decreasing dates between lines:', j-1, j
               eflag = .true.
            End If
         End If
         yrfrac_(j) = yrfrac + real(year,8)
      End Do

      If( eflag )Then
         write(6,'(a)')'Above errors found in OMI_FILE_LIST'
         Stop
      End If

!     set jdate_init
      if( mod(jdate( 1 ),1000) .gt. 1 )then
         jdate_init = jdate( 1 ) - 1
      else
         year  = idate(1)/10000 - 1
         day   = 31
         month = 12
         call julian_date (year,month,day,julday,yrfrac)
         jdate_init = 1000 * year + julday
      end if

      latstepsize = 180.0/real(nlatitude)
      lonstepsize = 360.0/real(nlongitude)
      init_lat = 0.5*real(latstepsize)
      init_lon = 0.5*real(lonstepsize)

      close(io_files)
      open(file=OMI_FILE_LIST,newunit=io_files)

      do i = 1, (nlatitude/2)
         lat_omi( i ) = ( 90.0 + init_lat ) - latstepsize*real(i)
         phi_omi( i ) = real(pi180*lat_omi(i), 8 )
         lat_omi( nlatitude - i + 1 ) = - lat_omi( i )
         phi_omi( nlatitude - i + 1 ) = - phi_omi( i )
      end do
!     do i = 1, nlatitude
!        print*,'lat_omi( i ), phi_omi( i ) = ',lat_omi( i ), phi_omi( i )
!     end do
!     print*,'lat_omi( 1 ), phi_omi( 1 ) = ',lat_omi( 1 ), phi_omi( 1 )

      do i = 1, nlongitude
         lon_omi( i ) = -180.0 - init_lon + lonstepsize*real(i)
      end do
      do i = 1, (nlongitude/2)
         theta_omi( i ) = real(pi180*(lon_omi( i ) + 360.0), 8)
         k = i + (nlongitude/2)
         theta_omi( k ) = real(pi180*lon_omi( k ), 8)
      end do
!     do i = 1, nlongitude
!        print*,'lon_omi( i ), theta_omi( i ) = ',lon_omi( i ), theta_omi( i ), lonstepsize
!     end do
!     print*,'lon_omi( 1 ), theta_omi( 1 ) = ',lon_omi( 1 ), theta_omi( 1 ), lonstepsize

      Do i = 1, nlatitude
         Do j = 1, nlongitude
            lat_ioapi( j, nlatitude-i+1 ) = lat_omi( i )
         End Do
      End Do

      call expand_init()

!     open( file = 'OMI_expand_14t16.dat', status = 'unknown', newunit = unit_expand )
!     write(unit_expand,'(a19,720f7.1)')' yeardate   lat  ',(lon_expand(j),j=1,720)

      call get_mean()

      do i = 1, nlatitude
         do k = 1, nlongitude
            if( oz_mean( i,k ) .ne. oz_mean( i,k ) )stop
            oz_ioapi( k, nlatitude - i + 1 ) = real( oz_mean( i,k ), 4 )
         end do
      end do
!     print*,'oz_ioapi:oz_mean max/min: ',maxval(oz_ioapi),'/',maxval(oz_mean),minval(oz_ioapi),
!    &       '/',minval(oz_mean)

      if( CREATE_FULL_FILES )then
         call CREATE_IOAPI_OMI( OMI_FILE_NCF, jdate_init, nlatitude, nlongitude )
!        call CREATE_EXTEND_OMI( EXTEN_FILE_NCF, jdate( 1 ) )
         IF ( .NOT. WRITE3( OMI_FILE_NCF, 'OZONE_COLUMN', JDATE_INIT, 0,
     &                      OZ_IOAPI ) ) THEN
            XMSG = 'Error writing variable OZONE_COLUMN'
            CALL M3EXIT ( 'RO3', JDATE_INIT, 0, XMSG, XSTAT1 )
         END IF
         cloud_fraction = -1.0
         IF ( .NOT. WRITE3( OMI_FILE_NCF, 'CLOUD_FRACT', JDATE_INIT, 0,
     &                      CLOUD_FRACTION ) ) THEN
            XMSG = 'Error writing variable CLOUD_FRACT'
            CALL M3EXIT ( 'RO3', JDATE_INIT, 0, XMSG, XSTAT1 )
         END IF
         o3_missing = -1.0
         IF ( .NOT. WRITE3( OMI_FILE_NCF, 'O3_MISSING', JDATE_INIT, 0,
     &                      O3_MISSING ) ) THEN
            XMSG = 'Error writing variable O3_MISSING'
            CALL M3EXIT ( 'RO3', JDATE_INIT, 0, XMSG, XSTAT1 )
         END IF
         IF ( .NOT. WRITE3( OMI_FILE_NCF, 'LATITUDE', JDATE_INIT, 0,
     &                      LAT_IOAPI ) ) THEN
            XMSG = 'Error writing variable LATITUDE'
            CALL M3EXIT ( 'RO3', JDATE_INIT, 0, XMSG, XSTAT1 )
         END IF
      end if

!     set initial previous values to mean from all files
      oz_prev = oz_mean

      jdate_next = jdate_init

      Loop_Omi_Files: do j = 1, nfiles

         read(io_files,'(a)')OMI_filename(j)
         write(6,'(a)')OMI_filename(j)

!        determine whether to read cloud fraction
         if( index(OMI_filename(j),'OMI.full.', back=.true. ) .gt. 0 )then
            read_clouds = .True.
         else
            read_clouds = .False.
         end if

         open(file=OMI_filename(j),status = 'old', newunit=io_file_init)

         oz  = 0.d0
         oz_ = 0.d0
         cloud_fraction = -1.0

         If( TOMS_FORMAT )Then
            Do i = 1, 3
               read(io_file_init,'(a)')file_line
            End Do
            Do i = nlatitude, 1, -1
               lat = init_lat*real( i ) - ( 90.0 + init_lat )
!              read(io_file_init,'(1x,25a3)')(oz_string(k),k = 1, nlongitude)
!              Do k = 1, nlongitude
!                 If( oz_string(k) .Eq. '***' )Then
!                    oz_toms(k) = 0.0
!                 Else
!                    read(oz_string(k),'(i3)')oz_toms(k)
!                 End If
!              End Do
               read(io_file_init,'(1x,25i3)')(oz_toms(k),k = 1, nlongitude)
               oz(i,1:nlongitude) = real( oz_toms(1:nlongitude),8)
!              write(6,'(25(i3,1x))')(int(oz(i,k)),k = 1, nlongitude)
!              if(i .ge. nlatitude -1 )write(6,555)yrfrac_(j),lat_omi(i),(( oz_toms(k) ),k=1,nlongitude)
            End Do
         Else
            read(io_file_init,*)
            do i = nlatitude,1,-1
               lat = 90.0 + init_lat - init_lat*real( i )
               read(io_file_init,*)rowheader,(oz(i,k),k=1,nlongitude)
            end do
            if( read_clouds )then
               do i = 1,nlatitude
                  read(io_file_init,*)rowheader,(cloud_fraction(k,i),k=1,nlongitude)
               end do
               where( cloud_fraction .lt. -1.0 ) cloud_fraction = -1.0
               where( cloud_fraction .gt.  1.0 ) cloud_fraction =  1.0
            end if
         End If
         Close( io_file_init )
!        pause

         where( oz .lt. 1.0d-3 ) oz = -1.0d0

!        fill in missing values with nearest neighbors
         if( LUSE_NEIGHBORS ) then
            call fill(phi_omi, theta_omi, oz, fill_limit)
         end if

!        replace values still missing with previous values
         do i = 1, nlatitude
            do k = 1, nlongitude
               if( oz(i,k) .le. 0.0d0 )then
                  if( LUSE_PREV_DATE )oz(i,k) = oz_prev(i,k)
                  o3_missing( k, nlatitude - i + 1 ) = 1.0
               else
                  o3_missing( k, nlatitude - i + 1 ) = -1.0
               end if
            end do
         end do

         do i = 1, nlatitude
            do k = 1, nlongitude
               oz_ioapi( k, nlatitude - i + 1 ) = real( oz( i,k ), 4 )
            end do
         end do

         write(6,'(5(a,f6.2,1x))') 'oz_ioapi:oz max:oz_ioapi:oz min/min ',
     &      maxval(oz_ioapi),'/',maxval(oz),':',minval(oz_ioapi),'/',minval(oz)

!        Test whether observation's date matches expected date
         Call Julian_plus_One( jdate_next )
         If( jdate_next .ne. JDATE( J ) )Then   ! correct expected date
            delta_date = Delta_julian( jdate_next, JDATE( J ) )
            OZ_ADJUST  = ( OZ_IOAPI - IOAPI_PREV )/REAL( delta_date + 1 )
            If( CREATE_FULL_FILES )Then   ! write out previous values
               Do ldate = 1, delta_date
                  IOAPI_PREV = OZ_ADJUST + IOAPI_PREV
                  IF ( .NOT. WRITE3( OMI_FILE_NCF, 'OZONE_COLUMN', jdate_next, 0,
     &                               IOAPI_PREV ) ) THEN
                     XMSG = 'Error writing variable OZONE_COLUMN'
                     CALL M3EXIT ( 'RO3', JDATE( J ), 0, XMSG, XSTAT1 )
                  END IF
                  IF ( .NOT. WRITE3( OMI_FILE_NCF, 'CLOUD_FRACT', jdate_next, 0,
     &                               IOAPI_BUFF ) ) THEN
                     XMSG = 'Error writing variable CLOUD_FRACT'
                     CALL M3EXIT ( 'RO3', JDATE_INIT, 0, XMSG, XSTAT1 )
                  END IF
                  IF ( .NOT. WRITE3( OMI_FILE_NCF, 'O3_MISSING', jdate_next, 0,
     &                               IOAPI_BUFF ) ) THEN
                     XMSG = 'Error writing variable O3_MISSING'
                     CALL M3EXIT ( 'RO3', JDATE_INIT, 0, XMSG, XSTAT1 )
                  END IF
                  IF ( .NOT. WRITE3( OMI_FILE_NCF, 'LATITUDE', jdate_next, 0,
     &                               LAT_IOAPI ) ) THEN
                     XMSG = 'Error writing variable LATITUDE'
                     CALL M3EXIT ( 'RO3', JDATE_INIT, 0, XMSG, XSTAT1 )
                  END IF
                  Call Julian_plus_One( jdate_next )
               End Do
            Else
               Do ldate = 1, delta_date
                  Call Julian_plus_One( jdate_next )
               End Do
            End If
         End If

         If( CREATE_FULL_FILES )Then
            IF ( .NOT. WRITE3( OMI_FILE_NCF, 'OZONE_COLUMN', JDATE( J ), 0,
     &                         OZ_IOAPI ) ) THEN
               XMSG = 'Error writing variable OZONE_COLUMN'
               CALL M3EXIT ( 'RO3', JDATE( J ), 0, XMSG, XSTAT1 )
            END IF
            IF ( .NOT. WRITE3( OMI_FILE_NCF, 'CLOUD_FRACT', JDATE( J ), 0,
     &                         CLOUD_FRACTION ) ) THEN
               XMSG = 'Error writing variable CLOUD_FRACT'
               CALL M3EXIT ( 'RO3', JDATE_INIT, 0, XMSG, XSTAT1 )
            END IF
            IF ( .NOT. WRITE3( OMI_FILE_NCF, 'O3_MISSING', JDATE( J ), 0,
     &                         O3_MISSING ) ) THEN
               XMSG = 'Error writing variable O3_MISSING'
               CALL M3EXIT ( 'RO3', JDATE_INIT, 0, XMSG, XSTAT1 )
            END IF
            IF ( .NOT. WRITE3( OMI_FILE_NCF, 'LATITUDE', JDATE( J ), 0,
     &                         LAT_IOAPI ) ) THEN
               XMSG = 'Error writing variable LATITUDE'
               CALL M3EXIT ( 'RO3', JDATE_INIT, 0, XMSG, XSTAT1 )
            END IF
         End If

         IOAPI_PREV = OZ_IOAPI
         jdate_prev = jdate( j )

         i_   = 0
         lat_ = 0.0d0
         lon_ = 0.0d0
         oz_  = 0.0d0
         do 490 i = 1,nlatitude
            i_ = i_ + 1
            lat_(i_) = real( lat_omi( i ),8 )   ! lat
            j_ = 0
            do 470 k = 1,nlongitude
               j_ = j_ + 1
               lon_(j_) = real( lon_omi( k ),8 )   ! lon
               oz_(i_,j_) = max( -1.0d0, oz(i,k) )
  470       continue
  490    continue
         i_max = i_
         j_max = j_

         call expand_grid

         do i = 1, 2*nlatitude-1
            do k = 1, 2*nlongitude
               oz_extend( k, 2*nlatitude - i ) = oz_expand( i,k )
            end do
         end do

!        IF ( .NOT. WRITE3( EXTEN_FILE_NCF, 'OZONE_COLUMN', JDATE( J ), 0,
!    &                      OZ_EXTEND ) ) THEN
!           XMSG = 'Error writing variable OZONE_COLUMN'
!           CALL M3EXIT ( 'RO3', JDATE( J ), 0, XMSG, XSTAT1 )
!        END IF

         If( CREATE_FULL_FILES )Then
            do i_ = 1, i_max
               if((j.eq.1).and.(i_.eq.1))then
                  write(io_full_dat,545)latstepsize,lonstepsize
                  write(io_full_dat,550)' yeardate   lat  ',((lon_(j_)),j_=1,j_max)
               endif
               write(io_full_dat,555)yrfrac_(j),lat_(i_),(idnint( oz_(i_,j_) ),j_=1,j_max)
            end do
         End If

!        call o3tot_cmaq ( yrfrac_(j), lat_omi, lon_omi, oz_ )
!        do i = 1,359
!           write(unit_expand,555)yrfrac_(j),lat_expand(i),
!    &         (idnint( oz_expand(i,k) ),k=1,720)
!        end do

         call extract_o3_cmaq ( jdate(j), yrfrac_(j), lat_expand, lon_expand, oz_expand )
!        call viz_o3totcol ( jdate(j) )

         oz_prev = oz

  890 End Do Loop_Omi_Files

  545 format(2(f7.3,1x))
! 550 format(7x,360f7.1)
  550 format(a19,2880f9.3)
! 555 format(f6.1,1x,360f7.0)
  555 format(f10.4,f9.3,2880i7)

!     write(12,*)date(j)
      close(io_files)
      if( CREATE_FULL_FILES )close(io_full_dat)
!     close(unit_expand)

  999 stop

      CONTAINS

      Subroutine Init_Arrays()
         Implicit None
         Allocate( oz(nlatitude,nlongitude), oz_(nlatitude,nlongitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating oz,oz_'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( oz_mean(nlatitude,nlongitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating oz_mean'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( oz_prev(nlatitude,nlongitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating oz_prev'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( oz_expand(2*nlatitude-1,2*nlongitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating oz_expand'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( lat_omi(nlatitude), lon_omi(nlongitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating lat_omi,lon_omi'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( phi_omi(nlatitude), theta_omi(nlongitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating phi_omi,theta_omi'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( lat_(nlatitude), lon_(nlongitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating lat_,lon_'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( oz_extend(2*nlongitude,2*nlatitude-1), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating oz_extend'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( oz_ioapi(nlongitude,nlatitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating oz_ioapi'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( ioapi_prev(nlongitude,nlatitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating ioapi_prev'
            write(6,'(a)')xmsg
            Stop
         End If
         ioapi_prev = 0.0
         Allocate( ioapi_buff(nlongitude,nlatitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating ioapi_buff'
            write(6,'(a)')xmsg
            Stop
         End If
         ioapi_buff = -1.0
         Allocate( oz_adjust(nlongitude,nlatitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating oz_adjust'
            write(6,'(a)')xmsg
            Stop
         End If
         oz_adjust = 0.0
         Allocate( cloud_fraction(nlongitude,nlatitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating cloud_fraction'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( o3_missing(nlongitude,nlatitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating o3_missing'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( lat_ioapi(nlongitude,nlatitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating lat_ioapi'
            write(6,'(a)')xmsg
            Stop
         End If
         Allocate( lat_expand(2*nlatitude), lon_expand(2*nlongitude), stat=stat_allocate )
         If ( stat_allocate .ne. 0 ) Then
            xmsg = 'error allocating lat_expand,lon_expand'
            write(6,'(a)')xmsg
            Stop
         End If
      End Subroutine Init_Arrays

      subroutine get_mean()
         Implicit None
         real(8), parameter :: zero_limit = 3.1416d0
         real(8), allocatable :: weigth(:,:)
!        local:
         integer :: nlat
         integer :: nlon
         integer :: n, iread
         integer :: iozone
         integer :: line_number
         integer :: ilon
         integer :: mod_read
         integer :: rem_read
         real( 8 ) :: oz_min

         nlat = size(lat_omi)
         nlon = size(lon_omi)
         allocate( weigth(nlat,nlon) )
         weigth  = 0.0d0
         oz_mean = 0.0d0
         mod_read = nlongitude/25
         rem_read = mod(nlongitude, 25)
         If( TOMS_FORMAT )oz_toms = 0
         rewind(io_files)
         do n = 1, nfiles
            read(io_files,'(a)')OMI_filename(n)
            write(6,'(a)')OMI_filename(n)
            open(file=OMI_filename(n),newunit = iozone)
            line_number = 0
            If( TOMS_FORMAT )Then
               Do i = 1, 3
                  line_number = line_number + 1
                  read(iozone,'(a)',end=9503)file_line
               End Do
               Do i = nlat, 1, -1
                  line_number = line_number + 1
                  ilon = 0
                  do iread = 1, mod_read
                     read(iozone,'(a)',err=9501,end=9503,advance='yes')file_line
                     read(file_line,'(1x,25i3)',err=9502)(oz_toms(ilon+k),k = 1, 25)
                     ilon = ilon + 25
                  end do
                  read(iozone,'(a)',err=9501,end=9503,advance='yes')file_line
                  read(file_line,'(1x,25i3)',err=9502)(oz_toms(ilon+k),k = 1, rem_read)
                  ilon = ilon + rem_read
                  oz(i,1:nlongitude) = real( oz_toms(1:nlongitude),8)
               End Do
            Else
               read(iozone,*)
               oz = 0.d0
               do i = nlat,1,-1
                  read(iozone,*)rowheader,(oz(i,k),k=1,nlon)
               end do
            End If
            close(iozone)
            print*,'maxval(oz) = ', maxval(oz)
            do i = 1, nlat
               do j = 1, nlon
                  if( oz(i,j) .le. 1.0d-3 )cycle
                  weigth(i,j)  = 1.0d0 + weigth(i,j)
                  oz_mean(i,j) = oz(i,j) + oz_mean(i,j)
               end do
            end do
         end do

         oz_min = 1.0d8
         do i = 1, nlat
            do j = 1, nlon
               if( weigth(i,j) .le. 0.0d0 )cycle
               oz_mean(i,j) = oz_mean(i,j) / weigth(i,j)
               if( oz_mean(i,j) .Gt. 1 .And. oz_mean(i,j) .Lt. oz_min )oz_min = oz_mean(i,j)
            end do
         end do
         where( oz_mean .lt. oz_min ) oz_mean = oz_min

!        do i = 1, nlat
!           write(6,'(25(i3,1x))')(int(oz_mean(i,j)),j = 1, nlon)
!        end do
!        print*,'For mean, sum(weigth):maxval(weigth) = ',sum(weigth),":",maxval(weigth)

         rewind(io_files)

!        fill in missing values with nearest neighbors
         if( near_neighbor ) then
            call fill(phi_omi, theta_omi, oz_mean, fill_limit)
         end if
         deallocate( weigth )
         return

 9501    write(6,'(2a)')'Error reading file: ',Trim( OMI_filename(n) )
         write(6,'(a,i7)')'at line number: ',line_number
         Stop
 9502    write(6,'(2a)')'Error reading file: ',Trim( OMI_filename(n) )
         write(6,'(a,i7)')'Cannot read data at line number:',line_number
         write(6,'(a)')Trim(file_line)
         Stop
 9503    write(6,'(2a)')'Premature File End in ',Trim( OMI_filename(n) )
         write(6,'(a,i7)')'at line number: ',line_number-1
         print*,'Last line read: ',Trim(file_line)
         write(6,'(a,i7)')'Expected number of lines: ',
     &      nlatitude*int(nlongitude/25)+nlatitude+3
         Stop
      end subroutine get_mean

      subroutine expand_init()
         Implicit None
         lat_expand = 0.0
         icount = 0
         do i = 1, ( (size( lat_expand ) + 1)/2 - 1 )   ! 179
            icount = icount + 1
            lat_expand( icount ) = lat_omi( i )
            icount = icount + 1
            lat_expand( icount ) = 0.5*(lat_omi( i ) + lat_omi( i+1 ))
         end do
         icount = icount + 1
         lat_expand( icount ) = lat_omi( i )

         lon_expand = 0.0
         icount = 0
         do i = 1, ( size( lon_expand )/2 - 1 )   ! 359
            icount = icount + 1
            lon_expand( icount ) = lon_omi( i )
            icount = icount + 1
            lon_expand( icount ) = 0.5*(lon_omi( i ) + lon_omi( i+1 ))
         end do
         icount = icount + 1
         lon_expand( icount ) = lon_omi( nlongitude )
         icount = icount + 1
         lon_expand( icount ) = 0.5*(lon_omi( nlongitude ) + lon_omi( 1 )) + 180.0
      end subroutine expand_init

      subroutine expand_grid()
         Implicit None
         icount = 0
         oz_expand = -1.0d0
         ipass = 1
         do i = 1, nlatitude
            ip1 = i + 1
            icount = icount + 1
            jcount = 0
            do k = 1, nlongitude
               jcount = jcount + 1
               kp1 = max(mod(k+1,nlongitude), 1)
               oz_expand(icount,jcount) = oz(i,k)
               w = 0.5d0
               if( oz(i,k)   .lt. 0.0d0 ) w(1) = 0.0d0
               if( oz(i,kp1) .lt. 0.0d0 ) w(2) = 0.0d0
               if( sum( w ) .gt. 1.0d-4 )then
                  oz_expand(icount,jcount+1) = (w(1)*oz(i,k)+w(2)*oz(i,kp1))
     &                                       / sum( w )
               end if
               if( ip1 .gt. nlatitude )cycle
               w = 0.5d0
               if( oz(ip1,k) .lt. 0.0d0 )w(1) = 0.0d0
               if( oz(i,k)   .lt. 0.0d0 )w(2) = 0.0d0
               if( sum( w ) .gt. 1.0d-4 )then
                  oz_expand(icount+1,jcount) = (w(1)*oz(ip1,k)+w(2)*oz(i,k))
     &                                       / sum( w )
               end if
               v = 0.25d0
               if( oz(i  ,k  ) .lt. 0.0d0 ) v(1) = 0.0d0
               if( oz(ip1,k  ) .lt. 0.0d0 ) v(2) = 0.0d0
               if( oz(i  ,kp1) .lt. 0.0d0 ) v(3) = 0.0d0
               if( oz(ip1,kp1) .lt. 0.0d0 ) v(4) = 0.0d0
               if( sum( v ) .gt. 1.0d-4 )then
                  oz_expand(icount+1,jcount+1) = (v(1)*oz(i,k)+v(2)*oz(ip1,k)
     &                                         +  v(3)*oz(i,kp1)+v(4)*oz(ip1,kp1))
     &                                         / sum( v )
               end if
               jcount = jcount + 1
            end do
            icount = icount + 1
         end do

  555    format(f10.4,f7.1,720i7)
  650    format(a19,720f7.1)
      end subroutine expand_grid

      End Program Omi
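The date-gap handling inside Loop_Omi_Files linearly interpolates the ozone field between the last written day and the next available observation; OZ_ADJUST is the constant per-day increment. A minimal sketch of that arithmetic, assuming plain numpy arrays rather than the IOAPI machinery:

import numpy as np

def interpolate_gap(prev_field, next_field, gap_days):
    # Yield one field per missing day, stepping linearly from prev_field
    # toward next_field (mirrors OZ_ADJUST = (next - prev)/(gap + 1)).
    step = (next_field - prev_field) / (gap_days + 1)
    field = prev_field.copy()
    for _ in range(gap_days):
        field = field + step
        yield field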
{"hexsha": "8179450b2eec1fb9e155175f624e0fa5d627cff3", "size": 30047, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "PREP/create_omi/src/driver.f", "max_stars_repo_name": "Simeng-unique/CMAQ-changed", "max_stars_repo_head_hexsha": "cb83401728ed7ea1bb19a6986c0acc84dabe11a4", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 203, "max_stars_repo_stars_event_min_datetime": "2017-02-04T18:01:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T09:09:00.000Z", "max_issues_repo_path": "PREP/create_omi/src/driver.f", "max_issues_repo_name": "Simeng-unique/CMAQ-changed", "max_issues_repo_head_hexsha": "cb83401728ed7ea1bb19a6986c0acc84dabe11a4", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 54, "max_issues_repo_issues_event_min_datetime": "2017-01-03T21:40:27.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-04T19:03:53.000Z", "max_forks_repo_path": "PREP/create_omi/src/driver.f", "max_forks_repo_name": "Simeng-unique/CMAQ-changed", "max_forks_repo_head_hexsha": "cb83401728ed7ea1bb19a6986c0acc84dabe11a4", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 170, "max_forks_repo_forks_event_min_datetime": "2016-11-09T22:30:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T03:21:59.000Z", "avg_line_length": 37.7001254705, "max_line_length": 107, "alphanum_fraction": 0.4850068226, "num_tokens": 8875}
import tinyflow as tf
import numpy as np


def test_add_grad():
    x = tf.placeholder(tf.float32)
    y = tf.placeholder(tf.float32)
    ax = np.ones((2, 3))
    ay = np.ones((2, 3)) * 4
    z = x + y
    gx, gy = tf.gradients(z, [x, y])
    sess = tf.Session()
    agx = sess.run(gx, feed_dict={x: ax, y: ay})
    np.testing.assert_almost_equal(agx, np.ones((2, 3)))


def test_mul_grad():
    x = tf.placeholder(tf.float32)
    ax = np.ones((2, 3))
    z = x * 14
    gx = tf.gradients(z, [x])[0]
    sess = tf.Session()
    agx = sess.run(gx, feed_dict={x: ax})
    np.testing.assert_almost_equal(agx, np.ones((2, 3)) * 14)


def test_sum_grad():
    x = tf.placeholder(tf.float32)
    ax = np.ones((2, 3))
    z = -tf.reduce_sum(x) * 14
    gx = tf.gradients(z, [x])[0]
    sess = tf.Session()
    agx = sess.run(gx, feed_dict={x: ax})
    np.testing.assert_almost_equal(agx, -np.ones((2, 3)) * 14)


def test_mean_grad():
    x = tf.placeholder(tf.float32)
    ax = np.ones((2, 3))
    z = -tf.reduce_mean(x) * 14
    gx = tf.gradients(z, [x])[0]
    sess = tf.Session()
    agx = sess.run(gx, feed_dict={x: ax})
    np.testing.assert_almost_equal(agx, -np.ones((2, 3)) * 14 / 6.0)


def test_matmul_grad():
    x = tf.placeholder(tf.float32)
    y = tf.placeholder(tf.float32)
    ax = np.ones((2, 3))
    ay = np.ones((3, 4)) * 4
    z = tf.matmul(x, y) * 4
    gx, gy = tf.gradients(z, [x, y])
    sess = tf.Session()
    agx = sess.run(gx, feed_dict={x: ax, y: ay})
    agy = sess.run(gy, feed_dict={x: ax, y: ay})
    np.testing.assert_almost_equal(
        agx, np.dot(np.ones((2, 4)), ay.T) * 4)
    np.testing.assert_almost_equal(
        agy, np.dot(ax.T, np.ones((2, 4))) * 4)


if __name__ == "__main__":
    test_mean_grad()
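The matmul test relies on the standard backpropagation identities dZ/dX = G Y^T and dZ/dY = X^T G, where G is the upstream gradient. A quick numpy check of those identities, independent of tinyflow (array values here are illustrative):

import numpy as np

X = np.random.rand(2, 3)
Y = np.random.rand(3, 4)
G = np.ones((2, 4)) * 4   # upstream gradient of z = matmul(X, Y) * 4

gX = G @ Y.T              # matches agx in test_matmul_grad
gY = X.T @ G              # matches agy in test_matmul_grad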
{"hexsha": "e577768b39d97a7b93c5b772187c1062fc178666", "size": 1748, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/python/test_gradients.py", "max_stars_repo_name": "irvingzhang0512/tinyflow", "max_stars_repo_head_hexsha": "92abe0cd43ad8649f306bdfd2a4e870dedfb810a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2035, "max_stars_repo_stars_event_min_datetime": "2016-09-30T04:17:41.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T07:02:48.000Z", "max_issues_repo_path": "tests/python/test_gradients.py", "max_issues_repo_name": "ChengduoZhao/tinyflow", "max_issues_repo_head_hexsha": "8ecffd8b473a99a0229b477285445ed660144d09", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2016-09-30T04:57:05.000Z", "max_issues_repo_issues_event_max_datetime": "2019-02-21T03:05:33.000Z", "max_forks_repo_path": "tests/python/test_gradients.py", "max_forks_repo_name": "ChengduoZhao/tinyflow", "max_forks_repo_head_hexsha": "8ecffd8b473a99a0229b477285445ed660144d09", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 390, "max_forks_repo_forks_event_min_datetime": "2016-09-30T04:42:50.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-28T03:58:28.000Z", "avg_line_length": 27.746031746, "max_line_length": 67, "alphanum_fraction": 0.5720823799, "include": true, "reason": "import numpy", "num_tokens": 601}
module NLmodel

using JuMP
#using AmplNLWriter,
using Ipopt
#using CoinOptServices

function runModel(nodes, measuredNodeStateFull, LB, UB, expression, verbose)
    model = Model(with_optimizer(Ipopt.Optimizer, print_level=0))

    weightRoot = 500
    weightMeasured = 10000000
    weightHard = 10000

    nodesList = collect(keys(nodes))
    measuredNodeState = filter((args)->indexin([args[1]], nodesList) != [0],
                               measuredNodeStateFull)

    @variable(model, LB <= x[1:length(nodesList)] <= UB, start = 1)
    @variable(model, LB <= x_bar[1:length(nodesList)] <= UB, start = 1)
    @variable(model, LB <= p[1:length(nodesList)] <= UB)
    @variable(model, LB <= n[1:length(nodesList)] <= UB)

    numberMeasuredNodes = length(keys(measuredNodeState))
    if numberMeasuredNodes > 0
        @variable(model, m[1:numberMeasuredNodes])
    end

    rootIdxs = []
    variableIdxs = []
    measuredIdxs = indexin(collect(keys(measuredNodeState)), nodesList)

    for nodeName in keys(nodes)
        nodeIdxs = indexin([nodeName], nodesList)
        nodeIndex = nodeIdxs[1]

        measured = false
        j = 0
        for node in collect(keys(measuredNodeState))
            j = j + 1
            if measuredIdxs[j] == nodeIndex && nodes[nodeName].relation == "ROOT"
                measured = true
                rhs = measuredNodeState[node] < LB ? LB : measuredNodeState[node]
                @constraint(model, m[j] == rhs)
                break
            end
        end

        if nodes[nodeName].relation == "ROOT"
            if measured
                @constraint(model, p[nodeIndex] >= x[nodeIndex] - m[j])
                @constraint(model, n[nodeIndex] >= m[j] - x[nodeIndex])
            else
                @constraint(model, p[nodeIndex] >= x[nodeIndex] - 1)
                @constraint(model, n[nodeIndex] >= 1 - x[nodeIndex])
                push!(rootIdxs, nodeIndex)
            end
        else
            if measured
                @constraint(model, p[nodeIndex] >= x[nodeIndex] - m[j])
                @constraint(model, n[nodeIndex] >= m[j] - x[nodeIndex])
            else
                @constraint(model, p[nodeIndex] >= x[nodeIndex] - x_bar[nodeIndex])
                @constraint(model, n[nodeIndex] >= x_bar[nodeIndex] - x[nodeIndex])
                push!(variableIdxs, nodeIndex)
            end

            if nodes[nodeName].relation == "AND"
                parentIndexes = indexin(nodes[nodeName].parents, nodesList)
                if length(parentIndexes) == 1
                    @constraint(model, x[parentIndexes[1]] == x_bar[nodeIndex])
                else
                    @NLconstraint(model, x[parentIndexes[1]] * x[parentIndexes[2]] == x_bar[nodeIndex])
                end
            elseif nodes[nodeName].relation == "NEG"
                parentIndexes = indexin(nodes[nodeName].parents, nodesList)
                @NLconstraint(model, 1 / x[parentIndexes[1]] == x_bar[nodeIndex])
            elseif nodes[nodeName].relation == "ANDNEG"
                parentIndexes = indexin(nodes[nodeName].parents, nodesList)
                @NLconstraint(model, x[parentIndexes[1]] / x[parentIndexes[2]] == x_bar[nodeIndex])
            elseif nodes[nodeName].relation == "OR"
                posParentIdxs = indexin(nodes[nodeName].posParents, nodesList)
                count_expression = 0
                total_expression = 0
                for parent in nodes[nodeName].posParents
                    if haskey(expression, parent) && expression[parent] != ""
                        total_expression += parse(Float64, expression[parent])
                        count_expression += 1
                        ev = expression[parent]
                        if verbose
                            println("Parent node: $parent\tExpression value: $ev")
                        end
                    end
                end
                if verbose
                    println("")
                end
                if count_expression == 0
                    @constraint(model,
                        sum(x[posParentIdxs[a]] for a = 1:length(posParentIdxs)) / length(posParentIdxs)
                        == x_bar[nodeIndex])
                else
                    average_expression = total_expression / count_expression
                    evs = []
                    for parent in nodes[nodeName].posParents
                        if haskey(expression, parent) && expression[parent] != ""
                            push!(evs, parse(Float64, expression[parent]))
                        else
                            push!(evs, average_expression)
                        end
                    end
                    total_expression = sum(evs)
                    @constraint(model,
                        sum(evs[a] * x[posParentIdxs[a]] for a = 1:length(posParentIdxs)) / total_expression
                        == x_bar[nodeIndex])
                end
            end
        end
    end

    if !isempty(measuredIdxs)
        measuredIdxs = measuredIdxs[measuredIdxs .!= nothing]
    end

    @NLobjective(model, Min,
        weightHard * sum(p[variableIdxs[i]] + n[variableIdxs[i]] for i = 1:length(variableIdxs))^2 +
        weightMeasured * sum(p[measuredIdxs[j]] + n[measuredIdxs[j]] for j = 1:length(measuredIdxs))^2 +
        weightRoot * sum(p[rootIdxs[k]] + n[rootIdxs[k]] for k = 1:length(rootIdxs))^2)

    if verbose
        println("Solving Model")
        println(model)
    end

    optimize!(model)

    if verbose
        println("Objective value: ", JuMP.objective_value(model))
    end

    results = Dict()
    for i in eachindex(nodesList)
        results[nodesList[i]] = JuMP.value(x[i])
    end

    x_values = Dict()
    x_bar_values = Dict()
    for i in eachindex(nodesList)
        x_values[nodesList[i]] = JuMP.value(x[i])
        x_bar_values[nodesList[i]] = JuMP.value(x_bar[i])
    end

    return [results, x_values, x_bar_values]
end

end
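The p/n variable pairs in runModel are the standard linearization of an absolute deviation: with p >= x - x_bar, n >= x_bar - x and both nonnegative, the minimal feasible choice makes p + n equal |x - x_bar|, which is what the objective penalizes. A tiny Python illustration of the identity the constraints encode (not part of the Julia module):

def abs_via_slacks(x, x_bar):
    # Minimal feasible slacks: p = max(x - x_bar, 0), n = max(x_bar - x, 0),
    # so p + n reproduces |x - x_bar|.
    p = max(x - x_bar, 0.0)
    n = max(x_bar - x, 0.0)
    return p + n

assert abs_via_slacks(2.0, 5.0) == 3.0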
{"hexsha": "d3d55a1df075059807a1b461b64fd69cf7877ab8", "size": 5910, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/nlmodel.jl", "max_stars_repo_name": "OICR/mp-biopath", "max_stars_repo_head_hexsha": "3da9fc6e4ce7b3dd0ca184e61d58fab2f63940b9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-04-03T19:41:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T21:14:15.000Z", "max_issues_repo_path": "src/nlmodel.jl", "max_issues_repo_name": "OICR/mp-biopath", "max_issues_repo_head_hexsha": "3da9fc6e4ce7b3dd0ca184e61d58fab2f63940b9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 13, "max_issues_repo_issues_event_min_datetime": "2017-04-12T14:04:02.000Z", "max_issues_repo_issues_event_max_datetime": "2018-01-23T21:22:08.000Z", "max_forks_repo_path": "src/nlmodel.jl", "max_forks_repo_name": "OICR/mp-biopath", "max_forks_repo_head_hexsha": "3da9fc6e4ce7b3dd0ca184e61d58fab2f63940b9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.8846153846, "max_line_length": 145, "alphanum_fraction": 0.5461928934, "num_tokens": 1366}
import numpy as np


class LogisticRegression(object):
    """Single-class multivariate logistic regression model using gradient descent.

    Inputs are expected with shape (features, samples); a bias row of ones is
    appended internally, so the last theta entry acts as the intercept.
    """

    def __init__(self):
        pass

    def train(self, x, y, epochs=10, learning_rate=0.0001):
        x = self._add_bias(x)
        # One parameter per feature row, plus the bias row appended above.
        self.theta_array = np.zeros(x.shape[0])
        for _ in range(epochs):
            avg_minibatch_partial_grads = np.average(
                (self._sigmoid(x, self.theta_array) - y) * x, axis=1)
            self.theta_array -= learning_rate * avg_minibatch_partial_grads

    def validate(self, x, y):
        self._check_theta_exists('validating')
        x = self._add_bias(x)
        predicted_y = self._sigmoid(x, self.theta_array)
        # Root Mean Square Error (RMSE) of the predicted probabilities
        rmse = np.sqrt(np.average(np.square(y - predicted_y)))
        return predicted_y, rmse

    def predict(self, x):
        self._check_theta_exists('predicting')
        x = self._add_bias(x)
        predicted_y = self._sigmoid(x, self.theta_array)
        return predicted_y

    def _add_bias(self, x):
        # Append a row of ones so the model learns an intercept term.
        if x.ndim == 1:
            x = np.row_stack((x, np.ones(len(x))))
        else:
            x = np.row_stack((x, np.ones(len(x[0]))))
        return x

    def _sigmoid(self, x, theta_array):
        sigmoid = 1 / (1 + np.exp(-np.dot(x.transpose(), theta_array)))
        return sigmoid

    def _avg_minibatch_loss(self, x, theta_array, y):
        avg_minibatch_loss = np.sqrt(
            np.average(
                np.square(
                    x.transpose().dot(theta_array) - y)))
        return avg_minibatch_loss

    def _check_theta_exists(self, phrase):
        assert hasattr(self, 'theta_array'), (
            "ValueError: theta is not defined. "
            "Please make sure to train the model before {}.".format(phrase))
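A quick usage sketch on synthetic data, following the class's features-by-samples layout (the data below is made up purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 200))           # 2 features, 200 samples
y = (x[0] + x[1] > 0).astype(float)     # linearly separable labels

model = LogisticRegression()
model.train(x, y, epochs=1000, learning_rate=0.1)
probs = model.predict(x)                # sigmoid outputs in (0, 1)
print(((probs > 0.5) == y).mean())      # training accuracy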
{"hexsha": "9b7ced15387d3a452f2af7a453ebf4217ece594c", "size": 1845, "ext": "py", "lang": "Python", "max_stars_repo_path": "mlscratch/logistic_regression.py", "max_stars_repo_name": "BoPengGit/Machine-Learning-from-Scratch", "max_stars_repo_head_hexsha": "339c74f4e5e0dfb49cf355e9ca013fca1fd5b024", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "mlscratch/logistic_regression.py", "max_issues_repo_name": "BoPengGit/Machine-Learning-from-Scratch", "max_issues_repo_head_hexsha": "339c74f4e5e0dfb49cf355e9ca013fca1fd5b024", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mlscratch/logistic_regression.py", "max_forks_repo_name": "BoPengGit/Machine-Learning-from-Scratch", "max_forks_repo_head_hexsha": "339c74f4e5e0dfb49cf355e9ca013fca1fd5b024", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.2711864407, "max_line_length": 95, "alphanum_fraction": 0.5945799458, "include": true, "reason": "import numpy", "num_tokens": 442}
from __future__ import absolute_import
from __future__ import print_function

import os
import itertools

import numpy as np
np.random.seed(1337)  # for reproducibility

from kerosene.datasets import cifar100
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras.optimizers import SGD, Adadelta, Adagrad
import keras.backend as K
K.set_image_dim_ordering('th')

'''
    Train a convnet on the CIFAR100 dataset.

    Like CIFAR10, CIFAR100 is also a labeled subset of the
    ``80 million tiny images'' dataset, but this version has 100 target
    classes, each of which belongs to one of 20 superclasses.

    http://www.cs.toronto.edu/~kriz/cifar.html

    Antonio Torralba, Rob Fergus and William T. Freeman,
    *80 million tiny images: a large dataset for non-parametric object and
    scene recognition*, Pattern Analysis and Machine Intelligence,
    IEEE Transactions on 30.11 (2008): 1958-1970.

    Alex Krizhevsky, *Learning Multiple Layers of Features from Tiny Images*,
    technical report, 2009.

    This could be an interesting use of Keras graph capabilities, with data
    going to two different softmax classifiers. Instead, here we run twice
    and use transfer learning from the first model.

    This version can get to 53.14% test accuracy after 12 epochs. The
    fine_labels version then gets 44.02% test accuracy (with 100 classes!)
    following on with another 12 epochs.

    23 seconds per epoch on a GeForce GTX 680 GPU.
'''

batch_size = 128
# building the model with too many classes initially, for simpler transfer learning afterwards
nb_classes = 100
nb_epoch = 12

(X_train, y_train), (X_test, y_test) = cifar100.load_data()

# print shape of data while model is building
print("{1} train samples, {2} channel{0}, {3}x{4}".format("" if X_train.shape[1] == 1 else "s", *X_train.shape))
print("{1} test samples, {2} channel{0}, {3}x{4}".format("" if X_test.shape[1] == 1 else "s", *X_test.shape))

# input image dimensions
_, img_channels, img_rows, img_cols = X_train.shape

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()

model.add(Convolution2D(32, 3, 3, border_mode='same',
                        input_shape=(img_channels, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

# let's train the model using SGD + momentum (how original).
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=1, validation_data=(X_test, Y_test))
# with metrics=['accuracy'] at compile time, evaluate returns [loss, accuracy]
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])

# a hackish transfer learning scenario - now use different labels
(z_train,), (z_test,) = cifar100.load_data(sources=['fine_labels'])

# convert class vectors to binary class matrices
Z_train = np_utils.to_categorical(z_train, nb_classes)
Z_test = np_utils.to_categorical(z_test, nb_classes)

model.fit(X_train, Z_train, batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=1, validation_data=(X_test, Z_test))
score = model.evaluate(X_test, Z_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
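The script above transfers by simply refitting all weights on the fine labels. A common variation, sketched here as an assumption under the same Keras 1.x Sequential API and not something the script itself does, is to freeze the convolutional base and retrain only the dense head:

# Hypothetical variation: freeze everything up to and including Flatten so
# only the dense classifier head is updated during the fine-label fit.
for layer in model.layers[:-5]:
    layer.trainable = False
model.compile(loss='categorical_crossentropy', optimizer=sgd,
              metrics=['accuracy'])
model.fit(X_train, Z_train, batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=1, validation_data=(X_test, Z_test))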
{"hexsha": "70561478757068ab145ee6a4d89e2c963a1353b3", "size": 4023, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/cifar100.py", "max_stars_repo_name": "dribnet/kerosene", "max_stars_repo_head_hexsha": "f641710071c603ce46abb0f66a7a176fc832f612", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 35, "max_stars_repo_stars_event_min_datetime": "2015-08-12T09:30:47.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-16T05:22:07.000Z", "max_issues_repo_path": "examples/cifar100.py", "max_issues_repo_name": "dribnet/kerosene", "max_issues_repo_head_hexsha": "f641710071c603ce46abb0f66a7a176fc832f612", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 14, "max_issues_repo_issues_event_min_datetime": "2016-02-09T19:43:10.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-12T04:55:14.000Z", "max_forks_repo_path": "examples/cifar100.py", "max_forks_repo_name": "dribnet/kerosene", "max_forks_repo_head_hexsha": "f641710071c603ce46abb0f66a7a176fc832f612", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2016-02-16T07:30:35.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-25T14:30:04.000Z", "avg_line_length": 37.5981308411, "max_line_length": 114, "alphanum_fraction": 0.7462092965, "include": true, "reason": "import numpy", "num_tokens": 1062}
[STATEMENT]
lemma upper_mult_float_interval:
  "upper (mult_float_interval p x y) =
     snd (bnds_mult p (lower x) (upper x) (lower y) (upper y))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. upper (mult_float_interval p x y) = snd (bnds_mult p (lower x) (upper x) (lower y) (upper y))
[PROOF STEP]
by transfer auto
{"llama_tokens": 128, "file": null, "length": 1}
import unittest
import os

import pandas as pd
from pyStarDB import sp_pystardb as pystar
import numpy as np

# print to just check


class MyTestCase(unittest.TestCase):

    def test_file_is_written_loop_notag(self):
        try:
            os.remove("name.star")
        except FileNotFoundError:
            pass
        a = pd.DataFrame([[0, 1], [2, 3]], columns=['col1', 'col2'])
        b = pystar.StarFile('name.star')
        b.update('', a, True)
        b.write_star_file()
        exists = os.path.exists('name.star')
        os.remove("name.star")
        self.assertTrue(exists, "File (loop) was not written")

    def test_file_is_written_no_loop_notag(self):
        try:
            os.remove("name.star")
        except FileNotFoundError:
            pass
        a = pd.DataFrame([[0, 1], [2, 3]], columns=['col1', 'col2'])
        b = pystar.StarFile('name.star')
        b.update('', a, False)
        b.write_star_file()
        exists = os.path.exists('name.star')
        try:
            os.remove("name.star")
        except FileNotFoundError:
            pass
        self.assertTrue(exists, "File (no-loop) was not written")

    def test_create_and_read(self):
        try:
            os.remove("name.star")
        except FileNotFoundError:
            pass
        a = pd.DataFrame([[0, 1], [2, 3]], columns=['_col1', '_col2'])
        b = pystar.StarFile('name.star')
        b.update('', a, True)
        b.write_star_file()
        c = pystar.StarFile('name.star')
        is_equal_col1 = a['_col1'].equals(c['']['_col1'])
        is_equal_col2 = a['_col2'].equals(c['']['_col2'])
        try:
            os.remove("name.star")
        except FileNotFoundError:
            pass
        self.assertTrue(is_equal_col1 and is_equal_col2, "Write / Read test failed")

    # Test to fix the tag bug
    def test_create_and_read_tag(self):
        try:
            os.remove("name.star")
        except FileNotFoundError:
            pass
        a = pd.DataFrame([[0, 1], [2, 3]], columns=['_col1', '_col2'])
        b = pystar.StarFile('name.star')
        b.update('my_tag', a, True)
        b.write_star_file()
        c = pystar.StarFile('name.star')
        is_equal_col1 = a['_col1'].equals(c['my_tag']['_col1'])
        is_equal_col2 = a['_col2'].equals(c['my_tag']['_col2'])
        try:
            os.remove("name.star")
        except FileNotFoundError:
            pass
        self.assertTrue(is_equal_col1 and is_equal_col2, "Write / Read test failed")

    def test_create_and_read_tag_multitag(self):
        fname = "name.star"
        try:
            os.remove(fname)
        except FileNotFoundError:
            pass
        a = pd.DataFrame([[0, 1], [2, 3]], columns=['_col1', '_col2'])
        a2 = pd.DataFrame([[4, 5], [6, 7]], columns=['_col1', '_col2'])
        b = pystar.StarFile(fname)
        b.update('my_tag', a, True)
        b.update('my_tag_2', a2, True)
        b.write_star_file()
        c = pystar.StarFile(fname)
        is_equal_col1_mytag = a['_col1'].equals(c['my_tag']['_col1'])
        is_equal_col2_mytag = a['_col2'].equals(c['my_tag']['_col2'])
        is_equal_col1_mytag2 = a2['_col1'].equals(c['my_tag_2']['_col1'])
        is_equal_col2_mytag2 = a2['_col2'].equals(c['my_tag_2']['_col2'])
        all_is_equal = (is_equal_col1_mytag and is_equal_col2_mytag
                        and is_equal_col1_mytag2 and is_equal_col2_mytag2)
        try:
            os.remove(fname)
        except FileNotFoundError:
            pass
        self.assertTrue(all_is_equal, "Write / Read test failed")

    def test_linespacing_after_header(self):
        a = pd.DataFrame([[0, 1], [2, 3], [2, 3]], columns=['_col1', '_col2'])
        a2 = pd.DataFrame([[4, 5], [6, 7], [3, 3]], columns=['_col1', '_col2'])
        b = pystar.StarFile('name.star')
        b.update('my_tag', a, True)
        b.update('my_tag_2', a2, True)
        starpath = os.path.join(os.path.dirname(__file__), '../resources/name_space.star')
        c = pystar.StarFile(starpath)
        is_equal_col1_mytag = a['_col1'].equals(c['my_tag']['_col1'])
        is_equal_col2_mytag = a['_col2'].equals(c['my_tag']['_col2'])
        is_equal_col1_mytag2 = a2['_col1'].equals(c['my_tag_2']['_col1'])
        is_equal_col2_mytag2 = a2['_col2'].equals(c['my_tag_2']['_col2'])
        all_is_equal = (is_equal_col1_mytag and is_equal_col2_mytag
                        and is_equal_col1_mytag2 and is_equal_col2_mytag2)
        self.assertTrue(all_is_equal, "Write / Read test failed")

    def test_wrong_file_provided(self):
        with self.assertRaises(TypeError) as cm:
            starpath = os.path.join(os.path.dirname(__file__),
                                    '../resources/TcdA1-0010_frames_sum.cbox')
            a = pystar.StarFile(starpath)
        self.assertEqual(cm.exception.code, 1)

    def test_zero_number_of_columns(self):
        with self.assertRaises(TypeError) as cm:
            starpath = os.path.join(os.path.dirname(__file__),
                                    '../resources/ActinLifeAct_00072_zerocol.star')
            a = pystar.StarFile(starpath)
        self.assertEqual(str(cm.exception),
                         "Unable to grab the header information and column information")

    def test_throw_exception(self):
        starpath = os.path.join(os.path.dirname(__file__),
                                '../resources/TcdA1-0010_frames_sum.cbox')
        got_exception = False
        try:
            a = pystar.StarFile(starpath)
        except TypeError:
            got_exception = True
        self.assertTrue(got_exception)

    def test_write_non_loop(self):
        fname = "name.star"
        try:
            os.remove(fname)
        except FileNotFoundError:
            pass
        a = pd.DataFrame([[0, 1], [2, 3]], columns=['_col1', '_col2'])
        b = pystar.StarFile(fname)
        b.update('my_tag', a, True)
        version_df = pd.DataFrame([["1.0"]], columns=['_cbox_format_version'])
        b.update('global', version_df, False)
        b.write_star_file()
        c = pystar.StarFile(fname)
        global_is_written = ('global' in c) and ('my_tag' in c)
        try:
            os.remove(fname)
        except FileNotFoundError:
            pass
        self.assertTrue(global_is_written,
                        "'global' and 'my_tag' tags should both be present")

    def test_data_no_copy(self):
        fname = "name.star"
        try:
            os.remove(fname)
        except FileNotFoundError:
            pass
        b = pystar.StarFile(fname)
        a = pd.DataFrame([[0, 1], [2, 3]], columns=['_col1', '_col2'])
        b.update('my_tag', a, True)
        version_df = pd.DataFrame([["1.0"]], columns=['_cbox_format_version'])
        b.update('global', version_df, False)
        b.write_star_file()

        c = pystar.StarFile(fname)
        a = pd.DataFrame([[0, 1], [2, 3]], columns=['_col1', '_col2'])
        c.update('my_tag', a, True)
        version_df = pd.DataFrame([["1.0"]], columns=['_cbox_format_version'])
        c.update('global', version_df, False)
        c.write_star_file(overwrite=True, tags=["global", "my_tag"])

        col_1_counter = 0
        col_2_counter = 0
        with open(fname) as f:
            lines = f.readlines()
            for line in lines:
                if '_col1' in line:
                    col_1_counter = col_1_counter + 1
                if '_col2' in line:
                    col_2_counter = col_2_counter + 1
        # try:
        #     os.remove(fname)
        # except FileNotFoundError:
        #     pass
        self.assertTrue(col_1_counter == 1 and col_2_counter == 1,
                        "Data block seems to be copied...")

    def test_non_loop_entries_on_singleline(self):
        fname = "name.star"
        try:
            os.remove(fname)
        except FileNotFoundError:
            pass
        b = pystar.StarFile(fname)
        a = pd.DataFrame([[0, 1], [2, 3]], columns=['_col1', '_col2'])
        b.update('my_tag', a, True)
        version_df = pd.DataFrame([["1.0", "20"]],
                                  columns=['_cbox_format_version', '_cbox_format_version_2'])
        b.update('global', version_df, False)
        b.write_star_file()
        c = pystar.StarFile(fname)
        try:
            os.remove(fname)
        except FileNotFoundError:
            pass
        global_is_written = ('global' in c) and ('global' in b)
        self.assertTrue(global_is_written,
                        "'global' tag should be present in both files")

    def test_pandas_merging(self):
        fname = "name.star"
        fname1 = "name1.star"
        b = pystar.StarFile(fname)
        a = pd.DataFrame([[0, 1], [2, 3]], columns=['_c1', '_c2'])
        b.update('my_tag', a, True)
        c = pystar.StarFile(fname1)
        d = pd.DataFrame([[9, 3], [3, 6]], columns=['_c1', '_c2'])
        c.update('my_tag', d, True)
        newd = b + c
        b += c
        self.assertTrue(b, newd)

    def test_markus_specification(self):
        fname = "name.star"
        fname1 = "name1.star"
        x = pd.DataFrame([[0, 1], [1, 2]], columns=['_c1', '_c2'])
        y = pd.DataFrame([[3, 4], [5, 6]], columns=['_c1', '_c2'])
        a = pystar.StarFile(fname)
        a.update('a', x, True)
        b = pystar.StarFile(fname1)
        b.update('a', y, True)

        c = pystar.StarFile.add_star([a, b])
        self.assertTrue(np.array_equal(c['a']['_c1'].values, [0, 1, 3, 5]))
        self.assertTrue(np.array_equal(c['a']['_c2'].values, [1, 2, 4, 6]))

        d = pystar.StarFile(None)
        pystar.StarFile.add_star([a, b], d)
        self.assertTrue(np.array_equal(d['a']['_c1'].values, [0, 1, 3, 5]))
        self.assertTrue(np.array_equal(d['a']['_c2'].values, [1, 2, 4, 6]))

        e = a + b
        self.assertTrue(np.array_equal(e['a']['_c1'].values, [0, 1, 3, 5]))
        self.assertTrue(np.array_equal(e['a']['_c2'].values, [1, 2, 4, 6]))

        f = a
        f += b
        self.assertTrue(np.array_equal(f['a']['_c1'].values, [0, 1, 3, 5]))
        self.assertTrue(np.array_equal(f['a']['_c2'].values, [1, 2, 4, 6]))


if __name__ == '__main__':
    unittest.main()
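Taken together, the tests sketch the core pyStarDB round-trip workflow. A minimal standalone example, grounded directly in the API calls exercised above (column and tag names here are arbitrary):

import pandas as pd
from pyStarDB import sp_pystardb as pystar

df = pd.DataFrame([[0, 1], [2, 3]], columns=['_col1', '_col2'])

sf = pystar.StarFile('example.star')
sf.update('particles', df, True)   # True -> write the block as a loop_
sf.write_star_file()

back = pystar.StarFile('example.star')
assert df['_col1'].equals(back['particles']['_col1'])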
{"hexsha": "b226722a7335b900b07ca6b40831b75e9e312afe", "size": 9876, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/test_pystardb.py", "max_stars_repo_name": "MPI-Dortmund/pyStarDB", "max_stars_repo_head_hexsha": "0cfe9010fc8673792f061b85483221e413b80a61", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tests/test_pystardb.py", "max_issues_repo_name": "MPI-Dortmund/pyStarDB", "max_issues_repo_head_hexsha": "0cfe9010fc8673792f061b85483221e413b80a61", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-11-03T15:18:40.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-16T23:59:37.000Z", "max_forks_repo_path": "tests/test_pystardb.py", "max_forks_repo_name": "MPI-Dortmund/pyStarDB", "max_forks_repo_head_hexsha": "0cfe9010fc8673792f061b85483221e413b80a61", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.7556270096, "max_line_length": 116, "alphanum_fraction": 0.5678412313, "include": true, "reason": "import numpy", "num_tokens": 2713}
\documentclass[a4paper]{article}
\usepackage{graphicx}
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{sectsty}
\usepackage{pdfpages}
\usepackage[section]{placeins}
\usepackage{float}% If comment this, figure moves to Page 2
\usepackage{listings}
\usepackage{caption}
\usepackage{subcaption}
\setlength\parindent{0pt}
%\setlength{\parskip}{\baselineskip}
%\sectionfont{\fontsize{12}{15}\selectfont}

\begin{document}

\title{Software Architecture Process And Management}
\date{}
\author{Computer Science 4th Year, Created By Monkeys}
\maketitle
\newpage

\tableofcontents
\newpage

\section{Introduction}
\subsection{Learning Outcomes of the course}
\begin{itemize}
\item Integrate knowledge of software architecture to capture \textbf{quality attribute} requirements for a system, \textbf{evaluate proposed architectures} and create options for \textbf{improvement}.
\item Analyse and justify complex \textbf{trade-off decisions} between \textbf{competing software architectures}.
\item Evaluate the \textbf{strengths} and \textbf{weaknesses} of software architecture in support of particular approaches to \textbf{design}, \textbf{process} and \textbf{management} for a particular system, and make \textbf{recommendations} on the \textbf{choice of process} for that system.
\item Work in a group to \textbf{critically reflect} on aspects of the \textbf{software architecture literature} and practice, to create a resource that supports their learning in software architecture.
\end{itemize}

\subsection{What is Success for a Large Project?}
A large project will be considered successful if:
\begin{itemize}
\item The software is delivered on schedule
\item Development costs are within budget
\item The software meets the needs of users
\end{itemize}

\subsection{Software Architecture}
\underline{\textbf{Software Architecture Definitions}}
\begin{enumerate}
\item The software architecture of a system is the \textbf{set of structures} needed to reason about the system, which comprise software elements, relations among them and properties of both. (Bass, Clements, Kazman 2013)
\item A software system's architecture is the set of \textbf{principal design decisions} about the system. (R.\ N.\ Taylor et al.)\\
\end{enumerate}

\underline{\textbf{Architecture is a collection of structures}}\\
We frequently observe three types of structure:
\begin{enumerate}
\item \textbf{Modular structure}: static structure that focuses on how the functionality is divided up, structured, and assigned to development and implementation teams.
\item \textbf{Component and connector structure}: runtime structures that focus on how components interact.
\item \textbf{Allocation structures}: mapping to organisational, development, installation and execution environments.\\
\end{enumerate}

\underline{\textbf{Architecture is an Abstraction}}\\
Architecture is used to \textbf{suppress} detail that is unimportant for the reasoning we are doing. In particular it \textbf{abstracts away} from the private \textbf{implementation details} of specific methods.\\

** All systems have architectures (even if people have forgotten them) **\\

Complicated systems are embedded in organisations, and we can often see architecture through practice:
\begin{itemize}
\item \textit{How is the system developed?} $\rightarrow$ This will often provide \textbf{clues to structures}.
\item \textit{How are maintenance, evolution and issue reporting dealt with?} $\rightarrow$ This will often help with \textbf{modularity}.
\item \textit{What are the failure characteristics of the system in operation?} $\rightarrow$ This will often suggest \textbf{component and connector structure}.
\end{itemize}

\subsection{Case Study: General Practice Extraction Service}
"The General Practice Extraction Service (GPES)" is an IT system designed to allow NHS organisations to extract data from \textbf{all} GP practice computers. \textit{This is because different GP practices have different contracts and therefore use different software to store patient data.}\\

The basic idea is to create an API that can query every system, across all the different software products used by different practices. This allows generic extraction of \textbf{patient data} regardless of the patient's GP.

% IMAGE: GPES Customers
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\includegraphics[width=1\linewidth] {images/1-customers.png}
\end{subfigure}
%\caption{-}
\end{figure}

% IMAGE: GPES Structure
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\includegraphics[width=1\linewidth] {images/1-structure.png}
\end{subfigure}
%\caption{-}
\end{figure}

% IMAGE: GPES Timeline
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\includegraphics[width=1\linewidth] {images/1-timeline.png}
\end{subfigure}
%\caption{-}
\end{figure}

\underline{\textbf{National Audit Office Conclusions on the GPES Project}}
\begin{itemize}
\item The project has been significantly delayed and many customers have yet to receive data.
\item Mistakes in the original procurement and contract management contributed to losses of public funds, through asset write-offs and settlements with suppliers.
\item Only \textbf{one} customer, \textit{NHS England}, has so far received data from GPES.\\
\end{itemize}

Originally the business plan for GPES said the service would start in \textbf{2009-2010}. It actually took until \textbf{2014} for the first extraction to take place. The total expected loss for the GPES project rose from \textbf{£14 million} to \textbf{£40 million} during the \textit{planning} and \textit{procurement} stages.\\

\underline{\textbf{Data Extraction Issues}}
\begin{itemize}
\item First, GP system suppliers were asked to implement a common query language for the extraction process. This was not in their interest, as it would have cost them a lot to make these changes to their existing systems, so they essentially refused to do so.
\item The requirement then changed to each GP system supplier implementing their own logical `business rules' to be used to extract the data (different for each supplier, with one API to query each supplier and extract the data).
\item The NHS IC's use of a non-competitive procurement approach, in addition to the changes in design, contributed to the restrictive process for \textit{extracts}.
\item HSCIC (the successor of the NHS IC) has continued to use the GPSOC framework to require data sharing between NHS systems. \textit{The new framework (2014) states that the principal clinical system suppliers must provide an interface method for third-party system users.}
\item \textbf{HSCIC cannot deliver the range or scale of data extracts requested, due to the design of the GPES system and the restrictions in the supplier contracts. (Over 100 different extracts have been requested.) HSCIC estimate that they will only be able to design 24 new extracts in 2015-16.}\\
\end{itemize}

% IMAGE: GPES Issues
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\includegraphics[width=1\linewidth] {images/1-issues.png}
\end{subfigure}
%\caption{-}
\end{figure}

% IMAGE: GPES Issues Concluded
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\includegraphics[width=1\linewidth] {images/1-issues-concluded.png}
\end{subfigure}
%\caption{-}
\end{figure}

\newpage
\section{Basic Concepts of Architectures}
\subsection{What is good architecture?}
Good architecture is appropriate for its \textbf{context of use}; e.g.\ a 3-tier e-commerce architecture is not appropriate for an avionics project.\\

Guidance on `good architecture' focuses on:
\begin{itemize}
\item \textbf{Process}
\item \textbf{Structure}\\
\end{itemize}

Software architecture should capture the \textbf{principal design decisions} about the system. The \textbf{blueprint} view of software architecture focuses on:
\begin{itemize}
\item Structure
\item Component behaviour
\item Component interaction, and how these influence the \textbf{quality attributes} of the \textit{system}.\\
\end{itemize}

\subsection{Process}
Architect teams are often small and \textbf{maintain the integrity} of the architecture. The architecture is \textit{justified} in relation to a \textbf{prioritised list of quality attributes} that need to be managed. \textbf{Stakeholders' interests} are documented and are used to build the type of architecture that will reflect them.\\

Architecture is often evaluated in terms of \textit{how well it delivers the quality attributes}. Software architectures are often chosen to allow \textbf{incremental implementation}, i.e.\ low coupling and high cohesion (coupling is the degree of interdependence between modules; cohesion is the degree to which the elements of a single module belong together).

\subsection{Structure}
The structure of an architecture will differ depending on the requirements of the software; the following are often utilised:
\begin{itemize}
\item \textbf{Modularity} $\rightarrow$ hides information, separates concerns, and allows good, robust interfaces that are unlikely to change
\item Well-known \textbf{patterns and tactics} are often implemented
\item The architecture is built NOT to depend on \textbf{particular versions of tools} or \textbf{special features}, \textit{unless this is essential!}
\item Modules \textit{producing} data should be \textbf{separate} from those \textit{consuming} data
\item There is usually a complex mapping between \textbf{modules} \textit{(static structure)} and \textbf{components} \textit{(dynamic structure)}
\item MINIMISE the number of ways components \textbf{interact}
\item The architecture should clearly \textbf{identify resource contention issues} and deal with them (e.g.\ network capacity: minimise network throughput using different techniques [EXC])\\
\end{itemize}

\underline{\textbf{Prescriptive vs Descriptive Structures}}\\
A \textbf{prescriptive} structure is what we use to model the system before it is built. It is the aim the architect has while generating the blueprint \textit{(UMLAsBlueprint, forward engineering)}; however, it is often too tidy and unrealistic to model the architecture of a real system.\\

A \textbf{descriptive} structure is usually produced after the system has been created. It is used to describe the entire system: how the \textbf{components} interact, the responsibilities of each \textbf{module} \textit{(usually extremely messy)}, etc.
\subsection{The Importance of Architecture}
Software architecture has several uses:
\begin{enumerate}
\item Enables us to manage the \textbf{key attributes} of a system
\item Allows reasoning about and managing \textbf{change}
\item Allows predictions of \textbf{key quality attributes}
\item Allows \textbf{improved communication} between stakeholders
\item Defines \textbf{constraints} on the software's implementation
\item Provides the basis for \textbf{evolutionary prototyping}
\item Is the key artefact for reasoning about \textbf{cost} and \textbf{scheduling}
\item Focuses on the assembly of \textbf{components} rather than the \textbf{creation/implementation} of components\\
\end{enumerate}
Other uses are:
\begin{itemize}
\item \textit{Reflects the structure of an organisation}
\item \textit{Can be used as the transferable, reusable model at the heart of a product line}
\item \textit{Restricts design alternatives and channels developer effort in a coordinated way}
\item \textit{Provides the basis for training new team members}
\end{itemize}
\subsection{Managing Attributes and Change}
The majority of software projects undergo requirements change, which may also change the \textbf{key quality attributes} of the system. The idea is to choose an architecture that minimises the need for \textit{architectural} change, allowing the system to be \textbf{modified} while keeping the same abstract \textbf{architectural} ideas.\\
Managing change can be reasoned about on three levels:
\begin{enumerate}
\item Inside an element \textit{[cheapest]}
\item Between elements, maintaining the architecture \textit{[can be costly]}
\item Requiring architecture change (we wish to avoid this as much as possible) \textit{[most expensive change]}
\end{enumerate}
\subsection{Prediction of Attributes}
We can attempt to predict the \textbf{key quality attributes} of the system based on the \textit{requirements} and possible (logical) \textit{system extensions} in the future. Planning for these changes will minimise the need for architectural change, which in turn will \textbf{reduce the cost} of future work.\\
\textbf{** Models should be able to be built based on the predictions of the attributes and requirements **}
\subsection{Communication Between Stakeholders}
A well-documented architecture allows \textbf{improved communication} between stakeholders.
Some examples of how the documented architecture can help with communication are the following:
\begin{itemize}
\item The user has particular requirements in terms of user experience
\item The customer needs to know about schedule, budget and meeting regulations in their market
\item The project manager needs to know the dependencies between the modules and components
\end{itemize}
\underline{\textit{These might be accommodated by different, mutually consistent views of the system}}
\subsection{Early Design and Constraints}
Early design carries the \textit{most fundamental} design decisions, e.g.:
\begin{itemize}
\item What the \textbf{key quality attributes} are
\item The \textbf{architecture form/type} that will give the best control over these attributes
\item The characterisation of the behaviour of the architectural elements\\
\end{itemize}
% IMAGE: Constraints
\begin{figure}[H]
\hskip-1.5cm\begin{subfigure}{1.1\textwidth}
\includegraphics[width=1.2\linewidth]{images/2-constraints.png}
\end{subfigure}
%\caption{-}
\end{figure}
\subsection{Evolutionary Prototyping}
\textbf{Evolutionary prototyping} allows a system to be constantly tested under real conditions as it is being developed. As \textit{bugs} are detected they are fixed and tested in the next prototype. Examples of systems that used \textbf{evolutionary prototyping} are:
\begin{itemize}
\item Plug and Play -- early experience of the base functionality + extensibility.
\item Real-time architectures -- early experience with scheduling. \textit{(Worst-case execution times guide design and deployment)}
\end{itemize}
\subsection{Cost and Scheduling}
Reasoning about the \textbf{following topics} allows for effective cost estimation and scheduling in a software project:
\begin{itemize}
\item Capturing dependencies
\item Estimating the required effort for different sections
\item Allocating effort to elements
\item Understanding how elements influence each other
\item Using the architecture to interpret bottom-up estimates from teams working on elements
\end{itemize}
\subsection{Product Line (Model)}
The \textbf{product line} model is a \textit{transferable and reusable} model. \textbf{Elements} are assets that compose to give new \textit{functionality}. The architecture provides the means to \textbf{compose the elements}. A planned approach allows the reuse of architectural elements \textit{(think object inheritance)}.
\subsection{Component Level \& Channelling Development}
At the component level we focus on the \textbf{assembly} of components rather than the \textbf{creation} of them! With well-designed elements and architecture we can combine elements from different \textbf{producers} \textit{(provided they conform to a standardized interface)}. This provides the following \textbf{benefits}:
\begin{itemize}
\item Decreased time to market
\item More reliability
\item Lower cost
\item Flexibility \textit{(e.g. using multiple or alternate suppliers for a component)}\\
\end{itemize}
\textbf{Channelling development} restricts alternatives and channels developer effort in a coordinated way. This provides a defined \textbf{context} for the developer. Well-defined \textbf{interfaces} and clear ideas of the \textbf{functionality \& quality attributes} are required!\\
\textbf{** The overall goal is to provide clarity on what is an architectural decision and what is a development decision. **}
\newpage
\section{Context Design}
Software architects and architecture have arisen as systems have grown in \textit{scale}, \textit{economic importance} and \textit{criticality}.
Architecture plays different roles in different contexts. The \textbf{main contexts} are:
\begin{itemize}
\item Technical context
\item Project life-cycle context
\item Business context
\item Professional context
\end{itemize}
\subsection{Technical Context}
The \textbf{technical context} is the context in which the architecture supports technical activity, for example \textbf{measuring} a statistic, the \textbf{verification \& validation} process, \textbf{compliance}, \ldots\\
The architecture provides a means for controlling the \textbf{quality attributes} of the system. In the \textbf{context of design} activities we try to choose architectures that \textbf{enable the attributes} we care most about. We may find, through analysing \textit{existing systems}, that specific architectures inhibit (prevent) particular quality attributes.\\
** Architecture does not often have much to say about the functionality of a system, because it provides containers for functionality. **
\subsubsection{Controlling Quality Attributes}
Usually we care about multiple quality attributes at once. Selecting a type of architecture will allow specific quality attributes to be ensured when the system is deployed to the end user. Examples of \textbf{quality attributes} we might care about for a particular system are:
\begin{table}[H]
\begin{tabular}{|l|p{10cm}|}
\hline
QA & Description\\
\hline
Safety & Safety is concerned with ensuring that the \textbf{system only behaves as intended} and has no additional behaviour that is unspecified.\\
\hline
Testability & Testability ensures that \textbf{elements are clearly isolated}, that we know the \textbf{expected behaviour of components}, that we know the \textbf{relations of modules} (so we can track down faulty code), and that we know how the \textbf{components are intended to integrate together} to give the overall behaviour.\\
\hline
Availability & Availability is concerned with ensuring there is a \textbf{system to take over} in case the original system fails. \\
\hline
\end{tabular}
\end{table}
Other examples of quality attributes include \textbf{performance}, \textbf{usability}, \textbf{interoperability}, \ldots\\
These examples of quality attributes relate to the \textbf{actuator monitoring} system that was described in lectures. As actuators are physical devices they suffer from \textit{`wear and tear'} and eventually break. In safety-critical systems (for example cars and aeroplanes) these actuators require monitoring, in order to prevent worst-case scenarios when they do break and to have them repaired beforehand.\\
The architecture for the actuator monitoring system will be required to provide at least these three quality attributes:
\begin{enumerate}
\item Availability -- to ensure it is always monitoring the actuators
\item Safety -- to ensure the monitoring system does not deviate from its intended behaviour (no false positives or false negatives)
\item Testability -- to provide certainty about the safety and availability it should provide.
\end{enumerate}
% IMAGE: Actuator Monitoring Architecture
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\includegraphics[width=1\linewidth]{images/3-actuator-monitoring-architecture.png}
\end{subfigure}
%\caption{-}
\end{figure}
\subsection{Project Life-cycle Context}
The \textbf{project life-cycle context} describes how the project will develop over time. The architecture is then created to fit the life-cycle that is best for a particular project.
When creating a project life-cycle, the following must be completed \textit{(these are all best done by talking about the architecture)}:
\begin{itemize}
\item Making a business case for the system
\item Understanding the requirements that concern quality attributes
\item Deciding on an architecture
\item Documenting the architecture
\item Analysing and evaluating the architecture
\item Implementing and testing the system based on the architecture
\item Ensuring the implementation conforms to the architecture
\end{itemize}
\subsubsection{V-Model}
The \textbf{V-Model} is a development of the \textit{waterfall} model and explicitly includes architectural design as a stage. It focuses heavily on \textbf{requirements-based testing} all the way down to the unit level!
% IMAGE: V-Model
\begin{figure}[H]
\hskip-2.5cm\begin{subfigure}{1.2\textwidth}
\includegraphics[width=1.2\linewidth]{images/3-v-model.png}
\end{subfigure}
%\caption{-}
\end{figure}
\subsubsection{Spiral Model}
\textbf{Boehm's spiral model} is a type of \textit{iterative model}. It focuses on project risk management by constantly creating prototypes to be tested all the way through the development life-cycle.
% IMAGE: Spiral
\begin{figure}[H]
\hskip-2.5cm\begin{subfigure}{1.2\textwidth}
\includegraphics[width=1.2\linewidth]{images/3-spiral.png}
\end{subfigure}
%\caption{-}
\end{figure}
\subsubsection{Agile Development}
The \textbf{Agile} development life-cycle is an iterative and incremental method of managing the design and building of a software product. The images below show two different forms of \textbf{agile} development: one without and one with DevOps.
% IMAGE: Agile
\begin{figure}[H]
\hskip-2.5cm\begin{subfigure}{1.2\textwidth}
\includegraphics[width=1.2\linewidth]{images/3-Agile.png}
\end{subfigure}
%\caption{-}
\end{figure}
\subsubsection{Agile + DevOps}
% IMAGE: Agile + Devops
\begin{figure}[H]
\hskip-2.5cm\begin{subfigure}{1.2\textwidth}
\includegraphics[width=1.2\linewidth]{images/3-ADevops.png}
\end{subfigure}
%\caption{-}
\end{figure}
\subsection{Business Context}
The \textbf{business context} is discussed in later lectures. Two aspects we cover are:
\begin{enumerate}
\item How the organisational structure of stakeholders can drive architectural decisions and shape the decision-taking around architecture.
\item How architectural expertise drives the structure of development organisations in terms of their functional units and interrelationships.
\end{enumerate}
\subsection{Professional Context}
The architectural perspective gives you, as a professional:
\begin{itemize}
\item A way of describing your expertise
\item Recognition of your skills as an architect within the organisations you work in
\item A way of describing your past experience
\item The option to specialise in particular classes of architecture (e.g. financial architecture)
\end{itemize}
\subsection{Domain-Specific Software Architecture}
\underline{\textbf{Design in the Technical Context}}\\
Design is a mixture of \textbf{creativity} and the use of \textbf{knowledge} that is institutionalised in the context. This knowledge takes the form of \textbf{reusable structures}. These reusable structures also influence other aspects of the context, helping to shape \textbf{processes, organisations} and \textbf{professions}.
We can plot different sorts of \textbf{architectural structures} depending on the degree to which they are \textbf{specific to a domain} and the extent to which they \textbf{influence the system}.\\
% IMAGE:
\begin{figure}[H]
\hskip-2.5cm\begin{subfigure}{1\textwidth}
\includegraphics[width=1\linewidth]{images/3-graph.png}
\end{subfigure}
%\caption{-}
\end{figure}
\underline{\textbf{Domain-Specific Software Architectures}}\\
A \textbf{DSSA} is a collection of (pre-decided) \textbf{design decisions}. DSSAs capture important aspects of a particular task \textbf{(domain)}. They are \textbf{common} across a range of systems in the domain and typically have some predefined structures, depending on the attributes we want to control.\\
These are \textbf{not} general purpose, because they incorporate many specific characteristics of the \textbf{domain}. The main benefit is the extent to which \textbf{design knowledge is captured}. There are, however, problems: over time, basic information can be forgotten.\\
** Bridge example given in lectures, where key information regarding the architecture of suspension bridges (from the 19th century) was forgotten. This resulted in a bridge collapsing because of wind. **
\subsection{Architectural Patterns}
An architectural pattern is a set of \textbf{architectural design decisions} that are applicable to a \textbf{recurring design problem}, and \textbf{parametrized} to account for the different \textbf{software development contexts} in which that problem appears.\\
They are similar to \textbf{DSSAs} but capture less of the behaviour and attributes of the system. They are \textbf{more general}, because they are intended to abstract a common \textbf{pattern over several domains}. Three common architectural patterns are listed below:
\begin{enumerate}
\item State Logic Display: Three-Tiered Pattern
\item Model View Controller Pattern
\item Sense Compute Control Pattern\\
\end{enumerate}
\textbf{Contexts shape design.} The \textbf{technical context} identifies the features we want to control and \textbf{packages} a range of other properties. Standard architectures (\textit{patterns and domain-specific software architectures, DSSAs}) \textbf{package these}. The other contexts we consider also help to shape the choice of architecture.\\
\textbf{** In design we use pre-decided structures and then alter/extend them as and when we need to. **}
% IMAGE:
\begin{figure}[H]
\hskip-2.5cm\begin{subfigure}{1.2\textwidth}
\includegraphics[width=1.2\linewidth]{images/3-SLD-pattern}
\end{subfigure}
%\caption{-}
\end{figure}
% IMAGE:
\begin{figure}[H]
\hskip-2.5cm\begin{subfigure}{1.2\textwidth}
\includegraphics[width=1.2\linewidth]{images/3-model-view-controller.png}
\end{subfigure}
%\caption{-}
\end{figure}
% IMAGE:
\begin{figure}[H]
\hskip-2.5cm\begin{subfigure}{1.2\textwidth}
\includegraphics[width=1.2\linewidth]{images/3-sense-compute-control.png}
\end{subfigure}
%\caption{-}
\end{figure}
\section{Quality Attributes}
\section{QA: Availability}
\section{QA: Performance}
\section{QA: Security}
\section{QA: Testability}
\section{QA: Modifiability}
\newpage
\section{Connectors}
\textbf{Software connectors} are key elements of a software architecture. They define the rules of \textbf{interaction} between \textbf{components}. Software connectors range from \textit{simple} to \textit{complex}:
\begin{itemize}
\item Simple: shared variable access, method calls, \ldots
\item Complex: database access, client-server, scheduler, load balancer, \ldots\\
\end{itemize}
In a project's \textbf{code base} the connections between components are often implicit and cannot easily be spotted. In the architecture design we \textbf{explicitly identify} them, to allow us to capture \textbf{system interactions} (at the level of the components). The specifications of interactions are often \textit{complex}. An example from \textbf{LinkedIn} is provided below:\\
% IMAGE: LinkedIn Data
\begin{figure}[H]
\centering
\hskip-2.5cm\begin{subfigure}{1\textwidth}
\includegraphics[width=1\linewidth]{images/10-linked.png}
\end{subfigure}
%\caption{-}
\end{figure}
% IMAGE: LinkedIn Redliner
\begin{figure}[H]
\centering
\hskip-2.5cm\begin{subfigure}{1\textwidth}
\includegraphics[width=1\linewidth]{images/10-linked-redliner.png}
\end{subfigure}
%\caption{-}
\end{figure}
\subsection{What is Different About Connectors?}
Depending on the software project, \textbf{components} will have \textbf{application-specific} functionality. \textbf{Connectors} provide \textit{interaction mechanisms} that are \textit{generic} across different applications. \textbf{Interaction} may involve \textbf{multiple components}, and may have a protocol associated with it.\\
\subsection{Benefits of Explicit Connectors}
\begin{itemize}
\item \textbf{Interaction} is defined by the arrangement of the connectors (as far as possible)
\item \textbf{Component interaction} is defined by the pattern of connectors in the architecture
\item \textbf{Interaction} is \textit{``independent''} of the components
\end{itemize}
\subsection{Roles Played By Software Connectors}
The specification of the connector protocols determines:
\begin{itemize}
\item The types of interfaces
\item Properties of interactions
\item Rules about the ordering of interactions
\item Measurable features of interactions\\
\end{itemize}
Connectors often have multiple roles; the main roles are:
\begin{itemize}
\item Communication
\item Coordination
\item Conversion
\item Facilitation
\end{itemize}
\subsection{Communication}
Information is transmitted between \textbf{components} (e.g. message passing, method calls). \textbf{Connectors} constrain:
\begin{itemize}
\item \textbf{Direction of flow} (the pipes in the image below)
\item \textbf{Capacity / rate of flow}\\
\end{itemize}
** Additional Information **
\begin{itemize}
\item Connectors providing communication services support the \textbf{transmission} of data among components
\item Data transfer services are a primary building block of component interaction
\item Components routinely pass messages, exchange data to be processed and communicate the results of computations
\end{itemize}
% IMAGE: Pipes
\begin{figure}[H]
\centering
\hskip-2.5cm\begin{subfigure}{1\textwidth}
\includegraphics[width=1\linewidth]{images/10-pipes.png}
\end{subfigure}
%\caption{-}
\end{figure}
Connectors influence measurable quality attributes of the system. They separate \textbf{communication} from the functional aspects.
\subsection{Coordination}
\textbf{Coordination} controls the timing \textbf{relationships} of the functional aspects of the system.
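The simplest coordination connector is an ordinary call, since it passes the thread of execution from caller to callee, as noted in the additional information below. The following minimal Python sketch (hypothetical names, not from the lecture material) makes this transfer of control explicit using a callback:
\begin{verbatim}
# A call is a coordination connector: control (the thread of
# execution) passes from the caller into the callee and back.
def controller(on_done):
    print("controller: doing work")  # control is here
    on_done("result")                # control transfers to the callee
    print("controller: resumed")     # control has returned

def display(result):
    print("display: received", result)

controller(display)  # wiring the connector: which callee gets control
\end{verbatim}
The payload passed along is incidental; what this connector fixes is \textit{when} each component runs.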
** Additional Information **
\begin{itemize}
\item Connectors providing coordination services support the transfer of \textbf{control} among components
\item Components interact by passing the thread of execution to each other
\item \textbf{Function calls and method invocations are examples of coordination connectors}
\item Higher-order connectors, such as signals and load-balancing connectors, provide richer, more complex interactions built on coordination services
\end{itemize}
\subsection{Conversion}
\textbf{Conversion} is how we get components to interact that \textbf{do not} have the right means of interaction. \textbf{Incompatibilities} might be related to: datatypes, ordering, frequency, structure of parameters, etc.\\
Examples of types of converters:
\begin{itemize}
\item Wrappers: deal with structural issues
\item Adaptors: deal with datatype incompatibilities\\
\end{itemize}
** Additional Information **
\begin{itemize}
\item Connectors providing conversion services \textbf{transform the interaction} required by one component into that provided by another
\item Enabling heterogeneous components to interact with each other is \textbf{not} a trivial task
\item Conversion services allow components that have not been specifically tailored for each other to establish and conduct interaction
\end{itemize}
\subsection{Facilitation}
\textbf{Facilitation} enables interaction among a group of components that are intended to interact with one another.
** Additional Information **
\begin{itemize}
\item Improve the interaction of components that were intended to interoperate (usually by \textbf{optimising} or streamlining interactions)
\item Ensure proper performance profiles (load balancing or scheduling)
\item Synchronization mechanisms (monitors $\rightarrow$ enforce mutually exclusive access to resources)
\end{itemize}
\subsection{Types of Connectors (Taylor, Medvidovic \& Dashofy)}
\begin{table}[H]
\begin{tabular}{|l|p{10cm}|}
\hline
Connector & Description\\
\hline
Method/Procedure Call & Procedure call connectors model the flow of control among components through various invocation techniques. They are thus \textbf{coordination connectors}. [Examples: fork and exec]\\
\hline
Data Access & Data access connectors allow components to access data maintained by a data store component. They therefore provide \textbf{communication services}. [Example: JDBC $\rightarrow$ Java SQL driver]\\
\hline
Event & An event is the instantaneous effect of the termination of the invocation of an operation on an object, occurring at that object's location. [Example: windows with GUI inputs] \\
\hline
Stream & Streams are used to perform transfers of large amounts of data between autonomous processes. Thus they provide \textbf{communication services} in a system. [Examples: UNIX pipes, TCP/UDP sockets, client-server protocols]\\
\hline
Distributor & Distributor connectors perform the identification of interaction paths and the subsequent routing of communication and coordination information among components along these paths. They provide \textbf{facilitation} services. \textit{[Distributor connectors never exist by themselves, but provide assistance to other connectors, such as streams or procedure calls]}\\
\hline
Arbitrator & When components are aware of the presence of other components but cannot make assumptions about their needs and state, arbitrators streamline system operation and resolve any conflicts (providing \textbf{facilitation}). They also redirect the flow of control (providing \textbf{coordination}). \\
\hline
Adaptor & Adaptor connectors provide facilities to support interaction between components that have not been designed to interoperate. \textit{[Adaptors involve matching communication policies and interaction protocols among components, thus providing \textbf{conversion} services]}\\
\hline
\end{tabular}
\end{table}
\section{Patterns}
\section{Modelling The Life-cycle}
\section{Dev-Ops}
\section{Product Line Architecture}
\section{Analysis}
\end{document}
{"hexsha": "6f9b69bb5d23e47e263db8c2cfe8ade0781787e0", "size": 33751, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lecture_summary/summary.tex", "max_stars_repo_name": "aseemnarang/sapmnotes", "max_stars_repo_head_hexsha": "8b0c6a2181456a3ba7e6020586687e2c8f64a3f2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lecture_summary/summary.tex", "max_issues_repo_name": "aseemnarang/sapmnotes", "max_issues_repo_head_hexsha": "8b0c6a2181456a3ba7e6020586687e2c8f64a3f2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lecture_summary/summary.tex", "max_forks_repo_name": "aseemnarang/sapmnotes", "max_forks_repo_head_hexsha": "8b0c6a2181456a3ba7e6020586687e2c8f64a3f2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.8298192771, "max_line_length": 510, "alphanum_fraction": 0.7947023792, "num_tokens": 8017}
import os import sys import random from collections import OrderedDict import math import copy import logging import pickle import glob import numpy as np import pandas as pd from PIL import Image import xml.etree.ElementTree as ElementTree import torch import torch.utils.data as data import torchvision.transforms as transforms from os2d.structures.bounding_box import BoxList from os2d.engine.augmentation import DataAugmentation from os2d.utils import get_image_size_after_resize_preserving_aspect_ratio, mkdir, read_image from os2d.structures.feature_map import FeatureMapSize def read_annotation_file(path): dataframe = pd.read_csv(path) # add "imagefilename" and "classfilename" columns with default file names if not "imagefilename" in dataframe.columns: imagefilename = [] for row in dataframe["imageid"]: imagefilename.append(str(row)+".jpg") dataframe["imagefilename"] = imagefilename if not "classfilename" in dataframe.columns: classfilename = [] for row in dataframe["classid"]: classfilename.append(str(row)+".jpg") dataframe["classfilename"] = classfilename required_columns = {"imageid", "imagefilename", "classid", "classfilename", "gtbboxid", "difficult", "lx", "ty", "rx", "by"} assert required_columns.issubset(dataframe.columns), "Missing columns in gtboxframe: {}".format(required_columns - set(dataframe.columns)) return dataframe def build_eval_dataset(data_path, name, eval_scale, cache_images=False, no_image_reading=False, logger_prefix="OS2D"): logger = logging.getLogger(f"{logger_prefix}.dataset") logger.info("Preparing the {0} dataset: eval scale {1}, image caching {2}".format(name, eval_scale, cache_images)) if name.lower() == "dairy": annotation_folder="classes" image_size = 3000 classdatafile = os.path.join(data_path, "dairy", annotation_folder,"dairy.csv") gt_path = os.path.join(data_path, "dairy", annotation_folder, "images") image_path = os.path.join(data_path, "dairy", "src", "original") gtboxframe = read_annotation_file(classdatafile) elif name.lower() in ["paste-v", "paste-f"]: annotation_folder="classes" image_size = 1280 classdatafile = os.path.join(data_path, "paste", annotation_folder,"paste.csv") gtboxframe = read_annotation_file(classdatafile) if name.lower() == "paste-f": gtboxframe["difficult"] = 0 gt_path = os.path.join(data_path, "paste", annotation_folder, "images") image_path = os.path.join(data_path, "paste", "src", "original") else: raise(RuntimeError("Unknown dataset {0}".format(name))) dataset = DatasetOneShotDetection(gtboxframe, gt_path, image_path, name, image_size, eval_scale, cache_images=cache_images, no_image_reading=no_image_reading, logger_prefix=logger_prefix) return dataset def build_grozi_dataset(data_path, name, eval_scale, cache_images=False, no_image_reading=False, logger_prefix="OS2D"): logger = logging.getLogger(f"{logger_prefix}.dataset") logger.info("Preparing the GroZi-3.2k dataset: version {0}, eval scale {1}, image caching {2}".format(name, eval_scale, cache_images)) annotation_folder="classes" image_size = 3264 classdatafile = os.path.join(data_path, "grozi", annotation_folder,"grozi.csv") gt_path = os.path.join(data_path, "grozi", annotation_folder, "images") image_path = os.path.join(data_path, "grozi", "src", str(image_size)) gtboxframe = read_annotation_file(classdatafile) # define a subset split (using closure) subset_name = name.lower() assert subset_name.startswith("grozi"), "" subset_name = subset_name[len("grozi"):] subsets = ["train", "val-old-cl", "val-new-cl", "val-all", "train-mini"] found_subset = False for subset in 
subsets: if subset_name == "-"+subset: found_subset = subset break assert found_subset, "Could not identify subset {}".format(subset_name) def get_unique_images(gtboxframe): unique_images = gtboxframe[["imageid", "imagefilename"]].drop_duplicates() image_ids = list(unique_images["imageid"]) image_file_names = list(unique_images["imagefilename"]) return image_ids, image_file_names if subset in ["train", "train-mini"]: gtboxframe = gtboxframe[gtboxframe["split"] == "train"] image_ids, image_file_names = get_unique_images(gtboxframe) if subset == "train-mini": image_ids = image_ids[:2] image_file_names = image_file_names[:2] gtboxframe = gtboxframe[gtboxframe["imageid"].isin(image_ids)] elif subset in ["val-old-cl", "val-new-cl", "val-all"]: gtboxframe = gtboxframe[gtboxframe["split"].isin(["val-old-cl", "val-new-cl"])] image_ids, image_file_names = get_unique_images(gtboxframe) if subset != "val-all": gtboxframe = gtboxframe[gtboxframe["split"] == subset] else: raise RuntimeError("Unknown subset {0}".format(subset)) dataset = DatasetOneShotDetection(gtboxframe, gt_path, image_path, name, image_size, eval_scale, image_ids=image_ids, image_file_names=image_file_names, cache_images=cache_images, no_image_reading=no_image_reading, logger_prefix=logger_prefix) return dataset def build_instre_dataset(data_path, name, eval_scale, cache_images=False, no_image_reading=False, logger_prefix="OS2D"): logger = logging.getLogger(f"{logger_prefix}.dataset") logger.info("Preparing the INSTRE dataset: version {0}, eval scale {1}, image caching {2}".format(name, eval_scale, cache_images)) # INSTRE dataset was downloaded from here: ftp://ftp.irisa.fr/local/texmex/corpus/instre/instre.tar.gz # Splits by Iscen et al. (2016) were downloaded from here: ftp://ftp.irisa.fr/local/texmex/corpus/instre/gnd_instre.mat image_size = 1000 import scipy.io as sio dataset_path = os.path.join(data_path, "instre") annotation_file = os.path.join(dataset_path, "gnd_instre.mat") annotation_data = sio.loadmat(annotation_file) # annotation_data["qimlist"][0] - 1250 queries - each in annotation_data["qimlist"][0][i][0] file, root - os.path.join(data_path, "instre") # annotation_data["imlist"][0] - 27293 database images - each in annotation_data["imlist"][0][i][0] file, root - os.path.join(data_path, "instre") # annotation_data["gnd"][0] - 1250 annotations for all queries: # annotation_data["gnd"][0][i][0] - indices of positives in annotation_data["imlist"][0] (WARNING - 1-based) # annotation_data["gnd"][0][i][1] - bbox of the query object, one of the boxes from ent of *.txt # images in subsets INSTRE-S1 and INSTRE-S2 contain exactly one object # images in the subset INSTRE-M contain two objects each image_path = dataset_path gt_path = os.path.join(dataset_path, "classes") gt_image_path = os.path.join(gt_path, "images") mkdir(gt_image_path) classdatafile = os.path.join(gt_path, "instre.csv") if not os.path.isfile(classdatafile): logger.info(f"Did not find data file {classdatafile}, creating it from INSTRE source data") # create the annotation file from the raw dataset annotation_data["qimlist"] = annotation_data["qimlist"].flatten() annotation_data["imlist"] = annotation_data["imlist"].flatten() annotation_data["gnd"] = annotation_data["gnd"].flatten() num_classes = len(annotation_data["qimlist"]) gtboxframe = [] # will be creating dataframe from a list of dicts for i_class in range(num_classes): query_image_path_original = str(annotation_data["qimlist"][i_class][0]) if query_image_path_original.split("/")[0].lower() == "instre-m": # 
Query boxes from subset "INSTRE-M" contain both objects, so it is not clear how to use them logger.info(f"Skipping query {i_class}: {query_image_path_original}") continue logger.info(f"Adding query {i_class}: {query_image_path_original}") query_bbox = annotation_data["gnd"][i_class][1].flatten() query_positives = annotation_data["gnd"][i_class][0].flatten() - 1 # "-1" because of the original MATLAB indexing classid = i_class classfilename = f"{i_class:05d}_{'_'.join(query_image_path_original.split('/'))}" if not os.path.isfile(classfilename): query_img = read_image(os.path.join(dataset_path, query_image_path_original)) query_img_cropped_box = query_img.crop(query_bbox) query_img_cropped_box.save(os.path.join(gt_image_path, classfilename)) def convert_the_box_from_xywh(box, imsize): lx = float(box[0]) / imsize.w ty = float(box[1]) / imsize.h rx = lx + float(box[2]) / imsize.w by = ty + float(box[3]) / imsize.h return lx, ty, rx, by def read_boxes_from(file_with_boxes): with open(file_with_boxes, "r") as fo: lines = fo.readlines() boxes = [[int(s) for s in line.split(" ")] for line in lines if line] return boxes def get_box_file_for_image_file(image_filename): return image_filename.split(".")[0] + ".txt" def get_the_boxes(image_filename): file_with_boxes = os.path.join(image_path, get_box_file_for_image_file(image_filename)) # get image size - recompute boxes boxes = read_boxes_from(file_with_boxes) img = read_image(os.path.join(image_path, image_filename)) imsize = FeatureMapSize(img=img) # choose the correct box if have two of them # From INSTRE documentation: # Specially, for each tuple-class in INSTRE-M, there are two corresponding object classes in INSTRE-S1. # In each annotation file for a INSTRE-M image, the first line records the object labeled as [a] in INSTRE-S1 # and the second line records the object labeled as [b] in INSTRE-S1. # # CAUTION! 
the matlab file has boxes in x1, y1, x2, y2, but the .txt files in x, y, w, h query_path_split = query_image_path_original.split("/") image_filename_split = image_filename.split("/") if query_path_split[0].lower() == "instre-s1" and image_filename_split[0].lower() == "instre-m": assert len(boxes) == 2, f"INSTRE-M images should have exactly two boxes, but have {boxes}" assert query_path_split[1][2] in ["a", "b"] i_box = 0 if query_path_split[1][2] == "a" else 1 boxes = [convert_the_box_from_xywh(boxes[i_box], imsize)] elif query_path_split[0].lower() == "instre-s1" and image_filename_split[0].lower() == "instre-s1" or \ query_path_split[0].lower() == "instre-s2" and image_filename_split[0].lower() == "instre-s2": boxes = [convert_the_box_from_xywh(box, imsize) for box in boxes] else: raise RuntimeError(f"Should not be happening, query {query_image_path_original}, image {image_filename}, boxes {boxes}") return boxes for image_id in query_positives: # add one bbox to the annotation # required_columns = ["imageid", "imagefilename", "classid", "classfilename", "gtbboxid", "difficult", "lx", "ty", "rx", "by"] image_file_name = str(annotation_data["imlist"][image_id][0]) boxes = get_the_boxes(image_file_name) for box in boxes: item = OrderedDict() item["gtbboxid"] = len(gtboxframe) item["classid"] = classid item["classfilename"] = classfilename item["imageid"] = image_id assert annotation_data["imlist"][image_id].size == 1 item["imagefilename"] = image_file_name item["difficult"] = 0 item["lx"], item["ty"], item["rx"], item["by"] = box gtboxframe.append(item) gtboxframe = pd.DataFrame(gtboxframe) gtboxframe.to_csv(classdatafile) gtboxframe = read_annotation_file(classdatafile) # get these automatically from gtboxframe image_ids = None image_file_names = None # define a subset split (using closure) subset_name = name.lower() assert subset_name.startswith("instre"), "" subset_name = subset_name[len("instre"):] subsets = ["all", "s1-train", "s1-val", "s1-test", "s2-train", "s2-val", "s2-test"] found_subset = False for subset in subsets: if subset_name == "-"+subset: found_subset = subset break assert found_subset, "Could not identify subset {}".format(subset_name) if subset == "all": pass elif subset in ["s1-train", "s1-val", "s1-test"]: gtboxframe = gtboxframe[gtboxframe.classfilename.str.contains("INSTRE-S1")] classes = gtboxframe.classfilename.drop_duplicates() if subset == "s1-train": classes = classes[:len(classes) * 75 // 100] # first 75% elif subset == "s1-test": classes = classes[len(classes) * 8 // 10:] # last 20% else: # "s1-val" classes = classes[len(classes) * 75 // 100 : len(classes) * 8 // 10] # 5% gtboxframe = gtboxframe[gtboxframe.classfilename.isin(classes)] elif subset in ["s2-train", "s2-val", "s2-test"]: gtboxframe = gtboxframe[gtboxframe.classfilename.str.contains("INSTRE-S2")] classes = gtboxframe.classfilename.drop_duplicates() if subset == "s2-train": classes = classes[:len(classes) * 75 // 100] # first 75% elif subset == "s2-test": classes = classes[len(classes) * 8 // 10:] # last 20% else: # "s2-val" classes = classes[len(classes) * 75 // 100 : len(classes) * 8 // 10] # 5% gtboxframe = gtboxframe[gtboxframe.classfilename.isin(classes)] else: raise(RuntimeError("Unknown subset {0}".format(subset))) dataset = DatasetOneShotDetection(gtboxframe, gt_image_path, image_path, name, image_size, eval_scale, image_ids=image_ids, image_file_names=image_file_names, cache_images=cache_images, no_image_reading=no_image_reading, logger_prefix=logger_prefix) return dataset def 
build_imagenet_test_episodes(subset_name, data_path, logger): episode_id = int(subset_name.split('-')[-1]) epi_data_name = "epi_inloc_in_domain_1_5_10_500" image_size = 1000 dataset_path = os.path.join(data_path, "ImageNet-RepMet") roidb_path = os.path.join(dataset_path, "RepMet_CVPR2019_data", "data", "Imagenet_LOC", "voc_inloc_roidb.pkl") with open(roidb_path, 'rb') as fid: roidb = pickle.load(fid, encoding='latin1') episodes_path = os.path.join(dataset_path, "RepMet_CVPR2019_data", "data", "Imagenet_LOC", "episodes", f"{epi_data_name}.pkl") with open(episodes_path, 'rb') as fid: episode_data = pickle.load(fid, encoding='latin1') logger.info(f"Extracting episode {episode_id} out of {len(episode_data)}") episode = episode_data[episode_id] dataset_image_path = os.path.join(data_path, "ImageNet-RepMet", "ILSVRC") SWAP_IMG_PATH_SRC = "/dccstor/leonidka1/data/imagenet/ILSVRC/" def _get_image_path(image_path): image_path = image_path.replace(SWAP_IMG_PATH_SRC, "") return image_path # episode["epi_cats"] - list of class ids # episode["query_images"] - list of path to the episode images # episode["epi_cats_names"] - list of names of the episode classes # episode["train_boxes"] - list of box data about class boxes num_classes = len(episode["epi_cats"]) gt_path = os.path.join(dataset_path, epi_data_name) gt_path = os.path.join(gt_path, f"classes_episode_{episode_id}") gt_image_path = os.path.join(gt_path, "images") mkdir(gt_image_path) classdatafile = os.path.join(gt_path, f"classes_{epi_data_name}_episode_{episode_id}.csv") if not os.path.isfile(classdatafile): logger.info(f"Did not find data file {classdatafile}, creating it from the RepMet source data") # create the annotation file from the raw dataset gtboxframe = [] # will be creating dataframe from a list of dicts gt_filename_by_id = {} for i_class in range(len(episode["train_boxes"])): train_boxes_data = episode["train_boxes"][i_class] class_id = train_boxes_data[0] assert class_id in episode["epi_cats"], f"class_id={class_id} should be listed in episode['epi_cats']={episode['epi_cats']}" query_image_path_original = _get_image_path(train_boxes_data[2]) query_bbox = train_boxes_data[3] query_bbox = query_bbox.flatten() classfilename = f"{class_id:05d}_{'_'.join(query_image_path_original.split('/'))}" if class_id not in gt_filename_by_id: logger.info(f"Adding query #{len(gt_filename_by_id)} - {class_id}: {query_image_path_original}") if not os.path.isfile(classfilename) or True: query_img = read_image(os.path.join(dataset_image_path, query_image_path_original)) query_img_cropped_box = query_img.crop(query_bbox) query_img_cropped_box.save(os.path.join(gt_image_path, classfilename)) gt_filename_by_id[class_id] = classfilename else: logger.info(f"WARNING: class {class_id} has multiple entries in GT image {query_image_path_original}, using the first box as GT") for class_id in episode["epi_cats"]: if class_id not in gt_filename_by_id: logger.info(f"WARNING: ground truth for class {class_id} not found in episode {episode_id}") def convert_the_box_to_relative(box, imsize): lx = float(box[0]) / imsize.w ty = float(box[1]) / imsize.h rx = float(box[2]) / imsize.w by = float(box[3]) / imsize.h return lx, ty, rx, by def find_image_path_in_roidb(image_file_name, roidb): for i_image, im_data in enumerate(roidb["roidb"]): if im_data["flipped"]: raise RuntimeError(f"Image {i_image} data {im_data} has flipped flag on") if im_data["image"] == image_file_name: return i_image return None for image_file_name in episode["query_images"]: # add one bbox to the 
annotation # required_columns = ["imageid", "imagefilename", "classid", "classfilename", "gtbboxid", "difficult", "lx", "ty", "rx", "by"] image_id = find_image_path_in_roidb(image_file_name, roidb) im_data = roidb["roidb"][image_id] image_file_name = _get_image_path(image_file_name) imsize = FeatureMapSize(w=int(im_data["width"]), h=int(im_data["height"])) boxes_xyxy = im_data["boxes"] classes = im_data["gt_classes"] for box, class_id in zip(boxes_xyxy, classes): if class_id in gt_filename_by_id: item = OrderedDict() item["imageid"] = int(image_id) item["imagefilename"] = image_file_name item["classid"] = int(class_id) item["classfilename"] = gt_filename_by_id[class_id] item["gtbboxid"] = len(gtboxframe) item["difficult"] = 0 item["lx"], item["ty"], item["rx"], item["by"] = convert_the_box_to_relative(box, imsize) gtboxframe.append(item) gtboxframe = pd.DataFrame(gtboxframe) gtboxframe.to_csv(classdatafile) gtboxframe = pd.read_csv(classdatafile) return gtboxframe, gt_image_path, dataset_image_path, image_size def build_imagenet_trainval(subset_name, data_path, logger): image_size = 1000 dataset_path = os.path.join(data_path, "ImageNet-RepMet", "ILSVRC") repmet_test_classes_path = os.path.join(data_path, "ImageNet-RepMet", "repmet_test_classes.txt") annotation_path = os.path.join(dataset_path, "Annotations", "CLS-LOC") image_path = os.path.join(dataset_path, "Data", "CLS-LOC") image_ext = ".JPEG" # get test classes to exclude with open(repmet_test_classes_path, "r") as fid: repmet_test_classes = fid.readlines() classes_to_exclude = {} for cl in repmet_test_classes: classes_to_exclude[cl[:-1]] = 1 # cut off the EOL symbol # get annotations if subset_name.startswith("train"): list_of_annotations = glob.glob(os.path.join(annotation_path, "train", "*", "*.xml")) else: list_of_annotations = glob.glob(os.path.join(annotation_path, "val", "*.xml")) list_of_annotations = sorted(list_of_annotations) def read_annotation(xml_file: str): tree = ElementTree.parse(xml_file) root = tree.getroot() filename = root.find('filename').text im_size = root.find("size") width = int(im_size.find("width").text) height = int(im_size.find("height").text) im_size = FeatureMapSize(h=height, w=width) bboxes = [] class_ids = [] difficult_flags = [] for boxes in root.iter("object"): ymin, xmin, ymax, xmax = None, None, None, None difficult_flag = int(boxes.find("difficult").text) class_id = boxes.find("name").text for box in boxes.findall("bndbox"): assert ymin is None ymin = int(box.find("ymin").text) xmin = int(box.find("xmin").text) ymax = int(box.find("ymax").text) xmax = int(box.find("xmax").text) cur_box = [xmin, ymin, xmax, ymax] bboxes.append(cur_box) difficult_flags.append(difficult_flag) class_ids.append(class_id) return filename, bboxes, class_ids, difficult_flags, im_size def convert_the_box_to_relative(box, imsize): lx = float(box[0]) / imsize.w ty = float(box[1]) / imsize.h rx = float(box[2]) / imsize.w by = float(box[3]) / imsize.h return lx, ty, rx, by gtboxframe = [] # will be creating dataframe from a list of dicts for image_id, annotation_file in enumerate(list_of_annotations): filename, bboxes, class_ids, difficult_flags, im_size = read_annotation(annotation_file) if subset_name == "train": class_id = filename.split("_")[0] if class_id in classes_to_exclude: # skip the entire images associated with classes to exclude continue image_file_name = os.path.join("train", class_id, filename + image_ext) else: image_file_name = os.path.join("val", filename + image_ext) for bbox, class_id, difficult_flag in 
zip(bboxes, class_ids, difficult_flags): if class_id in classes_to_exclude: # skip annotations from classes that need to be excluded continue item = OrderedDict() item["imageid"] = image_id item["imagefilename"] = image_file_name item["classid"] = int(class_id[1:]) # cut off "n" at the beginning of an ImageNet class item["classfilename"] = None item["gtbboxid"] = len(gtboxframe) item["difficult"] = difficult_flag item["lx"], item["ty"], item["rx"], item["by"] = convert_the_box_to_relative(bbox, im_size) gtboxframe.append(item) if subset_name.startswith("val-"): # subsample validation set to have at most 5k boxes new_val_size = int(subset_name.split('-')[-1]) assert 0 < new_val_size <= len(gtboxframe), f"New size of validation {new_val_size} should be positive and <= {len(gtboxframe)}" gtboxframe = gtboxframe[::len(gtboxframe)//new_val_size] gtboxframe = gtboxframe[:new_val_size] gtboxframe = pd.DataFrame(gtboxframe) gt_image_path = None return gtboxframe, gt_image_path, image_path, image_size def build_repmet_dataset(data_path, name, eval_scale=None, cache_images=False, no_image_reading=False, logger_prefix="OS2D"): logger = logging.getLogger(f"{logger_prefix}.dataset") logger.info("Preparing the dataset from the RepMet format: version {0}, eval scale {1}, image caching {2}".format(name, eval_scale, cache_images)) # The RepMet format is defined here: https://github.com/jshtok/RepMet # define a subset split (using closure) subset_name = name.lower() assert subset_name.startswith("imagenet-repmet"), "" subset_name = subset_name[len("imagenet-repmet"):] subsets = ["test-episode", "train", "val"] found_subset = False episode_id = None for subset in subsets: if subset_name.startswith("-"+subset): found_subset = subset break assert found_subset, "Could not identify subset {}".format(subset_name) subset_name = subset_name[1:] # cut off dash at the beginning if found_subset == "test-episode": gtboxframe, gt_image_path, dataset_image_path, image_size = \ build_imagenet_test_episodes(subset_name, data_path, logger) else: gtboxframe, gt_image_path, dataset_image_path, image_size = \ build_imagenet_trainval(subset_name, data_path, logger) # get these automatically from gtboxframe image_ids = None image_file_names = None dataset = DatasetOneShotDetection(gtboxframe, gt_image_path, dataset_image_path, name, image_size, eval_scale, image_ids=image_ids, image_file_names=image_file_names, cache_images=cache_images, no_image_reading=no_image_reading, logger_prefix=logger_prefix) return dataset def build_dataset_by_name(data_path, name, eval_scale, cache_images=False, no_image_reading=False, logger_prefix="OS2D"): if name.lower().startswith("grozi"): return build_grozi_dataset(data_path, name, eval_scale, cache_images=cache_images, no_image_reading=no_image_reading, logger_prefix=logger_prefix) elif name.lower().startswith("instre"): return build_instre_dataset(data_path, name, eval_scale, cache_images=cache_images, no_image_reading=no_image_reading, logger_prefix=logger_prefix) elif name.lower().startswith("imagenet-repmet"): return build_repmet_dataset(data_path, name, eval_scale, cache_images=cache_images, no_image_reading=no_image_reading, logger_prefix=logger_prefix) else: return build_eval_dataset(data_path, name, eval_scale, cache_images=cache_images, no_image_reading=no_image_reading, logger_prefix=logger_prefix) class DatasetOneShotDetection(data.Dataset): """Dataset to load images/labels/boxes from a dataframe. 
""" def __init__(self, gtboxframe, gt_path, image_path, name, image_size, eval_scale, cache_images=False, no_image_reading=False, image_ids=None, image_file_names=None, logger_prefix="OS2D"): self.logger = logging.getLogger(f"{logger_prefix}.dataset") self.name = name self.image_size = image_size self.eval_scale = eval_scale self.cache_images = cache_images self.gtboxframe = gtboxframe required_columns = {"imageid", "imagefilename", "classid", "classfilename", "gtbboxid", "difficult", "lx", "ty", "rx", "by"} assert required_columns.issubset(self.gtboxframe.columns), "Missing columns in gtboxframe: {}".format(required_columns - set(self.gtboxframe.columns)) self.gt_path = gt_path self.image_path = image_path self.have_images_read = False if image_ids is not None and image_file_names is not None: self.image_ids = image_ids self.image_file_names = image_file_names else: unique_images = gtboxframe[["imageid", "imagefilename"]].drop_duplicates() self.image_ids = list(unique_images["imageid"]) self.image_file_names = list(unique_images["imagefilename"]) if not no_image_reading: # read GT images self._read_dataset_gt_images() # read data images self._read_dataset_images() self.have_images_read=True self.num_images = len(self.image_ids) self.num_boxes = len(self.gtboxframe) self.num_classes = len(self.gtboxframe["classfilename"].unique()) self.logger.info("Loaded dataset {0} with {1} images, {2} boxes, {3} classes".format( self.name, self.num_images, self.num_boxes, self.num_classes )) def get_name(self): return self.name def get_eval_scale(self): return self.eval_scale def get_class_ids(self): return self.gtboxframe["classid"].unique() def get_class_ids_for_image_ids(self, image_ids): dataframe = self.get_dataframe_for_image_ids(image_ids) return dataframe["classid"].unique() def get_dataframe_for_image_ids(self, image_ids): return self.gtboxframe[self.gtboxframe["imageid"].isin(image_ids)] def get_image_size_for_image_id(self, image_id): return self.image_size_per_image_id[image_id] def _read_dataset_images(self): # create caches self.image_path_per_image_id = OrderedDict() self.image_size_per_image_id = OrderedDict() self.image_per_image_id = OrderedDict() for image_id, image_file in zip(self.image_ids, self.image_file_names): if image_id not in self.image_path_per_image_id: # store the image path img_path = os.path.join(self.image_path, image_file) self.image_path_per_image_id[image_id] = img_path # get image size (needed for bucketing) img = self._get_dataset_image_by_id(image_id) self.image_size_per_image_id[image_id] = FeatureMapSize(img=img) self.logger.info("{1} {0} data images".format(len(self.image_path_per_image_id), "Read" if self.cache_images else "Found")) def _read_dataset_gt_images(self): self.gt_images_per_classid = OrderedDict() if self.gt_path is not None: for index, row in self.gtboxframe.iterrows(): gt_file = row["classfilename"] class_id = row["classid"] if class_id not in self.gt_images_per_classid: # if the GT image is not read save it to the dataset self.gt_images_per_classid[class_id] = read_image(os.path.join(self.gt_path, gt_file)) self.logger.info("Read {0} GT images".format(len(self.gt_images_per_classid))) else: self.logger.info("GT images are not provided") def split_images_into_buckets_by_size(self): buckets = [] bucket_image_size = [] for image_id, s in self.image_size_per_image_id.items(): if s not in bucket_image_size: # create a new empty bucket bucket_image_size.append(s) buckets.append([]) # add item to the suitable bucket i_bucket = 
bucket_image_size.index(s) buckets[i_bucket].append(image_id) return buckets def _get_dataset_image_by_id(self, image_id): assert image_id in self.image_path_per_image_id, "Can work only with checked images" if image_id not in self.image_per_image_id : img_path = self.image_path_per_image_id[image_id] img = read_image(img_path) img_size = FeatureMapSize(img=img) if max(img_size.w, img_size.h) != self.image_size: h, w = get_image_size_after_resize_preserving_aspect_ratio(img_size.h, img_size.w, self.image_size) img = img.resize((w, h), resample=Image.ANTIALIAS) # resize images in case they were not of the correct size on disk if self.cache_images: self.image_per_image_id[image_id] = img else: img = self.image_per_image_id[image_id] return img @staticmethod def get_boxes_from_image_dataframe(image_data, image_size): if not image_data.empty: # get the labels label_ids_global = torch.tensor(list(image_data["classid"]), dtype=torch.long) difficult_flag = torch.tensor(list(image_data["difficult"] == 1), dtype=torch.bool) # get the boxes boxes = image_data[["lx", "ty", "rx", "by"]].to_numpy() # renorm boxes using the image size boxes[:, 0] *= image_size.w boxes[:, 2] *= image_size.w boxes[:, 1] *= image_size.h boxes[:, 3] *= image_size.h boxes = torch.FloatTensor(boxes) boxes = BoxList(boxes, image_size=image_size, mode="xyxy") else: boxes = BoxList.create_empty(image_size) label_ids_global = torch.tensor([], dtype=torch.long) difficult_flag = torch.tensor([], dtype=torch.bool) boxes.add_field("labels", label_ids_global) boxes.add_field("difficult", difficult_flag) boxes.add_field("labels_original", label_ids_global) boxes.add_field("difficult_original", difficult_flag) return boxes def get_image_annotation_for_imageid(self, image_id): # get data for this image image_data = self.gtboxframe[self.gtboxframe["imageid"] == image_id] img_size = self.image_size_per_image_id[image_id] boxes = self.get_boxes_from_image_dataframe(image_data, img_size) return boxes def copy_subset(self, subset_size=None, set_eval_mode=True): dataset_subset = copy.copy(self) # shallow copy if subset_size is not None: dataset_subset.num_images = min(subset_size, dataset_subset.num_images) dataset_subset.image_ids = self.image_ids[:dataset_subset.num_images] dataset_subset.image_file_names = self.image_file_names[:dataset_subset.num_images] image_mask = dataset_subset.gtboxframe["imageid"].isin(dataset_subset.image_ids) dataset_subset.gtboxframe = dataset_subset.gtboxframe[image_mask] dataset_subset.name = self.name + "-subset{}".format(subset_size) # reload data dataset_subset._read_dataset_gt_images() dataset_subset._read_dataset_images() if set_eval_mode: # turn off data augmentation dataset_subset.data_augmentation = None return dataset_subset
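

# ---------------------------------------------------------------------------
# Usage sketch (not part of the original module). A minimal, hypothetical
# example of building a dataset with the helpers above; the data path,
# dataset name and eval scale are assumptions and must match your local
# data layout.
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    # Build the GroZi-3.2k validation split with new classes (assumes the
    # data/grozi/... files have been downloaded as build_grozi_dataset expects).
    dataset = build_dataset_by_name("data", "grozi-val-new-cl",
                                    eval_scale=1280, cache_images=False)
    print(dataset.get_name(), dataset.num_images, dataset.num_boxes, dataset.num_classes)
    # Ground-truth boxes for the first image, as a BoxList in "xyxy" mode.
    first_image_id = dataset.image_ids[0]
    print(dataset.get_image_annotation_for_imageid(first_image_id))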
{"hexsha": "d1c99bea3c6b745c86eced1570ee0cda18ada1b9", "size": 35253, "ext": "py", "lang": "Python", "max_stars_repo_path": "os2d/data/dataset.py", "max_stars_repo_name": "MenshovSergey/DetectChess", "max_stars_repo_head_hexsha": "1baea0d688723b2624d83be001b00870cf1ae634", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 144, "max_stars_repo_stars_event_min_datetime": "2020-03-17T06:22:11.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T01:58:07.000Z", "max_issues_repo_path": "os2d/data/dataset.py", "max_issues_repo_name": "MenshovSergey/DetectChess", "max_issues_repo_head_hexsha": "1baea0d688723b2624d83be001b00870cf1ae634", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 27, "max_issues_repo_issues_event_min_datetime": "2020-04-13T11:14:45.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-25T10:03:52.000Z", "max_forks_repo_path": "os2d/data/dataset.py", "max_forks_repo_name": "MenshovSergey/DetectChess", "max_forks_repo_head_hexsha": "1baea0d688723b2624d83be001b00870cf1ae634", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2020-03-17T07:29:24.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T01:55:48.000Z", "avg_line_length": 47.9632653061, "max_line_length": 158, "alphanum_fraction": 0.6470087652, "include": true, "reason": "import numpy,import scipy", "num_tokens": 8206}
import torch
import numpy as np
from gym import spaces
from stable_baselines3.dqn.policies import QNetwork
from sb3_contrib.qrdqn.policies import QuantileNetwork


class OnlyObsSingleActionModel(torch.nn.Module):
    """Wraps an SB3 policy/Q-network so it can be queried on (scaled) observations only."""

    def __init__(self, model, num_classes, scaler, batch_size=50):
        super().__init__()
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.scaler = torch.FloatTensor(scaler).to(self.device)
        # action = np.expand_dims(np.arange(num_classes), 1)
        # self.action = torch.FloatTensor(action).to(self.device)
        self.batch_size = batch_size
        self.model = model
        # Model types: 3 = DQN Q-network, 2 = QR-DQN quantile network,
        # 1 = actor-critic with continuous actions, 0 = actor-critic with discrete actions.
        self.model_type = -1
        if isinstance(model, QNetwork):
            self.model_type = 3
        elif isinstance(model, QuantileNetwork):
            self.model_type = 2
        elif isinstance(model.action_space, spaces.Box):
            self.model_type = 1
        elif isinstance(model.action_space, spaces.Discrete):
            self.model_type = 0

    def forward(self, rgb):
        # rgb_ = torch.tile(rgb, (self.batch_size, 1)) if rgb.size(0) == 1 else rgb
        _rgb = rgb * self.scaler
        if self.model_type <= 1:
            # Actor-critic policy: read the action distribution off the latent codes.
            latent_pi, _, latent_sde = self.model._get_latent(_rgb)
            distribution = self.model._get_action_dist_from_latent(latent_pi, latent_sde)
            if self.model_type == 1:
                return distribution.distribution.mean    # mean of continuous actions
            return distribution.distribution.logits      # logits over discrete actions
        if self.model_type > 2:
            return self.model.forward(_rgb)              # DQN: Q-values directly
        return self.model.forward(_rgb).mean(dim=1)      # QR-DQN: average over quantiles

    def predict(self, rgb):
        values = self.forward(rgb)
        if self.model_type == 1:
            return values
        # Greedy action indices, shaped (batch, 1).
        return values.argmax(dim=1).reshape(-1).unsqueeze(1)

    def true_forward(self, rgb):
        return self.model.predict(rgb * self.scaler.cpu().numpy(), deterministic=True)[0]

    def prob_forward(self, rgb):
        rgb_ = torch.tile(rgb, (self.batch_size, 1)) if rgb.size(0) == 1 else rgb
        latent_pi, _, latent_sde = self.model._get_latent(rgb_ * self.scaler)
        distribution = self.model._get_action_dist_from_latent(latent_pi, latent_sde)
        return distribution.distribution.probs
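

# ---------------------------------------------------------------------------
# Usage sketch (not part of the original module). Wraps a freshly trained
# SB3 DQN Q-network so that `predict` returns greedy action indices; the
# environment id, training budget and unit scaler are assumptions for
# illustration only.
if __name__ == "__main__":
    from stable_baselines3 import DQN

    dqn = DQN("MlpPolicy", "CartPole-v1").learn(total_timesteps=1000)
    scaler = np.ones(dqn.observation_space.shape, dtype=np.float32)  # identity scaling
    wrapper = OnlyObsSingleActionModel(dqn.q_net, num_classes=dqn.action_space.n,
                                       scaler=scaler)
    obs = torch.zeros(1, *dqn.observation_space.shape, device=wrapper.device)
    print(wrapper.forward(obs))   # Q-values, shape (1, n_actions)
    print(wrapper.predict(obs))   # greedy action index, shape (1, 1)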
{"hexsha": "752ef74daf873c1233e184e20a462b1abb7efff8", "size": 2256, "ext": "py", "lang": "Python", "max_stars_repo_path": "randsm/model.py", "max_stars_repo_name": "anvinhnguyendinh/DiscreteRSonRL", "max_stars_repo_head_hexsha": "af9433f56c6b72f17e0fcc97c0e4ebddeecf96b9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-07-16T12:45:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-27T03:45:21.000Z", "max_issues_repo_path": "randsm/model.py", "max_issues_repo_name": "anvinhnguyendinh/DiscreteRSonRL", "max_issues_repo_head_hexsha": "af9433f56c6b72f17e0fcc97c0e4ebddeecf96b9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "randsm/model.py", "max_forks_repo_name": "anvinhnguyendinh/DiscreteRSonRL", "max_forks_repo_head_hexsha": "af9433f56c6b72f17e0fcc97c0e4ebddeecf96b9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6, "max_line_length": 90, "alphanum_fraction": 0.6515957447, "include": true, "reason": "import numpy", "num_tokens": 532}
The chapter addresses the problem of optimally controlling an industrial micro-grid featuring a large share of renewable energy and highly volatile electricity prices. We consider a micro-grid as a localized group of energy sources, loads and storage components that can operate in two distinct modes: grid-connected mode and isolated mode. In grid-connected mode, the micro-grid system can buy/sell energy from/to the macro-grid in order to meet the load demand; the challenge in connected mode is to reduce the total energy cost. In isolated mode, the micro-grid can function autonomously in the sense that enough energy can be generated in the grid to supply the loads; in this case the challenge is to meet the load demand and to balance the power flow between the components of the grid. In our setting, the grid operates in mixed mode; in other words, the industrial micro-grid we are considering can switch from one mode to the other.

\subsection{System Model}
\label{subsec:31}
% For figures use
%
\begin{figure}[b]
%\sidecaption
% Use the relevant command for your figure-insertion program
% to insert the figure file.
% For example, with the graphicx style use
\includegraphics[scale=.65]{images/System_Model}
%
% If no graphics program available, insert a blank space i.e. use
%\picplace{5cm}{2cm} % Give the correct figure height and width in cm
%
\caption{System Overview. Every grid-component is controlled by a reinforcement learning agent.}
\label{fig:system_model}       % Give a unique label
\end{figure}

As illustrated in Fig.~\ref{fig:system_model}, the micro-grid system consists of the following components: renewable energy sources such as photovoltaic systems, electricity loads representing production machines, regenerative energy consumers, energy storage systems such as batteries, and auxiliary process loads such as air compressors and cooling systems. The components share the same energy bus, enabling a transfer of electrical energy between the components. The proposed system model is equally applicable to ac and dc micro-grids, and the components can be formally described as follows:
\begin{itemize}
\item{\textit{\textbf{Renewable energy sources}} are renewable energy generators such as photovoltaic systems or wind turbines. The power generated depends on environmental and weather factors such as the solar radiation profile and the wind speed; it is therefore not predictable. Energy generators are characterized by two operating states: $on$ and $off$. The generators are considered to be in the $off$-state when the electrical power generated by the energy sources is negligible, for example because of weather conditions such as cloudy weather or nighttime. We assume that if the generators are not in the $off$-state, the power generated is constant and equal to the maximum electrical power that can be produced by the source. Therefore
%
\begin{equation}
P_G =
\begin{cases}
0 & \text{if}\ state = off \\
P_{G,max} & \text{otherwise}
\end{cases}
\;
\end{equation}
%
}
\item{\textit{\textbf{Energy storage systems}}. In order to take full advantage of renewable energy sources, it is vital to have energy storage systems capable of handling variations in energy production. In our environment, we consider batteries or super-capacitor systems that consume a constant power when charging and release a constant power when discharging.
In addition, the operation of energy storage systems has to be carefully designed and controlled to protect them from two kinds of damage: overcharging and overdischarging. Therefore, each storage system is also characterized by a maximum and a minimum state-of-charge (SoC). The energy storage systems have three required states: $charging$, $discharging$ and $idle$. The electrical power consumed or released by the component highly depends on the current operational state and the component\rq{}s state-of-charge.}
\item{\textit{\textbf{Energy loads}} represent production machines that consume energy in order to execute a production task. A production machine can have different operation states: $powered\ off$, $executing$, $stand\ by$, etc. Depending on the production task and the operation state, a production machine can follow a predefined or a characteristic load profile. For a fixed set of production tasks, the load profile of a machine can be measured and approximated. In our setting, we consider three basic load profiles, as illustrated in Fig.~\ref{fig:load_profile}. Any machine\rq{}s load profile can be seen as a linear combination of these basic load profiles. In isolated mode, the micro-grid cannot guarantee a continuous power supply to the loads because it is strongly influenced by the unpredictable power generation. When the generated power is not able to drive the loads, the noncritical loads must shut down.}
% For figures use
%
\begin{figure}[h!]
%\sidecaption
% Use the relevant command for your figure-insertion program
% to insert the figure file.
% For example, with the graphicx style use
\includegraphics[scale=.40]{images/load_profile}
%
% If no graphics program available, insert a blank space i.e. use
%\picplace{5cm}{2cm} % Give the correct figure height and width in cm
%
\caption{Basic load profiles of the energy loads. In $execute$-state, the electrical power consumed $P_i$ can be constant and equal to $P_{max}$ (a), increase linearly (b) or exponentially (c) to $P_{max}$.}
\label{fig:load_profile}       % Give a unique label
\end{figure}

\item{\textit{\textbf{Regenerative energy consumers}} are energy consumers that, in addition, can recuperate energy for a small period of time (less than a minute). In this case we consider the load to be constant and negative. Fig.~\ref{fig:recuperative_load_profile} shows a typical load profile.}
% For figures use
%
\begin{figure}[h!]
\sidecaption
% Use the relevant command for your figure-insertion program
% to insert the figure file.
% For example, with the graphicx style use
\includegraphics[scale=.40]{images/Recuperative_Energy_Load_Profile}
%
% If no graphics program available, insert a blank space i.e. use
%\picplace{5cm}{2cm} % Give the correct figure height and width in cm
%
\caption{Basic load profiles of regenerative energy consumers. In $execute$-state, the electrical power consumed $P_i$ can be negative for a short period of time.}
\label{fig:recuperative_load_profile}       % Give a unique label
\end{figure}

\item{\textit{\textbf{Auxiliary processes}} are energy-consuming processes on the factory floor that do not execute a manufacturing task directly, but are required by the production task; examples are compressors or cooling systems. Auxiliary processes have two required states: $off$ and $on$.}
\item{The \textit{\textbf{grid component}} represents the main grid and is responsible for buying/selling electricity from/to the main grid. It has three states: $buying$, $selling$ and $idle$.
If the grid agent is in the $idle$-state or in the $off$-state, the industrial micro-grid can be considered to operate in isolated mode because there is no interaction with the main grid.}
\end{itemize}

\subsection{Problem Formulation}\label{subsec:32}
The main objective is to minimize the total cost of the energy bought from the main grid while achieving a high productivity, taking into account future energy prices and weather-dependent renewable energy generators. Let $\mathrm{M}$ denote the set of components of the micro-grid (or controllable machine components), such that $M_i \in \mathrm{M}, \forall i \in [0, M-1]$ with $M \in \mathbb{N}$. We assume that each component $M_i$ can take an energy state $s_{i,t}$ (i.e. ``stopped'', ``running'', ``aborted'', ``standby'', etc.) at the time step $t \in \mathbb{N}$, and that the dynamic power consumption $P_i^t$ of $M_i$ solely depends on the current state $s_{i,t}$ of the component and the current time step $t$: $P_i^t = P_i(t, s_{i,t})$. The total energy $E_i$ requested/sold from/to the main grid by component $M_i$ over the complete time horizon $T \in \mathbb{N}$, sampled at time intervals $\triangle t$, is therefore the accumulated power consumed/generated by the component during the time horizon. Notice that $P_i^t$ can also be negative, for example in the case of renewable energy sources or regenerative energy consumers.
%
\begin{equation}
E_i = \sum_{t=0}^{T-1}{P_i^t \cdot \triangle t}, \quad \forall i \in [0, M-1]
\end{equation}
%
If $\lambda_t^- \in \mathbb{R}$ is the actual energy price and $\lambda_t^\sim \in \mathbb{R}$ the forecasted energy price at a time step $t$, the optimization problem can be formulated for the time horizon $T$ as follows:
%
\begin{equation}
\label{eq:problem}
\text{minimize} \sum_{k=0}^{t}{\lambda_k^- \cdot \left( \sum_{i=0}^{M-1}{P_i(k, s_{i,k}) \cdot \triangle t} \right)} + \sum_{l=t+1}^{T-1}{\lambda_l^\sim \cdot \left( \sum_{i=0}^{M-1}{P_i(l, s_{i,l}) \cdot \triangle t} \right)}
\end{equation}

If $a_l(s_{i,l-1}, s_{i,l})$ denotes the action of changing the state of the component $M_i$ at time $l$ from the state $s_{i,l-1}$ to the state $s_{i,l}$, Eq.~\ref{eq:problem} can be rewritten as follows:
\begin{equation}
\label{eq:problem_reformulated}
\text{minimize} \sum_{k=0}^{t}{\lambda_k^- \cdot \left( \sum_{i=0}^{M-1}{P_i(k, s_{i,k}) \cdot \triangle t} \right)} + \sum_{l=t+1}^{T-1}{\lambda_l^\sim \cdot \left( \sum_{i=0}^{M-1}{P_i(l, a_l(s_{i,l-1}, s_{i,l})) \cdot \triangle t} \right)}
\end{equation}

Several constraints should be taken into consideration. One of them is the power balance between the energy demand of the micro-grid and the energy supply from the main grid:
\begin{equation}
\label{eq:constraint_energy_balance}
\sum_{i=0}^{M-1}{E_i} \leq E_{max}^\sim
\end{equation}
where $E_i$ denotes the total energy demand of the micro-grid component $M_i$ over the time horizon $T$ and $E_{max}^\sim$ the total available energy from the main grid. In addition, the total energy consumption of the micro-grid at each time step $t$ should stay within the upper and lower bounds given by the overall load profile and a tolerance interval.
This constraint can be expressed as follows:
\begin{equation}
\label{eq:constraint_load_profile}
L_{target}^\sim - \triangle L^\sim \leq \sum_{i=0}^{M-1}{P_i(t, s_{i,t})} \leq L_{target}^\sim + \triangle L^\sim
\end{equation}
with $L_{target}^\sim$ being the target load at time step $t$ and $\triangle L^\sim$ a symmetric tolerance around $L_{target}^\sim$.

As expressed above, solving the optimization problem is equivalent to computing at each time step the optimal state (by choosing the action $a_l(s_{i,l-1}, s_{i,l})$) such that the resulting total energy cost of the micro-grid over a complete time horizon is globally minimized. If every component of the micro-grid is controlled by an agent, the optimization problem is equivalent to finding a coordinated strategy for all the agents. In this case, the main challenge lies in the volatility of future energy prices and in the direct dependence of future energy consumption on current decisions. Furthermore, other constraints such as throughput, production time, product quality, production cost, etc. have to be considered.

\subsection{Markov Game Formulation}
The formulated optimization problem can be transformed into a reinforcement learning task, where each agent controlling a component of the grid has to learn a policy that maximizes a cumulative reward signal derived from the objective function and conditioned by the constraints formulated in Eq.~\ref{eq:constraint_energy_balance} and Eq.~\ref{eq:constraint_load_profile}. If we consider the global state of the environment as the aggregation of the observations of all agents, then the observed state of the environment solely depends on the actions of all the agents and on the previously observed global state of the environment. In other words, no external factors besides the actions of all the agents affect the dynamics of the environment. Under this assumption, the multi-agent reinforcement learning task satisfies the Markov property and can therefore be formulated as a Markov decision process (MDP). In this work, we consider a multi-agent extension of Markov decision processes called observable Markov games \cite{Littman1994multiagent}. A Markov game for $N$ agents is defined by a set of states $\mathrm{S}$ describing the possible configurations of all agents, a set of actions $\mathrm{A}_1, \mathrm{A}_2, \ldots, \mathrm{A}_N$ and a set of observations $\mathrm{O}_1, \mathrm{O}_2, \ldots, \mathrm{O}_N$, one for each agent. To choose actions, each agent $i$ ($i = 1, 2, \ldots, N$) uses a stochastic policy $\pi_{\theta_i} : \mathrm{O}_i \times \mathrm{A}_i \mapsto [0, 1]$, where $\theta_i$ are the parameters of the policy. The environment produces the next state according to the state transition function $T : \mathrm{S} \times \mathrm{A}_1 \times \ldots \times \mathrm{A}_N \mapsto \mathrm{S}$. Each agent $i$ obtains rewards as a function of the state and the agent's action, $r_i : \mathrm{S} \times \mathrm{A}_i \mapsto \mathbb{R}$, and receives a private observation correlated with the environment state, $o_i : \mathrm{S} \mapsto \mathrm{O}_i$. The initial states are determined by a distribution $\rho : \mathrm{S} \mapsto [0, 1]$. Each agent $i$ aims to maximize its own total expected return $R_i = \sum_{t=0}^{T}{\gamma^t \cdot r_i^t}$, where $\gamma$ is the discount factor and $T$ is the time horizon.
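As a purely illustrative computation (the numbers are assumed for exposition and do not stem from any experiment): with $\gamma = 0.9$, a horizon $T = 2$ and rewards $r_i^0 = 1$, $r_i^1 = 0$ and $r_i^2 = 2$, agent $i$ collects the return
\begin{equation*}
R_i = \gamma^0 \cdot 1 + \gamma^1 \cdot 0 + \gamma^2 \cdot 2 = 1 + 0 + 0.81 \cdot 2 = 2.62 .
\end{equation*}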
The \textit{action-value function} is defined as $\mathrm{Q}^{\pi_i}(s_{i,t}, a_t^i) = \mathbb{E}[\mathrm{R}_i^t | s_{i,t}, a_t^i]$, while the \textit{state-value function} is defined as $\mathrm{V}^{\pi_i}(s_{i,t}) = \mathbb{E}[\mathrm{R}_i^t | s_{i,t}]$. The \textit{advantage function} $\mathrm{A}^{\pi_i}(s_{i,t}, a_t^i) = \mathrm{Q}^{\pi_i}(s_{i,t}, a_t^i) - \mathrm{V}^{\pi_i}(s_{i,t})$ describes whether taking action $a_t^i$ in state $s_{i,t}$ is better or worse for agent $i$ than the average action of policy $\pi_{\theta_i}$.

For our micro-grid, we provide a uniform representation of the operational state of the grid components, illustrated in Fig.~\ref{fig:state_chart}. The state space of every agent is represented by the operational state of the grid component it controls, aggregated with the target load of the grid, the current time step within the production shift, the number of production tasks to execute, as well as the current energy price. Notice that we do not assume inter-dependencies between the production tasks. The action space of each agent is represented by all the actions that can change the operational state of a grid component (see Fig.~\ref{fig:state_chart}). The reward of each agent depends on the type of the grid component it controls and is therefore use-case dependent; see Eq.~\ref{eq:total_reward} in Section~\ref{subsec:42} for more details.
% For figures use
%
\begin{figure}[h!]
%\sidecaption
% Use the relevant command for your figure-insertion program
% to insert the figure file.
% For example, with the graphicx style use
\includegraphics[scale=.40]{images/StateMachine_EFlex}
%
% If no graphics program available, insert a blank space i.e. use
%\picplace{5cm}{2cm} % Give the correct figure height and width in cm
%
\caption{Uniform state representation of a grid component. Every component can be stopped, halted, suspended, powered up or aborted. Once the component reaches the $execute$-state, it begins to execute a production task, to generate energy, or to store/release energy.}
\label{fig:state_chart}       % Give a unique label
\end{figure}
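For intuition, consider a purely illustrative example with assumed values: if, for some state $s_{i,t}$, the estimates were $\mathrm{Q}^{\pi_i}(s_{i,t}, a_t^i) = 1.3$ and $\mathrm{V}^{\pi_i}(s_{i,t}) = 1.0$, then $\mathrm{A}^{\pi_i}(s_{i,t}, a_t^i) = 0.3$: taking $a_t^i$ (e.g. switching a storage component to $charging$) is expected to yield $0.3$ more return than the average behavior of $\pi_{\theta_i}$ in that state.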
{"hexsha": "3fe7f1fc597c7034b2931aae08a6e4ef5afda2bf", "size": 15629, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "author/system_model.tex", "max_stars_repo_name": "jupiterbak/Artificial-Intelligence-in-Industry-4.0", "max_stars_repo_head_hexsha": "7ddeb55de44c4e50b195edf7a75aa4afb99fcd9e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-09T11:05:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-09T11:05:49.000Z", "max_issues_repo_path": "author/system_model.tex", "max_issues_repo_name": "jupiterbak/Artificial-Intelligence-in-Industry-4.0", "max_issues_repo_head_hexsha": "7ddeb55de44c4e50b195edf7a75aa4afb99fcd9e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "author/system_model.tex", "max_forks_repo_name": "jupiterbak/Artificial-Intelligence-in-Industry-4.0", "max_forks_repo_head_hexsha": "7ddeb55de44c4e50b195edf7a75aa4afb99fcd9e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 104.1933333333, "max_line_length": 1567, "alphanum_fraction": 0.7621728837, "num_tokens": 3938}
const LOCATIONS = Dict( k => i-1 for (i, k) in enumerate((
    "none",
    "upper right",
    "upper left",
    "lower left",
    "lower right",
    "right",
    "center left",
    "center right",
    "lower center",
    "upper center",
    "center",
    "outer upper right",
    "outer center right",
    "outer lower right"
)))

# Legend
function legend!(p::PlotObject, args...; location=1, kinds=Tuple{}(), kwargs...)
    location = lookup(location, LOCATIONS)
    # Reset main viewport if there was a legend
    if haskey(p.attributes, :location) && p.attributes[:location] ∈ LEGEND_LOCATIONS[:right_out]
        p.viewport.inner[2] += p.legend.size[1]
    end
    chosen = choosegeoms(p, kinds)
    for i = 1:min(length(args), length(chosen))
        j = chosen[i]
        p.geoms[j] = Geometry(p.geoms[j], label=args[i])
    end
    maxrows = Int(get(kwargs, :maxrows, length(p.geoms)))
    p.legend = Legend(p.geoms, p.viewport.inner, maxrows)
    # Redefine viewport if legend is set outside
    if p.legend.size ≠ NULLPAIR && location ∈ LEGEND_LOCATIONS[:right_out]
        p.viewport.inner[2] -= p.legend.size[1]
    end
    p.attributes[:location] = location
end

legend!(f::Figure, args...; kwargs...) = legend!(currentplot(f), args...; kwargs...)

"""
    legend(labels...; kwargs...)

Set the legend of the plot, using a series of `labels` (strings).

In addition to the legend strings, the keyword argument `location` can be used
to define the location of the legend with respect to the plot axes and the
keyword argument `maxrows` to distribute the legend labels in a grid with a
maximum number of rows.

Locations are defined as a number or a string, as indicated in the following
table --- based on the convention of
[Matplotlib legends](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.legend.html):

|⁣# | String                 |
|--:|:-----------------------|
|  0| `"none"`               |
|  1| `"upper right"`        |
|  2| `"upper left"`         |
|  3| `"lower left"`         |
|  4| `"lower right"`        |
|  5| `"right"`              |
|  6| `"center left"`        |
|  7| `"center right"`       |
|  8| `"lower center"`       |
|  9| `"upper center"`       |
| 10| `"center"`             |
| 11| `"outer upper right"`  |
| 12| `"outer center right"` |
| 13| `"outer lower right"`  |

The labels are assigned to the geometries contained in the plot, in the same
order as they were created. The assignment can be restricted to specific kinds
of geometries through the keyword argument `kinds`, which can take a `Symbol`
or a collection of `Symbol`s that identify the kinds. Use the helper function
[`geometrykinds`](@ref) to see the list of kinds available in the current plot.

Only geometries with non-empty labels and an available guide for legends will
be presented in the legend.

# Examples

```julia
# Set the legends to "a" and "b"
legend("a", "b")
```
"""
function legend(args::AbstractString...; kwargs...)
    f = gcf()
    legend!(currentplot(f), args...; kwargs...)
    return f
end

"""
    geometrykinds([p])

Return a list with symbols that represent the kind of the geometries
included in the given plot or figure `p`.

If no argument is given, it takes the current plot of the current figure.
# Examples

```julia
julia> # Plot a set of points at values `(x, y)`

julia> # and a regression line passing through `(x, ŷ)`

julia> scatter(x, y)

julia> plot(x, ŷ)

julia> geometrykinds()
2-element Array{Symbol,1}:
 :scatter
 :line
```
"""
geometrykinds(p::PlotObject) = [g.kind for g in p.geoms]
geometrykinds(f::Figure=gcf()) = geometrykinds(currentplot(f))

function choosegeoms(p::PlotObject, kinds=Tuple{}())
    isempty(kinds) && return collect(1:length(p.geoms))
    gk = geometrykinds(p)
    findall(k -> k ∈ kinds, gk)
end

choosegeoms(p::PlotObject, kinds::Symbol) = choosegeoms(p, (kinds,))

# Hold
hold!(p::PlotObject, state::Bool) = (p.attributes[:hold] = state)
hold!(f::Figure, state) = hold!(currentplot(f), state)

"""
    hold(flag::Bool)

Set the hold flag for combining multiple plots.

`hold(true)` prevents clearing previous plots, so that next plots will be
drawn on top of the previous one until `hold(false)` is called.

Use the keyword argument `hold=<true/false>` in plotting functions, to set
the hold flag during the creation of plots.
"""
hold(state) = hold!(currentplot(gcf()), state)

# Title
function title!(p::PlotObject, s)
    if isempty(s)
        delete!(p.attributes, :title)
    else
        p.attributes[:title] = s
    end
    return nothing
end

title!(f::Figure, s) = title!(currentplot(f), s)

"""
    title(s)

Set the plot title as the string `s`.

Use the keyword argument `title=s` in plotting functions, to set the title
during the creation of plots.

# Examples

```julia
# Set the plot title to "Example Plot"
title("Example Plot")
# Clear the plot title
title("")
```
"""
function title(s::AbstractString)
    f = gcf()
    title!(currentplot(f), s)
    return f
end

const AXISLABEL_DOC = """
    xlabel(s)
    ylabel(s)
    zlabel(s)

Set the X, Y or Z axis labels as the string `s`.

Use the keyword argument `xlab=s`, etc. in plotting functions, to set the
axis labels during the creation of plots.

# Examples

```julia
# Set the x-axis label to "x"
xlabel("x")
# Clear the y-axis label
ylabel("")
```
"""

const TICKS_DOC = """
    xticks(minor[, major = 1])
    yticks(minor[, major = 1])
    zticks(minor[, major = 1])

Set the `minor` intervals of the ticks for the X, Y or Z axis, and
(optionally) the number of minor ticks between `major` ticks.

Use the keyword argument `xticks=(minor, major)`, etc. in plotting functions,
to set the tick intervals during the creation of plots (both the minor and
major values are required in this case).

# Examples

```julia
# Minor ticks every 0.2 units in the X axis
xticks(0.2)
# Major ticks every 1 unit (5 minor ticks) in the Y axis
yticks(0.2, 5)
```
"""

# Attributes for axes

const AXISLIM_DOC = """
    xlim(inf, sup [, adjust::Bool = false])
    xlim((inf, sup), ...)
    ylim(inf, sup, ...)
    ylim((inf, sup), ...)
    zlim(inf, sup, ...)
    zlim((inf, sup), ...)

Set the limits for the plot axes.

The axis limits can either be passed as individual arguments or as a tuple of
`(inf, sup)` values. Setting either limit to `nothing` will cause it to be
automatically determined based on the data, which is the default behavior.

Additionally to the limits, the flag `adjust` can be used to tell whether or
not the limits have to be adjusted.

Use the keyword argument `xlim=(inf, sup)`, etc. in plotting functions, to
set the axis limits during the creation of plots.
# Examples

```julia
# Set the x-axis limits to -1 and 1
xlim((-1, 1))
# Reset the x-axis limits to be determined automatically
xlim()
# Set the y-axis upper limit and set the lower limit to 0
ylim((0, nothing))
# Reset the y-axis lower limit and set the upper limit to 1
ylim((nothing, 1))
```
"""

const AXISLOG_DOC = """
    xlog(flag)
    ylog(flag)
    zlog(flag)

Set the X-, Y- or Z-axis to be drawn in logarithmic scale (`flag == true`),
or in linear scale (`flag == false`).

Use the keyword argument `xlog=<true/false>`, etc. in plotting functions, to
set the logarithmic axes during the creation of plots.

!!! note

    When the axis is set to logarithmic scale, its lower limit is adjusted to
    represent only positive values, even if the data of the plot contain zero
    or negative values. The aspect of logarithmic axes with limits explicitly
    set to contain negative values (with [`xlim`](@ref), etc.) is undefined.

# Examples

```julia
# Set the x-axis limits to log scale
xlog(true)
# Ensure that the y-axis is in linear scale
ylog(false)
```
"""

const AXISFLIP_DOC = """
    xflip(flag)
    yflip(flag)
    zflip(flag)

Reverse the direction of the X-, Y- or Z-axis (`flag == true`), or set them
back to their normal direction (`flag == false`).

Use the keyword argument `xflip=<true/false>`, etc. in plotting functions,
to set reversed axes during the creation of plots.

# Examples

```julia
# Reverse the x-axis
xflip(true)
# Ensure that the y-axis is not reversed
yflip(false)
```
"""

@eval function _config_axislimits!(ax, p, (minval, maxval), adjust)
    data_limits = minmax(p.geoms, p.axes.options[:scale])[ax]
    limits = set_limits((minval, maxval), data_limits)
    adjust && (limits = GR.adjustlimits(limits...))
    p.axes.ranges[ax] = limits
    tickdata = p.axes.tickdata
    if haskey(tickdata, ax)
        axisticks = tickdata[ax]
        if get(p.attributes, Symbol(ax, :flip), false)
            limits = reverse(limits)
        end
        tickdata[ax] = (axisticks[1], limits, axisticks[3])
    end
    return nothing
end

for ax = ("x", "y", "z")
    # xlabel, etc.
    fname! = Symbol(ax, :label!)
    fname = Symbol(ax, :label)
    @eval function $fname!(p::PlotObject, s)
        if isempty(s)
            delete!(p.attributes, Symbol($ax, :label))
        else
            p.attributes[Symbol($ax, :label)] = s
        end
        return nothing
    end
    @eval $fname!(f::Figure, s) = $fname!(currentplot(f), s)
    @eval @doc AXISLABEL_DOC function $fname(s::AbstractString)
        f = gcf()
        $fname!(currentplot(f), s)
        return f
    end

    # xticks, etc.
    fname! = Symbol(ax, :ticks!)
    fname = Symbol(ax, :ticks)
    @eval function $fname!(p::PlotObject, minor, major=1)
        tickdata = p.axes.tickdata
        if haskey(tickdata, Symbol($ax))
            tickdata[Symbol($ax)] = (float(minor), tickdata[Symbol($ax)][2], Int(major))
        end
        p.attributes[Symbol($ax, :ticks)] = (minor, major)
        return nothing
    end
    @eval $fname!(f::Figure, args...) = $fname!(currentplot(f), args...)
    @eval @doc TICKS_DOC function $fname(args...)
        f = gcf()
        $fname!(currentplot(f), args...)
        return f
    end

    # xlim, etc.
    fname! = Symbol(ax, :lim!)
    fname = Symbol(ax, :lim)
    @eval function $fname!(p::PlotObject, limits, adjust=false)
        _config_axislimits!(Symbol($ax), p, limits, adjust)
        p.attributes[Symbol($ax, :lim)] = limits
    end
    @eval function $fname!(p::PlotObject, minval::Union{Nothing, Number}, maxval::Union{Nothing, Number}, adjust=false)
        $fname!(p, (minval, maxval), adjust)
    end
    @eval $fname!(p::PlotObject) = $fname!(p, (nothing, nothing))
    @eval $fname!(f::Figure, args...) = $fname!(currentplot(f), args...)
    @eval @doc AXISLIM_DOC function $fname(args...)
        f = gcf()
        $fname!(currentplot(f), args...)
        return f
    end

    # xlog, xflip, etc.
    for (attr, docstr) ∈ (("log", :AXISLOG_DOC), ("flip", :AXISFLIP_DOC))
        fname! = Symbol(ax, attr, :!)
        fname = Symbol(ax, attr)
        @eval function $fname!(p::PlotObject, flag)
            if p.axes.kind ∈ (:axes2d, :axes3d)
                p.attributes[Symbol($ax, $attr)] = flag
                newscale = set_scale(; p.attributes...)
                if p.axes.options[:scale] != newscale
                    p.axes.options[:scale] = newscale
                    axlimits = get(p.attributes, Symbol($ax, :lim), (nothing, nothing))
                    adjust = !get(p.attributes, Symbol($ax, :log), false)
                    _config_axislimits!(Symbol($ax), p, axlimits, adjust)
                end
            end
            return nothing
        end
        @eval $fname!(f::Figure, args...) = $fname!(currentplot(f), args...)
        @eval @doc $docstr function $fname(args...)
            f = gcf()
            $fname!(currentplot(f), args...)
            return f
        end
    end
end

const TICKLABELS_DOC = """
    xticklabels(f)
    yticklabels(f)

Customize the string of the X and Y axes tick labels.

The labels of the tick axis can be defined by a function with one argument
(the numeric value of the tick position) that returns a string, or by an
array of strings that are located sequentially at X = 1, 2, etc.

Use the keyword argument `xticklabels=s`, etc. in plotting functions, to set
the axis tick labels during the creation of plots.

# Examples

```julia
# Label the range (0-1) of the Y-axis as percent values
yticklabels(p -> Base.Printf.@sprintf("%0.0f%%", 100p))
# Label the X-axis with a sequence of strings
xticklabels(["first", "second", "third"])
```
"""

for ax = ("x", "y")
    fname! = Symbol(ax, :ticklabels!)
    fname = Symbol(ax, :ticklabels)
    @eval function $fname!(p::PlotObject, s)
        set_ticklabels!(p.axes.ticklabels; $fname = s)
        p.attributes[Symbol($ax, :ticklabels)] = s
    end
    @eval $fname!(f::Figure, s) = $fname!(currentplot(f), s)
    @eval @doc TICKLABELS_DOC function $fname(s)
        f = gcf()
        $fname!(currentplot(f), s)
        return f
    end
end

# Grid
function grid!(p::PlotObject, flag)
    p.axes.options[:grid] = Int(flag)
    p.attributes[:grid] = flag
end

grid!(f::Figure, flag) = grid!(currentplot(f), flag)

"""
    grid(flag::Bool)

Draw or disable the grid of the current plot axes.

Use the keyword argument `grid=<true/false>`, etc. in plotting functions, to
set the grid during the creation of plots.
"""
function grid(flag)
    f = gcf()
    grid!(currentplot(f), flag)
    return f
end

# Colorbar
colorbar!(p::PlotObject, flag::Bool) = (p.attributes[:colorbar] = flag)

function colorbar!(p::PlotObject, levels::Integer)
    p.colorbar = Colorbar(p.axes, levels)
    colorbar!(p, true)
end

colorbar!(f::Figure, flag) = colorbar!(currentplot(f), flag)

"""
    colorbar(flag::Bool)
    colorbar(levels::Integer)

Set the color bar of the current plot.

The input argument can be a `Bool` (`true` or `false`) to show or hide the
colorbar -- if it is available, or an `Integer` to set the number of levels
shown in the color bar (256 levels by default).

Color bars are only presented when there is actual color data in the plot,
regardless of the usage of this function.

Use the keyword argument `colorbar=<true/false>`, etc. in plotting functions,
to enable or disable the color bar during the creation of plots.
"""
function colorbar(flag)
    f = gcf()
    colorbar!(currentplot(f), flag)
    return f
end

# Aspect ratio
function aspectratio!(p::PlotObject, r)
    margins = plotmargins(p.legend, p.colorbar; p.attributes...)
    set_ratio!(p.viewport.inner, r, margins)
    p.attributes[:ratio] = r
end

aspectratio!(f::Figure, r) = aspectratio!(currentplot(f), r)

"""
    aspectratio(r)

Set the aspect of the current plot to a given width : height ratio.

Use the keyword argument `aspectratio=r`, etc.
in plotting functions, to set the aspect ratio during the creation of plots.

# Examples

```julia
$(_example("aspectratio"))
```
"""
function aspectratio(r)
    f = gcf()
    aspectratio!(currentplot(f), r)
    return f
end

# Radians in polar axes
function radians!(p::PlotObject, flag)
    if p.axes.kind ≠ :polar
        return nothing
    end
    p.axes.options[:radians] = Int(flag)
    p.attributes[:radians] = flag
end

radians!(f::Figure, flag) = radians!(currentplot(f), flag)

"""
    radians(flag::Bool)

Set the scale of angles in polar plots.

Use `radians(true)` to represent angles in radians (default setting), and
`radians(false)` to represent them in degrees.

This operation only modifies the guides of the polar plot grid lines. The
existing geometries are left without changes.

Use the keyword argument `radians=<true/false>`, etc. in plotting functions,
to set the scale of angles during the creation of polar plots.

# Example

```julia
# Example data
θ = LinRange(0, 2π, 40)
r = sin.(θ)
# Draw the polar plot (by default in radians)
polar(θ, r)
# Change the angular scale
radians(false)
```
"""
function radians(flag)
    f = gcf()
    radians!(currentplot(f), flag)
    return f
end

# Pan and zoom
function panzoom!(p::PlotObject, x, y, r = 0.0)
    GR.savestate()
    GR.setviewport(p.viewport.inner...)
    GR.setwindow(p.axes.ranges[:x]..., p.axes.ranges[:y]...)
    xmin, xmax, ymin, ymax = GR.panzoom(x, y, r)
    GR.restorestate()
    xlim!(p, (xmin, xmax))
    ylim!(p, (ymin, ymax))
    return nothing
end

panzoom!(f::Figure, args...) = panzoom!(currentplot(f), args...)

"""
    panzoom(x, y[, s = 0])

Pan/zoom the axes of the current plot.

The focus of the zoom is set at a point with an offset of `(x, y)` units in
normalized device coordinates (NDC) from the center of the current axes. The
corners of the axes are linearly displaced towards that point, such that the
size of the new axes is `s` times their original size. If `s` is set to 0
(the default value), the center of the axes is displaced at the focus,
without resizing.

# Example

```julia
# Move the center 1 unit right and 0.2 up (NDC)
panzoom(1, 0.2)
# Reduce the focus of the axes to half their size
# focusing on the previous point
panzoom(1, 0.2, 0.5)
```
"""
function panzoom(args...)
    f = gcf()
    panzoom!(currentplot(f), args...)
    return f
end

# Zoom for axes2d and axes3d with gr3
zoom2d!(p, r) = panzoom!(p, 0.0, 0.0, r)

function zoomgr3!(p::PlotObject, r)
    p.axes.camera[1:3] ./= r
    return nothing
end

function zoom!(p::PlotObject, r)
    if p.axes.kind == :axes2d
        zoom2d!(p, r)
    elseif get(p.axes.options, :render3d, 0) == 2
        zoomgr3!(p, r)
    end
end

zoom!(f::Figure, r) = zoom!(currentplot(f), r)

"""
    zoom(r)

Zoom the plot by the ratio indicated by `r`.

In two-dimensional plots, the "zoomed" axes are centered around the same
point, but proportionally resized to `r` times the original size.

In three-dimensional scenes defined with "camera" settings (e.g. in
[`isosurface`](@ref) plots), the camera distance is divided by `r`.

# Examples

```julia
# Reduce the axes to half their size
zoom(0.5)
```
"""
function zoom(r)
    f = gcf()
    zoom!(currentplot(f), r)
    return f
end

# 3-D perspectives
function viewpoint!(p::PlotObject, rotation, tilt)
    p.axes.perspective .= [rotation, tilt]
    if get(p.axes.options, :render3d, 0) == 2
        distance = norm(view(p.axes.camera, 1:3))
        p.axes.camera .= set_camera(distance, rotation, tilt)
    end
    return nothing
end

viewpoint!(f::Figure, rotation, tilt) = viewpoint!(currentplot(f), rotation, tilt)

"""
    viewpoint(rotation, tilt)

Set the viewpoint of three-dimensional plots.
`rotation` and `tilt` must be integer values that indicate the "azimuth" and
"elevation" angles of the line of sight (in degrees).

If both angles are zero, the plot is viewed in the direction of the Y axis
(i.e. the X-Z plane is seen). Positive `rotation` values mean a
counterclockwise rotation of the line of sight (or a clockwise rotation of
the scene) around the vertical (Z) axis. Positive `tilt` values mean an
ascension of the view point.

# Examples

```julia
# Reset the view to the X-Y plane
# (rotation=0, tilt=90)
viewpoint(0, 90)
```
"""
function viewpoint(rotation, tilt)
    f = gcf()
    viewpoint!(currentplot(f), rotation, tilt)
    return f
end

function rotate!(p::PlotObject, angle)
    p.axes.perspective[1] += angle
    if get(p.axes.options, :render3d, 0) == 2
        _rotate!(view(p.axes.camera, 1:3), angle)
        _rotate!(view(p.axes.camera, 7:9), angle)
    end
    return nothing
end

rotate!(f::Figure, angle) = rotate!(currentplot(f), angle)  # fixed: was `currentfigure(f)`

function tilt!(p::PlotObject, angle)
    p.axes.perspective[2] += angle
    if get(p.axes.options, :render3d, 0) == 2
        rotation = p.axes.perspective[1]
        camera_position = view(p.axes.camera, 1:3)
        _rotate!(camera_position, -rotation)
        _tilt!(camera_position, angle)
        _rotate!(camera_position, rotation)
        up_vector = view(p.axes.camera, 7:9)
        _rotate!(up_vector, -rotation)
        _tilt!(up_vector, angle)
        _rotate!(up_vector, rotation)
    end
    return nothing
end

tilt!(f::Figure, angle) = tilt!(currentplot(f), angle)  # fixed: was `currentfigure(f)`

"""
    rotate(angle::Int)

Rotate the viewpoint of the current plot by `angle` degrees around the
vertical axis of the scene, with respect to its current position.

# Examples

```julia
# Rotate 10 degrees to the right
rotate(10)
```
"""
function rotate(angle)
    f = gcf()
    rotate!(currentplot(f), angle)
    return f
end

"""
    tilt(angle::Int)

Tilt (elevate) the viewpoint of the current plot by `angle` degrees over the
horizontal plane, with respect to its current position.

# Examples

```julia
# Tilt 10 degrees up
tilt(10)
```
"""
function tilt(angle)
    f = gcf()
    tilt!(currentplot(f), angle)
    return f
end

# Only for 3-D scenes with gr3
movefocus!(p::PlotObject, target) = _focus!(p.axes.camera, target)
movefocus!(f::Figure, target) = movefocus!(currentplot(f), target)

"""
    movefocus(target)

Rotate the camera view axis, moving the focus to the `target` point.

This only affects 3-D scenes created with camera settings, e.g.
[`isosurface`](@ref) plots. Moving the focus point rotates the camera without
changing its position; in order to rotate the camera around the center of the
scene, use the functions [`rotate`](@ref), [`tilt`](@ref) or
[`viewpoint`](@ref).

# Examples

```julia
# Move the focus to the point (1.0, 0.5, 0.0)
movefocus([1.0, 0.5, 0.0])
```
"""
function movefocus(target)
    f = gcf()
    movefocus!(currentplot(f), target)
    return f
end

function turncamera!(p::PlotObject, angle)
    params = p.axes.camera
    # Rotate up vector towards right vector around axis
    axis = normalize([params[4]-params[1], params[5]-params[2], params[6]-params[3]])
    up_vector = view(params, 7:9)  # fixed: a view, so the camera parameters are actually updated
    right_vector = axis × up_vector
    up_vector .= cosd(angle).*up_vector .+ sind(angle).*right_vector
    return nothing
end

turncamera!(f::Figure, angle) = turncamera!(currentplot(f), angle)

"""
    turncamera(angle)

Turn the orientation of the camera by `angle` degrees around its view axis
(only for 3-D scenes created with camera settings).
# Examples

```julia
# Turn the perspective 10 degrees
turncamera(10)
```
"""
function turncamera(angle)
    f = gcf()
    turncamera!(currentplot(f), angle)
    return f
end

"""
    colormap!(p, cmap)

Apply a colormap `cmap` to the given plot `p`, which can be a `PlotObject`,
or a `Figure` (in such case the colormap is applied to all the plots
contained in it).

The value of `cmap` can be the number or the name of any of the
[GR built-in colormaps](https://gr-framework.org/colormaps.html)
(see [`colormap`](@ref) for more details).

Use the keyword argument `colormap` in plotting functions, to set a
particular colormap during the creation of plots (in this case it can only
be identified by its number).

# Examples

```julia
# Create a surface plot with the "grayscale" colormap (2)
surface(x, y, z, colormap=2)
# Change it to the "viridis" colormap
colormap!(gcf(), "viridis")
```
"""
function colormap!(p::PlotObject, cmap)
    p.attributes[:colormap] = Int(cmap)
    return nothing
end

function colormap!(p::PlotObject, cmap::AbstractString)
    cmap = lowercase(replace(cmap, (' ', '_') => ""))
    colormap!(p, COLORMAPS[cmap])
end

function colormap!(f::Figure, cmap)
    for p in f.plots
        colormap!(p, cmap)
    end
    return f
end

"""
    colorscheme!(p, scheme)

Apply a color `scheme` to the given plot `p`, which can be a `PlotObject`,
or a `Figure` (in such case the scheme is applied to all the plots contained
in it).

The value of `scheme` can be the number or the name of any available color
scheme (see [`colorscheme`](@ref) for more details).

Use the keyword argument `scheme` in plotting functions, to set a particular
color scheme during the creation of plots (in this case only the number of
an already existing scheme is allowed).

# Examples

```julia
# Create a plot with a dark scheme (2)
plot(x, y, scheme=2)
# Change it to the standard light scheme
colorscheme!(currentplot(), "light")
```
"""
function colorscheme!(p::PlotObject, scheme)
    p.attributes[:scheme] = Int(scheme)
    return nothing
end

function colorscheme!(p::PlotObject, scheme::AbstractString)
    scheme = replace(scheme, " " => "")
    scheme = lowercase(scheme)
    scheme_dict = Dict("none" => 0, "light" => 1, "dark" => 2,
                       "solarizedlight" => 3, "solarizeddark" => 4)
    colorscheme!(p, scheme_dict[scheme])
end

function colorscheme!(f::Figure, scheme)
    for p in f.plots
        colorscheme!(p, scheme)
    end
    return f
end

# Custom background
"""
    background!(p, bgcolor[, alpha])

Add a custom background color to the given plot object or to all the plots
inside the given figure. See [`background`](@ref) for more details.
"""
function background!(p::PlotObject, bgcolor)
    p.attributes[:backgroundcolor] = Int(bgcolor)
    return nothing
end

function background!(p::PlotObject, bgcolor, alpha)
    p.attributes[:backgroundcolor] = Int(bgcolor)
    p.attributes[:backgroundalpha] = alpha
    return nothing
end

background!(p::PlotObject, ::Nothing) = background!(p, -1)

function background!(f::Figure, args...)
    for p in f.plots
        background!(p, args...)
    end
    return f
end

"""
    background(color[, alpha])

Add a custom background color to the current figure.

The argument can be an hexadecimal color code or `nothing` for a transparent
background. A partially transparent color can be defined adding the alpha
value between 0 and 1 as second argument.

Use the keyword arguments `backgroundcolor` and `backgroundalpha` in plotting
functions, to set a particular background color configuration during the
creation of plots.
This overrides the default background defined by the [`colorscheme`](@ref)
for the area outside the axes and legends of all the plots contained in the
figure. Use [`background!`](@ref) to modify the background of individual
subplots.

# Examples

```julia
# Create a plot with light blue background
plot(x, y, backgroundcolor=0x88ccff)
# Remove the background
background(nothing)
```
"""
background(args...) = background!(gcf(), args...)
{"hexsha": "aabbf2e8fabf028bb624d90761462b10a97eefb4", "size": 25801, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/attributes.jl", "max_stars_repo_name": "jheinen/GRUtils.jl", "max_stars_repo_head_hexsha": "e5437225b8847bf6c29c8db41987285939aeee2c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/attributes.jl", "max_issues_repo_name": "jheinen/GRUtils.jl", "max_issues_repo_head_hexsha": "e5437225b8847bf6c29c8db41987285939aeee2c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/attributes.jl", "max_forks_repo_name": "jheinen/GRUtils.jl", "max_forks_repo_head_hexsha": "e5437225b8847bf6c29c8db41987285939aeee2c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.9884937238, "max_line_length": 119, "alphanum_fraction": 0.6626099764, "num_tokens": 7025}
include("Misfits.jl")
{"hexsha": "bab9a301e2679f7e7ea8ce6f9129d6b9070960e8", "size": 24, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "test/runtests.jl", "max_stars_repo_name": "pawbz/Misfits.jl", "max_stars_repo_head_hexsha": "bee8937544d19ffc6b47213f10e3312fcb92f36f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "test/runtests.jl", "max_issues_repo_name": "pawbz/Misfits.jl", "max_issues_repo_head_hexsha": "bee8937544d19ffc6b47213f10e3312fcb92f36f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "test/runtests.jl", "max_forks_repo_name": "pawbz/Misfits.jl", "max_forks_repo_head_hexsha": "bee8937544d19ffc6b47213f10e3312fcb92f36f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-07T10:15:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-07T10:15:40.000Z", "avg_line_length": 6.0, "max_line_length": 21, "alphanum_fraction": 0.6666666667, "num_tokens": 8}
#!/usr/bin/env python

from __future__ import absolute_import, division, print_function, unicode_literals, with_statement

from mpi4py import MPI

import sys
import os

import numpy as np

import pympit as pt

world = MPI.COMM_WORLD
rank = world.rank
procs = world.size

startup = pt.work.since_start(MPI.COMM_WORLD)
if world.rank == 0:
    print("Startup time = {} seconds".format(startup))

# Split the communicator into 2 groups
ngroups = 2
groupsize = int(procs / ngroups)
if groupsize == 0:
    groupsize = 1

group = int(rank / groupsize)
grank = rank % groupsize
if group >= ngroups:
    group = MPI.UNDEFINED
    grank = MPI.UNDEFINED

gcomm = world.Split(group, grank)
rcomm = world.Split(grank, group)

# make a fake message
nmsg = 100000
local_data = np.ones(nmsg, dtype=np.int64)

# do some operations.  use the lower-case functions as a worst case.
start = MPI.Wtime()

world_reduce = world.allreduce(local_data, op=MPI.SUM)
chksum = np.sum(world_reduce)
if chksum != (nmsg * procs):
    print("process {}: world comm allreduce = {} instead of {}".format(rank, chksum, (nmsg*procs)))

group_reduce = gcomm.allreduce(local_data, op=MPI.SUM)
chksum = np.sum(group_reduce)
if chksum != (nmsg * groupsize):
    print("process {} of group {}: group comm allreduce = {} instead of {}".format(grank, group, chksum, (nmsg*groupsize)))

rank_reduce = rcomm.allreduce(group_reduce, op=MPI.SUM)
chksum = np.sum(rank_reduce)
if chksum != (nmsg * procs):
    print("process {} of group {}: rank comm allreduce = {} instead of {}".format(grank, group, chksum, (nmsg*procs)))

stop = MPI.Wtime()
world.Barrier()

elapsed = stop - start
if rank == 0:
    print("Communication time = {:.4f}s".format(elapsed))
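# Usage sketch (assumed invocation, not part of the original script): run the
# script under an MPI launcher with enough processes so that both groups are
# populated, e.g.
#
#     mpiexec -n 8 python pympit_collective.py
#
# Each of the 8 processes then lands in one of the 2 groups of 4, and any
# mismatch between the expected and observed allreduce checksums is printed.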
{"hexsha": "c3248fba47ddc82f806025c13aa5b0d98f8efa32", "size": 1717, "ext": "py", "lang": "Python", "max_stars_repo_path": "bin/pympit_collective.py", "max_stars_repo_name": "tskisner/pympit", "max_stars_repo_head_hexsha": "b522d0db0747c958186ee8a094a0f50d68a9a0cb", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bin/pympit_collective.py", "max_issues_repo_name": "tskisner/pympit", "max_issues_repo_head_hexsha": "b522d0db0747c958186ee8a094a0f50d68a9a0cb", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bin/pympit_collective.py", "max_forks_repo_name": "tskisner/pympit", "max_forks_repo_head_hexsha": "b522d0db0747c958186ee8a094a0f50d68a9a0cb", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.5205479452, "max_line_length": 124, "alphanum_fraction": 0.7023878858, "include": true, "reason": "import numpy", "num_tokens": 478}
import numpy as np

from keras.models import Sequential, model_from_json
from keras.layers import Dense
from keras.layers import LSTM, Convolution1D, Flatten, Dropout, Activation, Input, Bidirectional
from keras.layers.embeddings import Embedding
from keras.layers.pooling import MaxPooling1D
from keras.preprocessing import sequence
from keras.layers.merge import Concatenate
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import os
from keras.callbacks import Callback
from keras.callbacks import TensorBoard

import Models
from Emailer import Emailer
import config

# import our own modules
from WordEmbeddings import Word2VecEmbeddings, GloVeEmbeddings, CharacterEmbeddings
from KerasUtils import save_model
from TwitterDataset import PreprocessedDataset


############ Single neural network models ############

# Callbacks for logging during training
class ModelEvaluater(Callback):
    def __init__(self, model, x_val, y_val, verbosity=1, sample_weight=None):
        super(Callback, self).__init__()
        self.model = model
        self.x_val = x_val
        self.y_val = y_val
        self.verbosity = verbosity
        self.sample_weight = sample_weight
        if hasattr(config, 'email'):
            self.emailer = Emailer('Training network update', config.email)
        else:
            self.emailer = None

    def on_epoch_end(self, epoch, logs=None):
        print("\nEvaluating epoch...")
        scores = self.model.evaluate(self.x_val, self.y_val, verbose=self.verbosity,
                                     sample_weight=self.sample_weight)
        print("\n\tValidation accuracy: %.2f%%" % (scores[1] * 100))
        if self.emailer is not None:
            try:
                self.emailer.send_report("Epoch {} has score {}% on validation".format(epoch, scores[1]*100))
            except Exception:
                print("Error sending email!")


class ModelPredicter(Callback):
    def __init__(self, model, preprocessed_dataset, model_save_path, result_epoch_file):
        super(Callback, self).__init__()
        self.model = model
        self.preprocessed_dataset = preprocessed_dataset
        self.model_save_path = model_save_path
        self.result_epoch_file = result_epoch_file

    def on_epoch_end(self, epoch, logs=None):
        print("Generating prediction file for epoch %d at %s..."
              % (epoch, self.result_epoch_file.format(epoch)))
        save_model(self.model, self.model_save_path + "-e{}".format(epoch))
        if self.result_epoch_file is not None:
            Network.predict(self.model, self.preprocessed_dataset,
                            self.result_epoch_file.format(epoch))


class Network:
    word_embedding_models = {
        'word2vec': Word2VecEmbeddings,
        'glove': GloVeEmbeddings,
        'characterEmbeddings': CharacterEmbeddings}

    @classmethod
    def create_embedding_layer(cls, preprocessed_dataset, **word_embeddings_opt):
        preprocessor = preprocessed_dataset.preprocessor

        # Create embedding layer
        word_embeddings_opt_param = {"initializer": "word2vec",
                                     "dim": 400,
                                     "trainable": False,
                                     "corpus_name": None}
        word_embeddings_opt_param.update(word_embeddings_opt)

        if word_embeddings_opt_param["initializer"] in Network.word_embedding_models:
            word_embeddings = Network.word_embedding_models[word_embeddings_opt_param["initializer"]](
                preprocessor=preprocessor,
                preprocessed_tweets=preprocessed_dataset.all_preprocessed_tweets_weighted(),
                word_embedding_dimensions=word_embeddings_opt_param["dim"],
                embedding_corpus_name=word_embeddings_opt_param["corpus_name"])
            print("Using predefined embedding layer!")
            embedding_layer = Embedding(input_dim=preprocessor.vocabulary.word_count,
                                        output_dim=word_embeddings.output_dimension,
                                        weights=[word_embeddings.embedding_matrix],
                                        input_length=preprocessed_dataset.max_tweet_length,
                                        trainable=word_embeddings_opt_param["trainable"])
        else:
            print("Using generic embedding layer!")
            embedding_layer = Embedding(preprocessor.vocabulary.word_count,
                                        word_embeddings_opt_param["dim"],
                                        input_length=preprocessed_dataset.max_tweet_length,
                                        trainable=word_embeddings_opt_param["trainable"])

        print("Created Embedding layer - Word count %d, dimensions %d, max tweet length %d"
              % (preprocessor.vocabulary.word_count,
                 word_embeddings_opt_param["dim"],
                 preprocessed_dataset.max_tweet_length))
        return embedding_layer

    @classmethod
    def create_model(cls, preprocessed_dataset, word_embeddings_opt={}, model_builder=None):
        assert model_builder is not None
        embedding_layer = Network.create_embedding_layer(preprocessed_dataset, **word_embeddings_opt)
        model = model_builder.get_model(embedding_layer)
        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
        print("Compiled model...")
        print(model.summary())
        return model

    @classmethod
    def train(cls, model, preprocessed_dataset, training_opt={}, model_save_path=None, result_epoch_file=None):
        training_opt_param = {"epochs": 4, "batch_size": 64}
        training_opt_param.update(training_opt)

        # Create training data
        (x_train, y_train, x_orig_train), (x_val, y_val, x_orig_val) = \
            preprocessed_dataset.shuffle_and_split_padded(model.input_names)

        evaluater = ModelEvaluater(model, x_val, y_val)
        callbacks = [evaluater]
        if not config.test_run:
            # TODO: make callbacks accessible from config
            predicter = ModelPredicter(model, preprocessed_dataset, model_save_path, result_epoch_file)
            callbacks.append(predicter)
            tensorBoard = TensorBoard(log_dir='./TensorBoard', histogram_freq=0,
                                      write_graph=True, write_images=True)
            callbacks.append(tensorBoard)

        model.fit(x_train, y_train, callbacks=callbacks, **training_opt_param)

        if model_save_path is not None:
            save_model(model, model_save_path)

    @classmethod
    def output_misclassified_samples(cls, model, preprocessed_dataset, preprocessor,
                                     misclassified_samples_file=None):
        def evaluate_misclassified_samples(x, y, x_orig, phase):
            misclassified_samples = []
            x_padded = preprocessed_dataset.pad_tweets(x)
            pred_y = model.predict(x_padded, batch_size=64).reshape([-1])
            for i in range(pred_y.shape[0]):
                if ((pred_y[i] > 0.5) and (y[i] == 0)) or \
                   ((pred_y[i] <= 0.5) and (y[i] == 1)):
                    misclassified_samples.append(
                        (2*(pred_y[i]-0.5)*2*(y[i]-0.5),
                         2*(y[i]-0.5),
                         x_orig[i],
                         ' '.join([preprocessor.vocabulary.id_to_word[id]
                                   for id in (x[i] if not isinstance(x, dict) else x['forward_input'][i])])))
            misclassified_samples.sort()
            with open(misclassified_samples_file.format(phase), 'a+') as mc_s_f:
                mc_s_f.write("\n***** Misclassified {} samples *****\n".format(phase))
                for sample in misclassified_samples:
                    mc_s_f.write("\t{} :\t({})\n\t\t\t{}\n\t\t\t{}\n".format(
                        sample[0], sample[1], sample[2], sample[3]))

        print("Outputting misclassified samples...")
        (x_train, y_train, x_orig_train), (x_val, y_val, x_orig_val) = \
            preprocessed_dataset.shuffle_and_split(model.input_names)
        evaluate_misclassified_samples(x_val, y_val, x_orig_val, "validation")
        evaluate_misclassified_samples(x_train, y_train, x_orig_train, "training")

    @classmethod
    def predict(cls, model, preprocessed_dataset, prediction_file):
        if not model:
            raise Exception("You need to train or load a pretrained model in order to predict")

        x_test = preprocessed_dataset.test_tweets_padded(model.input_names)
        predictions = model.predict(x_test, batch_size=64)
        print("Done with predictions, generating submission file...")
        with open(prediction_file, "w") as submission:
            submission.write("Id,Prediction\n")
            for i, prediction in enumerate(predictions):
                if prediction > 0.5:
                    prediction = 1
                else:
                    prediction = -1
                submission.write('%d,%d\n' % (i+1, prediction))
        print("Generated submission file (%s) with %d results" % (prediction_file, predictions.shape[0]))


############ Boosted models ############

from sklearn.ensemble import AdaBoostClassifier
from keras.wrappers.scikit_learn import KerasClassifier

# Making the sample_weight parameter explicit in the KerasClassifier.fit method,
# as expected by scikit-learn, using a decorator
def decorate_kerasClassifier_fit(fit):
    def decorated_fit(self, x, y, sample_weight=None, **kwargs):
        history = fit(self, x, y, sample_weight=sample_weight, **kwargs)
        y_predict = self.predict(x)
        estimator_error = np.mean(np.average(y_predict != y, weights=sample_weight, axis=0))
        print("\n[KerasClassifier] Weighted training error: %.2f%%" % (estimator_error*100))
        return history
    return decorated_fit

KerasClassifier.fit = decorate_kerasClassifier_fit(KerasClassifier.fit)

def decorate_kerasClassifier_predict(predict):
    def decorated_predict(self, x, **kwargs):
        return np.reshape(predict(self, x, **kwargs), (-1,))
    return decorated_predict

KerasClassifier.predict = decorate_kerasClassifier_predict(KerasClassifier.predict)

# Scikit-learn AdaBoost derived class that stores the current boosting iterate
class BoostingState:
    """Scikit learn AdaBoost iterate wrapper"""
    def __init__(self, iboost, X, sample_weight):
        self.iboost = iboost
        self.X = X
        self.sample_weight = sample_weight

# Log current state of boosting iteration
class AdaptiveAdaBoostClassifier(AdaBoostClassifier):
    """AdaBoost with exposed current boosting iterate"""

    def _boost(self, iboost, X, y, sample_weight, random_state):
        """Store current boosting iterate, then call base class implementation
        of this method."""
        print("\n[AdaptiveAdaBoostClassifier] Called _boost (%d-th iteration), logging boosting state..."
              % (iboost+1))
        self._boosting_state = BoostingState(iboost, X, sample_weight)
        return super(AdaptiveAdaBoostClassifier, self)._boost(iboost, X, y, sample_weight, random_state)


class AdaptiveKerasModelBuilder:
    """Keras model factory for vocabulary-adaptive (preprocessor-adaptive) AdaBoost
    ensembles with token-weighting for vocabulary generation and sample-weighting
    for word-embedding calculation.

    To be used with the KerasClassifier Scikit learn wrapper for keras Sequential
    models."""

    def __init__(self, twitter_dataset, trivially_preprocessed_dataset, preprocessor_factory, word_embeddings_opt):
        self.twitter_dataset = twitter_dataset
        self.trivially_preprocessed_dataset = trivially_preprocessed_dataset
        self.preprocessor_factory = preprocessor_factory
        self.word_embeddings_opt = word_embeddings_opt
        self._created_models = []

    @classmethod
    def register_adaboost(cls, adaboost):
        cls._adaboost = adaboost

    def __call__(self):
        boosting_state = self._adaboost._boosting_state
        trivial_preprocessor = self.trivially_preprocessed_dataset.preprocessor

        unrenormalized_sample_weight_sum = np.sum(boosting_state.sample_weight)
        # the sum of all tweets' weights should be constant: each training tweet
        # has a mean sample weight of 1
        renormalized_sample_weights = [w / unrenormalized_sample_weight_sum * boosting_state.sample_weight.shape[0]
                                       for w in boosting_state.sample_weight]

        # Create preprocessor
        word_to_occurrence_full = {}
        print("[AdaptiveKerasModelBuilder] Boosting iteration %d: Creating new Keras model..."
              % (boosting_state.iboost+1))
        trivially_preprocessed_tweets = [trivial_preprocessor.map_id_seq_to_tweet(list(id_seq))
                                         for id_seq in boosting_state.X]
        for tweet, weight in zip(trivially_preprocessed_tweets, renormalized_sample_weights):
            for word in tweet:
                if word in word_to_occurrence_full:
                    word_to_occurrence_full[word] += weight
                else:
                    word_to_occurrence_full[word] = weight
        #import pdb; pdb.set_trace()

        # introduce test data set to vocabulary
        # TODO: allow higher weighting!!
        for tweet in self.trivially_preprocessed_dataset.preprocessed_test_tweets:
            for word in tweet:
                if word in word_to_occurrence_full:
                    word_to_occurrence_full[word] += 1
                else:
                    word_to_occurrence_full[word] = 1

        preprocessor = self.preprocessor_factory(word_to_occurrence_full)
        preprocessed_dataset = PreprocessedDataset(self.twitter_dataset, preprocessor,
                                                   config.validation_split_ratio)

        # Create model
        model = AdaptiveSequential(translator=None)
        model.translator = Translator(output_preprocessor=preprocessor,
                                      input_preprocessor=trivial_preprocessor,
                                      output_preprocessed_dataset=preprocessed_dataset)

        emb_preprocessed_tweets = []
        emb_sample_weights = renormalized_sample_weights
        for id_seq in model.translator(boosting_state.X):
            emb_preprocessed_tweets.append(
                [w for w in filter(lambda word: word != '<pad>',
                                   preprocessor.map_id_seq_to_tweet(list(id_seq)))])

        # introduce test data set to word embeddings
        # TODO: allow higher weighting!!
        for tweet in preprocessed_dataset.preprocessed_test_tweets:
            emb_preprocessed_tweets.append(tweet)
            emb_sample_weights.append(1)

        assert ("corpus_name" not in self.word_embeddings_opt) or \
               (self.word_embeddings_opt["corpus_name"] is None)
        embedding_layer = AdaptiveAdaBoostModel.create_embedding_layer(
            preprocessed_train_tweets=emb_preprocessed_tweets,
            sample_weight=emb_sample_weights,
            preprocessed_dataset=preprocessed_dataset,
            **self.word_embeddings_opt)

        # TODO: use the Models module instead of hard-coding the architecture.
        model.add(embedding_layer)
        # Alternative recurrent architecture:
        # model.add(LSTM(200))
        # model.add(Dropout(0.5))
        # model.add(Dense(1, activation='sigmoid'))
        model.add(Convolution1D(200, 5, padding='valid', activation='relu'))
        model.add(MaxPooling1D())
        model.add(Convolution1D(100, 3, padding='valid', activation='relu'))
        model.add(Flatten())
        model.add(Dropout(0.5))
        model.add(Dense(1))
        model.add(Activation('sigmoid'))
        model.compile(loss='binary_crossentropy',
                      optimizer='adam',
                      metrics=['accuracy'])
        print("Compiled model...")
        print(model.summary())

        self._created_models.append((model, preprocessor))
        return model


# Preprocessing step prepended to Keras models in vocabulary-adaptive
# AdaBoost ensembles.
class Translator:
    """Translates scikit-learn input samples (padded tweets as int sequences)
    from a larger vocabulary obtained with input_preprocessor to a smaller
    vocabulary obtained with output_preprocessor (output_preprocessed_dataset
    is obtained by generating a preprocessed TwitterDataset with that
    preprocessor). Preprocessor functions map the int sequences to token
    sequences, preprocess them, and then map them back to int sequences."""
    def __init__(self, output_preprocessor, input_preprocessor,
                 output_preprocessed_dataset):
        self.output_preprocessor = output_preprocessor
        self.input_preprocessor = input_preprocessor
        self.output_preprocessed_dataset = output_preprocessed_dataset

    def __call__(self, x):
        x_translated = np.zeros(
            shape=(x.shape[0], self.output_preprocessed_dataset.max_tweet_length),
            dtype=int)
        for i, id_seq in enumerate(x):
            reconstructed_input_token_seq = \
                self.input_preprocessor.map_id_seq_to_tweet(list(id_seq))
            reconstructed_input_tweet = ' '.join(
                [word for word in filter(lambda w: w != '<pad>',
                                         reconstructed_input_token_seq)])
            preprocessed_output_token_seq = \
                self.output_preprocessor.preprocess_tweet(reconstructed_input_tweet)
            output_token_seq = self.output_preprocessor.map_tweet_to_id_seq(
                preprocessed_output_token_seq)
            x_translated[i, :] = np.array(
                self.output_preprocessed_dataset.pad_tweets([output_token_seq])[0])
        return x_translated
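# Usage sketch for Translator (the preprocessor and dataset objects are
# hypothetical stand-ins following the interfaces used above):
#
#   translator = Translator(output_preprocessor=adapted_pp,
#                           input_preprocessor=trivial_pp,
#                           output_preprocessed_dataset=adapted_ds)
#   x_small = translator(x_large)
#
# x_small is an (n_samples, adapted_ds.max_tweet_length) int matrix: each row
# is the same tweet re-tokenized in the adapted vocabulary and re-padded.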
class AdaptiveSequential(Sequential):
    """Sequential Keras model that applies preprocessing as a first step,
    using translation from the larger to the smaller vocabulary."""
    def __init__(self, translator=None):
        print("[AdaptiveSequential] Creating AdaptiveSequential Keras model...")
        super().__init__()
        self.translator = translator

    def fit(self, x, y, batch_size=32, epochs=10, verbose=1, callbacks=None,
            validation_split=0., validation_data=None, shuffle=True,
            class_weight=None, sample_weight=None, initial_epoch=0, **kwargs):
        # TODO: optionally log the training samples sorted by sample_weight;
        # this needs access to the original unpreprocessed tweets
        # (preprocessed_dataset.shuffled_original_training_tweets).
        evaluater = ModelEvaluater(self, x, y, verbosity=1,
                                   sample_weight=sample_weight)
        callbacks = [evaluater] if callbacks is None else (callbacks + [evaluater])
        # Note: the validation accuracy displayed here is actually the
        # weighted training accuracy.
        return super().fit(self.translator(x), y,
                           batch_size=batch_size, epochs=epochs, verbose=verbose,
                           callbacks=callbacks, validation_split=validation_split,
                           validation_data=validation_data, shuffle=shuffle,
                           class_weight=class_weight, sample_weight=sample_weight,
                           initial_epoch=initial_epoch)

    def evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None):
        return super().evaluate(self.translator(x), y, batch_size=batch_size,
                                verbose=verbose, sample_weight=sample_weight)

    def predict(self, x, batch_size=32, verbose=0):
        return super().predict(self.translator(x), batch_size=batch_size,
                               verbose=verbose)

    def predict_on_batch(self, x):
        return super().predict_on_batch(self.translator(x))

    def train_on_batch(self, x, y, class_weight=None, sample_weight=None):
        return super().train_on_batch(self.translator(x), y,
                                      class_weight=class_weight,
                                      sample_weight=sample_weight)

    def test_on_batch(self, x, y, sample_weight=None):
        return super().test_on_batch(self.translator(x), y,
                                     sample_weight=sample_weight)

    def fit_generator(self, generator, steps_per_epoch, epochs=1, verbose=1,
                      callbacks=None, validation_data=None, validation_steps=None,
                      class_weight=None, max_q_size=10, workers=1,
                      pickle_safe=False, initial_epoch=0):
        raise Exception("Generator interface not supported by this "
                        "keras.models.Sequential wrapper")

    def evaluate_generator(self, generator, steps, max_q_size=10, workers=1,
                           pickle_safe=False):
        raise Exception("Generator interface not supported by this "
                        "keras.models.Sequential wrapper")

    def predict_generator(self, generator, steps, max_q_size=10, workers=1,
                          pickle_safe=False, verbose=0):
        raise Exception("Generator interface not supported by this "
                        "keras.models.Sequential wrapper")
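# Design note on the overrides above: every entry point first maps x through
# self.translator, so scikit-learn's AdaBoost keeps feeding the trivially
# preprocessed int sequences while each ensemble member trains on its own
# adapted vocabulary. Minimal sketch (hypothetical objects):
#
#   model = AdaptiveSequential(translator=my_translator)
#   model.fit(x_trivial, y, sample_weight=w)   # fits on my_translator(x_trivial)
#   model.predict(x_trivial)                   # predicts on translated input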
class StaticKerasModelBuilder:
    """Keras model factory for a static-preprocessor AdaBoost ensemble.
    To be used with the KerasClassifier scikit-learn wrapper for Keras
    Sequential models."""
    def __init__(self, preprocessed_dataset, word_embeddings_opt, model_builder):
        self.preprocessed_dataset = preprocessed_dataset
        self.word_embeddings_opt = word_embeddings_opt
        self.model_builder = model_builder
        self._created_models = []

    @classmethod
    def register_adaboost(cls, adaboost):
        cls._adaboost = adaboost

    def __call__(self):
        boosting_state = self._adaboost._boosting_state
        # Use config.ensemble_model_builder to configure model generation.
        model = Network.create_model(preprocessed_dataset=self.preprocessed_dataset,
                                     word_embeddings_opt=self.word_embeddings_opt,
                                     model_builder=self.model_builder)
        self._created_models.append(model)
        return model


class AdaptiveAdaBoostModel:
    @classmethod
    def create_embedding_layer(cls, preprocessed_train_tweets, sample_weight,
                               preprocessed_dataset, **word_embeddings_opt):
        preprocessor = preprocessed_dataset.preprocessor
        # Create the embedding layer.
        word_embeddings_opt_param = {"initializer": "word2vec",
                                     "dim": 400,
                                     "trainable": False,
                                     "corpus_name": None}
        word_embeddings_opt_param.update(word_embeddings_opt)
        if word_embeddings_opt_param["initializer"] in Network.word_embedding_models:
            # Pre-trained initialization: compute word embeddings from the
            # sample-weighted tweets, optionally frozen.
            word_embeddings = Network.word_embedding_models[
                word_embeddings_opt_param["initializer"]](
                    preprocessor=preprocessor,
                    preprocessed_tweets=preprocessed_dataset.weighted_preprocessed_tweets(
                        preprocessed_train_tweets=preprocessed_train_tweets,
                        sample_weight=sample_weight),
                    word_embedding_dimensions=word_embeddings_opt_param["dim"],
                    embedding_corpus_name=word_embeddings_opt_param["corpus_name"])
            embedding_layer = Embedding(input_dim=preprocessor.vocabulary.word_count,
                                        output_dim=word_embeddings_opt_param["dim"],
                                        weights=[word_embeddings.embedding_matrix],
                                        input_length=preprocessed_dataset.max_tweet_length,
                                        trainable=word_embeddings_opt_param["trainable"])
        else:
            # Random initialization.
            embedding_layer = Embedding(input_dim=preprocessor.vocabulary.word_count,
                                        output_dim=word_embeddings_opt_param["dim"],
                                        input_length=preprocessed_dataset.max_tweet_length,
                                        trainable=word_embeddings_opt_param["trainable"])
        print("Created Embedding layer - Word count %d, dimensions %d, max tweet length %d"
              % (preprocessor.vocabulary.word_count,
                 word_embeddings_opt_param["dim"],
                 preprocessed_dataset.max_tweet_length))
        return embedding_layer

    @classmethod
    def create_model(cls, twitter_dataset, trivially_preprocessed_dataset,
                     preprocessor_factory, word_embeddings_opt={},
                     training_opt={}, adaboost_opt={}):
        model_builder = AdaptiveKerasModelBuilder(
            twitter_dataset=twitter_dataset,
            trivially_preprocessed_dataset=trivially_preprocessed_dataset,
            preprocessor_factory=preprocessor_factory,
            word_embeddings_opt=word_embeddings_opt)
        training_opt_param = {"epochs": 4, "batch_size": 64}
        training_opt_param.update(training_opt)
        adaboost_opt_param = {"algorithm": "SAMME.R",
                              "n_estimators": 5,
                              "learning_rate": 1}
        adaboost_opt_param.update(adaboost_opt)
        # TODO: make evaluation/prediction callbacks accessible from config
        # and pass them through to KerasClassifier.
        sklearn_model = KerasClassifier(build_fn=model_builder, verbose=1,
                                        **training_opt_param)
        adaboost_model = AdaptiveAdaBoostClassifier(sklearn_model,
                                                    **adaboost_opt_param)
        # Store a reference to the AdaBoost instance in the Keras models to
        # access AdaBoost internals at model fit time.
        model_builder.register_adaboost(adaboost_model)
        return adaboost_model
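# Usage sketch for the adaptive ensemble factory above (all arguments are
# hypothetical objects following the interfaces used in this file):
#
#   adaboost = AdaptiveAdaBoostModel.create_model(
#       twitter_dataset=twitter_dataset,
#       trivially_preprocessed_dataset=trivial_ds,
#       preprocessor_factory=make_preprocessor,
#       word_embeddings_opt={"dim": 200},
#       adaboost_opt={"n_estimators": 3})
#
# Calling adaboost.fit(...) then re-invokes the registered builder once per
# boosting iteration, so each estimator gets a vocabulary adapted to the
# current sample weights.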
class StaticAdaBoostModel:
    @classmethod
    def create_model(cls, preprocessed_dataset, word_embeddings_opt={},
                     training_opt={}, adaboost_opt={}, model_builder=None):
        keras_model_factory = StaticKerasModelBuilder(
            preprocessed_dataset=preprocessed_dataset,
            word_embeddings_opt=word_embeddings_opt,
            model_builder=model_builder)
        training_opt_param = {"epochs": 4, "batch_size": 64}
        training_opt_param.update(training_opt)
        adaboost_opt_param = {"algorithm": "SAMME.R",
                              "n_estimators": 5,
                              "learning_rate": 1}
        adaboost_opt_param.update(adaboost_opt)
        sklearn_model = KerasClassifier(build_fn=keras_model_factory, verbose=1,
                                        **training_opt_param)
        adaboost_model = AdaptiveAdaBoostClassifier(sklearn_model,
                                                    **adaboost_opt_param)
        # Store a reference to the AdaBoost instance in the Keras models to
        # access AdaBoost internals at model fit time.
        keras_model_factory.register_adaboost(adaboost_model)
        return adaboost_model


class AdaBoostModel:
    @classmethod
    def train(cls, model, preprocessed_dataset, model_save_path=None):
        # NOTE: this dataset is trivially preprocessed (LexicalPreprocessor only).
        # Create the training data.
        (x_train, y_train, x_orig_train), (x_val, y_val, x_orig_val) = \
            preprocessed_dataset.shuffle_and_split_padded()
        model.fit(x_train, y_train)

        print("***** Training summary *****")
        for iboost, (weight, error) in enumerate(zip(model.estimator_weights_,
                                                     model.estimator_errors_)):
            print("\t%d-th estimator: weighted training error = %.2f%%, "
                  "estimator weight = %.8f" % (iboost, 100*error, weight))

        print("***** Evaluation *****")
        for iboost, accuracy in enumerate(model.staged_score(x_val, y_val)):
            print("\tAfter %d-th boosting iteration: accuracy = %.2f%%"
                  % (iboost, 100*accuracy))

        # TODO: save all models and weights to a JSON file
        # (model.weights etc., cf. the Keras documentation).
        # if model_save_path is not None:
        #     save_model(model, model_save_path)

    # TODO: port output_misclassified_samples() from the single-model case
    # to the AdaBoost ensemble.

    @classmethod
    def predict(cls, model, preprocessed_dataset, prediction_file):
        if not model:
            raise Exception("You need to train or load a pretrained model in order to predict")
        x_test = preprocessed_dataset.test_tweets_padded()
        predictions = model.predict(x_test)
        print("Done with predictions, generating submission file...")
        with open(prediction_file, "w") as submission:
            submission.write("Id,Prediction\n")
            for i, prediction in enumerate(predictions):
                if prediction > 0.5:
                    prediction = 1
                else:
                    prediction = -1
                submission.write('%d,%d\n' % (i+1, prediction))
        print("Generated submission file (%s) with %d results"
              % (prediction_file, predictions.shape[0]))
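# End-to-end sketch of the static-ensemble flow above (ds and the config
# attribute are hypothetical stand-ins):
#
#   model = StaticAdaBoostModel.create_model(
#       preprocessed_dataset=ds,
#       model_builder=config.ensemble_model_builder)
#   AdaBoostModel.train(model, ds)
#   AdaBoostModel.predict(model, ds, "submission.csv")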
{"hexsha": "4658d0b5a2cfc10302e8eafb27685c885752bb57", "size": 32022, "ext": "py", "lang": "Python", "max_stars_repo_path": "NeuralNetwork.py", "max_stars_repo_name": "xabarass/cil-tweeter", "max_stars_repo_head_hexsha": "cf6c09879ef4cd431a61b6573a5b0f9e03ea3309", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "NeuralNetwork.py", "max_issues_repo_name": "xabarass/cil-tweeter", "max_issues_repo_head_hexsha": "cf6c09879ef4cd431a61b6573a5b0f9e03ea3309", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "NeuralNetwork.py", "max_forks_repo_name": "xabarass/cil-tweeter", "max_forks_repo_head_hexsha": "cf6c09879ef4cd431a61b6573a5b0f9e03ea3309", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.7474452555, "max_line_length": 212, "alphanum_fraction": 0.619230529, "include": true, "reason": "import numpy", "num_tokens": 6156}
import json

import numpy as np
from scipy.stats import multivariate_normal
# import matplotlib.pyplot as plt


def parse_json_parameters(func):
    """Decorator that JSON-decodes every string argument before calling func."""
    def inner(*args, **kwargs):
        print(args, kwargs)
        args = [json.loads(value) if type(value) is str else value
                for value in args]
        kwargs = {key: json.loads(kwargs[key]) if type(kwargs[key]) is str
                  else kwargs[key]
                  for key in kwargs}
        return func(*args, **kwargs)
    return inner


def parse_json(x):
    try:
        return json.loads(x)
    except (ValueError, TypeError):
        return x


# @parse_json_parameters
def generate_gauss_sample(mu='0', sigma='1', n='1'):
    mu, sigma, n = parse_json(mu), parse_json(sigma), parse_json(n)
    try:
        # Multivariate case: mu is a vector, sigma a covariance matrix.
        return np.random.multivariate_normal(mu, sigma, n).tolist()
    except ValueError:
        # Univariate fallback: mu and sigma are scalars.
        return np.random.normal(mu, sigma, n).tolist()


# @parse_json_parameters
def gauss_pdf(mu='0', sigma='1', x='0'):
    mu, sigma, x = parse_json(mu), parse_json(sigma), parse_json(x)
    return multivariate_normal(mu, sigma).pdf(x).tolist()


# def plot_dataset(class1, class2):
#     for sample in class1:
#         plt.plot(sample[0], sample[1], 'r.')
#     for sample in class2:
#         plt.plot(sample[0], sample[1], 'g.')
#     plt.savefig('{}-{}.pdf'.format(len(class1), len(class2)))


# def decide(class1, class2):
#     def inner(sample):
#         return 2 * class1.pdf(sample) - class2.pdf(sample)
#     return inner


# def plot_roc():
#     with open('dataset.txt') as f:
#         dataset = json.load(f)
#     # Training model
#     class1_samples = np.matrix(dataset['class1']['samples']).T
#     class2_samples = np.matrix(dataset['class2']['samples']).T
#     class1 = mnormal(mean=np.asarray(np.mean(class1_samples, axis=1)).reshape(-1),
#                      cov=np.cov(class1_samples))
#     class2 = mnormal(mean=np.asarray(np.mean(class2_samples, axis=1)).reshape(-1),
#                      cov=np.cov(class2_samples))
#     g = decide(class1, class2)
#     # Counting TP and FP
#     samples = [(g(sample), 'class1') for sample in dataset['class1']['samples']] \
#             + [(g(sample), 'class2') for sample in dataset['class2']['samples']]
#     samples.sort(key=lambda e: e[0], reverse=True)
#     samples = [sample[1] for sample in samples]
#     tp = np.cumsum([sample == 'class1' for sample in samples])
#     tp = [i/len(dataset['class1']['samples']) for i in tp]
#     fp = np.cumsum([sample == 'class2' for sample in samples])
#     fp = [i/len(dataset['class2']['samples']) for i in fp]
#     # Plot
#     plt.plot(fp, tp)
#     plt.plot([0, 1], [0, 1], '--')
#     plt.xlabel('False Positive Rate')
#     plt.ylabel('True Positive Rate')
#     plt.savefig('roc.pdf')
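# Usage sketch (arguments follow the JSON-string convention accepted above):
#
#   generate_gauss_sample(mu='[0, 0]', sigma='[[1, 0], [0, 1]]', n='2')
#   gauss_pdf(mu='0', sigma='1', x='0.5')   # ~0.3521, the N(0, 1) density at 0.5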
{"hexsha": "4f25539a7c0c30ccf2e316ab7f7df71a729fa981", "size": 2747, "ext": "py", "lang": "Python", "max_stars_repo_path": "models/gauss.py", "max_stars_repo_name": "tangym/autoapi", "max_stars_repo_head_hexsha": "adc3ce02a803dd989be787ff21568231103d8625", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "models/gauss.py", "max_issues_repo_name": "tangym/autoapi", "max_issues_repo_head_hexsha": "adc3ce02a803dd989be787ff21568231103d8625", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "models/gauss.py", "max_forks_repo_name": "tangym/autoapi", "max_forks_repo_head_hexsha": "adc3ce02a803dd989be787ff21568231103d8625", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.2179487179, "max_line_length": 151, "alphanum_fraction": 0.6133964325, "include": true, "reason": "import numpy,from scipy", "num_tokens": 729}
# -*- coding: utf-8 -*-
"""Tests for sparkdatachallenge package."""
import numpy as np
import pytest

import sparkdatachallenge

incheck_pass = [
    (np.array([1]), np.array([2]), True),
    (np.array([1]), np.array([1, 2]), False),
    (np.array([1002]), np.array([1, 2]), False),
    (np.array([-1]), np.array([1, 2]), False),
    (np.array([-1]), np.array([1]), False),
    (np.array([1]), np.array([-1]), False),
    (np.array([]), np.array([]), False),
    (np.array([1]), np.array([-1]), False),
    (np.array([1]), np.array([1_000_000]), False),
    (np.array([1]), np.array([0]), True),
]

incheck_fail = [
    (None, None),
    (None, np.array([1])),
    (np.array([1]), None),
]

simple = [
    (np.array([0, 1, 2, 2, 3, 5]),
     np.array([500_000, 500_000, 0, 0, 0, 20_000]), 8),
    (np.array([0, 0, 1, 2, 2, 3, 5]),
     np.array([0, 500_000, 500_000, 0, 0, 0, 20_000]), 8),
    (np.array([0, 0, 0, 1, 2, 2, 3, 5]),
     np.array([0, 0, 500_000, 500_000, 0, 0, 0, 20_000]), 9),
    (np.array([1, 3] * int(10 ** 0 / 2)),
     np.array([500_000, 0] * int(10 ** 0 / 2)), 0),
    (np.array([1, 3] * int(10 ** 1 / 2)),
     np.array([500_000, 0] * int(10 ** 1 / 2)), 35),
    (np.array([1, 3] * int(10 ** 2 / 2)),
     np.array([500_000, 0] * int(10 ** 2 / 2)), 3725),
    (np.array([1, 3] * int(10 ** 3 / 2)),
     np.array([500_000, 0] * int(10 ** 3 / 2)), 374750),
    (np.array([1, 3] * int(10 ** 4 / 2)),
     np.array([500_000, 0] * int(10 ** 4 / 2)), 37497500),
    (np.array([1, 3] * int(10 ** 5 / 2)),
     np.array([500_000, 0] * int(10 ** 5 / 2)), 1000000000),
]

comp = [
    (np.array([0, 1, 3]), np.array([0, 400_000, 5_000_000]), 1),
]


@pytest.mark.parametrize("ina, inb, res", incheck_pass)
def test_input_check_pass(ina, inb, res):
    assert res == sparkdatachallenge.check_input(ina, inb)


@pytest.mark.parametrize("ina, inb", incheck_fail)
def test_input_check_fail(ina, inb):
    with pytest.raises(TypeError):
        sparkdatachallenge.check_input(ina, inb)


@pytest.mark.parametrize("ina, inb, res", simple)
def test_simple(ina, inb, res):
    # solution_brute1 fails on memory allocation and solution_brute2 takes a
    # while, so only the closed-form solution is asserted here.
    assert res == sparkdatachallenge.solution_math(ina, inb)


@pytest.mark.parametrize("ina, inb, res", comp)
def test_comp(ina, inb, res):
    # Distinct name required: a second test_simple would shadow the test
    # above and silently skip it.
    assert res == sparkdatachallenge.solution_math2(ina, inb)


# ==============================================================================
# The code below is for debugging a particular test in eclipse/pydev
# (otherwise all tests are normally run with pytest).
# Make sure that you run this code with the project directory as CWD, and
# that the source directory is on the path.
# ==============================================================================
# if __name__ == "__main__":
#     the_test_you_want_to_debug = test_hello_noargs
#
#     print("__main__ running", the_test_you_want_to_debug)
#     the_test_you_want_to_debug()
#     print('-*# finished #*-')
# eof
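# To run just the parametrized solution tests above from the shell (standard
# pytest command-line options, nothing project-specific):
#
#   pytest tests/test_sparkdatachallenge.py -k "simple or comp" -v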
{"hexsha": "62053bff46d7be15c8950669e42168cd774afebe", "size": 3281, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/test_sparkdatachallenge.py", "max_stars_repo_name": "tomerten/sparkdatachallenge", "max_stars_repo_head_hexsha": "d20dbf5008a4dc5909b886486bb7f5658edd0e73", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tests/test_sparkdatachallenge.py", "max_issues_repo_name": "tomerten/sparkdatachallenge", "max_issues_repo_head_hexsha": "d20dbf5008a4dc5909b886486bb7f5658edd0e73", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tests/test_sparkdatachallenge.py", "max_forks_repo_name": "tomerten/sparkdatachallenge", "max_forks_repo_head_hexsha": "d20dbf5008a4dc5909b886486bb7f5658edd0e73", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.1511627907, "max_line_length": 108, "alphanum_fraction": 0.585492228, "include": true, "reason": "import numpy", "num_tokens": 1105}
(* The converse of a composition reverses the order of its factors, so from
   covb: "ov O b \<subseteq> b" we get
   "b^-1 O ov^-1 = (ov O b)^-1 \<subseteq> b^-1". *)
lemma cbiovi: "b^-1 O ov^-1 \<subseteq> b^-1"
  using covb by auto
{"llama_tokens": 146, "file": "Allen_Calculus_allen", "length": 2}
""" Utils functions for LSTM network. """ from keras.models import Sequential, load_model from keras.layers import Dense, Activation, Dropout from keras.layers import LSTM from keras.optimizers import RMSprop import io import numpy as np def create_sequences(text, sequence_length, step): sequences = [] next_chars = [] for i in range(0, len(text) - sequence_length, step): sequences.append(text[i: i + sequence_length]) next_chars.append(text[i + sequence_length]) return sequences, next_chars def build_model(sequence_length, chars): model = Sequential() model.add(LSTM(128, input_shape=(sequence_length, len(chars)))) model.add(Dense(len(chars))) model.add(Activation('softmax')) optimizer = RMSprop(lr=0.01) model.compile(loss='categorical_crossentropy', optimizer=optimizer) return model def sample(preds, temperature=1.0): if temperature == 0: temperature = 1 preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) def extract_characters(text): return sorted(list(set(text))) def get_chars_index_dicts(chars): return dict((c, i) for i, c in enumerate(chars)), dict((i, c) for i, c in enumerate(chars)) def read_corpus(path): with io.open(path, 'r', encoding='utf8') as f: return f.read().lower() def vectorize(sequences, sequence_length, chars, char_to_index, next_chars): X = np.zeros((len(sequences), sequence_length, len(chars)), dtype=np.bool) y = np.zeros((len(sequences), len(chars)), dtype=np.bool) for i, sentence in enumerate(sequences): for t, char in enumerate(sentence): X[i, t, char_to_index[char]] = 1 y[i, char_to_index[next_chars[i]]] = 1 return X, y
{"hexsha": "8965bef5e90fbf0b52e49c3f3265a1de7d03c3a1", "size": 1917, "ext": "py", "lang": "Python", "max_stars_repo_path": "keras/lyrics/helper.py", "max_stars_repo_name": "PipelineAI/models", "max_stars_repo_head_hexsha": "d8df07877aa8b10ce9b84983bb440af75e84dca7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 44, "max_stars_repo_stars_event_min_datetime": "2017-11-17T06:19:05.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-03T06:00:56.000Z", "max_issues_repo_path": "keras/lyrics/helper.py", "max_issues_repo_name": "PipelineAI/models", "max_issues_repo_head_hexsha": "d8df07877aa8b10ce9b84983bb440af75e84dca7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-08-09T14:28:17.000Z", "max_issues_repo_issues_event_max_datetime": "2018-09-10T03:32:42.000Z", "max_forks_repo_path": "keras/lyrics/helper.py", "max_forks_repo_name": "PipelineAI/models", "max_forks_repo_head_hexsha": "d8df07877aa8b10ce9b84983bb440af75e84dca7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2017-11-18T15:12:12.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-15T07:08:33.000Z", "avg_line_length": 27.0, "max_line_length": 95, "alphanum_fraction": 0.6807511737, "include": true, "reason": "import numpy", "num_tokens": 468}
""" @author: Jun Wang @date: 20210308 @contact: jun21wangustc@gmail.com """ # based on: # https://github.com/deepinsight/insightface/tree/master/evaluation/IJB import numpy as np from numpy import matlib from prettytable import PrettyTable from sklearn.metrics import roc_curve class IJBCEvaluator(object): """Implementation of IJBC test protocal. """ def __init__(self, template_media_list, template_pair_list, image_list, data_loader, feature_extractor): """Init IJBCEvaluator. Args: template_media_list(str): the path of 'ijbc_face_tid_mid.txt' template_pair_list(str): the path of 'ijbc_template_pair_label.txt ' image_list(str): the path of 'img_list.txt' data_loader(object): a test data loader. feature_extractor(object): a feature extractor. """ templates = [] medias = [] template_media_list_buf = open(template_media_list) line = template_media_list_buf.readline().strip() while line: image_name, tid, mid = line.split(' ') templates.append(int(tid)) medias.append(int(mid)) line = template_media_list_buf.readline().strip() self.templates = np.array(templates) self.medias = np.array(medias) template1 = [] template2 = [] label = [] template_pair_list_buf = open(template_pair_list) line = template_pair_list_buf.readline().strip() while line: t1, t2, cur_label = line.split(' ') template1.append(int(t1)) template2.append(int(t2)) label.append(int(cur_label)) line = template_pair_list_buf.readline().strip() self.template1 = np.array(template1) self.template2 = np.array(template2) self.label = np.array(label) self.image_list = [] faceness_scores = [] image_list_buf = open(image_list) line = image_list_buf.readline().strip() while line: self.image_list.append(line.split(' ')[0]) faceness_scores.append(float(line.split(' ')[-1])) line = image_list_buf.readline().strip() self.faceness_scores = np.array(faceness_scores) self.data_loader = data_loader self.feature_extractor = feature_extractor def verification(self, template_norm_feats, unique_templates): template2id = np.zeros((max(unique_templates)+1, 1), dtype=int) for count_template, uqt in enumerate(unique_templates): template2id[uqt] = count_template score = np.zeros((len(self.template1),)) # save cosine distance between pairs total_pairs = np.array(range(len(self.template1))) batchsize = 100000 # small batchsize instead of all pairs in one batch due to the memory limiation sublists = [total_pairs[i:i + batchsize] for i in range(0, len(self.template1), batchsize)] total_sublists = len(sublists) for c, s in enumerate(sublists): feat1 = template_norm_feats[template2id[self.template1[s]]] feat2 = template_norm_feats[template2id[self.template2[s]]] similarity_score = np.sum(feat1 * feat2, -1) score[s] = similarity_score.flatten() if c % 10 == 0: print('Finish {}/{} pairs.'.format(c, total_sublists)) return score def image2template_feature(self, img_feats): unique_templates = np.unique(self.templates) template_feats = np.zeros((len(unique_templates), img_feats.shape[1])) for count_template, uqt in enumerate(unique_templates): (ind_t,) = np.where(self.templates == uqt) face_norm_feats = img_feats[ind_t] face_medias = self.medias[ind_t] unique_medias, unique_media_counts = np.unique(face_medias, return_counts=True) media_norm_feats = [] for u,ct in zip(unique_medias, unique_media_counts): (ind_m,) = np.where(face_medias == u) if ct == 1: media_norm_feats += [face_norm_feats[ind_m]] else: # image features from the same video will be aggregated into one feature media_norm_feats += [np.mean(face_norm_feats[ind_m], 0, keepdims=True)] media_norm_feats = 
            # media_norm_feats = media_norm_feats / np.sqrt(
            #     np.sum(media_norm_feats ** 2, -1, keepdims=True))
            template_feats[count_template] = np.sum(media_norm_feats, 0)
            if count_template % 2000 == 0:
                print('Finish Calculating {} template features.'.format(
                    count_template))
        template_norm_feats = template_feats / np.sqrt(
            np.sum(template_feats ** 2, -1, keepdims=True))
        return template_norm_feats, unique_templates

    def test(self, model, use_detector_score=True):
        fpr_list = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1]
        image_name2feature = self.feature_extractor.extract_online(
            model, self.data_loader)
        feature_list = []
        for image_name in self.image_list:
            feature_list.append(image_name2feature[image_name])
        feature_list = np.array(feature_list).astype(np.float32)
        if use_detector_score:
            # Weight each image feature by its face-detector score.
            feature_list = feature_list * matlib.repmat(
                self.faceness_scores[:, np.newaxis], 1, feature_list.shape[1])
        template_norm_feats, unique_templates = \
            self.image2template_feature(feature_list)
        score = self.verification(template_norm_feats, unique_templates)
        fpr, tpr, _ = roc_curve(self.label, score)
        fpr = np.flipud(fpr)
        tpr = np.flipud(tpr)  # select the largest TPR at the same FPR
        tpr_list = []
        for fpr_iter in np.arange(len(fpr_list)):
            _, min_index = min(list(zip(abs(fpr - fpr_list[fpr_iter]),
                                        range(len(fpr)))))
            tpr_list.append('%.4f' % tpr[min_index])
        return tpr_list
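# Usage sketch (hypothetical paths and objects; data_loader and
# feature_extractor must provide the interfaces used above):
#
#   evaluator = IJBCEvaluator('ijbc_face_tid_mid.txt',
#                             'ijbc_template_pair_label.txt',
#                             'img_list.txt',
#                             data_loader, feature_extractor)
#   tpr_list = evaluator.test(model)   # TPR at FPR = 1e-6 ... 1e-1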
{"hexsha": "af27fbcacc3be371d6d848dddf63fd10b5db9723", "size": 6087, "ext": "py", "lang": "Python", "max_stars_repo_path": "test_protocol/ijbc/ijbc_evaluator.py", "max_stars_repo_name": "weihaoxie/FaceX-Zoo", "max_stars_repo_head_hexsha": "db0b087e4f4d28152e172d6c8d3767a8870733b4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1329, "max_stars_repo_stars_event_min_datetime": "2021-01-13T07:06:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T07:23:39.000Z", "max_issues_repo_path": "test_protocol/ijbc/ijbc_evaluator.py", "max_issues_repo_name": "weihaoxie/FaceX-Zoo", "max_issues_repo_head_hexsha": "db0b087e4f4d28152e172d6c8d3767a8870733b4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 115, "max_issues_repo_issues_event_min_datetime": "2021-01-13T10:42:57.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-28T03:57:52.000Z", "max_forks_repo_path": "test_protocol/ijbc/ijbc_evaluator.py", "max_forks_repo_name": "weihaoxie/FaceX-Zoo", "max_forks_repo_head_hexsha": "db0b087e4f4d28152e172d6c8d3767a8870733b4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 351, "max_forks_repo_forks_event_min_datetime": "2021-01-13T07:21:00.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T14:11:39.000Z", "avg_line_length": 46.1136363636, "max_line_length": 120, "alphanum_fraction": 0.6408739938, "include": true, "reason": "import numpy,from numpy", "num_tokens": 1378}