Convert a single page LaTeX document into an image. To display the returned image, `img.show()` Required external dependencies: `pdflatex` (with `qcircuit` package), and `poppler` (for `pdftocairo`). Args: latex: A LaTeX document as a string. Returns: A PIL Image Raises: O...
def render_latex(latex: str) -> PIL.Image: # pragma: no cover """ Convert a single page LaTeX document into an image. To display the returned image, `img.show()` Required external dependencies: `pdflatex` (with `qcircuit` package), and `poppler` (for `pdftocairo`). Args: latex: A LaTeX...
Create an image of a quantum circuit. A convenience function that calls circuit_to_latex() and render_latex(). Args: circ: A quantum Circuit qubits: Optional qubit list to specify qubit order Returns: A PIL Image (Use img.show() to display) Raises: ...
def circuit_to_image(circ: Circuit, qubits: Qubits = None) -> PIL.Image: # pragma: no cover """Create an image of a quantum circuit. A convenience function that calls circuit_to_latex() and render_latex(). Args: circ: A quantum Circuit qubits: Optional qubi...
Format an object as a latex string.
def _latex_format(obj: Any) -> str: """Format an object as a latex string.""" if isinstance(obj, float): try: return sympy.latex(symbolize(obj)) except ValueError: return "{0:.4g}".format(obj) return str(obj)
Tensorflow eager mode example. Given an arbitrary one-qubit gate, use gradient descent to find corresponding parameters of a universal ZYZ gate.
def fit_zyz(target_gate): """ Tensorflow eager mode example. Given an arbitrary one-qubit gate, use gradient descent to find corresponding parameters of a universal ZYZ gate. """ assert bk.BACKEND == 'eager' tf = bk.TL tfe = bk.tfe steps = 4000 dev = '/gpu:0' if bk.DEVICE == '...
Print version strings of currently installed dependencies ``> python -m quantumflow.meta`` Args: file: Output stream. Defaults to stdout.
def print_versions(file: typing.TextIO = None) -> None: """ Print version strings of currently installed dependencies ``> python -m quantumflow.meta`` Args: file: Output stream. Defaults to stdout. """ print('** QuantumFlow dependencies (> python -m quantumflow.meta) **') print(...
Tensorflow example. Given an arbitrary one-qubit gate, use gradient descent to find corresponding parameters of a universal ZYZ gate.
def fit_zyz(target_gate): """ Tensorflow example. Given an arbitrary one-qubit gate, use gradient descent to find corresponding parameters of a universal ZYZ gate. """ assert bk.BACKEND == 'tensorflow' tf = bk.TL steps = 4000 t = tf.get_variable('t', [3]) gate = qf.ZYZ(t[0], t[1],...
Prepare a 4-qubit W state using sqrt(iswaps) and local gates
def prepare_w4(): """ Prepare a 4-qubit W state using sqrt(iswaps) and local gates """ circ = qf.Circuit() circ += qf.X(1) circ += qf.ISWAP(1, 2) ** 0.5 circ += qf.S(2) circ += qf.Z(2) circ += qf.ISWAP(2, 3) ** 0.5 circ += qf.S(3) circ += qf.Z(3) circ += qf.ISWAP(0, 1)...
Converts a 1-qubit gate into an RN gate, a 1-qubit rotation of angle theta about axis (nx, ny, nz) in the Bloch sphere. Returns: A Circuit containing a single RN gate
def bloch_decomposition(gate: Gate) -> Circuit: """ Converts a 1-qubit gate into an RN gate, a 1-qubit rotation of angle theta about axis (nx, ny, nz) in the Bloch sphere. Returns: A Circuit containing a single RN gate """ if gate.qubit_nb != 1: raise ValueError('Expected 1-qubit...
Returns the Euler Z-Y-Z decomposition of a local 1-qubit gate.
def zyz_decomposition(gate: Gate) -> Circuit: """ Returns the Euler Z-Y-Z decomposition of a local 1-qubit gate. """ if gate.qubit_nb != 1: raise ValueError('Expected 1-qubit gate') q, = gate.qubits U = asarray(gate.asoperator()) U /= np.linalg.det(U) ** (1/2) # SU(2) if ab...
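The Euler Z-Y-Z idea above can be made concrete with a small numpy sketch, independent of QuantumFlow (the helper names `rz`, `ry`, and `zyz_angles` are illustrative, not part of the library): project the unitary into SU(2), then read the three angles off the matrix entries.

```python
import cmath
import math
import numpy as np

def rz(t):
    """Rotation about the Z axis by angle t (radians)."""
    return np.array([[cmath.exp(-0.5j * t), 0], [0, cmath.exp(0.5j * t)]])

def ry(t):
    """Rotation about the Y axis by angle t (radians)."""
    c, s = math.cos(t / 2), math.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def zyz_angles(U):
    """Angles (t0, t1, t2) with rz(t2) @ ry(t1) @ rz(t0) == U up to global phase.

    Generic case only: the gimbal-locked cases sin(t1/2) = 0 or
    cos(t1/2) = 0 would need special handling.
    """
    U = np.asarray(U, dtype=complex)
    U = U / np.sqrt(np.linalg.det(U))  # project into SU(2)
    t1 = 2 * math.atan2(abs(U[1, 0]), abs(U[0, 0]))
    t2 = cmath.phase(U[1, 1]) + cmath.phase(U[1, 0])
    t0 = cmath.phase(U[1, 1]) - cmath.phase(U[1, 0])
    return t0, t1, t2
```

For example, the Hadamard gate comes out with t1 = pi/2, matching the usual H ~ Ry(pi/2) picture up to Z rotations and a global phase.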
Decompose a 2-qubit unitary composed of two 1-qubit local gates. Uses the "Nearest Kronecker Product" algorithm. Will give erratic results if the gate is not the direct product of two 1-qubit gates.
def kronecker_decomposition(gate: Gate) -> Circuit: """ Decompose a 2-qubit unitary composed of two 1-qubit local gates. Uses the "Nearest Kronecker Product" algorithm. Will give erratic results if the gate is not the direct product of two 1-qubit gates. """ # An alternative approach would be t...
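The "Nearest Kronecker Product" step can be sketched on plain matrices (a minimal illustration of the Van Loan-Pitsianis approach, not the library's implementation): rearrange the 4x4 matrix so that a Kronecker product becomes a rank-1 outer product, then keep the dominant SVD term.

```python
import numpy as np

def nearest_kron_product(M):
    """2x2 factors A, B minimizing ||M - kron(A, B)||_F (Van Loan-Pitsianis)."""
    M = np.asarray(M, dtype=complex)
    # R[2i+j, 2k+l] = M[2i+k, 2j+l]: rows index A's entries, columns B's,
    # so kron(A, B) rearranges into the rank-1 matrix vec(A) @ vec(B).T
    R = M.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    u, s, vh = np.linalg.svd(R)
    A = np.sqrt(s[0]) * u[:, 0].reshape(2, 2)
    B = np.sqrt(s[0]) * vh[0, :].reshape(2, 2)
    return A, B
```

When M really is a direct product the reconstruction is exact (up to a scalar split between A and B that cancels in the Kronecker product); otherwise it is the closest rank-1 fit in Frobenius norm, which is why a far-from-product gate gives "erratic" factors.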
Returns the canonical coordinates of a 2-qubit gate
def canonical_coords(gate: Gate) -> Sequence[float]: """Returns the canonical coordinates of a 2-qubit gate""" circ = canonical_decomposition(gate) gate = circ.elements[6] # type: ignore params = [gate.params[key] for key in ('tx', 'ty', 'tz')] return params
Decompose a 2-qubit gate by removing local 1-qubit gates to leave the non-local canonical two-qubit gate. [1]_ [2]_ [3]_ [4]_ Returns: A Circuit of 5 gates: two initial 1-qubit gates; a CANONICAL gate, with coordinates in the Weyl chamber; two final 1-qubit gates The canonical coordinates can be found...
def canonical_decomposition(gate: Gate) -> Circuit: """Decompose a 2-qubit gate by removing local 1-qubit gates to leave the non-local canonical two-qubit gate. [1]_ [2]_ [3]_ [4]_ Returns: A Circuit of 5 gates: two initial 1-qubit gates; a CANONICAL gate, with coordinates in the Weyl chamber; two fina...
Diagonalize a complex symmetric matrix. The eigenvalues are complex, and the eigenvectors form an orthogonal matrix. Returns: eigenvalues, eigenvectors
def _eig_complex_symmetric(M: np.ndarray) -> Tuple[np.ndarray, np.ndarray]: """Diagonalize a complex symmetric matrix. The eigenvalues are complex, and the eigenvectors form an orthogonal matrix. Returns: eigenvalues, eigenvectors """ if not np.allclose(M, M.transpose()): raise np....
QAOA Maxcut using tensorflow
def maxcut_qaoa( graph, steps=DEFAULT_STEPS, learning_rate=LEARNING_RATE, verbose=False): """QAOA Maxcut using tensorflow""" if not isinstance(graph, nx.Graph): graph = nx.from_edgelist(graph) init_scale = 0.01 init_bias = 0.5 init_beta = normal(loc=init_bi...
Returns the K-qubit identity gate
def identity_gate(qubits: Union[int, Qubits]) -> Gate: """Returns the K-qubit identity gate""" _, qubits = qubits_count_tuple(qubits) return I(*qubits)
Direct product of two gates. Qubit count is the sum of each gate's qubit count.
def join_gates(*gates: Gate) -> Gate: """Direct product of two gates. Qubit count is the sum of each gate's qubit count.""" vectors = [gate.vec for gate in gates] vec = reduce(outer_product, vectors) return Gate(vec.tensor, vec.qubits)
Return a controlled unitary gate. Given a gate acting on K qubits, return a new gate on K+1 qubits prepended with a control bit.
def control_gate(control: Qubit, gate: Gate) -> Gate: """Return a controlled unitary gate. Given a gate acting on K qubits, return a new gate on K+1 qubits prepended with a control bit. """ if control in gate.qubits: raise ValueError('Gate and control qubits overlap') qubits = [control, *gate....
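On operator matrices, prepending a control bit simply embeds the gate in the lower-right block: |0><0| x I + |1><1| x U. A minimal numpy sketch (the name `controlled` is illustrative):

```python
import numpy as np

def controlled(U):
    """Matrix of a controlled-U on (control, targets): identity on the
    |0> control subspace, U on the |1> control subspace."""
    k = U.shape[0]
    cu = np.eye(2 * k, dtype=complex)
    cu[k:, k:] = U  # lower-right block acts when the control is |1>
    return cu

X = np.array([[0, 1], [1, 0]])
CNOT = controlled(X)  # the familiar controlled-NOT
```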
Return a conditional unitary gate. Apply gate0 to the target if the control qubit is zero, else apply gate1.
def conditional_gate(control: Qubit, gate0: Gate, gate1: Gate) -> Gate: """Return a conditional unitary gate. Apply gate0 to the target if the control qubit is zero, else apply gate1.""" assert gate0.qubits == gate1.qubits # FIXME tensor = join_gates(P0(control), gate0).tensor tensor += join_gates(P1(control), gate1...
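The two join_gates calls above implement a projector sum; in matrix form (a sketch, with P0 and P1 the computational-basis projectors):

```python
import numpy as np

P0 = np.array([[1, 0], [0, 0]])  # |0><0|
P1 = np.array([[0, 0], [0, 1]])  # |1><1|

def conditional(g0, g1):
    """Matrix form of the projector sum: P0 x g0 + P1 x g1."""
    return np.kron(P0, g0) + np.kron(P1, g1)
```

With g0 = I and g1 = X this reproduces CNOT, and the sum is unitary whenever both branch gates are, because the two projector blocks act on orthogonal subspaces.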
Return true if gate tensor is (almost) unitary
def almost_unitary(gate: Gate) -> bool: """Return true if gate tensor is (almost) unitary""" res = (gate @ gate.H).asoperator() N = gate.qubit_nb return np.allclose(asarray(res), np.eye(2**N), atol=TOLERANCE)
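The same check on a bare matrix, for readers without the Gate class (a sketch: U is unitary iff U times its conjugate transpose is the identity):

```python
import numpy as np

def almost_unitary_matrix(U, atol=1e-8):
    """True if U @ U^dagger is (almost) the identity."""
    U = np.asarray(U)
    return np.allclose(U @ U.conj().T, np.eye(U.shape[0]), atol=atol)
```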
Return true if gate tensor is (almost) the identity
def almost_identity(gate: Gate) -> bool: """Return true if gate tensor is (almost) the identity""" N = gate.qubit_nb return np.allclose(asarray(gate.asoperator()), np.eye(2**N))
Return true if gate tensor is (almost) Hermitian
def almost_hermitian(gate: Gate) -> bool: """Return true if gate tensor is (almost) Hermitian""" return np.allclose(asarray(gate.asoperator()), asarray(gate.H.asoperator()))
Pretty print a gate tensor Args: gate: ndigits: file: Stream to which to write. Defaults to stdout
def print_gate(gate: Gate, ndigits: int = 2, file: TextIO = None) -> None: """Pretty print a gate tensor Args: gate: ndigits: file: Stream to which to write. Defaults to stdout """ N = gate.qubit_nb gate_tensor = gate.vec.asarray() lines = [] for index...
r"""Returns a random unitary gate on K qubits. Ref: "How to generate random matrices from the classical compact groups" Francesco Mezzadri, math-ph/0609050
def random_gate(qubits: Union[int, Qubits]) -> Gate: r"""Returns a random unitary gate on K qubits. Ref: "How to generate random matrices from the classical compact groups" Francesco Mezzadri, math-ph/0609050 """ N, qubits = qubits_count_tuple(qubits) unitary = scipy.stats.unitary_g...
Prepare a 16-qubit W state using sqrt(iswaps) and local gates, respecting linear topology
def prepare_w16(): """ Prepare a 16-qubit W state using sqrt(iswaps) and local gates, respecting linear topology """ ket = qf.zero_state(16) circ = w16_circuit() ket = circ.run(ket) return ket
Return a circuit that prepares the 16-qubit W state using sqrt(iswaps) and local gates, respecting linear topology
def w16_circuit() -> qf.Circuit: """ Return a circuit that prepares the 16-qubit W state using sqrt(iswaps) and local gates, respecting linear topology """ gates = [ qf.X(7), qf.ISWAP(7, 8) ** 0.5, qf.S(8), qf.Z(8), qf.SWAP(7, 6), qf.SWAP(6, 5)...
A context manager to redirect stdout and/or stderr to /dev/null. Examples: with muted(sys.stdout): ... with muted(sys.stderr): ... with muted(sys.stdout, sys.stderr): ...
def muted(*streams): """A context manager to redirect stdout and/or stderr to /dev/null. Examples: with muted(sys.stdout): ... with muted(sys.stderr): ... with muted(sys.stdout, sys.stderr): ... """ devnull = open(os.devnull, 'w') try: old_streams = [os.dup(s.fileno()) for...
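The pattern behind muted can be sketched in full with contextlib (a minimal version under the assumption that each stream is backed by a real file descriptor; the original also handles streams without fileno support):

```python
import contextlib
import os

@contextlib.contextmanager
def muted(*streams):
    """Redirect the given OS-level streams to /dev/null, restoring on exit."""
    with open(os.devnull, "w") as devnull:
        saved = [os.dup(s.fileno()) for s in streams]  # keep the real descriptors
        try:
            for s in streams:
                s.flush()
                os.dup2(devnull.fileno(), s.fileno())  # point the fd at /dev/null
            yield
        finally:
            for s, fd in zip(streams, saved):
                s.flush()
                os.dup2(fd, s.fileno())  # restore the original descriptor
                os.close(fd)
```

Using os.dup2 rather than swapping sys.stdout means output from C extensions and subprocesses is silenced too, which is why the build-helper code below uses it around compiler invocations.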
Checks if a given function exists on the current platform.
def has_function(function_name, libraries=None): """Checks if a given function exists on the current platform.""" compiler = distutils.ccompiler.new_compiler() with muted(sys.stdout, sys.stderr): result = compiler.has_function( function_name, libraries=libraries) if os.path.exists('a.out'): os....
Execute the build command.
def run(self): """Execute the build command.""" module = self.distribution.ext_modules[0] base_dir = os.path.dirname(__file__) if base_dir: os.chdir(base_dir) exclusions = [] for define in self.define or []: module.define_macros.append(define) for library in self.libraries o...
Dispatch messages received from agents to the right handlers
async def handle_agent_message(self, agent_addr, message): """Dispatch messages received from agents to the right handlers""" message_handlers = { AgentHello: self.handle_agent_hello, AgentJobStarted: self.handle_agent_job_started, AgentJobDone: self.handle_agent_job_...
Dispatch messages received from clients to the right handlers
async def handle_client_message(self, client_addr, message): """Dispatch messages received from clients to the right handlers""" # Verify that the client is registered if message.__class__ != ClientHello and client_addr not in self._registered_clients: await ZMQUtils.send_with_addr(...
:param client_addrs: list of clients to which we should send the update
async def send_container_update_to_client(self, client_addrs): """ :param client_addrs: list of clients to which we should send the update """ self._logger.debug("Sending containers updates...") available_containers = tuple(self._containers.keys()) msg = BackendUpdateContainers(available...
Handle a ClientHello message. Send available containers to the client
async def handle_client_hello(self, client_addr, _: ClientHello): """ Handle a ClientHello message. Send available containers to the client """ self._logger.info("New client connected %s", client_addr) self._registered_clients.add(client_addr) await self.send_container_update_to_client(...
Handle a Ping message. Pong the client
async def handle_client_ping(self, client_addr, _: Ping): """ Handle a Ping message. Pong the client """ await ZMQUtils.send_with_addr(self._client_socket, client_addr, Pong())
Handle a ClientNewJob message. Add a job to the queue and trigger an update
async def handle_client_new_job(self, client_addr, message: ClientNewJob): """ Handle a ClientNewJob message. Add a job to the queue and trigger an update """ self._logger.info("Adding a new job %s %s to the queue", client_addr, message.job_id) self._waiting_jobs[(client_addr, message.job_id)]...
Handle a ClientKillJob message. Remove a job from the waiting list or send the kill message to the right agent.
async def handle_client_kill_job(self, client_addr, message: ClientKillJob): """ Handle a ClientKillJob message. Remove a job from the waiting list or send the kill message to the right agent. """ # Check if the job is not in the queue if (client_addr, message.job_id) in self._waiting_jobs: ...
Handles a ClientGetQueue message. Send back info about the job queue
async def handle_client_get_queue(self, client_addr, _: ClientGetQueue): """ Handles a ClientGetQueue message. Send back info about the job queue""" #jobs_running: a list of tuples in the form #(job_id, is_current_client_job, agent_name, info, launcher, started_at, max_end) jobs_running ...
Send waiting jobs to available agents
async def update_queue(self): """ Send waiting jobs to available agents """ # For now, round-robin not_found_for_agent = [] while len(self._available_agents) > 0 and len(self._waiting_jobs) > 0: agent_addr = self._available_agents.pop(0) # Find ...
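The round-robin dispatch described above can be sketched without the ZMQ machinery (names are illustrative; the real method also matches jobs to agents that can actually run them, hence its not_found_for_agent list):

```python
from collections import deque

def assign_round_robin(available_agents, waiting_jobs):
    """Pop one agent and one job at a time until either queue is exhausted."""
    assignments = []
    while available_agents and waiting_jobs:
        agent = available_agents.popleft()
        job = waiting_jobs.popleft()
        assignments.append((agent, job))
    return assignments
```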
Handle an AgentHello message. Add agent_addr to the list of available agents
async def handle_agent_hello(self, agent_addr, message: AgentHello): """ Handle an AgentHello message. Add agent_addr to the list of available agents """ self._logger.info("Agent %s (%s) said hello", agent_addr, message.friendly_name) if agent_addr in self._registered_agents...
Handle an AgentJobStarted message. Send the data back to the client
async def handle_agent_job_started(self, agent_addr, message: AgentJobStarted): """Handle an AgentJobStarted message. Send the data back to the client""" self._logger.debug("Job %s %s started on agent %s", message.job_id[0], message.job_id[1], agent_addr) await ZMQUtils.send_with_addr(self._clie...
Handle an AgentJobDone message. Send the data back to the client, and start new job if needed
async def handle_agent_job_done(self, agent_addr, message: AgentJobDone): """Handle an AgentJobDone message. Send the data back to the client, and start new job if needed""" if agent_addr in self._registered_agents: self._logger.info("Job %s %s finished on agent %s", message.job_id[0], mess...
Handle an AgentJobSSHDebug message. Send the data back to the client
async def handle_agent_job_ssh_debug(self, _, message: AgentJobSSHDebug): """Handle an AgentJobSSHDebug message. Send the data back to the client""" await ZMQUtils.send_with_addr(self._client_socket, message.job_id[0], BackendJobSSHDebug(message.job_id[1], message.host, message.port, ...
Ping the agents
async def _do_ping(self): """ Ping the agents """ # the list() call here is needed, as we remove entries from _registered_agents! for agent_addr, friendly_name in list(self._registered_agents.items()): try: ping_count = self._ping_count.get(agent_addr, 0) ...
Deletes an agent
async def _delete_agent(self, agent_addr): """ Deletes an agent """ self._available_agents = [agent for agent in self._available_agents if agent != agent_addr] del self._registered_agents[agent_addr] await self._recover_jobs(agent_addr)
Recover the jobs sent to a crashed agent
async def _recover_jobs(self, agent_addr): """ Recover the jobs sent to a crashed agent """ for (client_addr, job_id), (agent, job_msg, _) in reversed(list(self._job_running.items())): if agent == agent_addr: await ZMQUtils.send_with_addr(self._client_socket, client_addr, ...
Calls self._loop.create_task with a safe (== with logged exception) coroutine
def _create_safe_task(self, coroutine): """ Calls self._loop.create_task with a safe (== with logged exception) coroutine """ task = self._loop.create_task(coroutine) task.add_done_callback(self.__log_safe_task) return task
Parse a valid date
def parse_date(date, default=None): """ Parse a valid date """ if date == "": if default is not None: return default else: raise Exception("Unknown format for " + date) for format_type in ["%Y-%m-%d %H:%M:%S", "%Y-%m-%d %H:%M", "%Y-%m-%d %H", "%Y-%m-%d", "%d/%m/%Y %H...
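The try-each-format loop behind parse_date can be sketched as follows (a subset of the accepted formats; the original also accepts "%d/%m/%Y ..." variants and raises a bare Exception rather than ValueError):

```python
from datetime import datetime

FORMATS = ["%Y-%m-%d %H:%M:%S", "%Y-%m-%d %H:%M", "%Y-%m-%d %H", "%Y-%m-%d"]

def parse_date(date, default=None):
    """Try each accepted format in turn; fall back to `default` on empty input."""
    if date == "":
        if default is not None:
            return default
        raise ValueError("Unknown format for " + date)
    for fmt in FORMATS:
        try:
            return datetime.strptime(date, fmt)
        except ValueError:
            continue  # wrong format, try the next one
    raise ValueError("Unknown format for " + date)
```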
Returns True if the task/course is not yet accessible
def before_start(self, when=None): """ Returns True if the task/course is not yet accessible """ if when is None: when = datetime.now() return self._val[0] > when
Returns True if the course/task is still open
def is_open(self, when=None): """ Returns True if the course/task is still open """ if when is None: when = datetime.now() return self._val[0] <= when and when <= self._val[1]
Returns True if the course/task is still open with the soft deadline
def is_open_with_soft_deadline(self, when=None): """ Returns True if the course/task is still open with the soft deadline """ if when is None: when = datetime.now() return self._val[0] <= when and when <= self._soft_end
Returns true if the course/task is always accessible
def is_always_accessible(self): """ Returns true if the course/task is always accessible """ return self._val[0] == datetime.min and self._val[1] == datetime.max
Returns true if the course/task is never accessible
def is_never_accessible(self): """ Returns true if the course/task is never accessible """ return self._val[0] == datetime.max and self._val[1] == datetime.max
If the date is custom, return the start datetime with the format %Y-%m-%d %H:%M:%S. Else, returns "".
def get_std_start_date(self): """ If the date is custom, return the start datetime with the format %Y-%m-%d %H:%M:%S. Else, returns "". """ first, _ = self._val if first != datetime.min and first != datetime.max: return first.strftime("%Y-%m-%d %H:%M:%S") else: re...
If the date is custom, return the end datetime with the format %Y-%m-%d %H:%M:%S. Else, returns "".
def get_std_end_date(self): """ If the date is custom, return the end datetime with the format %Y-%m-%d %H:%M:%S. Else, returns "". """ _, second = self._val if second != datetime.max: return second.strftime("%Y-%m-%d %H:%M:%S") else: return ""
Runs a new job. It works exactly like the Client class, except that there is no callback and it directly returns the result, in the form of a tuple (result, grade, problems, tests, custom, archive).
def new_job(self, task, inputdata, launcher_name="Unknown", debug=False): """ Runs a new job. It works exactly like the Client class, except that there is no callback and it directly returns the result, in the form of a tuple (result, grade, problems, tests, custom, archive). ...
GET request
def GET_AUTH(self): """ GET request """ return self.template_helper.get_renderer().queue(*self.submission_manager.get_job_queue_snapshot(), datetime.fromtimestamp)
Handles GET request
def GET(self): """ Handles GET request """ if self.user_manager.session_logged_in() or not self.app.allow_registration: raise web.notfound() error = False reset = None msg = "" data = web.input() if "activate" in data: msg, error = self.a...
Returns the user info to reset
def get_reset_data(self, data): """ Returns the user info to reset """ error = False reset = None msg = "" user = self.database.users.find_one({"reset": data["reset"]}) if user is None: error = True msg = "Invalid reset hash." else: ...
Activates user
def activate_user(self, data): """ Activates user """ error = False user = self.database.users.find_one_and_update({"activate": data["activate"]}, {"$unset": {"activate": True}}) if user is None: error = True msg = _("Invalid activation hash.") else: ...
Parses input and registers the user
def register_user(self, data): """ Parses input and registers the user """ error = False msg = "" email_re = re.compile( r"(^[-!#$%&'*+/=?^_`{}|~0-9A-Z]+(\.[-!#$%&'*+/=?^_`{}|~0-9A-Z]+)*" # dot-atom r'|^"([\001-\010\013\014\016-\037!#-\[\]-\177]|\\[\001-011\013\014\0...
Send a reset link to the user to recover their password
def lost_passwd(self, data): """ Send a reset link to the user to recover their password """ error = False msg = "" # Check input format email_re = re.compile( r"(^[-!#$%&'*+/=?^_`{}|~0-9A-Z]+(\.[-!#$%&'*+/=?^_`{}|~0-9A-Z]+)*" # dot-atom r'|^"([\001-\010\013\0...
Reset the user password
def reset_passwd(self, data): """ Reset the user password """ error = False msg = "" # Check input format if len(data["passwd"]) < 6: error = True msg = _("Password too short.") elif data["passwd"] != data["passwd2"]: error = True ...
Handles POST request
def POST(self): """ Handles POST request """ if self.user_manager.session_logged_in() or not self.app.allow_registration: raise web.notfound() reset = None msg = "" error = False data = web.input() if "register" in data: msg, error = self....
:param course: a Course object :param taskid: the task id of the task :raise InvalidNameException, TaskNotFoundException, TaskUnreadableException :return: an object representing the task, of the type given in the constructor
def get_task(self, course, taskid): """ :param course: a Course object :param taskid: the task id of the task :raise InvalidNameException, TaskNotFoundException, TaskUnreadableException :return: an object representing the task, of the type given in the constructor """ ...
:param courseid: the course id of the course :param taskid: the task id of the task :raise InvalidNameException, TaskNotFoundException, TaskUnreadableException :return: the content of the task descriptor, as a dict
def get_task_descriptor_content(self, courseid, taskid): """ :param courseid: the course id of the course :param taskid: the task id of the task :raise InvalidNameException, TaskNotFoundException, TaskUnreadableException :return: the content of the task descriptor, as a dict ...
:param courseid: the course id of the course :param taskid: the task id of the task :raise InvalidNameException, TaskNotFoundException :return: the current extension of the task descriptor
def get_task_descriptor_extension(self, courseid, taskid): """ :param courseid: the course id of the course :param taskid: the task id of the task :raise InvalidNameException, TaskNotFoundException :return: the current extension of the task descriptor ...
:param courseid: the course id of the course :param taskid: the task id of the task :raise InvalidNameException :return: A FileSystemProvider to the folder containing the task files
def get_task_fs(self, courseid, taskid): """ :param courseid: the course id of the course :param taskid: the task id of the task :raise InvalidNameException :return: A FileSystemProvider to the folder containing the task files """ if not id_checker(courseid): ...
Update the task descriptor with the dict in content :param courseid: the course id of the course :param taskid: the task id of the task :param content: the content to put in the task file :param force_extension: If None, save it the same format. Else, save with the given extension ...
def update_task_descriptor_content(self, courseid, taskid, content, force_extension=None): """ Update the task descriptor with the dict in content :param courseid: the course id of the course :param taskid: the task id of the task :param content: the content to put in the task fi...
Returns the list of all available tasks in a course
def get_readable_tasks(self, course): """ Returns the list of all available tasks in a course """ course_fs = self._filesystem.from_subfolder(course.get_id()) tasks = [ task[0:len(task)-1] # remove trailing / for task in course_fs.list(folders=True, files=False, recursiv...
Returns true if a task file exists in this directory
def _task_file_exists(self, task_fs): """ Returns true if a task file exists in this directory """ for filename in ["task.{}".format(ext) for ext in self.get_available_task_file_extensions()]: if task_fs.exists(filename): return True return False
Deletes all possible task files in the directory, to allow changing the format
def delete_all_possible_task_files(self, courseid, taskid): """ Deletes all possible task files in the directory, to allow changing the format """ if not id_checker(courseid): raise InvalidNameException("Course with invalid name: " + courseid) if not id_checker(taskid): rai...
:return: a table containing taskid=>Task pairs
def get_all_tasks(self, course): """ :return: a table containing taskid=>Task pairs """ tasks = self.get_readable_tasks(course) output = {} for task in tasks: try: output[task] = self.get_task(course, task) except: p...
:param courseid: the course id of the course :param taskid: the task id of the task :raise InvalidNameException, TaskNotFoundException :return: a tuple, containing: (descriptor filename, task file manager for the descriptor)
def _get_task_descriptor_info(self, courseid, taskid): """ :param courseid: the course id of the course :param taskid: the task id of the task :raise InvalidNameException, TaskNotFoundException :return: a tuple, containing: (descriptor filename, task file...
:param course: a Course object :param taskid: a (valid) task id :raise InvalidNameException, TaskNotFoundException :return: True if an update of the cache is needed, False otherwise
def _cache_update_needed(self, course, taskid): """ :param course: a Course object :param taskid: a (valid) task id :raise InvalidNameException, TaskNotFoundException :return: True if an update of the cache is needed, False otherwise """ if not id_checker(taskid): ...
Updates the cache :param course: a Course object :param taskid: a (valid) task id :raise InvalidNameException, TaskNotFoundException, TaskUnreadableException
def _update_cache(self, course, taskid): """ Updates the cache :param course: a Course object :param taskid: a (valid) task id :raise InvalidNameException, TaskNotFoundException, TaskUnreadableException """ if not id_checker(taskid): raise InvalidNameE...
Clean/update the cache of all the tasks for a given course (id) :param courseid:
def update_cache_for_course(self, courseid): """ Clean/update the cache of all the tasks for a given course (id) :param courseid: """ to_drop = [] for (cid, tid) in self._cache: if cid == courseid: to_drop.append(tid) for tid in to_drop...
:param courseid: the course id of the course :param taskid: the task id of the task :raise InvalidNameException or CourseNotFoundException Erase the content of the task folder
def delete_task(self, courseid, taskid): """ :param courseid: the course id of the course :param taskid: the task id of the task :raise InvalidNameException or CourseNotFoundException Erase the content of the task folder """ if not id_checker(courseid): ...
Prepare SAML request
def prepare_request(settings): """ Prepare SAML request """ # Set the ACS url and binding method settings["sp"]["assertionConsumerService"] = { "url": web.ctx.homedomain + web.ctx.homepath + "/auth/callback/" + settings["id"], "binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ...
:return: a dict of available containers in the form { "name": { #for example, "default" "id": "container img id", # "sha256:715c5cb5575cdb2641956e42af4a53e69edf763ce701006b2c6e0f4f39b68dd3" "created": 12345678 # cre...
def get_containers(self): """ :return: a dict of available containers in the form { "name": { #for example, "default" "id": "container img id", # "sha256:715c5cb5575cdb2641956e42af4a53e69edf763ce701006b2c6e0f4f39b68dd3" ...
Get the external IP of the host of the docker daemon. Uses OpenDNS internally. :param env_with_dig: any container image that has dig
def get_host_ip(self, env_with_dig='ingi/inginious-c-default'): """ Get the external IP of the host of the docker daemon. Uses OpenDNS internally. :param env_with_dig: any container image that has dig """ try: container = self._docker.containers.create(env_with_dig, c...
Creates a container. :param environment: env to start (name/id of a docker image) :param network_grading: boolean to indicate if the network should be enabled in the container or not :param mem_limit: in MB :param task_path: path to the task directory that will be mounted in the containe...
def create_container(self, environment, network_grading, mem_limit, task_path, sockets_path, course_common_path, course_common_student_path, ports=None): """ Creates a container. :param environment: env to start (name/id of a docker image) :param network_grading:...
Creates a student container :param parent_container_id: id of the "parent" container :param environment: env to start (name/id of a docker image) :param network_grading: boolean to indicate if the network should be enabled in the container or not (share the parent stack) :param mem_limit...
def create_container_student(self, parent_container_id, environment, network_grading, mem_limit, student_path, socket_path, systemfiles_path, course_common_student_path): """ Creates a student container :param parent_container_id: id of the "parent" container ...
Attach to the stdin/stdout of a container. The returned object provides a get_socket() function to get a socket.socket object and close_socket() to close the connection
def attach_to_container(self, container_id): """ Attach to the stdin/stdout of a container. The returned object provides a get_socket() function to get a socket.socket object and close_socket() to close the connection """ sock = self._docker.containers.get(container_id).attach_socket...
Return the full stdout/stderr of a container
def get_logs(self, container_id): """ Return the full stdout/stderr of a container""" stdout = self._docker.containers.get(container_id).logs(stdout=True, stderr=False).decode('utf8') stderr = self._docker.containers.get(container_id).logs(stdout=False, stderr=True).decode('utf8') return...
:param container_id: :return: an iterable that contains dictionaries with the stats of the running container. See the docker api for content.
def get_stats(self, container_id): """ :param container_id: :return: an iterable that contains dictionaries with the stats of the running container. See the docker api for content. """ return self._docker.containers.get(container_id).stats(decode=True)
Removes a container (with fire)
def remove_container(self, container_id): """ Removes a container (with fire) """ self._docker.containers.get(container_id).remove(v=True, link=False, force=True)
Kills a container :param signal: custom signal. Default is SIGKILL.
def kill_container(self, container_id, signal=None): """ Kills a container :param signal: custom signal. Default is SIGKILL. """ self._docker.containers.get(container_id).kill(signal)
:param filters: filters to apply on messages. See docker api. :return: an iterable that contains events from docker. See the docker api for content.
def event_stream(self, filters=None): """ :param filters: filters to apply on messages. See docker api. :return: an iterable that contains events from docker. See the docker api for content. """ if filters is None: filters = {} return self._docker.events(decod...
Correctly closes the socket :return:
def close_socket(self): """ Correctly closes the socket :return: """ try: self.docker_py_sock._sock.close() # pylint: disable=protected-access except AttributeError: pass self.docker_py_sock.close()
Checks that a given path is valid. If it's not, raises NotFoundException
def _checkpath(self, path): """ Checks that a given path is valid. If it's not, raises NotFoundException """ if path.startswith("/") or ".." in path or path.strip() != path: raise NotFoundException()
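The same path predicate, extracted as a standalone sketch (class and function names are illustrative): any absolute path, any ".." component, or surrounding whitespace is rejected, since each could let a request escape the intended directory.

```python
class NotFoundException(Exception):
    """Raised for paths that could escape the allowed directory."""

def checkpath(path):
    """Reject absolute paths, parent-directory traversal and padded names."""
    if path.startswith("/") or ".." in path or path.strip() != path:
        raise NotFoundException()
```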
POST request
def POST_AUTH(self, courseid): # pylint: disable=arguments-differ """ POST request """ course, __ = self.get_course_and_check_rights(courseid, allow_all_staff=False) user_input = web.input(tasks=[], aggregations=[], users=[]) if "submission" in user_input: # Replay a unique ...
GET request
def GET_AUTH(self, courseid): # pylint: disable=arguments-differ """ GET request """ course, __ = self.get_course_and_check_rights(courseid, allow_all_staff=False) return self.show_page(course, web.input())
Save user profile modifications
def save_profile(self, userdata, data): """ Save user profile modifications """ result = userdata error = False # Check if updating username. if not userdata["username"] and "username" in data: if re.match(r"^[-_|~0-9A-Z]{4,}$", data["username"], re.IGNORECASE) is No...
GET request
def GET_AUTH(self): # pylint: disable=arguments-differ """ GET request """ userdata = self.database.users.find_one({"email": self.user_manager.session_email()}) if not userdata: raise web.notfound() return self.template_helper.get_renderer().preferences.profile("", False)
POST request
def POST_AUTH(self): # pylint: disable=arguments-differ """ POST request """ userdata = self.database.users.find_one({"email": self.user_manager.session_email()}) if not userdata: raise web.notfound() msg = "" error = False data = web.input() if "sa...
Init the external grader plugin. This simple grader allows only anonymous requests, and submissions are not stored in database. Available configuration: :: plugins: - plugin_module: inginious.frontend.plugins.simple_grader courseid : "external" ...
def init(plugin_manager, course_factory, client, config): """ Init the external grader plugin. This simple grader allows only anonymous requests, and submissions are not stored in database. Available configuration: :: plugins: - plugin_module: inginious.frontend...
List courses available to the connected client. Returns a dict in the form :: { "courseid1": { "name": "Name of the course", #the name of the course "require_password": False, #indicates if t...
def API_GET(self, courseid=None): # pylint: disable=arguments-differ """ List courses available to the connected client. Returns a dict in the form :: { "courseid1": { "name": "Name of the course", #th...
Convert the output to what the client asks
def _api_convert_output(return_value): """ Convert the output to what the client asks """ content_type = web.ctx.environ.get('CONTENT_TYPE', 'text/json') if "text/json" in content_type: web.header('Content-Type', 'text/json; charset=utf-8') return json.dumps(return_value) if "text/html"...
GET request
def GET(self, *args, **kwargs): """ GET request """ return self._handle_api(self.API_GET, args, kwargs)
PUT request
def PUT(self, *args, **kwargs): """ PUT request """ return self._handle_api(self.API_PUT, args, kwargs)
POST request
def POST(self, *args, **kwargs): """ POST request """ return self._handle_api(self.API_POST, args, kwargs)